public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-07-15 22:24 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-07-15 22:24 UTC (permalink / raw)
  To: gentoo-commits

commit:     ebdae207babbfb5ca502863cbc0331dbe5d94a5e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jul 15 22:23:52 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jul 15 22:23:52 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ebdae207

BMQ Ported to 6.10, thanks to Holger Hoffstätte

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |     8 +
 ...BMQ-and-PDS-io-scheduler-holgerh-v6.10-r0.patch | 11474 +++++++++++++++++++
 5021_BMQ-and-PDS-gentoo-defaults.patch             |    13 +
 3 files changed, 11495 insertions(+)

diff --git a/0000_README b/0000_README
index 887cea4c..e017d0cb 100644
--- a/0000_README
+++ b/0000_README
@@ -82,3 +82,11 @@ Desc:   Add Gentoo Linux support config settings and defaults.
 Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
+
+Patch:  5020_BMQ-and-PDS-io-scheduler-holgerh-v6.10-r0.patch
+From:   https://github.com/hhoffstaette/kernel-patches
+Desc:   BMQ (BitMap Queue) Scheduler. A new CPU scheduler developed from PDS (included). Inspired by the scheduler in Zircon.
+
+Patch:  5021_BMQ-and-PDS-gentoo-defaults.patch
+From:   https://gitweb.gentoo.org/proj/linux-patches.git/
+Desc:   Set Gentoo defaults for BMQ. Default to n (disabled).

diff --git a/5020_BMQ-and-PDS-io-scheduler-holgerh-v6.10-r0.patch b/5020_BMQ-and-PDS-io-scheduler-holgerh-v6.10-r0.patch
new file mode 100644
index 00000000..4ebad9ef
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-holgerh-v6.10-r0.patch
@@ -0,0 +1,11474 @@
+diff --new-file -rup linux-6.10/Documentation/admin-guide/sysctl/kernel.rst linux-prjc-linux-6.10.y-prjc/Documentation/admin-guide/sysctl/kernel.rst
+--- linux-6.10/Documentation/admin-guide/sysctl/kernel.rst	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/Documentation/admin-guide/sysctl/kernel.rst	2024-07-15 16:47:37.000000000 +0200
+@@ -1673,3 +1673,12 @@ is 10 seconds.
+ 
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines which type of yield a call
++to sched_yield() will perform.
++
++  0 - No yield.
++  1 - Requeue task. (default)
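
A minimal userspace sketch that reads this knob and reports the resulting
sched_yield() behaviour; the only assumption is the usual /proc/sys/kernel/
path for the kernel.yield_type sysctl documented above:

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/yield_type", "r");
	int type;

	if (!f) {
		perror("yield_type");	/* most likely not a SCHED_ALT kernel */
		return 1;
	}
	if (fscanf(f, "%d", &type) == 1)
		printf("sched_yield(): %s\n", type ? "requeue task" : "no yield");
	fclose(f);
	return 0;
}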
+diff --new-file -rup linux-6.10/Documentation/scheduler/sched-BMQ.txt linux-prjc-linux-6.10.y-prjc/Documentation/scheduler/sched-BMQ.txt
+--- linux-6.10/Documentation/scheduler/sched-BMQ.txt	1970-01-01 01:00:00.000000000 +0100
++++ linux-prjc-linux-6.10.y-prjc/Documentation/scheduler/sched-BMQ.txt	2024-07-15 16:47:37.000000000 +0200
+@@ -0,0 +1,110 @@
++                         BitMap queue CPU Scheduler
++                         --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++   Overview
++   Task policy
++   Priority management
++   BitMap Queue
++   CPU Assignment and Migration
++
++
++Background
++==========
++
++The BitMap Queue CPU scheduler, referred to as BMQ from here on, is an evolution
++of the earlier Priority and Deadline based Skiplist multiple queue scheduler (PDS),
++and is inspired by the Zircon scheduler. Its goal is to keep the scheduler code
++simple while remaining efficient and scalable for interactive workloads such as
++desktop use, movie playback and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run queue
++and is responsible for scheduling the tasks that are placed into that run
++queue.
++
++The run queue is a set of priority queues. In terms of data structure, these
++queues are FIFO queues for non-rt tasks and priority queues for rt tasks; see
++BitMap Queue below for details. BMQ is optimized for non-rt tasks, since most
++applications are non-rt tasks. Whether the queue is FIFO or priority, each
++queue is an ordered list of runnable tasks awaiting execution and the data
++structures are the same. When it is time for a new task to run, the scheduler
++simply looks for the lowest numbered queue that contains a task and runs the
++first task from the head of that queue. The per-CPU idle task is also kept in
++the run queue, so the scheduler can always find a task to run from its own
++run queue.
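
As a toy model of the pick-next step just described (an illustrative plain-C
sketch, not the kernel code; toy_rq and pick_next are invented names): one
FIFO list per priority level plus a bitmap of non-empty levels, so picking
the next task is "find the first set bit, take the head of that list".

#include <stdio.h>
#include <string.h>

#define LEVELS 64

struct toy_rq {
	unsigned long long bitmap;	/* bit n set => queue n is non-empty */
	int head[LEVELS];		/* id of the task at the head of queue n */
};

static int pick_next(const struct toy_rq *rq)
{
	if (!rq->bitmap)
		return -1;			/* only the idle task would remain */
	int prio = __builtin_ctzll(rq->bitmap);	/* GCC/Clang: lowest set bit */
	return rq->head[prio];
}

int main(void)
{
	struct toy_rq rq = { 0 };

	memset(rq.head, -1, sizeof(rq.head));
	rq.bitmap |= 1ULL << 10; rq.head[10] = 42;	/* task 42 at level 10 */
	rq.bitmap |= 1ULL << 3;  rq.head[3]  = 7;	/* task 7 at level 3 */
	printf("next task: %d\n", pick_next(&rq));	/* prints 7: lowest level wins */
	return 0;
}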
++
++Each task is assigned the same timeslice (default 4ms) when it is picked to
++start running. A task is reinserted at the end of the appropriate priority
++queue when it uses up its whole timeslice. When the scheduler selects a new
++task from the priority queue, it sets the CPU's preemption timer for the
++remainder of the previous timeslice. When that timer fires, the scheduler
++stops executing that task, selects another task and starts over again.
++
++If a task blocks waiting for a shared resource then it's taken out of its
++priority queue and is placed in a wait queue for the shared resource. When it
++is unblocked it will be reinserted in the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies, like
++the mainline CFS scheduler. However, BMQ is heavily optimized for non-rt tasks,
++that is, NORMAL/BATCH/IDLE policy tasks. Below are the implementation details
++for each policy.
++
++DEADLINE
++	It is squashed as priority 0 FIFO task.
++
++FIFO/RR
++	All RT tasks share one single priority queue in the BMQ run queue design. The
++complexity of the insert operation is O(n). BMQ is not designed for systems that
++run mostly rt policy tasks.
++
++NORMAL/BATCH/IDLE
++	BATCH and IDLE tasks are treated as the same policy. They compete for CPU with
++NORMAL policy tasks, but they simply never get a priority boost. To control the
++priority of NORMAL/BATCH/IDLE tasks, just use the nice level.
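
From userspace this simply means the standard POSIX nice/setpriority
interface; a minimal sketch that lowers the calling process's priority:

#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
	/* Raise the nice value (i.e. lower the priority) of this process. */
	if (setpriority(PRIO_PROCESS, 0, 10) != 0) {
		perror("setpriority");
		return 1;
	}
	printf("pid %d now runs at nice %d\n", (int)getpid(),
	       getpriority(PRIO_PROCESS, 0));
	return 0;
}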
++
++ISO
++	ISO policy is not supported in BMQ. Please use nice level -20 NORMAL policy
++task instead.
++
++Priority management
++-------------------
++
++RT tasks have priorities from 0-99. For non-rt tasks, there are three different
++factors used to determine the effective priority of a task; the effective
++priority is what determines which queue the task will be placed in.
++
++The first factor is simply the task's static priority, which is assigned from
++the task's nice level: [-20, 19] from userland's point of view and [0, 39]
++internally.
++
++The second factor is the priority boost. This is a value bounded within
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] that is used to offset the base priority;
++it is modified in the following cases:
++
++* When a thread has used up its entire timeslice, its boost is always deboosted
++by increasing it by one.
++* When a thread gives up CPU control (voluntarily or not) to reschedule, and its
++switch-in time (the time since it was last switched in and run) is below the
++threshold derived from its priority boost, its boost is boosted by decreasing it
++by one, capped at 0 (it won't go negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
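
A rough sketch of the boost bookkeeping described above; the field names and
the final static-plus-boost mapping are simplifications for illustration, not
the exact kernel implementation (MAX_PRIORITY_ADJ is BMQ's value from this
patch's prio.h hunk):

#include <stdio.h>

#define MAX_PRIORITY_ADJ 12	/* BMQ's bound on the boost */

struct toy_task {
	int static_prio;	/* 0..39, derived from the nice level */
	int boost_prio;		/* bounded in [-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] */
};

/* Whole timeslice consumed: deboost (a larger value means a lower priority). */
static void on_timeslice_expired(struct toy_task *p)
{
	if (p->boost_prio < MAX_PRIORITY_ADJ)
		p->boost_prio++;
}

/* Gave up the CPU quickly after switch-in: boost, capped at 0 as described. */
static void on_fast_switch_out(struct toy_task *p)
{
	if (p->boost_prio > 0)
		p->boost_prio--;
}

int main(void)
{
	struct toy_task p = { .static_prio = 20, .boost_prio = 0 };

	on_timeslice_expired(&p);	/* a CPU hog is penalised ... */
	on_timeslice_expired(&p);
	on_fast_switch_out(&p);		/* ... an interactive thread earns it back */
	printf("queue level roughly %d\n", p.static_prio + p.boost_prio);
	return 0;
}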
+diff --new-file -rup linux-6.10/fs/proc/base.c linux-prjc-linux-6.10.y-prjc/fs/proc/base.c
+--- linux-6.10/fs/proc/base.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/fs/proc/base.c	2024-07-15 16:47:37.000000000 +0200
+@@ -481,7 +481,7 @@ static int proc_pid_schedstat(struct seq
+ 		seq_puts(m, "0 0 0\n");
+ 	else
+ 		seq_printf(m, "%llu %llu %lu\n",
+-		   (unsigned long long)task->se.sum_exec_runtime,
++		   (unsigned long long)tsk_seruntime(task),
+ 		   (unsigned long long)task->sched_info.run_delay,
+ 		   task->sched_info.pcount);
+ 
+diff --new-file -rup linux-6.10/include/asm-generic/resource.h linux-prjc-linux-6.10.y-prjc/include/asm-generic/resource.h
+--- linux-6.10/include/asm-generic/resource.h	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/include/asm-generic/resource.h	2024-07-15 16:47:37.000000000 +0200
+@@ -23,7 +23,7 @@
+ 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
+ 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
+-	[RLIMIT_NICE]		= { 0, 0 },				\
++	[RLIMIT_NICE]		= { 30, 30 },				\
+ 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
+ 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ }
+diff --new-file -rup linux-6.10/include/linux/sched/deadline.h linux-prjc-linux-6.10.y-prjc/include/linux/sched/deadline.h
+--- linux-6.10/include/linux/sched/deadline.h	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/include/linux/sched/deadline.h	2024-07-15 16:47:37.000000000 +0200
+@@ -2,6 +2,25 @@
+ #ifndef _LINUX_SCHED_DEADLINE_H
+ #define _LINUX_SCHED_DEADLINE_H
+ 
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++	return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p)	(0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p)	((p)->dl.deadline)
++
+ /*
+  * SCHED_DEADLINE tasks has negative priorities, reflecting
+  * the fact that any of them has higher prio than RT and
+@@ -23,6 +42,7 @@ static inline int dl_task(struct task_st
+ {
+ 	return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --new-file -rup linux-6.10/include/linux/sched/prio.h linux-prjc-linux-6.10.y-prjc/include/linux/sched/prio.h
+--- linux-6.10/include/linux/sched/prio.h	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/include/linux/sched/prio.h	2024-07-15 16:47:37.000000000 +0200
+@@ -18,6 +18,28 @@
+ #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
+ 
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ	(12)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ	(0)
++#endif
++
++#define MIN_NORMAL_PRIO		(128)
++#define NORMAL_PRIO_NUM		(64)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO		(MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+  * Convert user-nice values [ -20 ... 0 ... 19 ]
+  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
+diff --new-file -rup linux-6.10/include/linux/sched/rt.h linux-prjc-linux-6.10.y-prjc/include/linux/sched/rt.h
+--- linux-6.10/include/linux/sched/rt.h	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/include/linux/sched/rt.h	2024-07-15 16:47:37.000000000 +0200
+@@ -24,8 +24,10 @@ static inline bool task_is_realtime(stru
+ 
+ 	if (policy == SCHED_FIFO || policy == SCHED_RR)
+ 		return true;
++#ifndef CONFIG_SCHED_ALT
+ 	if (policy == SCHED_DEADLINE)
+ 		return true;
++#endif
+ 	return false;
+ }
+ 
+diff --new-file -rup linux-6.10/include/linux/sched/topology.h linux-prjc-linux-6.10.y-prjc/include/linux/sched/topology.h
+--- linux-6.10/include/linux/sched/topology.h	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/include/linux/sched/topology.h	2024-07-15 16:47:37.000000000 +0200
+@@ -244,7 +244,8 @@ static inline bool cpus_share_resources(
+ 
+ #endif	/* !CONFIG_SMP */
+ 
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++	!defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --new-file -rup linux-6.10/include/linux/sched.h linux-prjc-linux-6.10.y-prjc/include/linux/sched.h
+--- linux-6.10/include/linux/sched.h	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/include/linux/sched.h	2024-07-15 16:47:37.000000000 +0200
+@@ -774,9 +774,16 @@ struct task_struct {
+ 	struct alloc_tag		*alloc_tag;
+ #endif
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
+ 	int				on_cpu;
++#endif
++
++#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_ALT)
+ 	struct __call_single_node	wake_entry;
++#endif
++
++#ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned int			wakee_flips;
+ 	unsigned long			wakee_flip_decay_ts;
+ 	struct task_struct		*last_wakee;
+@@ -790,6 +797,7 @@ struct task_struct {
+ 	 */
+ 	int				recent_used_cpu;
+ 	int				wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ 	int				on_rq;
+ 
+@@ -798,6 +806,19 @@ struct task_struct {
+ 	int				normal_prio;
+ 	unsigned int			rt_priority;
+ 
++#ifdef CONFIG_SCHED_ALT
++	u64				last_ran;
++	s64				time_slice;
++	struct list_head		sq_node;
++#ifdef CONFIG_SCHED_BMQ
++	int				boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++	u64				deadline;
++#endif /* CONFIG_SCHED_PDS */
++	/* sched_clock time spent running */
++	u64				sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ 	struct sched_entity		se;
+ 	struct sched_rt_entity		rt;
+ 	struct sched_dl_entity		dl;
+@@ -809,6 +830,7 @@ struct task_struct {
+ 	unsigned long			core_cookie;
+ 	unsigned int			core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_CGROUP_SCHED
+ 	struct task_group		*sched_task_group;
+@@ -1571,6 +1593,15 @@ struct task_struct {
+ 	 */
+ };
+ 
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t)		((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t)		(0UL)
++#else /* CFS */
++#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t)	((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ #define TASK_REPORT_IDLE	(TASK_REPORT + 1)
+ #define TASK_REPORT_MAX		(TASK_REPORT_IDLE << 1)
+ 
+diff --new-file -rup linux-6.10/init/Kconfig linux-prjc-linux-6.10.y-prjc/init/Kconfig
+--- linux-6.10/init/Kconfig	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/init/Kconfig	2024-07-15 16:47:37.000000000 +0200
+@@ -638,6 +638,7 @@ config TASK_IO_ACCOUNTING
+ 
+ config PSI
+ 	bool "Pressure stall information tracking"
++	depends on !SCHED_ALT
+ 	select KERNFS
+ 	help
+ 	  Collect metrics that indicate how overcommitted the CPU, memory,
+@@ -803,6 +804,7 @@ menu "Scheduler features"
+ config UCLAMP_TASK
+ 	bool "Enable utilization clamping for RT/FAIR tasks"
+ 	depends on CPU_FREQ_GOV_SCHEDUTIL
++	depends on !SCHED_ALT
+ 	help
+ 	  This feature enables the scheduler to track the clamped utilization
+ 	  of each CPU based on RUNNABLE tasks scheduled on that CPU.
+@@ -849,6 +851,35 @@ config UCLAMP_BUCKETS_COUNT
+ 
+ 	  If in doubt, use the default value.
+ 
++menuconfig SCHED_ALT
++	bool "Alternative CPU Schedulers"
++	default y
++	help
++	  This feature enables the alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++	prompt "Alternative CPU Scheduler"
++	default SCHED_BMQ
++
++config SCHED_BMQ
++	bool "BMQ CPU scheduler"
++	help
++	  The BitMap Queue CPU scheduler for excellent interactivity and
++	  responsiveness on the desktop and solid scalability on normal
++	  hardware and commodity servers.
++
++config SCHED_PDS
++	bool "PDS CPU scheduler"
++	help
++	  The Priority and Deadline based Skip list multiple queue CPU
++	  Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+ 
+ #
+@@ -914,6 +945,7 @@ config NUMA_BALANCING
+ 	depends on ARCH_SUPPORTS_NUMA_BALANCING
+ 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ 	depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++	depends on !SCHED_ALT
+ 	help
+ 	  This option adds support for automatic NUMA aware memory/task placement.
+ 	  The mechanism is quite primitive and is based on migrating memory when
+@@ -1015,6 +1047,7 @@ config FAIR_GROUP_SCHED
+ 	depends on CGROUP_SCHED
+ 	default CGROUP_SCHED
+ 
++if !SCHED_ALT
+ config CFS_BANDWIDTH
+ 	bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
+ 	depends on FAIR_GROUP_SCHED
+@@ -1037,6 +1070,7 @@ config RT_GROUP_SCHED
+ 	  realtime bandwidth for them.
+ 	  See Documentation/scheduler/sched-rt-group.rst for more information.
+ 
++endif #!SCHED_ALT
+ endif #CGROUP_SCHED
+ 
+ config SCHED_MM_CID
+@@ -1285,6 +1319,7 @@ config CHECKPOINT_RESTORE
+ 
+ config SCHED_AUTOGROUP
+ 	bool "Automatic process group scheduling"
++	depends on !SCHED_ALT
+ 	select CGROUPS
+ 	select CGROUP_SCHED
+ 	select FAIR_GROUP_SCHED
+diff --new-file -rup linux-6.10/init/init_task.c linux-prjc-linux-6.10.y-prjc/init/init_task.c
+--- linux-6.10/init/init_task.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/init/init_task.c	2024-07-15 16:47:37.000000000 +0200
+@@ -70,9 +70,15 @@ struct task_struct init_task __aligned(L
+ 	.stack		= init_stack,
+ 	.usage		= REFCOUNT_INIT(2),
+ 	.flags		= PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++	.prio		= DEFAULT_PRIO,
++	.static_prio	= DEFAULT_PRIO,
++	.normal_prio	= DEFAULT_PRIO,
++#else
+ 	.prio		= MAX_PRIO - 20,
+ 	.static_prio	= MAX_PRIO - 20,
+ 	.normal_prio	= MAX_PRIO - 20,
++#endif
+ 	.policy		= SCHED_NORMAL,
+ 	.cpus_ptr	= &init_task.cpus_mask,
+ 	.user_cpus_ptr	= NULL,
+@@ -85,6 +91,16 @@ struct task_struct init_task __aligned(L
+ 	.restart_block	= {
+ 		.fn = do_no_restart_syscall,
+ 	},
++#ifdef CONFIG_SCHED_ALT
++	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++	.boost_prio	= 0,
++#endif
++#ifdef CONFIG_SCHED_PDS
++	.deadline	= 0,
++#endif
++	.time_slice	= HZ,
++#else
+ 	.se		= {
+ 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
+ 	},
+@@ -92,6 +108,7 @@ struct task_struct init_task __aligned(L
+ 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
+ 		.time_slice	= RR_TIMESLICE,
+ 	},
++#endif
+ 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
+ #ifdef CONFIG_SMP
+ 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+diff --new-file -rup linux-6.10/kernel/Kconfig.preempt linux-prjc-linux-6.10.y-prjc/kernel/Kconfig.preempt
+--- linux-6.10/kernel/Kconfig.preempt	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/Kconfig.preempt	2024-07-15 16:47:37.000000000 +0200
+@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
+ 
+ config SCHED_CORE
+ 	bool "Core Scheduling for SMT"
+-	depends on SCHED_SMT
++	depends on SCHED_SMT && !SCHED_ALT
+ 	help
+ 	  This option permits Core Scheduling, a means of coordinated task
+ 	  selection across SMT siblings. When enabled -- see
+diff --new-file -rup linux-6.10/kernel/cgroup/cpuset.c linux-prjc-linux-6.10.y-prjc/kernel/cgroup/cpuset.c
+--- linux-6.10/kernel/cgroup/cpuset.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/cgroup/cpuset.c	2024-07-15 16:47:37.000000000 +0200
+@@ -846,7 +846,7 @@ out:
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * Helper routine for generate_sched_domains().
+  * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1245,7 +1245,7 @@ static void rebuild_sched_domains_locked
+ 	/* Have scheduler rebuild the domains */
+ 	partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ static void rebuild_sched_domains_locked(void)
+ {
+ }
+@@ -3301,12 +3301,15 @@ static int cpuset_can_attach(struct cgro
+ 				goto out_unlock;
+ 		}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 		if (dl_task(task)) {
+ 			cs->nr_migrate_dl_tasks++;
+ 			cs->sum_migrate_dl_bw += task->dl.dl_bw;
+ 		}
++#endif
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (!cs->nr_migrate_dl_tasks)
+ 		goto out_success;
+ 
+@@ -3327,6 +3330,7 @@ static int cpuset_can_attach(struct cgro
+ 	}
+ 
+ out_success:
++#endif
+ 	/*
+ 	 * Mark attach is in progress.  This makes validate_change() fail
+ 	 * changes which zero cpus/mems_allowed.
+@@ -3350,12 +3354,14 @@ static void cpuset_cancel_attach(struct
+ 	if (!cs->attach_in_progress)
+ 		wake_up(&cpuset_attach_wq);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (cs->nr_migrate_dl_tasks) {
+ 		int cpu = cpumask_any(cs->effective_cpus);
+ 
+ 		dl_bw_free(cpu, cs->sum_migrate_dl_bw);
+ 		reset_migrate_dl_data(cs);
+ 	}
++#endif
+ 
+ 	mutex_unlock(&cpuset_mutex);
+ }
+diff --new-file -rup linux-6.10/kernel/delayacct.c linux-prjc-linux-6.10.y-prjc/kernel/delayacct.c
+--- linux-6.10/kernel/delayacct.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/delayacct.c	2024-07-15 16:47:37.000000000 +0200
+@@ -149,7 +149,7 @@ int delayacct_add_tsk(struct taskstats *
+ 	 */
+ 	t1 = tsk->sched_info.pcount;
+ 	t2 = tsk->sched_info.run_delay;
+-	t3 = tsk->se.sum_exec_runtime;
++	t3 = tsk_seruntime(tsk);
+ 
+ 	d->cpu_count += t1;
+ 
+diff --new-file -rup linux-6.10/kernel/exit.c linux-prjc-linux-6.10.y-prjc/kernel/exit.c
+--- linux-6.10/kernel/exit.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/exit.c	2024-07-15 16:47:37.000000000 +0200
+@@ -175,7 +175,7 @@ static void __exit_signal(struct task_st
+ 			sig->curr_target = next_thread(tsk);
+ 	}
+ 
+-	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++	add_device_randomness((const void*) &tsk_seruntime(tsk),
+ 			      sizeof(unsigned long long));
+ 
+ 	/*
+@@ -196,7 +196,7 @@ static void __exit_signal(struct task_st
+ 	sig->inblock += task_io_get_inblock(tsk);
+ 	sig->oublock += task_io_get_oublock(tsk);
+ 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
+-	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++	sig->sum_sched_runtime += tsk_seruntime(tsk);
+ 	sig->nr_threads--;
+ 	__unhash_process(tsk, group_dead);
+ 	write_sequnlock(&sig->stats_lock);
+diff --new-file -rup linux-6.10/kernel/locking/rtmutex.c linux-prjc-linux-6.10.y-prjc/kernel/locking/rtmutex.c
+--- linux-6.10/kernel/locking/rtmutex.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/locking/rtmutex.c	2024-07-15 16:47:37.000000000 +0200
+@@ -363,7 +363,7 @@ waiter_update_prio(struct rt_mutex_waite
+ 	lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
+ 
+ 	waiter->tree.prio = __waiter_prio(task);
+-	waiter->tree.deadline = task->dl.deadline;
++	waiter->tree.deadline = __tsk_deadline(task);
+ }
+ 
+ /*
+@@ -384,16 +384,20 @@ waiter_clone_prio(struct rt_mutex_waiter
+  * Only use with rt_waiter_node_{less,equal}()
+  */
+ #define task_to_waiter_node(p)	\
+-	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+ #define task_to_waiter(p)	\
+ 	&(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
+ 
+ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ 					       struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline < right->deadline);
++#else
+ 	if (left->prio < right->prio)
+ 		return 1;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -402,16 +406,22 @@ static __always_inline int rt_waiter_nod
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return dl_time_before(left->deadline, right->deadline);
++#endif
+ 
+ 	return 0;
++#endif
+ }
+ 
+ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ 						 struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline == right->deadline);
++#else
+ 	if (left->prio != right->prio)
+ 		return 0;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -420,8 +430,10 @@ static __always_inline int rt_waiter_nod
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return left->deadline == right->deadline;
++#endif
+ 
+ 	return 1;
++#endif
+ }
+ 
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --new-file -rup linux-6.10/kernel/sched/Makefile linux-prjc-linux-6.10.y-prjc/kernel/sched/Makefile
+--- linux-6.10/kernel/sched/Makefile	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/Makefile	2024-07-15 16:47:37.000000000 +0200
+@@ -28,7 +28,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --new-file -rup linux-6.10/kernel/sched/alt_core.c linux-prjc-linux-6.10.y-prjc/kernel/sched/alt_core.c
+--- linux-6.10/kernel/sched/alt_core.c	1970-01-01 01:00:00.000000000 +0100
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/alt_core.c	2024-07-15 16:47:37.000000000 +0200
+@@ -0,0 +1,8963 @@
++/*
++ *  kernel/sched/alt_core.c
++ *
++ *  Core alternative kernel scheduler code and related syscalls
++ *
++ *  Copyright (C) 1991-2002  Linus Torvalds
++ *
++ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
++ *		a whole lot of those previous things.
++ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
++ *		scheduler by Alfred Chen.
++ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/clock.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/hotplug.h>
++#include <linux/sched/init.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/nmi.h>
++#include <linux/rseq.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/irq_regs.h>
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#include <trace/events/ipi.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++#include "smp.h"
++
++#include "pelt.h"
++
++#include "../../io_uring/io-wq.h"
++#include "../smpboot.h"
++
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x)	(1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x)	(0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v6.10-r0"
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p)		rt_prio((p)->prio)
++#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p)	(rt_policy((p)->policy))
++
++#define STOP_PRIO		(MAX_RT_PRIO - 1)
++
++/*
++ * Time slice
++ * (default: 4 msec, units: nanoseconds)
++ */
++unsigned int sysctl_sched_base_slice __read_mostly	= (4 << 20);
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++struct affinity_context {
++	const struct cpumask *new_mask;
++	struct cpumask *user_mask;
++	unsigned int flags;
++};
++
++/* Reschedule if less than this many μs left */
++#define RESCHED_NS		(100 << 10)
++
++/**
++ * sched_yield_type - which type of yield sched_yield() will perform.
++ * 0: No yield.
++ * 1: Requeue task. (default)
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPUs number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++static DEFINE_MUTEX(sched_hotcpu_mutex);
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS + 1] ____cacheline_aligned_in_smp;
++
++static cpumask_t *const sched_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS - 1];
++#ifdef CONFIG_SCHED_SMT
++static cpumask_t *const sched_sg_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++#endif
++
++/* task function */
++static inline const struct cpumask *task_user_cpus(struct task_struct *p)
++{
++	if (!p->user_cpus_ptr)
++		return cpu_possible_mask; /* &init_task.cpus_mask */
++	return p->user_cpus_ptr;
++}
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++	int i;
++
++	bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++	for(i = 0; i < SCHED_LEVELS; i++)
++		INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init idle task and put into queue structure of rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++					 struct task_struct *idle)
++{
++	INIT_LIST_HEAD(&q->heads[IDLE_TASK_SCHED_PRIO]);
++	list_add_tail(&idle->sq_node, &q->heads[IDLE_TASK_SCHED_PRIO]);
++	idle->on_rq = TASK_ON_RQ_QUEUED;
++}
++
++#define CLEAR_CACHED_PREEMPT_MASK(pr, low, high, cpu)		\
++	if (low < pr && pr <= high)				\
++		cpumask_clear_cpu(cpu, sched_preempt_mask + pr);
++
++#define SET_CACHED_PREEMPT_MASK(pr, low, high, cpu)		\
++	if (low < pr && pr <= high)				\
++		cpumask_set_cpu(cpu, sched_preempt_mask + pr);
++
++static atomic_t sched_prio_record = ATOMIC_INIT(0);
++
++/* water mark related functions */
++static inline void update_sched_preempt_mask(struct rq *rq)
++{
++	int prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	int last_prio = rq->prio;
++	int cpu, pr;
++
++	if (prio == last_prio)
++		return;
++
++	rq->prio = prio;
++#ifdef CONFIG_SCHED_PDS
++	rq->prio_idx = sched_prio2idx(rq->prio, rq);
++#endif
++	cpu = cpu_of(rq);
++	pr = atomic_read(&sched_prio_record);
++
++	if (prio < last_prio) {
++		if (IDLE_TASK_SCHED_PRIO == last_prio) {
++#ifdef CONFIG_SCHED_SMT
++			if (static_branch_likely(&sched_smt_present))
++				cpumask_andnot(sched_sg_idle_mask,
++					       sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++			cpumask_clear_cpu(cpu, sched_idle_mask);
++			last_prio -= 2;
++		}
++		CLEAR_CACHED_PREEMPT_MASK(pr, prio, last_prio, cpu);
++
++		return;
++	}
++	/* last_prio < prio */
++	if (IDLE_TASK_SCHED_PRIO == prio) {
++#ifdef CONFIG_SCHED_SMT
++		if (static_branch_likely(&sched_smt_present) &&
++		    cpumask_intersects(cpu_smt_mask(cpu), sched_idle_mask))
++			cpumask_or(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++		cpumask_set_cpu(cpu, sched_idle_mask);
++		prio -= 2;
++	}
++	SET_CACHED_PREEMPT_MASK(pr, last_prio, prio, cpu);
++}
++
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++	const struct list_head *head = &rq->queue.heads[sched_rq_prio_idx(rq)];
++
++	return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct * sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++	struct list_head *next = p->sq_node.next;
++
++	if (&rq->queue.heads[0] <= next && next < &rq->queue.heads[SCHED_LEVELS]) {
++		struct list_head *head;
++		unsigned long idx = next - &rq->queue.heads[0];
++
++		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++				    sched_idx2prio(idx, rq) + 1);
++		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++		return list_first_entry(head, struct task_struct, sq_node);
++	}
++
++	return list_next_entry(p, sq_node);
++}
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ *   p->pi_lock
++ *     rq->lock
++ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ *  rq1->lock
++ *    rq2->lock  where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ *  - sched_setaffinity()/
++ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
++ *  - set_user_nice():		p->se.load, p->*prio
++ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
++ *				p->se.load, p->rt_priority,
++ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ *  - sched_setnuma():		p->numa_preferred_nid
++ *  - sched_move_task():        p->sched_task_group
++ *  - uclamp_update_active()	p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ *   is changed locklessly using set_current_state(), __set_current_state() or
++ *   set_special_state(), see their respective comments, or by
++ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ *   concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ *   is set by activate_task() and cleared by deactivate_task(), under
++ *   rq->lock. Non-zero indicates the task is runnable, the special
++ *   ON_RQ_MIGRATING state is used for migration without holding both
++ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ *   is set by prepare_task() and cleared by finish_task() such that it will be
++ *   set before p is scheduled-in and cleared after p is scheduled-out, both
++ *   under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ *   [ The astute reader will observe that it is possible for two tasks on one
++ *     CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ *  - Don't call set_task_cpu() on a blocked task:
++ *
++ *    We don't care what CPU we're not running on, this simplifies hotplug,
++ *    the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ *  - for try_to_wake_up(), called under p->pi_lock:
++ *
++ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ *  - for migration called under rq->lock:
++ *    [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ *    o move_queued_task()
++ *    o detach_task()
++ *
++ *  - for migration called under double_rq_lock():
++ *
++ *    o __migrate_swap_task()
++ *    o push_rt_task() / pull_rt_task()
++ *    o push_dl_task() / pull_dl_task()
++ *    o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
++static inline struct rq *__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock(&rq->lock);
++			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock(&rq->lock);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			*plock = NULL;
++			return rq;
++		}
++	}
++}
++
++static inline void __task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++	if (NULL != lock)
++		raw_spin_unlock(lock);
++}
++
++static inline struct rq *
++task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock, unsigned long *flags)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock_irqsave(&rq->lock, *flags);
++			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&rq->lock, *flags);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			raw_spin_lock_irqsave(&p->pi_lock, *flags);
++			if (likely(!p->on_cpu && !p->on_rq && rq == task_rq(p))) {
++				*plock = &p->pi_lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++		}
++	}
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock, unsigned long *flags)
++{
++	raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	lockdep_assert_held(&p->pi_lock);
++
++	for (;;) {
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++			return rq;
++		raw_spin_unlock(&rq->lock);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	for (;;) {
++		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		/*
++		 *	move_queued_task()		task_rq_lock()
++		 *
++		 *	ACQUIRE (rq->lock)
++		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
++		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
++		 *	[S] ->cpu = new_cpu		[L] task_rq()
++		 *					[L] ->on_rq
++		 *	RELEASE (rq->lock)
++		 *
++		 * If we observe the old CPU in task_rq_lock(), the acquire of
++		 * the old rq->lock will fully serialize against the stores.
++		 *
++		 * If we observe the new CPU in task_rq_lock(), the address
++		 * dependency headed by '[L] rq = task_rq()' and the acquire
++		 * will pair with the WMB to ensure we then also see migrating.
++		 */
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++			return rq;
++		}
++		raw_spin_unlock(&rq->lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
++		    rq_lock_irqsave(_T->lock, &_T->rf),
++		    rq_unlock_irqrestore(_T->lock, &_T->rf),
++		    struct rq_flags rf)
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++	raw_spinlock_t *lock;
++
++	/* Matches synchronize_rcu() in __sched_core_enable() */
++	preempt_disable();
++
++	for (;;) {
++		lock = __rq_lockp(rq);
++		raw_spin_lock_nested(lock, subclass);
++		if (likely(lock == __rq_lockp(rq))) {
++			/* preempt_count *MUST* be > 1 */
++			preempt_enable_no_resched();
++			return;
++		}
++		raw_spin_unlock(lock);
++	}
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++	raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compile should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++	s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++	/*
++	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
++	 * this case when a previous update_rq_clock() happened inside a
++	 * {soft,}irq region.
++	 *
++	 * When this happens, we stop ->clock_task and only update the
++	 * prev_irq_time stamp to account for the part that fit, so that a next
++	 * update will consume the rest. This ensures ->clock_task is
++	 * monotonic.
++	 *
++	 * It does however cause some slight miss-attribution of {soft,}irq
++	 * time, a more accurate solution would be to update the irq_time using
++	 * the current rq->clock timestamp, except that would require using
++	 * atomic ops.
++	 */
++	if (irq_delta > delta)
++		irq_delta = delta;
++
++	rq->prev_irq_time += irq_delta;
++	delta -= irq_delta;
++	delayacct_irq(rq->curr, irq_delta);
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	if (static_key_false((&paravirt_steal_rq_enabled))) {
++		steal = paravirt_steal_clock(cpu_of(rq));
++		steal -= rq->prev_steal_time_rq;
++
++		if (unlikely(steal > delta))
++			steal = delta;
++
++		rq->prev_steal_time_rq += steal;
++		delta -= steal;
++	}
++#endif
++
++	rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	if ((irq_delta + steal))
++		update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++	if (unlikely(delta <= 0))
++		return;
++	rq->clock += delta;
++	sched_update_rq_clock(rq);
++	update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT			(8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t)		((t) >> 17)
++#define LOAD_HALF_BLOCK(t)	((t) >> 16)
++#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
++
++static inline void rq_load_update(struct rq *rq)
++{
++	u64 time = rq->clock;
++	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp), RQ_LOAD_HISTORY_BITS - 1);
++	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++	u64 curr = !!rq->nr_running;
++
++	if (delta) {
++		rq->load_history = rq->load_history >> delta;
++
++		if (delta < RQ_UTIL_SHIFT) {
++			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++				rq->load_history ^= LOAD_BLOCK_BIT(delta);
++		}
++
++		rq->load_block = BLOCK_MASK(time) * prev;
++	} else {
++		rq->load_block += (time - rq->load_stamp) * prev;
++	}
++	if (prev ^ curr)
++		rq->load_history ^= CURRENT_LOAD_BIT;
++	rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu)
++{
++	return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid.  Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++	struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, cpu_of(rq)));
++	if (data)
++		data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If tick is needed, lets send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++	int cpu = cpu_of(rq);
++
++	if (!tick_nohz_full_cpu(cpu))
++		return;
++
++	if (rq->nr_running < 2)
++		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++	else
++		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++	return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++	unsigned long ip = 0;
++	unsigned int state;
++
++	if (!p || p == current)
++		return 0;
++
++	/* Only get wchan if task is blocked and we can keep it that way. */
++	raw_spin_lock_irq(&p->pi_lock);
++	state = READ_ONCE(p->__state);
++	smp_rmb(); /* see try_to_wake_up() */
++	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++		ip = __get_wchan(p);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	return ip;
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)					\
++	sched_info_dequeue(rq, p);							\
++											\
++	__list_del_entry(&p->sq_node);							\
++	if (p->sq_node.prev == p->sq_node.next) {					\
++		clear_bit(sched_idx2prio(p->sq_node.next - &rq->queue.heads[0], rq),	\
++			  rq->queue.bitmap);						\
++		func;									\
++	}
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags, func)					\
++	sched_info_enqueue(rq, p);							\
++	{										\
++	int idx, prio;									\
++	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);						\
++	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);				\
++	if (list_is_first(&p->sq_node, &rq->queue.heads[idx])) {			\
++		set_bit(prio, rq->queue.bitmap);					\
++		func;									\
++	}										\
++	}
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++	--rq->nr_running;
++#ifdef CONFIG_SMP
++	if (1 == rq->nr_running)
++		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_ENQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++	++rq->nr_running;
++#ifdef CONFIG_SMP
++	if (2 == rq->nr_running)
++		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq)
++{
++	struct list_head *node = &p->sq_node;
++	int deq_idx, idx, prio;
++
++	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++		  cpu_of(rq), task_cpu(p));
++#endif
++	if (list_is_last(node, &rq->queue.heads[idx]))
++		return;
++
++	__list_del_entry(node);
++	if (node->prev == node->next && (deq_idx = node->next - &rq->queue.heads[0]) != idx)
++		clear_bit(sched_idx2prio(deq_idx, rq), rq->queue.bitmap);
++
++	list_add_tail(node, &rq->queue.heads[idx]);
++	if (list_is_first(node, &rq->queue.heads[idx]))
++		set_bit(prio, rq->queue.bitmap);
++	update_sched_preempt_mask(rq);
++}
++
++/*
++ * cmpxchg based fetch_or, macro so it works for different integer types
++ */
++#define fetch_or(ptr, mask)						\
++	({								\
++		typeof(ptr) _ptr = (ptr);				\
++		typeof(mask) _mask = (mask);				\
++		typeof(*_ptr) _val = *_ptr;				\
++									\
++		do {							\
++		} while (!try_cmpxchg(_ptr, &_val, _val | _mask));	\
++	_val;								\
++})
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	typeof(ti->flags) val = READ_ONCE(ti->flags);
++
++	do {
++		if (!(val & _TIF_POLLING_NRFLAG))
++			return false;
++		if (val & _TIF_NEED_RESCHED)
++			return true;
++	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
++
++	return true;
++}
++
++#else
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++	set_tsk_need_resched(p);
++	return true;
++}
++
++#ifdef CONFIG_SMP
++static inline bool set_nr_if_polling(struct task_struct *p)
++{
++	return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	struct wake_q_node *node = &task->wake_q;
++
++	/*
++	 * Atomically grab the task, if ->wake_q is !nil already it means
++	 * it's already queued (either by us or someone else) and will get the
++	 * wakeup due to that.
++	 *
++	 * In order to ensure that a pending wakeup will observe our pending
++	 * state, even in the failed case, an explicit smp_mb() must be used.
++	 */
++	smp_mb__before_atomic();
++	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++		return false;
++
++	/*
++	 * The head is context local, there can be no concurrency.
++	 */
++	*head->lastp = node;
++	head->lastp = &node->next;
++	return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	if (__wake_q_add(head, task))
++		get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++	if (!__wake_q_add(head, task))
++		put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++	struct wake_q_node *node = head->first;
++
++	while (node != WAKE_Q_TAIL) {
++		struct task_struct *task;
++
++		task = container_of(node, struct task_struct, wake_q);
++		/* task can safely be re-inserted now: */
++		node = node->next;
++		task->wake_q.next = NULL;
++
++		/*
++		 * wake_up_process() executes a full barrier, which pairs with
++		 * the queueing in wake_q_add() so as not to miss wakeups.
++		 */
++		wake_up_process(task);
++		put_task_struct(task);
++	}
++}
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++static inline void resched_curr(struct rq *rq)
++{
++	struct task_struct *curr = rq->curr;
++	int cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	if (test_tsk_need_resched(curr))
++		return;
++
++	cpu = cpu_of(rq);
++	if (cpu == smp_processor_id()) {
++		set_tsk_need_resched(curr);
++		set_preempt_need_resched();
++		return;
++	}
++
++	if (set_nr_and_not_polling(curr))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++void resched_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (cpu_online(cpu) || cpu == smp_processor_id())
++		resched_curr(cpu_rq(cpu));
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++/*
++ * This routine will record that the CPU is going idle with tick stopped.
++ * This info will be used in performing idle load balancing in the future.
++ */
++void nohz_balance_enter_idle(int cpu) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU.  This is good for power-savings.
++ *
++ * We don't do similar optimization for completely idle system, as
++ * selecting an idle CPU will add more delays to the timers than intended
++ * (as that CPU's timer base may not be uptodate wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++	int i, cpu = smp_processor_id(), default_cpu = -1;
++	struct cpumask *mask;
++	const struct cpumask *hk_mask;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
++		if (!idle_cpu(cpu))
++			return cpu;
++		default_cpu = cpu;
++	}
++
++	hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
++
++	for (mask = per_cpu(sched_cpu_topo_masks, cpu);
++	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++		for_each_cpu_and(i, mask, hk_mask)
++			if (!idle_cpu(i))
++				return i;
++
++	if (default_cpu == -1)
++		default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
++	cpu = default_cpu;
++
++	return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (cpu == smp_processor_id())
++		return;
++
++	/*
++	 * Set TIF_NEED_RESCHED and send an IPI if in the non-polling
++	 * part of the idle loop. This forces an exit from the idle loop
++	 * and a round trip to schedule(). Now this could be optimized
++	 * because a simple new idle loop iteration is enough to
++	 * re-evaluate the next tick. Provided some re-ordering of tick
++	 * nohz functions that would need to follow TIF_NR_POLLING
++	 * clearing:
++	 *
++	 * - On most archs, a simple fetch_or on ti::flags with a
++	 *   "0" value would be enough to know if an IPI needs to be sent.
++	 *
++	 * - x86 needs to perform a last need_resched() check between
++	 *   monitor and mwait which doesn't take timers into account.
++	 *   There a dedicated TIF_TIMER flag would be required to
++	 *   fetch_or here and be checked along with TIF_NEED_RESCHED
++	 *   before mwait().
++	 *
++	 * However, remote timer enqueue is not such a frequent event
++	 * and testing of the above solutions didn't appear to report
++	 * much benefit.
++	 */
++	if (set_nr_and_not_polling(rq->idle))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++	/*
++	 * We just need the target to call irq_exit() and re-evaluate
++	 * the next tick. The nohz full kick at least implies that.
++	 * If needed we can still optimize that later with an
++	 * empty IRQ.
++	 */
++	if (cpu_is_offline(cpu))
++		return true;  /* Don't try to wake offline CPUs. */
++	if (tick_nohz_full_cpu(cpu)) {
++		if (cpu != smp_processor_id() ||
++		    tick_nohz_tick_stopped())
++			tick_nohz_full_kick_cpu(cpu);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++	if (!wake_up_full_nohz_cpu(cpu))
++		wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++	struct rq *rq = info;
++	int cpu = cpu_of(rq);
++	unsigned int flags;
++
++	/*
++	 * Release the rq::nohz_csd.
++	 */
++	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++	WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++	rq->idle_balance = idle_cpu(cpu);
++	if (rq->idle_balance && !need_resched()) {
++		rq->nohz_idle_balance = flags;
++		raise_softirq_irqoff(SCHED_SOFTIRQ);
++	}
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void wakeup_preempt(struct rq *rq)
++{
++	if (sched_rq_first_task(rq) != rq->curr)
++		resched_curr(rq);
++}
++
++static __always_inline
++int __task_state_match(struct task_struct *p, unsigned int state)
++{
++	if (READ_ONCE(p->__state) & state)
++		return 1;
++
++	if (READ_ONCE(p->saved_state) & state)
++		return -1;
++
++	return 0;
++}
++
++static __always_inline
++int task_state_match(struct task_struct *p, unsigned int state)
++{
++	/*
++	 * Serialize against current_save_and_set_rtlock_wait_state(),
++	 * current_restore_rtlock_saved_state(), and __refrigerator().
++	 */
++	guard(raw_spinlock_irq)(&p->pi_lock);
++
++	return __task_state_match(p, state);
++}
++
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * Wait for the thread to block in any of the states set in @match_state.
++ * If it changes, i.e. @p might have woken up, then return zero.  When we
++ * succeed in waiting for @p to be off its CPU, we return a positive number
++ * (its total switch count).  If a second call a short while later returns the
++ * same number, the caller can be sure that @p has remained unscheduled the
++ * whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++	unsigned long flags;
++	int running, queued, match;
++	unsigned long ncsw;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	for (;;) {
++		rq = task_rq(p);
++
++		/*
++		 * If the task is actively running on another CPU
++		 * still, just relax and busy-wait without holding
++		 * any locks.
++		 *
++		 * NOTE! Since we don't hold any locks, it's not
++		 * even sure that "rq" stays as the right runqueue!
++		 * But we don't care, since this will return false
++		 * if the runqueue has changed and p is actually now
++		 * running somewhere else!
++		 */
++		while (task_on_cpu(p)) {
++			if (!task_state_match(p, match_state))
++				return 0;
++			cpu_relax();
++		}
++
++		/*
++		 * Ok, time to look more closely! We need the rq
++		 * lock now, to be *sure*. If we're wrong, we'll
++		 * just go back and repeat.
++		 */
++		task_access_lock_irqsave(p, &lock, &flags);
++		trace_sched_wait_task(p);
++		running = task_on_cpu(p);
++		queued = p->on_rq;
++		ncsw = 0;
++		if ((match = __task_state_match(p, match_state))) {
++			/*
++			 * When matching on p->saved_state, consider this task
++			 * still queued so it will wait.
++			 */
++			if (match < 0)
++				queued = 1;
++			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++		}
++		task_access_unlock_irqrestore(p, lock, &flags);
++
++		/*
++		 * If it changed from the expected state, bail out now.
++		 */
++		if (unlikely(!ncsw))
++			break;
++
++		/*
++		 * Was it really running after all now that we
++		 * checked with the proper locks actually held?
++		 *
++		 * Oops. Go back and try again..
++		 */
++		if (unlikely(running)) {
++			cpu_relax();
++			continue;
++		}
++
++		/*
++		 * It's not enough that it's not actively running,
++		 * it must be off the runqueue _entirely_, and not
++		 * preempted!
++		 *
++		 * So if it was still runnable (but just not actively
++		 * running right now), it's preempted, and we should
++		 * yield - it could be a while.
++		 */
++		if (unlikely(queued)) {
++			ktime_t to = NSEC_PER_SEC / HZ;
++
++			set_current_state(TASK_UNINTERRUPTIBLE);
++			schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++			continue;
++		}
++
++		/*
++		 * Ahh, all good. It wasn't running, and it wasn't
++		 * runnable, which means that it will never become
++		 * running in the future either. We're all done!
++		 */
++		break;
++	}
++
++	return ncsw;
++}
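++
++/*
++ * Sketch of the two-call pattern described above (illustrative only;
++ * error handling and the work in between are up to the caller):
++ *
++ *   unsigned long ncsw;
++ *
++ *   ncsw = wait_task_inactive(p, TASK_UNINTERRUPTIBLE);
++ *   if (!ncsw)
++ *      return -EAGAIN;      // @p changed state, e.g. it woke up
++ *   // ... work that must not race with @p being scheduled ...
++ *   if (wait_task_inactive(p, TASK_UNINTERRUPTIBLE) != ncsw)
++ *      return -EAGAIN;      // @p ran in the meantime, retry
++ */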
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++	if (hrtimer_active(&rq->hrtick_timer))
++		hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++	raw_spin_lock(&rq->lock);
++	resched_curr(rq);
++	raw_spin_unlock(&rq->lock);
++
++	return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ *  - enabled by features
++ *  - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	/**
++	 * Alt schedule FW doesn't support sched_feat yet
++	if (!sched_feat(HRTICK))
++		return 0;
++	*/
++	if (!cpu_active(cpu_of(rq)))
++		return 0;
++	return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	ktime_t time = rq->hrtick_time;
++
++	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++	struct rq *rq = arg;
++
++	raw_spin_lock(&rq->lock);
++	__hrtick_restart(rq);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	s64 delta;
++
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense and can cause timer DoS.
++	 */
++	delta = max_t(s64, delay, 10000LL);
++
++	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++	if (rq == this_rq())
++		__hrtick_restart(rq);
++	else
++		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense. Rely on vruntime for fairness.
++	 */
++	delay = max_t(u64, delay, 10000LL);
++	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++		      HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++	rq->hrtick_timer.function = hrtick;
++}
++#else	/* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif	/* CONFIG_SCHED_HRTICK */
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
++}
++
++/*
++ * Calculate the expected normal priority: i.e. priority
++ * without taking RT-inheritance into account. Might be
++ * boosted by interactivity modifiers. Changes upon fork,
++ * setprio syscalls, and whenever the interactivity
++ * estimator recalculates.
++ */
++static inline int normal_prio(struct task_struct *p)
++{
++	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++}
++
++/*
++ * Calculate the current priority, i.e. the priority
++ * taken into account by the scheduler. This value might
++ * be boosted by RT tasks as it will be RT if the task got
++ * RT-boosted. If not then it returns p->normal_prio.
++ */
++static int effective_prio(struct task_struct *p)
++{
++	p->normal_prio = normal_prio(p);
++	/*
++	 * If we are RT tasks or we were boosted to RT priority,
++	 * keep the priority unchanged. Otherwise, update priority
++	 * to the normal priority:
++	 */
++	if (!rt_prio(p->prio))
++		return p->normal_prio;
++	return p->prio;
++}
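++
++/*
++ * Worked example of the mapping above, assuming the usual
++ * MAX_RT_PRIO of 100 and DEFAULT_PRIO of 120:
++ *
++ *   SCHED_FIFO,   rt_priority 50            ->  prio = 100 - 1 - 50 = 49
++ *   SCHED_NORMAL, nice  0 (static_prio 120) ->  prio = 120
++ *   SCHED_NORMAL, nice -5 (static_prio 115) ->  prio = 115
++ *
++ * effective_prio() only deviates from this when the task is currently
++ * PI-boosted to an RT priority, in which case p->prio is left alone.
++ */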
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++	enqueue_task(p, rq, ENQUEUE_WAKEUP);
++
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++	/*
++	 * If in_iowait is set, the code below may not trigger any cpufreq
++	 * utilization updates, so do it here explicitly with the IOWAIT flag
++	 * passed.
++	 */
++	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++/*
++ * deactivate_task - remove a task from the runqueue.
++ *
++ * Context: rq->lock
++ */
++static inline void deactivate_task(struct task_struct *p, struct rq *rq)
++{
++	WRITE_ONCE(p->on_rq, 0);
++	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++	dequeue_task(p, rq, DEQUEUE_SLEEP);
++
++	cpufreq_update_util(rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++	 * successfully executed on another CPU. We must ensure that updates of
++	 * per-task data have been completed by this moment.
++	 */
++	smp_wmb();
++
++	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++	return p->migration_disabled;
++#else
++	return false;
++#endif
++}
++
++#define SCA_CHECK		0x01
++#define SCA_USER		0x08
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * We should never call set_task_cpu() on a blocked task,
++	 * ttwu() will sort out the placement.
++	 */
++	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++	/*
++	 * The caller should hold either p->pi_lock or rq->lock, when changing
++	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++	 *
++	 * sched_move_task() holds both and thus holding either pins the cgroup,
++	 * see task_group().
++	 */
++	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++				      lockdep_is_held(&task_rq(p)->lock)));
++#endif
++	/*
++	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++	 */
++	WARN_ON_ONCE(!cpu_online(new_cpu));
++
++	WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++	trace_sched_migrate_task(p, new_cpu);
++
++	if (task_cpu(p) != new_cpu) {
++		rseq_migrate(p);
++		perf_event_task_migrate(p);
++	}
++
++	__set_task_cpu(p, new_cpu);
++}
++
++#define MDF_FORCE_ENABLED	0x80
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	/*
++	 * This here violates the locking rules for affinity, since we're only
++	 * supposed to change these variables while holding both rq->lock and
++	 * p->pi_lock.
++	 *
++	 * HOWEVER, it magically works, because ttwu() is the only code that
++	 * accesses these variables under p->pi_lock and only does so after
++	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++	 * before finish_task().
++	 *
++	 * XXX do further audits, this smells like something putrid.
++	 */
++	SCHED_WARN_ON(!p->on_cpu);
++	p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++	struct task_struct *p = current;
++	int cpu;
++
++	if (p->migration_disabled) {
++		p->migration_disabled++;
++		return;
++	}
++
++	guard(preempt)();
++	cpu = smp_processor_id();
++	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++		cpu_rq(cpu)->nr_pinned++;
++		p->migration_disabled = 1;
++		p->migration_flags &= ~MDF_FORCE_ENABLED;
++
++		/*
++		 * Violates locking rules! see comment in __do_set_cpus_ptr().
++		 */
++		if (p->cpus_ptr == &p->cpus_mask)
++			__do_set_cpus_ptr(p, cpumask_of(cpu));
++	}
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++	struct task_struct *p = current;
++
++	if (p->migration_disabled == 0)
++		return;
++
++	if (p->migration_disabled > 1) {
++		p->migration_disabled--;
++		return;
++	}
++
++	if (WARN_ON_ONCE(!p->migration_disabled))
++		return;
++
++	/*
++	 * Ensure stop_task runs either before or after this, and that
++	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++	 */
++	guard(preempt)();
++	/*
++	 * Assumption: current should be running on allowed cpu
++	 */
++	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++	if (p->cpus_ptr != &p->cpus_mask)
++		__do_set_cpus_ptr(p, &p->cpus_mask);
++	/*
++	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
++	 * regular cpus_mask, otherwise things that race (eg.
++	 * select_fallback_rq) get confused.
++	 */
++	barrier();
++	p->migration_disabled = 0;
++	this_rq()->nr_pinned--;
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
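++
++/*
++ * Illustrative use of the pair above (sketch only): migrate_disable()
++ * pins current to its CPU while leaving preemption enabled, and it
++ * nests, so every disable needs a matching enable:
++ *
++ *   migrate_disable();
++ *   cpu = smp_processor_id();   // stable until migrate_enable()
++ *   ...                         // may block, but stays on 'cpu'
++ *   migrate_enable();
++ */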
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++	/* When not in the task's cpumask, no point in looking further. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/* migrate_disabled() must be allowed to finish. */
++	if (is_migration_disabled(p))
++		return cpu_online(cpu);
++
++	/* Non kernel threads are not allowed during either online or offline. */
++	if (!(p->flags & PF_KTHREAD))
++		return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++	/* KTHREAD_IS_PER_CPU is always allowed. */
++	if (kthread_is_per_cpu(p))
++		return cpu_online(cpu);
++
++	/* Regular kernel threads don't get to stay during offline. */
++	if (cpu_dying(cpu))
++		return false;
++
++	/* But are allowed during online. */
++	return cpu_online(cpu);
++}
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ *    stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ *    off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ *    it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ *    is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
++{
++	int src_cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	src_cpu = cpu_of(rq);
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++	dequeue_task(p, rq, 0);
++	set_task_cpu(p, new_cpu);
++	raw_spin_unlock(&rq->lock);
++
++	rq = cpu_rq(new_cpu);
++
++	raw_spin_lock(&rq->lock);
++	WARN_ON_ONCE(task_cpu(p) != new_cpu);
++
++	sched_mm_cid_migrate_to(rq, p, src_cpu);
++
++	sched_task_sanity_check(p, rq);
++	enqueue_task(p, rq, 0);
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++	wakeup_preempt(rq);
++
++	return rq;
++}
++
++struct migration_arg {
++	struct task_struct *task;
++	int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
++{
++	/* Affinity changed (again). */
++	if (!is_cpu_allowed(p, dest_cpu))
++		return rq;
++
++	return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a highprio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++	struct migration_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++
++	/*
++	 * The original target CPU might have gone down and we might
++	 * be on another CPU but it doesn't matter.
++	 */
++	local_irq_save(flags);
++	/*
++	 * We need to explicitly wake pending tasks before running
++	 * __migrate_task() such that we will not miss enforcing cpus_ptr
++	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++	 */
++	flush_smp_call_function_queue();
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++	/*
++	 * If task_rq(p) != rq, it cannot be migrated here, because we're
++	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++	 * we're holding p->pi_lock.
++	 */
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		rq = __migrate_task(rq, p, arg->dest_cpu);
++	}
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
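++
++/*
++ * Sketch of how the pieces above fit together; this is essentially what
++ * affine_move_task() below does when it needs the stopper's help:
++ *
++ *   struct migration_arg arg = { p, dest_cpu };
++ *
++ *   // locks must be dropped first, stop_one_cpu() may sleep
++ *   stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++ */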
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
++{
++	cpumask_copy(&p->cpus_mask, ctx->new_mask);
++	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
++
++	/*
++	 * Swap in a new user_cpus_ptr if SCA_USER flag set
++	 */
++	if (ctx->flags & SCA_USER)
++		swap(p->user_cpus_ptr, ctx->user_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
++{
++	lockdep_assert_held(&p->pi_lock);
++	set_cpus_allowed_common(p, ctx);
++}
++
++/*
++ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
++ * affinity (if any) should be destroyed too.
++ */
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.user_mask = NULL,
++		.flags     = SCA_USER,	/* clear the user requested mask */
++	};
++	union cpumask_rcuhead {
++		cpumask_t cpumask;
++		struct rcu_head rcu;
++	};
++
++	__do_set_cpus_allowed(p, &ac);
++
++	/*
++	 * Because this is called with p->pi_lock held, it is not possible
++	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
++	 * kfree_rcu().
++	 */
++	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
++}
++
++static cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	/*
++	 * See do_set_cpus_allowed() above for the rcu_head usage.
++	 */
++	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
++
++	return kmalloc_node(size, GFP_KERNEL, node);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++		      int node)
++{
++	cpumask_t *user_mask;
++	unsigned long flags;
++
++	/*
++	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
++	 * may differ by now due to racing.
++	 */
++	dst->user_cpus_ptr = NULL;
++
++	/*
++	 * This check is racy and losing the race is a valid situation.
++	 * It is not worth the extra overhead of taking the pi_lock on
++	 * every fork/clone.
++	 */
++	if (data_race(!src->user_cpus_ptr))
++		return 0;
++
++	user_mask = alloc_user_cpus_ptr(node);
++	if (!user_mask)
++		return -ENOMEM;
++
++	/*
++	 * Use pi_lock to protect content of user_cpus_ptr
++	 *
++	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
++	 * do_set_cpus_allowed().
++	 */
++	raw_spin_lock_irqsave(&src->pi_lock, flags);
++	if (src->user_cpus_ptr) {
++		swap(dst->user_cpus_ptr, user_mask);
++		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++	}
++	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
++
++	if (unlikely(user_mask))
++		kfree(user_mask);
++
++	return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++	struct cpumask *user_mask = NULL;
++
++	swap(p->user_cpus_ptr, user_mask);
++
++	return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++	kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++	return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++	guard(preempt)();
++	int cpu = task_cpu(p);
++
++	if ((cpu != smp_processor_id()) && task_curr(p))
++		smp_send_reschedule(cpu);
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ *  - cpu_active must be a subset of cpu_online
++ *
++ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ *    see __set_cpus_allowed_ptr(). At this point the newly online
++ *    CPU isn't yet part of the sched domains, and balancing will not
++ *    see it.
++ *
++ *  - on cpu-down we clear cpu_active() to mask the sched domains and
++ *    avoid the load balancer to place new tasks on the to be removed
++ *    CPU. Existing tasks will remain running there and will be taken
++ *    off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++	int nid = cpu_to_node(cpu);
++	const struct cpumask *nodemask = NULL;
++	enum { cpuset, possible, fail } state = cpuset;
++	int dest_cpu;
++
++	/*
++	 * If the node that the CPU is on has been offlined, cpu_to_node()
++	 * will return -1. There is no CPU on the node, and we should
++	 * select the CPU on the other node.
++	 */
++	if (nid != -1) {
++		nodemask = cpumask_of_node(nid);
++
++		/* Look for allowed, online CPU in same node. */
++		for_each_cpu(dest_cpu, nodemask) {
++			if (is_cpu_allowed(p, dest_cpu))
++				return dest_cpu;
++		}
++	}
++
++	for (;;) {
++		/* Any allowed, online CPU? */
++		for_each_cpu(dest_cpu, p->cpus_ptr) {
++			if (!is_cpu_allowed(p, dest_cpu))
++				continue;
++			goto out;
++		}
++
++		/* No more Mr. Nice Guy. */
++		switch (state) {
++		case cpuset:
++			if (cpuset_cpus_allowed_fallback(p)) {
++				state = possible;
++				break;
++			}
++			fallthrough;
++		case possible:
++			/*
++			 * XXX When called from select_task_rq() we only
++			 * hold p->pi_lock and again violate locking order.
++			 *
++			 * More yuck to audit.
++			 */
++			do_set_cpus_allowed(p, task_cpu_possible_mask(p));
++			state = fail;
++			break;
++
++		case fail:
++			BUG();
++			break;
++		}
++	}
++
++out:
++	if (state != cpuset) {
++		/*
++		 * Don't tell them about moving exiting tasks or
++		 * kernel threads (both mm NULL), since they never
++		 * leave kernel.
++		 */
++		if (p->mm && printk_ratelimit()) {
++			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++					task_pid_nr(p), p->comm, cpu);
++		}
++	}
++
++	return dest_cpu;
++}
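++
++/*
++ * Rough escalation order of the fallback above, e.g. for a task whose
++ * last allowed CPU was just hot-unplugged (summary, not authoritative):
++ *
++ *   1) an allowed, online CPU on the task's current node;
++ *   2) any allowed, online CPU;
++ *   3) 'cpuset'   - widen to the cpuset fallback mask and retry;
++ *   4) 'possible' - widen to task_cpu_possible_mask(p) and retry;
++ *   5) 'fail'     - BUG(), nothing left to run on.
++ */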
++
++static inline void
++sched_preempt_mask_flush(cpumask_t *mask, int prio, int ref)
++{
++	int cpu;
++
++	cpumask_copy(mask, sched_preempt_mask + ref);
++	if (prio < ref) {
++		for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
++			if (prio < cpu_rq(cpu)->prio)
++				cpumask_set_cpu(cpu, mask);
++		}
++	} else {
++		for_each_cpu_andnot(cpu, mask, sched_idle_mask) {
++			if (prio >= cpu_rq(cpu)->prio)
++				cpumask_clear_cpu(cpu, mask);
++		}
++	}
++}
++
++static inline int
++preempt_mask_check(cpumask_t *preempt_mask, cpumask_t *allow_mask, int prio)
++{
++	cpumask_t *mask = sched_preempt_mask + prio;
++	int pr = atomic_read(&sched_prio_record);
++
++	if (pr != prio && SCHED_QUEUE_BITS - 1 != prio) {
++		sched_preempt_mask_flush(mask, prio, pr);
++		atomic_set(&sched_prio_record, prio);
++	}
++
++	return cpumask_and(preempt_mask, allow_mask, mask);
++}
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	cpumask_t allow_mask, mask;
++
++	if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
++		return select_fallback_rq(task_cpu(p), p);
++
++	if (
++#ifdef CONFIG_SCHED_SMT
++	    cpumask_and(&mask, &allow_mask, sched_sg_idle_mask) ||
++#endif
++	    cpumask_and(&mask, &allow_mask, sched_idle_mask) ||
++	    preempt_mask_check(&mask, &allow_mask, task_sched_prio(p)))
++		return best_mask_cpu(task_cpu(p), &mask);
++
++	return best_mask_cpu(task_cpu(p), &allow_mask);
++}
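++
++/*
++ * Rough order of preference implemented above (sketch; details live in
++ * the masks and in best_mask_cpu(), which picks the topologically
++ * closest CPU to task_cpu(p) within the first non-empty mask):
++ *
++ *   1) sched_sg_idle_mask - CPUs whose whole SMT core is idle (SMT only);
++ *   2) sched_idle_mask    - idle CPUs;
++ *   3) preempt mask       - CPUs whose rq->prio is numerically larger
++ *                           (lower priority) than p's priority;
++ *   4) otherwise          - any CPU in the allowed mask.
++ */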
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++	static struct lock_class_key stop_pi_lock;
++	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++	struct sched_param start_param = { .sched_priority = 0 };
++	struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++	if (stop) {
++		/*
++		 * Make it appear like a SCHED_FIFO task, it's something
++		 * userspace knows about and won't get confused about.
++		 *
++		 * Also, it will make PI more or less work without too
++		 * much confusion -- but then, stop work should not
++		 * rely on PI working anyway.
++		 */
++		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++		/*
++		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++		 * adjust the effective priority of a task. As a result,
++		 * rt_mutex_setprio() can trigger (RT) balancing operations,
++		 * which can then trigger wakeups of the stop thread to push
++		 * around the current task.
++		 *
++		 * The stop task itself will never be part of the PI-chain, it
++		 * never blocks, therefore that ->pi_lock recursion is safe.
++		 * Tell lockdep about this by placing the stop->pi_lock in its
++		 * own class.
++		 */
++		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++	}
++
++	cpu_rq(cpu)->stop = stop;
++
++	if (old_stop) {
++		/*
++		 * Reset it back to a normal scheduling policy so that
++		 * it can die in pieces.
++		 */
++		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++	}
++}
++
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++			    raw_spinlock_t *lock, unsigned long irq_flags)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	/* Can the task run on the task's current CPU? If so, we're done */
++	if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++		if (p->migration_disabled) {
++			if (likely(p->cpus_ptr != &p->cpus_mask))
++				__do_set_cpus_ptr(p, &p->cpus_mask);
++			p->migration_disabled = 0;
++			p->migration_flags |= MDF_FORCE_ENABLED;
++			/* When p is migrate_disabled, rq->lock should be held */
++			rq->nr_pinned--;
++		}
++
++		if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++			struct migration_arg arg = { p, dest_cpu };
++
++			/* Need help from migration thread: drop lock and wait. */
++			__task_access_unlock(p, lock);
++			raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++			stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++			return 0;
++		}
++		if (task_on_rq_queued(p)) {
++			/*
++			 * OK, since we're going to drop the lock immediately
++			 * afterwards anyway.
++			 */
++			update_rq_clock(rq);
++			rq = move_queued_task(rq, p, dest_cpu);
++			lock = &rq->lock;
++		}
++	}
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++					 struct affinity_context *ctx,
++					 struct rq *rq,
++					 raw_spinlock_t *lock,
++					 unsigned long irq_flags)
++{
++	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++	const struct cpumask *cpu_valid_mask = cpu_active_mask;
++	bool kthread = p->flags & PF_KTHREAD;
++	int dest_cpu;
++	int ret = 0;
++
++	if (kthread || is_migration_disabled(p)) {
++		/*
++		 * Kernel threads are allowed on online && !active CPUs,
++		 * however, during cpu-hot-unplug, even these might get pushed
++		 * away if not KTHREAD_IS_PER_CPU.
++		 *
++		 * Specifically, migration_disabled() tasks must not fail the
++		 * cpumask_any_and_distribute() pick below, esp. so on
++		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
++		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++		 */
++		cpu_valid_mask = cpu_online_mask;
++	}
++
++	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	/*
++	 * Must re-check here, to close a race against __kthread_bind(),
++	 * sched_setaffinity() is not guaranteed to observe the flag.
++	 */
++	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
++		goto out;
++
++	dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
++	if (dest_cpu >= nr_cpu_ids) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	__do_set_cpus_allowed(p, ctx);
++
++	return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++out:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++	return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it's executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++static int __set_cpus_allowed_ptr(struct task_struct *p,
++				  struct affinity_context *ctx)
++{
++	unsigned long irq_flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
++	 * flags are set.
++	 */
++	if (p->user_cpus_ptr &&
++	    !(ctx->flags & SCA_USER) &&
++	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
++		ctx->new_mask = rq->scratch_mask;
++
++	return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++
++	return __set_cpus_allowed_ptr(p, &ac);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
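++
++/*
++ * Minimal usage sketch for in-kernel callers (the CPU number is only an
++ * example): pin @p to CPU 2, later widen the mask again:
++ *
++ *   ret = set_cpus_allowed_ptr(p, cpumask_of(2));
++ *   ...
++ *   ret = set_cpus_allowed_ptr(p, cpu_possible_mask);
++ */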
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
++ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
++ * affinity or use cpu_online_mask instead.
++ *
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++				     struct cpumask *new_mask,
++				     const struct cpumask *subset_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++	unsigned long irq_flags;
++	raw_spinlock_t *lock;
++	struct rq *rq;
++	int err;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++
++	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
++		err = -EINVAL;
++		goto err_unlock;
++	}
++
++	return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
++
++err_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	cpumask_var_t new_mask;
++	const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++	alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++	/*
++	 * __migrate_task() can fail silently in the face of concurrent
++	 * offlining of the chosen destination CPU, so take the hotplug
++	 * lock to ensure that the migration succeeds.
++	 */
++	cpus_read_lock();
++	if (!cpumask_available(new_mask))
++		goto out_set_mask;
++
++	if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++		goto out_free_mask;
++
++	/*
++	 * We failed to find a valid subset of the affinity mask for the
++	 * task, so override it based on its cpuset hierarchy.
++	 */
++	cpuset_cpus_allowed(p, new_mask);
++	override_mask = new_mask;
++
++out_set_mask:
++	if (printk_ratelimit()) {
++		printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++				task_pid_nr(p), p->comm,
++				cpumask_pr_args(override_mask));
++	}
++
++	WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++	cpus_read_unlock();
++	free_cpumask_var(new_mask);
++}
++
++static int
++__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr().
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	struct affinity_context ac = {
++		.new_mask  = task_user_cpus(p),
++		.flags     = 0,
++	};
++	int ret;
++
++	/*
++	 * Try to restore the old affinity mask with __sched_setaffinity().
++	 * Cpuset masking will be done there too.
++	 */
++	ret = __sched_setaffinity(p, &ac);
++	WARN_ON_ONCE(ret);
++}
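++
++/*
++ * Illustrative pairing of the two helpers (the reason for restricting is
++ * hypothetical here; mainline uses them from arch code, for instance for
++ * asymmetric 32-bit support on arm64):
++ *
++ *   force_compatible_cpus_allowed_ptr(p);  // shrink to compatible CPUs
++ *   ...                                    // run in the restricted mode
++ *   relax_compatible_cpus_allowed_ptr(p);  // restore the wider mask
++ */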
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	return 0;
++}
++
++static inline int
++__set_cpus_allowed_ptr(struct task_struct *p,
++		       struct affinity_context *ctx)
++{
++	return set_cpus_allowed_ptr(p, ctx->new_mask);
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return false;
++}
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	return NULL;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq;
++
++	if (!schedstat_enabled())
++		return;
++
++	rq = this_rq();
++
++#ifdef CONFIG_SMP
++	if (cpu == rq->cpu) {
++		__schedstat_inc(rq->ttwu_local);
++		__schedstat_inc(p->stats.nr_wakeups_local);
++	} else {
++		/** Alt schedule FW ToDo:
++		 * How to do ttwu_wake_remote
++		 */
++	}
++#endif /* CONFIG_SMP */
++
++	__schedstat_inc(rq->ttwu_count);
++	__schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable.
++ */
++static inline void ttwu_do_wakeup(struct task_struct *p)
++{
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++	if (p->sched_contributes_to_load)
++		rq->nr_uninterruptible--;
++
++	if (
++#ifdef CONFIG_SMP
++	    !(wake_flags & WF_MIGRATED) &&
++#endif
++	    p->in_iowait) {
++		delayacct_blkio_end(p);
++		atomic_dec(&task_rq(p)->nr_iowait);
++	}
++
++	activate_task(p, rq);
++	wakeup_preempt(rq);
++
++	ttwu_do_wakeup(p);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ *   for (;;) {
++ *      set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ *      if (CONDITION)
++ *         break;
++ *
++ *      schedule();
++ *   }
++ *   __set_current_state(TASK_RUNNING);
++ *
++ * between set_current_state() and schedule(). In this case @p is still
++ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
++ * an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ *          %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	int ret = 0;
++
++	rq = __task_access_lock(p, &lock);
++	if (task_on_rq_queued(p)) {
++		if (!task_on_cpu(p)) {
++			/*
++			 * When on_rq && !on_cpu the task is preempted, see if
++			 * it should preempt the task that is current now.
++			 */
++			update_rq_clock(rq);
++			wakeup_preempt(rq);
++		}
++		ttwu_do_wakeup(p);
++		ret = 1;
++	}
++	__task_access_unlock(p, lock);
++
++	return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++	struct llist_node *llist = arg;
++	struct rq *rq = this_rq();
++	struct task_struct *p, *t;
++	struct rq_flags rf;
++
++	if (!llist)
++		return;
++
++	rq_lock_irqsave(rq, &rf);
++	update_rq_clock(rq);
++
++	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++		if (WARN_ON_ONCE(p->on_cpu))
++			smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++			set_task_cpu(p, cpu_of(rq));
++
++		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++	}
++
++	/*
++	 * Must be after enqueueing at least one task such that
++	 * idle_cpu() does not observe a false-negative -- if it does,
++	 * it is possible for select_idle_siblings() to stack a number
++	 * of tasks on this CPU during that window.
++	 *
++	 * It is OK to clear ttwu_pending when another task is pending.
++	 * We will receive an IPI after local irqs are enabled and then enqueue it.
++	 * Since now nr_running > 0, idle_cpu() will always get correct result.
++	 */
++	WRITE_ONCE(rq->ttwu_pending, 0);
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Prepare the scene for sending an IPI for a remote smp_call
++ *
++ * Returns true if the caller can proceed with sending the IPI.
++ * Returns false otherwise.
++ */
++bool call_function_single_prep_ipi(int cpu)
++{
++	if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
++		trace_sched_wake_idle_without_ipi(cpu);
++		return false;
++	}
++
++	return true;
++}
++
++/*
++ * Queue a task on the target CPUs wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_wakeup() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++	WRITE_ONCE(rq->ttwu_pending, 1);
++	__smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
++{
++	/*
++	 * Do not complicate things with the async wake_list while the CPU is
++	 * in hotplug state.
++	 */
++	if (!cpu_active(cpu))
++		return false;
++
++	/* Ensure the task will still be allowed to run on the CPU. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/*
++	 * If the CPU does not share cache, then queue the task on the
++	 * remote rqs wakelist to avoid accessing remote data.
++	 */
++	if (!cpus_share_cache(smp_processor_id(), cpu))
++		return true;
++
++	if (cpu == smp_processor_id())
++		return false;
++
++	/*
++	 * If the wakee cpu is idle, or the task is descheduling and the
++	 * only running task on the CPU, then use the wakelist to offload
++	 * the task activation to the idle (or soon-to-be-idle) CPU as
++	 * the current CPU is likely busy. nr_running is checked to
++	 * avoid unnecessary task stacking.
++	 *
++	 * Note that we can only get here with (wakee) p->on_rq=0,
++	 * p->on_cpu can be whatever, we've done the dequeue, so
++	 * the wakee has been accounted out of ->nr_running.
++	 */
++	if (!cpu_rq(cpu)->nr_running)
++		return true;
++
++	return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
++		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++		__ttwu_queue_wakelist(p, cpu, wake_flags);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_if_idle(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	guard(rcu)();
++	if (is_idle_task(rcu_dereference(rq->curr))) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		if (is_idle_task(rq->curr))
++			resched_curr(rq);
++	}
++}
++
++extern struct static_key_false sched_asym_cpucapacity;
++
++static __always_inline bool sched_asym_cpucap_active(void)
++{
++	return static_branch_unlikely(&sched_asym_cpucapacity);
++}
++
++bool cpus_equal_capacity(int this_cpu, int that_cpu)
++{
++	if (!sched_asym_cpucap_active())
++		return true;
++
++	if (this_cpu == that_cpu)
++		return true;
++
++	return arch_scale_cpu_capacity(this_cpu) == arch_scale_cpu_capacity(that_cpu);
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++	if (this_cpu == that_cpu)
++		return true;
++
++	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (ttwu_queue_wakelist(p, cpu, wake_flags))
++		return;
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++	ttwu_do_activate(rq, p, wake_flags);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of saved_state:
++ *
++ *   The related locking code always holds p::pi_lock when updating
++ *   p::saved_state, which means the code is fully serialized in both cases.
++ *
++ *  For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
++ *  No other bits set. This allows us to distinguish all wakeup scenarios.
++ *
++ *  For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
++ *  allows us to prevent early wakeup of tasks before they can be run on
++ *  asymmetric ISA architectures (eg ARMv9).
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++	int match;
++
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++			     state != TASK_RTLOCK_WAIT);
++	}
++
++	*success = !!(match = __task_state_match(p, state));
++
++	/*
++	 * Saved state preserves the task state across blocking on
++	 * an RT lock or TASK_FREEZABLE tasks.  If the state matches,
++	 * set p::saved_state to TASK_RUNNING, but do not wake the task
++	 * because it waits for a lock wakeup or __thaw_task(). Also
++	 * indicate success because from the regular waker's point of
++	 * view this has succeeded.
++	 *
++	 * After acquiring the lock the task will restore p::__state
++	 * from p::saved_state which ensures that the regular
++	 * wakeup is not lost. The restore will also set
++	 * p::saved_state to TASK_RUNNING so any further tests will
++	 * not result in false positives vs. @success
++	 */
++	if (match < 0)
++		p->saved_state = TASK_RUNNING;
++
++	return match > 0;
++}
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ *  MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
++ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
++ *     rq(c1)->lock (if not at the same time, then in that order).
++ *  C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ *   CPU0            CPU1            CPU2
++ *
++ *   LOCK rq(0)->lock
++ *   sched-out X
++ *   sched-in Y
++ *   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(0)->lock // orders against CPU0
++ *                                   dequeue X
++ *                                   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(1)->lock
++ *                                   enqueue X
++ *                                   UNLOCK rq(1)->lock
++ *
++ *                   LOCK rq(1)->lock // orders against CPU2
++ *                   sched-out Z
++ *                   sched-in X
++ *                   UNLOCK rq(1)->lock
++ *
++ *
++ *  BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
++ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ *   LOCK rq(0)->lock LOCK X->pi_lock
++ *   dequeue X
++ *   sched-out X
++ *   smp_store_release(X->on_cpu, 0);
++ *
++ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
++ *                    X->state = WAKING
++ *                    set_task_cpu(X,2)
++ *
++ *                    LOCK rq(2)->lock
++ *                    enqueue X
++ *                    X->state = RUNNING
++ *                    UNLOCK rq(2)->lock
++ *
++ *                                          LOCK rq(2)->lock // orders against CPU1
++ *                                          sched-out Z
++ *                                          sched-in X
++ *                                          UNLOCK rq(2)->lock
++ *
++ *                    UNLOCK X->pi_lock
++ *   UNLOCK rq(0)->lock
++ *
++ *
++ * However, for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ *   If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ *  - p->sched_class
++ *  - p->cpus_ptr
++ *  - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
++ *  - ttwu_queue()       -- new rq, for enqueue of the task;
++ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ *	   %false otherwise.
++ */
++int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
++{
++	guard(preempt)();
++	int cpu, success = 0;
++
++	if (p == current) {
++		/*
++		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++		 * == smp_processor_id()'. Together this means we can special
++		 * case the whole 'p->on_rq && ttwu_runnable()' case below
++		 * without taking any locks.
++		 *
++		 * In particular:
++		 *  - we rely on Program-Order guarantees for all the ordering,
++		 *  - we're serialized against set_special_state() by virtue of
++		 *    it disabling IRQs (this allows not taking ->pi_lock).
++		 */
++		if (!ttwu_state_match(p, state, &success))
++			goto out;
++
++		trace_sched_waking(p);
++		ttwu_do_wakeup(p);
++		goto out;
++	}
++
++	/*
++	 * If we are going to wake up a thread waiting for CONDITION we
++	 * need to ensure that CONDITION=1 done by the caller can not be
++	 * reordered with p->state check below. This pairs with smp_store_mb()
++	 * in set_current_state() that the waiting thread does.
++	 */
++	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
++		smp_mb__after_spinlock();
++		if (!ttwu_state_match(p, state, &success))
++			break;
++
++		trace_sched_waking(p);
++
++		/*
++		 * Ensure we load p->on_rq _after_ p->state, otherwise it would
++		 * be possible to, falsely, observe p->on_rq == 0 and get stuck
++		 * in smp_cond_load_acquire() below.
++		 *
++		 * sched_ttwu_pending()			try_to_wake_up()
++		 *   STORE p->on_rq = 1			  LOAD p->state
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   UNLOCK rq->lock
++		 *
++		 * [task p]
++		 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * A similar smp_rmb() lives in __task_needs_rq_lock().
++		 */
++		smp_rmb();
++		if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++			break;
++
++#ifdef CONFIG_SMP
++		/*
++		 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++		 * possible to, falsely, observe p->on_cpu == 0.
++		 *
++		 * One must be running (->on_cpu == 1) in order to remove oneself
++		 * from the runqueue.
++		 *
++		 * __schedule() (switch to task 'p')	try_to_wake_up()
++		 *   STORE p->on_cpu = 1		  LOAD p->on_rq
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (put 'p' to sleep)
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   STORE p->on_rq = 0			  LOAD p->on_cpu
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++		 * schedule()'s deactivate_task() has 'happened' and p will no longer
++		 * care about its own p->state. See the comment in __schedule().
++		 */
++		smp_acquire__after_ctrl_dep();
++
++		/*
++		 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++		 * == 0), which means we need to do an enqueue, change p->state to
++		 * TASK_WAKING such that we can unlock p->pi_lock before doing the
++		 * enqueue, such as ttwu_queue_wakelist().
++		 */
++		WRITE_ONCE(p->__state, TASK_WAKING);
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, consider queueing p on the remote CPU's wake_list,
++		 * which potentially sends an IPI instead of spinning on p->on_cpu to
++		 * let the waker make forward progress. This is safe because IRQs are
++		 * disabled and the IPI will deliver after on_cpu is cleared.
++		 *
++		 * Ensure we load task_cpu(p) after p->on_cpu:
++		 *
++		 * set_task_cpu(p, cpu);
++		 *   STORE p->cpu = @cpu
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock
++		 *   smp_mb__after_spin_lock()          smp_cond_load_acquire(&p->on_cpu)
++		 *   STORE p->on_cpu = 1                LOAD p->cpu
++		 *
++		 * to ensure we observe the correct CPU on which the task is currently
++		 * scheduling.
++		 */
++		if (smp_load_acquire(&p->on_cpu) &&
++		    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
++			break;
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, wait until it's done referencing the task.
++		 *
++		 * Pairs with the smp_store_release() in finish_task().
++		 *
++		 * This ensures that tasks getting woken will be fully ordered against
++		 * their previous state and preserve Program Order.
++		 */
++		smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		sched_task_ttwu(p);
++
++		if ((wake_flags & WF_CURRENT_CPU) &&
++		    cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
++			cpu = smp_processor_id();
++		else
++			cpu = select_task_rq(p);
++
++		if (cpu != task_cpu(p)) {
++			if (p->in_iowait) {
++				delayacct_blkio_end(p);
++				atomic_dec(&task_rq(p)->nr_iowait);
++			}
++
++			wake_flags |= WF_MIGRATED;
++			set_task_cpu(p, cpu);
++		}
++#else
++		sched_task_ttwu(p);
++
++		cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++		ttwu_queue(p, cpu, wake_flags);
++	}
++out:
++	if (success)
++		ttwu_stat(p, task_cpu(p), wake_flags);
++
++	return success;
++}
++
++static bool __task_needs_rq_lock(struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++	 * the task is blocked. Make sure to check @state since ttwu() can drop
++	 * locks at the end, see ttwu_queue_wakelist().
++	 */
++	if (state == TASK_RUNNING || state == TASK_WAKING)
++		return true;
++
++	/*
++	 * Ensure we load p->on_rq after p->__state, otherwise it would be
++	 * possible to, falsely, observe p->on_rq == 0.
++	 *
++	 * See try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	if (p->on_rq)
++		return true;
++
++#ifdef CONFIG_SMP
++	/*
++	 * Ensure the task has finished __schedule() and will not be referenced
++	 * anymore. Again, see try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	smp_cond_load_acquire(&p->on_cpu, !VAL);
++#endif
++
++	return false;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in its current state by avoiding wakeups and/or rq operations
++ * and call @func(@arg) on it.  This function can use ->on_rq and task_curr()
++ * to work out what the state is, if required.  Given that @func can be invoked
++ * with a runqueue lock held, it had better be quite lightweight.
++ *
++ * Returns:
++ *   Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++	struct rq *rq = NULL;
++	struct rq_flags rf;
++	int ret;
++
++	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++	if (__task_needs_rq_lock(p))
++		rq = __task_rq_lock(p, &rf);
++
++	/*
++	 * At this point the task is pinned; either:
++	 *  - blocked and we're holding off wakeups      (pi->lock)
++	 *  - woken, and we're holding off enqueue       (rq->lock)
++	 *  - queued, and we're holding off schedule     (rq->lock)
++	 *  - running, and we're holding off de-schedule (rq->lock)
++	 *
++	 * The called function (@func) can use: task_curr(), p->on_rq and
++	 * p->__state to differentiate between these states.
++	 */
++	ret = func(p, arg);
++
++	if (rq)
++		__task_rq_unlock(rq, &rf);
++
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++	return ret;
++}
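++
++/*
++ * Illustrative sketch, not part of the original patch: a minimal
++ * task_call_func() callback that samples a task's state without taking any
++ * scheduler locks itself. The names sample_state_cb() and sample_task_state()
++ * are hypothetical and only show the calling convention.
++ *
++ *	static int sample_state_cb(struct task_struct *p, void *arg)
++ *	{
++ *		*(unsigned int *)arg = READ_ONCE(p->__state);
++ *		return task_curr(p);
++ *	}
++ *
++ *	static int sample_task_state(struct task_struct *p, unsigned int *state)
++ *	{
++ *		return task_call_func(p, sample_state_cb, state);
++ *	}
++ */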
++
++/**
++ * cpu_curr_snapshot - Return a snapshot of the currently running task
++ * @cpu: The CPU on which to snapshot the task.
++ *
++ * Returns the task_struct pointer of the task "currently" running on
++ * the specified CPU.  If the same task is running on that CPU throughout,
++ * the return value will be a pointer to that task's task_struct structure.
++ * If the CPU did any context switches even vaguely concurrently with the
++ * execution of this function, the return value will be a pointer to the
++ * task_struct structure of a randomly chosen task that was running on
++ * that CPU somewhere around the time that this function was executing.
++ *
++ * If the specified CPU was offline, the return value is whatever it
++ * is, perhaps a pointer to the task_struct structure of that CPU's idle
++ * task, but there is no guarantee.  Callers wishing a useful return
++ * value must take some action to ensure that the specified CPU remains
++ * online throughout.
++ *
++ * This function executes full memory barriers before and after fetching
++ * the pointer, which permits the caller to confine this function's fetch
++ * with respect to the caller's accesses to other shared variables.
++ */
++struct task_struct *cpu_curr_snapshot(int cpu)
++{
++	struct task_struct *t;
++
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	t = rcu_dereference(cpu_curr(cpu));
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	return t;
++}
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++	return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++	return try_to_wake_up(p, state, 0);
++}
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup used by init_idle() too:
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	p->on_rq			= 0;
++	p->on_cpu			= 0;
++	p->utime			= 0;
++	p->stime			= 0;
++	p->sched_time			= 0;
++
++#ifdef CONFIG_SCHEDSTATS
++	/* Even if schedstat is disabled, there should not be garbage */
++	memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++	INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++	p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++	p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++	init_sched_mm_cid(p);
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	__sched_fork(clone_flags, p);
++	/*
++	 * We mark the process as NEW here. This guarantees that
++	 * nobody will actually run it, and a signal or other external
++	 * event cannot wake it up and insert it on the runqueue either.
++	 */
++	p->__state = TASK_NEW;
++
++	/*
++	 * Make sure we do not leak PI boosting priority to the child.
++	 */
++	p->prio = current->normal_prio;
++
++	/*
++	 * Revert to default priority/policy on fork if requested.
++	 */
++	if (unlikely(p->sched_reset_on_fork)) {
++		if (task_has_rt_policy(p)) {
++			p->policy = SCHED_NORMAL;
++			p->static_prio = NICE_TO_PRIO(0);
++			p->rt_priority = 0;
++		} else if (PRIO_TO_NICE(p->static_prio) < 0)
++			p->static_prio = NICE_TO_PRIO(0);
++
++		p->prio = p->normal_prio = p->static_prio;
++
++		/*
++		 * We don't need the reset flag anymore after the fork. It has
++		 * fulfilled its duty:
++		 */
++		p->sched_reset_on_fork = 0;
++	}
++
++#ifdef CONFIG_SCHED_INFO
++	if (unlikely(sched_info_on()))
++		memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++	init_task_preempt_count(p);
++
++	return 0;
++}
++
++void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	/*
++	 * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++	 * required yet, but lockdep gets upset if rules are violated.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	/*
++	 * Share the timeslice between parent and child, thus the
++	 * total amount of pending timeslices in the system doesn't change,
++	 * resulting in more scheduling fairness.
++	 */
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	rq->curr->time_slice /= 2;
++	p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++	hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++	if (p->time_slice < RESCHED_NS) {
++		p->time_slice = sysctl_sched_base_slice;
++		resched_curr(rq);
++	}
++	sched_task_fork(p, rq);
++	raw_spin_unlock(&rq->lock);
++
++	rseq_migrate(p);
++	/*
++	 * We're setting the CPU for the first time, we don't migrate,
++	 * so use __set_task_cpu().
++	 */
++	__set_task_cpu(p, smp_processor_id());
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++	if (enabled)
++		static_branch_enable(&sched_schedstats);
++	else
++		static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++	if (!schedstat_enabled()) {
++		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++		static_branch_enable(&sched_schedstats);
++	}
++}
++
++static int __init setup_schedstats(char *str)
++{
++	int ret = 0;
++	if (!str)
++		goto out;
++
++	if (!strcmp(str, "enable")) {
++		set_schedstats(true);
++		ret = 1;
++	} else if (!strcmp(str, "disable")) {
++		set_schedstats(false);
++		ret = 1;
++	}
++out:
++	if (!ret)
++		pr_warn("Unable to parse schedstats=\n");
++
++	return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
++		size_t *lenp, loff_t *ppos)
++{
++	struct ctl_table t;
++	int err;
++	int state = static_branch_likely(&sched_schedstats);
++
++	if (write && !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
++	t = *table;
++	t.data = &state;
++	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++	if (err < 0)
++		return err;
++	if (write)
++		set_schedstats(state);
++	return err;
++}
++
++static struct ctl_table sched_core_sysctls[] = {
++	{
++		.procname       = "sched_schedstats",
++		.data           = NULL,
++		.maxlen         = sizeof(unsigned int),
++		.mode           = 0644,
++		.proc_handler   = sysctl_schedstats,
++		.extra1         = SYSCTL_ZERO,
++		.extra2         = SYSCTL_ONE,
++	},
++};
++static int __init sched_core_sysctl_init(void)
++{
++	register_sysctl_init("kernel", sched_core_sysctls);
++	return 0;
++}
++late_initcall(sched_core_sysctl_init);
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++	rseq_migrate(p);
++	/*
++	 * Fork balancing, do it here and not earlier because:
++	 * - cpus_ptr can change in the fork path
++	 * - any previously selected CPU might disappear through hotplug
++	 *
++	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++	 * as we're not fully set-up yet.
++	 */
++	__set_task_cpu(p, cpu_of(rq));
++#endif
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	activate_task(p, rq);
++	trace_sched_wakeup_new(p);
++	wakeup_preempt(rq);
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++	static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++	static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++	if (!static_branch_unlikely(&preempt_notifier_key))
++		WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++	hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++	hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
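++
++/*
++ * Illustrative sketch, not part of the original patch: how a user of this API
++ * would typically hook the current task. The callback signatures follow the
++ * sched_in()/sched_out() invocations below; my_sched_in(), my_sched_out(),
++ * my_ops and my_notifier are hypothetical names, and preempt_notifier_init()
++ * is assumed to come from <linux/preempt.h>.
++ *
++ *	static void my_sched_in(struct preempt_notifier *pn, int cpu)
++ *	{
++ *		(we are being scheduled in on @cpu)
++ *	}
++ *
++ *	static void my_sched_out(struct preempt_notifier *pn,
++ *				 struct task_struct *next)
++ *	{
++ *		(we are being scheduled out in favour of @next)
++ *	}
++ *
++ *	static struct preempt_ops my_ops = {
++ *		.sched_in  = my_sched_in,
++ *		.sched_out = my_sched_out,
++ *	};
++ *	static struct preempt_notifier my_notifier;
++ *
++ *	preempt_notifier_inc();
++ *	preempt_notifier_init(&my_notifier, &my_ops);
++ *	preempt_notifier_register(&my_notifier);
++ */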
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				   struct task_struct *next)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++	/*
++	 * Claim the task as running, we do this before switching to it
++	 * such that any running task will have this set.
++	 *
++	 * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++	 * its ordering comment.
++	 */
++	WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * This must be the very last reference to @prev from this CPU. After
++	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
++	 * must ensure this doesn't happen until the switch is completely
++	 * finished.
++	 *
++	 * In particular, the load of prev->state in finish_task_switch() must
++	 * happen before this.
++	 *
++	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++	 */
++	smp_store_release(&prev->on_cpu, 0);
++#else
++	prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	void (*func)(struct rq *rq);
++	struct balance_callback *next;
++
++	lockdep_assert_held(&rq->lock);
++
++	while (head) {
++		func = (void (*)(struct rq *))head->func;
++		next = head->next;
++		head->next = NULL;
++		head = next;
++
++		func(rq);
++	}
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be run in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct balance_callback balance_push_callback = {
++	.next = NULL,
++	.func = balance_push,
++};
++
++static inline struct balance_callback *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++	struct balance_callback *head = rq->balance_callback;
++
++	if (likely(!head))
++		return NULL;
++
++	lockdep_assert_rq_held(rq);
++	/*
++	 * Must not take balance_push_callback off the list when
++	 * splice_balance_callbacks() and balance_callbacks() are not
++	 * in the same rq->lock section.
++	 *
++	 * In that case it would be possible for __schedule() to interleave
++	 * and observe the list empty.
++	 */
++	if (split && head == &balance_push_callback)
++		head = NULL;
++	else
++		rq->balance_callback = NULL;
++
++	return head;
++}
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	unsigned long flags;
++
++	if (unlikely(head)) {
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		do_balance_callbacks(rq, head);
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	}
++}
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++}
++
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++	/*
++	 * The runqueue lock will be released by the next
++	 * task (which is an invalid locking op but in the case
++	 * of the scheduler it's an obvious special-case), so we
++	 * do an early lockdep release here:
++	 */
++	spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++	/* this is a valid case when another task releases the spinlock */
++	rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++	/*
++	 * If we are tracking spinlock dependencies then we have to
++	 * fix up the runqueue lock - which gets 'carried over' from
++	 * prev into current:
++	 */
++	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++	__balance_callbacks(rq);
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++		    struct task_struct *next)
++{
++	kcov_prepare_switch(prev);
++	sched_info_switch(rq, prev, next);
++	perf_event_task_sched_out(prev, next);
++	rseq_preempt(prev);
++	fire_sched_out_preempt_notifiers(prev, next);
++	kmap_local_sched_out();
++	prepare_task(next);
++	prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock.  (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. prev == current is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	struct rq *rq = this_rq();
++	struct mm_struct *mm = rq->prev_mm;
++	unsigned int prev_state;
++
++	/*
++	 * The previous task will have left us with a preempt_count of 2
++	 * because it left us after:
++	 *
++	 *	schedule()
++	 *	  preempt_disable();			// 1
++	 *	  __schedule()
++	 *	    raw_spin_lock_irq(&rq->lock)	// 2
++	 *
++	 * Also, see FORK_PREEMPT_COUNT.
++	 */
++	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++		      "corrupted preempt_count: %s/%d/0x%x\n",
++		      current->comm, current->pid, preempt_count()))
++		preempt_count_set(FORK_PREEMPT_COUNT);
++
++	rq->prev_mm = NULL;
++
++	/*
++	 * A task struct has one reference for the use as "current".
++	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++	 * schedule one last time. The schedule call will never return, and
++	 * the scheduled task must drop that reference.
++	 *
++	 * We must observe prev->state before clearing prev->on_cpu (in
++	 * finish_task), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could race with its RUNNING -> DEAD
++	 * transition, resulting in a double drop.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	vtime_task_switch(prev);
++	perf_event_task_sched_in(prev, current);
++	finish_task(prev);
++	tick_nohz_task_switch();
++	finish_lock_switch(rq);
++	finish_arch_post_lock_switch();
++	kcov_finish_switch(current);
++	/*
++	 * kmap_local_sched_out() is invoked with rq::lock held and
++	 * interrupts disabled. There is no requirement for that, but the
++	 * sched out code does not have an interrupt enabled section.
++	 * Restoring the maps on sched in does not require interrupts being
++	 * disabled either.
++	 */
++	kmap_local_sched_in();
++
++	fire_sched_in_preempt_notifiers(current);
++	/*
++	 * When switching through a kernel thread, the loop in
++	 * membarrier_{private,global}_expedited() may have observed that
++	 * kernel thread and not issued an IPI. It is therefore possible to
++	 * schedule between user->kernel->user threads without passing through
++	 * switch_mm(). Membarrier requires a barrier after storing to
++	 * rq->curr, before returning to userspace, so provide them here:
++	 *
++	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++	 *   provided by mmdrop(),
++	 * - a sync_core for SYNC_CORE.
++	 */
++	if (mm) {
++		membarrier_mm_sync_core_before_usermode(mm);
++		mmdrop_sched(mm);
++	}
++	if (unlikely(prev_state == TASK_DEAD)) {
++		/* Task is done with its stack. */
++		put_task_stack(prev);
++
++		put_task_struct_rcu_user(prev);
++	}
++
++	return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	/*
++	 * New tasks start with FORK_PREEMPT_COUNT, see there and
++	 * finish_task_switch() for details.
++	 *
++	 * finish_task_switch() will drop rq->lock() and lower preempt_count
++	 * and the preempt_enable() will end up enabling preemption (on
++	 * PREEMPT_COUNT kernels).
++	 */
++
++	finish_task_switch(prev);
++	preempt_enable();
++
++	if (current->set_child_tid)
++		put_user(task_pid_vnr(current), current->set_child_tid);
++
++	calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++	       struct task_struct *next)
++{
++	prepare_task_switch(rq, prev, next);
++
++	/*
++	 * For paravirt, this is coupled with an exit in switch_to to
++	 * combine the page table reload and the switch backend into
++	 * one hypercall.
++	 */
++	arch_start_context_switch(prev);
++
++	/*
++	 * kernel -> kernel   lazy + transfer active
++	 *   user -> kernel   lazy + mmgrab() active
++	 *
++	 * kernel ->   user   switch + mmdrop() active
++	 *   user ->   user   switch
++	 *
++	 * switch_mm_cid() needs to be updated if the barriers provided
++	 * by context_switch() are modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		enter_lazy_tlb(prev->active_mm, next);
++
++		next->active_mm = prev->active_mm;
++		if (prev->mm)                           // from user
++			mmgrab(prev->active_mm);
++		else
++			prev->active_mm = NULL;
++	} else {                                        // to user
++		membarrier_switch_mm(rq, prev->active_mm, next->mm);
++		/*
++		 * sys_membarrier() requires an smp_mb() between setting
++		 * rq->curr / membarrier_switch_mm() and returning to userspace.
++		 *
++		 * The below provides this either through switch_mm(), or in
++		 * case 'prev->active_mm == next->mm' through
++		 * finish_task_switch()'s mmdrop().
++		 */
++		switch_mm_irqs_off(prev->active_mm, next->mm, next);
++		lru_gen_use_mm(next->mm);
++
++		if (!prev->mm) {                        // from kernel
++			/* will mmdrop() in finish_task_switch(). */
++			rq->prev_mm = prev->active_mm;
++			prev->active_mm = NULL;
++		}
++	}
++
++	/* switch_mm_cid() requires the memory barriers above. */
++	switch_mm_cid(rq, prev, next);
++
++	prepare_lock_switch(rq, next);
++
++	/* Here we just switch the register state and the stack. */
++	switch_to(prev, next, prev);
++	barrier();
++
++	return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_online_cpu(i)
++		sum += cpu_rq(i)->nr_running;
++
++	return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race.  The caller is responsible to use it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++	return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
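++
++/*
++ * Illustrative sketch, not part of the original patch: the polling-loop usage
++ * hinted at above, assuming a hypothetical "done" flag updated elsewhere:
++ *
++ *	while (!READ_ONCE(done)) {
++ *		if (single_task_running())
++ *			cpu_relax();	(nothing else wants this CPU, keep polling)
++ *		else
++ *			schedule();	(someone else is runnable, yield)
++ *	}
++ */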
++
++unsigned long long nr_context_switches_cpu(int cpu)
++{
++	return cpu_rq(cpu)->nr_switches;
++}
++
++unsigned long long nr_context_switches(void)
++{
++	int i;
++	unsigned long long sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += cpu_rq(i)->nr_switches;
++
++	return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: they prefer shallow idle state
++ * selection for a CPU that has IO-wait, even though that CPU might not even
++ * end up running the task when it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++	return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait accounting is to account the idle time that we could
++ * have spent running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU; only that
++ * CPU will have IO-wait accounted, while the other has regular idle, even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means, that when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, by reason of under accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU; it can wake up on a different CPU
++ * than the one it blocked on. This means the per-CPU IO-wait number is
++ * meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += nr_iowait_cpu(i);
++
++	return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++	s64 ns = rq->clock_task - p->last_ran;
++
++	p->sched_time += ns;
++	cgroup_account_cputime(p, ns);
++	account_group_exec_runtime(p, ns);
++
++	p->time_slice -= ns;
++	p->last_ran = rq->clock_task;
++}
++
++/*
++ * Return accounted runtime for the task.
++ * Return separately the current's pending runtime that has not been
++ * accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++	/*
++	 * 64-bit doesn't need locks to atomically read a 64-bit value.
++	 * So we have an optimization chance when the task's delta_exec is 0.
++	 * Reading ->on_cpu is racy, but this is ok.
++	 *
++	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
++	 * If we race with it entering CPU, unaccounted time is 0. This is
++	 * indistinguishable from the read occurring a few cycles earlier.
++	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++	 * been accounted, so we're correct here as well.
++	 */
++	if (!p->on_cpu || !task_on_rq_queued(p))
++		return tsk_seruntime(p);
++#endif
++
++	rq = task_access_lock_irqsave(p, &lock, &flags);
++	/*
++	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
++	 * project cycles that may never be accounted to this
++	 * thread, breaking clock_gettime().
++	 */
++	if (p == rq->curr && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		update_curr(rq, p);
++	}
++	ns = tsk_seruntime(p);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++	return ns;
++}
++
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++	struct task_struct *p = rq->curr;
++
++	if (is_idle_task(p))
++		return;
++
++	update_curr(rq, p);
++	cpufreq_update_util(rq, 0);
++
++	/*
++	 * Tasks that have less than RESCHED_NS of time slice left will be
++	 * rescheduled.
++	 */
++	if (p->time_slice >= RESCHED_NS)
++		return;
++	set_tsk_need_resched(p);
++	set_preempt_need_resched();
++}
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++	u64 resched_latency, now = rq_clock(rq);
++	static bool warned_once;
++
++	if (sysctl_resched_latency_warn_once && warned_once)
++		return 0;
++
++	if (!need_resched() || !latency_warn_ms)
++		return 0;
++
++	if (system_state == SYSTEM_BOOTING)
++		return 0;
++
++	if (!rq->last_seen_need_resched_ns) {
++		rq->last_seen_need_resched_ns = now;
++		rq->ticks_without_resched = 0;
++		return 0;
++	}
++
++	rq->ticks_without_resched++;
++	resched_latency = now - rq->last_seen_need_resched_ns;
++	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++		return 0;
++
++	warned_once = true;
++
++	return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++	long val;
++
++	if ((kstrtol(str, 0, &val))) {
++		pr_warn("Unable to set resched_latency_warn_ms\n");
++		return 1;
++	}
++
++	sysctl_resched_latency_warn_ms = val;
++	return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void sched_tick(void)
++{
++	int cpu __maybe_unused = smp_processor_id();
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *curr = rq->curr;
++	u64 resched_latency;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		arch_scale_freq_tick();
++
++	sched_clock_tick();
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	scheduler_task_tick(rq);
++	if (sched_feat(LATENCY_WARN))
++		resched_latency = cpu_resched_latency(rq);
++	calc_global_load_tick(rq);
++
++	task_tick_mm_cid(rq, rq->curr);
++
++	raw_spin_unlock(&rq->lock);
++
++	if (sched_feat(LATENCY_WARN) && resched_latency)
++		resched_latency_warn(cpu, resched_latency);
++
++	perf_event_task_tick();
++
++	if (curr->flags & PF_WQ_WORKER)
++		wq_worker_tick(curr);
++}
++
++#ifdef CONFIG_SMP
++
++static int active_balance_cpu_stop(void *data)
++{
++	struct balance_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++	cpumask_t tmp;
++
++	local_irq_save(flags);
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++
++	arg->active = 0;
++
++	if (task_on_rq_queued(p) && task_rq(p) == rq &&
++	    cpumask_and(&tmp, p->cpus_ptr, arg->cpumask) &&
++	    !is_migration_disabled(p)) {
++		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu_of(rq)));
++		rq = move_queued_task(rq, p, dcpu);
++	}
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++/* trigger_active_balance - for @cpu/@rq */
++static inline int
++trigger_active_balance(struct rq *src_rq, struct rq *rq, struct balance_arg *arg)
++{
++	unsigned long flags;
++	struct task_struct *p;
++	int res;
++
++	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++		return 0;
++
++	res = (1 == rq->nr_running) &&					\
++	      !is_migration_disabled((p = sched_rq_first_task(rq))) &&	\
++	      cpumask_intersects(p->cpus_ptr, arg->cpumask) &&		\
++	      !arg->active;
++	if (res) {
++		arg->task = p;
++
++		arg->active = 1;
++	}
++
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	if (res) {
++		preempt_disable();
++		raw_spin_unlock(&src_rq->lock);
++
++		stop_one_cpu_nowait(cpu_of(rq), active_balance_cpu_stop,
++				    arg, &rq->active_balance_work);
++
++		preempt_enable();
++		raw_spin_lock(&src_rq->lock);
++	}
++
++	return res;
++}
++
++#ifdef CONFIG_SCHED_SMT
++/*
++ * sg_balance - sibling group balance check for run queue @rq
++ */
++static inline void sg_balance(struct rq *rq)
++{
++	cpumask_t chk;
++
++	if (cpumask_andnot(&chk, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
++		int i, cpu = cpu_of(rq);
++
++		for_each_cpu_wrap(i, &chk, cpu) {
++			if (cpumask_subset(cpu_smt_mask(i), &chk)) {
++				struct rq *target_rq = cpu_rq(i);
++				if (trigger_active_balance(rq, target_rq, &target_rq->sg_balance_arg))
++					return;
++			}
++		}
++	}
++}
++
++static DEFINE_PER_CPU(struct balance_callback, sg_balance_head) = {
++	.func = sg_balance,
++};
++#endif /* CONFIG_SCHED_SMT */
++
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++	int			cpu;
++	atomic_t		state;
++	struct delayed_work	work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE	0
++#define TICK_SCHED_REMOTE_OFFLINING	1
++#define TICK_SCHED_REMOTE_RUNNING	2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ *          TICK_SCHED_REMOTE_OFFLINE
++ *                    |   ^
++ *                    |   |
++ *                    |   | sched_tick_remote()
++ *                    |   |
++ *                    |   |
++ *                    +--TICK_SCHED_REMOTE_OFFLINING
++ *                    |   ^
++ *                    |   |
++ * sched_tick_start() |   | sched_tick_stop()
++ *                    |   |
++ *                    V   |
++ *          TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct tick_work *twork = container_of(dwork, struct tick_work, work);
++	int cpu = twork->cpu;
++	struct rq *rq = cpu_rq(cpu);
++	int os;
++
++	/*
++	 * Handle the tick only if it appears the remote CPU is running in full
++	 * dynticks mode. The check is racy by nature, but missing a tick or
++	 * having one too many is no big deal because the scheduler tick updates
++	 * statistics and checks timeslices in a time-independent way, regardless
++	 * of when exactly it is running.
++	 */
++	if (tick_nohz_tick_stopped_cpu(cpu)) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		struct task_struct *curr = rq->curr;
++
++		if (cpu_online(cpu)) {
++			update_rq_clock(rq);
++
++			if (!is_idle_task(curr)) {
++				/*
++				 * Make sure the next tick runs within a
++				 * reasonable amount of time.
++				 */
++				u64 delta = rq_clock_task(rq) - curr->last_ran;
++				WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++			}
++			scheduler_task_tick(rq);
++
++			calc_load_nohz_remote(rq);
++		}
++	}
++
++	/*
++	 * Run the remote tick once per second (1Hz). This arbitrary
++	 * frequency is large enough to avoid overload but short enough
++	 * to keep scheduler internal stats reasonably up to date.  But
++	 * first update state to reflect hotplug activity if required.
++	 */
++	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++	if (os == TICK_SCHED_REMOTE_RUNNING)
++		queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++	int os;
++	struct tick_work *twork;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++	if (os == TICK_SCHED_REMOTE_OFFLINE) {
++		twork->cpu = cpu;
++		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++	}
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++	struct tick_work *twork;
++	int os;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	/* There cannot be competing actions, but don't rely on stop-machine. */
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++	WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++	/* Don't cancel, as this would mess up the state machine. */
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++	tick_work_cpu = alloc_percpu(struct tick_work);
++	BUG_ON(!tick_work_cpu);
++	return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++				defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++	if (preempt_count() == val) {
++		unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++		current->preempt_disable_ip = ip;
++#endif
++		trace_preempt_off(CALLER_ADDR0, ip);
++	}
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++		return;
++#endif
++	__preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Spinlock count overflowing soon?
++	 */
++	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++				PREEMPT_MASK - 10);
++#endif
++	preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in equals the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++	if (preempt_count() == val)
++		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++		return;
++	/*
++	 * Is the spinlock portion underflowing?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++			!(preempt_count() & PREEMPT_MASK)))
++		return;
++#endif
++
++	preempt_latency_stop(val);
++	__preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	return p->preempt_disable_ip;
++#else
++	return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++	/* Save this before calling printk(), since that will clobber it */
++	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++	if (oops_in_progress)
++		return;
++
++	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++		prev->comm, prev->pid, preempt_count());
++
++	debug_show_held_locks(prev);
++	print_modules();
++	if (irqs_disabled())
++		print_irqtrace_events(prev);
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		pr_err("Preemption disabled at:");
++		print_ip_sym(KERN_ERR, preempt_disable_ip);
++	}
++	check_panic_on_warn("scheduling while atomic");
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++	if (task_stack_end_corrupted(prev))
++		panic("corrupted stack end detected inside scheduler\n");
++
++	if (task_scs_end_corrupted(prev))
++		panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++			prev->comm, prev->pid, prev->non_block_count);
++		dump_stack();
++		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++	}
++#endif
++
++	if (unlikely(in_atomic_preempt_off())) {
++		__schedule_bug(prev);
++		preempt_count_set(PREEMPT_DISABLED);
++	}
++	rcu_sleep_check();
++	SCHED_WARN_ON(ct_state() == CONTEXT_USER);
++
++	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++	schedstat_inc(this_rq()->sched_count);
++}
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
++	       sched_rq_pending_mask.bits[0],
++	       sched_idle_mask->bits[0],
++	       sched_sg_idle_mask.bits[0]);
++}
++#else
++inline void alt_sched_debug(void) {}
++#endif
++
++#ifdef	CONFIG_SMP
++
++#ifdef CONFIG_PREEMPT_RT
++#define SCHED_NR_MIGRATE_BREAK 8
++#else
++#define SCHED_NR_MIGRATE_BREAK 32
++#endif
++
++const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
++
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++	struct task_struct *p, *skip = rq->curr;
++	int nr_migrated = 0;
++	int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
++
++	/* Workaround to check that rq->curr is still on the rq */
++	if (!task_on_rq_queued(skip))
++		return 0;
++
++	while (skip != rq->idle && nr_tries &&
++	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++		skip = sched_rq_next_task(p, rq);
++		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++			__SCHED_DEQUEUE_TASK(p, rq, 0, );
++			set_task_cpu(p, dest_cpu);
++			sched_task_sanity_check(p, dest_rq);
++			sched_mm_cid_migrate_to(dest_rq, p, cpu_of(rq));
++			__SCHED_ENQUEUE_TASK(p, dest_rq, 0, );
++			nr_migrated++;
++		}
++		nr_tries--;
++	}
++
++	return nr_migrated;
++}
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++	cpumask_t *topo_mask, *end_mask, chk;
++
++	if (unlikely(!rq->online))
++		return 0;
++
++	if (cpumask_empty(&sched_rq_pending_mask))
++		return 0;
++
++	topo_mask = per_cpu(sched_cpu_topo_masks, cpu);
++	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++	do {
++		int i;
++
++		if (!cpumask_and(&chk, &sched_rq_pending_mask, topo_mask))
++			continue;
++
++		for_each_cpu_wrap(i, &chk, cpu) {
++			int nr_migrated;
++			struct rq *src_rq;
++
++			src_rq = cpu_rq(i);
++			if (!do_raw_spin_trylock(&src_rq->lock))
++				continue;
++			spin_acquire(&src_rq->lock.dep_map,
++				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++				src_rq->nr_running -= nr_migrated;
++				if (src_rq->nr_running < 2)
++					cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++				spin_release(&src_rq->lock.dep_map, _RET_IP_);
++				do_raw_spin_unlock(&src_rq->lock);
++
++				rq->nr_running += nr_migrated;
++				if (rq->nr_running > 1)
++					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++				update_sched_preempt_mask(rq);
++				cpufreq_update_util(rq, 0);
++
++				return 1;
++			}
++
++			spin_release(&src_rq->lock.dep_map, _RET_IP_);
++			do_raw_spin_unlock(&src_rq->lock);
++		}
++	} while (++topo_mask < end_mask);
++
++	return 0;
++}
++#endif
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sysctl_sched_base_slice;
++
++	sched_task_renew(p, rq);
++
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++		requeue_task(p, rq);
++}
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired as there's no
++ * point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++	if (unlikely(rq->idle == p))
++		return;
++
++	update_curr(rq, p);
++
++	if (p->time_slice < RESCHED_NS)
++		time_slice_expired(p, rq);
++}
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu)
++{
++	struct task_struct *next = sched_rq_first_task(rq);
++
++	if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++		if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++			if (static_key_count(&sched_smt_present.key) > 1 &&
++			    cpumask_test_cpu(cpu, sched_sg_idle_mask) &&
++			    rq->online)
++				__queue_balance_callback(rq, &per_cpu(sg_balance_head, cpu));
++#endif
++			schedstat_inc(rq->sched_goidle);
++			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++			return next;
++#ifdef	CONFIG_SMP
++		}
++		next = sched_rq_first_task(rq);
++#endif
++	}
++#ifdef CONFIG_HIGH_RES_TIMERS
++	hrtick_start(rq, next->time_slice);
++#endif
++	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++	return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock. Note that
++ * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
++ * optimize the AND operation out and just check for zero.
++ */
++#define SM_NONE			0x0
++#define SM_PREEMPT		0x1
++#define SM_RTLOCK_WAIT		0x2
++
++#ifndef CONFIG_PREEMPT_RT
++# define SM_MASK_PREEMPT	(~0U)
++#else
++# define SM_MASK_PREEMPT	SM_PREEMPT
++#endif
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ *      paths. For example, see arch/x86/entry_64.S.
++ *
++ *      To drive preemption between tasks, the scheduler sets the flag in timer
++ *      interrupt handler sched_tick().
++ *
++ *   3. Wakeups don't really cause entry into schedule(). They add a
++ *      task to the run-queue and that's it.
++ *
++ *      Now, if the new task added to the run-queue preempts the current
++ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ *      called on the nearest possible occasion:
++ *
++ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ *         - in syscall or exception context, at the next outermost
++ *           preempt_enable(). (this might be as soon as the wake_up()'s
++ *           spin_unlock()!)
++ *
++ *         - in IRQ context, return from interrupt-handler to
++ *           preemptible context
++ *
++ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ *         then at the next:
++ *
++ *          - cond_resched() call
++ *          - explicit schedule() call
++ *          - return from syscall or exception to user-space
++ *          - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(unsigned int sched_mode)
++{
++	struct task_struct *prev, *next;
++	unsigned long *switch_count;
++	unsigned long prev_state;
++	struct rq *rq;
++	int cpu;
++
++	cpu = smp_processor_id();
++	rq = cpu_rq(cpu);
++	prev = rq->curr;
++
++	schedule_debug(prev, !!sched_mode);
++
++	/* Bypass the sched_feat(HRTICK) check, which Alt schedule FW doesn't support */
++	hrtick_clear(rq);
++
++	local_irq_disable();
++	rcu_note_context_switch(!!sched_mode);
++
++	/*
++	 * Make sure that signal_pending_state()->signal_pending() below
++	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++	 * done by the caller to avoid the race with signal_wake_up():
++	 *
++	 * __set_current_state(@state)		signal_wake_up()
++	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
++	 *					  wake_up_state(p, state)
++	 *   LOCK rq->lock			    LOCK p->pi_state
++	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
++	 *     if (signal_pending_state())	    if (p->state & @state)
++	 *
++	 * Also, the membarrier system call requires a full memory barrier
++	 * after coming from user-space, before storing to rq->curr; this
++	 * barrier matches a full barrier in the proximity of the membarrier
++	 * system call exit.
++	 */
++	raw_spin_lock(&rq->lock);
++	smp_mb__after_spinlock();
++
++	update_rq_clock(rq);
++
++	switch_count = &prev->nivcsw;
++	/*
++	 * We must load prev->state once (task_struct::state is volatile), such
++	 * that we form a control dependency vs deactivate_task() below.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
++		if (signal_pending_state(prev_state, prev)) {
++			WRITE_ONCE(prev->__state, TASK_RUNNING);
++		} else {
++			prev->sched_contributes_to_load =
++				(prev_state & TASK_UNINTERRUPTIBLE) &&
++				!(prev_state & TASK_NOLOAD) &&
++				!(prev_state & TASK_FROZEN);
++
++			if (prev->sched_contributes_to_load)
++				rq->nr_uninterruptible++;
++
++			/*
++			 * __schedule()			ttwu()
++			 *   prev_state = prev->state;    if (p->on_rq && ...)
++			 *   if (prev_state)		    goto out;
++			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
++			 *				  p->state = TASK_WAKING
++			 *
++			 * Where __schedule() and ttwu() have matching control dependencies.
++			 *
++			 * After this, schedule() must not care about p->state any more.
++			 */
++			sched_task_deactivate(prev, rq);
++			deactivate_task(prev, rq);
++
++			if (prev->in_iowait) {
++				atomic_inc(&rq->nr_iowait);
++				delayacct_blkio_start();
++			}
++		}
++		switch_count = &prev->nvcsw;
++	}
++
++	check_curr(prev, rq);
++
++	next = choose_next_task(rq, cpu);
++	clear_tsk_need_resched(prev);
++	clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++	rq->last_seen_need_resched_ns = 0;
++#endif
++
++	if (likely(prev != next)) {
++		next->last_ran = rq->clock_task;
++
++		/*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
++		rq->nr_switches++;
++		/*
++		 * RCU users of rcu_dereference(rq->curr) may not see
++		 * changes to task_struct made by pick_next_task().
++		 */
++		RCU_INIT_POINTER(rq->curr, next);
++		/*
++		 * The membarrier system call requires each architecture
++		 * to have a full memory barrier after updating
++		 * rq->curr, before returning to user-space.
++		 *
++		 * Here are the schemes providing that barrier on the
++		 * various architectures:
++		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
++		 *   RISC-V.  switch_mm() relies on membarrier_arch_switch_mm()
++		 *   on PowerPC and on RISC-V.
++		 * - finish_lock_switch() for weakly-ordered
++		 *   architectures where spin_unlock is a full barrier,
++		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
++		 *   is a RELEASE barrier),
++		 *
++		 * The barrier matches a full barrier in the proximity of
++		 * the membarrier system call entry.
++		 *
++		 * On RISC-V, this barrier pairing is also needed for the
++		 * SYNC_CORE command when switching between processes, cf.
++		 * the inline comments in membarrier_arch_switch_mm().
++		 */
++		++*switch_count;
++
++		trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
++
++		/* Also unlocks the rq: */
++		rq = context_switch(rq, prev, next);
++
++		cpu = cpu_of(rq);
++	} else {
++		__balance_callbacks(rq);
++		raw_spin_unlock_irq(&rq->lock);
++	}
++}
++
++void __noreturn do_task_dead(void)
++{
++	/* Causes final put_task_struct in finish_task_switch(): */
++	set_special_state(TASK_DEAD);
++
++	/* Tell freezer to ignore us: */
++	current->flags |= PF_NOFREEZE;
++
++	__schedule(SM_NONE);
++	BUG();
++
++	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++	for (;;)
++		cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++	static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
++	unsigned int task_flags;
++
++	/*
++	 * Establish LD_WAIT_CONFIG context to ensure none of the code called
++	 * will use a blocking primitive -- which would lead to recursion.
++	 */
++	lock_map_acquire_try(&sched_map);
++
++	task_flags = tsk->flags;
++	/*
++	 * If a worker goes to sleep, notify and ask workqueue whether it
++	 * wants to wake up a task to maintain concurrency.
++	 */
++	if (task_flags & PF_WQ_WORKER)
++		wq_worker_sleeping(tsk);
++	else if (task_flags & PF_IO_WORKER)
++		io_wq_worker_sleeping(tsk);
++
++	/*
++	 * spinlock and rwlock must not flush block requests.  This will
++	 * deadlock if the callback attempts to acquire a lock which is
++	 * already acquired.
++	 */
++	SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
++
++	/*
++	 * If we are going to sleep and we have plugged IO queued,
++	 * make sure to submit it to avoid deadlocks.
++	 */
++	blk_flush_plug(tsk->plug, true);
++
++	lock_map_release(&sched_map);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER | PF_BLOCK_TS)) {
++		if (tsk->flags & PF_BLOCK_TS)
++			blk_plug_invalidate_ts(tsk);
++		if (tsk->flags & PF_WQ_WORKER)
++			wq_worker_running(tsk);
++		else if (tsk->flags & PF_IO_WORKER)
++			io_wq_worker_running(tsk);
++	}
++}
++
++static __always_inline void __schedule_loop(unsigned int sched_mode)
++{
++	do {
++		preempt_disable();
++		__schedule(sched_mode);
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++	struct task_struct *tsk = current;
++
++#ifdef CONFIG_RT_MUTEXES
++	lockdep_assert(!tsk->sched_rt_mutex);
++#endif
++
++	if (!task_is_running(tsk))
++		sched_submit_work(tsk);
++	__schedule_loop(SM_NONE);
++	sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++	/*
++	 * As this skips calling sched_submit_work(), which the idle task does
++	 * regardless because that function is a nop when the task is in a
++	 * TASK_RUNNING state, make sure this isn't used someplace that the
++	 * current task can be in any other state. Note, idle is always in the
++	 * TASK_RUNNING state.
++	 */
++	WARN_ON_ONCE(current->__state);
++	do {
++		__schedule(SM_NONE);
++	} while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++	/*
++	 * If we come here after a random call to set_need_resched(),
++	 * or we have been woken up remotely but the IPI has not yet arrived,
++	 * we haven't yet exited the RCU idle mode. Do it here manually until
++	 * we find a better solution.
++	 *
++	 * NB: There are buggy callers of this function.  Ideally we
++	 * should warn if prev_state != CONTEXT_USER, but that will trigger
++	 * too frequently to make sense yet.
++	 */
++	enum ctx_state prev_state = exception_enter();
++	schedule();
++	exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++	sched_preempt_enable_no_resched();
++	schedule();
++	preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++	__schedule_loop(SM_RTLOCK_WAIT);
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		__schedule(SM_PREEMPT);
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++
++		/*
++		 * Check again in case we missed a preemption opportunity
++		 * between schedule and now.
++		 */
++	} while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++	/*
++	 * If there is a non-zero preempt_count or interrupts are disabled,
++	 * we do not want to preempt the current task. Just return..
++	 */
++	if (likely(!preemptible()))
++		return;
++
++	preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled	preempt_schedule
++#define preempt_schedule_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++		return;
++	preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++	enum ctx_state prev_ctx;
++
++	if (likely(!preemptible()))
++		return;
++
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		/*
++		 * Needs preempt disabled in case user_exit() is traced
++		 * and the tracer calls preempt_enable_notrace() causing
++		 * an infinite recursion.
++		 */
++		prev_ctx = exception_enter();
++		__schedule(SM_PREEMPT);
++		exception_exit(prev_ctx);
++
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++	} while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++		return;
++	preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of irq context.
++ * Note that this is called and returns with IRQs disabled. This will
++ * protect us against recursive calls from IRQ context.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++	enum ctx_state prev_state;
++
++	/* Catch callers which need to be fixed */
++	BUG_ON(preempt_count() || !irqs_disabled());
++
++	prev_state = exception_enter();
++
++	do {
++		preempt_disable();
++		local_irq_enable();
++		__schedule(SM_PREEMPT);
++		local_irq_disable();
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++
++	exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++			  void *key)
++{
++	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
++	return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++static inline void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++	/* Trigger resched if task sched_prio has been modified. */
++	if (task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		requeue_task(p, rq);
++		wakeup_preempt(rq);
++	}
++}
++
++static void __setscheduler_prio(struct task_struct *p, int prio)
++{
++	p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++/*
++ * Would be more useful with typeof()/auto_type but they don't mix with
++ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
++ * name such that if someone were to implement this function we get to compare
++ * notes.
++ */
++#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
++
++void rt_mutex_pre_schedule(void)
++{
++	lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
++	sched_submit_work(current);
++}
++
++void rt_mutex_schedule(void)
++{
++	lockdep_assert(current->sched_rt_mutex);
++	__schedule_loop(SM_NONE);
++}
++
++void rt_mutex_post_schedule(void)
++{
++	sched_update_worker(current);
++	lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
++}
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++	if (pi_task)
++		prio = min(prio, pi_task->prio);
++
++	return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++	return __rt_effective_prio(pi_task, prio);
++}
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++	int prio;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	/* XXX used to be waiter->prio, not waiter->task->prio */
++	prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++	/*
++	 * If nothing changed; bail early.
++	 */
++	if (p->pi_top_task == pi_task && prio == p->prio)
++		return;
++
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Set under pi_lock && rq->lock, such that the value can be used under
++	 * either lock.
++	 *
++	 * Note that there is a load of trickiness in making this pointer cache
++	 * work right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
++	 * ensure a task is de-boosted (pi_task is set to NULL) before the
++	 * task is allowed to run again (and can exit). This ensures the pointer
++	 * points to a blocked task -- which guarantees the task is present.
++	 */
++	p->pi_top_task = pi_task;
++
++	/*
++	 * For FIFO/RR we only need to set prio, if that matches we're done.
++	 */
++	if (prio == p->prio)
++		goto out_unlock;
++
++	/*
++	 * Idle task boosting is a no-no in general. There is one
++	 * exception, when PREEMPT_RT and NOHZ are active:
++	 *
++	 * The idle task calls get_next_timer_interrupt() and holds
++	 * the timer wheel base->lock on the CPU and another CPU wants
++	 * to access the timer (probably to cancel it). We can safely
++	 * ignore the boosting request, as the idle CPU runs this code
++	 * with interrupts disabled and will complete the lock
++	 * protected section without being interrupted. So there is no
++	 * real need to boost.
++	 */
++	if (unlikely(p == rq->idle)) {
++		WARN_ON(p != rq->curr);
++		WARN_ON(p->pi_blocked_on);
++		goto out_unlock;
++	}
++
++	trace_sched_pi_setprio(p, pi_task);
++
++	__setscheduler_prio(p, prio);
++
++	check_task_changed(p, rq);
++out_unlock:
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++
++	if (task_on_rq_queued(p))
++		__balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++
++	preempt_enable();
++}
++#else
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	return prio;
++}
++#endif
++
++void set_user_nice(struct task_struct *p, long nice)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++		return;
++	/*
++	 * We have to be careful, if called from sys_setpriority(),
++	 * the task might be in the middle of scheduling on another CPU.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	rq = __task_access_lock(p, &lock);
++
++	p->static_prio = NICE_TO_PRIO(nice);
++	/*
++	 * The RT priorities are set via sched_setscheduler(), but we still
++	 * allow the 'normal' nice value to be set - but as expected
++	 * it won't have any effect on scheduling while the task's policy
++	 * is not SCHED_NORMAL/SCHED_BATCH:
++	 */
++	if (task_has_rt_policy(p))
++		goto out_unlock;
++
++	p->prio = effective_prio(p);
++
++	check_task_changed(p, rq);
++out_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++EXPORT_SYMBOL(set_user_nice);
++
++/*
++ * is_nice_reduction - check if nice value is an actual reduction
++ *
++ * Similar to can_nice() but does not perform a capability check.
++ *
++ * @p: task
++ * @nice: nice value
++ */
++static bool is_nice_reduction(const struct task_struct *p, const int nice)
++{
++	/* Convert nice value [19,-20] to rlimit style value [1,40]: */
++	int nice_rlim = nice_to_rlimit(nice);
++
++	return (nice_rlim <= task_rlimit(p, RLIMIT_NICE));
++}
++
++/*
++ * can_nice - check if a task can reduce its nice value
++ * @p: task
++ * @nice: nice value
++ */
++int can_nice(const struct task_struct *p, const int nice)
++{
++	return is_nice_reduction(p, nice) || capable(CAP_SYS_NICE);
++}
++
++#ifdef __ARCH_WANT_SYS_NICE
++
++/*
++ * sys_nice - change the priority of the current process.
++ * @increment: priority increment
++ *
++ * sys_setpriority is a more generic, but much slower function that
++ * does similar things.
++ */
++SYSCALL_DEFINE1(nice, int, increment)
++{
++	long nice, retval;
++
++	/*
++	 * Setpriority might change our priority at the same moment.
++	 * We don't have to worry. Conceptually one call occurs first
++	 * and we have a single winner.
++	 */
++
++	increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
++	nice = task_nice(current) + increment;
++
++	nice = clamp_val(nice, MIN_NICE, MAX_NICE);
++	if (increment < 0 && !can_nice(current, nice))
++		return -EPERM;
++
++	retval = security_task_setnice(current, nice);
++	if (retval)
++		return retval;
++
++	set_user_nice(current, nice);
++	return 0;
++}
++
++#endif
++
++/**
++ * task_prio - return the priority value of a given task.
++ * @p: the task in question.
++ *
++ * Return: The priority value as seen by users in /proc.
++ *
++ * sched policy              return value    kernel prio    user prio/nice
++ *
++ * (BMQ)normal, batch, idle  [0 ... 53]      [100 ... 139]  0/[-20 ... 19]/[-7 ... 7]
++ * (PDS)normal, batch, idle  [0 ... 39]      100            0/[-20 ... 19]
++ * fifo, rr                  [-1 ... -100]   [99 ... 0]     [0 ... 99]
++ */
++int task_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++		task_sched_prio_normal(p, task_rq(p));
++}
++
++/**
++ * idle_cpu - is a given CPU idle currently?
++ * @cpu: the processor in question.
++ *
++ * Return: 1 if the CPU is currently idle. 0 otherwise.
++ */
++int idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (rq->curr != rq->idle)
++		return 0;
++
++	if (rq->nr_running)
++		return 0;
++
++#ifdef CONFIG_SMP
++	if (rq->ttwu_pending)
++		return 0;
++#endif
++
++	return 1;
++}
++
++/**
++ * idle_task - return the idle task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * Return: The idle task for the cpu @cpu.
++ */
++struct task_struct *idle_task(int cpu)
++{
++	return cpu_rq(cpu)->idle;
++}
++
++/**
++ * find_process_by_pid - find a process with a matching PID value.
++ * @pid: the pid in question.
++ *
++ * The task of @pid, if found. %NULL otherwise.
++ */
++static inline struct task_struct *find_process_by_pid(pid_t pid)
++{
++	return pid ? find_task_by_vpid(pid) : current;
++}
++
++static struct task_struct *find_get_task(pid_t pid)
++{
++	struct task_struct *p;
++	guard(rcu)();
++
++	p = find_process_by_pid(pid);
++	if (likely(p))
++		get_task_struct(p);
++
++	return p;
++}
++
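++/*
++ * Scope-based helper: CLASS(find_get_task, p)(pid) looks the task up and
++ * takes a reference via find_get_task(); put_task_struct() runs
++ * automatically when p goes out of scope.
++ */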
++DEFINE_CLASS(find_get_task, struct task_struct *, if (_T) put_task_struct(_T),
++	     find_get_task(pid), pid_t pid)
++
++/*
++ * sched_setparam() passes in -1 for its policy, to let the functions
++ * it calls know not to change it.
++ */
++#define SETPARAM_POLICY -1
++
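++/*
++ * Copy the policy, nice and rt_priority values from @attr into @p; the
++ * effective prio itself is updated separately via __setscheduler_prio().
++ */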
++static void __setscheduler_params(struct task_struct *p,
++		const struct sched_attr *attr)
++{
++	int policy = attr->sched_policy;
++
++	if (policy == SETPARAM_POLICY)
++		policy = p->policy;
++
++	p->policy = policy;
++
++	/*
++	 * Allow the normal nice value to be set, but it will not have any
++	 * effect on scheduling while the task's policy is not
++	 * SCHED_NORMAL/SCHED_BATCH.
++	 */
++	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++
++	/*
++	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
++	 * !rt_policy. Always setting this ensures that things like
++	 * getparam()/getattr() don't report silly values for !rt tasks.
++	 */
++	p->rt_priority = attr->sched_priority;
++	p->normal_prio = normal_prio(p);
++}
++
++/*
++ * check the target process has a UID that matches the current process's
++ */
++static bool check_same_owner(struct task_struct *p)
++{
++	const struct cred *cred = current_cred(), *pcred;
++	guard(rcu)();
++
++	pcred = __task_cred(p);
++	return (uid_eq(cred->euid, pcred->euid) ||
++	        uid_eq(cred->euid, pcred->uid));
++}
++
++/*
++ * Allow unprivileged RT tasks to decrease priority.
++ * Only issue a capable test if needed and only once to avoid an audit
++ * event on permitted non-privileged operations:
++ */
++static int user_check_sched_setscheduler(struct task_struct *p,
++					 const struct sched_attr *attr,
++					 int policy, int reset_on_fork)
++{
++	if (rt_policy(policy)) {
++		unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
++
++		/* Can't set/change the rt policy: */
++		if (policy != p->policy && !rlim_rtprio)
++			goto req_priv;
++
++		/* Can't increase priority: */
++		if (attr->sched_priority > p->rt_priority &&
++		    attr->sched_priority > rlim_rtprio)
++			goto req_priv;
++	}
++
++	/* Can't change other user's priorities: */
++	if (!check_same_owner(p))
++		goto req_priv;
++
++	/* Normal users shall not reset the sched_reset_on_fork flag: */
++	if (p->sched_reset_on_fork && !reset_on_fork)
++		goto req_priv;
++
++	return 0;
++
++req_priv:
++	if (!capable(CAP_SYS_NICE))
++		return -EPERM;
++
++	return 0;
++}
++
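++/*
++ * Common backend for sched_setscheduler()/sched_setattr(): @user selects
++ * the permission/security checks, @pi enables rt-mutex priority-inheritance
++ * handling around the change.
++ */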
++static int __sched_setscheduler(struct task_struct *p,
++				const struct sched_attr *attr,
++				bool user, bool pi)
++{
++	const struct sched_attr dl_squash_attr = {
++		.size		= sizeof(struct sched_attr),
++		.sched_policy	= SCHED_FIFO,
++		.sched_nice	= 0,
++		.sched_priority = 99,
++	};
++	int oldpolicy = -1, policy = attr->sched_policy;
++	int retval, newprio;
++	struct balance_callback *head;
++	unsigned long flags;
++	struct rq *rq;
++	int reset_on_fork;
++	raw_spinlock_t *lock;
++
++	/* The pi code expects interrupts enabled */
++	BUG_ON(pi && in_interrupt());
++
++	/*
++	 * Alt schedule FW supports SCHED_DEADLINE by squashing it into prio 0 SCHED_FIFO
++	 */
++	if (unlikely(SCHED_DEADLINE == policy)) {
++		attr = &dl_squash_attr;
++		policy = attr->sched_policy;
++	}
++recheck:
++	/* Double check policy once rq lock held */
++	if (policy < 0) {
++		reset_on_fork = p->sched_reset_on_fork;
++		policy = oldpolicy = p->policy;
++	} else {
++		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++		if (policy > SCHED_IDLE)
++			return -EINVAL;
++	}
++
++	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++		return -EINVAL;
++
++	/*
++	 * Valid priorities for SCHED_FIFO and SCHED_RR are
++	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++	 * SCHED_BATCH and SCHED_IDLE is 0.
++	 */
++	if (attr->sched_priority < 0 ||
++	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++		return -EINVAL;
++	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++	    (attr->sched_priority != 0))
++		return -EINVAL;
++
++	if (user) {
++		retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++		if (retval)
++			return retval;
++
++		retval = security_task_setscheduler(p);
++		if (retval)
++			return retval;
++	}
++
++	/*
++	 * Make sure no PI-waiters arrive (or leave) while we are
++	 * changing the priority of the task:
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++	/*
++	 * To be able to change p->policy safely, task_access_lock()
++	 * must be called.
++	 * If task_access_lock() is used here:
++	 * for a task p which is not running, reading rq->stop is
++	 * racy but acceptable as ->stop doesn't change much.
++	 * An enhancement can be made to read rq->stop safely.
++	 */
++	rq = __task_access_lock(p, &lock);
++
++	/*
++	 * Changing the policy of the stop threads is a very bad idea.
++	 */
++	if (p == rq->stop) {
++		retval = -EINVAL;
++		goto unlock;
++	}
++
++	/*
++	 * If not changing anything there's no need to proceed further:
++	 */
++	if (unlikely(policy == p->policy)) {
++		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++			goto change;
++		if (!rt_policy(policy) &&
++		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++			goto change;
++
++		p->sched_reset_on_fork = reset_on_fork;
++		retval = 0;
++		goto unlock;
++	}
++change:
++
++	/* Re-check policy now with rq lock held */
++	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++		policy = oldpolicy = -1;
++		__task_access_unlock(p, lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++		goto recheck;
++	}
++
++	p->sched_reset_on_fork = reset_on_fork;
++
++	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++	if (pi) {
++		/*
++		 * Take priority boosted tasks into account. If the new
++		 * effective priority is unchanged, we just store the new
++		 * normal parameters and do not touch the scheduler class and
++		 * the runqueue. This will be done when the task deboosts
++		 * itself.
++		 */
++		newprio = rt_effective_prio(p, newprio);
++	}
++
++	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++		__setscheduler_params(p, attr);
++		__setscheduler_prio(p, newprio);
++	}
++
++	check_task_changed(p, rq);
++
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++	head = splice_balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	if (pi)
++		rt_mutex_adjust_pi(p);
++
++	/* Run balance callbacks after we've adjusted the PI chain: */
++	balance_callbacks(rq, head);
++	preempt_enable();
++
++	return 0;
++
++unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++	return retval;
++}
++
++static int _sched_setscheduler(struct task_struct *p, int policy,
++			       const struct sched_param *param, bool check)
++{
++	struct sched_attr attr = {
++		.sched_policy   = policy,
++		.sched_priority = param->sched_priority,
++		.sched_nice     = PRIO_TO_NICE(p->static_prio),
++	};
++
++	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
++	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
++		attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		policy &= ~SCHED_RESET_ON_FORK;
++		attr.sched_policy = policy;
++	}
++
++	return __sched_setscheduler(p, &attr, check, true);
++}
++
++/**
++ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Use sched_set_fifo(), read its comment.
++ *
++ * Return: 0 on success. An error code otherwise.
++ *
++ * NOTE that the task may be already dead.
++ */
++int sched_setscheduler(struct task_struct *p, int policy,
++		       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, true);
++}
++
++int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, true, true);
++}
++
++int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, false, true);
++}
++EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
++
++/**
++ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Just like sched_setscheduler, only don't bother checking if the
++ * current context has permission.  For example, this is needed in
++ * stop_machine(): we create temporary high priority worker threads,
++ * but our caller might not have that capability.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++int sched_setscheduler_nocheck(struct task_struct *p, int policy,
++			       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, false);
++}
++
++/*
++ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
++ * incapable of resource management, which is the one thing an OS really should
++ * be doing.
++ *
++ * This is of course the reason it is limited to privileged users only.
++ *
++ * Worse still; it is fundamentally impossible to compose static priority
++ * workloads. You cannot take two correctly working static prio workloads
++ * and smash them together and still expect them to work.
++ *
++ * For this reason 'all' FIFO tasks the kernel creates are basically at:
++ *
++ *   MAX_RT_PRIO / 2
++ *
++ * The administrator _MUST_ configure the system, the kernel simply doesn't
++ * know enough information to make a sensible choice.
++ */
++void sched_set_fifo(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo);
++
++/*
++ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
++ */
++void sched_set_fifo_low(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = 1 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo_low);
++
++void sched_set_normal(struct task_struct *p, int nice)
++{
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++		.sched_nice = nice,
++	};
++	WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_normal);
++
++static int
++do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
++{
++	struct sched_param lparam;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
++		return -EFAULT;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	return sched_setscheduler(p, policy, &lparam);
++}
++
++/*
++ * Mimics kernel/events/core.c perf_copy_attr().
++ */
++static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
++{
++	u32 size;
++	int ret;
++
++	/* Zero the full structure, so that a short copy will be nice: */
++	memset(attr, 0, sizeof(*attr));
++
++	ret = get_user(size, &uattr->size);
++	if (ret)
++		return ret;
++
++	/* ABI compatibility quirk: */
++	if (!size)
++		size = SCHED_ATTR_SIZE_VER0;
++
++	if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
++		goto err_size;
++
++	ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
++	if (ret) {
++		if (ret == -E2BIG)
++			goto err_size;
++		return ret;
++	}
++
++	/*
++	 * XXX: Do we want to be lenient like existing syscalls; or do we want
++	 * to be strict and return an error on out-of-bounds values?
++	 */
++	attr->sched_nice = clamp(attr->sched_nice, -20, 19);
++
++	/* sched/core.c uses zero here but we already know ret is zero */
++	return 0;
++
++err_size:
++	put_user(sizeof(*attr), &uattr->size);
++	return -E2BIG;
++}
++
++/**
++ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
++ * @pid: the pid in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
++{
++	if (policy < 0)
++		return -EINVAL;
++
++	return do_sched_setscheduler(pid, policy, param);
++}
++
++/**
++ * sys_sched_setparam - set/change the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
++{
++	return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
++}
++
++static void get_params(struct task_struct *p, struct sched_attr *attr)
++{
++	if (task_has_rt_policy(p))
++		attr->sched_priority = p->rt_priority;
++	else
++		attr->sched_nice = task_nice(p);
++}
++
++/**
++ * sys_sched_setattr - same as above, but with extended sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
++			       unsigned int, flags)
++{
++	struct sched_attr attr;
++	int retval;
++
++	if (!uattr || pid < 0 || flags)
++		return -EINVAL;
++
++	retval = sched_copy_attr(uattr, &attr);
++	if (retval)
++		return retval;
++
++	if ((int)attr.sched_policy < 0)
++		return -EINVAL;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	if (attr.sched_flags & SCHED_FLAG_KEEP_PARAMS)
++		get_params(p, &attr);
++
++	return sched_setattr(p, &attr);
++}
++
++/**
++ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
++ * @pid: the pid in question.
++ *
++ * Return: On success, the policy of the thread. Otherwise, a negative error
++ * code.
++ */
++SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
++{
++	struct task_struct *p;
++	int retval = -EINVAL;
++
++	if (pid < 0)
++		return -ESRCH;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -ESRCH;
++
++	retval = security_task_getscheduler(p);
++	if (!retval)
++		retval = p->policy;
++
++	return retval;
++}
++
++/**
++ * sys_sched_getparam - get the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the RT priority.
++ *
++ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
++ * code.
++ */
++SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
++{
++	struct sched_param lp = { .sched_priority = 0 };
++	struct task_struct *p;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++
++	scoped_guard (rcu) {
++		int retval;
++
++		p = find_process_by_pid(pid);
++		if (!p)
++			return -EINVAL;
++
++		retval = security_task_getscheduler(p);
++		if (retval)
++			return retval;
++
++		if (task_has_rt_policy(p))
++			lp.sched_priority = p->rt_priority;
++	}
++
++	/*
++	 * This one might sleep, we cannot do it with a spinlock held ...
++	 */
++	return copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
++}
++
++/*
++ * Copy the kernel size attribute structure (which might be larger
++ * than what user-space knows about) to user-space.
++ *
++ * Note that all cases are valid: user-space buffer can be larger or
++ * smaller than the kernel-space buffer. The usual case is that both
++ * have the same size.
++ */
++static int
++sched_attr_copy_to_user(struct sched_attr __user *uattr,
++			struct sched_attr *kattr,
++			unsigned int usize)
++{
++	unsigned int ksize = sizeof(*kattr);
++
++	if (!access_ok(uattr, usize))
++		return -EFAULT;
++
++	/*
++	 * sched_getattr() ABI forwards and backwards compatibility:
++	 *
++	 * If usize == ksize then we just copy everything to user-space and all is good.
++	 *
++	 * If usize < ksize then we only copy as much as user-space has space for,
++	 * this keeps ABI compatibility as well. We skip the rest.
++	 *
++	 * If usize > ksize then user-space is using a newer version of the ABI,
++	 * which part the kernel doesn't know about. Just ignore it - tooling can
++	 * detect the kernel's knowledge of attributes from the attr->size value
++	 * which is set to ksize in this case.
++	 */
++	kattr->size = min(usize, ksize);
++
++	if (copy_to_user(uattr, kattr, kattr->size))
++		return -EFAULT;
++
++	return 0;
++}
++
++/**
++ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @usize: sizeof(attr) for fwd/bwd comp.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
++		unsigned int, usize, unsigned int, flags)
++{
++	struct sched_attr kattr = { };
++	struct task_struct *p;
++	int retval;
++
++	if (!uattr || pid < 0 || usize > PAGE_SIZE ||
++	    usize < SCHED_ATTR_SIZE_VER0 || flags)
++		return -EINVAL;
++
++	scoped_guard (rcu) {
++		p = find_process_by_pid(pid);
++		if (!p)
++			return -ESRCH;
++
++		retval = security_task_getscheduler(p);
++		if (retval)
++			return retval;
++
++		kattr.sched_policy = p->policy;
++		if (p->sched_reset_on_fork)
++			kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		get_params(p, &kattr);
++		kattr.sched_flags &= SCHED_FLAG_ALL;
++
++#ifdef CONFIG_UCLAMP_TASK
++		kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
++		kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
++#endif
++	}
++
++	return sched_attr_copy_to_user(uattr, &kattr, usize);
++}
++
++#ifdef CONFIG_SMP
++int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
++{
++	return 0;
++}
++#endif
++
++static int
++__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
++{
++	int retval;
++	cpumask_var_t cpus_allowed, new_mask;
++
++	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
++		return -ENOMEM;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
++		retval = -ENOMEM;
++		goto out_free_cpus_allowed;
++	}
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	cpumask_and(new_mask, ctx->new_mask, cpus_allowed);
++
++	ctx->new_mask = new_mask;
++	ctx->flags |= SCA_CHECK;
++
++	retval = __set_cpus_allowed_ptr(p, ctx);
++	if (retval)
++		goto out_free_new_mask;
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	if (!cpumask_subset(new_mask, cpus_allowed)) {
++		/*
++		 * We must have raced with a concurrent cpuset
++		 * update. Just reset the cpus_allowed to the
++		 * cpuset's cpus_allowed
++		 */
++		cpumask_copy(new_mask, cpus_allowed);
++
++		/*
++		 * If SCA_USER is set, a 2nd call to __set_cpus_allowed_ptr()
++		 * will restore the previous user_cpus_ptr value.
++		 *
++		 * In the unlikely event a previous user_cpus_ptr exists,
++		 * we need to further restrict the mask to what is allowed
++		 * by that old user_cpus_ptr.
++		 */
++		if (unlikely((ctx->flags & SCA_USER) && ctx->user_mask)) {
++			bool empty = !cpumask_and(new_mask, new_mask,
++						  ctx->user_mask);
++
++			if (WARN_ON_ONCE(empty))
++				cpumask_copy(new_mask, cpus_allowed);
++		}
++		__set_cpus_allowed_ptr(p, ctx);
++		retval = -EINVAL;
++	}
++
++out_free_new_mask:
++	free_cpumask_var(new_mask);
++out_free_cpus_allowed:
++	free_cpumask_var(cpus_allowed);
++	return retval;
++}
++
++long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
++{
++	struct affinity_context ac;
++	struct cpumask *user_mask;
++	int retval;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	if (p->flags & PF_NO_SETAFFINITY)
++		return -EINVAL;
++
++	if (!check_same_owner(p)) {
++		guard(rcu)();
++		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE))
++			return -EPERM;
++	}
++
++	retval = security_task_setscheduler(p);
++	if (retval)
++		return retval;
++
++	/*
++	 * With non-SMP configs, user_cpus_ptr/user_mask isn't used and
++	 * alloc_user_cpus_ptr() returns NULL.
++	 */
++	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
++	if (user_mask) {
++		cpumask_copy(user_mask, in_mask);
++	} else if (IS_ENABLED(CONFIG_SMP)) {
++		return -ENOMEM;
++	}
++
++	ac = (struct affinity_context){
++		.new_mask  = in_mask,
++		.user_mask = user_mask,
++		.flags     = SCA_USER,
++	};
++
++	retval = __sched_setaffinity(p, &ac);
++	kfree(ac.user_mask);
++
++	return retval;
++}
++
++static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
++			     struct cpumask *new_mask)
++{
++	if (len < cpumask_size())
++		cpumask_clear(new_mask);
++	else if (len > cpumask_size())
++		len = cpumask_size();
++
++	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
++}
++
++/**
++ * sys_sched_setaffinity - set the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to the new CPU mask
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	cpumask_var_t new_mask;
++	int retval;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
++	if (retval == 0)
++		retval = sched_setaffinity(pid, new_mask);
++	free_cpumask_var(new_mask);
++	return retval;
++}
++
++long sched_getaffinity(pid_t pid, cpumask_t *mask)
++{
++	struct task_struct *p;
++	int retval;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -ESRCH;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		return retval;
++
++	guard(raw_spinlock_irqsave)(&p->pi_lock);
++	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
++
++	return retval;
++}
++
++/**
++ * sys_sched_getaffinity - get the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to hold the current CPU mask
++ *
++ * Return: size of CPU mask copied to user_mask_ptr on success. An
++ * error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	int ret;
++	cpumask_var_t mask;
++
++	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
++		return -EINVAL;
++	if (len & (sizeof(unsigned long)-1))
++		return -EINVAL;
++
++	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	ret = sched_getaffinity(pid, mask);
++	if (ret == 0) {
++		unsigned int retlen = min(len, cpumask_size());
++
++		if (copy_to_user(user_mask_ptr, cpumask_bits(mask), retlen))
++			ret = -EFAULT;
++		else
++			ret = retlen;
++	}
++	free_cpumask_var(mask);
++
++	return ret;
++}
++
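++/*
++ * do_sched_yield() honours the sched_yield_type knob: 0 makes
++ * sys_sched_yield() a no-op; otherwise the current task is requeued on its
++ * run queue where appropriate and the CPU is yielded via schedule().
++ */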
++static void do_sched_yield(void)
++{
++	struct rq *rq;
++	struct rq_flags rf;
++	struct task_struct *p;
++
++	if (!sched_yield_type)
++		return;
++
++	rq = this_rq_lock_irq(&rf);
++
++	schedstat_inc(rq->yld_count);
++
++	p = current;
++	if (rt_task(p)) {
++		if (task_on_rq_queued(p))
++			requeue_task(p, rq);
++	} else if (rq->nr_running > 1) {
++		do_sched_yield_type_1(p, rq);
++		if (task_on_rq_queued(p))
++			requeue_task(p, rq);
++	}
++
++	preempt_disable();
++	raw_spin_unlock_irq(&rq->lock);
++	sched_preempt_enable_no_resched();
++
++	schedule();
++}
++
++/**
++ * sys_sched_yield - yield the current processor to other threads.
++ *
++ * This function yields the current CPU to other tasks. If there are no
++ * other threads running on this CPU then this function will return.
++ *
++ * Return: 0.
++ */
++SYSCALL_DEFINE0(sched_yield)
++{
++	do_sched_yield();
++	return 0;
++}
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++	if (should_resched(0)) {
++		preempt_schedule_common();
++		return 1;
++	}
++	/*
++	 * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
++	 * whether the current CPU is in an RCU read-side critical section,
++	 * so the tick can report quiescent states even for CPUs looping
++	 * in kernel context.  In contrast, in non-preemptible kernels,
++	 * RCU readers leave no in-memory hints, which means that CPU-bound
++	 * processes executing in kernel context might never report an
++	 * RCU quiescent state.  Therefore, the following code causes
++	 * cond_resched() to report a quiescent state, but only when RCU
++	 * is in urgent need of one.
++	 */
++#ifndef CONFIG_PREEMPT_RCU
++	rcu_all_qs();
++#endif
++	return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled	__cond_resched
++#define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled	__cond_resched
++#define might_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++	klp_sched_try_switch();
++	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_might_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held(lock);
++
++	if (spin_needbreak(lock) || resched) {
++		spin_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		spin_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_read(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		read_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		read_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_write(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		write_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		write_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * VOLUNTARY:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- __cond_resched
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * FULL:
++ *   cond_resched               <- RET0
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- preempt_schedule
++ *   preempt_schedule_notrace   <- preempt_schedule_notrace
++ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ */
++
++enum {
++	preempt_dynamic_undefined = -1,
++	preempt_dynamic_none,
++	preempt_dynamic_voluntary,
++	preempt_dynamic_full,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++	if (!strcmp(str, "none"))
++		return preempt_dynamic_none;
++
++	if (!strcmp(str, "voluntary"))
++		return preempt_dynamic_voluntary;
++
++	if (!strcmp(str, "full"))
++		return preempt_dynamic_full;
++
++	return -EINVAL;
++}
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f)	static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_disable(f)	static_key_disable(&sk_dynamic_##f.key)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
++
++static DEFINE_MUTEX(sched_dynamic_mutex);
++static bool klp_override;
++
++static void __sched_dynamic_update(int mode)
++{
++	/*
++	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++	 * the ZERO state, which is invalid.
++	 */
++	if (!klp_override)
++		preempt_dynamic_enable(cond_resched);
++	preempt_dynamic_enable(might_resched);
++	preempt_dynamic_enable(preempt_schedule);
++	preempt_dynamic_enable(preempt_schedule_notrace);
++	preempt_dynamic_enable(irqentry_exit_cond_resched);
++
++	switch (mode) {
++	case preempt_dynamic_none:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: none\n");
++		break;
++
++	case preempt_dynamic_voluntary:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_enable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: voluntary\n");
++		break;
++
++	case preempt_dynamic_full:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_enable(preempt_schedule);
++		preempt_dynamic_enable(preempt_schedule_notrace);
++		preempt_dynamic_enable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: full\n");
++		break;
++	}
++
++	preempt_dynamic_mode = mode;
++}
++
++void sched_dynamic_update(int mode)
++{
++	mutex_lock(&sched_dynamic_mutex);
++	__sched_dynamic_update(mode);
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
++
++static int klp_cond_resched(void)
++{
++	__klp_sched_try_switch();
++	return __cond_resched();
++}
++
++void sched_dynamic_klp_enable(void)
++{
++	mutex_lock(&sched_dynamic_mutex);
++
++	klp_override = true;
++	static_call_update(cond_resched, klp_cond_resched);
++
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++void sched_dynamic_klp_disable(void)
++{
++	mutex_lock(&sched_dynamic_mutex);
++
++	klp_override = false;
++	__sched_dynamic_update(preempt_dynamic_mode);
++
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
++
++
++static int __init setup_preempt_mode(char *str)
++{
++	int mode = sched_dynamic_mode(str);
++	if (mode < 0) {
++		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++		return 0;
++	}
++
++	sched_dynamic_update(mode);
++	return 1;
++}
++__setup("preempt=", setup_preempt_mode);
++
++static void __init preempt_dynamic_init(void)
++{
++	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++			sched_dynamic_update(preempt_dynamic_none);
++		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++			sched_dynamic_update(preempt_dynamic_voluntary);
++		} else {
++			/* Default static call setting, nothing to do */
++			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++			preempt_dynamic_mode = preempt_dynamic_full;
++			pr_info("Dynamic Preempt: full\n");
++		}
++	}
++}
++
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++	bool preempt_model_##mode(void)						 \
++	{									 \
++		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++		return preempt_dynamic_mode == preempt_dynamic_##mode;		 \
++	}									 \
++	EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
++
++#else /* !CONFIG_PREEMPT_DYNAMIC */
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* #ifdef CONFIG_PREEMPT_DYNAMIC */
++
++/**
++ * yield - yield the current processor to other threads.
++ *
++ * Do not ever use this function, there's a 99% chance you're doing it wrong.
++ *
++ * The scheduler is at all times free to pick the calling task as the most
++ * eligible task to run, if removing the yield() call from your code breaks
++ * it, it's already broken.
++ *
++ * Typical broken usage is:
++ *
++ * while (!event)
++ * 	yield();
++ *
++ * where one assumes that yield() will let 'the other' process run that will
++ * make event true. If the current task is a SCHED_FIFO task that will never
++ * happen. Never use yield() as a progress guarantee!!
++ *
++ * If you want to use yield() to wait for something, use wait_event().
++ * If you want to use yield() to be 'nice' for others, use cond_resched().
++ * If you still want to use yield(), do not!
++ */
++void __sched yield(void)
++{
++	set_current_state(TASK_RUNNING);
++	do_sched_yield();
++}
++EXPORT_SYMBOL(yield);
++
++/**
++ * yield_to - yield the current processor to another thread in
++ * your thread group, or accelerate that thread toward the
++ * processor it's on.
++ * @p: target task
++ * @preempt: whether task preemption is allowed or not
++ *
++ * It's the caller's job to ensure that the target task struct
++ * can't go away on us before we can do any checks.
++ *
++ * In Alt schedule FW, yield_to is not supported.
++ *
++ * Return:
++ *	true (>0) if we indeed boosted the target task.
++ *	false (0) if we failed to boost the target.
++ *	-ESRCH if there's no task to yield to.
++ */
++int __sched yield_to(struct task_struct *p, bool preempt)
++{
++	return 0;
++}
++EXPORT_SYMBOL_GPL(yield_to);
++
++int io_schedule_prepare(void)
++{
++	int old_iowait = current->in_iowait;
++
++	current->in_iowait = 1;
++	blk_flush_plug(current->plug, true);
++	return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++	current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++	int token;
++	long ret;
++
++	token = io_schedule_prepare();
++	ret = schedule_timeout(timeout);
++	io_schedule_finish(token);
++
++	return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++	int token;
++
++	token = io_schedule_prepare();
++	schedule();
++	io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
++
++/**
++ * sys_sched_get_priority_max - return maximum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the maximum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = MAX_RT_PRIO - 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
++/**
++ * sys_sched_get_priority_min - return minimum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the minimum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
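++/*
++ * In the alt scheduler every policy reports the same default timeslice,
++ * sysctl_sched_base_slice; there is no per-task round-robin interval.
++ */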
++static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
++{
++	struct task_struct *p;
++	int retval;
++
++	alt_sched_debug();
++
++	if (pid < 0)
++		return -EINVAL;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -EINVAL;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		return retval;
++
++	*t = ns_to_timespec64(sysctl_sched_base_slice);
++	return 0;
++}
++
++/**
++ * sys_sched_rr_get_interval - return the default timeslice of a process.
++ * @pid: pid of the process.
++ * @interval: userspace pointer to the timeslice value.
++ *
++ *
++ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
++ * an error code.
++ */
++SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
++		struct __kernel_timespec __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_timespec64(&t, interval);
++
++	return retval;
++}
++
++#ifdef CONFIG_COMPAT_32BIT_TIME
++SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
++		struct old_timespec32 __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_old_timespec32(&t, interval);
++	return retval;
++}
++#endif
++
++void sched_show_task(struct task_struct *p)
++{
++	unsigned long free = 0;
++	int ppid;
++
++	if (!try_get_task_stack(p))
++		return;
++
++	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++	if (task_is_running(p))
++		pr_cont("  running task    ");
++#ifdef CONFIG_DEBUG_STACK_USAGE
++	free = stack_not_used(p);
++#endif
++	ppid = 0;
++	rcu_read_lock();
++	if (pid_alive(p))
++		ppid = task_pid_nr(rcu_dereference(p->real_parent));
++	rcu_read_unlock();
++	pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n",
++		free, task_pid_nr(p), task_tgid_nr(p),
++		ppid, read_task_thread_flags(p));
++
++	print_worker_info(KERN_INFO, p);
++	print_stop_info(KERN_INFO, p);
++	show_stack(p, NULL, KERN_INFO);
++	put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/* no filter, everything matches */
++	if (!state_filter)
++		return true;
++
++	/* filter, but doesn't match */
++	if (!(state & state_filter))
++		return false;
++
++	/*
++	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++	 * TASK_KILLABLE).
++	 */
++	if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
++		return false;
++
++	return true;
++}
++
++
++void show_state_filter(unsigned int state_filter)
++{
++	struct task_struct *g, *p;
++
++	rcu_read_lock();
++	for_each_process_thread(g, p) {
++		/*
++		 * reset the NMI-timeout, listing all files on a slow
++		 * console might take a lot of time:
++		 * Also, reset softlockup watchdogs on all CPUs, because
++		 * another CPU might be blocked waiting for us to process
++		 * an IPI.
++		 */
++		touch_nmi_watchdog();
++		touch_all_softlockup_watchdogs();
++		if (state_filter_match(state_filter, p))
++			sched_show_task(p);
++	}
++
++#ifdef CONFIG_SCHED_DEBUG
++	/* TODO: Alt schedule FW should support this
++	if (!state_filter)
++		sysrq_sched_debug_show();
++	*/
++#endif
++	rcu_read_unlock();
++	/*
++	 * Only show locks if all tasks are dumped:
++	 */
++	if (!state_filter)
++		debug_show_all_locks();
++}
++
++void dump_cpu_task(int cpu)
++{
++	if (cpu == smp_processor_id() && in_hardirq()) {
++		struct pt_regs *regs;
++
++		regs = get_irq_regs();
++		if (regs) {
++			show_regs(regs);
++			return;
++		}
++	}
++
++	if (trigger_single_cpu_backtrace(cpu))
++		return;
++
++	pr_info("Task dump for CPU %d:\n", cpu);
++	sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++#ifdef CONFIG_SMP
++	struct affinity_context ac = (struct affinity_context) {
++		.new_mask  = cpumask_of(cpu),
++		.flags     = 0,
++	};
++#endif
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	__sched_fork(0, idle);
++
++	raw_spin_lock_irqsave(&idle->pi_lock, flags);
++	raw_spin_lock(&rq->lock);
++
++	idle->last_ran = rq->clock_task;
++	idle->__state = TASK_RUNNING;
++	/*
++	 * PF_KTHREAD should already be set at this point; regardless, make it
++	 * look like a proper per-CPU kthread.
++	 */
++	idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
++	kthread_set_per_cpu(idle, cpu);
++
++	sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++	/*
++	 * It's possible that init_idle() gets called multiple times on a task,
++	 * in that case do_set_cpus_allowed() will not do the right thing.
++	 *
++	 * And since this is boot we can forgo the serialisation.
++	 */
++	set_cpus_allowed_common(idle, &ac);
++#endif
++
++	/* Silence PROVE_RCU */
++	rcu_read_lock();
++	__set_task_cpu(idle, cpu);
++	rcu_read_unlock();
++
++	rq->idle = idle;
++	rcu_assign_pointer(rq->curr, idle);
++	idle->on_cpu = 1;
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++	/* Set the preempt count _outside_ the spinlocks! */
++	init_idle_preempt_count(idle, cpu);
++
++	ftrace_graph_init_idle_task(idle, cpu);
++	vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++			      const struct cpumask __maybe_unused *trial)
++{
++	return 1;
++}
++
++int task_can_attach(struct task_struct *p)
++{
++	int ret = 0;
++
++	/*
++	 * Kthreads which disallow setaffinity shouldn't be moved
++	 * to a new cpuset; we don't want to change their CPU
++	 * affinity and isolating such threads by their set of
++	 * allowed nodes is unnecessary.  Thus, cpusets are not
++	 * applicable for such threads.  This prevents checking for
++	 * success of set_cpus_allowed_ptr() on all attached tasks
++	 * before cpus_mask may be changed.
++	 */
++	if (p->flags & PF_NO_SETAFFINITY)
++		ret = -EINVAL;
++
++	return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Ensures that the idle task is using init_mm right before its CPU goes
++ * offline.
++ */
++void idle_task_exit(void)
++{
++	struct mm_struct *mm = current->active_mm;
++
++	BUG_ON(current != this_rq()->idle);
++
++	if (mm != &init_mm) {
++		switch_mm(mm, &init_mm, current);
++		finish_arch_post_lock_switch();
++	}
++
++	/* finish_cpu(), as run on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++	struct task_struct *p = arg;
++	struct rq *rq = this_rq();
++	struct rq_flags rf;
++	int cpu;
++
++	raw_spin_lock_irq(&p->pi_lock);
++	rq_lock(rq, &rf);
++
++	update_rq_clock(rq);
++
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		cpu = select_fallback_rq(rq->cpu, p);
++		rq = __migrate_task(rq, p, cpu);
++	}
++
++	rq_unlock(rq, &rf);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	put_task_struct(p);
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE, i.e. when !cpu_active(), but it
++ * only takes effect while the CPU is going down.
++ */
++static void balance_push(struct rq *rq)
++{
++	struct task_struct *push_task = rq->curr;
++
++	lockdep_assert_held(&rq->lock);
++
++	/*
++	 * Ensure the thing is persistent until balance_push_set(.on = false);
++	 */
++	rq->balance_callback = &balance_push_callback;
++
++	/*
++	 * Only active while going offline and when invoked on the outgoing
++	 * CPU.
++	 */
++	if (!cpu_dying(rq->cpu) || rq != this_rq())
++		return;
++
++	/*
++	 * Both the cpu-hotplug and stop task are in this case and are
++	 * required to complete the hotplug process.
++	 */
++	if (kthread_is_per_cpu(push_task) ||
++	    is_migration_disabled(push_task)) {
++
++		/*
++		 * If this is the idle task on the outgoing CPU try to wake
++		 * up the hotplug control thread which might wait for the
++		 * last task to vanish. The rcuwait_active() check is
++		 * accurate here because the waiter is pinned on this CPU
++		 * and obviously can't be running in parallel.
++		 *
++		 * On RT kernels this also has to check whether there are
++		 * pinned and scheduled out tasks on the runqueue. They
++		 * need to leave the migrate disabled section first.
++		 */
++		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++		    rcuwait_active(&rq->hotplug_wait)) {
++			raw_spin_unlock(&rq->lock);
++			rcuwait_wake_up(&rq->hotplug_wait);
++			raw_spin_lock(&rq->lock);
++		}
++		return;
++	}
++
++	get_task_struct(push_task);
++	/*
++	 * Temporarily drop rq->lock such that we can wake-up the stop task.
++	 * Both preemption and IRQs are still disabled.
++	 */
++	preempt_disable();
++	raw_spin_unlock(&rq->lock);
++	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++			    this_cpu_ptr(&push_work));
++	preempt_enable();
++	/*
++	 * At this point need_resched() is true and we'll take the loop in
++	 * schedule(). The next pick is obviously going to be the stop task,
++	 * which is kthread_is_per_cpu() and will push this task away.
++	 */
++	raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct rq_flags rf;
++
++	rq_lock_irqsave(rq, &rf);
++	if (on) {
++		WARN_ON_ONCE(rq->balance_callback);
++		rq->balance_callback = &balance_push_callback;
++	} else if (rq->balance_callback == &balance_push_callback) {
++		rq->balance_callback = NULL;
++	}
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPU's hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++	struct rq *rq = this_rq();
++
++	rcuwait_wait_event(&rq->hotplug_wait,
++			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++			   TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++	if (rq->online) {
++		update_rq_clock(rq);
++		rq->online = false;
++	}
++}
++
++static void set_rq_online(struct rq *rq)
++{
++	if (!rq->online)
++		rq->online = true;
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask.  If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore them to their original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++	if (cpuhp_tasks_frozen) {
++		/*
++		 * num_cpus_frozen tracks how many CPUs are involved in the
++		 * suspend/resume sequence. As long as this is not the last online
++		 * operation in the resume sequence, just build a single sched
++		 * domain, ignoring cpusets.
++		 */
++		partition_sched_domains(1, NULL, NULL);
++		if (--num_cpus_frozen)
++			return;
++		/*
++		 * This is the last CPU online operation. So fall through and
++		 * restore the original sched domains by considering the
++		 * cpuset configurations.
++		 */
++		cpuset_force_rebuild();
++	}
++
++	cpuset_update_active_cpus();
++}
++
++static int cpuset_cpu_inactive(unsigned int cpu)
++{
++	if (!cpuhp_tasks_frozen) {
++		cpuset_update_active_cpus();
++	} else {
++		num_cpus_frozen++;
++		partition_sched_domains(1, NULL, NULL);
++	}
++	return 0;
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/*
++	 * Clear the balance_push callback and prepare to schedule
++	 * regular tasks.
++	 */
++	balance_push_set(cpu, false);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going up, increment the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
++		static_branch_inc_cpuslocked(&sched_smt_present);
++#endif
++	set_cpu_active(cpu, true);
++
++	if (sched_smp_initialized)
++		cpuset_cpu_active();
++
++	/*
++	 * Put the rq online, if not already. This happens:
++	 *
++	 * 1) In the early boot process, because we build the real domains
++	 *    after all cpus have been brought up.
++	 *
++	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++	 *    domains.
++	 */
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_online(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++	int ret;
++
++	set_cpu_active(cpu, false);
++
++	/*
++	 * From this point forward, this CPU will refuse to run any task that
++	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++	 * push those tasks away until this gets cleared, see
++	 * sched_cpu_dying().
++	 */
++	balance_push_set(cpu, true);
++
++	/*
++	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++	 * users of this state to go away such that all new such users will
++	 * observe it.
++	 *
++	 * Specifically, we rely on ttwu to no longer target this CPU, see
++	 * ttwu_queue_cond() and is_cpu_allowed().
++	 *
++	 * Synchronize before parking the smpboot threads to take care of the RCU boost case.
++	 */
++	synchronize_rcu();
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_offline(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going down, decrement the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_dec_cpuslocked(&sched_smt_present);
++		if (!static_branch_likely(&sched_smt_present))
++			cpumask_clear(sched_sg_idle_mask);
++	}
++#endif
++
++	if (!sched_smp_initialized)
++		return 0;
++
++	ret = cpuset_cpu_inactive(cpu);
++	if (ret) {
++		balance_push_set(cpu, false);
++		set_cpu_active(cpu, true);
++		return ret;
++	}
++
++	return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++	sched_rq_cpu_starting(cpu);
++	sched_tick_start(cpu);
++	return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++	balance_hotplug_wait();
++	return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the teardown thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++	long delta = calc_load_fold_active(rq, 1);
++
++	if (delta)
++		atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++	struct task_struct *g, *p;
++	int cpu = cpu_of(rq);
++
++	lockdep_assert_held(&rq->lock);
++
++	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++	for_each_process_thread(g, p) {
++		if (task_cpu(p) != cpu)
++			continue;
++
++		if (!task_on_rq_queued(p))
++			continue;
++
++		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++	}
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/* Handle pending wakeups and then migrate everything off */
++	sched_tick_stop(cpu);
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++		WARN(true, "Dying CPU not properly vacated!");
++		dump_rq_tasks(rq, KERN_WARNING);
++	}
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	calc_load_migrate(rq);
++	hrtick_clear(rq);
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
++static void sched_init_topology_cpumask_early(void)
++{
++	int cpu;
++	cpumask_t *tmp;
++
++	for_each_possible_cpu(cpu) {
++		/* init topo masks */
++		tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++		cpumask_copy(tmp, cpu_possible_mask);
++		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++	}
++}
++
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++	if (cpumask_and(topo, topo, mask)) {					\
++		cpumask_copy(topo, mask);					\
++		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
++		       cpu, (topo++)->bits[0]);					\
++	}									\
++	if (!last)								\
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(mask),	\
++				  nr_cpumask_bits);
++
++static void sched_init_topology_cpumask(void)
++{
++	int cpu;
++	cpumask_t *topo;
++
++	for_each_online_cpu(cpu) {
++		/* take the chance to reset the time slice for idle tasks */
++		cpu_rq(cpu)->idle->time_slice = sysctl_sched_base_slice;
++
++		topo = per_cpu(sched_cpu_topo_masks, cpu);
++
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
++				  nr_cpumask_bits);
++#ifdef CONFIG_SCHED_SMT
++		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++		TOPOLOGY_CPUMASK(cluster, topology_cluster_cpumask(cpu), false);
++
++		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++		per_cpu(sched_cpu_llc_mask, cpu) = topo;
++		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++		       cpu, per_cpu(sd_llc_id, cpu),
++		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++			      per_cpu(sched_cpu_topo_masks, cpu)));
++	}
++}
++#endif
++
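The loop above leaves each CPU with a small per-CPU array of cpumasks ordered from the closest topology level (SMT siblings) outwards, with sched_cpu_llc_mask pointing at the LLC level inside that array and sched_cpu_topo_end_mask marking one past the last populated level. A rough sketch of how such a level array can be consumed when looking for a nearby idle CPU follows; the helper name and the idle_mask parameter are illustrative only, the patch's real selection code lives elsewhere in alt_core.c:

	/* Illustrative only: scan topology levels from nearest to farthest. */
	static int sketch_best_idle_cpu(int cpu, const struct cpumask *idle_mask)
	{
		cpumask_t tmp;
		cpumask_t *level = per_cpu(sched_cpu_topo_masks, cpu);
		cpumask_t *end   = per_cpu(sched_cpu_topo_end_mask, cpu);

		for (; level < end; level++)
			if (cpumask_and(&tmp, level, idle_mask))
				return cpumask_first(&tmp);

		return nr_cpu_ids;	/* no idle CPU found at any level */
	}
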
++void __init sched_init_smp(void)
++{
++	/* Move init over to a non-isolated CPU */
++	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++		BUG();
++	current->flags &= ~PF_NO_SETAFFINITY;
++
++	sched_init_topology_cpumask();
++
++	sched_smp_initialized = true;
++}
++
++static int __init migration_init(void)
++{
++	sched_cpu_starting(smp_processor_id());
++	return 0;
++}
++early_initcall(migration_init);
++
++#else
++void __init sched_init_smp(void)
++{
++	cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++	return in_lock_functions(addr) ||
++		(addr >= (unsigned long)__sched_text_start
++		&& addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/*
++ * Default task group.
++ * Every task in system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __ro_after_init;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++	int i;
++	struct rq *rq;
++#ifdef CONFIG_SCHED_SMT
++	struct balance_arg balance_arg = {.cpumask = sched_sg_idle_mask, .active = 0};
++#endif
++
++	printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
++			 " by Alfred Chen.\n");
++
++	wait_bit_init();
++
++#ifdef CONFIG_SMP
++	for (i = 0; i < SCHED_QUEUE_BITS; i++)
++		cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++	task_group_cache = KMEM_CACHE(task_group, 0);
++
++	list_add(&root_task_group.list, &task_groups);
++	INIT_LIST_HEAD(&root_task_group.children);
++	INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++	for_each_possible_cpu(i) {
++		rq = cpu_rq(i);
++
++		sched_queue_init(&rq->queue);
++		rq->prio = IDLE_TASK_SCHED_PRIO;
++#ifdef CONFIG_SCHED_PDS
++		rq->prio_idx = rq->prio;
++#endif
++
++		raw_spin_lock_init(&rq->lock);
++		rq->nr_running = rq->nr_uninterruptible = 0;
++		rq->calc_load_active = 0;
++		rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++		rq->online = false;
++		rq->cpu = i;
++
++#ifdef CONFIG_SCHED_SMT
++		rq->sg_balance_arg = balance_arg;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++		rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++		rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++		rq->nr_switches = 0;
++
++		hrtick_rq_init(rq);
++		atomic_set(&rq->nr_iowait, 0);
++
++		zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
++	}
++#ifdef CONFIG_SMP
++	/* Set rq->online for cpu 0 */
++	cpu_rq(0)->online = true;
++#endif
++	/*
++	 * The boot idle thread does lazy MMU switching as well:
++	 */
++	mmgrab(&init_mm);
++	enter_lazy_tlb(&init_mm, current);
++
++	/*
++	 * The idle task doesn't need the kthread struct to function, but it
++	 * is dressed up as a per-CPU kthread and thus needs to play the part
++	 * if we want to avoid special-casing it in code that deals with per-CPU
++	 * kthreads.
++	 */
++	WARN_ON(!set_kthread_struct(current));
++
++	/*
++	 * Make us the idle thread. Technically, schedule() should not be
++	 * called from this thread, however somewhere below it might be,
++	 * but because we are the idle thread, we just pick up running again
++	 * when this runqueue becomes "idle".
++	 */
++	init_idle(current, smp_processor_id());
++
++	calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++	idle_thread_set_boot_cpu();
++	balance_push_set(smp_processor_id(), false);
++
++	sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++	preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++	unsigned int state = get_current_state();
++	/*
++	 * Blocking primitives will set (and therefore destroy) current->state,
++	 * since we will exit with TASK_RUNNING make sure we enter with it,
++	 * otherwise we will destroy state.
++	 */
++	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++			"do not call blocking ops when !TASK_RUNNING; "
++			"state=%x set at [<%p>] %pS\n", state,
++			(void *)current->task_state_change,
++			(void *)current->task_state_change);
++
++	__might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++	if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++		return;
++
++	if (preempt_count() == preempt_offset)
++		return;
++
++	pr_err("Preemption disabled at:");
++	print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++	unsigned int nested = preempt_count();
++
++	nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++	return nested == offsets;
++}
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++	/* Ratelimiting timestamp: */
++	static unsigned long prev_jiffy;
++
++	unsigned long preempt_disable_ip;
++
++	/* WARN_ON_ONCE() by default, no rate limit required: */
++	rcu_sleep_check();
++
++	if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++	     !is_idle_task(current) && !current->non_block_count) ||
++	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++	    oops_in_progress)
++		return;
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	/* Save this before calling printk(), since that will clobber it: */
++	preempt_disable_ip = get_preempt_disable_ip(current);
++
++	pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++	       file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), current->non_block_count,
++	       current->pid, current->comm);
++	pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++	       offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++	if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++		pr_err("RCU nest depth: %d, expected: %u\n",
++		       rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++	}
++
++	if (task_stack_end_corrupted(current))
++		pr_emerg("Thread overran stack, or stack corrupted\n");
++
++	debug_show_held_locks(current);
++	if (irqs_disabled())
++		print_irqtrace_events(current);
++
++	print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++				 preempt_disable_ip);
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > preempt_offset)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++			in_atomic(), irqs_disabled(),
++			current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (is_migration_disabled(current))
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > 0)
++		return;
++
++	if (current->migration_flags & MDF_FORCE_ENABLED)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
++	       current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++	struct task_struct *g, *p;
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++	};
++
++	read_lock(&tasklist_lock);
++	for_each_process_thread(g, p) {
++		/*
++		 * Only normalize user tasks:
++		 */
++		if (p->flags & PF_KTHREAD)
++			continue;
++
++		schedstat_set(p->stats.wait_start,  0);
++		schedstat_set(p->stats.sleep_start, 0);
++		schedstat_set(p->stats.block_start, 0);
++
++		if (!rt_task(p)) {
++			/*
++			 * Renice negative nice level userspace
++			 * tasks back to 0:
++			 */
++			if (task_nice(p) < 0)
++				set_user_nice(p, 0);
++			continue;
++		}
++
++		__sched_setscheduler(p, &attr, false, false);
++	}
++	read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for kdb.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++	return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++	kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++	sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++	/*
++	 * We have to wait for yet another RCU grace period to expire, as
++	 * print_cfs_stats() might run concurrently.
++	 */
++	call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++	struct task_group *tg;
++
++	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++	if (!tg)
++		return ERR_PTR(-ENOMEM);
++
++	return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* rcu callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++	/* Now it should be safe to free those cfs_rqs: */
++	sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++	/* Wait for possible concurrent references to cfs_rqs to complete: */
++	call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++	return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++	struct task_group *parent = css_tg(parent_css);
++	struct task_group *tg;
++
++	if (!parent) {
++		/* This is early initialization for the top cgroup */
++		return &root_task_group.css;
++	}
++
++	tg = sched_create_group(parent);
++	if (IS_ERR(tg))
++		return ERR_PTR(-ENOMEM);
++	return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++	struct task_group *parent = css_tg(css->parent);
++
++	if (parent)
++		sched_online_group(tg, parent);
++	return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	/*
++	 * Relies on the RCU grace period between css_released() and this.
++	 */
++	sched_unregister_group(tg);
++}
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++	return 0;
++}
++#endif
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++static DEFINE_MUTEX(shares_mutex);
++
++static int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++	/*
++	 * We can't change the weight of the root cgroup.
++	 */
++	if (&root_task_group == tg)
++		return -EINVAL;
++
++	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
++
++	mutex_lock(&shares_mutex);
++	if (tg->shares == shares)
++		goto done;
++
++	tg->shares = shares;
++done:
++	mutex_unlock(&shares_mutex);
++	return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cftype, u64 shareval)
++{
++	if (shareval > scale_load_down(ULONG_MAX))
++		shareval = MAX_SHARES;
++	return sched_group_set_shares(css_tg(css), scale_load(shareval));
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	struct task_group *tg = css_tg(css);
++
++	return (u64) scale_load_down(tg->shares);
++}
++#endif
++
++static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, s64 cfs_quota_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 cfs_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, u64 cfs_burst_us)
++{
++	return 0;
++}
++
++static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 val)
++{
++	return 0;
++}
++
++static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 rt_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++
++static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	{
++		.name = "shares",
++		.read_u64 = cpu_shares_read_u64,
++		.write_u64 = cpu_shares_write_u64,
++	},
++#endif
++	{
++		.name = "cfs_quota_us",
++		.read_s64 = cpu_cfs_quota_read_s64,
++		.write_s64 = cpu_cfs_quota_write_s64,
++	},
++	{
++		.name = "cfs_period_us",
++		.read_u64 = cpu_cfs_period_read_u64,
++		.write_u64 = cpu_cfs_period_write_u64,
++	},
++	{
++		.name = "cfs_burst_us",
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++	{
++		.name = "stat",
++		.seq_show = cpu_cfs_stat_show,
++	},
++	{
++		.name = "stat.local",
++		.seq_show = cpu_cfs_local_stat_show,
++	},
++	{
++		.name = "rt_runtime_us",
++		.read_s64 = cpu_rt_runtime_read,
++		.write_s64 = cpu_rt_runtime_write,
++	},
++	{
++		.name = "rt_period_us",
++		.read_u64 = cpu_rt_period_read_uint,
++		.write_u64 = cpu_rt_period_write_uint,
++	},
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
++	{ }	/* Terminate */
++};
++
++static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cft, u64 weight)
++{
++	return 0;
++}
++
++static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
++				    struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
++				     struct cftype *cft, s64 nice)
++{
++	return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 idle)
++{
++	return 0;
++}
++
++static int cpu_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_max_write(struct kernfs_open_file *of,
++			     char *buf, size_t nbytes, loff_t off)
++{
++	return nbytes;
++}
++
++static struct cftype cpu_files[] = {
++	{
++		.name = "weight",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_weight_read_u64,
++		.write_u64 = cpu_weight_write_u64,
++	},
++	{
++		.name = "weight.nice",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_weight_nice_read_s64,
++		.write_s64 = cpu_weight_nice_write_s64,
++	},
++	{
++		.name = "idle",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_idle_read_s64,
++		.write_s64 = cpu_idle_write_s64,
++	},
++	{
++		.name = "max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_max_show,
++		.write = cpu_max_write,
++	},
++	{
++		.name = "max.burst",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
++	{ }	/* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++static int cpu_local_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++	.css_alloc	= cpu_cgroup_css_alloc,
++	.css_online	= cpu_cgroup_css_online,
++	.css_released	= cpu_cgroup_css_released,
++	.css_free	= cpu_cgroup_css_free,
++	.css_extra_stat_show = cpu_extra_stat_show,
++	.css_local_stat_show = cpu_local_stat_show,
++#ifdef CONFIG_RT_GROUP_SCHED
++	.can_attach	= cpu_cgroup_can_attach,
++#endif
++	.attach		= cpu_cgroup_attach,
++	.legacy_cftypes	= cpu_legacy_files,
++	.dfl_cftypes	= cpu_files,
++	.early_init	= true,
++	.threaded	= true,
++};
++#endif	/* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
++
++#ifdef CONFIG_SCHED_MM_CID
++
++/*
++ * @cid_lock: Guarantee forward-progress of cid allocation.
++ *
++ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
++ * is only used when contention is detected by the lock-free allocation so
++ * forward progress can be guaranteed.
++ */
++DEFINE_RAW_SPINLOCK(cid_lock);
++
++/*
++ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
++ *
++ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
++ * detected, it is set to 1 to ensure that all newly coming allocations are
++ * serialized by @cid_lock until the allocation which detected contention
++ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
++ * of a cid allocation.
++ */
++int use_cid_lock;
++
++/*
++ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
++ * concurrently with respect to the execution of the source runqueue context
++ * switch.
++ *
++ * There is one basic property we want to guarantee here:
++ *
++ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
++ * used by a task. That would lead to concurrent allocation of the cid and
++ * userspace corruption.
++ *
++ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
++ * that a pair of loads observe at least one of a pair of stores, which can be
++ * shown as:
++ *
++ *      X = Y = 0
++ *
++ *      w[X]=1          w[Y]=1
++ *      MB              MB
++ *      r[Y]=y          r[X]=x
++ *
++ * Which guarantees that x==0 && y==0 is impossible. But rather than using
++ * values 0 and 1, this algorithm cares about specific state transitions of the
++ * runqueue current task (as updated by the scheduler context switch), and the
++ * per-mm/cpu cid value.
++ *
++ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
++ * task->mm != mm for the rest of the discussion. There are two scheduler state
++ * transitions on context switch we care about:
++ *
++ * (TSA) Store to rq->curr with transition from (N) to (Y)
++ *
++ * (TSB) Store to rq->curr with transition from (Y) to (N)
++ *
++ * On the remote-clear side, there is one transition we care about:
++ *
++ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
++ *
++ * There is also a transition to UNSET state which can be performed from all
++ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
++ * guarantees that only a single thread will succeed:
++ *
++ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
++ *
++ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
++ * when a thread is actively using the cid (property (1)).
++ *
++ * Let's look at the relevant combinations of TSA/TSB, and TMA transitions.
++ *
++ * Scenario A) (TSA)+(TMA) (from next task perspective)
++ *
++ * CPU0                                      CPU1
++ *
++ * Context switch CS-1                       Remote-clear
++ *   - store to rq->curr: (N)->(Y) (TSA)     - cmpxchg to *pcpu_id to LAZY (TMA)
++ *                                             (implied barrier after cmpxchg)
++ *   - switch_mm_cid()
++ *     - memory barrier (see switch_mm_cid()
++ *       comment explaining how this barrier
++ *       is combined with other scheduler
++ *       barriers)
++ *     - mm_cid_get (next)
++ *       - READ_ONCE(*pcpu_cid)              - rcu_dereference(src_rq->curr)
++ *
++ * This Dekker ensures that either task (Y) is observed by the
++ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
++ * observed.
++ *
++ * If task (Y) store is observed by rcu_dereference(), it means that there is
++ * still an active task on the cpu. Remote-clear will therefore not transition
++ * to UNSET, which fulfills property (1).
++ *
++ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
++ * it will move its state to UNSET, which clears the percpu cid perhaps
++ * uselessly (which is not an issue for correctness). Because task (Y) is not
++ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
++ * state to UNSET is done with a cmpxchg expecting that the old state has the
++ * LAZY flag set, only one thread will successfully UNSET.
++ *
++ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
++ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
++ * CPU1 will observe task (Y) and do nothing more, which is fine.
++ *
++ * What we are effectively preventing with this Dekker is a scenario where
++ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
++ * because this would UNSET a cid which is actively used.
++ */
++
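The w[X]/w[Y] diagram above is the classic store-buffering (Dekker) pattern. A small, self-contained C11 illustration of the property the comment relies on (each side stores, executes a full barrier, then loads; the outcome x == 0 && y == 0 is forbidden) is sketched below as an aside for readers, not as kernel code:

	#include <stdatomic.h>
	#include <pthread.h>
	#include <assert.h>
	#include <stddef.h>

	static atomic_int X, Y;
	static int x, y;

	static void *writer_x(void *arg)
	{
		(void)arg;
		atomic_store_explicit(&X, 1, memory_order_relaxed);	/* w[X]=1 */
		atomic_thread_fence(memory_order_seq_cst);		/* MB */
		y = atomic_load_explicit(&Y, memory_order_relaxed);	/* r[Y]=y */
		return NULL;
	}

	static void *writer_y(void *arg)
	{
		(void)arg;
		atomic_store_explicit(&Y, 1, memory_order_relaxed);	/* w[Y]=1 */
		atomic_thread_fence(memory_order_seq_cst);		/* MB */
		x = atomic_load_explicit(&X, memory_order_relaxed);	/* r[X]=x */
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		pthread_create(&a, NULL, writer_x, NULL);
		pthread_create(&b, NULL, writer_y, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);

		/* The two fences guarantee this can never fire. */
		assert(!(x == 0 && y == 0));
		return 0;
	}
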
++void sched_mm_cid_migrate_from(struct task_struct *t)
++{
++	t->migrate_from_cpu = task_cpu(t);
++}
++
++static
++int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
++					  struct task_struct *t,
++					  struct mm_cid *src_pcpu_cid)
++{
++	struct mm_struct *mm = t->mm;
++	struct task_struct *src_task;
++	int src_cid, last_mm_cid;
++
++	if (!mm)
++		return -1;
++
++	last_mm_cid = t->last_mm_cid;
++	/*
++	 * If the migrated task has no last cid, or if the current
++	 * task on src rq uses the cid, it means the source cid does not need
++	 * to be moved to the destination cpu.
++	 */
++	if (last_mm_cid == -1)
++		return -1;
++	src_cid = READ_ONCE(src_pcpu_cid->cid);
++	if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
++		return -1;
++
++	/*
++	 * If we observe an active task using the mm on this rq, it means we
++	 * are not the last task to be migrated from this cpu for this mm, so
++	 * there is no need to move src_cid to the destination cpu.
++	 */
++	rcu_read_lock();
++	src_task = rcu_dereference(src_rq->curr);
++	if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++		rcu_read_unlock();
++		t->last_mm_cid = -1;
++		return -1;
++	}
++	rcu_read_unlock();
++
++	return src_cid;
++}
++
++static
++int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
++					      struct task_struct *t,
++					      struct mm_cid *src_pcpu_cid,
++					      int src_cid)
++{
++	struct task_struct *src_task;
++	struct mm_struct *mm = t->mm;
++	int lazy_cid;
++
++	if (src_cid == -1)
++		return -1;
++
++	/*
++	 * Attempt to clear the source cpu cid to move it to the destination
++	 * cpu.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(src_cid);
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
++		return -1;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, this task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		src_task = rcu_dereference(src_rq->curr);
++		if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++			/*
++			 * We observed an active task for this mm, there is therefore
++			 * no point in moving this cid to the destination cpu.
++			 */
++			t->last_mm_cid = -1;
++			return -1;
++		}
++	}
++
++	/*
++	 * The src_cid is unused, so it can be unset.
++	 */
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++		return -1;
++	return src_cid;
++}
++
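Stripped of the scheduler details, the transfer above is a small compare-and-swap state machine on one per-cpu word: VALID -> VALID|LAZY -> UNSET, with each step done by a cmpxchg so that at most one contender wins it. A toy model of those two steps follows; the flag bit and helper names are made up for illustration and do not match the kernel's actual MM_CID encoding:

	#include <stdatomic.h>
	#include <stdbool.h>

	#define TOY_CID_UNSET		(-1)
	#define TOY_CID_LAZY_FLAG	(1 << 30)	/* illustrative flag bit */

	/* Step 1: VALID -> VALID|LAZY. Only one contender can succeed. */
	static bool toy_cid_set_lazy(atomic_int *slot, int cid)
	{
		int expected = cid;

		return atomic_compare_exchange_strong(slot, &expected,
						      cid | TOY_CID_LAZY_FLAG);
	}

	/* Step 2: VALID|LAZY -> UNSET. Again, a single winner. */
	static bool toy_cid_unset_from_lazy(atomic_int *slot, int cid)
	{
		int expected = cid | TOY_CID_LAZY_FLAG;

		return atomic_compare_exchange_strong(slot, &expected,
						      TOY_CID_UNSET);
	}

Between the two steps the kernel code re-checks rq->curr under RCU; if an active user of the mm is seen, the second cmpxchg is simply skipped and that task becomes responsible for the LAZY -> UNSET transition, as the comment above the scoped_guard states.
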
++/*
++ * Migration to dst cpu. Called with dst_rq lock held.
++ * Interrupts are disabled, which keeps the window of cid ownership without the
++ * source rq lock held small.
++ */
++void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu)
++{
++	struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
++	struct mm_struct *mm = t->mm;
++	int src_cid, dst_cid;
++	struct rq *src_rq;
++
++	lockdep_assert_rq_held(dst_rq);
++
++	if (!mm)
++		return;
++	if (src_cpu == -1) {
++		t->last_mm_cid = -1;
++		return;
++	}
++	/*
++	 * Move the src cid if the dst cid is unset. This keeps id
++	 * allocation closest to 0 in cases where few threads migrate around
++	 * many cpus.
++	 *
++	 * If the destination cid is already set, we may have to just clear
++	 * the src cid to ensure compactness in frequent-migration
++	 * scenarios.
++	 *
++	 * It is not useful to clear the src cid when the number of threads is
++	 * greater than or equal to the number of allowed cpus, because
++	 * user-space can expect that the number of allowed cids can reach the
++	 * number of allowed cpus.
++	 */
++	dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
++	dst_cid = READ_ONCE(dst_pcpu_cid->cid);
++	if (!mm_cid_is_unset(dst_cid) &&
++	    atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
++		return;
++	src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
++	src_rq = cpu_rq(src_cpu);
++	src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
++	if (src_cid == -1)
++		return;
++	src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
++							    src_cid);
++	if (src_cid == -1)
++		return;
++	if (!mm_cid_is_unset(dst_cid)) {
++		__mm_cid_put(mm, src_cid);
++		return;
++	}
++	/* Move src_cid to dst cpu. */
++	mm_cid_snapshot_time(dst_rq, mm);
++	WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
++}
++
++static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
++				      int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *t;
++	int cid, lazy_cid;
++
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid))
++		return;
++
++	/*
++	 * Clear the cpu cid if it is set to keep cid allocation compact.  If
++	 * there happen to be other tasks left on the source cpu using this
++	 * mm, the next task using this mm will reallocate its cid on context
++	 * switch.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(cid);
++	if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
++		return;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, that task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		t = rcu_dereference(rq->curr);
++		if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
++			return;
++	}
++
++	/*
++	 * The cid is unused, so it can be unset.
++	 * Disable interrupts to keep the window of cid ownership without rq
++	 * lock small.
++	 */
++	scoped_guard (irqsave) {
++		if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++			__mm_cid_put(mm, cid);
++	}
++}
++
++static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct mm_cid *pcpu_cid;
++	struct task_struct *curr;
++	u64 rq_clock;
++
++	/*
++	 * rq->clock load is racy on 32-bit but one spurious clear once in a
++	 * while is irrelevant.
++	 */
++	rq_clock = READ_ONCE(rq->clock);
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++
++	/*
++	 * In order to take care of infrequently scheduled tasks, bump the time
++	 * snapshot associated with this cid if an active task using the mm is
++	 * observed on this rq.
++	 */
++	scoped_guard (rcu) {
++		curr = rcu_dereference(rq->curr);
++		if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
++			WRITE_ONCE(pcpu_cid->time, rq_clock);
++			return;
++		}
++	}
++
++	if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
++					     int weight)
++{
++	struct mm_cid *pcpu_cid;
++	int cid;
++
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid) || cid < weight)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void task_mm_cid_work(struct callback_head *work)
++{
++	unsigned long now = jiffies, old_scan, next_scan;
++	struct task_struct *t = current;
++	struct cpumask *cidmask;
++	struct mm_struct *mm;
++	int weight, cpu;
++
++	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
++
++	work->next = work;	/* Prevent double-add */
++	if (t->flags & PF_EXITING)
++		return;
++	mm = t->mm;
++	if (!mm)
++		return;
++	old_scan = READ_ONCE(mm->mm_cid_next_scan);
++	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	if (!old_scan) {
++		unsigned long res;
++
++		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
++		if (res != old_scan)
++			old_scan = res;
++		else
++			old_scan = next_scan;
++	}
++	if (time_before(now, old_scan))
++		return;
++	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
++		return;
++	cidmask = mm_cidmask(mm);
++	/* Clear cids that were not recently used. */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_old(mm, cpu);
++	weight = cpumask_weight(cidmask);
++	/*
++	 * Clear cids that are greater than or equal to the cidmask weight to
++	 * recompact it.
++	 */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
++}
++
++void init_sched_mm_cid(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	int mm_users = 0;
++
++	if (mm) {
++		mm_users = atomic_read(&mm->mm_users);
++		if (mm_users == 1)
++			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	}
++	t->cid_work.next = &t->cid_work;	/* Protect against double add */
++	init_task_work(&t->cid_work, task_mm_cid_work);
++}
++
++void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
++{
++	struct callback_head *work = &curr->cid_work;
++	unsigned long now = jiffies;
++
++	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
++	    work->next != work)
++		return;
++	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
++		return;
++	task_work_add(curr, work, TWA_RESUME);
++}
++
++void sched_mm_cid_exit_signals(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_before_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_after_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	scoped_guard (rq_lock_irqsave, rq) {
++		preempt_enable_no_resched();	/* holding spinlock */
++		WRITE_ONCE(t->mm_cid_active, 1);
++		/*
++		 * Store t->mm_cid_active before loading per-mm/cpu cid.
++		 * Matches barrier in sched_mm_cid_remote_clear_old().
++		 */
++		smp_mb();
++		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
++	}
++	rseq_set_notify_resume(t);
++}
++
++void sched_mm_cid_fork(struct task_struct *t)
++{
++	WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
++	t->mm_cid_active = 1;
++}
++#endif
+diff --new-file -rup linux-6.10/kernel/sched/alt_debug.c linux-prjc-linux-6.10.y-prjc/kernel/sched/alt_debug.c
+--- linux-6.10/kernel/sched/alt_debug.c	1970-01-01 01:00:00.000000000 +0100
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/alt_debug.c	2024-07-15 16:47:37.000000000 +0200
+@@ -0,0 +1,32 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date  : 2020
++ */
++#include "sched.h"
++#include "linux/sched/debug.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...)			\
++ do {						\
++	if (m)					\
++		seq_printf(m, x);		\
++	else					\
++		pr_cont(x);			\
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++			  struct seq_file *m)
++{
++	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++						get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --new-file -rup linux-6.10/kernel/sched/alt_sched.h linux-prjc-linux-6.10.y-prjc/kernel/sched/alt_sched.h
+--- linux-6.10/kernel/sched/alt_sched.h	1970-01-01 01:00:00.000000000 +0100
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/alt_sched.h	2024-07-15 16:47:37.000000000 +0200
+@@ -0,0 +1,976 @@
++#ifndef ALT_SCHED_H
++#define ALT_SCHED_H
++
++#include <linux/context_tracking.h>
++#include <linux/profile.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++	struct cgroup_subsys_state css;
++
++	struct rcu_head rcu;
++	struct list_head list;
++
++	struct task_group *parent;
++	struct list_head siblings;
++	struct list_head children;
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	unsigned long		shares;
++#endif
++};
++
++extern struct task_group *sched_create_group(struct task_group *parent);
++extern void sched_online_group(struct task_group *tg,
++			       struct task_group *parent);
++extern void sched_destroy_group(struct task_group *tg);
++extern void sched_release_group(struct task_group *tg);
++#endif /* CONFIG_CGROUP_SCHED */
++
++#define MIN_SCHED_NORMAL_PRIO	(32)
++/*
++ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
++ *
++ * -- BMQ --
++ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
++ * -- PDS --
++ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
++ */
++#define SCHED_LEVELS		(64 + 1)
++
++#define IDLE_TASK_SCHED_PRIO	(SCHED_LEVELS - 1)
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++	unsigned long __w = (w); \
++	if (__w) \
++		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++	__w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		(w)
++# define scale_load_down(w)	(w)
++#endif
++
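As a concrete check of the fixed-point helpers above, assuming the mainline value SCHED_FIXEDPOINT_SHIFT == 10 on a 64-bit build:

	/*
	 * scale_load(1024)         = 1024 << 10            = 1048576
	 * scale_load_down(1048576) = max(2, 1048576 >> 10) = 1024
	 * scale_load_down(1)       = max(2, 1 >> 10)       = 2   (clamped away from 0/1)
	 * scale_load_down(0)       = 0                           (zero is preserved)
	 */
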
++#ifdef CONFIG_FAIR_GROUP_SCHED
++#define ROOT_TASK_GROUP_LOAD	NICE_0_LOAD
++
++/*
++ * A weight of 0 or 1 can cause arithmetic problems.
++ * The weight of a cfs_rq is the sum of the weights of the entities
++ * queued on it, so the weight of an entity should not be too large,
++ * and neither should the shares value of a task group.
++ * (The default weight is 1024 - so there's no practical
++ *  limitation from this.)
++ */
++#define MIN_SHARES		(1UL <<  1)
++#define MAX_SHARES		(1UL << 18)
++#endif
++
++/*
++ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
++ */
++#ifdef CONFIG_SCHED_DEBUG
++# define const_debug __read_mostly
++#else
++# define const_debug const
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED	1
++#define TASK_ON_RQ_MIGRATING	2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++	return p->on_rq == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/* Wake flags. The first three directly map to some SD flag value */
++#define WF_EXEC         0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
++#define WF_FORK         0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
++#define WF_TTWU         0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */
++
++#define WF_SYNC         0x10 /* Waker goes to sleep after wakeup */
++#define WF_MIGRATED     0x20 /* Internal use, task got migrated */
++#define WF_CURRENT_CPU  0x40 /* Prefer to move the wakee to the current CPU. */
++
++#ifdef CONFIG_SMP
++static_assert(WF_EXEC == SD_BALANCE_EXEC);
++static_assert(WF_FORK == SD_BALANCE_FORK);
++static_assert(WF_TTWU == SD_BALANCE_WAKE);
++#endif
++
++#define SCHED_QUEUE_BITS	(SCHED_LEVELS - 1)
++
++struct sched_queue {
++	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++	struct list_head heads[SCHED_LEVELS];
++};
++
++struct rq;
++struct cpuidle_state;
++
++struct balance_callback {
++	struct balance_callback *next;
++	void (*func)(struct rq *rq);
++};
++
++struct balance_arg {
++	struct task_struct	*task;
++	int			active;
++	cpumask_t		*cpumask;
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++	/* runqueue lock: */
++	raw_spinlock_t			lock;
++
++	struct task_struct __rcu	*curr;
++	struct task_struct		*idle;
++	struct task_struct		*stop;
++	struct mm_struct		*prev_mm;
++
++	struct sched_queue		queue		____cacheline_aligned;
++
++	int				prio;
++#ifdef CONFIG_SCHED_PDS
++	int				prio_idx;
++	u64				time_edge;
++#endif
++
++	/* switch count */
++	u64 nr_switches;
++
++	atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++	u64 last_seen_need_resched_ns;
++	int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++	int membarrier_state;
++#endif
++
++#ifdef CONFIG_SMP
++	int cpu;		/* cpu of this runqueue */
++	bool online;
++
++	unsigned int		ttwu_pending;
++	unsigned char		nohz_idle_balance;
++	unsigned char		idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	struct sched_avg	avg_irq;
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++	struct balance_arg	sg_balance_arg		____cacheline_aligned;
++#endif
++	struct cpu_stop_work	active_balance_work;
++
++	struct balance_callback	*balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++	struct rcuwait		hotplug_wait;
++#endif
++	unsigned int		nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++	u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For general cpu load util */
++	s32 load_history;
++	u64 load_block;
++	u64 load_stamp;
++
++	/* calc_load related fields */
++	unsigned long calc_load_update;
++	long calc_load_active;
++
++	/* Ensure that all clocks are in the same cache line */
++	u64			clock ____cacheline_aligned;
++	u64			clock_task;
++
++	unsigned int  nr_running;
++	unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++	call_single_data_t hrtick_csd;
++#endif
++	struct hrtimer		hrtick_timer;
++	ktime_t			hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++	/* latency stats */
++	struct sched_info rq_sched_info;
++	unsigned long long rq_cpu_time;
++	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++	/* sys_sched_yield() stats */
++	unsigned int yld_count;
++
++	/* schedule() stats */
++	unsigned int sched_switch;
++	unsigned int sched_count;
++	unsigned int sched_goidle;
++
++	/* try_to_wake_up() stats */
++	unsigned int ttwu_count;
++	unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++	/* Must be inspected within a rcu lock section */
++	struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++	call_single_data_t	nohz_csd;
++#endif
++	atomic_t		nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++
++	/* Scratch cpumask to be temporarily used under rq_lock */
++	cpumask_var_t		scratch_mask;
++};
++
++extern unsigned int sysctl_sched_base_slice;
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
++#define this_rq()		this_cpu_ptr(&runqueues)
++#define task_rq(p)		cpu_rq(task_cpu(p))
++#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
++#define raw_rq()		raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++#ifdef CONFIG_SCHED_SMT
++	SMT_LEVEL_SPACE_HOLDER,
++#endif
++	COREGROUP_LEVEL_SPACE_HOLDER,
++	CORE_LEVEL_SPACE_HOLDER,
++	OTHER_LEVEL_SPACE_HOLDER,
++	NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++	int cpu;
++
++	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++		mask++;
++
++	return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
++
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++	return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++	return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ, call to
++	 * Relax lockdep_assert_held() checking as in VRQ: callers such as
++	 * sched_info_xxxx() may not hold rq->lock
++	 */
++	return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ: callers such as
++	 * sched_info_xxxx() may not hold rq->lock
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP  - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP		0x01
++
++#define ENQUEUE_WAKEUP		0x01
++
++
++/*
++ * Below are the scheduler APIs used by other kernel code.
++ * They use a dummy rq_flags.
++ * ToDo : BMQ needs to support these APIs for compatibility with the
++ * mainline scheduler code.
++ */
++struct rq_flags {
++	unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	local_irq_disable();
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++	return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++	return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++	lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++	raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++	local_irq_disable();
++	raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++	raw_spin_rq_unlock(rq);
++	local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++	return rq->curr == p;
++}
++
++static inline bool task_on_cpu(struct task_struct *p)
++{
++	return p->on_cpu;
++}
++
++extern int task_running_nice(struct task_struct *p);
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++	rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	WARN_ON(!rcu_read_lock_held());
++	return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	return rq->cpu;
++#else
++	return 0;
++#endif
++}
++
++extern void resched_cpu(int cpu);
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT	0
++#define NOHZ_STATS_KICK_BIT	1
++
++#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++	u64			total;
++	u64			tick_delta;
++	u64			irq_start_time;
++	struct u64_stats_sync	sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
++ * subtracted and never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++	unsigned int seq;
++	u64 total;
++
++	do {
++		seq = __u64_stats_fetch_begin(&irqtime->sync);
++		total = irqtime->total;
++	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++	return total;
++}
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant()	(true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant()	(false)
++#endif
++
++#ifdef CONFIG_SMP
++unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
++				 unsigned long min,
++				 unsigned long max);
++#endif /* CONFIG_SMP */
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching will be
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV	0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++	int membarrier_state;
++
++	if (prev_mm == next_mm)
++		return;
++
++	membarrier_state = atomic_read(&next_mm->membarrier_state);
++	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++		return;
++
++	WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline
++unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
++				  struct task_struct *p)
++{
++	return util;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++#ifdef CONFIG_SCHED_MM_CID
++
++#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
++#define MM_CID_SCAN_DELAY	100			/* 100ms */
++
++extern raw_spinlock_t cid_lock;
++extern int use_cid_lock;
++
++extern void sched_mm_cid_migrate_from(struct task_struct *t);
++extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu);
++extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
++extern void init_sched_mm_cid(struct task_struct *t);
++
++static inline void __mm_cid_put(struct mm_struct *mm, int cid)
++{
++	if (cid < 0)
++		return;
++	cpumask_clear_cpu(cid, mm_cidmask(mm));
++}
++
++/*
++ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
++ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
++ * be held to transition to other states.
++ *
++ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
++ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
++ */
++static inline void mm_cid_put_lazy(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (!mm_cid_is_lazy_put(cid) ||
++	    !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid, res;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	for (;;) {
++		if (mm_cid_is_unset(cid))
++			return MM_CID_UNSET;
++		/*
++		 * Attempt transition from valid or lazy-put to unset.
++		 */
++		res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
++		if (res == cid)
++			break;
++		cid = res;
++	}
++	return cid;
++}
++
++static inline void mm_cid_put(struct mm_struct *mm)
++{
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = mm_cid_pcpu_unset(mm);
++	if (cid == MM_CID_UNSET)
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int __mm_cid_try_get(struct mm_struct *mm)
++{
++	struct cpumask *cpumask;
++	int cid;
++
++	cpumask = mm_cidmask(mm);
++	/*
++	 * Retry finding first zero bit if the mask is temporarily
++	 * filled. This only happens during concurrent remote-clear
++	 * which owns a cid without holding a rq lock.
++	 */
++	for (;;) {
++		cid = cpumask_first_zero(cpumask);
++		if (cid < nr_cpu_ids)
++			break;
++		cpu_relax();
++	}
++	if (cpumask_test_and_set_cpu(cid, cpumask))
++		return -1;
++	return cid;
++}
++
++/*
++ * Save a snapshot of the current runqueue time of this cpu
++ * with the per-cpu cid value, allowing us to estimate how recently it was used.
++ */
++static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
++{
++	struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
++
++	lockdep_assert_rq_held(rq);
++	WRITE_ONCE(pcpu_cid->time, rq->clock);
++}
++
++static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++	int cid;
++
++	/*
++	 * All allocations (even those using the cid_lock) are lock-free. If
++	 * use_cid_lock is set, hold the cid_lock to perform cid allocation to
++	 * guarantee forward progress.
++	 */
++	if (!READ_ONCE(use_cid_lock)) {
++		cid = __mm_cid_try_get(mm);
++		if (cid >= 0)
++			goto end;
++		raw_spin_lock(&cid_lock);
++	} else {
++		raw_spin_lock(&cid_lock);
++		cid = __mm_cid_try_get(mm);
++		if (cid >= 0)
++			goto unlock;
++	}
++
++	/*
++	 * cid concurrently allocated. Retry while forcing following
++	 * allocations to use the cid_lock to ensure forward progress.
++	 */
++	WRITE_ONCE(use_cid_lock, 1);
++	/*
++	 * Set use_cid_lock before allocation. Only care about program order
++	 * because this is only required for forward progress.
++	 */
++	barrier();
++	/*
++	 * Retry until it succeeds. It is guaranteed to eventually succeed once
++	 * all incoming allocations observe the use_cid_lock flag set.
++	 */
++	do {
++		cid = __mm_cid_try_get(mm);
++		cpu_relax();
++	} while (cid < 0);
++	/*
++	 * Allocate before clearing use_cid_lock. Only care about
++	 * program order because this is for forward progress.
++	 */
++	barrier();
++	WRITE_ONCE(use_cid_lock, 0);
++unlock:
++	raw_spin_unlock(&cid_lock);
++end:
++	mm_cid_snapshot_time(rq, mm);
++	return cid;
++}
++
++static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	struct cpumask *cpumask;
++	int cid;
++
++	lockdep_assert_rq_held(rq);
++	cpumask = mm_cidmask(mm);
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (mm_cid_is_valid(cid)) {
++		mm_cid_snapshot_time(rq, mm);
++		return cid;
++	}
++	if (mm_cid_is_lazy_put(cid)) {
++		if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++			__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++	}
++	cid = __mm_cid_get(rq, mm);
++	__this_cpu_write(pcpu_cid->cid, cid);
++	return cid;
++}
++
++static inline void switch_mm_cid(struct rq *rq,
++				 struct task_struct *prev,
++				 struct task_struct *next)
++{
++	/*
++	 * Provide a memory barrier between rq->curr store and load of
++	 * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
++	 *
++	 * Should be adapted if context_switch() is modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		/*
++		 * user -> kernel transition does not guarantee a barrier, but
++		 * we can use the fact that it performs an atomic operation in
++		 * mmgrab().
++		 */
++		if (prev->mm)                           // from user
++			smp_mb__after_mmgrab();
++		/*
++		 * kernel -> kernel transition does not change rq->curr->mm
++		 * state. It stays NULL.
++		 */
++	} else {                                        // to user
++		/*
++		 * kernel -> user transition does not provide a barrier
++		 * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
++		 * Provide it here.
++		 */
++		if (!prev->mm)                          // from kernel
++			smp_mb();
++		/*
++		 * user -> user transition guarantees a memory barrier through
++		 * switch_mm() when current->mm changes. If current->mm is
++		 * unchanged, no barrier is needed.
++		 */
++	}
++	if (prev->mm_cid_active) {
++		mm_cid_snapshot_time(rq, prev->mm);
++		mm_cid_put_lazy(prev);
++		prev->mm_cid = -1;
++	}
++	if (next->mm_cid_active)
++		next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
++}
++
++#else
++static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
++static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
++static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu) { }
++static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
++static inline void init_sched_mm_cid(struct task_struct *t) { }
++#endif
++
++#ifdef CONFIG_SMP
++extern struct balance_callback balance_push_callback;
++
++static inline void
++__queue_balance_callback(struct rq *rq,
++			 struct balance_callback *head)
++{
++	lockdep_assert_rq_held(rq);
++
++	/*
++	 * Don't (re)queue an already queued item; nor queue anything when
++	 * balance_push() is active, see the comment with
++	 * balance_push_callback.
++	 */
++	if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
++		return;
++
++	head->next = rq->balance_callback;
++	rq->balance_callback = head;
++}
++#endif /* CONFIG_SMP */
++
++#endif /* ALT_SCHED_H */
+diff --new-file -rup linux-6.10/kernel/sched/bmq.h linux-prjc-linux-6.10.y-prjc/kernel/sched/bmq.h
+--- linux-6.10/kernel/sched/bmq.h	1970-01-01 01:00:00.000000000 +0100
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/bmq.h	2024-07-15 16:47:37.000000000 +0200
+@@ -0,0 +1,98 @@
++#define ALT_SCHED_NAME "BMQ"
++
++/*
++ * BMQ only routines
++ */
++static inline void boost_task(struct task_struct *p, int n)
++{
++	int limit;
++
++	switch (p->policy) {
++	case SCHED_NORMAL:
++		limit = -MAX_PRIORITY_ADJ;
++		break;
++	case SCHED_BATCH:
++		limit = 0;
++		break;
++	default:
++		return;
++	}
++
++	p->boost_prio = max(limit, p->boost_prio - n);
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++	if (p->boost_prio < MAX_PRIORITY_ADJ)
++		p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++/* This API is used in task_prio(); the return value is read by human users */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	return p->prio + p->boost_prio - MIN_NORMAL_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO)? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MIN_NORMAL_PRIO) / 2;
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)	\
++	prio = task_sched_prio(p);		\
++	idx = prio;
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++	return idx;
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++	return rq->prio;
++}
++
++inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio + p->boost_prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq) {}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	deboost_task(p);
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++static void sched_task_fork(struct task_struct *p, struct rq *rq) {}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++	s64 delta = this_rq()->clock_task - p->last_ran;
++
++	if (likely(delta > 0))
++		boost_task(p, delta  >> 22);
++}
++
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++	boost_task(p, 1);
++}
+diff --new-file -rup linux-6.10/kernel/sched/build_policy.c linux-prjc-linux-6.10.y-prjc/kernel/sched/build_policy.c
+--- linux-6.10/kernel/sched/build_policy.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/build_policy.c	2024-07-15 16:47:37.000000000 +0200
+@@ -42,13 +42,19 @@
+ 
+ #include "idle.c"
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+ 
+ #include "cputime.c"
+-#include "deadline.c"
+ 
++#ifndef CONFIG_SCHED_ALT
++#include "deadline.c"
++#endif
+diff --new-file -rup linux-6.10/kernel/sched/build_utility.c linux-prjc-linux-6.10.y-prjc/kernel/sched/build_utility.c
+--- linux-6.10/kernel/sched/build_utility.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/build_utility.c	2024-07-15 16:47:37.000000000 +0200
+@@ -84,7 +84,9 @@
+ 
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+ 
+diff --new-file -rup linux-6.10/kernel/sched/cpufreq_schedutil.c linux-prjc-linux-6.10.y-prjc/kernel/sched/cpufreq_schedutil.c
+--- linux-6.10/kernel/sched/cpufreq_schedutil.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/cpufreq_schedutil.c	2024-07-15 16:47:37.000000000 +0200
+@@ -197,12 +197,17 @@ unsigned long sugov_effective_cpu_perf(i
+ 
+ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
+ 
+ 	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
+ 	util = max(util, boost);
+ 	sg_cpu->bw_min = min;
+ 	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
++#else /* CONFIG_SCHED_ALT */
++	sg_cpu->bw_min = 0;
++	sg_cpu->util = rq_load_util(cpu_rq(sg_cpu->cpu), arch_scale_cpu_capacity(sg_cpu->cpu));
++#endif /* CONFIG_SCHED_ALT */
+ }
+ 
+ /**
+@@ -343,8 +348,10 @@ static inline bool sugov_cpu_is_busy(str
+  */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
+ 		sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+ 
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -676,6 +683,7 @@ static int sugov_kthread_create(struct s
+ 	}
+ 
+ 	ret = sched_setattr_nocheck(thread, &attr);
++
+ 	if (ret) {
+ 		kthread_stop(thread);
+ 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+diff --new-file -rup linux-6.10/kernel/sched/cputime.c linux-prjc-linux-6.10.y-prjc/kernel/sched/cputime.c
+--- linux-6.10/kernel/sched/cputime.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/cputime.c	2024-07-15 16:47:37.000000000 +0200
+@@ -126,7 +126,7 @@ void account_user_time(struct task_struc
+ 	p->utime += cputime;
+ 	account_group_user_time(p, cputime);
+ 
+-	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+ 
+ 	/* Add user time to cpustat. */
+ 	task_group_account_field(p, index, cputime);
+@@ -150,7 +150,7 @@ void account_guest_time(struct task_stru
+ 	p->gtime += cputime;
+ 
+ 	/* Add guest time to cpustat. */
+-	if (task_nice(p) > 0) {
++	if (task_running_nice(p)) {
+ 		task_group_account_field(p, CPUTIME_NICE, cputime);
+ 		cpustat[CPUTIME_GUEST_NICE] += cputime;
+ 	} else {
+@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+-	return t->se.sum_exec_runtime;
++	return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct
+ 	struct rq *rq;
+ 
+ 	rq = task_rq_lock(t, &rf);
+-	ns = t->se.sum_exec_runtime;
++	ns = tsk_seruntime(t);
+ 	task_rq_unlock(rq, t, &rf);
+ 
+ 	return ns;
+@@ -617,7 +617,7 @@ out:
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ 	struct task_cputime cputime = {
+-		.sum_exec_runtime = p->se.sum_exec_runtime,
++		.sum_exec_runtime = tsk_seruntime(p),
+ 	};
+ 
+ 	if (task_cputime(p, &cputime.utime, &cputime.stime))
+diff --new-file -rup linux-6.10/kernel/sched/debug.c linux-prjc-linux-6.10.y-prjc/kernel/sched/debug.c
+--- linux-6.10/kernel/sched/debug.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/debug.c	2024-07-15 16:47:37.000000000 +0200
+@@ -7,6 +7,7 @@
+  * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * This allows printing both to /sys/kernel/debug/sched/debug and
+  * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sche
+ };
+ 
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 
+@@ -278,6 +280,7 @@ static const struct file_operations sche
+ 
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+ 
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+ 
+ #ifdef CONFIG_SMP
+@@ -332,6 +335,7 @@ static const struct file_operations sche
+ 	.llseek		= seq_lseek,
+ 	.release	= seq_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ static struct dentry *debugfs_sched;
+ 
+@@ -341,14 +345,17 @@ static __init int sched_init_debug(void)
+ 
+ 	debugfs_sched = debugfs_create_dir("sched", NULL);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ 	debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+ 
+ 	debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
+ 	debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
+ 
+@@ -373,11 +380,13 @@ static __init int sched_init_debug(void)
+ #endif
+ 
+ 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	return 0;
+ }
+ late_initcall(sched_init_debug);
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 
+ static cpumask_var_t		sd_sysctl_cpus;
+@@ -1111,6 +1120,7 @@ void proc_sched_set_task(struct task_str
+ 	memset(&p->stats, 0, sizeof(p->stats));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --new-file -rup linux-6.10/kernel/sched/idle.c linux-prjc-linux-6.10.y-prjc/kernel/sched/idle.c
+--- linux-6.10/kernel/sched/idle.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/idle.c	2024-07-15 16:47:37.000000000 +0200
+@@ -430,6 +430,7 @@ void cpu_startup_entry(enum cpuhp_state
+ 		do_idle();
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * idle-task scheduling class.
+  */
+@@ -551,3 +552,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ 	.switched_to		= switched_to_idle,
+ 	.update_curr		= update_curr_idle,
+ };
++#endif
+diff --new-file -rup linux-6.10/kernel/sched/pds.h linux-prjc-linux-6.10.y-prjc/kernel/sched/pds.h
+--- linux-6.10/kernel/sched/pds.h	1970-01-01 01:00:00.000000000 +0100
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/pds.h	2024-07-15 16:47:37.000000000 +0200
+@@ -0,0 +1,134 @@
++#define ALT_SCHED_NAME "PDS"
++
++static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
++
++#define SCHED_NORMAL_PRIO_NUM	(32)
++#define SCHED_EDGE_DELTA	(SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
++
++/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
++#define SCHED_NORMAL_PRIO_MOD(x)	((x) & (SCHED_NORMAL_PRIO_NUM - 1))
++
++/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
++static __read_mostly int sched_timeslice_shift = 23;
++
++/*
++ * Common interfaces
++ */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	u64 sched_dl = max(p->deadline, rq->time_edge);
++
++#ifdef ALT_SCHED_DEBUG
++	if (WARN_ONCE(sched_dl - rq->time_edge > NORMAL_PRIO_NUM - 1,
++		      "pds: task_sched_prio_normal() delta %lld\n", sched_dl - rq->time_edge))
++		return SCHED_NORMAL_PRIO_NUM - 1;
++#endif
++
++	return sched_dl - rq->time_edge;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)							\
++	if (p->prio < MIN_NORMAL_PRIO) {							\
++		prio = p->prio >> 2;								\
++		idx = prio;									\
++	} else {										\
++		u64 sched_dl = max(p->deadline, rq->time_edge);					\
++		prio = MIN_SCHED_NORMAL_PRIO + sched_dl - rq->time_edge;			\
++		idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_dl);			\
++	}
++
++static inline int sched_prio2idx(int sched_prio, struct rq *rq)
++{
++	return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
++		sched_prio :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
++}
++
++static inline int sched_idx2prio(int sched_idx, struct rq *rq)
++{
++	return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
++		sched_idx :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++	return rq->prio_idx;
++}
++
++int task_running_nice(struct task_struct *p)
++{
++	return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq)
++{
++	struct list_head head;
++	u64 old = rq->time_edge;
++	u64 now = rq->clock >> sched_timeslice_shift;
++	u64 prio, delta;
++	DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
++
++	if (now == old)
++		return;
++
++	rq->time_edge = now;
++	delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
++	INIT_LIST_HEAD(&head);
++
++	prio = MIN_SCHED_NORMAL_PRIO;
++	for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
++		list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
++				      SCHED_NORMAL_PRIO_MOD(prio + old), &head);
++
++	bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
++	if (!list_empty(&head)) {
++		u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
++
++		__list_splice(&head, rq->queue.heads + idx, rq->queue.heads[idx].next);
++		set_bit(MIN_SCHED_NORMAL_PRIO, normal);
++	}
++	bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
++		       (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
++
++	if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
++		return;
++
++	rq->prio = max_t(u64, MIN_SCHED_NORMAL_PRIO, rq->prio - delta);
++	rq->prio_idx = sched_prio2idx(rq->prio, rq);
++}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	if (p->prio >= MIN_NORMAL_PRIO)
++		p->deadline = rq->time_edge + SCHED_EDGE_DELTA +
++			      (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++	u64 max_dl = rq->time_edge + SCHED_EDGE_DELTA + NICE_WIDTH / 2 - 1;
++	if (unlikely(p->deadline > max_dl))
++		p->deadline = max_dl;
++}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	sched_task_renew(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sysctl_sched_base_slice;
++	sched_task_renew(p, rq);
++}
++
++static inline void sched_task_ttwu(struct task_struct *p) {}
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
+diff --new-file -rup linux-6.10/kernel/sched/pelt.c linux-prjc-linux-6.10.y-prjc/kernel/sched/pelt.c
+--- linux-6.10/kernel/sched/pelt.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/pelt.c	2024-07-15 16:47:37.000000000 +0200
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa,
+ 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * sched_entity:
+  *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struc
+ 
+ 	return 0;
+ }
++#endif
+ 
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * hardware:
+  *
+diff --new-file -rup linux-6.10/kernel/sched/pelt.h linux-prjc-linux-6.10.y-prjc/kernel/sched/pelt.h
+--- linux-6.10/kernel/sched/pelt.h	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/pelt.h	2024-07-15 16:47:37.000000000 +0200
+@@ -1,13 +1,15 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
++#endif
+ 
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_hw_load_avg(u64 now, struct rq *rq, u64 capacity);
+ 
+ static inline u64 hw_load_avg(struct rq *rq)
+@@ -44,6 +46,7 @@ static inline u32 get_pelt_divider(struc
+ 	return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ 	unsigned int enqueued;
+@@ -180,9 +183,11 @@ static inline u64 cfs_rq_clock_pelt(stru
+ 	return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #else
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -200,6 +205,7 @@ update_dl_rq_load_avg(u64 now, struct rq
+ {
+ 	return 0;
+ }
++#endif
+ 
+ static inline int
+ update_hw_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --new-file -rup linux-6.10/kernel/sched/sched.h linux-prjc-linux-6.10.y-prjc/kernel/sched/sched.h
+--- linux-6.10/kernel/sched/sched.h	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/sched.h	2024-07-15 16:47:37.000000000 +0200
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+ 
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -3481,4 +3485,9 @@ static inline void init_sched_mm_cid(str
+ extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
+ extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);
+ 
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --new-file -rup linux-6.10/kernel/sched/stats.c linux-prjc-linux-6.10.y-prjc/kernel/sched/stats.c
+--- linux-6.10/kernel/sched/stats.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/stats.c	2024-07-15 16:47:37.000000000 +0200
+@@ -125,9 +125,11 @@ static int show_schedstat(struct seq_fil
+ 	} else {
+ 		struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		struct sched_domain *sd;
+ 		int dcount = 0;
+ #endif
++#endif
+ 		cpu = (unsigned long)(v - 2);
+ 		rq = cpu_rq(cpu);
+ 
+@@ -143,6 +145,7 @@ static int show_schedstat(struct seq_fil
+ 		seq_printf(seq, "\n");
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		/* domain-specific stats */
+ 		rcu_read_lock();
+ 		for_each_domain(cpu, sd) {
+@@ -171,6 +174,7 @@ static int show_schedstat(struct seq_fil
+ 		}
+ 		rcu_read_unlock();
+ #endif
++#endif
+ 	}
+ 	return 0;
+ }
+diff --new-file -rup linux-6.10/kernel/sched/stats.h linux-prjc-linux-6.10.y-prjc/kernel/sched/stats.h
+--- linux-6.10/kernel/sched/stats.h	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/stats.h	2024-07-15 16:47:37.000000000 +0200
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart
+ 
+ #endif /* CONFIG_SCHEDSTATS */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ 	struct sched_entity     se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity
+ #endif
+ 	return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PSI
+ void psi_task_change(struct task_struct *task, int clear, int set);
+diff --new-file -rup linux-6.10/kernel/sched/topology.c linux-prjc-linux-6.10.y-prjc/kernel/sched/topology.c
+--- linux-6.10/kernel/sched/topology.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sched/topology.c	2024-07-15 16:47:37.000000000 +0200
+@@ -3,6 +3,7 @@
+  * Scheduler topology setup/handling methods
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include <linux/bsearch.h>
+ 
+ DEFINE_MUTEX(sched_domains_mutex);
+@@ -1451,8 +1452,10 @@ static void asym_cpu_capacity_scan(void)
+  */
+ 
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+ 
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ 	if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1687,6 +1690,7 @@ sd_init(struct sched_domain_topology_lev
+ 
+ 	return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ /*
+  * Topology list, bottom-up.
+@@ -1723,6 +1727,7 @@ void __init set_sched_topology(struct sc
+ 	sched_domain_topology_saved = NULL;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+ 
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2789,3 +2794,28 @@ void partition_sched_domains(int ndoms_n
+ 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ 	mutex_unlock(&sched_domains_mutex);
+ }
++#else /* CONFIG_SCHED_ALT */
++DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
++
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++			     struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return best_mask_cpu(cpu, cpus);
++}
++
++int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
++{
++	return cpumask_nth(cpu, cpus);
++}
++
++const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
++{
++	return ERR_PTR(-EOPNOTSUPP);
++}
++EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
++#endif /* CONFIG_NUMA */
++#endif
+diff --new-file -rup linux-6.10/kernel/sysctl.c linux-prjc-linux-6.10.y-prjc/kernel/sysctl.c
+--- linux-6.10/kernel/sysctl.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/sysctl.c	2024-07-15 16:47:37.000000000 +0200
+@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
+ 
+ /* Constants used for minimum and maximum */
+ 
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PERF_EVENTS
+ static const int six_hundred_forty_kb = 640 * 1024;
+ #endif
+@@ -1912,6 +1916,17 @@ static struct ctl_table kern_table[] = {
+ 		.proc_handler	= proc_dointvec,
+ 	},
+ #endif
++#ifdef CONFIG_SCHED_ALT
++	{
++		.procname	= "yield_type",
++		.data		= &sched_yield_type,
++		.maxlen		= sizeof (int),
++		.mode		= 0644,
++		.proc_handler	= &proc_dointvec_minmax,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_TWO,
++	},
++#endif
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ 	{
+ 		.procname	= "spin_retry",
+diff --new-file -rup linux-6.10/kernel/time/hrtimer.c linux-prjc-linux-6.10.y-prjc/kernel/time/hrtimer.c
+--- linux-6.10/kernel/time/hrtimer.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/time/hrtimer.c	2024-07-15 16:47:37.000000000 +0200
+@@ -2074,8 +2074,10 @@ long hrtimer_nanosleep(ktime_t rqtp, con
+ 	int ret = 0;
+ 	u64 slack;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	slack = current->timer_slack_ns;
+-	if (rt_task(current))
++	if (dl_task(current) || rt_task(current))
++#endif
+ 		slack = 0;
+ 
+ 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
+diff --new-file -rup linux-6.10/kernel/time/posix-cpu-timers.c linux-prjc-linux-6.10.y-prjc/kernel/time/posix-cpu-timers.c
+--- linux-6.10/kernel/time/posix-cpu-timers.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/time/posix-cpu-timers.c	2024-07-15 16:47:37.000000000 +0200
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct t
+ 	u64 stime, utime;
+ 
+ 	task_cputime(p, &utime, &stime);
+-	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++	store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+ 
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -867,6 +867,7 @@ static void collect_posix_cputimers(stru
+ 	}
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ 	if (tsk->dl.dl_overrun) {
+@@ -874,6 +875,7 @@ static inline void check_dl_overrun(stru
+ 		send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ 	}
+ }
++#endif
+ 
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -901,8 +903,10 @@ static void check_thread_timers(struct t
+ 	u64 samples[CPUCLOCK_MAX];
+ 	unsigned long soft;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk))
+ 		check_dl_overrun(tsk);
++#endif
+ 
+ 	if (expiry_cache_is_inactive(pct))
+ 		return;
+@@ -916,7 +920,7 @@ static void check_thread_timers(struct t
+ 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ 	if (soft != RLIM_INFINITY) {
+ 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
+-		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+ 
+ 		/* At the hard limit, send SIGKILL. No further action. */
+@@ -1152,8 +1156,10 @@ static inline bool fastpath_timer_check(
+ 			return true;
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk) && tsk->dl.dl_overrun)
+ 		return true;
++#endif
+ 
+ 	return false;
+ }
+diff --new-file -rup linux-6.10/kernel/trace/trace_selftest.c linux-prjc-linux-6.10.y-prjc/kernel/trace/trace_selftest.c
+--- linux-6.10/kernel/trace/trace_selftest.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/trace/trace_selftest.c	2024-07-15 16:47:37.000000000 +0200
+@@ -1155,10 +1155,15 @@ static int trace_wakeup_test_thread(void
+ {
+ 	/* Make this a -deadline thread */
+ 	static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++		/* No deadline on BMQ/PDS, use RR */
++		.sched_policy = SCHED_RR,
++#else
+ 		.sched_policy = SCHED_DEADLINE,
+ 		.sched_runtime = 100000ULL,
+ 		.sched_deadline = 10000000ULL,
+ 		.sched_period = 10000000ULL
++#endif
+ 	};
+ 	struct wakeup_test_data *x = data;
+ 
+diff --new-file -rup linux-6.10/kernel/workqueue.c linux-prjc-linux-6.10.y-prjc/kernel/workqueue.c
+--- linux-6.10/kernel/workqueue.c	2024-07-15 00:43:32.000000000 +0200
++++ linux-prjc-linux-6.10.y-prjc/kernel/workqueue.c	2024-07-15 16:47:37.000000000 +0200
+@@ -1248,6 +1248,7 @@ static bool kick_pool(struct worker_pool
+ 
+ 	p = worker->task;
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 	/*
+ 	 * Idle @worker is about to execute @work and waking up provides an
+@@ -1277,6 +1278,8 @@ static bool kick_pool(struct worker_pool
+ 		}
+ 	}
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
++
+ 	wake_up_process(p);
+ 	return true;
+ }
+@@ -1405,7 +1408,11 @@ void wq_worker_running(struct task_struc
+ 	 * CPU intensive auto-detection cares about how long a work item hogged
+ 	 * CPU without sleeping. Reset the starting timestamp on wakeup.
+ 	 */
++#ifdef CONFIG_SCHED_ALT
++	worker->current_at = worker->task->sched_time;
++#else
+ 	worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 
+ 	WRITE_ONCE(worker->sleeping, 0);
+ }
+@@ -1490,7 +1497,11 @@ void wq_worker_tick(struct task_struct *
+ 	 * We probably want to make this prettier in the future.
+ 	 */
+ 	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
++#ifdef CONFIG_SCHED_ALT
++	    worker->task->sched_time - worker->current_at <
++#else
+ 	    worker->task->se.sum_exec_runtime - worker->current_at <
++#endif
+ 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
+ 		return;
+ 
+@@ -3176,7 +3187,11 @@ __acquires(&pool->lock)
+ 	worker->current_func = work->func;
+ 	worker->current_pwq = pwq;
+ 	if (worker->task)
++#ifdef CONFIG_SCHED_ALT
++		worker->current_at = worker->task->sched_time;
++#else
+ 		worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 	work_data = *work_data_bits(work);
+ 	worker->current_color = get_work_color(work_data);
+ 

diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 00000000..6dc48eec
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,13 @@
+--- a/init/Kconfig	2023-02-13 08:16:09.534315265 -0500
++++ b/init/Kconfig	2023-02-13 08:17:24.130237204 -0500
+@@ -867,8 +867,9 @@ config UCLAMP_BUCKETS_COUNT
+ 	  If in doubt, use the default value.
+ 
+ menuconfig SCHED_ALT
++	depends on X86_64
+ 	bool "Alternative CPU Schedulers"
+-	default y
++	default n
+ 	help
+ 	  This feature enable alternative CPU scheduler"
+ 
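
To sanity-check the BMQ queue-index math added in kernel/sched/bmq.h above, here is a small
standalone sketch of task_sched_prio(). It is an illustration only, not part of the patch:
the constant values (MIN_NORMAL_PRIO, MIN_SCHED_NORMAL_PRIO, the nice-0 priority) are
assumptions based on a reading of the alt_sched headers and may not match the tree exactly.

/* Illustration only: mirrors the mapping in task_sched_prio() from bmq.h.
 * Constant values below are assumed, not quoted from the patch. */
#include <stdio.h>

#define MIN_NORMAL_PRIO		128	/* assumed: first non-RT priority */
#define MIN_SCHED_NORMAL_PRIO	32	/* assumed: first SCHED_NORMAL level */

static int task_sched_prio(int prio, int boost_prio)
{
	/* RT priorities collapse 4:1; normal tasks fold nice and boost 2:1 */
	return (prio < MIN_NORMAL_PRIO) ? (prio >> 2) :
		MIN_SCHED_NORMAL_PRIO + (prio + boost_prio - MIN_NORMAL_PRIO) / 2;
}

int main(void)
{
	int nice0 = MIN_NORMAL_PRIO + 20;	/* assumed nice-0 static priority */

	printf("nice 0, no boost  -> level %d\n", task_sched_prio(nice0, 0));
	printf("nice 0, boost -12 -> level %d\n", task_sched_prio(nice0, -12));
	printf("RT prio 4         -> level %d\n", task_sched_prio(4, 0));
	return 0;
}

With these assumed values a wakeup boost moves a nice-0 task several levels ahead of an
unboosted one, while the overall layout (RT levels, SCHED_NORMAL levels, one idle level)
stays within the SCHED_LEVELS = 64 + 1 range defined in alt_sched.h.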


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-07-19 22:35 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-07-19 22:35 UTC (permalink / raw
  To: gentoo-commits

commit:     f54ffc75e5aac69d195787d1cebb939136f28b28
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 19 22:35:08 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jul 19 22:35:08 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f54ffc75

ext4: use memtostr_pad() for s_volume_name

Bug: https://bugs.gentoo.org/936269

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                      |  4 ++++
 1900_ext4-memtostr_pad-fix.patch | 51 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+)

diff --git a/0000_README b/0000_README
index e017d0cb..f46d7e17 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:	  https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:	  prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
+Patch:  1900_ext4-memtostr_pad-fix.patch
+From:	  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+Desc:	  ext4: use memtostr_pad() for s_volume_name
+
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1900_ext4-memtostr_pad-fix.patch b/1900_ext4-memtostr_pad-fix.patch
new file mode 100644
index 00000000..1c32fc0c
--- /dev/null
+++ b/1900_ext4-memtostr_pad-fix.patch
@@ -0,0 +1,51 @@
+From be27cd64461c45a6088a91a04eba5cd44e1767ef Mon Sep 17 00:00:00 2001
+From: Kees Cook <keescook@chromium.org>
+Date: Thu, 23 May 2024 15:54:12 -0700
+Subject: ext4: use memtostr_pad() for s_volume_name
+
+As with the other strings in struct ext4_super_block, s_volume_name is
+not NUL terminated. The other strings were marked in commit 072ebb3bffe6
+("ext4: add nonstring annotations to ext4.h"). Using strscpy() isn't
+the right replacement for strncpy(); it should use memtostr_pad()
+instead.
+
+Reported-by: syzbot+50835f73143cc2905b9e@syzkaller.appspotmail.com
+Closes: https://lore.kernel.org/all/00000000000019f4c00619192c05@google.com/
+Fixes: 744a56389f73 ("ext4: replace deprecated strncpy with alternatives")
+Signed-off-by: Kees Cook <keescook@chromium.org>
+Link: https://patch.msgid.link/20240523225408.work.904-kees@kernel.org
+Signed-off-by: Theodore Ts'o <tytso@mit.edu>
+---
+ fs/ext4/ext4.h  | 2 +-
+ fs/ext4/ioctl.c | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 983dad8c07ecd1..efed7f09876de9 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1347,7 +1347,7 @@ struct ext4_super_block {
+ /*60*/	__le32	s_feature_incompat;	/* incompatible feature set */
+ 	__le32	s_feature_ro_compat;	/* readonly-compatible feature set */
+ /*68*/	__u8	s_uuid[16];		/* 128-bit uuid for volume */
+-/*78*/	char	s_volume_name[EXT4_LABEL_MAX];	/* volume name */
++/*78*/	char	s_volume_name[EXT4_LABEL_MAX] __nonstring; /* volume name */
+ /*88*/	char	s_last_mounted[64] __nonstring;	/* directory where last mounted */
+ /*C8*/	__le32	s_algorithm_usage_bitmap; /* For compression */
+ 	/*
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index dab7acd4970923..e8bf5972dd47bf 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -1151,7 +1151,7 @@ static int ext4_ioctl_getlabel(struct ext4_sb_info *sbi, char __user *user_label
+ 	BUILD_BUG_ON(EXT4_LABEL_MAX >= FSLABEL_MAX);
+ 
+ 	lock_buffer(sbi->s_sbh);
+-	strscpy_pad(label, sbi->s_es->s_volume_name);
++	memtostr_pad(label, sbi->s_es->s_volume_name);
+ 	unlock_buffer(sbi->s_sbh);
+ 
+ 	if (copy_to_user(user_label, label, sizeof(label)))
+-- 
+cgit 1.2.3-korg
+
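
For context on the ext4 change above: s_volume_name is a fixed 16-byte field that may
legitimately use all 16 bytes with no terminating NUL, so a copy helper that expects a
NUL-terminated source can read past the field. The sketch below only illustrates the
difference in userspace; it is not kernel code, and the open-coded copy merely approximates
what memtostr_pad() is documented to do (bounded copy plus NUL padding of the destination).

/* Illustration only: copying a fixed-size, possibly non-NUL-terminated
 * label safely, roughly what memtostr_pad() provides in the kernel. */
#include <stdio.h>
#include <string.h>

#define LABEL_MAX 16

int main(void)
{
	/* A label that fills the whole field: no NUL byte inside it. */
	char volume_name[LABEL_MAX] = { '0','1','2','3','4','5','6','7',
					'8','9','A','B','C','D','E','F' };
	char label[LABEL_MAX + 1];

	/* A plain str*-style copy would scan past volume_name[15] looking
	 * for a NUL. Instead: bound the length to the source field size,
	 * copy that many bytes, and NUL-pad the destination. */
	size_t n = strnlen(volume_name, sizeof(volume_name));

	memset(label, 0, sizeof(label));
	memcpy(label, volume_name, n);

	printf("label: \"%s\"\n", label);
	return 0;
}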


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-07-24 16:43 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-07-24 16:43 UTC (permalink / raw
  To: gentoo-commits

commit:     ec4e6ed176fc62390cc62641acf5ad2a33517c8b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 24 16:42:59 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 24 16:42:59 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ec4e6ed1

Linux patch 6.10.1

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |   4 +
 1001_linux-6.10.1.patch | 342 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 346 insertions(+)

diff --git a/0000_README b/0000_README
index f46d7e17..b671bc06 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-6.10.1.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.1
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1001_linux-6.10.1.patch b/1001_linux-6.10.1.patch
new file mode 100644
index 00000000..f0878e5d
--- /dev/null
+++ b/1001_linux-6.10.1.patch
@@ -0,0 +1,342 @@
+diff --git a/Makefile b/Makefile
+index 3d10e3aadeda2..9ae12a6c0ece2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/drivers/char/tpm/tpm2-sessions.c b/drivers/char/tpm/tpm2-sessions.c
+index 2281d55df5456..d3521aadd43ee 100644
+--- a/drivers/char/tpm/tpm2-sessions.c
++++ b/drivers/char/tpm/tpm2-sessions.c
+@@ -746,15 +746,16 @@ int tpm_buf_check_hmac_response(struct tpm_chip *chip, struct tpm_buf *buf,
+ 	struct tpm2_auth *auth = chip->auth;
+ 	off_t offset_s, offset_p;
+ 	u8 rphash[SHA256_DIGEST_SIZE];
+-	u32 attrs;
++	u32 attrs, cc;
+ 	struct sha256_state sctx;
+ 	u16 tag = be16_to_cpu(head->tag);
+-	u32 cc = be32_to_cpu(auth->ordinal);
+ 	int parm_len, len, i, handles;
+ 
+ 	if (!auth)
+ 		return rc;
+ 
++	cc = be32_to_cpu(auth->ordinal);
++
+ 	if (auth->session >= TPM_HEADER_SIZE) {
+ 		WARN(1, "tpm session not filled correctly\n");
+ 		goto out;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tt.c b/drivers/net/wireless/intel/iwlwifi/mvm/tt.c
+index 61a4638d1be2f..237cb1ef79759 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tt.c
+@@ -622,7 +622,12 @@ static int iwl_mvm_tzone_get_temp(struct thermal_zone_device *device,
+ 
+ 	if (!iwl_mvm_firmware_running(mvm) ||
+ 	    mvm->fwrt.cur_fw_img != IWL_UCODE_REGULAR) {
+-		ret = -ENODATA;
++		/*
++		 * Tell the core that there is no valid temperature value to
++		 * return, but it need not worry about this.
++		 */
++		*temperature = THERMAL_TEMP_INVALID;
++		ret = 0;
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index ecc748d15eb7c..4e7fec406ee59 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -300,8 +300,6 @@ static void monitor_thermal_zone(struct thermal_zone_device *tz)
+ 		thermal_zone_device_set_polling(tz, tz->passive_delay_jiffies);
+ 	else if (tz->polling_delay_jiffies)
+ 		thermal_zone_device_set_polling(tz, tz->polling_delay_jiffies);
+-	else if (tz->temperature == THERMAL_TEMP_INVALID)
+-		thermal_zone_device_set_polling(tz, msecs_to_jiffies(THERMAL_RECHECK_DELAY_MS));
+ }
+ 
+ static struct thermal_governor *thermal_get_tz_governor(struct thermal_zone_device *tz)
+@@ -382,7 +380,7 @@ static void handle_thermal_trip(struct thermal_zone_device *tz,
+ 	td->threshold = trip->temperature;
+ 
+ 	if (tz->last_temperature >= old_threshold &&
+-	    tz->last_temperature != THERMAL_TEMP_INVALID) {
++	    tz->last_temperature != THERMAL_TEMP_INIT) {
+ 		/*
+ 		 * Mitigation is under way, so it needs to stop if the zone
+ 		 * temperature falls below the low temperature of the trip.
+@@ -417,27 +415,6 @@ static void handle_thermal_trip(struct thermal_zone_device *tz,
+ 	}
+ }
+ 
+-static void update_temperature(struct thermal_zone_device *tz)
+-{
+-	int temp, ret;
+-
+-	ret = __thermal_zone_get_temp(tz, &temp);
+-	if (ret) {
+-		if (ret != -EAGAIN)
+-			dev_warn(&tz->device,
+-				 "failed to read out thermal zone (%d)\n",
+-				 ret);
+-		return;
+-	}
+-
+-	tz->last_temperature = tz->temperature;
+-	tz->temperature = temp;
+-
+-	trace_thermal_temperature(tz);
+-
+-	thermal_genl_sampling_temp(tz->id, temp);
+-}
+-
+ static void thermal_zone_device_check(struct work_struct *work)
+ {
+ 	struct thermal_zone_device *tz = container_of(work, struct
+@@ -452,7 +429,7 @@ static void thermal_zone_device_init(struct thermal_zone_device *tz)
+ 
+ 	INIT_DELAYED_WORK(&tz->poll_queue, thermal_zone_device_check);
+ 
+-	tz->temperature = THERMAL_TEMP_INVALID;
++	tz->temperature = THERMAL_TEMP_INIT;
+ 	tz->passive = 0;
+ 	tz->prev_low_trip = -INT_MAX;
+ 	tz->prev_high_trip = INT_MAX;
+@@ -501,6 +478,7 @@ void __thermal_zone_device_update(struct thermal_zone_device *tz,
+ 	struct thermal_trip_desc *td;
+ 	LIST_HEAD(way_down_list);
+ 	LIST_HEAD(way_up_list);
++	int temp, ret;
+ 
+ 	if (tz->suspended)
+ 		return;
+@@ -508,10 +486,29 @@ void __thermal_zone_device_update(struct thermal_zone_device *tz,
+ 	if (!thermal_zone_device_is_enabled(tz))
+ 		return;
+ 
+-	update_temperature(tz);
++	ret = __thermal_zone_get_temp(tz, &temp);
++	if (ret) {
++		if (ret != -EAGAIN)
++			dev_info(&tz->device, "Temperature check failed (%d)\n", ret);
+ 
+-	if (tz->temperature == THERMAL_TEMP_INVALID)
++		thermal_zone_device_set_polling(tz, msecs_to_jiffies(THERMAL_RECHECK_DELAY_MS));
++		return;
++	} else if (temp <= THERMAL_TEMP_INVALID) {
++		/*
++		 * Special case: No valid temperature value is available, but
++		 * the zone owner does not want the core to do anything about
++		 * it.  Continue regular zone polling if needed, so that this
++		 * function can be called again, but skip everything else.
++		 */
+ 		goto monitor;
++	}
++
++	tz->last_temperature = tz->temperature;
++	tz->temperature = temp;
++
++	trace_thermal_temperature(tz);
++
++	thermal_genl_sampling_temp(tz->id, temp);
+ 
+ 	__thermal_zone_set_trips(tz);
+ 
+diff --git a/drivers/thermal/thermal_core.h b/drivers/thermal/thermal_core.h
+index 94eeb4011a481..5afd541d54b0b 100644
+--- a/drivers/thermal/thermal_core.h
++++ b/drivers/thermal/thermal_core.h
+@@ -133,6 +133,9 @@ struct thermal_zone_device {
+ 	struct thermal_trip_desc trips[] __counted_by(num_trips);
+ };
+ 
++/* Initial thermal zone temperature. */
++#define THERMAL_TEMP_INIT	INT_MIN
++
+ /*
+  * Default delay after a failing thermal zone temperature check before
+  * attempting to check it again.
+diff --git a/drivers/thermal/thermal_helpers.c b/drivers/thermal/thermal_helpers.c
+index d9f4e26ec1257..36f872b840ba8 100644
+--- a/drivers/thermal/thermal_helpers.c
++++ b/drivers/thermal/thermal_helpers.c
+@@ -140,6 +140,8 @@ int thermal_zone_get_temp(struct thermal_zone_device *tz, int *temp)
+ 	}
+ 
+ 	ret = __thermal_zone_get_temp(tz, temp);
++	if (!ret && *temp <= THERMAL_TEMP_INVALID)
++		ret = -ENODATA;
+ 
+ unlock:
+ 	mutex_unlock(&tz->lock);
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 983dad8c07ecd..efed7f09876de 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1347,7 +1347,7 @@ struct ext4_super_block {
+ /*60*/	__le32	s_feature_incompat;	/* incompatible feature set */
+ 	__le32	s_feature_ro_compat;	/* readonly-compatible feature set */
+ /*68*/	__u8	s_uuid[16];		/* 128-bit uuid for volume */
+-/*78*/	char	s_volume_name[EXT4_LABEL_MAX];	/* volume name */
++/*78*/	char	s_volume_name[EXT4_LABEL_MAX] __nonstring; /* volume name */
+ /*88*/	char	s_last_mounted[64] __nonstring;	/* directory where last mounted */
+ /*C8*/	__le32	s_algorithm_usage_bitmap; /* For compression */
+ 	/*
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index dab7acd497092..e8bf5972dd47b 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -1151,7 +1151,7 @@ static int ext4_ioctl_getlabel(struct ext4_sb_info *sbi, char __user *user_label
+ 	BUILD_BUG_ON(EXT4_LABEL_MAX >= FSLABEL_MAX);
+ 
+ 	lock_buffer(sbi->s_sbh);
+-	strscpy_pad(label, sbi->s_es->s_volume_name);
++	memtostr_pad(label, sbi->s_es->s_volume_name);
+ 	unlock_buffer(sbi->s_sbh);
+ 
+ 	if (copy_to_user(user_label, label, sizeof(label)))
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index 6397fdefd876d..c92937bed1331 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -1359,7 +1359,7 @@ ssize_t cifs_file_copychunk_range(unsigned int xid,
+ 	target_tcon = tlink_tcon(smb_file_target->tlink);
+ 
+ 	if (src_tcon->ses != target_tcon->ses) {
+-		cifs_dbg(VFS, "source and target of copy not on same server\n");
++		cifs_dbg(FYI, "source and target of copy not on same server\n");
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 1374635e89fae..04ec1b9737a89 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -123,6 +123,11 @@ static void cifs_issue_write(struct netfs_io_subrequest *subreq)
+ 	goto out;
+ }
+ 
++static void cifs_netfs_invalidate_cache(struct netfs_io_request *wreq)
++{
++	cifs_invalidate_cache(wreq->inode, 0);
++}
++
+ /*
+  * Split the read up according to how many credits we can get for each piece.
+  * It's okay to sleep here if we need to wait for more credit to become
+@@ -307,6 +312,7 @@ const struct netfs_request_ops cifs_req_ops = {
+ 	.begin_writeback	= cifs_begin_writeback,
+ 	.prepare_write		= cifs_prepare_write,
+ 	.issue_write		= cifs_issue_write,
++	.invalidate_cache	= cifs_netfs_invalidate_cache,
+ };
+ 
+ /*
+@@ -2358,13 +2364,18 @@ void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t
+ 				      bool was_async)
+ {
+ 	struct netfs_io_request *wreq = wdata->rreq;
+-	loff_t new_server_eof;
++	struct netfs_inode *ictx = netfs_inode(wreq->inode);
++	loff_t wrend;
+ 
+ 	if (result > 0) {
+-		new_server_eof = wdata->subreq.start + wdata->subreq.transferred + result;
++		wrend = wdata->subreq.start + wdata->subreq.transferred + result;
+ 
+-		if (new_server_eof > netfs_inode(wreq->inode)->remote_i_size)
+-			netfs_resize_file(netfs_inode(wreq->inode), new_server_eof, true);
++		if (wrend > ictx->zero_point &&
++		    (wdata->rreq->origin == NETFS_UNBUFFERED_WRITE ||
++		     wdata->rreq->origin == NETFS_DIO_WRITE))
++			ictx->zero_point = wrend;
++		if (wrend > ictx->remote_i_size)
++			netfs_resize_file(ictx, wrend, true);
+ 	}
+ 
+ 	netfs_write_subrequest_terminated(&wdata->subreq, result, was_async);
+@@ -2877,6 +2888,7 @@ cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to)
+ 		rc = netfs_start_io_direct(inode);
+ 		if (rc < 0)
+ 			goto out;
++		rc = -EACCES;
+ 		down_read(&cinode->lock_sem);
+ 		if (!cifs_find_lock_conflict(
+ 			    cfile, iocb->ki_pos, iov_iter_count(to),
+@@ -2889,6 +2901,7 @@ cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to)
+ 		rc = netfs_start_io_read(inode);
+ 		if (rc < 0)
+ 			goto out;
++		rc = -EACCES;
+ 		down_read(&cinode->lock_sem);
+ 		if (!cifs_find_lock_conflict(
+ 			    cfile, iocb->ki_pos, iov_iter_count(to),
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 2ae2dbb6202b3..bb84a89e59059 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -4859,9 +4859,6 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
+ 	struct cifs_io_parms *io_parms = NULL;
+ 	int credit_request;
+ 
+-	if (!wdata->server || test_bit(NETFS_SREQ_RETRYING, &wdata->subreq.flags))
+-		server = wdata->server = cifs_pick_channel(tcon->ses);
+-
+ 	/*
+ 	 * in future we may get cifs_io_parms passed in from the caller,
+ 	 * but for now we construct it here...
+diff --git a/include/sound/cs35l56.h b/include/sound/cs35l56.h
+index 1a3c6f66f6205..dc627ebf01df8 100644
+--- a/include/sound/cs35l56.h
++++ b/include/sound/cs35l56.h
+@@ -209,7 +209,7 @@
+ 
+ /* CS35L56_MAIN_RENDER_USER_VOLUME */
+ #define CS35L56_MAIN_RENDER_USER_VOLUME_MIN		-400
+-#define CS35L56_MAIN_RENDER_USER_VOLUME_MAX		400
++#define CS35L56_MAIN_RENDER_USER_VOLUME_MAX		48
+ #define CS35L56_MAIN_RENDER_USER_VOLUME_MASK		0x0000FFC0
+ #define CS35L56_MAIN_RENDER_USER_VOLUME_SHIFT		6
+ #define CS35L56_MAIN_RENDER_USER_VOLUME_SIGNBIT		9
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index d2945c9c812b5..c95dc1736dd93 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -657,8 +657,10 @@ static int io_alloc_pbuf_ring(struct io_ring_ctx *ctx,
+ 	ring_size = reg->ring_entries * sizeof(struct io_uring_buf_ring);
+ 
+ 	bl->buf_ring = io_pages_map(&bl->buf_pages, &bl->buf_nr_pages, ring_size);
+-	if (!bl->buf_ring)
++	if (IS_ERR(bl->buf_ring)) {
++		bl->buf_ring = NULL;
+ 		return -ENOMEM;
++	}
+ 
+ 	bl->is_buf_ring = 1;
+ 	bl->is_mmap = 1;
+diff --git a/sound/soc/codecs/cs35l56.c b/sound/soc/codecs/cs35l56.c
+index 758dfdf9d3eac..7f2f2f8c13fae 100644
+--- a/sound/soc/codecs/cs35l56.c
++++ b/sound/soc/codecs/cs35l56.c
+@@ -196,7 +196,11 @@ static const struct snd_kcontrol_new cs35l56_controls[] = {
+ 		       cs35l56_dspwait_get_volsw, cs35l56_dspwait_put_volsw),
+ 	SOC_SINGLE_S_EXT_TLV("Speaker Volume",
+ 			     CS35L56_MAIN_RENDER_USER_VOLUME,
+-			     6, -400, 400, 9, 0,
++			     CS35L56_MAIN_RENDER_USER_VOLUME_SHIFT,
++			     CS35L56_MAIN_RENDER_USER_VOLUME_MIN,
++			     CS35L56_MAIN_RENDER_USER_VOLUME_MAX,
++			     CS35L56_MAIN_RENDER_USER_VOLUME_SIGNBIT,
++			     0,
+ 			     cs35l56_dspwait_get_volsw,
+ 			     cs35l56_dspwait_put_volsw,
+ 			     vol_tlv),


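The thermal changes in the 6.10.1 patch above share one idea: a failed temperature read now schedules a recheck, while a driver that has no data (the iwlwifi hunk) reports success with an "invalid" temperature so the core keeps polling but skips trip handling. Below is a minimal stand-alone C sketch of that decision, using stand-in constant values rather than the kernel's definitions:

#include <errno.h>
#include <limits.h>
#include <stdio.h>

/* Stand-ins: the kernel defines THERMAL_TEMP_INVALID, THERMAL_TEMP_INIT
 * (INT_MIN in the thermal_core.h hunk) and THERMAL_RECHECK_DELAY_MS in its
 * own headers; the values here are only for illustration. */
#define TEMP_INVALID	(-274000)
#define TEMP_INIT	INT_MIN
#define RECHECK_MS	250

/*
 * Model of the three outcomes the 6.10.1 thermal hunks distinguish:
 *  - read error            -> log (unless -EAGAIN) and re-poll after RECHECK_MS
 *  - "invalid" temperature -> keep regular polling, but skip trip handling
 *  - valid temperature     -> record it and handle trips as before
 */
static void zone_update(int read_ret, int temp, int *zone_temp, int *last_temp)
{
	if (read_ret) {
		if (read_ret != -EAGAIN)
			printf("Temperature check failed (%d)\n", read_ret);
		printf("  -> recheck in %d ms\n", RECHECK_MS);
		return;
	}

	if (temp <= TEMP_INVALID) {
		printf("no valid temperature, regular polling only\n");
		return;
	}

	*last_temp = *zone_temp;
	*zone_temp = temp;
	printf("recorded %d mC, running trip handling\n", temp);
}

int main(void)
{
	int zone_temp = TEMP_INIT, last_temp = TEMP_INIT;

	zone_update(-EIO, 0, &zone_temp, &last_temp);		/* hard read failure */
	zone_update(0, TEMP_INVALID, &zone_temp, &last_temp);	/* iwlwifi-style "no data" */
	zone_update(0, 45000, &zone_temp, &last_temp);		/* normal reading */
	return 0;
}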

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-07-24 16:44 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-07-24 16:44 UTC (permalink / raw
  To: gentoo-commits

commit:     ca2aa06ff25b1445c67ec2d8940a1336dc5e216f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 24 16:44:29 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 24 16:44:29 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ca2aa06f

Remove redundant patch

Removed:
1900_ext4-memtostr_pad-fix.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                      |  4 ----
 1900_ext4-memtostr_pad-fix.patch | 51 ----------------------------------------
 2 files changed, 55 deletions(-)

diff --git a/0000_README b/0000_README
index b671bc06..2560b5da 100644
--- a/0000_README
+++ b/0000_README
@@ -59,10 +59,6 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:	  https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:	  prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
-Patch:  1900_ext4-memtostr_pad-fix.patch
-From:	  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
-Desc:	  ext4: use memtostr_pad() for s_volume_name
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1900_ext4-memtostr_pad-fix.patch b/1900_ext4-memtostr_pad-fix.patch
deleted file mode 100644
index 1c32fc0c..00000000
--- a/1900_ext4-memtostr_pad-fix.patch
+++ /dev/null
@@ -1,51 +0,0 @@
-From be27cd64461c45a6088a91a04eba5cd44e1767ef Mon Sep 17 00:00:00 2001
-From: Kees Cook <keescook@chromium.org>
-Date: Thu, 23 May 2024 15:54:12 -0700
-Subject: ext4: use memtostr_pad() for s_volume_name
-
-As with the other strings in struct ext4_super_block, s_volume_name is
-not NUL terminated. The other strings were marked in commit 072ebb3bffe6
-("ext4: add nonstring annotations to ext4.h"). Using strscpy() isn't
-the right replacement for strncpy(); it should use memtostr_pad()
-instead.
-
-Reported-by: syzbot+50835f73143cc2905b9e@syzkaller.appspotmail.com
-Closes: https://lore.kernel.org/all/00000000000019f4c00619192c05@google.com/
-Fixes: 744a56389f73 ("ext4: replace deprecated strncpy with alternatives")
-Signed-off-by: Kees Cook <keescook@chromium.org>
-Link: https://patch.msgid.link/20240523225408.work.904-kees@kernel.org
-Signed-off-by: Theodore Ts'o <tytso@mit.edu>
----
- fs/ext4/ext4.h  | 2 +-
- fs/ext4/ioctl.c | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
-index 983dad8c07ecd1..efed7f09876de9 100644
---- a/fs/ext4/ext4.h
-+++ b/fs/ext4/ext4.h
-@@ -1347,7 +1347,7 @@ struct ext4_super_block {
- /*60*/	__le32	s_feature_incompat;	/* incompatible feature set */
- 	__le32	s_feature_ro_compat;	/* readonly-compatible feature set */
- /*68*/	__u8	s_uuid[16];		/* 128-bit uuid for volume */
--/*78*/	char	s_volume_name[EXT4_LABEL_MAX];	/* volume name */
-+/*78*/	char	s_volume_name[EXT4_LABEL_MAX] __nonstring; /* volume name */
- /*88*/	char	s_last_mounted[64] __nonstring;	/* directory where last mounted */
- /*C8*/	__le32	s_algorithm_usage_bitmap; /* For compression */
- 	/*
-diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
-index dab7acd4970923..e8bf5972dd47bf 100644
---- a/fs/ext4/ioctl.c
-+++ b/fs/ext4/ioctl.c
-@@ -1151,7 +1151,7 @@ static int ext4_ioctl_getlabel(struct ext4_sb_info *sbi, char __user *user_label
- 	BUILD_BUG_ON(EXT4_LABEL_MAX >= FSLABEL_MAX);
- 
- 	lock_buffer(sbi->s_sbh);
--	strscpy_pad(label, sbi->s_es->s_volume_name);
-+	memtostr_pad(label, sbi->s_es->s_volume_name);
- 	unlock_buffer(sbi->s_sbh);
- 
- 	if (copy_to_user(user_label, label, sizeof(label)))
--- 
-cgit 1.2.3-korg
-
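The patch removed above became redundant because the same fix ships in 6.10.1 (see the ext4 hunks in the previous message). The underlying issue is easy to reproduce outside the kernel: s_volume_name is a fixed-size field that need not be NUL-terminated, so a strlen()-based copy such as strscpy_pad() can read past it, while memtostr_pad() bounds the read to the field. A small userspace sketch of that memtostr_pad()-style behaviour, with a stand-in helper and field size rather than the kernel's:

#include <stdio.h>
#include <string.h>

#define LABEL_MAX 16	/* stand-in for EXT4_LABEL_MAX */

/* memtostr_pad()-like helper: copy at most the source field size, always
 * NUL-terminate, and zero-pad the rest of the destination. Unlike a
 * strlen()-based copy, it never reads past the fixed-size source field. */
static void field_to_str_pad(char *dst, size_t dst_len,
			     const char *src, size_t src_len)
{
	size_t n = 0;

	while (n < src_len && n < dst_len - 1 && src[n] != '\0')
		n++;
	memcpy(dst, src, n);
	memset(dst + n, 0, dst_len - n);
}

int main(void)
{
	/* A label that uses all 16 bytes: no terminating NUL in the field. */
	char s_volume_name[LABEL_MAX] = {
		'g','e','n','t','o','o','-','r','o','o','t','f','s','-','0','1'
	};
	char label[LABEL_MAX + 1];

	field_to_str_pad(label, sizeof(label), s_volume_name, sizeof(s_volume_name));
	printf("label: \"%s\"\n", label);	/* gentoo-rootfs-01 */
	return 0;
}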



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-07-24 16:59 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-07-24 16:59 UTC (permalink / raw
  To: gentoo-commits

commit:     08313eaadefe3db6afdc29c1020c20543980ad33
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 24 16:58:34 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 24 16:58:34 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=08313eaa

Removal of BMQ due to compilation error

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |     8 -
 ...BMQ-and-PDS-io-scheduler-holgerh-v6.10-r0.patch | 11474 -------------------
 5021_BMQ-and-PDS-gentoo-defaults.patch             |    13 -
 3 files changed, 11495 deletions(-)

diff --git a/0000_README b/0000_README
index 2560b5da..d91abe89 100644
--- a/0000_README
+++ b/0000_README
@@ -86,11 +86,3 @@ Desc:   Add Gentoo Linux support config settings and defaults.
 Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
-
-Patch:  5020_BMQ-and-PDS-io-scheduler-holgerh-v6.10-r0.patch
-From:   https://github.com/hhoffstaette/kernel-patches
-Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
-
-Patch:  5021_BMQ-and-PDS-gentoo-defaults.patch
-From:   https://gitweb.gentoo.org/proj/linux-patches.git/
-Desc:   Set defaults for BMQ. default to n

diff --git a/5020_BMQ-and-PDS-io-scheduler-holgerh-v6.10-r0.patch b/5020_BMQ-and-PDS-io-scheduler-holgerh-v6.10-r0.patch
deleted file mode 100644
index 4ebad9ef..00000000
--- a/5020_BMQ-and-PDS-io-scheduler-holgerh-v6.10-r0.patch
+++ /dev/null
@@ -1,11474 +0,0 @@
-diff --new-file -rup linux-6.10/Documentation/admin-guide/sysctl/kernel.rst linux-prjc-linux-6.10.y-prjc/Documentation/admin-guide/sysctl/kernel.rst
---- linux-6.10/Documentation/admin-guide/sysctl/kernel.rst	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/Documentation/admin-guide/sysctl/kernel.rst	2024-07-15 16:47:37.000000000 +0200
-@@ -1673,3 +1673,12 @@ is 10 seconds.
- 
- The softlockup threshold is (``2 * watchdog_thresh``). Setting this
- tunable to zero will disable lockup detection altogether.
-+
-+yield_type:
-+===========
-+
-+BMQ/PDS CPU scheduler only. This determines what type of yield calls
-+to sched_yield() will be performed.
-+
-+  0 - No yield.
-+  1 - Requeue task. (default)
-diff --new-file -rup linux-6.10/Documentation/scheduler/sched-BMQ.txt linux-prjc-linux-6.10.y-prjc/Documentation/scheduler/sched-BMQ.txt
---- linux-6.10/Documentation/scheduler/sched-BMQ.txt	1970-01-01 01:00:00.000000000 +0100
-+++ linux-prjc-linux-6.10.y-prjc/Documentation/scheduler/sched-BMQ.txt	2024-07-15 16:47:37.000000000 +0200
-@@ -0,0 +1,110 @@
-+                         BitMap queue CPU Scheduler
-+                         --------------------------
-+
-+CONTENT
-+========
-+
-+ Background
-+ Design
-+   Overview
-+   Task policy
-+   Priority management
-+   BitMap Queue
-+   CPU Assignment and Migration
-+
-+
-+Background
-+==========
-+
-+BitMap Queue CPU scheduler, referred to as BMQ from here on, is an evolution
-+of previous Priority and Deadline based Skiplist multiple queue scheduler(PDS),
-+and inspired by Zircon scheduler. The goal of it is to keep the scheduler code
-+simple, while efficiency and scalable for interactive tasks, such as desktop,
-+movie playback and gaming etc.
-+
-+Design
-+======
-+
-+Overview
-+--------
-+
-+BMQ use per CPU run queue design, each CPU(logical) has it's own run queue,
-+each CPU is responsible for scheduling the tasks that are putting into it's
-+run queue.
-+
-+The run queue is a set of priority queues. Note that these queues are fifo
-+queue for non-rt tasks or priority queue for rt tasks in data structure. See
-+BitMap Queue below for details. BMQ is optimized for non-rt tasks in the fact
-+that most applications are non-rt tasks. No matter the queue is fifo or
-+priority, In each queue is an ordered list of runnable tasks awaiting execution
-+and the data structures are the same. When it is time for a new task to run,
-+the scheduler simply looks the lowest numbered queueue that contains a task,
-+and runs the first task from the head of that queue. And per CPU idle task is
-+also in the run queue, so the scheduler can always find a task to run on from
-+its run queue.
-+
-+Each task will assigned the same timeslice(default 4ms) when it is picked to
-+start running. Task will be reinserted at the end of the appropriate priority
-+queue when it uses its whole timeslice. When the scheduler selects a new task
-+from the priority queue it sets the CPU's preemption timer for the remainder of
-+the previous timeslice. When that timer fires the scheduler will stop execution
-+on that task, select another task and start over again.
-+
-+If a task blocks waiting for a shared resource then it's taken out of its
-+priority queue and is placed in a wait queue for the shared resource. When it
-+is unblocked it will be reinserted in the appropriate priority queue of an
-+eligible CPU.
-+
-+Task policy
-+-----------
-+
-+BMQ supports DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policy like the
-+mainline CFS scheduler. But BMQ is heavy optimized for non-rt task, that's
-+NORMAL/BATCH/IDLE policy tasks. Below is the implementation detail of each
-+policy.
-+
-+DEADLINE
-+	It is squashed as priority 0 FIFO task.
-+
-+FIFO/RR
-+	All RT tasks share one single priority queue in BMQ run queue designed. The
-+complexity of insert operation is O(n). BMQ is not designed for system runs
-+with major rt policy tasks.
-+
-+NORMAL/BATCH/IDLE
-+	BATCH and IDLE tasks are treated as the same policy. They compete CPU with
-+NORMAL policy tasks, but they just don't boost. To control the priority of
-+NORMAL/BATCH/IDLE tasks, simply use nice level.
-+
-+ISO
-+	ISO policy is not supported in BMQ. Please use nice level -20 NORMAL policy
-+task instead.
-+
-+Priority management
-+-------------------
-+
-+RT tasks have priority from 0-99. For non-rt tasks, there are three different
-+factors used to determine the effective priority of a task. The effective
-+priority being what is used to determine which queue it will be in.
-+
-+The first factor is simply the task’s static priority. Which is assigned from
-+task's nice level, within [-20, 19] in userland's point of view and [0, 39]
-+internally.
-+
-+The second factor is the priority boost. This is a value bounded between
-+[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] used to offset the base priority, it is
-+modified by the following cases:
-+
-+*When a thread has used up its entire timeslice, always deboost its boost by
-+increasing by one.
-+*When a thread gives up cpu control(voluntary or non-voluntary) to reschedule,
-+and its switch-in time(time after last switch and run) below the thredhold
-+based on its priority boost, will boost its boost by decreasing by one buti is
-+capped at 0 (won’t go negative).
-+
-+The intent in this system is to ensure that interactive threads are serviced
-+quickly. These are usually the threads that interact directly with the user
-+and cause user-perceivable latency. These threads usually do little work and
-+spend most of their time blocked awaiting another user event. So they get the
-+priority boost from unblocking while background threads that do most of the
-+processing receive the priority penalty for using their entire timeslice.
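The design notes above boil down to a simple dispatch rule: one FIFO list per priority level plus a bitmap of non-empty levels, and pick-next is "find the lowest set bit, take the head of that list". A compact stand-alone C sketch of that rule, with simplified names and a fixed-size ring in place of the patch's data structures:

#include <stdint.h>
#include <stdio.h>

#define LEVELS		64	/* stand-in for the patch's SCHED_LEVELS */
#define RING_SIZE	8	/* tiny per-level FIFO, enough for the demo */

struct run_queue {
	uint64_t bitmap;			/* bit i set <=> level i non-empty */
	int head[LEVELS], tail[LEVELS];
	int ring[LEVELS][RING_SIZE];		/* task ids, FIFO per level */
};

static void enqueue(struct run_queue *q, int level, int task)
{
	q->ring[level][q->tail[level]++ % RING_SIZE] = task;
	q->bitmap |= 1ULL << level;
}

/* Lowest-numbered non-empty level wins; within a level, strict FIFO order. */
static int pick_next(struct run_queue *q)
{
	int level, task;

	if (!q->bitmap)
		return -1;	/* BMQ keeps the idle task queued, so this never triggers there */

	level = __builtin_ctzll(q->bitmap);	/* GCC/Clang: index of lowest set bit */
	task = q->ring[level][q->head[level]++ % RING_SIZE];
	if (q->head[level] == q->tail[level])
		q->bitmap &= ~(1ULL << level);
	return task;
}

int main(void)
{
	struct run_queue q = { 0 };

	enqueue(&q, 10, 101);	/* ordinary task */
	enqueue(&q, 3, 42);	/* boosted/interactive task at a lower (better) level */
	enqueue(&q, 10, 102);

	for (int i = 0; i < 3; i++)
		printf("picked task %d\n", pick_next(&q));	/* 42, then 101, then 102 */
	return 0;
}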
-diff --new-file -rup linux-6.10/fs/proc/base.c linux-prjc-linux-6.10.y-prjc/fs/proc/base.c
---- linux-6.10/fs/proc/base.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/fs/proc/base.c	2024-07-15 16:47:37.000000000 +0200
-@@ -481,7 +481,7 @@ static int proc_pid_schedstat(struct seq
- 		seq_puts(m, "0 0 0\n");
- 	else
- 		seq_printf(m, "%llu %llu %lu\n",
--		   (unsigned long long)task->se.sum_exec_runtime,
-+		   (unsigned long long)tsk_seruntime(task),
- 		   (unsigned long long)task->sched_info.run_delay,
- 		   task->sched_info.pcount);
- 
-diff --new-file -rup linux-6.10/include/asm-generic/resource.h linux-prjc-linux-6.10.y-prjc/include/asm-generic/resource.h
---- linux-6.10/include/asm-generic/resource.h	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/include/asm-generic/resource.h	2024-07-15 16:47:37.000000000 +0200
-@@ -23,7 +23,7 @@
- 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
- 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
- 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
--	[RLIMIT_NICE]		= { 0, 0 },				\
-+	[RLIMIT_NICE]		= { 30, 30 },				\
- 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
- 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
- }
-diff --new-file -rup linux-6.10/include/linux/sched/deadline.h linux-prjc-linux-6.10.y-prjc/include/linux/sched/deadline.h
---- linux-6.10/include/linux/sched/deadline.h	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/include/linux/sched/deadline.h	2024-07-15 16:47:37.000000000 +0200
-@@ -2,6 +2,25 @@
- #ifndef _LINUX_SCHED_DEADLINE_H
- #define _LINUX_SCHED_DEADLINE_H
- 
-+#ifdef CONFIG_SCHED_ALT
-+
-+static inline int dl_task(struct task_struct *p)
-+{
-+	return 0;
-+}
-+
-+#ifdef CONFIG_SCHED_BMQ
-+#define __tsk_deadline(p)	(0UL)
-+#endif
-+
-+#ifdef CONFIG_SCHED_PDS
-+#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
-+#endif
-+
-+#else
-+
-+#define __tsk_deadline(p)	((p)->dl.deadline)
-+
- /*
-  * SCHED_DEADLINE tasks has negative priorities, reflecting
-  * the fact that any of them has higher prio than RT and
-@@ -23,6 +42,7 @@ static inline int dl_task(struct task_st
- {
- 	return dl_prio(p->prio);
- }
-+#endif /* CONFIG_SCHED_ALT */
- 
- static inline bool dl_time_before(u64 a, u64 b)
- {
-diff --new-file -rup linux-6.10/include/linux/sched/prio.h linux-prjc-linux-6.10.y-prjc/include/linux/sched/prio.h
---- linux-6.10/include/linux/sched/prio.h	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/include/linux/sched/prio.h	2024-07-15 16:47:37.000000000 +0200
-@@ -18,6 +18,28 @@
- #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
- #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
- 
-+#ifdef CONFIG_SCHED_ALT
-+
-+/* Undefine MAX_PRIO and DEFAULT_PRIO */
-+#undef MAX_PRIO
-+#undef DEFAULT_PRIO
-+
-+/* +/- priority levels from the base priority */
-+#ifdef CONFIG_SCHED_BMQ
-+#define MAX_PRIORITY_ADJ	(12)
-+#endif
-+
-+#ifdef CONFIG_SCHED_PDS
-+#define MAX_PRIORITY_ADJ	(0)
-+#endif
-+
-+#define MIN_NORMAL_PRIO		(128)
-+#define NORMAL_PRIO_NUM		(64)
-+#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
-+#define DEFAULT_PRIO		(MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)
-+
-+#endif /* CONFIG_SCHED_ALT */
-+
- /*
-  * Convert user-nice values [ -20 ... 0 ... 19 ]
-  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
-diff --new-file -rup linux-6.10/include/linux/sched/rt.h linux-prjc-linux-6.10.y-prjc/include/linux/sched/rt.h
---- linux-6.10/include/linux/sched/rt.h	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/include/linux/sched/rt.h	2024-07-15 16:47:37.000000000 +0200
-@@ -24,8 +24,10 @@ static inline bool task_is_realtime(stru
- 
- 	if (policy == SCHED_FIFO || policy == SCHED_RR)
- 		return true;
-+#ifndef CONFIG_SCHED_ALT
- 	if (policy == SCHED_DEADLINE)
- 		return true;
-+#endif
- 	return false;
- }
- 
-diff --new-file -rup linux-6.10/include/linux/sched/topology.h linux-prjc-linux-6.10.y-prjc/include/linux/sched/topology.h
---- linux-6.10/include/linux/sched/topology.h	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/include/linux/sched/topology.h	2024-07-15 16:47:37.000000000 +0200
-@@ -244,7 +244,8 @@ static inline bool cpus_share_resources(
- 
- #endif	/* !CONFIG_SMP */
- 
--#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
-+#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
-+	!defined(CONFIG_SCHED_ALT)
- extern void rebuild_sched_domains_energy(void);
- #else
- static inline void rebuild_sched_domains_energy(void)
-diff --new-file -rup linux-6.10/include/linux/sched.h linux-prjc-linux-6.10.y-prjc/include/linux/sched.h
---- linux-6.10/include/linux/sched.h	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/include/linux/sched.h	2024-07-15 16:47:37.000000000 +0200
-@@ -774,9 +774,16 @@ struct task_struct {
- 	struct alloc_tag		*alloc_tag;
- #endif
- 
--#ifdef CONFIG_SMP
-+#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
- 	int				on_cpu;
-+#endif
-+
-+#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_ALT)
- 	struct __call_single_node	wake_entry;
-+#endif
-+
-+#ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- 	unsigned int			wakee_flips;
- 	unsigned long			wakee_flip_decay_ts;
- 	struct task_struct		*last_wakee;
-@@ -790,6 +797,7 @@ struct task_struct {
- 	 */
- 	int				recent_used_cpu;
- 	int				wake_cpu;
-+#endif /* !CONFIG_SCHED_ALT */
- #endif
- 	int				on_rq;
- 
-@@ -798,6 +806,19 @@ struct task_struct {
- 	int				normal_prio;
- 	unsigned int			rt_priority;
- 
-+#ifdef CONFIG_SCHED_ALT
-+	u64				last_ran;
-+	s64				time_slice;
-+	struct list_head		sq_node;
-+#ifdef CONFIG_SCHED_BMQ
-+	int				boost_prio;
-+#endif /* CONFIG_SCHED_BMQ */
-+#ifdef CONFIG_SCHED_PDS
-+	u64				deadline;
-+#endif /* CONFIG_SCHED_PDS */
-+	/* sched_clock time spent running */
-+	u64				sched_time;
-+#else /* !CONFIG_SCHED_ALT */
- 	struct sched_entity		se;
- 	struct sched_rt_entity		rt;
- 	struct sched_dl_entity		dl;
-@@ -809,6 +830,7 @@ struct task_struct {
- 	unsigned long			core_cookie;
- 	unsigned int			core_occupation;
- #endif
-+#endif /* !CONFIG_SCHED_ALT */
- 
- #ifdef CONFIG_CGROUP_SCHED
- 	struct task_group		*sched_task_group;
-@@ -1571,6 +1593,15 @@ struct task_struct {
- 	 */
- };
- 
-+#ifdef CONFIG_SCHED_ALT
-+#define tsk_seruntime(t)		((t)->sched_time)
-+/* replace the uncertian rt_timeout with 0UL */
-+#define tsk_rttimeout(t)		(0UL)
-+#else /* CFS */
-+#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
-+#define tsk_rttimeout(t)	((t)->rt.timeout)
-+#endif /* !CONFIG_SCHED_ALT */
-+
- #define TASK_REPORT_IDLE	(TASK_REPORT + 1)
- #define TASK_REPORT_MAX		(TASK_REPORT_IDLE << 1)
- 
-diff --new-file -rup linux-6.10/init/Kconfig linux-prjc-linux-6.10.y-prjc/init/Kconfig
---- linux-6.10/init/Kconfig	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/init/Kconfig	2024-07-15 16:47:37.000000000 +0200
-@@ -638,6 +638,7 @@ config TASK_IO_ACCOUNTING
- 
- config PSI
- 	bool "Pressure stall information tracking"
-+	depends on !SCHED_ALT
- 	select KERNFS
- 	help
- 	  Collect metrics that indicate how overcommitted the CPU, memory,
-@@ -803,6 +804,7 @@ menu "Scheduler features"
- config UCLAMP_TASK
- 	bool "Enable utilization clamping for RT/FAIR tasks"
- 	depends on CPU_FREQ_GOV_SCHEDUTIL
-+	depends on !SCHED_ALT
- 	help
- 	  This feature enables the scheduler to track the clamped utilization
- 	  of each CPU based on RUNNABLE tasks scheduled on that CPU.
-@@ -849,6 +851,35 @@ config UCLAMP_BUCKETS_COUNT
- 
- 	  If in doubt, use the default value.
- 
-+menuconfig SCHED_ALT
-+	bool "Alternative CPU Schedulers"
-+	default y
-+	help
-+	  This feature enable alternative CPU scheduler"
-+
-+if SCHED_ALT
-+
-+choice
-+	prompt "Alternative CPU Scheduler"
-+	default SCHED_BMQ
-+
-+config SCHED_BMQ
-+	bool "BMQ CPU scheduler"
-+	help
-+	  The BitMap Queue CPU scheduler for excellent interactivity and
-+	  responsiveness on the desktop and solid scalability on normal
-+	  hardware and commodity servers.
-+
-+config SCHED_PDS
-+	bool "PDS CPU scheduler"
-+	help
-+	  The Priority and Deadline based Skip list multiple queue CPU
-+	  Scheduler.
-+
-+endchoice
-+
-+endif
-+
- endmenu
- 
- #
-@@ -914,6 +945,7 @@ config NUMA_BALANCING
- 	depends on ARCH_SUPPORTS_NUMA_BALANCING
- 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
- 	depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
-+	depends on !SCHED_ALT
- 	help
- 	  This option adds support for automatic NUMA aware memory/task placement.
- 	  The mechanism is quite primitive and is based on migrating memory when
-@@ -1015,6 +1047,7 @@ config FAIR_GROUP_SCHED
- 	depends on CGROUP_SCHED
- 	default CGROUP_SCHED
- 
-+if !SCHED_ALT
- config CFS_BANDWIDTH
- 	bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
- 	depends on FAIR_GROUP_SCHED
-@@ -1037,6 +1070,7 @@ config RT_GROUP_SCHED
- 	  realtime bandwidth for them.
- 	  See Documentation/scheduler/sched-rt-group.rst for more information.
- 
-+endif #!SCHED_ALT
- endif #CGROUP_SCHED
- 
- config SCHED_MM_CID
-@@ -1285,6 +1319,7 @@ config CHECKPOINT_RESTORE
- 
- config SCHED_AUTOGROUP
- 	bool "Automatic process group scheduling"
-+	depends on !SCHED_ALT
- 	select CGROUPS
- 	select CGROUP_SCHED
- 	select FAIR_GROUP_SCHED
-diff --new-file -rup linux-6.10/init/init_task.c linux-prjc-linux-6.10.y-prjc/init/init_task.c
---- linux-6.10/init/init_task.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/init/init_task.c	2024-07-15 16:47:37.000000000 +0200
-@@ -70,9 +70,15 @@ struct task_struct init_task __aligned(L
- 	.stack		= init_stack,
- 	.usage		= REFCOUNT_INIT(2),
- 	.flags		= PF_KTHREAD,
-+#ifdef CONFIG_SCHED_ALT
-+	.prio		= DEFAULT_PRIO,
-+	.static_prio	= DEFAULT_PRIO,
-+	.normal_prio	= DEFAULT_PRIO,
-+#else
- 	.prio		= MAX_PRIO - 20,
- 	.static_prio	= MAX_PRIO - 20,
- 	.normal_prio	= MAX_PRIO - 20,
-+#endif
- 	.policy		= SCHED_NORMAL,
- 	.cpus_ptr	= &init_task.cpus_mask,
- 	.user_cpus_ptr	= NULL,
-@@ -85,6 +91,16 @@ struct task_struct init_task __aligned(L
- 	.restart_block	= {
- 		.fn = do_no_restart_syscall,
- 	},
-+#ifdef CONFIG_SCHED_ALT
-+	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
-+#ifdef CONFIG_SCHED_BMQ
-+	.boost_prio	= 0,
-+#endif
-+#ifdef CONFIG_SCHED_PDS
-+	.deadline	= 0,
-+#endif
-+	.time_slice	= HZ,
-+#else
- 	.se		= {
- 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
- 	},
-@@ -92,6 +108,7 @@ struct task_struct init_task __aligned(L
- 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
- 		.time_slice	= RR_TIMESLICE,
- 	},
-+#endif
- 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
- #ifdef CONFIG_SMP
- 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
-diff --new-file -rup linux-6.10/kernel/Kconfig.preempt linux-prjc-linux-6.10.y-prjc/kernel/Kconfig.preempt
---- linux-6.10/kernel/Kconfig.preempt	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/Kconfig.preempt	2024-07-15 16:47:37.000000000 +0200
-@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
- 
- config SCHED_CORE
- 	bool "Core Scheduling for SMT"
--	depends on SCHED_SMT
-+	depends on SCHED_SMT && !SCHED_ALT
- 	help
- 	  This option permits Core Scheduling, a means of coordinated task
- 	  selection across SMT siblings. When enabled -- see
-diff --new-file -rup linux-6.10/kernel/cgroup/cpuset.c linux-prjc-linux-6.10.y-prjc/kernel/cgroup/cpuset.c
---- linux-6.10/kernel/cgroup/cpuset.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/cgroup/cpuset.c	2024-07-15 16:47:37.000000000 +0200
-@@ -846,7 +846,7 @@ out:
- 	return ret;
- }
- 
--#ifdef CONFIG_SMP
-+#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
- /*
-  * Helper routine for generate_sched_domains().
-  * Do cpusets a, b have overlapping effective cpus_allowed masks?
-@@ -1245,7 +1245,7 @@ static void rebuild_sched_domains_locked
- 	/* Have scheduler rebuild the domains */
- 	partition_and_rebuild_sched_domains(ndoms, doms, attr);
- }
--#else /* !CONFIG_SMP */
-+#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
- static void rebuild_sched_domains_locked(void)
- {
- }
-@@ -3301,12 +3301,15 @@ static int cpuset_can_attach(struct cgro
- 				goto out_unlock;
- 		}
- 
-+#ifndef CONFIG_SCHED_ALT
- 		if (dl_task(task)) {
- 			cs->nr_migrate_dl_tasks++;
- 			cs->sum_migrate_dl_bw += task->dl.dl_bw;
- 		}
-+#endif
- 	}
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (!cs->nr_migrate_dl_tasks)
- 		goto out_success;
- 
-@@ -3327,6 +3330,7 @@ static int cpuset_can_attach(struct cgro
- 	}
- 
- out_success:
-+#endif
- 	/*
- 	 * Mark attach is in progress.  This makes validate_change() fail
- 	 * changes which zero cpus/mems_allowed.
-@@ -3350,12 +3354,14 @@ static void cpuset_cancel_attach(struct
- 	if (!cs->attach_in_progress)
- 		wake_up(&cpuset_attach_wq);
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (cs->nr_migrate_dl_tasks) {
- 		int cpu = cpumask_any(cs->effective_cpus);
- 
- 		dl_bw_free(cpu, cs->sum_migrate_dl_bw);
- 		reset_migrate_dl_data(cs);
- 	}
-+#endif
- 
- 	mutex_unlock(&cpuset_mutex);
- }
-diff --new-file -rup linux-6.10/kernel/delayacct.c linux-prjc-linux-6.10.y-prjc/kernel/delayacct.c
---- linux-6.10/kernel/delayacct.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/delayacct.c	2024-07-15 16:47:37.000000000 +0200
-@@ -149,7 +149,7 @@ int delayacct_add_tsk(struct taskstats *
- 	 */
- 	t1 = tsk->sched_info.pcount;
- 	t2 = tsk->sched_info.run_delay;
--	t3 = tsk->se.sum_exec_runtime;
-+	t3 = tsk_seruntime(tsk);
- 
- 	d->cpu_count += t1;
- 
-diff --new-file -rup linux-6.10/kernel/exit.c linux-prjc-linux-6.10.y-prjc/kernel/exit.c
---- linux-6.10/kernel/exit.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/exit.c	2024-07-15 16:47:37.000000000 +0200
-@@ -175,7 +175,7 @@ static void __exit_signal(struct task_st
- 			sig->curr_target = next_thread(tsk);
- 	}
- 
--	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
-+	add_device_randomness((const void*) &tsk_seruntime(tsk),
- 			      sizeof(unsigned long long));
- 
- 	/*
-@@ -196,7 +196,7 @@ static void __exit_signal(struct task_st
- 	sig->inblock += task_io_get_inblock(tsk);
- 	sig->oublock += task_io_get_oublock(tsk);
- 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
--	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
-+	sig->sum_sched_runtime += tsk_seruntime(tsk);
- 	sig->nr_threads--;
- 	__unhash_process(tsk, group_dead);
- 	write_sequnlock(&sig->stats_lock);
-diff --new-file -rup linux-6.10/kernel/locking/rtmutex.c linux-prjc-linux-6.10.y-prjc/kernel/locking/rtmutex.c
---- linux-6.10/kernel/locking/rtmutex.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/locking/rtmutex.c	2024-07-15 16:47:37.000000000 +0200
-@@ -363,7 +363,7 @@ waiter_update_prio(struct rt_mutex_waite
- 	lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
- 
- 	waiter->tree.prio = __waiter_prio(task);
--	waiter->tree.deadline = task->dl.deadline;
-+	waiter->tree.deadline = __tsk_deadline(task);
- }
- 
- /*
-@@ -384,16 +384,20 @@ waiter_clone_prio(struct rt_mutex_waiter
-  * Only use with rt_waiter_node_{less,equal}()
-  */
- #define task_to_waiter_node(p)	\
--	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
-+	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
- #define task_to_waiter(p)	\
- 	&(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
- 
- static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
- 					       struct rt_waiter_node *right)
- {
-+#ifdef CONFIG_SCHED_PDS
-+	return (left->deadline < right->deadline);
-+#else
- 	if (left->prio < right->prio)
- 		return 1;
- 
-+#ifndef CONFIG_SCHED_BMQ
- 	/*
- 	 * If both waiters have dl_prio(), we check the deadlines of the
- 	 * associated tasks.
-@@ -402,16 +406,22 @@ static __always_inline int rt_waiter_nod
- 	 */
- 	if (dl_prio(left->prio))
- 		return dl_time_before(left->deadline, right->deadline);
-+#endif
- 
- 	return 0;
-+#endif
- }
- 
- static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
- 						 struct rt_waiter_node *right)
- {
-+#ifdef CONFIG_SCHED_PDS
-+	return (left->deadline == right->deadline);
-+#else
- 	if (left->prio != right->prio)
- 		return 0;
- 
-+#ifndef CONFIG_SCHED_BMQ
- 	/*
- 	 * If both waiters have dl_prio(), we check the deadlines of the
- 	 * associated tasks.
-@@ -420,8 +430,10 @@ static __always_inline int rt_waiter_nod
- 	 */
- 	if (dl_prio(left->prio))
- 		return left->deadline == right->deadline;
-+#endif
- 
- 	return 1;
-+#endif
- }
- 
- static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
-diff --new-file -rup linux-6.10/kernel/sched/Makefile linux-prjc-linux-6.10.y-prjc/kernel/sched/Makefile
---- linux-6.10/kernel/sched/Makefile	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/Makefile	2024-07-15 16:47:37.000000000 +0200
-@@ -28,7 +28,12 @@ endif
- # These compilation units have roughly the same size and complexity - so their
- # build parallelizes well and finishes roughly at once:
- #
-+ifdef CONFIG_SCHED_ALT
-+obj-y += alt_core.o
-+obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
-+else
- obj-y += core.o
- obj-y += fair.o
-+endif
- obj-y += build_policy.o
- obj-y += build_utility.o
-diff --new-file -rup linux-6.10/kernel/sched/alt_core.c linux-prjc-linux-6.10.y-prjc/kernel/sched/alt_core.c
---- linux-6.10/kernel/sched/alt_core.c	1970-01-01 01:00:00.000000000 +0100
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/alt_core.c	2024-07-15 16:47:37.000000000 +0200
-@@ -0,0 +1,8963 @@
-+/*
-+ *  kernel/sched/alt_core.c
-+ *
-+ *  Core alternative kernel scheduler code and related syscalls
-+ *
-+ *  Copyright (C) 1991-2002  Linus Torvalds
-+ *
-+ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
-+ *		a whole lot of those previous things.
-+ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
-+ *		scheduler by Alfred Chen.
-+ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
-+ */
-+#include <linux/sched/clock.h>
-+#include <linux/sched/cputime.h>
-+#include <linux/sched/debug.h>
-+#include <linux/sched/hotplug.h>
-+#include <linux/sched/init.h>
-+#include <linux/sched/isolation.h>
-+#include <linux/sched/loadavg.h>
-+#include <linux/sched/mm.h>
-+#include <linux/sched/nohz.h>
-+#include <linux/sched/stat.h>
-+#include <linux/sched/wake_q.h>
-+
-+#include <linux/blkdev.h>
-+#include <linux/context_tracking.h>
-+#include <linux/cpuset.h>
-+#include <linux/delayacct.h>
-+#include <linux/init_task.h>
-+#include <linux/kcov.h>
-+#include <linux/kprobes.h>
-+#include <linux/nmi.h>
-+#include <linux/rseq.h>
-+#include <linux/scs.h>
-+
-+#include <uapi/linux/sched/types.h>
-+
-+#include <asm/irq_regs.h>
-+#include <asm/switch_to.h>
-+
-+#define CREATE_TRACE_POINTS
-+#include <trace/events/sched.h>
-+#include <trace/events/ipi.h>
-+#undef CREATE_TRACE_POINTS
-+
-+#include "sched.h"
-+#include "smp.h"
-+
-+#include "pelt.h"
-+
-+#include "../../io_uring/io-wq.h"
-+#include "../smpboot.h"
-+
-+EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
-+EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
-+
-+/*
-+ * Export tracepoints that act as a bare tracehook (ie: have no trace event
-+ * associated with them) to allow external modules to probe them.
-+ */
-+EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+#define sched_feat(x)	(1)
-+/*
-+ * Print a warning if need_resched is set for the given duration (if
-+ * LATENCY_WARN is enabled).
-+ *
-+ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
-+ * per boot.
-+ */
-+__read_mostly int sysctl_resched_latency_warn_ms = 100;
-+__read_mostly int sysctl_resched_latency_warn_once = 1;
-+#else
-+#define sched_feat(x)	(0)
-+#endif /* CONFIG_SCHED_DEBUG */
-+
-+#define ALT_SCHED_VERSION "v6.10-r0"
-+
-+/*
-+ * Compile time debug macro
-+ * #define ALT_SCHED_DEBUG
-+ */
-+
-+/* rt_prio(prio) defined in include/linux/sched/rt.h */
-+#define rt_task(p)		rt_prio((p)->prio)
-+#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
-+#define task_has_rt_policy(p)	(rt_policy((p)->policy))
-+
-+#define STOP_PRIO		(MAX_RT_PRIO - 1)
-+
-+/*
-+ * Time slice
-+ * (default: 4 msec, units: nanoseconds)
-+ */
-+unsigned int sysctl_sched_base_slice __read_mostly	= (4 << 20);
-+
-+#ifdef CONFIG_SCHED_BMQ
-+#include "bmq.h"
-+#endif
-+#ifdef CONFIG_SCHED_PDS
-+#include "pds.h"
-+#endif
-+
-+struct affinity_context {
-+	const struct cpumask *new_mask;
-+	struct cpumask *user_mask;
-+	unsigned int flags;
-+};
-+
-+/* Reschedule if less than this many μs left */
-+#define RESCHED_NS		(100 << 10)
-+
-+/**
-+ * sched_yield_type - Type of sched_yield() will be performed.
-+ * 0: No yield.
-+ * 1: Requeue task. (default)
-+ */
-+int sched_yield_type __read_mostly = 1;
-+
-+#ifdef CONFIG_SMP
-+static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
-+
-+DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
-+DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
-+DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
-+
-+#ifdef CONFIG_SCHED_SMT
-+DEFINE_STATIC_KEY_FALSE(sched_smt_present);
-+EXPORT_SYMBOL_GPL(sched_smt_present);
-+#endif
-+
-+/*
-+ * Keep a unique ID per domain (we use the first CPUs number in the cpumask of
-+ * the domain), this allows us to quickly tell if two cpus are in the same cache
-+ * domain, see cpus_share_cache().
-+ */
-+DEFINE_PER_CPU(int, sd_llc_id);
-+#endif /* CONFIG_SMP */
-+
-+static DEFINE_MUTEX(sched_hotcpu_mutex);
-+
-+DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-+
-+#ifndef prepare_arch_switch
-+# define prepare_arch_switch(next)	do { } while (0)
-+#endif
-+#ifndef finish_arch_post_lock_switch
-+# define finish_arch_post_lock_switch()	do { } while (0)
-+#endif
-+
-+static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS + 1] ____cacheline_aligned_in_smp;
-+
-+static cpumask_t *const sched_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS - 1];
-+#ifdef CONFIG_SCHED_SMT
-+static cpumask_t *const sched_sg_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
-+#endif
-+
-+/* task function */
-+static inline const struct cpumask *task_user_cpus(struct task_struct *p)
-+{
-+	if (!p->user_cpus_ptr)
-+		return cpu_possible_mask; /* &init_task.cpus_mask */
-+	return p->user_cpus_ptr;
-+}
-+
-+/* sched_queue related functions */
-+static inline void sched_queue_init(struct sched_queue *q)
-+{
-+	int i;
-+
-+	bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
-+	for(i = 0; i < SCHED_LEVELS; i++)
-+		INIT_LIST_HEAD(&q->heads[i]);
-+}
-+
-+/*
-+ * Init idle task and put into queue structure of rq
-+ * IMPORTANT: may be called multiple times for a single cpu
-+ */
-+static inline void sched_queue_init_idle(struct sched_queue *q,
-+					 struct task_struct *idle)
-+{
-+	INIT_LIST_HEAD(&q->heads[IDLE_TASK_SCHED_PRIO]);
-+	list_add_tail(&idle->sq_node, &q->heads[IDLE_TASK_SCHED_PRIO]);
-+	idle->on_rq = TASK_ON_RQ_QUEUED;
-+}
-+
-+#define CLEAR_CACHED_PREEMPT_MASK(pr, low, high, cpu)		\
-+	if (low < pr && pr <= high)				\
-+		cpumask_clear_cpu(cpu, sched_preempt_mask + pr);
-+
-+#define SET_CACHED_PREEMPT_MASK(pr, low, high, cpu)		\
-+	if (low < pr && pr <= high)				\
-+		cpumask_set_cpu(cpu, sched_preempt_mask + pr);
-+
-+static atomic_t sched_prio_record = ATOMIC_INIT(0);
-+
-+/* water mark related functions */
-+static inline void update_sched_preempt_mask(struct rq *rq)
-+{
-+	int prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
-+	int last_prio = rq->prio;
-+	int cpu, pr;
-+
-+	if (prio == last_prio)
-+		return;
-+
-+	rq->prio = prio;
-+#ifdef CONFIG_SCHED_PDS
-+	rq->prio_idx = sched_prio2idx(rq->prio, rq);
-+#endif
-+	cpu = cpu_of(rq);
-+	pr = atomic_read(&sched_prio_record);
-+
-+	if (prio < last_prio) {
-+		if (IDLE_TASK_SCHED_PRIO == last_prio) {
-+#ifdef CONFIG_SCHED_SMT
-+			if (static_branch_likely(&sched_smt_present))
-+				cpumask_andnot(sched_sg_idle_mask,
-+					       sched_sg_idle_mask, cpu_smt_mask(cpu));
-+#endif
-+			cpumask_clear_cpu(cpu, sched_idle_mask);
-+			last_prio -= 2;
-+		}
-+		CLEAR_CACHED_PREEMPT_MASK(pr, prio, last_prio, cpu);
-+
-+		return;
-+	}
-+	/* last_prio < prio */
-+	if (IDLE_TASK_SCHED_PRIO == prio) {
-+#ifdef CONFIG_SCHED_SMT
-+		if (static_branch_likely(&sched_smt_present) &&
-+		    cpumask_intersects(cpu_smt_mask(cpu), sched_idle_mask))
-+			cpumask_or(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
-+#endif
-+		cpumask_set_cpu(cpu, sched_idle_mask);
-+		prio -= 2;
-+	}
-+	SET_CACHED_PREEMPT_MASK(pr, last_prio, prio, cpu);
-+}
-+
-+/*
-+ * This routine assume that the idle task always in queue
-+ */
-+static inline struct task_struct *sched_rq_first_task(struct rq *rq)
-+{
-+	const struct list_head *head = &rq->queue.heads[sched_rq_prio_idx(rq)];
-+
-+	return list_first_entry(head, struct task_struct, sq_node);
-+}
-+
-+static inline struct task_struct * sched_rq_next_task(struct task_struct *p, struct rq *rq)
-+{
-+	struct list_head *next = p->sq_node.next;
-+
-+	if (&rq->queue.heads[0] <= next && next < &rq->queue.heads[SCHED_LEVELS]) {
-+		struct list_head *head;
-+		unsigned long idx = next - &rq->queue.heads[0];
-+
-+		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
-+				    sched_idx2prio(idx, rq) + 1);
-+		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
-+
-+		return list_first_entry(head, struct task_struct, sq_node);
-+	}
-+
-+	return list_next_entry(p, sq_node);
-+}
-+
-+/*
-+ * Serialization rules:
-+ *
-+ * Lock order:
-+ *
-+ *   p->pi_lock
-+ *     rq->lock
-+ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
-+ *
-+ *  rq1->lock
-+ *    rq2->lock  where: rq1 < rq2
-+ *
-+ * Regular state:
-+ *
-+ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
-+ * local CPU's rq->lock, it optionally removes the task from the runqueue and
-+ * always looks at the local rq data structures to find the most eligible task
-+ * to run next.
-+ *
-+ * Task enqueue is also under rq->lock, possibly taken from another CPU.
-+ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
-+ * the local CPU to avoid bouncing the runqueue state around [ see
-+ * ttwu_queue_wakelist() ]
-+ *
-+ * Task wakeup, specifically wakeups that involve migration, are horribly
-+ * complicated to avoid having to take two rq->locks.
-+ *
-+ * Special state:
-+ *
-+ * System-calls and anything external will use task_rq_lock() which acquires
-+ * both p->pi_lock and rq->lock. As a consequence the state they change is
-+ * stable while holding either lock:
-+ *
-+ *  - sched_setaffinity()/
-+ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
-+ *  - set_user_nice():		p->se.load, p->*prio
-+ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
-+ *				p->se.load, p->rt_priority,
-+ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
-+ *  - sched_setnuma():		p->numa_preferred_nid
-+ *  - sched_move_task():        p->sched_task_group
-+ *  - uclamp_update_active()	p->uclamp*
-+ *
-+ * p->state <- TASK_*:
-+ *
-+ *   is changed locklessly using set_current_state(), __set_current_state() or
-+ *   set_special_state(), see their respective comments, or by
-+ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
-+ *   concurrent self.
-+ *
-+ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
-+ *
-+ *   is set by activate_task() and cleared by deactivate_task(), under
-+ *   rq->lock. Non-zero indicates the task is runnable, the special
-+ *   ON_RQ_MIGRATING state is used for migration without holding both
-+ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
-+ *
-+ * p->on_cpu <- { 0, 1 }:
-+ *
-+ *   is set by prepare_task() and cleared by finish_task() such that it will be
-+ *   set before p is scheduled-in and cleared after p is scheduled-out, both
-+ *   under rq->lock. Non-zero indicates the task is running on its CPU.
-+ *
-+ *   [ The astute reader will observe that it is possible for two tasks on one
-+ *     CPU to have ->on_cpu = 1 at the same time. ]
-+ *
-+ * task_cpu(p): is changed by set_task_cpu(), the rules are:
-+ *
-+ *  - Don't call set_task_cpu() on a blocked task:
-+ *
-+ *    We don't care what CPU we're not running on, this simplifies hotplug,
-+ *    the CPU assignment of blocked tasks isn't required to be valid.
-+ *
-+ *  - for try_to_wake_up(), called under p->pi_lock:
-+ *
-+ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
-+ *
-+ *  - for migration called under rq->lock:
-+ *    [ see task_on_rq_migrating() in task_rq_lock() ]
-+ *
-+ *    o move_queued_task()
-+ *    o detach_task()
-+ *
-+ *  - for migration called under double_rq_lock():
-+ *
-+ *    o __migrate_swap_task()
-+ *    o push_rt_task() / pull_rt_task()
-+ *    o push_dl_task() / pull_dl_task()
-+ *    o dl_task_offline_migration()
-+ *
-+ */
-+
-+/*
-+ * Context: p->pi_lock
-+ */
-+static inline struct rq *__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
-+{
-+	struct rq *rq;
-+	for (;;) {
-+		rq = task_rq(p);
-+		if (p->on_cpu || task_on_rq_queued(p)) {
-+			raw_spin_lock(&rq->lock);
-+			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
-+				*plock = &rq->lock;
-+				return rq;
-+			}
-+			raw_spin_unlock(&rq->lock);
-+		} else if (task_on_rq_migrating(p)) {
-+			do {
-+				cpu_relax();
-+			} while (unlikely(task_on_rq_migrating(p)));
-+		} else {
-+			*plock = NULL;
-+			return rq;
-+		}
-+	}
-+}
-+
-+static inline void __task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
-+{
-+	if (NULL != lock)
-+		raw_spin_unlock(lock);
-+}
-+
-+static inline struct rq *
-+task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock, unsigned long *flags)
-+{
-+	struct rq *rq;
-+	for (;;) {
-+		rq = task_rq(p);
-+		if (p->on_cpu || task_on_rq_queued(p)) {
-+			raw_spin_lock_irqsave(&rq->lock, *flags);
-+			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
-+				*plock = &rq->lock;
-+				return rq;
-+			}
-+			raw_spin_unlock_irqrestore(&rq->lock, *flags);
-+		} else if (task_on_rq_migrating(p)) {
-+			do {
-+				cpu_relax();
-+			} while (unlikely(task_on_rq_migrating(p)));
-+		} else {
-+			raw_spin_lock_irqsave(&p->pi_lock, *flags);
-+			if (likely(!p->on_cpu && !p->on_rq && rq == task_rq(p))) {
-+				*plock = &p->pi_lock;
-+				return rq;
-+			}
-+			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
-+		}
-+	}
-+}
-+
-+static inline void
-+task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock, unsigned long *flags)
-+{
-+	raw_spin_unlock_irqrestore(lock, *flags);
-+}
-+
-+/*
-+ * __task_rq_lock - lock the rq @p resides on.
-+ */
-+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	struct rq *rq;
-+
-+	lockdep_assert_held(&p->pi_lock);
-+
-+	for (;;) {
-+		rq = task_rq(p);
-+		raw_spin_lock(&rq->lock);
-+		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
-+			return rq;
-+		raw_spin_unlock(&rq->lock);
-+
-+		while (unlikely(task_on_rq_migrating(p)))
-+			cpu_relax();
-+	}
-+}
-+
-+/*
-+ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
-+ */
-+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(p->pi_lock)
-+	__acquires(rq->lock)
-+{
-+	struct rq *rq;
-+
-+	for (;;) {
-+		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
-+		rq = task_rq(p);
-+		raw_spin_lock(&rq->lock);
-+		/*
-+		 *	move_queued_task()		task_rq_lock()
-+		 *
-+		 *	ACQUIRE (rq->lock)
-+		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
-+		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
-+		 *	[S] ->cpu = new_cpu		[L] task_rq()
-+		 *					[L] ->on_rq
-+		 *	RELEASE (rq->lock)
-+		 *
-+		 * If we observe the old CPU in task_rq_lock(), the acquire of
-+		 * the old rq->lock will fully serialize against the stores.
-+		 *
-+		 * If we observe the new CPU in task_rq_lock(), the address
-+		 * dependency headed by '[L] rq = task_rq()' and the acquire
-+		 * will pair with the WMB to ensure we then also see migrating.
-+		 */
-+		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
-+			return rq;
-+		}
-+		raw_spin_unlock(&rq->lock);
-+		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-+
-+		while (unlikely(task_on_rq_migrating(p)))
-+			cpu_relax();
-+	}
-+}
-+
-+static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	raw_spin_lock_irqsave(&rq->lock, rf->flags);
-+}
-+
-+static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
-+}
-+
-+DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
-+		    rq_lock_irqsave(_T->lock, &_T->rf),
-+		    rq_unlock_irqrestore(_T->lock, &_T->rf),
-+		    struct rq_flags rf)
-+
-+void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
-+{
-+	raw_spinlock_t *lock;
-+
-+	/* Matches synchronize_rcu() in __sched_core_enable() */
-+	preempt_disable();
-+
-+	for (;;) {
-+		lock = __rq_lockp(rq);
-+		raw_spin_lock_nested(lock, subclass);
-+		if (likely(lock == __rq_lockp(rq))) {
-+			/* preempt_count *MUST* be > 1 */
-+			preempt_enable_no_resched();
-+			return;
-+		}
-+		raw_spin_unlock(lock);
-+	}
-+}
-+
-+void raw_spin_rq_unlock(struct rq *rq)
-+{
-+	raw_spin_unlock(rq_lockp(rq));
-+}
-+
-+/*
-+ * RQ-clock updating methods:
-+ */
-+
-+static void update_rq_clock_task(struct rq *rq, s64 delta)
-+{
-+/*
-+ * In theory, the compile should just see 0 here, and optimize out the call
-+ * to sched_rt_avg_update. But I don't trust it...
-+ */
-+	s64 __maybe_unused steal = 0, irq_delta = 0;
-+
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
-+
-+	/*
-+	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
-+	 * this case when a previous update_rq_clock() happened inside a
-+	 * {soft,}irq region.
-+	 *
-+	 * When this happens, we stop ->clock_task and only update the
-+	 * prev_irq_time stamp to account for the part that fit, so that a next
-+	 * update will consume the rest. This ensures ->clock_task is
-+	 * monotonic.
-+	 *
-+	 * It does however cause some slight miss-attribution of {soft,}irq
-+	 * time, a more accurate solution would be to update the irq_time using
-+	 * the current rq->clock timestamp, except that would require using
-+	 * atomic ops.
-+	 */
-+	if (irq_delta > delta)
-+		irq_delta = delta;
-+
-+	rq->prev_irq_time += irq_delta;
-+	delta -= irq_delta;
-+	delayacct_irq(rq->curr, irq_delta);
-+#endif
-+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
-+	if (static_key_false((&paravirt_steal_rq_enabled))) {
-+		steal = paravirt_steal_clock(cpu_of(rq));
-+		steal -= rq->prev_steal_time_rq;
-+
-+		if (unlikely(steal > delta))
-+			steal = delta;
-+
-+		rq->prev_steal_time_rq += steal;
-+		delta -= steal;
-+	}
-+#endif
-+
-+	rq->clock_task += delta;
-+
-+#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
-+	if ((irq_delta + steal))
-+		update_irq_load_avg(rq, irq_delta + steal);
-+#endif
-+}
-+
-+static inline void update_rq_clock(struct rq *rq)
-+{
-+	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
-+
-+	if (unlikely(delta <= 0))
-+		return;
-+	rq->clock += delta;
-+	sched_update_rq_clock(rq);
-+	update_rq_clock_task(rq, delta);
-+}
-+
-+/*
-+ * RQ Load update routine
-+ */
-+#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
-+#define RQ_UTIL_SHIFT			(8)
-+#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
-+
-+#define LOAD_BLOCK(t)		((t) >> 17)
-+#define LOAD_HALF_BLOCK(t)	((t) >> 16)
-+#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
-+#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
-+#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
-+
-+static inline void rq_load_update(struct rq *rq)
-+{
-+	u64 time = rq->clock;
-+	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp), RQ_LOAD_HISTORY_BITS - 1);
-+	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
-+	u64 curr = !!rq->nr_running;
-+
-+	if (delta) {
-+		rq->load_history = rq->load_history >> delta;
-+
-+		if (delta < RQ_UTIL_SHIFT) {
-+			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
-+			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
-+				rq->load_history ^= LOAD_BLOCK_BIT(delta);
-+		}
-+
-+		rq->load_block = BLOCK_MASK(time) * prev;
-+	} else {
-+		rq->load_block += (time - rq->load_stamp) * prev;
-+	}
-+	if (prev ^ curr)
-+		rq->load_history ^= CURRENT_LOAD_BIT;
-+	rq->load_stamp = time;
-+}
-+
-+unsigned long rq_load_util(struct rq *rq, unsigned long max)
-+{
-+	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
-+}
-+
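rq_load_util() turns the eight most recently completed bits of the per-rq history word into a utilization estimate scaled against a CPU capacity `max`. A minimal user-space sketch of that mapping, assuming the 32-bit history word above and an illustrative capacity of 1024 (everything outside the copied macros is made up for the example):

#include <stdio.h>
#include <stdint.h>

#define RQ_LOAD_HISTORY_BITS		(sizeof(int32_t) * 8ULL)
#define RQ_UTIL_SHIFT			(8)
#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)

/* same arithmetic as rq_load_util(), minus the struct rq plumbing */
static unsigned long load_history_to_util(uint32_t load_history, unsigned long max)
{
	return RQ_LOAD_HISTORY_TO_UTIL(load_history) * (max >> RQ_UTIL_SHIFT);
}

int main(void)
{
	uint32_t busy  = 0x7f800000u;	/* busy in all of the last 8 completed blocks */
	uint32_t light = 0x40000000u;	/* busy only in the most recent completed block */

	printf("busy  -> %lu/1024\n", load_history_to_util(busy, 1024));	/* 1020 */
	printf("light -> %lu/1024\n", load_history_to_util(light, 1024));	/* 512 */
	return 0;
}
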
-+#ifdef CONFIG_SMP
-+unsigned long sched_cpu_util(int cpu)
-+{
-+	return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
-+}
-+#endif /* CONFIG_SMP */
-+
-+#ifdef CONFIG_CPU_FREQ
-+/**
-+ * cpufreq_update_util - Take a note about CPU utilization changes.
-+ * @rq: Runqueue to carry out the update for.
-+ * @flags: Update reason flags.
-+ *
-+ * This function is called by the scheduler on the CPU whose utilization is
-+ * being updated.
-+ *
-+ * It can only be called from RCU-sched read-side critical sections.
-+ *
-+ * The way cpufreq is currently arranged requires it to evaluate the CPU
-+ * performance state (frequency/voltage) on a regular basis to prevent it from
-+ * being stuck in a completely inadequate performance level for too long.
-+ * That is not guaranteed to happen if the updates are only triggered from CFS
-+ * and DL, though, because they may not be coming in if only RT tasks are
-+ * active all the time (or there are RT tasks only).
-+ *
-+ * As a workaround for that issue, this function is called periodically by the
-+ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
-+ * but that really is a band-aid.  Going forward it should be replaced with
-+ * solutions targeted more specifically at RT tasks.
-+ */
-+static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
-+{
-+	struct update_util_data *data;
-+
-+#ifdef CONFIG_SMP
-+	rq_load_update(rq);
-+#endif
-+	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, cpu_of(rq)));
-+	if (data)
-+		data->func(data, rq_clock(rq), flags);
-+}
-+#else
-+static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
-+{
-+#ifdef CONFIG_SMP
-+	rq_load_update(rq);
-+#endif
-+}
-+#endif /* CONFIG_CPU_FREQ */
-+
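For context, cpufreq governors consume these calls by registering a per-CPU hook with cpufreq_add_update_util_hook() (kernel/sched/cpufreq.c); the sketch below shows the general shape of such a consumer, with the my_gov_* names being purely illustrative, not part of this patch:

struct my_gov_cpu {				/* hypothetical governor state */
	struct update_util_data update_util;
	unsigned int cpu;
};

static void my_gov_update(struct update_util_data *data, u64 time, unsigned int flags)
{
	struct my_gov_cpu *gc = container_of(data, struct my_gov_cpu, update_util);

	/* re-evaluate the frequency of gc->cpu, e.g. boosting on SCHED_CPUFREQ_IOWAIT */
}

static void my_gov_start(struct my_gov_cpu *gc)
{
	/* cpufreq_update_util() above will then invoke my_gov_update() for this CPU */
	cpufreq_add_update_util_hook(gc->cpu, &gc->update_util, my_gov_update);
}
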
-+#ifdef CONFIG_NO_HZ_FULL
-+/*
-+ * Tick may be needed by tasks in the runqueue depending on their policy and
-+ * requirements. If the tick is needed, let's send the target an IPI to kick it out
-+ * of nohz mode if necessary.
-+ */
-+static inline void sched_update_tick_dependency(struct rq *rq)
-+{
-+	int cpu = cpu_of(rq);
-+
-+	if (!tick_nohz_full_cpu(cpu))
-+		return;
-+
-+	if (rq->nr_running < 2)
-+		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
-+	else
-+		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
-+}
-+#else /* !CONFIG_NO_HZ_FULL */
-+static inline void sched_update_tick_dependency(struct rq *rq) { }
-+#endif
-+
-+bool sched_task_on_rq(struct task_struct *p)
-+{
-+	return task_on_rq_queued(p);
-+}
-+
-+unsigned long get_wchan(struct task_struct *p)
-+{
-+	unsigned long ip = 0;
-+	unsigned int state;
-+
-+	if (!p || p == current)
-+		return 0;
-+
-+	/* Only get wchan if task is blocked and we can keep it that way. */
-+	raw_spin_lock_irq(&p->pi_lock);
-+	state = READ_ONCE(p->__state);
-+	smp_rmb(); /* see try_to_wake_up() */
-+	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
-+		ip = __get_wchan(p);
-+	raw_spin_unlock_irq(&p->pi_lock);
-+
-+	return ip;
-+}
-+
-+/*
-+ * Add/Remove/Requeue task to/from the runqueue routines
-+ * Context: rq->lock
-+ */
-+#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)					\
-+	sched_info_dequeue(rq, p);							\
-+											\
-+	__list_del_entry(&p->sq_node);							\
-+	if (p->sq_node.prev == p->sq_node.next) {					\
-+		clear_bit(sched_idx2prio(p->sq_node.next - &rq->queue.heads[0], rq),	\
-+			  rq->queue.bitmap);						\
-+		func;									\
-+	}
-+
-+#define __SCHED_ENQUEUE_TASK(p, rq, flags, func)					\
-+	sched_info_enqueue(rq, p);							\
-+	{										\
-+	int idx, prio;									\
-+	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);						\
-+	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);				\
-+	if (list_is_first(&p->sq_node, &rq->queue.heads[idx])) {			\
-+		set_bit(prio, rq->queue.bitmap);					\
-+		func;									\
-+	}										\
-+	}
-+
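These macros implement the bitmap queue that gives BMQ its name: one list head per priority level plus a bitmap whose set bits mark non-empty levels, so picking the next task is a find-first-set over the bitmap. A simplified user-space sketch of the idea (a singly linked LIFO per level instead of the kernel's list_head, and illustrative names only):

#include <stdio.h>

#define NR_LEVELS 64

struct node {
	struct node *next;
	int val;
};

struct bitmap_queue {
	unsigned long long bitmap;		/* bit i set <=> heads[i] non-empty */
	struct node *heads[NR_LEVELS];
};

static void bq_enqueue(struct bitmap_queue *q, struct node *n, int prio)
{
	n->next = q->heads[prio];
	q->heads[prio] = n;
	q->bitmap |= 1ULL << prio;		/* level just became (or stays) non-empty */
}

static struct node *bq_pick_next(struct bitmap_queue *q)
{
	struct node *n;
	int prio;

	if (!q->bitmap)
		return NULL;
	prio = __builtin_ffsll(q->bitmap) - 1;	/* lowest set bit = highest priority */
	n = q->heads[prio];
	q->heads[prio] = n->next;
	if (!q->heads[prio])
		q->bitmap &= ~(1ULL << prio);	/* level drained: clear its bit */
	return n;
}

int main(void)
{
	struct bitmap_queue q = { 0 };
	struct node a = { .val = 1 }, b = { .val = 2 };

	bq_enqueue(&q, &a, 10);
	bq_enqueue(&q, &b, 3);
	printf("picked %d\n", bq_pick_next(&q)->val);	/* picked 2: level 3 beats level 10 */
	return 0;
}
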
-+static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
-+{
-+#ifdef ALT_SCHED_DEBUG
-+	lockdep_assert_held(&rq->lock);
-+
-+	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
-+	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
-+		  task_cpu(p), cpu_of(rq));
-+#endif
-+
-+	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
-+	--rq->nr_running;
-+#ifdef CONFIG_SMP
-+	if (1 == rq->nr_running)
-+		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
-+#endif
-+
-+	sched_update_tick_dependency(rq);
-+}
-+
-+static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
-+{
-+#ifdef ALT_SCHED_DEBUG
-+	lockdep_assert_held(&rq->lock);
-+
-+	/*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
-+	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
-+		  task_cpu(p), cpu_of(rq));
-+#endif
-+
-+	__SCHED_ENQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
-+	++rq->nr_running;
-+#ifdef CONFIG_SMP
-+	if (2 == rq->nr_running)
-+		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
-+#endif
-+
-+	sched_update_tick_dependency(rq);
-+}
-+
-+static inline void requeue_task(struct task_struct *p, struct rq *rq)
-+{
-+	struct list_head *node = &p->sq_node;
-+	int deq_idx, idx, prio;
-+
-+	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);
-+#ifdef ALT_SCHED_DEBUG
-+	lockdep_assert_held(&rq->lock);
-+	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
-+	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
-+		  cpu_of(rq), task_cpu(p));
-+#endif
-+	if (list_is_last(node, &rq->queue.heads[idx]))
-+		return;
-+
-+	__list_del_entry(node);
-+	if (node->prev == node->next && (deq_idx = node->next - &rq->queue.heads[0]) != idx)
-+		clear_bit(sched_idx2prio(deq_idx, rq), rq->queue.bitmap);
-+
-+	list_add_tail(node, &rq->queue.heads[idx]);
-+	if (list_is_first(node, &rq->queue.heads[idx]))
-+		set_bit(prio, rq->queue.bitmap);
-+	update_sched_preempt_mask(rq);
-+}
-+
-+/*
-+ * cmpxchg based fetch_or, macro so it works for different integer types
-+ */
-+#define fetch_or(ptr, mask)						\
-+	({								\
-+		typeof(ptr) _ptr = (ptr);				\
-+		typeof(mask) _mask = (mask);				\
-+		typeof(*_ptr) _val = *_ptr;				\
-+									\
-+		do {							\
-+		} while (!try_cmpxchg(_ptr, &_val, _val | _mask));	\
-+	_val;								\
-+})
-+
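set_nr_and_not_polling() below relies on this detail: fetch_or() must return the old flag word so the caller can tell whether TIF_POLLING_NRFLAG was already set before TIF_NEED_RESCHED went in. A user-space sketch of the same compare-exchange loop using the GCC/Clang atomic builtins, for illustration only:

#include <stdio.h>

/* atomically OR @mask into *@ptr and return the value seen beforehand */
static unsigned long fetch_or_ulong(unsigned long *ptr, unsigned long mask)
{
	unsigned long val = __atomic_load_n(ptr, __ATOMIC_RELAXED);

	/* retry until the compare-exchange installs val | mask; on failure
	 * __atomic_compare_exchange_n reloads the current value into val */
	while (!__atomic_compare_exchange_n(ptr, &val, val | mask, 0,
					    __ATOMIC_SEQ_CST, __ATOMIC_RELAXED))
		;
	return val;
}

int main(void)
{
	unsigned long flags = 0x1;	/* pretend bit 0 is "polling" */

	printf("old flags %#lx\n", fetch_or_ulong(&flags, 0x4));	/* 0x1 */
	printf("new flags %#lx\n", flags);				/* 0x5 */
	return 0;
}
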
-+#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
-+/*
-+ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
-+ * this avoids any races wrt polling state changes and thereby avoids
-+ * spurious IPIs.
-+ */
-+static inline bool set_nr_and_not_polling(struct task_struct *p)
-+{
-+	struct thread_info *ti = task_thread_info(p);
-+	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
-+}
-+
-+/*
-+ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
-+ *
-+ * If this returns true, then the idle task promises to call
-+ * sched_ttwu_pending() and reschedule soon.
-+ */
-+static bool set_nr_if_polling(struct task_struct *p)
-+{
-+	struct thread_info *ti = task_thread_info(p);
-+	typeof(ti->flags) val = READ_ONCE(ti->flags);
-+
-+	do {
-+		if (!(val & _TIF_POLLING_NRFLAG))
-+			return false;
-+		if (val & _TIF_NEED_RESCHED)
-+			return true;
-+	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
-+
-+	return true;
-+}
-+
-+#else
-+static inline bool set_nr_and_not_polling(struct task_struct *p)
-+{
-+	set_tsk_need_resched(p);
-+	return true;
-+}
-+
-+#ifdef CONFIG_SMP
-+static inline bool set_nr_if_polling(struct task_struct *p)
-+{
-+	return false;
-+}
-+#endif
-+#endif
-+
-+static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
-+{
-+	struct wake_q_node *node = &task->wake_q;
-+
-+	/*
-+	 * Atomically grab the task, if ->wake_q is !nil already it means
-+	 * it's already queued (either by us or someone else) and will get the
-+	 * wakeup due to that.
-+	 *
-+	 * In order to ensure that a pending wakeup will observe our pending
-+	 * state, even in the failed case, an explicit smp_mb() must be used.
-+	 */
-+	smp_mb__before_atomic();
-+	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
-+		return false;
-+
-+	/*
-+	 * The head is context local; there can be no concurrency.
-+	 */
-+	*head->lastp = node;
-+	head->lastp = &node->next;
-+	return true;
-+}
-+
-+/**
-+ * wake_q_add() - queue a wakeup for 'later' waking.
-+ * @head: the wake_q_head to add @task to
-+ * @task: the task to queue for 'later' wakeup
-+ *
-+ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
-+ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
-+ * instantly.
-+ *
-+ * This function must be used as-if it were wake_up_process(); IOW the task
-+ * must be ready to be woken at this location.
-+ */
-+void wake_q_add(struct wake_q_head *head, struct task_struct *task)
-+{
-+	if (__wake_q_add(head, task))
-+		get_task_struct(task);
-+}
-+
-+/**
-+ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
-+ * @head: the wake_q_head to add @task to
-+ * @task: the task to queue for 'later' wakeup
-+ *
-+ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
-+ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
-+ * instantly.
-+ *
-+ * This function must be used as-if it were wake_up_process(); IOW the task
-+ * must be ready to be woken at this location.
-+ *
-+ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
-+ * that already hold reference to @task can call the 'safe' version and trust
-+ * wake_q to do the right thing depending whether or not the @task is already
-+ * queued for wakeup.
-+ */
-+void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
-+{
-+	if (!__wake_q_add(head, task))
-+		put_task_struct(task);
-+}
-+
-+void wake_up_q(struct wake_q_head *head)
-+{
-+	struct wake_q_node *node = head->first;
-+
-+	while (node != WAKE_Q_TAIL) {
-+		struct task_struct *task;
-+
-+		task = container_of(node, struct task_struct, wake_q);
-+		/* task can safely be re-inserted now: */
-+		node = node->next;
-+		task->wake_q.next = NULL;
-+
-+		/*
-+		 * wake_up_process() executes a full barrier, which pairs with
-+		 * the queueing in wake_q_add() so as not to miss wakeups.
-+		 */
-+		wake_up_process(task);
-+		put_task_struct(task);
-+	}
-+}
-+
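A typical wake_q usage pattern, for reference: wakeups are collected while a lock is held and only issued after it is dropped, so the woken tasks never immediately contend on that lock. The struct names in this sketch are hypothetical; only the wake_q calls are the real interface:

static void wake_all_waiters(struct my_object *obj)	/* hypothetical object */
{
	DEFINE_WAKE_Q(wake_q);
	struct my_waiter *w;				/* hypothetical waiter entry */

	spin_lock(&obj->lock);
	list_for_each_entry(w, &obj->waiters, node)
		wake_q_add(&wake_q, w->task);		/* takes a task reference, see above */
	spin_unlock(&obj->lock);

	wake_up_q(&wake_q);				/* actual wakeups, lock already dropped */
}
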
-+/*
-+ * resched_curr - mark rq's current task 'to be rescheduled now'.
-+ *
-+ * On UP this means the setting of the need_resched flag, on SMP it
-+ * might also involve a cross-CPU call to trigger the scheduler on
-+ * the target CPU.
-+ */
-+static inline void resched_curr(struct rq *rq)
-+{
-+	struct task_struct *curr = rq->curr;
-+	int cpu;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	if (test_tsk_need_resched(curr))
-+		return;
-+
-+	cpu = cpu_of(rq);
-+	if (cpu == smp_processor_id()) {
-+		set_tsk_need_resched(curr);
-+		set_preempt_need_resched();
-+		return;
-+	}
-+
-+	if (set_nr_and_not_polling(curr))
-+		smp_send_reschedule(cpu);
-+	else
-+		trace_sched_wake_idle_without_ipi(cpu);
-+}
-+
-+void resched_cpu(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	if (cpu_online(cpu) || cpu == smp_processor_id())
-+		resched_curr(cpu_rq(cpu));
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+}
-+
-+#ifdef CONFIG_SMP
-+#ifdef CONFIG_NO_HZ_COMMON
-+/*
-+ * This routine will record that the CPU is going idle with tick stopped.
-+ * This info will be used in performing idle load balancing in the future.
-+ */
-+void nohz_balance_enter_idle(int cpu) {}
-+
-+/*
-+ * In the semi idle case, use the nearest busy CPU for migrating timers
-+ * from an idle CPU.  This is good for power-savings.
-+ *
-+ * We don't do a similar optimization for a completely idle system, as
-+ * selecting an idle CPU will add more delays to the timers than intended
-+ * (as that CPU's timer base may not be up to date w.r.t. jiffies etc.).
-+ */
-+int get_nohz_timer_target(void)
-+{
-+	int i, cpu = smp_processor_id(), default_cpu = -1;
-+	struct cpumask *mask;
-+	const struct cpumask *hk_mask;
-+
-+	if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
-+		if (!idle_cpu(cpu))
-+			return cpu;
-+		default_cpu = cpu;
-+	}
-+
-+	hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
-+
-+	for (mask = per_cpu(sched_cpu_topo_masks, cpu);
-+	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
-+		for_each_cpu_and(i, mask, hk_mask)
-+			if (!idle_cpu(i))
-+				return i;
-+
-+	if (default_cpu == -1)
-+		default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
-+	cpu = default_cpu;
-+
-+	return cpu;
-+}
-+
-+/*
-+ * When add_timer_on() enqueues a timer into the timer wheel of an
-+ * idle CPU then this timer might expire before the next timer event
-+ * which is scheduled to wake up that CPU. In case of a completely
-+ * idle system the next event might even be infinite time into the
-+ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
-+ * leaves the inner idle loop so the newly added timer is taken into
-+ * account when the CPU goes back to idle and evaluates the timer
-+ * wheel for the next timer event.
-+ */
-+static inline void wake_up_idle_cpu(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (cpu == smp_processor_id())
-+		return;
-+
-+	/*
-+	 * Set TIF_NEED_RESCHED and send an IPI if in the non-polling
-+	 * part of the idle loop. This forces an exit from the idle loop
-+	 * and a round trip to schedule(). Now this could be optimized
-+	 * because a simple new idle loop iteration is enough to
-+	 * re-evaluate the next tick. Provided some re-ordering of tick
-+	 * nohz functions that would need to follow TIF_NR_POLLING
-+	 * clearing:
-+	 *
-+	 * - On most archs, a simple fetch_or on ti::flags with a
-+	 *   "0" value would be enough to know if an IPI needs to be sent.
-+	 *
-+	 * - x86 needs to perform a last need_resched() check between
-+	 *   monitor and mwait which doesn't take timers into account.
-+	 *   There a dedicated TIF_TIMER flag would be required to
-+	 *   fetch_or here and be checked along with TIF_NEED_RESCHED
-+	 *   before mwait().
-+	 *
-+	 * However, remote timer enqueue is not such a frequent event
-+	 * and testing of the above solutions didn't appear to report
-+	 * much benefit.
-+	 */
-+	if (set_nr_and_not_polling(rq->idle))
-+		smp_send_reschedule(cpu);
-+	else
-+		trace_sched_wake_idle_without_ipi(cpu);
-+}
-+
-+static inline bool wake_up_full_nohz_cpu(int cpu)
-+{
-+	/*
-+	 * We just need the target to call irq_exit() and re-evaluate
-+	 * the next tick. The nohz full kick at least implies that.
-+	 * If needed we can still optimize that later with an
-+	 * empty IRQ.
-+	 */
-+	if (cpu_is_offline(cpu))
-+		return true;  /* Don't try to wake offline CPUs. */
-+	if (tick_nohz_full_cpu(cpu)) {
-+		if (cpu != smp_processor_id() ||
-+		    tick_nohz_tick_stopped())
-+			tick_nohz_full_kick_cpu(cpu);
-+		return true;
-+	}
-+
-+	return false;
-+}
-+
-+void wake_up_nohz_cpu(int cpu)
-+{
-+	if (!wake_up_full_nohz_cpu(cpu))
-+		wake_up_idle_cpu(cpu);
-+}
-+
-+static void nohz_csd_func(void *info)
-+{
-+	struct rq *rq = info;
-+	int cpu = cpu_of(rq);
-+	unsigned int flags;
-+
-+	/*
-+	 * Release the rq::nohz_csd.
-+	 */
-+	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
-+	WARN_ON(!(flags & NOHZ_KICK_MASK));
-+
-+	rq->idle_balance = idle_cpu(cpu);
-+	if (rq->idle_balance && !need_resched()) {
-+		rq->nohz_idle_balance = flags;
-+		raise_softirq_irqoff(SCHED_SOFTIRQ);
-+	}
-+}
-+
-+#endif /* CONFIG_NO_HZ_COMMON */
-+#endif /* CONFIG_SMP */
-+
-+static inline void wakeup_preempt(struct rq *rq)
-+{
-+	if (sched_rq_first_task(rq) != rq->curr)
-+		resched_curr(rq);
-+}
-+
-+static __always_inline
-+int __task_state_match(struct task_struct *p, unsigned int state)
-+{
-+	if (READ_ONCE(p->__state) & state)
-+		return 1;
-+
-+	if (READ_ONCE(p->saved_state) & state)
-+		return -1;
-+
-+	return 0;
-+}
-+
-+static __always_inline
-+int task_state_match(struct task_struct *p, unsigned int state)
-+{
-+	/*
-+	 * Serialize against current_save_and_set_rtlock_wait_state(),
-+	 * current_restore_rtlock_saved_state(), and __refrigerator().
-+	 */
-+	guard(raw_spinlock_irq)(&p->pi_lock);
-+
-+	return __task_state_match(p, state);
-+}
-+
-+/*
-+ * wait_task_inactive - wait for a thread to unschedule.
-+ *
-+ * Wait for the thread to block in any of the states set in @match_state.
-+ * If it changes, i.e. @p might have woken up, then return zero.  When we
-+ * succeed in waiting for @p to be off its CPU, we return a positive number
-+ * (its total switch count).  If a second call a short while later returns the
-+ * same number, the caller can be sure that @p has remained unscheduled the
-+ * whole time.
-+ *
-+ * The caller must ensure that the task *will* unschedule sometime soon,
-+ * else this function might spin for a *long* time. This function can't
-+ * be called with interrupts off, or it may introduce deadlock with
-+ * smp_call_function() if an IPI is sent by the same process we are
-+ * waiting to become inactive.
-+ */
-+unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
-+{
-+	unsigned long flags;
-+	int running, queued, match;
-+	unsigned long ncsw;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	for (;;) {
-+		rq = task_rq(p);
-+
-+		/*
-+		 * If the task is actively running on another CPU
-+		 * still, just relax and busy-wait without holding
-+		 * any locks.
-+		 *
-+		 * NOTE! Since we don't hold any locks, it's not
-+		 * even sure that "rq" stays as the right runqueue!
-+		 * But we don't care, since this will return false
-+		 * if the runqueue has changed and p is actually now
-+		 * running somewhere else!
-+		 */
-+		while (task_on_cpu(p)) {
-+			if (!task_state_match(p, match_state))
-+				return 0;
-+			cpu_relax();
-+		}
-+
-+		/*
-+		 * Ok, time to look more closely! We need the rq
-+		 * lock now, to be *sure*. If we're wrong, we'll
-+		 * just go back and repeat.
-+		 */
-+		task_access_lock_irqsave(p, &lock, &flags);
-+		trace_sched_wait_task(p);
-+		running = task_on_cpu(p);
-+		queued = p->on_rq;
-+		ncsw = 0;
-+		if ((match = __task_state_match(p, match_state))) {
-+			/*
-+			 * When matching on p->saved_state, consider this task
-+			 * still queued so it will wait.
-+			 */
-+			if (match < 0)
-+				queued = 1;
-+			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
-+		}
-+		task_access_unlock_irqrestore(p, lock, &flags);
-+
-+		/*
-+		 * If it changed from the expected state, bail out now.
-+		 */
-+		if (unlikely(!ncsw))
-+			break;
-+
-+		/*
-+		 * Was it really running after all now that we
-+		 * checked with the proper locks actually held?
-+		 *
-+		 * Oops. Go back and try again.
-+		 */
-+		if (unlikely(running)) {
-+			cpu_relax();
-+			continue;
-+		}
-+
-+		/*
-+		 * It's not enough that it's not actively running,
-+		 * it must be off the runqueue _entirely_, and not
-+		 * preempted!
-+		 *
-+		 * So if it was still runnable (but just not actively
-+		 * running right now), it's preempted, and we should
-+		 * yield - it could be a while.
-+		 */
-+		if (unlikely(queued)) {
-+			ktime_t to = NSEC_PER_SEC / HZ;
-+
-+			set_current_state(TASK_UNINTERRUPTIBLE);
-+			schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
-+			continue;
-+		}
-+
-+		/*
-+		 * Ahh, all good. It wasn't running, and it wasn't
-+		 * runnable, which means that it will never become
-+		 * running in the future either. We're all done!
-+		 */
-+		break;
-+	}
-+
-+	return ncsw;
-+}
-+
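The "same number on a second call" contract described above is typically used as shown in this sketch; the helper name is hypothetical, only wait_task_inactive() is the real interface:

/* returns true if @p provably stayed off the CPU across the inspection */
static bool inspect_while_blocked(struct task_struct *p)	/* hypothetical helper */
{
	unsigned long ncsw;

	ncsw = wait_task_inactive(p, TASK_UNINTERRUPTIBLE);
	if (!ncsw)
		return false;		/* state changed: @p may have woken up */

	/* ... safely look at @p's blocked state here ... */

	return wait_task_inactive(p, TASK_UNINTERRUPTIBLE) == ncsw;
}
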
-+#ifdef CONFIG_SCHED_HRTICK
-+/*
-+ * Use HR-timers to deliver accurate preemption points.
-+ */
-+
-+static void hrtick_clear(struct rq *rq)
-+{
-+	if (hrtimer_active(&rq->hrtick_timer))
-+		hrtimer_cancel(&rq->hrtick_timer);
-+}
-+
-+/*
-+ * High-resolution timer tick.
-+ * Runs from hardirq context with interrupts disabled.
-+ */
-+static enum hrtimer_restart hrtick(struct hrtimer *timer)
-+{
-+	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
-+
-+	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
-+
-+	raw_spin_lock(&rq->lock);
-+	resched_curr(rq);
-+	raw_spin_unlock(&rq->lock);
-+
-+	return HRTIMER_NORESTART;
-+}
-+
-+/*
-+ * Use hrtick when:
-+ *  - enabled by features
-+ *  - hrtimer is actually high res
-+ */
-+static inline int hrtick_enabled(struct rq *rq)
-+{
-+	/**
-+	 * Alt schedule FW doesn't support sched_feat yet
-+	if (!sched_feat(HRTICK))
-+		return 0;
-+	*/
-+	if (!cpu_active(cpu_of(rq)))
-+		return 0;
-+	return hrtimer_is_hres_active(&rq->hrtick_timer);
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+static void __hrtick_restart(struct rq *rq)
-+{
-+	struct hrtimer *timer = &rq->hrtick_timer;
-+	ktime_t time = rq->hrtick_time;
-+
-+	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
-+}
-+
-+/*
-+ * called from hardirq (IPI) context
-+ */
-+static void __hrtick_start(void *arg)
-+{
-+	struct rq *rq = arg;
-+
-+	raw_spin_lock(&rq->lock);
-+	__hrtick_restart(rq);
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+/*
-+ * Called to set the hrtick timer state.
-+ *
-+ * called with rq->lock held and irqs disabled
-+ */
-+static inline void hrtick_start(struct rq *rq, u64 delay)
-+{
-+	struct hrtimer *timer = &rq->hrtick_timer;
-+	s64 delta;
-+
-+	/*
-+	 * Don't schedule slices shorter than 10000ns, that just
-+	 * doesn't make sense and can cause timer DoS.
-+	 */
-+	delta = max_t(s64, delay, 10000LL);
-+
-+	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
-+
-+	if (rq == this_rq())
-+		__hrtick_restart(rq);
-+	else
-+		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
-+}
-+
-+#else
-+/*
-+ * Called to set the hrtick timer state.
-+ *
-+ * called with rq->lock held and irqs disabled
-+ */
-+static inline void hrtick_start(struct rq *rq, u64 delay)
-+{
-+	/*
-+	 * Don't schedule slices shorter than 10000ns, that just
-+	 * doesn't make sense. Rely on vruntime for fairness.
-+	 */
-+	delay = max_t(u64, delay, 10000LL);
-+	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
-+		      HRTIMER_MODE_REL_PINNED_HARD);
-+}
-+#endif /* CONFIG_SMP */
-+
-+static void hrtick_rq_init(struct rq *rq)
-+{
-+#ifdef CONFIG_SMP
-+	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
-+#endif
-+
-+	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
-+	rq->hrtick_timer.function = hrtick;
-+}
-+#else	/* CONFIG_SCHED_HRTICK */
-+static inline int hrtick_enabled(struct rq *rq)
-+{
-+	return 0;
-+}
-+
-+static inline void hrtick_clear(struct rq *rq)
-+{
-+}
-+
-+static inline void hrtick_rq_init(struct rq *rq)
-+{
-+}
-+#endif	/* CONFIG_SCHED_HRTICK */
-+
-+static inline int __normal_prio(int policy, int rt_prio, int static_prio)
-+{
-+	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
-+}
-+
-+/*
-+ * Calculate the expected normal priority: i.e. priority
-+ * without taking RT-inheritance into account. Might be
-+ * boosted by interactivity modifiers. Changes upon fork,
-+ * setprio syscalls, and whenever the interactivity
-+ * estimator recalculates.
-+ */
-+static inline int normal_prio(struct task_struct *p)
-+{
-+	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
-+}
-+
-+/*
-+ * Calculate the current priority, i.e. the priority
-+ * taken into account by the scheduler. This value might
-+ * be boosted by RT tasks as it will be RT if the task got
-+ * RT-boosted. If not then it returns p->normal_prio.
-+ */
-+static int effective_prio(struct task_struct *p)
-+{
-+	p->normal_prio = normal_prio(p);
-+	/*
-+	 * If we are RT tasks or we were boosted to RT priority,
-+	 * keep the priority unchanged. Otherwise, update priority
-+	 * to the normal priority:
-+	 */
-+	if (!rt_prio(p->prio))
-+		return p->normal_prio;
-+	return p->prio;
-+}
-+
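Plugging in the usual constants (MAX_RT_PRIO of 100, a nice-0 static_prio of 120) makes the mapping concrete. A small user-space sketch under those assumptions:

#include <stdio.h>
#include <stdbool.h>

#define MAX_RT_PRIO	100
#define SCHED_NORMAL	0
#define SCHED_FIFO	1
#define SCHED_RR	2

static bool rt_policy(int policy)
{
	return policy == SCHED_FIFO || policy == SCHED_RR;
}

static int __normal_prio(int policy, int rt_prio, int static_prio)
{
	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
}

int main(void)
{
	/* SCHED_FIFO, rt_priority 50: 100 - 1 - 50 = 49 (lower value = higher priority) */
	printf("FIFO rt_priority=50 -> %d\n", __normal_prio(SCHED_FIFO, 50, 0));
	/* SCHED_NORMAL at nice 0: static_prio 120 is passed straight through */
	printf("NORMAL nice=0       -> %d\n", __normal_prio(SCHED_NORMAL, 0, 120));
	return 0;
}
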
-+/*
-+ * activate_task - move a task to the runqueue.
-+ *
-+ * Context: rq->lock
-+ */
-+static void activate_task(struct task_struct *p, struct rq *rq)
-+{
-+	enqueue_task(p, rq, ENQUEUE_WAKEUP);
-+
-+	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
-+	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
-+
-+	/*
-+	 * If in_iowait is set, the code below may not trigger any cpufreq
-+	 * utilization updates, so do it here explicitly with the IOWAIT flag
-+	 * passed.
-+	 */
-+	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
-+}
-+
-+/*
-+ * deactivate_task - remove a task from the runqueue.
-+ *
-+ * Context: rq->lock
-+ */
-+static inline void deactivate_task(struct task_struct *p, struct rq *rq)
-+{
-+	WRITE_ONCE(p->on_rq, 0);
-+	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
-+
-+	dequeue_task(p, rq, DEQUEUE_SLEEP);
-+
-+	cpufreq_update_util(rq, 0);
-+}
-+
-+static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
-+{
-+#ifdef CONFIG_SMP
-+	/*
-+	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
-+	 * successfully executed on another CPU. We must ensure that updates of
-+	 * per-task data have been completed by this moment.
-+	 */
-+	smp_wmb();
-+
-+	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
-+#endif
-+}
-+
-+static inline bool is_migration_disabled(struct task_struct *p)
-+{
-+#ifdef CONFIG_SMP
-+	return p->migration_disabled;
-+#else
-+	return false;
-+#endif
-+}
-+
-+#define SCA_CHECK		0x01
-+#define SCA_USER		0x08
-+
-+#ifdef CONFIG_SMP
-+
-+void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
-+{
-+#ifdef CONFIG_SCHED_DEBUG
-+	unsigned int state = READ_ONCE(p->__state);
-+
-+	/*
-+	 * We should never call set_task_cpu() on a blocked task,
-+	 * ttwu() will sort out the placement.
-+	 */
-+	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
-+
-+#ifdef CONFIG_LOCKDEP
-+	/*
-+	 * The caller should hold either p->pi_lock or rq->lock, when changing
-+	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
-+	 *
-+	 * sched_move_task() holds both and thus holding either pins the cgroup,
-+	 * see task_group().
-+	 */
-+	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
-+				      lockdep_is_held(&task_rq(p)->lock)));
-+#endif
-+	/*
-+	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
-+	 */
-+	WARN_ON_ONCE(!cpu_online(new_cpu));
-+
-+	WARN_ON_ONCE(is_migration_disabled(p));
-+#endif
-+	trace_sched_migrate_task(p, new_cpu);
-+
-+	if (task_cpu(p) != new_cpu) {
-+		rseq_migrate(p);
-+		perf_event_task_migrate(p);
-+	}
-+
-+	__set_task_cpu(p, new_cpu);
-+}
-+
-+#define MDF_FORCE_ENABLED	0x80
-+
-+static void
-+__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	/*
-+	 * This here violates the locking rules for affinity, since we're only
-+	 * supposed to change these variables while holding both rq->lock and
-+	 * p->pi_lock.
-+	 *
-+	 * HOWEVER, it magically works, because ttwu() is the only code that
-+	 * accesses these variables under p->pi_lock and only does so after
-+	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
-+	 * before finish_task().
-+	 *
-+	 * XXX do further audits, this smells like something putrid.
-+	 */
-+	SCHED_WARN_ON(!p->on_cpu);
-+	p->cpus_ptr = new_mask;
-+}
-+
-+void migrate_disable(void)
-+{
-+	struct task_struct *p = current;
-+	int cpu;
-+
-+	if (p->migration_disabled) {
-+		p->migration_disabled++;
-+		return;
-+	}
-+
-+	guard(preempt)();
-+	cpu = smp_processor_id();
-+	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
-+		cpu_rq(cpu)->nr_pinned++;
-+		p->migration_disabled = 1;
-+		p->migration_flags &= ~MDF_FORCE_ENABLED;
-+
-+		/*
-+		 * Violates locking rules! see comment in __do_set_cpus_ptr().
-+		 */
-+		if (p->cpus_ptr == &p->cpus_mask)
-+			__do_set_cpus_ptr(p, cpumask_of(cpu));
-+	}
-+}
-+EXPORT_SYMBOL_GPL(migrate_disable);
-+
-+void migrate_enable(void)
-+{
-+	struct task_struct *p = current;
-+
-+	if (0 == p->migration_disabled)
-+		return;
-+
-+	if (p->migration_disabled > 1) {
-+		p->migration_disabled--;
-+		return;
-+	}
-+
-+	if (WARN_ON_ONCE(!p->migration_disabled))
-+		return;
-+
-+	/*
-+	 * Ensure stop_task runs either before or after this, and that
-+	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
-+	 */
-+	guard(preempt)();
-+	/*
-+	 * Assumption: current should be running on an allowed CPU
-+	 */
-+	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
-+	if (p->cpus_ptr != &p->cpus_mask)
-+		__do_set_cpus_ptr(p, &p->cpus_mask);
-+	/*
-+	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
-+	 * regular cpus_mask, otherwise things that race (eg.
-+	 * select_fallback_rq) get confused.
-+	 */
-+	barrier();
-+	p->migration_disabled = 0;
-+	this_rq()->nr_pinned--;
-+}
-+EXPORT_SYMBOL_GPL(migrate_enable);
-+
-+static inline bool rq_has_pinned_tasks(struct rq *rq)
-+{
-+	return rq->nr_pinned;
-+}
-+
-+/*
-+ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
-+ * __set_cpus_allowed_ptr() and select_fallback_rq().
-+ */
-+static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
-+{
-+	/* When not in the task's cpumask, no point in looking further. */
-+	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
-+		return false;
-+
-+	/* migrate_disabled() must be allowed to finish. */
-+	if (is_migration_disabled(p))
-+		return cpu_online(cpu);
-+
-+	/* Non-kernel threads are not allowed during either online or offline. */
-+	if (!(p->flags & PF_KTHREAD))
-+		return cpu_active(cpu) && task_cpu_possible(cpu, p);
-+
-+	/* KTHREAD_IS_PER_CPU is always allowed. */
-+	if (kthread_is_per_cpu(p))
-+		return cpu_online(cpu);
-+
-+	/* Regular kernel threads don't get to stay during offline. */
-+	if (cpu_dying(cpu))
-+		return false;
-+
-+	/* But are allowed during online. */
-+	return cpu_online(cpu);
-+}
-+
-+/*
-+ * This is how migration works:
-+ *
-+ * 1) we invoke migration_cpu_stop() on the target CPU using
-+ *    stop_one_cpu().
-+ * 2) stopper starts to run (implicitly forcing the migrated thread
-+ *    off the CPU)
-+ * 3) it checks whether the migrated task is still in the wrong runqueue.
-+ * 4) if it's in the wrong runqueue then the migration thread removes
-+ *    it and puts it into the right queue.
-+ * 5) stopper completes and stop_one_cpu() returns and the migration
-+ *    is done.
-+ */
-+
-+/*
-+ * move_queued_task - move a queued task to new rq.
-+ *
-+ * Returns (locked) new rq. Old rq's lock is released.
-+ */
-+static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
-+{
-+	int src_cpu;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	src_cpu = cpu_of(rq);
-+	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
-+	dequeue_task(p, rq, 0);
-+	set_task_cpu(p, new_cpu);
-+	raw_spin_unlock(&rq->lock);
-+
-+	rq = cpu_rq(new_cpu);
-+
-+	raw_spin_lock(&rq->lock);
-+	WARN_ON_ONCE(task_cpu(p) != new_cpu);
-+
-+	sched_mm_cid_migrate_to(rq, p, src_cpu);
-+
-+	sched_task_sanity_check(p, rq);
-+	enqueue_task(p, rq, 0);
-+	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
-+	wakeup_preempt(rq);
-+
-+	return rq;
-+}
-+
-+struct migration_arg {
-+	struct task_struct *task;
-+	int dest_cpu;
-+};
-+
-+/*
-+ * Move (not current) task off this CPU, onto the destination CPU. We're doing
-+ * this because either it can't run here any more (set_cpus_allowed() moved it
-+ * away from this CPU, or the CPU is going down), or because we're
-+ * attempting to rebalance this task on exec (sched_exec).
-+ *
-+ * So we race with normal scheduler movements, but that's OK, as long
-+ * as the task is no longer on this CPU.
-+ */
-+static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
-+{
-+	/* Affinity changed (again). */
-+	if (!is_cpu_allowed(p, dest_cpu))
-+		return rq;
-+
-+	return move_queued_task(rq, p, dest_cpu);
-+}
-+
-+/*
-+ * migration_cpu_stop - this will be executed by a highprio stopper thread
-+ * and performs thread migration by bumping thread off CPU then
-+ * 'pushing' onto another runqueue.
-+ */
-+static int migration_cpu_stop(void *data)
-+{
-+	struct migration_arg *arg = data;
-+	struct task_struct *p = arg->task;
-+	struct rq *rq = this_rq();
-+	unsigned long flags;
-+
-+	/*
-+	 * The original target CPU might have gone down and we might
-+	 * be on another CPU but it doesn't matter.
-+	 */
-+	local_irq_save(flags);
-+	/*
-+	 * We need to explicitly wake pending tasks before running
-+	 * __migrate_task() such that we will not miss enforcing cpus_ptr
-+	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
-+	 */
-+	flush_smp_call_function_queue();
-+
-+	raw_spin_lock(&p->pi_lock);
-+	raw_spin_lock(&rq->lock);
-+	/*
-+	 * If task_rq(p) != rq, it cannot be migrated here, because we're
-+	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
-+	 * we're holding p->pi_lock.
-+	 */
-+	if (task_rq(p) == rq && task_on_rq_queued(p)) {
-+		update_rq_clock(rq);
-+		rq = __migrate_task(rq, p, arg->dest_cpu);
-+	}
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+	return 0;
-+}
-+
-+static inline void
-+set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
-+{
-+	cpumask_copy(&p->cpus_mask, ctx->new_mask);
-+	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
-+
-+	/*
-+	 * Swap in a new user_cpus_ptr if SCA_USER flag set
-+	 */
-+	if (ctx->flags & SCA_USER)
-+		swap(p->user_cpus_ptr, ctx->user_mask);
-+}
-+
-+static void
-+__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
-+{
-+	lockdep_assert_held(&p->pi_lock);
-+	set_cpus_allowed_common(p, ctx);
-+}
-+
-+/*
-+ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
-+ * affinity (if any) should be destroyed too.
-+ */
-+void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	struct affinity_context ac = {
-+		.new_mask  = new_mask,
-+		.user_mask = NULL,
-+		.flags     = SCA_USER,	/* clear the user requested mask */
-+	};
-+	union cpumask_rcuhead {
-+		cpumask_t cpumask;
-+		struct rcu_head rcu;
-+	};
-+
-+	__do_set_cpus_allowed(p, &ac);
-+
-+	/*
-+	 * Because this is called with p->pi_lock held, it is not possible
-+	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
-+	 * kfree_rcu().
-+	 */
-+	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
-+}
-+
-+static cpumask_t *alloc_user_cpus_ptr(int node)
-+{
-+	/*
-+	 * See do_set_cpus_allowed() above for the rcu_head usage.
-+	 */
-+	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
-+
-+	return kmalloc_node(size, GFP_KERNEL, node);
-+}
-+
-+int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
-+		      int node)
-+{
-+	cpumask_t *user_mask;
-+	unsigned long flags;
-+
-+	/*
-+	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
-+	 * may differ by now due to racing.
-+	 */
-+	dst->user_cpus_ptr = NULL;
-+
-+	/*
-+	 * This check is racy and losing the race is a valid situation.
-+	 * It is not worth the extra overhead of taking the pi_lock on
-+	 * every fork/clone.
-+	 */
-+	if (data_race(!src->user_cpus_ptr))
-+		return 0;
-+
-+	user_mask = alloc_user_cpus_ptr(node);
-+	if (!user_mask)
-+		return -ENOMEM;
-+
-+	/*
-+	 * Use pi_lock to protect content of user_cpus_ptr
-+	 *
-+	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
-+	 * do_set_cpus_allowed().
-+	 */
-+	raw_spin_lock_irqsave(&src->pi_lock, flags);
-+	if (src->user_cpus_ptr) {
-+		swap(dst->user_cpus_ptr, user_mask);
-+		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
-+	}
-+	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
-+
-+	if (unlikely(user_mask))
-+		kfree(user_mask);
-+
-+	return 0;
-+}
-+
-+static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
-+{
-+	struct cpumask *user_mask = NULL;
-+
-+	swap(p->user_cpus_ptr, user_mask);
-+
-+	return user_mask;
-+}
-+
-+void release_user_cpus_ptr(struct task_struct *p)
-+{
-+	kfree(clear_user_cpus_ptr(p));
-+}
-+
-+#endif
-+
-+/**
-+ * task_curr - is this task currently executing on a CPU?
-+ * @p: the task in question.
-+ *
-+ * Return: 1 if the task is currently executing. 0 otherwise.
-+ */
-+inline int task_curr(const struct task_struct *p)
-+{
-+	return cpu_curr(task_cpu(p)) == p;
-+}
-+
-+#ifdef CONFIG_SMP
-+/***
-+ * kick_process - kick a running thread to enter/exit the kernel
-+ * @p: the to-be-kicked thread
-+ *
-+ * Cause a process which is running on another CPU to enter
-+ * kernel-mode, without any delay. (to get signals handled.)
-+ *
-+ * NOTE: this function doesn't have to take the runqueue lock,
-+ * because all it wants to ensure is that the remote task enters
-+ * the kernel. If the IPI races and the task has been migrated
-+ * to another CPU then no harm is done and the purpose has been
-+ * achieved as well.
-+ */
-+void kick_process(struct task_struct *p)
-+{
-+	guard(preempt)();
-+	int cpu = task_cpu(p);
-+
-+	if ((cpu != smp_processor_id()) && task_curr(p))
-+		smp_send_reschedule(cpu);
-+}
-+EXPORT_SYMBOL_GPL(kick_process);
-+
-+/*
-+ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
-+ *
-+ * A few notes on cpu_active vs cpu_online:
-+ *
-+ *  - cpu_active must be a subset of cpu_online
-+ *
-+ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
-+ *    see __set_cpus_allowed_ptr(). At this point the newly online
-+ *    CPU isn't yet part of the sched domains, and balancing will not
-+ *    see it.
-+ *
-+ *  - on cpu-down we clear cpu_active() to mask the sched domains and
-+ *    prevent the load balancer from placing new tasks on the to-be-removed
-+ *    CPU. Existing tasks will remain running there and will be taken
-+ *    off.
-+ *
-+ * This means that fallback selection must not select !active CPUs.
-+ * And can assume that any active CPU must be online. Conversely
-+ * select_task_rq() below may allow selection of !active CPUs in order
-+ * to satisfy the above rules.
-+ */
-+static int select_fallback_rq(int cpu, struct task_struct *p)
-+{
-+	int nid = cpu_to_node(cpu);
-+	const struct cpumask *nodemask = NULL;
-+	enum { cpuset, possible, fail } state = cpuset;
-+	int dest_cpu;
-+
-+	/*
-+	 * If the node that the CPU is on has been offlined, cpu_to_node()
-+	 * will return -1. There is no CPU on the node, and we should
-+	 * select the CPU on the other node.
-+	 */
-+	if (nid != -1) {
-+		nodemask = cpumask_of_node(nid);
-+
-+		/* Look for allowed, online CPU in same node. */
-+		for_each_cpu(dest_cpu, nodemask) {
-+			if (is_cpu_allowed(p, dest_cpu))
-+				return dest_cpu;
-+		}
-+	}
-+
-+	for (;;) {
-+		/* Any allowed, online CPU? */
-+		for_each_cpu(dest_cpu, p->cpus_ptr) {
-+			if (!is_cpu_allowed(p, dest_cpu))
-+				continue;
-+			goto out;
-+		}
-+
-+		/* No more Mr. Nice Guy. */
-+		switch (state) {
-+		case cpuset:
-+			if (cpuset_cpus_allowed_fallback(p)) {
-+				state = possible;
-+				break;
-+			}
-+			fallthrough;
-+		case possible:
-+			/*
-+			 * XXX When called from select_task_rq() we only
-+			 * hold p->pi_lock and again violate locking order.
-+			 *
-+			 * More yuck to audit.
-+			 */
-+			do_set_cpus_allowed(p, task_cpu_possible_mask(p));
-+			state = fail;
-+			break;
-+
-+		case fail:
-+			BUG();
-+			break;
-+		}
-+	}
-+
-+out:
-+	if (state != cpuset) {
-+		/*
-+		 * Don't tell them about moving exiting tasks or
-+		 * kernel threads (both mm NULL), since they never
-+		 * leave kernel.
-+		 */
-+		if (p->mm && printk_ratelimit()) {
-+			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
-+					task_pid_nr(p), p->comm, cpu);
-+		}
-+	}
-+
-+	return dest_cpu;
-+}
-+
-+static inline void
-+sched_preempt_mask_flush(cpumask_t *mask, int prio, int ref)
-+{
-+	int cpu;
-+
-+	cpumask_copy(mask, sched_preempt_mask + ref);
-+	if (prio < ref) {
-+		for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
-+			if (prio < cpu_rq(cpu)->prio)
-+				cpumask_set_cpu(cpu, mask);
-+		}
-+	} else {
-+		for_each_cpu_andnot(cpu, mask, sched_idle_mask) {
-+			if (prio >= cpu_rq(cpu)->prio)
-+				cpumask_clear_cpu(cpu, mask);
-+		}
-+	}
-+}
-+
-+static inline int
-+preempt_mask_check(cpumask_t *preempt_mask, cpumask_t *allow_mask, int prio)
-+{
-+	cpumask_t *mask = sched_preempt_mask + prio;
-+	int pr = atomic_read(&sched_prio_record);
-+
-+	if (pr != prio && SCHED_QUEUE_BITS - 1 != prio) {
-+		sched_preempt_mask_flush(mask, prio, pr);
-+		atomic_set(&sched_prio_record, prio);
-+	}
-+
-+	return cpumask_and(preempt_mask, allow_mask, mask);
-+}
-+
-+static inline int select_task_rq(struct task_struct *p)
-+{
-+	cpumask_t allow_mask, mask;
-+
-+	if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
-+		return select_fallback_rq(task_cpu(p), p);
-+
-+	if (
-+#ifdef CONFIG_SCHED_SMT
-+	    cpumask_and(&mask, &allow_mask, sched_sg_idle_mask) ||
-+#endif
-+	    cpumask_and(&mask, &allow_mask, sched_idle_mask) ||
-+	    preempt_mask_check(&mask, &allow_mask, task_sched_prio(p)))
-+		return best_mask_cpu(task_cpu(p), &mask);
-+
-+	return best_mask_cpu(task_cpu(p), &allow_mask);
-+}
-+
-+void sched_set_stop_task(int cpu, struct task_struct *stop)
-+{
-+	static struct lock_class_key stop_pi_lock;
-+	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
-+	struct sched_param start_param = { .sched_priority = 0 };
-+	struct task_struct *old_stop = cpu_rq(cpu)->stop;
-+
-+	if (stop) {
-+		/*
-+		 * Make it appear like a SCHED_FIFO task; it's something
-+		 * userspace knows about and won't get confused by.
-+		 *
-+		 * Also, it will make PI more or less work without too
-+		 * much confusion -- but then, stop work should not
-+		 * rely on PI working anyway.
-+		 */
-+		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
-+
-+		/*
-+		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
-+		 * adjust the effective priority of a task. As a result,
-+		 * rt_mutex_setprio() can trigger (RT) balancing operations,
-+		 * which can then trigger wakeups of the stop thread to push
-+		 * around the current task.
-+		 *
-+		 * The stop task itself will never be part of the PI-chain, it
-+		 * never blocks, therefore that ->pi_lock recursion is safe.
-+		 * Tell lockdep about this by placing the stop->pi_lock in its
-+		 * own class.
-+		 */
-+		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
-+	}
-+
-+	cpu_rq(cpu)->stop = stop;
-+
-+	if (old_stop) {
-+		/*
-+		 * Reset it back to a normal scheduling policy so that
-+		 * it can die in pieces.
-+		 */
-+		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
-+	}
-+}
-+
-+static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
-+			    raw_spinlock_t *lock, unsigned long irq_flags)
-+	__releases(rq->lock)
-+	__releases(p->pi_lock)
-+{
-+	/* Can the task run on the task's current CPU? If so, we're done */
-+	if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
-+		if (p->migration_disabled) {
-+			if (likely(p->cpus_ptr != &p->cpus_mask))
-+				__do_set_cpus_ptr(p, &p->cpus_mask);
-+			p->migration_disabled = 0;
-+			p->migration_flags |= MDF_FORCE_ENABLED;
-+			/* When p is migrate_disabled, rq->lock should be held */
-+			rq->nr_pinned--;
-+		}
-+
-+		if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
-+			struct migration_arg arg = { p, dest_cpu };
-+
-+			/* Need help from migration thread: drop lock and wait. */
-+			__task_access_unlock(p, lock);
-+			raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+			stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
-+			return 0;
-+		}
-+		if (task_on_rq_queued(p)) {
-+			/*
-+			 * OK, since we're going to drop the lock immediately
-+			 * afterwards anyway.
-+			 */
-+			update_rq_clock(rq);
-+			rq = move_queued_task(rq, p, dest_cpu);
-+			lock = &rq->lock;
-+		}
-+	}
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+	return 0;
-+}
-+
-+static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
-+					 struct affinity_context *ctx,
-+					 struct rq *rq,
-+					 raw_spinlock_t *lock,
-+					 unsigned long irq_flags)
-+{
-+	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
-+	const struct cpumask *cpu_valid_mask = cpu_active_mask;
-+	bool kthread = p->flags & PF_KTHREAD;
-+	int dest_cpu;
-+	int ret = 0;
-+
-+	if (kthread || is_migration_disabled(p)) {
-+		/*
-+		 * Kernel threads are allowed on online && !active CPUs,
-+		 * however, during cpu-hot-unplug, even these might get pushed
-+		 * away if not KTHREAD_IS_PER_CPU.
-+		 *
-+		 * Specifically, migration_disabled() tasks must not fail the
-+		 * cpumask_any_and_distribute() pick below, esp. so on
-+		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
-+		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
-+		 */
-+		cpu_valid_mask = cpu_online_mask;
-+	}
-+
-+	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
-+		ret = -EINVAL;
-+		goto out;
-+	}
-+
-+	/*
-+	 * Must re-check here, to close a race against __kthread_bind(),
-+	 * sched_setaffinity() is not guaranteed to observe the flag.
-+	 */
-+	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
-+		ret = -EINVAL;
-+		goto out;
-+	}
-+
-+	if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
-+		goto out;
-+
-+	dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
-+	if (dest_cpu >= nr_cpu_ids) {
-+		ret = -EINVAL;
-+		goto out;
-+	}
-+
-+	__do_set_cpus_allowed(p, ctx);
-+
-+	return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
-+
-+out:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+
-+	return ret;
-+}
-+
-+/*
-+ * Change a given task's CPU affinity. Migrate the thread to a
-+ * proper CPU and schedule it away if the CPU it is executing on
-+ * is removed from the allowed bitmask.
-+ *
-+ * NOTE: the caller must have a valid reference to the task, the
-+ * task must not exit() & deallocate itself prematurely. The
-+ * call is not atomic; no spinlocks may be held.
-+ */
-+static int __set_cpus_allowed_ptr(struct task_struct *p,
-+				  struct affinity_context *ctx)
-+{
-+	unsigned long irq_flags;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
-+	rq = __task_access_lock(p, &lock);
-+	/*
-+	 * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
-+	 * flags are set.
-+	 */
-+	if (p->user_cpus_ptr &&
-+	    !(ctx->flags & SCA_USER) &&
-+	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
-+		ctx->new_mask = rq->scratch_mask;
-+
-+	return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
-+}
-+
-+int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	struct affinity_context ac = {
-+		.new_mask  = new_mask,
-+		.flags     = 0,
-+	};
-+
-+	return __set_cpus_allowed_ptr(p, &ac);
-+}
-+EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
-+
-+/*
-+ * Change a given task's CPU affinity to the intersection of its current
-+ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
-+ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
-+ * affinity or use cpu_online_mask instead.
-+ *
-+ * If the resulting mask is empty, leave the affinity unchanged and return
-+ * -EINVAL.
-+ */
-+static int restrict_cpus_allowed_ptr(struct task_struct *p,
-+				     struct cpumask *new_mask,
-+				     const struct cpumask *subset_mask)
-+{
-+	struct affinity_context ac = {
-+		.new_mask  = new_mask,
-+		.flags     = 0,
-+	};
-+	unsigned long irq_flags;
-+	raw_spinlock_t *lock;
-+	struct rq *rq;
-+	int err;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
-+	rq = __task_access_lock(p, &lock);
-+
-+	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
-+		err = -EINVAL;
-+		goto err_unlock;
-+	}
-+
-+	return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
-+
-+err_unlock:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+	return err;
-+}
-+
-+/*
-+ * Restrict the CPU affinity of task @p so that it is a subset of
-+ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
-+ * old affinity mask. If the resulting mask is empty, we warn and walk
-+ * up the cpuset hierarchy until we find a suitable mask.
-+ */
-+void force_compatible_cpus_allowed_ptr(struct task_struct *p)
-+{
-+	cpumask_var_t new_mask;
-+	const struct cpumask *override_mask = task_cpu_possible_mask(p);
-+
-+	alloc_cpumask_var(&new_mask, GFP_KERNEL);
-+
-+	/*
-+	 * __migrate_task() can fail silently in the face of concurrent
-+	 * offlining of the chosen destination CPU, so take the hotplug
-+	 * lock to ensure that the migration succeeds.
-+	 */
-+	cpus_read_lock();
-+	if (!cpumask_available(new_mask))
-+		goto out_set_mask;
-+
-+	if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
-+		goto out_free_mask;
-+
-+	/*
-+	 * We failed to find a valid subset of the affinity mask for the
-+	 * task, so override it based on its cpuset hierarchy.
-+	 */
-+	cpuset_cpus_allowed(p, new_mask);
-+	override_mask = new_mask;
-+
-+out_set_mask:
-+	if (printk_ratelimit()) {
-+		printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
-+				task_pid_nr(p), p->comm,
-+				cpumask_pr_args(override_mask));
-+	}
-+
-+	WARN_ON(set_cpus_allowed_ptr(p, override_mask));
-+out_free_mask:
-+	cpus_read_unlock();
-+	free_cpumask_var(new_mask);
-+}
-+
-+static int
-+__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
-+
-+/*
-+ * Restore the affinity of a task @p which was previously restricted by a
-+ * call to force_compatible_cpus_allowed_ptr().
-+ *
-+ * It is the caller's responsibility to serialise this with any calls to
-+ * force_compatible_cpus_allowed_ptr(@p).
-+ */
-+void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
-+{
-+	struct affinity_context ac = {
-+		.new_mask  = task_user_cpus(p),
-+		.flags     = 0,
-+	};
-+	int ret;
-+
-+	/*
-+	 * Try to restore the old affinity mask with __sched_setaffinity().
-+	 * Cpuset masking will be done there too.
-+	 */
-+	ret = __sched_setaffinity(p, &ac);
-+	WARN_ON_ONCE(ret);
-+}
-+
-+#else /* CONFIG_SMP */
-+
-+static inline int select_task_rq(struct task_struct *p)
-+{
-+	return 0;
-+}
-+
-+static inline int
-+__set_cpus_allowed_ptr(struct task_struct *p,
-+		       struct affinity_context *ctx)
-+{
-+	return set_cpus_allowed_ptr(p, ctx->new_mask);
-+}
-+
-+static inline bool rq_has_pinned_tasks(struct rq *rq)
-+{
-+	return false;
-+}
-+
-+static inline cpumask_t *alloc_user_cpus_ptr(int node)
-+{
-+	return NULL;
-+}
-+
-+#endif /* !CONFIG_SMP */
-+
-+static void
-+ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	struct rq *rq;
-+
-+	if (!schedstat_enabled())
-+		return;
-+
-+	rq = this_rq();
-+
-+#ifdef CONFIG_SMP
-+	if (cpu == rq->cpu) {
-+		__schedstat_inc(rq->ttwu_local);
-+		__schedstat_inc(p->stats.nr_wakeups_local);
-+	} else {
-+		/** Alt schedule FW ToDo:
-+		 * How to do ttwu_wake_remote
-+		 */
-+	}
-+#endif /* CONFIG_SMP */
-+
-+	__schedstat_inc(rq->ttwu_count);
-+	__schedstat_inc(p->stats.nr_wakeups);
-+}
-+
-+/*
-+ * Mark the task runnable.
-+ */
-+static inline void ttwu_do_wakeup(struct task_struct *p)
-+{
-+	WRITE_ONCE(p->__state, TASK_RUNNING);
-+	trace_sched_wakeup(p);
-+}
-+
-+static inline void
-+ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
-+{
-+	if (p->sched_contributes_to_load)
-+		rq->nr_uninterruptible--;
-+
-+	if (
-+#ifdef CONFIG_SMP
-+	    !(wake_flags & WF_MIGRATED) &&
-+#endif
-+	    p->in_iowait) {
-+		delayacct_blkio_end(p);
-+		atomic_dec(&task_rq(p)->nr_iowait);
-+	}
-+
-+	activate_task(p, rq);
-+	wakeup_preempt(rq);
-+
-+	ttwu_do_wakeup(p);
-+}
-+
-+/*
-+ * Consider @p being inside a wait loop:
-+ *
-+ *   for (;;) {
-+ *      set_current_state(TASK_UNINTERRUPTIBLE);
-+ *
-+ *      if (CONDITION)
-+ *         break;
-+ *
-+ *      schedule();
-+ *   }
-+ *   __set_current_state(TASK_RUNNING);
-+ *
-+ * between set_current_state() and schedule(). In this case @p is still
-+ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
-+ * an atomic manner.
-+ *
-+ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
-+ * then schedule() must still happen and p->state can be changed to
-+ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
-+ * need to do a full wakeup with enqueue.
-+ *
-+ * Returns: %true when the wakeup is done,
-+ *          %false otherwise.
-+ */
-+static int ttwu_runnable(struct task_struct *p, int wake_flags)
-+{
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+	int ret = 0;
-+
-+	rq = __task_access_lock(p, &lock);
-+	if (task_on_rq_queued(p)) {
-+		if (!task_on_cpu(p)) {
-+			/*
-+			 * When on_rq && !on_cpu the task is preempted, see if
-+			 * it should preempt the task that is current now.
-+			 */
-+			update_rq_clock(rq);
-+			wakeup_preempt(rq);
-+		}
-+		ttwu_do_wakeup(p);
-+		ret = 1;
-+	}
-+	__task_access_unlock(p, lock);
-+
-+	return ret;
-+}
-+
-+#ifdef CONFIG_SMP
-+void sched_ttwu_pending(void *arg)
-+{
-+	struct llist_node *llist = arg;
-+	struct rq *rq = this_rq();
-+	struct task_struct *p, *t;
-+	struct rq_flags rf;
-+
-+	if (!llist)
-+		return;
-+
-+	rq_lock_irqsave(rq, &rf);
-+	update_rq_clock(rq);
-+
-+	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
-+		if (WARN_ON_ONCE(p->on_cpu))
-+			smp_cond_load_acquire(&p->on_cpu, !VAL);
-+
-+		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
-+			set_task_cpu(p, cpu_of(rq));
-+
-+		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
-+	}
-+
-+	/*
-+	 * Must be after enqueueing at least one task such that
-+	 * idle_cpu() does not observe a false-negative -- if it does,
-+	 * it is possible for select_idle_siblings() to stack a number
-+	 * of tasks on this CPU during that window.
-+	 *
-+	 * It is ok to clear ttwu_pending when another task is pending.
-+	 * We will receive an IPI after local IRQs are enabled and then enqueue it.
-+	 * Since nr_running is now > 0, idle_cpu() will always get the correct result.
-+	 */
-+	WRITE_ONCE(rq->ttwu_pending, 0);
-+	rq_unlock_irqrestore(rq, &rf);
-+}
-+
-+/*
-+ * Prepare the scene for sending an IPI for a remote smp_call
-+ *
-+ * Returns true if the caller can proceed with sending the IPI.
-+ * Returns false otherwise.
-+ */
-+bool call_function_single_prep_ipi(int cpu)
-+{
-+	if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
-+		trace_sched_wake_idle_without_ipi(cpu);
-+		return false;
-+	}
-+
-+	return true;
-+}
-+
-+/*
-+ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
-+ * necessary. The wakee CPU on receipt of the IPI will queue the task
-+ * via sched_ttwu_wakeup() for activation so the wakee incurs the cost
-+ * of the wakeup instead of the waker.
-+ */
-+static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
-+
-+	WRITE_ONCE(rq->ttwu_pending, 1);
-+	__smp_call_single_queue(cpu, &p->wake_entry.llist);
-+}
-+
-+static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
-+{
-+	/*
-+	 * Do not complicate things with the async wake_list while the CPU is
-+	 * in hotplug state.
-+	 */
-+	if (!cpu_active(cpu))
-+		return false;
-+
-+	/* Ensure the task will still be allowed to run on the CPU. */
-+	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
-+		return false;
-+
-+	/*
-+	 * If the CPU does not share cache, then queue the task on the
-+	 * remote rq's wakelist to avoid accessing remote data.
-+	 */
-+	if (!cpus_share_cache(smp_processor_id(), cpu))
-+		return true;
-+
-+	if (cpu == smp_processor_id())
-+		return false;
-+
-+	/*
-+	 * If the wakee cpu is idle, or the task is descheduling and is the
-+	 * only running task on the CPU, then use the wakelist to offload
-+	 * the task activation to the idle (or soon-to-be-idle) CPU as
-+	 * the current CPU is likely busy. nr_running is checked to
-+	 * avoid unnecessary task stacking.
-+	 *
-+	 * Note that we can only get here with (wakee) p->on_rq=0,
-+	 * p->on_cpu can be whatever, we've done the dequeue, so
-+	 * the wakee has been accounted out of ->nr_running.
-+	 */
-+	if (!cpu_rq(cpu)->nr_running)
-+		return true;
-+
-+	return false;
-+}
-+
-+static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
-+		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
-+		__ttwu_queue_wakelist(p, cpu, wake_flags);
-+		return true;
-+	}
-+
-+	return false;
-+}
-+
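-+/*
-+ * Note on the __is_defined(ALT_SCHED_TTWU_QUEUE) test above: the wakelist
-+ * fast path is only used when ALT_SCHED_TTWU_QUEUE is defined (to 1) at
-+ * build time; otherwise ttwu_queue_wakelist() always returns false and
-+ * ttwu_queue() falls back to taking the target rq->lock directly.
-+ */
-+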
-+void wake_up_if_idle(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	guard(rcu)();
-+	if (is_idle_task(rcu_dereference(rq->curr))) {
-+		guard(raw_spinlock_irqsave)(&rq->lock);
-+		if (is_idle_task(rq->curr))
-+			resched_curr(rq);
-+	}
-+}
-+
-+extern struct static_key_false sched_asym_cpucapacity;
-+
-+static __always_inline bool sched_asym_cpucap_active(void)
-+{
-+	return static_branch_unlikely(&sched_asym_cpucapacity);
-+}
-+
-+bool cpus_equal_capacity(int this_cpu, int that_cpu)
-+{
-+	if (!sched_asym_cpucap_active())
-+		return true;
-+
-+	if (this_cpu == that_cpu)
-+		return true;
-+
-+	return arch_scale_cpu_capacity(this_cpu) == arch_scale_cpu_capacity(that_cpu);
-+}
-+
-+bool cpus_share_cache(int this_cpu, int that_cpu)
-+{
-+	if (this_cpu == that_cpu)
-+		return true;
-+
-+	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
-+}
-+#else /* !CONFIG_SMP */
-+
-+static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	return false;
-+}
-+
-+#endif /* CONFIG_SMP */
-+
-+static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (ttwu_queue_wakelist(p, cpu, wake_flags))
-+		return;
-+
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+	ttwu_do_activate(rq, p, wake_flags);
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+/*
-+ * Invoked from try_to_wake_up() to check whether the task can be woken up.
-+ *
-+ * The caller holds p::pi_lock if p != current or has preemption
-+ * disabled when p == current.
-+ *
-+ * The rules of saved_state:
-+ *
-+ *   The related locking code always holds p::pi_lock when updating
-+ *   p::saved_state, which means the code is fully serialized in both cases.
-+ *
-+ *  For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
-+ *  No other bits set. This allows us to distinguish all wakeup scenarios.
-+ *
-+ *  For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
-+ *  allows us to prevent early wakeup of tasks before they can be run on
-+ *  asymmetric ISA architectures (eg ARMv9).
-+ */
-+static __always_inline
-+bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
-+{
-+	int match;
-+
-+	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
-+		WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
-+			     state != TASK_RTLOCK_WAIT);
-+	}
-+
-+	*success = !!(match = __task_state_match(p, state));
-+
-+	/*
-+	 * Saved state preserves the task state across blocking on
-+	 * an RT lock or TASK_FREEZABLE tasks.  If the state matches,
-+	 * set p::saved_state to TASK_RUNNING, but do not wake the task
-+	 * because it waits for a lock wakeup or __thaw_task(). Also
-+	 * indicate success because from the regular waker's point of
-+	 * view this has succeeded.
-+	 *
-+	 * After acquiring the lock the task will restore p::__state
-+	 * from p::saved_state which ensures that the regular
-+	 * wakeup is not lost. The restore will also set
-+	 * p::saved_state to TASK_RUNNING so any further tests will
-+	 * not result in false positives vs. @success
-+	 */
-+	if (match < 0)
-+		p->saved_state = TASK_RUNNING;
-+
-+	return match > 0;
-+}
-+
-+/*
-+ * Notes on Program-Order guarantees on SMP systems.
-+ *
-+ *  MIGRATION
-+ *
-+ * The basic program-order guarantee on SMP systems is that when a task [t]
-+ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
-+ * execution on its new CPU [c1].
-+ *
-+ * For migration (of runnable tasks) this is provided by the following means:
-+ *
-+ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
-+ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
-+ *     rq(c1)->lock (if not at the same time, then in that order).
-+ *  C) LOCK of the rq(c1)->lock scheduling in task
-+ *
-+ * Transitivity guarantees that B happens after A and C after B.
-+ * Note: we only require RCpc transitivity.
-+ * Note: the CPU doing B need not be c0 or c1
-+ *
-+ * Example:
-+ *
-+ *   CPU0            CPU1            CPU2
-+ *
-+ *   LOCK rq(0)->lock
-+ *   sched-out X
-+ *   sched-in Y
-+ *   UNLOCK rq(0)->lock
-+ *
-+ *                                   LOCK rq(0)->lock // orders against CPU0
-+ *                                   dequeue X
-+ *                                   UNLOCK rq(0)->lock
-+ *
-+ *                                   LOCK rq(1)->lock
-+ *                                   enqueue X
-+ *                                   UNLOCK rq(1)->lock
-+ *
-+ *                   LOCK rq(1)->lock // orders against CPU2
-+ *                   sched-out Z
-+ *                   sched-in X
-+ *                   UNLOCK rq(1)->lock
-+ *
-+ *
-+ *  BLOCKING -- aka. SLEEP + WAKEUP
-+ *
-+ * For blocking we (obviously) need to provide the same guarantee as for
-+ * migration. However the means are completely different as there is no lock
-+ * chain to provide order. Instead we do:
-+ *
-+ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
-+ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
-+ *
-+ * Example:
-+ *
-+ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
-+ *
-+ *   LOCK rq(0)->lock LOCK X->pi_lock
-+ *   dequeue X
-+ *   sched-out X
-+ *   smp_store_release(X->on_cpu, 0);
-+ *
-+ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
-+ *                    X->state = WAKING
-+ *                    set_task_cpu(X,2)
-+ *
-+ *                    LOCK rq(2)->lock
-+ *                    enqueue X
-+ *                    X->state = RUNNING
-+ *                    UNLOCK rq(2)->lock
-+ *
-+ *                                          LOCK rq(2)->lock // orders against CPU1
-+ *                                          sched-out Z
-+ *                                          sched-in X
-+ *                                          UNLOCK rq(2)->lock
-+ *
-+ *                    UNLOCK X->pi_lock
-+ *   UNLOCK rq(0)->lock
-+ *
-+ *
-+ * However; for wakeups there is a second guarantee we must provide, namely we
-+ * must observe the state that led to our wakeup. That is, not only must our
-+ * task observe its own prior state, it must also observe the stores prior to
-+ * its wakeup.
-+ *
-+ * This means that any means of doing remote wakeups must order the CPU doing
-+ * the wakeup against the CPU the task is going to end up running on. This,
-+ * however, is already required for the regular Program-Order guarantee above,
-+ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
-+ *
-+ */
-+
-+/**
-+ * try_to_wake_up - wake up a thread
-+ * @p: the thread to be awakened
-+ * @state: the mask of task states that can be woken
-+ * @wake_flags: wake modifier flags (WF_*)
-+ *
-+ * Conceptually does:
-+ *
-+ *   If (@state & @p->state) @p->state = TASK_RUNNING.
-+ *
-+ * If the task was not queued/runnable, also place it back on a runqueue.
-+ *
-+ * This function is atomic against schedule() which would dequeue the task.
-+ *
-+ * It issues a full memory barrier before accessing @p->state, see the comment
-+ * with set_current_state().
-+ *
-+ * Uses p->pi_lock to serialize against concurrent wake-ups.
-+ *
-+ * Relies on p->pi_lock stabilizing:
-+ *  - p->sched_class
-+ *  - p->cpus_ptr
-+ *  - p->sched_task_group
-+ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
-+ *
-+ * Tries really hard to only take one task_rq(p)->lock for performance.
-+ * Takes rq->lock in:
-+ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
-+ *  - ttwu_queue()       -- new rq, for enqueue of the task;
-+ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
-+ *
-+ * As a consequence we race really badly with just about everything. See the
-+ * many memory barriers and their comments for details.
-+ *
-+ * Return: %true if @p->state changes (an actual wakeup was done),
-+ *	   %false otherwise.
-+ */
-+int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
-+{
-+	guard(preempt)();
-+	int cpu, success = 0;
-+
-+	if (p == current) {
-+		/*
-+		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
-+		 * == smp_processor_id()'. Together this means we can special
-+		 * case the whole 'p->on_rq && ttwu_runnable()' case below
-+		 * without taking any locks.
-+		 *
-+		 * In particular:
-+		 *  - we rely on Program-Order guarantees for all the ordering,
-+		 *  - we're serialized against set_special_state() by virtue of
-+		 *    it disabling IRQs (this allows not taking ->pi_lock).
-+		 */
-+		if (!ttwu_state_match(p, state, &success))
-+			goto out;
-+
-+		trace_sched_waking(p);
-+		ttwu_do_wakeup(p);
-+		goto out;
-+	}
-+
-+	/*
-+	 * If we are going to wake up a thread waiting for CONDITION we
-+	 * need to ensure that CONDITION=1 done by the caller can not be
-+	 * reordered with p->state check below. This pairs with smp_store_mb()
-+	 * in set_current_state() that the waiting thread does.
-+	 */
-+	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
-+		smp_mb__after_spinlock();
-+		if (!ttwu_state_match(p, state, &success))
-+			break;
-+
-+		trace_sched_waking(p);
-+
-+		/*
-+		 * Ensure we load p->on_rq _after_ p->state, otherwise it would
-+		 * be possible to, falsely, observe p->on_rq == 0 and get stuck
-+		 * in smp_cond_load_acquire() below.
-+		 *
-+		 * sched_ttwu_pending()			try_to_wake_up()
-+		 *   STORE p->on_rq = 1			  LOAD p->state
-+		 *   UNLOCK rq->lock
-+		 *
-+		 * __schedule() (switch to task 'p')
-+		 *   LOCK rq->lock			  smp_rmb();
-+		 *   smp_mb__after_spinlock();
-+		 *   UNLOCK rq->lock
-+		 *
-+		 * [task p]
-+		 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
-+		 *
-+		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
-+		 * __schedule().  See the comment for smp_mb__after_spinlock().
-+		 *
-+		 * A similar smp_rmb() lives in __task_needs_rq_lock().
-+		 */
-+		smp_rmb();
-+		if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
-+			break;
-+
-+#ifdef CONFIG_SMP
-+		/*
-+		 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
-+		 * possible to, falsely, observe p->on_cpu == 0.
-+		 *
-+		 * One must be running (->on_cpu == 1) in order to remove oneself
-+		 * from the runqueue.
-+		 *
-+		 * __schedule() (switch to task 'p')	try_to_wake_up()
-+		 *   STORE p->on_cpu = 1		  LOAD p->on_rq
-+		 *   UNLOCK rq->lock
-+		 *
-+		 * __schedule() (put 'p' to sleep)
-+		 *   LOCK rq->lock			  smp_rmb();
-+		 *   smp_mb__after_spinlock();
-+		 *   STORE p->on_rq = 0			  LOAD p->on_cpu
-+		 *
-+		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
-+		 * __schedule().  See the comment for smp_mb__after_spinlock().
-+		 *
-+		 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
-+		 * schedule()'s deactivate_task() has 'happened' and p will no longer
-+		 * care about its own p->state. See the comment in __schedule().
-+		 */
-+		smp_acquire__after_ctrl_dep();
-+
-+		/*
-+		 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
-+		 * == 0), which means we need to do an enqueue, change p->state to
-+		 * TASK_WAKING such that we can unlock p->pi_lock before doing the
-+		 * enqueue, such as ttwu_queue_wakelist().
-+		 */
-+		WRITE_ONCE(p->__state, TASK_WAKING);
-+
-+		/*
-+		 * If the owning (remote) CPU is still in the middle of schedule() with
-+		 * this task as prev, consider queueing p on the remote CPU's wake_list
-+		 * which potentially sends an IPI instead of spinning on p->on_cpu to
-+		 * let the waker make forward progress. This is safe because IRQs are
-+		 * disabled and the IPI will deliver after on_cpu is cleared.
-+		 *
-+		 * Ensure we load task_cpu(p) after p->on_cpu:
-+		 *
-+		 * set_task_cpu(p, cpu);
-+		 *   STORE p->cpu = @cpu
-+		 * __schedule() (switch to task 'p')
-+		 *   LOCK rq->lock
-+		 *   smp_mb__after_spin_lock()          smp_cond_load_acquire(&p->on_cpu)
-+		 *   STORE p->on_cpu = 1                LOAD p->cpu
-+		 *
-+		 * to ensure we observe the correct CPU on which the task is currently
-+		 * scheduling.
-+		 */
-+		if (smp_load_acquire(&p->on_cpu) &&
-+		    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
-+			break;
-+
-+		/*
-+		 * If the owning (remote) CPU is still in the middle of schedule() with
-+		 * this task as prev, wait until it's done referencing the task.
-+		 *
-+		 * Pairs with the smp_store_release() in finish_task().
-+		 *
-+		 * This ensures that tasks getting woken will be fully ordered against
-+		 * their previous state and preserve Program Order.
-+		 */
-+		smp_cond_load_acquire(&p->on_cpu, !VAL);
-+
-+		sched_task_ttwu(p);
-+
-+		if ((wake_flags & WF_CURRENT_CPU) &&
-+		    cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
-+			cpu = smp_processor_id();
-+		else
-+			cpu = select_task_rq(p);
-+
-+		if (cpu != task_cpu(p)) {
-+			if (p->in_iowait) {
-+				delayacct_blkio_end(p);
-+				atomic_dec(&task_rq(p)->nr_iowait);
-+			}
-+
-+			wake_flags |= WF_MIGRATED;
-+			set_task_cpu(p, cpu);
-+		}
-+#else
-+		sched_task_ttwu(p);
-+
-+		cpu = task_cpu(p);
-+#endif /* CONFIG_SMP */
-+
-+		ttwu_queue(p, cpu, wake_flags);
-+	}
-+out:
-+	if (success)
-+		ttwu_stat(p, task_cpu(p), wake_flags);
-+
-+	return success;
-+}
-+
-+static bool __task_needs_rq_lock(struct task_struct *p)
-+{
-+	unsigned int state = READ_ONCE(p->__state);
-+
-+	/*
-+	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
-+	 * the task is blocked. Make sure to check @state since ttwu() can drop
-+	 * locks at the end, see ttwu_queue_wakelist().
-+	 */
-+	if (state == TASK_RUNNING || state == TASK_WAKING)
-+		return true;
-+
-+	/*
-+	 * Ensure we load p->on_rq after p->__state, otherwise it would be
-+	 * possible to, falsely, observe p->on_rq == 0.
-+	 *
-+	 * See try_to_wake_up() for a longer comment.
-+	 */
-+	smp_rmb();
-+	if (p->on_rq)
-+		return true;
-+
-+#ifdef CONFIG_SMP
-+	/*
-+	 * Ensure the task has finished __schedule() and will not be referenced
-+	 * anymore. Again, see try_to_wake_up() for a longer comment.
-+	 */
-+	smp_rmb();
-+	smp_cond_load_acquire(&p->on_cpu, !VAL);
-+#endif
-+
-+	return false;
-+}
-+
-+/**
-+ * task_call_func - Invoke a function on task in fixed state
-+ * @p: Process for which the function is to be invoked, can be @current.
-+ * @func: Function to invoke.
-+ * @arg: Argument to function.
-+ *
-+ * Fix the task in its current state by avoiding wakeups and/or rq operations
-+ * and call @func(@arg) on it.  This function can use ->on_rq and task_curr()
-+ * to work out what the state is, if required.  Given that @func can be invoked
-+ * with a runqueue lock held, it had better be quite lightweight.
-+ *
-+ * Returns:
-+ *   Whatever @func returns
-+ */
-+int task_call_func(struct task_struct *p, task_call_f func, void *arg)
-+{
-+	struct rq *rq = NULL;
-+	struct rq_flags rf;
-+	int ret;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
-+
-+	if (__task_needs_rq_lock(p))
-+		rq = __task_rq_lock(p, &rf);
-+
-+	/*
-+	 * At this point the task is pinned; either:
-+	 *  - blocked and we're holding off wakeups      (pi->lock)
-+	 *  - woken, and we're holding off enqueue       (rq->lock)
-+	 *  - queued, and we're holding off schedule     (rq->lock)
-+	 *  - running, and we're holding off de-schedule (rq->lock)
-+	 *
-+	 * The called function (@func) can use: task_curr(), p->on_rq and
-+	 * p->__state to differentiate between these states.
-+	 */
-+	ret = func(p, arg);
-+
-+	if (rq)
-+		__task_rq_unlock(rq, &rf);
-+
-+	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
-+	return ret;
-+}
-+
-+/**
-+ * cpu_curr_snapshot - Return a snapshot of the currently running task
-+ * @cpu: The CPU on which to snapshot the task.
-+ *
-+ * Returns the task_struct pointer of the task "currently" running on
-+ * the specified CPU.  If the same task is running on that CPU throughout,
-+ * the return value will be a pointer to that task's task_struct structure.
-+ * If the CPU did any context switches even vaguely concurrently with the
-+ * execution of this function, the return value will be a pointer to the
-+ * task_struct structure of a randomly chosen task that was running on
-+ * that CPU somewhere around the time that this function was executing.
-+ *
-+ * If the specified CPU was offline, the return value is whatever it
-+ * is, perhaps a pointer to the task_struct structure of that CPU's idle
-+ * task, but there is no guarantee.  Callers wishing a useful return
-+ * value must take some action to ensure that the specified CPU remains
-+ * online throughout.
-+ *
-+ * This function executes full memory barriers before and after fetching
-+ * the pointer, which permits the caller to confine this function's fetch
-+ * with respect to the caller's accesses to other shared variables.
-+ */
-+struct task_struct *cpu_curr_snapshot(int cpu)
-+{
-+	struct task_struct *t;
-+
-+	smp_mb(); /* Pairing determined by caller's synchronization design. */
-+	t = rcu_dereference(cpu_curr(cpu));
-+	smp_mb(); /* Pairing determined by caller's synchronization design. */
-+	return t;
-+}
-+
-+/**
-+ * wake_up_process - Wake up a specific process
-+ * @p: The process to be woken up.
-+ *
-+ * Attempt to wake up the nominated process and move it to the set of runnable
-+ * processes.
-+ *
-+ * Return: 1 if the process was woken up, 0 if it was already running.
-+ *
-+ * This function executes a full memory barrier before accessing the task state.
-+ */
-+int wake_up_process(struct task_struct *p)
-+{
-+	return try_to_wake_up(p, TASK_NORMAL, 0);
-+}
-+EXPORT_SYMBOL(wake_up_process);
-+
-+int wake_up_state(struct task_struct *p, unsigned int state)
-+{
-+	return try_to_wake_up(p, state, 0);
-+}
-+
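-+/*
-+ * For reference: wake_up_process() passes TASK_NORMAL, which is
-+ * TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE, so it wakes a task sleeping in
-+ * either of those states. wake_up_state() lets the caller narrow the mask,
-+ * e.g. wake_up_state(p, TASK_INTERRUPTIBLE) leaves uninterruptible sleepers
-+ * alone.
-+ */
-+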
-+/*
-+ * Perform scheduler related setup for a newly forked process p.
-+ * p is forked by current.
-+ *
-+ * __sched_fork() is basic setup used by init_idle() too:
-+ */
-+static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
-+{
-+	p->on_rq			= 0;
-+	p->on_cpu			= 0;
-+	p->utime			= 0;
-+	p->stime			= 0;
-+	p->sched_time			= 0;
-+
-+#ifdef CONFIG_SCHEDSTATS
-+	/* Even if schedstat is disabled, there should not be garbage */
-+	memset(&p->stats, 0, sizeof(p->stats));
-+#endif
-+
-+#ifdef CONFIG_PREEMPT_NOTIFIERS
-+	INIT_HLIST_HEAD(&p->preempt_notifiers);
-+#endif
-+
-+#ifdef CONFIG_COMPACTION
-+	p->capture_control = NULL;
-+#endif
-+#ifdef CONFIG_SMP
-+	p->wake_entry.u_flags = CSD_TYPE_TTWU;
-+#endif
-+	init_sched_mm_cid(p);
-+}
-+
-+/*
-+ * fork()/clone()-time setup:
-+ */
-+int sched_fork(unsigned long clone_flags, struct task_struct *p)
-+{
-+	__sched_fork(clone_flags, p);
-+	/*
-+	 * We mark the process as NEW here. This guarantees that
-+	 * nobody will actually run it, and a signal or other external
-+	 * event cannot wake it up and insert it on the runqueue either.
-+	 */
-+	p->__state = TASK_NEW;
-+
-+	/*
-+	 * Make sure we do not leak PI boosting priority to the child.
-+	 */
-+	p->prio = current->normal_prio;
-+
-+	/*
-+	 * Revert to default priority/policy on fork if requested.
-+	 */
-+	if (unlikely(p->sched_reset_on_fork)) {
-+		if (task_has_rt_policy(p)) {
-+			p->policy = SCHED_NORMAL;
-+			p->static_prio = NICE_TO_PRIO(0);
-+			p->rt_priority = 0;
-+		} else if (PRIO_TO_NICE(p->static_prio) < 0)
-+			p->static_prio = NICE_TO_PRIO(0);
-+
-+		p->prio = p->normal_prio = p->static_prio;
-+
-+		/*
-+		 * We don't need the reset flag anymore after the fork. It has
-+		 * fulfilled its duty:
-+		 */
-+		p->sched_reset_on_fork = 0;
-+	}
-+
-+#ifdef CONFIG_SCHED_INFO
-+	if (unlikely(sched_info_on()))
-+		memset(&p->sched_info, 0, sizeof(p->sched_info));
-+#endif
-+	init_task_preempt_count(p);
-+
-+	return 0;
-+}
-+
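-+/*
-+ * Illustration of the sched_reset_on_fork handling above: a SCHED_FIFO
-+ * parent that set the reset flag forks a plain SCHED_NORMAL child at nice 0
-+ * (static_prio = NICE_TO_PRIO(0)); the child's copy of the flag is then
-+ * cleared, having fulfilled its duty.
-+ */
-+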
-+void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+
-+	/*
-+	 * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
-+	 * required yet, but lockdep gets upset if rules are violated.
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	/*
-+	 * Share the timeslice between parent and child, thus the
-+	 * total amount of pending timeslices in the system doesn't change,
-+	 * resulting in more scheduling fairness.
-+	 */
-+	rq = this_rq();
-+	raw_spin_lock(&rq->lock);
-+
-+	rq->curr->time_slice /= 2;
-+	p->time_slice = rq->curr->time_slice;
-+#ifdef CONFIG_SCHED_HRTICK
-+	hrtick_start(rq, rq->curr->time_slice);
-+#endif
-+
-+	if (p->time_slice < RESCHED_NS) {
-+		p->time_slice = sysctl_sched_base_slice;
-+		resched_curr(rq);
-+	}
-+	sched_task_fork(p, rq);
-+	raw_spin_unlock(&rq->lock);
-+
-+	rseq_migrate(p);
-+	/*
-+	 * We're setting the CPU for the first time, we don't migrate,
-+	 * so use __set_task_cpu().
-+	 */
-+	__set_task_cpu(p, smp_processor_id());
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
-+
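-+/*
-+ * Worked example of the timeslice split in sched_cgroup_fork() (numbers
-+ * purely illustrative): if rq->curr has 4ms of slice left, parent and child
-+ * each end up with 2ms; should the child's share fall below RESCHED_NS, it
-+ * is topped up to sysctl_sched_base_slice and the parent is marked for
-+ * rescheduling.
-+ */
-+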
-+void sched_post_fork(struct task_struct *p)
-+{
-+}
-+
-+#ifdef CONFIG_SCHEDSTATS
-+
-+DEFINE_STATIC_KEY_FALSE(sched_schedstats);
-+
-+static void set_schedstats(bool enabled)
-+{
-+	if (enabled)
-+		static_branch_enable(&sched_schedstats);
-+	else
-+		static_branch_disable(&sched_schedstats);
-+}
-+
-+void force_schedstat_enabled(void)
-+{
-+	if (!schedstat_enabled()) {
-+		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
-+		static_branch_enable(&sched_schedstats);
-+	}
-+}
-+
-+static int __init setup_schedstats(char *str)
-+{
-+	int ret = 0;
-+	if (!str)
-+		goto out;
-+
-+	if (!strcmp(str, "enable")) {
-+		set_schedstats(true);
-+		ret = 1;
-+	} else if (!strcmp(str, "disable")) {
-+		set_schedstats(false);
-+		ret = 1;
-+	}
-+out:
-+	if (!ret)
-+		pr_warn("Unable to parse schedstats=\n");
-+
-+	return ret;
-+}
-+__setup("schedstats=", setup_schedstats);
-+
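-+/*
-+ * Example (kernel command line): booting with "schedstats=enable" switches
-+ * schedstat accounting on from early boot and "schedstats=disable" switches
-+ * it off; any other value triggers the "Unable to parse schedstats=" warning
-+ * above.
-+ */
-+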
-+#ifdef CONFIG_PROC_SYSCTL
-+static int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
-+		size_t *lenp, loff_t *ppos)
-+{
-+	struct ctl_table t;
-+	int err;
-+	int state = static_branch_likely(&sched_schedstats);
-+
-+	if (write && !capable(CAP_SYS_ADMIN))
-+		return -EPERM;
-+
-+	t = *table;
-+	t.data = &state;
-+	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
-+	if (err < 0)
-+		return err;
-+	if (write)
-+		set_schedstats(state);
-+	return err;
-+}
-+
-+static struct ctl_table sched_core_sysctls[] = {
-+	{
-+		.procname       = "sched_schedstats",
-+		.data           = NULL,
-+		.maxlen         = sizeof(unsigned int),
-+		.mode           = 0644,
-+		.proc_handler   = sysctl_schedstats,
-+		.extra1         = SYSCTL_ZERO,
-+		.extra2         = SYSCTL_ONE,
-+	},
-+};
-+static int __init sched_core_sysctl_init(void)
-+{
-+	register_sysctl_init("kernel", sched_core_sysctls);
-+	return 0;
-+}
-+late_initcall(sched_core_sysctl_init);
-+#endif /* CONFIG_PROC_SYSCTL */
-+#endif /* CONFIG_SCHEDSTATS */
-+
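-+/*
-+ * With CONFIG_SCHEDSTATS and CONFIG_PROC_SYSCTL enabled, the same toggle is
-+ * also available at run time as kernel.sched_schedstats, e.g. via
-+ * "sysctl kernel.sched_schedstats=1" or by writing 0/1 to
-+ * /proc/sys/kernel/sched_schedstats.
-+ */
-+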
-+/*
-+ * wake_up_new_task - wake up a newly created task for the first time.
-+ *
-+ * This function will do some initial scheduler statistics housekeeping
-+ * that must be done for every newly created context, then puts the task
-+ * on the runqueue and wakes it.
-+ */
-+void wake_up_new_task(struct task_struct *p)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	WRITE_ONCE(p->__state, TASK_RUNNING);
-+	rq = cpu_rq(select_task_rq(p));
-+#ifdef CONFIG_SMP
-+	rseq_migrate(p);
-+	/*
-+	 * Fork balancing, do it here and not earlier because:
-+	 * - cpus_ptr can change in the fork path
-+	 * - any previously selected CPU might disappear through hotplug
-+	 *
-+	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
-+	 * as we're not fully set-up yet.
-+	 */
-+	__set_task_cpu(p, cpu_of(rq));
-+#endif
-+
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+
-+	activate_task(p, rq);
-+	trace_sched_wakeup_new(p);
-+	wakeup_preempt(rq);
-+
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
-+
-+#ifdef CONFIG_PREEMPT_NOTIFIERS
-+
-+static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
-+
-+void preempt_notifier_inc(void)
-+{
-+	static_branch_inc(&preempt_notifier_key);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_inc);
-+
-+void preempt_notifier_dec(void)
-+{
-+	static_branch_dec(&preempt_notifier_key);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_dec);
-+
-+/**
-+ * preempt_notifier_register - tell me when current is being preempted & rescheduled
-+ * @notifier: notifier struct to register
-+ */
-+void preempt_notifier_register(struct preempt_notifier *notifier)
-+{
-+	if (!static_branch_unlikely(&preempt_notifier_key))
-+		WARN(1, "registering preempt_notifier while notifiers disabled\n");
-+
-+	hlist_add_head(&notifier->link, &current->preempt_notifiers);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_register);
-+
-+/**
-+ * preempt_notifier_unregister - no longer interested in preemption notifications
-+ * @notifier: notifier struct to unregister
-+ *
-+ * This is *not* safe to call from within a preemption notifier.
-+ */
-+void preempt_notifier_unregister(struct preempt_notifier *notifier)
-+{
-+	hlist_del(&notifier->link);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
-+
-+static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+	struct preempt_notifier *notifier;
-+
-+	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
-+		notifier->ops->sched_in(notifier, raw_smp_processor_id());
-+}
-+
-+static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+	if (static_branch_unlikely(&preempt_notifier_key))
-+		__fire_sched_in_preempt_notifiers(curr);
-+}
-+
-+static void
-+__fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+				   struct task_struct *next)
-+{
-+	struct preempt_notifier *notifier;
-+
-+	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
-+		notifier->ops->sched_out(notifier, next);
-+}
-+
-+static __always_inline void
-+fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+				 struct task_struct *next)
-+{
-+	if (static_branch_unlikely(&preempt_notifier_key))
-+		__fire_sched_out_preempt_notifiers(curr, next);
-+}
-+
-+#else /* !CONFIG_PREEMPT_NOTIFIERS */
-+
-+static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+}
-+
-+static inline void
-+fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+				 struct task_struct *next)
-+{
-+}
-+
-+#endif /* CONFIG_PREEMPT_NOTIFIERS */
-+
-+static inline void prepare_task(struct task_struct *next)
-+{
-+	/*
-+	 * Claim the task as running, we do this before switching to it
-+	 * such that any running task will have this set.
-+	 *
-+	 * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
-+	 * its ordering comment.
-+	 */
-+	WRITE_ONCE(next->on_cpu, 1);
-+}
-+
-+static inline void finish_task(struct task_struct *prev)
-+{
-+#ifdef CONFIG_SMP
-+	/*
-+	 * This must be the very last reference to @prev from this CPU. After
-+	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
-+	 * must ensure this doesn't happen until the switch is completely
-+	 * finished.
-+	 *
-+	 * In particular, the load of prev->state in finish_task_switch() must
-+	 * happen before this.
-+	 *
-+	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
-+	 */
-+	smp_store_release(&prev->on_cpu, 0);
-+#else
-+	prev->on_cpu = 0;
-+#endif
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
-+{
-+	void (*func)(struct rq *rq);
-+	struct balance_callback *next;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	while (head) {
-+		func = (void (*)(struct rq *))head->func;
-+		next = head->next;
-+		head->next = NULL;
-+		head = next;
-+
-+		func(rq);
-+	}
-+}
-+
-+static void balance_push(struct rq *rq);
-+
-+/*
-+ * balance_push_callback is a right abuse of the callback interface and plays
-+ * by significantly different rules.
-+ *
-+ * Where the normal balance_callback's purpose is to be run in the same context
-+ * that queued it (only later, when it's safe to drop rq->lock again),
-+ * balance_push_callback is specifically targeted at __schedule().
-+ *
-+ * This abuse is tolerated because it places all the unlikely/odd cases behind
-+ * a single test, namely: rq->balance_callback == NULL.
-+ */
-+struct balance_callback balance_push_callback = {
-+	.next = NULL,
-+	.func = balance_push,
-+};
-+
-+static inline struct balance_callback *
-+__splice_balance_callbacks(struct rq *rq, bool split)
-+{
-+	struct balance_callback *head = rq->balance_callback;
-+
-+	if (likely(!head))
-+		return NULL;
-+
-+	lockdep_assert_rq_held(rq);
-+	/*
-+	 * Must not take balance_push_callback off the list when
-+	 * splice_balance_callbacks() and balance_callbacks() are not
-+	 * in the same rq->lock section.
-+	 *
-+	 * In that case it would be possible for __schedule() to interleave
-+	 * and observe the list empty.
-+	 */
-+	if (split && head == &balance_push_callback)
-+		head = NULL;
-+	else
-+		rq->balance_callback = NULL;
-+
-+	return head;
-+}
-+
-+static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
-+{
-+	return __splice_balance_callbacks(rq, true);
-+}
-+
-+static void __balance_callbacks(struct rq *rq)
-+{
-+	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
-+}
-+
-+static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
-+{
-+	unsigned long flags;
-+
-+	if (unlikely(head)) {
-+		raw_spin_lock_irqsave(&rq->lock, flags);
-+		do_balance_callbacks(rq, head);
-+		raw_spin_unlock_irqrestore(&rq->lock, flags);
-+	}
-+}
-+
-+#else
-+
-+static inline void __balance_callbacks(struct rq *rq)
-+{
-+}
-+
-+static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
-+{
-+	return NULL;
-+}
-+
-+static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
-+{
-+}
-+
-+#endif
-+
-+static inline void
-+prepare_lock_switch(struct rq *rq, struct task_struct *next)
-+{
-+	/*
-+	 * The runqueue lock will be released by the next
-+	 * task (which is an invalid locking op but in the case
-+	 * of the scheduler it's an obvious special-case), so we
-+	 * do an early lockdep release here:
-+	 */
-+	spin_release(&rq->lock.dep_map, _THIS_IP_);
-+#ifdef CONFIG_DEBUG_SPINLOCK
-+	/* this is a valid case when another task releases the spinlock */
-+	rq->lock.owner = next;
-+#endif
-+}
-+
-+static inline void finish_lock_switch(struct rq *rq)
-+{
-+	/*
-+	 * If we are tracking spinlock dependencies then we have to
-+	 * fix up the runqueue lock - which gets 'carried over' from
-+	 * prev into current:
-+	 */
-+	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
-+	__balance_callbacks(rq);
-+	raw_spin_unlock_irq(&rq->lock);
-+}
-+
-+/*
-+ * NOP if the arch has not defined these:
-+ */
-+
-+#ifndef prepare_arch_switch
-+# define prepare_arch_switch(next)	do { } while (0)
-+#endif
-+
-+#ifndef finish_arch_post_lock_switch
-+# define finish_arch_post_lock_switch()	do { } while (0)
-+#endif
-+
-+static inline void kmap_local_sched_out(void)
-+{
-+#ifdef CONFIG_KMAP_LOCAL
-+	if (unlikely(current->kmap_ctrl.idx))
-+		__kmap_local_sched_out();
-+#endif
-+}
-+
-+static inline void kmap_local_sched_in(void)
-+{
-+#ifdef CONFIG_KMAP_LOCAL
-+	if (unlikely(current->kmap_ctrl.idx))
-+		__kmap_local_sched_in();
-+#endif
-+}
-+
-+/**
-+ * prepare_task_switch - prepare to switch tasks
-+ * @rq: the runqueue preparing to switch
-+ * @next: the task we are going to switch to.
-+ *
-+ * This is called with the rq lock held and interrupts off. It must
-+ * be paired with a subsequent finish_task_switch after the context
-+ * switch.
-+ *
-+ * prepare_task_switch sets up locking and calls architecture specific
-+ * hooks.
-+ */
-+static inline void
-+prepare_task_switch(struct rq *rq, struct task_struct *prev,
-+		    struct task_struct *next)
-+{
-+	kcov_prepare_switch(prev);
-+	sched_info_switch(rq, prev, next);
-+	perf_event_task_sched_out(prev, next);
-+	rseq_preempt(prev);
-+	fire_sched_out_preempt_notifiers(prev, next);
-+	kmap_local_sched_out();
-+	prepare_task(next);
-+	prepare_arch_switch(next);
-+}
-+
-+/**
-+ * finish_task_switch - clean up after a task-switch
-+ * @rq: runqueue associated with task-switch
-+ * @prev: the thread we just switched away from.
-+ *
-+ * finish_task_switch must be called after the context switch, paired
-+ * with a prepare_task_switch call before the context switch.
-+ * finish_task_switch will reconcile locking set up by prepare_task_switch,
-+ * and do any other architecture-specific cleanup actions.
-+ *
-+ * Note that we may have delayed dropping an mm in context_switch(). If
-+ * so, we finish that here outside of the runqueue lock.  (Doing it
-+ * with the lock held can cause deadlocks; see schedule() for
-+ * details.)
-+ *
-+ * The context switch has flipped the stack from under us and restored the
-+ * local variables which were saved when this task called schedule() in the
-+ * past. prev == current is still correct but we need to recalculate this_rq
-+ * because prev may have moved to another CPU.
-+ */
-+static struct rq *finish_task_switch(struct task_struct *prev)
-+	__releases(rq->lock)
-+{
-+	struct rq *rq = this_rq();
-+	struct mm_struct *mm = rq->prev_mm;
-+	unsigned int prev_state;
-+
-+	/*
-+	 * The previous task will have left us with a preempt_count of 2
-+	 * because it left us after:
-+	 *
-+	 *	schedule()
-+	 *	  preempt_disable();			// 1
-+	 *	  __schedule()
-+	 *	    raw_spin_lock_irq(&rq->lock)	// 2
-+	 *
-+	 * Also, see FORK_PREEMPT_COUNT.
-+	 */
-+	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
-+		      "corrupted preempt_count: %s/%d/0x%x\n",
-+		      current->comm, current->pid, preempt_count()))
-+		preempt_count_set(FORK_PREEMPT_COUNT);
-+
-+	rq->prev_mm = NULL;
-+
-+	/*
-+	 * A task struct has one reference for the use as "current".
-+	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
-+	 * schedule one last time. The schedule call will never return, and
-+	 * the scheduled task must drop that reference.
-+	 *
-+	 * We must observe prev->state before clearing prev->on_cpu (in
-+	 * finish_task), otherwise a concurrent wakeup can get prev
-+	 * running on another CPU and we could race with its RUNNING -> DEAD
-+	 * transition, resulting in a double drop.
-+	 */
-+	prev_state = READ_ONCE(prev->__state);
-+	vtime_task_switch(prev);
-+	perf_event_task_sched_in(prev, current);
-+	finish_task(prev);
-+	tick_nohz_task_switch();
-+	finish_lock_switch(rq);
-+	finish_arch_post_lock_switch();
-+	kcov_finish_switch(current);
-+	/*
-+	 * kmap_local_sched_out() is invoked with rq::lock held and
-+	 * interrupts disabled. There is no requirement for that, but the
-+	 * sched out code does not have an interrupt enabled section.
-+	 * Restoring the maps on sched in does not require interrupts being
-+	 * disabled either.
-+	 */
-+	kmap_local_sched_in();
-+
-+	fire_sched_in_preempt_notifiers(current);
-+	/*
-+	 * When switching through a kernel thread, the loop in
-+	 * membarrier_{private,global}_expedited() may have observed that
-+	 * kernel thread and not issued an IPI. It is therefore possible to
-+	 * schedule between user->kernel->user threads without passing through
-+	 * switch_mm(). Membarrier requires a barrier after storing to
-+	 * rq->curr, before returning to userspace, so provide them here:
-+	 *
-+	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
-+	 *   provided by mmdrop(),
-+	 * - a sync_core for SYNC_CORE.
-+	 */
-+	if (mm) {
-+		membarrier_mm_sync_core_before_usermode(mm);
-+		mmdrop_sched(mm);
-+	}
-+	if (unlikely(prev_state == TASK_DEAD)) {
-+		/* Task is done with its stack. */
-+		put_task_stack(prev);
-+
-+		put_task_struct_rcu_user(prev);
-+	}
-+
-+	return rq;
-+}
-+
-+/**
-+ * schedule_tail - first thing a freshly forked thread must call.
-+ * @prev: the thread we just switched away from.
-+ */
-+asmlinkage __visible void schedule_tail(struct task_struct *prev)
-+	__releases(rq->lock)
-+{
-+	/*
-+	 * New tasks start with FORK_PREEMPT_COUNT, see there and
-+	 * finish_task_switch() for details.
-+	 *
-+	 * finish_task_switch() will drop rq->lock() and lower preempt_count
-+	 * and the preempt_enable() will end up enabling preemption (on
-+	 * PREEMPT_COUNT kernels).
-+	 */
-+
-+	finish_task_switch(prev);
-+	preempt_enable();
-+
-+	if (current->set_child_tid)
-+		put_user(task_pid_vnr(current), current->set_child_tid);
-+
-+	calculate_sigpending();
-+}
-+
-+/*
-+ * context_switch - switch to the new MM and the new thread's register state.
-+ */
-+static __always_inline struct rq *
-+context_switch(struct rq *rq, struct task_struct *prev,
-+	       struct task_struct *next)
-+{
-+	prepare_task_switch(rq, prev, next);
-+
-+	/*
-+	 * For paravirt, this is coupled with an exit in switch_to to
-+	 * combine the page table reload and the switch backend into
-+	 * one hypercall.
-+	 */
-+	arch_start_context_switch(prev);
-+
-+	/*
-+	 * kernel -> kernel   lazy + transfer active
-+	 *   user -> kernel   lazy + mmgrab() active
-+	 *
-+	 * kernel ->   user   switch + mmdrop() active
-+	 *   user ->   user   switch
-+	 *
-+	 * switch_mm_cid() needs to be updated if the barriers provided
-+	 * by context_switch() are modified.
-+	 */
-+	if (!next->mm) {                                // to kernel
-+		enter_lazy_tlb(prev->active_mm, next);
-+
-+		next->active_mm = prev->active_mm;
-+		if (prev->mm)                           // from user
-+			mmgrab(prev->active_mm);
-+		else
-+			prev->active_mm = NULL;
-+	} else {                                        // to user
-+		membarrier_switch_mm(rq, prev->active_mm, next->mm);
-+		/*
-+		 * sys_membarrier() requires an smp_mb() between setting
-+		 * rq->curr / membarrier_switch_mm() and returning to userspace.
-+		 *
-+		 * The below provides this either through switch_mm(), or in
-+		 * case 'prev->active_mm == next->mm' through
-+		 * finish_task_switch()'s mmdrop().
-+		 */
-+		switch_mm_irqs_off(prev->active_mm, next->mm, next);
-+		lru_gen_use_mm(next->mm);
-+
-+		if (!prev->mm) {                        // from kernel
-+			/* will mmdrop() in finish_task_switch(). */
-+			rq->prev_mm = prev->active_mm;
-+			prev->active_mm = NULL;
-+		}
-+	}
-+
-+	/* switch_mm_cid() requires the memory barriers above. */
-+	switch_mm_cid(rq, prev, next);
-+
-+	prepare_lock_switch(rq, next);
-+
-+	/* Here we just switch the register state and the stack. */
-+	switch_to(prev, next, prev);
-+	barrier();
-+
-+	return finish_task_switch(prev);
-+}
-+
-+/*
-+ * nr_running, nr_uninterruptible and nr_context_switches:
-+ *
-+ * externally visible scheduler statistics: current number of runnable
-+ * threads, total number of context switches performed since bootup.
-+ */
-+unsigned int nr_running(void)
-+{
-+	unsigned int i, sum = 0;
-+
-+	for_each_online_cpu(i)
-+		sum += cpu_rq(i)->nr_running;
-+
-+	return sum;
-+}
-+
-+/*
-+ * Check if only the current task is running on the CPU.
-+ *
-+ * Caution: this function does not check that the caller has disabled
-+ * preemption, thus the result might have a time-of-check-to-time-of-use
-+ * race.  The caller is responsible to use it correctly, for example:
-+ *
-+ * - from a non-preemptible section (of course)
-+ *
-+ * - from a thread that is bound to a single CPU
-+ *
-+ * - in a loop with very short iterations (e.g. a polling loop)
-+ */
-+bool single_task_running(void)
-+{
-+	return raw_rq()->nr_running == 1;
-+}
-+EXPORT_SYMBOL(single_task_running);
-+
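-+/*
-+ * Purely illustrative (hypothetical) use of single_task_running() in a short
-+ * polling loop, following the caveats above:
-+ *
-+ *	while (!poll_done()) {		// poll_done() is a made-up helper
-+ *		if (!single_task_running())
-+ *			break;		// someone else wants this CPU, stop polling
-+ *		cpu_relax();
-+ *	}
-+ */
-+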
-+unsigned long long nr_context_switches_cpu(int cpu)
-+{
-+	return cpu_rq(cpu)->nr_switches;
-+}
-+
-+unsigned long long nr_context_switches(void)
-+{
-+	int i;
-+	unsigned long long sum = 0;
-+
-+	for_each_possible_cpu(i)
-+		sum += cpu_rq(i)->nr_switches;
-+
-+	return sum;
-+}
-+
-+/*
-+ * Consumers of these two interfaces, like for example the cpuidle menu
-+ * governor, are using nonsensical data: they prefer a shallow idle state for
-+ * a CPU that has IO-wait pending, even though the waiting task might not even
-+ * end up running on that CPU when it does become runnable.
-+ */
-+
-+unsigned int nr_iowait_cpu(int cpu)
-+{
-+	return atomic_read(&cpu_rq(cpu)->nr_iowait);
-+}
-+
-+/*
-+ * IO-wait accounting, and how it's mostly bollocks (on SMP).
-+ *
-+ * The idea behind IO-wait accounting is to account the idle time that we could
-+ * have spent running if it were not for IO. That is, if we were to improve the
-+ * storage performance, we'd have a proportional reduction in IO-wait time.
-+ *
-+ * This all works nicely on UP, where, when a task blocks on IO, we account
-+ * idle time as IO-wait, because if the storage were faster, it could've been
-+ * running and we'd not be idle.
-+ *
-+ * This has been extended to SMP, by doing the same for each CPU. This however
-+ * is broken.
-+ *
-+ * Imagine for instance the case where two tasks block on one CPU: only that
-+ * CPU will have IO-wait accounted, while the other counts as regular idle,
-+ * even though, if the storage were faster, both could've run at the same
-+ * time, utilising both CPUs.
-+ *
-+ * This means that, when looking globally, the current IO-wait accounting on
-+ * SMP is a lower bound, due to under-accounting.
-+ *
-+ * Worse, since the numbers are provided per CPU, they are sometimes
-+ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
-+ * associated with any one particular CPU; it can wake up on a different CPU
-+ * than the one it blocked on. This means the per-CPU IO-wait number is
-+ * meaningless.
-+ *
-+ * Task CPU affinities can make all that even more 'interesting'.
-+ */
-+
-+unsigned int nr_iowait(void)
-+{
-+	unsigned int i, sum = 0;
-+
-+	for_each_possible_cpu(i)
-+		sum += nr_iowait_cpu(i);
-+
-+	return sum;
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+/*
-+ * sched_exec - execve() is a valuable balancing opportunity, because at
-+ * this point the task has the smallest effective memory and cache
-+ * footprint.
-+ */
-+void sched_exec(void)
-+{
-+}
-+
-+#endif
-+
-+DEFINE_PER_CPU(struct kernel_stat, kstat);
-+DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
-+
-+EXPORT_PER_CPU_SYMBOL(kstat);
-+EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
-+
-+static inline void update_curr(struct rq *rq, struct task_struct *p)
-+{
-+	s64 ns = rq->clock_task - p->last_ran;
-+
-+	p->sched_time += ns;
-+	cgroup_account_cputime(p, ns);
-+	account_group_exec_runtime(p, ns);
-+
-+	p->time_slice -= ns;
-+	p->last_ran = rq->clock_task;
-+}
-+
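-+/*
-+ * In other words, update_curr() charges the time since ->last_ran to the
-+ * task: sched_time grows by the delta and the remaining time_slice shrinks
-+ * by the same amount. With illustrative numbers, a task that just ran for
-+ * 1ms of a 4ms slice has 3ms of slice left; once the remainder drops below
-+ * RESCHED_NS, scheduler_task_tick() marks it for rescheduling.
-+ */
-+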
-+/*
-+ * Return accounted runtime for the task.
-+ * Return separately the current task's pending runtime that has not been
-+ * accounted yet.
-+ */
-+unsigned long long task_sched_runtime(struct task_struct *p)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+	u64 ns;
-+
-+#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
-+	/*
-+	 * 64-bit doesn't need locks to atomically read a 64-bit value.
-+	 * So we have an optimization chance when the task's delta_exec is 0.
-+	 * Reading ->on_cpu is racy, but this is ok.
-+	 *
-+	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
-+	 * If we race with it entering CPU, unaccounted time is 0. This is
-+	 * indistinguishable from the read occurring a few cycles earlier.
-+	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
-+	 * been accounted, so we're correct here as well.
-+	 */
-+	if (!p->on_cpu || !task_on_rq_queued(p))
-+		return tsk_seruntime(p);
-+#endif
-+
-+	rq = task_access_lock_irqsave(p, &lock, &flags);
-+	/*
-+	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
-+	 * project cycles that may never be accounted to this
-+	 * thread, breaking clock_gettime().
-+	 */
-+	if (p == rq->curr && task_on_rq_queued(p)) {
-+		update_rq_clock(rq);
-+		update_curr(rq, p);
-+	}
-+	ns = tsk_seruntime(p);
-+	task_access_unlock_irqrestore(p, lock, &flags);
-+
-+	return ns;
-+}
-+
-+/* This manages tasks that have run out of timeslice during a scheduler_tick */
-+static inline void scheduler_task_tick(struct rq *rq)
-+{
-+	struct task_struct *p = rq->curr;
-+
-+	if (is_idle_task(p))
-+		return;
-+
-+	update_curr(rq, p);
-+	cpufreq_update_util(rq, 0);
-+
-+	/*
-+	 * Tasks that have less than RESCHED_NS of time slice left will be
-+	 * rescheduled.
-+	 */
-+	if (p->time_slice >= RESCHED_NS)
-+		return;
-+	set_tsk_need_resched(p);
-+	set_preempt_need_resched();
-+}
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+static u64 cpu_resched_latency(struct rq *rq)
-+{
-+	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
-+	u64 resched_latency, now = rq_clock(rq);
-+	static bool warned_once;
-+
-+	if (sysctl_resched_latency_warn_once && warned_once)
-+		return 0;
-+
-+	if (!need_resched() || !latency_warn_ms)
-+		return 0;
-+
-+	if (system_state == SYSTEM_BOOTING)
-+		return 0;
-+
-+	if (!rq->last_seen_need_resched_ns) {
-+		rq->last_seen_need_resched_ns = now;
-+		rq->ticks_without_resched = 0;
-+		return 0;
-+	}
-+
-+	rq->ticks_without_resched++;
-+	resched_latency = now - rq->last_seen_need_resched_ns;
-+	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
-+		return 0;
-+
-+	warned_once = true;
-+
-+	return resched_latency;
-+}
-+
-+static int __init setup_resched_latency_warn_ms(char *str)
-+{
-+	long val;
-+
-+	if ((kstrtol(str, 0, &val))) {
-+		pr_warn("Unable to set resched_latency_warn_ms\n");
-+		return 1;
-+	}
-+
-+	sysctl_resched_latency_warn_ms = val;
-+	return 1;
-+}
-+__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
-+#else
-+static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
-+#endif /* CONFIG_SCHED_DEBUG */
-+
-+/*
-+ * This function gets called by the timer code, with HZ frequency.
-+ * We call it with interrupts disabled.
-+ */
-+void sched_tick(void)
-+{
-+	int cpu __maybe_unused = smp_processor_id();
-+	struct rq *rq = cpu_rq(cpu);
-+	struct task_struct *curr = rq->curr;
-+	u64 resched_latency;
-+
-+	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
-+		arch_scale_freq_tick();
-+
-+	sched_clock_tick();
-+
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+
-+	scheduler_task_tick(rq);
-+	if (sched_feat(LATENCY_WARN))
-+		resched_latency = cpu_resched_latency(rq);
-+	calc_global_load_tick(rq);
-+
-+	task_tick_mm_cid(rq, rq->curr);
-+
-+	raw_spin_unlock(&rq->lock);
-+
-+	if (sched_feat(LATENCY_WARN) && resched_latency)
-+		resched_latency_warn(cpu, resched_latency);
-+
-+	perf_event_task_tick();
-+
-+	if (curr->flags & PF_WQ_WORKER)
-+		wq_worker_tick(curr);
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+static int active_balance_cpu_stop(void *data)
-+{
-+	struct balance_arg *arg = data;
-+	struct task_struct *p = arg->task;
-+	struct rq *rq = this_rq();
-+	unsigned long flags;
-+	cpumask_t tmp;
-+
-+	local_irq_save(flags);
-+
-+	raw_spin_lock(&p->pi_lock);
-+	raw_spin_lock(&rq->lock);
-+
-+	arg->active = 0;
-+
-+	if (task_on_rq_queued(p) && task_rq(p) == rq &&
-+	    cpumask_and(&tmp, p->cpus_ptr, arg->cpumask) &&
-+	    !is_migration_disabled(p)) {
-+		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu_of(rq)));
-+		rq = move_queued_task(rq, p, dcpu);
-+	}
-+
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+	return 0;
-+}
-+
-+/* trigger_active_balance - try to start active balancing towards @rq */
-+static inline int
-+trigger_active_balance(struct rq *src_rq, struct rq *rq, struct balance_arg *arg)
-+{
-+	unsigned long flags;
-+	struct task_struct *p;
-+	int res;
-+
-+	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-+		return 0;
-+
-+	res = (1 == rq->nr_running) &&
-+	      !is_migration_disabled((p = sched_rq_first_task(rq))) &&
-+	      cpumask_intersects(p->cpus_ptr, arg->cpumask) &&
-+	      !arg->active;
-+	if (res) {
-+		arg->task = p;
-+
-+		arg->active = 1;
-+	}
-+
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+	if (res) {
-+		preempt_disable();
-+		raw_spin_unlock(&src_rq->lock);
-+
-+		stop_one_cpu_nowait(cpu_of(rq), active_balance_cpu_stop,
-+				    arg, &rq->active_balance_work);
-+
-+		preempt_enable();
-+		raw_spin_lock(&src_rq->lock);
-+	}
-+
-+	return res;
-+}
-+
-+#ifdef CONFIG_SCHED_SMT
-+/*
-+ * sg_balance - sibling group balance check for run queue @rq
-+ */
-+static inline void sg_balance(struct rq *rq)
-+{
-+	cpumask_t chk;
-+
-+	if (cpumask_andnot(&chk, cpu_active_mask, sched_idle_mask) &&
-+	    cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
-+		int i, cpu = cpu_of(rq);
-+
-+		for_each_cpu_wrap(i, &chk, cpu) {
-+			if (cpumask_subset(cpu_smt_mask(i), &chk)) {
-+				struct rq *target_rq = cpu_rq(i);
-+				if (trigger_active_balance(rq, target_rq, &target_rq->sg_balance_arg))
-+					return;
-+			}
-+		}
-+	}
-+}
-+
-+static DEFINE_PER_CPU(struct balance_callback, sg_balance_head) = {
-+	.func = sg_balance,
-+};
-+#endif /* CONFIG_SCHED_SMT */
-+
-+#endif /* CONFIG_SMP */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+
-+struct tick_work {
-+	int			cpu;
-+	atomic_t		state;
-+	struct delayed_work	work;
-+};
-+/* Values for ->state, see diagram below. */
-+#define TICK_SCHED_REMOTE_OFFLINE	0
-+#define TICK_SCHED_REMOTE_OFFLINING	1
-+#define TICK_SCHED_REMOTE_RUNNING	2
-+
-+/*
-+ * State diagram for ->state:
-+ *
-+ *
-+ *          TICK_SCHED_REMOTE_OFFLINE
-+ *                    |   ^
-+ *                    |   |
-+ *                    |   | sched_tick_remote()
-+ *                    |   |
-+ *                    |   |
-+ *                    +--TICK_SCHED_REMOTE_OFFLINING
-+ *                    |   ^
-+ *                    |   |
-+ * sched_tick_start() |   | sched_tick_stop()
-+ *                    |   |
-+ *                    V   |
-+ *          TICK_SCHED_REMOTE_RUNNING
-+ *
-+ *
-+ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
-+ * and sched_tick_start() are happy to leave the state in RUNNING.
-+ */
-+
-+static struct tick_work __percpu *tick_work_cpu;
-+
-+static void sched_tick_remote(struct work_struct *work)
-+{
-+	struct delayed_work *dwork = to_delayed_work(work);
-+	struct tick_work *twork = container_of(dwork, struct tick_work, work);
-+	int cpu = twork->cpu;
-+	struct rq *rq = cpu_rq(cpu);
-+	int os;
-+
-+	/*
-+	 * Handle the tick only if it appears the remote CPU is running in full
-+	 * dynticks mode. The check is racy by nature, but missing a tick or
-+	 * having one too many is no big deal because the scheduler tick updates
-+	 * statistics and checks timeslices in a time-independent way, regardless
-+	 * of when exactly it is running.
-+	 */
-+	if (tick_nohz_tick_stopped_cpu(cpu)) {
-+		guard(raw_spinlock_irqsave)(&rq->lock);
-+		struct task_struct *curr = rq->curr;
-+
-+		if (cpu_online(cpu)) {
-+			update_rq_clock(rq);
-+
-+			if (!is_idle_task(curr)) {
-+				/*
-+				 * Make sure the next tick runs within a
-+				 * reasonable amount of time.
-+				 */
-+				u64 delta = rq_clock_task(rq) - curr->last_ran;
-+				WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
-+			}
-+			scheduler_task_tick(rq);
-+
-+			calc_load_nohz_remote(rq);
-+		}
-+	}
-+
-+	/*
-+	 * Run the remote tick once per second (1Hz). This arbitrary
-+	 * interval is long enough to avoid overload but short enough
-+	 * to keep scheduler internal stats reasonably up to date.  But
-+	 * first update state to reflect hotplug activity if required.
-+	 */
-+	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
-+	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
-+	if (os == TICK_SCHED_REMOTE_RUNNING)
-+		queue_delayed_work(system_unbound_wq, dwork, HZ);
-+}
-+
-+static void sched_tick_start(int cpu)
-+{
-+	int os;
-+	struct tick_work *twork;
-+
-+	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
-+		return;
-+
-+	WARN_ON_ONCE(!tick_work_cpu);
-+
-+	twork = per_cpu_ptr(tick_work_cpu, cpu);
-+	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
-+	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
-+	if (os == TICK_SCHED_REMOTE_OFFLINE) {
-+		twork->cpu = cpu;
-+		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
-+		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
-+	}
-+}
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+static void sched_tick_stop(int cpu)
-+{
-+	struct tick_work *twork;
-+	int os;
-+
-+	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
-+		return;
-+
-+	WARN_ON_ONCE(!tick_work_cpu);
-+
-+	twork = per_cpu_ptr(tick_work_cpu, cpu);
-+	/* There cannot be competing actions, but don't rely on stop-machine. */
-+	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
-+	WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
-+	/* Don't cancel, as this would mess up the state machine. */
-+}
-+#endif /* CONFIG_HOTPLUG_CPU */
-+
-+int __init sched_tick_offload_init(void)
-+{
-+	tick_work_cpu = alloc_percpu(struct tick_work);
-+	BUG_ON(!tick_work_cpu);
-+	return 0;
-+}
-+
-+#else /* !CONFIG_NO_HZ_FULL */
-+static inline void sched_tick_start(int cpu) { }
-+static inline void sched_tick_stop(int cpu) { }
-+#endif
-+
-+#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
-+				defined(CONFIG_PREEMPT_TRACER))
-+/*
-+ * If the value passed in is equal to the current preempt count
-+ * then we just disabled preemption. Start timing the latency.
-+ */
-+static inline void preempt_latency_start(int val)
-+{
-+	if (preempt_count() == val) {
-+		unsigned long ip = get_lock_parent_ip();
-+#ifdef CONFIG_DEBUG_PREEMPT
-+		current->preempt_disable_ip = ip;
-+#endif
-+		trace_preempt_off(CALLER_ADDR0, ip);
-+	}
-+}
-+
-+void preempt_count_add(int val)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	/*
-+	 * Underflow?
-+	 */
-+	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
-+		return;
-+#endif
-+	__preempt_count_add(val);
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	/*
-+	 * Spinlock count overflowing soon?
-+	 */
-+	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
-+				PREEMPT_MASK - 10);
-+#endif
-+	preempt_latency_start(val);
-+}
-+EXPORT_SYMBOL(preempt_count_add);
-+NOKPROBE_SYMBOL(preempt_count_add);
-+
-+/*
-+ * If the value passed in equals the current preempt count
-+ * then we just enabled preemption. Stop timing the latency.
-+ */
-+static inline void preempt_latency_stop(int val)
-+{
-+	if (preempt_count() == val)
-+		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
-+}
-+
-+void preempt_count_sub(int val)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	/*
-+	 * Underflow?
-+	 */
-+	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
-+		return;
-+	/*
-+	 * Is the spinlock portion underflowing?
-+	 */
-+	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
-+			!(preempt_count() & PREEMPT_MASK)))
-+		return;
-+#endif
-+
-+	preempt_latency_stop(val);
-+	__preempt_count_sub(val);
-+}
-+EXPORT_SYMBOL(preempt_count_sub);
-+NOKPROBE_SYMBOL(preempt_count_sub);
-+
-+#else
-+static inline void preempt_latency_start(int val) { }
-+static inline void preempt_latency_stop(int val) { }
-+#endif
-+
-+static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	return p->preempt_disable_ip;
-+#else
-+	return 0;
-+#endif
-+}
-+
-+/*
-+ * Print scheduling while atomic bug:
-+ */
-+static noinline void __schedule_bug(struct task_struct *prev)
-+{
-+	/* Save this before calling printk(), since that will clobber it */
-+	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
-+
-+	if (oops_in_progress)
-+		return;
-+
-+	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
-+		prev->comm, prev->pid, preempt_count());
-+
-+	debug_show_held_locks(prev);
-+	print_modules();
-+	if (irqs_disabled())
-+		print_irqtrace_events(prev);
-+	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
-+		pr_err("Preemption disabled at:");
-+		print_ip_sym(KERN_ERR, preempt_disable_ip);
-+	}
-+	check_panic_on_warn("scheduling while atomic");
-+
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+
-+/*
-+ * Various schedule()-time debugging checks and statistics:
-+ */
-+static inline void schedule_debug(struct task_struct *prev, bool preempt)
-+{
-+#ifdef CONFIG_SCHED_STACK_END_CHECK
-+	if (task_stack_end_corrupted(prev))
-+		panic("corrupted stack end detected inside scheduler\n");
-+
-+	if (task_scs_end_corrupted(prev))
-+		panic("corrupted shadow stack detected inside scheduler\n");
-+#endif
-+
-+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
-+	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
-+		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
-+			prev->comm, prev->pid, prev->non_block_count);
-+		dump_stack();
-+		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+	}
-+#endif
-+
-+	if (unlikely(in_atomic_preempt_off())) {
-+		__schedule_bug(prev);
-+		preempt_count_set(PREEMPT_DISABLED);
-+	}
-+	rcu_sleep_check();
-+	SCHED_WARN_ON(ct_state() == CONTEXT_USER);
-+
-+	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
-+
-+	schedstat_inc(this_rq()->sched_count);
-+}
-+
-+#ifdef ALT_SCHED_DEBUG
-+void alt_sched_debug(void)
-+{
-+	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
-+	       sched_rq_pending_mask.bits[0],
-+	       sched_idle_mask->bits[0],
-+	       sched_sg_idle_mask.bits[0]);
-+}
-+#else
-+inline void alt_sched_debug(void) {}
-+#endif
-+
-+#ifdef	CONFIG_SMP
-+
-+#ifdef CONFIG_PREEMPT_RT
-+#define SCHED_NR_MIGRATE_BREAK 8
-+#else
-+#define SCHED_NR_MIGRATE_BREAK 32
-+#endif
-+
-+const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
-+
-+/*
-+ * Migrate pending tasks in @rq to @dest_cpu
-+ */
-+static inline int
-+migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
-+{
-+	struct task_struct *p, *skip = rq->curr;
-+	int nr_migrated = 0;
-+	int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
-+
-+	/* Workaround to check that rq->curr is still on the rq */
-+	if (!task_on_rq_queued(skip))
-+		return 0;
-+
-+	while (skip != rq->idle && nr_tries &&
-+	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
-+		skip = sched_rq_next_task(p, rq);
-+		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
-+			__SCHED_DEQUEUE_TASK(p, rq, 0, );
-+			set_task_cpu(p, dest_cpu);
-+			sched_task_sanity_check(p, dest_rq);
-+			sched_mm_cid_migrate_to(dest_rq, p, cpu_of(rq));
-+			__SCHED_ENQUEUE_TASK(p, dest_rq, 0, );
-+			nr_migrated++;
-+		}
-+		nr_tries--;
-+	}
-+
-+	return nr_migrated;
-+}
-+
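-+/*
-+ * Try to pull queued tasks from another run queue onto this CPU. The
-+ * per-CPU topology masks are walked in order and work is taken from the
-+ * first busy run queue whose lock can be grabbed with a trylock, so the
-+ * scan never blocks on a remote rq->lock.
-+ */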
-+static inline int take_other_rq_tasks(struct rq *rq, int cpu)
-+{
-+	cpumask_t *topo_mask, *end_mask, chk;
-+
-+	if (unlikely(!rq->online))
-+		return 0;
-+
-+	if (cpumask_empty(&sched_rq_pending_mask))
-+		return 0;
-+
-+	topo_mask = per_cpu(sched_cpu_topo_masks, cpu);
-+	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
-+	do {
-+		int i;
-+
-+		if (!cpumask_and(&chk, &sched_rq_pending_mask, topo_mask))
-+			continue;
-+
-+		for_each_cpu_wrap(i, &chk, cpu) {
-+			int nr_migrated;
-+			struct rq *src_rq;
-+
-+			src_rq = cpu_rq(i);
-+			if (!do_raw_spin_trylock(&src_rq->lock))
-+				continue;
-+			spin_acquire(&src_rq->lock.dep_map,
-+				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
-+
-+			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
-+				src_rq->nr_running -= nr_migrated;
-+				if (src_rq->nr_running < 2)
-+					cpumask_clear_cpu(i, &sched_rq_pending_mask);
-+
-+				spin_release(&src_rq->lock.dep_map, _RET_IP_);
-+				do_raw_spin_unlock(&src_rq->lock);
-+
-+				rq->nr_running += nr_migrated;
-+				if (rq->nr_running > 1)
-+					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
-+
-+				update_sched_preempt_mask(rq);
-+				cpufreq_update_util(rq, 0);
-+
-+				return 1;
-+			}
-+
-+			spin_release(&src_rq->lock.dep_map, _RET_IP_);
-+			do_raw_spin_unlock(&src_rq->lock);
-+		}
-+	} while (++topo_mask < end_mask);
-+
-+	return 0;
-+}
-+#endif
-+
-+static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
-+{
-+	p->time_slice = sysctl_sched_base_slice;
-+
-+	sched_task_renew(p, rq);
-+
-+	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
-+		requeue_task(p, rq);
-+}
-+
-+/*
-+ * Timeslices below RESCHED_NS are considered as good as expired as there's no
-+ * point rescheduling when there's so little time left.
-+ */
-+static inline void check_curr(struct task_struct *p, struct rq *rq)
-+{
-+	if (unlikely(rq->idle == p))
-+		return;
-+
-+	update_curr(rq, p);
-+
-+	if (p->time_slice < RESCHED_NS)
-+		time_slice_expired(p, rq);
-+}
-+
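-+/*
-+ * Pick the first (highest priority) queued task on @rq. If only the idle
-+ * task is left, try to pull work from another run queue on SMP before
-+ * going idle; with SCHED_SMT an idle sibling-group balance callback may
-+ * additionally be queued before the CPU goes idle.
-+ */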
-+static inline struct task_struct *
-+choose_next_task(struct rq *rq, int cpu)
-+{
-+	struct task_struct *next = sched_rq_first_task(rq);
-+
-+	if (next == rq->idle) {
-+#ifdef	CONFIG_SMP
-+		if (!take_other_rq_tasks(rq, cpu)) {
-+#endif
-+
-+#ifdef CONFIG_SCHED_SMT
-+			if (static_key_count(&sched_smt_present.key) > 1 &&
-+			    cpumask_test_cpu(cpu, sched_sg_idle_mask) &&
-+			    rq->online)
-+				__queue_balance_callback(rq, &per_cpu(sg_balance_head, cpu));
-+#endif
-+			schedstat_inc(rq->sched_goidle);
-+			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
-+			return next;
-+#ifdef	CONFIG_SMP
-+		}
-+		next = sched_rq_first_task(rq);
-+#endif
-+	}
-+#ifdef CONFIG_HIGH_RES_TIMERS
-+	hrtick_start(rq, next->time_slice);
-+#endif
-+	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
-+	return next;
-+}
-+
-+/*
-+ * Constants for the sched_mode argument of __schedule().
-+ *
-+ * The mode argument allows RT enabled kernels to differentiate a
-+ * preemption from blocking on an 'sleeping' spin/rwlock. Note that
-+ * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
-+ * optimize the AND operation out and just check for zero.
-+ */
-+#define SM_NONE			0x0
-+#define SM_PREEMPT		0x1
-+#define SM_RTLOCK_WAIT		0x2
-+
-+#ifndef CONFIG_PREEMPT_RT
-+# define SM_MASK_PREEMPT	(~0U)
-+#else
-+# define SM_MASK_PREEMPT	SM_PREEMPT
-+#endif
-+
-+/*
-+ * schedule() is the main scheduler function.
-+ *
-+ * The main means of driving the scheduler and thus entering this function are:
-+ *
-+ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
-+ *
-+ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
-+ *      paths. For example, see arch/x86/entry_64.S.
-+ *
-+ *      To drive preemption between tasks, the scheduler sets the flag in timer
-+ *      interrupt handler sched_tick().
-+ *
-+ *   3. Wakeups don't really cause entry into schedule(). They add a
-+ *      task to the run-queue and that's it.
-+ *
-+ *      Now, if the new task added to the run-queue preempts the current
-+ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
-+ *      called on the nearest possible occasion:
-+ *
-+ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
-+ *
-+ *         - in syscall or exception context, at the next outmost
-+ *           preempt_enable(). (this might be as soon as the wake_up()'s
-+ *           spin_unlock()!)
-+ *
-+ *         - in IRQ context, return from interrupt-handler to
-+ *           preemptible context
-+ *
-+ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
-+ *         then at the next:
-+ *
-+ *          - cond_resched() call
-+ *          - explicit schedule() call
-+ *          - return from syscall or exception to user-space
-+ *          - return from interrupt-handler to user-space
-+ *
-+ * WARNING: must be called with preemption disabled!
-+ */
-+static void __sched notrace __schedule(unsigned int sched_mode)
-+{
-+	struct task_struct *prev, *next;
-+	unsigned long *switch_count;
-+	unsigned long prev_state;
-+	struct rq *rq;
-+	int cpu;
-+
-+	cpu = smp_processor_id();
-+	rq = cpu_rq(cpu);
-+	prev = rq->curr;
-+
-+	schedule_debug(prev, !!sched_mode);
-+
-+	/* Bypass the sched_feat(HRTICK) check, which the Alt schedule framework doesn't support */
-+	hrtick_clear(rq);
-+
-+	local_irq_disable();
-+	rcu_note_context_switch(!!sched_mode);
-+
-+	/*
-+	 * Make sure that signal_pending_state()->signal_pending() below
-+	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
-+	 * done by the caller to avoid the race with signal_wake_up():
-+	 *
-+	 * __set_current_state(@state)		signal_wake_up()
-+	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
-+	 *					  wake_up_state(p, state)
-+	 *   LOCK rq->lock			    LOCK p->pi_state
-+	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
-+	 *     if (signal_pending_state())	    if (p->state & @state)
-+	 *
-+	 * Also, the membarrier system call requires a full memory barrier
-+	 * after coming from user-space, before storing to rq->curr; this
-+	 * barrier matches a full barrier in the proximity of the membarrier
-+	 * system call exit.
-+	 */
-+	raw_spin_lock(&rq->lock);
-+	smp_mb__after_spinlock();
-+
-+	update_rq_clock(rq);
-+
-+	switch_count = &prev->nivcsw;
-+	/*
-+	 * We must load prev->state once (task_struct::state is volatile), such
-+	 * that we form a control dependency vs deactivate_task() below.
-+	 */
-+	prev_state = READ_ONCE(prev->__state);
-+	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
-+		if (signal_pending_state(prev_state, prev)) {
-+			WRITE_ONCE(prev->__state, TASK_RUNNING);
-+		} else {
-+			prev->sched_contributes_to_load =
-+				(prev_state & TASK_UNINTERRUPTIBLE) &&
-+				!(prev_state & TASK_NOLOAD) &&
-+				!(prev_state & TASK_FROZEN);
-+
-+			if (prev->sched_contributes_to_load)
-+				rq->nr_uninterruptible++;
-+
-+			/*
-+			 * __schedule()			ttwu()
-+			 *   prev_state = prev->state;    if (p->on_rq && ...)
-+			 *   if (prev_state)		    goto out;
-+			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
-+			 *				  p->state = TASK_WAKING
-+			 *
-+			 * Where __schedule() and ttwu() have matching control dependencies.
-+			 *
-+			 * After this, schedule() must not care about p->state any more.
-+			 */
-+			sched_task_deactivate(prev, rq);
-+			deactivate_task(prev, rq);
-+
-+			if (prev->in_iowait) {
-+				atomic_inc(&rq->nr_iowait);
-+				delayacct_blkio_start();
-+			}
-+		}
-+		switch_count = &prev->nvcsw;
-+	}
-+
-+	check_curr(prev, rq);
-+
-+	next = choose_next_task(rq, cpu);
-+	clear_tsk_need_resched(prev);
-+	clear_preempt_need_resched();
-+#ifdef CONFIG_SCHED_DEBUG
-+	rq->last_seen_need_resched_ns = 0;
-+#endif
-+
-+	if (likely(prev != next)) {
-+		next->last_ran = rq->clock_task;
-+
-+		/*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
-+		rq->nr_switches++;
-+		/*
-+		 * RCU users of rcu_dereference(rq->curr) may not see
-+		 * changes to task_struct made by pick_next_task().
-+		 */
-+		RCU_INIT_POINTER(rq->curr, next);
-+		/*
-+		 * The membarrier system call requires each architecture
-+		 * to have a full memory barrier after updating
-+		 * rq->curr, before returning to user-space.
-+		 *
-+		 * Here are the schemes providing that barrier on the
-+		 * various architectures:
-+		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
-+		 *   RISC-V.  switch_mm() relies on membarrier_arch_switch_mm()
-+		 *   on PowerPC and on RISC-V.
-+		 * - finish_lock_switch() for weakly-ordered
-+		 *   architectures where spin_unlock is a full barrier,
-+		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
-+		 *   is a RELEASE barrier),
-+		 *
-+		 * The barrier matches a full barrier in the proximity of
-+		 * the membarrier system call entry.
-+		 *
-+		 * On RISC-V, this barrier pairing is also needed for the
-+		 * SYNC_CORE command when switching between processes, cf.
-+		 * the inline comments in membarrier_arch_switch_mm().
-+		 */
-+		++*switch_count;
-+
-+		trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
-+
-+		/* Also unlocks the rq: */
-+		rq = context_switch(rq, prev, next);
-+
-+		cpu = cpu_of(rq);
-+	} else {
-+		__balance_callbacks(rq);
-+		raw_spin_unlock_irq(&rq->lock);
-+	}
-+}
-+
-+void __noreturn do_task_dead(void)
-+{
-+	/* Causes final put_task_struct in finish_task_switch(): */
-+	set_special_state(TASK_DEAD);
-+
-+	/* Tell freezer to ignore us: */
-+	current->flags |= PF_NOFREEZE;
-+
-+	__schedule(SM_NONE);
-+	BUG();
-+
-+	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
-+	for (;;)
-+		cpu_relax();
-+}
-+
-+static inline void sched_submit_work(struct task_struct *tsk)
-+{
-+	static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
-+	unsigned int task_flags;
-+
-+	/*
-+	 * Establish LD_WAIT_CONFIG context to ensure none of the code called
-+	 * will use a blocking primitive -- which would lead to recursion.
-+	 */
-+	lock_map_acquire_try(&sched_map);
-+
-+	task_flags = tsk->flags;
-+	/*
-+	 * If a worker goes to sleep, notify and ask workqueue whether it
-+	 * wants to wake up a task to maintain concurrency.
-+	 */
-+	if (task_flags & PF_WQ_WORKER)
-+		wq_worker_sleeping(tsk);
-+	else if (task_flags & PF_IO_WORKER)
-+		io_wq_worker_sleeping(tsk);
-+
-+	/*
-+	 * spinlock and rwlock must not flush block requests.  This will
-+	 * deadlock if the callback attempts to acquire a lock which is
-+	 * already acquired.
-+	 */
-+	SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
-+
-+	/*
-+	 * If we are going to sleep and we have plugged IO queued,
-+	 * make sure to submit it to avoid deadlocks.
-+	 */
-+	blk_flush_plug(tsk->plug, true);
-+
-+	lock_map_release(&sched_map);
-+}
-+
-+static void sched_update_worker(struct task_struct *tsk)
-+{
-+	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER | PF_BLOCK_TS)) {
-+		if (tsk->flags & PF_BLOCK_TS)
-+			blk_plug_invalidate_ts(tsk);
-+		if (tsk->flags & PF_WQ_WORKER)
-+			wq_worker_running(tsk);
-+		else if (tsk->flags & PF_IO_WORKER)
-+			io_wq_worker_running(tsk);
-+	}
-+}
-+
-+static __always_inline void __schedule_loop(unsigned int sched_mode)
-+{
-+	do {
-+		preempt_disable();
-+		__schedule(sched_mode);
-+		sched_preempt_enable_no_resched();
-+	} while (need_resched());
-+}
-+
-+asmlinkage __visible void __sched schedule(void)
-+{
-+	struct task_struct *tsk = current;
-+
-+#ifdef CONFIG_RT_MUTEXES
-+	lockdep_assert(!tsk->sched_rt_mutex);
-+#endif
-+
-+	if (!task_is_running(tsk))
-+		sched_submit_work(tsk);
-+	__schedule_loop(SM_NONE);
-+	sched_update_worker(tsk);
-+}
-+EXPORT_SYMBOL(schedule);
-+
-+/*
-+ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
-+ * state (have scheduled out non-voluntarily) by making sure that all
-+ * tasks have either left the run queue or have gone into user space.
-+ * As idle tasks do not do either, they must not ever be preempted
-+ * (schedule out non-voluntarily).
-+ *
-+ * schedule_idle() is similar to schedule_preempt_disabled() except that it
-+ * never enables preemption because it does not call sched_submit_work().
-+ */
-+void __sched schedule_idle(void)
-+{
-+	/*
-+	 * As this skips calling sched_submit_work(), which the idle task does
-+	 * regardless because that function is a nop when the task is in a
-+	 * TASK_RUNNING state, make sure this isn't used someplace that the
-+	 * current task can be in any other state. Note, idle is always in the
-+	 * TASK_RUNNING state.
-+	 */
-+	WARN_ON_ONCE(current->__state);
-+	do {
-+		__schedule(SM_NONE);
-+	} while (need_resched());
-+}
-+
-+#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
-+asmlinkage __visible void __sched schedule_user(void)
-+{
-+	/*
-+	 * If we come here after a random call to set_need_resched(),
-+	 * or we have been woken up remotely but the IPI has not yet arrived,
-+	 * we haven't yet exited the RCU idle mode. Do it here manually until
-+	 * we find a better solution.
-+	 *
-+	 * NB: There are buggy callers of this function.  Ideally we
-+	 * should warn if prev_state != CONTEXT_USER, but that will trigger
-+	 * too frequently to make sense yet.
-+	 */
-+	enum ctx_state prev_state = exception_enter();
-+	schedule();
-+	exception_exit(prev_state);
-+}
-+#endif
-+
-+/**
-+ * schedule_preempt_disabled - called with preemption disabled
-+ *
-+ * Returns with preemption disabled. Note: preempt_count must be 1
-+ */
-+void __sched schedule_preempt_disabled(void)
-+{
-+	sched_preempt_enable_no_resched();
-+	schedule();
-+	preempt_disable();
-+}
-+
-+#ifdef CONFIG_PREEMPT_RT
-+void __sched notrace schedule_rtlock(void)
-+{
-+	__schedule_loop(SM_RTLOCK_WAIT);
-+}
-+NOKPROBE_SYMBOL(schedule_rtlock);
-+#endif
-+
-+static void __sched notrace preempt_schedule_common(void)
-+{
-+	do {
-+		/*
-+		 * Because the function tracer can trace preempt_count_sub()
-+		 * and it also uses preempt_enable/disable_notrace(), if
-+		 * NEED_RESCHED is set, the preempt_enable_notrace() called
-+		 * by the function tracer will call this function again and
-+		 * cause infinite recursion.
-+		 *
-+		 * Preemption must be disabled here before the function
-+		 * tracer can trace. Break up preempt_disable() into two
-+		 * calls. One to disable preemption without fear of being
-+		 * traced. The other to still record the preemption latency,
-+		 * which can also be traced by the function tracer.
-+		 */
-+		preempt_disable_notrace();
-+		preempt_latency_start(1);
-+		__schedule(SM_PREEMPT);
-+		preempt_latency_stop(1);
-+		preempt_enable_no_resched_notrace();
-+
-+		/*
-+		 * Check again in case we missed a preemption opportunity
-+		 * between schedule and now.
-+		 */
-+	} while (need_resched());
-+}
-+
-+#ifdef CONFIG_PREEMPTION
-+/*
-+ * This is the entry point to schedule() from in-kernel preemption
-+ * off of preempt_enable.
-+ */
-+asmlinkage __visible void __sched notrace preempt_schedule(void)
-+{
-+	/*
-+	 * If there is a non-zero preempt_count or interrupts are disabled,
-+	 * we do not want to preempt the current task. Just return..
-+	 */
-+	if (likely(!preemptible()))
-+		return;
-+
-+	preempt_schedule_common();
-+}
-+NOKPROBE_SYMBOL(preempt_schedule);
-+EXPORT_SYMBOL(preempt_schedule);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#ifndef preempt_schedule_dynamic_enabled
-+#define preempt_schedule_dynamic_enabled	preempt_schedule
-+#define preempt_schedule_dynamic_disabled	NULL
-+#endif
-+DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
-+EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
-+void __sched notrace dynamic_preempt_schedule(void)
-+{
-+	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
-+		return;
-+	preempt_schedule();
-+}
-+NOKPROBE_SYMBOL(dynamic_preempt_schedule);
-+EXPORT_SYMBOL(dynamic_preempt_schedule);
-+#endif
-+#endif
-+
-+/**
-+ * preempt_schedule_notrace - preempt_schedule called by tracing
-+ *
-+ * The tracing infrastructure uses preempt_enable_notrace to prevent
-+ * recursion and tracing preempt enabling caused by the tracing
-+ * infrastructure itself. But as tracing can happen in areas coming
-+ * from userspace or just about to enter userspace, a preempt enable
-+ * can occur before user_exit() is called. This will cause the scheduler
-+ * to be called when the system is still in usermode.
-+ *
-+ * To prevent this, the preempt_enable_notrace will use this function
-+ * instead of preempt_schedule() to exit user context if needed before
-+ * calling the scheduler.
-+ */
-+asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
-+{
-+	enum ctx_state prev_ctx;
-+
-+	if (likely(!preemptible()))
-+		return;
-+
-+	do {
-+		/*
-+		 * Because the function tracer can trace preempt_count_sub()
-+		 * and it also uses preempt_enable/disable_notrace(), if
-+		 * NEED_RESCHED is set, the preempt_enable_notrace() called
-+		 * by the function tracer will call this function again and
-+		 * cause infinite recursion.
-+		 *
-+		 * Preemption must be disabled here before the function
-+		 * tracer can trace. Break up preempt_disable() into two
-+		 * calls. One to disable preemption without fear of being
-+		 * traced. The other to still record the preemption latency,
-+		 * which can also be traced by the function tracer.
-+		 */
-+		preempt_disable_notrace();
-+		preempt_latency_start(1);
-+		/*
-+		 * Needs preempt disabled in case user_exit() is traced
-+		 * and the tracer calls preempt_enable_notrace() causing
-+		 * an infinite recursion.
-+		 */
-+		prev_ctx = exception_enter();
-+		__schedule(SM_PREEMPT);
-+		exception_exit(prev_ctx);
-+
-+		preempt_latency_stop(1);
-+		preempt_enable_no_resched_notrace();
-+	} while (need_resched());
-+}
-+EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#ifndef preempt_schedule_notrace_dynamic_enabled
-+#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
-+#define preempt_schedule_notrace_dynamic_disabled	NULL
-+#endif
-+DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
-+EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
-+void __sched notrace dynamic_preempt_schedule_notrace(void)
-+{
-+	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
-+		return;
-+	preempt_schedule_notrace();
-+}
-+NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
-+EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
-+#endif
-+#endif
-+
-+#endif /* CONFIG_PREEMPTION */
-+
-+/*
-+ * This is the entry point to schedule() from kernel preemption
-+ * off of irq context.
-+ * Note that this is called and returns with irqs disabled. This will
-+ * protect us against recursive calling from irq.
-+ */
-+asmlinkage __visible void __sched preempt_schedule_irq(void)
-+{
-+	enum ctx_state prev_state;
-+
-+	/* Catch callers which need to be fixed */
-+	BUG_ON(preempt_count() || !irqs_disabled());
-+
-+	prev_state = exception_enter();
-+
-+	do {
-+		preempt_disable();
-+		local_irq_enable();
-+		__schedule(SM_PREEMPT);
-+		local_irq_disable();
-+		sched_preempt_enable_no_resched();
-+	} while (need_resched());
-+
-+	exception_exit(prev_state);
-+}
-+
-+int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
-+			  void *key)
-+{
-+	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
-+	return try_to_wake_up(curr->private, mode, wake_flags);
-+}
-+EXPORT_SYMBOL(default_wake_function);
-+
-+static inline void check_task_changed(struct task_struct *p, struct rq *rq)
-+{
-+	/* Trigger resched if task sched_prio has been modified. */
-+	if (task_on_rq_queued(p)) {
-+		update_rq_clock(rq);
-+		requeue_task(p, rq);
-+		wakeup_preempt(rq);
-+	}
-+}
-+
-+static void __setscheduler_prio(struct task_struct *p, int prio)
-+{
-+	p->prio = prio;
-+}
-+
-+#ifdef CONFIG_RT_MUTEXES
-+
-+/*
-+ * Would be more useful with typeof()/auto_type but they don't mix with
-+ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
-+ * name such that if someone were to implement this function we get to compare
-+ * notes.
-+ */
-+#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
-+
-+void rt_mutex_pre_schedule(void)
-+{
-+	lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
-+	sched_submit_work(current);
-+}
-+
-+void rt_mutex_schedule(void)
-+{
-+	lockdep_assert(current->sched_rt_mutex);
-+	__schedule_loop(SM_NONE);
-+}
-+
-+void rt_mutex_post_schedule(void)
-+{
-+	sched_update_worker(current);
-+	lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
-+}
-+
-+static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
-+{
-+	if (pi_task)
-+		prio = min(prio, pi_task->prio);
-+
-+	return prio;
-+}
-+
-+static inline int rt_effective_prio(struct task_struct *p, int prio)
-+{
-+	struct task_struct *pi_task = rt_mutex_get_top_task(p);
-+
-+	return __rt_effective_prio(pi_task, prio);
-+}
-+
-+/*
-+ * rt_mutex_setprio - set the current priority of a task
-+ * @p: task to boost
-+ * @pi_task: donor task
-+ *
-+ * This function changes the 'effective' priority of a task. It does
-+ * not touch ->normal_prio like __setscheduler().
-+ *
-+ * Used by the rt_mutex code to implement priority inheritance
-+ * logic. Call site only calls if the priority of the task changed.
-+ */
-+void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
-+{
-+	int prio;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	/* XXX used to be waiter->prio, not waiter->task->prio */
-+	prio = __rt_effective_prio(pi_task, p->normal_prio);
-+
-+	/*
-+	 * If nothing changed; bail early.
-+	 */
-+	if (p->pi_top_task == pi_task && prio == p->prio)
-+		return;
-+
-+	rq = __task_access_lock(p, &lock);
-+	/*
-+	 * Set under pi_lock && rq->lock, such that the value can be used under
-+	 * either lock.
-+	 *
-+	 * Note that it takes a load of trickiness to make this pointer cache work
-+	 * right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
-+	 * ensure a task is de-boosted (pi_task is set to NULL) before the
-+	 * task is allowed to run again (and can exit). This ensures the pointer
-+	 * points to a blocked task -- which guarantees the task is present.
-+	 */
-+	p->pi_top_task = pi_task;
-+
-+	/*
-+	 * For FIFO/RR we only need to set prio, if that matches we're done.
-+	 */
-+	if (prio == p->prio)
-+		goto out_unlock;
-+
-+	/*
-+	 * Idle task boosting is a nono in general. There is one
-+	 * exception, when PREEMPT_RT and NOHZ is active:
-+	 *
-+	 * The idle task calls get_next_timer_interrupt() and holds
-+	 * the timer wheel base->lock on the CPU and another CPU wants
-+	 * to access the timer (probably to cancel it). We can safely
-+	 * ignore the boosting request, as the idle CPU runs this code
-+	 * with interrupts disabled and will complete the lock
-+	 * protected section without being interrupted. So there is no
-+	 * real need to boost.
-+	 */
-+	if (unlikely(p == rq->idle)) {
-+		WARN_ON(p != rq->curr);
-+		WARN_ON(p->pi_blocked_on);
-+		goto out_unlock;
-+	}
-+
-+	trace_sched_pi_setprio(p, pi_task);
-+
-+	__setscheduler_prio(p, prio);
-+
-+	check_task_changed(p, rq);
-+out_unlock:
-+	/* Avoid rq from going away on us: */
-+	preempt_disable();
-+
-+	if (task_on_rq_queued(p))
-+		__balance_callbacks(rq);
-+	__task_access_unlock(p, lock);
-+
-+	preempt_enable();
-+}
-+#else
-+static inline int rt_effective_prio(struct task_struct *p, int prio)
-+{
-+	return prio;
-+}
-+#endif
-+
-+void set_user_nice(struct task_struct *p, long nice)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
-+		return;
-+	/*
-+	 * We have to be careful, if called from sys_setpriority(),
-+	 * the task might be in the middle of scheduling on another CPU.
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	rq = __task_access_lock(p, &lock);
-+
-+	p->static_prio = NICE_TO_PRIO(nice);
-+	/*
-+	 * The RT priorities are set via sched_setscheduler(), but we still
-+	 * allow the 'normal' nice value to be set - but as expected
-+	 * it won't have any effect on scheduling while the task's policy is
-+	 * not SCHED_NORMAL/SCHED_BATCH:
-+	 */
-+	if (task_has_rt_policy(p))
-+		goto out_unlock;
-+
-+	p->prio = effective_prio(p);
-+
-+	check_task_changed(p, rq);
-+out_unlock:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
-+EXPORT_SYMBOL(set_user_nice);
-+
-+/*
-+ * is_nice_reduction - check if nice value is an actual reduction
-+ *
-+ * Similar to can_nice() but does not perform a capability check.
-+ *
-+ * @p: task
-+ * @nice: nice value
-+ */
-+static bool is_nice_reduction(const struct task_struct *p, const int nice)
-+{
-+	/* Convert nice value [19,-20] to rlimit style value [1,40]: */
-+	int nice_rlim = nice_to_rlimit(nice);
-+
-+	return (nice_rlim <= task_rlimit(p, RLIMIT_NICE));
-+}
-+
-+/*
-+ * can_nice - check if a task can reduce its nice value
-+ * @p: task
-+ * @nice: nice value
-+ */
-+int can_nice(const struct task_struct *p, const int nice)
-+{
-+	return is_nice_reduction(p, nice) || capable(CAP_SYS_NICE);
-+}
-+
-+#ifdef __ARCH_WANT_SYS_NICE
-+
-+/*
-+ * sys_nice - change the priority of the current process.
-+ * @increment: priority increment
-+ *
-+ * sys_setpriority is a more generic, but much slower function that
-+ * does similar things.
-+ */
-+SYSCALL_DEFINE1(nice, int, increment)
-+{
-+	long nice, retval;
-+
-+	/*
-+	 * Setpriority might change our priority at the same moment.
-+	 * We don't have to worry. Conceptually one call occurs first
-+	 * and we have a single winner.
-+	 */
-+
-+	increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
-+	nice = task_nice(current) + increment;
-+
-+	nice = clamp_val(nice, MIN_NICE, MAX_NICE);
-+	if (increment < 0 && !can_nice(current, nice))
-+		return -EPERM;
-+
-+	retval = security_task_setnice(current, nice);
-+	if (retval)
-+		return retval;
-+
-+	set_user_nice(current, nice);
-+	return 0;
-+}
-+
-+#endif
-+
-+/**
-+ * task_prio - return the priority value of a given task.
-+ * @p: the task in question.
-+ *
-+ * Return: The priority value as seen by users in /proc.
-+ *
-+ * sched policy               return value    kernel prio     user prio/nice
-+ *
-+ * (BMQ) normal, batch, idle  [0 ... 53]      [100 ... 139]   0 / [-20 ... 19] / [-7 ... 7]
-+ * (PDS) normal, batch, idle  [0 ... 39]      100             0 / [-20 ... 19]
-+ * fifo, rr                   [-1 ... -100]   [99 ... 0]      [0 ... 99]
-+ */
-+int task_prio(const struct task_struct *p)
-+{
-+	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
-+		task_sched_prio_normal(p, task_rq(p));
-+}
-+
-+/**
-+ * idle_cpu - is a given CPU idle currently?
-+ * @cpu: the processor in question.
-+ *
-+ * Return: 1 if the CPU is currently idle. 0 otherwise.
-+ */
-+int idle_cpu(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (rq->curr != rq->idle)
-+		return 0;
-+
-+	if (rq->nr_running)
-+		return 0;
-+
-+#ifdef CONFIG_SMP
-+	if (rq->ttwu_pending)
-+		return 0;
-+#endif
-+
-+	return 1;
-+}
-+
-+/**
-+ * idle_task - return the idle task for a given CPU.
-+ * @cpu: the processor in question.
-+ *
-+ * Return: The idle task for the cpu @cpu.
-+ */
-+struct task_struct *idle_task(int cpu)
-+{
-+	return cpu_rq(cpu)->idle;
-+}
-+
-+/**
-+ * find_process_by_pid - find a process with a matching PID value.
-+ * @pid: the pid in question.
-+ *
-+ * The task of @pid, if found. %NULL otherwise.
-+ */
-+static inline struct task_struct *find_process_by_pid(pid_t pid)
-+{
-+	return pid ? find_task_by_vpid(pid) : current;
-+}
-+
-+static struct task_struct *find_get_task(pid_t pid)
-+{
-+	struct task_struct *p;
-+	guard(rcu)();
-+
-+	p = find_process_by_pid(pid);
-+	if (likely(p))
-+		get_task_struct(p);
-+
-+	return p;
-+}
-+
-+DEFINE_CLASS(find_get_task, struct task_struct *, if (_T) put_task_struct(_T),
-+	     find_get_task(pid), pid_t pid)
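-+/*
-+ * find_get_task is a scope-based cleanup class: CLASS(find_get_task, p)(pid)
-+ * looks the task up with a reference held and drops that reference
-+ * automatically when p goes out of scope.
-+ */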
-+
-+/*
-+ * sched_setparam() passes in -1 for its policy, to let the functions
-+ * it calls know not to change it.
-+ */
-+#define SETPARAM_POLICY -1
-+
-+static void __setscheduler_params(struct task_struct *p,
-+		const struct sched_attr *attr)
-+{
-+	int policy = attr->sched_policy;
-+
-+	if (policy == SETPARAM_POLICY)
-+		policy = p->policy;
-+
-+	p->policy = policy;
-+
-+	/*
-+	 * Allow the normal nice value to be set, but it will not have any
-+	 * effect on scheduling while the task's policy is not SCHED_NORMAL/
-+	 * SCHED_BATCH.
-+	 */
-+	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
-+
-+	/*
-+	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
-+	 * !rt_policy. Always setting this ensures that things like
-+	 * getparam()/getattr() don't report silly values for !rt tasks.
-+	 */
-+	p->rt_priority = attr->sched_priority;
-+	p->normal_prio = normal_prio(p);
-+}
-+
-+/*
-+ * check the target process has a UID that matches the current process's
-+ */
-+static bool check_same_owner(struct task_struct *p)
-+{
-+	const struct cred *cred = current_cred(), *pcred;
-+	guard(rcu)();
-+
-+	pcred = __task_cred(p);
-+	return (uid_eq(cred->euid, pcred->euid) ||
-+	        uid_eq(cred->euid, pcred->uid));
-+}
-+
-+/*
-+ * Allow unprivileged RT tasks to decrease priority.
-+ * Only issue a capable test if needed and only once to avoid an audit
-+ * event on permitted non-privileged operations:
-+ */
-+static int user_check_sched_setscheduler(struct task_struct *p,
-+					 const struct sched_attr *attr,
-+					 int policy, int reset_on_fork)
-+{
-+	if (rt_policy(policy)) {
-+		unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
-+
-+		/* Can't set/change the rt policy: */
-+		if (policy != p->policy && !rlim_rtprio)
-+			goto req_priv;
-+
-+		/* Can't increase priority: */
-+		if (attr->sched_priority > p->rt_priority &&
-+		    attr->sched_priority > rlim_rtprio)
-+			goto req_priv;
-+	}
-+
-+	/* Can't change other user's priorities: */
-+	if (!check_same_owner(p))
-+		goto req_priv;
-+
-+	/* Normal users shall not reset the sched_reset_on_fork flag: */
-+	if (p->sched_reset_on_fork && !reset_on_fork)
-+		goto req_priv;
-+
-+	return 0;
-+
-+req_priv:
-+	if (!capable(CAP_SYS_NICE))
-+		return -EPERM;
-+
-+	return 0;
-+}
-+
-+static int __sched_setscheduler(struct task_struct *p,
-+				const struct sched_attr *attr,
-+				bool user, bool pi)
-+{
-+	const struct sched_attr dl_squash_attr = {
-+		.size		= sizeof(struct sched_attr),
-+		.sched_policy	= SCHED_FIFO,
-+		.sched_nice	= 0,
-+		.sched_priority = 99,
-+	};
-+	int oldpolicy = -1, policy = attr->sched_policy;
-+	int retval, newprio;
-+	struct balance_callback *head;
-+	unsigned long flags;
-+	struct rq *rq;
-+	int reset_on_fork;
-+	raw_spinlock_t *lock;
-+
-+	/* The pi code expects interrupts enabled */
-+	BUG_ON(pi && in_interrupt());
-+
-+	/*
-+	 * The Alt schedule framework supports SCHED_DEADLINE by squashing it into SCHED_FIFO
-+	 */
-+	if (unlikely(SCHED_DEADLINE == policy)) {
-+		attr = &dl_squash_attr;
-+		policy = attr->sched_policy;
-+	}
-+recheck:
-+	/* Double check policy once rq lock held */
-+	if (policy < 0) {
-+		reset_on_fork = p->sched_reset_on_fork;
-+		policy = oldpolicy = p->policy;
-+	} else {
-+		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
-+
-+		if (policy > SCHED_IDLE)
-+			return -EINVAL;
-+	}
-+
-+	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
-+		return -EINVAL;
-+
-+	/*
-+	 * Valid priorities for SCHED_FIFO and SCHED_RR are
-+	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
-+	 * SCHED_BATCH and SCHED_IDLE is 0.
-+	 */
-+	if (attr->sched_priority < 0 ||
-+	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
-+	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
-+		return -EINVAL;
-+	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
-+	    (attr->sched_priority != 0))
-+		return -EINVAL;
-+
-+	if (user) {
-+		retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
-+		if (retval)
-+			return retval;
-+
-+		retval = security_task_setscheduler(p);
-+		if (retval)
-+			return retval;
-+	}
-+
-+	/*
-+	 * Make sure no PI-waiters arrive (or leave) while we are
-+	 * changing the priority of the task:
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+
-+	/*
-+	 * To be able to change p->policy safely, task_access_lock()
-+	 * must be called.
-+	 * If task_access_lock() is used here: for a task p which is not
-+	 * running, reading rq->stop is racy, but that is acceptable as
-+	 * ->stop doesn't change much. An enhancement could be made to
-+	 * read rq->stop safely.
-+	 */
-+	rq = __task_access_lock(p, &lock);
-+
-+	/*
-+	 * Changing the policy of the stop thread is a very bad idea
-+	 */
-+	if (p == rq->stop) {
-+		retval = -EINVAL;
-+		goto unlock;
-+	}
-+
-+	/*
-+	 * If not changing anything there's no need to proceed further:
-+	 */
-+	if (unlikely(policy == p->policy)) {
-+		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
-+			goto change;
-+		if (!rt_policy(policy) &&
-+		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
-+			goto change;
-+
-+		p->sched_reset_on_fork = reset_on_fork;
-+		retval = 0;
-+		goto unlock;
-+	}
-+change:
-+
-+	/* Re-check policy now with rq lock held */
-+	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
-+		policy = oldpolicy = -1;
-+		__task_access_unlock(p, lock);
-+		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+		goto recheck;
-+	}
-+
-+	p->sched_reset_on_fork = reset_on_fork;
-+
-+	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
-+	if (pi) {
-+		/*
-+		 * Take priority boosted tasks into account. If the new
-+		 * effective priority is unchanged, we just store the new
-+		 * normal parameters and do not touch the scheduler class and
-+		 * the runqueue. This will be done when the task deboosts
-+		 * itself.
-+		 */
-+		newprio = rt_effective_prio(p, newprio);
-+	}
-+
-+	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
-+		__setscheduler_params(p, attr);
-+		__setscheduler_prio(p, newprio);
-+	}
-+
-+	check_task_changed(p, rq);
-+
-+	/* Avoid rq from going away on us: */
-+	preempt_disable();
-+	head = splice_balance_callbacks(rq);
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+	if (pi)
-+		rt_mutex_adjust_pi(p);
-+
-+	/* Run balance callbacks after we've adjusted the PI chain: */
-+	balance_callbacks(rq, head);
-+	preempt_enable();
-+
-+	return 0;
-+
-+unlock:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+	return retval;
-+}
-+
-+static int _sched_setscheduler(struct task_struct *p, int policy,
-+			       const struct sched_param *param, bool check)
-+{
-+	struct sched_attr attr = {
-+		.sched_policy   = policy,
-+		.sched_priority = param->sched_priority,
-+		.sched_nice     = PRIO_TO_NICE(p->static_prio),
-+	};
-+
-+	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
-+	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
-+		attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
-+		policy &= ~SCHED_RESET_ON_FORK;
-+		attr.sched_policy = policy;
-+	}
-+
-+	return __sched_setscheduler(p, &attr, check, true);
-+}
-+
-+/**
-+ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
-+ * @p: the task in question.
-+ * @policy: new policy.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Use sched_set_fifo(), read its comment.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ *
-+ * NOTE that the task may be already dead.
-+ */
-+int sched_setscheduler(struct task_struct *p, int policy,
-+		       const struct sched_param *param)
-+{
-+	return _sched_setscheduler(p, policy, param, true);
-+}
-+
-+int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
-+{
-+	return __sched_setscheduler(p, attr, true, true);
-+}
-+
-+int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
-+{
-+	return __sched_setscheduler(p, attr, false, true);
-+}
-+EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
-+
-+/**
-+ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
-+ * @p: the task in question.
-+ * @policy: new policy.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Just like sched_setscheduler, only don't bother checking if the
-+ * current context has permission.  For example, this is needed in
-+ * stop_machine(): we create temporary high priority worker threads,
-+ * but our caller might not have that capability.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+int sched_setscheduler_nocheck(struct task_struct *p, int policy,
-+			       const struct sched_param *param)
-+{
-+	return _sched_setscheduler(p, policy, param, false);
-+}
-+
-+/*
-+ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
-+ * incapable of resource management, which is the one thing an OS really should
-+ * be doing.
-+ *
-+ * This is of course the reason it is limited to privileged users only.
-+ *
-+ * Worse still; it is fundamentally impossible to compose static priority
-+ * workloads. You cannot take two correctly working static prio workloads
-+ * and smash them together and still expect them to work.
-+ *
-+ * For this reason 'all' FIFO tasks the kernel creates are basically at:
-+ *
-+ *   MAX_RT_PRIO / 2
-+ *
-+ * The administrator _MUST_ configure the system, the kernel simply doesn't
-+ * know enough information to make a sensible choice.
-+ */
-+void sched_set_fifo(struct task_struct *p)
-+{
-+	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
-+	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
-+}
-+EXPORT_SYMBOL_GPL(sched_set_fifo);
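-+/*
-+ * Typical in-kernel usage (illustrative): a kthread that needs real-time
-+ * service calls sched_set_fifo(tsk) after creating the thread, rather than
-+ * picking an explicit priority via sched_setscheduler_nocheck().
-+ */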
-+
-+/*
-+ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
-+ */
-+void sched_set_fifo_low(struct task_struct *p)
-+{
-+	struct sched_param sp = { .sched_priority = 1 };
-+	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
-+}
-+EXPORT_SYMBOL_GPL(sched_set_fifo_low);
-+
-+void sched_set_normal(struct task_struct *p, int nice)
-+{
-+	struct sched_attr attr = {
-+		.sched_policy = SCHED_NORMAL,
-+		.sched_nice = nice,
-+	};
-+	WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
-+}
-+EXPORT_SYMBOL_GPL(sched_set_normal);
-+
-+static int
-+do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
-+{
-+	struct sched_param lparam;
-+
-+	if (!param || pid < 0)
-+		return -EINVAL;
-+	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
-+		return -EFAULT;
-+
-+	CLASS(find_get_task, p)(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	return sched_setscheduler(p, policy, &lparam);
-+}
-+
-+/*
-+ * Mimics kernel/events/core.c perf_copy_attr().
-+ */
-+static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
-+{
-+	u32 size;
-+	int ret;
-+
-+	/* Zero the full structure, so that a short copy will be nice: */
-+	memset(attr, 0, sizeof(*attr));
-+
-+	ret = get_user(size, &uattr->size);
-+	if (ret)
-+		return ret;
-+
-+	/* ABI compatibility quirk: */
-+	if (!size)
-+		size = SCHED_ATTR_SIZE_VER0;
-+
-+	if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
-+		goto err_size;
-+
-+	ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
-+	if (ret) {
-+		if (ret == -E2BIG)
-+			goto err_size;
-+		return ret;
-+	}
-+
-+	/*
-+	 * XXX: Do we want to be lenient like existing syscalls; or do we want
-+	 * to be strict and return an error on out-of-bounds values?
-+	 */
-+	attr->sched_nice = clamp(attr->sched_nice, -20, 19);
-+
-+	/* sched/core.c uses zero here but we already know ret is zero */
-+	return 0;
-+
-+err_size:
-+	put_user(sizeof(*attr), &uattr->size);
-+	return -E2BIG;
-+}
-+
-+/**
-+ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
-+ * @pid: the pid in question.
-+ * @policy: new policy.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
-+{
-+	if (policy < 0)
-+		return -EINVAL;
-+
-+	return do_sched_setscheduler(pid, policy, param);
-+}
-+
-+/**
-+ * sys_sched_setparam - set/change the RT priority of a thread
-+ * @pid: the pid in question.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
-+{
-+	return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
-+}
-+
-+static void get_params(struct task_struct *p, struct sched_attr *attr)
-+{
-+	if (task_has_rt_policy(p))
-+		attr->sched_priority = p->rt_priority;
-+	else
-+		attr->sched_nice = task_nice(p);
-+}
-+
-+/**
-+ * sys_sched_setattr - same as above, but with extended sched_attr
-+ * @pid: the pid in question.
-+ * @uattr: structure containing the extended parameters.
-+ * @flags: for future extension.
-+ */
-+SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
-+			       unsigned int, flags)
-+{
-+	struct sched_attr attr;
-+	int retval;
-+
-+	if (!uattr || pid < 0 || flags)
-+		return -EINVAL;
-+
-+	retval = sched_copy_attr(uattr, &attr);
-+	if (retval)
-+		return retval;
-+
-+	if ((int)attr.sched_policy < 0)
-+		return -EINVAL;
-+
-+	CLASS(find_get_task, p)(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	if (attr.sched_flags & SCHED_FLAG_KEEP_PARAMS)
-+		get_params(p, &attr);
-+
-+	return sched_setattr(p, &attr);
-+}
-+
-+/**
-+ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
-+ * @pid: the pid in question.
-+ *
-+ * Return: On success, the policy of the thread. Otherwise, a negative error
-+ * code.
-+ */
-+SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
-+{
-+	struct task_struct *p;
-+	int retval = -EINVAL;
-+
-+	if (pid < 0)
-+		return -ESRCH;
-+
-+	guard(rcu)();
-+	p = find_process_by_pid(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	retval = security_task_getscheduler(p);
-+	if (!retval)
-+		retval = p->policy;
-+
-+	return retval;
-+}
-+
-+/**
-+ * sys_sched_getparam - get the RT priority of a thread
-+ * @pid: the pid in question.
-+ * @param: structure containing the RT priority.
-+ *
-+ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
-+ * code.
-+ */
-+SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
-+{
-+	struct sched_param lp = { .sched_priority = 0 };
-+	struct task_struct *p;
-+
-+	if (!param || pid < 0)
-+		return -EINVAL;
-+
-+	scoped_guard (rcu) {
-+		int retval;
-+
-+		p = find_process_by_pid(pid);
-+		if (!p)
-+			return -EINVAL;
-+
-+		retval = security_task_getscheduler(p);
-+		if (retval)
-+			return retval;
-+
-+		if (task_has_rt_policy(p))
-+			lp.sched_priority = p->rt_priority;
-+	}
-+
-+	/*
-+	 * This one might sleep, we cannot do it with a spinlock held ...
-+	 */
-+	return copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
-+}
-+
-+/*
-+ * Copy the kernel size attribute structure (which might be larger
-+ * than what user-space knows about) to user-space.
-+ *
-+ * Note that all cases are valid: user-space buffer can be larger or
-+ * smaller than the kernel-space buffer. The usual case is that both
-+ * have the same size.
-+ */
-+static int
-+sched_attr_copy_to_user(struct sched_attr __user *uattr,
-+			struct sched_attr *kattr,
-+			unsigned int usize)
-+{
-+	unsigned int ksize = sizeof(*kattr);
-+
-+	if (!access_ok(uattr, usize))
-+		return -EFAULT;
-+
-+	/*
-+	 * sched_getattr() ABI forwards and backwards compatibility:
-+	 *
-+	 * If usize == ksize then we just copy everything to user-space and all is good.
-+	 *
-+	 * If usize < ksize then we only copy as much as user-space has space for,
-+	 * this keeps ABI compatibility as well. We skip the rest.
-+	 *
-+	 * If usize > ksize then user-space is using a newer version of the ABI,
-+	 * which part the kernel doesn't know about. Just ignore it - tooling can
-+	 * detect the kernel's knowledge of attributes from the attr->size value
-+	 * which is set to ksize in this case.
-+	 */
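-+	/*
-+	 * For example, a binary built against the original ABI passes
-+	 * usize == SCHED_ATTR_SIZE_VER0 (48 bytes): only those 48 bytes are
-+	 * copied back and attr->size is reported as 48.
-+	 */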
-+	kattr->size = min(usize, ksize);
-+
-+	if (copy_to_user(uattr, kattr, kattr->size))
-+		return -EFAULT;
-+
-+	return 0;
-+}
-+
-+/**
-+ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
-+ * @pid: the pid in question.
-+ * @uattr: structure containing the extended parameters.
-+ * @usize: sizeof(attr) for fwd/bwd comp.
-+ * @flags: for future extension.
-+ */
-+SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
-+		unsigned int, usize, unsigned int, flags)
-+{
-+	struct sched_attr kattr = { };
-+	struct task_struct *p;
-+	int retval;
-+
-+	if (!uattr || pid < 0 || usize > PAGE_SIZE ||
-+	    usize < SCHED_ATTR_SIZE_VER0 || flags)
-+		return -EINVAL;
-+
-+	scoped_guard (rcu) {
-+		p = find_process_by_pid(pid);
-+		if (!p)
-+			return -ESRCH;
-+
-+		retval = security_task_getscheduler(p);
-+		if (retval)
-+			return retval;
-+
-+		kattr.sched_policy = p->policy;
-+		if (p->sched_reset_on_fork)
-+			kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
-+		get_params(p, &kattr);
-+		kattr.sched_flags &= SCHED_FLAG_ALL;
-+
-+#ifdef CONFIG_UCLAMP_TASK
-+		kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
-+		kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
-+#endif
-+	}
-+
-+	return sched_attr_copy_to_user(uattr, &kattr, usize);
-+}
-+
-+#ifdef CONFIG_SMP
-+int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
-+{
-+	return 0;
-+}
-+#endif
-+
-+static int
-+__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
-+{
-+	int retval;
-+	cpumask_var_t cpus_allowed, new_mask;
-+
-+	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
-+		return -ENOMEM;
-+
-+	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
-+		retval = -ENOMEM;
-+		goto out_free_cpus_allowed;
-+	}
-+
-+	cpuset_cpus_allowed(p, cpus_allowed);
-+	cpumask_and(new_mask, ctx->new_mask, cpus_allowed);
-+
-+	ctx->new_mask = new_mask;
-+	ctx->flags |= SCA_CHECK;
-+
-+	retval = __set_cpus_allowed_ptr(p, ctx);
-+	if (retval)
-+		goto out_free_new_mask;
-+
-+	cpuset_cpus_allowed(p, cpus_allowed);
-+	if (!cpumask_subset(new_mask, cpus_allowed)) {
-+		/*
-+		 * We must have raced with a concurrent cpuset
-+		 * update. Just reset the cpus_allowed to the
-+		 * cpuset's cpus_allowed
-+		 */
-+		cpumask_copy(new_mask, cpus_allowed);
-+
-+		/*
-+		 * If SCA_USER is set, a 2nd call to __set_cpus_allowed_ptr()
-+		 * will restore the previous user_cpus_ptr value.
-+		 *
-+		 * In the unlikely event a previous user_cpus_ptr exists,
-+		 * we need to further restrict the mask to what is allowed
-+		 * by that old user_cpus_ptr.
-+		 */
-+		if (unlikely((ctx->flags & SCA_USER) && ctx->user_mask)) {
-+			bool empty = !cpumask_and(new_mask, new_mask,
-+						  ctx->user_mask);
-+
-+			if (WARN_ON_ONCE(empty))
-+				cpumask_copy(new_mask, cpus_allowed);
-+		}
-+		__set_cpus_allowed_ptr(p, ctx);
-+		retval = -EINVAL;
-+	}
-+
-+out_free_new_mask:
-+	free_cpumask_var(new_mask);
-+out_free_cpus_allowed:
-+	free_cpumask_var(cpus_allowed);
-+	return retval;
-+}
-+
-+long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
-+{
-+	struct affinity_context ac;
-+	struct cpumask *user_mask;
-+	int retval;
-+
-+	CLASS(find_get_task, p)(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	if (p->flags & PF_NO_SETAFFINITY)
-+		return -EINVAL;
-+
-+	if (!check_same_owner(p)) {
-+		guard(rcu)();
-+		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE))
-+			return -EPERM;
-+	}
-+
-+	retval = security_task_setscheduler(p);
-+	if (retval)
-+		return retval;
-+
-+	/*
-+	 * With non-SMP configs, user_cpus_ptr/user_mask isn't used and
-+	 * alloc_user_cpus_ptr() returns NULL.
-+	 */
-+	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
-+	if (user_mask) {
-+		cpumask_copy(user_mask, in_mask);
-+	} else if (IS_ENABLED(CONFIG_SMP)) {
-+		return -ENOMEM;
-+	}
-+
-+	ac = (struct affinity_context){
-+		.new_mask  = in_mask,
-+		.user_mask = user_mask,
-+		.flags     = SCA_USER,
-+	};
-+
-+	retval = __sched_setaffinity(p, &ac);
-+	kfree(ac.user_mask);
-+
-+	return retval;
-+}
-+
-+static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
-+			     struct cpumask *new_mask)
-+{
-+	if (len < cpumask_size())
-+		cpumask_clear(new_mask);
-+	else if (len > cpumask_size())
-+		len = cpumask_size();
-+
-+	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
-+}
-+
-+/**
-+ * sys_sched_setaffinity - set the CPU affinity of a process
-+ * @pid: pid of the process
-+ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
-+ * @user_mask_ptr: user-space pointer to the new CPU mask
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
-+		unsigned long __user *, user_mask_ptr)
-+{
-+	cpumask_var_t new_mask;
-+	int retval;
-+
-+	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
-+		return -ENOMEM;
-+
-+	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
-+	if (retval == 0)
-+		retval = sched_setaffinity(pid, new_mask);
-+	free_cpumask_var(new_mask);
-+	return retval;
-+}
-+
-+long sched_getaffinity(pid_t pid, cpumask_t *mask)
-+{
-+	struct task_struct *p;
-+	int retval;
-+
-+	guard(rcu)();
-+	p = find_process_by_pid(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	retval = security_task_getscheduler(p);
-+	if (retval)
-+		return retval;
-+
-+	guard(raw_spinlock_irqsave)(&p->pi_lock);
-+	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
-+
-+	return retval;
-+}
-+
-+/**
-+ * sys_sched_getaffinity - get the CPU affinity of a process
-+ * @pid: pid of the process
-+ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
-+ * @user_mask_ptr: user-space pointer to hold the current CPU mask
-+ *
-+ * Return: size of CPU mask copied to user_mask_ptr on success. An
-+ * error code otherwise.
-+ */
-+SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
-+		unsigned long __user *, user_mask_ptr)
-+{
-+	int ret;
-+	cpumask_var_t mask;
-+
-+	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
-+		return -EINVAL;
-+	if (len & (sizeof(unsigned long)-1))
-+		return -EINVAL;
-+
-+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
-+		return -ENOMEM;
-+
-+	ret = sched_getaffinity(pid, mask);
-+	if (ret == 0) {
-+		unsigned int retlen = min(len, cpumask_size());
-+
-+		if (copy_to_user(user_mask_ptr, cpumask_bits(mask), retlen))
-+			ret = -EFAULT;
-+		else
-+			ret = retlen;
-+	}
-+	free_cpumask_var(mask);
-+
-+	return ret;
-+}
-+
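-+/*
-+ * sched_yield() behaviour is controlled by sched_yield_type (the yield_type
-+ * sysctl) checked below: 0 turns the yield into a no-op, otherwise the
-+ * calling task is requeued (RT tasks directly, normal tasks additionally via
-+ * do_sched_yield_type_1()) before rescheduling.
-+ */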
-+static void do_sched_yield(void)
-+{
-+	struct rq *rq;
-+	struct rq_flags rf;
-+	struct task_struct *p;
-+
-+	if (!sched_yield_type)
-+		return;
-+
-+	rq = this_rq_lock_irq(&rf);
-+
-+	schedstat_inc(rq->yld_count);
-+
-+	p = current;
-+	if (rt_task(p)) {
-+		if (task_on_rq_queued(p))
-+			requeue_task(p, rq);
-+	} else if (rq->nr_running > 1) {
-+		do_sched_yield_type_1(p, rq);
-+		if (task_on_rq_queued(p))
-+			requeue_task(p, rq);
-+	}
-+
-+	preempt_disable();
-+	raw_spin_unlock_irq(&rq->lock);
-+	sched_preempt_enable_no_resched();
-+
-+	schedule();
-+}
-+
-+/**
-+ * sys_sched_yield - yield the current processor to other threads.
-+ *
-+ * This function yields the current CPU to other tasks. If there are no
-+ * other threads running on this CPU then this function will return.
-+ *
-+ * Return: 0.
-+ */
-+SYSCALL_DEFINE0(sched_yield)
-+{
-+	do_sched_yield();
-+	return 0;
-+}
-+
-+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
-+int __sched __cond_resched(void)
-+{
-+	if (should_resched(0)) {
-+		preempt_schedule_common();
-+		return 1;
-+	}
-+	/*
-+	 * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
-+	 * whether the current CPU is in an RCU read-side critical section,
-+	 * so the tick can report quiescent states even for CPUs looping
-+	 * in kernel context.  In contrast, in non-preemptible kernels,
-+	 * RCU readers leave no in-memory hints, which means that CPU-bound
-+	 * processes executing in kernel context might never report an
-+	 * RCU quiescent state.  Therefore, the following code causes
-+	 * cond_resched() to report a quiescent state, but only when RCU
-+	 * is in urgent need of one.
-+	 */
-+#ifndef CONFIG_PREEMPT_RCU
-+	rcu_all_qs();
-+#endif
-+	return 0;
-+}
-+EXPORT_SYMBOL(__cond_resched);
-+#endif
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#define cond_resched_dynamic_enabled	__cond_resched
-+#define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
-+DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
-+EXPORT_STATIC_CALL_TRAMP(cond_resched);
-+
-+#define might_resched_dynamic_enabled	__cond_resched
-+#define might_resched_dynamic_disabled	((void *)&__static_call_return0)
-+DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
-+EXPORT_STATIC_CALL_TRAMP(might_resched);
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
-+int __sched dynamic_cond_resched(void)
-+{
-+	klp_sched_try_switch();
-+	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
-+		return 0;
-+	return __cond_resched();
-+}
-+EXPORT_SYMBOL(dynamic_cond_resched);
-+
-+static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
-+int __sched dynamic_might_resched(void)
-+{
-+	if (!static_branch_unlikely(&sk_dynamic_might_resched))
-+		return 0;
-+	return __cond_resched();
-+}
-+EXPORT_SYMBOL(dynamic_might_resched);
-+#endif
-+#endif
-+
-+/*
-+ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
-+ * call schedule, and on return reacquire the lock.
-+ *
-+ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
-+ * operations here to prevent schedule() from being called twice (once via
-+ * spin_unlock(), once by hand).
-+ */
-+int __cond_resched_lock(spinlock_t *lock)
-+{
-+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+	int ret = 0;
-+
-+	lockdep_assert_held(lock);
-+
-+	if (spin_needbreak(lock) || resched) {
-+		spin_unlock(lock);
-+		if (!_cond_resched())
-+			cpu_relax();
-+		ret = 1;
-+		spin_lock(lock);
-+	}
-+	return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_lock);
-+
-+int __cond_resched_rwlock_read(rwlock_t *lock)
-+{
-+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+	int ret = 0;
-+
-+	lockdep_assert_held_read(lock);
-+
-+	if (rwlock_needbreak(lock) || resched) {
-+		read_unlock(lock);
-+		if (!_cond_resched())
-+			cpu_relax();
-+		ret = 1;
-+		read_lock(lock);
-+	}
-+	return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_rwlock_read);
-+
-+int __cond_resched_rwlock_write(rwlock_t *lock)
-+{
-+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+	int ret = 0;
-+
-+	lockdep_assert_held_write(lock);
-+
-+	if (rwlock_needbreak(lock) || resched) {
-+		write_unlock(lock);
-+		if (!_cond_resched())
-+			cpu_relax();
-+		ret = 1;
-+		write_lock(lock);
-+	}
-+	return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_rwlock_write);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+
-+#ifdef CONFIG_GENERIC_ENTRY
-+#include <linux/entry-common.h>
-+#endif
-+
-+/*
-+ * SC:cond_resched
-+ * SC:might_resched
-+ * SC:preempt_schedule
-+ * SC:preempt_schedule_notrace
-+ * SC:irqentry_exit_cond_resched
-+ *
-+ *
-+ * NONE:
-+ *   cond_resched               <- __cond_resched
-+ *   might_resched              <- RET0
-+ *   preempt_schedule           <- NOP
-+ *   preempt_schedule_notrace   <- NOP
-+ *   irqentry_exit_cond_resched <- NOP
-+ *
-+ * VOLUNTARY:
-+ *   cond_resched               <- __cond_resched
-+ *   might_resched              <- __cond_resched
-+ *   preempt_schedule           <- NOP
-+ *   preempt_schedule_notrace   <- NOP
-+ *   irqentry_exit_cond_resched <- NOP
-+ *
-+ * FULL:
-+ *   cond_resched               <- RET0
-+ *   might_resched              <- RET0
-+ *   preempt_schedule           <- preempt_schedule
-+ *   preempt_schedule_notrace   <- preempt_schedule_notrace
-+ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
-+ */
-+
-+enum {
-+	preempt_dynamic_undefined = -1,
-+	preempt_dynamic_none,
-+	preempt_dynamic_voluntary,
-+	preempt_dynamic_full,
-+};
-+
-+int preempt_dynamic_mode = preempt_dynamic_undefined;
-+
-+int sched_dynamic_mode(const char *str)
-+{
-+	if (!strcmp(str, "none"))
-+		return preempt_dynamic_none;
-+
-+	if (!strcmp(str, "voluntary"))
-+		return preempt_dynamic_voluntary;
-+
-+	if (!strcmp(str, "full"))
-+		return preempt_dynamic_full;
-+
-+	return -EINVAL;
-+}
-+
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
-+#define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+#define preempt_dynamic_enable(f)	static_key_enable(&sk_dynamic_##f.key)
-+#define preempt_dynamic_disable(f)	static_key_disable(&sk_dynamic_##f.key)
-+#else
-+#error "Unsupported PREEMPT_DYNAMIC mechanism"
-+#endif
-+
-+static DEFINE_MUTEX(sched_dynamic_mutex);
-+static bool klp_override;
-+
-+static void __sched_dynamic_update(int mode)
-+{
-+	/*
-+	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
-+	 * the ZERO state, which is invalid.
-+	 */
-+	if (!klp_override)
-+		preempt_dynamic_enable(cond_resched);
-+	preempt_dynamic_enable(might_resched);
-+	preempt_dynamic_enable(preempt_schedule);
-+	preempt_dynamic_enable(preempt_schedule_notrace);
-+	preempt_dynamic_enable(irqentry_exit_cond_resched);
-+
-+	switch (mode) {
-+	case preempt_dynamic_none:
-+		if (!klp_override)
-+			preempt_dynamic_enable(cond_resched);
-+		preempt_dynamic_disable(might_resched);
-+		preempt_dynamic_disable(preempt_schedule);
-+		preempt_dynamic_disable(preempt_schedule_notrace);
-+		preempt_dynamic_disable(irqentry_exit_cond_resched);
-+		if (mode != preempt_dynamic_mode)
-+			pr_info("Dynamic Preempt: none\n");
-+		break;
-+
-+	case preempt_dynamic_voluntary:
-+		if (!klp_override)
-+			preempt_dynamic_enable(cond_resched);
-+		preempt_dynamic_enable(might_resched);
-+		preempt_dynamic_disable(preempt_schedule);
-+		preempt_dynamic_disable(preempt_schedule_notrace);
-+		preempt_dynamic_disable(irqentry_exit_cond_resched);
-+		if (mode != preempt_dynamic_mode)
-+			pr_info("Dynamic Preempt: voluntary\n");
-+		break;
-+
-+	case preempt_dynamic_full:
-+		if (!klp_override)
-+			preempt_dynamic_enable(cond_resched);
-+		preempt_dynamic_disable(might_resched);
-+		preempt_dynamic_enable(preempt_schedule);
-+		preempt_dynamic_enable(preempt_schedule_notrace);
-+		preempt_dynamic_enable(irqentry_exit_cond_resched);
-+		if (mode != preempt_dynamic_mode)
-+			pr_info("Dynamic Preempt: full\n");
-+		break;
-+	}
-+
-+	preempt_dynamic_mode = mode;
-+}
-+
-+void sched_dynamic_update(int mode)
-+{
-+	mutex_lock(&sched_dynamic_mutex);
-+	__sched_dynamic_update(mode);
-+	mutex_unlock(&sched_dynamic_mutex);
-+}
-+
-+#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
-+
-+static int klp_cond_resched(void)
-+{
-+	__klp_sched_try_switch();
-+	return __cond_resched();
-+}
-+
-+void sched_dynamic_klp_enable(void)
-+{
-+	mutex_lock(&sched_dynamic_mutex);
-+
-+	klp_override = true;
-+	static_call_update(cond_resched, klp_cond_resched);
-+
-+	mutex_unlock(&sched_dynamic_mutex);
-+}
-+
-+void sched_dynamic_klp_disable(void)
-+{
-+	mutex_lock(&sched_dynamic_mutex);
-+
-+	klp_override = false;
-+	__sched_dynamic_update(preempt_dynamic_mode);
-+
-+	mutex_unlock(&sched_dynamic_mutex);
-+}
-+
-+#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
-+
-+
-+static int __init setup_preempt_mode(char *str)
-+{
-+	int mode = sched_dynamic_mode(str);
-+	if (mode < 0) {
-+		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
-+		return 0;
-+	}
-+
-+	sched_dynamic_update(mode);
-+	return 1;
-+}
-+__setup("preempt=", setup_preempt_mode);
-+
-+static void __init preempt_dynamic_init(void)
-+{
-+	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
-+		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
-+			sched_dynamic_update(preempt_dynamic_none);
-+		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
-+			sched_dynamic_update(preempt_dynamic_voluntary);
-+		} else {
-+			/* Default static call setting, nothing to do */
-+			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
-+			preempt_dynamic_mode = preempt_dynamic_full;
-+			pr_info("Dynamic Preempt: full\n");
-+		}
-+	}
-+}
-+
-+#define PREEMPT_MODEL_ACCESSOR(mode) \
-+	bool preempt_model_##mode(void)						 \
-+	{									 \
-+		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
-+		return preempt_dynamic_mode == preempt_dynamic_##mode;		 \
-+	}									 \
-+	EXPORT_SYMBOL_GPL(preempt_model_##mode)
-+
-+PREEMPT_MODEL_ACCESSOR(none);
-+PREEMPT_MODEL_ACCESSOR(voluntary);
-+PREEMPT_MODEL_ACCESSOR(full);
-+
-+#else /* !CONFIG_PREEMPT_DYNAMIC */
-+
-+static inline void preempt_dynamic_init(void) { }
-+
-+#endif /* #ifdef CONFIG_PREEMPT_DYNAMIC */
-+
-+/**
-+ * yield - yield the current processor to other threads.
-+ *
-+ * Do not ever use this function, there's a 99% chance you're doing it wrong.
-+ *
-+ * The scheduler is at all times free to pick the calling task as the most
-+ * eligible task to run, if removing the yield() call from your code breaks
-+ * it, it's already broken.
-+ *
-+ * Typical broken usage is:
-+ *
-+ * while (!event)
-+ * 	yield();
-+ *
-+ * where one assumes that yield() will let 'the other' process run that will
-+ * make event true. If the current task is a SCHED_FIFO task that will never
-+ * happen. Never use yield() as a progress guarantee!!
-+ *
-+ * If you want to use yield() to wait for something, use wait_event().
-+ * If you want to use yield() to be 'nice' for others, use cond_resched().
-+ * If you still want to use yield(), do not!
-+ */
-+void __sched yield(void)
-+{
-+	set_current_state(TASK_RUNNING);
-+	do_sched_yield();
-+}
-+EXPORT_SYMBOL(yield);
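
[The warning in the comment above applies just as well in user space: spinning on
sched_yield() gives no progress guarantee. A small pthread illustration of the
broken pattern next to the "wait for the event properly" version the comment
recommends; the event flag and thread names are invented for this sketch.]

    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int event;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    static void *producer(void *arg)
    {
            pthread_mutex_lock(&lock);
            atomic_store(&event, 1);
            pthread_cond_signal(&cond);      /* wake the waiter properly */
            pthread_mutex_unlock(&lock);
            return NULL;
    }

    int main(void)
    {
            pthread_t t;

            pthread_create(&t, NULL, producer, NULL);

            /* Broken pattern from the comment, shown but not used:
             *     while (!atomic_load(&event))
             *             sched_yield();
             */

            /* What the comment suggests instead: sleep until the event exists. */
            pthread_mutex_lock(&lock);
            while (!atomic_load(&event))
                    pthread_cond_wait(&cond, &lock);
            pthread_mutex_unlock(&lock);

            pthread_join(t, NULL);
            puts("event observed without busy-yielding");
            return 0;
    }
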
-+
-+/**
-+ * yield_to - yield the current processor to another thread in
-+ * your thread group, or accelerate that thread toward the
-+ * processor it's on.
-+ * @p: target task
-+ * @preempt: whether task preemption is allowed or not
-+ *
-+ * It's the caller's job to ensure that the target task struct
-+ * can't go away on us before we can do any checks.
-+ *
-+ * In Alt schedule FW, yield_to is not supported.
-+ *
-+ * Return:
-+ *	true (>0) if we indeed boosted the target task.
-+ *	false (0) if we failed to boost the target.
-+ *	-ESRCH if there's no task to yield to.
-+ */
-+int __sched yield_to(struct task_struct *p, bool preempt)
-+{
-+	return 0;
-+}
-+EXPORT_SYMBOL_GPL(yield_to);
-+
-+int io_schedule_prepare(void)
-+{
-+	int old_iowait = current->in_iowait;
-+
-+	current->in_iowait = 1;
-+	blk_flush_plug(current->plug, true);
-+	return old_iowait;
-+}
-+
-+void io_schedule_finish(int token)
-+{
-+	current->in_iowait = token;
-+}
-+
-+/*
-+ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
-+ * that process accounting knows that this is a task in IO wait state.
-+ *
-+ * But don't do that if it is a deliberate, throttling IO wait (this task
-+ * has set its backing_dev_info: the queue against which it should throttle)
-+ */
-+
-+long __sched io_schedule_timeout(long timeout)
-+{
-+	int token;
-+	long ret;
-+
-+	token = io_schedule_prepare();
-+	ret = schedule_timeout(timeout);
-+	io_schedule_finish(token);
-+
-+	return ret;
-+}
-+EXPORT_SYMBOL(io_schedule_timeout);
-+
-+void __sched io_schedule(void)
-+{
-+	int token;
-+
-+	token = io_schedule_prepare();
-+	schedule();
-+	io_schedule_finish(token);
-+}
-+EXPORT_SYMBOL(io_schedule);
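
[io_schedule_prepare()/io_schedule_finish() above use a save/set/restore token so
that nested callers put in_iowait back to whatever it was. A stand-alone sketch of
that idiom with an invented per-thread flag, purely to show the shape of the API:]

    #include <stdio.h>

    static _Thread_local int in_iowait;

    static int iowait_prepare(void)
    {
            int old = in_iowait;

            in_iowait = 1;
            return old;               /* token handed back to the caller */
    }

    static void iowait_finish(int token)
    {
            in_iowait = token;        /* restore the previous value, so nesting works */
    }

    int main(void)
    {
            int tok = iowait_prepare();

            printf("in_iowait while blocking: %d\n", in_iowait);
            iowait_finish(tok);
            printf("in_iowait restored: %d\n", in_iowait);
            return 0;
    }
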
-+
-+/**
-+ * sys_sched_get_priority_max - return maximum RT priority.
-+ * @policy: scheduling class.
-+ *
-+ * Return: On success, this syscall returns the maximum
-+ * rt_priority that can be used by a given scheduling class.
-+ * On failure, a negative error code is returned.
-+ */
-+SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
-+{
-+	int ret = -EINVAL;
-+
-+	switch (policy) {
-+	case SCHED_FIFO:
-+	case SCHED_RR:
-+		ret = MAX_RT_PRIO - 1;
-+		break;
-+	case SCHED_NORMAL:
-+	case SCHED_BATCH:
-+	case SCHED_IDLE:
-+		ret = 0;
-+		break;
-+	}
-+	return ret;
-+}
-+
-+/**
-+ * sys_sched_get_priority_min - return minimum RT priority.
-+ * @policy: scheduling class.
-+ *
-+ * Return: On success, this syscall returns the minimum
-+ * rt_priority that can be used by a given scheduling class.
-+ * On failure, a negative error code is returned.
-+ */
-+SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
-+{
-+	int ret = -EINVAL;
-+
-+	switch (policy) {
-+	case SCHED_FIFO:
-+	case SCHED_RR:
-+		ret = 1;
-+		break;
-+	case SCHED_NORMAL:
-+	case SCHED_BATCH:
-+	case SCHED_IDLE:
-+		ret = 0;
-+		break;
-+	}
-+	return ret;
-+}
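
[The two syscalls above are reachable from user space through the standard glibc
wrappers; with MAX_RT_PRIO at its usual value the expected answers are 99/1 for
the real-time policies and 0/0 for the normal ones. A quick check, nothing
patch-specific assumed:]

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    static void show(const char *name, int policy)
    {
            printf("%-12s max=%d min=%d\n", name,
                   sched_get_priority_max(policy),
                   sched_get_priority_min(policy));
    }

    int main(void)
    {
            show("SCHED_FIFO",  SCHED_FIFO);   /* 99 / 1 */
            show("SCHED_RR",    SCHED_RR);     /* 99 / 1 */
            show("SCHED_OTHER", SCHED_OTHER);  /*  0 / 0 */
            show("SCHED_BATCH", SCHED_BATCH);  /*  0 / 0 */
            return 0;
    }
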
-+
-+static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
-+{
-+	struct task_struct *p;
-+	int retval;
-+
-+	alt_sched_debug();
-+
-+	if (pid < 0)
-+		return -EINVAL;
-+
-+	guard(rcu)();
-+	p = find_process_by_pid(pid);
-+	if (!p)
-+		return -EINVAL;
-+
-+	retval = security_task_getscheduler(p);
-+	if (retval)
-+		return retval;
-+
-+	*t = ns_to_timespec64(sysctl_sched_base_slice);
-+	return 0;
-+}
-+
-+/**
-+ * sys_sched_rr_get_interval - return the default timeslice of a process.
-+ * @pid: pid of the process.
-+ * @interval: userspace pointer to the timeslice value.
-+ *
-+ *
-+ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
-+ * an error code.
-+ */
-+SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
-+		struct __kernel_timespec __user *, interval)
-+{
-+	struct timespec64 t;
-+	int retval = sched_rr_get_interval(pid, &t);
-+
-+	if (retval == 0)
-+		retval = put_timespec64(&t, interval);
-+
-+	return retval;
-+}
-+
-+#ifdef CONFIG_COMPAT_32BIT_TIME
-+SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
-+		struct old_timespec32 __user *, interval)
-+{
-+	struct timespec64 t;
-+	int retval = sched_rr_get_interval(pid, &t);
-+
-+	if (retval == 0)
-+		retval = put_old_timespec32(&t, interval);
-+	return retval;
-+}
-+#endif
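
[As the code above shows, sched_rr_get_interval() always reports
sysctl_sched_base_slice regardless of policy, so the interval seen from user space
is simply this scheduler's base time slice. A minimal query; pid 0 means the
calling thread:]

    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
            struct timespec ts;

            if (sched_rr_get_interval(0, &ts)) {
                    perror("sched_rr_get_interval");
                    return 1;
            }
            printf("timeslice: %ld.%09ld s\n", (long)ts.tv_sec, ts.tv_nsec);
            return 0;
    }
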
-+
-+void sched_show_task(struct task_struct *p)
-+{
-+	unsigned long free = 0;
-+	int ppid;
-+
-+	if (!try_get_task_stack(p))
-+		return;
-+
-+	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
-+
-+	if (task_is_running(p))
-+		pr_cont("  running task    ");
-+#ifdef CONFIG_DEBUG_STACK_USAGE
-+	free = stack_not_used(p);
-+#endif
-+	ppid = 0;
-+	rcu_read_lock();
-+	if (pid_alive(p))
-+		ppid = task_pid_nr(rcu_dereference(p->real_parent));
-+	rcu_read_unlock();
-+	pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n",
-+		free, task_pid_nr(p), task_tgid_nr(p),
-+		ppid, read_task_thread_flags(p));
-+
-+	print_worker_info(KERN_INFO, p);
-+	print_stop_info(KERN_INFO, p);
-+	show_stack(p, NULL, KERN_INFO);
-+	put_task_stack(p);
-+}
-+EXPORT_SYMBOL_GPL(sched_show_task);
-+
-+static inline bool
-+state_filter_match(unsigned long state_filter, struct task_struct *p)
-+{
-+	unsigned int state = READ_ONCE(p->__state);
-+
-+	/* no filter, everything matches */
-+	if (!state_filter)
-+		return true;
-+
-+	/* filter, but doesn't match */
-+	if (!(state & state_filter))
-+		return false;
-+
-+	/*
-+	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
-+	 * TASK_KILLABLE).
-+	 */
-+	if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
-+		return false;
-+
-+	return true;
-+}
-+
-+
-+void show_state_filter(unsigned int state_filter)
-+{
-+	struct task_struct *g, *p;
-+
-+	rcu_read_lock();
-+	for_each_process_thread(g, p) {
-+		/*
-+		 * reset the NMI-timeout, listing all files on a slow
-+		 * console might take a lot of time:
-+		 * Also, reset softlockup watchdogs on all CPUs, because
-+		 * another CPU might be blocked waiting for us to process
-+		 * an IPI.
-+		 */
-+		touch_nmi_watchdog();
-+		touch_all_softlockup_watchdogs();
-+		if (state_filter_match(state_filter, p))
-+			sched_show_task(p);
-+	}
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+	/* TODO: Alt schedule FW should support this
-+	if (!state_filter)
-+		sysrq_sched_debug_show();
-+	*/
-+#endif
-+	rcu_read_unlock();
-+	/*
-+	 * Only show locks if all tasks are dumped:
-+	 */
-+	if (!state_filter)
-+		debug_show_all_locks();
-+}
-+
-+void dump_cpu_task(int cpu)
-+{
-+	if (cpu == smp_processor_id() && in_hardirq()) {
-+		struct pt_regs *regs;
-+
-+		regs = get_irq_regs();
-+		if (regs) {
-+			show_regs(regs);
-+			return;
-+		}
-+	}
-+
-+	if (trigger_single_cpu_backtrace(cpu))
-+		return;
-+
-+	pr_info("Task dump for CPU %d:\n", cpu);
-+	sched_show_task(cpu_curr(cpu));
-+}
-+
-+/**
-+ * init_idle - set up an idle thread for a given CPU
-+ * @idle: task in question
-+ * @cpu: CPU the idle task belongs to
-+ *
-+ * NOTE: this function does not set the idle thread's NEED_RESCHED
-+ * flag, to make booting more robust.
-+ */
-+void __init init_idle(struct task_struct *idle, int cpu)
-+{
-+#ifdef CONFIG_SMP
-+	struct affinity_context ac = (struct affinity_context) {
-+		.new_mask  = cpumask_of(cpu),
-+		.flags     = 0,
-+	};
-+#endif
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	__sched_fork(0, idle);
-+
-+	raw_spin_lock_irqsave(&idle->pi_lock, flags);
-+	raw_spin_lock(&rq->lock);
-+
-+	idle->last_ran = rq->clock_task;
-+	idle->__state = TASK_RUNNING;
-+	/*
-+	 * PF_KTHREAD should already be set at this point; regardless, make it
-+	 * look like a proper per-CPU kthread.
-+	 */
-+	idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
-+	kthread_set_per_cpu(idle, cpu);
-+
-+	sched_queue_init_idle(&rq->queue, idle);
-+
-+#ifdef CONFIG_SMP
-+	/*
-+	 * It's possible that init_idle() gets called multiple times on a task,
-+	 * in that case do_set_cpus_allowed() will not do the right thing.
-+	 *
-+	 * And since this is boot we can forgo the serialisation.
-+	 */
-+	set_cpus_allowed_common(idle, &ac);
-+#endif
-+
-+	/* Silence PROVE_RCU */
-+	rcu_read_lock();
-+	__set_task_cpu(idle, cpu);
-+	rcu_read_unlock();
-+
-+	rq->idle = idle;
-+	rcu_assign_pointer(rq->curr, idle);
-+	idle->on_cpu = 1;
-+
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
-+
-+	/* Set the preempt count _outside_ the spinlocks! */
-+	init_idle_preempt_count(idle, cpu);
-+
-+	ftrace_graph_init_idle_task(idle, cpu);
-+	vtime_init_idle(idle, cpu);
-+#ifdef CONFIG_SMP
-+	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
-+#endif
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
-+			      const struct cpumask __maybe_unused *trial)
-+{
-+	return 1;
-+}
-+
-+int task_can_attach(struct task_struct *p)
-+{
-+	int ret = 0;
-+
-+	/*
-+	 * Kthreads which disallow setaffinity shouldn't be moved
-+	 * to a new cpuset; we don't want to change their CPU
-+	 * affinity and isolating such threads by their set of
-+	 * allowed nodes is unnecessary.  Thus, cpusets are not
-+	 * applicable for such threads.  This prevents checking for
-+	 * success of set_cpus_allowed_ptr() on all attached tasks
-+	 * before cpus_mask may be changed.
-+	 */
-+	if (p->flags & PF_NO_SETAFFINITY)
-+		ret = -EINVAL;
-+
-+	return ret;
-+}
-+
-+bool sched_smp_initialized __read_mostly;
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+/*
-+ * Ensures that the idle task is using init_mm right before its CPU goes
-+ * offline.
-+ */
-+void idle_task_exit(void)
-+{
-+	struct mm_struct *mm = current->active_mm;
-+
-+	BUG_ON(current != this_rq()->idle);
-+
-+	if (mm != &init_mm) {
-+		switch_mm(mm, &init_mm, current);
-+		finish_arch_post_lock_switch();
-+	}
-+
-+	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
-+}
-+
-+static int __balance_push_cpu_stop(void *arg)
-+{
-+	struct task_struct *p = arg;
-+	struct rq *rq = this_rq();
-+	struct rq_flags rf;
-+	int cpu;
-+
-+	raw_spin_lock_irq(&p->pi_lock);
-+	rq_lock(rq, &rf);
-+
-+	update_rq_clock(rq);
-+
-+	if (task_rq(p) == rq && task_on_rq_queued(p)) {
-+		cpu = select_fallback_rq(rq->cpu, p);
-+		rq = __migrate_task(rq, p, cpu);
-+	}
-+
-+	rq_unlock(rq, &rf);
-+	raw_spin_unlock_irq(&p->pi_lock);
-+
-+	put_task_struct(p);
-+
-+	return 0;
-+}
-+
-+static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
-+
-+/*
-+ * This is enabled below SCHED_AP_ACTIVE; when !cpu_active(), but only
-+ * effective when the hotplug motion is down.
-+ */
-+static void balance_push(struct rq *rq)
-+{
-+	struct task_struct *push_task = rq->curr;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	/*
-+	 * Ensure the thing is persistent until balance_push_set(.on = false);
-+	 */
-+	rq->balance_callback = &balance_push_callback;
-+
-+	/*
-+	 * Only active while going offline and when invoked on the outgoing
-+	 * CPU.
-+	 */
-+	if (!cpu_dying(rq->cpu) || rq != this_rq())
-+		return;
-+
-+	/*
-+	 * Both the cpu-hotplug and stop task are in this case and are
-+	 * required to complete the hotplug process.
-+	 */
-+	if (kthread_is_per_cpu(push_task) ||
-+	    is_migration_disabled(push_task)) {
-+
-+		/*
-+		 * If this is the idle task on the outgoing CPU try to wake
-+		 * up the hotplug control thread which might wait for the
-+		 * last task to vanish. The rcuwait_active() check is
-+		 * accurate here because the waiter is pinned on this CPU
-+		 * and can't obviously be running in parallel.
-+		 *
-+		 * On RT kernels this also has to check whether there are
-+		 * pinned and scheduled out tasks on the runqueue. They
-+		 * need to leave the migrate disabled section first.
-+		 */
-+		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
-+		    rcuwait_active(&rq->hotplug_wait)) {
-+			raw_spin_unlock(&rq->lock);
-+			rcuwait_wake_up(&rq->hotplug_wait);
-+			raw_spin_lock(&rq->lock);
-+		}
-+		return;
-+	}
-+
-+	get_task_struct(push_task);
-+	/*
-+	 * Temporarily drop rq->lock such that we can wake-up the stop task.
-+	 * Both preemption and IRQs are still disabled.
-+	 */
-+	preempt_disable();
-+	raw_spin_unlock(&rq->lock);
-+	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
-+			    this_cpu_ptr(&push_work));
-+	preempt_enable();
-+	/*
-+	 * At this point need_resched() is true and we'll take the loop in
-+	 * schedule(). The next pick is obviously going to be the stop task
-+	 * which kthread_is_per_cpu() and will push this task away.
-+	 */
-+	raw_spin_lock(&rq->lock);
-+}
-+
-+static void balance_push_set(int cpu, bool on)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	struct rq_flags rf;
-+
-+	rq_lock_irqsave(rq, &rf);
-+	if (on) {
-+		WARN_ON_ONCE(rq->balance_callback);
-+		rq->balance_callback = &balance_push_callback;
-+	} else if (rq->balance_callback == &balance_push_callback) {
-+		rq->balance_callback = NULL;
-+	}
-+	rq_unlock_irqrestore(rq, &rf);
-+}
-+
-+/*
-+ * Invoked from a CPUs hotplug control thread after the CPU has been marked
-+ * inactive. All tasks which are not per CPU kernel threads are either
-+ * pushed off this CPU now via balance_push() or placed on a different CPU
-+ * during wakeup. Wait until the CPU is quiescent.
-+ */
-+static void balance_hotplug_wait(void)
-+{
-+	struct rq *rq = this_rq();
-+
-+	rcuwait_wait_event(&rq->hotplug_wait,
-+			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
-+			   TASK_UNINTERRUPTIBLE);
-+}
-+
-+#else
-+
-+static void balance_push(struct rq *rq)
-+{
-+}
-+
-+static void balance_push_set(int cpu, bool on)
-+{
-+}
-+
-+static inline void balance_hotplug_wait(void)
-+{
-+}
-+#endif /* CONFIG_HOTPLUG_CPU */
-+
-+static void set_rq_offline(struct rq *rq)
-+{
-+	if (rq->online) {
-+		update_rq_clock(rq);
-+		rq->online = false;
-+	}
-+}
-+
-+static void set_rq_online(struct rq *rq)
-+{
-+	if (!rq->online)
-+		rq->online = true;
-+}
-+
-+/*
-+ * used to mark begin/end of suspend/resume:
-+ */
-+static int num_cpus_frozen;
-+
-+/*
-+ * Update cpusets according to cpu_active mask.  If cpusets are
-+ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
-+ * around partition_sched_domains().
-+ *
-+ * If we come here as part of a suspend/resume, don't touch cpusets because we
-+ * want to restore it back to its original state upon resume anyway.
-+ */
-+static void cpuset_cpu_active(void)
-+{
-+	if (cpuhp_tasks_frozen) {
-+		/*
-+		 * num_cpus_frozen tracks how many CPUs are involved in suspend
-+		 * resume sequence. As long as this is not the last online
-+		 * operation in the resume sequence, just build a single sched
-+		 * domain, ignoring cpusets.
-+		 */
-+		partition_sched_domains(1, NULL, NULL);
-+		if (--num_cpus_frozen)
-+			return;
-+		/*
-+		 * This is the last CPU online operation. So fall through and
-+		 * restore the original sched domains by considering the
-+		 * cpuset configurations.
-+		 */
-+		cpuset_force_rebuild();
-+	}
-+
-+	cpuset_update_active_cpus();
-+}
-+
-+static int cpuset_cpu_inactive(unsigned int cpu)
-+{
-+	if (!cpuhp_tasks_frozen) {
-+		cpuset_update_active_cpus();
-+	} else {
-+		num_cpus_frozen++;
-+		partition_sched_domains(1, NULL, NULL);
-+	}
-+	return 0;
-+}
-+
-+int sched_cpu_activate(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	/*
-+	 * Clear the balance_push callback and prepare to schedule
-+	 * regular tasks.
-+	 */
-+	balance_push_set(cpu, false);
-+
-+#ifdef CONFIG_SCHED_SMT
-+	/*
-+	 * When going up, increment the number of cores with SMT present.
-+	 */
-+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
-+		static_branch_inc_cpuslocked(&sched_smt_present);
-+#endif
-+	set_cpu_active(cpu, true);
-+
-+	if (sched_smp_initialized)
-+		cpuset_cpu_active();
-+
-+	/*
-+	 * Put the rq online, if not already. This happens:
-+	 *
-+	 * 1) In the early boot process, because we build the real domains
-+	 *    after all cpus have been brought up.
-+	 *
-+	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
-+	 *    domains.
-+	 */
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	set_rq_online(rq);
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+	return 0;
-+}
-+
-+int sched_cpu_deactivate(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+	int ret;
-+
-+	set_cpu_active(cpu, false);
-+
-+	/*
-+	 * From this point forward, this CPU will refuse to run any task that
-+	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
-+	 * push those tasks away until this gets cleared, see
-+	 * sched_cpu_dying().
-+	 */
-+	balance_push_set(cpu, true);
-+
-+	/*
-+	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
-+	 * users of this state to go away such that all new such users will
-+	 * observe it.
-+	 *
-+	 * Specifically, we rely on ttwu to no longer target this CPU, see
-+	 * ttwu_queue_cond() and is_cpu_allowed().
-+	 *
-+	 * Do sync before parking smpboot threads to take care of the RCU boost case.
-+	 */
-+	synchronize_rcu();
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	set_rq_offline(rq);
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+#ifdef CONFIG_SCHED_SMT
-+	/*
-+	 * When going down, decrement the number of cores with SMT present.
-+	 */
-+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
-+		static_branch_dec_cpuslocked(&sched_smt_present);
-+		if (!static_branch_likely(&sched_smt_present))
-+			cpumask_clear(sched_sg_idle_mask);
-+	}
-+#endif
-+
-+	if (!sched_smp_initialized)
-+		return 0;
-+
-+	ret = cpuset_cpu_inactive(cpu);
-+	if (ret) {
-+		balance_push_set(cpu, false);
-+		set_cpu_active(cpu, true);
-+		return ret;
-+	}
-+
-+	return 0;
-+}
-+
-+static void sched_rq_cpu_starting(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	rq->calc_load_update = calc_load_update;
-+}
-+
-+int sched_cpu_starting(unsigned int cpu)
-+{
-+	sched_rq_cpu_starting(cpu);
-+	sched_tick_start(cpu);
-+	return 0;
-+}
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+
-+/*
-+ * Invoked immediately before the stopper thread is invoked to bring the
-+ * CPU down completely. At this point all per CPU kthreads except the
-+ * hotplug thread (current) and the stopper thread (inactive) have been
-+ * either parked or have been unbound from the outgoing CPU. Ensure that
-+ * any of those which might be on the way out are gone.
-+ *
-+ * If after this point a bound task is being woken on this CPU then the
-+ * responsible hotplug callback has failed to do its job.
-+ * sched_cpu_dying() will catch it with the appropriate fireworks.
-+ */
-+int sched_cpu_wait_empty(unsigned int cpu)
-+{
-+	balance_hotplug_wait();
-+	return 0;
-+}
-+
-+/*
-+ * Since this CPU is going 'away' for a while, fold any nr_active delta we
-+ * might have. Called from the CPU stopper task after ensuring that the
-+ * stopper is the last running task on the CPU, so nr_active count is
-+ * stable. We need to take the teardown thread which is calling this into
-+ * account, so we hand in adjust = 1 to the load calculation.
-+ *
-+ * Also see the comment "Global load-average calculations".
-+ */
-+static void calc_load_migrate(struct rq *rq)
-+{
-+	long delta = calc_load_fold_active(rq, 1);
-+
-+	if (delta)
-+		atomic_long_add(delta, &calc_load_tasks);
-+}
-+
-+static void dump_rq_tasks(struct rq *rq, const char *loglvl)
-+{
-+	struct task_struct *g, *p;
-+	int cpu = cpu_of(rq);
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
-+	for_each_process_thread(g, p) {
-+		if (task_cpu(p) != cpu)
-+			continue;
-+
-+		if (!task_on_rq_queued(p))
-+			continue;
-+
-+		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
-+	}
-+}
-+
-+int sched_cpu_dying(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	/* Handle pending wakeups and then migrate everything off */
-+	sched_tick_stop(cpu);
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
-+		WARN(true, "Dying CPU not properly vacated!");
-+		dump_rq_tasks(rq, KERN_WARNING);
-+	}
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+	calc_load_migrate(rq);
-+	hrtick_clear(rq);
-+	return 0;
-+}
-+#endif
-+
-+#ifdef CONFIG_SMP
-+static void sched_init_topology_cpumask_early(void)
-+{
-+	int cpu;
-+	cpumask_t *tmp;
-+
-+	for_each_possible_cpu(cpu) {
-+		/* init topo masks */
-+		tmp = per_cpu(sched_cpu_topo_masks, cpu);
-+
-+		cpumask_copy(tmp, cpu_possible_mask);
-+		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
-+		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
-+	}
-+}
-+
-+#define TOPOLOGY_CPUMASK(name, mask, last)\
-+	if (cpumask_and(topo, topo, mask)) {					\
-+		cpumask_copy(topo, mask);					\
-+		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
-+		       cpu, (topo++)->bits[0]);					\
-+	}									\
-+	if (!last)								\
-+		bitmap_complement(cpumask_bits(topo), cpumask_bits(mask),	\
-+				  nr_cpumask_bits);
-+
-+static void sched_init_topology_cpumask(void)
-+{
-+	int cpu;
-+	cpumask_t *topo;
-+
-+	for_each_online_cpu(cpu) {
-+		/* take chance to reset time slice for idle tasks */
-+		cpu_rq(cpu)->idle->time_slice = sysctl_sched_base_slice;
-+
-+		topo = per_cpu(sched_cpu_topo_masks, cpu);
-+
-+		bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
-+				  nr_cpumask_bits);
-+#ifdef CONFIG_SCHED_SMT
-+		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
-+#endif
-+		TOPOLOGY_CPUMASK(cluster, topology_cluster_cpumask(cpu), false);
-+
-+		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
-+		per_cpu(sched_cpu_llc_mask, cpu) = topo;
-+		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
-+
-+		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
-+
-+		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
-+
-+		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
-+		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
-+		       cpu, per_cpu(sd_llc_id, cpu),
-+		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
-+			      per_cpu(sched_cpu_topo_masks, cpu)));
-+	}
-+}
-+#endif
-+
-+void __init sched_init_smp(void)
-+{
-+	/* Move init over to a non-isolated CPU */
-+	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
-+		BUG();
-+	current->flags &= ~PF_NO_SETAFFINITY;
-+
-+	sched_init_topology_cpumask();
-+
-+	sched_smp_initialized = true;
-+}
-+
-+static int __init migration_init(void)
-+{
-+	sched_cpu_starting(smp_processor_id());
-+	return 0;
-+}
-+early_initcall(migration_init);
-+
-+#else
-+void __init sched_init_smp(void)
-+{
-+	cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
-+}
-+#endif /* CONFIG_SMP */
-+
-+int in_sched_functions(unsigned long addr)
-+{
-+	return in_lock_functions(addr) ||
-+		(addr >= (unsigned long)__sched_text_start
-+		&& addr < (unsigned long)__sched_text_end);
-+}
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+/*
-+ * Default task group.
-+ * Every task in system belongs to this group at bootup.
-+ */
-+struct task_group root_task_group;
-+LIST_HEAD(task_groups);
-+
-+/* Cacheline aligned slab cache for task_group */
-+static struct kmem_cache *task_group_cache __ro_after_init;
-+#endif /* CONFIG_CGROUP_SCHED */
-+
-+void __init sched_init(void)
-+{
-+	int i;
-+	struct rq *rq;
-+#ifdef CONFIG_SCHED_SMT
-+	struct balance_arg balance_arg = {.cpumask = sched_sg_idle_mask, .active = 0};
-+#endif
-+
-+	printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
-+			 " by Alfred Chen.\n");
-+
-+	wait_bit_init();
-+
-+#ifdef CONFIG_SMP
-+	for (i = 0; i < SCHED_QUEUE_BITS; i++)
-+		cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
-+#endif
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+	task_group_cache = KMEM_CACHE(task_group, 0);
-+
-+	list_add(&root_task_group.list, &task_groups);
-+	INIT_LIST_HEAD(&root_task_group.children);
-+	INIT_LIST_HEAD(&root_task_group.siblings);
-+#endif /* CONFIG_CGROUP_SCHED */
-+	for_each_possible_cpu(i) {
-+		rq = cpu_rq(i);
-+
-+		sched_queue_init(&rq->queue);
-+		rq->prio = IDLE_TASK_SCHED_PRIO;
-+#ifdef CONFIG_SCHED_PDS
-+		rq->prio_idx = rq->prio;
-+#endif
-+
-+		raw_spin_lock_init(&rq->lock);
-+		rq->nr_running = rq->nr_uninterruptible = 0;
-+		rq->calc_load_active = 0;
-+		rq->calc_load_update = jiffies + LOAD_FREQ;
-+#ifdef CONFIG_SMP
-+		rq->online = false;
-+		rq->cpu = i;
-+
-+#ifdef CONFIG_SCHED_SMT
-+		rq->sg_balance_arg = balance_arg;
-+#endif
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
-+#endif
-+		rq->balance_callback = &balance_push_callback;
-+#ifdef CONFIG_HOTPLUG_CPU
-+		rcuwait_init(&rq->hotplug_wait);
-+#endif
-+#endif /* CONFIG_SMP */
-+		rq->nr_switches = 0;
-+
-+		hrtick_rq_init(rq);
-+		atomic_set(&rq->nr_iowait, 0);
-+
-+		zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
-+	}
-+#ifdef CONFIG_SMP
-+	/* Set rq->online for cpu 0 */
-+	cpu_rq(0)->online = true;
-+#endif
-+	/*
-+	 * The boot idle thread does lazy MMU switching as well:
-+	 */
-+	mmgrab(&init_mm);
-+	enter_lazy_tlb(&init_mm, current);
-+
-+	/*
-+	 * The idle task doesn't need the kthread struct to function, but it
-+	 * is dressed up as a per-CPU kthread and thus needs to play the part
-+	 * if we want to avoid special-casing it in code that deals with per-CPU
-+	 * kthreads.
-+	 */
-+	WARN_ON(!set_kthread_struct(current));
-+
-+	/*
-+	 * Make us the idle thread. Technically, schedule() should not be
-+	 * called from this thread, however somewhere below it might be,
-+	 * but because we are the idle thread, we just pick up running again
-+	 * when this runqueue becomes "idle".
-+	 */
-+	init_idle(current, smp_processor_id());
-+
-+	calc_load_update = jiffies + LOAD_FREQ;
-+
-+#ifdef CONFIG_SMP
-+	idle_thread_set_boot_cpu();
-+	balance_push_set(smp_processor_id(), false);
-+
-+	sched_init_topology_cpumask_early();
-+#endif /* SMP */
-+
-+	preempt_dynamic_init();
-+}
-+
-+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
-+
-+void __might_sleep(const char *file, int line)
-+{
-+	unsigned int state = get_current_state();
-+	/*
-+	 * Blocking primitives will set (and therefore destroy) current->state,
-+	 * since we will exit with TASK_RUNNING make sure we enter with it,
-+	 * otherwise we will destroy state.
-+	 */
-+	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
-+			"do not call blocking ops when !TASK_RUNNING; "
-+			"state=%x set at [<%p>] %pS\n", state,
-+			(void *)current->task_state_change,
-+			(void *)current->task_state_change);
-+
-+	__might_resched(file, line, 0);
-+}
-+EXPORT_SYMBOL(__might_sleep);
-+
-+static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
-+{
-+	if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
-+		return;
-+
-+	if (preempt_count() == preempt_offset)
-+		return;
-+
-+	pr_err("Preemption disabled at:");
-+	print_ip_sym(KERN_ERR, ip);
-+}
-+
-+static inline bool resched_offsets_ok(unsigned int offsets)
-+{
-+	unsigned int nested = preempt_count();
-+
-+	nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
-+
-+	return nested == offsets;
-+}
-+
-+void __might_resched(const char *file, int line, unsigned int offsets)
-+{
-+	/* Ratelimiting timestamp: */
-+	static unsigned long prev_jiffy;
-+
-+	unsigned long preempt_disable_ip;
-+
-+	/* WARN_ON_ONCE() by default, no rate limit required: */
-+	rcu_sleep_check();
-+
-+	if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
-+	     !is_idle_task(current) && !current->non_block_count) ||
-+	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
-+	    oops_in_progress)
-+		return;
-+	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+		return;
-+	prev_jiffy = jiffies;
-+
-+	/* Save this before calling printk(), since that will clobber it: */
-+	preempt_disable_ip = get_preempt_disable_ip(current);
-+
-+	pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
-+	       file, line);
-+	pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
-+	       in_atomic(), irqs_disabled(), current->non_block_count,
-+	       current->pid, current->comm);
-+	pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
-+	       offsets & MIGHT_RESCHED_PREEMPT_MASK);
-+
-+	if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
-+		pr_err("RCU nest depth: %d, expected: %u\n",
-+		       rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
-+	}
-+
-+	if (task_stack_end_corrupted(current))
-+		pr_emerg("Thread overran stack, or stack corrupted\n");
-+
-+	debug_show_held_locks(current);
-+	if (irqs_disabled())
-+		print_irqtrace_events(current);
-+
-+	print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
-+				 preempt_disable_ip);
-+
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL(__might_resched);
-+
-+void __cant_sleep(const char *file, int line, int preempt_offset)
-+{
-+	static unsigned long prev_jiffy;
-+
-+	if (irqs_disabled())
-+		return;
-+
-+	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-+		return;
-+
-+	if (preempt_count() > preempt_offset)
-+		return;
-+
-+	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+		return;
-+	prev_jiffy = jiffies;
-+
-+	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
-+	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
-+			in_atomic(), irqs_disabled(),
-+			current->pid, current->comm);
-+
-+	debug_show_held_locks(current);
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL_GPL(__cant_sleep);
-+
-+#ifdef CONFIG_SMP
-+void __cant_migrate(const char *file, int line)
-+{
-+	static unsigned long prev_jiffy;
-+
-+	if (irqs_disabled())
-+		return;
-+
-+	if (is_migration_disabled(current))
-+		return;
-+
-+	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-+		return;
-+
-+	if (preempt_count() > 0)
-+		return;
-+
-+	if (current->migration_flags & MDF_FORCE_ENABLED)
-+		return;
-+
-+	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+		return;
-+	prev_jiffy = jiffies;
-+
-+	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
-+	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
-+	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
-+	       current->pid, current->comm);
-+
-+	debug_show_held_locks(current);
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL_GPL(__cant_migrate);
-+#endif
-+#endif
-+
-+#ifdef CONFIG_MAGIC_SYSRQ
-+void normalize_rt_tasks(void)
-+{
-+	struct task_struct *g, *p;
-+	struct sched_attr attr = {
-+		.sched_policy = SCHED_NORMAL,
-+	};
-+
-+	read_lock(&tasklist_lock);
-+	for_each_process_thread(g, p) {
-+		/*
-+		 * Only normalize user tasks:
-+		 */
-+		if (p->flags & PF_KTHREAD)
-+			continue;
-+
-+		schedstat_set(p->stats.wait_start,  0);
-+		schedstat_set(p->stats.sleep_start, 0);
-+		schedstat_set(p->stats.block_start, 0);
-+
-+		if (!rt_task(p)) {
-+			/*
-+			 * Renice negative nice level userspace
-+			 * tasks back to 0:
-+			 */
-+			if (task_nice(p) < 0)
-+				set_user_nice(p, 0);
-+			continue;
-+		}
-+
-+		__sched_setscheduler(p, &attr, false, false);
-+	}
-+	read_unlock(&tasklist_lock);
-+}
-+#endif /* CONFIG_MAGIC_SYSRQ */
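
[In mainline, normalize_rt_tasks() is wired to the magic SysRq 'n' key ("nice all
RT tasks"). Assuming this patch keeps that hookup, it can be triggered as root
through /proc/sysrq-trigger, for example:]

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/proc/sysrq-trigger", "w");    /* requires root */

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            fputc('n', f);  /* SysRq 'n': demote all RT tasks to SCHED_NORMAL */
            fclose(f);
            return 0;
    }
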
-+
-+#if defined(CONFIG_KGDB_KDB)
-+/*
-+ * These functions are only useful for kdb.
-+ *
-+ * They can only be called when the whole system has been
-+ * stopped - every CPU needs to be quiescent, and no scheduling
-+ * activity can take place. Using them for anything else would
-+ * be a serious bug, and as a result, they aren't even visible
-+ * under any other configuration.
-+ */
-+
-+/**
-+ * curr_task - return the current task for a given CPU.
-+ * @cpu: the processor in question.
-+ *
-+ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
-+ *
-+ * Return: The current task for @cpu.
-+ */
-+struct task_struct *curr_task(int cpu)
-+{
-+	return cpu_curr(cpu);
-+}
-+
-+#endif /* defined(CONFIG_KGDB_KDB) */
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+static void sched_free_group(struct task_group *tg)
-+{
-+	kmem_cache_free(task_group_cache, tg);
-+}
-+
-+static void sched_free_group_rcu(struct rcu_head *rhp)
-+{
-+	sched_free_group(container_of(rhp, struct task_group, rcu));
-+}
-+
-+static void sched_unregister_group(struct task_group *tg)
-+{
-+	/*
-+	 * We have to wait for yet another RCU grace period to expire, as
-+	 * print_cfs_stats() might run concurrently.
-+	 */
-+	call_rcu(&tg->rcu, sched_free_group_rcu);
-+}
-+
-+/* allocate runqueue etc for a new task group */
-+struct task_group *sched_create_group(struct task_group *parent)
-+{
-+	struct task_group *tg;
-+
-+	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
-+	if (!tg)
-+		return ERR_PTR(-ENOMEM);
-+
-+	return tg;
-+}
-+
-+void sched_online_group(struct task_group *tg, struct task_group *parent)
-+{
-+}
-+
-+/* rcu callback to free various structures associated with a task group */
-+static void sched_unregister_group_rcu(struct rcu_head *rhp)
-+{
-+	/* Now it should be safe to free those cfs_rqs: */
-+	sched_unregister_group(container_of(rhp, struct task_group, rcu));
-+}
-+
-+void sched_destroy_group(struct task_group *tg)
-+{
-+	/* Wait for possible concurrent references to cfs_rqs complete: */
-+	call_rcu(&tg->rcu, sched_unregister_group_rcu);
-+}
-+
-+void sched_release_group(struct task_group *tg)
-+{
-+}
-+
-+static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
-+{
-+	return css ? container_of(css, struct task_group, css) : NULL;
-+}
-+
-+static struct cgroup_subsys_state *
-+cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
-+{
-+	struct task_group *parent = css_tg(parent_css);
-+	struct task_group *tg;
-+
-+	if (!parent) {
-+		/* This is early initialization for the top cgroup */
-+		return &root_task_group.css;
-+	}
-+
-+	tg = sched_create_group(parent);
-+	if (IS_ERR(tg))
-+		return ERR_PTR(-ENOMEM);
-+	return &tg->css;
-+}
-+
-+/* Expose task group only after completing cgroup initialization */
-+static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
-+{
-+	struct task_group *tg = css_tg(css);
-+	struct task_group *parent = css_tg(css->parent);
-+
-+	if (parent)
-+		sched_online_group(tg, parent);
-+	return 0;
-+}
-+
-+static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
-+{
-+	struct task_group *tg = css_tg(css);
-+
-+	sched_release_group(tg);
-+}
-+
-+static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
-+{
-+	struct task_group *tg = css_tg(css);
-+
-+	/*
-+	 * Relies on the RCU grace period between css_released() and this.
-+	 */
-+	sched_unregister_group(tg);
-+}
-+
-+#ifdef CONFIG_RT_GROUP_SCHED
-+static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
-+{
-+	return 0;
-+}
-+#endif
-+
-+static void cpu_cgroup_attach(struct cgroup_taskset *tset)
-+{
-+}
-+
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+static DEFINE_MUTEX(shares_mutex);
-+
-+static int sched_group_set_shares(struct task_group *tg, unsigned long shares)
-+{
-+	/*
-+	 * We can't change the weight of the root cgroup.
-+	 */
-+	if (&root_task_group == tg)
-+		return -EINVAL;
-+
-+	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
-+
-+	mutex_lock(&shares_mutex);
-+	if (tg->shares == shares)
-+		goto done;
-+
-+	tg->shares = shares;
-+done:
-+	mutex_unlock(&shares_mutex);
-+	return 0;
-+}
-+
-+static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
-+				struct cftype *cftype, u64 shareval)
-+{
-+	if (shareval > scale_load_down(ULONG_MAX))
-+		shareval = MAX_SHARES;
-+	return sched_group_set_shares(css_tg(css), scale_load(shareval));
-+}
-+
-+static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	struct task_group *tg = css_tg(css);
-+
-+	return (u64) scale_load_down(tg->shares);
-+}
-+#endif
-+
-+static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
-+				  struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
-+				   struct cftype *cftype, s64 cfs_quota_us)
-+{
-+	return 0;
-+}
-+
-+static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
-+				   struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
-+				    struct cftype *cftype, u64 cfs_period_us)
-+{
-+	return 0;
-+}
-+
-+static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
-+				  struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
-+				   struct cftype *cftype, u64 cfs_burst_us)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
-+				struct cftype *cft, s64 val)
-+{
-+	return 0;
-+}
-+
-+static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
-+				    struct cftype *cftype, u64 rt_period_us)
-+{
-+	return 0;
-+}
-+
-+static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
-+				   struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
-+				    char *buf, size_t nbytes,
-+				    loff_t off)
-+{
-+	return nbytes;
-+}
-+
-+static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
-+				    char *buf, size_t nbytes,
-+				    loff_t off)
-+{
-+	return nbytes;
-+}
-+
-+static struct cftype cpu_legacy_files[] = {
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+	{
-+		.name = "shares",
-+		.read_u64 = cpu_shares_read_u64,
-+		.write_u64 = cpu_shares_write_u64,
-+	},
-+#endif
-+	{
-+		.name = "cfs_quota_us",
-+		.read_s64 = cpu_cfs_quota_read_s64,
-+		.write_s64 = cpu_cfs_quota_write_s64,
-+	},
-+	{
-+		.name = "cfs_period_us",
-+		.read_u64 = cpu_cfs_period_read_u64,
-+		.write_u64 = cpu_cfs_period_write_u64,
-+	},
-+	{
-+		.name = "cfs_burst_us",
-+		.read_u64 = cpu_cfs_burst_read_u64,
-+		.write_u64 = cpu_cfs_burst_write_u64,
-+	},
-+	{
-+		.name = "stat",
-+		.seq_show = cpu_cfs_stat_show,
-+	},
-+	{
-+		.name = "stat.local",
-+		.seq_show = cpu_cfs_local_stat_show,
-+	},
-+	{
-+		.name = "rt_runtime_us",
-+		.read_s64 = cpu_rt_runtime_read,
-+		.write_s64 = cpu_rt_runtime_write,
-+	},
-+	{
-+		.name = "rt_period_us",
-+		.read_u64 = cpu_rt_period_read_uint,
-+		.write_u64 = cpu_rt_period_write_uint,
-+	},
-+	{
-+		.name = "uclamp.min",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_uclamp_min_show,
-+		.write = cpu_uclamp_min_write,
-+	},
-+	{
-+		.name = "uclamp.max",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_uclamp_max_show,
-+		.write = cpu_uclamp_max_write,
-+	},
-+	{ }	/* Terminate */
-+};
-+
-+static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
-+				struct cftype *cft, u64 weight)
-+{
-+	return 0;
-+}
-+
-+static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
-+				    struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
-+				     struct cftype *cft, s64 nice)
-+{
-+	return 0;
-+}
-+
-+static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
-+				struct cftype *cft, s64 idle)
-+{
-+	return 0;
-+}
-+
-+static int cpu_max_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static ssize_t cpu_max_write(struct kernfs_open_file *of,
-+			     char *buf, size_t nbytes, loff_t off)
-+{
-+	return nbytes;
-+}
-+
-+static struct cftype cpu_files[] = {
-+	{
-+		.name = "weight",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.read_u64 = cpu_weight_read_u64,
-+		.write_u64 = cpu_weight_write_u64,
-+	},
-+	{
-+		.name = "weight.nice",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.read_s64 = cpu_weight_nice_read_s64,
-+		.write_s64 = cpu_weight_nice_write_s64,
-+	},
-+	{
-+		.name = "idle",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.read_s64 = cpu_idle_read_s64,
-+		.write_s64 = cpu_idle_write_s64,
-+	},
-+	{
-+		.name = "max",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_max_show,
-+		.write = cpu_max_write,
-+	},
-+	{
-+		.name = "max.burst",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.read_u64 = cpu_cfs_burst_read_u64,
-+		.write_u64 = cpu_cfs_burst_write_u64,
-+	},
-+	{
-+		.name = "uclamp.min",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_uclamp_min_show,
-+		.write = cpu_uclamp_min_write,
-+	},
-+	{
-+		.name = "uclamp.max",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_uclamp_max_show,
-+		.write = cpu_uclamp_max_write,
-+	},
-+	{ }	/* terminate */
-+};
-+
-+static int cpu_extra_stat_show(struct seq_file *sf,
-+			       struct cgroup_subsys_state *css)
-+{
-+	return 0;
-+}
-+
-+static int cpu_local_stat_show(struct seq_file *sf,
-+			       struct cgroup_subsys_state *css)
-+{
-+	return 0;
-+}
-+
-+struct cgroup_subsys cpu_cgrp_subsys = {
-+	.css_alloc	= cpu_cgroup_css_alloc,
-+	.css_online	= cpu_cgroup_css_online,
-+	.css_released	= cpu_cgroup_css_released,
-+	.css_free	= cpu_cgroup_css_free,
-+	.css_extra_stat_show = cpu_extra_stat_show,
-+	.css_local_stat_show = cpu_local_stat_show,
-+#ifdef CONFIG_RT_GROUP_SCHED
-+	.can_attach	= cpu_cgroup_can_attach,
-+#endif
-+	.attach		= cpu_cgroup_attach,
-+	.legacy_cftypes	= cpu_legacy_files,
-+	.dfl_cftypes	= cpu_files,
-+	.early_init	= true,
-+	.threaded	= true,
-+};
-+#endif	/* CONFIG_CGROUP_SCHED */
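
[Note that, as far as the code above goes, most of the cgroup handlers are stubs:
the cpu controller is registered, cpu.shares is clamped and stored but nothing
shown here consumes it, and knobs such as cpu.weight, cpu.max and the uclamp files
accept writes and report 0. A hedged user-space check; the path assumes cgroup v2
mounted at /sys/fs/cgroup and a pre-existing group named "demo", both of which are
assumptions for illustration:]

    #include <stdio.h>

    int main(void)
    {
            const char *path = "/sys/fs/cgroup/demo/cpu.weight";  /* hypothetical group */
            char buf[64];
            FILE *f;

            f = fopen(path, "w");
            if (f) {
                    fputs("200\n", f);      /* accepted, but ignored by these stubs */
                    fclose(f);
            }

            f = fopen(path, "r");
            if (!f) {
                    perror("fopen");
                    return 1;
            }
            if (fgets(buf, sizeof(buf), f))
                    printf("cpu.weight reads back as: %s", buf);  /* "0" with these stubs */
            fclose(f);
            return 0;
    }
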
-+
-+#undef CREATE_TRACE_POINTS
-+
-+#ifdef CONFIG_SCHED_MM_CID
-+
-+/*
-+ * @cid_lock: Guarantee forward-progress of cid allocation.
-+ *
-+ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
-+ * is only used when contention is detected by the lock-free allocation so
-+ * forward progress can be guaranteed.
-+ */
-+DEFINE_RAW_SPINLOCK(cid_lock);
-+
-+/*
-+ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
-+ *
-+ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
-+ * detected, it is set to 1 to ensure that all newly coming allocations are
-+ * serialized by @cid_lock until the allocation which detected contention
-+ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
-+ * of a cid allocation.
-+ */
-+int use_cid_lock;
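
[The two variables above implement "lock-free unless contention is detected, then
serialize". A simplified, self-contained user-space sketch of that shape; the
bitmap size, retry policy and all names are invented, and the real mm_cid
allocator differs in detail:]

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NR_IDS 8UL

    static atomic_ulong bitmap;             /* bit set => id in use       */
    static atomic_int use_lock;             /* contention seen: serialize */
    static pthread_mutex_t slow_lock = PTHREAD_MUTEX_INITIALIZER;

    /* One lock-free attempt: claim the lowest clear bit, or fail on a race. */
    static int try_alloc(void)
    {
            unsigned long old = atomic_load(&bitmap);

            for (unsigned long i = 0; i < NR_IDS; i++) {
                    if (old & (1UL << i))
                            continue;
                    if (atomic_compare_exchange_strong(&bitmap, &old, old | (1UL << i)))
                            return (int)i;
                    return -1;              /* lost a race; caller decides what next */
            }
            return -1;                      /* no free id */
    }

    static int alloc_id(void)
    {
            int id;

            if (!atomic_load(&use_lock)) {
                    id = try_alloc();       /* fast path, no lock */
                    if (id >= 0)
                            return id;
            }

            /* Contention (or someone else already flagged it): take the lock so
             * at least one allocator is guaranteed to make forward progress. */
            atomic_store(&use_lock, 1);
            pthread_mutex_lock(&slow_lock);
            do {
                    id = try_alloc();
            } while (id < 0 && atomic_load(&bitmap) != (1UL << NR_IDS) - 1);
            pthread_mutex_unlock(&slow_lock);
            atomic_store(&use_lock, 0);
            return id;                      /* -1 only when the bitmap is full */
    }

    int main(void)
    {
            for (int i = 0; i < 10; i++)
                    printf("allocated id %d\n", alloc_id());
            return 0;
    }
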
-+
-+/*
-+ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
-+ * concurrently with respect to the execution of the source runqueue context
-+ * switch.
-+ *
-+ * There is one basic property we want to guarantee here:
-+ *
-+ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
-+ * used by a task. That would lead to concurrent allocation of the cid and
-+ * userspace corruption.
-+ *
-+ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
-+ * that a pair of loads observe at least one of a pair of stores, which can be
-+ * shown as:
-+ *
-+ *      X = Y = 0
-+ *
-+ *      w[X]=1          w[Y]=1
-+ *      MB              MB
-+ *      r[Y]=y          r[X]=x
-+ *
-+ * Which guarantees that x==0 && y==0 is impossible. But rather than using
-+ * values 0 and 1, this algorithm cares about specific state transitions of the
-+ * runqueue current task (as updated by the scheduler context switch), and the
-+ * per-mm/cpu cid value.
-+ *
-+ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
-+ * task->mm != mm for the rest of the discussion. There are two scheduler state
-+ * transitions on context switch we care about:
-+ *
-+ * (TSA) Store to rq->curr with transition from (N) to (Y)
-+ *
-+ * (TSB) Store to rq->curr with transition from (Y) to (N)
-+ *
-+ * On the remote-clear side, there is one transition we care about:
-+ *
-+ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
-+ *
-+ * There is also a transition to UNSET state which can be performed from all
-+ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
-+ * guarantees that only a single thread will succeed:
-+ *
-+ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
-+ *
-+ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
-+ * when a thread is actively using the cid (property (1)).
-+ *
-+ * Let's look at the relevant combinations of TSA/TSB and TMA transitions.
-+ *
-+ * Scenario A) (TSA)+(TMA) (from next task perspective)
-+ *
-+ * CPU0                                      CPU1
-+ *
-+ * Context switch CS-1                       Remote-clear
-+ *   - store to rq->curr: (N)->(Y) (TSA)     - cmpxchg to *pcpu_id to LAZY (TMA)
-+ *                                             (implied barrier after cmpxchg)
-+ *   - switch_mm_cid()
-+ *     - memory barrier (see switch_mm_cid()
-+ *       comment explaining how this barrier
-+ *       is combined with other scheduler
-+ *       barriers)
-+ *     - mm_cid_get (next)
-+ *       - READ_ONCE(*pcpu_cid)              - rcu_dereference(src_rq->curr)
-+ *
-+ * This Dekker ensures that either task (Y) is observed by the
-+ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
-+ * observed.
-+ *
-+ * If task (Y) store is observed by rcu_dereference(), it means that there is
-+ * still an active task on the cpu. Remote-clear will therefore not transition
-+ * to UNSET, which fulfills property (1).
-+ *
-+ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
-+ * it will move its state to UNSET, which clears the percpu cid perhaps
-+ * uselessly (which is not an issue for correctness). Because task (Y) is not
-+ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
-+ * state to UNSET is done with a cmpxchg expecting that the old state has the
-+ * LAZY flag set, only one thread will successfully UNSET.
-+ *
-+ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
-+ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
-+ * CPU1 will observe task (Y) and do nothing more, which is fine.
-+ *
-+ * What we are effectively preventing with this Dekker is a scenario where
-+ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
-+ * because this would UNSET a cid which is actively used.
-+ */
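
[The store-buffering/Dekker pattern from the comment above can be demonstrated in
isolation with C11 seq_cst atomics: with a full barrier between each store and the
following load, both threads reading 0 is impossible. A toy model only; the kernel
code orders real state transitions (rq->curr and the per-cpu cid), not flags.]

    #include <assert.h>
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int X, Y;
    static int rx, ry;

    static void *cpu0(void *arg)
    {
            atomic_store(&X, 1);            /* w[X]=1                   */
            ry = atomic_load(&Y);           /* full barrier, then r[Y]  */
            return NULL;
    }

    static void *cpu1(void *arg)
    {
            atomic_store(&Y, 1);            /* w[Y]=1                   */
            rx = atomic_load(&X);           /* full barrier, then r[X]  */
            return NULL;
    }

    int main(void)
    {
            for (int i = 0; i < 100000; i++) {
                    pthread_t a, b;

                    atomic_store(&X, 0);
                    atomic_store(&Y, 0);
                    pthread_create(&a, NULL, cpu0, NULL);
                    pthread_create(&b, NULL, cpu1, NULL);
                    pthread_join(a, NULL);
                    pthread_join(b, NULL);
                    assert(!(rx == 0 && ry == 0)); /* at least one store is observed */
            }
            puts("rx == 0 && ry == 0 never observed");
            return 0;
    }
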
-+
-+void sched_mm_cid_migrate_from(struct task_struct *t)
-+{
-+	t->migrate_from_cpu = task_cpu(t);
-+}
-+
-+static
-+int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
-+					  struct task_struct *t,
-+					  struct mm_cid *src_pcpu_cid)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct task_struct *src_task;
-+	int src_cid, last_mm_cid;
-+
-+	if (!mm)
-+		return -1;
-+
-+	last_mm_cid = t->last_mm_cid;
-+	/*
-+	 * If the migrated task has no last cid, or if the current
-+	 * task on src rq uses the cid, it means the source cid does not need
-+	 * to be moved to the destination cpu.
-+	 */
-+	if (last_mm_cid == -1)
-+		return -1;
-+	src_cid = READ_ONCE(src_pcpu_cid->cid);
-+	if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
-+		return -1;
-+
-+	/*
-+	 * If we observe an active task using the mm on this rq, it means we
-+	 * are not the last task to be migrated from this cpu for this mm, so
-+	 * there is no need to move src_cid to the destination cpu.
-+	 */
-+	rcu_read_lock();
-+	src_task = rcu_dereference(src_rq->curr);
-+	if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
-+		rcu_read_unlock();
-+		t->last_mm_cid = -1;
-+		return -1;
-+	}
-+	rcu_read_unlock();
-+
-+	return src_cid;
-+}
-+
-+static
-+int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
-+					      struct task_struct *t,
-+					      struct mm_cid *src_pcpu_cid,
-+					      int src_cid)
-+{
-+	struct task_struct *src_task;
-+	struct mm_struct *mm = t->mm;
-+	int lazy_cid;
-+
-+	if (src_cid == -1)
-+		return -1;
-+
-+	/*
-+	 * Attempt to clear the source cpu cid to move it to the destination
-+	 * cpu.
-+	 */
-+	lazy_cid = mm_cid_set_lazy_put(src_cid);
-+	if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
-+		return -1;
-+
-+	/*
-+	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+	 * rq->curr->mm matches the scheduler barrier in context_switch()
-+	 * between store to rq->curr and load of prev and next task's
-+	 * per-mm/cpu cid.
-+	 *
-+	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+	 * rq->curr->mm_cid_active matches the barrier in
-+	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
-+	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
-+	 * load of per-mm/cpu cid.
-+	 */
-+
-+	/*
-+	 * If we observe an active task using the mm on this rq after setting
-+	 * the lazy-put flag, this task will be responsible for transitioning
-+	 * from lazy-put flag set to MM_CID_UNSET.
-+	 */
-+	scoped_guard (rcu) {
-+		src_task = rcu_dereference(src_rq->curr);
-+		if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
-+			/*
-+			 * We observed an active task for this mm, there is therefore
-+			 * no point in moving this cid to the destination cpu.
-+			 */
-+			t->last_mm_cid = -1;
-+			return -1;
-+		}
-+	}
-+
-+	/*
-+	 * The src_cid is unused, so it can be unset.
-+	 */
-+	if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
-+		return -1;
-+	return src_cid;
-+}
-+
-+/*
-+ * Migration to dst cpu. Called with dst_rq lock held.
-+ * Interrupts are disabled, which keeps the window of cid ownership without the
-+ * source rq lock held small.
-+ */
-+void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu)
-+{
-+	struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
-+	struct mm_struct *mm = t->mm;
-+	int src_cid, dst_cid;
-+	struct rq *src_rq;
-+
-+	lockdep_assert_rq_held(dst_rq);
-+
-+	if (!mm)
-+		return;
-+	if (src_cpu == -1) {
-+		t->last_mm_cid = -1;
-+		return;
-+	}
-+	/*
-+	 * Move the src cid if the dst cid is unset. This keeps id
-+	 * allocation closest to 0 in cases where few threads migrate around
-+	 * many cpus.
-+	 *
-+	 * If the destination cid is already set, we may have to just clear
-+	 * the src cid to ensure compactness in frequent-migration
-+	 * scenarios.
-+	 *
-+	 * It is not useful to clear the src cid when the number of threads is
-+	 * greater or equal to the number of allowed cpus, because user-space
-+	 * can expect that the number of allowed cids can reach the number of
-+	 * allowed cpus.
-+	 */
-+	dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
-+	dst_cid = READ_ONCE(dst_pcpu_cid->cid);
-+	if (!mm_cid_is_unset(dst_cid) &&
-+	    atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
-+		return;
-+	src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
-+	src_rq = cpu_rq(src_cpu);
-+	src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
-+	if (src_cid == -1)
-+		return;
-+	src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
-+							    src_cid);
-+	if (src_cid == -1)
-+		return;
-+	if (!mm_cid_is_unset(dst_cid)) {
-+		__mm_cid_put(mm, src_cid);
-+		return;
-+	}
-+	/* Move src_cid to dst cpu. */
-+	mm_cid_snapshot_time(dst_rq, mm);
-+	WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
-+}
-+
-+static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
-+				      int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	struct task_struct *t;
-+	int cid, lazy_cid;
-+
-+	cid = READ_ONCE(pcpu_cid->cid);
-+	if (!mm_cid_is_valid(cid))
-+		return;
-+
-+	/*
-+	 * Clear the cpu cid if it is set to keep cid allocation compact.  If
-+	 * there happens to be other tasks left on the source cpu using this
-+	 * mm, the next task using this mm will reallocate its cid on context
-+	 * switch.
-+	 */
-+	lazy_cid = mm_cid_set_lazy_put(cid);
-+	if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
-+		return;
-+
-+	/*
-+	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+	 * rq->curr->mm matches the scheduler barrier in context_switch()
-+	 * between store to rq->curr and load of prev and next task's
-+	 * per-mm/cpu cid.
-+	 *
-+	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+	 * rq->curr->mm_cid_active matches the barrier in
-+	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
-+	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
-+	 * load of per-mm/cpu cid.
-+	 */
-+
-+	/*
-+	 * If we observe an active task using the mm on this rq after setting
-+	 * the lazy-put flag, that task will be responsible for transitioning
-+	 * from lazy-put flag set to MM_CID_UNSET.
-+	 */
-+	scoped_guard (rcu) {
-+		t = rcu_dereference(rq->curr);
-+		if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
-+			return;
-+	}
-+
-+	/*
-+	 * The cid is unused, so it can be unset.
-+	 * Disable interrupts to keep the window of cid ownership without rq
-+	 * lock small.
-+	 */
-+	scoped_guard (irqsave) {
-+		if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
-+			__mm_cid_put(mm, cid);
-+	}
-+}
-+
-+static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	struct mm_cid *pcpu_cid;
-+	struct task_struct *curr;
-+	u64 rq_clock;
-+
-+	/*
-+	 * rq->clock load is racy on 32-bit but one spurious clear once in a
-+	 * while is irrelevant.
-+	 */
-+	rq_clock = READ_ONCE(rq->clock);
-+	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
-+
-+	/*
-+	 * In order to take care of infrequently scheduled tasks, bump the time
-+	 * snapshot associated with this cid if an active task using the mm is
-+	 * observed on this rq.
-+	 */
-+	scoped_guard (rcu) {
-+		curr = rcu_dereference(rq->curr);
-+		if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
-+			WRITE_ONCE(pcpu_cid->time, rq_clock);
-+			return;
-+		}
-+	}
-+
-+	if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
-+		return;
-+	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
-+}
-+
-+static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
-+					     int weight)
-+{
-+	struct mm_cid *pcpu_cid;
-+	int cid;
-+
-+	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
-+	cid = READ_ONCE(pcpu_cid->cid);
-+	if (!mm_cid_is_valid(cid) || cid < weight)
-+		return;
-+	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
-+}
-+
-+static void task_mm_cid_work(struct callback_head *work)
-+{
-+	unsigned long now = jiffies, old_scan, next_scan;
-+	struct task_struct *t = current;
-+	struct cpumask *cidmask;
-+	struct mm_struct *mm;
-+	int weight, cpu;
-+
-+	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
-+
-+	work->next = work;	/* Prevent double-add */
-+	if (t->flags & PF_EXITING)
-+		return;
-+	mm = t->mm;
-+	if (!mm)
-+		return;
-+	old_scan = READ_ONCE(mm->mm_cid_next_scan);
-+	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-+	if (!old_scan) {
-+		unsigned long res;
-+
-+		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
-+		if (res != old_scan)
-+			old_scan = res;
-+		else
-+			old_scan = next_scan;
-+	}
-+	if (time_before(now, old_scan))
-+		return;
-+	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-+		return;
-+	cidmask = mm_cidmask(mm);
-+	/* Clear cids that were not recently used. */
-+	for_each_possible_cpu(cpu)
-+		sched_mm_cid_remote_clear_old(mm, cpu);
-+	weight = cpumask_weight(cidmask);
-+	/*
-+	 * Clear cids that are greater or equal to the cidmask weight to
-+	 * recompact it.
-+	 */
-+	for_each_possible_cpu(cpu)
-+		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
-+}
-+
-+void init_sched_mm_cid(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	int mm_users = 0;
-+
-+	if (mm) {
-+		mm_users = atomic_read(&mm->mm_users);
-+		if (mm_users == 1)
-+			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-+	}
-+	t->cid_work.next = &t->cid_work;	/* Protect against double add */
-+	init_task_work(&t->cid_work, task_mm_cid_work);
-+}
-+
-+void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
-+{
-+	struct callback_head *work = &curr->cid_work;
-+	unsigned long now = jiffies;
-+
-+	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
-+	    work->next != work)
-+		return;
-+	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
-+		return;
-+	task_work_add(curr, work, TWA_RESUME);
-+}
-+
-+void sched_mm_cid_exit_signals(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct rq *rq;
-+
-+	if (!mm)
-+		return;
-+
-+	preempt_disable();
-+	rq = this_rq();
-+	guard(rq_lock_irqsave)(rq);
-+	preempt_enable_no_resched();	/* holding spinlock */
-+	WRITE_ONCE(t->mm_cid_active, 0);
-+	/*
-+	 * Store t->mm_cid_active before loading per-mm/cpu cid.
-+	 * Matches barrier in sched_mm_cid_remote_clear_old().
-+	 */
-+	smp_mb();
-+	mm_cid_put(mm);
-+	t->last_mm_cid = t->mm_cid = -1;
-+}
-+
-+void sched_mm_cid_before_execve(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct rq *rq;
-+
-+	if (!mm)
-+		return;
-+
-+	preempt_disable();
-+	rq = this_rq();
-+	guard(rq_lock_irqsave)(rq);
-+	preempt_enable_no_resched();	/* holding spinlock */
-+	WRITE_ONCE(t->mm_cid_active, 0);
-+	/*
-+	 * Store t->mm_cid_active before loading per-mm/cpu cid.
-+	 * Matches barrier in sched_mm_cid_remote_clear_old().
-+	 */
-+	smp_mb();
-+	mm_cid_put(mm);
-+	t->last_mm_cid = t->mm_cid = -1;
-+}
-+
-+void sched_mm_cid_after_execve(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct rq *rq;
-+
-+	if (!mm)
-+		return;
-+
-+	preempt_disable();
-+	rq = this_rq();
-+	scoped_guard (rq_lock_irqsave, rq) {
-+		preempt_enable_no_resched();	/* holding spinlock */
-+		WRITE_ONCE(t->mm_cid_active, 1);
-+		/*
-+		 * Store t->mm_cid_active before loading per-mm/cpu cid.
-+		 * Matches barrier in sched_mm_cid_remote_clear_old().
-+		 */
-+		smp_mb();
-+		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
-+	}
-+	rseq_set_notify_resume(t);
-+}
-+
-+void sched_mm_cid_fork(struct task_struct *t)
-+{
-+	WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
-+	t->mm_cid_active = 1;
-+}
-+#endif
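
A minimal user-space sketch of the lazy-put transition implemented above (not
from the patch; the CID_UNSET/CID_LAZY_PUT values and the helper names are
invented for illustration): a per-cpu cid only leaves a valid state through the
two-step compare-exchange sequence valid -> lazy-put -> unset, and only the CPU
that won the first step may perform the second.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define CID_UNSET    (-1)
#define CID_LAZY_PUT (1 << 30)	/* illustrative flag bit, not the kernel's value */

/* Retire a cid the way sched_mm_cid_remote_clear() above does: mark it
 * lazy-put first, then (if still unused) move it to UNSET. */
static bool lazy_clear_cid(_Atomic int *pcpu_cid)
{
	int cid = atomic_load(pcpu_cid);

	if (cid < 0)		/* already unset */
		return false;
	/* step 1: valid -> lazy-put; fails if another CPU raced us */
	if (!atomic_compare_exchange_strong(pcpu_cid, &cid, cid | CID_LAZY_PUT))
		return false;
	/* ... the kernel checks here for an active user of the mm ... */
	/* step 2: lazy-put -> unset; only the lazy-put owner does this */
	int lazy = cid | CID_LAZY_PUT;
	return atomic_compare_exchange_strong(pcpu_cid, &lazy, CID_UNSET);
}

int main(void)
{
	_Atomic int cid = 3;
	bool ok = lazy_clear_cid(&cid);

	printf("cleared=%d cid=%d\n", ok, atomic_load(&cid));
	return 0;
}
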
-diff --new-file -rup linux-6.10/kernel/sched/alt_debug.c linux-prjc-linux-6.10.y-prjc/kernel/sched/alt_debug.c
---- linux-6.10/kernel/sched/alt_debug.c	1970-01-01 01:00:00.000000000 +0100
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/alt_debug.c	2024-07-15 16:47:37.000000000 +0200
-@@ -0,0 +1,32 @@
-+/*
-+ * kernel/sched/alt_debug.c
-+ *
-+ * Print the alt scheduler debugging details
-+ *
-+ * Author: Alfred Chen
-+ * Date  : 2020
-+ */
-+#include "sched.h"
-+#include "linux/sched/debug.h"
-+
-+/*
-+ * This allows printing both to /proc/sched_debug and
-+ * to the console
-+ */
-+#define SEQ_printf(m, x...)			\
-+ do {						\
-+	if (m)					\
-+		seq_printf(m, x);		\
-+	else					\
-+		pr_cont(x);			\
-+ } while (0)
-+
-+void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
-+			  struct seq_file *m)
-+{
-+	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
-+						get_nr_threads(p));
-+}
-+
-+void proc_sched_set_task(struct task_struct *p)
-+{}
-diff --new-file -rup linux-6.10/kernel/sched/alt_sched.h linux-prjc-linux-6.10.y-prjc/kernel/sched/alt_sched.h
---- linux-6.10/kernel/sched/alt_sched.h	1970-01-01 01:00:00.000000000 +0100
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/alt_sched.h	2024-07-15 16:47:37.000000000 +0200
-@@ -0,0 +1,976 @@
-+#ifndef ALT_SCHED_H
-+#define ALT_SCHED_H
-+
-+#include <linux/context_tracking.h>
-+#include <linux/profile.h>
-+#include <linux/stop_machine.h>
-+#include <linux/syscalls.h>
-+#include <linux/tick.h>
-+
-+#include <trace/events/power.h>
-+#include <trace/events/sched.h>
-+
-+#include "../workqueue_internal.h"
-+
-+#include "cpupri.h"
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+/* task group related information */
-+struct task_group {
-+	struct cgroup_subsys_state css;
-+
-+	struct rcu_head rcu;
-+	struct list_head list;
-+
-+	struct task_group *parent;
-+	struct list_head siblings;
-+	struct list_head children;
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+	unsigned long		shares;
-+#endif
-+};
-+
-+extern struct task_group *sched_create_group(struct task_group *parent);
-+extern void sched_online_group(struct task_group *tg,
-+			       struct task_group *parent);
-+extern void sched_destroy_group(struct task_group *tg);
-+extern void sched_release_group(struct task_group *tg);
-+#endif /* CONFIG_CGROUP_SCHED */
-+
-+#define MIN_SCHED_NORMAL_PRIO	(32)
-+/*
-+ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
-+ *
-+ * -- BMQ --
-+ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
-+ * -- PDS --
-+ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
-+ */
-+#define SCHED_LEVELS		(64 + 1)
-+
-+#define IDLE_TASK_SCHED_PRIO	(SCHED_LEVELS - 1)
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
-+extern void resched_latency_warn(int cpu, u64 latency);
-+#else
-+# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
-+static inline void resched_latency_warn(int cpu, u64 latency) {}
-+#endif
-+
-+/*
-+ * Increase resolution of nice-level calculations for 64-bit architectures.
-+ * The extra resolution improves shares distribution and load balancing of
-+ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
-+ * hierarchies, especially on larger systems. This is not a user-visible change
-+ * and does not change the user-interface for setting shares/weights.
-+ *
-+ * We increase resolution only if we have enough bits to allow this increased
-+ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
-+ * are pretty high and the returns do not justify the increased costs.
-+ *
-+ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
-+ * increase coverage and consistency always enable it on 64-bit platforms.
-+ */
-+#ifdef CONFIG_64BIT
-+# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load_down(w) \
-+({ \
-+	unsigned long __w = (w); \
-+	if (__w) \
-+		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
-+	__w; \
-+})
-+#else
-+# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load(w)		(w)
-+# define scale_load_down(w)	(w)
-+#endif
-+
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+#define ROOT_TASK_GROUP_LOAD	NICE_0_LOAD
-+
-+/*
-+ * A weight of 0 or 1 can cause arithmetic problems.
-+ * The weight of a cfs_rq is the sum of the weights of the entities
-+ * queued on it, so the weight of an entity should not be too large,
-+ * and neither should the shares value of a task group.
-+ * (The default weight is 1024 - so there's no practical
-+ *  limitation from this.)
-+ */
-+#define MIN_SHARES		(1UL <<  1)
-+#define MAX_SHARES		(1UL << 18)
-+#endif
-+
-+/*
-+ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
-+ */
-+#ifdef CONFIG_SCHED_DEBUG
-+# define const_debug __read_mostly
-+#else
-+# define const_debug const
-+#endif
-+
-+/* task_struct::on_rq states: */
-+#define TASK_ON_RQ_QUEUED	1
-+#define TASK_ON_RQ_MIGRATING	2
-+
-+static inline int task_on_rq_queued(struct task_struct *p)
-+{
-+	return p->on_rq == TASK_ON_RQ_QUEUED;
-+}
-+
-+static inline int task_on_rq_migrating(struct task_struct *p)
-+{
-+	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
-+}
-+
-+/* Wake flags. The first three directly map to some SD flag value */
-+#define WF_EXEC         0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
-+#define WF_FORK         0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
-+#define WF_TTWU         0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */
-+
-+#define WF_SYNC         0x10 /* Waker goes to sleep after wakeup */
-+#define WF_MIGRATED     0x20 /* Internal use, task got migrated */
-+#define WF_CURRENT_CPU  0x40 /* Prefer to move the wakee to the current CPU. */
-+
-+#ifdef CONFIG_SMP
-+static_assert(WF_EXEC == SD_BALANCE_EXEC);
-+static_assert(WF_FORK == SD_BALANCE_FORK);
-+static_assert(WF_TTWU == SD_BALANCE_WAKE);
-+#endif
-+
-+#define SCHED_QUEUE_BITS	(SCHED_LEVELS - 1)
-+
-+struct sched_queue {
-+	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
-+	struct list_head heads[SCHED_LEVELS];
-+};
-+
-+struct rq;
-+struct cpuidle_state;
-+
-+struct balance_callback {
-+	struct balance_callback *next;
-+	void (*func)(struct rq *rq);
-+};
-+
-+struct balance_arg {
-+	struct task_struct	*task;
-+	int			active;
-+	cpumask_t		*cpumask;
-+};
-+
-+/*
-+ * This is the main, per-CPU runqueue data structure.
-+ * This data should only be modified by the local cpu.
-+ */
-+struct rq {
-+	/* runqueue lock: */
-+	raw_spinlock_t			lock;
-+
-+	struct task_struct __rcu	*curr;
-+	struct task_struct		*idle;
-+	struct task_struct		*stop;
-+	struct mm_struct		*prev_mm;
-+
-+	struct sched_queue		queue		____cacheline_aligned;
-+
-+	int				prio;
-+#ifdef CONFIG_SCHED_PDS
-+	int				prio_idx;
-+	u64				time_edge;
-+#endif
-+
-+	/* switch count */
-+	u64 nr_switches;
-+
-+	atomic_t nr_iowait;
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+	u64 last_seen_need_resched_ns;
-+	int ticks_without_resched;
-+#endif
-+
-+#ifdef CONFIG_MEMBARRIER
-+	int membarrier_state;
-+#endif
-+
-+#ifdef CONFIG_SMP
-+	int cpu;		/* cpu of this runqueue */
-+	bool online;
-+
-+	unsigned int		ttwu_pending;
-+	unsigned char		nohz_idle_balance;
-+	unsigned char		idle_balance;
-+
-+#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
-+	struct sched_avg	avg_irq;
-+#endif
-+
-+#ifdef CONFIG_SCHED_SMT
-+	struct balance_arg	sg_balance_arg		____cacheline_aligned;
-+#endif
-+	struct cpu_stop_work	active_balance_work;
-+
-+	struct balance_callback	*balance_callback;
-+#ifdef CONFIG_HOTPLUG_CPU
-+	struct rcuwait		hotplug_wait;
-+#endif
-+	unsigned int		nr_pinned;
-+
-+#endif /* CONFIG_SMP */
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+	u64 prev_irq_time;
-+#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
-+#ifdef CONFIG_PARAVIRT
-+	u64 prev_steal_time;
-+#endif /* CONFIG_PARAVIRT */
-+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
-+	u64 prev_steal_time_rq;
-+#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
-+
-+	/* For general cpu load util */
-+	s32 load_history;
-+	u64 load_block;
-+	u64 load_stamp;
-+
-+	/* calc_load related fields */
-+	unsigned long calc_load_update;
-+	long calc_load_active;
-+
-+	/* Ensure that all clocks are in the same cache line */
-+	u64			clock ____cacheline_aligned;
-+	u64			clock_task;
-+
-+	unsigned int  nr_running;
-+	unsigned long nr_uninterruptible;
-+
-+#ifdef CONFIG_SCHED_HRTICK
-+#ifdef CONFIG_SMP
-+	call_single_data_t hrtick_csd;
-+#endif
-+	struct hrtimer		hrtick_timer;
-+	ktime_t			hrtick_time;
-+#endif
-+
-+#ifdef CONFIG_SCHEDSTATS
-+
-+	/* latency stats */
-+	struct sched_info rq_sched_info;
-+	unsigned long long rq_cpu_time;
-+	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
-+
-+	/* sys_sched_yield() stats */
-+	unsigned int yld_count;
-+
-+	/* schedule() stats */
-+	unsigned int sched_switch;
-+	unsigned int sched_count;
-+	unsigned int sched_goidle;
-+
-+	/* try_to_wake_up() stats */
-+	unsigned int ttwu_count;
-+	unsigned int ttwu_local;
-+#endif /* CONFIG_SCHEDSTATS */
-+
-+#ifdef CONFIG_CPU_IDLE
-+	/* Must be inspected within a rcu lock section */
-+	struct cpuidle_state *idle_state;
-+#endif
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+#ifdef CONFIG_SMP
-+	call_single_data_t	nohz_csd;
-+#endif
-+	atomic_t		nohz_flags;
-+#endif /* CONFIG_NO_HZ_COMMON */
-+
-+	/* Scratch cpumask to be temporarily used under rq_lock */
-+	cpumask_var_t		scratch_mask;
-+};
-+
-+extern unsigned int sysctl_sched_base_slice;
-+
-+extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
-+
-+extern unsigned long calc_load_update;
-+extern atomic_long_t calc_load_tasks;
-+
-+extern void calc_global_load_tick(struct rq *this_rq);
-+extern long calc_load_fold_active(struct rq *this_rq, long adjust);
-+
-+DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-+#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
-+#define this_rq()		this_cpu_ptr(&runqueues)
-+#define task_rq(p)		cpu_rq(task_cpu(p))
-+#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
-+#define raw_rq()		raw_cpu_ptr(&runqueues)
-+
-+#ifdef CONFIG_SMP
-+#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
-+void register_sched_domain_sysctl(void);
-+void unregister_sched_domain_sysctl(void);
-+#else
-+static inline void register_sched_domain_sysctl(void)
-+{
-+}
-+static inline void unregister_sched_domain_sysctl(void)
-+{
-+}
-+#endif
-+
-+extern bool sched_smp_initialized;
-+
-+enum {
-+#ifdef CONFIG_SCHED_SMT
-+	SMT_LEVEL_SPACE_HOLDER,
-+#endif
-+	COREGROUP_LEVEL_SPACE_HOLDER,
-+	CORE_LEVEL_SPACE_HOLDER,
-+	OTHER_LEVEL_SPACE_HOLDER,
-+	NR_CPU_AFFINITY_LEVELS
-+};
-+
-+DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
-+
-+static inline int
-+__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
-+{
-+	int cpu;
-+
-+	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
-+		mask++;
-+
-+	return cpu;
-+}
-+
-+static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
-+{
-+	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
-+}
-+
-+#endif
-+
-+#ifndef arch_scale_freq_tick
-+static __always_inline
-+void arch_scale_freq_tick(void)
-+{
-+}
-+#endif
-+
-+#ifndef arch_scale_freq_capacity
-+static __always_inline
-+unsigned long arch_scale_freq_capacity(int cpu)
-+{
-+	return SCHED_CAPACITY_SCALE;
-+}
-+#endif
-+
-+static inline u64 __rq_clock_broken(struct rq *rq)
-+{
-+	return READ_ONCE(rq->clock);
-+}
-+
-+static inline u64 rq_clock(struct rq *rq)
-+{
-+	/*
-+	 * Relax lockdep_assert_held() checking as in VRQ; calls to
-+	 * sched_info_xxxx() may not hold rq->lock:
-+	 * lockdep_assert_held(&rq->lock);
-+	 */
-+	return rq->clock;
-+}
-+
-+static inline u64 rq_clock_task(struct rq *rq)
-+{
-+	/*
-+	 * Relax lockdep_assert_held() checking as in VRQ; calls to
-+	 * sched_info_xxxx() may not hold rq->lock:
-+	 * lockdep_assert_held(&rq->lock);
-+	 */
-+	return rq->clock_task;
-+}
-+
-+/*
-+ * {de,en}queue flags:
-+ *
-+ * DEQUEUE_SLEEP  - task is no longer runnable
-+ * ENQUEUE_WAKEUP - task just became runnable
-+ *
-+ */
-+
-+#define DEQUEUE_SLEEP		0x01
-+
-+#define ENQUEUE_WAKEUP		0x01
-+
-+
-+/*
-+ * Below are the scheduler APIs used by other kernel code.
-+ * They use the dummy rq_flags.
-+ * TODO: BMQ needs to support these APIs for compatibility with the mainline
-+ * scheduler code.
-+ */
-+struct rq_flags {
-+	unsigned long flags;
-+};
-+
-+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(rq->lock);
-+
-+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(p->pi_lock)
-+	__acquires(rq->lock);
-+
-+static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+static inline void
-+task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
-+	__releases(rq->lock)
-+	__releases(p->pi_lock)
-+{
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-+}
-+
-+static inline void
-+rq_lock(struct rq *rq, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	raw_spin_lock(&rq->lock);
-+}
-+
-+static inline void
-+rq_unlock(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+static inline void
-+rq_lock_irq(struct rq *rq, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	raw_spin_lock_irq(&rq->lock);
-+}
-+
-+static inline void
-+rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock_irq(&rq->lock);
-+}
-+
-+static inline struct rq *
-+this_rq_lock_irq(struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	struct rq *rq;
-+
-+	local_irq_disable();
-+	rq = this_rq();
-+	raw_spin_lock(&rq->lock);
-+
-+	return rq;
-+}
-+
-+static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
-+{
-+	return &rq->lock;
-+}
-+
-+static inline raw_spinlock_t *rq_lockp(struct rq *rq)
-+{
-+	return __rq_lockp(rq);
-+}
-+
-+static inline void lockdep_assert_rq_held(struct rq *rq)
-+{
-+	lockdep_assert_held(__rq_lockp(rq));
-+}
-+
-+extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
-+extern void raw_spin_rq_unlock(struct rq *rq);
-+
-+static inline void raw_spin_rq_lock(struct rq *rq)
-+{
-+	raw_spin_rq_lock_nested(rq, 0);
-+}
-+
-+static inline void raw_spin_rq_lock_irq(struct rq *rq)
-+{
-+	local_irq_disable();
-+	raw_spin_rq_lock(rq);
-+}
-+
-+static inline void raw_spin_rq_unlock_irq(struct rq *rq)
-+{
-+	raw_spin_rq_unlock(rq);
-+	local_irq_enable();
-+}
-+
-+static inline int task_current(struct rq *rq, struct task_struct *p)
-+{
-+	return rq->curr == p;
-+}
-+
-+static inline bool task_on_cpu(struct task_struct *p)
-+{
-+	return p->on_cpu;
-+}
-+
-+extern int task_running_nice(struct task_struct *p);
-+
-+extern struct static_key_false sched_schedstats;
-+
-+#ifdef CONFIG_CPU_IDLE
-+static inline void idle_set_state(struct rq *rq,
-+				  struct cpuidle_state *idle_state)
-+{
-+	rq->idle_state = idle_state;
-+}
-+
-+static inline struct cpuidle_state *idle_get_state(struct rq *rq)
-+{
-+	WARN_ON(!rcu_read_lock_held());
-+	return rq->idle_state;
-+}
-+#else
-+static inline void idle_set_state(struct rq *rq,
-+				  struct cpuidle_state *idle_state)
-+{
-+}
-+
-+static inline struct cpuidle_state *idle_get_state(struct rq *rq)
-+{
-+	return NULL;
-+}
-+#endif
-+
-+static inline int cpu_of(const struct rq *rq)
-+{
-+#ifdef CONFIG_SMP
-+	return rq->cpu;
-+#else
-+	return 0;
-+#endif
-+}
-+
-+extern void resched_cpu(int cpu);
-+
-+#include "stats.h"
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+#define NOHZ_BALANCE_KICK_BIT	0
-+#define NOHZ_STATS_KICK_BIT	1
-+
-+#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
-+#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
-+
-+#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
-+
-+#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
-+
-+/* TODO: needed?
-+extern void nohz_balance_exit_idle(struct rq *rq);
-+#else
-+static inline void nohz_balance_exit_idle(struct rq *rq) { }
-+*/
-+#endif
-+
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+struct irqtime {
-+	u64			total;
-+	u64			tick_delta;
-+	u64			irq_start_time;
-+	struct u64_stats_sync	sync;
-+};
-+
-+DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
-+
-+/*
-+ * Returns the irqtime minus the softirq time computed by ksoftirqd.
-+ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
-+ * subtracted and would never move forward.
-+ */
-+static inline u64 irq_time_read(int cpu)
-+{
-+	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
-+	unsigned int seq;
-+	u64 total;
-+
-+	do {
-+		seq = __u64_stats_fetch_begin(&irqtime->sync);
-+		total = irqtime->total;
-+	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
-+
-+	return total;
-+}
-+#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
-+
-+#ifdef CONFIG_CPU_FREQ
-+DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
-+#endif /* CONFIG_CPU_FREQ */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+extern int __init sched_tick_offload_init(void);
-+#else
-+static inline int sched_tick_offload_init(void) { return 0; }
-+#endif
-+
-+#ifdef arch_scale_freq_capacity
-+#ifndef arch_scale_freq_invariant
-+#define arch_scale_freq_invariant()	(true)
-+#endif
-+#else /* arch_scale_freq_capacity */
-+#define arch_scale_freq_invariant()	(false)
-+#endif
-+
-+#ifdef CONFIG_SMP
-+unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
-+				 unsigned long min,
-+				 unsigned long max);
-+#endif /* CONFIG_SMP */
-+
-+extern void schedule_idle(void);
-+
-+#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
-+
-+/*
-+ * !! For sched_setattr_nocheck() (kernel) only !!
-+ *
-+ * This is actually gross. :(
-+ *
-+ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
-+ * tasks, but still be able to sleep. We need this on platforms that cannot
-+ * atomically change clock frequency. Remove once fast switching will be
-+ * available on such platforms.
-+ *
-+ * SUGOV stands for SchedUtil GOVernor.
-+ */
-+#define SCHED_FLAG_SUGOV	0x10000000
-+
-+#ifdef CONFIG_MEMBARRIER
-+/*
-+ * The scheduler provides memory barriers required by membarrier between:
-+ * - prior user-space memory accesses and store to rq->membarrier_state,
-+ * - store to rq->membarrier_state and following user-space memory accesses.
-+ * In the same way it provides those guarantees around store to rq->curr.
-+ */
-+static inline void membarrier_switch_mm(struct rq *rq,
-+					struct mm_struct *prev_mm,
-+					struct mm_struct *next_mm)
-+{
-+	int membarrier_state;
-+
-+	if (prev_mm == next_mm)
-+		return;
-+
-+	membarrier_state = atomic_read(&next_mm->membarrier_state);
-+	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
-+		return;
-+
-+	WRITE_ONCE(rq->membarrier_state, membarrier_state);
-+}
-+#else
-+static inline void membarrier_switch_mm(struct rq *rq,
-+					struct mm_struct *prev_mm,
-+					struct mm_struct *next_mm)
-+{
-+}
-+#endif
-+
-+#ifdef CONFIG_NUMA
-+extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
-+#else
-+static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
-+{
-+	return nr_cpu_ids;
-+}
-+#endif
-+
-+extern void swake_up_all_locked(struct swait_queue_head *q);
-+extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
-+
-+extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+extern int preempt_dynamic_mode;
-+extern int sched_dynamic_mode(const char *str);
-+extern void sched_dynamic_update(int mode);
-+#endif
-+
-+static inline void nohz_run_idle_balance(int cpu) { }
-+
-+static inline
-+unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
-+				  struct task_struct *p)
-+{
-+	return util;
-+}
-+
-+static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
-+
-+#ifdef CONFIG_SCHED_MM_CID
-+
-+#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
-+#define MM_CID_SCAN_DELAY	100			/* 100ms */
-+
-+extern raw_spinlock_t cid_lock;
-+extern int use_cid_lock;
-+
-+extern void sched_mm_cid_migrate_from(struct task_struct *t);
-+extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu);
-+extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
-+extern void init_sched_mm_cid(struct task_struct *t);
-+
-+static inline void __mm_cid_put(struct mm_struct *mm, int cid)
-+{
-+	if (cid < 0)
-+		return;
-+	cpumask_clear_cpu(cid, mm_cidmask(mm));
-+}
-+
-+/*
-+ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
-+ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
-+ * be held to transition to other states.
-+ *
-+ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
-+ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
-+ */
-+static inline void mm_cid_put_lazy(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-+	int cid;
-+
-+	lockdep_assert_irqs_disabled();
-+	cid = __this_cpu_read(pcpu_cid->cid);
-+	if (!mm_cid_is_lazy_put(cid) ||
-+	    !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
-+		return;
-+	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
-+}
-+
-+static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
-+{
-+	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-+	int cid, res;
-+
-+	lockdep_assert_irqs_disabled();
-+	cid = __this_cpu_read(pcpu_cid->cid);
-+	for (;;) {
-+		if (mm_cid_is_unset(cid))
-+			return MM_CID_UNSET;
-+		/*
-+		 * Attempt transition from valid or lazy-put to unset.
-+		 */
-+		res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
-+		if (res == cid)
-+			break;
-+		cid = res;
-+	}
-+	return cid;
-+}
-+
-+static inline void mm_cid_put(struct mm_struct *mm)
-+{
-+	int cid;
-+
-+	lockdep_assert_irqs_disabled();
-+	cid = mm_cid_pcpu_unset(mm);
-+	if (cid == MM_CID_UNSET)
-+		return;
-+	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
-+}
-+
-+static inline int __mm_cid_try_get(struct mm_struct *mm)
-+{
-+	struct cpumask *cpumask;
-+	int cid;
-+
-+	cpumask = mm_cidmask(mm);
-+	/*
-+	 * Retry finding first zero bit if the mask is temporarily
-+	 * filled. This only happens during concurrent remote-clear
-+	 * which owns a cid without holding a rq lock.
-+	 */
-+	for (;;) {
-+		cid = cpumask_first_zero(cpumask);
-+		if (cid < nr_cpu_ids)
-+			break;
-+		cpu_relax();
-+	}
-+	if (cpumask_test_and_set_cpu(cid, cpumask))
-+		return -1;
-+	return cid;
-+}
-+
-+/*
-+ * Save a snapshot of the current runqueue time of this cpu
-+ * with the per-cpu cid value, which allows estimating how recently it was used.
-+ */
-+static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
-+{
-+	struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
-+
-+	lockdep_assert_rq_held(rq);
-+	WRITE_ONCE(pcpu_cid->time, rq->clock);
-+}
-+
-+static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
-+{
-+	int cid;
-+
-+	/*
-+	 * All allocations (even those using the cid_lock) are lock-free. If
-+	 * use_cid_lock is set, hold the cid_lock to perform cid allocation to
-+	 * guarantee forward progress.
-+	 */
-+	if (!READ_ONCE(use_cid_lock)) {
-+		cid = __mm_cid_try_get(mm);
-+		if (cid >= 0)
-+			goto end;
-+		raw_spin_lock(&cid_lock);
-+	} else {
-+		raw_spin_lock(&cid_lock);
-+		cid = __mm_cid_try_get(mm);
-+		if (cid >= 0)
-+			goto unlock;
-+	}
-+
-+	/*
-+	 * cid concurrently allocated. Retry while forcing following
-+	 * allocations to use the cid_lock to ensure forward progress.
-+	 */
-+	WRITE_ONCE(use_cid_lock, 1);
-+	/*
-+	 * Set use_cid_lock before allocation. Only care about program order
-+	 * because this is only required for forward progress.
-+	 */
-+	barrier();
-+	/*
-+	 * Retry until it succeeds. It is guaranteed to eventually succeed once
-+	 * all incoming allocations observe the use_cid_lock flag set.
-+	 */
-+	do {
-+		cid = __mm_cid_try_get(mm);
-+		cpu_relax();
-+	} while (cid < 0);
-+	/*
-+	 * Allocate before clearing use_cid_lock. Only care about
-+	 * program order because this is for forward progress.
-+	 */
-+	barrier();
-+	WRITE_ONCE(use_cid_lock, 0);
-+unlock:
-+	raw_spin_unlock(&cid_lock);
-+end:
-+	mm_cid_snapshot_time(rq, mm);
-+	return cid;
-+}
-+
-+static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
-+{
-+	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-+	struct cpumask *cpumask;
-+	int cid;
-+
-+	lockdep_assert_rq_held(rq);
-+	cpumask = mm_cidmask(mm);
-+	cid = __this_cpu_read(pcpu_cid->cid);
-+	if (mm_cid_is_valid(cid)) {
-+		mm_cid_snapshot_time(rq, mm);
-+		return cid;
-+	}
-+	if (mm_cid_is_lazy_put(cid)) {
-+		if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
-+			__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
-+	}
-+	cid = __mm_cid_get(rq, mm);
-+	__this_cpu_write(pcpu_cid->cid, cid);
-+	return cid;
-+}
-+
-+static inline void switch_mm_cid(struct rq *rq,
-+				 struct task_struct *prev,
-+				 struct task_struct *next)
-+{
-+	/*
-+	 * Provide a memory barrier between rq->curr store and load of
-+	 * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
-+	 *
-+	 * Should be adapted if context_switch() is modified.
-+	 */
-+	if (!next->mm) {                                // to kernel
-+		/*
-+		 * user -> kernel transition does not guarantee a barrier, but
-+		 * we can use the fact that it performs an atomic operation in
-+		 * mmgrab().
-+		 */
-+		if (prev->mm)                           // from user
-+			smp_mb__after_mmgrab();
-+		/*
-+		 * kernel -> kernel transition does not change rq->curr->mm
-+		 * state. It stays NULL.
-+		 */
-+	} else {                                        // to user
-+		/*
-+		 * kernel -> user transition does not provide a barrier
-+		 * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
-+		 * Provide it here.
-+		 */
-+		if (!prev->mm)                          // from kernel
-+			smp_mb();
-+		/*
-+		 * user -> user transition guarantees a memory barrier through
-+		 * switch_mm() when current->mm changes. If current->mm is
-+		 * unchanged, no barrier is needed.
-+		 */
-+	}
-+	if (prev->mm_cid_active) {
-+		mm_cid_snapshot_time(rq, prev->mm);
-+		mm_cid_put_lazy(prev);
-+		prev->mm_cid = -1;
-+	}
-+	if (next->mm_cid_active)
-+		next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
-+}
-+
-+#else
-+static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
-+static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
-+static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu) { }
-+static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
-+static inline void init_sched_mm_cid(struct task_struct *t) { }
-+#endif
-+
-+#ifdef CONFIG_SMP
-+extern struct balance_callback balance_push_callback;
-+
-+static inline void
-+__queue_balance_callback(struct rq *rq,
-+			 struct balance_callback *head)
-+{
-+	lockdep_assert_rq_held(rq);
-+
-+	/*
-+	 * Don't (re)queue an already queued item; nor queue anything when
-+	 * balance_push() is active, see the comment with
-+	 * balance_push_callback.
-+	 */
-+	if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
-+		return;
-+
-+	head->next = rq->balance_callback;
-+	rq->balance_callback = head;
-+}
-+#endif /* CONFIG_SMP */
-+
-+#endif /* ALT_SCHED_H */
-diff --new-file -rup linux-6.10/kernel/sched/bmq.h linux-prjc-linux-6.10.y-prjc/kernel/sched/bmq.h
---- linux-6.10/kernel/sched/bmq.h	1970-01-01 01:00:00.000000000 +0100
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/bmq.h	2024-07-15 16:47:37.000000000 +0200
-@@ -0,0 +1,98 @@
-+#define ALT_SCHED_NAME "BMQ"
-+
-+/*
-+ * BMQ only routines
-+ */
-+static inline void boost_task(struct task_struct *p, int n)
-+{
-+	int limit;
-+
-+	switch (p->policy) {
-+	case SCHED_NORMAL:
-+		limit = -MAX_PRIORITY_ADJ;
-+		break;
-+	case SCHED_BATCH:
-+		limit = 0;
-+		break;
-+	default:
-+		return;
-+	}
-+
-+	p->boost_prio = max(limit, p->boost_prio - n);
-+}
-+
-+static inline void deboost_task(struct task_struct *p)
-+{
-+	if (p->boost_prio < MAX_PRIORITY_ADJ)
-+		p->boost_prio++;
-+}
-+
-+/*
-+ * Common interfaces
-+ */
-+static inline void sched_timeslice_imp(const int timeslice_ms) {}
-+
-+/* This API is used in task_prio(); its return value is read by human users */
-+static inline int
-+task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
-+{
-+	return p->prio + p->boost_prio - MIN_NORMAL_PRIO;
-+}
-+
-+static inline int task_sched_prio(const struct task_struct *p)
-+{
-+	return (p->prio < MIN_NORMAL_PRIO)? (p->prio >> 2) :
-+		MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MIN_NORMAL_PRIO) / 2;
-+}
-+
-+#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)	\
-+	prio = task_sched_prio(p);		\
-+	idx = prio;
-+
-+static inline int sched_prio2idx(int prio, struct rq *rq)
-+{
-+	return prio;
-+}
-+
-+static inline int sched_idx2prio(int idx, struct rq *rq)
-+{
-+	return idx;
-+}
-+
-+static inline int sched_rq_prio_idx(struct rq *rq)
-+{
-+	return rq->prio;
-+}
-+
-+inline int task_running_nice(struct task_struct *p)
-+{
-+	return (p->prio + p->boost_prio > DEFAULT_PRIO);
-+}
-+
-+static inline void sched_update_rq_clock(struct rq *rq) {}
-+
-+static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
-+{
-+	deboost_task(p);
-+}
-+
-+static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
-+static void sched_task_fork(struct task_struct *p, struct rq *rq) {}
-+
-+static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
-+{
-+	p->boost_prio = MAX_PRIORITY_ADJ;
-+}
-+
-+static inline void sched_task_ttwu(struct task_struct *p)
-+{
-+	s64 delta = this_rq()->clock_task - p->last_ran;
-+
-+	if (likely(delta > 0))
-+		boost_task(p, delta >> 22);
-+}
-+
-+static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
-+{
-+	boost_task(p, 1);
-+}
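
The bmq.h helpers above only map a task's prio/boost_prio to a queue index; the
actual "BitMap Queue" selection happens through the sched_queue bitmap declared
in alt_sched.h, where picking the next level to run is a find-first-set over the
per-level bits. A toy, self-contained C sketch of that selection idea (the level
count and the per-level bookkeeping are simplified stand-ins, not the patch's
data structures):

#include <stdint.h>
#include <stdio.h>

#define LEVELS 64	/* illustrative; the patch uses SCHED_LEVELS = 64 + 1 */

struct toy_queue {
	uint64_t bitmap;	/* bit n set => level n has runnable tasks */
	int	 nr[LEVELS];	/* stand-in for the per-level list heads   */
};

static void toy_enqueue(struct toy_queue *q, int level)
{
	q->nr[level]++;
	q->bitmap |= 1ULL << level;
}

static int toy_pick(struct toy_queue *q)
{
	if (!q->bitmap)
		return -1;	/* nothing runnable */
	/* lowest set bit == highest priority; GCC/Clang builtin */
	int level = __builtin_ctzll(q->bitmap);
	if (--q->nr[level] == 0)
		q->bitmap &= ~(1ULL << level);
	return level;
}

int main(void)
{
	struct toy_queue q = { 0 };
	int first, second;

	toy_enqueue(&q, 40);
	toy_enqueue(&q, 33);
	first = toy_pick(&q);
	second = toy_pick(&q);
	printf("picked %d then %d\n", first, second);	/* 33 then 40 */
	return 0;
}
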
-diff --new-file -rup linux-6.10/kernel/sched/build_policy.c linux-prjc-linux-6.10.y-prjc/kernel/sched/build_policy.c
---- linux-6.10/kernel/sched/build_policy.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/build_policy.c	2024-07-15 16:47:37.000000000 +0200
-@@ -42,13 +42,19 @@
- 
- #include "idle.c"
- 
-+#ifndef CONFIG_SCHED_ALT
- #include "rt.c"
-+#endif
- 
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- # include "cpudeadline.c"
-+#endif
- # include "pelt.c"
- #endif
- 
- #include "cputime.c"
--#include "deadline.c"
- 
-+#ifndef CONFIG_SCHED_ALT
-+#include "deadline.c"
-+#endif
-diff --new-file -rup linux-6.10/kernel/sched/build_utility.c linux-prjc-linux-6.10.y-prjc/kernel/sched/build_utility.c
---- linux-6.10/kernel/sched/build_utility.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/build_utility.c	2024-07-15 16:47:37.000000000 +0200
-@@ -84,7 +84,9 @@
- 
- #ifdef CONFIG_SMP
- # include "cpupri.c"
-+#ifndef CONFIG_SCHED_ALT
- # include "stop_task.c"
-+#endif
- # include "topology.c"
- #endif
- 
-diff --new-file -rup linux-6.10/kernel/sched/cpufreq_schedutil.c linux-prjc-linux-6.10.y-prjc/kernel/sched/cpufreq_schedutil.c
---- linux-6.10/kernel/sched/cpufreq_schedutil.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/cpufreq_schedutil.c	2024-07-15 16:47:37.000000000 +0200
-@@ -197,12 +197,17 @@ unsigned long sugov_effective_cpu_perf(i
- 
- static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
- {
-+#ifndef CONFIG_SCHED_ALT
- 	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
- 
- 	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
- 	util = max(util, boost);
- 	sg_cpu->bw_min = min;
- 	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
-+#else /* CONFIG_SCHED_ALT */
-+	sg_cpu->bw_min = 0;
-+	sg_cpu->util = rq_load_util(cpu_rq(sg_cpu->cpu), arch_scale_cpu_capacity(sg_cpu->cpu));
-+#endif /* CONFIG_SCHED_ALT */
- }
- 
- /**
-@@ -343,8 +348,10 @@ static inline bool sugov_cpu_is_busy(str
-  */
- static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
- {
-+#ifndef CONFIG_SCHED_ALT
- 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
- 		sg_cpu->sg_policy->limits_changed = true;
-+#endif
- }
- 
- static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
-@@ -676,6 +683,7 @@ static int sugov_kthread_create(struct s
- 	}
- 
- 	ret = sched_setattr_nocheck(thread, &attr);
-+
- 	if (ret) {
- 		kthread_stop(thread);
- 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
-diff --new-file -rup linux-6.10/kernel/sched/cputime.c linux-prjc-linux-6.10.y-prjc/kernel/sched/cputime.c
---- linux-6.10/kernel/sched/cputime.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/cputime.c	2024-07-15 16:47:37.000000000 +0200
-@@ -126,7 +126,7 @@ void account_user_time(struct task_struc
- 	p->utime += cputime;
- 	account_group_user_time(p, cputime);
- 
--	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
-+	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
- 
- 	/* Add user time to cpustat. */
- 	task_group_account_field(p, index, cputime);
-@@ -150,7 +150,7 @@ void account_guest_time(struct task_stru
- 	p->gtime += cputime;
- 
- 	/* Add guest time to cpustat. */
--	if (task_nice(p) > 0) {
-+	if (task_running_nice(p)) {
- 		task_group_account_field(p, CPUTIME_NICE, cputime);
- 		cpustat[CPUTIME_GUEST_NICE] += cputime;
- 	} else {
-@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64
- #ifdef CONFIG_64BIT
- static inline u64 read_sum_exec_runtime(struct task_struct *t)
- {
--	return t->se.sum_exec_runtime;
-+	return tsk_seruntime(t);
- }
- #else
- static u64 read_sum_exec_runtime(struct task_struct *t)
-@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct
- 	struct rq *rq;
- 
- 	rq = task_rq_lock(t, &rf);
--	ns = t->se.sum_exec_runtime;
-+	ns = tsk_seruntime(t);
- 	task_rq_unlock(rq, t, &rf);
- 
- 	return ns;
-@@ -617,7 +617,7 @@ out:
- void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
- {
- 	struct task_cputime cputime = {
--		.sum_exec_runtime = p->se.sum_exec_runtime,
-+		.sum_exec_runtime = tsk_seruntime(p),
- 	};
- 
- 	if (task_cputime(p, &cputime.utime, &cputime.stime))
-diff --new-file -rup linux-6.10/kernel/sched/debug.c linux-prjc-linux-6.10.y-prjc/kernel/sched/debug.c
---- linux-6.10/kernel/sched/debug.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/debug.c	2024-07-15 16:47:37.000000000 +0200
-@@ -7,6 +7,7 @@
-  * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
-  */
- 
-+#ifndef CONFIG_SCHED_ALT
- /*
-  * This allows printing both to /sys/kernel/debug/sched/debug and
-  * to the console
-@@ -215,6 +216,7 @@ static const struct file_operations sche
- };
- 
- #endif /* SMP */
-+#endif /* !CONFIG_SCHED_ALT */
- 
- #ifdef CONFIG_PREEMPT_DYNAMIC
- 
-@@ -278,6 +280,7 @@ static const struct file_operations sche
- 
- #endif /* CONFIG_PREEMPT_DYNAMIC */
- 
-+#ifndef CONFIG_SCHED_ALT
- __read_mostly bool sched_debug_verbose;
- 
- #ifdef CONFIG_SMP
-@@ -332,6 +335,7 @@ static const struct file_operations sche
- 	.llseek		= seq_lseek,
- 	.release	= seq_release,
- };
-+#endif /* !CONFIG_SCHED_ALT */
- 
- static struct dentry *debugfs_sched;
- 
-@@ -341,14 +345,17 @@ static __init int sched_init_debug(void)
- 
- 	debugfs_sched = debugfs_create_dir("sched", NULL);
- 
-+#ifndef CONFIG_SCHED_ALT
- 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
- 	debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
-+#endif /* !CONFIG_SCHED_ALT */
- #ifdef CONFIG_PREEMPT_DYNAMIC
- 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
- #endif
- 
- 	debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
- 
-+#ifndef CONFIG_SCHED_ALT
- 	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
- 	debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
- 
-@@ -373,11 +380,13 @@ static __init int sched_init_debug(void)
- #endif
- 
- 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
-+#endif /* !CONFIG_SCHED_ALT */
- 
- 	return 0;
- }
- late_initcall(sched_init_debug);
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_SMP
- 
- static cpumask_var_t		sd_sysctl_cpus;
-@@ -1111,6 +1120,7 @@ void proc_sched_set_task(struct task_str
- 	memset(&p->stats, 0, sizeof(p->stats));
- #endif
- }
-+#endif /* !CONFIG_SCHED_ALT */
- 
- void resched_latency_warn(int cpu, u64 latency)
- {
-diff --new-file -rup linux-6.10/kernel/sched/idle.c linux-prjc-linux-6.10.y-prjc/kernel/sched/idle.c
---- linux-6.10/kernel/sched/idle.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/idle.c	2024-07-15 16:47:37.000000000 +0200
-@@ -430,6 +430,7 @@ void cpu_startup_entry(enum cpuhp_state
- 		do_idle();
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- /*
-  * idle-task scheduling class.
-  */
-@@ -551,3 +552,4 @@ DEFINE_SCHED_CLASS(idle) = {
- 	.switched_to		= switched_to_idle,
- 	.update_curr		= update_curr_idle,
- };
-+#endif
-diff --new-file -rup linux-6.10/kernel/sched/pds.h linux-prjc-linux-6.10.y-prjc/kernel/sched/pds.h
---- linux-6.10/kernel/sched/pds.h	1970-01-01 01:00:00.000000000 +0100
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/pds.h	2024-07-15 16:47:37.000000000 +0200
-@@ -0,0 +1,134 @@
-+#define ALT_SCHED_NAME "PDS"
-+
-+static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
-+
-+#define SCHED_NORMAL_PRIO_NUM	(32)
-+#define SCHED_EDGE_DELTA	(SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
-+
-+/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
-+#define SCHED_NORMAL_PRIO_MOD(x)	((x) & (SCHED_NORMAL_PRIO_NUM - 1))
-+
-+/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
-+static __read_mostly int sched_timeslice_shift = 23;
-+
-+/*
-+ * Common interfaces
-+ */
-+static inline int
-+task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
-+{
-+	u64 sched_dl = max(p->deadline, rq->time_edge);
-+
-+#ifdef ALT_SCHED_DEBUG
-+	if (WARN_ONCE(sched_dl - rq->time_edge > NORMAL_PRIO_NUM - 1,
-+		      "pds: task_sched_prio_normal() delta %lld\n", sched_dl - rq->time_edge))
-+		return SCHED_NORMAL_PRIO_NUM - 1;
-+#endif
-+
-+	return sched_dl - rq->time_edge;
-+}
-+
-+static inline int task_sched_prio(const struct task_struct *p)
-+{
-+	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
-+		MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
-+}
-+
-+#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)							\
-+	if (p->prio < MIN_NORMAL_PRIO) {							\
-+		prio = p->prio >> 2;								\
-+		idx = prio;									\
-+	} else {										\
-+		u64 sched_dl = max(p->deadline, rq->time_edge);					\
-+		prio = MIN_SCHED_NORMAL_PRIO + sched_dl - rq->time_edge;			\
-+		idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_dl);			\
-+	}
-+
-+static inline int sched_prio2idx(int sched_prio, struct rq *rq)
-+{
-+	return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
-+		sched_prio :
-+		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
-+}
-+
-+static inline int sched_idx2prio(int sched_idx, struct rq *rq)
-+{
-+	return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
-+		sched_idx :
-+		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
-+}
-+
-+static inline int sched_rq_prio_idx(struct rq *rq)
-+{
-+	return rq->prio_idx;
-+}
-+
-+int task_running_nice(struct task_struct *p)
-+{
-+	return (p->prio > DEFAULT_PRIO);
-+}
-+
-+static inline void sched_update_rq_clock(struct rq *rq)
-+{
-+	struct list_head head;
-+	u64 old = rq->time_edge;
-+	u64 now = rq->clock >> sched_timeslice_shift;
-+	u64 prio, delta;
-+	DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
-+
-+	if (now == old)
-+		return;
-+
-+	rq->time_edge = now;
-+	delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
-+	INIT_LIST_HEAD(&head);
-+
-+	prio = MIN_SCHED_NORMAL_PRIO;
-+	for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
-+		list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
-+				      SCHED_NORMAL_PRIO_MOD(prio + old), &head);
-+
-+	bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
-+	if (!list_empty(&head)) {
-+		u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
-+
-+		__list_splice(&head, rq->queue.heads + idx, rq->queue.heads[idx].next);
-+		set_bit(MIN_SCHED_NORMAL_PRIO, normal);
-+	}
-+	bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
-+		       (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
-+
-+	if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
-+		return;
-+
-+	rq->prio = max_t(u64, MIN_SCHED_NORMAL_PRIO, rq->prio - delta);
-+	rq->prio_idx = sched_prio2idx(rq->prio, rq);
-+}
-+
-+static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
-+{
-+	if (p->prio >= MIN_NORMAL_PRIO)
-+		p->deadline = rq->time_edge + SCHED_EDGE_DELTA +
-+			      (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
-+}
-+
-+static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
-+{
-+	u64 max_dl = rq->time_edge + SCHED_EDGE_DELTA + NICE_WIDTH / 2 - 1;
-+	if (unlikely(p->deadline > max_dl))
-+		p->deadline = max_dl;
-+}
-+
-+static void sched_task_fork(struct task_struct *p, struct rq *rq)
-+{
-+	sched_task_renew(p, rq);
-+}
-+
-+static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
-+{
-+	p->time_slice = sysctl_sched_base_slice;
-+	sched_task_renew(p, rq);
-+}
-+
-+static inline void sched_task_ttwu(struct task_struct *p) {}
-+static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
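
Where BMQ uses boost_prio, the pds.h code above derives everything from a
virtual deadline: a task's queue slot is its deadline modulo the 32 normal
levels, and its effective priority is how far that deadline sits ahead of the
per-rq time_edge, so advancing time_edge ages all queued tasks at once. A small
stand-alone sketch of that arithmetic (the slot count follows
SCHED_NORMAL_PRIO_NUM above; the MIN_SCHED_NORMAL_PRIO offset and the task
lists are left out for brevity):

#include <stdint.h>
#include <stdio.h>

#define NORMAL_SLOTS	32	/* SCHED_NORMAL_PRIO_NUM in pds.h */
#define SLOT_MOD(x)	((x) & (NORMAL_SLOTS - 1))

/* Ring slot for a deadline: a fixed position, independent of time_edge. */
static unsigned int slot_of(uint64_t deadline)
{
	return SLOT_MOD(deadline);
}

/* Effective priority: how far the deadline sits ahead of the current edge;
 * smaller means the task runs sooner. */
static uint64_t prio_of(uint64_t deadline, uint64_t time_edge)
{
	return deadline - time_edge;
}

int main(void)
{
	uint64_t edge = 100, dl = 107;	/* arbitrary example values */

	printf("slot=%u prio=%llu\n", slot_of(dl),
	       (unsigned long long)prio_of(dl, edge));
	edge += 3;	/* time passes: same slot, closer to the edge */
	printf("slot=%u prio=%llu\n", slot_of(dl),
	       (unsigned long long)prio_of(dl, edge));
	return 0;
}
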
-diff --new-file -rup linux-6.10/kernel/sched/pelt.c linux-prjc-linux-6.10.y-prjc/kernel/sched/pelt.c
---- linux-6.10/kernel/sched/pelt.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/pelt.c	2024-07-15 16:47:37.000000000 +0200
-@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa,
- 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- /*
-  * sched_entity:
-  *
-@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struc
- 
- 	return 0;
- }
-+#endif
- 
--#ifdef CONFIG_SCHED_HW_PRESSURE
-+#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
- /*
-  * hardware:
-  *
-diff --new-file -rup linux-6.10/kernel/sched/pelt.h linux-prjc-linux-6.10.y-prjc/kernel/sched/pelt.h
---- linux-6.10/kernel/sched/pelt.h	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/pelt.h	2024-07-15 16:47:37.000000000 +0200
-@@ -1,13 +1,15 @@
- #ifdef CONFIG_SMP
- #include "sched-pelt.h"
- 
-+#ifndef CONFIG_SCHED_ALT
- int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
- int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
- int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
- int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
- int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
-+#endif
- 
--#ifdef CONFIG_SCHED_HW_PRESSURE
-+#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
- int update_hw_load_avg(u64 now, struct rq *rq, u64 capacity);
- 
- static inline u64 hw_load_avg(struct rq *rq)
-@@ -44,6 +46,7 @@ static inline u32 get_pelt_divider(struc
- 	return PELT_MIN_DIVIDER + avg->period_contrib;
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- static inline void cfs_se_util_change(struct sched_avg *avg)
- {
- 	unsigned int enqueued;
-@@ -180,9 +183,11 @@ static inline u64 cfs_rq_clock_pelt(stru
- 	return rq_clock_pelt(rq_of(cfs_rq));
- }
- #endif
-+#endif /* CONFIG_SCHED_ALT */
- 
- #else
- 
-+#ifndef CONFIG_SCHED_ALT
- static inline int
- update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
- {
-@@ -200,6 +205,7 @@ update_dl_rq_load_avg(u64 now, struct rq
- {
- 	return 0;
- }
-+#endif
- 
- static inline int
- update_hw_load_avg(u64 now, struct rq *rq, u64 capacity)
-diff --new-file -rup linux-6.10/kernel/sched/sched.h linux-prjc-linux-6.10.y-prjc/kernel/sched/sched.h
---- linux-6.10/kernel/sched/sched.h	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/sched.h	2024-07-15 16:47:37.000000000 +0200
-@@ -5,6 +5,10 @@
- #ifndef _KERNEL_SCHED_SCHED_H
- #define _KERNEL_SCHED_SCHED_H
- 
-+#ifdef CONFIG_SCHED_ALT
-+#include "alt_sched.h"
-+#else
-+
- #include <linux/sched/affinity.h>
- #include <linux/sched/autogroup.h>
- #include <linux/sched/cpufreq.h>
-@@ -3481,4 +3485,9 @@ static inline void init_sched_mm_cid(str
- extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
- extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);
- 
-+static inline int task_running_nice(struct task_struct *p)
-+{
-+	return (task_nice(p) > 0);
-+}
-+#endif /* !CONFIG_SCHED_ALT */
- #endif /* _KERNEL_SCHED_SCHED_H */
-diff --new-file -rup linux-6.10/kernel/sched/stats.c linux-prjc-linux-6.10.y-prjc/kernel/sched/stats.c
---- linux-6.10/kernel/sched/stats.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/stats.c	2024-07-15 16:47:37.000000000 +0200
-@@ -125,9 +125,11 @@ static int show_schedstat(struct seq_fil
- 	} else {
- 		struct rq *rq;
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- 		struct sched_domain *sd;
- 		int dcount = 0;
- #endif
-+#endif
- 		cpu = (unsigned long)(v - 2);
- 		rq = cpu_rq(cpu);
- 
-@@ -143,6 +145,7 @@ static int show_schedstat(struct seq_fil
- 		seq_printf(seq, "\n");
- 
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- 		/* domain-specific stats */
- 		rcu_read_lock();
- 		for_each_domain(cpu, sd) {
-@@ -171,6 +174,7 @@ static int show_schedstat(struct seq_fil
- 		}
- 		rcu_read_unlock();
- #endif
-+#endif
- 	}
- 	return 0;
- }
-diff --new-file -rup linux-6.10/kernel/sched/stats.h linux-prjc-linux-6.10.y-prjc/kernel/sched/stats.h
---- linux-6.10/kernel/sched/stats.h	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/stats.h	2024-07-15 16:47:37.000000000 +0200
-@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart
- 
- #endif /* CONFIG_SCHEDSTATS */
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_FAIR_GROUP_SCHED
- struct sched_entity_stats {
- 	struct sched_entity     se;
-@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity
- #endif
- 	return &task_of(se)->stats;
- }
-+#endif /* CONFIG_SCHED_ALT */
- 
- #ifdef CONFIG_PSI
- void psi_task_change(struct task_struct *task, int clear, int set);
-diff --new-file -rup linux-6.10/kernel/sched/topology.c linux-prjc-linux-6.10.y-prjc/kernel/sched/topology.c
---- linux-6.10/kernel/sched/topology.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sched/topology.c	2024-07-15 16:47:37.000000000 +0200
-@@ -3,6 +3,7 @@
-  * Scheduler topology setup/handling methods
-  */
- 
-+#ifndef CONFIG_SCHED_ALT
- #include <linux/bsearch.h>
- 
- DEFINE_MUTEX(sched_domains_mutex);
-@@ -1451,8 +1452,10 @@ static void asym_cpu_capacity_scan(void)
-  */
- 
- static int default_relax_domain_level = -1;
-+#endif /* CONFIG_SCHED_ALT */
- int sched_domain_level_max;
- 
-+#ifndef CONFIG_SCHED_ALT
- static int __init setup_relax_domain_level(char *str)
- {
- 	if (kstrtoint(str, 0, &default_relax_domain_level))
-@@ -1687,6 +1690,7 @@ sd_init(struct sched_domain_topology_lev
- 
- 	return sd;
- }
-+#endif /* CONFIG_SCHED_ALT */
- 
- /*
-  * Topology list, bottom-up.
-@@ -1723,6 +1727,7 @@ void __init set_sched_topology(struct sc
- 	sched_domain_topology_saved = NULL;
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_NUMA
- 
- static const struct cpumask *sd_numa_mask(int cpu)
-@@ -2789,3 +2794,28 @@ void partition_sched_domains(int ndoms_n
- 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
- 	mutex_unlock(&sched_domains_mutex);
- }
-+#else /* CONFIG_SCHED_ALT */
-+DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
-+
-+void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
-+			     struct sched_domain_attr *dattr_new)
-+{}
-+
-+#ifdef CONFIG_NUMA
-+int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
-+{
-+	return best_mask_cpu(cpu, cpus);
-+}
-+
-+int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
-+{
-+	return cpumask_nth(cpu, cpus);
-+}
-+
-+const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
-+{
-+	return ERR_PTR(-EOPNOTSUPP);
-+}
-+EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
-+#endif /* CONFIG_NUMA */
-+#endif
-diff --new-file -rup linux-6.10/kernel/sysctl.c linux-prjc-linux-6.10.y-prjc/kernel/sysctl.c
---- linux-6.10/kernel/sysctl.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/sysctl.c	2024-07-15 16:47:37.000000000 +0200
-@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
- 
- /* Constants used for minimum and maximum */
- 
-+#ifdef CONFIG_SCHED_ALT
-+extern int sched_yield_type;
-+#endif
-+
- #ifdef CONFIG_PERF_EVENTS
- static const int six_hundred_forty_kb = 640 * 1024;
- #endif
-@@ -1912,6 +1916,17 @@ static struct ctl_table kern_table[] = {
- 		.proc_handler	= proc_dointvec,
- 	},
- #endif
-+#ifdef CONFIG_SCHED_ALT
-+	{
-+		.procname	= "yield_type",
-+		.data		= &sched_yield_type,
-+		.maxlen		= sizeof (int),
-+		.mode		= 0644,
-+		.proc_handler	= &proc_dointvec_minmax,
-+		.extra1		= SYSCTL_ZERO,
-+		.extra2		= SYSCTL_TWO,
-+	},
-+#endif
- #if defined(CONFIG_S390) && defined(CONFIG_SMP)
- 	{
- 		.procname	= "spin_retry",
-diff --new-file -rup linux-6.10/kernel/time/hrtimer.c linux-prjc-linux-6.10.y-prjc/kernel/time/hrtimer.c
---- linux-6.10/kernel/time/hrtimer.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/time/hrtimer.c	2024-07-15 16:47:37.000000000 +0200
-@@ -2074,8 +2074,10 @@ long hrtimer_nanosleep(ktime_t rqtp, con
- 	int ret = 0;
- 	u64 slack;
- 
-+#ifndef CONFIG_SCHED_ALT
- 	slack = current->timer_slack_ns;
--	if (rt_task(current))
-+	if (dl_task(current) || rt_task(current))
-+#endif
- 		slack = 0;
- 
- 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
-diff --new-file -rup linux-6.10/kernel/time/posix-cpu-timers.c linux-prjc-linux-6.10.y-prjc/kernel/time/posix-cpu-timers.c
---- linux-6.10/kernel/time/posix-cpu-timers.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/time/posix-cpu-timers.c	2024-07-15 16:47:37.000000000 +0200
-@@ -223,7 +223,7 @@ static void task_sample_cputime(struct t
- 	u64 stime, utime;
- 
- 	task_cputime(p, &utime, &stime);
--	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
-+	store_samples(samples, stime, utime, tsk_seruntime(p));
- }
- 
- static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
-@@ -867,6 +867,7 @@ static void collect_posix_cputimers(stru
- 	}
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- static inline void check_dl_overrun(struct task_struct *tsk)
- {
- 	if (tsk->dl.dl_overrun) {
-@@ -874,6 +875,7 @@ static inline void check_dl_overrun(stru
- 		send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
- 	}
- }
-+#endif
- 
- static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
- {
-@@ -901,8 +903,10 @@ static void check_thread_timers(struct t
- 	u64 samples[CPUCLOCK_MAX];
- 	unsigned long soft;
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (dl_task(tsk))
- 		check_dl_overrun(tsk);
-+#endif
- 
- 	if (expiry_cache_is_inactive(pct))
- 		return;
-@@ -916,7 +920,7 @@ static void check_thread_timers(struct t
- 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
- 	if (soft != RLIM_INFINITY) {
- 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
--		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
-+		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
- 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
- 
- 		/* At the hard limit, send SIGKILL. No further action. */
-@@ -1152,8 +1156,10 @@ static inline bool fastpath_timer_check(
- 			return true;
- 	}
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (dl_task(tsk) && tsk->dl.dl_overrun)
- 		return true;
-+#endif
- 
- 	return false;
- }
-diff --new-file -rup linux-6.10/kernel/trace/trace_selftest.c linux-prjc-linux-6.10.y-prjc/kernel/trace/trace_selftest.c
---- linux-6.10/kernel/trace/trace_selftest.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/trace/trace_selftest.c	2024-07-15 16:47:37.000000000 +0200
-@@ -1155,10 +1155,15 @@ static int trace_wakeup_test_thread(void
- {
- 	/* Make this a -deadline thread */
- 	static const struct sched_attr attr = {
-+#ifdef CONFIG_SCHED_ALT
-+		/* No deadline on BMQ/PDS, use RR */
-+		.sched_policy = SCHED_RR,
-+#else
- 		.sched_policy = SCHED_DEADLINE,
- 		.sched_runtime = 100000ULL,
- 		.sched_deadline = 10000000ULL,
- 		.sched_period = 10000000ULL
-+#endif
- 	};
- 	struct wakeup_test_data *x = data;
- 
-diff --new-file -rup linux-6.10/kernel/workqueue.c linux-prjc-linux-6.10.y-prjc/kernel/workqueue.c
---- linux-6.10/kernel/workqueue.c	2024-07-15 00:43:32.000000000 +0200
-+++ linux-prjc-linux-6.10.y-prjc/kernel/workqueue.c	2024-07-15 16:47:37.000000000 +0200
-@@ -1248,6 +1248,7 @@ static bool kick_pool(struct worker_pool
- 
- 	p = worker->task;
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_SMP
- 	/*
- 	 * Idle @worker is about to execute @work and waking up provides an
-@@ -1277,6 +1278,8 @@ static bool kick_pool(struct worker_pool
- 		}
- 	}
- #endif
-+#endif /* !CONFIG_SCHED_ALT */
-+
- 	wake_up_process(p);
- 	return true;
- }
-@@ -1405,7 +1408,11 @@ void wq_worker_running(struct task_struc
- 	 * CPU intensive auto-detection cares about how long a work item hogged
- 	 * CPU without sleeping. Reset the starting timestamp on wakeup.
- 	 */
-+#ifdef CONFIG_SCHED_ALT
-+	worker->current_at = worker->task->sched_time;
-+#else
- 	worker->current_at = worker->task->se.sum_exec_runtime;
-+#endif
- 
- 	WRITE_ONCE(worker->sleeping, 0);
- }
-@@ -1490,7 +1497,11 @@ void wq_worker_tick(struct task_struct *
- 	 * We probably want to make this prettier in the future.
- 	 */
- 	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
-+#ifdef CONFIG_SCHED_ALT
-+	    worker->task->sched_time - worker->current_at <
-+#else
- 	    worker->task->se.sum_exec_runtime - worker->current_at <
-+#endif
- 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
- 		return;
- 
-@@ -3176,7 +3187,11 @@ __acquires(&pool->lock)
- 	worker->current_func = work->func;
- 	worker->current_pwq = pwq;
- 	if (worker->task)
-+#ifdef CONFIG_SCHED_ALT
-+		worker->current_at = worker->task->sched_time;
-+#else
- 		worker->current_at = worker->task->se.sum_exec_runtime;
-+#endif
- 	work_data = *work_data_bits(work);
- 	worker->current_color = get_work_color(work_data);
- 

diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
deleted file mode 100644
index 6dc48eec..00000000
--- a/5021_BMQ-and-PDS-gentoo-defaults.patch
+++ /dev/null
@@ -1,13 +0,0 @@
---- a/init/Kconfig	2023-02-13 08:16:09.534315265 -0500
-+++ b/init/Kconfig	2023-02-13 08:17:24.130237204 -0500
-@@ -867,8 +867,9 @@ config UCLAMP_BUCKETS_COUNT
- 	  If in doubt, use the default value.
- 
- menuconfig SCHED_ALT
-+	depends on X86_64
- 	bool "Alternative CPU Schedulers"
--	default y
-+	default n
- 	help
- 	  This feature enable alternative CPU scheduler"
- 



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-07-27 13:45 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-07-27 13:45 UTC (permalink / raw
  To: gentoo-commits

commit:     4cbee0c6502f56a8b994cf6e55c28fe99dc5094c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jul 27 13:45:10 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jul 27 13:45:10 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4cbee0c6

Linux patch 6.10.2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |   4 +
 1001_linux-6.10.2.patch | 908 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 912 insertions(+)

diff --git a/0000_README b/0000_README
index d91abe89..f3d968e8 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-6.10.1.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.1
 
+Patch:  1001_linux-6.10.2.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.2
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1001_linux-6.10.2.patch b/1001_linux-6.10.2.patch
new file mode 100644
index 00000000..0b87c2e6
--- /dev/null
+++ b/1001_linux-6.10.2.patch
@@ -0,0 +1,908 @@
+diff --git a/Makefile b/Makefile
+index 9ae12a6c0ece2..07e1aec72c171 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 17ab6c4759580..625abd976cac2 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -685,6 +685,7 @@ dwc_0: usb@8a00000 {
+ 				clocks = <&xo>;
+ 				clock-names = "ref";
+ 				tx-fifo-resize;
++				snps,parkmode-disable-ss-quirk;
+ 				snps,is-utmi-l1-suspend;
+ 				snps,hird-threshold = /bits/ 8 <0x0>;
+ 				snps,dis_u2_susphy_quirk;
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index 5d42de829e75f..ca75b7de7bf37 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -666,6 +666,7 @@ dwc_0: usb@8a00000 {
+ 				interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
+ 				phys = <&qusb_phy_0>, <&ssphy_0>;
+ 				phy-names = "usb2-phy", "usb3-phy";
++				snps,parkmode-disable-ss-quirk;
+ 				snps,is-utmi-l1-suspend;
+ 				snps,hird-threshold = /bits/ 8 <0x0>;
+ 				snps,dis_u2_susphy_quirk;
+@@ -715,6 +716,7 @@ dwc_1: usb@8c00000 {
+ 				interrupts = <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>;
+ 				phys = <&qusb_phy_1>, <&ssphy_1>;
+ 				phy-names = "usb2-phy", "usb3-phy";
++				snps,parkmode-disable-ss-quirk;
+ 				snps,is-utmi-l1-suspend;
+ 				snps,hird-threshold = /bits/ 8 <0x0>;
+ 				snps,dis_u2_susphy_quirk;
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 8d2cb6f410956..6e7a4bb08b35c 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -3091,6 +3091,7 @@ usb3_dwc3: usb@6a00000 {
+ 				snps,dis_u2_susphy_quirk;
+ 				snps,dis_enblslpm_quirk;
+ 				snps,is-utmi-l1-suspend;
++				snps,parkmode-disable-ss-quirk;
+ 				tx-fifo-resize;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/msm8998.dtsi b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+index d795b2bbe1330..2dbef4b526ab7 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+@@ -2164,6 +2164,7 @@ usb3_dwc3: usb@a800000 {
+ 				interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>;
+ 				snps,dis_u2_susphy_quirk;
+ 				snps,dis_enblslpm_quirk;
++				snps,parkmode-disable-ss-quirk;
+ 				phys = <&qusb2phy>, <&usb3phy>;
+ 				phy-names = "usb2-phy", "usb3-phy";
+ 				snps,has-lpm-erratum;
+diff --git a/arch/arm64/boot/dts/qcom/qrb2210-rb1.dts b/arch/arm64/boot/dts/qcom/qrb2210-rb1.dts
+index bb5191422660b..8c27d52139a1b 100644
+--- a/arch/arm64/boot/dts/qcom/qrb2210-rb1.dts
++++ b/arch/arm64/boot/dts/qcom/qrb2210-rb1.dts
+@@ -59,6 +59,17 @@ hdmi_con: endpoint {
+ 		};
+ 	};
+ 
++	i2c2_gpio: i2c {
++		compatible = "i2c-gpio";
++
++		sda-gpios = <&tlmm 6 GPIO_ACTIVE_HIGH>;
++		scl-gpios = <&tlmm 7 GPIO_ACTIVE_HIGH>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		status = "disabled";
++	};
++
+ 	leds {
+ 		compatible = "gpio-leds";
+ 
+@@ -199,7 +210,7 @@ &gpi_dma0 {
+ 	status = "okay";
+ };
+ 
+-&i2c2 {
++&i2c2_gpio {
+ 	clock-frequency = <400000>;
+ 	status = "okay";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+index 2c39bb1b97db5..cb8a62714a302 100644
+--- a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
++++ b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+@@ -60,6 +60,17 @@ hdmi_con: endpoint {
+ 		};
+ 	};
+ 
++	i2c2_gpio: i2c {
++		compatible = "i2c-gpio";
++
++		sda-gpios = <&tlmm 6 GPIO_ACTIVE_HIGH>;
++		scl-gpios = <&tlmm 7 GPIO_ACTIVE_HIGH>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		status = "disabled";
++	};
++
+ 	leds {
+ 		compatible = "gpio-leds";
+ 
+@@ -190,7 +201,7 @@ zap-shader {
+ 	};
+ };
+ 
+-&i2c2 {
++&i2c2_gpio {
+ 	clock-frequency = <400000>;
+ 	status = "okay";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index 4774a859bd7ea..e9deffe3aaf6c 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -3067,6 +3067,7 @@ usb_1_dwc3: usb@a600000 {
+ 				iommus = <&apps_smmu 0x540 0>;
+ 				snps,dis_u2_susphy_quirk;
+ 				snps,dis_enblslpm_quirk;
++				snps,parkmode-disable-ss-quirk;
+ 				phys = <&usb_1_hsphy>, <&usb_1_qmpphy QMP_USB43DP_USB3_PHY>;
+ 				phy-names = "usb2-phy", "usb3-phy";
+ 				maximum-speed = "super-speed";
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index fc9ec367e3a5a..2f7780f629ac5 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -4150,6 +4150,7 @@ usb_1_dwc3: usb@a600000 {
+ 				iommus = <&apps_smmu 0xe0 0x0>;
+ 				snps,dis_u2_susphy_quirk;
+ 				snps,dis_enblslpm_quirk;
++				snps,parkmode-disable-ss-quirk;
+ 				phys = <&usb_1_hsphy>, <&usb_1_qmpphy QMP_USB43DP_USB3_PHY>;
+ 				phy-names = "usb2-phy", "usb3-phy";
+ 				maximum-speed = "super-speed";
+diff --git a/arch/arm64/boot/dts/qcom/sdm630.dtsi b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+index f5921b80ef943..5f6884b2367d9 100644
+--- a/arch/arm64/boot/dts/qcom/sdm630.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+@@ -1302,6 +1302,7 @@ usb3_dwc3: usb@a800000 {
+ 				interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>;
+ 				snps,dis_u2_susphy_quirk;
+ 				snps,dis_enblslpm_quirk;
++				snps,parkmode-disable-ss-quirk;
+ 
+ 				phys = <&qusb2phy0>, <&usb3_qmpphy>;
+ 				phy-names = "usb2-phy", "usb3-phy";
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 10de2bd46ffcc..d817a7751086e 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -4106,6 +4106,7 @@ usb_1_dwc3: usb@a600000 {
+ 				iommus = <&apps_smmu 0x740 0>;
+ 				snps,dis_u2_susphy_quirk;
+ 				snps,dis_enblslpm_quirk;
++				snps,parkmode-disable-ss-quirk;
+ 				phys = <&usb_1_hsphy>, <&usb_1_qmpphy QMP_USB43DP_USB3_PHY>;
+ 				phy-names = "usb2-phy", "usb3-phy";
+ 			};
+@@ -4161,6 +4162,7 @@ usb_2_dwc3: usb@a800000 {
+ 				iommus = <&apps_smmu 0x760 0>;
+ 				snps,dis_u2_susphy_quirk;
+ 				snps,dis_enblslpm_quirk;
++				snps,parkmode-disable-ss-quirk;
+ 				phys = <&usb_2_hsphy>, <&usb_2_qmpphy>;
+ 				phy-names = "usb2-phy", "usb3-phy";
+ 			};
+diff --git a/arch/arm64/boot/dts/qcom/sm6115.dtsi b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+index 9ed062150aaf2..b4ce5a322107e 100644
+--- a/arch/arm64/boot/dts/qcom/sm6115.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+@@ -1656,6 +1656,7 @@ usb_dwc3: usb@4e00000 {
+ 				snps,has-lpm-erratum;
+ 				snps,hird-threshold = /bits/ 8 <0x10>;
+ 				snps,usb3_lpm_capable;
++				snps,parkmode-disable-ss-quirk;
+ 
+ 				usb-role-switch;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+index 84ff20a96c838..6c7eac0110ba1 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+@@ -1890,6 +1890,7 @@ usb_1_dwc3: usb@a600000 {
+ 				snps,dis_enblslpm_quirk;
+ 				snps,has-lpm-erratum;
+ 				snps,hird-threshold = /bits/ 8 <0x10>;
++				snps,parkmode-disable-ss-quirk;
+ 				phys = <&usb_1_hsphy>, <&usb_1_qmpphy QMP_USB43DP_USB3_PHY>;
+ 				phy-names = "usb2-phy", "usb3-phy";
+ 			};
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+index be6b1e7d07ce3..7618ae1f8b1c9 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+@@ -659,7 +659,7 @@ &pcie6a {
+ };
+ 
+ &pcie6a_phy {
+-	vdda-phy-supply = <&vreg_l3j_0p8>;
++	vdda-phy-supply = <&vreg_l1d_0p8>;
+ 	vdda-pll-supply = <&vreg_l2j_1p2>;
+ 
+ 	status = "okay";
+@@ -841,7 +841,7 @@ &uart21 {
+ 
+ &usb_1_ss0_hsphy {
+ 	vdd-supply = <&vreg_l2e_0p8>;
+-	vdda12-supply = <&vreg_l3e_1p2>;
++	vdda12-supply = <&vreg_l2j_1p2>;
+ 
+ 	phys = <&smb2360_0_eusb2_repeater>;
+ 
+@@ -849,6 +849,9 @@ &usb_1_ss0_hsphy {
+ };
+ 
+ &usb_1_ss0_qmpphy {
++	vdda-phy-supply = <&vreg_l3e_1p2>;
++	vdda-pll-supply = <&vreg_l1j_0p8>;
++
+ 	status = "okay";
+ };
+ 
+@@ -863,7 +866,7 @@ &usb_1_ss0_dwc3 {
+ 
+ &usb_1_ss1_hsphy {
+ 	vdd-supply = <&vreg_l2e_0p8>;
+-	vdda12-supply = <&vreg_l3e_1p2>;
++	vdda12-supply = <&vreg_l2j_1p2>;
+ 
+ 	phys = <&smb2360_1_eusb2_repeater>;
+ 
+@@ -871,6 +874,9 @@ &usb_1_ss1_hsphy {
+ };
+ 
+ &usb_1_ss1_qmpphy {
++	vdda-phy-supply = <&vreg_l3e_1p2>;
++	vdda-pll-supply = <&vreg_l2d_0p9>;
++
+ 	status = "okay";
+ };
+ 
+@@ -885,7 +891,7 @@ &usb_1_ss1_dwc3 {
+ 
+ &usb_1_ss2_hsphy {
+ 	vdd-supply = <&vreg_l2e_0p8>;
+-	vdda12-supply = <&vreg_l3e_1p2>;
++	vdda12-supply = <&vreg_l2j_1p2>;
+ 
+ 	phys = <&smb2360_2_eusb2_repeater>;
+ 
+@@ -893,6 +899,9 @@ &usb_1_ss2_hsphy {
+ };
+ 
+ &usb_1_ss2_qmpphy {
++	vdda-phy-supply = <&vreg_l3e_1p2>;
++	vdda-pll-supply = <&vreg_l2d_0p9>;
++
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+index 8f67c393b871b..5567636c8b27f 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+@@ -470,7 +470,7 @@ &pcie6a {
+ };
+ 
+ &pcie6a_phy {
+-	vdda-phy-supply = <&vreg_l3j_0p8>;
++	vdda-phy-supply = <&vreg_l1d_0p8>;
+ 	vdda-pll-supply = <&vreg_l2j_1p2>;
+ 
+ 	status = "okay";
+@@ -537,7 +537,7 @@ &uart21 {
+ 
+ &usb_1_ss0_hsphy {
+ 	vdd-supply = <&vreg_l2e_0p8>;
+-	vdda12-supply = <&vreg_l3e_1p2>;
++	vdda12-supply = <&vreg_l2j_1p2>;
+ 
+ 	phys = <&smb2360_0_eusb2_repeater>;
+ 
+@@ -545,6 +545,9 @@ &usb_1_ss0_hsphy {
+ };
+ 
+ &usb_1_ss0_qmpphy {
++	vdda-phy-supply = <&vreg_l3e_1p2>;
++	vdda-pll-supply = <&vreg_l1j_0p8>;
++
+ 	status = "okay";
+ };
+ 
+@@ -559,7 +562,7 @@ &usb_1_ss0_dwc3 {
+ 
+ &usb_1_ss1_hsphy {
+ 	vdd-supply = <&vreg_l2e_0p8>;
+-	vdda12-supply = <&vreg_l3e_1p2>;
++	vdda12-supply = <&vreg_l2j_1p2>;
+ 
+ 	phys = <&smb2360_1_eusb2_repeater>;
+ 
+@@ -567,6 +570,9 @@ &usb_1_ss1_hsphy {
+ };
+ 
+ &usb_1_ss1_qmpphy {
++	vdda-phy-supply = <&vreg_l3e_1p2>;
++	vdda-pll-supply = <&vreg_l2d_0p9>;
++
+ 	status = "okay";
+ };
+ 
+@@ -581,7 +587,7 @@ &usb_1_ss1_dwc3 {
+ 
+ &usb_1_ss2_hsphy {
+ 	vdd-supply = <&vreg_l2e_0p8>;
+-	vdda12-supply = <&vreg_l3e_1p2>;
++	vdda12-supply = <&vreg_l2j_1p2>;
+ 
+ 	phys = <&smb2360_2_eusb2_repeater>;
+ 
+@@ -589,6 +595,9 @@ &usb_1_ss2_hsphy {
+ };
+ 
+ &usb_1_ss2_qmpphy {
++	vdda-phy-supply = <&vreg_l3e_1p2>;
++	vdda-pll-supply = <&vreg_l2d_0p9>;
++
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index 65747f15dbec4..c848966ed1759 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -433,12 +433,13 @@ static void do_exception(struct pt_regs *regs, int access)
+ 			handle_fault_error_nolock(regs, 0);
+ 		else
+ 			do_sigsegv(regs, SEGV_MAPERR);
+-	} else if (fault & VM_FAULT_SIGBUS) {
++	} else if (fault & (VM_FAULT_SIGBUS | VM_FAULT_HWPOISON)) {
+ 		if (!user_mode(regs))
+ 			handle_fault_error_nolock(regs, 0);
+ 		else
+ 			do_sigbus(regs);
+ 	} else {
++		pr_emerg("Unexpected fault flags: %08x\n", fault);
+ 		BUG();
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index 101038395c3b4..772604feb6acd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -2017,7 +2017,7 @@ static int sdma_v4_0_process_trap_irq(struct amdgpu_device *adev,
+ 				      struct amdgpu_irq_src *source,
+ 				      struct amdgpu_iv_entry *entry)
+ {
+-	uint32_t instance;
++	int instance;
+ 
+ 	DRM_DEBUG("IH: SDMA trap\n");
+ 	instance = sdma_v4_0_irq_id_to_seq(entry->client_id);
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index bfdd3875fe865..77574f7a3bd45 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -1177,6 +1177,11 @@ static int tap_get_user_xdp(struct tap_queue *q, struct xdp_buff *xdp)
+ 	struct sk_buff *skb;
+ 	int err, depth;
+ 
++	if (unlikely(xdp->data_end - xdp->data < ETH_HLEN)) {
++		err = -EINVAL;
++		goto err;
++	}
++
+ 	if (q->flags & IFF_VNET_HDR)
+ 		vnet_hdr_len = READ_ONCE(q->vnet_hdr_sz);
+ 
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 9254bca2813dc..4a5107117b4a6 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -2451,6 +2451,9 @@ static int tun_xdp_one(struct tun_struct *tun,
+ 	bool skb_xdp = false;
+ 	struct page *page;
+ 
++	if (unlikely(datasize < ETH_HLEN))
++		return -EINVAL;
++
+ 	xdp_prog = rcu_dereference(tun->xdp_prog);
+ 	if (xdp_prog) {
+ 		if (gso->gso_type) {
+diff --git a/drivers/usb/gadget/function/f_midi2.c b/drivers/usb/gadget/function/f_midi2.c
+index ec8cd7c7bbfc1..0e38bb145e8f5 100644
+--- a/drivers/usb/gadget/function/f_midi2.c
++++ b/drivers/usb/gadget/function/f_midi2.c
+@@ -150,6 +150,9 @@ struct f_midi2 {
+ 
+ #define func_to_midi2(f)	container_of(f, struct f_midi2, func)
+ 
++/* convert from MIDI protocol number (1 or 2) to SNDRV_UMP_EP_INFO_PROTO_* */
++#define to_ump_protocol(v)	(((v) & 3) << 8)
++
+ /* get EP name string */
+ static const char *ump_ep_name(const struct f_midi2_ep *ep)
+ {
+@@ -564,8 +567,7 @@ static void reply_ump_stream_ep_config(struct f_midi2_ep *ep)
+ 		.status = UMP_STREAM_MSG_STATUS_STREAM_CFG,
+ 	};
+ 
+-	if ((ep->info.protocol & SNDRV_UMP_EP_INFO_PROTO_MIDI_MASK) ==
+-	    SNDRV_UMP_EP_INFO_PROTO_MIDI2)
++	if (ep->info.protocol == 2)
+ 		rep.protocol = UMP_STREAM_MSG_EP_INFO_CAP_MIDI2 >> 8;
+ 	else
+ 		rep.protocol = UMP_STREAM_MSG_EP_INFO_CAP_MIDI1 >> 8;
+@@ -627,13 +629,13 @@ static void process_ump_stream_msg(struct f_midi2_ep *ep, const u32 *data)
+ 		return;
+ 	case UMP_STREAM_MSG_STATUS_STREAM_CFG_REQUEST:
+ 		if (*data & UMP_STREAM_MSG_EP_INFO_CAP_MIDI2) {
+-			ep->info.protocol = SNDRV_UMP_EP_INFO_PROTO_MIDI2;
++			ep->info.protocol = 2;
+ 			DBG(midi2, "Switching Protocol to MIDI2\n");
+ 		} else {
+-			ep->info.protocol = SNDRV_UMP_EP_INFO_PROTO_MIDI1;
++			ep->info.protocol = 1;
+ 			DBG(midi2, "Switching Protocol to MIDI1\n");
+ 		}
+-		snd_ump_switch_protocol(ep->ump, ep->info.protocol);
++		snd_ump_switch_protocol(ep->ump, to_ump_protocol(ep->info.protocol));
+ 		reply_ump_stream_ep_config(ep);
+ 		return;
+ 	case UMP_STREAM_MSG_STATUS_FB_DISCOVERY:
+@@ -1065,7 +1067,8 @@ static void f_midi2_midi1_ep_out_complete(struct usb_ep *usb_ep,
+ 		group = midi2->out_cable_mapping[cable].group;
+ 		bytes = midi1_packet_bytes[*buf & 0x0f];
+ 		for (c = 0; c < bytes; c++) {
+-			snd_ump_convert_to_ump(cvt, group, ep->info.protocol,
++			snd_ump_convert_to_ump(cvt, group,
++					       to_ump_protocol(ep->info.protocol),
+ 					       buf[c + 1]);
+ 			if (cvt->ump_bytes) {
+ 				snd_ump_receive(ep->ump, cvt->ump,
+@@ -1375,7 +1378,7 @@ static void assign_block_descriptors(struct f_midi2 *midi2,
+ 			desc->nNumGroupTrm = b->num_groups;
+ 			desc->iBlockItem = ep->blks[blk].string_id;
+ 
+-			if (ep->info.protocol & SNDRV_UMP_EP_INFO_PROTO_MIDI2)
++			if (ep->info.protocol == 2)
+ 				desc->bMIDIProtocol = USB_MS_MIDI_PROTO_2_0;
+ 			else
+ 				desc->bMIDIProtocol = USB_MS_MIDI_PROTO_1_0_128;
+@@ -1552,7 +1555,7 @@ static int f_midi2_create_card(struct f_midi2 *midi2)
+ 		if (midi2->info.static_block)
+ 			ump->info.flags |= SNDRV_UMP_EP_INFO_STATIC_BLOCKS;
+ 		ump->info.protocol_caps = (ep->info.protocol_caps & 3) << 8;
+-		ump->info.protocol = (ep->info.protocol & 3) << 8;
++		ump->info.protocol = to_ump_protocol(ep->info.protocol);
+ 		ump->info.version = 0x0101;
+ 		ump->info.family_id = ep->info.family;
+ 		ump->info.model_id = ep->info.model;
+diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c
+index 9987055293b35..2999ed5d83f5e 100644
+--- a/fs/jfs/xattr.c
++++ b/fs/jfs/xattr.c
+@@ -797,7 +797,7 @@ ssize_t __jfs_getxattr(struct inode *inode, const char *name, void *data,
+ 		       size_t buf_size)
+ {
+ 	struct jfs_ea_list *ealist;
+-	struct jfs_ea *ea;
++	struct jfs_ea *ea, *ealist_end;
+ 	struct ea_buffer ea_buf;
+ 	int xattr_size;
+ 	ssize_t size;
+@@ -817,9 +817,16 @@ ssize_t __jfs_getxattr(struct inode *inode, const char *name, void *data,
+ 		goto not_found;
+ 
+ 	ealist = (struct jfs_ea_list *) ea_buf.xattr;
++	ealist_end = END_EALIST(ealist);
+ 
+ 	/* Find the named attribute */
+-	for (ea = FIRST_EA(ealist); ea < END_EALIST(ealist); ea = NEXT_EA(ea))
++	for (ea = FIRST_EA(ealist); ea < ealist_end; ea = NEXT_EA(ea)) {
++		if (unlikely(ea + 1 > ealist_end) ||
++		    unlikely(NEXT_EA(ea) > ealist_end)) {
++			size = -EUCLEAN;
++			goto release;
++		}
++
+ 		if ((namelen == ea->namelen) &&
+ 		    memcmp(name, ea->name, namelen) == 0) {
+ 			/* Found it */
+@@ -834,6 +841,7 @@ ssize_t __jfs_getxattr(struct inode *inode, const char *name, void *data,
+ 			memcpy(data, value, size);
+ 			goto release;
+ 		}
++	}
+       not_found:
+ 	size = -ENODATA;
+       release:
+@@ -861,7 +869,7 @@ ssize_t jfs_listxattr(struct dentry * dentry, char *data, size_t buf_size)
+ 	ssize_t size = 0;
+ 	int xattr_size;
+ 	struct jfs_ea_list *ealist;
+-	struct jfs_ea *ea;
++	struct jfs_ea *ea, *ealist_end;
+ 	struct ea_buffer ea_buf;
+ 
+ 	down_read(&JFS_IP(inode)->xattr_sem);
+@@ -876,9 +884,16 @@ ssize_t jfs_listxattr(struct dentry * dentry, char *data, size_t buf_size)
+ 		goto release;
+ 
+ 	ealist = (struct jfs_ea_list *) ea_buf.xattr;
++	ealist_end = END_EALIST(ealist);
+ 
+ 	/* compute required size of list */
+-	for (ea = FIRST_EA(ealist); ea < END_EALIST(ealist); ea = NEXT_EA(ea)) {
++	for (ea = FIRST_EA(ealist); ea < ealist_end; ea = NEXT_EA(ea)) {
++		if (unlikely(ea + 1 > ealist_end) ||
++		    unlikely(NEXT_EA(ea) > ealist_end)) {
++			size = -EUCLEAN;
++			goto release;
++		}
++
+ 		if (can_list(ea))
+ 			size += name_size(ea) + 1;
+ 	}
+diff --git a/fs/locks.c b/fs/locks.c
+index bdd94c32256f5..9afb16e0683ff 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -2570,8 +2570,9 @@ int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
+ 	error = do_lock_file_wait(filp, cmd, file_lock);
+ 
+ 	/*
+-	 * Attempt to detect a close/fcntl race and recover by releasing the
+-	 * lock that was just acquired. There is no need to do that when we're
++	 * Detect close/fcntl races and recover by zapping all POSIX locks
++	 * associated with this file and our files_struct, just like on
++	 * filp_flush(). There is no need to do that when we're
+ 	 * unlocking though, or for OFD locks.
+ 	 */
+ 	if (!error && file_lock->c.flc_type != F_UNLCK &&
+@@ -2586,9 +2587,7 @@ int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
+ 		f = files_lookup_fd_locked(files, fd);
+ 		spin_unlock(&files->file_lock);
+ 		if (f != filp) {
+-			file_lock->c.flc_type = F_UNLCK;
+-			error = do_lock_file_wait(filp, cmd, file_lock);
+-			WARN_ON_ONCE(error);
++			locks_remove_posix(filp, files);
+ 			error = -EBADF;
+ 		}
+ 	}
+diff --git a/fs/ntfs3/fslog.c b/fs/ntfs3/fslog.c
+index d7807d255dfe2..40926bf392d28 100644
+--- a/fs/ntfs3/fslog.c
++++ b/fs/ntfs3/fslog.c
+@@ -724,7 +724,8 @@ static bool check_rstbl(const struct RESTART_TABLE *rt, size_t bytes)
+ 
+ 	if (!rsize || rsize > bytes ||
+ 	    rsize + sizeof(struct RESTART_TABLE) > bytes || bytes < ts ||
+-	    le16_to_cpu(rt->total) > ne || ff > ts || lf > ts ||
++	    le16_to_cpu(rt->total) > ne ||
++			ff > ts - sizeof(__le32) || lf > ts - sizeof(__le32) ||
+ 	    (ff && ff < sizeof(struct RESTART_TABLE)) ||
+ 	    (lf && lf < sizeof(struct RESTART_TABLE))) {
+ 		return false;
+@@ -754,6 +755,9 @@ static bool check_rstbl(const struct RESTART_TABLE *rt, size_t bytes)
+ 			return false;
+ 
+ 		off = le32_to_cpu(*(__le32 *)Add2Ptr(rt, off));
++
++		if (off > ts - sizeof(__le32))
++			return false;
+ 	}
+ 
+ 	return true;
+@@ -3722,6 +3726,8 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 
+ 	u64 rec_lsn, checkpt_lsn = 0, rlsn = 0;
+ 	struct ATTR_NAME_ENTRY *attr_names = NULL;
++	u32 attr_names_bytes = 0;
++	u32 oatbl_bytes = 0;
+ 	struct RESTART_TABLE *dptbl = NULL;
+ 	struct RESTART_TABLE *trtbl = NULL;
+ 	const struct RESTART_TABLE *rt;
+@@ -3736,6 +3742,7 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	struct NTFS_RESTART *rst = NULL;
+ 	struct lcb *lcb = NULL;
+ 	struct OPEN_ATTR_ENRTY *oe;
++	struct ATTR_NAME_ENTRY *ane;
+ 	struct TRANSACTION_ENTRY *tr;
+ 	struct DIR_PAGE_ENTRY *dp;
+ 	u32 i, bytes_per_attr_entry;
+@@ -4314,17 +4321,40 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	lcb = NULL;
+ 
+ check_attribute_names2:
+-	if (rst->attr_names_len && oatbl) {
+-		struct ATTR_NAME_ENTRY *ane = attr_names;
+-		while (ane->off) {
++	if (attr_names && oatbl) {
++		off = 0;
++		for (;;) {
++			/* Check we can use attribute name entry 'ane'. */
++			static_assert(sizeof(*ane) == 4);
++			if (off + sizeof(*ane) > attr_names_bytes) {
++				/* just ignore the rest. */
++				break;
++			}
++
++			ane = Add2Ptr(attr_names, off);
++			t16 = le16_to_cpu(ane->off);
++			if (!t16) {
++				/* this is the only valid exit. */
++				break;
++			}
++
++			/* Check we can use open attribute entry 'oe'. */
++			if (t16 + sizeof(*oe) > oatbl_bytes) {
++				/* just ignore the rest. */
++				break;
++			}
++
+ 			/* TODO: Clear table on exit! */
+-			oe = Add2Ptr(oatbl, le16_to_cpu(ane->off));
++			oe = Add2Ptr(oatbl, t16);
+ 			t16 = le16_to_cpu(ane->name_bytes);
++			off += t16 + sizeof(*ane);
++			if (off > attr_names_bytes) {
++				/* just ignore the rest. */
++				break;
++			}
+ 			oe->name_len = t16 / sizeof(short);
+ 			oe->ptr = ane->name;
+ 			oe->is_attr_name = 2;
+-			ane = Add2Ptr(ane,
+-				      sizeof(struct ATTR_NAME_ENTRY) + t16);
+ 		}
+ 	}
+ 
+diff --git a/fs/ocfs2/dir.c b/fs/ocfs2/dir.c
+index d620d4c53c6fa..f0beb173dbba2 100644
+--- a/fs/ocfs2/dir.c
++++ b/fs/ocfs2/dir.c
+@@ -294,13 +294,16 @@ static void ocfs2_dx_dir_name_hash(struct inode *dir, const char *name, int len,
+  * bh passed here can be an inode block or a dir data block, depending
+  * on the inode inline data flag.
+  */
+-static int ocfs2_check_dir_entry(struct inode * dir,
+-				 struct ocfs2_dir_entry * de,
+-				 struct buffer_head * bh,
++static int ocfs2_check_dir_entry(struct inode *dir,
++				 struct ocfs2_dir_entry *de,
++				 struct buffer_head *bh,
++				 char *buf,
++				 unsigned int size,
+ 				 unsigned long offset)
+ {
+ 	const char *error_msg = NULL;
+ 	const int rlen = le16_to_cpu(de->rec_len);
++	const unsigned long next_offset = ((char *) de - buf) + rlen;
+ 
+ 	if (unlikely(rlen < OCFS2_DIR_REC_LEN(1)))
+ 		error_msg = "rec_len is smaller than minimal";
+@@ -308,9 +311,11 @@ static int ocfs2_check_dir_entry(struct inode * dir,
+ 		error_msg = "rec_len % 4 != 0";
+ 	else if (unlikely(rlen < OCFS2_DIR_REC_LEN(de->name_len)))
+ 		error_msg = "rec_len is too small for name_len";
+-	else if (unlikely(
+-		 ((char *) de - bh->b_data) + rlen > dir->i_sb->s_blocksize))
+-		error_msg = "directory entry across blocks";
++	else if (unlikely(next_offset > size))
++		error_msg = "directory entry overrun";
++	else if (unlikely(next_offset > size - OCFS2_DIR_REC_LEN(1)) &&
++		 next_offset != size)
++		error_msg = "directory entry too close to end";
+ 
+ 	if (unlikely(error_msg != NULL))
+ 		mlog(ML_ERROR, "bad entry in directory #%llu: %s - "
+@@ -352,16 +357,17 @@ static inline int ocfs2_search_dirblock(struct buffer_head *bh,
+ 	de_buf = first_de;
+ 	dlimit = de_buf + bytes;
+ 
+-	while (de_buf < dlimit) {
++	while (de_buf < dlimit - OCFS2_DIR_MEMBER_LEN) {
+ 		/* this code is executed quadratically often */
+ 		/* do minimal checking `by hand' */
+ 
+ 		de = (struct ocfs2_dir_entry *) de_buf;
+ 
+-		if (de_buf + namelen <= dlimit &&
++		if (de->name + namelen <= dlimit &&
+ 		    ocfs2_match(namelen, name, de)) {
+ 			/* found a match - just to be sure, do a full check */
+-			if (!ocfs2_check_dir_entry(dir, de, bh, offset)) {
++			if (!ocfs2_check_dir_entry(dir, de, bh, first_de,
++						   bytes, offset)) {
+ 				ret = -1;
+ 				goto bail;
+ 			}
+@@ -1138,7 +1144,7 @@ static int __ocfs2_delete_entry(handle_t *handle, struct inode *dir,
+ 	pde = NULL;
+ 	de = (struct ocfs2_dir_entry *) first_de;
+ 	while (i < bytes) {
+-		if (!ocfs2_check_dir_entry(dir, de, bh, i)) {
++		if (!ocfs2_check_dir_entry(dir, de, bh, first_de, bytes, i)) {
+ 			status = -EIO;
+ 			mlog_errno(status);
+ 			goto bail;
+@@ -1635,7 +1641,8 @@ int __ocfs2_add_entry(handle_t *handle,
+ 		/* These checks should've already been passed by the
+ 		 * prepare function, but I guess we can leave them
+ 		 * here anyway. */
+-		if (!ocfs2_check_dir_entry(dir, de, insert_bh, offset)) {
++		if (!ocfs2_check_dir_entry(dir, de, insert_bh, data_start,
++					   size, offset)) {
+ 			retval = -ENOENT;
+ 			goto bail;
+ 		}
+@@ -1774,7 +1781,8 @@ static int ocfs2_dir_foreach_blk_id(struct inode *inode,
+ 		}
+ 
+ 		de = (struct ocfs2_dir_entry *) (data->id_data + ctx->pos);
+-		if (!ocfs2_check_dir_entry(inode, de, di_bh, ctx->pos)) {
++		if (!ocfs2_check_dir_entry(inode, de, di_bh, (char *)data->id_data,
++					   i_size_read(inode), ctx->pos)) {
+ 			/* On error, skip the f_pos to the end. */
+ 			ctx->pos = i_size_read(inode);
+ 			break;
+@@ -1867,7 +1875,8 @@ static int ocfs2_dir_foreach_blk_el(struct inode *inode,
+ 		while (ctx->pos < i_size_read(inode)
+ 		       && offset < sb->s_blocksize) {
+ 			de = (struct ocfs2_dir_entry *) (bh->b_data + offset);
+-			if (!ocfs2_check_dir_entry(inode, de, bh, offset)) {
++			if (!ocfs2_check_dir_entry(inode, de, bh, bh->b_data,
++						   sb->s_blocksize, offset)) {
+ 				/* On error, skip the f_pos to the
+ 				   next block. */
+ 				ctx->pos = (ctx->pos | (sb->s_blocksize - 1)) + 1;
+@@ -3339,7 +3348,7 @@ static int ocfs2_find_dir_space_id(struct inode *dir, struct buffer_head *di_bh,
+ 	struct super_block *sb = dir->i_sb;
+ 	struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data;
+ 	struct ocfs2_dir_entry *de, *last_de = NULL;
+-	char *de_buf, *limit;
++	char *first_de, *de_buf, *limit;
+ 	unsigned long offset = 0;
+ 	unsigned int rec_len, new_rec_len, free_space;
+ 
+@@ -3352,14 +3361,16 @@ static int ocfs2_find_dir_space_id(struct inode *dir, struct buffer_head *di_bh,
+ 	else
+ 		free_space = dir->i_sb->s_blocksize - i_size_read(dir);
+ 
+-	de_buf = di->id2.i_data.id_data;
++	first_de = di->id2.i_data.id_data;
++	de_buf = first_de;
+ 	limit = de_buf + i_size_read(dir);
+ 	rec_len = OCFS2_DIR_REC_LEN(namelen);
+ 
+ 	while (de_buf < limit) {
+ 		de = (struct ocfs2_dir_entry *)de_buf;
+ 
+-		if (!ocfs2_check_dir_entry(dir, de, di_bh, offset)) {
++		if (!ocfs2_check_dir_entry(dir, de, di_bh, first_de,
++					   i_size_read(dir), offset)) {
+ 			ret = -ENOENT;
+ 			goto out;
+ 		}
+@@ -3441,7 +3452,8 @@ static int ocfs2_find_dir_space_el(struct inode *dir, const char *name,
+ 			/* move to next block */
+ 			de = (struct ocfs2_dir_entry *) bh->b_data;
+ 		}
+-		if (!ocfs2_check_dir_entry(dir, de, bh, offset)) {
++		if (!ocfs2_check_dir_entry(dir, de, bh, bh->b_data, blocksize,
++					   offset)) {
+ 			status = -ENOENT;
+ 			goto bail;
+ 		}
+diff --git a/sound/core/pcm_dmaengine.c b/sound/core/pcm_dmaengine.c
+index cc5db93b9132c..4786b5a0b984f 100644
+--- a/sound/core/pcm_dmaengine.c
++++ b/sound/core/pcm_dmaengine.c
+@@ -352,8 +352,12 @@ EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_open_request_chan);
+ int snd_dmaengine_pcm_sync_stop(struct snd_pcm_substream *substream)
+ {
+ 	struct dmaengine_pcm_runtime_data *prtd = substream_to_prtd(substream);
++	struct dma_tx_state state;
++	enum dma_status status;
+ 
+-	dmaengine_synchronize(prtd->dma_chan);
++	status = dmaengine_tx_status(prtd->dma_chan, prtd->cookie, &state);
++	if (status != DMA_PAUSED)
++		dmaengine_synchronize(prtd->dma_chan);
+ 
+ 	return 0;
+ }
+diff --git a/sound/core/seq/seq_ump_client.c b/sound/core/seq/seq_ump_client.c
+index c627d72f7fe20..9cdfbeae3ed59 100644
+--- a/sound/core/seq/seq_ump_client.c
++++ b/sound/core/seq/seq_ump_client.c
+@@ -28,6 +28,7 @@ struct seq_ump_group {
+ 	int group;			/* group index (0-based) */
+ 	unsigned int dir_bits;		/* directions */
+ 	bool active;			/* activeness */
++	bool valid;			/* valid group (referred by blocks) */
+ 	char name[64];			/* seq port name */
+ };
+ 
+@@ -210,6 +211,13 @@ static void fill_port_info(struct snd_seq_port_info *port,
+ 		sprintf(port->name, "Group %d", group->group + 1);
+ }
+ 
++/* skip non-existing group for static blocks */
++static bool skip_group(struct seq_ump_client *client, struct seq_ump_group *group)
++{
++	return !group->valid &&
++		(client->ump->info.flags & SNDRV_UMP_EP_INFO_STATIC_BLOCKS);
++}
++
+ /* create a new sequencer port per UMP group */
+ static int seq_ump_group_init(struct seq_ump_client *client, int group_index)
+ {
+@@ -217,6 +225,9 @@ static int seq_ump_group_init(struct seq_ump_client *client, int group_index)
+ 	struct snd_seq_port_info *port __free(kfree) = NULL;
+ 	struct snd_seq_port_callback pcallbacks;
+ 
++	if (skip_group(client, group))
++		return 0;
++
+ 	port = kzalloc(sizeof(*port), GFP_KERNEL);
+ 	if (!port)
+ 		return -ENOMEM;
+@@ -250,6 +261,9 @@ static void update_port_infos(struct seq_ump_client *client)
+ 		return;
+ 
+ 	for (i = 0; i < SNDRV_UMP_MAX_GROUPS; i++) {
++		if (skip_group(client, &client->groups[i]))
++			continue;
++
+ 		old->addr.client = client->seq_client;
+ 		old->addr.port = i;
+ 		err = snd_seq_kernel_client_ctl(client->seq_client,
+@@ -284,6 +298,7 @@ static void update_group_attrs(struct seq_ump_client *client)
+ 		group->dir_bits = 0;
+ 		group->active = 0;
+ 		group->group = i;
++		group->valid = false;
+ 	}
+ 
+ 	list_for_each_entry(fb, &client->ump->block_list, list) {
+@@ -291,6 +306,7 @@ static void update_group_attrs(struct seq_ump_client *client)
+ 			break;
+ 		group = &client->groups[fb->info.first_group];
+ 		for (i = 0; i < fb->info.num_groups; i++, group++) {
++			group->valid = true;
+ 			if (fb->info.active)
+ 				group->active = 1;
+ 			switch (fb->info.direction) {
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 766f0b1d3e9d6..6f4512b598eaa 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10384,6 +10384,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ 	SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
+ 	SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
++	SND_PCI_QUIRK(0x10ec, 0x119e, "Positivo SU C1400", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x10ec, 0x11bc, "VAIO VJFE-IL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x124c, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+@@ -10398,6 +10399,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc1a3, "Samsung Galaxy Book Pro (NP935XDB-KC1SE)", ALC298_FIXUP_SAMSUNG_AMP),
++	SND_PCI_QUIRK(0x144d, 0xc1a4, "Samsung Galaxy Book Pro 360 (NT935QBD)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc1a6, "Samsung Galaxy Book Pro 360 (NP930QBD)", ALC298_FIXUP_SAMSUNG_AMP),
+ 	SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ 	SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_AMP),
+@@ -10539,6 +10541,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x231a, "Thinkpad Z16 Gen2", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD),
+ 	SND_PCI_QUIRK(0x17aa, 0x231e, "Thinkpad", ALC287_FIXUP_LENOVO_THKPAD_WH_ALC1318),
+ 	SND_PCI_QUIRK(0x17aa, 0x231f, "Thinkpad", ALC287_FIXUP_LENOVO_THKPAD_WH_ALC1318),
++	SND_PCI_QUIRK(0x17aa, 0x2326, "Hera2", ALC287_FIXUP_TAS2781_I2C),
+ 	SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ 	SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ 	SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-07-27 22:04 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-07-27 22:04 UTC (permalink / raw
  To: gentoo-commits

commit:     0c10794820c6ae659983deb3128623ef16eb4e7b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jul 27 22:04:00 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jul 27 22:04:00 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0c107948

BMQ(BitMap Queue) Scheduler. use=experimental

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |     8 +
 5020_BMQ-and-PDS-io-scheduler-v6.10-r0.patch | 11474 +++++++++++++++++++++++++
 5021_BMQ-and-PDS-gentoo-defaults.patch       |    12 +
 3 files changed, 11494 insertions(+)

diff --git a/0000_README b/0000_README
index f3d968e8..f2aa0e92 100644
--- a/0000_README
+++ b/0000_README
@@ -90,3 +90,11 @@ Desc:   Add Gentoo Linux support config settings and defaults.
 Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
+
+Patch:  5020_BMQ-and-PDS-io-scheduler-v6.10-r0.patch
+From:   https://github.com/Frogging-Family/linux-tkg
+Desc:   BMQ (BitMap Queue) scheduler. A new CPU scheduler developed from PDS (included). Inspired by the scheduler in Zircon.
+
+Patch:  5021_BMQ-and-PDS-gentoo-defaults.patch
+From:   https://gitweb.gentoo.org/proj/linux-patches.git/
+Desc:   Set Gentoo defaults for BMQ; SCHED_ALT defaults to n.

diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.10-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v6.10-r0.patch
new file mode 100644
index 00000000..5f577d24
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.10-r0.patch
@@ -0,0 +1,11474 @@
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+--- a/Documentation/admin-guide/sysctl/kernel.rst	2024-07-15 00:43:32.000000000 +0200
++++ b/Documentation/admin-guide/sysctl/kernel.rst	2024-07-16 11:57:40.064869171 +0200
+@@ -1673,3 +1673,12 @@ is 10 seconds.
+ 
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines what type of yield is performed
++by calls to sched_yield().
++
++  0 - No yield.
++  1 - Requeue task. (default)
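
As a quick illustration (not part of the patch), a minimal userspace sketch in C
that reads and then sets this tunable, assuming the BMQ/PDS patch is applied and
/proc/sys/kernel/yield_type exists; writing the value requires root.

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/yield_type", "r");
	int val = -1;

	if (!f) {
		perror("open /proc/sys/kernel/yield_type");
		return 1;
	}
	if (fscanf(f, "%d", &val) == 1)
		printf("current yield_type: %d\n", val);
	fclose(f);

	f = fopen("/proc/sys/kernel/yield_type", "w");	/* writing needs root */
	if (f) {
		fprintf(f, "0\n");	/* 0 - No yield */
		fclose(f);
	}
	return 0;
}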
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+--- a/Documentation/scheduler/sched-BMQ.txt	1970-01-01 01:00:00.000000000 +0100
++++ b/Documentation/scheduler/sched-BMQ.txt	2024-07-16 11:57:40.268882498 +0200
+@@ -0,0 +1,110 @@
++                         BitMap queue CPU Scheduler
++                         --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++   Overview
++   Task policy
++   Priority management
++   BitMap Queue
++   CPU Assignment and Migration
++
++
++Background
++==========
++
++BitMap Queue CPU scheduler, referred to as BMQ from here on, is an evolution
++of previous Priority and Deadline based Skiplist multiple queue scheduler(PDS),
++and inspired by the Zircon scheduler. Its goal is to keep the scheduler code
++simple while remaining efficient and scalable for interactive tasks such as
++desktop use, movie playback and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run queue
++and is responsible for scheduling the tasks that are put into its own
++run queue.
++
++The run queue is a set of priority queues. Note that, in terms of data
++structure, these queues are FIFO queues for non-rt tasks and priority queues
++for rt tasks. See BitMap Queue below for details. BMQ is optimized for non-rt
++tasks, since most applications are non-rt tasks. Whether a queue is FIFO or
++priority based, each queue holds an ordered list of runnable tasks awaiting
++execution and the data structures are the same. When it is time for a new task
++to run, the scheduler simply looks up the lowest numbered queue that contains
++a task and runs the first task from the head of that queue. The per-CPU idle
++task is also kept in the run queue, so the scheduler can always find a task to
++run from its run queue.
++
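For illustration only, a small standalone C sketch of the "lowest numbered
non-empty queue" lookup described above; the bitmap width, queue array and task
names are invented for the example and this is not the kernel implementation.

#include <stdio.h>

#define NR_QUEUES 64

struct toy_rq {
	unsigned long long bitmap;	/* bit n set => queue n is non-empty */
	const char *head[NR_QUEUES];	/* stand-in for the per-queue task lists */
};

/* Pick the first task from the lowest numbered non-empty queue. */
static const char *toy_pick_next(const struct toy_rq *rq)
{
	if (!rq->bitmap)
		return "idle";			/* the idle task is always runnable */
	int q = __builtin_ctzll(rq->bitmap);	/* index of the lowest set bit */
	return rq->head[q];
}

int main(void)
{
	struct toy_rq rq = { 0 };

	rq.head[3] = "taskA";
	rq.bitmap |= 1ULL << 3;
	rq.head[10] = "taskB";
	rq.bitmap |= 1ULL << 10;
	printf("next: %s\n", toy_pick_next(&rq));	/* prints taskA (queue 3) */
	return 0;
}
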
++Each task is assigned the same timeslice (default 4ms) when it is picked to
++start running. A task is reinserted at the end of the appropriate priority
++queue when it uses up its whole timeslice. When the scheduler selects a new
++task from the priority queue, it sets the CPU's preemption timer for the
++remainder of the previous timeslice. When that timer fires, the scheduler
++stops execution of that task, selects another task and starts over again.
++
++If a task blocks waiting for a shared resource then it's taken out of its
++priority queue and is placed in a wait queue for the shared resource. When it
++is unblocked it will be reinserted in the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies,
++like the mainline CFS scheduler. But BMQ is heavily optimized for non-rt
++tasks, that is, NORMAL/BATCH/IDLE policy tasks. Below are the implementation
++details of each policy.
++
++DEADLINE
++	It is squashed as priority 0 FIFO task.
++
++FIFO/RR
++	All RT tasks share one single priority queue in the BMQ run queue design.
++The complexity of the insert operation is O(n). BMQ is not designed for
++systems that run mostly rt policy tasks.
++
++NORMAL/BATCH/IDLE
++	BATCH and IDLE tasks are treated as the same policy. They compete for CPU
++with NORMAL policy tasks, but they simply don't get boosted. To control the
++priority of NORMAL/BATCH/IDLE tasks, use nice levels.
++
++ISO
++	ISO policy is not supported in BMQ. Please use a nice level -20 NORMAL
++policy task instead.
++
++Priority management
++-------------------
++
++RT tasks have priorities from 0 to 99. For non-rt tasks, there are three
++different factors used to determine the effective priority of a task; the
++effective priority is what determines which queue the task will be placed in.
++
++The first factor is simply the task's static priority, which is assigned from
++the task's nice level: [-20, 19] from userland's point of view and [0, 39]
++internally.
++
++The second factor is the priority boost. This is a value bounded between
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] that is used to offset the base
++priority; it is modified in the following cases:
++
++*When a thread has used up its entire timeslice, it is always deboosted by
++increasing its boost value by one.
++*When a thread gives up CPU control (voluntarily or not) to reschedule, and
++its switch-in time (the time since it was last switched in and run) is below
++the threshold based on its priority boost, it is boosted by decreasing its
++boost value by one, but the value is capped at 0 (it won't go negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
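
For illustration only, a toy C model of the boost/deboost rules described
above. MAX_PRIORITY_ADJ matches the BMQ value defined later in this patch, but
the clamping choices and the simple static+boost formula here are simplifying
assumptions, not the kernel implementation.

#include <stdio.h>

#define MAX_PRIORITY_ADJ 12	/* BMQ value used by this patch */

struct toy_task {
	int static_prio;	/* from nice level, 0..39 */
	int boost_prio;		/* priority adjustment, see rules above */
};

/* Timeslice used up: deboost by one, bounded by MAX_PRIORITY_ADJ. */
static void on_timeslice_expired(struct toy_task *t)
{
	if (t->boost_prio < MAX_PRIORITY_ADJ)
		t->boost_prio++;
}

/* Quick voluntary switch: boost by one, capped at 0 as described above. */
static void on_quick_voluntary_switch(struct toy_task *t)
{
	if (t->boost_prio > 0)
		t->boost_prio--;
}

static int effective_prio(const struct toy_task *t)
{
	return t->static_prio + t->boost_prio;	/* lower value runs earlier */
}

int main(void)
{
	struct toy_task t = { .static_prio = 20, .boost_prio = 0 };

	on_timeslice_expired(&t);		/* CPU-bound behaviour: penalty */
	printf("after expiry: %d\n", effective_prio(&t));
	on_quick_voluntary_switch(&t);		/* interactive behaviour: boost */
	printf("after quick switch: %d\n", effective_prio(&t));
	return 0;
}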
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+--- a/fs/proc/base.c	2024-07-15 00:43:32.000000000 +0200
++++ b/fs/proc/base.c	2024-07-16 11:57:44.113126230 +0200
+@@ -481,7 +481,7 @@ static int proc_pid_schedstat(struct seq
+ 		seq_puts(m, "0 0 0\n");
+ 	else
+ 		seq_printf(m, "%llu %llu %lu\n",
+-		   (unsigned long long)task->se.sum_exec_runtime,
++		   (unsigned long long)tsk_seruntime(task),
+ 		   (unsigned long long)task->sched_info.run_delay,
+ 		   task->sched_info.pcount);
+ 
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+--- a/include/asm-generic/resource.h	2024-07-15 00:43:32.000000000 +0200
++++ b/include/asm-generic/resource.h	2024-07-16 11:57:44.173129878 +0200
+@@ -23,7 +23,7 @@
+ 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
+ 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
+-	[RLIMIT_NICE]		= { 0, 0 },				\
++	[RLIMIT_NICE]		= { 30, 30 },				\
+ 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
+ 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ }
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+--- a/include/linux/sched/deadline.h	2024-07-15 00:43:32.000000000 +0200
++++ b/include/linux/sched/deadline.h	2024-07-16 11:57:44.289136930 +0200
+@@ -2,6 +2,25 @@
+ #ifndef _LINUX_SCHED_DEADLINE_H
+ #define _LINUX_SCHED_DEADLINE_H
+ 
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++	return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p)	(0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p)	((p)->dl.deadline)
++
+ /*
+  * SCHED_DEADLINE tasks has negative priorities, reflecting
+  * the fact that any of them has higher prio than RT and
+@@ -23,6 +42,7 @@ static inline int dl_task(struct task_st
+ {
+ 	return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+--- a/include/linux/sched/prio.h	2024-07-15 00:43:32.000000000 +0200
++++ b/include/linux/sched/prio.h	2024-07-16 11:57:44.289136930 +0200
+@@ -18,6 +18,28 @@
+ #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
+ 
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ	(12)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ	(0)
++#endif
++
++#define MIN_NORMAL_PRIO		(128)
++#define NORMAL_PRIO_NUM		(64)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO		(MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+  * Convert user-nice values [ -20 ... 0 ... 19 ]
+  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+--- a/include/linux/sched/rt.h	2024-07-15 00:43:32.000000000 +0200
++++ b/include/linux/sched/rt.h	2024-07-16 11:57:44.289136930 +0200
+@@ -24,8 +24,10 @@ static inline bool task_is_realtime(stru
+ 
+ 	if (policy == SCHED_FIFO || policy == SCHED_RR)
+ 		return true;
++#ifndef CONFIG_SCHED_ALT
+ 	if (policy == SCHED_DEADLINE)
+ 		return true;
++#endif
+ 	return false;
+ }
+ 
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+--- a/include/linux/sched/topology.h	2024-07-15 00:43:32.000000000 +0200
++++ b/include/linux/sched/topology.h	2024-07-16 11:57:44.289136930 +0200
+@@ -244,7 +244,8 @@ static inline bool cpus_share_resources(
+ 
+ #endif	/* !CONFIG_SMP */
+ 
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++	!defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+--- a/include/linux/sched.h	2024-07-15 00:43:32.000000000 +0200
++++ b/include/linux/sched.h	2024-07-16 11:57:44.285136688 +0200
+@@ -774,9 +774,16 @@ struct task_struct {
+ 	struct alloc_tag		*alloc_tag;
+ #endif
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
+ 	int				on_cpu;
++#endif
++
++#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_ALT)
+ 	struct __call_single_node	wake_entry;
++#endif
++
++#ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned int			wakee_flips;
+ 	unsigned long			wakee_flip_decay_ts;
+ 	struct task_struct		*last_wakee;
+@@ -790,6 +797,7 @@ struct task_struct {
+ 	 */
+ 	int				recent_used_cpu;
+ 	int				wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ 	int				on_rq;
+ 
+@@ -798,6 +806,19 @@ struct task_struct {
+ 	int				normal_prio;
+ 	unsigned int			rt_priority;
+ 
++#ifdef CONFIG_SCHED_ALT
++	u64				last_ran;
++	s64				time_slice;
++	struct list_head		sq_node;
++#ifdef CONFIG_SCHED_BMQ
++	int				boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++	u64				deadline;
++#endif /* CONFIG_SCHED_PDS */
++	/* sched_clock time spent running */
++	u64				sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ 	struct sched_entity		se;
+ 	struct sched_rt_entity		rt;
+ 	struct sched_dl_entity		dl;
+@@ -809,6 +830,7 @@ struct task_struct {
+ 	unsigned long			core_cookie;
+ 	unsigned int			core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_CGROUP_SCHED
+ 	struct task_group		*sched_task_group;
+@@ -1571,6 +1593,15 @@ struct task_struct {
+ 	 */
+ };
+ 
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t)		((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t)		(0UL)
++#else /* CFS */
++#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t)	((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ #define TASK_REPORT_IDLE	(TASK_REPORT + 1)
+ #define TASK_REPORT_MAX		(TASK_REPORT_IDLE << 1)
+ 
+diff --git a/init/init_task.c b/init/init_task.c
+--- a/init/init_task.c	2024-07-15 00:43:32.000000000 +0200
++++ b/init/init_task.c	2024-07-16 11:57:44.401143740 +0200
+@@ -70,9 +70,15 @@ struct task_struct init_task __aligned(L
+ 	.stack		= init_stack,
+ 	.usage		= REFCOUNT_INIT(2),
+ 	.flags		= PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++	.prio		= DEFAULT_PRIO,
++	.static_prio	= DEFAULT_PRIO,
++	.normal_prio	= DEFAULT_PRIO,
++#else
+ 	.prio		= MAX_PRIO - 20,
+ 	.static_prio	= MAX_PRIO - 20,
+ 	.normal_prio	= MAX_PRIO - 20,
++#endif
+ 	.policy		= SCHED_NORMAL,
+ 	.cpus_ptr	= &init_task.cpus_mask,
+ 	.user_cpus_ptr	= NULL,
+@@ -85,6 +91,16 @@ struct task_struct init_task __aligned(L
+ 	.restart_block	= {
+ 		.fn = do_no_restart_syscall,
+ 	},
++#ifdef CONFIG_SCHED_ALT
++	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++	.boost_prio	= 0,
++#endif
++#ifdef CONFIG_SCHED_PDS
++	.deadline	= 0,
++#endif
++	.time_slice	= HZ,
++#else
+ 	.se		= {
+ 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
+ 	},
+@@ -92,6 +108,7 @@ struct task_struct init_task __aligned(L
+ 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
+ 		.time_slice	= RR_TIMESLICE,
+ 	},
++#endif
+ 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
+ #ifdef CONFIG_SMP
+ 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+diff --git a/init/Kconfig b/init/Kconfig
+--- a/init/Kconfig	2024-07-15 00:43:32.000000000 +0200
++++ b/init/Kconfig	2024-07-16 11:57:44.401143740 +0200
+@@ -638,6 +638,7 @@ config TASK_IO_ACCOUNTING
+ 
+ config PSI
+ 	bool "Pressure stall information tracking"
++	depends on !SCHED_ALT
+ 	select KERNFS
+ 	help
+ 	  Collect metrics that indicate how overcommitted the CPU, memory,
+@@ -803,6 +804,7 @@ menu "Scheduler features"
+ config UCLAMP_TASK
+ 	bool "Enable utilization clamping for RT/FAIR tasks"
+ 	depends on CPU_FREQ_GOV_SCHEDUTIL
++	depends on !SCHED_ALT
+ 	help
+ 	  This feature enables the scheduler to track the clamped utilization
+ 	  of each CPU based on RUNNABLE tasks scheduled on that CPU.
+@@ -849,6 +851,35 @@ config UCLAMP_BUCKETS_COUNT
+ 
+ 	  If in doubt, use the default value.
+ 
++menuconfig SCHED_ALT
++	bool "Alternative CPU Schedulers"
++	default y
++	help
++	  This feature enables alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++	prompt "Alternative CPU Scheduler"
++	default SCHED_BMQ
++
++config SCHED_BMQ
++	bool "BMQ CPU scheduler"
++	help
++	  The BitMap Queue CPU scheduler for excellent interactivity and
++	  responsiveness on the desktop and solid scalability on normal
++	  hardware and commodity servers.
++
++config SCHED_PDS
++	bool "PDS CPU scheduler"
++	help
++	  The Priority and Deadline based Skip list multiple queue CPU
++	  Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+ 
+ #
+@@ -914,6 +945,7 @@ config NUMA_BALANCING
+ 	depends on ARCH_SUPPORTS_NUMA_BALANCING
+ 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ 	depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++	depends on !SCHED_ALT
+ 	help
+ 	  This option adds support for automatic NUMA aware memory/task placement.
+ 	  The mechanism is quite primitive and is based on migrating memory when
+@@ -1015,6 +1047,7 @@ config FAIR_GROUP_SCHED
+ 	depends on CGROUP_SCHED
+ 	default CGROUP_SCHED
+ 
++if !SCHED_ALT
+ config CFS_BANDWIDTH
+ 	bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
+ 	depends on FAIR_GROUP_SCHED
+@@ -1037,6 +1070,7 @@ config RT_GROUP_SCHED
+ 	  realtime bandwidth for them.
+ 	  See Documentation/scheduler/sched-rt-group.rst for more information.
+ 
++endif #!SCHED_ALT
+ endif #CGROUP_SCHED
+ 
+ config SCHED_MM_CID
+@@ -1285,6 +1319,7 @@ config CHECKPOINT_RESTORE
+ 
+ config SCHED_AUTOGROUP
+ 	bool "Automatic process group scheduling"
++	depends on !SCHED_ALT
+ 	select CGROUPS
+ 	select CGROUP_SCHED
+ 	select FAIR_GROUP_SCHED
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+--- a/kernel/cgroup/cpuset.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/cgroup/cpuset.c	2024-07-16 11:57:44.421144957 +0200
+@@ -846,7 +846,7 @@ out:
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * Helper routine for generate_sched_domains().
+  * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1245,7 +1245,7 @@ static void rebuild_sched_domains_locked
+ 	/* Have scheduler rebuild the domains */
+ 	partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ static void rebuild_sched_domains_locked(void)
+ {
+ }
+@@ -3301,12 +3301,15 @@ static int cpuset_can_attach(struct cgro
+ 				goto out_unlock;
+ 		}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 		if (dl_task(task)) {
+ 			cs->nr_migrate_dl_tasks++;
+ 			cs->sum_migrate_dl_bw += task->dl.dl_bw;
+ 		}
++#endif
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (!cs->nr_migrate_dl_tasks)
+ 		goto out_success;
+ 
+@@ -3327,6 +3330,7 @@ static int cpuset_can_attach(struct cgro
+ 	}
+ 
+ out_success:
++#endif
+ 	/*
+ 	 * Mark attach is in progress.  This makes validate_change() fail
+ 	 * changes which zero cpus/mems_allowed.
+@@ -3350,12 +3354,14 @@ static void cpuset_cancel_attach(struct
+ 	if (!cs->attach_in_progress)
+ 		wake_up(&cpuset_attach_wq);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (cs->nr_migrate_dl_tasks) {
+ 		int cpu = cpumask_any(cs->effective_cpus);
+ 
+ 		dl_bw_free(cpu, cs->sum_migrate_dl_bw);
+ 		reset_migrate_dl_data(cs);
+ 	}
++#endif
+ 
+ 	mutex_unlock(&cpuset_mutex);
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+--- a/kernel/delayacct.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/delayacct.c	2024-07-16 11:57:44.421144957 +0200
+@@ -149,7 +149,7 @@ int delayacct_add_tsk(struct taskstats *
+ 	 */
+ 	t1 = tsk->sched_info.pcount;
+ 	t2 = tsk->sched_info.run_delay;
+-	t3 = tsk->se.sum_exec_runtime;
++	t3 = tsk_seruntime(tsk);
+ 
+ 	d->cpu_count += t1;
+ 
+diff --git a/kernel/exit.c b/kernel/exit.c
+--- a/kernel/exit.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/exit.c	2024-07-16 11:57:44.425145200 +0200
+@@ -175,7 +175,7 @@ static void __exit_signal(struct task_st
+ 			sig->curr_target = next_thread(tsk);
+ 	}
+ 
+-	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++	add_device_randomness((const void*) &tsk_seruntime(tsk),
+ 			      sizeof(unsigned long long));
+ 
+ 	/*
+@@ -196,7 +196,7 @@ static void __exit_signal(struct task_st
+ 	sig->inblock += task_io_get_inblock(tsk);
+ 	sig->oublock += task_io_get_oublock(tsk);
+ 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
+-	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++	sig->sum_sched_runtime += tsk_seruntime(tsk);
+ 	sig->nr_threads--;
+ 	__unhash_process(tsk, group_dead);
+ 	write_sequnlock(&sig->stats_lock);
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+--- a/kernel/Kconfig.preempt	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/Kconfig.preempt	2024-07-16 11:57:44.409144227 +0200
+@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
+ 
+ config SCHED_CORE
+ 	bool "Core Scheduling for SMT"
+-	depends on SCHED_SMT
++	depends on SCHED_SMT && !SCHED_ALT
+ 	help
+ 	  This option permits Core Scheduling, a means of coordinated task
+ 	  selection across SMT siblings. When enabled -- see
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+--- a/kernel/locking/rtmutex.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/locking/rtmutex.c	2024-07-16 11:57:44.433145686 +0200
+@@ -363,7 +363,7 @@ waiter_update_prio(struct rt_mutex_waite
+ 	lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
+ 
+ 	waiter->tree.prio = __waiter_prio(task);
+-	waiter->tree.deadline = task->dl.deadline;
++	waiter->tree.deadline = __tsk_deadline(task);
+ }
+ 
+ /*
+@@ -384,16 +384,20 @@ waiter_clone_prio(struct rt_mutex_waiter
+  * Only use with rt_waiter_node_{less,equal}()
+  */
+ #define task_to_waiter_node(p)	\
+-	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+ #define task_to_waiter(p)	\
+ 	&(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
+ 
+ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ 					       struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline < right->deadline);
++#else
+ 	if (left->prio < right->prio)
+ 		return 1;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -402,16 +406,22 @@ static __always_inline int rt_waiter_nod
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return dl_time_before(left->deadline, right->deadline);
++#endif
+ 
+ 	return 0;
++#endif
+ }
+ 
+ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ 						 struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline == right->deadline);
++#else
+ 	if (left->prio != right->prio)
+ 		return 0;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -420,8 +430,10 @@ static __always_inline int rt_waiter_nod
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return left->deadline == right->deadline;
++#endif
+ 
+ 	return 1;
++#endif
+ }
+ 
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+--- a/kernel/sched/alt_core.c	1970-01-01 01:00:00.000000000 +0100
++++ b/kernel/sched/alt_core.c	2024-07-16 11:57:44.445146416 +0200
+@@ -0,0 +1,8963 @@
++/*
++ *  kernel/sched/alt_core.c
++ *
++ *  Core alternative kernel scheduler code and related syscalls
++ *
++ *  Copyright (C) 1991-2002  Linus Torvalds
++ *
++ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
++ *		a whole lot of those previous things.
++ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
++ *		scheduler by Alfred Chen.
++ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/clock.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/hotplug.h>
++#include <linux/sched/init.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/nmi.h>
++#include <linux/rseq.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/irq_regs.h>
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#include <trace/events/ipi.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++#include "smp.h"
++
++#include "pelt.h"
++
++#include "../../io_uring/io-wq.h"
++#include "../smpboot.h"
++
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x)	(1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x)	(0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v6.10-r0"
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p)		rt_prio((p)->prio)
++#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p)	(rt_policy((p)->policy))
++
++#define STOP_PRIO		(MAX_RT_PRIO - 1)
++
++/*
++ * Time slice
++ * (default: 4 msec, units: nanoseconds)
++ */
++unsigned int sysctl_sched_base_slice __read_mostly	= (4 << 20);
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++struct affinity_context {
++	const struct cpumask *new_mask;
++	struct cpumask *user_mask;
++	unsigned int flags;
++};
++
++/* Reschedule if less than this much of the time slice is left (in ns, ~100 μs) */
++#define RESCHED_NS		(100 << 10)
++
++/**
++ * sched_yield_type - The type of yield sched_yield() will perform.
++ * 0: No yield.
++ * 1: Requeue task. (default)
++ */
++int sched_yield_type __read_mostly = 1;
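++/*
++ * The knob is exposed as the "yield_type" sysctl documented earlier in this
++ * patch; e.g. "sysctl kernel.yield_type=0" should select the "No yield"
++ * behaviour at runtime, while the default of 1 requeues the yielding task.
++ */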
++
++#ifdef CONFIG_SMP
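++/* Mask of CPUs whose runqueue holds more than one runnable task. */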
++static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPU's number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++static DEFINE_MUTEX(sched_hotcpu_mutex);
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS + 1] ____cacheline_aligned_in_smp;
++
++static cpumask_t *const sched_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS - 1];
++#ifdef CONFIG_SCHED_SMT
++static cpumask_t *const sched_sg_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++#endif
++
++/* task function */
++static inline const struct cpumask *task_user_cpus(struct task_struct *p)
++{
++	if (!p->user_cpus_ptr)
++		return cpu_possible_mask; /* &init_task.cpus_mask */
++	return p->user_cpus_ptr;
++}
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++	int i;
++
++	bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++	for(i = 0; i < SCHED_LEVELS; i++)
++		INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Initialize the idle task and put it into the queue structure of the rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++					 struct task_struct *idle)
++{
++	INIT_LIST_HEAD(&q->heads[IDLE_TASK_SCHED_PRIO]);
++	list_add_tail(&idle->sq_node, &q->heads[IDLE_TASK_SCHED_PRIO]);
++	idle->on_rq = TASK_ON_RQ_QUEUED;
++}
++
++#define CLEAR_CACHED_PREEMPT_MASK(pr, low, high, cpu)		\
++	if (low < pr && pr <= high)				\
++		cpumask_clear_cpu(cpu, sched_preempt_mask + pr);
++
++#define SET_CACHED_PREEMPT_MASK(pr, low, high, cpu)		\
++	if (low < pr && pr <= high)				\
++		cpumask_set_cpu(cpu, sched_preempt_mask + pr);
++
++static atomic_t sched_prio_record = ATOMIC_INIT(0);
++
++/* water mark related functions */
++static inline void update_sched_preempt_mask(struct rq *rq)
++{
++	int prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	int last_prio = rq->prio;
++	int cpu, pr;
++
++	if (prio == last_prio)
++		return;
++
++	rq->prio = prio;
++#ifdef CONFIG_SCHED_PDS
++	rq->prio_idx = sched_prio2idx(rq->prio, rq);
++#endif
++	cpu = cpu_of(rq);
++	pr = atomic_read(&sched_prio_record);
++
++	if (prio < last_prio) {
++		if (IDLE_TASK_SCHED_PRIO == last_prio) {
++#ifdef CONFIG_SCHED_SMT
++			if (static_branch_likely(&sched_smt_present))
++				cpumask_andnot(sched_sg_idle_mask,
++					       sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++			cpumask_clear_cpu(cpu, sched_idle_mask);
++			last_prio -= 2;
++		}
++		CLEAR_CACHED_PREEMPT_MASK(pr, prio, last_prio, cpu);
++
++		return;
++	}
++	/* last_prio < prio */
++	if (IDLE_TASK_SCHED_PRIO == prio) {
++#ifdef CONFIG_SCHED_SMT
++		if (static_branch_likely(&sched_smt_present) &&
++		    cpumask_intersects(cpu_smt_mask(cpu), sched_idle_mask))
++			cpumask_or(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++		cpumask_set_cpu(cpu, sched_idle_mask);
++		prio -= 2;
++	}
++	SET_CACHED_PREEMPT_MASK(pr, last_prio, prio, cpu);
++}
++
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++	const struct list_head *head = &rq->queue.heads[sched_rq_prio_idx(rq)];
++
++	return list_first_entry(head, struct task_struct, sq_node);
++}
++
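++/*
++ * Return the task queued after @p on @rq. When @p is the last entry of its
++ * priority list, p->sq_node.next points back into rq->queue.heads[], which
++ * the pointer-range check below detects; the bitmap is then used to jump to
++ * the first task of the next non-empty priority level.
++ */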
++static inline struct task_struct *sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++	struct list_head *next = p->sq_node.next;
++
++	if (&rq->queue.heads[0] <= next && next < &rq->queue.heads[SCHED_LEVELS]) {
++		struct list_head *head;
++		unsigned long idx = next - &rq->queue.heads[0];
++
++		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++				    sched_idx2prio(idx, rq) + 1);
++		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++		return list_first_entry(head, struct task_struct, sq_node);
++	}
++
++	return list_next_entry(p, sq_node);
++}
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ *   p->pi_lock
++ *     rq->lock
++ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ *  rq1->lock
++ *    rq2->lock  where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ *  - sched_setaffinity()/
++ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
++ *  - set_user_nice():		p->se.load, p->*prio
++ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
++ *				p->se.load, p->rt_priority,
++ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ *  - sched_setnuma():		p->numa_preferred_nid
++ *  - sched_move_task():        p->sched_task_group
++ *  - uclamp_update_active()	p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ *   is changed locklessly using set_current_state(), __set_current_state() or
++ *   set_special_state(), see their respective comments, or by
++ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ *   concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ *   is set by activate_task() and cleared by deactivate_task(), under
++ *   rq->lock. Non-zero indicates the task is runnable, the special
++ *   ON_RQ_MIGRATING state is used for migration without holding both
++ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ *   is set by prepare_task() and cleared by finish_task() such that it will be
++ *   set before p is scheduled-in and cleared after p is scheduled-out, both
++ *   under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ *   [ The astute reader will observe that it is possible for two tasks on one
++ *     CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ *  - Don't call set_task_cpu() on a blocked task:
++ *
++ *    We don't care what CPU we're not running on, this simplifies hotplug,
++ *    the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ *  - for try_to_wake_up(), called under p->pi_lock:
++ *
++ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ *  - for migration called under rq->lock:
++ *    [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ *    o move_queued_task()
++ *    o detach_task()
++ *
++ *  - for migration called under double_rq_lock():
++ *
++ *    o __migrate_swap_task()
++ *    o push_rt_task() / pull_rt_task()
++ *    o push_dl_task() / pull_dl_task()
++ *    o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
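++/*
++ * Take whatever lock currently protects @p's scheduling state: rq->lock while
++ * the task is running or queued, nothing once it is fully blocked. *plock is
++ * set to the lock actually taken (or NULL) for __task_access_unlock().
++ */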
++static inline struct rq *__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock(&rq->lock);
++			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock(&rq->lock);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			*plock = NULL;
++			return rq;
++		}
++	}
++}
++
++static inline void __task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++	if (NULL != lock)
++		raw_spin_unlock(lock);
++}
++
++static inline struct rq *
++task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock, unsigned long *flags)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock_irqsave(&rq->lock, *flags);
++			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&rq->lock, *flags);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			raw_spin_lock_irqsave(&p->pi_lock, *flags);
++			if (likely(!p->on_cpu && !p->on_rq && rq == task_rq(p))) {
++				*plock = &p->pi_lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++		}
++	}
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock, unsigned long *flags)
++{
++	raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	lockdep_assert_held(&p->pi_lock);
++
++	for (;;) {
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++			return rq;
++		raw_spin_unlock(&rq->lock);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	for (;;) {
++		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		/*
++		 *	move_queued_task()		task_rq_lock()
++		 *
++		 *	ACQUIRE (rq->lock)
++		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
++		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
++		 *	[S] ->cpu = new_cpu		[L] task_rq()
++		 *					[L] ->on_rq
++		 *	RELEASE (rq->lock)
++		 *
++		 * If we observe the old CPU in task_rq_lock(), the acquire of
++		 * the old rq->lock will fully serialize against the stores.
++		 *
++		 * If we observe the new CPU in task_rq_lock(), the address
++		 * dependency headed by '[L] rq = task_rq()' and the acquire
++		 * will pair with the WMB to ensure we then also see migrating.
++		 */
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++			return rq;
++		}
++		raw_spin_unlock(&rq->lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
++		    rq_lock_irqsave(_T->lock, &_T->rf),
++		    rq_unlock_irqrestore(_T->lock, &_T->rf),
++		    struct rq_flags rf)
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++	raw_spinlock_t *lock;
++
++	/* Matches synchronize_rcu() in __sched_core_enable() */
++	preempt_disable();
++
++	for (;;) {
++		lock = __rq_lockp(rq);
++		raw_spin_lock_nested(lock, subclass);
++		if (likely(lock == __rq_lockp(rq))) {
++			/* preempt_count *MUST* be > 1 */
++			preempt_enable_no_resched();
++			return;
++		}
++		raw_spin_unlock(lock);
++	}
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++	raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compiler should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++	s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++	/*
++	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
++	 * this case when a previous update_rq_clock() happened inside a
++	 * {soft,}irq region.
++	 *
++	 * When this happens, we stop ->clock_task and only update the
++	 * prev_irq_time stamp to account for the part that fit, so that a next
++	 * update will consume the rest. This ensures ->clock_task is
++	 * monotonic.
++	 *
++	 * It does however cause some slight miss-attribution of {soft,}irq
++	 * time, a more accurate solution would be to update the irq_time using
++	 * the current rq->clock timestamp, except that would require using
++	 * atomic ops.
++	 */
++	if (irq_delta > delta)
++		irq_delta = delta;
++
++	rq->prev_irq_time += irq_delta;
++	delta -= irq_delta;
++	delayacct_irq(rq->curr, irq_delta);
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	if (static_key_false((&paravirt_steal_rq_enabled))) {
++		steal = paravirt_steal_clock(cpu_of(rq));
++		steal -= rq->prev_steal_time_rq;
++
++		if (unlikely(steal > delta))
++			steal = delta;
++
++		rq->prev_steal_time_rq += steal;
++		delta -= steal;
++	}
++#endif
++
++	rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	if ((irq_delta + steal))
++		update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++	if (unlikely(delta <= 0))
++		return;
++	rq->clock += delta;
++	sched_update_rq_clock(rq);
++	update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT			(8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t)		((t) >> 17)
++#define LOAD_HALF_BLOCK(t)	((t) >> 16)
++#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
++
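++/*
++ * Roughly: rq->load_history is a 32-bit busy/idle bitmap with one bit per
++ * LOAD_BLOCK (2^17 ns) window, bit 31 being the current window. The helper
++ * below shifts the history forward as time advances and records whether the
++ * runqueue had runnable tasks; rq_load_util() then scales a recent slice of
++ * that history into a 0..max utilization value.
++ */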
++static inline void rq_load_update(struct rq *rq)
++{
++	u64 time = rq->clock;
++	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp), RQ_LOAD_HISTORY_BITS - 1);
++	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++	u64 curr = !!rq->nr_running;
++
++	if (delta) {
++		rq->load_history = rq->load_history >> delta;
++
++		if (delta < RQ_UTIL_SHIFT) {
++			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++				rq->load_history ^= LOAD_BLOCK_BIT(delta);
++		}
++
++		rq->load_block = BLOCK_MASK(time) * prev;
++	} else {
++		rq->load_block += (time - rq->load_stamp) * prev;
++	}
++	if (prev ^ curr)
++		rq->load_history ^= CURRENT_LOAD_BIT;
++	rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu)
++{
++	return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid.  Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++	struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, cpu_of(rq)));
++	if (data)
++		data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If the tick is needed, let's send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++	int cpu = cpu_of(rq);
++
++	if (!tick_nohz_full_cpu(cpu))
++		return;
++
++	if (rq->nr_running < 2)
++		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++	else
++		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++	return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++	unsigned long ip = 0;
++	unsigned int state;
++
++	if (!p || p == current)
++		return 0;
++
++	/* Only get wchan if task is blocked and we can keep it that way. */
++	raw_spin_lock_irq(&p->pi_lock);
++	state = READ_ONCE(p->__state);
++	smp_rmb(); /* see try_to_wake_up() */
++	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++		ip = __get_wchan(p);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	return ip;
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
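++/*
++ * Each rq keeps one list head per priority level plus a bitmap of non-empty
++ * levels. __SCHED_DEQUEUE_TASK() clears the level's bit when its list becomes
++ * empty (p->sq_node.prev == p->sq_node.next means only the head remains) and
++ * __SCHED_ENQUEUE_TASK() sets it when @p becomes the first entry; @func runs
++ * on those transitions, e.g. update_sched_preempt_mask(rq) in the callers.
++ */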
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)					\
++	sched_info_dequeue(rq, p);							\
++											\
++	__list_del_entry(&p->sq_node);							\
++	if (p->sq_node.prev == p->sq_node.next) {					\
++		clear_bit(sched_idx2prio(p->sq_node.next - &rq->queue.heads[0], rq),	\
++			  rq->queue.bitmap);						\
++		func;									\
++	}
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags, func)					\
++	sched_info_enqueue(rq, p);							\
++	{										\
++	int idx, prio;									\
++	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);						\
++	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);				\
++	if (list_is_first(&p->sq_node, &rq->queue.heads[idx])) {			\
++		set_bit(prio, rq->queue.bitmap);					\
++		func;									\
++	}										\
++	}
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task residing on cpu%d from cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++	--rq->nr_running;
++#ifdef CONFIG_SMP
++	if (1 == rq->nr_running)
++		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task residing on cpu%d to cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_ENQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++	++rq->nr_running;
++#ifdef CONFIG_SMP
++	if (2 == rq->nr_running)
++		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq)
++{
++	struct list_head *node = &p->sq_node;
++	int deq_idx, idx, prio;
++
++	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task residing on cpu%d\n",
++		  cpu_of(rq), task_cpu(p));
++#endif
++	if (list_is_last(node, &rq->queue.heads[idx]))
++		return;
++
++	__list_del_entry(node);
++	if (node->prev == node->next && (deq_idx = node->next - &rq->queue.heads[0]) != idx)
++		clear_bit(sched_idx2prio(deq_idx, rq), rq->queue.bitmap);
++
++	list_add_tail(node, &rq->queue.heads[idx]);
++	if (list_is_first(node, &rq->queue.heads[idx]))
++		set_bit(prio, rq->queue.bitmap);
++	update_sched_preempt_mask(rq);
++}
++
++/*
++ * cmpxchg based fetch_or, macro so it works for different integer types
++ */
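++/*
++ * Like an atomic fetch_or(): sets the bits in @mask and evaluates to the old
++ * value of *@ptr, so e.g. set_nr_and_not_polling() below can tell whether
++ * TIF_POLLING_NRFLAG was already set when TIF_NEED_RESCHED was added.
++ */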
++#define fetch_or(ptr, mask)						\
++	({								\
++		typeof(ptr) _ptr = (ptr);				\
++		typeof(mask) _mask = (mask);				\
++		typeof(*_ptr) _val = *_ptr;				\
++									\
++		do {							\
++		} while (!try_cmpxchg(_ptr, &_val, _val | _mask));	\
++	_val;								\
++})
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	typeof(ti->flags) val = READ_ONCE(ti->flags);
++
++	do {
++		if (!(val & _TIF_POLLING_NRFLAG))
++			return false;
++		if (val & _TIF_NEED_RESCHED)
++			return true;
++	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
++
++	return true;
++}
++
++#else
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++	set_tsk_need_resched(p);
++	return true;
++}
++
++#ifdef CONFIG_SMP
++static inline bool set_nr_if_polling(struct task_struct *p)
++{
++	return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	struct wake_q_node *node = &task->wake_q;
++
++	/*
++	 * Atomically grab the task, if ->wake_q is !nil already it means
++	 * it's already queued (either by us or someone else) and will get the
++	 * wakeup due to that.
++	 *
++	 * In order to ensure that a pending wakeup will observe our pending
++	 * state, even in the failed case, an explicit smp_mb() must be used.
++	 */
++	smp_mb__before_atomic();
++	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++		return false;
++
++	/*
++	 * The head is context local, there can be no concurrency.
++	 */
++	*head->lastp = node;
++	head->lastp = &node->next;
++	return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	if (__wake_q_add(head, task))
++		get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++	if (!__wake_q_add(head, task))
++		put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++	struct wake_q_node *node = head->first;
++
++	while (node != WAKE_Q_TAIL) {
++		struct task_struct *task;
++
++		task = container_of(node, struct task_struct, wake_q);
++		/* task can safely be re-inserted now: */
++		node = node->next;
++		task->wake_q.next = NULL;
++
++		/*
++		 * wake_up_process() executes a full barrier, which pairs with
++		 * the queueing in wake_q_add() so as not to miss wakeups.
++		 */
++		wake_up_process(task);
++		put_task_struct(task);
++	}
++}
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++static inline void resched_curr(struct rq *rq)
++{
++	struct task_struct *curr = rq->curr;
++	int cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	if (test_tsk_need_resched(curr))
++		return;
++
++	cpu = cpu_of(rq);
++	if (cpu == smp_processor_id()) {
++		set_tsk_need_resched(curr);
++		set_preempt_need_resched();
++		return;
++	}
++
++	if (set_nr_and_not_polling(curr))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++void resched_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (cpu_online(cpu) || cpu == smp_processor_id())
++		resched_curr(cpu_rq(cpu));
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++/*
++ * This routine will record that the CPU is going idle with tick stopped.
++ * This info will be used in performing idle load balancing in the future.
++ */
++void nohz_balance_enter_idle(int cpu) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU.  This is good for power-savings.
++ *
++ * We don't do similar optimization for completely idle system, as
++ * selecting an idle CPU will add more delays to the timers than intended
++ * (as that CPU's timer base may not be uptodate wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++	int i, cpu = smp_processor_id(), default_cpu = -1;
++	struct cpumask *mask;
++	const struct cpumask *hk_mask;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
++		if (!idle_cpu(cpu))
++			return cpu;
++		default_cpu = cpu;
++	}
++
++	hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
++
++	for (mask = per_cpu(sched_cpu_topo_masks, cpu);
++	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++		for_each_cpu_and(i, mask, hk_mask)
++			if (!idle_cpu(i))
++				return i;
++
++	if (default_cpu == -1)
++		default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
++	cpu = default_cpu;
++
++	return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (cpu == smp_processor_id())
++		return;
++
++	/*
++	 * Set TIF_NEED_RESCHED and send an IPI if in the non-polling
++	 * part of the idle loop. This forces an exit from the idle loop
++	 * and a round trip to schedule(). Now this could be optimized
++	 * because a simple new idle loop iteration is enough to
++	 * re-evaluate the next tick. Provided some re-ordering of tick
++	 * nohz functions that would need to follow TIF_NR_POLLING
++	 * clearing:
++	 *
++	 * - On most archs, a simple fetch_or on ti::flags with a
++	 *   "0" value would be enough to know if an IPI needs to be sent.
++	 *
++	 * - x86 needs to perform a last need_resched() check between
++	 *   monitor and mwait which doesn't take timers into account.
++	 *   There a dedicated TIF_TIMER flag would be required to
++	 *   fetch_or here and be checked along with TIF_NEED_RESCHED
++	 *   before mwait().
++	 *
++	 * However, remote timer enqueue is not such a frequent event
++	 * and testing of the above solutions didn't appear to report
++	 * much benefits.
++	 */
++	if (set_nr_and_not_polling(rq->idle))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++	/*
++	 * We just need the target to call irq_exit() and re-evaluate
++	 * the next tick. The nohz full kick at least implies that.
++	 * If needed we can still optimize that later with an
++	 * empty IRQ.
++	 */
++	if (cpu_is_offline(cpu))
++		return true;  /* Don't try to wake offline CPUs. */
++	if (tick_nohz_full_cpu(cpu)) {
++		if (cpu != smp_processor_id() ||
++		    tick_nohz_tick_stopped())
++			tick_nohz_full_kick_cpu(cpu);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++	if (!wake_up_full_nohz_cpu(cpu))
++		wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++	struct rq *rq = info;
++	int cpu = cpu_of(rq);
++	unsigned int flags;
++
++	/*
++	 * Release the rq::nohz_csd.
++	 */
++	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++	WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++	rq->idle_balance = idle_cpu(cpu);
++	if (rq->idle_balance && !need_resched()) {
++		rq->nohz_idle_balance = flags;
++		raise_softirq_irqoff(SCHED_SOFTIRQ);
++	}
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void wakeup_preempt(struct rq *rq)
++{
++	if (sched_rq_first_task(rq) != rq->curr)
++		resched_curr(rq);
++}
++
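++/*
++ * Return 1 if p->__state matches @state, -1 if only p->saved_state matches
++ * (e.g. the task is sleeping on an rtlock or in the freezer), 0 otherwise.
++ */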
++static __always_inline
++int __task_state_match(struct task_struct *p, unsigned int state)
++{
++	if (READ_ONCE(p->__state) & state)
++		return 1;
++
++	if (READ_ONCE(p->saved_state) & state)
++		return -1;
++
++	return 0;
++}
++
++static __always_inline
++int task_state_match(struct task_struct *p, unsigned int state)
++{
++	/*
++	 * Serialize against current_save_and_set_rtlock_wait_state(),
++	 * current_restore_rtlock_saved_state(), and __refrigerator().
++	 */
++	guard(raw_spinlock_irq)(&p->pi_lock);
++
++	return __task_state_match(p, state);
++}
++
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * Wait for the thread to block in any of the states set in @match_state.
++ * If it changes, i.e. @p might have woken up, then return zero.  When we
++ * succeed in waiting for @p to be off its CPU, we return a positive number
++ * (its total switch count).  If a second call a short while later returns the
++ * same number, the caller can be sure that @p has remained unscheduled the
++ * whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++	unsigned long flags;
++	int running, queued, match;
++	unsigned long ncsw;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	for (;;) {
++		rq = task_rq(p);
++
++		/*
++		 * If the task is actively running on another CPU
++		 * still, just relax and busy-wait without holding
++		 * any locks.
++		 *
++		 * NOTE! Since we don't hold any locks, it's not
++		 * even sure that "rq" stays as the right runqueue!
++		 * But we don't care, since this will return false
++		 * if the runqueue has changed and p is actually now
++		 * running somewhere else!
++		 */
++		while (task_on_cpu(p)) {
++			if (!task_state_match(p, match_state))
++				return 0;
++			cpu_relax();
++		}
++
++		/*
++		 * Ok, time to look more closely! We need the rq
++		 * lock now, to be *sure*. If we're wrong, we'll
++		 * just go back and repeat.
++		 */
++		task_access_lock_irqsave(p, &lock, &flags);
++		trace_sched_wait_task(p);
++		running = task_on_cpu(p);
++		queued = p->on_rq;
++		ncsw = 0;
++		if ((match = __task_state_match(p, match_state))) {
++			/*
++			 * When matching on p->saved_state, consider this task
++			 * still queued so it will wait.
++			 */
++			if (match < 0)
++				queued = 1;
++			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++		}
++		task_access_unlock_irqrestore(p, lock, &flags);
++
++		/*
++		 * If it changed from the expected state, bail out now.
++		 */
++		if (unlikely(!ncsw))
++			break;
++
++		/*
++		 * Was it really running after all now that we
++		 * checked with the proper locks actually held?
++		 *
++		 * Oops. Go back and try again..
++		 */
++		if (unlikely(running)) {
++			cpu_relax();
++			continue;
++		}
++
++		/*
++		 * It's not enough that it's not actively running,
++		 * it must be off the runqueue _entirely_, and not
++		 * preempted!
++		 *
++		 * So if it was still runnable (but just not actively
++		 * running right now), it's preempted, and we should
++		 * yield - it could be a while.
++		 */
++		if (unlikely(queued)) {
++			ktime_t to = NSEC_PER_SEC / HZ;
++
++			set_current_state(TASK_UNINTERRUPTIBLE);
++			schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++			continue;
++		}
++
++		/*
++		 * Ahh, all good. It wasn't running, and it wasn't
++		 * runnable, which means that it will never become
++		 * running in the future either. We're all done!
++		 */
++		break;
++	}
++
++	return ncsw;
++}
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++	if (hrtimer_active(&rq->hrtick_timer))
++		hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++	raw_spin_lock(&rq->lock);
++	resched_curr(rq);
++	raw_spin_unlock(&rq->lock);
++
++	return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ *  - enabled by features
++ *  - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	/**
++	 * Alt schedule FW doesn't support sched_feat yet
++	if (!sched_feat(HRTICK))
++		return 0;
++	*/
++	if (!cpu_active(cpu_of(rq)))
++		return 0;
++	return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	ktime_t time = rq->hrtick_time;
++
++	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++	struct rq *rq = arg;
++
++	raw_spin_lock(&rq->lock);
++	__hrtick_restart(rq);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	s64 delta;
++
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense and can cause timer DoS.
++	 */
++	delta = max_t(s64, delay, 10000LL);
++
++	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++	if (rq == this_rq())
++		__hrtick_restart(rq);
++	else
++		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense. Rely on vruntime for fairness.
++	 */
++	delay = max_t(u64, delay, 10000LL);
++	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++		      HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++	rq->hrtick_timer.function = hrtick;
++}
++#else	/* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif	/* CONFIG_SCHED_HRTICK */
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
++}
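++/*
++ * For example: with MAX_RT_PRIO == 100, a SCHED_FIFO task with rt_priority 50
++ * gets prio 49 (lower value == higher priority), while a SCHED_NORMAL task
++ * keeps its static_prio (120 at nice 0).
++ */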
++
++/*
++ * Calculate the expected normal priority: i.e. priority
++ * without taking RT-inheritance into account. Might be
++ * boosted by interactivity modifiers. Changes upon fork,
++ * setprio syscalls, and whenever the interactivity
++ * estimator recalculates.
++ */
++static inline int normal_prio(struct task_struct *p)
++{
++	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++}
++
++/*
++ * Calculate the current priority, i.e. the priority
++ * taken into account by the scheduler. This value might
++ * be boosted by RT tasks as it will be RT if the task got
++ * RT-boosted. If not then it returns p->normal_prio.
++ */
++static int effective_prio(struct task_struct *p)
++{
++	p->normal_prio = normal_prio(p);
++	/*
++	 * If we are RT tasks or we were boosted to RT priority,
++	 * keep the priority unchanged. Otherwise, update priority
++	 * to the normal priority:
++	 */
++	if (!rt_prio(p->prio))
++		return p->normal_prio;
++	return p->prio;
++}
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++	enqueue_task(p, rq, ENQUEUE_WAKEUP);
++
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++	/*
++	 * If in_iowait is set, the code below may not trigger any cpufreq
++	 * utilization updates, so do it here explicitly with the IOWAIT flag
++	 * passed.
++	 */
++	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++/*
++ * deactivate_task - remove a task from the runqueue.
++ *
++ * Context: rq->lock
++ */
++static inline void deactivate_task(struct task_struct *p, struct rq *rq)
++{
++	WRITE_ONCE(p->on_rq, 0);
++	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++	dequeue_task(p, rq, DEQUEUE_SLEEP);
++
++	cpufreq_update_util(rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++	 * successfully executed on another CPU. We must ensure that updates of
++	 * per-task data have been completed by this moment.
++	 */
++	smp_wmb();
++
++	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++	return p->migration_disabled;
++#else
++	return false;
++#endif
++}
++
++#define SCA_CHECK		0x01
++#define SCA_USER		0x08
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * We should never call set_task_cpu() on a blocked task,
++	 * ttwu() will sort out the placement.
++	 */
++	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++	/*
++	 * The caller should hold either p->pi_lock or rq->lock, when changing
++	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++	 *
++	 * sched_move_task() holds both and thus holding either pins the cgroup,
++	 * see task_group().
++	 */
++	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++				      lockdep_is_held(&task_rq(p)->lock)));
++#endif
++	/*
++	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++	 */
++	WARN_ON_ONCE(!cpu_online(new_cpu));
++
++	WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++	trace_sched_migrate_task(p, new_cpu);
++
++	if (task_cpu(p) != new_cpu) {
++		rseq_migrate(p);
++		perf_event_task_migrate(p);
++	}
++
++	__set_task_cpu(p, new_cpu);
++}
++
++#define MDF_FORCE_ENABLED	0x80
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	/*
++	 * This here violates the locking rules for affinity, since we're only
++	 * supposed to change these variables while holding both rq->lock and
++	 * p->pi_lock.
++	 *
++	 * HOWEVER, it magically works, because ttwu() is the only code that
++	 * accesses these variables under p->pi_lock and only does so after
++	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++	 * before finish_task().
++	 *
++	 * XXX do further audits, this smells like something putrid.
++	 */
++	SCHED_WARN_ON(!p->on_cpu);
++	p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++	struct task_struct *p = current;
++	int cpu;
++
++	if (p->migration_disabled) {
++		p->migration_disabled++;
++		return;
++	}
++
++	guard(preempt)();
++	cpu = smp_processor_id();
++	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++		cpu_rq(cpu)->nr_pinned++;
++		p->migration_disabled = 1;
++		p->migration_flags &= ~MDF_FORCE_ENABLED;
++
++		/*
++		 * Violates locking rules! see comment in __do_set_cpus_ptr().
++		 */
++		if (p->cpus_ptr == &p->cpus_mask)
++			__do_set_cpus_ptr(p, cpumask_of(cpu));
++	}
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++	struct task_struct *p = current;
++
++	if (0 == p->migration_disabled)
++		return;
++
++	if (p->migration_disabled > 1) {
++		p->migration_disabled--;
++		return;
++	}
++
++	if (WARN_ON_ONCE(!p->migration_disabled))
++		return;
++
++	/*
++	 * Ensure stop_task runs either before or after this, and that
++	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++	 */
++	guard(preempt)();
++	/*
++	 * Assumption: current should be running on allowed cpu
++	 */
++	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++	if (p->cpus_ptr != &p->cpus_mask)
++		__do_set_cpus_ptr(p, &p->cpus_mask);
++	/*
++	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
++	 * regular cpus_mask, otherwise things that race (eg.
++	 * select_fallback_rq) get confused.
++	 */
++	barrier();
++	p->migration_disabled = 0;
++	this_rq()->nr_pinned--;
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++	/* When not in the task's cpumask, no point in looking further. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/* migrate_disabled() must be allowed to finish. */
++	if (is_migration_disabled(p))
++		return cpu_online(cpu);
++
++	/* Non kernel threads are not allowed during either online or offline. */
++	if (!(p->flags & PF_KTHREAD))
++		return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++	/* KTHREAD_IS_PER_CPU is always allowed. */
++	if (kthread_is_per_cpu(p))
++		return cpu_online(cpu);
++
++	/* Regular kernel threads don't get to stay during offline. */
++	if (cpu_dying(cpu))
++		return false;
++
++	/* But are allowed during online. */
++	return cpu_online(cpu);
++}
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ *    stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ *    off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ *    it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ *    is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
++{
++	int src_cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	src_cpu = cpu_of(rq);
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++	dequeue_task(p, rq, 0);
++	set_task_cpu(p, new_cpu);
++	raw_spin_unlock(&rq->lock);
++
++	rq = cpu_rq(new_cpu);
++
++	raw_spin_lock(&rq->lock);
++	WARN_ON_ONCE(task_cpu(p) != new_cpu);
++
++	sched_mm_cid_migrate_to(rq, p, src_cpu);
++
++	sched_task_sanity_check(p, rq);
++	enqueue_task(p, rq, 0);
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++	wakeup_preempt(rq);
++
++	return rq;
++}
++
++struct migration_arg {
++	struct task_struct *task;
++	int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
++{
++	/* Affinity changed (again). */
++	if (!is_cpu_allowed(p, dest_cpu))
++		return rq;
++
++	return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a highprio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++	struct migration_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++
++	/*
++	 * The original target CPU might have gone down and we might
++	 * be on another CPU but it doesn't matter.
++	 */
++	local_irq_save(flags);
++	/*
++	 * We need to explicitly wake pending tasks before running
++	 * __migrate_task() such that we will not miss enforcing cpus_ptr
++	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++	 */
++	flush_smp_call_function_queue();
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++	/*
++	 * If task_rq(p) != rq, it cannot be migrated here, because we're
++	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++	 * we're holding p->pi_lock.
++	 */
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		rq = __migrate_task(rq, p, arg->dest_cpu);
++	}
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
++{
++	cpumask_copy(&p->cpus_mask, ctx->new_mask);
++	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
++
++	/*
++	 * Swap in a new user_cpus_ptr if SCA_USER flag set
++	 */
++	if (ctx->flags & SCA_USER)
++		swap(p->user_cpus_ptr, ctx->user_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
++{
++	lockdep_assert_held(&p->pi_lock);
++	set_cpus_allowed_common(p, ctx);
++}
++
++/*
++ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
++ * affinity (if any) should be destroyed too.
++ */
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.user_mask = NULL,
++		.flags     = SCA_USER,	/* clear the user requested mask */
++	};
++	union cpumask_rcuhead {
++		cpumask_t cpumask;
++		struct rcu_head rcu;
++	};
++
++	__do_set_cpus_allowed(p, &ac);
++
++	/*
++	 * Because this is called with p->pi_lock held, it is not possible
++	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
++	 * kfree_rcu().
++	 */
++	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
++}
++
++static cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	/*
++	 * See do_set_cpus_allowed() above for the rcu_head usage.
++	 */
++	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
++
++	return kmalloc_node(size, GFP_KERNEL, node);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++		      int node)
++{
++	cpumask_t *user_mask;
++	unsigned long flags;
++
++	/*
++	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
++	 * may differ by now due to racing.
++	 */
++	dst->user_cpus_ptr = NULL;
++
++	/*
++	 * This check is racy and losing the race is a valid situation.
++	 * It is not worth the extra overhead of taking the pi_lock on
++	 * every fork/clone.
++	 */
++	if (data_race(!src->user_cpus_ptr))
++		return 0;
++
++	user_mask = alloc_user_cpus_ptr(node);
++	if (!user_mask)
++		return -ENOMEM;
++
++	/*
++	 * Use pi_lock to protect content of user_cpus_ptr
++	 *
++	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
++	 * do_set_cpus_allowed().
++	 */
++	raw_spin_lock_irqsave(&src->pi_lock, flags);
++	if (src->user_cpus_ptr) {
++		swap(dst->user_cpus_ptr, user_mask);
++		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++	}
++	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
++
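++	/*
++	 * If src->user_cpus_ptr was cleared before we took pi_lock, the
++	 * buffer allocated above was never swapped in and must be freed.
++	 */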
++	if (unlikely(user_mask))
++		kfree(user_mask);
++
++	return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++	struct cpumask *user_mask = NULL;
++
++	swap(p->user_cpus_ptr, user_mask);
++
++	return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++	kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++	return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++	guard(preempt)();
++	int cpu = task_cpu(p);
++
++	if ((cpu != smp_processor_id()) && task_curr(p))
++		smp_send_reschedule(cpu);
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ *  - cpu_active must be a subset of cpu_online
++ *
++ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ *    see __set_cpus_allowed_ptr(). At this point the newly online
++ *    CPU isn't yet part of the sched domains, and balancing will not
++ *    see it.
++ *
++ *  - on cpu-down we clear cpu_active() to mask the sched domains and
++ *    avoid the load balancer to place new tasks on the to be removed
++ *    CPU. Existing tasks will remain running there and will be taken
++ *    off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++	int nid = cpu_to_node(cpu);
++	const struct cpumask *nodemask = NULL;
++	enum { cpuset, possible, fail } state = cpuset;
++	int dest_cpu;
++
++	/*
++	 * If the node that the CPU is on has been offlined, cpu_to_node()
++	 * will return -1. There is no CPU on the node, and we should
++	 * select the CPU on the other node.
++	 */
++	if (nid != -1) {
++		nodemask = cpumask_of_node(nid);
++
++		/* Look for allowed, online CPU in same node. */
++		for_each_cpu(dest_cpu, nodemask) {
++			if (is_cpu_allowed(p, dest_cpu))
++				return dest_cpu;
++		}
++	}
++
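++	/*
++	 * Widen the search in stages: the currently allowed mask first,
++	 * then the cpuset fallback mask, then task_cpu_possible_mask(),
++	 * and BUG() if even that yields no usable CPU.
++	 */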
++	for (;;) {
++		/* Any allowed, online CPU? */
++		for_each_cpu(dest_cpu, p->cpus_ptr) {
++			if (!is_cpu_allowed(p, dest_cpu))
++				continue;
++			goto out;
++		}
++
++		/* No more Mr. Nice Guy. */
++		switch (state) {
++		case cpuset:
++			if (cpuset_cpus_allowed_fallback(p)) {
++				state = possible;
++				break;
++			}
++			fallthrough;
++		case possible:
++			/*
++			 * XXX When called from select_task_rq() we only
++			 * hold p->pi_lock and again violate locking order.
++			 *
++			 * More yuck to audit.
++			 */
++			do_set_cpus_allowed(p, task_cpu_possible_mask(p));
++			state = fail;
++			break;
++
++		case fail:
++			BUG();
++			break;
++		}
++	}
++
++out:
++	if (state != cpuset) {
++		/*
++		 * Don't tell them about moving exiting tasks or
++		 * kernel threads (both mm NULL), since they never
++		 * leave kernel.
++		 */
++		if (p->mm && printk_ratelimit()) {
++			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++					task_pid_nr(p), p->comm, cpu);
++		}
++	}
++
++	return dest_cpu;
++}
++
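++/*
++ * Rebuild the preempt mask for @prio from the cached mask of @ref: a CPU
++ * ends up in the mask when the task currently running on it has a
++ * numerically higher (i.e. worse) priority than @prio and could therefore
++ * be preempted.
++ */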
++static inline void
++sched_preempt_mask_flush(cpumask_t *mask, int prio, int ref)
++{
++	int cpu;
++
++	cpumask_copy(mask, sched_preempt_mask + ref);
++	if (prio < ref) {
++		for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
++			if (prio < cpu_rq(cpu)->prio)
++				cpumask_set_cpu(cpu, mask);
++		}
++	} else {
++		for_each_cpu_andnot(cpu, mask, sched_idle_mask) {
++			if (prio >= cpu_rq(cpu)->prio)
++				cpumask_clear_cpu(cpu, mask);
++		}
++	}
++}
++
++static inline int
++preempt_mask_check(cpumask_t *preempt_mask, cpumask_t *allow_mask, int prio)
++{
++	cpumask_t *mask = sched_preempt_mask + prio;
++	int pr = atomic_read(&sched_prio_record);
++
++	if (pr != prio && SCHED_QUEUE_BITS - 1 != prio) {
++		sched_preempt_mask_flush(mask, prio, pr);
++		atomic_set(&sched_prio_record, prio);
++	}
++
++	return cpumask_and(preempt_mask, allow_mask, mask);
++}
++
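++/*
++ * Pick a runqueue for @p: prefer an idle SMT sibling group, then any idle
++ * CPU, then a CPU running a lower-priority task; otherwise fall back to
++ * best_mask_cpu() over the whole allowed mask.
++ */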
++static inline int select_task_rq(struct task_struct *p)
++{
++	cpumask_t allow_mask, mask;
++
++	if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
++		return select_fallback_rq(task_cpu(p), p);
++
++	if (
++#ifdef CONFIG_SCHED_SMT
++	    cpumask_and(&mask, &allow_mask, sched_sg_idle_mask) ||
++#endif
++	    cpumask_and(&mask, &allow_mask, sched_idle_mask) ||
++	    preempt_mask_check(&mask, &allow_mask, task_sched_prio(p)))
++		return best_mask_cpu(task_cpu(p), &mask);
++
++	return best_mask_cpu(task_cpu(p), &allow_mask);
++}
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++	static struct lock_class_key stop_pi_lock;
++	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++	struct sched_param start_param = { .sched_priority = 0 };
++	struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++	if (stop) {
++		/*
++		 * Make it appear like a SCHED_FIFO task; it's something
++		 * userspace knows about and won't get confused about.
++		 *
++		 * Also, it will make PI more or less work without too
++		 * much confusion -- but then, stop work should not
++		 * rely on PI working anyway.
++		 */
++		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++		/*
++		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++		 * adjust the effective priority of a task. As a result,
++		 * rt_mutex_setprio() can trigger (RT) balancing operations,
++		 * which can then trigger wakeups of the stop thread to push
++		 * around the current task.
++		 *
++		 * The stop task itself will never be part of the PI-chain, it
++		 * never blocks, therefore that ->pi_lock recursion is safe.
++		 * Tell lockdep about this by placing the stop->pi_lock in its
++		 * own class.
++		 */
++		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++	}
++
++	cpu_rq(cpu)->stop = stop;
++
++	if (old_stop) {
++		/*
++		 * Reset it back to a normal scheduling policy so that
++		 * it can die in pieces.
++		 */
++		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++	}
++}
++
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++			    raw_spinlock_t *lock, unsigned long irq_flags)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	/* Can the task run on the task's current CPU? If so, we're done */
++	if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++		if (p->migration_disabled) {
++			if (likely(p->cpus_ptr != &p->cpus_mask))
++				__do_set_cpus_ptr(p, &p->cpus_mask);
++			p->migration_disabled = 0;
++			p->migration_flags |= MDF_FORCE_ENABLED;
++			/* When p is migrate_disabled, rq->lock should be held */
++			rq->nr_pinned--;
++		}
++
++		if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++			struct migration_arg arg = { p, dest_cpu };
++
++			/* Need help from migration thread: drop lock and wait. */
++			__task_access_unlock(p, lock);
++			raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++			stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++			return 0;
++		}
++		if (task_on_rq_queued(p)) {
++			/*
++			 * OK, since we're going to drop the lock immediately
++			 * afterwards anyway.
++			 */
++			update_rq_clock(rq);
++			rq = move_queued_task(rq, p, dest_cpu);
++			lock = &rq->lock;
++		}
++	}
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++					 struct affinity_context *ctx,
++					 struct rq *rq,
++					 raw_spinlock_t *lock,
++					 unsigned long irq_flags)
++{
++	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++	const struct cpumask *cpu_valid_mask = cpu_active_mask;
++	bool kthread = p->flags & PF_KTHREAD;
++	int dest_cpu;
++	int ret = 0;
++
++	if (kthread || is_migration_disabled(p)) {
++		/*
++		 * Kernel threads are allowed on online && !active CPUs,
++		 * however, during cpu-hot-unplug, even these might get pushed
++		 * away if not KTHREAD_IS_PER_CPU.
++		 *
++		 * Specifically, migration_disabled() tasks must not fail the
++		 * cpumask_any_and_distribute() pick below, esp. so on
++		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
++		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++		 */
++		cpu_valid_mask = cpu_online_mask;
++	}
++
++	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	/*
++	 * Must re-check here, to close a race against __kthread_bind(),
++	 * sched_setaffinity() is not guaranteed to observe the flag.
++	 */
++	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
++		goto out;
++
++	dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
++	if (dest_cpu >= nr_cpu_ids) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	__do_set_cpus_allowed(p, ctx);
++
++	return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++out:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++	return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it's executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++static int __set_cpus_allowed_ptr(struct task_struct *p,
++				  struct affinity_context *ctx)
++{
++	unsigned long irq_flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
++	 * flags are set.
++	 */
++	if (p->user_cpus_ptr &&
++	    !(ctx->flags & SCA_USER) &&
++	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
++		ctx->new_mask = rq->scratch_mask;
++
++
++	return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++
++	return __set_cpus_allowed_ptr(p, &ac);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
++ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
++ * affinity or use cpu_online_mask instead.
++ *
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++				     struct cpumask *new_mask,
++				     const struct cpumask *subset_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++	unsigned long irq_flags;
++	raw_spinlock_t *lock;
++	struct rq *rq;
++	int err;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++
++	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
++		err = -EINVAL;
++		goto err_unlock;
++	}
++
++	return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
++
++err_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	cpumask_var_t new_mask;
++	const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++	alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++	/*
++	 * __migrate_task() can fail silently in the face of concurrent
++	 * offlining of the chosen destination CPU, so take the hotplug
++	 * lock to ensure that the migration succeeds.
++	 */
++	cpus_read_lock();
++	if (!cpumask_available(new_mask))
++		goto out_set_mask;
++
++	if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++		goto out_free_mask;
++
++	/*
++	 * We failed to find a valid subset of the affinity mask for the
++	 * task, so override it based on its cpuset hierarchy.
++	 */
++	cpuset_cpus_allowed(p, new_mask);
++	override_mask = new_mask;
++
++out_set_mask:
++	if (printk_ratelimit()) {
++		printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++				task_pid_nr(p), p->comm,
++				cpumask_pr_args(override_mask));
++	}
++
++	WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++	cpus_read_unlock();
++	free_cpumask_var(new_mask);
++}
++
++static int
++__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr().
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	struct affinity_context ac = {
++		.new_mask  = task_user_cpus(p),
++		.flags     = 0,
++	};
++	int ret;
++
++	/*
++	 * Try to restore the old affinity mask with __sched_setaffinity().
++	 * Cpuset masking will be done there too.
++	 */
++	ret = __sched_setaffinity(p, &ac);
++	WARN_ON_ONCE(ret);
++}
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	return 0;
++}
++
++static inline int
++__set_cpus_allowed_ptr(struct task_struct *p,
++		       struct affinity_context *ctx)
++{
++	return set_cpus_allowed_ptr(p, ctx->new_mask);
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return false;
++}
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	return NULL;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq;
++
++	if (!schedstat_enabled())
++		return;
++
++	rq = this_rq();
++
++#ifdef CONFIG_SMP
++	if (cpu == rq->cpu) {
++		__schedstat_inc(rq->ttwu_local);
++		__schedstat_inc(p->stats.nr_wakeups_local);
++	} else {
++		/** Alt schedule FW ToDo:
++		 * How to do ttwu_wake_remote
++		 */
++	}
++#endif /* CONFIG_SMP */
++
++	__schedstat_inc(rq->ttwu_count);
++	__schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable.
++ */
++static inline void ttwu_do_wakeup(struct task_struct *p)
++{
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++	if (p->sched_contributes_to_load)
++		rq->nr_uninterruptible--;
++
++	if (
++#ifdef CONFIG_SMP
++	    !(wake_flags & WF_MIGRATED) &&
++#endif
++	    p->in_iowait) {
++		delayacct_blkio_end(p);
++		atomic_dec(&task_rq(p)->nr_iowait);
++	}
++
++	activate_task(p, rq);
++	wakeup_preempt(rq);
++
++	ttwu_do_wakeup(p);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ *   for (;;) {
++ *      set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ *      if (CONDITION)
++ *         break;
++ *
++ *      schedule();
++ *   }
++ *   __set_current_state(TASK_RUNNING);
++ *
++ * between set_current_state() and schedule(). In this case @p is still
++ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
++ * an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ *          %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	int ret = 0;
++
++	rq = __task_access_lock(p, &lock);
++	if (task_on_rq_queued(p)) {
++		if (!task_on_cpu(p)) {
++			/*
++			 * When on_rq && !on_cpu the task is preempted, see if
++			 * it should preempt the task that is current now.
++			 */
++			update_rq_clock(rq);
++			wakeup_preempt(rq);
++		}
++		ttwu_do_wakeup(p);
++		ret = 1;
++	}
++	__task_access_unlock(p, lock);
++
++	return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++	struct llist_node *llist = arg;
++	struct rq *rq = this_rq();
++	struct task_struct *p, *t;
++	struct rq_flags rf;
++
++	if (!llist)
++		return;
++
++	rq_lock_irqsave(rq, &rf);
++	update_rq_clock(rq);
++
++	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++		if (WARN_ON_ONCE(p->on_cpu))
++			smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++			set_task_cpu(p, cpu_of(rq));
++
++		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++	}
++
++	/*
++	 * Must be after enqueueing at least once task such that
++	 * idle_cpu() does not observe a false-negative -- if it does,
++	 * it is possible for select_idle_siblings() to stack a number
++	 * of tasks on this CPU during that window.
++	 *
++	 * It is OK to clear ttwu_pending when another task is pending.
++	 * We will receive IPI after local irq enabled and then enqueue it.
++	 * Since now nr_running > 0, idle_cpu() will always get correct result.
++	 */
++	WRITE_ONCE(rq->ttwu_pending, 0);
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Prepare the scene for sending an IPI for a remote smp_call
++ *
++ * Returns true if the caller can proceed with sending the IPI.
++ * Returns false otherwise.
++ */
++bool call_function_single_prep_ipi(int cpu)
++{
++	if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
++		trace_sched_wake_idle_without_ipi(cpu);
++		return false;
++	}
++
++	return true;
++}
++
++/*
++ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_pending() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++	WRITE_ONCE(rq->ttwu_pending, 1);
++	__smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
++{
++	/*
++	 * Do not complicate things with the async wake_list while the CPU is
++	 * in hotplug state.
++	 */
++	if (!cpu_active(cpu))
++		return false;
++
++	/* Ensure the task will still be allowed to run on the CPU. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/*
++	 * If the CPU does not share cache, then queue the task on the
++	 * remote rqs wakelist to avoid accessing remote data.
++	 */
++	if (!cpus_share_cache(smp_processor_id(), cpu))
++		return true;
++
++	if (cpu == smp_processor_id())
++		return false;
++
++	/*
++	 * If the wakee cpu is idle, or the task is descheduling and the
++	 * only running task on the CPU, then use the wakelist to offload
++	 * the task activation to the idle (or soon-to-be-idle) CPU as
++	 * the current CPU is likely busy. nr_running is checked to
++	 * avoid unnecessary task stacking.
++	 *
++	 * Note that we can only get here with (wakee) p->on_rq=0,
++	 * p->on_cpu can be whatever, we've done the dequeue, so
++	 * the wakee has been accounted out of ->nr_running.
++	 */
++	if (!cpu_rq(cpu)->nr_running)
++		return true;
++
++	return false;
++}
++
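++/*
++ * Wakelist queueing is only compiled in when ALT_SCHED_TTWU_QUEUE is
++ * defined; otherwise ttwu_queue() always takes the target rq lock
++ * directly from the waking CPU.
++ */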
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
++		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++		__ttwu_queue_wakelist(p, cpu, wake_flags);
++		return true;
++	}
++
++	return false;
++}
++
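++/*
++ * Check for an idle current task twice: a cheap racy peek under RCU,
++ * then a definitive re-check under rq->lock before rescheduling.
++ */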
++void wake_up_if_idle(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	guard(rcu)();
++	if (is_idle_task(rcu_dereference(rq->curr))) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		if (is_idle_task(rq->curr))
++			resched_curr(rq);
++	}
++}
++
++extern struct static_key_false sched_asym_cpucapacity;
++
++static __always_inline bool sched_asym_cpucap_active(void)
++{
++	return static_branch_unlikely(&sched_asym_cpucapacity);
++}
++
++bool cpus_equal_capacity(int this_cpu, int that_cpu)
++{
++	if (!sched_asym_cpucap_active())
++		return true;
++
++	if (this_cpu == that_cpu)
++		return true;
++
++	return arch_scale_cpu_capacity(this_cpu) == arch_scale_cpu_capacity(that_cpu);
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++	if (this_cpu == that_cpu)
++		return true;
++
++	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (ttwu_queue_wakelist(p, cpu, wake_flags))
++		return;
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++	ttwu_do_activate(rq, p, wake_flags);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of saved_state:
++ *
++ *   The related locking code always holds p::pi_lock when updating
++ *   p::saved_state, which means the code is fully serialized in both cases.
++ *
++ *  For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
++ *  No other bits set. This allows to distinguish all wakeup scenarios.
++ *  No other bits set. This allows us to distinguish all wakeup scenarios.
++ *  For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
++ *  allows us to prevent early wakeup of tasks before they can be run on
++ *  asymmetric ISA architectures (eg ARMv9).
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++	int match;
++
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++			     state != TASK_RTLOCK_WAIT);
++	}
++
++	*success = !!(match = __task_state_match(p, state));
++
++	/*
++	 * Saved state preserves the task state across blocking on
++	 * an RT lock or TASK_FREEZABLE tasks.  If the state matches,
++	 * set p::saved_state to TASK_RUNNING, but do not wake the task
++	 * because it waits for a lock wakeup or __thaw_task(). Also
++	 * indicate success because from the regular waker's point of
++	 * view this has succeeded.
++	 *
++	 * After acquiring the lock the task will restore p::__state
++	 * from p::saved_state which ensures that the regular
++	 * wakeup is not lost. The restore will also set
++	 * p::saved_state to TASK_RUNNING so any further tests will
++	 * not result in false positives vs. @success
++	 */
++	if (match < 0)
++		p->saved_state = TASK_RUNNING;
++
++	return match > 0;
++}
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ *  MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
++ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
++ *     rq(c1)->lock (if not at the same time, then in that order).
++ *  C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ *   CPU0            CPU1            CPU2
++ *
++ *   LOCK rq(0)->lock
++ *   sched-out X
++ *   sched-in Y
++ *   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(0)->lock // orders against CPU0
++ *                                   dequeue X
++ *                                   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(1)->lock
++ *                                   enqueue X
++ *                                   UNLOCK rq(1)->lock
++ *
++ *                   LOCK rq(1)->lock // orders against CPU2
++ *                   sched-out Z
++ *                   sched-in X
++ *                   UNLOCK rq(1)->lock
++ *
++ *
++ *  BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
++ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ *   LOCK rq(0)->lock LOCK X->pi_lock
++ *   dequeue X
++ *   sched-out X
++ *   smp_store_release(X->on_cpu, 0);
++ *
++ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
++ *                    X->state = WAKING
++ *                    set_task_cpu(X,2)
++ *
++ *                    LOCK rq(2)->lock
++ *                    enqueue X
++ *                    X->state = RUNNING
++ *                    UNLOCK rq(2)->lock
++ *
++ *                                          LOCK rq(2)->lock // orders against CPU1
++ *                                          sched-out Z
++ *                                          sched-in X
++ *                                          UNLOCK rq(2)->lock
++ *
++ *                    UNLOCK X->pi_lock
++ *   UNLOCK rq(0)->lock
++ *
++ *
++ * However, for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ *   If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ *  - p->sched_class
++ *  - p->cpus_ptr
++ *  - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
++ *  - ttwu_queue()       -- new rq, for enqueue of the task;
++ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ *	   %false otherwise.
++ */
++int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
++{
++	guard(preempt)();
++	int cpu, success = 0;
++
++	if (p == current) {
++		/*
++		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++		 * == smp_processor_id()'. Together this means we can special
++		 * case the whole 'p->on_rq && ttwu_runnable()' case below
++		 * without taking any locks.
++		 *
++		 * In particular:
++		 *  - we rely on Program-Order guarantees for all the ordering,
++		 *  - we're serialized against set_special_state() by virtue of
++		 *    it disabling IRQs (this allows not taking ->pi_lock).
++		 */
++		if (!ttwu_state_match(p, state, &success))
++			goto out;
++
++		trace_sched_waking(p);
++		ttwu_do_wakeup(p);
++		goto out;
++	}
++
++	/*
++	 * If we are going to wake up a thread waiting for CONDITION we
++	 * need to ensure that CONDITION=1 done by the caller can not be
++	 * reordered with p->state check below. This pairs with smp_store_mb()
++	 * in set_current_state() that the waiting thread does.
++	 */
++	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
++		smp_mb__after_spinlock();
++		if (!ttwu_state_match(p, state, &success))
++			break;
++
++		trace_sched_waking(p);
++
++		/*
++		 * Ensure we load p->on_rq _after_ p->state, otherwise it would
++		 * be possible to, falsely, observe p->on_rq == 0 and get stuck
++		 * in smp_cond_load_acquire() below.
++		 *
++		 * sched_ttwu_pending()			try_to_wake_up()
++		 *   STORE p->on_rq = 1			  LOAD p->state
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   UNLOCK rq->lock
++		 *
++		 * [task p]
++		 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * A similar smp_rmb() lives in __task_needs_rq_lock().
++		 */
++		smp_rmb();
++		if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++			break;
++
++#ifdef CONFIG_SMP
++		/*
++		 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++		 * possible to, falsely, observe p->on_cpu == 0.
++		 *
++		 * One must be running (->on_cpu == 1) in order to remove oneself
++		 * from the runqueue.
++		 *
++		 * __schedule() (switch to task 'p')	try_to_wake_up()
++		 *   STORE p->on_cpu = 1		  LOAD p->on_rq
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (put 'p' to sleep)
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   STORE p->on_rq = 0			  LOAD p->on_cpu
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++		 * schedule()'s deactivate_task() has 'happened' and p will no longer
++		 * care about its own p->state. See the comment in __schedule().
++		 */
++		smp_acquire__after_ctrl_dep();
++
++		/*
++		 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++		 * == 0), which means we need to do an enqueue, change p->state to
++		 * TASK_WAKING such that we can unlock p->pi_lock before doing the
++		 * enqueue, such as ttwu_queue_wakelist().
++		 */
++		WRITE_ONCE(p->__state, TASK_WAKING);
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, consider queueing p on the remote CPU's wake_list
++		 * which potentially sends an IPI instead of spinning on p->on_cpu to
++		 * let the waker make forward progress. This is safe because IRQs are
++		 * disabled and the IPI will deliver after on_cpu is cleared.
++		 *
++		 * Ensure we load task_cpu(p) after p->on_cpu:
++		 *
++		 * set_task_cpu(p, cpu);
++		 *   STORE p->cpu = @cpu
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock
++		 *   smp_mb__after_spin_lock()          smp_cond_load_acquire(&p->on_cpu)
++		 *   STORE p->on_cpu = 1                LOAD p->cpu
++		 *
++		 * to ensure we observe the correct CPU on which the task is currently
++		 * scheduling.
++		 */
++		if (smp_load_acquire(&p->on_cpu) &&
++		    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
++			break;
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, wait until it's done referencing the task.
++		 *
++		 * Pairs with the smp_store_release() in finish_task().
++		 *
++		 * This ensures that tasks getting woken will be fully ordered against
++		 * their previous state and preserve Program Order.
++		 */
++		smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		sched_task_ttwu(p);
++
++		if ((wake_flags & WF_CURRENT_CPU) &&
++		    cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
++			cpu = smp_processor_id();
++		else
++			cpu = select_task_rq(p);
++
++		if (cpu != task_cpu(p)) {
++			if (p->in_iowait) {
++				delayacct_blkio_end(p);
++				atomic_dec(&task_rq(p)->nr_iowait);
++			}
++
++			wake_flags |= WF_MIGRATED;
++			set_task_cpu(p, cpu);
++		}
++#else
++		sched_task_ttwu(p);
++
++		cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++		ttwu_queue(p, cpu, wake_flags);
++	}
++out:
++	if (success)
++		ttwu_stat(p, task_cpu(p), wake_flags);
++
++	return success;
++}
++
++static bool __task_needs_rq_lock(struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++	 * the task is blocked. Make sure to check @state since ttwu() can drop
++	 * locks at the end, see ttwu_queue_wakelist().
++	 */
++	if (state == TASK_RUNNING || state == TASK_WAKING)
++		return true;
++
++	/*
++	 * Ensure we load p->on_rq after p->__state, otherwise it would be
++	 * possible to, falsely, observe p->on_rq == 0.
++	 *
++	 * See try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	if (p->on_rq)
++		return true;
++
++#ifdef CONFIG_SMP
++	/*
++	 * Ensure the task has finished __schedule() and will not be referenced
++	 * anymore. Again, see try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	smp_cond_load_acquire(&p->on_cpu, !VAL);
++#endif
++
++	return false;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in its current state by avoiding wakeups and/or rq operations
++ * and call @func(@arg) on it.  This function can use ->on_rq and task_curr()
++ * to work out what the state is, if required.  Given that @func can be invoked
++ * with a runqueue lock held, it had better be quite lightweight.
++ *
++ * Returns:
++ *   Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++	struct rq *rq = NULL;
++	struct rq_flags rf;
++	int ret;
++
++	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++	if (__task_needs_rq_lock(p))
++		rq = __task_rq_lock(p, &rf);
++
++	/*
++	 * At this point the task is pinned; either:
++	 *  - blocked and we're holding off wakeups      (pi->lock)
++	 *  - woken, and we're holding off enqueue       (rq->lock)
++	 *  - queued, and we're holding off schedule     (rq->lock)
++	 *  - running, and we're holding off de-schedule (rq->lock)
++	 *
++	 * The called function (@func) can use: task_curr(), p->on_rq and
++	 * p->__state to differentiate between these states.
++	 */
++	ret = func(p, arg);
++
++	if (rq)
++		__task_rq_unlock(rq, &rf);
++
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++	return ret;
++}
++
++/**
++ * cpu_curr_snapshot - Return a snapshot of the currently running task
++ * @cpu: The CPU on which to snapshot the task.
++ *
++ * Returns the task_struct pointer of the task "currently" running on
++ * the specified CPU.  If the same task is running on that CPU throughout,
++ * the return value will be a pointer to that task's task_struct structure.
++ * If the CPU did any context switches even vaguely concurrently with the
++ * execution of this function, the return value will be a pointer to the
++ * task_struct structure of a randomly chosen task that was running on
++ * that CPU somewhere around the time that this function was executing.
++ *
++ * If the specified CPU was offline, the return value is whatever it
++ * is, perhaps a pointer to the task_struct structure of that CPU's idle
++ * task, but there is no guarantee.  Callers wishing a useful return
++ * value must take some action to ensure that the specified CPU remains
++ * online throughout.
++ *
++ * This function executes full memory barriers before and after fetching
++ * the pointer, which permits the caller to confine this function's fetch
++ * with respect to the caller's accesses to other shared variables.
++ */
++struct task_struct *cpu_curr_snapshot(int cpu)
++{
++	struct task_struct *t;
++
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	t = rcu_dereference(cpu_curr(cpu));
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	return t;
++}
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++	return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++	return try_to_wake_up(p, state, 0);
++}
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup used by init_idle() too:
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	p->on_rq			= 0;
++	p->on_cpu			= 0;
++	p->utime			= 0;
++	p->stime			= 0;
++	p->sched_time			= 0;
++
++#ifdef CONFIG_SCHEDSTATS
++	/* Even if schedstat is disabled, there should not be garbage */
++	memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++	INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++	p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++	p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++	init_sched_mm_cid(p);
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	__sched_fork(clone_flags, p);
++	/*
++	 * We mark the process as NEW here. This guarantees that
++	 * nobody will actually run it, and a signal or other external
++	 * event cannot wake it up and insert it on the runqueue either.
++	 */
++	p->__state = TASK_NEW;
++
++	/*
++	 * Make sure we do not leak PI boosting priority to the child.
++	 */
++	p->prio = current->normal_prio;
++
++	/*
++	 * Revert to default priority/policy on fork if requested.
++	 */
++	if (unlikely(p->sched_reset_on_fork)) {
++		if (task_has_rt_policy(p)) {
++			p->policy = SCHED_NORMAL;
++			p->static_prio = NICE_TO_PRIO(0);
++			p->rt_priority = 0;
++		} else if (PRIO_TO_NICE(p->static_prio) < 0)
++			p->static_prio = NICE_TO_PRIO(0);
++
++		p->prio = p->normal_prio = p->static_prio;
++
++		/*
++		 * We don't need the reset flag anymore after the fork. It has
++		 * fulfilled its duty:
++		 */
++		p->sched_reset_on_fork = 0;
++	}
++
++#ifdef CONFIG_SCHED_INFO
++	if (unlikely(sched_info_on()))
++		memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++	init_task_preempt_count(p);
++
++	return 0;
++}
++
++void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	/*
++	 * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++	 * required yet, but lockdep gets upset if rules are violated.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	/*
++	 * Share the timeslice between parent and child, thus the
++	 * total amount of pending timeslices in the system doesn't change,
++	 * resulting in more scheduling fairness.
++	 */
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	rq->curr->time_slice /= 2;
++	p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++	hrtick_start(rq, rq->curr->time_slice);
++#endif
++
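++	/*
++	 * If the halved slice is below the resched granularity, give the
++	 * child a full base slice and reschedule the parent.
++	 */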
++	if (p->time_slice < RESCHED_NS) {
++		p->time_slice = sysctl_sched_base_slice;
++		resched_curr(rq);
++	}
++	sched_task_fork(p, rq);
++	raw_spin_unlock(&rq->lock);
++
++	rseq_migrate(p);
++	/*
++	 * We're setting the CPU for the first time, we don't migrate,
++	 * so use __set_task_cpu().
++	 */
++	__set_task_cpu(p, smp_processor_id());
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++	if (enabled)
++		static_branch_enable(&sched_schedstats);
++	else
++		static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++	if (!schedstat_enabled()) {
++		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++		static_branch_enable(&sched_schedstats);
++	}
++}
++
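++/*
++ * Parses the "schedstats=" boot parameter, e.g. booting with
++ * "schedstats=enable" turns schedstat accounting on from early boot.
++ */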
++static int __init setup_schedstats(char *str)
++{
++	int ret = 0;
++	if (!str)
++		goto out;
++
++	if (!strcmp(str, "enable")) {
++		set_schedstats(true);
++		ret = 1;
++	} else if (!strcmp(str, "disable")) {
++		set_schedstats(false);
++		ret = 1;
++	}
++out:
++	if (!ret)
++		pr_warn("Unable to parse schedstats=\n");
++
++	return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
++		size_t *lenp, loff_t *ppos)
++{
++	struct ctl_table t;
++	int err;
++	int state = static_branch_likely(&sched_schedstats);
++
++	if (write && !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
++	t = *table;
++	t.data = &state;
++	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++	if (err < 0)
++		return err;
++	if (write)
++		set_schedstats(state);
++	return err;
++}
++
++static struct ctl_table sched_core_sysctls[] = {
++	{
++		.procname       = "sched_schedstats",
++		.data           = NULL,
++		.maxlen         = sizeof(unsigned int),
++		.mode           = 0644,
++		.proc_handler   = sysctl_schedstats,
++		.extra1         = SYSCTL_ZERO,
++		.extra2         = SYSCTL_ONE,
++	},
++};
++static int __init sched_core_sysctl_init(void)
++{
++	register_sysctl_init("kernel", sched_core_sysctls);
++	return 0;
++}
++late_initcall(sched_core_sysctl_init);
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++	rseq_migrate(p);
++	/*
++	 * Fork balancing, do it here and not earlier because:
++	 * - cpus_ptr can change in the fork path
++	 * - any previously selected CPU might disappear through hotplug
++	 *
++	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++	 * as we're not fully set-up yet.
++	 */
++	__set_task_cpu(p, cpu_of(rq));
++#endif
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	activate_task(p, rq);
++	trace_sched_wakeup_new(p);
++	wakeup_preempt(rq);
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++	static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++	static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++	if (!static_branch_unlikely(&preempt_notifier_key))
++		WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++	hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++	hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				   struct task_struct *next)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++	/*
++	 * Claim the task as running, we do this before switching to it
++	 * such that any running task will have this set.
++	 *
++	 * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++	 * its ordering comment.
++	 */
++	WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * This must be the very last reference to @prev from this CPU. After
++	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
++	 * must ensure this doesn't happen until the switch is completely
++	 * finished.
++	 *
++	 * In particular, the load of prev->state in finish_task_switch() must
++	 * happen before this.
++	 *
++	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++	 */
++	smp_store_release(&prev->on_cpu, 0);
++#else
++	prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	void (*func)(struct rq *rq);
++	struct balance_callback *next;
++
++	lockdep_assert_held(&rq->lock);
++
++	while (head) {
++		func = (void (*)(struct rq *))head->func;
++		next = head->next;
++		head->next = NULL;
++		head = next;
++
++		func(rq);
++	}
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be ran in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct balance_callback balance_push_callback = {
++	.next = NULL,
++	.func = balance_push,
++};
++
++static inline struct balance_callback *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++	struct balance_callback *head = rq->balance_callback;
++
++	if (likely(!head))
++		return NULL;
++
++	lockdep_assert_rq_held(rq);
++	/*
++	 * Must not take balance_push_callback off the list when
++	 * splice_balance_callbacks() and balance_callbacks() are not
++	 * in the same rq->lock section.
++	 *
++	 * In that case it would be possible for __schedule() to interleave
++	 * and observe the list empty.
++	 */
++	if (split && head == &balance_push_callback)
++		head = NULL;
++	else
++		rq->balance_callback = NULL;
++
++	return head;
++}
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	unsigned long flags;
++
++	if (unlikely(head)) {
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		do_balance_callbacks(rq, head);
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	}
++}
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++}
++
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++	/*
++	 * Since the runqueue lock will be released by the next
++	 * The runqueue lock will be released by the next
++	 * of the scheduler it's an obvious special-case), so we
++	 * do an early lockdep release here:
++	 */
++	spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++	/* this is a valid case when another task releases the spinlock */
++	rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++	/*
++	 * If we are tracking spinlock dependencies then we have to
++	 * fix up the runqueue lock - which gets 'carried over' from
++	 * prev into current:
++	 */
++	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++	__balance_callbacks(rq);
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++		    struct task_struct *next)
++{
++	kcov_prepare_switch(prev);
++	sched_info_switch(rq, prev, next);
++	perf_event_task_sched_out(prev, next);
++	rseq_preempt(prev);
++	fire_sched_out_preempt_notifiers(prev, next);
++	kmap_local_sched_out();
++	prepare_task(next);
++	prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock.  (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. prev == current is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	struct rq *rq = this_rq();
++	struct mm_struct *mm = rq->prev_mm;
++	unsigned int prev_state;
++
++	/*
++	 * The previous task will have left us with a preempt_count of 2
++	 * because it left us after:
++	 *
++	 *	schedule()
++	 *	  preempt_disable();			// 1
++	 *	  __schedule()
++	 *	    raw_spin_lock_irq(&rq->lock)	// 2
++	 *
++	 * Also, see FORK_PREEMPT_COUNT.
++	 */
++	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++		      "corrupted preempt_count: %s/%d/0x%x\n",
++		      current->comm, current->pid, preempt_count()))
++		preempt_count_set(FORK_PREEMPT_COUNT);
++
++	rq->prev_mm = NULL;
++
++	/*
++	 * A task struct has one reference for the use as "current".
++	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++	 * schedule one last time. The schedule call will never return, and
++	 * the scheduled task must drop that reference.
++	 *
++	 * We must observe prev->state before clearing prev->on_cpu (in
++	 * finish_task), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could race with its RUNNING -> DEAD
++	 * transition, resulting in a double drop.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	vtime_task_switch(prev);
++	perf_event_task_sched_in(prev, current);
++	finish_task(prev);
++	tick_nohz_task_switch();
++	finish_lock_switch(rq);
++	finish_arch_post_lock_switch();
++	kcov_finish_switch(current);
++	/*
++	 * kmap_local_sched_out() is invoked with rq::lock held and
++	 * interrupts disabled. There is no requirement for that, but the
++	 * sched out code does not have an interrupt enabled section.
++	 * Restoring the maps on sched in does not require interrupts being
++	 * disabled either.
++	 */
++	kmap_local_sched_in();
++
++	fire_sched_in_preempt_notifiers(current);
++	/*
++	 * When switching through a kernel thread, the loop in
++	 * membarrier_{private,global}_expedited() may have observed that
++	 * kernel thread and not issued an IPI. It is therefore possible to
++	 * schedule between user->kernel->user threads without passing through
++	 * switch_mm(). Membarrier requires a barrier after storing to
++	 * rq->curr, before returning to userspace, so provide them here:
++	 *
++	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++	 *   provided by mmdrop(),
++	 * - a sync_core for SYNC_CORE.
++	 */
++	if (mm) {
++		membarrier_mm_sync_core_before_usermode(mm);
++		mmdrop_sched(mm);
++	}
++	if (unlikely(prev_state == TASK_DEAD)) {
++		/* Task is done with its stack. */
++		put_task_stack(prev);
++
++		put_task_struct_rcu_user(prev);
++	}
++
++	return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	/*
++	 * New tasks start with FORK_PREEMPT_COUNT, see there and
++	 * finish_task_switch() for details.
++	 *
++	 * finish_task_switch() will drop rq->lock() and lower preempt_count
++	 * and the preempt_enable() will end up enabling preemption (on
++	 * PREEMPT_COUNT kernels).
++	 */
++
++	finish_task_switch(prev);
++	preempt_enable();
++
++	if (current->set_child_tid)
++		put_user(task_pid_vnr(current), current->set_child_tid);
++
++	calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++	       struct task_struct *next)
++{
++	prepare_task_switch(rq, prev, next);
++
++	/*
++	 * For paravirt, this is coupled with an exit in switch_to to
++	 * combine the page table reload and the switch backend into
++	 * one hypercall.
++	 */
++	arch_start_context_switch(prev);
++
++	/*
++	 * kernel -> kernel   lazy + transfer active
++	 *   user -> kernel   lazy + mmgrab() active
++	 *
++	 * kernel ->   user   switch + mmdrop() active
++	 *   user ->   user   switch
++	 *
++	 * switch_mm_cid() needs to be updated if the barriers provided
++	 * by context_switch() are modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		enter_lazy_tlb(prev->active_mm, next);
++
++		next->active_mm = prev->active_mm;
++		if (prev->mm)                           // from user
++			mmgrab(prev->active_mm);
++		else
++			prev->active_mm = NULL;
++	} else {                                        // to user
++		membarrier_switch_mm(rq, prev->active_mm, next->mm);
++		/*
++		 * sys_membarrier() requires an smp_mb() between setting
++		 * rq->curr / membarrier_switch_mm() and returning to userspace.
++		 *
++		 * The below provides this either through switch_mm(), or in
++		 * case 'prev->active_mm == next->mm' through
++		 * finish_task_switch()'s mmdrop().
++		 */
++		switch_mm_irqs_off(prev->active_mm, next->mm, next);
++		lru_gen_use_mm(next->mm);
++
++		if (!prev->mm) {                        // from kernel
++			/* will mmdrop() in finish_task_switch(). */
++			rq->prev_mm = prev->active_mm;
++			prev->active_mm = NULL;
++		}
++	}
++
++	/* switch_mm_cid() requires the memory barriers above. */
++	switch_mm_cid(rq, prev, next);
++
++	prepare_lock_switch(rq, next);
++
++	/* Here we just switch the register state and the stack. */
++	switch_to(prev, next, prev);
++	barrier();
++
++	return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_online_cpu(i)
++		sum += cpu_rq(i)->nr_running;
++
++	return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race.  The caller is responsible to use it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++	return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
++
++unsigned long long nr_context_switches_cpu(int cpu)
++{
++	return cpu_rq(cpu)->nr_switches;
++}
++
++unsigned long long nr_context_switches(void)
++{
++	int i;
++	unsigned long long sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += cpu_rq(i)->nr_switches;
++
++	return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: they prefer a shallow idle state for a
++ * CPU that has IO-wait pending, even though the blocked task might not even end
++ * up running on that CPU when it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++	return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait accounting is to account the idle time that we could
++ * have spent running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU, only the one
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means that, when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, by reason of under-accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU, it can wake to another CPU than it
++ * blocked on. This means the per CPU IO-wait number is meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += nr_iowait_cpu(i);
++
++	return sum;
++}
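
As the comment above spells out, the global number is nothing more than the sum of per-CPU counters, and a blocked task is charged to whatever CPU it happened to block on. A small user-space sketch of the same bookkeeping, using C11 atomics (illustrative only, not part of the patch):

#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4

static atomic_int nr_iowait_percpu[NR_CPUS];

static void block_on_io(int cpu)  { atomic_fetch_add(&nr_iowait_percpu[cpu], 1); }
static void io_completed(int cpu) { atomic_fetch_sub(&nr_iowait_percpu[cpu], 1); }

static unsigned int nr_iowait_total(void)
{
	unsigned int sum = 0;

	for (int i = 0; i < NR_CPUS; i++)
		sum += atomic_load(&nr_iowait_percpu[i]);
	return sum;
}

int main(void)
{
	/* Two tasks block on CPU 0: all IO-wait is charged there ... */
	block_on_io(0);
	block_on_io(0);
	/* ... while CPU 1 sits in plain idle, so per-CPU numbers mislead. */
	printf("cpu0=%d cpu1=%d total=%u\n",
	       atomic_load(&nr_iowait_percpu[0]),
	       atomic_load(&nr_iowait_percpu[1]),
	       nr_iowait_total());
	io_completed(0);
	io_completed(0);
	return 0;
}
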
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++	s64 ns = rq->clock_task - p->last_ran;
++
++	p->sched_time += ns;
++	cgroup_account_cputime(p, ns);
++	account_group_exec_runtime(p, ns);
++
++	p->time_slice -= ns;
++	p->last_ran = rq->clock_task;
++}
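
update_curr() above is straight-line bookkeeping: charge the nanoseconds since last_ran to the task and burn the same amount off its time slice. A user-space sketch of that arithmetic follows; the struct and the numbers are made up for illustration and are not kernel API:

#include <stdint.h>
#include <stdio.h>

struct fake_task {
	int64_t sched_time;	/* total runtime accounted, ns */
	int64_t time_slice;	/* remaining slice, ns */
	uint64_t last_ran;	/* rq clock at last accounting, ns */
};

static void fake_update_curr(struct fake_task *p, uint64_t clock_task)
{
	int64_t ns = clock_task - p->last_ran;

	p->sched_time += ns;	/* accounted runtime grows ... */
	p->time_slice -= ns;	/* ... and the slice shrinks by the same amount */
	p->last_ran = clock_task;
}

int main(void)
{
	struct fake_task t = { .time_slice = 4000000, .last_ran = 1000 };

	fake_update_curr(&t, 2500000 + 1000);	/* ran for 2.5 ms */
	printf("ran=%lld ns, slice left=%lld ns\n",
	       (long long)t.sched_time, (long long)t.time_slice);
	return 0;
}
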
++
++/*
++ * Return accounted runtime for the task.
++ * Return separately the current task's pending runtime that has not been
++ * accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++	/*
++	 * 64-bit doesn't need locks to atomically read a 64-bit value.
++	 * So we have an optimization chance when the task's delta_exec is 0.
++	 * Reading ->on_cpu is racy, but this is ok.
++	 *
++	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
++	 * If we race with it entering CPU, unaccounted time is 0. This is
++	 * indistinguishable from the read occurring a few cycles earlier.
++	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++	 * been accounted, so we're correct here as well.
++	 */
++	if (!p->on_cpu || !task_on_rq_queued(p))
++		return tsk_seruntime(p);
++#endif
++
++	rq = task_access_lock_irqsave(p, &lock, &flags);
++	/*
++	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
++	 * project cycles that may never be accounted to this
++	 * thread, breaking clock_gettime().
++	 */
++	if (p == rq->curr && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		update_curr(rq, p);
++	}
++	ns = tsk_seruntime(p);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++	return ns;
++}
++
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++	struct task_struct *p = rq->curr;
++
++	if (is_idle_task(p))
++		return;
++
++	update_curr(rq, p);
++	cpufreq_update_util(rq, 0);
++
++	/*
++	 * Tasks that have less than RESCHED_NS of time slice left will be
++	 * rescheduled.
++	 */
++	if (p->time_slice >= RESCHED_NS)
++		return;
++	set_tsk_need_resched(p);
++	set_preempt_need_resched();
++}
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++	u64 resched_latency, now = rq_clock(rq);
++	static bool warned_once;
++
++	if (sysctl_resched_latency_warn_once && warned_once)
++		return 0;
++
++	if (!need_resched() || !latency_warn_ms)
++		return 0;
++
++	if (system_state == SYSTEM_BOOTING)
++		return 0;
++
++	if (!rq->last_seen_need_resched_ns) {
++		rq->last_seen_need_resched_ns = now;
++		rq->ticks_without_resched = 0;
++		return 0;
++	}
++
++	rq->ticks_without_resched++;
++	resched_latency = now - rq->last_seen_need_resched_ns;
++	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++		return 0;
++
++	warned_once = true;
++
++	return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++	long val;
++
++	if ((kstrtol(str, 0, &val))) {
++		pr_warn("Unable to set resched_latency_warn_ms\n");
++		return 1;
++	}
++
++	sysctl_resched_latency_warn_ms = val;
++	return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
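
cpu_resched_latency() reports how long a pending need_resched has gone unserviced, and only complains once the delay exceeds the millisecond threshold. A condensed user-space sketch of that check (illustrative only; the reset policy is simplified compared to the patch):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_MSEC 1000000ULL

static uint64_t last_seen_need_resched_ns;	/* 0 == nothing pending */

/* Returns the latency in ns once it exceeds warn_ms, else 0. */
static uint64_t check_resched_latency(bool need_resched, uint64_t now_ns,
				      unsigned int warn_ms)
{
	uint64_t latency;

	if (!need_resched || !warn_ms) {
		last_seen_need_resched_ns = 0;
		return 0;
	}
	if (!last_seen_need_resched_ns) {
		last_seen_need_resched_ns = now_ns;	/* start the clock */
		return 0;
	}
	latency = now_ns - last_seen_need_resched_ns;
	return latency > (uint64_t)warn_ms * NSEC_PER_MSEC ? latency : 0;
}

int main(void)
{
	check_resched_latency(true, 1000, 100);		/* arm the check */
	uint64_t l = check_resched_latency(true, 1000 + 150 * NSEC_PER_MSEC, 100);

	if (l)
		printf("resched latency: %llu ms\n",
		       (unsigned long long)(l / NSEC_PER_MSEC));
	return 0;
}
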
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void sched_tick(void)
++{
++	int cpu __maybe_unused = smp_processor_id();
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *curr = rq->curr;
++	u64 resched_latency;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		arch_scale_freq_tick();
++
++	sched_clock_tick();
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	scheduler_task_tick(rq);
++	if (sched_feat(LATENCY_WARN))
++		resched_latency = cpu_resched_latency(rq);
++	calc_global_load_tick(rq);
++
++	task_tick_mm_cid(rq, rq->curr);
++
++	raw_spin_unlock(&rq->lock);
++
++	if (sched_feat(LATENCY_WARN) && resched_latency)
++		resched_latency_warn(cpu, resched_latency);
++
++	perf_event_task_tick();
++
++	if (curr->flags & PF_WQ_WORKER)
++		wq_worker_tick(curr);
++}
++
++#ifdef CONFIG_SMP
++
++static int active_balance_cpu_stop(void *data)
++{
++	struct balance_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++	cpumask_t tmp;
++
++	local_irq_save(flags);
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++
++	arg->active = 0;
++
++	if (task_on_rq_queued(p) && task_rq(p) == rq &&
++	    cpumask_and(&tmp, p->cpus_ptr, arg->cpumask) &&
++	    !is_migration_disabled(p)) {
++		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu_of(rq)));
++		rq = move_queued_task(rq, p, dcpu);
++	}
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++/* trigger_active_balance - for @cpu/@rq */
++static inline int
++trigger_active_balance(struct rq *src_rq, struct rq *rq, struct balance_arg *arg)
++{
++	unsigned long flags;
++	struct task_struct *p;
++	int res;
++
++	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++		return 0;
++
++	res = (1 == rq->nr_running) &&					\
++	      !is_migration_disabled((p = sched_rq_first_task(rq))) &&	\
++	      cpumask_intersects(p->cpus_ptr, arg->cpumask) &&		\
++	      !arg->active;
++	if (res) {
++		arg->task = p;
++
++		arg->active = 1;
++	}
++
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	if (res) {
++		preempt_disable();
++		raw_spin_unlock(&src_rq->lock);
++
++		stop_one_cpu_nowait(cpu_of(rq), active_balance_cpu_stop,
++				    arg, &rq->active_balance_work);
++
++		preempt_enable();
++		raw_spin_lock(&src_rq->lock);
++	}
++
++	return res;
++}
++
++#ifdef CONFIG_SCHED_SMT
++/*
++ * sg_balance - sibling group balance check for run queue @rq
++ */
++static inline void sg_balance(struct rq *rq)
++{
++	cpumask_t chk;
++
++	if (cpumask_andnot(&chk, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
++		int i, cpu = cpu_of(rq);
++
++		for_each_cpu_wrap(i, &chk, cpu) {
++			if (cpumask_subset(cpu_smt_mask(i), &chk)) {
++				struct rq *target_rq = cpu_rq(i);
++				if (trigger_active_balance(rq, target_rq, &target_rq->sg_balance_arg))
++					return;
++			}
++		}
++	}
++}
++
++static DEFINE_PER_CPU(struct balance_callback, sg_balance_head) = {
++	.func = sg_balance,
++};
++#endif /* CONFIG_SCHED_SMT */
++
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++	int			cpu;
++	atomic_t		state;
++	struct delayed_work	work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE	0
++#define TICK_SCHED_REMOTE_OFFLINING	1
++#define TICK_SCHED_REMOTE_RUNNING	2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ *          TICK_SCHED_REMOTE_OFFLINE
++ *                    |   ^
++ *                    |   |
++ *                    |   | sched_tick_remote()
++ *                    |   |
++ *                    |   |
++ *                    +--TICK_SCHED_REMOTE_OFFLINING
++ *                    |   ^
++ *                    |   |
++ * sched_tick_start() |   | sched_tick_stop()
++ *                    |   |
++ *                    V   |
++ *          TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct tick_work *twork = container_of(dwork, struct tick_work, work);
++	int cpu = twork->cpu;
++	struct rq *rq = cpu_rq(cpu);
++	int os;
++
++	/*
++	 * Handle the tick only if it appears the remote CPU is running in full
++	 * dynticks mode. The check is racy by nature, but missing a tick or
++	 * having one too much is no big deal because the scheduler tick updates
++	 * having one too many is no big deal because the scheduler tick updates
++	 * of when exactly it is running.
++	 */
++	if (tick_nohz_tick_stopped_cpu(cpu)) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		struct task_struct *curr = rq->curr;
++
++		if (cpu_online(cpu)) {
++			update_rq_clock(rq);
++
++			if (!is_idle_task(curr)) {
++				/*
++				 * Make sure the next tick runs within a
++				 * reasonable amount of time.
++				 */
++				u64 delta = rq_clock_task(rq) - curr->last_ran;
++				WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++			}
++			scheduler_task_tick(rq);
++
++			calc_load_nohz_remote(rq);
++		}
++	}
++
++	/*
++	 * Run the remote tick once per second (1Hz). This arbitrary
++	 * frequency is large enough to avoid overload but short enough
++	 * to keep scheduler internal stats reasonably up to date.  But
++	 * first update state to reflect hotplug activity if required.
++	 */
++	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++	if (os == TICK_SCHED_REMOTE_RUNNING)
++		queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++	int os;
++	struct tick_work *twork;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++	if (os == TICK_SCHED_REMOTE_OFFLINE) {
++		twork->cpu = cpu;
++		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++	}
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++	struct tick_work *twork;
++	int os;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	/* There cannot be competing actions, but don't rely on stop-machine. */
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++	WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++	/* Don't cancel, as this would mess up the state machine. */
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++	tick_work_cpu = alloc_percpu(struct tick_work);
++	BUG_ON(!tick_work_cpu);
++	return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
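
The OFFLINE/OFFLINING/RUNNING state machine documented above can be mirrored with C11 atomics: start and stop use an exchange, while the work function uses an add-unless so a RUNNING state is left alone and an OFFLINING state drops to OFFLINE. A user-space sketch (not part of the patch):

#include <stdatomic.h>
#include <stdio.h>

enum { REMOTE_OFFLINE, REMOTE_OFFLINING, REMOTE_RUNNING };

static atomic_int state = REMOTE_OFFLINE;

/* Same semantics as the kernel's atomic_fetch_add_unless(). */
static int fetch_add_unless(atomic_int *v, int a, int u)
{
	int old = atomic_load(v);

	while (old != u && !atomic_compare_exchange_weak(v, &old, old + a))
		;
	return old;
}

static void tick_start(void)
{
	if (atomic_exchange(&state, REMOTE_RUNNING) == REMOTE_OFFLINE)
		printf("first start: queue the delayed work\n");
}

static void tick_stop(void)
{
	atomic_exchange(&state, REMOTE_OFFLINING);
}

static void tick_work(void)
{
	/* Decrement OFFLINING -> OFFLINE; leave RUNNING untouched. */
	if (fetch_add_unless(&state, -1, REMOTE_RUNNING) == REMOTE_RUNNING)
		printf("still RUNNING: requeue for the next second\n");
	else
		printf("stopped: now OFFLINE, do not requeue\n");
}

int main(void)
{
	tick_start();
	tick_work();	/* RUNNING: requeues */
	tick_stop();
	tick_work();	/* OFFLINING -> OFFLINE: stops */
	return 0;
}
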
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++				defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++	if (preempt_count() == val) {
++		unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++		current->preempt_disable_ip = ip;
++#endif
++		trace_preempt_off(CALLER_ADDR0, ip);
++	}
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++		return;
++#endif
++	__preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Spinlock count overflowing soon?
++	 */
++	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++				PREEMPT_MASK - 10);
++#endif
++	preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++	if (preempt_count() == val)
++		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++		return;
++	/*
++	 * Is the spinlock portion underflowing?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++			!(preempt_count() & PREEMPT_MASK)))
++		return;
++#endif
++
++	preempt_latency_stop(val);
++	__preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
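
The latency tracking above only fires at the outermost nesting level: the count equals the value just added (or about to be subtracted) exactly when preemption toggles between enabled and disabled. A user-space sketch with an ordinary counter and CLOCK_MONOTONIC timestamps (illustrative only, not part of the patch):

#include <stdio.h>
#include <time.h>

static int preempt_count;
static unsigned long long latency_start_ns;

static unsigned long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

static void my_preempt_disable(void)
{
	preempt_count += 1;
	/* Only the outermost disable (0 -> 1) starts the latency clock. */
	if (preempt_count == 1)
		latency_start_ns = now_ns();
}

static void my_preempt_enable(void)
{
	/* Only the outermost enable (1 -> 0) stops the clock. */
	if (preempt_count == 1)
		printf("preempt-off section: ~%llu ns\n",
		       now_ns() - latency_start_ns);
	preempt_count -= 1;
}

int main(void)
{
	my_preempt_disable();
	my_preempt_disable();	/* nested: no new measurement */
	my_preempt_enable();
	my_preempt_enable();	/* outermost: prints the latency */
	return 0;
}
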
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	return p->preempt_disable_ip;
++#else
++	return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++	/* Save this before calling printk(), since that will clobber it */
++	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++	if (oops_in_progress)
++		return;
++
++	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++		prev->comm, prev->pid, preempt_count());
++
++	debug_show_held_locks(prev);
++	print_modules();
++	if (irqs_disabled())
++		print_irqtrace_events(prev);
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		pr_err("Preemption disabled at:");
++		print_ip_sym(KERN_ERR, preempt_disable_ip);
++	}
++	check_panic_on_warn("scheduling while atomic");
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++	if (task_stack_end_corrupted(prev))
++		panic("corrupted stack end detected inside scheduler\n");
++
++	if (task_scs_end_corrupted(prev))
++		panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++			prev->comm, prev->pid, prev->non_block_count);
++		dump_stack();
++		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++	}
++#endif
++
++	if (unlikely(in_atomic_preempt_off())) {
++		__schedule_bug(prev);
++		preempt_count_set(PREEMPT_DISABLED);
++	}
++	rcu_sleep_check();
++	SCHED_WARN_ON(ct_state() == CONTEXT_USER);
++
++	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++	schedstat_inc(this_rq()->sched_count);
++}
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
++	       sched_rq_pending_mask.bits[0],
++	       sched_idle_mask->bits[0],
++	       sched_sg_idle_mask.bits[0]);
++}
++#else
++inline void alt_sched_debug(void) {}
++#endif
++
++#ifdef	CONFIG_SMP
++
++#ifdef CONFIG_PREEMPT_RT
++#define SCHED_NR_MIGRATE_BREAK 8
++#else
++#define SCHED_NR_MIGRATE_BREAK 32
++#endif
++
++const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
++
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++	struct task_struct *p, *skip = rq->curr;
++	int nr_migrated = 0;
++	int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
++
++	/* Workaround to check that rq->curr is still on the rq */
++	if (!task_on_rq_queued(skip))
++		return 0;
++
++	while (skip != rq->idle && nr_tries &&
++	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++		skip = sched_rq_next_task(p, rq);
++		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++			__SCHED_DEQUEUE_TASK(p, rq, 0, );
++			set_task_cpu(p, dest_cpu);
++			sched_task_sanity_check(p, dest_rq);
++			sched_mm_cid_migrate_to(dest_rq, p, cpu_of(rq));
++			__SCHED_ENQUEUE_TASK(p, dest_rq, 0, );
++			nr_migrated++;
++		}
++		nr_tries--;
++	}
++
++	return nr_migrated;
++}
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++	cpumask_t *topo_mask, *end_mask, chk;
++
++	if (unlikely(!rq->online))
++		return 0;
++
++	if (cpumask_empty(&sched_rq_pending_mask))
++		return 0;
++
++	topo_mask = per_cpu(sched_cpu_topo_masks, cpu);
++	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++	do {
++		int i;
++
++		if (!cpumask_and(&chk, &sched_rq_pending_mask, topo_mask))
++			continue;
++
++		for_each_cpu_wrap(i, &chk, cpu) {
++			int nr_migrated;
++			struct rq *src_rq;
++
++			src_rq = cpu_rq(i);
++			if (!do_raw_spin_trylock(&src_rq->lock))
++				continue;
++			spin_acquire(&src_rq->lock.dep_map,
++				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++				src_rq->nr_running -= nr_migrated;
++				if (src_rq->nr_running < 2)
++					cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++				spin_release(&src_rq->lock.dep_map, _RET_IP_);
++				do_raw_spin_unlock(&src_rq->lock);
++
++				rq->nr_running += nr_migrated;
++				if (rq->nr_running > 1)
++					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++				update_sched_preempt_mask(rq);
++				cpufreq_update_util(rq, 0);
++
++				return 1;
++			}
++
++			spin_release(&src_rq->lock.dep_map, _RET_IP_);
++			do_raw_spin_unlock(&src_rq->lock);
++		}
++	} while (++topo_mask < end_mask);
++
++	return 0;
++}
++#endif
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sysctl_sched_base_slice;
++
++	sched_task_renew(p, rq);
++
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++		requeue_task(p, rq);
++}
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired, since there's
++ * no point rescheduling when so little time is left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++	if (unlikely(rq->idle == p))
++		return;
++
++	update_curr(rq, p);
++
++	if (p->time_slice < RESCHED_NS)
++		time_slice_expired(p, rq);
++}
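
time_slice_expired() refills the slice from sysctl_sched_base_slice and requeues the task, and check_curr() triggers it once the remaining slice drops below RESCHED_NS. A user-space sketch of that flow (the constants are made up for illustration and do not reflect the patch's actual values):

#include <stdint.h>
#include <stdio.h>

#define BASE_SLICE_NS	4000000LL	/* stand-in for sysctl_sched_base_slice */
#define RESCHED_NS	 100000LL	/* "too little left to bother" threshold */

struct fake_task { int64_t time_slice; int requeues; };

static void fake_time_slice_expired(struct fake_task *p)
{
	p->time_slice = BASE_SLICE_NS;	/* refill ... */
	p->requeues++;			/* ... and requeue behind its peers */
}

static void fake_check_curr(struct fake_task *p, int64_t ran_ns)
{
	p->time_slice -= ran_ns;
	if (p->time_slice < RESCHED_NS)
		fake_time_slice_expired(p);
}

int main(void)
{
	struct fake_task t = { .time_slice = BASE_SLICE_NS };

	fake_check_curr(&t, 1000000);	/* plenty left: keep running */
	fake_check_curr(&t, 2950000);	/* below RESCHED_NS: refill + requeue */
	printf("slice=%lld ns, requeued %d time(s)\n",
	       (long long)t.time_slice, t.requeues);
	return 0;
}
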
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu)
++{
++	struct task_struct *next = sched_rq_first_task(rq);
++
++	if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++		if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++			if (static_key_count(&sched_smt_present.key) > 1 &&
++			    cpumask_test_cpu(cpu, sched_sg_idle_mask) &&
++			    rq->online)
++				__queue_balance_callback(rq, &per_cpu(sg_balance_head, cpu));
++#endif
++			schedstat_inc(rq->sched_goidle);
++			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++			return next;
++#ifdef	CONFIG_SMP
++		}
++		next = sched_rq_first_task(rq);
++#endif
++	}
++#ifdef CONFIG_HIGH_RES_TIMERS
++	hrtick_start(rq, next->time_slice);
++#endif
++	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++	return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock. Note that
++ * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
++ * optimize the AND operation out and just check for zero.
++ */
++#define SM_NONE			0x0
++#define SM_PREEMPT		0x1
++#define SM_RTLOCK_WAIT		0x2
++
++#ifndef CONFIG_PREEMPT_RT
++# define SM_MASK_PREEMPT	(~0U)
++#else
++# define SM_MASK_PREEMPT	SM_PREEMPT
++#endif
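
To see why the !PREEMPT_RT mask turns the AND into a plain non-zero test, and why SM_RTLOCK_WAIT is then treated like a preemption, the following stand-alone snippet evaluates the expression for both mask choices (illustrative only, not part of the patch):

#include <stdio.h>

#define SM_NONE		0x0
#define SM_PREEMPT	0x1
#define SM_RTLOCK_WAIT	0x2

int main(void)
{
	unsigned int mask_no_rt = ~0U;		/* !PREEMPT_RT: all bits set */
	unsigned int mask_rt = SM_PREEMPT;	/* PREEMPT_RT: only SM_PREEMPT */

	for (unsigned int mode = SM_NONE; mode <= SM_RTLOCK_WAIT; mode++)
		printf("mode=%u  !rt:%u  rt:%u\n", mode,
		       !!(mode & mask_no_rt),	/* same as mode != 0 */
		       !!(mode & mask_rt));	/* true only for SM_PREEMPT */
	return 0;
}
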
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ *      paths. For example, see arch/x86/entry_64.S.
++ *
++ *      To drive preemption between tasks, the scheduler sets the flag in timer
++ *      interrupt handler sched_tick().
++ *
++ *   3. Wakeups don't really cause entry into schedule(). They add a
++ *      task to the run-queue and that's it.
++ *
++ *      Now, if the new task added to the run-queue preempts the current
++ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ *      called on the nearest possible occasion:
++ *
++ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ *         - in syscall or exception context, at the next outermost
++ *           preempt_enable(). (this might be as soon as the wake_up()'s
++ *           spin_unlock()!)
++ *
++ *         - in IRQ context, return from interrupt-handler to
++ *           preemptible context
++ *
++ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ *         then at the next:
++ *
++ *          - cond_resched() call
++ *          - explicit schedule() call
++ *          - return from syscall or exception to user-space
++ *          - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(unsigned int sched_mode)
++{
++	struct task_struct *prev, *next;
++	unsigned long *switch_count;
++	unsigned long prev_state;
++	struct rq *rq;
++	int cpu;
++
++	cpu = smp_processor_id();
++	rq = cpu_rq(cpu);
++	prev = rq->curr;
++
++	schedule_debug(prev, !!sched_mode);
++
++	/* bypass sched_feat(HRTICK) checking, which Alt schedule FW doesn't support */
++	hrtick_clear(rq);
++
++	local_irq_disable();
++	rcu_note_context_switch(!!sched_mode);
++
++	/*
++	 * Make sure that signal_pending_state()->signal_pending() below
++	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++	 * done by the caller to avoid the race with signal_wake_up():
++	 *
++	 * __set_current_state(@state)		signal_wake_up()
++	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
++	 *					  wake_up_state(p, state)
++	 *   LOCK rq->lock			    LOCK p->pi_state
++	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
++	 *     if (signal_pending_state())	    if (p->state & @state)
++	 *
++	 * Also, the membarrier system call requires a full memory barrier
++	 * after coming from user-space, before storing to rq->curr; this
++	 * barrier matches a full barrier in the proximity of the membarrier
++	 * system call exit.
++	 */
++	raw_spin_lock(&rq->lock);
++	smp_mb__after_spinlock();
++
++	update_rq_clock(rq);
++
++	switch_count = &prev->nivcsw;
++	/*
++	 * We must load prev->state once (task_struct::state is volatile), such
++	 * that we form a control dependency vs deactivate_task() below.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
++		if (signal_pending_state(prev_state, prev)) {
++			WRITE_ONCE(prev->__state, TASK_RUNNING);
++		} else {
++			prev->sched_contributes_to_load =
++				(prev_state & TASK_UNINTERRUPTIBLE) &&
++				!(prev_state & TASK_NOLOAD) &&
++				!(prev_state & TASK_FROZEN);
++
++			if (prev->sched_contributes_to_load)
++				rq->nr_uninterruptible++;
++
++			/*
++			 * __schedule()			ttwu()
++			 *   prev_state = prev->state;    if (p->on_rq && ...)
++			 *   if (prev_state)		    goto out;
++			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
++			 *				  p->state = TASK_WAKING
++			 *
++			 * Where __schedule() and ttwu() have matching control dependencies.
++			 *
++			 * After this, schedule() must not care about p->state any more.
++			 */
++			sched_task_deactivate(prev, rq);
++			deactivate_task(prev, rq);
++
++			if (prev->in_iowait) {
++				atomic_inc(&rq->nr_iowait);
++				delayacct_blkio_start();
++			}
++		}
++		switch_count = &prev->nvcsw;
++	}
++
++	check_curr(prev, rq);
++
++	next = choose_next_task(rq, cpu);
++	clear_tsk_need_resched(prev);
++	clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++	rq->last_seen_need_resched_ns = 0;
++#endif
++
++	if (likely(prev != next)) {
++		next->last_ran = rq->clock_task;
++
++		/*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
++		rq->nr_switches++;
++		/*
++		 * RCU users of rcu_dereference(rq->curr) may not see
++		 * changes to task_struct made by pick_next_task().
++		 */
++		RCU_INIT_POINTER(rq->curr, next);
++		/*
++		 * The membarrier system call requires each architecture
++		 * to have a full memory barrier after updating
++		 * rq->curr, before returning to user-space.
++		 *
++		 * Here are the schemes providing that barrier on the
++		 * various architectures:
++		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
++		 *   RISC-V.  switch_mm() relies on membarrier_arch_switch_mm()
++		 *   on PowerPC and on RISC-V.
++		 * - finish_lock_switch() for weakly-ordered
++		 *   architectures where spin_unlock is a full barrier,
++		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
++		 *   is a RELEASE barrier),
++		 *
++		 * The barrier matches a full barrier in the proximity of
++		 * the membarrier system call entry.
++		 *
++		 * On RISC-V, this barrier pairing is also needed for the
++		 * SYNC_CORE command when switching between processes, cf.
++		 * the inline comments in membarrier_arch_switch_mm().
++		 */
++		++*switch_count;
++
++		trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
++
++		/* Also unlocks the rq: */
++		rq = context_switch(rq, prev, next);
++
++		cpu = cpu_of(rq);
++	} else {
++		__balance_callbacks(rq);
++		raw_spin_unlock_irq(&rq->lock);
++	}
++}
++
++void __noreturn do_task_dead(void)
++{
++	/* Causes final put_task_struct in finish_task_switch(): */
++	set_special_state(TASK_DEAD);
++
++	/* Tell freezer to ignore us: */
++	current->flags |= PF_NOFREEZE;
++
++	__schedule(SM_NONE);
++	BUG();
++
++	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++	for (;;)
++		cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++	static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
++	unsigned int task_flags;
++
++	/*
++	 * Establish LD_WAIT_CONFIG context to ensure none of the code called
++	 * will use a blocking primitive -- which would lead to recursion.
++	 */
++	lock_map_acquire_try(&sched_map);
++
++	task_flags = tsk->flags;
++	/*
++	 * If a worker goes to sleep, notify and ask workqueue whether it
++	 * wants to wake up a task to maintain concurrency.
++	 */
++	if (task_flags & PF_WQ_WORKER)
++		wq_worker_sleeping(tsk);
++	else if (task_flags & PF_IO_WORKER)
++		io_wq_worker_sleeping(tsk);
++
++	/*
++	 * spinlock and rwlock must not flush block requests.  This will
++	 * deadlock if the callback attempts to acquire a lock which is
++	 * already acquired.
++	 */
++	SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
++
++	/*
++	 * If we are going to sleep and we have plugged IO queued,
++	 * make sure to submit it to avoid deadlocks.
++	 */
++	blk_flush_plug(tsk->plug, true);
++
++	lock_map_release(&sched_map);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER | PF_BLOCK_TS)) {
++		if (tsk->flags & PF_BLOCK_TS)
++			blk_plug_invalidate_ts(tsk);
++		if (tsk->flags & PF_WQ_WORKER)
++			wq_worker_running(tsk);
++		else if (tsk->flags & PF_IO_WORKER)
++			io_wq_worker_running(tsk);
++	}
++}
++
++static __always_inline void __schedule_loop(unsigned int sched_mode)
++{
++	do {
++		preempt_disable();
++		__schedule(sched_mode);
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++	struct task_struct *tsk = current;
++
++#ifdef CONFIG_RT_MUTEXES
++	lockdep_assert(!tsk->sched_rt_mutex);
++#endif
++
++	if (!task_is_running(tsk))
++		sched_submit_work(tsk);
++	__schedule_loop(SM_NONE);
++	sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disable() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++	/*
++	 * As this skips calling sched_submit_work(), which the idle task does
++	 * regardless because that function is a nop when the task is in a
++	 * TASK_RUNNING state, make sure this isn't used someplace that the
++	 * current task can be in any other state. Note, idle is always in the
++	 * TASK_RUNNING state.
++	 */
++	WARN_ON_ONCE(current->__state);
++	do {
++		__schedule(SM_NONE);
++	} while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++	/*
++	 * If we come here after a random call to set_need_resched(),
++	 * or we have been woken up remotely but the IPI has not yet arrived,
++	 * we haven't yet exited the RCU idle mode. Do it here manually until
++	 * we find a better solution.
++	 *
++	 * NB: There are buggy callers of this function.  Ideally we
++	 * should warn if prev_state != CONTEXT_USER, but that will trigger
++	 * too frequently to make sense yet.
++	 */
++	enum ctx_state prev_state = exception_enter();
++	schedule();
++	exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++	sched_preempt_enable_no_resched();
++	schedule();
++	preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++	__schedule_loop(SM_RTLOCK_WAIT);
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		__schedule(SM_PREEMPT);
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++
++		/*
++		 * Check again in case we missed a preemption opportunity
++		 * between schedule and now.
++		 */
++	} while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++	/*
++	 * If there is a non-zero preempt_count or interrupts are disabled,
++	 * we do not want to preempt the current task. Just return..
++	 */
++	if (likely(!preemptible()))
++		return;
++
++	preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled	preempt_schedule
++#define preempt_schedule_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++		return;
++	preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++	enum ctx_state prev_ctx;
++
++	if (likely(!preemptible()))
++		return;
++
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		/*
++		 * Needs preempt disabled in case user_exit() is traced
++		 * and the tracer calls preempt_enable_notrace() causing
++		 * an infinite recursion.
++		 */
++		prev_ctx = exception_enter();
++		__schedule(SM_PREEMPT);
++		exception_exit(prev_ctx);
++
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++	} while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++		return;
++	preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of irq context.
++ * Note, that this is called and return with irqs disabled. This will
++ * protect us against recursive calling from irq.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++	enum ctx_state prev_state;
++
++	/* Catch callers which need to be fixed */
++	BUG_ON(preempt_count() || !irqs_disabled());
++
++	prev_state = exception_enter();
++
++	do {
++		preempt_disable();
++		local_irq_enable();
++		__schedule(SM_PREEMPT);
++		local_irq_disable();
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++
++	exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++			  void *key)
++{
++	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
++	return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++static inline void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++	/* Trigger resched if task sched_prio has been modified. */
++	if (task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		requeue_task(p, rq);
++		wakeup_preempt(rq);
++	}
++}
++
++static void __setscheduler_prio(struct task_struct *p, int prio)
++{
++	p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++/*
++ * Would be more useful with typeof()/auto_type but they don't mix with
++ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
++ * name such that if someone were to implement this function we get to compare
++ * notes.
++ */
++#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
++
++void rt_mutex_pre_schedule(void)
++{
++	lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
++	sched_submit_work(current);
++}
++
++void rt_mutex_schedule(void)
++{
++	lockdep_assert(current->sched_rt_mutex);
++	__schedule_loop(SM_NONE);
++}
++
++void rt_mutex_post_schedule(void)
++{
++	sched_update_worker(current);
++	lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
++}
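
fetch_and_set() is just a test-and-set used to assert that the pre/post hooks stay balanced: the pre hook must see 0 and the post hook must see 1. A user-space sketch of that contract (it reuses the same GCC statement-expression form as the patch; names here are illustrative, not part of the patch):

#include <assert.h>
#include <stdio.h>

#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })

static int sched_rt_mutex;	/* stands in for current->sched_rt_mutex */

static void pre_schedule(void)
{
	/* Must not already be set: returns the old value, then sets 1. */
	assert(!fetch_and_set(sched_rt_mutex, 1));
}

static void post_schedule(void)
{
	/* Must still be set: returns 1, then clears it. */
	assert(fetch_and_set(sched_rt_mutex, 0));
}

int main(void)
{
	pre_schedule();
	post_schedule();
	printf("pre/post were balanced\n");
	return 0;
}
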
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++	if (pi_task)
++		prio = min(prio, pi_task->prio);
++
++	return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++	return __rt_effective_prio(pi_task, prio);
++}
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++	int prio;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	/* XXX used to be waiter->prio, not waiter->task->prio */
++	prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++	/*
++	 * If nothing changed; bail early.
++	 */
++	if (p->pi_top_task == pi_task && prio == p->prio)
++		return;
++
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Set under pi_lock && rq->lock, such that the value can be used under
++	 * either lock.
++	 *
++	 * Note that a load of trickery is needed to make this pointer cache work
++	 * right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
++	 * ensure a task is de-boosted (pi_task is set to NULL) before the
++	 * task is allowed to run again (and can exit). This ensures the pointer
++	 * points to a blocked task -- which guarantees the task is present.
++	 */
++	p->pi_top_task = pi_task;
++
++	/*
++	 * For FIFO/RR we only need to set prio, if that matches we're done.
++	 */
++	if (prio == p->prio)
++		goto out_unlock;
++
++	/*
++	 * Idle task boosting is a no-no in general. There is one
++	 * exception, when PREEMPT_RT and NOHZ is active:
++	 *
++	 * The idle task calls get_next_timer_interrupt() and holds
++	 * the timer wheel base->lock on the CPU and another CPU wants
++	 * to access the timer (probably to cancel it). We can safely
++	 * ignore the boosting request, as the idle CPU runs this code
++	 * with interrupts disabled and will complete the lock
++	 * protected section without being interrupted. So there is no
++	 * real need to boost.
++	 */
++	if (unlikely(p == rq->idle)) {
++		WARN_ON(p != rq->curr);
++		WARN_ON(p->pi_blocked_on);
++		goto out_unlock;
++	}
++
++	trace_sched_pi_setprio(p, pi_task);
++
++	__setscheduler_prio(p, prio);
++
++	check_task_changed(p, rq);
++out_unlock:
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++
++	if (task_on_rq_queued(p))
++		__balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++
++	preempt_enable();
++}
++#else
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	return prio;
++}
++#endif
++
++void set_user_nice(struct task_struct *p, long nice)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++		return;
++	/*
++	 * We have to be careful, if called from sys_setpriority(),
++	 * the task might be in the middle of scheduling on another CPU.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	rq = __task_access_lock(p, &lock);
++
++	p->static_prio = NICE_TO_PRIO(nice);
++	/*
++	 * The RT priorities are set via sched_setscheduler(), but we still
++	 * allow the 'normal' nice value to be set - but as expected
++	 * it won't have any effect on scheduling while the task is
++	 * not SCHED_NORMAL/SCHED_BATCH:
++	 */
++	if (task_has_rt_policy(p))
++		goto out_unlock;
++
++	p->prio = effective_prio(p);
++
++	check_task_changed(p, rq);
++out_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++EXPORT_SYMBOL(set_user_nice);
++
++/*
++ * is_nice_reduction - check if nice value is an actual reduction
++ *
++ * Similar to can_nice() but does not perform a capability check.
++ *
++ * @p: task
++ * @nice: nice value
++ */
++static bool is_nice_reduction(const struct task_struct *p, const int nice)
++{
++	/* Convert nice value [19,-20] to rlimit style value [1,40]: */
++	int nice_rlim = nice_to_rlimit(nice);
++
++	return (nice_rlim <= task_rlimit(p, RLIMIT_NICE));
++}
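
The rlimit-style conversion referenced above maps nice 19 to 1 and nice -20 to 40 (i.e. 20 - nice), so the comparison against RLIMIT_NICE permits only reductions within the limit. A user-space sketch of that check (illustrative only; the conversion is reproduced here on the assumption that it matches the kernel's nice_to_rlimit()):

#include <stdio.h>

#define MAX_NICE	19
#define MIN_NICE	-20

/* Convert a nice value [19,-20] to rlimit style [1,40] (20 - nice). */
static long nice_to_rlimit(long nice)
{
	return MAX_NICE - nice + 1;
}

/* Reduction check: allowed iff the target nice stays within RLIMIT_NICE. */
static int nice_allowed(long nice, unsigned long rlimit_nice)
{
	return nice_to_rlimit(nice) <= rlimit_nice;
}

int main(void)
{
	/* RLIMIT_NICE of 20 permits nice >= 0, but not negative nice. */
	printf("nice  0 with rlimit 20: %d\n", nice_allowed(0, 20));
	printf("nice -5 with rlimit 20: %d\n", nice_allowed(-5, 20));
	printf("nice -5 with rlimit 30: %d\n", nice_allowed(-5, 30));
	return 0;
}
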
++
++/*
++ * can_nice - check if a task can reduce its nice value
++ * @p: task
++ * @nice: nice value
++ */
++int can_nice(const struct task_struct *p, const int nice)
++{
++	return is_nice_reduction(p, nice) || capable(CAP_SYS_NICE);
++}
++
++#ifdef __ARCH_WANT_SYS_NICE
++
++/*
++ * sys_nice - change the priority of the current process.
++ * @increment: priority increment
++ *
++ * sys_setpriority is a more generic, but much slower function that
++ * does similar things.
++ */
++SYSCALL_DEFINE1(nice, int, increment)
++{
++	long nice, retval;
++
++	/*
++	 * Setpriority might change our priority at the same moment.
++	 * We don't have to worry. Conceptually one call occurs first
++	 * and we have a single winner.
++	 */
++
++	increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
++	nice = task_nice(current) + increment;
++
++	nice = clamp_val(nice, MIN_NICE, MAX_NICE);
++	if (increment < 0 && !can_nice(current, nice))
++		return -EPERM;
++
++	retval = security_task_setnice(current, nice);
++	if (retval)
++		return retval;
++
++	set_user_nice(current, nice);
++	return 0;
++}
++
++#endif
++
++/**
++ * task_prio - return the priority value of a given task.
++ * @p: the task in question.
++ *
++ * Return: The priority value as seen by users in /proc.
++ *
++ * sched policy              return value    kernel prio    user prio/nice
++ *
++ * (BMQ) normal, batch, idle [0 ... 53]      [100 ... 139]  0/[-20 ... 19]/[-7 ... 7]
++ * (PDS) normal, batch, idle [0 ... 39]      100            0/[-20 ... 19]
++ * fifo, rr                  [-1 ... -100]   [99 ... 0]     [0 ... 99]
++ */
++int task_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++		task_sched_prio_normal(p, task_rq(p));
++}
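
For the fifo/rr row of the table above: a user RT priority r maps to kernel prio 99 - r, and task_prio() then subtracts MAX_RT_PRIO, giving the -1 ... -100 range shown. A small stand-alone calculation of that RT branch (illustrative only; the normal-policy branch depends on scheduler-internal state and is not reproduced here):

#include <stdio.h>

#define MAX_RT_PRIO	100

/* Kernel prio for an RT user priority (SCHED_FIFO/RR): 99 .. 0 */
static int rt_kernel_prio(int user_rt_prio)
{
	return MAX_RT_PRIO - 1 - user_rt_prio;
}

/* The value /proc would show, per the table above (RT branch only). */
static int rt_task_prio(int user_rt_prio)
{
	return rt_kernel_prio(user_rt_prio) - MAX_RT_PRIO;
}

int main(void)
{
	for (int p = 0; p <= 99; p += 33)
		printf("user rt prio %2d -> kernel %2d -> /proc %4d\n",
		       p, rt_kernel_prio(p), rt_task_prio(p));
	return 0;
}
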
++
++/**
++ * idle_cpu - is a given CPU idle currently?
++ * @cpu: the processor in question.
++ *
++ * Return: 1 if the CPU is currently idle. 0 otherwise.
++ */
++int idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (rq->curr != rq->idle)
++		return 0;
++
++	if (rq->nr_running)
++		return 0;
++
++#ifdef CONFIG_SMP
++	if (rq->ttwu_pending)
++		return 0;
++#endif
++
++	return 1;
++}
++
++/**
++ * idle_task - return the idle task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * Return: The idle task for the cpu @cpu.
++ */
++struct task_struct *idle_task(int cpu)
++{
++	return cpu_rq(cpu)->idle;
++}
++
++/**
++ * find_process_by_pid - find a process with a matching PID value.
++ * @pid: the pid in question.
++ *
++ * The task of @pid, if found. %NULL otherwise.
++ */
++static inline struct task_struct *find_process_by_pid(pid_t pid)
++{
++	return pid ? find_task_by_vpid(pid) : current;
++}
++
++static struct task_struct *find_get_task(pid_t pid)
++{
++	struct task_struct *p;
++	guard(rcu)();
++
++	p = find_process_by_pid(pid);
++	if (likely(p))
++		get_task_struct(p);
++
++	return p;
++}
++
++DEFINE_CLASS(find_get_task, struct task_struct *, if (_T) put_task_struct(_T),
++	     find_get_task(pid), pid_t pid)
++
++/*
++ * sched_setparam() passes in -1 for its policy, to let the functions
++ * it calls know not to change it.
++ */
++#define SETPARAM_POLICY -1
++
++static void __setscheduler_params(struct task_struct *p,
++		const struct sched_attr *attr)
++{
++	int policy = attr->sched_policy;
++
++	if (policy == SETPARAM_POLICY)
++		policy = p->policy;
++
++	p->policy = policy;
++
++	/*
++	 * allow the normal nice value to be set, but it will not have any
++	 * effect on scheduling while the task is not SCHED_NORMAL/
++	 * SCHED_BATCH
++	 */
++	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++
++	/*
++	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
++	 * !rt_policy. Always setting this ensures that things like
++	 * getparam()/getattr() don't report silly values for !rt tasks.
++	 */
++	p->rt_priority = attr->sched_priority;
++	p->normal_prio = normal_prio(p);
++}
++
++/*
++ * check the target process has a UID that matches the current process's
++ */
++static bool check_same_owner(struct task_struct *p)
++{
++	const struct cred *cred = current_cred(), *pcred;
++	guard(rcu)();
++
++	pcred = __task_cred(p);
++	return (uid_eq(cred->euid, pcred->euid) ||
++	        uid_eq(cred->euid, pcred->uid));
++}
++
++/*
++ * Allow unprivileged RT tasks to decrease priority.
++ * Only issue a capable test if needed and only once to avoid an audit
++ * event on permitted non-privileged operations:
++ */
++static int user_check_sched_setscheduler(struct task_struct *p,
++					 const struct sched_attr *attr,
++					 int policy, int reset_on_fork)
++{
++	if (rt_policy(policy)) {
++		unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
++
++		/* Can't set/change the rt policy: */
++		if (policy != p->policy && !rlim_rtprio)
++			goto req_priv;
++
++		/* Can't increase priority: */
++		if (attr->sched_priority > p->rt_priority &&
++		    attr->sched_priority > rlim_rtprio)
++			goto req_priv;
++	}
++
++	/* Can't change other user's priorities: */
++	if (!check_same_owner(p))
++		goto req_priv;
++
++	/* Normal users shall not reset the sched_reset_on_fork flag: */
++	if (p->sched_reset_on_fork && !reset_on_fork)
++		goto req_priv;
++
++	return 0;
++
++req_priv:
++	if (!capable(CAP_SYS_NICE))
++		return -EPERM;
++
++	return 0;
++}
++
++static int __sched_setscheduler(struct task_struct *p,
++				const struct sched_attr *attr,
++				bool user, bool pi)
++{
++	const struct sched_attr dl_squash_attr = {
++		.size		= sizeof(struct sched_attr),
++		.sched_policy	= SCHED_FIFO,
++		.sched_nice	= 0,
++		.sched_priority = 99,
++	};
++	int oldpolicy = -1, policy = attr->sched_policy;
++	int retval, newprio;
++	struct balance_callback *head;
++	unsigned long flags;
++	struct rq *rq;
++	int reset_on_fork;
++	raw_spinlock_t *lock;
++
++	/* The pi code expects interrupts enabled */
++	BUG_ON(pi && in_interrupt());
++
++	/*
++	 * Alt schedule FW supports SCHED_DEADLINE by squashing it into prio 0 SCHED_FIFO.
++	 */
++	if (unlikely(SCHED_DEADLINE == policy)) {
++		attr = &dl_squash_attr;
++		policy = attr->sched_policy;
++	}
++recheck:
++	/* Double check policy once rq lock held */
++	if (policy < 0) {
++		reset_on_fork = p->sched_reset_on_fork;
++		policy = oldpolicy = p->policy;
++	} else {
++		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++		if (policy > SCHED_IDLE)
++			return -EINVAL;
++	}
++
++	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++		return -EINVAL;
++
++	/*
++	 * Valid priorities for SCHED_FIFO and SCHED_RR are
++	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++	 * SCHED_BATCH and SCHED_IDLE is 0.
++	 */
++	if (attr->sched_priority < 0 ||
++	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++		return -EINVAL;
++	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++	    (attr->sched_priority != 0))
++		return -EINVAL;
++
++	if (user) {
++		retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++		if (retval)
++			return retval;
++
++		retval = security_task_setscheduler(p);
++		if (retval)
++			return retval;
++	}
++
++	/*
++	 * Make sure no PI-waiters arrive (or leave) while we are
++	 * changing the priority of the task:
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++	/*
++	 * To be able to change p->policy safely, task_access_lock()
++	 * must be called.
++	 * If task_access_lock() is used here:
++	 * For a task p which is not running, reading rq->stop is
++	 * racy but acceptable as ->stop doesn't change much.
++	 * An enhancement can be made to read rq->stop safely.
++	 */
++	rq = __task_access_lock(p, &lock);
++
++	/*
++	 * Changing the policy of the stop threads is a very bad idea.
++	 */
++	if (p == rq->stop) {
++		retval = -EINVAL;
++		goto unlock;
++	}
++
++	/*
++	 * If not changing anything there's no need to proceed further:
++	 */
++	if (unlikely(policy == p->policy)) {
++		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++			goto change;
++		if (!rt_policy(policy) &&
++		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++			goto change;
++
++		p->sched_reset_on_fork = reset_on_fork;
++		retval = 0;
++		goto unlock;
++	}
++change:
++
++	/* Re-check policy now with rq lock held */
++	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++		policy = oldpolicy = -1;
++		__task_access_unlock(p, lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++		goto recheck;
++	}
++
++	p->sched_reset_on_fork = reset_on_fork;
++
++	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++	if (pi) {
++		/*
++		 * Take priority boosted tasks into account. If the new
++		 * effective priority is unchanged, we just store the new
++		 * normal parameters and do not touch the scheduler class and
++		 * the runqueue. This will be done when the task deboosts
++		 * itself.
++		 */
++		newprio = rt_effective_prio(p, newprio);
++	}
++
++	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++		__setscheduler_params(p, attr);
++		__setscheduler_prio(p, newprio);
++	}
++
++	check_task_changed(p, rq);
++
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++	head = splice_balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	if (pi)
++		rt_mutex_adjust_pi(p);
++
++	/* Run balance callbacks after we've adjusted the PI chain: */
++	balance_callbacks(rq, head);
++	preempt_enable();
++
++	return 0;
++
++unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++	return retval;
++}
++
++static int _sched_setscheduler(struct task_struct *p, int policy,
++			       const struct sched_param *param, bool check)
++{
++	struct sched_attr attr = {
++		.sched_policy   = policy,
++		.sched_priority = param->sched_priority,
++		.sched_nice     = PRIO_TO_NICE(p->static_prio),
++	};
++
++	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
++	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
++		attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		policy &= ~SCHED_RESET_ON_FORK;
++		attr.sched_policy = policy;
++	}
++
++	return __sched_setscheduler(p, &attr, check, true);
++}
++
++/**
++ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Use sched_set_fifo(), read its comment.
++ *
++ * Return: 0 on success. An error code otherwise.
++ *
++ * NOTE that the task may be already dead.
++ */
++int sched_setscheduler(struct task_struct *p, int policy,
++		       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, true);
++}
++
++int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, true, true);
++}
++
++int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, false, true);
++}
++EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
++
++/**
++ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Just like sched_setscheduler, only don't bother checking if the
++ * current context has permission.  For example, this is needed in
++ * stop_machine(): we create temporary high priority worker threads,
++ * but our caller might not have that capability.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++int sched_setscheduler_nocheck(struct task_struct *p, int policy,
++			       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, false);
++}
++
++/*
++ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
++ * incapable of resource management, which is the one thing an OS really should
++ * be doing.
++ *
++ * This is of course the reason it is limited to privileged users only.
++ *
++ * Worse still; it is fundamentally impossible to compose static priority
++ * workloads. You cannot take two correctly working static prio workloads
++ * and smash them together and still expect them to work.
++ *
++ * For this reason 'all' FIFO tasks the kernel creates are basically at:
++ *
++ *   MAX_RT_PRIO / 2
++ *
++ * The administrator _MUST_ configure the system, the kernel simply doesn't
++ * know enough information to make a sensible choice.
++ */
++void sched_set_fifo(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo);
++
++/*
++ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
++ */
++void sched_set_fifo_low(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = 1 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo_low);
++
++void sched_set_normal(struct task_struct *p, int nice)
++{
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++		.sched_nice = nice,
++	};
++	WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_normal);
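/*
 * Illustrative in-kernel sketch, not part of the original patch: the exported
 * helpers above (sched_set_fifo(), sched_set_fifo_low(), sched_set_normal())
 * are the interface other kernel code is expected to use instead of
 * open-coding sched_setscheduler_nocheck().  The worker function and its name
 * below are hypothetical.
 */
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

static int my_worker_fn(void *data)
{
	while (!kthread_should_stop())
		msleep(1000);		/* placeholder work loop */
	return 0;
}

static int start_fifo_worker(void)
{
	struct task_struct *tsk = kthread_create(my_worker_fn, NULL, "fifo-worker");

	if (IS_ERR(tsk))
		return PTR_ERR(tsk);

	sched_set_fifo(tsk);		/* mid-range RT priority, MAX_RT_PRIO / 2 */
	wake_up_process(tsk);
	return 0;
}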
++
++static int
++do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
++{
++	struct sched_param lparam;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
++		return -EFAULT;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	return sched_setscheduler(p, policy, &lparam);
++}
++
++/*
++ * Mimics kernel/events/core.c perf_copy_attr().
++ */
++static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
++{
++	u32 size;
++	int ret;
++
++	/* Zero the full structure, so that a short copy will be nice: */
++	memset(attr, 0, sizeof(*attr));
++
++	ret = get_user(size, &uattr->size);
++	if (ret)
++		return ret;
++
++	/* ABI compatibility quirk: */
++	if (!size)
++		size = SCHED_ATTR_SIZE_VER0;
++
++	if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
++		goto err_size;
++
++	ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
++	if (ret) {
++		if (ret == -E2BIG)
++			goto err_size;
++		return ret;
++	}
++
++	/*
++	 * XXX: Do we want to be lenient like existing syscalls; or do we want
++	 * to be strict and return an error on out-of-bounds values?
++	 */
++	attr->sched_nice = clamp(attr->sched_nice, -20, 19);
++
++	/* sched/core.c uses zero here but we already know ret is zero */
++	return 0;
++
++err_size:
++	put_user(sizeof(*attr), &uattr->size);
++	return -E2BIG;
++}
++
++/**
++ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
++ * @pid: the pid in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
++{
++	if (policy < 0)
++		return -EINVAL;
++
++	return do_sched_setscheduler(pid, policy, param);
++}
++
++/**
++ * sys_sched_setparam - set/change the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
++{
++	return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
++}
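/*
 * Illustrative userspace sketch, not part of the original patch: the two
 * syscalls above are normally reached through the POSIX wrappers in
 * <sched.h>.  Assumes a glibc environment and enough privilege (CAP_SYS_NICE
 * or a sufficient RLIMIT_RTPRIO) to pass user_check_sched_setscheduler().
 */
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 10 };

	/* pid 0 means the calling thread */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
		fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
		return 1;
	}
	printf("now SCHED_FIFO, rt priority %d\n", sp.sched_priority);
	return 0;
}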
++
++static void get_params(struct task_struct *p, struct sched_attr *attr)
++{
++	if (task_has_rt_policy(p))
++		attr->sched_priority = p->rt_priority;
++	else
++		attr->sched_nice = task_nice(p);
++}
++
++/**
++ * sys_sched_setattr - same as above, but with extended sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
++			       unsigned int, flags)
++{
++	struct sched_attr attr;
++	int retval;
++
++	if (!uattr || pid < 0 || flags)
++		return -EINVAL;
++
++	retval = sched_copy_attr(uattr, &attr);
++	if (retval)
++		return retval;
++
++	if ((int)attr.sched_policy < 0)
++		return -EINVAL;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	if (attr.sched_flags & SCHED_FLAG_KEEP_PARAMS)
++		get_params(p, &attr);
++
++	return sched_setattr(p, &attr);
++}
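/*
 * Illustrative userspace sketch, not part of the original patch: glibc ships
 * no wrapper for sched_setattr(), so the raw syscall has to be used.  Assumes
 * kernel UAPI headers that provide struct sched_attr in <linux/sched/types.h>;
 * the wrapper name below is made up.
 */
#define _GNU_SOURCE
#include <linux/sched.h>		/* SCHED_* policy constants */
#include <linux/sched/types.h>		/* struct sched_attr (UAPI) */
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

static long sched_setattr_raw(pid_t pid, const struct sched_attr *attr,
			      unsigned int flags)
{
	return syscall(SYS_sched_setattr, pid, attr, flags);
}

int main(void)
{
	struct sched_attr attr = {
		.size         = sizeof(attr),
		.sched_policy = SCHED_BATCH,
		.sched_nice   = 5,
	};

	if (sched_setattr_raw(0, &attr, 0))	/* flags must be 0, see above */
		perror("sched_setattr");
	return 0;
}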
++
++/**
++ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
++ * @pid: the pid in question.
++ *
++ * Return: On success, the policy of the thread. Otherwise, a negative error
++ * code.
++ */
++SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
++{
++	struct task_struct *p;
++	int retval = -EINVAL;
++
++	if (pid < 0)
++		return -ESRCH;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -ESRCH;
++
++	retval = security_task_getscheduler(p);
++	if (!retval)
++		retval = p->policy;
++
++	return retval;
++}
++
++/**
++ * sys_sched_getparam - get the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the RT priority.
++ *
++ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
++ * code.
++ */
++SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
++{
++	struct sched_param lp = { .sched_priority = 0 };
++	struct task_struct *p;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++
++	scoped_guard (rcu) {
++		int retval;
++
++		p = find_process_by_pid(pid);
++		if (!p)
++			return -EINVAL;
++
++		retval = security_task_getscheduler(p);
++		if (retval)
++			return retval;
++
++		if (task_has_rt_policy(p))
++			lp.sched_priority = p->rt_priority;
++	}
++
++	/*
++	 * This one might sleep, we cannot do it with a spinlock held ...
++	 */
++	return copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
++}
++
++/*
++ * Copy the kernel size attribute structure (which might be larger
++ * than what user-space knows about) to user-space.
++ *
++ * Note that all cases are valid: user-space buffer can be larger or
++ * smaller than the kernel-space buffer. The usual case is that both
++ * have the same size.
++ */
++static int
++sched_attr_copy_to_user(struct sched_attr __user *uattr,
++			struct sched_attr *kattr,
++			unsigned int usize)
++{
++	unsigned int ksize = sizeof(*kattr);
++
++	if (!access_ok(uattr, usize))
++		return -EFAULT;
++
++	/*
++	 * sched_getattr() ABI forwards and backwards compatibility:
++	 *
++	 * If usize == ksize then we just copy everything to user-space and all is good.
++	 *
++	 * If usize < ksize then we only copy as much as user-space has space for,
++	 * this keeps ABI compatibility as well. We skip the rest.
++	 *
++	 * If usize > ksize then user-space is using a newer version of the ABI,
++	 * which part the kernel doesn't know about. Just ignore it - tooling can
++	 * detect the kernel's knowledge of attributes from the attr->size value
++	 * which is set to ksize in this case.
++	 */
++	kattr->size = min(usize, ksize);
++
++	if (copy_to_user(uattr, kattr, kattr->size))
++		return -EFAULT;
++
++	return 0;
++}
++
++/**
++ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @usize: sizeof(attr) for fwd/bwd comp.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
++		unsigned int, usize, unsigned int, flags)
++{
++	struct sched_attr kattr = { };
++	struct task_struct *p;
++	int retval;
++
++	if (!uattr || pid < 0 || usize > PAGE_SIZE ||
++	    usize < SCHED_ATTR_SIZE_VER0 || flags)
++		return -EINVAL;
++
++	scoped_guard (rcu) {
++		p = find_process_by_pid(pid);
++		if (!p)
++			return -ESRCH;
++
++		retval = security_task_getscheduler(p);
++		if (retval)
++			return retval;
++
++		kattr.sched_policy = p->policy;
++		if (p->sched_reset_on_fork)
++			kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		get_params(p, &kattr);
++		kattr.sched_flags &= SCHED_FLAG_ALL;
++
++#ifdef CONFIG_UCLAMP_TASK
++		kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
++		kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
++#endif
++	}
++
++	return sched_attr_copy_to_user(uattr, &kattr, usize);
++}
++
++#ifdef CONFIG_SMP
++int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
++{
++	return 0;
++}
++#endif
++
++static int
++__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
++{
++	int retval;
++	cpumask_var_t cpus_allowed, new_mask;
++
++	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
++		return -ENOMEM;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
++		retval = -ENOMEM;
++		goto out_free_cpus_allowed;
++	}
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	cpumask_and(new_mask, ctx->new_mask, cpus_allowed);
++
++	ctx->new_mask = new_mask;
++	ctx->flags |= SCA_CHECK;
++
++	retval = __set_cpus_allowed_ptr(p, ctx);
++	if (retval)
++		goto out_free_new_mask;
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	if (!cpumask_subset(new_mask, cpus_allowed)) {
++		/*
++		 * We must have raced with a concurrent cpuset
++		 * update. Just reset the cpus_allowed to the
++		 * cpuset's cpus_allowed
++		 */
++		cpumask_copy(new_mask, cpus_allowed);
++
++		/*
++		 * If SCA_USER is set, a 2nd call to __set_cpus_allowed_ptr()
++		 * will restore the previous user_cpus_ptr value.
++		 *
++		 * In the unlikely event a previous user_cpus_ptr exists,
++		 * we need to further restrict the mask to what is allowed
++		 * by that old user_cpus_ptr.
++		 */
++		if (unlikely((ctx->flags & SCA_USER) && ctx->user_mask)) {
++			bool empty = !cpumask_and(new_mask, new_mask,
++						  ctx->user_mask);
++
++			if (WARN_ON_ONCE(empty))
++				cpumask_copy(new_mask, cpus_allowed);
++		}
++		__set_cpus_allowed_ptr(p, ctx);
++		retval = -EINVAL;
++	}
++
++out_free_new_mask:
++	free_cpumask_var(new_mask);
++out_free_cpus_allowed:
++	free_cpumask_var(cpus_allowed);
++	return retval;
++}
++
++long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
++{
++	struct affinity_context ac;
++	struct cpumask *user_mask;
++	int retval;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	if (p->flags & PF_NO_SETAFFINITY)
++		return -EINVAL;
++
++	if (!check_same_owner(p)) {
++		guard(rcu)();
++		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE))
++			return -EPERM;
++	}
++
++	retval = security_task_setscheduler(p);
++	if (retval)
++		return retval;
++
++	/*
++	 * With non-SMP configs, user_cpus_ptr/user_mask isn't used and
++	 * alloc_user_cpus_ptr() returns NULL.
++	 */
++	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
++	if (user_mask) {
++		cpumask_copy(user_mask, in_mask);
++	} else if (IS_ENABLED(CONFIG_SMP)) {
++		return -ENOMEM;
++	}
++
++	ac = (struct affinity_context){
++		.new_mask  = in_mask,
++		.user_mask = user_mask,
++		.flags     = SCA_USER,
++	};
++
++	retval = __sched_setaffinity(p, &ac);
++	kfree(ac.user_mask);
++
++	return retval;
++}
++
++static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
++			     struct cpumask *new_mask)
++{
++	if (len < cpumask_size())
++		cpumask_clear(new_mask);
++	else if (len > cpumask_size())
++		len = cpumask_size();
++
++	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
++}
++
++/**
++ * sys_sched_setaffinity - set the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to the new CPU mask
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	cpumask_var_t new_mask;
++	int retval;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
++	if (retval == 0)
++		retval = sched_setaffinity(pid, new_mask);
++	free_cpumask_var(new_mask);
++	return retval;
++}
++
++long sched_getaffinity(pid_t pid, cpumask_t *mask)
++{
++	struct task_struct *p;
++	int retval;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -ESRCH;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		return retval;
++
++	guard(raw_spinlock_irqsave)(&p->pi_lock);
++	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
++
++	return retval;
++}
++
++/**
++ * sys_sched_getaffinity - get the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to hold the current CPU mask
++ *
++ * Return: size of CPU mask copied to user_mask_ptr on success. An
++ * error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	int ret;
++	cpumask_var_t mask;
++
++	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
++		return -EINVAL;
++	if (len & (sizeof(unsigned long)-1))
++		return -EINVAL;
++
++	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	ret = sched_getaffinity(pid, mask);
++	if (ret == 0) {
++		unsigned int retlen = min(len, cpumask_size());
++
++		if (copy_to_user(user_mask_ptr, cpumask_bits(mask), retlen))
++			ret = -EFAULT;
++		else
++			ret = retlen;
++	}
++	free_cpumask_var(mask);
++
++	return ret;
++}
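/*
 * Illustrative userspace sketch, not part of the original patch: both
 * affinity syscalls have glibc wrappers operating on cpu_set_t (they require
 * _GNU_SOURCE).  Restrict the caller to CPUs 0-1, then read the mask back.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	CPU_SET(1, &set);

	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");

	CPU_ZERO(&set);
	if (sched_getaffinity(0, sizeof(set), &set) == 0)
		printf("allowed CPUs: %d\n", CPU_COUNT(&set));
	return 0;
}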
++
++static void do_sched_yield(void)
++{
++	struct rq *rq;
++	struct rq_flags rf;
++	struct task_struct *p;
++
++	if (!sched_yield_type)
++		return;
++
++	rq = this_rq_lock_irq(&rf);
++
++	schedstat_inc(rq->yld_count);
++
++	p = current;
++	if (rt_task(p)) {
++		if (task_on_rq_queued(p))
++			requeue_task(p, rq);
++	} else if (rq->nr_running > 1) {
++		do_sched_yield_type_1(p, rq);
++		if (task_on_rq_queued(p))
++			requeue_task(p, rq);
++	}
++
++	preempt_disable();
++	raw_spin_unlock_irq(&rq->lock);
++	sched_preempt_enable_no_resched();
++
++	schedule();
++}
++
++/**
++ * sys_sched_yield - yield the current processor to other threads.
++ *
++ * This function yields the current CPU to other tasks. If there are no
++ * other threads running on this CPU then this function will return.
++ *
++ * Return: 0.
++ */
++SYSCALL_DEFINE0(sched_yield)
++{
++	do_sched_yield();
++	return 0;
++}
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++	if (should_resched(0)) {
++		preempt_schedule_common();
++		return 1;
++	}
++	/*
++	 * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
++	 * whether the current CPU is in an RCU read-side critical section,
++	 * so the tick can report quiescent states even for CPUs looping
++	 * in kernel context.  In contrast, in non-preemptible kernels,
++	 * RCU readers leave no in-memory hints, which means that CPU-bound
++	 * processes executing in kernel context might never report an
++	 * RCU quiescent state.  Therefore, the following code causes
++	 * cond_resched() to report a quiescent state, but only when RCU
++	 * is in urgent need of one.
++	 */
++#ifndef CONFIG_PREEMPT_RCU
++	rcu_all_qs();
++#endif
++	return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled	__cond_resched
++#define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled	__cond_resched
++#define might_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++	klp_sched_try_switch();
++	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_might_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held(lock);
++
++	if (spin_needbreak(lock) || resched) {
++		spin_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		spin_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_read(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		read_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		read_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_write(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		write_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		write_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * VOLUNTARY:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- __cond_resched
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * FULL:
++ *   cond_resched               <- RET0
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- preempt_schedule
++ *   preempt_schedule_notrace   <- preempt_schedule_notrace
++ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ */
++
++enum {
++	preempt_dynamic_undefined = -1,
++	preempt_dynamic_none,
++	preempt_dynamic_voluntary,
++	preempt_dynamic_full,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++	if (!strcmp(str, "none"))
++		return preempt_dynamic_none;
++
++	if (!strcmp(str, "voluntary"))
++		return preempt_dynamic_voluntary;
++
++	if (!strcmp(str, "full"))
++		return preempt_dynamic_full;
++
++	return -EINVAL;
++}
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f)	static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_disable(f)	static_key_disable(&sk_dynamic_##f.key)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
++
++static DEFINE_MUTEX(sched_dynamic_mutex);
++static bool klp_override;
++
++static void __sched_dynamic_update(int mode)
++{
++	/*
++	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++	 * the ZERO state, which is invalid.
++	 */
++	if (!klp_override)
++		preempt_dynamic_enable(cond_resched);
++	preempt_dynamic_enable(might_resched);
++	preempt_dynamic_enable(preempt_schedule);
++	preempt_dynamic_enable(preempt_schedule_notrace);
++	preempt_dynamic_enable(irqentry_exit_cond_resched);
++
++	switch (mode) {
++	case preempt_dynamic_none:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: none\n");
++		break;
++
++	case preempt_dynamic_voluntary:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_enable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: voluntary\n");
++		break;
++
++	case preempt_dynamic_full:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_enable(preempt_schedule);
++		preempt_dynamic_enable(preempt_schedule_notrace);
++		preempt_dynamic_enable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: full\n");
++		break;
++	}
++
++	preempt_dynamic_mode = mode;
++}
++
++void sched_dynamic_update(int mode)
++{
++	mutex_lock(&sched_dynamic_mutex);
++	__sched_dynamic_update(mode);
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
++
++static int klp_cond_resched(void)
++{
++	__klp_sched_try_switch();
++	return __cond_resched();
++}
++
++void sched_dynamic_klp_enable(void)
++{
++	mutex_lock(&sched_dynamic_mutex);
++
++	klp_override = true;
++	static_call_update(cond_resched, klp_cond_resched);
++
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++void sched_dynamic_klp_disable(void)
++{
++	mutex_lock(&sched_dynamic_mutex);
++
++	klp_override = false;
++	__sched_dynamic_update(preempt_dynamic_mode);
++
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
++
++
++static int __init setup_preempt_mode(char *str)
++{
++	int mode = sched_dynamic_mode(str);
++	if (mode < 0) {
++		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++		return 0;
++	}
++
++	sched_dynamic_update(mode);
++	return 1;
++}
++__setup("preempt=", setup_preempt_mode);
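/*
 * Illustrative note, not part of the original patch: the only strings
 * accepted by sched_dynamic_mode() above are "none", "voluntary" and "full",
 * so overriding the built-in preemption model at boot looks like:
 *
 *	preempt=voluntary
 *
 * on the kernel command line.
 */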
++
++static void __init preempt_dynamic_init(void)
++{
++	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++			sched_dynamic_update(preempt_dynamic_none);
++		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++			sched_dynamic_update(preempt_dynamic_voluntary);
++		} else {
++			/* Default static call setting, nothing to do */
++			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++			preempt_dynamic_mode = preempt_dynamic_full;
++			pr_info("Dynamic Preempt: full\n");
++		}
++	}
++}
++
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++	bool preempt_model_##mode(void)						 \
++	{									 \
++		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++		return preempt_dynamic_mode == preempt_dynamic_##mode;		 \
++	}									 \
++	EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
++
++#else /* !CONFIG_PREEMPT_DYNAMIC */
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* #ifdef CONFIG_PREEMPT_DYNAMIC */
++
++/**
++ * yield - yield the current processor to other threads.
++ *
++ * Do not ever use this function, there's a 99% chance you're doing it wrong.
++ *
++ * The scheduler is at all times free to pick the calling task as the most
++ * eligible task to run, if removing the yield() call from your code breaks
++ * it, it's already broken.
++ *
++ * Typical broken usage is:
++ *
++ * while (!event)
++ * 	yield();
++ *
++ * where one assumes that yield() will let 'the other' process run that will
++ * make event true. If the current task is a SCHED_FIFO task that will never
++ * happen. Never use yield() as a progress guarantee!!
++ *
++ * If you want to use yield() to wait for something, use wait_event().
++ * If you want to use yield() to be 'nice' for others, use cond_resched().
++ * If you still want to use yield(), do not!
++ */
++void __sched yield(void)
++{
++	set_current_state(TASK_RUNNING);
++	do_sched_yield();
++}
++EXPORT_SYMBOL(yield);
++
++/**
++ * yield_to - yield the current processor to another thread in
++ * your thread group, or accelerate that thread toward the
++ * processor it's on.
++ * @p: target task
++ * @preempt: whether task preemption is allowed or not
++ *
++ * It's the caller's job to ensure that the target task struct
++ * can't go away on us before we can do any checks.
++ *
++ * In Alt schedule FW, yield_to is not supported.
++ *
++ * Return:
++ *	true (>0) if we indeed boosted the target task.
++ *	false (0) if we failed to boost the target.
++ *	-ESRCH if there's no task to yield to.
++ */
++int __sched yield_to(struct task_struct *p, bool preempt)
++{
++	return 0;
++}
++EXPORT_SYMBOL_GPL(yield_to);
++
++int io_schedule_prepare(void)
++{
++	int old_iowait = current->in_iowait;
++
++	current->in_iowait = 1;
++	blk_flush_plug(current->plug, true);
++	return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++	current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++	int token;
++	long ret;
++
++	token = io_schedule_prepare();
++	ret = schedule_timeout(timeout);
++	io_schedule_finish(token);
++
++	return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++	int token;
++
++	token = io_schedule_prepare();
++	schedule();
++	io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
++
++/**
++ * sys_sched_get_priority_max - return maximum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the maximum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = MAX_RT_PRIO - 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
++/**
++ * sys_sched_get_priority_min - return minimum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the minimum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
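/*
 * Illustrative userspace sketch, not part of the original patch: the ranges
 * reported above (1..MAX_RT_PRIO-1 for the RT classes, 0 for everything else)
 * can be queried through <sched.h>; SCHED_OTHER is the userspace name for
 * SCHED_NORMAL.
 */
#include <sched.h>
#include <stdio.h>

int main(void)
{
	printf("SCHED_FIFO  priority range: %d..%d\n",
	       sched_get_priority_min(SCHED_FIFO),
	       sched_get_priority_max(SCHED_FIFO));
	printf("SCHED_OTHER priority range: %d..%d\n",
	       sched_get_priority_min(SCHED_OTHER),
	       sched_get_priority_max(SCHED_OTHER));
	return 0;
}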
++
++static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
++{
++	struct task_struct *p;
++	int retval;
++
++	alt_sched_debug();
++
++	if (pid < 0)
++		return -EINVAL;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -EINVAL;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		return retval;
++
++	*t = ns_to_timespec64(sysctl_sched_base_slice);
++	return 0;
++}
++
++/**
++ * sys_sched_rr_get_interval - return the default timeslice of a process.
++ * @pid: pid of the process.
++ * @interval: userspace pointer to the timeslice value.
++ *
++ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
++ * an error code.
++ */
++SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
++		struct __kernel_timespec __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_timespec64(&t, interval);
++
++	return retval;
++}
++
++#ifdef CONFIG_COMPAT_32BIT_TIME
++SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
++		struct old_timespec32 __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_old_timespec32(&t, interval);
++	return retval;
++}
++#endif
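/*
 * Illustrative userspace sketch, not part of the original patch: note that
 * under this scheduler sched_rr_get_interval() reports sysctl_sched_base_slice
 * for every task rather than a per-policy round-robin quantum.
 */
#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec ts;

	if (sched_rr_get_interval(0, &ts) == 0)
		printf("default timeslice: %ld.%09ld s\n",
		       (long)ts.tv_sec, ts.tv_nsec);
	return 0;
}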
++
++void sched_show_task(struct task_struct *p)
++{
++	unsigned long free = 0;
++	int ppid;
++
++	if (!try_get_task_stack(p))
++		return;
++
++	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++	if (task_is_running(p))
++		pr_cont("  running task    ");
++#ifdef CONFIG_DEBUG_STACK_USAGE
++	free = stack_not_used(p);
++#endif
++	ppid = 0;
++	rcu_read_lock();
++	if (pid_alive(p))
++		ppid = task_pid_nr(rcu_dereference(p->real_parent));
++	rcu_read_unlock();
++	pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n",
++		free, task_pid_nr(p), task_tgid_nr(p),
++		ppid, read_task_thread_flags(p));
++
++	print_worker_info(KERN_INFO, p);
++	print_stop_info(KERN_INFO, p);
++	show_stack(p, NULL, KERN_INFO);
++	put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/* no filter, everything matches */
++	if (!state_filter)
++		return true;
++
++	/* filter, but doesn't match */
++	if (!(state & state_filter))
++		return false;
++
++	/*
++	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++	 * TASK_KILLABLE).
++	 */
++	if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
++		return false;
++
++	return true;
++}
++
++
++void show_state_filter(unsigned int state_filter)
++{
++	struct task_struct *g, *p;
++
++	rcu_read_lock();
++	for_each_process_thread(g, p) {
++		/*
++		 * reset the NMI-timeout, listing all files on a slow
++		 * console might take a lot of time:
++		 * Also, reset softlockup watchdogs on all CPUs, because
++		 * another CPU might be blocked waiting for us to process
++		 * an IPI.
++		 */
++		touch_nmi_watchdog();
++		touch_all_softlockup_watchdogs();
++		if (state_filter_match(state_filter, p))
++			sched_show_task(p);
++	}
++
++#ifdef CONFIG_SCHED_DEBUG
++	/* TODO: Alt schedule FW should support this
++	if (!state_filter)
++		sysrq_sched_debug_show();
++	*/
++#endif
++	rcu_read_unlock();
++	/*
++	 * Only show locks if all tasks are dumped:
++	 */
++	if (!state_filter)
++		debug_show_all_locks();
++}
++
++void dump_cpu_task(int cpu)
++{
++	if (cpu == smp_processor_id() && in_hardirq()) {
++		struct pt_regs *regs;
++
++		regs = get_irq_regs();
++		if (regs) {
++			show_regs(regs);
++			return;
++		}
++	}
++
++	if (trigger_single_cpu_backtrace(cpu))
++		return;
++
++	pr_info("Task dump for CPU %d:\n", cpu);
++	sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++#ifdef CONFIG_SMP
++	struct affinity_context ac = (struct affinity_context) {
++		.new_mask  = cpumask_of(cpu),
++		.flags     = 0,
++	};
++#endif
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	__sched_fork(0, idle);
++
++	raw_spin_lock_irqsave(&idle->pi_lock, flags);
++	raw_spin_lock(&rq->lock);
++
++	idle->last_ran = rq->clock_task;
++	idle->__state = TASK_RUNNING;
++	/*
++	 * PF_KTHREAD should already be set at this point; regardless, make it
++	 * look like a proper per-CPU kthread.
++	 */
++	idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
++	kthread_set_per_cpu(idle, cpu);
++
++	sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++	/*
++	 * It's possible that init_idle() gets called multiple times on a task,
++	 * in that case do_set_cpus_allowed() will not do the right thing.
++	 *
++	 * And since this is boot we can forgo the serialisation.
++	 */
++	set_cpus_allowed_common(idle, &ac);
++#endif
++
++	/* Silence PROVE_RCU */
++	rcu_read_lock();
++	__set_task_cpu(idle, cpu);
++	rcu_read_unlock();
++
++	rq->idle = idle;
++	rcu_assign_pointer(rq->curr, idle);
++	idle->on_cpu = 1;
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++	/* Set the preempt count _outside_ the spinlocks! */
++	init_idle_preempt_count(idle, cpu);
++
++	ftrace_graph_init_idle_task(idle, cpu);
++	vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++			      const struct cpumask __maybe_unused *trial)
++{
++	return 1;
++}
++
++int task_can_attach(struct task_struct *p)
++{
++	int ret = 0;
++
++	/*
++	 * Kthreads which disallow setaffinity shouldn't be moved
++	 * to a new cpuset; we don't want to change their CPU
++	 * affinity and isolating such threads by their set of
++	 * allowed nodes is unnecessary.  Thus, cpusets are not
++	 * applicable for such threads.  This prevents checking for
++	 * success of set_cpus_allowed_ptr() on all attached tasks
++	 * before cpus_mask may be changed.
++	 */
++	if (p->flags & PF_NO_SETAFFINITY)
++		ret = -EINVAL;
++
++	return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Ensures that the idle task is using init_mm right before its CPU goes
++ * offline.
++ */
++void idle_task_exit(void)
++{
++	struct mm_struct *mm = current->active_mm;
++
++	BUG_ON(current != this_rq()->idle);
++
++	if (mm != &init_mm) {
++		switch_mm(mm, &init_mm, current);
++		finish_arch_post_lock_switch();
++	}
++
++	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++	struct task_struct *p = arg;
++	struct rq *rq = this_rq();
++	struct rq_flags rf;
++	int cpu;
++
++	raw_spin_lock_irq(&p->pi_lock);
++	rq_lock(rq, &rf);
++
++	update_rq_clock(rq);
++
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		cpu = select_fallback_rq(rq->cpu, p);
++		rq = __migrate_task(rq, p, cpu);
++	}
++
++	rq_unlock(rq, &rf);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	put_task_struct(p);
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE; when !cpu_active(), but it is only
++ * effective while the CPU is going down.
++ */
++static void balance_push(struct rq *rq)
++{
++	struct task_struct *push_task = rq->curr;
++
++	lockdep_assert_held(&rq->lock);
++
++	/*
++	 * Ensure the thing is persistent until balance_push_set(.on = false);
++	 */
++	rq->balance_callback = &balance_push_callback;
++
++	/*
++	 * Only active while going offline and when invoked on the outgoing
++	 * CPU.
++	 */
++	if (!cpu_dying(rq->cpu) || rq != this_rq())
++		return;
++
++	/*
++	 * Both the cpu-hotplug and stop task are in this case and are
++	 * required to complete the hotplug process.
++	 */
++	if (kthread_is_per_cpu(push_task) ||
++	    is_migration_disabled(push_task)) {
++
++		/*
++		 * If this is the idle task on the outgoing CPU try to wake
++		 * up the hotplug control thread which might wait for the
++		 * last task to vanish. The rcuwait_active() check is
++		 * accurate here because the waiter is pinned on this CPU
++		 * and can't obviously be running in parallel.
++		 *
++		 * On RT kernels this also has to check whether there are
++		 * pinned and scheduled out tasks on the runqueue. They
++		 * need to leave the migrate disabled section first.
++		 */
++		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++		    rcuwait_active(&rq->hotplug_wait)) {
++			raw_spin_unlock(&rq->lock);
++			rcuwait_wake_up(&rq->hotplug_wait);
++			raw_spin_lock(&rq->lock);
++		}
++		return;
++	}
++
++	get_task_struct(push_task);
++	/*
++	 * Temporarily drop rq->lock such that we can wake-up the stop task.
++	 * Both preemption and IRQs are still disabled.
++	 */
++	preempt_disable();
++	raw_spin_unlock(&rq->lock);
++	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++			    this_cpu_ptr(&push_work));
++	preempt_enable();
++	/*
++	 * At this point need_resched() is true and we'll take the loop in
++	 * schedule(). The next pick is obviously going to be the stop task
++	 * which kthread_is_per_cpu() and will push this task away.
++	 */
++	raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct rq_flags rf;
++
++	rq_lock_irqsave(rq, &rf);
++	if (on) {
++		WARN_ON_ONCE(rq->balance_callback);
++		rq->balance_callback = &balance_push_callback;
++	} else if (rq->balance_callback == &balance_push_callback) {
++		rq->balance_callback = NULL;
++	}
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPU's hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++	struct rq *rq = this_rq();
++
++	rcuwait_wait_event(&rq->hotplug_wait,
++			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++			   TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++	if (rq->online) {
++		update_rq_clock(rq);
++		rq->online = false;
++	}
++}
++
++static void set_rq_online(struct rq *rq)
++{
++	if (!rq->online)
++		rq->online = true;
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask.  If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++	if (cpuhp_tasks_frozen) {
++		/*
++		 * num_cpus_frozen tracks how many CPUs are involved in suspend
++		 * resume sequence. As long as this is not the last online
++		 * operation in the resume sequence, just build a single sched
++		 * domain, ignoring cpusets.
++		 */
++		partition_sched_domains(1, NULL, NULL);
++		if (--num_cpus_frozen)
++			return;
++		/*
++		 * This is the last CPU online operation. So fall through and
++		 * restore the original sched domains by considering the
++		 * cpuset configurations.
++		 */
++		cpuset_force_rebuild();
++	}
++
++	cpuset_update_active_cpus();
++}
++
++static int cpuset_cpu_inactive(unsigned int cpu)
++{
++	if (!cpuhp_tasks_frozen) {
++		cpuset_update_active_cpus();
++	} else {
++		num_cpus_frozen++;
++		partition_sched_domains(1, NULL, NULL);
++	}
++	return 0;
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/*
++	 * Clear the balance_push callback and prepare to schedule
++	 * regular tasks.
++	 */
++	balance_push_set(cpu, false);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going up, increment the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
++		static_branch_inc_cpuslocked(&sched_smt_present);
++#endif
++	set_cpu_active(cpu, true);
++
++	if (sched_smp_initialized)
++		cpuset_cpu_active();
++
++	/*
++	 * Put the rq online, if not already. This happens:
++	 *
++	 * 1) In the early boot process, because we build the real domains
++	 *    after all cpus have been brought up.
++	 *
++	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++	 *    domains.
++	 */
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_online(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++	int ret;
++
++	set_cpu_active(cpu, false);
++
++	/*
++	 * From this point forward, this CPU will refuse to run any task that
++	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++	 * push those tasks away until this gets cleared, see
++	 * sched_cpu_dying().
++	 */
++	balance_push_set(cpu, true);
++
++	/*
++	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++	 * users of this state to go away such that all new such users will
++	 * observe it.
++	 *
++	 * Specifically, we rely on ttwu to no longer target this CPU, see
++	 * ttwu_queue_cond() and is_cpu_allowed().
++	 *
++	 * Do the sync before parking the smpboot threads to take care of the RCU boost case.
++	 */
++	synchronize_rcu();
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_offline(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going down, decrement the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_dec_cpuslocked(&sched_smt_present);
++		if (!static_branch_likely(&sched_smt_present))
++			cpumask_clear(sched_sg_idle_mask);
++	}
++#endif
++
++	if (!sched_smp_initialized)
++		return 0;
++
++	ret = cpuset_cpu_inactive(cpu);
++	if (ret) {
++		balance_push_set(cpu, false);
++		set_cpu_active(cpu, true);
++		return ret;
++	}
++
++	return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++	sched_rq_cpu_starting(cpu);
++	sched_tick_start(cpu);
++	return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++	balance_hotplug_wait();
++	return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the teardown thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++	long delta = calc_load_fold_active(rq, 1);
++
++	if (delta)
++		atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++	struct task_struct *g, *p;
++	int cpu = cpu_of(rq);
++
++	lockdep_assert_held(&rq->lock);
++
++	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++	for_each_process_thread(g, p) {
++		if (task_cpu(p) != cpu)
++			continue;
++
++		if (!task_on_rq_queued(p))
++			continue;
++
++		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++	}
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/* Handle pending wakeups and then migrate everything off */
++	sched_tick_stop(cpu);
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++		WARN(true, "Dying CPU not properly vacated!");
++		dump_rq_tasks(rq, KERN_WARNING);
++	}
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	calc_load_migrate(rq);
++	hrtick_clear(rq);
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
++static void sched_init_topology_cpumask_early(void)
++{
++	int cpu;
++	cpumask_t *tmp;
++
++	for_each_possible_cpu(cpu) {
++		/* init topo masks */
++		tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++		cpumask_copy(tmp, cpu_possible_mask);
++		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++	}
++}
++
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++	if (cpumask_and(topo, topo, mask)) {					\
++		cpumask_copy(topo, mask);					\
++		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
++		       cpu, (topo++)->bits[0]);					\
++	}									\
++	if (!last)								\
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(mask),	\
++				  nr_cpumask_bits);
++
++static void sched_init_topology_cpumask(void)
++{
++	int cpu;
++	cpumask_t *topo;
++
++	for_each_online_cpu(cpu) {
++		/* take the chance to reset the time slice for idle tasks */
++		cpu_rq(cpu)->idle->time_slice = sysctl_sched_base_slice;
++
++		topo = per_cpu(sched_cpu_topo_masks, cpu);
++
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
++				  nr_cpumask_bits);
++#ifdef CONFIG_SCHED_SMT
++		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++		TOPOLOGY_CPUMASK(cluster, topology_cluster_cpumask(cpu), false);
++
++		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++		per_cpu(sched_cpu_llc_mask, cpu) = topo;
++		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++		       cpu, per_cpu(sd_llc_id, cpu),
++		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++			      per_cpu(sched_cpu_topo_masks, cpu)));
++	}
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++	/* Move init over to a non-isolated CPU */
++	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++		BUG();
++	current->flags &= ~PF_NO_SETAFFINITY;
++
++	sched_init_topology_cpumask();
++
++	sched_smp_initialized = true;
++}
++
++static int __init migration_init(void)
++{
++	sched_cpu_starting(smp_processor_id());
++	return 0;
++}
++early_initcall(migration_init);
++
++#else
++void __init sched_init_smp(void)
++{
++	cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++	return in_lock_functions(addr) ||
++		(addr >= (unsigned long)__sched_text_start
++		&& addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/*
++ * Default task group.
++ * Every task in the system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __ro_after_init;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++	int i;
++	struct rq *rq;
++#ifdef CONFIG_SCHED_SMT
++	struct balance_arg balance_arg = {.cpumask = sched_sg_idle_mask, .active = 0};
++#endif
++
++	printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
++			 " by Alfred Chen.\n");
++
++	wait_bit_init();
++
++#ifdef CONFIG_SMP
++	for (i = 0; i < SCHED_QUEUE_BITS; i++)
++		cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++	task_group_cache = KMEM_CACHE(task_group, 0);
++
++	list_add(&root_task_group.list, &task_groups);
++	INIT_LIST_HEAD(&root_task_group.children);
++	INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++	for_each_possible_cpu(i) {
++		rq = cpu_rq(i);
++
++		sched_queue_init(&rq->queue);
++		rq->prio = IDLE_TASK_SCHED_PRIO;
++#ifdef CONFIG_SCHED_PDS
++		rq->prio_idx = rq->prio;
++#endif
++
++		raw_spin_lock_init(&rq->lock);
++		rq->nr_running = rq->nr_uninterruptible = 0;
++		rq->calc_load_active = 0;
++		rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++		rq->online = false;
++		rq->cpu = i;
++
++#ifdef CONFIG_SCHED_SMT
++		rq->sg_balance_arg = balance_arg;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++		rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++		rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++		rq->nr_switches = 0;
++
++		hrtick_rq_init(rq);
++		atomic_set(&rq->nr_iowait, 0);
++
++		zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
++	}
++#ifdef CONFIG_SMP
++	/* Set rq->online for cpu 0 */
++	cpu_rq(0)->online = true;
++#endif
++	/*
++	 * The boot idle thread does lazy MMU switching as well:
++	 */
++	mmgrab(&init_mm);
++	enter_lazy_tlb(&init_mm, current);
++
++	/*
++	 * The idle task doesn't need the kthread struct to function, but it
++	 * is dressed up as a per-CPU kthread and thus needs to play the part
++	 * if we want to avoid special-casing it in code that deals with per-CPU
++	 * kthreads.
++	 */
++	WARN_ON(!set_kthread_struct(current));
++
++	/*
++	 * Make us the idle thread. Technically, schedule() should not be
++	 * called from this thread, however somewhere below it might be,
++	 * but because we are the idle thread, we just pick up running again
++	 * when this runqueue becomes "idle".
++	 */
++	init_idle(current, smp_processor_id());
++
++	calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++	idle_thread_set_boot_cpu();
++	balance_push_set(smp_processor_id(), false);
++
++	sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++	preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++	unsigned int state = get_current_state();
++	/*
++	 * Blocking primitives will set (and therefore destroy) current->state;
++	 * since we will exit with TASK_RUNNING, make sure we enter with it,
++	 * otherwise we will destroy state.
++	 */
++	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++			"do not call blocking ops when !TASK_RUNNING; "
++			"state=%x set at [<%p>] %pS\n", state,
++			(void *)current->task_state_change,
++			(void *)current->task_state_change);
++
++	__might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++	if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++		return;
++
++	if (preempt_count() == preempt_offset)
++		return;
++
++	pr_err("Preemption disabled at:");
++	print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++	unsigned int nested = preempt_count();
++
++	nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++	return nested == offsets;
++}
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++	/* Ratelimiting timestamp: */
++	static unsigned long prev_jiffy;
++
++	unsigned long preempt_disable_ip;
++
++	/* WARN_ON_ONCE() by default, no rate limit required: */
++	rcu_sleep_check();
++
++	if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++	     !is_idle_task(current) && !current->non_block_count) ||
++	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++	    oops_in_progress)
++		return;
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	/* Save this before calling printk(), since that will clobber it: */
++	preempt_disable_ip = get_preempt_disable_ip(current);
++
++	pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++	       file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), current->non_block_count,
++	       current->pid, current->comm);
++	pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++	       offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++	if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++		pr_err("RCU nest depth: %d, expected: %u\n",
++		       rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++	}
++
++	if (task_stack_end_corrupted(current))
++		pr_emerg("Thread overran stack, or stack corrupted\n");
++
++	debug_show_held_locks(current);
++	if (irqs_disabled())
++		print_irqtrace_events(current);
++
++	print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++				 preempt_disable_ip);
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > preempt_offset)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++			in_atomic(), irqs_disabled(),
++			current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (is_migration_disabled(current))
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > 0)
++		return;
++
++	if (current->migration_flags & MDF_FORCE_ENABLED)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
++	       current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
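++/*
++ * SysRq handler helper: drop every user-space RT task back to SCHED_NORMAL
++ * and renice negatively-niced tasks to 0, leaving kernel threads untouched.
++ */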
++void normalize_rt_tasks(void)
++{
++	struct task_struct *g, *p;
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++	};
++
++	read_lock(&tasklist_lock);
++	for_each_process_thread(g, p) {
++		/*
++		 * Only normalize user tasks:
++		 */
++		if (p->flags & PF_KTHREAD)
++			continue;
++
++		schedstat_set(p->stats.wait_start,  0);
++		schedstat_set(p->stats.sleep_start, 0);
++		schedstat_set(p->stats.block_start, 0);
++
++		if (!rt_task(p)) {
++			/*
++			 * Renice negative nice level userspace
++			 * tasks back to 0:
++			 */
++			if (task_nice(p) < 0)
++				set_user_nice(p, 0);
++			continue;
++		}
++
++		__sched_setscheduler(p, &attr, false, false);
++	}
++	read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for kdb.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++	return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++	kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++	sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++	/*
++	 * We have to wait for yet another RCU grace period to expire, as
++	 * print_cfs_stats() might run concurrently.
++	 */
++	call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++	struct task_group *tg;
++
++	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++	if (!tg)
++		return ERR_PTR(-ENOMEM);
++
++	return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* rcu callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++	/* Now it should be safe to free those cfs_rqs: */
++	sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++	/* Wait for possible concurrent references to cfs_rqs to complete: */
++	call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++	return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++	struct task_group *parent = css_tg(parent_css);
++	struct task_group *tg;
++
++	if (!parent) {
++		/* This is early initialization for the top cgroup */
++		return &root_task_group.css;
++	}
++
++	tg = sched_create_group(parent);
++	if (IS_ERR(tg))
++		return ERR_PTR(-ENOMEM);
++	return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++	struct task_group *parent = css_tg(css->parent);
++
++	if (parent)
++		sched_online_group(tg, parent);
++	return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	/*
++	 * Relies on the RCU grace period between css_released() and this.
++	 */
++	sched_unregister_group(tg);
++}
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++	return 0;
++}
++#endif
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++static DEFINE_MUTEX(shares_mutex);
++
++static int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++	/*
++	 * We can't change the weight of the root cgroup.
++	 */
++	if (&root_task_group == tg)
++		return -EINVAL;
++
++	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
++
++	mutex_lock(&shares_mutex);
++	if (tg->shares == shares)
++		goto done;
++
++	tg->shares = shares;
++done:
++	mutex_unlock(&shares_mutex);
++	return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cftype, u64 shareval)
++{
++	if (shareval > scale_load_down(ULONG_MAX))
++		shareval = MAX_SHARES;
++	return sched_group_set_shares(css_tg(css), scale_load(shareval));
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	struct task_group *tg = css_tg(css);
++
++	return (u64) scale_load_down(tg->shares);
++}
++#endif
++
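++/*
++ * The remaining cgroup cpu controller attributes are accepted but ignored:
++ * BMQ/PDS implements neither CFS bandwidth, RT group scheduling nor uclamp,
++ * so the handlers below are no-op stubs kept for interface compatibility.
++ */
++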
++static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, s64 cfs_quota_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 cfs_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, u64 cfs_burst_us)
++{
++	return 0;
++}
++
++static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 val)
++{
++	return 0;
++}
++
++static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 rt_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++
++static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	{
++		.name = "shares",
++		.read_u64 = cpu_shares_read_u64,
++		.write_u64 = cpu_shares_write_u64,
++	},
++#endif
++	{
++		.name = "cfs_quota_us",
++		.read_s64 = cpu_cfs_quota_read_s64,
++		.write_s64 = cpu_cfs_quota_write_s64,
++	},
++	{
++		.name = "cfs_period_us",
++		.read_u64 = cpu_cfs_period_read_u64,
++		.write_u64 = cpu_cfs_period_write_u64,
++	},
++	{
++		.name = "cfs_burst_us",
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++	{
++		.name = "stat",
++		.seq_show = cpu_cfs_stat_show,
++	},
++	{
++		.name = "stat.local",
++		.seq_show = cpu_cfs_local_stat_show,
++	},
++	{
++		.name = "rt_runtime_us",
++		.read_s64 = cpu_rt_runtime_read,
++		.write_s64 = cpu_rt_runtime_write,
++	},
++	{
++		.name = "rt_period_us",
++		.read_u64 = cpu_rt_period_read_uint,
++		.write_u64 = cpu_rt_period_write_uint,
++	},
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
++	{ }	/* Terminate */
++};
++
++static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cft, u64 weight)
++{
++	return 0;
++}
++
++static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
++				    struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
++				     struct cftype *cft, s64 nice)
++{
++	return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 idle)
++{
++	return 0;
++}
++
++static int cpu_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_max_write(struct kernfs_open_file *of,
++			     char *buf, size_t nbytes, loff_t off)
++{
++	return nbytes;
++}
++
++static struct cftype cpu_files[] = {
++	{
++		.name = "weight",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_weight_read_u64,
++		.write_u64 = cpu_weight_write_u64,
++	},
++	{
++		.name = "weight.nice",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_weight_nice_read_s64,
++		.write_s64 = cpu_weight_nice_write_s64,
++	},
++	{
++		.name = "idle",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_idle_read_s64,
++		.write_s64 = cpu_idle_write_s64,
++	},
++	{
++		.name = "max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_max_show,
++		.write = cpu_max_write,
++	},
++	{
++		.name = "max.burst",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
++	{ }	/* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++static int cpu_local_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++	.css_alloc	= cpu_cgroup_css_alloc,
++	.css_online	= cpu_cgroup_css_online,
++	.css_released	= cpu_cgroup_css_released,
++	.css_free	= cpu_cgroup_css_free,
++	.css_extra_stat_show = cpu_extra_stat_show,
++	.css_local_stat_show = cpu_local_stat_show,
++#ifdef CONFIG_RT_GROUP_SCHED
++	.can_attach	= cpu_cgroup_can_attach,
++#endif
++	.attach		= cpu_cgroup_attach,
++	.legacy_cftypes	= cpu_legacy_files,
++	.dfl_cftypes	= cpu_files,
++	.early_init	= true,
++	.threaded	= true,
++};
++#endif	/* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
++
++#ifdef CONFIG_SCHED_MM_CID
++
++/*
++ * @cid_lock: Guarantee forward-progress of cid allocation.
++ *
++ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
++ * is only used when contention is detected by the lock-free allocation so
++ * forward progress can be guaranteed.
++ */
++DEFINE_RAW_SPINLOCK(cid_lock);
++
++/*
++ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
++ *
++ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
++ * detected, it is set to 1 to ensure that all newly coming allocations are
++ * serialized by @cid_lock until the allocation which detected contention
++ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
++ * of a cid allocation.
++ */
++int use_cid_lock;
++
++/*
++ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
++ * concurrently with respect to the execution of the source runqueue context
++ * switch.
++ *
++ * There is one basic property we want to guarantee here:
++ *
++ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
++ * used by a task. That would lead to concurrent allocation of the cid and
++ * userspace corruption.
++ *
++ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
++ * that a pair of loads observe at least one of a pair of stores, which can be
++ * shown as:
++ *
++ *      X = Y = 0
++ *
++ *      w[X]=1          w[Y]=1
++ *      MB              MB
++ *      r[Y]=y          r[X]=x
++ *
++ * Which guarantees that x==0 && y==0 is impossible. But rather than using
++ * values 0 and 1, this algorithm cares about specific state transitions of the
++ * runqueue current task (as updated by the scheduler context switch), and the
++ * per-mm/cpu cid value.
++ *
++ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
++ * task->mm != mm for the rest of the discussion. There are two scheduler state
++ * transitions on context switch we care about:
++ *
++ * (TSA) Store to rq->curr with transition from (N) to (Y)
++ *
++ * (TSB) Store to rq->curr with transition from (Y) to (N)
++ *
++ * On the remote-clear side, there is one transition we care about:
++ *
++ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
++ *
++ * There is also a transition to UNSET state which can be performed from all
++ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
++ * guarantees that only a single thread will succeed:
++ *
++ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
++ *
++ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
++ * when a thread is actively using the cid (property (1)).
++ *
++ * Let's look at the relevant combinations of TSA/TSB, and TMA transitions.
++ *
++ * Scenario A) (TSA)+(TMA) (from next task perspective)
++ *
++ * CPU0                                      CPU1
++ *
++ * Context switch CS-1                       Remote-clear
++ *   - store to rq->curr: (N)->(Y) (TSA)     - cmpxchg to *pcpu_id to LAZY (TMA)
++ *                                             (implied barrier after cmpxchg)
++ *   - switch_mm_cid()
++ *     - memory barrier (see switch_mm_cid()
++ *       comment explaining how this barrier
++ *       is combined with other scheduler
++ *       barriers)
++ *     - mm_cid_get (next)
++ *       - READ_ONCE(*pcpu_cid)              - rcu_dereference(src_rq->curr)
++ *
++ * This Dekker ensures that either task (Y) is observed by the
++ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
++ * observed.
++ *
++ * If task (Y) store is observed by rcu_dereference(), it means that there is
++ * still an active task on the cpu. Remote-clear will therefore not transition
++ * to UNSET, which fulfills property (1).
++ *
++ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
++ * it will move its state to UNSET, which clears the percpu cid perhaps
++ * uselessly (which is not an issue for correctness). Because task (Y) is not
++ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
++ * state to UNSET is done with a cmpxchg expecting that the old state has the
++ * LAZY flag set, only one thread will successfully UNSET.
++ *
++ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
++ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
++ * CPU1 will observe task (Y) and do nothing more, which is fine.
++ *
++ * What we are effectively preventing with this Dekker is a scenario where
++ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
++ * because this would UNSET a cid which is actively used.
++ */
++
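++/*
++ * Record, at migration-source time, which cpu the task is leaving so that
++ * sched_mm_cid_migrate_to() can later decide whether to move or drop its cid.
++ */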
++void sched_mm_cid_migrate_from(struct task_struct *t)
++{
++	t->migrate_from_cpu = task_cpu(t);
++}
++
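++/*
++ * Fetch the cid the migrating task last used on the source cpu, or -1 when
++ * it cannot be moved (no mm, no last cid, the per-cpu cid was already reused,
++ * or another task is still actively using this mm on the source rq).
++ */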
++static
++int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
++					  struct task_struct *t,
++					  struct mm_cid *src_pcpu_cid)
++{
++	struct mm_struct *mm = t->mm;
++	struct task_struct *src_task;
++	int src_cid, last_mm_cid;
++
++	if (!mm)
++		return -1;
++
++	last_mm_cid = t->last_mm_cid;
++	/*
++	 * If the migrated task has no last cid, or if the current
++	 * task on src rq uses the cid, it means the source cid does not need
++	 * to be moved to the destination cpu.
++	 */
++	if (last_mm_cid == -1)
++		return -1;
++	src_cid = READ_ONCE(src_pcpu_cid->cid);
++	if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
++		return -1;
++
++	/*
++	 * If we observe an active task using the mm on this rq, it means we
++	 * are not the last task to be migrated from this cpu for this mm, so
++	 * there is no need to move src_cid to the destination cpu.
++	 */
++	rcu_read_lock();
++	src_task = rcu_dereference(src_rq->curr);
++	if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++		rcu_read_unlock();
++		t->last_mm_cid = -1;
++		return -1;
++	}
++	rcu_read_unlock();
++
++	return src_cid;
++}
++
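++/*
++ * Try to steal the source cpu's cid by moving it through the lazy-put state
++ * to MM_CID_UNSET. Returns the stolen cid, or -1 when another task is still
++ * using the mm on the source rq or a concurrent transition wins the cmpxchg.
++ */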
++static
++int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
++					      struct task_struct *t,
++					      struct mm_cid *src_pcpu_cid,
++					      int src_cid)
++{
++	struct task_struct *src_task;
++	struct mm_struct *mm = t->mm;
++	int lazy_cid;
++
++	if (src_cid == -1)
++		return -1;
++
++	/*
++	 * Attempt to clear the source cpu cid to move it to the destination
++	 * cpu.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(src_cid);
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
++		return -1;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, this task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		src_task = rcu_dereference(src_rq->curr);
++		if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++			/*
++			 * We observed an active task for this mm; there is therefore
++			 * no point in moving this cid to the destination cpu.
++			 */
++			t->last_mm_cid = -1;
++			return -1;
++		}
++	}
++
++	/*
++	 * The src_cid is unused, so it can be unset.
++	 */
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++		return -1;
++	return src_cid;
++}
++
++/*
++ * Migration to dst cpu. Called with dst_rq lock held.
++ * Interrupts are disabled, which minimizes the window during which the cid
++ * is owned without the source rq lock being held.
++ */
++void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu)
++{
++	struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
++	struct mm_struct *mm = t->mm;
++	int src_cid, dst_cid;
++	struct rq *src_rq;
++
++	lockdep_assert_rq_held(dst_rq);
++
++	if (!mm)
++		return;
++	if (src_cpu == -1) {
++		t->last_mm_cid = -1;
++		return;
++	}
++	/*
++	 * Move the src cid if the dst cid is unset. This keeps id
++	 * allocation closest to 0 in cases where few threads migrate around
++	 * many cpus.
++	 *
++	 * If destination cid is already set, we may have to just clear
++	 * the src cid to ensure compactness in frequent migrations
++	 * scenarios.
++	 *
++	 * It is not useful to clear the src cid when the number of threads is
++	 * greater or equal to the number of allowed cpus, because user-space
++	 * can expect that the number of allowed cids can reach the number of
++	 * allowed cpus.
++	 */
++	dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
++	dst_cid = READ_ONCE(dst_pcpu_cid->cid);
++	if (!mm_cid_is_unset(dst_cid) &&
++	    atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
++		return;
++	src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
++	src_rq = cpu_rq(src_cpu);
++	src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
++	if (src_cid == -1)
++		return;
++	src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
++							    src_cid);
++	if (src_cid == -1)
++		return;
++	if (!mm_cid_is_unset(dst_cid)) {
++		__mm_cid_put(mm, src_cid);
++		return;
++	}
++	/* Move src_cid to dst cpu. */
++	mm_cid_snapshot_time(dst_rq, mm);
++	WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
++}
++
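++/*
++ * Remote-clear one per-cpu cid: flag it lazy-put, re-check that no task on
++ * that rq is actively using the mm, then transition it to MM_CID_UNSET and
++ * release it back to the mm's cid bitmap.
++ */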
++static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
++				      int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *t;
++	int cid, lazy_cid;
++
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid))
++		return;
++
++	/*
++	 * Clear the cpu cid if it is set to keep cid allocation compact.  If
++	 * there happens to be other tasks left on the source cpu using this
++	 * mm, the next task using this mm will reallocate its cid on context
++	 * switch.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(cid);
++	if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
++		return;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, that task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		t = rcu_dereference(rq->curr);
++		if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
++			return;
++	}
++
++	/*
++	 * The cid is unused, so it can be unset.
++	 * Disable interrupts to keep the window of cid ownership without rq
++	 * lock small.
++	 */
++	scoped_guard (irqsave) {
++		if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++			__mm_cid_put(mm, cid);
++	}
++}
++
++static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct mm_cid *pcpu_cid;
++	struct task_struct *curr;
++	u64 rq_clock;
++
++	/*
++	 * rq->clock load is racy on 32-bit but one spurious clear once in a
++	 * while is irrelevant.
++	 */
++	rq_clock = READ_ONCE(rq->clock);
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++
++	/*
++	 * In order to take care of infrequently scheduled tasks, bump the time
++	 * snapshot associated with this cid if an active task using the mm is
++	 * observed on this rq.
++	 */
++	scoped_guard (rcu) {
++		curr = rcu_dereference(rq->curr);
++		if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
++			WRITE_ONCE(pcpu_cid->time, rq_clock);
++			return;
++		}
++	}
++
++	if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
++					     int weight)
++{
++	struct mm_cid *pcpu_cid;
++	int cid;
++
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid) || cid < weight)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
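++/*
++ * Deferred task work: at most once per scan period, walk all possible cpus,
++ * clear cids that have not been used for SCHED_MM_CID_PERIOD_NS and compact
++ * the remaining cids below the current weight of the mm's cid mask.
++ */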
++static void task_mm_cid_work(struct callback_head *work)
++{
++	unsigned long now = jiffies, old_scan, next_scan;
++	struct task_struct *t = current;
++	struct cpumask *cidmask;
++	struct mm_struct *mm;
++	int weight, cpu;
++
++	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
++
++	work->next = work;	/* Prevent double-add */
++	if (t->flags & PF_EXITING)
++		return;
++	mm = t->mm;
++	if (!mm)
++		return;
++	old_scan = READ_ONCE(mm->mm_cid_next_scan);
++	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	if (!old_scan) {
++		unsigned long res;
++
++		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
++		if (res != old_scan)
++			old_scan = res;
++		else
++			old_scan = next_scan;
++	}
++	if (time_before(now, old_scan))
++		return;
++	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
++		return;
++	cidmask = mm_cidmask(mm);
++	/* Clear cids that were not recently used. */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_old(mm, cpu);
++	weight = cpumask_weight(cidmask);
++	/*
++	 * Clear cids that are greater or equal to the cidmask weight to
++	 * recompact it.
++	 */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
++}
++
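++/*
++ * Fork-time setup: arm the first compaction scan for a fresh mm (first user)
++ * and initialize the per-task cid task-work callback.
++ */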
++void init_sched_mm_cid(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	int mm_users = 0;
++
++	if (mm) {
++		mm_users = atomic_read(&mm->mm_users);
++		if (mm_users == 1)
++			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	}
++	t->cid_work.next = &t->cid_work;	/* Protect against double add */
++	init_task_work(&t->cid_work, task_mm_cid_work);
++}
++
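++/*
++ * Scheduler-tick hook: once the scan deadline has passed, queue
++ * task_mm_cid_work() as task work so the compaction scan runs on the
++ * task's return to user space.
++ */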
++void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
++{
++	struct callback_head *work = &curr->cid_work;
++	unsigned long now = jiffies;
++
++	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
++	    work->next != work)
++		return;
++	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
++		return;
++	task_work_add(curr, work, TWA_RESUME);
++}
++
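++/*
++ * Deactivate and release the task's mm cid under the runqueue lock;
++ * sched_mm_cid_before_execve() below follows the same pattern, and
++ * sched_mm_cid_after_execve() re-activates the cid afterwards.
++ */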
++void sched_mm_cid_exit_signals(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_before_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_after_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	scoped_guard (rq_lock_irqsave, rq) {
++		preempt_enable_no_resched();	/* holding spinlock */
++		WRITE_ONCE(t->mm_cid_active, 1);
++		/*
++		 * Store t->mm_cid_active before loading per-mm/cpu cid.
++		 * Matches barrier in sched_mm_cid_remote_clear_old().
++		 */
++		smp_mb();
++		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
++	}
++	rseq_set_notify_resume(t);
++}
++
++void sched_mm_cid_fork(struct task_struct *t)
++{
++	WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
++	t->mm_cid_active = 1;
++}
++#endif
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+--- a/kernel/sched/alt_debug.c	1970-01-01 01:00:00.000000000 +0100
++++ b/kernel/sched/alt_debug.c	2024-07-16 11:57:44.445146416 +0200
+@@ -0,0 +1,32 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date  : 2020
++ */
++#include "sched.h"
++#include "linux/sched/debug.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...)			\
++ do {						\
++	if (m)					\
++		seq_printf(m, x);		\
++	else					\
++		pr_cont(x);			\
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++			  struct seq_file *m)
++{
++	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++						get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+--- a/kernel/sched/alt_sched.h	1970-01-01 01:00:00.000000000 +0100
++++ b/kernel/sched/alt_sched.h	2024-07-16 11:57:44.445146416 +0200
+@@ -0,0 +1,976 @@
++#ifndef ALT_SCHED_H
++#define ALT_SCHED_H
++
++#include <linux/context_tracking.h>
++#include <linux/profile.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++	struct cgroup_subsys_state css;
++
++	struct rcu_head rcu;
++	struct list_head list;
++
++	struct task_group *parent;
++	struct list_head siblings;
++	struct list_head children;
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	unsigned long		shares;
++#endif
++};
++
++extern struct task_group *sched_create_group(struct task_group *parent);
++extern void sched_online_group(struct task_group *tg,
++			       struct task_group *parent);
++extern void sched_destroy_group(struct task_group *tg);
++extern void sched_release_group(struct task_group *tg);
++#endif /* CONFIG_CGROUP_SCHED */
++
++#define MIN_SCHED_NORMAL_PRIO	(32)
++/*
++ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
++ *
++ * -- BMQ --
++ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
++ * -- PDS --
++ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
++ */
++#define SCHED_LEVELS		(64 + 1)
++
++#define IDLE_TASK_SCHED_PRIO	(SCHED_LEVELS - 1)
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++	unsigned long __w = (w); \
++	if (__w) \
++		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++	__w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		(w)
++# define scale_load_down(w)	(w)
++#endif
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++#define ROOT_TASK_GROUP_LOAD	NICE_0_LOAD
++
++/*
++ * A weight of 0 or 1 can cause arithmetic problems.
++ * The weight of a cfs_rq is the sum of the weights of the entities
++ * queued on it, so the weight of an entity should not be too large;
++ * the same holds for the shares value of a task group.
++ * (The default weight is 1024 - so there's no practical
++ *  limitation from this.)
++ */
++#define MIN_SHARES		(1UL <<  1)
++#define MAX_SHARES		(1UL << 18)
++#endif
++
++/*
++ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
++ */
++#ifdef CONFIG_SCHED_DEBUG
++# define const_debug __read_mostly
++#else
++# define const_debug const
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED	1
++#define TASK_ON_RQ_MIGRATING	2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++	return p->on_rq == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/* Wake flags. The first three directly map to some SD flag value */
++#define WF_EXEC         0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
++#define WF_FORK         0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
++#define WF_TTWU         0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */
++
++#define WF_SYNC         0x10 /* Waker goes to sleep after wakeup */
++#define WF_MIGRATED     0x20 /* Internal use, task got migrated */
++#define WF_CURRENT_CPU  0x40 /* Prefer to move the wakee to the current CPU. */
++
++#ifdef CONFIG_SMP
++static_assert(WF_EXEC == SD_BALANCE_EXEC);
++static_assert(WF_FORK == SD_BALANCE_FORK);
++static_assert(WF_TTWU == SD_BALANCE_WAKE);
++#endif
++
++#define SCHED_QUEUE_BITS	(SCHED_LEVELS - 1)
++
++struct sched_queue {
++	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++	struct list_head heads[SCHED_LEVELS];
++};
++
++struct rq;
++struct cpuidle_state;
++
++struct balance_callback {
++	struct balance_callback *next;
++	void (*func)(struct rq *rq);
++};
++
++struct balance_arg {
++	struct task_struct	*task;
++	int			active;
++	cpumask_t		*cpumask;
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++	/* runqueue lock: */
++	raw_spinlock_t			lock;
++
++	struct task_struct __rcu	*curr;
++	struct task_struct		*idle;
++	struct task_struct		*stop;
++	struct mm_struct		*prev_mm;
++
++	struct sched_queue		queue		____cacheline_aligned;
++
++	int				prio;
++#ifdef CONFIG_SCHED_PDS
++	int				prio_idx;
++	u64				time_edge;
++#endif
++
++	/* switch count */
++	u64 nr_switches;
++
++	atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++	u64 last_seen_need_resched_ns;
++	int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++	int membarrier_state;
++#endif
++
++#ifdef CONFIG_SMP
++	int cpu;		/* cpu of this runqueue */
++	bool online;
++
++	unsigned int		ttwu_pending;
++	unsigned char		nohz_idle_balance;
++	unsigned char		idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	struct sched_avg	avg_irq;
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++	struct balance_arg	sg_balance_arg		____cacheline_aligned;
++#endif
++	struct cpu_stop_work	active_balance_work;
++
++	struct balance_callback	*balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++	struct rcuwait		hotplug_wait;
++#endif
++	unsigned int		nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++	u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For general cpu load util */
++	s32 load_history;
++	u64 load_block;
++	u64 load_stamp;
++
++	/* calc_load related fields */
++	unsigned long calc_load_update;
++	long calc_load_active;
++
++	/* Ensure that all clocks are in the same cache line */
++	u64			clock ____cacheline_aligned;
++	u64			clock_task;
++
++	unsigned int  nr_running;
++	unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++	call_single_data_t hrtick_csd;
++#endif
++	struct hrtimer		hrtick_timer;
++	ktime_t			hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++	/* latency stats */
++	struct sched_info rq_sched_info;
++	unsigned long long rq_cpu_time;
++	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++	/* sys_sched_yield() stats */
++	unsigned int yld_count;
++
++	/* schedule() stats */
++	unsigned int sched_switch;
++	unsigned int sched_count;
++	unsigned int sched_goidle;
++
++	/* try_to_wake_up() stats */
++	unsigned int ttwu_count;
++	unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++	/* Must be inspected within a rcu lock section */
++	struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++	call_single_data_t	nohz_csd;
++#endif
++	atomic_t		nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++
++	/* Scratch cpumask to be temporarily used under rq_lock */
++	cpumask_var_t		scratch_mask;
++};
++
++extern unsigned int sysctl_sched_base_slice;
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
++#define this_rq()		this_cpu_ptr(&runqueues)
++#define task_rq(p)		cpu_rq(task_cpu(p))
++#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
++#define raw_rq()		raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++#ifdef CONFIG_SCHED_SMT
++	SMT_LEVEL_SPACE_HOLDER,
++#endif
++	COREGROUP_LEVEL_SPACE_HOLDER,
++	CORE_LEVEL_SPACE_HOLDER,
++	OTHER_LEVEL_SPACE_HOLDER,
++	NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++
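++/*
++ * Walk an array of topology masks, starting at @mask, and return the first
++ * cpu found in both @cpumask and one of the masks, walking from the first
++ * (closest) affinity level outwards.
++ */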
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++	int cpu;
++
++	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++		mask++;
++
++	return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
++
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++	return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++	return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ, call to
++	 * Relax lockdep_assert_held() checking as in VRQ; a call to
++	 * sched_info_xxxx() may not hold rq->lock:
++	 */
++	return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ; a call to
++	 * sched_info_xxxx() may not hold rq->lock:
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP  - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP		0x01
++
++#define ENQUEUE_WAKEUP		0x01
++
++
++/*
++ * Below are scheduler APIs used by other kernel code.
++ * They use a dummy rq_flags.
++ * TODO: BMQ needs to support these APIs for compatibility with the mainline
++ * scheduler code.
++ */
++struct rq_flags {
++	unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	local_irq_disable();
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++	return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++	return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++	lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++	raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++	local_irq_disable();
++	raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++	raw_spin_rq_unlock(rq);
++	local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++	return rq->curr == p;
++}
++
++static inline bool task_on_cpu(struct task_struct *p)
++{
++	return p->on_cpu;
++}
++
++extern int task_running_nice(struct task_struct *p);
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++	rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	WARN_ON(!rcu_read_lock_held());
++	return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	return rq->cpu;
++#else
++	return 0;
++#endif
++}
++
++extern void resched_cpu(int cpu);
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT	0
++#define NOHZ_STATS_KICK_BIT	1
++
++#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++	u64			total;
++	u64			tick_delta;
++	u64			irq_start_time;
++	struct u64_stats_sync	sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
++ * subtracted and would never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++	unsigned int seq;
++	u64 total;
++
++	do {
++		seq = __u64_stats_fetch_begin(&irqtime->sync);
++		total = irqtime->total;
++	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++	return total;
++}
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant()	(true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant()	(false)
++#endif
++
++#ifdef CONFIG_SMP
++unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
++				 unsigned long min,
++				 unsigned long max);
++#endif /* CONFIG_SMP */
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching is
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV	0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++	int membarrier_state;
++
++	if (prev_mm == next_mm)
++		return;
++
++	membarrier_state = atomic_read(&next_mm->membarrier_state);
++	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++		return;
++
++	WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline
++unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
++				  struct task_struct *p)
++{
++	return util;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++#ifdef CONFIG_SCHED_MM_CID
++
++#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
++#define MM_CID_SCAN_DELAY	100			/* 100ms */
++
++extern raw_spinlock_t cid_lock;
++extern int use_cid_lock;
++
++extern void sched_mm_cid_migrate_from(struct task_struct *t);
++extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu);
++extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
++extern void init_sched_mm_cid(struct task_struct *t);
++
++static inline void __mm_cid_put(struct mm_struct *mm, int cid)
++{
++	if (cid < 0)
++		return;
++	cpumask_clear_cpu(cid, mm_cidmask(mm));
++}
++
++/*
++ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
++ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
++ * be held to transition to other states.
++ *
++ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
++ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
++ */
++static inline void mm_cid_put_lazy(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (!mm_cid_is_lazy_put(cid) ||
++	    !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid, res;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	for (;;) {
++		if (mm_cid_is_unset(cid))
++			return MM_CID_UNSET;
++		/*
++		 * Attempt transition from valid or lazy-put to unset.
++		 */
++		res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
++		if (res == cid)
++			break;
++		cid = res;
++	}
++	return cid;
++}
++
++static inline void mm_cid_put(struct mm_struct *mm)
++{
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = mm_cid_pcpu_unset(mm);
++	if (cid == MM_CID_UNSET)
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int __mm_cid_try_get(struct mm_struct *mm)
++{
++	struct cpumask *cpumask;
++	int cid;
++
++	cpumask = mm_cidmask(mm);
++	/*
++	 * Retry finding first zero bit if the mask is temporarily
++	 * filled. This only happens during concurrent remote-clear
++	 * which owns a cid without holding a rq lock.
++	 */
++	for (;;) {
++		cid = cpumask_first_zero(cpumask);
++		if (cid < nr_cpu_ids)
++			break;
++		cpu_relax();
++	}
++	if (cpumask_test_and_set_cpu(cid, cpumask))
++		return -1;
++	return cid;
++}
++
++/*
++ * Save a snapshot of the current runqueue time of this cpu
++ * with the per-cpu cid value, to allow estimating how recently it was used.
++ */
++static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
++{
++	struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
++
++	lockdep_assert_rq_held(rq);
++	WRITE_ONCE(pcpu_cid->time, rq->clock);
++}
++
++static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++	int cid;
++
++	/*
++	 * All allocations (even those using the cid_lock) are lock-free. If
++	 * use_cid_lock is set, hold the cid_lock to perform cid allocation to
++	 * guarantee forward progress.
++	 */
++	if (!READ_ONCE(use_cid_lock)) {
++		cid = __mm_cid_try_get(mm);
++		if (cid >= 0)
++			goto end;
++		raw_spin_lock(&cid_lock);
++	} else {
++		raw_spin_lock(&cid_lock);
++		cid = __mm_cid_try_get(mm);
++		if (cid >= 0)
++			goto unlock;
++	}
++
++	/*
++	 * cid concurrently allocated. Retry while forcing following
++	 * allocations to use the cid_lock to ensure forward progress.
++	 */
++	WRITE_ONCE(use_cid_lock, 1);
++	/*
++	 * Set use_cid_lock before allocation. Only care about program order
++	 * because this is only required for forward progress.
++	 */
++	barrier();
++	/*
++	 * Retry until it succeeds. It is guaranteed to eventually succeed once
++	 * all incoming allocations observe the use_cid_lock flag set.
++	 */
++	do {
++		cid = __mm_cid_try_get(mm);
++		cpu_relax();
++	} while (cid < 0);
++	/*
++	 * Allocate before clearing use_cid_lock. Only care about
++	 * program order because this is for forward progress.
++	 */
++	barrier();
++	WRITE_ONCE(use_cid_lock, 0);
++unlock:
++	raw_spin_unlock(&cid_lock);
++end:
++	mm_cid_snapshot_time(rq, mm);
++	return cid;
++}
++
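++/*
++ * Context-switch fast path: reuse a still-valid per-cpu cid, finish a pending
++ * lazy-put if one is observed, otherwise fall back to a fresh allocation.
++ */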
++static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	struct cpumask *cpumask;
++	int cid;
++
++	lockdep_assert_rq_held(rq);
++	cpumask = mm_cidmask(mm);
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (mm_cid_is_valid(cid)) {
++		mm_cid_snapshot_time(rq, mm);
++		return cid;
++	}
++	if (mm_cid_is_lazy_put(cid)) {
++		if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++			__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++	}
++	cid = __mm_cid_get(rq, mm);
++	__this_cpu_write(pcpu_cid->cid, cid);
++	return cid;
++}
++
++static inline void switch_mm_cid(struct rq *rq,
++				 struct task_struct *prev,
++				 struct task_struct *next)
++{
++	/*
++	 * Provide a memory barrier between rq->curr store and load of
++	 * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
++	 *
++	 * Should be adapted if context_switch() is modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		/*
++		 * user -> kernel transition does not guarantee a barrier, but
++		 * we can use the fact that it performs an atomic operation in
++		 * mmgrab().
++		 */
++		if (prev->mm)                           // from user
++			smp_mb__after_mmgrab();
++		/*
++		 * kernel -> kernel transition does not change rq->curr->mm
++		 * state. It stays NULL.
++		 */
++	} else {                                        // to user
++		/*
++		 * kernel -> user transition does not provide a barrier
++		 * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
++		 * Provide it here.
++		 */
++		if (!prev->mm)                          // from kernel
++			smp_mb();
++		/*
++		 * user -> user transition guarantees a memory barrier through
++		 * switch_mm() when current->mm changes. If current->mm is
++		 * unchanged, no barrier is needed.
++		 */
++	}
++	if (prev->mm_cid_active) {
++		mm_cid_snapshot_time(rq, prev->mm);
++		mm_cid_put_lazy(prev);
++		prev->mm_cid = -1;
++	}
++	if (next->mm_cid_active)
++		next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
++}
++
++#else
++static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
++static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
++static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu) { }
++static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
++static inline void init_sched_mm_cid(struct task_struct *t) { }
++#endif
++
++#ifdef CONFIG_SMP
++extern struct balance_callback balance_push_callback;
++
++static inline void
++__queue_balance_callback(struct rq *rq,
++			 struct balance_callback *head)
++{
++	lockdep_assert_rq_held(rq);
++
++	/*
++	 * Don't (re)queue an already queued item; nor queue anything when
++	 * balance_push() is active, see the comment with
++	 * balance_push_callback.
++	 */
++	if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
++		return;
++
++	head->next = rq->balance_callback;
++	rq->balance_callback = head;
++}
++#endif /* CONFIG_SMP */
++
++#endif /* ALT_SCHED_H */
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+--- a/kernel/sched/bmq.h	1970-01-01 01:00:00.000000000 +0100
++++ b/kernel/sched/bmq.h	2024-07-16 11:57:44.445146416 +0200
+@@ -0,0 +1,98 @@
++#define ALT_SCHED_NAME "BMQ"
++
++/*
++ * BMQ only routines
++ */
++static inline void boost_task(struct task_struct *p, int n)
++{
++	int limit;
++
++	switch (p->policy) {
++	case SCHED_NORMAL:
++		limit = -MAX_PRIORITY_ADJ;
++		break;
++	case SCHED_BATCH:
++		limit = 0;
++		break;
++	default:
++		return;
++	}
++
++	p->boost_prio = max(limit, p->boost_prio - n);
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++	if (p->boost_prio < MAX_PRIORITY_ADJ)
++		p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++/* This API is used in task_prio(); its return value is read by human users */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	return p->prio + p->boost_prio - MIN_NORMAL_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO)? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MIN_NORMAL_PRIO) / 2;
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)	\
++	prio = task_sched_prio(p);		\
++	idx = prio;
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++	return idx;
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++	return rq->prio;
++}
++
++inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio + p->boost_prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq) {}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	deboost_task(p);
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++static void sched_task_fork(struct task_struct *p, struct rq *rq) {}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++	s64 delta = this_rq()->clock_task - p->last_ran;
++
++	if (likely(delta > 0))
++		boost_task(p, delta >> 22);
++}
++
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++	boost_task(p, 1);
++}
+diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
+--- a/kernel/sched/build_policy.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/build_policy.c	2024-07-16 11:57:44.445146416 +0200
+@@ -42,13 +42,19 @@
+ 
+ #include "idle.c"
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+ 
+ #include "cputime.c"
+-#include "deadline.c"
+ 
++#ifndef CONFIG_SCHED_ALT
++#include "deadline.c"
++#endif
+diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
+--- a/kernel/sched/build_utility.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/build_utility.c	2024-07-16 11:57:44.445146416 +0200
+@@ -84,7 +84,9 @@
+ 
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+ 
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+--- a/kernel/sched/cpufreq_schedutil.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/cpufreq_schedutil.c	2024-07-16 11:57:44.445146416 +0200
+@@ -197,12 +197,17 @@ unsigned long sugov_effective_cpu_perf(i
+ 
+ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
+ 
+ 	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
+ 	util = max(util, boost);
+ 	sg_cpu->bw_min = min;
+ 	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
++#else /* CONFIG_SCHED_ALT */
++	sg_cpu->bw_min = 0;
++	sg_cpu->util = rq_load_util(cpu_rq(sg_cpu->cpu), arch_scale_cpu_capacity(sg_cpu->cpu));
++#endif /* CONFIG_SCHED_ALT */
+ }
+ 
+ /**
+@@ -343,8 +348,10 @@ static inline bool sugov_cpu_is_busy(str
+  */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
+ 		sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+ 
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -676,6 +683,7 @@ static int sugov_kthread_create(struct s
+ 	}
+ 
+ 	ret = sched_setattr_nocheck(thread, &attr);
++
+ 	if (ret) {
+ 		kthread_stop(thread);
+ 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+--- a/kernel/sched/cputime.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/cputime.c	2024-07-16 11:57:44.445146416 +0200
+@@ -126,7 +126,7 @@ void account_user_time(struct task_struc
+ 	p->utime += cputime;
+ 	account_group_user_time(p, cputime);
+ 
+-	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+ 
+ 	/* Add user time to cpustat. */
+ 	task_group_account_field(p, index, cputime);
+@@ -150,7 +150,7 @@ void account_guest_time(struct task_stru
+ 	p->gtime += cputime;
+ 
+ 	/* Add guest time to cpustat. */
+-	if (task_nice(p) > 0) {
++	if (task_running_nice(p)) {
+ 		task_group_account_field(p, CPUTIME_NICE, cputime);
+ 		cpustat[CPUTIME_GUEST_NICE] += cputime;
+ 	} else {
+@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+-	return t->se.sum_exec_runtime;
++	return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct
+ 	struct rq *rq;
+ 
+ 	rq = task_rq_lock(t, &rf);
+-	ns = t->se.sum_exec_runtime;
++	ns = tsk_seruntime(t);
+ 	task_rq_unlock(rq, t, &rf);
+ 
+ 	return ns;
+@@ -617,7 +617,7 @@ out:
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ 	struct task_cputime cputime = {
+-		.sum_exec_runtime = p->se.sum_exec_runtime,
++		.sum_exec_runtime = tsk_seruntime(p),
+ 	};
+ 
+ 	if (task_cputime(p, &cputime.utime, &cputime.stime))
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+--- a/kernel/sched/debug.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/debug.c	2024-07-16 11:57:44.445146416 +0200
+@@ -7,6 +7,7 @@
+  * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * This allows printing both to /sys/kernel/debug/sched/debug and
+  * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sche
+ };
+ 
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 
+@@ -278,6 +280,7 @@ static const struct file_operations sche
+ 
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+ 
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+ 
+ #ifdef CONFIG_SMP
+@@ -332,6 +335,7 @@ static const struct file_operations sche
+ 	.llseek		= seq_lseek,
+ 	.release	= seq_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ static struct dentry *debugfs_sched;
+ 
+@@ -341,14 +345,17 @@ static __init int sched_init_debug(void)
+ 
+ 	debugfs_sched = debugfs_create_dir("sched", NULL);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ 	debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+ 
+ 	debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
+ 	debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
+ 
+@@ -373,11 +380,13 @@ static __init int sched_init_debug(void)
+ #endif
+ 
+ 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	return 0;
+ }
+ late_initcall(sched_init_debug);
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 
+ static cpumask_var_t		sd_sysctl_cpus;
+@@ -1111,6 +1120,7 @@ void proc_sched_set_task(struct task_str
+ 	memset(&p->stats, 0, sizeof(p->stats));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+--- a/kernel/sched/idle.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/idle.c	2024-07-16 11:57:44.449146659 +0200
+@@ -430,6 +430,7 @@ void cpu_startup_entry(enum cpuhp_state
+ 		do_idle();
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * idle-task scheduling class.
+  */
+@@ -551,3 +552,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ 	.switched_to		= switched_to_idle,
+ 	.update_curr		= update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+--- a/kernel/sched/Makefile	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/Makefile	2024-07-16 11:57:44.441146173 +0200
+@@ -28,7 +28,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+--- a/kernel/sched/pds.h	1970-01-01 01:00:00.000000000 +0100
++++ b/kernel/sched/pds.h	2024-07-16 11:57:44.449146659 +0200
+@@ -0,0 +1,134 @@
++#define ALT_SCHED_NAME "PDS"
++
++static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
++
++#define SCHED_NORMAL_PRIO_NUM	(32)
++#define SCHED_EDGE_DELTA	(SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
++
++/* PDS assume SCHED_NORMAL_PRIO_NUM is power of 2 */
++#define SCHED_NORMAL_PRIO_MOD(x)	((x) & (SCHED_NORMAL_PRIO_NUM - 1))
++
++/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
++static __read_mostly int sched_timeslice_shift = 23;
++
++/*
++ * Common interfaces
++ */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	u64 sched_dl = max(p->deadline, rq->time_edge);
++
++#ifdef ALT_SCHED_DEBUG
++	if (WARN_ONCE(sched_dl - rq->time_edge > NORMAL_PRIO_NUM - 1,
++		      "pds: task_sched_prio_normal() delta %lld\n", sched_dl - rq->time_edge))
++		return SCHED_NORMAL_PRIO_NUM - 1;
++#endif
++
++	return sched_dl - rq->time_edge;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)							\
++	if (p->prio < MIN_NORMAL_PRIO) {							\
++		prio = p->prio >> 2;								\
++		idx = prio;									\
++	} else {										\
++		u64 sched_dl = max(p->deadline, rq->time_edge);					\
++		prio = MIN_SCHED_NORMAL_PRIO + sched_dl - rq->time_edge;			\
++		idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_dl);			\
++	}
++
++static inline int sched_prio2idx(int sched_prio, struct rq *rq)
++{
++	return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
++		sched_prio :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
++}
++
++static inline int sched_idx2prio(int sched_idx, struct rq *rq)
++{
++	return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
++		sched_idx :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++	return rq->prio_idx;
++}
++
++int task_running_nice(struct task_struct *p)
++{
++	return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq)
++{
++	struct list_head head;
++	u64 old = rq->time_edge;
++	u64 now = rq->clock >> sched_timeslice_shift;
++	u64 prio, delta;
++	DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
++
++	if (now == old)
++		return;
++
++	rq->time_edge = now;
++	delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
++	INIT_LIST_HEAD(&head);
++
++	prio = MIN_SCHED_NORMAL_PRIO;
++	for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
++		list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
++				      SCHED_NORMAL_PRIO_MOD(prio + old), &head);
++
++	bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
++	if (!list_empty(&head)) {
++		u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
++
++		__list_splice(&head, rq->queue.heads + idx, rq->queue.heads[idx].next);
++		set_bit(MIN_SCHED_NORMAL_PRIO, normal);
++	}
++	bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
++		       (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
++
++	if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
++		return;
++
++	rq->prio = max_t(u64, MIN_SCHED_NORMAL_PRIO, rq->prio - delta);
++	rq->prio_idx = sched_prio2idx(rq->prio, rq);
++}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	if (p->prio >= MIN_NORMAL_PRIO)
++		p->deadline = rq->time_edge + SCHED_EDGE_DELTA +
++			      (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++	u64 max_dl = rq->time_edge + SCHED_EDGE_DELTA + NICE_WIDTH / 2 - 1;
++	if (unlikely(p->deadline > max_dl))
++		p->deadline = max_dl;
++}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	sched_task_renew(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sysctl_sched_base_slice;
++	sched_task_renew(p, rq);
++}
++
++static inline void sched_task_ttwu(struct task_struct *p) {}
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+--- a/kernel/sched/pelt.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/pelt.c	2024-07-16 11:57:44.449146659 +0200
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa,
+ 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * sched_entity:
+  *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struc
+ 
+ 	return 0;
+ }
++#endif
+ 
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * hardware:
+  *
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+--- a/kernel/sched/pelt.h	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/pelt.h	2024-07-16 11:57:44.449146659 +0200
+@@ -1,13 +1,15 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
++#endif
+ 
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_hw_load_avg(u64 now, struct rq *rq, u64 capacity);
+ 
+ static inline u64 hw_load_avg(struct rq *rq)
+@@ -44,6 +46,7 @@ static inline u32 get_pelt_divider(struc
+ 	return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ 	unsigned int enqueued;
+@@ -180,9 +183,11 @@ static inline u64 cfs_rq_clock_pelt(stru
+ 	return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #else
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -200,6 +205,7 @@ update_dl_rq_load_avg(u64 now, struct rq
+ {
+ 	return 0;
+ }
++#endif
+ 
+ static inline int
+ update_hw_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+--- a/kernel/sched/sched.h	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/sched.h	2024-07-16 11:57:44.449146659 +0200
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+ 
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -3481,4 +3485,9 @@ static inline void init_sched_mm_cid(str
+ extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
+ extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);
+ 
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+--- a/kernel/sched/stats.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/stats.c	2024-07-16 11:57:44.449146659 +0200
+@@ -125,9 +125,11 @@ static int show_schedstat(struct seq_fil
+ 	} else {
+ 		struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		struct sched_domain *sd;
+ 		int dcount = 0;
+ #endif
++#endif
+ 		cpu = (unsigned long)(v - 2);
+ 		rq = cpu_rq(cpu);
+ 
+@@ -143,6 +145,7 @@ static int show_schedstat(struct seq_fil
+ 		seq_printf(seq, "\n");
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		/* domain-specific stats */
+ 		rcu_read_lock();
+ 		for_each_domain(cpu, sd) {
+@@ -171,6 +174,7 @@ static int show_schedstat(struct seq_fil
+ 		}
+ 		rcu_read_unlock();
+ #endif
++#endif
+ 	}
+ 	return 0;
+ }
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+--- a/kernel/sched/stats.h	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/stats.h	2024-07-16 11:57:44.449146659 +0200
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart
+ 
+ #endif /* CONFIG_SCHEDSTATS */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ 	struct sched_entity     se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity
+ #endif
+ 	return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PSI
+ void psi_task_change(struct task_struct *task, int clear, int set);
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+--- a/kernel/sched/topology.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sched/topology.c	2024-07-16 11:57:44.449146659 +0200
+@@ -3,6 +3,7 @@
+  * Scheduler topology setup/handling methods
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include <linux/bsearch.h>
+ 
+ DEFINE_MUTEX(sched_domains_mutex);
+@@ -1451,8 +1452,10 @@ static void asym_cpu_capacity_scan(void)
+  */
+ 
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+ 
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ 	if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1687,6 +1690,7 @@ sd_init(struct sched_domain_topology_lev
+ 
+ 	return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ /*
+  * Topology list, bottom-up.
+@@ -1723,6 +1727,7 @@ void __init set_sched_topology(struct sc
+ 	sched_domain_topology_saved = NULL;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+ 
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2789,3 +2794,28 @@ void partition_sched_domains(int ndoms_n
+ 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ 	mutex_unlock(&sched_domains_mutex);
+ }
++#else /* CONFIG_SCHED_ALT */
++DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
++
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++			     struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return best_mask_cpu(cpu, cpus);
++}
++
++int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
++{
++	return cpumask_nth(cpu, cpus);
++}
++
++const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
++{
++	return ERR_PTR(-EOPNOTSUPP);
++}
++EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+--- a/kernel/sysctl.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/sysctl.c	2024-07-16 11:57:44.449146659 +0200
+@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
+ 
+ /* Constants used for minimum and maximum */
+ 
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PERF_EVENTS
+ static const int six_hundred_forty_kb = 640 * 1024;
+ #endif
+@@ -1912,6 +1916,17 @@ static struct ctl_table kern_table[] = {
+ 		.proc_handler	= proc_dointvec,
+ 	},
+ #endif
++#ifdef CONFIG_SCHED_ALT
++	{
++		.procname	= "yield_type",
++		.data		= &sched_yield_type,
++		.maxlen		= sizeof (int),
++		.mode		= 0644,
++		.proc_handler	= &proc_dointvec_minmax,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_TWO,
++	},
++#endif
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ 	{
+ 		.procname	= "spin_retry",
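(Editor's aside: the kernel.yield_type entry added to kern_table above becomes an ordinary runtime tunable once a CONFIG_SCHED_ALT kernel is booted, so it can be changed without rebuilding. A minimal sketch; the entry's extra1/extra2 bounds restrict accepted values to 0..2, and the knob only exists on BMQ/PDS kernels. The sysctl.d file name is only an example.)

    # read the current yield behaviour
    cat /proc/sys/kernel/yield_type
    # change behaviour until the next reboot
    sysctl -w kernel.yield_type=1
    # persist it across reboots
    echo 'kernel.yield_type = 1' > /etc/sysctl.d/99-sched-alt.conf
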
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+--- a/kernel/time/hrtimer.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/time/hrtimer.c	2024-07-16 11:57:44.453146902 +0200
+@@ -2074,8 +2074,10 @@ long hrtimer_nanosleep(ktime_t rqtp, con
+ 	int ret = 0;
+ 	u64 slack;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	slack = current->timer_slack_ns;
+-	if (rt_task(current))
++	if (dl_task(current) || rt_task(current))
++#endif
+ 		slack = 0;
+ 
+ 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+--- a/kernel/time/posix-cpu-timers.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/time/posix-cpu-timers.c	2024-07-16 11:57:44.453146902 +0200
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct t
+ 	u64 stime, utime;
+ 
+ 	task_cputime(p, &utime, &stime);
+-	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++	store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+ 
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -867,6 +867,7 @@ static void collect_posix_cputimers(stru
+ 	}
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ 	if (tsk->dl.dl_overrun) {
+@@ -874,6 +875,7 @@ static inline void check_dl_overrun(stru
+ 		send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ 	}
+ }
++#endif
+ 
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -901,8 +903,10 @@ static void check_thread_timers(struct t
+ 	u64 samples[CPUCLOCK_MAX];
+ 	unsigned long soft;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk))
+ 		check_dl_overrun(tsk);
++#endif
+ 
+ 	if (expiry_cache_is_inactive(pct))
+ 		return;
+@@ -916,7 +920,7 @@ static void check_thread_timers(struct t
+ 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ 	if (soft != RLIM_INFINITY) {
+ 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
+-		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+ 
+ 		/* At the hard limit, send SIGKILL. No further action. */
+@@ -1152,8 +1156,10 @@ static inline bool fastpath_timer_check(
+ 			return true;
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk) && tsk->dl.dl_overrun)
+ 		return true;
++#endif
+ 
+ 	return false;
+ }
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+--- a/kernel/trace/trace_selftest.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/trace/trace_selftest.c	2024-07-16 11:57:44.461147389 +0200
+@@ -1155,10 +1155,15 @@ static int trace_wakeup_test_thread(void
+ {
+ 	/* Make this a -deadline thread */
+ 	static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++		/* No deadline on BMQ/PDS, use RR */
++		.sched_policy = SCHED_RR,
++#else
+ 		.sched_policy = SCHED_DEADLINE,
+ 		.sched_runtime = 100000ULL,
+ 		.sched_deadline = 10000000ULL,
+ 		.sched_period = 10000000ULL
++#endif
+ 	};
+ 	struct wakeup_test_data *x = data;
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+--- a/kernel/workqueue.c	2024-07-15 00:43:32.000000000 +0200
++++ b/kernel/workqueue.c	2024-07-16 11:57:44.465147632 +0200
+@@ -1248,6 +1248,7 @@ static bool kick_pool(struct worker_pool
+ 
+ 	p = worker->task;
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 	/*
+ 	 * Idle @worker is about to execute @work and waking up provides an
+@@ -1277,6 +1278,8 @@ static bool kick_pool(struct worker_pool
+ 		}
+ 	}
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
++
+ 	wake_up_process(p);
+ 	return true;
+ }
+@@ -1405,7 +1408,11 @@ void wq_worker_running(struct task_struc
+ 	 * CPU intensive auto-detection cares about how long a work item hogged
+ 	 * CPU without sleeping. Reset the starting timestamp on wakeup.
+ 	 */
++#ifdef CONFIG_SCHED_ALT
++	worker->current_at = worker->task->sched_time;
++#else
+ 	worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 
+ 	WRITE_ONCE(worker->sleeping, 0);
+ }
+@@ -1490,7 +1497,11 @@ void wq_worker_tick(struct task_struct *
+ 	 * We probably want to make this prettier in the future.
+ 	 */
+ 	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
++#ifdef CONFIG_SCHED_ALT
++	    worker->task->sched_time - worker->current_at <
++#else
+ 	    worker->task->se.sum_exec_runtime - worker->current_at <
++#endif
+ 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
+ 		return;
+ 
+@@ -3176,7 +3187,11 @@ __acquires(&pool->lock)
+ 	worker->current_func = work->func;
+ 	worker->current_pwq = pwq;
+ 	if (worker->task)
++#ifdef CONFIG_SCHED_ALT
++		worker->current_at = worker->task->sched_time;
++#else
+ 		worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 	work_data = *work_data_bits(work);
+ 	worker->current_color = get_work_color(work_data);
+ 

diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 00000000..409dda6b
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,12 @@
+--- a/init/Kconfig	2023-02-13 08:16:09.534315265 -0500
++++ b/init/Kconfig	2023-02-13 08:17:24.130237204 -0500
+@@ -867,8 +867,9 @@ config UCLAMP_BUCKETS_COUNT
+ 	  If in doubt, use the default value.
+ 
+ menuconfig SCHED_ALT
++	depends on X86_64
+ 	bool "Alternative CPU Schedulers"
+-	default y
++	default n
+ 	help
+ 	  This feature enable alternative CPU scheduler"

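(Editor's aside: because the Gentoo defaults patch above only adds the X86_64 dependency and flips the default to n, the alternative schedulers stay disabled unless selected explicitly at configuration time. A sketch of the resulting .config fragment follows; CONFIG_SCHED_ALT comes from the hunk above, while the flavour options CONFIG_SCHED_BMQ / CONFIG_SCHED_PDS are provided by the main 5020 patch and are listed here as assumptions.)

    CONFIG_SCHED_ALT=y
    # pick exactly one flavour
    CONFIG_SCHED_BMQ=y
    # CONFIG_SCHED_PDS is not set
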

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-07-27 22:16 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-07-27 22:16 UTC (permalink / raw
  To: gentoo-commits

commit:     1b430d349fc20344677df4edb1d368863aa97642
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jul 27 22:15:05 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jul 27 22:15:05 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1b430d34

Update cpu optimization patch. use=experimental

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5010_enable-cpu-optimizations-universal.patch | 88 ++++++++++++++++-----------
 1 file changed, 51 insertions(+), 37 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index 75c48bf1..b5382da3 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,4 +1,4 @@
-From 71dd30c3e2ab2852b0290ae1f34ce1c7f8655040 Mon Sep 17 00:00:00 2001
+From 86977b5357d9212d57841bc325e80f43081bb333 Mon Sep 17 00:00:00 2001
 From: graysky <therealgraysky@proton.me>
 Date: Wed, 21 Feb 2024 08:38:13 -0500
 
@@ -32,8 +32,9 @@ CPU-specific microarchitectures include:
 • AMD Family 15h (Excavator)
 • AMD Family 17h (Zen)
 • AMD Family 17h (Zen 2)
-• AMD Family 19h (Zen 3)†
-• AMD Family 19h (Zen 4)§
+• AMD Family 19h (Zen 3)**
+• AMD Family 19h (Zen 4)‡
+• AMD Family 1Ah (Zen 5)§
 • Intel Silvermont low-power processors
 • Intel Goldmont low-power processors (Apollo Lake and Denverton)
 • Intel Goldmont Plus low-power processors (Gemini Lake)
@@ -50,18 +51,20 @@ CPU-specific microarchitectures include:
 • Intel Xeon (Cascade Lake)
 • Intel Xeon (Cooper Lake)*
 • Intel 3rd Gen 10nm++ i3/i5/i7/i9-family (Tiger Lake)*
-• Intel 4th Gen 10nm++ Xeon (Sapphire Rapids)‡
-• Intel 11th Gen i3/i5/i7/i9-family (Rocket Lake)‡
-• Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)‡
-• Intel 13th Gen i3/i5/i7/i9-family (Raptor Lake)§
-• Intel 14th Gen i3/i5/i7/i9-family (Meteor Lake)§
-• Intel 5th Gen 10nm++ Xeon (Emerald Rapids)§
+• Intel 4th Gen 10nm++ Xeon (Sapphire Rapids)†
+• Intel 11th Gen i3/i5/i7/i9-family (Rocket Lake)†
+• Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)†
+• Intel 13th Gen i3/i5/i7/i9-family (Raptor Lake)‡
+• Intel 14th Gen i3/i5/i7/i9-family (Meteor Lake)‡
+• Intel 5th Gen 10nm++ Xeon (Emerald Rapids)‡
 
 Notes: If not otherwise noted, gcc >=9.1 is required for support.
        *Requires gcc >=10.1 or clang >=10.0
-       †Required gcc >=10.3 or clang >=12.0
-       ‡Required gcc >=11.1 or clang >=12.0
-       §Required gcc >=13.0 or clang >=15.0.5
+      **Required gcc >=10.3 or clang >=12.0
+       †Required gcc >=11.1 or clang >=12.0
+       ‡Required gcc >=13.0 or clang >=15.0.5
+       §Required gcc >=14.1 or clang >=19.0?
+
 
 It also offers to compile passing the 'native' option which, "selects the CPU
 to generate code for at compilation time by determining the processor type of
@@ -101,13 +104,13 @@ REFERENCES
 4.  https://github.com/graysky2/kernel_gcc_patch/issues/15
 5.  http://www.linuxforge.net/docs/linux/linux-gcc.php
 ---
- arch/x86/Kconfig.cpu            | 424 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile               |  44 +++-
- arch/x86/include/asm/vermagic.h |  74 ++++++
- 3 files changed, 526 insertions(+), 16 deletions(-)
+ arch/x86/Kconfig.cpu            | 432 ++++++++++++++++++++++++++++++--
+ arch/x86/Makefile               |  45 +++-
+ arch/x86/include/asm/vermagic.h |  76 ++++++
+ 3 files changed, 537 insertions(+), 16 deletions(-)
 
 diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 2a7279d80460a..6924a0f5f1c26 100644
+index 2a7279d80460a..55941c31ade35 100644
 --- a/arch/x86/Kconfig.cpu
 +++ b/arch/x86/Kconfig.cpu
 @@ -157,7 +157,7 @@ config MPENTIUM4
@@ -128,7 +131,7 @@ index 2a7279d80460a..6924a0f5f1c26 100644
  	depends on X86_32
  	help
  	  Select this for an AMD Athlon K7-family processor.  Enables use of
-@@ -173,12 +173,106 @@ config MK7
+@@ -173,12 +173,114 @@ config MK7
  	  flags to GCC.
  
  config MK8
@@ -232,11 +235,19 @@ index 2a7279d80460a..6924a0f5f1c26 100644
 +	  Select this for AMD Family 19h Zen 4 processors.
 +
 +	  Enables -march=znver4
++
++config MZEN5
++	bool "AMD Zen 5"
++	depends on (CC_IS_GCC && GCC_VERSION >= 141000) || (CC_IS_CLANG && CLANG_VERSION >= 180000)
++	help
++	  Select this for AMD Family 19h Zen 5 processors.
++
++	  Enables -march=znver5
 +
  config MCRUSOE
  	bool "Crusoe"
  	depends on X86_32
-@@ -270,7 +364,7 @@ config MPSC
+@@ -270,7 +372,7 @@ config MPSC
  	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
  
  config MCORE2
@@ -245,7 +256,7 @@ index 2a7279d80460a..6924a0f5f1c26 100644
  	help
  
  	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -278,6 +372,8 @@ config MCORE2
+@@ -278,6 +380,8 @@ config MCORE2
  	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
  	  (not a typo)
  
@@ -254,7 +265,7 @@ index 2a7279d80460a..6924a0f5f1c26 100644
  config MATOM
  	bool "Intel Atom"
  	help
-@@ -287,6 +383,212 @@ config MATOM
+@@ -287,6 +391,212 @@ config MATOM
  	  accordingly optimized code. Use a recent GCC with specific Atom
  	  support in order to fully benefit from selecting this option.
  
@@ -467,7 +478,7 @@ index 2a7279d80460a..6924a0f5f1c26 100644
  config GENERIC_CPU
  	bool "Generic-x86-64"
  	depends on X86_64
-@@ -294,6 +596,50 @@ config GENERIC_CPU
+@@ -294,6 +604,50 @@ config GENERIC_CPU
  	  Generic x86-64 CPU.
  	  Run equally well on all x86-64 CPUs.
  
@@ -518,14 +529,14 @@ index 2a7279d80460a..6924a0f5f1c26 100644
  endchoice
  
  config X86_GENERIC
-@@ -318,9 +664,17 @@ config X86_INTERNODE_CACHE_SHIFT
+@@ -318,9 +672,17 @@ config X86_INTERNODE_CACHE_SHIFT
  config X86_L1_CACHE_SHIFT
  	int
  	default "7" if MPENTIUM4 || MPSC
 -	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
 +	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 \
 +	|| MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER \
-+	|| MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MNEHALEM || MWESTMERE || MSILVERMONT \
++	|| MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT \
 +	|| MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL \
 +	|| MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE \
 +	|| MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE \
@@ -538,7 +549,7 @@ index 2a7279d80460a..6924a0f5f1c26 100644
  
  config X86_F00F_BUG
  	def_bool y
-@@ -332,15 +686,27 @@ config X86_INVD_BUG
+@@ -332,15 +694,27 @@ config X86_INVD_BUG
  
  config X86_ALIGNMENT_16
  	def_bool y
@@ -561,7 +572,7 @@ index 2a7279d80460a..6924a0f5f1c26 100644
 +	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM \
 +	|| MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX \
 +	|| MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER \
-+	|| MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MNEHALEM \
++	|| MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM \
 +	|| MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE \
 +	|| MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE \
 +	|| MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE \
@@ -569,7 +580,7 @@ index 2a7279d80460a..6924a0f5f1c26 100644
  
  #
  # P6_NOPs are a relatively minor optimization that require a family >=
-@@ -356,11 +722,22 @@ config X86_USE_PPRO_CHECKSUM
+@@ -356,11 +730,22 @@ config X86_USE_PPRO_CHECKSUM
  config X86_P6_NOP
  	def_bool y
  	depends on X86_64
@@ -586,7 +597,7 @@ index 2a7279d80460a..6924a0f5f1c26 100644
 +	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM \
 +	|| MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 \
 +	|| MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER \
-+	|| MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MNEHALEM \
++	|| MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM \
 +	|| MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL \
 +	|| MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE \
 +	|| MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS \
@@ -594,7 +605,7 @@ index 2a7279d80460a..6924a0f5f1c26 100644
  
  config X86_HAVE_PAE
  	def_bool y
-@@ -368,18 +745,37 @@ config X86_HAVE_PAE
+@@ -368,18 +753,37 @@ config X86_HAVE_PAE
  
  config X86_CMPXCHG64
  	def_bool y
@@ -602,7 +613,7 @@ index 2a7279d80460a..6924a0f5f1c26 100644
 +	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 \
 +	|| M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 \
 +	|| MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN \
-+	|| MZEN2 || MZEN3 || MZEN4 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS \
++	|| MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS \
 +	|| MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE \
 +	|| MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE \
 +	|| MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
@@ -615,7 +626,7 @@ index 2a7279d80460a..6924a0f5f1c26 100644
 +	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 \
 +	|| MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 \
 +	|| MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR \
-+	|| MZEN || MZEN2 || MZEN3 || MZEN4 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT \
++	|| MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT \
 +	|| MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX \
 +	|| MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS \
 +	|| MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD)
@@ -627,7 +638,7 @@ index 2a7279d80460a..6924a0f5f1c26 100644
 +	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 \
 +	|| MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCORE2 || MK7 || MK8 ||  MK8SSE3 \
 +	|| MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER \
-+	|| MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MNEHALEM || MWESTMERE || MSILVERMONT \
++	|| MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT \
 +	|| MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL \
 +	|| MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE \
 +	|| MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MRAPTORLAKE \
@@ -636,10 +647,10 @@ index 2a7279d80460a..6924a0f5f1c26 100644
  	default "4"
  
 diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index da8f3caf27815..c873d10df15d0 100644
+index 5ab93fcdd691d..ac203b599befd 100644
 --- a/arch/x86/Makefile
 +++ b/arch/x86/Makefile
-@@ -152,8 +152,48 @@ else
+@@ -156,8 +156,49 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
          cflags-$(CONFIG_MK8)		+= -march=k8
          cflags-$(CONFIG_MPSC)		+= -march=nocona
@@ -658,6 +669,7 @@ index da8f3caf27815..c873d10df15d0 100644
 +        cflags-$(CONFIG_MZEN2) 	+= -march=znver2
 +        cflags-$(CONFIG_MZEN3) 	+= -march=znver3
 +        cflags-$(CONFIG_MZEN4) 	+= -march=znver4
++        cflags-$(CONFIG_MZEN5) 	+= -march=znver5
 +        cflags-$(CONFIG_MNATIVE_INTEL) += -march=native
 +        cflags-$(CONFIG_MNATIVE_AMD) 	+= -march=native
 +        cflags-$(CONFIG_MATOM) 	+= -march=bonnell
@@ -691,7 +703,7 @@ index da8f3caf27815..c873d10df15d0 100644
          KBUILD_CFLAGS += $(cflags-y)
  
 diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
-index 75884d2cdec37..02c1386eb653e 100644
+index 75884d2cdec37..7acca9b5a9d56 100644
 --- a/arch/x86/include/asm/vermagic.h
 +++ b/arch/x86/include/asm/vermagic.h
 @@ -17,6 +17,54 @@
@@ -749,7 +761,7 @@ index 75884d2cdec37..02c1386eb653e 100644
  #elif defined CONFIG_MATOM
  #define MODULE_PROC_FAMILY "ATOM "
  #elif defined CONFIG_M686
-@@ -35,6 +83,32 @@
+@@ -35,6 +83,34 @@
  #define MODULE_PROC_FAMILY "K7 "
  #elif defined CONFIG_MK8
  #define MODULE_PROC_FAMILY "K8 "
@@ -779,9 +791,11 @@ index 75884d2cdec37..02c1386eb653e 100644
 +#define MODULE_PROC_FAMILY "ZEN3 "
 +#elif defined CONFIG_MZEN4
 +#define MODULE_PROC_FAMILY "ZEN4 "
++#elif defined CONFIG_MZEN5
++#define MODULE_PROC_FAMILY "ZEN5 "
  #elif defined CONFIG_MELAN
  #define MODULE_PROC_FAMILY "ELAN "
  #elif defined CONFIG_MCRUSOE
 -- 
-2.43.2
+2.45.0
 

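(Editor's aside: the refreshed patch gates the new Zen 5 option on gcc >= 14.1 or a recent clang and maps it to -march=znver5, so it can be worth confirming the installed toolchain actually accepts that -march value before selecting CONFIG_MZEN5. An illustrative shell check, not part of the patch itself:)

    gcc -dumpversion        # expect 14.1 or newer for znver5 support
    echo 'int main(void){return 0;}' | \
        gcc -x c -march=znver5 - -o /dev/null && echo '-march=znver5 accepted'
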

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-08-03 15:16 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-08-03 15:16 UTC (permalink / raw
  To: gentoo-commits

commit:     3d6d5815699a3eec9e13019fcac398bd3992bb7d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Aug  3 15:15:05 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Aug  3 15:15:05 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3d6d5815

Linux 6.10.3, and additional patches

Rename 6.10.1 patch
Add jump label fix patch thanks to Holger Hoffstätte

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |     8 +
 1001_linux-6.10.1.patch => 1000_linux-6.10.1.patch |     0
 1002_linux-6.10.3.patch                            | 31017 +++++++++++++++++++
 2950_jump-label-fix.patch                          |    57 +
 4 files changed, 31082 insertions(+)

diff --git a/0000_README b/0000_README
index f2aa0e92..3c01ecbe 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-6.10.2.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.2
 
+Patch:  1002_linux-6.10.3.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.3
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
@@ -79,6 +83,10 @@ Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL
 
+Patch:  2950_jump-label-fix.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/
+Desc:   jump_label: Fix a regression
+
 Patch:  3000_Support-printing-firmware-info.patch
 From:   https://bugs.gentoo.org/732852
 Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev

diff --git a/1001_linux-6.10.1.patch b/1000_linux-6.10.1.patch
similarity index 100%
rename from 1001_linux-6.10.1.patch
rename to 1000_linux-6.10.1.patch

diff --git a/1002_linux-6.10.3.patch b/1002_linux-6.10.3.patch
new file mode 100644
index 00000000..77023431
--- /dev/null
+++ b/1002_linux-6.10.3.patch
@@ -0,0 +1,31017 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 27ec49af1bf27..2569e7f19b476 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4749,7 +4749,9 @@
+ 			none - Limited to cond_resched() calls
+ 			voluntary - Limited to cond_resched() and might_sleep() calls
+ 			full - Any section that isn't explicitly preempt disabled
+-			       can be preempted anytime.
++			       can be preempted anytime.  Tasks will also yield
++			       contended spinlocks (if the critical section isn't
++			       explicitly preempt disabled beyond the lock itself).
+ 
+ 	print-fatal-signals=
+ 			[KNL] debug: print fatal signals
+diff --git a/Documentation/arch/powerpc/kvm-nested.rst b/Documentation/arch/powerpc/kvm-nested.rst
+index 630602a8aa008..5defd13cc6c17 100644
+--- a/Documentation/arch/powerpc/kvm-nested.rst
++++ b/Documentation/arch/powerpc/kvm-nested.rst
+@@ -546,7 +546,9 @@ table information.
+ +--------+-------+----+--------+----------------------------------+
+ | 0x1052 | 0x08  | RW |   T    | CTRL                             |
+ +--------+-------+----+--------+----------------------------------+
+-| 0x1053-|       |    |        | Reserved                         |
++| 0x1053 | 0x08  | RW |   T    | DPDES                            |
+++--------+-------+----+--------+----------------------------------+
++| 0x1054-|       |    |        | Reserved                         |
+ | 0x1FFF |       |    |        |                                  |
+ +--------+-------+----+--------+----------------------------------+
+ | 0x2000 | 0x04  | RW |   T    | CR                               |
+diff --git a/Documentation/devicetree/bindings/phy/qcom,sc8280xp-qmp-usb3-uni-phy.yaml b/Documentation/devicetree/bindings/phy/qcom,sc8280xp-qmp-usb3-uni-phy.yaml
+index 325585bc881ba..212e56eb1bec2 100644
+--- a/Documentation/devicetree/bindings/phy/qcom,sc8280xp-qmp-usb3-uni-phy.yaml
++++ b/Documentation/devicetree/bindings/phy/qcom,sc8280xp-qmp-usb3-uni-phy.yaml
+@@ -20,7 +20,7 @@ properties:
+       - qcom,ipq8074-qmp-usb3-phy
+       - qcom,ipq9574-qmp-usb3-phy
+       - qcom,msm8996-qmp-usb3-phy
+-      - com,qdu1000-qmp-usb3-uni-phy
++      - qcom,qdu1000-qmp-usb3-uni-phy
+       - qcom,sa8775p-qmp-usb3-uni-phy
+       - qcom,sc8280xp-qmp-usb3-uni-phy
+       - qcom,sdm845-qmp-usb3-uni-phy
+diff --git a/Documentation/devicetree/bindings/thermal/thermal-zones.yaml b/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
+index 68398e7e86556..606b80965a44a 100644
+--- a/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
++++ b/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
+@@ -49,7 +49,10 @@ properties:
+       to take when the temperature crosses those thresholds.
+ 
+ patternProperties:
+-  "^[a-zA-Z][a-zA-Z0-9\\-]{1,12}-thermal$":
++  # Node name is limited in size due to Linux kernel requirements - 19
++  # characters in total (see THERMAL_NAME_LENGTH, including terminating NUL
++  # byte):
++  "^[a-zA-Z][a-zA-Z0-9\\-]{1,10}-thermal$":
+     type: object
+     description:
+       Each thermal zone node contains information about how frequently it
+diff --git a/Documentation/networking/xsk-tx-metadata.rst b/Documentation/networking/xsk-tx-metadata.rst
+index bd033fe95cca5..e76b0cfc32f7d 100644
+--- a/Documentation/networking/xsk-tx-metadata.rst
++++ b/Documentation/networking/xsk-tx-metadata.rst
+@@ -11,12 +11,16 @@ metadata on the receive side.
+ General Design
+ ==============
+ 
+-The headroom for the metadata is reserved via ``tx_metadata_len`` in
+-``struct xdp_umem_reg``. The metadata length is therefore the same for
+-every socket that shares the same umem. The metadata layout is a fixed UAPI,
+-refer to ``union xsk_tx_metadata`` in ``include/uapi/linux/if_xdp.h``.
+-Thus, generally, the ``tx_metadata_len`` field above should contain
+-``sizeof(union xsk_tx_metadata)``.
++The headroom for the metadata is reserved via ``tx_metadata_len`` and
++``XDP_UMEM_TX_METADATA_LEN`` flag in ``struct xdp_umem_reg``. The metadata
++length is therefore the same for every socket that shares the same umem.
++The metadata layout is a fixed UAPI, refer to ``union xsk_tx_metadata`` in
++``include/uapi/linux/if_xdp.h``. Thus, generally, the ``tx_metadata_len``
++field above should contain ``sizeof(union xsk_tx_metadata)``.
++
++Note that in the original implementation the ``XDP_UMEM_TX_METADATA_LEN``
++flag was not required. Applications might attempt to create a umem
++with a flag first and if it fails, do another attempt without a flag.
+ 
+ The headroom and the metadata itself should be located right before
+ ``xdp_desc->addr`` in the umem frame. Within a frame, the metadata
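+(Editor's aside: the AF_XDP documentation update above describes a user-visible change: TX metadata headroom now has to be requested with the XDP_UMEM_TX_METADATA_LEN flag, and applications are expected to retry without the flag on older kernels. A short userspace sketch of that probe-and-retry pattern, using only the UAPI names quoted in the text; the helper name and the fixed 4096-byte chunk size are illustrative, and uapi headers new enough to define the flag are assumed.)
+
+    #include <errno.h>
+    #include <stdint.h>
+    #include <sys/socket.h>
+    #include <linux/if_xdp.h>
+
+    /* Register a umem with TX metadata headroom on an AF_XDP socket fd. */
+    static int register_umem_with_tx_md(int xsk_fd, void *area, uint64_t size)
+    {
+            struct xdp_umem_reg reg = {
+                    .addr            = (uintptr_t)area,
+                    .len             = size,
+                    .chunk_size      = 4096,
+                    .headroom        = 0,
+                    .tx_metadata_len = sizeof(union xsk_tx_metadata),
+                    .flags           = XDP_UMEM_TX_METADATA_LEN,
+            };
+
+            if (!setsockopt(xsk_fd, SOL_XDP, XDP_UMEM_REG, &reg, sizeof(reg)))
+                    return 0;
+            if (errno != EINVAL)
+                    return -errno;
+            /* Older kernel that predates the flag: retry without it, as the
+             * documentation above suggests. */
+            reg.flags &= ~XDP_UMEM_TX_METADATA_LEN;
+            if (setsockopt(xsk_fd, SOL_XDP, XDP_UMEM_REG, &reg, sizeof(reg)))
+                    return -errno;
+            return 0;
+    }
+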
+diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
+index a71d91978d9ef..eec8df1dde06a 100644
+--- a/Documentation/virt/kvm/api.rst
++++ b/Documentation/virt/kvm/api.rst
+@@ -1403,6 +1403,12 @@ Instead, an abort (data abort if the cause of the page-table update
+ was a load or a store, instruction abort if it was an instruction
+ fetch) is injected in the guest.
+ 
++S390:
++^^^^^
++
++Returns -EINVAL if the VM has the KVM_VM_S390_UCONTROL flag set.
++Returns -EINVAL if called on a protected VM.
++
+ 4.36 KVM_SET_TSS_ADDR
+ ---------------------
+ 
+@@ -6273,6 +6279,12 @@ state.  At VM creation time, all memory is shared, i.e. the PRIVATE attribute
+ is '0' for all gfns.  Userspace can control whether memory is shared/private by
+ toggling KVM_MEMORY_ATTRIBUTE_PRIVATE via KVM_SET_MEMORY_ATTRIBUTES as needed.
+ 
++S390:
++^^^^^
++
++Returns -EINVAL if the VM has the KVM_VM_S390_UCONTROL flag set.
++Returns -EINVAL if called on a protected VM.
++
+ 4.141 KVM_SET_MEMORY_ATTRIBUTES
+ -------------------------------
+ 
+diff --git a/Makefile b/Makefile
+index 07e1aec72c171..c0af6d8aeb05f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/boot/dts/allwinner/Makefile b/arch/arm/boot/dts/allwinner/Makefile
+index 4247f19b1adc2..cd0d044882cf8 100644
+--- a/arch/arm/boot/dts/allwinner/Makefile
++++ b/arch/arm/boot/dts/allwinner/Makefile
+@@ -261,68 +261,6 @@ dtb-$(CONFIG_MACH_SUN8I) += \
+ 	sun8i-v3s-licheepi-zero.dtb \
+ 	sun8i-v3s-licheepi-zero-dock.dtb \
+ 	sun8i-v40-bananapi-m2-berry.dtb
+-dtb-$(CONFIG_MACH_SUN8I) += \
+-	sun8i-a23-evb.dtb \
+-	sun8i-a23-gt90h-v4.dtb \
+-	sun8i-a23-inet86dz.dtb \
+-	sun8i-a23-ippo-q8h-v5.dtb \
+-	sun8i-a23-ippo-q8h-v1.2.dtb \
+-	sun8i-a23-polaroid-mid2407pxe03.dtb \
+-	sun8i-a23-polaroid-mid2809pxe04.dtb \
+-	sun8i-a23-q8-tablet.dtb \
+-	sun8i-a33-et-q8-v1.6.dtb \
+-	sun8i-a33-ga10h-v1.1.dtb \
+-	sun8i-a33-inet-d978-rev2.dtb \
+-	sun8i-a33-ippo-q8h-v1.2.dtb \
+-	sun8i-a33-olinuxino.dtb \
+-	sun8i-a33-q8-tablet.dtb \
+-	sun8i-a33-sinlinx-sina33.dtb \
+-	sun8i-a83t-allwinner-h8homlet-v2.dtb \
+-	sun8i-a83t-bananapi-m3.dtb \
+-	sun8i-a83t-cubietruck-plus.dtb \
+-	sun8i-a83t-tbs-a711.dtb \
+-	sun8i-h2-plus-bananapi-m2-zero.dtb \
+-	sun8i-h2-plus-libretech-all-h3-cc.dtb \
+-	sun8i-h2-plus-orangepi-r1.dtb \
+-	sun8i-h2-plus-orangepi-zero.dtb \
+-	sun8i-h3-bananapi-m2-plus.dtb \
+-	sun8i-h3-bananapi-m2-plus-v1.2.dtb \
+-	sun8i-h3-beelink-x2.dtb \
+-	sun8i-h3-libretech-all-h3-cc.dtb \
+-	sun8i-h3-mapleboard-mp130.dtb \
+-	sun8i-h3-nanopi-duo2.dtb \
+-	sun8i-h3-nanopi-m1.dtb\
+-	\
+-	sun8i-h3-nanopi-m1-plus.dtb \
+-	sun8i-h3-nanopi-neo.dtb \
+-	sun8i-h3-nanopi-neo-air.dtb \
+-	sun8i-h3-nanopi-r1.dtb \
+-	sun8i-h3-orangepi-2.dtb \
+-	sun8i-h3-orangepi-lite.dtb \
+-	sun8i-h3-orangepi-one.dtb \
+-	sun8i-h3-orangepi-pc.dtb \
+-	sun8i-h3-orangepi-pc-plus.dtb \
+-	sun8i-h3-orangepi-plus.dtb \
+-	sun8i-h3-orangepi-plus2e.dtb \
+-	sun8i-h3-orangepi-zero-plus2.dtb \
+-	sun8i-h3-rervision-dvk.dtb \
+-	sun8i-h3-zeropi.dtb \
+-	sun8i-h3-emlid-neutis-n5h3-devboard.dtb \
+-	sun8i-r16-bananapi-m2m.dtb \
+-	sun8i-r16-nintendo-nes-classic.dtb \
+-	sun8i-r16-nintendo-super-nes-classic.dtb \
+-	sun8i-r16-parrot.dtb \
+-	sun8i-r40-bananapi-m2-ultra.dtb \
+-	sun8i-r40-oka40i-c.dtb \
+-	sun8i-s3-elimo-initium.dtb \
+-	sun8i-s3-lichee-zero-plus.dtb \
+-	sun8i-s3-pinecube.dtb \
+-	sun8i-t113s-mangopi-mq-r-t113.dtb \
+-	sun8i-t3-cqa3t-bv3.dtb \
+-	sun8i-v3-sl631-imx179.dtb \
+-	sun8i-v3s-licheepi-zero.dtb \
+-	sun8i-v3s-licheepi-zero-dock.dtb \
+-	sun8i-v40-bananapi-m2-berry.dtb
+ dtb-$(CONFIG_MACH_SUN9I) += \
+ 	sun9i-a80-optimus.dtb \
+ 	sun9i-a80-cubieboard4.dtb
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6q-kontron-samx6i.dtsi b/arch/arm/boot/dts/nxp/imx/imx6q-kontron-samx6i.dtsi
+index 4d6a0c3e8455f..ff062f4fd726e 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6q-kontron-samx6i.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6q-kontron-samx6i.dtsi
+@@ -5,31 +5,8 @@
+ 
+ #include "imx6q.dtsi"
+ #include "imx6qdl-kontron-samx6i.dtsi"
+-#include <dt-bindings/gpio/gpio.h>
+ 
+ / {
+ 	model = "Kontron SMARC sAMX6i Quad/Dual";
+ 	compatible = "kontron,imx6q-samx6i", "fsl,imx6q";
+ };
+-
+-/* Quad/Dual SoMs have 3 chip-select signals */
+-&ecspi4 {
+-	cs-gpios = <&gpio3 24 GPIO_ACTIVE_LOW>,
+-		   <&gpio3 29 GPIO_ACTIVE_LOW>,
+-		   <&gpio3 25 GPIO_ACTIVE_LOW>;
+-};
+-
+-&pinctrl_ecspi4 {
+-	fsl,pins = <
+-		MX6QDL_PAD_EIM_D21__ECSPI4_SCLK 0x100b1
+-		MX6QDL_PAD_EIM_D28__ECSPI4_MOSI 0x100b1
+-		MX6QDL_PAD_EIM_D22__ECSPI4_MISO 0x100b1
+-
+-		/* SPI4_IMX_CS2# - connected to internal flash */
+-		MX6QDL_PAD_EIM_D24__GPIO3_IO24 0x1b0b0
+-		/* SPI4_IMX_CS0# - connected to SMARC SPI0_CS0# */
+-		MX6QDL_PAD_EIM_D29__GPIO3_IO29 0x1b0b0
+-		/* SPI4_CS3# - connected to  SMARC SPI0_CS1# */
+-		MX6QDL_PAD_EIM_D25__GPIO3_IO25 0x1b0b0
+-	>;
+-};
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6qdl-kontron-samx6i.dtsi b/arch/arm/boot/dts/nxp/imx/imx6qdl-kontron-samx6i.dtsi
+index 85aeebc9485dd..668d33d1ff0c1 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6qdl-kontron-samx6i.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6qdl-kontron-samx6i.dtsi
+@@ -244,7 +244,8 @@ &ecspi4 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_ecspi4>;
+ 	cs-gpios = <&gpio3 24 GPIO_ACTIVE_LOW>,
+-		   <&gpio3 29 GPIO_ACTIVE_LOW>;
++		   <&gpio3 29 GPIO_ACTIVE_LOW>,
++		   <&gpio3 25 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ 
+ 	/* default boot source: workaround #1 for errata ERR006282 */
+@@ -259,7 +260,7 @@ smarc_flash: flash@0 {
+ &fec {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+-	phy-mode = "rgmii";
++	phy-connection-type = "rgmii-id";
+ 	phy-handle = <&ethphy>;
+ 
+ 	mdio {
+@@ -269,7 +270,7 @@ mdio {
+ 		ethphy: ethernet-phy@1 {
+ 			compatible = "ethernet-phy-ieee802.3-c22";
+ 			reg = <1>;
+-			reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
++			reset-gpios = <&gpio2 1 GPIO_ACTIVE_LOW>;
+ 			reset-assert-us = <1000>;
+ 		};
+ 	};
+@@ -464,6 +465,8 @@ MX6QDL_PAD_EIM_D22__ECSPI4_MISO 0x100b1
+ 			MX6QDL_PAD_EIM_D24__GPIO3_IO24 0x1b0b0
+ 			/* SPI_IMX_CS0# - connected to SMARC SPI0_CS0# */
+ 			MX6QDL_PAD_EIM_D29__GPIO3_IO29 0x1b0b0
++			/* SPI4_CS3# - connected to SMARC SPI0_CS1# */
++			MX6QDL_PAD_EIM_D25__GPIO3_IO25 0x1b0b0
+ 		>;
+ 	};
+ 
+@@ -516,7 +519,7 @@ MX6QDL_PAD_RGMII_RX_CTL__RGMII_RX_CTL 0x1b0b0
+ 			MX6QDL_PAD_ENET_MDIO__ENET_MDIO       0x1b0b0
+ 			MX6QDL_PAD_ENET_MDC__ENET_MDC         0x1b0b0
+ 			MX6QDL_PAD_ENET_REF_CLK__ENET_TX_CLK  0x1b0b0
+-			MX6QDL_PAD_ENET_CRS_DV__GPIO1_IO25    0x1b0b0 /* RST_GBE0_PHY# */
++			MX6QDL_PAD_NANDF_D1__GPIO2_IO01       0x1b0b0 /* RST_GBE0_PHY# */
+ 		>;
+ 	};
+ 
+@@ -729,7 +732,7 @@ &pcie {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_pcie>;
+ 	wake-up-gpio = <&gpio6 18 GPIO_ACTIVE_HIGH>;
+-	reset-gpio = <&gpio3 13 GPIO_ACTIVE_HIGH>;
++	reset-gpio = <&gpio3 13 GPIO_ACTIVE_LOW>;
+ };
+ 
+ /* LCD_BKLT_PWM */
+@@ -817,5 +820,6 @@ &wdog1 {
+ 	/* CPLD is feeded by watchdog (hardwired) */
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_wdog1>;
++	fsl,ext-reset-output;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/qcom/qcom-msm8226-microsoft-common.dtsi b/arch/arm/boot/dts/qcom/qcom-msm8226-microsoft-common.dtsi
+index 525d8c608b06f..8839b23fc6936 100644
+--- a/arch/arm/boot/dts/qcom/qcom-msm8226-microsoft-common.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-msm8226-microsoft-common.dtsi
+@@ -287,6 +287,10 @@ &sdhc_2 {
+ 	status = "okay";
+ };
+ 
++&smbb {
++	status = "okay";
++};
++
+ &usb {
+ 	extcon = <&smbb>;
+ 	dr_mode = "peripheral";
+diff --git a/arch/arm/boot/dts/st/stm32mp151.dtsi b/arch/arm/boot/dts/st/stm32mp151.dtsi
+index 90c5c72c87ab7..4f878ec102c1f 100644
+--- a/arch/arm/boot/dts/st/stm32mp151.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp151.dtsi
+@@ -50,6 +50,7 @@ timer {
+ 			     <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(1) | IRQ_TYPE_LEVEL_LOW)>,
+ 			     <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(1) | IRQ_TYPE_LEVEL_LOW)>;
+ 		interrupt-parent = <&intc>;
++		arm,no-tick-in-suspend;
+ 	};
+ 
+ 	clocks {
+diff --git a/arch/arm/mach-pxa/spitz.c b/arch/arm/mach-pxa/spitz.c
+index 3c5f5a3cb480c..10ab16dcd8276 100644
+--- a/arch/arm/mach-pxa/spitz.c
++++ b/arch/arm/mach-pxa/spitz.c
+@@ -520,10 +520,8 @@ static struct gpiod_lookup_table spitz_ads7846_gpio_table = {
+ static struct gpiod_lookup_table spitz_lcdcon_gpio_table = {
+ 	.dev_id = "spi2.1",
+ 	.table = {
+-		GPIO_LOOKUP("gpio-pxa", SPITZ_GPIO_BACKLIGHT_CONT,
+-			    "BL_CONT", GPIO_ACTIVE_LOW),
+-		GPIO_LOOKUP("gpio-pxa", SPITZ_GPIO_BACKLIGHT_ON,
+-			    "BL_ON", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("sharp-scoop.1", 6, "BL_CONT", GPIO_ACTIVE_LOW),
++		GPIO_LOOKUP("sharp-scoop.1", 7, "BL_ON", GPIO_ACTIVE_HIGH),
+ 		{ },
+ 	},
+ };
+@@ -531,10 +529,8 @@ static struct gpiod_lookup_table spitz_lcdcon_gpio_table = {
+ static struct gpiod_lookup_table akita_lcdcon_gpio_table = {
+ 	.dev_id = "spi2.1",
+ 	.table = {
+-		GPIO_LOOKUP("gpio-pxa", AKITA_GPIO_BACKLIGHT_CONT,
+-			    "BL_CONT", GPIO_ACTIVE_LOW),
+-		GPIO_LOOKUP("gpio-pxa", AKITA_GPIO_BACKLIGHT_ON,
+-			    "BL_ON", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("i2c-max7310", 3, "BL_ON", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("i2c-max7310", 4, "BL_CONT", GPIO_ACTIVE_LOW),
+ 		{ },
+ 	},
+ };
+@@ -964,12 +960,9 @@ static inline void spitz_i2c_init(void) {}
+ static struct gpiod_lookup_table spitz_audio_gpio_table = {
+ 	.dev_id = "spitz-audio",
+ 	.table = {
+-		GPIO_LOOKUP("sharp-scoop.0", SPITZ_GPIO_MUTE_L - SPITZ_SCP_GPIO_BASE,
+-			    "mute-l", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("sharp-scoop.0", SPITZ_GPIO_MUTE_R - SPITZ_SCP_GPIO_BASE,
+-			    "mute-r", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("sharp-scoop.1", SPITZ_GPIO_MIC_BIAS - SPITZ_SCP2_GPIO_BASE,
+-			    "mic", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("sharp-scoop.0", 3, "mute-l", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("sharp-scoop.0", 4, "mute-r", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("sharp-scoop.1", 8, "mic", GPIO_ACTIVE_HIGH),
+ 		{ },
+ 	},
+ };
+@@ -977,12 +970,9 @@ static struct gpiod_lookup_table spitz_audio_gpio_table = {
+ static struct gpiod_lookup_table akita_audio_gpio_table = {
+ 	.dev_id = "spitz-audio",
+ 	.table = {
+-		GPIO_LOOKUP("sharp-scoop.0", SPITZ_GPIO_MUTE_L - SPITZ_SCP_GPIO_BASE,
+-			    "mute-l", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("sharp-scoop.0", SPITZ_GPIO_MUTE_R - SPITZ_SCP_GPIO_BASE,
+-			    "mute-r", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("i2c-max7310", AKITA_GPIO_MIC_BIAS - AKITA_IOEXP_GPIO_BASE,
+-			    "mic", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("sharp-scoop.0", 3, "mute-l", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("sharp-scoop.0", 4, "mute-r", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("i2c-max7310", 2, "mic", GPIO_ACTIVE_HIGH),
+ 		{ },
+ 	},
+ };
+diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
+index 67c425341a951..ab01b51de5590 100644
+--- a/arch/arm/mm/fault.c
++++ b/arch/arm/mm/fault.c
+@@ -25,6 +25,8 @@
+ 
+ #include "fault.h"
+ 
++#ifdef CONFIG_MMU
++
+ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
+ {
+ 	unsigned long addr = (unsigned long)unsafe_src;
+@@ -32,8 +34,6 @@ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
+ 	return addr >= TASK_SIZE && ULONG_MAX - addr >= size;
+ }
+ 
+-#ifdef CONFIG_MMU
+-
+ /*
+  * This is useful to dump out the page tables associated with
+  * 'addr' in mm 'mm'.
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index b058ed78faf00..dbadbdb8f9310 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -215,6 +215,11 @@ hdmi_tx: hdmi-tx@0 {
+ 				#sound-dai-cells = <0>;
+ 				status = "disabled";
+ 
++				assigned-clocks = <&clkc CLKID_HDMI_SEL>,
++						  <&clkc CLKID_HDMI>;
++				assigned-clock-parents = <&xtal>, <0>;
++				assigned-clock-rates = <0>, <24000000>;
++
+ 				/* VPU VENC Input */
+ 				hdmi_tx_venc_port: port@0 {
+ 					reg = <0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12.dtsi
+index e732df3f3114d..664912d1beaab 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12.dtsi
+@@ -363,6 +363,10 @@ &ethmac {
+ 	power-domains = <&pwrc PWRC_G12A_ETH_ID>;
+ };
+ 
++&hdmi_tx {
++	power-domains = <&pwrc PWRC_G12A_VPU_ID>;
++};
++
+ &vpu {
+ 	power-domains = <&pwrc PWRC_G12A_VPU_ID>;
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
+index 12ef6e81c8bd6..ed00e67e6923a 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
+@@ -311,10 +311,16 @@ &hdmi_tx {
+ 		 <&reset RESET_HDMI_SYSTEM_RESET>,
+ 		 <&reset RESET_HDMI_TX>;
+ 	reset-names = "hdmitx_apb", "hdmitx", "hdmitx_phy";
+-	clocks = <&clkc CLKID_HDMI_PCLK>,
+-		 <&clkc CLKID_CLK81>,
++	clocks = <&clkc CLKID_HDMI>,
++		 <&clkc CLKID_HDMI_PCLK>,
+ 		 <&clkc CLKID_GCLK_VENCI_INT0>;
+ 	clock-names = "isfr", "iahb", "venci";
++	power-domains = <&pwrc PWRC_GXBB_VPU_ID>;
++
++	assigned-clocks = <&clkc CLKID_HDMI_SEL>,
++			  <&clkc CLKID_HDMI>;
++	assigned-clock-parents = <&xtal>, <0>;
++	assigned-clock-rates = <0>, <24000000>;
+ };
+ 
+ &sysctrl {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+index 17bcfa4702e17..f58d1790de1cb 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+@@ -323,10 +323,16 @@ &hdmi_tx {
+ 		 <&reset RESET_HDMI_SYSTEM_RESET>,
+ 		 <&reset RESET_HDMI_TX>;
+ 	reset-names = "hdmitx_apb", "hdmitx", "hdmitx_phy";
+-	clocks = <&clkc CLKID_HDMI_PCLK>,
+-		 <&clkc CLKID_CLK81>,
++	clocks = <&clkc CLKID_HDMI>,
++		 <&clkc CLKID_HDMI_PCLK>,
+ 		 <&clkc CLKID_GCLK_VENCI_INT0>;
+ 	clock-names = "isfr", "iahb", "venci";
++	power-domains = <&pwrc PWRC_GXBB_VPU_ID>;
++
++	assigned-clocks = <&clkc CLKID_HDMI_SEL>,
++			  <&clkc CLKID_HDMI>;
++	assigned-clock-parents = <&xtal>, <0>;
++	assigned-clock-rates = <0>, <24000000>;
+ };
+ 
+ &sysctrl {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi b/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi
+index 643f94d9d08e1..13e742ba00bea 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi
+@@ -339,7 +339,7 @@ tdmin_lb: audio-controller@3c0 {
+ 		};
+ 
+ 		spdifin: audio-controller@400 {
+-			compatible = "amlogic,g12a-spdifin",
++			compatible = "amlogic,sm1-spdifin",
+ 				     "amlogic,axg-spdifin";
+ 			reg = <0x0 0x400 0x0 0x30>;
+ 			#sound-dai-cells = <0>;
+@@ -353,7 +353,7 @@ spdifin: audio-controller@400 {
+ 		};
+ 
+ 		spdifout_a: audio-controller@480 {
+-			compatible = "amlogic,g12a-spdifout",
++			compatible = "amlogic,sm1-spdifout",
+ 				     "amlogic,axg-spdifout";
+ 			reg = <0x0 0x480 0x0 0x50>;
+ 			#sound-dai-cells = <0>;
+@@ -518,6 +518,10 @@ &gpio_intc {
+ 		     "amlogic,meson-gpio-intc";
+ };
+ 
++&hdmi_tx {
++	power-domains = <&pwrc PWRC_SM1_VPU_ID>;
++};
++
+ &pcie {
+ 	power-domains = <&pwrc PWRC_SM1_PCIE_ID>;
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index b92abb5a5c536..ee0c864f27e89 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -789,6 +789,23 @@ pgc_usb2_phy: power-domain@3 {
+ 						reg = <IMX8MP_POWER_DOMAIN_USB2_PHY>;
+ 					};
+ 
++					pgc_mlmix: power-domain@4 {
++						#power-domain-cells = <0>;
++						reg = <IMX8MP_POWER_DOMAIN_MLMIX>;
++						clocks = <&clk IMX8MP_CLK_ML_AXI>,
++							 <&clk IMX8MP_CLK_ML_AHB>,
++							 <&clk IMX8MP_CLK_NPU_ROOT>;
++						assigned-clocks = <&clk IMX8MP_CLK_ML_CORE>,
++								  <&clk IMX8MP_CLK_ML_AXI>,
++								  <&clk IMX8MP_CLK_ML_AHB>;
++						assigned-clock-parents = <&clk IMX8MP_SYS_PLL1_800M>,
++									 <&clk IMX8MP_SYS_PLL1_800M>,
++									 <&clk IMX8MP_SYS_PLL1_800M>;
++						assigned-clock-rates = <800000000>,
++								       <800000000>,
++								       <300000000>;
++					};
++
+ 					pgc_audio: power-domain@5 {
+ 						#power-domain-cells = <0>;
+ 						reg = <IMX8MP_POWER_DOMAIN_AUDIOMIX>;
+@@ -821,6 +838,12 @@ pgc_gpumix: power-domain@7 {
+ 						assigned-clock-rates = <800000000>, <400000000>;
+ 					};
+ 
++					pgc_vpumix: power-domain@8 {
++						#power-domain-cells = <0>;
++						reg = <IMX8MP_POWER_DOMAIN_VPUMIX>;
++						clocks = <&clk IMX8MP_CLK_VPU_ROOT>;
++					};
++
+ 					pgc_gpu3d: power-domain@9 {
+ 						#power-domain-cells = <0>;
+ 						reg = <IMX8MP_POWER_DOMAIN_GPU3D>;
+@@ -836,6 +859,28 @@ pgc_mediamix: power-domain@10 {
+ 							 <&clk IMX8MP_CLK_MEDIA_APB_ROOT>;
+ 					};
+ 
++					pgc_vpu_g1: power-domain@11 {
++						#power-domain-cells = <0>;
++						power-domains = <&pgc_vpumix>;
++						reg = <IMX8MP_POWER_DOMAIN_VPU_G1>;
++						clocks = <&clk IMX8MP_CLK_VPU_G1_ROOT>;
++					};
++
++					pgc_vpu_g2: power-domain@12 {
++						#power-domain-cells = <0>;
++						power-domains = <&pgc_vpumix>;
++						reg = <IMX8MP_POWER_DOMAIN_VPU_G2>;
++						clocks = <&clk IMX8MP_CLK_VPU_G2_ROOT>;
++
++					};
++
++					pgc_vpu_vc8000e: power-domain@13 {
++						#power-domain-cells = <0>;
++						power-domains = <&pgc_vpumix>;
++						reg = <IMX8MP_POWER_DOMAIN_VPU_VC8000E>;
++						clocks = <&clk IMX8MP_CLK_VPU_VC8KE_ROOT>;
++					};
++
+ 					pgc_hdmimix: power-domain@14 {
+ 						#power-domain-cells = <0>;
+ 						reg = <IMX8MP_POWER_DOMAIN_HDMIMIX>;
+@@ -873,50 +918,6 @@ pgc_ispdwp: power-domain@18 {
+ 						reg = <IMX8MP_POWER_DOMAIN_MEDIAMIX_ISPDWP>;
+ 						clocks = <&clk IMX8MP_CLK_MEDIA_ISP_ROOT>;
+ 					};
+-
+-					pgc_vpumix: power-domain@19 {
+-						#power-domain-cells = <0>;
+-						reg = <IMX8MP_POWER_DOMAIN_VPUMIX>;
+-						clocks = <&clk IMX8MP_CLK_VPU_ROOT>;
+-					};
+-
+-					pgc_vpu_g1: power-domain@20 {
+-						#power-domain-cells = <0>;
+-						power-domains = <&pgc_vpumix>;
+-						reg = <IMX8MP_POWER_DOMAIN_VPU_G1>;
+-						clocks = <&clk IMX8MP_CLK_VPU_G1_ROOT>;
+-					};
+-
+-					pgc_vpu_g2: power-domain@21 {
+-						#power-domain-cells = <0>;
+-						power-domains = <&pgc_vpumix>;
+-						reg = <IMX8MP_POWER_DOMAIN_VPU_G2>;
+-						clocks = <&clk IMX8MP_CLK_VPU_G2_ROOT>;
+-					};
+-
+-					pgc_vpu_vc8000e: power-domain@22 {
+-						#power-domain-cells = <0>;
+-						power-domains = <&pgc_vpumix>;
+-						reg = <IMX8MP_POWER_DOMAIN_VPU_VC8000E>;
+-						clocks = <&clk IMX8MP_CLK_VPU_VC8KE_ROOT>;
+-					};
+-
+-					pgc_mlmix: power-domain@24 {
+-						#power-domain-cells = <0>;
+-						reg = <IMX8MP_POWER_DOMAIN_MLMIX>;
+-						clocks = <&clk IMX8MP_CLK_ML_AXI>,
+-							 <&clk IMX8MP_CLK_ML_AHB>,
+-							 <&clk IMX8MP_CLK_NPU_ROOT>;
+-						assigned-clocks = <&clk IMX8MP_CLK_ML_CORE>,
+-								  <&clk IMX8MP_CLK_ML_AXI>,
+-								  <&clk IMX8MP_CLK_ML_AHB>;
+-						assigned-clock-parents = <&clk IMX8MP_SYS_PLL1_800M>,
+-									 <&clk IMX8MP_SYS_PLL1_800M>,
+-									 <&clk IMX8MP_SYS_PLL1_800M>;
+-						assigned-clock-rates = <800000000>,
+-								       <800000000>,
+-								       <300000000>;
+-					};
+ 				};
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+index 224bb289660c0..2791de5b28f6a 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+@@ -329,8 +329,8 @@ asm_sel {
+ 	/* eMMC is shared pin with parallel NAND */
+ 	emmc_pins_default: emmc-pins-default {
+ 		mux {
+-			function = "emmc", "emmc_rst";
+-			groups = "emmc";
++			function = "emmc";
++			groups = "emmc", "emmc_rst";
+ 		};
+ 
+ 		/* "NDL0","NDL1","NDL2","NDL3","NDL4","NDL5","NDL6","NDL7",
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
+index 41629769bdc85..8c3e2e2578bce 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
+@@ -268,8 +268,8 @@ &pio {
+ 	/* eMMC is shared pin with parallel NAND */
+ 	emmc_pins_default: emmc-pins-default {
+ 		mux {
+-			function = "emmc", "emmc_rst";
+-			groups = "emmc";
++			function = "emmc";
++			groups = "emmc", "emmc_rst";
+ 		};
+ 
+ 		/* "NDL0","NDL1","NDL2","NDL3","NDL4","NDL5","NDL6","NDL7",
+diff --git a/arch/arm64/boot/dts/mediatek/mt7981b.dtsi b/arch/arm64/boot/dts/mediatek/mt7981b.dtsi
+index 4feff3d1c5f4e..178e1e96c3a49 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7981b.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7981b.dtsi
+@@ -78,10 +78,10 @@ pwm@10048000 {
+ 			compatible = "mediatek,mt7981-pwm";
+ 			reg = <0 0x10048000 0 0x1000>;
+ 			clocks = <&infracfg CLK_INFRA_PWM_STA>,
+-				<&infracfg CLK_INFRA_PWM_HCK>,
+-				<&infracfg CLK_INFRA_PWM1_CK>,
+-				<&infracfg CLK_INFRA_PWM2_CK>,
+-				<&infracfg CLK_INFRA_PWM3_CK>;
++				 <&infracfg CLK_INFRA_PWM_HCK>,
++				 <&infracfg CLK_INFRA_PWM1_CK>,
++				 <&infracfg CLK_INFRA_PWM2_CK>,
++				 <&infracfg CLK_INFRA_PWM3_CK>;
+ 			clock-names = "top", "main", "pwm1", "pwm2", "pwm3";
+ 			#pwm-cells = <2>;
+ 		};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-audio-da7219.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-audio-da7219.dtsi
+index 8b57706ac8140..586eee79c73cf 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-audio-da7219.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-audio-da7219.dtsi
+@@ -27,7 +27,7 @@ da7219_aad {
+ 			dlg,btn-cfg = <50>;
+ 			dlg,mic-det-thr = <500>;
+ 			dlg,jack-ins-deb = <20>;
+-			dlg,jack-det-rate = "32ms_64ms";
++			dlg,jack-det-rate = "32_64";
+ 			dlg,jack-rem-deb = <1>;
+ 
+ 			dlg,a-d-btn-thr = <0xa>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-pico6.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-pico6.dts
+index 6a7ae616512d6..0d5a11c93c681 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-pico6.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-pico6.dts
+@@ -17,7 +17,7 @@ bt_wakeup: bt-wakeup {
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&bt_pins_wakeup>;
+ 
+-		wobt {
++		event-wobt {
+ 			label = "Wake on BT";
+ 			gpios = <&pio 42 GPIO_ACTIVE_HIGH>;
+ 			linux,code = <KEY_WAKEUP>;
+@@ -47,10 +47,8 @@ trackpad@2c {
+ 	};
+ };
+ 
+-&wifi_wakeup {
+-	wowlan {
+-		gpios = <&pio 113 GPIO_ACTIVE_LOW>;
+-	};
++&wifi_wakeup_event {
++	gpios = <&pio 113 GPIO_ACTIVE_LOW>;
+ };
+ 
+ &wifi_pwrseq {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+index 7592e3b860377..fa4ab4d2899f9 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+@@ -155,21 +155,24 @@ anx_bridge: anx7625@58 {
+ 		vdd18-supply = <&pp1800_mipibrdg>;
+ 		vdd33-supply = <&vddio_mipibrdg>;
+ 
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-		port@0 {
+-			reg = <0>;
++		ports {
++			#address-cells = <1>;
++			#size-cells = <0>;
+ 
+-			anx7625_in: endpoint {
+-				remote-endpoint = <&dsi_out>;
++			port@0 {
++				reg = <0>;
++
++				anx7625_in: endpoint {
++					remote-endpoint = <&dsi_out>;
++				};
+ 			};
+-		};
+ 
+-		port@1 {
+-			reg = <1>;
++			port@1 {
++				reg = <1>;
+ 
+-			anx7625_out: endpoint {
+-				remote-endpoint = <&panel_in>;
++				anx7625_out: endpoint {
++					remote-endpoint = <&panel_in>;
++				};
+ 			};
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+index 100191c6453ba..2fbd226bf142c 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+@@ -152,7 +152,7 @@ wifi_wakeup: wifi-wakeup {
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&wifi_pins_wakeup>;
+ 
+-		button-wowlan {
++		wifi_wakeup_event: event-wowlan {
+ 			label = "Wake on WiFi";
+ 			gpios = <&pio 113 GPIO_ACTIVE_HIGH>;
+ 			linux,code = <KEY_WAKEUP>;
+@@ -803,7 +803,6 @@ pins-tx {
+ 		};
+ 		pins-rts {
+ 			pinmux = <PINMUX_GPIO47__FUNC_URTS1>;
+-			output-enable;
+ 		};
+ 		pins-cts {
+ 			pinmux = <PINMUX_GPIO46__FUNC_UCTS1>;
+@@ -822,7 +821,6 @@ pins-tx {
+ 		};
+ 		pins-rts {
+ 			pinmux = <PINMUX_GPIO47__FUNC_URTS1>;
+-			output-enable;
+ 		};
+ 		pins-cts {
+ 			pinmux = <PINMUX_GPIO46__FUNC_UCTS1>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi b/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
+index 7a704246678f0..08d71ddf36683 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
+@@ -147,6 +147,7 @@ pp3300_mipibrdg: regulator-3v3-mipibrdg {
+ 		regulator-boot-on;
+ 		gpio = <&pio 127 GPIO_ACTIVE_HIGH>;
+ 		vin-supply = <&pp3300_g>;
++		off-on-delay-us = <500000>;
+ 	};
+ 
+ 	/* separately switched 3.3V power rail */
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192.dtsi b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+index 84cbdf6e9eb0c..47dea10dd3b8b 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+@@ -2234,7 +2234,7 @@ vpu1_crit: trip-crit {
+ 			};
+ 		};
+ 
+-		gpu0-thermal {
++		gpu-thermal {
+ 			polling-delay = <1000>;
+ 			polling-delay-passive = <250>;
+ 			thermal-sensors = <&lvts_ap MT8192_AP_GPU0>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+index 5d8b68f86ce44..2ee45752583c0 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+@@ -3880,7 +3880,7 @@ vpu1_crit: trip-crit {
+ 			};
+ 		};
+ 
+-		gpu0-thermal {
++		gpu-thermal {
+ 			polling-delay = <1000>;
+ 			polling-delay-passive = <250>;
+ 			thermal-sensors = <&lvts_ap MT8195_AP_GPU0>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts b/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
+index e5d9b671a4057..97634cc04e659 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
+@@ -528,7 +528,7 @@ i2c6_pins: i2c6-pins {
+ 		pins {
+ 			pinmux = <PINMUX_GPIO25__FUNC_SDA6>,
+ 				 <PINMUX_GPIO26__FUNC_SCL6>;
+-			bias-pull-up = <MTK_PULL_SET_RSEL_111>;
++			bias-disable;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8996-xiaomi-common.dtsi b/arch/arm64/boot/dts/qcom/msm8996-xiaomi-common.dtsi
+index 5ab583be9e0a0..0386636a29f05 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996-xiaomi-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996-xiaomi-common.dtsi
+@@ -405,7 +405,6 @@ &usb3_dwc3 {
+ 
+ &hsusb_phy1 {
+ 	status = "okay";
+-	extcon = <&typec>;
+ 
+ 	vdda-pll-supply = <&vreg_l12a_1p8>;
+ 	vdda-phy-dpdm-supply = <&vreg_l24a_3p075>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 6e7a4bb08b35c..0717605ac5a0e 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -2102,7 +2102,7 @@ ufshc: ufshc@624000 {
+ 				<&gcc GCC_UFS_RX_SYMBOL_0_CLK>;
+ 			freq-table-hz =
+ 				<100000000 200000000>,
+-				<0 0>,
++				<100000000 200000000>,
+ 				<0 0>,
+ 				<0 0>,
+ 				<0 0>,
+diff --git a/arch/arm64/boot/dts/qcom/msm8998.dtsi b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+index 2dbef4b526ab7..a88bff737d173 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+@@ -1590,7 +1590,6 @@ adreno_smmu: iommu@5040000 {
+ 			 * SoC VDDMX RPM Power Domain in the Adreno driver.
+ 			 */
+ 			power-domains = <&gpucc GPU_GX_GDSC>;
+-			status = "disabled";
+ 		};
+ 
+ 		gpucc: clock-controller@5065000 {
+diff --git a/arch/arm64/boot/dts/qcom/qcm6490-fairphone-fp5.dts b/arch/arm64/boot/dts/qcom/qcm6490-fairphone-fp5.dts
+index f3432701945f7..8cd2fe80dbb2c 100644
+--- a/arch/arm64/boot/dts/qcom/qcm6490-fairphone-fp5.dts
++++ b/arch/arm64/boot/dts/qcom/qcm6490-fairphone-fp5.dts
+@@ -864,7 +864,6 @@ sw_ctrl_default: sw-ctrl-default-state {
+ };
+ 
+ &uart5 {
+-	compatible = "qcom,geni-debug-uart";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/qcm6490-idp.dts b/arch/arm64/boot/dts/qcom/qcm6490-idp.dts
+index 47ca2d0003414..107302680f562 100644
+--- a/arch/arm64/boot/dts/qcom/qcm6490-idp.dts
++++ b/arch/arm64/boot/dts/qcom/qcm6490-idp.dts
+@@ -658,7 +658,6 @@ &tlmm {
+ };
+ 
+ &uart5 {
+-	compatible = "qcom,geni-debug-uart";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts b/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts
+index a085ff5b5fb21..7256b51eb08f9 100644
+--- a/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts
++++ b/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts
+@@ -632,7 +632,6 @@ &tlmm {
+ };
+ 
+ &uart5 {
+-	compatible = "qcom,geni-debug-uart";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/qdu1000.dtsi b/arch/arm64/boot/dts/qcom/qdu1000.dtsi
+index f90f03fa6a24f..1da40f4b4f8ac 100644
+--- a/arch/arm64/boot/dts/qcom/qdu1000.dtsi
++++ b/arch/arm64/boot/dts/qcom/qdu1000.dtsi
+@@ -1478,6 +1478,21 @@ system-cache-controller@19200000 {
+ 				    "llcc7_base",
+ 				    "llcc_broadcast_base";
+ 			interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
++
++			nvmem-cells = <&multi_chan_ddr>;
++			nvmem-cell-names = "multi-chan-ddr";
++		};
++
++		sec_qfprom: efuse@221c8000 {
++			compatible = "qcom,qdu1000-sec-qfprom", "qcom,sec-qfprom";
++			reg = <0 0x221c8000 0 0x1000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
++
++			multi_chan_ddr: multi-chan-ddr@12b {
++				reg = <0x12b 0x1>;
++				bits = <0 2>;
++			};
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+index cb8a62714a302..1888d99d398b1 100644
+--- a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
++++ b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+@@ -305,7 +305,7 @@ pmi632_ss_in: endpoint {
+ 
+ &pmi632_vbus {
+ 	regulator-min-microamp = <500000>;
+-	regulator-max-microamp = <3000000>;
++	regulator-max-microamp = <1000000>;
+ 	status = "okay";
+ };
+ 
+@@ -414,6 +414,8 @@ vreg_l9a_1p8: l9 {
+ 			regulator-min-microvolt = <1800000>;
+ 			regulator-max-microvolt = <1800000>;
+ 			regulator-allow-set-load;
++			regulator-always-on;
++			regulator-boot-on;
+ 		};
+ 
+ 		vreg_l10a_1p8: l10 {
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p.dtsi b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+index 1b3dc0ece54de..490e0369f5299 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+@@ -2504,6 +2504,7 @@ ethernet1: ethernet@23000000 {
+ 			phy-names = "serdes";
+ 
+ 			iommus = <&apps_smmu 0x140 0xf>;
++			dma-coherent;
+ 
+ 			snps,tso;
+ 			snps,pbl = <32>;
+@@ -2538,6 +2539,7 @@ ethernet0: ethernet@23040000 {
+ 			phy-names = "serdes";
+ 
+ 			iommus = <&apps_smmu 0x120 0xf>;
++			dma-coherent;
+ 
+ 			snps,tso;
+ 			snps,pbl = <32>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r1-kb.dts b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r1-kb.dts
+index 919bfaea6189c..340cb119d0a0d 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r1-kb.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r1-kb.dts
+@@ -12,6 +12,6 @@ / {
+ 	compatible = "google,lazor-rev1-sku2", "google,lazor-rev2-sku2", "qcom,sc7180";
+ };
+ 
+-&keyboard_backlight {
++&pwmleds {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r1-lte.dts b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r1-lte.dts
+index eb20157f6af98..d45e60e3eb9eb 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r1-lte.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r1-lte.dts
+@@ -17,6 +17,6 @@ &ap_sar_sensor_i2c {
+ 	status = "okay";
+ };
+ 
+-&keyboard_backlight {
++&pwmleds {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r10-kb.dts b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r10-kb.dts
+index 45d34718a1bce..e906ce877b8cd 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r10-kb.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r10-kb.dts
+@@ -18,6 +18,6 @@ / {
+ 	compatible = "google,lazor-sku2", "qcom,sc7180";
+ };
+ 
+-&keyboard_backlight {
++&pwmleds {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r10-lte.dts b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r10-lte.dts
+index 79028d0dd1b0c..4b9ee15b09f6b 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r10-lte.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r10-lte.dts
+@@ -22,6 +22,6 @@ &ap_sar_sensor_i2c {
+ 	status = "okay";
+ };
+ 
+-&keyboard_backlight {
++&pwmleds {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r3-kb.dts b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r3-kb.dts
+index 3459b81c56283..a960553f39946 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r3-kb.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r3-kb.dts
+@@ -21,6 +21,6 @@ / {
+ 		"qcom,sc7180";
+ };
+ 
+-&keyboard_backlight {
++&pwmleds {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r3-lte.dts b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r3-lte.dts
+index ff8f47da109d8..82bd9ed7e21a9 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r3-lte.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r3-lte.dts
+@@ -25,6 +25,6 @@ &ap_sar_sensor_i2c {
+ 	status = "okay";
+ };
+ 
+-&keyboard_backlight {
++&pwmleds {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r9-kb.dts b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r9-kb.dts
+index faf527972977a..6278c1715d3fd 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r9-kb.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r9-kb.dts
+@@ -18,6 +18,6 @@ / {
+ 	compatible = "google,lazor-rev9-sku2", "qcom,sc7180";
+ };
+ 
+-&keyboard_backlight {
++&pwmleds {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r9-lte.dts b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r9-lte.dts
+index d737fd0637fbc..0ec1697ae2c97 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r9-lte.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-r9-lte.dts
+@@ -22,6 +22,6 @@ &ap_sar_sensor_i2c {
+ 	status = "okay";
+ };
+ 
+-&keyboard_backlight {
++&pwmleds {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+index 8513be2971201..098a8b4c793e6 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+@@ -359,10 +359,11 @@ max98360a: audio-codec-0 {
+ 		#sound-dai-cells = <0>;
+ 	};
+ 
+-	pwmleds {
++	pwmleds: pwmleds {
+ 		compatible = "pwm-leds";
++		status = "disabled";
++
+ 		keyboard_backlight: led-0 {
+-			status = "disabled";
+ 			label = "cros_ec::kbd_backlight";
+ 			function = LED_FUNCTION_KBD_BACKLIGHT;
+ 			pwms = <&cros_ec_pwm 0>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index e9deffe3aaf6c..9ab0c98cac054 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -1582,8 +1582,7 @@ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
+ 		};
+ 
+ 		ufs_mem_phy: phy@1d87000 {
+-			compatible = "qcom,sc7180-qmp-ufs-phy",
+-				     "qcom,sm7150-qmp-ufs-phy";
++			compatible = "qcom,sc7180-qmp-ufs-phy";
+ 			reg = <0 0x01d87000 0 0x1000>;
+ 			clocks = <&rpmhcc RPMH_CXO_CLK>,
+ 				 <&gcc GCC_UFS_PHY_PHY_AUX_CLK>,
+diff --git a/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi b/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
+index a0059527d9e48..7370aa0dbf0e3 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
+@@ -495,7 +495,6 @@ wcd_tx: codec@0,3 {
+ };
+ 
+ &uart5 {
+-	compatible = "qcom,geni-debug-uart";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi b/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
+index f9b96bd2477ea..7d1d5bbbbbd95 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
+@@ -427,7 +427,6 @@ wcd_tx: codec@0,3 {
+ };
+ 
+ uart_dbg: &uart5 {
+-	compatible = "qcom,geni-debug-uart";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index 2f7780f629ac5..c4a05d7b7ce65 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -1440,12 +1440,12 @@ spi5: spi@994000 {
+ 			};
+ 
+ 			uart5: serial@994000 {
+-				compatible = "qcom,geni-uart";
++				compatible = "qcom,geni-debug-uart";
+ 				reg = <0 0x00994000 0 0x4000>;
+ 				clocks = <&gcc GCC_QUPV3_WRAP0_S5_CLK>;
+ 				clock-names = "se";
+ 				pinctrl-names = "default";
+-				pinctrl-0 = <&qup_uart5_cts>, <&qup_uart5_rts>, <&qup_uart5_tx>, <&qup_uart5_rx>;
++				pinctrl-0 = <&qup_uart5_tx>, <&qup_uart5_rx>;
+ 				interrupts = <GIC_SPI 606 IRQ_TYPE_LEVEL_HIGH>;
+ 				power-domains = <&rpmhpd SC7280_CX>;
+ 				operating-points-v2 = <&qup_opp_table>;
+@@ -5408,16 +5408,6 @@ qup_uart4_rx: qup-uart4-rx-state {
+ 				function = "qup04";
+ 			};
+ 
+-			qup_uart5_cts: qup-uart5-cts-state {
+-				pins = "gpio20";
+-				function = "qup05";
+-			};
+-
+-			qup_uart5_rts: qup-uart5-rts-state {
+-				pins = "gpio21";
+-				function = "qup05";
+-			};
+-
+ 			qup_uart5_tx: qup-uart5-tx-state {
+ 				pins = "gpio22";
+ 				function = "qup05";
+diff --git a/arch/arm64/boot/dts/qcom/sc8180x.dtsi b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+index 581a70c34fd29..da69577b6f09b 100644
+--- a/arch/arm64/boot/dts/qcom/sc8180x.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+@@ -1890,7 +1890,7 @@ pcie3: pcie@1c08000 {
+ 			power-domains = <&gcc PCIE_3_GDSC>;
+ 
+ 			interconnects = <&aggre2_noc MASTER_PCIE_3 0 &mc_virt SLAVE_EBI_CH0 0>,
+-					<&gem_noc MASTER_AMPSS_M0 0 &config_noc SLAVE_PCIE_0 0>;
++					<&gem_noc MASTER_AMPSS_M0 0 &config_noc SLAVE_PCIE_3 0>;
+ 			interconnect-names = "pcie-mem", "cpu-pcie";
+ 
+ 			phys = <&pcie3_phy>;
+@@ -2012,7 +2012,7 @@ pcie1: pcie@1c10000 {
+ 			power-domains = <&gcc PCIE_1_GDSC>;
+ 
+ 			interconnects = <&aggre2_noc MASTER_PCIE_1 0 &mc_virt SLAVE_EBI_CH0 0>,
+-					<&gem_noc MASTER_AMPSS_M0 0 &config_noc SLAVE_PCIE_0 0>;
++					<&gem_noc MASTER_AMPSS_M0 0 &config_noc SLAVE_PCIE_1 0>;
+ 			interconnect-names = "pcie-mem", "cpu-pcie";
+ 
+ 			phys = <&pcie1_phy>;
+@@ -2134,7 +2134,7 @@ pcie2: pcie@1c18000 {
+ 			power-domains = <&gcc PCIE_2_GDSC>;
+ 
+ 			interconnects = <&aggre2_noc MASTER_PCIE_2 0 &mc_virt SLAVE_EBI_CH0 0>,
+-					<&gem_noc MASTER_AMPSS_M0 0 &config_noc SLAVE_PCIE_0 0>;
++					<&gem_noc MASTER_AMPSS_M0 0 &config_noc SLAVE_PCIE_2 0>;
+ 			interconnect-names = "pcie-mem", "cpu-pcie";
+ 
+ 			phys = <&pcie2_phy>;
+@@ -2245,6 +2245,8 @@ ufs_mem_phy: phy-wrapper@1d87000 {
+ 			resets = <&ufs_mem_hc 0>;
+ 			reset-names = "ufsphy";
+ 
++			power-domains = <&gcc UFS_PHY_GDSC>;
++
+ 			#phy-cells = <0>;
+ 
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
+index 4bf99b6b6e5fb..6b759e67f4d3d 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
++++ b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
+@@ -299,7 +299,7 @@ linux,cma {
+ 	thermal-zones {
+ 		skin-temp-thermal {
+ 			polling-delay-passive = <250>;
+-			polling-delay = <0>;
++
+ 			thermal-sensors = <&pmk8280_adc_tm 5>;
+ 
+ 			trips {
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp-pmics.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp-pmics.dtsi
+index 945de77911de1..1e3babf2e40d8 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp-pmics.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8280xp-pmics.dtsi
+@@ -14,7 +14,7 @@ / {
+ 	thermal-zones {
+ 		pm8280_1_thermal: pm8280-1-thermal {
+ 			polling-delay-passive = <100>;
+-			polling-delay = <0>;
++
+ 			thermal-sensors = <&pm8280_1_temp_alarm>;
+ 
+ 			trips {
+@@ -34,7 +34,7 @@ trip1 {
+ 
+ 		pm8280_2_thermal: pm8280-2-thermal {
+ 			polling-delay-passive = <100>;
+-			polling-delay = <0>;
++
+ 			thermal-sensors = <&pm8280_2_temp_alarm>;
+ 
+ 			trips {
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+index 59f0a850671a3..b0b0ab7794466 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+@@ -5833,7 +5833,6 @@ sound: sound {
+ 	thermal-zones {
+ 		cpu0-thermal {
+ 			polling-delay-passive = <250>;
+-			polling-delay = <1000>;
+ 
+ 			thermal-sensors = <&tsens0 1>;
+ 
+@@ -5848,7 +5847,6 @@ cpu-crit {
+ 
+ 		cpu1-thermal {
+ 			polling-delay-passive = <250>;
+-			polling-delay = <1000>;
+ 
+ 			thermal-sensors = <&tsens0 2>;
+ 
+@@ -5863,7 +5861,6 @@ cpu-crit {
+ 
+ 		cpu2-thermal {
+ 			polling-delay-passive = <250>;
+-			polling-delay = <1000>;
+ 
+ 			thermal-sensors = <&tsens0 3>;
+ 
+@@ -5878,7 +5875,6 @@ cpu-crit {
+ 
+ 		cpu3-thermal {
+ 			polling-delay-passive = <250>;
+-			polling-delay = <1000>;
+ 
+ 			thermal-sensors = <&tsens0 4>;
+ 
+@@ -5893,7 +5889,6 @@ cpu-crit {
+ 
+ 		cpu4-thermal {
+ 			polling-delay-passive = <250>;
+-			polling-delay = <1000>;
+ 
+ 			thermal-sensors = <&tsens0 5>;
+ 
+@@ -5908,7 +5903,6 @@ cpu-crit {
+ 
+ 		cpu5-thermal {
+ 			polling-delay-passive = <250>;
+-			polling-delay = <1000>;
+ 
+ 			thermal-sensors = <&tsens0 6>;
+ 
+@@ -5923,7 +5917,6 @@ cpu-crit {
+ 
+ 		cpu6-thermal {
+ 			polling-delay-passive = <250>;
+-			polling-delay = <1000>;
+ 
+ 			thermal-sensors = <&tsens0 7>;
+ 
+@@ -5938,7 +5931,6 @@ cpu-crit {
+ 
+ 		cpu7-thermal {
+ 			polling-delay-passive = <250>;
+-			polling-delay = <1000>;
+ 
+ 			thermal-sensors = <&tsens0 8>;
+ 
+@@ -5953,7 +5945,6 @@ cpu-crit {
+ 
+ 		cluster0-thermal {
+ 			polling-delay-passive = <250>;
+-			polling-delay = <1000>;
+ 
+ 			thermal-sensors = <&tsens0 9>;
+ 
+@@ -5967,13 +5958,25 @@ cpu-crit {
+ 		};
+ 
+ 		gpu-thermal {
+-			polling-delay-passive = <0>;
+-			polling-delay = <0>;
++			polling-delay-passive = <250>;
+ 
+ 			thermal-sensors = <&tsens2 2>;
+ 
++			cooling-maps {
++				map0 {
++					trip = <&gpu_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++				};
++			};
++
+ 			trips {
+-				gpu-crit {
++				gpu_alert0: trip-point0 {
++					temperature = <85000>;
++					hysteresis = <1000>;
++					type = "passive";
++				};
++
++				trip-point1 {
+ 					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "critical";
+@@ -5983,7 +5986,6 @@ gpu-crit {
+ 
+ 		mem-thermal {
+ 			polling-delay-passive = <250>;
+-			polling-delay = <1000>;
+ 
+ 			thermal-sensors = <&tsens1 15>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index d817a7751086e..4ad82b0eb1139 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -2666,6 +2666,8 @@ ufs_mem_phy: phy@1d87000 {
+ 				      "ref_aux",
+ 				      "qref";
+ 
++			power-domains = <&gcc UFS_PHY_GDSC>;
++
+ 			resets = <&ufs_mem_hc 0>;
+ 			reset-names = "ufsphy";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index 47dc42f6e936c..8e30f8cc0916c 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -494,6 +494,7 @@ ecsh: hid@5c {
+ &ipa {
+ 	qcom,gsi-loader = "self";
+ 	memory-region = <&ipa_fw_mem>;
++	firmware-name = "qcom/sdm850/LENOVO/81JL/ipa_fws.elf";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm6115.dtsi b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+index b4ce5a322107e..8fa3bacfb2391 100644
+--- a/arch/arm64/boot/dts/qcom/sm6115.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+@@ -1231,6 +1231,8 @@ ufs_mem_phy: phy@4807000 {
+ 				      "ref_aux",
+ 				      "qref";
+ 
++			power-domains = <&gcc GCC_UFS_PHY_GDSC>;
++
+ 			resets = <&ufs_mem_hc 0>;
+ 			reset-names = "ufsphy";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+index 6c7eac0110ba1..60383f0d09f32 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+@@ -1197,6 +1197,8 @@ ufs_mem_phy: phy@1d87000 {
+ 				      "ref_aux",
+ 				      "qref";
+ 
++			power-domains = <&gcc UFS_PHY_GDSC>;
++
+ 			resets = <&ufs_mem_hc 0>;
+ 			reset-names = "ufsphy";
+ 
+@@ -1321,6 +1323,7 @@ fastrpc {
+ 					compatible = "qcom,fastrpc";
+ 					qcom,glink-channels = "fastrpcglink-apps-dsp";
+ 					label = "adsp";
++					qcom,non-secure-domain;
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+ 
+@@ -1580,6 +1583,7 @@ fastrpc {
+ 					compatible = "qcom,fastrpc";
+ 					qcom,glink-channels = "fastrpcglink-apps-dsp";
+ 					label = "cdsp";
++					qcom,non-secure-domain;
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 8ccade628f1f4..b2af44bc3b78c 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -2580,6 +2580,8 @@ ufs_mem_phy: phy@1d87000 {
+ 			resets = <&ufs_mem_hc 0>;
+ 			reset-names = "ufsphy";
+ 
++			power-domains = <&gcc UFS_PHY_GDSC>;
++
+ 			#phy-cells = <0>;
+ 
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index f7c4700f00c36..da936548c2ac3 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -1779,6 +1779,8 @@ ufs_mem_phy: phy@1d87000 {
+ 				      "ref_aux",
+ 				      "qref";
+ 
++			power-domains = <&gcc UFS_PHY_GDSC>;
++
+ 			resets = <&ufs_mem_hc 0>;
+ 			reset-names = "ufsphy";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index 616461fcbab99..59428d2ee1ad8 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -4429,6 +4429,8 @@ ufs_mem_phy: phy@1d87000 {
+ 				 <&gcc GCC_UFS_PHY_PHY_AUX_CLK>,
+ 				 <&gcc GCC_UFS_0_CLKREF_EN>;
+ 
++			power-domains = <&gcc UFS_PHY_GDSC>;
++
+ 			resets = <&ufs_mem_hc 0>;
+ 			reset-names = "ufsphy";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+index 7618ae1f8b1c9..b063dd28149e7 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+@@ -840,7 +840,7 @@ &uart21 {
+ };
+ 
+ &usb_1_ss0_hsphy {
+-	vdd-supply = <&vreg_l2e_0p8>;
++	vdd-supply = <&vreg_l3j_0p8>;
+ 	vdda12-supply = <&vreg_l2j_1p2>;
+ 
+ 	phys = <&smb2360_0_eusb2_repeater>;
+@@ -865,7 +865,7 @@ &usb_1_ss0_dwc3 {
+ };
+ 
+ &usb_1_ss1_hsphy {
+-	vdd-supply = <&vreg_l2e_0p8>;
++	vdd-supply = <&vreg_l3j_0p8>;
+ 	vdda12-supply = <&vreg_l2j_1p2>;
+ 
+ 	phys = <&smb2360_1_eusb2_repeater>;
+@@ -890,7 +890,7 @@ &usb_1_ss1_dwc3 {
+ };
+ 
+ &usb_1_ss2_hsphy {
+-	vdd-supply = <&vreg_l2e_0p8>;
++	vdd-supply = <&vreg_l3j_0p8>;
+ 	vdda12-supply = <&vreg_l2j_1p2>;
+ 
+ 	phys = <&smb2360_2_eusb2_repeater>;
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+index 5567636c8b27f..df3577fcd93c9 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+@@ -536,7 +536,7 @@ &uart21 {
+ };
+ 
+ &usb_1_ss0_hsphy {
+-	vdd-supply = <&vreg_l2e_0p8>;
++	vdd-supply = <&vreg_l3j_0p8>;
+ 	vdda12-supply = <&vreg_l2j_1p2>;
+ 
+ 	phys = <&smb2360_0_eusb2_repeater>;
+@@ -561,7 +561,7 @@ &usb_1_ss0_dwc3 {
+ };
+ 
+ &usb_1_ss1_hsphy {
+-	vdd-supply = <&vreg_l2e_0p8>;
++	vdd-supply = <&vreg_l3j_0p8>;
+ 	vdda12-supply = <&vreg_l2j_1p2>;
+ 
+ 	phys = <&smb2360_1_eusb2_repeater>;
+@@ -586,7 +586,7 @@ &usb_1_ss1_dwc3 {
+ };
+ 
+ &usb_1_ss2_hsphy {
+-	vdd-supply = <&vreg_l2e_0p8>;
++	vdd-supply = <&vreg_l3j_0p8>;
+ 	vdda12-supply = <&vreg_l2j_1p2>;
+ 
+ 	phys = <&smb2360_2_eusb2_repeater>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+index cfa70b441e329..d76347001cc13 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+@@ -2919,6 +2919,9 @@ timer {
+ 		interrupts-extended = <&gic GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 14 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 11 IRQ_TYPE_LEVEL_LOW>,
+-				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>;
++				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>,
++				      <&gic GIC_PPI 12 IRQ_TYPE_LEVEL_LOW>;
++		interrupt-names = "sec-phys", "phys", "virt", "hyp-phys",
++				  "hyp-virt";
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/renesas/r8a779f0.dtsi b/arch/arm64/boot/dts/renesas/r8a779f0.dtsi
+index 72cf30341fc4d..9629adb47d99f 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779f0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779f0.dtsi
+@@ -1324,7 +1324,10 @@ timer {
+ 		interrupts-extended = <&gic GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 14 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 11 IRQ_TYPE_LEVEL_LOW>,
+-				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>;
++				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>,
++				      <&gic GIC_PPI 12 IRQ_TYPE_LEVEL_LOW>;
++		interrupt-names = "sec-phys", "phys", "virt", "hyp-phys",
++				  "hyp-virt";
+ 	};
+ 
+ 	ufs30_clk: ufs30-clk {
+diff --git a/arch/arm64/boot/dts/renesas/r8a779g0.dtsi b/arch/arm64/boot/dts/renesas/r8a779g0.dtsi
+index 9bc542bc61690..873588a84e15f 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779g0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779g0.dtsi
+@@ -2359,6 +2359,9 @@ timer {
+ 		interrupts-extended = <&gic GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 14 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 11 IRQ_TYPE_LEVEL_LOW>,
+-				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>;
++				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>,
++				      <&gic GIC_PPI 12 IRQ_TYPE_LEVEL_LOW>;
++		interrupt-names = "sec-phys", "phys", "virt", "hyp-phys",
++				  "hyp-virt";
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/renesas/r8a779h0.dtsi b/arch/arm64/boot/dts/renesas/r8a779h0.dtsi
+index 6d791024cabe1..792afe1a45747 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779h0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779h0.dtsi
+@@ -16,7 +16,6 @@ / {
+ 
+ 	cluster0_opp: opp-table-0 {
+ 		compatible = "operating-points-v2";
+-		opp-shared;
+ 
+ 		opp-500000000 {
+ 			opp-hz = /bits/ 64 <500000000>;
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi b/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi
+index 165bfcfef3bcc..18ef297db9336 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi
+@@ -50,7 +50,10 @@ timer {
+ 		interrupts-extended = <&gic GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 14 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 11 IRQ_TYPE_LEVEL_LOW>,
+-				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>;
++				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>,
++				      <&gic GIC_PPI 12 IRQ_TYPE_LEVEL_LOW>;
++		interrupt-names = "sec-phys", "phys", "virt", "hyp-phys",
++				  "hyp-virt";
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g044.dtsi b/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
+index 88634ae432872..1a9891ba6c02c 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
+@@ -1334,6 +1334,9 @@ timer {
+ 		interrupts-extended = <&gic GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 14 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 11 IRQ_TYPE_LEVEL_LOW>,
+-				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>;
++				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>,
++				      <&gic GIC_PPI 12 IRQ_TYPE_LEVEL_LOW>;
++		interrupt-names = "sec-phys", "phys", "virt", "hyp-phys",
++				  "hyp-virt";
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g054.dtsi b/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
+index e89bfe4085f5d..a2318478a66ba 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
+@@ -1342,6 +1342,9 @@ timer {
+ 		interrupts-extended = <&gic GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 14 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 11 IRQ_TYPE_LEVEL_LOW>,
+-				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>;
++				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>,
++				      <&gic GIC_PPI 12 IRQ_TYPE_LEVEL_LOW>;
++		interrupt-names = "sec-phys", "phys", "virt", "hyp-phys",
++				  "hyp-virt";
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/renesas/r9a08g045.dtsi b/arch/arm64/boot/dts/renesas/r9a08g045.dtsi
+index f5f3f4f4c8d67..a2adc4e27ce97 100644
+--- a/arch/arm64/boot/dts/renesas/r9a08g045.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a08g045.dtsi
+@@ -294,6 +294,9 @@ timer {
+ 		interrupts-extended = <&gic GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 14 IRQ_TYPE_LEVEL_LOW>,
+ 				      <&gic GIC_PPI 11 IRQ_TYPE_LEVEL_LOW>,
+-				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>;
++				      <&gic GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>,
++				      <&gic GIC_PPI 12 IRQ_TYPE_LEVEL_LOW>;
++		interrupt-names = "sec-phys", "phys", "virt", "hyp-phys",
++				  "hyp-virt";
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3308-rock-pi-s.dts b/arch/arm64/boot/dts/rockchip/rk3308-rock-pi-s.dts
+index 079101cddd65f..f1d4118ffb7d6 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3308-rock-pi-s.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3308-rock-pi-s.dts
+@@ -17,6 +17,7 @@ aliases {
+ 		ethernet0 = &gmac;
+ 		mmc0 = &emmc;
+ 		mmc1 = &sdmmc;
++		mmc2 = &sdio;
+ 	};
+ 
+ 	chosen {
+@@ -144,11 +145,25 @@ &emmc {
+ 
+ &gmac {
+ 	clock_in_out = "output";
++	phy-handle = <&rtl8201f>;
+ 	phy-supply = <&vcc_io>;
+-	snps,reset-gpio = <&gpio0 RK_PA7 GPIO_ACTIVE_LOW>;
+-	snps,reset-active-low;
+-	snps,reset-delays-us = <0 50000 50000>;
+ 	status = "okay";
++
++	mdio {
++		compatible = "snps,dwmac-mdio";
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		rtl8201f: ethernet-phy@1 {
++			compatible = "ethernet-phy-ieee802.3-c22";
++			reg = <1>;
++			pinctrl-names = "default";
++			pinctrl-0 = <&mac_rst>;
++			reset-assert-us = <20000>;
++			reset-deassert-us = <50000>;
++			reset-gpios = <&gpio0 RK_PA7 GPIO_ACTIVE_LOW>;
++		};
++	};
+ };
+ 
+ &gpio0 {
+@@ -221,6 +236,26 @@ &pinctrl {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&rtc_32k>;
+ 
++	bluetooth {
++		bt_reg_on: bt-reg-on {
++			rockchip,pins = <4 RK_PB3 RK_FUNC_GPIO &pcfg_pull_none>;
++		};
++
++		bt_wake_host: bt-wake-host {
++			rockchip,pins = <4 RK_PB4 RK_FUNC_GPIO &pcfg_pull_down>;
++		};
++
++		host_wake_bt: host-wake-bt {
++			rockchip,pins = <4 RK_PB2 RK_FUNC_GPIO &pcfg_pull_none>;
++		};
++	};
++
++	gmac {
++		mac_rst: mac-rst {
++			rockchip,pins = <0 RK_PA7 RK_FUNC_GPIO &pcfg_pull_none>;
++		};
++	};
++
+ 	leds {
+ 		green_led: green-led {
+ 			rockchip,pins = <0 RK_PA6 RK_FUNC_GPIO &pcfg_pull_none>;
+@@ -264,15 +299,31 @@ &sdio {
+ 	cap-sd-highspeed;
+ 	cap-sdio-irq;
+ 	keep-power-in-suspend;
+-	max-frequency = <1000000>;
++	max-frequency = <100000000>;
+ 	mmc-pwrseq = <&sdio_pwrseq>;
++	no-mmc;
++	no-sd;
+ 	non-removable;
+-	sd-uhs-sdr104;
++	sd-uhs-sdr50;
++	vmmc-supply = <&vcc_io>;
++	vqmmc-supply = <&vcc_1v8>;
+ 	status = "okay";
++
++	rtl8723ds: wifi@1 {
++		reg = <1>;
++		interrupt-parent = <&gpio0>;
++		interrupts = <RK_PA0 IRQ_TYPE_LEVEL_HIGH>;
++		interrupt-names = "host-wake";
++		pinctrl-names = "default";
++		pinctrl-0 = <&wifi_host_wake>;
++	};
+ };
+ 
+ &sdmmc {
++	cap-mmc-highspeed;
+ 	cap-sd-highspeed;
++	disable-wp;
++	vmmc-supply = <&vcc_io>;
+ 	status = "okay";
+ };
+ 
+@@ -291,16 +342,22 @@ u2phy_otg: otg-port {
+ };
+ 
+ &uart0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&uart0_xfer>;
+ 	status = "okay";
+ };
+ 
+ &uart4 {
++	uart-has-rtscts;
+ 	status = "okay";
+ 
+ 	bluetooth {
+-		compatible = "realtek,rtl8723bs-bt";
+-		device-wake-gpios = <&gpio4 RK_PB3 GPIO_ACTIVE_HIGH>;
++		compatible = "realtek,rtl8723ds-bt";
++		device-wake-gpios = <&gpio4 RK_PB2 GPIO_ACTIVE_HIGH>;
++		enable-gpios = <&gpio4 RK_PB3 GPIO_ACTIVE_HIGH>;
+ 		host-wake-gpios = <&gpio4 RK_PB4 GPIO_ACTIVE_HIGH>;
++		pinctrl-names = "default";
++		pinctrl-0 = <&bt_reg_on &bt_wake_host &host_wake_bt>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index 07dcc949b8997..b01efd6d042c8 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -850,8 +850,8 @@ cru: clock-controller@ff440000 {
+ 			<0>, <24000000>,
+ 			<24000000>, <24000000>,
+ 			<15000000>, <15000000>,
+-			<100000000>, <100000000>,
+-			<100000000>, <100000000>,
++			<300000000>, <100000000>,
++			<400000000>, <100000000>,
+ 			<50000000>, <100000000>,
+ 			<100000000>, <100000000>,
+ 			<50000000>, <50000000>,
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-roc-pc.dts b/arch/arm64/boot/dts/rockchip/rk3566-roc-pc.dts
+index 63eea27293fe9..67e7801bd4896 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-roc-pc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-roc-pc.dts
+@@ -269,7 +269,7 @@ rk809: pmic@20 {
+ 		vcc9-supply = <&vcc3v3_sys>;
+ 
+ 		codec {
+-			mic-in-differential;
++			rockchip,mic-in-differential;
+ 		};
+ 
+ 		regulators {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-evb1-v10.dts b/arch/arm64/boot/dts/rockchip/rk3568-evb1-v10.dts
+index 19f8fc369b130..8c3ab07d38079 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-evb1-v10.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-evb1-v10.dts
+@@ -475,7 +475,7 @@ regulator-state-mem {
+ 		};
+ 
+ 		codec {
+-			mic-in-differential;
++			rockchip,mic-in-differential;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-fastrhino-r66s.dts b/arch/arm64/boot/dts/rockchip/rk3568-fastrhino-r66s.dts
+index 58ab7e9971dbc..b5e67990dd0f8 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-fastrhino-r66s.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-fastrhino-r66s.dts
+@@ -11,6 +11,10 @@ aliases {
+ 	};
+ };
+ 
++&pmu_io_domains {
++	vccio3-supply = <&vccio_sd>;
++};
++
+ &sdmmc0 {
+ 	bus-width = <4>;
+ 	cap-mmc-highspeed;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-fastrhino-r66s.dtsi b/arch/arm64/boot/dts/rockchip/rk3568-fastrhino-r66s.dtsi
+index 89e84e3a92629..25c49bdbadbcb 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-fastrhino-r66s.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3568-fastrhino-r66s.dtsi
+@@ -39,9 +39,9 @@ status_led: led-status {
+ 		};
+ 	};
+ 
+-	dc_12v: dc-12v-regulator {
++	vcc12v_dcin: vcc12v-dcin-regulator {
+ 		compatible = "regulator-fixed";
+-		regulator-name = "dc_12v";
++		regulator-name = "vcc12v_dcin";
+ 		regulator-always-on;
+ 		regulator-boot-on;
+ 		regulator-min-microvolt = <12000000>;
+@@ -65,7 +65,7 @@ vcc3v3_sys: vcc3v3-sys-regulator {
+ 		regulator-boot-on;
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
+-		vin-supply = <&dc_12v>;
++		vin-supply = <&vcc12v_dcin>;
+ 	};
+ 
+ 	vcc5v0_sys: vcc5v0-sys-regulator {
+@@ -75,16 +75,7 @@ vcc5v0_sys: vcc5v0-sys-regulator {
+ 		regulator-boot-on;
+ 		regulator-min-microvolt = <5000000>;
+ 		regulator-max-microvolt = <5000000>;
+-		vin-supply = <&dc_12v>;
+-	};
+-
+-	vcc5v0_usb_host: vcc5v0-usb-host-regulator {
+-		compatible = "regulator-fixed";
+-		regulator-name = "vcc5v0_usb_host";
+-		regulator-always-on;
+-		regulator-boot-on;
+-		regulator-min-microvolt = <5000000>;
+-		regulator-max-microvolt = <5000000>;
++		vin-supply = <&vcc12v_dcin>;
+ 	};
+ 
+ 	vcc5v0_usb_otg: vcc5v0-usb-otg-regulator {
+@@ -94,8 +85,9 @@ vcc5v0_usb_otg: vcc5v0-usb-otg-regulator {
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&vcc5v0_usb_otg_en>;
+ 		regulator-name = "vcc5v0_usb_otg";
+-		regulator-always-on;
+-		regulator-boot-on;
++		regulator-min-microvolt = <5000000>;
++		regulator-max-microvolt = <5000000>;
++		vin-supply = <&vcc5v0_sys>;
+ 	};
+ };
+ 
+@@ -123,6 +115,10 @@ &cpu3 {
+ 	cpu-supply = <&vdd_cpu>;
+ };
+ 
++&display_subsystem {
++	status = "disabled";
++};
++
+ &gpu {
+ 	mali-supply = <&vdd_gpu>;
+ 	status = "okay";
+@@ -405,8 +401,8 @@ vcc5v0_usb_otg_en: vcc5v0-usb-otg-en {
+ &pmu_io_domains {
+ 	pmuio1-supply = <&vcc3v3_pmu>;
+ 	pmuio2-supply = <&vcc3v3_pmu>;
+-	vccio1-supply = <&vccio_acodec>;
+-	vccio3-supply = <&vccio_sd>;
++	vccio1-supply = <&vcc_3v3>;
++	vccio2-supply = <&vcc_1v8>;
+ 	vccio4-supply = <&vcc_1v8>;
+ 	vccio5-supply = <&vcc_3v3>;
+ 	vccio6-supply = <&vcc_1v8>;
+@@ -429,28 +425,12 @@ &uart2 {
+ 	status = "okay";
+ };
+ 
+-&usb_host0_ehci {
+-	status = "okay";
+-};
+-
+-&usb_host0_ohci {
+-	status = "okay";
+-};
+-
+ &usb_host0_xhci {
+ 	dr_mode = "host";
+ 	extcon = <&usb2phy0>;
+ 	status = "okay";
+ };
+ 
+-&usb_host1_ehci {
+-	status = "okay";
+-};
+-
+-&usb_host1_ohci {
+-	status = "okay";
+-};
+-
+ &usb_host1_xhci {
+ 	status = "okay";
+ };
+@@ -460,7 +440,7 @@ &usb2phy0 {
+ };
+ 
+ &usb2phy0_host {
+-	phy-supply = <&vcc5v0_usb_host>;
++	phy-supply = <&vcc5v0_sys>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-fastrhino-r68s.dts b/arch/arm64/boot/dts/rockchip/rk3568-fastrhino-r68s.dts
+index e1fe5e442689a..ce2a5e1ccefc3 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-fastrhino-r68s.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-fastrhino-r68s.dts
+@@ -39,7 +39,7 @@ &gmac0_tx_bus2
+ 		     &gmac0_rx_bus2
+ 		     &gmac0_rgmii_clk
+ 		     &gmac0_rgmii_bus>;
+-	snps,reset-gpio = <&gpio0 RK_PB0 GPIO_ACTIVE_LOW>;
++	snps,reset-gpio = <&gpio1 RK_PB0 GPIO_ACTIVE_LOW>;
+ 	snps,reset-active-low;
+ 	/* Reset time is 15ms, 50ms for rtl8211f */
+ 	snps,reset-delays-us = <0 15000 50000>;
+@@ -61,7 +61,7 @@ &gmac1m1_tx_bus2
+ 		     &gmac1m1_rx_bus2
+ 		     &gmac1m1_rgmii_clk
+ 		     &gmac1m1_rgmii_bus>;
+-	snps,reset-gpio = <&gpio0 RK_PB1 GPIO_ACTIVE_LOW>;
++	snps,reset-gpio = <&gpio1 RK_PB1 GPIO_ACTIVE_LOW>;
+ 	snps,reset-active-low;
+ 	/* Reset time is 15ms, 50ms for rtl8211f */
+ 	snps,reset-delays-us = <0 15000 50000>;
+@@ -71,18 +71,18 @@ &gmac1m1_rgmii_clk
+ };
+ 
+ &mdio0 {
+-	rgmii_phy0: ethernet-phy@0 {
++	rgmii_phy0: ethernet-phy@1 {
+ 		compatible = "ethernet-phy-ieee802.3-c22";
+-		reg = <0>;
++		reg = <0x1>;
+ 		pinctrl-0 = <&eth_phy0_reset_pin>;
+ 		pinctrl-names = "default";
+ 	};
+ };
+ 
+ &mdio1 {
+-	rgmii_phy1: ethernet-phy@0 {
++	rgmii_phy1: ethernet-phy@1 {
+ 		compatible = "ethernet-phy-ieee802.3-c22";
+-		reg = <0>;
++		reg = <0x1>;
+ 		pinctrl-0 = <&eth_phy1_reset_pin>;
+ 		pinctrl-names = "default";
+ 	};
+@@ -102,6 +102,10 @@ eth_phy1_reset_pin: eth-phy1-reset-pin {
+ 	};
+ };
+ 
++&pmu_io_domains {
++	vccio3-supply = <&vcc_3v3>;
++};
++
+ &sdhci {
+ 	bus-width = <8>;
+ 	max-frequency = <200000000>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts b/arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts
+index ebdedea15ad16..59f1403b4fa56 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts
+@@ -531,10 +531,6 @@ regulator-state-mem {
+ 				};
+ 			};
+ 		};
+-
+-		codec {
+-			mic-in-differential;
+-		};
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk356x.dtsi b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+index d8543b5557ee7..3e2a8bfcafeaa 100644
+--- a/arch/arm64/boot/dts/rockchip/rk356x.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+@@ -790,6 +790,7 @@ vop_mmu: iommu@fe043e00 {
+ 		clocks = <&cru ACLK_VOP>, <&cru HCLK_VOP>;
+ 		clock-names = "aclk", "iface";
+ 		#iommu-cells = <0>;
++		power-domains = <&power RK3568_PD_VO>;
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+index 448a59dc53a77..0f2722c4bcc32 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+@@ -141,8 +141,8 @@ main_pktdma: dma-controller@485c0000 {
+ 			compatible = "ti,am64-dmss-pktdma";
+ 			reg = <0x00 0x485c0000 0x00 0x100>,
+ 			      <0x00 0x4a800000 0x00 0x20000>,
+-			      <0x00 0x4aa00000 0x00 0x40000>,
+-			      <0x00 0x4b800000 0x00 0x400000>,
++			      <0x00 0x4aa00000 0x00 0x20000>,
++			      <0x00 0x4b800000 0x00 0x200000>,
+ 			      <0x00 0x485e0000 0x00 0x10000>,
+ 			      <0x00 0x484a0000 0x00 0x2000>,
+ 			      <0x00 0x484c0000 0x00 0x2000>,
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
+index 2038c5e046390..359f53f3e019b 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
+@@ -1364,8 +1364,6 @@ &mcasp0 {
+ 	       0 0 0 0
+ 	>;
+ 	tdm-slots = <2>;
+-	rx-num-evt = <32>;
+-	tx-num-evt = <32>;
+ 	#sound-dai-cells = <0>;
+ 	status = "disabled";
+ };
+@@ -1382,8 +1380,6 @@ &mcasp1 {
+ 	       0 0 0 0
+ 	>;
+ 	tdm-slots = <2>;
+-	rx-num-evt = <32>;
+-	tx-num-evt = <32>;
+ 	#sound-dai-cells = <0>;
+ 	status = "disabled";
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-am625-beagleplay.dts b/arch/arm64/boot/dts/ti/k3-am625-beagleplay.dts
+index 18e3070a86839..70de288d728e4 100644
+--- a/arch/arm64/boot/dts/ti/k3-am625-beagleplay.dts
++++ b/arch/arm64/boot/dts/ti/k3-am625-beagleplay.dts
+@@ -924,6 +924,4 @@ &mcasp1 {
+ 	       0 0 0 0
+ 	       0 0 0 0
+ 	>;
+-	tx-num-evt = <32>;
+-	rx-num-evt = <32>;
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-am625-phyboard-lyra-rdk.dts b/arch/arm64/boot/dts/ti/k3-am625-phyboard-lyra-rdk.dts
+index 50d2573c840ee..6c24e4d39ee80 100644
+--- a/arch/arm64/boot/dts/ti/k3-am625-phyboard-lyra-rdk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am625-phyboard-lyra-rdk.dts
+@@ -441,8 +441,6 @@ &mcasp2 {
+ 			0 0 0 0
+ 			0 0 0 0
+ 	>;
+-	tx-num-evt = <32>;
+-	rx-num-evt = <32>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
+index bf9c2d9c6439a..ce4a2f1056300 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
+@@ -120,8 +120,8 @@ main_pktdma: dma-controller@485c0000 {
+ 			compatible = "ti,am64-dmss-pktdma";
+ 			reg = <0x00 0x485c0000 0x00 0x100>,
+ 			      <0x00 0x4a800000 0x00 0x20000>,
+-			      <0x00 0x4aa00000 0x00 0x40000>,
+-			      <0x00 0x4b800000 0x00 0x400000>,
++			      <0x00 0x4aa00000 0x00 0x20000>,
++			      <0x00 0x4b800000 0x00 0x200000>,
+ 			      <0x00 0x485e0000 0x00 0x10000>,
+ 			      <0x00 0x484a0000 0x00 0x2000>,
+ 			      <0x00 0x484c0000 0x00 0x2000>,
+diff --git a/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts b/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts
+index fa43cd0b631e6..e026f65738b39 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts
+@@ -701,8 +701,6 @@ &mcasp1 {
+ 	       0 0 0 0
+ 	       0 0 0 0
+ 	>;
+-	tx-num-evt = <32>;
+-	rx-num-evt = <32>;
+ };
+ 
+ &dss {
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62p-main.dtsi
+index 900d1f9530a2a..2b9bc77a05404 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62p-main.dtsi
+@@ -123,8 +123,8 @@ main_pktdma: dma-controller@485c0000 {
+ 			compatible = "ti,am64-dmss-pktdma";
+ 			reg = <0x00 0x485c0000 0x00 0x100>,
+ 			      <0x00 0x4a800000 0x00 0x20000>,
+-			      <0x00 0x4aa00000 0x00 0x40000>,
+-			      <0x00 0x4b800000 0x00 0x400000>,
++			      <0x00 0x4aa00000 0x00 0x20000>,
++			      <0x00 0x4b800000 0x00 0x200000>,
+ 			      <0x00 0x485e0000 0x00 0x10000>,
+ 			      <0x00 0x484a0000 0x00 0x2000>,
+ 			      <0x00 0x484c0000 0x00 0x2000>,
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p5-sk.dts b/arch/arm64/boot/dts/ti/k3-am62p5-sk.dts
+index 6e72346591113..fb980d46e3041 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p5-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am62p5-sk.dts
+@@ -207,7 +207,7 @@ main_mcasp1_pins_default: main-mcasp1-default-pins {
+ 		pinctrl-single,pins = <
+ 			AM62PX_IOPAD(0x0090, PIN_INPUT, 2) /* (U24) GPMC0_BE0n_CLE.MCASP1_ACLKX */
+ 			AM62PX_IOPAD(0x0098, PIN_INPUT, 2) /* (AA24) GPMC0_WAIT0.MCASP1_AFSX */
+-			AM62PX_IOPAD(0x008c, PIN_INPUT, 2) /* (T25) GPMC0_WEn.MCASP1_AXR0 */
++			AM62PX_IOPAD(0x008c, PIN_OUTPUT, 2) /* (T25) GPMC0_WEn.MCASP1_AXR0 */
+ 			AM62PX_IOPAD(0x0084, PIN_INPUT, 2) /* (R25) GPMC0_ADVn_ALE.MCASP1_AXR2 */
+ 		>;
+ 	};
+@@ -549,8 +549,6 @@ &mcasp1 {
+ 	       0 0 0 0
+ 	       0 0 0 0
+ 	>;
+-	tx-num-evt = <32>;
+-	rx-num-evt = <32>;
+ };
+ 
+ &fss {
+diff --git a/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi b/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi
+index 3c45782ab2b78..63b4e88e3a94a 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi
+@@ -504,8 +504,6 @@ &mcasp1 {
+ 	       0 0 0 0
+ 	       0 0 0 0
+ 	>;
+-	tx-num-evt = <32>;
+-	rx-num-evt = <32>;
+ };
+ 
+ &dss {
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t.dts b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t.dts
+index 234d76e4e9445..5b5e9eeec5ac4 100644
+--- a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t.dts
++++ b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t.dts
+@@ -282,7 +282,6 @@ &main_uart3 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&main_uart3_default_pins>;
+ 	uart-has-rtscts;
+-	rs485-rts-active-low;
+ 	linux,rs485-enabled-at-boot-time;
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-j722s.dtsi b/arch/arm64/boot/dts/ti/k3-j722s.dtsi
+index c75744edb1433..9132b0232b0ba 100644
+--- a/arch/arm64/boot/dts/ti/k3-j722s.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j722s.dtsi
+@@ -83,6 +83,14 @@ &inta_main_dmss {
+ 	ti,interrupt-ranges = <7 71 21>;
+ };
+ 
++&main_gpio0 {
++	ti,ngpio = <87>;
++};
++
++&main_gpio1 {
++	ti,ngpio = <73>;
++};
++
+ &oc_sram {
+ 	reg = <0x00 0x70000000 0x00 0x40000>;
+ 	ranges = <0x00 0x00 0x70000000 0x40000>;
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index f8efbc128446d..7a4f5604be3f7 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -1065,6 +1065,28 @@ static inline bool pgtable_l5_enabled(void) { return false; }
+ 
+ #define p4d_offset_kimg(dir,addr)	((p4d_t *)dir)
+ 
++static inline
++p4d_t *p4d_offset_lockless_folded(pgd_t *pgdp, pgd_t pgd, unsigned long addr)
++{
++	/*
++	 * With runtime folding of the pud, pud_offset_lockless() passes
++	 * the 'pgd_t *' we return here to p4d_to_folded_pud(), which
++	 * will offset the pointer assuming that it points into
++	 * a page-table page. However, the fast GUP path passes us a
++	 * pgd_t allocated on the stack and so we must use the original
++	 * pointer in 'pgdp' to construct the p4d pointer instead of
++	 * using the generic p4d_offset_lockless() implementation.
++	 *
++	 * Note: reusing the original pointer means that we may
++	 * dereference the same (live) page-table entry multiple times.
++	 * This is safe because it is still only loaded once in the
++	 * context of each level and the CPU guarantees same-address
++	 * read-after-read ordering.
++	 */
++	return p4d_offset(pgdp, addr);
++}
++#define p4d_offset_lockless p4d_offset_lockless_folded
++
+ #endif  /* CONFIG_PGTABLE_LEVELS > 4 */
+ 
+ #define pgd_ERROR(e)	\
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index 31c8b3094dd7b..5de85dccc09cd 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -767,13 +767,15 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
+ 	}
+ }
+ 
+-static const char *ipi_types[NR_IPI] __tracepoint_string = {
++static const char *ipi_types[MAX_IPI] __tracepoint_string = {
+ 	[IPI_RESCHEDULE]	= "Rescheduling interrupts",
+ 	[IPI_CALL_FUNC]		= "Function call interrupts",
+ 	[IPI_CPU_STOP]		= "CPU stop interrupts",
+ 	[IPI_CPU_CRASH_STOP]	= "CPU stop (for crash dump) interrupts",
+ 	[IPI_TIMER]		= "Timer broadcast interrupts",
+ 	[IPI_IRQ_WORK]		= "IRQ work interrupts",
++	[IPI_CPU_BACKTRACE]	= "CPU backtrace interrupts",
++	[IPI_KGDB_ROUNDUP]	= "KGDB roundup interrupts",
+ };
+ 
+ static void smp_cross_call(const struct cpumask *target, unsigned int ipinr);
+@@ -784,7 +786,7 @@ int arch_show_interrupts(struct seq_file *p, int prec)
+ {
+ 	unsigned int cpu, i;
+ 
+-	for (i = 0; i < NR_IPI; i++) {
++	for (i = 0; i < MAX_IPI; i++) {
+ 		seq_printf(p, "%*s%u:%s", prec - 1, "IPI", i,
+ 			   prec >= 4 ? " " : "");
+ 		for_each_online_cpu(cpu)
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 720336d288568..1bf483ec971d9 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -2141,7 +2141,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+ 	emit(A64_STR64I(A64_R(20), A64_SP, regs_off + 8), ctx);
+ 
+ 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+-		emit_addr_mov_i64(A64_R(0), (const u64)im, ctx);
++		emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
+ 		emit_call((const u64)__bpf_tramp_enter, ctx);
+ 	}
+ 
+@@ -2185,7 +2185,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+ 
+ 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+ 		im->ip_epilogue = ctx->ro_image + ctx->idx;
+-		emit_addr_mov_i64(A64_R(0), (const u64)im, ctx);
++		emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
+ 		emit_call((const u64)__bpf_tramp_exit, ctx);
+ 	}
+ 
+diff --git a/arch/loongarch/kernel/hw_breakpoint.c b/arch/loongarch/kernel/hw_breakpoint.c
+index 621ad7634df71..a6e4b605bfa8d 100644
+--- a/arch/loongarch/kernel/hw_breakpoint.c
++++ b/arch/loongarch/kernel/hw_breakpoint.c
+@@ -221,7 +221,7 @@ static int hw_breakpoint_control(struct perf_event *bp,
+ 		}
+ 		enable = csr_read64(LOONGARCH_CSR_CRMD);
+ 		csr_write64(CSR_CRMD_WE | enable, LOONGARCH_CSR_CRMD);
+-		if (bp->hw.target)
++		if (bp->hw.target && test_tsk_thread_flag(bp->hw.target, TIF_LOAD_WATCH))
+ 			regs->csr_prmd |= CSR_PRMD_PWE;
+ 		break;
+ 	case HW_BREAKPOINT_UNINSTALL:
+diff --git a/arch/loongarch/kernel/ptrace.c b/arch/loongarch/kernel/ptrace.c
+index 200109de1971a..19dc6eff45ccc 100644
+--- a/arch/loongarch/kernel/ptrace.c
++++ b/arch/loongarch/kernel/ptrace.c
+@@ -589,6 +589,7 @@ static int ptrace_hbp_set_ctrl(unsigned int note_type,
+ 	struct perf_event *bp;
+ 	struct perf_event_attr attr;
+ 	struct arch_hw_breakpoint_ctrl ctrl;
++	struct thread_info *ti = task_thread_info(tsk);
+ 
+ 	bp = ptrace_hbp_get_initialised_bp(note_type, tsk, idx);
+ 	if (IS_ERR(bp))
+@@ -613,8 +614,10 @@ static int ptrace_hbp_set_ctrl(unsigned int note_type,
+ 		if (err)
+ 			return err;
+ 		attr.disabled = 0;
++		set_ti_thread_flag(ti, TIF_LOAD_WATCH);
+ 	} else {
+ 		attr.disabled = 1;
++		clear_ti_thread_flag(ti, TIF_LOAD_WATCH);
+ 	}
+ 
+ 	return modify_user_hw_breakpoint(bp, &attr);
+diff --git a/arch/m68k/amiga/config.c b/arch/m68k/amiga/config.c
+index d4b170c861bf0..0147130dc34e3 100644
+--- a/arch/m68k/amiga/config.c
++++ b/arch/m68k/amiga/config.c
+@@ -180,6 +180,15 @@ int __init amiga_parse_bootinfo(const struct bi_record *record)
+ 			dev->slotsize = be16_to_cpu(cd->cd_SlotSize);
+ 			dev->boardaddr = be32_to_cpu(cd->cd_BoardAddr);
+ 			dev->boardsize = be32_to_cpu(cd->cd_BoardSize);
++
++			/* CS-LAB Warp 1260 workaround */
++			if (be16_to_cpu(dev->rom.er_Manufacturer) == ZORRO_MANUF(ZORRO_PROD_CSLAB_WARP_1260) &&
++			    dev->rom.er_Product == ZORRO_PROD(ZORRO_PROD_CSLAB_WARP_1260)) {
++
++				/* turn off all interrupts */
++				pr_info("Warp 1260 card detected: applying interrupt storm workaround\n");
++				*(uint32_t *)(dev->boardaddr + 0x1000) = 0xfff;
++			}
+ 		} else
+ 			pr_warn("amiga_parse_bootinfo: too many AutoConfig devices\n");
+ #endif /* CONFIG_ZORRO */
+diff --git a/arch/m68k/atari/ataints.c b/arch/m68k/atari/ataints.c
+index 23256434191c3..0465444ceb216 100644
+--- a/arch/m68k/atari/ataints.c
++++ b/arch/m68k/atari/ataints.c
+@@ -301,11 +301,7 @@ void __init atari_init_IRQ(void)
+ 
+ 	if (ATARIHW_PRESENT(SCU)) {
+ 		/* init the SCU if present */
+-		tt_scu.sys_mask = 0x10;		/* enable VBL (for the cursor) and
+-									 * disable HSYNC interrupts (who
+-									 * needs them?)  MFP and SCC are
+-									 * enabled in VME mask
+-									 */
++		tt_scu.sys_mask = 0x0;		/* disable all interrupts */
+ 		tt_scu.vme_mask = 0x60;		/* enable MFP and SCC ints */
+ 	} else {
+ 		/* If no SCU and no Hades, the HSYNC interrupt needs to be
+diff --git a/arch/m68k/include/asm/cmpxchg.h b/arch/m68k/include/asm/cmpxchg.h
+index d7f3de9c5d6f7..4ba14f3535fcb 100644
+--- a/arch/m68k/include/asm/cmpxchg.h
++++ b/arch/m68k/include/asm/cmpxchg.h
+@@ -32,7 +32,7 @@ static inline unsigned long __arch_xchg(unsigned long x, volatile void * ptr, in
+ 		x = tmp;
+ 		break;
+ 	default:
+-		tmp = __invalid_xchg_size(x, ptr, size);
++		x = __invalid_xchg_size(x, ptr, size);
+ 		break;
+ 	}
+ 
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index 80aecba248922..5785a3d5ccfbb 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -170,7 +170,7 @@ cflags-$(CONFIG_CPU_NEVADA)	+= $(call cc-option,-march=rm5200,-march=mips4) \
+ 			-Wa,--trap
+ cflags-$(CONFIG_CPU_RM7000)	+= $(call cc-option,-march=rm7000,-march=mips4) \
+ 			-Wa,--trap
+-cflags-$(CONFIG_CPU_SB1)	+= $(call cc-option,-march=sb1,-march=mips64r1) \
++cflags-$(CONFIG_CPU_SB1)	+= $(call cc-option,-march=sb1,-march=mips64) \
+ 			-Wa,--trap
+ cflags-$(CONFIG_CPU_SB1)	+= $(call cc-option,-mno-mdmx)
+ cflags-$(CONFIG_CPU_SB1)	+= $(call cc-option,-mno-mips3d)
+diff --git a/arch/mips/boot/dts/loongson/loongson64-2k1000.dtsi b/arch/mips/boot/dts/loongson/loongson64-2k1000.dtsi
+index ee3e2153dd13f..c0be84a6e81fd 100644
+--- a/arch/mips/boot/dts/loongson/loongson64-2k1000.dtsi
++++ b/arch/mips/boot/dts/loongson/loongson64-2k1000.dtsi
+@@ -23,14 +23,6 @@ cpu0: cpu@0 {
+ 		};
+ 	};
+ 
+-	memory@200000 {
+-		compatible = "memory";
+-		device_type = "memory";
+-		reg = <0x00000000 0x00200000 0x00000000 0x0ee00000>, /* 238 MB at 2 MB */
+-			<0x00000000 0x20000000 0x00000000 0x1f000000>, /* 496 MB at 512 MB */
+-			<0x00000001 0x10000000 0x00000001 0xb0000000>; /* 6912 MB at 4352MB */
+-	};
+-
+ 	cpu_clk: cpu_clk {
+ 		#clock-cells = <0>;
+ 		compatible = "fixed-clock";
+@@ -52,6 +44,13 @@ package0: bus@10000000 {
+ 			0 0x40000000 0 0x40000000 0 0x40000000
+ 			0xfe 0x00000000 0xfe 0x00000000 0 0x40000000>;
+ 
++		isa@18000000 {
++			compatible = "isa";
++			#size-cells = <1>;
++			#address-cells = <2>;
++			ranges = <1 0x0 0x0 0x18000000 0x4000>;
++		};
++
+ 		pm: reset-controller@1fe07000 {
+ 			compatible = "loongson,ls2k-pm";
+ 			reg = <0 0x1fe07000 0 0x422>;
+@@ -137,7 +136,8 @@ gmac@3,0 {
+ 					     <13 IRQ_TYPE_LEVEL_LOW>;
+ 				interrupt-names = "macirq", "eth_lpi";
+ 				interrupt-parent = <&liointc0>;
+-				phy-mode = "rgmii";
++				phy-mode = "rgmii-id";
++				phy-handle = <&phy1>;
+ 				mdio {
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+@@ -160,7 +160,8 @@ gmac@3,1 {
+ 					     <15 IRQ_TYPE_LEVEL_LOW>;
+ 				interrupt-names = "macirq", "eth_lpi";
+ 				interrupt-parent = <&liointc0>;
+-				phy-mode = "rgmii";
++				phy-mode = "rgmii-id";
++				phy-handle = <&phy1>;
+ 				mdio {
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+diff --git a/arch/mips/include/asm/mach-loongson64/boot_param.h b/arch/mips/include/asm/mach-loongson64/boot_param.h
+index e007edd6b60a7..9218b3ae33832 100644
+--- a/arch/mips/include/asm/mach-loongson64/boot_param.h
++++ b/arch/mips/include/asm/mach-loongson64/boot_param.h
+@@ -42,12 +42,14 @@ enum loongson_cpu_type {
+ 	Legacy_1B = 0x5,
+ 	Legacy_2G = 0x6,
+ 	Legacy_2H = 0x7,
++	Legacy_2K = 0x8,
+ 	Loongson_1A = 0x100,
+ 	Loongson_1B = 0x101,
+ 	Loongson_2E = 0x200,
+ 	Loongson_2F = 0x201,
+ 	Loongson_2G = 0x202,
+ 	Loongson_2H = 0x203,
++	Loongson_2K = 0x204,
+ 	Loongson_3A = 0x300,
+ 	Loongson_3B = 0x301
+ };
+diff --git a/arch/mips/include/asm/mips-cm.h b/arch/mips/include/asm/mips-cm.h
+index c2930a75b7e44..1e782275850a3 100644
+--- a/arch/mips/include/asm/mips-cm.h
++++ b/arch/mips/include/asm/mips-cm.h
+@@ -240,6 +240,10 @@ GCR_ACCESSOR_RO(32, 0x0d0, gic_status)
+ GCR_ACCESSOR_RO(32, 0x0f0, cpc_status)
+ #define CM_GCR_CPC_STATUS_EX			BIT(0)
+ 
++/* GCR_ACCESS - Controls core/IOCU access to GCRs */
++GCR_ACCESSOR_RW(32, 0x120, access_cm3)
++#define CM_GCR_ACCESS_ACCESSEN			GENMASK(7, 0)
++
+ /* GCR_L2_CONFIG - Indicates L2 cache configuration when Config5.L2C=1 */
+ GCR_ACCESSOR_RW(32, 0x130, l2_config)
+ #define CM_GCR_L2_CONFIG_BYPASS			BIT(20)
+diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
+index 9cc087dd1c194..395622c373258 100644
+--- a/arch/mips/kernel/smp-cps.c
++++ b/arch/mips/kernel/smp-cps.c
+@@ -317,7 +317,10 @@ static void boot_core(unsigned int core, unsigned int vpe_id)
+ 	write_gcr_co_reset_ext_base(CM_GCR_Cx_RESET_EXT_BASE_UEB);
+ 
+ 	/* Ensure the core can access the GCRs */
+-	set_gcr_access(1 << core);
++	if (mips_cm_revision() < CM_REV_CM3)
++		set_gcr_access(1 << core);
++	else
++		set_gcr_access_cm3(1 << core);
+ 
+ 	if (mips_cpc_present()) {
+ 		/* Reset the core */
+diff --git a/arch/mips/loongson64/env.c b/arch/mips/loongson64/env.c
+index ef3750a6ffacf..09ff052698614 100644
+--- a/arch/mips/loongson64/env.c
++++ b/arch/mips/loongson64/env.c
+@@ -88,6 +88,12 @@ void __init prom_lefi_init_env(void)
+ 	cpu_clock_freq = ecpu->cpu_clock_freq;
+ 	loongson_sysconf.cputype = ecpu->cputype;
+ 	switch (ecpu->cputype) {
++	case Legacy_2K:
++	case Loongson_2K:
++		smp_group[0] = 0x900000001fe11000;
++		loongson_sysconf.cores_per_node = 2;
++		loongson_sysconf.cores_per_package = 2;
++		break;
+ 	case Legacy_3A:
+ 	case Loongson_3A:
+ 		loongson_sysconf.cores_per_node = 4;
+@@ -221,6 +227,8 @@ void __init prom_lefi_init_env(void)
+ 		default:
+ 			break;
+ 		}
++	} else if ((read_c0_prid() & PRID_IMP_MASK) == PRID_IMP_LOONGSON_64R) {
++		loongson_fdt_blob = __dtb_loongson64_2core_2k1000_begin;
+ 	} else if ((read_c0_prid() & PRID_IMP_MASK) == PRID_IMP_LOONGSON_64G) {
+ 		if (loongson_sysconf.bridgetype == LS7A)
+ 			loongson_fdt_blob = __dtb_loongson64g_4core_ls7a_begin;
+diff --git a/arch/mips/loongson64/reset.c b/arch/mips/loongson64/reset.c
+index e01c8d4a805a9..3e20ade0503ad 100644
+--- a/arch/mips/loongson64/reset.c
++++ b/arch/mips/loongson64/reset.c
+@@ -11,6 +11,7 @@
+ #include <linux/init.h>
+ #include <linux/kexec.h>
+ #include <linux/pm.h>
++#include <linux/reboot.h>
+ #include <linux/slab.h>
+ 
+ #include <asm/bootinfo.h>
+@@ -21,36 +22,21 @@
+ #include <loongson.h>
+ #include <boot_param.h>
+ 
+-static void loongson_restart(char *command)
++static int firmware_restart(struct sys_off_data *unusedd)
+ {
+ 
+ 	void (*fw_restart)(void) = (void *)loongson_sysconf.restart_addr;
+ 
+ 	fw_restart();
+-	while (1) {
+-		if (cpu_wait)
+-			cpu_wait();
+-	}
++	return NOTIFY_DONE;
+ }
+ 
+-static void loongson_poweroff(void)
++static int firmware_poweroff(struct sys_off_data *unused)
+ {
+ 	void (*fw_poweroff)(void) = (void *)loongson_sysconf.poweroff_addr;
+ 
+ 	fw_poweroff();
+-	while (1) {
+-		if (cpu_wait)
+-			cpu_wait();
+-	}
+-}
+-
+-static void loongson_halt(void)
+-{
+-	pr_notice("\n\n** You can safely turn off the power now **\n\n");
+-	while (1) {
+-		if (cpu_wait)
+-			cpu_wait();
+-	}
++	return NOTIFY_DONE;
+ }
+ 
+ #ifdef CONFIG_KEXEC_CORE
+@@ -154,9 +140,17 @@ static void loongson_crash_shutdown(struct pt_regs *regs)
+ 
+ static int __init mips_reboot_setup(void)
+ {
+-	_machine_restart = loongson_restart;
+-	_machine_halt = loongson_halt;
+-	pm_power_off = loongson_poweroff;
++	if (loongson_sysconf.restart_addr) {
++		register_sys_off_handler(SYS_OFF_MODE_RESTART,
++				 SYS_OFF_PRIO_FIRMWARE,
++				 firmware_restart, NULL);
++	}
++
++	if (loongson_sysconf.poweroff_addr) {
++		register_sys_off_handler(SYS_OFF_MODE_POWER_OFF,
++				 SYS_OFF_PRIO_FIRMWARE,
++				 firmware_poweroff, NULL);
++	}
+ 
+ #ifdef CONFIG_KEXEC_CORE
+ 	kexec_argv = kmalloc(KEXEC_ARGV_SIZE, GFP_KERNEL);
+diff --git a/arch/mips/loongson64/smp.c b/arch/mips/loongson64/smp.c
+index 5a990cdef91a6..66d049cdcf145 100644
+--- a/arch/mips/loongson64/smp.c
++++ b/arch/mips/loongson64/smp.c
+@@ -466,12 +466,25 @@ static void loongson3_smp_finish(void)
+ static void __init loongson3_smp_setup(void)
+ {
+ 	int i = 0, num = 0; /* i: physical id, num: logical id */
++	int max_cpus = 0;
+ 
+ 	init_cpu_possible(cpu_none_mask);
+ 
++	for (i = 0; i < ARRAY_SIZE(smp_group); i++) {
++		if (!smp_group[i])
++			break;
++		max_cpus += loongson_sysconf.cores_per_node;
++	}
++
++	if (max_cpus < loongson_sysconf.nr_cpus) {
++		pr_err("SMP Groups are less than the number of CPUs\n");
++		loongson_sysconf.nr_cpus = max_cpus ? max_cpus : 1;
++	}
++
+ 	/* For unified kernel, NR_CPUS is the maximum possible value,
+ 	 * loongson_sysconf.nr_cpus is the really present value
+ 	 */
++	i = 0;
+ 	while (i < loongson_sysconf.nr_cpus) {
+ 		if (loongson_sysconf.reserved_cpus_mask & (1<<i)) {
+ 			/* Reserved physical CPU cores */
+@@ -492,14 +505,14 @@ static void __init loongson3_smp_setup(void)
+ 		__cpu_logical_map[num] = -1;
+ 		num++;
+ 	}
+-
+ 	csr_ipi_probe();
+ 	ipi_set0_regs_init();
+ 	ipi_clear0_regs_init();
+ 	ipi_status0_regs_init();
+ 	ipi_en0_regs_init();
+ 	ipi_mailbox_buf_init();
+-	ipi_write_enable(0);
++	if (smp_group[0])
++		ipi_write_enable(0);
+ 
+ 	cpu_set_core(&cpu_data[0],
+ 		     cpu_logical_map(0) % loongson_sysconf.cores_per_package);
+@@ -818,6 +831,9 @@ static int loongson3_disable_clock(unsigned int cpu)
+ 	uint64_t core_id = cpu_core(&cpu_data[cpu]);
+ 	uint64_t package_id = cpu_data[cpu].package;
+ 
++	if (!loongson_chipcfg[package_id] || !loongson_freqctrl[package_id])
++		return 0;
++
+ 	if ((read_c0_prid() & PRID_REV_MASK) == PRID_REV_LOONGSON3A_R1) {
+ 		LOONGSON_CHIPCFG(package_id) &= ~(1 << (12 + core_id));
+ 	} else {
+@@ -832,6 +848,9 @@ static int loongson3_enable_clock(unsigned int cpu)
+ 	uint64_t core_id = cpu_core(&cpu_data[cpu]);
+ 	uint64_t package_id = cpu_data[cpu].package;
+ 
++	if (!loongson_chipcfg[package_id] || !loongson_freqctrl[package_id])
++		return 0;
++
+ 	if ((read_c0_prid() & PRID_REV_MASK) == PRID_REV_LOONGSON3A_R1) {
+ 		LOONGSON_CHIPCFG(package_id) |= 1 << (12 + core_id);
+ 	} else {
+diff --git a/arch/mips/pci/pcie-octeon.c b/arch/mips/pci/pcie-octeon.c
+old mode 100755
+new mode 100644
+diff --git a/arch/mips/sgi-ip30/ip30-console.c b/arch/mips/sgi-ip30/ip30-console.c
+index 7c6dcf6e73f70..a5f10097b9859 100644
+--- a/arch/mips/sgi-ip30/ip30-console.c
++++ b/arch/mips/sgi-ip30/ip30-console.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ 
+ #include <linux/io.h>
++#include <linux/processor.h>
+ 
+ #include <asm/sn/ioc3.h>
+ #include <asm/setup.h>
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index dc9b902de8ea9..9656e956ed135 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -86,6 +86,7 @@ config PARISC
+ 	select HAVE_SOFTIRQ_ON_OWN_STACK if IRQSTACKS
+ 	select TRACE_IRQFLAGS_SUPPORT
+ 	select HAVE_FUNCTION_DESCRIPTORS if 64BIT
++	select PCI_MSI_ARCH_FALLBACKS if PCI_MSI
+ 
+ 	help
+ 	  The PA-RISC microprocessor is designed by Hewlett-Packard and used
+diff --git a/arch/powerpc/configs/85xx-hw.config b/arch/powerpc/configs/85xx-hw.config
+index 524db76f47b73..8aff832173977 100644
+--- a/arch/powerpc/configs/85xx-hw.config
++++ b/arch/powerpc/configs/85xx-hw.config
+@@ -24,6 +24,7 @@ CONFIG_FS_ENET=y
+ CONFIG_FSL_CORENET_CF=y
+ CONFIG_FSL_DMA=y
+ CONFIG_FSL_HV_MANAGER=y
++CONFIG_FSL_IFC=y
+ CONFIG_FSL_PQ_MDIO=y
+ CONFIG_FSL_RIO=y
+ CONFIG_FSL_XGMAC_MDIO=y
+@@ -58,6 +59,7 @@ CONFIG_INPUT_FF_MEMLESS=m
+ CONFIG_MARVELL_PHY=y
+ CONFIG_MDIO_BUS_MUX_GPIO=y
+ CONFIG_MDIO_BUS_MUX_MMIOREG=y
++CONFIG_MEMORY=y
+ CONFIG_MMC_SDHCI_OF_ESDHC=y
+ CONFIG_MMC_SDHCI_PLTFM=y
+ CONFIG_MMC_SDHCI=y
+diff --git a/arch/powerpc/include/asm/guest-state-buffer.h b/arch/powerpc/include/asm/guest-state-buffer.h
+index 808149f315763..d107abe1468fe 100644
+--- a/arch/powerpc/include/asm/guest-state-buffer.h
++++ b/arch/powerpc/include/asm/guest-state-buffer.h
+@@ -81,6 +81,7 @@
+ #define KVMPPC_GSID_HASHKEYR			0x1050
+ #define KVMPPC_GSID_HASHPKEYR			0x1051
+ #define KVMPPC_GSID_CTRL			0x1052
++#define KVMPPC_GSID_DPDES			0x1053
+ 
+ #define KVMPPC_GSID_CR				0x2000
+ #define KVMPPC_GSID_PIDR			0x2001
+@@ -110,7 +111,7 @@
+ #define KVMPPC_GSE_META_COUNT (KVMPPC_GSE_META_END - KVMPPC_GSE_META_START + 1)
+ 
+ #define KVMPPC_GSE_DW_REGS_START KVMPPC_GSID_GPR(0)
+-#define KVMPPC_GSE_DW_REGS_END KVMPPC_GSID_CTRL
++#define KVMPPC_GSE_DW_REGS_END KVMPPC_GSID_DPDES
+ #define KVMPPC_GSE_DW_REGS_COUNT \
+ 	(KVMPPC_GSE_DW_REGS_END - KVMPPC_GSE_DW_REGS_START + 1)
+ 
+diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
+index 3e1e2a698c9e7..10618622d7efc 100644
+--- a/arch/powerpc/include/asm/kvm_book3s.h
++++ b/arch/powerpc/include/asm/kvm_book3s.h
+@@ -594,6 +594,7 @@ static inline u##size kvmppc_get_##reg(struct kvm_vcpu *vcpu)		\
+ 
+ 
+ KVMPPC_BOOK3S_VCORE_ACCESSOR(vtb, 64, KVMPPC_GSID_VTB)
++KVMPPC_BOOK3S_VCORE_ACCESSOR(dpdes, 64, KVMPPC_GSID_DPDES)
+ KVMPPC_BOOK3S_VCORE_ACCESSOR_GET(arch_compat, 32, KVMPPC_GSID_LOGICAL_PVR)
+ KVMPPC_BOOK3S_VCORE_ACCESSOR_GET(lpcr, 64, KVMPPC_GSID_LPCR)
+ KVMPPC_BOOK3S_VCORE_ACCESSOR_SET(tb_offset, 64, KVMPPC_GSID_TB_OFFSET)
+diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
+index 60819751e55e6..0be07ed407c70 100644
+--- a/arch/powerpc/kernel/prom.c
++++ b/arch/powerpc/kernel/prom.c
+@@ -331,6 +331,7 @@ static int __init early_init_dt_scan_cpus(unsigned long node,
+ 					  void *data)
+ {
+ 	const char *type = of_get_flat_dt_prop(node, "device_type", NULL);
++	const __be32 *cpu_version = NULL;
+ 	const __be32 *prop;
+ 	const __be32 *intserv;
+ 	int i, nthreads;
+@@ -420,7 +421,7 @@ static int __init early_init_dt_scan_cpus(unsigned long node,
+ 		prop = of_get_flat_dt_prop(node, "cpu-version", NULL);
+ 		if (prop && (be32_to_cpup(prop) & 0xff000000) == 0x0f000000) {
+ 			identify_cpu(0, be32_to_cpup(prop));
+-			seq_buf_printf(&ppc_hw_desc, "0x%04x ", be32_to_cpup(prop));
++			cpu_version = prop;
+ 		}
+ 
+ 		check_cpu_feature_properties(node);
+@@ -431,6 +432,12 @@ static int __init early_init_dt_scan_cpus(unsigned long node,
+ 	}
+ 
+ 	identical_pvr_fixup(node);
++
++	// We can now add the CPU name & PVR to the hardware description
++	seq_buf_printf(&ppc_hw_desc, "%s 0x%04lx ", cur_cpu_spec->cpu_name, mfspr(SPRN_PVR));
++	if (cpu_version)
++		seq_buf_printf(&ppc_hw_desc, "0x%04x ", be32_to_cpup(cpu_version));
++
+ 	init_mmu_slb_size(node);
+ 
+ #ifdef CONFIG_PPC64
+@@ -881,9 +888,6 @@ void __init early_init_devtree(void *params)
+ 
+ 	dt_cpu_ftrs_scan();
+ 
+-	// We can now add the CPU name & PVR to the hardware description
+-	seq_buf_printf(&ppc_hw_desc, "%s 0x%04lx ", cur_cpu_spec->cpu_name, mfspr(SPRN_PVR));
+-
+ 	/* Retrieve CPU related informations from the flat tree
+ 	 * (altivec support, boot CPU ID, ...)
+ 	 */
+diff --git a/arch/powerpc/kexec/core_64.c b/arch/powerpc/kexec/core_64.c
+index 72b12bc10f90b..222aa326dacee 100644
+--- a/arch/powerpc/kexec/core_64.c
++++ b/arch/powerpc/kexec/core_64.c
+@@ -467,9 +467,15 @@ static int add_node_props(void *fdt, int node_offset, const struct device_node *
+  * @fdt:              Flattened device tree of the kernel.
+  *
+  * Returns 0 on success, negative errno on error.
++ *
++ * Note: expecting no subnodes under /cpus/<node> with device_type == "cpu".
++ * If this changes, update this function to include them.
+  */
+ int update_cpus_node(void *fdt)
+ {
++	int prev_node_offset;
++	const char *device_type;
++	const struct fdt_property *prop;
+ 	struct device_node *cpus_node, *dn;
+ 	int cpus_offset, cpus_subnode_offset, ret = 0;
+ 
+@@ -480,30 +486,44 @@ int update_cpus_node(void *fdt)
+ 		return cpus_offset;
+ 	}
+ 
+-	if (cpus_offset > 0) {
+-		ret = fdt_del_node(fdt, cpus_offset);
++	prev_node_offset = cpus_offset;
++	/* Delete sub-nodes of /cpus node with device_type == "cpu" */
++	for (cpus_subnode_offset = fdt_first_subnode(fdt, cpus_offset); cpus_subnode_offset >= 0;) {
++		/* Ignore nodes that do not have a device_type property or device_type != "cpu" */
++		prop = fdt_get_property(fdt, cpus_subnode_offset, "device_type", NULL);
++		if (!prop || strcmp(prop->data, "cpu")) {
++			prev_node_offset = cpus_subnode_offset;
++			goto next_node;
++		}
++
++		ret = fdt_del_node(fdt, cpus_subnode_offset);
+ 		if (ret < 0) {
+-			pr_err("Error deleting /cpus node: %s\n", fdt_strerror(ret));
+-			return -EINVAL;
++			pr_err("Failed to delete a cpus sub-node: %s\n", fdt_strerror(ret));
++			return ret;
+ 		}
++next_node:
++		if (prev_node_offset == cpus_offset)
++			cpus_subnode_offset = fdt_first_subnode(fdt, cpus_offset);
++		else
++			cpus_subnode_offset = fdt_next_subnode(fdt, prev_node_offset);
+ 	}
+ 
+-	/* Add cpus node to fdt */
+-	cpus_offset = fdt_add_subnode(fdt, fdt_path_offset(fdt, "/"), "cpus");
+-	if (cpus_offset < 0) {
+-		pr_err("Error creating /cpus node: %s\n", fdt_strerror(cpus_offset));
++	cpus_node = of_find_node_by_path("/cpus");
++	/* Fail here to avoid kexec/kdump kernel boot hung */
++	if (!cpus_node) {
++		pr_err("No /cpus node found\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	/* Add cpus node properties */
+-	cpus_node = of_find_node_by_path("/cpus");
+-	ret = add_node_props(fdt, cpus_offset, cpus_node);
+-	of_node_put(cpus_node);
+-	if (ret < 0)
+-		return ret;
++	/* Add all /cpus sub-nodes of device_type == "cpu" to FDT */
++	for_each_child_of_node(cpus_node, dn) {
++		/* Ignore device nodes that do not have a device_type property
++		 * or device_type != "cpu".
++		 */
++		device_type = of_get_property(dn, "device_type", NULL);
++		if (!device_type || strcmp(device_type, "cpu"))
++			continue;
+ 
+-	/* Loop through all subnodes of cpus and add them to fdt */
+-	for_each_node_by_type(dn, "cpu") {
+ 		cpus_subnode_offset = fdt_add_subnode(fdt, cpus_offset, dn->full_name);
+ 		if (cpus_subnode_offset < 0) {
+ 			pr_err("Unable to add %s subnode: %s\n", dn->full_name,
+@@ -517,6 +537,7 @@ int update_cpus_node(void *fdt)
+ 			goto out;
+ 	}
+ out:
++	of_node_put(cpus_node);
+ 	of_node_put(dn);
+ 	return ret;
+ }
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index daaf7faf21a5e..d8352e4d9cdc7 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -2305,7 +2305,7 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
+ 		*val = get_reg_val(id, kvmppc_get_siar_hv(vcpu));
+ 		break;
+ 	case KVM_REG_PPC_SDAR:
+-		*val = get_reg_val(id, kvmppc_get_siar_hv(vcpu));
++		*val = get_reg_val(id, kvmppc_get_sdar_hv(vcpu));
+ 		break;
+ 	case KVM_REG_PPC_SIER:
+ 		*val = get_reg_val(id, kvmppc_get_sier_hv(vcpu, 0));
+@@ -2540,7 +2540,7 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
+ 		vcpu->arch.mmcrs = set_reg_val(id, *val);
+ 		break;
+ 	case KVM_REG_PPC_MMCR3:
+-		*val = get_reg_val(id, vcpu->arch.mmcr[3]);
++		kvmppc_set_mmcr_hv(vcpu, 3, set_reg_val(id, *val));
+ 		break;
+ 	case KVM_REG_PPC_PMC1 ... KVM_REG_PPC_PMC8:
+ 		i = id - KVM_REG_PPC_PMC1;
+@@ -4116,6 +4116,11 @@ static int kvmhv_vcpu_entry_nestedv2(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	int trap;
+ 	long rc;
+ 
++	if (vcpu->arch.doorbell_request) {
++		vcpu->arch.doorbell_request = 0;
++		kvmppc_set_dpdes(vcpu, 1);
++	}
++
+ 	io = &vcpu->arch.nestedv2_io;
+ 
+ 	msr = mfmsr();
+diff --git a/arch/powerpc/kvm/book3s_hv_nestedv2.c b/arch/powerpc/kvm/book3s_hv_nestedv2.c
+index 1091f7a83b255..342f583147709 100644
+--- a/arch/powerpc/kvm/book3s_hv_nestedv2.c
++++ b/arch/powerpc/kvm/book3s_hv_nestedv2.c
+@@ -311,6 +311,10 @@ static int gs_msg_ops_vcpu_fill_info(struct kvmppc_gs_buff *gsb,
+ 			rc = kvmppc_gse_put_u64(gsb, iden,
+ 						vcpu->arch.vcore->vtb);
+ 			break;
++		case KVMPPC_GSID_DPDES:
++			rc = kvmppc_gse_put_u64(gsb, iden,
++						vcpu->arch.vcore->dpdes);
++			break;
+ 		case KVMPPC_GSID_LPCR:
+ 			rc = kvmppc_gse_put_u64(gsb, iden,
+ 						vcpu->arch.vcore->lpcr);
+@@ -543,6 +547,9 @@ static int gs_msg_ops_vcpu_refresh_info(struct kvmppc_gs_msg *gsm,
+ 		case KVMPPC_GSID_VTB:
+ 			vcpu->arch.vcore->vtb = kvmppc_gse_get_u64(gse);
+ 			break;
++		case KVMPPC_GSID_DPDES:
++			vcpu->arch.vcore->dpdes = kvmppc_gse_get_u64(gse);
++			break;
+ 		case KVMPPC_GSID_LPCR:
+ 			vcpu->arch.vcore->lpcr = kvmppc_gse_get_u64(gse);
+ 			break;
+diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
+index d32abe7fe6ab7..d11767208bfc1 100644
+--- a/arch/powerpc/kvm/powerpc.c
++++ b/arch/powerpc/kvm/powerpc.c
+@@ -1984,8 +1984,10 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
+ 			break;
+ 
+ 		r = -ENXIO;
+-		if (!xive_enabled())
++		if (!xive_enabled()) {
++			fdput(f);
+ 			break;
++		}
+ 
+ 		r = -EPERM;
+ 		dev = kvm_device_from_filp(f.file);
+diff --git a/arch/powerpc/kvm/test-guest-state-buffer.c b/arch/powerpc/kvm/test-guest-state-buffer.c
+index 4720b8dc88379..2571ccc618c96 100644
+--- a/arch/powerpc/kvm/test-guest-state-buffer.c
++++ b/arch/powerpc/kvm/test-guest-state-buffer.c
+@@ -151,7 +151,7 @@ static void test_gs_bitmap(struct kunit *test)
+ 		i++;
+ 	}
+ 
+-	for (u16 iden = KVMPPC_GSID_GPR(0); iden <= KVMPPC_GSID_CTRL; iden++) {
++	for (u16 iden = KVMPPC_GSID_GPR(0); iden <= KVMPPC_GSE_DW_REGS_END; iden++) {
+ 		kvmppc_gsbm_set(&gsbm, iden);
+ 		kvmppc_gsbm_set(&gsbm1, iden);
+ 		KUNIT_EXPECT_TRUE(test, kvmppc_gsbm_test(&gsbm, iden));
+diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
+index 43d4842bb1c7a..d93433e26dedb 100644
+--- a/arch/powerpc/mm/nohash/8xx.c
++++ b/arch/powerpc/mm/nohash/8xx.c
+@@ -94,7 +94,8 @@ static int __ref __early_map_kernel_hugepage(unsigned long va, phys_addr_t pa,
+ 		return -EINVAL;
+ 
+ 	set_huge_pte_at(&init_mm, va, ptep,
+-			pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)), psize);
++			pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)),
++			1UL << mmu_psize_to_shift(psize));
+ 
+ 	return 0;
+ }
+diff --git a/arch/powerpc/xmon/ppc-dis.c b/arch/powerpc/xmon/ppc-dis.c
+index 75fa98221d485..af105e1bc3fca 100644
+--- a/arch/powerpc/xmon/ppc-dis.c
++++ b/arch/powerpc/xmon/ppc-dis.c
+@@ -122,32 +122,21 @@ int print_insn_powerpc (unsigned long insn, unsigned long memaddr)
+   bool insn_is_short;
+   ppc_cpu_t dialect;
+ 
+-  dialect = PPC_OPCODE_PPC | PPC_OPCODE_COMMON
+-            | PPC_OPCODE_64 | PPC_OPCODE_POWER4 | PPC_OPCODE_ALTIVEC;
++  dialect = PPC_OPCODE_PPC | PPC_OPCODE_COMMON;
+ 
+-  if (cpu_has_feature(CPU_FTRS_POWER5))
+-    dialect |= PPC_OPCODE_POWER5;
++  if (IS_ENABLED(CONFIG_PPC64))
++    dialect |= PPC_OPCODE_64 | PPC_OPCODE_POWER4 | PPC_OPCODE_CELL |
++	PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_POWER7 | PPC_OPCODE_POWER8 |
++	PPC_OPCODE_POWER9;
+ 
+-  if (cpu_has_feature(CPU_FTRS_CELL))
+-    dialect |= (PPC_OPCODE_CELL | PPC_OPCODE_ALTIVEC);
++  if (cpu_has_feature(CPU_FTR_TM))
++    dialect |= PPC_OPCODE_HTM;
+ 
+-  if (cpu_has_feature(CPU_FTRS_POWER6))
+-    dialect |= (PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_ALTIVEC);
++  if (cpu_has_feature(CPU_FTR_ALTIVEC))
++    dialect |= PPC_OPCODE_ALTIVEC | PPC_OPCODE_ALTIVEC2;
+ 
+-  if (cpu_has_feature(CPU_FTRS_POWER7))
+-    dialect |= (PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_POWER7
+-                | PPC_OPCODE_ALTIVEC | PPC_OPCODE_VSX);
+-
+-  if (cpu_has_feature(CPU_FTRS_POWER8))
+-    dialect |= (PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_POWER7
+-		| PPC_OPCODE_POWER8 | PPC_OPCODE_HTM
+-		| PPC_OPCODE_ALTIVEC | PPC_OPCODE_ALTIVEC2 | PPC_OPCODE_VSX);
+-
+-  if (cpu_has_feature(CPU_FTRS_POWER9))
+-    dialect |= (PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_POWER7
+-		| PPC_OPCODE_POWER8 | PPC_OPCODE_POWER9 | PPC_OPCODE_HTM
+-		| PPC_OPCODE_ALTIVEC | PPC_OPCODE_ALTIVEC2
+-		| PPC_OPCODE_VSX | PPC_OPCODE_VSX3);
++  if (cpu_has_feature(CPU_FTR_VSX))
++    dialect |= PPC_OPCODE_VSX | PPC_OPCODE_VSX3;
+ 
+   /* Get the major opcode of the insn.  */
+   opcode = NULL;
+diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
+index 4236a69c35cb3..a00f7523cb91f 100644
+--- a/arch/riscv/kernel/head.S
++++ b/arch/riscv/kernel/head.S
+@@ -165,9 +165,20 @@ secondary_start_sbi:
+ #endif
+ 	call .Lsetup_trap_vector
+ 	scs_load_current
+-	tail smp_callin
++	call smp_callin
+ #endif /* CONFIG_SMP */
+ 
++.align 2
++.Lsecondary_park:
++	/*
++	 * Park this hart if we:
++	 *  - have too many harts on CONFIG_RISCV_BOOT_SPINWAIT
++	 *  - receive an early trap, before setup_trap_vector finished
++	 *  - fail in smp_callin(), as a successful one wouldn't return
++	 */
++	wfi
++	j .Lsecondary_park
++
+ .align 2
+ .Lsetup_trap_vector:
+ 	/* Set trap vector to exception handler */
+@@ -181,12 +192,6 @@ secondary_start_sbi:
+ 	csrw CSR_SCRATCH, zero
+ 	ret
+ 
+-.align 2
+-.Lsecondary_park:
+-	/* We lack SMP support or have too many harts, so park this hart */
+-	wfi
+-	j .Lsecondary_park
+-
+ SYM_CODE_END(_start)
+ 
+ SYM_CODE_START(_start_kernel)
+diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
+index 1319b29ce3b59..19baf0d574d35 100644
+--- a/arch/riscv/kernel/smpboot.c
++++ b/arch/riscv/kernel/smpboot.c
+@@ -214,6 +214,15 @@ asmlinkage __visible void smp_callin(void)
+ 	struct mm_struct *mm = &init_mm;
+ 	unsigned int curr_cpuid = smp_processor_id();
+ 
++	if (has_vector()) {
++		/*
++		 * Return as early as possible so the hart with a mismatching
++		 * vlen won't boot.
++		 */
++		if (riscv_v_setup_vsize())
++			return;
++	}
++
+ 	/* All kernel threads share the same mm context.  */
+ 	mmgrab(mm);
+ 	current->active_mm = mm;
+@@ -226,11 +235,6 @@ asmlinkage __visible void smp_callin(void)
+ 	numa_add_cpu(curr_cpuid);
+ 	set_cpu_online(curr_cpuid, true);
+ 
+-	if (has_vector()) {
+-		if (riscv_v_setup_vsize())
+-			elf_hwcap &= ~COMPAT_HWCAP_ISA_V;
+-	}
+-
+ 	riscv_user_isa_enable();
+ 
+ 	/*
+diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
+index 79a001d5533ea..212b015e09b75 100644
+--- a/arch/riscv/net/bpf_jit_comp64.c
++++ b/arch/riscv/net/bpf_jit_comp64.c
+@@ -16,6 +16,8 @@
+ #include "bpf_jit.h"
+ 
+ #define RV_FENTRY_NINSNS 2
++/* imm that allows emit_imm to emit max count insns */
++#define RV_MAX_COUNT_IMM 0x7FFF7FF7FF7FF7FF
+ 
+ #define RV_REG_TCC RV_REG_A6
+ #define RV_REG_TCC_SAVED RV_REG_S6 /* Store A6 in S6 if program do calls */
+@@ -915,7 +917,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
+ 		orig_call += RV_FENTRY_NINSNS * 4;
+ 
+ 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+-		emit_imm(RV_REG_A0, (const s64)im, ctx);
++		emit_imm(RV_REG_A0, ctx->insns ? (const s64)im : RV_MAX_COUNT_IMM, ctx);
+ 		ret = emit_call((const u64)__bpf_tramp_enter, true, ctx);
+ 		if (ret)
+ 			return ret;
+@@ -976,7 +978,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
+ 
+ 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+ 		im->ip_epilogue = ctx->insns + ctx->ninsns;
+-		emit_imm(RV_REG_A0, (const s64)im, ctx);
++		emit_imm(RV_REG_A0, ctx->insns ? (const s64)im : RV_MAX_COUNT_IMM, ctx);
+ 		ret = emit_call((const u64)__bpf_tramp_exit, true, ctx);
+ 		if (ret)
+ 			goto out;
+@@ -1045,6 +1047,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
+ {
+ 	int ret;
+ 	struct rv_jit_context ctx;
++	u32 size = image_end - image;
+ 
+ 	ctx.ninsns = 0;
+ 	/*
+@@ -1058,11 +1061,16 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
+ 	ctx.ro_insns = image;
+ 	ret = __arch_prepare_bpf_trampoline(im, m, tlinks, func_addr, flags, &ctx);
+ 	if (ret < 0)
+-		return ret;
++		goto out;
+ 
+-	bpf_flush_icache(ctx.insns, ctx.insns + ctx.ninsns);
++	if (WARN_ON(size < ninsns_rvoff(ctx.ninsns))) {
++		ret = -E2BIG;
++		goto out;
++	}
+ 
+-	return ninsns_rvoff(ret);
++	bpf_flush_icache(image, image_end);
++out:
++	return ret < 0 ? ret : size;
+ }
+ 
+ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
+diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
+index 1434642e9cba0..6968be98af117 100644
+--- a/arch/s390/kernel/perf_cpum_cf.c
++++ b/arch/s390/kernel/perf_cpum_cf.c
+@@ -556,25 +556,31 @@ static int cfdiag_diffctr(struct cpu_cf_events *cpuhw, unsigned long auth)
+ 	struct cf_trailer_entry *trailer_start, *trailer_stop;
+ 	struct cf_ctrset_entry *ctrstart, *ctrstop;
+ 	size_t offset = 0;
++	int i;
+ 
+-	auth &= (1 << CPUMF_LCCTL_ENABLE_SHIFT) - 1;
+-	do {
++	for (i = CPUMF_CTR_SET_BASIC; i < CPUMF_CTR_SET_MAX; ++i) {
+ 		ctrstart = (struct cf_ctrset_entry *)(cpuhw->start + offset);
+ 		ctrstop = (struct cf_ctrset_entry *)(cpuhw->stop + offset);
+ 
++		/* Counter set not authorized */
++		if (!(auth & cpumf_ctr_ctl[i]))
++			continue;
++		/* Counter set size zero was not saved */
++		if (!cpum_cf_read_setsize(i))
++			continue;
++
+ 		if (memcmp(ctrstop, ctrstart, sizeof(*ctrstop))) {
+ 			pr_err_once("cpum_cf_diag counter set compare error "
+ 				    "in set %i\n", ctrstart->set);
+ 			return 0;
+ 		}
+-		auth &= ~cpumf_ctr_ctl[ctrstart->set];
+ 		if (ctrstart->def == CF_DIAG_CTRSET_DEF) {
+ 			cfdiag_diffctrset((u64 *)(ctrstart + 1),
+ 					  (u64 *)(ctrstop + 1), ctrstart->ctr);
+ 			offset += ctrstart->ctr * sizeof(u64) +
+ 							sizeof(*ctrstart);
+ 		}
+-	} while (ctrstart->def && auth);
++	}
+ 
+ 	/* Save time_stamp from start of event in stop's trailer */
+ 	trailer_start = (struct cf_trailer_entry *)(cpuhw->start + offset);
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 90c2c786bb355..610e6f794511a 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -149,7 +149,7 @@ unsigned long __bootdata_preserved(max_mappable);
+ struct physmem_info __bootdata(physmem_info);
+ 
+ struct vm_layout __bootdata_preserved(vm_layout);
+-EXPORT_SYMBOL_GPL(vm_layout);
++EXPORT_SYMBOL(vm_layout);
+ int __bootdata_preserved(__kaslr_enabled);
+ unsigned int __bootdata_preserved(zlib_dfltcc_support);
+ EXPORT_SYMBOL(zlib_dfltcc_support);
+diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
+index 265fea37e0308..016993e9eb72f 100644
+--- a/arch/s390/kernel/uv.c
++++ b/arch/s390/kernel/uv.c
+@@ -318,6 +318,13 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
+ 			rc = make_folio_secure(folio, uvcb);
+ 			folio_unlock(folio);
+ 		}
++
++		/*
++		 * Once we drop the PTL, the folio may get unmapped and
++		 * freed immediately. We need a temporary reference.
++		 */
++		if (rc == -EAGAIN)
++			folio_get(folio);
+ 	}
+ unlock:
+ 	pte_unmap_unlock(ptep, ptelock);
+@@ -330,6 +337,7 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
+ 		 * completion, this is just a useless check, but it is safe.
+ 		 */
+ 		folio_wait_writeback(folio);
++		folio_put(folio);
+ 	} else if (rc == -EBUSY) {
+ 		/*
+ 		 * If we have tried a local drain and the folio refcount
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 54b5b2565df8d..4a74effe68704 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -5749,6 +5749,9 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ {
+ 	gpa_t size;
+ 
++	if (kvm_is_ucontrol(kvm))
++		return -EINVAL;
++
+ 	/* When we are protected, we should not change the memory slots */
+ 	if (kvm_s390_pv_get_handle(kvm))
+ 		return -EINVAL;
+diff --git a/arch/s390/pci/pci_irq.c b/arch/s390/pci/pci_irq.c
+index 0ef83b6ac0db7..84482a9213322 100644
+--- a/arch/s390/pci/pci_irq.c
++++ b/arch/s390/pci/pci_irq.c
+@@ -268,33 +268,20 @@ static void zpci_floating_irq_handler(struct airq_struct *airq,
+ 	}
+ }
+ 
+-int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
++static int __alloc_airq(struct zpci_dev *zdev, int msi_vecs,
++			unsigned long *bit)
+ {
+-	struct zpci_dev *zdev = to_zpci(pdev);
+-	unsigned int hwirq, msi_vecs, cpu;
+-	unsigned long bit;
+-	struct msi_desc *msi;
+-	struct msi_msg msg;
+-	int cpu_addr;
+-	int rc, irq;
+-
+-	zdev->aisb = -1UL;
+-	zdev->msi_first_bit = -1U;
+-	if (type == PCI_CAP_ID_MSI && nvec > 1)
+-		return 1;
+-	msi_vecs = min_t(unsigned int, nvec, zdev->max_msi);
+-
+ 	if (irq_delivery == DIRECTED) {
+ 		/* Allocate cpu vector bits */
+-		bit = airq_iv_alloc(zpci_ibv[0], msi_vecs);
+-		if (bit == -1UL)
++		*bit = airq_iv_alloc(zpci_ibv[0], msi_vecs);
++		if (*bit == -1UL)
+ 			return -EIO;
+ 	} else {
+ 		/* Allocate adapter summary indicator bit */
+-		bit = airq_iv_alloc_bit(zpci_sbv);
+-		if (bit == -1UL)
++		*bit = airq_iv_alloc_bit(zpci_sbv);
++		if (*bit == -1UL)
+ 			return -EIO;
+-		zdev->aisb = bit;
++		zdev->aisb = *bit;
+ 
+ 		/* Create adapter interrupt vector */
+ 		zdev->aibv = airq_iv_create(msi_vecs, AIRQ_IV_DATA | AIRQ_IV_BITLOCK, NULL);
+@@ -302,27 +289,66 @@ int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
+ 			return -ENOMEM;
+ 
+ 		/* Wire up shortcut pointer */
+-		zpci_ibv[bit] = zdev->aibv;
++		zpci_ibv[*bit] = zdev->aibv;
+ 		/* Each function has its own interrupt vector */
+-		bit = 0;
++		*bit = 0;
+ 	}
++	return 0;
++}
+ 
+-	/* Request MSI interrupts */
++int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
++{
++	unsigned int hwirq, msi_vecs, irqs_per_msi, i, cpu;
++	struct zpci_dev *zdev = to_zpci(pdev);
++	struct msi_desc *msi;
++	struct msi_msg msg;
++	unsigned long bit;
++	int cpu_addr;
++	int rc, irq;
++
++	zdev->aisb = -1UL;
++	zdev->msi_first_bit = -1U;
++
++	msi_vecs = min_t(unsigned int, nvec, zdev->max_msi);
++	if (msi_vecs < nvec) {
++		pr_info("%s requested %d irqs, allocate system limit of %d",
++			pci_name(pdev), nvec, zdev->max_msi);
++	}
++
++	rc = __alloc_airq(zdev, msi_vecs, &bit);
++	if (rc < 0)
++		return rc;
++
++	/*
++	 * Request MSI interrupts:
++	 * When using MSI, nvec_used interrupt sources and their irq
++	 * descriptors are controlled through one msi descriptor.
++	 * Thus the outer loop over msi descriptors shall run only once,
++	 * while two inner loops iterate over the interrupt vectors.
++	 * When using MSI-X, each interrupt vector/irq descriptor
++	 * is bound to exactly one msi descriptor (nvec_used is one).
++	 * So the inner loops are executed once, while the outer iterates
++	 * over the MSI-X descriptors.
++	 */
+ 	hwirq = bit;
+ 	msi_for_each_desc(msi, &pdev->dev, MSI_DESC_NOTASSOCIATED) {
+-		rc = -EIO;
+ 		if (hwirq - bit >= msi_vecs)
+ 			break;
+-		irq = __irq_alloc_descs(-1, 0, 1, 0, THIS_MODULE,
+-				(irq_delivery == DIRECTED) ?
+-				msi->affinity : NULL);
++		irqs_per_msi = min_t(unsigned int, msi_vecs, msi->nvec_used);
++		irq = __irq_alloc_descs(-1, 0, irqs_per_msi, 0, THIS_MODULE,
++					(irq_delivery == DIRECTED) ?
++					msi->affinity : NULL);
+ 		if (irq < 0)
+ 			return -ENOMEM;
+-		rc = irq_set_msi_desc(irq, msi);
+-		if (rc)
+-			return rc;
+-		irq_set_chip_and_handler(irq, &zpci_irq_chip,
+-					 handle_percpu_irq);
++
++		for (i = 0; i < irqs_per_msi; i++) {
++			rc = irq_set_msi_desc_off(irq, i, msi);
++			if (rc)
++				return rc;
++			irq_set_chip_and_handler(irq + i, &zpci_irq_chip,
++						 handle_percpu_irq);
++		}
++
+ 		msg.data = hwirq - bit;
+ 		if (irq_delivery == DIRECTED) {
+ 			if (msi->affinity)
+@@ -335,31 +361,35 @@ int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
+ 			msg.address_lo |= (cpu_addr << 8);
+ 
+ 			for_each_possible_cpu(cpu) {
+-				airq_iv_set_data(zpci_ibv[cpu], hwirq, irq);
++				for (i = 0; i < irqs_per_msi; i++)
++					airq_iv_set_data(zpci_ibv[cpu],
++							 hwirq + i, irq + i);
+ 			}
+ 		} else {
+ 			msg.address_lo = zdev->msi_addr & 0xffffffff;
+-			airq_iv_set_data(zdev->aibv, hwirq, irq);
++			for (i = 0; i < irqs_per_msi; i++)
++				airq_iv_set_data(zdev->aibv, hwirq + i, irq + i);
+ 		}
+ 		msg.address_hi = zdev->msi_addr >> 32;
+ 		pci_write_msi_msg(irq, &msg);
+-		hwirq++;
++		hwirq += irqs_per_msi;
+ 	}
+ 
+ 	zdev->msi_first_bit = bit;
+-	zdev->msi_nr_irqs = msi_vecs;
++	zdev->msi_nr_irqs = hwirq - bit;
+ 
+ 	rc = zpci_set_irq(zdev);
+ 	if (rc)
+ 		return rc;
+ 
+-	return (msi_vecs == nvec) ? 0 : msi_vecs;
++	return (zdev->msi_nr_irqs == nvec) ? 0 : zdev->msi_nr_irqs;
+ }
+ 
+ void arch_teardown_msi_irqs(struct pci_dev *pdev)
+ {
+ 	struct zpci_dev *zdev = to_zpci(pdev);
+ 	struct msi_desc *msi;
++	unsigned int i;
+ 	int rc;
+ 
+ 	/* Disable interrupts */
+@@ -369,8 +399,10 @@ void arch_teardown_msi_irqs(struct pci_dev *pdev)
+ 
+ 	/* Release MSI interrupts */
+ 	msi_for_each_desc(msi, &pdev->dev, MSI_DESC_ASSOCIATED) {
+-		irq_set_msi_desc(msi->irq, NULL);
+-		irq_free_desc(msi->irq);
++		for (i = 0; i < msi->nvec_used; i++) {
++			irq_set_msi_desc(msi->irq + i, NULL);
++			irq_free_desc(msi->irq + i);
++		}
+ 		msi->msg.address_lo = 0;
+ 		msi->msg.address_hi = 0;
+ 		msi->msg.data = 0;
+diff --git a/arch/sparc/include/asm/oplib_64.h b/arch/sparc/include/asm/oplib_64.h
+index a67abebd43592..1b86d02a84556 100644
+--- a/arch/sparc/include/asm/oplib_64.h
++++ b/arch/sparc/include/asm/oplib_64.h
+@@ -247,6 +247,7 @@ void prom_sun4v_guest_soft_state(void);
+ int prom_ihandle2path(int handle, char *buffer, int bufsize);
+ 
+ /* Client interface level routines. */
++void prom_cif_init(void *cif_handler);
+ void p1275_cmd_direct(unsigned long *);
+ 
+ #endif /* !(__SPARC64_OPLIB_H) */
+diff --git a/arch/sparc/prom/init_64.c b/arch/sparc/prom/init_64.c
+index 103aa91043185..f7b8a1a865b8f 100644
+--- a/arch/sparc/prom/init_64.c
++++ b/arch/sparc/prom/init_64.c
+@@ -26,9 +26,6 @@ phandle prom_chosen_node;
+  * routines in the prom library.
+  * It gets passed the pointer to the PROM vector.
+  */
+-
+-extern void prom_cif_init(void *);
+-
+ void __init prom_init(void *cif_handler)
+ {
+ 	phandle node;
+diff --git a/arch/sparc/prom/p1275.c b/arch/sparc/prom/p1275.c
+index 889aa602f8d86..51c3f984bbf72 100644
+--- a/arch/sparc/prom/p1275.c
++++ b/arch/sparc/prom/p1275.c
+@@ -49,7 +49,7 @@ void p1275_cmd_direct(unsigned long *args)
+ 	local_irq_restore(flags);
+ }
+ 
+-void prom_cif_init(void *cif_handler, void *cif_stack)
++void prom_cif_init(void *cif_handler)
+ {
+ 	p1275buf.prom_cif_handler = (void (*)(long *))cif_handler;
+ }
+diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
+index ef805eaa9e013..093c87879d08b 100644
+--- a/arch/um/drivers/ubd_kern.c
++++ b/arch/um/drivers/ubd_kern.c
+@@ -447,43 +447,31 @@ static int bulk_req_safe_read(
+ 	return n;
+ }
+ 
+-/* Called without dev->lock held, and only in interrupt context. */
+-static void ubd_handler(void)
++static void ubd_end_request(struct io_thread_req *io_req)
+ {
+-	int n;
+-	int count;
+-
+-	while(1){
+-		n = bulk_req_safe_read(
+-			thread_fd,
+-			irq_req_buffer,
+-			&irq_remainder,
+-			&irq_remainder_size,
+-			UBD_REQ_BUFFER_SIZE
+-		);
+-		if (n < 0) {
+-			if(n == -EAGAIN)
+-				break;
+-			printk(KERN_ERR "spurious interrupt in ubd_handler, "
+-			       "err = %d\n", -n);
+-			return;
+-		}
+-		for (count = 0; count < n/sizeof(struct io_thread_req *); count++) {
+-			struct io_thread_req *io_req = (*irq_req_buffer)[count];
+-
+-			if ((io_req->error == BLK_STS_NOTSUPP) && (req_op(io_req->req) == REQ_OP_DISCARD)) {
+-				blk_queue_max_discard_sectors(io_req->req->q, 0);
+-				blk_queue_max_write_zeroes_sectors(io_req->req->q, 0);
+-			}
+-			blk_mq_end_request(io_req->req, io_req->error);
+-			kfree(io_req);
+-		}
++	if (io_req->error == BLK_STS_NOTSUPP) {
++		if (req_op(io_req->req) == REQ_OP_DISCARD)
++			blk_queue_max_discard_sectors(io_req->req->q, 0);
++		else if (req_op(io_req->req) == REQ_OP_WRITE_ZEROES)
++			blk_queue_max_write_zeroes_sectors(io_req->req->q, 0);
+ 	}
++	blk_mq_end_request(io_req->req, io_req->error);
++	kfree(io_req);
+ }
+ 
+ static irqreturn_t ubd_intr(int irq, void *dev)
+ {
+-	ubd_handler();
++	int len, i;
++
++	while ((len = bulk_req_safe_read(thread_fd, irq_req_buffer,
++			&irq_remainder, &irq_remainder_size,
++			UBD_REQ_BUFFER_SIZE)) >= 0) {
++		for (i = 0; i < len / sizeof(struct io_thread_req *); i++)
++			ubd_end_request((*irq_req_buffer)[i]);
++	}
++
++	if (len < 0 && len != -EAGAIN)
++		pr_err("spurious interrupt in %s, err = %d\n", __func__, len);
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/arch/um/kernel/time.c b/arch/um/kernel/time.c
+index a8bfe8be15260..5b5fd8f68d9c1 100644
+--- a/arch/um/kernel/time.c
++++ b/arch/um/kernel/time.c
+@@ -875,9 +875,9 @@ static int setup_time_travel_start(char *str)
+ 	return 1;
+ }
+ 
+-__setup("time-travel-start", setup_time_travel_start);
++__setup("time-travel-start=", setup_time_travel_start);
+ __uml_help(setup_time_travel_start,
+-"time-travel-start=<seconds>\n"
++"time-travel-start=<nanoseconds>\n"
+ "Configure the UML instance's wall clock to start at this value rather than\n"
+ "the host's wall clock at the time of UML boot.\n");
+ #endif
+diff --git a/arch/um/os-Linux/signal.c b/arch/um/os-Linux/signal.c
+index 787cfb9a03088..b11ed66c8bb0e 100644
+--- a/arch/um/os-Linux/signal.c
++++ b/arch/um/os-Linux/signal.c
+@@ -8,6 +8,7 @@
+ 
+ #include <stdlib.h>
+ #include <stdarg.h>
++#include <stdbool.h>
+ #include <errno.h>
+ #include <signal.h>
+ #include <string.h>
+@@ -65,9 +66,7 @@ static void sig_handler_common(int sig, struct siginfo *si, mcontext_t *mc)
+ 
+ int signals_enabled;
+ #ifdef UML_CONFIG_UML_TIME_TRAVEL_SUPPORT
+-static int signals_blocked;
+-#else
+-#define signals_blocked 0
++static int signals_blocked, signals_blocked_pending;
+ #endif
+ static unsigned int signals_pending;
+ static unsigned int signals_active = 0;
+@@ -76,14 +75,27 @@ static void sig_handler(int sig, struct siginfo *si, mcontext_t *mc)
+ {
+ 	int enabled = signals_enabled;
+ 
+-	if ((signals_blocked || !enabled) && (sig == SIGIO)) {
++#ifdef UML_CONFIG_UML_TIME_TRAVEL_SUPPORT
++	if ((signals_blocked ||
++	     __atomic_load_n(&signals_blocked_pending, __ATOMIC_SEQ_CST)) &&
++	    (sig == SIGIO)) {
++		/* increment so unblock will do another round */
++		__atomic_add_fetch(&signals_blocked_pending, 1,
++				   __ATOMIC_SEQ_CST);
++		return;
++	}
++#endif
++
++	if (!enabled && (sig == SIGIO)) {
+ 		/*
+ 		 * In TT_MODE_EXTERNAL, need to still call time-travel
+-		 * handlers unless signals are also blocked for the
+-		 * external time message processing. This will mark
+-		 * signals_pending by itself (only if necessary.)
++		 * handlers. This will mark signals_pending by itself
++		 * (only if necessary.)
++		 * Note we won't get here if signals are hard-blocked
++		 * (which is handled above), in that case the hard-
++		 * unblock will handle things.
+ 		 */
+-		if (!signals_blocked && time_travel_mode == TT_MODE_EXTERNAL)
++		if (time_travel_mode == TT_MODE_EXTERNAL)
+ 			sigio_run_timetravel_handlers();
+ 		else
+ 			signals_pending |= SIGIO_MASK;
+@@ -380,33 +392,99 @@ int um_set_signals_trace(int enable)
+ #ifdef UML_CONFIG_UML_TIME_TRAVEL_SUPPORT
+ void mark_sigio_pending(void)
+ {
++	/*
++	 * It would seem that this should be atomic so
++	 * it isn't a read-modify-write with a signal
++	 * that could happen in the middle, losing the
++	 * value set by the signal.
++	 *
++	 * However, this function is only called when in
++	 * time-travel=ext simulation mode, in which case
++	 * the only signal ever pending is SIGIO, which
++	 * is blocked while this can be called, and the
++	 * timer signal (SIGALRM) cannot happen.
++	 */
+ 	signals_pending |= SIGIO_MASK;
+ }
+ 
+ void block_signals_hard(void)
+ {
+-	if (signals_blocked)
+-		return;
+-	signals_blocked = 1;
++	signals_blocked++;
+ 	barrier();
+ }
+ 
+ void unblock_signals_hard(void)
+ {
++	static bool unblocking;
++
+ 	if (!signals_blocked)
++		panic("unblocking signals while not blocked");
++
++	if (--signals_blocked)
+ 		return;
+-	/* Must be set to 0 before we check the pending bits etc. */
+-	signals_blocked = 0;
++	/*
++	 * Must be set to 0 before we check pending so the
++	 * SIGIO handler will run as normal unless we're still
++	 * going to process signals_blocked_pending.
++	 */
+ 	barrier();
+ 
+-	if (signals_pending && signals_enabled) {
+-		/* this is a bit inefficient, but that's not really important */
+-		block_signals();
+-		unblock_signals();
+-	} else if (signals_pending & SIGIO_MASK) {
+-		/* we need to run time-travel handlers even if not enabled */
+-		sigio_run_timetravel_handlers();
++	/*
++	 * Note that block_signals_hard()/unblock_signals_hard() can be called
++	 * within the unblock_signals()/sigio_run_timetravel_handlers() below.
++	 * This would still be prone to race conditions since it's actually a
++	 * call _within_ e.g. vu_req_read_message(), where we observed this
++	 * issue, which loops. Thus, if the inner call handles the recorded
++	 * pending signals, we can get out of the inner call with the real
++		 * signal handler no longer blocked, and still have a race. Thus don't
++	 * handle unblocking in the inner call, if it happens, but only in
++	 * the outermost call - 'unblocking' serves as an ownership for the
++	 * signals_blocked_pending decrement.
++	 */
++	if (unblocking)
++		return;
++	unblocking = true;
++
++	while (__atomic_load_n(&signals_blocked_pending, __ATOMIC_SEQ_CST)) {
++		if (signals_enabled) {
++			/* signals are enabled so we can touch this */
++			signals_pending |= SIGIO_MASK;
++			/*
++			 * this is a bit inefficient, but that's
++			 * not really important
++			 */
++			block_signals();
++			unblock_signals();
++		} else {
++			/*
++			 * we need to run time-travel handlers even
++			 * if not enabled
++			 */
++			sigio_run_timetravel_handlers();
++		}
++
++		/*
++		 * The decrement of signals_blocked_pending must be atomic so
++		 * that the signal handler will either happen before or after
++		 * the decrement, not during a read-modify-write:
++		 *  - If it happens before, it can increment it and we'll
++		 *    decrement it and do another round in the loop.
++		 *  - If it happens after it'll see 0 for both signals_blocked
++		 *    and signals_blocked_pending and thus run the handler as
++		 *    usual (subject to signals_enabled, but that's unrelated.)
++		 *
++		 * Note that a call to unblock_signals_hard() within the calls
++		 * to unblock_signals() or sigio_run_timetravel_handlers() above
++		 * will do nothing due to the 'unblocking' state, so this cannot
++		 * underflow as the only one decrementing will be the outermost
++		 * one.
++		 */
++		if (__atomic_sub_fetch(&signals_blocked_pending, 1,
++				       __ATOMIC_SEQ_CST) < 0)
++			panic("signals_blocked_pending underflow");
+ 	}
++
++	unblocking = false;
+ }
+ #endif
+ 
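
The signal.c change above replaces a simple blocked flag with a nesting counter plus an atomic signals_blocked_pending counter, and a static 'unblocking' flag so that only the outermost unblock drains deferred SIGIO work. The following standalone userspace sketch mirrors that pattern with hypothetical names (handle_work, block_hard, unblock_hard) and the same GCC/Clang __atomic builtins; it is an illustration of the idea, not the UML code itself:

/* hardblock_sketch.c - counter-based hard block/unblock, hypothetical names */
#include <signal.h>
#include <stdbool.h>
#include <stdio.h>

static int blocked;                    /* nesting depth, main flow only */
static volatile int blocked_pending;   /* bumped from the signal handler */
static volatile int handled;           /* work actually performed */

static void handle_work(void) { handled++; }

static void sigusr1_handler(int sig)
{
    (void)sig;
    if (blocked || __atomic_load_n(&blocked_pending, __ATOMIC_SEQ_CST)) {
        /* defer: remember the event so the outermost unblock does another round */
        __atomic_add_fetch(&blocked_pending, 1, __ATOMIC_SEQ_CST);
        return;
    }
    handle_work();
}

static void block_hard(void)
{
    blocked++;
    __atomic_thread_fence(__ATOMIC_SEQ_CST);   /* stands in for barrier() */
}

static void unblock_hard(void)
{
    static bool unblocking;

    if (--blocked)
        return;                 /* still nested */
    __atomic_thread_fence(__ATOMIC_SEQ_CST);
    if (unblocking)
        return;                 /* inner call: the outermost one owns the drain */
    unblocking = true;
    /* drain everything that arrived while hard-blocked */
    while (__atomic_load_n(&blocked_pending, __ATOMIC_SEQ_CST)) {
        handle_work();
        __atomic_sub_fetch(&blocked_pending, 1, __ATOMIC_SEQ_CST);
    }
    unblocking = false;
}

int main(void)
{
    signal(SIGUSR1, sigusr1_handler);
    block_hard();
    raise(SIGUSR1);             /* only counted while blocked */
    raise(SIGUSR1);
    unblock_hard();             /* drains both deferred events */
    printf("handled %d deferred events\n", handled);
    return 0;
}
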
+diff --git a/arch/x86/Kconfig.assembler b/arch/x86/Kconfig.assembler
+index 59aedf32c4eaa..6d20a6ce0507d 100644
+--- a/arch/x86/Kconfig.assembler
++++ b/arch/x86/Kconfig.assembler
+@@ -36,6 +36,6 @@ config AS_VPCLMULQDQ
+ 	  Supported by binutils >= 2.30 and LLVM integrated assembler
+ 
+ config AS_WRUSS
+-	def_bool $(as-instr,wrussq %rax$(comma)(%rbx))
++	def_bool $(as-instr64,wrussq %rax$(comma)(%rbx))
+ 	help
+ 	  Supported by binutils >= 2.31 and LLVM integrated assembler
+diff --git a/arch/x86/Makefile.um b/arch/x86/Makefile.um
+index 2106a2bd152bf..a46b1397ad01c 100644
+--- a/arch/x86/Makefile.um
++++ b/arch/x86/Makefile.um
+@@ -9,6 +9,7 @@ core-y += arch/x86/crypto/
+ #
+ ifeq ($(CONFIG_CC_IS_CLANG),y)
+ KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx
++KBUILD_RUSTFLAGS += --target=$(objtree)/scripts/target.json
+ KBUILD_RUSTFLAGS += -Ctarget-feature=-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-avx,-avx2
+ endif
+ 
+diff --git a/arch/x86/entry/syscall_32.c b/arch/x86/entry/syscall_32.c
+index c2235bae17ef6..8cc9950d7104a 100644
+--- a/arch/x86/entry/syscall_32.c
++++ b/arch/x86/entry/syscall_32.c
+@@ -14,9 +14,12 @@
+ #endif
+ 
+ #define __SYSCALL(nr, sym) extern long __ia32_##sym(const struct pt_regs *);
+-
++#define __SYSCALL_NORETURN(nr, sym) extern long __noreturn __ia32_##sym(const struct pt_regs *);
+ #include <asm/syscalls_32.h>
+-#undef __SYSCALL
++#undef  __SYSCALL
++
++#undef  __SYSCALL_NORETURN
++#define __SYSCALL_NORETURN __SYSCALL
+ 
+ /*
+  * The sys_call_table[] is no longer used for system calls, but
+@@ -28,11 +31,10 @@
+ const sys_call_ptr_t sys_call_table[] = {
+ #include <asm/syscalls_32.h>
+ };
+-#undef __SYSCALL
++#undef  __SYSCALL
+ #endif
+ 
+ #define __SYSCALL(nr, sym) case nr: return __ia32_##sym(regs);
+-
+ long ia32_sys_call(const struct pt_regs *regs, unsigned int nr)
+ {
+ 	switch (nr) {
+diff --git a/arch/x86/entry/syscall_64.c b/arch/x86/entry/syscall_64.c
+index 33b3f09e6f151..ba8354424860c 100644
+--- a/arch/x86/entry/syscall_64.c
++++ b/arch/x86/entry/syscall_64.c
+@@ -8,8 +8,12 @@
+ #include <asm/syscall.h>
+ 
+ #define __SYSCALL(nr, sym) extern long __x64_##sym(const struct pt_regs *);
++#define __SYSCALL_NORETURN(nr, sym) extern long __noreturn __x64_##sym(const struct pt_regs *);
+ #include <asm/syscalls_64.h>
+-#undef __SYSCALL
++#undef  __SYSCALL
++
++#undef  __SYSCALL_NORETURN
++#define __SYSCALL_NORETURN __SYSCALL
+ 
+ /*
+  * The sys_call_table[] is no longer used for system calls, but
+@@ -20,10 +24,9 @@
+ const sys_call_ptr_t sys_call_table[] = {
+ #include <asm/syscalls_64.h>
+ };
+-#undef __SYSCALL
++#undef  __SYSCALL
+ 
+ #define __SYSCALL(nr, sym) case nr: return __x64_##sym(regs);
+-
+ long x64_sys_call(const struct pt_regs *regs, unsigned int nr)
+ {
+ 	switch (nr) {
+diff --git a/arch/x86/entry/syscall_x32.c b/arch/x86/entry/syscall_x32.c
+index 03de4a9321318..fb77908f44f37 100644
+--- a/arch/x86/entry/syscall_x32.c
++++ b/arch/x86/entry/syscall_x32.c
+@@ -8,11 +8,14 @@
+ #include <asm/syscall.h>
+ 
+ #define __SYSCALL(nr, sym) extern long __x64_##sym(const struct pt_regs *);
++#define __SYSCALL_NORETURN(nr, sym) extern long __noreturn __x64_##sym(const struct pt_regs *);
+ #include <asm/syscalls_x32.h>
+-#undef __SYSCALL
++#undef  __SYSCALL
+ 
+-#define __SYSCALL(nr, sym) case nr: return __x64_##sym(regs);
++#undef  __SYSCALL_NORETURN
++#define __SYSCALL_NORETURN __SYSCALL
+ 
++#define __SYSCALL(nr, sym) case nr: return __x64_##sym(regs);
+ long x32_sys_call(const struct pt_regs *regs, unsigned int nr)
+ {
+ 	switch (nr) {
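
The three syscall_*.c hunks above rely on the same X-macro idiom: __SYSCALL is first defined to emit extern declarations, the generated asm/syscalls_*.h table is included, then __SYSCALL is redefined (and the new __SYSCALL_NORETURN is aliased to it) before the table is expanded again to emit the switch cases. A self-contained sketch of that idiom, using a hypothetical two-entry table in place of the generated header:

/* xmacro_sketch.c - illustrative X-macro table, not the kernel headers */
#include <stdio.h>

/* stand-in for the generated asm/syscalls_*.h */
#define SYSCALL_LIST \
    __SYSCALL(0, sys_zero) \
    __SYSCALL_NORETURN(1, sys_one)

/* first pass: declarations (the kernel variant also adds __noreturn here) */
#define __SYSCALL(nr, sym)          long sym(void);
#define __SYSCALL_NORETURN(nr, sym) long sym(void);
SYSCALL_LIST
#undef  __SYSCALL

/* second pass: dispatch cases, NORETURN entries treated the same */
#undef  __SYSCALL_NORETURN
#define __SYSCALL_NORETURN __SYSCALL
#define __SYSCALL(nr, sym) case nr: return sym();

long do_call(unsigned int nr)
{
    switch (nr) {
    SYSCALL_LIST
    default: return -1;
    }
}

long sys_zero(void) { return 100; }
long sys_one(void)  { return 101; }

int main(void)
{
    printf("%ld %ld\n", do_call(0), do_call(1));
    return 0;
}
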
+diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
+index d6ebcab1d8b28..4b71a2607bf58 100644
+--- a/arch/x86/entry/syscalls/syscall_32.tbl
++++ b/arch/x86/entry/syscalls/syscall_32.tbl
+@@ -2,7 +2,7 @@
+ # 32-bit system call numbers and entry vectors
+ #
+ # The format is:
+-# <number> <abi> <name> <entry point> <compat entry point>
++# <number> <abi> <name> <entry point> [<compat entry point> [noreturn]]
+ #
+ # The __ia32_sys and __ia32_compat_sys stubs are created on-the-fly for
+ # sys_*() system calls and compat_sys_*() compat system calls if
+@@ -12,7 +12,7 @@
+ # The abi is always "i386" for this file.
+ #
+ 0	i386	restart_syscall		sys_restart_syscall
+-1	i386	exit			sys_exit
++1	i386	exit			sys_exit			-			noreturn
+ 2	i386	fork			sys_fork
+ 3	i386	read			sys_read
+ 4	i386	write			sys_write
+@@ -263,7 +263,7 @@
+ 249	i386	io_cancel		sys_io_cancel
+ 250	i386	fadvise64		sys_ia32_fadvise64
+ # 251 is available for reuse (was briefly sys_set_zone_reclaim)
+-252	i386	exit_group		sys_exit_group
++252	i386	exit_group		sys_exit_group			-			noreturn
+ 253	i386	lookup_dcookie
+ 254	i386	epoll_create		sys_epoll_create
+ 255	i386	epoll_ctl		sys_epoll_ctl
+diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
+index a396f6e6ab5bf..a8068f937290a 100644
+--- a/arch/x86/entry/syscalls/syscall_64.tbl
++++ b/arch/x86/entry/syscalls/syscall_64.tbl
+@@ -2,7 +2,7 @@
+ # 64-bit system call numbers and entry vectors
+ #
+ # The format is:
+-# <number> <abi> <name> <entry point>
++# <number> <abi> <name> <entry point> [<compat entry point> [noreturn]]
+ #
+ # The __x64_sys_*() stubs are created on-the-fly for sys_*() system calls
+ #
+@@ -68,7 +68,7 @@
+ 57	common	fork			sys_fork
+ 58	common	vfork			sys_vfork
+ 59	64	execve			sys_execve
+-60	common	exit			sys_exit
++60	common	exit			sys_exit			-			noreturn
+ 61	common	wait4			sys_wait4
+ 62	common	kill			sys_kill
+ 63	common	uname			sys_newuname
+@@ -239,7 +239,7 @@
+ 228	common	clock_gettime		sys_clock_gettime
+ 229	common	clock_getres		sys_clock_getres
+ 230	common	clock_nanosleep		sys_clock_nanosleep
+-231	common	exit_group		sys_exit_group
++231	common	exit_group		sys_exit_group			-			noreturn
+ 232	common	epoll_wait		sys_epoll_wait
+ 233	common	epoll_ctl		sys_epoll_ctl
+ 234	common	tgkill			sys_tgkill
+diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
+index 4ccb8fa483e61..5a4bfe9aea237 100644
+--- a/arch/x86/events/amd/uncore.c
++++ b/arch/x86/events/amd/uncore.c
+@@ -639,7 +639,7 @@ void amd_uncore_df_ctx_scan(struct amd_uncore *uncore, unsigned int cpu)
+ 	info.split.aux_data = 0;
+ 	info.split.num_pmcs = NUM_COUNTERS_NB;
+ 	info.split.gid = 0;
+-	info.split.cid = topology_die_id(cpu);
++	info.split.cid = topology_logical_package_id(cpu);
+ 
+ 	if (pmu_version >= 2) {
+ 		ebx.full = cpuid_ebx(EXT_PERFMON_DEBUG_FEATURES);
+@@ -654,17 +654,20 @@ int amd_uncore_df_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
+ {
+ 	struct attribute **df_attr = amd_uncore_df_format_attr;
+ 	struct amd_uncore_pmu *pmu;
++	int num_counters;
+ 
+ 	/* Run just once */
+ 	if (uncore->init_done)
+ 		return amd_uncore_ctx_init(uncore, cpu);
+ 
++	num_counters = amd_uncore_ctx_num_pmcs(uncore, cpu);
++	if (!num_counters)
++		goto done;
++
+ 	/* No grouping, single instance for a system */
+ 	uncore->pmus = kzalloc(sizeof(*uncore->pmus), GFP_KERNEL);
+-	if (!uncore->pmus) {
+-		uncore->num_pmus = 0;
++	if (!uncore->pmus)
+ 		goto done;
+-	}
+ 
+ 	/*
+ 	 * For Family 17h and above, the Northbridge counters are repurposed
+@@ -674,7 +677,7 @@ int amd_uncore_df_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
+ 	pmu = &uncore->pmus[0];
+ 	strscpy(pmu->name, boot_cpu_data.x86 >= 0x17 ? "amd_df" : "amd_nb",
+ 		sizeof(pmu->name));
+-	pmu->num_counters = amd_uncore_ctx_num_pmcs(uncore, cpu);
++	pmu->num_counters = num_counters;
+ 	pmu->msr_base = MSR_F15H_NB_PERF_CTL;
+ 	pmu->rdpmc_base = RDPMC_BASE_NB;
+ 	pmu->group = amd_uncore_ctx_gid(uncore, cpu);
+@@ -785,17 +788,20 @@ int amd_uncore_l3_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
+ {
+ 	struct attribute **l3_attr = amd_uncore_l3_format_attr;
+ 	struct amd_uncore_pmu *pmu;
++	int num_counters;
+ 
+ 	/* Run just once */
+ 	if (uncore->init_done)
+ 		return amd_uncore_ctx_init(uncore, cpu);
+ 
++	num_counters = amd_uncore_ctx_num_pmcs(uncore, cpu);
++	if (!num_counters)
++		goto done;
++
+ 	/* No grouping, single instance for a system */
+ 	uncore->pmus = kzalloc(sizeof(*uncore->pmus), GFP_KERNEL);
+-	if (!uncore->pmus) {
+-		uncore->num_pmus = 0;
++	if (!uncore->pmus)
+ 		goto done;
+-	}
+ 
+ 	/*
+ 	 * For Family 17h and above, L3 cache counters are available instead
+@@ -805,7 +811,7 @@ int amd_uncore_l3_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
+ 	pmu = &uncore->pmus[0];
+ 	strscpy(pmu->name, boot_cpu_data.x86 >= 0x17 ? "amd_l3" : "amd_l2",
+ 		sizeof(pmu->name));
+-	pmu->num_counters = amd_uncore_ctx_num_pmcs(uncore, cpu);
++	pmu->num_counters = num_counters;
+ 	pmu->msr_base = MSR_F16H_L2I_PERF_CTL;
+ 	pmu->rdpmc_base = RDPMC_BASE_LLC;
+ 	pmu->group = amd_uncore_ctx_gid(uncore, cpu);
+@@ -893,8 +899,8 @@ void amd_uncore_umc_ctx_scan(struct amd_uncore *uncore, unsigned int cpu)
+ 	cpuid(EXT_PERFMON_DEBUG_FEATURES, &eax, &ebx.full, &ecx, &edx);
+ 	info.split.aux_data = ecx;	/* stash active mask */
+ 	info.split.num_pmcs = ebx.split.num_umc_pmc;
+-	info.split.gid = topology_die_id(cpu);
+-	info.split.cid = topology_die_id(cpu);
++	info.split.gid = topology_logical_package_id(cpu);
++	info.split.cid = topology_logical_package_id(cpu);
+ 	*per_cpu_ptr(uncore->info, cpu) = info;
+ }
+ 
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 5b0dd07b1ef19..acd367c453341 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -2547,6 +2547,7 @@ static ssize_t set_attr_rdpmc(struct device *cdev,
+ 			      struct device_attribute *attr,
+ 			      const char *buf, size_t count)
+ {
++	static DEFINE_MUTEX(rdpmc_mutex);
+ 	unsigned long val;
+ 	ssize_t ret;
+ 
+@@ -2560,6 +2561,8 @@ static ssize_t set_attr_rdpmc(struct device *cdev,
+ 	if (x86_pmu.attr_rdpmc_broken)
+ 		return -ENOTSUPP;
+ 
++	guard(mutex)(&rdpmc_mutex);
++
+ 	if (val != x86_pmu.attr_rdpmc) {
+ 		/*
+ 		 * Changing into or out of never available or always available,
+diff --git a/arch/x86/events/intel/cstate.c b/arch/x86/events/intel/cstate.c
+index 9d6e8f13d13a7..dd18320558914 100644
+--- a/arch/x86/events/intel/cstate.c
++++ b/arch/x86/events/intel/cstate.c
+@@ -81,7 +81,7 @@
+  *	MSR_PKG_C7_RESIDENCY:  Package C7 Residency Counter.
+  *			       perf code: 0x03
+  *			       Available model: NHM,WSM,SNB,IVB,HSW,BDW,SKL,CNL,
+- *						KBL,CML,ICL,TGL,RKL,ADL,RPL,MTL
++ *						KBL,CML,ICL,TGL,RKL
+  *			       Scope: Package (physical package)
+  *	MSR_PKG_C8_RESIDENCY:  Package C8 Residency Counter.
+  *			       perf code: 0x04
+@@ -90,8 +90,7 @@
+  *			       Scope: Package (physical package)
+  *	MSR_PKG_C9_RESIDENCY:  Package C9 Residency Counter.
+  *			       perf code: 0x05
+- *			       Available model: HSW ULT,KBL,CNL,CML,ICL,TGL,RKL,
+- *						ADL,RPL,MTL
++ *			       Available model: HSW ULT,KBL,CNL,CML,ICL,TGL,RKL
+  *			       Scope: Package (physical package)
+  *	MSR_PKG_C10_RESIDENCY: Package C10 Residency Counter.
+  *			       perf code: 0x06
+@@ -637,9 +636,7 @@ static const struct cstate_model adl_cstates __initconst = {
+ 	.pkg_events		= BIT(PERF_CSTATE_PKG_C2_RES) |
+ 				  BIT(PERF_CSTATE_PKG_C3_RES) |
+ 				  BIT(PERF_CSTATE_PKG_C6_RES) |
+-				  BIT(PERF_CSTATE_PKG_C7_RES) |
+ 				  BIT(PERF_CSTATE_PKG_C8_RES) |
+-				  BIT(PERF_CSTATE_PKG_C9_RES) |
+ 				  BIT(PERF_CSTATE_PKG_C10_RES),
+ };
+ 
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index e010bfed84170..80a4f712217b7 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -1831,8 +1831,12 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
+ 	set_linear_ip(regs, basic->ip);
+ 	regs->flags = PERF_EFLAGS_EXACT;
+ 
+-	if ((sample_type & PERF_SAMPLE_WEIGHT_STRUCT) && (x86_pmu.flags & PMU_FL_RETIRE_LATENCY))
+-		data->weight.var3_w = format_size >> PEBS_RETIRE_LATENCY_OFFSET & PEBS_LATENCY_MASK;
++	if (sample_type & PERF_SAMPLE_WEIGHT_STRUCT) {
++		if (x86_pmu.flags & PMU_FL_RETIRE_LATENCY)
++			data->weight.var3_w = format_size >> PEBS_RETIRE_LATENCY_OFFSET & PEBS_LATENCY_MASK;
++		else
++			data->weight.var3_w = 0;
++	}
+ 
+ 	/*
+ 	 * The record for MEMINFO is in front of GP
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 14db6d9d318b3..b4aa8daa47738 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -878,7 +878,7 @@ static void pt_update_head(struct pt *pt)
+  */
+ static void *pt_buffer_region(struct pt_buffer *buf)
+ {
+-	return phys_to_virt(TOPA_ENTRY(buf->cur, buf->cur_idx)->base << TOPA_SHIFT);
++	return phys_to_virt((phys_addr_t)TOPA_ENTRY(buf->cur, buf->cur_idx)->base << TOPA_SHIFT);
+ }
+ 
+ /**
+@@ -990,7 +990,7 @@ pt_topa_entry_for_page(struct pt_buffer *buf, unsigned int pg)
+ 	 * order allocations, there shouldn't be many of these.
+ 	 */
+ 	list_for_each_entry(topa, &buf->tables, list) {
+-		if (topa->offset + topa->size > pg << PAGE_SHIFT)
++		if (topa->offset + topa->size > (unsigned long)pg << PAGE_SHIFT)
+ 			goto found;
+ 	}
+ 
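
Both pt.c fixes above guard against losing high address bits: the ToPA base bit-field is explicitly widened to phys_addr_t before the TOPA_SHIFT shift, and the unsigned int page index is widened to unsigned long before PAGE_SHIFT. A minimal illustration of the second, simpler case (shifting a 32-bit value before widening it), with assumed values:

/* shift_width_sketch.c - why the casts in the pt.c hunks are needed */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12

int main(void)
{
    unsigned int pg = 0x00200000;   /* page index that crosses 4 GiB once shifted */

    uint64_t wrong = pg << PAGE_SHIFT;             /* shift done in 32 bits: truncates */
    uint64_t right = (uint64_t)pg << PAGE_SHIFT;   /* widen first, as the fix does */

    printf("wrong = 0x%llx\n", (unsigned long long)wrong);  /* 0x0 */
    printf("right = 0x%llx\n", (unsigned long long)right);  /* 0x200000000 */
    return 0;
}
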
+diff --git a/arch/x86/events/intel/pt.h b/arch/x86/events/intel/pt.h
+index 96906a62aacda..f5e46c04c145d 100644
+--- a/arch/x86/events/intel/pt.h
++++ b/arch/x86/events/intel/pt.h
+@@ -33,8 +33,8 @@ struct topa_entry {
+ 	u64	rsvd2	: 1;
+ 	u64	size	: 4;
+ 	u64	rsvd3	: 2;
+-	u64	base	: 36;
+-	u64	rsvd4	: 16;
++	u64	base	: 40;
++	u64	rsvd4	: 12;
+ };
+ 
+ /* TSC to Core Crystal Clock Ratio */
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 74b8b21e8990b..1891c2c7823b1 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -462,6 +462,7 @@
+ #define SPR_UBOX_DID				0x3250
+ 
+ /* SPR CHA */
++#define SPR_CHA_EVENT_MASK_EXT			0xffffffff
+ #define SPR_CHA_PMON_CTL_TID_EN			(1 << 16)
+ #define SPR_CHA_PMON_EVENT_MASK			(SNBEP_PMON_RAW_EVENT_MASK | \
+ 						 SPR_CHA_PMON_CTL_TID_EN)
+@@ -478,6 +479,7 @@ DEFINE_UNCORE_FORMAT_ATTR(umask_ext, umask, "config:8-15,32-43,45-55");
+ DEFINE_UNCORE_FORMAT_ATTR(umask_ext2, umask, "config:8-15,32-57");
+ DEFINE_UNCORE_FORMAT_ATTR(umask_ext3, umask, "config:8-15,32-39");
+ DEFINE_UNCORE_FORMAT_ATTR(umask_ext4, umask, "config:8-15,32-55");
++DEFINE_UNCORE_FORMAT_ATTR(umask_ext5, umask, "config:8-15,32-63");
+ DEFINE_UNCORE_FORMAT_ATTR(qor, qor, "config:16");
+ DEFINE_UNCORE_FORMAT_ATTR(edge, edge, "config:18");
+ DEFINE_UNCORE_FORMAT_ATTR(tid_en, tid_en, "config:19");
+@@ -5958,7 +5960,7 @@ static struct intel_uncore_ops spr_uncore_chabox_ops = {
+ 
+ static struct attribute *spr_uncore_cha_formats_attr[] = {
+ 	&format_attr_event.attr,
+-	&format_attr_umask_ext4.attr,
++	&format_attr_umask_ext5.attr,
+ 	&format_attr_tid_en2.attr,
+ 	&format_attr_edge.attr,
+ 	&format_attr_inv.attr,
+@@ -5994,7 +5996,7 @@ ATTRIBUTE_GROUPS(uncore_alias);
+ static struct intel_uncore_type spr_uncore_chabox = {
+ 	.name			= "cha",
+ 	.event_mask		= SPR_CHA_PMON_EVENT_MASK,
+-	.event_mask_ext		= SPR_RAW_EVENT_MASK_EXT,
++	.event_mask_ext		= SPR_CHA_EVENT_MASK_EXT,
+ 	.num_shared_regs	= 1,
+ 	.constraints		= skx_uncore_chabox_constraints,
+ 	.ops			= &spr_uncore_chabox_ops,
+diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
+index 5187fcf4b610b..8d5a2b4f5f006 100644
+--- a/arch/x86/include/asm/kvm-x86-ops.h
++++ b/arch/x86/include/asm/kvm-x86-ops.h
+@@ -85,7 +85,6 @@ KVM_X86_OP_OPTIONAL(update_cr8_intercept)
+ KVM_X86_OP(refresh_apicv_exec_ctrl)
+ KVM_X86_OP_OPTIONAL(hwapic_irr_update)
+ KVM_X86_OP_OPTIONAL(hwapic_isr_update)
+-KVM_X86_OP_OPTIONAL_RET0(guest_apic_has_interrupt)
+ KVM_X86_OP_OPTIONAL(load_eoi_exitmap)
+ KVM_X86_OP_OPTIONAL(set_virtual_apic_mode)
+ KVM_X86_OP_OPTIONAL(set_apic_access_page_addr)
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index f8ca74e7678f3..d0274b3be2c40 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1714,7 +1714,6 @@ struct kvm_x86_ops {
+ 	void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu);
+ 	void (*hwapic_irr_update)(struct kvm_vcpu *vcpu, int max_irr);
+ 	void (*hwapic_isr_update)(int isr);
+-	bool (*guest_apic_has_interrupt)(struct kvm_vcpu *vcpu);
+ 	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
+ 	void (*set_virtual_apic_mode)(struct kvm_vcpu *vcpu);
+ 	void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu);
+@@ -1819,7 +1818,7 @@ struct kvm_x86_nested_ops {
+ 	bool (*is_exception_vmexit)(struct kvm_vcpu *vcpu, u8 vector,
+ 				    u32 error_code);
+ 	int (*check_events)(struct kvm_vcpu *vcpu);
+-	bool (*has_events)(struct kvm_vcpu *vcpu);
++	bool (*has_events)(struct kvm_vcpu *vcpu, bool for_injection);
+ 	void (*triple_fault)(struct kvm_vcpu *vcpu);
+ 	int (*get_state)(struct kvm_vcpu *vcpu,
+ 			 struct kvm_nested_state __user *user_kvm_nested_state,
+diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h
+index 42fee8959df7b..896909f306e30 100644
+--- a/arch/x86/include/asm/shstk.h
++++ b/arch/x86/include/asm/shstk.h
+@@ -21,6 +21,7 @@ unsigned long shstk_alloc_thread_stack(struct task_struct *p, unsigned long clon
+ void shstk_free(struct task_struct *p);
+ int setup_signal_shadow_stack(struct ksignal *ksig);
+ int restore_signal_shadow_stack(void);
++int shstk_update_last_frame(unsigned long val);
+ #else
+ static inline long shstk_prctl(struct task_struct *task, int option,
+ 			       unsigned long arg2) { return -EINVAL; }
+@@ -31,6 +32,7 @@ static inline unsigned long shstk_alloc_thread_stack(struct task_struct *p,
+ static inline void shstk_free(struct task_struct *p) {}
+ static inline int setup_signal_shadow_stack(struct ksignal *ksig) { return 0; }
+ static inline int restore_signal_shadow_stack(void) { return 0; }
++static inline int shstk_update_last_frame(unsigned long val) { return 0; }
+ #endif /* CONFIG_X86_USER_SHADOW_STACK */
+ 
+ #endif /* __ASSEMBLY__ */
+diff --git a/arch/x86/kernel/devicetree.c b/arch/x86/kernel/devicetree.c
+index 8e3c53b4d070e..64280879c68c0 100644
+--- a/arch/x86/kernel/devicetree.c
++++ b/arch/x86/kernel/devicetree.c
+@@ -83,7 +83,7 @@ static int x86_of_pci_irq_enable(struct pci_dev *dev)
+ 
+ 	ret = pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin);
+ 	if (ret)
+-		return ret;
++		return pcibios_err_to_errno(ret);
+ 	if (!pin)
+ 		return 0;
+ 
+diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
+index 6f1e9883f0742..9797d4cdb78a2 100644
+--- a/arch/x86/kernel/shstk.c
++++ b/arch/x86/kernel/shstk.c
+@@ -577,3 +577,14 @@ long shstk_prctl(struct task_struct *task, int option, unsigned long arg2)
+ 		return wrss_control(true);
+ 	return -EINVAL;
+ }
++
++int shstk_update_last_frame(unsigned long val)
++{
++	unsigned long ssp;
++
++	if (!features_enabled(ARCH_SHSTK_SHSTK))
++		return 0;
++
++	ssp = get_user_shstk_addr();
++	return write_user_shstk_64((u64 __user *)ssp, (u64)val);
++}
+diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
+index 6c07f6daaa227..6402fb3089d26 100644
+--- a/arch/x86/kernel/uprobes.c
++++ b/arch/x86/kernel/uprobes.c
+@@ -1076,8 +1076,13 @@ arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr, struct pt_regs
+ 		return orig_ret_vaddr;
+ 
+ 	nleft = copy_to_user((void __user *)regs->sp, &trampoline_vaddr, rasize);
+-	if (likely(!nleft))
++	if (likely(!nleft)) {
++		if (shstk_update_last_frame(trampoline_vaddr)) {
++			force_sig(SIGSEGV);
++			return -1;
++		}
+ 		return orig_ret_vaddr;
++	}
+ 
+ 	if (nleft != rasize) {
+ 		pr_err("return address clobbered: pid=%d, %%sp=%#lx, %%ip=%#lx\n",
+diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
+index d4ed681785fd6..547fca3709feb 100644
+--- a/arch/x86/kvm/vmx/main.c
++++ b/arch/x86/kvm/vmx/main.c
+@@ -97,7 +97,6 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
+ 	.required_apicv_inhibits = VMX_REQUIRED_APICV_INHIBITS,
+ 	.hwapic_irr_update = vmx_hwapic_irr_update,
+ 	.hwapic_isr_update = vmx_hwapic_isr_update,
+-	.guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
+ 	.sync_pir_to_irr = vmx_sync_pir_to_irr,
+ 	.deliver_interrupt = vmx_deliver_interrupt,
+ 	.dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 643935a0f70ab..7c57d6524f754 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -12,6 +12,7 @@
+ #include "mmu.h"
+ #include "nested.h"
+ #include "pmu.h"
++#include "posted_intr.h"
+ #include "sgx.h"
+ #include "trace.h"
+ #include "vmx.h"
+@@ -3899,8 +3900,8 @@ static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
+ 	if (!pi_test_and_clear_on(vmx->nested.pi_desc))
+ 		return 0;
+ 
+-	max_irr = find_last_bit((unsigned long *)vmx->nested.pi_desc->pir, 256);
+-	if (max_irr != 256) {
++	max_irr = pi_find_highest_vector(vmx->nested.pi_desc);
++	if (max_irr > 0) {
+ 		vapic_page = vmx->nested.virtual_apic_map.hva;
+ 		if (!vapic_page)
+ 			goto mmio_needed;
+@@ -4031,10 +4032,46 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
+ 	       to_vmx(vcpu)->nested.preemption_timer_expired;
+ }
+ 
+-static bool vmx_has_nested_events(struct kvm_vcpu *vcpu)
++static bool vmx_has_nested_events(struct kvm_vcpu *vcpu, bool for_injection)
+ {
+-	return nested_vmx_preemption_timer_pending(vcpu) ||
+-	       to_vmx(vcpu)->nested.mtf_pending;
++	struct vcpu_vmx *vmx = to_vmx(vcpu);
++	void *vapic = vmx->nested.virtual_apic_map.hva;
++	int max_irr, vppr;
++
++	if (nested_vmx_preemption_timer_pending(vcpu) ||
++	    vmx->nested.mtf_pending)
++		return true;
++
++	/*
++	 * Virtual Interrupt Delivery doesn't require manual injection.  Either
++	 * the interrupt is already in GUEST_RVI and will be recognized by CPU
++	 * at VM-Entry, or there is a KVM_REQ_EVENT pending and KVM will move
++	 * the interrupt from the PIR to RVI prior to entering the guest.
++	 */
++	if (for_injection)
++		return false;
++
++	if (!nested_cpu_has_vid(get_vmcs12(vcpu)) ||
++	    __vmx_interrupt_blocked(vcpu))
++		return false;
++
++	if (!vapic)
++		return false;
++
++	vppr = *((u32 *)(vapic + APIC_PROCPRI));
++
++	max_irr = vmx_get_rvi();
++	if ((max_irr & 0xf0) > (vppr & 0xf0))
++		return true;
++
++	if (vmx->nested.pi_pending && vmx->nested.pi_desc &&
++	    pi_test_on(vmx->nested.pi_desc)) {
++		max_irr = pi_find_highest_vector(vmx->nested.pi_desc);
++		if (max_irr > 0 && (max_irr & 0xf0) > (vppr & 0xf0))
++			return true;
++	}
++
++	return false;
+ }
+ 
+ /*
+diff --git a/arch/x86/kvm/vmx/posted_intr.h b/arch/x86/kvm/vmx/posted_intr.h
+index 6b2a0226257ea..1715d2ab07be5 100644
+--- a/arch/x86/kvm/vmx/posted_intr.h
++++ b/arch/x86/kvm/vmx/posted_intr.h
+@@ -1,6 +1,8 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #ifndef __KVM_X86_VMX_POSTED_INTR_H
+ #define __KVM_X86_VMX_POSTED_INTR_H
++
++#include <linux/find.h>
+ #include <asm/posted_intr.h>
+ 
+ void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu);
+@@ -12,4 +14,12 @@ int vmx_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+ 		       uint32_t guest_irq, bool set);
+ void vmx_pi_start_assignment(struct kvm *kvm);
+ 
++static inline int pi_find_highest_vector(struct pi_desc *pi_desc)
++{
++	int vec;
++
++	vec = find_last_bit((unsigned long *)pi_desc->pir, 256);
++	return vec < 256 ? vec : -1;
++}
++
+ #endif /* __KVM_X86_VMX_POSTED_INTR_H */
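
pi_find_highest_vector() above wraps find_last_bit() over the 256-bit posted-interrupt request bitmap and turns the "no bit set" sentinel (256) into -1 so callers can simply test max_irr > 0. A rough userspace stand-in for that scan, written against plain unsigned long words rather than the kernel bitmap API (names here are hypothetical):

/* highest_vector_sketch.c - highest set bit in a 256-bit PIR-like bitmap */
#include <stdio.h>

#define PIR_BITS      256
#define BITS_PER_LONG ((int)(8 * sizeof(unsigned long)))
#define PIR_LONGS     (PIR_BITS / BITS_PER_LONG)

static int find_highest_set(const unsigned long *map)
{
    for (int word = PIR_LONGS - 1; word >= 0; word--) {
        if (!map[word])
            continue;
        for (int bit = BITS_PER_LONG - 1; bit >= 0; bit--)
            if (map[word] & (1UL << bit))
                return word * BITS_PER_LONG + bit;
    }
    return -1;   /* mirrors the 'vec < 256 ? vec : -1' conversion above */
}

int main(void)
{
    unsigned long pir[PIR_LONGS] = { 0 };

    printf("%d\n", find_highest_set(pir));   /* -1: nothing pending */
    pir[0] |= 1UL << 5;
    pir[1] |= 1UL << 3;                      /* the highest set bit overall */
    printf("%d\n", find_highest_set(pir));   /* BITS_PER_LONG + 3 (67 on LP64) */
    return 0;
}
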
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index b3c83c06f8265..2792c50869773 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -4108,26 +4108,6 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
+ 	}
+ }
+ 
+-bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
+-{
+-	struct vcpu_vmx *vmx = to_vmx(vcpu);
+-	void *vapic_page;
+-	u32 vppr;
+-	int rvi;
+-
+-	if (WARN_ON_ONCE(!is_guest_mode(vcpu)) ||
+-		!nested_cpu_has_vid(get_vmcs12(vcpu)) ||
+-		WARN_ON_ONCE(!vmx->nested.virtual_apic_map.gfn))
+-		return false;
+-
+-	rvi = vmx_get_rvi();
+-
+-	vapic_page = vmx->nested.virtual_apic_map.hva;
+-	vppr = *((u32 *)(vapic_page + APIC_PROCPRI));
+-
+-	return ((rvi & 0xf0) > (vppr & 0xf0));
+-}
+-
+ void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+@@ -5052,14 +5032,19 @@ int vmx_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+ 	return !vmx_nmi_blocked(vcpu);
+ }
+ 
++bool __vmx_interrupt_blocked(struct kvm_vcpu *vcpu)
++{
++	return !(vmx_get_rflags(vcpu) & X86_EFLAGS_IF) ||
++	       (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
++		(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
++}
++
+ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu)
+ {
+ 	if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
+ 		return false;
+ 
+-	return !(vmx_get_rflags(vcpu) & X86_EFLAGS_IF) ||
+-	       (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
+-		(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
++	return __vmx_interrupt_blocked(vcpu);
+ }
+ 
+ int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 7b64e271a9319..2e23a01fe3206 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -406,6 +406,7 @@ u64 construct_eptp(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level);
+ bool vmx_guest_inject_ac(struct kvm_vcpu *vcpu);
+ void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu);
+ bool vmx_nmi_blocked(struct kvm_vcpu *vcpu);
++bool __vmx_interrupt_blocked(struct kvm_vcpu *vcpu);
+ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu);
+ bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
+ void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
+diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
+index 502704596c832..d404227c164d6 100644
+--- a/arch/x86/kvm/vmx/x86_ops.h
++++ b/arch/x86/kvm/vmx/x86_ops.h
+@@ -49,7 +49,6 @@ void vmx_apicv_pre_state_restore(struct kvm_vcpu *vcpu);
+ bool vmx_check_apicv_inhibit_reasons(enum kvm_apicv_inhibit reason);
+ void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
+ void vmx_hwapic_isr_update(int max_isr);
+-bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu);
+ int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu);
+ void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+ 			   int trig_mode, int vector);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0763a0f72a067..0b7adf3bc58a6 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10516,7 +10516,7 @@ static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu,
+ 
+ 	if (is_guest_mode(vcpu) &&
+ 	    kvm_x86_ops.nested_ops->has_events &&
+-	    kvm_x86_ops.nested_ops->has_events(vcpu))
++	    kvm_x86_ops.nested_ops->has_events(vcpu, true))
+ 		*req_immediate_exit = true;
+ 
+ 	/*
+@@ -13100,12 +13100,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
+ 		kvm_arch_free_memslot(kvm, old);
+ }
+ 
+-static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
+-{
+-	return (is_guest_mode(vcpu) &&
+-		static_call(kvm_x86_guest_apic_has_interrupt)(vcpu));
+-}
+-
+ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
+ {
+ 	if (!list_empty_careful(&vcpu->async_pf.done))
+@@ -13136,9 +13130,7 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
+ 	if (kvm_test_request(KVM_REQ_PMI, vcpu))
+ 		return true;
+ 
+-	if (kvm_arch_interrupt_allowed(vcpu) &&
+-	    (kvm_cpu_has_interrupt(vcpu) ||
+-	    kvm_guest_apic_has_interrupt(vcpu)))
++	if (kvm_arch_interrupt_allowed(vcpu) && kvm_cpu_has_interrupt(vcpu))
+ 		return true;
+ 
+ 	if (kvm_hv_has_stimer_pending(vcpu))
+@@ -13146,7 +13138,7 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
+ 
+ 	if (is_guest_mode(vcpu) &&
+ 	    kvm_x86_ops.nested_ops->has_events &&
+-	    kvm_x86_ops.nested_ops->has_events(vcpu))
++	    kvm_x86_ops.nested_ops->has_events(vcpu, false))
+ 		return true;
+ 
+ 	if (kvm_xen_has_pending_events(vcpu))
+diff --git a/arch/x86/pci/intel_mid_pci.c b/arch/x86/pci/intel_mid_pci.c
+index 8edd622066044..722a33be08a18 100644
+--- a/arch/x86/pci/intel_mid_pci.c
++++ b/arch/x86/pci/intel_mid_pci.c
+@@ -233,9 +233,9 @@ static int intel_mid_pci_irq_enable(struct pci_dev *dev)
+ 		return 0;
+ 
+ 	ret = pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &gsi);
+-	if (ret < 0) {
++	if (ret) {
+ 		dev_warn(&dev->dev, "Failed to read interrupt line: %d\n", ret);
+-		return ret;
++		return pcibios_err_to_errno(ret);
+ 	}
+ 
+ 	id = x86_match_cpu(intel_mid_cpu_ids);
+diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
+index 652cd53e77f64..0f2fe524f60dc 100644
+--- a/arch/x86/pci/xen.c
++++ b/arch/x86/pci/xen.c
+@@ -38,10 +38,10 @@ static int xen_pcifront_enable_irq(struct pci_dev *dev)
+ 	u8 gsi;
+ 
+ 	rc = pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &gsi);
+-	if (rc < 0) {
++	if (rc) {
+ 		dev_warn(&dev->dev, "Xen PCI: failed to read interrupt line: %d\n",
+ 			 rc);
+-		return rc;
++		return pcibios_err_to_errno(rc);
+ 	}
+ 	/* In PV DomU the Xen PCI backend puts the PIRQ in the interrupt line.*/
+ 	pirq = gsi;
+diff --git a/arch/x86/platform/intel/iosf_mbi.c b/arch/x86/platform/intel/iosf_mbi.c
+index fdd49d70b4373..c81cea208c2c4 100644
+--- a/arch/x86/platform/intel/iosf_mbi.c
++++ b/arch/x86/platform/intel/iosf_mbi.c
+@@ -62,7 +62,7 @@ static int iosf_mbi_pci_read_mdr(u32 mcrx, u32 mcr, u32 *mdr)
+ 
+ fail_read:
+ 	dev_err(&mbi_pdev->dev, "PCI config access failed with %d\n", result);
+-	return result;
++	return pcibios_err_to_errno(result);
+ }
+ 
+ static int iosf_mbi_pci_write_mdr(u32 mcrx, u32 mcr, u32 mdr)
+@@ -91,7 +91,7 @@ static int iosf_mbi_pci_write_mdr(u32 mcrx, u32 mcr, u32 mdr)
+ 
+ fail_write:
+ 	dev_err(&mbi_pdev->dev, "PCI config access failed with %d\n", result);
+-	return result;
++	return pcibios_err_to_errno(result);
+ }
+ 
+ int iosf_mbi_read(u8 port, u8 opcode, u32 offset, u32 *mdr)
+diff --git a/arch/x86/um/sys_call_table_32.c b/arch/x86/um/sys_call_table_32.c
+index 89df5d89d6640..51655133eee36 100644
+--- a/arch/x86/um/sys_call_table_32.c
++++ b/arch/x86/um/sys_call_table_32.c
+@@ -9,6 +9,10 @@
+ #include <linux/cache.h>
+ #include <asm/syscall.h>
+ 
++extern asmlinkage long sys_ni_syscall(unsigned long, unsigned long,
++				      unsigned long, unsigned long,
++				      unsigned long, unsigned long);
++
+ /*
+  * Below you can see, in terms of #define's, the differences between the x86-64
+  * and the UML syscall table.
+@@ -22,15 +26,13 @@
+ #define sys_vm86 sys_ni_syscall
+ 
+ #define __SYSCALL_WITH_COMPAT(nr, native, compat)	__SYSCALL(nr, native)
++#define __SYSCALL_NORETURN __SYSCALL
+ 
+ #define __SYSCALL(nr, sym) extern asmlinkage long sym(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
+ #include <asm/syscalls_32.h>
++#undef  __SYSCALL
+ 
+-#undef __SYSCALL
+ #define __SYSCALL(nr, sym) sym,
+-
+-extern asmlinkage long sys_ni_syscall(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
+-
+ const sys_call_ptr_t sys_call_table[] ____cacheline_aligned = {
+ #include <asm/syscalls_32.h>
+ };
+diff --git a/arch/x86/um/sys_call_table_64.c b/arch/x86/um/sys_call_table_64.c
+index b0b4cfd2308c8..943d414f21093 100644
+--- a/arch/x86/um/sys_call_table_64.c
++++ b/arch/x86/um/sys_call_table_64.c
+@@ -9,6 +9,10 @@
+ #include <linux/cache.h>
+ #include <asm/syscall.h>
+ 
++extern asmlinkage long sys_ni_syscall(unsigned long, unsigned long,
++				      unsigned long, unsigned long,
++				      unsigned long, unsigned long);
++
+ /*
+  * Below you can see, in terms of #define's, the differences between the x86-64
+  * and the UML syscall table.
+@@ -18,14 +22,13 @@
+ #define sys_iopl sys_ni_syscall
+ #define sys_ioperm sys_ni_syscall
+ 
++#define __SYSCALL_NORETURN __SYSCALL
++
+ #define __SYSCALL(nr, sym) extern asmlinkage long sym(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
+ #include <asm/syscalls_64.h>
++#undef  __SYSCALL
+ 
+-#undef __SYSCALL
+ #define __SYSCALL(nr, sym) sym,
+-
+-extern asmlinkage long sys_ni_syscall(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
+-
+ const sys_call_ptr_t sys_call_table[] ____cacheline_aligned = {
+ #include <asm/syscalls_64.h>
+ };
+diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
+index 0ae10535c6999..0ce17766c0e52 100644
+--- a/arch/x86/virt/svm/sev.c
++++ b/arch/x86/virt/svm/sev.c
+@@ -120,7 +120,7 @@ static __init void snp_enable(void *arg)
+ 
+ bool snp_probe_rmptable_info(void)
+ {
+-	u64 max_rmp_pfn, calc_rmp_sz, rmp_sz, rmp_base, rmp_end;
++	u64 rmp_sz, rmp_base, rmp_end;
+ 
+ 	rdmsrl(MSR_AMD64_RMP_BASE, rmp_base);
+ 	rdmsrl(MSR_AMD64_RMP_END, rmp_end);
+@@ -137,28 +137,11 @@ bool snp_probe_rmptable_info(void)
+ 
+ 	rmp_sz = rmp_end - rmp_base + 1;
+ 
+-	/*
+-	 * Calculate the amount the memory that must be reserved by the BIOS to
+-	 * address the whole RAM, including the bookkeeping area. The RMP itself
+-	 * must also be covered.
+-	 */
+-	max_rmp_pfn = max_pfn;
+-	if (PHYS_PFN(rmp_end) > max_pfn)
+-		max_rmp_pfn = PHYS_PFN(rmp_end);
+-
+-	calc_rmp_sz = (max_rmp_pfn << 4) + RMPTABLE_CPU_BOOKKEEPING_SZ;
+-
+-	if (calc_rmp_sz > rmp_sz) {
+-		pr_err("Memory reserved for the RMP table does not cover full system RAM (expected 0x%llx got 0x%llx)\n",
+-		       calc_rmp_sz, rmp_sz);
+-		return false;
+-	}
+-
+ 	probed_rmp_base = rmp_base;
+ 	probed_rmp_size = rmp_sz;
+ 
+ 	pr_info("RMP table physical range [0x%016llx - 0x%016llx]\n",
+-		probed_rmp_base, probed_rmp_base + probed_rmp_size - 1);
++		rmp_base, rmp_end);
+ 
+ 	return true;
+ }
+@@ -206,9 +189,8 @@ void __init snp_fixup_e820_tables(void)
+  */
+ static int __init snp_rmptable_init(void)
+ {
++	u64 max_rmp_pfn, calc_rmp_sz, rmptable_size, rmp_end, val;
+ 	void *rmptable_start;
+-	u64 rmptable_size;
+-	u64 val;
+ 
+ 	if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))
+ 		return 0;
+@@ -219,10 +201,28 @@ static int __init snp_rmptable_init(void)
+ 	if (!probed_rmp_size)
+ 		goto nosnp;
+ 
++	rmp_end = probed_rmp_base + probed_rmp_size - 1;
++
++	/*
++	 * Calculate the amount the memory that must be reserved by the BIOS to
++	 * address the whole RAM, including the bookkeeping area. The RMP itself
++	 * must also be covered.
++	 */
++	max_rmp_pfn = max_pfn;
++	if (PFN_UP(rmp_end) > max_pfn)
++		max_rmp_pfn = PFN_UP(rmp_end);
++
++	calc_rmp_sz = (max_rmp_pfn << 4) + RMPTABLE_CPU_BOOKKEEPING_SZ;
++	if (calc_rmp_sz > probed_rmp_size) {
++		pr_err("Memory reserved for the RMP table does not cover full system RAM (expected 0x%llx got 0x%llx)\n",
++		       calc_rmp_sz, probed_rmp_size);
++		goto nosnp;
++	}
++
+ 	rmptable_start = memremap(probed_rmp_base, probed_rmp_size, MEMREMAP_WB);
+ 	if (!rmptable_start) {
+ 		pr_err("Failed to map RMP table\n");
+-		return 1;
++		goto nosnp;
+ 	}
+ 
+ 	/*
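
The sev.c change above moves the coverage check into snp_rmptable_init() and sizes the required reservation as 16 bytes of RMP entry per page frame (max_rmp_pfn << 4) plus a fixed bookkeeping area. A small worked example of that arithmetic, with an assumed RAM amount and an assumed bookkeeping size (illustration only):

/* rmp_size_sketch.c - sizing rule from snp_rmptable_init() */
#include <stdio.h>

#define RMPTABLE_CPU_BOOKKEEPING_SZ (16 * 1024ULL)   /* assumed value for the sketch */

int main(void)
{
    unsigned long long max_rmp_pfn = (16ULL << 30) >> 12;   /* frames in 16 GiB of RAM */
    unsigned long long calc_rmp_sz = (max_rmp_pfn << 4) + RMPTABLE_CPU_BOOKKEEPING_SZ;

    printf("%llu MiB\n", calc_rmp_sz >> 20);   /* 64 MiB, plus the bookkeeping area */
    return 0;
}
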
+diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
+index 99918beccd80c..6bcbdf3b7999f 100644
+--- a/arch/x86/xen/p2m.c
++++ b/arch/x86/xen/p2m.c
+@@ -730,7 +730,7 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+ 		 * immediate unmapping.
+ 		 */
+ 		map_ops[i].status = GNTST_general_error;
+-		unmap[0].host_addr = map_ops[i].host_addr,
++		unmap[0].host_addr = map_ops[i].host_addr;
+ 		unmap[0].handle = map_ops[i].handle;
+ 		map_ops[i].handle = INVALID_GRANT_HANDLE;
+ 		if (map_ops[i].flags & GNTMAP_device_map)
+@@ -740,7 +740,7 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+ 
+ 		if (kmap_ops) {
+ 			kmap_ops[i].status = GNTST_general_error;
+-			unmap[1].host_addr = kmap_ops[i].host_addr,
++			unmap[1].host_addr = kmap_ops[i].host_addr;
+ 			unmap[1].handle = kmap_ops[i].handle;
+ 			kmap_ops[i].handle = INVALID_GRANT_HANDLE;
+ 			if (kmap_ops[i].flags & GNTMAP_device_map)
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index 8b528e12136f5..741581a752c47 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -454,6 +454,7 @@ bool bio_integrity_prep(struct bio *bio)
+ 	unsigned long start, end;
+ 	unsigned int len, nr_pages;
+ 	unsigned int bytes, offset, i;
++	gfp_t gfp = GFP_NOIO;
+ 
+ 	if (!bi)
+ 		return true;
+@@ -476,11 +477,19 @@ bool bio_integrity_prep(struct bio *bio)
+ 		if (!bi->profile->generate_fn ||
+ 		    !(bi->flags & BLK_INTEGRITY_GENERATE))
+ 			return true;
++
++		/*
++		 * Zero the memory allocated to not leak uninitialized kernel
++		 * memory to disk.  For PI this only affects the app tag, but
++		 * for non-integrity metadata it affects the entire metadata
++		 * buffer.
++		 */
++		gfp |= __GFP_ZERO;
+ 	}
+ 
+ 	/* Allocate kernel buffer for protection data */
+ 	len = bio_integrity_bytes(bi, bio_sectors(bio));
+-	buf = kmalloc(len, GFP_NOIO);
++	buf = kmalloc(len, gfp);
+ 	if (unlikely(buf == NULL)) {
+ 		printk(KERN_ERR "could not allocate integrity buffer\n");
+ 		goto err_end_io;
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 3b4df8e5ac9e5..1c85a5c79ee63 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -448,6 +448,10 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
+ 	if (data->cmd_flags & REQ_NOWAIT)
+ 		data->flags |= BLK_MQ_REQ_NOWAIT;
+ 
++retry:
++	data->ctx = blk_mq_get_ctx(q);
++	data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);
++
+ 	if (q->elevator) {
+ 		/*
+ 		 * All requests use scheduler tags when an I/O scheduler is
+@@ -469,13 +473,9 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
+ 			if (ops->limit_depth)
+ 				ops->limit_depth(data->cmd_flags, data);
+ 		}
+-	}
+-
+-retry:
+-	data->ctx = blk_mq_get_ctx(q);
+-	data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);
+-	if (!(data->rq_flags & RQF_SCHED_TAGS))
++	} else {
+ 		blk_mq_tag_busy(data->hctx);
++	}
+ 
+ 	if (data->flags & BLK_MQ_REQ_RESERVED)
+ 		data->rq_flags |= RQF_RESV;
+@@ -2914,6 +2914,17 @@ static void blk_mq_use_cached_rq(struct request *rq, struct blk_plug *plug,
+ 	INIT_LIST_HEAD(&rq->queuelist);
+ }
+ 
++static bool bio_unaligned(const struct bio *bio, struct request_queue *q)
++{
++	unsigned int bs_mask = queue_logical_block_size(q) - 1;
++
++	/* .bi_sector of any zero sized bio need to be initialized */
++	if ((bio->bi_iter.bi_size & bs_mask) ||
++	    ((bio->bi_iter.bi_sector << SECTOR_SHIFT) & bs_mask))
++		return true;
++	return false;
++}
++
+ /**
+  * blk_mq_submit_bio - Create and send a request to block device.
+  * @bio: Bio pointer.
+@@ -2966,6 +2977,15 @@ void blk_mq_submit_bio(struct bio *bio)
+ 			return;
+ 	}
+ 
++	/*
++	 * Device reconfiguration may change logical block size, so alignment
++	 * check has to be done with queue usage counter held
++	 */
++	if (unlikely(bio_unaligned(bio, q))) {
++		bio_io_error(bio);
++		goto queue_exit;
++	}
++
+ 	if (unlikely(bio_may_exceed_limits(bio, &q->limits))) {
+ 		bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
+ 		if (!bio)
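
bio_unaligned() above rejects a bio whose byte length or starting byte offset (bi_sector << SECTOR_SHIFT) is not a multiple of the queue's logical block size, and the check is performed only once the queue usage counter is held because device reconfiguration can change that block size. A small standalone example of the mask test, with assumed values:

/* bs_mask_sketch.c - the power-of-two alignment test used by bio_unaligned() */
#include <stdio.h>
#include <stdint.h>

#define SECTOR_SHIFT 9

static int is_unaligned(uint64_t sector, uint32_t size, uint32_t lbs)
{
    uint32_t bs_mask = lbs - 1;   /* lbs must be a power of two */

    return (size & bs_mask) || ((sector << SECTOR_SHIFT) & bs_mask);
}

int main(void)
{
    /* 4096-byte logical blocks: sector 8 (byte 4096) and an 8 KiB length are aligned */
    printf("%d\n", is_unaligned(8, 8192, 4096));   /* 0 */
    /* sector 1 starts at byte 512, which is not a 4096-byte boundary */
    printf("%d\n", is_unaligned(1, 8192, 4096));   /* 1 */
    return 0;
}
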
+diff --git a/block/genhd.c b/block/genhd.c
+index 8f1f3c6b4d672..c5fca3e893a06 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -663,12 +663,12 @@ void del_gendisk(struct gendisk *disk)
+ 	 */
+ 	if (!test_bit(GD_DEAD, &disk->state))
+ 		blk_report_disk_dead(disk, false);
+-	__blk_mark_disk_dead(disk);
+ 
+ 	/*
+ 	 * Drop all partitions now that the disk is marked dead.
+ 	 */
+ 	mutex_lock(&disk->open_mutex);
++	__blk_mark_disk_dead(disk);
+ 	xa_for_each_start(&disk->part_tbl, idx, part, 1)
+ 		drop_partition(part);
+ 	mutex_unlock(&disk->open_mutex);
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 94eede4fb9ebe..acdc28756d9d7 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -487,6 +487,20 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
+ 	return rq;
+ }
+ 
++/*
++ * 'depth' is a number in the range 1..INT_MAX representing a number of
++ * requests. Scale it with a factor (1 << bt->sb.shift) / q->nr_requests since
++ * 1..(1 << bt->sb.shift) is the range expected by sbitmap_get_shallow().
++ * Values larger than q->nr_requests have the same effect as q->nr_requests.
++ */
++static int dd_to_word_depth(struct blk_mq_hw_ctx *hctx, unsigned int qdepth)
++{
++	struct sbitmap_queue *bt = &hctx->sched_tags->bitmap_tags;
++	const unsigned int nrr = hctx->queue->nr_requests;
++
++	return ((qdepth << bt->sb.shift) + nrr - 1) / nrr;
++}
++
+ /*
+  * Called by __blk_mq_alloc_request(). The shallow_depth value set by this
+  * function is used by __blk_mq_get_tag().
+@@ -503,7 +517,7 @@ static void dd_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ 	 * Throttle asynchronous requests and writes such that these requests
+ 	 * do not block the allocation of synchronous requests.
+ 	 */
+-	data->shallow_depth = dd->async_depth;
++	data->shallow_depth = dd_to_word_depth(data->hctx, dd->async_depth);
+ }
+ 
+ /* Called by blk_mq_update_nr_requests(). */
+@@ -513,9 +527,9 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
+ 	struct deadline_data *dd = q->elevator->elevator_data;
+ 	struct blk_mq_tags *tags = hctx->sched_tags;
+ 
+-	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
++	dd->async_depth = q->nr_requests;
+ 
+-	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
++	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, 1);
+ }
+ 
+ /* Called by blk_mq_init_hctx() and blk_mq_init_sched(). */
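
dd_to_word_depth() above rescales an async-depth limit expressed in requests (1..nr_requests) into the 1..(1 << sb.shift) per-word range that sbitmap_get_shallow() works with, rounding up so a non-zero limit never becomes zero. A worked example of that ceiling division with assumed numbers:

/* word_depth_sketch.c - the rounding-up rescale used by dd_to_word_depth() */
#include <stdio.h>

static int to_word_depth(int qdepth, int shift, int nr_requests)
{
    /* ceil(qdepth * (1 << shift) / nr_requests) */
    return ((qdepth << shift) + nr_requests - 1) / nr_requests;
}

int main(void)
{
    /* assume 64-bit sbitmap words (shift = 6) and 256 schedulable requests:
     * an async_depth of 192 maps to 48 of the 64 bits in each word,
     * and even a depth of 1 still rounds up to 1 rather than 0. */
    printf("%d\n", to_word_depth(192, 6, 256));   /* 48 */
    printf("%d\n", to_word_depth(1, 6, 256));     /* 1 */
    return 0;
}
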
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index b21a7b246a0dc..2d0a24a565084 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -570,9 +570,7 @@ static bool binder_has_work(struct binder_thread *thread, bool do_proc_work)
+ static bool binder_available_for_proc_work_ilocked(struct binder_thread *thread)
+ {
+ 	return !thread->transaction_stack &&
+-		binder_worklist_empty_ilocked(&thread->todo) &&
+-		(thread->looper & (BINDER_LOOPER_STATE_ENTERED |
+-				   BINDER_LOOPER_STATE_REGISTERED));
++		binder_worklist_empty_ilocked(&thread->todo);
+ }
+ 
+ static void binder_wakeup_poll_threads_ilocked(struct binder_proc *proc,
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index bb4d30d377ae5..076fbeadce015 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -230,6 +230,80 @@ void ata_scsi_set_sense_information(struct ata_device *dev,
+ 				   SCSI_SENSE_BUFFERSIZE, information);
+ }
+ 
++/**
++ *	ata_scsi_set_passthru_sense_fields - Set ATA fields in sense buffer
++ *	@qc: ATA PASS-THROUGH command.
++ *
++ *	Populates "ATA Status Return sense data descriptor" / "Fixed format
++ *	sense data" with ATA taskfile fields.
++ *
++ *	LOCKING:
++ *	None.
++ */
++static void ata_scsi_set_passthru_sense_fields(struct ata_queued_cmd *qc)
++{
++	struct scsi_cmnd *cmd = qc->scsicmd;
++	struct ata_taskfile *tf = &qc->result_tf;
++	unsigned char *sb = cmd->sense_buffer;
++
++	if ((sb[0] & 0x7f) >= 0x72) {
++		unsigned char *desc;
++		u8 len;
++
++		/* descriptor format */
++		len = sb[7];
++		desc = (char *)scsi_sense_desc_find(sb, len + 8, 9);
++		if (!desc) {
++			if (SCSI_SENSE_BUFFERSIZE < len + 14)
++				return;
++			sb[7] = len + 14;
++			desc = sb + 8 + len;
++		}
++		desc[0] = 9;
++		desc[1] = 12;
++		/*
++		 * Copy registers into sense buffer.
++		 */
++		desc[2] = 0x00;
++		desc[3] = tf->error;
++		desc[5] = tf->nsect;
++		desc[7] = tf->lbal;
++		desc[9] = tf->lbam;
++		desc[11] = tf->lbah;
++		desc[12] = tf->device;
++		desc[13] = tf->status;
++
++		/*
++		 * Fill in Extend bit, and the high order bytes
++		 * if applicable.
++		 */
++		if (tf->flags & ATA_TFLAG_LBA48) {
++			desc[2] |= 0x01;
++			desc[4] = tf->hob_nsect;
++			desc[6] = tf->hob_lbal;
++			desc[8] = tf->hob_lbam;
++			desc[10] = tf->hob_lbah;
++		}
++	} else {
++		/* Fixed sense format */
++		sb[0] |= 0x80;
++		sb[3] = tf->error;
++		sb[4] = tf->status;
++		sb[5] = tf->device;
++		sb[6] = tf->nsect;
++		if (tf->flags & ATA_TFLAG_LBA48)  {
++			sb[8] |= 0x80;
++			if (tf->hob_nsect)
++				sb[8] |= 0x40;
++			if (tf->hob_lbal || tf->hob_lbam || tf->hob_lbah)
++				sb[8] |= 0x20;
++		}
++		sb[9] = tf->lbal;
++		sb[10] = tf->lbam;
++		sb[11] = tf->lbah;
++	}
++}
++
+ static void ata_scsi_set_invalid_field(struct ata_device *dev,
+ 				       struct scsi_cmnd *cmd, u16 field, u8 bit)
+ {
+@@ -837,10 +911,8 @@ static void ata_to_sense_error(unsigned id, u8 drv_stat, u8 drv_err, u8 *sk,
+  *	ata_gen_passthru_sense - Generate check condition sense block.
+  *	@qc: Command that completed.
+  *
+- *	This function is specific to the ATA descriptor format sense
+- *	block specified for the ATA pass through commands.  Regardless
+- *	of whether the command errored or not, return a sense
+- *	block. Copy all controller registers into the sense
++ *	This function is specific to the ATA pass through commands.
++ *	Regardless of whether the command errored or not, return a sense
+  *	block. If there was no error, we get the request from an ATA
+  *	passthrough command, so we use the following sense data:
+  *	sk = RECOVERED ERROR
+@@ -855,7 +927,6 @@ static void ata_gen_passthru_sense(struct ata_queued_cmd *qc)
+ 	struct scsi_cmnd *cmd = qc->scsicmd;
+ 	struct ata_taskfile *tf = &qc->result_tf;
+ 	unsigned char *sb = cmd->sense_buffer;
+-	unsigned char *desc = sb + 8;
+ 	u8 sense_key, asc, ascq;
+ 
+ 	memset(sb, 0, SCSI_SENSE_BUFFERSIZE);
+@@ -870,67 +941,8 @@ static void ata_gen_passthru_sense(struct ata_queued_cmd *qc)
+ 				   &sense_key, &asc, &ascq);
+ 		ata_scsi_set_sense(qc->dev, cmd, sense_key, asc, ascq);
+ 	} else {
+-		/*
+-		 * ATA PASS-THROUGH INFORMATION AVAILABLE
+-		 * Always in descriptor format sense.
+-		 */
+-		scsi_build_sense(cmd, 1, RECOVERED_ERROR, 0, 0x1D);
+-	}
+-
+-	if ((cmd->sense_buffer[0] & 0x7f) >= 0x72) {
+-		u8 len;
+-
+-		/* descriptor format */
+-		len = sb[7];
+-		desc = (char *)scsi_sense_desc_find(sb, len + 8, 9);
+-		if (!desc) {
+-			if (SCSI_SENSE_BUFFERSIZE < len + 14)
+-				return;
+-			sb[7] = len + 14;
+-			desc = sb + 8 + len;
+-		}
+-		desc[0] = 9;
+-		desc[1] = 12;
+-		/*
+-		 * Copy registers into sense buffer.
+-		 */
+-		desc[2] = 0x00;
+-		desc[3] = tf->error;
+-		desc[5] = tf->nsect;
+-		desc[7] = tf->lbal;
+-		desc[9] = tf->lbam;
+-		desc[11] = tf->lbah;
+-		desc[12] = tf->device;
+-		desc[13] = tf->status;
+-
+-		/*
+-		 * Fill in Extend bit, and the high order bytes
+-		 * if applicable.
+-		 */
+-		if (tf->flags & ATA_TFLAG_LBA48) {
+-			desc[2] |= 0x01;
+-			desc[4] = tf->hob_nsect;
+-			desc[6] = tf->hob_lbal;
+-			desc[8] = tf->hob_lbam;
+-			desc[10] = tf->hob_lbah;
+-		}
+-	} else {
+-		/* Fixed sense format */
+-		desc[0] = tf->error;
+-		desc[1] = tf->status;
+-		desc[2] = tf->device;
+-		desc[3] = tf->nsect;
+-		desc[7] = 0;
+-		if (tf->flags & ATA_TFLAG_LBA48)  {
+-			desc[8] |= 0x80;
+-			if (tf->hob_nsect)
+-				desc[8] |= 0x40;
+-			if (tf->hob_lbal || tf->hob_lbam || tf->hob_lbah)
+-				desc[8] |= 0x20;
+-		}
+-		desc[9] = tf->lbal;
+-		desc[10] = tf->lbam;
+-		desc[11] = tf->lbah;
++		/* ATA PASS-THROUGH INFORMATION AVAILABLE */
++		ata_scsi_set_sense(qc->dev, cmd, RECOVERED_ERROR, 0, 0x1D);
+ 	}
+ }
+ 
+@@ -1632,26 +1644,32 @@ static void ata_scsi_qc_complete(struct ata_queued_cmd *qc)
+ {
+ 	struct scsi_cmnd *cmd = qc->scsicmd;
+ 	u8 *cdb = cmd->cmnd;
+-	int need_sense = (qc->err_mask != 0) &&
+-		!(qc->flags & ATA_QCFLAG_SENSE_VALID);
++	bool have_sense = qc->flags & ATA_QCFLAG_SENSE_VALID;
++	bool is_ata_passthru = cdb[0] == ATA_16 || cdb[0] == ATA_12;
++	bool is_ck_cond_request = cdb[2] & 0x20;
++	bool is_error = qc->err_mask != 0;
+ 
+ 	/* For ATA pass thru (SAT) commands, generate a sense block if
+ 	 * user mandated it or if there's an error.  Note that if we
+-	 * generate because the user forced us to [CK_COND =1], a check
++	 * generate because the user forced us to [CK_COND=1], a check
+ 	 * condition is generated and the ATA register values are returned
+ 	 * whether the command completed successfully or not. If there
+-	 * was no error, we use the following sense data:
++	 * was no error, and CK_COND=1, we use the following sense data:
+ 	 * sk = RECOVERED ERROR
+ 	 * asc,ascq = ATA PASS-THROUGH INFORMATION AVAILABLE
+ 	 */
+-	if (((cdb[0] == ATA_16) || (cdb[0] == ATA_12)) &&
+-	    ((cdb[2] & 0x20) || need_sense))
+-		ata_gen_passthru_sense(qc);
+-	else if (need_sense)
++	if (is_ata_passthru && (is_ck_cond_request || is_error || have_sense)) {
++		if (!have_sense)
++			ata_gen_passthru_sense(qc);
++		ata_scsi_set_passthru_sense_fields(qc);
++		if (is_ck_cond_request)
++			set_status_byte(qc->scsicmd, SAM_STAT_CHECK_CONDITION);
++	} else if (is_error && !have_sense) {
+ 		ata_gen_ata_sense(qc);
+-	else
++	} else {
+ 		/* Keep the SCSI ML and status byte, clear host byte. */
+ 		cmd->result &= 0x0000ffff;
++	}
+ 
+ 	ata_qc_done(qc);
+ }
+@@ -2590,14 +2608,8 @@ static void atapi_qc_complete(struct ata_queued_cmd *qc)
+ 	/* handle completion from EH */
+ 	if (unlikely(err_mask || qc->flags & ATA_QCFLAG_SENSE_VALID)) {
+ 
+-		if (!(qc->flags & ATA_QCFLAG_SENSE_VALID)) {
+-			/* FIXME: not quite right; we don't want the
+-			 * translation of taskfile registers into a
+-			 * sense descriptors, since that's only
+-			 * correct for ATA, not ATAPI
+-			 */
++		if (!(qc->flags & ATA_QCFLAG_SENSE_VALID))
+ 			ata_gen_passthru_sense(qc);
+-		}
+ 
+ 		/* SCSI EH automatically locks door if sdev->locked is
+ 		 * set.  Sometimes door lock request continues to
+diff --git a/drivers/auxdisplay/ht16k33.c b/drivers/auxdisplay/ht16k33.c
+index ce987944662c8..8a7034b41d50e 100644
+--- a/drivers/auxdisplay/ht16k33.c
++++ b/drivers/auxdisplay/ht16k33.c
+@@ -483,6 +483,7 @@ static int ht16k33_led_probe(struct device *dev, struct led_classdev *led,
+ 	led->max_brightness = MAX_BRIGHTNESS;
+ 
+ 	err = devm_led_classdev_register_ext(dev, led, &init_data);
++	fwnode_handle_put(init_data.fwnode);
+ 	if (err)
+ 		dev_err(dev, "Failed to register LED\n");
+ 
+diff --git a/drivers/base/devres.c b/drivers/base/devres.c
+index 3df0025d12aa4..8d709dbd4e0c1 100644
+--- a/drivers/base/devres.c
++++ b/drivers/base/devres.c
+@@ -896,9 +896,12 @@ void *devm_krealloc(struct device *dev, void *ptr, size_t new_size, gfp_t gfp)
+ 	/*
+ 	 * Otherwise: allocate new, larger chunk. We need to allocate before
+ 	 * taking the lock as most probably the caller uses GFP_KERNEL.
++	 * alloc_dr() will call check_dr_size() to reserve extra memory
++	 * for struct devres automatically, so size @new_size user request
++	 * is delivered to it directly as devm_kmalloc() does.
+ 	 */
+ 	new_dr = alloc_dr(devm_kmalloc_release,
+-			  total_new_size, gfp, dev_to_node(dev));
++			  new_size, gfp, dev_to_node(dev));
+ 	if (!new_dr)
+ 		return NULL;
+ 
+@@ -1222,7 +1225,11 @@ EXPORT_SYMBOL_GPL(__devm_alloc_percpu);
+  */
+ void devm_free_percpu(struct device *dev, void __percpu *pdata)
+ {
+-	WARN_ON(devres_destroy(dev, devm_percpu_release, devm_percpu_match,
++	/*
++	 * Use devres_release() to prevent memory leakage as
++	 * devm_free_pages() does.
++	 */
++	WARN_ON(devres_release(dev, devm_percpu_release, devm_percpu_match,
+ 			       (__force void *)pdata));
+ }
+ EXPORT_SYMBOL_GPL(devm_free_percpu);
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index 75f189e42f885..f940580193526 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -227,7 +227,7 @@ MODULE_PARM_DESC(mbps, "Cache size in MiB for memory-backed device. Default: 0 (
+ 
+ static bool g_fua = true;
+ module_param_named(fua, g_fua, bool, 0444);
+-MODULE_PARM_DESC(zoned, "Enable/disable FUA support when cache_size is used. Default: true");
++MODULE_PARM_DESC(fua, "Enable/disable FUA support when cache_size is used. Default: true");
+ 
+ static unsigned int g_mbps;
+ module_param_named(mbps, g_mbps, uint, 0444);
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 26ff5cd2bf0ab..da22ce38c0390 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -362,7 +362,7 @@ enum rbd_watch_state {
+ enum rbd_lock_state {
+ 	RBD_LOCK_STATE_UNLOCKED,
+ 	RBD_LOCK_STATE_LOCKED,
+-	RBD_LOCK_STATE_RELEASING,
++	RBD_LOCK_STATE_QUIESCING,
+ };
+ 
+ /* WatchNotify::ClientId */
+@@ -422,7 +422,7 @@ struct rbd_device {
+ 	struct list_head	running_list;
+ 	struct completion	acquire_wait;
+ 	int			acquire_err;
+-	struct completion	releasing_wait;
++	struct completion	quiescing_wait;
+ 
+ 	spinlock_t		object_map_lock;
+ 	u8			*object_map;
+@@ -525,7 +525,7 @@ static bool __rbd_is_lock_owner(struct rbd_device *rbd_dev)
+ 	lockdep_assert_held(&rbd_dev->lock_rwsem);
+ 
+ 	return rbd_dev->lock_state == RBD_LOCK_STATE_LOCKED ||
+-	       rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING;
++	       rbd_dev->lock_state == RBD_LOCK_STATE_QUIESCING;
+ }
+ 
+ static bool rbd_is_lock_owner(struct rbd_device *rbd_dev)
+@@ -3457,13 +3457,14 @@ static void rbd_lock_del_request(struct rbd_img_request *img_req)
+ 	lockdep_assert_held(&rbd_dev->lock_rwsem);
+ 	spin_lock(&rbd_dev->lock_lists_lock);
+ 	if (!list_empty(&img_req->lock_item)) {
++		rbd_assert(!list_empty(&rbd_dev->running_list));
+ 		list_del_init(&img_req->lock_item);
+-		need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING &&
++		need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_QUIESCING &&
+ 			       list_empty(&rbd_dev->running_list));
+ 	}
+ 	spin_unlock(&rbd_dev->lock_lists_lock);
+ 	if (need_wakeup)
+-		complete(&rbd_dev->releasing_wait);
++		complete(&rbd_dev->quiescing_wait);
+ }
+ 
+ static int rbd_img_exclusive_lock(struct rbd_img_request *img_req)
+@@ -3476,11 +3477,6 @@ static int rbd_img_exclusive_lock(struct rbd_img_request *img_req)
+ 	if (rbd_lock_add_request(img_req))
+ 		return 1;
+ 
+-	if (rbd_dev->opts->exclusive) {
+-		WARN_ON(1); /* lock got released? */
+-		return -EROFS;
+-	}
+-
+ 	/*
+ 	 * Note the use of mod_delayed_work() in rbd_acquire_lock()
+ 	 * and cancel_delayed_work() in wake_lock_waiters().
+@@ -4181,16 +4177,16 @@ static bool rbd_quiesce_lock(struct rbd_device *rbd_dev)
+ 	/*
+ 	 * Ensure that all in-flight IO is flushed.
+ 	 */
+-	rbd_dev->lock_state = RBD_LOCK_STATE_RELEASING;
+-	rbd_assert(!completion_done(&rbd_dev->releasing_wait));
++	rbd_dev->lock_state = RBD_LOCK_STATE_QUIESCING;
++	rbd_assert(!completion_done(&rbd_dev->quiescing_wait));
+ 	if (list_empty(&rbd_dev->running_list))
+ 		return true;
+ 
+ 	up_write(&rbd_dev->lock_rwsem);
+-	wait_for_completion(&rbd_dev->releasing_wait);
++	wait_for_completion(&rbd_dev->quiescing_wait);
+ 
+ 	down_write(&rbd_dev->lock_rwsem);
+-	if (rbd_dev->lock_state != RBD_LOCK_STATE_RELEASING)
++	if (rbd_dev->lock_state != RBD_LOCK_STATE_QUIESCING)
+ 		return false;
+ 
+ 	rbd_assert(list_empty(&rbd_dev->running_list));
+@@ -4601,6 +4597,10 @@ static void rbd_reacquire_lock(struct rbd_device *rbd_dev)
+ 			rbd_warn(rbd_dev, "failed to update lock cookie: %d",
+ 				 ret);
+ 
++		if (rbd_dev->opts->exclusive)
++			rbd_warn(rbd_dev,
++			     "temporarily releasing lock on exclusive mapping");
++
+ 		/*
+ 		 * Lock cookie cannot be updated on older OSDs, so do
+ 		 * a manual release and queue an acquire.
+@@ -5383,7 +5383,7 @@ static struct rbd_device *__rbd_dev_create(struct rbd_spec *spec)
+ 	INIT_LIST_HEAD(&rbd_dev->acquiring_list);
+ 	INIT_LIST_HEAD(&rbd_dev->running_list);
+ 	init_completion(&rbd_dev->acquire_wait);
+-	init_completion(&rbd_dev->releasing_wait);
++	init_completion(&rbd_dev->quiescing_wait);
+ 
+ 	spin_lock_init(&rbd_dev->object_map_lock);
+ 
+@@ -6589,11 +6589,6 @@ static int rbd_add_acquire_lock(struct rbd_device *rbd_dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	/*
+-	 * The lock may have been released by now, unless automatic lock
+-	 * transitions are disabled.
+-	 */
+-	rbd_assert(!rbd_dev->opts->exclusive || rbd_is_lock_owner(rbd_dev));
+ 	return 0;
+ }
+ 
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 4e159948c912c..3b58839321333 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -48,6 +48,9 @@
+ 
+ #define UBLK_MINORS		(1U << MINORBITS)
+ 
++/* private ioctl command mirror */
++#define UBLK_CMD_DEL_DEV_ASYNC	_IOC_NR(UBLK_U_CMD_DEL_DEV_ASYNC)
++
+ /* All UBLK_F_* have to be included into UBLK_F_ALL */
+ #define UBLK_F_ALL (UBLK_F_SUPPORT_ZERO_COPY \
+ 		| UBLK_F_URING_CMD_COMP_IN_TASK \
+@@ -2904,7 +2907,7 @@ static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd,
+ 	case UBLK_CMD_DEL_DEV:
+ 		ret = ublk_ctrl_del_dev(&ub, true);
+ 		break;
+-	case UBLK_U_CMD_DEL_DEV_ASYNC:
++	case UBLK_CMD_DEL_DEV_ASYNC:
+ 		ret = ublk_ctrl_del_dev(&ub, false);
+ 		break;
+ 	case UBLK_CMD_GET_QUEUE_AFFINITY:
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index fd7c0ff2139ce..67aa63dabcff1 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1063,8 +1063,7 @@ static char *encode_disk_name(char *ptr, unsigned int n)
+ }
+ 
+ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
+-		struct blkfront_info *info, u16 sector_size,
+-		unsigned int physical_sector_size)
++		struct blkfront_info *info)
+ {
+ 	struct queue_limits lim = {};
+ 	struct gendisk *gd;
+@@ -1159,8 +1158,6 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
+ 
+ 	info->rq = gd->queue;
+ 	info->gd = gd;
+-	info->sector_size = sector_size;
+-	info->physical_sector_size = physical_sector_size;
+ 
+ 	xlvbd_flush(info);
+ 
+@@ -2315,8 +2312,6 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
+ static void blkfront_connect(struct blkfront_info *info)
+ {
+ 	unsigned long long sectors;
+-	unsigned long sector_size;
+-	unsigned int physical_sector_size;
+ 	int err, i;
+ 	struct blkfront_ring_info *rinfo;
+ 
+@@ -2355,7 +2350,7 @@ static void blkfront_connect(struct blkfront_info *info)
+ 	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
+ 			    "sectors", "%llu", &sectors,
+ 			    "info", "%u", &info->vdisk_info,
+-			    "sector-size", "%lu", &sector_size,
++			    "sector-size", "%lu", &info->sector_size,
+ 			    NULL);
+ 	if (err) {
+ 		xenbus_dev_fatal(info->xbdev, err,
+@@ -2369,9 +2364,9 @@ static void blkfront_connect(struct blkfront_info *info)
+ 	 * provide this. Assume physical sector size to be the same as
+ 	 * sector_size in that case.
+ 	 */
+-	physical_sector_size = xenbus_read_unsigned(info->xbdev->otherend,
++	info->physical_sector_size = xenbus_read_unsigned(info->xbdev->otherend,
+ 						    "physical-sector-size",
+-						    sector_size);
++						    info->sector_size);
+ 	blkfront_gather_backend_features(info);
+ 	for_each_rinfo(info, rinfo, i) {
+ 		err = blkfront_setup_indirect(rinfo);
+@@ -2383,8 +2378,7 @@ static void blkfront_connect(struct blkfront_info *info)
+ 		}
+ 	}
+ 
+-	err = xlvbd_alloc_gendisk(sectors, info, sector_size,
+-				  physical_sector_size);
++	err = xlvbd_alloc_gendisk(sectors, info);
+ 	if (err) {
+ 		xenbus_dev_fatal(info->xbdev, err, "xlvbd_add at %s",
+ 				 info->xbdev->otherend);
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 0c855c3ee1c1c..7ecc67deecb09 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -26,21 +26,11 @@
+ #define ECDSA_OFFSET		644
+ #define ECDSA_HEADER_LEN	320
+ 
+-#define BTINTEL_PPAG_NAME   "PPAG"
+-
+ enum {
+ 	DSM_SET_WDISABLE2_DELAY = 1,
+ 	DSM_SET_RESET_METHOD = 3,
+ };
+ 
+-/* structure to store the PPAG data read from ACPI table */
+-struct btintel_ppag {
+-	u32	domain;
+-	u32     mode;
+-	acpi_status status;
+-	struct hci_dev *hdev;
+-};
+-
+ #define CMD_WRITE_BOOT_PARAMS	0xfc0e
+ struct cmd_write_boot_params {
+ 	__le32 boot_addr;
+@@ -1324,65 +1314,6 @@ static int btintel_read_debug_features(struct hci_dev *hdev,
+ 	return 0;
+ }
+ 
+-static acpi_status btintel_ppag_callback(acpi_handle handle, u32 lvl, void *data,
+-					 void **ret)
+-{
+-	acpi_status status;
+-	size_t len;
+-	struct btintel_ppag *ppag = data;
+-	union acpi_object *p, *elements;
+-	struct acpi_buffer string = {ACPI_ALLOCATE_BUFFER, NULL};
+-	struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
+-	struct hci_dev *hdev = ppag->hdev;
+-
+-	status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &string);
+-	if (ACPI_FAILURE(status)) {
+-		bt_dev_warn(hdev, "PPAG-BT: ACPI Failure: %s", acpi_format_exception(status));
+-		return status;
+-	}
+-
+-	len = strlen(string.pointer);
+-	if (len < strlen(BTINTEL_PPAG_NAME)) {
+-		kfree(string.pointer);
+-		return AE_OK;
+-	}
+-
+-	if (strncmp((char *)string.pointer + len - 4, BTINTEL_PPAG_NAME, 4)) {
+-		kfree(string.pointer);
+-		return AE_OK;
+-	}
+-	kfree(string.pointer);
+-
+-	status = acpi_evaluate_object(handle, NULL, NULL, &buffer);
+-	if (ACPI_FAILURE(status)) {
+-		ppag->status = status;
+-		bt_dev_warn(hdev, "PPAG-BT: ACPI Failure: %s", acpi_format_exception(status));
+-		return status;
+-	}
+-
+-	p = buffer.pointer;
+-	ppag = (struct btintel_ppag *)data;
+-
+-	if (p->type != ACPI_TYPE_PACKAGE || p->package.count != 2) {
+-		kfree(buffer.pointer);
+-		bt_dev_warn(hdev, "PPAG-BT: Invalid object type: %d or package count: %d",
+-			    p->type, p->package.count);
+-		ppag->status = AE_ERROR;
+-		return AE_ERROR;
+-	}
+-
+-	elements = p->package.elements;
+-
+-	/* PPAG table is located at element[1] */
+-	p = &elements[1];
+-
+-	ppag->domain = (u32)p->package.elements[0].integer.value;
+-	ppag->mode = (u32)p->package.elements[1].integer.value;
+-	ppag->status = AE_OK;
+-	kfree(buffer.pointer);
+-	return AE_CTRL_TERMINATE;
+-}
+-
+ static int btintel_set_debug_features(struct hci_dev *hdev,
+ 			       const struct intel_debug_features *features)
+ {
+@@ -2427,10 +2358,13 @@ static int btintel_configure_offload(struct hci_dev *hdev)
+ 
+ static void btintel_set_ppag(struct hci_dev *hdev, struct intel_version_tlv *ver)
+ {
+-	struct btintel_ppag ppag;
+ 	struct sk_buff *skb;
+ 	struct hci_ppag_enable_cmd ppag_cmd;
+ 	acpi_handle handle;
++	struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
++	union acpi_object *p, *elements;
++	u32 domain, mode;
++	acpi_status status;
+ 
+ 	/* PPAG is not supported if CRF is HrP2, Jfp2, JfP1 */
+ 	switch (ver->cnvr_top & 0xFFF) {
+@@ -2448,22 +2382,34 @@ static void btintel_set_ppag(struct hci_dev *hdev, struct intel_version_tlv *ver
+ 		return;
+ 	}
+ 
+-	memset(&ppag, 0, sizeof(ppag));
+-
+-	ppag.hdev = hdev;
+-	ppag.status = AE_NOT_FOUND;
+-	acpi_walk_namespace(ACPI_TYPE_PACKAGE, handle, 1, NULL,
+-			    btintel_ppag_callback, &ppag, NULL);
+-
+-	if (ACPI_FAILURE(ppag.status)) {
+-		if (ppag.status == AE_NOT_FOUND) {
++	status = acpi_evaluate_object(handle, "PPAG", NULL, &buffer);
++	if (ACPI_FAILURE(status)) {
++		if (status == AE_NOT_FOUND) {
+ 			bt_dev_dbg(hdev, "PPAG-BT: ACPI entry not found");
+ 			return;
+ 		}
++		bt_dev_warn(hdev, "PPAG-BT: ACPI Failure: %s", acpi_format_exception(status));
++		return;
++	}
++
++	p = buffer.pointer;
++	if (p->type != ACPI_TYPE_PACKAGE || p->package.count != 2) {
++		bt_dev_warn(hdev, "PPAG-BT: Invalid object type: %d or package count: %d",
++			    p->type, p->package.count);
++		kfree(buffer.pointer);
+ 		return;
+ 	}
+ 
+-	if (ppag.domain != 0x12) {
++	elements = p->package.elements;
++
++	/* PPAG table is located at element[1] */
++	p = &elements[1];
++
++	domain = (u32)p->package.elements[0].integer.value;
++	mode = (u32)p->package.elements[1].integer.value;
++	kfree(buffer.pointer);
++
++	if (domain != 0x12) {
+ 		bt_dev_dbg(hdev, "PPAG-BT: Bluetooth domain is disabled in ACPI firmware");
+ 		return;
+ 	}
+@@ -2474,19 +2420,22 @@ static void btintel_set_ppag(struct hci_dev *hdev, struct intel_version_tlv *ver
+ 	 * BIT 1 : 0 Disabled in China
+ 	 *         1 Enabled in China
+ 	 */
+-	if ((ppag.mode & 0x01) != BIT(0) && (ppag.mode & 0x02) != BIT(1)) {
+-		bt_dev_dbg(hdev, "PPAG-BT: EU, China mode are disabled in CB/BIOS");
++	mode &= 0x03;
++
++	if (!mode) {
++		bt_dev_dbg(hdev, "PPAG-BT: EU, China mode are disabled in BIOS");
+ 		return;
+ 	}
+ 
+-	ppag_cmd.ppag_enable_flags = cpu_to_le32(ppag.mode);
++	ppag_cmd.ppag_enable_flags = cpu_to_le32(mode);
+ 
+-	skb = __hci_cmd_sync(hdev, INTEL_OP_PPAG_CMD, sizeof(ppag_cmd), &ppag_cmd, HCI_CMD_TIMEOUT);
++	skb = __hci_cmd_sync(hdev, INTEL_OP_PPAG_CMD, sizeof(ppag_cmd),
++			     &ppag_cmd, HCI_CMD_TIMEOUT);
+ 	if (IS_ERR(skb)) {
+ 		bt_dev_warn(hdev, "Failed to send PPAG Enable (%ld)", PTR_ERR(skb));
+ 		return;
+ 	}
+-	bt_dev_info(hdev, "PPAG-BT: Enabled (Mode %d)", ppag.mode);
++	bt_dev_info(hdev, "PPAG-BT: Enabled (Mode %d)", mode);
+ 	kfree_skb(skb);
+ }
+ 
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index dd3c0626c72d8..b8120b98a2395 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -1327,6 +1327,12 @@ static void btintel_pcie_remove(struct pci_dev *pdev)
+ 	data = pci_get_drvdata(pdev);
+ 
+ 	btintel_pcie_reset_bt(data);
++	for (int i = 0; i < data->alloc_vecs; i++) {
++		struct msix_entry *msix_entry;
++
++		msix_entry = &data->msix_entries[i];
++		free_irq(msix_entry->vector, msix_entry);
++	}
+ 
+ 	pci_free_irq_vectors(pdev);
+ 
+diff --git a/drivers/bluetooth/btnxpuart.c b/drivers/bluetooth/btnxpuart.c
+index 9bfa9a6ad56c8..6a863328b8053 100644
+--- a/drivers/bluetooth/btnxpuart.c
++++ b/drivers/bluetooth/btnxpuart.c
+@@ -187,6 +187,11 @@ struct btnxpuart_dev {
+ #define NXP_NAK_V3		0x7b
+ #define NXP_CRC_ERROR_V3	0x7c
+ 
++/* Bootloader signature error codes */
++#define NXP_ACK_RX_TIMEOUT	0x0002	/* ACK not received from host */
++#define NXP_HDR_RX_TIMEOUT	0x0003	/* FW Header chunk not received */
++#define NXP_DATA_RX_TIMEOUT	0x0004	/* FW Data chunk not received */
++
+ #define HDR_LEN			16
+ 
+ #define NXP_RECV_CHIP_VER_V1 \
+@@ -277,6 +282,17 @@ struct nxp_bootloader_cmd {
+ 	__be32 crc;
+ } __packed;
+ 
++struct nxp_v3_rx_timeout_nak {
++	u8 nak;
++	__le32 offset;
++	u8 crc;
++} __packed;
++
++union nxp_v3_rx_timeout_nak_u {
++	struct nxp_v3_rx_timeout_nak pkt;
++	u8 buf[6];
++};
++
+ static u8 crc8_table[CRC8_TABLE_SIZE];
+ 
+ /* Default configurations */
+@@ -899,6 +915,32 @@ static int nxp_recv_chip_ver_v3(struct hci_dev *hdev, struct sk_buff *skb)
+ 	return 0;
+ }
+ 
++static void nxp_handle_fw_download_error(struct hci_dev *hdev, struct v3_data_req *req)
++{
++	struct btnxpuart_dev *nxpdev = hci_get_drvdata(hdev);
++	__u32 offset = __le32_to_cpu(req->offset);
++	__u16 err = __le16_to_cpu(req->error);
++	union nxp_v3_rx_timeout_nak_u nak_tx_buf;
++
++	switch (err) {
++	case NXP_ACK_RX_TIMEOUT:
++	case NXP_HDR_RX_TIMEOUT:
++	case NXP_DATA_RX_TIMEOUT:
++		nak_tx_buf.pkt.nak = NXP_NAK_V3;
++		nak_tx_buf.pkt.offset = __cpu_to_le32(offset);
++		nak_tx_buf.pkt.crc = crc8(crc8_table, nak_tx_buf.buf,
++				      sizeof(nak_tx_buf) - 1, 0xff);
++		serdev_device_write_buf(nxpdev->serdev, nak_tx_buf.buf,
++					sizeof(nak_tx_buf));
++		break;
++	default:
++		bt_dev_dbg(hdev, "Unknown bootloader error code: %d", err);
++		break;
++
++	}
++
++}
++
+ static int nxp_recv_fw_req_v3(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+ 	struct btnxpuart_dev *nxpdev = hci_get_drvdata(hdev);
+@@ -913,7 +955,12 @@ static int nxp_recv_fw_req_v3(struct hci_dev *hdev, struct sk_buff *skb)
+ 	if (!req || !nxpdev->fw)
+ 		goto free_skb;
+ 
+-	nxp_send_ack(NXP_ACK_V3, hdev);
++	if (!req->error) {
++		nxp_send_ack(NXP_ACK_V3, hdev);
++	} else {
++		nxp_handle_fw_download_error(hdev, req);
++		goto free_skb;
++	}
+ 
+ 	len = __le16_to_cpu(req->len);
+ 
+@@ -940,9 +987,6 @@ static int nxp_recv_fw_req_v3(struct hci_dev *hdev, struct sk_buff *skb)
+ 		wake_up_interruptible(&nxpdev->fw_dnld_done_wait_q);
+ 		goto free_skb;
+ 	}
+-	if (req->error)
+-		bt_dev_dbg(hdev, "FW Download received err 0x%02x from chip",
+-			   req->error);
+ 
+ 	offset = __le32_to_cpu(req->offset);
+ 	if (offset < nxpdev->fw_v3_offset_correction) {
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index e384ef6ff050d..789c492df6fa2 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -555,6 +555,10 @@ static const struct usb_device_id quirks_table[] = {
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x13d3, 0x3572), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x13d3, 0x3591), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0489, 0xe125), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
+ 
+ 	/* Realtek 8852BT/8852BE-VT Bluetooth devices */
+ 	{ USB_DEVICE(0x0bda, 0x8520), .driver_info = BTUSB_REALTEK |
+diff --git a/drivers/bluetooth/hci_bcm4377.c b/drivers/bluetooth/hci_bcm4377.c
+index d90858ea2fe59..a77a30fdc630e 100644
+--- a/drivers/bluetooth/hci_bcm4377.c
++++ b/drivers/bluetooth/hci_bcm4377.c
+@@ -32,7 +32,7 @@ enum bcm4377_chip {
+ #define BCM4378_DEVICE_ID 0x5f69
+ #define BCM4387_DEVICE_ID 0x5f71
+ 
+-#define BCM4377_TIMEOUT 1000
++#define BCM4377_TIMEOUT msecs_to_jiffies(1000)
+ 
+ /*
+  * These devices only support DMA transactions inside a 32bit window
+diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
+index f8f674adf1d40..4acfac73ca9ac 100644
+--- a/drivers/bus/mhi/ep/main.c
++++ b/drivers/bus/mhi/ep/main.c
+@@ -90,7 +90,7 @@ static int mhi_ep_send_completion_event(struct mhi_ep_cntrl *mhi_cntrl, struct m
+ 	struct mhi_ring_element *event;
+ 	int ret;
+ 
+-	event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL | GFP_DMA);
++	event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL);
+ 	if (!event)
+ 		return -ENOMEM;
+ 
+@@ -109,7 +109,7 @@ int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_stat
+ 	struct mhi_ring_element *event;
+ 	int ret;
+ 
+-	event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL | GFP_DMA);
++	event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL);
+ 	if (!event)
+ 		return -ENOMEM;
+ 
+@@ -127,7 +127,7 @@ int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ee_type exec_e
+ 	struct mhi_ring_element *event;
+ 	int ret;
+ 
+-	event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL | GFP_DMA);
++	event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL);
+ 	if (!event)
+ 		return -ENOMEM;
+ 
+@@ -146,7 +146,7 @@ static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_e
+ 	struct mhi_ring_element *event;
+ 	int ret;
+ 
+-	event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL | GFP_DMA);
++	event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL);
+ 	if (!event)
+ 		return -ENOMEM;
+ 
+@@ -438,7 +438,7 @@ static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
+ 		read_offset = mhi_chan->tre_size - mhi_chan->tre_bytes_left;
+ 		write_offset = len - buf_left;
+ 
+-		buf_addr = kmem_cache_zalloc(mhi_cntrl->tre_buf_cache, GFP_KERNEL | GFP_DMA);
++		buf_addr = kmem_cache_zalloc(mhi_cntrl->tre_buf_cache, GFP_KERNEL);
+ 		if (!buf_addr)
+ 			return -ENOMEM;
+ 
+@@ -1481,14 +1481,14 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
+ 
+ 	mhi_cntrl->ev_ring_el_cache = kmem_cache_create("mhi_ep_event_ring_el",
+ 							sizeof(struct mhi_ring_element), 0,
+-							SLAB_CACHE_DMA, NULL);
++							0, NULL);
+ 	if (!mhi_cntrl->ev_ring_el_cache) {
+ 		ret = -ENOMEM;
+ 		goto err_free_cmd;
+ 	}
+ 
+ 	mhi_cntrl->tre_buf_cache = kmem_cache_create("mhi_ep_tre_buf", MHI_EP_DEFAULT_MTU, 0,
+-						      SLAB_CACHE_DMA, NULL);
++						      0, NULL);
+ 	if (!mhi_cntrl->tre_buf_cache) {
+ 		ret = -ENOMEM;
+ 		goto err_destroy_ev_ring_el_cache;
+diff --git a/drivers/char/hw_random/amd-rng.c b/drivers/char/hw_random/amd-rng.c
+index 86162a13681e6..9a24d19236dc7 100644
+--- a/drivers/char/hw_random/amd-rng.c
++++ b/drivers/char/hw_random/amd-rng.c
+@@ -143,8 +143,10 @@ static int __init amd_rng_mod_init(void)
+ 
+ found:
+ 	err = pci_read_config_dword(pdev, 0x58, &pmbase);
+-	if (err)
++	if (err) {
++		err = pcibios_err_to_errno(err);
+ 		goto put_dev;
++	}
+ 
+ 	pmbase &= 0x0000FF00;
+ 	if (pmbase == 0) {
+diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
+index 4084df65c9fa3..f6122a03ee37c 100644
+--- a/drivers/char/hw_random/core.c
++++ b/drivers/char/hw_random/core.c
+@@ -161,7 +161,6 @@ static int hwrng_init(struct hwrng *rng)
+ 	reinit_completion(&rng->cleanup_done);
+ 
+ skip_init:
+-	rng->quality = min_t(u16, min_t(u16, default_quality, 1024), rng->quality ?: 1024);
+ 	current_quality = rng->quality; /* obsolete */
+ 
+ 	return 0;
+@@ -545,6 +544,9 @@ int hwrng_register(struct hwrng *rng)
+ 	complete(&rng->cleanup_done);
+ 	init_completion(&rng->dying);
+ 
++	/* Adjust quality field to always have a proper value */
++	rng->quality = min_t(u16, min_t(u16, default_quality, 1024), rng->quality ?: 1024);
++
+ 	if (!current_rng ||
+ 	    (!cur_rng_set_by_user && rng->quality > current_rng->quality)) {
+ 		/*
+diff --git a/drivers/char/ipmi/ssif_bmc.c b/drivers/char/ipmi/ssif_bmc.c
+index 56346fb328727..ab4e87a99f087 100644
+--- a/drivers/char/ipmi/ssif_bmc.c
++++ b/drivers/char/ipmi/ssif_bmc.c
+@@ -177,13 +177,15 @@ static ssize_t ssif_bmc_write(struct file *file, const char __user *buf, size_t
+ 	unsigned long flags;
+ 	ssize_t ret;
+ 
+-	if (count > sizeof(struct ipmi_ssif_msg))
++	if (count < sizeof(msg.len) ||
++	    count > sizeof(struct ipmi_ssif_msg))
+ 		return -EINVAL;
+ 
+ 	if (copy_from_user(&msg, buf, count))
+ 		return -EFAULT;
+ 
+-	if (!msg.len || count < sizeof_field(struct ipmi_ssif_msg, len) + msg.len)
++	if (!msg.len || msg.len > IPMI_SSIF_PAYLOAD_MAX ||
++	    count < sizeof_field(struct ipmi_ssif_msg, len) + msg.len)
+ 		return -EINVAL;
+ 
+ 	spin_lock_irqsave(&ssif_bmc->lock, flags);
+diff --git a/drivers/char/tpm/eventlog/common.c b/drivers/char/tpm/eventlog/common.c
+index 639c3f395a5af..4c0bbba64ee50 100644
+--- a/drivers/char/tpm/eventlog/common.c
++++ b/drivers/char/tpm/eventlog/common.c
+@@ -47,6 +47,8 @@ static int tpm_bios_measurements_open(struct inode *inode,
+ 	if (!err) {
+ 		seq = file->private_data;
+ 		seq->private = chip;
++	} else {
++		put_device(&chip->dev);
+ 	}
+ 
+ 	return err;
+diff --git a/drivers/char/tpm/tpm_tis_spi_main.c b/drivers/char/tpm/tpm_tis_spi_main.c
+index c9eca24bbad47..61b42c83ced81 100644
+--- a/drivers/char/tpm/tpm_tis_spi_main.c
++++ b/drivers/char/tpm/tpm_tis_spi_main.c
+@@ -318,6 +318,7 @@ static void tpm_tis_spi_remove(struct spi_device *dev)
+ }
+ 
+ static const struct spi_device_id tpm_tis_spi_id[] = {
++	{ "attpm20p", (unsigned long)tpm_tis_spi_probe },
+ 	{ "st33htpm-spi", (unsigned long)tpm_tis_spi_probe },
+ 	{ "slb9670", (unsigned long)tpm_tis_spi_probe },
+ 	{ "tpm_tis_spi", (unsigned long)tpm_tis_spi_probe },
+diff --git a/drivers/clk/clk-en7523.c b/drivers/clk/clk-en7523.c
+index ccc3946926712..bdf5cbc12e236 100644
+--- a/drivers/clk/clk-en7523.c
++++ b/drivers/clk/clk-en7523.c
+@@ -57,6 +57,7 @@ struct en_clk_desc {
+ 	u8 div_shift;
+ 	u16 div_val0;
+ 	u8 div_step;
++	u8 div_offset;
+ };
+ 
+ struct en_clk_gate {
+@@ -90,6 +91,7 @@ static const struct en_clk_desc en7523_base_clks[] = {
+ 		.div_bits = 3,
+ 		.div_shift = 0,
+ 		.div_step = 1,
++		.div_offset = 1,
+ 	}, {
+ 		.id = EN7523_CLK_EMI,
+ 		.name = "emi",
+@@ -103,6 +105,7 @@ static const struct en_clk_desc en7523_base_clks[] = {
+ 		.div_bits = 3,
+ 		.div_shift = 0,
+ 		.div_step = 1,
++		.div_offset = 1,
+ 	}, {
+ 		.id = EN7523_CLK_BUS,
+ 		.name = "bus",
+@@ -116,6 +119,7 @@ static const struct en_clk_desc en7523_base_clks[] = {
+ 		.div_bits = 3,
+ 		.div_shift = 0,
+ 		.div_step = 1,
++		.div_offset = 1,
+ 	}, {
+ 		.id = EN7523_CLK_SLIC,
+ 		.name = "slic",
+@@ -156,13 +160,14 @@ static const struct en_clk_desc en7523_base_clks[] = {
+ 		.div_bits = 3,
+ 		.div_shift = 0,
+ 		.div_step = 1,
++		.div_offset = 1,
+ 	}, {
+ 		.id = EN7523_CLK_CRYPTO,
+ 		.name = "crypto",
+ 
+ 		.base_reg = REG_CRYPTO_CLKSRC,
+ 		.base_bits = 1,
+-		.base_shift = 8,
++		.base_shift = 0,
+ 		.base_values = emi_base,
+ 		.n_base_values = ARRAY_SIZE(emi_base),
+ 	}
+@@ -202,7 +207,7 @@ static u32 en7523_get_div(void __iomem *base, int i)
+ 	if (!val && desc->div_val0)
+ 		return desc->div_val0;
+ 
+-	return (val + 1) * desc->div_step;
++	return (val + desc->div_offset) * desc->div_step;
+ }
+ 
+ static int en7523_pci_is_enabled(struct clk_hw *hw)
+diff --git a/drivers/clk/davinci/da8xx-cfgchip.c b/drivers/clk/davinci/da8xx-cfgchip.c
+index ad2d0df43dc6f..ec60ecb517f1f 100644
+--- a/drivers/clk/davinci/da8xx-cfgchip.c
++++ b/drivers/clk/davinci/da8xx-cfgchip.c
+@@ -508,7 +508,7 @@ da8xx_cfgchip_register_usb0_clk48(struct device *dev,
+ 	const char * const parent_names[] = { "usb_refclkin", "pll0_auxclk" };
+ 	struct clk *fck_clk;
+ 	struct da8xx_usb0_clk48 *usb0;
+-	struct clk_init_data init;
++	struct clk_init_data init = {};
+ 	int ret;
+ 
+ 	fck_clk = devm_clk_get(dev, "fck");
+@@ -583,7 +583,7 @@ da8xx_cfgchip_register_usb1_clk48(struct device *dev,
+ {
+ 	const char * const parent_names[] = { "usb0_clk48", "usb_refclkin" };
+ 	struct da8xx_usb1_clk48 *usb1;
+-	struct clk_init_data init;
++	struct clk_init_data init = {};
+ 	int ret;
+ 
+ 	usb1 = devm_kzalloc(dev, sizeof(*usb1), GFP_KERNEL);
+diff --git a/drivers/clk/meson/s4-peripherals.c b/drivers/clk/meson/s4-peripherals.c
+index 5e17ca50ab091..73340c7e815e7 100644
+--- a/drivers/clk/meson/s4-peripherals.c
++++ b/drivers/clk/meson/s4-peripherals.c
+@@ -2978,7 +2978,7 @@ static struct clk_regmap s4_pwm_j_div = {
+ 		.name = "pwm_j_div",
+ 		.ops = &clk_regmap_divider_ops,
+ 		.parent_hws = (const struct clk_hw *[]) {
+-			&s4_pwm_h_mux.hw
++			&s4_pwm_j_mux.hw
+ 		},
+ 		.num_parents = 1,
+ 		.flags = CLK_SET_RATE_PARENT,
+diff --git a/drivers/clk/meson/s4-pll.c b/drivers/clk/meson/s4-pll.c
+index d2650d96400cf..707c107a52918 100644
+--- a/drivers/clk/meson/s4-pll.c
++++ b/drivers/clk/meson/s4-pll.c
+@@ -38,6 +38,11 @@ static struct clk_regmap s4_fixed_pll_dco = {
+ 			.shift   = 0,
+ 			.width   = 8,
+ 		},
++		.frac = {
++			.reg_off = ANACTRL_FIXPLL_CTRL1,
++			.shift   = 0,
++			.width   = 17,
++		},
+ 		.n = {
+ 			.reg_off = ANACTRL_FIXPLL_CTRL0,
+ 			.shift   = 10,
+diff --git a/drivers/clk/qcom/camcc-sc7280.c b/drivers/clk/qcom/camcc-sc7280.c
+index d89ddb2298e32..582fb3ba9c895 100644
+--- a/drivers/clk/qcom/camcc-sc7280.c
++++ b/drivers/clk/qcom/camcc-sc7280.c
+@@ -2260,6 +2260,7 @@ static struct gdsc cam_cc_bps_gdsc = {
+ 		.name = "cam_cc_bps_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.parent = &cam_cc_titan_top_gdsc.pd,
+ 	.flags = HW_CTRL | RETAIN_FF_ENABLE,
+ };
+ 
+@@ -2269,6 +2270,7 @@ static struct gdsc cam_cc_ife_0_gdsc = {
+ 		.name = "cam_cc_ife_0_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.parent = &cam_cc_titan_top_gdsc.pd,
+ 	.flags = RETAIN_FF_ENABLE,
+ };
+ 
+@@ -2278,6 +2280,7 @@ static struct gdsc cam_cc_ife_1_gdsc = {
+ 		.name = "cam_cc_ife_1_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.parent = &cam_cc_titan_top_gdsc.pd,
+ 	.flags = RETAIN_FF_ENABLE,
+ };
+ 
+@@ -2287,6 +2290,7 @@ static struct gdsc cam_cc_ife_2_gdsc = {
+ 		.name = "cam_cc_ife_2_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.parent = &cam_cc_titan_top_gdsc.pd,
+ 	.flags = RETAIN_FF_ENABLE,
+ };
+ 
+@@ -2296,6 +2300,7 @@ static struct gdsc cam_cc_ipe_0_gdsc = {
+ 		.name = "cam_cc_ipe_0_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.parent = &cam_cc_titan_top_gdsc.pd,
+ 	.flags = HW_CTRL | RETAIN_FF_ENABLE,
+ };
+ 
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index 9b3aaa7f20ac2..30b19bd39d087 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -1304,7 +1304,39 @@ clk_rcg2_shared_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
+ 	return clk_rcg2_recalc_rate(hw, parent_rate);
+ }
+ 
++static int clk_rcg2_shared_init(struct clk_hw *hw)
++{
++	/*
++	 * This does a few things:
++	 *
++	 *  1. Sets rcg->parked_cfg to reflect the value at probe so that the
++	 *     proper parent is reported from clk_rcg2_shared_get_parent().
++	 *
++	 *  2. Clears the force enable bit of the RCG because we rely on child
++	 *     clks (branches) to turn the RCG on/off with a hardware feedback
++	 *     mechanism and only set the force enable bit in the RCG when we
++	 *     want to make sure the clk stays on for parent switches or
++	 *     parking.
++	 *
++	 *  3. Parks shared RCGs on the safe source at registration because we
++	 *     can't be certain that the parent clk will stay on during boot,
++	 *     especially if the parent is shared. If this RCG is enabled at
++	 *     boot, and the parent is turned off, the RCG will get stuck on. A
++	 *     GDSC can wedge if is turned on and the RCG is stuck on because
++	 *     the GDSC's controller will hang waiting for the clk status to
++	 *     toggle on when it never does.
++	 *
++	 * The safest option here is to "park" the RCG at init so that the clk
++	 * can never get stuck on or off. This ensures the GDSC can't get
++	 * wedged.
++	 */
++	clk_rcg2_shared_disable(hw);
++
++	return 0;
++}
++
+ const struct clk_ops clk_rcg2_shared_ops = {
++	.init = clk_rcg2_shared_init,
+ 	.enable = clk_rcg2_shared_enable,
+ 	.disable = clk_rcg2_shared_disable,
+ 	.get_parent = clk_rcg2_shared_get_parent,
+diff --git a/drivers/clk/qcom/gcc-sa8775p.c b/drivers/clk/qcom/gcc-sa8775p.c
+index 5bcbfbf52cb9e..9bbc0836fae98 100644
+--- a/drivers/clk/qcom/gcc-sa8775p.c
++++ b/drivers/clk/qcom/gcc-sa8775p.c
+@@ -4305,74 +4305,114 @@ static struct clk_branch gcc_video_axi1_clk = {
+ 
+ static struct gdsc pcie_0_gdsc = {
+ 	.gdscr = 0xa9004,
++	.collapse_ctrl = 0x4b104,
++	.collapse_mask = BIT(0),
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "pcie_0_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = VOTABLE | RETAIN_FF_ENABLE | POLL_CFG_GDSCR,
+ };
+ 
+ static struct gdsc pcie_1_gdsc = {
+ 	.gdscr = 0x77004,
++	.collapse_ctrl = 0x4b104,
++	.collapse_mask = BIT(1),
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "pcie_1_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = VOTABLE | RETAIN_FF_ENABLE | POLL_CFG_GDSCR,
+ };
+ 
+ static struct gdsc ufs_card_gdsc = {
+ 	.gdscr = 0x81004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "ufs_card_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE | POLL_CFG_GDSCR,
+ };
+ 
+ static struct gdsc ufs_phy_gdsc = {
+ 	.gdscr = 0x83004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "ufs_phy_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE | POLL_CFG_GDSCR,
+ };
+ 
+ static struct gdsc usb20_prim_gdsc = {
+ 	.gdscr = 0x1c004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "usb20_prim_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE | POLL_CFG_GDSCR,
+ };
+ 
+ static struct gdsc usb30_prim_gdsc = {
+ 	.gdscr = 0x1b004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "usb30_prim_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE | POLL_CFG_GDSCR,
+ };
+ 
+ static struct gdsc usb30_sec_gdsc = {
+ 	.gdscr = 0x2f004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "usb30_sec_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE | POLL_CFG_GDSCR,
+ };
+ 
+ static struct gdsc emac0_gdsc = {
+ 	.gdscr = 0xb6004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "emac0_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE | POLL_CFG_GDSCR,
+ };
+ 
+ static struct gdsc emac1_gdsc = {
+ 	.gdscr = 0xb4004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "emac1_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE | POLL_CFG_GDSCR,
+ };
+ 
+ static struct clk_regmap *gcc_sa8775p_clocks[] = {
+diff --git a/drivers/clk/qcom/gcc-sc7280.c b/drivers/clk/qcom/gcc-sc7280.c
+index f45a8318900c5..67ea9cf5303fa 100644
+--- a/drivers/clk/qcom/gcc-sc7280.c
++++ b/drivers/clk/qcom/gcc-sc7280.c
+@@ -3463,6 +3463,9 @@ static int gcc_sc7280_probe(struct platform_device *pdev)
+ 	qcom_branch_set_clk_en(regmap, 0x71004);/* GCC_GPU_CFG_AHB_CLK */
+ 	regmap_update_bits(regmap, 0x7100C, BIT(13), BIT(13));
+ 
++	/* FORCE_MEM_CORE_ON for ufs phy ice core clocks */
++	qcom_branch_set_force_mem_core(regmap, gcc_ufs_phy_ice_core_clk, true);
++
+ 	ret = qcom_cc_register_rcg_dfs(regmap, gcc_dfs_clocks,
+ 			ARRAY_SIZE(gcc_dfs_clocks));
+ 	if (ret)
+diff --git a/drivers/clk/qcom/gcc-x1e80100.c b/drivers/clk/qcom/gcc-x1e80100.c
+index 1404017be9180..a263f0c412f5a 100644
+--- a/drivers/clk/qcom/gcc-x1e80100.c
++++ b/drivers/clk/qcom/gcc-x1e80100.c
+@@ -2812,7 +2812,7 @@ static struct clk_branch gcc_pcie_0_mstr_axi_clk = {
+ 
+ static struct clk_branch gcc_pcie_0_pipe_clk = {
+ 	.halt_reg = 0xa0044,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52010,
+ 		.enable_mask = BIT(25),
+@@ -2901,7 +2901,7 @@ static struct clk_branch gcc_pcie_1_mstr_axi_clk = {
+ 
+ static struct clk_branch gcc_pcie_1_pipe_clk = {
+ 	.halt_reg = 0x2c044,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52020,
+ 		.enable_mask = BIT(30),
+@@ -2990,7 +2990,7 @@ static struct clk_branch gcc_pcie_2_mstr_axi_clk = {
+ 
+ static struct clk_branch gcc_pcie_2_pipe_clk = {
+ 	.halt_reg = 0x13044,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52020,
+ 		.enable_mask = BIT(23),
+@@ -3110,7 +3110,7 @@ static struct clk_branch gcc_pcie_3_phy_rchng_clk = {
+ 
+ static struct clk_branch gcc_pcie_3_pipe_clk = {
+ 	.halt_reg = 0x58050,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52020,
+ 		.enable_mask = BIT(3),
+@@ -3235,7 +3235,7 @@ static struct clk_branch gcc_pcie_4_phy_rchng_clk = {
+ 
+ static struct clk_branch gcc_pcie_4_pipe_clk = {
+ 	.halt_reg = 0x6b044,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52008,
+ 		.enable_mask = BIT(4),
+@@ -3360,7 +3360,7 @@ static struct clk_branch gcc_pcie_5_phy_rchng_clk = {
+ 
+ static struct clk_branch gcc_pcie_5_pipe_clk = {
+ 	.halt_reg = 0x2f044,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52018,
+ 		.enable_mask = BIT(17),
+@@ -3498,7 +3498,7 @@ static struct clk_branch gcc_pcie_6a_phy_rchng_clk = {
+ 
+ static struct clk_branch gcc_pcie_6a_pipe_clk = {
+ 	.halt_reg = 0x31050,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52018,
+ 		.enable_mask = BIT(26),
+@@ -3636,7 +3636,7 @@ static struct clk_branch gcc_pcie_6b_phy_rchng_clk = {
+ 
+ static struct clk_branch gcc_pcie_6b_pipe_clk = {
+ 	.halt_reg = 0x8d050,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52000,
+ 		.enable_mask = BIT(30),
+@@ -5109,7 +5109,7 @@ static struct clk_branch gcc_usb3_mp_phy_com_aux_clk = {
+ 
+ static struct clk_branch gcc_usb3_mp_phy_pipe_0_clk = {
+ 	.halt_reg = 0x17290,
+-	.halt_check = BRANCH_HALT,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x17290,
+ 		.enable_mask = BIT(0),
+@@ -5122,7 +5122,7 @@ static struct clk_branch gcc_usb3_mp_phy_pipe_0_clk = {
+ 
+ static struct clk_branch gcc_usb3_mp_phy_pipe_1_clk = {
+ 	.halt_reg = 0x17298,
+-	.halt_check = BRANCH_HALT,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x17298,
+ 		.enable_mask = BIT(0),
+@@ -5186,7 +5186,7 @@ static struct clk_regmap_mux gcc_usb3_prim_phy_pipe_clk_src = {
+ 
+ static struct clk_branch gcc_usb3_prim_phy_pipe_clk = {
+ 	.halt_reg = 0x39068,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.hwcg_reg = 0x39068,
+ 	.hwcg_bit = 1,
+ 	.clkr = {
+@@ -5257,7 +5257,7 @@ static struct clk_regmap_mux gcc_usb3_sec_phy_pipe_clk_src = {
+ 
+ static struct clk_branch gcc_usb3_sec_phy_pipe_clk = {
+ 	.halt_reg = 0xa1068,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.hwcg_reg = 0xa1068,
+ 	.hwcg_bit = 1,
+ 	.clkr = {
+@@ -5269,6 +5269,7 @@ static struct clk_branch gcc_usb3_sec_phy_pipe_clk = {
+ 				&gcc_usb3_sec_phy_pipe_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
++			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -5327,7 +5328,7 @@ static struct clk_regmap_mux gcc_usb3_tert_phy_pipe_clk_src = {
+ 
+ static struct clk_branch gcc_usb3_tert_phy_pipe_clk = {
+ 	.halt_reg = 0xa2068,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.hwcg_reg = 0xa2068,
+ 	.hwcg_bit = 1,
+ 	.clkr = {
+@@ -5339,6 +5340,7 @@ static struct clk_branch gcc_usb3_tert_phy_pipe_clk = {
+ 				&gcc_usb3_tert_phy_pipe_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
++			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -5405,7 +5407,7 @@ static struct clk_branch gcc_usb4_0_master_clk = {
+ 
+ static struct clk_branch gcc_usb4_0_phy_p2rr2p_pipe_clk = {
+ 	.halt_reg = 0x9f0d8,
+-	.halt_check = BRANCH_HALT,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x9f0d8,
+ 		.enable_mask = BIT(0),
+@@ -5418,7 +5420,7 @@ static struct clk_branch gcc_usb4_0_phy_p2rr2p_pipe_clk = {
+ 
+ static struct clk_branch gcc_usb4_0_phy_pcie_pipe_clk = {
+ 	.halt_reg = 0x9f048,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52010,
+ 		.enable_mask = BIT(19),
+@@ -5457,7 +5459,7 @@ static struct clk_branch gcc_usb4_0_phy_rx1_clk = {
+ 
+ static struct clk_branch gcc_usb4_0_phy_usb_pipe_clk = {
+ 	.halt_reg = 0x9f0a4,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.hwcg_reg = 0x9f0a4,
+ 	.hwcg_bit = 1,
+ 	.clkr = {
+@@ -5582,7 +5584,7 @@ static struct clk_branch gcc_usb4_1_master_clk = {
+ 
+ static struct clk_branch gcc_usb4_1_phy_p2rr2p_pipe_clk = {
+ 	.halt_reg = 0x2b0d8,
+-	.halt_check = BRANCH_HALT,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x2b0d8,
+ 		.enable_mask = BIT(0),
+@@ -5595,7 +5597,7 @@ static struct clk_branch gcc_usb4_1_phy_p2rr2p_pipe_clk = {
+ 
+ static struct clk_branch gcc_usb4_1_phy_pcie_pipe_clk = {
+ 	.halt_reg = 0x2b048,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52028,
+ 		.enable_mask = BIT(0),
+@@ -5634,7 +5636,7 @@ static struct clk_branch gcc_usb4_1_phy_rx1_clk = {
+ 
+ static struct clk_branch gcc_usb4_1_phy_usb_pipe_clk = {
+ 	.halt_reg = 0x2b0a4,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.hwcg_reg = 0x2b0a4,
+ 	.hwcg_bit = 1,
+ 	.clkr = {
+@@ -5759,7 +5761,7 @@ static struct clk_branch gcc_usb4_2_master_clk = {
+ 
+ static struct clk_branch gcc_usb4_2_phy_p2rr2p_pipe_clk = {
+ 	.halt_reg = 0x110d8,
+-	.halt_check = BRANCH_HALT,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x110d8,
+ 		.enable_mask = BIT(0),
+@@ -5772,7 +5774,7 @@ static struct clk_branch gcc_usb4_2_phy_p2rr2p_pipe_clk = {
+ 
+ static struct clk_branch gcc_usb4_2_phy_pcie_pipe_clk = {
+ 	.halt_reg = 0x11048,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.clkr = {
+ 		.enable_reg = 0x52028,
+ 		.enable_mask = BIT(1),
+@@ -5811,7 +5813,7 @@ static struct clk_branch gcc_usb4_2_phy_rx1_clk = {
+ 
+ static struct clk_branch gcc_usb4_2_phy_usb_pipe_clk = {
+ 	.halt_reg = 0x110a4,
+-	.halt_check = BRANCH_HALT_VOTED,
++	.halt_check = BRANCH_HALT_SKIP,
+ 	.hwcg_reg = 0x110a4,
+ 	.hwcg_bit = 1,
+ 	.clkr = {
+diff --git a/drivers/clk/qcom/gpucc-sa8775p.c b/drivers/clk/qcom/gpucc-sa8775p.c
+index 1167c42da39db..3deabf8333883 100644
+--- a/drivers/clk/qcom/gpucc-sa8775p.c
++++ b/drivers/clk/qcom/gpucc-sa8775p.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2021-2022, Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2022, 2024, Qualcomm Innovation Center, Inc. All rights reserved.
+  * Copyright (c) 2023, Linaro Limited
+  */
+ 
+@@ -161,7 +161,7 @@ static struct clk_rcg2 gpu_cc_ff_clk_src = {
+ 		.name = "gpu_cc_ff_clk_src",
+ 		.parent_data = gpu_cc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gpu_cc_parent_data_0),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -181,7 +181,7 @@ static struct clk_rcg2 gpu_cc_gmu_clk_src = {
+ 		.parent_data = gpu_cc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(gpu_cc_parent_data_1),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -200,7 +200,7 @@ static struct clk_rcg2 gpu_cc_hub_clk_src = {
+ 		.name = "gpu_cc_hub_clk_src",
+ 		.parent_data = gpu_cc_parent_data_2,
+ 		.num_parents = ARRAY_SIZE(gpu_cc_parent_data_2),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -280,7 +280,7 @@ static struct clk_branch gpu_cc_ahb_clk = {
+ 				&gpu_cc_hub_ahb_div_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
++			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -294,8 +294,7 @@ static struct clk_branch gpu_cc_cb_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(const struct clk_init_data){
+ 			.name = "gpu_cc_cb_clk",
+-			.flags = CLK_IS_CRITICAL,
+-			.ops = &clk_branch2_ops,
++			.ops = &clk_branch2_aon_ops,
+ 		},
+ 	},
+ };
+@@ -312,7 +311,7 @@ static struct clk_branch gpu_cc_crc_ahb_clk = {
+ 				&gpu_cc_hub_ahb_div_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
++			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -330,7 +329,7 @@ static struct clk_branch gpu_cc_cx_ff_clk = {
+ 				&gpu_cc_ff_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
++			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -348,7 +347,7 @@ static struct clk_branch gpu_cc_cx_gmu_clk = {
+ 				&gpu_cc_gmu_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
+-			.flags =  CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
++			.flags =  CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_aon_ops,
+ 		},
+ 	},
+@@ -362,7 +361,6 @@ static struct clk_branch gpu_cc_cx_snoc_dvm_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(const struct clk_init_data){
+ 			.name = "gpu_cc_cx_snoc_dvm_clk",
+-			.flags = CLK_IS_CRITICAL,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -380,7 +378,7 @@ static struct clk_branch gpu_cc_cxo_aon_clk = {
+ 				&gpu_cc_xo_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
++			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -398,7 +396,7 @@ static struct clk_branch gpu_cc_cxo_clk = {
+ 				&gpu_cc_xo_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
+-			.flags =  CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
++			.flags =  CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -416,7 +414,7 @@ static struct clk_branch gpu_cc_demet_clk = {
+ 				&gpu_cc_demet_div_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
++			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_aon_ops,
+ 		},
+ 	},
+@@ -430,7 +428,6 @@ static struct clk_branch gpu_cc_hlos1_vote_gpu_smmu_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(const struct clk_init_data){
+ 			.name = "gpu_cc_hlos1_vote_gpu_smmu_clk",
+-			.flags = CLK_IS_CRITICAL,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -448,7 +445,7 @@ static struct clk_branch gpu_cc_hub_aon_clk = {
+ 				&gpu_cc_hub_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
++			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_aon_ops,
+ 		},
+ 	},
+@@ -466,7 +463,7 @@ static struct clk_branch gpu_cc_hub_cx_int_clk = {
+ 				&gpu_cc_hub_cx_int_div_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
+-			.flags =  CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
++			.flags =  CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_aon_ops,
+ 		},
+ 	},
+@@ -480,7 +477,6 @@ static struct clk_branch gpu_cc_memnoc_gfx_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(const struct clk_init_data){
+ 			.name = "gpu_cc_memnoc_gfx_clk",
+-			.flags = CLK_IS_CRITICAL,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -494,7 +490,6 @@ static struct clk_branch gpu_cc_sleep_clk = {
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(const struct clk_init_data){
+ 			.name = "gpu_cc_sleep_clk",
+-			.flags = CLK_IS_CRITICAL,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -528,16 +523,22 @@ static struct clk_regmap *gpu_cc_sa8775p_clocks[] = {
+ 
+ static struct gdsc cx_gdsc = {
+ 	.gdscr = 0x9108,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.gds_hw_ctrl = 0x953c,
+ 	.pd = {
+ 		.name = "cx_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = VOTABLE | RETAIN_FF_ENABLE | ALWAYS_ON,
++	.flags = VOTABLE | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc gx_gdsc = {
+ 	.gdscr = 0x905c,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "gx_gdsc",
+ 		.power_on = gdsc_gx_do_nothing_enable,
+diff --git a/drivers/clk/qcom/gpucc-sm8350.c b/drivers/clk/qcom/gpucc-sm8350.c
+index 38505d1388b67..8d9dcff40dd0b 100644
+--- a/drivers/clk/qcom/gpucc-sm8350.c
++++ b/drivers/clk/qcom/gpucc-sm8350.c
+@@ -2,6 +2,7 @@
+ /*
+  * Copyright (c) 2019-2020, The Linux Foundation. All rights reserved.
+  * Copyright (c) 2022, Linaro Limited
++ * Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/clk.h>
+@@ -147,7 +148,7 @@ static struct clk_rcg2 gpu_cc_gmu_clk_src = {
+ 		.parent_data = gpu_cc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gpu_cc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -169,7 +170,7 @@ static struct clk_rcg2 gpu_cc_hub_clk_src = {
+ 		.parent_data = gpu_cc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(gpu_cc_parent_data_1),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/qcom/kpss-xcc.c b/drivers/clk/qcom/kpss-xcc.c
+index 23b0b11f00077..e7cfa8d22044e 100644
+--- a/drivers/clk/qcom/kpss-xcc.c
++++ b/drivers/clk/qcom/kpss-xcc.c
+@@ -58,9 +58,7 @@ static int kpss_xcc_driver_probe(struct platform_device *pdev)
+ 	if (IS_ERR(hw))
+ 		return PTR_ERR(hw);
+ 
+-	of_clk_add_hw_provider(dev->of_node, of_clk_hw_simple_get, hw);
+-
+-	return 0;
++	return of_clk_add_hw_provider(dev->of_node, of_clk_hw_simple_get, hw);
+ }
+ 
+ static struct platform_driver kpss_xcc_driver = {
+diff --git a/drivers/clk/samsung/clk-exynos4.c b/drivers/clk/samsung/clk-exynos4.c
+index a026ccca7315f..28945b6b0ee1c 100644
+--- a/drivers/clk/samsung/clk-exynos4.c
++++ b/drivers/clk/samsung/clk-exynos4.c
+@@ -1040,19 +1040,20 @@ static unsigned long __init exynos4_get_xom(void)
+ static void __init exynos4_clk_register_finpll(struct samsung_clk_provider *ctx)
+ {
+ 	struct samsung_fixed_rate_clock fclk;
+-	struct clk *clk;
+-	unsigned long finpll_f = 24000000;
++	unsigned long finpll_f;
++	unsigned int parent;
+ 	char *parent_name;
+ 	unsigned int xom = exynos4_get_xom();
+ 
+ 	parent_name = xom & 1 ? "xusbxti" : "xxti";
+-	clk = clk_get(NULL, parent_name);
+-	if (IS_ERR(clk)) {
++	parent = xom & 1 ? CLK_XUSBXTI : CLK_XXTI;
++
++	finpll_f = clk_hw_get_rate(ctx->clk_data.hws[parent]);
++	if (!finpll_f) {
+ 		pr_err("%s: failed to lookup parent clock %s, assuming "
+ 			"fin_pll clock frequency is 24MHz\n", __func__,
+ 			parent_name);
+-	} else {
+-		finpll_f = clk_get_rate(clk);
++		finpll_f = 24000000;
+ 	}
+ 
+ 	fclk.id = CLK_FIN_PLL;
+diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
+index fc275d41d51e9..66b73c308ce67 100644
+--- a/drivers/cpufreq/amd-pstate-ut.c
++++ b/drivers/cpufreq/amd-pstate-ut.c
+@@ -202,6 +202,7 @@ static void amd_pstate_ut_check_freq(u32 index)
+ 	int cpu = 0;
+ 	struct cpufreq_policy *policy = NULL;
+ 	struct amd_cpudata *cpudata = NULL;
++	u32 nominal_freq_khz;
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		policy = cpufreq_cpu_get(cpu);
+@@ -209,13 +210,14 @@ static void amd_pstate_ut_check_freq(u32 index)
+ 			break;
+ 		cpudata = policy->driver_data;
+ 
+-		if (!((cpudata->max_freq >= cpudata->nominal_freq) &&
+-			(cpudata->nominal_freq > cpudata->lowest_nonlinear_freq) &&
++		nominal_freq_khz = cpudata->nominal_freq*1000;
++		if (!((cpudata->max_freq >= nominal_freq_khz) &&
++			(nominal_freq_khz > cpudata->lowest_nonlinear_freq) &&
+ 			(cpudata->lowest_nonlinear_freq > cpudata->min_freq) &&
+ 			(cpudata->min_freq > 0))) {
+ 			amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
+ 			pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n",
+-				__func__, cpu, cpudata->max_freq, cpudata->nominal_freq,
++				__func__, cpu, cpudata->max_freq, nominal_freq_khz,
+ 				cpudata->lowest_nonlinear_freq, cpudata->min_freq);
+ 			goto skip_test;
+ 		}
+@@ -229,13 +231,13 @@ static void amd_pstate_ut_check_freq(u32 index)
+ 
+ 		if (cpudata->boost_supported) {
+ 			if ((policy->max == cpudata->max_freq) ||
+-					(policy->max == cpudata->nominal_freq))
++					(policy->max == nominal_freq_khz))
+ 				amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
+ 			else {
+ 				amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
+ 				pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n",
+ 					__func__, cpu, policy->max, cpudata->max_freq,
+-					cpudata->nominal_freq);
++					nominal_freq_khz);
+ 				goto skip_test;
+ 			}
+ 		} else {
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 9ad62dbe8bfbf..a092b13ffbc2f 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -247,6 +247,26 @@ static int amd_pstate_get_energy_pref_index(struct amd_cpudata *cpudata)
+ 	return index;
+ }
+ 
++static void pstate_update_perf(struct amd_cpudata *cpudata, u32 min_perf,
++			       u32 des_perf, u32 max_perf, bool fast_switch)
++{
++	if (fast_switch)
++		wrmsrl(MSR_AMD_CPPC_REQ, READ_ONCE(cpudata->cppc_req_cached));
++	else
++		wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
++			      READ_ONCE(cpudata->cppc_req_cached));
++}
++
++DEFINE_STATIC_CALL(amd_pstate_update_perf, pstate_update_perf);
++
++static inline void amd_pstate_update_perf(struct amd_cpudata *cpudata,
++					  u32 min_perf, u32 des_perf,
++					  u32 max_perf, bool fast_switch)
++{
++	static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf,
++					    max_perf, fast_switch);
++}
++
+ static int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp)
+ {
+ 	int ret;
+@@ -263,6 +283,9 @@ static int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp)
+ 		if (!ret)
+ 			cpudata->epp_cached = epp;
+ 	} else {
++		amd_pstate_update_perf(cpudata, cpudata->min_limit_perf, 0U,
++					     cpudata->max_limit_perf, false);
++
+ 		perf_ctrls.energy_perf = epp;
+ 		ret = cppc_set_epp_perf(cpudata->cpu, &perf_ctrls, 1);
+ 		if (ret) {
+@@ -452,16 +475,6 @@ static inline int amd_pstate_init_perf(struct amd_cpudata *cpudata)
+ 	return static_call(amd_pstate_init_perf)(cpudata);
+ }
+ 
+-static void pstate_update_perf(struct amd_cpudata *cpudata, u32 min_perf,
+-			       u32 des_perf, u32 max_perf, bool fast_switch)
+-{
+-	if (fast_switch)
+-		wrmsrl(MSR_AMD_CPPC_REQ, READ_ONCE(cpudata->cppc_req_cached));
+-	else
+-		wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
+-			      READ_ONCE(cpudata->cppc_req_cached));
+-}
+-
+ static void cppc_update_perf(struct amd_cpudata *cpudata,
+ 			     u32 min_perf, u32 des_perf,
+ 			     u32 max_perf, bool fast_switch)
+@@ -475,16 +488,6 @@ static void cppc_update_perf(struct amd_cpudata *cpudata,
+ 	cppc_set_perf(cpudata->cpu, &perf_ctrls);
+ }
+ 
+-DEFINE_STATIC_CALL(amd_pstate_update_perf, pstate_update_perf);
+-
+-static inline void amd_pstate_update_perf(struct amd_cpudata *cpudata,
+-					  u32 min_perf, u32 des_perf,
+-					  u32 max_perf, bool fast_switch)
+-{
+-	static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf,
+-					    max_perf, fast_switch);
+-}
+-
+ static inline bool amd_pstate_sample(struct amd_cpudata *cpudata)
+ {
+ 	u64 aperf, mperf, tsc;
+diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+index ea05d9d674902..5004e1dbc7522 100644
+--- a/drivers/cpufreq/qcom-cpufreq-nvmem.c
++++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+@@ -480,23 +480,30 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
+ 
+ 	drv = devm_kzalloc(&pdev->dev, struct_size(drv, cpus, num_possible_cpus()),
+ 		           GFP_KERNEL);
+-	if (!drv)
++	if (!drv) {
++		of_node_put(np);
+ 		return -ENOMEM;
++	}
+ 
+ 	match = pdev->dev.platform_data;
+ 	drv->data = match->data;
+-	if (!drv->data)
++	if (!drv->data) {
++		of_node_put(np);
+ 		return -ENODEV;
++	}
+ 
+ 	if (drv->data->get_version) {
+ 		speedbin_nvmem = of_nvmem_cell_get(np, NULL);
+-		if (IS_ERR(speedbin_nvmem))
++		if (IS_ERR(speedbin_nvmem)) {
++			of_node_put(np);
+ 			return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
+ 					     "Could not get nvmem cell\n");
++		}
+ 
+ 		ret = drv->data->get_version(cpu_dev,
+ 							speedbin_nvmem, &pvs_name, drv);
+ 		if (ret) {
++			of_node_put(np);
+ 			nvmem_cell_put(speedbin_nvmem);
+ 			return ret;
+ 		}
+diff --git a/drivers/cpufreq/sun50i-cpufreq-nvmem.c b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+index 0b882765cd66f..ef83e4bf26391 100644
+--- a/drivers/cpufreq/sun50i-cpufreq-nvmem.c
++++ b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+@@ -131,7 +131,7 @@ static const struct of_device_id cpu_opp_match_list[] = {
+ static bool dt_has_supported_hw(void)
+ {
+ 	bool has_opp_supported_hw = false;
+-	struct device_node *np, *opp;
++	struct device_node *np;
+ 	struct device *cpu_dev;
+ 
+ 	cpu_dev = get_cpu_device(0);
+@@ -142,7 +142,7 @@ static bool dt_has_supported_hw(void)
+ 	if (!np)
+ 		return false;
+ 
+-	for_each_child_of_node(np, opp) {
++	for_each_child_of_node_scoped(np, opp) {
+ 		if (of_find_property(opp, "opp-supported-hw", NULL)) {
+ 			has_opp_supported_hw = true;
+ 			break;
+diff --git a/drivers/cpufreq/ti-cpufreq.c b/drivers/cpufreq/ti-cpufreq.c
+index 714ed53753fa5..5af85c4cbad0c 100644
+--- a/drivers/cpufreq/ti-cpufreq.c
++++ b/drivers/cpufreq/ti-cpufreq.c
+@@ -417,7 +417,7 @@ static int ti_cpufreq_probe(struct platform_device *pdev)
+ 
+ 	ret = dev_pm_opp_set_config(opp_data->cpu_dev, &config);
+ 	if (ret < 0) {
+-		dev_err(opp_data->cpu_dev, "Failed to set OPP config\n");
++		dev_err_probe(opp_data->cpu_dev, ret, "Failed to set OPP config\n");
+ 		goto fail_put_node;
+ 	}
+ 
+diff --git a/drivers/crypto/atmel-sha204a.c b/drivers/crypto/atmel-sha204a.c
+index 24ffdf5050235..2034f60315183 100644
+--- a/drivers/crypto/atmel-sha204a.c
++++ b/drivers/crypto/atmel-sha204a.c
+@@ -106,7 +106,7 @@ static int atmel_sha204a_otp_read(struct i2c_client *client, u16 addr, u8 *otp)
+ 
+ 	if (cmd.data[0] == 0xff) {
+ 		dev_err(&client->dev, "failed, device not ready\n");
+-		return -ret;
++		return -EINVAL;
+ 	}
+ 
+ 	memcpy(otp, cmd.data+1, 4);
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 2102377f727b1..1912bee22dd4a 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -1642,10 +1642,16 @@ static int sev_update_firmware(struct device *dev)
+ 
+ static int __sev_snp_shutdown_locked(int *error, bool panic)
+ {
+-	struct sev_device *sev = psp_master->sev_data;
++	struct psp_device *psp = psp_master;
++	struct sev_device *sev;
+ 	struct sev_data_snp_shutdown_ex data;
+ 	int ret;
+ 
++	if (!psp || !psp->sev_data)
++		return 0;
++
++	sev = psp->sev_data;
++
+ 	if (!sev->snp_initialized)
+ 		return 0;
+ 
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_cfg.c b/drivers/crypto/intel/qat/qat_common/adf_cfg.c
+index 8836f015c39c4..2cf102ad4ca82 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_cfg.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_cfg.c
+@@ -290,17 +290,19 @@ int adf_cfg_add_key_value_param(struct adf_accel_dev *accel_dev,
+ 	 * 3. if the key exists with the same value, then return without doing
+ 	 *    anything (the newly created key_val is freed).
+ 	 */
++	down_write(&cfg->lock);
+ 	if (!adf_cfg_key_val_get(accel_dev, section_name, key, temp_val)) {
+ 		if (strncmp(temp_val, key_val->val, sizeof(temp_val))) {
+ 			adf_cfg_keyval_remove(key, section);
+ 		} else {
+ 			kfree(key_val);
+-			return 0;
++			goto out;
+ 		}
+ 	}
+ 
+-	down_write(&cfg->lock);
+ 	adf_cfg_keyval_add(key_val, section);
++
++out:
+ 	up_write(&cfg->lock);
+ 	return 0;
+ }
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index 057d73c370b73..c82775dbb557a 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -225,7 +225,8 @@ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx)
+ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ 			   struct skcipher_request *req, int init)
+ {
+-	dma_addr_t key_phys, src_phys, dst_phys;
++	dma_addr_t key_phys = 0;
++	dma_addr_t src_phys, dst_phys;
+ 	struct dcp *sdcp = global_sdcp;
+ 	struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];
+ 	struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+diff --git a/drivers/crypto/tegra/tegra-se-main.c b/drivers/crypto/tegra/tegra-se-main.c
+index 9955874b3dc37..f94c0331b148c 100644
+--- a/drivers/crypto/tegra/tegra-se-main.c
++++ b/drivers/crypto/tegra/tegra-se-main.c
+@@ -326,7 +326,6 @@ static void tegra_se_remove(struct platform_device *pdev)
+ 
+ 	crypto_engine_stop(se->engine);
+ 	crypto_engine_exit(se->engine);
+-	iommu_fwspec_free(se->dev);
+ 	host1x_client_unregister(&se->client);
+ }
+ 
+diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
+index 3af4307873157..0af934b56a6cb 100644
+--- a/drivers/dma/fsl-edma-common.c
++++ b/drivers/dma/fsl-edma-common.c
+@@ -758,6 +758,8 @@ struct dma_async_tx_descriptor *fsl_edma_prep_memcpy(struct dma_chan *chan,
+ 	fsl_desc->iscyclic = false;
+ 
+ 	fsl_chan->is_sw = true;
++	if (fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_MEM_REMOTE)
++		fsl_chan->is_remote = true;
+ 
+ 	/* To match with copy_align and max_seg_size so 1 tcd is enough */
+ 	fsl_edma_fill_tcd(fsl_chan, fsl_desc->tcd[0].vtcd, dma_src, dma_dst,
+@@ -837,6 +839,7 @@ void fsl_edma_free_chan_resources(struct dma_chan *chan)
+ 	fsl_chan->tcd_pool = NULL;
+ 	fsl_chan->is_sw = false;
+ 	fsl_chan->srcid = 0;
++	fsl_chan->is_remote = false;
+ 	if (fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_HAS_CHCLK)
+ 		clk_disable_unprepare(fsl_chan->clk);
+ }
+diff --git a/drivers/dma/fsl-edma-common.h b/drivers/dma/fsl-edma-common.h
+index ac66222c16040..268db3876787c 100644
+--- a/drivers/dma/fsl-edma-common.h
++++ b/drivers/dma/fsl-edma-common.h
+@@ -194,6 +194,7 @@ struct fsl_edma_desc {
+ #define FSL_EDMA_DRV_HAS_PD		BIT(5)
+ #define FSL_EDMA_DRV_HAS_CHCLK		BIT(6)
+ #define FSL_EDMA_DRV_HAS_CHMUX		BIT(7)
++#define FSL_EDMA_DRV_MEM_REMOTE		BIT(8)
+ /* control and status register is in tcd address space, edma3 reg layout */
+ #define FSL_EDMA_DRV_SPLIT_REG		BIT(9)
+ #define FSL_EDMA_DRV_BUS_8BYTE		BIT(10)
+diff --git a/drivers/dma/fsl-edma-main.c b/drivers/dma/fsl-edma-main.c
+index 391e4f13dfeb0..43d84cfefbe20 100644
+--- a/drivers/dma/fsl-edma-main.c
++++ b/drivers/dma/fsl-edma-main.c
+@@ -342,7 +342,7 @@ static struct fsl_edma_drvdata imx7ulp_data = {
+ };
+ 
+ static struct fsl_edma_drvdata imx8qm_data = {
+-	.flags = FSL_EDMA_DRV_HAS_PD | FSL_EDMA_DRV_EDMA3,
++	.flags = FSL_EDMA_DRV_HAS_PD | FSL_EDMA_DRV_EDMA3 | FSL_EDMA_DRV_MEM_REMOTE,
+ 	.chreg_space_sz = 0x10000,
+ 	.chreg_off = 0x10000,
+ 	.setup_irq = fsl_edma3_irq_init,
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index 6400d06588a24..df507d96660b9 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -4472,7 +4472,9 @@ static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud)
+ 		ud->rchan_cnt = UDMA_CAP2_RCHAN_CNT(cap2);
+ 		break;
+ 	case DMA_TYPE_BCDMA:
+-		ud->bchan_cnt = BCDMA_CAP2_BCHAN_CNT(cap2);
++		ud->bchan_cnt = BCDMA_CAP2_BCHAN_CNT(cap2) +
++				BCDMA_CAP3_HBCHAN_CNT(cap3) +
++				BCDMA_CAP3_UBCHAN_CNT(cap3);
+ 		ud->tchan_cnt = BCDMA_CAP2_TCHAN_CNT(cap2);
+ 		ud->rchan_cnt = BCDMA_CAP2_RCHAN_CNT(cap2);
+ 		ud->rflow_cnt = ud->rchan_cnt;
+diff --git a/drivers/edac/Makefile b/drivers/edac/Makefile
+index 9c09893695b7e..4edfb83ffbeef 100644
+--- a/drivers/edac/Makefile
++++ b/drivers/edac/Makefile
+@@ -54,11 +54,13 @@ obj-$(CONFIG_EDAC_MPC85XX)		+= mpc85xx_edac_mod.o
+ layerscape_edac_mod-y			:= fsl_ddr_edac.o layerscape_edac.o
+ obj-$(CONFIG_EDAC_LAYERSCAPE)		+= layerscape_edac_mod.o
+ 
+-skx_edac-y				:= skx_common.o skx_base.o
+-obj-$(CONFIG_EDAC_SKX)			+= skx_edac.o
++skx_edac_common-y			:= skx_common.o
+ 
+-i10nm_edac-y				:= skx_common.o i10nm_base.o
+-obj-$(CONFIG_EDAC_I10NM)		+= i10nm_edac.o
++skx_edac-y				:= skx_base.o
++obj-$(CONFIG_EDAC_SKX)			+= skx_edac.o skx_edac_common.o
++
++i10nm_edac-y				:= i10nm_base.o
++obj-$(CONFIG_EDAC_I10NM)		+= i10nm_edac.o skx_edac_common.o
+ 
+ obj-$(CONFIG_EDAC_CELL)			+= cell_edac.o
+ obj-$(CONFIG_EDAC_PPC4XX)		+= ppc4xx_edac.o
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index 27996b7924c82..8d18099fd528c 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -48,7 +48,7 @@ static u64 skx_tolm, skx_tohm;
+ static LIST_HEAD(dev_edac_list);
+ static bool skx_mem_cfg_2lm;
+ 
+-int __init skx_adxl_get(void)
++int skx_adxl_get(void)
+ {
+ 	const char * const *names;
+ 	int i, j;
+@@ -110,12 +110,14 @@ int __init skx_adxl_get(void)
+ 
+ 	return -ENODEV;
+ }
++EXPORT_SYMBOL_GPL(skx_adxl_get);
+ 
+-void __exit skx_adxl_put(void)
++void skx_adxl_put(void)
+ {
+ 	kfree(adxl_values);
+ 	kfree(adxl_msg);
+ }
++EXPORT_SYMBOL_GPL(skx_adxl_put);
+ 
+ static bool skx_adxl_decode(struct decoded_addr *res, bool error_in_1st_level_mem)
+ {
+@@ -187,12 +189,14 @@ void skx_set_mem_cfg(bool mem_cfg_2lm)
+ {
+ 	skx_mem_cfg_2lm = mem_cfg_2lm;
+ }
++EXPORT_SYMBOL_GPL(skx_set_mem_cfg);
+ 
+ void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log)
+ {
+ 	driver_decode = decode;
+ 	skx_show_retry_rd_err_log = show_retry_log;
+ }
++EXPORT_SYMBOL_GPL(skx_set_decode);
+ 
+ int skx_get_src_id(struct skx_dev *d, int off, u8 *id)
+ {
+@@ -206,6 +210,7 @@ int skx_get_src_id(struct skx_dev *d, int off, u8 *id)
+ 	*id = GET_BITFIELD(reg, 12, 14);
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(skx_get_src_id);
+ 
+ int skx_get_node_id(struct skx_dev *d, u8 *id)
+ {
+@@ -219,6 +224,7 @@ int skx_get_node_id(struct skx_dev *d, u8 *id)
+ 	*id = GET_BITFIELD(reg, 0, 2);
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(skx_get_node_id);
+ 
+ static int get_width(u32 mtr)
+ {
+@@ -284,6 +290,7 @@ int skx_get_all_bus_mappings(struct res_config *cfg, struct list_head **list)
+ 		*list = &dev_edac_list;
+ 	return ndev;
+ }
++EXPORT_SYMBOL_GPL(skx_get_all_bus_mappings);
+ 
+ int skx_get_hi_lo(unsigned int did, int off[], u64 *tolm, u64 *tohm)
+ {
+@@ -323,6 +330,7 @@ int skx_get_hi_lo(unsigned int did, int off[], u64 *tolm, u64 *tohm)
+ 	pci_dev_put(pdev);
+ 	return -ENODEV;
+ }
++EXPORT_SYMBOL_GPL(skx_get_hi_lo);
+ 
+ static int skx_get_dimm_attr(u32 reg, int lobit, int hibit, int add,
+ 			     int minval, int maxval, const char *name)
+@@ -394,6 +402,7 @@ int skx_get_dimm_info(u32 mtr, u32 mcmtr, u32 amap, struct dimm_info *dimm,
+ 
+ 	return 1;
+ }
++EXPORT_SYMBOL_GPL(skx_get_dimm_info);
+ 
+ int skx_get_nvdimm_info(struct dimm_info *dimm, struct skx_imc *imc,
+ 			int chan, int dimmno, const char *mod_str)
+@@ -442,6 +451,7 @@ int skx_get_nvdimm_info(struct dimm_info *dimm, struct skx_imc *imc,
+ 
+ 	return (size == 0 || size == ~0ull) ? 0 : 1;
+ }
++EXPORT_SYMBOL_GPL(skx_get_nvdimm_info);
+ 
+ int skx_register_mci(struct skx_imc *imc, struct pci_dev *pdev,
+ 		     const char *ctl_name, const char *mod_str,
+@@ -512,6 +522,7 @@ int skx_register_mci(struct skx_imc *imc, struct pci_dev *pdev,
+ 	imc->mci = NULL;
+ 	return rc;
+ }
++EXPORT_SYMBOL_GPL(skx_register_mci);
+ 
+ static void skx_unregister_mci(struct skx_imc *imc)
+ {
+@@ -688,6 +699,7 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ 	mce->kflags |= MCE_HANDLED_EDAC;
+ 	return NOTIFY_DONE;
+ }
++EXPORT_SYMBOL_GPL(skx_mce_check_error);
+ 
+ void skx_remove(void)
+ {
+@@ -725,3 +737,8 @@ void skx_remove(void)
+ 		kfree(d);
+ 	}
+ }
++EXPORT_SYMBOL_GPL(skx_remove);
++
++MODULE_LICENSE("GPL v2");
++MODULE_AUTHOR("Tony Luck");
++MODULE_DESCRIPTION("MC Driver for Intel server processors");
+diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h
+index b6d3607dffe27..11faf1db4fa48 100644
+--- a/drivers/edac/skx_common.h
++++ b/drivers/edac/skx_common.h
+@@ -231,8 +231,8 @@ typedef int (*get_dimm_config_f)(struct mem_ctl_info *mci,
+ typedef bool (*skx_decode_f)(struct decoded_addr *res);
+ typedef void (*skx_show_retry_log_f)(struct decoded_addr *res, char *msg, int len, bool scrub_err);
+ 
+-int __init skx_adxl_get(void);
+-void __exit skx_adxl_put(void);
++int skx_adxl_get(void);
++void skx_adxl_put(void);
+ void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log);
+ void skx_set_mem_cfg(bool mem_cfg_2lm);
+ 
+diff --git a/drivers/firmware/efi/libstub/screen_info.c b/drivers/firmware/efi/libstub/screen_info.c
+index a51ec201ca3cb..5d3a1e32d1776 100644
+--- a/drivers/firmware/efi/libstub/screen_info.c
++++ b/drivers/firmware/efi/libstub/screen_info.c
+@@ -32,6 +32,8 @@ struct screen_info *__alloc_screen_info(void)
+ 	if (status != EFI_SUCCESS)
+ 		return NULL;
+ 
++	memset(si, 0, sizeof(*si));
++
+ 	status = efi_bs_call(install_configuration_table,
+ 			     &screen_info_guid, si);
+ 	if (status == EFI_SUCCESS)
+diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
+index 1983fd3bf392e..99d39eda51342 100644
+--- a/drivers/firmware/efi/libstub/x86-stub.c
++++ b/drivers/firmware/efi/libstub/x86-stub.c
+@@ -469,11 +469,12 @@ void __noreturn efi_stub_entry(efi_handle_t handle,
+ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
+ 				   efi_system_table_t *sys_table_arg)
+ {
+-	static struct boot_params boot_params __page_aligned_bss;
+-	struct setup_header *hdr = &boot_params.hdr;
+ 	efi_guid_t proto = LOADED_IMAGE_PROTOCOL_GUID;
++	struct boot_params *boot_params;
++	struct setup_header *hdr;
+ 	int options_size = 0;
+ 	efi_status_t status;
++	unsigned long alloc;
+ 	char *cmdline_ptr;
+ 
+ 	if (efi_is_native())
+@@ -491,6 +492,13 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
+ 		efi_exit(handle, status);
+ 	}
+ 
++	status = efi_allocate_pages(PARAM_SIZE, &alloc, ULONG_MAX);
++	if (status != EFI_SUCCESS)
++		efi_exit(handle, status);
++
++	boot_params = memset((void *)alloc, 0x0, PARAM_SIZE);
++	hdr	    = &boot_params->hdr;
++
+ 	/* Assign the setup_header fields that the kernel actually cares about */
+ 	hdr->root_flags	= 1;
+ 	hdr->vid_mode	= 0xffff;
+@@ -500,17 +508,16 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
+ 
+ 	/* Convert unicode cmdline to ascii */
+ 	cmdline_ptr = efi_convert_cmdline(image, &options_size);
+-	if (!cmdline_ptr)
+-		goto fail;
++	if (!cmdline_ptr) {
++		efi_free(PARAM_SIZE, alloc);
++		efi_exit(handle, EFI_OUT_OF_RESOURCES);
++	}
+ 
+ 	efi_set_u64_split((unsigned long)cmdline_ptr, &hdr->cmd_line_ptr,
+-			  &boot_params.ext_cmd_line_ptr);
++			  &boot_params->ext_cmd_line_ptr);
+ 
+-	efi_stub_entry(handle, sys_table_arg, &boot_params);
++	efi_stub_entry(handle, sys_table_arg, boot_params);
+ 	/* not reached */
+-
+-fail:
+-	efi_exit(handle, status);
+ }
+ 
+ static void add_e820ext(struct boot_params *params,
+diff --git a/drivers/firmware/turris-mox-rwtm.c b/drivers/firmware/turris-mox-rwtm.c
+index 31d962cdd6eb2..3e7f186d239a2 100644
+--- a/drivers/firmware/turris-mox-rwtm.c
++++ b/drivers/firmware/turris-mox-rwtm.c
+@@ -2,7 +2,7 @@
+ /*
+  * Turris Mox rWTM firmware driver
+  *
+- * Copyright (C) 2019 Marek Behún <kabel@kernel.org>
++ * Copyright (C) 2019, 2024 Marek Behún <kabel@kernel.org>
+  */
+ 
+ #include <linux/armada-37xx-rwtm-mailbox.h>
+@@ -174,6 +174,9 @@ static void mox_rwtm_rx_callback(struct mbox_client *cl, void *data)
+ 	struct mox_rwtm *rwtm = dev_get_drvdata(cl->dev);
+ 	struct armada_37xx_rwtm_rx_msg *msg = data;
+ 
++	if (completion_done(&rwtm->cmd_done))
++		return;
++
+ 	rwtm->reply = *msg;
+ 	complete(&rwtm->cmd_done);
+ }
+@@ -199,9 +202,8 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2);
+-	if (ret < 0)
+-		return ret;
++	if (!wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2))
++		return -ETIMEDOUT;
+ 
+ 	ret = mox_get_status(MBOX_CMD_BOARD_INFO, reply->retval);
+ 	if (ret == -ENODATA) {
+@@ -235,9 +237,8 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2);
+-	if (ret < 0)
+-		return ret;
++	if (!wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2))
++		return -ETIMEDOUT;
+ 
+ 	ret = mox_get_status(MBOX_CMD_ECDSA_PUB_KEY, reply->retval);
+ 	if (ret == -ENODATA) {
+@@ -274,9 +275,8 @@ static int check_get_random_support(struct mox_rwtm *rwtm)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2);
+-	if (ret < 0)
+-		return ret;
++	if (!wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2))
++		return -ETIMEDOUT;
+ 
+ 	return mox_get_status(MBOX_CMD_GET_RANDOM, rwtm->reply.retval);
+ }
+@@ -499,6 +499,7 @@ static int turris_mox_rwtm_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, rwtm);
+ 
+ 	mutex_init(&rwtm->busy);
++	init_completion(&rwtm->cmd_done);
+ 
+ 	rwtm->mbox_client.dev = dev;
+ 	rwtm->mbox_client.rx_callback = mox_rwtm_rx_callback;
+@@ -512,8 +513,6 @@ static int turris_mox_rwtm_probe(struct platform_device *pdev)
+ 		goto remove_files;
+ 	}
+ 
+-	init_completion(&rwtm->cmd_done);
+-
+ 	ret = mox_get_board_info(rwtm);
+ 	if (ret < 0)
+ 		dev_warn(dev, "Cannot read board information: %i\n", ret);
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index d0aa277fc3bff..359b68adafc1b 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -106,8 +106,7 @@ config DRM_KMS_HELPER
+ 
+ config DRM_PANIC
+ 	bool "Display a user-friendly message when a kernel panic occurs"
+-	depends on DRM && !FRAMEBUFFER_CONSOLE
+-	select DRM_KMS_HELPER
++	depends on DRM && !(FRAMEBUFFER_CONSOLE && VT_CONSOLE)
+ 	select FONT_SUPPORT
+ 	help
+ 	  Enable a drm panic handler, which will display a user-friendly message
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 33f791d92ddf3..ee7df1d84e028 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -6173,7 +6173,7 @@ int amdgpu_device_baco_exit(struct drm_device *dev)
+ 	    adev->nbio.funcs->enable_doorbell_interrupt)
+ 		adev->nbio.funcs->enable_doorbell_interrupt(adev, true);
+ 
+-	if (amdgpu_passthrough(adev) &&
++	if (amdgpu_passthrough(adev) && adev->nbio.funcs &&
+ 	    adev->nbio.funcs->clear_doorbell_interrupt)
+ 		adev->nbio.funcs->clear_doorbell_interrupt(adev);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 1d955652f3ba6..e92bdc9a39d35 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -329,8 +329,9 @@ int amdgpu_gfx_kiq_init_ring(struct amdgpu_device *adev, int xcc_id)
+ 
+ 	ring->eop_gpu_addr = kiq->eop_gpu_addr;
+ 	ring->no_scheduler = true;
+-	snprintf(ring->name, sizeof(ring->name), "kiq_%d.%d.%d.%d",
+-		 xcc_id, ring->me, ring->pipe, ring->queue);
++	snprintf(ring->name, sizeof(ring->name), "kiq_%hhu.%hhu.%hhu.%hhu",
++		 (unsigned char)xcc_id, (unsigned char)ring->me,
++		 (unsigned char)ring->pipe, (unsigned char)ring->queue);
+ 	r = amdgpu_ring_init(adev, ring, 1024, irq, AMDGPU_CP_KIQ_IRQ_DRIVER0,
+ 			     AMDGPU_RING_PRIO_DEFAULT, NULL);
+ 	if (r)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+index 08b9dfb653355..86b096ad0319c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+@@ -878,7 +878,6 @@ void amdgpu_gmc_noretry_set(struct amdgpu_device *adev)
+ 	struct amdgpu_gmc *gmc = &adev->gmc;
+ 	uint32_t gc_ver = amdgpu_ip_version(adev, GC_HWIP, 0);
+ 	bool noretry_default = (gc_ver == IP_VERSION(9, 0, 1) ||
+-				gc_ver == IP_VERSION(9, 3, 0) ||
+ 				gc_ver == IP_VERSION(9, 4, 0) ||
+ 				gc_ver == IP_VERSION(9, 4, 1) ||
+ 				gc_ver == IP_VERSION(9, 4, 2) ||
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 4e2391c83d7c7..0f7106066480e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -434,7 +434,7 @@ uint64_t amdgpu_vm_generation(struct amdgpu_device *adev, struct amdgpu_vm *vm)
+ 	if (!vm)
+ 		return result;
+ 
+-	result += vm->generation;
++	result += lower_32_bits(vm->generation);
+ 	/* Add one if the page tables will be re-generated on next CS */
+ 	if (drm_sched_entity_error(&vm->delayed))
+ 		++result;
+@@ -463,13 +463,14 @@ int amdgpu_vm_validate(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 		       int (*validate)(void *p, struct amdgpu_bo *bo),
+ 		       void *param)
+ {
++	uint64_t new_vm_generation = amdgpu_vm_generation(adev, vm);
+ 	struct amdgpu_vm_bo_base *bo_base;
+ 	struct amdgpu_bo *shadow;
+ 	struct amdgpu_bo *bo;
+ 	int r;
+ 
+-	if (drm_sched_entity_error(&vm->delayed)) {
+-		++vm->generation;
++	if (vm->generation != new_vm_generation) {
++		vm->generation = new_vm_generation;
+ 		amdgpu_vm_bo_reset_state_machine(vm);
+ 		amdgpu_vm_fini_entities(vm);
+ 		r = amdgpu_vm_init_entities(adev, vm);
+@@ -2441,7 +2442,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 	vm->last_update = dma_fence_get_stub();
+ 	vm->last_unlocked = dma_fence_get_stub();
+ 	vm->last_tlb_flush = dma_fence_get_stub();
+-	vm->generation = 0;
++	vm->generation = amdgpu_vm_generation(adev, NULL);
+ 
+ 	mutex_init(&vm->eviction_lock);
+ 	vm->evicting = false;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index c4ec1358f3aa6..f7f4924751020 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -1910,7 +1910,7 @@ gmc_v9_0_init_sw_mem_ranges(struct amdgpu_device *adev,
+ 		break;
+ 	}
+ 
+-	size = adev->gmc.real_vram_size >> AMDGPU_GPU_PAGE_SHIFT;
++	size = (adev->gmc.real_vram_size + SZ_16M) >> AMDGPU_GPU_PAGE_SHIFT;
+ 	size /= adev->gmc.num_mem_partitions;
+ 
+ 	for (i = 0; i < adev->gmc.num_mem_partitions; ++i) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+index cc9e961f00787..af1e90159ce36 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+@@ -176,6 +176,14 @@ static void sdma_v5_2_ring_set_wptr(struct amdgpu_ring *ring)
+ 		DRM_DEBUG("calling WDOORBELL64(0x%08x, 0x%016llx)\n",
+ 				ring->doorbell_index, ring->wptr << 2);
+ 		WDOORBELL64(ring->doorbell_index, ring->wptr << 2);
++		/* SDMA seems to miss doorbells sometimes when powergating kicks in.
++		 * Updating the wptr directly will wake it. This is only safe because
++		 * we disallow gfxoff in begin_use() and then allow it again in end_use().
++		 */
++		WREG32(sdma_v5_2_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR),
++		       lower_32_bits(ring->wptr << 2));
++		WREG32(sdma_v5_2_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR_HI),
++		       upper_32_bits(ring->wptr << 2));
+ 	} else {
+ 		DRM_DEBUG("Not using doorbell -- "
+ 				"mmSDMA%i_GFX_RB_WPTR == 0x%08x "
+@@ -1647,6 +1655,10 @@ static void sdma_v5_2_ring_begin_use(struct amdgpu_ring *ring)
+ 	 * but it shouldn't hurt for other parts since
+ 	 * this GFXOFF will be disallowed anyway when SDMA is
+ 	 * active, this just makes it explicit.
++	 * sdma_v5_2_ring_set_wptr() takes advantage of this
++	 * to update the wptr because sometimes SDMA seems to miss
++	 * doorbells when entering PG.  If you remove this, update
++	 * sdma_v5_2_ring_set_wptr() as well!
+ 	 */
+ 	amdgpu_gfx_off_ctrl(adev, false);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/smu_v13_0_10.c b/drivers/gpu/drm/amd/amdgpu/smu_v13_0_10.c
+index 04c797d54511b..0af648931df58 100644
+--- a/drivers/gpu/drm/amd/amdgpu/smu_v13_0_10.c
++++ b/drivers/gpu/drm/amd/amdgpu/smu_v13_0_10.c
+@@ -91,7 +91,7 @@ static int smu_v13_0_10_mode2_suspend_ip(struct amdgpu_device *adev)
+ 		adev->ip_blocks[i].status.hw = false;
+ 	}
+ 
+-	return r;
++	return 0;
+ }
+ 
+ static int
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+index ac1b8ead03b3b..9a33d3d000655 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+@@ -1053,6 +1053,9 @@ static int vcn_v4_0_start(struct amdgpu_device *adev)
+ 		amdgpu_dpm_enable_uvd(adev, true);
+ 
+ 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
++		if (adev->vcn.harvest_config & (1 << i))
++			continue;
++
+ 		fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
+ 
+ 		if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) {
+@@ -1506,6 +1509,9 @@ static int vcn_v4_0_stop(struct amdgpu_device *adev)
+ 	int i, r = 0;
+ 
+ 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
++		if (adev->vcn.harvest_config & (1 << i))
++			continue;
++
+ 		fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
+ 		fw_shared->sq.queue_mode |= FW_QUEUE_DPG_HOLD_OFF;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+index 81fb99729f37d..30e80c6f11ed6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+@@ -964,6 +964,9 @@ static int vcn_v4_0_5_start(struct amdgpu_device *adev)
+ 		amdgpu_dpm_enable_uvd(adev, true);
+ 
+ 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
++		if (adev->vcn.harvest_config & (1 << i))
++			continue;
++
+ 		fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
+ 
+ 		if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) {
+@@ -1168,6 +1171,9 @@ static int vcn_v4_0_5_stop(struct amdgpu_device *adev)
+ 	int i, r = 0;
+ 
+ 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
++		if (adev->vcn.harvest_config & (1 << i))
++			continue;
++
+ 		fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
+ 		fw_shared->sq.queue_mode |= FW_QUEUE_DPG_HOLD_OFF;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+index 851975b5ce298..fbd3f7a582c12 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+@@ -722,6 +722,9 @@ static int vcn_v5_0_0_start(struct amdgpu_device *adev)
+ 		amdgpu_dpm_enable_uvd(adev, true);
+ 
+ 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
++		if (adev->vcn.harvest_config & (1 << i))
++			continue;
++
+ 		fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
+ 
+ 		if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) {
+@@ -899,6 +902,9 @@ static int vcn_v5_0_0_stop(struct amdgpu_device *adev)
+ 	int i, r = 0;
+ 
+ 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
++		if (adev->vcn.harvest_config & (1 << i))
++			continue;
++
+ 		fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
+ 		fw_shared->sq.queue_mode |= FW_QUEUE_DPG_HOLD_OFF;
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+index 6bddc16808d7a..8ec136eba54a9 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+@@ -713,7 +713,7 @@ static void update_mqd_v9_4_3(struct mqd_manager *mm, void *mqd,
+ 		m = get_mqd(mqd + size * xcc);
+ 		update_mqd(mm, m, q, minfo);
+ 
+-		update_cu_mask(mm, mqd, minfo, xcc);
++		update_cu_mask(mm, m, minfo, xcc);
+ 
+ 		if (q->format == KFD_QUEUE_FORMAT_AQL) {
+ 			switch (xcc) {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_surface.c b/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
+index 067f6555cfdff..ccbb15f1638c8 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
+@@ -143,7 +143,8 @@ const struct dc_plane_status *dc_plane_get_status(
+ 		if (pipe_ctx->plane_state != plane_state)
+ 			continue;
+ 
+-		pipe_ctx->plane_state->status.is_flip_pending = false;
++		if (pipe_ctx->plane_state)
++			pipe_ctx->plane_state->status.is_flip_pending = false;
+ 
+ 		break;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 3c33c3bcbe2cb..4362fca1f15ad 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -1392,6 +1392,7 @@ struct dc {
+ 	} scratch;
+ 
+ 	struct dml2_configuration_options dml2_options;
++	struct dml2_configuration_options dml2_tmp;
+ 	enum dc_acpi_cm_power_state power_state;
+ 
+ };
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c
+index 282d70e2b18ab..3d29169dd6bbf 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c
+@@ -750,6 +750,8 @@ static void enable_phantom_plane(struct dml2_context *ctx,
+ 					ctx->config.svp_pstate.callbacks.dc,
+ 					state,
+ 					curr_pipe->plane_state);
++			if (!phantom_plane)
++				return;
+ 		}
+ 
+ 		memcpy(&phantom_plane->address, &curr_pipe->plane_state->address, sizeof(phantom_plane->address));
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+index 8ecc972dbffde..edff6b447680c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+@@ -804,7 +804,7 @@ static void populate_dml_surface_cfg_from_plane_state(enum dml_project_id dml2_p
+ 	}
+ }
+ 
+-static struct scaler_data get_scaler_data_for_plane(const struct dc_plane_state *in, struct dc_state *context)
++static void get_scaler_data_for_plane(const struct dc_plane_state *in, struct dc_state *context, struct scaler_data *out)
+ {
+ 	int i;
+ 	struct pipe_ctx *temp_pipe = &context->res_ctx.temp_pipe;
+@@ -825,7 +825,7 @@ static struct scaler_data get_scaler_data_for_plane(const struct dc_plane_state
+ 	}
+ 
+ 	ASSERT(i < MAX_PIPES);
+-	return temp_pipe->plane_res.scl_data;
++	memcpy(out, &temp_pipe->plane_res.scl_data, sizeof(*out));
+ }
+ 
+ static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned int location, const struct dc_stream_state *in)
+@@ -884,27 +884,31 @@ static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned
+ 
+ static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out, unsigned int location, const struct dc_plane_state *in, struct dc_state *context)
+ {
+-	const struct scaler_data scaler_data = get_scaler_data_for_plane(in, context);
++	struct scaler_data *scaler_data = kzalloc(sizeof(*scaler_data), GFP_KERNEL);
++	if (!scaler_data)
++		return;
++
++	get_scaler_data_for_plane(in, context, scaler_data);
+ 
+ 	out->CursorBPP[location] = dml_cur_32bit;
+ 	out->CursorWidth[location] = 256;
+ 
+ 	out->GPUVMMinPageSizeKBytes[location] = 256;
+ 
+-	out->ViewportWidth[location] = scaler_data.viewport.width;
+-	out->ViewportHeight[location] = scaler_data.viewport.height;
+-	out->ViewportWidthChroma[location] = scaler_data.viewport_c.width;
+-	out->ViewportHeightChroma[location] = scaler_data.viewport_c.height;
+-	out->ViewportXStart[location] = scaler_data.viewport.x;
+-	out->ViewportYStart[location] = scaler_data.viewport.y;
+-	out->ViewportXStartC[location] = scaler_data.viewport_c.x;
+-	out->ViewportYStartC[location] = scaler_data.viewport_c.y;
++	out->ViewportWidth[location] = scaler_data->viewport.width;
++	out->ViewportHeight[location] = scaler_data->viewport.height;
++	out->ViewportWidthChroma[location] = scaler_data->viewport_c.width;
++	out->ViewportHeightChroma[location] = scaler_data->viewport_c.height;
++	out->ViewportXStart[location] = scaler_data->viewport.x;
++	out->ViewportYStart[location] = scaler_data->viewport.y;
++	out->ViewportXStartC[location] = scaler_data->viewport_c.x;
++	out->ViewportYStartC[location] = scaler_data->viewport_c.y;
+ 	out->ViewportStationary[location] = false;
+ 
+-	out->ScalerEnabled[location] = scaler_data.ratios.horz.value != dc_fixpt_one.value ||
+-				scaler_data.ratios.horz_c.value != dc_fixpt_one.value ||
+-				scaler_data.ratios.vert.value != dc_fixpt_one.value ||
+-				scaler_data.ratios.vert_c.value != dc_fixpt_one.value;
++	out->ScalerEnabled[location] = scaler_data->ratios.horz.value != dc_fixpt_one.value ||
++				scaler_data->ratios.horz_c.value != dc_fixpt_one.value ||
++				scaler_data->ratios.vert.value != dc_fixpt_one.value ||
++				scaler_data->ratios.vert_c.value != dc_fixpt_one.value;
+ 
+ 	/* Current driver code base uses LBBitPerPixel as 57. There is a discrepancy
+ 	 * from the HW/DML teams about this value. Initialize LBBitPerPixel with the
+@@ -920,25 +924,25 @@ static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out
+ 		out->VRatioChroma[location] = 1;
+ 	} else {
+ 		/* Follow the original dml_wrapper.c code direction to fix scaling issues */
+-		out->HRatio[location] = (dml_float_t)scaler_data.ratios.horz.value / (1ULL << 32);
+-		out->HRatioChroma[location] = (dml_float_t)scaler_data.ratios.horz_c.value / (1ULL << 32);
+-		out->VRatio[location] = (dml_float_t)scaler_data.ratios.vert.value / (1ULL << 32);
+-		out->VRatioChroma[location] = (dml_float_t)scaler_data.ratios.vert_c.value / (1ULL << 32);
++		out->HRatio[location] = (dml_float_t)scaler_data->ratios.horz.value / (1ULL << 32);
++		out->HRatioChroma[location] = (dml_float_t)scaler_data->ratios.horz_c.value / (1ULL << 32);
++		out->VRatio[location] = (dml_float_t)scaler_data->ratios.vert.value / (1ULL << 32);
++		out->VRatioChroma[location] = (dml_float_t)scaler_data->ratios.vert_c.value / (1ULL << 32);
+ 	}
+ 
+-	if (!scaler_data.taps.h_taps) {
++	if (!scaler_data->taps.h_taps) {
+ 		out->HTaps[location] = 1;
+ 		out->HTapsChroma[location] = 1;
+ 	} else {
+-		out->HTaps[location] = scaler_data.taps.h_taps;
+-		out->HTapsChroma[location] = scaler_data.taps.h_taps_c;
++		out->HTaps[location] = scaler_data->taps.h_taps;
++		out->HTapsChroma[location] = scaler_data->taps.h_taps_c;
+ 	}
+-	if (!scaler_data.taps.v_taps) {
++	if (!scaler_data->taps.v_taps) {
+ 		out->VTaps[location] = 1;
+ 		out->VTapsChroma[location] = 1;
+ 	} else {
+-		out->VTaps[location] = scaler_data.taps.v_taps;
+-		out->VTapsChroma[location] = scaler_data.taps.v_taps_c;
++		out->VTaps[location] = scaler_data->taps.v_taps;
++		out->VTapsChroma[location] = scaler_data->taps.v_taps_c;
+ 	}
+ 
+ 	out->SourceScan[location] = (enum dml_rotation_angle)in->rotation;
+@@ -949,6 +953,8 @@ static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out
+ 	out->DynamicMetadataTransmittedBytes[location] = 0;
+ 
+ 	out->NumberOfCursors[location] = 1;
++
++	kfree(scaler_data);
+ }
+ 
+ static unsigned int map_stream_to_dml_display_cfg(const struct dml2_context *dml2,
+diff --git a/drivers/gpu/drm/amd/display/dc/optc/dcn10/dcn10_optc.c b/drivers/gpu/drm/amd/display/dc/optc/dcn10/dcn10_optc.c
+index 5574bc628053c..f109a101d84f3 100644
+--- a/drivers/gpu/drm/amd/display/dc/optc/dcn10/dcn10_optc.c
++++ b/drivers/gpu/drm/amd/display/dc/optc/dcn10/dcn10_optc.c
+@@ -945,19 +945,10 @@ void optc1_set_drr(
+ 				OTG_FORCE_LOCK_ON_EVENT, 0,
+ 				OTG_SET_V_TOTAL_MIN_MASK_EN, 0,
+ 				OTG_SET_V_TOTAL_MIN_MASK, 0);
+-
+-		// Setup manual flow control for EOF via TRIG_A
+-		optc->funcs->setup_manual_trigger(optc);
+-
+-	} else {
+-		REG_UPDATE_4(OTG_V_TOTAL_CONTROL,
+-				OTG_SET_V_TOTAL_MIN_MASK, 0,
+-				OTG_V_TOTAL_MIN_SEL, 0,
+-				OTG_V_TOTAL_MAX_SEL, 0,
+-				OTG_FORCE_LOCK_ON_EVENT, 0);
+-
+-		optc->funcs->set_vtotal_min_max(optc, 0, 0);
+ 	}
++
++	// Setup manual flow control for EOF via TRIG_A
++	optc->funcs->setup_manual_trigger(optc);
+ }
+ 
+ void optc1_set_vtotal_min_max(struct timing_generator *optc, int vtotal_min, int vtotal_max)
+diff --git a/drivers/gpu/drm/amd/display/dc/optc/dcn20/dcn20_optc.c b/drivers/gpu/drm/amd/display/dc/optc/dcn20/dcn20_optc.c
+index d6f095b4555dc..58bdbd859bf9b 100644
+--- a/drivers/gpu/drm/amd/display/dc/optc/dcn20/dcn20_optc.c
++++ b/drivers/gpu/drm/amd/display/dc/optc/dcn20/dcn20_optc.c
+@@ -462,6 +462,16 @@ void optc2_setup_manual_trigger(struct timing_generator *optc)
+ {
+ 	struct optc *optc1 = DCN10TG_FROM_TG(optc);
+ 
++	/* Set the min/max selectors unconditionally so that
++	 * DMCUB fw may change OTG timings when necessary
++	 * TODO: Remove the w/a after fixing the issue in DMCUB firmware
++	 */
++	REG_UPDATE_4(OTG_V_TOTAL_CONTROL,
++				 OTG_V_TOTAL_MIN_SEL, 1,
++				 OTG_V_TOTAL_MAX_SEL, 1,
++				 OTG_FORCE_LOCK_ON_EVENT, 0,
++				 OTG_SET_V_TOTAL_MIN_MASK, (1 << 1)); /* TRIGA */
++
+ 	REG_SET_8(OTG_TRIGA_CNTL, 0,
+ 			OTG_TRIGA_SOURCE_SELECT, 21,
+ 			OTG_TRIGA_SOURCE_PIPE_SELECT, optc->inst,
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
+index abd76345d1e43..d84c8e0e5c2f0 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
+@@ -2006,19 +2006,21 @@ void dcn32_calculate_wm_and_dlg(struct dc *dc, struct dc_state *context,
+ 
+ static void dcn32_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
+ {
+-	struct dml2_configuration_options dml2_opt = dc->dml2_options;
++	struct dml2_configuration_options *dml2_opt = &dc->dml2_tmp;
++
++	memcpy(dml2_opt, &dc->dml2_options, sizeof(dc->dml2_options));
+ 
+ 	DC_FP_START();
+ 
+ 	dcn32_update_bw_bounding_box_fpu(dc, bw_params);
+ 
+-	dml2_opt.use_clock_dc_limits = false;
++	dml2_opt->use_clock_dc_limits = false;
+ 	if (dc->debug.using_dml2 && dc->current_state && dc->current_state->bw_ctx.dml2)
+-		dml2_reinit(dc, &dml2_opt, &dc->current_state->bw_ctx.dml2);
++		dml2_reinit(dc, dml2_opt, &dc->current_state->bw_ctx.dml2);
+ 
+-	dml2_opt.use_clock_dc_limits = true;
++	dml2_opt->use_clock_dc_limits = true;
+ 	if (dc->debug.using_dml2 && dc->current_state && dc->current_state->bw_ctx.dml2_dc_power_source)
+-		dml2_reinit(dc, &dml2_opt, &dc->current_state->bw_ctx.dml2_dc_power_source);
++		dml2_reinit(dc, dml2_opt, &dc->current_state->bw_ctx.dml2_dc_power_source);
+ 
+ 	DC_FP_END();
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
+index e4b360d89b3be..9a3cc0514a36e 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
+@@ -1581,19 +1581,21 @@ static struct dc_cap_funcs cap_funcs = {
+ 
+ static void dcn321_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
+ {
+-	struct dml2_configuration_options dml2_opt = dc->dml2_options;
++	struct dml2_configuration_options *dml2_opt = &dc->dml2_tmp;
++
++	memcpy(dml2_opt, &dc->dml2_options, sizeof(dc->dml2_options));
+ 
+ 	DC_FP_START();
+ 
+ 	dcn321_update_bw_bounding_box_fpu(dc, bw_params);
+ 
+-	dml2_opt.use_clock_dc_limits = false;
++	dml2_opt->use_clock_dc_limits = false;
+ 	if (dc->debug.using_dml2 && dc->current_state && dc->current_state->bw_ctx.dml2)
+-		dml2_reinit(dc, &dml2_opt, &dc->current_state->bw_ctx.dml2);
++		dml2_reinit(dc, dml2_opt, &dc->current_state->bw_ctx.dml2);
+ 
+-	dml2_opt.use_clock_dc_limits = true;
++	dml2_opt->use_clock_dc_limits = true;
+ 	if (dc->debug.using_dml2 && dc->current_state && dc->current_state->bw_ctx.dml2_dc_power_source)
+-		dml2_reinit(dc, &dml2_opt, &dc->current_state->bw_ctx.dml2_dc_power_source);
++		dml2_reinit(dc, dml2_opt, &dc->current_state->bw_ctx.dml2_dc_power_source);
+ 
+ 	DC_FP_END();
+ }
+diff --git a/drivers/gpu/drm/amd/display/include/grph_object_id.h b/drivers/gpu/drm/amd/display/include/grph_object_id.h
+index 08ee0350b31fb..54e33062b3c02 100644
+--- a/drivers/gpu/drm/amd/display/include/grph_object_id.h
++++ b/drivers/gpu/drm/amd/display/include/grph_object_id.h
+@@ -226,8 +226,8 @@ enum dp_alt_mode {
+ 
+ struct graphics_object_id {
+ 	uint32_t  id:8;
+-	enum object_enum_id  enum_id;
+-	enum object_type  type;
++	enum object_enum_id  enum_id :4;
++	enum object_type  type :4;
+ 	uint32_t  reserved:16; /* for padding. total size should be u32 */
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index a8d34adc7d3f1..b63ad9cb24bfd 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -79,8 +79,8 @@ MODULE_FIRMWARE("amdgpu/smu_13_0_10.bin");
+ #define PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD_MASK 0x00000070L
+ #define PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD__SHIFT 0x4
+ #define smnPCIE_LC_SPEED_CNTL			0x11140290
+-#define PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE_MASK 0xC000
+-#define PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE__SHIFT 0xE
++#define PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE_MASK 0xE0
++#define PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE__SHIFT 0x5
+ 
+ #define ENABLE_IMU_ARG_GFXOFF_ENABLE		1
+ 
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c b/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c
+index 2c661f28410ed..b645c5998230b 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c
+@@ -5,6 +5,7 @@
+  *
+  */
+ #include <linux/clk.h>
++#include <linux/of.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/spinlock.h>
+ 
+@@ -610,12 +611,34 @@ get_crtc_primary(struct komeda_kms_dev *kms, struct komeda_crtc *crtc)
+ 	return NULL;
+ }
+ 
++static int komeda_attach_bridge(struct device *dev,
++				struct komeda_pipeline *pipe,
++				struct drm_encoder *encoder)
++{
++	struct drm_bridge *bridge;
++	int err;
++
++	bridge = devm_drm_of_get_bridge(dev, pipe->of_node,
++					KOMEDA_OF_PORT_OUTPUT, 0);
++	if (IS_ERR(bridge))
++		return dev_err_probe(dev, PTR_ERR(bridge), "remote bridge not found for pipe: %s\n",
++				     of_node_full_name(pipe->of_node));
++
++	err = drm_bridge_attach(encoder, bridge, NULL, 0);
++	if (err)
++		dev_err(dev, "bridge_attach() failed for pipe: %s\n",
++			of_node_full_name(pipe->of_node));
++
++	return err;
++}
++
+ static int komeda_crtc_add(struct komeda_kms_dev *kms,
+ 			   struct komeda_crtc *kcrtc)
+ {
+ 	struct drm_crtc *crtc = &kcrtc->base;
+ 	struct drm_device *base = &kms->base;
+-	struct drm_bridge *bridge;
++	struct komeda_pipeline *pipe = kcrtc->master;
++	struct drm_encoder *encoder = &kcrtc->encoder;
+ 	int err;
+ 
+ 	err = drm_crtc_init_with_planes(base, crtc,
+@@ -626,27 +649,25 @@ static int komeda_crtc_add(struct komeda_kms_dev *kms,
+ 
+ 	drm_crtc_helper_add(crtc, &komeda_crtc_helper_funcs);
+ 
+-	crtc->port = kcrtc->master->of_output_port;
++	crtc->port = pipe->of_output_port;
+ 
+ 	/* Construct an encoder for each pipeline and attach it to the remote
+ 	 * bridge
+ 	 */
+ 	kcrtc->encoder.possible_crtcs = drm_crtc_mask(crtc);
+-	err = drm_simple_encoder_init(base, &kcrtc->encoder,
+-				      DRM_MODE_ENCODER_TMDS);
++	err = drm_simple_encoder_init(base, encoder, DRM_MODE_ENCODER_TMDS);
+ 	if (err)
+ 		return err;
+ 
+-	bridge = devm_drm_of_get_bridge(base->dev, kcrtc->master->of_node,
+-					KOMEDA_OF_PORT_OUTPUT, 0);
+-	if (IS_ERR(bridge))
+-		return PTR_ERR(bridge);
+-
+-	err = drm_bridge_attach(&kcrtc->encoder, bridge, NULL, 0);
++	if (pipe->of_output_links[0]) {
++		err = komeda_attach_bridge(base->dev, pipe, encoder);
++		if (err)
++			return err;
++	}
+ 
+ 	drm_crtc_enable_color_mgmt(crtc, 0, true, KOMEDA_COLOR_LUT_SIZE);
+ 
+-	return err;
++	return 0;
+ }
+ 
+ int komeda_kms_add_crtcs(struct komeda_kms_dev *kms, struct komeda_dev *mdev)
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511.h b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+index ea271f62b214d..ec0b7f3d889c4 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511.h
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+@@ -401,7 +401,7 @@ struct adv7511 {
+ 
+ #ifdef CONFIG_DRM_I2C_ADV7511_CEC
+ int adv7511_cec_init(struct device *dev, struct adv7511 *adv7511);
+-void adv7511_cec_irq_process(struct adv7511 *adv7511, unsigned int irq1);
++int adv7511_cec_irq_process(struct adv7511 *adv7511, unsigned int irq1);
+ #else
+ static inline int adv7511_cec_init(struct device *dev, struct adv7511 *adv7511)
+ {
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c b/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c
+index 44451a9658a32..2e9c88a2b5ed4 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_cec.c
+@@ -119,7 +119,7 @@ static void adv7511_cec_rx(struct adv7511 *adv7511, int rx_buf)
+ 	cec_received_msg(adv7511->cec_adap, &msg);
+ }
+ 
+-void adv7511_cec_irq_process(struct adv7511 *adv7511, unsigned int irq1)
++int adv7511_cec_irq_process(struct adv7511 *adv7511, unsigned int irq1)
+ {
+ 	unsigned int offset = adv7511->info->reg_cec_offset;
+ 	const u32 irq_tx_mask = ADV7511_INT1_CEC_TX_READY |
+@@ -131,16 +131,19 @@ void adv7511_cec_irq_process(struct adv7511 *adv7511, unsigned int irq1)
+ 	unsigned int rx_status;
+ 	int rx_order[3] = { -1, -1, -1 };
+ 	int i;
++	int irq_status = IRQ_NONE;
+ 
+-	if (irq1 & irq_tx_mask)
++	if (irq1 & irq_tx_mask) {
+ 		adv_cec_tx_raw_status(adv7511, irq1);
++		irq_status = IRQ_HANDLED;
++	}
+ 
+ 	if (!(irq1 & irq_rx_mask))
+-		return;
++		return irq_status;
+ 
+ 	if (regmap_read(adv7511->regmap_cec,
+ 			ADV7511_REG_CEC_RX_STATUS + offset, &rx_status))
+-		return;
++		return irq_status;
+ 
+ 	/*
+ 	 * ADV7511_REG_CEC_RX_STATUS[5:0] contains the reception order of RX
+@@ -172,6 +175,8 @@ void adv7511_cec_irq_process(struct adv7511 *adv7511, unsigned int irq1)
+ 
+ 		adv7511_cec_rx(adv7511, rx_buf);
+ 	}
++
++	return IRQ_HANDLED;
+ }
+ 
+ static int adv7511_cec_adap_enable(struct cec_adapter *adap, bool enable)
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index 66ccb61e2a660..c8d2c4a157b24 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -469,6 +469,8 @@ static int adv7511_irq_process(struct adv7511 *adv7511, bool process_hpd)
+ {
+ 	unsigned int irq0, irq1;
+ 	int ret;
++	int cec_status = IRQ_NONE;
++	int irq_status = IRQ_NONE;
+ 
+ 	ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(0), &irq0);
+ 	if (ret < 0)
+@@ -478,29 +480,31 @@ static int adv7511_irq_process(struct adv7511 *adv7511, bool process_hpd)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	/* If there is no IRQ to handle, exit indicating no IRQ data */
+-	if (!(irq0 & (ADV7511_INT0_HPD | ADV7511_INT0_EDID_READY)) &&
+-	    !(irq1 & ADV7511_INT1_DDC_ERROR))
+-		return -ENODATA;
+-
+ 	regmap_write(adv7511->regmap, ADV7511_REG_INT(0), irq0);
+ 	regmap_write(adv7511->regmap, ADV7511_REG_INT(1), irq1);
+ 
+-	if (process_hpd && irq0 & ADV7511_INT0_HPD && adv7511->bridge.encoder)
++	if (process_hpd && irq0 & ADV7511_INT0_HPD && adv7511->bridge.encoder) {
+ 		schedule_work(&adv7511->hpd_work);
++		irq_status = IRQ_HANDLED;
++	}
+ 
+ 	if (irq0 & ADV7511_INT0_EDID_READY || irq1 & ADV7511_INT1_DDC_ERROR) {
+ 		adv7511->edid_read = true;
+ 
+ 		if (adv7511->i2c_main->irq)
+ 			wake_up_all(&adv7511->wq);
++		irq_status = IRQ_HANDLED;
+ 	}
+ 
+ #ifdef CONFIG_DRM_I2C_ADV7511_CEC
+-	adv7511_cec_irq_process(adv7511, irq1);
++	cec_status = adv7511_cec_irq_process(adv7511, irq1);
+ #endif
+ 
+-	return 0;
++	/* If there is no IRQ to handle, exit indicating no IRQ data */
++	if (irq_status == IRQ_HANDLED || cec_status == IRQ_HANDLED)
++		return IRQ_HANDLED;
++
++	return IRQ_NONE;
+ }
+ 
+ static irqreturn_t adv7511_irq_handler(int irq, void *devid)
+@@ -509,7 +513,7 @@ static irqreturn_t adv7511_irq_handler(int irq, void *devid)
+ 	int ret;
+ 
+ 	ret = adv7511_irq_process(adv7511, true);
+-	return ret < 0 ? IRQ_NONE : IRQ_HANDLED;
++	return ret < 0 ? IRQ_NONE : ret;
+ }
+ 
+ /* -----------------------------------------------------------------------------
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index 3f68c82888c2c..cf59347d3d605 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -1307,9 +1307,15 @@ static void it6505_video_reset(struct it6505 *it6505)
+ 	it6505_link_reset_step_train(it6505);
+ 	it6505_set_bits(it6505, REG_DATA_MUTE_CTRL, EN_VID_MUTE, EN_VID_MUTE);
+ 	it6505_set_bits(it6505, REG_INFOFRAME_CTRL, EN_VID_CTRL_PKT, 0x00);
+-	it6505_set_bits(it6505, REG_RESET_CTRL, VIDEO_RESET, VIDEO_RESET);
++
++	it6505_set_bits(it6505, REG_VID_BUS_CTRL1, TX_FIFO_RESET, TX_FIFO_RESET);
++	it6505_set_bits(it6505, REG_VID_BUS_CTRL1, TX_FIFO_RESET, 0x00);
++
+ 	it6505_set_bits(it6505, REG_501_FIFO_CTRL, RST_501_FIFO, RST_501_FIFO);
+ 	it6505_set_bits(it6505, REG_501_FIFO_CTRL, RST_501_FIFO, 0x00);
++
++	it6505_set_bits(it6505, REG_RESET_CTRL, VIDEO_RESET, VIDEO_RESET);
++	usleep_range(1000, 2000);
+ 	it6505_set_bits(it6505, REG_RESET_CTRL, VIDEO_RESET, 0x00);
+ }
+ 
+@@ -2245,12 +2251,11 @@ static void it6505_link_training_work(struct work_struct *work)
+ 	if (ret) {
+ 		it6505->auto_train_retry = AUTO_TRAIN_RETRY;
+ 		it6505_link_train_ok(it6505);
+-		return;
+ 	} else {
+ 		it6505->auto_train_retry--;
++		it6505_dump(it6505);
+ 	}
+ 
+-	it6505_dump(it6505);
+ }
+ 
+ static void it6505_plugged_status_to_codec(struct it6505 *it6505)
+@@ -2471,31 +2476,53 @@ static void it6505_irq_link_train_fail(struct it6505 *it6505)
+ 	schedule_work(&it6505->link_works);
+ }
+ 
+-static void it6505_irq_video_fifo_error(struct it6505 *it6505)
++static bool it6505_test_bit(unsigned int bit, const unsigned int *addr)
+ {
+-	struct device *dev = it6505->dev;
+-
+-	DRM_DEV_DEBUG_DRIVER(dev, "video fifo overflow interrupt");
+-	it6505->auto_train_retry = AUTO_TRAIN_RETRY;
+-	flush_work(&it6505->link_works);
+-	it6505_stop_hdcp(it6505);
+-	it6505_video_reset(it6505);
++	return 1 & (addr[bit / BITS_PER_BYTE] >> (bit % BITS_PER_BYTE));
+ }
+ 
+-static void it6505_irq_io_latch_fifo_overflow(struct it6505 *it6505)
++static void it6505_irq_video_handler(struct it6505 *it6505, const int *int_status)
+ {
+ 	struct device *dev = it6505->dev;
++	int reg_0d, reg_int03;
+ 
+-	DRM_DEV_DEBUG_DRIVER(dev, "IO latch fifo overflow interrupt");
+-	it6505->auto_train_retry = AUTO_TRAIN_RETRY;
+-	flush_work(&it6505->link_works);
+-	it6505_stop_hdcp(it6505);
+-	it6505_video_reset(it6505);
+-}
++	/*
++	 * When video SCDT change with video not stable,
++	 * Or video FIFO error, need video reset
++	 */
+ 
+-static bool it6505_test_bit(unsigned int bit, const unsigned int *addr)
+-{
+-	return 1 & (addr[bit / BITS_PER_BYTE] >> (bit % BITS_PER_BYTE));
++	if ((!it6505_get_video_status(it6505) &&
++	     (it6505_test_bit(INT_SCDT_CHANGE, (unsigned int *)int_status))) ||
++	    (it6505_test_bit(BIT_INT_IO_FIFO_OVERFLOW,
++			     (unsigned int *)int_status)) ||
++	    (it6505_test_bit(BIT_INT_VID_FIFO_ERROR,
++			     (unsigned int *)int_status))) {
++		it6505->auto_train_retry = AUTO_TRAIN_RETRY;
++		flush_work(&it6505->link_works);
++		it6505_stop_hdcp(it6505);
++		it6505_video_reset(it6505);
++
++		usleep_range(10000, 11000);
++
++		/*
++		 * Clear FIFO error IRQ to prevent fifo error -> reset loop
++		 * HW will trigger SCDT change IRQ again when video stable
++		 */
++
++		reg_int03 = it6505_read(it6505, INT_STATUS_03);
++		reg_0d = it6505_read(it6505, REG_SYSTEM_STS);
++
++		reg_int03 &= (BIT(INT_VID_FIFO_ERROR) | BIT(INT_IO_LATCH_FIFO_OVERFLOW));
++		it6505_write(it6505, INT_STATUS_03, reg_int03);
++
++		DRM_DEV_DEBUG_DRIVER(dev, "reg08 = 0x%02x", reg_int03);
++		DRM_DEV_DEBUG_DRIVER(dev, "reg0D = 0x%02x", reg_0d);
++
++		return;
++	}
++
++	if (it6505_test_bit(INT_SCDT_CHANGE, (unsigned int *)int_status))
++		it6505_irq_scdt(it6505);
+ }
+ 
+ static irqreturn_t it6505_int_threaded_handler(int unused, void *data)
+@@ -2508,15 +2535,12 @@ static irqreturn_t it6505_int_threaded_handler(int unused, void *data)
+ 	} irq_vec[] = {
+ 		{ BIT_INT_HPD, it6505_irq_hpd },
+ 		{ BIT_INT_HPD_IRQ, it6505_irq_hpd_irq },
+-		{ BIT_INT_SCDT, it6505_irq_scdt },
+ 		{ BIT_INT_HDCP_FAIL, it6505_irq_hdcp_fail },
+ 		{ BIT_INT_HDCP_DONE, it6505_irq_hdcp_done },
+ 		{ BIT_INT_AUX_CMD_FAIL, it6505_irq_aux_cmd_fail },
+ 		{ BIT_INT_HDCP_KSV_CHECK, it6505_irq_hdcp_ksv_check },
+ 		{ BIT_INT_AUDIO_FIFO_ERROR, it6505_irq_audio_fifo_error },
+ 		{ BIT_INT_LINK_TRAIN_FAIL, it6505_irq_link_train_fail },
+-		{ BIT_INT_VID_FIFO_ERROR, it6505_irq_video_fifo_error },
+-		{ BIT_INT_IO_FIFO_OVERFLOW, it6505_irq_io_latch_fifo_overflow },
+ 	};
+ 	int int_status[3], i;
+ 
+@@ -2546,6 +2570,7 @@ static irqreturn_t it6505_int_threaded_handler(int unused, void *data)
+ 			if (it6505_test_bit(irq_vec[i].bit, (unsigned int *)int_status))
+ 				irq_vec[i].handler(it6505);
+ 		}
++		it6505_irq_video_handler(it6505, (unsigned int *)int_status);
+ 	}
+ 
+ 	pm_runtime_put_sync(dev);
+diff --git a/drivers/gpu/drm/bridge/samsung-dsim.c b/drivers/gpu/drm/bridge/samsung-dsim.c
+index 95fedc68b0ae5..8476650c477c2 100644
+--- a/drivers/gpu/drm/bridge/samsung-dsim.c
++++ b/drivers/gpu/drm/bridge/samsung-dsim.c
+@@ -574,8 +574,8 @@ static unsigned long samsung_dsim_pll_find_pms(struct samsung_dsim *dsi,
+ 	u16 _m, best_m;
+ 	u8 _s, best_s;
+ 
+-	p_min = DIV_ROUND_UP(fin, (12 * MHZ));
+-	p_max = fin / (6 * MHZ);
++	p_min = DIV_ROUND_UP(fin, (driver_data->pll_fin_max * MHZ));
++	p_max = fin / (driver_data->pll_fin_min * MHZ);
+ 
+ 	for (_p = p_min; _p <= p_max; ++_p) {
+ 		for (_s = 0; _s <= 5; ++_s) {
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index 7f8e1cfbe19d9..68831f4e502a2 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -2929,7 +2929,7 @@ static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
+ 
+ 	/* FIXME: Actually do some real error handling here */
+ 	ret = drm_dp_mst_wait_tx_reply(mstb, txmsg);
+-	if (ret <= 0) {
++	if (ret < 0) {
+ 		drm_err(mgr->dev, "Sending link address failed with %d\n", ret);
+ 		goto out;
+ 	}
+@@ -2981,7 +2981,7 @@ static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
+ 	mutex_unlock(&mgr->lock);
+ 
+ out:
+-	if (ret <= 0)
++	if (ret < 0)
+ 		mstb->link_address_sent = false;
+ 	kfree(txmsg);
+ 	return ret < 0 ? ret : changed;
+diff --git a/drivers/gpu/drm/drm_fbdev_dma.c b/drivers/gpu/drm/drm_fbdev_dma.c
+index 13cd754af311d..77695339e4d4c 100644
+--- a/drivers/gpu/drm/drm_fbdev_dma.c
++++ b/drivers/gpu/drm/drm_fbdev_dma.c
+@@ -90,7 +90,8 @@ static int drm_fbdev_dma_helper_fb_probe(struct drm_fb_helper *fb_helper,
+ 		    sizes->surface_width, sizes->surface_height,
+ 		    sizes->surface_bpp);
+ 
+-	format = drm_mode_legacy_fb_format(sizes->surface_bpp, sizes->surface_depth);
++	format = drm_driver_legacy_fb_format(dev, sizes->surface_bpp,
++					     sizes->surface_depth);
+ 	buffer = drm_client_framebuffer_create(client, sizes->surface_width,
+ 					       sizes->surface_height, format);
+ 	if (IS_ERR(buffer))
+diff --git a/drivers/gpu/drm/drm_panic.c b/drivers/gpu/drm/drm_panic.c
+index 7ece67086cecb..831b214975a51 100644
+--- a/drivers/gpu/drm/drm_panic.c
++++ b/drivers/gpu/drm/drm_panic.c
+@@ -15,7 +15,6 @@
+ #include <linux/types.h>
+ 
+ #include <drm/drm_drv.h>
+-#include <drm/drm_format_helper.h>
+ #include <drm/drm_fourcc.h>
+ #include <drm/drm_framebuffer.h>
+ #include <drm/drm_modeset_helper_vtables.h>
+@@ -194,40 +193,42 @@ static u32 convert_from_xrgb8888(u32 color, u32 format)
+ /*
+  * Blit & Fill
+  */
++/* check if the pixel at coord x,y is 1 (foreground) or 0 (background) */
++static bool drm_panic_is_pixel_fg(const u8 *sbuf8, unsigned int spitch, int x, int y)
++{
++	return (sbuf8[(y * spitch) + x / 8] & (0x80 >> (x % 8))) != 0;
++}
++
+ static void drm_panic_blit16(struct iosys_map *dmap, unsigned int dpitch,
+ 			     const u8 *sbuf8, unsigned int spitch,
+ 			     unsigned int height, unsigned int width,
+-			     u16 fg16, u16 bg16)
++			     u16 fg16)
+ {
+ 	unsigned int y, x;
+-	u16 val16;
+ 
+-	for (y = 0; y < height; y++) {
+-		for (x = 0; x < width; x++) {
+-			val16 = (sbuf8[(y * spitch) + x / 8] & (0x80 >> (x % 8))) ? fg16 : bg16;
+-			iosys_map_wr(dmap, y * dpitch + x * sizeof(u16), u16, val16);
+-		}
+-	}
++	for (y = 0; y < height; y++)
++		for (x = 0; x < width; x++)
++			if (drm_panic_is_pixel_fg(sbuf8, spitch, x, y))
++				iosys_map_wr(dmap, y * dpitch + x * sizeof(u16), u16, fg16);
+ }
+ 
+ static void drm_panic_blit24(struct iosys_map *dmap, unsigned int dpitch,
+ 			     const u8 *sbuf8, unsigned int spitch,
+ 			     unsigned int height, unsigned int width,
+-			     u32 fg32, u32 bg32)
++			     u32 fg32)
+ {
+ 	unsigned int y, x;
+-	u32 val32;
+ 
+ 	for (y = 0; y < height; y++) {
+ 		for (x = 0; x < width; x++) {
+ 			u32 off = y * dpitch + x * 3;
+ 
+-			val32 = (sbuf8[(y * spitch) + x / 8] & (0x80 >> (x % 8))) ? fg32 : bg32;
+-
+-			/* write blue-green-red to output in little endianness */
+-			iosys_map_wr(dmap, off, u8, (val32 & 0x000000FF) >> 0);
+-			iosys_map_wr(dmap, off + 1, u8, (val32 & 0x0000FF00) >> 8);
+-			iosys_map_wr(dmap, off + 2, u8, (val32 & 0x00FF0000) >> 16);
++			if (drm_panic_is_pixel_fg(sbuf8, spitch, x, y)) {
++				/* write blue-green-red to output in little endianness */
++				iosys_map_wr(dmap, off, u8, (fg32 & 0x000000FF) >> 0);
++				iosys_map_wr(dmap, off + 1, u8, (fg32 & 0x0000FF00) >> 8);
++				iosys_map_wr(dmap, off + 2, u8, (fg32 & 0x00FF0000) >> 16);
++			}
+ 		}
+ 	}
+ }
+@@ -235,17 +236,14 @@ static void drm_panic_blit24(struct iosys_map *dmap, unsigned int dpitch,
+ static void drm_panic_blit32(struct iosys_map *dmap, unsigned int dpitch,
+ 			     const u8 *sbuf8, unsigned int spitch,
+ 			     unsigned int height, unsigned int width,
+-			     u32 fg32, u32 bg32)
++			     u32 fg32)
+ {
+ 	unsigned int y, x;
+-	u32 val32;
+ 
+-	for (y = 0; y < height; y++) {
+-		for (x = 0; x < width; x++) {
+-			val32 = (sbuf8[(y * spitch) + x / 8] & (0x80 >> (x % 8))) ? fg32 : bg32;
+-			iosys_map_wr(dmap, y * dpitch + x * sizeof(u32), u32, val32);
+-		}
+-	}
++	for (y = 0; y < height; y++)
++		for (x = 0; x < width; x++)
++			if (drm_panic_is_pixel_fg(sbuf8, spitch, x, y))
++				iosys_map_wr(dmap, y * dpitch + x * sizeof(u32), u32, fg32);
+ }
+ 
+ /*
+@@ -257,7 +255,6 @@ static void drm_panic_blit32(struct iosys_map *dmap, unsigned int dpitch,
+  * @height: height of the image to copy, in pixels
+  * @width: width of the image to copy, in pixels
+  * @fg_color: foreground color, in destination format
+- * @bg_color: background color, in destination format
+  * @pixel_width: pixel width in bytes.
+  *
+  * This can be used to draw a font character, which is a monochrome image, to a
+@@ -266,21 +263,20 @@ static void drm_panic_blit32(struct iosys_map *dmap, unsigned int dpitch,
+ static void drm_panic_blit(struct iosys_map *dmap, unsigned int dpitch,
+ 			   const u8 *sbuf8, unsigned int spitch,
+ 			   unsigned int height, unsigned int width,
+-			   u32 fg_color, u32 bg_color,
+-			   unsigned int pixel_width)
++			   u32 fg_color, unsigned int pixel_width)
+ {
+ 	switch (pixel_width) {
+ 	case 2:
+ 		drm_panic_blit16(dmap, dpitch, sbuf8, spitch,
+-				 height, width, fg_color, bg_color);
++				 height, width, fg_color);
+ 	break;
+ 	case 3:
+ 		drm_panic_blit24(dmap, dpitch, sbuf8, spitch,
+-				 height, width, fg_color, bg_color);
++				 height, width, fg_color);
+ 	break;
+ 	case 4:
+ 		drm_panic_blit32(dmap, dpitch, sbuf8, spitch,
+-				 height, width, fg_color, bg_color);
++				 height, width, fg_color);
+ 	break;
+ 	default:
+ 		WARN_ONCE(1, "Can't blit with pixel width %d\n", pixel_width);
+@@ -381,8 +377,7 @@ static void draw_txt_rectangle(struct drm_scanout_buffer *sb,
+ 			       unsigned int msg_lines,
+ 			       bool centered,
+ 			       struct drm_rect *clip,
+-			       u32 fg_color,
+-			       u32 bg_color)
++			       u32 color)
+ {
+ 	int i, j;
+ 	const u8 *src;
+@@ -404,8 +399,7 @@ static void draw_txt_rectangle(struct drm_scanout_buffer *sb,
+ 		for (j = 0; j < line_len; j++) {
+ 			src = get_char_bitmap(font, msg[i].txt[j], font_pitch);
+ 			drm_panic_blit(&dst, sb->pitch[0], src, font_pitch,
+-				       font->height, font->width,
+-				       fg_color, bg_color, px_width);
++				       font->height, font->width, color, px_width);
+ 			iosys_map_incr(&dst, font->width * px_width);
+ 		}
+ 	}
+@@ -444,10 +438,10 @@ static void draw_panic_static(struct drm_scanout_buffer *sb)
+ 		       bg_color, sb->format->cpp[0]);
+ 
+ 	if ((r_msg.x1 >= drm_rect_width(&r_logo) || r_msg.y1 >= drm_rect_height(&r_logo)) &&
+-	    drm_rect_width(&r_logo) < sb->width && drm_rect_height(&r_logo) < sb->height) {
+-		draw_txt_rectangle(sb, font, logo, logo_lines, false, &r_logo, fg_color, bg_color);
++	    drm_rect_width(&r_logo) <= sb->width && drm_rect_height(&r_logo) <= sb->height) {
++		draw_txt_rectangle(sb, font, logo, logo_lines, false, &r_logo, fg_color);
+ 	}
+-	draw_txt_rectangle(sb, font, panic_msg, msg_lines, true, &r_msg, fg_color, bg_color);
++	draw_txt_rectangle(sb, font, panic_msg, msg_lines, true, &r_msg, fg_color);
+ }
+ 
+ /*
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+index 71a6d2b1c80f5..5c0c9d4e3be18 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+@@ -355,9 +355,11 @@ static void *etnaviv_gem_vmap_impl(struct etnaviv_gem_object *obj)
+ 
+ static inline enum dma_data_direction etnaviv_op_to_dma_dir(u32 op)
+ {
+-	if (op & ETNA_PREP_READ)
++	op &= ETNA_PREP_READ | ETNA_PREP_WRITE;
++
++	if (op == ETNA_PREP_READ)
+ 		return DMA_FROM_DEVICE;
+-	else if (op & ETNA_PREP_WRITE)
++	else if (op == ETNA_PREP_WRITE)
+ 		return DMA_TO_DEVICE;
+ 	else
+ 		return DMA_BIDIRECTIONAL;
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+index c4b04b0dee16a..62dcfdc7894dd 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+@@ -38,9 +38,6 @@ static enum drm_gpu_sched_stat etnaviv_sched_timedout_job(struct drm_sched_job
+ 	u32 dma_addr;
+ 	int change;
+ 
+-	/* block scheduler */
+-	drm_sched_stop(&gpu->sched, sched_job);
+-
+ 	/*
+ 	 * If the GPU managed to complete this jobs fence, the timout is
+ 	 * spurious. Bail out.
+@@ -63,6 +60,9 @@ static enum drm_gpu_sched_stat etnaviv_sched_timedout_job(struct drm_sched_job
+ 		goto out_no_timeout;
+ 	}
+ 
++	/* block scheduler */
++	drm_sched_stop(&gpu->sched, sched_job);
++
+ 	if(sched_job)
+ 		drm_sched_increase_karma(sched_job);
+ 
+@@ -76,8 +76,7 @@ static enum drm_gpu_sched_stat etnaviv_sched_timedout_job(struct drm_sched_job
+ 	return DRM_GPU_SCHED_STAT_NOMINAL;
+ 
+ out_no_timeout:
+-	/* restart scheduler after GPU is usable again */
+-	drm_sched_start(&gpu->sched, true);
++	list_add(&sched_job->list, &sched_job->sched->pending_list);
+ 	return DRM_GPU_SCHED_STAT_NOMINAL;
+ }
+ 
+diff --git a/drivers/gpu/drm/gma500/cdv_intel_lvds.c b/drivers/gpu/drm/gma500/cdv_intel_lvds.c
+index f08a6803dc184..3adc2c9ab72da 100644
+--- a/drivers/gpu/drm/gma500/cdv_intel_lvds.c
++++ b/drivers/gpu/drm/gma500/cdv_intel_lvds.c
+@@ -311,6 +311,9 @@ static int cdv_intel_lvds_get_modes(struct drm_connector *connector)
+ 	if (mode_dev->panel_fixed_mode != NULL) {
+ 		struct drm_display_mode *mode =
+ 		    drm_mode_duplicate(dev, mode_dev->panel_fixed_mode);
++		if (!mode)
++			return 0;
++
+ 		drm_mode_probed_add(connector, mode);
+ 		return 1;
+ 	}
+diff --git a/drivers/gpu/drm/gma500/psb_intel_lvds.c b/drivers/gpu/drm/gma500/psb_intel_lvds.c
+index 8486de230ec91..8d1be94a443b2 100644
+--- a/drivers/gpu/drm/gma500/psb_intel_lvds.c
++++ b/drivers/gpu/drm/gma500/psb_intel_lvds.c
+@@ -504,6 +504,9 @@ static int psb_intel_lvds_get_modes(struct drm_connector *connector)
+ 	if (mode_dev->panel_fixed_mode != NULL) {
+ 		struct drm_display_mode *mode =
+ 		    drm_mode_duplicate(dev, mode_dev->panel_fixed_mode);
++		if (!mode)
++			return 0;
++
+ 		drm_mode_probed_add(connector, mode);
+ 		return 1;
+ 	}
+diff --git a/drivers/gpu/drm/i915/display/intel_crtc_state_dump.c b/drivers/gpu/drm/i915/display/intel_crtc_state_dump.c
+index ccaa4cb2809b0..bddcc9edeab42 100644
+--- a/drivers/gpu/drm/i915/display/intel_crtc_state_dump.c
++++ b/drivers/gpu/drm/i915/display/intel_crtc_state_dump.c
+@@ -251,9 +251,10 @@ void intel_crtc_state_dump(const struct intel_crtc_state *pipe_config,
+ 		drm_printf(&p, "sdp split: %s\n",
+ 			   str_enabled_disabled(pipe_config->sdp_split_enable));
+ 
+-		drm_printf(&p, "psr: %s, psr2: %s, panel replay: %s, selective fetch: %s\n",
+-			   str_enabled_disabled(pipe_config->has_psr),
+-			   str_enabled_disabled(pipe_config->has_psr2),
++		drm_printf(&p, "psr: %s, selective update: %s, panel replay: %s, selective fetch: %s\n",
++			   str_enabled_disabled(pipe_config->has_psr &&
++						!pipe_config->has_panel_replay),
++			   str_enabled_disabled(pipe_config->has_sel_update),
+ 			   str_enabled_disabled(pipe_config->has_panel_replay),
+ 			   str_enabled_disabled(pipe_config->enable_psr2_sel_fetch));
+ 	}
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index 273323f30ae29..e53d3e900b3e4 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -5318,9 +5318,11 @@ intel_pipe_config_compare(const struct intel_crtc_state *current_config,
+ 	 * Panel replay has to be enabled before link training. PSR doesn't have
+ 	 * this requirement -> check these only if using panel replay
+ 	 */
+-	if (current_config->has_panel_replay || pipe_config->has_panel_replay) {
++	if (current_config->active_planes &&
++	    (current_config->has_panel_replay ||
++	     pipe_config->has_panel_replay)) {
+ 		PIPE_CONF_CHECK_BOOL(has_psr);
+-		PIPE_CONF_CHECK_BOOL(has_psr2);
++		PIPE_CONF_CHECK_BOOL(has_sel_update);
+ 		PIPE_CONF_CHECK_BOOL(enable_psr2_sel_fetch);
+ 		PIPE_CONF_CHECK_BOOL(enable_psr2_su_region_et);
+ 		PIPE_CONF_CHECK_BOOL(has_panel_replay);
+diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
+index 62f7a30c37dcf..6747c10da298e 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_types.h
++++ b/drivers/gpu/drm/i915/display/intel_display_types.h
+@@ -1189,7 +1189,7 @@ struct intel_crtc_state {
+ 
+ 	/* PSR is supported but might not be enabled due the lack of enabled planes */
+ 	bool has_psr;
+-	bool has_psr2;
++	bool has_sel_update;
+ 	bool enable_psr2_sel_fetch;
+ 	bool enable_psr2_su_region_et;
+ 	bool req_psr2_sdp_prior_scanline;
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 5b3b6ae1e3d71..9c9e060476c72 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -2664,7 +2664,7 @@ static void intel_dp_compute_vsc_sdp(struct intel_dp *intel_dp,
+ 	if (intel_dp_needs_vsc_sdp(crtc_state, conn_state)) {
+ 		intel_dp_compute_vsc_colorimetry(crtc_state, conn_state,
+ 						 vsc);
+-	} else if (crtc_state->has_psr2) {
++	} else if (crtc_state->has_sel_update) {
+ 		/*
+ 		 * [PSR2 without colorimetry]
+ 		 * Prepare VSC Header for SU as per eDP 1.4 spec, Table 6-11
+@@ -5267,6 +5267,8 @@ int intel_dp_retrain_link(struct intel_encoder *encoder,
+ 		    !intel_dp_mst_is_master_trans(crtc_state))
+ 			continue;
+ 
++		intel_dp->link_trained = false;
++
+ 		intel_dp_check_frl_training(intel_dp);
+ 		intel_dp_pcon_dsc_configure(intel_dp, crtc_state);
+ 		intel_dp_start_link_train(intel_dp, crtc_state);
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+index 947575140059d..8cfc55f3d98ef 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+@@ -114,10 +114,24 @@ intel_dp_set_lttpr_transparent_mode(struct intel_dp *intel_dp, bool enable)
+ 	return drm_dp_dpcd_write(&intel_dp->aux, DP_PHY_REPEATER_MODE, &val, 1) == 1;
+ }
+ 
+-static int intel_dp_init_lttpr(struct intel_dp *intel_dp, const u8 dpcd[DP_RECEIVER_CAP_SIZE])
++static bool intel_dp_lttpr_transparent_mode_enabled(struct intel_dp *intel_dp)
++{
++	return intel_dp->lttpr_common_caps[DP_PHY_REPEATER_MODE -
++					   DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV] ==
++		DP_PHY_REPEATER_MODE_TRANSPARENT;
++}
++
++/*
++ * Read the LTTPR common capabilities and switch the LTTPR PHYs to
++ * non-transparent mode if this is supported. Preserve the
++ * transparent/non-transparent mode on an active link.
++ *
++ * Return the number of detected LTTPRs in non-transparent mode or 0 if the
++ * LTTPRs are in transparent mode or the detection failed.
++ */
++static int intel_dp_init_lttpr_phys(struct intel_dp *intel_dp, const u8 dpcd[DP_RECEIVER_CAP_SIZE])
+ {
+ 	int lttpr_count;
+-	int i;
+ 
+ 	if (!intel_dp_read_lttpr_common_caps(intel_dp, dpcd))
+ 		return 0;
+@@ -131,6 +145,19 @@ static int intel_dp_init_lttpr(struct intel_dp *intel_dp, const u8 dpcd[DP_RECEI
+ 	if (lttpr_count == 0)
+ 		return 0;
+ 
++	/*
++	 * Don't change the mode on an active link, to prevent a loss of link
++	 * synchronization. See DP Standard v2.0 3.6.7. about the LTTPR
++	 * resetting its internal state when the mode is changed from
++	 * non-transparent to transparent.
++	 */
++	if (intel_dp->link_trained) {
++		if (lttpr_count < 0 || intel_dp_lttpr_transparent_mode_enabled(intel_dp))
++			goto out_reset_lttpr_count;
++
++		return lttpr_count;
++	}
++
+ 	/*
+ 	 * See DP Standard v2.0 3.6.6.1. about the explicit disabling of
+ 	 * non-transparent mode and the disable->enable non-transparent mode
+@@ -151,11 +178,25 @@ static int intel_dp_init_lttpr(struct intel_dp *intel_dp, const u8 dpcd[DP_RECEI
+ 		       "Switching to LTTPR non-transparent LT mode failed, fall-back to transparent mode\n");
+ 
+ 		intel_dp_set_lttpr_transparent_mode(intel_dp, true);
+-		intel_dp_reset_lttpr_count(intel_dp);
+ 
+-		return 0;
++		goto out_reset_lttpr_count;
+ 	}
+ 
++	return lttpr_count;
++
++out_reset_lttpr_count:
++	intel_dp_reset_lttpr_count(intel_dp);
++
++	return 0;
++}
++
++static int intel_dp_init_lttpr(struct intel_dp *intel_dp, const u8 dpcd[DP_RECEIVER_CAP_SIZE])
++{
++	int lttpr_count;
++	int i;
++
++	lttpr_count = intel_dp_init_lttpr_phys(intel_dp, dpcd);
++
+ 	for (i = 0; i < lttpr_count; i++)
+ 		intel_dp_read_lttpr_phy_caps(intel_dp, dpcd, DP_PHY_LTTPR(i));
+ 
+@@ -1372,10 +1413,10 @@ void intel_dp_start_link_train(struct intel_dp *intel_dp,
+ {
+ 	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+ 	bool passed;
+-
+ 	/*
+-	 * TODO: Reiniting LTTPRs here won't be needed once proper connector
+-	 * HW state readout is added.
++	 * Reinit the LTTPRs here to ensure that they are switched to
++	 * non-transparent mode. During an earlier LTTPR detection this
++	 * could've been prevented by an active link.
+ 	 */
+ 	int lttpr_count = intel_dp_init_lttpr_and_dprx_caps(intel_dp);
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_fbc.c b/drivers/gpu/drm/i915/display/intel_fbc.c
+index 151dcd0c45b60..984f13d8c0c88 100644
+--- a/drivers/gpu/drm/i915/display/intel_fbc.c
++++ b/drivers/gpu/drm/i915/display/intel_fbc.c
+@@ -1251,7 +1251,7 @@ static int intel_fbc_check_plane(struct intel_atomic_state *state,
+ 	 * Recommendation is to keep this combination disabled
+ 	 * Bspec: 50422 HSD: 14010260002
+ 	 */
+-	if (IS_DISPLAY_VER(i915, 12, 14) && crtc_state->has_psr2) {
++	if (IS_DISPLAY_VER(i915, 12, 14) && crtc_state->has_sel_update) {
+ 		plane_state->no_fbc_reason = "PSR2 enabled";
+ 		return 0;
+ 	}
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index f5b33335a9ae0..3c7da862222bf 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -651,7 +651,7 @@ void intel_psr_enable_sink(struct intel_dp *intel_dp,
+ 	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+ 	u8 dpcd_val = DP_PSR_ENABLE;
+ 
+-	if (crtc_state->has_psr2) {
++	if (crtc_state->has_sel_update) {
+ 		/* Enable ALPM at sink for psr2 */
+ 		if (!crtc_state->has_panel_replay) {
+ 			drm_dp_dpcd_writeb(&intel_dp->aux,
+@@ -659,7 +659,7 @@ void intel_psr_enable_sink(struct intel_dp *intel_dp,
+ 					   DP_ALPM_ENABLE |
+ 					   DP_ALPM_LOCK_ERROR_IRQ_HPD_ENABLE);
+ 
+-			if (psr2_su_region_et_valid(intel_dp))
++			if (crtc_state->enable_psr2_su_region_et)
+ 				dpcd_val |= DP_PSR_ENABLE_SU_REGION_ET;
+ 		}
+ 
+@@ -1639,7 +1639,7 @@ void intel_psr_compute_config(struct intel_dp *intel_dp,
+ 	if (!crtc_state->has_psr)
+ 		return;
+ 
+-	crtc_state->has_psr2 = intel_psr2_config_valid(intel_dp, crtc_state);
++	crtc_state->has_sel_update = intel_psr2_config_valid(intel_dp, crtc_state);
+ }
+ 
+ void intel_psr_get_config(struct intel_encoder *encoder,
+@@ -1672,7 +1672,7 @@ void intel_psr_get_config(struct intel_encoder *encoder,
+ 		pipe_config->has_psr = true;
+ 	}
+ 
+-	pipe_config->has_psr2 = intel_dp->psr.psr2_enabled;
++	pipe_config->has_sel_update = intel_dp->psr.psr2_enabled;
+ 	pipe_config->infoframes.enable |= intel_hdmi_infoframe_enable(DP_SDP_VSC);
+ 
+ 	if (!intel_dp->psr.psr2_enabled)
+@@ -1960,7 +1960,7 @@ static void intel_psr_enable_locked(struct intel_dp *intel_dp,
+ 
+ 	drm_WARN_ON(&dev_priv->drm, intel_dp->psr.enabled);
+ 
+-	intel_dp->psr.psr2_enabled = crtc_state->has_psr2;
++	intel_dp->psr.psr2_enabled = crtc_state->has_sel_update;
+ 	intel_dp->psr.panel_replay_enabled = crtc_state->has_panel_replay;
+ 	intel_dp->psr.busy_frontbuffer_bits = 0;
+ 	intel_dp->psr.pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe;
+@@ -2484,7 +2484,7 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
+ 
+ 	crtc_state->psr2_su_area.x1 = 0;
+ 	crtc_state->psr2_su_area.y1 = -1;
+-	crtc_state->psr2_su_area.x2 = INT_MAX;
++	crtc_state->psr2_su_area.x2 = drm_rect_width(&crtc_state->pipe_src);
+ 	crtc_state->psr2_su_area.y2 = -1;
+ 
+ 	/*
+@@ -2688,7 +2688,7 @@ void intel_psr_pre_plane_update(struct intel_atomic_state *state,
+ 		needs_to_disable |= intel_crtc_needs_modeset(new_crtc_state);
+ 		needs_to_disable |= !new_crtc_state->has_psr;
+ 		needs_to_disable |= !new_crtc_state->active_planes;
+-		needs_to_disable |= new_crtc_state->has_psr2 != psr->psr2_enabled;
++		needs_to_disable |= new_crtc_state->has_sel_update != psr->psr2_enabled;
+ 		needs_to_disable |= DISPLAY_VER(i915) < 11 &&
+ 			new_crtc_state->wm_level_disabled;
+ 
+@@ -3694,16 +3694,9 @@ static int i915_psr_sink_status_show(struct seq_file *m, void *data)
+ 		"reserved",
+ 		"sink internal error",
+ 	};
+-	static const char * const panel_replay_status[] = {
+-		"Sink device frame is locked to the Source device",
+-		"Sink device is coasting, using the VTotal target",
+-		"Sink device is governing the frame rate (frame rate unlock is granted)",
+-		"Sink device in the process of re-locking with the Source device",
+-	};
+ 	const char *str;
+ 	int ret;
+ 	u8 status, error_status;
+-	u32 idx;
+ 
+ 	if (!(CAN_PSR(intel_dp) || CAN_PANEL_REPLAY(intel_dp))) {
+ 		seq_puts(m, "PSR/Panel-Replay Unsupported\n");
+@@ -3717,16 +3710,11 @@ static int i915_psr_sink_status_show(struct seq_file *m, void *data)
+ 	if (ret)
+ 		return ret;
+ 
+-	str = "unknown";
+-	if (intel_dp->psr.panel_replay_enabled) {
+-		idx = (status & DP_SINK_FRAME_LOCKED_MASK) >> DP_SINK_FRAME_LOCKED_SHIFT;
+-		if (idx < ARRAY_SIZE(panel_replay_status))
+-			str = panel_replay_status[idx];
+-	} else if (intel_dp->psr.enabled) {
+-		idx = status & DP_PSR_SINK_STATE_MASK;
+-		if (idx < ARRAY_SIZE(sink_status))
+-			str = sink_status[idx];
+-	}
++	status &= DP_PSR_SINK_STATE_MASK;
++	if (status < ARRAY_SIZE(sink_status))
++		str = sink_status[status];
++	else
++		str = "unknown";
+ 
+ 	seq_printf(m, "Sink %s status: 0x%x [%s]\n", psr_mode_str(intel_dp), status, str);
+ 
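The simplified debugfs path above masks the sink status and indexes a string table with a bounds check, falling back to "unknown". A small self-contained sketch of the same lookup pattern (the mask value and status strings here are placeholders, not the DP-defined ones):

#include <stdio.h>

#define ARRAY_SIZE(a)  (sizeof(a) / sizeof((a)[0]))
#define STATE_MASK     0x07   /* placeholder for DP_PSR_SINK_STATE_MASK */

static const char * const sink_status[] = {
    "inactive",
    "transition to active",
    "active",
    "error",
};

static const char *status_str(unsigned int status)
{
    status &= STATE_MASK;
    return status < ARRAY_SIZE(sink_status) ? sink_status[status] : "unknown";
}

int main(void)
{
    printf("%s\n", status_str(0x02));  /* inside the table */
    printf("%s\n", status_str(0x06));  /* out of range -> "unknown" */
    return 0;
}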
+diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+index 21829439e6867..72090f52fb850 100644
+--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
++++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+@@ -3315,11 +3315,7 @@ static void remove_from_engine(struct i915_request *rq)
+ 
+ static bool can_preempt(struct intel_engine_cs *engine)
+ {
+-	if (GRAPHICS_VER(engine->i915) > 8)
+-		return true;
+-
+-	/* GPGPU on bdw requires extra w/a; not implemented */
+-	return engine->class != RENDER_CLASS;
++	return GRAPHICS_VER(engine->i915) > 8;
+ }
+ 
+ static void kick_execlists(const struct i915_request *rq, int prio)
+diff --git a/drivers/gpu/drm/mediatek/mtk_ddp_comp.c b/drivers/gpu/drm/mediatek/mtk_ddp_comp.c
+index 17b0364112922..be66d94be3613 100644
+--- a/drivers/gpu/drm/mediatek/mtk_ddp_comp.c
++++ b/drivers/gpu/drm/mediatek/mtk_ddp_comp.c
+@@ -514,29 +514,42 @@ static bool mtk_ddp_comp_find(struct device *dev,
+ 	return false;
+ }
+ 
+-static unsigned int mtk_ddp_comp_find_in_route(struct device *dev,
+-					       const struct mtk_drm_route *routes,
+-					       unsigned int num_routes,
+-					       struct mtk_ddp_comp *ddp_comp)
++static int mtk_ddp_comp_find_in_route(struct device *dev,
++				      const struct mtk_drm_route *routes,
++				      unsigned int num_routes,
++				      struct mtk_ddp_comp *ddp_comp)
+ {
+-	int ret;
+ 	unsigned int i;
+ 
+-	if (!routes) {
+-		ret = -EINVAL;
+-		goto err;
+-	}
++	if (!routes)
++		return -EINVAL;
+ 
+ 	for (i = 0; i < num_routes; i++)
+ 		if (dev == ddp_comp[routes[i].route_ddp].dev)
+ 			return BIT(routes[i].crtc_id);
+ 
+-	ret = -ENODEV;
+-err:
++	return -ENODEV;
++}
+ 
+-	DRM_INFO("Failed to find comp in ddp table, ret = %d\n", ret);
++static bool mtk_ddp_path_available(const unsigned int *path,
++				   unsigned int path_len,
++				   struct device_node **comp_node)
++{
++	unsigned int i;
+ 
+-	return 0;
++	if (!path || !path_len)
++		return false;
++
++	for (i = 0U; i < path_len; i++) {
++		/* OVL_ADAPTOR doesn't have a device node */
++		if (path[i] == DDP_COMPONENT_DRM_OVL_ADAPTOR)
++			continue;
++
++		if (!comp_node[path[i]])
++			return false;
++	}
++
++	return true;
+ }
+ 
+ int mtk_ddp_comp_get_id(struct device_node *node,
+@@ -554,31 +567,53 @@ int mtk_ddp_comp_get_id(struct device_node *node,
+ 	return -EINVAL;
+ }
+ 
+-unsigned int mtk_find_possible_crtcs(struct drm_device *drm, struct device *dev)
++int mtk_find_possible_crtcs(struct drm_device *drm, struct device *dev)
+ {
+ 	struct mtk_drm_private *private = drm->dev_private;
+-	unsigned int ret = 0;
+-
+-	if (mtk_ddp_comp_find(dev,
+-			      private->data->main_path,
+-			      private->data->main_len,
+-			      private->ddp_comp))
+-		ret = BIT(0);
+-	else if (mtk_ddp_comp_find(dev,
+-				   private->data->ext_path,
+-				   private->data->ext_len,
+-				   private->ddp_comp))
+-		ret = BIT(1);
+-	else if (mtk_ddp_comp_find(dev,
+-				   private->data->third_path,
+-				   private->data->third_len,
+-				   private->ddp_comp))
+-		ret = BIT(2);
+-	else
+-		ret = mtk_ddp_comp_find_in_route(dev,
+-						 private->data->conn_routes,
+-						 private->data->num_conn_routes,
+-						 private->ddp_comp);
++	const struct mtk_mmsys_driver_data *data;
++	struct mtk_drm_private *priv_n;
++	int i = 0, j;
++	int ret;
++
++	for (j = 0; j < private->data->mmsys_dev_num; j++) {
++		priv_n = private->all_drm_private[j];
++		data = priv_n->data;
++
++		if (mtk_ddp_path_available(data->main_path, data->main_len,
++					   priv_n->comp_node)) {
++			if (mtk_ddp_comp_find(dev, data->main_path,
++					      data->main_len,
++					      priv_n->ddp_comp))
++				return BIT(i);
++			i++;
++		}
++
++		if (mtk_ddp_path_available(data->ext_path, data->ext_len,
++					   priv_n->comp_node)) {
++			if (mtk_ddp_comp_find(dev, data->ext_path,
++					      data->ext_len,
++					      priv_n->ddp_comp))
++				return BIT(i);
++			i++;
++		}
++
++		if (mtk_ddp_path_available(data->third_path, data->third_len,
++					   priv_n->comp_node)) {
++			if (mtk_ddp_comp_find(dev, data->third_path,
++					      data->third_len,
++					      priv_n->ddp_comp))
++				return BIT(i);
++			i++;
++		}
++	}
++
++	ret = mtk_ddp_comp_find_in_route(dev,
++					 private->data->conn_routes,
++					 private->data->num_conn_routes,
++					 private->ddp_comp);
++
++	if (ret < 0)
++		DRM_INFO("Failed to find comp in ddp table, ret = %d\n", ret);
+ 
+ 	return ret;
+ }
+@@ -593,7 +628,7 @@ int mtk_ddp_comp_init(struct device_node *node, struct mtk_ddp_comp *comp,
+ 	int ret;
+ #endif
+ 
+-	if (comp_id < 0 || comp_id >= DDP_COMPONENT_DRM_ID_MAX)
++	if (comp_id >= DDP_COMPONENT_DRM_ID_MAX)
+ 		return -EINVAL;
+ 
+ 	type = mtk_ddp_matches[comp_id].type;
+diff --git a/drivers/gpu/drm/mediatek/mtk_ddp_comp.h b/drivers/gpu/drm/mediatek/mtk_ddp_comp.h
+index 26236691ce4c2..ecf6dc283cd7c 100644
+--- a/drivers/gpu/drm/mediatek/mtk_ddp_comp.h
++++ b/drivers/gpu/drm/mediatek/mtk_ddp_comp.h
+@@ -192,7 +192,11 @@ unsigned int mtk_ddp_comp_supported_rotations(struct mtk_ddp_comp *comp)
+ 	if (comp->funcs && comp->funcs->supported_rotations)
+ 		return comp->funcs->supported_rotations(comp->dev);
+ 
+-	return 0;
++	/*
++	 * In order to pass IGT tests, DRM_MODE_ROTATE_0 is required when
++	 * rotation is not supported.
++	 */
++	return DRM_MODE_ROTATE_0;
+ }
+ 
+ static inline unsigned int mtk_ddp_comp_layer_nr(struct mtk_ddp_comp *comp)
+@@ -326,7 +330,7 @@ static inline void mtk_ddp_comp_encoder_index_set(struct mtk_ddp_comp *comp)
+ 
+ int mtk_ddp_comp_get_id(struct device_node *node,
+ 			enum mtk_ddp_comp_type comp_type);
+-unsigned int mtk_find_possible_crtcs(struct drm_device *drm, struct device *dev);
++int mtk_find_possible_crtcs(struct drm_device *drm, struct device *dev);
+ int mtk_ddp_comp_init(struct device_node *comp_node, struct mtk_ddp_comp *comp,
+ 		      unsigned int comp_id);
+ enum mtk_ddp_comp_type mtk_ddp_comp_get_type(unsigned int comp_id);
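The header change above turns mtk_find_possible_crtcs() into a signed function so that callers can detect failure. A standalone sketch of why the old unsigned return type hid errors (ENODEV is hard-coded here purely for illustration, and the helper names are invented):

#include <stdio.h>

#define ENODEV 19   /* illustrative; the real value comes from <errno.h> */

/* old signature: a negative errno is silently truncated into a "bitmask" */
static unsigned int find_crtcs_old(int found)
{
    return found ? (1u << 0) : (unsigned int)-ENODEV;
}

/* new signature: the caller can check ret < 0 before using it as a mask */
static int find_crtcs_new(int found)
{
    return found ? (1 << 0) : -ENODEV;
}

int main(void)
{
    printf("old: 0x%08x\n", find_crtcs_old(0));  /* 0xffffffed, looks like CRTCs */
    printf("new: %d\n", find_crtcs_new(0));      /* -19, clearly an error */
    return 0;
}

This is why the callers in mtk_dpi.c and mtk_dsi.c now store the result in a signed ret and bail out on ret < 0 before assigning possible_crtcs.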
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+index b552a02d7eae7..26b598b9f71f2 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+@@ -38,6 +38,7 @@
+ #define DISP_REG_OVL_PITCH_MSB(n)		(0x0040 + 0x20 * (n))
+ #define OVL_PITCH_MSB_2ND_SUBBUF			BIT(16)
+ #define DISP_REG_OVL_PITCH(n)			(0x0044 + 0x20 * (n))
++#define OVL_CONST_BLEND					BIT(28)
+ #define DISP_REG_OVL_RDMA_CTRL(n)		(0x00c0 + 0x20 * (n))
+ #define DISP_REG_OVL_RDMA_GMC(n)		(0x00c8 + 0x20 * (n))
+ #define DISP_REG_OVL_ADDR_MT2701		0x0040
+@@ -71,6 +72,8 @@
+ #define	OVL_CON_VIRT_FLIP	BIT(9)
+ #define	OVL_CON_HORZ_FLIP	BIT(10)
+ 
++#define OVL_COLOR_ALPHA		GENMASK(31, 24)
++
+ static const u32 mt8173_formats[] = {
+ 	DRM_FORMAT_XRGB8888,
+ 	DRM_FORMAT_ARGB8888,
+@@ -273,7 +276,13 @@ void mtk_ovl_config(struct device *dev, unsigned int w,
+ 	if (w != 0 && h != 0)
+ 		mtk_ddp_write_relaxed(cmdq_pkt, h << 16 | w, &ovl->cmdq_reg, ovl->regs,
+ 				      DISP_REG_OVL_ROI_SIZE);
+-	mtk_ddp_write_relaxed(cmdq_pkt, 0x0, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_ROI_BGCLR);
++
++	/*
++	 * The background color must be opaque black (ARGB),
++	 * otherwise the alpha blending will have no effect
++	 */
++	mtk_ddp_write_relaxed(cmdq_pkt, OVL_COLOR_ALPHA, &ovl->cmdq_reg,
++			      ovl->regs, DISP_REG_OVL_ROI_BGCLR);
+ 
+ 	mtk_ddp_write(cmdq_pkt, 0x1, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_RST);
+ 	mtk_ddp_write(cmdq_pkt, 0x0, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_RST);
+@@ -296,27 +305,20 @@ int mtk_ovl_layer_check(struct device *dev, unsigned int idx,
+ 			struct mtk_plane_state *mtk_state)
+ {
+ 	struct drm_plane_state *state = &mtk_state->base;
+-	unsigned int rotation = 0;
+ 
+-	rotation = drm_rotation_simplify(state->rotation,
+-					 DRM_MODE_ROTATE_0 |
+-					 DRM_MODE_REFLECT_X |
+-					 DRM_MODE_REFLECT_Y);
+-	rotation &= ~DRM_MODE_ROTATE_0;
+-
+-	/* We can only do reflection, not rotation */
+-	if ((rotation & DRM_MODE_ROTATE_MASK) != 0)
++	/* check if any unsupported rotation is set */
++	if (state->rotation & ~mtk_ovl_supported_rotations(dev))
+ 		return -EINVAL;
+ 
+ 	/*
+ 	 * TODO: Rotating/reflecting YUV buffers is not supported at this time.
+ 	 *	 Only RGB[AX] variants are supported.
++	 *	 Since DRM_MODE_ROTATE_0 means "no rotation", we should not
++	 *	 reject layers with this property.
+ 	 */
+-	if (state->fb->format->is_yuv && rotation != 0)
++	if (state->fb->format->is_yuv && (state->rotation & ~DRM_MODE_ROTATE_0))
+ 		return -EINVAL;
+ 
+-	state->rotation = rotation;
+-
+ 	return 0;
+ }
+ 
+@@ -407,6 +409,7 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
+ 	unsigned int fmt = pending->format;
+ 	unsigned int offset = (pending->y << 16) | pending->x;
+ 	unsigned int src_size = (pending->height << 16) | pending->width;
++	unsigned int ignore_pixel_alpha = 0;
+ 	unsigned int con;
+ 	bool is_afbc = pending->modifier != DRM_FORMAT_MOD_LINEAR;
+ 	union overlay_pitch {
+@@ -428,6 +431,14 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
+ 	if (state->base.fb && state->base.fb->format->has_alpha)
+ 		con |= OVL_CON_AEN | OVL_CON_ALPHA;
+ 
++	/* CONST_BLD must be enabled for XRGB formats: although the alpha channel
++	 * can be ignored, OVL will otherwise still read the alpha value from memory.
++	 * For RGB888 related formats, whether CONST_BLD is enabled or not won't
++	 * affect the result. Therefore we use !has_alpha as the condition.
++	 */
++	if (state->base.fb && !state->base.fb->format->has_alpha)
++		ignore_pixel_alpha = OVL_CONST_BLEND;
++
+ 	if (pending->rotation & DRM_MODE_REFLECT_Y) {
+ 		con |= OVL_CON_VIRT_FLIP;
+ 		addr += (pending->height - 1) * pending->pitch;
+@@ -443,8 +454,8 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
+ 
+ 	mtk_ddp_write_relaxed(cmdq_pkt, con, &ovl->cmdq_reg, ovl->regs,
+ 			      DISP_REG_OVL_CON(idx));
+-	mtk_ddp_write_relaxed(cmdq_pkt, overlay_pitch.split_pitch.lsb, &ovl->cmdq_reg, ovl->regs,
+-			      DISP_REG_OVL_PITCH(idx));
++	mtk_ddp_write_relaxed(cmdq_pkt, overlay_pitch.split_pitch.lsb | ignore_pixel_alpha,
++			      &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH(idx));
+ 	mtk_ddp_write_relaxed(cmdq_pkt, src_size, &ovl->cmdq_reg, ovl->regs,
+ 			      DISP_REG_OVL_SRC_SIZE(idx));
+ 	mtk_ddp_write_relaxed(cmdq_pkt, offset, &ovl->cmdq_reg, ovl->regs,
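The OVL hunks above program an opaque-black background colour and OR a constant-blend flag into the pitch register value. A standalone sketch of the bit arithmetic, using local stand-ins for the kernel's BIT()/GENMASK() helpers and a made-up pitch value:

#include <stdio.h>

/* local stand-ins for the kernel's BIT()/GENMASK() macros */
#define BIT(n)          (1u << (n))
#define GENMASK(h, l)   ((~0u << (l)) & (~0u >> (31 - (h))))

#define OVL_COLOR_ALPHA  GENMASK(31, 24)  /* alpha byte of an ARGB8888 word */
#define OVL_CONST_BLEND  BIT(28)

int main(void)
{
    unsigned int bgclr = OVL_COLOR_ALPHA;  /* opaque black: A=0xff, R=G=B=0x00 */
    unsigned int pitch_lsb = 0x1000;       /* made-up pitch value */

    printf("ROI_BGCLR = 0x%08x\n", bgclr);                       /* 0xff000000 */
    printf("PITCH     = 0x%08x\n", pitch_lsb | OVL_CONST_BLEND); /* bit 28 set */
    return 0;
}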
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
+index 02dd7dcdfedb2..2b62d64759181 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
+@@ -158,7 +158,7 @@ void mtk_ovl_adaptor_layer_config(struct device *dev, unsigned int idx,
+ 	merge = ovl_adaptor->ovl_adaptor_comp[OVL_ADAPTOR_MERGE0 + idx];
+ 	ethdr = ovl_adaptor->ovl_adaptor_comp[OVL_ADAPTOR_ETHDR0];
+ 
+-	if (!pending->enable) {
++	if (!pending->enable || !pending->width || !pending->height) {
+ 		mtk_merge_stop_cmdq(merge, cmdq_pkt);
+ 		mtk_mdp_rdma_stop(rdma_l, cmdq_pkt);
+ 		mtk_mdp_rdma_stop(rdma_r, cmdq_pkt);
+diff --git a/drivers/gpu/drm/mediatek/mtk_dp.c b/drivers/gpu/drm/mediatek/mtk_dp.c
+index 536366956447a..ada12927bbacf 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dp.c
++++ b/drivers/gpu/drm/mediatek/mtk_dp.c
+@@ -2073,9 +2073,15 @@ static const struct drm_edid *mtk_dp_edid_read(struct drm_bridge *bridge,
+ 		 */
+ 		const struct edid *edid = drm_edid_raw(drm_edid);
+ 		struct cea_sad *sads;
++		int ret;
+ 
+-		audio_caps->sad_count = drm_edid_to_sad(edid, &sads);
+-		kfree(sads);
++		ret = drm_edid_to_sad(edid, &sads);
++		/* Ignore any errors */
++		if (ret < 0)
++			ret = 0;
++		if (ret)
++			kfree(sads);
++		audio_caps->sad_count = ret;
+ 
+ 		/*
+ 		 * FIXME: This should use connector->display_info.has_audio from
+diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
+index bfe8653005dbf..a08d206549543 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
+@@ -805,7 +805,10 @@ static int mtk_dpi_bind(struct device *dev, struct device *master, void *data)
+ 		return ret;
+ 	}
+ 
+-	dpi->encoder.possible_crtcs = mtk_find_possible_crtcs(drm_dev, dpi->dev);
++	ret = mtk_find_possible_crtcs(drm_dev, dpi->dev);
++	if (ret < 0)
++		goto err_cleanup;
++	dpi->encoder.possible_crtcs = ret;
+ 
+ 	ret = drm_bridge_attach(&dpi->encoder, &dpi->bridge, NULL,
+ 				DRM_BRIDGE_ATTACH_NO_CONNECTOR);
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index de811e2265da7..56f409ad7f390 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -294,6 +294,9 @@ static const struct mtk_mmsys_driver_data mt8188_vdosys0_driver_data = {
+ 	.conn_routes = mt8188_mtk_ddp_main_routes,
+ 	.num_conn_routes = ARRAY_SIZE(mt8188_mtk_ddp_main_routes),
+ 	.mmsys_dev_num = 2,
++	.max_width = 8191,
++	.min_width = 1,
++	.min_height = 1,
+ };
+ 
+ static const struct mtk_mmsys_driver_data mt8192_mmsys_driver_data = {
+@@ -308,6 +311,9 @@ static const struct mtk_mmsys_driver_data mt8195_vdosys0_driver_data = {
+ 	.main_path = mt8195_mtk_ddp_main,
+ 	.main_len = ARRAY_SIZE(mt8195_mtk_ddp_main),
+ 	.mmsys_dev_num = 2,
++	.max_width = 8191,
++	.min_width = 1,
++	.min_height = 1,
+ };
+ 
+ static const struct mtk_mmsys_driver_data mt8195_vdosys1_driver_data = {
+@@ -315,6 +321,9 @@ static const struct mtk_mmsys_driver_data mt8195_vdosys1_driver_data = {
+ 	.ext_len = ARRAY_SIZE(mt8195_mtk_ddp_ext),
+ 	.mmsys_id = 1,
+ 	.mmsys_dev_num = 2,
++	.max_width = 8191,
++	.min_width = 2, /* 2-pixel align when ethdr is bypassed */
++	.min_height = 1,
+ };
+ 
+ static const struct of_device_id mtk_drm_of_ids[] = {
+@@ -493,6 +502,15 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+ 		for (j = 0; j < private->data->mmsys_dev_num; j++) {
+ 			priv_n = private->all_drm_private[j];
+ 
++			if (priv_n->data->max_width)
++				drm->mode_config.max_width = priv_n->data->max_width;
++
++			if (priv_n->data->min_width)
++				drm->mode_config.min_width = priv_n->data->min_width;
++
++			if (priv_n->data->min_height)
++				drm->mode_config.min_height = priv_n->data->min_height;
++
+ 			if (i == CRTC_MAIN && priv_n->data->main_len) {
+ 				ret = mtk_crtc_create(drm, priv_n->data->main_path,
+ 						      priv_n->data->main_len, j,
+@@ -520,6 +538,10 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+ 		}
+ 	}
+ 
++	/* IGT will check if the cursor size is configured */
++	drm->mode_config.cursor_width = drm->mode_config.max_width;
++	drm->mode_config.cursor_height = drm->mode_config.max_height;
++
+ 	/* Use OVL device for all DMA memory allocations */
+ 	crtc = drm_crtc_from_index(drm, 0);
+ 	if (crtc)
+@@ -743,6 +765,8 @@ static const struct of_device_id mtk_ddp_comp_dt_ids[] = {
+ 	  .data = (void *)MTK_DISP_OVL },
+ 	{ .compatible = "mediatek,mt8192-disp-ovl",
+ 	  .data = (void *)MTK_DISP_OVL },
++	{ .compatible = "mediatek,mt8195-disp-ovl",
++	  .data = (void *)MTK_DISP_OVL },
+ 	{ .compatible = "mediatek,mt8183-disp-ovl-2l",
+ 	  .data = (void *)MTK_DISP_OVL_2L },
+ 	{ .compatible = "mediatek,mt8192-disp-ovl-2l",
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.h b/drivers/gpu/drm/mediatek/mtk_drm_drv.h
+index 78d698ede1bf8..ce897984de51e 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.h
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.h
+@@ -46,6 +46,10 @@ struct mtk_mmsys_driver_data {
+ 	bool shadow_register;
+ 	unsigned int mmsys_id;
+ 	unsigned int mmsys_dev_num;
++
++	u16 max_width;
++	u16 min_width;
++	u16 min_height;
+ };
+ 
+ struct mtk_drm_private {
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index c255559cc56ed..b6e3c011a12d8 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -837,7 +837,10 @@ static int mtk_dsi_encoder_init(struct drm_device *drm, struct mtk_dsi *dsi)
+ 		return ret;
+ 	}
+ 
+-	dsi->encoder.possible_crtcs = mtk_find_possible_crtcs(drm, dsi->host.dev);
++	ret = mtk_find_possible_crtcs(drm, dsi->host.dev);
++	if (ret < 0)
++		goto err_cleanup_encoder;
++	dsi->encoder.possible_crtcs = ret;
+ 
+ 	ret = drm_bridge_attach(&dsi->encoder, &dsi->bridge, NULL,
+ 				DRM_BRIDGE_ATTACH_NO_CONNECTOR);
+diff --git a/drivers/gpu/drm/mediatek/mtk_ethdr.c b/drivers/gpu/drm/mediatek/mtk_ethdr.c
+index 156c6ff547e86..bf5826b7e7760 100644
+--- a/drivers/gpu/drm/mediatek/mtk_ethdr.c
++++ b/drivers/gpu/drm/mediatek/mtk_ethdr.c
+@@ -50,7 +50,6 @@
+ 
+ #define MIXER_INX_MODE_BYPASS			0
+ #define MIXER_INX_MODE_EVEN_EXTEND		1
+-#define DEFAULT_9BIT_ALPHA			0x100
+ #define	MIXER_ALPHA_AEN				BIT(8)
+ #define	MIXER_ALPHA				0xff
+ #define ETHDR_CLK_NUM				13
+@@ -154,13 +153,19 @@ void mtk_ethdr_layer_config(struct device *dev, unsigned int idx,
+ 	unsigned int offset = (pending->x & 1) << 31 | pending->y << 16 | pending->x;
+ 	unsigned int align_width = ALIGN_DOWN(pending->width, 2);
+ 	unsigned int alpha_con = 0;
++	bool replace_src_a = false;
+ 
+ 	dev_dbg(dev, "%s+ idx:%d", __func__, idx);
+ 
+ 	if (idx >= 4)
+ 		return;
+ 
+-	if (!pending->enable) {
++	if (!pending->enable || !pending->width || !pending->height) {
++		/*
++		 * Instead of disabling the layer with MIX_SRC_CON directly,
++		 * set the size to 0 to avoid a screen shift caused by the
++		 * mixer mode switch (hardware behavior).
++		 */
+ 		mtk_ddp_write(cmdq_pkt, 0, &mixer->cmdq_base, mixer->regs, MIX_L_SRC_SIZE(idx));
+ 		return;
+ 	}
+@@ -168,8 +173,16 @@ void mtk_ethdr_layer_config(struct device *dev, unsigned int idx,
+ 	if (state->base.fb && state->base.fb->format->has_alpha)
+ 		alpha_con = MIXER_ALPHA_AEN | MIXER_ALPHA;
+ 
+-	mtk_mmsys_mixer_in_config(priv->mmsys_dev, idx + 1, alpha_con ? false : true,
+-				  DEFAULT_9BIT_ALPHA,
++	if (state->base.fb && !state->base.fb->format->has_alpha) {
++		/*
++		 * Mixer doesn't support CONST_BLD mode,
++		 * use a trick to make the output equivalent
++		 */
++		replace_src_a = true;
++	}
++
++	mtk_mmsys_mixer_in_config(priv->mmsys_dev, idx + 1, replace_src_a,
++				  MIXER_ALPHA,
+ 				  pending->x & 1 ? MIXER_INX_MODE_EVEN_EXTEND :
+ 				  MIXER_INX_MODE_BYPASS, align_width / 2 - 1, cmdq_pkt);
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_plane.c b/drivers/gpu/drm/mediatek/mtk_plane.c
+index 4625deb21d406..1723d4333f371 100644
+--- a/drivers/gpu/drm/mediatek/mtk_plane.c
++++ b/drivers/gpu/drm/mediatek/mtk_plane.c
+@@ -227,6 +227,8 @@ static void mtk_plane_atomic_async_update(struct drm_plane *plane,
+ 	plane->state->src_y = new_state->src_y;
+ 	plane->state->src_h = new_state->src_h;
+ 	plane->state->src_w = new_state->src_w;
++	plane->state->dst.x1 = new_state->dst.x1;
++	plane->state->dst.y1 = new_state->dst.y1;
+ 
+ 	mtk_plane_update_new_state(new_state, new_plane_state);
+ 	swap(plane->state->fb, new_state->fb);
+@@ -336,7 +338,7 @@ int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
+ 		return err;
+ 	}
+ 
+-	if (supported_rotations & ~DRM_MODE_ROTATE_0) {
++	if (supported_rotations) {
+ 		err = drm_plane_create_rotation_property(plane,
+ 							 DRM_MODE_ROTATE_0,
+ 							 supported_rotations);
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index 17a5cca007e29..4bd0baa2a4f55 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -250,29 +250,20 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
+ 	if (ret)
+ 		goto free_drm;
+ 	ret = meson_canvas_alloc(priv->canvas, &priv->canvas_id_vd1_0);
+-	if (ret) {
+-		meson_canvas_free(priv->canvas, priv->canvas_id_osd1);
+-		goto free_drm;
+-	}
++	if (ret)
++		goto free_canvas_osd1;
+ 	ret = meson_canvas_alloc(priv->canvas, &priv->canvas_id_vd1_1);
+-	if (ret) {
+-		meson_canvas_free(priv->canvas, priv->canvas_id_osd1);
+-		meson_canvas_free(priv->canvas, priv->canvas_id_vd1_0);
+-		goto free_drm;
+-	}
++	if (ret)
++		goto free_canvas_vd1_0;
+ 	ret = meson_canvas_alloc(priv->canvas, &priv->canvas_id_vd1_2);
+-	if (ret) {
+-		meson_canvas_free(priv->canvas, priv->canvas_id_osd1);
+-		meson_canvas_free(priv->canvas, priv->canvas_id_vd1_0);
+-		meson_canvas_free(priv->canvas, priv->canvas_id_vd1_1);
+-		goto free_drm;
+-	}
++	if (ret)
++		goto free_canvas_vd1_1;
+ 
+ 	priv->vsync_irq = platform_get_irq(pdev, 0);
+ 
+ 	ret = drm_vblank_init(drm, 1);
+ 	if (ret)
+-		goto free_drm;
++		goto free_canvas_vd1_2;
+ 
+ 	/* Assign limits per soc revision/package */
+ 	for (i = 0 ; i < ARRAY_SIZE(meson_drm_soc_attrs) ; ++i) {
+@@ -288,11 +279,11 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
+ 	 */
+ 	ret = drm_aperture_remove_framebuffers(&meson_driver);
+ 	if (ret)
+-		goto free_drm;
++		goto free_canvas_vd1_2;
+ 
+ 	ret = drmm_mode_config_init(drm);
+ 	if (ret)
+-		goto free_drm;
++		goto free_canvas_vd1_2;
+ 	drm->mode_config.max_width = 3840;
+ 	drm->mode_config.max_height = 2160;
+ 	drm->mode_config.funcs = &meson_mode_config_funcs;
+@@ -307,7 +298,7 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
+ 	if (priv->afbcd.ops) {
+ 		ret = priv->afbcd.ops->init(priv);
+ 		if (ret)
+-			goto free_drm;
++			goto free_canvas_vd1_2;
+ 	}
+ 
+ 	/* Encoder Initialization */
+@@ -371,6 +362,14 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
+ exit_afbcd:
+ 	if (priv->afbcd.ops)
+ 		priv->afbcd.ops->exit(priv);
++free_canvas_vd1_2:
++	meson_canvas_free(priv->canvas, priv->canvas_id_vd1_2);
++free_canvas_vd1_1:
++	meson_canvas_free(priv->canvas, priv->canvas_id_vd1_1);
++free_canvas_vd1_0:
++	meson_canvas_free(priv->canvas, priv->canvas_id_vd1_0);
++free_canvas_osd1:
++	meson_canvas_free(priv->canvas, priv->canvas_id_osd1);
+ free_drm:
+ 	drm_dev_put(drm);
+ 
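The meson bind-path rework above replaces duplicated cleanup calls with stacked error labels. A generic, self-contained sketch of that goto-unwind pattern (the resource names merely echo the canvas IDs; nothing here is driver code):

#include <stdio.h>

static int alloc_res(const char *name, int fail)
{
    if (fail) {
        printf("alloc %s: failed\n", name);
        return -1;
    }
    printf("alloc %s\n", name);
    return 0;
}

static void free_res(const char *name)
{
    printf("free  %s\n", name);
}

static int bind(int fail_at)
{
    int ret;

    ret = alloc_res("osd1", fail_at == 1);
    if (ret)
        goto out;
    ret = alloc_res("vd1_0", fail_at == 2);
    if (ret)
        goto free_osd1;
    ret = alloc_res("vd1_1", fail_at == 3);
    if (ret)
        goto free_vd1_0;
    return 0;

    /* unwind strictly in reverse order of acquisition */
free_vd1_0:
    free_res("vd1_0");
free_osd1:
    free_res("osd1");
out:
    return ret;
}

int main(void)
{
    bind(3);  /* fail at the third allocation and watch the unwind order */
    return 0;
}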
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 973872ad0474e..5383aff848300 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1409,7 +1409,7 @@ static void a6xx_calc_ubwc_config(struct adreno_gpu *gpu)
+ 	if (adreno_is_a702(gpu)) {
+ 		gpu->ubwc_config.highest_bank_bit = 14;
+ 		gpu->ubwc_config.min_acc_len = 1;
+-		gpu->ubwc_config.ubwc_mode = 2;
++		gpu->ubwc_config.ubwc_mode = 0;
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+index 0a7717a4fc2fd..789a11416f7a4 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+@@ -8,19 +8,16 @@
+ #include "a6xx_gpu_state.h"
+ #include "a6xx_gmu.xml.h"
+ 
+-/* Ignore diagnostics about register tables that we aren't using yet. We don't
+- * want to modify these headers too much from their original source.
+- */
+-#pragma GCC diagnostic push
+-#pragma GCC diagnostic ignored "-Wunused-variable"
+-#pragma GCC diagnostic ignored "-Wunused-const-variable"
++static const unsigned int *gen7_0_0_external_core_regs[] __always_unused;
++static const unsigned int *gen7_2_0_external_core_regs[] __always_unused;
++static const unsigned int *gen7_9_0_external_core_regs[] __always_unused;
++static struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] __always_unused;
++static const u32 gen7_9_0_cx_debugbus_blocks[] __always_unused;
+ 
+ #include "adreno_gen7_0_0_snapshot.h"
+ #include "adreno_gen7_2_0_snapshot.h"
+ #include "adreno_gen7_9_0_snapshot.h"
+ 
+-#pragma GCC diagnostic pop
+-
+ struct a6xx_gpu_state_obj {
+ 	const void *handle;
+ 	u32 *data;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 119f3ea50a7c6..697ad4a640516 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -428,7 +428,7 @@ int dpu_encoder_helper_wait_for_irq(struct dpu_encoder_phys *phys_enc,
+ 		return -EWOULDBLOCK;
+ 	}
+ 
+-	if (irq_idx < 0) {
++	if (irq_idx == 0) {
+ 		DRM_DEBUG_KMS("skip irq wait id=%u, callback=%ps\n",
+ 			      DRMID(phys_enc->parent), func);
+ 		return 0;
+@@ -1200,6 +1200,8 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc,
+ 		phys->hw_ctl = to_dpu_hw_ctl(hw_ctl[i]);
+ 
+ 		phys->cached_mode = crtc_state->adjusted_mode;
++		if (phys->ops.atomic_mode_set)
++			phys->ops.atomic_mode_set(phys, crtc_state, conn_state);
+ 	}
+ }
+ 
+@@ -1741,8 +1743,7 @@ void dpu_encoder_trigger_kickoff_pending(struct drm_encoder *drm_enc)
+ 		phys = dpu_enc->phys_encs[i];
+ 
+ 		ctl = phys->hw_ctl;
+-		if (ctl->ops.clear_pending_flush)
+-			ctl->ops.clear_pending_flush(ctl);
++		ctl->ops.clear_pending_flush(ctl);
+ 
+ 		/* update only for command mode primary ctl */
+ 		if ((phys == dpu_enc->cur_master) &&
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h
+index 002e89cc17058..30470cd15a484 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h
+@@ -69,6 +69,8 @@ struct dpu_encoder_phys;
+  * @is_master:			Whether this phys_enc is the current master
+  *				encoder. Can be switched at enable time. Based
+  *				on split_role and current mode (CMD/VID).
++ * @atomic_mode_set:		DRM Call. Set a DRM mode.
++ *				This likely caches the mode, for use at enable.
+  * @enable:			DRM Call. Enable a DRM mode.
+  * @disable:			DRM Call. Disable mode.
+  * @control_vblank_irq		Register/Deregister for VBLANK IRQ
+@@ -93,6 +95,9 @@ struct dpu_encoder_phys;
+ struct dpu_encoder_phys_ops {
+ 	void (*prepare_commit)(struct dpu_encoder_phys *encoder);
+ 	bool (*is_master)(struct dpu_encoder_phys *encoder);
++	void (*atomic_mode_set)(struct dpu_encoder_phys *encoder,
++			struct drm_crtc_state *crtc_state,
++			struct drm_connector_state *conn_state);
+ 	void (*enable)(struct dpu_encoder_phys *encoder);
+ 	void (*disable)(struct dpu_encoder_phys *encoder);
+ 	int (*control_vblank_irq)(struct dpu_encoder_phys *enc, bool enable);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+index 489be1c0c7046..95cd39b496688 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+@@ -142,6 +142,23 @@ static void dpu_encoder_phys_cmd_underrun_irq(void *arg)
+ 	dpu_encoder_underrun_callback(phys_enc->parent, phys_enc);
+ }
+ 
++static void dpu_encoder_phys_cmd_atomic_mode_set(
++		struct dpu_encoder_phys *phys_enc,
++		struct drm_crtc_state *crtc_state,
++		struct drm_connector_state *conn_state)
++{
++	phys_enc->irq[INTR_IDX_CTL_START] = phys_enc->hw_ctl->caps->intr_start;
++
++	phys_enc->irq[INTR_IDX_PINGPONG] = phys_enc->hw_pp->caps->intr_done;
++
++	if (phys_enc->has_intf_te)
++		phys_enc->irq[INTR_IDX_RDPTR] = phys_enc->hw_intf->cap->intr_tear_rd_ptr;
++	else
++		phys_enc->irq[INTR_IDX_RDPTR] = phys_enc->hw_pp->caps->intr_rdptr;
++
++	phys_enc->irq[INTR_IDX_UNDERRUN] = phys_enc->hw_intf->cap->intr_underrun;
++}
++
+ static int _dpu_encoder_phys_cmd_handle_ppdone_timeout(
+ 		struct dpu_encoder_phys *phys_enc)
+ {
+@@ -280,14 +297,6 @@ static void dpu_encoder_phys_cmd_irq_enable(struct dpu_encoder_phys *phys_enc)
+ 					  phys_enc->hw_pp->idx - PINGPONG_0,
+ 					  phys_enc->vblank_refcount);
+ 
+-	phys_enc->irq[INTR_IDX_CTL_START] = phys_enc->hw_ctl->caps->intr_start;
+-	phys_enc->irq[INTR_IDX_PINGPONG] = phys_enc->hw_pp->caps->intr_done;
+-
+-	if (phys_enc->has_intf_te)
+-		phys_enc->irq[INTR_IDX_RDPTR] = phys_enc->hw_intf->cap->intr_tear_rd_ptr;
+-	else
+-		phys_enc->irq[INTR_IDX_RDPTR] = phys_enc->hw_pp->caps->intr_rdptr;
+-
+ 	dpu_core_irq_register_callback(phys_enc->dpu_kms,
+ 				       phys_enc->irq[INTR_IDX_PINGPONG],
+ 				       dpu_encoder_phys_cmd_pp_tx_done_irq,
+@@ -318,10 +327,6 @@ static void dpu_encoder_phys_cmd_irq_disable(struct dpu_encoder_phys *phys_enc)
+ 	dpu_core_irq_unregister_callback(phys_enc->dpu_kms, phys_enc->irq[INTR_IDX_UNDERRUN]);
+ 	dpu_encoder_phys_cmd_control_vblank_irq(phys_enc, false);
+ 	dpu_core_irq_unregister_callback(phys_enc->dpu_kms, phys_enc->irq[INTR_IDX_PINGPONG]);
+-
+-	phys_enc->irq[INTR_IDX_CTL_START] = 0;
+-	phys_enc->irq[INTR_IDX_PINGPONG] = 0;
+-	phys_enc->irq[INTR_IDX_RDPTR] = 0;
+ }
+ 
+ static void dpu_encoder_phys_cmd_tearcheck_config(
+@@ -698,6 +703,7 @@ static void dpu_encoder_phys_cmd_init_ops(
+ 		struct dpu_encoder_phys_ops *ops)
+ {
+ 	ops->is_master = dpu_encoder_phys_cmd_is_master;
++	ops->atomic_mode_set = dpu_encoder_phys_cmd_atomic_mode_set;
+ 	ops->enable = dpu_encoder_phys_cmd_enable;
+ 	ops->disable = dpu_encoder_phys_cmd_disable;
+ 	ops->control_vblank_irq = dpu_encoder_phys_cmd_control_vblank_irq;
+@@ -736,8 +742,6 @@ struct dpu_encoder_phys *dpu_encoder_phys_cmd_init(struct drm_device *dev,
+ 
+ 	dpu_encoder_phys_cmd_init_ops(&phys_enc->ops);
+ 	phys_enc->intf_mode = INTF_MODE_CMD;
+-	phys_enc->irq[INTR_IDX_UNDERRUN] = phys_enc->hw_intf->cap->intr_underrun;
+-
+ 	cmd_enc->stream_sel = 0;
+ 
+ 	if (!phys_enc->hw_intf) {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+index ef69c2f408c3e..636a97432d517 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+@@ -356,6 +356,16 @@ static bool dpu_encoder_phys_vid_needs_single_flush(
+ 	return phys_enc->split_role != ENC_ROLE_SOLO;
+ }
+ 
++static void dpu_encoder_phys_vid_atomic_mode_set(
++		struct dpu_encoder_phys *phys_enc,
++		struct drm_crtc_state *crtc_state,
++		struct drm_connector_state *conn_state)
++{
++	phys_enc->irq[INTR_IDX_VSYNC] = phys_enc->hw_intf->cap->intr_vsync;
++
++	phys_enc->irq[INTR_IDX_UNDERRUN] = phys_enc->hw_intf->cap->intr_underrun;
++}
++
+ static int dpu_encoder_phys_vid_control_vblank_irq(
+ 		struct dpu_encoder_phys *phys_enc,
+ 		bool enable)
+@@ -699,6 +709,7 @@ static int dpu_encoder_phys_vid_get_frame_count(
+ static void dpu_encoder_phys_vid_init_ops(struct dpu_encoder_phys_ops *ops)
+ {
+ 	ops->is_master = dpu_encoder_phys_vid_is_master;
++	ops->atomic_mode_set = dpu_encoder_phys_vid_atomic_mode_set;
+ 	ops->enable = dpu_encoder_phys_vid_enable;
+ 	ops->disable = dpu_encoder_phys_vid_disable;
+ 	ops->control_vblank_irq = dpu_encoder_phys_vid_control_vblank_irq;
+@@ -737,8 +748,6 @@ struct dpu_encoder_phys *dpu_encoder_phys_vid_init(struct drm_device *dev,
+ 
+ 	dpu_encoder_phys_vid_init_ops(&phys_enc->ops);
+ 	phys_enc->intf_mode = INTF_MODE_VIDEO;
+-	phys_enc->irq[INTR_IDX_VSYNC] = phys_enc->hw_intf->cap->intr_vsync;
+-	phys_enc->irq[INTR_IDX_UNDERRUN] = phys_enc->hw_intf->cap->intr_underrun;
+ 
+ 	DPU_DEBUG_VIDENC(phys_enc, "created intf idx:%d\n", p->hw_intf->idx);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+index d3ea91c1d7d2e..882c717859cec 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+@@ -404,6 +404,15 @@ static void dpu_encoder_phys_wb_irq_disable(struct dpu_encoder_phys *phys)
+ 		dpu_core_irq_unregister_callback(phys->dpu_kms, phys->irq[INTR_IDX_WB_DONE]);
+ }
+ 
++static void dpu_encoder_phys_wb_atomic_mode_set(
++		struct dpu_encoder_phys *phys_enc,
++		struct drm_crtc_state *crtc_state,
++		struct drm_connector_state *conn_state)
++{
++
++	phys_enc->irq[INTR_IDX_WB_DONE] = phys_enc->hw_wb->caps->intr_wb_done;
++}
++
+ static void _dpu_encoder_phys_wb_handle_wbdone_timeout(
+ 		struct dpu_encoder_phys *phys_enc)
+ {
+@@ -529,8 +538,7 @@ static void dpu_encoder_phys_wb_disable(struct dpu_encoder_phys *phys_enc)
+ 	}
+ 
+ 	/* reset h/w before final flush */
+-	if (phys_enc->hw_ctl->ops.clear_pending_flush)
+-		phys_enc->hw_ctl->ops.clear_pending_flush(phys_enc->hw_ctl);
++	phys_enc->hw_ctl->ops.clear_pending_flush(phys_enc->hw_ctl);
+ 
+ 	/*
+ 	 * New CTL reset sequence from 5.0 MDP onwards.
+@@ -640,6 +648,7 @@ static bool dpu_encoder_phys_wb_is_valid_for_commit(struct dpu_encoder_phys *phy
+ static void dpu_encoder_phys_wb_init_ops(struct dpu_encoder_phys_ops *ops)
+ {
+ 	ops->is_master = dpu_encoder_phys_wb_is_master;
++	ops->atomic_mode_set = dpu_encoder_phys_wb_atomic_mode_set;
+ 	ops->enable = dpu_encoder_phys_wb_enable;
+ 	ops->disable = dpu_encoder_phys_wb_disable;
+ 	ops->wait_for_commit_done = dpu_encoder_phys_wb_wait_for_commit_done;
+@@ -685,7 +694,6 @@ struct dpu_encoder_phys *dpu_encoder_phys_wb_init(struct drm_device *dev,
+ 
+ 	dpu_encoder_phys_wb_init_ops(&phys_enc->ops);
+ 	phys_enc->intf_mode = INTF_MODE_WB_LINE;
+-	phys_enc->irq[INTR_IDX_WB_DONE] = phys_enc->hw_wb->caps->intr_wb_done;
+ 
+ 	atomic_set(&wb_enc->wbirq_refcount, 0);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index f2b6eac7601dd..9b72977feafa4 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -220,12 +220,9 @@ static const u32 wb2_formats_rgb[] = {
+ 	DRM_FORMAT_RGBA4444,
+ 	DRM_FORMAT_RGBX4444,
+ 	DRM_FORMAT_XRGB4444,
+-	DRM_FORMAT_BGR565,
+ 	DRM_FORMAT_BGR888,
+-	DRM_FORMAT_ABGR8888,
+ 	DRM_FORMAT_BGRA8888,
+ 	DRM_FORMAT_BGRX8888,
+-	DRM_FORMAT_XBGR8888,
+ 	DRM_FORMAT_ABGR1555,
+ 	DRM_FORMAT_BGRA5551,
+ 	DRM_FORMAT_XBGR1555,
+@@ -254,12 +251,9 @@ static const u32 wb2_formats_rgb_yuv[] = {
+ 	DRM_FORMAT_RGBA4444,
+ 	DRM_FORMAT_RGBX4444,
+ 	DRM_FORMAT_XRGB4444,
+-	DRM_FORMAT_BGR565,
+ 	DRM_FORMAT_BGR888,
+-	DRM_FORMAT_ABGR8888,
+ 	DRM_FORMAT_BGRA8888,
+ 	DRM_FORMAT_BGRX8888,
+-	DRM_FORMAT_XBGR8888,
+ 	DRM_FORMAT_ABGR1555,
+ 	DRM_FORMAT_BGRA5551,
+ 	DRM_FORMAT_XBGR1555,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h
+index ef56280bea932..4401fdc0f3e4f 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h
+@@ -83,7 +83,8 @@ struct dpu_hw_ctl_ops {
+ 
+ 	/**
+ 	 * Clear the value of the cached pending_flush_mask
+-	 * No effect on hardware
++	 * No effect on hardware.
++	 * Required to be implemented.
+ 	 * @ctx       : ctl path ctx pointer
+ 	 */
+ 	void (*clear_pending_flush)(struct dpu_hw_ctl *ctx);
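The comment above, together with the dropped NULL checks in dpu_encoder.c and dpu_encoder_phys_wb.c, makes clear_pending_flush() a mandatory op. A standalone sketch of the required-versus-optional callback convention (struct and function names are invented for illustration):

#include <stdio.h>

struct ctl_ops {
    void (*clear_pending_flush)(void);  /* required: every backend provides it */
    void (*debug_dump)(void);           /* optional: may be left NULL */
};

static void clear_flush(void)
{
    puts("pending flush cleared");
}

static const struct ctl_ops ops = {
    .clear_pending_flush = clear_flush,
    /* .debug_dump intentionally left NULL */
};

int main(void)
{
    /* required op: call unconditionally, as the encoder code now does */
    ops.clear_pending_flush();

    /* optional op: keep the NULL check */
    if (ops.debug_dump)
        ops.debug_dump();

    return 0;
}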
+diff --git a/drivers/gpu/drm/msm/dp/dp_aux.c b/drivers/gpu/drm/msm/dp/dp_aux.c
+index da46a433bf747..00dfafbebe0e5 100644
+--- a/drivers/gpu/drm/msm/dp/dp_aux.c
++++ b/drivers/gpu/drm/msm/dp/dp_aux.c
+@@ -513,7 +513,10 @@ static int dp_wait_hpd_asserted(struct drm_dp_aux *dp_aux,
+ 
+ 	aux = container_of(dp_aux, struct dp_aux_private, dp_aux);
+ 
+-	pm_runtime_get_sync(aux->dev);
++	ret = pm_runtime_resume_and_get(aux->dev);
++	if (ret)
++		return ret;
++
+ 	ret = dp_catalog_aux_wait_for_hpd_connect_state(aux->catalog, wait_us);
+ 	pm_runtime_put_sync(aux->dev);
+ 
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index a50f4dda59410..7252d36687e61 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -754,6 +754,8 @@ static void dsi_ctrl_enable(struct msm_dsi_host *msm_host,
+ 		data |= DSI_VID_CFG0_TRAFFIC_MODE(dsi_get_traffic_mode(flags));
+ 		data |= DSI_VID_CFG0_DST_FORMAT(dsi_get_vid_fmt(mipi_fmt));
+ 		data |= DSI_VID_CFG0_VIRT_CHANNEL(msm_host->channel);
++		if (msm_dsi_host_is_wide_bus_enabled(&msm_host->base))
++			data |= DSI_VID_CFG0_DATABUS_WIDEN;
+ 		dsi_write(msm_host, REG_DSI_VID_CFG0, data);
+ 
+ 		/* Do not swap RGB colors */
+@@ -778,7 +780,6 @@ static void dsi_ctrl_enable(struct msm_dsi_host *msm_host,
+ 			if (cfg_hnd->minor >= MSM_DSI_6G_VER_MINOR_V1_3)
+ 				data |= DSI_CMD_MODE_MDP_CTRL2_BURST_MODE;
+ 
+-			/* TODO: Allow for video-mode support once tested/fixed */
+ 			if (msm_dsi_host_is_wide_bus_enabled(&msm_host->base))
+ 				data |= DSI_CMD_MODE_MDP_CTRL2_DATABUS_WIDEN;
+ 
+@@ -856,6 +857,7 @@ static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mod
+ 	u32 slice_per_intf, total_bytes_per_intf;
+ 	u32 pkt_per_line;
+ 	u32 eol_byte_num;
++	u32 bytes_per_pkt;
+ 
+ 	/* first calculate dsc parameters and then program
+ 	 * compress mode registers
+@@ -863,6 +865,7 @@ static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mod
+ 	slice_per_intf = msm_dsc_get_slices_per_intf(dsc, hdisplay);
+ 
+ 	total_bytes_per_intf = dsc->slice_chunk_size * slice_per_intf;
++	bytes_per_pkt = dsc->slice_chunk_size; /* * slice_per_pkt; */
+ 
+ 	eol_byte_num = total_bytes_per_intf % 3;
+ 
+@@ -900,6 +903,7 @@ static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mod
+ 		dsi_write(msm_host, REG_DSI_COMMAND_COMPRESSION_MODE_CTRL, reg_ctrl);
+ 		dsi_write(msm_host, REG_DSI_COMMAND_COMPRESSION_MODE_CTRL2, reg_ctrl2);
+ 	} else {
++		reg |= DSI_VIDEO_COMPRESSION_MODE_CTRL_WC(bytes_per_pkt);
+ 		dsi_write(msm_host, REG_DSI_VIDEO_COMPRESSION_MODE_CTRL, reg);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c b/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
+index 0ffe8f8c01de8..83c604ba3ee1c 100644
+--- a/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
++++ b/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
+@@ -1507,7 +1507,11 @@ static int boe_panel_prepare(struct drm_panel *panel)
+ 	usleep_range(10000, 11000);
+ 
+ 	if (boe->desc->lp11_before_reset) {
+-		mipi_dsi_dcs_nop(boe->dsi);
++		ret = mipi_dsi_dcs_nop(boe->dsi);
++		if (ret < 0) {
++			dev_err(&boe->dsi->dev, "Failed to send NOP: %d\n", ret);
++			goto poweroff;
++		}
+ 		usleep_range(1000, 2000);
+ 	}
+ 	gpiod_set_value(boe->enable_gpio, 1);
+@@ -1528,13 +1532,13 @@ static int boe_panel_prepare(struct drm_panel *panel)
+ 	return 0;
+ 
+ poweroff:
++	gpiod_set_value(boe->enable_gpio, 0);
+ 	regulator_disable(boe->avee);
+ poweroffavdd:
+ 	regulator_disable(boe->avdd);
+ poweroff1v8:
+ 	usleep_range(5000, 7000);
+ 	regulator_disable(boe->pp1800);
+-	gpiod_set_value(boe->enable_gpio, 0);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/panel/panel-himax-hx8394.c b/drivers/gpu/drm/panel/panel-himax-hx8394.c
+index ff0dc08b98297..cb9f46e853de4 100644
+--- a/drivers/gpu/drm/panel/panel-himax-hx8394.c
++++ b/drivers/gpu/drm/panel/panel-himax-hx8394.c
+@@ -370,8 +370,7 @@ static int hx8394_enable(struct drm_panel *panel)
+ 
+ sleep_in:
+ 	/* This will probably fail, but let's try orderly power off anyway. */
+-	ret = mipi_dsi_dcs_enter_sleep_mode(dsi);
+-	if (!ret)
++	if (!mipi_dsi_dcs_enter_sleep_mode(dsi))
+ 		msleep(50);
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/panel/panel-ilitek-ili9882t.c b/drivers/gpu/drm/panel/panel-ilitek-ili9882t.c
+index 267a5307041c9..35ea5494e0eb8 100644
+--- a/drivers/gpu/drm/panel/panel-ilitek-ili9882t.c
++++ b/drivers/gpu/drm/panel/panel-ilitek-ili9882t.c
+@@ -560,7 +560,11 @@ static int ili9882t_prepare(struct drm_panel *panel)
+ 	usleep_range(10000, 11000);
+ 
+ 	// MIPI needs to keep the LP11 state before the lcm_reset pin is pulled high
+-	mipi_dsi_dcs_nop(ili->dsi);
++	ret = mipi_dsi_dcs_nop(ili->dsi);
++	if (ret < 0) {
++		dev_err(&ili->dsi->dev, "Failed to send NOP: %d\n", ret);
++		goto poweroff;
++	}
+ 	usleep_range(1000, 2000);
+ 
+ 	gpiod_set_value(ili->enable_gpio, 1);
+@@ -579,13 +583,13 @@ static int ili9882t_prepare(struct drm_panel *panel)
+ 	return 0;
+ 
+ poweroff:
++	gpiod_set_value(ili->enable_gpio, 0);
+ 	regulator_disable(ili->avee);
+ poweroffavdd:
+ 	regulator_disable(ili->avdd);
+ poweroff1v8:
+ 	usleep_range(5000, 7000);
+ 	regulator_disable(ili->pp1800);
+-	gpiod_set_value(ili->enable_gpio, 0);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/panel/panel-lg-sw43408.c b/drivers/gpu/drm/panel/panel-lg-sw43408.c
+index 2b3a73696dcec..67a98ac508f87 100644
+--- a/drivers/gpu/drm/panel/panel-lg-sw43408.c
++++ b/drivers/gpu/drm/panel/panel-lg-sw43408.c
+@@ -62,16 +62,25 @@ static int sw43408_program(struct drm_panel *panel)
+ {
+ 	struct sw43408_panel *ctx = to_panel_info(panel);
+ 	struct drm_dsc_picture_parameter_set pps;
++	int ret;
+ 
+ 	mipi_dsi_dcs_write_seq(ctx->link, MIPI_DCS_SET_GAMMA_CURVE, 0x02);
+ 
+-	mipi_dsi_dcs_set_tear_on(ctx->link, MIPI_DSI_DCS_TEAR_MODE_VBLANK);
++	ret = mipi_dsi_dcs_set_tear_on(ctx->link, MIPI_DSI_DCS_TEAR_MODE_VBLANK);
++	if (ret < 0) {
++		dev_err(panel->dev, "Failed to set tearing: %d\n", ret);
++		return ret;
++	}
+ 
+ 	mipi_dsi_dcs_write_seq(ctx->link, 0x53, 0x0c, 0x30);
+ 	mipi_dsi_dcs_write_seq(ctx->link, 0x55, 0x00, 0x70, 0xdf, 0x00, 0x70, 0xdf);
+ 	mipi_dsi_dcs_write_seq(ctx->link, 0xf7, 0x01, 0x49, 0x0c);
+ 
+-	mipi_dsi_dcs_exit_sleep_mode(ctx->link);
++	ret = mipi_dsi_dcs_exit_sleep_mode(ctx->link);
++	if (ret < 0) {
++		dev_err(panel->dev, "Failed to exit sleep mode: %d\n", ret);
++		return ret;
++	}
+ 
+ 	msleep(135);
+ 
+@@ -97,14 +106,22 @@ static int sw43408_program(struct drm_panel *panel)
+ 	mipi_dsi_dcs_write_seq(ctx->link, 0x55, 0x04, 0x61, 0xdb, 0x04, 0x70, 0xdb);
+ 	mipi_dsi_dcs_write_seq(ctx->link, 0xb0, 0xca);
+ 
+-	mipi_dsi_dcs_set_display_on(ctx->link);
++	ret = mipi_dsi_dcs_set_display_on(ctx->link);
++	if (ret < 0) {
++		dev_err(panel->dev, "Failed to set display on: %d\n", ret);
++		return ret;
++	}
+ 
+ 	msleep(50);
+ 
+ 	ctx->link->mode_flags &= ~MIPI_DSI_MODE_LPM;
+ 
+ 	drm_dsc_pps_payload_pack(&pps, ctx->link->dsc);
+-	mipi_dsi_picture_parameter_set(ctx->link, &pps);
++	ret = mipi_dsi_picture_parameter_set(ctx->link, &pps);
++	if (ret < 0) {
++		dev_err(panel->dev, "Failed to set PPS: %d\n", ret);
++		return ret;
++	}
+ 
+ 	ctx->link->mode_flags |= MIPI_DSI_MODE_LPM;
+ 
+@@ -113,8 +130,12 @@ static int sw43408_program(struct drm_panel *panel)
+ 	 * PPS 1 if pps_identifier is 0
+ 	 * PPS 2 if pps_identifier is 1
+ 	 */
+-	mipi_dsi_compression_mode_ext(ctx->link, true,
+-				      MIPI_DSI_COMPRESSION_DSC, 1);
++	ret = mipi_dsi_compression_mode_ext(ctx->link, true,
++					    MIPI_DSI_COMPRESSION_DSC, 1);
++	if (ret < 0) {
++		dev_err(panel->dev, "Failed to set compression mode: %d\n", ret);
++		return ret;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index ef9f6c0716d5a..149737d7a07e3 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -828,3 +828,4 @@ module_platform_driver(panfrost_driver);
+ MODULE_AUTHOR("Panfrost Project Developers");
+ MODULE_DESCRIPTION("Panfrost DRM Driver");
+ MODULE_LICENSE("GPL v2");
++MODULE_SOFTDEP("pre: governor_simpleondemand");
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
+index 9a0ff48f7061d..463bcd3cf00f3 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.c
++++ b/drivers/gpu/drm/panthor/panthor_sched.c
+@@ -2939,6 +2939,7 @@ queue_run_job(struct drm_sched_job *sched_job)
+ 			pm_runtime_get(ptdev->base.dev);
+ 			sched->pm.has_ref = true;
+ 		}
++		panthor_devfreq_record_busy(sched->ptdev);
+ 	}
+ 
+ 	/* Update the last fence. */
+diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
+index c6d35c33d5d63..bc24af08dfcd5 100644
+--- a/drivers/gpu/drm/qxl/qxl_display.c
++++ b/drivers/gpu/drm/qxl/qxl_display.c
+@@ -236,6 +236,9 @@ static int qxl_add_mode(struct drm_connector *connector,
+ 		return 0;
+ 
+ 	mode = drm_cvt_mode(dev, width, height, 60, false, false, false);
++	if (!mode)
++		return 0;
++
+ 	if (preferred)
+ 		mode->type |= DRM_MODE_TYPE_PREFERRED;
+ 	mode->hdisplay = width;
+@@ -581,11 +584,11 @@ static struct qxl_bo *qxl_create_cursor(struct qxl_device *qdev,
+ 	if (ret)
+ 		goto err;
+ 
+-	ret = qxl_bo_vmap(cursor_bo, &cursor_map);
++	ret = qxl_bo_pin_and_vmap(cursor_bo, &cursor_map);
+ 	if (ret)
+ 		goto err_unref;
+ 
+-	ret = qxl_bo_vmap(user_bo, &user_map);
++	ret = qxl_bo_pin_and_vmap(user_bo, &user_map);
+ 	if (ret)
+ 		goto err_unmap;
+ 
+@@ -611,12 +614,12 @@ static struct qxl_bo *qxl_create_cursor(struct qxl_device *qdev,
+ 		       user_map.vaddr, size);
+ 	}
+ 
+-	qxl_bo_vunmap(user_bo);
+-	qxl_bo_vunmap(cursor_bo);
++	qxl_bo_vunmap_and_unpin(user_bo);
++	qxl_bo_vunmap_and_unpin(cursor_bo);
+ 	return cursor_bo;
+ 
+ err_unmap:
+-	qxl_bo_vunmap(cursor_bo);
++	qxl_bo_vunmap_and_unpin(cursor_bo);
+ err_unref:
+ 	qxl_bo_unpin(cursor_bo);
+ 	qxl_bo_unref(&cursor_bo);
+@@ -1202,7 +1205,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
+ 	}
+ 	qdev->monitors_config_bo = gem_to_qxl_bo(gobj);
+ 
+-	ret = qxl_bo_vmap(qdev->monitors_config_bo, &map);
++	ret = qxl_bo_pin_and_vmap(qdev->monitors_config_bo, &map);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1233,7 +1236,7 @@ int qxl_destroy_monitors_object(struct qxl_device *qdev)
+ 	qdev->monitors_config = NULL;
+ 	qdev->ram_header->monitors_config = 0;
+ 
+-	ret = qxl_bo_vunmap(qdev->monitors_config_bo);
++	ret = qxl_bo_vunmap_and_unpin(qdev->monitors_config_bo);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
+index 5893e27a7ae50..66635c55cf857 100644
+--- a/drivers/gpu/drm/qxl/qxl_object.c
++++ b/drivers/gpu/drm/qxl/qxl_object.c
+@@ -182,7 +182,7 @@ int qxl_bo_vmap_locked(struct qxl_bo *bo, struct iosys_map *map)
+ 	return 0;
+ }
+ 
+-int qxl_bo_vmap(struct qxl_bo *bo, struct iosys_map *map)
++int qxl_bo_pin_and_vmap(struct qxl_bo *bo, struct iosys_map *map)
+ {
+ 	int r;
+ 
+@@ -190,7 +190,15 @@ int qxl_bo_vmap(struct qxl_bo *bo, struct iosys_map *map)
+ 	if (r)
+ 		return r;
+ 
++	r = qxl_bo_pin_locked(bo);
++	if (r) {
++		qxl_bo_unreserve(bo);
++		return r;
++	}
++
+ 	r = qxl_bo_vmap_locked(bo, map);
++	if (r)
++		qxl_bo_unpin_locked(bo);
+ 	qxl_bo_unreserve(bo);
+ 	return r;
+ }
+@@ -241,7 +249,7 @@ void qxl_bo_vunmap_locked(struct qxl_bo *bo)
+ 	ttm_bo_vunmap(&bo->tbo, &bo->map);
+ }
+ 
+-int qxl_bo_vunmap(struct qxl_bo *bo)
++int qxl_bo_vunmap_and_unpin(struct qxl_bo *bo)
+ {
+ 	int r;
+ 
+@@ -250,6 +258,7 @@ int qxl_bo_vunmap(struct qxl_bo *bo)
+ 		return r;
+ 
+ 	qxl_bo_vunmap_locked(bo);
++	qxl_bo_unpin_locked(bo);
+ 	qxl_bo_unreserve(bo);
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
+index 1cf5bc7591016..875f63221074c 100644
+--- a/drivers/gpu/drm/qxl/qxl_object.h
++++ b/drivers/gpu/drm/qxl/qxl_object.h
+@@ -59,9 +59,9 @@ extern int qxl_bo_create(struct qxl_device *qdev,
+ 			 u32 priority,
+ 			 struct qxl_surface *surf,
+ 			 struct qxl_bo **bo_ptr);
+-int qxl_bo_vmap(struct qxl_bo *bo, struct iosys_map *map);
++int qxl_bo_pin_and_vmap(struct qxl_bo *bo, struct iosys_map *map);
+ int qxl_bo_vmap_locked(struct qxl_bo *bo, struct iosys_map *map);
+-int qxl_bo_vunmap(struct qxl_bo *bo);
++int qxl_bo_vunmap_and_unpin(struct qxl_bo *bo);
+ void qxl_bo_vunmap_locked(struct qxl_bo *bo);
+ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
+ void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+index 62ebbdb16253d..9873172e3fd33 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+@@ -2344,7 +2344,7 @@ static void vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ 		port_sel |= FIELD_PREP(RK3568_OVL_PORT_SET__PORT2_MUX,
+ 			(vp2->nlayers + vp1->nlayers + vp0->nlayers - 1));
+ 	else
+-		port_sel |= FIELD_PREP(RK3568_OVL_PORT_SET__PORT1_MUX, 8);
++		port_sel |= FIELD_PREP(RK3568_OVL_PORT_SET__PORT2_MUX, 8);
+ 
+ 	layer_sel = vop2_readl(vop2, RK3568_OVL_LAYER_SEL);
+ 
+diff --git a/drivers/gpu/drm/ttm/tests/ttm_bo_test.c b/drivers/gpu/drm/ttm/tests/ttm_bo_test.c
+index 1f8a4f8adc929..801bb139075f3 100644
+--- a/drivers/gpu/drm/ttm/tests/ttm_bo_test.c
++++ b/drivers/gpu/drm/ttm/tests/ttm_bo_test.c
+@@ -18,6 +18,12 @@
+ 
+ #define BO_SIZE		SZ_8K
+ 
++#ifdef CONFIG_PREEMPT_RT
++#define ww_mutex_base_lock(b)			rt_mutex_lock(b)
++#else
++#define ww_mutex_base_lock(b)			mutex_lock(b)
++#endif
++
+ struct ttm_bo_test_case {
+ 	const char *description;
+ 	bool interruptible;
+@@ -56,7 +62,7 @@ static void ttm_bo_reserve_optimistic_no_ticket(struct kunit *test)
+ 	struct ttm_buffer_object *bo;
+ 	int err;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	err = ttm_bo_reserve(bo, params->interruptible, params->no_wait, NULL);
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+@@ -71,7 +77,7 @@ static void ttm_bo_reserve_locked_no_sleep(struct kunit *test)
+ 	bool no_wait = true;
+ 	int err;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	/* Let's lock it beforehand */
+ 	dma_resv_lock(bo->base.resv, NULL);
+@@ -92,7 +98,7 @@ static void ttm_bo_reserve_no_wait_ticket(struct kunit *test)
+ 
+ 	ww_acquire_init(&ctx, &reservation_ww_class);
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	err = ttm_bo_reserve(bo, interruptible, no_wait, &ctx);
+ 	KUNIT_ASSERT_EQ(test, err, -EBUSY);
+@@ -110,7 +116,7 @@ static void ttm_bo_reserve_double_resv(struct kunit *test)
+ 
+ 	ww_acquire_init(&ctx, &reservation_ww_class);
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	err = ttm_bo_reserve(bo, interruptible, no_wait, &ctx);
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+@@ -138,11 +144,11 @@ static void ttm_bo_reserve_deadlock(struct kunit *test)
+ 	bool no_wait = false;
+ 	int err;
+ 
+-	bo1 = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
+-	bo2 = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo1 = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
++	bo2 = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	ww_acquire_init(&ctx1, &reservation_ww_class);
+-	mutex_lock(&bo2->base.resv->lock.base);
++	ww_mutex_base_lock(&bo2->base.resv->lock.base);
+ 
+ 	/* The deadlock will be caught by WW mutex, don't warn about it */
+ 	lock_release(&bo2->base.resv->lock.base.dep_map, 1);
+@@ -208,7 +214,7 @@ static void ttm_bo_reserve_interrupted(struct kunit *test)
+ 	struct task_struct *task;
+ 	int err;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	task = kthread_create(threaded_ttm_bo_reserve, bo, "ttm-bo-reserve");
+ 
+@@ -249,7 +255,7 @@ static void ttm_bo_unreserve_basic(struct kunit *test)
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+ 	priv->ttm_dev = ttm_dev;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 	bo->priority = bo_prio;
+ 
+ 	err = ttm_resource_alloc(bo, place, &res1);
+@@ -288,7 +294,7 @@ static void ttm_bo_unreserve_pinned(struct kunit *test)
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+ 	priv->ttm_dev = ttm_dev;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 	place = ttm_place_kunit_init(test, mem_type, 0);
+ 
+ 	dma_resv_lock(bo->base.resv, NULL);
+@@ -321,6 +327,7 @@ static void ttm_bo_unreserve_bulk(struct kunit *test)
+ 	struct ttm_resource *res1, *res2;
+ 	struct ttm_device *ttm_dev;
+ 	struct ttm_place *place;
++	struct dma_resv *resv;
+ 	uint32_t mem_type = TTM_PL_SYSTEM;
+ 	unsigned int bo_priority = 0;
+ 	int err;
+@@ -332,12 +339,17 @@ static void ttm_bo_unreserve_bulk(struct kunit *test)
+ 	ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL);
+ 	KUNIT_ASSERT_NOT_NULL(test, ttm_dev);
+ 
++	resv = kunit_kzalloc(test, sizeof(*resv), GFP_KERNEL);
++	KUNIT_ASSERT_NOT_NULL(test, ttm_dev);
++
+ 	err = ttm_device_kunit_init(priv, ttm_dev, false, false);
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+ 	priv->ttm_dev = ttm_dev;
+ 
+-	bo1 = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
+-	bo2 = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	dma_resv_init(resv);
++
++	bo1 = ttm_bo_kunit_init(test, test->priv, BO_SIZE, resv);
++	bo2 = ttm_bo_kunit_init(test, test->priv, BO_SIZE, resv);
+ 
+ 	dma_resv_lock(bo1->base.resv, NULL);
+ 	ttm_bo_set_bulk_move(bo1, &lru_bulk_move);
+@@ -363,6 +375,8 @@ static void ttm_bo_unreserve_bulk(struct kunit *test)
+ 
+ 	ttm_resource_free(bo1, &res1);
+ 	ttm_resource_free(bo2, &res2);
++
++	dma_resv_fini(resv);
+ }
+ 
+ static void ttm_bo_put_basic(struct kunit *test)
+@@ -384,7 +398,7 @@ static void ttm_bo_put_basic(struct kunit *test)
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+ 	priv->ttm_dev = ttm_dev;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 	bo->type = ttm_bo_type_device;
+ 
+ 	err = ttm_resource_alloc(bo, place, &res);
+@@ -445,7 +459,7 @@ static void ttm_bo_put_shared_resv(struct kunit *test)
+ 
+ 	dma_fence_signal(fence);
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 	bo->type = ttm_bo_type_device;
+ 	bo->base.resv = external_resv;
+ 
+@@ -467,7 +481,7 @@ static void ttm_bo_pin_basic(struct kunit *test)
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+ 	priv->ttm_dev = ttm_dev;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	for (int i = 0; i < no_pins; i++) {
+ 		dma_resv_lock(bo->base.resv, NULL);
+@@ -502,7 +516,7 @@ static void ttm_bo_pin_unpin_resource(struct kunit *test)
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+ 	priv->ttm_dev = ttm_dev;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	err = ttm_resource_alloc(bo, place, &res);
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+@@ -553,7 +567,7 @@ static void ttm_bo_multiple_pin_one_unpin(struct kunit *test)
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+ 	priv->ttm_dev = ttm_dev;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	err = ttm_resource_alloc(bo, place, &res);
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+diff --git a/drivers/gpu/drm/ttm/tests/ttm_kunit_helpers.c b/drivers/gpu/drm/ttm/tests/ttm_kunit_helpers.c
+index 7b7c1fa805fcb..5be317a0af56b 100644
+--- a/drivers/gpu/drm/ttm/tests/ttm_kunit_helpers.c
++++ b/drivers/gpu/drm/ttm/tests/ttm_kunit_helpers.c
+@@ -51,7 +51,8 @@ EXPORT_SYMBOL_GPL(ttm_device_kunit_init);
+ 
+ struct ttm_buffer_object *ttm_bo_kunit_init(struct kunit *test,
+ 					    struct ttm_test_devices *devs,
+-					    size_t size)
++					    size_t size,
++					    struct dma_resv *obj)
+ {
+ 	struct drm_gem_object gem_obj = { };
+ 	struct ttm_buffer_object *bo;
+@@ -61,6 +62,10 @@ struct ttm_buffer_object *ttm_bo_kunit_init(struct kunit *test,
+ 	KUNIT_ASSERT_NOT_NULL(test, bo);
+ 
+ 	bo->base = gem_obj;
++
++	if (obj)
++		bo->base.resv = obj;
++
+ 	err = drm_gem_object_init(devs->drm, &bo->base, size);
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+ 
+diff --git a/drivers/gpu/drm/ttm/tests/ttm_kunit_helpers.h b/drivers/gpu/drm/ttm/tests/ttm_kunit_helpers.h
+index 2f51c833a5367..c83d31b23c9aa 100644
+--- a/drivers/gpu/drm/ttm/tests/ttm_kunit_helpers.h
++++ b/drivers/gpu/drm/ttm/tests/ttm_kunit_helpers.h
+@@ -28,7 +28,8 @@ int ttm_device_kunit_init(struct ttm_test_devices *priv,
+ 			  bool use_dma32);
+ struct ttm_buffer_object *ttm_bo_kunit_init(struct kunit *test,
+ 					    struct ttm_test_devices *devs,
+-					    size_t size);
++					    size_t size,
++					    struct dma_resv *obj);
+ struct ttm_place *ttm_place_kunit_init(struct kunit *test,
+ 				       uint32_t mem_type, uint32_t flags);
+ 
+diff --git a/drivers/gpu/drm/ttm/tests/ttm_pool_test.c b/drivers/gpu/drm/ttm/tests/ttm_pool_test.c
+index 0a3fede84da92..4643f91c6bd59 100644
+--- a/drivers/gpu/drm/ttm/tests/ttm_pool_test.c
++++ b/drivers/gpu/drm/ttm/tests/ttm_pool_test.c
+@@ -57,7 +57,7 @@ static struct ttm_tt *ttm_tt_kunit_init(struct kunit *test,
+ 	struct ttm_tt *tt;
+ 	int err;
+ 
+-	bo = ttm_bo_kunit_init(test, priv->devs, size);
++	bo = ttm_bo_kunit_init(test, priv->devs, size, NULL);
+ 	KUNIT_ASSERT_NOT_NULL(test, bo);
+ 	priv->mock_bo = bo;
+ 
+@@ -209,7 +209,7 @@ static void ttm_pool_alloc_basic_dma_addr(struct kunit *test)
+ 	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
+ 	KUNIT_ASSERT_NOT_NULL(test, tt);
+ 
+-	bo = ttm_bo_kunit_init(test, devs, size);
++	bo = ttm_bo_kunit_init(test, devs, size, NULL);
+ 	KUNIT_ASSERT_NOT_NULL(test, bo);
+ 
+ 	err = ttm_sg_tt_init(tt, bo, 0, caching);
+diff --git a/drivers/gpu/drm/ttm/tests/ttm_resource_test.c b/drivers/gpu/drm/ttm/tests/ttm_resource_test.c
+index 029e1f094bb08..67584058dadbc 100644
+--- a/drivers/gpu/drm/ttm/tests/ttm_resource_test.c
++++ b/drivers/gpu/drm/ttm/tests/ttm_resource_test.c
+@@ -54,7 +54,7 @@ static void ttm_init_test_mocks(struct kunit *test,
+ 	/* Make sure we have what we need for a good BO mock */
+ 	KUNIT_ASSERT_NOT_NULL(test, priv->devs->ttm_dev);
+ 
+-	priv->bo = ttm_bo_kunit_init(test, priv->devs, size);
++	priv->bo = ttm_bo_kunit_init(test, priv->devs, size, NULL);
+ 	priv->place = ttm_place_kunit_init(test, mem_type, flags);
+ }
+ 
+diff --git a/drivers/gpu/drm/ttm/tests/ttm_tt_test.c b/drivers/gpu/drm/ttm/tests/ttm_tt_test.c
+index fd4502c18de67..67bf51723c92f 100644
+--- a/drivers/gpu/drm/ttm/tests/ttm_tt_test.c
++++ b/drivers/gpu/drm/ttm/tests/ttm_tt_test.c
+@@ -63,7 +63,7 @@ static void ttm_tt_init_basic(struct kunit *test)
+ 	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
+ 	KUNIT_ASSERT_NOT_NULL(test, tt);
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, params->size);
++	bo = ttm_bo_kunit_init(test, test->priv, params->size, NULL);
+ 
+ 	err = ttm_tt_init(tt, bo, page_flags, caching, extra_pages);
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+@@ -89,7 +89,7 @@ static void ttm_tt_init_misaligned(struct kunit *test)
+ 	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
+ 	KUNIT_ASSERT_NOT_NULL(test, tt);
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, size);
++	bo = ttm_bo_kunit_init(test, test->priv, size, NULL);
+ 
+ 	/* Make the object size misaligned */
+ 	bo->base.size += 1;
+@@ -110,7 +110,7 @@ static void ttm_tt_fini_basic(struct kunit *test)
+ 	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
+ 	KUNIT_ASSERT_NOT_NULL(test, tt);
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	err = ttm_tt_init(tt, bo, 0, caching, 0);
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+@@ -130,7 +130,7 @@ static void ttm_tt_fini_sg(struct kunit *test)
+ 	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
+ 	KUNIT_ASSERT_NOT_NULL(test, tt);
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	err = ttm_sg_tt_init(tt, bo, 0, caching);
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+@@ -151,7 +151,7 @@ static void ttm_tt_fini_shmem(struct kunit *test)
+ 	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
+ 	KUNIT_ASSERT_NOT_NULL(test, tt);
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	err = ttm_tt_init(tt, bo, 0, caching, 0);
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+@@ -168,7 +168,7 @@ static void ttm_tt_create_basic(struct kunit *test)
+ 	struct ttm_buffer_object *bo;
+ 	int err;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 	bo->type = ttm_bo_type_device;
+ 
+ 	dma_resv_lock(bo->base.resv, NULL);
+@@ -187,7 +187,7 @@ static void ttm_tt_create_invalid_bo_type(struct kunit *test)
+ 	struct ttm_buffer_object *bo;
+ 	int err;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 	bo->type = ttm_bo_type_sg + 1;
+ 
+ 	dma_resv_lock(bo->base.resv, NULL);
+@@ -208,7 +208,7 @@ static void ttm_tt_create_ttm_exists(struct kunit *test)
+ 	tt = kunit_kzalloc(test, sizeof(*tt), GFP_KERNEL);
+ 	KUNIT_ASSERT_NOT_NULL(test, tt);
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	err = ttm_tt_init(tt, bo, 0, caching, 0);
+ 	KUNIT_ASSERT_EQ(test, err, 0);
+@@ -239,7 +239,7 @@ static void ttm_tt_create_failed(struct kunit *test)
+ 	struct ttm_buffer_object *bo;
+ 	int err;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	/* Update ttm_device_funcs so we don't alloc ttm_tt */
+ 	devs->ttm_dev->funcs = &ttm_dev_empty_funcs;
+@@ -257,7 +257,7 @@ static void ttm_tt_destroy_basic(struct kunit *test)
+ 	struct ttm_buffer_object *bo;
+ 	int err;
+ 
+-	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE);
++	bo = ttm_bo_kunit_init(test, test->priv, BO_SIZE, NULL);
+ 
+ 	dma_resv_lock(bo->base.resv, NULL);
+ 	err = ttm_tt_create(bo, false);
+diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
+index 7702359c90c22..751da3a294c44 100644
+--- a/drivers/gpu/drm/udl/udl_modeset.c
++++ b/drivers/gpu/drm/udl/udl_modeset.c
+@@ -527,8 +527,7 @@ struct drm_connector *udl_connector_init(struct drm_device *dev)
+ 
+ 	drm_connector_helper_add(connector, &udl_connector_helper_funcs);
+ 
+-	connector->polled = DRM_CONNECTOR_POLL_HPD |
+-			    DRM_CONNECTOR_POLL_CONNECT |
++	connector->polled = DRM_CONNECTOR_POLL_CONNECT |
+ 			    DRM_CONNECTOR_POLL_DISCONNECT;
+ 
+ 	return connector;
+diff --git a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+index d46f87a039f20..b3d3c065dd9d8 100644
+--- a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
++++ b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+@@ -159,12 +159,16 @@ void intel_hdcp_gsc_fini(struct xe_device *xe)
+ {
+ 	struct intel_hdcp_gsc_message *hdcp_message =
+ 					xe->display.hdcp.hdcp_message;
++	struct i915_hdcp_arbiter *arb = xe->display.hdcp.arbiter;
+ 
+-	if (!hdcp_message)
+-		return;
++	if (hdcp_message) {
++		xe_bo_unpin_map_no_vm(hdcp_message->hdcp_bo);
++		kfree(hdcp_message);
++		xe->display.hdcp.hdcp_message = NULL;
++	}
+ 
+-	xe_bo_unpin_map_no_vm(hdcp_message->hdcp_bo);
+-	kfree(hdcp_message);
++	kfree(arb);
++	xe->display.hdcp.arbiter = NULL;
+ }
+ 
+ static int xe_gsc_send_sync(struct xe_device *xe,
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index bc1f794e3e614..b6f3a43d637f7 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -317,7 +317,7 @@ static struct ttm_tt *xe_ttm_tt_create(struct ttm_buffer_object *ttm_bo,
+ 	struct xe_device *xe = xe_bo_device(bo);
+ 	struct xe_ttm_tt *tt;
+ 	unsigned long extra_pages;
+-	enum ttm_caching caching;
++	enum ttm_caching caching = ttm_cached;
+ 	int err;
+ 
+ 	tt = kzalloc(sizeof(*tt), GFP_KERNEL);
+@@ -331,26 +331,35 @@ static struct ttm_tt *xe_ttm_tt_create(struct ttm_buffer_object *ttm_bo,
+ 		extra_pages = DIV_ROUND_UP(xe_device_ccs_bytes(xe, bo->size),
+ 					   PAGE_SIZE);
+ 
+-	switch (bo->cpu_caching) {
+-	case DRM_XE_GEM_CPU_CACHING_WC:
+-		caching = ttm_write_combined;
+-		break;
+-	default:
+-		caching = ttm_cached;
+-		break;
+-	}
+-
+-	WARN_ON((bo->flags & XE_BO_FLAG_USER) && !bo->cpu_caching);
+-
+ 	/*
+-	 * Display scanout is always non-coherent with the CPU cache.
+-	 *
+-	 * For Xe_LPG and beyond, PPGTT PTE lookups are also non-coherent and
+-	 * require a CPU:WC mapping.
++	 * DGFX system memory is always WB / ttm_cached, since
++	 * other caching modes are only supported on x86. DGFX
++	 * GPU system memory accesses are always coherent with the
++	 * CPU.
+ 	 */
+-	if ((!bo->cpu_caching && bo->flags & XE_BO_FLAG_SCANOUT) ||
+-	    (xe->info.graphics_verx100 >= 1270 && bo->flags & XE_BO_FLAG_PAGETABLE))
+-		caching = ttm_write_combined;
++	if (!IS_DGFX(xe)) {
++		switch (bo->cpu_caching) {
++		case DRM_XE_GEM_CPU_CACHING_WC:
++			caching = ttm_write_combined;
++			break;
++		default:
++			caching = ttm_cached;
++			break;
++		}
++
++		WARN_ON((bo->flags & XE_BO_FLAG_USER) && !bo->cpu_caching);
++
++		/*
++		 * Display scanout is always non-coherent with the CPU cache.
++		 *
++		 * For Xe_LPG and beyond, PPGTT PTE lookups are also
++		 * non-coherent and require a CPU:WC mapping.
++		 */
++		if ((!bo->cpu_caching && bo->flags & XE_BO_FLAG_SCANOUT) ||
++		    (xe->info.graphics_verx100 >= 1270 &&
++		     bo->flags & XE_BO_FLAG_PAGETABLE))
++			caching = ttm_write_combined;
++	}
+ 
+ 	err = ttm_tt_init(&tt->ttm, &bo->ttm, page_flags, caching, extra_pages);
+ 	if (err) {
+diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
+index 86422e113d396..10450f1fbbde7 100644
+--- a/drivers/gpu/drm/xe/xe_bo_types.h
++++ b/drivers/gpu/drm/xe/xe_bo_types.h
+@@ -66,7 +66,8 @@ struct xe_bo {
+ 
+ 	/**
+ 	 * @cpu_caching: CPU caching mode. Currently only used for userspace
+-	 * objects.
++	 * objects. Exceptions are system memory on DGFX, which is always
++	 * WB.
+ 	 */
+ 	u16 cpu_caching;
+ 
+diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
+index 97eeb973e897c..074344c739abc 100644
+--- a/drivers/gpu/drm/xe/xe_exec.c
++++ b/drivers/gpu/drm/xe/xe_exec.c
+@@ -118,7 +118,7 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+ 	u64 addresses[XE_HW_ENGINE_MAX_INSTANCE];
+ 	struct drm_gpuvm_exec vm_exec = {.extra.fn = xe_exec_fn};
+ 	struct drm_exec *exec = &vm_exec.exec;
+-	u32 i, num_syncs = 0, num_ufence = 0;
++	u32 i, num_syncs, num_ufence = 0;
+ 	struct xe_sched_job *job;
+ 	struct xe_vm *vm;
+ 	bool write_locked, skip_retry = false;
+@@ -156,15 +156,15 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+ 
+ 	vm = q->vm;
+ 
+-	for (i = 0; i < args->num_syncs; i++) {
+-		err = xe_sync_entry_parse(xe, xef, &syncs[num_syncs++],
+-					  &syncs_user[i], SYNC_PARSE_FLAG_EXEC |
++	for (num_syncs = 0; num_syncs < args->num_syncs; num_syncs++) {
++		err = xe_sync_entry_parse(xe, xef, &syncs[num_syncs],
++					  &syncs_user[num_syncs], SYNC_PARSE_FLAG_EXEC |
+ 					  (xe_vm_in_lr_mode(vm) ?
+ 					   SYNC_PARSE_FLAG_LR_MODE : 0));
+ 		if (err)
+ 			goto err_syncs;
+ 
+-		if (xe_sync_is_ufence(&syncs[i]))
++		if (xe_sync_is_ufence(&syncs[num_syncs]))
+ 			num_ufence++;
+ 	}
+ 
+@@ -325,8 +325,8 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+ 	if (err == -EAGAIN && !skip_retry)
+ 		goto retry;
+ err_syncs:
+-	for (i = 0; i < num_syncs; i++)
+-		xe_sync_entry_cleanup(&syncs[i]);
++	while (num_syncs--)
++		xe_sync_entry_cleanup(&syncs[num_syncs]);
+ 	kfree(syncs);
+ err_exec_queue:
+ 	xe_exec_queue_put(q);
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+index 6c2cfc54442ce..4f40fe24c649c 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+@@ -1527,6 +1527,7 @@ static u64 pf_estimate_fair_lmem(struct xe_gt *gt, unsigned int num_vfs)
+ 	u64 fair;
+ 
+ 	fair = div_u64(available, num_vfs);
++	fair = rounddown_pow_of_two(fair);	/* XXX: ttm_vram_mgr & drm_buddy limitation */
+ 	fair = ALIGN_DOWN(fair, alignment);
+ #ifdef MAX_FAIR_LMEM
+ 	fair = min_t(u64, MAX_FAIR_LMEM, fair);
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
+index face8d6b2a6fb..f5781939de9c3 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
+@@ -269,6 +269,7 @@ static int zynqmp_dpsub_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_disp:
++	drm_bridge_remove(dpsub->bridge);
+ 	zynqmp_disp_remove(dpsub);
+ err_dp:
+ 	zynqmp_dp_remove(dpsub);
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_kms.c b/drivers/gpu/drm/xlnx/zynqmp_kms.c
+index 43bf416b33d5c..f25583ce92e60 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_kms.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_kms.c
+@@ -433,23 +433,28 @@ static int zynqmp_dpsub_kms_init(struct zynqmp_dpsub *dpsub)
+ 				DRM_BRIDGE_ATTACH_NO_CONNECTOR);
+ 	if (ret) {
+ 		dev_err(dpsub->dev, "failed to attach bridge to encoder\n");
+-		return ret;
++		goto err_encoder;
+ 	}
+ 
+ 	/* Create the connector for the chain of bridges. */
+ 	connector = drm_bridge_connector_init(&dpsub->drm->dev, encoder);
+ 	if (IS_ERR(connector)) {
+ 		dev_err(dpsub->dev, "failed to created connector\n");
+-		return PTR_ERR(connector);
++		ret = PTR_ERR(connector);
++		goto err_encoder;
+ 	}
+ 
+ 	ret = drm_connector_attach_encoder(connector, encoder);
+ 	if (ret < 0) {
+ 		dev_err(dpsub->dev, "failed to attach connector to encoder\n");
+-		return ret;
++		goto err_encoder;
+ 	}
+ 
+ 	return 0;
++
++err_encoder:
++	drm_encoder_cleanup(encoder);
++	return ret;
+ }
+ 
+ static void zynqmp_dpsub_drm_release(struct drm_device *drm, void *res)
+@@ -529,5 +534,6 @@ void zynqmp_dpsub_drm_cleanup(struct zynqmp_dpsub *dpsub)
+ 
+ 	drm_dev_unregister(drm);
+ 	drm_atomic_helper_shutdown(drm);
++	drm_encoder_cleanup(&dpsub->drm->encoder);
+ 	drm_kms_helper_poll_fini(drm);
+ }
+diff --git a/drivers/hwmon/adt7475.c b/drivers/hwmon/adt7475.c
+index 4224ffb304832..ec3336804720e 100644
+--- a/drivers/hwmon/adt7475.c
++++ b/drivers/hwmon/adt7475.c
+@@ -1900,7 +1900,7 @@ static void adt7475_read_pwm(struct i2c_client *client, int index)
+ 		data->pwm[CONTROL][index] &= ~0xE0;
+ 		data->pwm[CONTROL][index] |= (7 << 5);
+ 
+-		i2c_smbus_write_byte_data(client, PWM_CONFIG_REG(index),
++		i2c_smbus_write_byte_data(client, PWM_REG(index),
+ 					  data->pwm[INPUT][index]);
+ 
+ 		i2c_smbus_write_byte_data(client, PWM_CONFIG_REG(index),
+diff --git a/drivers/hwmon/ltc2991.c b/drivers/hwmon/ltc2991.c
+index 06750bb93c236..f74ce9c25bf71 100644
+--- a/drivers/hwmon/ltc2991.c
++++ b/drivers/hwmon/ltc2991.c
+@@ -225,8 +225,8 @@ static umode_t ltc2991_is_visible(const void *data,
+ 	case hwmon_temp:
+ 		switch (attr) {
+ 		case hwmon_temp_input:
+-			if (st->temp_en[channel] ||
+-			    channel == LTC2991_T_INT_CH_NR)
++			if (channel == LTC2991_T_INT_CH_NR ||
++			    st->temp_en[channel])
+ 				return 0444;
+ 			break;
+ 		}
+diff --git a/drivers/hwmon/max6697.c b/drivers/hwmon/max6697.c
+index d161ba0e7813c..8aac8278193f0 100644
+--- a/drivers/hwmon/max6697.c
++++ b/drivers/hwmon/max6697.c
+@@ -311,6 +311,7 @@ static ssize_t temp_store(struct device *dev,
+ 		return ret;
+ 
+ 	mutex_lock(&data->update_lock);
++	temp = clamp_val(temp, -1000000, 1000000);	/* prevent underflow */
+ 	temp = DIV_ROUND_CLOSEST(temp, 1000) + data->temp_offset;
+ 	temp = clamp_val(temp, 0, data->type == max6581 ? 255 : 127);
+ 	data->temp[nr][index] = temp;
+@@ -428,14 +429,14 @@ static SENSOR_DEVICE_ATTR_RO(temp6_max_alarm, alarm, 20);
+ static SENSOR_DEVICE_ATTR_RO(temp7_max_alarm, alarm, 21);
+ static SENSOR_DEVICE_ATTR_RO(temp8_max_alarm, alarm, 23);
+ 
+-static SENSOR_DEVICE_ATTR_RO(temp1_crit_alarm, alarm, 14);
++static SENSOR_DEVICE_ATTR_RO(temp1_crit_alarm, alarm, 15);
+ static SENSOR_DEVICE_ATTR_RO(temp2_crit_alarm, alarm, 8);
+ static SENSOR_DEVICE_ATTR_RO(temp3_crit_alarm, alarm, 9);
+ static SENSOR_DEVICE_ATTR_RO(temp4_crit_alarm, alarm, 10);
+ static SENSOR_DEVICE_ATTR_RO(temp5_crit_alarm, alarm, 11);
+ static SENSOR_DEVICE_ATTR_RO(temp6_crit_alarm, alarm, 12);
+ static SENSOR_DEVICE_ATTR_RO(temp7_crit_alarm, alarm, 13);
+-static SENSOR_DEVICE_ATTR_RO(temp8_crit_alarm, alarm, 15);
++static SENSOR_DEVICE_ATTR_RO(temp8_crit_alarm, alarm, 14);
+ 
+ static SENSOR_DEVICE_ATTR_RO(temp2_fault, alarm, 1);
+ static SENSOR_DEVICE_ATTR_RO(temp3_fault, alarm, 2);
+diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c
+index 9d550f5697fa8..57a009552cc5c 100644
+--- a/drivers/hwtracing/coresight/coresight-platform.c
++++ b/drivers/hwtracing/coresight/coresight-platform.c
+@@ -297,8 +297,10 @@ static int of_get_coresight_platform_data(struct device *dev,
+ 			continue;
+ 
+ 		ret = of_coresight_parse_endpoint(dev, ep, pdata);
+-		if (ret)
++		if (ret) {
++			of_node_put(ep);
+ 			return ret;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
+index d7e966a255833..4e7d6a43ee9b3 100644
+--- a/drivers/i3c/master/mipi-i3c-hci/core.c
++++ b/drivers/i3c/master/mipi-i3c-hci/core.c
+@@ -631,6 +631,7 @@ static irqreturn_t i3c_hci_irq_handler(int irq, void *dev_id)
+ static int i3c_hci_init(struct i3c_hci *hci)
+ {
+ 	u32 regval, offset;
++	bool size_in_dwords;
+ 	int ret;
+ 
+ 	/* Validate HCI hardware version */
+@@ -654,11 +655,16 @@ static int i3c_hci_init(struct i3c_hci *hci)
+ 	hci->caps = reg_read(HC_CAPABILITIES);
+ 	DBG("caps = %#x", hci->caps);
+ 
++	size_in_dwords = hci->version_major < 1 ||
++			 (hci->version_major == 1 && hci->version_minor < 1);
++
+ 	regval = reg_read(DAT_SECTION);
+ 	offset = FIELD_GET(DAT_TABLE_OFFSET, regval);
+ 	hci->DAT_regs = offset ? hci->base_regs + offset : NULL;
+ 	hci->DAT_entries = FIELD_GET(DAT_TABLE_SIZE, regval);
+ 	hci->DAT_entry_size = FIELD_GET(DAT_ENTRY_SIZE, regval) ? 0 : 8;
++	if (size_in_dwords)
++		hci->DAT_entries = 4 * hci->DAT_entries / hci->DAT_entry_size;
+ 	dev_info(&hci->master.dev, "DAT: %u %u-bytes entries at offset %#x\n",
+ 		 hci->DAT_entries, hci->DAT_entry_size, offset);
+ 
+@@ -667,6 +673,8 @@ static int i3c_hci_init(struct i3c_hci *hci)
+ 	hci->DCT_regs = offset ? hci->base_regs + offset : NULL;
+ 	hci->DCT_entries = FIELD_GET(DCT_TABLE_SIZE, regval);
+ 	hci->DCT_entry_size = FIELD_GET(DCT_ENTRY_SIZE, regval) ? 0 : 16;
++	if (size_in_dwords)
++		hci->DCT_entries = 4 * hci->DCT_entries / hci->DCT_entry_size;
+ 	dev_info(&hci->master.dev, "DCT: %u %u-bytes entries at offset %#x\n",
+ 		 hci->DCT_entries, hci->DCT_entry_size, offset);
+ 
+diff --git a/drivers/iio/adc/ad9467.c b/drivers/iio/adc/ad9467.c
+index 8f5b9c3f6e3d6..1fd2211e29642 100644
+--- a/drivers/iio/adc/ad9467.c
++++ b/drivers/iio/adc/ad9467.c
+@@ -141,9 +141,10 @@ struct ad9467_state {
+ 	struct gpio_desc		*pwrdown_gpio;
+ 	/* ensure consistent state obtained on multiple related accesses */
+ 	struct mutex			lock;
++	u8				buf[3] __aligned(IIO_DMA_MINALIGN);
+ };
+ 
+-static int ad9467_spi_read(struct spi_device *spi, unsigned int reg)
++static int ad9467_spi_read(struct ad9467_state *st, unsigned int reg)
+ {
+ 	unsigned char tbuf[2], rbuf[1];
+ 	int ret;
+@@ -151,7 +152,7 @@ static int ad9467_spi_read(struct spi_device *spi, unsigned int reg)
+ 	tbuf[0] = 0x80 | (reg >> 8);
+ 	tbuf[1] = reg & 0xFF;
+ 
+-	ret = spi_write_then_read(spi,
++	ret = spi_write_then_read(st->spi,
+ 				  tbuf, ARRAY_SIZE(tbuf),
+ 				  rbuf, ARRAY_SIZE(rbuf));
+ 
+@@ -161,35 +162,32 @@ static int ad9467_spi_read(struct spi_device *spi, unsigned int reg)
+ 	return rbuf[0];
+ }
+ 
+-static int ad9467_spi_write(struct spi_device *spi, unsigned int reg,
++static int ad9467_spi_write(struct ad9467_state *st, unsigned int reg,
+ 			    unsigned int val)
+ {
+-	unsigned char buf[3];
++	st->buf[0] = reg >> 8;
++	st->buf[1] = reg & 0xFF;
++	st->buf[2] = val;
+ 
+-	buf[0] = reg >> 8;
+-	buf[1] = reg & 0xFF;
+-	buf[2] = val;
+-
+-	return spi_write(spi, buf, ARRAY_SIZE(buf));
++	return spi_write(st->spi, st->buf, ARRAY_SIZE(st->buf));
+ }
+ 
+ static int ad9467_reg_access(struct iio_dev *indio_dev, unsigned int reg,
+ 			     unsigned int writeval, unsigned int *readval)
+ {
+ 	struct ad9467_state *st = iio_priv(indio_dev);
+-	struct spi_device *spi = st->spi;
+ 	int ret;
+ 
+ 	if (!readval) {
+ 		guard(mutex)(&st->lock);
+-		ret = ad9467_spi_write(spi, reg, writeval);
++		ret = ad9467_spi_write(st, reg, writeval);
+ 		if (ret)
+ 			return ret;
+-		return ad9467_spi_write(spi, AN877_ADC_REG_TRANSFER,
++		return ad9467_spi_write(st, AN877_ADC_REG_TRANSFER,
+ 					AN877_ADC_TRANSFER_SYNC);
+ 	}
+ 
+-	ret = ad9467_spi_read(spi, reg);
++	ret = ad9467_spi_read(st, reg);
+ 	if (ret < 0)
+ 		return ret;
+ 	*readval = ret;
+@@ -295,7 +293,7 @@ static int ad9467_get_scale(struct ad9467_state *st, int *val, int *val2)
+ 	unsigned int i, vref_val;
+ 	int ret;
+ 
+-	ret = ad9467_spi_read(st->spi, AN877_ADC_REG_VREF);
++	ret = ad9467_spi_read(st, AN877_ADC_REG_VREF);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -330,31 +328,31 @@ static int ad9467_set_scale(struct ad9467_state *st, int val, int val2)
+ 			continue;
+ 
+ 		guard(mutex)(&st->lock);
+-		ret = ad9467_spi_write(st->spi, AN877_ADC_REG_VREF,
++		ret = ad9467_spi_write(st, AN877_ADC_REG_VREF,
+ 				       info->scale_table[i][1]);
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		return ad9467_spi_write(st->spi, AN877_ADC_REG_TRANSFER,
++		return ad9467_spi_write(st, AN877_ADC_REG_TRANSFER,
+ 					AN877_ADC_TRANSFER_SYNC);
+ 	}
+ 
+ 	return -EINVAL;
+ }
+ 
+-static int ad9467_outputmode_set(struct spi_device *spi, unsigned int mode)
++static int ad9467_outputmode_set(struct ad9467_state *st, unsigned int mode)
+ {
+ 	int ret;
+ 
+-	ret = ad9467_spi_write(spi, AN877_ADC_REG_OUTPUT_MODE, mode);
++	ret = ad9467_spi_write(st, AN877_ADC_REG_OUTPUT_MODE, mode);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return ad9467_spi_write(spi, AN877_ADC_REG_TRANSFER,
++	return ad9467_spi_write(st, AN877_ADC_REG_TRANSFER,
+ 				AN877_ADC_TRANSFER_SYNC);
+ }
+ 
+-static int ad9647_calibrate_prepare(const struct ad9467_state *st)
++static int ad9647_calibrate_prepare(struct ad9467_state *st)
+ {
+ 	struct iio_backend_data_fmt data = {
+ 		.enable = false,
+@@ -362,17 +360,17 @@ static int ad9647_calibrate_prepare(const struct ad9467_state *st)
+ 	unsigned int c;
+ 	int ret;
+ 
+-	ret = ad9467_spi_write(st->spi, AN877_ADC_REG_TEST_IO,
++	ret = ad9467_spi_write(st, AN877_ADC_REG_TEST_IO,
+ 			       AN877_ADC_TESTMODE_PN9_SEQ);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = ad9467_spi_write(st->spi, AN877_ADC_REG_TRANSFER,
++	ret = ad9467_spi_write(st, AN877_ADC_REG_TRANSFER,
+ 			       AN877_ADC_TRANSFER_SYNC);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = ad9467_outputmode_set(st->spi, st->info->default_output_mode);
++	ret = ad9467_outputmode_set(st, st->info->default_output_mode);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -390,7 +388,7 @@ static int ad9647_calibrate_prepare(const struct ad9467_state *st)
+ 	return iio_backend_chan_enable(st->back, 0);
+ }
+ 
+-static int ad9647_calibrate_polarity_set(const struct ad9467_state *st,
++static int ad9647_calibrate_polarity_set(struct ad9467_state *st,
+ 					 bool invert)
+ {
+ 	enum iio_backend_sample_trigger trigger;
+@@ -401,7 +399,7 @@ static int ad9647_calibrate_polarity_set(const struct ad9467_state *st,
+ 		if (invert)
+ 			phase |= AN877_ADC_INVERT_DCO_CLK;
+ 
+-		return ad9467_spi_write(st->spi, AN877_ADC_REG_OUTPUT_PHASE,
++		return ad9467_spi_write(st, AN877_ADC_REG_OUTPUT_PHASE,
+ 					phase);
+ 	}
+ 
+@@ -437,19 +435,18 @@ static unsigned int ad9467_find_optimal_point(const unsigned long *calib_map,
+ 	return cnt;
+ }
+ 
+-static int ad9467_calibrate_apply(const struct ad9467_state *st,
+-				  unsigned int val)
++static int ad9467_calibrate_apply(struct ad9467_state *st, unsigned int val)
+ {
+ 	unsigned int lane;
+ 	int ret;
+ 
+ 	if (st->info->has_dco) {
+-		ret = ad9467_spi_write(st->spi, AN877_ADC_REG_OUTPUT_DELAY,
++		ret = ad9467_spi_write(st, AN877_ADC_REG_OUTPUT_DELAY,
+ 				       val);
+ 		if (ret)
+ 			return ret;
+ 
+-		return ad9467_spi_write(st->spi, AN877_ADC_REG_TRANSFER,
++		return ad9467_spi_write(st, AN877_ADC_REG_TRANSFER,
+ 					AN877_ADC_TRANSFER_SYNC);
+ 	}
+ 
+@@ -462,7 +459,7 @@ static int ad9467_calibrate_apply(const struct ad9467_state *st,
+ 	return 0;
+ }
+ 
+-static int ad9647_calibrate_stop(const struct ad9467_state *st)
++static int ad9647_calibrate_stop(struct ad9467_state *st)
+ {
+ 	struct iio_backend_data_fmt data = {
+ 		.sign_extend = true,
+@@ -487,16 +484,16 @@ static int ad9647_calibrate_stop(const struct ad9467_state *st)
+ 	}
+ 
+ 	mode = st->info->default_output_mode | AN877_ADC_OUTPUT_MODE_TWOS_COMPLEMENT;
+-	ret = ad9467_outputmode_set(st->spi, mode);
++	ret = ad9467_outputmode_set(st, mode);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = ad9467_spi_write(st->spi, AN877_ADC_REG_TEST_IO,
++	ret = ad9467_spi_write(st, AN877_ADC_REG_TEST_IO,
+ 			       AN877_ADC_TESTMODE_OFF);
+ 	if (ret)
+ 		return ret;
+ 
+-	return ad9467_spi_write(st->spi, AN877_ADC_REG_TRANSFER,
++	return ad9467_spi_write(st, AN877_ADC_REG_TRANSFER,
+ 			       AN877_ADC_TRANSFER_SYNC);
+ }
+ 
+@@ -846,7 +843,7 @@ static int ad9467_probe(struct spi_device *spi)
+ 	if (ret)
+ 		return ret;
+ 
+-	id = ad9467_spi_read(spi, AN877_ADC_REG_CHIP_ID);
++	id = ad9467_spi_read(st, AN877_ADC_REG_CHIP_ID);
+ 	if (id != st->info->id) {
+ 		dev_err(&spi->dev, "Mismatch CHIP_ID, got 0x%X, expected 0x%X\n",
+ 			id, st->info->id);
+diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
+index 0cf0d81358fd5..bf51d619ebbc9 100644
+--- a/drivers/iio/adc/adi-axi-adc.c
++++ b/drivers/iio/adc/adi-axi-adc.c
+@@ -85,6 +85,7 @@ static int axi_adc_enable(struct iio_backend *back)
+ 	struct adi_axi_adc_state *st = iio_backend_get_priv(back);
+ 	int ret;
+ 
++	guard(mutex)(&st->lock);
+ 	ret = regmap_set_bits(st->regmap, ADI_AXI_REG_RSTN,
+ 			      ADI_AXI_REG_RSTN_MMCM_RSTN);
+ 	if (ret)
+@@ -99,6 +100,7 @@ static void axi_adc_disable(struct iio_backend *back)
+ {
+ 	struct adi_axi_adc_state *st = iio_backend_get_priv(back);
+ 
++	guard(mutex)(&st->lock);
+ 	regmap_write(st->regmap, ADI_AXI_REG_RSTN, 0);
+ }
+ 
+diff --git a/drivers/iio/frequency/adrf6780.c b/drivers/iio/frequency/adrf6780.c
+index b4defb82f37e3..3f46032c92752 100644
+--- a/drivers/iio/frequency/adrf6780.c
++++ b/drivers/iio/frequency/adrf6780.c
+@@ -9,7 +9,6 @@
+ #include <linux/bits.h>
+ #include <linux/clk.h>
+ #include <linux/clkdev.h>
+-#include <linux/clk-provider.h>
+ #include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/iio/iio.h>
+diff --git a/drivers/iio/industrialio-gts-helper.c b/drivers/iio/industrialio-gts-helper.c
+index b51eb6cb766f3..59d7615c0f565 100644
+--- a/drivers/iio/industrialio-gts-helper.c
++++ b/drivers/iio/industrialio-gts-helper.c
+@@ -362,17 +362,20 @@ static int iio_gts_build_avail_time_table(struct iio_gts *gts)
+ 	for (i = gts->num_itime - 1; i >= 0; i--) {
+ 		int new = gts->itime_table[i].time_us;
+ 
+-		if (times[idx] < new) {
++		if (idx == 0 || times[idx - 1] < new) {
+ 			times[idx++] = new;
+ 			continue;
+ 		}
+ 
+-		for (j = 0; j <= idx; j++) {
++		for (j = 0; j < idx; j++) {
++			if (times[j] == new)
++				break;
+ 			if (times[j] > new) {
+ 				memmove(&times[j + 1], &times[j],
+ 					(idx - j) * sizeof(int));
+ 				times[j] = new;
+ 				idx++;
++				break;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
+index c02a96d3572a8..6791df64a5fe0 100644
+--- a/drivers/infiniband/core/cache.c
++++ b/drivers/infiniband/core/cache.c
+@@ -794,7 +794,6 @@ static struct ib_gid_table *alloc_gid_table(int sz)
+ static void release_gid_table(struct ib_device *device,
+ 			      struct ib_gid_table *table)
+ {
+-	bool leak = false;
+ 	int i;
+ 
+ 	if (!table)
+@@ -803,15 +802,12 @@ static void release_gid_table(struct ib_device *device,
+ 	for (i = 0; i < table->sz; i++) {
+ 		if (is_gid_entry_free(table->data_vec[i]))
+ 			continue;
+-		if (kref_read(&table->data_vec[i]->kref) > 1) {
+-			dev_err(&device->dev,
+-				"GID entry ref leak for index %d ref=%u\n", i,
+-				kref_read(&table->data_vec[i]->kref));
+-			leak = true;
+-		}
++
++		WARN_ONCE(true,
++			  "GID entry ref leak for dev %s index %d ref=%u\n",
++			  dev_name(&device->dev), i,
++			  kref_read(&table->data_vec[i]->kref));
+ 	}
+-	if (leak)
+-		return;
+ 
+ 	mutex_destroy(&table->lock);
+ 	kfree(table->data_vec);
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index 55aa7aa32d4ab..46d1c2c32d719 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -2146,6 +2146,9 @@ int ib_device_set_netdev(struct ib_device *ib_dev, struct net_device *ndev,
+ 	unsigned long flags;
+ 	int ret;
+ 
++	if (!rdma_is_port_valid(ib_dev, port))
++		return -EINVAL;
++
+ 	/*
+ 	 * Drivers wish to call this before ib_register_driver, so we have to
+ 	 * setup the port data early.
+@@ -2154,9 +2157,6 @@ int ib_device_set_netdev(struct ib_device *ib_dev, struct net_device *ndev,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (!rdma_is_port_valid(ib_dev, port))
+-		return -EINVAL;
+-
+ 	pdata = &ib_dev->port_data[port];
+ 	spin_lock_irqsave(&pdata->netdev_lock, flags);
+ 	old_ndev = rcu_dereference_protected(
+@@ -2166,16 +2166,12 @@ int ib_device_set_netdev(struct ib_device *ib_dev, struct net_device *ndev,
+ 		return 0;
+ 	}
+ 
+-	if (old_ndev)
+-		netdev_tracker_free(ndev, &pdata->netdev_tracker);
+-	if (ndev)
+-		netdev_hold(ndev, &pdata->netdev_tracker, GFP_ATOMIC);
+ 	rcu_assign_pointer(pdata->netdev, ndev);
++	netdev_put(old_ndev, &pdata->netdev_tracker);
++	netdev_hold(ndev, &pdata->netdev_tracker, GFP_ATOMIC);
+ 	spin_unlock_irqrestore(&pdata->netdev_lock, flags);
+ 
+ 	add_ndev_hash(pdata);
+-	__dev_put(old_ndev);
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL(ib_device_set_netdev);
+diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c
+index 0301fcad4b48b..bf3265e678651 100644
+--- a/drivers/infiniband/core/iwcm.c
++++ b/drivers/infiniband/core/iwcm.c
+@@ -368,8 +368,10 @@ EXPORT_SYMBOL(iw_cm_disconnect);
+  *
+  * Clean up all resources associated with the connection and release
+  * the initial reference taken by iw_create_cm_id.
++ *
++ * Returns true if and only if the last cm_id_priv reference has been dropped.
+  */
+-static void destroy_cm_id(struct iw_cm_id *cm_id)
++static bool destroy_cm_id(struct iw_cm_id *cm_id)
+ {
+ 	struct iwcm_id_private *cm_id_priv;
+ 	struct ib_qp *qp;
+@@ -439,7 +441,7 @@ static void destroy_cm_id(struct iw_cm_id *cm_id)
+ 		iwpm_remove_mapping(&cm_id->local_addr, RDMA_NL_IWCM);
+ 	}
+ 
+-	(void)iwcm_deref_id(cm_id_priv);
++	return iwcm_deref_id(cm_id_priv);
+ }
+ 
+ /*
+@@ -450,7 +452,8 @@ static void destroy_cm_id(struct iw_cm_id *cm_id)
+  */
+ void iw_destroy_cm_id(struct iw_cm_id *cm_id)
+ {
+-	destroy_cm_id(cm_id);
++	if (!destroy_cm_id(cm_id))
++		flush_workqueue(iwcm_wq);
+ }
+ EXPORT_SYMBOL(iw_destroy_cm_id);
+ 
+@@ -1034,7 +1037,7 @@ static void cm_work_handler(struct work_struct *_work)
+ 		if (!test_bit(IWCM_F_DROP_EVENTS, &cm_id_priv->flags)) {
+ 			ret = process_event(cm_id_priv, &levent);
+ 			if (ret)
+-				destroy_cm_id(&cm_id_priv->id);
++				WARN_ON_ONCE(destroy_cm_id(&cm_id_priv->id));
+ 		} else
+ 			pr_debug("dropping event %d\n", levent.event);
+ 		if (iwcm_deref_id(cm_id_priv))
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index ce9c5bae83bf1..582e83a36ccbe 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -2479,7 +2479,7 @@ static int bnxt_re_build_send_wqe(struct bnxt_re_qp *qp,
+ 		break;
+ 	case IB_WR_SEND_WITH_IMM:
+ 		wqe->type = BNXT_QPLIB_SWQE_TYPE_SEND_WITH_IMM;
+-		wqe->send.imm_data = wr->ex.imm_data;
++		wqe->send.imm_data = be32_to_cpu(wr->ex.imm_data);
+ 		break;
+ 	case IB_WR_SEND_WITH_INV:
+ 		wqe->type = BNXT_QPLIB_SWQE_TYPE_SEND_WITH_INV;
+@@ -2509,7 +2509,7 @@ static int bnxt_re_build_rdma_wqe(const struct ib_send_wr *wr,
+ 		break;
+ 	case IB_WR_RDMA_WRITE_WITH_IMM:
+ 		wqe->type = BNXT_QPLIB_SWQE_TYPE_RDMA_WRITE_WITH_IMM;
+-		wqe->rdma.imm_data = wr->ex.imm_data;
++		wqe->rdma.imm_data = be32_to_cpu(wr->ex.imm_data);
+ 		break;
+ 	case IB_WR_RDMA_READ:
+ 		wqe->type = BNXT_QPLIB_SWQE_TYPE_RDMA_READ;
+@@ -3581,7 +3581,7 @@ static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *gsi_sqp,
+ 	wc->byte_len = orig_cqe->length;
+ 	wc->qp = &gsi_qp->ib_qp;
+ 
+-	wc->ex.imm_data = orig_cqe->immdata;
++	wc->ex.imm_data = cpu_to_be32(le32_to_cpu(orig_cqe->immdata));
+ 	wc->src_qp = orig_cqe->src_qp;
+ 	memcpy(wc->smac, orig_cqe->smac, ETH_ALEN);
+ 	if (bnxt_re_is_vlan_pkt(orig_cqe, &vlan_id, &sl)) {
+@@ -3726,7 +3726,7 @@ int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
+ 				 (unsigned long)(cqe->qp_handle),
+ 				 struct bnxt_re_qp, qplib_qp);
+ 			wc->qp = &qp->ib_qp;
+-			wc->ex.imm_data = cqe->immdata;
++			wc->ex.imm_data = cpu_to_be32(le32_to_cpu(cqe->immdata));
+ 			wc->src_qp = cqe->src_qp;
+ 			memcpy(wc->smac, cqe->smac, ETH_ALEN);
+ 			wc->port_num = 1;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index 7fd4506b3584f..244da20d1181f 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -164,7 +164,7 @@ struct bnxt_qplib_swqe {
+ 		/* Send, with imm, inval key */
+ 		struct {
+ 			union {
+-				__be32	imm_data;
++				u32	imm_data;
+ 				u32	inv_key;
+ 			};
+ 			u32		q_key;
+@@ -182,7 +182,7 @@ struct bnxt_qplib_swqe {
+ 		/* RDMA write, with imm, read */
+ 		struct {
+ 			union {
+-				__be32	imm_data;
++				u32	imm_data;
+ 				u32	inv_key;
+ 			};
+ 			u64		remote_va;
+@@ -389,7 +389,7 @@ struct bnxt_qplib_cqe {
+ 	u16				cfa_meta;
+ 	u64				wr_id;
+ 	union {
+-		__be32			immdata;
++		__le32			immdata;
+ 		u32			invrkey;
+ 	};
+ 	u64				qp_handle;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index ff0b3f68ee3a4..7d5931872f8a7 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -83,6 +83,7 @@
+ #define MR_TYPE_DMA				0x03
+ 
+ #define HNS_ROCE_FRMR_MAX_PA			512
++#define HNS_ROCE_FRMR_ALIGN_SIZE		128
+ 
+ #define PKEY_ID					0xffff
+ #define NODE_DESC_SIZE				64
+@@ -91,6 +92,8 @@
+ /* Configure to HW for PAGE_SIZE larger than 4KB */
+ #define PG_SHIFT_OFFSET				(PAGE_SHIFT - 12)
+ 
++#define ATOMIC_WR_LEN				8
++
+ #define HNS_ROCE_IDX_QUE_ENTRY_SZ		4
+ #define SRQ_DB_REG				0x230
+ 
+@@ -187,6 +190,9 @@ enum {
+ #define HNS_HW_PAGE_SHIFT			12
+ #define HNS_HW_PAGE_SIZE			(1 << HNS_HW_PAGE_SHIFT)
+ 
++#define HNS_HW_MAX_PAGE_SHIFT			27
++#define HNS_HW_MAX_PAGE_SIZE			(1 << HNS_HW_MAX_PAGE_SHIFT)
++
+ struct hns_roce_uar {
+ 	u64		pfn;
+ 	unsigned long	index;
+@@ -715,6 +721,7 @@ struct hns_roce_eq {
+ 	int				shift;
+ 	int				event_type;
+ 	int				sub_type;
++	struct work_struct		work;
+ };
+ 
+ struct hns_roce_eq_table {
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 4287818a737f9..621b057fb9daa 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -36,6 +36,7 @@
+ #include <linux/iopoll.h>
+ #include <linux/kernel.h>
+ #include <linux/types.h>
++#include <linux/workqueue.h>
+ #include <net/addrconf.h>
+ #include <rdma/ib_addr.h>
+ #include <rdma/ib_cache.h>
+@@ -591,11 +592,16 @@ static inline int set_rc_wqe(struct hns_roce_qp *qp,
+ 		     (wr->send_flags & IB_SEND_SIGNALED) ? 1 : 0);
+ 
+ 	if (wr->opcode == IB_WR_ATOMIC_CMP_AND_SWP ||
+-	    wr->opcode == IB_WR_ATOMIC_FETCH_AND_ADD)
++	    wr->opcode == IB_WR_ATOMIC_FETCH_AND_ADD) {
++		if (msg_len != ATOMIC_WR_LEN)
++			return -EINVAL;
+ 		set_atomic_seg(wr, rc_sq_wqe, valid_num_sge);
+-	else if (wr->opcode != IB_WR_REG_MR)
++	} else if (wr->opcode != IB_WR_REG_MR) {
+ 		ret = set_rwqe_data_seg(&qp->ibqp, wr, rc_sq_wqe,
+ 					&curr_idx, valid_num_sge);
++		if (ret)
++			return ret;
++	}
+ 
+ 	/*
+ 	 * The pipeline can sequentially post all valid WQEs into WQ buffer,
+@@ -1269,12 +1275,38 @@ static int hns_roce_cmd_err_convert_errno(u16 desc_ret)
+ 	return -EIO;
+ }
+ 
++static u32 hns_roce_cmdq_tx_timeout(u16 opcode, u32 tx_timeout)
++{
++	static const struct hns_roce_cmdq_tx_timeout_map cmdq_tx_timeout[] = {
++		{HNS_ROCE_OPC_POST_MB, HNS_ROCE_OPC_POST_MB_TIMEOUT},
++	};
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(cmdq_tx_timeout); i++)
++		if (cmdq_tx_timeout[i].opcode == opcode)
++			return cmdq_tx_timeout[i].tx_timeout;
++
++	return tx_timeout;
++}
++
++static void hns_roce_wait_csq_done(struct hns_roce_dev *hr_dev, u16 opcode)
++{
++	struct hns_roce_v2_priv *priv = hr_dev->priv;
++	u32 tx_timeout = hns_roce_cmdq_tx_timeout(opcode, priv->cmq.tx_timeout);
++	u32 timeout = 0;
++
++	do {
++		if (hns_roce_cmq_csq_done(hr_dev))
++			break;
++		udelay(1);
++	} while (++timeout < tx_timeout);
++}
++
+ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ 			       struct hns_roce_cmq_desc *desc, int num)
+ {
+ 	struct hns_roce_v2_priv *priv = hr_dev->priv;
+ 	struct hns_roce_v2_cmq_ring *csq = &priv->cmq.csq;
+-	u32 timeout = 0;
+ 	u16 desc_ret;
+ 	u32 tail;
+ 	int ret;
+@@ -1295,12 +1327,7 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ 
+ 	atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_CMDS_CNT]);
+ 
+-	do {
+-		if (hns_roce_cmq_csq_done(hr_dev))
+-			break;
+-		udelay(1);
+-	} while (++timeout < priv->cmq.tx_timeout);
+-
++	hns_roce_wait_csq_done(hr_dev, le16_to_cpu(desc->opcode));
+ 	if (hns_roce_cmq_csq_done(hr_dev)) {
+ 		ret = 0;
+ 		for (i = 0; i < num; i++) {
+@@ -2457,14 +2484,16 @@ static int set_llm_cfg_to_hw(struct hns_roce_dev *hr_dev,
+ static struct hns_roce_link_table *
+ alloc_link_table_buf(struct hns_roce_dev *hr_dev)
+ {
++	u16 total_sl = hr_dev->caps.sl_num * hr_dev->func_num;
+ 	struct hns_roce_v2_priv *priv = hr_dev->priv;
+ 	struct hns_roce_link_table *link_tbl;
+ 	u32 pg_shift, size, min_size;
+ 
+ 	link_tbl = &priv->ext_llm;
+ 	pg_shift = hr_dev->caps.llm_buf_pg_sz + PAGE_SHIFT;
+-	size = hr_dev->caps.num_qps * HNS_ROCE_V2_EXT_LLM_ENTRY_SZ;
+-	min_size = HNS_ROCE_EXT_LLM_MIN_PAGES(hr_dev->caps.sl_num) << pg_shift;
++	size = hr_dev->caps.num_qps * hr_dev->func_num *
++	       HNS_ROCE_V2_EXT_LLM_ENTRY_SZ;
++	min_size = HNS_ROCE_EXT_LLM_MIN_PAGES(total_sl) << pg_shift;
+ 
+ 	/* Alloc data table */
+ 	size = max(size, min_size);
+@@ -6135,33 +6164,11 @@ static struct hns_roce_ceqe *next_ceqe_sw_v2(struct hns_roce_eq *eq)
+ 		!!(eq->cons_index & eq->entries)) ? ceqe : NULL;
+ }
+ 
+-static irqreturn_t hns_roce_v2_ceq_int(struct hns_roce_dev *hr_dev,
+-				       struct hns_roce_eq *eq)
++static irqreturn_t hns_roce_v2_ceq_int(struct hns_roce_eq *eq)
+ {
+-	struct hns_roce_ceqe *ceqe = next_ceqe_sw_v2(eq);
+-	irqreturn_t ceqe_found = IRQ_NONE;
+-	u32 cqn;
+-
+-	while (ceqe) {
+-		/* Make sure we read CEQ entry after we have checked the
+-		 * ownership bit
+-		 */
+-		dma_rmb();
+-
+-		cqn = hr_reg_read(ceqe, CEQE_CQN);
+-
+-		hns_roce_cq_completion(hr_dev, cqn);
+-
+-		++eq->cons_index;
+-		ceqe_found = IRQ_HANDLED;
+-		atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_CEQE_CNT]);
++	queue_work(system_bh_wq, &eq->work);
+ 
+-		ceqe = next_ceqe_sw_v2(eq);
+-	}
+-
+-	update_eq_db(eq);
+-
+-	return IRQ_RETVAL(ceqe_found);
++	return IRQ_HANDLED;
+ }
+ 
+ static irqreturn_t hns_roce_v2_msix_interrupt_eq(int irq, void *eq_ptr)
+@@ -6172,7 +6179,7 @@ static irqreturn_t hns_roce_v2_msix_interrupt_eq(int irq, void *eq_ptr)
+ 
+ 	if (eq->type_flag == HNS_ROCE_CEQ)
+ 		/* Completion event interrupt */
+-		int_work = hns_roce_v2_ceq_int(hr_dev, eq);
++		int_work = hns_roce_v2_ceq_int(eq);
+ 	else
+ 		/* Asynchronous event interrupt */
+ 		int_work = hns_roce_v2_aeq_int(hr_dev, eq);
+@@ -6384,9 +6391,16 @@ static void hns_roce_v2_int_mask_enable(struct hns_roce_dev *hr_dev,
+ 	roce_write(hr_dev, ROCEE_VF_ABN_INT_CFG_REG, enable_flag);
+ }
+ 
+-static void hns_roce_v2_destroy_eqc(struct hns_roce_dev *hr_dev, u32 eqn)
++static void free_eq_buf(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq)
++{
++	hns_roce_mtr_destroy(hr_dev, &eq->mtr);
++}
++
++static void hns_roce_v2_destroy_eqc(struct hns_roce_dev *hr_dev,
++				    struct hns_roce_eq *eq)
+ {
+ 	struct device *dev = hr_dev->dev;
++	int eqn = eq->eqn;
+ 	int ret;
+ 	u8 cmd;
+ 
+@@ -6397,12 +6411,9 @@ static void hns_roce_v2_destroy_eqc(struct hns_roce_dev *hr_dev, u32 eqn)
+ 
+ 	ret = hns_roce_destroy_hw_ctx(hr_dev, cmd, eqn & HNS_ROCE_V2_EQN_M);
+ 	if (ret)
+-		dev_err(dev, "[mailbox cmd] destroy eqc(%u) failed.\n", eqn);
+-}
++		dev_err(dev, "[mailbox cmd] destroy eqc(%d) failed.\n", eqn);
+ 
+-static void free_eq_buf(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq)
+-{
+-	hns_roce_mtr_destroy(hr_dev, &eq->mtr);
++	free_eq_buf(hr_dev, eq);
+ }
+ 
+ static void init_eq_config(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq)
+@@ -6540,6 +6551,34 @@ static int hns_roce_v2_create_eq(struct hns_roce_dev *hr_dev,
+ 	return ret;
+ }
+ 
++static void hns_roce_ceq_work(struct work_struct *work)
++{
++	struct hns_roce_eq *eq = from_work(eq, work, work);
++	struct hns_roce_ceqe *ceqe = next_ceqe_sw_v2(eq);
++	struct hns_roce_dev *hr_dev = eq->hr_dev;
++	int ceqe_num = 0;
++	u32 cqn;
++
++	while (ceqe && ceqe_num < hr_dev->caps.ceqe_depth) {
++		/* Make sure we read CEQ entry after we have checked the
++		 * ownership bit
++		 */
++		dma_rmb();
++
++		cqn = hr_reg_read(ceqe, CEQE_CQN);
++
++		hns_roce_cq_completion(hr_dev, cqn);
++
++		++eq->cons_index;
++		++ceqe_num;
++		atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_CEQE_CNT]);
++
++		ceqe = next_ceqe_sw_v2(eq);
++	}
++
++	update_eq_db(eq);
++}
++
+ static int __hns_roce_request_irq(struct hns_roce_dev *hr_dev, int irq_num,
+ 				  int comp_num, int aeq_num, int other_num)
+ {
+@@ -6571,21 +6610,24 @@ static int __hns_roce_request_irq(struct hns_roce_dev *hr_dev, int irq_num,
+ 			 j - other_num - aeq_num);
+ 
+ 	for (j = 0; j < irq_num; j++) {
+-		if (j < other_num)
++		if (j < other_num) {
+ 			ret = request_irq(hr_dev->irq[j],
+ 					  hns_roce_v2_msix_interrupt_abn,
+ 					  0, hr_dev->irq_names[j], hr_dev);
+-
+-		else if (j < (other_num + comp_num))
++		} else if (j < (other_num + comp_num)) {
++			INIT_WORK(&eq_table->eq[j - other_num].work,
++				  hns_roce_ceq_work);
+ 			ret = request_irq(eq_table->eq[j - other_num].irq,
+ 					  hns_roce_v2_msix_interrupt_eq,
+ 					  0, hr_dev->irq_names[j + aeq_num],
+ 					  &eq_table->eq[j - other_num]);
+-		else
++		} else {
+ 			ret = request_irq(eq_table->eq[j - other_num].irq,
+ 					  hns_roce_v2_msix_interrupt_eq,
+ 					  0, hr_dev->irq_names[j - comp_num],
+ 					  &eq_table->eq[j - other_num]);
++		}
++
+ 		if (ret) {
+ 			dev_err(hr_dev->dev, "request irq error!\n");
+ 			goto err_request_failed;
+@@ -6595,12 +6637,16 @@ static int __hns_roce_request_irq(struct hns_roce_dev *hr_dev, int irq_num,
+ 	return 0;
+ 
+ err_request_failed:
+-	for (j -= 1; j >= 0; j--)
+-		if (j < other_num)
++	for (j -= 1; j >= 0; j--) {
++		if (j < other_num) {
+ 			free_irq(hr_dev->irq[j], hr_dev);
+-		else
+-			free_irq(eq_table->eq[j - other_num].irq,
+-				 &eq_table->eq[j - other_num]);
++			continue;
++		}
++		free_irq(eq_table->eq[j - other_num].irq,
++			 &eq_table->eq[j - other_num]);
++		if (j < other_num + comp_num)
++			cancel_work_sync(&eq_table->eq[j - other_num].work);
++	}
+ 
+ err_kzalloc_failed:
+ 	for (i -= 1; i >= 0; i--)
+@@ -6621,8 +6667,11 @@ static void __hns_roce_free_irq(struct hns_roce_dev *hr_dev)
+ 	for (i = 0; i < hr_dev->caps.num_other_vectors; i++)
+ 		free_irq(hr_dev->irq[i], hr_dev);
+ 
+-	for (i = 0; i < eq_num; i++)
++	for (i = 0; i < eq_num; i++) {
+ 		free_irq(hr_dev->eq_table.eq[i].irq, &hr_dev->eq_table.eq[i]);
++		if (i < hr_dev->caps.num_comp_vectors)
++			cancel_work_sync(&hr_dev->eq_table.eq[i].work);
++	}
+ 
+ 	for (i = 0; i < irq_num; i++)
+ 		kfree(hr_dev->irq_names[i]);
+@@ -6711,7 +6760,7 @@ static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev)
+ 
+ err_create_eq_fail:
+ 	for (i -= 1; i >= 0; i--)
+-		free_eq_buf(hr_dev, &eq_table->eq[i]);
++		hns_roce_v2_destroy_eqc(hr_dev, &eq_table->eq[i]);
+ 	kfree(eq_table->eq);
+ 
+ 	return ret;
+@@ -6731,11 +6780,8 @@ static void hns_roce_v2_cleanup_eq_table(struct hns_roce_dev *hr_dev)
+ 	__hns_roce_free_irq(hr_dev);
+ 	destroy_workqueue(hr_dev->irq_workq);
+ 
+-	for (i = 0; i < eq_num; i++) {
+-		hns_roce_v2_destroy_eqc(hr_dev, i);
+-
+-		free_eq_buf(hr_dev, &eq_table->eq[i]);
+-	}
++	for (i = 0; i < eq_num; i++)
++		hns_roce_v2_destroy_eqc(hr_dev, &eq_table->eq[i]);
+ 
+ 	kfree(eq_table->eq);
+ }
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index def1d15a03c7e..c65f68a14a260 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -224,6 +224,12 @@ enum hns_roce_opcode_type {
+ 	HNS_SWITCH_PARAMETER_CFG			= 0x1033,
+ };
+ 
++#define HNS_ROCE_OPC_POST_MB_TIMEOUT 35000
++struct hns_roce_cmdq_tx_timeout_map {
++	u16 opcode;
++	u32 tx_timeout;
++};
++
+ enum {
+ 	TYPE_CRQ,
+ 	TYPE_CSQ,
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index 1a61dceb33197..846da8c78b8b7 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -443,6 +443,11 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ 	struct hns_roce_mtr *mtr = &mr->pbl_mtr;
+ 	int ret, sg_num = 0;
+ 
++	if (!IS_ALIGNED(*sg_offset, HNS_ROCE_FRMR_ALIGN_SIZE) ||
++	    ibmr->page_size < HNS_HW_PAGE_SIZE ||
++	    ibmr->page_size > HNS_HW_MAX_PAGE_SIZE)
++		return sg_num;
++
+ 	mr->npages = 0;
+ 	mr->page_list = kvcalloc(mr->pbl_mtr.hem_cfg.buf_pg_count,
+ 				 sizeof(dma_addr_t), GFP_KERNEL);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index db34665d1dfbf..1de384ce4d0e1 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -532,13 +532,15 @@ static unsigned int get_sge_num_from_max_inl_data(bool is_ud_or_gsi,
+ {
+ 	unsigned int inline_sge;
+ 
+-	inline_sge = roundup_pow_of_two(max_inline_data) / HNS_ROCE_SGE_SIZE;
++	if (!max_inline_data)
++		return 0;
+ 
+ 	/*
+ 	 * if max_inline_data less than
+ 	 * HNS_ROCE_SGE_IN_WQE * HNS_ROCE_SGE_SIZE,
+ 	 * In addition to ud's mode, no need to extend sge.
+ 	 */
++	inline_sge = roundup_pow_of_two(max_inline_data) / HNS_ROCE_SGE_SIZE;
+ 	if (!is_ud_or_gsi && inline_sge <= HNS_ROCE_SGE_IN_WQE)
+ 		inline_sge = 0;
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
+index f1997abc97cac..c9b8233f4b057 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
+@@ -297,7 +297,7 @@ static int set_srq_basic_param(struct hns_roce_srq *srq,
+ 
+ 	max_sge = proc_srq_sge(hr_dev, srq, !!udata);
+ 	if (attr->max_wr > hr_dev->caps.max_srq_wrs ||
+-	    attr->max_sge > max_sge) {
++	    attr->max_sge > max_sge || !attr->max_sge) {
+ 		ibdev_err(&hr_dev->ib_dev,
+ 			  "invalid SRQ attr, depth = %u, sge = %u.\n",
+ 			  attr->max_wr, attr->max_sge);
+diff --git a/drivers/infiniband/hw/mana/device.c b/drivers/infiniband/hw/mana/device.c
+index 7e09ceb3da537..7bb7e06392001 100644
+--- a/drivers/infiniband/hw/mana/device.c
++++ b/drivers/infiniband/hw/mana/device.c
+@@ -5,6 +5,7 @@
+ 
+ #include "mana_ib.h"
+ #include <net/mana/mana_auxiliary.h>
++#include <net/addrconf.h>
+ 
+ MODULE_DESCRIPTION("Microsoft Azure Network Adapter IB driver");
+ MODULE_LICENSE("GPL");
+@@ -55,7 +56,7 @@ static int mana_ib_probe(struct auxiliary_device *adev,
+ {
+ 	struct mana_adev *madev = container_of(adev, struct mana_adev, adev);
+ 	struct gdma_dev *mdev = madev->mdev;
+-	struct net_device *upper_ndev;
++	struct net_device *ndev;
+ 	struct mana_context *mc;
+ 	struct mana_ib_dev *dev;
+ 	u8 mac_addr[ETH_ALEN];
+@@ -83,16 +84,17 @@ static int mana_ib_probe(struct auxiliary_device *adev,
+ 	dev->ib_dev.num_comp_vectors = mdev->gdma_context->max_num_queues;
+ 	dev->ib_dev.dev.parent = mdev->gdma_context->dev;
+ 
+-	rcu_read_lock(); /* required to get upper dev */
+-	upper_ndev = netdev_master_upper_dev_get_rcu(mc->ports[0]);
+-	if (!upper_ndev) {
++	rcu_read_lock(); /* required to get primary netdev */
++	ndev = mana_get_primary_netdev_rcu(mc, 0);
++	if (!ndev) {
+ 		rcu_read_unlock();
+ 		ret = -ENODEV;
+-		ibdev_err(&dev->ib_dev, "Failed to get master netdev");
++		ibdev_err(&dev->ib_dev, "Failed to get netdev for IB port 1");
+ 		goto free_ib_device;
+ 	}
+-	ether_addr_copy(mac_addr, upper_ndev->dev_addr);
+-	ret = ib_device_set_netdev(&dev->ib_dev, upper_ndev, 1);
++	ether_addr_copy(mac_addr, ndev->dev_addr);
++	addrconf_addr_eui48((u8 *)&dev->ib_dev.node_guid, ndev->dev_addr);
++	ret = ib_device_set_netdev(&dev->ib_dev, ndev, 1);
+ 	rcu_read_unlock();
+ 	if (ret) {
+ 		ibdev_err(&dev->ib_dev, "Failed to set ib netdev, ret %d", ret);
+diff --git a/drivers/infiniband/hw/mlx4/alias_GUID.c b/drivers/infiniband/hw/mlx4/alias_GUID.c
+index 111fa88a3be44..9a439569ffcf3 100644
+--- a/drivers/infiniband/hw/mlx4/alias_GUID.c
++++ b/drivers/infiniband/hw/mlx4/alias_GUID.c
+@@ -829,7 +829,7 @@ void mlx4_ib_destroy_alias_guid_service(struct mlx4_ib_dev *dev)
+ 
+ int mlx4_ib_init_alias_guid_service(struct mlx4_ib_dev *dev)
+ {
+-	char alias_wq_name[15];
++	char alias_wq_name[22];
+ 	int ret = 0;
+ 	int i, j;
+ 	union ib_gid gid;
+diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
+index a37cfac5e23f9..dc9cf45d2d320 100644
+--- a/drivers/infiniband/hw/mlx4/mad.c
++++ b/drivers/infiniband/hw/mlx4/mad.c
+@@ -2158,7 +2158,7 @@ static int mlx4_ib_alloc_demux_ctx(struct mlx4_ib_dev *dev,
+ 				       struct mlx4_ib_demux_ctx *ctx,
+ 				       int port)
+ {
+-	char name[12];
++	char name[21];
+ 	int ret = 0;
+ 	int i;
+ 
+diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+index f255a12e26a02..f9abdca3493aa 100644
+--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
++++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+@@ -115,6 +115,19 @@ unsigned long __mlx5_umem_find_best_quantized_pgoff(
+ 		__mlx5_bit_sz(typ, page_offset_fld), 0, scale,                 \
+ 		page_offset_quantized)
+ 
++static inline unsigned long
++mlx5_umem_dmabuf_find_best_pgsz(struct ib_umem_dmabuf *umem_dmabuf)
++{
++	/*
++	 * mkeys used for dmabuf are fixed at PAGE_SIZE because we must be able
++	 * to hold any sgl after a move operation. Ideally the mkc page size
++	 * could be changed at runtime to be optimal, but right now the driver
++	 * cannot do that.
++	 */
++	return ib_umem_find_best_pgsz(&umem_dmabuf->umem, PAGE_SIZE,
++				      umem_dmabuf->umem.iova);
++}
++
+ enum {
+ 	MLX5_IB_MMAP_OFFSET_START = 9,
+ 	MLX5_IB_MMAP_OFFSET_END = 255,
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 4a04cbc5b78a4..a524181f34df9 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -705,10 +705,8 @@ static int pagefault_dmabuf_mr(struct mlx5_ib_mr *mr, size_t bcnt,
+ 		return err;
+ 	}
+ 
+-	page_size = mlx5_umem_find_best_pgsz(&umem_dmabuf->umem, mkc,
+-					     log_page_size, 0,
+-					     umem_dmabuf->umem.iova);
+-	if (unlikely(page_size < PAGE_SIZE)) {
++	page_size = mlx5_umem_dmabuf_find_best_pgsz(umem_dmabuf);
++	if (!page_size) {
+ 		ib_umem_dmabuf_unmap_pages(umem_dmabuf);
+ 		err = -EINVAL;
+ 	} else {
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index cd14c4c2dff9d..479c07e6e4ed3 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -424,7 +424,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
+ 	int			paylen;
+ 	int			solicited;
+ 	u32			qp_num;
+-	int			ack_req;
++	int			ack_req = 0;
+ 
+ 	/* length from start of bth to end of icrc */
+ 	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
+@@ -445,8 +445,9 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
+ 	qp_num = (pkt->mask & RXE_DETH_MASK) ? ibwr->wr.ud.remote_qpn :
+ 					 qp->attr.dest_qp_num;
+ 
+-	ack_req = ((pkt->mask & RXE_END_MASK) ||
+-		(qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK));
++	if (qp_type(qp) != IB_QPT_UD && qp_type(qp) != IB_QPT_UC)
++		ack_req = ((pkt->mask & RXE_END_MASK) ||
++			   (qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK));
+ 	if (ack_req)
+ 		qp->req.noack_pkts = 0;
+ 
+diff --git a/drivers/input/keyboard/qt1050.c b/drivers/input/keyboard/qt1050.c
+index b51dfcd760386..056e9bc260262 100644
+--- a/drivers/input/keyboard/qt1050.c
++++ b/drivers/input/keyboard/qt1050.c
+@@ -226,7 +226,12 @@ static bool qt1050_identify(struct qt1050_priv *ts)
+ 	int err;
+ 
+ 	/* Read Chip ID */
+-	regmap_read(ts->regmap, QT1050_CHIP_ID, &val);
++	err = regmap_read(ts->regmap, QT1050_CHIP_ID, &val);
++	if (err) {
++		dev_err(&ts->client->dev, "Failed to read chip ID: %d\n", err);
++		return false;
++	}
++
+ 	if (val != QT1050_CHIP_ID_VER) {
+ 		dev_err(&ts->client->dev, "ID %d not supported\n", val);
+ 		return false;
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index c2aec5c360b3b..ce96513b34f64 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -1356,6 +1356,8 @@ static int elan_suspend(struct device *dev)
+ 	}
+ 
+ err:
++	if (ret)
++		enable_irq(client->irq);
+ 	mutex_unlock(&data->sysfs_mutex);
+ 	return ret;
+ }
+diff --git a/drivers/interconnect/qcom/qcm2290.c b/drivers/interconnect/qcom/qcm2290.c
+index ba4cc08684d63..ccbdc6202c07a 100644
+--- a/drivers/interconnect/qcom/qcm2290.c
++++ b/drivers/interconnect/qcom/qcm2290.c
+@@ -166,7 +166,7 @@ static struct qcom_icc_node mas_snoc_bimc = {
+ 	.qos.ap_owned = true,
+ 	.qos.qos_port = 6,
+ 	.qos.qos_mode = NOC_QOS_MODE_BYPASS,
+-	.mas_rpm_id = 164,
++	.mas_rpm_id = 3,
+ 	.slv_rpm_id = -1,
+ 	.num_links = ARRAY_SIZE(mas_snoc_bimc_links),
+ 	.links = mas_snoc_bimc_links,
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index ab415e107054c..f456bcf1890ba 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -2302,7 +2302,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_device *smmu,
+ 				       struct arm_smmu_domain *smmu_domain)
+ {
+ 	int ret;
+-	u32 asid;
++	u32 asid = 0;
+ 	struct arm_smmu_ctx_desc *cd = &smmu_domain->cd;
+ 
+ 	refcount_set(&cd->refs, 1);
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom-debug.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom-debug.c
+index 552199cbd9e25..482c40aa029b4 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom-debug.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom-debug.c
+@@ -488,7 +488,7 @@ irqreturn_t qcom_smmu_context_fault(int irq, void *dev)
+ 	return ret;
+ }
+ 
+-static int qcom_tbu_probe(struct platform_device *pdev)
++int qcom_tbu_probe(struct platform_device *pdev)
+ {
+ 	struct of_phandle_args args = { .args_count = 2 };
+ 	struct device_node *np = pdev->dev.of_node;
+@@ -530,18 +530,3 @@ static int qcom_tbu_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ }
+-
+-static const struct of_device_id qcom_tbu_of_match[] = {
+-	{ .compatible = "qcom,sc7280-tbu" },
+-	{ .compatible = "qcom,sdm845-tbu" },
+-	{ }
+-};
+-
+-static struct platform_driver qcom_tbu_driver = {
+-	.driver = {
+-		.name           = "qcom_tbu",
+-		.of_match_table = qcom_tbu_of_match,
+-	},
+-	.probe = qcom_tbu_probe,
+-};
+-builtin_platform_driver(qcom_tbu_driver);
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 25f034677f568..13f3e2efb2ccb 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -8,6 +8,8 @@
+ #include <linux/delay.h>
+ #include <linux/of_device.h>
+ #include <linux/firmware/qcom/qcom_scm.h>
++#include <linux/platform_device.h>
++#include <linux/pm_runtime.h>
+ 
+ #include "arm-smmu.h"
+ #include "arm-smmu-qcom.h"
+@@ -561,10 +563,47 @@ static struct acpi_platform_list qcom_acpi_platlist[] = {
+ };
+ #endif
+ 
++static int qcom_smmu_tbu_probe(struct platform_device *pdev)
++{
++	struct device *dev = &pdev->dev;
++	int ret;
++
++	if (IS_ENABLED(CONFIG_ARM_SMMU_QCOM_DEBUG)) {
++		ret = qcom_tbu_probe(pdev);
++		if (ret)
++			return ret;
++	}
++
++	if (dev->pm_domain) {
++		pm_runtime_set_active(dev);
++		pm_runtime_enable(dev);
++	}
++
++	return 0;
++}
++
++static const struct of_device_id qcom_smmu_tbu_of_match[] = {
++	{ .compatible = "qcom,sc7280-tbu" },
++	{ .compatible = "qcom,sdm845-tbu" },
++	{ }
++};
++
++static struct platform_driver qcom_smmu_tbu_driver = {
++	.driver = {
++		.name           = "qcom_tbu",
++		.of_match_table = qcom_smmu_tbu_of_match,
++	},
++	.probe = qcom_smmu_tbu_probe,
++};
++
+ struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu)
+ {
+ 	const struct device_node *np = smmu->dev->of_node;
+ 	const struct of_device_id *match;
++	static u8 tbu_registered;
++
++	if (!tbu_registered++)
++		platform_driver_register(&qcom_smmu_tbu_driver);
+ 
+ #ifdef CONFIG_ACPI
+ 	if (np == NULL) {
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.h b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.h
+index 9bb3ae7d62da6..3c134d1a62773 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.h
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.h
+@@ -34,8 +34,10 @@ irqreturn_t qcom_smmu_context_fault(int irq, void *dev);
+ 
+ #ifdef CONFIG_ARM_SMMU_QCOM_DEBUG
+ void qcom_smmu_tlb_sync_debug(struct arm_smmu_device *smmu);
++int qcom_tbu_probe(struct platform_device *pdev);
+ #else
+ static inline void qcom_smmu_tlb_sync_debug(struct arm_smmu_device *smmu) { }
++static inline int qcom_tbu_probe(struct platform_device *pdev) { return -EINVAL; }
+ #endif
+ 
+ #endif /* _ARM_SMMU_QCOM_H */
+diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
+index e8418cdd8331b..44e92638c0cd1 100644
+--- a/drivers/iommu/intel/cache.c
++++ b/drivers/iommu/intel/cache.c
+@@ -245,7 +245,8 @@ static unsigned long calculate_psi_aligned_address(unsigned long start,
+ 		 * shared_bits are all equal in both pfn and end_pfn.
+ 		 */
+ 		shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
+-		mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
++		mask = shared_bits ? __ffs(shared_bits) : MAX_AGAW_PFN_WIDTH;
++		aligned_pages = 1UL << mask;
+ 	}
+ 
+ 	*_pages = aligned_pages;
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index fd11a080380c8..f55ec1fd7942a 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -2071,7 +2071,7 @@ static int __init si_domain_init(int hw)
+ 		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
+ 			ret = iommu_domain_identity_map(si_domain,
+ 					mm_to_dma_pfn_start(start_pfn),
+-					mm_to_dma_pfn_end(end_pfn));
++					mm_to_dma_pfn_end(end_pfn-1));
+ 			if (ret)
+ 				return ret;
+ 		}
+diff --git a/drivers/iommu/iommufd/iova_bitmap.c b/drivers/iommu/iommufd/iova_bitmap.c
+index db8c46bee1559..e33ddfc239b5b 100644
+--- a/drivers/iommu/iommufd/iova_bitmap.c
++++ b/drivers/iommu/iommufd/iova_bitmap.c
+@@ -384,8 +384,6 @@ static int iova_bitmap_advance(struct iova_bitmap *bitmap)
+ 	bitmap->mapped_base_index += count;
+ 
+ 	iova_bitmap_put(bitmap);
+-	if (iova_bitmap_done(bitmap))
+-		return 0;
+ 
+ 	/* Iterate, set and skip any bits requested for next iteration */
+ 	if (bitmap->set_ahead_length) {
+@@ -396,6 +394,9 @@ static int iova_bitmap_advance(struct iova_bitmap *bitmap)
+ 			return ret;
+ 	}
+ 
++	if (iova_bitmap_done(bitmap))
++		return 0;
++
+ 	/* When advancing the index we pin the next set of bitmap pages */
+ 	return iova_bitmap_get(bitmap);
+ }
+diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c
+index 7a2199470f312..654ed33390957 100644
+--- a/drivers/iommu/iommufd/selftest.c
++++ b/drivers/iommu/iommufd/selftest.c
+@@ -1334,7 +1334,7 @@ static int iommufd_test_dirty(struct iommufd_ucmd *ucmd, unsigned int mockpt_id,
+ 	}
+ 
+ 	max = length / page_size;
+-	bitmap_size = max / BITS_PER_BYTE;
++	bitmap_size = DIV_ROUND_UP(max, BITS_PER_BYTE);
+ 
+ 	tmp = kvzalloc(bitmap_size, GFP_KERNEL_ACCOUNT);
+ 	if (!tmp) {
+diff --git a/drivers/iommu/sprd-iommu.c b/drivers/iommu/sprd-iommu.c
+index ba53571a82390..a2f4ffe6d9491 100644
+--- a/drivers/iommu/sprd-iommu.c
++++ b/drivers/iommu/sprd-iommu.c
+@@ -232,8 +232,8 @@ static void sprd_iommu_cleanup(struct sprd_iommu_domain *dom)
+ 
+ 	pgt_size = sprd_iommu_pgt_size(&dom->domain);
+ 	dma_free_coherent(dom->sdev->dev, pgt_size, dom->pgt_va, dom->pgt_pa);
+-	dom->sdev = NULL;
+ 	sprd_iommu_hw_en(dom->sdev, false);
++	dom->sdev = NULL;
+ }
+ 
+ static void sprd_iommu_domain_free(struct iommu_domain *domain)
+diff --git a/drivers/irqchip/irq-imx-irqsteer.c b/drivers/irqchip/irq-imx-irqsteer.c
+index 20cf7a9e9ece2..75a0e980ff352 100644
+--- a/drivers/irqchip/irq-imx-irqsteer.c
++++ b/drivers/irqchip/irq-imx-irqsteer.c
+@@ -36,6 +36,7 @@ struct irqsteer_data {
+ 	int			channel;
+ 	struct irq_domain	*domain;
+ 	u32			*saved_reg;
++	struct device		*dev;
+ };
+ 
+ static int imx_irqsteer_get_reg_index(struct irqsteer_data *data,
+@@ -72,10 +73,26 @@ static void imx_irqsteer_irq_mask(struct irq_data *d)
+ 	raw_spin_unlock_irqrestore(&data->lock, flags);
+ }
+ 
++static void imx_irqsteer_irq_bus_lock(struct irq_data *d)
++{
++	struct irqsteer_data *data = d->chip_data;
++
++	pm_runtime_get_sync(data->dev);
++}
++
++static void imx_irqsteer_irq_bus_sync_unlock(struct irq_data *d)
++{
++	struct irqsteer_data *data = d->chip_data;
++
++	pm_runtime_put_autosuspend(data->dev);
++}
++
+ static const struct irq_chip imx_irqsteer_irq_chip = {
+-	.name		= "irqsteer",
+-	.irq_mask	= imx_irqsteer_irq_mask,
+-	.irq_unmask	= imx_irqsteer_irq_unmask,
++	.name			= "irqsteer",
++	.irq_mask		= imx_irqsteer_irq_mask,
++	.irq_unmask		= imx_irqsteer_irq_unmask,
++	.irq_bus_lock		= imx_irqsteer_irq_bus_lock,
++	.irq_bus_sync_unlock	= imx_irqsteer_irq_bus_sync_unlock,
+ };
+ 
+ static int imx_irqsteer_irq_map(struct irq_domain *h, unsigned int irq,
+@@ -150,6 +167,7 @@ static int imx_irqsteer_probe(struct platform_device *pdev)
+ 	if (!data)
+ 		return -ENOMEM;
+ 
++	data->dev = &pdev->dev;
+ 	data->regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(data->regs)) {
+ 		dev_err(&pdev->dev, "failed to initialize reg\n");
+diff --git a/drivers/isdn/hardware/mISDN/hfcmulti.c b/drivers/isdn/hardware/mISDN/hfcmulti.c
+index 2e5cb9dde3ec5..44383cec1f47a 100644
+--- a/drivers/isdn/hardware/mISDN/hfcmulti.c
++++ b/drivers/isdn/hardware/mISDN/hfcmulti.c
+@@ -1900,7 +1900,7 @@ hfcmulti_dtmf(struct hfc_multi *hc)
+ static void
+ hfcmulti_tx(struct hfc_multi *hc, int ch)
+ {
+-	int i, ii, temp, len = 0;
++	int i, ii, temp, tmp_len, len = 0;
+ 	int Zspace, z1, z2; /* must be int for calculation */
+ 	int Fspace, f1, f2;
+ 	u_char *d;
+@@ -2121,14 +2121,15 @@ hfcmulti_tx(struct hfc_multi *hc, int ch)
+ 		HFC_wait_nodebug(hc);
+ 	}
+ 
++	tmp_len = (*sp)->len;
+ 	dev_kfree_skb(*sp);
+ 	/* check for next frame */
+ 	if (bch && get_next_bframe(bch)) {
+-		len = (*sp)->len;
++		len = tmp_len;
+ 		goto next_frame;
+ 	}
+ 	if (dch && get_next_dframe(dch)) {
+-		len = (*sp)->len;
++		len = tmp_len;
+ 		goto next_frame;
+ 	}
+ 
+diff --git a/drivers/leds/flash/leds-mt6360.c b/drivers/leds/flash/leds-mt6360.c
+index 1b75b4d368348..4c74f1cf01f00 100644
+--- a/drivers/leds/flash/leds-mt6360.c
++++ b/drivers/leds/flash/leds-mt6360.c
+@@ -643,14 +643,17 @@ static int mt6360_init_isnk_properties(struct mt6360_led *led,
+ 
+ 			ret = fwnode_property_read_u32(child, "reg", &reg);
+ 			if (ret || reg > MT6360_LED_ISNK3 ||
+-			    priv->leds_active & BIT(reg))
++			    priv->leds_active & BIT(reg)) {
++				fwnode_handle_put(child);
+ 				return -EINVAL;
++			}
+ 
+ 			ret = fwnode_property_read_u32(child, "color", &color);
+ 			if (ret) {
+ 				dev_err(priv->dev,
+ 					"led %d, no color specified\n",
+ 					led->led_no);
++				fwnode_handle_put(child);
+ 				return ret;
+ 			}
+ 
+diff --git a/drivers/leds/flash/leds-qcom-flash.c b/drivers/leds/flash/leds-qcom-flash.c
+index 7c99a30391716..bf70bf6fb0d59 100644
+--- a/drivers/leds/flash/leds-qcom-flash.c
++++ b/drivers/leds/flash/leds-qcom-flash.c
+@@ -505,6 +505,7 @@ qcom_flash_v4l2_init(struct device *dev, struct qcom_flash_led *led, struct fwno
+ 	struct qcom_flash_data *flash_data = led->flash_data;
+ 	struct v4l2_flash_config v4l2_cfg = { 0 };
+ 	struct led_flash_setting *intensity = &v4l2_cfg.intensity;
++	struct v4l2_flash *v4l2_flash;
+ 
+ 	if (!(led->flash.led_cdev.flags & LED_DEV_CAP_FLASH))
+ 		return 0;
+@@ -523,9 +524,12 @@ qcom_flash_v4l2_init(struct device *dev, struct qcom_flash_led *led, struct fwno
+ 				LED_FAULT_OVER_TEMPERATURE |
+ 				LED_FAULT_TIMEOUT;
+ 
+-	flash_data->v4l2_flash[flash_data->leds_count] =
+-		v4l2_flash_init(dev, fwnode, &led->flash, &qcom_v4l2_flash_ops, &v4l2_cfg);
+-	return PTR_ERR_OR_ZERO(flash_data->v4l2_flash);
++	v4l2_flash = v4l2_flash_init(dev, fwnode, &led->flash, &qcom_v4l2_flash_ops, &v4l2_cfg);
++	if (IS_ERR(v4l2_flash))
++		return PTR_ERR(v4l2_flash);
++
++	flash_data->v4l2_flash[flash_data->leds_count] = v4l2_flash;
++	return 0;
+ }
+ # else
+ static int
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index ba1be15cfd8ea..c66d1bead0a4a 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -258,7 +258,6 @@ struct led_classdev *of_led_get(struct device_node *np, int index)
+ 
+ 	led_dev = class_find_device_by_of_node(&leds_class, led_node);
+ 	of_node_put(led_node);
+-	put_device(led_dev);
+ 
+ 	return led_module_get(led_dev);
+ }
+diff --git a/drivers/leds/led-triggers.c b/drivers/leds/led-triggers.c
+index b1b323b19301d..6345c52afb564 100644
+--- a/drivers/leds/led-triggers.c
++++ b/drivers/leds/led-triggers.c
+@@ -179,9 +179,9 @@ int led_trigger_set(struct led_classdev *led_cdev, struct led_trigger *trig)
+ 
+ 		cancel_work_sync(&led_cdev->set_brightness_work);
+ 		led_stop_software_blink(led_cdev);
++		device_remove_groups(led_cdev->dev, led_cdev->trigger->groups);
+ 		if (led_cdev->trigger->deactivate)
+ 			led_cdev->trigger->deactivate(led_cdev);
+-		device_remove_groups(led_cdev->dev, led_cdev->trigger->groups);
+ 		led_cdev->trigger = NULL;
+ 		led_cdev->trigger_data = NULL;
+ 		led_cdev->activated = false;
+@@ -194,6 +194,12 @@ int led_trigger_set(struct led_classdev *led_cdev, struct led_trigger *trig)
+ 		spin_unlock(&trig->leddev_list_lock);
+ 		led_cdev->trigger = trig;
+ 
++		/*
++		 * If "set brightness to 0" is pending in workqueue,
++		 * we don't want that to be reordered after ->activate()
++		 */
++		flush_work(&led_cdev->set_brightness_work);
++
+ 		ret = 0;
+ 		if (trig->activate)
+ 			ret = trig->activate(led_cdev);
+diff --git a/drivers/leds/leds-ss4200.c b/drivers/leds/leds-ss4200.c
+index fcaa34706b6ca..2ef9fc7371bd1 100644
+--- a/drivers/leds/leds-ss4200.c
++++ b/drivers/leds/leds-ss4200.c
+@@ -356,8 +356,10 @@ static int ich7_lpc_probe(struct pci_dev *dev,
+ 
+ 	nas_gpio_pci_dev = dev;
+ 	status = pci_read_config_dword(dev, PMBASE, &g_pm_io_base);
+-	if (status)
++	if (status) {
++		status = pcibios_err_to_errno(status);
+ 		goto out;
++	}
+ 	g_pm_io_base &= 0x00000ff80;
+ 
+ 	status = pci_read_config_dword(dev, GPIO_CTRL, &gc);
+@@ -369,8 +371,9 @@ static int ich7_lpc_probe(struct pci_dev *dev,
+ 	}
+ 
+ 	status = pci_read_config_dword(dev, GPIO_BASE, &nas_gpio_io_base);
+-	if (0 > status) {
++	if (status) {
+ 		dev_info(&dev->dev, "Unable to read GPIOBASE.\n");
++		status = pcibios_err_to_errno(status);
+ 		goto out;
+ 	}
+ 	dev_dbg(&dev->dev, ": GPIOBASE = 0x%08x\n", nas_gpio_io_base);
+diff --git a/drivers/leds/rgb/leds-qcom-lpg.c b/drivers/leds/rgb/leds-qcom-lpg.c
+index 9467c796bd041..e74b2ceed1c26 100644
+--- a/drivers/leds/rgb/leds-qcom-lpg.c
++++ b/drivers/leds/rgb/leds-qcom-lpg.c
+@@ -2,7 +2,7 @@
+ /*
+  * Copyright (c) 2017-2022 Linaro Ltd
+  * Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+- * Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2023-2024, Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #include <linux/bits.h>
+ #include <linux/bitfield.h>
+@@ -254,6 +254,9 @@ static int lpg_clear_pbs_trigger(struct lpg *lpg, unsigned int lut_mask)
+ 	u8 val = 0;
+ 	int rc;
+ 
++	if (!lpg->lpg_chan_sdam)
++		return 0;
++
+ 	lpg->pbs_en_bitmap &= (~lut_mask);
+ 	if (!lpg->pbs_en_bitmap) {
+ 		rc = nvmem_device_write(lpg->lpg_chan_sdam, SDAM_REG_PBS_SEQ_EN, 1, &val);
+@@ -276,6 +279,9 @@ static int lpg_set_pbs_trigger(struct lpg *lpg, unsigned int lut_mask)
+ 	u8 val = PBS_SW_TRIG_BIT;
+ 	int rc;
+ 
++	if (!lpg->lpg_chan_sdam)
++		return 0;
++
+ 	if (!lpg->pbs_en_bitmap) {
+ 		rc = nvmem_device_write(lpg->lpg_chan_sdam, SDAM_REG_PBS_SEQ_EN, 1, &val);
+ 		if (rc < 0)
+diff --git a/drivers/leds/trigger/ledtrig-timer.c b/drivers/leds/trigger/ledtrig-timer.c
+index b4688d1d9d2b2..1d213c999d40a 100644
+--- a/drivers/leds/trigger/ledtrig-timer.c
++++ b/drivers/leds/trigger/ledtrig-timer.c
+@@ -110,11 +110,6 @@ static int timer_trig_activate(struct led_classdev *led_cdev)
+ 		led_cdev->flags &= ~LED_INIT_DEFAULT_TRIGGER;
+ 	}
+ 
+-	/*
+-	 * If "set brightness to 0" is pending in workqueue, we don't
+-	 * want that to be reordered after blink_set()
+-	 */
+-	flush_work(&led_cdev->set_brightness_work);
+ 	led_blink_set(led_cdev, &led_cdev->blink_delay_on,
+ 		      &led_cdev->blink_delay_off);
+ 
+diff --git a/drivers/macintosh/therm_windtunnel.c b/drivers/macintosh/therm_windtunnel.c
+index 37cdc6931f6d0..2576a53f247ea 100644
+--- a/drivers/macintosh/therm_windtunnel.c
++++ b/drivers/macintosh/therm_windtunnel.c
+@@ -549,7 +549,7 @@ g4fan_exit( void )
+ 	platform_driver_unregister( &therm_of_driver );
+ 
+ 	if( x.of_dev )
+-		of_device_unregister( x.of_dev );
++		of_platform_device_destroy(&x.of_dev->dev, NULL);
+ }
+ 
+ module_init(g4fan_init);
+diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
+index 933727f89431d..d17efb1dd0cb1 100644
+--- a/drivers/mailbox/imx-mailbox.c
++++ b/drivers/mailbox/imx-mailbox.c
+@@ -225,6 +225,8 @@ static int imx_mu_generic_tx(struct imx_mu_priv *priv,
+ 			     void *data)
+ {
+ 	u32 *arg = data;
++	u32 val;
++	int ret;
+ 
+ 	switch (cp->type) {
+ 	case IMX_MU_TYPE_TX:
+@@ -236,7 +238,13 @@ static int imx_mu_generic_tx(struct imx_mu_priv *priv,
+ 		queue_work(system_bh_wq, &cp->txdb_work);
+ 		break;
+ 	case IMX_MU_TYPE_TXDB_V2:
+-		imx_mu_xcr_rmw(priv, IMX_MU_GCR, IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx), 0);
++		imx_mu_write(priv, IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx),
++			     priv->dcfg->xCR[IMX_MU_GCR]);
++		ret = readl_poll_timeout(priv->base + priv->dcfg->xCR[IMX_MU_GCR], val,
++					 !(val & IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx)),
++					 0, 1000);
++		if (ret)
++			dev_warn_ratelimited(priv->dev, "channel type: %d failure\n", cp->type);
+ 		break;
+ 	default:
+ 		dev_warn_ratelimited(priv->dev, "Send data on wrong channel type: %d\n", cp->type);
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index 4aa394e91109c..63b5e3fe75281 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -662,12 +662,6 @@ static int cmdq_probe(struct platform_device *pdev)
+ 		cmdq->mbox.chans[i].con_priv = (void *)&cmdq->thread[i];
+ 	}
+ 
+-	err = devm_mbox_controller_register(dev, &cmdq->mbox);
+-	if (err < 0) {
+-		dev_err(dev, "failed to register mailbox: %d\n", err);
+-		return err;
+-	}
+-
+ 	platform_set_drvdata(pdev, cmdq);
+ 
+ 	WARN_ON(clk_bulk_prepare(cmdq->pdata->gce_num, cmdq->clocks));
+@@ -695,6 +689,12 @@ static int cmdq_probe(struct platform_device *pdev)
+ 	pm_runtime_set_autosuspend_delay(dev, CMDQ_MBOX_AUTOSUSPEND_DELAY_MS);
+ 	pm_runtime_use_autosuspend(dev);
+ 
++	err = devm_mbox_controller_register(dev, &cmdq->mbox);
++	if (err < 0) {
++		dev_err(dev, "failed to register mailbox: %d\n", err);
++		return err;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/mailbox/omap-mailbox.c b/drivers/mailbox/omap-mailbox.c
+index 46747559b438f..7a87424657a15 100644
+--- a/drivers/mailbox/omap-mailbox.c
++++ b/drivers/mailbox/omap-mailbox.c
+@@ -230,7 +230,8 @@ static int omap_mbox_startup(struct omap_mbox *mbox)
+ 	int ret = 0;
+ 
+ 	ret = request_threaded_irq(mbox->irq, NULL, mbox_interrupt,
+-				   IRQF_ONESHOT, mbox->name, mbox);
++				   IRQF_SHARED | IRQF_ONESHOT, mbox->name,
++				   mbox);
+ 	if (unlikely(ret)) {
+ 		pr_err("failed to register mailbox interrupt:%d\n", ret);
+ 		return ret;
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index abe88d1e67358..b149ac46a990e 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -4101,10 +4101,11 @@ static void raid_resume(struct dm_target *ti)
+ 		if (mddev->delta_disks < 0)
+ 			rs_set_capacity(rs);
+ 
++		mddev_lock_nointr(mddev);
+ 		WARN_ON_ONCE(!test_bit(MD_RECOVERY_FROZEN, &mddev->recovery));
+-		WARN_ON_ONCE(test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
++		WARN_ON_ONCE(rcu_dereference_protected(mddev->sync_thread,
++						       lockdep_is_held(&mddev->reconfig_mutex)));
+ 		clear_bit(RT_FLAG_RS_FROZEN, &rs->runtime_flags);
+-		mddev_lock_nointr(mddev);
+ 		mddev->ro = 0;
+ 		mddev->in_sync = 0;
+ 		md_unfrozen_sync_thread(mddev);
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index b2d5246cff210..2fc847af254de 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -2028,10 +2028,7 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 	    dm_table_any_dev_attr(t, device_is_not_random, NULL))
+ 		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
+ 
+-	/*
+-	 * For a zoned target, setup the zones related queue attributes
+-	 * and resources necessary for zone append emulation if necessary.
+-	 */
++	/* For a zoned table, setup the zone related queue attributes. */
+ 	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && limits->zoned) {
+ 		r = dm_set_zones_restrictions(t, q, limits);
+ 		if (r)
+@@ -2042,6 +2039,16 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 	if (r)
+ 		return r;
+ 
++	/*
++	 * Now that the limits are set, check the zones mapped by the table
++	 * and setup the resources for zone append emulation if necessary.
++	 */
++	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && limits->zoned) {
++		r = dm_revalidate_zones(t, q);
++		if (r)
++			return r;
++	}
++
+ 	dm_update_crypto_profile(q, t);
+ 
+ 	/*
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index bb5da66da4c17..431a61e1a8b7b 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -1538,14 +1538,6 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 	return r;
+ }
+ 
+-/*
+- * Check whether a DM target is a verity target.
+- */
+-bool dm_is_verity_target(struct dm_target *ti)
+-{
+-	return ti->type->module == THIS_MODULE;
+-}
+-
+ /*
+  * Get the verity mode (error behavior) of a verity target.
+  *
+@@ -1599,6 +1591,14 @@ static struct target_type verity_target = {
+ };
+ module_dm(verity);
+ 
++/*
++ * Check whether a DM target is a verity target.
++ */
++bool dm_is_verity_target(struct dm_target *ti)
++{
++	return ti->type == &verity_target;
++}
++
+ MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");
+ MODULE_AUTHOR("Mandeep Baines <msb@chromium.org>");
+ MODULE_AUTHOR("Will Drewry <wad@chromium.org>");
+diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
+index 5d66d916730ef..75d0019a0649d 100644
+--- a/drivers/md/dm-zone.c
++++ b/drivers/md/dm-zone.c
+@@ -166,14 +166,22 @@ static int dm_check_zoned_cb(struct blk_zone *zone, unsigned int idx,
+  * blk_revalidate_disk_zones() function here as the mapped device is suspended
+  * (this is called from __bind() context).
+  */
+-static int dm_revalidate_zones(struct mapped_device *md, struct dm_table *t)
++int dm_revalidate_zones(struct dm_table *t, struct request_queue *q)
+ {
++	struct mapped_device *md = t->md;
+ 	struct gendisk *disk = md->disk;
+ 	int ret;
+ 
++	if (!get_capacity(disk))
++		return 0;
++
+ 	/* Revalidate only if something changed. */
+-	if (!disk->nr_zones || disk->nr_zones != md->nr_zones)
++	if (!disk->nr_zones || disk->nr_zones != md->nr_zones) {
++		DMINFO("%s using %s zone append",
++		       disk->disk_name,
++		       queue_emulates_zone_append(q) ? "emulated" : "native");
+ 		md->nr_zones = 0;
++	}
+ 
+ 	if (md->nr_zones)
+ 		return 0;
+@@ -240,9 +248,6 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
+ 		lim->max_zone_append_sectors = 0;
+ 	}
+ 
+-	if (!get_capacity(md->disk))
+-		return 0;
+-
+ 	/*
+ 	 * Count conventional zones to check that the mapped device will indeed 
+ 	 * have sequential write required zones.
+@@ -269,16 +274,6 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
+ 		return 0;
+ 	}
+ 
+-	if (!md->disk->nr_zones) {
+-		DMINFO("%s using %s zone append",
+-		       md->disk->disk_name,
+-		       queue_emulates_zone_append(q) ? "emulated" : "native");
+-	}
+-
+-	ret = dm_revalidate_zones(md, t);
+-	if (ret < 0)
+-		return ret;
+-
+ 	if (!static_key_enabled(&zoned_enabled.key))
+ 		static_branch_enable(&zoned_enabled);
+ 	return 0;
+diff --git a/drivers/md/dm.h b/drivers/md/dm.h
+index 53ef8207fe2c1..c984ecb64b1e8 100644
+--- a/drivers/md/dm.h
++++ b/drivers/md/dm.h
+@@ -103,6 +103,7 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t);
+  */
+ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
+ 		struct queue_limits *lim);
++int dm_revalidate_zones(struct dm_table *t, struct request_queue *q);
+ void dm_zone_endio(struct dm_io *io, struct bio *clone);
+ #ifdef CONFIG_BLK_DEV_ZONED
+ int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 0a2d37eb38ef9..08232d8dc815e 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -227,6 +227,8 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
+ 	struct block_device *bdev;
+ 	struct mddev *mddev = bitmap->mddev;
+ 	struct bitmap_storage *store = &bitmap->storage;
++	unsigned int bitmap_limit = (bitmap->storage.file_pages - pg_index) <<
++		PAGE_SHIFT;
+ 	loff_t sboff, offset = mddev->bitmap_info.offset;
+ 	sector_t ps = pg_index * PAGE_SIZE / SECTOR_SIZE;
+ 	unsigned int size = PAGE_SIZE;
+@@ -269,11 +271,9 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
+ 		if (size == 0)
+ 			/* bitmap runs in to data */
+ 			return -EINVAL;
+-	} else {
+-		/* DATA METADATA BITMAP - no problems */
+ 	}
+ 
+-	md_super_write(mddev, rdev, sboff + ps, (int) size, page);
++	md_super_write(mddev, rdev, sboff + ps, (int)min(size, bitmap_limit), page);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index 8e36a0feec098..b5a802ae17bb2 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -15,6 +15,7 @@
+ 
+ #define LVB_SIZE	64
+ #define NEW_DEV_TIMEOUT 5000
++#define WAIT_DLM_LOCK_TIMEOUT (30 * HZ)
+ 
+ struct dlm_lock_resource {
+ 	dlm_lockspace_t *ls;
+@@ -130,8 +131,13 @@ static int dlm_lock_sync(struct dlm_lock_resource *res, int mode)
+ 			0, sync_ast, res, res->bast);
+ 	if (ret)
+ 		return ret;
+-	wait_event(res->sync_locking, res->sync_locking_done);
++	ret = wait_event_timeout(res->sync_locking, res->sync_locking_done,
++				WAIT_DLM_LOCK_TIMEOUT);
+ 	res->sync_locking_done = false;
++	if (!ret) {
++		pr_err("locking DLM '%s' timeout!\n", res->name);
++		return -EBUSY;
++	}
+ 	if (res->lksb.sb_status == 0)
+ 		res->mode = mode;
+ 	return res->lksb.sb_status;
+@@ -743,7 +749,7 @@ static void unlock_comm(struct md_cluster_info *cinfo)
+  */
+ static int __sendmsg(struct md_cluster_info *cinfo, struct cluster_msg *cmsg)
+ {
+-	int error;
++	int error, unlock_error;
+ 	int slot = cinfo->slot_number - 1;
+ 
+ 	cmsg->slot = cpu_to_le32(slot);
+@@ -751,7 +757,7 @@ static int __sendmsg(struct md_cluster_info *cinfo, struct cluster_msg *cmsg)
+ 	error = dlm_lock_sync(cinfo->message_lockres, DLM_LOCK_EX);
+ 	if (error) {
+ 		pr_err("md-cluster: failed to get EX on MESSAGE (%d)\n", error);
+-		goto failed_message;
++		return error;
+ 	}
+ 
+ 	memcpy(cinfo->message_lockres->lksb.sb_lvbptr, (void *)cmsg,
+@@ -781,14 +787,10 @@ static int __sendmsg(struct md_cluster_info *cinfo, struct cluster_msg *cmsg)
+ 	}
+ 
+ failed_ack:
+-	error = dlm_unlock_sync(cinfo->message_lockres);
+-	if (unlikely(error != 0)) {
++	while ((unlock_error = dlm_unlock_sync(cinfo->message_lockres)))
+ 		pr_err("md-cluster: failed convert to NL on MESSAGE(%d)\n",
+-			error);
+-		/* in case the message can't be released due to some reason */
+-		goto failed_ack;
+-	}
+-failed_message:
++			unlock_error);
++
+ 	return error;
+ }
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index aff9118ff6975..9c5be016e5073 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -550,13 +550,9 @@ static void md_end_flush(struct bio *bio)
+ 
+ 	rdev_dec_pending(rdev, mddev);
+ 
+-	if (atomic_dec_and_test(&mddev->flush_pending)) {
+-		/* The pair is percpu_ref_get() from md_flush_request() */
+-		percpu_ref_put(&mddev->active_io);
+-
++	if (atomic_dec_and_test(&mddev->flush_pending))
+ 		/* The pre-request flush has finished */
+ 		queue_work(md_wq, &mddev->flush_work);
+-	}
+ }
+ 
+ static void md_submit_flush_data(struct work_struct *ws);
+@@ -587,12 +583,8 @@ static void submit_flushes(struct work_struct *ws)
+ 			rcu_read_lock();
+ 		}
+ 	rcu_read_unlock();
+-	if (atomic_dec_and_test(&mddev->flush_pending)) {
+-		/* The pair is percpu_ref_get() from md_flush_request() */
+-		percpu_ref_put(&mddev->active_io);
+-
++	if (atomic_dec_and_test(&mddev->flush_pending))
+ 		queue_work(md_wq, &mddev->flush_work);
+-	}
+ }
+ 
+ static void md_submit_flush_data(struct work_struct *ws)
+@@ -617,8 +609,20 @@ static void md_submit_flush_data(struct work_struct *ws)
+ 		bio_endio(bio);
+ 	} else {
+ 		bio->bi_opf &= ~REQ_PREFLUSH;
+-		md_handle_request(mddev, bio);
++
++		/*
++		 * make_request() will never return error here, it only
++		 * returns error in raid5_make_request() by dm-raid.
++		 * Since dm always splits data and flush operation into
++		 * two separate io, io size of flush submitted by dm
++		 * always is 0, make_request() will not be called here.
++		 */
++		if (WARN_ON_ONCE(!mddev->pers->make_request(mddev, bio)))
++		if (WARN_ON_ONCE(!mddev->pers->make_request(mddev, bio)))
+ 	}
++
++	/* The pair is percpu_ref_get() from md_flush_request() */
++	percpu_ref_put(&mddev->active_io);
+ }
+ 
+ /*
+@@ -7742,12 +7746,6 @@ static int md_ioctl(struct block_device *bdev, blk_mode_t mode,
+ 		return get_bitmap_file(mddev, argp);
+ 	}
+ 
+-	if (cmd == HOT_REMOVE_DISK)
+-		/* need to ensure recovery thread has run */
+-		wait_event_interruptible_timeout(mddev->sb_wait,
+-						 !test_bit(MD_RECOVERY_NEEDED,
+-							   &mddev->recovery),
+-						 msecs_to_jiffies(5000));
+ 	if (cmd == STOP_ARRAY || cmd == STOP_ARRAY_RO) {
+ 		/* Need to flush page cache, and ensure no-one else opens
+ 		 * and writes
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index c5d4aeb68404c..81c01347cd24e 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -365,18 +365,13 @@ static sector_t raid0_size(struct mddev *mddev, sector_t sectors, int raid_disks
+ 	return array_sectors;
+ }
+ 
+-static void free_conf(struct mddev *mddev, struct r0conf *conf)
+-{
+-	kfree(conf->strip_zone);
+-	kfree(conf->devlist);
+-	kfree(conf);
+-}
+-
+ static void raid0_free(struct mddev *mddev, void *priv)
+ {
+ 	struct r0conf *conf = priv;
+ 
+-	free_conf(mddev, conf);
++	kfree(conf->strip_zone);
++	kfree(conf->devlist);
++	kfree(conf);
+ }
+ 
+ static int raid0_set_limits(struct mddev *mddev)
+@@ -415,7 +410,7 @@ static int raid0_run(struct mddev *mddev)
+ 	if (!mddev_is_dm(mddev)) {
+ 		ret = raid0_set_limits(mddev);
+ 		if (ret)
+-			goto out_free_conf;
++			return ret;
+ 	}
+ 
+ 	/* calculate array device size */
+@@ -427,13 +422,7 @@ static int raid0_run(struct mddev *mddev)
+ 
+ 	dump_zones(mddev);
+ 
+-	ret = md_integrity_register(mddev);
+-	if (ret)
+-		goto out_free_conf;
+-	return 0;
+-out_free_conf:
+-	free_conf(mddev, conf);
+-	return ret;
++	return md_integrity_register(mddev);
+ }
+ 
+ /*
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 7b8a71ca66dde..22bbd06ba6a29 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -680,6 +680,7 @@ static int choose_slow_rdev(struct r1conf *conf, struct r1bio *r1_bio,
+ 		len = r1_bio->sectors;
+ 		read_len = raid1_check_read_range(rdev, this_sector, &len);
+ 		if (read_len == r1_bio->sectors) {
++			*max_sectors = read_len;
+ 			update_read_sectors(conf, disk, this_sector, read_len);
+ 			return disk;
+ 		}
+@@ -3204,7 +3205,6 @@ static int raid1_set_limits(struct mddev *mddev)
+ 	return queue_limits_set(mddev->gendisk->queue, &lim);
+ }
+ 
+-static void raid1_free(struct mddev *mddev, void *priv);
+ static int raid1_run(struct mddev *mddev)
+ {
+ 	struct r1conf *conf;
+@@ -3238,7 +3238,7 @@ static int raid1_run(struct mddev *mddev)
+ 	if (!mddev_is_dm(mddev)) {
+ 		ret = raid1_set_limits(mddev);
+ 		if (ret)
+-			goto abort;
++			return ret;
+ 	}
+ 
+ 	mddev->degraded = 0;
+@@ -3252,8 +3252,7 @@ static int raid1_run(struct mddev *mddev)
+ 	 */
+ 	if (conf->raid_disks - mddev->degraded < 1) {
+ 		md_unregister_thread(mddev, &conf->thread);
+-		ret = -EINVAL;
+-		goto abort;
++		return -EINVAL;
+ 	}
+ 
+ 	if (conf->raid_disks - mddev->degraded == 1)
+@@ -3277,14 +3276,8 @@ static int raid1_run(struct mddev *mddev)
+ 	md_set_array_sectors(mddev, raid1_size(mddev, 0, 0));
+ 
+ 	ret = md_integrity_register(mddev);
+-	if (ret) {
++	if (ret)
+ 		md_unregister_thread(mddev, &mddev->thread);
+-		goto abort;
+-	}
+-	return 0;
+-
+-abort:
+-	raid1_free(mddev, conf);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 2bd1ce9b39226..1c6b58adec133 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -155,7 +155,7 @@ static int raid6_idx_to_slot(int idx, struct stripe_head *sh,
+ 	return slot;
+ }
+ 
+-static void print_raid5_conf (struct r5conf *conf);
++static void print_raid5_conf(struct r5conf *conf);
+ 
+ static int stripe_operations_active(struct stripe_head *sh)
+ {
+@@ -5899,6 +5899,39 @@ static int add_all_stripe_bios(struct r5conf *conf,
+ 	return ret;
+ }
+ 
++enum reshape_loc {
++	LOC_NO_RESHAPE,
++	LOC_AHEAD_OF_RESHAPE,
++	LOC_INSIDE_RESHAPE,
++	LOC_BEHIND_RESHAPE,
++};
++
++static enum reshape_loc get_reshape_loc(struct mddev *mddev,
++		struct r5conf *conf, sector_t logical_sector)
++{
++	sector_t reshape_progress, reshape_safe;
++	/*
++	 * Spinlock is needed as reshape_progress may be
++	 * 64bit on a 32bit platform, and so it might be
++	 * possible to see a half-updated value
++	 * Of course reshape_progress could change after
++	 * the lock is dropped, so once we get a reference
++	 * to the stripe that we think it is, we will have
++	 * to check again.
++	 */
++	spin_lock_irq(&conf->device_lock);
++	reshape_progress = conf->reshape_progress;
++	reshape_safe = conf->reshape_safe;
++	spin_unlock_irq(&conf->device_lock);
++	if (reshape_progress == MaxSector)
++		return LOC_NO_RESHAPE;
++	if (ahead_of_reshape(mddev, logical_sector, reshape_progress))
++		return LOC_AHEAD_OF_RESHAPE;
++	if (ahead_of_reshape(mddev, logical_sector, reshape_safe))
++		return LOC_INSIDE_RESHAPE;
++	return LOC_BEHIND_RESHAPE;
++}
++
+ static enum stripe_result make_stripe_request(struct mddev *mddev,
+ 		struct r5conf *conf, struct stripe_request_ctx *ctx,
+ 		sector_t logical_sector, struct bio *bi)
+@@ -5913,28 +5946,14 @@ static enum stripe_result make_stripe_request(struct mddev *mddev,
+ 	seq = read_seqcount_begin(&conf->gen_lock);
+ 
+ 	if (unlikely(conf->reshape_progress != MaxSector)) {
+-		/*
+-		 * Spinlock is needed as reshape_progress may be
+-		 * 64bit on a 32bit platform, and so it might be
+-		 * possible to see a half-updated value
+-		 * Of course reshape_progress could change after
+-		 * the lock is dropped, so once we get a reference
+-		 * to the stripe that we think it is, we will have
+-		 * to check again.
+-		 */
+-		spin_lock_irq(&conf->device_lock);
+-		if (ahead_of_reshape(mddev, logical_sector,
+-				     conf->reshape_progress)) {
+-			previous = 1;
+-		} else {
+-			if (ahead_of_reshape(mddev, logical_sector,
+-					     conf->reshape_safe)) {
+-				spin_unlock_irq(&conf->device_lock);
+-				ret = STRIPE_SCHEDULE_AND_RETRY;
+-				goto out;
+-			}
++		enum reshape_loc loc = get_reshape_loc(mddev, conf,
++						       logical_sector);
++		if (loc == LOC_INSIDE_RESHAPE) {
++			ret = STRIPE_SCHEDULE_AND_RETRY;
++			goto out;
+ 		}
+-		spin_unlock_irq(&conf->device_lock);
++		if (loc == LOC_AHEAD_OF_RESHAPE)
++			previous = 1;
+ 	}
+ 
+ 	new_sector = raid5_compute_sector(conf, logical_sector, previous,
+@@ -6113,8 +6132,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
+ 	/* Bail out if conflicts with reshape and REQ_NOWAIT is set */
+ 	if ((bi->bi_opf & REQ_NOWAIT) &&
+ 	    (conf->reshape_progress != MaxSector) &&
+-	    !ahead_of_reshape(mddev, logical_sector, conf->reshape_progress) &&
+-	    ahead_of_reshape(mddev, logical_sector, conf->reshape_safe)) {
++	    get_reshape_loc(mddev, conf, logical_sector) == LOC_INSIDE_RESHAPE) {
+ 		bio_wouldblock_error(bi);
+ 		if (rw == WRITE)
+ 			md_write_end(mddev);
+@@ -7562,11 +7580,11 @@ static struct r5conf *setup_conf(struct mddev *mddev)
+ 		if (test_bit(Replacement, &rdev->flags)) {
+ 			if (disk->replacement)
+ 				goto abort;
+-			RCU_INIT_POINTER(disk->replacement, rdev);
++			disk->replacement = rdev;
+ 		} else {
+ 			if (disk->rdev)
+ 				goto abort;
+-			RCU_INIT_POINTER(disk->rdev, rdev);
++			disk->rdev = rdev;
+ 		}
+ 
+ 		if (test_bit(In_sync, &rdev->flags)) {
+@@ -8048,7 +8066,7 @@ static void raid5_status(struct seq_file *seq, struct mddev *mddev)
+ 	seq_printf (seq, "]");
+ }
+ 
+-static void print_raid5_conf (struct r5conf *conf)
++static void print_raid5_conf(struct r5conf *conf)
+ {
+ 	struct md_rdev *rdev;
+ 	int i;
+@@ -8062,15 +8080,13 @@ static void print_raid5_conf (struct r5conf *conf)
+ 	       conf->raid_disks,
+ 	       conf->raid_disks - conf->mddev->degraded);
+ 
+-	rcu_read_lock();
+ 	for (i = 0; i < conf->raid_disks; i++) {
+-		rdev = rcu_dereference(conf->disks[i].rdev);
++		rdev = conf->disks[i].rdev;
+ 		if (rdev)
+ 			pr_debug(" disk %d, o:%d, dev:%pg\n",
+ 			       i, !test_bit(Faulty, &rdev->flags),
+ 			       rdev->bdev);
+ 	}
+-	rcu_read_unlock();
+ }
+ 
+ static int raid5_spare_active(struct mddev *mddev)
+diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
+index c6d3ee472d814..742bc665602e9 100644
+--- a/drivers/media/i2c/Kconfig
++++ b/drivers/media/i2c/Kconfig
+@@ -679,6 +679,7 @@ config VIDEO_THP7312
+ 	tristate "THine THP7312 support"
+ 	depends on I2C
+ 	select FW_LOADER
++	select FW_UPLOAD
+ 	select MEDIA_CONTROLLER
+ 	select V4L2_CCI_I2C
+ 	select V4L2_FWNODE
+diff --git a/drivers/media/i2c/alvium-csi2.c b/drivers/media/i2c/alvium-csi2.c
+index e65702e3f73e8..272d892da56ee 100644
+--- a/drivers/media/i2c/alvium-csi2.c
++++ b/drivers/media/i2c/alvium-csi2.c
+@@ -1962,7 +1962,7 @@ static int alvium_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
+ 	int val;
+ 
+ 	switch (ctrl->id) {
+-	case V4L2_CID_GAIN:
++	case V4L2_CID_ANALOGUE_GAIN:
+ 		val = alvium_get_gain(alvium);
+ 		if (val < 0)
+ 			return val;
+@@ -1994,7 +1994,7 @@ static int alvium_s_ctrl(struct v4l2_ctrl *ctrl)
+ 		return 0;
+ 
+ 	switch (ctrl->id) {
+-	case V4L2_CID_GAIN:
++	case V4L2_CID_ANALOGUE_GAIN:
+ 		ret = alvium_set_ctrl_gain(alvium, ctrl->val);
+ 		break;
+ 	case V4L2_CID_AUTOGAIN:
+@@ -2123,7 +2123,7 @@ static int alvium_ctrl_init(struct alvium_dev *alvium)
+ 
+ 	if (alvium->avail_ft.gain) {
+ 		ctrls->gain = v4l2_ctrl_new_std(hdl, ops,
+-						V4L2_CID_GAIN,
++						V4L2_CID_ANALOGUE_GAIN,
+ 						alvium->min_gain,
+ 						alvium->max_gain,
+ 						alvium->inc_gain,
+diff --git a/drivers/media/i2c/hi846.c b/drivers/media/i2c/hi846.c
+index 9c565ec033d4e..52d9ca68a86c8 100644
+--- a/drivers/media/i2c/hi846.c
++++ b/drivers/media/i2c/hi846.c
+@@ -1851,7 +1851,7 @@ static int hi846_get_selection(struct v4l2_subdev *sd,
+ 		mutex_lock(&hi846->mutex);
+ 		switch (sel->which) {
+ 		case V4L2_SUBDEV_FORMAT_TRY:
+-			v4l2_subdev_state_get_crop(sd_state, sel->pad);
++			sel->r = *v4l2_subdev_state_get_crop(sd_state, sel->pad);
+ 			break;
+ 		case V4L2_SUBDEV_FORMAT_ACTIVE:
+ 			sel->r = hi846->cur_mode->crop;
+diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
+index 51ebf5453fceb..e78a80b2bb2e4 100644
+--- a/drivers/media/i2c/imx219.c
++++ b/drivers/media/i2c/imx219.c
+@@ -162,8 +162,8 @@ static const struct cci_reg_sequence imx219_common_regs[] = {
+ 	{ IMX219_REG_MODE_SELECT, 0x00 },	/* Mode Select */
+ 
+ 	/* To Access Addresses 3000-5fff, send the following commands */
+-	{ CCI_REG8(0x30eb), 0x0c },
+ 	{ CCI_REG8(0x30eb), 0x05 },
++	{ CCI_REG8(0x30eb), 0x0c },
+ 	{ CCI_REG8(0x300a), 0xff },
+ 	{ CCI_REG8(0x300b), 0xff },
+ 	{ CCI_REG8(0x30eb), 0x05 },
+diff --git a/drivers/media/i2c/imx412.c b/drivers/media/i2c/imx412.c
+index 0efce329525e4..7d1f7af0a9dff 100644
+--- a/drivers/media/i2c/imx412.c
++++ b/drivers/media/i2c/imx412.c
+@@ -542,14 +542,13 @@ static int imx412_update_controls(struct imx412 *imx412,
+  */
+ static int imx412_update_exp_gain(struct imx412 *imx412, u32 exposure, u32 gain)
+ {
+-	u32 lpfr, shutter;
++	u32 lpfr;
+ 	int ret;
+ 
+ 	lpfr = imx412->vblank + imx412->cur_mode->height;
+-	shutter = lpfr - exposure;
+ 
+-	dev_dbg(imx412->dev, "Set exp %u, analog gain %u, shutter %u, lpfr %u",
+-		exposure, gain, shutter, lpfr);
++	dev_dbg(imx412->dev, "Set exp %u, analog gain %u, lpfr %u",
++		exposure, gain, lpfr);
+ 
+ 	ret = imx412_write_reg(imx412, IMX412_REG_HOLD, 1, 1);
+ 	if (ret)
+@@ -559,7 +558,7 @@ static int imx412_update_exp_gain(struct imx412 *imx412, u32 exposure, u32 gain)
+ 	if (ret)
+ 		goto error_release_group_hold;
+ 
+-	ret = imx412_write_reg(imx412, IMX412_REG_EXPOSURE_CIT, 2, shutter);
++	ret = imx412_write_reg(imx412, IMX412_REG_EXPOSURE_CIT, 2, exposure);
+ 	if (ret)
+ 		goto error_release_group_hold;
+ 
+diff --git a/drivers/media/pci/intel/ivsc/mei_csi.c b/drivers/media/pci/intel/ivsc/mei_csi.c
+index f04a89584334b..16791a7f4f157 100644
+--- a/drivers/media/pci/intel/ivsc/mei_csi.c
++++ b/drivers/media/pci/intel/ivsc/mei_csi.c
+@@ -126,6 +126,8 @@ struct mei_csi {
+ 	struct v4l2_ctrl_handler ctrl_handler;
+ 	struct v4l2_ctrl *freq_ctrl;
+ 	struct v4l2_ctrl *privacy_ctrl;
++	/* lock for v4l2 controls */
++	struct mutex ctrl_lock;
+ 	unsigned int remote_pad;
+ 	/* start streaming or not */
+ 	int streaming;
+@@ -190,7 +192,11 @@ static int mei_csi_send(struct mei_csi *csi, u8 *buf, size_t len)
+ 
+ 	/* command response status */
+ 	ret = csi->cmd_response.status;
+-	if (ret) {
++	if (ret == -1) {
++		/* notify privacy on instead of reporting error */
++		ret = 0;
++		v4l2_ctrl_s_ctrl(csi->privacy_ctrl, 1);
++	} else if (ret) {
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+@@ -559,11 +565,13 @@ static int mei_csi_init_controls(struct mei_csi *csi)
+ 	u32 max;
+ 	int ret;
+ 
++	mutex_init(&csi->ctrl_lock);
++
+ 	ret = v4l2_ctrl_handler_init(&csi->ctrl_handler, 2);
+ 	if (ret)
+ 		return ret;
+ 
+-	csi->ctrl_handler.lock = &csi->lock;
++	csi->ctrl_handler.lock = &csi->ctrl_lock;
+ 
+ 	max = ARRAY_SIZE(link_freq_menu_items) - 1;
+ 	csi->freq_ctrl = v4l2_ctrl_new_int_menu(&csi->ctrl_handler,
+@@ -755,6 +763,7 @@ static int mei_csi_probe(struct mei_cl_device *cldev,
+ 
+ err_ctrl_handler:
+ 	v4l2_ctrl_handler_free(&csi->ctrl_handler);
++	mutex_destroy(&csi->ctrl_lock);
+ 	v4l2_async_nf_unregister(&csi->notifier);
+ 	v4l2_async_nf_cleanup(&csi->notifier);
+ 
+@@ -774,6 +783,7 @@ static void mei_csi_remove(struct mei_cl_device *cldev)
+ 	v4l2_async_nf_unregister(&csi->notifier);
+ 	v4l2_async_nf_cleanup(&csi->notifier);
+ 	v4l2_ctrl_handler_free(&csi->ctrl_handler);
++	mutex_destroy(&csi->ctrl_lock);
+ 	v4l2_async_unregister_subdev(&csi->subdev);
+ 	v4l2_subdev_cleanup(&csi->subdev);
+ 	media_entity_cleanup(&csi->subdev.entity);
+diff --git a/drivers/media/pci/ivtv/ivtv-udma.c b/drivers/media/pci/ivtv/ivtv-udma.c
+index 99b9f55ca8292..f467a00492f4b 100644
+--- a/drivers/media/pci/ivtv/ivtv-udma.c
++++ b/drivers/media/pci/ivtv/ivtv-udma.c
+@@ -131,6 +131,8 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr,
+ 
+ 	/* Fill SG List with new values */
+ 	if (ivtv_udma_fill_sg_list(dma, &user_dma, 0) < 0) {
++		IVTV_DEBUG_WARN("%s: could not allocate bounce buffers for highmem userspace buffers\n",
++				__func__);
+ 		unpin_user_pages(dma->map, dma->page_count);
+ 		dma->page_count = 0;
+ 		return -ENOMEM;
+@@ -139,6 +141,12 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr,
+ 	/* Map SG List */
+ 	dma->SG_length = dma_map_sg(&itv->pdev->dev, dma->SGlist,
+ 				    dma->page_count, DMA_TO_DEVICE);
++	if (!dma->SG_length) {
++		IVTV_DEBUG_WARN("%s: DMA map error, SG_length is 0\n", __func__);
++		unpin_user_pages(dma->map, dma->page_count);
++		dma->page_count = 0;
++		return -EINVAL;
++	}
+ 
+ 	/* Fill SG Array with new values */
+ 	ivtv_udma_fill_sg_array (dma, ivtv_dest_addr, 0, -1);
+diff --git a/drivers/media/pci/ivtv/ivtv-yuv.c b/drivers/media/pci/ivtv/ivtv-yuv.c
+index 582146f8d70d5..2d9274537725a 100644
+--- a/drivers/media/pci/ivtv/ivtv-yuv.c
++++ b/drivers/media/pci/ivtv/ivtv-yuv.c
+@@ -114,6 +114,12 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma,
+ 	}
+ 	dma->SG_length = dma_map_sg(&itv->pdev->dev, dma->SGlist,
+ 				    dma->page_count, DMA_TO_DEVICE);
++	if (!dma->SG_length) {
++		IVTV_DEBUG_WARN("%s: DMA map error, SG_length is 0\n", __func__);
++		unpin_user_pages(dma->map, dma->page_count);
++		dma->page_count = 0;
++		return -EINVAL;
++	}
+ 
+ 	/* Fill SG Array with new values */
+ 	ivtv_udma_fill_sg_array(dma, y_buffer_offset, uv_buffer_offset, y_size);
+diff --git a/drivers/media/pci/ivtv/ivtvfb.c b/drivers/media/pci/ivtv/ivtvfb.c
+index 410477e3e6216..d1ab7fee0d057 100644
+--- a/drivers/media/pci/ivtv/ivtvfb.c
++++ b/drivers/media/pci/ivtv/ivtvfb.c
+@@ -281,10 +281,10 @@ static int ivtvfb_prep_dec_dma_to_device(struct ivtv *itv,
+ 	/* Map User DMA */
+ 	if (ivtv_udma_setup(itv, ivtv_dest_addr, userbuf, size_in_bytes) <= 0) {
+ 		mutex_unlock(&itv->udma.lock);
+-		IVTVFB_WARN("ivtvfb_prep_dec_dma_to_device, Error with pin_user_pages: %d bytes, %d pages returned\n",
+-			       size_in_bytes, itv->udma.page_count);
++		IVTVFB_WARN("%s, Error in ivtv_udma_setup: %d bytes, %d pages returned\n",
++			       __func__, size_in_bytes, itv->udma.page_count);
+ 
+-		/* pin_user_pages must have failed completely */
++		/* pin_user_pages or DMA must have failed completely */
+ 		return -EIO;
+ 	}
+ 
+diff --git a/drivers/media/pci/saa7134/saa7134-dvb.c b/drivers/media/pci/saa7134/saa7134-dvb.c
+index 9c6cfef03331d..a66df6adfaad8 100644
+--- a/drivers/media/pci/saa7134/saa7134-dvb.c
++++ b/drivers/media/pci/saa7134/saa7134-dvb.c
+@@ -466,7 +466,9 @@ static int philips_europa_tuner_sleep(struct dvb_frontend *fe)
+ 	/* switch the board to analog mode */
+ 	if (fe->ops.i2c_gate_ctrl)
+ 		fe->ops.i2c_gate_ctrl(fe, 1);
+-	i2c_transfer(&dev->i2c_adap, &analog_msg, 1);
++	if (i2c_transfer(&dev->i2c_adap, &analog_msg, 1) != 1)
++		return -EIO;
++
+ 	return 0;
+ }
+ 
+@@ -1018,7 +1020,9 @@ static int md8800_set_voltage2(struct dvb_frontend *fe,
+ 	else
+ 		wbuf[1] = rbuf & 0xef;
+ 	msg[0].len = 2;
+-	i2c_transfer(&dev->i2c_adap, msg, 1);
++	if (i2c_transfer(&dev->i2c_adap, msg, 1) != 1)
++		return -EIO;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp8_if.c b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp8_if.c
+index 4bc89c8644fec..5f848691cea44 100644
+--- a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp8_if.c
++++ b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp8_if.c
+@@ -449,7 +449,7 @@ static int vdec_vp8_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
+ 		       inst->frm_cnt, y_fb_dma, c_fb_dma, fb);
+ 
+ 	inst->cur_fb = fb;
+-	dec->bs_dma = (uint64_t)bs->dma_addr;
++	dec->bs_dma = bs->dma_addr;
+ 	dec->bs_sz = bs->size;
+ 	dec->cur_y_fb_dma = y_fb_dma;
+ 	dec->cur_c_fb_dma = c_fb_dma;
+diff --git a/drivers/media/platform/mediatek/vcodec/decoder/vdec_vpu_if.c b/drivers/media/platform/mediatek/vcodec/decoder/vdec_vpu_if.c
+index da6be556727bb..145958206e38a 100644
+--- a/drivers/media/platform/mediatek/vcodec/decoder/vdec_vpu_if.c
++++ b/drivers/media/platform/mediatek/vcodec/decoder/vdec_vpu_if.c
+@@ -233,6 +233,12 @@ int vpu_dec_init(struct vdec_vpu_inst *vpu)
+ 	mtk_vdec_debug(vpu->ctx, "vdec_inst=%p", vpu);
+ 
+ 	err = vcodec_vpu_send_msg(vpu, (void *)&msg, sizeof(msg));
++
++	if (IS_ERR_OR_NULL(vpu->vsi)) {
++		mtk_vdec_err(vpu->ctx, "invalid vdec vsi, status=%d", err);
++		return -EINVAL;
++	}
++
+ 	mtk_vdec_debug(vpu->ctx, "- ret=%d", err);
+ 	return err;
+ }
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+index cc97790ed30f6..b1300f15e5020 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+@@ -1634,6 +1634,9 @@ static int mxc_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
+ 	dev_dbg(ctx->mxc_jpeg->dev, "Start streaming ctx=%p", ctx);
+ 	q_data->sequence = 0;
+ 
++	if (V4L2_TYPE_IS_CAPTURE(q->type))
++		ctx->need_initial_source_change_evt = false;
++
+ 	ret = pm_runtime_resume_and_get(ctx->mxc_jpeg->dev);
+ 	if (ret < 0) {
+ 		dev_err(ctx->mxc_jpeg->dev, "Failed to power up jpeg\n");
+diff --git a/drivers/media/platform/nxp/imx-pxp.c b/drivers/media/platform/nxp/imx-pxp.c
+index e62dc5c1a4aea..e4427e6487fba 100644
+--- a/drivers/media/platform/nxp/imx-pxp.c
++++ b/drivers/media/platform/nxp/imx-pxp.c
+@@ -1805,6 +1805,9 @@ static int pxp_probe(struct platform_device *pdev)
+ 		return PTR_ERR(mmio);
+ 	dev->regmap = devm_regmap_init_mmio(&pdev->dev, mmio,
+ 					    &pxp_regmap_config);
++	if (IS_ERR(dev->regmap))
++		return dev_err_probe(&pdev->dev, PTR_ERR(dev->regmap),
++				     "Failed to init regmap\n");
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0)
+diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c
+index 29130a9441e70..d12089370d91e 100644
+--- a/drivers/media/platform/qcom/venus/vdec.c
++++ b/drivers/media/platform/qcom/venus/vdec.c
+@@ -1255,7 +1255,7 @@ static int vdec_stop_output(struct venus_inst *inst)
+ 		break;
+ 	case VENUS_DEC_STATE_INIT:
+ 	case VENUS_DEC_STATE_CAPTURE_SETUP:
+-		ret = hfi_session_flush(inst, HFI_FLUSH_INPUT, true);
++		ret = hfi_session_flush(inst, HFI_FLUSH_ALL, true);
+ 		break;
+ 	default:
+ 		break;
+@@ -1747,6 +1747,7 @@ static int vdec_close(struct file *file)
+ 
+ 	vdec_pm_get(inst);
+ 
++	cancel_work_sync(&inst->delayed_process_work);
+ 	v4l2_m2m_ctx_release(inst->m2m_ctx);
+ 	v4l2_m2m_release(inst->m2m_dev);
+ 	vdec_ctrl_deinit(inst);
+diff --git a/drivers/media/platform/renesas/rcar-csi2.c b/drivers/media/platform/renesas/rcar-csi2.c
+index 582d5e35db0e5..2d464e43a5be8 100644
+--- a/drivers/media/platform/renesas/rcar-csi2.c
++++ b/drivers/media/platform/renesas/rcar-csi2.c
+@@ -1914,12 +1914,14 @@ static int rcsi2_probe(struct platform_device *pdev)
+ 
+ 	ret = v4l2_async_register_subdev(&priv->subdev);
+ 	if (ret < 0)
+-		goto error_async;
++		goto error_pm_runtime;
+ 
+ 	dev_info(priv->dev, "%d lanes found\n", priv->lanes);
+ 
+ 	return 0;
+ 
++error_pm_runtime:
++	pm_runtime_disable(&pdev->dev);
+ error_async:
+ 	v4l2_async_nf_unregister(&priv->notifier);
+ 	v4l2_async_nf_cleanup(&priv->notifier);
+@@ -1936,6 +1938,7 @@ static void rcsi2_remove(struct platform_device *pdev)
+ 	v4l2_async_nf_unregister(&priv->notifier);
+ 	v4l2_async_nf_cleanup(&priv->notifier);
+ 	v4l2_async_unregister_subdev(&priv->subdev);
++	v4l2_subdev_cleanup(&priv->subdev);
+ 
+ 	pm_runtime_disable(&pdev->dev);
+ 
+diff --git a/drivers/media/platform/renesas/rcar-vin/rcar-dma.c b/drivers/media/platform/renesas/rcar-vin/rcar-dma.c
+index e2c40abc6d3d1..21d5b2815e86a 100644
+--- a/drivers/media/platform/renesas/rcar-vin/rcar-dma.c
++++ b/drivers/media/platform/renesas/rcar-vin/rcar-dma.c
+@@ -742,12 +742,22 @@ static int rvin_setup(struct rvin_dev *vin)
+ 	 */
+ 	switch (vin->mbus_code) {
+ 	case MEDIA_BUS_FMT_YUYV8_1X16:
+-		/* BT.601/BT.1358 16bit YCbCr422 */
+-		vnmc |= VNMC_INF_YUV16;
++		if (vin->is_csi)
++			/* YCbCr422 8-bit */
++			vnmc |= VNMC_INF_YUV8_BT601;
++		else
++			/* BT.601/BT.1358 16bit YCbCr422 */
++			vnmc |= VNMC_INF_YUV16;
+ 		input_is_yuv = true;
+ 		break;
+ 	case MEDIA_BUS_FMT_UYVY8_1X16:
+-		vnmc |= VNMC_INF_YUV16 | VNMC_YCAL;
++		if (vin->is_csi)
++			/* YCbCr422 8-bit */
++			vnmc |= VNMC_INF_YUV8_BT601;
++		else
++			/* BT.601/BT.1358 16bit YCbCr422 */
++			vnmc |= VNMC_INF_YUV16;
++		vnmc |= VNMC_YCAL;
+ 		input_is_yuv = true;
+ 		break;
+ 	case MEDIA_BUS_FMT_UYVY8_2X8:
+diff --git a/drivers/media/platform/renesas/vsp1/vsp1_histo.c b/drivers/media/platform/renesas/vsp1/vsp1_histo.c
+index 71155282ca116..cd1c8778662e6 100644
+--- a/drivers/media/platform/renesas/vsp1/vsp1_histo.c
++++ b/drivers/media/platform/renesas/vsp1/vsp1_histo.c
+@@ -36,9 +36,8 @@ struct vsp1_histogram_buffer *
+ vsp1_histogram_buffer_get(struct vsp1_histogram *histo)
+ {
+ 	struct vsp1_histogram_buffer *buf = NULL;
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&histo->irqlock, flags);
++	spin_lock(&histo->irqlock);
+ 
+ 	if (list_empty(&histo->irqqueue))
+ 		goto done;
+@@ -49,7 +48,7 @@ vsp1_histogram_buffer_get(struct vsp1_histogram *histo)
+ 	histo->readout = true;
+ 
+ done:
+-	spin_unlock_irqrestore(&histo->irqlock, flags);
++	spin_unlock(&histo->irqlock);
+ 	return buf;
+ }
+ 
+@@ -58,7 +57,6 @@ void vsp1_histogram_buffer_complete(struct vsp1_histogram *histo,
+ 				    size_t size)
+ {
+ 	struct vsp1_pipeline *pipe = histo->entity.pipe;
+-	unsigned long flags;
+ 
+ 	/*
+ 	 * The pipeline pointer is guaranteed to be valid as this function is
+@@ -70,10 +68,10 @@ void vsp1_histogram_buffer_complete(struct vsp1_histogram *histo,
+ 	vb2_set_plane_payload(&buf->buf.vb2_buf, 0, size);
+ 	vb2_buffer_done(&buf->buf.vb2_buf, VB2_BUF_STATE_DONE);
+ 
+-	spin_lock_irqsave(&histo->irqlock, flags);
++	spin_lock(&histo->irqlock);
+ 	histo->readout = false;
+ 	wake_up(&histo->wait_queue);
+-	spin_unlock_irqrestore(&histo->irqlock, flags);
++	spin_unlock(&histo->irqlock);
+ }
+ 
+ /* -----------------------------------------------------------------------------
+@@ -124,11 +122,10 @@ static void histo_buffer_queue(struct vb2_buffer *vb)
+ 	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ 	struct vsp1_histogram *histo = vb2_get_drv_priv(vb->vb2_queue);
+ 	struct vsp1_histogram_buffer *buf = to_vsp1_histogram_buffer(vbuf);
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&histo->irqlock, flags);
++	spin_lock_irq(&histo->irqlock);
+ 	list_add_tail(&buf->queue, &histo->irqqueue);
+-	spin_unlock_irqrestore(&histo->irqlock, flags);
++	spin_unlock_irq(&histo->irqlock);
+ }
+ 
+ static int histo_start_streaming(struct vb2_queue *vq, unsigned int count)
+@@ -140,9 +137,8 @@ static void histo_stop_streaming(struct vb2_queue *vq)
+ {
+ 	struct vsp1_histogram *histo = vb2_get_drv_priv(vq);
+ 	struct vsp1_histogram_buffer *buffer;
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&histo->irqlock, flags);
++	spin_lock_irq(&histo->irqlock);
+ 
+ 	/* Remove all buffers from the IRQ queue. */
+ 	list_for_each_entry(buffer, &histo->irqqueue, queue)
+@@ -152,7 +148,7 @@ static void histo_stop_streaming(struct vb2_queue *vq)
+ 	/* Wait for the buffer being read out (if any) to complete. */
+ 	wait_event_lock_irq(histo->wait_queue, !histo->readout, histo->irqlock);
+ 
+-	spin_unlock_irqrestore(&histo->irqlock, flags);
++	spin_unlock_irq(&histo->irqlock);
+ }
+ 
+ static const struct vb2_ops histo_video_queue_qops = {
+diff --git a/drivers/media/platform/renesas/vsp1/vsp1_pipe.h b/drivers/media/platform/renesas/vsp1/vsp1_pipe.h
+index 674b5748d929e..85ecd53cda495 100644
+--- a/drivers/media/platform/renesas/vsp1/vsp1_pipe.h
++++ b/drivers/media/platform/renesas/vsp1/vsp1_pipe.h
+@@ -73,7 +73,7 @@ struct vsp1_partition_window {
+  * @wpf: The WPF partition window configuration
+  */
+ struct vsp1_partition {
+-	struct vsp1_partition_window rpf;
++	struct vsp1_partition_window rpf[VSP1_MAX_RPF];
+ 	struct vsp1_partition_window uds_sink;
+ 	struct vsp1_partition_window uds_source;
+ 	struct vsp1_partition_window sru;
+diff --git a/drivers/media/platform/renesas/vsp1/vsp1_rpf.c b/drivers/media/platform/renesas/vsp1/vsp1_rpf.c
+index c47579efc65f6..6055554fb0714 100644
+--- a/drivers/media/platform/renesas/vsp1/vsp1_rpf.c
++++ b/drivers/media/platform/renesas/vsp1/vsp1_rpf.c
+@@ -315,8 +315,8 @@ static void rpf_configure_partition(struct vsp1_entity *entity,
+ 	 * 'width' need to be adjusted.
+ 	 */
+ 	if (pipe->partitions > 1) {
+-		crop.width = pipe->partition->rpf.width;
+-		crop.left += pipe->partition->rpf.left;
++		crop.width = pipe->partition->rpf[rpf->entity.index].width;
++		crop.left += pipe->partition->rpf[rpf->entity.index].left;
+ 	}
+ 
+ 	if (pipe->interlaced) {
+@@ -371,7 +371,9 @@ static void rpf_partition(struct vsp1_entity *entity,
+ 			  unsigned int partition_idx,
+ 			  struct vsp1_partition_window *window)
+ {
+-	partition->rpf = *window;
++	struct vsp1_rwpf *rpf = to_rwpf(&entity->subdev);
++
++	partition->rpf[rpf->entity.index] = *window;
+ }
+ 
+ static const struct vsp1_entity_operations rpf_entity_ops = {
+diff --git a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-debugfs.h b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-debugfs.h
+index 8e1bfd8605247..3fe177b59b16d 100644
+--- a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-debugfs.h
++++ b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-debugfs.h
+@@ -16,8 +16,8 @@
+ void c8sectpfe_debugfs_init(struct c8sectpfei *);
+ void c8sectpfe_debugfs_exit(struct c8sectpfei *);
+ #else
+-static inline void c8sectpfe_debugfs_init(struct c8sectpfei *) {};
+-static inline void c8sectpfe_debugfs_exit(struct c8sectpfei *) {};
++static inline void c8sectpfe_debugfs_init(struct c8sectpfei *fei) {};
++static inline void c8sectpfe_debugfs_exit(struct c8sectpfei *fei) {};
+ #endif
+ 
+ #endif /* __C8SECTPFE_DEBUG_H */
+diff --git a/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-core.c b/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-core.c
+index 4acc3b90d03aa..7f771ea49b784 100644
+--- a/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-core.c
++++ b/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-core.c
+@@ -202,8 +202,8 @@ static int dcmipp_create_subdevs(struct dcmipp_device *dcmipp)
+ 	return 0;
+ 
+ err_init_entity:
+-	while (i > 0)
+-		dcmipp->pipe_cfg->ents[i - 1].release(dcmipp->entity[i - 1]);
++	while (i-- > 0)
++		dcmipp->pipe_cfg->ents[i].release(dcmipp->entity[i]);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c
+index 0b55314a80827..8f1361bcce3a6 100644
+--- a/drivers/media/rc/imon.c
++++ b/drivers/media/rc/imon.c
+@@ -1148,10 +1148,7 @@ static int imon_ir_change_protocol(struct rc_dev *rc, u64 *rc_proto)
+ 
+ 	memcpy(ictx->usb_tx_buf, &ir_proto_packet, sizeof(ir_proto_packet));
+ 
+-	if (!mutex_is_locked(&ictx->lock)) {
+-		unlock = true;
+-		mutex_lock(&ictx->lock);
+-	}
++	unlock = mutex_trylock(&ictx->lock);
+ 
+ 	retval = send_packet(ictx);
+ 	if (retval)
+diff --git a/drivers/media/rc/lirc_dev.c b/drivers/media/rc/lirc_dev.c
+index 52aea41677183..717c441b4a865 100644
+--- a/drivers/media/rc/lirc_dev.c
++++ b/drivers/media/rc/lirc_dev.c
+@@ -828,8 +828,10 @@ struct rc_dev *rc_dev_get_from_fd(int fd, bool write)
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+-	if (write && !(f.file->f_mode & FMODE_WRITE))
++	if (write && !(f.file->f_mode & FMODE_WRITE)) {
++		fdput(f);
+ 		return ERR_PTR(-EPERM);
++	}
+ 
+ 	fh = f.file->private_data;
+ 	dev = fh->rc;
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+index fbf58012becdf..22d83ac18eb73 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-init.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+@@ -23,11 +23,40 @@ static int dvb_usb_force_pid_filter_usage;
+ module_param_named(force_pid_filter_usage, dvb_usb_force_pid_filter_usage, int, 0444);
+ MODULE_PARM_DESC(force_pid_filter_usage, "force all dvb-usb-devices to use a PID filter, if any (default: 0).");
+ 
++static int dvb_usb_check_bulk_endpoint(struct dvb_usb_device *d, u8 endpoint)
++{
++	if (endpoint) {
++		int ret;
++
++		ret = usb_pipe_type_check(d->udev, usb_sndbulkpipe(d->udev, endpoint));
++		if (ret)
++			return ret;
++		ret = usb_pipe_type_check(d->udev, usb_rcvbulkpipe(d->udev, endpoint));
++		if (ret)
++			return ret;
++	}
++	return 0;
++}
++
++static void dvb_usb_clear_halt(struct dvb_usb_device *d, u8 endpoint)
++{
++	if (endpoint) {
++		usb_clear_halt(d->udev, usb_sndbulkpipe(d->udev, endpoint));
++		usb_clear_halt(d->udev, usb_rcvbulkpipe(d->udev, endpoint));
++	}
++}
++
+ static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs)
+ {
+ 	struct dvb_usb_adapter *adap;
+ 	int ret, n, o;
+ 
++	ret = dvb_usb_check_bulk_endpoint(d, d->props.generic_bulk_ctrl_endpoint);
++	if (ret)
++		return ret;
++	ret = dvb_usb_check_bulk_endpoint(d, d->props.generic_bulk_ctrl_endpoint_response);
++	if (ret)
++		return ret;
+ 	for (n = 0; n < d->props.num_adapters; n++) {
+ 		adap = &d->adapter[n];
+ 		adap->dev = d;
+@@ -103,10 +132,8 @@ static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs)
+ 	 * when reloading the driver w/o replugging the device
+ 	 * sometimes a timeout occurs, this helps
+ 	 */
+-	if (d->props.generic_bulk_ctrl_endpoint != 0) {
+-		usb_clear_halt(d->udev, usb_sndbulkpipe(d->udev, d->props.generic_bulk_ctrl_endpoint));
+-		usb_clear_halt(d->udev, usb_rcvbulkpipe(d->udev, d->props.generic_bulk_ctrl_endpoint));
+-	}
++	dvb_usb_clear_halt(d, d->props.generic_bulk_ctrl_endpoint);
++	dvb_usb_clear_halt(d, d->props.generic_bulk_ctrl_endpoint_response);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 4b685f883e4d7..a7d0ec22d95c0 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -2031,7 +2031,13 @@ static int uvc_ctrl_get_flags(struct uvc_device *dev,
+ 	else
+ 		ret = uvc_query_ctrl(dev, UVC_GET_INFO, ctrl->entity->id,
+ 				     dev->intfnum, info->selector, data, 1);
+-	if (!ret)
++
++	if (!ret) {
++		info->flags &= ~(UVC_CTRL_FLAG_GET_CUR |
++				 UVC_CTRL_FLAG_SET_CUR |
++				 UVC_CTRL_FLAG_AUTO_UPDATE |
++				 UVC_CTRL_FLAG_ASYNCHRONOUS);
++
+ 		info->flags |= (data[0] & UVC_CONTROL_CAP_GET ?
+ 				UVC_CTRL_FLAG_GET_CUR : 0)
+ 			    |  (data[0] & UVC_CONTROL_CAP_SET ?
+@@ -2040,6 +2046,7 @@ static int uvc_ctrl_get_flags(struct uvc_device *dev,
+ 				UVC_CTRL_FLAG_AUTO_UPDATE : 0)
+ 			    |  (data[0] & UVC_CONTROL_CAP_ASYNCHRONOUS ?
+ 				UVC_CTRL_FLAG_ASYNCHRONOUS : 0);
++	}
+ 
+ 	kfree(data);
+ 	return ret;
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 8fe24c98087e6..d435b6a6c295d 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -2580,7 +2580,17 @@ static const struct usb_device_id uvc_ids[] = {
+ 	  .bInterfaceClass	= USB_CLASS_VIDEO,
+ 	  .bInterfaceSubClass	= 1,
+ 	  .bInterfaceProtocol	= 0,
+-	  .driver_info		= UVC_INFO_QUIRK(UVC_QUIRK_RESTORE_CTRLS_ON_INIT) },
++	  .driver_info		= UVC_INFO_QUIRK(UVC_QUIRK_RESTORE_CTRLS_ON_INIT
++					       | UVC_QUIRK_INVALID_DEVICE_SOF) },
++	/* Logitech HD Pro Webcam C922 */
++	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
++				| USB_DEVICE_ID_MATCH_INT_INFO,
++	  .idVendor		= 0x046d,
++	  .idProduct		= 0x085c,
++	  .bInterfaceClass	= USB_CLASS_VIDEO,
++	  .bInterfaceSubClass	= 1,
++	  .bInterfaceProtocol	= 0,
++	  .driver_info		= UVC_INFO_QUIRK(UVC_QUIRK_INVALID_DEVICE_SOF) },
+ 	/* Logitech Rally Bar Huddle */
+ 	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
+ 				| USB_DEVICE_ID_MATCH_INT_INFO,
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index 7cbf4692bd875..51f4f653b983d 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -529,6 +529,17 @@ uvc_video_clock_decode(struct uvc_streaming *stream, struct uvc_buffer *buf,
+ 	stream->clock.last_sof = dev_sof;
+ 
+ 	host_sof = usb_get_current_frame_number(stream->dev->udev);
++
++	/*
++	 * On some devices, like the Logitech C922, the device SOF does not run
++	 * at a stable rate of 1kHz. For those devices use the host SOF instead.
++	 * In the tests performed so far, this improves the timestamp precision.
++	 * This is probably explained by a small packet handling jitter from the
++	 * host, but the exact reason hasn't been fully determined.
++	 */
++	if (stream->dev->quirks & UVC_QUIRK_INVALID_DEVICE_SOF)
++		dev_sof = host_sof;
++
+ 	time = uvc_video_get_time();
+ 
+ 	/*
+@@ -709,11 +720,11 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
+ 	unsigned long flags;
+ 	u64 timestamp;
+ 	u32 delta_stc;
+-	u32 y1, y2;
++	u32 y1;
+ 	u32 x1, x2;
+ 	u32 mean;
+ 	u32 sof;
+-	u64 y;
++	u64 y, y2;
+ 
+ 	if (!uvc_hw_timestamps_param)
+ 		return;
+@@ -753,7 +764,7 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
+ 	sof = y;
+ 
+ 	uvc_dbg(stream->dev, CLOCK,
+-		"%s: PTS %u y %llu.%06llu SOF %u.%06llu (x1 %u x2 %u y1 %u y2 %u SOF offset %u)\n",
++		"%s: PTS %u y %llu.%06llu SOF %u.%06llu (x1 %u x2 %u y1 %u y2 %llu SOF offset %u)\n",
+ 		stream->dev->name, buf->pts,
+ 		y >> 16, div_u64((y & 0xffff) * 1000000, 65536),
+ 		sof >> 16, div_u64(((u64)sof & 0xffff) * 1000000LLU, 65536),
+@@ -768,7 +779,7 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
+ 		goto done;
+ 
+ 	y1 = NSEC_PER_SEC;
+-	y2 = (u32)ktime_to_ns(ktime_sub(last->host_time, first->host_time)) + y1;
++	y2 = ktime_to_ns(ktime_sub(last->host_time, first->host_time)) + y1;
+ 
+ 	/*
+ 	 * Interpolated and host SOF timestamps can wrap around at slightly
+@@ -789,7 +800,7 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
+ 	timestamp = ktime_to_ns(first->host_time) + y - y1;
+ 
+ 	uvc_dbg(stream->dev, CLOCK,
+-		"%s: SOF %u.%06llu y %llu ts %llu buf ts %llu (x1 %u/%u/%u x2 %u/%u/%u y1 %u y2 %u)\n",
++		"%s: SOF %u.%06llu y %llu ts %llu buf ts %llu (x1 %u/%u/%u x2 %u/%u/%u y1 %u y2 %llu)\n",
+ 		stream->dev->name,
+ 		sof >> 16, div_u64(((u64)sof & 0xffff) * 1000000LLU, 65536),
+ 		y, timestamp, vbuf->vb2_buf.timestamp,
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index 3653b2c8a86cb..e5b12717016fa 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -75,6 +75,7 @@
+ #define UVC_QUIRK_WAKE_AUTOSUSPEND	0x00002000
+ #define UVC_QUIRK_NO_RESET_RESUME	0x00004000
+ #define UVC_QUIRK_DISABLE_AUTOSUSPEND	0x00008000
++#define UVC_QUIRK_INVALID_DEVICE_SOF	0x00010000
+ 
+ /* Format flags */
+ #define UVC_FMT_FLAG_COMPRESSED		0x00000001
+diff --git a/drivers/media/v4l2-core/v4l2-async.c b/drivers/media/v4l2-core/v4l2-async.c
+index 222f01665f7ce..c477723c07bf8 100644
+--- a/drivers/media/v4l2-core/v4l2-async.c
++++ b/drivers/media/v4l2-core/v4l2-async.c
+@@ -323,6 +323,9 @@ static int v4l2_async_create_ancillary_links(struct v4l2_async_notifier *n,
+ 	    sd->entity.function != MEDIA_ENT_F_FLASH)
+ 		return 0;
+ 
++	if (!n->sd)
++		return 0;
++
+ 	link = media_create_ancillary_link(&n->sd->entity, &sd->entity);
+ 
+ 	return IS_ERR(link) ? PTR_ERR(link) : 0;
+diff --git a/drivers/memory/Kconfig b/drivers/memory/Kconfig
+index 8efdd1f971395..c82d8d8a16eaf 100644
+--- a/drivers/memory/Kconfig
++++ b/drivers/memory/Kconfig
+@@ -167,7 +167,7 @@ config FSL_CORENET_CF
+ 	  represents a coherency violation.
+ 
+ config FSL_IFC
+-	bool "Freescale IFC driver" if COMPILE_TEST
++	bool "Freescale IFC driver"
+ 	depends on FSL_SOC || ARCH_LAYERSCAPE || SOC_LS1021A || COMPILE_TEST
+ 	depends on HAS_IOMEM
+ 
+diff --git a/drivers/mfd/Makefile b/drivers/mfd/Makefile
+index c66f07edcd0e6..db1ba39de3b59 100644
+--- a/drivers/mfd/Makefile
++++ b/drivers/mfd/Makefile
+@@ -280,7 +280,5 @@ obj-$(CONFIG_MFD_INTEL_M10_BMC_PMCI)   += intel-m10-bmc-pmci.o
+ obj-$(CONFIG_MFD_ATC260X)	+= atc260x-core.o
+ obj-$(CONFIG_MFD_ATC260X_I2C)	+= atc260x-i2c.o
+ 
+-rsmu-i2c-objs			:= rsmu_core.o rsmu_i2c.o
+-rsmu-spi-objs			:= rsmu_core.o rsmu_spi.o
+-obj-$(CONFIG_MFD_RSMU_I2C)	+= rsmu-i2c.o
+-obj-$(CONFIG_MFD_RSMU_SPI)	+= rsmu-spi.o
++obj-$(CONFIG_MFD_RSMU_I2C)	+= rsmu_i2c.o rsmu_core.o
++obj-$(CONFIG_MFD_RSMU_SPI)	+= rsmu_spi.o rsmu_core.o
+diff --git a/drivers/mfd/omap-usb-tll.c b/drivers/mfd/omap-usb-tll.c
+index b6303ddb013b0..f68dd02814638 100644
+--- a/drivers/mfd/omap-usb-tll.c
++++ b/drivers/mfd/omap-usb-tll.c
+@@ -230,8 +230,7 @@ static int usbtll_omap_probe(struct platform_device *pdev)
+ 		break;
+ 	}
+ 
+-	tll = devm_kzalloc(dev, sizeof(*tll) + sizeof(tll->ch_clk[nch]),
+-			   GFP_KERNEL);
++	tll = devm_kzalloc(dev, struct_size(tll, ch_clk, nch), GFP_KERNEL);
+ 	if (!tll) {
+ 		pm_runtime_put_sync(dev);
+ 		pm_runtime_disable(dev);
+diff --git a/drivers/mfd/rsmu_core.c b/drivers/mfd/rsmu_core.c
+index 29437fd0bd5bf..fd04a6e5dfa31 100644
+--- a/drivers/mfd/rsmu_core.c
++++ b/drivers/mfd/rsmu_core.c
+@@ -78,11 +78,13 @@ int rsmu_core_init(struct rsmu_ddata *rsmu)
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(rsmu_core_init);
+ 
+ void rsmu_core_exit(struct rsmu_ddata *rsmu)
+ {
+ 	mutex_destroy(&rsmu->lock);
+ }
++EXPORT_SYMBOL_GPL(rsmu_core_exit);
+ 
+ MODULE_DESCRIPTION("Renesas SMU core driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/misc/eeprom/ee1004.c b/drivers/misc/eeprom/ee1004.c
+index 21feebc3044c3..71ca66d1df82c 100644
+--- a/drivers/misc/eeprom/ee1004.c
++++ b/drivers/misc/eeprom/ee1004.c
+@@ -185,6 +185,8 @@ BIN_ATTRIBUTE_GROUPS(ee1004);
+ static void ee1004_probe_temp_sensor(struct i2c_client *client)
+ {
+ 	struct i2c_board_info info = { .type = "jc42" };
++	unsigned short addr = 0x18 | (client->addr & 7);
++	unsigned short addr_list[] = { addr, I2C_CLIENT_END };
+ 	u8 byte14;
+ 	int ret;
+ 
+@@ -193,9 +195,7 @@ static void ee1004_probe_temp_sensor(struct i2c_client *client)
+ 	if (ret != 1 || !(byte14 & BIT(7)))
+ 		return;
+ 
+-	info.addr = 0x18 | (client->addr & 7);
+-
+-	i2c_new_client_device(client->adapter, &info);
++	i2c_new_scanned_device(client->adapter, &info, addr_list, NULL);
+ }
+ 
+ static void ee1004_cleanup(int idx, struct ee1004_bus_data *bd)
+diff --git a/drivers/mtd/nand/raw/Kconfig b/drivers/mtd/nand/raw/Kconfig
+index cbf8ae85e1ae0..6142573085169 100644
+--- a/drivers/mtd/nand/raw/Kconfig
++++ b/drivers/mtd/nand/raw/Kconfig
+@@ -234,8 +234,7 @@ config MTD_NAND_FSL_IFC
+ 	tristate "Freescale IFC NAND controller"
+ 	depends on FSL_SOC || ARCH_LAYERSCAPE || SOC_LS1021A || COMPILE_TEST
+ 	depends on HAS_IOMEM
+-	select FSL_IFC
+-	select MEMORY
++	depends on FSL_IFC
+ 	help
+ 	  Various Freescale chips e.g P1010, include a NAND Flash machine
+ 	  with built-in hardware ECC capabilities.
+diff --git a/drivers/mtd/spi-nor/winbond.c b/drivers/mtd/spi-nor/winbond.c
+index 142fb27b2ea9a..e065e4fd42a33 100644
+--- a/drivers/mtd/spi-nor/winbond.c
++++ b/drivers/mtd/spi-nor/winbond.c
+@@ -105,7 +105,9 @@ static const struct flash_info winbond_nor_parts[] = {
+ 	}, {
+ 		.id = SNOR_ID(0xef, 0x40, 0x18),
+ 		.name = "w25q128",
++		.size = SZ_16M,
+ 		.flags = SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB,
++		.no_sfdp_flags = SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
+ 	}, {
+ 		.id = SNOR_ID(0xef, 0x40, 0x19),
+ 		.name = "w25q256",
+diff --git a/drivers/mtd/tests/Makefile b/drivers/mtd/tests/Makefile
+index 5de0378f90dbd..7dae831ee8b6b 100644
+--- a/drivers/mtd/tests/Makefile
++++ b/drivers/mtd/tests/Makefile
+@@ -1,19 +1,19 @@
+ # SPDX-License-Identifier: GPL-2.0
+-obj-$(CONFIG_MTD_TESTS) += mtd_oobtest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_pagetest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_readtest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_speedtest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_stresstest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_subpagetest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_torturetest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_nandecctest.o
+-obj-$(CONFIG_MTD_TESTS) += mtd_nandbiterrs.o
++obj-$(CONFIG_MTD_TESTS) += mtd_oobtest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_pagetest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_readtest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_speedtest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_stresstest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_subpagetest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_torturetest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_nandecctest.o mtd_test.o
++obj-$(CONFIG_MTD_TESTS) += mtd_nandbiterrs.o mtd_test.o
+ 
+-mtd_oobtest-objs := oobtest.o mtd_test.o
+-mtd_pagetest-objs := pagetest.o mtd_test.o
+-mtd_readtest-objs := readtest.o mtd_test.o
+-mtd_speedtest-objs := speedtest.o mtd_test.o
+-mtd_stresstest-objs := stresstest.o mtd_test.o
+-mtd_subpagetest-objs := subpagetest.o mtd_test.o
+-mtd_torturetest-objs := torturetest.o mtd_test.o
+-mtd_nandbiterrs-objs := nandbiterrs.o mtd_test.o
++mtd_oobtest-objs := oobtest.o
++mtd_pagetest-objs := pagetest.o
++mtd_readtest-objs := readtest.o
++mtd_speedtest-objs := speedtest.o
++mtd_stresstest-objs := stresstest.o
++mtd_subpagetest-objs := subpagetest.o
++mtd_torturetest-objs := torturetest.o
++mtd_nandbiterrs-objs := nandbiterrs.o
+diff --git a/drivers/mtd/tests/mtd_test.c b/drivers/mtd/tests/mtd_test.c
+index c84250beffdc9..f391e0300cdc9 100644
+--- a/drivers/mtd/tests/mtd_test.c
++++ b/drivers/mtd/tests/mtd_test.c
+@@ -25,6 +25,7 @@ int mtdtest_erase_eraseblock(struct mtd_info *mtd, unsigned int ebnum)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mtdtest_erase_eraseblock);
+ 
+ static int is_block_bad(struct mtd_info *mtd, unsigned int ebnum)
+ {
+@@ -57,6 +58,7 @@ int mtdtest_scan_for_bad_eraseblocks(struct mtd_info *mtd, unsigned char *bbt,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mtdtest_scan_for_bad_eraseblocks);
+ 
+ int mtdtest_erase_good_eraseblocks(struct mtd_info *mtd, unsigned char *bbt,
+ 				unsigned int eb, int ebcnt)
+@@ -75,6 +77,7 @@ int mtdtest_erase_good_eraseblocks(struct mtd_info *mtd, unsigned char *bbt,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(mtdtest_erase_good_eraseblocks);
+ 
+ int mtdtest_read(struct mtd_info *mtd, loff_t addr, size_t size, void *buf)
+ {
+@@ -92,6 +95,7 @@ int mtdtest_read(struct mtd_info *mtd, loff_t addr, size_t size, void *buf)
+ 
+ 	return err;
+ }
++EXPORT_SYMBOL_GPL(mtdtest_read);
+ 
+ int mtdtest_write(struct mtd_info *mtd, loff_t addr, size_t size,
+ 		const void *buf)
+@@ -107,3 +111,8 @@ int mtdtest_write(struct mtd_info *mtd, loff_t addr, size_t size,
+ 
+ 	return err;
+ }
++EXPORT_SYMBOL_GPL(mtdtest_write);
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("MTD function test helpers");
++MODULE_AUTHOR("Akinobu Mita");
+diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c
+index e5ac3cd0bbae6..c7ba7a15c9f78 100644
+--- a/drivers/mtd/ubi/eba.c
++++ b/drivers/mtd/ubi/eba.c
+@@ -1564,6 +1564,7 @@ int self_check_eba(struct ubi_device *ubi, struct ubi_attach_info *ai_fastmap,
+ 					  GFP_KERNEL);
+ 		if (!fm_eba[i]) {
+ 			ret = -ENOMEM;
++			kfree(scan_eba[i]);
+ 			goto out_free;
+ 		}
+ 
+@@ -1599,7 +1600,7 @@ int self_check_eba(struct ubi_device *ubi, struct ubi_attach_info *ai_fastmap,
+ 	}
+ 
+ out_free:
+-	for (i = 0; i < num_volumes; i++) {
++	while (--i >= 0) {
+ 		if (!ubi->volumes[i])
+ 			continue;
+ 
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index d19aabf5d4fba..2ed0da0684906 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1121,13 +1121,10 @@ static struct slave *bond_find_best_slave(struct bonding *bond)
+ 	return bestslave;
+ }
+ 
++/* must be called in RCU critical section or with RTNL held */
+ static bool bond_should_notify_peers(struct bonding *bond)
+ {
+-	struct slave *slave;
+-
+-	rcu_read_lock();
+-	slave = rcu_dereference(bond->curr_active_slave);
+-	rcu_read_unlock();
++	struct slave *slave = rcu_dereference_rtnl(bond->curr_active_slave);
+ 
+ 	if (!slave || !bond->send_peer_notif ||
+ 	    bond->send_peer_notif %
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 8f50abe739b71..0783fc121bbbf 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -2256,6 +2256,9 @@ static int b53_change_mtu(struct dsa_switch *ds, int port, int mtu)
+ 	if (is5325(dev) || is5365(dev))
+ 		return -EOPNOTSUPP;
+ 
++	if (!dsa_is_cpu_port(ds, port))
++		return 0;
++
+ 	enable_jumbo = (mtu >= JMS_MIN_SIZE);
+ 	allow_10_100 = (dev->chip_id == BCM583XX_DEVICE_ID);
+ 
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 0580b2fee21c3..baa1eeb9a1b04 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -3917,6 +3917,13 @@ static int ksz_hsr_join(struct dsa_switch *ds, int port, struct net_device *hsr,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	/* KSZ9477 can only perform HSR offloading for up to two ports */
++	if (hweight8(dev->hsr_ports) >= 2) {
++		NL_SET_ERR_MSG_MOD(extack,
++				   "Cannot offload more than two ports - using software HSR");
++		return -EOPNOTSUPP;
++	}
++
+ 	/* Self MAC address filtering, to avoid frames traversing
+ 	 * the HSR ring more than once.
+ 	 */
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 07c897b13de13..5b4e2ce5470d9 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3626,7 +3626,8 @@ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+ 	mv88e6xxx_reg_lock(chip);
+ 	if (chip->info->ops->port_set_jumbo_size)
+ 		ret = chip->info->ops->port_set_jumbo_size(chip, port, new_mtu);
+-	else if (chip->info->ops->set_max_frame_size)
++	else if (chip->info->ops->set_max_frame_size &&
++		 dsa_is_cpu_port(ds, port))
+ 		ret = chip->info->ops->set_max_frame_size(chip, new_mtu);
+ 	mv88e6xxx_reg_unlock(chip);
+ 
+diff --git a/drivers/net/ethernet/brocade/bna/bna_types.h b/drivers/net/ethernet/brocade/bna/bna_types.h
+index a5ebd7110e073..986f43d277119 100644
+--- a/drivers/net/ethernet/brocade/bna/bna_types.h
++++ b/drivers/net/ethernet/brocade/bna/bna_types.h
+@@ -416,7 +416,7 @@ struct bna_ib {
+ /* Tx object */
+ 
+ /* Tx datapath control structure */
+-#define BNA_Q_NAME_SIZE		16
++#define BNA_Q_NAME_SIZE		(IFNAMSIZ + 6)
+ struct bna_tcb {
+ 	/* Fast path */
+ 	void			**sw_qpt;
+diff --git a/drivers/net/ethernet/brocade/bna/bnad.c b/drivers/net/ethernet/brocade/bna/bnad.c
+index fe121d36112d5..ece6f3b483273 100644
+--- a/drivers/net/ethernet/brocade/bna/bnad.c
++++ b/drivers/net/ethernet/brocade/bna/bnad.c
+@@ -1534,8 +1534,9 @@ bnad_tx_msix_register(struct bnad *bnad, struct bnad_tx_info *tx_info,
+ 
+ 	for (i = 0; i < num_txqs; i++) {
+ 		vector_num = tx_info->tcb[i]->intr_vector;
+-		sprintf(tx_info->tcb[i]->name, "%s TXQ %d", bnad->netdev->name,
+-				tx_id + tx_info->tcb[i]->id);
++		snprintf(tx_info->tcb[i]->name, BNA_Q_NAME_SIZE, "%s TXQ %d",
++			 bnad->netdev->name,
++			 tx_id + tx_info->tcb[i]->id);
+ 		err = request_irq(bnad->msix_table[vector_num].vector,
+ 				  (irq_handler_t)bnad_msix_tx, 0,
+ 				  tx_info->tcb[i]->name,
+@@ -1585,9 +1586,9 @@ bnad_rx_msix_register(struct bnad *bnad, struct bnad_rx_info *rx_info,
+ 
+ 	for (i = 0; i < num_rxps; i++) {
+ 		vector_num = rx_info->rx_ctrl[i].ccb->intr_vector;
+-		sprintf(rx_info->rx_ctrl[i].ccb->name, "%s CQ %d",
+-			bnad->netdev->name,
+-			rx_id + rx_info->rx_ctrl[i].ccb->id);
++		snprintf(rx_info->rx_ctrl[i].ccb->name, BNA_Q_NAME_SIZE,
++			 "%s CQ %d", bnad->netdev->name,
++			 rx_id + rx_info->rx_ctrl[i].ccb->id);
+ 		err = request_irq(bnad->msix_table[vector_num].vector,
+ 				  (irq_handler_t)bnad_msix_rx, 0,
+ 				  rx_info->rx_ctrl[i].ccb->name,
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 5f0c9e1771dbf..7ebd61a3a49b0 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -79,7 +79,8 @@ MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
+ #define GMAC0_IRQ4_8 (GMAC0_MIB_INT_BIT | GMAC0_RX_OVERRUN_INT_BIT)
+ 
+ #define GMAC_OFFLOAD_FEATURES (NETIF_F_SG | NETIF_F_IP_CSUM | \
+-			       NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM)
++			       NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM | \
++			       NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6)
+ 
+ /**
+  * struct gmac_queue_page - page buffer per-page info
+@@ -1148,13 +1149,25 @@ static int gmac_map_tx_bufs(struct net_device *netdev, struct sk_buff *skb,
+ 	skb_frag_t *skb_frag;
+ 	dma_addr_t mapping;
+ 	void *buffer;
++	u16 mss;
+ 	int ret;
+ 
+-	/* TODO: implement proper TSO using MTU in word3 */
+ 	word1 = skb->len;
+ 	word3 = SOF_BIT;
+ 
+-	if (skb->len >= ETH_FRAME_LEN) {
++	mss = skb_shinfo(skb)->gso_size;
++	if (mss) {
++		/* This means we are dealing with TCP and skb->len is the
++		 * sum total of all the segments. The TSO will deal with
++		 * chopping this up for us.
++		 */
++		/* The accelerator needs the full frame size here */
++		mss += skb_tcp_all_headers(skb);
++		netdev_dbg(netdev, "segment offloading mss = %04x len=%04x\n",
++			   mss, skb->len);
++		word1 |= TSS_MTU_ENABLE_BIT;
++		word3 |= mss;
++	} else if (skb->len >= ETH_FRAME_LEN) {
+ 		/* Hardware offloaded checksumming isn't working on frames
+ 		 * bigger than 1514 bytes. A hypothesis about this is that the
+ 		 * checksum buffer is only 1518 bytes, so when the frames get
+@@ -1169,7 +1182,9 @@ static int gmac_map_tx_bufs(struct net_device *netdev, struct sk_buff *skb,
+ 				return ret;
+ 		}
+ 		word1 |= TSS_BYPASS_BIT;
+-	} else if (skb->ip_summed == CHECKSUM_PARTIAL) {
++	}
++
++	if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		int tcp = 0;
+ 
+ 		/* We do not switch off the checksumming on non TCP/UDP
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 881ece735dcf1..fb19295529a21 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1361,6 +1361,12 @@ fec_stop(struct net_device *ndev)
+ 		writel(FEC_ECR_ETHEREN, fep->hwp + FEC_ECNTRL);
+ 		writel(rmii_mode, fep->hwp + FEC_R_CNTRL);
+ 	}
++
++	if (fep->bufdesc_ex) {
++		val = readl(fep->hwp + FEC_ECNTRL);
++		val |= FEC_ECR_EN1588;
++		writel(val, fep->hwp + FEC_ECNTRL);
++	}
+ }
+ 
+ static void
+diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
+index 24a64ec1073e2..e7fb7d6d283df 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx.c
++++ b/drivers/net/ethernet/google/gve/gve_tx.c
+@@ -158,15 +158,16 @@ static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx,
+ 			      u32 to_do)
+ {
+ 	struct gve_tx_buffer_state *info;
+-	u32 clean_end = tx->done + to_do;
+ 	u64 pkts = 0, bytes = 0;
+ 	size_t space_freed = 0;
+ 	u32 xsk_complete = 0;
+ 	u32 idx;
++	int i;
+ 
+-	for (; tx->done < clean_end; tx->done++) {
++	for (i = 0; i < to_do; i++) {
+ 		idx = tx->done & tx->mask;
+ 		info = &tx->info[idx];
++		tx->done++;
+ 
+ 		if (unlikely(!info->xdp.size))
+ 			continue;
+diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+index 0b3cca3fc7921..f879426cb5523 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
++++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+@@ -866,22 +866,42 @@ static bool gve_can_send_tso(const struct sk_buff *skb)
+ 	const int header_len = skb_tcp_all_headers(skb);
+ 	const int gso_size = shinfo->gso_size;
+ 	int cur_seg_num_bufs;
++	int prev_frag_size;
+ 	int cur_seg_size;
+ 	int i;
+ 
+ 	cur_seg_size = skb_headlen(skb) - header_len;
++	prev_frag_size = skb_headlen(skb);
+ 	cur_seg_num_bufs = cur_seg_size > 0;
+ 
+ 	for (i = 0; i < shinfo->nr_frags; i++) {
+ 		if (cur_seg_size >= gso_size) {
+ 			cur_seg_size %= gso_size;
+ 			cur_seg_num_bufs = cur_seg_size > 0;
++
++			if (prev_frag_size > GVE_TX_MAX_BUF_SIZE_DQO) {
++				int prev_frag_remain = prev_frag_size %
++					GVE_TX_MAX_BUF_SIZE_DQO;
++
++				/* If the last descriptor of the previous frag
++				 * is less than cur_seg_size, the segment will
++				 * span two descriptors in the previous frag.
++				 * Since max gso size (9728) is less than
++				 * GVE_TX_MAX_BUF_SIZE_DQO, it is impossible
++				 * for the segment to span more than two
++				 * descriptors.
++				 */
++				if (prev_frag_remain &&
++				    cur_seg_size > prev_frag_remain)
++					cur_seg_num_bufs++;
++			}
+ 		}
+ 
+ 		if (unlikely(++cur_seg_num_bufs > max_bufs_per_seg))
+ 			return false;
+ 
+-		cur_seg_size += skb_frag_size(&shinfo->frags[i]);
++		prev_frag_size = skb_frag_size(&shinfo->frags[i]);
++		cur_seg_size += prev_frag_size;
+ 	}
+ 
+ 	return true;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/Makefile b/drivers/net/ethernet/hisilicon/hns3/Makefile
+index 8e9293e57bfd5..e8af26da1fc1e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/Makefile
++++ b/drivers/net/ethernet/hisilicon/hns3/Makefile
+@@ -15,15 +15,14 @@ hns3-objs = hns3_enet.o hns3_ethtool.o hns3_debugfs.o
+ 
+ hns3-$(CONFIG_HNS3_DCB) += hns3_dcbnl.o
+ 
+-obj-$(CONFIG_HNS3_HCLGEVF) += hclgevf.o
++obj-$(CONFIG_HNS3_HCLGEVF) += hclgevf.o hclge-common.o
+ 
+-hclgevf-objs = hns3vf/hclgevf_main.o hns3vf/hclgevf_mbx.o  hns3vf/hclgevf_devlink.o hns3vf/hclgevf_regs.o \
+-		hns3_common/hclge_comm_cmd.o hns3_common/hclge_comm_rss.o hns3_common/hclge_comm_tqp_stats.o
++hclge-common-objs += hns3_common/hclge_comm_cmd.o hns3_common/hclge_comm_rss.o hns3_common/hclge_comm_tqp_stats.o
+ 
+-obj-$(CONFIG_HNS3_HCLGE) += hclge.o
++hclgevf-objs = hns3vf/hclgevf_main.o hns3vf/hclgevf_mbx.o  hns3vf/hclgevf_devlink.o hns3vf/hclgevf_regs.o
++
++obj-$(CONFIG_HNS3_HCLGE) += hclge.o hclge-common.o
+ hclge-objs = hns3pf/hclge_main.o hns3pf/hclge_mdio.o hns3pf/hclge_tm.o hns3pf/hclge_regs.o \
+ 		hns3pf/hclge_mbx.o hns3pf/hclge_err.o  hns3pf/hclge_debugfs.o hns3pf/hclge_ptp.o hns3pf/hclge_devlink.o \
+-		hns3_common/hclge_comm_cmd.o hns3_common/hclge_comm_rss.o hns3_common/hclge_comm_tqp_stats.o
+-
+ 
+ hclge-$(CONFIG_HNS3_DCB) += hns3pf/hclge_dcb.o
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c
+index ea40b594dbac7..4ad4e8ab2f1f3 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c
+@@ -48,6 +48,7 @@ void hclge_comm_cmd_reuse_desc(struct hclge_desc *desc, bool is_read)
+ 	else
+ 		desc->flag &= cpu_to_le16(~HCLGE_COMM_CMD_FLAG_WR);
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_cmd_reuse_desc);
+ 
+ static void hclge_comm_set_default_capability(struct hnae3_ae_dev *ae_dev,
+ 					      bool is_pf)
+@@ -72,6 +73,7 @@ void hclge_comm_cmd_setup_basic_desc(struct hclge_desc *desc,
+ 	if (is_read)
+ 		desc->flag |= cpu_to_le16(HCLGE_COMM_CMD_FLAG_WR);
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_cmd_setup_basic_desc);
+ 
+ int hclge_comm_firmware_compat_config(struct hnae3_ae_dev *ae_dev,
+ 				      struct hclge_comm_hw *hw, bool en)
+@@ -517,6 +519,7 @@ int hclge_comm_cmd_send(struct hclge_comm_hw *hw, struct hclge_desc *desc,
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_cmd_send);
+ 
+ static void hclge_comm_cmd_uninit_regs(struct hclge_comm_hw *hw)
+ {
+@@ -553,6 +556,7 @@ void hclge_comm_cmd_uninit(struct hnae3_ae_dev *ae_dev,
+ 	hclge_comm_free_cmd_desc(&cmdq->csq);
+ 	hclge_comm_free_cmd_desc(&cmdq->crq);
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_cmd_uninit);
+ 
+ int hclge_comm_cmd_queue_init(struct pci_dev *pdev, struct hclge_comm_hw *hw)
+ {
+@@ -591,6 +595,7 @@ int hclge_comm_cmd_queue_init(struct pci_dev *pdev, struct hclge_comm_hw *hw)
+ 	hclge_comm_free_cmd_desc(&hw->cmq.csq);
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_cmd_queue_init);
+ 
+ void hclge_comm_cmd_init_ops(struct hclge_comm_hw *hw,
+ 			     const struct hclge_comm_cmq_ops *ops)
+@@ -602,6 +607,7 @@ void hclge_comm_cmd_init_ops(struct hclge_comm_hw *hw,
+ 		cmdq->ops.trace_cmd_get = ops->trace_cmd_get;
+ 	}
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_cmd_init_ops);
+ 
+ int hclge_comm_cmd_init(struct hnae3_ae_dev *ae_dev, struct hclge_comm_hw *hw,
+ 			u32 *fw_version, bool is_pf,
+@@ -672,3 +678,8 @@ int hclge_comm_cmd_init(struct hnae3_ae_dev *ae_dev, struct hclge_comm_hw *hw,
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_cmd_init);
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("HNS3: Hisilicon Ethernet PF/VF Common Library");
++MODULE_AUTHOR("Huawei Tech. Co., Ltd.");
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_rss.c b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_rss.c
+index b4ae2160aff4f..4e2bb6556b1ce 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_rss.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_rss.c
+@@ -62,6 +62,7 @@ int hclge_comm_rss_init_cfg(struct hnae3_handle *nic,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_rss_init_cfg);
+ 
+ void hclge_comm_get_rss_tc_info(u16 rss_size, u8 hw_tc_map, u16 *tc_offset,
+ 				u16 *tc_valid, u16 *tc_size)
+@@ -78,6 +79,7 @@ void hclge_comm_get_rss_tc_info(u16 rss_size, u8 hw_tc_map, u16 *tc_offset,
+ 		tc_offset[i] = (hw_tc_map & BIT(i)) ? rss_size * i : 0;
+ 	}
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_get_rss_tc_info);
+ 
+ int hclge_comm_set_rss_tc_mode(struct hclge_comm_hw *hw, u16 *tc_offset,
+ 			       u16 *tc_valid, u16 *tc_size)
+@@ -113,6 +115,7 @@ int hclge_comm_set_rss_tc_mode(struct hclge_comm_hw *hw, u16 *tc_offset,
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_set_rss_tc_mode);
+ 
+ int hclge_comm_set_rss_hash_key(struct hclge_comm_rss_cfg *rss_cfg,
+ 				struct hclge_comm_hw *hw, const u8 *key,
+@@ -143,6 +146,7 @@ int hclge_comm_set_rss_hash_key(struct hclge_comm_rss_cfg *rss_cfg,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_set_rss_hash_key);
+ 
+ int hclge_comm_set_rss_tuple(struct hnae3_ae_dev *ae_dev,
+ 			     struct hclge_comm_hw *hw,
+@@ -185,11 +189,13 @@ int hclge_comm_set_rss_tuple(struct hnae3_ae_dev *ae_dev,
+ 	rss_cfg->rss_tuple_sets.ipv6_fragment_en = req->ipv6_fragment_en;
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_set_rss_tuple);
+ 
+ u32 hclge_comm_get_rss_key_size(struct hnae3_handle *handle)
+ {
+ 	return HCLGE_COMM_RSS_KEY_SIZE;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_get_rss_key_size);
+ 
+ int hclge_comm_parse_rss_hfunc(struct hclge_comm_rss_cfg *rss_cfg,
+ 			       const u8 hfunc, u8 *hash_algo)
+@@ -217,6 +223,7 @@ void hclge_comm_rss_indir_init_cfg(struct hnae3_ae_dev *ae_dev,
+ 	for (i = 0; i < ae_dev->dev_specs.rss_ind_tbl_size; i++)
+ 		rss_cfg->rss_indirection_tbl[i] = i % rss_cfg->rss_size;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_rss_indir_init_cfg);
+ 
+ int hclge_comm_get_rss_tuple(struct hclge_comm_rss_cfg *rss_cfg, int flow_type,
+ 			     u8 *tuple_sets)
+@@ -250,6 +257,7 @@ int hclge_comm_get_rss_tuple(struct hclge_comm_rss_cfg *rss_cfg, int flow_type,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_get_rss_tuple);
+ 
+ static void
+ hclge_comm_append_rss_msb_info(struct hclge_comm_rss_ind_tbl_cmd *req,
+@@ -304,6 +312,7 @@ int hclge_comm_set_rss_indir_table(struct hnae3_ae_dev *ae_dev,
+ 	}
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_set_rss_indir_table);
+ 
+ int hclge_comm_set_rss_input_tuple(struct hclge_comm_hw *hw,
+ 				   struct hclge_comm_rss_cfg *rss_cfg)
+@@ -332,6 +341,7 @@ int hclge_comm_set_rss_input_tuple(struct hclge_comm_hw *hw,
+ 			"failed to configure rss input, ret = %d.\n", ret);
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_set_rss_input_tuple);
+ 
+ void hclge_comm_get_rss_hash_info(struct hclge_comm_rss_cfg *rss_cfg, u8 *key,
+ 				  u8 *hfunc)
+@@ -355,6 +365,7 @@ void hclge_comm_get_rss_hash_info(struct hclge_comm_rss_cfg *rss_cfg, u8 *key,
+ 	if (key)
+ 		memcpy(key, rss_cfg->rss_hash_key, HCLGE_COMM_RSS_KEY_SIZE);
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_get_rss_hash_info);
+ 
+ void hclge_comm_get_rss_indir_tbl(struct hclge_comm_rss_cfg *rss_cfg,
+ 				  u32 *indir, u16 rss_ind_tbl_size)
+@@ -367,6 +378,7 @@ void hclge_comm_get_rss_indir_tbl(struct hclge_comm_rss_cfg *rss_cfg,
+ 	for (i = 0; i < rss_ind_tbl_size; i++)
+ 		indir[i] = rss_cfg->rss_indirection_tbl[i];
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_get_rss_indir_tbl);
+ 
+ int hclge_comm_set_rss_algo_key(struct hclge_comm_hw *hw, const u8 hfunc,
+ 				const u8 *key)
+@@ -408,6 +420,7 @@ int hclge_comm_set_rss_algo_key(struct hclge_comm_hw *hw, const u8 hfunc,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_set_rss_algo_key);
+ 
+ static u8 hclge_comm_get_rss_hash_bits(struct ethtool_rxnfc *nfc)
+ {
+@@ -502,3 +515,4 @@ u64 hclge_comm_convert_rss_tuple(u8 tuple_sets)
+ 
+ 	return tuple_data;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_convert_rss_tuple);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_tqp_stats.c b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_tqp_stats.c
+index 618f66d9586b3..2b31188ff5558 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_tqp_stats.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_tqp_stats.c
+@@ -26,6 +26,7 @@ u64 *hclge_comm_tqps_get_stats(struct hnae3_handle *handle, u64 *data)
+ 
+ 	return buff;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_tqps_get_stats);
+ 
+ int hclge_comm_tqps_get_sset_count(struct hnae3_handle *handle)
+ {
+@@ -33,6 +34,7 @@ int hclge_comm_tqps_get_sset_count(struct hnae3_handle *handle)
+ 
+ 	return kinfo->num_tqps * HCLGE_COMM_QUEUE_PAIR_SIZE;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_tqps_get_sset_count);
+ 
+ u8 *hclge_comm_tqps_get_strings(struct hnae3_handle *handle, u8 *data)
+ {
+@@ -56,6 +58,7 @@ u8 *hclge_comm_tqps_get_strings(struct hnae3_handle *handle, u8 *data)
+ 
+ 	return buff;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_tqps_get_strings);
+ 
+ int hclge_comm_tqps_update_stats(struct hnae3_handle *handle,
+ 				 struct hclge_comm_hw *hw)
+@@ -99,6 +102,7 @@ int hclge_comm_tqps_update_stats(struct hnae3_handle *handle,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_tqps_update_stats);
+ 
+ void hclge_comm_reset_tqp_stats(struct hnae3_handle *handle)
+ {
+@@ -113,3 +117,4 @@ void hclge_comm_reset_tqp_stats(struct hnae3_handle *handle)
+ 		memset(&tqp->tqp_stats, 0, sizeof(tqp->tqp_stats));
+ 	}
+ }
++EXPORT_SYMBOL_GPL(hclge_comm_reset_tqp_stats);
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+index e3cab8e98f525..5412eff8ef233 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+@@ -534,7 +534,7 @@ ice_parse_rx_flow_user_data(struct ethtool_rx_flow_spec *fsp,
+  *
+  * Returns the number of available flow director filters to this VSI
+  */
+-static int ice_fdir_num_avail_fltr(struct ice_hw *hw, struct ice_vsi *vsi)
++int ice_fdir_num_avail_fltr(struct ice_hw *hw, struct ice_vsi *vsi)
+ {
+ 	u16 vsi_num = ice_get_hw_vsi_num(hw, vsi->idx);
+ 	u16 num_guar;
+diff --git a/drivers/net/ethernet/intel/ice/ice_fdir.h b/drivers/net/ethernet/intel/ice/ice_fdir.h
+index 021ecbac7848f..ab5b118daa2da 100644
+--- a/drivers/net/ethernet/intel/ice/ice_fdir.h
++++ b/drivers/net/ethernet/intel/ice/ice_fdir.h
+@@ -207,6 +207,8 @@ struct ice_fdir_base_pkt {
+ 	const u8 *tun_pkt;
+ };
+ 
++struct ice_vsi;
++
+ int ice_alloc_fd_res_cntr(struct ice_hw *hw, u16 *cntr_id);
+ int ice_free_fd_res_cntr(struct ice_hw *hw, u16 cntr_id);
+ int ice_alloc_fd_guar_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr);
+@@ -218,6 +220,7 @@ int
+ ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input,
+ 			  u8 *pkt, bool frag, bool tun);
+ int ice_get_fdir_cnt_all(struct ice_hw *hw);
++int ice_fdir_num_avail_fltr(struct ice_hw *hw, struct ice_vsi *vsi);
+ bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input);
+ bool ice_fdir_has_frag(enum ice_fltr_ptype flow);
+ struct ice_fdir_fltr *
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 1191031b2a43d..ffd6c42bda1ed 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -2413,10 +2413,10 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid,
+ 		/* Propagate some data to the recipe database */
+ 		recps[idx].is_root = !!is_root;
+ 		recps[idx].priority = root_bufs.content.act_ctrl_fwd_priority;
+-		recps[idx].need_pass_l2 = root_bufs.content.act_ctrl &
+-					  ICE_AQ_RECIPE_ACT_NEED_PASS_L2;
+-		recps[idx].allow_pass_l2 = root_bufs.content.act_ctrl &
+-					   ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2;
++		recps[idx].need_pass_l2 = !!(root_bufs.content.act_ctrl &
++					     ICE_AQ_RECIPE_ACT_NEED_PASS_L2);
++		recps[idx].allow_pass_l2 = !!(root_bufs.content.act_ctrl &
++					      ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2);
+ 		bitmap_zero(recps[idx].res_idxs, ICE_MAX_FV_WORDS);
+ 		if (root_bufs.content.result_indx & ICE_AQ_RECIPE_RESULT_EN) {
+ 			recps[idx].chain_idx = root_bufs.content.result_indx &
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+index 8e4ff3af86c68..b4feb09276870 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+@@ -536,6 +536,8 @@ static void ice_vc_fdir_reset_cnt_all(struct ice_vf_fdir *fdir)
+ 		fdir->fdir_fltr_cnt[flow][0] = 0;
+ 		fdir->fdir_fltr_cnt[flow][1] = 0;
+ 	}
++
++	fdir->fdir_fltr_cnt_total = 0;
+ }
+ 
+ /**
+@@ -1560,6 +1562,7 @@ ice_vc_add_fdir_fltr_post(struct ice_vf *vf, struct ice_vf_fdir_ctx *ctx,
+ 	resp->status = status;
+ 	resp->flow_id = conf->flow_id;
+ 	vf->fdir.fdir_fltr_cnt[conf->input.flow_type][is_tun]++;
++	vf->fdir.fdir_fltr_cnt_total++;
+ 
+ 	ret = ice_vc_send_msg_to_vf(vf, ctx->v_opcode, v_ret,
+ 				    (u8 *)resp, len);
+@@ -1624,6 +1627,7 @@ ice_vc_del_fdir_fltr_post(struct ice_vf *vf, struct ice_vf_fdir_ctx *ctx,
+ 	resp->status = status;
+ 	ice_vc_fdir_remove_entry(vf, conf, conf->flow_id);
+ 	vf->fdir.fdir_fltr_cnt[conf->input.flow_type][is_tun]--;
++	vf->fdir.fdir_fltr_cnt_total--;
+ 
+ 	ret = ice_vc_send_msg_to_vf(vf, ctx->v_opcode, v_ret,
+ 				    (u8 *)resp, len);
+@@ -1790,6 +1794,7 @@ int ice_vc_add_fdir_fltr(struct ice_vf *vf, u8 *msg)
+ 	struct virtchnl_fdir_add *stat = NULL;
+ 	struct virtchnl_fdir_fltr_conf *conf;
+ 	enum virtchnl_status_code v_ret;
++	struct ice_vsi *vf_vsi;
+ 	struct device *dev;
+ 	struct ice_pf *pf;
+ 	int is_tun = 0;
+@@ -1798,6 +1803,17 @@ int ice_vc_add_fdir_fltr(struct ice_vf *vf, u8 *msg)
+ 
+ 	pf = vf->pf;
+ 	dev = ice_pf_to_dev(pf);
++	vf_vsi = ice_get_vf_vsi(vf);
++
++#define ICE_VF_MAX_FDIR_FILTERS	128
++	if (!ice_fdir_num_avail_fltr(&pf->hw, vf_vsi) ||
++	    vf->fdir.fdir_fltr_cnt_total >= ICE_VF_MAX_FDIR_FILTERS) {
++		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
++		dev_err(dev, "Max number of FDIR filters for VF %d is reached\n",
++			vf->vf_id);
++		goto err_exit;
++	}
++
+ 	ret = ice_vc_fdir_param_check(vf, fltr->vsi_id);
+ 	if (ret) {
+ 		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.h
+index c5bcc8d7481ca..ac6dcab454b49 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.h
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.h
+@@ -29,6 +29,7 @@ struct ice_vf_fdir_ctx {
+ struct ice_vf_fdir {
+ 	u16 fdir_fltr_cnt[ICE_FLTR_PTYPE_MAX][ICE_FD_HW_SEG_MAX];
+ 	int prof_entry_cnt[ICE_FLTR_PTYPE_MAX][ICE_FD_HW_SEG_MAX];
++	u16 fdir_fltr_cnt_total;
+ 	struct ice_fd_hw_prof **fdir_prof;
+ 
+ 	struct idr fdir_rule_idr;
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index c84ce54a84a00..c11bb0f0b8c47 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -4198,8 +4198,6 @@ static int mtk_free_dev(struct mtk_eth *eth)
+ 		metadata_dst_free(eth->dsa_meta[i]);
+ 	}
+ 
+-	free_netdev(eth->dummy_dev);
+-
+ 	return 0;
+ }
+ 
+@@ -5048,6 +5046,7 @@ static void mtk_remove(struct platform_device *pdev)
+ 	netif_napi_del(&eth->tx_napi);
+ 	netif_napi_del(&eth->rx_napi);
+ 	mtk_cleanup(eth);
++	free_netdev(eth->dummy_dev);
+ 	mtk_mdio_cleanup(eth);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c
+index 4b713832fdd55..f5c0a4214c4e5 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c
+@@ -391,7 +391,8 @@ mlxsw_sp_acl_atcam_region_entry_insert(struct mlxsw_sp *mlxsw_sp,
+ 	if (err)
+ 		return err;
+ 
+-	lkey_id = aregion->ops->lkey_id_get(aregion, aentry->enc_key, erp_id);
++	lkey_id = aregion->ops->lkey_id_get(aregion, aentry->ht_key.enc_key,
++					    erp_id);
+ 	if (IS_ERR(lkey_id))
+ 		return PTR_ERR(lkey_id);
+ 	aentry->lkey_id = lkey_id;
+@@ -399,7 +400,7 @@ mlxsw_sp_acl_atcam_region_entry_insert(struct mlxsw_sp *mlxsw_sp,
+ 	kvdl_index = mlxsw_afa_block_first_kvdl_index(rulei->act_block);
+ 	mlxsw_reg_ptce3_pack(ptce3_pl, true, MLXSW_REG_PTCE3_OP_WRITE_WRITE,
+ 			     priority, region->tcam_region_info,
+-			     aentry->enc_key, erp_id,
++			     aentry->ht_key.enc_key, erp_id,
+ 			     aentry->delta_info.start,
+ 			     aentry->delta_info.mask,
+ 			     aentry->delta_info.value,
+@@ -428,7 +429,7 @@ mlxsw_sp_acl_atcam_region_entry_remove(struct mlxsw_sp *mlxsw_sp,
+ 
+ 	mlxsw_reg_ptce3_pack(ptce3_pl, false, MLXSW_REG_PTCE3_OP_WRITE_WRITE, 0,
+ 			     region->tcam_region_info,
+-			     aentry->enc_key, erp_id,
++			     aentry->ht_key.enc_key, erp_id,
+ 			     aentry->delta_info.start,
+ 			     aentry->delta_info.mask,
+ 			     aentry->delta_info.value,
+@@ -457,7 +458,7 @@ mlxsw_sp_acl_atcam_region_entry_action_replace(struct mlxsw_sp *mlxsw_sp,
+ 	kvdl_index = mlxsw_afa_block_first_kvdl_index(rulei->act_block);
+ 	mlxsw_reg_ptce3_pack(ptce3_pl, true, MLXSW_REG_PTCE3_OP_WRITE_UPDATE,
+ 			     priority, region->tcam_region_info,
+-			     aentry->enc_key, erp_id,
++			     aentry->ht_key.enc_key, erp_id,
+ 			     aentry->delta_info.start,
+ 			     aentry->delta_info.mask,
+ 			     aentry->delta_info.value,
+@@ -480,15 +481,13 @@ __mlxsw_sp_acl_atcam_entry_add(struct mlxsw_sp *mlxsw_sp,
+ 	int err;
+ 
+ 	mlxsw_afk_encode(afk, region->key_info, &rulei->values,
+-			 aentry->ht_key.full_enc_key, mask);
++			 aentry->ht_key.enc_key, mask);
+ 
+ 	erp_mask = mlxsw_sp_acl_erp_mask_get(aregion, mask, false);
+ 	if (IS_ERR(erp_mask))
+ 		return PTR_ERR(erp_mask);
+ 	aentry->erp_mask = erp_mask;
+ 	aentry->ht_key.erp_id = mlxsw_sp_acl_erp_mask_erp_id(erp_mask);
+-	memcpy(aentry->enc_key, aentry->ht_key.full_enc_key,
+-	       sizeof(aentry->enc_key));
+ 
+ 	/* Compute all needed delta information and clear the delta bits
+ 	 * from the encrypted key.
+@@ -497,9 +496,8 @@ __mlxsw_sp_acl_atcam_entry_add(struct mlxsw_sp *mlxsw_sp,
+ 	aentry->delta_info.start = mlxsw_sp_acl_erp_delta_start(delta);
+ 	aentry->delta_info.mask = mlxsw_sp_acl_erp_delta_mask(delta);
+ 	aentry->delta_info.value =
+-		mlxsw_sp_acl_erp_delta_value(delta,
+-					     aentry->ht_key.full_enc_key);
+-	mlxsw_sp_acl_erp_delta_clear(delta, aentry->enc_key);
++		mlxsw_sp_acl_erp_delta_value(delta, aentry->ht_key.enc_key);
++	mlxsw_sp_acl_erp_delta_clear(delta, aentry->ht_key.enc_key);
+ 
+ 	/* Add rule to the list of A-TCAM rules, assuming this
+ 	 * rule is intended to A-TCAM. In case this rule does
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
+index 95f63fcf4ba1f..a54eedb69a3f5 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
+@@ -249,7 +249,7 @@ __mlxsw_sp_acl_bf_key_encode(struct mlxsw_sp_acl_atcam_region *aregion,
+ 		memcpy(chunk + pad_bytes, &erp_region_id,
+ 		       sizeof(erp_region_id));
+ 		memcpy(chunk + key_offset,
+-		       &aentry->enc_key[chunk_key_offsets[chunk_index]],
++		       &aentry->ht_key.enc_key[chunk_key_offsets[chunk_index]],
+ 		       chunk_key_len);
+ 		chunk += chunk_len;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
+index d231f4d2888be..9eee229303cce 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
+@@ -1217,18 +1217,6 @@ static bool mlxsw_sp_acl_erp_delta_check(void *priv, const void *parent_obj,
+ 	return err ? false : true;
+ }
+ 
+-static int mlxsw_sp_acl_erp_hints_obj_cmp(const void *obj1, const void *obj2)
+-{
+-	const struct mlxsw_sp_acl_erp_key *key1 = obj1;
+-	const struct mlxsw_sp_acl_erp_key *key2 = obj2;
+-
+-	/* For hints purposes, two objects are considered equal
+-	 * in case the masks are the same. Does not matter what
+-	 * the "ctcam" value is.
+-	 */
+-	return memcmp(key1->mask, key2->mask, sizeof(key1->mask));
+-}
+-
+ static void *mlxsw_sp_acl_erp_delta_create(void *priv, void *parent_obj,
+ 					   void *obj)
+ {
+@@ -1308,7 +1296,6 @@ static void mlxsw_sp_acl_erp_root_destroy(void *priv, void *root_priv)
+ static const struct objagg_ops mlxsw_sp_acl_erp_objagg_ops = {
+ 	.obj_size = sizeof(struct mlxsw_sp_acl_erp_key),
+ 	.delta_check = mlxsw_sp_acl_erp_delta_check,
+-	.hints_obj_cmp = mlxsw_sp_acl_erp_hints_obj_cmp,
+ 	.delta_create = mlxsw_sp_acl_erp_delta_create,
+ 	.delta_destroy = mlxsw_sp_acl_erp_delta_destroy,
+ 	.root_create = mlxsw_sp_acl_erp_root_create,
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h
+index 79a1d86065125..010204f73ea46 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h
+@@ -167,9 +167,9 @@ struct mlxsw_sp_acl_atcam_region {
+ };
+ 
+ struct mlxsw_sp_acl_atcam_entry_ht_key {
+-	char full_enc_key[MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN]; /* Encoded
+-								 * key.
+-								 */
++	char enc_key[MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN]; /* Encoded key, minus
++							    * delta bits.
++							    */
+ 	u8 erp_id;
+ };
+ 
+@@ -181,9 +181,6 @@ struct mlxsw_sp_acl_atcam_entry {
+ 	struct rhash_head ht_node;
+ 	struct list_head list; /* Member in entries_list */
+ 	struct mlxsw_sp_acl_atcam_entry_ht_key ht_key;
+-	char enc_key[MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN]; /* Encoded key,
+-							    * minus delta bits.
+-							    */
+ 	struct {
+ 		u16 start;
+ 		u8 mask;
+diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
+index 608ad31a97022..ad7ae7ba2b8fc 100644
+--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
++++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
+@@ -2950,3 +2950,22 @@ void mana_remove(struct gdma_dev *gd, bool suspending)
+ 	gd->gdma_context = NULL;
+ 	kfree(ac);
+ }
++
++struct net_device *mana_get_primary_netdev_rcu(struct mana_context *ac, u32 port_index)
++{
++	struct net_device *ndev;
++
++	RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
++			 "Taking primary netdev without holding the RCU read lock");
++	if (port_index >= ac->num_ports)
++		return NULL;
++
++	/* When mana is used in netvsc, the upper netdevice should be returned. */
++	if (ac->ports[port_index]->flags & IFF_SLAVE)
++		ndev = netdev_master_upper_dev_get_rcu(ac->ports[port_index]);
++	else
++		ndev = ac->ports[port_index];
++
++	return ndev;
++}
++EXPORT_SYMBOL_NS(mana_get_primary_netdev_rcu, NET_MANA);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index b25774d691957..8e2049ed60159 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -982,7 +982,7 @@ static void dwmac4_set_mac_loopback(void __iomem *ioaddr, bool enable)
+ }
+ 
+ static void dwmac4_update_vlan_hash(struct mac_device_info *hw, u32 hash,
+-				    __le16 perfect_match, bool is_double)
++				    u16 perfect_match, bool is_double)
+ {
+ 	void __iomem *ioaddr = hw->pcsr;
+ 	u32 value;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+index f8e7775bb6336..9a705a5a3a1ad 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+@@ -615,7 +615,7 @@ static int dwxgmac2_rss_configure(struct mac_device_info *hw,
+ }
+ 
+ static void dwxgmac2_update_vlan_hash(struct mac_device_info *hw, u32 hash,
+-				      __le16 perfect_match, bool is_double)
++				      u16 perfect_match, bool is_double)
+ {
+ 	void __iomem *ioaddr = hw->pcsr;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+index 90384db228b5c..a318c84ddb8ac 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
++++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+@@ -394,7 +394,7 @@ struct stmmac_ops {
+ 			     struct stmmac_rss *cfg, u32 num_rxq);
+ 	/* VLAN */
+ 	void (*update_vlan_hash)(struct mac_device_info *hw, u32 hash,
+-				 __le16 perfect_match, bool is_double);
++				 u16 perfect_match, bool is_double);
+ 	void (*enable_vlan)(struct mac_device_info *hw, u32 type);
+ 	void (*rx_hw_vlan)(struct mac_device_info *hw, struct dma_desc *rx_desc,
+ 			   struct sk_buff *skb);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index c58782c41417a..33e2bd5a351ca 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -6640,7 +6640,7 @@ static u32 stmmac_vid_crc32_le(__le16 vid_le)
+ static int stmmac_vlan_update(struct stmmac_priv *priv, bool is_double)
+ {
+ 	u32 crc, hash = 0;
+-	__le16 pmatch = 0;
++	u16 pmatch = 0;
+ 	int count = 0;
+ 	u16 vid = 0;
+ 
+@@ -6655,7 +6655,7 @@ static int stmmac_vlan_update(struct stmmac_priv *priv, bool is_double)
+ 		if (count > 2) /* VID = 0 always passes filter */
+ 			return -EOPNOTSUPP;
+ 
+-		pmatch = cpu_to_le16(vid);
++		pmatch = vid;
+ 		hash = 0;
+ 	}
+ 
+diff --git a/drivers/net/netconsole.c b/drivers/net/netconsole.c
+index d7070dd4fe736..aa66c923790ff 100644
+--- a/drivers/net/netconsole.c
++++ b/drivers/net/netconsole.c
+@@ -974,6 +974,7 @@ static int netconsole_netdev_event(struct notifier_block *this,
+ 				/* rtnl_lock already held
+ 				 * we might sleep in __netpoll_cleanup()
+ 				 */
++				nt->enabled = false;
+ 				spin_unlock_irqrestore(&target_list_lock, flags);
+ 
+ 				__netpoll_cleanup(&nt->np);
+@@ -981,7 +982,6 @@ static int netconsole_netdev_event(struct notifier_block *this,
+ 				spin_lock_irqsave(&target_list_lock, flags);
+ 				netdev_put(nt->np.dev, &nt->np.dev_tracker);
+ 				nt->np.dev = NULL;
+-				nt->enabled = false;
+ 				stopped = true;
+ 				netconsole_target_put(nt);
+ 				goto restart;
+diff --git a/drivers/net/pse-pd/pse_core.c b/drivers/net/pse-pd/pse_core.c
+index 795ab264eaf27..513cd7f859337 100644
+--- a/drivers/net/pse-pd/pse_core.c
++++ b/drivers/net/pse-pd/pse_core.c
+@@ -719,13 +719,13 @@ int pse_ethtool_set_config(struct pse_control *psec,
+ {
+ 	int err = 0;
+ 
+-	if (pse_has_c33(psec)) {
++	if (pse_has_c33(psec) && config->c33_admin_control) {
+ 		err = pse_ethtool_c33_set_config(psec, config);
+ 		if (err)
+ 			return err;
+ 	}
+ 
+-	if (pse_has_podl(psec))
++	if (pse_has_podl(psec) && config->podl_admin_control)
+ 		err = pse_ethtool_podl_set_config(psec, config);
+ 
+ 	return err;
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index ea10db9a09fa2..5161e7efda2cb 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -2313,7 +2313,7 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
+ 	return packets;
+ }
+ 
+-static void virtnet_poll_cleantx(struct receive_queue *rq)
++static void virtnet_poll_cleantx(struct receive_queue *rq, int budget)
+ {
+ 	struct virtnet_info *vi = rq->vq->vdev->priv;
+ 	unsigned int index = vq2rxq(rq->vq);
+@@ -2331,7 +2331,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
+ 
+ 		do {
+ 			virtqueue_disable_cb(sq->vq);
+-			free_old_xmit(sq, true);
++			free_old_xmit(sq, !!budget);
+ 		} while (unlikely(!virtqueue_enable_cb_delayed(sq->vq)));
+ 
+ 		if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) {
+@@ -2375,7 +2375,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
+ 	unsigned int xdp_xmit = 0;
+ 	bool napi_complete;
+ 
+-	virtnet_poll_cleantx(rq);
++	virtnet_poll_cleantx(rq, budget);
+ 
+ 	received = virtnet_receive(rq, budget, &xdp_xmit);
+ 	rq->packets_in_napi += received;
+@@ -2489,7 +2489,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
+ 	txq = netdev_get_tx_queue(vi->dev, index);
+ 	__netif_tx_lock(txq, raw_smp_processor_id());
+ 	virtqueue_disable_cb(sq->vq);
+-	free_old_xmit(sq, true);
++	free_old_xmit(sq, !!budget);
+ 
+ 	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) {
+ 		if (netif_tx_queue_stopped(txq)) {
+diff --git a/drivers/net/wireless/ath/ath11k/ce.h b/drivers/net/wireless/ath/ath11k/ce.h
+index 69946fc700777..bcde2fcf02cf7 100644
+--- a/drivers/net/wireless/ath/ath11k/ce.h
++++ b/drivers/net/wireless/ath/ath11k/ce.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH11K_CE_H
+@@ -146,7 +146,7 @@ struct ath11k_ce_ring {
+ 	/* Host address space */
+ 	void *base_addr_owner_space_unaligned;
+ 	/* CE address space */
+-	u32 base_addr_ce_space_unaligned;
++	dma_addr_t base_addr_ce_space_unaligned;
+ 
+ 	/* Actual start of descriptors.
+ 	 * Aligned to descriptor-size boundary.
+@@ -156,7 +156,7 @@ struct ath11k_ce_ring {
+ 	void *base_addr_owner_space;
+ 
+ 	/* CE address space */
+-	u32 base_addr_ce_space;
++	dma_addr_t base_addr_ce_space;
+ 
+ 	/* HAL ring id */
+ 	u32 hal_ring_id;
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index b82e8fb285413..47554c3619633 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -1009,6 +1009,16 @@ int ath11k_core_resume(struct ath11k_base *ab)
+ 		return -ETIMEDOUT;
+ 	}
+ 
++	if (ab->hw_params.current_cc_support &&
++	    ar->alpha2[0] != 0 && ar->alpha2[1] != 0) {
++		ret = ath11k_reg_set_cc(ar);
++		if (ret) {
++			ath11k_warn(ab, "failed to set country code during resume: %d\n",
++				    ret);
++			return ret;
++		}
++	}
++
+ 	ret = ath11k_dp_rx_pktlog_start(ab);
+ 	if (ret)
+ 		ath11k_warn(ab, "failed to start rx pktlog during resume: %d\n",
+@@ -1978,23 +1988,20 @@ static void ath11k_update_11d(struct work_struct *work)
+ 	struct ath11k_base *ab = container_of(work, struct ath11k_base, update_11d_work);
+ 	struct ath11k *ar;
+ 	struct ath11k_pdev *pdev;
+-	struct wmi_set_current_country_params set_current_param = {};
+ 	int ret, i;
+ 
+-	spin_lock_bh(&ab->base_lock);
+-	memcpy(&set_current_param.alpha2, &ab->new_alpha2, 2);
+-	spin_unlock_bh(&ab->base_lock);
+-
+-	ath11k_dbg(ab, ATH11K_DBG_WMI, "update 11d new cc %c%c\n",
+-		   set_current_param.alpha2[0],
+-		   set_current_param.alpha2[1]);
+-
+ 	for (i = 0; i < ab->num_radios; i++) {
+ 		pdev = &ab->pdevs[i];
+ 		ar = pdev->ar;
+ 
+-		memcpy(&ar->alpha2, &set_current_param.alpha2, 2);
+-		ret = ath11k_wmi_send_set_current_country_cmd(ar, &set_current_param);
++		spin_lock_bh(&ab->base_lock);
++		memcpy(&ar->alpha2, &ab->new_alpha2, 2);
++		spin_unlock_bh(&ab->base_lock);
++
++		ath11k_dbg(ab, ATH11K_DBG_WMI, "update 11d new cc %c%c for pdev %d\n",
++			   ar->alpha2[0], ar->alpha2[1], i);
++
++		ret = ath11k_reg_set_cc(ar);
+ 		if (ret)
+ 			ath11k_warn(ar->ab,
+ 				    "pdev id %d failed set current country code: %d\n",
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index afd481f5858f0..aabde24d87632 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -1877,8 +1877,7 @@ static void ath11k_dp_rx_h_csum_offload(struct ath11k *ar, struct sk_buff *msdu)
+ 			  CHECKSUM_NONE : CHECKSUM_UNNECESSARY;
+ }
+ 
+-static int ath11k_dp_rx_crypto_mic_len(struct ath11k *ar,
+-				       enum hal_encrypt_type enctype)
++int ath11k_dp_rx_crypto_mic_len(struct ath11k *ar, enum hal_encrypt_type enctype)
+ {
+ 	switch (enctype) {
+ 	case HAL_ENCRYPT_TYPE_OPEN:
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.h b/drivers/net/wireless/ath/ath11k/dp_rx.h
+index 623da3bf9dc81..c322e30caa968 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.h
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.h
+@@ -1,6 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
++ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #ifndef ATH11K_DP_RX_H
+ #define ATH11K_DP_RX_H
+@@ -95,4 +96,6 @@ int ath11k_peer_rx_frag_setup(struct ath11k *ar, const u8 *peer_mac, int vdev_id
+ int ath11k_dp_rx_pktlog_start(struct ath11k_base *ab);
+ int ath11k_dp_rx_pktlog_stop(struct ath11k_base *ab, bool stop_timer);
+ 
++int ath11k_dp_rx_crypto_mic_len(struct ath11k *ar, enum hal_encrypt_type enctype);
++
+ #endif /* ATH11K_DP_RX_H */
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 9b96dbb21d833..eaa53bc39ab2c 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -4229,6 +4229,7 @@ static int ath11k_install_key(struct ath11k_vif *arvif,
+ 
+ 	switch (key->cipher) {
+ 	case WLAN_CIPHER_SUITE_CCMP:
++	case WLAN_CIPHER_SUITE_CCMP_256:
+ 		arg.key_cipher = WMI_CIPHER_AES_CCM;
+ 		/* TODO: Re-check if flag is valid */
+ 		key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT;
+@@ -4238,12 +4239,10 @@ static int ath11k_install_key(struct ath11k_vif *arvif,
+ 		arg.key_txmic_len = 8;
+ 		arg.key_rxmic_len = 8;
+ 		break;
+-	case WLAN_CIPHER_SUITE_CCMP_256:
+-		arg.key_cipher = WMI_CIPHER_AES_CCM;
+-		break;
+ 	case WLAN_CIPHER_SUITE_GCMP:
+ 	case WLAN_CIPHER_SUITE_GCMP_256:
+ 		arg.key_cipher = WMI_CIPHER_AES_GCM;
++		key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT;
+ 		break;
+ 	default:
+ 		ath11k_warn(ar->ab, "cipher %d is not supported\n", key->cipher);
+@@ -5903,7 +5902,10 @@ static int ath11k_mac_mgmt_tx_wmi(struct ath11k *ar, struct ath11k_vif *arvif,
+ {
+ 	struct ath11k_base *ab = ar->ab;
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
++	struct ath11k_skb_cb *skb_cb = ATH11K_SKB_CB(skb);
+ 	struct ieee80211_tx_info *info;
++	enum hal_encrypt_type enctype;
++	unsigned int mic_len;
+ 	dma_addr_t paddr;
+ 	int buf_id;
+ 	int ret;
+@@ -5927,7 +5929,12 @@ static int ath11k_mac_mgmt_tx_wmi(struct ath11k *ar, struct ath11k_vif *arvif,
+ 		     ieee80211_is_deauth(hdr->frame_control) ||
+ 		     ieee80211_is_disassoc(hdr->frame_control)) &&
+ 		     ieee80211_has_protected(hdr->frame_control)) {
+-			skb_put(skb, IEEE80211_CCMP_MIC_LEN);
++			if (!(skb_cb->flags & ATH11K_SKB_CIPHER_SET))
++				ath11k_warn(ab, "WMI management tx frame without ATH11K_SKB_CIPHER_SET");
++
++			enctype = ath11k_dp_tx_get_encrypt_type(skb_cb->cipher);
++			mic_len = ath11k_dp_rx_crypto_mic_len(ar, enctype);
++			skb_put(skb, mic_len);
+ 		}
+ 	}
+ 
+@@ -8851,12 +8858,8 @@ ath11k_mac_op_reconfig_complete(struct ieee80211_hw *hw,
+ 		ieee80211_wake_queues(ar->hw);
+ 
+ 		if (ar->ab->hw_params.current_cc_support &&
+-		    ar->alpha2[0] != 0 && ar->alpha2[1] != 0) {
+-			struct wmi_set_current_country_params set_current_param = {};
+-
+-			memcpy(&set_current_param.alpha2, ar->alpha2, 2);
+-			ath11k_wmi_send_set_current_country_cmd(ar, &set_current_param);
+-		}
++		    ar->alpha2[0] != 0 && ar->alpha2[1] != 0)
++			ath11k_reg_set_cc(ar);
+ 
+ 		if (ab->is_reset) {
+ 			recovery_count = atomic_inc_return(&ab->recovery_count);
+@@ -10325,11 +10328,8 @@ static int __ath11k_mac_register(struct ath11k *ar)
+ 	}
+ 
+ 	if (ab->hw_params.current_cc_support && ab->new_alpha2[0]) {
+-		struct wmi_set_current_country_params set_current_param = {};
+-
+-		memcpy(&set_current_param.alpha2, ab->new_alpha2, 2);
+ 		memcpy(&ar->alpha2, ab->new_alpha2, 2);
+-		ret = ath11k_wmi_send_set_current_country_cmd(ar, &set_current_param);
++		ret = ath11k_reg_set_cc(ar);
+ 		if (ret)
+ 			ath11k_warn(ar->ab,
+ 				    "failed set cc code for mac register: %d\n", ret);
+diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
+index 737fcd450d4bd..39232b8f52bae 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.c
++++ b/drivers/net/wireless/ath/ath11k/reg.c
+@@ -49,7 +49,6 @@ ath11k_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
+ {
+ 	struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy);
+ 	struct wmi_init_country_params init_country_param;
+-	struct wmi_set_current_country_params set_current_param = {};
+ 	struct ath11k *ar = hw->priv;
+ 	int ret;
+ 
+@@ -83,9 +82,8 @@ ath11k_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
+ 	 * reg info
+ 	 */
+ 	if (ar->ab->hw_params.current_cc_support) {
+-		memcpy(&set_current_param.alpha2, request->alpha2, 2);
+-		memcpy(&ar->alpha2, &set_current_param.alpha2, 2);
+-		ret = ath11k_wmi_send_set_current_country_cmd(ar, &set_current_param);
++		memcpy(&ar->alpha2, request->alpha2, 2);
++		ret = ath11k_reg_set_cc(ar);
+ 		if (ret)
+ 			ath11k_warn(ar->ab,
+ 				    "failed set current country code: %d\n", ret);
+@@ -1017,3 +1015,11 @@ void ath11k_reg_free(struct ath11k_base *ab)
+ 		kfree(ab->new_regd[i]);
+ 	}
+ }
++
++int ath11k_reg_set_cc(struct ath11k *ar)
++{
++	struct wmi_set_current_country_params set_current_param = {};
++
++	memcpy(&set_current_param.alpha2, ar->alpha2, 2);
++	return ath11k_wmi_send_set_current_country_cmd(ar, &set_current_param);
++}
+diff --git a/drivers/net/wireless/ath/ath11k/reg.h b/drivers/net/wireless/ath/ath11k/reg.h
+index 64edb794260ab..263ea90619483 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.h
++++ b/drivers/net/wireless/ath/ath11k/reg.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH11K_REG_H
+@@ -45,5 +45,5 @@ ath11k_reg_ap_pwr_convert(enum ieee80211_ap_reg_power power_type);
+ int ath11k_reg_handle_chan_list(struct ath11k_base *ab,
+ 				struct cur_regulatory_info *reg_info,
+ 				enum ieee80211_ap_reg_power power_type);
+-
++int ath11k_reg_set_cc(struct ath11k *ar);
+ #endif
+diff --git a/drivers/net/wireless/ath/ath12k/acpi.c b/drivers/net/wireless/ath/ath12k/acpi.c
+index 443ba12e01f37..0555d35aab477 100644
+--- a/drivers/net/wireless/ath/ath12k/acpi.c
++++ b/drivers/net/wireless/ath/ath12k/acpi.c
+@@ -391,4 +391,6 @@ void ath12k_acpi_stop(struct ath12k_base *ab)
+ 	acpi_remove_notify_handler(ACPI_HANDLE(ab->dev),
+ 				   ACPI_DEVICE_NOTIFY,
+ 				   ath12k_acpi_dsm_notify);
++
++	memset(&ab->acpi, 0, sizeof(ab->acpi));
+ }
+diff --git a/drivers/net/wireless/ath/ath12k/ce.h b/drivers/net/wireless/ath/ath12k/ce.h
+index 79af3b6159f1c..857bc5f9e946a 100644
+--- a/drivers/net/wireless/ath/ath12k/ce.h
++++ b/drivers/net/wireless/ath/ath12k/ce.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_CE_H
+@@ -119,7 +119,7 @@ struct ath12k_ce_ring {
+ 	/* Host address space */
+ 	void *base_addr_owner_space_unaligned;
+ 	/* CE address space */
+-	u32 base_addr_ce_space_unaligned;
++	dma_addr_t base_addr_ce_space_unaligned;
+ 
+ 	/* Actual start of descriptors.
+ 	 * Aligned to descriptor-size boundary.
+@@ -129,7 +129,7 @@ struct ath12k_ce_ring {
+ 	void *base_addr_owner_space;
+ 
+ 	/* CE address space */
+-	u32 base_addr_ce_space;
++	dma_addr_t base_addr_ce_space;
+ 
+ 	/* HAL ring id */
+ 	u32 hal_ring_id;
+diff --git a/drivers/net/wireless/ath/ath12k/core.c b/drivers/net/wireless/ath/ath12k/core.c
+index 6663f4e1792de..52969a1bb5a56 100644
+--- a/drivers/net/wireless/ath/ath12k/core.c
++++ b/drivers/net/wireless/ath/ath12k/core.c
+@@ -50,19 +50,16 @@ int ath12k_core_suspend(struct ath12k_base *ab)
+ 	if (!ab->hw_params->supports_suspend)
+ 		return -EOPNOTSUPP;
+ 
+-	rcu_read_lock();
+ 	for (i = 0; i < ab->num_radios; i++) {
+-		ar = ath12k_mac_get_ar_by_pdev_id(ab, i);
++		ar = ab->pdevs[i].ar;
+ 		if (!ar)
+ 			continue;
+ 		ret = ath12k_mac_wait_tx_complete(ar);
+ 		if (ret) {
+ 			ath12k_warn(ab, "failed to wait tx complete: %d\n", ret);
+-			rcu_read_unlock();
+ 			return ret;
+ 		}
+ 	}
+-	rcu_read_unlock();
+ 
+ 	/* PM framework skips suspend_late/resume_early callbacks
+ 	 * if other devices report errors in their suspend callbacks.
+@@ -86,6 +83,8 @@ int ath12k_core_suspend_late(struct ath12k_base *ab)
+ 	if (!ab->hw_params->supports_suspend)
+ 		return -EOPNOTSUPP;
+ 
++	ath12k_acpi_stop(ab);
++
+ 	ath12k_hif_irq_disable(ab);
+ 	ath12k_hif_ce_irq_disable(ab);
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/dp.c b/drivers/net/wireless/ath/ath12k/dp.c
+index 7843c76a82c17..90476f38e8d46 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.c
++++ b/drivers/net/wireless/ath/ath12k/dp.c
+@@ -132,7 +132,9 @@ static int ath12k_dp_srng_find_ring_in_mask(int ring_num, const u8 *grp_mask)
+ static int ath12k_dp_srng_calculate_msi_group(struct ath12k_base *ab,
+ 					      enum hal_ring_type type, int ring_num)
+ {
++	const struct ath12k_hal_tcl_to_wbm_rbm_map *map;
+ 	const u8 *grp_mask;
++	int i;
+ 
+ 	switch (type) {
+ 	case HAL_WBM2SW_RELEASE:
+@@ -140,6 +142,14 @@ static int ath12k_dp_srng_calculate_msi_group(struct ath12k_base *ab,
+ 			grp_mask = &ab->hw_params->ring_mask->rx_wbm_rel[0];
+ 			ring_num = 0;
+ 		} else {
++			map = ab->hw_params->hal_ops->tcl_to_wbm_rbm_map;
++			for (i = 0; i < ab->hw_params->max_tx_ring; i++) {
++				if (ring_num == map[i].wbm_ring_num) {
++					ring_num = i;
++					break;
++				}
++			}
++
+ 			grp_mask = &ab->hw_params->ring_mask->tx[0];
+ 		}
+ 		break;
+@@ -881,11 +891,9 @@ int ath12k_dp_service_srng(struct ath12k_base *ab,
+ 	enum dp_monitor_mode monitor_mode;
+ 	u8 ring_mask;
+ 
+-	while (i < ab->hw_params->max_tx_ring) {
+-		if (ab->hw_params->ring_mask->tx[grp_id] &
+-			BIT(ab->hw_params->hal_ops->tcl_to_wbm_rbm_map[i].wbm_ring_num))
+-			ath12k_dp_tx_completion_handler(ab, i);
+-		i++;
++	if (ab->hw_params->ring_mask->tx[grp_id]) {
++		i = fls(ab->hw_params->ring_mask->tx[grp_id]) - 1;
++		ath12k_dp_tx_completion_handler(ab, i);
+ 	}
+ 
+ 	if (ab->hw_params->ring_mask->rx_err[grp_id]) {
+diff --git a/drivers/net/wireless/ath/ath12k/dp.h b/drivers/net/wireless/ath/ath12k/dp.h
+index 5cf0d21ef184b..4dfbff326030e 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.h
++++ b/drivers/net/wireless/ath/ath12k/dp.h
+@@ -334,6 +334,7 @@ struct ath12k_dp {
+ 	struct dp_srng reo_except_ring;
+ 	struct dp_srng reo_cmd_ring;
+ 	struct dp_srng reo_status_ring;
++	enum ath12k_peer_metadata_version peer_metadata_ver;
+ 	struct dp_srng reo_dst_ring[DP_REO_DST_RING_MAX];
+ 	struct dp_tx_ring tx_ring[DP_TCL_NUM_RING_MAX];
+ 	struct hal_wbm_idle_scatter_list scatter_list[DP_IDLE_SCATTER_BUFS_MAX];
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index 75df622f25d85..121f27284be59 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -2383,8 +2383,10 @@ void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+ 	channel_num = meta_data;
+ 	center_freq = meta_data >> 16;
+ 
+-	if (center_freq >= 5935 && center_freq <= 7105) {
++	if (center_freq >= ATH12K_MIN_6G_FREQ &&
++	    center_freq <= ATH12K_MAX_6G_FREQ) {
+ 		rx_status->band = NL80211_BAND_6GHZ;
++		rx_status->freq = center_freq;
+ 	} else if (channel_num >= 1 && channel_num <= 14) {
+ 		rx_status->band = NL80211_BAND_2GHZ;
+ 	} else if (channel_num >= 36 && channel_num <= 173) {
+@@ -2402,8 +2404,9 @@ void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+ 				rx_desc, sizeof(*rx_desc));
+ 	}
+ 
+-	rx_status->freq = ieee80211_channel_to_frequency(channel_num,
+-							 rx_status->band);
++	if (rx_status->band != NL80211_BAND_6GHZ)
++		rx_status->freq = ieee80211_channel_to_frequency(channel_num,
++								 rx_status->band);
+ 
+ 	ath12k_dp_rx_h_rate(ar, rx_desc, rx_status);
+ }
+@@ -2604,6 +2607,29 @@ static void ath12k_dp_rx_process_received_packets(struct ath12k_base *ab,
+ 	rcu_read_unlock();
+ }
+ 
++static u16 ath12k_dp_rx_get_peer_id(struct ath12k_base *ab,
++				    enum ath12k_peer_metadata_version ver,
++				    __le32 peer_metadata)
++{
++	switch (ver) {
++	default:
++		ath12k_warn(ab, "Unknown peer metadata version: %d", ver);
++		fallthrough;
++	case ATH12K_PEER_METADATA_V0:
++		return le32_get_bits(peer_metadata,
++				     RX_MPDU_DESC_META_DATA_V0_PEER_ID);
++	case ATH12K_PEER_METADATA_V1:
++		return le32_get_bits(peer_metadata,
++				     RX_MPDU_DESC_META_DATA_V1_PEER_ID);
++	case ATH12K_PEER_METADATA_V1A:
++		return le32_get_bits(peer_metadata,
++				     RX_MPDU_DESC_META_DATA_V1A_PEER_ID);
++	case ATH12K_PEER_METADATA_V1B:
++		return le32_get_bits(peer_metadata,
++				     RX_MPDU_DESC_META_DATA_V1B_PEER_ID);
++	}
++}
++
+ int ath12k_dp_rx_process(struct ath12k_base *ab, int ring_id,
+ 			 struct napi_struct *napi, int budget)
+ {
+@@ -2632,6 +2658,8 @@ int ath12k_dp_rx_process(struct ath12k_base *ab, int ring_id,
+ 	ath12k_hal_srng_access_begin(ab, srng);
+ 
+ 	while ((desc = ath12k_hal_srng_dst_get_next_entry(ab, srng))) {
++		struct rx_mpdu_desc *mpdu_info;
++		struct rx_msdu_desc *msdu_info;
+ 		enum hal_reo_dest_ring_push_reason push_reason;
+ 		u32 cookie;
+ 
+@@ -2678,16 +2706,19 @@ int ath12k_dp_rx_process(struct ath12k_base *ab, int ring_id,
+ 			continue;
+ 		}
+ 
+-		rxcb->is_first_msdu = !!(le32_to_cpu(desc->rx_msdu_info.info0) &
++		msdu_info = &desc->rx_msdu_info;
++		mpdu_info = &desc->rx_mpdu_info;
++
++		rxcb->is_first_msdu = !!(le32_to_cpu(msdu_info->info0) &
+ 					 RX_MSDU_DESC_INFO0_FIRST_MSDU_IN_MPDU);
+-		rxcb->is_last_msdu = !!(le32_to_cpu(desc->rx_msdu_info.info0) &
++		rxcb->is_last_msdu = !!(le32_to_cpu(msdu_info->info0) &
+ 					RX_MSDU_DESC_INFO0_LAST_MSDU_IN_MPDU);
+-		rxcb->is_continuation = !!(le32_to_cpu(desc->rx_msdu_info.info0) &
++		rxcb->is_continuation = !!(le32_to_cpu(msdu_info->info0) &
+ 					   RX_MSDU_DESC_INFO0_MSDU_CONTINUATION);
+ 		rxcb->mac_id = mac_id;
+-		rxcb->peer_id = le32_get_bits(desc->rx_mpdu_info.peer_meta_data,
+-					      RX_MPDU_DESC_META_DATA_PEER_ID);
+-		rxcb->tid = le32_get_bits(desc->rx_mpdu_info.info0,
++		rxcb->peer_id = ath12k_dp_rx_get_peer_id(ab, dp->peer_metadata_ver,
++							 mpdu_info->peer_meta_data);
++		rxcb->tid = le32_get_bits(mpdu_info->info0,
+ 					  RX_MPDU_DESC_INFO0_TID);
+ 
+ 		__skb_queue_tail(&msdu_list, msdu);
+@@ -2991,7 +3022,7 @@ static int ath12k_dp_rx_h_defrag_reo_reinject(struct ath12k *ar,
+ 	struct hal_srng *srng;
+ 	dma_addr_t link_paddr, buf_paddr;
+ 	u32 desc_bank, msdu_info, msdu_ext_info, mpdu_info;
+-	u32 cookie, hal_rx_desc_sz, dest_ring_info0;
++	u32 cookie, hal_rx_desc_sz, dest_ring_info0, queue_addr_hi;
+ 	int ret;
+ 	struct ath12k_rx_desc_info *desc_info;
+ 	u8 dst_ind;
+@@ -3027,7 +3058,7 @@ static int ath12k_dp_rx_h_defrag_reo_reinject(struct ath12k *ar,
+ 
+ 	buf_paddr = dma_map_single(ab->dev, defrag_skb->data,
+ 				   defrag_skb->len + skb_tailroom(defrag_skb),
+-				   DMA_FROM_DEVICE);
++				   DMA_TO_DEVICE);
+ 	if (dma_mapping_error(ab->dev, buf_paddr))
+ 		return -ENOMEM;
+ 
+@@ -3083,13 +3114,11 @@ static int ath12k_dp_rx_h_defrag_reo_reinject(struct ath12k *ar,
+ 	reo_ent_ring->rx_mpdu_info.peer_meta_data =
+ 		reo_dest_ring->rx_mpdu_info.peer_meta_data;
+ 
+-	/* Firmware expects physical address to be filled in queue_addr_lo in
+-	 * the MLO scenario and in case of non MLO peer meta data needs to be
+-	 * filled.
+-	 * TODO: Need to handle for MLO scenario.
+-	 */
+-	reo_ent_ring->queue_addr_lo = reo_dest_ring->rx_mpdu_info.peer_meta_data;
+-	reo_ent_ring->info0 = le32_encode_bits(dst_ind,
++	reo_ent_ring->queue_addr_lo = cpu_to_le32(lower_32_bits(rx_tid->paddr));
++	queue_addr_hi = upper_32_bits(rx_tid->paddr);
++	reo_ent_ring->info0 = le32_encode_bits(queue_addr_hi,
++					       HAL_REO_ENTR_RING_INFO0_QUEUE_ADDR_HI) |
++			      le32_encode_bits(dst_ind,
+ 					       HAL_REO_ENTR_RING_INFO0_DEST_IND);
+ 
+ 	reo_ent_ring->info1 = le32_encode_bits(rx_tid->cur_sn,
+@@ -3113,7 +3142,7 @@ static int ath12k_dp_rx_h_defrag_reo_reinject(struct ath12k *ar,
+ 	spin_unlock_bh(&dp->rx_desc_lock);
+ err_unmap_dma:
+ 	dma_unmap_single(ab->dev, buf_paddr, defrag_skb->len + skb_tailroom(defrag_skb),
+-			 DMA_FROM_DEVICE);
++			 DMA_TO_DEVICE);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c
+index 9b6d7d72f57c4..a7c7a868c14ce 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_tx.c
+@@ -352,15 +352,15 @@ static void ath12k_dp_tx_free_txbuf(struct ath12k_base *ab,
+ 	u8 pdev_id = ath12k_hw_mac_id_to_pdev_id(ab->hw_params, mac_id);
+ 
+ 	skb_cb = ATH12K_SKB_CB(msdu);
++	ar = ab->pdevs[pdev_id].ar;
+ 
+ 	dma_unmap_single(ab->dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
+ 	if (skb_cb->paddr_ext_desc)
+ 		dma_unmap_single(ab->dev, skb_cb->paddr_ext_desc,
+ 				 sizeof(struct hal_tx_msdu_ext_desc), DMA_TO_DEVICE);
+ 
+-	dev_kfree_skb_any(msdu);
++	ieee80211_free_txskb(ar->ah->hw, msdu);
+ 
+-	ar = ab->pdevs[pdev_id].ar;
+ 	if (atomic_dec_and_test(&ar->dp.num_tx_pending))
+ 		wake_up(&ar->dp.tx_empty_waitq);
+ }
+@@ -448,6 +448,7 @@ static void ath12k_dp_tx_complete_msdu(struct ath12k *ar,
+ 				       struct hal_tx_status *ts)
+ {
+ 	struct ath12k_base *ab = ar->ab;
++	struct ath12k_hw *ah = ar->ah;
+ 	struct ieee80211_tx_info *info;
+ 	struct ath12k_skb_cb *skb_cb;
+ 
+@@ -466,12 +467,12 @@ static void ath12k_dp_tx_complete_msdu(struct ath12k *ar,
+ 	rcu_read_lock();
+ 
+ 	if (!rcu_dereference(ab->pdevs_active[ar->pdev_idx])) {
+-		dev_kfree_skb_any(msdu);
++		ieee80211_free_txskb(ah->hw, msdu);
+ 		goto exit;
+ 	}
+ 
+ 	if (!skb_cb->vif) {
+-		dev_kfree_skb_any(msdu);
++		ieee80211_free_txskb(ah->hw, msdu);
+ 		goto exit;
+ 	}
+ 
+@@ -481,18 +482,36 @@ static void ath12k_dp_tx_complete_msdu(struct ath12k *ar,
+ 	/* skip tx rate update from ieee80211_status*/
+ 	info->status.rates[0].idx = -1;
+ 
+-	if (ts->status == HAL_WBM_TQM_REL_REASON_FRAME_ACKED &&
+-	    !(info->flags & IEEE80211_TX_CTL_NO_ACK)) {
+-		info->flags |= IEEE80211_TX_STAT_ACK;
+-		info->status.ack_signal = ATH12K_DEFAULT_NOISE_FLOOR +
+-					  ts->ack_rssi;
+-		info->status.flags = IEEE80211_TX_STATUS_ACK_SIGNAL_VALID;
++	switch (ts->status) {
++	case HAL_WBM_TQM_REL_REASON_FRAME_ACKED:
++		if (!(info->flags & IEEE80211_TX_CTL_NO_ACK)) {
++			info->flags |= IEEE80211_TX_STAT_ACK;
++			info->status.ack_signal = ATH12K_DEFAULT_NOISE_FLOOR +
++						  ts->ack_rssi;
++			info->status.flags = IEEE80211_TX_STATUS_ACK_SIGNAL_VALID;
++		}
++		break;
++	case HAL_WBM_TQM_REL_REASON_CMD_REMOVE_TX:
++		if (info->flags & IEEE80211_TX_CTL_NO_ACK) {
++			info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
++			break;
++		}
++		fallthrough;
++	case HAL_WBM_TQM_REL_REASON_CMD_REMOVE_MPDU:
++	case HAL_WBM_TQM_REL_REASON_DROP_THRESHOLD:
++	case HAL_WBM_TQM_REL_REASON_CMD_REMOVE_AGED_FRAMES:
++		/* The failure status is due to internal firmware tx failure
++		 * hence drop the frame; do not update the status of frame to
++		 * the upper layer
++		 */
++		ieee80211_free_txskb(ah->hw, msdu);
++		goto exit;
++	default:
++		ath12k_dbg(ab, ATH12K_DBG_DP_TX, "tx frame is not acked status %d\n",
++			   ts->status);
++		break;
+ 	}
+ 
+-	if (ts->status == HAL_WBM_TQM_REL_REASON_CMD_REMOVE_TX &&
+-	    (info->flags & IEEE80211_TX_CTL_NO_ACK))
+-		info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
+-
+ 	/* NOTE: Tx rate status reporting. Tx completion status does not have
+ 	 * necessary information (for example nss) to build the tx rate.
+ 	 * Might end up reporting it out-of-band from HTT stats.
+diff --git a/drivers/net/wireless/ath/ath12k/hal_desc.h b/drivers/net/wireless/ath/ath12k/hal_desc.h
+index 63340256d3f64..072e36365808e 100644
+--- a/drivers/net/wireless/ath/ath12k/hal_desc.h
++++ b/drivers/net/wireless/ath/ath12k/hal_desc.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #include "core.h"
+ 
+@@ -597,8 +597,30 @@ struct hal_tlv_64_hdr {
+ #define RX_MPDU_DESC_INFO0_MPDU_QOS_CTRL_VALID	BIT(27)
+ #define RX_MPDU_DESC_INFO0_TID			GENMASK(31, 28)
+ 
+-/* TODO revisit after meta data is concluded */
+-#define RX_MPDU_DESC_META_DATA_PEER_ID		GENMASK(15, 0)
++/* Peer Metadata classification */
++
++/* Version 0 */
++#define RX_MPDU_DESC_META_DATA_V0_PEER_ID	GENMASK(15, 0)
++#define RX_MPDU_DESC_META_DATA_V0_VDEV_ID	GENMASK(23, 16)
++
++/* Version 1 */
++#define RX_MPDU_DESC_META_DATA_V1_PEER_ID		GENMASK(13, 0)
++#define RX_MPDU_DESC_META_DATA_V1_LOGICAL_LINK_ID	GENMASK(15, 14)
++#define RX_MPDU_DESC_META_DATA_V1_VDEV_ID		GENMASK(23, 16)
++#define RX_MPDU_DESC_META_DATA_V1_LMAC_ID		GENMASK(25, 24)
++#define RX_MPDU_DESC_META_DATA_V1_DEVICE_ID		GENMASK(28, 26)
++
++/* Version 1A */
++#define RX_MPDU_DESC_META_DATA_V1A_PEER_ID		GENMASK(13, 0)
++#define RX_MPDU_DESC_META_DATA_V1A_VDEV_ID		GENMASK(21, 14)
++#define RX_MPDU_DESC_META_DATA_V1A_LOGICAL_LINK_ID	GENMASK(25, 22)
++#define RX_MPDU_DESC_META_DATA_V1A_DEVICE_ID		GENMASK(28, 26)
++
++/* Version 1B */
++#define RX_MPDU_DESC_META_DATA_V1B_PEER_ID	GENMASK(13, 0)
++#define RX_MPDU_DESC_META_DATA_V1B_VDEV_ID	GENMASK(21, 14)
++#define RX_MPDU_DESC_META_DATA_V1B_HW_LINK_ID	GENMASK(25, 22)
++#define RX_MPDU_DESC_META_DATA_V1B_DEVICE_ID	GENMASK(28, 26)
+ 
+ struct rx_mpdu_desc {
+ 	__le32 info0; /* %RX_MPDU_DESC_INFO */
+@@ -2048,6 +2070,19 @@ struct hal_wbm_release_ring {
+  *	fw with fw_reason2.
+  * @HAL_WBM_TQM_REL_REASON_CMD_REMOVE_RESEAON3: Remove command initiated by
+  *	fw with fw_reason3.
++ * @HAL_WBM_TQM_REL_REASON_CMD_DISABLE_QUEUE: Remove command initiated by
++ *	fw with disable queue.
++ * @HAL_WBM_TQM_REL_REASON_CMD_TILL_NONMATCHING: Remove command initiated by
++ *	fw to remove all mpdu until 1st non-match.
++ * @HAL_WBM_TQM_REL_REASON_DROP_THRESHOLD: Dropped due to drop threshold
++ *	criteria
++ * @HAL_WBM_TQM_REL_REASON_DROP_LINK_DESC_UNAVAIL: Dropped due to link desc
++ *	not available
++ * @HAL_WBM_TQM_REL_REASON_DROP_OR_INVALID_MSDU: Dropped due drop bit set or
++ *	null flow
++ * @HAL_WBM_TQM_REL_REASON_MULTICAST_DROP: Dropped due mcast drop set for VDEV
++ * @HAL_WBM_TQM_REL_REASON_VDEV_MISMATCH_DROP: Dropped due to being set with
++ *	'TCL_drop_reason'
+  */
+ enum hal_wbm_tqm_rel_reason {
+ 	HAL_WBM_TQM_REL_REASON_FRAME_ACKED,
+@@ -2058,6 +2093,13 @@ enum hal_wbm_tqm_rel_reason {
+ 	HAL_WBM_TQM_REL_REASON_CMD_REMOVE_RESEAON1,
+ 	HAL_WBM_TQM_REL_REASON_CMD_REMOVE_RESEAON2,
+ 	HAL_WBM_TQM_REL_REASON_CMD_REMOVE_RESEAON3,
++	HAL_WBM_TQM_REL_REASON_CMD_DISABLE_QUEUE,
++	HAL_WBM_TQM_REL_REASON_CMD_TILL_NONMATCHING,
++	HAL_WBM_TQM_REL_REASON_DROP_THRESHOLD,
++	HAL_WBM_TQM_REL_REASON_DROP_LINK_DESC_UNAVAIL,
++	HAL_WBM_TQM_REL_REASON_DROP_OR_INVALID_MSDU,
++	HAL_WBM_TQM_REL_REASON_MULTICAST_DROP,
++	HAL_WBM_TQM_REL_REASON_VDEV_MISMATCH_DROP,
+ };
+ 
+ struct hal_wbm_buffer_ring {
+diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
+index f4c8270158215..bff8cf97a18c6 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.c
++++ b/drivers/net/wireless/ath/ath12k/hw.c
+@@ -544,9 +544,6 @@ static const struct ath12k_hw_ring_mask ath12k_hw_ring_mask_qcn9274 = {
+ 	},
+ 	.rx_mon_dest = {
+ 		0, 0, 0,
+-		ATH12K_RX_MON_RING_MASK_0,
+-		ATH12K_RX_MON_RING_MASK_1,
+-		ATH12K_RX_MON_RING_MASK_2,
+ 	},
+ 	.rx = {
+ 		0, 0, 0, 0,
+@@ -572,16 +569,15 @@ static const struct ath12k_hw_ring_mask ath12k_hw_ring_mask_qcn9274 = {
+ 		ATH12K_HOST2RXDMA_RING_MASK_0,
+ 	},
+ 	.tx_mon_dest = {
+-		ATH12K_TX_MON_RING_MASK_0,
+-		ATH12K_TX_MON_RING_MASK_1,
++		0, 0, 0,
+ 	},
+ };
+ 
+ static const struct ath12k_hw_ring_mask ath12k_hw_ring_mask_wcn7850 = {
+ 	.tx  = {
+ 		ATH12K_TX_RING_MASK_0,
++		ATH12K_TX_RING_MASK_1,
+ 		ATH12K_TX_RING_MASK_2,
+-		ATH12K_TX_RING_MASK_4,
+ 	},
+ 	.rx_mon_dest = {
+ 	},
+diff --git a/drivers/net/wireless/ath/ath12k/hw.h b/drivers/net/wireless/ath/ath12k/hw.h
+index 3f450ee93f34b..2a314cfc8cb84 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.h
++++ b/drivers/net/wireless/ath/ath12k/hw.h
+@@ -78,8 +78,7 @@
+ #define TARGET_NUM_WDS_ENTRIES		32
+ #define TARGET_DMA_BURST_SIZE		1
+ #define TARGET_RX_BATCHMODE		1
+-#define TARGET_RX_PEER_METADATA_VER_V1A	2
+-#define TARGET_RX_PEER_METADATA_VER_V1B	3
++#define TARGET_EMA_MAX_PROFILE_PERIOD	8
+ 
+ #define ATH12K_HW_DEFAULT_QUEUE		0
+ #define ATH12K_HW_MAX_QUEUES		4
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 805cb084484a4..ead37a4e002a2 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -7386,7 +7386,8 @@ ath12k_mac_op_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ 		arvif->is_started = false;
+ 	}
+ 
+-	if (arvif->vdev_type != WMI_VDEV_TYPE_STA) {
++	if (arvif->vdev_type != WMI_VDEV_TYPE_STA &&
++	    arvif->vdev_type != WMI_VDEV_TYPE_MONITOR) {
+ 		ath12k_bss_disassoc(ar, arvif);
+ 		ret = ath12k_mac_vdev_stop(arvif);
+ 		if (ret)
+@@ -8488,19 +8489,23 @@ static int ath12k_mac_setup_iface_combinations(struct ath12k_hw *ah)
+ 
+ static const u8 ath12k_if_types_ext_capa[] = {
+ 	[0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
++	[2] = WLAN_EXT_CAPA3_MULTI_BSSID_SUPPORT,
+ 	[7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
+ };
+ 
+ static const u8 ath12k_if_types_ext_capa_sta[] = {
+ 	[0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
++	[2] = WLAN_EXT_CAPA3_MULTI_BSSID_SUPPORT,
+ 	[7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
+ 	[9] = WLAN_EXT_CAPA10_TWT_REQUESTER_SUPPORT,
+ };
+ 
+ static const u8 ath12k_if_types_ext_capa_ap[] = {
+ 	[0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
++	[2] = WLAN_EXT_CAPA3_MULTI_BSSID_SUPPORT,
+ 	[7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
+ 	[9] = WLAN_EXT_CAPA10_TWT_RESPONDER_SUPPORT,
++	[10] = WLAN_EXT_CAPA11_EMA_SUPPORT,
+ };
+ 
+ static const struct wiphy_iftype_ext_capab ath12k_iftypes_ext_capa[] = {
+@@ -8605,6 +8610,7 @@ static int ath12k_mac_hw_register(struct ath12k_hw *ah)
+ 	u32 ht_cap = U32_MAX, antennas_rx = 0, antennas_tx = 0;
+ 	bool is_6ghz = false, is_raw_mode = false, is_monitor_disable = false;
+ 	u8 *mac_addr = NULL;
++	u8 mbssid_max_interfaces = 0;
+ 
+ 	wiphy->max_ap_assoc_sta = 0;
+ 
+@@ -8648,6 +8654,8 @@ static int ath12k_mac_hw_register(struct ath12k_hw *ah)
+ 			mac_addr = ar->mac_addr;
+ 		else
+ 			mac_addr = ab->mac_addr;
++
++		mbssid_max_interfaces += TARGET_NUM_VDEVS;
+ 	}
+ 
+ 	wiphy->available_antennas_rx = antennas_rx;
+@@ -8739,6 +8747,9 @@ static int ath12k_mac_hw_register(struct ath12k_hw *ah)
+ 	wiphy->iftype_ext_capab = ath12k_iftypes_ext_capa;
+ 	wiphy->num_iftype_ext_capab = ARRAY_SIZE(ath12k_iftypes_ext_capa);
+ 
++	wiphy->mbssid_max_interfaces = mbssid_max_interfaces;
++	wiphy->ema_max_profile_periodicity = TARGET_EMA_MAX_PROFILE_PERIOD;
++
+ 	if (is_6ghz) {
+ 		wiphy_ext_feature_set(wiphy,
+ 				      NL80211_EXT_FEATURE_FILS_DISCOVERY);
+@@ -8777,9 +8788,9 @@ static int ath12k_mac_hw_register(struct ath12k_hw *ah)
+ 			ath12k_err(ar->ab, "ath12k regd update failed: %d\n", ret);
+ 			goto err_unregister_hw;
+ 		}
+-	}
+ 
+-	ath12k_debugfs_register(ar);
++		ath12k_debugfs_register(ar);
++	}
+ 
+ 	return 0;
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index 7a52d2082b792..ef775af25093c 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -228,9 +228,12 @@ void ath12k_wmi_init_qcn9274(struct ath12k_base *ab,
+ 	config->peer_map_unmap_version = 0x32;
+ 	config->twt_ap_pdev_count = ab->num_radios;
+ 	config->twt_ap_sta_count = 1000;
++	config->ema_max_vap_cnt = ab->num_radios;
++	config->ema_max_profile_period = TARGET_EMA_MAX_PROFILE_PERIOD;
++	config->beacon_tx_offload_max_vdev += config->ema_max_vap_cnt;
+ 
+ 	if (test_bit(WMI_TLV_SERVICE_PEER_METADATA_V1A_V1B_SUPPORT, ab->wmi_ab.svc_map))
+-		config->dp_peer_meta_data_ver = TARGET_RX_PEER_METADATA_VER_V1B;
++		config->peer_metadata_ver = ATH12K_PEER_METADATA_V1B;
+ }
+ 
+ void ath12k_wmi_init_wcn7850(struct ath12k_base *ab,
+@@ -3473,11 +3476,13 @@ ath12k_wmi_copy_resource_config(struct ath12k_wmi_resource_config_params *wmi_cf
+ 	wmi_cfg->sched_params = cpu_to_le32(tg_cfg->sched_params);
+ 	wmi_cfg->twt_ap_pdev_count = cpu_to_le32(tg_cfg->twt_ap_pdev_count);
+ 	wmi_cfg->twt_ap_sta_count = cpu_to_le32(tg_cfg->twt_ap_sta_count);
+-	wmi_cfg->flags2 = le32_encode_bits(tg_cfg->dp_peer_meta_data_ver,
++	wmi_cfg->flags2 = le32_encode_bits(tg_cfg->peer_metadata_ver,
+ 					   WMI_RSRC_CFG_FLAGS2_RX_PEER_METADATA_VERSION);
+-
+ 	wmi_cfg->host_service_flags = cpu_to_le32(tg_cfg->is_reg_cc_ext_event_supported <<
+ 				WMI_RSRC_CFG_HOST_SVC_FLAG_REG_CC_EXT_SUPPORT_BIT);
++	wmi_cfg->ema_max_vap_cnt = cpu_to_le32(tg_cfg->ema_max_vap_cnt);
++	wmi_cfg->ema_max_profile_period = cpu_to_le32(tg_cfg->ema_max_profile_period);
++	wmi_cfg->flags2 |= cpu_to_le32(WMI_RSRC_CFG_FLAGS2_CALC_NEXT_DTIM_COUNT_SET);
+ }
+ 
+ static int ath12k_init_cmd_send(struct ath12k_wmi_pdev *wmi,
+@@ -3701,6 +3706,8 @@ int ath12k_wmi_cmd_init(struct ath12k_base *ab)
+ 	arg.num_band_to_mac = ab->num_radios;
+ 	ath12k_fill_band_to_mac_param(ab, arg.band_to_mac);
+ 
++	ab->dp.peer_metadata_ver = arg.res_cfg.peer_metadata_ver;
++
+ 	return ath12k_init_cmd_send(&wmi_ab->wmi[0], &arg);
+ }
+ 
+@@ -6022,8 +6029,10 @@ static void ath12k_mgmt_rx_event(struct ath12k_base *ab, struct sk_buff *skb)
+ 	if (rx_ev.status & WMI_RX_STATUS_ERR_MIC)
+ 		status->flag |= RX_FLAG_MMIC_ERROR;
+ 
+-	if (rx_ev.chan_freq >= ATH12K_MIN_6G_FREQ) {
++	if (rx_ev.chan_freq >= ATH12K_MIN_6G_FREQ &&
++	    rx_ev.chan_freq <= ATH12K_MAX_6G_FREQ) {
+ 		status->band = NL80211_BAND_6GHZ;
++		status->freq = rx_ev.chan_freq;
+ 	} else if (rx_ev.channel >= 1 && rx_ev.channel <= 14) {
+ 		status->band = NL80211_BAND_2GHZ;
+ 	} else if (rx_ev.channel >= 36 && rx_ev.channel <= ATH12K_MAX_5G_CHAN) {
+@@ -6044,8 +6053,10 @@ static void ath12k_mgmt_rx_event(struct ath12k_base *ab, struct sk_buff *skb)
+ 
+ 	sband = &ar->mac.sbands[status->band];
+ 
+-	status->freq = ieee80211_channel_to_frequency(rx_ev.channel,
+-						      status->band);
++	if (status->band != NL80211_BAND_6GHZ)
++		status->freq = ieee80211_channel_to_frequency(rx_ev.channel,
++							      status->band);
++
+ 	status->signal = rx_ev.snr + ATH12K_DEFAULT_NOISE_FLOOR;
+ 	status->rate_idx = ath12k_mac_bitrate_to_idx(sband, rx_ev.rate / 100);
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index 496866673aead..742fe0b36cf20 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -2292,6 +2292,13 @@ struct ath12k_wmi_host_mem_chunk_arg {
+ 	u32 req_id;
+ };
+ 
++enum ath12k_peer_metadata_version {
++	ATH12K_PEER_METADATA_V0,
++	ATH12K_PEER_METADATA_V1,
++	ATH12K_PEER_METADATA_V1A,
++	ATH12K_PEER_METADATA_V1B
++};
++
+ struct ath12k_wmi_resource_config_arg {
+ 	u32 num_vdevs;
+ 	u32 num_peers;
+@@ -2354,8 +2361,10 @@ struct ath12k_wmi_resource_config_arg {
+ 	u32 sched_params;
+ 	u32 twt_ap_pdev_count;
+ 	u32 twt_ap_sta_count;
++	enum ath12k_peer_metadata_version peer_metadata_ver;
++	u32 ema_max_vap_cnt;
++	u32 ema_max_profile_period;
+ 	bool is_reg_cc_ext_event_supported;
+-	u8  dp_peer_meta_data_ver;
+ };
+ 
+ struct ath12k_wmi_init_cmd_arg {
+@@ -2410,6 +2419,7 @@ struct wmi_init_cmd {
+ #define WMI_RSRC_CFG_HOST_SVC_FLAG_REG_CC_EXT_SUPPORT_BIT 4
+ #define WMI_RSRC_CFG_FLAGS2_RX_PEER_METADATA_VERSION		GENMASK(5, 4)
+ #define WMI_RSRC_CFG_FLAG1_BSS_CHANNEL_INFO_64	BIT(5)
++#define WMI_RSRC_CFG_FLAGS2_CALC_NEXT_DTIM_COUNT_SET      BIT(9)
+ 
+ struct ath12k_wmi_resource_config_params {
+ 	__le32 tlv_header;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
+index aae2cf95fe958..e472591f321bd 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
+@@ -2567,7 +2567,6 @@ wlc_lcnphy_tx_iqlo_cal(struct brcms_phy *pi,
+ 
+ 	struct lcnphy_txgains cal_gains, temp_gains;
+ 	u16 hash;
+-	u8 band_idx;
+ 	int j;
+ 	u16 ncorr_override[5];
+ 	u16 syst_coeffs[] = { 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000,
+@@ -2599,6 +2598,9 @@ wlc_lcnphy_tx_iqlo_cal(struct brcms_phy *pi,
+ 	u16 *values_to_save;
+ 	struct brcms_phy_lcnphy *pi_lcn = pi->u.pi_lcnphy;
+ 
++	if (WARN_ON(CHSPEC_IS5G(pi->radio_chanspec)))
++		return;
++
+ 	values_to_save = kmalloc_array(20, sizeof(u16), GFP_ATOMIC);
+ 	if (NULL == values_to_save)
+ 		return;
+@@ -2662,20 +2664,18 @@ wlc_lcnphy_tx_iqlo_cal(struct brcms_phy *pi,
+ 	hash = (target_gains->gm_gain << 8) |
+ 	       (target_gains->pga_gain << 4) | (target_gains->pad_gain);
+ 
+-	band_idx = (CHSPEC_IS5G(pi->radio_chanspec) ? 1 : 0);
+-
+ 	cal_gains = *target_gains;
+ 	memset(ncorr_override, 0, sizeof(ncorr_override));
+-	for (j = 0; j < iqcal_gainparams_numgains_lcnphy[band_idx]; j++) {
+-		if (hash == tbl_iqcal_gainparams_lcnphy[band_idx][j][0]) {
++	for (j = 0; j < iqcal_gainparams_numgains_lcnphy[0]; j++) {
++		if (hash == tbl_iqcal_gainparams_lcnphy[0][j][0]) {
+ 			cal_gains.gm_gain =
+-				tbl_iqcal_gainparams_lcnphy[band_idx][j][1];
++				tbl_iqcal_gainparams_lcnphy[0][j][1];
+ 			cal_gains.pga_gain =
+-				tbl_iqcal_gainparams_lcnphy[band_idx][j][2];
++				tbl_iqcal_gainparams_lcnphy[0][j][2];
+ 			cal_gains.pad_gain =
+-				tbl_iqcal_gainparams_lcnphy[band_idx][j][3];
++				tbl_iqcal_gainparams_lcnphy[0][j][3];
+ 			memcpy(ncorr_override,
+-			       &tbl_iqcal_gainparams_lcnphy[band_idx][j][3],
++			       &tbl_iqcal_gainparams_lcnphy[0][j][3],
+ 			       sizeof(ncorr_override));
+ 			break;
+ 		}
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/link.c b/drivers/net/wireless/intel/iwlwifi/mvm/link.c
+index 6ec9a8e21a34e..92ac6cc40faa7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/link.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/link.c
+@@ -1082,6 +1082,15 @@ static void iwl_mvm_esr_unblocked(struct iwl_mvm *mvm,
+ 
+ 	IWL_DEBUG_INFO(mvm, "EMLSR is unblocked\n");
+ 
++	/* If we exited due to an EXIT reason, and the exit was in less than
++	 * 30 seconds, then a MLO scan was scheduled already.
++	 */
++	if (!need_new_sel &&
++	    !(mvmvif->last_esr_exit.reason & IWL_MVM_BLOCK_ESR_REASONS)) {
++		IWL_DEBUG_INFO(mvm, "Wait for MLO scan\n");
++		return;
++	}
++
+ 	/*
+ 	 * If EMLSR was blocked for more than 30 seconds, or the last link
+ 	 * selection decided to not enter EMLSR, trigger a new scan.
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index dac6155ae1bd0..259afecd1a98d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -4861,6 +4861,7 @@ int iwl_mvm_roc_common(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 		       const struct iwl_mvm_roc_ops *ops)
+ {
+ 	struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
++	struct ieee80211_vif *bss_vif = iwl_mvm_get_bss_vif(mvm);
+ 	u32 lmac_id;
+ 	int ret;
+ 
+@@ -4873,9 +4874,12 @@ int iwl_mvm_roc_common(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 	 */
+ 	flush_work(&mvm->roc_done_wk);
+ 
+-	ret = iwl_mvm_esr_non_bss_link(mvm, vif, 0, true);
+-	if (ret)
+-		return ret;
++	if (!IS_ERR_OR_NULL(bss_vif)) {
++		ret = iwl_mvm_block_esr_sync(mvm, bss_vif,
++					     IWL_MVM_ESR_BLOCKED_ROC);
++		if (ret)
++			return ret;
++	}
+ 
+ 	mutex_lock(&mvm->mutex);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+index 0a1959bd40799..ded094b6b63df 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+@@ -360,7 +360,9 @@ struct iwl_mvm_vif_link_info {
+  * @IWL_MVM_ESR_BLOCKED_WOWLAN: WOWLAN is preventing the enablement of EMLSR
+  * @IWL_MVM_ESR_BLOCKED_TPT: block EMLSR when there is not enough traffic
+  * @IWL_MVM_ESR_BLOCKED_FW: FW didn't recommended/forced exit from EMLSR
+- * @IWL_MVM_ESR_BLOCKED_NON_BSS: An active non-bssid link's preventing EMLSR
++ * @IWL_MVM_ESR_BLOCKED_NON_BSS: An active non-BSS interface's link is
++ *	preventing EMLSR
++ * @IWL_MVM_ESR_BLOCKED_ROC: remain-on-channel is preventing EMLSR
+  * @IWL_MVM_ESR_EXIT_MISSED_BEACON: exited EMLSR due to missed beacons
+  * @IWL_MVM_ESR_EXIT_LOW_RSSI: link is deactivated/not allowed for EMLSR
+  *	due to low RSSI.
+@@ -377,6 +379,7 @@ enum iwl_mvm_esr_state {
+ 	IWL_MVM_ESR_BLOCKED_TPT		= 0x4,
+ 	IWL_MVM_ESR_BLOCKED_FW		= 0x8,
+ 	IWL_MVM_ESR_BLOCKED_NON_BSS	= 0x10,
++	IWL_MVM_ESR_BLOCKED_ROC		= 0x20,
+ 	IWL_MVM_ESR_EXIT_MISSED_BEACON	= 0x10000,
+ 	IWL_MVM_ESR_EXIT_LOW_RSSI	= 0x20000,
+ 	IWL_MVM_ESR_EXIT_COEX		= 0x40000,
+@@ -1860,10 +1863,10 @@ static inline u8 iwl_mvm_get_valid_tx_ant(struct iwl_mvm *mvm)
+ 
+ static inline u8 iwl_mvm_get_valid_rx_ant(struct iwl_mvm *mvm)
+ {
+-	u8 rx_ant = mvm->fw->valid_tx_ant;
++	u8 rx_ant = mvm->fw->valid_rx_ant;
+ 
+ 	if (mvm->nvm_data && mvm->nvm_data->valid_rx_ant)
+-		rx_ant &= mvm->nvm_data->valid_tx_ant;
++		rx_ant &= mvm->nvm_data->valid_rx_ant;
+ 
+ 	if (mvm->set_rx_ant)
+ 		rx_ant &= mvm->set_rx_ant;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index 31bc80cdcb7d5..9d681377cbab3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -47,6 +47,7 @@ void iwl_mvm_te_clear_data(struct iwl_mvm *mvm,
+ 
+ static void iwl_mvm_cleanup_roc(struct iwl_mvm *mvm)
+ {
++	struct ieee80211_vif *bss_vif = iwl_mvm_get_bss_vif(mvm);
+ 	struct ieee80211_vif *vif = mvm->p2p_device_vif;
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+@@ -119,9 +120,9 @@ static void iwl_mvm_cleanup_roc(struct iwl_mvm *mvm)
+ 			iwl_mvm_rm_aux_sta(mvm);
+ 	}
+ 
++	if (!IS_ERR_OR_NULL(bss_vif))
++		iwl_mvm_unblock_esr(mvm, bss_vif, IWL_MVM_ESR_BLOCKED_ROC);
+ 	mutex_unlock(&mvm->mutex);
+-	if (vif)
+-		iwl_mvm_esr_non_bss_link(mvm, vif, 0, false);
+ }
+ 
+ void iwl_mvm_roc_done_wk(struct work_struct *wk)
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index b909a7665e9cc..155eb0fab12a4 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -926,6 +926,8 @@ mwifiex_init_new_priv_params(struct mwifiex_private *priv,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	priv->bss_num = mwifiex_get_unused_bss_num(adapter, priv->bss_type);
++
+ 	spin_lock_irqsave(&adapter->main_proc_lock, flags);
+ 	adapter->main_locked = false;
+ 	spin_unlock_irqrestore(&adapter->main_proc_lock, flags);
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/8188f.c b/drivers/net/wireless/realtek/rtl8xxxu/8188f.c
+index bd5a0603b4a23..3abf14d7044f3 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/8188f.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/8188f.c
+@@ -697,9 +697,14 @@ static void rtl8188fu_init_statistics(struct rtl8xxxu_priv *priv)
+ 	rtl8xxxu_write32(priv, REG_OFDM0_FA_RSTC, val32);
+ }
+ 
++#define TX_POWER_INDEX_MAX 0x3F
++#define TX_POWER_INDEX_DEFAULT_CCK 0x22
++#define TX_POWER_INDEX_DEFAULT_HT40 0x27
++
+ static int rtl8188fu_parse_efuse(struct rtl8xxxu_priv *priv)
+ {
+ 	struct rtl8188fu_efuse *efuse = &priv->efuse_wifi.efuse8188fu;
++	int i;
+ 
+ 	if (efuse->rtl_id != cpu_to_le16(0x8129))
+ 		return -EINVAL;
+@@ -713,6 +718,16 @@ static int rtl8188fu_parse_efuse(struct rtl8xxxu_priv *priv)
+ 	       efuse->tx_power_index_A.ht40_base,
+ 	       sizeof(efuse->tx_power_index_A.ht40_base));
+ 
++	for (i = 0; i < ARRAY_SIZE(priv->cck_tx_power_index_A); i++) {
++		if (priv->cck_tx_power_index_A[i] > TX_POWER_INDEX_MAX)
++			priv->cck_tx_power_index_A[i] = TX_POWER_INDEX_DEFAULT_CCK;
++	}
++
++	for (i = 0; i < ARRAY_SIZE(priv->ht40_1s_tx_power_index_A); i++) {
++		if (priv->ht40_1s_tx_power_index_A[i] > TX_POWER_INDEX_MAX)
++			priv->ht40_1s_tx_power_index_A[i] = TX_POWER_INDEX_DEFAULT_HT40;
++	}
++
+ 	priv->ofdm_tx_power_diff[0].a = efuse->tx_power_index_A.ht20_ofdm_1s_diff.a;
+ 	priv->ht20_tx_power_diff[0].a = efuse->tx_power_index_A.ht20_ofdm_1s_diff.b;
+ 
+diff --git a/drivers/net/wireless/realtek/rtw88/mac.c b/drivers/net/wireless/realtek/rtw88/mac.c
+index 0dba8aae77160..564f5988ee82a 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac.c
++++ b/drivers/net/wireless/realtek/rtw88/mac.c
+@@ -1201,6 +1201,15 @@ static int __priority_queue_cfg(struct rtw_dev *rtwdev,
+ 	rtw_write16(rtwdev, REG_FIFOPAGE_CTRL_2 + 2, fifo->rsvd_boundary);
+ 	rtw_write16(rtwdev, REG_BCNQ1_BDNY_V1, fifo->rsvd_boundary);
+ 	rtw_write32(rtwdev, REG_RXFF_BNDY, chip->rxff_size - C2H_PKT_BUF - 1);
++
++	if (rtwdev->hci.type == RTW_HCI_TYPE_USB) {
++		rtw_write8_mask(rtwdev, REG_AUTO_LLT_V1, BIT_MASK_BLK_DESC_NUM,
++				chip->usb_tx_agg_desc_num);
++
++		rtw_write8(rtwdev, REG_AUTO_LLT_V1 + 3, chip->usb_tx_agg_desc_num);
++		rtw_write8_set(rtwdev, REG_TXDMA_OFFSET_CHK + 1, BIT(1));
++	}
++
+ 	rtw_write8_set(rtwdev, REG_AUTO_LLT_V1, BIT_AUTO_INIT_LLT_V1);
+ 
+ 	if (!check_hw_ready(rtwdev, REG_AUTO_LLT_V1, BIT_AUTO_INIT_LLT_V1, 0))
+diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
+index 49894331f7b49..49a3fd4fb7dcd 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.h
++++ b/drivers/net/wireless/realtek/rtw88/main.h
+@@ -1197,6 +1197,8 @@ struct rtw_chip_info {
+ 	u16 fw_fifo_addr[RTW_FW_FIFO_MAX];
+ 	const struct rtw_fwcd_segs *fwcd_segs;
+ 
++	u8 usb_tx_agg_desc_num;
++
+ 	u8 default_1ss_tx_path;
+ 
+ 	bool path_div_supported;
+diff --git a/drivers/net/wireless/realtek/rtw88/reg.h b/drivers/net/wireless/realtek/rtw88/reg.h
+index b122f226924be..02ef9a77316b4 100644
+--- a/drivers/net/wireless/realtek/rtw88/reg.h
++++ b/drivers/net/wireless/realtek/rtw88/reg.h
+@@ -270,6 +270,7 @@
+ #define BIT_MASK_BCN_HEAD_1_V1	0xfff
+ #define REG_AUTO_LLT_V1		0x0208
+ #define BIT_AUTO_INIT_LLT_V1	BIT(0)
++#define BIT_MASK_BLK_DESC_NUM	GENMASK(7, 4)
+ #define REG_DWBCN0_CTRL		0x0208
+ #define BIT_BCN_VALID		BIT(16)
+ #define REG_TXDMA_OFFSET_CHK	0x020C
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8703b.c b/drivers/net/wireless/realtek/rtw88/rtw8703b.c
+index 8919f9e11f037..222608de33cde 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8703b.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8703b.c
+@@ -2013,6 +2013,7 @@ const struct rtw_chip_info rtw8703b_hw_spec = {
+ 	.tx_stbc = false,
+ 	.max_power_index = 0x3f,
+ 	.ampdu_density = IEEE80211_HT_MPDU_DENSITY_16,
++	.usb_tx_agg_desc_num = 1, /* Not sure if this chip has USB interface */
+ 
+ 	.path_div_supported = false,
+ 	.ht_supported = true,
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8723d.c b/drivers/net/wireless/realtek/rtw88/rtw8723d.c
+index f8df4c84d39f7..3fba4054d45f4 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8723d.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8723d.c
+@@ -2171,6 +2171,7 @@ const struct rtw_chip_info rtw8723d_hw_spec = {
+ 	.band = RTW_BAND_2G,
+ 	.page_size = TX_PAGE_SIZE,
+ 	.dig_min = 0x20,
++	.usb_tx_agg_desc_num = 1,
+ 	.ht_supported = true,
+ 	.vht_supported = false,
+ 	.lps_deep_mode_supported = 0,
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.c b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+index fe5d8e1883509..526e8de77b3e8 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+@@ -2008,6 +2008,7 @@ const struct rtw_chip_info rtw8821c_hw_spec = {
+ 	.band = RTW_BAND_2G | RTW_BAND_5G,
+ 	.page_size = TX_PAGE_SIZE,
+ 	.dig_min = 0x1c,
++	.usb_tx_agg_desc_num = 3,
+ 	.ht_supported = true,
+ 	.vht_supported = true,
+ 	.lps_deep_mode_supported = BIT(LPS_DEEP_MODE_LCLK),
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822b.c b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+index 3017a9760da8d..2456ff2428180 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822b.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+@@ -2548,6 +2548,7 @@ const struct rtw_chip_info rtw8822b_hw_spec = {
+ 	.band = RTW_BAND_2G | RTW_BAND_5G,
+ 	.page_size = TX_PAGE_SIZE,
+ 	.dig_min = 0x1c,
++	.usb_tx_agg_desc_num = 3,
+ 	.ht_supported = true,
+ 	.vht_supported = true,
+ 	.lps_deep_mode_supported = BIT(LPS_DEEP_MODE_LCLK),
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index cd965edc29cea..62376d1cca22f 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -5366,6 +5366,7 @@ const struct rtw_chip_info rtw8822c_hw_spec = {
+ 	.band = RTW_BAND_2G | RTW_BAND_5G,
+ 	.page_size = TX_PAGE_SIZE,
+ 	.dig_min = 0x20,
++	.usb_tx_agg_desc_num = 3,
+ 	.default_1ss_tx_path = BB_PATH_A,
+ 	.path_div_supported = true,
+ 	.ht_supported = true,
+diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
+index a0188511099a1..0001a1ab6f38b 100644
+--- a/drivers/net/wireless/realtek/rtw88/usb.c
++++ b/drivers/net/wireless/realtek/rtw88/usb.c
+@@ -273,6 +273,8 @@ static void rtw_usb_write_port_tx_complete(struct urb *urb)
+ 		info = IEEE80211_SKB_CB(skb);
+ 		tx_data = rtw_usb_get_tx_data(skb);
+ 
++		skb_pull(skb, rtwdev->chip->tx_pkt_desc_sz);
++
+ 		/* enqueue to wait for tx report */
+ 		if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) {
+ 			rtw_tx_report_enqueue(rtwdev, skb, tx_data->sn);
+@@ -377,7 +379,9 @@ static bool rtw_usb_tx_agg_skb(struct rtw_usb *rtwusb, struct sk_buff_head *list
+ 
+ 		skb_iter = skb_peek(list);
+ 
+-		if (skb_iter && skb_iter->len + skb_head->len <= RTW_USB_MAX_XMITBUF_SZ)
++		if (skb_iter &&
++		    skb_iter->len + skb_head->len <= RTW_USB_MAX_XMITBUF_SZ &&
++		    agg_num < rtwdev->chip->usb_tx_agg_desc_num)
+ 			__skb_unlink(skb_iter, list);
+ 		else
+ 			skb_iter = NULL;
+diff --git a/drivers/net/wireless/realtek/rtw89/debug.c b/drivers/net/wireless/realtek/rtw89/debug.c
+index affffc4092ba3..5b4077c9fd286 100644
+--- a/drivers/net/wireless/realtek/rtw89/debug.c
++++ b/drivers/net/wireless/realtek/rtw89/debug.c
+@@ -3531,7 +3531,7 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
+ 	case RX_ENC_HE:
+ 		seq_printf(m, "HE %dSS MCS-%d GI:%s", status->nss, status->rate_idx,
+ 			   status->he_gi <= NL80211_RATE_INFO_HE_GI_3_2 ?
+-			   he_gi_str[rate->he_gi] : "N/A");
++			   he_gi_str[status->he_gi] : "N/A");
+ 		break;
+ 	case RX_ENC_EHT:
+ 		seq_printf(m, "EHT %dSS MCS-%d GI:%s", status->nss, status->rate_idx,
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index 044a5b90c7f4e..6cd6922882211 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -6245,7 +6245,14 @@ void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
+ 
+ 	ret = rtw89_hw_scan_offload(rtwdev, vif, false);
+ 	if (ret)
+-		rtw89_hw_scan_complete(rtwdev, vif, true);
++		rtw89_warn(rtwdev, "rtw89_hw_scan_offload failed ret %d\n", ret);
++
++	/* Indicate ieee80211_scan_completed() before returning, which is safe
++	 * because scan abort command always waits for completion of
++	 * RTW89_SCAN_END_SCAN_NOTIFY, so that ieee80211_stop() can flush scan
++	 * work properly.
++	 */
++	rtw89_hw_scan_complete(rtwdev, vif, true);
+ }
+ 
+ static bool rtw89_is_any_vif_connected_or_connecting(struct rtw89_dev *rtwdev)
+@@ -6715,10 +6722,8 @@ int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev,
+ 	skb_put(skb, len);
+ 	h2c = (struct rtw89_h2c_wow_gtk_ofld *)skb->data;
+ 
+-	if (!enable) {
+-		skb_put_zero(skb, sizeof(*gtk_info));
++	if (!enable)
+ 		goto hdr;
+-	}
+ 
+ 	ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
+ 					   RTW89_PKT_OFLD_TYPE_EAPOL_KEY,
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
+index 3fe0046f6eaa2..185b9cd283ed4 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.c
++++ b/drivers/net/wireless/realtek/rtw89/mac.c
+@@ -4757,6 +4757,9 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ 		}
+ 		return;
+ 	case RTW89_SCAN_END_SCAN_NOTIFY:
++		if (rtwdev->scan_info.abort)
++			return;
++
+ 		if (rtwvif && rtwvif->scan_req &&
+ 		    last_chan < rtwvif->scan_req->n_channels) {
+ 			ret = rtw89_hw_scan_offload(rtwdev, vif, true);
+@@ -4765,7 +4768,7 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ 				rtw89_warn(rtwdev, "HW scan failed: %d\n", ret);
+ 			}
+ 		} else {
+-			rtw89_hw_scan_complete(rtwdev, vif, rtwdev->scan_info.abort);
++			rtw89_hw_scan_complete(rtwdev, vif, false);
+ 		}
+ 		break;
+ 	case RTW89_SCAN_ENTER_OP_NOTIFY:
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
+index 03bbcf9b6737c..b36aa9a6bb3fc 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.c
++++ b/drivers/net/wireless/realtek/rtw89/pci.c
+@@ -2330,21 +2330,20 @@ static void rtw89_pci_disable_eq(struct rtw89_dev *rtwdev)
+ 	u32 backup_aspm;
+ 	u32 phy_offset;
+ 	u16 oobs_val;
+-	u16 val16;
+ 	int ret;
+ 
+ 	if (rtwdev->chip->chip_id != RTL8852C)
+ 		return;
+ 
+-	backup_aspm = rtw89_read32(rtwdev, R_AX_PCIE_MIX_CFG_V1);
+-	rtw89_write32_clr(rtwdev, R_AX_PCIE_MIX_CFG_V1, B_AX_ASPM_CTRL_MASK);
+-
+ 	g1_oobs = rtw89_read16_mask(rtwdev, R_RAC_DIRECT_OFFSET_G1 +
+ 					    RAC_ANA09 * RAC_MULT, BAC_OOBS_SEL);
+ 	g2_oobs = rtw89_read16_mask(rtwdev, R_RAC_DIRECT_OFFSET_G2 +
+ 					    RAC_ANA09 * RAC_MULT, BAC_OOBS_SEL);
+ 	if (g1_oobs && g2_oobs)
+-		goto out;
++		return;
++
++	backup_aspm = rtw89_read32(rtwdev, R_AX_PCIE_MIX_CFG_V1);
++	rtw89_write32_clr(rtwdev, R_AX_PCIE_MIX_CFG_V1, B_AX_ASPM_CTRL_MASK);
+ 
+ 	ret = rtw89_pci_get_phy_offset_by_link_speed(rtwdev, &phy_offset);
+ 	if (ret)
+@@ -2354,15 +2353,16 @@ static void rtw89_pci_disable_eq(struct rtw89_dev *rtwdev)
+ 	rtw89_write16(rtwdev, phy_offset + RAC_ANA10 * RAC_MULT, ADDR_SEL_PINOUT_DIS_VAL);
+ 	rtw89_write16_set(rtwdev, phy_offset + RAC_ANA19 * RAC_MULT, B_PCIE_BIT_RD_SEL);
+ 
+-	val16 = rtw89_read16_mask(rtwdev, phy_offset + RAC_ANA1F * RAC_MULT,
+-				  OOBS_LEVEL_MASK);
+-	oobs_val = u16_encode_bits(val16, OOBS_SEN_MASK);
++	oobs_val = rtw89_read16_mask(rtwdev, phy_offset + RAC_ANA1F * RAC_MULT,
++				     OOBS_LEVEL_MASK);
+ 
+-	rtw89_write16(rtwdev, R_RAC_DIRECT_OFFSET_G1 + RAC_ANA03 * RAC_MULT, oobs_val);
++	rtw89_write16_mask(rtwdev, R_RAC_DIRECT_OFFSET_G1 + RAC_ANA03 * RAC_MULT,
++			   OOBS_SEN_MASK, oobs_val);
+ 	rtw89_write16_set(rtwdev, R_RAC_DIRECT_OFFSET_G1 + RAC_ANA09 * RAC_MULT,
+ 			  BAC_OOBS_SEL);
+ 
+-	rtw89_write16(rtwdev, R_RAC_DIRECT_OFFSET_G2 + RAC_ANA03 * RAC_MULT, oobs_val);
++	rtw89_write16_mask(rtwdev, R_RAC_DIRECT_OFFSET_G2 + RAC_ANA03 * RAC_MULT,
++			   OOBS_SEN_MASK, oobs_val);
+ 	rtw89_write16_set(rtwdev, R_RAC_DIRECT_OFFSET_G2 + RAC_ANA09 * RAC_MULT,
+ 			  BAC_OOBS_SEL);
+ 
+@@ -2783,7 +2783,6 @@ static int rtw89_pci_ops_mac_pre_init_ax(struct rtw89_dev *rtwdev)
+ 	const struct rtw89_pci_info *info = rtwdev->pci_info;
+ 	int ret;
+ 
+-	rtw89_pci_disable_eq(rtwdev);
+ 	rtw89_pci_ber(rtwdev);
+ 	rtw89_pci_rxdma_prefth(rtwdev);
+ 	rtw89_pci_l1off_pwroff(rtwdev);
+@@ -4155,6 +4154,7 @@ static int __maybe_unused rtw89_pci_resume(struct device *dev)
+ 				  B_AX_SEL_REQ_ENTR_L1);
+ 	}
+ 	rtw89_pci_l2_hci_ldo(rtwdev);
++	rtw89_pci_disable_eq(rtwdev);
+ 	rtw89_pci_filter_out(rtwdev);
+ 	rtw89_pci_link_cfg(rtwdev);
+ 	rtw89_pci_l1ss_cfg(rtwdev);
+@@ -4289,6 +4289,7 @@ int rtw89_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		goto err_clear_resource;
+ 	}
+ 
++	rtw89_pci_disable_eq(rtwdev);
+ 	rtw89_pci_filter_out(rtwdev);
+ 	rtw89_pci_link_cfg(rtwdev);
+ 	rtw89_pci_l1ss_cfg(rtwdev);
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b.c b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+index d351096fa4b41..767de9a2de7e4 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852b.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+@@ -403,6 +403,8 @@ static int rtw8852b_pwr_on_func(struct rtw89_dev *rtwdev)
+ 	u32 val32;
+ 	u32 ret;
+ 
++	rtw8852b_pwr_sps_ana(rtwdev);
++
+ 	rtw89_write32_clr(rtwdev, R_AX_SYS_PW_CTRL, B_AX_AFSM_WLSUS_EN |
+ 						    B_AX_AFSM_PCIE_SUS_EN);
+ 	rtw89_write32_set(rtwdev, R_AX_SYS_PW_CTRL, B_AX_DIS_WLBT_PDNSUSEN_SOPC);
+@@ -530,9 +532,7 @@ static int rtw8852b_pwr_off_func(struct rtw89_dev *rtwdev)
+ 	u32 val32;
+ 	u32 ret;
+ 
+-	/* Only do once during probe stage after reading efuse */
+-	if (!test_bit(RTW89_FLAG_PROBE_DONE, rtwdev->flags))
+-		rtw8852b_pwr_sps_ana(rtwdev);
++	rtw8852b_pwr_sps_ana(rtwdev);
+ 
+ 	ret = rtw89_mac_write_xtal_si(rtwdev, XTAL_SI_ANAPAR_WL, XTAL_SI_RFC2RF,
+ 				      XTAL_SI_RFC2RF);
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b_rfk.c b/drivers/net/wireless/realtek/rtw89/rtw8852b_rfk.c
+index 259df67836a0e..a2fa1d339bc21 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852b_rfk.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852b_rfk.c
+@@ -20,7 +20,7 @@
+ #define RTW8852B_RF_REL_VERSION 34
+ #define RTW8852B_DPK_VER 0x0d
+ #define RTW8852B_DPK_RF_PATH 2
+-#define RTW8852B_DPK_KIP_REG_NUM 2
++#define RTW8852B_DPK_KIP_REG_NUM 3
+ 
+ #define _TSSI_DE_MASK GENMASK(21, 12)
+ #define ADDC_T_AVG 100
+diff --git a/drivers/net/wireless/virtual/virt_wifi.c b/drivers/net/wireless/virtual/virt_wifi.c
+index 6a84ec58d618b..4ee3740804667 100644
+--- a/drivers/net/wireless/virtual/virt_wifi.c
++++ b/drivers/net/wireless/virtual/virt_wifi.c
+@@ -136,6 +136,9 @@ static struct ieee80211_supported_band band_5ghz = {
+ /* Assigned at module init. Guaranteed locally-administered and unicast. */
+ static u8 fake_router_bssid[ETH_ALEN] __ro_after_init = {};
+ 
++#define VIRT_WIFI_SSID "VirtWifi"
++#define VIRT_WIFI_SSID_LEN 8
++
+ static void virt_wifi_inform_bss(struct wiphy *wiphy)
+ {
+ 	u64 tsf = div_u64(ktime_get_boottime_ns(), 1000);
+@@ -146,8 +149,8 @@ static void virt_wifi_inform_bss(struct wiphy *wiphy)
+ 		u8 ssid[8];
+ 	} __packed ssid = {
+ 		.tag = WLAN_EID_SSID,
+-		.len = 8,
+-		.ssid = "VirtWifi",
++		.len = VIRT_WIFI_SSID_LEN,
++		.ssid = VIRT_WIFI_SSID,
+ 	};
+ 
+ 	informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz,
+@@ -213,6 +216,8 @@ struct virt_wifi_netdev_priv {
+ 	struct net_device *upperdev;
+ 	u32 tx_packets;
+ 	u32 tx_failed;
++	u32 connect_requested_ssid_len;
++	u8 connect_requested_ssid[IEEE80211_MAX_SSID_LEN];
+ 	u8 connect_requested_bss[ETH_ALEN];
+ 	bool is_up;
+ 	bool is_connected;
+@@ -229,6 +234,12 @@ static int virt_wifi_connect(struct wiphy *wiphy, struct net_device *netdev,
+ 	if (priv->being_deleted || !priv->is_up)
+ 		return -EBUSY;
+ 
++	if (!sme->ssid)
++		return -EINVAL;
++
++	priv->connect_requested_ssid_len = sme->ssid_len;
++	memcpy(priv->connect_requested_ssid, sme->ssid, sme->ssid_len);
++
+ 	could_schedule = schedule_delayed_work(&priv->connect, HZ * 2);
+ 	if (!could_schedule)
+ 		return -EBUSY;
+@@ -252,12 +263,15 @@ static void virt_wifi_connect_complete(struct work_struct *work)
+ 		container_of(work, struct virt_wifi_netdev_priv, connect.work);
+ 	u8 *requested_bss = priv->connect_requested_bss;
+ 	bool right_addr = ether_addr_equal(requested_bss, fake_router_bssid);
++	bool right_ssid = priv->connect_requested_ssid_len == VIRT_WIFI_SSID_LEN &&
++			  !memcmp(priv->connect_requested_ssid, VIRT_WIFI_SSID,
++				  priv->connect_requested_ssid_len);
+ 	u16 status = WLAN_STATUS_SUCCESS;
+ 
+ 	if (is_zero_ether_addr(requested_bss))
+ 		requested_bss = NULL;
+ 
+-	if (!priv->is_up || (requested_bss && !right_addr))
++	if (!priv->is_up || (requested_bss && !right_addr) || !right_ssid)
+ 		status = WLAN_STATUS_UNSPECIFIED_FAILURE;
+ 	else
+ 		priv->is_connected = true;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 102a9fb0c65ff..5a93f021ca4f1 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -863,7 +863,8 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req)
+ 	nvme_start_request(req);
+ 	return BLK_STS_OK;
+ out_unmap_data:
+-	nvme_unmap_data(dev, req);
++	if (blk_rq_nr_phys_segments(req))
++		nvme_unmap_data(dev, req);
+ out_free_cmd:
+ 	nvme_cleanup_cmd(req);
+ 	return ret;
+@@ -1274,7 +1275,7 @@ static void nvme_warn_reset(struct nvme_dev *dev, u32 csts)
+ 	dev_warn(dev->ctrl.device,
+ 		 "Does your device have a faulty power saving mode enabled?\n");
+ 	dev_warn(dev->ctrl.device,
+-		 "Try \"nvme_core.default_ps_max_latency_us=0 pcie_aspm=off\" and report a bug\n");
++		 "Try \"nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off\" and report a bug\n");
+ }
+ 
+ static enum blk_eh_timer_return nvme_timeout(struct request *req)
+diff --git a/drivers/nvme/target/auth.c b/drivers/nvme/target/auth.c
+index 7d2633940f9b8..8bc3f431c77f6 100644
+--- a/drivers/nvme/target/auth.c
++++ b/drivers/nvme/target/auth.c
+@@ -314,7 +314,7 @@ int nvmet_auth_host_hash(struct nvmet_req *req, u8 *response,
+ 						    req->sq->dhchap_c1,
+ 						    challenge, shash_len);
+ 		if (ret)
+-			goto out_free_response;
++			goto out_free_challenge;
+ 	}
+ 
+ 	pr_debug("ctrl %d qid %d host response seq %u transaction %d\n",
+@@ -325,7 +325,7 @@ int nvmet_auth_host_hash(struct nvmet_req *req, u8 *response,
+ 			GFP_KERNEL);
+ 	if (!shash) {
+ 		ret = -ENOMEM;
+-		goto out_free_response;
++		goto out_free_challenge;
+ 	}
+ 	shash->tfm = shash_tfm;
+ 	ret = crypto_shash_init(shash);
+@@ -361,9 +361,10 @@ int nvmet_auth_host_hash(struct nvmet_req *req, u8 *response,
+ 		goto out;
+ 	ret = crypto_shash_final(shash, response);
+ out:
++	kfree(shash);
++out_free_challenge:
+ 	if (challenge != req->sq->dhchap_c1)
+ 		kfree(challenge);
+-	kfree(shash);
+ out_free_response:
+ 	nvme_auth_free_key(transformed_key);
+ out_free_tfm:
+@@ -427,14 +428,14 @@ int nvmet_auth_ctrl_hash(struct nvmet_req *req, u8 *response,
+ 						    req->sq->dhchap_c2,
+ 						    challenge, shash_len);
+ 		if (ret)
+-			goto out_free_response;
++			goto out_free_challenge;
+ 	}
+ 
+ 	shash = kzalloc(sizeof(*shash) + crypto_shash_descsize(shash_tfm),
+ 			GFP_KERNEL);
+ 	if (!shash) {
+ 		ret = -ENOMEM;
+-		goto out_free_response;
++		goto out_free_challenge;
+ 	}
+ 	shash->tfm = shash_tfm;
+ 
+@@ -471,9 +472,10 @@ int nvmet_auth_ctrl_hash(struct nvmet_req *req, u8 *response,
+ 		goto out;
+ 	ret = crypto_shash_final(shash, response);
+ out:
++	kfree(shash);
++out_free_challenge:
+ 	if (challenge != req->sq->dhchap_c2)
+ 		kfree(challenge);
+-	kfree(shash);
+ out_free_response:
+ 	nvme_auth_free_key(transformed_key);
+ out_free_tfm:
+diff --git a/drivers/nvmem/rockchip-otp.c b/drivers/nvmem/rockchip-otp.c
+index cb9aa5428350a..7107d68a2f8c7 100644
+--- a/drivers/nvmem/rockchip-otp.c
++++ b/drivers/nvmem/rockchip-otp.c
+@@ -255,6 +255,7 @@ static int rockchip_otp_read(void *context, unsigned int offset,
+ static struct nvmem_config otp_config = {
+ 	.name = "rockchip-otp",
+ 	.owner = THIS_MODULE,
++	.add_legacy_fixed_of_cells = true,
+ 	.read_only = true,
+ 	.stride = 1,
+ 	.word_size = 1,
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index cb4611fe1b5b2..4e4d293bf5b10 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -2443,8 +2443,10 @@ static int _opp_attach_genpd(struct opp_table *opp_table, struct device *dev,
+ 		 * Cross check it again and fix if required.
+ 		 */
+ 		gdev = dev_to_genpd_dev(virt_dev);
+-		if (IS_ERR(gdev))
+-			return PTR_ERR(gdev);
++		if (IS_ERR(gdev)) {
++			ret = PTR_ERR(gdev);
++			goto err;
++		}
+ 
+ 		genpd_table = _find_opp_table(gdev);
+ 		if (!IS_ERR(genpd_table)) {
+diff --git a/drivers/opp/ti-opp-supply.c b/drivers/opp/ti-opp-supply.c
+index e3b97cd1fbbf3..ec0056a4bb135 100644
+--- a/drivers/opp/ti-opp-supply.c
++++ b/drivers/opp/ti-opp-supply.c
+@@ -393,10 +393,12 @@ static int ti_opp_supply_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	ret = dev_pm_opp_set_config_regulators(cpu_dev, ti_opp_config_regulators);
+-	if (ret < 0)
++	if (ret < 0) {
+ 		_free_optimized_voltages(dev, &opp_data);
++		return ret;
++	}
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static struct platform_driver ti_opp_supply_driver = {
+diff --git a/drivers/parport/procfs.c b/drivers/parport/procfs.c
+index bd388560ed592..c2e371c50dcfa 100644
+--- a/drivers/parport/procfs.c
++++ b/drivers/parport/procfs.c
+@@ -51,12 +51,12 @@ static int do_active_device(struct ctl_table *table, int write,
+ 	
+ 	for (dev = port->devices; dev ; dev = dev->next) {
+ 		if(dev == port->cad) {
+-			len += sprintf(buffer, "%s\n", dev->name);
++			len += snprintf(buffer, sizeof(buffer), "%s\n", dev->name);
+ 		}
+ 	}
+ 
+ 	if(!len) {
+-		len += sprintf(buffer, "%s\n", "none");
++		len += snprintf(buffer, sizeof(buffer), "%s\n", "none");
+ 	}
+ 
+ 	if (len > *lenp)
+@@ -87,19 +87,19 @@ static int do_autoprobe(struct ctl_table *table, int write,
+ 	}
+ 	
+ 	if ((str = info->class_name) != NULL)
+-		len += sprintf (buffer + len, "CLASS:%s;\n", str);
++		len += snprintf (buffer + len, sizeof(buffer) - len, "CLASS:%s;\n", str);
+ 
+ 	if ((str = info->model) != NULL)
+-		len += sprintf (buffer + len, "MODEL:%s;\n", str);
++		len += snprintf (buffer + len, sizeof(buffer) - len, "MODEL:%s;\n", str);
+ 
+ 	if ((str = info->mfr) != NULL)
+-		len += sprintf (buffer + len, "MANUFACTURER:%s;\n", str);
++		len += snprintf (buffer + len, sizeof(buffer) - len, "MANUFACTURER:%s;\n", str);
+ 
+ 	if ((str = info->description) != NULL)
+-		len += sprintf (buffer + len, "DESCRIPTION:%s;\n", str);
++		len += snprintf (buffer + len, sizeof(buffer) - len, "DESCRIPTION:%s;\n", str);
+ 
+ 	if ((str = info->cmdset) != NULL)
+-		len += sprintf (buffer + len, "COMMAND SET:%s;\n", str);
++		len += snprintf (buffer + len, sizeof(buffer) - len, "COMMAND SET:%s;\n", str);
+ 
+ 	if (len > *lenp)
+ 		len = *lenp;
+@@ -117,7 +117,7 @@ static int do_hardware_base_addr(struct ctl_table *table, int write,
+ 				 void *result, size_t *lenp, loff_t *ppos)
+ {
+ 	struct parport *port = (struct parport *)table->extra1;
+-	char buffer[20];
++	char buffer[64];
+ 	int len = 0;
+ 
+ 	if (*ppos) {
+@@ -128,7 +128,7 @@ static int do_hardware_base_addr(struct ctl_table *table, int write,
+ 	if (write) /* permissions prevent this anyway */
+ 		return -EACCES;
+ 
+-	len += sprintf (buffer, "%lu\t%lu\n", port->base, port->base_hi);
++	len += snprintf (buffer, sizeof(buffer), "%lu\t%lu\n", port->base, port->base_hi);
+ 
+ 	if (len > *lenp)
+ 		len = *lenp;
+@@ -155,7 +155,7 @@ static int do_hardware_irq(struct ctl_table *table, int write,
+ 	if (write) /* permissions prevent this anyway */
+ 		return -EACCES;
+ 
+-	len += sprintf (buffer, "%d\n", port->irq);
++	len += snprintf (buffer, sizeof(buffer), "%d\n", port->irq);
+ 
+ 	if (len > *lenp)
+ 		len = *lenp;
+@@ -182,7 +182,7 @@ static int do_hardware_dma(struct ctl_table *table, int write,
+ 	if (write) /* permissions prevent this anyway */
+ 		return -EACCES;
+ 
+-	len += sprintf (buffer, "%d\n", port->dma);
++	len += snprintf (buffer, sizeof(buffer), "%d\n", port->dma);
+ 
+ 	if (len > *lenp)
+ 		len = *lenp;
+@@ -213,7 +213,7 @@ static int do_hardware_modes(struct ctl_table *table, int write,
+ #define printmode(x)							\
+ do {									\
+ 	if (port->modes & PARPORT_MODE_##x)				\
+-		len += sprintf(buffer + len, "%s%s", f++ ? "," : "", #x); \
++		len += snprintf(buffer + len, sizeof(buffer) - len, "%s%s", f++ ? "," : "", #x); \
+ } while (0)
+ 		int f = 0;
+ 		printmode(PCSPP);
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index d3a7d14ee685a..cd0e0022f91d6 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -245,8 +245,68 @@ static struct irq_chip ks_pcie_msi_irq_chip = {
+ 	.irq_unmask = ks_pcie_msi_unmask,
+ };
+ 
++/**
++ * ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask registers
++ * @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone
++ *	     PCIe host controller driver information.
++ *
++ * Since modification of dbi_cs2 involves different clock domain, read the
++ * status back to ensure the transition is complete.
++ */
++static void ks_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie)
++{
++	u32 val;
++
++	val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
++	val |= DBI_CS2;
++	ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
++
++	do {
++		val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
++	} while (!(val & DBI_CS2));
++}
++
++/**
++ * ks_pcie_clear_dbi_mode() - Disable DBI mode
++ * @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone
++ *	     PCIe host controller driver information.
++ *
++ * Since modification of dbi_cs2 involves different clock domain, read the
++ * status back to ensure the transition is complete.
++ */
++static void ks_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie)
++{
++	u32 val;
++
++	val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
++	val &= ~DBI_CS2;
++	ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
++
++	do {
++		val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
++	} while (val & DBI_CS2);
++}
++
+ static int ks_pcie_msi_host_init(struct dw_pcie_rp *pp)
+ {
++	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
++	struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
++
++	/* Configure and set up BAR0 */
++	ks_pcie_set_dbi_mode(ks_pcie);
++
++	/* Enable BAR0 */
++	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1);
++	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1);
++
++	ks_pcie_clear_dbi_mode(ks_pcie);
++
++	/*
++	 * For BAR0, just setting bus address for inbound writes (MSI) should
++	 * be sufficient.  Use physical address to avoid any conflicts.
++	 */
++	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start);
++
+ 	pp->msi_irq_chip = &ks_pcie_msi_irq_chip;
+ 	return dw_pcie_allocate_domains(pp);
+ }
+@@ -340,59 +400,22 @@ static const struct irq_domain_ops ks_pcie_intx_irq_domain_ops = {
+ 	.xlate = irq_domain_xlate_onetwocell,
+ };
+ 
+-/**
+- * ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask registers
+- * @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone
+- *	     PCIe host controller driver information.
+- *
+- * Since modification of dbi_cs2 involves different clock domain, read the
+- * status back to ensure the transition is complete.
+- */
+-static void ks_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie)
+-{
+-	u32 val;
+-
+-	val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
+-	val |= DBI_CS2;
+-	ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
+-
+-	do {
+-		val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
+-	} while (!(val & DBI_CS2));
+-}
+-
+-/**
+- * ks_pcie_clear_dbi_mode() - Disable DBI mode
+- * @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone
+- *	     PCIe host controller driver information.
+- *
+- * Since modification of dbi_cs2 involves different clock domain, read the
+- * status back to ensure the transition is complete.
+- */
+-static void ks_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie)
+-{
+-	u32 val;
+-
+-	val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
+-	val &= ~DBI_CS2;
+-	ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
+-
+-	do {
+-		val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
+-	} while (val & DBI_CS2);
+-}
+-
+-static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
++static int ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
+ {
+ 	u32 val;
+ 	u32 num_viewport = ks_pcie->num_viewport;
+ 	struct dw_pcie *pci = ks_pcie->pci;
+ 	struct dw_pcie_rp *pp = &pci->pp;
+-	u64 start, end;
++	struct resource_entry *entry;
+ 	struct resource *mem;
++	u64 start, end;
+ 	int i;
+ 
+-	mem = resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM)->res;
++	entry = resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM);
++	if (!entry)
++		return -ENODEV;
++
++	mem = entry->res;
+ 	start = mem->start;
+ 	end = mem->end;
+ 
+@@ -403,7 +426,7 @@ static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
+ 	ks_pcie_clear_dbi_mode(ks_pcie);
+ 
+ 	if (ks_pcie->is_am6)
+-		return;
++		return 0;
+ 
+ 	val = ilog2(OB_WIN_SIZE);
+ 	ks_pcie_app_writel(ks_pcie, OB_SIZE, val);
+@@ -420,6 +443,8 @@ static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
+ 	val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
+ 	val |= OB_XLAT_EN_VAL;
+ 	ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
++
++	return 0;
+ }
+ 
+ static void __iomem *ks_pcie_other_map_bus(struct pci_bus *bus,
+@@ -445,44 +470,10 @@ static struct pci_ops ks_child_pcie_ops = {
+ 	.write = pci_generic_config_write,
+ };
+ 
+-/**
+- * ks_pcie_v3_65_add_bus() - keystone add_bus post initialization
+- * @bus: A pointer to the PCI bus structure.
+- *
+- * This sets BAR0 to enable inbound access for MSI_IRQ register
+- */
+-static int ks_pcie_v3_65_add_bus(struct pci_bus *bus)
+-{
+-	struct dw_pcie_rp *pp = bus->sysdata;
+-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+-	struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
+-
+-	if (!pci_is_root_bus(bus))
+-		return 0;
+-
+-	/* Configure and set up BAR0 */
+-	ks_pcie_set_dbi_mode(ks_pcie);
+-
+-	/* Enable BAR0 */
+-	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1);
+-	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1);
+-
+-	ks_pcie_clear_dbi_mode(ks_pcie);
+-
+-	 /*
+-	  * For BAR0, just setting bus address for inbound writes (MSI) should
+-	  * be sufficient.  Use physical address to avoid any conflicts.
+-	  */
+-	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start);
+-
+-	return 0;
+-}
+-
+ static struct pci_ops ks_pcie_ops = {
+ 	.map_bus = dw_pcie_own_conf_map_bus,
+ 	.read = pci_generic_config_read,
+ 	.write = pci_generic_config_write,
+-	.add_bus = ks_pcie_v3_65_add_bus,
+ };
+ 
+ /**
+@@ -814,7 +805,10 @@ static int __init ks_pcie_host_init(struct dw_pcie_rp *pp)
+ 		return ret;
+ 
+ 	ks_pcie_stop_link(pci);
+-	ks_pcie_setup_rc_app_regs(ks_pcie);
++	ret = ks_pcie_setup_rc_app_regs(ks_pcie);
++	if (ret)
++		return ret;
++
+ 	writew(PCI_IO_RANGE_TYPE_32 | (PCI_IO_RANGE_TYPE_32 << 8),
+ 			pci->dbi_base + PCI_IO_BASE);
+ 
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index 47391d7d3a734..769e848246870 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -161,7 +161,7 @@ static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
+ 	if (!ep->bar_to_atu[bar])
+ 		free_win = find_first_zero_bit(ep->ib_window_map, pci->num_ib_windows);
+ 	else
+-		free_win = ep->bar_to_atu[bar];
++		free_win = ep->bar_to_atu[bar] - 1;
+ 
+ 	if (free_win >= pci->num_ib_windows) {
+ 		dev_err(pci->dev, "No free inbound window\n");
+@@ -175,7 +175,11 @@ static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
+ 		return ret;
+ 	}
+ 
+-	ep->bar_to_atu[bar] = free_win;
++	/*
++	 * Always increment free_win before assignment, since value 0 is used to identify
++	 * unallocated mapping.
++	 */
++	ep->bar_to_atu[bar] = free_win + 1;
+ 	set_bit(free_win, ep->ib_window_map);
+ 
+ 	return 0;
+@@ -212,7 +216,10 @@ static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+ 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
+ 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+ 	enum pci_barno bar = epf_bar->barno;
+-	u32 atu_index = ep->bar_to_atu[bar];
++	u32 atu_index = ep->bar_to_atu[bar] - 1;
++
++	if (!ep->bar_to_atu[bar])
++		return;
+ 
+ 	__dw_pcie_ep_reset_bar(pci, func_no, bar, epf_bar->flags);
+ 
+diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+index d6842141d384d..a909e42b4273e 100644
+--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
++++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+@@ -240,7 +240,7 @@ static int rockchip_pcie_resource_get(struct platform_device *pdev,
+ 		return PTR_ERR(rockchip->apb_base);
+ 
+ 	rockchip->rst_gpio = devm_gpiod_get_optional(&pdev->dev, "reset",
+-						     GPIOD_OUT_HIGH);
++						     GPIOD_OUT_LOW);
+ 	if (IS_ERR(rockchip->rst_gpio))
+ 		return PTR_ERR(rockchip->rst_gpio);
+ 
+diff --git a/drivers/pci/controller/dwc/pcie-qcom-ep.c b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+index 2fb8c15e7a911..50b1635e3cbb1 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom-ep.c
++++ b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+@@ -500,12 +500,6 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
+ static void qcom_pcie_perst_assert(struct dw_pcie *pci)
+ {
+ 	struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
+-	struct device *dev = pci->dev;
+-
+-	if (pcie_ep->link_status == QCOM_PCIE_EP_LINK_DISABLED) {
+-		dev_dbg(dev, "Link is already disabled\n");
+-		return;
+-	}
+ 
+ 	dw_pcie_ep_cleanup(&pci->ep);
+ 	qcom_pcie_disable_resources(pcie_ep);
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index 93f5433c5c550..4537313ef37a9 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -2015,6 +2015,7 @@ static const struct pci_epc_features tegra_pcie_epc_features = {
+ 	.bar[BAR_3] = { .type = BAR_RESERVED, },
+ 	.bar[BAR_4] = { .type = BAR_RESERVED, },
+ 	.bar[BAR_5] = { .type = BAR_RESERVED, },
++	.align = SZ_64K,
+ };
+ 
+ static const struct pci_epc_features*
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 5992280e8110b..cdd5be16021dd 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -1130,8 +1130,8 @@ static void _hv_pcifront_read_config(struct hv_pci_dev *hpdev, int where,
+ 		   PCI_CAPABILITY_LIST) {
+ 		/* ROM BARs are unimplemented */
+ 		*val = 0;
+-	} else if (where >= PCI_INTERRUPT_LINE && where + size <=
+-		   PCI_INTERRUPT_PIN) {
++	} else if ((where >= PCI_INTERRUPT_LINE && where + size <= PCI_INTERRUPT_PIN) ||
++		   (where >= PCI_INTERRUPT_PIN && where + size <= PCI_MIN_GNT)) {
+ 		/*
+ 		 * Interrupt Line and Interrupt PIN are hard-wired to zero
+ 		 * because this front-end only supports message-signaled
+diff --git a/drivers/pci/controller/pci-loongson.c b/drivers/pci/controller/pci-loongson.c
+index 8b34ccff073a9..bc630ab8a2831 100644
+--- a/drivers/pci/controller/pci-loongson.c
++++ b/drivers/pci/controller/pci-loongson.c
+@@ -163,6 +163,19 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON,
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON,
+ 			DEV_LS7A_HDMI, loongson_pci_pin_quirk);
+ 
++static void loongson_pci_msi_quirk(struct pci_dev *dev)
++{
++	u16 val, class = dev->class >> 8;
++
++	if (class != PCI_CLASS_BRIDGE_HOST)
++		return;
++
++	pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &val);
++	val |= PCI_MSI_FLAGS_ENABLE;
++	pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, val);
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, DEV_LS7A_PCIE_PORT5, loongson_pci_msi_quirk);
++
+ static struct loongson_pci *pci_bus_to_loongson_pci(struct pci_bus *bus)
+ {
+ 	struct pci_config_window *cfg;
+diff --git a/drivers/pci/controller/pcie-rcar-host.c b/drivers/pci/controller/pcie-rcar-host.c
+index 996077ab7cfdb..c01efc6ea64f6 100644
+--- a/drivers/pci/controller/pcie-rcar-host.c
++++ b/drivers/pci/controller/pcie-rcar-host.c
+@@ -78,7 +78,11 @@ static int rcar_pcie_wakeup(struct device *pcie_dev, void __iomem *pcie_base)
+ 		writel(L1IATN, pcie_base + PMCTLR);
+ 		ret = readl_poll_timeout_atomic(pcie_base + PMSR, val,
+ 						val & L1FAEG, 10, 1000);
+-		WARN(ret, "Timeout waiting for L1 link state, ret=%d\n", ret);
++		if (ret) {
++			dev_warn_ratelimited(pcie_dev,
++					     "Timeout waiting for L1 link state, ret=%d\n",
++					     ret);
++		}
+ 		writel(L1FAEG | PMEL1RX, pcie_base + PMSR);
+ 	}
+ 
+diff --git a/drivers/pci/controller/pcie-rockchip.c b/drivers/pci/controller/pcie-rockchip.c
+index 0ef2e622d36e1..c07d7129f1c7c 100644
+--- a/drivers/pci/controller/pcie-rockchip.c
++++ b/drivers/pci/controller/pcie-rockchip.c
+@@ -121,7 +121,7 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
+ 
+ 	if (rockchip->is_rc) {
+ 		rockchip->ep_gpio = devm_gpiod_get_optional(dev, "ep",
+-							    GPIOD_OUT_HIGH);
++							    GPIOD_OUT_LOW);
+ 		if (IS_ERR(rockchip->ep_gpio))
+ 			return dev_err_probe(dev, PTR_ERR(rockchip->ep_gpio),
+ 					     "failed to get ep GPIO\n");
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index 977fb79c15677..546d2a27955cf 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -735,20 +735,12 @@ static int pci_epf_test_core_init(struct pci_epf *epf)
+ {
+ 	struct pci_epf_test *epf_test = epf_get_drvdata(epf);
+ 	struct pci_epf_header *header = epf->header;
+-	const struct pci_epc_features *epc_features;
++	const struct pci_epc_features *epc_features = epf_test->epc_features;
+ 	struct pci_epc *epc = epf->epc;
+ 	struct device *dev = &epf->dev;
+ 	bool linkup_notifier = false;
+-	bool msix_capable = false;
+-	bool msi_capable = true;
+ 	int ret;
+ 
+-	epc_features = pci_epc_get_features(epc, epf->func_no, epf->vfunc_no);
+-	if (epc_features) {
+-		msix_capable = epc_features->msix_capable;
+-		msi_capable = epc_features->msi_capable;
+-	}
+-
+ 	if (epf->vfunc_no <= 1) {
+ 		ret = pci_epc_write_header(epc, epf->func_no, epf->vfunc_no, header);
+ 		if (ret) {
+@@ -761,7 +753,7 @@ static int pci_epf_test_core_init(struct pci_epf *epf)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (msi_capable) {
++	if (epc_features->msi_capable) {
+ 		ret = pci_epc_set_msi(epc, epf->func_no, epf->vfunc_no,
+ 				      epf->msi_interrupts);
+ 		if (ret) {
+@@ -770,7 +762,7 @@ static int pci_epf_test_core_init(struct pci_epf *epf)
+ 		}
+ 	}
+ 
+-	if (msix_capable) {
++	if (epc_features->msix_capable) {
+ 		ret = pci_epc_set_msix(epc, epf->func_no, epf->vfunc_no,
+ 				       epf->msix_interrupts,
+ 				       epf_test->test_reg_bar,
+diff --git a/drivers/pci/endpoint/functions/pci-epf-vntb.c b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+index 8e779eecd62d4..874cb097b093a 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-vntb.c
++++ b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+@@ -799,8 +799,9 @@ static int epf_ntb_epc_init(struct epf_ntb *ntb)
+  */
+ static void epf_ntb_epc_cleanup(struct epf_ntb *ntb)
+ {
+-	epf_ntb_db_bar_clear(ntb);
+ 	epf_ntb_mw_bar_clear(ntb, ntb->num_mws);
++	epf_ntb_db_bar_clear(ntb);
++	epf_ntb_config_sspad_bar_clear(ntb);
+ }
+ 
+ #define EPF_NTB_R(_name)						\
+@@ -1018,8 +1019,10 @@ static int vpci_scan_bus(void *sysdata)
+ 	struct epf_ntb *ndev = sysdata;
+ 
+ 	vpci_bus = pci_scan_bus(ndev->vbus_number, &vpci_ops, sysdata);
+-	if (vpci_bus)
+-		pr_err("create pci bus\n");
++	if (!vpci_bus) {
++		pr_err("create pci bus failed\n");
++		return -EINVAL;
++	}
+ 
+ 	pci_bus_add_devices(vpci_bus);
+ 
+@@ -1335,13 +1338,19 @@ static int epf_ntb_bind(struct pci_epf *epf)
+ 	ret = pci_register_driver(&vntb_pci_driver);
+ 	if (ret) {
+ 		dev_err(dev, "failure register vntb pci driver\n");
+-		goto err_bar_alloc;
++		goto err_epc_cleanup;
+ 	}
+ 
+-	vpci_scan_bus(ntb);
++	ret = vpci_scan_bus(ntb);
++	if (ret)
++		goto err_unregister;
+ 
+ 	return 0;
+ 
++err_unregister:
++	pci_unregister_driver(&vntb_pci_driver);
++err_epc_cleanup:
++	epf_ntb_epc_cleanup(ntb);
+ err_bar_alloc:
+ 	epf_ntb_config_spad_bar_free(ntb);
+ 
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 35fb1f17a589c..dff09e4892d39 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4753,7 +4753,7 @@ static int pci_bus_max_d3cold_delay(const struct pci_bus *bus)
+  */
+ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type)
+ {
+-	struct pci_dev *child;
++	struct pci_dev *child __free(pci_dev_put) = NULL;
+ 	int delay;
+ 
+ 	if (pci_dev_is_disconnected(dev))
+@@ -4782,8 +4782,8 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type)
+ 		return 0;
+ 	}
+ 
+-	child = list_first_entry(&dev->subordinate->devices, struct pci_dev,
+-				 bus_list);
++	child = pci_dev_get(list_first_entry(&dev->subordinate->devices,
++					     struct pci_dev, bus_list));
+ 	up_read(&pci_bus_sem);
+ 
+ 	/*
+diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
+index 909e6a7c3cc31..141d6b31959be 100644
+--- a/drivers/pci/setup-bus.c
++++ b/drivers/pci/setup-bus.c
+@@ -829,11 +829,9 @@ static resource_size_t calculate_memsize(resource_size_t size,
+ 		size = min_size;
+ 	if (old_size == 1)
+ 		old_size = 0;
+-	if (size < old_size)
+-		size = old_size;
+ 
+-	size = ALIGN(max(size, add_size) + children_add_size, align);
+-	return size;
++	size = max(size, add_size) + children_add_size;
++	return ALIGN(max(size, old_size), align);
+ }
+ 
+ resource_size_t __weak pcibios_window_alignment(struct pci_bus *bus,
+diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
+index 23fa6c5da82c4..8ed5c3358920a 100644
+--- a/drivers/perf/arm_pmuv3.c
++++ b/drivers/perf/arm_pmuv3.c
+@@ -338,6 +338,11 @@ static bool armv8pmu_event_want_user_access(struct perf_event *event)
+ 	return ATTR_CFG_GET_FLD(&event->attr, rdpmc);
+ }
+ 
++static u32 armv8pmu_event_get_threshold(struct perf_event_attr *attr)
++{
++	return ATTR_CFG_GET_FLD(attr, threshold);
++}
++
+ static u8 armv8pmu_event_threshold_control(struct perf_event_attr *attr)
+ {
+ 	u8 th_compare = ATTR_CFG_GET_FLD(attr, threshold_compare);
+@@ -941,7 +946,8 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
+ 	unsigned long evtype = hwc->config_base & ARMV8_PMU_EVTYPE_EVENT;
+ 
+ 	/* Always prefer to place a cycle counter into the cycle counter. */
+-	if (evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) {
++	if ((evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) &&
++	    !armv8pmu_event_get_threshold(&event->attr)) {
+ 		if (!test_and_set_bit(ARMV8_IDX_CYCLE_COUNTER, cpuc->used_mask))
+ 			return ARMV8_IDX_CYCLE_COUNTER;
+ 		else if (armv8pmu_event_is_64bit(event) &&
+@@ -1033,7 +1039,7 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
+ 	 * If FEAT_PMUv3_TH isn't implemented, then THWIDTH (threshold_max) will
+ 	 * be 0 and will also trigger this check, preventing it from being used.
+ 	 */
+-	th = ATTR_CFG_GET_FLD(attr, threshold);
++	th = armv8pmu_event_get_threshold(attr);
+ 	if (th > threshold_max(cpu_pmu)) {
+ 		pr_debug("PMU event threshold exceeds max value\n");
+ 		return -EINVAL;
+diff --git a/drivers/phy/cadence/phy-cadence-torrent.c b/drivers/phy/cadence/phy-cadence-torrent.c
+index 95924a09960cc..6113f0022e6ee 100644
+--- a/drivers/phy/cadence/phy-cadence-torrent.c
++++ b/drivers/phy/cadence/phy-cadence-torrent.c
+@@ -1156,6 +1156,9 @@ static int cdns_torrent_dp_set_power_state(struct cdns_torrent_phy *cdns_phy,
+ 	ret = regmap_read_poll_timeout(regmap, PHY_PMA_XCVR_POWER_STATE_ACK,
+ 				       read_val, (read_val & mask) == value, 0,
+ 				       POLL_TIMEOUT_US);
++	if (ret)
++		return ret;
++
+ 	cdns_torrent_dp_write(regmap, PHY_PMA_XCVR_POWER_STATE_REQ, 0x00000000);
+ 	ndelay(100);
+ 
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+index 6c796723c8f5a..8fcdcb193d241 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+@@ -3730,14 +3730,11 @@ static int phy_aux_clk_register(struct qmp_pcie *qmp, struct device_node *np)
+ {
+ 	struct clk_fixed_rate *fixed = &qmp->aux_clk_fixed;
+ 	struct clk_init_data init = { };
+-	int ret;
++	char name[64];
+ 
+-	ret = of_property_read_string_index(np, "clock-output-names", 1, &init.name);
+-	if (ret) {
+-		dev_err(qmp->dev, "%pOFn: No clock-output-names index 1\n", np);
+-		return ret;
+-	}
++	snprintf(name, sizeof(name), "%s::phy_aux_clk", dev_name(qmp->dev));
+ 
++	init.name = name;
+ 	init.ops = &clk_fixed_rate_ops;
+ 
+ 	fixed->fixed_rate = qmp->cfg->aux_clock_rate;
+diff --git a/drivers/phy/rockchip/Kconfig b/drivers/phy/rockchip/Kconfig
+index 08b0f43457606..490263375057b 100644
+--- a/drivers/phy/rockchip/Kconfig
++++ b/drivers/phy/rockchip/Kconfig
+@@ -86,7 +86,9 @@ config PHY_ROCKCHIP_PCIE
+ config PHY_ROCKCHIP_SAMSUNG_HDPTX
+ 	tristate "Rockchip Samsung HDMI/eDP Combo PHY driver"
+ 	depends on (ARCH_ROCKCHIP || COMPILE_TEST) && OF
++	depends on HAS_IOMEM
+ 	select GENERIC_PHY
++	select MFD_SYSCON
+ 	select RATIONAL
+ 	help
+ 	  Enable this to support the Rockchip HDMI/eDP Combo PHY
+diff --git a/drivers/phy/xilinx/phy-zynqmp.c b/drivers/phy/xilinx/phy-zynqmp.c
+index dc8319bda43d7..f2bff7f25f05a 100644
+--- a/drivers/phy/xilinx/phy-zynqmp.c
++++ b/drivers/phy/xilinx/phy-zynqmp.c
+@@ -80,7 +80,8 @@
+ 
+ /* Reference clock selection parameters */
+ #define L0_Ln_REF_CLK_SEL(n)		(0x2860 + (n) * 4)
+-#define L0_REF_CLK_SEL_MASK		0x8f
++#define L0_REF_CLK_LCL_SEL		BIT(7)
++#define L0_REF_CLK_SEL_MASK		0x9f
+ 
+ /* Calibration digital logic parameters */
+ #define L3_TM_CALIB_DIG19		0xec4c
+@@ -349,11 +350,12 @@ static void xpsgtr_configure_pll(struct xpsgtr_phy *gtr_phy)
+ 		       PLL_FREQ_MASK, ssc->pll_ref_clk);
+ 
+ 	/* Enable lane clock sharing, if required */
+-	if (gtr_phy->refclk != gtr_phy->lane) {
+-		/* Lane3 Ref Clock Selection Register */
++	if (gtr_phy->refclk == gtr_phy->lane)
++		xpsgtr_clr_set(gtr_phy->dev, L0_Ln_REF_CLK_SEL(gtr_phy->lane),
++			       L0_REF_CLK_SEL_MASK, L0_REF_CLK_LCL_SEL);
++	else
+ 		xpsgtr_clr_set(gtr_phy->dev, L0_Ln_REF_CLK_SEL(gtr_phy->lane),
+ 			       L0_REF_CLK_SEL_MASK, 1 << gtr_phy->refclk);
+-	}
+ 
+ 	/* SSC step size [7:0] */
+ 	xpsgtr_clr_set_phy(gtr_phy, L0_PLL_SS_STEP_SIZE_0_LSB,
+@@ -573,7 +575,7 @@ static int xpsgtr_phy_init(struct phy *phy)
+ 	mutex_lock(&gtr_dev->gtr_mutex);
+ 
+ 	/* Configure and enable the clock when peripheral phy_init call */
+-	if (clk_prepare_enable(gtr_dev->clk[gtr_phy->lane]))
++	if (clk_prepare_enable(gtr_dev->clk[gtr_phy->refclk]))
+ 		goto out;
+ 
+ 	/* Skip initialization if not required. */
+@@ -625,7 +627,7 @@ static int xpsgtr_phy_exit(struct phy *phy)
+ 	gtr_phy->skip_phy_init = false;
+ 
+ 	/* Ensure that disable clock only, which configure for lane */
+-	clk_disable_unprepare(gtr_dev->clk[gtr_phy->lane]);
++	clk_disable_unprepare(gtr_dev->clk[gtr_phy->refclk]);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index f424a57f00136..4438f3b4b5ef9 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -2080,6 +2080,14 @@ pinctrl_init_controller(struct pinctrl_desc *pctldesc, struct device *dev,
+ 	return ERR_PTR(ret);
+ }
+ 
++static void pinctrl_uninit_controller(struct pinctrl_dev *pctldev, struct pinctrl_desc *pctldesc)
++{
++	pinctrl_free_pindescs(pctldev, pctldesc->pins,
++			      pctldesc->npins);
++	mutex_destroy(&pctldev->mutex);
++	kfree(pctldev);
++}
++
+ static int pinctrl_claim_hogs(struct pinctrl_dev *pctldev)
+ {
+ 	pctldev->p = create_pinctrl(pctldev->dev, pctldev);
+@@ -2160,8 +2168,10 @@ struct pinctrl_dev *pinctrl_register(struct pinctrl_desc *pctldesc,
+ 		return pctldev;
+ 
+ 	error = pinctrl_enable(pctldev);
+-	if (error)
++	if (error) {
++		pinctrl_uninit_controller(pctldev, pctldesc);
+ 		return ERR_PTR(error);
++	}
+ 
+ 	return pctldev;
+ }
+diff --git a/drivers/pinctrl/freescale/pinctrl-mxs.c b/drivers/pinctrl/freescale/pinctrl-mxs.c
+index e77311f26262a..4813a9e16cb3b 100644
+--- a/drivers/pinctrl/freescale/pinctrl-mxs.c
++++ b/drivers/pinctrl/freescale/pinctrl-mxs.c
+@@ -413,8 +413,8 @@ static int mxs_pinctrl_probe_dt(struct platform_device *pdev,
+ 	int ret;
+ 	u32 val;
+ 
+-	child = of_get_next_child(np, NULL);
+-	if (!child) {
++	val = of_get_child_count(np);
++	if (val == 0) {
+ 		dev_err(&pdev->dev, "no group is defined\n");
+ 		return -ENOENT;
+ 	}
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 3f56991f5b892..6a74619786300 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -915,9 +915,8 @@ static struct rockchip_mux_route_data rk3308_mux_route_data[] = {
+ 	RK_MUXROUTE_SAME(0, RK_PC3, 1, 0x314, BIT(16 + 0) | BIT(0)), /* rtc_clk */
+ 	RK_MUXROUTE_SAME(1, RK_PC6, 2, 0x314, BIT(16 + 2) | BIT(16 + 3)), /* uart2_rxm0 */
+ 	RK_MUXROUTE_SAME(4, RK_PD2, 2, 0x314, BIT(16 + 2) | BIT(16 + 3) | BIT(2)), /* uart2_rxm1 */
+-	RK_MUXROUTE_SAME(0, RK_PB7, 2, 0x608, BIT(16 + 8) | BIT(16 + 9)), /* i2c3_sdam0 */
+-	RK_MUXROUTE_SAME(3, RK_PB4, 2, 0x608, BIT(16 + 8) | BIT(16 + 9) | BIT(8)), /* i2c3_sdam1 */
+-	RK_MUXROUTE_SAME(2, RK_PA0, 3, 0x608, BIT(16 + 8) | BIT(16 + 9) | BIT(9)), /* i2c3_sdam2 */
++	RK_MUXROUTE_SAME(0, RK_PB7, 2, 0x314, BIT(16 + 4)), /* i2c3_sdam0 */
++	RK_MUXROUTE_SAME(3, RK_PB4, 2, 0x314, BIT(16 + 4) | BIT(4)), /* i2c3_sdam1 */
+ 	RK_MUXROUTE_SAME(1, RK_PA3, 2, 0x308, BIT(16 + 3)), /* i2s-8ch-1-sclktxm0 */
+ 	RK_MUXROUTE_SAME(1, RK_PA4, 2, 0x308, BIT(16 + 3)), /* i2s-8ch-1-sclkrxm0 */
+ 	RK_MUXROUTE_SAME(1, RK_PB5, 2, 0x308, BIT(16 + 3) | BIT(3)), /* i2s-8ch-1-sclktxm1 */
+@@ -926,18 +925,6 @@ static struct rockchip_mux_route_data rk3308_mux_route_data[] = {
+ 	RK_MUXROUTE_SAME(1, RK_PB6, 4, 0x308, BIT(16 + 12) | BIT(16 + 13) | BIT(12)), /* pdm-clkm1 */
+ 	RK_MUXROUTE_SAME(2, RK_PA6, 2, 0x308, BIT(16 + 12) | BIT(16 + 13) | BIT(13)), /* pdm-clkm2 */
+ 	RK_MUXROUTE_SAME(2, RK_PA4, 3, 0x600, BIT(16 + 2) | BIT(2)), /* pdm-clkm-m2 */
+-	RK_MUXROUTE_SAME(3, RK_PB2, 3, 0x314, BIT(16 + 9)), /* spi1_miso */
+-	RK_MUXROUTE_SAME(2, RK_PA4, 2, 0x314, BIT(16 + 9) | BIT(9)), /* spi1_miso_m1 */
+-	RK_MUXROUTE_SAME(0, RK_PB3, 3, 0x314, BIT(16 + 10) | BIT(16 + 11)), /* owire_m0 */
+-	RK_MUXROUTE_SAME(1, RK_PC6, 7, 0x314, BIT(16 + 10) | BIT(16 + 11) | BIT(10)), /* owire_m1 */
+-	RK_MUXROUTE_SAME(2, RK_PA2, 5, 0x314, BIT(16 + 10) | BIT(16 + 11) | BIT(11)), /* owire_m2 */
+-	RK_MUXROUTE_SAME(0, RK_PB3, 2, 0x314, BIT(16 + 12) | BIT(16 + 13)), /* can_rxd_m0 */
+-	RK_MUXROUTE_SAME(1, RK_PC6, 5, 0x314, BIT(16 + 12) | BIT(16 + 13) | BIT(12)), /* can_rxd_m1 */
+-	RK_MUXROUTE_SAME(2, RK_PA2, 4, 0x314, BIT(16 + 12) | BIT(16 + 13) | BIT(13)), /* can_rxd_m2 */
+-	RK_MUXROUTE_SAME(1, RK_PC4, 3, 0x314, BIT(16 + 14)), /* mac_rxd0_m0 */
+-	RK_MUXROUTE_SAME(4, RK_PA2, 2, 0x314, BIT(16 + 14) | BIT(14)), /* mac_rxd0_m1 */
+-	RK_MUXROUTE_SAME(3, RK_PB4, 4, 0x314, BIT(16 + 15)), /* uart3_rx */
+-	RK_MUXROUTE_SAME(0, RK_PC1, 3, 0x314, BIT(16 + 15) | BIT(15)), /* uart3_rx_m1 */
+ };
+ 
+ static struct rockchip_mux_route_data rk3328_mux_route_data[] = {
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index a798f31d69542..4c6bfabb6bd7d 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -1329,7 +1329,6 @@ static void pcs_irq_free(struct pcs_device *pcs)
+ static void pcs_free_resources(struct pcs_device *pcs)
+ {
+ 	pcs_irq_free(pcs);
+-	pinctrl_unregister(pcs->pctl);
+ 
+ #if IS_BUILTIN(CONFIG_PINCTRL_SINGLE)
+ 	if (pcs->missing_nr_pinctrl_cells)
+@@ -1879,7 +1878,7 @@ static int pcs_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		goto free;
+ 
+-	ret = pinctrl_register_and_init(&pcs->desc, pcs->dev, pcs, &pcs->pctl);
++	ret = devm_pinctrl_register_and_init(pcs->dev, &pcs->desc, pcs, &pcs->pctl);
+ 	if (ret) {
+ 		dev_err(pcs->dev, "could not register single pinctrl driver\n");
+ 		goto free;
+@@ -1912,8 +1911,10 @@ static int pcs_probe(struct platform_device *pdev)
+ 
+ 	dev_info(pcs->dev, "%i pins, size %u\n", pcs->desc.npins, pcs->size);
+ 
+-	return pinctrl_enable(pcs->pctl);
++	if (pinctrl_enable(pcs->pctl))
++		goto free;
+ 
++	return 0;
+ free:
+ 	pcs_free_resources(pcs);
+ 
+diff --git a/drivers/pinctrl/renesas/pfc-r8a779g0.c b/drivers/pinctrl/renesas/pfc-r8a779g0.c
+index d2de526a3b588..bb843e333c880 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a779g0.c
++++ b/drivers/pinctrl/renesas/pfc-r8a779g0.c
+@@ -68,20 +68,20 @@
+ #define GPSR0_9		F_(MSIOF5_SYNC,		IP1SR0_7_4)
+ #define GPSR0_8		F_(MSIOF5_SS1,		IP1SR0_3_0)
+ #define GPSR0_7		F_(MSIOF5_SS2,		IP0SR0_31_28)
+-#define GPSR0_6		F_(IRQ0,		IP0SR0_27_24)
+-#define GPSR0_5		F_(IRQ1,		IP0SR0_23_20)
+-#define GPSR0_4		F_(IRQ2,		IP0SR0_19_16)
+-#define GPSR0_3		F_(IRQ3,		IP0SR0_15_12)
++#define GPSR0_6		F_(IRQ0_A,		IP0SR0_27_24)
++#define GPSR0_5		F_(IRQ1_A,		IP0SR0_23_20)
++#define GPSR0_4		F_(IRQ2_A,		IP0SR0_19_16)
++#define GPSR0_3		F_(IRQ3_A,		IP0SR0_15_12)
+ #define GPSR0_2		F_(GP0_02,		IP0SR0_11_8)
+ #define GPSR0_1		F_(GP0_01,		IP0SR0_7_4)
+ #define GPSR0_0		F_(GP0_00,		IP0SR0_3_0)
+ 
+ /* GPSR1 */
+-#define GPSR1_28	F_(HTX3,		IP3SR1_19_16)
+-#define GPSR1_27	F_(HCTS3_N,		IP3SR1_15_12)
+-#define GPSR1_26	F_(HRTS3_N,		IP3SR1_11_8)
+-#define GPSR1_25	F_(HSCK3,		IP3SR1_7_4)
+-#define GPSR1_24	F_(HRX3,		IP3SR1_3_0)
++#define GPSR1_28	F_(HTX3_A,		IP3SR1_19_16)
++#define GPSR1_27	F_(HCTS3_N_A,		IP3SR1_15_12)
++#define GPSR1_26	F_(HRTS3_N_A,		IP3SR1_11_8)
++#define GPSR1_25	F_(HSCK3_A,		IP3SR1_7_4)
++#define GPSR1_24	F_(HRX3_A,		IP3SR1_3_0)
+ #define GPSR1_23	F_(GP1_23,		IP2SR1_31_28)
+ #define GPSR1_22	F_(AUDIO_CLKIN,		IP2SR1_27_24)
+ #define GPSR1_21	F_(AUDIO_CLKOUT,	IP2SR1_23_20)
+@@ -119,14 +119,14 @@
+ #define GPSR2_11	F_(CANFD0_RX,		IP1SR2_15_12)
+ #define GPSR2_10	F_(CANFD0_TX,		IP1SR2_11_8)
+ #define GPSR2_9		F_(CAN_CLK,		IP1SR2_7_4)
+-#define GPSR2_8		F_(TPU0TO0,		IP1SR2_3_0)
+-#define GPSR2_7		F_(TPU0TO1,		IP0SR2_31_28)
++#define GPSR2_8		F_(TPU0TO0_A,		IP1SR2_3_0)
++#define GPSR2_7		F_(TPU0TO1_A,		IP0SR2_31_28)
+ #define GPSR2_6		F_(FXR_TXDB,		IP0SR2_27_24)
+-#define GPSR2_5		F_(FXR_TXENB_N,		IP0SR2_23_20)
++#define GPSR2_5		F_(FXR_TXENB_N_A,	IP0SR2_23_20)
+ #define GPSR2_4		F_(RXDB_EXTFXR,		IP0SR2_19_16)
+ #define GPSR2_3		F_(CLK_EXTFXR,		IP0SR2_15_12)
+ #define GPSR2_2		F_(RXDA_EXTFXR,		IP0SR2_11_8)
+-#define GPSR2_1		F_(FXR_TXENA_N,		IP0SR2_7_4)
++#define GPSR2_1		F_(FXR_TXENA_N_A,	IP0SR2_7_4)
+ #define GPSR2_0		F_(FXR_TXDA,		IP0SR2_3_0)
+ 
+ /* GPSR3 */
+@@ -275,13 +275,13 @@
+ 
+ /* SR0 */
+ /* IP0SR0 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP0SR0_3_0	F_(0, 0)		FM(ERROROUTC_N_B)	FM(TCLK2_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR0_3_0	F_(0, 0)		FM(ERROROUTC_N_B)	FM(TCLK2_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP0SR0_7_4	F_(0, 0)		FM(MSIOF3_SS1)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP0SR0_11_8	F_(0, 0)		FM(MSIOF3_SS2)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR0_15_12	FM(IRQ3)		FM(MSIOF3_SCK)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR0_19_16	FM(IRQ2)		FM(MSIOF3_TXD)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR0_23_20	FM(IRQ1)		FM(MSIOF3_RXD)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR0_27_24	FM(IRQ0)		FM(MSIOF3_SYNC)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR0_15_12	FM(IRQ3_A)		FM(MSIOF3_SCK)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR0_19_16	FM(IRQ2_A)		FM(MSIOF3_TXD)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR0_23_20	FM(IRQ1_A)		FM(MSIOF3_RXD)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR0_27_24	FM(IRQ0_A)		FM(MSIOF3_SYNC)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP0SR0_31_28	FM(MSIOF5_SS2)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* IP1SR0 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+@@ -290,72 +290,72 @@
+ #define IP1SR0_11_8	FM(MSIOF5_TXD)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP1SR0_15_12	FM(MSIOF5_SCK)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP1SR0_19_16	FM(MSIOF5_RXD)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR0_23_20	FM(MSIOF2_SS2)		FM(TCLK1)		FM(IRQ2_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR0_27_24	FM(MSIOF2_SS1)		FM(HTX1)		FM(TX1)			F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR0_31_28	FM(MSIOF2_SYNC)		FM(HRX1)		FM(RX1)			F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR0_23_20	FM(MSIOF2_SS2)		FM(TCLK1_A)		FM(IRQ2_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR0_27_24	FM(MSIOF2_SS1)		FM(HTX1_A)		FM(TX1_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR0_31_28	FM(MSIOF2_SYNC)		FM(HRX1_A)		FM(RX1_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* IP2SR0 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP2SR0_3_0	FM(MSIOF2_TXD)		FM(HCTS1_N)		FM(CTS1_N)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR0_7_4	FM(MSIOF2_SCK)		FM(HRTS1_N)		FM(RTS1_N)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR0_11_8	FM(MSIOF2_RXD)		FM(HSCK1)		FM(SCK1)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR0_3_0	FM(MSIOF2_TXD)		FM(HCTS1_N_A)		FM(CTS1_N_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR0_7_4	FM(MSIOF2_SCK)		FM(HRTS1_N_A)		FM(RTS1_N_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR0_11_8	FM(MSIOF2_RXD)		FM(HSCK1_A)		FM(SCK1_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* SR1 */
+ /* IP0SR1 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP0SR1_3_0	FM(MSIOF1_SS2)		FM(HTX3_A)		FM(TX3)			F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_7_4	FM(MSIOF1_SS1)		FM(HCTS3_N_A)		FM(RX3)			F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_11_8	FM(MSIOF1_SYNC)		FM(HRTS3_N_A)		FM(RTS3_N)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_15_12	FM(MSIOF1_SCK)		FM(HSCK3_A)		FM(CTS3_N)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_19_16	FM(MSIOF1_TXD)		FM(HRX3_A)		FM(SCK3)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_3_0	FM(MSIOF1_SS2)		FM(HTX3_B)		FM(TX3_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_7_4	FM(MSIOF1_SS1)		FM(HCTS3_N_B)		FM(RX3_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_11_8	FM(MSIOF1_SYNC)		FM(HRTS3_N_B)		FM(RTS3_N_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_15_12	FM(MSIOF1_SCK)		FM(HSCK3_B)		FM(CTS3_N_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_19_16	FM(MSIOF1_TXD)		FM(HRX3_B)		FM(SCK3_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP0SR1_23_20	FM(MSIOF1_RXD)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_27_24	FM(MSIOF0_SS2)		FM(HTX1_X)		FM(TX1_X)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR1_31_28	FM(MSIOF0_SS1)		FM(HRX1_X)		FM(RX1_X)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_27_24	FM(MSIOF0_SS2)		FM(HTX1_B)		FM(TX1_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR1_31_28	FM(MSIOF0_SS1)		FM(HRX1_B)		FM(RX1_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* IP1SR1 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP1SR1_3_0	FM(MSIOF0_SYNC)		FM(HCTS1_N_X)		FM(CTS1_N_X)		FM(CANFD5_TX_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR1_7_4	FM(MSIOF0_TXD)		FM(HRTS1_N_X)		FM(RTS1_N_X)		FM(CANFD5_RX_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR1_11_8	FM(MSIOF0_SCK)		FM(HSCK1_X)		FM(SCK1_X)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_3_0	FM(MSIOF0_SYNC)		FM(HCTS1_N_B)		FM(CTS1_N_B)		FM(CANFD5_TX_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_7_4	FM(MSIOF0_TXD)		FM(HRTS1_N_B)		FM(RTS1_N_B)		FM(CANFD5_RX_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_11_8	FM(MSIOF0_SCK)		FM(HSCK1_B)		FM(SCK1_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP1SR1_15_12	FM(MSIOF0_RXD)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP1SR1_19_16	FM(HTX0)		FM(TX0)			F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR1_23_20	FM(HCTS0_N)		FM(CTS0_N)		FM(PWM8_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR1_27_24	FM(HRTS0_N)		FM(RTS0_N)		FM(PWM9_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR1_31_28	FM(HSCK0)		FM(SCK0)		FM(PWM0_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_23_20	FM(HCTS0_N)		FM(CTS0_N)		FM(PWM8)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_27_24	FM(HRTS0_N)		FM(RTS0_N)		FM(PWM9)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR1_31_28	FM(HSCK0)		FM(SCK0)		FM(PWM0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* IP2SR1 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+ #define IP2SR1_3_0	FM(HRX0)		FM(RX0)			F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP2SR1_7_4	FM(SCIF_CLK)		FM(IRQ4_A)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR1_11_8	FM(SSI_SCK)		FM(TCLK3)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR1_15_12	FM(SSI_WS)		FM(TCLK4)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR1_19_16	FM(SSI_SD)		FM(IRQ0_A)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR1_23_20	FM(AUDIO_CLKOUT)	FM(IRQ1_A)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR1_11_8	FM(SSI_SCK)		FM(TCLK3_B)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR1_15_12	FM(SSI_WS)		FM(TCLK4_B)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR1_19_16	FM(SSI_SD)		FM(IRQ0_B)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR1_23_20	FM(AUDIO_CLKOUT)	FM(IRQ1_B)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP2SR1_27_24	FM(AUDIO_CLKIN)		FM(PWM3_A)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP2SR1_31_28	F_(0, 0)		FM(TCLK2)		FM(MSIOF4_SS1)		FM(IRQ3_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP2SR1_31_28	F_(0, 0)		FM(TCLK2_A)		FM(MSIOF4_SS1)		FM(IRQ3_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* IP3SR1 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP3SR1_3_0	FM(HRX3)		FM(SCK3_A)		FM(MSIOF4_SS2)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR1_7_4	FM(HSCK3)		FM(CTS3_N_A)		FM(MSIOF4_SCK)		FM(TPU0TO0_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR1_11_8	FM(HRTS3_N)		FM(RTS3_N_A)		FM(MSIOF4_TXD)		FM(TPU0TO1_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR1_15_12	FM(HCTS3_N)		FM(RX3_A)		FM(MSIOF4_RXD)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP3SR1_19_16	FM(HTX3)		FM(TX3_A)		FM(MSIOF4_SYNC)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR1_3_0	FM(HRX3_A)		FM(SCK3_A)		FM(MSIOF4_SS2)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR1_7_4	FM(HSCK3_A)		FM(CTS3_N_A)		FM(MSIOF4_SCK)		FM(TPU0TO0_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR1_11_8	FM(HRTS3_N_A)		FM(RTS3_N_A)		FM(MSIOF4_TXD)		FM(TPU0TO1_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR1_15_12	FM(HCTS3_N_A)		FM(RX3_A)		FM(MSIOF4_RXD)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP3SR1_19_16	FM(HTX3_A)		FM(TX3_A)		FM(MSIOF4_SYNC)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* SR2 */
+ /* IP0SR2 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP0SR2_3_0	FM(FXR_TXDA)		FM(CANFD1_TX)		FM(TPU0TO2_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR2_7_4	FM(FXR_TXENA_N)		FM(CANFD1_RX)		FM(TPU0TO3_A)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR2_11_8	FM(RXDA_EXTFXR)		FM(CANFD5_TX)		FM(IRQ5)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR2_15_12	FM(CLK_EXTFXR)		FM(CANFD5_RX)		FM(IRQ4_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_3_0	FM(FXR_TXDA)		FM(CANFD1_TX)		FM(TPU0TO2_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_7_4	FM(FXR_TXENA_N_A)	FM(CANFD1_RX)		FM(TPU0TO3_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_11_8	FM(RXDA_EXTFXR)		FM(CANFD5_TX_A)		FM(IRQ5)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_15_12	FM(CLK_EXTFXR)		FM(CANFD5_RX_A)		FM(IRQ4_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP0SR2_19_16	FM(RXDB_EXTFXR)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR2_23_20	FM(FXR_TXENB_N)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_23_20	FM(FXR_TXENB_N_A)	F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP0SR2_27_24	FM(FXR_TXDB)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP0SR2_31_28	FM(TPU0TO1)		FM(CANFD6_TX)		F_(0, 0)		FM(TCLK2_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP0SR2_31_28	FM(TPU0TO1_A)		FM(CANFD6_TX)		F_(0, 0)		FM(TCLK2_C)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* IP1SR2 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+-#define IP1SR2_3_0	FM(TPU0TO0)		FM(CANFD6_RX)		F_(0, 0)		FM(TCLK1_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR2_7_4	FM(CAN_CLK)		FM(FXR_TXENA_N_X)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR2_11_8	FM(CANFD0_TX)		FM(FXR_TXENB_N_X)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_3_0	FM(TPU0TO0_A)		FM(CANFD6_RX)		F_(0, 0)		FM(TCLK1_B)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_7_4	FM(CAN_CLK)		FM(FXR_TXENA_N_B)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_11_8	FM(CANFD0_TX)		FM(FXR_TXENB_N_B)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP1SR2_15_12	FM(CANFD0_RX)		FM(STPWT_EXTFXR)	F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR2_19_16	FM(CANFD2_TX)		FM(TPU0TO2)		F_(0, 0)		FM(TCLK3_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR2_23_20	FM(CANFD2_RX)		FM(TPU0TO3)		FM(PWM1_B)		FM(TCLK4_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR2_27_24	FM(CANFD3_TX)		F_(0, 0)		FM(PWM2_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_19_16	FM(CANFD2_TX)		FM(TPU0TO2_A)		F_(0, 0)		FM(TCLK3_C)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_23_20	FM(CANFD2_RX)		FM(TPU0TO3_A)		FM(PWM1_B)		FM(TCLK4_C)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR2_27_24	FM(CANFD3_TX)		F_(0, 0)		FM(PWM2)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP1SR2_31_28	FM(CANFD3_RX)		F_(0, 0)		FM(PWM3_B)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* IP2SR2 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+@@ -381,8 +381,8 @@
+ #define IP1SR3_11_8	FM(MMC_SD_CMD)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP1SR3_15_12	FM(SD_CD)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP1SR3_19_16	FM(SD_WP)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR3_23_20	FM(IPC_CLKIN)		FM(IPC_CLKEN_IN)	FM(PWM1_A)		FM(TCLK3_X)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define IP1SR3_27_24	FM(IPC_CLKOUT)		FM(IPC_CLKEN_OUT)	FM(ERROROUTC_N_A)	FM(TCLK4_X)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR3_23_20	FM(IPC_CLKIN)		FM(IPC_CLKEN_IN)	FM(PWM1_A)		FM(TCLK3_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
++#define IP1SR3_27_24	FM(IPC_CLKOUT)		FM(IPC_CLKEN_OUT)	FM(ERROROUTC_N_A)	FM(TCLK4_A)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ #define IP1SR3_31_28	FM(QSPI0_SSL)		F_(0, 0)		F_(0, 0)		F_(0, 0)	F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0) F_(0, 0)
+ 
+ /* IP2SR3 */		/* 0 */			/* 1 */			/* 2 */			/* 3		4	 5	  6	   7	    8	     9	      A	       B	C	 D	  E	   F */
+@@ -718,22 +718,22 @@ static const u16 pinmux_data[] = {
+ 
+ 	/* IP0SR0 */
+ 	PINMUX_IPSR_GPSR(IP0SR0_3_0,	ERROROUTC_N_B),
+-	PINMUX_IPSR_GPSR(IP0SR0_3_0,	TCLK2_A),
++	PINMUX_IPSR_GPSR(IP0SR0_3_0,	TCLK2_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR0_7_4,	MSIOF3_SS1),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR0_11_8,	MSIOF3_SS2),
+ 
+-	PINMUX_IPSR_GPSR(IP0SR0_15_12,	IRQ3),
++	PINMUX_IPSR_GPSR(IP0SR0_15_12,	IRQ3_A),
+ 	PINMUX_IPSR_GPSR(IP0SR0_15_12,	MSIOF3_SCK),
+ 
+-	PINMUX_IPSR_GPSR(IP0SR0_19_16,	IRQ2),
++	PINMUX_IPSR_GPSR(IP0SR0_19_16,	IRQ2_A),
+ 	PINMUX_IPSR_GPSR(IP0SR0_19_16,	MSIOF3_TXD),
+ 
+-	PINMUX_IPSR_GPSR(IP0SR0_23_20,	IRQ1),
++	PINMUX_IPSR_GPSR(IP0SR0_23_20,	IRQ1_A),
+ 	PINMUX_IPSR_GPSR(IP0SR0_23_20,	MSIOF3_RXD),
+ 
+-	PINMUX_IPSR_GPSR(IP0SR0_27_24,	IRQ0),
++	PINMUX_IPSR_GPSR(IP0SR0_27_24,	IRQ0_A),
+ 	PINMUX_IPSR_GPSR(IP0SR0_27_24,	MSIOF3_SYNC),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR0_31_28,	MSIOF5_SS2),
+@@ -750,75 +750,75 @@ static const u16 pinmux_data[] = {
+ 	PINMUX_IPSR_GPSR(IP1SR0_19_16,	MSIOF5_RXD),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR0_23_20,	MSIOF2_SS2),
+-	PINMUX_IPSR_GPSR(IP1SR0_23_20,	TCLK1),
+-	PINMUX_IPSR_GPSR(IP1SR0_23_20,	IRQ2_A),
++	PINMUX_IPSR_GPSR(IP1SR0_23_20,	TCLK1_A),
++	PINMUX_IPSR_GPSR(IP1SR0_23_20,	IRQ2_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR0_27_24,	MSIOF2_SS1),
+-	PINMUX_IPSR_GPSR(IP1SR0_27_24,	HTX1),
+-	PINMUX_IPSR_GPSR(IP1SR0_27_24,	TX1),
++	PINMUX_IPSR_GPSR(IP1SR0_27_24,	HTX1_A),
++	PINMUX_IPSR_GPSR(IP1SR0_27_24,	TX1_A),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR0_31_28,	MSIOF2_SYNC),
+-	PINMUX_IPSR_GPSR(IP1SR0_31_28,	HRX1),
+-	PINMUX_IPSR_GPSR(IP1SR0_31_28,	RX1),
++	PINMUX_IPSR_GPSR(IP1SR0_31_28,	HRX1_A),
++	PINMUX_IPSR_GPSR(IP1SR0_31_28,	RX1_A),
+ 
+ 	/* IP2SR0 */
+ 	PINMUX_IPSR_GPSR(IP2SR0_3_0,	MSIOF2_TXD),
+-	PINMUX_IPSR_GPSR(IP2SR0_3_0,	HCTS1_N),
+-	PINMUX_IPSR_GPSR(IP2SR0_3_0,	CTS1_N),
++	PINMUX_IPSR_GPSR(IP2SR0_3_0,	HCTS1_N_A),
++	PINMUX_IPSR_GPSR(IP2SR0_3_0,	CTS1_N_A),
+ 
+ 	PINMUX_IPSR_GPSR(IP2SR0_7_4,	MSIOF2_SCK),
+-	PINMUX_IPSR_GPSR(IP2SR0_7_4,	HRTS1_N),
+-	PINMUX_IPSR_GPSR(IP2SR0_7_4,	RTS1_N),
++	PINMUX_IPSR_GPSR(IP2SR0_7_4,	HRTS1_N_A),
++	PINMUX_IPSR_GPSR(IP2SR0_7_4,	RTS1_N_A),
+ 
+ 	PINMUX_IPSR_GPSR(IP2SR0_11_8,	MSIOF2_RXD),
+-	PINMUX_IPSR_GPSR(IP2SR0_11_8,	HSCK1),
+-	PINMUX_IPSR_GPSR(IP2SR0_11_8,	SCK1),
++	PINMUX_IPSR_GPSR(IP2SR0_11_8,	HSCK1_A),
++	PINMUX_IPSR_GPSR(IP2SR0_11_8,	SCK1_A),
+ 
+ 	/* IP0SR1 */
+ 	PINMUX_IPSR_GPSR(IP0SR1_3_0,	MSIOF1_SS2),
+-	PINMUX_IPSR_GPSR(IP0SR1_3_0,	HTX3_A),
+-	PINMUX_IPSR_GPSR(IP0SR1_3_0,	TX3),
++	PINMUX_IPSR_GPSR(IP0SR1_3_0,	HTX3_B),
++	PINMUX_IPSR_GPSR(IP0SR1_3_0,	TX3_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR1_7_4,	MSIOF1_SS1),
+-	PINMUX_IPSR_GPSR(IP0SR1_7_4,	HCTS3_N_A),
+-	PINMUX_IPSR_GPSR(IP0SR1_7_4,	RX3),
++	PINMUX_IPSR_GPSR(IP0SR1_7_4,	HCTS3_N_B),
++	PINMUX_IPSR_GPSR(IP0SR1_7_4,	RX3_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR1_11_8,	MSIOF1_SYNC),
+-	PINMUX_IPSR_GPSR(IP0SR1_11_8,	HRTS3_N_A),
+-	PINMUX_IPSR_GPSR(IP0SR1_11_8,	RTS3_N),
++	PINMUX_IPSR_GPSR(IP0SR1_11_8,	HRTS3_N_B),
++	PINMUX_IPSR_GPSR(IP0SR1_11_8,	RTS3_N_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR1_15_12,	MSIOF1_SCK),
+-	PINMUX_IPSR_GPSR(IP0SR1_15_12,	HSCK3_A),
+-	PINMUX_IPSR_GPSR(IP0SR1_15_12,	CTS3_N),
++	PINMUX_IPSR_GPSR(IP0SR1_15_12,	HSCK3_B),
++	PINMUX_IPSR_GPSR(IP0SR1_15_12,	CTS3_N_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR1_19_16,	MSIOF1_TXD),
+-	PINMUX_IPSR_GPSR(IP0SR1_19_16,	HRX3_A),
+-	PINMUX_IPSR_GPSR(IP0SR1_19_16,	SCK3),
++	PINMUX_IPSR_GPSR(IP0SR1_19_16,	HRX3_B),
++	PINMUX_IPSR_GPSR(IP0SR1_19_16,	SCK3_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR1_23_20,	MSIOF1_RXD),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR1_27_24,	MSIOF0_SS2),
+-	PINMUX_IPSR_GPSR(IP0SR1_27_24,	HTX1_X),
+-	PINMUX_IPSR_GPSR(IP0SR1_27_24,	TX1_X),
++	PINMUX_IPSR_GPSR(IP0SR1_27_24,	HTX1_B),
++	PINMUX_IPSR_GPSR(IP0SR1_27_24,	TX1_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR1_31_28,	MSIOF0_SS1),
+-	PINMUX_IPSR_GPSR(IP0SR1_31_28,	HRX1_X),
+-	PINMUX_IPSR_GPSR(IP0SR1_31_28,	RX1_X),
++	PINMUX_IPSR_GPSR(IP0SR1_31_28,	HRX1_B),
++	PINMUX_IPSR_GPSR(IP0SR1_31_28,	RX1_B),
+ 
+ 	/* IP1SR1 */
+ 	PINMUX_IPSR_GPSR(IP1SR1_3_0,	MSIOF0_SYNC),
+-	PINMUX_IPSR_GPSR(IP1SR1_3_0,	HCTS1_N_X),
+-	PINMUX_IPSR_GPSR(IP1SR1_3_0,	CTS1_N_X),
++	PINMUX_IPSR_GPSR(IP1SR1_3_0,	HCTS1_N_B),
++	PINMUX_IPSR_GPSR(IP1SR1_3_0,	CTS1_N_B),
+ 	PINMUX_IPSR_GPSR(IP1SR1_3_0,	CANFD5_TX_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR1_7_4,	MSIOF0_TXD),
+-	PINMUX_IPSR_GPSR(IP1SR1_7_4,	HRTS1_N_X),
+-	PINMUX_IPSR_GPSR(IP1SR1_7_4,	RTS1_N_X),
++	PINMUX_IPSR_GPSR(IP1SR1_7_4,	HRTS1_N_B),
++	PINMUX_IPSR_GPSR(IP1SR1_7_4,	RTS1_N_B),
+ 	PINMUX_IPSR_GPSR(IP1SR1_7_4,	CANFD5_RX_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR1_11_8,	MSIOF0_SCK),
+-	PINMUX_IPSR_GPSR(IP1SR1_11_8,	HSCK1_X),
+-	PINMUX_IPSR_GPSR(IP1SR1_11_8,	SCK1_X),
++	PINMUX_IPSR_GPSR(IP1SR1_11_8,	HSCK1_B),
++	PINMUX_IPSR_GPSR(IP1SR1_11_8,	SCK1_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR1_15_12,	MSIOF0_RXD),
+ 
+@@ -827,15 +827,15 @@ static const u16 pinmux_data[] = {
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR1_23_20,	HCTS0_N),
+ 	PINMUX_IPSR_GPSR(IP1SR1_23_20,	CTS0_N),
+-	PINMUX_IPSR_GPSR(IP1SR1_23_20,	PWM8_A),
++	PINMUX_IPSR_GPSR(IP1SR1_23_20,	PWM8),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR1_27_24,	HRTS0_N),
+ 	PINMUX_IPSR_GPSR(IP1SR1_27_24,	RTS0_N),
+-	PINMUX_IPSR_GPSR(IP1SR1_27_24,	PWM9_A),
++	PINMUX_IPSR_GPSR(IP1SR1_27_24,	PWM9),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR1_31_28,	HSCK0),
+ 	PINMUX_IPSR_GPSR(IP1SR1_31_28,	SCK0),
+-	PINMUX_IPSR_GPSR(IP1SR1_31_28,	PWM0_A),
++	PINMUX_IPSR_GPSR(IP1SR1_31_28,	PWM0),
+ 
+ 	/* IP2SR1 */
+ 	PINMUX_IPSR_GPSR(IP2SR1_3_0,	HRX0),
+@@ -845,99 +845,99 @@ static const u16 pinmux_data[] = {
+ 	PINMUX_IPSR_GPSR(IP2SR1_7_4,	IRQ4_A),
+ 
+ 	PINMUX_IPSR_GPSR(IP2SR1_11_8,	SSI_SCK),
+-	PINMUX_IPSR_GPSR(IP2SR1_11_8,	TCLK3),
++	PINMUX_IPSR_GPSR(IP2SR1_11_8,	TCLK3_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP2SR1_15_12,	SSI_WS),
+-	PINMUX_IPSR_GPSR(IP2SR1_15_12,	TCLK4),
++	PINMUX_IPSR_GPSR(IP2SR1_15_12,	TCLK4_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP2SR1_19_16,	SSI_SD),
+-	PINMUX_IPSR_GPSR(IP2SR1_19_16,	IRQ0_A),
++	PINMUX_IPSR_GPSR(IP2SR1_19_16,	IRQ0_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP2SR1_23_20,	AUDIO_CLKOUT),
+-	PINMUX_IPSR_GPSR(IP2SR1_23_20,	IRQ1_A),
++	PINMUX_IPSR_GPSR(IP2SR1_23_20,	IRQ1_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP2SR1_27_24,	AUDIO_CLKIN),
+ 	PINMUX_IPSR_GPSR(IP2SR1_27_24,	PWM3_A),
+ 
+-	PINMUX_IPSR_GPSR(IP2SR1_31_28,	TCLK2),
++	PINMUX_IPSR_GPSR(IP2SR1_31_28,	TCLK2_A),
+ 	PINMUX_IPSR_GPSR(IP2SR1_31_28,	MSIOF4_SS1),
+ 	PINMUX_IPSR_GPSR(IP2SR1_31_28,	IRQ3_B),
+ 
+ 	/* IP3SR1 */
+-	PINMUX_IPSR_GPSR(IP3SR1_3_0,	HRX3),
++	PINMUX_IPSR_GPSR(IP3SR1_3_0,	HRX3_A),
+ 	PINMUX_IPSR_GPSR(IP3SR1_3_0,	SCK3_A),
+ 	PINMUX_IPSR_GPSR(IP3SR1_3_0,	MSIOF4_SS2),
+ 
+-	PINMUX_IPSR_GPSR(IP3SR1_7_4,	HSCK3),
++	PINMUX_IPSR_GPSR(IP3SR1_7_4,	HSCK3_A),
+ 	PINMUX_IPSR_GPSR(IP3SR1_7_4,	CTS3_N_A),
+ 	PINMUX_IPSR_GPSR(IP3SR1_7_4,	MSIOF4_SCK),
+-	PINMUX_IPSR_GPSR(IP3SR1_7_4,	TPU0TO0_A),
++	PINMUX_IPSR_GPSR(IP3SR1_7_4,	TPU0TO0_B),
+ 
+-	PINMUX_IPSR_GPSR(IP3SR1_11_8,	HRTS3_N),
++	PINMUX_IPSR_GPSR(IP3SR1_11_8,	HRTS3_N_A),
+ 	PINMUX_IPSR_GPSR(IP3SR1_11_8,	RTS3_N_A),
+ 	PINMUX_IPSR_GPSR(IP3SR1_11_8,	MSIOF4_TXD),
+-	PINMUX_IPSR_GPSR(IP3SR1_11_8,	TPU0TO1_A),
++	PINMUX_IPSR_GPSR(IP3SR1_11_8,	TPU0TO1_B),
+ 
+-	PINMUX_IPSR_GPSR(IP3SR1_15_12,	HCTS3_N),
++	PINMUX_IPSR_GPSR(IP3SR1_15_12,	HCTS3_N_A),
+ 	PINMUX_IPSR_GPSR(IP3SR1_15_12,	RX3_A),
+ 	PINMUX_IPSR_GPSR(IP3SR1_15_12,	MSIOF4_RXD),
+ 
+-	PINMUX_IPSR_GPSR(IP3SR1_19_16,	HTX3),
++	PINMUX_IPSR_GPSR(IP3SR1_19_16,	HTX3_A),
+ 	PINMUX_IPSR_GPSR(IP3SR1_19_16,	TX3_A),
+ 	PINMUX_IPSR_GPSR(IP3SR1_19_16,	MSIOF4_SYNC),
+ 
+ 	/* IP0SR2 */
+ 	PINMUX_IPSR_GPSR(IP0SR2_3_0,	FXR_TXDA),
+ 	PINMUX_IPSR_GPSR(IP0SR2_3_0,	CANFD1_TX),
+-	PINMUX_IPSR_GPSR(IP0SR2_3_0,	TPU0TO2_A),
++	PINMUX_IPSR_GPSR(IP0SR2_3_0,	TPU0TO2_B),
+ 
+-	PINMUX_IPSR_GPSR(IP0SR2_7_4,	FXR_TXENA_N),
++	PINMUX_IPSR_GPSR(IP0SR2_7_4,	FXR_TXENA_N_A),
+ 	PINMUX_IPSR_GPSR(IP0SR2_7_4,	CANFD1_RX),
+-	PINMUX_IPSR_GPSR(IP0SR2_7_4,	TPU0TO3_A),
++	PINMUX_IPSR_GPSR(IP0SR2_7_4,	TPU0TO3_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR2_11_8,	RXDA_EXTFXR),
+-	PINMUX_IPSR_GPSR(IP0SR2_11_8,	CANFD5_TX),
++	PINMUX_IPSR_GPSR(IP0SR2_11_8,	CANFD5_TX_A),
+ 	PINMUX_IPSR_GPSR(IP0SR2_11_8,	IRQ5),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR2_15_12,	CLK_EXTFXR),
+-	PINMUX_IPSR_GPSR(IP0SR2_15_12,	CANFD5_RX),
++	PINMUX_IPSR_GPSR(IP0SR2_15_12,	CANFD5_RX_A),
+ 	PINMUX_IPSR_GPSR(IP0SR2_15_12,	IRQ4_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR2_19_16,	RXDB_EXTFXR),
+ 
+-	PINMUX_IPSR_GPSR(IP0SR2_23_20,	FXR_TXENB_N),
++	PINMUX_IPSR_GPSR(IP0SR2_23_20,	FXR_TXENB_N_A),
+ 
+ 	PINMUX_IPSR_GPSR(IP0SR2_27_24,	FXR_TXDB),
+ 
+-	PINMUX_IPSR_GPSR(IP0SR2_31_28,	TPU0TO1),
++	PINMUX_IPSR_GPSR(IP0SR2_31_28,	TPU0TO1_A),
+ 	PINMUX_IPSR_GPSR(IP0SR2_31_28,	CANFD6_TX),
+-	PINMUX_IPSR_GPSR(IP0SR2_31_28,	TCLK2_B),
++	PINMUX_IPSR_GPSR(IP0SR2_31_28,	TCLK2_C),
+ 
+ 	/* IP1SR2 */
+-	PINMUX_IPSR_GPSR(IP1SR2_3_0,	TPU0TO0),
++	PINMUX_IPSR_GPSR(IP1SR2_3_0,	TPU0TO0_A),
+ 	PINMUX_IPSR_GPSR(IP1SR2_3_0,	CANFD6_RX),
+-	PINMUX_IPSR_GPSR(IP1SR2_3_0,	TCLK1_A),
++	PINMUX_IPSR_GPSR(IP1SR2_3_0,	TCLK1_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR2_7_4,	CAN_CLK),
+-	PINMUX_IPSR_GPSR(IP1SR2_7_4,	FXR_TXENA_N_X),
++	PINMUX_IPSR_GPSR(IP1SR2_7_4,	FXR_TXENA_N_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR2_11_8,	CANFD0_TX),
+-	PINMUX_IPSR_GPSR(IP1SR2_11_8,	FXR_TXENB_N_X),
++	PINMUX_IPSR_GPSR(IP1SR2_11_8,	FXR_TXENB_N_B),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR2_15_12,	CANFD0_RX),
+ 	PINMUX_IPSR_GPSR(IP1SR2_15_12,	STPWT_EXTFXR),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR2_19_16,	CANFD2_TX),
+-	PINMUX_IPSR_GPSR(IP1SR2_19_16,	TPU0TO2),
+-	PINMUX_IPSR_GPSR(IP1SR2_19_16,	TCLK3_A),
++	PINMUX_IPSR_GPSR(IP1SR2_19_16,	TPU0TO2_A),
++	PINMUX_IPSR_GPSR(IP1SR2_19_16,	TCLK3_C),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR2_23_20,	CANFD2_RX),
+-	PINMUX_IPSR_GPSR(IP1SR2_23_20,	TPU0TO3),
++	PINMUX_IPSR_GPSR(IP1SR2_23_20,	TPU0TO3_A),
+ 	PINMUX_IPSR_GPSR(IP1SR2_23_20,	PWM1_B),
+-	PINMUX_IPSR_GPSR(IP1SR2_23_20,	TCLK4_A),
++	PINMUX_IPSR_GPSR(IP1SR2_23_20,	TCLK4_C),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR2_27_24,	CANFD3_TX),
+-	PINMUX_IPSR_GPSR(IP1SR2_27_24,	PWM2_B),
++	PINMUX_IPSR_GPSR(IP1SR2_27_24,	PWM2),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR2_31_28,	CANFD3_RX),
+ 	PINMUX_IPSR_GPSR(IP1SR2_31_28,	PWM3_B),
+@@ -979,12 +979,12 @@ static const u16 pinmux_data[] = {
+ 	PINMUX_IPSR_GPSR(IP1SR3_23_20,	IPC_CLKIN),
+ 	PINMUX_IPSR_GPSR(IP1SR3_23_20,	IPC_CLKEN_IN),
+ 	PINMUX_IPSR_GPSR(IP1SR3_23_20,	PWM1_A),
+-	PINMUX_IPSR_GPSR(IP1SR3_23_20,	TCLK3_X),
++	PINMUX_IPSR_GPSR(IP1SR3_23_20,	TCLK3_A),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR3_27_24,	IPC_CLKOUT),
+ 	PINMUX_IPSR_GPSR(IP1SR3_27_24,	IPC_CLKEN_OUT),
+ 	PINMUX_IPSR_GPSR(IP1SR3_27_24,	ERROROUTC_N_A),
+-	PINMUX_IPSR_GPSR(IP1SR3_27_24,	TCLK4_X),
++	PINMUX_IPSR_GPSR(IP1SR3_27_24,	TCLK4_A),
+ 
+ 	PINMUX_IPSR_GPSR(IP1SR3_31_28,	QSPI0_SSL),
+ 
+@@ -1531,15 +1531,14 @@ static const unsigned int canfd4_data_mux[] = {
+ };
+ 
+ /* - CANFD5 ----------------------------------------------------------------- */
+-static const unsigned int canfd5_data_pins[] = {
+-	/* CANFD5_TX, CANFD5_RX */
++static const unsigned int canfd5_data_a_pins[] = {
++	/* CANFD5_TX_A, CANFD5_RX_A */
+ 	RCAR_GP_PIN(2, 2), RCAR_GP_PIN(2, 3),
+ };
+-static const unsigned int canfd5_data_mux[] = {
+-	CANFD5_TX_MARK, CANFD5_RX_MARK,
++static const unsigned int canfd5_data_a_mux[] = {
++	CANFD5_TX_A_MARK, CANFD5_RX_A_MARK,
+ };
+ 
+-/* - CANFD5_B ----------------------------------------------------------------- */
+ static const unsigned int canfd5_data_b_pins[] = {
+ 	/* CANFD5_TX_B, CANFD5_RX_B */
+ 	RCAR_GP_PIN(1, 8), RCAR_GP_PIN(1, 9),
+@@ -1599,49 +1598,48 @@ static const unsigned int hscif0_ctrl_mux[] = {
+ };
+ 
+ /* - HSCIF1 ----------------------------------------------------------------- */
+-static const unsigned int hscif1_data_pins[] = {
+-	/* HRX1, HTX1 */
++static const unsigned int hscif1_data_a_pins[] = {
++	/* HRX1_A, HTX1_A */
+ 	RCAR_GP_PIN(0, 15), RCAR_GP_PIN(0, 14),
+ };
+-static const unsigned int hscif1_data_mux[] = {
+-	HRX1_MARK, HTX1_MARK,
++static const unsigned int hscif1_data_a_mux[] = {
++	HRX1_A_MARK, HTX1_A_MARK,
+ };
+-static const unsigned int hscif1_clk_pins[] = {
+-	/* HSCK1 */
++static const unsigned int hscif1_clk_a_pins[] = {
++	/* HSCK1_A */
+ 	RCAR_GP_PIN(0, 18),
+ };
+-static const unsigned int hscif1_clk_mux[] = {
+-	HSCK1_MARK,
++static const unsigned int hscif1_clk_a_mux[] = {
++	HSCK1_A_MARK,
+ };
+-static const unsigned int hscif1_ctrl_pins[] = {
+-	/* HRTS1_N, HCTS1_N */
++static const unsigned int hscif1_ctrl_a_pins[] = {
++	/* HRTS1_N_A, HCTS1_N_A */
+ 	RCAR_GP_PIN(0, 17), RCAR_GP_PIN(0, 16),
+ };
+-static const unsigned int hscif1_ctrl_mux[] = {
+-	HRTS1_N_MARK, HCTS1_N_MARK,
++static const unsigned int hscif1_ctrl_a_mux[] = {
++	HRTS1_N_A_MARK, HCTS1_N_A_MARK,
+ };
+ 
+-/* - HSCIF1_X---------------------------------------------------------------- */
+-static const unsigned int hscif1_data_x_pins[] = {
+-	/* HRX1_X, HTX1_X */
++static const unsigned int hscif1_data_b_pins[] = {
++	/* HRX1_B, HTX1_B */
+ 	RCAR_GP_PIN(1, 7), RCAR_GP_PIN(1, 6),
+ };
+-static const unsigned int hscif1_data_x_mux[] = {
+-	HRX1_X_MARK, HTX1_X_MARK,
++static const unsigned int hscif1_data_b_mux[] = {
++	HRX1_B_MARK, HTX1_B_MARK,
+ };
+-static const unsigned int hscif1_clk_x_pins[] = {
+-	/* HSCK1_X */
++static const unsigned int hscif1_clk_b_pins[] = {
++	/* HSCK1_B */
+ 	RCAR_GP_PIN(1, 10),
+ };
+-static const unsigned int hscif1_clk_x_mux[] = {
+-	HSCK1_X_MARK,
++static const unsigned int hscif1_clk_b_mux[] = {
++	HSCK1_B_MARK,
+ };
+-static const unsigned int hscif1_ctrl_x_pins[] = {
+-	/* HRTS1_N_X, HCTS1_N_X */
++static const unsigned int hscif1_ctrl_b_pins[] = {
++	/* HRTS1_N_B, HCTS1_N_B */
+ 	RCAR_GP_PIN(1, 9), RCAR_GP_PIN(1, 8),
+ };
+-static const unsigned int hscif1_ctrl_x_mux[] = {
+-	HRTS1_N_X_MARK, HCTS1_N_X_MARK,
++static const unsigned int hscif1_ctrl_b_mux[] = {
++	HRTS1_N_B_MARK, HCTS1_N_B_MARK,
+ };
+ 
+ /* - HSCIF2 ----------------------------------------------------------------- */
+@@ -1668,49 +1666,48 @@ static const unsigned int hscif2_ctrl_mux[] = {
+ };
+ 
+ /* - HSCIF3 ----------------------------------------------------------------- */
+-static const unsigned int hscif3_data_pins[] = {
+-	/* HRX3, HTX3 */
++static const unsigned int hscif3_data_a_pins[] = {
++	/* HRX3_A, HTX3_A */
+ 	RCAR_GP_PIN(1, 24), RCAR_GP_PIN(1, 28),
+ };
+-static const unsigned int hscif3_data_mux[] = {
+-	HRX3_MARK, HTX3_MARK,
++static const unsigned int hscif3_data_a_mux[] = {
++	HRX3_A_MARK, HTX3_A_MARK,
+ };
+-static const unsigned int hscif3_clk_pins[] = {
+-	/* HSCK3 */
++static const unsigned int hscif3_clk_a_pins[] = {
++	/* HSCK3_A */
+ 	RCAR_GP_PIN(1, 25),
+ };
+-static const unsigned int hscif3_clk_mux[] = {
+-	HSCK3_MARK,
++static const unsigned int hscif3_clk_a_mux[] = {
++	HSCK3_A_MARK,
+ };
+-static const unsigned int hscif3_ctrl_pins[] = {
+-	/* HRTS3_N, HCTS3_N */
++static const unsigned int hscif3_ctrl_a_pins[] = {
++	/* HRTS3_N_A, HCTS3_N_A */
+ 	RCAR_GP_PIN(1, 26), RCAR_GP_PIN(1, 27),
+ };
+-static const unsigned int hscif3_ctrl_mux[] = {
+-	HRTS3_N_MARK, HCTS3_N_MARK,
++static const unsigned int hscif3_ctrl_a_mux[] = {
++	HRTS3_N_A_MARK, HCTS3_N_A_MARK,
+ };
+ 
+-/* - HSCIF3_A ----------------------------------------------------------------- */
+-static const unsigned int hscif3_data_a_pins[] = {
+-	/* HRX3_A, HTX3_A */
++static const unsigned int hscif3_data_b_pins[] = {
++	/* HRX3_B, HTX3_B */
+ 	RCAR_GP_PIN(1, 4), RCAR_GP_PIN(1, 0),
+ };
+-static const unsigned int hscif3_data_a_mux[] = {
+-	HRX3_A_MARK, HTX3_A_MARK,
++static const unsigned int hscif3_data_b_mux[] = {
++	HRX3_B_MARK, HTX3_B_MARK,
+ };
+-static const unsigned int hscif3_clk_a_pins[] = {
+-	/* HSCK3_A */
++static const unsigned int hscif3_clk_b_pins[] = {
++	/* HSCK3_B */
+ 	RCAR_GP_PIN(1, 3),
+ };
+-static const unsigned int hscif3_clk_a_mux[] = {
+-	HSCK3_A_MARK,
++static const unsigned int hscif3_clk_b_mux[] = {
++	HSCK3_B_MARK,
+ };
+-static const unsigned int hscif3_ctrl_a_pins[] = {
+-	/* HRTS3_N_A, HCTS3_N_A */
++static const unsigned int hscif3_ctrl_b_pins[] = {
++	/* HRTS3_N_B, HCTS3_N_B */
+ 	RCAR_GP_PIN(1, 2), RCAR_GP_PIN(1, 1),
+ };
+-static const unsigned int hscif3_ctrl_a_mux[] = {
+-	HRTS3_N_A_MARK, HCTS3_N_A_MARK,
++static const unsigned int hscif3_ctrl_b_mux[] = {
++	HRTS3_N_B_MARK, HCTS3_N_B_MARK,
+ };
+ 
+ /* - I2C0 ------------------------------------------------------------------- */
+@@ -2093,13 +2090,13 @@ static const unsigned int pcie1_clkreq_n_mux[] = {
+ 	PCIE1_CLKREQ_N_MARK,
+ };
+ 
+-/* - PWM0_A ------------------------------------------------------------------- */
+-static const unsigned int pwm0_a_pins[] = {
+-	/* PWM0_A */
++/* - PWM0 ------------------------------------------------------------------- */
++static const unsigned int pwm0_pins[] = {
++	/* PWM0 */
+ 	RCAR_GP_PIN(1, 15),
+ };
+-static const unsigned int pwm0_a_mux[] = {
+-	PWM0_A_MARK,
++static const unsigned int pwm0_mux[] = {
++	PWM0_MARK,
+ };
+ 
+ /* - PWM1_A ------------------------------------------------------------------- */
+@@ -2120,13 +2117,13 @@ static const unsigned int pwm1_b_mux[] = {
+ 	PWM1_B_MARK,
+ };
+ 
+-/* - PWM2_B ------------------------------------------------------------------- */
+-static const unsigned int pwm2_b_pins[] = {
+-	/* PWM2_B */
++/* - PWM2 ------------------------------------------------------------------- */
++static const unsigned int pwm2_pins[] = {
++	/* PWM2 */
+ 	RCAR_GP_PIN(2, 14),
+ };
+-static const unsigned int pwm2_b_mux[] = {
+-	PWM2_B_MARK,
++static const unsigned int pwm2_mux[] = {
++	PWM2_MARK,
+ };
+ 
+ /* - PWM3_A ------------------------------------------------------------------- */
+@@ -2183,22 +2180,22 @@ static const unsigned int pwm7_mux[] = {
+ 	PWM7_MARK,
+ };
+ 
+-/* - PWM8_A ------------------------------------------------------------------- */
+-static const unsigned int pwm8_a_pins[] = {
+-	/* PWM8_A */
++/* - PWM8 ------------------------------------------------------------------- */
++static const unsigned int pwm8_pins[] = {
++	/* PWM8 */
+ 	RCAR_GP_PIN(1, 13),
+ };
+-static const unsigned int pwm8_a_mux[] = {
+-	PWM8_A_MARK,
++static const unsigned int pwm8_mux[] = {
++	PWM8_MARK,
+ };
+ 
+-/* - PWM9_A ------------------------------------------------------------------- */
+-static const unsigned int pwm9_a_pins[] = {
+-	/* PWM9_A */
++/* - PWM9 ------------------------------------------------------------------- */
++static const unsigned int pwm9_pins[] = {
++	/* PWM9 */
+ 	RCAR_GP_PIN(1, 14),
+ };
+-static const unsigned int pwm9_a_mux[] = {
+-	PWM9_A_MARK,
++static const unsigned int pwm9_mux[] = {
++	PWM9_MARK,
+ };
+ 
+ /* - QSPI0 ------------------------------------------------------------------ */
+@@ -2261,75 +2258,51 @@ static const unsigned int scif0_ctrl_mux[] = {
+ };
+ 
+ /* - SCIF1 ------------------------------------------------------------------ */
+-static const unsigned int scif1_data_pins[] = {
+-	/* RX1, TX1 */
++static const unsigned int scif1_data_a_pins[] = {
++	/* RX1_A, TX1_A */
+ 	RCAR_GP_PIN(0, 15), RCAR_GP_PIN(0, 14),
+ };
+-static const unsigned int scif1_data_mux[] = {
+-	RX1_MARK, TX1_MARK,
++static const unsigned int scif1_data_a_mux[] = {
++	RX1_A_MARK, TX1_A_MARK,
+ };
+-static const unsigned int scif1_clk_pins[] = {
+-	/* SCK1 */
++static const unsigned int scif1_clk_a_pins[] = {
++	/* SCK1_A */
+ 	RCAR_GP_PIN(0, 18),
+ };
+-static const unsigned int scif1_clk_mux[] = {
+-	SCK1_MARK,
++static const unsigned int scif1_clk_a_mux[] = {
++	SCK1_A_MARK,
+ };
+-static const unsigned int scif1_ctrl_pins[] = {
+-	/* RTS1_N, CTS1_N */
++static const unsigned int scif1_ctrl_a_pins[] = {
++	/* RTS1_N_A, CTS1_N_A */
+ 	RCAR_GP_PIN(0, 17), RCAR_GP_PIN(0, 16),
+ };
+-static const unsigned int scif1_ctrl_mux[] = {
+-	RTS1_N_MARK, CTS1_N_MARK,
++static const unsigned int scif1_ctrl_a_mux[] = {
++	RTS1_N_A_MARK, CTS1_N_A_MARK,
+ };
+ 
+-/* - SCIF1_X ------------------------------------------------------------------ */
+-static const unsigned int scif1_data_x_pins[] = {
+-	/* RX1_X, TX1_X */
++static const unsigned int scif1_data_b_pins[] = {
++	/* RX1_B, TX1_B */
+ 	RCAR_GP_PIN(1, 7), RCAR_GP_PIN(1, 6),
+ };
+-static const unsigned int scif1_data_x_mux[] = {
+-	RX1_X_MARK, TX1_X_MARK,
++static const unsigned int scif1_data_b_mux[] = {
++	RX1_B_MARK, TX1_B_MARK,
+ };
+-static const unsigned int scif1_clk_x_pins[] = {
+-	/* SCK1_X */
++static const unsigned int scif1_clk_b_pins[] = {
++	/* SCK1_B */
+ 	RCAR_GP_PIN(1, 10),
+ };
+-static const unsigned int scif1_clk_x_mux[] = {
+-	SCK1_X_MARK,
++static const unsigned int scif1_clk_b_mux[] = {
++	SCK1_B_MARK,
+ };
+-static const unsigned int scif1_ctrl_x_pins[] = {
+-	/* RTS1_N_X, CTS1_N_X */
++static const unsigned int scif1_ctrl_b_pins[] = {
++	/* RTS1_N_B, CTS1_N_B */
+ 	RCAR_GP_PIN(1, 9), RCAR_GP_PIN(1, 8),
+ };
+-static const unsigned int scif1_ctrl_x_mux[] = {
+-	RTS1_N_X_MARK, CTS1_N_X_MARK,
++static const unsigned int scif1_ctrl_b_mux[] = {
++	RTS1_N_B_MARK, CTS1_N_B_MARK,
+ };
+ 
+ /* - SCIF3 ------------------------------------------------------------------ */
+-static const unsigned int scif3_data_pins[] = {
+-	/* RX3, TX3 */
+-	RCAR_GP_PIN(1, 1), RCAR_GP_PIN(1, 0),
+-};
+-static const unsigned int scif3_data_mux[] = {
+-	RX3_MARK, TX3_MARK,
+-};
+-static const unsigned int scif3_clk_pins[] = {
+-	/* SCK3 */
+-	RCAR_GP_PIN(1, 4),
+-};
+-static const unsigned int scif3_clk_mux[] = {
+-	SCK3_MARK,
+-};
+-static const unsigned int scif3_ctrl_pins[] = {
+-	/* RTS3_N, CTS3_N */
+-	RCAR_GP_PIN(1, 2), RCAR_GP_PIN(1, 3),
+-};
+-static const unsigned int scif3_ctrl_mux[] = {
+-	RTS3_N_MARK, CTS3_N_MARK,
+-};
+-
+-/* - SCIF3_A ------------------------------------------------------------------ */
+ static const unsigned int scif3_data_a_pins[] = {
+ 	/* RX3_A, TX3_A */
+ 	RCAR_GP_PIN(1, 27), RCAR_GP_PIN(1, 28),
+@@ -2352,6 +2325,28 @@ static const unsigned int scif3_ctrl_a_mux[] = {
+ 	RTS3_N_A_MARK, CTS3_N_A_MARK,
+ };
+ 
++static const unsigned int scif3_data_b_pins[] = {
++	/* RX3_B, TX3_B */
++	RCAR_GP_PIN(1, 1), RCAR_GP_PIN(1, 0),
++};
++static const unsigned int scif3_data_b_mux[] = {
++	RX3_B_MARK, TX3_B_MARK,
++};
++static const unsigned int scif3_clk_b_pins[] = {
++	/* SCK3_B */
++	RCAR_GP_PIN(1, 4),
++};
++static const unsigned int scif3_clk_b_mux[] = {
++	SCK3_B_MARK,
++};
++static const unsigned int scif3_ctrl_b_pins[] = {
++	/* RTS3_N_B, CTS3_N_B */
++	RCAR_GP_PIN(1, 2), RCAR_GP_PIN(1, 3),
++};
++static const unsigned int scif3_ctrl_b_mux[] = {
++	RTS3_N_B_MARK, CTS3_N_B_MARK,
++};
++
+ /* - SCIF4 ------------------------------------------------------------------ */
+ static const unsigned int scif4_data_pins[] = {
+ 	/* RX4, TX4 */
+@@ -2408,64 +2403,63 @@ static const unsigned int ssi_ctrl_mux[] = {
+ 	SSI_SCK_MARK, SSI_WS_MARK,
+ };
+ 
+-/* - TPU ------------------------------------------------------------------- */
+-static const unsigned int tpu_to0_pins[] = {
+-	/* TPU0TO0 */
++/* - TPU -------------------------------------------------------------------- */
++static const unsigned int tpu_to0_a_pins[] = {
++	/* TPU0TO0_A */
+ 	RCAR_GP_PIN(2, 8),
+ };
+-static const unsigned int tpu_to0_mux[] = {
+-	TPU0TO0_MARK,
++static const unsigned int tpu_to0_a_mux[] = {
++	TPU0TO0_A_MARK,
+ };
+-static const unsigned int tpu_to1_pins[] = {
+-	/* TPU0TO1 */
++static const unsigned int tpu_to1_a_pins[] = {
++	/* TPU0TO1_A */
+ 	RCAR_GP_PIN(2, 7),
+ };
+-static const unsigned int tpu_to1_mux[] = {
+-	TPU0TO1_MARK,
++static const unsigned int tpu_to1_a_mux[] = {
++	TPU0TO1_A_MARK,
+ };
+-static const unsigned int tpu_to2_pins[] = {
+-	/* TPU0TO2 */
++static const unsigned int tpu_to2_a_pins[] = {
++	/* TPU0TO2_A */
+ 	RCAR_GP_PIN(2, 12),
+ };
+-static const unsigned int tpu_to2_mux[] = {
+-	TPU0TO2_MARK,
++static const unsigned int tpu_to2_a_mux[] = {
++	TPU0TO2_A_MARK,
+ };
+-static const unsigned int tpu_to3_pins[] = {
+-	/* TPU0TO3 */
++static const unsigned int tpu_to3_a_pins[] = {
++	/* TPU0TO3_A */
+ 	RCAR_GP_PIN(2, 13),
+ };
+-static const unsigned int tpu_to3_mux[] = {
+-	TPU0TO3_MARK,
++static const unsigned int tpu_to3_a_mux[] = {
++	TPU0TO3_A_MARK,
+ };
+ 
+-/* - TPU_A ------------------------------------------------------------------- */
+-static const unsigned int tpu_to0_a_pins[] = {
+-	/* TPU0TO0_A */
++static const unsigned int tpu_to0_b_pins[] = {
++	/* TPU0TO0_B */
+ 	RCAR_GP_PIN(1, 25),
+ };
+-static const unsigned int tpu_to0_a_mux[] = {
+-	TPU0TO0_A_MARK,
++static const unsigned int tpu_to0_b_mux[] = {
++	TPU0TO0_B_MARK,
+ };
+-static const unsigned int tpu_to1_a_pins[] = {
+-	/* TPU0TO1_A */
++static const unsigned int tpu_to1_b_pins[] = {
++	/* TPU0TO1_B */
+ 	RCAR_GP_PIN(1, 26),
+ };
+-static const unsigned int tpu_to1_a_mux[] = {
+-	TPU0TO1_A_MARK,
++static const unsigned int tpu_to1_b_mux[] = {
++	TPU0TO1_B_MARK,
+ };
+-static const unsigned int tpu_to2_a_pins[] = {
+-	/* TPU0TO2_A */
++static const unsigned int tpu_to2_b_pins[] = {
++	/* TPU0TO2_B */
+ 	RCAR_GP_PIN(2, 0),
+ };
+-static const unsigned int tpu_to2_a_mux[] = {
+-	TPU0TO2_A_MARK,
++static const unsigned int tpu_to2_b_mux[] = {
++	TPU0TO2_B_MARK,
+ };
+-static const unsigned int tpu_to3_a_pins[] = {
+-	/* TPU0TO3_A */
++static const unsigned int tpu_to3_b_pins[] = {
++	/* TPU0TO3_B */
+ 	RCAR_GP_PIN(2, 1),
+ };
+-static const unsigned int tpu_to3_a_mux[] = {
+-	TPU0TO3_A_MARK,
++static const unsigned int tpu_to3_b_mux[] = {
++	TPU0TO3_B_MARK,
+ };
+ 
+ /* - TSN0 ------------------------------------------------ */
+@@ -2578,8 +2572,8 @@ static const struct sh_pfc_pin_group pinmux_groups[] = {
+ 	SH_PFC_PIN_GROUP(canfd2_data),
+ 	SH_PFC_PIN_GROUP(canfd3_data),
+ 	SH_PFC_PIN_GROUP(canfd4_data),
+-	SH_PFC_PIN_GROUP(canfd5_data),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(canfd5_data_b),	/* suffix might be updated */
++	SH_PFC_PIN_GROUP(canfd5_data_a),
++	SH_PFC_PIN_GROUP(canfd5_data_b),
+ 	SH_PFC_PIN_GROUP(canfd6_data),
+ 	SH_PFC_PIN_GROUP(canfd7_data),
+ 	SH_PFC_PIN_GROUP(can_clk),
+@@ -2587,21 +2581,21 @@ static const struct sh_pfc_pin_group pinmux_groups[] = {
+ 	SH_PFC_PIN_GROUP(hscif0_data),
+ 	SH_PFC_PIN_GROUP(hscif0_clk),
+ 	SH_PFC_PIN_GROUP(hscif0_ctrl),
+-	SH_PFC_PIN_GROUP(hscif1_data),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(hscif1_clk),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(hscif1_ctrl),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(hscif1_data_x),	/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(hscif1_clk_x),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(hscif1_ctrl_x),	/* suffix might be updated */
++	SH_PFC_PIN_GROUP(hscif1_data_a),
++	SH_PFC_PIN_GROUP(hscif1_clk_a),
++	SH_PFC_PIN_GROUP(hscif1_ctrl_a),
++	SH_PFC_PIN_GROUP(hscif1_data_b),
++	SH_PFC_PIN_GROUP(hscif1_clk_b),
++	SH_PFC_PIN_GROUP(hscif1_ctrl_b),
+ 	SH_PFC_PIN_GROUP(hscif2_data),
+ 	SH_PFC_PIN_GROUP(hscif2_clk),
+ 	SH_PFC_PIN_GROUP(hscif2_ctrl),
+-	SH_PFC_PIN_GROUP(hscif3_data),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(hscif3_clk),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(hscif3_ctrl),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(hscif3_data_a),	/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(hscif3_clk_a),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(hscif3_ctrl_a),	/* suffix might be updated */
++	SH_PFC_PIN_GROUP(hscif3_data_a),
++	SH_PFC_PIN_GROUP(hscif3_clk_a),
++	SH_PFC_PIN_GROUP(hscif3_ctrl_a),
++	SH_PFC_PIN_GROUP(hscif3_data_b),
++	SH_PFC_PIN_GROUP(hscif3_clk_b),
++	SH_PFC_PIN_GROUP(hscif3_ctrl_b),
+ 
+ 	SH_PFC_PIN_GROUP(i2c0),
+ 	SH_PFC_PIN_GROUP(i2c1),
+@@ -2663,18 +2657,18 @@ static const struct sh_pfc_pin_group pinmux_groups[] = {
+ 	SH_PFC_PIN_GROUP(pcie0_clkreq_n),
+ 	SH_PFC_PIN_GROUP(pcie1_clkreq_n),
+ 
+-	SH_PFC_PIN_GROUP(pwm0_a),		/* suffix might be updated */
++	SH_PFC_PIN_GROUP(pwm0),
+ 	SH_PFC_PIN_GROUP(pwm1_a),
+ 	SH_PFC_PIN_GROUP(pwm1_b),
+-	SH_PFC_PIN_GROUP(pwm2_b),		/* suffix might be updated */
++	SH_PFC_PIN_GROUP(pwm2),
+ 	SH_PFC_PIN_GROUP(pwm3_a),
+ 	SH_PFC_PIN_GROUP(pwm3_b),
+ 	SH_PFC_PIN_GROUP(pwm4),
+ 	SH_PFC_PIN_GROUP(pwm5),
+ 	SH_PFC_PIN_GROUP(pwm6),
+ 	SH_PFC_PIN_GROUP(pwm7),
+-	SH_PFC_PIN_GROUP(pwm8_a),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(pwm9_a),		/* suffix might be updated */
++	SH_PFC_PIN_GROUP(pwm8),
++	SH_PFC_PIN_GROUP(pwm9),
+ 
+ 	SH_PFC_PIN_GROUP(qspi0_ctrl),
+ 	BUS_DATA_PIN_GROUP(qspi0_data, 2),
+@@ -2686,18 +2680,18 @@ static const struct sh_pfc_pin_group pinmux_groups[] = {
+ 	SH_PFC_PIN_GROUP(scif0_data),
+ 	SH_PFC_PIN_GROUP(scif0_clk),
+ 	SH_PFC_PIN_GROUP(scif0_ctrl),
+-	SH_PFC_PIN_GROUP(scif1_data),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(scif1_clk),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(scif1_ctrl),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(scif1_data_x),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(scif1_clk_x),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(scif1_ctrl_x),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(scif3_data),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(scif3_clk),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(scif3_ctrl),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(scif3_data_a),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(scif3_clk_a),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(scif3_ctrl_a),		/* suffix might be updated */
++	SH_PFC_PIN_GROUP(scif1_data_a),
++	SH_PFC_PIN_GROUP(scif1_clk_a),
++	SH_PFC_PIN_GROUP(scif1_ctrl_a),
++	SH_PFC_PIN_GROUP(scif1_data_b),
++	SH_PFC_PIN_GROUP(scif1_clk_b),
++	SH_PFC_PIN_GROUP(scif1_ctrl_b),
++	SH_PFC_PIN_GROUP(scif3_data_a),
++	SH_PFC_PIN_GROUP(scif3_clk_a),
++	SH_PFC_PIN_GROUP(scif3_ctrl_a),
++	SH_PFC_PIN_GROUP(scif3_data_b),
++	SH_PFC_PIN_GROUP(scif3_clk_b),
++	SH_PFC_PIN_GROUP(scif3_ctrl_b),
+ 	SH_PFC_PIN_GROUP(scif4_data),
+ 	SH_PFC_PIN_GROUP(scif4_clk),
+ 	SH_PFC_PIN_GROUP(scif4_ctrl),
+@@ -2707,14 +2701,14 @@ static const struct sh_pfc_pin_group pinmux_groups[] = {
+ 	SH_PFC_PIN_GROUP(ssi_data),
+ 	SH_PFC_PIN_GROUP(ssi_ctrl),
+ 
+-	SH_PFC_PIN_GROUP(tpu_to0),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(tpu_to0_a),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(tpu_to1),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(tpu_to1_a),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(tpu_to2),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(tpu_to2_a),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(tpu_to3),		/* suffix might be updated */
+-	SH_PFC_PIN_GROUP(tpu_to3_a),		/* suffix might be updated */
++	SH_PFC_PIN_GROUP(tpu_to0_a),
++	SH_PFC_PIN_GROUP(tpu_to0_b),
++	SH_PFC_PIN_GROUP(tpu_to1_a),
++	SH_PFC_PIN_GROUP(tpu_to1_b),
++	SH_PFC_PIN_GROUP(tpu_to2_a),
++	SH_PFC_PIN_GROUP(tpu_to2_b),
++	SH_PFC_PIN_GROUP(tpu_to3_a),
++	SH_PFC_PIN_GROUP(tpu_to3_b),
+ 
+ 	SH_PFC_PIN_GROUP(tsn0_link),
+ 	SH_PFC_PIN_GROUP(tsn0_phy_int),
+@@ -2788,8 +2782,7 @@ static const char * const canfd4_groups[] = {
+ };
+ 
+ static const char * const canfd5_groups[] = {
+-	/* suffix might be updated */
+-	"canfd5_data",
++	"canfd5_data_a",
+ 	"canfd5_data_b",
+ };
+ 
+@@ -2812,13 +2805,12 @@ static const char * const hscif0_groups[] = {
+ };
+ 
+ static const char * const hscif1_groups[] = {
+-	/* suffix might be updated */
+-	"hscif1_data",
+-	"hscif1_clk",
+-	"hscif1_ctrl",
+-	"hscif1_data_x",
+-	"hscif1_clk_x",
+-	"hscif1_ctrl_x",
++	"hscif1_data_a",
++	"hscif1_clk_a",
++	"hscif1_ctrl_a",
++	"hscif1_data_b",
++	"hscif1_clk_b",
++	"hscif1_ctrl_b",
+ };
+ 
+ static const char * const hscif2_groups[] = {
+@@ -2828,13 +2820,12 @@ static const char * const hscif2_groups[] = {
+ };
+ 
+ static const char * const hscif3_groups[] = {
+-	/* suffix might be updated */
+-	"hscif3_data",
+-	"hscif3_clk",
+-	"hscif3_ctrl",
+ 	"hscif3_data_a",
+ 	"hscif3_clk_a",
+ 	"hscif3_ctrl_a",
++	"hscif3_data_b",
++	"hscif3_clk_b",
++	"hscif3_ctrl_b",
+ };
+ 
+ static const char * const i2c0_groups[] = {
+@@ -2931,8 +2922,7 @@ static const char * const pcie_groups[] = {
+ };
+ 
+ static const char * const pwm0_groups[] = {
+-	/* suffix might be updated */
+-	"pwm0_a",
++	"pwm0",
+ };
+ 
+ static const char * const pwm1_groups[] = {
+@@ -2941,8 +2931,7 @@ static const char * const pwm1_groups[] = {
+ };
+ 
+ static const char * const pwm2_groups[] = {
+-	/* suffix might be updated */
+-	"pwm2_b",
++	"pwm2",
+ };
+ 
+ static const char * const pwm3_groups[] = {
+@@ -2967,13 +2956,11 @@ static const char * const pwm7_groups[] = {
+ };
+ 
+ static const char * const pwm8_groups[] = {
+-	/* suffix might be updated */
+-	"pwm8_a",
++	"pwm8",
+ };
+ 
+ static const char * const pwm9_groups[] = {
+-	/* suffix might be updated */
+-	"pwm9_a",
++	"pwm9",
+ };
+ 
+ static const char * const qspi0_groups[] = {
+@@ -2995,23 +2982,21 @@ static const char * const scif0_groups[] = {
+ };
+ 
+ static const char * const scif1_groups[] = {
+-	/* suffix might be updated */
+-	"scif1_data",
+-	"scif1_clk",
+-	"scif1_ctrl",
+-	"scif1_data_x",
+-	"scif1_clk_x",
+-	"scif1_ctrl_x",
++	"scif1_data_a",
++	"scif1_clk_a",
++	"scif1_ctrl_a",
++	"scif1_data_b",
++	"scif1_clk_b",
++	"scif1_ctrl_b",
+ };
+ 
+ static const char * const scif3_groups[] = {
+-	/* suffix might be updated */
+-	"scif3_data",
+-	"scif3_clk",
+-	"scif3_ctrl",
+ 	"scif3_data_a",
+ 	"scif3_clk_a",
+ 	"scif3_ctrl_a",
++	"scif3_data_b",
++	"scif3_clk_b",
++	"scif3_ctrl_b",
+ };
+ 
+ static const char * const scif4_groups[] = {
+@@ -3034,15 +3019,14 @@ static const char * const ssi_groups[] = {
+ };
+ 
+ static const char * const tpu_groups[] = {
+-	/* suffix might be updated */
+-	"tpu_to0",
+ 	"tpu_to0_a",
+-	"tpu_to1",
++	"tpu_to0_b",
+ 	"tpu_to1_a",
+-	"tpu_to2",
++	"tpu_to1_b",
+ 	"tpu_to2_a",
+-	"tpu_to3",
++	"tpu_to2_b",
+ 	"tpu_to3_a",
++	"tpu_to3_b",
+ };
+ 
+ static const char * const tsn0_groups[] = {
+diff --git a/drivers/pinctrl/ti/pinctrl-ti-iodelay.c b/drivers/pinctrl/ti/pinctrl-ti-iodelay.c
+index 040f2c46a868d..ef97586385019 100644
+--- a/drivers/pinctrl/ti/pinctrl-ti-iodelay.c
++++ b/drivers/pinctrl/ti/pinctrl-ti-iodelay.c
+@@ -876,7 +876,7 @@ static int ti_iodelay_probe(struct platform_device *pdev)
+ 	iod->desc.name = dev_name(dev);
+ 	iod->desc.owner = THIS_MODULE;
+ 
+-	ret = pinctrl_register_and_init(&iod->desc, dev, iod, &iod->pctl);
++	ret = devm_pinctrl_register_and_init(dev, &iod->desc, iod, &iod->pctl);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register pinctrl\n");
+ 		goto exit_out;
+@@ -884,7 +884,11 @@ static int ti_iodelay_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, iod);
+ 
+-	return pinctrl_enable(iod->pctl);
++	ret = pinctrl_enable(iod->pctl);
++	if (ret)
++		goto exit_out;
++
++	return 0;
+ 
+ exit_out:
+ 	of_node_put(np);
+@@ -899,9 +903,6 @@ static void ti_iodelay_remove(struct platform_device *pdev)
+ {
+ 	struct ti_iodelay_device *iod = platform_get_drvdata(pdev);
+ 
+-	if (iod->pctl)
+-		pinctrl_unregister(iod->pctl);
+-
+ 	ti_iodelay_pinconf_deinit_dev(iod);
+ 
+ 	/* Expect other allocations to be freed by devm */
+diff --git a/drivers/platform/Makefile b/drivers/platform/Makefile
+index fbbe4f77aa5d7..837202842a6f6 100644
+--- a/drivers/platform/Makefile
++++ b/drivers/platform/Makefile
+@@ -11,4 +11,4 @@ obj-$(CONFIG_OLPC_EC)		+= olpc/
+ obj-$(CONFIG_GOLDFISH)		+= goldfish/
+ obj-$(CONFIG_CHROME_PLATFORMS)	+= chrome/
+ obj-$(CONFIG_SURFACE_PLATFORMS)	+= surface/
+-obj-$(CONFIG_ARM64)		+= arm64/
++obj-$(CONFIG_ARM64_PLATFORM_DEVICES)	+= arm64/
+diff --git a/drivers/platform/chrome/cros_ec_debugfs.c b/drivers/platform/chrome/cros_ec_debugfs.c
+index e1d313246beb5..5996e9d53c387 100644
+--- a/drivers/platform/chrome/cros_ec_debugfs.c
++++ b/drivers/platform/chrome/cros_ec_debugfs.c
+@@ -330,6 +330,7 @@ static int ec_read_version_supported(struct cros_ec_dev *ec)
+ 	if (!msg)
+ 		return 0;
+ 
++	msg->version = 1;
+ 	msg->command = EC_CMD_GET_CMD_VERSIONS + ec->cmd_offset;
+ 	msg->outsize = sizeof(*params);
+ 	msg->insize = sizeof(*response);
+diff --git a/drivers/platform/mips/cpu_hwmon.c b/drivers/platform/mips/cpu_hwmon.c
+index d8c5f9195f85f..2ac2f31090f96 100644
+--- a/drivers/platform/mips/cpu_hwmon.c
++++ b/drivers/platform/mips/cpu_hwmon.c
+@@ -139,6 +139,9 @@ static int __init loongson_hwmon_init(void)
+ 		csr_temp_enable = csr_readl(LOONGSON_CSR_FEATURES) &
+ 				  LOONGSON_CSRF_TEMP;
+ 
++	if (!csr_temp_enable && !loongson_chiptemp[0])
++		return -ENODEV;
++
+ 	nr_packages = loongson_sysconf.nr_cpus /
+ 		loongson_sysconf.cores_per_package;
+ 
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index 3f9b6285c9a66..bc9c5db383244 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -879,10 +879,14 @@ static ssize_t kbd_rgb_mode_store(struct device *dev,
+ 				 struct device_attribute *attr,
+ 				 const char *buf, size_t count)
+ {
+-	struct asus_wmi *asus = dev_get_drvdata(dev);
+ 	u32 cmd, mode, r, g, b, speed;
++	struct led_classdev *led;
++	struct asus_wmi *asus;
+ 	int err;
+ 
++	led = dev_get_drvdata(dev);
++	asus = container_of(led, struct asus_wmi, kbd_led);
++
+ 	if (sscanf(buf, "%d %d %d %d %d %d", &cmd, &mode, &r, &g, &b, &speed) != 6)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/power/supply/ab8500_charger.c b/drivers/power/supply/ab8500_charger.c
+index 9b34d1a60f662..4b0ad1b4b4c9b 100644
+--- a/drivers/power/supply/ab8500_charger.c
++++ b/drivers/power/supply/ab8500_charger.c
+@@ -488,8 +488,10 @@ static int ab8500_charger_get_ac_voltage(struct ab8500_charger *di)
+ 	/* Only measure voltage if the charger is connected */
+ 	if (di->ac.charger_connected) {
+ 		ret = iio_read_channel_processed(di->adc_main_charger_v, &vch);
+-		if (ret < 0)
++		if (ret < 0) {
+ 			dev_err(di->dev, "%s ADC conv failed,\n", __func__);
++			return ret;
++		}
+ 	} else {
+ 		vch = 0;
+ 	}
+@@ -540,8 +542,10 @@ static int ab8500_charger_get_vbus_voltage(struct ab8500_charger *di)
+ 	/* Only measure voltage if the charger is connected */
+ 	if (di->usb.charger_connected) {
+ 		ret = iio_read_channel_processed(di->adc_vbus_v, &vch);
+-		if (ret < 0)
++		if (ret < 0) {
+ 			dev_err(di->dev, "%s ADC conv failed,\n", __func__);
++			return ret;
++		}
+ 	} else {
+ 		vch = 0;
+ 	}
+@@ -563,8 +567,10 @@ static int ab8500_charger_get_usb_current(struct ab8500_charger *di)
+ 	/* Only measure current if the charger is online */
+ 	if (di->usb.charger_online) {
+ 		ret = iio_read_channel_processed(di->adc_usb_charger_c, &ich);
+-		if (ret < 0)
++		if (ret < 0) {
+ 			dev_err(di->dev, "%s ADC conv failed,\n", __func__);
++			return ret;
++		}
+ 	} else {
+ 		ich = 0;
+ 	}
+@@ -586,8 +592,10 @@ static int ab8500_charger_get_ac_current(struct ab8500_charger *di)
+ 	/* Only measure current if the charger is online */
+ 	if (di->ac.charger_online) {
+ 		ret = iio_read_channel_processed(di->adc_main_charger_c, &ich);
+-		if (ret < 0)
++		if (ret < 0) {
+ 			dev_err(di->dev, "%s ADC conv failed,\n", __func__);
++			return ret;
++		}
+ 	} else {
+ 		ich = 0;
+ 	}
+diff --git a/drivers/power/supply/ingenic-battery.c b/drivers/power/supply/ingenic-battery.c
+index 2e7fdfde47ece..0a40f425c2772 100644
+--- a/drivers/power/supply/ingenic-battery.c
++++ b/drivers/power/supply/ingenic-battery.c
+@@ -31,8 +31,9 @@ static int ingenic_battery_get_property(struct power_supply *psy,
+ 
+ 	switch (psp) {
+ 	case POWER_SUPPLY_PROP_HEALTH:
+-		ret = iio_read_channel_processed(bat->channel, &val->intval);
+-		val->intval *= 1000;
++		ret = iio_read_channel_processed_scale(bat->channel,
++						       &val->intval,
++						       1000);
+ 		if (val->intval < info->voltage_min_design_uv)
+ 			val->intval = POWER_SUPPLY_HEALTH_DEAD;
+ 		else if (val->intval > info->voltage_max_design_uv)
+@@ -41,8 +42,9 @@ static int ingenic_battery_get_property(struct power_supply *psy,
+ 			val->intval = POWER_SUPPLY_HEALTH_GOOD;
+ 		return ret;
+ 	case POWER_SUPPLY_PROP_VOLTAGE_NOW:
+-		ret = iio_read_channel_processed(bat->channel, &val->intval);
+-		val->intval *= 1000;
++		ret = iio_read_channel_processed_scale(bat->channel,
++						       &val->intval,
++						       1000);
+ 		return ret;
+ 	case POWER_SUPPLY_PROP_VOLTAGE_MIN_DESIGN:
+ 		val->intval = info->voltage_min_design_uv;
+diff --git a/drivers/pwm/pwm-atmel-tcb.c b/drivers/pwm/pwm-atmel-tcb.c
+index 528e54c5999d8..aca11493239a5 100644
+--- a/drivers/pwm/pwm-atmel-tcb.c
++++ b/drivers/pwm/pwm-atmel-tcb.c
+@@ -81,7 +81,8 @@ static int atmel_tcb_pwm_request(struct pwm_chip *chip,
+ 	tcbpwm->period = 0;
+ 	tcbpwm->div = 0;
+ 
+-	spin_lock(&tcbpwmc->lock);
++	guard(spinlock)(&tcbpwmc->lock);
++
+ 	regmap_read(tcbpwmc->regmap, ATMEL_TC_REG(tcbpwmc->channel, CMR), &cmr);
+ 	/*
+ 	 * Get init config from Timer Counter registers if
+@@ -107,7 +108,6 @@ static int atmel_tcb_pwm_request(struct pwm_chip *chip,
+ 
+ 	cmr |= ATMEL_TC_WAVE | ATMEL_TC_WAVESEL_UP_AUTO | ATMEL_TC_EEVT_XC0;
+ 	regmap_write(tcbpwmc->regmap, ATMEL_TC_REG(tcbpwmc->channel, CMR), cmr);
+-	spin_unlock(&tcbpwmc->lock);
+ 
+ 	return 0;
+ }
+@@ -137,7 +137,6 @@ static void atmel_tcb_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	if (tcbpwm->duty == 0)
+ 		polarity = !polarity;
+ 
+-	spin_lock(&tcbpwmc->lock);
+ 	regmap_read(tcbpwmc->regmap, ATMEL_TC_REG(tcbpwmc->channel, CMR), &cmr);
+ 
+ 	/* flush old setting and set the new one */
+@@ -172,8 +171,6 @@ static void atmel_tcb_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm,
+ 			     ATMEL_TC_SWTRG);
+ 		tcbpwmc->bkup.enabled = 0;
+ 	}
+-
+-	spin_unlock(&tcbpwmc->lock);
+ }
+ 
+ static int atmel_tcb_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm,
+@@ -194,7 +191,6 @@ static int atmel_tcb_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	if (tcbpwm->duty == 0)
+ 		polarity = !polarity;
+ 
+-	spin_lock(&tcbpwmc->lock);
+ 	regmap_read(tcbpwmc->regmap, ATMEL_TC_REG(tcbpwmc->channel, CMR), &cmr);
+ 
+ 	/* flush old setting and set the new one */
+@@ -256,7 +252,6 @@ static int atmel_tcb_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	regmap_write(tcbpwmc->regmap, ATMEL_TC_REG(tcbpwmc->channel, CCR),
+ 		     ATMEL_TC_SWTRG | ATMEL_TC_CLKEN);
+ 	tcbpwmc->bkup.enabled = 1;
+-	spin_unlock(&tcbpwmc->lock);
+ 	return 0;
+ }
+ 
+@@ -341,9 +336,12 @@ static int atmel_tcb_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ static int atmel_tcb_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 			       const struct pwm_state *state)
+ {
++	struct atmel_tcb_pwm_chip *tcbpwmc = to_tcb_chip(chip);
+ 	int duty_cycle, period;
+ 	int ret;
+ 
++	guard(spinlock)(&tcbpwmc->lock);
++
+ 	if (!state->enabled) {
+ 		atmel_tcb_pwm_disable(chip, pwm, state->polarity);
+ 		return 0;
+diff --git a/drivers/pwm/pwm-stm32.c b/drivers/pwm/pwm-stm32.c
+index 8bae3fd2b3306..c586029caf233 100644
+--- a/drivers/pwm/pwm-stm32.c
++++ b/drivers/pwm/pwm-stm32.c
+@@ -452,8 +452,9 @@ static int stm32_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 
+ 	enabled = pwm->state.enabled;
+ 
+-	if (enabled && !state->enabled) {
+-		stm32_pwm_disable(priv, pwm->hwpwm);
++	if (!state->enabled) {
++		if (enabled)
++			stm32_pwm_disable(priv, pwm->hwpwm);
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
+index 5a3fb902acc9f..144c8e9a642e8 100644
+--- a/drivers/remoteproc/imx_rproc.c
++++ b/drivers/remoteproc/imx_rproc.c
+@@ -726,31 +726,37 @@ static int imx_rproc_addr_init(struct imx_rproc *priv,
+ 		struct resource res;
+ 
+ 		node = of_parse_phandle(np, "memory-region", a);
++		if (!node)
++			continue;
+ 		/* Not map vdevbuffer, vdevring region */
+ 		if (!strncmp(node->name, "vdev", strlen("vdev"))) {
+ 			of_node_put(node);
+ 			continue;
+ 		}
+ 		err = of_address_to_resource(node, 0, &res);
+-		of_node_put(node);
+ 		if (err) {
+ 			dev_err(dev, "unable to resolve memory region\n");
++			of_node_put(node);
+ 			return err;
+ 		}
+ 
+-		if (b >= IMX_RPROC_MEM_MAX)
++		if (b >= IMX_RPROC_MEM_MAX) {
++			of_node_put(node);
+ 			break;
++		}
+ 
+ 		/* Not use resource version, because we might share region */
+ 		priv->mem[b].cpu_addr = devm_ioremap_wc(&pdev->dev, res.start, resource_size(&res));
+ 		if (!priv->mem[b].cpu_addr) {
+ 			dev_err(dev, "failed to remap %pr\n", &res);
++			of_node_put(node);
+ 			return -ENOMEM;
+ 		}
+ 		priv->mem[b].sys_addr = res.start;
+ 		priv->mem[b].size = resource_size(&res);
+ 		if (!strcmp(node->name, "rsc-table"))
+ 			priv->rsc_table = priv->mem[b].cpu_addr;
++		of_node_put(node);
+ 		b++;
+ 	}
+ 
+diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c
+index b8498772dba17..abf7b371b8604 100644
+--- a/drivers/remoteproc/mtk_scp.c
++++ b/drivers/remoteproc/mtk_scp.c
+@@ -1344,14 +1344,12 @@ static int scp_probe(struct platform_device *pdev)
+ 
+ 	/* l1tcm is an optional memory region */
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "l1tcm");
+-	scp_cluster->l1tcm_base = devm_ioremap_resource(dev, res);
+-	if (IS_ERR(scp_cluster->l1tcm_base)) {
+-		ret = PTR_ERR(scp_cluster->l1tcm_base);
+-		if (ret != -EINVAL)
+-			return dev_err_probe(dev, ret, "Failed to map l1tcm memory\n");
++	if (res) {
++		scp_cluster->l1tcm_base = devm_ioremap_resource(dev, res);
++		if (IS_ERR(scp_cluster->l1tcm_base))
++			return dev_err_probe(dev, PTR_ERR(scp_cluster->l1tcm_base),
++					     "Failed to map l1tcm memory\n");
+ 
+-		scp_cluster->l1tcm_base = NULL;
+-	} else {
+ 		scp_cluster->l1tcm_size = resource_size(res);
+ 		scp_cluster->l1tcm_phys = res->start;
+ 	}
+@@ -1390,7 +1388,7 @@ static const struct mtk_scp_sizes_data default_scp_sizes = {
+ };
+ 
+ static const struct mtk_scp_sizes_data mt8188_scp_sizes = {
+-	.max_dram_size = 0x500000,
++	.max_dram_size = 0x800000,
+ 	.ipi_share_buffer_size = 600,
+ };
+ 
+@@ -1399,6 +1397,11 @@ static const struct mtk_scp_sizes_data mt8188_scp_c1_sizes = {
+ 	.ipi_share_buffer_size = 600,
+ };
+ 
++static const struct mtk_scp_sizes_data mt8195_scp_sizes = {
++	.max_dram_size = 0x800000,
++	.ipi_share_buffer_size = 288,
++};
++
+ static const struct mtk_scp_of_data mt8183_of_data = {
+ 	.scp_clk_get = mt8183_scp_clk_get,
+ 	.scp_before_load = mt8183_scp_before_load,
+@@ -1476,7 +1479,7 @@ static const struct mtk_scp_of_data mt8195_of_data = {
+ 	.scp_da_to_va = mt8192_scp_da_to_va,
+ 	.host_to_scp_reg = MT8192_GIPC_IN_SET,
+ 	.host_to_scp_int_bit = MT8192_HOST_IPC_INT_BIT,
+-	.scp_sizes = &default_scp_sizes,
++	.scp_sizes = &mt8195_scp_sizes,
+ };
+ 
+ static const struct mtk_scp_of_data mt8195_of_data_c1 = {
+diff --git a/drivers/remoteproc/stm32_rproc.c b/drivers/remoteproc/stm32_rproc.c
+index 88623df7d0c35..8c7f7950b80ee 100644
+--- a/drivers/remoteproc/stm32_rproc.c
++++ b/drivers/remoteproc/stm32_rproc.c
+@@ -294,7 +294,7 @@ static void stm32_rproc_mb_vq_work(struct work_struct *work)
+ 
+ 	mutex_lock(&rproc->lock);
+ 
+-	if (rproc->state != RPROC_RUNNING)
++	if (rproc->state != RPROC_RUNNING && rproc->state != RPROC_ATTACHED)
+ 		goto unlock_mutex;
+ 
+ 	if (rproc_vq_interrupt(rproc, mb->vq_id) == IRQ_NONE)
+diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+index 50e486bcfa103..39a47540c5900 100644
+--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+@@ -1144,6 +1144,7 @@ static int k3_r5_rproc_configure_mode(struct k3_r5_rproc *kproc)
+ 	u32 atcm_enable, btcm_enable, loczrama;
+ 	struct k3_r5_core *core0;
+ 	enum cluster_mode mode = cluster->mode;
++	int reset_ctrl_status;
+ 	int ret;
+ 
+ 	core0 = list_first_entry(&cluster->cores, struct k3_r5_core, elem);
+@@ -1160,11 +1161,11 @@ static int k3_r5_rproc_configure_mode(struct k3_r5_rproc *kproc)
+ 			 r_state, c_state);
+ 	}
+ 
+-	ret = reset_control_status(core->reset);
+-	if (ret < 0) {
++	reset_ctrl_status = reset_control_status(core->reset);
++	if (reset_ctrl_status < 0) {
+ 		dev_err(cdev, "failed to get initial local reset status, ret = %d\n",
+-			ret);
+-		return ret;
++			reset_ctrl_status);
++		return reset_ctrl_status;
+ 	}
+ 
+ 	/*
+@@ -1199,7 +1200,7 @@ static int k3_r5_rproc_configure_mode(struct k3_r5_rproc *kproc)
+ 	 * irrelevant if module reset is asserted (POR value has local reset
+ 	 * deasserted), and is deemed as remoteproc mode
+ 	 */
+-	if (c_state && !ret && !halted) {
++	if (c_state && !reset_ctrl_status && !halted) {
+ 		dev_info(cdev, "configured R5F for IPC-only mode\n");
+ 		kproc->rproc->state = RPROC_DETACHED;
+ 		ret = 1;
+@@ -1217,7 +1218,7 @@ static int k3_r5_rproc_configure_mode(struct k3_r5_rproc *kproc)
+ 		ret = 0;
+ 	} else {
+ 		dev_err(cdev, "mismatched mode: local_reset = %s, module_reset = %s, core_state = %s\n",
+-			!ret ? "deasserted" : "asserted",
++			!reset_ctrl_status ? "deasserted" : "asserted",
+ 			c_state ? "deasserted" : "asserted",
+ 			halted ? "halted" : "unhalted");
+ 		ret = -EINVAL;
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index 5faafb4aa55cc..cca650b2e0b94 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -274,10 +274,9 @@ int __rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
+ 			return err;
+ 
+ 		/* full-function RTCs won't have such missing fields */
+-		if (rtc_valid_tm(&alarm->time) == 0) {
+-			rtc_add_offset(rtc, &alarm->time);
+-			return 0;
+-		}
++		err = rtc_valid_tm(&alarm->time);
++		if (!err)
++			goto done;
+ 
+ 		/* get the "after" timestamp, to detect wrapped fields */
+ 		err = rtc_read_time(rtc, &now);
+@@ -379,6 +378,8 @@ int __rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
+ 	if (err && alarm->enabled)
+ 		dev_warn(&rtc->dev, "invalid alarm value: %ptR\n",
+ 			 &alarm->time);
++	else
++		rtc_add_offset(rtc, &alarm->time);
+ 
+ 	return err;
+ }
+diff --git a/drivers/rtc/rtc-abx80x.c b/drivers/rtc/rtc-abx80x.c
+index fde2b8054c2ea..1298962402ff4 100644
+--- a/drivers/rtc/rtc-abx80x.c
++++ b/drivers/rtc/rtc-abx80x.c
+@@ -705,14 +705,18 @@ static int abx80x_nvmem_xfer(struct abx80x_priv *priv, unsigned int offset,
+ 		if (ret)
+ 			return ret;
+ 
+-		if (write)
++		if (write) {
+ 			ret = i2c_smbus_write_i2c_block_data(priv->client, reg,
+ 							     len, val);
+-		else
++			if (ret)
++				return ret;
++		} else {
+ 			ret = i2c_smbus_read_i2c_block_data(priv->client, reg,
+ 							    len, val);
+-		if (ret)
+-			return ret;
++			if (ret <= 0)
++				return ret ? ret : -EIO;
++			len = ret;
++		}
+ 
+ 		offset += len;
+ 		val += len;
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index 7d99cd2c37a0b..35dca2accbb8d 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -643,11 +643,10 @@ static int cmos_nvram_read(void *priv, unsigned int off, void *val,
+ 			   size_t count)
+ {
+ 	unsigned char *buf = val;
+-	int	retval;
+ 
+ 	off += NVRAM_OFFSET;
+ 	spin_lock_irq(&rtc_lock);
+-	for (retval = 0; count; count--, off++, retval++) {
++	for (; count; count--, off++) {
+ 		if (off < 128)
+ 			*buf++ = CMOS_READ(off);
+ 		else if (can_bank2)
+@@ -657,7 +656,7 @@ static int cmos_nvram_read(void *priv, unsigned int off, void *val,
+ 	}
+ 	spin_unlock_irq(&rtc_lock);
+ 
+-	return retval;
++	return count ? -EIO : 0;
+ }
+ 
+ static int cmos_nvram_write(void *priv, unsigned int off, void *val,
+@@ -665,7 +664,6 @@ static int cmos_nvram_write(void *priv, unsigned int off, void *val,
+ {
+ 	struct cmos_rtc	*cmos = priv;
+ 	unsigned char	*buf = val;
+-	int		retval;
+ 
+ 	/* NOTE:  on at least PCs and Ataris, the boot firmware uses a
+ 	 * checksum on part of the NVRAM data.  That's currently ignored
+@@ -674,7 +672,7 @@ static int cmos_nvram_write(void *priv, unsigned int off, void *val,
+ 	 */
+ 	off += NVRAM_OFFSET;
+ 	spin_lock_irq(&rtc_lock);
+-	for (retval = 0; count; count--, off++, retval++) {
++	for (; count; count--, off++) {
+ 		/* don't trash RTC registers */
+ 		if (off == cmos->day_alrm
+ 				|| off == cmos->mon_alrm
+@@ -689,7 +687,7 @@ static int cmos_nvram_write(void *priv, unsigned int off, void *val,
+ 	}
+ 	spin_unlock_irq(&rtc_lock);
+ 
+-	return retval;
++	return count ? -EIO : 0;
+ }
+ 
+ /*----------------------------------------------------------------*/
+diff --git a/drivers/rtc/rtc-isl1208.c b/drivers/rtc/rtc-isl1208.c
+index e50c23ee1646a..206f96b90f58b 100644
+--- a/drivers/rtc/rtc-isl1208.c
++++ b/drivers/rtc/rtc-isl1208.c
+@@ -775,14 +775,13 @@ static int isl1208_nvmem_read(void *priv, unsigned int off, void *buf,
+ {
+ 	struct isl1208_state *isl1208 = priv;
+ 	struct i2c_client *client = to_i2c_client(isl1208->rtc->dev.parent);
+-	int ret;
+ 
+ 	/* nvmem sanitizes offset/count for us, but count==0 is possible */
+ 	if (!count)
+ 		return count;
+-	ret = isl1208_i2c_read_regs(client, ISL1208_REG_USR1 + off, buf,
++
++	return isl1208_i2c_read_regs(client, ISL1208_REG_USR1 + off, buf,
+ 				    count);
+-	return ret == 0 ? count : ret;
+ }
+ 
+ static int isl1208_nvmem_write(void *priv, unsigned int off, void *buf,
+@@ -790,15 +789,13 @@ static int isl1208_nvmem_write(void *priv, unsigned int off, void *buf,
+ {
+ 	struct isl1208_state *isl1208 = priv;
+ 	struct i2c_client *client = to_i2c_client(isl1208->rtc->dev.parent);
+-	int ret;
+ 
+ 	/* nvmem sanitizes off/count for us, but count==0 is possible */
+ 	if (!count)
+ 		return count;
+-	ret = isl1208_i2c_set_regs(client, ISL1208_REG_USR1 + off, buf,
+-				   count);
+ 
+-	return ret == 0 ? count : ret;
++	return isl1208_i2c_set_regs(client, ISL1208_REG_USR1 + off, buf,
++				   count);
+ }
+ 
+ static const struct nvmem_config isl1208_nvmem_config = {
+diff --git a/drivers/rtc/rtc-tps6594.c b/drivers/rtc/rtc-tps6594.c
+index 838ae8562a351..bc8dc735aa238 100644
+--- a/drivers/rtc/rtc-tps6594.c
++++ b/drivers/rtc/rtc-tps6594.c
+@@ -360,10 +360,6 @@ static int tps6594_rtc_probe(struct platform_device *pdev)
+ 	int irq;
+ 	int ret;
+ 
+-	rtc = devm_kzalloc(dev, sizeof(*rtc), GFP_KERNEL);
+-	if (!rtc)
+-		return -ENOMEM;
+-
+ 	rtc = devm_rtc_allocate_device(dev);
+ 	if (IS_ERR(rtc))
+ 		return PTR_ERR(rtc);
+diff --git a/drivers/s390/block/dasd_devmap.c b/drivers/s390/block/dasd_devmap.c
+index 0316c20823eec..6adaeb985dde1 100644
+--- a/drivers/s390/block/dasd_devmap.c
++++ b/drivers/s390/block/dasd_devmap.c
+@@ -2248,13 +2248,19 @@ static ssize_t dasd_copy_pair_store(struct device *dev,
+ 
+ 	/* allocate primary devmap if needed */
+ 	prim_devmap = dasd_find_busid(prim_busid);
+-	if (IS_ERR(prim_devmap))
++	if (IS_ERR(prim_devmap)) {
+ 		prim_devmap = dasd_add_busid(prim_busid, DASD_FEATURE_DEFAULT);
++		if (IS_ERR(prim_devmap))
++			return PTR_ERR(prim_devmap);
++	}
+ 
+ 	/* allocate secondary devmap if needed */
+ 	sec_devmap = dasd_find_busid(sec_busid);
+-	if (IS_ERR(sec_devmap))
++	if (IS_ERR(sec_devmap)) {
+ 		sec_devmap = dasd_add_busid(sec_busid, DASD_FEATURE_DEFAULT);
++		if (IS_ERR(sec_devmap))
++			return PTR_ERR(sec_devmap);
++	}
+ 
+ 	/* setting copy relation is only allowed for offline secondary */
+ 	if (sec_devmap->device)
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index a46c73e8d7c40..0a9d6978cb0c3 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -1907,6 +1907,11 @@ lpfc_xcvr_data_show(struct device *dev, struct device_attribute *attr,
+ 
+ 	/* Get transceiver information */
+ 	rdp_context = kmalloc(sizeof(*rdp_context), GFP_KERNEL);
++	if (!rdp_context) {
++		len = scnprintf(buf, PAGE_SIZE - len,
++				"SFP info NA: alloc failure\n");
++		return len;
++	}
+ 
+ 	rc = lpfc_get_sfp_info_wait(phba, rdp_context);
+ 	if (rc) {
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 153770bdc56ab..13b08c85440fe 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -5725,7 +5725,7 @@ lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did)
+ 				return ndlp;
+ 
+ 			if (ndlp->nlp_state > NLP_STE_UNUSED_NODE &&
+-			    ndlp->nlp_state < NLP_STE_PRLI_ISSUE) {
++			    ndlp->nlp_state <= NLP_STE_PRLI_ISSUE) {
+ 				lpfc_disc_state_machine(vport, ndlp, NULL,
+ 							NLP_EVT_DEVICE_RECOVERY);
+ 			}
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index f475e7ece41a4..3e55d5edd60ab 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -10579,10 +10579,11 @@ lpfc_prep_embed_io(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd)
+ {
+ 	struct lpfc_iocbq *piocb = &lpfc_cmd->cur_iocbq;
+ 	union lpfc_wqe128 *wqe = &lpfc_cmd->cur_iocbq.wqe;
+-	struct sli4_sge *sgl;
++	struct sli4_sge_le *sgl;
++	u32 type_size;
+ 
+ 	/* 128 byte wqe support here */
+-	sgl = (struct sli4_sge *)lpfc_cmd->dma_sgl;
++	sgl = (struct sli4_sge_le *)lpfc_cmd->dma_sgl;
+ 
+ 	if (phba->fcp_embed_io) {
+ 		struct fcp_cmnd *fcp_cmnd;
+@@ -10591,9 +10592,9 @@ lpfc_prep_embed_io(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd)
+ 		fcp_cmnd = lpfc_cmd->fcp_cmnd;
+ 
+ 		/* Word 0-2 - FCP_CMND */
+-		wqe->generic.bde.tus.f.bdeFlags =
+-			BUFF_TYPE_BDE_IMMED;
+-		wqe->generic.bde.tus.f.bdeSize = sgl->sge_len;
++		type_size = le32_to_cpu(sgl->sge_len);
++		type_size |= ULP_BDE64_TYPE_BDE_IMMED;
++		wqe->generic.bde.tus.w = type_size;
+ 		wqe->generic.bde.addrHigh = 0;
+ 		wqe->generic.bde.addrLow =  72;  /* Word 18 */
+ 
+@@ -10602,13 +10603,13 @@ lpfc_prep_embed_io(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd)
+ 
+ 		/* Word 18-29  FCP CMND Payload */
+ 		ptr = &wqe->words[18];
+-		memcpy(ptr, fcp_cmnd, sgl->sge_len);
++		lpfc_sli_pcimem_bcopy(fcp_cmnd, ptr, le32_to_cpu(sgl->sge_len));
+ 	} else {
+ 		/* Word 0-2 - Inline BDE */
+ 		wqe->generic.bde.tus.f.bdeFlags =  BUFF_TYPE_BDE_64;
+-		wqe->generic.bde.tus.f.bdeSize = sgl->sge_len;
+-		wqe->generic.bde.addrHigh = sgl->addr_hi;
+-		wqe->generic.bde.addrLow =  sgl->addr_lo;
++		wqe->generic.bde.tus.f.bdeSize = le32_to_cpu(sgl->sge_len);
++		wqe->generic.bde.addrHigh = le32_to_cpu(sgl->addr_hi);
++		wqe->generic.bde.addrLow = le32_to_cpu(sgl->addr_lo);
+ 
+ 		/* Word 10 */
+ 		bf_set(wqe_dbde, &wqe->generic.wqe_com, 1);
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index 19bb64bdd88b1..52dc9604f5674 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -324,7 +324,7 @@ qla2x00_process_els(struct bsg_job *bsg_job)
+ 		    "request_sg_cnt=%x reply_sg_cnt=%x.\n",
+ 		    bsg_job->request_payload.sg_cnt,
+ 		    bsg_job->reply_payload.sg_cnt);
+-		rval = -EPERM;
++		rval = -ENOBUFS;
+ 		goto done;
+ 	}
+ 
+@@ -3059,17 +3059,61 @@ qla24xx_bsg_request(struct bsg_job *bsg_job)
+ 	return ret;
+ }
+ 
+-int
+-qla24xx_bsg_timeout(struct bsg_job *bsg_job)
++static bool qla_bsg_found(struct qla_qpair *qpair, struct bsg_job *bsg_job)
+ {
++	bool found = false;
+ 	struct fc_bsg_reply *bsg_reply = bsg_job->reply;
+ 	scsi_qla_host_t *vha = shost_priv(fc_bsg_to_shost(bsg_job));
+ 	struct qla_hw_data *ha = vha->hw;
+-	srb_t *sp;
+-	int cnt, que;
++	srb_t *sp = NULL;
++	int cnt;
+ 	unsigned long flags;
+ 	struct req_que *req;
+ 
++	spin_lock_irqsave(qpair->qp_lock_ptr, flags);
++	req = qpair->req;
++
++	for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) {
++		sp = req->outstanding_cmds[cnt];
++		if (sp &&
++		    (sp->type == SRB_CT_CMD ||
++		     sp->type == SRB_ELS_CMD_HST ||
++		     sp->type == SRB_ELS_CMD_HST_NOLOGIN) &&
++		    sp->u.bsg_job == bsg_job) {
++			req->outstanding_cmds[cnt] = NULL;
++			spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
++
++			if (!ha->flags.eeh_busy && ha->isp_ops->abort_command(sp)) {
++				ql_log(ql_log_warn, vha, 0x7089,
++						"mbx abort_command failed.\n");
++				bsg_reply->result = -EIO;
++			} else {
++				ql_dbg(ql_dbg_user, vha, 0x708a,
++						"mbx abort_command success.\n");
++				bsg_reply->result = 0;
++			}
++			/* ref: INIT */
++			kref_put(&sp->cmd_kref, qla2x00_sp_release);
++
++			found = true;
++			goto done;
++		}
++	}
++	spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
++
++done:
++	return found;
++}
++
++int
++qla24xx_bsg_timeout(struct bsg_job *bsg_job)
++{
++	struct fc_bsg_reply *bsg_reply = bsg_job->reply;
++	scsi_qla_host_t *vha = shost_priv(fc_bsg_to_shost(bsg_job));
++	struct qla_hw_data *ha = vha->hw;
++	int i;
++	struct qla_qpair *qpair;
++
+ 	ql_log(ql_log_info, vha, 0x708b, "%s CMD timeout. bsg ptr %p.\n",
+ 	    __func__, bsg_job);
+ 
+@@ -3079,48 +3123,22 @@ qla24xx_bsg_timeout(struct bsg_job *bsg_job)
+ 		qla_pci_set_eeh_busy(vha);
+ 	}
+ 
++	if (qla_bsg_found(ha->base_qpair, bsg_job))
++		goto done;
++
+ 	/* find the bsg job from the active list of commands */
+-	spin_lock_irqsave(&ha->hardware_lock, flags);
+-	for (que = 0; que < ha->max_req_queues; que++) {
+-		req = ha->req_q_map[que];
+-		if (!req)
++	for (i = 0; i < ha->max_qpairs; i++) {
++		qpair = vha->hw->queue_pair_map[i];
++		if (!qpair)
+ 			continue;
+-
+-		for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) {
+-			sp = req->outstanding_cmds[cnt];
+-			if (sp &&
+-			    (sp->type == SRB_CT_CMD ||
+-			     sp->type == SRB_ELS_CMD_HST ||
+-			     sp->type == SRB_ELS_CMD_HST_NOLOGIN ||
+-			     sp->type == SRB_FXIOCB_BCMD) &&
+-			    sp->u.bsg_job == bsg_job) {
+-				req->outstanding_cmds[cnt] = NULL;
+-				spin_unlock_irqrestore(&ha->hardware_lock, flags);
+-
+-				if (!ha->flags.eeh_busy && ha->isp_ops->abort_command(sp)) {
+-					ql_log(ql_log_warn, vha, 0x7089,
+-					    "mbx abort_command failed.\n");
+-					bsg_reply->result = -EIO;
+-				} else {
+-					ql_dbg(ql_dbg_user, vha, 0x708a,
+-					    "mbx abort_command success.\n");
+-					bsg_reply->result = 0;
+-				}
+-				spin_lock_irqsave(&ha->hardware_lock, flags);
+-				goto done;
+-
+-			}
+-		}
++		if (qla_bsg_found(qpair, bsg_job))
++			goto done;
+ 	}
+-	spin_unlock_irqrestore(&ha->hardware_lock, flags);
++
+ 	ql_log(ql_log_info, vha, 0x708b, "SRB not found to abort.\n");
+ 	bsg_reply->result = -ENXIO;
+-	return 0;
+ 
+ done:
+-	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+-	/* ref: INIT */
+-	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 2f49baf131e26..7cf998e3cc681 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -3309,9 +3309,20 @@ struct fab_scan_rp {
+ 	u8 node_name[8];
+ };
+ 
++enum scan_step {
++	FAB_SCAN_START,
++	FAB_SCAN_GPNFT_FCP,
++	FAB_SCAN_GNNFT_FCP,
++	FAB_SCAN_GPNFT_NVME,
++	FAB_SCAN_GNNFT_NVME,
++};
++
+ struct fab_scan {
+ 	struct fab_scan_rp *l;
+ 	u32 size;
++	u32 rscn_gen_start;
++	u32 rscn_gen_end;
++	enum scan_step step;
+ 	u16 scan_retry;
+ #define MAX_SCAN_RETRIES 5
+ 	enum scan_flags_t scan_flags;
+@@ -3537,9 +3548,8 @@ enum qla_work_type {
+ 	QLA_EVT_RELOGIN,
+ 	QLA_EVT_ASYNC_PRLO,
+ 	QLA_EVT_ASYNC_PRLO_DONE,
+-	QLA_EVT_GPNFT,
+-	QLA_EVT_GPNFT_DONE,
+-	QLA_EVT_GNNFT_DONE,
++	QLA_EVT_SCAN_CMD,
++	QLA_EVT_SCAN_FINISH,
+ 	QLA_EVT_GFPNID,
+ 	QLA_EVT_SP_RETRY,
+ 	QLA_EVT_IIDMA,
+@@ -5030,6 +5040,7 @@ typedef struct scsi_qla_host {
+ 
+ 	/* Counter to detect races between ELS and RSCN events */
+ 	atomic_t		generation_tick;
++	atomic_t		rscn_gen;
+ 	/* Time when global fcport update has been scheduled */
+ 	int			total_fcport_update_gen;
+ 	/* List of pending LOGOs, protected by tgt_mutex */
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index 7309310d2ab94..cededfda9d0e3 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -728,9 +728,9 @@ int qla24xx_async_gpsc(scsi_qla_host_t *, fc_port_t *);
+ void qla24xx_handle_gpsc_event(scsi_qla_host_t *, struct event_arg *);
+ int qla2x00_mgmt_svr_login(scsi_qla_host_t *);
+ int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport, bool);
+-int qla24xx_async_gpnft(scsi_qla_host_t *, u8, srb_t *);
+-void qla24xx_async_gpnft_done(scsi_qla_host_t *, srb_t *);
+-void qla24xx_async_gnnft_done(scsi_qla_host_t *, srb_t *);
++int qla_fab_async_scan(scsi_qla_host_t *, srb_t *);
++void qla_fab_scan_start(struct scsi_qla_host *);
++void qla_fab_scan_finish(scsi_qla_host_t *, srb_t *);
+ int qla24xx_post_gfpnid_work(struct scsi_qla_host *, fc_port_t *);
+ int qla24xx_async_gfpnid(scsi_qla_host_t *, fc_port_t *);
+ void qla24xx_handle_gfpnid_event(scsi_qla_host_t *, struct event_arg *);
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index 1cf9d200d5630..d2bddca7045aa 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -1710,7 +1710,7 @@ qla2x00_hba_attributes(scsi_qla_host_t *vha, void *entries,
+ 	eiter->type = cpu_to_be16(FDMI_HBA_OPTION_ROM_VERSION);
+ 	alen = scnprintf(
+ 		eiter->a.orom_version, sizeof(eiter->a.orom_version),
+-		"%d.%02d", ha->bios_revision[1], ha->bios_revision[0]);
++		"%d.%02d", ha->efi_revision[1], ha->efi_revision[0]);
+ 	alen += FDMI_ATTR_ALIGNMENT(alen);
+ 	alen += FDMI_ATTR_TYPELEN(eiter);
+ 	eiter->len = cpu_to_be16(alen);
+@@ -3168,7 +3168,30 @@ static int qla2x00_is_a_vp(scsi_qla_host_t *vha, u64 wwn)
+ 	return rc;
+ }
+ 
+-void qla24xx_async_gnnft_done(scsi_qla_host_t *vha, srb_t *sp)
++static bool qla_ok_to_clear_rscn(scsi_qla_host_t *vha, fc_port_t *fcport)
++{
++	u32 rscn_gen;
++
++	rscn_gen = atomic_read(&vha->rscn_gen);
++	ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0x2017,
++	    "%s %d %8phC rscn_gen %x start %x end %x current %x\n",
++	    __func__, __LINE__, fcport->port_name, fcport->rscn_gen,
++	    vha->scan.rscn_gen_start, vha->scan.rscn_gen_end, rscn_gen);
++
++	if (val_is_in_range(fcport->rscn_gen, vha->scan.rscn_gen_start,
++	    vha->scan.rscn_gen_end))
++		/* rscn came in before fabric scan */
++		return true;
++
++	if (val_is_in_range(fcport->rscn_gen, vha->scan.rscn_gen_end, rscn_gen))
++		/* rscn came in after fabric scan */
++		return false;
++
++	/* rare: fcport's scan_needed + rscn_gen must be stale */
++	return true;
++}
++
++void qla_fab_scan_finish(scsi_qla_host_t *vha, srb_t *sp)
+ {
+ 	fc_port_t *fcport;
+ 	u32 i, rc;
+@@ -3281,10 +3304,10 @@ void qla24xx_async_gnnft_done(scsi_qla_host_t *vha, srb_t *sp)
+ 				   (fcport->scan_needed &&
+ 				    fcport->port_type != FCT_INITIATOR &&
+ 				    fcport->port_type != FCT_NVME_INITIATOR)) {
++				fcport->scan_needed = 0;
+ 				qlt_schedule_sess_for_deletion(fcport);
+ 			}
+ 			fcport->d_id.b24 = rp->id.b24;
+-			fcport->scan_needed = 0;
+ 			break;
+ 		}
+ 
+@@ -3325,7 +3348,9 @@ void qla24xx_async_gnnft_done(scsi_qla_host_t *vha, srb_t *sp)
+ 				do_delete = true;
+ 			}
+ 
+-			fcport->scan_needed = 0;
++			if (qla_ok_to_clear_rscn(vha, fcport))
++				fcport->scan_needed = 0;
++
+ 			if (((qla_dual_mode_enabled(vha) ||
+ 			      qla_ini_mode_enabled(vha)) &&
+ 			    atomic_read(&fcport->state) == FCS_ONLINE) ||
+@@ -3355,7 +3380,9 @@ void qla24xx_async_gnnft_done(scsi_qla_host_t *vha, srb_t *sp)
+ 					    fcport->port_name, fcport->loop_id,
+ 					    fcport->login_retry);
+ 				}
+-				fcport->scan_needed = 0;
++
++				if (qla_ok_to_clear_rscn(vha, fcport))
++					fcport->scan_needed = 0;
+ 				qla24xx_fcport_handle_login(vha, fcport);
+ 			}
+ 		}
+@@ -3379,14 +3406,11 @@ void qla24xx_async_gnnft_done(scsi_qla_host_t *vha, srb_t *sp)
+ 	}
+ }
+ 
+-static int qla2x00_post_gnnft_gpnft_done_work(struct scsi_qla_host *vha,
++static int qla2x00_post_next_scan_work(struct scsi_qla_host *vha,
+     srb_t *sp, int cmd)
+ {
+ 	struct qla_work_evt *e;
+ 
+-	if (cmd != QLA_EVT_GPNFT_DONE && cmd != QLA_EVT_GNNFT_DONE)
+-		return QLA_PARAMETER_ERROR;
+-
+ 	e = qla2x00_alloc_work(vha, cmd);
+ 	if (!e)
+ 		return QLA_FUNCTION_FAILED;
+@@ -3396,37 +3420,15 @@ static int qla2x00_post_gnnft_gpnft_done_work(struct scsi_qla_host *vha,
+ 	return qla2x00_post_work(vha, e);
+ }
+ 
+-static int qla2x00_post_nvme_gpnft_work(struct scsi_qla_host *vha,
+-    srb_t *sp, int cmd)
+-{
+-	struct qla_work_evt *e;
+-
+-	if (cmd != QLA_EVT_GPNFT)
+-		return QLA_PARAMETER_ERROR;
+-
+-	e = qla2x00_alloc_work(vha, cmd);
+-	if (!e)
+-		return QLA_FUNCTION_FAILED;
+-
+-	e->u.gpnft.fc4_type = FC4_TYPE_NVME;
+-	e->u.gpnft.sp = sp;
+-
+-	return qla2x00_post_work(vha, e);
+-}
+-
+ static void qla2x00_find_free_fcp_nvme_slot(struct scsi_qla_host *vha,
+ 	struct srb *sp)
+ {
+ 	struct qla_hw_data *ha = vha->hw;
+ 	int num_fibre_dev = ha->max_fibre_devices;
+-	struct ct_sns_req *ct_req =
+-		(struct ct_sns_req *)sp->u.iocb_cmd.u.ctarg.req;
+ 	struct ct_sns_gpnft_rsp *ct_rsp =
+ 		(struct ct_sns_gpnft_rsp *)sp->u.iocb_cmd.u.ctarg.rsp;
+ 	struct ct_sns_gpn_ft_data *d;
+ 	struct fab_scan_rp *rp;
+-	u16 cmd = be16_to_cpu(ct_req->command);
+-	u8 fc4_type = sp->gen2;
+ 	int i, j, k;
+ 	port_id_t id;
+ 	u8 found;
+@@ -3445,85 +3447,83 @@ static void qla2x00_find_free_fcp_nvme_slot(struct scsi_qla_host *vha,
+ 		if (id.b24 == 0 || wwn == 0)
+ 			continue;
+ 
+-		if (fc4_type == FC4_TYPE_FCP_SCSI) {
+-			if (cmd == GPN_FT_CMD) {
+-				rp = &vha->scan.l[j];
+-				rp->id = id;
+-				memcpy(rp->port_name, d->port_name, 8);
+-				j++;
+-				rp->fc4type = FS_FC4TYPE_FCP;
+-			} else {
+-				for (k = 0; k < num_fibre_dev; k++) {
+-					rp = &vha->scan.l[k];
+-					if (id.b24 == rp->id.b24) {
+-						memcpy(rp->node_name,
+-						    d->port_name, 8);
+-						break;
+-					}
++		ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0x2025,
++		       "%s %06x %8ph \n",
++		       __func__, id.b24, d->port_name);
++
++		switch (vha->scan.step) {
++		case FAB_SCAN_GPNFT_FCP:
++			rp = &vha->scan.l[j];
++			rp->id = id;
++			memcpy(rp->port_name, d->port_name, 8);
++			j++;
++			rp->fc4type = FS_FC4TYPE_FCP;
++			break;
++		case FAB_SCAN_GNNFT_FCP:
++			for (k = 0; k < num_fibre_dev; k++) {
++				rp = &vha->scan.l[k];
++				if (id.b24 == rp->id.b24) {
++					memcpy(rp->node_name,
++					    d->port_name, 8);
++					break;
+ 				}
+ 			}
+-		} else {
+-			/* Search if the fibre device supports FC4_TYPE_NVME */
+-			if (cmd == GPN_FT_CMD) {
+-				found = 0;
+-
+-				for (k = 0; k < num_fibre_dev; k++) {
+-					rp = &vha->scan.l[k];
+-					if (!memcmp(rp->port_name,
+-					    d->port_name, 8)) {
+-						/*
+-						 * Supports FC-NVMe & FCP
+-						 */
+-						rp->fc4type |= FS_FC4TYPE_NVME;
+-						found = 1;
+-						break;
+-					}
++			break;
++		case FAB_SCAN_GPNFT_NVME:
++			found = 0;
++
++			for (k = 0; k < num_fibre_dev; k++) {
++				rp = &vha->scan.l[k];
++				if (!memcmp(rp->port_name, d->port_name, 8)) {
++					/*
++					 * Supports FC-NVMe & FCP
++					 */
++					rp->fc4type |= FS_FC4TYPE_NVME;
++					found = 1;
++					break;
+ 				}
++			}
+ 
+-				/* We found new FC-NVMe only port */
+-				if (!found) {
+-					for (k = 0; k < num_fibre_dev; k++) {
+-						rp = &vha->scan.l[k];
+-						if (wwn_to_u64(rp->port_name)) {
+-							continue;
+-						} else {
+-							rp->id = id;
+-							memcpy(rp->port_name,
+-							    d->port_name, 8);
+-							rp->fc4type =
+-							    FS_FC4TYPE_NVME;
+-							break;
+-						}
+-					}
+-				}
+-			} else {
++			/* We found new FC-NVMe only port */
++			if (!found) {
+ 				for (k = 0; k < num_fibre_dev; k++) {
+ 					rp = &vha->scan.l[k];
+-					if (id.b24 == rp->id.b24) {
+-						memcpy(rp->node_name,
+-						    d->port_name, 8);
++					if (wwn_to_u64(rp->port_name)) {
++						continue;
++					} else {
++						rp->id = id;
++						memcpy(rp->port_name, d->port_name, 8);
++						rp->fc4type = FS_FC4TYPE_NVME;
+ 						break;
+ 					}
+ 				}
+ 			}
++			break;
++		case FAB_SCAN_GNNFT_NVME:
++			for (k = 0; k < num_fibre_dev; k++) {
++				rp = &vha->scan.l[k];
++				if (id.b24 == rp->id.b24) {
++					memcpy(rp->node_name, d->port_name, 8);
++					break;
++				}
++			}
++			break;
++		default:
++			break;
+ 		}
+ 	}
+ }
+ 
+-static void qla2x00_async_gpnft_gnnft_sp_done(srb_t *sp, int res)
++static void qla_async_scan_sp_done(srb_t *sp, int res)
+ {
+ 	struct scsi_qla_host *vha = sp->vha;
+-	struct ct_sns_req *ct_req =
+-		(struct ct_sns_req *)sp->u.iocb_cmd.u.ctarg.req;
+-	u16 cmd = be16_to_cpu(ct_req->command);
+-	u8 fc4_type = sp->gen2;
+ 	unsigned long flags;
+ 	int rc;
+ 
+ 	/* gen2 field is holding the fc4type */
+-	ql_dbg(ql_dbg_disc, vha, 0xffff,
+-	    "Async done-%s res %x FC4Type %x\n",
+-	    sp->name, res, sp->gen2);
++	ql_dbg(ql_dbg_disc, vha, 0x2026,
++	    "Async done-%s res %x step %x\n",
++	    sp->name, res, vha->scan.step);
+ 
+ 	sp->rc = res;
+ 	if (res) {
+@@ -3547,8 +3547,7 @@ static void qla2x00_async_gpnft_gnnft_sp_done(srb_t *sp, int res)
+ 		 * sp for GNNFT_DONE work. This will allow all
+ 		 * the resource to get freed up.
+ 		 */
+-		rc = qla2x00_post_gnnft_gpnft_done_work(vha, sp,
+-		    QLA_EVT_GNNFT_DONE);
++		rc = qla2x00_post_next_scan_work(vha, sp, QLA_EVT_SCAN_FINISH);
+ 		if (rc) {
+ 			/* Cleanup here to prevent memory leak */
+ 			qla24xx_sp_unmap(vha, sp);
+@@ -3573,28 +3572,30 @@ static void qla2x00_async_gpnft_gnnft_sp_done(srb_t *sp, int res)
+ 
+ 	qla2x00_find_free_fcp_nvme_slot(vha, sp);
+ 
+-	if ((fc4_type == FC4_TYPE_FCP_SCSI) && vha->flags.nvme_enabled &&
+-	    cmd == GNN_FT_CMD) {
+-		spin_lock_irqsave(&vha->work_lock, flags);
+-		vha->scan.scan_flags &= ~SF_SCANNING;
+-		spin_unlock_irqrestore(&vha->work_lock, flags);
++	spin_lock_irqsave(&vha->work_lock, flags);
++	vha->scan.scan_flags &= ~SF_SCANNING;
++	spin_unlock_irqrestore(&vha->work_lock, flags);
+ 
+-		sp->rc = res;
+-		rc = qla2x00_post_nvme_gpnft_work(vha, sp, QLA_EVT_GPNFT);
+-		if (rc) {
+-			qla24xx_sp_unmap(vha, sp);
+-			set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
+-			set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
+-		}
+-		return;
+-	}
++	switch (vha->scan.step) {
++	case FAB_SCAN_GPNFT_FCP:
++	case FAB_SCAN_GPNFT_NVME:
++		rc = qla2x00_post_next_scan_work(vha, sp, QLA_EVT_SCAN_CMD);
++		break;
++	case  FAB_SCAN_GNNFT_FCP:
++		if (vha->flags.nvme_enabled)
++			rc = qla2x00_post_next_scan_work(vha, sp, QLA_EVT_SCAN_CMD);
++		else
++			rc = qla2x00_post_next_scan_work(vha, sp, QLA_EVT_SCAN_FINISH);
+ 
+-	if (cmd == GPN_FT_CMD) {
+-		rc = qla2x00_post_gnnft_gpnft_done_work(vha, sp,
+-		    QLA_EVT_GPNFT_DONE);
+-	} else {
+-		rc = qla2x00_post_gnnft_gpnft_done_work(vha, sp,
+-		    QLA_EVT_GNNFT_DONE);
++		break;
++	case  FAB_SCAN_GNNFT_NVME:
++		rc = qla2x00_post_next_scan_work(vha, sp, QLA_EVT_SCAN_FINISH);
++		break;
++	default:
++		/* should not be here */
++		WARN_ON(1);
++		rc = QLA_FUNCTION_FAILED;
++		break;
+ 	}
+ 
+ 	if (rc) {
+@@ -3605,127 +3606,16 @@ static void qla2x00_async_gpnft_gnnft_sp_done(srb_t *sp, int res)
+ 	}
+ }
+ 
+-/*
+- * Get WWNN list for fc4_type
+- *
+- * It is assumed the same SRB is re-used from GPNFT to avoid
+- * mem free & re-alloc
+- */
+-static int qla24xx_async_gnnft(scsi_qla_host_t *vha, struct srb *sp,
+-    u8 fc4_type)
+-{
+-	int rval = QLA_FUNCTION_FAILED;
+-	struct ct_sns_req *ct_req;
+-	struct ct_sns_pkt *ct_sns;
+-	unsigned long flags;
+-
+-	if (!vha->flags.online) {
+-		spin_lock_irqsave(&vha->work_lock, flags);
+-		vha->scan.scan_flags &= ~SF_SCANNING;
+-		spin_unlock_irqrestore(&vha->work_lock, flags);
+-		goto done_free_sp;
+-	}
+-
+-	if (!sp->u.iocb_cmd.u.ctarg.req || !sp->u.iocb_cmd.u.ctarg.rsp) {
+-		ql_log(ql_log_warn, vha, 0xffff,
+-		    "%s: req %p rsp %p are not setup\n",
+-		    __func__, sp->u.iocb_cmd.u.ctarg.req,
+-		    sp->u.iocb_cmd.u.ctarg.rsp);
+-		spin_lock_irqsave(&vha->work_lock, flags);
+-		vha->scan.scan_flags &= ~SF_SCANNING;
+-		spin_unlock_irqrestore(&vha->work_lock, flags);
+-		WARN_ON(1);
+-		set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
+-		set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
+-		goto done_free_sp;
+-	}
+-
+-	ql_dbg(ql_dbg_disc, vha, 0xfffff,
+-	    "%s: FC4Type %x, CT-PASSTHRU %s command ctarg rsp size %d, ctarg req size %d\n",
+-	    __func__, fc4_type, sp->name, sp->u.iocb_cmd.u.ctarg.rsp_size,
+-	     sp->u.iocb_cmd.u.ctarg.req_size);
+-
+-	sp->type = SRB_CT_PTHRU_CMD;
+-	sp->name = "gnnft";
+-	sp->gen1 = vha->hw->base_qpair->chip_reset;
+-	sp->gen2 = fc4_type;
+-	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
+-			      qla2x00_async_gpnft_gnnft_sp_done);
+-
+-	memset(sp->u.iocb_cmd.u.ctarg.rsp, 0, sp->u.iocb_cmd.u.ctarg.rsp_size);
+-	memset(sp->u.iocb_cmd.u.ctarg.req, 0, sp->u.iocb_cmd.u.ctarg.req_size);
+-
+-	ct_sns = (struct ct_sns_pkt *)sp->u.iocb_cmd.u.ctarg.req;
+-	/* CT_IU preamble  */
+-	ct_req = qla2x00_prep_ct_req(ct_sns, GNN_FT_CMD,
+-	    sp->u.iocb_cmd.u.ctarg.rsp_size);
+-
+-	/* GPN_FT req */
+-	ct_req->req.gpn_ft.port_type = fc4_type;
+-
+-	sp->u.iocb_cmd.u.ctarg.req_size = GNN_FT_REQ_SIZE;
+-	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+-
+-	ql_dbg(ql_dbg_disc, vha, 0xffff,
+-	    "Async-%s hdl=%x FC4Type %x.\n", sp->name,
+-	    sp->handle, ct_req->req.gpn_ft.port_type);
+-
+-	rval = qla2x00_start_sp(sp);
+-	if (rval != QLA_SUCCESS) {
+-		goto done_free_sp;
+-	}
+-
+-	return rval;
+-
+-done_free_sp:
+-	if (sp->u.iocb_cmd.u.ctarg.req) {
+-		dma_free_coherent(&vha->hw->pdev->dev,
+-		    sp->u.iocb_cmd.u.ctarg.req_allocated_size,
+-		    sp->u.iocb_cmd.u.ctarg.req,
+-		    sp->u.iocb_cmd.u.ctarg.req_dma);
+-		sp->u.iocb_cmd.u.ctarg.req = NULL;
+-	}
+-	if (sp->u.iocb_cmd.u.ctarg.rsp) {
+-		dma_free_coherent(&vha->hw->pdev->dev,
+-		    sp->u.iocb_cmd.u.ctarg.rsp_allocated_size,
+-		    sp->u.iocb_cmd.u.ctarg.rsp,
+-		    sp->u.iocb_cmd.u.ctarg.rsp_dma);
+-		sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+-	}
+-	/* ref: INIT */
+-	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+-
+-	spin_lock_irqsave(&vha->work_lock, flags);
+-	vha->scan.scan_flags &= ~SF_SCANNING;
+-	if (vha->scan.scan_flags == 0) {
+-		ql_dbg(ql_dbg_disc, vha, 0xffff,
+-		    "%s: schedule\n", __func__);
+-		vha->scan.scan_flags |= SF_QUEUED;
+-		schedule_delayed_work(&vha->scan.scan_work, 5);
+-	}
+-	spin_unlock_irqrestore(&vha->work_lock, flags);
+-
+-
+-	return rval;
+-} /* GNNFT */
+-
+-void qla24xx_async_gpnft_done(scsi_qla_host_t *vha, srb_t *sp)
+-{
+-	ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
+-	    "%s enter\n", __func__);
+-	qla24xx_async_gnnft(vha, sp, sp->gen2);
+-}
+-
+ /* Get WWPN list for certain fc4_type */
+-int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
++int qla_fab_async_scan(scsi_qla_host_t *vha, srb_t *sp)
+ {
+ 	int rval = QLA_FUNCTION_FAILED;
+ 	struct ct_sns_req       *ct_req;
+ 	struct ct_sns_pkt *ct_sns;
+-	u32 rspsz;
++	u32 rspsz = 0;
+ 	unsigned long flags;
+ 
+-	ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
++	ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0x200c,
+ 	    "%s enter\n", __func__);
+ 
+ 	if (!vha->flags.online)
+@@ -3734,22 +3624,21 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ 	spin_lock_irqsave(&vha->work_lock, flags);
+ 	if (vha->scan.scan_flags & SF_SCANNING) {
+ 		spin_unlock_irqrestore(&vha->work_lock, flags);
+-		ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
++		ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0x2012,
+ 		    "%s: scan active\n", __func__);
+ 		return rval;
+ 	}
+ 	vha->scan.scan_flags |= SF_SCANNING;
++	if (!sp)
++		vha->scan.step = FAB_SCAN_START;
++
+ 	spin_unlock_irqrestore(&vha->work_lock, flags);
+ 
+-	if (fc4_type == FC4_TYPE_FCP_SCSI) {
+-		ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
++	switch (vha->scan.step) {
++	case FAB_SCAN_START:
++		ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0x2018,
+ 		    "%s: Performing FCP Scan\n", __func__);
+ 
+-		if (sp) {
+-			/* ref: INIT */
+-			kref_put(&sp->cmd_kref, qla2x00_sp_release);
+-		}
+-
+ 		/* ref: INIT */
+ 		sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ 		if (!sp) {
+@@ -3765,7 +3654,7 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ 								GFP_KERNEL);
+ 		sp->u.iocb_cmd.u.ctarg.req_allocated_size = sizeof(struct ct_sns_pkt);
+ 		if (!sp->u.iocb_cmd.u.ctarg.req) {
+-			ql_log(ql_log_warn, vha, 0xffff,
++			ql_log(ql_log_warn, vha, 0x201a,
+ 			    "Failed to allocate ct_sns request.\n");
+ 			spin_lock_irqsave(&vha->work_lock, flags);
+ 			vha->scan.scan_flags &= ~SF_SCANNING;
+@@ -3773,7 +3662,6 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ 			qla2x00_rel_sp(sp);
+ 			return rval;
+ 		}
+-		sp->u.iocb_cmd.u.ctarg.req_size = GPN_FT_REQ_SIZE;
+ 
+ 		rspsz = sizeof(struct ct_sns_gpnft_rsp) +
+ 			vha->hw->max_fibre_devices *
+@@ -3785,7 +3673,7 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ 								GFP_KERNEL);
+ 		sp->u.iocb_cmd.u.ctarg.rsp_allocated_size = rspsz;
+ 		if (!sp->u.iocb_cmd.u.ctarg.rsp) {
+-			ql_log(ql_log_warn, vha, 0xffff,
++			ql_log(ql_log_warn, vha, 0x201b,
+ 			    "Failed to allocate ct_sns request.\n");
+ 			spin_lock_irqsave(&vha->work_lock, flags);
+ 			vha->scan.scan_flags &= ~SF_SCANNING;
+@@ -3805,35 +3693,95 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ 		    "%s scan list size %d\n", __func__, vha->scan.size);
+ 
+ 		memset(vha->scan.l, 0, vha->scan.size);
+-	} else if (!sp) {
+-		ql_dbg(ql_dbg_disc, vha, 0xffff,
+-		    "NVME scan did not provide SP\n");
++
++		vha->scan.step = FAB_SCAN_GPNFT_FCP;
++		break;
++	case FAB_SCAN_GPNFT_FCP:
++		vha->scan.step = FAB_SCAN_GNNFT_FCP;
++		break;
++	case FAB_SCAN_GNNFT_FCP:
++		vha->scan.step = FAB_SCAN_GPNFT_NVME;
++		break;
++	case FAB_SCAN_GPNFT_NVME:
++		vha->scan.step = FAB_SCAN_GNNFT_NVME;
++		break;
++	case FAB_SCAN_GNNFT_NVME:
++	default:
++		/* should not be here */
++		WARN_ON(1);
++		goto done_free_sp;
++	}
++
++	if (!sp) {
++		ql_dbg(ql_dbg_disc, vha, 0x201c,
++		    "scan did not provide SP\n");
+ 		return rval;
+ 	}
++	if (!sp->u.iocb_cmd.u.ctarg.req || !sp->u.iocb_cmd.u.ctarg.rsp) {
++		ql_log(ql_log_warn, vha, 0x201d,
++		    "%s: req %p rsp %p are not setup\n",
++		    __func__, sp->u.iocb_cmd.u.ctarg.req,
++		    sp->u.iocb_cmd.u.ctarg.rsp);
++		spin_lock_irqsave(&vha->work_lock, flags);
++		vha->scan.scan_flags &= ~SF_SCANNING;
++		spin_unlock_irqrestore(&vha->work_lock, flags);
++		WARN_ON(1);
++		set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
++		set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
++		goto done_free_sp;
++	}
++
++	rspsz = sp->u.iocb_cmd.u.ctarg.rsp_size;
++	memset(sp->u.iocb_cmd.u.ctarg.req, 0, sp->u.iocb_cmd.u.ctarg.req_size);
++	memset(sp->u.iocb_cmd.u.ctarg.rsp, 0, sp->u.iocb_cmd.u.ctarg.rsp_size);
++
+ 
+ 	sp->type = SRB_CT_PTHRU_CMD;
+-	sp->name = "gpnft";
+ 	sp->gen1 = vha->hw->base_qpair->chip_reset;
+-	sp->gen2 = fc4_type;
+ 	qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
+-			      qla2x00_async_gpnft_gnnft_sp_done);
+-
+-	rspsz = sp->u.iocb_cmd.u.ctarg.rsp_size;
+-	memset(sp->u.iocb_cmd.u.ctarg.rsp, 0, sp->u.iocb_cmd.u.ctarg.rsp_size);
+-	memset(sp->u.iocb_cmd.u.ctarg.req, 0, sp->u.iocb_cmd.u.ctarg.req_size);
++			      qla_async_scan_sp_done);
+ 
+ 	ct_sns = (struct ct_sns_pkt *)sp->u.iocb_cmd.u.ctarg.req;
+-	/* CT_IU preamble  */
+-	ct_req = qla2x00_prep_ct_req(ct_sns, GPN_FT_CMD, rspsz);
+ 
+-	/* GPN_FT req */
+-	ct_req->req.gpn_ft.port_type = fc4_type;
++	/* CT_IU preamble  */
++	switch (vha->scan.step) {
++	case FAB_SCAN_GPNFT_FCP:
++		sp->name = "gpnft";
++		ct_req = qla2x00_prep_ct_req(ct_sns, GPN_FT_CMD, rspsz);
++		ct_req->req.gpn_ft.port_type = FC4_TYPE_FCP_SCSI;
++		sp->u.iocb_cmd.u.ctarg.req_size = GPN_FT_REQ_SIZE;
++		break;
++	case FAB_SCAN_GNNFT_FCP:
++		sp->name = "gnnft";
++		ct_req = qla2x00_prep_ct_req(ct_sns, GNN_FT_CMD, rspsz);
++		ct_req->req.gpn_ft.port_type = FC4_TYPE_FCP_SCSI;
++		sp->u.iocb_cmd.u.ctarg.req_size = GNN_FT_REQ_SIZE;
++		break;
++	case FAB_SCAN_GPNFT_NVME:
++		sp->name = "gpnft";
++		ct_req = qla2x00_prep_ct_req(ct_sns, GPN_FT_CMD, rspsz);
++		ct_req->req.gpn_ft.port_type = FC4_TYPE_NVME;
++		sp->u.iocb_cmd.u.ctarg.req_size = GPN_FT_REQ_SIZE;
++		break;
++	case FAB_SCAN_GNNFT_NVME:
++		sp->name = "gnnft";
++		ct_req = qla2x00_prep_ct_req(ct_sns, GNN_FT_CMD, rspsz);
++		ct_req->req.gpn_ft.port_type = FC4_TYPE_NVME;
++		sp->u.iocb_cmd.u.ctarg.req_size = GNN_FT_REQ_SIZE;
++		break;
++	default:
++		/* should not be here */
++		WARN_ON(1);
++		goto done_free_sp;
++	}
+ 
+ 	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+ 
+-	ql_dbg(ql_dbg_disc, vha, 0xffff,
+-	    "Async-%s hdl=%x FC4Type %x.\n", sp->name,
+-	    sp->handle, ct_req->req.gpn_ft.port_type);
++	ql_dbg(ql_dbg_disc, vha, 0x2003,
++	       "%s: step %d, rsp size %d, req size %d hdl %x %s FC4TYPE %x \n",
++	       __func__, vha->scan.step, sp->u.iocb_cmd.u.ctarg.rsp_size,
++	       sp->u.iocb_cmd.u.ctarg.req_size, sp->handle, sp->name,
++	       ct_req->req.gpn_ft.port_type);
+ 
+ 	rval = qla2x00_start_sp(sp);
+ 	if (rval != QLA_SUCCESS) {
+@@ -3864,7 +3812,7 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ 	spin_lock_irqsave(&vha->work_lock, flags);
+ 	vha->scan.scan_flags &= ~SF_SCANNING;
+ 	if (vha->scan.scan_flags == 0) {
+-		ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
++		ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0x2007,
+ 		    "%s: Scan scheduled.\n", __func__);
+ 		vha->scan.scan_flags |= SF_QUEUED;
+ 		schedule_delayed_work(&vha->scan.scan_work, 5);
+@@ -3875,6 +3823,15 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ 	return rval;
+ }
+ 
++void qla_fab_scan_start(struct scsi_qla_host *vha)
++{
++	int rval;
++
++	rval = qla_fab_async_scan(vha, NULL);
++	if (rval)
++		set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
++}
++
+ void qla_scan_work_fn(struct work_struct *work)
+ {
+ 	struct fab_scan *s = container_of(to_delayed_work(work),
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 8377624d76c98..eda3bdab934d5 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -1842,10 +1842,18 @@ int qla24xx_post_newsess_work(struct scsi_qla_host *vha, port_id_t *id,
+ 	return qla2x00_post_work(vha, e);
+ }
+ 
++static void qla_rscn_gen_tick(scsi_qla_host_t *vha, u32 *ret_rscn_gen)
++{
++	*ret_rscn_gen = atomic_inc_return(&vha->rscn_gen);
++	/* memory barrier */
++	wmb();
++}
++
+ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ {
+ 	fc_port_t *fcport;
+ 	unsigned long flags;
++	u32 rscn_gen;
+ 
+ 	switch (ea->id.b.rsvd_1) {
+ 	case RSCN_PORT_ADDR:
+@@ -1875,15 +1883,16 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ 					 * Otherwise we're already in the middle of a relogin
+ 					 */
+ 					fcport->scan_needed = 1;
+-					fcport->rscn_gen++;
++					qla_rscn_gen_tick(vha, &fcport->rscn_gen);
+ 				}
+ 			} else {
+ 				fcport->scan_needed = 1;
+-				fcport->rscn_gen++;
++				qla_rscn_gen_tick(vha, &fcport->rscn_gen);
+ 			}
+ 		}
+ 		break;
+ 	case RSCN_AREA_ADDR:
++		qla_rscn_gen_tick(vha, &rscn_gen);
+ 		list_for_each_entry(fcport, &vha->vp_fcports, list) {
+ 			if (fcport->flags & FCF_FCP2_DEVICE &&
+ 			    atomic_read(&fcport->state) == FCS_ONLINE)
+@@ -1891,11 +1900,12 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ 
+ 			if ((ea->id.b24 & 0xffff00) == (fcport->d_id.b24 & 0xffff00)) {
+ 				fcport->scan_needed = 1;
+-				fcport->rscn_gen++;
++				fcport->rscn_gen = rscn_gen;
+ 			}
+ 		}
+ 		break;
+ 	case RSCN_DOM_ADDR:
++		qla_rscn_gen_tick(vha, &rscn_gen);
+ 		list_for_each_entry(fcport, &vha->vp_fcports, list) {
+ 			if (fcport->flags & FCF_FCP2_DEVICE &&
+ 			    atomic_read(&fcport->state) == FCS_ONLINE)
+@@ -1903,19 +1913,20 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ 
+ 			if ((ea->id.b24 & 0xff0000) == (fcport->d_id.b24 & 0xff0000)) {
+ 				fcport->scan_needed = 1;
+-				fcport->rscn_gen++;
++				fcport->rscn_gen = rscn_gen;
+ 			}
+ 		}
+ 		break;
+ 	case RSCN_FAB_ADDR:
+ 	default:
++		qla_rscn_gen_tick(vha, &rscn_gen);
+ 		list_for_each_entry(fcport, &vha->vp_fcports, list) {
+ 			if (fcport->flags & FCF_FCP2_DEVICE &&
+ 			    atomic_read(&fcport->state) == FCS_ONLINE)
+ 				continue;
+ 
+ 			fcport->scan_needed = 1;
+-			fcport->rscn_gen++;
++			fcport->rscn_gen = rscn_gen;
+ 		}
+ 		break;
+ 	}
+@@ -1924,6 +1935,7 @@ void qla2x00_handle_rscn(scsi_qla_host_t *vha, struct event_arg *ea)
+ 	if (vha->scan.scan_flags == 0) {
+ 		ql_dbg(ql_dbg_disc, vha, 0xffff, "%s: schedule\n", __func__);
+ 		vha->scan.scan_flags |= SF_QUEUED;
++		vha->scan.rscn_gen_start = atomic_read(&vha->rscn_gen);
+ 		schedule_delayed_work(&vha->scan.scan_work, 5);
+ 	}
+ 	spin_unlock_irqrestore(&vha->work_lock, flags);
+@@ -6393,10 +6405,9 @@ qla2x00_configure_fabric(scsi_qla_host_t *vha)
+ 		qlt_do_generation_tick(vha, &discovery_gen);
+ 
+ 		if (USE_ASYNC_SCAN(ha)) {
+-			rval = qla24xx_async_gpnft(vha, FC4_TYPE_FCP_SCSI,
+-			    NULL);
+-			if (rval)
+-				set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
++			/* start of scan begins here */
++			vha->scan.rscn_gen_end = atomic_read(&vha->rscn_gen);
++			qla_fab_scan_start(vha);
+ 		} else  {
+ 			list_for_each_entry(fcport, &vha->vp_fcports, list)
+ 				fcport->scan_state = QLA_FCPORT_SCAN;
+@@ -8207,15 +8218,21 @@ qla28xx_get_aux_images(
+ 	struct qla27xx_image_status pri_aux_image_status, sec_aux_image_status;
+ 	bool valid_pri_image = false, valid_sec_image = false;
+ 	bool active_pri_image = false, active_sec_image = false;
++	int rc;
+ 
+ 	if (!ha->flt_region_aux_img_status_pri) {
+ 		ql_dbg(ql_dbg_init, vha, 0x018a, "Primary aux image not addressed\n");
+ 		goto check_sec_image;
+ 	}
+ 
+-	qla24xx_read_flash_data(vha, (uint32_t *)&pri_aux_image_status,
++	rc = qla24xx_read_flash_data(vha, (uint32_t *)&pri_aux_image_status,
+ 	    ha->flt_region_aux_img_status_pri,
+ 	    sizeof(pri_aux_image_status) >> 2);
++	if (rc) {
++		ql_log(ql_log_info, vha, 0x01a1,
++		    "Unable to read Primary aux image(%x).\n", rc);
++		goto check_sec_image;
++	}
+ 	qla27xx_print_image(vha, "Primary aux image", &pri_aux_image_status);
+ 
+ 	if (qla28xx_check_aux_image_status_signature(&pri_aux_image_status)) {
+@@ -8246,9 +8263,15 @@ qla28xx_get_aux_images(
+ 		goto check_valid_image;
+ 	}
+ 
+-	qla24xx_read_flash_data(vha, (uint32_t *)&sec_aux_image_status,
++	rc = qla24xx_read_flash_data(vha, (uint32_t *)&sec_aux_image_status,
+ 	    ha->flt_region_aux_img_status_sec,
+ 	    sizeof(sec_aux_image_status) >> 2);
++	if (rc) {
++		ql_log(ql_log_info, vha, 0x01a2,
++		    "Unable to read Secondary aux image(%x).\n", rc);
++		goto check_valid_image;
++	}
++
+ 	qla27xx_print_image(vha, "Secondary aux image", &sec_aux_image_status);
+ 
+ 	if (qla28xx_check_aux_image_status_signature(&sec_aux_image_status)) {
+@@ -8306,6 +8329,7 @@ qla27xx_get_active_image(struct scsi_qla_host *vha,
+ 	struct qla27xx_image_status pri_image_status, sec_image_status;
+ 	bool valid_pri_image = false, valid_sec_image = false;
+ 	bool active_pri_image = false, active_sec_image = false;
++	int rc;
+ 
+ 	if (!ha->flt_region_img_status_pri) {
+ 		ql_dbg(ql_dbg_init, vha, 0x018a, "Primary image not addressed\n");
+@@ -8347,8 +8371,14 @@ qla27xx_get_active_image(struct scsi_qla_host *vha,
+ 		goto check_valid_image;
+ 	}
+ 
+-	qla24xx_read_flash_data(vha, (uint32_t *)(&sec_image_status),
++	rc = qla24xx_read_flash_data(vha, (uint32_t *)(&sec_image_status),
+ 	    ha->flt_region_img_status_sec, sizeof(sec_image_status) >> 2);
++	if (rc) {
++		ql_log(ql_log_info, vha, 0x01a3,
++		    "Unable to read Secondary image status(%x).\n", rc);
++		goto check_valid_image;
++	}
++
+ 	qla27xx_print_image(vha, "Secondary image", &sec_image_status);
+ 
+ 	if (qla27xx_check_image_status_signature(&sec_image_status)) {
+@@ -8420,11 +8450,10 @@ qla24xx_load_risc_flash(scsi_qla_host_t *vha, uint32_t *srisc_addr,
+ 	    "FW: Loading firmware from flash (%x).\n", faddr);
+ 
+ 	dcode = (uint32_t *)req->ring;
+-	qla24xx_read_flash_data(vha, dcode, faddr, 8);
+-	if (qla24xx_risc_firmware_invalid(dcode)) {
++	rval = qla24xx_read_flash_data(vha, dcode, faddr, 8);
++	if (rval || qla24xx_risc_firmware_invalid(dcode)) {
+ 		ql_log(ql_log_fatal, vha, 0x008c,
+-		    "Unable to verify the integrity of flash firmware "
+-		    "image.\n");
++		    "Unable to verify the integrity of flash firmware image (rval %x).\n", rval);
+ 		ql_log(ql_log_fatal, vha, 0x008d,
+ 		    "Firmware data: %08x %08x %08x %08x.\n",
+ 		    dcode[0], dcode[1], dcode[2], dcode[3]);
+@@ -8438,7 +8467,12 @@ qla24xx_load_risc_flash(scsi_qla_host_t *vha, uint32_t *srisc_addr,
+ 	for (j = 0; j < segments; j++) {
+ 		ql_dbg(ql_dbg_init, vha, 0x008d,
+ 		    "-> Loading segment %u...\n", j);
+-		qla24xx_read_flash_data(vha, dcode, faddr, 10);
++		rval = qla24xx_read_flash_data(vha, dcode, faddr, 10);
++		if (rval) {
++			ql_log(ql_log_fatal, vha, 0x016a,
++			    "-> Unable to read segment addr + size .\n");
++			return QLA_FUNCTION_FAILED;
++		}
+ 		risc_addr = be32_to_cpu((__force __be32)dcode[2]);
+ 		risc_size = be32_to_cpu((__force __be32)dcode[3]);
+ 		if (!*srisc_addr) {
+@@ -8454,7 +8488,13 @@ qla24xx_load_risc_flash(scsi_qla_host_t *vha, uint32_t *srisc_addr,
+ 			ql_dbg(ql_dbg_init, vha, 0x008e,
+ 			    "-> Loading fragment %u: %#x <- %#x (%#lx dwords)...\n",
+ 			    fragment, risc_addr, faddr, dlen);
+-			qla24xx_read_flash_data(vha, dcode, faddr, dlen);
++			rval = qla24xx_read_flash_data(vha, dcode, faddr, dlen);
++			if (rval) {
++				ql_log(ql_log_fatal, vha, 0x016b,
++				    "-> Unable to read fragment(faddr %#x dlen %#lx).\n",
++				    faddr, dlen);
++				return QLA_FUNCTION_FAILED;
++			}
+ 			for (i = 0; i < dlen; i++)
+ 				dcode[i] = swab32(dcode[i]);
+ 
+@@ -8483,7 +8523,14 @@ qla24xx_load_risc_flash(scsi_qla_host_t *vha, uint32_t *srisc_addr,
+ 		fwdt->length = 0;
+ 
+ 		dcode = (uint32_t *)req->ring;
+-		qla24xx_read_flash_data(vha, dcode, faddr, 7);
++
++		rval = qla24xx_read_flash_data(vha, dcode, faddr, 7);
++		if (rval) {
++			ql_log(ql_log_fatal, vha, 0x016c,
++			    "-> Unable to read template size.\n");
++			goto failed;
++		}
++
+ 		risc_size = be32_to_cpu((__force __be32)dcode[2]);
+ 		ql_dbg(ql_dbg_init, vha, 0x0161,
+ 		    "-> fwdt%u template array at %#x (%#x dwords)\n",
+@@ -8509,11 +8556,12 @@ qla24xx_load_risc_flash(scsi_qla_host_t *vha, uint32_t *srisc_addr,
+ 		}
+ 
+ 		dcode = fwdt->template;
+-		qla24xx_read_flash_data(vha, dcode, faddr, risc_size);
++		rval = qla24xx_read_flash_data(vha, dcode, faddr, risc_size);
+ 
+-		if (!qla27xx_fwdt_template_valid(dcode)) {
++		if (rval || !qla27xx_fwdt_template_valid(dcode)) {
+ 			ql_log(ql_log_warn, vha, 0x0165,
+-			    "-> fwdt%u failed template validate\n", j);
++			    "-> fwdt%u failed template validate (rval %x)\n",
++			    j, rval);
+ 			goto failed;
+ 		}
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
+index a4a56ab0ba747..ef4b3cc1cd77e 100644
+--- a/drivers/scsi/qla2xxx/qla_inline.h
++++ b/drivers/scsi/qla2xxx/qla_inline.h
+@@ -631,3 +631,11 @@ static inline int qla_mapq_alloc_qp_cpu_map(struct qla_hw_data *ha)
+ 	}
+ 	return 0;
+ }
++
++static inline bool val_is_in_range(u32 val, u32 start, u32 end)
++{
++	if (val >= start && val <= end)
++		return true;
++	else
++		return false;
++}
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index b67416951a5f7..76703f2706b8e 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -180,7 +180,7 @@ qla24xx_disable_vp(scsi_qla_host_t *vha)
+ 	atomic_set(&vha->loop_state, LOOP_DOWN);
+ 	atomic_set(&vha->loop_down_timer, LOOP_DOWN_TIME);
+ 	list_for_each_entry(fcport, &vha->vp_fcports, list)
+-		fcport->logout_on_delete = 0;
++		fcport->logout_on_delete = 1;
+ 
+ 	if (!vha->hw->flags.edif_enabled)
+ 		qla2x00_wait_for_sess_deletion(vha);
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index a8ddf356e6626..8f4cc136a9c9c 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -49,7 +49,10 @@ int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
+ 		return 0;
+ 	}
+ 
+-	if (!vha->nvme_local_port && qla_nvme_register_hba(vha))
++	if (qla_nvme_register_hba(vha))
++		return 0;
++
++	if (!vha->nvme_local_port)
+ 		return 0;
+ 
+ 	if (!(fcport->nvme_prli_service_param &
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index fcb06df2ce4e6..bc3b2aea3f8bf 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -1875,14 +1875,9 @@ __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res)
+ 	for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) {
+ 		sp = req->outstanding_cmds[cnt];
+ 		if (sp) {
+-			/*
+-			 * perform lockless completion during driver unload
+-			 */
+ 			if (qla2x00_chip_is_down(vha)) {
+ 				req->outstanding_cmds[cnt] = NULL;
+-				spin_unlock_irqrestore(qp->qp_lock_ptr, flags);
+ 				sp->done(sp, res);
+-				spin_lock_irqsave(qp->qp_lock_ptr, flags);
+ 				continue;
+ 			}
+ 
+@@ -4689,7 +4684,7 @@ static void
+ qla2x00_number_of_exch(scsi_qla_host_t *vha, u32 *ret_cnt, u16 max_cnt)
+ {
+ 	u32 temp;
+-	struct init_cb_81xx *icb = (struct init_cb_81xx *)&vha->hw->init_cb;
++	struct init_cb_81xx *icb = (struct init_cb_81xx *)vha->hw->init_cb;
+ 	*ret_cnt = FW_DEF_EXCHANGES_CNT;
+ 
+ 	if (max_cnt > vha->hw->max_exchg)
+@@ -5563,15 +5558,11 @@ qla2x00_do_work(struct scsi_qla_host *vha)
+ 			qla2x00_async_prlo_done(vha, e->u.logio.fcport,
+ 			    e->u.logio.data);
+ 			break;
+-		case QLA_EVT_GPNFT:
+-			qla24xx_async_gpnft(vha, e->u.gpnft.fc4_type,
+-			    e->u.gpnft.sp);
+-			break;
+-		case QLA_EVT_GPNFT_DONE:
+-			qla24xx_async_gpnft_done(vha, e->u.iosb.sp);
++		case QLA_EVT_SCAN_CMD:
++			qla_fab_async_scan(vha, e->u.iosb.sp);
+ 			break;
+-		case QLA_EVT_GNNFT_DONE:
+-			qla24xx_async_gnnft_done(vha, e->u.iosb.sp);
++		case QLA_EVT_SCAN_FINISH:
++			qla_fab_scan_finish(vha, e->u.iosb.sp);
+ 			break;
+ 		case QLA_EVT_GFPNID:
+ 			qla24xx_async_gfpnid(vha, e->u.fcport.fcport);
+diff --git a/drivers/scsi/qla2xxx/qla_sup.c b/drivers/scsi/qla2xxx/qla_sup.c
+index c092a6b1ced4f..6d16546e17292 100644
+--- a/drivers/scsi/qla2xxx/qla_sup.c
++++ b/drivers/scsi/qla2xxx/qla_sup.c
+@@ -555,6 +555,7 @@ qla2xxx_find_flt_start(scsi_qla_host_t *vha, uint32_t *start)
+ 	struct qla_flt_location *fltl = (void *)req->ring;
+ 	uint32_t *dcode = (uint32_t *)req->ring;
+ 	uint8_t *buf = (void *)req->ring, *bcode,  last_image;
++	int rc;
+ 
+ 	/*
+ 	 * FLT-location structure resides after the last PCI region.
+@@ -584,14 +585,24 @@ qla2xxx_find_flt_start(scsi_qla_host_t *vha, uint32_t *start)
+ 	pcihdr = 0;
+ 	do {
+ 		/* Verify PCI expansion ROM header. */
+-		qla24xx_read_flash_data(vha, dcode, pcihdr >> 2, 0x20);
++		rc = qla24xx_read_flash_data(vha, dcode, pcihdr >> 2, 0x20);
++		if (rc) {
++			ql_log(ql_log_info, vha, 0x016d,
++			    "Unable to read PCI Expansion Rom Header (%x).\n", rc);
++			return QLA_FUNCTION_FAILED;
++		}
+ 		bcode = buf + (pcihdr % 4);
+ 		if (bcode[0x0] != 0x55 || bcode[0x1] != 0xaa)
+ 			goto end;
+ 
+ 		/* Locate PCI data structure. */
+ 		pcids = pcihdr + ((bcode[0x19] << 8) | bcode[0x18]);
+-		qla24xx_read_flash_data(vha, dcode, pcids >> 2, 0x20);
++		rc = qla24xx_read_flash_data(vha, dcode, pcids >> 2, 0x20);
++		if (rc) {
++			ql_log(ql_log_info, vha, 0x0179,
++			    "Unable to read PCI Data Structure (%x).\n", rc);
++			return QLA_FUNCTION_FAILED;
++		}
+ 		bcode = buf + (pcihdr % 4);
+ 
+ 		/* Validate signature of PCI data structure. */
+@@ -606,7 +617,12 @@ qla2xxx_find_flt_start(scsi_qla_host_t *vha, uint32_t *start)
+ 	} while (!last_image);
+ 
+ 	/* Now verify FLT-location structure. */
+-	qla24xx_read_flash_data(vha, dcode, pcihdr >> 2, sizeof(*fltl) >> 2);
++	rc = qla24xx_read_flash_data(vha, dcode, pcihdr >> 2, sizeof(*fltl) >> 2);
++	if (rc) {
++		ql_log(ql_log_info, vha, 0x017a,
++		    "Unable to read FLT (%x).\n", rc);
++		return QLA_FUNCTION_FAILED;
++	}
+ 	if (memcmp(fltl->sig, "QFLT", 4))
+ 		goto end;
+ 
+@@ -2605,13 +2621,18 @@ qla24xx_read_optrom_data(struct scsi_qla_host *vha, void *buf,
+     uint32_t offset, uint32_t length)
+ {
+ 	struct qla_hw_data *ha = vha->hw;
++	int rc;
+ 
+ 	/* Suspend HBA. */
+ 	scsi_block_requests(vha->host);
+ 	set_bit(MBX_UPDATE_FLASH_ACTIVE, &ha->mbx_cmd_flags);
+ 
+ 	/* Go with read. */
+-	qla24xx_read_flash_data(vha, buf, offset >> 2, length >> 2);
++	rc = qla24xx_read_flash_data(vha, buf, offset >> 2, length >> 2);
++	if (rc) {
++		ql_log(ql_log_info, vha, 0x01a0,
++		    "Unable to perform optrom read(%x).\n", rc);
++	}
+ 
+ 	/* Resume HBA. */
+ 	clear_bit(MBX_UPDATE_FLASH_ACTIVE, &ha->mbx_cmd_flags);
+@@ -3412,7 +3433,7 @@ qla24xx_get_flash_version(scsi_qla_host_t *vha, void *mbuf)
+ 	struct active_regions active_regions = { };
+ 
+ 	if (IS_P3P_TYPE(ha))
+-		return ret;
++		return QLA_SUCCESS;
+ 
+ 	if (!mbuf)
+ 		return QLA_FUNCTION_FAILED;
+@@ -3432,20 +3453,31 @@ qla24xx_get_flash_version(scsi_qla_host_t *vha, void *mbuf)
+ 
+ 	do {
+ 		/* Verify PCI expansion ROM header. */
+-		qla24xx_read_flash_data(vha, dcode, pcihdr >> 2, 0x20);
++		ret = qla24xx_read_flash_data(vha, dcode, pcihdr >> 2, 0x20);
++		if (ret) {
++			ql_log(ql_log_info, vha, 0x017d,
++			    "Unable to read PCI EXP Rom Header(%x).\n", ret);
++			return QLA_FUNCTION_FAILED;
++		}
++
+ 		bcode = mbuf + (pcihdr % 4);
+ 		if (memcmp(bcode, "\x55\xaa", 2)) {
+ 			/* No signature */
+ 			ql_log(ql_log_fatal, vha, 0x0059,
+ 			    "No matching ROM signature.\n");
+-			ret = QLA_FUNCTION_FAILED;
+-			break;
++			return QLA_FUNCTION_FAILED;
+ 		}
+ 
+ 		/* Locate PCI data structure. */
+ 		pcids = pcihdr + ((bcode[0x19] << 8) | bcode[0x18]);
+ 
+-		qla24xx_read_flash_data(vha, dcode, pcids >> 2, 0x20);
++		ret = qla24xx_read_flash_data(vha, dcode, pcids >> 2, 0x20);
++		if (ret) {
++			ql_log(ql_log_info, vha, 0x018e,
++			    "Unable to read PCI Data Structure (%x).\n", ret);
++			return QLA_FUNCTION_FAILED;
++		}
++
+ 		bcode = mbuf + (pcihdr % 4);
+ 
+ 		/* Validate signature of PCI data structure. */
+@@ -3454,8 +3486,7 @@ qla24xx_get_flash_version(scsi_qla_host_t *vha, void *mbuf)
+ 			ql_log(ql_log_fatal, vha, 0x005a,
+ 			    "PCI data struct not found pcir_adr=%x.\n", pcids);
+ 			ql_dump_buffer(ql_dbg_init, vha, 0x0059, dcode, 32);
+-			ret = QLA_FUNCTION_FAILED;
+-			break;
++			return QLA_FUNCTION_FAILED;
+ 		}
+ 
+ 		/* Read version */
+@@ -3507,20 +3538,26 @@ qla24xx_get_flash_version(scsi_qla_host_t *vha, void *mbuf)
+ 			faddr = ha->flt_region_fw_sec;
+ 	}
+ 
+-	qla24xx_read_flash_data(vha, dcode, faddr, 8);
+-	if (qla24xx_risc_firmware_invalid(dcode)) {
+-		ql_log(ql_log_warn, vha, 0x005f,
+-		    "Unrecognized fw revision at %x.\n",
+-		    ha->flt_region_fw * 4);
+-		ql_dump_buffer(ql_dbg_init, vha, 0x005f, dcode, 32);
++	ret = qla24xx_read_flash_data(vha, dcode, faddr, 8);
++	if (ret) {
++		ql_log(ql_log_info, vha, 0x019e,
++		    "Unable to read FW version (%x).\n", ret);
++		return ret;
+ 	} else {
+-		for (i = 0; i < 4; i++)
+-			ha->fw_revision[i] =
++		if (qla24xx_risc_firmware_invalid(dcode)) {
++			ql_log(ql_log_warn, vha, 0x005f,
++			    "Unrecognized fw revision at %x.\n",
++			    ha->flt_region_fw * 4);
++			ql_dump_buffer(ql_dbg_init, vha, 0x005f, dcode, 32);
++		} else {
++			for (i = 0; i < 4; i++)
++				ha->fw_revision[i] =
+ 				be32_to_cpu((__force __be32)dcode[4+i]);
+-		ql_dbg(ql_dbg_init, vha, 0x0060,
+-		    "Firmware revision (flash) %u.%u.%u (%x).\n",
+-		    ha->fw_revision[0], ha->fw_revision[1],
+-		    ha->fw_revision[2], ha->fw_revision[3]);
++			ql_dbg(ql_dbg_init, vha, 0x0060,
++			    "Firmware revision (flash) %u.%u.%u (%x).\n",
++			    ha->fw_revision[0], ha->fw_revision[1],
++			    ha->fw_revision[2], ha->fw_revision[3]);
++		}
+ 	}
+ 
+ 	/* Check for golden firmware and get version if available */
+@@ -3531,18 +3568,23 @@ qla24xx_get_flash_version(scsi_qla_host_t *vha, void *mbuf)
+ 
+ 	memset(ha->gold_fw_version, 0, sizeof(ha->gold_fw_version));
+ 	faddr = ha->flt_region_gold_fw;
+-	qla24xx_read_flash_data(vha, dcode, ha->flt_region_gold_fw, 8);
+-	if (qla24xx_risc_firmware_invalid(dcode)) {
+-		ql_log(ql_log_warn, vha, 0x0056,
+-		    "Unrecognized golden fw at %#x.\n", faddr);
+-		ql_dump_buffer(ql_dbg_init, vha, 0x0056, dcode, 32);
++	ret = qla24xx_read_flash_data(vha, dcode, ha->flt_region_gold_fw, 8);
++	if (ret) {
++		ql_log(ql_log_info, vha, 0x019f,
++		    "Unable to read Gold FW version (%x).\n", ret);
+ 		return ret;
+-	}
+-
+-	for (i = 0; i < 4; i++)
+-		ha->gold_fw_version[i] =
+-			be32_to_cpu((__force __be32)dcode[4+i]);
++	} else {
++		if (qla24xx_risc_firmware_invalid(dcode)) {
++			ql_log(ql_log_warn, vha, 0x0056,
++			    "Unrecognized golden fw at %#x.\n", faddr);
++			ql_dump_buffer(ql_dbg_init, vha, 0x0056, dcode, 32);
++			return QLA_FUNCTION_FAILED;
++		}
+ 
++		for (i = 0; i < 4; i++)
++			ha->gold_fw_version[i] =
++			   be32_to_cpu((__force __be32)dcode[4+i]);
++	}
+ 	return ret;
+ }
+ 
+diff --git a/drivers/scsi/sr_ioctl.c b/drivers/scsi/sr_ioctl.c
+index a0d2556a27bba..089653018d32c 100644
+--- a/drivers/scsi/sr_ioctl.c
++++ b/drivers/scsi/sr_ioctl.c
+@@ -431,7 +431,7 @@ int sr_select_speed(struct cdrom_device_info *cdi, unsigned long speed)
+ 	struct packet_command cgc;
+ 
+ 	/* avoid exceeding the max speed or overflowing integer bounds */
+-	speed = clamp(0, speed, 0xffff / 177);
++	speed = clamp(speed, 0, 0xffff / 177);
+ 
+ 	if (speed == 0)
+ 		speed = 0xffff;	/* set to max */
+diff --git a/drivers/soc/mediatek/mtk-mutex.c b/drivers/soc/mediatek/mtk-mutex.c
+index b5af1fb5847ea..01b129caf1eb2 100644
+--- a/drivers/soc/mediatek/mtk-mutex.c
++++ b/drivers/soc/mediatek/mtk-mutex.c
+@@ -524,6 +524,7 @@ static const unsigned int mt8188_mdp_mutex_table_mod[MUTEX_MOD_IDX_MAX] = {
+ 	[MUTEX_MOD_IDX_MDP_PAD0] = MT8195_MUTEX_MOD_MDP_PAD0,
+ 	[MUTEX_MOD_IDX_MDP_PAD2] = MT8195_MUTEX_MOD_MDP_PAD2,
+ 	[MUTEX_MOD_IDX_MDP_PAD3] = MT8195_MUTEX_MOD_MDP_PAD3,
++	[MUTEX_MOD_IDX_MDP_TCC0] = MT8195_MUTEX_MOD_MDP_TCC0,
+ 	[MUTEX_MOD_IDX_MDP_WROT0] = MT8195_MUTEX_MOD_MDP_WROT0,
+ 	[MUTEX_MOD_IDX_MDP_WROT2] = MT8195_MUTEX_MOD_MDP_WROT2,
+ 	[MUTEX_MOD_IDX_MDP_WROT3] = MT8195_MUTEX_MOD_MDP_WROT3,
+diff --git a/drivers/soc/qcom/icc-bwmon.c b/drivers/soc/qcom/icc-bwmon.c
+index fb323b3364db4..ecddb60bd6650 100644
+--- a/drivers/soc/qcom/icc-bwmon.c
++++ b/drivers/soc/qcom/icc-bwmon.c
+@@ -565,7 +565,7 @@ static void bwmon_start(struct icc_bwmon *bwmon)
+ 	int window;
+ 
+ 	/* No need to check for errors, as this must have succeeded before. */
+-	dev_pm_opp_find_bw_ceil(bwmon->dev, &bw_low, 0);
++	dev_pm_opp_put(dev_pm_opp_find_bw_ceil(bwmon->dev, &bw_low, 0));
+ 
+ 	bwmon_clear_counters(bwmon, true);
+ 
+@@ -772,11 +772,13 @@ static int bwmon_probe(struct platform_device *pdev)
+ 	opp = dev_pm_opp_find_bw_floor(dev, &bwmon->max_bw_kbps, 0);
+ 	if (IS_ERR(opp))
+ 		return dev_err_probe(dev, PTR_ERR(opp), "failed to find max peak bandwidth\n");
++	dev_pm_opp_put(opp);
+ 
+ 	bwmon->min_bw_kbps = 0;
+ 	opp = dev_pm_opp_find_bw_ceil(dev, &bwmon->min_bw_kbps, 0);
+ 	if (IS_ERR(opp))
+ 		return dev_err_probe(dev, PTR_ERR(opp), "failed to find min peak bandwidth\n");
++	dev_pm_opp_put(opp);
+ 
+ 	bwmon->dev = dev;
+ 
+diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
+index a1b6a4081dea7..216166e98fae4 100644
+--- a/drivers/soc/qcom/pdr_interface.c
++++ b/drivers/soc/qcom/pdr_interface.c
+@@ -76,12 +76,12 @@ static int pdr_locator_new_server(struct qmi_handle *qmi,
+ 					      locator_hdl);
+ 	struct pdr_service *pds;
+ 
++	mutex_lock(&pdr->lock);
+ 	/* Create a local client port for QMI communication */
+ 	pdr->locator_addr.sq_family = AF_QIPCRTR;
+ 	pdr->locator_addr.sq_node = svc->node;
+ 	pdr->locator_addr.sq_port = svc->port;
+ 
+-	mutex_lock(&pdr->lock);
+ 	pdr->locator_init_complete = true;
+ 	mutex_unlock(&pdr->lock);
+ 
+@@ -104,10 +104,10 @@ static void pdr_locator_del_server(struct qmi_handle *qmi,
+ 
+ 	mutex_lock(&pdr->lock);
+ 	pdr->locator_init_complete = false;
+-	mutex_unlock(&pdr->lock);
+ 
+ 	pdr->locator_addr.sq_node = 0;
+ 	pdr->locator_addr.sq_port = 0;
++	mutex_unlock(&pdr->lock);
+ }
+ 
+ static const struct qmi_ops pdr_locator_ops = {
+@@ -365,12 +365,14 @@ static int pdr_get_domain_list(struct servreg_get_domain_list_req *req,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	mutex_lock(&pdr->lock);
+ 	ret = qmi_send_request(&pdr->locator_hdl,
+ 			       &pdr->locator_addr,
+ 			       &txn, SERVREG_GET_DOMAIN_LIST_REQ,
+ 			       SERVREG_GET_DOMAIN_LIST_REQ_MAX_LEN,
+ 			       servreg_get_domain_list_req_ei,
+ 			       req);
++	mutex_unlock(&pdr->lock);
+ 	if (ret < 0) {
+ 		qmi_txn_cancel(&txn);
+ 		return ret;
+@@ -415,7 +417,7 @@ static int pdr_locate_service(struct pdr_handle *pdr, struct pdr_service *pds)
+ 		if (ret < 0)
+ 			goto out;
+ 
+-		for (i = domains_read; i < resp->domain_list_len; i++) {
++		for (i = 0; i < resp->domain_list_len; i++) {
+ 			entry = &resp->domain_list[i];
+ 
+ 			if (strnlen(entry->name, sizeof(entry->name)) == sizeof(entry->name))
+diff --git a/drivers/soc/qcom/pmic_glink.c b/drivers/soc/qcom/pmic_glink.c
+index 65279243072c3..9ebc0ba359477 100644
+--- a/drivers/soc/qcom/pmic_glink.c
++++ b/drivers/soc/qcom/pmic_glink.c
+@@ -373,8 +373,17 @@ static struct platform_driver pmic_glink_driver = {
+ 
+ static int pmic_glink_init(void)
+ {
+-	platform_driver_register(&pmic_glink_driver);
+-	register_rpmsg_driver(&pmic_glink_rpmsg_driver);
++	int ret;
++
++	ret = platform_driver_register(&pmic_glink_driver);
++	if (ret < 0)
++		return ret;
++
++	ret = register_rpmsg_driver(&pmic_glink_rpmsg_driver);
++	if (ret < 0) {
++		platform_driver_unregister(&pmic_glink_driver);
++		return ret;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
+index 561d8037b50a0..de86009ecd913 100644
+--- a/drivers/soc/qcom/rpmh-rsc.c
++++ b/drivers/soc/qcom/rpmh-rsc.c
+@@ -646,13 +646,14 @@ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
+ {
+ 	struct tcs_group *tcs;
+ 	int tcs_id;
+-	unsigned long flags;
++
++	might_sleep();
+ 
+ 	tcs = get_tcs_for_msg(drv, msg);
+ 	if (IS_ERR(tcs))
+ 		return PTR_ERR(tcs);
+ 
+-	spin_lock_irqsave(&drv->lock, flags);
++	spin_lock_irq(&drv->lock);
+ 
+ 	/* Wait forever for a free tcs. It better be there eventually! */
+ 	wait_event_lock_irq(drv->tcs_wait,
+@@ -670,7 +671,7 @@ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
+ 		write_tcs_reg_sync(drv, drv->regs[RSC_DRV_CMD_ENABLE], tcs_id, 0);
+ 		enable_tcs_irq(drv, tcs_id, true);
+ 	}
+-	spin_unlock_irqrestore(&drv->lock, flags);
++	spin_unlock_irq(&drv->lock);
+ 
+ 	/*
+ 	 * These two can be done after the lock is released because:
+diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
+index 9f26d7f9b9dc4..8903ed956312d 100644
+--- a/drivers/soc/qcom/rpmh.c
++++ b/drivers/soc/qcom/rpmh.c
+@@ -183,7 +183,6 @@ static int __rpmh_write(const struct device *dev, enum rpmh_state state,
+ 	}
+ 
+ 	if (state == RPMH_ACTIVE_ONLY_STATE) {
+-		WARN_ON(irqs_disabled());
+ 		ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msg->msg);
+ 	} else {
+ 		/* Clean up our call by spoofing tx_done */
+diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
+index 277c07a6603d4..41342c37916ae 100644
+--- a/drivers/soc/qcom/socinfo.c
++++ b/drivers/soc/qcom/socinfo.c
+@@ -133,7 +133,8 @@ static const char *const pmic_models[] = {
+ 	[72] = "PMR735D",
+ 	[73] = "PM8550",
+ 	[74] = "PMK8550",
+-	[82] = "SMB2360",
++	[82] = "PMC8380",
++	[83] = "SMB2360",
+ };
+ 
+ struct socinfo_params {
+diff --git a/drivers/soc/xilinx/xlnx_event_manager.c b/drivers/soc/xilinx/xlnx_event_manager.c
+index 253299e4214d0..366018f6a0ee0 100644
+--- a/drivers/soc/xilinx/xlnx_event_manager.c
++++ b/drivers/soc/xilinx/xlnx_event_manager.c
+@@ -3,6 +3,7 @@
+  * Xilinx Event Management Driver
+  *
+  *  Copyright (C) 2021 Xilinx, Inc.
++ *  Copyright (C) 2024 Advanced Micro Devices, Inc.
+  *
+  *  Abhyuday Godhasara <abhyuday.godhasara@xilinx.com>
+  */
+@@ -19,7 +20,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ 
+-static DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number1);
++static DEFINE_PER_CPU_READ_MOSTLY(int, dummy_cpu_number);
+ 
+ static int virq_sgi;
+ static int event_manager_availability = -EACCES;
+@@ -570,7 +571,6 @@ static void xlnx_disable_percpu_irq(void *data)
+ static int xlnx_event_init_sgi(struct platform_device *pdev)
+ {
+ 	int ret = 0;
+-	int cpu;
+ 	/*
+ 	 * IRQ related structures are used for the following:
+ 	 * for each SGI interrupt ensure its mapped by GIC IRQ domain
+@@ -607,11 +607,8 @@ static int xlnx_event_init_sgi(struct platform_device *pdev)
+ 	sgi_fwspec.param[0] = sgi_num;
+ 	virq_sgi = irq_create_fwspec_mapping(&sgi_fwspec);
+ 
+-	cpu = get_cpu();
+-	per_cpu(cpu_number1, cpu) = cpu;
+ 	ret = request_percpu_irq(virq_sgi, xlnx_event_handler, "xlnx_event_mgmt",
+-				 &cpu_number1);
+-	put_cpu();
++				 &dummy_cpu_number);
+ 
+ 	WARN_ON(ret);
+ 	if (ret) {
+@@ -627,16 +624,12 @@ static int xlnx_event_init_sgi(struct platform_device *pdev)
+ 
+ static void xlnx_event_cleanup_sgi(struct platform_device *pdev)
+ {
+-	int cpu = smp_processor_id();
+-
+-	per_cpu(cpu_number1, cpu) = cpu;
+-
+ 	cpuhp_remove_state(CPUHP_AP_ONLINE_DYN);
+ 
+ 	on_each_cpu(xlnx_disable_percpu_irq, NULL, 1);
+ 
+ 	irq_clear_status_flags(virq_sgi, IRQ_PER_CPU);
+-	free_percpu_irq(virq_sgi, &cpu_number1);
++	free_percpu_irq(virq_sgi, &dummy_cpu_number);
+ 	irq_dispose_mapping(virq_sgi);
+ }
+ 
+diff --git a/drivers/soc/xilinx/zynqmp_power.c b/drivers/soc/xilinx/zynqmp_power.c
+index 965b1143936ab..b82c01373f532 100644
+--- a/drivers/soc/xilinx/zynqmp_power.c
++++ b/drivers/soc/xilinx/zynqmp_power.c
+@@ -190,7 +190,9 @@ static int zynqmp_pm_probe(struct platform_device *pdev)
+ 	u32 pm_api_version;
+ 	struct mbox_client *client;
+ 
+-	zynqmp_pm_get_api_version(&pm_api_version);
++	ret = zynqmp_pm_get_api_version(&pm_api_version);
++	if (ret)
++		return ret;
+ 
+ 	/* Check PM API version number */
+ 	if (pm_api_version < ZYNQMP_PM_VERSION)
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index 370c4d1572ed0..5aaff3bee1b78 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -756,8 +756,15 @@ static int __maybe_unused atmel_qspi_resume(struct device *dev)
+ 	struct atmel_qspi *aq = spi_controller_get_devdata(ctrl);
+ 	int ret;
+ 
+-	clk_prepare(aq->pclk);
+-	clk_prepare(aq->qspick);
++	ret = clk_prepare(aq->pclk);
++	if (ret)
++		return ret;
++
++	ret = clk_prepare(aq->qspick);
++	if (ret) {
++		clk_unprepare(aq->pclk);
++		return ret;
++	}
+ 
+ 	ret = pm_runtime_force_resume(dev);
+ 	if (ret < 0)
+diff --git a/drivers/spi/spi-microchip-core.c b/drivers/spi/spi-microchip-core.c
+index 634364c7cfe61..99c25e6a937fd 100644
+--- a/drivers/spi/spi-microchip-core.c
++++ b/drivers/spi/spi-microchip-core.c
+@@ -21,7 +21,7 @@
+ #include <linux/spi/spi.h>
+ 
+ #define MAX_LEN				(0xffff)
+-#define MAX_CS				(8)
++#define MAX_CS				(1)
+ #define DEFAULT_FRAMESIZE		(8)
+ #define FIFO_DEPTH			(32)
+ #define CLK_GEN_MODE1_MAX		(255)
+@@ -75,6 +75,7 @@
+ 
+ #define REG_CONTROL		(0x00)
+ #define REG_FRAME_SIZE		(0x04)
++#define  FRAME_SIZE_MASK	GENMASK(5, 0)
+ #define REG_STATUS		(0x08)
+ #define REG_INT_CLEAR		(0x0c)
+ #define REG_RX_DATA		(0x10)
+@@ -89,6 +90,9 @@
+ #define REG_RIS			(0x24)
+ #define REG_CONTROL2		(0x28)
+ #define REG_COMMAND		(0x2c)
++#define  COMMAND_CLRFRAMECNT	BIT(4)
++#define  COMMAND_TXFIFORST		BIT(3)
++#define  COMMAND_RXFIFORST		BIT(2)
+ #define REG_PKTSIZE		(0x30)
+ #define REG_CMD_SIZE		(0x34)
+ #define REG_HWSTATUS		(0x38)
+@@ -103,6 +107,7 @@ struct mchp_corespi {
+ 	u8 *rx_buf;
+ 	u32 clk_gen; /* divider for spi output clock generated by the controller */
+ 	u32 clk_mode;
++	u32 pending_slave_select;
+ 	int irq;
+ 	int tx_len;
+ 	int rx_len;
+@@ -148,62 +153,59 @@ static inline void mchp_corespi_read_fifo(struct mchp_corespi *spi)
+ 
+ static void mchp_corespi_enable_ints(struct mchp_corespi *spi)
+ {
+-	u32 control, mask = INT_ENABLE_MASK;
+-
+-	mchp_corespi_disable(spi);
+-
+-	control = mchp_corespi_read(spi, REG_CONTROL);
+-
+-	control |= mask;
+-	mchp_corespi_write(spi, REG_CONTROL, control);
++	u32 control = mchp_corespi_read(spi, REG_CONTROL);
+ 
+-	control |= CONTROL_ENABLE;
++	control |= INT_ENABLE_MASK;
+ 	mchp_corespi_write(spi, REG_CONTROL, control);
+ }
+ 
+ static void mchp_corespi_disable_ints(struct mchp_corespi *spi)
+ {
+-	u32 control, mask = INT_ENABLE_MASK;
+-
+-	mchp_corespi_disable(spi);
+-
+-	control = mchp_corespi_read(spi, REG_CONTROL);
+-	control &= ~mask;
+-	mchp_corespi_write(spi, REG_CONTROL, control);
++	u32 control = mchp_corespi_read(spi, REG_CONTROL);
+ 
+-	control |= CONTROL_ENABLE;
++	control &= ~INT_ENABLE_MASK;
+ 	mchp_corespi_write(spi, REG_CONTROL, control);
+ }
+ 
+ static inline void mchp_corespi_set_xfer_size(struct mchp_corespi *spi, int len)
+ {
+ 	u32 control;
+-	u16 lenpart;
++	u32 lenpart;
++	u32 frames = mchp_corespi_read(spi, REG_FRAMESUP);
+ 
+ 	/*
+-	 * Disable the SPI controller. Writes to transfer length have
+-	 * no effect when the controller is enabled.
++	 * Writing to FRAMECNT in REG_CONTROL will reset the frame count, taking
++	 * a shortcut requires an explicit clear.
+ 	 */
+-	mchp_corespi_disable(spi);
++	if (frames == len) {
++		mchp_corespi_write(spi, REG_COMMAND, COMMAND_CLRFRAMECNT);
++		return;
++	}
+ 
+ 	/*
+ 	 * The lower 16 bits of the frame count are stored in the control reg
+ 	 * for legacy reasons, but the upper 16 written to a different register:
+ 	 * FRAMESUP. While both the upper and lower bits can be *READ* from the
+-	 * FRAMESUP register, writing to the lower 16 bits is a NOP
++	 * FRAMESUP register, writing to the lower 16 bits is (supposedly) a NOP.
++	 *
++	 * The driver used to disable the controller while modifying the frame
++	 * count, and mask off the lower 16 bits of len while writing to
++	 * FRAMES_UP. When the driver was changed to disable the controller as
++	 * infrequently as possible, it was discovered that the logic of
++	 * lenpart = len & 0xffff_0000
++	 * write(REG_FRAMESUP, lenpart)
++	 * would actually write zeros into the lower 16 bits on an mpfs250t-es,
++	 * despite documentation stating these bits were read-only.
++	 * Writing len unmasked into FRAMES_UP ensures those bits aren't zeroed
++	 * on an mpfs250t-es and will be a NOP for the lower 16 bits on hardware
++	 * that matches the documentation.
+ 	 */
+ 	lenpart = len & 0xffff;
+-
+ 	control = mchp_corespi_read(spi, REG_CONTROL);
+ 	control &= ~CONTROL_FRAMECNT_MASK;
+ 	control |= lenpart << CONTROL_FRAMECNT_SHIFT;
+ 	mchp_corespi_write(spi, REG_CONTROL, control);
+-
+-	lenpart = len & 0xffff0000;
+-	mchp_corespi_write(spi, REG_FRAMESUP, lenpart);
+-
+-	control |= CONTROL_ENABLE;
+-	mchp_corespi_write(spi, REG_CONTROL, control);
++	mchp_corespi_write(spi, REG_FRAMESUP, len);
+ }
+ 
+ static inline void mchp_corespi_write_fifo(struct mchp_corespi *spi)
+@@ -226,17 +228,22 @@ static inline void mchp_corespi_write_fifo(struct mchp_corespi *spi)
+ 
+ static inline void mchp_corespi_set_framesize(struct mchp_corespi *spi, int bt)
+ {
++	u32 frame_size = mchp_corespi_read(spi, REG_FRAME_SIZE);
+ 	u32 control;
+ 
++	if ((frame_size & FRAME_SIZE_MASK) == bt)
++		return;
++
+ 	/*
+ 	 * Disable the SPI controller. Writes to the frame size have
+ 	 * no effect when the controller is enabled.
+ 	 */
+-	mchp_corespi_disable(spi);
++	control = mchp_corespi_read(spi, REG_CONTROL);
++	control &= ~CONTROL_ENABLE;
++	mchp_corespi_write(spi, REG_CONTROL, control);
+ 
+ 	mchp_corespi_write(spi, REG_FRAME_SIZE, bt);
+ 
+-	control = mchp_corespi_read(spi, REG_CONTROL);
+ 	control |= CONTROL_ENABLE;
+ 	mchp_corespi_write(spi, REG_CONTROL, control);
+ }
+@@ -249,8 +256,18 @@ static void mchp_corespi_set_cs(struct spi_device *spi, bool disable)
+ 	reg = mchp_corespi_read(corespi, REG_SLAVE_SELECT);
+ 	reg &= ~BIT(spi_get_chipselect(spi, 0));
+ 	reg |= !disable << spi_get_chipselect(spi, 0);
++	corespi->pending_slave_select = reg;
+ 
+-	mchp_corespi_write(corespi, REG_SLAVE_SELECT, reg);
++	/*
++	 * Only deassert chip select immediately. Writing to some registers
++	 * requires the controller to be disabled, which results in the
++	 * output pins being tristated and can cause the SCLK and MOSI lines
++	 * to transition. Therefore asserting the chip select is deferred
++	 * until just before writing to the TX FIFO, to ensure the device
++	 * doesn't see any spurious clock transitions whilst CS is enabled.
++	 */
++	if (((spi->mode & SPI_CS_HIGH) == 0) == disable)
++		mchp_corespi_write(corespi, REG_SLAVE_SELECT, reg);
+ }
+ 
+ static int mchp_corespi_setup(struct spi_device *spi)
+@@ -266,6 +283,7 @@ static int mchp_corespi_setup(struct spi_device *spi)
+ 	if (spi->mode & SPI_CS_HIGH) {
+ 		reg = mchp_corespi_read(corespi, REG_SLAVE_SELECT);
+ 		reg |= BIT(spi_get_chipselect(spi, 0));
++		corespi->pending_slave_select = reg;
+ 		mchp_corespi_write(corespi, REG_SLAVE_SELECT, reg);
+ 	}
+ 	return 0;
+@@ -276,17 +294,13 @@ static void mchp_corespi_init(struct spi_controller *host, struct mchp_corespi *
+ 	unsigned long clk_hz;
+ 	u32 control = mchp_corespi_read(spi, REG_CONTROL);
+ 
+-	control |= CONTROL_MASTER;
++	control &= ~CONTROL_ENABLE;
++	mchp_corespi_write(spi, REG_CONTROL, control);
+ 
++	control |= CONTROL_MASTER;
+ 	control &= ~CONTROL_MODE_MASK;
+ 	control |= MOTOROLA_MODE;
+ 
+-	mchp_corespi_set_framesize(spi, DEFAULT_FRAMESIZE);
+-
+-	/* max. possible spi clock rate is the apb clock rate */
+-	clk_hz = clk_get_rate(spi->clk);
+-	host->max_speed_hz = clk_hz;
+-
+ 	/*
+ 	 * The controller must be configured so that it doesn't remove Chip
+ 	 * Select until the entire message has been transferred, even if at
+@@ -295,11 +309,16 @@ static void mchp_corespi_init(struct spi_controller *host, struct mchp_corespi *
+ 	 * BIGFIFO mode is also enabled, which sets the fifo depth to 32 frames
+ 	 * for the 8 bit transfers that this driver uses.
+ 	 */
+-	control = mchp_corespi_read(spi, REG_CONTROL);
+ 	control |= CONTROL_SPS | CONTROL_BIGFIFO;
+ 
+ 	mchp_corespi_write(spi, REG_CONTROL, control);
+ 
++	mchp_corespi_set_framesize(spi, DEFAULT_FRAMESIZE);
++
++	/* max. possible spi clock rate is the apb clock rate */
++	clk_hz = clk_get_rate(spi->clk);
++	host->max_speed_hz = clk_hz;
++
+ 	mchp_corespi_enable_ints(spi);
+ 
+ 	/*
+@@ -307,7 +326,8 @@ static void mchp_corespi_init(struct spi_controller *host, struct mchp_corespi *
+ 	 * select is relinquished to the hardware. SSELOUT is enabled too so we
+ 	 * can deal with active high targets.
+ 	 */
+-	mchp_corespi_write(spi, REG_SLAVE_SELECT, SSELOUT | SSEL_DIRECT);
++	spi->pending_slave_select = SSELOUT | SSEL_DIRECT;
++	mchp_corespi_write(spi, REG_SLAVE_SELECT, spi->pending_slave_select);
+ 
+ 	control = mchp_corespi_read(spi, REG_CONTROL);
+ 
+@@ -321,8 +341,6 @@ static inline void mchp_corespi_set_clk_gen(struct mchp_corespi *spi)
+ {
+ 	u32 control;
+ 
+-	mchp_corespi_disable(spi);
+-
+ 	control = mchp_corespi_read(spi, REG_CONTROL);
+ 	if (spi->clk_mode)
+ 		control |= CONTROL_CLKMODE;
+@@ -331,12 +349,12 @@ static inline void mchp_corespi_set_clk_gen(struct mchp_corespi *spi)
+ 
+ 	mchp_corespi_write(spi, REG_CLK_GEN, spi->clk_gen);
+ 	mchp_corespi_write(spi, REG_CONTROL, control);
+-	mchp_corespi_write(spi, REG_CONTROL, control | CONTROL_ENABLE);
+ }
+ 
+ static inline void mchp_corespi_set_mode(struct mchp_corespi *spi, unsigned int mode)
+ {
+-	u32 control, mode_val;
++	u32 mode_val;
++	u32 control = mchp_corespi_read(spi, REG_CONTROL);
+ 
+ 	switch (mode & SPI_MODE_X_MASK) {
+ 	case SPI_MODE_0:
+@@ -354,12 +372,13 @@ static inline void mchp_corespi_set_mode(struct mchp_corespi *spi, unsigned int
+ 	}
+ 
+ 	/*
+-	 * Disable the SPI controller. Writes to the frame size have
++	 * Disable the SPI controller. Writes to the frame protocol have
+ 	 * no effect when the controller is enabled.
+ 	 */
+-	mchp_corespi_disable(spi);
+ 
+-	control = mchp_corespi_read(spi, REG_CONTROL);
++	control &= ~CONTROL_ENABLE;
++	mchp_corespi_write(spi, REG_CONTROL, control);
++
+ 	control &= ~(SPI_MODE_X_MASK << MODE_X_MASK_SHIFT);
+ 	control |= mode_val;
+ 
+@@ -380,21 +399,18 @@ static irqreturn_t mchp_corespi_interrupt(int irq, void *dev_id)
+ 	if (intfield == 0)
+ 		return IRQ_NONE;
+ 
+-	if (intfield & INT_TXDONE) {
++	if (intfield & INT_TXDONE)
+ 		mchp_corespi_write(spi, REG_INT_CLEAR, INT_TXDONE);
+ 
++	if (intfield & INT_RXRDY) {
++		mchp_corespi_write(spi, REG_INT_CLEAR, INT_RXRDY);
++
+ 		if (spi->rx_len)
+ 			mchp_corespi_read_fifo(spi);
+-
+-		if (spi->tx_len)
+-			mchp_corespi_write_fifo(spi);
+-
+-		if (!spi->rx_len)
+-			finalise = true;
+ 	}
+ 
+-	if (intfield & INT_RXRDY)
+-		mchp_corespi_write(spi, REG_INT_CLEAR, INT_RXRDY);
++	if (!spi->rx_len && !spi->tx_len)
++		finalise = true;
+ 
+ 	if (intfield & INT_RX_CHANNEL_OVERFLOW) {
+ 		mchp_corespi_write(spi, REG_INT_CLEAR, INT_RX_CHANNEL_OVERFLOW);
+@@ -479,8 +495,13 @@ static int mchp_corespi_transfer_one(struct spi_controller *host,
+ 	mchp_corespi_set_xfer_size(spi, (spi->tx_len > FIFO_DEPTH)
+ 				   ? FIFO_DEPTH : spi->tx_len);
+ 
+-	if (spi->tx_len)
++	mchp_corespi_write(spi, REG_COMMAND, COMMAND_RXFIFORST | COMMAND_TXFIFORST);
++
++	mchp_corespi_write(spi, REG_SLAVE_SELECT, spi->pending_slave_select);
++
++	while (spi->tx_len)
+ 		mchp_corespi_write_fifo(spi);
++
+ 	return 1;
+ }
+ 
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index 95fb5f1c91c17..05e6d007f9a7f 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -734,6 +734,7 @@ static const struct of_device_id spidev_dt_ids[] = {
+ 	{ .compatible = "lwn,bk4", .data = &spidev_of_check },
+ 	{ .compatible = "menlo,m53cpld", .data = &spidev_of_check },
+ 	{ .compatible = "micron,spi-authenta", .data = &spidev_of_check },
++	{ .compatible = "rohm,bh2228fv", .data = &spidev_of_check },
+ 	{ .compatible = "rohm,dh2228fv", .data = &spidev_of_check },
+ 	{ .compatible = "semtech,sx1301", .data = &spidev_of_check },
+ 	{ .compatible = "silabs,em3581", .data = &spidev_of_check },
+diff --git a/drivers/thermal/broadcom/bcm2835_thermal.c b/drivers/thermal/broadcom/bcm2835_thermal.c
+index 5c1cebe075801..3b1030fc4fbfe 100644
+--- a/drivers/thermal/broadcom/bcm2835_thermal.c
++++ b/drivers/thermal/broadcom/bcm2835_thermal.c
+@@ -185,7 +185,7 @@ static int bcm2835_thermal_probe(struct platform_device *pdev)
+ 		return err;
+ 	}
+ 
+-	data->clk = devm_clk_get(&pdev->dev, NULL);
++	data->clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ 	if (IS_ERR(data->clk)) {
+ 		err = PTR_ERR(data->clk);
+ 		if (err != -EPROBE_DEFER)
+@@ -193,10 +193,6 @@ static int bcm2835_thermal_probe(struct platform_device *pdev)
+ 		return err;
+ 	}
+ 
+-	err = clk_prepare_enable(data->clk);
+-	if (err)
+-		return err;
+-
+ 	rate = clk_get_rate(data->clk);
+ 	if ((rate < 1920000) || (rate > 5000000))
+ 		dev_warn(&pdev->dev,
+@@ -211,7 +207,7 @@ static int bcm2835_thermal_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev,
+ 			"Failed to register the thermal device: %d\n",
+ 			err);
+-		goto err_clk;
++		return err;
+ 	}
+ 
+ 	/*
+@@ -236,7 +232,7 @@ static int bcm2835_thermal_probe(struct platform_device *pdev)
+ 			dev_err(&pdev->dev,
+ 				"Not able to read trip_temp: %d\n",
+ 				err);
+-			goto err_tz;
++			return err;
+ 		}
+ 
+ 		/* set bandgap reference voltage and enable voltage regulator */
+@@ -269,17 +265,11 @@ static int bcm2835_thermal_probe(struct platform_device *pdev)
+ 	 */
+ 	err = thermal_add_hwmon_sysfs(tz);
+ 	if (err)
+-		goto err_tz;
++		return err;
+ 
+ 	bcm2835_thermal_debugfs(pdev);
+ 
+ 	return 0;
+-err_tz:
+-	devm_thermal_of_zone_unregister(&pdev->dev, tz);
+-err_clk:
+-	clk_disable_unprepare(data->clk);
+-
+-	return err;
+ }
+ 
+ static void bcm2835_thermal_remove(struct platform_device *pdev)
+@@ -287,7 +277,6 @@ static void bcm2835_thermal_remove(struct platform_device *pdev)
+ 	struct bcm2835_thermal_data *data = platform_get_drvdata(pdev);
+ 
+ 	debugfs_remove_recursive(data->debugfsdir);
+-	clk_disable_unprepare(data->clk);
+ }
+ 
+ static struct platform_driver bcm2835_thermal_driver = {
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 4e7fec406ee59..f2d31bc48f529 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -272,6 +272,44 @@ static int __init thermal_register_governors(void)
+ 	return ret;
+ }
+ 
++static int __thermal_zone_device_set_mode(struct thermal_zone_device *tz,
++					  enum thermal_device_mode mode)
++{
++	if (tz->ops.change_mode) {
++		int ret;
++
++		ret = tz->ops.change_mode(tz, mode);
++		if (ret)
++			return ret;
++	}
++
++	tz->mode = mode;
++
++	return 0;
++}
++
++static void thermal_zone_broken_disable(struct thermal_zone_device *tz)
++{
++	struct thermal_trip_desc *td;
++
++	dev_err(&tz->device, "Unable to get temperature, disabling!\n");
++	/*
++	 * This function only runs for enabled thermal zones, so no need to
++	 * check for the current mode.
++	 */
++	__thermal_zone_device_set_mode(tz, THERMAL_DEVICE_DISABLED);
++	thermal_notify_tz_disable(tz);
++
++	for_each_trip_desc(tz, td) {
++		if (td->trip.type == THERMAL_TRIP_CRITICAL &&
++		    td->trip.temperature > THERMAL_TEMP_INVALID) {
++			dev_crit(&tz->device,
++				 "Disabled thermal zone with critical trip point\n");
++			return;
++		}
++	}
++}
++
+ /*
+  * Zone update section: main control loop applied to each zone while monitoring
+  * in polling mode. The monitoring is done using a workqueue.
+@@ -292,6 +330,34 @@ static void thermal_zone_device_set_polling(struct thermal_zone_device *tz,
+ 		cancel_delayed_work(&tz->poll_queue);
+ }
+ 
++static void thermal_zone_recheck(struct thermal_zone_device *tz, int error)
++{
++	if (error == -EAGAIN) {
++		thermal_zone_device_set_polling(tz, THERMAL_RECHECK_DELAY);
++		return;
++	}
++
++	/*
++	 * Print the message once to reduce log noise.  It will be followed by
++	 * another one if the temperature cannot be determined after multiple
++	 * attempts.
++	 */
++	if (tz->recheck_delay_jiffies == THERMAL_RECHECK_DELAY)
++		dev_info(&tz->device, "Temperature check failed (%d)\n", error);
++
++	thermal_zone_device_set_polling(tz, tz->recheck_delay_jiffies);
++
++	tz->recheck_delay_jiffies += max(tz->recheck_delay_jiffies >> 1, 1ULL);
++	if (tz->recheck_delay_jiffies > THERMAL_MAX_RECHECK_DELAY) {
++		thermal_zone_broken_disable(tz);
++		/*
++		 * Restore the original recheck delay value to allow the thermal
++		 * zone to try to recover when it is reenabled by user space.
++		 */
++		tz->recheck_delay_jiffies = THERMAL_RECHECK_DELAY;
++	}
++}
++
+ static void monitor_thermal_zone(struct thermal_zone_device *tz)
+ {
+ 	if (tz->mode != THERMAL_DEVICE_ENABLED)
+@@ -488,10 +554,7 @@ void __thermal_zone_device_update(struct thermal_zone_device *tz,
+ 
+ 	ret = __thermal_zone_get_temp(tz, &temp);
+ 	if (ret) {
+-		if (ret != -EAGAIN)
+-			dev_info(&tz->device, "Temperature check failed (%d)\n", ret);
+-
+-		thermal_zone_device_set_polling(tz, msecs_to_jiffies(THERMAL_RECHECK_DELAY_MS));
++		thermal_zone_recheck(tz, ret);
+ 		return;
+ 	} else if (temp <= THERMAL_TEMP_INVALID) {
+ 		/*
+@@ -503,6 +566,8 @@ void __thermal_zone_device_update(struct thermal_zone_device *tz,
+ 		goto monitor;
+ 	}
+ 
++	tz->recheck_delay_jiffies = THERMAL_RECHECK_DELAY;
++
+ 	tz->last_temperature = tz->temperature;
+ 	tz->temperature = temp;
+ 
+@@ -537,7 +602,7 @@ void __thermal_zone_device_update(struct thermal_zone_device *tz,
+ static int thermal_zone_device_set_mode(struct thermal_zone_device *tz,
+ 					enum thermal_device_mode mode)
+ {
+-	int ret = 0;
++	int ret;
+ 
+ 	mutex_lock(&tz->lock);
+ 
+@@ -545,14 +610,15 @@ static int thermal_zone_device_set_mode(struct thermal_zone_device *tz,
+ 	if (mode == tz->mode) {
+ 		mutex_unlock(&tz->lock);
+ 
+-		return ret;
++		return 0;
+ 	}
+ 
+-	if (tz->ops.change_mode)
+-		ret = tz->ops.change_mode(tz, mode);
++	ret = __thermal_zone_device_set_mode(tz, mode);
++	if (ret) {
++		mutex_unlock(&tz->lock);
+ 
+-	if (!ret)
+-		tz->mode = mode;
++		return ret;
++	}
+ 
+ 	__thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED);
+ 
+@@ -563,7 +629,7 @@ static int thermal_zone_device_set_mode(struct thermal_zone_device *tz,
+ 	else
+ 		thermal_notify_tz_disable(tz);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ int thermal_zone_device_enable(struct thermal_zone_device *tz)
+@@ -1433,6 +1499,7 @@ thermal_zone_device_register_with_trips(const char *type,
+ 
+ 	thermal_set_delay_jiffies(&tz->passive_delay_jiffies, passive_delay);
+ 	thermal_set_delay_jiffies(&tz->polling_delay_jiffies, polling_delay);
++	tz->recheck_delay_jiffies = THERMAL_RECHECK_DELAY;
+ 
+ 	/* sys I/F */
+ 	/* Add nodes that are always present via .groups */
+diff --git a/drivers/thermal/thermal_core.h b/drivers/thermal/thermal_core.h
+index 5afd541d54b0b..56113c9db5755 100644
+--- a/drivers/thermal/thermal_core.h
++++ b/drivers/thermal/thermal_core.h
+@@ -67,6 +67,8 @@ struct thermal_governor {
+  * @polling_delay_jiffies: number of jiffies to wait between polls when
+  *			checking whether trip points have been crossed (0 for
+  *			interrupt driven systems)
++ * @recheck_delay_jiffies: delay after a failed attempt to determine the zone
++ * 			temperature before trying again
+  * @temperature:	current temperature.  This is only for core code,
+  *			drivers should use thermal_zone_get_temp() to get the
+  *			current temperature
+@@ -108,6 +110,7 @@ struct thermal_zone_device {
+ 	int num_trips;
+ 	unsigned long passive_delay_jiffies;
+ 	unsigned long polling_delay_jiffies;
++	unsigned long recheck_delay_jiffies;
+ 	int temperature;
+ 	int last_temperature;
+ 	int emul_temperature;
+@@ -137,10 +140,11 @@ struct thermal_zone_device {
+ #define THERMAL_TEMP_INIT	INT_MIN
+ 
+ /*
+- * Default delay after a failing thermal zone temperature check before
+- * attempting to check it again.
++ * Default and maximum delay after a failed thermal zone temperature check
++ * before attempting to check it again (in jiffies).
+  */
+-#define THERMAL_RECHECK_DELAY_MS	250
++#define THERMAL_RECHECK_DELAY		msecs_to_jiffies(250)
++#define THERMAL_MAX_RECHECK_DELAY	(120 * HZ)
+ 
+ /* Default Thermal Governor */
+ #if defined(CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE)
+diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
+index c532416aec229..408fef9c6fd66 100644
+--- a/drivers/ufs/core/ufs-mcq.c
++++ b/drivers/ufs/core/ufs-mcq.c
+@@ -230,8 +230,6 @@ int ufshcd_mcq_memory_alloc(struct ufs_hba *hba)
+ 
+ /* Operation and runtime registers configuration */
+ #define MCQ_CFG_n(r, i)	((r) + MCQ_QCFG_SIZE * (i))
+-#define MCQ_OPR_OFFSET_n(p, i) \
+-	(hba->mcq_opr[(p)].offset + hba->mcq_opr[(p)].stride * (i))
+ 
+ static void __iomem *mcq_opr_base(struct ufs_hba *hba,
+ 					 enum ufshcd_mcq_opr n, int i)
+@@ -342,10 +340,10 @@ void ufshcd_mcq_make_queues_operational(struct ufs_hba *hba)
+ 		ufsmcq_writelx(hba, upper_32_bits(hwq->sqe_dma_addr),
+ 			      MCQ_CFG_n(REG_SQUBA, i));
+ 		/* Submission Queue Doorbell Address Offset */
+-		ufsmcq_writelx(hba, MCQ_OPR_OFFSET_n(OPR_SQD, i),
++		ufsmcq_writelx(hba, ufshcd_mcq_opr_offset(hba, OPR_SQD, i),
+ 			      MCQ_CFG_n(REG_SQDAO, i));
+ 		/* Submission Queue Interrupt Status Address Offset */
+-		ufsmcq_writelx(hba, MCQ_OPR_OFFSET_n(OPR_SQIS, i),
++		ufsmcq_writelx(hba, ufshcd_mcq_opr_offset(hba, OPR_SQIS, i),
+ 			      MCQ_CFG_n(REG_SQISAO, i));
+ 
+ 		/* Completion Queue Lower Base Address */
+@@ -355,10 +353,10 @@ void ufshcd_mcq_make_queues_operational(struct ufs_hba *hba)
+ 		ufsmcq_writelx(hba, upper_32_bits(hwq->cqe_dma_addr),
+ 			      MCQ_CFG_n(REG_CQUBA, i));
+ 		/* Completion Queue Doorbell Address Offset */
+-		ufsmcq_writelx(hba, MCQ_OPR_OFFSET_n(OPR_CQD, i),
++		ufsmcq_writelx(hba, ufshcd_mcq_opr_offset(hba, OPR_CQD, i),
+ 			      MCQ_CFG_n(REG_CQDAO, i));
+ 		/* Completion Queue Interrupt Status Address Offset */
+-		ufsmcq_writelx(hba, MCQ_OPR_OFFSET_n(OPR_CQIS, i),
++		ufsmcq_writelx(hba, ufshcd_mcq_opr_offset(hba, OPR_CQIS, i),
+ 			      MCQ_CFG_n(REG_CQISAO, i));
+ 
+ 		/* Save the base addresses for quicker access */
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 05881153883ec..dc1e345ab67ea 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -50,6 +50,7 @@
+ #define PCI_DEVICE_ID_INTEL_DENVERTON_XHCI		0x19d0
+ #define PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI		0x8a13
+ #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI		0x9a13
++#define PCI_DEVICE_ID_INTEL_TIGER_LAKE_PCH_XHCI		0xa0ed
+ #define PCI_DEVICE_ID_INTEL_COMET_LAKE_XHCI		0xa3af
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI		0x51ed
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_PCH_XHCI	0x54ed
+@@ -373,7 +374,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 		xhci->quirks |= XHCI_MISSING_CAS;
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+-	    (pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI ||
++	    (pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_PCH_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_PCH_XHCI))
+ 		xhci->quirks |= XHCI_RESET_TO_DEFAULT;
+ 
+diff --git a/drivers/usb/typec/mux/nb7vpq904m.c b/drivers/usb/typec/mux/nb7vpq904m.c
+index b17826713753a..9fe4ce6f62ac0 100644
+--- a/drivers/usb/typec/mux/nb7vpq904m.c
++++ b/drivers/usb/typec/mux/nb7vpq904m.c
+@@ -413,7 +413,7 @@ static int nb7vpq904m_probe(struct i2c_client *client)
+ 
+ 	ret = nb7vpq904m_parse_data_lanes_mapping(nb7);
+ 	if (ret)
+-		return ret;
++		goto err_switch_put;
+ 
+ 	ret = regulator_enable(nb7->vcc_supply);
+ 	if (ret)
+@@ -456,6 +456,9 @@ static int nb7vpq904m_probe(struct i2c_client *client)
+ 	gpiod_set_value(nb7->enable_gpio, 0);
+ 	regulator_disable(nb7->vcc_supply);
+ 
++err_switch_put:
++	typec_switch_put(nb7->typec_switch);
++
+ 	return ret;
+ }
+ 
+@@ -469,6 +472,8 @@ static void nb7vpq904m_remove(struct i2c_client *client)
+ 	gpiod_set_value(nb7->enable_gpio, 0);
+ 
+ 	regulator_disable(nb7->vcc_supply);
++
++	typec_switch_put(nb7->typec_switch);
+ }
+ 
+ static const struct i2c_device_id nb7vpq904m_table[] = {
+diff --git a/drivers/usb/typec/mux/ptn36502.c b/drivers/usb/typec/mux/ptn36502.c
+index 0ec86ef32a871..88136a6d6f313 100644
+--- a/drivers/usb/typec/mux/ptn36502.c
++++ b/drivers/usb/typec/mux/ptn36502.c
+@@ -322,8 +322,10 @@ static int ptn36502_probe(struct i2c_client *client)
+ 				     "Failed to acquire orientation-switch\n");
+ 
+ 	ret = regulator_enable(ptn->vdd18_supply);
+-	if (ret)
+-		return dev_err_probe(dev, ret, "Failed to enable vdd18\n");
++	if (ret) {
++		ret = dev_err_probe(dev, ret, "Failed to enable vdd18\n");
++		goto err_switch_put;
++	}
+ 
+ 	ret = ptn36502_detect(ptn);
+ 	if (ret)
+@@ -363,6 +365,9 @@ static int ptn36502_probe(struct i2c_client *client)
+ err_disable_regulator:
+ 	regulator_disable(ptn->vdd18_supply);
+ 
++err_switch_put:
++	typec_switch_put(ptn->typec_switch);
++
+ 	return ret;
+ }
+ 
+@@ -374,6 +379,8 @@ static void ptn36502_remove(struct i2c_client *client)
+ 	typec_switch_unregister(ptn->sw);
+ 
+ 	regulator_disable(ptn->vdd18_supply);
++
++	typec_switch_put(ptn->typec_switch);
+ }
+ 
+ static const struct i2c_device_id ptn36502_table[] = {
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index ec20ecff85c7f..bf664ec9341b3 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -667,6 +667,7 @@ static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
+ 	}
+ 
+ 	vsock->guest_cid = 0; /* no CID assigned yet */
++	vsock->seqpacket_allow = false;
+ 
+ 	atomic_set(&vsock->queued_replies, 0);
+ 
+@@ -810,8 +811,7 @@ static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features)
+ 			goto err;
+ 	}
+ 
+-	if (features & (1ULL << VIRTIO_VSOCK_F_SEQPACKET))
+-		vsock->seqpacket_allow = true;
++	vsock->seqpacket_allow = features & (1ULL << VIRTIO_VSOCK_F_SEQPACKET);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) {
+ 		vq = &vsock->vqs[i];
+diff --git a/drivers/video/fbdev/vesafb.c b/drivers/video/fbdev/vesafb.c
+index 8ab64ae4cad3e..5a161750a3aee 100644
+--- a/drivers/video/fbdev/vesafb.c
++++ b/drivers/video/fbdev/vesafb.c
+@@ -271,7 +271,7 @@ static int vesafb_probe(struct platform_device *dev)
+ 	if (si->orig_video_isVGA != VIDEO_TYPE_VLFB)
+ 		return -ENODEV;
+ 
+-	vga_compat = (si->capabilities & 2) ? 0 : 1;
++	vga_compat = !__screen_info_vbe_mode_nonvga(si);
+ 	vesafb_fix.smem_start = si->lfb_base;
+ 	vesafb_defined.bits_per_pixel = si->lfb_depth;
+ 	if (15 == vesafb_defined.bits_per_pixel)
+diff --git a/drivers/watchdog/rzg2l_wdt.c b/drivers/watchdog/rzg2l_wdt.c
+index 1741f98ca67c5..7bce093316c4d 100644
+--- a/drivers/watchdog/rzg2l_wdt.c
++++ b/drivers/watchdog/rzg2l_wdt.c
+@@ -123,8 +123,11 @@ static void rzg2l_wdt_init_timeout(struct watchdog_device *wdev)
+ static int rzg2l_wdt_start(struct watchdog_device *wdev)
+ {
+ 	struct rzg2l_wdt_priv *priv = watchdog_get_drvdata(wdev);
++	int ret;
+ 
+-	pm_runtime_get_sync(wdev->parent);
++	ret = pm_runtime_resume_and_get(wdev->parent);
++	if (ret)
++		return ret;
+ 
+ 	/* Initialize time out */
+ 	rzg2l_wdt_init_timeout(wdev);
+@@ -141,15 +144,21 @@ static int rzg2l_wdt_start(struct watchdog_device *wdev)
+ static int rzg2l_wdt_stop(struct watchdog_device *wdev)
+ {
+ 	struct rzg2l_wdt_priv *priv = watchdog_get_drvdata(wdev);
++	int ret;
+ 
+ 	rzg2l_wdt_reset(priv);
+-	pm_runtime_put(wdev->parent);
++
++	ret = pm_runtime_put(wdev->parent);
++	if (ret < 0)
++		return ret;
+ 
+ 	return 0;
+ }
+ 
+ static int rzg2l_wdt_set_timeout(struct watchdog_device *wdev, unsigned int timeout)
+ {
++	int ret = 0;
++
+ 	wdev->timeout = timeout;
+ 
+ 	/*
+@@ -158,11 +167,14 @@ static int rzg2l_wdt_set_timeout(struct watchdog_device *wdev, unsigned int time
+ 	 * to reset the module) so that it is updated with new timeout values.
+ 	 */
+ 	if (watchdog_active(wdev)) {
+-		rzg2l_wdt_stop(wdev);
+-		rzg2l_wdt_start(wdev);
++		ret = rzg2l_wdt_stop(wdev);
++		if (ret)
++			return ret;
++
++		ret = rzg2l_wdt_start(wdev);
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int rzg2l_wdt_restart(struct watchdog_device *wdev,
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 6441e47d8a5e6..7c22f5b8c535f 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -514,6 +514,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
+ 			put_page(page);
+ 			break;
+ 		}
++		add_size = min(em->start + em->len, page_end + 1) - cur;
+ 		free_extent_map(em);
+ 
+ 		if (page->index == end_index) {
+@@ -526,7 +527,6 @@ static noinline int add_ra_bio_pages(struct inode *inode,
+ 			}
+ 		}
+ 
+-		add_size = min(em->start + em->len, page_end + 1) - cur;
+ 		ret = bio_add_page(orig_bio, page, add_size, offset_in_page(cur));
+ 		if (ret != add_size) {
+ 			unlock_extent(tree, cur, page_end, NULL);
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index 885cb5d4e771a..0cdf84cd17912 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -961,7 +961,8 @@ static int __init init_caches(void)
+ 	if (!ceph_mds_request_cachep)
+ 		goto bad_mds_req;
+ 
+-	ceph_wb_pagevec_pool = mempool_create_kmalloc_pool(10, CEPH_MAX_WRITE_SIZE >> PAGE_SHIFT);
++	ceph_wb_pagevec_pool = mempool_create_kmalloc_pool(10,
++	    (CEPH_MAX_WRITE_SIZE >> PAGE_SHIFT) * sizeof(struct page *));
+ 	if (!ceph_wb_pagevec_pool)
+ 		goto bad_pagevec_pool;
+ 
+diff --git a/fs/erofs/zutil.c b/fs/erofs/zutil.c
+index b80f612867c2b..9b53883e5caf8 100644
+--- a/fs/erofs/zutil.c
++++ b/fs/erofs/zutil.c
+@@ -38,11 +38,13 @@ void *z_erofs_get_gbuf(unsigned int requiredpages)
+ {
+ 	struct z_erofs_gbuf *gbuf;
+ 
++	migrate_disable();
+ 	gbuf = &z_erofs_gbufpool[z_erofs_gbuf_id()];
+ 	spin_lock(&gbuf->lock);
+ 	/* check if the buffer is too small */
+ 	if (requiredpages > gbuf->nrpages) {
+ 		spin_unlock(&gbuf->lock);
++		migrate_enable();
+ 		/* (for sparse checker) pretend gbuf->lock is still taken */
+ 		__acquire(gbuf->lock);
+ 		return NULL;
+@@ -57,6 +59,7 @@ void z_erofs_put_gbuf(void *ptr) __releases(gbuf->lock)
+ 	gbuf = &z_erofs_gbufpool[z_erofs_gbuf_id()];
+ 	DBG_BUGON(gbuf->ptr != ptr);
+ 	spin_unlock(&gbuf->lock);
++	migrate_enable();
+ }
+ 
+ int z_erofs_gbuf_growsize(unsigned int nrpages)
+diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
+index 84572e11cc05f..7446bf09a04a8 100644
+--- a/fs/exfat/dir.c
++++ b/fs/exfat/dir.c
+@@ -813,7 +813,7 @@ static int __exfat_get_dentry_set(struct exfat_entry_set_cache *es,
+ 
+ 	num_bh = EXFAT_B_TO_BLK_ROUND_UP(off + num_entries * DENTRY_SIZE, sb);
+ 	if (num_bh > ARRAY_SIZE(es->__bh)) {
+-		es->bh = kmalloc_array(num_bh, sizeof(*es->bh), GFP_KERNEL);
++		es->bh = kmalloc_array(num_bh, sizeof(*es->bh), GFP_NOFS);
+ 		if (!es->bh) {
+ 			brelse(bh);
+ 			return -ENOMEM;
+diff --git a/fs/ext2/balloc.c b/fs/ext2/balloc.c
+index 1bfd6ab110389..b8cfab8f98b97 100644
+--- a/fs/ext2/balloc.c
++++ b/fs/ext2/balloc.c
+@@ -77,26 +77,33 @@ static int ext2_valid_block_bitmap(struct super_block *sb,
+ 	ext2_grpblk_t next_zero_bit;
+ 	ext2_fsblk_t bitmap_blk;
+ 	ext2_fsblk_t group_first_block;
++	ext2_grpblk_t max_bit;
+ 
+ 	group_first_block = ext2_group_first_block_no(sb, block_group);
++	max_bit = ext2_group_last_block_no(sb, block_group) - group_first_block;
+ 
+ 	/* check whether block bitmap block number is set */
+ 	bitmap_blk = le32_to_cpu(desc->bg_block_bitmap);
+ 	offset = bitmap_blk - group_first_block;
+-	if (!ext2_test_bit(offset, bh->b_data))
++	if (offset < 0 || offset > max_bit ||
++	    !ext2_test_bit(offset, bh->b_data))
+ 		/* bad block bitmap */
+ 		goto err_out;
+ 
+ 	/* check whether the inode bitmap block number is set */
+ 	bitmap_blk = le32_to_cpu(desc->bg_inode_bitmap);
+ 	offset = bitmap_blk - group_first_block;
+-	if (!ext2_test_bit(offset, bh->b_data))
++	if (offset < 0 || offset > max_bit ||
++	    !ext2_test_bit(offset, bh->b_data))
+ 		/* bad block bitmap */
+ 		goto err_out;
+ 
+ 	/* check whether the inode table block number is set */
+ 	bitmap_blk = le32_to_cpu(desc->bg_inode_table);
+ 	offset = bitmap_blk - group_first_block;
++	if (offset < 0 || offset > max_bit ||
++	    offset + EXT2_SB(sb)->s_itb_per_group - 1 > max_bit)
++		goto err_out;
+ 	next_zero_bit = ext2_find_next_zero_bit(bh->b_data,
+ 				offset + EXT2_SB(sb)->s_itb_per_group,
+ 				offset);
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index 4a00e2f019d93..3a53dbb85e15b 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -310,6 +310,8 @@ void ext4_es_find_extent_range(struct inode *inode,
+ 			       ext4_lblk_t lblk, ext4_lblk_t end,
+ 			       struct extent_status *es)
+ {
++	es->es_lblk = es->es_len = es->es_pblk = 0;
++
+ 	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
+ 		return;
+ 
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 87c009e0c59a5..d3a67bc06d109 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -649,6 +649,12 @@ void ext4_fc_track_range(handle_t *handle, struct inode *inode, ext4_lblk_t star
+ 	if (ext4_test_mount_flag(inode->i_sb, EXT4_MF_FC_INELIGIBLE))
+ 		return;
+ 
++	if (ext4_has_inline_data(inode)) {
++		ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_XATTR,
++					handle);
++		return;
++	}
++
+ 	args.start = start;
+ 	args.end = end;
+ 
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index a630b27a4cc6e..1311ad0464b2a 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -151,10 +151,11 @@ static struct buffer_head *__ext4_read_dirblock(struct inode *inode,
+ 
+ 		return bh;
+ 	}
+-	if (!bh && (type == INDEX || type == DIRENT_HTREE)) {
++	/* The first directory block must not be a hole. */
++	if (!bh && (type == INDEX || type == DIRENT_HTREE || block == 0)) {
+ 		ext4_error_inode(inode, func, line, block,
+-				 "Directory hole found for htree %s block",
+-				 (type == INDEX) ? "index" : "leaf");
++				 "Directory hole found for htree %s block %u",
++				 (type == INDEX) ? "index" : "leaf", block);
+ 		return ERR_PTR(-EFSCORRUPTED);
+ 	}
+ 	if (!bh)
+@@ -2217,6 +2218,52 @@ static int add_dirent_to_buf(handle_t *handle, struct ext4_filename *fname,
+ 	return err ? err : err2;
+ }
+ 
++static bool ext4_check_dx_root(struct inode *dir, struct dx_root *root)
++{
++	struct fake_dirent *fde;
++	const char *error_msg;
++	unsigned int rlen;
++	unsigned int blocksize = dir->i_sb->s_blocksize;
++	char *blockend = (char *)root + dir->i_sb->s_blocksize;
++
++	fde = &root->dot;
++	if (unlikely(fde->name_len != 1)) {
++		error_msg = "invalid name_len for '.'";
++		goto corrupted;
++	}
++	if (unlikely(strncmp(root->dot_name, ".", fde->name_len))) {
++		error_msg = "invalid name for '.'";
++		goto corrupted;
++	}
++	rlen = ext4_rec_len_from_disk(fde->rec_len, blocksize);
++	if (unlikely((char *)fde + rlen >= blockend)) {
++		error_msg = "invalid rec_len for '.'";
++		goto corrupted;
++	}
++
++	fde = &root->dotdot;
++	if (unlikely(fde->name_len != 2)) {
++		error_msg = "invalid name_len for '..'";
++		goto corrupted;
++	}
++	if (unlikely(strncmp(root->dotdot_name, "..", fde->name_len))) {
++		error_msg = "invalid name for '..'";
++		goto corrupted;
++	}
++	rlen = ext4_rec_len_from_disk(fde->rec_len, blocksize);
++	if (unlikely((char *)fde + rlen >= blockend)) {
++		error_msg = "invalid rec_len for '..'";
++		goto corrupted;
++	}
++
++	return true;
++
++corrupted:
++	EXT4_ERROR_INODE(dir, "Corrupt dir, %s, running e2fsck is recommended",
++			 error_msg);
++	return false;
++}
++
+ /*
+  * This converts a one block unindexed directory to a 3 block indexed
+  * directory, and adds the dentry to the indexed directory.
+@@ -2251,17 +2298,17 @@ static int make_indexed_dir(handle_t *handle, struct ext4_filename *fname,
+ 		brelse(bh);
+ 		return retval;
+ 	}
++
+ 	root = (struct dx_root *) bh->b_data;
++	if (!ext4_check_dx_root(dir, root)) {
++		brelse(bh);
++		return -EFSCORRUPTED;
++	}
+ 
+ 	/* The 0th block becomes the root, move the dirents out */
+ 	fde = &root->dotdot;
+ 	de = (struct ext4_dir_entry_2 *)((char *)fde +
+ 		ext4_rec_len_from_disk(fde->rec_len, blocksize));
+-	if ((char *) de >= (((char *) root) + blocksize)) {
+-		EXT4_ERROR_INODE(dir, "invalid rec_len for '..'");
+-		brelse(bh);
+-		return -EFSCORRUPTED;
+-	}
+ 	len = ((char *) root) + (blocksize - csum_size) - (char *) de;
+ 
+ 	/* Allocate new block for the 0th block's dirents */
+@@ -3083,10 +3130,7 @@ bool ext4_empty_dir(struct inode *inode)
+ 		EXT4_ERROR_INODE(inode, "invalid size");
+ 		return false;
+ 	}
+-	/* The first directory block must not be a hole,
+-	 * so treat it as DIRENT_HTREE
+-	 */
+-	bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
++	bh = ext4_read_dirblock(inode, 0, EITHER);
+ 	if (IS_ERR(bh))
+ 		return false;
+ 
+@@ -3531,10 +3575,7 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle,
+ 		struct ext4_dir_entry_2 *de;
+ 		unsigned int offset;
+ 
+-		/* The first directory block must not be a hole, so
+-		 * treat it as DIRENT_HTREE
+-		 */
+-		bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
++		bh = ext4_read_dirblock(inode, 0, EITHER);
+ 		if (IS_ERR(bh)) {
+ 			*retval = PTR_ERR(bh);
+ 			return NULL;
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 6460879b9fcbb..46ce2f21fef9d 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1433,6 +1433,12 @@ static int ext4_xattr_inode_write(handle_t *handle, struct inode *ea_inode,
+ 			goto out;
+ 
+ 		memcpy(bh->b_data, buf, csize);
++		/*
++		 * Zero out block tail to avoid writing uninitialized memory
++		 * to disk.
++		 */
++		if (csize < blocksize)
++			memset(bh->b_data + csize, 0, blocksize - csize);
+ 		set_buffer_uptodate(bh);
+ 		ext4_handle_dirty_metadata(handle, ea_inode, bh);
+ 
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 55d444bec5c06..b52d10b457fe7 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -1186,6 +1186,11 @@ static void __prepare_cp_block(struct f2fs_sb_info *sbi)
+ 	ckpt->valid_node_count = cpu_to_le32(valid_node_count(sbi));
+ 	ckpt->valid_inode_count = cpu_to_le32(valid_inode_count(sbi));
+ 	ckpt->next_free_nid = cpu_to_le32(last_nid);
++
++	/* update user_block_counts */
++	sbi->last_valid_block_count = sbi->total_valid_block_count;
++	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
++	percpu_counter_set(&sbi->rf_node_block_count, 0);
+ }
+ 
+ static bool __need_flush_quota(struct f2fs_sb_info *sbi)
+@@ -1575,11 +1580,6 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ 		start_blk += NR_CURSEG_NODE_TYPE;
+ 	}
+ 
+-	/* update user_block_counts */
+-	sbi->last_valid_block_count = sbi->total_valid_block_count;
+-	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
+-	percpu_counter_set(&sbi->rf_node_block_count, 0);
+-
+ 	/* Here, we have one bio having CP pack except cp pack 2 page */
+ 	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
+ 	/* Wait for all dirty meta pages to be submitted for IO */
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index b9b0debc6b3d3..467f67cf2b380 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -925,6 +925,7 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
+ #ifdef CONFIG_BLK_DEV_ZONED
+ static bool is_end_zone_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr)
+ {
++	struct block_device *bdev = sbi->sb->s_bdev;
+ 	int devi = 0;
+ 
+ 	if (f2fs_is_multi_device(sbi)) {
+@@ -935,8 +936,9 @@ static bool is_end_zone_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr)
+ 			return false;
+ 		}
+ 		blkaddr -= FDEV(devi).start_blk;
++		bdev = FDEV(devi).bdev;
+ 	}
+-	return bdev_is_zoned(FDEV(devi).bdev) &&
++	return bdev_is_zoned(bdev) &&
+ 		f2fs_blkz_is_seq(sbi, devi, blkaddr) &&
+ 		(blkaddr % sbi->blocks_per_blkz == sbi->blocks_per_blkz - 1);
+ }
+@@ -2601,7 +2603,7 @@ bool f2fs_should_update_outplace(struct inode *inode, struct f2fs_io_info *fio)
+ 		return true;
+ 	if (IS_NOQUOTA(inode))
+ 		return true;
+-	if (f2fs_is_atomic_file(inode))
++	if (f2fs_used_in_atomic_write(inode))
+ 		return true;
+ 	/* rewrite low ratio compress data w/ OPU mode to avoid fragmentation */
+ 	if (f2fs_compressed_file(inode) &&
+@@ -2688,7 +2690,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
+ 	}
+ 
+ 	/* wait for GCed page writeback via META_MAPPING */
+-	if (fio->post_read)
++	if (fio->meta_gc)
+ 		f2fs_wait_on_block_writeback(inode, fio->old_blkaddr);
+ 
+ 	/*
+@@ -2783,7 +2785,7 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
+ 		.submitted = 0,
+ 		.compr_blocks = compr_blocks,
+ 		.need_lock = compr_blocks ? LOCK_DONE : LOCK_RETRY,
+-		.post_read = f2fs_post_read_required(inode) ? 1 : 0,
++		.meta_gc = f2fs_meta_inode_gc_required(inode) ? 1 : 0,
+ 		.io_type = io_type,
+ 		.io_wbc = wbc,
+ 		.bio = bio,
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 1974b6aff397c..66680159a2968 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -803,6 +803,7 @@ enum {
+ 	FI_COW_FILE,		/* indicate COW file */
+ 	FI_ATOMIC_COMMITTED,	/* indicate atomic commit completed except disk sync */
+ 	FI_ATOMIC_REPLACE,	/* indicate atomic replace */
++	FI_OPENED_FILE,		/* indicate file has been opened */
+ 	FI_MAX,			/* max flag, never be used */
+ };
+ 
+@@ -842,7 +843,11 @@ struct f2fs_inode_info {
+ 	struct task_struct *atomic_write_task;	/* store atomic write task */
+ 	struct extent_tree *extent_tree[NR_EXTENT_CACHES];
+ 					/* cached extent_tree entry */
+-	struct inode *cow_inode;	/* copy-on-write inode for atomic write */
++	union {
++		struct inode *cow_inode;	/* copy-on-write inode for atomic write */
++		struct inode *atomic_inode;
++					/* point to atomic_inode, available only for cow_inode */
++	};
+ 
+ 	/* avoid racing between foreground op and gc */
+ 	struct f2fs_rwsem i_gc_rwsem[2];
+@@ -1210,7 +1215,7 @@ struct f2fs_io_info {
+ 	unsigned int in_list:1;		/* indicate fio is in io_list */
+ 	unsigned int is_por:1;		/* indicate IO is from recovery or not */
+ 	unsigned int encrypted:1;	/* indicate file is encrypted */
+-	unsigned int post_read:1;	/* require post read */
++	unsigned int meta_gc:1;		/* require meta inode GC */
+ 	enum iostat_type io_type;	/* io type */
+ 	struct writeback_control *io_wbc; /* writeback control */
+ 	struct bio **bio;		/* bio for ipu */
+@@ -4261,6 +4266,16 @@ static inline bool f2fs_post_read_required(struct inode *inode)
+ 		f2fs_compressed_file(inode);
+ }
+ 
++static inline bool f2fs_used_in_atomic_write(struct inode *inode)
++{
++	return f2fs_is_atomic_file(inode) || f2fs_is_cow_file(inode);
++}
++
++static inline bool f2fs_meta_inode_gc_required(struct inode *inode)
++{
++	return f2fs_post_read_required(inode) || f2fs_used_in_atomic_write(inode);
++}
++
+ /*
+  * compress.c
+  */
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 5c0b281a70f3e..387ce167dda1b 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -554,6 +554,42 @@ static int f2fs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 	return 0;
+ }
+ 
++static int finish_preallocate_blocks(struct inode *inode)
++{
++	int ret;
++
++	inode_lock(inode);
++	if (is_inode_flag_set(inode, FI_OPENED_FILE)) {
++		inode_unlock(inode);
++		return 0;
++	}
++
++	if (!file_should_truncate(inode)) {
++		set_inode_flag(inode, FI_OPENED_FILE);
++		inode_unlock(inode);
++		return 0;
++	}
++
++	f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++	filemap_invalidate_lock(inode->i_mapping);
++
++	truncate_setsize(inode, i_size_read(inode));
++	ret = f2fs_truncate(inode);
++
++	filemap_invalidate_unlock(inode->i_mapping);
++	f2fs_up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
++	if (!ret)
++		set_inode_flag(inode, FI_OPENED_FILE);
++
++	inode_unlock(inode);
++	if (ret)
++		return ret;
++
++	file_dont_truncate(inode);
++	return 0;
++}
++
+ static int f2fs_file_open(struct inode *inode, struct file *filp)
+ {
+ 	int err = fscrypt_file_open(inode, filp);
+@@ -571,7 +607,11 @@ static int f2fs_file_open(struct inode *inode, struct file *filp)
+ 	filp->f_mode |= FMODE_NOWAIT;
+ 	filp->f_mode |= FMODE_CAN_ODIRECT;
+ 
+-	return dquot_file_open(inode, filp);
++	err = dquot_file_open(inode, filp);
++	if (err)
++		return err;
++
++	return finish_preallocate_blocks(inode);
+ }
+ 
+ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
+@@ -825,6 +865,8 @@ static bool f2fs_force_buffered_io(struct inode *inode, int rw)
+ 		return true;
+ 	if (f2fs_compressed_file(inode))
+ 		return true;
++	if (f2fs_has_inline_data(inode))
++		return true;
+ 
+ 	/* disallow direct IO if any of devices has unaligned blksize */
+ 	if (f2fs_is_multi_device(sbi) && !sbi->aligned_blksize)
+@@ -2141,6 +2183,9 @@ static int f2fs_ioc_start_atomic_write(struct file *filp, bool truncate)
+ 
+ 		set_inode_flag(fi->cow_inode, FI_COW_FILE);
+ 		clear_inode_flag(fi->cow_inode, FI_INLINE_DATA);
++
++		/* Set the COW inode's atomic_inode to the atomic inode */
++		F2FS_I(fi->cow_inode)->atomic_inode = inode;
+ 	} else {
+ 		/* Reuse the already created COW inode */
+ 		ret = f2fs_do_truncate_blocks(fi->cow_inode, 0, true);
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 6066c6eecf41d..b2951cd930d80 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1171,7 +1171,8 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ static int ra_data_block(struct inode *inode, pgoff_t index)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+-	struct address_space *mapping = inode->i_mapping;
++	struct address_space *mapping = f2fs_is_cow_file(inode) ?
++				F2FS_I(inode)->atomic_inode->i_mapping : inode->i_mapping;
+ 	struct dnode_of_data dn;
+ 	struct page *page;
+ 	struct f2fs_io_info fio = {
+@@ -1260,6 +1261,8 @@ static int ra_data_block(struct inode *inode, pgoff_t index)
+ static int move_data_block(struct inode *inode, block_t bidx,
+ 				int gc_type, unsigned int segno, int off)
+ {
++	struct address_space *mapping = f2fs_is_cow_file(inode) ?
++				F2FS_I(inode)->atomic_inode->i_mapping : inode->i_mapping;
+ 	struct f2fs_io_info fio = {
+ 		.sbi = F2FS_I_SB(inode),
+ 		.ino = inode->i_ino,
+@@ -1282,7 +1285,7 @@ static int move_data_block(struct inode *inode, block_t bidx,
+ 				CURSEG_ALL_DATA_ATGC : CURSEG_COLD_DATA;
+ 
+ 	/* do not read out */
+-	page = f2fs_grab_cache_page(inode->i_mapping, bidx, false);
++	page = f2fs_grab_cache_page(mapping, bidx, false);
+ 	if (!page)
+ 		return -ENOMEM;
+ 
+@@ -1579,7 +1582,7 @@ static int gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 			start_bidx = f2fs_start_bidx_of_node(nofs, inode) +
+ 								ofs_in_node;
+ 
+-			if (f2fs_post_read_required(inode)) {
++			if (f2fs_meta_inode_gc_required(inode)) {
+ 				int err = ra_data_block(inode, start_bidx);
+ 
+ 				f2fs_up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+@@ -1630,7 +1633,7 @@ static int gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 
+ 			start_bidx = f2fs_start_bidx_of_node(nofs, inode)
+ 								+ ofs_in_node;
+-			if (f2fs_post_read_required(inode))
++			if (f2fs_meta_inode_gc_required(inode))
+ 				err = move_data_block(inode, start_bidx,
+ 							gc_type, segno, off);
+ 			else
+@@ -1638,7 +1641,7 @@ static int gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 								segno, off);
+ 
+ 			if (!err && (gc_type == FG_GC ||
+-					f2fs_post_read_required(inode)))
++					f2fs_meta_inode_gc_required(inode)))
+ 				submitted++;
+ 
+ 			if (locked) {
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 7638d0d7b7eed..215daa71dc18a 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -16,7 +16,7 @@
+ 
+ static bool support_inline_data(struct inode *inode)
+ {
+-	if (f2fs_is_atomic_file(inode))
++	if (f2fs_used_in_atomic_write(inode))
+ 		return false;
+ 	if (!S_ISREG(inode->i_mode) && !S_ISLNK(inode->i_mode))
+ 		return false;
+@@ -203,8 +203,10 @@ int f2fs_convert_inline_inode(struct inode *inode)
+ 	struct page *ipage, *page;
+ 	int err = 0;
+ 
+-	if (!f2fs_has_inline_data(inode) ||
+-			f2fs_hw_is_readonly(sbi) || f2fs_readonly(sbi->sb))
++	if (f2fs_hw_is_readonly(sbi) || f2fs_readonly(sbi->sb))
++		return -EROFS;
++
++	if (!f2fs_has_inline_data(inode))
+ 		return 0;
+ 
+ 	err = f2fs_dquot_initialize(inode);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 005dde72aff3d..c6b55aedc2762 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -29,6 +29,9 @@ void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync)
+ 	if (is_inode_flag_set(inode, FI_NEW_INODE))
+ 		return;
+ 
++	if (f2fs_readonly(F2FS_I_SB(inode)->sb))
++		return;
++
+ 	if (f2fs_inode_dirtied(inode, sync))
+ 		return;
+ 
+@@ -610,14 +613,6 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino)
+ 	}
+ 	f2fs_set_inode_flags(inode);
+ 
+-	if (file_should_truncate(inode) &&
+-			!is_sbi_flag_set(sbi, SBI_POR_DOING)) {
+-		ret = f2fs_truncate(inode);
+-		if (ret)
+-			goto bad_inode;
+-		file_dont_truncate(inode);
+-	}
+-
+ 	unlock_new_inode(inode);
+ 	trace_f2fs_iget(inode);
+ 	return inode;
+@@ -813,8 +808,9 @@ void f2fs_evict_inode(struct inode *inode)
+ 
+ 	f2fs_abort_atomic_write(inode, true);
+ 
+-	if (fi->cow_inode) {
++	if (fi->cow_inode && f2fs_is_cow_file(fi->cow_inode)) {
+ 		clear_inode_flag(fi->cow_inode, FI_COW_FILE);
++		F2FS_I(fi->cow_inode)->atomic_inode = NULL;
+ 		iput(fi->cow_inode);
+ 		fi->cow_inode = NULL;
+ 	}
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index a0ce3d080f80a..259e235becc59 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -3828,7 +3828,7 @@ int f2fs_inplace_write_data(struct f2fs_io_info *fio)
+ 		goto drop_bio;
+ 	}
+ 
+-	if (fio->post_read)
++	if (fio->meta_gc)
+ 		f2fs_truncate_meta_inode_pages(sbi, fio->new_blkaddr, 1);
+ 
+ 	stat_inc_inplace_blocks(fio->sbi);
+@@ -3998,7 +3998,7 @@ void f2fs_wait_on_block_writeback(struct inode *inode, block_t blkaddr)
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct page *cpage;
+ 
+-	if (!f2fs_post_read_required(inode))
++	if (!f2fs_meta_inode_gc_required(inode))
+ 		return;
+ 
+ 	if (!__is_valid_data_blkaddr(blkaddr))
+@@ -4017,7 +4017,7 @@ void f2fs_wait_on_block_writeback_range(struct inode *inode, block_t blkaddr,
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	block_t i;
+ 
+-	if (!f2fs_post_read_required(inode))
++	if (!f2fs_meta_inode_gc_required(inode))
+ 		return;
+ 
+ 	for (i = 0; i < len; i++)
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index e1c0f418aa11f..bfc01a521cb98 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -347,7 +347,8 @@ static inline unsigned int get_ckpt_valid_blocks(struct f2fs_sb_info *sbi,
+ 				unsigned int segno, bool use_section)
+ {
+ 	if (use_section && __is_large_section(sbi)) {
+-		unsigned int start_segno = START_SEGNO(segno);
++		unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
++		unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
+ 		unsigned int blocks = 0;
+ 		int i;
+ 
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 99e44ea7d8756..32fe6fa72f460 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -755,6 +755,8 @@ static int fuse_parse_param(struct fs_context *fsc, struct fs_parameter *param)
+ 	struct fs_parse_result result;
+ 	struct fuse_fs_context *ctx = fsc->fs_private;
+ 	int opt;
++	kuid_t kuid;
++	kgid_t kgid;
+ 
+ 	if (fsc->purpose == FS_CONTEXT_FOR_RECONFIGURE) {
+ 		/*
+@@ -799,16 +801,30 @@ static int fuse_parse_param(struct fs_context *fsc, struct fs_parameter *param)
+ 		break;
+ 
+ 	case OPT_USER_ID:
+-		ctx->user_id = make_kuid(fsc->user_ns, result.uint_32);
+-		if (!uid_valid(ctx->user_id))
++		kuid =  make_kuid(fsc->user_ns, result.uint_32);
++		if (!uid_valid(kuid))
+ 			return invalfc(fsc, "Invalid user_id");
++		/*
++		 * The requested uid must be representable in the
++		 * filesystem's idmapping.
++		 */
++		if (!kuid_has_mapping(fsc->user_ns, kuid))
++			return invalfc(fsc, "Invalid user_id");
++		ctx->user_id = kuid;
+ 		ctx->user_id_present = true;
+ 		break;
+ 
+ 	case OPT_GROUP_ID:
+-		ctx->group_id = make_kgid(fsc->user_ns, result.uint_32);
+-		if (!gid_valid(ctx->group_id))
++		kgid = make_kgid(fsc->user_ns, result.uint_32);;
++		if (!gid_valid(kgid))
++			return invalfc(fsc, "Invalid group_id");
++		/*
++		 * The requested gid must be representable in the
++		 * filesystem's idmapping.
++		 */
++		if (!kgid_has_mapping(fsc->user_ns, kgid))
+ 			return invalfc(fsc, "Invalid group_id");
++		ctx->group_id = kgid;
+ 		ctx->group_id_present = true;
+ 		break;
+ 
+diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
+index 8c34798a07157..744e10b469048 100644
+--- a/fs/hfs/inode.c
++++ b/fs/hfs/inode.c
+@@ -200,6 +200,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
+ 	HFS_I(inode)->flags = 0;
+ 	HFS_I(inode)->rsrc_inode = NULL;
+ 	HFS_I(inode)->fs_blocks = 0;
++	HFS_I(inode)->tz_secondswest = sys_tz.tz_minuteswest * 60;
+ 	if (S_ISDIR(mode)) {
+ 		inode->i_size = 2;
+ 		HFS_SB(sb)->folder_count++;
+@@ -275,6 +276,8 @@ void hfs_inode_read_fork(struct inode *inode, struct hfs_extent *ext,
+ 	for (count = 0, i = 0; i < 3; i++)
+ 		count += be16_to_cpu(ext[i].count);
+ 	HFS_I(inode)->first_blocks = count;
++	HFS_I(inode)->cached_start = 0;
++	HFS_I(inode)->cached_blocks = 0;
+ 
+ 	inode->i_size = HFS_I(inode)->phys_size = log_size;
+ 	HFS_I(inode)->fs_blocks = (log_size + sb->s_blocksize - 1) >> sb->s_blocksize_bits;
+diff --git a/fs/hfsplus/bfind.c b/fs/hfsplus/bfind.c
+index ca2ba8c9f82ef..901e83d65d202 100644
+--- a/fs/hfsplus/bfind.c
++++ b/fs/hfsplus/bfind.c
+@@ -25,19 +25,8 @@ int hfs_find_init(struct hfs_btree *tree, struct hfs_find_data *fd)
+ 	fd->key = ptr + tree->max_key_len + 2;
+ 	hfs_dbg(BNODE_REFS, "find_init: %d (%p)\n",
+ 		tree->cnid, __builtin_return_address(0));
+-	switch (tree->cnid) {
+-	case HFSPLUS_CAT_CNID:
+-		mutex_lock_nested(&tree->tree_lock, CATALOG_BTREE_MUTEX);
+-		break;
+-	case HFSPLUS_EXT_CNID:
+-		mutex_lock_nested(&tree->tree_lock, EXTENTS_BTREE_MUTEX);
+-		break;
+-	case HFSPLUS_ATTR_CNID:
+-		mutex_lock_nested(&tree->tree_lock, ATTR_BTREE_MUTEX);
+-		break;
+-	default:
+-		BUG();
+-	}
++	mutex_lock_nested(&tree->tree_lock,
++			hfsplus_btree_lock_class(tree));
+ 	return 0;
+ }
+ 
+diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
+index 3c572e44f2adf..9c51867dddc51 100644
+--- a/fs/hfsplus/extents.c
++++ b/fs/hfsplus/extents.c
+@@ -430,7 +430,8 @@ int hfsplus_free_fork(struct super_block *sb, u32 cnid,
+ 		hfsplus_free_extents(sb, ext_entry, total_blocks - start,
+ 				     total_blocks);
+ 		total_blocks = start;
+-		mutex_lock(&fd.tree->tree_lock);
++		mutex_lock_nested(&fd.tree->tree_lock,
++			hfsplus_btree_lock_class(fd.tree));
+ 	} while (total_blocks > blocks);
+ 	hfs_find_exit(&fd);
+ 
+@@ -592,7 +593,8 @@ void hfsplus_file_truncate(struct inode *inode)
+ 					     alloc_cnt, alloc_cnt - blk_cnt);
+ 			hfsplus_dump_extent(hip->first_extents);
+ 			hip->first_blocks = blk_cnt;
+-			mutex_lock(&fd.tree->tree_lock);
++			mutex_lock_nested(&fd.tree->tree_lock,
++				hfsplus_btree_lock_class(fd.tree));
+ 			break;
+ 		}
+ 		res = __hfsplus_ext_cache_extent(&fd, inode, alloc_cnt);
+@@ -606,7 +608,8 @@ void hfsplus_file_truncate(struct inode *inode)
+ 		hfsplus_free_extents(sb, hip->cached_extents,
+ 				     alloc_cnt - start, alloc_cnt - blk_cnt);
+ 		hfsplus_dump_extent(hip->cached_extents);
+-		mutex_lock(&fd.tree->tree_lock);
++		mutex_lock_nested(&fd.tree->tree_lock,
++				hfsplus_btree_lock_class(fd.tree));
+ 		if (blk_cnt > start) {
+ 			hip->extent_state |= HFSPLUS_EXT_DIRTY;
+ 			break;
+diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h
+index 012a3d003fbe6..9e78f181c24f4 100644
+--- a/fs/hfsplus/hfsplus_fs.h
++++ b/fs/hfsplus/hfsplus_fs.h
+@@ -553,6 +553,27 @@ static inline __be32 __hfsp_ut2mt(time64_t ut)
+ 	return cpu_to_be32(lower_32_bits(ut) + HFSPLUS_UTC_OFFSET);
+ }
+ 
++static inline enum hfsplus_btree_mutex_classes
++hfsplus_btree_lock_class(struct hfs_btree *tree)
++{
++	enum hfsplus_btree_mutex_classes class;
++
++	switch (tree->cnid) {
++	case HFSPLUS_CAT_CNID:
++		class = CATALOG_BTREE_MUTEX;
++		break;
++	case HFSPLUS_EXT_CNID:
++		class = EXTENTS_BTREE_MUTEX;
++		break;
++	case HFSPLUS_ATTR_CNID:
++		class = ATTR_BTREE_MUTEX;
++		break;
++	default:
++		BUG();
++	}
++	return class;
++}
++
+ /* compatibility */
+ #define hfsp_mt2ut(t)		(struct timespec64){ .tv_sec = __hfsp_mt2ut(t) }
+ #define hfsp_ut2mt(t)		__hfsp_ut2mt((t).tv_sec)
+diff --git a/fs/hostfs/hostfs.h b/fs/hostfs/hostfs.h
+index 0239e3af39455..8b39c15c408cc 100644
+--- a/fs/hostfs/hostfs.h
++++ b/fs/hostfs/hostfs.h
+@@ -63,9 +63,10 @@ struct hostfs_stat {
+ 	struct hostfs_timespec atime, mtime, ctime;
+ 	unsigned int blksize;
+ 	unsigned long long blocks;
+-	unsigned int maj;
+-	unsigned int min;
+-	dev_t dev;
++	struct {
++		unsigned int maj;
++		unsigned int min;
++	} rdev, dev;
+ };
+ 
+ extern int stat_file(const char *path, struct hostfs_stat *p, int fd);
+diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c
+index a73d27c4dd583..2c4d503a62e02 100644
+--- a/fs/hostfs/hostfs_kern.c
++++ b/fs/hostfs/hostfs_kern.c
+@@ -530,10 +530,11 @@ static int hostfs_inode_update(struct inode *ino, const struct hostfs_stat *st)
+ static int hostfs_inode_set(struct inode *ino, void *data)
+ {
+ 	struct hostfs_stat *st = data;
+-	dev_t rdev;
++	dev_t dev, rdev;
+ 
+ 	/* Reencode maj and min with the kernel encoding.*/
+-	rdev = MKDEV(st->maj, st->min);
++	rdev = MKDEV(st->rdev.maj, st->rdev.min);
++	dev = MKDEV(st->dev.maj, st->dev.min);
+ 
+ 	switch (st->mode & S_IFMT) {
+ 	case S_IFLNK:
+@@ -559,7 +560,7 @@ static int hostfs_inode_set(struct inode *ino, void *data)
+ 		return -EIO;
+ 	}
+ 
+-	HOSTFS_I(ino)->dev = st->dev;
++	HOSTFS_I(ino)->dev = dev;
+ 	ino->i_ino = st->ino;
+ 	ino->i_mode = st->mode;
+ 	return hostfs_inode_update(ino, st);
+@@ -568,8 +569,9 @@ static int hostfs_inode_set(struct inode *ino, void *data)
+ static int hostfs_inode_test(struct inode *inode, void *data)
+ {
+ 	const struct hostfs_stat *st = data;
++	dev_t dev = MKDEV(st->dev.maj, st->dev.min);
+ 
+-	return inode->i_ino == st->ino && HOSTFS_I(inode)->dev == st->dev;
++	return inode->i_ino == st->ino && HOSTFS_I(inode)->dev == dev;
+ }
+ 
+ static struct inode *hostfs_iget(struct super_block *sb, char *name)
+diff --git a/fs/hostfs/hostfs_user.c b/fs/hostfs/hostfs_user.c
+index 840619e39a1a6..97e9c40a94488 100644
+--- a/fs/hostfs/hostfs_user.c
++++ b/fs/hostfs/hostfs_user.c
+@@ -34,9 +34,10 @@ static void stat64_to_hostfs(const struct stat64 *buf, struct hostfs_stat *p)
+ 	p->mtime.tv_nsec = 0;
+ 	p->blksize = buf->st_blksize;
+ 	p->blocks = buf->st_blocks;
+-	p->maj = os_major(buf->st_rdev);
+-	p->min = os_minor(buf->st_rdev);
+-	p->dev = buf->st_dev;
++	p->rdev.maj = os_major(buf->st_rdev);
++	p->rdev.min = os_minor(buf->st_rdev);
++	p->dev.maj = os_major(buf->st_dev);
++	p->dev.min = os_minor(buf->st_dev);
+ }
+ 
+ int stat_file(const char *path, struct hostfs_stat *p, int fd)
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index 75ea4e9a5cabd..e7fc912693bd7 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -766,7 +766,7 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ 		if (first_block < journal->j_tail)
+ 			freed += journal->j_last - journal->j_first;
+ 		/* Update tail only if we free significant amount of space */
+-		if (freed < jbd2_journal_get_max_txn_bufs(journal))
++		if (freed < journal->j_max_transaction_buffers)
+ 			update_tail = 0;
+ 	}
+ 	J_ASSERT(commit_transaction->t_state == T_COMMIT);
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 03c4b9214f564..ae5b544ed0cc0 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1451,6 +1451,48 @@ static int journal_revoke_records_per_block(journal_t *journal)
+ 	return space / record_size;
+ }
+ 
++static int jbd2_journal_get_max_txn_bufs(journal_t *journal)
++{
++	return (journal->j_total_len - journal->j_fc_wbufsize) / 4;
++}
++
++/*
++ * Base amount of descriptor blocks we reserve for each transaction.
++ */
++static int jbd2_descriptor_blocks_per_trans(journal_t *journal)
++{
++	int tag_space = journal->j_blocksize - sizeof(journal_header_t);
++	int tags_per_block;
++
++	/* Subtract UUID */
++	tag_space -= 16;
++	if (jbd2_journal_has_csum_v2or3(journal))
++		tag_space -= sizeof(struct jbd2_journal_block_tail);
++	/* Commit code leaves a slack space of 16 bytes at the end of block */
++	tags_per_block = (tag_space - 16) / journal_tag_bytes(journal);
++	/*
++	 * Revoke descriptors are accounted separately so we need to reserve
++	 * space for commit block and normal transaction descriptor blocks.
++	 */
++	return 1 + DIV_ROUND_UP(jbd2_journal_get_max_txn_bufs(journal),
++				tags_per_block);
++}
++
++/*
++ * Initialize number of blocks each transaction reserves for its bookkeeping
++ * and maximum number of blocks a transaction can use. This needs to be called
++ * after the journal size and the fastcommit area size are initialized.
++ */
++static void jbd2_journal_init_transaction_limits(journal_t *journal)
++{
++	journal->j_revoke_records_per_block =
++				journal_revoke_records_per_block(journal);
++	journal->j_transaction_overhead_buffers =
++				jbd2_descriptor_blocks_per_trans(journal);
++	journal->j_max_transaction_buffers =
++				jbd2_journal_get_max_txn_bufs(journal);
++}
++
+ /*
+  * Load the on-disk journal superblock and read the key fields into the
+  * journal_t.
+@@ -1492,8 +1534,8 @@ static int journal_load_superblock(journal_t *journal)
+ 	if (jbd2_journal_has_csum_v2or3(journal))
+ 		journal->j_csum_seed = jbd2_chksum(journal, ~0, sb->s_uuid,
+ 						   sizeof(sb->s_uuid));
+-	journal->j_revoke_records_per_block =
+-				journal_revoke_records_per_block(journal);
++	/* After journal features are set, we can compute transaction limits */
++	jbd2_journal_init_transaction_limits(journal);
+ 
+ 	if (jbd2_has_feature_fast_commit(journal)) {
+ 		journal->j_fc_last = be32_to_cpu(sb->s_maxlen);
+@@ -1743,8 +1785,6 @@ static int journal_reset(journal_t *journal)
+ 	journal->j_commit_sequence = journal->j_transaction_sequence - 1;
+ 	journal->j_commit_request = journal->j_commit_sequence;
+ 
+-	journal->j_max_transaction_buffers = jbd2_journal_get_max_txn_bufs(journal);
+-
+ 	/*
+ 	 * Now that journal recovery is done, turn fast commits off here. This
+ 	 * way, if fast commit was enabled before the crash but if now FS has
+@@ -2285,8 +2325,6 @@ jbd2_journal_initialize_fast_commit(journal_t *journal)
+ 	journal->j_fc_first = journal->j_last + 1;
+ 	journal->j_fc_off = 0;
+ 	journal->j_free = journal->j_last - journal->j_first;
+-	journal->j_max_transaction_buffers =
+-		jbd2_journal_get_max_txn_bufs(journal);
+ 
+ 	return 0;
+ }
+@@ -2374,8 +2412,7 @@ int jbd2_journal_set_features(journal_t *journal, unsigned long compat,
+ 	sb->s_feature_ro_compat |= cpu_to_be32(ro);
+ 	sb->s_feature_incompat  |= cpu_to_be32(incompat);
+ 	unlock_buffer(journal->j_sb_buffer);
+-	journal->j_revoke_records_per_block =
+-				journal_revoke_records_per_block(journal);
++	jbd2_journal_init_transaction_limits(journal);
+ 
+ 	return 1;
+ #undef COMPAT_FEATURE_ON
+@@ -2406,8 +2443,7 @@ void jbd2_journal_clear_features(journal_t *journal, unsigned long compat,
+ 	sb->s_feature_compat    &= ~cpu_to_be32(compat);
+ 	sb->s_feature_ro_compat &= ~cpu_to_be32(ro);
+ 	sb->s_feature_incompat  &= ~cpu_to_be32(incompat);
+-	journal->j_revoke_records_per_block =
+-				journal_revoke_records_per_block(journal);
++	jbd2_journal_init_transaction_limits(journal);
+ }
+ EXPORT_SYMBOL(jbd2_journal_clear_features);
+ 
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index cb0b8d6fc0c6d..66513c18ca294 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -62,28 +62,6 @@ void jbd2_journal_free_transaction(transaction_t *transaction)
+ 	kmem_cache_free(transaction_cache, transaction);
+ }
+ 
+-/*
+- * Base amount of descriptor blocks we reserve for each transaction.
+- */
+-static int jbd2_descriptor_blocks_per_trans(journal_t *journal)
+-{
+-	int tag_space = journal->j_blocksize - sizeof(journal_header_t);
+-	int tags_per_block;
+-
+-	/* Subtract UUID */
+-	tag_space -= 16;
+-	if (jbd2_journal_has_csum_v2or3(journal))
+-		tag_space -= sizeof(struct jbd2_journal_block_tail);
+-	/* Commit code leaves a slack space of 16 bytes at the end of block */
+-	tags_per_block = (tag_space - 16) / journal_tag_bytes(journal);
+-	/*
+-	 * Revoke descriptors are accounted separately so we need to reserve
+-	 * space for commit block and normal transaction descriptor blocks.
+-	 */
+-	return 1 + DIV_ROUND_UP(journal->j_max_transaction_buffers,
+-				tags_per_block);
+-}
+-
+ /*
+  * jbd2_get_transaction: obtain a new transaction_t object.
+  *
+@@ -109,7 +87,7 @@ static void jbd2_get_transaction(journal_t *journal,
+ 	transaction->t_expires = jiffies + journal->j_commit_interval;
+ 	atomic_set(&transaction->t_updates, 0);
+ 	atomic_set(&transaction->t_outstanding_credits,
+-		   jbd2_descriptor_blocks_per_trans(journal) +
++		   journal->j_transaction_overhead_buffers +
+ 		   atomic_read(&journal->j_reserved_credits));
+ 	atomic_set(&transaction->t_outstanding_revokes, 0);
+ 	atomic_set(&transaction->t_handle_count, 0);
+@@ -213,6 +191,13 @@ static void sub_reserved_credits(journal_t *journal, int blocks)
+ 	wake_up(&journal->j_wait_reserved);
+ }
+ 
++/* Maximum number of blocks for user transaction payload */
++static int jbd2_max_user_trans_buffers(journal_t *journal)
++{
++	return journal->j_max_transaction_buffers -
++				journal->j_transaction_overhead_buffers;
++}
++
+ /*
+  * Wait until we can add credits for handle to the running transaction.  Called
+  * with j_state_lock held for reading. Returns 0 if handle joined the running
+@@ -262,12 +247,12 @@ __must_hold(&journal->j_state_lock)
+ 		 * big to fit this handle? Wait until reserved credits are freed.
+ 		 */
+ 		if (atomic_read(&journal->j_reserved_credits) + total >
+-		    journal->j_max_transaction_buffers) {
++		    jbd2_max_user_trans_buffers(journal)) {
+ 			read_unlock(&journal->j_state_lock);
+ 			jbd2_might_wait_for_commit(journal);
+ 			wait_event(journal->j_wait_reserved,
+ 				   atomic_read(&journal->j_reserved_credits) + total <=
+-				   journal->j_max_transaction_buffers);
++				   jbd2_max_user_trans_buffers(journal));
+ 			__acquire(&journal->j_state_lock); /* fake out sparse */
+ 			return 1;
+ 		}
+@@ -307,14 +292,14 @@ __must_hold(&journal->j_state_lock)
+ 
+ 	needed = atomic_add_return(rsv_blocks, &journal->j_reserved_credits);
+ 	/* We allow at most half of a transaction to be reserved */
+-	if (needed > journal->j_max_transaction_buffers / 2) {
++	if (needed > jbd2_max_user_trans_buffers(journal) / 2) {
+ 		sub_reserved_credits(journal, rsv_blocks);
+ 		atomic_sub(total, &t->t_outstanding_credits);
+ 		read_unlock(&journal->j_state_lock);
+ 		jbd2_might_wait_for_commit(journal);
+ 		wait_event(journal->j_wait_reserved,
+ 			 atomic_read(&journal->j_reserved_credits) + rsv_blocks
+-			 <= journal->j_max_transaction_buffers / 2);
++			 <= jbd2_max_user_trans_buffers(journal) / 2);
+ 		__acquire(&journal->j_state_lock); /* fake out sparse */
+ 		return 1;
+ 	}
+@@ -344,12 +329,12 @@ static int start_this_handle(journal_t *journal, handle_t *handle,
+ 	 * size and limit the number of total credits to not exceed maximum
+ 	 * transaction size per operation.
+ 	 */
+-	if ((rsv_blocks > journal->j_max_transaction_buffers / 2) ||
+-	    (rsv_blocks + blocks > journal->j_max_transaction_buffers)) {
++	if (rsv_blocks > jbd2_max_user_trans_buffers(journal) / 2 ||
++	    rsv_blocks + blocks > jbd2_max_user_trans_buffers(journal)) {
+ 		printk(KERN_ERR "JBD2: %s wants too many credits "
+ 		       "credits:%d rsv_credits:%d max:%d\n",
+ 		       current->comm, blocks, rsv_blocks,
+-		       journal->j_max_transaction_buffers);
++		       jbd2_max_user_trans_buffers(journal));
+ 		WARN_ON(1);
+ 		return -ENOSPC;
+ 	}
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index 2ec35889ad24e..1407feccbc2d0 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -290,7 +290,7 @@ int diSync(struct inode *ipimap)
+ int diRead(struct inode *ip)
+ {
+ 	struct jfs_sb_info *sbi = JFS_SBI(ip->i_sb);
+-	int iagno, ino, extno, rc;
++	int iagno, ino, extno, rc, agno;
+ 	struct inode *ipimap;
+ 	struct dinode *dp;
+ 	struct iag *iagp;
+@@ -339,8 +339,11 @@ int diRead(struct inode *ip)
+ 
+ 	/* get the ag for the iag */
+ 	agstart = le64_to_cpu(iagp->agstart);
++	agno = BLKTOAG(agstart, JFS_SBI(ip->i_sb));
+ 
+ 	release_metapage(mp);
++	if (agno >= MAXAG || agno < 0)
++		return -EIO;
+ 
+ 	rel_inode = (ino & (INOSPERPAGE - 1));
+ 	pageno = blkno >> sbi->l2nbperpage;
+diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
+index d7c971df88660..32bc88bee5d18 100644
+--- a/fs/netfs/write_issue.c
++++ b/fs/netfs/write_issue.c
+@@ -122,6 +122,7 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
+ 	wreq->io_streams[1].transferred		= LONG_MAX;
+ 	if (fscache_resources_valid(&wreq->cache_resources)) {
+ 		wreq->io_streams[1].avail	= true;
++		wreq->io_streams[1].active	= true;
+ 		wreq->io_streams[1].prepare_write = wreq->cache_resources.ops->prepare_write_subreq;
+ 		wreq->io_streams[1].issue_write = wreq->cache_resources.ops->issue_write;
+ 	}
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index 6bd127e6683dc..445db17f1b6c1 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -434,7 +434,7 @@ static void nfs_invalidate_folio(struct folio *folio, size_t offset,
+ 	/* Cancel any unstarted writes on this page */
+ 	nfs_wb_folio_cancel(inode, folio);
+ 	folio_wait_private_2(folio); /* [DEPRECATED] */
+-	trace_nfs_invalidate_folio(inode, folio);
++	trace_nfs_invalidate_folio(inode, folio_pos(folio) + offset, length);
+ }
+ 
+ /*
+@@ -502,7 +502,8 @@ static int nfs_launder_folio(struct folio *folio)
+ 
+ 	folio_wait_private_2(folio); /* [DEPRECATED] */
+ 	ret = nfs_wb_folio(inode, folio);
+-	trace_nfs_launder_folio_done(inode, folio, ret);
++	trace_nfs_launder_folio_done(inode, folio_pos(folio),
++			folio_size(folio), ret);
+ 	return ret;
+ }
+ 
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index 84573df5cf5ae..83378f69b35ea 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -231,9 +231,8 @@ struct nfs_client *nfs4_alloc_client(const struct nfs_client_initdata *cl_init)
+ 		__set_bit(NFS_CS_INFINITE_SLOTS, &clp->cl_flags);
+ 	__set_bit(NFS_CS_DISCRTRY, &clp->cl_flags);
+ 	__set_bit(NFS_CS_NO_RETRANS_TIMEOUT, &clp->cl_flags);
+-
+-	if (test_bit(NFS_CS_DS, &cl_init->init_flags))
+-		__set_bit(NFS_CS_DS, &clp->cl_flags);
++	if (test_bit(NFS_CS_PNFS, &cl_init->init_flags))
++		__set_bit(NFS_CS_PNFS, &clp->cl_flags);
+ 	/*
+ 	 * Set up the connection to the server before we add add to the
+ 	 * global list.
+@@ -1013,7 +1012,6 @@ struct nfs_client *nfs4_set_ds_client(struct nfs_server *mds_srv,
+ 	if (mds_srv->flags & NFS_MOUNT_NORESVPORT)
+ 		__set_bit(NFS_CS_NORESVPORT, &cl_init.init_flags);
+ 
+-	__set_bit(NFS_CS_DS, &cl_init.init_flags);
+ 	__set_bit(NFS_CS_PNFS, &cl_init.init_flags);
+ 	cl_init.max_connect = NFS_MAX_TRANSPORTS;
+ 	/*
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index a691fa10b3e95..bff9d6600741e 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -8840,7 +8840,7 @@ nfs4_run_exchange_id(struct nfs_client *clp, const struct cred *cred,
+ #ifdef CONFIG_NFS_V4_1_MIGRATION
+ 	calldata->args.flags |= EXCHGID4_FLAG_SUPP_MOVED_MIGR;
+ #endif
+-	if (test_bit(NFS_CS_DS, &clp->cl_flags))
++	if (test_bit(NFS_CS_PNFS, &clp->cl_flags))
+ 		calldata->args.flags |= EXCHGID4_FLAG_USE_PNFS_DS;
+ 	msg.rpc_argp = &calldata->args;
+ 	msg.rpc_resp = &calldata->res;
+diff --git a/fs/nfs/nfstrace.h b/fs/nfs/nfstrace.h
+index 1e710654af117..352fdaed40754 100644
+--- a/fs/nfs/nfstrace.h
++++ b/fs/nfs/nfstrace.h
+@@ -939,10 +939,11 @@ TRACE_EVENT(nfs_sillyrename_unlink,
+ DECLARE_EVENT_CLASS(nfs_folio_event,
+ 		TP_PROTO(
+ 			const struct inode *inode,
+-			struct folio *folio
++			loff_t offset,
++			size_t count
+ 		),
+ 
+-		TP_ARGS(inode, folio),
++		TP_ARGS(inode, offset, count),
+ 
+ 		TP_STRUCT__entry(
+ 			__field(dev_t, dev)
+@@ -950,7 +951,7 @@ DECLARE_EVENT_CLASS(nfs_folio_event,
+ 			__field(u64, fileid)
+ 			__field(u64, version)
+ 			__field(loff_t, offset)
+-			__field(u32, count)
++			__field(size_t, count)
+ 		),
+ 
+ 		TP_fast_assign(
+@@ -960,13 +961,13 @@ DECLARE_EVENT_CLASS(nfs_folio_event,
+ 			__entry->fileid = nfsi->fileid;
+ 			__entry->fhandle = nfs_fhandle_hash(&nfsi->fh);
+ 			__entry->version = inode_peek_iversion_raw(inode);
+-			__entry->offset = folio_file_pos(folio);
+-			__entry->count = nfs_folio_length(folio);
++			__entry->offset = offset,
++			__entry->count = count;
+ 		),
+ 
+ 		TP_printk(
+ 			"fileid=%02x:%02x:%llu fhandle=0x%08x version=%llu "
+-			"offset=%lld count=%u",
++			"offset=%lld count=%zu",
+ 			MAJOR(__entry->dev), MINOR(__entry->dev),
+ 			(unsigned long long)__entry->fileid,
+ 			__entry->fhandle, __entry->version,
+@@ -978,18 +979,20 @@ DECLARE_EVENT_CLASS(nfs_folio_event,
+ 	DEFINE_EVENT(nfs_folio_event, name, \
+ 			TP_PROTO( \
+ 				const struct inode *inode, \
+-				struct folio *folio \
++				loff_t offset, \
++				size_t count \
+ 			), \
+-			TP_ARGS(inode, folio))
++			TP_ARGS(inode, offset, count))
+ 
+ DECLARE_EVENT_CLASS(nfs_folio_event_done,
+ 		TP_PROTO(
+ 			const struct inode *inode,
+-			struct folio *folio,
++			loff_t offset,
++			size_t count,
+ 			int ret
+ 		),
+ 
+-		TP_ARGS(inode, folio, ret),
++		TP_ARGS(inode, offset, count, ret),
+ 
+ 		TP_STRUCT__entry(
+ 			__field(dev_t, dev)
+@@ -998,7 +1001,7 @@ DECLARE_EVENT_CLASS(nfs_folio_event_done,
+ 			__field(u64, fileid)
+ 			__field(u64, version)
+ 			__field(loff_t, offset)
+-			__field(u32, count)
++			__field(size_t, count)
+ 		),
+ 
+ 		TP_fast_assign(
+@@ -1008,14 +1011,14 @@ DECLARE_EVENT_CLASS(nfs_folio_event_done,
+ 			__entry->fileid = nfsi->fileid;
+ 			__entry->fhandle = nfs_fhandle_hash(&nfsi->fh);
+ 			__entry->version = inode_peek_iversion_raw(inode);
+-			__entry->offset = folio_file_pos(folio);
+-			__entry->count = nfs_folio_length(folio);
++			__entry->offset = offset,
++			__entry->count = count,
+ 			__entry->ret = ret;
+ 		),
+ 
+ 		TP_printk(
+ 			"fileid=%02x:%02x:%llu fhandle=0x%08x version=%llu "
+-			"offset=%lld count=%u ret=%d",
++			"offset=%lld count=%zu ret=%d",
+ 			MAJOR(__entry->dev), MINOR(__entry->dev),
+ 			(unsigned long long)__entry->fileid,
+ 			__entry->fhandle, __entry->version,
+@@ -1027,10 +1030,11 @@ DECLARE_EVENT_CLASS(nfs_folio_event_done,
+ 	DEFINE_EVENT(nfs_folio_event_done, name, \
+ 			TP_PROTO( \
+ 				const struct inode *inode, \
+-				struct folio *folio, \
++				loff_t offset, \
++				size_t count, \
+ 				int ret \
+ 			), \
+-			TP_ARGS(inode, folio, ret))
++			TP_ARGS(inode, offset, count, ret))
+ 
+ DEFINE_NFS_FOLIO_EVENT(nfs_aop_readpage);
+ DEFINE_NFS_FOLIO_EVENT_DONE(nfs_aop_readpage_done);
+diff --git a/fs/nfs/read.c b/fs/nfs/read.c
+index a142287d86f68..88e6a78d37fb3 100644
+--- a/fs/nfs/read.c
++++ b/fs/nfs/read.c
+@@ -332,13 +332,15 @@ int nfs_read_add_folio(struct nfs_pageio_descriptor *pgio,
+ int nfs_read_folio(struct file *file, struct folio *folio)
+ {
+ 	struct inode *inode = file_inode(file);
++	loff_t pos = folio_pos(folio);
++	size_t len = folio_size(folio);
+ 	struct nfs_pageio_descriptor pgio;
+ 	struct nfs_open_context *ctx;
+ 	int ret;
+ 
+-	trace_nfs_aop_readpage(inode, folio);
++	trace_nfs_aop_readpage(inode, pos, len);
+ 	nfs_inc_stats(inode, NFSIOS_VFSREADPAGE);
+-	task_io_account_read(folio_size(folio));
++	task_io_account_read(len);
+ 
+ 	/*
+ 	 * Try to flush any pending writes to the file..
+@@ -381,7 +383,7 @@ int nfs_read_folio(struct file *file, struct folio *folio)
+ out_put:
+ 	put_nfs_open_context(ctx);
+ out:
+-	trace_nfs_aop_readpage_done(inode, folio, ret);
++	trace_nfs_aop_readpage_done(inode, pos, len, ret);
+ 	return ret;
+ out_unlock:
+ 	folio_unlock(folio);
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 2329cbb0e446b..75c1b3c7faead 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -2073,17 +2073,17 @@ int nfs_wb_folio_cancel(struct inode *inode, struct folio *folio)
+  */
+ int nfs_wb_folio(struct inode *inode, struct folio *folio)
+ {
+-	loff_t range_start = folio_file_pos(folio);
+-	loff_t range_end = range_start + (loff_t)folio_size(folio) - 1;
++	loff_t range_start = folio_pos(folio);
++	size_t len = folio_size(folio);
+ 	struct writeback_control wbc = {
+ 		.sync_mode = WB_SYNC_ALL,
+ 		.nr_to_write = 0,
+ 		.range_start = range_start,
+-		.range_end = range_end,
++		.range_end = range_start + len - 1,
+ 	};
+ 	int ret;
+ 
+-	trace_nfs_writeback_folio(inode, folio);
++	trace_nfs_writeback_folio(inode, range_start, len);
+ 
+ 	for (;;) {
+ 		folio_wait_writeback(folio);
+@@ -2101,7 +2101,7 @@ int nfs_wb_folio(struct inode *inode, struct folio *folio)
+ 			goto out_error;
+ 	}
+ out_error:
+-	trace_nfs_writeback_folio_done(inode, folio, ret);
++	trace_nfs_writeback_folio_done(inode, range_start, len, ret);
+ 	return ret;
+ }
+ 
+diff --git a/fs/nfsd/Kconfig b/fs/nfsd/Kconfig
+index 272ab8d5c4d76..ec2ab6429e00b 100644
+--- a/fs/nfsd/Kconfig
++++ b/fs/nfsd/Kconfig
+@@ -162,7 +162,7 @@ config NFSD_V4_SECURITY_LABEL
+ config NFSD_LEGACY_CLIENT_TRACKING
+ 	bool "Support legacy NFSv4 client tracking methods (DEPRECATED)"
+ 	depends on NFSD_V4
+-	default n
++	default y
+ 	help
+ 	  The NFSv4 server needs to store a small amount of information on
+ 	  stable storage in order to handle state recovery after reboot. Most
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index ad9083ca144ba..f4704f5d40867 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -664,7 +664,7 @@ static int
+ nfsd_file_lease_notifier_call(struct notifier_block *nb, unsigned long arg,
+ 			    void *data)
+ {
+-	struct file_lock *fl = data;
++	struct file_lease *fl = data;
+ 
+ 	/* Only close files for F_SETLEASE leases */
+ 	if (fl->c.flc_flags & FL_LEASE)
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 46bd20fe5c0f4..2e39cf2e502a3 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -2269,7 +2269,7 @@ nfsd4_layoutget(struct svc_rqst *rqstp,
+ 	const struct nfsd4_layout_ops *ops;
+ 	struct nfs4_layout_stateid *ls;
+ 	__be32 nfserr;
+-	int accmode = NFSD_MAY_READ_IF_EXEC;
++	int accmode = NFSD_MAY_READ_IF_EXEC | NFSD_MAY_OWNER_OVERRIDE;
+ 
+ 	switch (lgp->lg_seg.iomode) {
+ 	case IOMODE_READ:
+@@ -2359,7 +2359,8 @@ nfsd4_layoutcommit(struct svc_rqst *rqstp,
+ 	struct nfs4_layout_stateid *ls;
+ 	__be32 nfserr;
+ 
+-	nfserr = fh_verify(rqstp, current_fh, 0, NFSD_MAY_WRITE);
++	nfserr = fh_verify(rqstp, current_fh, 0,
++			   NFSD_MAY_WRITE | NFSD_MAY_OWNER_OVERRIDE);
+ 	if (nfserr)
+ 		goto out;
+ 
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index 2c060e0b16048..67d8673a9391c 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -2086,8 +2086,8 @@ nfsd4_client_tracking_init(struct net *net)
+ 	status = nn->client_tracking_ops->init(net);
+ out:
+ 	if (status) {
+-		printk(KERN_WARNING "NFSD: Unable to initialize client "
+-				    "recovery tracking! (%d)\n", status);
++		pr_warn("NFSD: Unable to initialize client recovery tracking! (%d)\n", status);
++		pr_warn("NFSD: Is nfsdcld running? If not, enable CONFIG_NFSD_LEGACY_CLIENT_TRACKING.\n");
+ 		nn->client_tracking_ops = NULL;
+ 	}
+ 	return status;
+diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
+index 0131d83b912de..c034080c334b9 100644
+--- a/fs/nilfs2/btnode.c
++++ b/fs/nilfs2/btnode.c
+@@ -51,12 +51,21 @@ nilfs_btnode_create_block(struct address_space *btnc, __u64 blocknr)
+ 
+ 	bh = nilfs_grab_buffer(inode, btnc, blocknr, BIT(BH_NILFS_Node));
+ 	if (unlikely(!bh))
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	if (unlikely(buffer_mapped(bh) || buffer_uptodate(bh) ||
+ 		     buffer_dirty(bh))) {
+-		brelse(bh);
+-		BUG();
++		/*
++		 * The block buffer at the specified new address was already
++		 * in use.  This can happen if it is a virtual block number
++		 * and has been reallocated due to corruption of the bitmap
++		 * used to manage its allocation state (if not, the buffer
++		 * clearing of an abandoned b-tree node is missing somewhere).
++		 */
++		nilfs_error(inode->i_sb,
++			    "state inconsistency probably due to duplicate use of b-tree node block address %llu (ino=%lu)",
++			    (unsigned long long)blocknr, inode->i_ino);
++		goto failed;
+ 	}
+ 	memset(bh->b_data, 0, i_blocksize(inode));
+ 	bh->b_bdev = inode->i_sb->s_bdev;
+@@ -67,6 +76,12 @@ nilfs_btnode_create_block(struct address_space *btnc, __u64 blocknr)
+ 	folio_unlock(bh->b_folio);
+ 	folio_put(bh->b_folio);
+ 	return bh;
++
++failed:
++	folio_unlock(bh->b_folio);
++	folio_put(bh->b_folio);
++	brelse(bh);
++	return ERR_PTR(-EIO);
+ }
+ 
+ int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr,
+@@ -217,8 +232,8 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
+ 	}
+ 
+ 	nbh = nilfs_btnode_create_block(btnc, newkey);
+-	if (!nbh)
+-		return -ENOMEM;
++	if (IS_ERR(nbh))
++		return PTR_ERR(nbh);
+ 
+ 	BUG_ON(nbh == obh);
+ 	ctxt->newbh = nbh;
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index a139970e48041..862bdf23120e8 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -63,8 +63,8 @@ static int nilfs_btree_get_new_block(const struct nilfs_bmap *btree,
+ 	struct buffer_head *bh;
+ 
+ 	bh = nilfs_btnode_create_block(btnc, ptr);
+-	if (!bh)
+-		return -ENOMEM;
++	if (IS_ERR(bh))
++		return PTR_ERR(bh);
+ 
+ 	set_buffer_nilfs_volatile(bh);
+ 	*bhp = bh;
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 6ea81f1d50944..d02fd92cdb432 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -136,7 +136,7 @@ static void nilfs_dispose_list(struct the_nilfs *, struct list_head *, int);
+ 
+ #define nilfs_cnt32_ge(a, b)   \
+ 	(typecheck(__u32, a) && typecheck(__u32, b) && \
+-	 ((__s32)(a) - (__s32)(b) >= 0))
++	 ((__s32)((a) - (b)) >= 0))
+ 
+ static int nilfs_prepare_segment_lock(struct super_block *sb,
+ 				      struct nilfs_transaction_info *ti)
+diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
+index 8e6bcdf99770f..a810ef547501d 100644
+--- a/fs/ntfs3/attrib.c
++++ b/fs/ntfs3/attrib.c
+@@ -231,7 +231,7 @@ int attr_make_nonresident(struct ntfs_inode *ni, struct ATTRIB *attr,
+ 	struct ntfs_sb_info *sbi;
+ 	struct ATTRIB *attr_s;
+ 	struct MFT_REC *rec;
+-	u32 used, asize, rsize, aoff, align;
++	u32 used, asize, rsize, aoff;
+ 	bool is_data;
+ 	CLST len, alen;
+ 	char *next;
+@@ -252,10 +252,13 @@ int attr_make_nonresident(struct ntfs_inode *ni, struct ATTRIB *attr,
+ 	rsize = le32_to_cpu(attr->res.data_size);
+ 	is_data = attr->type == ATTR_DATA && !attr->name_len;
+ 
+-	align = sbi->cluster_size;
+-	if (is_attr_compressed(attr))
+-		align <<= COMPRESSION_UNIT;
+-	len = (rsize + align - 1) >> sbi->cluster_bits;
++	/* len - how many clusters required to store 'rsize' bytes */
++	if (is_attr_compressed(attr)) {
++		u8 shift = sbi->cluster_bits + NTFS_LZNT_CUNIT;
++		len = ((rsize + (1u << shift) - 1) >> shift) << NTFS_LZNT_CUNIT;
++	} else {
++		len = bytes_to_cluster(sbi, rsize);
++	}
+ 
+ 	run_init(run);
+ 
+@@ -670,7 +673,8 @@ int attr_set_size(struct ntfs_inode *ni, enum ATTR_TYPE type,
+ 			goto undo_2;
+ 		}
+ 
+-		if (!is_mft)
++		/* keep runs for $MFT::$ATTR_DATA and $MFT::$ATTR_BITMAP. */
++		if (ni->mi.rno != MFT_REC_MFT)
+ 			run_truncate_head(run, evcn + 1);
+ 
+ 		svcn = le64_to_cpu(attr->nres.svcn);
+@@ -972,6 +976,19 @@ int attr_data_get_block(struct ntfs_inode *ni, CLST vcn, CLST clen, CLST *lcn,
+ 	if (err)
+ 		goto out;
+ 
++	/* Check for compressed frame. */
++	err = attr_is_frame_compressed(ni, attr, vcn >> NTFS_LZNT_CUNIT, &hint);
++	if (err)
++		goto out;
++
++	if (hint) {
++		/* if frame is compressed - don't touch it. */
++		*lcn = COMPRESSED_LCN;
++		*len = hint;
++		err = -EOPNOTSUPP;
++		goto out;
++	}
++
+ 	if (!*len) {
+ 		if (run_lookup_entry(run, vcn, lcn, len, NULL)) {
+ 			if (*lcn != SPARSE_LCN || !new)
+@@ -1722,6 +1739,7 @@ int attr_allocate_frame(struct ntfs_inode *ni, CLST frame, size_t compr_size,
+ 
+ 	attr_b->nres.total_size = cpu_to_le64(total_size);
+ 	inode_set_bytes(&ni->vfs_inode, total_size);
++	ni->ni_flags |= NI_FLAG_UPDATE_PARENT;
+ 
+ 	mi_b->dirty = true;
+ 	mark_inode_dirty(&ni->vfs_inode);
+diff --git a/fs/ntfs3/bitmap.c b/fs/ntfs3/bitmap.c
+index c9eb01ccee51b..cf4fe21a50399 100644
+--- a/fs/ntfs3/bitmap.c
++++ b/fs/ntfs3/bitmap.c
+@@ -1382,7 +1382,7 @@ int wnd_extend(struct wnd_bitmap *wnd, size_t new_bits)
+ 
+ 		err = ntfs_vbo_to_lbo(sbi, &wnd->run, vbo, &lbo, &bytes);
+ 		if (err)
+-			break;
++			return err;
+ 
+ 		bh = ntfs_bread(sb, lbo >> sb->s_blocksize_bits);
+ 		if (!bh)
+diff --git a/fs/ntfs3/dir.c b/fs/ntfs3/dir.c
+index 1937e8e612f87..858efe255f6f3 100644
+--- a/fs/ntfs3/dir.c
++++ b/fs/ntfs3/dir.c
+@@ -326,7 +326,8 @@ static inline int ntfs_filldir(struct ntfs_sb_info *sbi, struct ntfs_inode *ni,
+ 	 * It does additional locks/reads just to get the type of name.
+ 	 * Should we use additional mount option to enable branch below?
+ 	 */
+-	if ((fname->dup.fa & FILE_ATTRIBUTE_REPARSE_POINT) &&
++	if (((fname->dup.fa & FILE_ATTRIBUTE_REPARSE_POINT) ||
++	     fname->dup.ea_size) &&
+ 	    ino != ni->mi.rno) {
+ 		struct inode *inode = ntfs_iget5(sbi->sb, &e->ref, NULL);
+ 		if (!IS_ERR_OR_NULL(inode)) {
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index 2f903b6ce1570..9ae202901f3c0 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -299,10 +299,7 @@ static int ntfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 		}
+ 
+ 		if (ni->i_valid < to) {
+-			if (!inode_trylock(inode)) {
+-				err = -EAGAIN;
+-				goto out;
+-			}
++			inode_lock(inode);
+ 			err = ntfs_extend_initialized_size(file, ni,
+ 							   ni->i_valid, to);
+ 			inode_unlock(inode);
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 0008670939a4a..4822cfd6351c2 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -1501,7 +1501,7 @@ int ni_insert_nonresident(struct ntfs_inode *ni, enum ATTR_TYPE type,
+ 
+ 	if (is_ext) {
+ 		if (flags & ATTR_FLAG_COMPRESSED)
+-			attr->nres.c_unit = COMPRESSION_UNIT;
++			attr->nres.c_unit = NTFS_LZNT_CUNIT;
+ 		attr->nres.total_size = attr->nres.alloc_size;
+ 	}
+ 
+diff --git a/fs/ntfs3/fslog.c b/fs/ntfs3/fslog.c
+index 40926bf392d28..fcb3e49911ad9 100644
+--- a/fs/ntfs3/fslog.c
++++ b/fs/ntfs3/fslog.c
+@@ -2996,7 +2996,7 @@ static struct ATTRIB *attr_create_nonres_log(struct ntfs_sb_info *sbi,
+ 	if (is_ext) {
+ 		attr->name_off = SIZEOF_NONRESIDENT_EX_LE;
+ 		if (is_attr_compressed(attr))
+-			attr->nres.c_unit = COMPRESSION_UNIT;
++			attr->nres.c_unit = NTFS_LZNT_CUNIT;
+ 
+ 		attr->nres.run_off =
+ 			cpu_to_le16(SIZEOF_NONRESIDENT_EX + name_size);
+@@ -3922,6 +3922,9 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 		goto out;
+ 	}
+ 
++	log->page_mask = log->page_size - 1;
++	log->page_bits = blksize_bits(log->page_size);
++
+ 	/* If the file size has shrunk then we won't mount it. */
+ 	if (log->l_size < le64_to_cpu(ra2->l_size)) {
+ 		err = -EINVAL;
+diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
+index d0f15bbf78f6c..9089c58a005ce 100644
+--- a/fs/ntfs3/index.c
++++ b/fs/ntfs3/index.c
+@@ -978,7 +978,7 @@ static struct indx_node *indx_new(struct ntfs_index *indx,
+ 		hdr->used =
+ 			cpu_to_le32(eo + sizeof(struct NTFS_DE) + sizeof(u64));
+ 		de_set_vbn_le(e, *sub_vbn);
+-		hdr->flags = 1;
++		hdr->flags = NTFS_INDEX_HDR_HAS_SUBNODES;
+ 	} else {
+ 		e->size = cpu_to_le16(sizeof(struct NTFS_DE));
+ 		hdr->used = cpu_to_le32(eo + sizeof(struct NTFS_DE));
+@@ -1683,7 +1683,7 @@ static int indx_insert_into_root(struct ntfs_index *indx, struct ntfs_inode *ni,
+ 	e->size = cpu_to_le16(sizeof(struct NTFS_DE) + sizeof(u64));
+ 	e->flags = NTFS_IE_HAS_SUBNODES | NTFS_IE_LAST;
+ 
+-	hdr->flags = 1;
++	hdr->flags = NTFS_INDEX_HDR_HAS_SUBNODES;
+ 	hdr->used = hdr->total =
+ 		cpu_to_le32(new_root_size - offsetof(struct INDEX_ROOT, ihdr));
+ 
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index 0f1664db94ad9..9559d72f86606 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -1508,7 +1508,7 @@ int ntfs_create_inode(struct mnt_idmap *idmap, struct inode *dir,
+ 			attr->size = cpu_to_le32(SIZEOF_NONRESIDENT_EX + 8);
+ 			attr->name_off = SIZEOF_NONRESIDENT_EX_LE;
+ 			attr->flags = ATTR_FLAG_COMPRESSED;
+-			attr->nres.c_unit = COMPRESSION_UNIT;
++			attr->nres.c_unit = NTFS_LZNT_CUNIT;
+ 			asize = SIZEOF_NONRESIDENT_EX + 8;
+ 		} else {
+ 			attr->size = cpu_to_le32(SIZEOF_NONRESIDENT + 8);
+@@ -1668,7 +1668,9 @@ int ntfs_create_inode(struct mnt_idmap *idmap, struct inode *dir,
+ 	 * The packed size of extended attribute is stored in direntry too.
+ 	 * 'fname' here points to inside new_de.
+ 	 */
+-	ntfs_save_wsl_perm(inode, &fname->dup.ea_size);
++	err = ntfs_save_wsl_perm(inode, &fname->dup.ea_size);
++	if (err)
++		goto out6;
+ 
+ 	/*
+ 	 * update ea_size in file_name attribute too.
+@@ -1712,6 +1714,12 @@ int ntfs_create_inode(struct mnt_idmap *idmap, struct inode *dir,
+ 	goto out2;
+ 
+ out6:
++	attr = ni_find_attr(ni, NULL, NULL, ATTR_EA, NULL, 0, NULL, NULL);
++	if (attr && attr->non_res) {
++		/* Delete ATTR_EA, if non-resident. */
++		attr_set_size(ni, ATTR_EA, NULL, 0, NULL, 0, NULL, false, NULL);
++	}
++
+ 	if (rp_inserted)
+ 		ntfs_remove_reparse(sbi, IO_REPARSE_TAG_SYMLINK, &new_de->ref);
+ 
+@@ -2133,5 +2141,6 @@ const struct address_space_operations ntfs_aops = {
+ const struct address_space_operations ntfs_aops_cmpr = {
+ 	.read_folio	= ntfs_read_folio,
+ 	.readahead	= ntfs_readahead,
++	.dirty_folio	= block_dirty_folio,
+ };
+ // clang-format on
+diff --git a/fs/ntfs3/ntfs.h b/fs/ntfs3/ntfs.h
+index 3d6143c7abc03..e1889ad092304 100644
+--- a/fs/ntfs3/ntfs.h
++++ b/fs/ntfs3/ntfs.h
+@@ -82,9 +82,6 @@ typedef u32 CLST;
+ #define RESIDENT_LCN   ((CLST)-2)
+ #define COMPRESSED_LCN ((CLST)-3)
+ 
+-#define COMPRESSION_UNIT     4
+-#define COMPRESS_MAX_CLUSTER 0x1000
+-
+ enum RECORD_NUM {
+ 	MFT_REC_MFT		= 0,
+ 	MFT_REC_MIRR		= 1,
+@@ -696,14 +693,15 @@ static inline bool de_has_vcn_ex(const struct NTFS_DE *e)
+ 	      offsetof(struct ATTR_FILE_NAME, name) + \
+ 	      NTFS_NAME_LEN * sizeof(short), 8)
+ 
++#define NTFS_INDEX_HDR_HAS_SUBNODES cpu_to_le32(1)
++
+ struct INDEX_HDR {
+ 	__le32 de_off;	// 0x00: The offset from the start of this structure
+ 			// to the first NTFS_DE.
+ 	__le32 used;	// 0x04: The size of this structure plus all
+ 			// entries (quad-word aligned).
+ 	__le32 total;	// 0x08: The allocated size of for this structure plus all entries.
+-	u8 flags;	// 0x0C: 0x00 = Small directory, 0x01 = Large directory.
+-	u8 res[3];
++	__le32 flags;	// 0x0C: 0x00 = Small directory, 0x01 = Large directory.
+ 
+ 	//
+ 	// de_off + used <= total
+@@ -751,7 +749,7 @@ static inline struct NTFS_DE *hdr_next_de(const struct INDEX_HDR *hdr,
+ 
+ static inline bool hdr_has_subnode(const struct INDEX_HDR *hdr)
+ {
+-	return hdr->flags & 1;
++	return hdr->flags & NTFS_INDEX_HDR_HAS_SUBNODES;
+ }
+ 
+ struct INDEX_BUFFER {
+@@ -771,7 +769,7 @@ static inline bool ib_is_empty(const struct INDEX_BUFFER *ib)
+ 
+ static inline bool ib_is_leaf(const struct INDEX_BUFFER *ib)
+ {
+-	return !(ib->ihdr.flags & 1);
++	return !(ib->ihdr.flags & NTFS_INDEX_HDR_HAS_SUBNODES);
+ }
+ 
+ /* Index root structure ( 0x90 ). */
+diff --git a/fs/ntfs3/super.c b/fs/ntfs3/super.c
+index 27fbde2701b63..02b6f51ce6503 100644
+--- a/fs/ntfs3/super.c
++++ b/fs/ntfs3/super.c
+@@ -275,7 +275,7 @@ static const struct fs_parameter_spec ntfs_fs_parameters[] = {
+ 	fsparam_flag_no("acl",			Opt_acl),
+ 	fsparam_string("iocharset",		Opt_iocharset),
+ 	fsparam_flag_no("prealloc",		Opt_prealloc),
+-	fsparam_flag_no("nocase",		Opt_nocase),
++	fsparam_flag_no("case",		Opt_nocase),
+ 	{}
+ };
+ // clang-format on
+@@ -468,7 +468,7 @@ static int ntfs3_volinfo(struct seq_file *m, void *o)
+ 	struct super_block *sb = m->private;
+ 	struct ntfs_sb_info *sbi = sb->s_fs_info;
+ 
+-	seq_printf(m, "ntfs%d.%d\n%u\n%zu\n\%zu\n%zu\n%s\n%s\n",
++	seq_printf(m, "ntfs%d.%d\n%u\n%zu\n%zu\n%zu\n%s\n%s\n",
+ 		   sbi->volume.major_ver, sbi->volume.minor_ver,
+ 		   sbi->cluster_size, sbi->used.bitmap.nbits,
+ 		   sbi->mft.bitmap.nbits,
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index b1c2c0b821161..dd7b462387a00 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -476,12 +476,10 @@ static struct inode *proc_sys_make_inode(struct super_block *sb,
+ 			make_empty_dir_inode(inode);
+ 	}
+ 
++	inode->i_uid = GLOBAL_ROOT_UID;
++	inode->i_gid = GLOBAL_ROOT_GID;
+ 	if (root->set_ownership)
+ 		root->set_ownership(head, &inode->i_uid, &inode->i_gid);
+-	else {
+-		inode->i_uid = GLOBAL_ROOT_UID;
+-		inode->i_gid = GLOBAL_ROOT_GID;
+-	}
+ 
+ 	return inode;
+ }
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 71e5039d940dc..a45f2da0ada0d 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -1418,7 +1418,6 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
+ {
+ 	u64 frame = 0, flags = 0;
+ 	struct page *page = NULL;
+-	bool migration = false;
+ 
+ 	if (pte_present(pte)) {
+ 		if (pm->show_pfn)
+@@ -1450,7 +1449,6 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
+ 			    (offset << MAX_SWAPFILES_SHIFT);
+ 		}
+ 		flags |= PM_SWAP;
+-		migration = is_migration_entry(entry);
+ 		if (is_pfn_swap_entry(entry))
+ 			page = pfn_swap_entry_to_page(entry);
+ 		if (pte_marker_entry_uffd_wp(entry))
+@@ -1459,7 +1457,7 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
+ 
+ 	if (page && !PageAnon(page))
+ 		flags |= PM_FILE;
+-	if (page && !migration && page_mapcount(page) == 1)
++	if (page && (flags & PM_PRESENT) && page_mapcount(page) == 1)
+ 		flags |= PM_MMAP_EXCLUSIVE;
+ 	if (vma->vm_flags & VM_SOFTDIRTY)
+ 		flags |= PM_SOFT_DIRTY;
+@@ -1476,10 +1474,10 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ 	pte_t *pte, *orig_pte;
+ 	int err = 0;
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-	bool migration = false;
+ 
+ 	ptl = pmd_trans_huge_lock(pmdp, vma);
+ 	if (ptl) {
++		unsigned int idx = (addr & ~PMD_MASK) >> PAGE_SHIFT;
+ 		u64 flags = 0, frame = 0;
+ 		pmd_t pmd = *pmdp;
+ 		struct page *page = NULL;
+@@ -1496,8 +1494,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ 			if (pmd_uffd_wp(pmd))
+ 				flags |= PM_UFFD_WP;
+ 			if (pm->show_pfn)
+-				frame = pmd_pfn(pmd) +
+-					((addr & ~PMD_MASK) >> PAGE_SHIFT);
++				frame = pmd_pfn(pmd) + idx;
+ 		}
+ #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+ 		else if (is_swap_pmd(pmd)) {
+@@ -1506,11 +1503,9 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ 
+ 			if (pm->show_pfn) {
+ 				if (is_pfn_swap_entry(entry))
+-					offset = swp_offset_pfn(entry);
++					offset = swp_offset_pfn(entry) + idx;
+ 				else
+-					offset = swp_offset(entry);
+-				offset = offset +
+-					((addr & ~PMD_MASK) >> PAGE_SHIFT);
++					offset = swp_offset(entry) + idx;
+ 				frame = swp_type(entry) |
+ 					(offset << MAX_SWAPFILES_SHIFT);
+ 			}
+@@ -1520,17 +1515,22 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ 			if (pmd_swp_uffd_wp(pmd))
+ 				flags |= PM_UFFD_WP;
+ 			VM_BUG_ON(!is_pmd_migration_entry(pmd));
+-			migration = is_migration_entry(entry);
+ 			page = pfn_swap_entry_to_page(entry);
+ 		}
+ #endif
+ 
+-		if (page && !migration && page_mapcount(page) == 1)
+-			flags |= PM_MMAP_EXCLUSIVE;
++		if (page && !PageAnon(page))
++			flags |= PM_FILE;
++
++		for (; addr != end; addr += PAGE_SIZE, idx++) {
++			unsigned long cur_flags = flags;
++			pagemap_entry_t pme;
+ 
+-		for (; addr != end; addr += PAGE_SIZE) {
+-			pagemap_entry_t pme = make_pme(frame, flags);
++			if (page && (flags & PM_PRESENT) &&
++			    page_mapcount(page + idx) == 1)
++				cur_flags |= PM_MMAP_EXCLUSIVE;
+ 
++			pme = make_pme(frame, cur_flags);
+ 			err = add_to_pagemap(&pme, pm);
+ 			if (err)
+ 				break;
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index c92937bed1331..2c4b357d85e22 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -1894,12 +1894,12 @@ init_cifs(void)
+ 					   WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
+ 	if (!serverclose_wq) {
+ 		rc = -ENOMEM;
+-		goto out_destroy_serverclose_wq;
++		goto out_destroy_deferredclose_wq;
+ 	}
+ 
+ 	rc = cifs_init_inodecache();
+ 	if (rc)
+-		goto out_destroy_deferredclose_wq;
++		goto out_destroy_serverclose_wq;
+ 
+ 	rc = cifs_init_netfs();
+ 	if (rc)
+@@ -1967,6 +1967,8 @@ init_cifs(void)
+ 	cifs_destroy_netfs();
+ out_destroy_inodecache:
+ 	cifs_destroy_inodecache();
++out_destroy_serverclose_wq:
++	destroy_workqueue(serverclose_wq);
+ out_destroy_deferredclose_wq:
+ 	destroy_workqueue(deferredclose_wq);
+ out_destroy_cifsoplockd_wq:
+@@ -1977,8 +1979,6 @@ init_cifs(void)
+ 	destroy_workqueue(decrypt_wq);
+ out_destroy_cifsiod_wq:
+ 	destroy_workqueue(cifsiod_wq);
+-out_destroy_serverclose_wq:
+-	destroy_workqueue(serverclose_wq);
+ out_clean_proc:
+ 	cifs_proc_clean();
+ 	return rc;
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 7a16e12f5da87..d2307162a2de1 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -2614,6 +2614,13 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb3_fs_context *ctx)
+ 			cifs_dbg(VFS, "Server does not support mounting with posix SMB3.11 extensions\n");
+ 			rc = -EOPNOTSUPP;
+ 			goto out_fail;
++		} else if (ses->server->vals->protocol_id == SMB10_PROT_ID)
++			if (cap_unix(ses))
++				cifs_dbg(FYI, "Unix Extensions requested on SMB1 mount\n");
++			else {
++				cifs_dbg(VFS, "SMB1 Unix Extensions not supported by server\n");
++				rc = -EOPNOTSUPP;
++				goto out_fail;
+ 		} else {
+ 			cifs_dbg(VFS,
+ 				"Check vers= mount option. SMB3.11 disabled but required for POSIX extensions\n");
+@@ -3686,6 +3693,7 @@ int cifs_mount(struct cifs_sb_info *cifs_sb, struct smb3_fs_context *ctx)
+ }
+ #endif
+ 
++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ /*
+  * Issue a TREE_CONNECT request.
+  */
+@@ -3807,11 +3815,25 @@ CIFSTCon(const unsigned int xid, struct cifs_ses *ses,
+ 		else
+ 			tcon->Flags = 0;
+ 		cifs_dbg(FYI, "Tcon flags: 0x%x\n", tcon->Flags);
+-	}
+ 
++		/*
++		 * reset_cifs_unix_caps calls QFSInfo which requires
++		 * need_reconnect to be false, but we would not need to call
++		 * reset_caps if this were not a reconnect case so must check
++		 * need_reconnect flag here.  The caller will also clear
++		 * need_reconnect when tcon was successful but needed to be
++		 * cleared earlier in the case of unix extensions reconnect
++		 */
++		if (tcon->need_reconnect && tcon->unix_ext) {
++			cifs_dbg(FYI, "resetting caps for %s\n", tcon->tree_name);
++			tcon->need_reconnect = false;
++			reset_cifs_unix_caps(xid, tcon, NULL, NULL);
++		}
++	}
+ 	cifs_buf_release(smb_buffer);
+ 	return rc;
+ }
++#endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */
+ 
+ static void delayed_free(struct rcu_head *p)
+ {
+diff --git a/fs/super.c b/fs/super.c
+index 095ba793e10cf..38d72a3cf6fcf 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -736,6 +736,17 @@ struct super_block *sget_fc(struct fs_context *fc,
+ 	struct user_namespace *user_ns = fc->global ? &init_user_ns : fc->user_ns;
+ 	int err;
+ 
++	/*
++	 * Never allow s_user_ns != &init_user_ns when FS_USERNS_MOUNT is
++	 * not set, as the filesystem is likely unprepared to handle it.
++	 * This can happen when fsconfig() is called from init_user_ns with
++	 * an fs_fd opened in another user namespace.
++	 */
++	if (user_ns != &init_user_ns && !(fc->fs_type->fs_flags & FS_USERNS_MOUNT)) {
++		errorfc(fc, "VFS: Mounting from non-initial user namespace is not allowed");
++		return ERR_PTR(-EPERM);
++	}
++
+ retry:
+ 	spin_lock(&sb_lock);
+ 	if (test) {
+diff --git a/fs/udf/balloc.c b/fs/udf/balloc.c
+index ab3ffc355949d..558ad046972ad 100644
+--- a/fs/udf/balloc.c
++++ b/fs/udf/balloc.c
+@@ -64,8 +64,12 @@ static int read_block_bitmap(struct super_block *sb,
+ 	}
+ 
+ 	for (i = 0; i < count; i++)
+-		if (udf_test_bit(i + off, bh->b_data))
++		if (udf_test_bit(i + off, bh->b_data)) {
++			bitmap->s_block_bitmap[bitmap_nr] =
++							ERR_PTR(-EFSCORRUPTED);
++			brelse(bh);
+ 			return -EFSCORRUPTED;
++		}
+ 	return 0;
+ }
+ 
+@@ -81,8 +85,15 @@ static int __load_block_bitmap(struct super_block *sb,
+ 			  block_group, nr_groups);
+ 	}
+ 
+-	if (bitmap->s_block_bitmap[block_group])
++	if (bitmap->s_block_bitmap[block_group]) {
++		/*
++		 * The bitmap failed verification in the past. No point in
++		 * trying again.
++		 */
++		if (IS_ERR(bitmap->s_block_bitmap[block_group]))
++			return PTR_ERR(bitmap->s_block_bitmap[block_group]);
+ 		return block_group;
++	}
+ 
+ 	retval = read_block_bitmap(sb, bitmap, block_group, block_group);
+ 	if (retval < 0)
+diff --git a/fs/udf/file.c b/fs/udf/file.c
+index 97c59585208ca..3a4179de316b4 100644
+--- a/fs/udf/file.c
++++ b/fs/udf/file.c
+@@ -232,7 +232,9 @@ static int udf_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 
+ 	if ((attr->ia_valid & ATTR_SIZE) &&
+ 	    attr->ia_size != i_size_read(inode)) {
++		filemap_invalidate_lock(inode->i_mapping);
+ 		error = udf_setsize(inode, attr->ia_size);
++		filemap_invalidate_unlock(inode->i_mapping);
+ 		if (error)
+ 			return error;
+ 	}
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index 2fb21c5ffccfe..08767bd21eb7f 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -1250,7 +1250,6 @@ int udf_setsize(struct inode *inode, loff_t newsize)
+ 	if (IS_APPEND(inode) || IS_IMMUTABLE(inode))
+ 		return -EPERM;
+ 
+-	filemap_invalidate_lock(inode->i_mapping);
+ 	iinfo = UDF_I(inode);
+ 	if (newsize > inode->i_size) {
+ 		if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) {
+@@ -1263,11 +1262,11 @@ int udf_setsize(struct inode *inode, loff_t newsize)
+ 			}
+ 			err = udf_expand_file_adinicb(inode);
+ 			if (err)
+-				goto out_unlock;
++				return err;
+ 		}
+ 		err = udf_extend_file(inode, newsize);
+ 		if (err)
+-			goto out_unlock;
++			return err;
+ set_size:
+ 		truncate_setsize(inode, newsize);
+ 	} else {
+@@ -1285,14 +1284,14 @@ int udf_setsize(struct inode *inode, loff_t newsize)
+ 		err = block_truncate_page(inode->i_mapping, newsize,
+ 					  udf_get_block);
+ 		if (err)
+-			goto out_unlock;
++			return err;
+ 		truncate_setsize(inode, newsize);
+ 		down_write(&iinfo->i_data_sem);
+ 		udf_clear_extent_cache(inode);
+ 		err = udf_truncate_extents(inode);
+ 		up_write(&iinfo->i_data_sem);
+ 		if (err)
+-			goto out_unlock;
++			return err;
+ 	}
+ update_time:
+ 	inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
+@@ -1300,8 +1299,6 @@ int udf_setsize(struct inode *inode, loff_t newsize)
+ 		udf_sync_inode(inode);
+ 	else
+ 		mark_inode_dirty(inode);
+-out_unlock:
+-	filemap_invalidate_unlock(inode->i_mapping);
+ 	return err;
+ }
+ 
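
The udf/file.c and udf/inode.c hunks above move filemap_invalidate_lock() out of udf_setsize() and into its caller, so every early return in the truncate path stays balanced and the whole size change runs under one lock. A standalone sketch of that caller-holds-the-lock shape, using a pthread rwlock as a stand-in (the names here are illustrative, not UDF's):

#include <pthread.h>

static pthread_rwlock_t invalidate_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Callee: no locking of its own; it may return early from several places
 * and relies on the caller already holding the lock. */
static int do_setsize(long newsize)
{
	if (newsize < 0)
		return -1;	/* early return needs no unlock here */
	/* ... extend or truncate under the caller's lock ... */
	return 0;
}

/* Caller: one lock/unlock pair brackets the whole operation. */
static int do_setattr(long newsize)
{
	int err;

	pthread_rwlock_wrlock(&invalidate_lock);
	err = do_setsize(newsize);
	pthread_rwlock_unlock(&invalidate_lock);
	return err;
}

int main(void)
{
	return do_setattr(4096);
}
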
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index 1308109fd42d9..78a603129dd58 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -876,8 +876,6 @@ static int udf_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 	if (has_diriter) {
+ 		diriter.fi.icb.extLocation =
+ 					cpu_to_lelb(UDF_I(new_dir)->i_location);
+-		udf_update_tag((char *)&diriter.fi,
+-			       udf_dir_entry_len(&diriter.fi));
+ 		udf_fiiter_write_fi(&diriter, NULL);
+ 		udf_fiiter_release(&diriter);
+ 	}
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 9381a66c6ce58..92d4770539056 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -336,7 +336,8 @@ static void udf_sb_free_bitmap(struct udf_bitmap *bitmap)
+ 	int nr_groups = bitmap->s_nr_groups;
+ 
+ 	for (i = 0; i < nr_groups; i++)
+-		brelse(bitmap->s_block_bitmap[i]);
++		if (!IS_ERR_OR_NULL(bitmap->s_block_bitmap[i]))
++			brelse(bitmap->s_block_bitmap[i]);
+ 
+ 	kvfree(bitmap);
+ }
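
The udf/balloc.c and udf/super.c hunks above cache a verification failure by storing ERR_PTR(-EFSCORRUPTED) in the bitmap slot and later telling it apart with IS_ERR()/IS_ERR_OR_NULL(). The trick is that a small negative errno fits in the top few kilobytes of the address space, where no valid pointer can live. A minimal userspace re-creation of that encoding, mirroring include/linux/err.h, with EUCLEAN standing in for EFSCORRUPTED:

#include <stdio.h>
#include <errno.h>

#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)     { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	/* Addresses in the top 4095 bytes are never valid, so they can carry errors. */
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
	void *slot = ERR_PTR(-EUCLEAN);	/* remember "this bitmap is corrupt" */

	if (IS_ERR(slot))
		printf("cached error: %ld\n", PTR_ERR(slot));	/* the negative errno we stored */
	return 0;
}
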
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 5703526d6ebf1..70bf1004076b2 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -103,7 +103,7 @@
+ #define DATA_MAIN .data .data.[0-9a-zA-Z_]* .data..L* .data..compoundliteral* .data.$__unnamed_* .data.$L*
+ #define SDATA_MAIN .sdata .sdata.[0-9a-zA-Z_]*
+ #define RODATA_MAIN .rodata .rodata.[0-9a-zA-Z_]* .rodata..L*
+-#define BSS_MAIN .bss .bss.[0-9a-zA-Z_]* .bss..compoundliteral*
++#define BSS_MAIN .bss .bss.[0-9a-zA-Z_]* .bss..L* .bss..compoundliteral*
+ #define SBSS_MAIN .sbss .sbss.[0-9a-zA-Z_]*
+ #else
+ #define TEXT_MAIN .text
+diff --git a/include/drm/drm_mipi_dsi.h b/include/drm/drm_mipi_dsi.h
+index 82b1cc434ea3f..e0f56564bf975 100644
+--- a/include/drm/drm_mipi_dsi.h
++++ b/include/drm/drm_mipi_dsi.h
+@@ -314,17 +314,17 @@ int mipi_dsi_dcs_get_display_brightness_large(struct mipi_dsi_device *dsi,
+  * @dsi: DSI peripheral device
+  * @seq: buffer containing the payload
+  */
+-#define mipi_dsi_generic_write_seq(dsi, seq...)                                \
+-	do {                                                                   \
+-		static const u8 d[] = { seq };                                 \
+-		struct device *dev = &dsi->dev;                                \
+-		int ret;                                                       \
+-		ret = mipi_dsi_generic_write(dsi, d, ARRAY_SIZE(d));           \
+-		if (ret < 0) {                                                 \
+-			dev_err_ratelimited(dev, "transmit data failed: %d\n", \
+-					    ret);                              \
+-			return ret;                                            \
+-		}                                                              \
++#define mipi_dsi_generic_write_seq(dsi, seq...)                                 \
++	do {                                                                    \
++		static const u8 d[] = { seq };                                  \
++		struct device *dev = &dsi->dev;                                 \
++		ssize_t ret;                                                    \
++		ret = mipi_dsi_generic_write(dsi, d, ARRAY_SIZE(d));            \
++		if (ret < 0) {                                                  \
++			dev_err_ratelimited(dev, "transmit data failed: %zd\n", \
++					    ret);                               \
++			return ret;                                             \
++		}                                                               \
+ 	} while (0)
+ 
+ /**
+@@ -333,18 +333,18 @@ int mipi_dsi_dcs_get_display_brightness_large(struct mipi_dsi_device *dsi,
+  * @cmd: Command
+  * @seq: buffer containing data to be transmitted
+  */
+-#define mipi_dsi_dcs_write_seq(dsi, cmd, seq...)                           \
+-	do {                                                               \
+-		static const u8 d[] = { cmd, seq };                        \
+-		struct device *dev = &dsi->dev;                            \
+-		int ret;                                                   \
+-		ret = mipi_dsi_dcs_write_buffer(dsi, d, ARRAY_SIZE(d));    \
+-		if (ret < 0) {                                             \
+-			dev_err_ratelimited(                               \
+-				dev, "sending command %#02x failed: %d\n", \
+-				cmd, ret);                                 \
+-			return ret;                                        \
+-		}                                                          \
++#define mipi_dsi_dcs_write_seq(dsi, cmd, seq...)                            \
++	do {                                                                \
++		static const u8 d[] = { cmd, seq };                         \
++		struct device *dev = &dsi->dev;                             \
++		ssize_t ret;                                                \
++		ret = mipi_dsi_dcs_write_buffer(dsi, d, ARRAY_SIZE(d));     \
++		if (ret < 0) {                                              \
++			dev_err_ratelimited(                                \
++				dev, "sending command %#02x failed: %zd\n", \
++				cmd, ret);                                  \
++			return ret;                                         \
++		}                                                           \
+ 	} while (0)
+ 
+ /**
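
Both macro fixes above are the same change: the DSI transfer helpers return ssize_t, so the temporary and the format specifier are widened to match instead of truncating through an int and printing with %d. A tiny sketch of the matching type/format pair:

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
	ssize_t ret = -5;	/* e.g. a transfer helper reporting -EIO */

	printf("transmit data failed: %zd\n", ret);	/* %zd matches ssize_t */
	return 0;
}
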
+diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
+index abd24016a900e..8c61ccd161ba3 100644
+--- a/include/linux/alloc_tag.h
++++ b/include/linux/alloc_tag.h
+@@ -122,7 +122,7 @@ static inline void alloc_tag_add_check(union codetag_ref *ref, struct alloc_tag
+ 		  "alloc_tag was not cleared (got tag for %s:%u)\n",
+ 		  ref->ct->filename, ref->ct->lineno);
+ 
+-	WARN_ONCE(!tag, "current->alloc_tag not set");
++	WARN_ONCE(!tag, "current->alloc_tag not set\n");
+ }
+ 
+ static inline void alloc_tag_sub_check(union codetag_ref *ref)
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index e4070fb02b110..ff2a6cdb1fa3f 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -846,7 +846,7 @@ static inline u32 type_flag(u32 type)
+ /* only use after check_attach_btf_id() */
+ static inline enum bpf_prog_type resolve_prog_type(const struct bpf_prog *prog)
+ {
+-	return prog->type == BPF_PROG_TYPE_EXT ?
++	return (prog->type == BPF_PROG_TYPE_EXT && prog->aux->dst_prog) ?
+ 		prog->aux->dst_prog->type : prog->type;
+ }
+ 
+diff --git a/include/linux/firewire.h b/include/linux/firewire.h
+index 00abe0e5d602b..1cca14cf56527 100644
+--- a/include/linux/firewire.h
++++ b/include/linux/firewire.h
+@@ -462,9 +462,8 @@ struct fw_iso_packet {
+ 				/* rx: Sync bit, wait for matching sy	*/
+ 	u32 tag:2;		/* tx: Tag in packet header		*/
+ 	u32 sy:4;		/* tx: Sy in packet header		*/
+-	u32 header_length:8;	/* Length of immediate header		*/
+-				/* tx: Top of 1394 isoch. data_block    */
+-	u32 header[] __counted_by(header_length);
++	u32 header_length:8;	/* Size of immediate header		*/
++	u32 header[];		/* tx: Top of 1394 isoch. data_block	*/
+ };
+ 
+ #define FW_ISO_CONTEXT_TRANSMIT			0
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index 2aa986a5cd1b5..c73ad77fa33d3 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -72,14 +72,20 @@ extern struct kobj_attribute shmem_enabled_attr;
+ #define THP_ORDERS_ALL_ANON	((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
+ 
+ /*
+- * Mask of all large folio orders supported for file THP.
++ * Mask of all large folio orders supported for file THP. Folios in a DAX
++ * file is never split and the MAX_PAGECACHE_ORDER limit does not apply to
++ * it.
+  */
+-#define THP_ORDERS_ALL_FILE	(BIT(PMD_ORDER) | BIT(PUD_ORDER))
++#define THP_ORDERS_ALL_FILE_DAX		\
++	(BIT(PMD_ORDER) | BIT(PUD_ORDER))
++#define THP_ORDERS_ALL_FILE_DEFAULT	\
++	((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))
+ 
+ /*
+  * Mask of all large folio orders supported for THP.
+  */
+-#define THP_ORDERS_ALL		(THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE)
++#define THP_ORDERS_ALL	\
++	(THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE_DAX | THP_ORDERS_ALL_FILE_DEFAULT)
+ 
+ #define TVA_SMAPS		(1 << 0)	/* Will be used for procfs */
+ #define TVA_IN_PF		(1 << 1)	/* Page fault handler */
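
The new file-THP masks above are plain bit arithmetic over folio orders: (BIT(n + 1) - 1) sets bits 0..n, and & ~BIT(0) strips order 0. A standalone sketch that expands THP_ORDERS_ALL_FILE_DEFAULT with assumed example values (PMD_ORDER and MAX_PAGECACHE_ORDER here are illustrative, not taken from any particular config):

#include <stdio.h>

#define BIT(n)			(1UL << (n))
/* Assumed values for illustration only. */
#define PMD_ORDER		9
#define MAX_PAGECACHE_ORDER	8

#define THP_ORDERS_ALL_FILE_DEFAULT \
	((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))

int main(void)
{
	unsigned long mask = THP_ORDERS_ALL_FILE_DEFAULT;

	for (int order = 0; order <= PMD_ORDER; order++)
		if (mask & BIT(order))
			printf("order %d allowed\n", order);	/* orders 1..8 */
	return 0;
}
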
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 2b3c3a4047691..8120d1976188c 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -681,6 +681,7 @@ HPAGEFLAG(RawHwpUnreliable, raw_hwp_unreliable)
+ /* Defines one hugetlb page size */
+ struct hstate {
+ 	struct mutex resize_lock;
++	struct lock_class_key resize_key;
+ 	int next_nid_to_alloc;
+ 	int next_nid_to_free;
+ 	unsigned int order;
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index 5c9bdd3ffccc8..dac7466de5f35 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -168,7 +168,7 @@ static inline int __must_check
+ request_irq(unsigned int irq, irq_handler_t handler, unsigned long flags,
+ 	    const char *name, void *dev)
+ {
+-	return request_threaded_irq(irq, handler, NULL, flags, name, dev);
++	return request_threaded_irq(irq, handler, NULL, flags | IRQF_COND_ONESHOT, name, dev);
+ }
+ 
+ extern int __must_check
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index ab04c1c27fae3..b900c642210cd 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -1085,6 +1085,13 @@ struct journal_s
+ 	 */
+ 	int			j_revoke_records_per_block;
+ 
++	/**
++	 * @j_transaction_overhead:
++	 *
++	 * Number of blocks each transaction needs for its own bookkeeping
++	 */
++	int			j_transaction_overhead_buffers;
++
+ 	/**
+ 	 * @j_commit_interval:
+ 	 *
+@@ -1660,11 +1667,6 @@ int jbd2_wait_inode_data(journal_t *journal, struct jbd2_inode *jinode);
+ int jbd2_fc_wait_bufs(journal_t *journal, int num_blks);
+ int jbd2_fc_release_bufs(journal_t *journal);
+ 
+-static inline int jbd2_journal_get_max_txn_bufs(journal_t *journal)
+-{
+-	return (journal->j_total_len - journal->j_fc_wbufsize) / 4;
+-}
+-
+ /*
+  * is_journal_abort
+  *
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index 44488b1ab9a97..855db460e08bc 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -144,6 +144,7 @@ LSM_HOOK(int, 0, inode_setattr, struct mnt_idmap *idmap, struct dentry *dentry,
+ LSM_HOOK(void, LSM_RET_VOID, inode_post_setattr, struct mnt_idmap *idmap,
+ 	 struct dentry *dentry, int ia_valid)
+ LSM_HOOK(int, 0, inode_getattr, const struct path *path)
++LSM_HOOK(int, 0, inode_xattr_skipcap, const char *name)
+ LSM_HOOK(int, 0, inode_setxattr, struct mnt_idmap *idmap,
+ 	 struct dentry *dentry, const char *name, const void *value,
+ 	 size_t size, int flags)
+diff --git a/include/linux/mlx5/qp.h b/include/linux/mlx5/qp.h
+index f0e55bf3ec8b5..ad1ce650146cb 100644
+--- a/include/linux/mlx5/qp.h
++++ b/include/linux/mlx5/qp.h
+@@ -576,9 +576,12 @@ static inline const char *mlx5_qp_state_str(int state)
+ 
+ static inline int mlx5_get_qp_default_ts(struct mlx5_core_dev *dev)
+ {
+-	return !MLX5_CAP_ROCE(dev, qp_ts_format) ?
+-		       MLX5_TIMESTAMP_FORMAT_FREE_RUNNING :
+-		       MLX5_TIMESTAMP_FORMAT_DEFAULT;
++	u8 supported_ts_cap = mlx5_get_roce_state(dev) ?
++			      MLX5_CAP_ROCE(dev, qp_ts_format) :
++			      MLX5_CAP_GEN(dev, sq_ts_format);
++
++	return supported_ts_cap ? MLX5_TIMESTAMP_FORMAT_DEFAULT :
++	       MLX5_TIMESTAMP_FORMAT_FREE_RUNNING;
+ }
+ 
+ #endif /* MLX5_QP_H */
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index eb7c96d24ac02..b58bad248eefd 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -3177,21 +3177,7 @@ extern void reserve_bootmem_region(phys_addr_t start,
+ 				   phys_addr_t end, int nid);
+ 
+ /* Free the reserved page into the buddy system, so it gets managed. */
+-static inline void free_reserved_page(struct page *page)
+-{
+-	if (mem_alloc_profiling_enabled()) {
+-		union codetag_ref *ref = get_page_tag_ref(page);
+-
+-		if (ref) {
+-			set_codetag_empty(ref);
+-			put_page_tag_ref(ref);
+-		}
+-	}
+-	ClearPageReserved(page);
+-	init_page_count(page);
+-	__free_page(page);
+-	adjust_managed_page_count(page, 1);
+-}
++void free_reserved_page(struct page *page);
+ #define free_highmem_page(page) free_reserved_page(page)
+ 
+ static inline void mark_page_reserved(struct page *page)
+diff --git a/include/linux/objagg.h b/include/linux/objagg.h
+index 78021777df462..6df5b887dc547 100644
+--- a/include/linux/objagg.h
++++ b/include/linux/objagg.h
+@@ -8,7 +8,6 @@ struct objagg_ops {
+ 	size_t obj_size;
+ 	bool (*delta_check)(void *priv, const void *parent_obj,
+ 			    const void *obj);
+-	int (*hints_obj_cmp)(const void *obj1, const void *obj2);
+ 	void * (*delta_create)(void *priv, void *parent_obj, void *obj);
+ 	void (*delta_destroy)(void *priv, void *delta_priv);
+ 	void * (*root_create)(void *priv, void *obj, unsigned int root_id);
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index a5304ae8c654f..393fb13733b02 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -786,6 +786,7 @@ struct perf_event {
+ 	struct irq_work			pending_irq;
+ 	struct callback_head		pending_task;
+ 	unsigned int			pending_work;
++	struct rcuwait			pending_work_wait;
+ 
+ 	atomic_t			event_limit;
+ 
+diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
+index 9cacadbd61f8c..18cd0c0c73d93 100644
+--- a/include/linux/pgalloc_tag.h
++++ b/include/linux/pgalloc_tag.h
+@@ -15,7 +15,7 @@ extern struct page_ext_operations page_alloc_tagging_ops;
+ 
+ static inline union codetag_ref *codetag_ref_from_page_ext(struct page_ext *page_ext)
+ {
+-	return (void *)page_ext + page_alloc_tagging_ops.offset;
++	return (union codetag_ref *)page_ext_data(page_ext, &page_alloc_tagging_ops);
+ }
+ 
+ static inline struct page_ext *page_ext_from_codetag_ref(union codetag_ref *ref)
+@@ -71,6 +71,7 @@ static inline void pgalloc_tag_sub(struct page *page, unsigned int nr)
+ static inline void pgalloc_tag_split(struct page *page, unsigned int nr)
+ {
+ 	int i;
++	struct page_ext *first_page_ext;
+ 	struct page_ext *page_ext;
+ 	union codetag_ref *ref;
+ 	struct alloc_tag *tag;
+@@ -78,7 +79,7 @@ static inline void pgalloc_tag_split(struct page *page, unsigned int nr)
+ 	if (!mem_alloc_profiling_enabled())
+ 		return;
+ 
+-	page_ext = page_ext_get(page);
++	first_page_ext = page_ext = page_ext_get(page);
+ 	if (unlikely(!page_ext))
+ 		return;
+ 
+@@ -94,7 +95,7 @@ static inline void pgalloc_tag_split(struct page *page, unsigned int nr)
+ 		page_ext = page_ext_next(page_ext);
+ 	}
+ out:
+-	page_ext_put(page_ext);
++	page_ext_put(first_page_ext);
+ }
+ 
+ static inline struct alloc_tag *pgalloc_tag_get(struct page *page)
+diff --git a/include/linux/preempt.h b/include/linux/preempt.h
+index 7233e9cf1bab6..ce76f1a457225 100644
+--- a/include/linux/preempt.h
++++ b/include/linux/preempt.h
+@@ -481,4 +481,45 @@ DEFINE_LOCK_GUARD_0(preempt, preempt_disable(), preempt_enable())
+ DEFINE_LOCK_GUARD_0(preempt_notrace, preempt_disable_notrace(), preempt_enable_notrace())
+ DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable())
+ 
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++extern bool preempt_model_none(void);
++extern bool preempt_model_voluntary(void);
++extern bool preempt_model_full(void);
++
++#else
++
++static inline bool preempt_model_none(void)
++{
++	return IS_ENABLED(CONFIG_PREEMPT_NONE);
++}
++static inline bool preempt_model_voluntary(void)
++{
++	return IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY);
++}
++static inline bool preempt_model_full(void)
++{
++	return IS_ENABLED(CONFIG_PREEMPT);
++}
++
++#endif
++
++static inline bool preempt_model_rt(void)
++{
++	return IS_ENABLED(CONFIG_PREEMPT_RT);
++}
++
++/*
++ * Does the preemption model allow non-cooperative preemption?
++ *
++ * For !CONFIG_PREEMPT_DYNAMIC kernels this is an exact match with
++ * CONFIG_PREEMPTION; for CONFIG_PREEMPT_DYNAMIC this doesn't work as the
++ * kernel is *built* with CONFIG_PREEMPTION=y but may run with e.g. the
++ * PREEMPT_NONE model.
++ */
++static inline bool preempt_model_preemptible(void)
++{
++	return preempt_model_full() || preempt_model_rt();
++}
++
+ #endif /* __LINUX_PREEMPT_H */
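
The block added above (and removed from linux/sched.h further down) exposes the preemption model as runtime predicates, which the spinlock.h hunk later in this patch uses to replace #ifdef CONFIG_PREEMPTION in spin_needbreak()/rwlock_needbreak(). A compact sketch of that style of config-backed predicate, with made-up CFG_* constants standing in for the Kconfig symbols:

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for CONFIG_PREEMPT / CONFIG_PREEMPT_RT; illustrative only. */
#define CFG_PREEMPT_FULL 1
#define CFG_PREEMPT_RT   0

static inline bool preempt_model_full(void) { return CFG_PREEMPT_FULL; }
static inline bool preempt_model_rt(void)   { return CFG_PREEMPT_RT; }

static inline bool preempt_model_preemptible(void)
{
	return preempt_model_full() || preempt_model_rt();
}

/* Callers stay free of #ifdefs; with a static config the compiler still
 * folds the whole check down to a constant. */
static int needbreak(int contended)
{
	if (!preempt_model_preemptible())
		return 0;
	return contended;
}

int main(void)
{
	printf("%d\n", needbreak(1));	/* 1 with the values assumed above */
	return 0;
}
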
+diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
+index d662cf136021d..c09cdcc99471e 100644
+--- a/include/linux/sbitmap.h
++++ b/include/linux/sbitmap.h
+@@ -36,6 +36,11 @@ struct sbitmap_word {
+ 	 * @cleared: word holding cleared bits
+ 	 */
+ 	unsigned long cleared ____cacheline_aligned_in_smp;
++
++	/**
++	 * @swap_lock: serializes simultaneous updates of ->word and ->cleared
++	 */
++	spinlock_t swap_lock;
+ } ____cacheline_aligned_in_smp;
+ 
+ /**
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index a5f4b48fca184..76214d7c819de 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -2064,47 +2064,6 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock);
+ 	__cond_resched_rwlock_write(lock);					\
+ })
+ 
+-#ifdef CONFIG_PREEMPT_DYNAMIC
+-
+-extern bool preempt_model_none(void);
+-extern bool preempt_model_voluntary(void);
+-extern bool preempt_model_full(void);
+-
+-#else
+-
+-static inline bool preempt_model_none(void)
+-{
+-	return IS_ENABLED(CONFIG_PREEMPT_NONE);
+-}
+-static inline bool preempt_model_voluntary(void)
+-{
+-	return IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY);
+-}
+-static inline bool preempt_model_full(void)
+-{
+-	return IS_ENABLED(CONFIG_PREEMPT);
+-}
+-
+-#endif
+-
+-static inline bool preempt_model_rt(void)
+-{
+-	return IS_ENABLED(CONFIG_PREEMPT_RT);
+-}
+-
+-/*
+- * Does the preemption model allow non-cooperative preemption?
+- *
+- * For !CONFIG_PREEMPT_DYNAMIC kernels this is an exact match with
+- * CONFIG_PREEMPTION; for CONFIG_PREEMPT_DYNAMIC this doesn't work as the
+- * kernel is *built* with CONFIG_PREEMPTION=y but may run with e.g. the
+- * PREEMPT_NONE model.
+- */
+-static inline bool preempt_model_preemptible(void)
+-{
+-	return preempt_model_full() || preempt_model_rt();
+-}
+-
+ static __always_inline bool need_resched(void)
+ {
+ 	return unlikely(tif_need_resched());
+diff --git a/include/linux/screen_info.h b/include/linux/screen_info.h
+index 75303c126285a..6a4a3cec4638b 100644
+--- a/include/linux/screen_info.h
++++ b/include/linux/screen_info.h
+@@ -49,6 +49,16 @@ static inline u64 __screen_info_lfb_size(const struct screen_info *si, unsigned
+ 	return lfb_size;
+ }
+ 
++static inline bool __screen_info_vbe_mode_nonvga(const struct screen_info *si)
++{
++	/*
++	 * VESA modes typically run on VGA hardware. Set bit 5 signals that this
++	 * is not the case. Drivers can then not make use of VGA resources. See
++	 * Sec 4.4 of the VBE 2.0 spec.
++	 */
++	return si->vesa_attributes & BIT(5);
++}
++
+ static inline unsigned int __screen_info_video_type(unsigned int type)
+ {
+ 	switch (type) {
+diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
+index 3fcd20de6ca88..63dd8cf3c3c2b 100644
+--- a/include/linux/spinlock.h
++++ b/include/linux/spinlock.h
+@@ -462,11 +462,10 @@ static __always_inline int spin_is_contended(spinlock_t *lock)
+  */
+ static inline int spin_needbreak(spinlock_t *lock)
+ {
+-#ifdef CONFIG_PREEMPTION
++	if (!preempt_model_preemptible())
++		return 0;
++
+ 	return spin_is_contended(lock);
+-#else
+-	return 0;
+-#endif
+ }
+ 
+ /*
+@@ -479,11 +478,10 @@ static inline int spin_needbreak(spinlock_t *lock)
+  */
+ static inline int rwlock_needbreak(rwlock_t *lock)
+ {
+-#ifdef CONFIG_PREEMPTION
++	if (!preempt_model_preemptible())
++		return 0;
++
+ 	return rwlock_is_contended(lock);
+-#else
+-	return 0;
+-#endif
+ }
+ 
+ /*
+diff --git a/include/linux/task_work.h b/include/linux/task_work.h
+index 795ef5a684294..26b8a47f41fca 100644
+--- a/include/linux/task_work.h
++++ b/include/linux/task_work.h
+@@ -30,7 +30,8 @@ int task_work_add(struct task_struct *task, struct callback_head *twork,
+ 
+ struct callback_head *task_work_cancel_match(struct task_struct *task,
+ 	bool (*match)(struct callback_head *, void *data), void *data);
+-struct callback_head *task_work_cancel(struct task_struct *, task_work_func_t);
++struct callback_head *task_work_cancel_func(struct task_struct *, task_work_func_t);
++bool task_work_cancel(struct task_struct *task, struct callback_head *cb);
+ void task_work_run(void);
+ 
+ static inline void exit_task_work(struct task_struct *task)
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 4dfa9b69ca8d9..d1d7825318c32 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -56,6 +56,7 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 	unsigned int thlen = 0;
+ 	unsigned int p_off = 0;
+ 	unsigned int ip_proto;
++	u64 ret, remainder, gso_size;
+ 
+ 	if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
+ 		switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
+@@ -98,6 +99,16 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 		u32 off = __virtio16_to_cpu(little_endian, hdr->csum_offset);
+ 		u32 needed = start + max_t(u32, thlen, off + sizeof(__sum16));
+ 
++		if (hdr->gso_size) {
++			gso_size = __virtio16_to_cpu(little_endian, hdr->gso_size);
++			ret = div64_u64_rem(skb->len, gso_size, &remainder);
++			if (!(ret && (hdr->gso_size > needed) &&
++						((remainder > needed) || (remainder == 0)))) {
++				return -EINVAL;
++			}
++			skb_shinfo(skb)->tx_flags |= SKBFL_SHARED_FRAG;
++		}
++
+ 		if (!pskb_may_pull(skb, needed))
+ 			return -EINVAL;
+ 
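
The virtio_net hunk above validates a caller-supplied gso_size against the actual skb length before it is trusted. With hypothetical numbers, the accept/reject rule reduces to the small check below, where needed stands for the header bytes the kernel computes just above it:

#include <stdint.h>
#include <stdio.h>

/* Accept only if the length yields at least one full segment, the segment
 * size clears the header bytes, and the tail is empty or also clears them. */
static int gso_len_ok(uint64_t len, uint64_t gso_size, uint64_t needed)
{
	uint64_t segs = len / gso_size;
	uint64_t rem  = len % gso_size;

	return segs && gso_size > needed && (rem > needed || rem == 0);
}

int main(void)
{
	printf("%d\n", gso_len_ok(3000, 1448, 54));	/* 2 segs, 104-byte tail: ok */
	printf("%d\n", gso_len_ok(1500, 1448, 54));	/* 52-byte tail too short: rejected */
	return 0;
}
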
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index c43716edf2056..b15f51ae3bfd9 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -91,8 +91,6 @@ struct discovery_state {
+ 	s8			rssi;
+ 	u16			uuid_count;
+ 	u8			(*uuids)[16];
+-	unsigned long		scan_start;
+-	unsigned long		scan_duration;
+ 	unsigned long		name_resolve_timeout;
+ };
+ 
+@@ -890,8 +888,6 @@ static inline void hci_discovery_filter_clear(struct hci_dev *hdev)
+ 	hdev->discovery.uuid_count = 0;
+ 	kfree(hdev->discovery.uuids);
+ 	hdev->discovery.uuids = NULL;
+-	hdev->discovery.scan_start = 0;
+-	hdev->discovery.scan_duration = 0;
+ }
+ 
+ bool hci_discovery_active(struct hci_dev *hdev);
+diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h
+index a18ed24fed948..6dbdf60b342f6 100644
+--- a/include/net/ip6_route.h
++++ b/include/net/ip6_route.h
+@@ -127,18 +127,26 @@ void rt6_age_exceptions(struct fib6_info *f6i, struct fib6_gc_args *gc_args,
+ 
+ static inline int ip6_route_get_saddr(struct net *net, struct fib6_info *f6i,
+ 				      const struct in6_addr *daddr,
+-				      unsigned int prefs,
++				      unsigned int prefs, int l3mdev_index,
+ 				      struct in6_addr *saddr)
+ {
++	struct net_device *l3mdev;
++	struct net_device *dev;
++	bool same_vrf;
+ 	int err = 0;
+ 
+-	if (f6i && f6i->fib6_prefsrc.plen) {
++	rcu_read_lock();
++
++	l3mdev = dev_get_by_index_rcu(net, l3mdev_index);
++	if (!f6i || !f6i->fib6_prefsrc.plen || l3mdev)
++		dev = f6i ? fib6_info_nh_dev(f6i) : NULL;
++	same_vrf = !l3mdev || l3mdev_master_dev_rcu(dev) == l3mdev;
++	if (f6i && f6i->fib6_prefsrc.plen && same_vrf)
+ 		*saddr = f6i->fib6_prefsrc.addr;
+-	} else {
+-		struct net_device *dev = f6i ? fib6_info_nh_dev(f6i) : NULL;
++	else
++		err = ipv6_dev_get_saddr(net, same_vrf ? dev : l3mdev, daddr, prefs, saddr);
+ 
+-		err = ipv6_dev_get_saddr(net, dev, daddr, prefs, saddr);
+-	}
++	rcu_read_unlock();
+ 
+ 	return err;
+ }
+diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
+index 9b2f69ba5e498..c29639b4323f3 100644
+--- a/include/net/ip_fib.h
++++ b/include/net/ip_fib.h
+@@ -173,6 +173,7 @@ struct fib_result {
+ 	unsigned char		type;
+ 	unsigned char		scope;
+ 	u32			tclassid;
++	dscp_t			dscp;
+ 	struct fib_nh_common	*nhc;
+ 	struct fib_info		*fi;
+ 	struct fib_table	*table;
+diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
+index 561f6719fb4ec..f207a6e1042ae 100644
+--- a/include/net/mana/mana.h
++++ b/include/net/mana/mana.h
+@@ -796,4 +796,6 @@ void mana_destroy_wq_obj(struct mana_port_context *apc, u32 wq_type,
+ int mana_cfg_vport(struct mana_port_context *apc, u32 protection_dom_id,
+ 		   u32 doorbell_pg_id);
+ void mana_uncfg_vport(struct mana_port_context *apc);
++
++struct net_device *mana_get_primary_netdev_rcu(struct mana_context *ac, u32 port_index);
+ #endif /* _MANA_H */
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 060e95b331a28..32815a40dea16 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -677,6 +677,7 @@ void tcp_skb_collapse_tstamp(struct sk_buff *skb,
+ /* tcp_input.c */
+ void tcp_rearm_rto(struct sock *sk);
+ void tcp_synack_rtt_meas(struct sock *sk, struct request_sock *req);
++void tcp_done_with_error(struct sock *sk, int err);
+ void tcp_reset(struct sock *sk, struct sk_buff *skb);
+ void tcp_fin(struct sock *sk);
+ void tcp_check_space(struct sock *sk);
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 77ebf5bcf0b90..7d4c2235252c7 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -178,7 +178,10 @@ struct xfrm_state {
+ 		struct hlist_node	gclist;
+ 		struct hlist_node	bydst;
+ 	};
+-	struct hlist_node	bysrc;
++	union {
++		struct hlist_node	dev_gclist;
++		struct hlist_node	bysrc;
++	};
+ 	struct hlist_node	byspi;
+ 	struct hlist_node	byseq;
+ 
+@@ -1588,7 +1591,7 @@ void xfrm_state_update_stats(struct net *net);
+ static inline void xfrm_dev_state_update_stats(struct xfrm_state *x)
+ {
+ 	struct xfrm_dev_offload *xdo = &x->xso;
+-	struct net_device *dev = xdo->dev;
++	struct net_device *dev = READ_ONCE(xdo->dev);
+ 
+ 	if (dev && dev->xfrmdev_ops &&
+ 	    dev->xfrmdev_ops->xdo_dev_state_update_stats)
+@@ -1946,13 +1949,16 @@ int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
+ 			struct xfrm_user_offload *xuo, u8 dir,
+ 			struct netlink_ext_ack *extack);
+ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x);
++void xfrm_dev_state_delete(struct xfrm_state *x);
++void xfrm_dev_state_free(struct xfrm_state *x);
+ 
+ static inline void xfrm_dev_state_advance_esn(struct xfrm_state *x)
+ {
+ 	struct xfrm_dev_offload *xso = &x->xso;
++	struct net_device *dev = READ_ONCE(xso->dev);
+ 
+-	if (xso->dev && xso->dev->xfrmdev_ops->xdo_dev_state_advance_esn)
+-		xso->dev->xfrmdev_ops->xdo_dev_state_advance_esn(x);
++	if (dev && dev->xfrmdev_ops->xdo_dev_state_advance_esn)
++		dev->xfrmdev_ops->xdo_dev_state_advance_esn(x);
+ }
+ 
+ static inline bool xfrm_dst_offload_ok(struct dst_entry *dst)
+@@ -1973,28 +1979,6 @@ static inline bool xfrm_dst_offload_ok(struct dst_entry *dst)
+ 	return false;
+ }
+ 
+-static inline void xfrm_dev_state_delete(struct xfrm_state *x)
+-{
+-	struct xfrm_dev_offload *xso = &x->xso;
+-
+-	if (xso->dev)
+-		xso->dev->xfrmdev_ops->xdo_dev_state_delete(x);
+-}
+-
+-static inline void xfrm_dev_state_free(struct xfrm_state *x)
+-{
+-	struct xfrm_dev_offload *xso = &x->xso;
+-	struct net_device *dev = xso->dev;
+-
+-	if (dev && dev->xfrmdev_ops) {
+-		if (dev->xfrmdev_ops->xdo_dev_state_free)
+-			dev->xfrmdev_ops->xdo_dev_state_free(x);
+-		xso->dev = NULL;
+-		xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
+-		netdev_put(dev, &xso->dev_tracker);
+-	}
+-}
+-
+ static inline void xfrm_dev_policy_delete(struct xfrm_policy *x)
+ {
+ 	struct xfrm_dev_offload *xdo = &x->xdo;
+diff --git a/include/sound/tas2781-dsp.h b/include/sound/tas2781-dsp.h
+index 7fba7ea26a4b0..3cda9da14f6d1 100644
+--- a/include/sound/tas2781-dsp.h
++++ b/include/sound/tas2781-dsp.h
+@@ -117,10 +117,17 @@ struct tasdevice_fw {
+ 	struct device *dev;
+ };
+ 
+-enum tasdevice_dsp_fw_state {
+-	TASDEVICE_DSP_FW_NONE = 0,
++enum tasdevice_fw_state {
++	/* Driver in startup mode, not load any firmware. */
+ 	TASDEVICE_DSP_FW_PENDING,
++	/* DSP firmware in the system, but parsing error. */
+ 	TASDEVICE_DSP_FW_FAIL,
++	/*
++	 * Only RCA (Reconfigurable Architecture) firmware load
++	 * successfully.
++	 */
++	TASDEVICE_RCA_FW_OK,
++	/* Both RCA and DSP firmware load successfully. */
+ 	TASDEVICE_DSP_FW_ALL_OK,
+ };
+ 
+diff --git a/include/trace/events/rpcgss.h b/include/trace/events/rpcgss.h
+index 7f0c1ceae726b..b0b6300a0cabd 100644
+--- a/include/trace/events/rpcgss.h
++++ b/include/trace/events/rpcgss.h
+@@ -54,7 +54,7 @@ TRACE_DEFINE_ENUM(GSS_S_UNSEQ_TOKEN);
+ TRACE_DEFINE_ENUM(GSS_S_GAP_TOKEN);
+ 
+ #define show_gss_status(x)						\
+-	__print_flags(x, "|",						\
++	__print_symbolic(x, 						\
+ 		{ GSS_S_BAD_MECH, "GSS_S_BAD_MECH" },			\
+ 		{ GSS_S_BAD_NAME, "GSS_S_BAD_NAME" },			\
+ 		{ GSS_S_BAD_NAMETYPE, "GSS_S_BAD_NAMETYPE" },		\
+diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
+index 1446c3bae5159..d425b83181df9 100644
+--- a/include/uapi/drm/xe_drm.h
++++ b/include/uapi/drm/xe_drm.h
+@@ -776,7 +776,13 @@ struct drm_xe_gem_create {
+ #define DRM_XE_GEM_CPU_CACHING_WC                      2
+ 	/**
+ 	 * @cpu_caching: The CPU caching mode to select for this object. If
+-	 * mmaping the object the mode selected here will also be used.
++	 * mmaping the object the mode selected here will also be used. The
++	 * exception is when mapping system memory (including data evicted
++	 * to system) on discrete GPUs. The caching mode selected will
++	 * then be overridden to DRM_XE_GEM_CPU_CACHING_WB, and coherency
++	 * between GPU- and CPU is guaranteed. The caching mode of
++	 * existing CPU-mappings will be updated transparently to
++	 * user-space clients.
+ 	 */
+ 	__u16 cpu_caching;
+ 	/** @pad: MBZ */
+diff --git a/include/uapi/linux/if_xdp.h b/include/uapi/linux/if_xdp.h
+index d316984104107..42ec5ddaab8dc 100644
+--- a/include/uapi/linux/if_xdp.h
++++ b/include/uapi/linux/if_xdp.h
+@@ -41,6 +41,10 @@
+  */
+ #define XDP_UMEM_TX_SW_CSUM		(1 << 1)
+ 
++/* Request to reserve tx_metadata_len bytes of per-chunk metadata.
++ */
++#define XDP_UMEM_TX_METADATA_LEN	(1 << 2)
++
+ struct sockaddr_xdp {
+ 	__u16 sxdp_family;
+ 	__u16 sxdp_flags;
+diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h
+index aa4094ca2444f..639894ed1b973 100644
+--- a/include/uapi/linux/netfilter/nf_tables.h
++++ b/include/uapi/linux/netfilter/nf_tables.h
+@@ -1376,7 +1376,7 @@ enum nft_secmark_attributes {
+ #define NFTA_SECMARK_MAX	(__NFTA_SECMARK_MAX - 1)
+ 
+ /* Max security context length */
+-#define NFT_SECMARK_CTX_MAXLEN		256
++#define NFT_SECMARK_CTX_MAXLEN		4096
+ 
+ /**
+  * enum nft_reject_types - nf_tables reject expression reject types
+diff --git a/include/uapi/linux/zorro_ids.h b/include/uapi/linux/zorro_ids.h
+index 6e574d7b7d79c..393f2ee9c0422 100644
+--- a/include/uapi/linux/zorro_ids.h
++++ b/include/uapi/linux/zorro_ids.h
+@@ -449,6 +449,9 @@
+ #define  ZORRO_PROD_VMC_ISDN_BLASTER_Z2				ZORRO_ID(VMC, 0x01, 0)
+ #define  ZORRO_PROD_VMC_HYPERCOM_4				ZORRO_ID(VMC, 0x02, 0)
+ 
++#define ZORRO_MANUF_CSLAB					0x1400
++#define  ZORRO_PROD_CSLAB_WARP_1260				ZORRO_ID(CSLAB, 0x65, 0)
++
+ #define ZORRO_MANUF_INFORMATION					0x157C
+ #define  ZORRO_PROD_INFORMATION_ISDN_ENGINE_I			ZORRO_ID(INFORMATION, 0x64, 0)
+ 
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index bad88bd919951..d965e4d1277e6 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -1131,6 +1131,12 @@ static inline bool is_mcq_enabled(struct ufs_hba *hba)
+ 	return hba->mcq_enabled;
+ }
+ 
++static inline unsigned int ufshcd_mcq_opr_offset(struct ufs_hba *hba,
++		enum ufshcd_mcq_opr opr, int idx)
++{
++	return hba->mcq_opr[opr].offset + hba->mcq_opr[opr].stride * idx;
++}
++
+ #ifdef CONFIG_SCSI_UFS_VARIABLE_SG_ENTRY_SIZE
+ static inline size_t ufshcd_sg_entry_size(const struct ufs_hba *hba)
+ {
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index 7d3316fe9bfc4..22dac5850327f 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -23,6 +23,7 @@
+ #include "io_uring.h"
+ 
+ #define WORKER_IDLE_TIMEOUT	(5 * HZ)
++#define WORKER_INIT_LIMIT	3
+ 
+ enum {
+ 	IO_WORKER_F_UP		= 0,	/* up and active */
+@@ -58,6 +59,7 @@ struct io_worker {
+ 
+ 	unsigned long create_state;
+ 	struct callback_head create_work;
++	int init_retries;
+ 
+ 	union {
+ 		struct rcu_head rcu;
+@@ -744,7 +746,7 @@ static bool io_wq_work_match_all(struct io_wq_work *work, void *data)
+ 	return true;
+ }
+ 
+-static inline bool io_should_retry_thread(long err)
++static inline bool io_should_retry_thread(struct io_worker *worker, long err)
+ {
+ 	/*
+ 	 * Prevent perpetual task_work retry, if the task (or its group) is
+@@ -752,6 +754,8 @@ static inline bool io_should_retry_thread(long err)
+ 	 */
+ 	if (fatal_signal_pending(current))
+ 		return false;
++	if (worker->init_retries++ >= WORKER_INIT_LIMIT)
++		return false;
+ 
+ 	switch (err) {
+ 	case -EAGAIN:
+@@ -778,7 +782,7 @@ static void create_worker_cont(struct callback_head *cb)
+ 		io_init_new_worker(wq, worker, tsk);
+ 		io_worker_release(worker);
+ 		return;
+-	} else if (!io_should_retry_thread(PTR_ERR(tsk))) {
++	} else if (!io_should_retry_thread(worker, PTR_ERR(tsk))) {
+ 		struct io_wq_acct *acct = io_wq_get_acct(worker);
+ 
+ 		atomic_dec(&acct->nr_running);
+@@ -845,7 +849,7 @@ static bool create_io_worker(struct io_wq *wq, int index)
+ 	tsk = create_io_thread(io_wq_worker, worker, NUMA_NO_NODE);
+ 	if (!IS_ERR(tsk)) {
+ 		io_init_new_worker(wq, worker, tsk);
+-	} else if (!io_should_retry_thread(PTR_ERR(tsk))) {
++	} else if (!io_should_retry_thread(worker, PTR_ERR(tsk))) {
+ 		kfree(worker);
+ 		goto fail;
+ 	} else {
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index c326e2127dd4d..896e707e06187 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -3071,8 +3071,11 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
+ 		bool loop = false;
+ 
+ 		io_uring_drop_tctx_refs(current);
++		if (!tctx_inflight(tctx, !cancel_all))
++			break;
++
+ 		/* read completions before cancelations */
+-		inflight = tctx_inflight(tctx, !cancel_all);
++		inflight = tctx_inflight(tctx, false);
+ 		if (!inflight)
+ 			break;
+ 
+diff --git a/io_uring/napi.c b/io_uring/napi.c
+index 8c18ede595c41..080d10e0e0afd 100644
+--- a/io_uring/napi.c
++++ b/io_uring/napi.c
+@@ -222,6 +222,8 @@ int io_register_napi(struct io_ring_ctx *ctx, void __user *arg)
+ 	};
+ 	struct io_uring_napi napi;
+ 
++	if (ctx->flags & IORING_SETUP_IOPOLL)
++		return -EINVAL;
+ 	if (copy_from_user(&napi, arg, sizeof(napi)))
+ 		return -EFAULT;
+ 	if (napi.pad[0] || napi.pad[1] || napi.pad[2] || napi.resv)
+diff --git a/io_uring/opdef.c b/io_uring/opdef.c
+index 2e3b7b16effb3..760006ccc4083 100644
+--- a/io_uring/opdef.c
++++ b/io_uring/opdef.c
+@@ -725,6 +725,14 @@ const char *io_uring_get_opcode(u8 opcode)
+ 	return "INVALID";
+ }
+ 
++bool io_uring_op_supported(u8 opcode)
++{
++	if (opcode < IORING_OP_LAST &&
++	    io_issue_defs[opcode].prep != io_eopnotsupp_prep)
++		return true;
++	return false;
++}
++
+ void __init io_uring_optable_init(void)
+ {
+ 	int i;
+diff --git a/io_uring/opdef.h b/io_uring/opdef.h
+index 7ee6f5aa90aa3..14456436ff74a 100644
+--- a/io_uring/opdef.h
++++ b/io_uring/opdef.h
+@@ -17,8 +17,6 @@ struct io_issue_def {
+ 	unsigned		poll_exclusive : 1;
+ 	/* op supports buffer selection */
+ 	unsigned		buffer_select : 1;
+-	/* opcode is not supported by this kernel */
+-	unsigned		not_supported : 1;
+ 	/* skip auditing */
+ 	unsigned		audit_skip : 1;
+ 	/* supports ioprio */
+@@ -47,5 +45,7 @@ struct io_cold_def {
+ extern const struct io_issue_def io_issue_defs[];
+ extern const struct io_cold_def io_cold_defs[];
+ 
++bool io_uring_op_supported(u8 opcode);
++
+ void io_uring_optable_init(void);
+ #endif
+diff --git a/io_uring/register.c b/io_uring/register.c
+index c0010a66a6f2c..11517b34cfc8f 100644
+--- a/io_uring/register.c
++++ b/io_uring/register.c
+@@ -113,7 +113,7 @@ static __cold int io_probe(struct io_ring_ctx *ctx, void __user *arg,
+ 
+ 	for (i = 0; i < nr_args; i++) {
+ 		p->ops[i].op = i;
+-		if (!io_issue_defs[i].not_supported)
++		if (io_uring_op_supported(i))
+ 			p->ops[i].flags = IO_URING_OP_SUPPORTED;
+ 	}
+ 	p->ops_len = i;
+diff --git a/io_uring/timeout.c b/io_uring/timeout.c
+index 1c9bf07499b19..9973876d91b0e 100644
+--- a/io_uring/timeout.c
++++ b/io_uring/timeout.c
+@@ -639,7 +639,7 @@ void io_queue_linked_timeout(struct io_kiocb *req)
+ 
+ static bool io_match_task(struct io_kiocb *head, struct task_struct *task,
+ 			  bool cancel_all)
+-	__must_hold(&req->ctx->timeout_lock)
++	__must_hold(&head->ctx->timeout_lock)
+ {
+ 	struct io_kiocb *req;
+ 
+diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
+index 21ac5fb2d5f08..a54163a839686 100644
+--- a/io_uring/uring_cmd.c
++++ b/io_uring/uring_cmd.c
+@@ -265,7 +265,7 @@ int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
+ 		req_set_fail(req);
+ 	io_req_uring_cleanup(req, issue_flags);
+ 	io_req_set_res(req, ret, 0);
+-	return ret;
++	return ret < 0 ? ret : IOU_OK;
+ }
+ 
+ int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 821063660d9f9..fe360b5b211d1 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -414,7 +414,7 @@ const char *btf_type_str(const struct btf_type *t)
+ struct btf_show {
+ 	u64 flags;
+ 	void *target;	/* target of show operation (seq file, buffer) */
+-	void (*showfn)(struct btf_show *show, const char *fmt, va_list args);
++	__printf(2, 0) void (*showfn)(struct btf_show *show, const char *fmt, va_list args);
+ 	const struct btf *btf;
+ 	/* below are used during iteration */
+ 	struct {
+@@ -7370,8 +7370,8 @@ static void btf_type_show(const struct btf *btf, u32 type_id, void *obj,
+ 	btf_type_ops(t)->show(btf, t, type_id, obj, 0, show);
+ }
+ 
+-static void btf_seq_show(struct btf_show *show, const char *fmt,
+-			 va_list args)
++__printf(2, 0) static void btf_seq_show(struct btf_show *show, const char *fmt,
++					va_list args)
+ {
+ 	seq_vprintf((struct seq_file *)show->target, fmt, args);
+ }
+@@ -7404,8 +7404,8 @@ struct btf_show_snprintf {
+ 	int len;		/* length we would have written */
+ };
+ 
+-static void btf_snprintf_show(struct btf_show *show, const char *fmt,
+-			      va_list args)
++__printf(2, 0) static void btf_snprintf_show(struct btf_show *show, const char *fmt,
++					     va_list args)
+ {
+ 	struct btf_show_snprintf *ssnprintf = (struct btf_show_snprintf *)show;
+ 	int len;
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index 3243c83ef3e39..7268370600f6e 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -2786,7 +2786,7 @@ __bpf_kfunc int bpf_wq_start(struct bpf_wq *wq, unsigned int flags)
+ }
+ 
+ __bpf_kfunc int bpf_wq_set_callback_impl(struct bpf_wq *wq,
+-					 int (callback_fn)(void *map, int *key, struct bpf_wq *wq),
++					 int (callback_fn)(void *map, int *key, void *value),
+ 					 unsigned int flags,
+ 					 void *aux__ign)
+ {
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 214a9fa8c6fb7..6b422c275f78c 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3215,7 +3215,8 @@ static int insn_def_regno(const struct bpf_insn *insn)
+ 	case BPF_ST:
+ 		return -1;
+ 	case BPF_STX:
+-		if (BPF_MODE(insn->code) == BPF_ATOMIC &&
++		if ((BPF_MODE(insn->code) == BPF_ATOMIC ||
++		     BPF_MODE(insn->code) == BPF_PROBE_ATOMIC) &&
+ 		    (insn->imm & BPF_FETCH)) {
+ 			if (insn->imm == BPF_CMPXCHG)
+ 				return BPF_REG_0;
+@@ -18767,7 +18768,7 @@ static int adjust_jmp_off(struct bpf_prog *prog, u32 tgt_idx, u32 delta)
+ 		} else {
+ 			if (i + 1 + insn->off != tgt_idx)
+ 				continue;
+-			if (signed_add16_overflows(insn->imm, delta))
++			if (signed_add16_overflows(insn->off, delta))
+ 				return -ERANGE;
+ 			insn->off += delta;
+ 		}
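
The one-line verifier fix above checks the value that is actually about to be adjusted, insn->off, against the signed 16-bit range before adding the delta. The underlying check is ordinary range arithmetic; a standalone sketch:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* True if a + b no longer fits a signed 16-bit field such as insn->off. */
static bool add16_overflows(int16_t a, int16_t b)
{
	int32_t sum = (int32_t)a + (int32_t)b;

	return sum < INT16_MIN || sum > INT16_MAX;
}

int main(void)
{
	printf("%d\n", add16_overflows(32000, 1000));	/* 1: would wrap */
	printf("%d\n", add16_overflows(-100, 50));	/* 0: fits */
	return 0;
}
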
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index c12b9fdb22a4e..5e468db958104 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -21,6 +21,7 @@
+  *  License.  See the file COPYING in the main directory of the Linux
+  *  distribution for more details.
+  */
++#include "cgroup-internal.h"
+ 
+ #include <linux/cpu.h>
+ #include <linux/cpumask.h>
+@@ -169,7 +170,7 @@ struct cpuset {
+ 	/* for custom sched domain */
+ 	int relax_domain_level;
+ 
+-	/* number of valid sub-partitions */
++	/* number of valid local child partitions */
+ 	int nr_subparts;
+ 
+ 	/* partition root state */
+@@ -957,13 +958,15 @@ static int generate_sched_domains(cpumask_var_t **domains,
+ 	int nslot;		/* next empty doms[] struct cpumask slot */
+ 	struct cgroup_subsys_state *pos_css;
+ 	bool root_load_balance = is_sched_load_balance(&top_cpuset);
++	bool cgrpv2 = cgroup_subsys_on_dfl(cpuset_cgrp_subsys);
+ 
+ 	doms = NULL;
+ 	dattr = NULL;
+ 	csa = NULL;
+ 
+ 	/* Special case for the 99% of systems with one, full, sched domain */
+-	if (root_load_balance && !top_cpuset.nr_subparts) {
++	if (root_load_balance && cpumask_empty(subpartitions_cpus)) {
++single_root_domain:
+ 		ndoms = 1;
+ 		doms = alloc_sched_domains(ndoms);
+ 		if (!doms)
+@@ -991,16 +994,18 @@ static int generate_sched_domains(cpumask_var_t **domains,
+ 	cpuset_for_each_descendant_pre(cp, pos_css, &top_cpuset) {
+ 		if (cp == &top_cpuset)
+ 			continue;
++
++		if (cgrpv2)
++			goto v2;
++
+ 		/*
++		 * v1:
+ 		 * Continue traversing beyond @cp iff @cp has some CPUs and
+ 		 * isn't load balancing.  The former is obvious.  The
+ 		 * latter: All child cpusets contain a subset of the
+ 		 * parent's cpus, so just skip them, and then we call
+ 		 * update_domain_attr_tree() to calc relax_domain_level of
+ 		 * the corresponding sched domain.
+-		 *
+-		 * If root is load-balancing, we can skip @cp if it
+-		 * is a subset of the root's effective_cpus.
+ 		 */
+ 		if (!cpumask_empty(cp->cpus_allowed) &&
+ 		    !(is_sched_load_balance(cp) &&
+@@ -1008,20 +1013,39 @@ static int generate_sched_domains(cpumask_var_t **domains,
+ 					 housekeeping_cpumask(HK_TYPE_DOMAIN))))
+ 			continue;
+ 
+-		if (root_load_balance &&
+-		    cpumask_subset(cp->cpus_allowed, top_cpuset.effective_cpus))
+-			continue;
+-
+ 		if (is_sched_load_balance(cp) &&
+ 		    !cpumask_empty(cp->effective_cpus))
+ 			csa[csn++] = cp;
+ 
+-		/* skip @cp's subtree if not a partition root */
+-		if (!is_partition_valid(cp))
++		/* skip @cp's subtree */
++		pos_css = css_rightmost_descendant(pos_css);
++		continue;
++
++v2:
++		/*
++		 * Only valid partition roots that are not isolated and with
++		 * non-empty effective_cpus will be saved into csn[].
++		 */
++		if ((cp->partition_root_state == PRS_ROOT) &&
++		    !cpumask_empty(cp->effective_cpus))
++			csa[csn++] = cp;
++
++		/*
++		 * Skip @cp's subtree if not a partition root and has no
++		 * exclusive CPUs to be granted to child cpusets.
++		 */
++		if (!is_partition_valid(cp) && cpumask_empty(cp->exclusive_cpus))
+ 			pos_css = css_rightmost_descendant(pos_css);
+ 	}
+ 	rcu_read_unlock();
+ 
++	/*
++	 * If there are only isolated partitions underneath the cgroup root,
++	 * we can optimize out unneeded sched domains scanning.
++	 */
++	if (root_load_balance && (csn == 1))
++		goto single_root_domain;
++
+ 	for (i = 0; i < csn; i++)
+ 		csa[i]->pn = i;
+ 	ndoms = csn;
+@@ -1064,6 +1088,20 @@ static int generate_sched_domains(cpumask_var_t **domains,
+ 	dattr = kmalloc_array(ndoms, sizeof(struct sched_domain_attr),
+ 			      GFP_KERNEL);
+ 
++	/*
++	 * Cgroup v2 doesn't support domain attributes, just set all of them
++	 * to SD_ATTR_INIT. Also non-isolating partition root CPUs are a
++	 * subset of HK_TYPE_DOMAIN housekeeping CPUs.
++	 */
++	if (cgrpv2) {
++		for (i = 0; i < ndoms; i++) {
++			cpumask_copy(doms[i], csa[i]->effective_cpus);
++			if (dattr)
++				dattr[i] = SD_ATTR_INIT;
++		}
++		goto done;
++	}
++
+ 	for (nslot = 0, i = 0; i < csn; i++) {
+ 		struct cpuset *a = csa[i];
+ 		struct cpumask *dp;
+@@ -1223,7 +1261,7 @@ static void rebuild_sched_domains_locked(void)
+ 	 * root should be only a subset of the active CPUs.  Since a CPU in any
+ 	 * partition root could be offlined, all must be checked.
+ 	 */
+-	if (top_cpuset.nr_subparts) {
++	if (!cpumask_empty(subpartitions_cpus)) {
+ 		rcu_read_lock();
+ 		cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
+ 			if (!is_partition_valid(cs)) {
+@@ -4571,7 +4609,7 @@ static void cpuset_handle_hotplug(void)
+ 	 * In the rare case that hotplug removes all the cpus in
+ 	 * subpartitions_cpus, we assumed that cpus are updated.
+ 	 */
+-	if (!cpus_updated && top_cpuset.nr_subparts)
++	if (!cpus_updated && !cpumask_empty(subpartitions_cpus))
+ 		cpus_updated = true;
+ 
+ 	/* For v1, synchronize cpus_allowed to cpu_active_mask */
+@@ -5051,10 +5089,14 @@ int proc_cpuset_show(struct seq_file *m, struct pid_namespace *ns,
+ 	if (!buf)
+ 		goto out;
+ 
+-	css = task_get_css(tsk, cpuset_cgrp_id);
+-	retval = cgroup_path_ns(css->cgroup, buf, PATH_MAX,
+-				current->nsproxy->cgroup_ns);
+-	css_put(css);
++	rcu_read_lock();
++	spin_lock_irq(&css_set_lock);
++	css = task_css(tsk, cpuset_cgrp_id);
++	retval = cgroup_path_ns_locked(css->cgroup, buf, PATH_MAX,
++				       current->nsproxy->cgroup_ns);
++	spin_unlock_irq(&css_set_lock);
++	rcu_read_unlock();
++
+ 	if (retval == -E2BIG)
+ 		retval = -ENAMETOOLONG;
+ 	if (retval < 0)
+diff --git a/kernel/debug/kdb/kdb_io.c b/kernel/debug/kdb/kdb_io.c
+index 3131334d7a81c..6a77f1c779c4c 100644
+--- a/kernel/debug/kdb/kdb_io.c
++++ b/kernel/debug/kdb/kdb_io.c
+@@ -206,7 +206,7 @@ char kdb_getchar(void)
+  */
+ static void kdb_position_cursor(char *prompt, char *buffer, char *cp)
+ {
+-	kdb_printf("\r%s", kdb_prompt_str);
++	kdb_printf("\r%s", prompt);
+ 	if (cp > buffer)
+ 		kdb_printf("%.*s", (int)(cp - buffer), buffer);
+ }
+@@ -362,7 +362,7 @@ static char *kdb_read(char *buffer, size_t bufsize)
+ 			if (i >= dtab_count)
+ 				kdb_printf("...");
+ 			kdb_printf("\n");
+-			kdb_printf(kdb_prompt_str);
++			kdb_printf("%s",  kdb_prompt_str);
+ 			kdb_printf("%s", buffer);
+ 			if (cp != lastchar)
+ 				kdb_position_cursor(kdb_prompt_str, buffer, cp);
+@@ -453,7 +453,7 @@ char *kdb_getstr(char *buffer, size_t bufsize, const char *prompt)
+ {
+ 	if (prompt && kdb_prompt_str != prompt)
+ 		strscpy(kdb_prompt_str, prompt, CMD_BUFLEN);
+-	kdb_printf(kdb_prompt_str);
++	kdb_printf("%s", kdb_prompt_str);
+ 	kdb_nextline = 1;	/* Prompt and input resets line number */
+ 	return kdb_read(buffer, bufsize);
+ }
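
Two of the kdb hunks above stop passing a variable prompt string as the printf format (the third makes kdb_position_cursor() use its prompt argument). The pattern the format fixes adopt is the usual format-security one:

#include <stdio.h>

int main(void)
{
	const char prompt[] = "kdb[50%]> ";	/* any '%' in the text is trouble */

	/* printf(prompt);  -- would try to interpret "%]" as a conversion */
	printf("%s", prompt);	/* the string is data, not a format: always safe */
	return 0;
}
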
+diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
+index 81de84318ccc7..b1c18058d55f8 100644
+--- a/kernel/dma/mapping.c
++++ b/kernel/dma/mapping.c
+@@ -67,8 +67,8 @@ void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
+ {
+ 	struct dma_devres match_data = { size, vaddr, dma_handle };
+ 
+-	dma_free_coherent(dev, size, vaddr, dma_handle);
+ 	WARN_ON(devres_destroy(dev, dmam_release, dmam_match, &match_data));
++	dma_free_coherent(dev, size, vaddr, dma_handle);
+ }
+ EXPORT_SYMBOL(dmam_free_coherent);
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 8f908f0779354..b2a6aec118f3a 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -2284,18 +2284,14 @@ event_sched_out(struct perf_event *event, struct perf_event_context *ctx)
+ 	}
+ 
+ 	if (event->pending_sigtrap) {
+-		bool dec = true;
+-
+ 		event->pending_sigtrap = 0;
+ 		if (state != PERF_EVENT_STATE_OFF &&
+-		    !event->pending_work) {
++		    !event->pending_work &&
++		    !task_work_add(current, &event->pending_task, TWA_RESUME)) {
+ 			event->pending_work = 1;
+-			dec = false;
+-			WARN_ON_ONCE(!atomic_long_inc_not_zero(&event->refcount));
+-			task_work_add(current, &event->pending_task, TWA_RESUME);
+-		}
+-		if (dec)
++		} else {
+ 			local_dec(&event->ctx->nr_pending);
++		}
+ 	}
+ 
+ 	perf_event_set_state(event, state);
+@@ -5206,9 +5202,35 @@ static bool exclusive_event_installable(struct perf_event *event,
+ static void perf_addr_filters_splice(struct perf_event *event,
+ 				       struct list_head *head);
+ 
++static void perf_pending_task_sync(struct perf_event *event)
++{
++	struct callback_head *head = &event->pending_task;
++
++	if (!event->pending_work)
++		return;
++	/*
++	 * If the task is queued to the current task's queue, we
++	 * obviously can't wait for it to complete. Simply cancel it.
++	 */
++	if (task_work_cancel(current, head)) {
++		event->pending_work = 0;
++		local_dec(&event->ctx->nr_pending);
++		return;
++	}
++
++	/*
++	 * All accesses related to the event are within the same
++	 * non-preemptible section in perf_pending_task(). The RCU
++	 * grace period before the event is freed will make sure all
++	 * those accesses are complete by then.
++	 */
++	rcuwait_wait_event(&event->pending_work_wait, !event->pending_work, TASK_UNINTERRUPTIBLE);
++}
++
+ static void _free_event(struct perf_event *event)
+ {
+ 	irq_work_sync(&event->pending_irq);
++	perf_pending_task_sync(event);
+ 
+ 	unaccount_event(event);
+ 
+@@ -6509,6 +6531,8 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 			return -EINVAL;
+ 
+ 		nr_pages = vma_size / PAGE_SIZE;
++		if (nr_pages > INT_MAX)
++			return -ENOMEM;
+ 
+ 		mutex_lock(&event->mmap_mutex);
+ 		ret = -EINVAL;
+@@ -6831,24 +6855,28 @@ static void perf_pending_task(struct callback_head *head)
+ 	struct perf_event *event = container_of(head, struct perf_event, pending_task);
+ 	int rctx;
+ 
++	/*
++	 * All accesses to the event must belong to the same implicit RCU read-side
++	 * critical section as the ->pending_work reset. See comment in
++	 * perf_pending_task_sync().
++	 */
++	preempt_disable_notrace();
+ 	/*
+ 	 * If we 'fail' here, that's OK, it means recursion is already disabled
+ 	 * and we won't recurse 'further'.
+ 	 */
+-	preempt_disable_notrace();
+ 	rctx = perf_swevent_get_recursion_context();
+ 
+ 	if (event->pending_work) {
+ 		event->pending_work = 0;
+ 		perf_sigtrap(event);
+ 		local_dec(&event->ctx->nr_pending);
++		rcuwait_wake_up(&event->pending_work_wait);
+ 	}
+ 
+ 	if (rctx >= 0)
+ 		perf_swevent_put_recursion_context(rctx);
+ 	preempt_enable_notrace();
+-
+-	put_event(event);
+ }
+ 
+ #ifdef CONFIG_GUEST_PERF_EVENTS
+@@ -9302,21 +9330,19 @@ static void perf_event_bpf_emit_ksymbols(struct bpf_prog *prog,
+ 	bool unregister = type == PERF_BPF_EVENT_PROG_UNLOAD;
+ 	int i;
+ 
+-	if (prog->aux->func_cnt == 0) {
+-		perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF,
+-				   (u64)(unsigned long)prog->bpf_func,
+-				   prog->jited_len, unregister,
+-				   prog->aux->ksym.name);
+-	} else {
+-		for (i = 0; i < prog->aux->func_cnt; i++) {
+-			struct bpf_prog *subprog = prog->aux->func[i];
+-
+-			perf_event_ksymbol(
+-				PERF_RECORD_KSYMBOL_TYPE_BPF,
+-				(u64)(unsigned long)subprog->bpf_func,
+-				subprog->jited_len, unregister,
+-				subprog->aux->ksym.name);
+-		}
++	perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF,
++			   (u64)(unsigned long)prog->bpf_func,
++			   prog->jited_len, unregister,
++			   prog->aux->ksym.name);
++
++	for (i = 1; i < prog->aux->func_cnt; i++) {
++		struct bpf_prog *subprog = prog->aux->func[i];
++
++		perf_event_ksymbol(
++			PERF_RECORD_KSYMBOL_TYPE_BPF,
++			(u64)(unsigned long)subprog->bpf_func,
++			subprog->jited_len, unregister,
++			subprog->aux->ksym.name);
+ 	}
+ }
+ 
+@@ -11962,6 +11988,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ 	init_waitqueue_head(&event->waitq);
+ 	init_irq_work(&event->pending_irq, perf_pending_irq);
+ 	init_task_work(&event->pending_task, perf_pending_task);
++	rcuwait_init(&event->pending_work_wait);
+ 
+ 	mutex_init(&event->mmap_mutex);
+ 	raw_spin_lock_init(&event->addr_filters.lock);
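
The perf changes above make _free_event() wait for a queued pending_task callback to be either cancelled or run to completion before the event is torn down, pairing task_work_cancel() with an rcuwait. The shape of that shutdown handshake, sketched with a plain condition variable instead of the kernel primitives:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
static bool pending_work = true;

/* Deferred work: clear the flag, then wake any waiter (rcuwait_wake_up()). */
static void *pending_task(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	/* ... deliver the signal ... */
	pending_work = false;
	pthread_cond_signal(&done);
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* Teardown: wait for pending_work to drop (rcuwait_wait_event()) so the
 * object is not freed underneath a still-running callback. */
static void free_event(void)
{
	pthread_mutex_lock(&lock);
	while (pending_work)
		pthread_cond_wait(&done, &lock);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, pending_task, NULL);
	free_event();
	pthread_join(t, NULL);
	return 0;
}
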
+diff --git a/kernel/events/internal.h b/kernel/events/internal.h
+index 5150d5f84c033..386d21c7edfa0 100644
+--- a/kernel/events/internal.h
++++ b/kernel/events/internal.h
+@@ -128,7 +128,7 @@ static inline unsigned long perf_data_size(struct perf_buffer *rb)
+ 
+ static inline unsigned long perf_aux_size(struct perf_buffer *rb)
+ {
+-	return rb->aux_nr_pages << PAGE_SHIFT;
++	return (unsigned long)rb->aux_nr_pages << PAGE_SHIFT;
+ }
+ 
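
perf_aux_size() above now widens aux_nr_pages before the shift; without the cast the shift happens in 32-bit arithmetic and the result wraps once the AUX area no longer fits in 32 bits (the nr_pages check in perf_mmap() earlier and the watermark clamp in the ring_buffer.c hunk below guard the same class of overflow). A small demonstration, using unsigned arithmetic so the wrap is well defined, on an LP64 system:

#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
	unsigned int nr_pages = 1u << 20;	/* 1M pages = 4 GiB */

	unsigned long wrapped = nr_pages << PAGE_SHIFT;		/* 32-bit shift wraps to 0 */
	unsigned long widened = (unsigned long)nr_pages << PAGE_SHIFT;

	printf("wrapped=%lu widened=%lu\n", wrapped, widened);
	return 0;
}
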
+ #define __DEFINE_OUTPUT_COPY_BODY(advance_buf, memcpy_func, ...)	\
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 4013408ce0123..485cf0a66631b 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -688,7 +688,9 @@ int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
+ 		 * max_order, to aid PMU drivers in double buffering.
+ 		 */
+ 		if (!watermark)
+-			watermark = nr_pages << (PAGE_SHIFT - 1);
++			watermark = min_t(unsigned long,
++					  U32_MAX,
++					  (unsigned long)nr_pages << (PAGE_SHIFT - 1));
+ 
+ 		/*
+ 		 * Use aux_watermark as the basis for chunking to
+diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
+index aadc8891cc166..efeacf17c239e 100644
+--- a/kernel/irq/irqdomain.c
++++ b/kernel/irq/irqdomain.c
+@@ -155,7 +155,6 @@ static struct irq_domain *__irq_domain_create(struct fwnode_handle *fwnode,
+ 		switch (fwid->type) {
+ 		case IRQCHIP_FWNODE_NAMED:
+ 		case IRQCHIP_FWNODE_NAMED_ID:
+-			domain->fwnode = fwnode;
+ 			domain->name = kstrdup(fwid->name, GFP_KERNEL);
+ 			if (!domain->name) {
+ 				kfree(domain);
+@@ -164,7 +163,6 @@ static struct irq_domain *__irq_domain_create(struct fwnode_handle *fwnode,
+ 			domain->flags |= IRQ_DOMAIN_NAME_ALLOCATED;
+ 			break;
+ 		default:
+-			domain->fwnode = fwnode;
+ 			domain->name = fwid->name;
+ 			break;
+ 		}
+@@ -184,7 +182,6 @@ static struct irq_domain *__irq_domain_create(struct fwnode_handle *fwnode,
+ 		}
+ 
+ 		domain->name = strreplace(name, '/', ':');
+-		domain->fwnode = fwnode;
+ 		domain->flags |= IRQ_DOMAIN_NAME_ALLOCATED;
+ 	}
+ 
+@@ -200,8 +197,8 @@ static struct irq_domain *__irq_domain_create(struct fwnode_handle *fwnode,
+ 		domain->flags |= IRQ_DOMAIN_NAME_ALLOCATED;
+ 	}
+ 
+-	fwnode_handle_get(fwnode);
+-	fwnode_dev_initialized(fwnode, true);
++	domain->fwnode = fwnode_handle_get(fwnode);
++	fwnode_dev_initialized(domain->fwnode, true);
+ 
+ 	/* Fill structure */
+ 	INIT_RADIX_TREE(&domain->revmap_tree, GFP_KERNEL);
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 71b0fc2d0aeaa..dd53298ef1a5c 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -1337,7 +1337,7 @@ static int irq_thread(void *data)
+ 	 * synchronize_hardirq(). So neither IRQTF_RUNTHREAD nor the
+ 	 * oneshot mask bit can be set.
+ 	 */
+-	task_work_cancel(current, irq_thread_dtor);
++	task_work_cancel_func(current, irq_thread_dtor);
+ 	return 0;
+ }
+ 
+diff --git a/kernel/jump_label.c b/kernel/jump_label.c
+index 3218fa5688b93..1f05a19918f47 100644
+--- a/kernel/jump_label.c
++++ b/kernel/jump_label.c
+@@ -131,7 +131,7 @@ bool static_key_fast_inc_not_disabled(struct static_key *key)
+ 	STATIC_KEY_CHECK_USE(key);
+ 	/*
+ 	 * Negative key->enabled has a special meaning: it sends
+-	 * static_key_slow_inc() down the slow path, and it is non-zero
++	 * static_key_slow_inc/dec() down the slow path, and it is non-zero
+ 	 * so it counts as "enabled" in jump_label_update().  Note that
+ 	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
+ 	 */
+@@ -150,7 +150,7 @@ bool static_key_slow_inc_cpuslocked(struct static_key *key)
+ 	lockdep_assert_cpus_held();
+ 
+ 	/*
+-	 * Careful if we get concurrent static_key_slow_inc() calls;
++	 * Careful if we get concurrent static_key_slow_inc/dec() calls;
+ 	 * later calls must wait for the first one to _finish_ the
+ 	 * jump_label_update() process.  At the same time, however,
+ 	 * the jump_label_update() call below wants to see
+@@ -247,20 +247,32 @@ EXPORT_SYMBOL_GPL(static_key_disable);
+ 
+ static bool static_key_slow_try_dec(struct static_key *key)
+ {
+-	int val;
+-
+-	val = atomic_fetch_add_unless(&key->enabled, -1, 1);
+-	if (val == 1)
+-		return false;
++	int v;
+ 
+ 	/*
+-	 * The negative count check is valid even when a negative
+-	 * key->enabled is in use by static_key_slow_inc(); a
+-	 * __static_key_slow_dec() before the first static_key_slow_inc()
+-	 * returns is unbalanced, because all other static_key_slow_inc()
+-	 * instances block while the update is in progress.
++	 * Go into the slow path if key::enabled is less than or equal than
++	 * one. One is valid to shut down the key, anything less than one
++	 * is an imbalance, which is handled at the call site.
++	 *
++	 * That includes the special case of '-1' which is set in
++	 * static_key_slow_inc_cpuslocked(), but that's harmless as it is
++	 * fully serialized in the slow path below. By the time this task
++	 * acquires the jump label lock the value is back to one and the
++	 * retry under the lock must succeed.
+ 	 */
+-	WARN(val < 0, "jump label: negative count!\n");
++	v = atomic_read(&key->enabled);
++	do {
++		/*
++		 * Warn about the '-1' case though; since that means a
++		 * decrement is concurrent with a first (0->1) increment. IOW
++		 * people are trying to disable something that wasn't yet fully
++		 * enabled. This suggests an ordering problem on the user side.
++		 */
++		WARN_ON_ONCE(v < 0);
++		if (v <= 1)
++			return false;
++	} while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v - 1)));
++
+ 	return true;
+ }
+ 
+@@ -271,10 +283,11 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key)
+ 	if (static_key_slow_try_dec(key))
+ 		return;
+ 
+-	jump_label_lock();
+-	if (atomic_dec_and_test(&key->enabled))
++	guard(mutex)(&jump_label_mutex);
++	if (atomic_cmpxchg(&key->enabled, 1, 0))
+ 		jump_label_update(key);
+-	jump_label_unlock();
++	else
++		WARN_ON_ONCE(!static_key_slow_try_dec(key));
+ }
+ 
+ static void __static_key_slow_dec(struct static_key *key)
+diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
+index c6d17aee4209b..33cac79e39946 100644
+--- a/kernel/locking/rwsem.c
++++ b/kernel/locking/rwsem.c
+@@ -1297,7 +1297,7 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
+ /*
+  * lock for writing
+  */
+-static inline int __down_write_common(struct rw_semaphore *sem, int state)
++static __always_inline int __down_write_common(struct rw_semaphore *sem, int state)
+ {
+ 	int ret = 0;
+ 
+@@ -1310,12 +1310,12 @@ static inline int __down_write_common(struct rw_semaphore *sem, int state)
+ 	return ret;
+ }
+ 
+-static inline void __down_write(struct rw_semaphore *sem)
++static __always_inline void __down_write(struct rw_semaphore *sem)
+ {
+ 	__down_write_common(sem, TASK_UNINTERRUPTIBLE);
+ }
+ 
+-static inline int __down_write_killable(struct rw_semaphore *sem)
++static __always_inline int __down_write_killable(struct rw_semaphore *sem)
+ {
+ 	return __down_write_common(sem, TASK_KILLABLE);
+ }
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index e1bf33018e6d5..098e82bcc427f 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -1757,6 +1757,16 @@ static void rcu_tasks_trace_pregp_step(struct list_head *hop)
+ 	// allow safe access to the hop list.
+ 	for_each_online_cpu(cpu) {
+ 		rcu_read_lock();
++		// Note that cpu_curr_snapshot() picks up the target
++		// CPU's current task while its runqueue is locked with
++		// an smp_mb__after_spinlock().  This ensures that either
++		// the grace-period kthread will see that task's read-side
++		// critical section or the task will see the updater's pre-GP
++		// accesses.  The trailing smp_mb() in cpu_curr_snapshot()
++		// does not currently play a role other than simplify
++		// that function's ordering semantics.  If these simplified
++		// ordering semantics continue to be redundant, that smp_mb()
++		// might be removed.
+ 		t = cpu_curr_snapshot(cpu);
+ 		if (rcu_tasks_trace_pertask_prep(t, true))
+ 			trc_add_holdout(t, hop);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 59ce0841eb1fd..ebf21373f6634 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1326,27 +1326,24 @@ int tg_nop(struct task_group *tg, void *data)
+ static void set_load_weight(struct task_struct *p, bool update_load)
+ {
+ 	int prio = p->static_prio - MAX_RT_PRIO;
+-	struct load_weight *load = &p->se.load;
++	struct load_weight lw;
+ 
+-	/*
+-	 * SCHED_IDLE tasks get minimal weight:
+-	 */
+ 	if (task_has_idle_policy(p)) {
+-		load->weight = scale_load(WEIGHT_IDLEPRIO);
+-		load->inv_weight = WMULT_IDLEPRIO;
+-		return;
++		lw.weight = scale_load(WEIGHT_IDLEPRIO);
++		lw.inv_weight = WMULT_IDLEPRIO;
++	} else {
++		lw.weight = scale_load(sched_prio_to_weight[prio]);
++		lw.inv_weight = sched_prio_to_wmult[prio];
+ 	}
+ 
+ 	/*
+ 	 * SCHED_OTHER tasks have to update their load when changing their
+ 	 * weight
+ 	 */
+-	if (update_load && p->sched_class == &fair_sched_class) {
+-		reweight_task(p, prio);
+-	} else {
+-		load->weight = scale_load(sched_prio_to_weight[prio]);
+-		load->inv_weight = sched_prio_to_wmult[prio];
+-	}
++	if (update_load && p->sched_class == &fair_sched_class)
++		reweight_task(p, &lw);
++	else
++		p->se.load = lw;
+ }
+ 
+ #ifdef CONFIG_UCLAMP_TASK
+@@ -4466,12 +4463,7 @@ int task_call_func(struct task_struct *p, task_call_f func, void *arg)
+  * @cpu: The CPU on which to snapshot the task.
+  *
+  * Returns the task_struct pointer of the task "currently" running on
+- * the specified CPU.  If the same task is running on that CPU throughout,
+- * the return value will be a pointer to that task's task_struct structure.
+- * If the CPU did any context switches even vaguely concurrently with the
+- * execution of this function, the return value will be a pointer to the
+- * task_struct structure of a randomly chosen task that was running on
+- * that CPU somewhere around the time that this function was executing.
++ * the specified CPU.
+  *
+  * If the specified CPU was offline, the return value is whatever it
+  * is, perhaps a pointer to the task_struct structure of that CPU's idle
+@@ -4485,11 +4477,16 @@ int task_call_func(struct task_struct *p, task_call_f func, void *arg)
+  */
+ struct task_struct *cpu_curr_snapshot(int cpu)
+ {
++	struct rq *rq = cpu_rq(cpu);
+ 	struct task_struct *t;
++	struct rq_flags rf;
+ 
+-	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	rq_lock_irqsave(rq, &rf);
++	smp_mb__after_spinlock(); /* Pairing determined by caller's synchronization design. */
+ 	t = rcu_dereference(cpu_curr(cpu));
++	rq_unlock_irqrestore(rq, &rf);
+ 	smp_mb(); /* Pairing determined by caller's synchronization design. */
++
+ 	return t;
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 24dda708b6993..483c137b9d3d7 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3835,15 +3835,14 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
+ 	}
+ }
+ 
+-void reweight_task(struct task_struct *p, int prio)
++void reweight_task(struct task_struct *p, const struct load_weight *lw)
+ {
+ 	struct sched_entity *se = &p->se;
+ 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+ 	struct load_weight *load = &se->load;
+-	unsigned long weight = scale_load(sched_prio_to_weight[prio]);
+ 
+-	reweight_entity(cfs_rq, se, weight);
+-	load->inv_weight = sched_prio_to_wmult[prio];
++	reweight_entity(cfs_rq, se, lw->weight);
++	load->inv_weight = lw->inv_weight;
+ }
+ 
+ static inline int throttled_hierarchy(struct cfs_rq *cfs_rq);
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index ef20c61004ebf..38aeedd8a6cc8 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -2464,7 +2464,7 @@ extern void init_sched_dl_class(void);
+ extern void init_sched_rt_class(void);
+ extern void init_sched_fair_class(void);
+ 
+-extern void reweight_task(struct task_struct *p, int prio);
++extern void reweight_task(struct task_struct *p, const struct load_weight *lw);
+ 
+ extern void resched_curr(struct rq *rq);
+ extern void resched_cpu(int cpu);
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 1f9dd41c04be2..60c737e423a18 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -2600,6 +2600,14 @@ static void do_freezer_trap(void)
+ 	spin_unlock_irq(&current->sighand->siglock);
+ 	cgroup_enter_frozen();
+ 	schedule();
++
++	/*
++	 * We could've been woken by task_work, run it to clear
++	 * TIF_NOTIFY_SIGNAL. The caller will retry if necessary.
++	 */
++	clear_notify_signal();
++	if (unlikely(task_work_pending(current)))
++		task_work_run();
+ }
+ 
+ static int ptrace_signal(int signr, kernel_siginfo_t *info, enum pid_type type)
+diff --git a/kernel/task_work.c b/kernel/task_work.c
+index 95a7e1b7f1dab..2134ac8057a94 100644
+--- a/kernel/task_work.c
++++ b/kernel/task_work.c
+@@ -120,9 +120,9 @@ static bool task_work_func_match(struct callback_head *cb, void *data)
+ }
+ 
+ /**
+- * task_work_cancel - cancel a pending work added by task_work_add()
+- * @task: the task which should execute the work
+- * @func: identifies the work to remove
++ * task_work_cancel_func - cancel a pending work matching a function added by task_work_add()
++ * @task: the task which should execute the func's work
++ * @func: identifies the func to match with a work to remove
+  *
+  * Find the last queued pending work with ->func == @func and remove
+  * it from queue.
+@@ -131,11 +131,35 @@ static bool task_work_func_match(struct callback_head *cb, void *data)
+  * The found work or NULL if not found.
+  */
+ struct callback_head *
+-task_work_cancel(struct task_struct *task, task_work_func_t func)
++task_work_cancel_func(struct task_struct *task, task_work_func_t func)
+ {
+ 	return task_work_cancel_match(task, task_work_func_match, func);
+ }
+ 
++static bool task_work_match(struct callback_head *cb, void *data)
++{
++	return cb == data;
++}
++
++/**
++ * task_work_cancel - cancel a pending work added by task_work_add()
++ * @task: the task which should execute the work
++ * @cb: the callback to remove if queued
++ *
++ * Remove a callback from a task's queue if queued.
++ *
++ * RETURNS:
++ * True if the callback was queued and got cancelled, false otherwise.
++ */
++bool task_work_cancel(struct task_struct *task, struct callback_head *cb)
++{
++	struct callback_head *ret;
++
++	ret = task_work_cancel_match(task, task_work_match, cb);
++
++	return ret == cb;
++}
++
+ /**
+  * task_work_run - execute the works added by task_work_add()
+  *
+@@ -168,7 +192,7 @@ void task_work_run(void)
+ 		if (!work)
+ 			break;
+ 		/*
+-		 * Synchronize with task_work_cancel(). It can not remove
++		 * Synchronize with task_work_cancel_match(). It can not remove
+ 		 * the first entry == work, cmpxchg(task_works) must fail.
+ 		 * But it can remove another entry from the ->next list.
+ 		 */
+diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
+index 771d1e040303b..b4843099a8da7 100644
+--- a/kernel/time/tick-broadcast.c
++++ b/kernel/time/tick-broadcast.c
+@@ -1141,6 +1141,7 @@ void tick_broadcast_switch_to_oneshot(void)
+ #ifdef CONFIG_HOTPLUG_CPU
+ void hotplug_cpu__broadcast_tick_pull(int deadcpu)
+ {
++	struct tick_device *td = this_cpu_ptr(&tick_cpu_device);
+ 	struct clock_event_device *bc;
+ 	unsigned long flags;
+ 
+@@ -1148,6 +1149,28 @@ void hotplug_cpu__broadcast_tick_pull(int deadcpu)
+ 	bc = tick_broadcast_device.evtdev;
+ 
+ 	if (bc && broadcast_needs_cpu(bc, deadcpu)) {
++		/*
++		 * If the broadcast force bit of the current CPU is set,
++		 * then the current CPU has not yet reprogrammed the local
++		 * timer device to avoid a ping-pong race. See
++		 * ___tick_broadcast_oneshot_control().
++		 *
++		 * If the broadcast device is hrtimer based, then
++		 * programming the broadcast event below does not have any
++		 * effect because the local clockevent device is not
++		 * running and not programmed, since the broadcast event
++		 * is not earlier than the pending event of the local clock
++		 * event device. As a consequence, all CPUs waiting for a
++		 * broadcast event are stuck forever.
++		 *
++		 * Detect this condition and reprogram the cpu local timer
++		 * device to avoid the starvation.
++		 */
++		if (tick_check_broadcast_expired()) {
++			cpumask_clear_cpu(smp_processor_id(), tick_broadcast_force_mask);
++			tick_program_event(td->evtdev->next_event, 1);
++		}
++
+ 		/* This moves the broadcast assignment to this CPU: */
+ 		clockevents_program_event(bc, bc->next_event, 1);
+ 	}
+diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
+index 84413114db5c5..d91efe1dc3bf5 100644
+--- a/kernel/time/timer_migration.c
++++ b/kernel/time/timer_migration.c
+@@ -507,7 +507,14 @@ static void walk_groups(up_f up, void *data, struct tmigr_cpu *tmc)
+  *			(get_next_timer_interrupt())
+  * @firstexp:		Contains the first event expiry information when last
+  *			active CPU of hierarchy is on the way to idle to make
+- *			sure CPU will be back in time.
++ *			sure CPU will be back in time. It is updated in the top
++ *			level group only. Be aware that a new top level of the
++ *			hierarchy could appear between the 'top level call' in
++ *			tmigr_update_events() and the check for the parent group
++ *			in walk_groups(). In that case @firstexp might contain a
++ *			value != KTIME_MAX even though it was not the final top
++ *			level. This is not a problem, as the worst outcome is a
++ *			CPU which might wake up a little early.
+  * @evt:		Pointer to tmigr_event which needs to be queued (of idle
+  *			child group)
+  * @childmask:		childmask of child group
+@@ -649,7 +656,7 @@ static bool tmigr_active_up(struct tmigr_group *group,
+ 
+ 	} while (!atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state));
+ 
+-	if ((walk_done == false) && group->parent)
++	if (walk_done == false)
+ 		data->childmask = group->childmask;
+ 
+ 	/*
+@@ -1317,20 +1324,9 @@ static bool tmigr_inactive_up(struct tmigr_group *group,
+ 	/* Event Handling */
+ 	tmigr_update_events(group, child, data);
+ 
+-	if (group->parent && (walk_done == false))
++	if (walk_done == false)
+ 		data->childmask = group->childmask;
+ 
+-	/*
+-	 * data->firstexp was set by tmigr_update_events() and contains the
+-	 * expiry of the first global event which needs to be handled. It
+-	 * differs from KTIME_MAX if:
+-	 * - group is the top level group and
+-	 * - group is idle (which means CPU was the last active CPU in the
+-	 *   hierarchy) and
+-	 * - there is a pending event in the hierarchy
+-	 */
+-	WARN_ON_ONCE(data->firstexp != KTIME_MAX && group->parent);
+-
+ 	trace_tmigr_group_set_cpu_inactive(group, newstate, childmask);
+ 
+ 	return walk_done;
+@@ -1552,10 +1548,11 @@ static void tmigr_connect_child_parent(struct tmigr_group *child,
+ 		data.childmask = child->childmask;
+ 
+ 		/*
+-		 * There is only one new level per time. When connecting the
+-		 * child and the parent and set the child active when the parent
+-		 * is inactive, the parent needs to be the uppermost
+-		 * level. Otherwise there went something wrong!
++		 * Only one new level can be added at a time (protected by
++		 * tmigr_mutex). When connecting the child to the parent and
++		 * setting the child active while the parent is inactive, the
++		 * parent needs to be the uppermost level. Otherwise something
++		 * went wrong!
+ 		 */
+ 		WARN_ON(!tmigr_active_up(parent, child, &data) && parent->parent);
+ 	}
+diff --git a/kernel/time/timer_migration.h b/kernel/time/timer_migration.h
+index 6c37d94a37d90..494f68cc13f4b 100644
+--- a/kernel/time/timer_migration.h
++++ b/kernel/time/timer_migration.h
+@@ -22,7 +22,17 @@ struct tmigr_event {
+  * struct tmigr_group - timer migration hierarchy group
+  * @lock:		Lock protecting the event information and group hierarchy
+  *			information during setup
+- * @parent:		Pointer to the parent group
++ * @parent:		Pointer to the parent group. The pointer is updated when
++ *			a new hierarchy level is added because a CPU comes
++ *			online for the first time. Once it is set, the pointer
++ *			will not be removed or updated. Accessing the parent
++ *			pointer locklessly to decide whether to abort a
++ *			propagation is not a problem; the worst outcome is an
++ *			unnecessary/early CPU wake up. But do not access the
++ *			parent pointer several times in the same 'action' (like
++ *			activation, deactivation, check for remote expiry, ...)
++ *			without holding the lock, as it is not ensured that the
++ *			value will not change.
+  * @groupevt:		Next event of the group which is only used when the
+  *			group is !active. The group event is then queued into
+  *			the parent timer queue.
+diff --git a/kernel/trace/pid_list.c b/kernel/trace/pid_list.c
+index 95106d02b32d8..85de221c0b6f2 100644
+--- a/kernel/trace/pid_list.c
++++ b/kernel/trace/pid_list.c
+@@ -354,7 +354,7 @@ static void pid_list_refill_irq(struct irq_work *iwork)
+ 	while (upper_count-- > 0) {
+ 		union upper_chunk *chunk;
+ 
+-		chunk = kzalloc(sizeof(*chunk), GFP_KERNEL);
++		chunk = kzalloc(sizeof(*chunk), GFP_NOWAIT);
+ 		if (!chunk)
+ 			break;
+ 		*upper_next = chunk;
+@@ -365,7 +365,7 @@ static void pid_list_refill_irq(struct irq_work *iwork)
+ 	while (lower_count-- > 0) {
+ 		union lower_chunk *chunk;
+ 
+-		chunk = kzalloc(sizeof(*chunk), GFP_KERNEL);
++		chunk = kzalloc(sizeof(*chunk), GFP_NOWAIT);
+ 		if (!chunk)
+ 			break;
+ 		*lower_next = chunk;
+diff --git a/kernel/watchdog_perf.c b/kernel/watchdog_perf.c
+index d577c4a8321ee..59c1d86a73a24 100644
+--- a/kernel/watchdog_perf.c
++++ b/kernel/watchdog_perf.c
+@@ -75,11 +75,15 @@ static bool watchdog_check_timestamp(void)
+ 	__this_cpu_write(last_timestamp, now);
+ 	return true;
+ }
+-#else
+-static inline bool watchdog_check_timestamp(void)
++
++static void watchdog_init_timestamp(void)
+ {
+-	return true;
++	__this_cpu_write(nmi_rearmed, 0);
++	__this_cpu_write(last_timestamp, ktime_get_mono_fast_ns());
+ }
++#else
++static inline bool watchdog_check_timestamp(void) { return true; }
++static inline void watchdog_init_timestamp(void) { }
+ #endif
+ 
+ static struct perf_event_attr wd_hw_attr = {
+@@ -161,6 +165,7 @@ void watchdog_hardlockup_enable(unsigned int cpu)
+ 	if (!atomic_fetch_inc(&watchdog_cpus))
+ 		pr_info("Enabled. Permanently consumes one hw-PMU counter.\n");
+ 
++	watchdog_init_timestamp();
+ 	perf_event_enable(this_cpu_read(watchdog_ev));
+ }
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 3fbaecfc88c28..f98247ec99c20 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -2298,9 +2298,13 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
+ 	 * If @work was previously on a different pool, it might still be
+ 	 * running there, in which case the work needs to be queued on that
+ 	 * pool to guarantee non-reentrancy.
++	 *
++	 * For an ordered workqueue, work items must be queued on the newest pwq
++	 * for accurate order management.  Guaranteed order also guarantees
++	 * non-reentrancy.  See the comments above unplug_oldest_pwq().
+ 	 */
+ 	last_pool = get_work_pool(work);
+-	if (last_pool && last_pool != pool) {
++	if (last_pool && last_pool != pool && !(wq->flags & __WQ_ORDERED)) {
+ 		struct worker *worker;
+ 
+ 		raw_spin_lock(&last_pool->lock);
+diff --git a/lib/decompress_bunzip2.c b/lib/decompress_bunzip2.c
+index 3518e7394eca8..ca736166f1000 100644
+--- a/lib/decompress_bunzip2.c
++++ b/lib/decompress_bunzip2.c
+@@ -232,7 +232,8 @@ static int INIT get_next_block(struct bunzip_data *bd)
+ 	   RUNB) */
+ 	symCount = symTotal+2;
+ 	for (j = 0; j < groupCount; j++) {
+-		unsigned char length[MAX_SYMBOLS], temp[MAX_HUFCODE_BITS+1];
++		unsigned char length[MAX_SYMBOLS];
++		unsigned short temp[MAX_HUFCODE_BITS+1];
+ 		int	minLen,	maxLen, pp;
+ 		/* Read Huffman code lengths for each symbol.  They're
+ 		   stored in a way similar to mtf; record a starting
+diff --git a/lib/kobject_uevent.c b/lib/kobject_uevent.c
+index 03b427e2707e3..b7f2fa08d9c82 100644
+--- a/lib/kobject_uevent.c
++++ b/lib/kobject_uevent.c
+@@ -433,8 +433,23 @@ static void zap_modalias_env(struct kobj_uevent_env *env)
+ 		len = strlen(env->envp[i]) + 1;
+ 
+ 		if (i != env->envp_idx - 1) {
++			/* @env->envp[] contains pointers into @env->buf[]
++			 * with @env->buflen chars, and we are removing the
++			 * MODALIAS variable pointed to by @env->envp[i]
++			 * with length @len as shown below:
++			 *
++			 * 0               @env->buf[]      @env->buflen
++			 * ---------------------------------------------
++			 * ^             ^              ^              ^
++			 * |             |->   @len   <-| target block |
++			 * @env->envp[0] @env->envp[i]  @env->envp[i + 1]
++			 *
++			 * so the "target block" indicated above is moved
++			 * backward by @len, and its correct size is
++			 * @env->buflen - (@env->envp[i + 1] - @env->envp[0]).
++			 */
+ 			memmove(env->envp[i], env->envp[i + 1],
+-				env->buflen - len);
++				env->buflen - (env->envp[i + 1] - env->envp[0]));
+ 
+ 			for (j = i; j < env->envp_idx - 1; j++)
+ 				env->envp[j] = env->envp[j + 1] - len;
+diff --git a/lib/objagg.c b/lib/objagg.c
+index 1e248629ed643..1608895b009c8 100644
+--- a/lib/objagg.c
++++ b/lib/objagg.c
+@@ -167,6 +167,9 @@ static int objagg_obj_parent_assign(struct objagg *objagg,
+ {
+ 	void *delta_priv;
+ 
++	if (WARN_ON(!objagg_obj_is_root(parent)))
++		return -EINVAL;
++
+ 	delta_priv = objagg->ops->delta_create(objagg->priv, parent->obj,
+ 					       objagg_obj->obj);
+ 	if (IS_ERR(delta_priv))
+@@ -903,20 +906,6 @@ static const struct objagg_opt_algo *objagg_opt_algos[] = {
+ 	[OBJAGG_OPT_ALGO_SIMPLE_GREEDY] = &objagg_opt_simple_greedy,
+ };
+ 
+-static int objagg_hints_obj_cmp(struct rhashtable_compare_arg *arg,
+-				const void *obj)
+-{
+-	struct rhashtable *ht = arg->ht;
+-	struct objagg_hints *objagg_hints =
+-			container_of(ht, struct objagg_hints, node_ht);
+-	const struct objagg_ops *ops = objagg_hints->ops;
+-	const char *ptr = obj;
+-
+-	ptr += ht->p.key_offset;
+-	return ops->hints_obj_cmp ? ops->hints_obj_cmp(ptr, arg->key) :
+-				    memcmp(ptr, arg->key, ht->p.key_len);
+-}
+-
+ /**
+  * objagg_hints_get - obtains hints instance
+  * @objagg:		objagg instance
+@@ -955,7 +944,6 @@ struct objagg_hints *objagg_hints_get(struct objagg *objagg,
+ 				offsetof(struct objagg_hints_node, obj);
+ 	objagg_hints->ht_params.head_offset =
+ 				offsetof(struct objagg_hints_node, ht_node);
+-	objagg_hints->ht_params.obj_cmpfn = objagg_hints_obj_cmp;
+ 
+ 	err = rhashtable_init(&objagg_hints->node_ht, &objagg_hints->ht_params);
+ 	if (err)
+diff --git a/lib/sbitmap.c b/lib/sbitmap.c
+index 1e453f825c05d..5e2e93307f0d0 100644
+--- a/lib/sbitmap.c
++++ b/lib/sbitmap.c
+@@ -60,12 +60,30 @@ static inline void update_alloc_hint_after_get(struct sbitmap *sb,
+ /*
+  * See if we have deferred clears that we can batch move
+  */
+-static inline bool sbitmap_deferred_clear(struct sbitmap_word *map)
++static inline bool sbitmap_deferred_clear(struct sbitmap_word *map,
++		unsigned int depth, unsigned int alloc_hint, bool wrap)
+ {
+-	unsigned long mask;
++	unsigned long mask, word_mask;
+ 
+-	if (!READ_ONCE(map->cleared))
+-		return false;
++	guard(spinlock_irqsave)(&map->swap_lock);
++
++	if (!map->cleared) {
++		if (depth == 0)
++			return false;
++
++		word_mask = (~0UL) >> (BITS_PER_LONG - depth);
++		/*
++		 * The current behavior is to always retry after moving
++		 * ->cleared to word, and we change it to retry in case
++		 * of any free bits. To avoid an infinite loop, we need
++		 * to take wrap & alloc_hint into account, otherwise a
++		 * soft lockup may occur.
++		 */
++		if (!wrap && alloc_hint)
++			word_mask &= ~((1UL << alloc_hint) - 1);
++
++		return (READ_ONCE(map->word) & word_mask) != word_mask;
++	}
+ 
+ 	/*
+ 	 * First get a stable cleared mask, setting the old mask to 0.
+@@ -85,6 +103,7 @@ int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
+ 		      bool alloc_hint)
+ {
+ 	unsigned int bits_per_word;
++	int i;
+ 
+ 	if (shift < 0)
+ 		shift = sbitmap_calculate_shift(depth);
+@@ -116,6 +135,9 @@ int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
+ 		return -ENOMEM;
+ 	}
+ 
++	for (i = 0; i < sb->map_nr; i++)
++		spin_lock_init(&sb->map[i].swap_lock);
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(sbitmap_init_node);
+@@ -126,7 +148,7 @@ void sbitmap_resize(struct sbitmap *sb, unsigned int depth)
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < sb->map_nr; i++)
+-		sbitmap_deferred_clear(&sb->map[i]);
++		sbitmap_deferred_clear(&sb->map[i], 0, 0, 0);
+ 
+ 	sb->depth = depth;
+ 	sb->map_nr = DIV_ROUND_UP(sb->depth, bits_per_word);
+@@ -179,7 +201,7 @@ static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
+ 					alloc_hint, wrap);
+ 		if (nr != -1)
+ 			break;
+-		if (!sbitmap_deferred_clear(map))
++		if (!sbitmap_deferred_clear(map, depth, alloc_hint, wrap))
+ 			break;
+ 	} while (1);
+ 
+@@ -496,7 +518,7 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
+ 		unsigned int map_depth = __map_depth(sb, index);
+ 		unsigned long val;
+ 
+-		sbitmap_deferred_clear(map);
++		sbitmap_deferred_clear(map, 0, 0, 0);
+ 		val = READ_ONCE(map->word);
+ 		if (val == (1UL << (map_depth - 1)) - 1)
+ 			goto next;
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 2120f7478e55c..374a0d54b08df 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -88,9 +88,17 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
+ 	bool smaps = tva_flags & TVA_SMAPS;
+ 	bool in_pf = tva_flags & TVA_IN_PF;
+ 	bool enforce_sysfs = tva_flags & TVA_ENFORCE_SYSFS;
++	unsigned long supported_orders;
++
+ 	/* Check the intersection of requested and supported orders. */
+-	orders &= vma_is_anonymous(vma) ?
+-			THP_ORDERS_ALL_ANON : THP_ORDERS_ALL_FILE;
++	if (vma_is_anonymous(vma))
++		supported_orders = THP_ORDERS_ALL_ANON;
++	else if (vma_is_dax(vma))
++		supported_orders = THP_ORDERS_ALL_FILE_DAX;
++	else
++		supported_orders = THP_ORDERS_ALL_FILE_DEFAULT;
++
++	orders &= supported_orders;
+ 	if (!orders)
+ 		return 0;
+ 
+@@ -857,7 +865,7 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
+ 	loff_t off_align = round_up(off, size);
+ 	unsigned long len_pad, ret, off_sub;
+ 
+-	if (IS_ENABLED(CONFIG_32BIT) || in_compat_syscall())
++	if (!IS_ENABLED(CONFIG_64BIT) || in_compat_syscall())
+ 		return 0;
+ 
+ 	if (off_end <= off_align || (off_end - off_align) < size)
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 43e1af868cfdc..92a2e8dcb7965 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -2586,6 +2586,23 @@ struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
+ 	return alloc_migrate_hugetlb_folio(h, gfp_mask, preferred_nid, nmask);
+ }
+ 
++static nodemask_t *policy_mbind_nodemask(gfp_t gfp)
++{
++#ifdef CONFIG_NUMA
++	struct mempolicy *mpol = get_task_policy(current);
++
++	/*
++	 * Only enforce MPOL_BIND policy which overlaps with cpuset policy
++	 * (from policy_nodemask) specifically for hugetlb case
++	 */
++	if (mpol->mode == MPOL_BIND &&
++		(apply_policy_zone(mpol, gfp_zone(gfp)) &&
++		 cpuset_nodemask_valid_mems_allowed(&mpol->nodes)))
++		return &mpol->nodes;
++#endif
++	return NULL;
++}
++
+ /*
+  * Increase the hugetlb pool such that it can accommodate a reservation
+  * of size 'delta'.
+@@ -2599,6 +2616,8 @@ static int gather_surplus_pages(struct hstate *h, long delta)
+ 	long i;
+ 	long needed, allocated;
+ 	bool alloc_ok = true;
++	int node;
++	nodemask_t *mbind_nodemask = policy_mbind_nodemask(htlb_alloc_mask(h));
+ 
+ 	lockdep_assert_held(&hugetlb_lock);
+ 	needed = (h->resv_huge_pages + delta) - h->free_huge_pages;
+@@ -2613,8 +2632,15 @@ static int gather_surplus_pages(struct hstate *h, long delta)
+ retry:
+ 	spin_unlock_irq(&hugetlb_lock);
+ 	for (i = 0; i < needed; i++) {
+-		folio = alloc_surplus_hugetlb_folio(h, htlb_alloc_mask(h),
+-				NUMA_NO_NODE, NULL);
++		folio = NULL;
++		for_each_node_mask(node, cpuset_current_mems_allowed) {
++			if (!mbind_nodemask || node_isset(node, *mbind_nodemask)) {
++				folio = alloc_surplus_hugetlb_folio(h, htlb_alloc_mask(h),
++						node, NULL);
++				if (folio)
++					break;
++			}
++		}
+ 		if (!folio) {
+ 			alloc_ok = false;
+ 			break;
+@@ -4617,7 +4643,7 @@ void __init hugetlb_add_hstate(unsigned int order)
+ 	BUG_ON(hugetlb_max_hstate >= HUGE_MAX_HSTATE);
+ 	BUG_ON(order < order_base_2(__NR_USED_SUBPAGE));
+ 	h = &hstates[hugetlb_max_hstate++];
+-	mutex_init(&h->resize_lock);
++	__mutex_init(&h->resize_lock, "resize mutex", &h->resize_key);
+ 	h->order = order;
+ 	h->mask = ~(huge_page_size(h) - 1);
+ 	for (i = 0; i < MAX_NUMNODES; ++i)
+@@ -4840,23 +4866,6 @@ static int __init default_hugepagesz_setup(char *s)
+ }
+ __setup("default_hugepagesz=", default_hugepagesz_setup);
+ 
+-static nodemask_t *policy_mbind_nodemask(gfp_t gfp)
+-{
+-#ifdef CONFIG_NUMA
+-	struct mempolicy *mpol = get_task_policy(current);
+-
+-	/*
+-	 * Only enforce MPOL_BIND policy which overlaps with cpuset policy
+-	 * (from policy_nodemask) specifically for hugetlb case
+-	 */
+-	if (mpol->mode == MPOL_BIND &&
+-		(apply_policy_zone(mpol, gfp_zone(gfp)) &&
+-		 cpuset_nodemask_valid_mems_allowed(&mpol->nodes)))
+-		return &mpol->nodes;
+-#endif
+-	return NULL;
+-}
+-
+ static unsigned int allowed_mems_nr(struct hstate *h)
+ {
+ 	int node;
+diff --git a/mm/memory.c b/mm/memory.c
+index d10e616d73898..f81760c93801f 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -4681,7 +4681,7 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+ 	bool write = vmf->flags & FAULT_FLAG_WRITE;
+-	bool prefault = in_range(vmf->address, addr, nr * PAGE_SIZE);
++	bool prefault = !in_range(vmf->address, addr, nr * PAGE_SIZE);
+ 	pte_t entry;
+ 
+ 	flush_icache_pages(vma, page, nr);
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index aec756ae56377..a1bf9aa15c337 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -3293,8 +3293,9 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
+  * @pol:  pointer to mempolicy to be formatted
+  *
+  * Convert @pol into a string.  If @buffer is too short, truncate the string.
+- * Recommend a @maxlen of at least 32 for the longest mode, "interleave", the
+- * longest flag, "relative", and to display at least a few node ids.
++ * Recommend a @maxlen of at least 51 for the longest mode, "weighted
++ * interleave", plus the longest flags, "relative|balancing", and to
++ * display at least a few node ids.
+  */
+ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
+ {
+@@ -3303,7 +3304,10 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
+ 	unsigned short mode = MPOL_DEFAULT;
+ 	unsigned short flags = 0;
+ 
+-	if (pol && pol != &default_policy && !(pol->flags & MPOL_F_MORON)) {
++	if (pol &&
++	    pol != &default_policy &&
++	    !(pol >= &preferred_node_policy[0] &&
++	      pol <= &preferred_node_policy[ARRAY_SIZE(preferred_node_policy) - 1])) {
+ 		mode = pol->mode;
+ 		flags = pol->flags;
+ 	}
+@@ -3331,12 +3335,18 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
+ 		p += snprintf(p, buffer + maxlen - p, "=");
+ 
+ 		/*
+-		 * Currently, the only defined flags are mutually exclusive
++		 * Static and relative are mutually exclusive.
+ 		 */
+ 		if (flags & MPOL_F_STATIC_NODES)
+ 			p += snprintf(p, buffer + maxlen - p, "static");
+ 		else if (flags & MPOL_F_RELATIVE_NODES)
+ 			p += snprintf(p, buffer + maxlen - p, "relative");
++
++		if (flags & MPOL_F_NUMA_BALANCING) {
++			if (!is_power_of_2(flags & MPOL_MODE_FLAGS))
++				p += snprintf(p, buffer + maxlen - p, "|");
++			p += snprintf(p, buffer + maxlen - p, "balancing");
++		}
+ 	}
+ 
+ 	if (!nodes_empty(nodes))
+diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
+index 1854850b4b897..368b840e75082 100644
+--- a/mm/mmap_lock.c
++++ b/mm/mmap_lock.c
+@@ -19,14 +19,7 @@ EXPORT_TRACEPOINT_SYMBOL(mmap_lock_released);
+ 
+ #ifdef CONFIG_MEMCG
+ 
+-/*
+- * Our various events all share the same buffer (because we don't want or need
+- * to allocate a set of buffers *per event type*), so we need to protect against
+- * concurrent _reg() and _unreg() calls, and count how many _reg() calls have
+- * been made.
+- */
+-static DEFINE_MUTEX(reg_lock);
+-static int reg_refcount; /* Protected by reg_lock. */
++static atomic_t reg_refcount;
+ 
+ /*
+  * Size of the buffer for memcg path names. Ignoring stack trace support,
+@@ -34,136 +27,22 @@ static int reg_refcount; /* Protected by reg_lock. */
+  */
+ #define MEMCG_PATH_BUF_SIZE MAX_FILTER_STR_VAL
+ 
+-/*
+- * How many contexts our trace events might be called in: normal, softirq, irq,
+- * and NMI.
+- */
+-#define CONTEXT_COUNT 4
+-
+-struct memcg_path {
+-	local_lock_t lock;
+-	char __rcu *buf;
+-	local_t buf_idx;
+-};
+-static DEFINE_PER_CPU(struct memcg_path, memcg_paths) = {
+-	.lock = INIT_LOCAL_LOCK(lock),
+-	.buf_idx = LOCAL_INIT(0),
+-};
+-
+-static char **tmp_bufs;
+-
+-/* Called with reg_lock held. */
+-static void free_memcg_path_bufs(void)
+-{
+-	struct memcg_path *memcg_path;
+-	int cpu;
+-	char **old = tmp_bufs;
+-
+-	for_each_possible_cpu(cpu) {
+-		memcg_path = per_cpu_ptr(&memcg_paths, cpu);
+-		*(old++) = rcu_dereference_protected(memcg_path->buf,
+-			lockdep_is_held(&reg_lock));
+-		rcu_assign_pointer(memcg_path->buf, NULL);
+-	}
+-
+-	/* Wait for inflight memcg_path_buf users to finish. */
+-	synchronize_rcu();
+-
+-	old = tmp_bufs;
+-	for_each_possible_cpu(cpu) {
+-		kfree(*(old++));
+-	}
+-
+-	kfree(tmp_bufs);
+-	tmp_bufs = NULL;
+-}
+-
+ int trace_mmap_lock_reg(void)
+ {
+-	int cpu;
+-	char *new;
+-
+-	mutex_lock(&reg_lock);
+-
+-	/* If the refcount is going 0->1, proceed with allocating buffers. */
+-	if (reg_refcount++)
+-		goto out;
+-
+-	tmp_bufs = kmalloc_array(num_possible_cpus(), sizeof(*tmp_bufs),
+-				 GFP_KERNEL);
+-	if (tmp_bufs == NULL)
+-		goto out_fail;
+-
+-	for_each_possible_cpu(cpu) {
+-		new = kmalloc(MEMCG_PATH_BUF_SIZE * CONTEXT_COUNT, GFP_KERNEL);
+-		if (new == NULL)
+-			goto out_fail_free;
+-		rcu_assign_pointer(per_cpu_ptr(&memcg_paths, cpu)->buf, new);
+-		/* Don't need to wait for inflights, they'd have gotten NULL. */
+-	}
+-
+-out:
+-	mutex_unlock(&reg_lock);
++	atomic_inc(&reg_refcount);
+ 	return 0;
+-
+-out_fail_free:
+-	free_memcg_path_bufs();
+-out_fail:
+-	/* Since we failed, undo the earlier ref increment. */
+-	--reg_refcount;
+-
+-	mutex_unlock(&reg_lock);
+-	return -ENOMEM;
+ }
+ 
+ void trace_mmap_lock_unreg(void)
+ {
+-	mutex_lock(&reg_lock);
+-
+-	/* If the refcount is going 1->0, proceed with freeing buffers. */
+-	if (--reg_refcount)
+-		goto out;
+-
+-	free_memcg_path_bufs();
+-
+-out:
+-	mutex_unlock(&reg_lock);
+-}
+-
+-static inline char *get_memcg_path_buf(void)
+-{
+-	struct memcg_path *memcg_path = this_cpu_ptr(&memcg_paths);
+-	char *buf;
+-	int idx;
+-
+-	rcu_read_lock();
+-	buf = rcu_dereference(memcg_path->buf);
+-	if (buf == NULL) {
+-		rcu_read_unlock();
+-		return NULL;
+-	}
+-	idx = local_add_return(MEMCG_PATH_BUF_SIZE, &memcg_path->buf_idx) -
+-	      MEMCG_PATH_BUF_SIZE;
+-	return &buf[idx];
++	atomic_dec(&reg_refcount);
+ }
+ 
+-static inline void put_memcg_path_buf(void)
+-{
+-	local_sub(MEMCG_PATH_BUF_SIZE, &this_cpu_ptr(&memcg_paths)->buf_idx);
+-	rcu_read_unlock();
+-}
+-
+-#define TRACE_MMAP_LOCK_EVENT(type, mm, ...)                                   \
+-	do {                                                                   \
+-		const char *memcg_path;                                        \
+-		local_lock(&memcg_paths.lock);                                 \
+-		memcg_path = get_mm_memcg_path(mm);                            \
+-		trace_mmap_lock_##type(mm,                                     \
+-				       memcg_path != NULL ? memcg_path : "",   \
+-				       ##__VA_ARGS__);                         \
+-		if (likely(memcg_path != NULL))                                \
+-			put_memcg_path_buf();                                  \
+-		local_unlock(&memcg_paths.lock);                               \
++#define TRACE_MMAP_LOCK_EVENT(type, mm, ...)                    \
++	do {                                                    \
++		char buf[MEMCG_PATH_BUF_SIZE];                  \
++		get_mm_memcg_path(mm, buf, sizeof(buf));        \
++		trace_mmap_lock_##type(mm, buf, ##__VA_ARGS__); \
+ 	} while (0)
+ 
+ #else /* !CONFIG_MEMCG */
+@@ -185,37 +64,23 @@ void trace_mmap_lock_unreg(void)
+ #ifdef CONFIG_TRACING
+ #ifdef CONFIG_MEMCG
+ /*
+- * Write the given mm_struct's memcg path to a percpu buffer, and return a
+- * pointer to it. If the path cannot be determined, or no buffer was available
+- * (because the trace event is being unregistered), NULL is returned.
+- *
+- * Note: buffers are allocated per-cpu to avoid locking, so preemption must be
+- * disabled by the caller before calling us, and re-enabled only after the
+- * caller is done with the pointer.
+- *
+- * The caller must call put_memcg_path_buf() once the buffer is no longer
+- * needed. This must be done while preemption is still disabled.
++ * Write the given mm_struct's memcg path to a buffer. If the path cannot be
++ * determined or the trace event is being unregistered, an empty string is written.
+  */
+-static const char *get_mm_memcg_path(struct mm_struct *mm)
++static void get_mm_memcg_path(struct mm_struct *mm, char *buf, size_t buflen)
+ {
+-	char *buf = NULL;
+-	struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
++	struct mem_cgroup *memcg;
+ 
++	buf[0] = '\0';
++	/* No need to get path if no trace event is registered. */
++	if (!atomic_read(&reg_refcount))
++		return;
++	memcg = get_mem_cgroup_from_mm(mm);
+ 	if (memcg == NULL)
+-		goto out;
+-	if (unlikely(memcg->css.cgroup == NULL))
+-		goto out_put;
+-
+-	buf = get_memcg_path_buf();
+-	if (buf == NULL)
+-		goto out_put;
+-
+-	cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
+-
+-out_put:
++		return;
++	if (memcg->css.cgroup)
++		cgroup_path(memcg->css.cgroup, buf, buflen);
+ 	css_put(&memcg->css);
+-out:
+-	return buf;
+ }
+ 
+ #endif /* CONFIG_MEMCG */
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 9ecf99190ea20..df2c442f1c47b 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -2323,16 +2323,20 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
+ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
+ {
+ 	struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
+-	int count = READ_ONCE(pcp->count);
+-
+-	while (count) {
+-		int to_drain = min(count, pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);
+-		count -= to_drain;
++	int count;
+ 
++	do {
+ 		spin_lock(&pcp->lock);
+-		free_pcppages_bulk(zone, to_drain, pcp, 0);
++		count = pcp->count;
++		if (count) {
++			int to_drain = min(count,
++				pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);
++
++			free_pcppages_bulk(zone, to_drain, pcp, 0);
++			count -= to_drain;
++		}
+ 		spin_unlock(&pcp->lock);
+-	}
++	} while (count);
+ }
+ 
+ /*
+@@ -5805,6 +5809,23 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char
+ 	return pages;
+ }
+ 
++void free_reserved_page(struct page *page)
++{
++	if (mem_alloc_profiling_enabled()) {
++		union codetag_ref *ref = get_page_tag_ref(page);
++
++		if (ref) {
++			set_codetag_empty(ref);
++			put_page_tag_ref(ref);
++		}
++	}
++	ClearPageReserved(page);
++	init_page_count(page);
++	__free_page(page);
++	adjust_managed_page_count(page, 1);
++}
++EXPORT_SYMBOL(free_reserved_page);
++
+ static int page_alloc_cpu_dead(unsigned int cpu)
+ {
+ 	struct zone *zone;
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 2e34de9cd0d4f..68ac33bea3a3c 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -3900,6 +3900,32 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long seq,
+  *                          working set protection
+  ******************************************************************************/
+ 
++static void set_initial_priority(struct pglist_data *pgdat, struct scan_control *sc)
++{
++	int priority;
++	unsigned long reclaimable;
++
++	if (sc->priority != DEF_PRIORITY || sc->nr_to_reclaim < MIN_LRU_BATCH)
++		return;
++	/*
++	 * Determine the initial priority based on
++	 * (total >> priority) * reclaimed_to_scanned_ratio = nr_to_reclaim,
++	 * where reclaimed_to_scanned_ratio = inactive / total.
++	 */
++	reclaimable = node_page_state(pgdat, NR_INACTIVE_FILE);
++	if (can_reclaim_anon_pages(NULL, pgdat->node_id, sc))
++		reclaimable += node_page_state(pgdat, NR_INACTIVE_ANON);
++
++	/* round down reclaimable and round up sc->nr_to_reclaim */
++	priority = fls_long(reclaimable) - 1 - fls_long(sc->nr_to_reclaim - 1);
++
++	/*
++	 * The estimation is based on LRU pages only, so cap it to prevent
++	 * overshoots of shrinker objects by large margins.
++	 */
++	sc->priority = clamp(priority, DEF_PRIORITY / 2, DEF_PRIORITY);
++}
++
+ static bool lruvec_is_sizable(struct lruvec *lruvec, struct scan_control *sc)
+ {
+ 	int gen, type, zone;
+@@ -3933,19 +3959,17 @@ static bool lruvec_is_reclaimable(struct lruvec *lruvec, struct scan_control *sc
+ 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+ 	DEFINE_MIN_SEQ(lruvec);
+ 
+-	/* see the comment on lru_gen_folio */
+-	gen = lru_gen_from_seq(min_seq[LRU_GEN_FILE]);
+-	birth = READ_ONCE(lruvec->lrugen.timestamps[gen]);
+-
+-	if (time_is_after_jiffies(birth + min_ttl))
++	if (mem_cgroup_below_min(NULL, memcg))
+ 		return false;
+ 
+ 	if (!lruvec_is_sizable(lruvec, sc))
+ 		return false;
+ 
+-	mem_cgroup_calculate_protection(NULL, memcg);
++	/* see the comment on lru_gen_folio */
++	gen = lru_gen_from_seq(min_seq[LRU_GEN_FILE]);
++	birth = READ_ONCE(lruvec->lrugen.timestamps[gen]);
+ 
+-	return !mem_cgroup_below_min(NULL, memcg);
++	return time_is_before_jiffies(birth + min_ttl);
+ }
+ 
+ /* to protect the working set of the last N jiffies */
+@@ -3955,23 +3979,20 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
+ {
+ 	struct mem_cgroup *memcg;
+ 	unsigned long min_ttl = READ_ONCE(lru_gen_min_ttl);
++	bool reclaimable = !min_ttl;
+ 
+ 	VM_WARN_ON_ONCE(!current_is_kswapd());
+ 
+-	/* check the order to exclude compaction-induced reclaim */
+-	if (!min_ttl || sc->order || sc->priority == DEF_PRIORITY)
+-		return;
++	set_initial_priority(pgdat, sc);
+ 
+ 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
+ 	do {
+ 		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
+ 
+-		if (lruvec_is_reclaimable(lruvec, sc, min_ttl)) {
+-			mem_cgroup_iter_break(NULL, memcg);
+-			return;
+-		}
++		mem_cgroup_calculate_protection(NULL, memcg);
+ 
+-		cond_resched();
++		if (!reclaimable)
++			reclaimable = lruvec_is_reclaimable(lruvec, sc, min_ttl);
+ 	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
+ 
+ 	/*
+@@ -3979,7 +4000,7 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
+ 	 * younger than min_ttl. However, another possibility is all memcgs are
+ 	 * either too small or below min.
+ 	 */
+-	if (mutex_trylock(&oom_lock)) {
++	if (!reclaimable && mutex_trylock(&oom_lock)) {
+ 		struct oom_control oc = {
+ 			.gfp_mask = sc->gfp_mask,
+ 		};
+@@ -4582,7 +4603,6 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
+ 
+ 		/* retry folios that may have missed folio_rotate_reclaimable() */
+ 		list_move(&folio->lru, &clean);
+-		sc->nr_scanned -= folio_nr_pages(folio);
+ 	}
+ 
+ 	spin_lock_irq(&lruvec->lru_lock);
+@@ -4772,8 +4792,7 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
+ 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+ 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+ 
+-	mem_cgroup_calculate_protection(NULL, memcg);
+-
++	/* lru_gen_age_node() called mem_cgroup_calculate_protection() */
+ 	if (mem_cgroup_below_min(NULL, memcg))
+ 		return MEMCG_LRU_YOUNG;
+ 
+@@ -4897,28 +4916,6 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc
+ 	blk_finish_plug(&plug);
+ }
+ 
+-static void set_initial_priority(struct pglist_data *pgdat, struct scan_control *sc)
+-{
+-	int priority;
+-	unsigned long reclaimable;
+-
+-	if (sc->priority != DEF_PRIORITY || sc->nr_to_reclaim < MIN_LRU_BATCH)
+-		return;
+-	/*
+-	 * Determine the initial priority based on
+-	 * (total >> priority) * reclaimed_to_scanned_ratio = nr_to_reclaim,
+-	 * where reclaimed_to_scanned_ratio = inactive / total.
+-	 */
+-	reclaimable = node_page_state(pgdat, NR_INACTIVE_FILE);
+-	if (can_reclaim_anon_pages(NULL, pgdat->node_id, sc))
+-		reclaimable += node_page_state(pgdat, NR_INACTIVE_ANON);
+-
+-	/* round down reclaimable and round up sc->nr_to_reclaim */
+-	priority = fls_long(reclaimable) - 1 - fls_long(sc->nr_to_reclaim - 1);
+-
+-	sc->priority = clamp(priority, 0, DEF_PRIORITY);
+-}
+-
+ static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *sc)
+ {
+ 	struct blk_plug plug;
+@@ -6702,6 +6699,7 @@ static bool kswapd_shrink_node(pg_data_t *pgdat,
+ {
+ 	struct zone *zone;
+ 	int z;
++	unsigned long nr_reclaimed = sc->nr_reclaimed;
+ 
+ 	/* Reclaim a number of pages proportional to the number of zones */
+ 	sc->nr_to_reclaim = 0;
+@@ -6729,7 +6727,8 @@ static bool kswapd_shrink_node(pg_data_t *pgdat,
+ 	if (sc->order && sc->nr_reclaimed >= compact_gap(sc->order))
+ 		sc->order = 0;
+ 
+-	return sc->nr_scanned >= sc->nr_to_reclaim;
++	/* account for progress from mm_account_reclaimed_pages() */
++	return max(sc->nr_scanned, sc->nr_reclaimed - nr_reclaimed) >= sc->nr_to_reclaim;
+ }
+ 
+ /* Page allocator PCP high watermark is lowered if reclaim is active. */
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index c644b30977bd8..7ae118a6d947b 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -718,8 +718,8 @@ int hci_dev_cmd(unsigned int cmd, void __user *arg)
+ 
+ 	switch (cmd) {
+ 	case HCISETAUTH:
+-		err = __hci_cmd_sync_status(hdev, HCI_OP_WRITE_AUTH_ENABLE,
+-					    1, &dr.dev_opt, HCI_CMD_TIMEOUT);
++		err = hci_cmd_sync_status(hdev, HCI_OP_WRITE_AUTH_ENABLE,
++					  1, &dr.dev_opt, HCI_CMD_TIMEOUT);
+ 		break;
+ 
+ 	case HCISETENCRYPT:
+@@ -730,23 +730,21 @@ int hci_dev_cmd(unsigned int cmd, void __user *arg)
+ 
+ 		if (!test_bit(HCI_AUTH, &hdev->flags)) {
+ 			/* Auth must be enabled first */
+-			err = __hci_cmd_sync_status(hdev,
+-						    HCI_OP_WRITE_AUTH_ENABLE,
+-						    1, &dr.dev_opt,
+-						    HCI_CMD_TIMEOUT);
++			err = hci_cmd_sync_status(hdev,
++						  HCI_OP_WRITE_AUTH_ENABLE,
++						  1, &dr.dev_opt,
++						  HCI_CMD_TIMEOUT);
+ 			if (err)
+ 				break;
+ 		}
+ 
+-		err = __hci_cmd_sync_status(hdev, HCI_OP_WRITE_ENCRYPT_MODE,
+-					    1, &dr.dev_opt,
+-					    HCI_CMD_TIMEOUT);
++		err = hci_cmd_sync_status(hdev, HCI_OP_WRITE_ENCRYPT_MODE,
++					  1, &dr.dev_opt, HCI_CMD_TIMEOUT);
+ 		break;
+ 
+ 	case HCISETSCAN:
+-		err = __hci_cmd_sync_status(hdev, HCI_OP_WRITE_SCAN_ENABLE,
+-					    1, &dr.dev_opt,
+-					    HCI_CMD_TIMEOUT);
++		err = hci_cmd_sync_status(hdev, HCI_OP_WRITE_SCAN_ENABLE,
++					  1, &dr.dev_opt, HCI_CMD_TIMEOUT);
+ 
+ 		/* Ensure that the connectable and discoverable states
+ 		 * get correctly modified as this was a non-mgmt change.
+@@ -758,9 +756,8 @@ int hci_dev_cmd(unsigned int cmd, void __user *arg)
+ 	case HCISETLINKPOL:
+ 		policy = cpu_to_le16(dr.dev_opt);
+ 
+-		err = __hci_cmd_sync_status(hdev, HCI_OP_WRITE_DEF_LINK_POLICY,
+-					    2, &policy,
+-					    HCI_CMD_TIMEOUT);
++		err = hci_cmd_sync_status(hdev, HCI_OP_WRITE_DEF_LINK_POLICY,
++					  2, &policy, HCI_CMD_TIMEOUT);
+ 		break;
+ 
+ 	case HCISETLINKMODE:
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 93f7ac905cece..4611a67d7dcc3 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6988,6 +6988,8 @@ static void hci_le_big_info_adv_report_evt(struct hci_dev *hdev, void *data,
+ 	if (!pa_sync)
+ 		goto unlock;
+ 
++	pa_sync->iso_qos.bcast.encryption = ev->encryption;
++
+ 	/* Notify iso layer */
+ 	hci_connect_cfm(pa_sync, 0);
+ 
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index eea34e6a236fd..bb704088559fb 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -371,8 +371,6 @@ static void le_scan_disable(struct work_struct *work)
+ 		goto _return;
+ 	}
+ 
+-	hdev->discovery.scan_start = 0;
+-
+ 	/* If we were running LE only scan, change discovery state. If
+ 	 * we were running both LE and BR/EDR inquiry simultaneously,
+ 	 * and BR/EDR inquiry is already finished, stop discovery,
+diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c
+index d97064d460dc7..e19b583ff2c6d 100644
+--- a/net/bridge/br_forward.c
++++ b/net/bridge/br_forward.c
+@@ -25,8 +25,8 @@ static inline int should_deliver(const struct net_bridge_port *p,
+ 
+ 	vg = nbp_vlan_group_rcu(p);
+ 	return ((p->flags & BR_HAIRPIN_MODE) || skb->dev != p->dev) &&
+-		p->state == BR_STATE_FORWARDING && br_allowed_egress(vg, skb) &&
+-		nbp_switchdev_allowed_egress(p, skb) &&
++		(br_mst_is_enabled(p->br) || p->state == BR_STATE_FORWARDING) &&
++		br_allowed_egress(vg, skb) && nbp_switchdev_allowed_egress(p, skb) &&
+ 		!br_skb_isolated(p, skb);
+ }
+ 
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 9933851c685e7..110692c1dd95a 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -3544,13 +3544,20 @@ static int bpf_skb_net_grow(struct sk_buff *skb, u32 off, u32 len_diff,
+ 	if (skb_is_gso(skb)) {
+ 		struct skb_shared_info *shinfo = skb_shinfo(skb);
+ 
+-		/* Due to header grow, MSS needs to be downgraded. */
+-		if (!(flags & BPF_F_ADJ_ROOM_FIXED_GSO))
+-			skb_decrease_gso_size(shinfo, len_diff);
+-
+ 		/* Header must be checked, and gso_segs recomputed. */
+ 		shinfo->gso_type |= gso_type;
+ 		shinfo->gso_segs = 0;
++
++		/* Due to header growth, MSS needs to be downgraded.
++		 * There is a BUG_ON() when segmenting the frag_list with
++		 * head_frag true, so linearize the skb after downgrading
++		 * the MSS.
++		 */
++		if (!(flags & BPF_F_ADJ_ROOM_FIXED_GSO)) {
++			skb_decrease_gso_size(shinfo, len_diff);
++			if (shinfo->frag_list)
++				return skb_linearize(skb);
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index f82e9a7d3b379..7b54f44f5372a 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1101,7 +1101,7 @@ bool __skb_flow_dissect(const struct net *net,
+ 		}
+ 	}
+ 
+-	WARN_ON_ONCE(!net);
++	DEBUG_NET_WARN_ON_ONCE(!net);
+ 	if (net) {
+ 		enum netns_bpf_attach_type type = NETNS_BPF_FLOW_DISSECTOR;
+ 		struct bpf_prog_array *run_array;
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index f4444b4e39e63..3772eb63dcad1 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -445,7 +445,7 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
+ 	return true;
+ 
+ unmap_failed:
+-	WARN_ON_ONCE("unexpected DMA address, please report to netdev@");
++	WARN_ONCE(1, "unexpected DMA address, please report to netdev@");
+ 	dma_unmap_page_attrs(pool->p.dev, dma,
+ 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
+ 			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+diff --git a/net/core/xdp.c b/net/core/xdp.c
+index 022c12059cf2f..bcc5551c6424b 100644
+--- a/net/core/xdp.c
++++ b/net/core/xdp.c
+@@ -127,10 +127,8 @@ void xdp_unreg_mem_model(struct xdp_mem_info *mem)
+ 		return;
+ 
+ 	if (type == MEM_TYPE_PAGE_POOL) {
+-		rcu_read_lock();
+-		xa = rhashtable_lookup(mem_id_ht, &id, mem_id_rht_params);
++		xa = rhashtable_lookup_fast(mem_id_ht, &id, mem_id_rht_params);
+ 		page_pool_destroy(xa->page_pool);
+-		rcu_read_unlock();
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(xdp_unreg_mem_model);
+diff --git a/net/ethtool/pse-pd.c b/net/ethtool/pse-pd.c
+index 2c981d443f27e..776ac96cdadc9 100644
+--- a/net/ethtool/pse-pd.c
++++ b/net/ethtool/pse-pd.c
+@@ -178,12 +178,14 @@ ethnl_set_pse(struct ethnl_req_info *req_info, struct genl_info *info)
+ 
+ 	phydev = dev->phydev;
+ 	/* These values are already validated by the ethnl_pse_set_policy */
+-	if (pse_has_podl(phydev->psec))
++	if (tb[ETHTOOL_A_PODL_PSE_ADMIN_CONTROL])
+ 		config.podl_admin_control = nla_get_u32(tb[ETHTOOL_A_PODL_PSE_ADMIN_CONTROL]);
+-	if (pse_has_c33(phydev->psec))
++	if (tb[ETHTOOL_A_C33_PSE_ADMIN_CONTROL])
+ 		config.c33_admin_control = nla_get_u32(tb[ETHTOOL_A_C33_PSE_ADMIN_CONTROL]);
+ 
+-	/* Return errno directly - PSE has no notification */
++	/* Return errno directly - PSE has no notification.
++	 * pse_ethtool_set_config() will do nothing if the config is null.
++	 */
+ 	return pse_ethtool_set_config(phydev->psec, info->extack, &config);
+ }
+ 
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 3968d3f98e083..619a4df7be1e8 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -239,8 +239,7 @@ static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb)
+ #else
+ static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb)
+ {
+-	kfree_skb(skb);
+-
++	WARN_ON(1);
+ 	return -EOPNOTSUPP;
+ }
+ #endif
+diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
+index b3271957ad9a0..3f28ecbdcaef1 100644
+--- a/net/ipv4/esp4_offload.c
++++ b/net/ipv4/esp4_offload.c
+@@ -56,6 +56,13 @@ static struct sk_buff *esp4_gro_receive(struct list_head *head,
+ 		x = xfrm_state_lookup(dev_net(skb->dev), skb->mark,
+ 				      (xfrm_address_t *)&ip_hdr(skb)->daddr,
+ 				      spi, IPPROTO_ESP, AF_INET);
++
++		if (unlikely(x && x->dir && x->dir != XFRM_SA_DIR_IN)) {
++			/* non-offload path will record the error and audit log */
++			xfrm_state_put(x);
++			x = NULL;
++		}
++
+ 		if (!x)
+ 			goto out_reset;
+ 
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index f669da98d11d8..8956026bc0a2c 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -2270,6 +2270,15 @@ void fib_select_path(struct net *net, struct fib_result *res,
+ 		fib_select_default(fl4, res);
+ 
+ check_saddr:
+-	if (!fl4->saddr)
+-		fl4->saddr = fib_result_prefsrc(net, res);
++	if (!fl4->saddr) {
++		struct net_device *l3mdev;
++
++		l3mdev = dev_get_by_index_rcu(net, fl4->flowi4_l3mdev);
++
++		if (!l3mdev ||
++		    l3mdev_master_dev_rcu(FIB_RES_DEV(*res)) == l3mdev)
++			fl4->saddr = fib_result_prefsrc(net, res);
++		else
++			fl4->saddr = inet_select_addr(l3mdev, 0, RT_SCOPE_LINK);
++	}
+ }
+diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
+index f474106464d2f..8f30e3f00b7f2 100644
+--- a/net/ipv4/fib_trie.c
++++ b/net/ipv4/fib_trie.c
+@@ -1629,6 +1629,7 @@ int fib_table_lookup(struct fib_table *tb, const struct flowi4 *flp,
+ 			res->nhc = nhc;
+ 			res->type = fa->fa_type;
+ 			res->scope = fi->fib_scope;
++			res->dscp = fa->fa_dscp;
+ 			res->fi = fi;
+ 			res->table = tb;
+ 			res->fa_head = &n->leaf;
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 535856b0f0edc..6b9787ee86017 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -888,9 +888,10 @@ static int nla_put_nh_group(struct sk_buff *skb, struct nexthop *nh,
+ 
+ 	p = nla_data(nla);
+ 	for (i = 0; i < nhg->num_nh; ++i) {
+-		p->id = nhg->nh_entries[i].nh->id;
+-		p->weight = nhg->nh_entries[i].weight - 1;
+-		p += 1;
++		*p++ = (struct nexthop_grp) {
++			.id = nhg->nh_entries[i].nh->id,
++			.weight = nhg->nh_entries[i].weight - 1,
++		};
+ 	}
+ 
+ 	if (nhg->resilient && nla_put_nh_group_res(skb, nhg))
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index b3073d1c8f8f7..990912fa18e85 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1263,7 +1263,7 @@ void ip_rt_get_source(u8 *addr, struct sk_buff *skb, struct rtable *rt)
+ 		struct flowi4 fl4 = {
+ 			.daddr = iph->daddr,
+ 			.saddr = iph->saddr,
+-			.flowi4_tos = RT_TOS(iph->tos),
++			.flowi4_tos = iph->tos & IPTOS_RT_MASK,
+ 			.flowi4_oif = rt->dst.dev->ifindex,
+ 			.flowi4_iif = skb->dev->ifindex,
+ 			.flowi4_mark = skb->mark,
+@@ -2868,9 +2868,9 @@ EXPORT_SYMBOL_GPL(ip_route_output_flow);
+ 
+ /* called with rcu_read_lock held */
+ static int rt_fill_info(struct net *net, __be32 dst, __be32 src,
+-			struct rtable *rt, u32 table_id, struct flowi4 *fl4,
+-			struct sk_buff *skb, u32 portid, u32 seq,
+-			unsigned int flags)
++			struct rtable *rt, u32 table_id, dscp_t dscp,
++			struct flowi4 *fl4, struct sk_buff *skb, u32 portid,
++			u32 seq, unsigned int flags)
+ {
+ 	struct rtmsg *r;
+ 	struct nlmsghdr *nlh;
+@@ -2886,7 +2886,7 @@ static int rt_fill_info(struct net *net, __be32 dst, __be32 src,
+ 	r->rtm_family	 = AF_INET;
+ 	r->rtm_dst_len	= 32;
+ 	r->rtm_src_len	= 0;
+-	r->rtm_tos	= fl4 ? fl4->flowi4_tos : 0;
++	r->rtm_tos	= inet_dscp_to_dsfield(dscp);
+ 	r->rtm_table	= table_id < 256 ? table_id : RT_TABLE_COMPAT;
+ 	if (nla_put_u32(skb, RTA_TABLE, table_id))
+ 		goto nla_put_failure;
+@@ -3036,7 +3036,7 @@ static int fnhe_dump_bucket(struct net *net, struct sk_buff *skb,
+ 				goto next;
+ 
+ 			err = rt_fill_info(net, fnhe->fnhe_daddr, 0, rt,
+-					   table_id, NULL, skb,
++					   table_id, 0, NULL, skb,
+ 					   NETLINK_CB(cb->skb).portid,
+ 					   cb->nlh->nlmsg_seq, flags);
+ 			if (err)
+@@ -3332,7 +3332,7 @@ static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ 		fri.tb_id = table_id;
+ 		fri.dst = res.prefix;
+ 		fri.dst_len = res.prefixlen;
+-		fri.dscp = inet_dsfield_to_dscp(fl4.flowi4_tos);
++		fri.dscp = res.dscp;
+ 		fri.type = rt->rt_type;
+ 		fri.offload = 0;
+ 		fri.trap = 0;
+@@ -3359,8 +3359,8 @@ static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ 		err = fib_dump_info(skb, NETLINK_CB(in_skb).portid,
+ 				    nlh->nlmsg_seq, RTM_NEWROUTE, &fri, 0);
+ 	} else {
+-		err = rt_fill_info(net, dst, src, rt, table_id, &fl4, skb,
+-				   NETLINK_CB(in_skb).portid,
++		err = rt_fill_info(net, dst, src, rt, table_id, res.dscp, &fl4,
++				   skb, NETLINK_CB(in_skb).portid,
+ 				   nlh->nlmsg_seq, 0);
+ 	}
+ 	if (err < 0)
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index e6790ea748773..ec6911034138f 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -598,7 +598,7 @@ __poll_t tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ 		 */
+ 		mask |= EPOLLOUT | EPOLLWRNORM;
+ 	}
+-	/* This barrier is coupled with smp_wmb() in tcp_reset() */
++	/* This barrier is coupled with smp_wmb() in tcp_done_with_error() */
+ 	smp_rmb();
+ 	if (READ_ONCE(sk->sk_err) ||
+ 	    !skb_queue_empty_lockless(&sk->sk_error_queue))
+@@ -4583,14 +4583,10 @@ int tcp_abort(struct sock *sk, int err)
+ 	bh_lock_sock(sk);
+ 
+ 	if (!sock_flag(sk, SOCK_DEAD)) {
+-		WRITE_ONCE(sk->sk_err, err);
+-		/* This barrier is coupled with smp_rmb() in tcp_poll() */
+-		smp_wmb();
+-		sk_error_report(sk);
+ 		if (tcp_need_reset(sk->sk_state))
+ 			tcp_send_active_reset(sk, GFP_ATOMIC,
+ 					      SK_RST_REASON_NOT_SPECIFIED);
+-		tcp_done(sk);
++		tcp_done_with_error(sk, err);
+ 	}
+ 
+ 	bh_unlock_sock(sk);
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 38da23f991d60..570e87ad9a56e 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4469,9 +4469,26 @@ static enum skb_drop_reason tcp_sequence(const struct tcp_sock *tp,
+ 	return SKB_NOT_DROPPED_YET;
+ }
+ 
++
++void tcp_done_with_error(struct sock *sk, int err)
++{
++	/* This barrier is coupled with smp_rmb() in tcp_poll() */
++	WRITE_ONCE(sk->sk_err, err);
++	smp_wmb();
++
++	tcp_write_queue_purge(sk);
++	tcp_done(sk);
++
++	if (!sock_flag(sk, SOCK_DEAD))
++		sk_error_report(sk);
++}
++EXPORT_SYMBOL(tcp_done_with_error);
++
+ /* When we get a reset we do this. */
+ void tcp_reset(struct sock *sk, struct sk_buff *skb)
+ {
++	int err;
++
+ 	trace_tcp_receive_reset(sk);
+ 
+ 	/* mptcp can't tell us to ignore reset pkts,
+@@ -4483,24 +4500,17 @@ void tcp_reset(struct sock *sk, struct sk_buff *skb)
+ 	/* We want the right error as BSD sees it (and indeed as we do). */
+ 	switch (sk->sk_state) {
+ 	case TCP_SYN_SENT:
+-		WRITE_ONCE(sk->sk_err, ECONNREFUSED);
++		err = ECONNREFUSED;
+ 		break;
+ 	case TCP_CLOSE_WAIT:
+-		WRITE_ONCE(sk->sk_err, EPIPE);
++		err = EPIPE;
+ 		break;
+ 	case TCP_CLOSE:
+ 		return;
+ 	default:
+-		WRITE_ONCE(sk->sk_err, ECONNRESET);
++		err = ECONNRESET;
+ 	}
+-	/* This barrier is coupled with smp_rmb() in tcp_poll() */
+-	smp_wmb();
+-
+-	tcp_write_queue_purge(sk);
+-	tcp_done(sk);
+-
+-	if (!sock_flag(sk, SOCK_DEAD))
+-		sk_error_report(sk);
++	tcp_done_with_error(sk, err);
+ }
+ 
+ /*
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index b710958393e64..a541659b6562b 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -611,15 +611,10 @@ int tcp_v4_err(struct sk_buff *skb, u32 info)
+ 
+ 		ip_icmp_error(sk, skb, err, th->dest, info, (u8 *)th);
+ 
+-		if (!sock_owned_by_user(sk)) {
+-			WRITE_ONCE(sk->sk_err, err);
+-
+-			sk_error_report(sk);
+-
+-			tcp_done(sk);
+-		} else {
++		if (!sock_owned_by_user(sk))
++			tcp_done_with_error(sk, err);
++		else
+ 			WRITE_ONCE(sk->sk_err_soft, err);
+-		}
+ 		goto out;
+ 	}
+ 
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 538c06f95918d..0fbebf6266e91 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -515,9 +515,6 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
+ 	const struct tcp_sock *oldtp;
+ 	struct tcp_sock *newtp;
+ 	u32 seq;
+-#ifdef CONFIG_TCP_AO
+-	struct tcp_ao_key *ao_key;
+-#endif
+ 
+ 	if (!newsk)
+ 		return NULL;
+@@ -608,10 +605,14 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
+ #endif
+ #ifdef CONFIG_TCP_AO
+ 	newtp->ao_info = NULL;
+-	ao_key = treq->af_specific->ao_lookup(sk, req,
+-				tcp_rsk(req)->ao_keyid, -1);
+-	if (ao_key)
+-		newtp->tcp_header_len += tcp_ao_len_aligned(ao_key);
++
++	if (tcp_rsk_used_ao(req)) {
++		struct tcp_ao_key *ao_key;
++
++		ao_key = treq->af_specific->ao_lookup(sk, req, tcp_rsk(req)->ao_keyid, -1);
++		if (ao_key)
++			newtp->tcp_header_len += tcp_ao_len_aligned(ao_key);
++	}
+  #endif
+ 	if (skb->len >= TCP_MSS_DEFAULT + newtp->tcp_header_len)
+ 		newicsk->icsk_ack.last_seg_size = skb->len - newtp->tcp_header_len;
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index 892c86657fbc2..4d40615dc8fc2 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -74,11 +74,7 @@ u32 tcp_clamp_probe0_to_user_timeout(const struct sock *sk, u32 when)
+ 
+ static void tcp_write_err(struct sock *sk)
+ {
+-	WRITE_ONCE(sk->sk_err, READ_ONCE(sk->sk_err_soft) ? : ETIMEDOUT);
+-	sk_error_report(sk);
+-
+-	tcp_write_queue_purge(sk);
+-	tcp_done(sk);
++	tcp_done_with_error(sk, READ_ONCE(sk->sk_err_soft) ? : ETIMEDOUT);
+ 	__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTONTIMEOUT);
+ }
+ 
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 5c424a0e7232f..4f2c5cc31015e 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -1873,7 +1873,8 @@ int ipv6_dev_get_saddr(struct net *net, const struct net_device *dst_dev,
+ 							    master, &dst,
+ 							    scores, hiscore_idx);
+ 
+-			if (scores[hiscore_idx].ifa)
++			if (scores[hiscore_idx].ifa &&
++			    scores[hiscore_idx].scopedist >= 0)
+ 				goto out;
+ 		}
+ 
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 34a9a5b9ed00b..3920e8aa1031e 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -256,8 +256,7 @@ static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb)
+ #else
+ static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb)
+ {
+-	kfree_skb(skb);
+-
++	WARN_ON(1);
+ 	return -EOPNOTSUPP;
+ }
+ #endif
+diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
+index 527b7caddbc68..919ebfabbe4ee 100644
+--- a/net/ipv6/esp6_offload.c
++++ b/net/ipv6/esp6_offload.c
+@@ -83,6 +83,13 @@ static struct sk_buff *esp6_gro_receive(struct list_head *head,
+ 		x = xfrm_state_lookup(dev_net(skb->dev), skb->mark,
+ 				      (xfrm_address_t *)&ipv6_hdr(skb)->daddr,
+ 				      spi, IPPROTO_ESP, AF_INET6);
++
++		if (unlikely(x && x->dir && x->dir != XFRM_SA_DIR_IN)) {
++			/* non-offload path will record the error and audit log */
++			xfrm_state_put(x);
++			x = NULL;
++		}
++
+ 		if (!x)
+ 			goto out_reset;
+ 
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 27d8725445e35..784424ac41477 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1124,6 +1124,7 @@ static int ip6_dst_lookup_tail(struct net *net, const struct sock *sk,
+ 		from = rt ? rcu_dereference(rt->from) : NULL;
+ 		err = ip6_route_get_saddr(net, from, &fl6->daddr,
+ 					  sk ? READ_ONCE(inet6_sk(sk)->srcprefs) : 0,
++					  fl6->flowi6_l3mdev,
+ 					  &fl6->saddr);
+ 		rcu_read_unlock();
+ 
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 8d72ca0b086d7..c9a9506b714d7 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -5689,7 +5689,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 				goto nla_put_failure;
+ 	} else if (dest) {
+ 		struct in6_addr saddr_buf;
+-		if (ip6_route_get_saddr(net, rt, dest, 0, &saddr_buf) == 0 &&
++		if (ip6_route_get_saddr(net, rt, dest, 0, 0, &saddr_buf) == 0 &&
+ 		    nla_put_in6_addr(skb, RTA_PREFSRC, &saddr_buf))
+ 			goto nla_put_failure;
+ 	}
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 729faf8bd366a..3385faf1d5dcb 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -490,14 +490,10 @@ static int tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 
+ 		ipv6_icmp_error(sk, skb, err, th->dest, ntohl(info), (u8 *)th);
+ 
+-		if (!sock_owned_by_user(sk)) {
+-			WRITE_ONCE(sk->sk_err, err);
+-			sk_error_report(sk);		/* Wake people up to see the error (see connect in sock.c) */
+-
+-			tcp_done(sk);
+-		} else {
++		if (!sock_owned_by_user(sk))
++			tcp_done_with_error(sk, err);
++		else
+ 			WRITE_ONCE(sk->sk_err_soft, err);
+-		}
+ 		goto out;
+ 	case TCP_LISTEN:
+ 		break;
+diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
+index 380695fdc32fa..e6a7ff6ca6797 100644
+--- a/net/mac80211/chan.c
++++ b/net/mac80211/chan.c
+@@ -775,13 +775,24 @@ void ieee80211_recalc_chanctx_chantype(struct ieee80211_local *local,
+ 
+ 	/* TDLS peers can sometimes affect the chandef width */
+ 	list_for_each_entry(sta, &local->sta_list, list) {
++		struct ieee80211_sub_if_data *sdata = sta->sdata;
+ 		struct ieee80211_chan_req tdls_chanreq = {};
++		int tdls_link_id;
++
+ 		if (!sta->uploaded ||
+ 		    !test_sta_flag(sta, WLAN_STA_TDLS_WIDER_BW) ||
+ 		    !test_sta_flag(sta, WLAN_STA_AUTHORIZED) ||
+ 		    !sta->tdls_chandef.chan)
+ 			continue;
+ 
++		tdls_link_id = ieee80211_tdls_sta_link_id(sta);
++		link = sdata_dereference(sdata->link[tdls_link_id], sdata);
++		if (!link)
++			continue;
++
++		if (rcu_access_pointer(link->conf->chanctx_conf) != conf)
++			continue;
++
+ 		tdls_chanreq.oper = sta->tdls_chandef;
+ 
+ 		/* note this always fills and returns &tmp if compat */
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 0965ad11ec747..7ba329ebdda91 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -148,7 +148,7 @@ static u32 ieee80211_calc_hw_conf_chan(struct ieee80211_local *local,
+ 	offchannel_flag ^= local->hw.conf.flags & IEEE80211_CONF_OFFCHANNEL;
+ 
+ 	/* force it also for scanning, since drivers might config differently */
+-	if (offchannel_flag || local->scanning ||
++	if (offchannel_flag || local->scanning || local->in_reconfig ||
+ 	    !cfg80211_chandef_identical(&local->hw.conf.chandef, &chandef)) {
+ 		local->hw.conf.chandef = chandef;
+ 		changed |= IEEE80211_CONF_CHANGE_CHANNEL;
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index a5f2d3cfe60d2..ad2ce9c92ba8a 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -3287,8 +3287,17 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
+ 	       sizeof(sdata->u.mgd.ttlm_info));
+ 	wiphy_delayed_work_cancel(sdata->local->hw.wiphy, &ifmgd->ttlm_work);
+ 
++	memset(&sdata->vif.neg_ttlm, 0, sizeof(sdata->vif.neg_ttlm));
+ 	wiphy_delayed_work_cancel(sdata->local->hw.wiphy,
+ 				  &ifmgd->neg_ttlm_timeout_work);
++
++	sdata->u.mgd.removed_links = 0;
++	wiphy_delayed_work_cancel(sdata->local->hw.wiphy,
++				  &sdata->u.mgd.ml_reconf_work);
++
++	wiphy_work_cancel(sdata->local->hw.wiphy,
++			  &ifmgd->teardown_ttlm_work);
++
+ 	ieee80211_vif_set_links(sdata, 0, 0);
+ 
+ 	ifmgd->mcast_seq_last = IEEE80211_SN_MODULO;
+@@ -6834,7 +6843,7 @@ static void ieee80211_teardown_ttlm_work(struct wiphy *wiphy,
+ 	u16 new_dormant_links;
+ 	struct ieee80211_sub_if_data *sdata =
+ 		container_of(work, struct ieee80211_sub_if_data,
+-			     u.mgd.neg_ttlm_timeout_work.work);
++			     u.mgd.teardown_ttlm_work);
+ 
+ 	if (!sdata->vif.neg_ttlm.valid)
+ 		return;
+@@ -8704,12 +8713,8 @@ void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata)
+ 			  &ifmgd->beacon_connection_loss_work);
+ 	wiphy_work_cancel(sdata->local->hw.wiphy,
+ 			  &ifmgd->csa_connection_drop_work);
+-	wiphy_work_cancel(sdata->local->hw.wiphy,
+-			  &ifmgd->teardown_ttlm_work);
+ 	wiphy_delayed_work_cancel(sdata->local->hw.wiphy,
+ 				  &ifmgd->tdls_peer_del_work);
+-	wiphy_delayed_work_cancel(sdata->local->hw.wiphy,
+-				  &ifmgd->ml_reconf_work);
+ 	wiphy_delayed_work_cancel(sdata->local->hw.wiphy, &ifmgd->ttlm_work);
+ 	wiphy_delayed_work_cancel(sdata->local->hw.wiphy,
+ 				  &ifmgd->neg_ttlm_timeout_work);
+diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
+index bd5e2f7146f67..9195d5a2de0a8 100644
+--- a/net/mac80211/sta_info.h
++++ b/net/mac80211/sta_info.h
+@@ -727,6 +727,12 @@ struct sta_info {
+ 	struct ieee80211_sta sta;
+ };
+ 
++static inline int ieee80211_tdls_sta_link_id(struct sta_info *sta)
++{
++	/* TDLS STA can only have a single link */
++	return sta->sta.valid_links ? __ffs(sta->sta.valid_links) : 0;
++}
++
+ static inline enum nl80211_plink_state sta_plink_state(struct sta_info *sta)
+ {
+ #ifdef CONFIG_MAC80211_MESH
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index f861d99e5f055..72a9ba8bc5fd9 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -2774,8 +2774,7 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata,
+ 
+ 		if (tdls_peer) {
+ 			/* For TDLS only one link can be valid with peer STA */
+-			int tdls_link_id = sta->sta.valid_links ?
+-					   __ffs(sta->sta.valid_links) : 0;
++			int tdls_link_id = ieee80211_tdls_sta_link_id(sta);
+ 			struct ieee80211_link_data *link;
+ 
+ 			/* DA SA BSSID */
+@@ -3101,8 +3100,7 @@ void ieee80211_check_fast_xmit(struct sta_info *sta)
+ 	case NL80211_IFTYPE_STATION:
+ 		if (test_sta_flag(sta, WLAN_STA_TDLS_PEER)) {
+ 			/* For TDLS only one link can be valid with peer STA */
+-			int tdls_link_id = sta->sta.valid_links ?
+-					   __ffs(sta->sta.valid_links) : 0;
++			int tdls_link_id = ieee80211_tdls_sta_link_id(sta);
+ 			struct ieee80211_link_data *link;
+ 
+ 			/* DA SA BSSID */
+diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
+index b6d0dcf3a5c34..f4384e147ee16 100644
+--- a/net/netfilter/ipvs/ip_vs_ctl.c
++++ b/net/netfilter/ipvs/ip_vs_ctl.c
+@@ -1459,18 +1459,18 @@ ip_vs_add_service(struct netns_ipvs *ipvs, struct ip_vs_service_user_kern *u,
+ 	if (ret < 0)
+ 		goto out_err;
+ 
+-	/* Bind the ct retriever */
+-	RCU_INIT_POINTER(svc->pe, pe);
+-	pe = NULL;
+-
+ 	/* Update the virtual service counters */
+ 	if (svc->port == FTPPORT)
+ 		atomic_inc(&ipvs->ftpsvc_counter);
+ 	else if (svc->port == 0)
+ 		atomic_inc(&ipvs->nullsvc_counter);
+-	if (svc->pe && svc->pe->conn_out)
++	if (pe && pe->conn_out)
+ 		atomic_inc(&ipvs->conn_out_counter);
+ 
++	/* Bind the ct retriever */
++	RCU_INIT_POINTER(svc->pe, pe);
++	pe = NULL;
++
+ 	/* Count only IPv4 services for old get/setsockopt interface */
+ 	if (svc->af == AF_INET)
+ 		ipvs->num_services++;
+diff --git a/net/netfilter/ipvs/ip_vs_proto_sctp.c b/net/netfilter/ipvs/ip_vs_proto_sctp.c
+index 1e689c7141271..83e452916403d 100644
+--- a/net/netfilter/ipvs/ip_vs_proto_sctp.c
++++ b/net/netfilter/ipvs/ip_vs_proto_sctp.c
+@@ -126,7 +126,7 @@ sctp_snat_handler(struct sk_buff *skb, struct ip_vs_protocol *pp,
+ 	if (sctph->source != cp->vport || payload_csum ||
+ 	    skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		sctph->source = cp->vport;
+-		if (!skb_is_gso(skb) || !skb_is_gso_sctp(skb))
++		if (!skb_is_gso(skb))
+ 			sctp_nat_csum(skb, sctph, sctphoff);
+ 	} else {
+ 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+@@ -175,7 +175,7 @@ sctp_dnat_handler(struct sk_buff *skb, struct ip_vs_protocol *pp,
+ 	    (skb->ip_summed == CHECKSUM_PARTIAL &&
+ 	     !(skb_dst(skb)->dev->features & NETIF_F_SCTP_CRC))) {
+ 		sctph->dest = cp->dport;
+-		if (!skb_is_gso(skb) || !skb_is_gso_sctp(skb))
++		if (!skb_is_gso(skb))
+ 			sctp_nat_csum(skb, sctph, sctphoff);
+ 	} else if (skb->ip_summed != CHECKSUM_PARTIAL) {
+ 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 3b846cbdc050d..4cbf71d0786b0 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -3420,7 +3420,8 @@ static int ctnetlink_del_expect(struct sk_buff *skb,
+ 
+ 		if (cda[CTA_EXPECT_ID]) {
+ 			__be32 id = nla_get_be32(cda[CTA_EXPECT_ID]);
+-			if (ntohl(id) != (u32)(unsigned long)exp) {
++
++			if (id != nf_expect_get_id(exp)) {
+ 				nf_ct_expect_put(exp);
+ 				return -ENOENT;
+ 			}
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 15a236bebb46a..eb4c4a4ac7ace 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -434,7 +434,7 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 	res_map  = scratch->map + (map_index ? m->bsize_max : 0);
+ 	fill_map = scratch->map + (map_index ? 0 : m->bsize_max);
+ 
+-	memset(res_map, 0xff, m->bsize_max * sizeof(*res_map));
++	pipapo_resmap_init(m, res_map);
+ 
+ 	nft_pipapo_for_each_field(f, i, m) {
+ 		bool last = i == m->field_count - 1;
+@@ -542,7 +542,7 @@ static struct nft_pipapo_elem *pipapo_get(const struct net *net,
+ 		goto out;
+ 	}
+ 
+-	memset(res_map, 0xff, m->bsize_max * sizeof(*res_map));
++	pipapo_resmap_init(m, res_map);
+ 
+ 	nft_pipapo_for_each_field(f, i, m) {
+ 		bool last = i == m->field_count - 1;
+diff --git a/net/netfilter/nft_set_pipapo.h b/net/netfilter/nft_set_pipapo.h
+index 0d2e40e10f7f5..4a2ff85ce1c43 100644
+--- a/net/netfilter/nft_set_pipapo.h
++++ b/net/netfilter/nft_set_pipapo.h
+@@ -278,4 +278,25 @@ static u64 pipapo_estimate_size(const struct nft_set_desc *desc)
+ 	return size;
+ }
+ 
++/**
++ * pipapo_resmap_init() - Initialise result map before first use
++ * @m:		Matching data, including mapping table
++ * @res_map:	Result map
++ *
++ * Initialize all bits covered by the first field to one, so that after
++ * the first step, only the matching bits of the first bit group remain.
++ *
++ * If other fields have a large bitmap, set the remainder of res_map to 0.
++ */
++static inline void pipapo_resmap_init(const struct nft_pipapo_match *m, unsigned long *res_map)
++{
++	const struct nft_pipapo_field *f = m->f;
++	int i;
++
++	for (i = 0; i < f->bsize; i++)
++		res_map[i] = ULONG_MAX;
++
++	for (i = f->bsize; i < m->bsize_max; i++)
++		res_map[i] = 0ul;
++}
+ #endif /* _NFT_SET_PIPAPO_H */
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index d08407d589eac..b8d3c3213efee 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -1036,6 +1036,7 @@ static int nft_pipapo_avx2_lookup_8b_16(unsigned long *map, unsigned long *fill,
+ 
+ /**
+  * nft_pipapo_avx2_lookup_slow() - Fallback function for uncommon field sizes
++ * @mdata:	Matching data, including mapping table
+  * @map:	Previous match result, used as initial bitmap
+  * @fill:	Destination bitmap to be filled with current match result
+  * @f:		Field, containing lookup and mapping tables
+@@ -1051,7 +1052,8 @@ static int nft_pipapo_avx2_lookup_8b_16(unsigned long *map, unsigned long *fill,
+  * Return: -1 on no match, rule index of match if @last, otherwise first long
+  * word index to be checked next (i.e. first filled word).
+  */
+-static int nft_pipapo_avx2_lookup_slow(unsigned long *map, unsigned long *fill,
++static int nft_pipapo_avx2_lookup_slow(const struct nft_pipapo_match *mdata,
++					unsigned long *map, unsigned long *fill,
+ 					const struct nft_pipapo_field *f,
+ 					int offset, const u8 *pkt,
+ 					bool first, bool last)
+@@ -1060,7 +1062,7 @@ static int nft_pipapo_avx2_lookup_slow(unsigned long *map, unsigned long *fill,
+ 	int i, ret = -1, b;
+ 
+ 	if (first)
+-		memset(map, 0xff, bsize * sizeof(*map));
++		pipapo_resmap_init(mdata, map);
+ 
+ 	for (i = offset; i < bsize; i++) {
+ 		if (f->bb == 8)
+@@ -1137,8 +1139,14 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	bool map_index;
+ 	int i, ret = 0;
+ 
+-	if (unlikely(!irq_fpu_usable()))
+-		return nft_pipapo_lookup(net, set, key, ext);
++	local_bh_disable();
++
++	if (unlikely(!irq_fpu_usable())) {
++		bool fallback_res = nft_pipapo_lookup(net, set, key, ext);
++
++		local_bh_enable();
++		return fallback_res;
++	}
+ 
+ 	m = rcu_dereference(priv->match);
+ 
+@@ -1153,6 +1161,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	scratch = *raw_cpu_ptr(m->scratch);
+ 	if (unlikely(!scratch)) {
+ 		kernel_fpu_end();
++		local_bh_enable();
+ 		return false;
+ 	}
+ 
+@@ -1186,7 +1195,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 			} else if (f->groups == 16) {
+ 				NFT_SET_PIPAPO_AVX2_LOOKUP(8, 16);
+ 			} else {
+-				ret = nft_pipapo_avx2_lookup_slow(res, fill, f,
++				ret = nft_pipapo_avx2_lookup_slow(m, res, fill, f,
+ 								  ret, rp,
+ 								  first, last);
+ 			}
+@@ -1202,7 +1211,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 			} else if (f->groups == 32) {
+ 				NFT_SET_PIPAPO_AVX2_LOOKUP(4, 32);
+ 			} else {
+-				ret = nft_pipapo_avx2_lookup_slow(res, fill, f,
++				ret = nft_pipapo_avx2_lookup_slow(m, res, fill, f,
+ 								  ret, rp,
+ 								  first, last);
+ 			}
+@@ -1233,6 +1242,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	if (i % 2)
+ 		scratch->map_index = !map_index;
+ 	kernel_fpu_end();
++	local_bh_enable();
+ 
+ 	return ret >= 0;
+ }
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index ea3ebc160e25c..4692a9ef110bb 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -538,6 +538,61 @@ static void *packet_current_frame(struct packet_sock *po,
+ 	return packet_lookup_frame(po, rb, rb->head, status);
+ }
+ 
++static u16 vlan_get_tci(struct sk_buff *skb, struct net_device *dev)
++{
++	u8 *skb_orig_data = skb->data;
++	int skb_orig_len = skb->len;
++	struct vlan_hdr vhdr, *vh;
++	unsigned int header_len;
++
++	if (!dev)
++		return 0;
++
++	/* In the SOCK_DGRAM scenario, skb data starts at the network
++	 * protocol, which is after the VLAN headers. The outer VLAN
++	 * header is at the hard_header_len offset in non-variable
++	 * length link layer headers. If it's a VLAN device, the
++	 * min_header_len should be used to exclude the VLAN header
++	 * size.
++	 */
++	if (dev->min_header_len == dev->hard_header_len)
++		header_len = dev->hard_header_len;
++	else if (is_vlan_dev(dev))
++		header_len = dev->min_header_len;
++	else
++		return 0;
++
++	skb_push(skb, skb->data - skb_mac_header(skb));
++	vh = skb_header_pointer(skb, header_len, sizeof(vhdr), &vhdr);
++	if (skb_orig_data != skb->data) {
++		skb->data = skb_orig_data;
++		skb->len = skb_orig_len;
++	}
++	if (unlikely(!vh))
++		return 0;
++
++	return ntohs(vh->h_vlan_TCI);
++}
++
++static __be16 vlan_get_protocol_dgram(struct sk_buff *skb)
++{
++	__be16 proto = skb->protocol;
++
++	if (unlikely(eth_type_vlan(proto))) {
++		u8 *skb_orig_data = skb->data;
++		int skb_orig_len = skb->len;
++
++		skb_push(skb, skb->data - skb_mac_header(skb));
++		proto = __vlan_get_protocol(skb, proto, NULL);
++		if (skb_orig_data != skb->data) {
++			skb->data = skb_orig_data;
++			skb->len = skb_orig_len;
++		}
++	}
++
++	return proto;
++}
++
+ static void prb_del_retire_blk_timer(struct tpacket_kbdq_core *pkc)
+ {
+ 	del_timer_sync(&pkc->retire_blk_timer);
+@@ -1007,10 +1062,16 @@ static void prb_clear_rxhash(struct tpacket_kbdq_core *pkc,
+ static void prb_fill_vlan_info(struct tpacket_kbdq_core *pkc,
+ 			struct tpacket3_hdr *ppd)
+ {
++	struct packet_sock *po = container_of(pkc, struct packet_sock, rx_ring.prb_bdqc);
++
+ 	if (skb_vlan_tag_present(pkc->skb)) {
+ 		ppd->hv1.tp_vlan_tci = skb_vlan_tag_get(pkc->skb);
+ 		ppd->hv1.tp_vlan_tpid = ntohs(pkc->skb->vlan_proto);
+ 		ppd->tp_status = TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
++	} else if (unlikely(po->sk.sk_type == SOCK_DGRAM && eth_type_vlan(pkc->skb->protocol))) {
++		ppd->hv1.tp_vlan_tci = vlan_get_tci(pkc->skb, pkc->skb->dev);
++		ppd->hv1.tp_vlan_tpid = ntohs(pkc->skb->protocol);
++		ppd->tp_status = TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
+ 	} else {
+ 		ppd->hv1.tp_vlan_tci = 0;
+ 		ppd->hv1.tp_vlan_tpid = 0;
+@@ -2428,6 +2489,10 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ 			h.h2->tp_vlan_tci = skb_vlan_tag_get(skb);
+ 			h.h2->tp_vlan_tpid = ntohs(skb->vlan_proto);
+ 			status |= TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
++		} else if (unlikely(sk->sk_type == SOCK_DGRAM && eth_type_vlan(skb->protocol))) {
++			h.h2->tp_vlan_tci = vlan_get_tci(skb, skb->dev);
++			h.h2->tp_vlan_tpid = ntohs(skb->protocol);
++			status |= TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
+ 		} else {
+ 			h.h2->tp_vlan_tci = 0;
+ 			h.h2->tp_vlan_tpid = 0;
+@@ -2457,7 +2522,8 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	sll->sll_halen = dev_parse_header(skb, sll->sll_addr);
+ 	sll->sll_family = AF_PACKET;
+ 	sll->sll_hatype = dev->type;
+-	sll->sll_protocol = skb->protocol;
++	sll->sll_protocol = (sk->sk_type == SOCK_DGRAM) ?
++		vlan_get_protocol_dgram(skb) : skb->protocol;
+ 	sll->sll_pkttype = skb->pkt_type;
+ 	if (unlikely(packet_sock_flag(po, PACKET_SOCK_ORIGDEV)))
+ 		sll->sll_ifindex = orig_dev->ifindex;
+@@ -3482,7 +3548,8 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 		/* Original length was stored in sockaddr_ll fields */
+ 		origlen = PACKET_SKB_CB(skb)->sa.origlen;
+ 		sll->sll_family = AF_PACKET;
+-		sll->sll_protocol = skb->protocol;
++		sll->sll_protocol = (sock->type == SOCK_DGRAM) ?
++			vlan_get_protocol_dgram(skb) : skb->protocol;
+ 	}
+ 
+ 	sock_recv_cmsgs(msg, sk, skb);
+@@ -3539,6 +3606,21 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 			aux.tp_vlan_tci = skb_vlan_tag_get(skb);
+ 			aux.tp_vlan_tpid = ntohs(skb->vlan_proto);
+ 			aux.tp_status |= TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
++		} else if (unlikely(sock->type == SOCK_DGRAM && eth_type_vlan(skb->protocol))) {
++			struct sockaddr_ll *sll = &PACKET_SKB_CB(skb)->sa.ll;
++			struct net_device *dev;
++
++			rcu_read_lock();
++			dev = dev_get_by_index_rcu(sock_net(sk), sll->sll_ifindex);
++			if (dev) {
++				aux.tp_vlan_tci = vlan_get_tci(skb, dev);
++				aux.tp_vlan_tpid = ntohs(skb->protocol);
++				aux.tp_status |= TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
++			} else {
++				aux.tp_vlan_tci = 0;
++				aux.tp_vlan_tpid = 0;
++			}
++			rcu_read_unlock();
+ 		} else {
+ 			aux.tp_vlan_tci = 0;
+ 			aux.tp_vlan_tpid = 0;
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index fafdb97adfad9..acca3b1a068f0 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -2015,7 +2015,6 @@ int smc_conn_create(struct smc_sock *smc, struct smc_init_info *ini)
+  */
+ static u8 smc_compress_bufsize(int size, bool is_smcd, bool is_rmb)
+ {
+-	const unsigned int max_scat = SG_MAX_SINGLE_ALLOC * PAGE_SIZE;
+ 	u8 compressed;
+ 
+ 	if (size <= SMC_BUF_MIN_SIZE)
+@@ -2025,9 +2024,11 @@ static u8 smc_compress_bufsize(int size, bool is_smcd, bool is_rmb)
+ 	compressed = min_t(u8, ilog2(size) + 1,
+ 			   is_smcd ? SMCD_DMBE_SIZES : SMCR_RMBE_SIZES);
+ 
++#ifdef CONFIG_ARCH_NO_SG_CHAIN
+ 	if (!is_smcd && is_rmb)
+ 		/* RMBs are backed by & limited to max size of scatterlists */
+-		compressed = min_t(u8, compressed, ilog2(max_scat >> 14));
++		compressed = min_t(u8, compressed, ilog2((SG_MAX_SINGLE_ALLOC * PAGE_SIZE) >> 14));
++#endif
+ 
+ 	return compressed;
+ }
+diff --git a/net/sunrpc/auth_gss/gss_krb5_keys.c b/net/sunrpc/auth_gss/gss_krb5_keys.c
+index 06d8ee0db000f..4eb19c3a54c70 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_keys.c
++++ b/net/sunrpc/auth_gss/gss_krb5_keys.c
+@@ -168,7 +168,7 @@ static int krb5_DK(const struct gss_krb5_enctype *gk5e,
+ 		goto err_return;
+ 	blocksize = crypto_sync_skcipher_blocksize(cipher);
+ 	if (crypto_sync_skcipher_setkey(cipher, inkey->data, inkey->len))
+-		goto err_return;
++		goto err_free_cipher;
+ 
+ 	ret = -ENOMEM;
+ 	inblockdata = kmalloc(blocksize, gfp_mask);
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index cfd1b1bf7e351..09f29a95f2bc3 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2326,12 +2326,13 @@ call_transmit_status(struct rpc_task *task)
+ 		task->tk_action = call_transmit;
+ 		task->tk_status = 0;
+ 		break;
+-	case -ECONNREFUSED:
+ 	case -EHOSTDOWN:
+ 	case -ENETDOWN:
+ 	case -EHOSTUNREACH:
+ 	case -ENETUNREACH:
+ 	case -EPERM:
++		break;
++	case -ECONNREFUSED:
+ 		if (RPC_IS_SOFTCONN(task)) {
+ 			if (!task->tk_msg.rpc_proc->p_proc)
+ 				trace_xprt_ping(task->tk_xprt,
+diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
+index ffbf99894970e..47f33bb7bff81 100644
+--- a/net/sunrpc/xprtrdma/frwr_ops.c
++++ b/net/sunrpc/xprtrdma/frwr_ops.c
+@@ -92,7 +92,8 @@ static void frwr_mr_put(struct rpcrdma_mr *mr)
+ 	rpcrdma_mr_push(mr, &mr->mr_req->rl_free_mrs);
+ }
+ 
+-/* frwr_reset - Place MRs back on the free list
++/**
++ * frwr_reset - Place MRs back on @req's free list
+  * @req: request to reset
+  *
+  * Used after a failed marshal. For FRWR, this means the MRs
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 432557a553e7e..a0b071089e159 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -897,6 +897,8 @@ static int rpcrdma_reqs_setup(struct rpcrdma_xprt *r_xprt)
+ 
+ static void rpcrdma_req_reset(struct rpcrdma_req *req)
+ {
++	struct rpcrdma_mr *mr;
++
+ 	/* Credits are valid for only one connection */
+ 	req->rl_slot.rq_cong = 0;
+ 
+@@ -906,7 +908,19 @@ static void rpcrdma_req_reset(struct rpcrdma_req *req)
+ 	rpcrdma_regbuf_dma_unmap(req->rl_sendbuf);
+ 	rpcrdma_regbuf_dma_unmap(req->rl_recvbuf);
+ 
+-	frwr_reset(req);
++	/* The verbs consumer can't know the state of an MR on the
++	 * req->rl_registered list unless a successful completion
++	 * has occurred, so they cannot be re-used.
++	 */
++	while ((mr = rpcrdma_mr_pop(&req->rl_registered))) {
++		struct rpcrdma_buffer *buf = &mr->mr_xprt->rx_buf;
++
++		spin_lock(&buf->rb_lock);
++		list_del(&mr->mr_all);
++		spin_unlock(&buf->rb_lock);
++
++		frwr_mr_release(mr);
++	}
+ }
+ 
+ /* ASSUMPTION: the rb_allreqs list is stable for the duration,
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index b849a3d133a01..439f755399772 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -135,8 +135,11 @@ static int tipc_udp_addr2str(struct tipc_media_addr *a, char *buf, int size)
+ 		snprintf(buf, size, "%pI4:%u", &ua->ipv4, ntohs(ua->port));
+ 	else if (ntohs(ua->proto) == ETH_P_IPV6)
+ 		snprintf(buf, size, "%pI6:%u", &ua->ipv6, ntohs(ua->port));
+-	else
++	else {
+ 		pr_err("Invalid UDP media address\n");
++		return 1;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 142f56770b77f..11cb5badafb6d 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -2667,10 +2667,49 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
+ 
+ static int unix_stream_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
+ {
++	struct unix_sock *u = unix_sk(sk);
++	struct sk_buff *skb;
++	int err;
++
+ 	if (unlikely(READ_ONCE(sk->sk_state) != TCP_ESTABLISHED))
+ 		return -ENOTCONN;
+ 
+-	return unix_read_skb(sk, recv_actor);
++	mutex_lock(&u->iolock);
++	skb = skb_recv_datagram(sk, MSG_DONTWAIT, &err);
++	mutex_unlock(&u->iolock);
++	if (!skb)
++		return err;
++
++#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
++	if (unlikely(skb == READ_ONCE(u->oob_skb))) {
++		bool drop = false;
++
++		unix_state_lock(sk);
++
++		if (sock_flag(sk, SOCK_DEAD)) {
++			unix_state_unlock(sk);
++			kfree_skb(skb);
++			return -ECONNRESET;
++		}
++
++		spin_lock(&sk->sk_receive_queue.lock);
++		if (likely(skb == u->oob_skb)) {
++			WRITE_ONCE(u->oob_skb, NULL);
++			drop = true;
++		}
++		spin_unlock(&sk->sk_receive_queue.lock);
++
++		unix_state_unlock(sk);
++
++		if (drop) {
++			WARN_ON_ONCE(skb_unref(skb));
++			kfree_skb(skb);
++			return -EAGAIN;
++		}
++	}
++#endif
++
++	return recv_actor(sk, skb);
+ }
+ 
+ static int unix_stream_read_generic(struct unix_stream_read_state *state,
+diff --git a/net/unix/unix_bpf.c b/net/unix/unix_bpf.c
+index bd84785bf8d6c..bca2d86ba97d8 100644
+--- a/net/unix/unix_bpf.c
++++ b/net/unix/unix_bpf.c
+@@ -54,6 +54,9 @@ static int unix_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
+ 	struct sk_psock *psock;
+ 	int copied;
+ 
++	if (flags & MSG_OOB)
++		return -EOPNOTSUPP;
++
+ 	if (!len)
+ 		return 0;
+ 
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 72c7bf5585816..0fd075238fc74 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -1208,6 +1208,9 @@ static int nl80211_msg_put_channel(struct sk_buff *msg, struct wiphy *wiphy,
+ 		if ((chan->flags & IEEE80211_CHAN_NO_6GHZ_AFC_CLIENT) &&
+ 		    nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_6GHZ_AFC_CLIENT))
+ 			goto nla_put_failure;
++		if ((chan->flags & IEEE80211_CHAN_CAN_MONITOR) &&
++		    nla_put_flag(msg, NL80211_FREQUENCY_ATTR_CAN_MONITOR))
++			goto nla_put_failure;
+ 	}
+ 
+ 	if (nla_put_u32(msg, NL80211_FREQUENCY_ATTR_MAX_TX_POWER,
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 082c6f9c5416e..af6ec719567fc 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1504,7 +1504,7 @@ static u32 cfg80211_calculate_bitrate_he(struct rate_info *rate)
+ 		  5120, /*  0.833333... */
+ 	};
+ 	u32 rates_160M[3] = { 960777777, 907400000, 816666666 };
+-	u32 rates_969[3] =  { 480388888, 453700000, 408333333 };
++	u32 rates_996[3] =  { 480388888, 453700000, 408333333 };
+ 	u32 rates_484[3] =  { 229411111, 216666666, 195000000 };
+ 	u32 rates_242[3] =  { 114711111, 108333333,  97500000 };
+ 	u32 rates_106[3] =  {  40000000,  37777777,  34000000 };
+@@ -1524,12 +1524,14 @@ static u32 cfg80211_calculate_bitrate_he(struct rate_info *rate)
+ 	if (WARN_ON_ONCE(rate->nss < 1 || rate->nss > 8))
+ 		return 0;
+ 
+-	if (rate->bw == RATE_INFO_BW_160)
++	if (rate->bw == RATE_INFO_BW_160 ||
++	    (rate->bw == RATE_INFO_BW_HE_RU &&
++	     rate->he_ru_alloc == NL80211_RATE_INFO_HE_RU_ALLOC_2x996))
+ 		result = rates_160M[rate->he_gi];
+ 	else if (rate->bw == RATE_INFO_BW_80 ||
+ 		 (rate->bw == RATE_INFO_BW_HE_RU &&
+ 		  rate->he_ru_alloc == NL80211_RATE_INFO_HE_RU_ALLOC_996))
+-		result = rates_969[rate->he_gi];
++		result = rates_996[rate->he_gi];
+ 	else if (rate->bw == RATE_INFO_BW_40 ||
+ 		 (rate->bw == RATE_INFO_BW_HE_RU &&
+ 		  rate->he_ru_alloc == NL80211_RATE_INFO_HE_RU_ALLOC_484))
+diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
+index caa340134b0e1..9f76ca591d54f 100644
+--- a/net/xdp/xdp_umem.c
++++ b/net/xdp/xdp_umem.c
+@@ -151,6 +151,7 @@ static int xdp_umem_account_pages(struct xdp_umem *umem)
+ #define XDP_UMEM_FLAGS_VALID ( \
+ 		XDP_UMEM_UNALIGNED_CHUNK_FLAG | \
+ 		XDP_UMEM_TX_SW_CSUM | \
++		XDP_UMEM_TX_METADATA_LEN | \
+ 	0)
+ 
+ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+@@ -204,8 +205,11 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ 	if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
+ 		return -EINVAL;
+ 
+-	if (mr->tx_metadata_len >= 256 || mr->tx_metadata_len % 8)
+-		return -EINVAL;
++	if (mr->flags & XDP_UMEM_TX_METADATA_LEN) {
++		if (mr->tx_metadata_len >= 256 || mr->tx_metadata_len % 8)
++			return -EINVAL;
++		umem->tx_metadata_len = mr->tx_metadata_len;
++	}
+ 
+ 	umem->size = size;
+ 	umem->headroom = headroom;
+@@ -215,7 +219,6 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ 	umem->pgs = NULL;
+ 	umem->user = NULL;
+ 	umem->flags = mr->flags;
+-	umem->tx_metadata_len = mr->tx_metadata_len;
+ 
+ 	INIT_LIST_HEAD(&umem->xsk_dma_list);
+ 	refcount_set(&umem->users, 1);
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index d2ea18dcb0cb5..e95462b982b0f 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -474,11 +474,6 @@ int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
+ 	if (encap_type < 0 || (xo && xo->flags & XFRM_GRO)) {
+ 		x = xfrm_input_state(skb);
+ 
+-		if (unlikely(x->dir && x->dir != XFRM_SA_DIR_IN)) {
+-			XFRM_INC_STATS(net, LINUX_MIB_XFRMINSTATEDIRERROR);
+-			goto drop;
+-		}
+-
+ 		if (unlikely(x->km.state != XFRM_STATE_VALID)) {
+ 			if (x->km.state == XFRM_STATE_ACQ)
+ 				XFRM_INC_STATS(net, LINUX_MIB_XFRMACQUIREERROR);
+@@ -585,8 +580,11 @@ int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
+ 		}
+ 
+ 		if (unlikely(x->dir && x->dir != XFRM_SA_DIR_IN)) {
++			secpath_reset(skb);
+ 			XFRM_INC_STATS(net, LINUX_MIB_XFRMINSTATEDIRERROR);
++			xfrm_audit_state_notfound(skb, family, spi, seq);
+ 			xfrm_state_put(x);
++			x = NULL;
+ 			goto drop;
+ 		}
+ 
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 66e07de2de35c..56b88ad88db6f 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -452,6 +452,8 @@ EXPORT_SYMBOL(xfrm_policy_destroy);
+ 
+ static void xfrm_policy_kill(struct xfrm_policy *policy)
+ {
++	xfrm_dev_policy_delete(policy);
++
+ 	write_lock_bh(&policy->lock);
+ 	policy->walk.dead = 1;
+ 	write_unlock_bh(&policy->lock);
+@@ -1850,7 +1852,6 @@ int xfrm_policy_flush(struct net *net, u8 type, bool task_valid)
+ 
+ 		__xfrm_policy_unlink(pol, dir);
+ 		spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+-		xfrm_dev_policy_delete(pol);
+ 		cnt++;
+ 		xfrm_audit_policy_delete(pol, 1, task_valid);
+ 		xfrm_policy_kill(pol);
+@@ -1891,7 +1892,6 @@ int xfrm_dev_policy_flush(struct net *net, struct net_device *dev,
+ 
+ 		__xfrm_policy_unlink(pol, dir);
+ 		spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+-		xfrm_dev_policy_delete(pol);
+ 		cnt++;
+ 		xfrm_audit_policy_delete(pol, 1, task_valid);
+ 		xfrm_policy_kill(pol);
+@@ -2342,7 +2342,6 @@ int xfrm_policy_delete(struct xfrm_policy *pol, int dir)
+ 	pol = __xfrm_policy_unlink(pol, dir);
+ 	spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+ 	if (pol) {
+-		xfrm_dev_policy_delete(pol);
+ 		xfrm_policy_kill(pol);
+ 		return 0;
+ 	}
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 649bb739df0dd..67b2a399a48a7 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -49,6 +49,7 @@ static struct kmem_cache *xfrm_state_cache __ro_after_init;
+ 
+ static DECLARE_WORK(xfrm_state_gc_work, xfrm_state_gc_task);
+ static HLIST_HEAD(xfrm_state_gc_list);
++static HLIST_HEAD(xfrm_state_dev_gc_list);
+ 
+ static inline bool xfrm_state_hold_rcu(struct xfrm_state __rcu *x)
+ {
+@@ -214,6 +215,7 @@ static DEFINE_SPINLOCK(xfrm_state_afinfo_lock);
+ static struct xfrm_state_afinfo __rcu *xfrm_state_afinfo[NPROTO];
+ 
+ static DEFINE_SPINLOCK(xfrm_state_gc_lock);
++static DEFINE_SPINLOCK(xfrm_state_dev_gc_lock);
+ 
+ int __xfrm_state_delete(struct xfrm_state *x);
+ 
+@@ -683,6 +685,41 @@ struct xfrm_state *xfrm_state_alloc(struct net *net)
+ }
+ EXPORT_SYMBOL(xfrm_state_alloc);
+ 
++#ifdef CONFIG_XFRM_OFFLOAD
++void xfrm_dev_state_delete(struct xfrm_state *x)
++{
++	struct xfrm_dev_offload *xso = &x->xso;
++	struct net_device *dev = READ_ONCE(xso->dev);
++
++	if (dev) {
++		dev->xfrmdev_ops->xdo_dev_state_delete(x);
++		spin_lock_bh(&xfrm_state_dev_gc_lock);
++		hlist_add_head(&x->dev_gclist, &xfrm_state_dev_gc_list);
++		spin_unlock_bh(&xfrm_state_dev_gc_lock);
++	}
++}
++EXPORT_SYMBOL_GPL(xfrm_dev_state_delete);
++
++void xfrm_dev_state_free(struct xfrm_state *x)
++{
++	struct xfrm_dev_offload *xso = &x->xso;
++	struct net_device *dev = READ_ONCE(xso->dev);
++
++	if (dev && dev->xfrmdev_ops) {
++		spin_lock_bh(&xfrm_state_dev_gc_lock);
++		if (!hlist_unhashed(&x->dev_gclist))
++			hlist_del(&x->dev_gclist);
++		spin_unlock_bh(&xfrm_state_dev_gc_lock);
++
++		if (dev->xfrmdev_ops->xdo_dev_state_free)
++			dev->xfrmdev_ops->xdo_dev_state_free(x);
++		WRITE_ONCE(xso->dev, NULL);
++		xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
++		netdev_put(dev, &xso->dev_tracker);
++	}
++}
++#endif
++
+ void __xfrm_state_destroy(struct xfrm_state *x, bool sync)
+ {
+ 	WARN_ON(x->km.state != XFRM_STATE_DEAD);
+@@ -848,6 +885,9 @@ EXPORT_SYMBOL(xfrm_state_flush);
+ 
+ int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_valid)
+ {
++	struct xfrm_state *x;
++	struct hlist_node *tmp;
++	struct xfrm_dev_offload *xso;
+ 	int i, err = 0, cnt = 0;
+ 
+ 	spin_lock_bh(&net->xfrm.xfrm_state_lock);
+@@ -857,8 +897,6 @@ int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_vali
+ 
+ 	err = -ESRCH;
+ 	for (i = 0; i <= net->xfrm.state_hmask; i++) {
+-		struct xfrm_state *x;
+-		struct xfrm_dev_offload *xso;
+ restart:
+ 		hlist_for_each_entry(x, net->xfrm.state_bydst+i, bydst) {
+ 			xso = &x->xso;
+@@ -868,6 +906,8 @@ int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_vali
+ 				spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+ 
+ 				err = xfrm_state_delete(x);
++				xfrm_dev_state_free(x);
++
+ 				xfrm_audit_state_delete(x, err ? 0 : 1,
+ 							task_valid);
+ 				xfrm_state_put(x);
+@@ -884,6 +924,24 @@ int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_vali
+ 
+ out:
+ 	spin_unlock_bh(&net->xfrm.xfrm_state_lock);
++
++	spin_lock_bh(&xfrm_state_dev_gc_lock);
++restart_gc:
++	hlist_for_each_entry_safe(x, tmp, &xfrm_state_dev_gc_list, dev_gclist) {
++		xso = &x->xso;
++
++		if (xso->dev == dev) {
++			spin_unlock_bh(&xfrm_state_dev_gc_lock);
++			xfrm_dev_state_free(x);
++			spin_lock_bh(&xfrm_state_dev_gc_lock);
++			goto restart_gc;
++		}
++
++	}
++	spin_unlock_bh(&xfrm_state_dev_gc_lock);
++
++	xfrm_flush_gc();
++
+ 	return err;
+ }
+ EXPORT_SYMBOL(xfrm_dev_state_flush);
+@@ -1273,8 +1331,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ 			xso->dev = xdo->dev;
+ 			xso->real_dev = xdo->real_dev;
+ 			xso->flags = XFRM_DEV_OFFLOAD_FLAG_ACQ;
+-			netdev_tracker_alloc(xso->dev, &xso->dev_tracker,
+-					     GFP_ATOMIC);
++			netdev_hold(xso->dev, &xso->dev_tracker, GFP_ATOMIC);
+ 			error = xso->dev->xfrmdev_ops->xdo_dev_state_add(x, NULL);
+ 			if (error) {
+ 				xso->dir = 0;
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index e83c687bd64ee..77355422ce82a 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -2455,7 +2455,6 @@ static int xfrm_get_policy(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 					    NETLINK_CB(skb).portid);
+ 		}
+ 	} else {
+-		xfrm_dev_policy_delete(xp);
+ 		xfrm_audit_policy_delete(xp, err ? 0 : 1, true);
+ 
+ 		if (err != 0)
+diff --git a/scripts/Kconfig.include b/scripts/Kconfig.include
+index 3ee8ecfb8c044..3500a3d62f0df 100644
+--- a/scripts/Kconfig.include
++++ b/scripts/Kconfig.include
+@@ -33,7 +33,8 @@ ld-option = $(success,$(LD) -v $(1))
+ 
+ # $(as-instr,<instr>)
+ # Return y if the assembler supports <instr>, n otherwise
+-as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -Wa$(comma)--fatal-warnings -c -x assembler-with-cpp -o /dev/null -)
++as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) $(2) -Wa$(comma)--fatal-warnings -c -x assembler-with-cpp -o /dev/null -)
++as-instr64 = $(as-instr,$(1),$(m64-flag))
+ 
+ # check if $(CC) and $(LD) exist
+ $(error-if,$(failure,command -v $(CC)),C compiler '$(CC)' not found)
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index 9f06f6aaf7fcb..7f8ec77bf35c9 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -407,8 +407,12 @@ cmd_dtc = $(HOSTCC) -E $(dtc_cpp_flags) -x assembler-with-cpp -o $(dtc-tmp) $< ;
+ 		-d $(depfile).dtc.tmp $(dtc-tmp) ; \
+ 	cat $(depfile).pre.tmp $(depfile).dtc.tmp > $(depfile)
+ 
++# NOTE:
++# Do not replace $(filter %.dtb %.dtbo, $^) with $(real-prereqs). When a single
++# DTB is turned into a multi-blob DTB, $^ will contain header file dependencies
++# recorded in the .*.cmd file.
+ quiet_cmd_fdtoverlay = DTOVL   $@
+-      cmd_fdtoverlay = $(objtree)/scripts/dtc/fdtoverlay -o $@ -i $(real-prereqs)
++      cmd_fdtoverlay = $(objtree)/scripts/dtc/fdtoverlay -o $@ -i $(filter %.dtb %.dtbo, $^)
+ 
+ $(multi-dtb-y): FORCE
+ 	$(call if_changed,fdtoverlay)
+diff --git a/scripts/gcc-x86_32-has-stack-protector.sh b/scripts/gcc-x86_32-has-stack-protector.sh
+index 825c75c5b7150..9459ca4f0f11f 100755
+--- a/scripts/gcc-x86_32-has-stack-protector.sh
++++ b/scripts/gcc-x86_32-has-stack-protector.sh
+@@ -5,4 +5,4 @@
+ # -mstack-protector-guard-reg, added by
+ # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81708
+ 
+-echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -m32 -O0 -fstack-protector -mstack-protector-guard-reg=fs -mstack-protector-guard-symbol=__stack_chk_guard - -o - 2> /dev/null | grep -q "%fs"
++echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -m32 -O0 -fstack-protector -mstack-protector-guard-reg=fs -mstack-protector-guard-symbol=__stack_chk_guard - -o - 2> /dev/null | grep -q "%fs"
+diff --git a/scripts/gcc-x86_64-has-stack-protector.sh b/scripts/gcc-x86_64-has-stack-protector.sh
+index 75e4e22b986ad..f680bb01aeeb3 100755
+--- a/scripts/gcc-x86_64-has-stack-protector.sh
++++ b/scripts/gcc-x86_64-has-stack-protector.sh
+@@ -1,4 +1,4 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+ 
+-echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -m64 -O0 -mcmodel=kernel -fno-PIE -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
++echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -m64 -O0 -mcmodel=kernel -fno-PIE -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
+diff --git a/scripts/syscalltbl.sh b/scripts/syscalltbl.sh
+index 6abe143889ef6..6a903b87a7c21 100755
+--- a/scripts/syscalltbl.sh
++++ b/scripts/syscalltbl.sh
+@@ -54,7 +54,7 @@ nxt=0
+ 
+ grep -E "^[0-9]+[[:space:]]+$abis" "$infile" | {
+ 
+-	while read nr abi name native compat ; do
++	while read nr abi name native compat noreturn; do
+ 
+ 		if [ $nxt -gt $nr ]; then
+ 			echo "error: $infile: syscall table is not sorted or duplicates the same syscall number" >&2
+@@ -66,7 +66,21 @@ grep -E "^[0-9]+[[:space:]]+$abis" "$infile" | {
+ 			nxt=$((nxt + 1))
+ 		done
+ 
+-		if [ -n "$compat" ]; then
++		if [ "$compat" = "-" ]; then
++			unset compat
++		fi
++
++		if [ -n "$noreturn" ]; then
++			if [ "$noreturn" != "noreturn" ]; then
++				echo "error: $infile: invalid string \"$noreturn\" in 'noreturn' column"
++				exit 1
++			fi
++			if [ -n "$compat" ]; then
++				echo "__SYSCALL_COMPAT_NORETURN($nr, $native, $compat)"
++			else
++				echo "__SYSCALL_NORETURN($nr, $native)"
++			fi
++		elif [ -n "$compat" ]; then
+ 			echo "__SYSCALL_WITH_COMPAT($nr, $native, $compat)"
+ 		elif [ -n "$native" ]; then
+ 			echo "__SYSCALL($nr, $native)"
+diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
+index 6239777090c43..4373b914acf20 100644
+--- a/security/apparmor/lsm.c
++++ b/security/apparmor/lsm.c
+@@ -1304,6 +1304,13 @@ static int apparmor_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ 	if (!skb->secmark)
+ 		return 0;
+ 
++	/*
++	 * If we reach here before the socket_post_create hook is called, the
++	 * label is still NULL, so drop the packet.
++	 */
++	if (!ctx->label)
++		return -EACCES;
++
+ 	return apparmor_secmark_check(ctx->label, OP_RECVMSG, AA_MAY_RECEIVE,
+ 				      skb->secmark, sk);
+ }
+diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
+index 957654d253dd7..14df15e356952 100644
+--- a/security/apparmor/policy.c
++++ b/security/apparmor/policy.c
+@@ -225,7 +225,7 @@ static void aa_free_data(void *ptr, void *arg)
+ {
+ 	struct aa_data *data = ptr;
+ 
+-	kfree_sensitive(data->data);
++	kvfree_sensitive(data->data, data->size);
+ 	kfree_sensitive(data->key);
+ 	kfree_sensitive(data);
+ }
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 5e578ef0ddffb..5a570235427d8 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -747,34 +747,42 @@ static int unpack_pdb(struct aa_ext *e, struct aa_policydb **policy,
+ 			*info = "missing required dfa";
+ 			goto fail;
+ 		}
+-		goto out;
++	} else {
++		/*
++		 * only unpack the following if a dfa is present
++		 *
++		 * sadly start was given different names for file and policydb
++		 * but since it is optional we can try both
++		 */
++		if (!aa_unpack_u32(e, &pdb->start[0], "start"))
++			/* default start state */
++			pdb->start[0] = DFA_START;
++		if (!aa_unpack_u32(e, &pdb->start[AA_CLASS_FILE], "dfa_start")) {
++			/* default start state for xmatch and file dfa */
++			pdb->start[AA_CLASS_FILE] = DFA_START;
++		}	/* setup class index */
++		for (i = AA_CLASS_FILE + 1; i <= AA_CLASS_LAST; i++) {
++			pdb->start[i] = aa_dfa_next(pdb->dfa, pdb->start[0],
++						    i);
++		}
+ 	}
+ 
+ 	/*
+-	 * only unpack the following if a dfa is present
+-	 *
+-	 * sadly start was given different names for file and policydb
+-	 * but since it is optional we can try both
++	 * Unfortunately due to a bug in earlier userspaces, a
++	 * transition table may be present even when the dfa is
++	 * not. For compatibility reasons unpack and discard.
+ 	 */
+-	if (!aa_unpack_u32(e, &pdb->start[0], "start"))
+-		/* default start state */
+-		pdb->start[0] = DFA_START;
+-	if (!aa_unpack_u32(e, &pdb->start[AA_CLASS_FILE], "dfa_start")) {
+-		/* default start state for xmatch and file dfa */
+-		pdb->start[AA_CLASS_FILE] = DFA_START;
+-	}	/* setup class index */
+-	for (i = AA_CLASS_FILE + 1; i <= AA_CLASS_LAST; i++) {
+-		pdb->start[i] = aa_dfa_next(pdb->dfa, pdb->start[0],
+-					       i);
+-	}
+ 	if (!unpack_trans_table(e, &pdb->trans) && required_trans) {
+ 		*info = "failed to unpack profile transition table";
+ 		goto fail;
+ 	}
+ 
++	if (!pdb->dfa && pdb->trans.table)
++		aa_free_str_table(&pdb->trans);
++
+ 	/* TODO: move compat mapping here, requires dfa merging first */
+ 	/* TODO: move verify here, it has to be done after compat mappings */
+-out:
++
+ 	*policy = pdb;
+ 	return 0;
+ 
+@@ -1071,6 +1079,7 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
+ 
+ 			if (rhashtable_insert_fast(profile->data, &data->head,
+ 						   profile->data->p)) {
++				kvfree_sensitive(data->data, data->size);
+ 				kfree_sensitive(data->key);
+ 				kfree_sensitive(data);
+ 				info = "failed to insert data to table";
+diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
+index 4bc3e9398ee3d..ab927a142f515 100644
+--- a/security/keys/keyctl.c
++++ b/security/keys/keyctl.c
+@@ -1694,7 +1694,7 @@ long keyctl_session_to_parent(void)
+ 		goto unlock;
+ 
+ 	/* cancel an already pending keyring replacement */
+-	oldwork = task_work_cancel(parent, key_change_session_keyring);
++	oldwork = task_work_cancel_func(parent, key_change_session_keyring);
+ 
+ 	/* the replacement session keyring is applied just prior to userspace
+ 	 * restarting */
+diff --git a/security/landlock/cred.c b/security/landlock/cred.c
+index 786af18c4a1ca..db9fe7d906ba6 100644
+--- a/security/landlock/cred.c
++++ b/security/landlock/cred.c
+@@ -14,8 +14,8 @@
+ #include "ruleset.h"
+ #include "setup.h"
+ 
+-static int hook_cred_prepare(struct cred *const new,
+-			     const struct cred *const old, const gfp_t gfp)
++static void hook_cred_transfer(struct cred *const new,
++			       const struct cred *const old)
+ {
+ 	struct landlock_ruleset *const old_dom = landlock_cred(old)->domain;
+ 
+@@ -23,6 +23,12 @@ static int hook_cred_prepare(struct cred *const new,
+ 		landlock_get_ruleset(old_dom);
+ 		landlock_cred(new)->domain = old_dom;
+ 	}
++}
++
++static int hook_cred_prepare(struct cred *const new,
++			     const struct cred *const old, const gfp_t gfp)
++{
++	hook_cred_transfer(new, old);
+ 	return 0;
+ }
+ 
+@@ -36,6 +42,7 @@ static void hook_cred_free(struct cred *const cred)
+ 
+ static struct security_hook_list landlock_hooks[] __ro_after_init = {
+ 	LSM_HOOK_INIT(cred_prepare, hook_cred_prepare),
++	LSM_HOOK_INIT(cred_transfer, hook_cred_transfer),
+ 	LSM_HOOK_INIT(cred_free, hook_cred_free),
+ };
+ 
+diff --git a/security/security.c b/security/security.c
+index e5ca08789f741..8cee5b6c6e6d5 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -2278,7 +2278,20 @@ int security_inode_getattr(const struct path *path)
+  * @size: size of xattr value
+  * @flags: flags
+  *
+- * Check permission before setting the extended attributes.
++ * This hook performs the desired permission checks before setting the extended
++ * attributes (xattrs) on @dentry.  It is important to note that we have some
++ * additional logic before the main LSM implementation calls to detect if we
++ * need to perform an additional capability check at the LSM layer.
++ *
++ * Normally we enforce a capability check prior to executing the various LSM
++ * hook implementations, but if an LSM wants to avoid this capability check,
++ * it can register an 'inode_xattr_skipcap' hook and return a value of 1 for
++ * xattrs that it wants to avoid the capability check, leaving the LSM fully
++ * responsible for enforcing the access control for the specific xattr.  If all
++ * of the enabled LSMs refrain from registering an 'inode_xattr_skipcap' hook,
++ * or return a 0 (the default return value), the capability check is still
++ * performed.  If no 'inode_xattr_skipcap' hooks are registered, the capability
++ * check is performed.
+  *
+  * Return: Returns 0 if permission is granted.
+  */
+@@ -2286,20 +2299,20 @@ int security_inode_setxattr(struct mnt_idmap *idmap,
+ 			    struct dentry *dentry, const char *name,
+ 			    const void *value, size_t size, int flags)
+ {
+-	int ret;
++	int rc;
+ 
+ 	if (unlikely(IS_PRIVATE(d_backing_inode(dentry))))
+ 		return 0;
+-	/*
+-	 * SELinux and Smack integrate the cap call,
+-	 * so assume that all LSMs supplying this call do so.
+-	 */
+-	ret = call_int_hook(inode_setxattr, idmap, dentry, name, value, size,
+-			    flags);
+ 
+-	if (ret == 1)
+-		ret = cap_inode_setxattr(dentry, name, value, size, flags);
+-	return ret;
++	/* enforce the capability checks at the lsm layer, if needed */
++	if (!call_int_hook(inode_xattr_skipcap, name)) {
++		rc = cap_inode_setxattr(dentry, name, value, size, flags);
++		if (rc)
++			return rc;
++	}
++
++	return call_int_hook(inode_setxattr, idmap, dentry, name, value, size,
++			     flags);
+ }
+ 
+ /**
+@@ -2452,26 +2465,39 @@ int security_inode_listxattr(struct dentry *dentry)
+  * @dentry: file
+  * @name: xattr name
+  *
+- * Check permission before removing the extended attribute identified by @name
+- * for @dentry.
++ * This hook performs the desired permission checks before removing extended
++ * attributes (xattrs) from @dentry.  It is important to note that we have some
++ * additional logic before the main LSM implementation calls to detect if we
++ * need to perform an additional capability check at the LSM layer.
++ *
++ * Normally we enforce a capability check prior to executing the various LSM
++ * hook implementations, but if an LSM wants to avoid this capability check,
++ * it can register an 'inode_xattr_skipcap' hook and return a value of 1 for
++ * xattrs that it wants to avoid the capability check, leaving the LSM fully
++ * responsible for enforcing the access control for the specific xattr.  If all
++ * of the enabled LSMs refrain from registering an 'inode_xattr_skipcap' hook,
++ * or return a 0 (the default return value), the capability check is still
++ * performed.  If no 'inode_xattr_skipcap' hooks are registered, the capability
++ * check is performed.
+  *
+  * Return: Returns 0 if permission is granted.
+  */
+ int security_inode_removexattr(struct mnt_idmap *idmap,
+ 			       struct dentry *dentry, const char *name)
+ {
+-	int ret;
++	int rc;
+ 
+ 	if (unlikely(IS_PRIVATE(d_backing_inode(dentry))))
+ 		return 0;
+-	/*
+-	 * SELinux and Smack integrate the cap call,
+-	 * so assume that all LSMs supplying this call do so.
+-	 */
+-	ret = call_int_hook(inode_removexattr, idmap, dentry, name);
+-	if (ret == 1)
+-		ret = cap_inode_removexattr(idmap, dentry, name);
+-	return ret;
++
++	/* enforce the capability checks at the lsm layer, if needed */
++	if (!call_int_hook(inode_xattr_skipcap, name)) {
++		rc = cap_inode_removexattr(idmap, dentry, name);
++		if (rc)
++			return rc;
++	}
++
++	return call_int_hook(inode_removexattr, idmap, dentry, name);
+ }
+ 
+ /**
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 7eed331e90f08..55c78c318ccd7 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -3177,6 +3177,23 @@ static bool has_cap_mac_admin(bool audit)
+ 	return true;
+ }
+ 
++/**
++ * selinux_inode_xattr_skipcap - Skip the xattr capability checks?
++ * @name: name of the xattr
++ *
++ * Returns 1 to indicate that SELinux "owns" the access control rights to xattrs
++ * named @name; the LSM layer should avoid enforcing any traditional
++ * capability based access controls on this xattr.  Returns 0 to indicate that
++ * SELinux does not "own" the access control rights to xattrs named @name and is
++ * deferring to the LSM layer for further access controls, including capability
++ * based controls.
++ */
++static int selinux_inode_xattr_skipcap(const char *name)
++{
++	/* require capability check if not a selinux xattr */
++	return !strcmp(name, XATTR_NAME_SELINUX);
++}
++
+ static int selinux_inode_setxattr(struct mnt_idmap *idmap,
+ 				  struct dentry *dentry, const char *name,
+ 				  const void *value, size_t size, int flags)
+@@ -3188,15 +3205,9 @@ static int selinux_inode_setxattr(struct mnt_idmap *idmap,
+ 	u32 newsid, sid = current_sid();
+ 	int rc = 0;
+ 
+-	if (strcmp(name, XATTR_NAME_SELINUX)) {
+-		rc = cap_inode_setxattr(dentry, name, value, size, flags);
+-		if (rc)
+-			return rc;
+-
+-		/* Not an attribute we recognize, so just check the
+-		   ordinary setattr permission. */
++	/* if not a selinux xattr, only check the ordinary setattr perm */
++	if (strcmp(name, XATTR_NAME_SELINUX))
+ 		return dentry_has_perm(current_cred(), dentry, FILE__SETATTR);
+-	}
+ 
+ 	if (!selinux_initialized())
+ 		return (inode_owner_or_capable(idmap, inode) ? 0 : -EPERM);
+@@ -3345,15 +3356,9 @@ static int selinux_inode_listxattr(struct dentry *dentry)
+ static int selinux_inode_removexattr(struct mnt_idmap *idmap,
+ 				     struct dentry *dentry, const char *name)
+ {
+-	if (strcmp(name, XATTR_NAME_SELINUX)) {
+-		int rc = cap_inode_removexattr(idmap, dentry, name);
+-		if (rc)
+-			return rc;
+-
+-		/* Not an attribute we recognize, so just check the
+-		   ordinary setattr permission. */
++	/* if not a selinux xattr, only check the ordinary setattr perm */
++	if (strcmp(name, XATTR_NAME_SELINUX))
+ 		return dentry_has_perm(current_cred(), dentry, FILE__SETATTR);
+-	}
+ 
+ 	if (!selinux_initialized())
+ 		return 0;
+@@ -7175,6 +7180,7 @@ static struct security_hook_list selinux_hooks[] __ro_after_init = {
+ 	LSM_HOOK_INIT(inode_permission, selinux_inode_permission),
+ 	LSM_HOOK_INIT(inode_setattr, selinux_inode_setattr),
+ 	LSM_HOOK_INIT(inode_getattr, selinux_inode_getattr),
++	LSM_HOOK_INIT(inode_xattr_skipcap, selinux_inode_xattr_skipcap),
+ 	LSM_HOOK_INIT(inode_setxattr, selinux_inode_setxattr),
+ 	LSM_HOOK_INIT(inode_post_setxattr, selinux_inode_post_setxattr),
+ 	LSM_HOOK_INIT(inode_getxattr, selinux_inode_getxattr),
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index f5cbec1e6a923..c1fe422cfbe19 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -1282,6 +1282,33 @@ static int smack_inode_getattr(const struct path *path)
+ 	return rc;
+ }
+ 
++/**
++ * smack_inode_xattr_skipcap - Skip the xattr capability checks?
++ * @name: name of the xattr
++ *
++ * Returns 1 to indicate that Smack "owns" the access control rights to xattrs
++ * named @name; the LSM layer should avoid enforcing any traditional
++ * capability based access controls on this xattr.  Returns 0 to indicate that
++ * Smack does not "own" the access control rights to xattrs named @name and is
++ * deferring to the LSM layer for further access controls, including capability
++ * based controls.
++ */
++static int smack_inode_xattr_skipcap(const char *name)
++{
++	if (strncmp(name, XATTR_SMACK_SUFFIX, strlen(XATTR_SMACK_SUFFIX)))
++		return 0;
++
++	if (strcmp(name, XATTR_NAME_SMACK) == 0 ||
++	    strcmp(name, XATTR_NAME_SMACKIPIN) == 0 ||
++	    strcmp(name, XATTR_NAME_SMACKIPOUT) == 0 ||
++	    strcmp(name, XATTR_NAME_SMACKEXEC) == 0 ||
++	    strcmp(name, XATTR_NAME_SMACKMMAP) == 0 ||
++	    strcmp(name, XATTR_NAME_SMACKTRANSMUTE) == 0)
++		return 1;
++
++	return 0;
++}
++
+ /**
+  * smack_inode_setxattr - Smack check for setting xattrs
+  * @idmap: idmap of the mount
+@@ -1325,8 +1352,7 @@ static int smack_inode_setxattr(struct mnt_idmap *idmap,
+ 		    size != TRANS_TRUE_SIZE ||
+ 		    strncmp(value, TRANS_TRUE, TRANS_TRUE_SIZE) != 0)
+ 			rc = -EINVAL;
+-	} else
+-		rc = cap_inode_setxattr(dentry, name, value, size, flags);
++	}
+ 
+ 	if (check_priv && !smack_privileged(CAP_MAC_ADMIN))
+ 		rc = -EPERM;
+@@ -1435,8 +1461,7 @@ static int smack_inode_removexattr(struct mnt_idmap *idmap,
+ 	    strcmp(name, XATTR_NAME_SMACKMMAP) == 0) {
+ 		if (!smack_privileged(CAP_MAC_ADMIN))
+ 			rc = -EPERM;
+-	} else
+-		rc = cap_inode_removexattr(idmap, dentry, name);
++	}
+ 
+ 	if (rc != 0)
+ 		return rc;
+@@ -5053,6 +5078,7 @@ static struct security_hook_list smack_hooks[] __ro_after_init = {
+ 	LSM_HOOK_INIT(inode_permission, smack_inode_permission),
+ 	LSM_HOOK_INIT(inode_setattr, smack_inode_setattr),
+ 	LSM_HOOK_INIT(inode_getattr, smack_inode_getattr),
++	LSM_HOOK_INIT(inode_xattr_skipcap, smack_inode_xattr_skipcap),
+ 	LSM_HOOK_INIT(inode_setxattr, smack_inode_setxattr),
+ 	LSM_HOOK_INIT(inode_post_setxattr, smack_inode_post_setxattr),
+ 	LSM_HOOK_INIT(inode_getxattr, smack_inode_getxattr),
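
The Smack hook above follows the same contract: return 1 only for xattr names Smack manages itself, using a cheap prefix check followed by exact matches. A standalone sketch of that prefix-then-exact-match pattern, with the conventional 'security.SMACK64*' names written out literally as an assumption (the kernel's XATTR_* macros are not expanded here):

#include <stdio.h>
#include <string.h>

/* Assumed literal values of the Smack-owned xattr names. */
static const char * const smack_xattrs[] = {
	"security.SMACK64",
	"security.SMACK64IPIN",
	"security.SMACK64IPOUT",
	"security.SMACK64EXEC",
	"security.SMACK64MMAP",
	"security.SMACK64TRANSMUTE",
};

/* Return 1 if @name is one of the Smack-owned xattrs, 0 otherwise. */
static int smack_owns_xattr(const char *name)
{
	size_t i;

	/* Cheap rejection: everything Smack owns shares this prefix. */
	if (strncmp(name, "security.SMACK64", strlen("security.SMACK64")))
		return 0;

	for (i = 0; i < sizeof(smack_xattrs) / sizeof(smack_xattrs[0]); i++)
		if (strcmp(name, smack_xattrs[i]) == 0)
			return 1;

	return 0;
}

int main(void)
{
	printf("%d\n", smack_owns_xattr("security.SMACK64EXEC")); /* 1 */
	printf("%d\n", smack_owns_xattr("user.comment"));         /* 0 */
	return 0;
}
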
+diff --git a/sound/core/ump.c b/sound/core/ump.c
+index 3f61220c23b4e..0f0d7e895c5aa 100644
+--- a/sound/core/ump.c
++++ b/sound/core/ump.c
+@@ -733,6 +733,12 @@ static void fill_fb_info(struct snd_ump_endpoint *ump,
+ 		info->block_id, info->direction, info->active,
+ 		info->first_group, info->num_groups, info->midi_ci_version,
+ 		info->sysex8_streams, info->flags);
++
++	if ((info->flags & SNDRV_UMP_BLOCK_IS_MIDI1) && info->num_groups != 1) {
++		info->num_groups = 1;
++		ump_dbg(ump, "FB %d: corrected groups to 1 for MIDI1\n",
++			info->block_id);
++	}
+ }
+ 
+ /* check whether the FB info gets updated by the current message */
+@@ -806,6 +812,13 @@ static int ump_handle_fb_name_msg(struct snd_ump_endpoint *ump,
+ 	if (!fb)
+ 		return -ENODEV;
+ 
++	if (ump->parsed &&
++	    (ump->info.flags & SNDRV_UMP_EP_INFO_STATIC_BLOCKS)) {
++		ump_dbg(ump, "Skipping static FB name update (blk#%d)\n",
++			fb->info.block_id);
++		return 0;
++	}
++
+ 	ret = ump_append_string(ump, fb->info.name, sizeof(fb->info.name),
+ 				buf->raw, 3);
+ 	/* notify the FB name update to sequencer, too */
+diff --git a/sound/firewire/amdtp-stream.c b/sound/firewire/amdtp-stream.c
+index d35d0a420ee08..1a163bbcabd79 100644
+--- a/sound/firewire/amdtp-stream.c
++++ b/sound/firewire/amdtp-stream.c
+@@ -1180,8 +1180,7 @@ static void process_rx_packets(struct fw_iso_context *context, u32 tstamp, size_
+ 		(void)fw_card_read_cycle_time(fw_parent_device(s->unit)->card, &curr_cycle_time);
+ 
+ 	for (i = 0; i < packets; ++i) {
+-		DEFINE_FLEX(struct fw_iso_packet, template, header,
+-			    header_length, CIP_HEADER_QUADLETS);
++		DEFINE_RAW_FLEX(struct fw_iso_packet, template, header, CIP_HEADER_QUADLETS);
+ 		bool sched_irq = false;
+ 
+ 		build_it_pkt_header(s, desc->cycle, template, pkt_header_length,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 6f4512b598eaa..d749769438ea5 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10360,10 +10360,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1f62, "ASUS UX7602ZM", ALC245_FIXUP_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1043, 0x1f92, "ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+-	SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC245_FIXUP_CS35L41_SPI_2),
+-	SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", ALC245_FIXUP_CS35L41_SPI_2),
++	SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
++	SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ 	SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+-	SND_PCI_QUIRK(0x1043, 0x3a50, "ASUS G834JYR/JZR", ALC245_FIXUP_CS35L41_SPI_2),
++	SND_PCI_QUIRK(0x1043, 0x3a50, "ASUS G834JYR/JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ 	SND_PCI_QUIRK(0x1043, 0x3a60, "ASUS G634JYR/JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ 	SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+diff --git a/sound/soc/amd/acp-es8336.c b/sound/soc/amd/acp-es8336.c
+index e079b3218c6f4..3756b8bef17bc 100644
+--- a/sound/soc/amd/acp-es8336.c
++++ b/sound/soc/amd/acp-es8336.c
+@@ -203,8 +203,10 @@ static int st_es8336_late_probe(struct snd_soc_card *card)
+ 
+ 	codec_dev = acpi_get_first_physical_node(adev);
+ 	acpi_dev_put(adev);
+-	if (!codec_dev)
++	if (!codec_dev) {
+ 		dev_err(card->dev, "can not find codec dev\n");
++		return -ENODEV;
++	}
+ 
+ 	ret = devm_acpi_dev_add_driver_gpios(codec_dev, acpi_es8336_gpios);
+ 	if (ret)
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 4e3a8ce690a45..36dddf230c2c4 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -220,6 +220,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21J6"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "21M5"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/codecs/cs35l56-shared.c b/sound/soc/codecs/cs35l56-shared.c
+index 30497152e02a7..f609cade805d7 100644
+--- a/sound/soc/codecs/cs35l56-shared.c
++++ b/sound/soc/codecs/cs35l56-shared.c
+@@ -397,7 +397,7 @@ int cs35l56_irq_request(struct cs35l56_base *cs35l56_base, int irq)
+ {
+ 	int ret;
+ 
+-	if (!irq)
++	if (irq < 1)
+ 		return 0;
+ 
+ 	ret = devm_request_threaded_irq(cs35l56_base->dev, irq, NULL, cs35l56_irq,
+diff --git a/sound/soc/codecs/max98088.c b/sound/soc/codecs/max98088.c
+index 8b56ee550c09e..8b0645c634620 100644
+--- a/sound/soc/codecs/max98088.c
++++ b/sound/soc/codecs/max98088.c
+@@ -1318,6 +1318,7 @@ static int max98088_set_bias_level(struct snd_soc_component *component,
+                                   enum snd_soc_bias_level level)
+ {
+ 	struct max98088_priv *max98088 = snd_soc_component_get_drvdata(component);
++	int ret;
+ 
+ 	switch (level) {
+ 	case SND_SOC_BIAS_ON:
+@@ -1333,10 +1334,13 @@ static int max98088_set_bias_level(struct snd_soc_component *component,
+ 		 */
+ 		if (!IS_ERR(max98088->mclk)) {
+ 			if (snd_soc_component_get_bias_level(component) ==
+-			    SND_SOC_BIAS_ON)
++			    SND_SOC_BIAS_ON) {
+ 				clk_disable_unprepare(max98088->mclk);
+-			else
+-				clk_prepare_enable(max98088->mclk);
++			} else {
++				ret = clk_prepare_enable(max98088->mclk);
++				if (ret)
++					return ret;
++			}
+ 		}
+ 		break;
+ 
+diff --git a/sound/soc/codecs/pcm6240.c b/sound/soc/codecs/pcm6240.c
+index 86e126783a1df..8f7057e689fbf 100644
+--- a/sound/soc/codecs/pcm6240.c
++++ b/sound/soc/codecs/pcm6240.c
+@@ -2087,10 +2087,8 @@ static int pcmdevice_i2c_probe(struct i2c_client *i2c)
+ #endif
+ 
+ 	pcm_dev = devm_kzalloc(&i2c->dev, sizeof(*pcm_dev), GFP_KERNEL);
+-	if (!pcm_dev) {
+-		ret = -ENOMEM;
+-		goto out;
+-	}
++	if (!pcm_dev)
++		return -ENOMEM;
+ 
+ 	pcm_dev->chip_id = (id != NULL) ? id->driver_data : 0;
+ 
+diff --git a/sound/soc/codecs/tas2781-fmwlib.c b/sound/soc/codecs/tas2781-fmwlib.c
+index 265a8ca25cbbe..08082806d5892 100644
+--- a/sound/soc/codecs/tas2781-fmwlib.c
++++ b/sound/soc/codecs/tas2781-fmwlib.c
+@@ -2163,7 +2163,7 @@ static void tasdev_load_calibrated_data(struct tasdevice_priv *priv, int i)
+ 		return;
+ 
+ 	cal = cal_fmw->calibrations;
+-	if (cal)
++	if (!cal)
+ 		return;
+ 
+ 	load_calib_data(priv, &cal->dev_data);
+@@ -2324,14 +2324,21 @@ void tasdevice_tuning_switch(void *context, int state)
+ 	struct tasdevice_fw *tas_fmw = tas_priv->fmw;
+ 	int profile_cfg_id = tas_priv->rcabin.profile_cfg_id;
+ 
+-	if (tas_priv->fw_state == TASDEVICE_DSP_FW_FAIL) {
+-		dev_err(tas_priv->dev, "DSP bin file not loaded\n");
++	/*
++	 * Only RCA-based Playback can still work with no dsp program running
++	 * inside the chip.
++	 */
++	switch (tas_priv->fw_state) {
++	case TASDEVICE_RCA_FW_OK:
++	case TASDEVICE_DSP_FW_ALL_OK:
++		break;
++	default:
+ 		return;
+ 	}
+ 
+ 	if (state == 0) {
+-		if (tas_priv->cur_prog < tas_fmw->nr_programs) {
+-			/*dsp mode or tuning mode*/
++		if (tas_fmw && tas_priv->cur_prog < tas_fmw->nr_programs) {
++			/* dsp mode or tuning mode */
+ 			profile_cfg_id = tas_priv->rcabin.profile_cfg_id;
+ 			tasdevice_select_tuningprm_cfg(tas_priv,
+ 				tas_priv->cur_prog, tas_priv->cur_conf,
+@@ -2340,9 +2347,10 @@ void tasdevice_tuning_switch(void *context, int state)
+ 
+ 		tasdevice_select_cfg_blk(tas_priv, profile_cfg_id,
+ 			TASDEVICE_BIN_BLK_PRE_POWER_UP);
+-	} else
++	} else {
+ 		tasdevice_select_cfg_blk(tas_priv, profile_cfg_id,
+ 			TASDEVICE_BIN_BLK_PRE_SHUTDOWN);
++	}
+ }
+ EXPORT_SYMBOL_NS_GPL(tasdevice_tuning_switch,
+ 	SND_SOC_TAS2781_FMWLIB);
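
The tuning-switch change above replaces a single DSP-failure check with a whitelist of firmware states: only the RCA-only and fully-loaded states may proceed, everything else returns early. A compact userspace sketch of that gating, with state names and values assumed purely for illustration:

#include <stdio.h>

/* Assumed firmware states; only the two *_OK states allow playback. */
enum fw_state {
	FW_PENDING,
	FW_FAIL,
	RCA_FW_OK,
	DSP_FW_ALL_OK,
};

static int can_switch_tuning(enum fw_state state)
{
	switch (state) {
	case RCA_FW_OK:        /* RCA-only playback still works */
	case DSP_FW_ALL_OK:    /* full DSP tuning available */
		return 1;
	default:               /* nothing loaded or load failed: bail out */
		return 0;
	}
}

int main(void)
{
	printf("FW_FAIL:       %d\n", can_switch_tuning(FW_FAIL));
	printf("RCA_FW_OK:     %d\n", can_switch_tuning(RCA_FW_OK));
	printf("DSP_FW_ALL_OK: %d\n", can_switch_tuning(DSP_FW_ALL_OK));
	return 0;
}
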
+diff --git a/sound/soc/codecs/tas2781-i2c.c b/sound/soc/codecs/tas2781-i2c.c
+index 9350972dfefe7..c64d458e524e2 100644
+--- a/sound/soc/codecs/tas2781-i2c.c
++++ b/sound/soc/codecs/tas2781-i2c.c
+@@ -380,23 +380,37 @@ static void tasdevice_fw_ready(const struct firmware *fmw,
+ 	mutex_lock(&tas_priv->codec_lock);
+ 
+ 	ret = tasdevice_rca_parser(tas_priv, fmw);
+-	if (ret)
++	if (ret) {
++		tasdevice_config_info_remove(tas_priv);
+ 		goto out;
++	}
+ 	tasdevice_create_control(tas_priv);
+ 
+ 	tasdevice_dsp_remove(tas_priv);
+ 	tasdevice_calbin_remove(tas_priv);
+-	tas_priv->fw_state = TASDEVICE_DSP_FW_PENDING;
++	/*
++	 * The baseline is the RCA-only case; the code then attempts to load
++	 * the DSP firmware but keeps going in case of failure, i.e. failing
++	 * to load the DSP firmware is NOT an error.
++	 */
++	tas_priv->fw_state = TASDEVICE_RCA_FW_OK;
+ 	scnprintf(tas_priv->coef_binaryname, 64, "%s_coef.bin",
+ 		tas_priv->dev_name);
+ 	ret = tasdevice_dsp_parser(tas_priv);
+ 	if (ret) {
+ 		dev_err(tas_priv->dev, "dspfw load %s error\n",
+ 			tas_priv->coef_binaryname);
+-		tas_priv->fw_state = TASDEVICE_DSP_FW_FAIL;
+ 		goto out;
+ 	}
+-	tasdevice_dsp_create_ctrls(tas_priv);
++
++	/*
++	 * If no DSP-related kcontrol is created, the DSP resources will be freed.
++	 */
++	ret = tasdevice_dsp_create_ctrls(tas_priv);
++	if (ret) {
++		dev_err(tas_priv->dev, "dsp controls error\n");
++		goto out;
++	}
+ 
+ 	tas_priv->fw_state = TASDEVICE_DSP_FW_ALL_OK;
+ 
+@@ -417,9 +431,8 @@ static void tasdevice_fw_ready(const struct firmware *fmw,
+ 	tasdevice_prmg_load(tas_priv, 0);
+ 	tas_priv->cur_prog = 0;
+ out:
+-	if (tas_priv->fw_state == TASDEVICE_DSP_FW_FAIL) {
+-		/*If DSP FW fail, kcontrol won't be created */
+-		tasdevice_config_info_remove(tas_priv);
++	if (tas_priv->fw_state == TASDEVICE_RCA_FW_OK) {
++		/* If DSP FW loading failed, the DSP kcontrols won't be created. */
+ 		tasdevice_dsp_remove(tas_priv);
+ 	}
+ 	mutex_unlock(&tas_priv->codec_lock);
+@@ -466,14 +479,14 @@ static int tasdevice_startup(struct snd_pcm_substream *substream,
+ {
+ 	struct snd_soc_component *codec = dai->component;
+ 	struct tasdevice_priv *tas_priv = snd_soc_component_get_drvdata(codec);
+-	int ret = 0;
+ 
+-	if (tas_priv->fw_state != TASDEVICE_DSP_FW_ALL_OK) {
+-		dev_err(tas_priv->dev, "DSP bin file not loaded\n");
+-		ret = -EINVAL;
++	switch (tas_priv->fw_state) {
++	case TASDEVICE_RCA_FW_OK:
++	case TASDEVICE_DSP_FW_ALL_OK:
++		return 0;
++	default:
++		return -EINVAL;
+ 	}
+-
+-	return ret;
+ }
+ 
+ static int tasdevice_hw_params(struct snd_pcm_substream *substream,
+diff --git a/sound/soc/codecs/wcd939x.c b/sound/soc/codecs/wcd939x.c
+index c49894aad8a54..97fd75a935b68 100644
+--- a/sound/soc/codecs/wcd939x.c
++++ b/sound/soc/codecs/wcd939x.c
+@@ -182,8 +182,6 @@ struct wcd939x_priv {
+ 	/* typec handling */
+ 	bool typec_analog_mux;
+ #if IS_ENABLED(CONFIG_TYPEC)
+-	struct typec_mux_dev *typec_mux;
+-	struct typec_switch_dev *typec_sw;
+ 	enum typec_orientation typec_orientation;
+ 	unsigned long typec_mode;
+ 	struct typec_switch *typec_switch;
+@@ -3528,6 +3526,68 @@ static const struct component_master_ops wcd939x_comp_ops = {
+ 	.unbind = wcd939x_unbind,
+ };
+ 
++static void __maybe_unused wcd939x_typec_mux_unregister(void *data)
++{
++	struct typec_mux_dev *typec_mux = data;
++
++	typec_mux_unregister(typec_mux);
++}
++
++static void __maybe_unused wcd939x_typec_switch_unregister(void *data)
++{
++	struct typec_switch_dev *typec_sw = data;
++
++	typec_switch_unregister(typec_sw);
++}
++
++static int wcd939x_add_typec(struct wcd939x_priv *wcd939x, struct device *dev)
++{
++#if IS_ENABLED(CONFIG_TYPEC)
++	int ret;
++	struct typec_mux_dev *typec_mux;
++	struct typec_switch_dev *typec_sw;
++	struct typec_mux_desc mux_desc = {
++		.drvdata = wcd939x,
++		.fwnode = dev_fwnode(dev),
++		.set = wcd939x_typec_mux_set,
++	};
++	struct typec_switch_desc sw_desc = {
++		.drvdata = wcd939x,
++		.fwnode = dev_fwnode(dev),
++		.set = wcd939x_typec_switch_set,
++	};
++
++	/*
++	 * If USBSS is used to mux the analog lines,
++	 * register a typec mux/switch to get typec events
++	 */
++	if (!wcd939x->typec_analog_mux)
++		return 0;
++
++	typec_mux = typec_mux_register(dev, &mux_desc);
++	if (IS_ERR(typec_mux))
++		return dev_err_probe(dev, PTR_ERR(typec_mux),
++				     "failed to register typec mux\n");
++
++	ret = devm_add_action_or_reset(dev, wcd939x_typec_mux_unregister,
++				       typec_mux);
++	if (ret)
++		return ret;
++
++	typec_sw = typec_switch_register(dev, &sw_desc);
++	if (IS_ERR(typec_sw))
++		return dev_err_probe(dev, PTR_ERR(typec_sw),
++				     "failed to register typec switch\n");
++
++	ret = devm_add_action_or_reset(dev, wcd939x_typec_switch_unregister,
++				       typec_sw);
++	if (ret)
++		return ret;
++#endif
++
++	return 0;
++}
++
+ static int wcd939x_add_slave_components(struct wcd939x_priv *wcd939x,
+ 					struct device *dev,
+ 					struct component_match **matchptr)
+@@ -3576,42 +3636,13 @@ static int wcd939x_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-#if IS_ENABLED(CONFIG_TYPEC)
+-	/*
+-	 * Is USBSS is used to mux analog lines,
+-	 * register a typec mux/switch to get typec events
+-	 */
+-	if (wcd939x->typec_analog_mux) {
+-		struct typec_mux_desc mux_desc = {
+-			.drvdata = wcd939x,
+-			.fwnode = dev_fwnode(dev),
+-			.set = wcd939x_typec_mux_set,
+-		};
+-		struct typec_switch_desc sw_desc = {
+-			.drvdata = wcd939x,
+-			.fwnode = dev_fwnode(dev),
+-			.set = wcd939x_typec_switch_set,
+-		};
+-
+-		wcd939x->typec_mux = typec_mux_register(dev, &mux_desc);
+-		if (IS_ERR(wcd939x->typec_mux)) {
+-			ret = dev_err_probe(dev, PTR_ERR(wcd939x->typec_mux),
+-					    "failed to register typec mux\n");
+-			goto err_disable_regulators;
+-		}
+-
+-		wcd939x->typec_sw = typec_switch_register(dev, &sw_desc);
+-		if (IS_ERR(wcd939x->typec_sw)) {
+-			ret = dev_err_probe(dev, PTR_ERR(wcd939x->typec_sw),
+-					    "failed to register typec switch\n");
+-			goto err_unregister_typec_mux;
+-		}
+-	}
+-#endif /* CONFIG_TYPEC */
++	ret = wcd939x_add_typec(wcd939x, dev);
++	if (ret)
++		goto err_disable_regulators;
+ 
+ 	ret = wcd939x_add_slave_components(wcd939x, dev, &match);
+ 	if (ret)
+-		goto err_unregister_typec_switch;
++		goto err_disable_regulators;
+ 
+ 	wcd939x_reset(wcd939x);
+ 
+@@ -3628,18 +3659,6 @@ static int wcd939x_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
+-#if IS_ENABLED(CONFIG_TYPEC)
+-err_unregister_typec_mux:
+-	if (wcd939x->typec_analog_mux)
+-		typec_mux_unregister(wcd939x->typec_mux);
+-#endif /* CONFIG_TYPEC */
+-
+-err_unregister_typec_switch:
+-#if IS_ENABLED(CONFIG_TYPEC)
+-	if (wcd939x->typec_analog_mux)
+-		typec_switch_unregister(wcd939x->typec_sw);
+-#endif /* CONFIG_TYPEC */
+-
+ err_disable_regulators:
+ 	regulator_bulk_disable(WCD939X_MAX_SUPPLY, wcd939x->supplies);
+ 	regulator_bulk_free(WCD939X_MAX_SUPPLY, wcd939x->supplies);
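
The wcd939x rework above drops the hand-rolled error-unwind labels and instead registers an unregister callback with devm_add_action_or_reset() right after each typec registration, so cleanup runs in reverse order automatically on probe failure or device removal. A userspace approximation of that "register the cleanup immediately after acquiring the resource" idea, using a tiny explicit action stack instead of the real devres API:

#include <stdio.h>

/* Minimal stand-in for devres: a fixed stack of cleanup actions. */
typedef void (*action_fn)(void *data);

struct action {
	action_fn fn;
	void *data;
};

static struct action actions[8];
static int nr_actions;

static int add_action_or_reset(action_fn fn, void *data)
{
	if (nr_actions >= 8) {
		fn(data);      /* "_or_reset": run the action on failure */
		return -1;
	}
	actions[nr_actions].fn = fn;
	actions[nr_actions].data = data;
	nr_actions++;
	return 0;
}

static void run_actions(void)
{
	/* Unwind in reverse registration order, like devres does. */
	while (nr_actions > 0) {
		nr_actions--;
		actions[nr_actions].fn(actions[nr_actions].data);
	}
}

static void unregister_mux(void *data)    { printf("unregister %s\n", (char *)data); }
static void unregister_switch(void *data) { printf("unregister %s\n", (char *)data); }

int main(void)
{
	/* Register the cleanup right after each successful "registration". */
	add_action_or_reset(unregister_mux, "typec-mux");
	add_action_or_reset(unregister_switch, "typec-switch");

	/* On remove (or probe failure) everything unwinds automatically. */
	run_actions();
	return 0;
}
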
+diff --git a/sound/soc/fsl/fsl_qmc_audio.c b/sound/soc/fsl/fsl_qmc_audio.c
+index bfaaa451735b8..dd90ef16fa973 100644
+--- a/sound/soc/fsl/fsl_qmc_audio.c
++++ b/sound/soc/fsl/fsl_qmc_audio.c
+@@ -604,6 +604,8 @@ static int qmc_audio_dai_parse(struct qmc_audio *qmc_audio, struct device_node *
+ 
+ 	qmc_dai->name = devm_kasprintf(qmc_audio->dev, GFP_KERNEL, "%s.%d",
+ 				       np->parent->name, qmc_dai->id);
++	if (!qmc_dai->name)
++		return -ENOMEM;
+ 
+ 	qmc_dai->qmc_chan = devm_qmc_chan_get_byphandle(qmc_audio->dev, np,
+ 							"fsl,qmc-chan");
+diff --git a/sound/soc/intel/common/soc-acpi-intel-ssp-common.c b/sound/soc/intel/common/soc-acpi-intel-ssp-common.c
+index 75d0b931d895d..de7a3f7f47f10 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-ssp-common.c
++++ b/sound/soc/intel/common/soc-acpi-intel-ssp-common.c
+@@ -64,6 +64,15 @@ static const struct codec_map amps[] = {
+ 	CODEC_MAP_ENTRY("RT1015P", "rt1015", RT1015P_ACPI_HID, CODEC_RT1015P),
+ 	CODEC_MAP_ENTRY("RT1019P", "rt1019", RT1019P_ACPI_HID, CODEC_RT1019P),
+ 	CODEC_MAP_ENTRY("RT1308", "rt1308", RT1308_ACPI_HID, CODEC_RT1308),
++
++	/*
++	 * Monolithic components
++	 *
++	 * Only put components that can serve as both the amp and the codec below this line.
++	 * This ensures that, if the part is used just as a codec and a separate
++	 * amp is also present, the amp will be selected properly.
++	 */
++	CODEC_MAP_ENTRY("RT5650", "rt5650", RT5650_ACPI_HID, CODEC_RT5650),
+ };
+ 
+ enum snd_soc_acpi_intel_codec
+diff --git a/sound/soc/intel/common/soc-intel-quirks.h b/sound/soc/intel/common/soc-intel-quirks.h
+index de4e550c5b34d..42bd51456b945 100644
+--- a/sound/soc/intel/common/soc-intel-quirks.h
++++ b/sound/soc/intel/common/soc-intel-quirks.h
+@@ -11,7 +11,7 @@
+ 
+ #include <linux/platform_data/x86/soc.h>
+ 
+-#if IS_ENABLED(CONFIG_X86)
++#if IS_REACHABLE(CONFIG_IOSF_MBI)
+ 
+ #include <linux/dmi.h>
+ #include <asm/iosf_mbi.h>
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index b0f3e02cb043c..5a47f661e0c6f 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -1166,9 +1166,13 @@ int asoc_qcom_lpass_cpu_platform_probe(struct platform_device *pdev)
+ 		}
+ 
+ 		res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "lpass-rxtx-cdc-dma-lpm");
++		if (!res)
++			return -EINVAL;
+ 		drvdata->rxtx_cdc_dma_lpm_buf = res->start;
+ 
+ 		res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "lpass-va-cdc-dma-lpm");
++		if (!res)
++			return -EINVAL;
+ 		drvdata->va_cdc_dma_lpm_buf = res->start;
+ 	}
+ 
+diff --git a/sound/soc/sof/amd/pci-vangogh.c b/sound/soc/sof/amd/pci-vangogh.c
+index 16eb2994fbab9..eba5808401003 100644
+--- a/sound/soc/sof/amd/pci-vangogh.c
++++ b/sound/soc/sof/amd/pci-vangogh.c
+@@ -34,7 +34,6 @@ static const struct sof_amd_acp_desc vangogh_chip_info = {
+ 	.dsp_intr_base	= ACP5X_DSP_SW_INTR_BASE,
+ 	.sram_pte_offset = ACP5X_SRAM_PTE_OFFSET,
+ 	.hw_semaphore_offset = ACP5X_AXI2DAGB_SEM_0,
+-	.acp_clkmux_sel = ACP5X_CLKMUX_SEL,
+ 	.probe_reg_offset = ACP5X_FUTURE_REG_ACLK_0,
+ };
+ 
+diff --git a/sound/soc/sof/imx/imx8m.c b/sound/soc/sof/imx/imx8m.c
+index 1c7019c3cbd38..cdd1e79ef9f6a 100644
+--- a/sound/soc/sof/imx/imx8m.c
++++ b/sound/soc/sof/imx/imx8m.c
+@@ -234,7 +234,7 @@ static int imx8m_probe(struct snd_sof_dev *sdev)
+ 	/* set default mailbox offset for FW ready message */
+ 	sdev->dsp_box.offset = MBOX_OFFSET;
+ 
+-	priv->regmap = syscon_regmap_lookup_by_compatible("fsl,dsp-ctrl");
++	priv->regmap = syscon_regmap_lookup_by_phandle(np, "fsl,dsp-ctrl");
+ 	if (IS_ERR(priv->regmap)) {
+ 		dev_err(sdev->dev, "cannot find dsp-ctrl registers");
+ 		ret = PTR_ERR(priv->regmap);
+diff --git a/sound/soc/sof/intel/hda-loader.c b/sound/soc/sof/intel/hda-loader.c
+index b8b914eaf7e05..75f6240cf3e1d 100644
+--- a/sound/soc/sof/intel/hda-loader.c
++++ b/sound/soc/sof/intel/hda-loader.c
+@@ -310,15 +310,19 @@ int hda_cl_copy_fw(struct snd_sof_dev *sdev, struct hdac_ext_stream *hext_stream
+ 		return ret;
+ 	}
+ 
+-	/* Wait for completion of transfer */
+-	time_left = wait_for_completion_timeout(&hda_stream->ioc,
+-						msecs_to_jiffies(HDA_CL_DMA_IOC_TIMEOUT_MS));
+-
+-	if (!time_left) {
+-		dev_err(sdev->dev, "Code loader DMA did not complete\n");
+-		return -ETIMEDOUT;
++	if (sdev->pdata->ipc_type == SOF_IPC_TYPE_4) {
++		/* Wait for completion of transfer */
++		time_left = wait_for_completion_timeout(&hda_stream->ioc,
++							msecs_to_jiffies(HDA_CL_DMA_IOC_TIMEOUT_MS));
++
++		if (!time_left) {
++			dev_err(sdev->dev, "Code loader DMA did not complete\n");
++			return -ETIMEDOUT;
++		}
++		dev_dbg(sdev->dev, "Code loader DMA done\n");
+ 	}
+-	dev_dbg(sdev->dev, "Code loader DMA done, waiting for FW_ENTERED status\n");
++
++	dev_dbg(sdev->dev, "waiting for FW_ENTERED status\n");
+ 
+ 	status = snd_sof_dsp_read_poll_timeout(sdev, HDA_DSP_BAR,
+ 					chip->rom_status_reg, reg,
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index dead1c19558bb..81647ddac8cbc 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -1307,9 +1307,10 @@ struct snd_soc_acpi_mach *hda_machine_select(struct snd_sof_dev *sdev)
+ 	const struct sof_dev_desc *desc = sof_pdata->desc;
+ 	struct hdac_bus *bus = sof_to_bus(sdev);
+ 	struct snd_soc_acpi_mach *mach = NULL;
+-	enum snd_soc_acpi_intel_codec codec_type;
++	enum snd_soc_acpi_intel_codec codec_type, amp_type;
+ 	const char *tplg_filename;
+ 	const char *tplg_suffix;
++	bool amp_name_valid;
+ 
+ 	/* Try I2S or DMIC if it is supported */
+ 	if (interface_mask & (BIT(SOF_DAI_INTEL_SSP) | BIT(SOF_DAI_INTEL_DMIC)))
+@@ -1413,15 +1414,16 @@ struct snd_soc_acpi_mach *hda_machine_select(struct snd_sof_dev *sdev)
+ 			}
+ 		}
+ 
+-		codec_type = snd_soc_acpi_intel_detect_amp_type(sdev->dev);
++		amp_type = snd_soc_acpi_intel_detect_amp_type(sdev->dev);
++		codec_type = snd_soc_acpi_intel_detect_codec_type(sdev->dev);
++		amp_name_valid = amp_type != CODEC_NONE && amp_type != codec_type;
+ 
+-		if (tplg_fixup &&
+-		    mach->tplg_quirk_mask & SND_SOC_ACPI_TPLG_INTEL_AMP_NAME &&
+-		    codec_type != CODEC_NONE) {
+-			tplg_suffix = snd_soc_acpi_intel_get_amp_tplg_suffix(codec_type);
++		if (tplg_fixup && amp_name_valid &&
++		    mach->tplg_quirk_mask & SND_SOC_ACPI_TPLG_INTEL_AMP_NAME) {
++			tplg_suffix = snd_soc_acpi_intel_get_amp_tplg_suffix(amp_type);
+ 			if (!tplg_suffix) {
+ 				dev_err(sdev->dev, "no tplg suffix found, amp %d\n",
+-					codec_type);
++					amp_type);
+ 				return NULL;
+ 			}
+ 
+@@ -1436,7 +1438,6 @@ struct snd_soc_acpi_mach *hda_machine_select(struct snd_sof_dev *sdev)
+ 			add_extension = true;
+ 		}
+ 
+-		codec_type = snd_soc_acpi_intel_detect_codec_type(sdev->dev);
+ 
+ 		if (tplg_fixup &&
+ 		    mach->tplg_quirk_mask & SND_SOC_ACPI_TPLG_INTEL_CODEC_NAME &&
+diff --git a/sound/soc/sof/ipc4-topology.c b/sound/soc/sof/ipc4-topology.c
+index 00987039c9720..4d261b9e6d7ec 100644
+--- a/sound/soc/sof/ipc4-topology.c
++++ b/sound/soc/sof/ipc4-topology.c
+@@ -1358,7 +1358,13 @@ static void sof_ipc4_unprepare_copier_module(struct snd_sof_widget *swidget)
+ 		ipc4_copier = dai->private;
+ 
+ 		if (pipeline->use_chain_dma) {
+-			pipeline->msg.primary = 0;
++			/*
++			 * Preserve the DMA Link ID and clear other bits since
++			 * the DMA Link ID is only configured once during
++			 * dai_config; the other fields are expected to be 0
++			 * for re-configuration.
++			 */
++			pipeline->msg.primary &= SOF_IPC4_GLB_CHAIN_DMA_LINK_ID_MASK;
+ 			pipeline->msg.extension = 0;
+ 		}
+ 
+@@ -2869,7 +2875,7 @@ static void sof_ipc4_put_queue_id(struct snd_sof_widget *swidget, int queue_id,
+ static int sof_ipc4_set_copier_sink_format(struct snd_sof_dev *sdev,
+ 					   struct snd_sof_widget *src_widget,
+ 					   struct snd_sof_widget *sink_widget,
+-					   int sink_id)
++					   struct snd_sof_route *sroute)
+ {
+ 	struct sof_ipc4_copier_config_set_sink_format format;
+ 	const struct sof_ipc_ops *iops = sdev->ipc->ops;
+@@ -2878,9 +2884,6 @@ static int sof_ipc4_set_copier_sink_format(struct snd_sof_dev *sdev,
+ 	struct sof_ipc4_fw_module *fw_module;
+ 	struct sof_ipc4_msg msg = {{ 0 }};
+ 
+-	dev_dbg(sdev->dev, "%s set copier sink %d format\n",
+-		src_widget->widget->name, sink_id);
+-
+ 	if (WIDGET_IS_DAI(src_widget->id)) {
+ 		struct snd_sof_dai *dai = src_widget->private;
+ 
+@@ -2891,13 +2894,15 @@ static int sof_ipc4_set_copier_sink_format(struct snd_sof_dev *sdev,
+ 
+ 	fw_module = src_widget->module_info;
+ 
+-	format.sink_id = sink_id;
++	format.sink_id = sroute->src_queue_id;
+ 	memcpy(&format.source_fmt, &src_config->audio_fmt, sizeof(format.source_fmt));
+ 
+-	pin_fmt = sof_ipc4_get_input_pin_audio_fmt(sink_widget, sink_id);
++	pin_fmt = sof_ipc4_get_input_pin_audio_fmt(sink_widget, sroute->dst_queue_id);
+ 	if (!pin_fmt) {
+-		dev_err(sdev->dev, "Unable to get pin %d format for %s",
+-			sink_id, sink_widget->widget->name);
++		dev_err(sdev->dev,
++			"Failed to get input audio format of %s:%d for output of %s:%d\n",
++			sink_widget->widget->name, sroute->dst_queue_id,
++			src_widget->widget->name, sroute->src_queue_id);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -2955,7 +2960,8 @@ static int sof_ipc4_route_setup(struct snd_sof_dev *sdev, struct snd_sof_route *
+ 	sroute->src_queue_id = sof_ipc4_get_queue_id(src_widget, sink_widget,
+ 						     SOF_PIN_TYPE_OUTPUT);
+ 	if (sroute->src_queue_id < 0) {
+-		dev_err(sdev->dev, "failed to get queue ID for source widget: %s\n",
++		dev_err(sdev->dev,
++			"failed to get src_queue_id ID from source widget %s\n",
+ 			src_widget->widget->name);
+ 		return sroute->src_queue_id;
+ 	}
+@@ -2963,7 +2969,8 @@ static int sof_ipc4_route_setup(struct snd_sof_dev *sdev, struct snd_sof_route *
+ 	sroute->dst_queue_id = sof_ipc4_get_queue_id(src_widget, sink_widget,
+ 						     SOF_PIN_TYPE_INPUT);
+ 	if (sroute->dst_queue_id < 0) {
+-		dev_err(sdev->dev, "failed to get queue ID for sink widget: %s\n",
++		dev_err(sdev->dev,
++			"failed to get dst_queue_id ID from sink widget %s\n",
+ 			sink_widget->widget->name);
+ 		sof_ipc4_put_queue_id(src_widget, sroute->src_queue_id,
+ 				      SOF_PIN_TYPE_OUTPUT);
+@@ -2972,10 +2979,11 @@ static int sof_ipc4_route_setup(struct snd_sof_dev *sdev, struct snd_sof_route *
+ 
+ 	/* Pin 0 format is already set during copier module init */
+ 	if (sroute->src_queue_id > 0 && WIDGET_IS_COPIER(src_widget->id)) {
+-		ret = sof_ipc4_set_copier_sink_format(sdev, src_widget, sink_widget,
+-						      sroute->src_queue_id);
++		ret = sof_ipc4_set_copier_sink_format(sdev, src_widget,
++						      sink_widget, sroute);
+ 		if (ret < 0) {
+-			dev_err(sdev->dev, "failed to set sink format for %s source queue ID %d\n",
++			dev_err(sdev->dev,
++				"failed to set sink format for source %s:%d\n",
+ 				src_widget->widget->name, sroute->src_queue_id);
+ 			goto out;
+ 		}
+@@ -3093,8 +3101,14 @@ static int sof_ipc4_dai_config(struct snd_sof_dev *sdev, struct snd_sof_widget *
+ 		return 0;
+ 
+ 	if (pipeline->use_chain_dma) {
+-		pipeline->msg.primary &= ~SOF_IPC4_GLB_CHAIN_DMA_LINK_ID_MASK;
+-		pipeline->msg.primary |= SOF_IPC4_GLB_CHAIN_DMA_LINK_ID(data->dai_data);
++		/*
++		 * Only configure the DMA Link ID for ChainDMA when this op is
++		 * invoked with SOF_DAI_CONFIG_FLAGS_HW_PARAMS
++		 */
++		if (flags & SOF_DAI_CONFIG_FLAGS_HW_PARAMS) {
++			pipeline->msg.primary &= ~SOF_IPC4_GLB_CHAIN_DMA_LINK_ID_MASK;
++			pipeline->msg.primary |= SOF_IPC4_GLB_CHAIN_DMA_LINK_ID(data->dai_data);
++		}
+ 		return 0;
+ 	}
+ 
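
Both ipc4-topology hunks above hinge on treating the DMA Link ID as the one field of pipeline->msg.primary that must survive re-configuration: it is written only under SOF_DAI_CONFIG_FLAGS_HW_PARAMS and preserved, rather than zeroed, on unprepare. The field-preserving idiom is just a mask-and-merge; a small self-contained demonstration with an assumed 8-bit field layout:

#include <stdint.h>
#include <stdio.h>

/* Assumed layout: bits 0..7 hold the DMA link id, the rest is other state. */
#define LINK_ID_MASK   0xffu
#define LINK_ID(x)     ((uint32_t)(x) & LINK_ID_MASK)

int main(void)
{
	uint32_t primary = 0xabcd1200u | LINK_ID(5); /* link id 5 plus other bits */

	/* Configure: clear only the field, then set the new link id. */
	primary &= ~LINK_ID_MASK;
	primary |= LINK_ID(7);
	printf("after config:    0x%08x (link id %u)\n", primary, primary & LINK_ID_MASK);

	/* Unprepare: keep only the link id, drop everything else. */
	primary &= LINK_ID_MASK;
	printf("after unprepare: 0x%08x (link id %u)\n", primary, primary & LINK_ID_MASK);
	return 0;
}
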
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 409fc11646948..d1bdb0b93bda0 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1211,6 +1211,13 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ 			cval->res = 16;
+ 		}
+ 		break;
++	case USB_ID(0x1bcf, 0x2281): /* HD Webcam */
++		if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
++			usb_audio_info(chip,
++				"set resolution quirk: cval->res = 16\n");
++			cval->res = 16;
++		}
++		break;
+ 	}
+ }
+ 
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 58156fbca02c7..ea063a14cdd8f 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2125,6 +2125,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x0b0e, 0x0349, /* Jabra 550a */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
++	DEVICE_FLG(0x0c45, 0x6340, /* Sonix HD USB Camera */
++		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x0ecb, 0x205c, /* JBL Quantum610 Wireless */
+ 		   QUIRK_FLAG_FIXED_RATE),
+ 	DEVICE_FLG(0x0ecb, 0x2069, /* JBL Quantum810 Wireless */
+@@ -2167,6 +2169,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x19f7, 0x0035, /* RODE NT-USB+ */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x1bcf, 0x2281, /* HD Webcam */
++		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x1bcf, 0x2283, /* NexiGo N930AF FHD Webcam */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x2040, 0x7200, /* Hauppauge HVR-950Q */
+diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
+index 958e92acca8e2..9b75639434b81 100644
+--- a/tools/bpf/bpftool/common.c
++++ b/tools/bpf/bpftool/common.c
+@@ -410,7 +410,7 @@ void get_prog_full_name(const struct bpf_prog_info *prog_info, int prog_fd,
+ {
+ 	const char *prog_name = prog_info->name;
+ 	const struct btf_type *func_type;
+-	const struct bpf_func_info finfo = {};
++	struct bpf_func_info finfo = {};
+ 	struct bpf_prog_info info = {};
+ 	__u32 info_len = sizeof(info);
+ 	struct btf *prog_btf = NULL;
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index 1a501cf09e782..40ea743d139fd 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -1813,6 +1813,10 @@ static int load_with_options(int argc, char **argv, bool first_prog_only)
+ 	}
+ 
+ 	if (pinmaps) {
++		err = create_and_mount_bpffs_dir(pinmaps);
++		if (err)
++			goto err_unpin;
++
+ 		err = bpf_object__pin_maps(obj, pinmaps);
+ 		if (err) {
+ 			p_err("failed to pin all maps");
+diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
+index af393c7dee1f1..b3edc239fe562 100644
+--- a/tools/bpf/resolve_btfids/main.c
++++ b/tools/bpf/resolve_btfids/main.c
+@@ -696,7 +696,7 @@ static int sets_patch(struct object *obj)
+ 			 * Make sure id is at the beginning of the pairs
+ 			 * struct, otherwise the below qsort would not work.
+ 			 */
+-			BUILD_BUG_ON(set8->pairs != &set8->pairs[0].id);
++			BUILD_BUG_ON((u32 *)set8->pairs != &set8->pairs[0].id);
+ 			qsort(set8->pairs, set8->cnt, sizeof(set8->pairs[0]), cmp_id);
+ 
+ 			/*
+diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
+index 2d0840ef599af..142060bbce0a0 100644
+--- a/tools/lib/bpf/btf.c
++++ b/tools/lib/bpf/btf.c
+@@ -598,7 +598,7 @@ static int btf_sanity_check(const struct btf *btf)
+ 	__u32 i, n = btf__type_cnt(btf);
+ 	int err;
+ 
+-	for (i = 1; i < n; i++) {
++	for (i = btf->start_id; i < n; i++) {
+ 		t = btf_type_by_id(btf, i);
+ 		err = btf_validate_type(btf, t, i);
+ 		if (err)
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 5dbca76b953f4..894860111ddb2 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -1559,10 +1559,12 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
+ 			 * Clang for BPF target generates func_proto with no
+ 			 * args as a func_proto with a single void arg (e.g.,
+ 			 * `int (*f)(void)` vs just `int (*f)()`). We are
+-			 * going to pretend there are no args for such case.
++			 * going to emit valid empty args (void) syntax for
++			 * such case. Similarly and conveniently, valid
++			 * no args case can be special-cased here as well.
+ 			 */
+-			if (vlen == 1 && p->type == 0) {
+-				btf_dump_printf(d, ")");
++			if (vlen == 0 || (vlen == 1 && p->type == 0)) {
++				btf_dump_printf(d, "void)");
+ 				return;
+ 			}
+ 
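
The btf_dump change above matters because an empty parameter list and '(void)' are not the same C declaration: 'int (*f)()' historically left the parameters unspecified, while 'int (*f)(void)' unambiguously declares a function taking no arguments, so the dumper now spells out '(void)' in both the no-args and single-void-arg cases. A tiny example of the form the dump now emits:

#include <stdio.h>

static int answer(void)
{
	return 42;
}

int main(void)
{
	/* What btf_dump now prints for a no-arg func_proto: explicit (void). */
	int (*f)(void) = answer;

	/* An empty list would merely leave the parameters unspecified in
	 * older C dialects, so the dump avoids it. */
	printf("%d\n", f());
	return 0;
}
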
+diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
+index a0dcfb82e455d..7e7e686008c62 100644
+--- a/tools/lib/bpf/libbpf_internal.h
++++ b/tools/lib/bpf/libbpf_internal.h
+@@ -597,13 +597,9 @@ static inline int ensure_good_fd(int fd)
+ 	return fd;
+ }
+ 
+-static inline int sys_dup2(int oldfd, int newfd)
++static inline int sys_dup3(int oldfd, int newfd, int flags)
+ {
+-#ifdef __NR_dup2
+-	return syscall(__NR_dup2, oldfd, newfd);
+-#else
+-	return syscall(__NR_dup3, oldfd, newfd, 0);
+-#endif
++	return syscall(__NR_dup3, oldfd, newfd, flags);
+ }
+ 
+ /* Point *fixed_fd* to the same file that *tmp_fd* points to.
+@@ -614,7 +610,7 @@ static inline int reuse_fd(int fixed_fd, int tmp_fd)
+ {
+ 	int err;
+ 
+-	err = sys_dup2(tmp_fd, fixed_fd);
++	err = sys_dup3(tmp_fd, fixed_fd, O_CLOEXEC);
+ 	err = err < 0 ? -errno : 0;
+ 	close(tmp_fd); /* clean up temporary FD */
+ 	return err;
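
reuse_fd() above switches from dup2 to dup3 so the duplicated descriptor is created with O_CLOEXEC atomically, instead of duplicating first and setting the flag afterwards. A minimal userspace illustration using glibc's dup3() wrapper; the file path and the target descriptor number are arbitrary choices for the example:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int tmp_fd = open("/etc/hostname", O_RDONLY | O_CLOEXEC);
	int fixed_fd = 100; /* some pre-chosen descriptor number */

	if (tmp_fd < 0) {
		perror("open");
		return 1;
	}

	/* Atomically make fixed_fd refer to the same file, close-on-exec set. */
	if (dup3(tmp_fd, fixed_fd, O_CLOEXEC) < 0) {
		perror("dup3");
		close(tmp_fd);
		return 1;
	}
	close(tmp_fd); /* the temporary descriptor is no longer needed */

	printf("fd %d has FD_CLOEXEC: %s\n", fixed_fd,
	       (fcntl(fixed_fd, F_GETFD) & FD_CLOEXEC) ? "yes" : "no");
	close(fixed_fd);
	return 0;
}
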
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index 0d4be829551b5..5a583053e3119 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -2213,10 +2213,17 @@ static int linker_fixup_btf(struct src_obj *obj)
+ 		vi = btf_var_secinfos(t);
+ 		for (j = 0, m = btf_vlen(t); j < m; j++, vi++) {
+ 			const struct btf_type *vt = btf__type_by_id(obj->btf, vi->type);
+-			const char *var_name = btf__str_by_offset(obj->btf, vt->name_off);
+-			int var_linkage = btf_var(vt)->linkage;
++			const char *var_name;
++			int var_linkage;
+ 			Elf64_Sym *sym;
+ 
++			/* could be a variable or function */
++			if (!btf_is_var(vt))
++				continue;
++
++			var_name = btf__str_by_offset(obj->btf, vt->name_off);
++			var_linkage = btf_var(vt)->linkage;
++
+ 			/* no need to patch up static or extern vars */
+ 			if (var_linkage != BTF_VAR_GLOBAL_ALLOCATED)
+ 				continue;
+diff --git a/tools/memory-model/lock.cat b/tools/memory-model/lock.cat
+index 53b5a492739d0..21ba650869383 100644
+--- a/tools/memory-model/lock.cat
++++ b/tools/memory-model/lock.cat
+@@ -102,19 +102,19 @@ let rf-lf = rfe-lf | rfi-lf
+  * within one of the lock's critical sections returns False.
+  *)
+ 
+-(* rfi for RU events: an RU may read from the last po-previous UL *)
+-let rfi-ru = ([UL] ; po-loc ; [RU]) \ ([UL] ; po-loc ; [LKW] ; po-loc)
+-
+-(* rfe for RU events: an RU may read from an external UL or the initial write *)
+-let all-possible-rfe-ru =
+-	let possible-rfe-ru r =
++(*
++ * rf for RU events: an RU may read from an external UL or the initial write,
++ * or from the last po-previous UL
++ *)
++let all-possible-rf-ru =
++	let possible-rf-ru r =
+ 		let pair-to-relation p = p ++ 0
+-		in map pair-to-relation (((UL | IW) * {r}) & loc & ext)
+-	in map possible-rfe-ru RU
++		in map pair-to-relation ((((UL | IW) * {r}) & loc & ext) |
++			(((UL * {r}) & po-loc) \ ([UL] ; po-loc ; [LKW] ; po-loc)))
++	in map possible-rf-ru RU
+ 
+ (* Generate all rf relations for RU events *)
+-with rfe-ru from cross(all-possible-rfe-ru)
+-let rf-ru = rfe-ru | rfi-ru
++with rf-ru from cross(all-possible-rf-ru)
+ 
+ (* Final rf relation *)
+ let rf = rf | rf-lf | rf-ru
+diff --git a/tools/objtool/noreturns.h b/tools/objtool/noreturns.h
+index 7ebf29c911849..1e8141ef1b15d 100644
+--- a/tools/objtool/noreturns.h
++++ b/tools/objtool/noreturns.h
+@@ -7,12 +7,16 @@
+  * Yes, this is unfortunate.  A better solution is in the works.
+  */
+ NORETURN(__fortify_panic)
++NORETURN(__ia32_sys_exit)
++NORETURN(__ia32_sys_exit_group)
+ NORETURN(__kunit_abort)
+ NORETURN(__module_put_and_kthread_exit)
+ NORETURN(__reiserfs_panic)
+ NORETURN(__stack_chk_fail)
+ NORETURN(__tdx_hypercall_failed)
+ NORETURN(__ubsan_handle_builtin_unreachable)
++NORETURN(__x64_sys_exit)
++NORETURN(__x64_sys_exit_group)
+ NORETURN(arch_cpu_idle_dead)
+ NORETURN(bch2_trans_in_restart_error)
+ NORETURN(bch2_trans_restart_error)
+diff --git a/tools/perf/arch/powerpc/util/skip-callchain-idx.c b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+index 5f3edb3004d84..356786432fd3c 100644
+--- a/tools/perf/arch/powerpc/util/skip-callchain-idx.c
++++ b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+@@ -159,9 +159,9 @@ static int check_return_addr(struct dso *dso, u64 map_start, Dwarf_Addr pc)
+ 	Dwarf_Addr	start = pc;
+ 	Dwarf_Addr	end = pc;
+ 	bool		signalp;
+-	const char	*exec_file = dso->long_name;
++	const char	*exec_file = dso__long_name(dso);
+ 
+-	dwfl = dso->dwfl;
++	dwfl = RC_CHK_ACCESS(dso)->dwfl;
+ 
+ 	if (!dwfl) {
+ 		dwfl = dwfl_begin(&offline_callbacks);
+@@ -183,7 +183,7 @@ static int check_return_addr(struct dso *dso, u64 map_start, Dwarf_Addr pc)
+ 			dwfl_end(dwfl);
+ 			goto out;
+ 		}
+-		dso->dwfl = dwfl;
++		RC_CHK_ACCESS(dso)->dwfl = dwfl;
+ 	}
+ 
+ 	mod = dwfl_addrmodule(dwfl, pc);
+@@ -267,7 +267,7 @@ int arch_skip_callchain_idx(struct thread *thread, struct ip_callchain *chain)
+ 	rc = check_return_addr(dso, map__start(al.map), ip);
+ 
+ 	pr_debug("[DSO %s, sym %s, ip 0x%" PRIx64 "] rc %d\n",
+-				dso->long_name, al.sym->name, ip, rc);
++		dso__long_name(dso), al.sym->name, ip, rc);
+ 
+ 	if (rc == 0) {
+ 		/*
+diff --git a/tools/perf/arch/x86/util/intel-pt.c b/tools/perf/arch/x86/util/intel-pt.c
+index 6de7e2d210756..4b710e875953a 100644
+--- a/tools/perf/arch/x86/util/intel-pt.c
++++ b/tools/perf/arch/x86/util/intel-pt.c
+@@ -32,6 +32,7 @@
+ #include "../../../util/tsc.h"
+ #include <internal/lib.h> // page_size
+ #include "../../../util/intel-pt.h"
++#include <api/fs/fs.h>
+ 
+ #define KiB(x) ((x) * 1024)
+ #define MiB(x) ((x) * 1024 * 1024)
+@@ -428,6 +429,16 @@ static int intel_pt_track_switches(struct evlist *evlist)
+ }
+ #endif
+ 
++static bool intel_pt_exclude_guest(void)
++{
++	int pt_mode;
++
++	if (sysfs__read_int("module/kvm_intel/parameters/pt_mode", &pt_mode))
++		pt_mode = 0;
++
++	return pt_mode == 1;
++}
++
+ static void intel_pt_valid_str(char *str, size_t len, u64 valid)
+ {
+ 	unsigned int val, last = 0, state = 1;
+@@ -620,6 +631,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
+ 			}
+ 			evsel->core.attr.freq = 0;
+ 			evsel->core.attr.sample_period = 1;
++			evsel->core.attr.exclude_guest = intel_pt_exclude_guest();
+ 			evsel->no_aux_samples = true;
+ 			evsel->needs_auxtrace_mmap = true;
+ 			intel_pt_evsel = evsel;
+@@ -758,7 +770,8 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
+ 	}
+ 
+ 	if (!opts->auxtrace_snapshot_mode && !opts->auxtrace_sample_mode) {
+-		u32 aux_watermark = opts->auxtrace_mmap_pages * page_size / 4;
++		size_t aw = opts->auxtrace_mmap_pages * (size_t)page_size / 4;
++		u32 aux_watermark = aw > UINT_MAX ? UINT_MAX : aw;
+ 
+ 		intel_pt_evsel->core.attr.aux_watermark = aux_watermark;
+ 	}
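
The intel-pt hunk above guards against a 32-bit overflow: auxtrace_mmap_pages * page_size / 4 is computed in a wide type and clamped to UINT_MAX before being stored in the 32-bit aux_watermark attribute. The clamp in isolation, with deliberately large (hypothetical) inputs:

#include <inttypes.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Large enough that a plain 32-bit result would overflow. */
	uint64_t mmap_pages = 1u << 24;  /* 16M pages, purely hypothetical */
	uint64_t page_size = 4096;

	uint64_t aw = mmap_pages * page_size / 4;   /* wide arithmetic */
	uint32_t aux_watermark = aw > UINT_MAX ? UINT_MAX : (uint32_t)aw;

	printf("wide result: %" PRIu64 "\n", aw);
	printf("clamped u32: %u\n", aux_watermark);
	return 0;
}
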
+diff --git a/tools/perf/tests/shell/test_arm_callgraph_fp.sh b/tools/perf/tests/shell/test_arm_callgraph_fp.sh
+index 61898e2566160..9caa361301759 100755
+--- a/tools/perf/tests/shell/test_arm_callgraph_fp.sh
++++ b/tools/perf/tests/shell/test_arm_callgraph_fp.sh
+@@ -28,28 +28,21 @@ cleanup_files()
+ 
+ trap cleanup_files EXIT TERM INT
+ 
+-# Add a 1 second delay to skip samples that are not in the leaf() function
+ # shellcheck disable=SC2086
+-perf record -o "$PERF_DATA" --call-graph fp -e cycles//u -D 1000 --user-callchains -- $TEST_PROGRAM 2> /dev/null &
+-PID=$!
++perf record -o "$PERF_DATA" --call-graph fp -e cycles//u --user-callchains -- $TEST_PROGRAM
+ 
+-echo " + Recording (PID=$PID)..."
+-sleep 2
+-echo " + Stopping perf-record..."
+-
+-kill $PID
+-wait $PID
++# Try opening the file so any immediate errors are visible in the log
++perf script -i "$PERF_DATA" -F comm,ip,sym | head -n4
+ 
+-# expected perf-script output:
++# expected perf-script output if 'leaf' has been inserted correctly:
+ #
+-# program
++# perf
+ # 	728 leaf
+ # 	753 parent
+ # 	76c leafloop
+-# ...
++# ... remaining stack to main() ...
+ 
+-perf script -i "$PERF_DATA" -F comm,ip,sym | head -n4
+-perf script -i "$PERF_DATA" -F comm,ip,sym | head -n4 | \
+-	awk '{ if ($2 != "") sym[i++] = $2 } END { if (sym[0] != "leaf" ||
+-						       sym[1] != "parent" ||
+-						       sym[2] != "leafloop") exit 1 }'
++# Each frame is separated by a tab, some spaces and an address
++SEP="[[:space:]]+ [[:xdigit:]]+"
++perf script -i "$PERF_DATA" -F comm,ip,sym | tr '\n' ' ' | \
++	grep -E -q "perf $SEP leaf $SEP parent $SEP leafloop"
+diff --git a/tools/perf/tests/workloads/leafloop.c b/tools/perf/tests/workloads/leafloop.c
+index 1bf5cc97649b0..f7561767e32cd 100644
+--- a/tools/perf/tests/workloads/leafloop.c
++++ b/tools/perf/tests/workloads/leafloop.c
+@@ -1,6 +1,8 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
++#include <signal.h>
+ #include <stdlib.h>
+ #include <linux/compiler.h>
++#include <unistd.h>
+ #include "../tests.h"
+ 
+ /* We want to check these symbols in perf script */
+@@ -8,10 +10,16 @@ noinline void leaf(volatile int b);
+ noinline void parent(volatile int b);
+ 
+ static volatile int a;
++static volatile sig_atomic_t done;
++
++static void sighandler(int sig __maybe_unused)
++{
++	done = 1;
++}
+ 
+ noinline void leaf(volatile int b)
+ {
+-	for (;;)
++	while (!done)
+ 		a += b;
+ }
+ 
+@@ -22,12 +30,16 @@ noinline void parent(volatile int b)
+ 
+ static int leafloop(int argc, const char **argv)
+ {
+-	int c = 1;
++	int sec = 1;
+ 
+ 	if (argc > 0)
+-		c = atoi(argv[0]);
++		sec = atoi(argv[0]);
++
++	signal(SIGINT, sighandler);
++	signal(SIGALRM, sighandler);
++	alarm(sec);
+ 
+-	parent(c);
++	parent(sec);
+ 	return 0;
+ }
+ 
+diff --git a/tools/perf/ui/gtk/annotate.c b/tools/perf/ui/gtk/annotate.c
+index 93ce3d47e47e6..6da24aa039ebb 100644
+--- a/tools/perf/ui/gtk/annotate.c
++++ b/tools/perf/ui/gtk/annotate.c
+@@ -180,13 +180,14 @@ static int symbol__gtk_annotate(struct map_symbol *ms, struct evsel *evsel,
+ 	GtkWidget *tab_label;
+ 	int err;
+ 
+-	if (dso->annotate_warned)
++	if (dso__annotate_warned(dso))
+ 		return -1;
+ 
+ 	err = symbol__annotate(ms, evsel, NULL);
+ 	if (err) {
+ 		char msg[BUFSIZ];
+-		dso->annotate_warned = true;
++
++		dso__set_annotate_warned(dso);
+ 		symbol__strerror_disassemble(ms, err, msg, sizeof(msg));
+ 		ui__error("Couldn't annotate %s: %s\n", sym->name, msg);
+ 		return -1;
+diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
+index 32818bd7cd177..5e9fbcfad7d44 100644
+--- a/tools/perf/util/cs-etm.c
++++ b/tools/perf/util/cs-etm.c
+@@ -1013,7 +1013,7 @@ static u32 cs_etm__mem_access(struct cs_etm_queue *etmq, u8 trace_chan_id,
+ 	if (!dso)
+ 		goto out;
+ 
+-	if (dso->data.status == DSO_DATA_STATUS_ERROR &&
++	if (dso__data(dso)->status == DSO_DATA_STATUS_ERROR &&
+ 	    dso__data_status_seen(dso, DSO_DATA_STATUS_SEEN_ITRACE))
+ 		goto out;
+ 
+@@ -1027,11 +1027,11 @@ static u32 cs_etm__mem_access(struct cs_etm_queue *etmq, u8 trace_chan_id,
+ 	if (len <= 0) {
+ 		ui__warning_once("CS ETM Trace: Missing DSO. Use 'perf archive' or debuginfod to export data from the traced system.\n"
+ 				 "              Enable CONFIG_PROC_KCORE or use option '-k /path/to/vmlinux' for kernel symbols.\n");
+-		if (!dso->auxtrace_warned) {
++		if (!dso__auxtrace_warned(dso)) {
+ 			pr_err("CS ETM Trace: Debug data not found for address %#"PRIx64" in %s\n",
+-				    address,
+-				    dso->long_name ? dso->long_name : "Unknown");
+-			dso->auxtrace_warned = true;
++				address,
++				dso__long_name(dso) ? dso__long_name(dso) : "Unknown");
++			dso__set_auxtrace_warned(dso);
+ 		}
+ 		goto out;
+ 	}
+diff --git a/tools/perf/util/disasm.c b/tools/perf/util/disasm.c
+index 72aec8f61b944..e10558b79504b 100644
+--- a/tools/perf/util/disasm.c
++++ b/tools/perf/util/disasm.c
+@@ -1199,7 +1199,7 @@ static int symbol__disassemble_bpf(struct symbol *sym,
+ 	int ret;
+ 	FILE *s;
+ 
+-	if (dso->binary_type != DSO_BINARY_TYPE__BPF_PROG_INFO)
++	if (dso__binary_type(dso) != DSO_BINARY_TYPE__BPF_PROG_INFO)
+ 		return SYMBOL_ANNOTATE_ERRNO__BPF_INVALID_FILE;
+ 
+ 	pr_debug("%s: handling sym %s addr %" PRIx64 " len %" PRIx64 "\n", __func__,
+@@ -1226,14 +1226,14 @@ static int symbol__disassemble_bpf(struct symbol *sym,
+ 	info.arch = bfd_get_arch(bfdf);
+ 	info.mach = bfd_get_mach(bfdf);
+ 
+-	info_node = perf_env__find_bpf_prog_info(dso->bpf_prog.env,
+-						 dso->bpf_prog.id);
++	info_node = perf_env__find_bpf_prog_info(dso__bpf_prog(dso)->env,
++						 dso__bpf_prog(dso)->id);
+ 	if (!info_node) {
+ 		ret = SYMBOL_ANNOTATE_ERRNO__BPF_MISSING_BTF;
+ 		goto out;
+ 	}
+ 	info_linear = info_node->info_linear;
+-	sub_id = dso->bpf_prog.sub_id;
++	sub_id = dso__bpf_prog(dso)->sub_id;
+ 
+ 	info.buffer = (void *)(uintptr_t)(info_linear->info.jited_prog_insns);
+ 	info.buffer_length = info_linear->info.jited_prog_len;
+@@ -1244,7 +1244,7 @@ static int symbol__disassemble_bpf(struct symbol *sym,
+ 	if (info_linear->info.btf_id) {
+ 		struct btf_node *node;
+ 
+-		node = perf_env__find_btf(dso->bpf_prog.env,
++		node = perf_env__find_btf(dso__bpf_prog(dso)->env,
+ 					  info_linear->info.btf_id);
+ 		if (node)
+ 			btf = btf__new((__u8 *)(node->data),
+diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
+index dde706b71da7b..67414944f2457 100644
+--- a/tools/perf/util/dso.c
++++ b/tools/perf/util/dso.c
+@@ -1501,7 +1501,7 @@ void dso__delete(struct dso *dso)
+ 	auxtrace_cache__free(RC_CHK_ACCESS(dso)->auxtrace_cache);
+ 	dso_cache__free(dso);
+ 	dso__free_a2l(dso);
+-	zfree(&RC_CHK_ACCESS(dso)->symsrc_filename);
++	dso__free_symsrc_filename(dso);
+ 	nsinfo__zput(RC_CHK_ACCESS(dso)->nsinfo);
+ 	mutex_destroy(dso__lock(dso));
+ 	RC_CHK_FREE(dso);
+@@ -1652,3 +1652,15 @@ int dso__strerror_load(struct dso *dso, char *buf, size_t buflen)
+ 	scnprintf(buf, buflen, "%s", dso_load__error_str[idx]);
+ 	return 0;
+ }
++
++bool perf_pid_map_tid(const char *dso_name, int *tid)
++{
++	return sscanf(dso_name, "/tmp/perf-%d.map", tid) == 1;
++}
++
++bool is_perf_pid_map_name(const char *dso_name)
++{
++	int tid;
++
++	return perf_pid_map_tid(dso_name, &tid);
++}
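
The two helpers added to dso.c centralize the "/tmp/perf-<pid>.map" naming convention behind a single sscanf, so callers stop pattern-matching the prefix by hand. They are self-contained enough to exercise directly; a quick standalone check (the sample paths are made up):

#include <stdbool.h>
#include <stdio.h>

static bool perf_pid_map_tid(const char *dso_name, int *tid)
{
	return sscanf(dso_name, "/tmp/perf-%d.map", tid) == 1;
}

int main(void)
{
	int tid;

	if (perf_pid_map_tid("/tmp/perf-1234.map", &tid))
		printf("matched, tid=%d\n", tid);   /* prints tid=1234 */

	if (!perf_pid_map_tid("/tmp/perf.data", &tid))
		printf("non-map path rejected\n");
	return 0;
}
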
+diff --git a/tools/perf/util/dso.h b/tools/perf/util/dso.h
+index df2c98402af3e..ed0068251c655 100644
+--- a/tools/perf/util/dso.h
++++ b/tools/perf/util/dso.h
+@@ -280,6 +280,16 @@ static inline void dso__set_annotate_warned(struct dso *dso)
+ 	RC_CHK_ACCESS(dso)->annotate_warned = 1;
+ }
+ 
++static inline bool dso__auxtrace_warned(const struct dso *dso)
++{
++	return RC_CHK_ACCESS(dso)->auxtrace_warned;
++}
++
++static inline void dso__set_auxtrace_warned(struct dso *dso)
++{
++	RC_CHK_ACCESS(dso)->auxtrace_warned = 1;
++}
++
+ static inline struct auxtrace_cache *dso__auxtrace_cache(struct dso *dso)
+ {
+ 	return RC_CHK_ACCESS(dso)->auxtrace_cache;
+@@ -592,6 +602,11 @@ static inline void dso__set_symsrc_filename(struct dso *dso, char *val)
+ 	RC_CHK_ACCESS(dso)->symsrc_filename = val;
+ }
+ 
++static inline void dso__free_symsrc_filename(struct dso *dso)
++{
++	zfree(&RC_CHK_ACCESS(dso)->symsrc_filename);
++}
++
+ static inline enum dso_binary_type dso__symtab_type(const struct dso *dso)
+ {
+ 	return RC_CHK_ACCESS(dso)->symtab_type;
+@@ -809,4 +824,8 @@ void reset_fd_limit(void);
+ u64 dso__find_global_type(struct dso *dso, u64 addr);
+ u64 dso__findnew_global_type(struct dso *dso, u64 addr, u64 offset);
+ 
++/* Check if dso name is of format "/tmp/perf-%d.map" */
++bool perf_pid_map_tid(const char *dso_name, int *tid);
++bool is_perf_pid_map_name(const char *dso_name);
++
+ #endif /* __PERF_DSO */
+diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
+index 16b39db594f4c..eaada3e0f5b4e 100644
+--- a/tools/perf/util/maps.c
++++ b/tools/perf/util/maps.c
+@@ -741,7 +741,6 @@ static unsigned int first_ending_after(struct maps *maps, const struct map *map)
+  */
+ static int __maps__fixup_overlap_and_insert(struct maps *maps, struct map *new)
+ {
+-	struct map **maps_by_address;
+ 	int err = 0;
+ 	FILE *fp = debug_file();
+ 
+@@ -749,12 +748,12 @@ static int __maps__fixup_overlap_and_insert(struct maps *maps, struct map *new)
+ 	if (!maps__maps_by_address_sorted(maps))
+ 		__maps__sort_by_address(maps);
+ 
+-	maps_by_address = maps__maps_by_address(maps);
+ 	/*
+ 	 * Iterate through entries where the end of the existing entry is
+ 	 * greater-than the new map's start.
+ 	 */
+ 	for (unsigned int i = first_ending_after(maps, new); i < maps__nr_maps(maps); ) {
++		struct map **maps_by_address = maps__maps_by_address(maps);
+ 		struct map *pos = maps_by_address[i];
+ 		struct map *before = NULL, *after = NULL;
+ 
+@@ -821,8 +820,10 @@ static int __maps__fixup_overlap_and_insert(struct maps *maps, struct map *new)
+ 			/* Maps are still ordered, go to next one. */
+ 			i++;
+ 			if (after) {
+-				__maps__insert(maps, after);
++				err = __maps__insert(maps, after);
+ 				map__put(after);
++				if (err)
++					goto out_err;
+ 				if (!maps__maps_by_address_sorted(maps)) {
+ 					/*
+ 					 * Sorting broken so invariants don't
+@@ -851,7 +852,7 @@ static int __maps__fixup_overlap_and_insert(struct maps *maps, struct map *new)
+ 		check_invariants(maps);
+ 	}
+ 	/* Add the map. */
+-	__maps__insert(maps, new);
++	err = __maps__insert(maps, new);
+ out_err:
+ 	return err;
+ }
+diff --git a/tools/perf/util/pmus.c b/tools/perf/util/pmus.c
+index b9b4c5eb50027..6907e3e7fbd16 100644
+--- a/tools/perf/util/pmus.c
++++ b/tools/perf/util/pmus.c
+@@ -477,8 +477,8 @@ void perf_pmus__print_pmu_events(const struct print_callbacks *print_cb, void *p
+ 	qsort(aliases, len, sizeof(struct sevent), cmp_sevent);
+ 	for (int j = 0; j < len; j++) {
+ 		/* Skip duplicates */
+-		if (j > 0 && pmu_alias_is_duplicate(&aliases[j], &aliases[j - 1]))
+-			continue;
++		if (j < len - 1 && pmu_alias_is_duplicate(&aliases[j], &aliases[j + 1]))
++			goto free;
+ 
+ 		print_cb->print_event(print_state,
+ 				aliases[j].pmu_name,
+@@ -491,6 +491,7 @@ void perf_pmus__print_pmu_events(const struct print_callbacks *print_cb, void *p
+ 				aliases[j].desc,
+ 				aliases[j].long_desc,
+ 				aliases[j].encoding_desc);
++free:
+ 		zfree(&aliases[j].name);
+ 		zfree(&aliases[j].alias);
+ 		zfree(&aliases[j].scale_unit);
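
The pmus.c fix changes duplicate suppression from looking back at the previously printed alias to looking ahead at the next one, so that skipped duplicates still have their strings freed instead of leaking. The same look-ahead shape on a plain sorted string array, with illustrative data only:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* Sorted list with adjacent duplicates, heap-allocated like the aliases. */
	const char *src[] = { "branches", "cycles", "cycles", "instructions" };
	size_t len = sizeof(src) / sizeof(src[0]);
	char *names[4];
	size_t i;

	for (i = 0; i < len; i++)
		names[i] = strdup(src[i]);

	for (i = 0; i < len; i++) {
		/* Skip printing if the *next* entry is the same... */
		if (i < len - 1 && strcmp(names[i], names[i + 1]) == 0)
			goto free_entry;

		printf("%s\n", names[i]);
free_entry:
		/* ...but always free the current entry, duplicate or not. */
		free(names[i]);
	}
	return 0;
}
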
+diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
+index cd39ea9721937..ab7c7ff35f9bb 100644
+--- a/tools/perf/util/sort.c
++++ b/tools/perf/util/sort.c
+@@ -334,7 +334,7 @@ sort__sym_cmp(struct hist_entry *left, struct hist_entry *right)
+ 	 * comparing symbol address alone is not enough since it's a
+ 	 * relative address within a dso.
+ 	 */
+-	if (!hists__has(left->hists, dso) || hists__has(right->hists, dso)) {
++	if (!hists__has(left->hists, dso)) {
+ 		ret = sort__dso_cmp(left, right);
+ 		if (ret != 0)
+ 			return ret;
+diff --git a/tools/perf/util/srcline.c b/tools/perf/util/srcline.c
+index 9d670d8c1c089..4d67c1e095a18 100644
+--- a/tools/perf/util/srcline.c
++++ b/tools/perf/util/srcline.c
+@@ -288,7 +288,7 @@ static int inline_list__append_dso_a2l(struct dso *dso,
+ 				       struct inline_node *node,
+ 				       struct symbol *sym)
+ {
+-	struct a2l_data *a2l = dso->a2l;
++	struct a2l_data *a2l = dso__a2l(dso);
+ 	struct symbol *inline_sym = new_inline_sym(dso, sym, a2l->funcname);
+ 	char *srcline = NULL;
+ 
+@@ -304,11 +304,11 @@ static int addr2line(const char *dso_name, u64 addr,
+ 		     struct symbol *sym)
+ {
+ 	int ret = 0;
+-	struct a2l_data *a2l = dso->a2l;
++	struct a2l_data *a2l = dso__a2l(dso);
+ 
+ 	if (!a2l) {
+-		dso->a2l = addr2line_init(dso_name);
+-		a2l = dso->a2l;
++		a2l = addr2line_init(dso_name);
++		dso__set_a2l(dso, a2l);
+ 	}
+ 
+ 	if (a2l == NULL) {
+@@ -360,14 +360,14 @@ static int addr2line(const char *dso_name, u64 addr,
+ 
+ void dso__free_a2l(struct dso *dso)
+ {
+-	struct a2l_data *a2l = dso->a2l;
++	struct a2l_data *a2l = dso__a2l(dso);
+ 
+ 	if (!a2l)
+ 		return;
+ 
+ 	addr2line_cleanup(a2l);
+ 
+-	dso->a2l = NULL;
++	dso__set_a2l(dso, NULL);
+ }
+ 
+ #else /* HAVE_LIBBFD_SUPPORT */
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index 91d2f7f65df74..186305fd2d0ef 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -38,6 +38,7 @@
+ static int aggr_header_lens[] = {
+ 	[AGGR_CORE] 	= 18,
+ 	[AGGR_CACHE]	= 22,
++	[AGGR_CLUSTER]	= 20,
+ 	[AGGR_DIE] 	= 12,
+ 	[AGGR_SOCKET] 	= 6,
+ 	[AGGR_NODE] 	= 6,
+@@ -49,6 +50,7 @@ static int aggr_header_lens[] = {
+ static const char *aggr_header_csv[] = {
+ 	[AGGR_CORE] 	= 	"core,cpus,",
+ 	[AGGR_CACHE]	= 	"cache,cpus,",
++	[AGGR_CLUSTER]	= 	"cluster,cpus,",
+ 	[AGGR_DIE] 	= 	"die,cpus,",
+ 	[AGGR_SOCKET] 	= 	"socket,cpus,",
+ 	[AGGR_NONE] 	= 	"cpu,",
+@@ -60,6 +62,7 @@ static const char *aggr_header_csv[] = {
+ static const char *aggr_header_std[] = {
+ 	[AGGR_CORE] 	= 	"core",
+ 	[AGGR_CACHE] 	= 	"cache",
++	[AGGR_CLUSTER]	= 	"cluster",
+ 	[AGGR_DIE] 	= 	"die",
+ 	[AGGR_SOCKET] 	= 	"socket",
+ 	[AGGR_NONE] 	= 	"cpu",
+diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
+index 3466aa9524421..6bb975e46de37 100644
+--- a/tools/perf/util/stat-shadow.c
++++ b/tools/perf/util/stat-shadow.c
+@@ -176,6 +176,13 @@ static double find_stat(const struct evsel *evsel, int aggr_idx, enum stat_type
+ 		if (type != evsel__stat_type(cur))
+ 			continue;
+ 
++		/*
++		 * Except for the SW CLOCK events,
++		 * ignore the event if it is not from the PMU we're looking for.
++		 */
++		if ((type != STAT_NSECS) && (evsel->pmu != cur->pmu))
++			continue;
++
+ 		aggr = &cur->stats->aggr[aggr_idx];
+ 		if (type == STAT_NSECS)
+ 			return aggr->counts.val;
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 9e5940b5bc591..22646f0cca7da 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -1607,7 +1607,7 @@ int dso__load_bfd_symbols(struct dso *dso, const char *debugfile)
+ 
+ 	if (!bfd_check_format(abfd, bfd_object)) {
+ 		pr_debug2("%s: cannot read %s bfd file.\n", __func__,
+-			  dso->long_name);
++			  dso__long_name(dso));
+ 		goto out_close;
+ 	}
+ 
+@@ -1640,12 +1640,13 @@ int dso__load_bfd_symbols(struct dso *dso, const char *debugfile)
+ 		}
+ 		if (i < symbols_count) {
+ 			/* PE symbols can only have 4 bytes, so use .text high bits */
+-			dso->text_offset = section->vma - (u32)section->vma;
+-			dso->text_offset += (u32)bfd_asymbol_value(symbols[i]);
+-			dso->text_end = (section->vma - dso->text_offset) + section->size;
++			u64 text_offset = (section->vma - (u32)section->vma)
++				+ (u32)bfd_asymbol_value(symbols[i]);
++			dso__set_text_offset(dso, text_offset);
++			dso__set_text_end(dso, (section->vma - text_offset) + section->size);
+ 		} else {
+-			dso->text_offset = section->vma - section->filepos;
+-			dso->text_end = section->filepos + section->size;
++			dso__set_text_offset(dso, section->vma - section->filepos);
++			dso__set_text_end(dso, section->filepos + section->size);
+ 		}
+ 	}
+ 
+@@ -1671,7 +1672,7 @@ int dso__load_bfd_symbols(struct dso *dso, const char *debugfile)
+ 		else
+ 			len = section->size - sym->value;
+ 
+-		start = bfd_asymbol_value(sym) - dso->text_offset;
++		start = bfd_asymbol_value(sym) - dso__text_offset(dso);
+ 		symbol = symbol__new(start, len, bfd2elf_binding(sym), STT_FUNC,
+ 				     bfd_asymbol_name(sym));
+ 		if (!symbol)
+@@ -1799,7 +1800,8 @@ int dso__load(struct dso *dso, struct map *map)
+ 	const char *map_path = dso__long_name(dso);
+ 
+ 	mutex_lock(dso__lock(dso));
+-	perfmap = strncmp(dso__name(dso), "/tmp/perf-", 10) == 0;
++	perfmap = is_perf_pid_map_name(map_path);
++
+ 	if (perfmap) {
+ 		if (dso__nsinfo(dso) &&
+ 		    (dso__find_perf_map(newmapname, sizeof(newmapname),
+diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
+index b38d322734b4a..bde216e630d29 100644
+--- a/tools/perf/util/unwind-libdw.c
++++ b/tools/perf/util/unwind-libdw.c
+@@ -29,8 +29,8 @@ static int __find_debuginfo(Dwfl_Module *mod __maybe_unused, void **userdata,
+ 	const struct dso *dso = *userdata;
+ 
+ 	assert(dso);
+-	if (dso->symsrc_filename && strcmp (file_name, dso->symsrc_filename))
+-		*debuginfo_file_name = strdup(dso->symsrc_filename);
++	if (dso__symsrc_filename(dso) && strcmp(file_name, dso__symsrc_filename(dso)))
++		*debuginfo_file_name = strdup(dso__symsrc_filename(dso));
+ 	return -1;
+ }
+ 
+@@ -66,7 +66,7 @@ static int __report_module(struct addr_location *al, u64 ip,
+ 	 * a different code in another DSO.  So just use the map->start
+ 	 * directly to pick the correct one.
+ 	 */
+-	if (!strncmp(dso->long_name, "/tmp/jitted-", 12))
++	if (!strncmp(dso__long_name(dso), "/tmp/jitted-", 12))
+ 		base = map__start(al->map);
+ 	else
+ 		base = map__start(al->map) - map__pgoff(al->map);
+@@ -83,15 +83,15 @@ static int __report_module(struct addr_location *al, u64 ip,
+ 	if (!mod) {
+ 		char filename[PATH_MAX];
+ 
+-		__symbol__join_symfs(filename, sizeof(filename), dso->long_name);
+-		mod = dwfl_report_elf(ui->dwfl, dso->short_name, filename, -1,
++		__symbol__join_symfs(filename, sizeof(filename), dso__long_name(dso));
++		mod = dwfl_report_elf(ui->dwfl, dso__short_name(dso), filename, -1,
+ 				      base, false);
+ 	}
+ 	if (!mod) {
+ 		char filename[PATH_MAX];
+ 
+ 		if (dso__build_id_filename(dso, filename, sizeof(filename), false))
+-			mod = dwfl_report_elf(ui->dwfl, dso->short_name, filename, -1,
++			mod = dwfl_report_elf(ui->dwfl, dso__short_name(dso), filename, -1,
+ 					      base, false);
+ 	}
+ 
+diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
+index cde267ea3e99e..7460bb96bd225 100644
+--- a/tools/perf/util/unwind-libunwind-local.c
++++ b/tools/perf/util/unwind-libunwind-local.c
+@@ -363,7 +363,7 @@ static int read_unwind_spec_debug_frame(struct dso *dso,
+ 					struct machine *machine, u64 *offset)
+ {
+ 	int fd;
+-	u64 ofs = dso->data.debug_frame_offset;
++	u64 ofs = dso__data(dso)->debug_frame_offset;
+ 
+ 	/* debug_frame can reside in:
+ 	 *  - dso
+@@ -379,7 +379,7 @@ static int read_unwind_spec_debug_frame(struct dso *dso,
+ 		}
+ 
+ 		if (ofs <= 0) {
+-			fd = open(dso->symsrc_filename, O_RDONLY);
++			fd = open(dso__symsrc_filename(dso), O_RDONLY);
+ 			if (fd >= 0) {
+ 				ofs = elf_section_offset(fd, ".debug_frame");
+ 				close(fd);
+@@ -402,21 +402,21 @@ static int read_unwind_spec_debug_frame(struct dso *dso,
+ 				}
+ 			}
+ 			if (ofs > 0) {
+-				if (dso->symsrc_filename != NULL) {
++				if (dso__symsrc_filename(dso) != NULL) {
+ 					pr_warning(
+ 						"%s: overwrite symsrc(%s,%s)\n",
+ 							__func__,
+-							dso->symsrc_filename,
++							dso__symsrc_filename(dso),
+ 							debuglink);
+-					zfree(&dso->symsrc_filename);
++					dso__free_symsrc_filename(dso);
+ 				}
+-				dso->symsrc_filename = debuglink;
++				dso__set_symsrc_filename(dso, debuglink);
+ 			} else {
+ 				free(debuglink);
+ 			}
+ 		}
+ 
+-		dso->data.debug_frame_offset = ofs;
++		dso__data(dso)->debug_frame_offset = ofs;
+ 	}
+ 
+ 	*offset = ofs;
+@@ -481,7 +481,7 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
+ 	if (ret < 0 &&
+ 	    !read_unwind_spec_debug_frame(dso, ui->machine, &segbase)) {
+ 		int fd = dso__data_get_fd(dso, ui->machine);
+-		int is_exec = elf_is_exec(fd, dso->name);
++		int is_exec = elf_is_exec(fd, dso__name(dso));
+ 		u64 start = map__start(map);
+ 		unw_word_t base = is_exec ? 0 : start;
+ 		const char *symfile;
+@@ -489,7 +489,7 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
+ 		if (fd >= 0)
+ 			dso__data_put_fd(dso);
+ 
+-		symfile = dso->symsrc_filename ?: dso->name;
++		symfile = dso__symsrc_filename(dso) ?: dso__name(dso);
+ 
+ 		memset(&di, 0, sizeof(di));
+ 		if (dwarf_find_debug_frame(0, &di, ip, base, symfile, start, map__end(map)))
+diff --git a/tools/testing/selftests/bpf/bpf_kfuncs.h b/tools/testing/selftests/bpf/bpf_kfuncs.h
+index be91a69193158..3b6675ab40861 100644
+--- a/tools/testing/selftests/bpf/bpf_kfuncs.h
++++ b/tools/testing/selftests/bpf/bpf_kfuncs.h
+@@ -77,5 +77,5 @@ extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
+ 				      struct bpf_key *trusted_keyring) __ksym;
+ 
+ extern bool bpf_session_is_return(void) __ksym __weak;
+-extern long *bpf_session_cookie(void) __ksym __weak;
++extern __u64 *bpf_session_cookie(void) __ksym __weak;
+ #endif
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c b/tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c
+index 0aca025327948..3f0daf660703f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c
+@@ -307,7 +307,8 @@ static void test_update_ca(void)
+ 		return;
+ 
+ 	link = bpf_map__attach_struct_ops(skel->maps.ca_update_1);
+-	ASSERT_OK_PTR(link, "attach_struct_ops");
++	if (!ASSERT_OK_PTR(link, "attach_struct_ops"))
++		goto out;
+ 
+ 	do_test("tcp_ca_update", NULL);
+ 	saved_ca1_cnt = skel->bss->ca1_cnt;
+@@ -321,6 +322,7 @@ static void test_update_ca(void)
+ 	ASSERT_GT(skel->bss->ca2_cnt, 0, "ca2_ca2_cnt");
+ 
+ 	bpf_link__destroy(link);
++out:
+ 	tcp_ca_update__destroy(skel);
+ }
+ 
+@@ -336,7 +338,8 @@ static void test_update_wrong(void)
+ 		return;
+ 
+ 	link = bpf_map__attach_struct_ops(skel->maps.ca_update_1);
+-	ASSERT_OK_PTR(link, "attach_struct_ops");
++	if (!ASSERT_OK_PTR(link, "attach_struct_ops"))
++		goto out;
+ 
+ 	do_test("tcp_ca_update", NULL);
+ 	saved_ca1_cnt = skel->bss->ca1_cnt;
+@@ -349,6 +352,7 @@ static void test_update_wrong(void)
+ 	ASSERT_GT(skel->bss->ca1_cnt, saved_ca1_cnt, "ca2_ca1_cnt");
+ 
+ 	bpf_link__destroy(link);
++out:
+ 	tcp_ca_update__destroy(skel);
+ }
+ 
+@@ -363,7 +367,8 @@ static void test_mixed_links(void)
+ 		return;
+ 
+ 	link_nl = bpf_map__attach_struct_ops(skel->maps.ca_no_link);
+-	ASSERT_OK_PTR(link_nl, "attach_struct_ops_nl");
++	if (!ASSERT_OK_PTR(link_nl, "attach_struct_ops_nl"))
++		goto out;
+ 
+ 	link = bpf_map__attach_struct_ops(skel->maps.ca_update_1);
+ 	ASSERT_OK_PTR(link, "attach_struct_ops");
+@@ -376,6 +381,7 @@ static void test_mixed_links(void)
+ 
+ 	bpf_link__destroy(link);
+ 	bpf_link__destroy(link_nl);
++out:
+ 	tcp_ca_update__destroy(skel);
+ }
+ 
+@@ -418,7 +424,8 @@ static void test_link_replace(void)
+ 	bpf_link__destroy(link);
+ 
+ 	link = bpf_map__attach_struct_ops(skel->maps.ca_update_2);
+-	ASSERT_OK_PTR(link, "attach_struct_ops_2nd");
++	if (!ASSERT_OK_PTR(link, "attach_struct_ops_2nd"))
++		goto out;
+ 
+ 	/* BPF_F_REPLACE with a wrong old map Fd. It should fail!
+ 	 *
+@@ -441,6 +448,7 @@ static void test_link_replace(void)
+ 
+ 	bpf_link__destroy(link);
+ 
++out:
+ 	tcp_ca_update__destroy(skel);
+ }
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_sleep.c b/tools/testing/selftests/bpf/prog_tests/fexit_sleep.c
+index f949647dbbc21..552a0875ca6db 100644
+--- a/tools/testing/selftests/bpf/prog_tests/fexit_sleep.c
++++ b/tools/testing/selftests/bpf/prog_tests/fexit_sleep.c
+@@ -21,13 +21,13 @@ static int do_sleep(void *skel)
+ }
+ 
+ #define STACK_SIZE (1024 * 1024)
+-static char child_stack[STACK_SIZE];
+ 
+ void test_fexit_sleep(void)
+ {
+ 	struct fexit_sleep_lskel *fexit_skel = NULL;
+ 	int wstatus, duration = 0;
+ 	pid_t cpid;
++	char *child_stack = NULL;
+ 	int err, fexit_cnt;
+ 
+ 	fexit_skel = fexit_sleep_lskel__open_and_load();
+@@ -38,6 +38,11 @@ void test_fexit_sleep(void)
+ 	if (CHECK(err, "fexit_attach", "fexit attach failed: %d\n", err))
+ 		goto cleanup;
+ 
++	child_stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE |
++			   MAP_ANONYMOUS | MAP_STACK, -1, 0);
++	if (!ASSERT_NEQ(child_stack, MAP_FAILED, "mmap"))
++		goto cleanup;
++
+ 	cpid = clone(do_sleep, child_stack + STACK_SIZE, CLONE_FILES | SIGCHLD, fexit_skel);
+ 	if (CHECK(cpid == -1, "clone", "%s\n", strerror(errno)))
+ 		goto cleanup;
+@@ -78,5 +83,6 @@ void test_fexit_sleep(void)
+ 		goto cleanup;
+ 
+ cleanup:
++	munmap(child_stack, STACK_SIZE);
+ 	fexit_sleep_lskel__destroy(fexit_skel);
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+index 597d0467a9267..de2466547efe0 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
++++ b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+@@ -994,7 +994,7 @@ static void drop_on_reuseport(const struct test *t)
+ 
+ 	err = update_lookup_map(t->sock_map, SERVER_A, server1);
+ 	if (err)
+-		goto detach;
++		goto close_srv1;
+ 
+ 	/* second server on destination address we should never reach */
+ 	server2 = make_server(t->sotype, t->connect_to.ip, t->connect_to.port,
+diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c
+index f09505f8b0386..53d6ad8c2257e 100644
+--- a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c
++++ b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c
+@@ -222,7 +222,7 @@ static void test_xdp_adjust_frags_tail_grow(void)
+ 
+ 	prog = bpf_object__next_program(obj, NULL);
+ 	if (bpf_object__load(obj))
+-		return;
++		goto out;
+ 
+ 	prog_fd = bpf_program__fd(prog);
+ 
+diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_multidim.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_multidim.c
+index ba97165bdb282..a657651eba523 100644
+--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_multidim.c
++++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_multidim.c
+@@ -14,9 +14,9 @@ typedef int *ptr_arr_t[6];
+ 
+ typedef int *ptr_multiarr_t[7][8][9][10];
+ 
+-typedef int * (*fn_ptr_arr_t[11])();
++typedef int * (*fn_ptr_arr_t[11])(void);
+ 
+-typedef int * (*fn_ptr_multiarr_t[12][13])();
++typedef int * (*fn_ptr_multiarr_t[12][13])(void);
+ 
+ struct root_struct {
+ 	arr_t _1;
+diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
+index ad21ee8c7e234..29d01fff32bd2 100644
+--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
++++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
+@@ -100,7 +100,7 @@ typedef void (*printf_fn_t)(const char *, ...);
+  *   `int -> char *` function and returns pointer to a char. Equivalent:
+  *   typedef char * (*fn_input_t)(int);
+  *   typedef char * (*fn_output_outer_t)(fn_input_t);
+- *   typedef const fn_output_outer_t (* fn_output_inner_t)();
++ *   typedef const fn_output_outer_t (* fn_output_inner_t)(void);
+  *   typedef const fn_output_inner_t fn_ptr_arr2_t[5];
+  */
+ /* ----- START-EXPECTED-OUTPUT ----- */
+@@ -127,7 +127,7 @@ typedef void (* (*signal_t)(int, void (*)(int)))(int);
+ 
+ typedef char * (*fn_ptr_arr1_t[10])(int **);
+ 
+-typedef char * (* (* const fn_ptr_arr2_t[5])())(char * (*)(int));
++typedef char * (* (* const fn_ptr_arr2_t[5])(void))(char * (*)(int));
+ 
+ struct struct_w_typedefs {
+ 	int_t a;
+diff --git a/tools/testing/selftests/bpf/progs/kprobe_multi_session_cookie.c b/tools/testing/selftests/bpf/progs/kprobe_multi_session_cookie.c
+index d49070803e221..0835b5edf6858 100644
+--- a/tools/testing/selftests/bpf/progs/kprobe_multi_session_cookie.c
++++ b/tools/testing/selftests/bpf/progs/kprobe_multi_session_cookie.c
+@@ -25,7 +25,7 @@ int BPF_PROG(trigger)
+ 
+ static int check_cookie(__u64 val, __u64 *result)
+ {
+-	long *cookie;
++	__u64 *cookie;
+ 
+ 	if (bpf_get_current_pid_tgid() >> 32 != pid)
+ 		return 1;
+diff --git a/tools/testing/selftests/bpf/test_sockmap.c b/tools/testing/selftests/bpf/test_sockmap.c
+index 92752f5eededf..a709911cddd2f 100644
+--- a/tools/testing/selftests/bpf/test_sockmap.c
++++ b/tools/testing/selftests/bpf/test_sockmap.c
+@@ -63,7 +63,7 @@ int passed;
+ int failed;
+ int map_fd[9];
+ struct bpf_map *maps[9];
+-int prog_fd[11];
++int prog_fd[9];
+ 
+ int txmsg_pass;
+ int txmsg_redir;
+@@ -680,7 +680,8 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ 				}
+ 			}
+ 
+-			s->bytes_recvd += recv;
++			if (recv > 0)
++				s->bytes_recvd += recv;
+ 
+ 			if (opt->check_recved_len && s->bytes_recvd > total_bytes) {
+ 				errno = EMSGSIZE;
+@@ -1793,8 +1794,6 @@ int prog_attach_type[] = {
+ 	BPF_SK_MSG_VERDICT,
+ 	BPF_SK_MSG_VERDICT,
+ 	BPF_SK_MSG_VERDICT,
+-	BPF_SK_MSG_VERDICT,
+-	BPF_SK_MSG_VERDICT,
+ };
+ 
+ int prog_type[] = {
+@@ -1807,8 +1806,6 @@ int prog_type[] = {
+ 	BPF_PROG_TYPE_SK_MSG,
+ 	BPF_PROG_TYPE_SK_MSG,
+ 	BPF_PROG_TYPE_SK_MSG,
+-	BPF_PROG_TYPE_SK_MSG,
+-	BPF_PROG_TYPE_SK_MSG,
+ };
+ 
+ static int populate_progs(char *bpf_file)
+diff --git a/tools/testing/selftests/damon/access_memory.c b/tools/testing/selftests/damon/access_memory.c
+index 585a2fa543295..56b17e8fe1be8 100644
+--- a/tools/testing/selftests/damon/access_memory.c
++++ b/tools/testing/selftests/damon/access_memory.c
+@@ -35,7 +35,7 @@ int main(int argc, char *argv[])
+ 		start_clock = clock();
+ 		while ((clock() - start_clock) * 1000 / CLOCKS_PER_SEC <
+ 				access_time_ms)
+-			memset(regions[i], i, 1024 * 1024 * 10);
++			memset(regions[i], i, sz_region);
+ 	}
+ 	return 0;
+ }
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh
+index 31252bc8775e0..4994bea5daf80 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh
+@@ -11,7 +11,7 @@ ALL_TESTS="single_mask_test identical_filters_test two_masks_test \
+ 	multiple_masks_test ctcam_edge_cases_test delta_simple_test \
+ 	delta_two_masks_one_key_test delta_simple_rehash_test \
+ 	bloom_simple_test bloom_complex_test bloom_delta_test \
+-	max_erp_entries_test max_group_size_test"
++	max_erp_entries_test max_group_size_test collision_test"
+ NUM_NETIFS=2
+ source $lib_dir/lib.sh
+ source $lib_dir/tc_common.sh
+@@ -457,7 +457,7 @@ delta_two_masks_one_key_test()
+ {
+ 	# If 2 keys are the same and only differ in mask in a way that
+ 	# they belong under the same ERP (second is delta of the first),
+-	# there should be no C-TCAM spill.
++	# there should be C-TCAM spill.
+ 
+ 	RET=0
+ 
+@@ -474,8 +474,8 @@ delta_two_masks_one_key_test()
+ 	tp_record "mlxsw:*" "tc filter add dev $h2 ingress protocol ip \
+ 		   pref 2 handle 102 flower $tcflags dst_ip 192.0.2.2 \
+ 		   action drop"
+-	tp_check_hits "mlxsw:mlxsw_sp_acl_atcam_entry_add_ctcam_spill" 0
+-	check_err $? "incorrect C-TCAM spill while inserting the second rule"
++	tp_check_hits "mlxsw:mlxsw_sp_acl_atcam_entry_add_ctcam_spill" 1
++	check_err $? "C-TCAM spill did not happen while inserting the second rule"
+ 
+ 	$MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \
+ 		-t ip -q
+@@ -1087,6 +1087,53 @@ max_group_size_test()
+ 	log_test "max ACL group size test ($tcflags). max size $max_size"
+ }
+ 
++collision_test()
++{
++	# Filters cannot share an eRP if in the common unmasked part (i.e.,
++	# without the delta bits) they have the same values. If the driver does
++	# not prevent such configuration (by spilling into the C-TCAM), then
++	# multiple entries will be present in the device with the same key,
++	# leading to collisions and a reduced scale.
++	#
++	# Create such a scenario and make sure all the filters are successfully
++	# added.
++
++	RET=0
++
++	local ret
++
++	if [[ "$tcflags" != "skip_sw" ]]; then
++		return 0;
++	fi
++
++	# Add a single dst_ip/24 filter and multiple dst_ip/32 filters that all
++	# have the same values in the common unmasked part (dst_ip/24).
++
++	tc filter add dev $h2 ingress pref 1 proto ipv4 handle 101 \
++		flower $tcflags dst_ip 198.51.100.0/24 \
++		action drop
++
++	for i in {0..255}; do
++		tc filter add dev $h2 ingress pref 2 proto ipv4 \
++			handle $((102 + i)) \
++			flower $tcflags dst_ip 198.51.100.${i}/32 \
++			action drop
++		ret=$?
++		[[ $ret -ne 0 ]] && break
++	done
++
++	check_err $ret "failed to add all the filters"
++
++	for i in {255..0}; do
++		tc filter del dev $h2 ingress pref 2 proto ipv4 \
++			handle $((102 + i)) flower
++	done
++
++	tc filter del dev $h2 ingress pref 1 proto ipv4 handle 101 flower
++
++	log_test "collision test ($tcflags)"
++}
++
+ setup_prepare()
+ {
+ 	h1=${NETIFS[p1]}
+diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
+index edf1c99c9936c..5f7d5a5ba89b0 100644
+--- a/tools/testing/selftests/iommu/iommufd.c
++++ b/tools/testing/selftests/iommu/iommufd.c
+@@ -1722,10 +1722,17 @@ FIXTURE_VARIANT(iommufd_dirty_tracking)
+ 
+ FIXTURE_SETUP(iommufd_dirty_tracking)
+ {
++	unsigned long size;
+ 	int mmap_flags;
+ 	void *vrc;
+ 	int rc;
+ 
++	if (variant->buffer_size < MOCK_PAGE_SIZE) {
++		SKIP(return,
++		     "Skipping buffer_size=%lu, less than MOCK_PAGE_SIZE=%lu",
++		     variant->buffer_size, MOCK_PAGE_SIZE);
++	}
++
+ 	self->fd = open("/dev/iommu", O_RDWR);
+ 	ASSERT_NE(-1, self->fd);
+ 
+@@ -1749,12 +1756,11 @@ FIXTURE_SETUP(iommufd_dirty_tracking)
+ 	assert(vrc == self->buffer);
+ 
+ 	self->page_size = MOCK_PAGE_SIZE;
+-	self->bitmap_size =
+-		variant->buffer_size / self->page_size / BITS_PER_BYTE;
++	self->bitmap_size = variant->buffer_size / self->page_size;
+ 
+ 	/* Provision with an extra (PAGE_SIZE) for the unaligned case */
+-	rc = posix_memalign(&self->bitmap, PAGE_SIZE,
+-			    self->bitmap_size + PAGE_SIZE);
++	size = DIV_ROUND_UP(self->bitmap_size, BITS_PER_BYTE);
++	rc = posix_memalign(&self->bitmap, PAGE_SIZE, size + PAGE_SIZE);
+ 	assert(!rc);
+ 	assert(self->bitmap);
+ 	assert((uintptr_t)self->bitmap % PAGE_SIZE == 0);
+@@ -1775,51 +1781,63 @@ FIXTURE_SETUP(iommufd_dirty_tracking)
+ FIXTURE_TEARDOWN(iommufd_dirty_tracking)
+ {
+ 	munmap(self->buffer, variant->buffer_size);
+-	munmap(self->bitmap, self->bitmap_size);
++	munmap(self->bitmap, DIV_ROUND_UP(self->bitmap_size, BITS_PER_BYTE));
+ 	teardown_iommufd(self->fd, _metadata);
+ }
+ 
+-FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty128k)
++FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty8k)
++{
++	/* half of an u8 index bitmap */
++	.buffer_size = 8UL * 1024UL,
++};
++
++FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty16k)
++{
++	/* one u8 index bitmap */
++	.buffer_size = 16UL * 1024UL,
++};
++
++FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty64k)
+ {
+ 	/* one u32 index bitmap */
+-	.buffer_size = 128UL * 1024UL,
++	.buffer_size = 64UL * 1024UL,
+ };
+ 
+-FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty256k)
++FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty128k)
+ {
+ 	/* one u64 index bitmap */
+-	.buffer_size = 256UL * 1024UL,
++	.buffer_size = 128UL * 1024UL,
+ };
+ 
+-FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty640k)
++FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty320k)
+ {
+ 	/* two u64 index and trailing end bitmap */
+-	.buffer_size = 640UL * 1024UL,
++	.buffer_size = 320UL * 1024UL,
+ };
+ 
+-FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty128M)
++FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty64M)
+ {
+-	/* 4K bitmap (128M IOVA range) */
+-	.buffer_size = 128UL * 1024UL * 1024UL,
++	/* 4K bitmap (64M IOVA range) */
++	.buffer_size = 64UL * 1024UL * 1024UL,
+ };
+ 
+-FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty128M_huge)
++FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty64M_huge)
+ {
+-	/* 4K bitmap (128M IOVA range) */
+-	.buffer_size = 128UL * 1024UL * 1024UL,
++	/* 4K bitmap (64M IOVA range) */
++	.buffer_size = 64UL * 1024UL * 1024UL,
+ 	.hugepages = true,
+ };
+ 
+-FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty256M)
++FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty128M)
+ {
+-	/* 8K bitmap (256M IOVA range) */
+-	.buffer_size = 256UL * 1024UL * 1024UL,
++	/* 8K bitmap (128M IOVA range) */
++	.buffer_size = 128UL * 1024UL * 1024UL,
+ };
+ 
+-FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty256M_huge)
++FIXTURE_VARIANT_ADD(iommufd_dirty_tracking, domain_dirty128M_huge)
+ {
+-	/* 8K bitmap (256M IOVA range) */
+-	.buffer_size = 256UL * 1024UL * 1024UL,
++	/* 8K bitmap (128M IOVA range) */
++	.buffer_size = 128UL * 1024UL * 1024UL,
+ 	.hugepages = true,
+ };
+ 
+diff --git a/tools/testing/selftests/iommu/iommufd_utils.h b/tools/testing/selftests/iommu/iommufd_utils.h
+index 8d2b46b2114da..c612fbf0195ba 100644
+--- a/tools/testing/selftests/iommu/iommufd_utils.h
++++ b/tools/testing/selftests/iommu/iommufd_utils.h
+@@ -22,6 +22,8 @@
+ #define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG))
+ #define BIT_WORD(nr) ((nr) / __BITS_PER_LONG)
+ 
++#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
++
+ static inline void set_bit(unsigned int nr, unsigned long *addr)
+ {
+ 	unsigned long mask = BIT_MASK(nr);
+@@ -346,12 +348,12 @@ static int _test_cmd_mock_domain_set_dirty(int fd, __u32 hwpt_id, size_t length,
+ static int _test_mock_dirty_bitmaps(int fd, __u32 hwpt_id, size_t length,
+ 				    __u64 iova, size_t page_size,
+ 				    size_t pte_page_size, __u64 *bitmap,
+-				    __u64 bitmap_size, __u32 flags,
++				    __u64 nbits, __u32 flags,
+ 				    struct __test_metadata *_metadata)
+ {
+ 	unsigned long npte = pte_page_size / page_size, pteset = 2 * npte;
+-	unsigned long nbits = bitmap_size * BITS_PER_BYTE;
+ 	unsigned long j, i, nr = nbits / pteset ?: 1;
++	unsigned long bitmap_size = DIV_ROUND_UP(nbits, BITS_PER_BYTE);
+ 	__u64 out_dirty = 0;
+ 
+ 	/* Mark all even bits as dirty in the mock domain */
+diff --git a/tools/testing/selftests/landlock/base_test.c b/tools/testing/selftests/landlock/base_test.c
+index 3c1e9f35b5312..3b26bf3cf5b9a 100644
+--- a/tools/testing/selftests/landlock/base_test.c
++++ b/tools/testing/selftests/landlock/base_test.c
+@@ -9,6 +9,7 @@
+ #define _GNU_SOURCE
+ #include <errno.h>
+ #include <fcntl.h>
++#include <linux/keyctl.h>
+ #include <linux/landlock.h>
+ #include <string.h>
+ #include <sys/prctl.h>
+@@ -326,4 +327,77 @@ TEST(ruleset_fd_transfer)
+ 	ASSERT_EQ(EXIT_SUCCESS, WEXITSTATUS(status));
+ }
+ 
++TEST(cred_transfer)
++{
++	struct landlock_ruleset_attr ruleset_attr = {
++		.handled_access_fs = LANDLOCK_ACCESS_FS_READ_DIR,
++	};
++	int ruleset_fd, dir_fd;
++	pid_t child;
++	int status;
++
++	drop_caps(_metadata);
++
++	dir_fd = open("/", O_RDONLY | O_DIRECTORY | O_CLOEXEC);
++	EXPECT_LE(0, dir_fd);
++	EXPECT_EQ(0, close(dir_fd));
++
++	/* Denies opening directories. */
++	ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
++	ASSERT_LE(0, ruleset_fd);
++	EXPECT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
++	ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0));
++	EXPECT_EQ(0, close(ruleset_fd));
++
++	/* Checks ruleset enforcement. */
++	EXPECT_EQ(-1, open("/", O_RDONLY | O_DIRECTORY | O_CLOEXEC));
++	EXPECT_EQ(EACCES, errno);
++
++	/* Needed for KEYCTL_SESSION_TO_PARENT permission checks */
++	EXPECT_NE(-1, syscall(__NR_keyctl, KEYCTL_JOIN_SESSION_KEYRING, NULL, 0,
++			      0, 0))
++	{
++		TH_LOG("Failed to join session keyring: %s", strerror(errno));
++	}
++
++	child = fork();
++	ASSERT_LE(0, child);
++	if (child == 0) {
++		/* Checks ruleset enforcement. */
++		EXPECT_EQ(-1, open("/", O_RDONLY | O_DIRECTORY | O_CLOEXEC));
++		EXPECT_EQ(EACCES, errno);
++
++		/*
++		 * KEYCTL_SESSION_TO_PARENT is a no-op unless we have a
++		 * different session keyring in the child, so make that happen.
++		 */
++		EXPECT_NE(-1, syscall(__NR_keyctl, KEYCTL_JOIN_SESSION_KEYRING,
++				      NULL, 0, 0, 0));
++
++		/*
++		 * KEYCTL_SESSION_TO_PARENT installs credentials on the parent
++		 * that never go through the cred_prepare hook, this path uses
++		 * cred_transfer instead.
++		 */
++		EXPECT_EQ(0, syscall(__NR_keyctl, KEYCTL_SESSION_TO_PARENT, 0,
++				     0, 0, 0));
++
++		/* Re-checks ruleset enforcement. */
++		EXPECT_EQ(-1, open("/", O_RDONLY | O_DIRECTORY | O_CLOEXEC));
++		EXPECT_EQ(EACCES, errno);
++
++		_exit(_metadata->exit_code);
++		return;
++	}
++
++	EXPECT_EQ(child, waitpid(child, &status, 0));
++	EXPECT_EQ(1, WIFEXITED(status));
++	EXPECT_EQ(EXIT_SUCCESS, WEXITSTATUS(status));
++
++	/* Re-checks ruleset enforcement. */
++	EXPECT_EQ(-1, open("/", O_RDONLY | O_DIRECTORY | O_CLOEXEC));
++	EXPECT_EQ(EACCES, errno);
++}
++
+ TEST_HARNESS_MAIN
+diff --git a/tools/testing/selftests/landlock/config b/tools/testing/selftests/landlock/config
+index 0086efaa7b681..29af19c4e9f98 100644
+--- a/tools/testing/selftests/landlock/config
++++ b/tools/testing/selftests/landlock/config
+@@ -2,6 +2,7 @@ CONFIG_CGROUPS=y
+ CONFIG_CGROUP_SCHED=y
+ CONFIG_INET=y
+ CONFIG_IPV6=y
++CONFIG_KEYS=y
+ CONFIG_NET=y
+ CONFIG_NET_NS=y
+ CONFIG_OVERLAY_FS=y
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index 73895711cdf42..5f3c28fc86249 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -1737,53 +1737,53 @@ ipv4_rt_dsfield()
+ 
+ 	# DSCP 0x10 should match the specific route, no matter the ECN bits
+ 	$IP route get fibmatch 172.16.102.1 dsfield 0x10 | \
+-		grep -q "via 172.16.103.2"
++		grep -q "172.16.102.0/24 tos 0x10 via 172.16.103.2"
+ 	log_test $? 0 "IPv4 route with DSCP and ECN:Not-ECT"
+ 
+ 	$IP route get fibmatch 172.16.102.1 dsfield 0x11 | \
+-		grep -q "via 172.16.103.2"
++		grep -q "172.16.102.0/24 tos 0x10 via 172.16.103.2"
+ 	log_test $? 0 "IPv4 route with DSCP and ECN:ECT(1)"
+ 
+ 	$IP route get fibmatch 172.16.102.1 dsfield 0x12 | \
+-		grep -q "via 172.16.103.2"
++		grep -q "172.16.102.0/24 tos 0x10 via 172.16.103.2"
+ 	log_test $? 0 "IPv4 route with DSCP and ECN:ECT(0)"
+ 
+ 	$IP route get fibmatch 172.16.102.1 dsfield 0x13 | \
+-		grep -q "via 172.16.103.2"
++		grep -q "172.16.102.0/24 tos 0x10 via 172.16.103.2"
+ 	log_test $? 0 "IPv4 route with DSCP and ECN:CE"
+ 
+ 	# Unknown DSCP should match the generic route, no matter the ECN bits
+ 	$IP route get fibmatch 172.16.102.1 dsfield 0x14 | \
+-		grep -q "via 172.16.101.2"
++		grep -q "172.16.102.0/24 via 172.16.101.2"
+ 	log_test $? 0 "IPv4 route with unknown DSCP and ECN:Not-ECT"
+ 
+ 	$IP route get fibmatch 172.16.102.1 dsfield 0x15 | \
+-		grep -q "via 172.16.101.2"
++		grep -q "172.16.102.0/24 via 172.16.101.2"
+ 	log_test $? 0 "IPv4 route with unknown DSCP and ECN:ECT(1)"
+ 
+ 	$IP route get fibmatch 172.16.102.1 dsfield 0x16 | \
+-		grep -q "via 172.16.101.2"
++		grep -q "172.16.102.0/24 via 172.16.101.2"
+ 	log_test $? 0 "IPv4 route with unknown DSCP and ECN:ECT(0)"
+ 
+ 	$IP route get fibmatch 172.16.102.1 dsfield 0x17 | \
+-		grep -q "via 172.16.101.2"
++		grep -q "172.16.102.0/24 via 172.16.101.2"
+ 	log_test $? 0 "IPv4 route with unknown DSCP and ECN:CE"
+ 
+ 	# Null DSCP should match the generic route, no matter the ECN bits
+ 	$IP route get fibmatch 172.16.102.1 dsfield 0x00 | \
+-		grep -q "via 172.16.101.2"
++		grep -q "172.16.102.0/24 via 172.16.101.2"
+ 	log_test $? 0 "IPv4 route with no DSCP and ECN:Not-ECT"
+ 
+ 	$IP route get fibmatch 172.16.102.1 dsfield 0x01 | \
+-		grep -q "via 172.16.101.2"
++		grep -q "172.16.102.0/24 via 172.16.101.2"
+ 	log_test $? 0 "IPv4 route with no DSCP and ECN:ECT(1)"
+ 
+ 	$IP route get fibmatch 172.16.102.1 dsfield 0x02 | \
+-		grep -q "via 172.16.101.2"
++		grep -q "172.16.102.0/24 via 172.16.101.2"
+ 	log_test $? 0 "IPv4 route with no DSCP and ECN:ECT(0)"
+ 
+ 	$IP route get fibmatch 172.16.102.1 dsfield 0x03 | \
+-		grep -q "via 172.16.101.2"
++		grep -q "172.16.102.0/24 via 172.16.101.2"
+ 	log_test $? 0 "IPv4 route with no DSCP and ECN:CE"
+ }
+ 
+diff --git a/tools/testing/selftests/net/forwarding/bridge_fdb_learning_limit.sh b/tools/testing/selftests/net/forwarding/bridge_fdb_learning_limit.sh
+index 0760a34b71146..a21b7085da2e9 100755
+--- a/tools/testing/selftests/net/forwarding/bridge_fdb_learning_limit.sh
++++ b/tools/testing/selftests/net/forwarding/bridge_fdb_learning_limit.sh
+@@ -178,6 +178,22 @@ fdb_del()
+ 	check_err $? "Failed to remove a FDB entry of type ${type}"
+ }
+ 
++check_fdb_n_learned_support()
++{
++	if ! ip link help bridge 2>&1 | grep -q "fdb_max_learned"; then
++		echo "SKIP: iproute2 too old, missing bridge max learned support"
++		exit $ksft_skip
++	fi
++
++	ip link add dev br0 type bridge
++	local learned=$(fdb_get_n_learned)
++	ip link del dev br0
++	if [ "$learned" == "null" ]; then
++		echo "SKIP: kernel too old; bridge fdb_n_learned feature not supported."
++		exit $ksft_skip
++	fi
++}
++
+ check_accounting_one_type()
+ {
+ 	local type=$1 is_counted=$2 overrides_learned=$3
+@@ -274,6 +290,8 @@ check_limit()
+ 	done
+ }
+ 
++check_fdb_n_learned_support
++
+ trap cleanup EXIT
+ 
+ setup_prepare
+diff --git a/tools/testing/selftests/net/forwarding/devlink_lib.sh b/tools/testing/selftests/net/forwarding/devlink_lib.sh
+index f1de525cfa55b..62a05bca1e825 100644
+--- a/tools/testing/selftests/net/forwarding/devlink_lib.sh
++++ b/tools/testing/selftests/net/forwarding/devlink_lib.sh
+@@ -122,6 +122,8 @@ devlink_reload()
+ 	still_pending=$(devlink resource show "$DEVLINK_DEV" | \
+ 			grep -c "size_new")
+ 	check_err $still_pending "Failed reload - There are still unset sizes"
++
++	udevadm settle
+ }
+ 
+ declare -A DEVLINK_ORIG
+diff --git a/tools/testing/selftests/nolibc/nolibc-test.c b/tools/testing/selftests/nolibc/nolibc-test.c
+index 94bb6e11c16f0..994477ee87bef 100644
+--- a/tools/testing/selftests/nolibc/nolibc-test.c
++++ b/tools/testing/selftests/nolibc/nolibc-test.c
+@@ -607,7 +607,7 @@ int expect_strne(const char *expr, int llen, const char *cmp)
+ static __attribute__((unused))
+ int expect_str_buf_eq(size_t expr, const char *buf, size_t val, int llen, const char *cmp)
+ {
+-	llen += printf(" = %lu <%s> ", expr, buf);
++	llen += printf(" = %lu <%s> ", (unsigned long)expr, buf);
+ 	if (strcmp(buf, cmp) != 0) {
+ 		result(llen, FAIL);
+ 		return 1;
+diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
+index 445f306d4c2fa..f55f5989de72b 100644
+--- a/tools/testing/selftests/resctrl/resctrl_val.c
++++ b/tools/testing/selftests/resctrl/resctrl_val.c
+@@ -293,6 +293,18 @@ static int initialize_mem_bw_imc(void)
+ 	return 0;
+ }
+ 
++static void perf_close_imc_mem_bw(void)
++{
++	int mc;
++
++	for (mc = 0; mc < imcs; mc++) {
++		if (imc_counters_config[mc][READ].fd != -1)
++			close(imc_counters_config[mc][READ].fd);
++		if (imc_counters_config[mc][WRITE].fd != -1)
++			close(imc_counters_config[mc][WRITE].fd);
++	}
++}
++
+ /*
+  * get_mem_bw_imc:	Memory band width as reported by iMC counters
+  * @cpu_no:		CPU number that the benchmark PID is binded to
+@@ -306,26 +318,33 @@ static int initialize_mem_bw_imc(void)
+ static int get_mem_bw_imc(int cpu_no, char *bw_report, float *bw_imc)
+ {
+ 	float reads, writes, of_mul_read, of_mul_write;
+-	int imc, j, ret;
++	int imc, ret;
++
++	for (imc = 0; imc < imcs; imc++) {
++		imc_counters_config[imc][READ].fd = -1;
++		imc_counters_config[imc][WRITE].fd = -1;
++	}
+ 
+ 	/* Start all iMC counters to log values (both read and write) */
+ 	reads = 0, writes = 0, of_mul_read = 1, of_mul_write = 1;
+ 	for (imc = 0; imc < imcs; imc++) {
+-		for (j = 0; j < 2; j++) {
+-			ret = open_perf_event(imc, cpu_no, j);
+-			if (ret)
+-				return -1;
+-		}
+-		for (j = 0; j < 2; j++)
+-			membw_ioctl_perf_event_ioc_reset_enable(imc, j);
++		ret = open_perf_event(imc, cpu_no, READ);
++		if (ret)
++			goto close_fds;
++		ret = open_perf_event(imc, cpu_no, WRITE);
++		if (ret)
++			goto close_fds;
++
++		membw_ioctl_perf_event_ioc_reset_enable(imc, READ);
++		membw_ioctl_perf_event_ioc_reset_enable(imc, WRITE);
+ 	}
+ 
+ 	sleep(1);
+ 
+ 	/* Stop counters after a second to get results (both read and write) */
+ 	for (imc = 0; imc < imcs; imc++) {
+-		for (j = 0; j < 2; j++)
+-			membw_ioctl_perf_event_ioc_disable(imc, j);
++		membw_ioctl_perf_event_ioc_disable(imc, READ);
++		membw_ioctl_perf_event_ioc_disable(imc, WRITE);
+ 	}
+ 
+ 	/*
+@@ -341,15 +360,13 @@ static int get_mem_bw_imc(int cpu_no, char *bw_report, float *bw_imc)
+ 		if (read(r->fd, &r->return_value,
+ 			 sizeof(struct membw_read_format)) == -1) {
+ 			ksft_perror("Couldn't get read b/w through iMC");
+-
+-			return -1;
++			goto close_fds;
+ 		}
+ 
+ 		if (read(w->fd, &w->return_value,
+ 			 sizeof(struct membw_read_format)) == -1) {
+ 			ksft_perror("Couldn't get write bw through iMC");
+-
+-			return -1;
++			goto close_fds;
+ 		}
+ 
+ 		__u64 r_time_enabled = r->return_value.time_enabled;
+@@ -369,10 +386,7 @@ static int get_mem_bw_imc(int cpu_no, char *bw_report, float *bw_imc)
+ 		writes += w->return_value.value * of_mul_write * SCALE;
+ 	}
+ 
+-	for (imc = 0; imc < imcs; imc++) {
+-		close(imc_counters_config[imc][READ].fd);
+-		close(imc_counters_config[imc][WRITE].fd);
+-	}
++	perf_close_imc_mem_bw();
+ 
+ 	if (strcmp(bw_report, "reads") == 0) {
+ 		*bw_imc = reads;
+@@ -386,6 +400,10 @@ static int get_mem_bw_imc(int cpu_no, char *bw_report, float *bw_imc)
+ 
+ 	*bw_imc = reads + writes;
+ 	return 0;
++
++close_fds:
++	perf_close_imc_mem_bw();
++	return -1;
+ }
+ 
+ void set_mbm_path(const char *ctrlgrp, const char *mongrp, int domain_id)
+diff --git a/tools/testing/selftests/sigaltstack/current_stack_pointer.h b/tools/testing/selftests/sigaltstack/current_stack_pointer.h
+index ea9bdf3a90b16..09da8f1011ce4 100644
+--- a/tools/testing/selftests/sigaltstack/current_stack_pointer.h
++++ b/tools/testing/selftests/sigaltstack/current_stack_pointer.h
+@@ -8,7 +8,7 @@ register unsigned long sp asm("sp");
+ register unsigned long sp asm("esp");
+ #elif __loongarch64
+ register unsigned long sp asm("$sp");
+-#elif __ppc__
++#elif __powerpc__
+ register unsigned long sp asm("r1");
+ #elif __s390x__
+ register unsigned long sp asm("%15");

diff --git a/2950_jump-label-fix.patch b/2950_jump-label-fix.patch
new file mode 100644
index 00000000..1a5fdf7a
--- /dev/null
+++ b/2950_jump-label-fix.patch
@@ -0,0 +1,57 @@
+From 224fa3552029a3d14bec7acf72ded8171d551b88 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz@infradead.org>
+Date: Wed, 31 Jul 2024 12:43:21 +0200
+Subject: jump_label: Fix the fix, brown paper bags galore
+
+Per the example of:
+
+  !atomic_cmpxchg(&key->enabled, 0, 1)
+
+the inverse was written as:
+
+  atomic_cmpxchg(&key->enabled, 1, 0)
+
+except of course, that while !old is only true for old == 0, old is
+true for everything except old == 0.
+
+Fix it to read:
+
+  atomic_cmpxchg(&key->enabled, 1, 0) == 1
+
+such that only the 1->0 transition returns true and goes on to disable
+the keys.
+
+Fixes: 83ab38ef0a0b ("jump_label: Fix concurrency issues in static_key_slow_dec()")
+Reported-by: Darrick J. Wong <djwong@kernel.org>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Tested-by: Darrick J. Wong <djwong@kernel.org>
+Link: https://lkml.kernel.org/r/20240731105557.GY33588@noisy.programming.kicks-ass.net
+---
+ kernel/jump_label.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/jump_label.c b/kernel/jump_label.c
+index 4ad5ed8adf9691..6dc76b590703ed 100644
+--- a/kernel/jump_label.c
++++ b/kernel/jump_label.c
+@@ -236,7 +236,7 @@ void static_key_disable_cpuslocked(struct static_key *key)
+ 	}
+ 
+ 	jump_label_lock();
+-	if (atomic_cmpxchg(&key->enabled, 1, 0))
++	if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
+ 		jump_label_update(key);
+ 	jump_label_unlock();
+ }
+@@ -289,7 +289,7 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key)
+ 		return;
+ 
+ 	guard(mutex)(&jump_label_mutex);
+-	if (atomic_cmpxchg(&key->enabled, 1, 0))
++	if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
+ 		jump_label_update(key);
+ 	else
+ 		WARN_ON_ONCE(!static_key_slow_try_dec(key));
+-- 
+cgit 1.2.3-korg
+
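A minimal userspace sketch of the atomic_cmpxchg() return-value point made in the jump_label commit message above, assuming only C11 atomics; the cmpxchg() helper and the 'enabled' variable here are local stand-ins for the kernel's atomic_cmpxchg() and key->enabled, not the kernel API itself:

/*
 * Illustrative only, not part of the patch above: mimics the kernel's
 * atomic_cmpxchg() return-value semantics (returns the *old* value) to
 * show why "if (atomic_cmpxchg(&enabled, 1, 0))" fires for any non-zero
 * old value, while "== 1" fires only for the 1 -> 0 transition.
 */
#include <stdatomic.h>
#include <stdio.h>

/* Returns the old value, like the kernel's atomic_cmpxchg(). */
static int cmpxchg(atomic_int *v, int old, int new)
{
	int expected = old;

	/* On failure, compare_exchange writes the current value into
	 * 'expected', so returning it matches atomic_cmpxchg(). */
	atomic_compare_exchange_strong(v, &expected, new);
	return expected;
}

int main(void)
{
	int old_vals[] = { 0, 1, 2 };

	for (unsigned i = 0; i < sizeof(old_vals) / sizeof(old_vals[0]); i++) {
		atomic_int enabled = old_vals[i];
		int old = cmpxchg(&enabled, 1, 0);

		/* Buggy condition: true for any non-zero old value. */
		int buggy = (old != 0);
		/* Fixed condition: true only for the 1 -> 0 transition. */
		int fixed = (old == 1);

		printf("old=%d  buggy-check=%d  fixed-check=%d\n",
		       old, buggy, fixed);
	}
	return 0;
}

Built with e.g. "cc -std=c11", both checks agree only for old=1; for old=2 the buggy condition still evaluates true, which is exactly the over-eager key disable the fix above closes.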


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-08-03 15:55 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-08-03 15:55 UTC (permalink / raw
  To: gentoo-commits

commit:     e4136859a1f44f287a38a3244451ce8de37e2cb2
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Aug  3 15:55:24 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Aug  3 15:55:24 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e4136859

Temporarily remove BMQ due to incompatibility

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |     8 -
 5020_BMQ-and-PDS-io-scheduler-v6.10-r0.patch | 11474 -------------------------
 5021_BMQ-and-PDS-gentoo-defaults.patch       |    12 -
 3 files changed, 11494 deletions(-)

diff --git a/0000_README b/0000_README
index 3c01ecbe..6ab23d2a 100644
--- a/0000_README
+++ b/0000_README
@@ -98,11 +98,3 @@ Desc:   Add Gentoo Linux support config settings and defaults.
 Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
-
-Patch:  5020_BMQ-and-PDS-io-scheduler-v6.10-r0.patch
-From:   https://github.com/Frogging-Family/linux-tkg
-Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
-
-Patch:  5021_BMQ-and-PDS-gentoo-defaults.patch
-From:   https://gitweb.gentoo.org/proj/linux-patches.git/
-Desc:   Set defaults for BMQ. default to n

diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.10-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v6.10-r0.patch
deleted file mode 100644
index 5f577d24..00000000
--- a/5020_BMQ-and-PDS-io-scheduler-v6.10-r0.patch
+++ /dev/null
@@ -1,11474 +0,0 @@
-diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
---- a/Documentation/admin-guide/sysctl/kernel.rst	2024-07-15 00:43:32.000000000 +0200
-+++ b/Documentation/admin-guide/sysctl/kernel.rst	2024-07-16 11:57:40.064869171 +0200
-@@ -1673,3 +1673,12 @@ is 10 seconds.
- 
- The softlockup threshold is (``2 * watchdog_thresh``). Setting this
- tunable to zero will disable lockup detection altogether.
-+
-+yield_type:
-+===========
-+
-+BMQ/PDS CPU scheduler only. This determines what type of yield calls
-+to sched_yield() will be performed.
-+
-+  0 - No yield.
-+  1 - Requeue task. (default)
-diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
---- a/Documentation/scheduler/sched-BMQ.txt	1970-01-01 01:00:00.000000000 +0100
-+++ b/Documentation/scheduler/sched-BMQ.txt	2024-07-16 11:57:40.268882498 +0200
-@@ -0,0 +1,110 @@
-+                         BitMap queue CPU Scheduler
-+                         --------------------------
-+
-+CONTENT
-+========
-+
-+ Background
-+ Design
-+   Overview
-+   Task policy
-+   Priority management
-+   BitMap Queue
-+   CPU Assignment and Migration
-+
-+
-+Background
-+==========
-+
-+BitMap Queue CPU scheduler, referred to as BMQ from here on, is an evolution
-+of previous Priority and Deadline based Skiplist multiple queue scheduler(PDS),
-+and inspired by Zircon scheduler. The goal of it is to keep the scheduler code
-+simple, while efficiency and scalable for interactive tasks, such as desktop,
-+movie playback and gaming etc.
-+
-+Design
-+======
-+
-+Overview
-+--------
-+
-+BMQ use per CPU run queue design, each CPU(logical) has it's own run queue,
-+each CPU is responsible for scheduling the tasks that are putting into it's
-+run queue.
-+
-+The run queue is a set of priority queues. Note that these queues are fifo
-+queue for non-rt tasks or priority queue for rt tasks in data structure. See
-+BitMap Queue below for details. BMQ is optimized for non-rt tasks in the fact
-+that most applications are non-rt tasks. No matter the queue is fifo or
-+priority, In each queue is an ordered list of runnable tasks awaiting execution
-+and the data structures are the same. When it is time for a new task to run,
-+the scheduler simply looks the lowest numbered queueue that contains a task,
-+and runs the first task from the head of that queue. And per CPU idle task is
-+also in the run queue, so the scheduler can always find a task to run on from
-+its run queue.
-+
-+Each task will assigned the same timeslice(default 4ms) when it is picked to
-+start running. Task will be reinserted at the end of the appropriate priority
-+queue when it uses its whole timeslice. When the scheduler selects a new task
-+from the priority queue it sets the CPU's preemption timer for the remainder of
-+the previous timeslice. When that timer fires the scheduler will stop execution
-+on that task, select another task and start over again.
-+
-+If a task blocks waiting for a shared resource then it's taken out of its
-+priority queue and is placed in a wait queue for the shared resource. When it
-+is unblocked it will be reinserted in the appropriate priority queue of an
-+eligible CPU.
-+
-+Task policy
-+-----------
-+
-+BMQ supports DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policy like the
-+mainline CFS scheduler. But BMQ is heavy optimized for non-rt task, that's
-+NORMAL/BATCH/IDLE policy tasks. Below is the implementation detail of each
-+policy.
-+
-+DEADLINE
-+	It is squashed as priority 0 FIFO task.
-+
-+FIFO/RR
-+	All RT tasks share one single priority queue in BMQ run queue designed. The
-+complexity of insert operation is O(n). BMQ is not designed for system runs
-+with major rt policy tasks.
-+
-+NORMAL/BATCH/IDLE
-+	BATCH and IDLE tasks are treated as the same policy. They compete CPU with
-+NORMAL policy tasks, but they just don't boost. To control the priority of
-+NORMAL/BATCH/IDLE tasks, simply use nice level.
-+
-+ISO
-+	ISO policy is not supported in BMQ. Please use nice level -20 NORMAL policy
-+task instead.
-+
-+Priority management
-+-------------------
-+
-+RT tasks have priority from 0-99. For non-rt tasks, there are three different
-+factors used to determine the effective priority of a task. The effective
-+priority being what is used to determine which queue it will be in.
-+
-+The first factor is simply the task’s static priority. Which is assigned from
-+task's nice level, within [-20, 19] in userland's point of view and [0, 39]
-+internally.
-+
-+The second factor is the priority boost. This is a value bounded between
-+[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] used to offset the base priority, it is
-+modified by the following cases:
-+
-+*When a thread has used up its entire timeslice, always deboost its boost by
-+increasing by one.
-+*When a thread gives up cpu control(voluntary or non-voluntary) to reschedule,
-+and its switch-in time(time after last switch and run) below the thredhold
-+based on its priority boost, will boost its boost by decreasing by one buti is
-+capped at 0 (won’t go negative).
-+
-+The intent in this system is to ensure that interactive threads are serviced
-+quickly. These are usually the threads that interact directly with the user
-+and cause user-perceivable latency. These threads usually do little work and
-+spend most of their time blocked awaiting another user event. So they get the
-+priority boost from unblocking while background threads that do most of the
-+processing receive the priority penalty for using their entire timeslice.
-diff --git a/fs/proc/base.c b/fs/proc/base.c
---- a/fs/proc/base.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/fs/proc/base.c	2024-07-16 11:57:44.113126230 +0200
-@@ -481,7 +481,7 @@ static int proc_pid_schedstat(struct seq
- 		seq_puts(m, "0 0 0\n");
- 	else
- 		seq_printf(m, "%llu %llu %lu\n",
--		   (unsigned long long)task->se.sum_exec_runtime,
-+		   (unsigned long long)tsk_seruntime(task),
- 		   (unsigned long long)task->sched_info.run_delay,
- 		   task->sched_info.pcount);
- 
-diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
---- a/include/asm-generic/resource.h	2024-07-15 00:43:32.000000000 +0200
-+++ b/include/asm-generic/resource.h	2024-07-16 11:57:44.173129878 +0200
-@@ -23,7 +23,7 @@
- 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
- 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
- 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
--	[RLIMIT_NICE]		= { 0, 0 },				\
-+	[RLIMIT_NICE]		= { 30, 30 },				\
- 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
- 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
- }
-diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
---- a/include/linux/sched/deadline.h	2024-07-15 00:43:32.000000000 +0200
-+++ b/include/linux/sched/deadline.h	2024-07-16 11:57:44.289136930 +0200
-@@ -2,6 +2,25 @@
- #ifndef _LINUX_SCHED_DEADLINE_H
- #define _LINUX_SCHED_DEADLINE_H
- 
-+#ifdef CONFIG_SCHED_ALT
-+
-+static inline int dl_task(struct task_struct *p)
-+{
-+	return 0;
-+}
-+
-+#ifdef CONFIG_SCHED_BMQ
-+#define __tsk_deadline(p)	(0UL)
-+#endif
-+
-+#ifdef CONFIG_SCHED_PDS
-+#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
-+#endif
-+
-+#else
-+
-+#define __tsk_deadline(p)	((p)->dl.deadline)
-+
- /*
-  * SCHED_DEADLINE tasks has negative priorities, reflecting
-  * the fact that any of them has higher prio than RT and
-@@ -23,6 +42,7 @@ static inline int dl_task(struct task_st
- {
- 	return dl_prio(p->prio);
- }
-+#endif /* CONFIG_SCHED_ALT */
- 
- static inline bool dl_time_before(u64 a, u64 b)
- {
-diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
---- a/include/linux/sched/prio.h	2024-07-15 00:43:32.000000000 +0200
-+++ b/include/linux/sched/prio.h	2024-07-16 11:57:44.289136930 +0200
-@@ -18,6 +18,28 @@
- #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
- #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
- 
-+#ifdef CONFIG_SCHED_ALT
-+
-+/* Undefine MAX_PRIO and DEFAULT_PRIO */
-+#undef MAX_PRIO
-+#undef DEFAULT_PRIO
-+
-+/* +/- priority levels from the base priority */
-+#ifdef CONFIG_SCHED_BMQ
-+#define MAX_PRIORITY_ADJ	(12)
-+#endif
-+
-+#ifdef CONFIG_SCHED_PDS
-+#define MAX_PRIORITY_ADJ	(0)
-+#endif
-+
-+#define MIN_NORMAL_PRIO		(128)
-+#define NORMAL_PRIO_NUM		(64)
-+#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
-+#define DEFAULT_PRIO		(MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)
-+
-+#endif /* CONFIG_SCHED_ALT */
-+
- /*
-  * Convert user-nice values [ -20 ... 0 ... 19 ]
-  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
-diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
---- a/include/linux/sched/rt.h	2024-07-15 00:43:32.000000000 +0200
-+++ b/include/linux/sched/rt.h	2024-07-16 11:57:44.289136930 +0200
-@@ -24,8 +24,10 @@ static inline bool task_is_realtime(stru
- 
- 	if (policy == SCHED_FIFO || policy == SCHED_RR)
- 		return true;
-+#ifndef CONFIG_SCHED_ALT
- 	if (policy == SCHED_DEADLINE)
- 		return true;
-+#endif
- 	return false;
- }
- 
-diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
---- a/include/linux/sched/topology.h	2024-07-15 00:43:32.000000000 +0200
-+++ b/include/linux/sched/topology.h	2024-07-16 11:57:44.289136930 +0200
-@@ -244,7 +244,8 @@ static inline bool cpus_share_resources(
- 
- #endif	/* !CONFIG_SMP */
- 
--#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
-+#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
-+	!defined(CONFIG_SCHED_ALT)
- extern void rebuild_sched_domains_energy(void);
- #else
- static inline void rebuild_sched_domains_energy(void)
-diff --git a/include/linux/sched.h b/include/linux/sched.h
---- a/include/linux/sched.h	2024-07-15 00:43:32.000000000 +0200
-+++ b/include/linux/sched.h	2024-07-16 11:57:44.285136688 +0200
-@@ -774,9 +774,16 @@ struct task_struct {
- 	struct alloc_tag		*alloc_tag;
- #endif
- 
--#ifdef CONFIG_SMP
-+#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
- 	int				on_cpu;
-+#endif
-+
-+#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_ALT)
- 	struct __call_single_node	wake_entry;
-+#endif
-+
-+#ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- 	unsigned int			wakee_flips;
- 	unsigned long			wakee_flip_decay_ts;
- 	struct task_struct		*last_wakee;
-@@ -790,6 +797,7 @@ struct task_struct {
- 	 */
- 	int				recent_used_cpu;
- 	int				wake_cpu;
-+#endif /* !CONFIG_SCHED_ALT */
- #endif
- 	int				on_rq;
- 
-@@ -798,6 +806,19 @@ struct task_struct {
- 	int				normal_prio;
- 	unsigned int			rt_priority;
- 
-+#ifdef CONFIG_SCHED_ALT
-+	u64				last_ran;
-+	s64				time_slice;
-+	struct list_head		sq_node;
-+#ifdef CONFIG_SCHED_BMQ
-+	int				boost_prio;
-+#endif /* CONFIG_SCHED_BMQ */
-+#ifdef CONFIG_SCHED_PDS
-+	u64				deadline;
-+#endif /* CONFIG_SCHED_PDS */
-+	/* sched_clock time spent running */
-+	u64				sched_time;
-+#else /* !CONFIG_SCHED_ALT */
- 	struct sched_entity		se;
- 	struct sched_rt_entity		rt;
- 	struct sched_dl_entity		dl;
-@@ -809,6 +830,7 @@ struct task_struct {
- 	unsigned long			core_cookie;
- 	unsigned int			core_occupation;
- #endif
-+#endif /* !CONFIG_SCHED_ALT */
- 
- #ifdef CONFIG_CGROUP_SCHED
- 	struct task_group		*sched_task_group;
-@@ -1571,6 +1593,15 @@ struct task_struct {
- 	 */
- };
- 
-+#ifdef CONFIG_SCHED_ALT
-+#define tsk_seruntime(t)		((t)->sched_time)
-+/* replace the uncertian rt_timeout with 0UL */
-+#define tsk_rttimeout(t)		(0UL)
-+#else /* CFS */
-+#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
-+#define tsk_rttimeout(t)	((t)->rt.timeout)
-+#endif /* !CONFIG_SCHED_ALT */
-+
- #define TASK_REPORT_IDLE	(TASK_REPORT + 1)
- #define TASK_REPORT_MAX		(TASK_REPORT_IDLE << 1)
- 
-diff --git a/init/init_task.c b/init/init_task.c
---- a/init/init_task.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/init/init_task.c	2024-07-16 11:57:44.401143740 +0200
-@@ -70,9 +70,15 @@ struct task_struct init_task __aligned(L
- 	.stack		= init_stack,
- 	.usage		= REFCOUNT_INIT(2),
- 	.flags		= PF_KTHREAD,
-+#ifdef CONFIG_SCHED_ALT
-+	.prio		= DEFAULT_PRIO,
-+	.static_prio	= DEFAULT_PRIO,
-+	.normal_prio	= DEFAULT_PRIO,
-+#else
- 	.prio		= MAX_PRIO - 20,
- 	.static_prio	= MAX_PRIO - 20,
- 	.normal_prio	= MAX_PRIO - 20,
-+#endif
- 	.policy		= SCHED_NORMAL,
- 	.cpus_ptr	= &init_task.cpus_mask,
- 	.user_cpus_ptr	= NULL,
-@@ -85,6 +91,16 @@ struct task_struct init_task __aligned(L
- 	.restart_block	= {
- 		.fn = do_no_restart_syscall,
- 	},
-+#ifdef CONFIG_SCHED_ALT
-+	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
-+#ifdef CONFIG_SCHED_BMQ
-+	.boost_prio	= 0,
-+#endif
-+#ifdef CONFIG_SCHED_PDS
-+	.deadline	= 0,
-+#endif
-+	.time_slice	= HZ,
-+#else
- 	.se		= {
- 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
- 	},
-@@ -92,6 +108,7 @@ struct task_struct init_task __aligned(L
- 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
- 		.time_slice	= RR_TIMESLICE,
- 	},
-+#endif
- 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
- #ifdef CONFIG_SMP
- 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
-diff --git a/init/Kconfig b/init/Kconfig
---- a/init/Kconfig	2024-07-15 00:43:32.000000000 +0200
-+++ b/init/Kconfig	2024-07-16 11:57:44.401143740 +0200
-@@ -638,6 +638,7 @@ config TASK_IO_ACCOUNTING
- 
- config PSI
- 	bool "Pressure stall information tracking"
-+	depends on !SCHED_ALT
- 	select KERNFS
- 	help
- 	  Collect metrics that indicate how overcommitted the CPU, memory,
-@@ -803,6 +804,7 @@ menu "Scheduler features"
- config UCLAMP_TASK
- 	bool "Enable utilization clamping for RT/FAIR tasks"
- 	depends on CPU_FREQ_GOV_SCHEDUTIL
-+	depends on !SCHED_ALT
- 	help
- 	  This feature enables the scheduler to track the clamped utilization
- 	  of each CPU based on RUNNABLE tasks scheduled on that CPU.
-@@ -849,6 +851,35 @@ config UCLAMP_BUCKETS_COUNT
- 
- 	  If in doubt, use the default value.
- 
-+menuconfig SCHED_ALT
-+	bool "Alternative CPU Schedulers"
-+	default y
-+	help
-+	  This feature enable alternative CPU scheduler"
-+
-+if SCHED_ALT
-+
-+choice
-+	prompt "Alternative CPU Scheduler"
-+	default SCHED_BMQ
-+
-+config SCHED_BMQ
-+	bool "BMQ CPU scheduler"
-+	help
-+	  The BitMap Queue CPU scheduler for excellent interactivity and
-+	  responsiveness on the desktop and solid scalability on normal
-+	  hardware and commodity servers.
-+
-+config SCHED_PDS
-+	bool "PDS CPU scheduler"
-+	help
-+	  The Priority and Deadline based Skip list multiple queue CPU
-+	  Scheduler.
-+
-+endchoice
-+
-+endif
-+
- endmenu
- 
- #
-@@ -914,6 +945,7 @@ config NUMA_BALANCING
- 	depends on ARCH_SUPPORTS_NUMA_BALANCING
- 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
- 	depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
-+	depends on !SCHED_ALT
- 	help
- 	  This option adds support for automatic NUMA aware memory/task placement.
- 	  The mechanism is quite primitive and is based on migrating memory when
-@@ -1015,6 +1047,7 @@ config FAIR_GROUP_SCHED
- 	depends on CGROUP_SCHED
- 	default CGROUP_SCHED
- 
-+if !SCHED_ALT
- config CFS_BANDWIDTH
- 	bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
- 	depends on FAIR_GROUP_SCHED
-@@ -1037,6 +1070,7 @@ config RT_GROUP_SCHED
- 	  realtime bandwidth for them.
- 	  See Documentation/scheduler/sched-rt-group.rst for more information.
- 
-+endif #!SCHED_ALT
- endif #CGROUP_SCHED
- 
- config SCHED_MM_CID
-@@ -1285,6 +1319,7 @@ config CHECKPOINT_RESTORE
- 
- config SCHED_AUTOGROUP
- 	bool "Automatic process group scheduling"
-+	depends on !SCHED_ALT
- 	select CGROUPS
- 	select CGROUP_SCHED
- 	select FAIR_GROUP_SCHED
-diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
---- a/kernel/cgroup/cpuset.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/cgroup/cpuset.c	2024-07-16 11:57:44.421144957 +0200
-@@ -846,7 +846,7 @@ out:
- 	return ret;
- }
- 
--#ifdef CONFIG_SMP
-+#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
- /*
-  * Helper routine for generate_sched_domains().
-  * Do cpusets a, b have overlapping effective cpus_allowed masks?
-@@ -1245,7 +1245,7 @@ static void rebuild_sched_domains_locked
- 	/* Have scheduler rebuild the domains */
- 	partition_and_rebuild_sched_domains(ndoms, doms, attr);
- }
--#else /* !CONFIG_SMP */
-+#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
- static void rebuild_sched_domains_locked(void)
- {
- }
-@@ -3301,12 +3301,15 @@ static int cpuset_can_attach(struct cgro
- 				goto out_unlock;
- 		}
- 
-+#ifndef CONFIG_SCHED_ALT
- 		if (dl_task(task)) {
- 			cs->nr_migrate_dl_tasks++;
- 			cs->sum_migrate_dl_bw += task->dl.dl_bw;
- 		}
-+#endif
- 	}
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (!cs->nr_migrate_dl_tasks)
- 		goto out_success;
- 
-@@ -3327,6 +3330,7 @@ static int cpuset_can_attach(struct cgro
- 	}
- 
- out_success:
-+#endif
- 	/*
- 	 * Mark attach is in progress.  This makes validate_change() fail
- 	 * changes which zero cpus/mems_allowed.
-@@ -3350,12 +3354,14 @@ static void cpuset_cancel_attach(struct
- 	if (!cs->attach_in_progress)
- 		wake_up(&cpuset_attach_wq);
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (cs->nr_migrate_dl_tasks) {
- 		int cpu = cpumask_any(cs->effective_cpus);
- 
- 		dl_bw_free(cpu, cs->sum_migrate_dl_bw);
- 		reset_migrate_dl_data(cs);
- 	}
-+#endif
- 
- 	mutex_unlock(&cpuset_mutex);
- }
-diff --git a/kernel/delayacct.c b/kernel/delayacct.c
---- a/kernel/delayacct.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/delayacct.c	2024-07-16 11:57:44.421144957 +0200
-@@ -149,7 +149,7 @@ int delayacct_add_tsk(struct taskstats *
- 	 */
- 	t1 = tsk->sched_info.pcount;
- 	t2 = tsk->sched_info.run_delay;
--	t3 = tsk->se.sum_exec_runtime;
-+	t3 = tsk_seruntime(tsk);
- 
- 	d->cpu_count += t1;
- 
-diff --git a/kernel/exit.c b/kernel/exit.c
---- a/kernel/exit.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/exit.c	2024-07-16 11:57:44.425145200 +0200
-@@ -175,7 +175,7 @@ static void __exit_signal(struct task_st
- 			sig->curr_target = next_thread(tsk);
- 	}
- 
--	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
-+	add_device_randomness((const void*) &tsk_seruntime(tsk),
- 			      sizeof(unsigned long long));
- 
- 	/*
-@@ -196,7 +196,7 @@ static void __exit_signal(struct task_st
- 	sig->inblock += task_io_get_inblock(tsk);
- 	sig->oublock += task_io_get_oublock(tsk);
- 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
--	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
-+	sig->sum_sched_runtime += tsk_seruntime(tsk);
- 	sig->nr_threads--;
- 	__unhash_process(tsk, group_dead);
- 	write_sequnlock(&sig->stats_lock);
-diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
---- a/kernel/Kconfig.preempt	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/Kconfig.preempt	2024-07-16 11:57:44.409144227 +0200
-@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
- 
- config SCHED_CORE
- 	bool "Core Scheduling for SMT"
--	depends on SCHED_SMT
-+	depends on SCHED_SMT && !SCHED_ALT
- 	help
- 	  This option permits Core Scheduling, a means of coordinated task
- 	  selection across SMT siblings. When enabled -- see
-diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
---- a/kernel/locking/rtmutex.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/locking/rtmutex.c	2024-07-16 11:57:44.433145686 +0200
-@@ -363,7 +363,7 @@ waiter_update_prio(struct rt_mutex_waite
- 	lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
- 
- 	waiter->tree.prio = __waiter_prio(task);
--	waiter->tree.deadline = task->dl.deadline;
-+	waiter->tree.deadline = __tsk_deadline(task);
- }
- 
- /*
-@@ -384,16 +384,20 @@ waiter_clone_prio(struct rt_mutex_waiter
-  * Only use with rt_waiter_node_{less,equal}()
-  */
- #define task_to_waiter_node(p)	\
--	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
-+	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
- #define task_to_waiter(p)	\
- 	&(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
- 
- static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
- 					       struct rt_waiter_node *right)
- {
-+#ifdef CONFIG_SCHED_PDS
-+	return (left->deadline < right->deadline);
-+#else
- 	if (left->prio < right->prio)
- 		return 1;
- 
-+#ifndef CONFIG_SCHED_BMQ
- 	/*
- 	 * If both waiters have dl_prio(), we check the deadlines of the
- 	 * associated tasks.
-@@ -402,16 +406,22 @@ static __always_inline int rt_waiter_nod
- 	 */
- 	if (dl_prio(left->prio))
- 		return dl_time_before(left->deadline, right->deadline);
-+#endif
- 
- 	return 0;
-+#endif
- }
- 
- static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
- 						 struct rt_waiter_node *right)
- {
-+#ifdef CONFIG_SCHED_PDS
-+	return (left->deadline == right->deadline);
-+#else
- 	if (left->prio != right->prio)
- 		return 0;
- 
-+#ifndef CONFIG_SCHED_BMQ
- 	/*
- 	 * If both waiters have dl_prio(), we check the deadlines of the
- 	 * associated tasks.
-@@ -420,8 +430,10 @@ static __always_inline int rt_waiter_nod
- 	 */
- 	if (dl_prio(left->prio))
- 		return left->deadline == right->deadline;
-+#endif
- 
- 	return 1;
-+#endif
- }
- 
- static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
-diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
---- a/kernel/sched/alt_core.c	1970-01-01 01:00:00.000000000 +0100
-+++ b/kernel/sched/alt_core.c	2024-07-16 11:57:44.445146416 +0200
-@@ -0,0 +1,8963 @@
-+/*
-+ *  kernel/sched/alt_core.c
-+ *
-+ *  Core alternative kernel scheduler code and related syscalls
-+ *
-+ *  Copyright (C) 1991-2002  Linus Torvalds
-+ *
-+ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
-+ *		a whole lot of those previous things.
-+ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
-+ *		scheduler by Alfred Chen.
-+ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
-+ */
-+#include <linux/sched/clock.h>
-+#include <linux/sched/cputime.h>
-+#include <linux/sched/debug.h>
-+#include <linux/sched/hotplug.h>
-+#include <linux/sched/init.h>
-+#include <linux/sched/isolation.h>
-+#include <linux/sched/loadavg.h>
-+#include <linux/sched/mm.h>
-+#include <linux/sched/nohz.h>
-+#include <linux/sched/stat.h>
-+#include <linux/sched/wake_q.h>
-+
-+#include <linux/blkdev.h>
-+#include <linux/context_tracking.h>
-+#include <linux/cpuset.h>
-+#include <linux/delayacct.h>
-+#include <linux/init_task.h>
-+#include <linux/kcov.h>
-+#include <linux/kprobes.h>
-+#include <linux/nmi.h>
-+#include <linux/rseq.h>
-+#include <linux/scs.h>
-+
-+#include <uapi/linux/sched/types.h>
-+
-+#include <asm/irq_regs.h>
-+#include <asm/switch_to.h>
-+
-+#define CREATE_TRACE_POINTS
-+#include <trace/events/sched.h>
-+#include <trace/events/ipi.h>
-+#undef CREATE_TRACE_POINTS
-+
-+#include "sched.h"
-+#include "smp.h"
-+
-+#include "pelt.h"
-+
-+#include "../../io_uring/io-wq.h"
-+#include "../smpboot.h"
-+
-+EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
-+EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
-+
-+/*
-+ * Export tracepoints that act as a bare tracehook (ie: have no trace event
-+ * associated with them) to allow external modules to probe them.
-+ */
-+EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+#define sched_feat(x)	(1)
-+/*
-+ * Print a warning if need_resched is set for the given duration (if
-+ * LATENCY_WARN is enabled).
-+ *
-+ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
-+ * per boot.
-+ */
-+__read_mostly int sysctl_resched_latency_warn_ms = 100;
-+__read_mostly int sysctl_resched_latency_warn_once = 1;
-+#else
-+#define sched_feat(x)	(0)
-+#endif /* CONFIG_SCHED_DEBUG */
-+
-+#define ALT_SCHED_VERSION "v6.10-r0"
-+
-+/*
-+ * Compile time debug macro
-+ * #define ALT_SCHED_DEBUG
-+ */
-+
-+/* rt_prio(prio) defined in include/linux/sched/rt.h */
-+#define rt_task(p)		rt_prio((p)->prio)
-+#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
-+#define task_has_rt_policy(p)	(rt_policy((p)->policy))
-+
-+#define STOP_PRIO		(MAX_RT_PRIO - 1)
-+
-+/*
-+ * Time slice
-+ * (default: 4 msec, units: nanoseconds)
-+ */
-+unsigned int sysctl_sched_base_slice __read_mostly	= (4 << 20);
-+
-+#ifdef CONFIG_SCHED_BMQ
-+#include "bmq.h"
-+#endif
-+#ifdef CONFIG_SCHED_PDS
-+#include "pds.h"
-+#endif
-+
-+struct affinity_context {
-+	const struct cpumask *new_mask;
-+	struct cpumask *user_mask;
-+	unsigned int flags;
-+};
-+
-+/* Reschedule if less than this much time is left (value in ns, ~100 μs) */
-+#define RESCHED_NS		(100 << 10)
-+
-+/**
-+ * sched_yield_type - Type of yield that sched_yield() will perform.
-+ * 0: No yield.
-+ * 1: Requeue task. (default)
-+ */
-+int sched_yield_type __read_mostly = 1;
-+
-+#ifdef CONFIG_SMP
-+static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
-+
-+DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
-+DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
-+DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
-+
-+#ifdef CONFIG_SCHED_SMT
-+DEFINE_STATIC_KEY_FALSE(sched_smt_present);
-+EXPORT_SYMBOL_GPL(sched_smt_present);
-+#endif
-+
-+/*
-+ * Keep a unique ID per domain (we use the first CPU's number in the cpumask of
-+ * the domain), this allows us to quickly tell if two cpus are in the same cache
-+ * domain, see cpus_share_cache().
-+ */
-+DEFINE_PER_CPU(int, sd_llc_id);
-+#endif /* CONFIG_SMP */
-+
-+static DEFINE_MUTEX(sched_hotcpu_mutex);
-+
-+DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-+
-+#ifndef prepare_arch_switch
-+# define prepare_arch_switch(next)	do { } while (0)
-+#endif
-+#ifndef finish_arch_post_lock_switch
-+# define finish_arch_post_lock_switch()	do { } while (0)
-+#endif
-+
-+static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS + 1] ____cacheline_aligned_in_smp;
-+
-+static cpumask_t *const sched_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS - 1];
-+#ifdef CONFIG_SCHED_SMT
-+static cpumask_t *const sched_sg_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
-+#endif
-+
-+/* task function */
-+static inline const struct cpumask *task_user_cpus(struct task_struct *p)
-+{
-+	if (!p->user_cpus_ptr)
-+		return cpu_possible_mask; /* &init_task.cpus_mask */
-+	return p->user_cpus_ptr;
-+}
-+
-+/* sched_queue related functions */
-+static inline void sched_queue_init(struct sched_queue *q)
-+{
-+	int i;
-+
-+	bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
-+	for(i = 0; i < SCHED_LEVELS; i++)
-+		INIT_LIST_HEAD(&q->heads[i]);
-+}
-+
-+/*
-+ * Init idle task and put into queue structure of rq
-+ * IMPORTANT: may be called multiple times for a single cpu
-+ */
-+static inline void sched_queue_init_idle(struct sched_queue *q,
-+					 struct task_struct *idle)
-+{
-+	INIT_LIST_HEAD(&q->heads[IDLE_TASK_SCHED_PRIO]);
-+	list_add_tail(&idle->sq_node, &q->heads[IDLE_TASK_SCHED_PRIO]);
-+	idle->on_rq = TASK_ON_RQ_QUEUED;
-+}
-+
-+#define CLEAR_CACHED_PREEMPT_MASK(pr, low, high, cpu)		\
-+	if (low < pr && pr <= high)				\
-+		cpumask_clear_cpu(cpu, sched_preempt_mask + pr);
-+
-+#define SET_CACHED_PREEMPT_MASK(pr, low, high, cpu)		\
-+	if (low < pr && pr <= high)				\
-+		cpumask_set_cpu(cpu, sched_preempt_mask + pr);
-+
-+static atomic_t sched_prio_record = ATOMIC_INIT(0);
-+
-+/* water mark related functions */
-+static inline void update_sched_preempt_mask(struct rq *rq)
-+{
-+	int prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
-+	int last_prio = rq->prio;
-+	int cpu, pr;
-+
-+	if (prio == last_prio)
-+		return;
-+
-+	rq->prio = prio;
-+#ifdef CONFIG_SCHED_PDS
-+	rq->prio_idx = sched_prio2idx(rq->prio, rq);
-+#endif
-+	cpu = cpu_of(rq);
-+	pr = atomic_read(&sched_prio_record);
-+
-+	if (prio < last_prio) {
-+		if (IDLE_TASK_SCHED_PRIO == last_prio) {
-+#ifdef CONFIG_SCHED_SMT
-+			if (static_branch_likely(&sched_smt_present))
-+				cpumask_andnot(sched_sg_idle_mask,
-+					       sched_sg_idle_mask, cpu_smt_mask(cpu));
-+#endif
-+			cpumask_clear_cpu(cpu, sched_idle_mask);
-+			last_prio -= 2;
-+		}
-+		CLEAR_CACHED_PREEMPT_MASK(pr, prio, last_prio, cpu);
-+
-+		return;
-+	}
-+	/* last_prio < prio */
-+	if (IDLE_TASK_SCHED_PRIO == prio) {
-+#ifdef CONFIG_SCHED_SMT
-+		if (static_branch_likely(&sched_smt_present) &&
-+		    cpumask_intersects(cpu_smt_mask(cpu), sched_idle_mask))
-+			cpumask_or(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
-+#endif
-+		cpumask_set_cpu(cpu, sched_idle_mask);
-+		prio -= 2;
-+	}
-+	SET_CACHED_PREEMPT_MASK(pr, last_prio, prio, cpu);
-+}
-+
-+/*
-+ * This routine assumes that the idle task is always in the queue
-+ */
-+static inline struct task_struct *sched_rq_first_task(struct rq *rq)
-+{
-+	const struct list_head *head = &rq->queue.heads[sched_rq_prio_idx(rq)];
-+
-+	return list_first_entry(head, struct task_struct, sq_node);
-+}
-+
-+static inline struct task_struct *sched_rq_next_task(struct task_struct *p, struct rq *rq)
-+{
-+	struct list_head *next = p->sq_node.next;
-+
-+	if (&rq->queue.heads[0] <= next && next < &rq->queue.heads[SCHED_LEVELS]) {
-+		struct list_head *head;
-+		unsigned long idx = next - &rq->queue.heads[0];
-+
-+		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
-+				    sched_idx2prio(idx, rq) + 1);
-+		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
-+
-+		return list_first_entry(head, struct task_struct, sq_node);
-+	}
-+
-+	return list_next_entry(p, sq_node);
-+}
-+
-+/*
-+ * Serialization rules:
-+ *
-+ * Lock order:
-+ *
-+ *   p->pi_lock
-+ *     rq->lock
-+ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
-+ *
-+ *  rq1->lock
-+ *    rq2->lock  where: rq1 < rq2
-+ *
-+ * Regular state:
-+ *
-+ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
-+ * local CPU's rq->lock, it optionally removes the task from the runqueue and
-+ * always looks at the local rq data structures to find the most eligible task
-+ * to run next.
-+ *
-+ * Task enqueue is also under rq->lock, possibly taken from another CPU.
-+ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
-+ * the local CPU to avoid bouncing the runqueue state around [ see
-+ * ttwu_queue_wakelist() ]
-+ *
-+ * Task wakeup, specifically wakeups that involve migration, are horribly
-+ * complicated to avoid having to take two rq->locks.
-+ *
-+ * Special state:
-+ *
-+ * System-calls and anything external will use task_rq_lock() which acquires
-+ * both p->pi_lock and rq->lock. As a consequence the state they change is
-+ * stable while holding either lock:
-+ *
-+ *  - sched_setaffinity()/
-+ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
-+ *  - set_user_nice():		p->se.load, p->*prio
-+ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
-+ *				p->se.load, p->rt_priority,
-+ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
-+ *  - sched_setnuma():		p->numa_preferred_nid
-+ *  - sched_move_task():        p->sched_task_group
-+ *  - uclamp_update_active()	p->uclamp*
-+ *
-+ * p->state <- TASK_*:
-+ *
-+ *   is changed locklessly using set_current_state(), __set_current_state() or
-+ *   set_special_state(), see their respective comments, or by
-+ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
-+ *   concurrent self.
-+ *
-+ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
-+ *
-+ *   is set by activate_task() and cleared by deactivate_task(), under
-+ *   rq->lock. Non-zero indicates the task is runnable, the special
-+ *   ON_RQ_MIGRATING state is used for migration without holding both
-+ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
-+ *
-+ * p->on_cpu <- { 0, 1 }:
-+ *
-+ *   is set by prepare_task() and cleared by finish_task() such that it will be
-+ *   set before p is scheduled-in and cleared after p is scheduled-out, both
-+ *   under rq->lock. Non-zero indicates the task is running on its CPU.
-+ *
-+ *   [ The astute reader will observe that it is possible for two tasks on one
-+ *     CPU to have ->on_cpu = 1 at the same time. ]
-+ *
-+ * task_cpu(p): is changed by set_task_cpu(), the rules are:
-+ *
-+ *  - Don't call set_task_cpu() on a blocked task:
-+ *
-+ *    We don't care what CPU we're not running on, this simplifies hotplug,
-+ *    the CPU assignment of blocked tasks isn't required to be valid.
-+ *
-+ *  - for try_to_wake_up(), called under p->pi_lock:
-+ *
-+ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
-+ *
-+ *  - for migration called under rq->lock:
-+ *    [ see task_on_rq_migrating() in task_rq_lock() ]
-+ *
-+ *    o move_queued_task()
-+ *    o detach_task()
-+ *
-+ *  - for migration called under double_rq_lock():
-+ *
-+ *    o __migrate_swap_task()
-+ *    o push_rt_task() / pull_rt_task()
-+ *    o push_dl_task() / pull_dl_task()
-+ *    o dl_task_offline_migration()
-+ *
-+ */
-+
-+/*
-+ * Context: p->pi_lock
-+ */
-+static inline struct rq *__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
-+{
-+	struct rq *rq;
-+	for (;;) {
-+		rq = task_rq(p);
-+		if (p->on_cpu || task_on_rq_queued(p)) {
-+			raw_spin_lock(&rq->lock);
-+			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
-+				*plock = &rq->lock;
-+				return rq;
-+			}
-+			raw_spin_unlock(&rq->lock);
-+		} else if (task_on_rq_migrating(p)) {
-+			do {
-+				cpu_relax();
-+			} while (unlikely(task_on_rq_migrating(p)));
-+		} else {
-+			*plock = NULL;
-+			return rq;
-+		}
-+	}
-+}
-+
-+static inline void __task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
-+{
-+	if (NULL != lock)
-+		raw_spin_unlock(lock);
-+}
-+
-+static inline struct rq *
-+task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock, unsigned long *flags)
-+{
-+	struct rq *rq;
-+	for (;;) {
-+		rq = task_rq(p);
-+		if (p->on_cpu || task_on_rq_queued(p)) {
-+			raw_spin_lock_irqsave(&rq->lock, *flags);
-+			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
-+				*plock = &rq->lock;
-+				return rq;
-+			}
-+			raw_spin_unlock_irqrestore(&rq->lock, *flags);
-+		} else if (task_on_rq_migrating(p)) {
-+			do {
-+				cpu_relax();
-+			} while (unlikely(task_on_rq_migrating(p)));
-+		} else {
-+			raw_spin_lock_irqsave(&p->pi_lock, *flags);
-+			if (likely(!p->on_cpu && !p->on_rq && rq == task_rq(p))) {
-+				*plock = &p->pi_lock;
-+				return rq;
-+			}
-+			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
-+		}
-+	}
-+}
-+
-+static inline void
-+task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock, unsigned long *flags)
-+{
-+	raw_spin_unlock_irqrestore(lock, *flags);
-+}
-+
-+/*
-+ * __task_rq_lock - lock the rq @p resides on.
-+ */
-+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	struct rq *rq;
-+
-+	lockdep_assert_held(&p->pi_lock);
-+
-+	for (;;) {
-+		rq = task_rq(p);
-+		raw_spin_lock(&rq->lock);
-+		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
-+			return rq;
-+		raw_spin_unlock(&rq->lock);
-+
-+		while (unlikely(task_on_rq_migrating(p)))
-+			cpu_relax();
-+	}
-+}
-+
-+/*
-+ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
-+ */
-+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(p->pi_lock)
-+	__acquires(rq->lock)
-+{
-+	struct rq *rq;
-+
-+	for (;;) {
-+		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
-+		rq = task_rq(p);
-+		raw_spin_lock(&rq->lock);
-+		/*
-+		 *	move_queued_task()		task_rq_lock()
-+		 *
-+		 *	ACQUIRE (rq->lock)
-+		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
-+		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
-+		 *	[S] ->cpu = new_cpu		[L] task_rq()
-+		 *					[L] ->on_rq
-+		 *	RELEASE (rq->lock)
-+		 *
-+		 * If we observe the old CPU in task_rq_lock(), the acquire of
-+		 * the old rq->lock will fully serialize against the stores.
-+		 *
-+		 * If we observe the new CPU in task_rq_lock(), the address
-+		 * dependency headed by '[L] rq = task_rq()' and the acquire
-+		 * will pair with the WMB to ensure we then also see migrating.
-+		 */
-+		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
-+			return rq;
-+		}
-+		raw_spin_unlock(&rq->lock);
-+		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-+
-+		while (unlikely(task_on_rq_migrating(p)))
-+			cpu_relax();
-+	}
-+}
-+
-+static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	raw_spin_lock_irqsave(&rq->lock, rf->flags);
-+}
-+
-+static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
-+}
-+
-+DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
-+		    rq_lock_irqsave(_T->lock, &_T->rf),
-+		    rq_unlock_irqrestore(_T->lock, &_T->rf),
-+		    struct rq_flags rf)
-+
-+void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
-+{
-+	raw_spinlock_t *lock;
-+
-+	/* Matches synchronize_rcu() in __sched_core_enable() */
-+	preempt_disable();
-+
-+	for (;;) {
-+		lock = __rq_lockp(rq);
-+		raw_spin_lock_nested(lock, subclass);
-+		if (likely(lock == __rq_lockp(rq))) {
-+			/* preempt_count *MUST* be > 1 */
-+			preempt_enable_no_resched();
-+			return;
-+		}
-+		raw_spin_unlock(lock);
-+	}
-+}
-+
-+void raw_spin_rq_unlock(struct rq *rq)
-+{
-+	raw_spin_unlock(rq_lockp(rq));
-+}
-+
-+/*
-+ * RQ-clock updating methods:
-+ */
-+
-+static void update_rq_clock_task(struct rq *rq, s64 delta)
-+{
-+/*
-+ * In theory, the compiler should just see 0 here, and optimize out the call
-+ * to sched_rt_avg_update. But I don't trust it...
-+ */
-+	s64 __maybe_unused steal = 0, irq_delta = 0;
-+
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
-+
-+	/*
-+	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
-+	 * this case when a previous update_rq_clock() happened inside a
-+	 * {soft,}irq region.
-+	 *
-+	 * When this happens, we stop ->clock_task and only update the
-+	 * prev_irq_time stamp to account for the part that fit, so that a next
-+	 * update will consume the rest. This ensures ->clock_task is
-+	 * monotonic.
-+	 *
-+	 * It does however cause some slight miss-attribution of {soft,}irq
-+	 * time, a more accurate solution would be to update the irq_time using
-+	 * the current rq->clock timestamp, except that would require using
-+	 * atomic ops.
-+	 */
-+	if (irq_delta > delta)
-+		irq_delta = delta;
-+
-+	rq->prev_irq_time += irq_delta;
-+	delta -= irq_delta;
-+	delayacct_irq(rq->curr, irq_delta);
-+#endif
-+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
-+	if (static_key_false((&paravirt_steal_rq_enabled))) {
-+		steal = paravirt_steal_clock(cpu_of(rq));
-+		steal -= rq->prev_steal_time_rq;
-+
-+		if (unlikely(steal > delta))
-+			steal = delta;
-+
-+		rq->prev_steal_time_rq += steal;
-+		delta -= steal;
-+	}
-+#endif
-+
-+	rq->clock_task += delta;
-+
-+#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
-+	if ((irq_delta + steal))
-+		update_irq_load_avg(rq, irq_delta + steal);
-+#endif
-+}
-+
-+static inline void update_rq_clock(struct rq *rq)
-+{
-+	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
-+
-+	if (unlikely(delta <= 0))
-+		return;
-+	rq->clock += delta;
-+	sched_update_rq_clock(rq);
-+	update_rq_clock_task(rq, delta);
-+}
-+
-+/*
-+ * RQ Load update routine
-+ */
-+#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
-+#define RQ_UTIL_SHIFT			(8)
-+#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
-+
-+#define LOAD_BLOCK(t)		((t) >> 17)
-+#define LOAD_HALF_BLOCK(t)	((t) >> 16)
-+#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
-+#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
-+#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
-+
-+static inline void rq_load_update(struct rq *rq)
-+{
-+	u64 time = rq->clock;
-+	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp), RQ_LOAD_HISTORY_BITS - 1);
-+	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
-+	u64 curr = !!rq->nr_running;
-+
-+	if (delta) {
-+		rq->load_history = rq->load_history >> delta;
-+
-+		if (delta < RQ_UTIL_SHIFT) {
-+			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
-+			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
-+				rq->load_history ^= LOAD_BLOCK_BIT(delta);
-+		}
-+
-+		rq->load_block = BLOCK_MASK(time) * prev;
-+	} else {
-+		rq->load_block += (time - rq->load_stamp) * prev;
-+	}
-+	if (prev ^ curr)
-+		rq->load_history ^= CURRENT_LOAD_BIT;
-+	rq->load_stamp = time;
-+}
-+
-+unsigned long rq_load_util(struct rq *rq, unsigned long max)
-+{
-+	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
-+}
-+
-+#ifdef CONFIG_SMP
-+unsigned long sched_cpu_util(int cpu)
-+{
-+	return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
-+}
-+#endif /* CONFIG_SMP */
-+
-+#ifdef CONFIG_CPU_FREQ
-+/**
-+ * cpufreq_update_util - Take a note about CPU utilization changes.
-+ * @rq: Runqueue to carry out the update for.
-+ * @flags: Update reason flags.
-+ *
-+ * This function is called by the scheduler on the CPU whose utilization is
-+ * being updated.
-+ *
-+ * It can only be called from RCU-sched read-side critical sections.
-+ *
-+ * The way cpufreq is currently arranged requires it to evaluate the CPU
-+ * performance state (frequency/voltage) on a regular basis to prevent it from
-+ * being stuck in a completely inadequate performance level for too long.
-+ * That is not guaranteed to happen if the updates are only triggered from CFS
-+ * and DL, though, because they may not be coming in if only RT tasks are
-+ * active all the time (or there are RT tasks only).
-+ *
-+ * As a workaround for that issue, this function is called periodically by the
-+ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
-+ * but that really is a band-aid.  Going forward it should be replaced with
-+ * solutions targeted more specifically at RT tasks.
-+ */
-+static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
-+{
-+	struct update_util_data *data;
-+
-+#ifdef CONFIG_SMP
-+	rq_load_update(rq);
-+#endif
-+	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, cpu_of(rq)));
-+	if (data)
-+		data->func(data, rq_clock(rq), flags);
-+}
-+#else
-+static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
-+{
-+#ifdef CONFIG_SMP
-+	rq_load_update(rq);
-+#endif
-+}
-+#endif /* CONFIG_CPU_FREQ */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+/*
-+ * Tick may be needed by tasks in the runqueue depending on their policy and
-+ * requirements. If tick is needed, lets send the target an IPI to kick it out
-+ * of nohz mode if necessary.
-+ */
-+static inline void sched_update_tick_dependency(struct rq *rq)
-+{
-+	int cpu = cpu_of(rq);
-+
-+	if (!tick_nohz_full_cpu(cpu))
-+		return;
-+
-+	if (rq->nr_running < 2)
-+		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
-+	else
-+		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
-+}
-+#else /* !CONFIG_NO_HZ_FULL */
-+static inline void sched_update_tick_dependency(struct rq *rq) { }
-+#endif
-+
-+bool sched_task_on_rq(struct task_struct *p)
-+{
-+	return task_on_rq_queued(p);
-+}
-+
-+unsigned long get_wchan(struct task_struct *p)
-+{
-+	unsigned long ip = 0;
-+	unsigned int state;
-+
-+	if (!p || p == current)
-+		return 0;
-+
-+	/* Only get wchan if task is blocked and we can keep it that way. */
-+	raw_spin_lock_irq(&p->pi_lock);
-+	state = READ_ONCE(p->__state);
-+	smp_rmb(); /* see try_to_wake_up() */
-+	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
-+		ip = __get_wchan(p);
-+	raw_spin_unlock_irq(&p->pi_lock);
-+
-+	return ip;
-+}
-+
-+/*
-+ * Add/Remove/Requeue task to/from the runqueue routines
-+ * Context: rq->lock
-+ */
-+#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)					\
-+	sched_info_dequeue(rq, p);							\
-+											\
-+	__list_del_entry(&p->sq_node);							\
-+	if (p->sq_node.prev == p->sq_node.next) {					\
-+		clear_bit(sched_idx2prio(p->sq_node.next - &rq->queue.heads[0], rq),	\
-+			  rq->queue.bitmap);						\
-+		func;									\
-+	}
-+
-+#define __SCHED_ENQUEUE_TASK(p, rq, flags, func)					\
-+	sched_info_enqueue(rq, p);							\
-+	{										\
-+	int idx, prio;									\
-+	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);						\
-+	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);				\
-+	if (list_is_first(&p->sq_node, &rq->queue.heads[idx])) {			\
-+		set_bit(prio, rq->queue.bitmap);					\
-+		func;									\
-+	}										\
-+	}
-+
-+static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
-+{
-+#ifdef ALT_SCHED_DEBUG
-+	lockdep_assert_held(&rq->lock);
-+
-+	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
-+	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
-+		  task_cpu(p), cpu_of(rq));
-+#endif
-+
-+	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
-+	--rq->nr_running;
-+#ifdef CONFIG_SMP
-+	if (1 == rq->nr_running)
-+		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
-+#endif
-+
-+	sched_update_tick_dependency(rq);
-+}
-+
-+static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
-+{
-+#ifdef ALT_SCHED_DEBUG
-+	lockdep_assert_held(&rq->lock);
-+
-+	/*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
-+	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
-+		  task_cpu(p), cpu_of(rq));
-+#endif
-+
-+	__SCHED_ENQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
-+	++rq->nr_running;
-+#ifdef CONFIG_SMP
-+	if (2 == rq->nr_running)
-+		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
-+#endif
-+
-+	sched_update_tick_dependency(rq);
-+}
-+
-+static inline void requeue_task(struct task_struct *p, struct rq *rq)
-+{
-+	struct list_head *node = &p->sq_node;
-+	int deq_idx, idx, prio;
-+
-+	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);
-+#ifdef ALT_SCHED_DEBUG
-+	lockdep_assert_held(&rq->lock);
-+	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
-+	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
-+		  cpu_of(rq), task_cpu(p));
-+#endif
-+	if (list_is_last(node, &rq->queue.heads[idx]))
-+		return;
-+
-+	__list_del_entry(node);
-+	if (node->prev == node->next && (deq_idx = node->next - &rq->queue.heads[0]) != idx)
-+		clear_bit(sched_idx2prio(deq_idx, rq), rq->queue.bitmap);
-+
-+	list_add_tail(node, &rq->queue.heads[idx]);
-+	if (list_is_first(node, &rq->queue.heads[idx]))
-+		set_bit(prio, rq->queue.bitmap);
-+	update_sched_preempt_mask(rq);
-+}
-+
-+/*
-+ * cmpxchg based fetch_or, macro so it works for different integer types
-+ */
-+#define fetch_or(ptr, mask)						\
-+	({								\
-+		typeof(ptr) _ptr = (ptr);				\
-+		typeof(mask) _mask = (mask);				\
-+		typeof(*_ptr) _val = *_ptr;				\
-+									\
-+		do {							\
-+		} while (!try_cmpxchg(_ptr, &_val, _val | _mask));	\
-+	_val;								\
-+})
-+
-+#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
-+/*
-+ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
-+ * this avoids any races wrt polling state changes and thereby avoids
-+ * spurious IPIs.
-+ */
-+static inline bool set_nr_and_not_polling(struct task_struct *p)
-+{
-+	struct thread_info *ti = task_thread_info(p);
-+	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
-+}
-+
-+/*
-+ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
-+ *
-+ * If this returns true, then the idle task promises to call
-+ * sched_ttwu_pending() and reschedule soon.
-+ */
-+static bool set_nr_if_polling(struct task_struct *p)
-+{
-+	struct thread_info *ti = task_thread_info(p);
-+	typeof(ti->flags) val = READ_ONCE(ti->flags);
-+
-+	do {
-+		if (!(val & _TIF_POLLING_NRFLAG))
-+			return false;
-+		if (val & _TIF_NEED_RESCHED)
-+			return true;
-+	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
-+
-+	return true;
-+}
-+
-+#else
-+static inline bool set_nr_and_not_polling(struct task_struct *p)
-+{
-+	set_tsk_need_resched(p);
-+	return true;
-+}
-+
-+#ifdef CONFIG_SMP
-+static inline bool set_nr_if_polling(struct task_struct *p)
-+{
-+	return false;
-+}
-+#endif
-+#endif
-+
-+static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
-+{
-+	struct wake_q_node *node = &task->wake_q;
-+
-+	/*
-+	 * Atomically grab the task, if ->wake_q is !nil already it means
-+	 * it's already queued (either by us or someone else) and will get the
-+	 * wakeup due to that.
-+	 *
-+	 * In order to ensure that a pending wakeup will observe our pending
-+	 * state, even in the failed case, an explicit smp_mb() must be used.
-+	 */
-+	smp_mb__before_atomic();
-+	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
-+		return false;
-+
-+	/*
-+	 * The head is context local, there can be no concurrency.
-+	 */
-+	*head->lastp = node;
-+	head->lastp = &node->next;
-+	return true;
-+}
-+
-+/**
-+ * wake_q_add() - queue a wakeup for 'later' waking.
-+ * @head: the wake_q_head to add @task to
-+ * @task: the task to queue for 'later' wakeup
-+ *
-+ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
-+ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
-+ * instantly.
-+ *
-+ * This function must be used as-if it were wake_up_process(); IOW the task
-+ * must be ready to be woken at this location.
-+ */
-+void wake_q_add(struct wake_q_head *head, struct task_struct *task)
-+{
-+	if (__wake_q_add(head, task))
-+		get_task_struct(task);
-+}
-+
-+/**
-+ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
-+ * @head: the wake_q_head to add @task to
-+ * @task: the task to queue for 'later' wakeup
-+ *
-+ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
-+ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
-+ * instantly.
-+ *
-+ * This function must be used as-if it were wake_up_process(); IOW the task
-+ * must be ready to be woken at this location.
-+ *
-+ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
-+ * that already hold reference to @task can call the 'safe' version and trust
-+ * wake_q to do the right thing depending whether or not the @task is already
-+ * queued for wakeup.
-+ */
-+void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
-+{
-+	if (!__wake_q_add(head, task))
-+		put_task_struct(task);
-+}
-+
-+void wake_up_q(struct wake_q_head *head)
-+{
-+	struct wake_q_node *node = head->first;
-+
-+	while (node != WAKE_Q_TAIL) {
-+		struct task_struct *task;
-+
-+		task = container_of(node, struct task_struct, wake_q);
-+		/* task can safely be re-inserted now: */
-+		node = node->next;
-+		task->wake_q.next = NULL;
-+
-+		/*
-+		 * wake_up_process() executes a full barrier, which pairs with
-+		 * the queueing in wake_q_add() so as not to miss wakeups.
-+		 */
-+		wake_up_process(task);
-+		put_task_struct(task);
-+	}
-+}
-+
-+/*
-+ * resched_curr - mark rq's current task 'to be rescheduled now'.
-+ *
-+ * On UP this means the setting of the need_resched flag, on SMP it
-+ * might also involve a cross-CPU call to trigger the scheduler on
-+ * the target CPU.
-+ */
-+static inline void resched_curr(struct rq *rq)
-+{
-+	struct task_struct *curr = rq->curr;
-+	int cpu;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	if (test_tsk_need_resched(curr))
-+		return;
-+
-+	cpu = cpu_of(rq);
-+	if (cpu == smp_processor_id()) {
-+		set_tsk_need_resched(curr);
-+		set_preempt_need_resched();
-+		return;
-+	}
-+
-+	if (set_nr_and_not_polling(curr))
-+		smp_send_reschedule(cpu);
-+	else
-+		trace_sched_wake_idle_without_ipi(cpu);
-+}
-+
-+void resched_cpu(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	if (cpu_online(cpu) || cpu == smp_processor_id())
-+		resched_curr(cpu_rq(cpu));
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+}
-+
-+#ifdef CONFIG_SMP
-+#ifdef CONFIG_NO_HZ_COMMON
-+/*
-+ * This routine will record that the CPU is going idle with tick stopped.
-+ * This info will be used in performing idle load balancing in the future.
-+ */
-+void nohz_balance_enter_idle(int cpu) {}
-+
-+/*
-+ * In the semi idle case, use the nearest busy CPU for migrating timers
-+ * from an idle CPU.  This is good for power-savings.
-+ *
-+ * We don't do similar optimization for completely idle system, as
-+ * selecting an idle CPU will add more delays to the timers than intended
-+ * (as that CPU's timer base may not be uptodate wrt jiffies etc).
-+ */
-+int get_nohz_timer_target(void)
-+{
-+	int i, cpu = smp_processor_id(), default_cpu = -1;
-+	struct cpumask *mask;
-+	const struct cpumask *hk_mask;
-+
-+	if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
-+		if (!idle_cpu(cpu))
-+			return cpu;
-+		default_cpu = cpu;
-+	}
-+
-+	hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
-+
-+	for (mask = per_cpu(sched_cpu_topo_masks, cpu);
-+	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
-+		for_each_cpu_and(i, mask, hk_mask)
-+			if (!idle_cpu(i))
-+				return i;
-+
-+	if (default_cpu == -1)
-+		default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
-+	cpu = default_cpu;
-+
-+	return cpu;
-+}
-+
-+/*
-+ * When add_timer_on() enqueues a timer into the timer wheel of an
-+ * idle CPU then this timer might expire before the next timer event
-+ * which is scheduled to wake up that CPU. In case of a completely
-+ * idle system the next event might even be infinite time into the
-+ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
-+ * leaves the inner idle loop so the newly added timer is taken into
-+ * account when the CPU goes back to idle and evaluates the timer
-+ * wheel for the next timer event.
-+ */
-+static inline void wake_up_idle_cpu(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (cpu == smp_processor_id())
-+		return;
-+
-+	/*
-+	 * Set TIF_NEED_RESCHED and send an IPI if in the non-polling
-+	 * part of the idle loop. This forces an exit from the idle loop
-+	 * and a round trip to schedule(). Now this could be optimized
-+	 * because a simple new idle loop iteration is enough to
-+	 * re-evaluate the next tick. Provided some re-ordering of tick
-+	 * nohz functions that would need to follow TIF_NR_POLLING
-+	 * clearing:
-+	 *
-+	 * - On most archs, a simple fetch_or on ti::flags with a
-+	 *   "0" value would be enough to know if an IPI needs to be sent.
-+	 *
-+	 * - x86 needs to perform a last need_resched() check between
-+	 *   monitor and mwait which doesn't take timers into account.
-+	 *   There a dedicated TIF_TIMER flag would be required to
-+	 *   fetch_or here and be checked along with TIF_NEED_RESCHED
-+	 *   before mwait().
-+	 *
-+	 * However, remote timer enqueue is not such a frequent event
-+	 * and testing of the above solutions didn't appear to report
-+	 * much benefits.
-+	 */
-+	if (set_nr_and_not_polling(rq->idle))
-+		smp_send_reschedule(cpu);
-+	else
-+		trace_sched_wake_idle_without_ipi(cpu);
-+}
-+
-+static inline bool wake_up_full_nohz_cpu(int cpu)
-+{
-+	/*
-+	 * We just need the target to call irq_exit() and re-evaluate
-+	 * the next tick. The nohz full kick at least implies that.
-+	 * If needed we can still optimize that later with an
-+	 * empty IRQ.
-+	 */
-+	if (cpu_is_offline(cpu))
-+		return true;  /* Don't try to wake offline CPUs. */
-+	if (tick_nohz_full_cpu(cpu)) {
-+		if (cpu != smp_processor_id() ||
-+		    tick_nohz_tick_stopped())
-+			tick_nohz_full_kick_cpu(cpu);
-+		return true;
-+	}
-+
-+	return false;
-+}
-+
-+void wake_up_nohz_cpu(int cpu)
-+{
-+	if (!wake_up_full_nohz_cpu(cpu))
-+		wake_up_idle_cpu(cpu);
-+}
-+
-+static void nohz_csd_func(void *info)
-+{
-+	struct rq *rq = info;
-+	int cpu = cpu_of(rq);
-+	unsigned int flags;
-+
-+	/*
-+	 * Release the rq::nohz_csd.
-+	 */
-+	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
-+	WARN_ON(!(flags & NOHZ_KICK_MASK));
-+
-+	rq->idle_balance = idle_cpu(cpu);
-+	if (rq->idle_balance && !need_resched()) {
-+		rq->nohz_idle_balance = flags;
-+		raise_softirq_irqoff(SCHED_SOFTIRQ);
-+	}
-+}
-+
-+#endif /* CONFIG_NO_HZ_COMMON */
-+#endif /* CONFIG_SMP */
-+
-+static inline void wakeup_preempt(struct rq *rq)
-+{
-+	if (sched_rq_first_task(rq) != rq->curr)
-+		resched_curr(rq);
-+}
-+
-+static __always_inline
-+int __task_state_match(struct task_struct *p, unsigned int state)
-+{
-+	if (READ_ONCE(p->__state) & state)
-+		return 1;
-+
-+	if (READ_ONCE(p->saved_state) & state)
-+		return -1;
-+
-+	return 0;
-+}
-+
-+static __always_inline
-+int task_state_match(struct task_struct *p, unsigned int state)
-+{
-+	/*
-+	 * Serialize against current_save_and_set_rtlock_wait_state(),
-+	 * current_restore_rtlock_saved_state(), and __refrigerator().
-+	 */
-+	guard(raw_spinlock_irq)(&p->pi_lock);
-+
-+	return __task_state_match(p, state);
-+}
-+
-+/*
-+ * wait_task_inactive - wait for a thread to unschedule.
-+ *
-+ * Wait for the thread to block in any of the states set in @match_state.
-+ * If it changes, i.e. @p might have woken up, then return zero.  When we
-+ * succeed in waiting for @p to be off its CPU, we return a positive number
-+ * (its total switch count).  If a second call a short while later returns the
-+ * same number, the caller can be sure that @p has remained unscheduled the
-+ * whole time.
-+ *
-+ * The caller must ensure that the task *will* unschedule sometime soon,
-+ * else this function might spin for a *long* time. This function can't
-+ * be called with interrupts off, or it may introduce deadlock with
-+ * smp_call_function() if an IPI is sent by the same process we are
-+ * waiting to become inactive.
-+ */
-+unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
-+{
-+	unsigned long flags;
-+	int running, queued, match;
-+	unsigned long ncsw;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	for (;;) {
-+		rq = task_rq(p);
-+
-+		/*
-+		 * If the task is actively running on another CPU
-+		 * still, just relax and busy-wait without holding
-+		 * any locks.
-+		 *
-+		 * NOTE! Since we don't hold any locks, it's not
-+		 * even sure that "rq" stays as the right runqueue!
-+		 * But we don't care, since this will return false
-+		 * if the runqueue has changed and p is actually now
-+		 * running somewhere else!
-+		 */
-+		while (task_on_cpu(p)) {
-+			if (!task_state_match(p, match_state))
-+				return 0;
-+			cpu_relax();
-+		}
-+
-+		/*
-+		 * Ok, time to look more closely! We need the rq
-+		 * lock now, to be *sure*. If we're wrong, we'll
-+		 * just go back and repeat.
-+		 */
-+		task_access_lock_irqsave(p, &lock, &flags);
-+		trace_sched_wait_task(p);
-+		running = task_on_cpu(p);
-+		queued = p->on_rq;
-+		ncsw = 0;
-+		if ((match = __task_state_match(p, match_state))) {
-+			/*
-+			 * When matching on p->saved_state, consider this task
-+			 * still queued so it will wait.
-+			 */
-+			if (match < 0)
-+				queued = 1;
-+			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
-+		}
-+		task_access_unlock_irqrestore(p, lock, &flags);
-+
-+		/*
-+		 * If it changed from the expected state, bail out now.
-+		 */
-+		if (unlikely(!ncsw))
-+			break;
-+
-+		/*
-+		 * Was it really running after all now that we
-+		 * checked with the proper locks actually held?
-+		 *
-+		 * Oops. Go back and try again..
-+		 */
-+		if (unlikely(running)) {
-+			cpu_relax();
-+			continue;
-+		}
-+
-+		/*
-+		 * It's not enough that it's not actively running,
-+		 * it must be off the runqueue _entirely_, and not
-+		 * preempted!
-+		 *
-+		 * So if it was still runnable (but just not actively
-+		 * running right now), it's preempted, and we should
-+		 * yield - it could be a while.
-+		 */
-+		if (unlikely(queued)) {
-+			ktime_t to = NSEC_PER_SEC / HZ;
-+
-+			set_current_state(TASK_UNINTERRUPTIBLE);
-+			schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
-+			continue;
-+		}
-+
-+		/*
-+		 * Ahh, all good. It wasn't running, and it wasn't
-+		 * runnable, which means that it will never become
-+		 * running in the future either. We're all done!
-+		 */
-+		break;
-+	}
-+
-+	return ncsw;
-+}
-+
-+#ifdef CONFIG_SCHED_HRTICK
-+/*
-+ * Use HR-timers to deliver accurate preemption points.
-+ */
-+
-+static void hrtick_clear(struct rq *rq)
-+{
-+	if (hrtimer_active(&rq->hrtick_timer))
-+		hrtimer_cancel(&rq->hrtick_timer);
-+}
-+
-+/*
-+ * High-resolution timer tick.
-+ * Runs from hardirq context with interrupts disabled.
-+ */
-+static enum hrtimer_restart hrtick(struct hrtimer *timer)
-+{
-+	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
-+
-+	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
-+
-+	raw_spin_lock(&rq->lock);
-+	resched_curr(rq);
-+	raw_spin_unlock(&rq->lock);
-+
-+	return HRTIMER_NORESTART;
-+}
-+
-+/*
-+ * Use hrtick when:
-+ *  - enabled by features
-+ *  - hrtimer is actually high res
-+ */
-+static inline int hrtick_enabled(struct rq *rq)
-+{
-+	/**
-+	 * Alt schedule FW doesn't support sched_feat yet
-+	if (!sched_feat(HRTICK))
-+		return 0;
-+	*/
-+	if (!cpu_active(cpu_of(rq)))
-+		return 0;
-+	return hrtimer_is_hres_active(&rq->hrtick_timer);
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+static void __hrtick_restart(struct rq *rq)
-+{
-+	struct hrtimer *timer = &rq->hrtick_timer;
-+	ktime_t time = rq->hrtick_time;
-+
-+	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
-+}
-+
-+/*
-+ * called from hardirq (IPI) context
-+ */
-+static void __hrtick_start(void *arg)
-+{
-+	struct rq *rq = arg;
-+
-+	raw_spin_lock(&rq->lock);
-+	__hrtick_restart(rq);
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+/*
-+ * Called to set the hrtick timer state.
-+ *
-+ * called with rq->lock held and irqs disabled
-+ */
-+static inline void hrtick_start(struct rq *rq, u64 delay)
-+{
-+	struct hrtimer *timer = &rq->hrtick_timer;
-+	s64 delta;
-+
-+	/*
-+	 * Don't schedule slices shorter than 10000ns, that just
-+	 * doesn't make sense and can cause timer DoS.
-+	 */
-+	delta = max_t(s64, delay, 10000LL);
-+
-+	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
-+
-+	if (rq == this_rq())
-+		__hrtick_restart(rq);
-+	else
-+		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
-+}
-+
-+#else
-+/*
-+ * Called to set the hrtick timer state.
-+ *
-+ * called with rq->lock held and irqs disabled
-+ */
-+static inline void hrtick_start(struct rq *rq, u64 delay)
-+{
-+	/*
-+	 * Don't schedule slices shorter than 10000ns, that just
-+	 * doesn't make sense. Rely on vruntime for fairness.
-+	 */
-+	delay = max_t(u64, delay, 10000LL);
-+	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
-+		      HRTIMER_MODE_REL_PINNED_HARD);
-+}
-+#endif /* CONFIG_SMP */
-+
-+static void hrtick_rq_init(struct rq *rq)
-+{
-+#ifdef CONFIG_SMP
-+	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
-+#endif
-+
-+	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
-+	rq->hrtick_timer.function = hrtick;
-+}
-+#else	/* CONFIG_SCHED_HRTICK */
-+static inline int hrtick_enabled(struct rq *rq)
-+{
-+	return 0;
-+}
-+
-+static inline void hrtick_clear(struct rq *rq)
-+{
-+}
-+
-+static inline void hrtick_rq_init(struct rq *rq)
-+{
-+}
-+#endif	/* CONFIG_SCHED_HRTICK */
-+
-+static inline int __normal_prio(int policy, int rt_prio, int static_prio)
-+{
-+	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
-+}
-+
-+/*
-+ * Calculate the expected normal priority: i.e. priority
-+ * without taking RT-inheritance into account. Might be
-+ * boosted by interactivity modifiers. Changes upon fork,
-+ * setprio syscalls, and whenever the interactivity
-+ * estimator recalculates.
-+ */
-+static inline int normal_prio(struct task_struct *p)
-+{
-+	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
-+}
-+
-+/*
-+ * Calculate the current priority, i.e. the priority
-+ * taken into account by the scheduler. This value might
-+ * be boosted by RT tasks as it will be RT if the task got
-+ * RT-boosted. If not then it returns p->normal_prio.
-+ */
-+static int effective_prio(struct task_struct *p)
-+{
-+	p->normal_prio = normal_prio(p);
-+	/*
-+	 * If we are RT tasks or we were boosted to RT priority,
-+	 * keep the priority unchanged. Otherwise, update priority
-+	 * to the normal priority:
-+	 */
-+	if (!rt_prio(p->prio))
-+		return p->normal_prio;
-+	return p->prio;
-+}
-+
-+/*
-+ * activate_task - move a task to the runqueue.
-+ *
-+ * Context: rq->lock
-+ */
-+static void activate_task(struct task_struct *p, struct rq *rq)
-+{
-+	enqueue_task(p, rq, ENQUEUE_WAKEUP);
-+
-+	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
-+	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
-+
-+	/*
-+	 * If in_iowait is set, the code below may not trigger any cpufreq
-+	 * utilization updates, so do it here explicitly with the IOWAIT flag
-+	 * passed.
-+	 */
-+	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
-+}
-+
-+/*
-+ * deactivate_task - remove a task from the runqueue.
-+ *
-+ * Context: rq->lock
-+ */
-+static inline void deactivate_task(struct task_struct *p, struct rq *rq)
-+{
-+	WRITE_ONCE(p->on_rq, 0);
-+	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
-+
-+	dequeue_task(p, rq, DEQUEUE_SLEEP);
-+
-+	cpufreq_update_util(rq, 0);
-+}
-+
-+static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
-+{
-+#ifdef CONFIG_SMP
-+	/*
-+	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
-+	 * successfully executed on another CPU. We must ensure that updates of
-+	 * per-task data have been completed by this moment.
-+	 */
-+	smp_wmb();
-+
-+	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
-+#endif
-+}
-+
-+static inline bool is_migration_disabled(struct task_struct *p)
-+{
-+#ifdef CONFIG_SMP
-+	return p->migration_disabled;
-+#else
-+	return false;
-+#endif
-+}
-+
-+#define SCA_CHECK		0x01
-+#define SCA_USER		0x08
-+
-+#ifdef CONFIG_SMP
-+
-+void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
-+{
-+#ifdef CONFIG_SCHED_DEBUG
-+	unsigned int state = READ_ONCE(p->__state);
-+
-+	/*
-+	 * We should never call set_task_cpu() on a blocked task,
-+	 * ttwu() will sort out the placement.
-+	 */
-+	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
-+
-+#ifdef CONFIG_LOCKDEP
-+	/*
-+	 * The caller should hold either p->pi_lock or rq->lock, when changing
-+	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
-+	 *
-+	 * sched_move_task() holds both and thus holding either pins the cgroup,
-+	 * see task_group().
-+	 */
-+	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
-+				      lockdep_is_held(&task_rq(p)->lock)));
-+#endif
-+	/*
-+	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
-+	 */
-+	WARN_ON_ONCE(!cpu_online(new_cpu));
-+
-+	WARN_ON_ONCE(is_migration_disabled(p));
-+#endif
-+	trace_sched_migrate_task(p, new_cpu);
-+
-+	if (task_cpu(p) != new_cpu)
-+	{
-+		rseq_migrate(p);
-+		perf_event_task_migrate(p);
-+	}
-+
-+	__set_task_cpu(p, new_cpu);
-+}
-+
-+#define MDF_FORCE_ENABLED	0x80
-+
-+static void
-+__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	/*
-+	 * This here violates the locking rules for affinity, since we're only
-+	 * supposed to change these variables while holding both rq->lock and
-+	 * p->pi_lock.
-+	 *
-+	 * HOWEVER, it magically works, because ttwu() is the only code that
-+	 * accesses these variables under p->pi_lock and only does so after
-+	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
-+	 * before finish_task().
-+	 *
-+	 * XXX do further audits, this smells like something putrid.
-+	 */
-+	SCHED_WARN_ON(!p->on_cpu);
-+	p->cpus_ptr = new_mask;
-+}
-+
-+void migrate_disable(void)
-+{
-+	struct task_struct *p = current;
-+	int cpu;
-+
-+	if (p->migration_disabled) {
-+		p->migration_disabled++;
-+		return;
-+	}
-+
-+	guard(preempt)();
-+	cpu = smp_processor_id();
-+	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
-+		cpu_rq(cpu)->nr_pinned++;
-+		p->migration_disabled = 1;
-+		p->migration_flags &= ~MDF_FORCE_ENABLED;
-+
-+		/*
-+		 * Violates locking rules! see comment in __do_set_cpus_ptr().
-+		 */
-+		if (p->cpus_ptr == &p->cpus_mask)
-+			__do_set_cpus_ptr(p, cpumask_of(cpu));
-+	}
-+}
-+EXPORT_SYMBOL_GPL(migrate_disable);
-+
-+void migrate_enable(void)
-+{
-+	struct task_struct *p = current;
-+
-+	if (0 == p->migration_disabled)
-+		return;
-+
-+	if (p->migration_disabled > 1) {
-+		p->migration_disabled--;
-+		return;
-+	}
-+
-+	if (WARN_ON_ONCE(!p->migration_disabled))
-+		return;
-+
-+	/*
-+	 * Ensure stop_task runs either before or after this, and that
-+	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
-+	 */
-+	guard(preempt)();
-+	/*
-+	 * Assumption: current should be running on allowed cpu
-+	 */
-+	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
-+	if (p->cpus_ptr != &p->cpus_mask)
-+		__do_set_cpus_ptr(p, &p->cpus_mask);
-+	/*
-+	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
-+	 * regular cpus_mask, otherwise things that race (eg.
-+	 * select_fallback_rq) get confused.
-+	 */
-+	barrier();
-+	p->migration_disabled = 0;
-+	this_rq()->nr_pinned--;
-+}
-+EXPORT_SYMBOL_GPL(migrate_enable);
-+
-+static inline bool rq_has_pinned_tasks(struct rq *rq)
-+{
-+	return rq->nr_pinned;
-+}
-+
-+/*
-+ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
-+ * __set_cpus_allowed_ptr() and select_fallback_rq().
-+ */
-+static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
-+{
-+	/* When not in the task's cpumask, no point in looking further. */
-+	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
-+		return false;
-+
-+	/* migrate_disabled() must be allowed to finish. */
-+	if (is_migration_disabled(p))
-+		return cpu_online(cpu);
-+
-+	/* Non kernel threads are not allowed during either online or offline. */
-+	if (!(p->flags & PF_KTHREAD))
-+		return cpu_active(cpu) && task_cpu_possible(cpu, p);
-+
-+	/* KTHREAD_IS_PER_CPU is always allowed. */
-+	if (kthread_is_per_cpu(p))
-+		return cpu_online(cpu);
-+
-+	/* Regular kernel threads don't get to stay during offline. */
-+	if (cpu_dying(cpu))
-+		return false;
-+
-+	/* But are allowed during online. */
-+	return cpu_online(cpu);
-+}
-+
-+/*
-+ * This is how migration works:
-+ *
-+ * 1) we invoke migration_cpu_stop() on the target CPU using
-+ *    stop_one_cpu().
-+ * 2) stopper starts to run (implicitly forcing the migrated thread
-+ *    off the CPU)
-+ * 3) it checks whether the migrated task is still in the wrong runqueue.
-+ * 4) if it's in the wrong runqueue then the migration thread removes
-+ *    it and puts it into the right queue.
-+ * 5) stopper completes and stop_one_cpu() returns and the migration
-+ *    is done.
-+ */
-+
-+/*
-+ * move_queued_task - move a queued task to new rq.
-+ *
-+ * Returns (locked) new rq. Old rq's lock is released.
-+ */
-+static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
-+{
-+	int src_cpu;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	src_cpu = cpu_of(rq);
-+	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
-+	dequeue_task(p, rq, 0);
-+	set_task_cpu(p, new_cpu);
-+	raw_spin_unlock(&rq->lock);
-+
-+	rq = cpu_rq(new_cpu);
-+
-+	raw_spin_lock(&rq->lock);
-+	WARN_ON_ONCE(task_cpu(p) != new_cpu);
-+
-+	sched_mm_cid_migrate_to(rq, p, src_cpu);
-+
-+	sched_task_sanity_check(p, rq);
-+	enqueue_task(p, rq, 0);
-+	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
-+	wakeup_preempt(rq);
-+
-+	return rq;
-+}
-+
-+struct migration_arg {
-+	struct task_struct *task;
-+	int dest_cpu;
-+};
-+
-+/*
-+ * Move (not current) task off this CPU, onto the destination CPU. We're doing
-+ * this because either it can't run here any more (set_cpus_allowed()
-+ * away from this CPU, or CPU going down), or because we're
-+ * attempting to rebalance this task on exec (sched_exec).
-+ *
-+ * So we race with normal scheduler movements, but that's OK, as long
-+ * as the task is no longer on this CPU.
-+ */
-+static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
-+{
-+	/* Affinity changed (again). */
-+	if (!is_cpu_allowed(p, dest_cpu))
-+		return rq;
-+
-+	return move_queued_task(rq, p, dest_cpu);
-+}
-+
-+/*
-+ * migration_cpu_stop - this will be executed by a highprio stopper thread
-+ * and performs thread migration by bumping thread off CPU then
-+ * 'pushing' onto another runqueue.
-+ */
-+static int migration_cpu_stop(void *data)
-+{
-+	struct migration_arg *arg = data;
-+	struct task_struct *p = arg->task;
-+	struct rq *rq = this_rq();
-+	unsigned long flags;
-+
-+	/*
-+	 * The original target CPU might have gone down and we might
-+	 * be on another CPU but it doesn't matter.
-+	 */
-+	local_irq_save(flags);
-+	/*
-+	 * We need to explicitly wake pending tasks before running
-+	 * __migrate_task() such that we will not miss enforcing cpus_ptr
-+	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
-+	 */
-+	flush_smp_call_function_queue();
-+
-+	raw_spin_lock(&p->pi_lock);
-+	raw_spin_lock(&rq->lock);
-+	/*
-+	 * If task_rq(p) != rq, it cannot be migrated here, because we're
-+	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
-+	 * we're holding p->pi_lock.
-+	 */
-+	if (task_rq(p) == rq && task_on_rq_queued(p)) {
-+		update_rq_clock(rq);
-+		rq = __migrate_task(rq, p, arg->dest_cpu);
-+	}
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+	return 0;
-+}
-+
-+static inline void
-+set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
-+{
-+	cpumask_copy(&p->cpus_mask, ctx->new_mask);
-+	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
-+
-+	/*
-+	 * Swap in a new user_cpus_ptr if SCA_USER flag set
-+	 */
-+	if (ctx->flags & SCA_USER)
-+		swap(p->user_cpus_ptr, ctx->user_mask);
-+}
-+
-+static void
-+__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
-+{
-+	lockdep_assert_held(&p->pi_lock);
-+	set_cpus_allowed_common(p, ctx);
-+}
-+
-+/*
-+ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
-+ * affinity (if any) should be destroyed too.
-+ */
-+void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	struct affinity_context ac = {
-+		.new_mask  = new_mask,
-+		.user_mask = NULL,
-+		.flags     = SCA_USER,	/* clear the user requested mask */
-+	};
-+	union cpumask_rcuhead {
-+		cpumask_t cpumask;
-+		struct rcu_head rcu;
-+	};
-+
-+	__do_set_cpus_allowed(p, &ac);
-+
-+	/*
-+	 * Because this is called with p->pi_lock held, it is not possible
-+	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
-+	 * kfree_rcu().
-+	 */
-+	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
-+}
-+
-+static cpumask_t *alloc_user_cpus_ptr(int node)
-+{
-+	/*
-+	 * See do_set_cpus_allowed() above for the rcu_head usage.
-+	 */
-+	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
-+
-+	return kmalloc_node(size, GFP_KERNEL, node);
-+}
-+
-+int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
-+		      int node)
-+{
-+	cpumask_t *user_mask;
-+	unsigned long flags;
-+
-+	/*
-+	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
-+	 * may differ by now due to racing.
-+	 */
-+	dst->user_cpus_ptr = NULL;
-+
-+	/*
-+	 * This check is racy and losing the race is a valid situation.
-+	 * It is not worth the extra overhead of taking the pi_lock on
-+	 * every fork/clone.
-+	 */
-+	if (data_race(!src->user_cpus_ptr))
-+		return 0;
-+
-+	user_mask = alloc_user_cpus_ptr(node);
-+	if (!user_mask)
-+		return -ENOMEM;
-+
-+	/*
-+	 * Use pi_lock to protect content of user_cpus_ptr
-+	 *
-+	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
-+	 * do_set_cpus_allowed().
-+	 */
-+	raw_spin_lock_irqsave(&src->pi_lock, flags);
-+	if (src->user_cpus_ptr) {
-+		swap(dst->user_cpus_ptr, user_mask);
-+		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
-+	}
-+	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
-+
-+	if (unlikely(user_mask))
-+		kfree(user_mask);
-+
-+	return 0;
-+}
-+
-+static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
-+{
-+	struct cpumask *user_mask = NULL;
-+
-+	swap(p->user_cpus_ptr, user_mask);
-+
-+	return user_mask;
-+}
-+
-+void release_user_cpus_ptr(struct task_struct *p)
-+{
-+	kfree(clear_user_cpus_ptr(p));
-+}
-+
-+#endif
-+
-+/**
-+ * task_curr - is this task currently executing on a CPU?
-+ * @p: the task in question.
-+ *
-+ * Return: 1 if the task is currently executing. 0 otherwise.
-+ */
-+inline int task_curr(const struct task_struct *p)
-+{
-+	return cpu_curr(task_cpu(p)) == p;
-+}
-+
-+#ifdef CONFIG_SMP
-+/***
-+ * kick_process - kick a running thread to enter/exit the kernel
-+ * @p: the to-be-kicked thread
-+ *
-+ * Cause a process which is running on another CPU to enter
-+ * kernel-mode, without any delay. (to get signals handled.)
-+ *
-+ * NOTE: this function doesn't have to take the runqueue lock,
-+ * because all it wants to ensure is that the remote task enters
-+ * the kernel. If the IPI races and the task has been migrated
-+ * to another CPU then no harm is done and the purpose has been
-+ * achieved as well.
-+ */
-+void kick_process(struct task_struct *p)
-+{
-+	guard(preempt)();
-+	int cpu = task_cpu(p);
-+
-+	if ((cpu != smp_processor_id()) && task_curr(p))
-+		smp_send_reschedule(cpu);
-+}
-+EXPORT_SYMBOL_GPL(kick_process);
-+
-+/*
-+ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
-+ *
-+ * A few notes on cpu_active vs cpu_online:
-+ *
-+ *  - cpu_active must be a subset of cpu_online
-+ *
-+ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
-+ *    see __set_cpus_allowed_ptr(). At this point the newly online
-+ *    CPU isn't yet part of the sched domains, and balancing will not
-+ *    see it.
-+ *
-+ *  - on cpu-down we clear cpu_active() to mask the sched domains and
-+ *    keep the load balancer from placing new tasks on the to-be-removed
-+ *    CPU. Existing tasks will remain running there and will be taken
-+ *    off.
-+ *
-+ * This means that fallback selection must not select !active CPUs.
-+ * And can assume that any active CPU must be online. Conversely
-+ * select_task_rq() below may allow selection of !active CPUs in order
-+ * to satisfy the above rules.
-+ */
-+static int select_fallback_rq(int cpu, struct task_struct *p)
-+{
-+	int nid = cpu_to_node(cpu);
-+	const struct cpumask *nodemask = NULL;
-+	enum { cpuset, possible, fail } state = cpuset;
-+	int dest_cpu;
-+
-+	/*
-+	 * If the node that the CPU is on has been offlined, cpu_to_node()
-+	 * will return -1. There is no CPU on the node, and we should
-+	 * select a CPU on another node.
-+	 */
-+	if (nid != -1) {
-+		nodemask = cpumask_of_node(nid);
-+
-+		/* Look for allowed, online CPU in same node. */
-+		for_each_cpu(dest_cpu, nodemask) {
-+			if (is_cpu_allowed(p, dest_cpu))
-+				return dest_cpu;
-+		}
-+	}
-+
-+	for (;;) {
-+		/* Any allowed, online CPU? */
-+		for_each_cpu(dest_cpu, p->cpus_ptr) {
-+			if (!is_cpu_allowed(p, dest_cpu))
-+				continue;
-+			goto out;
-+		}
-+
-+		/* No more Mr. Nice Guy. */
-+		switch (state) {
-+		case cpuset:
-+			if (cpuset_cpus_allowed_fallback(p)) {
-+				state = possible;
-+				break;
-+			}
-+			fallthrough;
-+		case possible:
-+			/*
-+			 * XXX When called from select_task_rq() we only
-+			 * hold p->pi_lock and again violate locking order.
-+			 *
-+			 * More yuck to audit.
-+			 */
-+			do_set_cpus_allowed(p, task_cpu_possible_mask(p));
-+			state = fail;
-+			break;
-+
-+		case fail:
-+			BUG();
-+			break;
-+		}
-+	}
-+
-+out:
-+	if (state != cpuset) {
-+		/*
-+		 * Don't tell them about moving exiting tasks or
-+		 * kernel threads (both mm NULL), since they never
-+		 * leave the kernel.
-+		 */
-+		if (p->mm && printk_ratelimit()) {
-+			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
-+					task_pid_nr(p), p->comm, cpu);
-+		}
-+	}
-+
-+	return dest_cpu;
-+}
-+
-+static inline void
-+sched_preempt_mask_flush(cpumask_t *mask, int prio, int ref)
-+{
-+	int cpu;
-+
-+	cpumask_copy(mask, sched_preempt_mask + ref);
-+	if (prio < ref) {
-+		for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
-+			if (prio < cpu_rq(cpu)->prio)
-+				cpumask_set_cpu(cpu, mask);
-+		}
-+	} else {
-+		for_each_cpu_andnot(cpu, mask, sched_idle_mask) {
-+			if (prio >= cpu_rq(cpu)->prio)
-+				cpumask_clear_cpu(cpu, mask);
-+		}
-+	}
-+}
-+
-+static inline int
-+preempt_mask_check(cpumask_t *preempt_mask, cpumask_t *allow_mask, int prio)
-+{
-+	cpumask_t *mask = sched_preempt_mask + prio;
-+	int pr = atomic_read(&sched_prio_record);
-+
-+	if (pr != prio && SCHED_QUEUE_BITS - 1 != prio) {
-+		sched_preempt_mask_flush(mask, prio, pr);
-+		atomic_set(&sched_prio_record, prio);
-+	}
-+
-+	return cpumask_and(preempt_mask, allow_mask, mask);
-+}
-+
-+static inline int select_task_rq(struct task_struct *p)
-+{
-+	cpumask_t allow_mask, mask;
-+
-+	if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
-+		return select_fallback_rq(task_cpu(p), p);
-+
-+	if (
-+#ifdef CONFIG_SCHED_SMT
-+	    cpumask_and(&mask, &allow_mask, sched_sg_idle_mask) ||
-+#endif
-+	    cpumask_and(&mask, &allow_mask, sched_idle_mask) ||
-+	    preempt_mask_check(&mask, &allow_mask, task_sched_prio(p)))
-+		return best_mask_cpu(task_cpu(p), &mask);
-+
-+	return best_mask_cpu(task_cpu(p), &allow_mask);
-+}
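As a rough mental model of the cascading mask checks in select_task_rq() above, the small user-space C sketch below (names and the 64-CPU limit are illustrative assumptions, not taken from the patch) tries progressively less attractive candidate sets and returns the first non-empty intersection with the allowed mask, falling back to the allowed mask itself.

#include <stdint.h>

/* Illustrative analogue of select_task_rq()'s mask cascade for <= 64 CPUs. */
static uint64_t pick_candidate_mask(uint64_t allowed, uint64_t smt_idle,
				    uint64_t idle, uint64_t preemptible)
{
	uint64_t m;

	if ((m = allowed & smt_idle))		/* a whole SMT core is idle      */
		return m;
	if ((m = allowed & idle))		/* some allowed CPU is idle      */
		return m;
	if ((m = allowed & preemptible))	/* some CPU runs lower priority  */
		return m;
	return allowed;				/* otherwise anything allowed    */
}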
-+
-+void sched_set_stop_task(int cpu, struct task_struct *stop)
-+{
-+	static struct lock_class_key stop_pi_lock;
-+	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
-+	struct sched_param start_param = { .sched_priority = 0 };
-+	struct task_struct *old_stop = cpu_rq(cpu)->stop;
-+
-+	if (stop) {
-+		/*
-+		 * Make it appear like a SCHED_FIFO task, it's something
-+		 * userspace knows about and won't get confused about.
-+		 *
-+		 * Also, it will make PI more or less work without too
-+		 * much confusion -- but then, stop work should not
-+		 * rely on PI working anyway.
-+		 */
-+		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
-+
-+		/*
-+		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
-+		 * adjust the effective priority of a task. As a result,
-+		 * rt_mutex_setprio() can trigger (RT) balancing operations,
-+		 * which can then trigger wakeups of the stop thread to push
-+		 * around the current task.
-+		 *
-+		 * The stop task itself will never be part of the PI-chain, it
-+		 * never blocks, therefore that ->pi_lock recursion is safe.
-+		 * Tell lockdep about this by placing the stop->pi_lock in its
-+		 * own class.
-+		 */
-+		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
-+	}
-+
-+	cpu_rq(cpu)->stop = stop;
-+
-+	if (old_stop) {
-+		/*
-+		 * Reset it back to a normal scheduling policy so that
-+		 * it can die in pieces.
-+		 */
-+		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
-+	}
-+}
-+
-+static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
-+			    raw_spinlock_t *lock, unsigned long irq_flags)
-+	__releases(rq->lock)
-+	__releases(p->pi_lock)
-+{
-+	/* Can the task run on the task's current CPU? If so, we're done */
-+	if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
-+		if (p->migration_disabled) {
-+			if (likely(p->cpus_ptr != &p->cpus_mask))
-+				__do_set_cpus_ptr(p, &p->cpus_mask);
-+			p->migration_disabled = 0;
-+			p->migration_flags |= MDF_FORCE_ENABLED;
-+			/* When p is migrate_disabled, rq->lock should be held */
-+			rq->nr_pinned--;
-+		}
-+
-+		if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
-+			struct migration_arg arg = { p, dest_cpu };
-+
-+			/* Need help from migration thread: drop lock and wait. */
-+			__task_access_unlock(p, lock);
-+			raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+			stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
-+			return 0;
-+		}
-+		if (task_on_rq_queued(p)) {
-+			/*
-+			 * OK, since we're going to drop the lock immediately
-+			 * afterwards anyway.
-+			 */
-+			update_rq_clock(rq);
-+			rq = move_queued_task(rq, p, dest_cpu);
-+			lock = &rq->lock;
-+		}
-+	}
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+	return 0;
-+}
-+
-+static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
-+					 struct affinity_context *ctx,
-+					 struct rq *rq,
-+					 raw_spinlock_t *lock,
-+					 unsigned long irq_flags)
-+{
-+	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
-+	const struct cpumask *cpu_valid_mask = cpu_active_mask;
-+	bool kthread = p->flags & PF_KTHREAD;
-+	int dest_cpu;
-+	int ret = 0;
-+
-+	if (kthread || is_migration_disabled(p)) {
-+		/*
-+		 * Kernel threads are allowed on online && !active CPUs,
-+		 * however, during cpu-hot-unplug, even these might get pushed
-+		 * away if not KTHREAD_IS_PER_CPU.
-+		 *
-+		 * Specifically, migration_disabled() tasks must not fail the
-+		 * cpumask_any_and_distribute() pick below, esp. so on
-+		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
-+		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
-+		 */
-+		cpu_valid_mask = cpu_online_mask;
-+	}
-+
-+	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
-+		ret = -EINVAL;
-+		goto out;
-+	}
-+
-+	/*
-+	 * Must re-check here, to close a race against __kthread_bind(),
-+	 * sched_setaffinity() is not guaranteed to observe the flag.
-+	 */
-+	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
-+		ret = -EINVAL;
-+		goto out;
-+	}
-+
-+	if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
-+		goto out;
-+
-+	dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
-+	if (dest_cpu >= nr_cpu_ids) {
-+		ret = -EINVAL;
-+		goto out;
-+	}
-+
-+	__do_set_cpus_allowed(p, ctx);
-+
-+	return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
-+
-+out:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+
-+	return ret;
-+}
-+
-+/*
-+ * Change a given task's CPU affinity. Migrate the thread to a
-+ * proper CPU and schedule it away if the CPU it is executing on
-+ * is removed from the allowed bitmask.
-+ *
-+ * NOTE: the caller must have a valid reference to the task, the
-+ * task must not exit() & deallocate itself prematurely. The
-+ * call is not atomic; no spinlocks may be held.
-+ */
-+static int __set_cpus_allowed_ptr(struct task_struct *p,
-+				  struct affinity_context *ctx)
-+{
-+	unsigned long irq_flags;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
-+	rq = __task_access_lock(p, &lock);
-+	/*
-+	 * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
-+	 * flags are set.
-+	 */
-+	if (p->user_cpus_ptr &&
-+	    !(ctx->flags & SCA_USER) &&
-+	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
-+		ctx->new_mask = rq->scratch_mask;
-+
-+	return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
-+}
-+
-+int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	struct affinity_context ac = {
-+		.new_mask  = new_mask,
-+		.flags     = 0,
-+	};
-+
-+	return __set_cpus_allowed_ptr(p, &ac);
-+}
-+EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
-+
-+/*
-+ * Change a given task's CPU affinity to the intersection of its current
-+ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
-+ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
-+ * affinity or use cpu_online_mask instead.
-+ *
-+ * If the resulting mask is empty, leave the affinity unchanged and return
-+ * -EINVAL.
-+ */
-+static int restrict_cpus_allowed_ptr(struct task_struct *p,
-+				     struct cpumask *new_mask,
-+				     const struct cpumask *subset_mask)
-+{
-+	struct affinity_context ac = {
-+		.new_mask  = new_mask,
-+		.flags     = 0,
-+	};
-+	unsigned long irq_flags;
-+	raw_spinlock_t *lock;
-+	struct rq *rq;
-+	int err;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
-+	rq = __task_access_lock(p, &lock);
-+
-+	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
-+		err = -EINVAL;
-+		goto err_unlock;
-+	}
-+
-+	return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
-+
-+err_unlock:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+	return err;
-+}
-+
-+/*
-+ * Restrict the CPU affinity of task @p so that it is a subset of
-+ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
-+ * old affinity mask. If the resulting mask is empty, we warn and walk
-+ * up the cpuset hierarchy until we find a suitable mask.
-+ */
-+void force_compatible_cpus_allowed_ptr(struct task_struct *p)
-+{
-+	cpumask_var_t new_mask;
-+	const struct cpumask *override_mask = task_cpu_possible_mask(p);
-+
-+	alloc_cpumask_var(&new_mask, GFP_KERNEL);
-+
-+	/*
-+	 * __migrate_task() can fail silently in the face of concurrent
-+	 * offlining of the chosen destination CPU, so take the hotplug
-+	 * lock to ensure that the migration succeeds.
-+	 */
-+	cpus_read_lock();
-+	if (!cpumask_available(new_mask))
-+		goto out_set_mask;
-+
-+	if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
-+		goto out_free_mask;
-+
-+	/*
-+	 * We failed to find a valid subset of the affinity mask for the
-+	 * task, so override it based on its cpuset hierarchy.
-+	 */
-+	cpuset_cpus_allowed(p, new_mask);
-+	override_mask = new_mask;
-+
-+out_set_mask:
-+	if (printk_ratelimit()) {
-+		printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
-+				task_pid_nr(p), p->comm,
-+				cpumask_pr_args(override_mask));
-+	}
-+
-+	WARN_ON(set_cpus_allowed_ptr(p, override_mask));
-+out_free_mask:
-+	cpus_read_unlock();
-+	free_cpumask_var(new_mask);
-+}
-+
-+static int
-+__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
-+
-+/*
-+ * Restore the affinity of a task @p which was previously restricted by a
-+ * call to force_compatible_cpus_allowed_ptr().
-+ *
-+ * It is the caller's responsibility to serialise this with any calls to
-+ * force_compatible_cpus_allowed_ptr(@p).
-+ */
-+void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
-+{
-+	struct affinity_context ac = {
-+		.new_mask  = task_user_cpus(p),
-+		.flags     = 0,
-+	};
-+	int ret;
-+
-+	/*
-+	 * Try to restore the old affinity mask with __sched_setaffinity().
-+	 * Cpuset masking will be done there too.
-+	 */
-+	ret = __sched_setaffinity(p, &ac);
-+	WARN_ON_ONCE(ret);
-+}
-+
-+#else /* CONFIG_SMP */
-+
-+static inline int select_task_rq(struct task_struct *p)
-+{
-+	return 0;
-+}
-+
-+static inline int
-+__set_cpus_allowed_ptr(struct task_struct *p,
-+		       struct affinity_context *ctx)
-+{
-+	return set_cpus_allowed_ptr(p, ctx->new_mask);
-+}
-+
-+static inline bool rq_has_pinned_tasks(struct rq *rq)
-+{
-+	return false;
-+}
-+
-+static inline cpumask_t *alloc_user_cpus_ptr(int node)
-+{
-+	return NULL;
-+}
-+
-+#endif /* !CONFIG_SMP */
-+
-+static void
-+ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	struct rq *rq;
-+
-+	if (!schedstat_enabled())
-+		return;
-+
-+	rq = this_rq();
-+
-+#ifdef CONFIG_SMP
-+	if (cpu == rq->cpu) {
-+		__schedstat_inc(rq->ttwu_local);
-+		__schedstat_inc(p->stats.nr_wakeups_local);
-+	} else {
-+		/** Alt schedule FW ToDo:
-+		 * How to do ttwu_wake_remote
-+		 */
-+	}
-+#endif /* CONFIG_SMP */
-+
-+	__schedstat_inc(rq->ttwu_count);
-+	__schedstat_inc(p->stats.nr_wakeups);
-+}
-+
-+/*
-+ * Mark the task runnable.
-+ */
-+static inline void ttwu_do_wakeup(struct task_struct *p)
-+{
-+	WRITE_ONCE(p->__state, TASK_RUNNING);
-+	trace_sched_wakeup(p);
-+}
-+
-+static inline void
-+ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
-+{
-+	if (p->sched_contributes_to_load)
-+		rq->nr_uninterruptible--;
-+
-+	if (
-+#ifdef CONFIG_SMP
-+	    !(wake_flags & WF_MIGRATED) &&
-+#endif
-+	    p->in_iowait) {
-+		delayacct_blkio_end(p);
-+		atomic_dec(&task_rq(p)->nr_iowait);
-+	}
-+
-+	activate_task(p, rq);
-+	wakeup_preempt(rq);
-+
-+	ttwu_do_wakeup(p);
-+}
-+
-+/*
-+ * Consider @p being inside a wait loop:
-+ *
-+ *   for (;;) {
-+ *      set_current_state(TASK_UNINTERRUPTIBLE);
-+ *
-+ *      if (CONDITION)
-+ *         break;
-+ *
-+ *      schedule();
-+ *   }
-+ *   __set_current_state(TASK_RUNNING);
-+ *
-+ * between set_current_state() and schedule(). In this case @p is still
-+ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
-+ * an atomic manner.
-+ *
-+ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
-+ * then schedule() must still happen and p->state can be changed to
-+ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
-+ * need to do a full wakeup with enqueue.
-+ *
-+ * Returns: %true when the wakeup is done,
-+ *          %false otherwise.
-+ */
-+static int ttwu_runnable(struct task_struct *p, int wake_flags)
-+{
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+	int ret = 0;
-+
-+	rq = __task_access_lock(p, &lock);
-+	if (task_on_rq_queued(p)) {
-+		if (!task_on_cpu(p)) {
-+			/*
-+			 * When on_rq && !on_cpu the task is preempted, see if
-+			 * it should preempt the task that is current now.
-+			 */
-+			update_rq_clock(rq);
-+			wakeup_preempt(rq);
-+		}
-+		ttwu_do_wakeup(p);
-+		ret = 1;
-+	}
-+	__task_access_unlock(p, lock);
-+
-+	return ret;
-+}
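The wait loop referred to in the comment above is the standard sleep/wake pattern; a minimal sketch of a caller (with a hypothetical 'done' flag and helper names, not taken from the patch) looks like this:

#include <linux/sched.h>

static int done;				/* hypothetical condition */

static void wait_for_done(void)
{
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (READ_ONCE(done))		/* CONDITION */
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);
}

static void signal_done(struct task_struct *waiter)
{
	WRITE_ONCE(done, 1);			/* publish the condition ...            */
	wake_up_process(waiter);		/* ... then wake, via try_to_wake_up()  */
}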
-+
-+#ifdef CONFIG_SMP
-+void sched_ttwu_pending(void *arg)
-+{
-+	struct llist_node *llist = arg;
-+	struct rq *rq = this_rq();
-+	struct task_struct *p, *t;
-+	struct rq_flags rf;
-+
-+	if (!llist)
-+		return;
-+
-+	rq_lock_irqsave(rq, &rf);
-+	update_rq_clock(rq);
-+
-+	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
-+		if (WARN_ON_ONCE(p->on_cpu))
-+			smp_cond_load_acquire(&p->on_cpu, !VAL);
-+
-+		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
-+			set_task_cpu(p, cpu_of(rq));
-+
-+		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
-+	}
-+
-+	/*
-+	 * Must be after enqueueing at least one task such that
-+	 * idle_cpu() does not observe a false-negative -- if it does,
-+	 * it is possible for select_idle_siblings() to stack a number
-+	 * of tasks on this CPU during that window.
-+	 *
-+	 * It is ok to clear ttwu_pending when another task is pending.
-+	 * We will receive IPI after local irq enabled and then enqueue it.
-+	 * Since now nr_running > 0, idle_cpu() will always get correct result.
-+	 */
-+	WRITE_ONCE(rq->ttwu_pending, 0);
-+	rq_unlock_irqrestore(rq, &rf);
-+}
-+
-+/*
-+ * Prepare the scene for sending an IPI for a remote smp_call
-+ *
-+ * Returns true if the caller can proceed with sending the IPI.
-+ * Returns false otherwise.
-+ */
-+bool call_function_single_prep_ipi(int cpu)
-+{
-+	if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
-+		trace_sched_wake_idle_without_ipi(cpu);
-+		return false;
-+	}
-+
-+	return true;
-+}
-+
-+/*
-+ * Queue a task on the target CPUs wake_list and wake the CPU via IPI if
-+ * necessary. The wakee CPU on receipt of the IPI will queue the task
-+ * via sched_ttwu_wakeup() for activation so the wakee incurs the cost
-+ * of the wakeup instead of the waker.
-+ */
-+static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
-+
-+	WRITE_ONCE(rq->ttwu_pending, 1);
-+	__smp_call_single_queue(cpu, &p->wake_entry.llist);
-+}
-+
-+static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
-+{
-+	/*
-+	 * Do not complicate things with the async wake_list while the CPU is
-+	 * in hotplug state.
-+	 */
-+	if (!cpu_active(cpu))
-+		return false;
-+
-+	/* Ensure the task will still be allowed to run on the CPU. */
-+	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
-+		return false;
-+
-+	/*
-+	 * If the CPU does not share cache, then queue the task on the
-+	 * remote rq's wakelist to avoid accessing remote data.
-+	 */
-+	if (!cpus_share_cache(smp_processor_id(), cpu))
-+		return true;
-+
-+	if (cpu == smp_processor_id())
-+		return false;
-+
-+	/*
-+	 * If the wakee CPU is idle, or the task is descheduling and is the
-+	 * only running task on the CPU, then use the wakelist to offload
-+	 * the task activation to the idle (or soon-to-be-idle) CPU as
-+	 * the current CPU is likely busy. nr_running is checked to
-+	 * avoid unnecessary task stacking.
-+	 *
-+	 * Note that we can only get here with (wakee) p->on_rq=0,
-+	 * p->on_cpu can be whatever, we've done the dequeue, so
-+	 * the wakee has been accounted out of ->nr_running.
-+	 */
-+	if (!cpu_rq(cpu)->nr_running)
-+		return true;
-+
-+	return false;
-+}
-+
-+static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
-+		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
-+		__ttwu_queue_wakelist(p, cpu, wake_flags);
-+		return true;
-+	}
-+
-+	return false;
-+}
-+
-+void wake_up_if_idle(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	guard(rcu)();
-+	if (is_idle_task(rcu_dereference(rq->curr))) {
-+		guard(raw_spinlock_irqsave)(&rq->lock);
-+		if (is_idle_task(rq->curr))
-+			resched_curr(rq);
-+	}
-+}
-+
-+extern struct static_key_false sched_asym_cpucapacity;
-+
-+static __always_inline bool sched_asym_cpucap_active(void)
-+{
-+	return static_branch_unlikely(&sched_asym_cpucapacity);
-+}
-+
-+bool cpus_equal_capacity(int this_cpu, int that_cpu)
-+{
-+	if (!sched_asym_cpucap_active())
-+		return true;
-+
-+	if (this_cpu == that_cpu)
-+		return true;
-+
-+	return arch_scale_cpu_capacity(this_cpu) == arch_scale_cpu_capacity(that_cpu);
-+}
-+
-+bool cpus_share_cache(int this_cpu, int that_cpu)
-+{
-+	if (this_cpu == that_cpu)
-+		return true;
-+
-+	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
-+}
-+#else /* !CONFIG_SMP */
-+
-+static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	return false;
-+}
-+
-+#endif /* CONFIG_SMP */
-+
-+static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (ttwu_queue_wakelist(p, cpu, wake_flags))
-+		return;
-+
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+	ttwu_do_activate(rq, p, wake_flags);
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+/*
-+ * Invoked from try_to_wake_up() to check whether the task can be woken up.
-+ *
-+ * The caller holds p::pi_lock if p != current or has preemption
-+ * disabled when p == current.
-+ *
-+ * The rules of saved_state:
-+ *
-+ *   The related locking code always holds p::pi_lock when updating
-+ *   p::saved_state, which means the code is fully serialized in both cases.
-+ *
-+ *  For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
-+ *  No other bits set. This allows to distinguish all wakeup scenarios.
-+ *
-+ *  For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
-+ *  allows us to prevent early wakeup of tasks before they can be run on
-+ *  asymmetric ISA architectures (eg ARMv9).
-+ */
-+static __always_inline
-+bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
-+{
-+	int match;
-+
-+	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
-+		WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
-+			     state != TASK_RTLOCK_WAIT);
-+	}
-+
-+	*success = !!(match = __task_state_match(p, state));
-+
-+	/*
-+	 * Saved state preserves the task state across blocking on
-+	 * an RT lock or TASK_FREEZABLE tasks.  If the state matches,
-+	 * set p::saved_state to TASK_RUNNING, but do not wake the task
-+	 * because it waits for a lock wakeup or __thaw_task(). Also
-+	 * indicate success because from the regular waker's point of
-+	 * view this has succeeded.
-+	 *
-+	 * After acquiring the lock the task will restore p::__state
-+	 * from p::saved_state which ensures that the regular
-+	 * wakeup is not lost. The restore will also set
-+	 * p::saved_state to TASK_RUNNING so any further tests will
-+	 * not result in false positives vs. @success
-+	 */
-+	if (match < 0)
-+		p->saved_state = TASK_RUNNING;
-+
-+	return match > 0;
-+}
-+
-+/*
-+ * Notes on Program-Order guarantees on SMP systems.
-+ *
-+ *  MIGRATION
-+ *
-+ * The basic program-order guarantee on SMP systems is that when a task [t]
-+ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
-+ * execution on its new CPU [c1].
-+ *
-+ * For migration (of runnable tasks) this is provided by the following means:
-+ *
-+ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
-+ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
-+ *     rq(c1)->lock (if not at the same time, then in that order).
-+ *  C) LOCK of the rq(c1)->lock scheduling in task
-+ *
-+ * Transitivity guarantees that B happens after A and C after B.
-+ * Note: we only require RCpc transitivity.
-+ * Note: the CPU doing B need not be c0 or c1
-+ *
-+ * Example:
-+ *
-+ *   CPU0            CPU1            CPU2
-+ *
-+ *   LOCK rq(0)->lock
-+ *   sched-out X
-+ *   sched-in Y
-+ *   UNLOCK rq(0)->lock
-+ *
-+ *                                   LOCK rq(0)->lock // orders against CPU0
-+ *                                   dequeue X
-+ *                                   UNLOCK rq(0)->lock
-+ *
-+ *                                   LOCK rq(1)->lock
-+ *                                   enqueue X
-+ *                                   UNLOCK rq(1)->lock
-+ *
-+ *                   LOCK rq(1)->lock // orders against CPU2
-+ *                   sched-out Z
-+ *                   sched-in X
-+ *                   UNLOCK rq(1)->lock
-+ *
-+ *
-+ *  BLOCKING -- aka. SLEEP + WAKEUP
-+ *
-+ * For blocking we (obviously) need to provide the same guarantee as for
-+ * migration. However the means are completely different as there is no lock
-+ * chain to provide order. Instead we do:
-+ *
-+ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
-+ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
-+ *
-+ * Example:
-+ *
-+ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
-+ *
-+ *   LOCK rq(0)->lock LOCK X->pi_lock
-+ *   dequeue X
-+ *   sched-out X
-+ *   smp_store_release(X->on_cpu, 0);
-+ *
-+ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
-+ *                    X->state = WAKING
-+ *                    set_task_cpu(X,2)
-+ *
-+ *                    LOCK rq(2)->lock
-+ *                    enqueue X
-+ *                    X->state = RUNNING
-+ *                    UNLOCK rq(2)->lock
-+ *
-+ *                                          LOCK rq(2)->lock // orders against CPU1
-+ *                                          sched-out Z
-+ *                                          sched-in X
-+ *                                          UNLOCK rq(2)->lock
-+ *
-+ *                    UNLOCK X->pi_lock
-+ *   UNLOCK rq(0)->lock
-+ *
-+ *
-+ * However; for wakeups there is a second guarantee we must provide, namely we
-+ * must observe the state that led to our wakeup. That is, not only must our
-+ * task observe its own prior state, it must also observe the stores prior to
-+ * its wakeup.
-+ *
-+ * This means that any means of doing remote wakeups must order the CPU doing
-+ * the wakeup against the CPU the task is going to end up running on. This,
-+ * however, is already required for the regular Program-Order guarantee above,
-+ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
-+ *
-+ */
-+
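A user-space analogue of the release/acquire pairing described in the BLOCKING notes above, written as a sketch with illustrative names and C11 atomics rather than the kernel primitives: the release store publishes everything the old CPU did before clearing on_cpu, and the acquire spin on the waking side observes it.

#include <stdatomic.h>

struct fake_task {
	int state;				/* written by the "scheduler" side */
	_Atomic int on_cpu;
};

static void finish_task_like(struct fake_task *t)	/* cf. finish_task()    */
{
	t->state = 0;					/* sched-out side effects */
	atomic_store_explicit(&t->on_cpu, 0, memory_order_release);
}

static void ttwu_like(struct fake_task *t)		/* cf. try_to_wake_up() */
{
	/* Spin until the old CPU is done; the acquire pairs with the release
	 * above, so every store made before on_cpu was cleared is visible. */
	while (atomic_load_explicit(&t->on_cpu, memory_order_acquire))
		;
}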
-+/**
-+ * try_to_wake_up - wake up a thread
-+ * @p: the thread to be awakened
-+ * @state: the mask of task states that can be woken
-+ * @wake_flags: wake modifier flags (WF_*)
-+ *
-+ * Conceptually does:
-+ *
-+ *   If (@state & @p->state) @p->state = TASK_RUNNING.
-+ *
-+ * If the task was not queued/runnable, also place it back on a runqueue.
-+ *
-+ * This function is atomic against schedule() which would dequeue the task.
-+ *
-+ * It issues a full memory barrier before accessing @p->state, see the comment
-+ * with set_current_state().
-+ *
-+ * Uses p->pi_lock to serialize against concurrent wake-ups.
-+ *
-+ * Relies on p->pi_lock stabilizing:
-+ *  - p->sched_class
-+ *  - p->cpus_ptr
-+ *  - p->sched_task_group
-+ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
-+ *
-+ * Tries really hard to only take one task_rq(p)->lock for performance.
-+ * Takes rq->lock in:
-+ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
-+ *  - ttwu_queue()       -- new rq, for enqueue of the task;
-+ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
-+ *
-+ * As a consequence we race really badly with just about everything. See the
-+ * many memory barriers and their comments for details.
-+ *
-+ * Return: %true if @p->state changes (an actual wakeup was done),
-+ *	   %false otherwise.
-+ */
-+int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
-+{
-+	guard(preempt)();
-+	int cpu, success = 0;
-+
-+	if (p == current) {
-+		/*
-+		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
-+		 * == smp_processor_id()'. Together this means we can special
-+		 * case the whole 'p->on_rq && ttwu_runnable()' case below
-+		 * without taking any locks.
-+		 *
-+		 * In particular:
-+		 *  - we rely on Program-Order guarantees for all the ordering,
-+		 *  - we're serialized against set_special_state() by virtue of
-+		 *    it disabling IRQs (this allows not taking ->pi_lock).
-+		 */
-+		if (!ttwu_state_match(p, state, &success))
-+			goto out;
-+
-+		trace_sched_waking(p);
-+		ttwu_do_wakeup(p);
-+		goto out;
-+	}
-+
-+	/*
-+	 * If we are going to wake up a thread waiting for CONDITION we
-+	 * need to ensure that CONDITION=1 done by the caller can not be
-+	 * reordered with p->state check below. This pairs with smp_store_mb()
-+	 * in set_current_state() that the waiting thread does.
-+	 */
-+	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
-+		smp_mb__after_spinlock();
-+		if (!ttwu_state_match(p, state, &success))
-+			break;
-+
-+		trace_sched_waking(p);
-+
-+		/*
-+		 * Ensure we load p->on_rq _after_ p->state, otherwise it would
-+		 * be possible to, falsely, observe p->on_rq == 0 and get stuck
-+		 * in smp_cond_load_acquire() below.
-+		 *
-+		 * sched_ttwu_pending()			try_to_wake_up()
-+		 *   STORE p->on_rq = 1			  LOAD p->state
-+		 *   UNLOCK rq->lock
-+		 *
-+		 * __schedule() (switch to task 'p')
-+		 *   LOCK rq->lock			  smp_rmb();
-+		 *   smp_mb__after_spinlock();
-+		 *   UNLOCK rq->lock
-+		 *
-+		 * [task p]
-+		 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
-+		 *
-+		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
-+		 * __schedule().  See the comment for smp_mb__after_spinlock().
-+		 *
-+		 * A similar smp_rmb() lives in __task_needs_rq_lock().
-+		 */
-+		smp_rmb();
-+		if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
-+			break;
-+
-+#ifdef CONFIG_SMP
-+		/*
-+		 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
-+		 * possible to, falsely, observe p->on_cpu == 0.
-+		 *
-+		 * One must be running (->on_cpu == 1) in order to remove oneself
-+		 * from the runqueue.
-+		 *
-+		 * __schedule() (switch to task 'p')	try_to_wake_up()
-+		 *   STORE p->on_cpu = 1		  LOAD p->on_rq
-+		 *   UNLOCK rq->lock
-+		 *
-+		 * __schedule() (put 'p' to sleep)
-+		 *   LOCK rq->lock			  smp_rmb();
-+		 *   smp_mb__after_spinlock();
-+		 *   STORE p->on_rq = 0			  LOAD p->on_cpu
-+		 *
-+		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
-+		 * __schedule().  See the comment for smp_mb__after_spinlock().
-+		 *
-+		 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
-+		 * schedule()'s deactivate_task() has 'happened' and p will no longer
-+		 * care about its own p->state. See the comment in __schedule().
-+		 */
-+		smp_acquire__after_ctrl_dep();
-+
-+		/*
-+		 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
-+		 * == 0), which means we need to do an enqueue, change p->state to
-+		 * TASK_WAKING such that we can unlock p->pi_lock before doing the
-+		 * enqueue, such as ttwu_queue_wakelist().
-+		 */
-+		WRITE_ONCE(p->__state, TASK_WAKING);
-+
-+		/*
-+		 * If the owning (remote) CPU is still in the middle of schedule() with
-+		 * this task as prev, consider queueing p on the remote CPU's wake_list
-+		 * which potentially sends an IPI instead of spinning on p->on_cpu to
-+		 * let the waker make forward progress. This is safe because IRQs are
-+		 * disabled and the IPI will deliver after on_cpu is cleared.
-+		 *
-+		 * Ensure we load task_cpu(p) after p->on_cpu:
-+		 *
-+		 * set_task_cpu(p, cpu);
-+		 *   STORE p->cpu = @cpu
-+		 * __schedule() (switch to task 'p')
-+		 *   LOCK rq->lock
-+		 *   smp_mb__after_spin_lock()          smp_cond_load_acquire(&p->on_cpu)
-+		 *   STORE p->on_cpu = 1                LOAD p->cpu
-+		 *
-+		 * to ensure we observe the correct CPU on which the task is currently
-+		 * scheduling.
-+		 */
-+		if (smp_load_acquire(&p->on_cpu) &&
-+		    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
-+			break;
-+
-+		/*
-+		 * If the owning (remote) CPU is still in the middle of schedule() with
-+		 * this task as prev, wait until it's done referencing the task.
-+		 *
-+		 * Pairs with the smp_store_release() in finish_task().
-+		 *
-+		 * This ensures that tasks getting woken will be fully ordered against
-+		 * their previous state and preserve Program Order.
-+		 */
-+		smp_cond_load_acquire(&p->on_cpu, !VAL);
-+
-+		sched_task_ttwu(p);
-+
-+		if ((wake_flags & WF_CURRENT_CPU) &&
-+		    cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
-+			cpu = smp_processor_id();
-+		else
-+			cpu = select_task_rq(p);
-+
-+		if (cpu != task_cpu(p)) {
-+			if (p->in_iowait) {
-+				delayacct_blkio_end(p);
-+				atomic_dec(&task_rq(p)->nr_iowait);
-+			}
-+
-+			wake_flags |= WF_MIGRATED;
-+			set_task_cpu(p, cpu);
-+		}
-+#else
-+		sched_task_ttwu(p);
-+
-+		cpu = task_cpu(p);
-+#endif /* CONFIG_SMP */
-+
-+		ttwu_queue(p, cpu, wake_flags);
-+	}
-+out:
-+	if (success)
-+		ttwu_stat(p, task_cpu(p), wake_flags);
-+
-+	return success;
-+}
-+
-+static bool __task_needs_rq_lock(struct task_struct *p)
-+{
-+	unsigned int state = READ_ONCE(p->__state);
-+
-+	/*
-+	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
-+	 * the task is blocked. Make sure to check @state since ttwu() can drop
-+	 * locks at the end, see ttwu_queue_wakelist().
-+	 */
-+	if (state == TASK_RUNNING || state == TASK_WAKING)
-+		return true;
-+
-+	/*
-+	 * Ensure we load p->on_rq after p->__state, otherwise it would be
-+	 * possible to, falsely, observe p->on_rq == 0.
-+	 *
-+	 * See try_to_wake_up() for a longer comment.
-+	 */
-+	smp_rmb();
-+	if (p->on_rq)
-+		return true;
-+
-+#ifdef CONFIG_SMP
-+	/*
-+	 * Ensure the task has finished __schedule() and will not be referenced
-+	 * anymore. Again, see try_to_wake_up() for a longer comment.
-+	 */
-+	smp_rmb();
-+	smp_cond_load_acquire(&p->on_cpu, !VAL);
-+#endif
-+
-+	return false;
-+}
-+
-+/**
-+ * task_call_func - Invoke a function on task in fixed state
-+ * @p: Process for which the function is to be invoked, can be @current.
-+ * @func: Function to invoke.
-+ * @arg: Argument to function.
-+ *
-+ * Fix the task in its current state by avoiding wakeups and/or rq operations
-+ * and call @func(@arg) on it.  This function can use ->on_rq and task_curr()
-+ * to work out what the state is, if required.  Given that @func can be invoked
-+ * with a runqueue lock held, it had better be quite lightweight.
-+ *
-+ * Returns:
-+ *   Whatever @func returns
-+ */
-+int task_call_func(struct task_struct *p, task_call_f func, void *arg)
-+{
-+	struct rq *rq = NULL;
-+	struct rq_flags rf;
-+	int ret;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
-+
-+	if (__task_needs_rq_lock(p))
-+		rq = __task_rq_lock(p, &rf);
-+
-+	/*
-+	 * At this point the task is pinned; either:
-+	 *  - blocked and we're holding off wakeups      (pi->lock)
-+	 *  - woken, and we're holding off enqueue       (rq->lock)
-+	 *  - queued, and we're holding off schedule     (rq->lock)
-+	 *  - running, and we're holding off de-schedule (rq->lock)
-+	 *
-+	 * The called function (@func) can use: task_curr(), p->on_rq and
-+	 * p->__state to differentiate between these states.
-+	 */
-+	ret = func(p, arg);
-+
-+	if (rq)
-+		__task_rq_unlock(rq, &rf);
-+
-+	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
-+	return ret;
-+}
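A hypothetical usage sketch of task_call_func() (the callback and helper names below are made up for illustration): the callback runs with @p pinned in its current state, so it may inspect fields such as p->prio or p->on_rq consistently.

#include <linux/sched.h>

static int read_prio_cb(struct task_struct *p, void *arg)
{
	*(int *)arg = p->prio;		/* @p cannot change state under us here */
	return 0;
}

static int read_prio_pinned(struct task_struct *p)
{
	int prio = 0;

	task_call_func(p, read_prio_cb, &prio);
	return prio;
}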
-+
-+/**
-+ * cpu_curr_snapshot - Return a snapshot of the currently running task
-+ * @cpu: The CPU on which to snapshot the task.
-+ *
-+ * Returns the task_struct pointer of the task "currently" running on
-+ * the specified CPU.  If the same task is running on that CPU throughout,
-+ * the return value will be a pointer to that task's task_struct structure.
-+ * If the CPU did any context switches even vaguely concurrently with the
-+ * execution of this function, the return value will be a pointer to the
-+ * task_struct structure of a randomly chosen task that was running on
-+ * that CPU somewhere around the time that this function was executing.
-+ *
-+ * If the specified CPU was offline, the return value is whatever it
-+ * is, perhaps a pointer to the task_struct structure of that CPU's idle
-+ * task, but there is no guarantee.  Callers wishing a useful return
-+ * value must take some action to ensure that the specified CPU remains
-+ * online throughout.
-+ *
-+ * This function executes full memory barriers before and after fetching
-+ * the pointer, which permits the caller to confine this function's fetch
-+ * with respect to the caller's accesses to other shared variables.
-+ */
-+struct task_struct *cpu_curr_snapshot(int cpu)
-+{
-+	struct task_struct *t;
-+
-+	smp_mb(); /* Pairing determined by caller's synchronization design. */
-+	t = rcu_dereference(cpu_curr(cpu));
-+	smp_mb(); /* Pairing determined by caller's synchronization design. */
-+	return t;
-+}
-+
-+/**
-+ * wake_up_process - Wake up a specific process
-+ * @p: The process to be woken up.
-+ *
-+ * Attempt to wake up the nominated process and move it to the set of runnable
-+ * processes.
-+ *
-+ * Return: 1 if the process was woken up, 0 if it was already running.
-+ *
-+ * This function executes a full memory barrier before accessing the task state.
-+ */
-+int wake_up_process(struct task_struct *p)
-+{
-+	return try_to_wake_up(p, TASK_NORMAL, 0);
-+}
-+EXPORT_SYMBOL(wake_up_process);
-+
-+int wake_up_state(struct task_struct *p, unsigned int state)
-+{
-+	return try_to_wake_up(p, state, 0);
-+}
-+
-+/*
-+ * Perform scheduler related setup for a newly forked process p.
-+ * p is forked by current.
-+ *
-+ * __sched_fork() is basic setup used by init_idle() too:
-+ */
-+static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
-+{
-+	p->on_rq			= 0;
-+	p->on_cpu			= 0;
-+	p->utime			= 0;
-+	p->stime			= 0;
-+	p->sched_time			= 0;
-+
-+#ifdef CONFIG_SCHEDSTATS
-+	/* Even if schedstat is disabled, there should not be garbage */
-+	memset(&p->stats, 0, sizeof(p->stats));
-+#endif
-+
-+#ifdef CONFIG_PREEMPT_NOTIFIERS
-+	INIT_HLIST_HEAD(&p->preempt_notifiers);
-+#endif
-+
-+#ifdef CONFIG_COMPACTION
-+	p->capture_control = NULL;
-+#endif
-+#ifdef CONFIG_SMP
-+	p->wake_entry.u_flags = CSD_TYPE_TTWU;
-+#endif
-+	init_sched_mm_cid(p);
-+}
-+
-+/*
-+ * fork()/clone()-time setup:
-+ */
-+int sched_fork(unsigned long clone_flags, struct task_struct *p)
-+{
-+	__sched_fork(clone_flags, p);
-+	/*
-+	 * We mark the process as NEW here. This guarantees that
-+	 * nobody will actually run it, and a signal or other external
-+	 * event cannot wake it up and insert it on the runqueue either.
-+	 */
-+	p->__state = TASK_NEW;
-+
-+	/*
-+	 * Make sure we do not leak PI boosting priority to the child.
-+	 */
-+	p->prio = current->normal_prio;
-+
-+	/*
-+	 * Revert to default priority/policy on fork if requested.
-+	 */
-+	if (unlikely(p->sched_reset_on_fork)) {
-+		if (task_has_rt_policy(p)) {
-+			p->policy = SCHED_NORMAL;
-+			p->static_prio = NICE_TO_PRIO(0);
-+			p->rt_priority = 0;
-+		} else if (PRIO_TO_NICE(p->static_prio) < 0)
-+			p->static_prio = NICE_TO_PRIO(0);
-+
-+		p->prio = p->normal_prio = p->static_prio;
-+
-+		/*
-+		 * We don't need the reset flag anymore after the fork. It has
-+		 * fulfilled its duty:
-+		 */
-+		p->sched_reset_on_fork = 0;
-+	}
-+
-+#ifdef CONFIG_SCHED_INFO
-+	if (unlikely(sched_info_on()))
-+		memset(&p->sched_info, 0, sizeof(p->sched_info));
-+#endif
-+	init_task_preempt_count(p);
-+
-+	return 0;
-+}
-+
-+void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+
-+	/*
-+	 * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
-+	 * required yet, but lockdep gets upset if rules are violated.
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	/*
-+	 * Share the timeslice between parent and child, so that the
-+	 * total amount of pending timeslices in the system doesn't change,
-+	 * resulting in more scheduling fairness.
-+	 */
-+	rq = this_rq();
-+	raw_spin_lock(&rq->lock);
-+
-+	rq->curr->time_slice /= 2;
-+	p->time_slice = rq->curr->time_slice;
-+#ifdef CONFIG_SCHED_HRTICK
-+	hrtick_start(rq, rq->curr->time_slice);
-+#endif
-+
-+	if (p->time_slice < RESCHED_NS) {
-+		p->time_slice = sysctl_sched_base_slice;
-+		resched_curr(rq);
-+	}
-+	sched_task_fork(p, rq);
-+	raw_spin_unlock(&rq->lock);
-+
-+	rseq_migrate(p);
-+	/*
-+	 * We're setting the CPU for the first time, we don't migrate,
-+	 * so use __set_task_cpu().
-+	 */
-+	__set_task_cpu(p, smp_processor_id());
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
-+
-+void sched_post_fork(struct task_struct *p)
-+{
-+}
-+
-+#ifdef CONFIG_SCHEDSTATS
-+
-+DEFINE_STATIC_KEY_FALSE(sched_schedstats);
-+
-+static void set_schedstats(bool enabled)
-+{
-+	if (enabled)
-+		static_branch_enable(&sched_schedstats);
-+	else
-+		static_branch_disable(&sched_schedstats);
-+}
-+
-+void force_schedstat_enabled(void)
-+{
-+	if (!schedstat_enabled()) {
-+		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
-+		static_branch_enable(&sched_schedstats);
-+	}
-+}
-+
-+static int __init setup_schedstats(char *str)
-+{
-+	int ret = 0;
-+	if (!str)
-+		goto out;
-+
-+	if (!strcmp(str, "enable")) {
-+		set_schedstats(true);
-+		ret = 1;
-+	} else if (!strcmp(str, "disable")) {
-+		set_schedstats(false);
-+		ret = 1;
-+	}
-+out:
-+	if (!ret)
-+		pr_warn("Unable to parse schedstats=\n");
-+
-+	return ret;
-+}
-+__setup("schedstats=", setup_schedstats);
-+
-+#ifdef CONFIG_PROC_SYSCTL
-+static int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
-+		size_t *lenp, loff_t *ppos)
-+{
-+	struct ctl_table t;
-+	int err;
-+	int state = static_branch_likely(&sched_schedstats);
-+
-+	if (write && !capable(CAP_SYS_ADMIN))
-+		return -EPERM;
-+
-+	t = *table;
-+	t.data = &state;
-+	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
-+	if (err < 0)
-+		return err;
-+	if (write)
-+		set_schedstats(state);
-+	return err;
-+}
-+
-+static struct ctl_table sched_core_sysctls[] = {
-+	{
-+		.procname       = "sched_schedstats",
-+		.data           = NULL,
-+		.maxlen         = sizeof(unsigned int),
-+		.mode           = 0644,
-+		.proc_handler   = sysctl_schedstats,
-+		.extra1         = SYSCTL_ZERO,
-+		.extra2         = SYSCTL_ONE,
-+	},
-+};
-+static int __init sched_core_sysctl_init(void)
-+{
-+	register_sysctl_init("kernel", sched_core_sysctls);
-+	return 0;
-+}
-+late_initcall(sched_core_sysctl_init);
-+#endif /* CONFIG_PROC_SYSCTL */
-+#endif /* CONFIG_SCHEDSTATS */
-+
-+/*
-+ * wake_up_new_task - wake up a newly created task for the first time.
-+ *
-+ * This function will do some initial scheduler statistics housekeeping
-+ * that must be done for every newly created context, then puts the task
-+ * on the runqueue and wakes it.
-+ */
-+void wake_up_new_task(struct task_struct *p)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	WRITE_ONCE(p->__state, TASK_RUNNING);
-+	rq = cpu_rq(select_task_rq(p));
-+#ifdef CONFIG_SMP
-+	rseq_migrate(p);
-+	/*
-+	 * Fork balancing, do it here and not earlier because:
-+	 * - cpus_ptr can change in the fork path
-+	 * - any previously selected CPU might disappear through hotplug
-+	 *
-+	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
-+	 * as we're not fully set-up yet.
-+	 */
-+	__set_task_cpu(p, cpu_of(rq));
-+#endif
-+
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+
-+	activate_task(p, rq);
-+	trace_sched_wakeup_new(p);
-+	wakeup_preempt(rq);
-+
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
-+
-+#ifdef CONFIG_PREEMPT_NOTIFIERS
-+
-+static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
-+
-+void preempt_notifier_inc(void)
-+{
-+	static_branch_inc(&preempt_notifier_key);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_inc);
-+
-+void preempt_notifier_dec(void)
-+{
-+	static_branch_dec(&preempt_notifier_key);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_dec);
-+
-+/**
-+ * preempt_notifier_register - tell me when current is being preempted & rescheduled
-+ * @notifier: notifier struct to register
-+ */
-+void preempt_notifier_register(struct preempt_notifier *notifier)
-+{
-+	if (!static_branch_unlikely(&preempt_notifier_key))
-+		WARN(1, "registering preempt_notifier while notifiers disabled\n");
-+
-+	hlist_add_head(&notifier->link, &current->preempt_notifiers);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_register);
-+
-+/**
-+ * preempt_notifier_unregister - no longer interested in preemption notifications
-+ * @notifier: notifier struct to unregister
-+ *
-+ * This is *not* safe to call from within a preemption notifier.
-+ */
-+void preempt_notifier_unregister(struct preempt_notifier *notifier)
-+{
-+	hlist_del(&notifier->link);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
-+
-+static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+	struct preempt_notifier *notifier;
-+
-+	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
-+		notifier->ops->sched_in(notifier, raw_smp_processor_id());
-+}
-+
-+static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+	if (static_branch_unlikely(&preempt_notifier_key))
-+		__fire_sched_in_preempt_notifiers(curr);
-+}
-+
-+static void
-+__fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+				   struct task_struct *next)
-+{
-+	struct preempt_notifier *notifier;
-+
-+	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
-+		notifier->ops->sched_out(notifier, next);
-+}
-+
-+static __always_inline void
-+fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+				 struct task_struct *next)
-+{
-+	if (static_branch_unlikely(&preempt_notifier_key))
-+		__fire_sched_out_preempt_notifiers(curr, next);
-+}
-+
-+#else /* !CONFIG_PREEMPT_NOTIFIERS */
-+
-+static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+}
-+
-+static inline void
-+fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+				 struct task_struct *next)
-+{
-+}
-+
-+#endif /* CONFIG_PREEMPT_NOTIFIERS */
-+
-+static inline void prepare_task(struct task_struct *next)
-+{
-+	/*
-+	 * Claim the task as running, we do this before switching to it
-+	 * such that any running task will have this set.
-+	 *
-+	 * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
-+	 * its ordering comment.
-+	 */
-+	WRITE_ONCE(next->on_cpu, 1);
-+}
-+
-+static inline void finish_task(struct task_struct *prev)
-+{
-+#ifdef CONFIG_SMP
-+	/*
-+	 * This must be the very last reference to @prev from this CPU. After
-+	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
-+	 * must ensure this doesn't happen until the switch is completely
-+	 * finished.
-+	 *
-+	 * In particular, the load of prev->state in finish_task_switch() must
-+	 * happen before this.
-+	 *
-+	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
-+	 */
-+	smp_store_release(&prev->on_cpu, 0);
-+#else
-+	prev->on_cpu = 0;
-+#endif
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
-+{
-+	void (*func)(struct rq *rq);
-+	struct balance_callback *next;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	while (head) {
-+		func = (void (*)(struct rq *))head->func;
-+		next = head->next;
-+		head->next = NULL;
-+		head = next;
-+
-+		func(rq);
-+	}
-+}
-+
-+static void balance_push(struct rq *rq);
-+
-+/*
-+ * balance_push_callback is a right abuse of the callback interface and plays
-+ * by significantly different rules.
-+ *
-+ * Where the normal balance_callback's purpose is to be run in the same context
-+ * that queued it (only later, when it's safe to drop rq->lock again),
-+ * balance_push_callback is specifically targeted at __schedule().
-+ *
-+ * This abuse is tolerated because it places all the unlikely/odd cases behind
-+ * a single test, namely: rq->balance_callback == NULL.
-+ */
-+struct balance_callback balance_push_callback = {
-+	.next = NULL,
-+	.func = balance_push,
-+};
-+
-+static inline struct balance_callback *
-+__splice_balance_callbacks(struct rq *rq, bool split)
-+{
-+	struct balance_callback *head = rq->balance_callback;
-+
-+	if (likely(!head))
-+		return NULL;
-+
-+	lockdep_assert_rq_held(rq);
-+	/*
-+	 * Must not take balance_push_callback off the list when
-+	 * splice_balance_callbacks() and balance_callbacks() are not
-+	 * in the same rq->lock section.
-+	 *
-+	 * In that case it would be possible for __schedule() to interleave
-+	 * and observe the list empty.
-+	 */
-+	if (split && head == &balance_push_callback)
-+		head = NULL;
-+	else
-+		rq->balance_callback = NULL;
-+
-+	return head;
-+}
-+
-+static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
-+{
-+	return __splice_balance_callbacks(rq, true);
-+}
-+
-+static void __balance_callbacks(struct rq *rq)
-+{
-+	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
-+}
-+
-+static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
-+{
-+	unsigned long flags;
-+
-+	if (unlikely(head)) {
-+		raw_spin_lock_irqsave(&rq->lock, flags);
-+		do_balance_callbacks(rq, head);
-+		raw_spin_unlock_irqrestore(&rq->lock, flags);
-+	}
-+}
-+
-+#else
-+
-+static inline void __balance_callbacks(struct rq *rq)
-+{
-+}
-+
-+static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
-+{
-+	return NULL;
-+}
-+
-+static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
-+{
-+}
-+
-+#endif
-+
-+static inline void
-+prepare_lock_switch(struct rq *rq, struct task_struct *next)
-+{
-+	/*
-+	 * Since the runqueue lock will be released by the next
-+	 * The runqueue lock will be released by the next
-+	 * of the scheduler it's an obvious special-case), so we
-+	 * do an early lockdep release here:
-+	 */
-+	spin_release(&rq->lock.dep_map, _THIS_IP_);
-+#ifdef CONFIG_DEBUG_SPINLOCK
-+	/* this is a valid case when another task releases the spinlock */
-+	rq->lock.owner = next;
-+#endif
-+}
-+
-+static inline void finish_lock_switch(struct rq *rq)
-+{
-+	/*
-+	 * If we are tracking spinlock dependencies then we have to
-+	 * fix up the runqueue lock - which gets 'carried over' from
-+	 * prev into current:
-+	 */
-+	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
-+	__balance_callbacks(rq);
-+	raw_spin_unlock_irq(&rq->lock);
-+}
-+
-+/*
-+ * NOP if the arch has not defined these:
-+ */
-+
-+#ifndef prepare_arch_switch
-+# define prepare_arch_switch(next)	do { } while (0)
-+#endif
-+
-+#ifndef finish_arch_post_lock_switch
-+# define finish_arch_post_lock_switch()	do { } while (0)
-+#endif
-+
-+static inline void kmap_local_sched_out(void)
-+{
-+#ifdef CONFIG_KMAP_LOCAL
-+	if (unlikely(current->kmap_ctrl.idx))
-+		__kmap_local_sched_out();
-+#endif
-+}
-+
-+static inline void kmap_local_sched_in(void)
-+{
-+#ifdef CONFIG_KMAP_LOCAL
-+	if (unlikely(current->kmap_ctrl.idx))
-+		__kmap_local_sched_in();
-+#endif
-+}
-+
-+/**
-+ * prepare_task_switch - prepare to switch tasks
-+ * @rq: the runqueue preparing to switch
-+ * @next: the task we are going to switch to.
-+ *
-+ * This is called with the rq lock held and interrupts off. It must
-+ * be paired with a subsequent finish_task_switch after the context
-+ * switch.
-+ *
-+ * prepare_task_switch sets up locking and calls architecture specific
-+ * hooks.
-+ */
-+static inline void
-+prepare_task_switch(struct rq *rq, struct task_struct *prev,
-+		    struct task_struct *next)
-+{
-+	kcov_prepare_switch(prev);
-+	sched_info_switch(rq, prev, next);
-+	perf_event_task_sched_out(prev, next);
-+	rseq_preempt(prev);
-+	fire_sched_out_preempt_notifiers(prev, next);
-+	kmap_local_sched_out();
-+	prepare_task(next);
-+	prepare_arch_switch(next);
-+}
-+
-+/**
-+ * finish_task_switch - clean up after a task-switch
-+ * @rq: runqueue associated with task-switch
-+ * @prev: the thread we just switched away from.
-+ *
-+ * finish_task_switch must be called after the context switch, paired
-+ * with a prepare_task_switch call before the context switch.
-+ * finish_task_switch will reconcile locking set up by prepare_task_switch,
-+ * and do any other architecture-specific cleanup actions.
-+ *
-+ * Note that we may have delayed dropping an mm in context_switch(). If
-+ * so, we finish that here outside of the runqueue lock.  (Doing it
-+ * with the lock held can cause deadlocks; see schedule() for
-+ * details.)
-+ *
-+ * The context switch has flipped the stack from under us and restored the
-+ * local variables which were saved when this task called schedule() in the
-+ * past. prev == current is still correct but we need to recalculate this_rq
-+ * because prev may have moved to another CPU.
-+ */
-+static struct rq *finish_task_switch(struct task_struct *prev)
-+	__releases(rq->lock)
-+{
-+	struct rq *rq = this_rq();
-+	struct mm_struct *mm = rq->prev_mm;
-+	unsigned int prev_state;
-+
-+	/*
-+	 * The previous task will have left us with a preempt_count of 2
-+	 * because it left us after:
-+	 *
-+	 *	schedule()
-+	 *	  preempt_disable();			// 1
-+	 *	  __schedule()
-+	 *	    raw_spin_lock_irq(&rq->lock)	// 2
-+	 *
-+	 * Also, see FORK_PREEMPT_COUNT.
-+	 */
-+	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
-+		      "corrupted preempt_count: %s/%d/0x%x\n",
-+		      current->comm, current->pid, preempt_count()))
-+		preempt_count_set(FORK_PREEMPT_COUNT);
-+
-+	rq->prev_mm = NULL;
-+
-+	/*
-+	 * A task struct has one reference for the use as "current".
-+	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
-+	 * schedule one last time. The schedule call will never return, and
-+	 * the scheduled task must drop that reference.
-+	 *
-+	 * We must observe prev->state before clearing prev->on_cpu (in
-+	 * finish_task), otherwise a concurrent wakeup can get prev
-+	 * running on another CPU and we could race with its RUNNING -> DEAD
-+	 * transition, resulting in a double drop.
-+	 */
-+	prev_state = READ_ONCE(prev->__state);
-+	vtime_task_switch(prev);
-+	perf_event_task_sched_in(prev, current);
-+	finish_task(prev);
-+	tick_nohz_task_switch();
-+	finish_lock_switch(rq);
-+	finish_arch_post_lock_switch();
-+	kcov_finish_switch(current);
-+	/*
-+	 * kmap_local_sched_out() is invoked with rq::lock held and
-+	 * interrupts disabled. There is no requirement for that, but the
-+	 * sched out code does not have an interrupt enabled section.
-+	 * Restoring the maps on sched in does not require interrupts being
-+	 * disabled either.
-+	 */
-+	kmap_local_sched_in();
-+
-+	fire_sched_in_preempt_notifiers(current);
-+	/*
-+	 * When switching through a kernel thread, the loop in
-+	 * membarrier_{private,global}_expedited() may have observed that
-+	 * kernel thread and not issued an IPI. It is therefore possible to
-+	 * schedule between user->kernel->user threads without passing through
-+	 * switch_mm(). Membarrier requires a barrier after storing to
-+	 * rq->curr, before returning to userspace, so provide them here:
-+	 *
-+	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
-+	 *   provided by mmdrop(),
-+	 * - a sync_core for SYNC_CORE.
-+	 */
-+	if (mm) {
-+		membarrier_mm_sync_core_before_usermode(mm);
-+		mmdrop_sched(mm);
-+	}
-+	if (unlikely(prev_state == TASK_DEAD)) {
-+		/* Task is done with its stack. */
-+		put_task_stack(prev);
-+
-+		put_task_struct_rcu_user(prev);
-+	}
-+
-+	return rq;
-+}
-+
-+/**
-+ * schedule_tail - first thing a freshly forked thread must call.
-+ * @prev: the thread we just switched away from.
-+ */
-+asmlinkage __visible void schedule_tail(struct task_struct *prev)
-+	__releases(rq->lock)
-+{
-+	/*
-+	 * New tasks start with FORK_PREEMPT_COUNT, see there and
-+	 * finish_task_switch() for details.
-+	 *
-+	 * finish_task_switch() will drop rq->lock() and lower preempt_count
-+	 * and the preempt_enable() will end up enabling preemption (on
-+	 * PREEMPT_COUNT kernels).
-+	 */
-+
-+	finish_task_switch(prev);
-+	preempt_enable();
-+
-+	if (current->set_child_tid)
-+		put_user(task_pid_vnr(current), current->set_child_tid);
-+
-+	calculate_sigpending();
-+}
-+
-+/*
-+ * context_switch - switch to the new MM and the new thread's register state.
-+ */
-+static __always_inline struct rq *
-+context_switch(struct rq *rq, struct task_struct *prev,
-+	       struct task_struct *next)
-+{
-+	prepare_task_switch(rq, prev, next);
-+
-+	/*
-+	 * For paravirt, this is coupled with an exit in switch_to to
-+	 * combine the page table reload and the switch backend into
-+	 * one hypercall.
-+	 */
-+	arch_start_context_switch(prev);
-+
-+	/*
-+	 * kernel -> kernel   lazy + transfer active
-+	 *   user -> kernel   lazy + mmgrab() active
-+	 *
-+	 * kernel ->   user   switch + mmdrop() active
-+	 *   user ->   user   switch
-+	 *
-+	 * switch_mm_cid() needs to be updated if the barriers provided
-+	 * by context_switch() are modified.
-+	 */
-+	if (!next->mm) {                                // to kernel
-+		enter_lazy_tlb(prev->active_mm, next);
-+
-+		next->active_mm = prev->active_mm;
-+		if (prev->mm)                           // from user
-+			mmgrab(prev->active_mm);
-+		else
-+			prev->active_mm = NULL;
-+	} else {                                        // to user
-+		membarrier_switch_mm(rq, prev->active_mm, next->mm);
-+		/*
-+		 * sys_membarrier() requires an smp_mb() between setting
-+		 * rq->curr / membarrier_switch_mm() and returning to userspace.
-+		 *
-+		 * The below provides this either through switch_mm(), or in
-+		 * case 'prev->active_mm == next->mm' through
-+		 * finish_task_switch()'s mmdrop().
-+		 */
-+		switch_mm_irqs_off(prev->active_mm, next->mm, next);
-+		lru_gen_use_mm(next->mm);
-+
-+		if (!prev->mm) {                        // from kernel
-+			/* will mmdrop() in finish_task_switch(). */
-+			rq->prev_mm = prev->active_mm;
-+			prev->active_mm = NULL;
-+		}
-+	}
-+
-+	/* switch_mm_cid() requires the memory barriers above. */
-+	switch_mm_cid(rq, prev, next);
-+
-+	prepare_lock_switch(rq, next);
-+
-+	/* Here we just switch the register state and the stack. */
-+	switch_to(prev, next, prev);
-+	barrier();
-+
-+	return finish_task_switch(prev);
-+}
-+
-+/*
-+ * nr_running, nr_uninterruptible and nr_context_switches:
-+ *
-+ * externally visible scheduler statistics: current number of runnable
-+ * threads, total number of context switches performed since bootup.
-+ */
-+unsigned int nr_running(void)
-+{
-+	unsigned int i, sum = 0;
-+
-+	for_each_online_cpu(i)
-+		sum += cpu_rq(i)->nr_running;
-+
-+	return sum;
-+}
-+
-+/*
-+ * Check if only the current task is running on the CPU.
-+ *
-+ * Caution: this function does not check that the caller has disabled
-+ * preemption, thus the result might have a time-of-check-to-time-of-use
-+ * race.  The caller is responsible for using it correctly, for example:
-+ *
-+ * - from a non-preemptible section (of course)
-+ *
-+ * - from a thread that is bound to a single CPU
-+ *
-+ * - in a loop with very short iterations (e.g. a polling loop)
-+ */
-+bool single_task_running(void)
-+{
-+	return raw_rq()->nr_running == 1;
-+}
-+EXPORT_SYMBOL(single_task_running);
-+
-+unsigned long long nr_context_switches_cpu(int cpu)
-+{
-+	return cpu_rq(cpu)->nr_switches;
-+}
-+
-+unsigned long long nr_context_switches(void)
-+{
-+	int i;
-+	unsigned long long sum = 0;
-+
-+	for_each_possible_cpu(i)
-+		sum += cpu_rq(i)->nr_switches;
-+
-+	return sum;
-+}
-+
-+/*
-+ * Consumers of these two interfaces, like for example the cpuidle menu
-+ * governor, are using nonsensical data: they prefer shallow idle-state selection
-+ * for a CPU that has IO-wait, even though the task might not end up running on
-+ * that CPU when it does become runnable.
-+ */
-+
-+unsigned int nr_iowait_cpu(int cpu)
-+{
-+	return atomic_read(&cpu_rq(cpu)->nr_iowait);
-+}
-+
-+/*
-+ * IO-wait accounting, and how it's mostly bollocks (on SMP).
-+ *
-+ * The idea behind IO-wait accounting is to account the idle time that we could
-+ * have spent running if it were not for IO. That is, if we were to improve the
-+ * storage performance, we'd have a proportional reduction in IO-wait time.
-+ *
-+ * This all works nicely on UP, where, when a task blocks on IO, we account
-+ * idle time as IO-wait, because if the storage were faster, it could've been
-+ * running and we'd not be idle.
-+ *
-+ * This has been extended to SMP, by doing the same for each CPU. This however
-+ * is broken.
-+ *
-+ * Imagine for instance the case where two tasks block on one CPU, only the one
-+ * CPU will have IO-wait accounted, while the other has regular idle. Even
-+ * though, if the storage were faster, both could've run at the same time,
-+ * utilising both CPUs.
-+ *
-+ * This means that, when looking globally, the current IO-wait accounting on
-+ * SMP is a lower bound, due to under-accounting.
-+ *
-+ * Worse, since the numbers are provided per CPU, they are sometimes
-+ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
-+ * associated with any one particular CPU; it can wake up on a different CPU
-+ * than the one it blocked on. This means the per-CPU IO-wait number is meaningless.
-+ *
-+ * Task CPU affinities can make all that even more 'interesting'.
-+ */
-+
-+unsigned int nr_iowait(void)
-+{
-+	unsigned int i, sum = 0;
-+
-+	for_each_possible_cpu(i)
-+		sum += nr_iowait_cpu(i);
-+
-+	return sum;
-+}
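For a concrete view of the numbers this accounting produces, here is a minimal user-space sketch (illustrative only, not part of the patch above) that reads the aggregate iowait counter exported through /proc/stat, assuming the conventional "cpu  user nice system idle iowait ..." layout of its first line:

#include <stdio.h>

int main(void)
{
	/* Fields of the aggregate "cpu" line, in USER_HZ ticks. */
	unsigned long long usr, nic, sys, idl, iow;
	FILE *f = fopen("/proc/stat", "r");

	if (!f)
		return 1;
	if (fscanf(f, "cpu %llu %llu %llu %llu %llu",
		   &usr, &nic, &sys, &idl, &iow) == 5)
		printf("system-wide iowait: %llu ticks\n", iow);
	fclose(f);
	return 0;
}

Given the caveats spelled out above, the per-CPU variants of this counter deserve the same scepticism.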
-+
-+#ifdef CONFIG_SMP
-+
-+/*
-+ * sched_exec - execve() is a valuable balancing opportunity, because at
-+ * this point the task has the smallest effective memory and cache
-+ * footprint.
-+ */
-+void sched_exec(void)
-+{
-+}
-+
-+#endif
-+
-+DEFINE_PER_CPU(struct kernel_stat, kstat);
-+DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
-+
-+EXPORT_PER_CPU_SYMBOL(kstat);
-+EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
-+
-+static inline void update_curr(struct rq *rq, struct task_struct *p)
-+{
-+	s64 ns = rq->clock_task - p->last_ran;
-+
-+	p->sched_time += ns;
-+	cgroup_account_cputime(p, ns);
-+	account_group_exec_runtime(p, ns);
-+
-+	p->time_slice -= ns;
-+	p->last_ran = rq->clock_task;
-+}
-+
-+/*
-+ * Return accounted runtime for the task.
-+ * If the task is currently running, also include the pending runtime that
-+ * has not been accounted yet.
-+ */
-+unsigned long long task_sched_runtime(struct task_struct *p)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+	u64 ns;
-+
-+#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
-+	/*
-+	 * 64-bit doesn't need locks to atomically read a 64-bit value.
-+	 * So we have an optimization chance when the task's delta_exec is 0.
-+	 * Reading ->on_cpu is racy, but this is ok.
-+	 *
-+	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
-+	 * If we race with it entering CPU, unaccounted time is 0. This is
-+	 * indistinguishable from the read occurring a few cycles earlier.
-+	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
-+	 * been accounted, so we're correct here as well.
-+	 */
-+	if (!p->on_cpu || !task_on_rq_queued(p))
-+		return tsk_seruntime(p);
-+#endif
-+
-+	rq = task_access_lock_irqsave(p, &lock, &flags);
-+	/*
-+	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
-+	 * project cycles that may never be accounted to this
-+	 * thread, breaking clock_gettime().
-+	 */
-+	if (p == rq->curr && task_on_rq_queued(p)) {
-+		update_rq_clock(rq);
-+		update_curr(rq, p);
-+	}
-+	ns = tsk_seruntime(p);
-+	task_access_unlock_irqrestore(p, lock, &flags);
-+
-+	return ns;
-+}
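task_sched_runtime() is what ultimately backs the per-thread CPU-time clock (CLOCK_THREAD_CPUTIME_ID) that the comment above alludes to via clock_gettime(). A minimal user-space sketch, not taken from this patch, that burns a little CPU and then reads its own accounted runtime:

#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec ts;
	volatile unsigned long spin = 0;
	unsigned long i;

	/* Burn some CPU so there is runtime to account. */
	for (i = 0; i < 50000000UL; i++)
		spin += i;

	if (clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts) != 0)
		return 1;

	printf("thread cputime: %ld.%09ld s (spin=%lu)\n",
	       (long)ts.tv_sec, ts.tv_nsec, spin);
	return 0;
}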
-+
-+/* This manages tasks that have run out of timeslice during a scheduler_tick */
-+static inline void scheduler_task_tick(struct rq *rq)
-+{
-+	struct task_struct *p = rq->curr;
-+
-+	if (is_idle_task(p))
-+		return;
-+
-+	update_curr(rq, p);
-+	cpufreq_update_util(rq, 0);
-+
-+	/*
-+	 * Tasks that have less than RESCHED_NS of time slice left will be
-+	 * rescheduled.
-+	 */
-+	if (p->time_slice >= RESCHED_NS)
-+		return;
-+	set_tsk_need_resched(p);
-+	set_preempt_need_resched();
-+}
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+static u64 cpu_resched_latency(struct rq *rq)
-+{
-+	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
-+	u64 resched_latency, now = rq_clock(rq);
-+	static bool warned_once;
-+
-+	if (sysctl_resched_latency_warn_once && warned_once)
-+		return 0;
-+
-+	if (!need_resched() || !latency_warn_ms)
-+		return 0;
-+
-+	if (system_state == SYSTEM_BOOTING)
-+		return 0;
-+
-+	if (!rq->last_seen_need_resched_ns) {
-+		rq->last_seen_need_resched_ns = now;
-+		rq->ticks_without_resched = 0;
-+		return 0;
-+	}
-+
-+	rq->ticks_without_resched++;
-+	resched_latency = now - rq->last_seen_need_resched_ns;
-+	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
-+		return 0;
-+
-+	warned_once = true;
-+
-+	return resched_latency;
-+}
-+
-+static int __init setup_resched_latency_warn_ms(char *str)
-+{
-+	long val;
-+
-+	if ((kstrtol(str, 0, &val))) {
-+		pr_warn("Unable to set resched_latency_warn_ms\n");
-+		return 1;
-+	}
-+
-+	sysctl_resched_latency_warn_ms = val;
-+	return 1;
-+}
-+__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
-+#else
-+static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
-+#endif /* CONFIG_SCHED_DEBUG */
-+
-+/*
-+ * This function gets called by the timer code, with HZ frequency.
-+ * We call it with interrupts disabled.
-+ */
-+void sched_tick(void)
-+{
-+	int cpu __maybe_unused = smp_processor_id();
-+	struct rq *rq = cpu_rq(cpu);
-+	struct task_struct *curr = rq->curr;
-+	u64 resched_latency;
-+
-+	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
-+		arch_scale_freq_tick();
-+
-+	sched_clock_tick();
-+
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+
-+	scheduler_task_tick(rq);
-+	if (sched_feat(LATENCY_WARN))
-+		resched_latency = cpu_resched_latency(rq);
-+	calc_global_load_tick(rq);
-+
-+	task_tick_mm_cid(rq, rq->curr);
-+
-+	raw_spin_unlock(&rq->lock);
-+
-+	if (sched_feat(LATENCY_WARN) && resched_latency)
-+		resched_latency_warn(cpu, resched_latency);
-+
-+	perf_event_task_tick();
-+
-+	if (curr->flags & PF_WQ_WORKER)
-+		wq_worker_tick(curr);
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+static int active_balance_cpu_stop(void *data)
-+{
-+	struct balance_arg *arg = data;
-+	struct task_struct *p = arg->task;
-+	struct rq *rq = this_rq();
-+	unsigned long flags;
-+	cpumask_t tmp;
-+
-+	local_irq_save(flags);
-+
-+	raw_spin_lock(&p->pi_lock);
-+	raw_spin_lock(&rq->lock);
-+
-+	arg->active = 0;
-+
-+	if (task_on_rq_queued(p) && task_rq(p) == rq &&
-+	    cpumask_and(&tmp, p->cpus_ptr, arg->cpumask) &&
-+	    !is_migration_disabled(p)) {
-+		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu_of(rq)));
-+		rq = move_queued_task(rq, p, dcpu);
-+	}
-+
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+	return 0;
-+}
-+
-+/* trigger_active_balance - for @cpu/@rq */
-+static inline int
-+trigger_active_balance(struct rq *src_rq, struct rq *rq, struct balance_arg *arg)
-+{
-+	unsigned long flags;
-+	struct task_struct *p;
-+	int res;
-+
-+	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-+		return 0;
-+
-+	res = (1 == rq->nr_running) &&					\
-+	      !is_migration_disabled((p = sched_rq_first_task(rq))) &&	\
-+	      cpumask_intersects(p->cpus_ptr, arg->cpumask) &&		\
-+	      !arg->active;
-+	if (res) {
-+		arg->task = p;
-+
-+		arg->active = 1;
-+	}
-+
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+	if (res) {
-+		preempt_disable();
-+		raw_spin_unlock(&src_rq->lock);
-+
-+		stop_one_cpu_nowait(cpu_of(rq), active_balance_cpu_stop,
-+				    arg, &rq->active_balance_work);
-+
-+		preempt_enable();
-+		raw_spin_lock(&src_rq->lock);
-+	}
-+
-+	return res;
-+}
-+
-+#ifdef CONFIG_SCHED_SMT
-+/*
-+ * sg_balance - sibling group balance check for run queue @rq
-+ */
-+static inline void sg_balance(struct rq *rq)
-+{
-+	cpumask_t chk;
-+
-+	if (cpumask_andnot(&chk, cpu_active_mask, sched_idle_mask) &&
-+	    cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
-+		int i, cpu = cpu_of(rq);
-+
-+		for_each_cpu_wrap(i, &chk, cpu) {
-+			if (cpumask_subset(cpu_smt_mask(i), &chk)) {
-+				struct rq *target_rq = cpu_rq(i);
-+				if (trigger_active_balance(rq, target_rq, &target_rq->sg_balance_arg))
-+					return;
-+			}
-+		}
-+	}
-+}
-+
-+static DEFINE_PER_CPU(struct balance_callback, sg_balance_head) = {
-+	.func = sg_balance,
-+};
-+#endif /* CONFIG_SCHED_SMT */
-+
-+#endif /* CONFIG_SMP */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+
-+struct tick_work {
-+	int			cpu;
-+	atomic_t		state;
-+	struct delayed_work	work;
-+};
-+/* Values for ->state, see diagram below. */
-+#define TICK_SCHED_REMOTE_OFFLINE	0
-+#define TICK_SCHED_REMOTE_OFFLINING	1
-+#define TICK_SCHED_REMOTE_RUNNING	2
-+
-+/*
-+ * State diagram for ->state:
-+ *
-+ *
-+ *          TICK_SCHED_REMOTE_OFFLINE
-+ *                    |   ^
-+ *                    |   |
-+ *                    |   | sched_tick_remote()
-+ *                    |   |
-+ *                    |   |
-+ *                    +--TICK_SCHED_REMOTE_OFFLINING
-+ *                    |   ^
-+ *                    |   |
-+ * sched_tick_start() |   | sched_tick_stop()
-+ *                    |   |
-+ *                    V   |
-+ *          TICK_SCHED_REMOTE_RUNNING
-+ *
-+ *
-+ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
-+ * and sched_tick_start() are happy to leave the state in RUNNING.
-+ */
-+
-+static struct tick_work __percpu *tick_work_cpu;
-+
-+static void sched_tick_remote(struct work_struct *work)
-+{
-+	struct delayed_work *dwork = to_delayed_work(work);
-+	struct tick_work *twork = container_of(dwork, struct tick_work, work);
-+	int cpu = twork->cpu;
-+	struct rq *rq = cpu_rq(cpu);
-+	int os;
-+
-+	/*
-+	 * Handle the tick only if it appears the remote CPU is running in full
-+	 * dynticks mode. The check is racy by nature, but missing a tick or
-+	 * having one too many is no big deal because the scheduler tick updates
-+	 * statistics and checks timeslices in a time-independent way, regardless
-+	 * of when exactly it is running.
-+	 */
-+	if (tick_nohz_tick_stopped_cpu(cpu)) {
-+		guard(raw_spinlock_irqsave)(&rq->lock);
-+		struct task_struct *curr = rq->curr;
-+
-+		if (cpu_online(cpu)) {
-+			update_rq_clock(rq);
-+
-+			if (!is_idle_task(curr)) {
-+				/*
-+				 * Make sure the next tick runs within a
-+				 * reasonable amount of time.
-+				 */
-+				u64 delta = rq_clock_task(rq) - curr->last_ran;
-+				WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
-+			}
-+			scheduler_task_tick(rq);
-+
-+			calc_load_nohz_remote(rq);
-+		}
-+	}
-+
-+	/*
-+	 * Run the remote tick once per second (1Hz). This arbitrary
-+	 * frequency is large enough to avoid overload but short enough
-+	 * to keep scheduler internal stats reasonably up to date.  But
-+	 * first update state to reflect hotplug activity if required.
-+	 */
-+	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
-+	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
-+	if (os == TICK_SCHED_REMOTE_RUNNING)
-+		queue_delayed_work(system_unbound_wq, dwork, HZ);
-+}
-+
-+static void sched_tick_start(int cpu)
-+{
-+	int os;
-+	struct tick_work *twork;
-+
-+	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
-+		return;
-+
-+	WARN_ON_ONCE(!tick_work_cpu);
-+
-+	twork = per_cpu_ptr(tick_work_cpu, cpu);
-+	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
-+	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
-+	if (os == TICK_SCHED_REMOTE_OFFLINE) {
-+		twork->cpu = cpu;
-+		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
-+		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
-+	}
-+}
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+static void sched_tick_stop(int cpu)
-+{
-+	struct tick_work *twork;
-+	int os;
-+
-+	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
-+		return;
-+
-+	WARN_ON_ONCE(!tick_work_cpu);
-+
-+	twork = per_cpu_ptr(tick_work_cpu, cpu);
-+	/* There cannot be competing actions, but don't rely on stop-machine. */
-+	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
-+	WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
-+	/* Don't cancel, as this would mess up the state machine. */
-+}
-+#endif /* CONFIG_HOTPLUG_CPU */
-+
-+int __init sched_tick_offload_init(void)
-+{
-+	tick_work_cpu = alloc_percpu(struct tick_work);
-+	BUG_ON(!tick_work_cpu);
-+	return 0;
-+}
-+
-+#else /* !CONFIG_NO_HZ_FULL */
-+static inline void sched_tick_start(int cpu) { }
-+static inline void sched_tick_stop(int cpu) { }
-+#endif
-+
-+#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
-+				defined(CONFIG_PREEMPT_TRACER))
-+/*
-+ * If the value passed in is equal to the current preempt count
-+ * then we just disabled preemption. Start timing the latency.
-+ */
-+static inline void preempt_latency_start(int val)
-+{
-+	if (preempt_count() == val) {
-+		unsigned long ip = get_lock_parent_ip();
-+#ifdef CONFIG_DEBUG_PREEMPT
-+		current->preempt_disable_ip = ip;
-+#endif
-+		trace_preempt_off(CALLER_ADDR0, ip);
-+	}
-+}
-+
-+void preempt_count_add(int val)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	/*
-+	 * Underflow?
-+	 */
-+	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
-+		return;
-+#endif
-+	__preempt_count_add(val);
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	/*
-+	 * Spinlock count overflowing soon?
-+	 */
-+	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
-+				PREEMPT_MASK - 10);
-+#endif
-+	preempt_latency_start(val);
-+}
-+EXPORT_SYMBOL(preempt_count_add);
-+NOKPROBE_SYMBOL(preempt_count_add);
-+
-+/*
-+ * If the value passed in equals the current preempt count
-+ * then we just enabled preemption. Stop timing the latency.
-+ */
-+static inline void preempt_latency_stop(int val)
-+{
-+	if (preempt_count() == val)
-+		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
-+}
-+
-+void preempt_count_sub(int val)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	/*
-+	 * Underflow?
-+	 */
-+	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
-+		return;
-+	/*
-+	 * Is the spinlock portion underflowing?
-+	 */
-+	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
-+			!(preempt_count() & PREEMPT_MASK)))
-+		return;
-+#endif
-+
-+	preempt_latency_stop(val);
-+	__preempt_count_sub(val);
-+}
-+EXPORT_SYMBOL(preempt_count_sub);
-+NOKPROBE_SYMBOL(preempt_count_sub);
-+
-+#else
-+static inline void preempt_latency_start(int val) { }
-+static inline void preempt_latency_stop(int val) { }
-+#endif
-+
-+static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	return p->preempt_disable_ip;
-+#else
-+	return 0;
-+#endif
-+}
-+
-+/*
-+ * Print scheduling while atomic bug:
-+ */
-+static noinline void __schedule_bug(struct task_struct *prev)
-+{
-+	/* Save this before calling printk(), since that will clobber it */
-+	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
-+
-+	if (oops_in_progress)
-+		return;
-+
-+	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
-+		prev->comm, prev->pid, preempt_count());
-+
-+	debug_show_held_locks(prev);
-+	print_modules();
-+	if (irqs_disabled())
-+		print_irqtrace_events(prev);
-+	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
-+		pr_err("Preemption disabled at:");
-+		print_ip_sym(KERN_ERR, preempt_disable_ip);
-+	}
-+	check_panic_on_warn("scheduling while atomic");
-+
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+
-+/*
-+ * Various schedule()-time debugging checks and statistics:
-+ */
-+static inline void schedule_debug(struct task_struct *prev, bool preempt)
-+{
-+#ifdef CONFIG_SCHED_STACK_END_CHECK
-+	if (task_stack_end_corrupted(prev))
-+		panic("corrupted stack end detected inside scheduler\n");
-+
-+	if (task_scs_end_corrupted(prev))
-+		panic("corrupted shadow stack detected inside scheduler\n");
-+#endif
-+
-+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
-+	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
-+		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
-+			prev->comm, prev->pid, prev->non_block_count);
-+		dump_stack();
-+		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+	}
-+#endif
-+
-+	if (unlikely(in_atomic_preempt_off())) {
-+		__schedule_bug(prev);
-+		preempt_count_set(PREEMPT_DISABLED);
-+	}
-+	rcu_sleep_check();
-+	SCHED_WARN_ON(ct_state() == CONTEXT_USER);
-+
-+	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
-+
-+	schedstat_inc(this_rq()->sched_count);
-+}
-+
-+#ifdef ALT_SCHED_DEBUG
-+void alt_sched_debug(void)
-+{
-+	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
-+	       sched_rq_pending_mask.bits[0],
-+	       sched_idle_mask->bits[0],
-+	       sched_sg_idle_mask.bits[0]);
-+}
-+#else
-+inline void alt_sched_debug(void) {}
-+#endif
-+
-+#ifdef	CONFIG_SMP
-+
-+#ifdef CONFIG_PREEMPT_RT
-+#define SCHED_NR_MIGRATE_BREAK 8
-+#else
-+#define SCHED_NR_MIGRATE_BREAK 32
-+#endif
-+
-+const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
-+
-+/*
-+ * Migrate pending tasks in @rq to @dest_cpu
-+ */
-+static inline int
-+migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
-+{
-+	struct task_struct *p, *skip = rq->curr;
-+	int nr_migrated = 0;
-+	int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
-+
-+	/* Workaround to check that rq->curr is still on the rq */
-+	if (!task_on_rq_queued(skip))
-+		return 0;
-+
-+	while (skip != rq->idle && nr_tries &&
-+	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
-+		skip = sched_rq_next_task(p, rq);
-+		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
-+			__SCHED_DEQUEUE_TASK(p, rq, 0, );
-+			set_task_cpu(p, dest_cpu);
-+			sched_task_sanity_check(p, dest_rq);
-+			sched_mm_cid_migrate_to(dest_rq, p, cpu_of(rq));
-+			__SCHED_ENQUEUE_TASK(p, dest_rq, 0, );
-+			nr_migrated++;
-+		}
-+		nr_tries--;
-+	}
-+
-+	return nr_migrated;
-+}
-+
-+static inline int take_other_rq_tasks(struct rq *rq, int cpu)
-+{
-+	cpumask_t *topo_mask, *end_mask, chk;
-+
-+	if (unlikely(!rq->online))
-+		return 0;
-+
-+	if (cpumask_empty(&sched_rq_pending_mask))
-+		return 0;
-+
-+	topo_mask = per_cpu(sched_cpu_topo_masks, cpu);
-+	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
-+	do {
-+		int i;
-+
-+		if (!cpumask_and(&chk, &sched_rq_pending_mask, topo_mask))
-+			continue;
-+
-+		for_each_cpu_wrap(i, &chk, cpu) {
-+			int nr_migrated;
-+			struct rq *src_rq;
-+
-+			src_rq = cpu_rq(i);
-+			if (!do_raw_spin_trylock(&src_rq->lock))
-+				continue;
-+			spin_acquire(&src_rq->lock.dep_map,
-+				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
-+
-+			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
-+				src_rq->nr_running -= nr_migrated;
-+				if (src_rq->nr_running < 2)
-+					cpumask_clear_cpu(i, &sched_rq_pending_mask);
-+
-+				spin_release(&src_rq->lock.dep_map, _RET_IP_);
-+				do_raw_spin_unlock(&src_rq->lock);
-+
-+				rq->nr_running += nr_migrated;
-+				if (rq->nr_running > 1)
-+					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
-+
-+				update_sched_preempt_mask(rq);
-+				cpufreq_update_util(rq, 0);
-+
-+				return 1;
-+			}
-+
-+			spin_release(&src_rq->lock.dep_map, _RET_IP_);
-+			do_raw_spin_unlock(&src_rq->lock);
-+		}
-+	} while (++topo_mask < end_mask);
-+
-+	return 0;
-+}
-+#endif
-+
-+static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
-+{
-+	p->time_slice = sysctl_sched_base_slice;
-+
-+	sched_task_renew(p, rq);
-+
-+	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
-+		requeue_task(p, rq);
-+}
-+
-+/*
-+ * Timeslices below RESCHED_NS are considered as good as expired as there's no
-+ * point rescheduling when there's so little time left.
-+ */
-+static inline void check_curr(struct task_struct *p, struct rq *rq)
-+{
-+	if (unlikely(rq->idle == p))
-+		return;
-+
-+	update_curr(rq, p);
-+
-+	if (p->time_slice < RESCHED_NS)
-+		time_slice_expired(p, rq);
-+}
-+
-+static inline struct task_struct *
-+choose_next_task(struct rq *rq, int cpu)
-+{
-+	struct task_struct *next = sched_rq_first_task(rq);
-+
-+	if (next == rq->idle) {
-+#ifdef	CONFIG_SMP
-+		if (!take_other_rq_tasks(rq, cpu)) {
-+#endif
-+
-+#ifdef CONFIG_SCHED_SMT
-+			if (static_key_count(&sched_smt_present.key) > 1 &&
-+			    cpumask_test_cpu(cpu, sched_sg_idle_mask) &&
-+			    rq->online)
-+				__queue_balance_callback(rq, &per_cpu(sg_balance_head, cpu));
-+#endif
-+			schedstat_inc(rq->sched_goidle);
-+			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
-+			return next;
-+#ifdef	CONFIG_SMP
-+		}
-+		next = sched_rq_first_task(rq);
-+#endif
-+	}
-+#ifdef CONFIG_HIGH_RES_TIMERS
-+	hrtick_start(rq, next->time_slice);
-+#endif
-+	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
-+	return next;
-+}
-+
-+/*
-+ * Constants for the sched_mode argument of __schedule().
-+ *
-+ * The mode argument allows RT enabled kernels to differentiate a
-+ * preemption from blocking on an 'sleeping' spin/rwlock. Note that
-+ * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
-+ * optimize the AND operation out and just check for zero.
-+ */
-+#define SM_NONE			0x0
-+#define SM_PREEMPT		0x1
-+#define SM_RTLOCK_WAIT		0x2
-+
-+#ifndef CONFIG_PREEMPT_RT
-+# define SM_MASK_PREEMPT	(~0U)
-+#else
-+# define SM_MASK_PREEMPT	SM_PREEMPT
-+#endif
-+
-+/*
-+ * schedule() is the main scheduler function.
-+ *
-+ * The main means of driving the scheduler and thus entering this function are:
-+ *
-+ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
-+ *
-+ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
-+ *      paths. For example, see arch/x86/entry_64.S.
-+ *
-+ *      To drive preemption between tasks, the scheduler sets the flag in timer
-+ *      interrupt handler sched_tick().
-+ *
-+ *   3. Wakeups don't really cause entry into schedule(). They add a
-+ *      task to the run-queue and that's it.
-+ *
-+ *      Now, if the new task added to the run-queue preempts the current
-+ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
-+ *      called on the nearest possible occasion:
-+ *
-+ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
-+ *
-+ *         - in syscall or exception context, at the next outmost
-+ *           preempt_enable(). (this might be as soon as the wake_up()'s
-+ *           spin_unlock()!)
-+ *
-+ *         - in IRQ context, return from interrupt-handler to
-+ *           preemptible context
-+ *
-+ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
-+ *         then at the next:
-+ *
-+ *          - cond_resched() call
-+ *          - explicit schedule() call
-+ *          - return from syscall or exception to user-space
-+ *          - return from interrupt-handler to user-space
-+ *
-+ * WARNING: must be called with preemption disabled!
-+ */
-+static void __sched notrace __schedule(unsigned int sched_mode)
-+{
-+	struct task_struct *prev, *next;
-+	unsigned long *switch_count;
-+	unsigned long prev_state;
-+	struct rq *rq;
-+	int cpu;
-+
-+	cpu = smp_processor_id();
-+	rq = cpu_rq(cpu);
-+	prev = rq->curr;
-+
-+	schedule_debug(prev, !!sched_mode);
-+
-+	/* Bypass the sched_feat(HRTICK) check, which Alt schedule FW doesn't support */
-+	hrtick_clear(rq);
-+
-+	local_irq_disable();
-+	rcu_note_context_switch(!!sched_mode);
-+
-+	/*
-+	 * Make sure that signal_pending_state()->signal_pending() below
-+	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
-+	 * done by the caller to avoid the race with signal_wake_up():
-+	 *
-+	 * __set_current_state(@state)		signal_wake_up()
-+	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
-+	 *					  wake_up_state(p, state)
-+	 *   LOCK rq->lock			    LOCK p->pi_state
-+	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
-+	 *     if (signal_pending_state())	    if (p->state & @state)
-+	 *
-+	 * Also, the membarrier system call requires a full memory barrier
-+	 * after coming from user-space, before storing to rq->curr; this
-+	 * barrier matches a full barrier in the proximity of the membarrier
-+	 * system call exit.
-+	 */
-+	raw_spin_lock(&rq->lock);
-+	smp_mb__after_spinlock();
-+
-+	update_rq_clock(rq);
-+
-+	switch_count = &prev->nivcsw;
-+	/*
-+	 * We must load prev->state once (task_struct::state is volatile), such
-+	 * that we form a control dependency vs deactivate_task() below.
-+	 */
-+	prev_state = READ_ONCE(prev->__state);
-+	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
-+		if (signal_pending_state(prev_state, prev)) {
-+			WRITE_ONCE(prev->__state, TASK_RUNNING);
-+		} else {
-+			prev->sched_contributes_to_load =
-+				(prev_state & TASK_UNINTERRUPTIBLE) &&
-+				!(prev_state & TASK_NOLOAD) &&
-+				!(prev_state & TASK_FROZEN);
-+
-+			if (prev->sched_contributes_to_load)
-+				rq->nr_uninterruptible++;
-+
-+			/*
-+			 * __schedule()			ttwu()
-+			 *   prev_state = prev->state;    if (p->on_rq && ...)
-+			 *   if (prev_state)		    goto out;
-+			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
-+			 *				  p->state = TASK_WAKING
-+			 *
-+			 * Where __schedule() and ttwu() have matching control dependencies.
-+			 *
-+			 * After this, schedule() must not care about p->state any more.
-+			 */
-+			sched_task_deactivate(prev, rq);
-+			deactivate_task(prev, rq);
-+
-+			if (prev->in_iowait) {
-+				atomic_inc(&rq->nr_iowait);
-+				delayacct_blkio_start();
-+			}
-+		}
-+		switch_count = &prev->nvcsw;
-+	}
-+
-+	check_curr(prev, rq);
-+
-+	next = choose_next_task(rq, cpu);
-+	clear_tsk_need_resched(prev);
-+	clear_preempt_need_resched();
-+#ifdef CONFIG_SCHED_DEBUG
-+	rq->last_seen_need_resched_ns = 0;
-+#endif
-+
-+	if (likely(prev != next)) {
-+		next->last_ran = rq->clock_task;
-+
-+		/*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
-+		rq->nr_switches++;
-+		/*
-+		 * RCU users of rcu_dereference(rq->curr) may not see
-+		 * changes to task_struct made by pick_next_task().
-+		 */
-+		RCU_INIT_POINTER(rq->curr, next);
-+		/*
-+		 * The membarrier system call requires each architecture
-+		 * to have a full memory barrier after updating
-+		 * rq->curr, before returning to user-space.
-+		 *
-+		 * Here are the schemes providing that barrier on the
-+		 * various architectures:
-+		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
-+		 *   RISC-V.  switch_mm() relies on membarrier_arch_switch_mm()
-+		 *   on PowerPC and on RISC-V.
-+		 * - finish_lock_switch() for weakly-ordered
-+		 *   architectures where spin_unlock is a full barrier,
-+		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
-+		 *   is a RELEASE barrier),
-+		 *
-+		 * The barrier matches a full barrier in the proximity of
-+		 * the membarrier system call entry.
-+		 *
-+		 * On RISC-V, this barrier pairing is also needed for the
-+		 * SYNC_CORE command when switching between processes, cf.
-+		 * the inline comments in membarrier_arch_switch_mm().
-+		 */
-+		++*switch_count;
-+
-+		trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
-+
-+		/* Also unlocks the rq: */
-+		rq = context_switch(rq, prev, next);
-+
-+		cpu = cpu_of(rq);
-+	} else {
-+		__balance_callbacks(rq);
-+		raw_spin_unlock_irq(&rq->lock);
-+	}
-+}
-+
-+void __noreturn do_task_dead(void)
-+{
-+	/* Causes final put_task_struct in finish_task_switch(): */
-+	set_special_state(TASK_DEAD);
-+
-+	/* Tell freezer to ignore us: */
-+	current->flags |= PF_NOFREEZE;
-+
-+	__schedule(SM_NONE);
-+	BUG();
-+
-+	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
-+	for (;;)
-+		cpu_relax();
-+}
-+
-+static inline void sched_submit_work(struct task_struct *tsk)
-+{
-+	static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
-+	unsigned int task_flags;
-+
-+	/*
-+	 * Establish LD_WAIT_CONFIG context to ensure none of the code called
-+	 * will use a blocking primitive -- which would lead to recursion.
-+	 */
-+	lock_map_acquire_try(&sched_map);
-+
-+	task_flags = tsk->flags;
-+	/*
-+	 * If a worker goes to sleep, notify and ask workqueue whether it
-+	 * wants to wake up a task to maintain concurrency.
-+	 */
-+	if (task_flags & PF_WQ_WORKER)
-+		wq_worker_sleeping(tsk);
-+	else if (task_flags & PF_IO_WORKER)
-+		io_wq_worker_sleeping(tsk);
-+
-+	/*
-+	 * spinlock and rwlock must not flush block requests.  This will
-+	 * deadlock if the callback attempts to acquire a lock which is
-+	 * already acquired.
-+	 */
-+	SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
-+
-+	/*
-+	 * If we are going to sleep and we have plugged IO queued,
-+	 * make sure to submit it to avoid deadlocks.
-+	 */
-+	blk_flush_plug(tsk->plug, true);
-+
-+	lock_map_release(&sched_map);
-+}
-+
-+static void sched_update_worker(struct task_struct *tsk)
-+{
-+	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER | PF_BLOCK_TS)) {
-+		if (tsk->flags & PF_BLOCK_TS)
-+			blk_plug_invalidate_ts(tsk);
-+		if (tsk->flags & PF_WQ_WORKER)
-+			wq_worker_running(tsk);
-+		else if (tsk->flags & PF_IO_WORKER)
-+			io_wq_worker_running(tsk);
-+	}
-+}
-+
-+static __always_inline void __schedule_loop(unsigned int sched_mode)
-+{
-+	do {
-+		preempt_disable();
-+		__schedule(sched_mode);
-+		sched_preempt_enable_no_resched();
-+	} while (need_resched());
-+}
-+
-+asmlinkage __visible void __sched schedule(void)
-+{
-+	struct task_struct *tsk = current;
-+
-+#ifdef CONFIG_RT_MUTEXES
-+	lockdep_assert(!tsk->sched_rt_mutex);
-+#endif
-+
-+	if (!task_is_running(tsk))
-+		sched_submit_work(tsk);
-+	__schedule_loop(SM_NONE);
-+	sched_update_worker(tsk);
-+}
-+EXPORT_SYMBOL(schedule);
-+
-+/*
-+ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
-+ * state (i.e. has scheduled out non-voluntarily) by making sure that all
-+ * tasks have either left the run queue or have gone into user space.
-+ * As idle tasks do not do either, they must not ever be preempted
-+ * (schedule out non-voluntarily).
-+ *
-+ * schedule_idle() is similar to schedule_preempt_disabled() except that it
-+ * never enables preemption because it does not call sched_submit_work().
-+ */
-+void __sched schedule_idle(void)
-+{
-+	/*
-+	 * As this skips calling sched_submit_work(), which the idle task does
-+	 * regardless because that function is a nop when the task is in a
-+	 * TASK_RUNNING state, make sure this isn't used someplace that the
-+	 * current task can be in any other state. Note, idle is always in the
-+	 * TASK_RUNNING state.
-+	 */
-+	WARN_ON_ONCE(current->__state);
-+	do {
-+		__schedule(SM_NONE);
-+	} while (need_resched());
-+}
-+
-+#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
-+asmlinkage __visible void __sched schedule_user(void)
-+{
-+	/*
-+	 * If we come here after a random call to set_need_resched(),
-+	 * or we have been woken up remotely but the IPI has not yet arrived,
-+	 * we haven't yet exited the RCU idle mode. Do it here manually until
-+	 * we find a better solution.
-+	 *
-+	 * NB: There are buggy callers of this function.  Ideally we
-+	 * should warn if prev_state != CONTEXT_USER, but that will trigger
-+	 * too frequently to make sense yet.
-+	 */
-+	enum ctx_state prev_state = exception_enter();
-+	schedule();
-+	exception_exit(prev_state);
-+}
-+#endif
-+
-+/**
-+ * schedule_preempt_disabled - called with preemption disabled
-+ *
-+ * Returns with preemption disabled. Note: preempt_count must be 1
-+ */
-+void __sched schedule_preempt_disabled(void)
-+{
-+	sched_preempt_enable_no_resched();
-+	schedule();
-+	preempt_disable();
-+}
-+
-+#ifdef CONFIG_PREEMPT_RT
-+void __sched notrace schedule_rtlock(void)
-+{
-+	__schedule_loop(SM_RTLOCK_WAIT);
-+}
-+NOKPROBE_SYMBOL(schedule_rtlock);
-+#endif
-+
-+static void __sched notrace preempt_schedule_common(void)
-+{
-+	do {
-+		/*
-+		 * Because the function tracer can trace preempt_count_sub()
-+		 * and it also uses preempt_enable/disable_notrace(), if
-+		 * NEED_RESCHED is set, the preempt_enable_notrace() called
-+		 * by the function tracer will call this function again and
-+		 * cause infinite recursion.
-+		 *
-+		 * Preemption must be disabled here before the function
-+		 * tracer can trace. Break up preempt_disable() into two
-+		 * calls. One to disable preemption without fear of being
-+		 * traced. The other to still record the preemption latency,
-+		 * which can also be traced by the function tracer.
-+		 */
-+		preempt_disable_notrace();
-+		preempt_latency_start(1);
-+		__schedule(SM_PREEMPT);
-+		preempt_latency_stop(1);
-+		preempt_enable_no_resched_notrace();
-+
-+		/*
-+		 * Check again in case we missed a preemption opportunity
-+		 * between schedule and now.
-+		 */
-+	} while (need_resched());
-+}
-+
-+#ifdef CONFIG_PREEMPTION
-+/*
-+ * This is the entry point to schedule() from in-kernel preemption
-+ * off of preempt_enable.
-+ */
-+asmlinkage __visible void __sched notrace preempt_schedule(void)
-+{
-+	/*
-+	 * If there is a non-zero preempt_count or interrupts are disabled,
-+	 * we do not want to preempt the current task. Just return..
-+	 */
-+	if (likely(!preemptible()))
-+		return;
-+
-+	preempt_schedule_common();
-+}
-+NOKPROBE_SYMBOL(preempt_schedule);
-+EXPORT_SYMBOL(preempt_schedule);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#ifndef preempt_schedule_dynamic_enabled
-+#define preempt_schedule_dynamic_enabled	preempt_schedule
-+#define preempt_schedule_dynamic_disabled	NULL
-+#endif
-+DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
-+EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
-+void __sched notrace dynamic_preempt_schedule(void)
-+{
-+	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
-+		return;
-+	preempt_schedule();
-+}
-+NOKPROBE_SYMBOL(dynamic_preempt_schedule);
-+EXPORT_SYMBOL(dynamic_preempt_schedule);
-+#endif
-+#endif
-+
-+/**
-+ * preempt_schedule_notrace - preempt_schedule called by tracing
-+ *
-+ * The tracing infrastructure uses preempt_enable_notrace to prevent
-+ * recursion and tracing preempt enabling caused by the tracing
-+ * infrastructure itself. But as tracing can happen in areas coming
-+ * from userspace or just about to enter userspace, a preempt enable
-+ * can occur before user_exit() is called. This will cause the scheduler
-+ * to be called when the system is still in usermode.
-+ *
-+ * To prevent this, the preempt_enable_notrace will use this function
-+ * instead of preempt_schedule() to exit user context if needed before
-+ * calling the scheduler.
-+ */
-+asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
-+{
-+	enum ctx_state prev_ctx;
-+
-+	if (likely(!preemptible()))
-+		return;
-+
-+	do {
-+		/*
-+		 * Because the function tracer can trace preempt_count_sub()
-+		 * and it also uses preempt_enable/disable_notrace(), if
-+		 * NEED_RESCHED is set, the preempt_enable_notrace() called
-+		 * by the function tracer will call this function again and
-+		 * cause infinite recursion.
-+		 *
-+		 * Preemption must be disabled here before the function
-+		 * tracer can trace. Break up preempt_disable() into two
-+		 * calls. One to disable preemption without fear of being
-+		 * traced. The other to still record the preemption latency,
-+		 * which can also be traced by the function tracer.
-+		 */
-+		preempt_disable_notrace();
-+		preempt_latency_start(1);
-+		/*
-+		 * Needs preempt disabled in case user_exit() is traced
-+		 * and the tracer calls preempt_enable_notrace() causing
-+		 * an infinite recursion.
-+		 */
-+		prev_ctx = exception_enter();
-+		__schedule(SM_PREEMPT);
-+		exception_exit(prev_ctx);
-+
-+		preempt_latency_stop(1);
-+		preempt_enable_no_resched_notrace();
-+	} while (need_resched());
-+}
-+EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#ifndef preempt_schedule_notrace_dynamic_enabled
-+#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
-+#define preempt_schedule_notrace_dynamic_disabled	NULL
-+#endif
-+DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
-+EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
-+void __sched notrace dynamic_preempt_schedule_notrace(void)
-+{
-+	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
-+		return;
-+	preempt_schedule_notrace();
-+}
-+NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
-+EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
-+#endif
-+#endif
-+
-+#endif /* CONFIG_PREEMPTION */
-+
-+/*
-+ * This is the entry point to schedule() from kernel preemption
-+ * off of irq context.
-+ * Note, that this is called and return with irqs disabled. This will
-+ * protect us against recursive calling from irq.
-+ */
-+asmlinkage __visible void __sched preempt_schedule_irq(void)
-+{
-+	enum ctx_state prev_state;
-+
-+	/* Catch callers which need to be fixed */
-+	BUG_ON(preempt_count() || !irqs_disabled());
-+
-+	prev_state = exception_enter();
-+
-+	do {
-+		preempt_disable();
-+		local_irq_enable();
-+		__schedule(SM_PREEMPT);
-+		local_irq_disable();
-+		sched_preempt_enable_no_resched();
-+	} while (need_resched());
-+
-+	exception_exit(prev_state);
-+}
-+
-+int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
-+			  void *key)
-+{
-+	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
-+	return try_to_wake_up(curr->private, mode, wake_flags);
-+}
-+EXPORT_SYMBOL(default_wake_function);
-+
-+static inline void check_task_changed(struct task_struct *p, struct rq *rq)
-+{
-+	/* Trigger resched if task sched_prio has been modified. */
-+	if (task_on_rq_queued(p)) {
-+		update_rq_clock(rq);
-+		requeue_task(p, rq);
-+		wakeup_preempt(rq);
-+	}
-+}
-+
-+static void __setscheduler_prio(struct task_struct *p, int prio)
-+{
-+	p->prio = prio;
-+}
-+
-+#ifdef CONFIG_RT_MUTEXES
-+
-+/*
-+ * Would be more useful with typeof()/auto_type but they don't mix with
-+ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
-+ * name such that if someone were to implement this function we get to compare
-+ * notes.
-+ */
-+#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
-+
-+void rt_mutex_pre_schedule(void)
-+{
-+	lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
-+	sched_submit_work(current);
-+}
-+
-+void rt_mutex_schedule(void)
-+{
-+	lockdep_assert(current->sched_rt_mutex);
-+	__schedule_loop(SM_NONE);
-+}
-+
-+void rt_mutex_post_schedule(void)
-+{
-+	sched_update_worker(current);
-+	lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
-+}
-+
-+static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
-+{
-+	if (pi_task)
-+		prio = min(prio, pi_task->prio);
-+
-+	return prio;
-+}
-+
-+static inline int rt_effective_prio(struct task_struct *p, int prio)
-+{
-+	struct task_struct *pi_task = rt_mutex_get_top_task(p);
-+
-+	return __rt_effective_prio(pi_task, prio);
-+}
-+
-+/*
-+ * rt_mutex_setprio - set the current priority of a task
-+ * @p: task to boost
-+ * @pi_task: donor task
-+ *
-+ * This function changes the 'effective' priority of a task. It does
-+ * not touch ->normal_prio like __setscheduler().
-+ *
-+ * Used by the rt_mutex code to implement priority inheritance
-+ * logic. Call site only calls if the priority of the task changed.
-+ */
-+void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
-+{
-+	int prio;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	/* XXX used to be waiter->prio, not waiter->task->prio */
-+	prio = __rt_effective_prio(pi_task, p->normal_prio);
-+
-+	/*
-+	 * If nothing changed; bail early.
-+	 */
-+	if (p->pi_top_task == pi_task && prio == p->prio)
-+		return;
-+
-+	rq = __task_access_lock(p, &lock);
-+	/*
-+	 * Set under pi_lock && rq->lock, such that the value can be used under
-+	 * either lock.
-+	 *
-+	 * Note that there is a lot of trickiness in making this pointer cache work
-+	 * right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
-+	 * ensure a task is de-boosted (pi_task is set to NULL) before the
-+	 * task is allowed to run again (and can exit). This ensures the pointer
-+	 * points to a blocked task -- which guarantees the task is present.
-+	 */
-+	p->pi_top_task = pi_task;
-+
-+	/*
-+	 * For FIFO/RR we only need to set prio, if that matches we're done.
-+	 */
-+	if (prio == p->prio)
-+		goto out_unlock;
-+
-+	/*
-+	 * Idle task boosting is a no-no in general. There is one
-+	 * exception, when PREEMPT_RT and NOHZ is active:
-+	 *
-+	 * The idle task calls get_next_timer_interrupt() and holds
-+	 * the timer wheel base->lock on the CPU and another CPU wants
-+	 * to access the timer (probably to cancel it). We can safely
-+	 * ignore the boosting request, as the idle CPU runs this code
-+	 * with interrupts disabled and will complete the lock
-+	 * protected section without being interrupted. So there is no
-+	 * real need to boost.
-+	 */
-+	if (unlikely(p == rq->idle)) {
-+		WARN_ON(p != rq->curr);
-+		WARN_ON(p->pi_blocked_on);
-+		goto out_unlock;
-+	}
-+
-+	trace_sched_pi_setprio(p, pi_task);
-+
-+	__setscheduler_prio(p, prio);
-+
-+	check_task_changed(p, rq);
-+out_unlock:
-+	/* Avoid rq from going away on us: */
-+	preempt_disable();
-+
-+	if (task_on_rq_queued(p))
-+		__balance_callbacks(rq);
-+	__task_access_unlock(p, lock);
-+
-+	preempt_enable();
-+}
-+#else
-+static inline int rt_effective_prio(struct task_struct *p, int prio)
-+{
-+	return prio;
-+}
-+#endif
-+
-+void set_user_nice(struct task_struct *p, long nice)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
-+		return;
-+	/*
-+	 * We have to be careful, if called from sys_setpriority(),
-+	 * the task might be in the middle of scheduling on another CPU.
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	rq = __task_access_lock(p, &lock);
-+
-+	p->static_prio = NICE_TO_PRIO(nice);
-+	/*
-+	 * The RT priorities are set via sched_setscheduler(), but we still
-+	 * allow the 'normal' nice value to be set - but as expected
-+	 * it won't have any effect on scheduling until the task returns
-+	 * to SCHED_NORMAL/SCHED_BATCH:
-+	 */
-+	if (task_has_rt_policy(p))
-+		goto out_unlock;
-+
-+	p->prio = effective_prio(p);
-+
-+	check_task_changed(p, rq);
-+out_unlock:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
-+EXPORT_SYMBOL(set_user_nice);
-+
-+/*
-+ * is_nice_reduction - check if nice value is an actual reduction
-+ *
-+ * Similar to can_nice() but does not perform a capability check.
-+ *
-+ * @p: task
-+ * @nice: nice value
-+ */
-+static bool is_nice_reduction(const struct task_struct *p, const int nice)
-+{
-+	/* Convert nice value [19,-20] to rlimit style value [1,40]: */
-+	int nice_rlim = nice_to_rlimit(nice);
-+
-+	return (nice_rlim <= task_rlimit(p, RLIMIT_NICE));
-+}
-+
-+/*
-+ * can_nice - check if a task can reduce its nice value
-+ * @p: task
-+ * @nice: nice value
-+ */
-+int can_nice(const struct task_struct *p, const int nice)
-+{
-+	return is_nice_reduction(p, nice) || capable(CAP_SYS_NICE);
-+}
-+
-+#ifdef __ARCH_WANT_SYS_NICE
-+
-+/*
-+ * sys_nice - change the priority of the current process.
-+ * @increment: priority increment
-+ *
-+ * sys_setpriority is a more generic, but much slower function that
-+ * does similar things.
-+ */
-+SYSCALL_DEFINE1(nice, int, increment)
-+{
-+	long nice, retval;
-+
-+	/*
-+	 * Setpriority might change our priority at the same moment.
-+	 * We don't have to worry. Conceptually one call occurs first
-+	 * and we have a single winner.
-+	 */
-+
-+	increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
-+	nice = task_nice(current) + increment;
-+
-+	nice = clamp_val(nice, MIN_NICE, MAX_NICE);
-+	if (increment < 0 && !can_nice(current, nice))
-+		return -EPERM;
-+
-+	retval = security_task_setnice(current, nice);
-+	if (retval)
-+		return retval;
-+
-+	set_user_nice(current, nice);
-+	return 0;
-+}
-+
-+#endif
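The user-space counterpart of the syscall above, as a small sketch (not from this patch): lower the calling process's priority via nice(2) -- which glibc may actually route through setpriority() -- and print the result. Raising priority (a negative increment) is subject to the can_nice()/RLIMIT_NICE check shown earlier.

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int val;

	/* nice() may legitimately return -1, so errno is the error signal. */
	errno = 0;
	val = nice(5);			/* increase our nice value by 5 */
	if (val == -1 && errno != 0) {
		perror("nice");
		return 1;
	}
	printf("new nice value: %d\n", val);
	return 0;
}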
-+
-+/**
-+ * task_prio - return the priority value of a given task.
-+ * @p: the task in question.
-+ *
-+ * Return: The priority value as seen by users in /proc.
-+ *
-+ * sched policy               return value    kernel prio     user prio/nice
-+ *
-+ * (BMQ) normal, batch, idle  [0 ... 53]      [100 ... 139]   0/[-20 ... 19]/[-7 ... 7]
-+ * (PDS) normal, batch, idle  [0 ... 39]      100             0/[-20 ... 19]
-+ * fifo, rr                   [-1 ... -100]   [99 ... 0]      [0 ... 99]
-+ */
-+int task_prio(const struct task_struct *p)
-+{
-+	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
-+		task_sched_prio_normal(p, task_rq(p));
-+}
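The value computed by task_prio() is what proc(5) documents as the "priority" field of /proc/<pid>/stat, immediately before "nice". A small sketch (not part of the patch) that reads both for the current process, assuming the usual field layout:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[1024];
	long prio, nice_val;
	char *p;
	FILE *f = fopen("/proc/self/stat", "r");

	if (!f)
		return 1;
	if (!fgets(buf, sizeof(buf), f)) {
		fclose(f);
		return 1;
	}
	fclose(f);

	/* Skip "pid (comm)"; comm may contain spaces and parentheses. */
	p = strrchr(buf, ')');
	if (!p)
		return 1;

	/* Skip state, ppid, pgrp, session, tty_nr, tpgid, flags, minflt,
	 * cminflt, majflt, cmajflt, utime, stime, cutime, cstime (15 fields),
	 * then read priority and nice. */
	if (sscanf(p + 1, " %*s %*s %*s %*s %*s %*s %*s %*s %*s %*s %*s %*s %*s %*s %*s %ld %ld",
		   &prio, &nice_val) != 2)
		return 1;

	printf("priority=%ld nice=%ld\n", prio, nice_val);
	return 0;
}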
-+
-+/**
-+ * idle_cpu - is a given CPU idle currently?
-+ * @cpu: the processor in question.
-+ *
-+ * Return: 1 if the CPU is currently idle. 0 otherwise.
-+ */
-+int idle_cpu(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (rq->curr != rq->idle)
-+		return 0;
-+
-+	if (rq->nr_running)
-+		return 0;
-+
-+#ifdef CONFIG_SMP
-+	if (rq->ttwu_pending)
-+		return 0;
-+#endif
-+
-+	return 1;
-+}
-+
-+/**
-+ * idle_task - return the idle task for a given CPU.
-+ * @cpu: the processor in question.
-+ *
-+ * Return: The idle task for the cpu @cpu.
-+ */
-+struct task_struct *idle_task(int cpu)
-+{
-+	return cpu_rq(cpu)->idle;
-+}
-+
-+/**
-+ * find_process_by_pid - find a process with a matching PID value.
-+ * @pid: the pid in question.
-+ *
-+ * The task of @pid, if found. %NULL otherwise.
-+ */
-+static inline struct task_struct *find_process_by_pid(pid_t pid)
-+{
-+	return pid ? find_task_by_vpid(pid) : current;
-+}
-+
-+static struct task_struct *find_get_task(pid_t pid)
-+{
-+	struct task_struct *p;
-+	guard(rcu)();
-+
-+	p = find_process_by_pid(pid);
-+	if (likely(p))
-+		get_task_struct(p);
-+
-+	return p;
-+}
-+
-+DEFINE_CLASS(find_get_task, struct task_struct *, if (_T) put_task_struct(_T),
-+	     find_get_task(pid), pid_t pid)
-+
-+/*
-+ * sched_setparam() passes in -1 for its policy, to let the functions
-+ * it calls know not to change it.
-+ */
-+#define SETPARAM_POLICY -1
-+
-+static void __setscheduler_params(struct task_struct *p,
-+		const struct sched_attr *attr)
-+{
-+	int policy = attr->sched_policy;
-+
-+	if (policy == SETPARAM_POLICY)
-+		policy = p->policy;
-+
-+	p->policy = policy;
-+
-+	/*
-+	 * Allow the normal nice value to be set, but it will have no
-+	 * effect on scheduling until the task returns to SCHED_NORMAL/
-+	 * SCHED_BATCH.
-+	 */
-+	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
-+
-+	/*
-+	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
-+	 * !rt_policy. Always setting this ensures that things like
-+	 * getparam()/getattr() don't report silly values for !rt tasks.
-+	 */
-+	p->rt_priority = attr->sched_priority;
-+	p->normal_prio = normal_prio(p);
-+}
-+
-+/*
-+ * check the target process has a UID that matches the current process's
-+ */
-+static bool check_same_owner(struct task_struct *p)
-+{
-+	const struct cred *cred = current_cred(), *pcred;
-+	guard(rcu)();
-+
-+	pcred = __task_cred(p);
-+	return (uid_eq(cred->euid, pcred->euid) ||
-+	        uid_eq(cred->euid, pcred->uid));
-+}
-+
-+/*
-+ * Allow unprivileged RT tasks to decrease priority.
-+ * Only issue a capable test if needed and only once to avoid an audit
-+ * event on permitted non-privileged operations:
-+ */
-+static int user_check_sched_setscheduler(struct task_struct *p,
-+					 const struct sched_attr *attr,
-+					 int policy, int reset_on_fork)
-+{
-+	if (rt_policy(policy)) {
-+		unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
-+
-+		/* Can't set/change the rt policy: */
-+		if (policy != p->policy && !rlim_rtprio)
-+			goto req_priv;
-+
-+		/* Can't increase priority: */
-+		if (attr->sched_priority > p->rt_priority &&
-+		    attr->sched_priority > rlim_rtprio)
-+			goto req_priv;
-+	}
-+
-+	/* Can't change other user's priorities: */
-+	if (!check_same_owner(p))
-+		goto req_priv;
-+
-+	/* Normal users shall not reset the sched_reset_on_fork flag: */
-+	if (p->sched_reset_on_fork && !reset_on_fork)
-+		goto req_priv;
-+
-+	return 0;
-+
-+req_priv:
-+	if (!capable(CAP_SYS_NICE))
-+		return -EPERM;
-+
-+	return 0;
-+}
-+
-+static int __sched_setscheduler(struct task_struct *p,
-+				const struct sched_attr *attr,
-+				bool user, bool pi)
-+{
-+	const struct sched_attr dl_squash_attr = {
-+		.size		= sizeof(struct sched_attr),
-+		.sched_policy	= SCHED_FIFO,
-+		.sched_nice	= 0,
-+		.sched_priority = 99,
-+	};
-+	int oldpolicy = -1, policy = attr->sched_policy;
-+	int retval, newprio;
-+	struct balance_callback *head;
-+	unsigned long flags;
-+	struct rq *rq;
-+	int reset_on_fork;
-+	raw_spinlock_t *lock;
-+
-+	/* The pi code expects interrupts enabled */
-+	BUG_ON(pi && in_interrupt());
-+
-+	/*
-+	 * Alt schedule FW supports SCHED_DEADLINE by squashing it into prio 0 SCHED_FIFO
-+	 */
-+	if (unlikely(SCHED_DEADLINE == policy)) {
-+		attr = &dl_squash_attr;
-+		policy = attr->sched_policy;
-+	}
-+recheck:
-+	/* Double check policy once rq lock held */
-+	if (policy < 0) {
-+		reset_on_fork = p->sched_reset_on_fork;
-+		policy = oldpolicy = p->policy;
-+	} else {
-+		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
-+
-+		if (policy > SCHED_IDLE)
-+			return -EINVAL;
-+	}
-+
-+	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
-+		return -EINVAL;
-+
-+	/*
-+	 * Valid priorities for SCHED_FIFO and SCHED_RR are
-+	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
-+	 * SCHED_BATCH and SCHED_IDLE is 0.
-+	 */
-+	if (attr->sched_priority < 0 ||
-+	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
-+	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
-+		return -EINVAL;
-+	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
-+	    (attr->sched_priority != 0))
-+		return -EINVAL;
-+
-+	if (user) {
-+		retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
-+		if (retval)
-+			return retval;
-+
-+		retval = security_task_setscheduler(p);
-+		if (retval)
-+			return retval;
-+	}
-+
-+	/*
-+	 * Make sure no PI-waiters arrive (or leave) while we are
-+	 * changing the priority of the task:
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+
-+	/*
-+	 * To be able to change p->policy safely, task_access_lock()
-+	 * must be called.
-+	 * If task_access_lock() is used here:
-+	 * For a task p that is not running, reading rq->stop is
-+	 * racy but acceptable, as ->stop doesn't change much.
-+	 * An enhancement can be made to read rq->stop safely.
-+	 */
-+	rq = __task_access_lock(p, &lock);
-+
-+	/*
-+	 * Changing the policy of the stop threads is a very bad idea
-+	 */
-+	if (p == rq->stop) {
-+		retval = -EINVAL;
-+		goto unlock;
-+	}
-+
-+	/*
-+	 * If not changing anything there's no need to proceed further:
-+	 */
-+	if (unlikely(policy == p->policy)) {
-+		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
-+			goto change;
-+		if (!rt_policy(policy) &&
-+		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
-+			goto change;
-+
-+		p->sched_reset_on_fork = reset_on_fork;
-+		retval = 0;
-+		goto unlock;
-+	}
-+change:
-+
-+	/* Re-check policy now with rq lock held */
-+	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
-+		policy = oldpolicy = -1;
-+		__task_access_unlock(p, lock);
-+		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+		goto recheck;
-+	}
-+
-+	p->sched_reset_on_fork = reset_on_fork;
-+
-+	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
-+	if (pi) {
-+		/*
-+		 * Take priority boosted tasks into account. If the new
-+		 * effective priority is unchanged, we just store the new
-+		 * normal parameters and do not touch the scheduler class and
-+		 * the runqueue. This will be done when the task deboosts
-+		 * itself.
-+		 */
-+		newprio = rt_effective_prio(p, newprio);
-+	}
-+
-+	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
-+		__setscheduler_params(p, attr);
-+		__setscheduler_prio(p, newprio);
-+	}
-+
-+	check_task_changed(p, rq);
-+
-+	/* Avoid rq from going away on us: */
-+	preempt_disable();
-+	head = splice_balance_callbacks(rq);
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+	if (pi)
-+		rt_mutex_adjust_pi(p);
-+
-+	/* Run balance callbacks after we've adjusted the PI chain: */
-+	balance_callbacks(rq, head);
-+	preempt_enable();
-+
-+	return 0;
-+
-+unlock:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+	return retval;
-+}
-+
-+static int _sched_setscheduler(struct task_struct *p, int policy,
-+			       const struct sched_param *param, bool check)
-+{
-+	struct sched_attr attr = {
-+		.sched_policy   = policy,
-+		.sched_priority = param->sched_priority,
-+		.sched_nice     = PRIO_TO_NICE(p->static_prio),
-+	};
-+
-+	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
-+	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
-+		attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
-+		policy &= ~SCHED_RESET_ON_FORK;
-+		attr.sched_policy = policy;
-+	}
-+
-+	return __sched_setscheduler(p, &attr, check, true);
-+}
-+
-+/**
-+ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
-+ * @p: the task in question.
-+ * @policy: new policy.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Use sched_set_fifo(), read its comment.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ *
-+ * NOTE that the task may already be dead.
-+ */
-+int sched_setscheduler(struct task_struct *p, int policy,
-+		       const struct sched_param *param)
-+{
-+	return _sched_setscheduler(p, policy, param, true);
-+}
-+
-+int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
-+{
-+	return __sched_setscheduler(p, attr, true, true);
-+}
-+
-+int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
-+{
-+	return __sched_setscheduler(p, attr, false, true);
-+}
-+EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
-+
-+/**
-+ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
-+ * @p: the task in question.
-+ * @policy: new policy.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Just like sched_setscheduler, only don't bother checking if the
-+ * current context has permission.  For example, this is needed in
-+ * stop_machine(): we create temporary high priority worker threads,
-+ * but our caller might not have that capability.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+int sched_setscheduler_nocheck(struct task_struct *p, int policy,
-+			       const struct sched_param *param)
-+{
-+	return _sched_setscheduler(p, policy, param, false);
-+}
-+
-+/*
-+ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
-+ * incapable of resource management, which is the one thing an OS really should
-+ * be doing.
-+ *
-+ * This is of course the reason it is limited to privileged users only.
-+ *
-+ * Worse still; it is fundamentally impossible to compose static priority
-+ * workloads. You cannot take two correctly working static prio workloads
-+ * and smash them together and still expect them to work.
-+ *
-+ * For this reason 'all' FIFO tasks the kernel creates are basically at:
-+ *
-+ *   MAX_RT_PRIO / 2
-+ *
-+ * The administrator _MUST_ configure the system, the kernel simply doesn't
-+ * know enough information to make a sensible choice.
-+ */
-+void sched_set_fifo(struct task_struct *p)
-+{
-+	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
-+	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
-+}
-+EXPORT_SYMBOL_GPL(sched_set_fifo);
-+
-+/*
-+ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
-+ */
-+void sched_set_fifo_low(struct task_struct *p)
-+{
-+	struct sched_param sp = { .sched_priority = 1 };
-+	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
-+}
-+EXPORT_SYMBOL_GPL(sched_set_fifo_low);
-+
-+void sched_set_normal(struct task_struct *p, int nice)
-+{
-+	struct sched_attr attr = {
-+		.sched_policy = SCHED_NORMAL,
-+		.sched_nice = nice,
-+	};
-+	WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
-+}
-+EXPORT_SYMBOL_GPL(sched_set_normal);
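-+
-+/*
-+ * Illustrative sketch, not from the patch itself: the intended in-kernel use
-+ * of the helpers above.  A driver that needs an RT worker does not pick a
-+ * numeric priority; it creates the kthread and calls sched_set_fifo() on it.
-+ * The names below are hypothetical.
-+ *
-+ *	static int example_worker_fn(void *data)
-+ *	{
-+ *		while (!kthread_should_stop())
-+ *			schedule_timeout_interruptible(HZ);
-+ *		return 0;
-+ *	}
-+ *
-+ *	static int example_start_worker(void)
-+ *	{
-+ *		struct task_struct *tsk;
-+ *
-+ *		tsk = kthread_run(example_worker_fn, NULL, "example_worker");
-+ *		if (IS_ERR(tsk))
-+ *			return PTR_ERR(tsk);
-+ *
-+ *		sched_set_fifo(tsk);	// runs at MAX_RT_PRIO / 2
-+ *		return 0;
-+ *	}
-+ */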
-+
-+static int
-+do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
-+{
-+	struct sched_param lparam;
-+
-+	if (!param || pid < 0)
-+		return -EINVAL;
-+	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
-+		return -EFAULT;
-+
-+	CLASS(find_get_task, p)(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	return sched_setscheduler(p, policy, &lparam);
-+}
-+
-+/*
-+ * Mimics kernel/events/core.c perf_copy_attr().
-+ */
-+static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
-+{
-+	u32 size;
-+	int ret;
-+
-+	/* Zero the full structure, so that a short copy will be nice: */
-+	memset(attr, 0, sizeof(*attr));
-+
-+	ret = get_user(size, &uattr->size);
-+	if (ret)
-+		return ret;
-+
-+	/* ABI compatibility quirk: */
-+	if (!size)
-+		size = SCHED_ATTR_SIZE_VER0;
-+
-+	if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
-+		goto err_size;
-+
-+	ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
-+	if (ret) {
-+		if (ret == -E2BIG)
-+			goto err_size;
-+		return ret;
-+	}
-+
-+	/*
-+	 * XXX: Do we want to be lenient like existing syscalls; or do we want
-+	 * to be strict and return an error on out-of-bounds values?
-+	 */
-+	attr->sched_nice = clamp(attr->sched_nice, -20, 19);
-+
-+	/* sched/core.c uses zero here but we already know ret is zero */
-+	return 0;
-+
-+err_size:
-+	put_user(sizeof(*attr), &uattr->size);
-+	return -E2BIG;
-+}
-+
-+/**
-+ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
-+ * @pid: the pid in question.
-+ * @policy: new policy.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
-+{
-+	if (policy < 0)
-+		return -EINVAL;
-+
-+	return do_sched_setscheduler(pid, policy, param);
-+}
-+
-+/**
-+ * sys_sched_setparam - set/change the RT priority of a thread
-+ * @pid: the pid in question.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
-+{
-+	return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
-+}
-+
-+static void get_params(struct task_struct *p, struct sched_attr *attr)
-+{
-+	if (task_has_rt_policy(p))
-+		attr->sched_priority = p->rt_priority;
-+	else
-+		attr->sched_nice = task_nice(p);
-+}
-+
-+/**
-+ * sys_sched_setattr - same as above, but with extended sched_attr
-+ * @pid: the pid in question.
-+ * @uattr: structure containing the extended parameters.
-+ * @flags: for future extension.
-+ */
-+SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
-+			       unsigned int, flags)
-+{
-+	struct sched_attr attr;
-+	int retval;
-+
-+	if (!uattr || pid < 0 || flags)
-+		return -EINVAL;
-+
-+	retval = sched_copy_attr(uattr, &attr);
-+	if (retval)
-+		return retval;
-+
-+	if ((int)attr.sched_policy < 0)
-+		return -EINVAL;
-+
-+	CLASS(find_get_task, p)(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	if (attr.sched_flags & SCHED_FLAG_KEEP_PARAMS)
-+		get_params(p, &attr);
-+
-+	return sched_setattr(p, &attr);
-+}
-+
-+/**
-+ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
-+ * @pid: the pid in question.
-+ *
-+ * Return: On success, the policy of the thread. Otherwise, a negative error
-+ * code.
-+ */
-+SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
-+{
-+	struct task_struct *p;
-+	int retval = -EINVAL;
-+
-+	if (pid < 0)
-+		return -ESRCH;
-+
-+	guard(rcu)();
-+	p = find_process_by_pid(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	retval = security_task_getscheduler(p);
-+	if (!retval)
-+		retval = p->policy;
-+
-+	return retval;
-+}
-+
-+/**
-+ * sys_sched_getparam - get the RT priority of a thread
-+ * @pid: the pid in question.
-+ * @param: structure containing the RT priority.
-+ *
-+ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
-+ * code.
-+ */
-+SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
-+{
-+	struct sched_param lp = { .sched_priority = 0 };
-+	struct task_struct *p;
-+
-+	if (!param || pid < 0)
-+		return -EINVAL;
-+
-+	scoped_guard (rcu) {
-+		int retval;
-+
-+		p = find_process_by_pid(pid);
-+		if (!p)
-+			return -EINVAL;
-+
-+		retval = security_task_getscheduler(p);
-+		if (retval)
-+			return retval;
-+
-+		if (task_has_rt_policy(p))
-+			lp.sched_priority = p->rt_priority;
-+	}
-+
-+	/*
-+	 * This one might sleep; we cannot do it with a spinlock held ...
-+	 */
-+	return copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
-+}
-+
-+/*
-+ * Copy the kernel size attribute structure (which might be larger
-+ * than what user-space knows about) to user-space.
-+ *
-+ * Note that all cases are valid: user-space buffer can be larger or
-+ * smaller than the kernel-space buffer. The usual case is that both
-+ * have the same size.
-+ */
-+static int
-+sched_attr_copy_to_user(struct sched_attr __user *uattr,
-+			struct sched_attr *kattr,
-+			unsigned int usize)
-+{
-+	unsigned int ksize = sizeof(*kattr);
-+
-+	if (!access_ok(uattr, usize))
-+		return -EFAULT;
-+
-+	/*
-+	 * sched_getattr() ABI forwards and backwards compatibility:
-+	 *
-+	 * If usize == ksize then we just copy everything to user-space and all is good.
-+	 *
-+	 * If usize < ksize then we only copy as much as user-space has space for,
-+	 * this keeps ABI compatibility as well. We skip the rest.
-+	 *
-+	 * If usize > ksize then user-space is using a newer version of the ABI,
-+	 * parts of which the kernel doesn't know about. Just ignore it - tooling can
-+	 * detect the kernel's knowledge of attributes from the attr->size value
-+	 * which is set to ksize in this case.
-+	 */
-+	kattr->size = min(usize, ksize);
-+
-+	if (copy_to_user(uattr, kattr, kattr->size))
-+		return -EFAULT;
-+
-+	return 0;
-+}
-+
-+/**
-+ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
-+ * @pid: the pid in question.
-+ * @uattr: structure containing the extended parameters.
-+ * @usize: sizeof(attr) for fwd/bwd comp.
-+ * @flags: for future extension.
-+ */
-+SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
-+		unsigned int, usize, unsigned int, flags)
-+{
-+	struct sched_attr kattr = { };
-+	struct task_struct *p;
-+	int retval;
-+
-+	if (!uattr || pid < 0 || usize > PAGE_SIZE ||
-+	    usize < SCHED_ATTR_SIZE_VER0 || flags)
-+		return -EINVAL;
-+
-+	scoped_guard (rcu) {
-+		p = find_process_by_pid(pid);
-+		if (!p)
-+			return -ESRCH;
-+
-+		retval = security_task_getscheduler(p);
-+		if (retval)
-+			return retval;
-+
-+		kattr.sched_policy = p->policy;
-+		if (p->sched_reset_on_fork)
-+			kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
-+		get_params(p, &kattr);
-+		kattr.sched_flags &= SCHED_FLAG_ALL;
-+
-+#ifdef CONFIG_UCLAMP_TASK
-+		kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
-+		kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
-+#endif
-+	}
-+
-+	return sched_attr_copy_to_user(uattr, &kattr, usize);
-+}
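-+
-+/*
-+ * Illustrative userspace sketch, not from the patch itself: glibc has no
-+ * wrapper for sched_setattr()/sched_getattr(), so callers go through
-+ * syscall(2).  The local struct mirrors the UAPI sched_attr layout; ->size
-+ * is how the kernel negotiates forward/backward ABI compatibility (see
-+ * sched_attr_copy_to_user() above).
-+ *
-+ *	#define _GNU_SOURCE
-+ *	#include <stdint.h>
-+ *	#include <stdio.h>
-+ *	#include <sys/syscall.h>
-+ *	#include <unistd.h>
-+ *
-+ *	struct sched_attr_user {
-+ *		uint32_t size;
-+ *		uint32_t sched_policy;
-+ *		uint64_t sched_flags;
-+ *		int32_t  sched_nice;
-+ *		uint32_t sched_priority;
-+ *		uint64_t sched_runtime, sched_deadline, sched_period;
-+ *		uint32_t sched_util_min, sched_util_max;
-+ *	};
-+ *
-+ *	int main(void)
-+ *	{
-+ *		struct sched_attr_user attr = {
-+ *			.size         = sizeof(attr),
-+ *			.sched_policy = 0,	// SCHED_NORMAL
-+ *			.sched_nice   = 5,
-+ *		};
-+ *
-+ *		if (syscall(SYS_sched_setattr, 0, &attr, 0))
-+ *			perror("sched_setattr");
-+ *
-+ *		if (syscall(SYS_sched_getattr, 0, &attr, sizeof(attr), 0))
-+ *			perror("sched_getattr");
-+ *		else
-+ *			printf("policy=%u nice=%d\n",
-+ *			       attr.sched_policy, attr.sched_nice);
-+ *		return 0;
-+ *	}
-+ */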
-+
-+#ifdef CONFIG_SMP
-+int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
-+{
-+	return 0;
-+}
-+#endif
-+
-+static int
-+__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
-+{
-+	int retval;
-+	cpumask_var_t cpus_allowed, new_mask;
-+
-+	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
-+		return -ENOMEM;
-+
-+	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
-+		retval = -ENOMEM;
-+		goto out_free_cpus_allowed;
-+	}
-+
-+	cpuset_cpus_allowed(p, cpus_allowed);
-+	cpumask_and(new_mask, ctx->new_mask, cpus_allowed);
-+
-+	ctx->new_mask = new_mask;
-+	ctx->flags |= SCA_CHECK;
-+
-+	retval = __set_cpus_allowed_ptr(p, ctx);
-+	if (retval)
-+		goto out_free_new_mask;
-+
-+	cpuset_cpus_allowed(p, cpus_allowed);
-+	if (!cpumask_subset(new_mask, cpus_allowed)) {
-+		/*
-+		 * We must have raced with a concurrent cpuset
-+		 * update. Just reset the cpus_allowed to the
-+		 * cpuset's cpus_allowed
-+		 */
-+		cpumask_copy(new_mask, cpus_allowed);
-+
-+		/*
-+		 * If SCA_USER is set, a 2nd call to __set_cpus_allowed_ptr()
-+		 * will restore the previous user_cpus_ptr value.
-+		 *
-+		 * In the unlikely event a previous user_cpus_ptr exists,
-+		 * we need to further restrict the mask to what is allowed
-+		 * by that old user_cpus_ptr.
-+		 */
-+		if (unlikely((ctx->flags & SCA_USER) && ctx->user_mask)) {
-+			bool empty = !cpumask_and(new_mask, new_mask,
-+						  ctx->user_mask);
-+
-+			if (WARN_ON_ONCE(empty))
-+				cpumask_copy(new_mask, cpus_allowed);
-+		}
-+		__set_cpus_allowed_ptr(p, ctx);
-+		retval = -EINVAL;
-+	}
-+
-+out_free_new_mask:
-+	free_cpumask_var(new_mask);
-+out_free_cpus_allowed:
-+	free_cpumask_var(cpus_allowed);
-+	return retval;
-+}
-+
-+long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
-+{
-+	struct affinity_context ac;
-+	struct cpumask *user_mask;
-+	int retval;
-+
-+	CLASS(find_get_task, p)(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	if (p->flags & PF_NO_SETAFFINITY)
-+		return -EINVAL;
-+
-+	if (!check_same_owner(p)) {
-+		guard(rcu)();
-+		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE))
-+			return -EPERM;
-+	}
-+
-+	retval = security_task_setscheduler(p);
-+	if (retval)
-+		return retval;
-+
-+	/*
-+	 * With non-SMP configs, user_cpus_ptr/user_mask isn't used and
-+	 * alloc_user_cpus_ptr() returns NULL.
-+	 */
-+	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
-+	if (user_mask) {
-+		cpumask_copy(user_mask, in_mask);
-+	} else if (IS_ENABLED(CONFIG_SMP)) {
-+		return -ENOMEM;
-+	}
-+
-+	ac = (struct affinity_context){
-+		.new_mask  = in_mask,
-+		.user_mask = user_mask,
-+		.flags     = SCA_USER,
-+	};
-+
-+	retval = __sched_setaffinity(p, &ac);
-+	kfree(ac.user_mask);
-+
-+	return retval;
-+}
-+
-+static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
-+			     struct cpumask *new_mask)
-+{
-+	if (len < cpumask_size())
-+		cpumask_clear(new_mask);
-+	else if (len > cpumask_size())
-+		len = cpumask_size();
-+
-+	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
-+}
-+
-+/**
-+ * sys_sched_setaffinity - set the CPU affinity of a process
-+ * @pid: pid of the process
-+ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
-+ * @user_mask_ptr: user-space pointer to the new CPU mask
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
-+		unsigned long __user *, user_mask_ptr)
-+{
-+	cpumask_var_t new_mask;
-+	int retval;
-+
-+	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
-+		return -ENOMEM;
-+
-+	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
-+	if (retval == 0)
-+		retval = sched_setaffinity(pid, new_mask);
-+	free_cpumask_var(new_mask);
-+	return retval;
-+}
-+
-+long sched_getaffinity(pid_t pid, cpumask_t *mask)
-+{
-+	struct task_struct *p;
-+	int retval;
-+
-+	guard(rcu)();
-+	p = find_process_by_pid(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	retval = security_task_getscheduler(p);
-+	if (retval)
-+		return retval;
-+
-+	guard(raw_spinlock_irqsave)(&p->pi_lock);
-+	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
-+
-+	return retval;
-+}
-+
-+/**
-+ * sys_sched_getaffinity - get the CPU affinity of a process
-+ * @pid: pid of the process
-+ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
-+ * @user_mask_ptr: user-space pointer to hold the current CPU mask
-+ *
-+ * Return: size of CPU mask copied to user_mask_ptr on success. An
-+ * error code otherwise.
-+ */
-+SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
-+		unsigned long __user *, user_mask_ptr)
-+{
-+	int ret;
-+	cpumask_var_t mask;
-+
-+	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
-+		return -EINVAL;
-+	if (len & (sizeof(unsigned long)-1))
-+		return -EINVAL;
-+
-+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
-+		return -ENOMEM;
-+
-+	ret = sched_getaffinity(pid, mask);
-+	if (ret == 0) {
-+		unsigned int retlen = min(len, cpumask_size());
-+
-+		if (copy_to_user(user_mask_ptr, cpumask_bits(mask), retlen))
-+			ret = -EFAULT;
-+		else
-+			ret = retlen;
-+	}
-+	free_cpumask_var(mask);
-+
-+	return ret;
-+}
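-+
-+/*
-+ * Illustrative userspace sketch, not from the patch itself: glibc wraps these
-+ * two syscalls as sched_setaffinity()/sched_getaffinity(), and its
-+ * sched_getaffinity() returns 0 on success rather than the copied mask size
-+ * that sys_sched_getaffinity() returns.
-+ *
-+ *	#define _GNU_SOURCE
-+ *	#include <sched.h>
-+ *	#include <stdio.h>
-+ *
-+ *	int main(void)
-+ *	{
-+ *		cpu_set_t set;
-+ *
-+ *		CPU_ZERO(&set);
-+ *		CPU_SET(0, &set);	// pin the calling task to CPU 0
-+ *		if (sched_setaffinity(0, sizeof(set), &set))
-+ *			perror("sched_setaffinity");
-+ *
-+ *		if (sched_getaffinity(0, sizeof(set), &set) == 0)
-+ *			printf("allowed CPUs: %d\n", CPU_COUNT(&set));
-+ *		return 0;
-+ *	}
-+ */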
-+
-+static void do_sched_yield(void)
-+{
-+	struct rq *rq;
-+	struct rq_flags rf;
-+	struct task_struct *p;
-+
-+	if (!sched_yield_type)
-+		return;
-+
-+	rq = this_rq_lock_irq(&rf);
-+
-+	schedstat_inc(rq->yld_count);
-+
-+	p = current;
-+	if (rt_task(p)) {
-+		if (task_on_rq_queued(p))
-+			requeue_task(p, rq);
-+	} else if (rq->nr_running > 1) {
-+		do_sched_yield_type_1(p, rq);
-+		if (task_on_rq_queued(p))
-+			requeue_task(p, rq);
-+	}
-+
-+	preempt_disable();
-+	raw_spin_unlock_irq(&rq->lock);
-+	sched_preempt_enable_no_resched();
-+
-+	schedule();
-+}
-+
-+/**
-+ * sys_sched_yield - yield the current processor to other threads.
-+ *
-+ * This function yields the current CPU to other tasks. If there are no
-+ * other threads running on this CPU then this function will return.
-+ *
-+ * Return: 0.
-+ */
-+SYSCALL_DEFINE0(sched_yield)
-+{
-+	do_sched_yield();
-+	return 0;
-+}
-+
-+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
-+int __sched __cond_resched(void)
-+{
-+	if (should_resched(0)) {
-+		preempt_schedule_common();
-+		return 1;
-+	}
-+	/*
-+	 * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
-+	 * whether the current CPU is in an RCU read-side critical section,
-+	 * so the tick can report quiescent states even for CPUs looping
-+	 * in kernel context.  In contrast, in non-preemptible kernels,
-+	 * RCU readers leave no in-memory hints, which means that CPU-bound
-+	 * processes executing in kernel context might never report an
-+	 * RCU quiescent state.  Therefore, the following code causes
-+	 * cond_resched() to report a quiescent state, but only when RCU
-+	 * is in urgent need of one.
-+	 */
-+#ifndef CONFIG_PREEMPT_RCU
-+	rcu_all_qs();
-+#endif
-+	return 0;
-+}
-+EXPORT_SYMBOL(__cond_resched);
-+#endif
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#define cond_resched_dynamic_enabled	__cond_resched
-+#define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
-+DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
-+EXPORT_STATIC_CALL_TRAMP(cond_resched);
-+
-+#define might_resched_dynamic_enabled	__cond_resched
-+#define might_resched_dynamic_disabled	((void *)&__static_call_return0)
-+DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
-+EXPORT_STATIC_CALL_TRAMP(might_resched);
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
-+int __sched dynamic_cond_resched(void)
-+{
-+	klp_sched_try_switch();
-+	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
-+		return 0;
-+	return __cond_resched();
-+}
-+EXPORT_SYMBOL(dynamic_cond_resched);
-+
-+static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
-+int __sched dynamic_might_resched(void)
-+{
-+	if (!static_branch_unlikely(&sk_dynamic_might_resched))
-+		return 0;
-+	return __cond_resched();
-+}
-+EXPORT_SYMBOL(dynamic_might_resched);
-+#endif
-+#endif
-+
-+/*
-+ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
-+ * call schedule, and on return reacquire the lock.
-+ *
-+ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
-+ * operations here to prevent schedule() from being called twice (once via
-+ * spin_unlock(), once by hand).
-+ */
-+int __cond_resched_lock(spinlock_t *lock)
-+{
-+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+	int ret = 0;
-+
-+	lockdep_assert_held(lock);
-+
-+	if (spin_needbreak(lock) || resched) {
-+		spin_unlock(lock);
-+		if (!_cond_resched())
-+			cpu_relax();
-+		ret = 1;
-+		spin_lock(lock);
-+	}
-+	return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_lock);
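-+
-+/*
-+ * Illustrative sketch, not from the patch itself: the typical caller of
-+ * cond_resched_lock() is a long scan under a spinlock.  Names are
-+ * hypothetical; note that the lock may be dropped and re-taken, so any
-+ * per-iteration state must remain valid across that window.
-+ *
-+ *	spin_lock(&table->lock);
-+ *	for (i = 0; i < table->nr_entries; i++) {
-+ *		flush_entry(&table->entries[i]);
-+ *		// may unlock/relock; entries[] itself is never resized
-+ *		cond_resched_lock(&table->lock);
-+ *	}
-+ *	spin_unlock(&table->lock);
-+ */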
-+
-+int __cond_resched_rwlock_read(rwlock_t *lock)
-+{
-+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+	int ret = 0;
-+
-+	lockdep_assert_held_read(lock);
-+
-+	if (rwlock_needbreak(lock) || resched) {
-+		read_unlock(lock);
-+		if (!_cond_resched())
-+			cpu_relax();
-+		ret = 1;
-+		read_lock(lock);
-+	}
-+	return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_rwlock_read);
-+
-+int __cond_resched_rwlock_write(rwlock_t *lock)
-+{
-+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+	int ret = 0;
-+
-+	lockdep_assert_held_write(lock);
-+
-+	if (rwlock_needbreak(lock) || resched) {
-+		write_unlock(lock);
-+		if (!_cond_resched())
-+			cpu_relax();
-+		ret = 1;
-+		write_lock(lock);
-+	}
-+	return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_rwlock_write);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+
-+#ifdef CONFIG_GENERIC_ENTRY
-+#include <linux/entry-common.h>
-+#endif
-+
-+/*
-+ * SC:cond_resched
-+ * SC:might_resched
-+ * SC:preempt_schedule
-+ * SC:preempt_schedule_notrace
-+ * SC:irqentry_exit_cond_resched
-+ *
-+ *
-+ * NONE:
-+ *   cond_resched               <- __cond_resched
-+ *   might_resched              <- RET0
-+ *   preempt_schedule           <- NOP
-+ *   preempt_schedule_notrace   <- NOP
-+ *   irqentry_exit_cond_resched <- NOP
-+ *
-+ * VOLUNTARY:
-+ *   cond_resched               <- __cond_resched
-+ *   might_resched              <- __cond_resched
-+ *   preempt_schedule           <- NOP
-+ *   preempt_schedule_notrace   <- NOP
-+ *   irqentry_exit_cond_resched <- NOP
-+ *
-+ * FULL:
-+ *   cond_resched               <- RET0
-+ *   might_resched              <- RET0
-+ *   preempt_schedule           <- preempt_schedule
-+ *   preempt_schedule_notrace   <- preempt_schedule_notrace
-+ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
-+ */
-+
-+enum {
-+	preempt_dynamic_undefined = -1,
-+	preempt_dynamic_none,
-+	preempt_dynamic_voluntary,
-+	preempt_dynamic_full,
-+};
-+
-+int preempt_dynamic_mode = preempt_dynamic_undefined;
-+
-+int sched_dynamic_mode(const char *str)
-+{
-+	if (!strcmp(str, "none"))
-+		return preempt_dynamic_none;
-+
-+	if (!strcmp(str, "voluntary"))
-+		return preempt_dynamic_voluntary;
-+
-+	if (!strcmp(str, "full"))
-+		return preempt_dynamic_full;
-+
-+	return -EINVAL;
-+}
-+
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
-+#define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+#define preempt_dynamic_enable(f)	static_key_enable(&sk_dynamic_##f.key)
-+#define preempt_dynamic_disable(f)	static_key_disable(&sk_dynamic_##f.key)
-+#else
-+#error "Unsupported PREEMPT_DYNAMIC mechanism"
-+#endif
-+
-+static DEFINE_MUTEX(sched_dynamic_mutex);
-+static bool klp_override;
-+
-+static void __sched_dynamic_update(int mode)
-+{
-+	/*
-+	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
-+	 * the ZERO state, which is invalid.
-+	 */
-+	if (!klp_override)
-+		preempt_dynamic_enable(cond_resched);
-+	preempt_dynamic_enable(might_resched);
-+	preempt_dynamic_enable(preempt_schedule);
-+	preempt_dynamic_enable(preempt_schedule_notrace);
-+	preempt_dynamic_enable(irqentry_exit_cond_resched);
-+
-+	switch (mode) {
-+	case preempt_dynamic_none:
-+		if (!klp_override)
-+			preempt_dynamic_enable(cond_resched);
-+		preempt_dynamic_disable(might_resched);
-+		preempt_dynamic_disable(preempt_schedule);
-+		preempt_dynamic_disable(preempt_schedule_notrace);
-+		preempt_dynamic_disable(irqentry_exit_cond_resched);
-+		if (mode != preempt_dynamic_mode)
-+			pr_info("Dynamic Preempt: none\n");
-+		break;
-+
-+	case preempt_dynamic_voluntary:
-+		if (!klp_override)
-+			preempt_dynamic_enable(cond_resched);
-+		preempt_dynamic_enable(might_resched);
-+		preempt_dynamic_disable(preempt_schedule);
-+		preempt_dynamic_disable(preempt_schedule_notrace);
-+		preempt_dynamic_disable(irqentry_exit_cond_resched);
-+		if (mode != preempt_dynamic_mode)
-+			pr_info("Dynamic Preempt: voluntary\n");
-+		break;
-+
-+	case preempt_dynamic_full:
-+		if (!klp_override)
-+			preempt_dynamic_disable(cond_resched);
-+		preempt_dynamic_disable(might_resched);
-+		preempt_dynamic_enable(preempt_schedule);
-+		preempt_dynamic_enable(preempt_schedule_notrace);
-+		preempt_dynamic_enable(irqentry_exit_cond_resched);
-+		if (mode != preempt_dynamic_mode)
-+			pr_info("Dynamic Preempt: full\n");
-+		break;
-+	}
-+
-+	preempt_dynamic_mode = mode;
-+}
-+
-+void sched_dynamic_update(int mode)
-+{
-+	mutex_lock(&sched_dynamic_mutex);
-+	__sched_dynamic_update(mode);
-+	mutex_unlock(&sched_dynamic_mutex);
-+}
-+
-+#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
-+
-+static int klp_cond_resched(void)
-+{
-+	__klp_sched_try_switch();
-+	return __cond_resched();
-+}
-+
-+void sched_dynamic_klp_enable(void)
-+{
-+	mutex_lock(&sched_dynamic_mutex);
-+
-+	klp_override = true;
-+	static_call_update(cond_resched, klp_cond_resched);
-+
-+	mutex_unlock(&sched_dynamic_mutex);
-+}
-+
-+void sched_dynamic_klp_disable(void)
-+{
-+	mutex_lock(&sched_dynamic_mutex);
-+
-+	klp_override = false;
-+	__sched_dynamic_update(preempt_dynamic_mode);
-+
-+	mutex_unlock(&sched_dynamic_mutex);
-+}
-+
-+#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
-+
-+
-+static int __init setup_preempt_mode(char *str)
-+{
-+	int mode = sched_dynamic_mode(str);
-+	if (mode < 0) {
-+		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
-+		return 0;
-+	}
-+
-+	sched_dynamic_update(mode);
-+	return 1;
-+}
-+__setup("preempt=", setup_preempt_mode);
-+
-+static void __init preempt_dynamic_init(void)
-+{
-+	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
-+		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
-+			sched_dynamic_update(preempt_dynamic_none);
-+		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
-+			sched_dynamic_update(preempt_dynamic_voluntary);
-+		} else {
-+			/* Default static call setting, nothing to do */
-+			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
-+			preempt_dynamic_mode = preempt_dynamic_full;
-+			pr_info("Dynamic Preempt: full\n");
-+		}
-+	}
-+}
-+
-+#define PREEMPT_MODEL_ACCESSOR(mode) \
-+	bool preempt_model_##mode(void)						 \
-+	{									 \
-+		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
-+		return preempt_dynamic_mode == preempt_dynamic_##mode;		 \
-+	}									 \
-+	EXPORT_SYMBOL_GPL(preempt_model_##mode)
-+
-+PREEMPT_MODEL_ACCESSOR(none);
-+PREEMPT_MODEL_ACCESSOR(voluntary);
-+PREEMPT_MODEL_ACCESSOR(full);
-+
-+#else /* !CONFIG_PREEMPT_DYNAMIC */
-+
-+static inline void preempt_dynamic_init(void) { }
-+
-+#endif /* #ifdef CONFIG_PREEMPT_DYNAMIC */
-+
-+/**
-+ * yield - yield the current processor to other threads.
-+ *
-+ * Do not ever use this function, there's a 99% chance you're doing it wrong.
-+ *
-+ * The scheduler is at all times free to pick the calling task as the most
-+ * eligible task to run, if removing the yield() call from your code breaks
-+ * it, it's already broken.
-+ *
-+ * Typical broken usage is:
-+ *
-+ * while (!event)
-+ * 	yield();
-+ *
-+ * where one assumes that yield() will let 'the other' process run that will
-+ * make event true. If the current task is a SCHED_FIFO task that will never
-+ * happen. Never use yield() as a progress guarantee!!
-+ *
-+ * If you want to use yield() to wait for something, use wait_event().
-+ * If you want to use yield() to be 'nice' for others, use cond_resched().
-+ * If you still want to use yield(), do not!
-+ */
-+void __sched yield(void)
-+{
-+	set_current_state(TASK_RUNNING);
-+	do_sched_yield();
-+}
-+EXPORT_SYMBOL(yield);
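-+
-+/*
-+ * Illustrative sketch, not from the patch itself: the wait_event() pattern
-+ * recommended in the comment above.  Names are hypothetical; the waiter
-+ * sleeps until the condition becomes true instead of spinning in yield().
-+ *
-+ *	static DECLARE_WAIT_QUEUE_HEAD(example_wq);
-+ *	static bool example_ready;
-+ *
-+ *	// waiter, instead of "while (!example_ready) yield();"
-+ *	wait_event(example_wq, example_ready);
-+ *
-+ *	// producer
-+ *	example_ready = true;
-+ *	wake_up(&example_wq);
-+ */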
-+
-+/**
-+ * yield_to - yield the current processor to another thread in
-+ * your thread group, or accelerate that thread toward the
-+ * processor it's on.
-+ * @p: target task
-+ * @preempt: whether task preemption is allowed or not
-+ *
-+ * It's the caller's job to ensure that the target task struct
-+ * can't go away on us before we can do any checks.
-+ *
-+ * In Alt schedule FW, yield_to is not supported.
-+ *
-+ * Return:
-+ *	true (>0) if we indeed boosted the target task.
-+ *	false (0) if we failed to boost the target.
-+ *	-ESRCH if there's no task to yield to.
-+ */
-+int __sched yield_to(struct task_struct *p, bool preempt)
-+{
-+	return 0;
-+}
-+EXPORT_SYMBOL_GPL(yield_to);
-+
-+int io_schedule_prepare(void)
-+{
-+	int old_iowait = current->in_iowait;
-+
-+	current->in_iowait = 1;
-+	blk_flush_plug(current->plug, true);
-+	return old_iowait;
-+}
-+
-+void io_schedule_finish(int token)
-+{
-+	current->in_iowait = token;
-+}
-+
-+/*
-+ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
-+ * that process accounting knows that this is a task in IO wait state.
-+ *
-+ * But don't do that if it is a deliberate, throttling IO wait (this task
-+ * has set its backing_dev_info: the queue against which it should throttle)
-+ */
-+
-+long __sched io_schedule_timeout(long timeout)
-+{
-+	int token;
-+	long ret;
-+
-+	token = io_schedule_prepare();
-+	ret = schedule_timeout(timeout);
-+	io_schedule_finish(token);
-+
-+	return ret;
-+}
-+EXPORT_SYMBOL(io_schedule_timeout);
-+
-+void __sched io_schedule(void)
-+{
-+	int token;
-+
-+	token = io_schedule_prepare();
-+	schedule();
-+	io_schedule_finish(token);
-+}
-+EXPORT_SYMBOL(io_schedule);
-+
-+/**
-+ * sys_sched_get_priority_max - return maximum RT priority.
-+ * @policy: scheduling class.
-+ *
-+ * Return: On success, this syscall returns the maximum
-+ * rt_priority that can be used by a given scheduling class.
-+ * On failure, a negative error code is returned.
-+ */
-+SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
-+{
-+	int ret = -EINVAL;
-+
-+	switch (policy) {
-+	case SCHED_FIFO:
-+	case SCHED_RR:
-+		ret = MAX_RT_PRIO - 1;
-+		break;
-+	case SCHED_NORMAL:
-+	case SCHED_BATCH:
-+	case SCHED_IDLE:
-+		ret = 0;
-+		break;
-+	}
-+	return ret;
-+}
-+
-+/**
-+ * sys_sched_get_priority_min - return minimum RT priority.
-+ * @policy: scheduling class.
-+ *
-+ * Return: On success, this syscall returns the minimum
-+ * rt_priority that can be used by a given scheduling class.
-+ * On failure, a negative error code is returned.
-+ */
-+SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
-+{
-+	int ret = -EINVAL;
-+
-+	switch (policy) {
-+	case SCHED_FIFO:
-+	case SCHED_RR:
-+		ret = 1;
-+		break;
-+	case SCHED_NORMAL:
-+	case SCHED_BATCH:
-+	case SCHED_IDLE:
-+		ret = 0;
-+		break;
-+	}
-+	return ret;
-+}
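-+
-+/*
-+ * Illustrative userspace sketch, not from the patch itself: query the valid
-+ * static priority range for a policy instead of hard-coding 1..99.
-+ *
-+ *	#include <sched.h>
-+ *	#include <stdio.h>
-+ *
-+ *	int main(void)
-+ *	{
-+ *		printf("SCHED_FIFO priorities: %d..%d\n",
-+ *		       sched_get_priority_min(SCHED_FIFO),
-+ *		       sched_get_priority_max(SCHED_FIFO));
-+ *		return 0;
-+ *	}
-+ */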
-+
-+static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
-+{
-+	struct task_struct *p;
-+	int retval;
-+
-+	alt_sched_debug();
-+
-+	if (pid < 0)
-+		return -EINVAL;
-+
-+	guard(rcu)();
-+	p = find_process_by_pid(pid);
-+	if (!p)
-+		return -EINVAL;
-+
-+	retval = security_task_getscheduler(p);
-+	if (retval)
-+		return retval;
-+
-+	*t = ns_to_timespec64(sysctl_sched_base_slice);
-+	return 0;
-+}
-+
-+/**
-+ * sys_sched_rr_get_interval - return the default timeslice of a process.
-+ * @pid: pid of the process.
-+ * @interval: userspace pointer to the timeslice value.
-+ *
-+ *
-+ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
-+ * an error code.
-+ */
-+SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
-+		struct __kernel_timespec __user *, interval)
-+{
-+	struct timespec64 t;
-+	int retval = sched_rr_get_interval(pid, &t);
-+
-+	if (retval == 0)
-+		retval = put_timespec64(&t, interval);
-+
-+	return retval;
-+}
-+
-+#ifdef CONFIG_COMPAT_32BIT_TIME
-+SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
-+		struct old_timespec32 __user *, interval)
-+{
-+	struct timespec64 t;
-+	int retval = sched_rr_get_interval(pid, &t);
-+
-+	if (retval == 0)
-+		retval = put_old_timespec32(&t, interval);
-+	return retval;
-+}
-+#endif
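-+
-+/*
-+ * Illustrative userspace sketch, not from the patch itself: with this
-+ * scheduler, sched_rr_get_interval() reports the base time slice
-+ * (sysctl_sched_base_slice) for any valid pid, regardless of policy.
-+ *
-+ *	#include <sched.h>
-+ *	#include <stdio.h>
-+ *	#include <time.h>
-+ *
-+ *	int main(void)
-+ *	{
-+ *		struct timespec ts;
-+ *
-+ *		if (sched_rr_get_interval(0, &ts) == 0)
-+ *			printf("time slice: %ld.%09ld s\n",
-+ *			       (long)ts.tv_sec, ts.tv_nsec);
-+ *		return 0;
-+ *	}
-+ */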
-+
-+void sched_show_task(struct task_struct *p)
-+{
-+	unsigned long free = 0;
-+	int ppid;
-+
-+	if (!try_get_task_stack(p))
-+		return;
-+
-+	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
-+
-+	if (task_is_running(p))
-+		pr_cont("  running task    ");
-+#ifdef CONFIG_DEBUG_STACK_USAGE
-+	free = stack_not_used(p);
-+#endif
-+	ppid = 0;
-+	rcu_read_lock();
-+	if (pid_alive(p))
-+		ppid = task_pid_nr(rcu_dereference(p->real_parent));
-+	rcu_read_unlock();
-+	pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n",
-+		free, task_pid_nr(p), task_tgid_nr(p),
-+		ppid, read_task_thread_flags(p));
-+
-+	print_worker_info(KERN_INFO, p);
-+	print_stop_info(KERN_INFO, p);
-+	show_stack(p, NULL, KERN_INFO);
-+	put_task_stack(p);
-+}
-+EXPORT_SYMBOL_GPL(sched_show_task);
-+
-+static inline bool
-+state_filter_match(unsigned long state_filter, struct task_struct *p)
-+{
-+	unsigned int state = READ_ONCE(p->__state);
-+
-+	/* no filter, everything matches */
-+	if (!state_filter)
-+		return true;
-+
-+	/* filter, but doesn't match */
-+	if (!(state & state_filter))
-+		return false;
-+
-+	/*
-+	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
-+	 * TASK_KILLABLE).
-+	 */
-+	if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
-+		return false;
-+
-+	return true;
-+}
-+
-+
-+void show_state_filter(unsigned int state_filter)
-+{
-+	struct task_struct *g, *p;
-+
-+	rcu_read_lock();
-+	for_each_process_thread(g, p) {
-+		/*
-+		 * reset the NMI-timeout, listing all files on a slow
-+		 * console might take a lot of time:
-+		 * Also, reset softlockup watchdogs on all CPUs, because
-+		 * another CPU might be blocked waiting for us to process
-+		 * an IPI.
-+		 */
-+		touch_nmi_watchdog();
-+		touch_all_softlockup_watchdogs();
-+		if (state_filter_match(state_filter, p))
-+			sched_show_task(p);
-+	}
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+	/* TODO: Alt schedule FW should support this
-+	if (!state_filter)
-+		sysrq_sched_debug_show();
-+	*/
-+#endif
-+	rcu_read_unlock();
-+	/*
-+	 * Only show locks if all tasks are dumped:
-+	 */
-+	if (!state_filter)
-+		debug_show_all_locks();
-+}
-+
-+void dump_cpu_task(int cpu)
-+{
-+	if (cpu == smp_processor_id() && in_hardirq()) {
-+		struct pt_regs *regs;
-+
-+		regs = get_irq_regs();
-+		if (regs) {
-+			show_regs(regs);
-+			return;
-+		}
-+	}
-+
-+	if (trigger_single_cpu_backtrace(cpu))
-+		return;
-+
-+	pr_info("Task dump for CPU %d:\n", cpu);
-+	sched_show_task(cpu_curr(cpu));
-+}
-+
-+/**
-+ * init_idle - set up an idle thread for a given CPU
-+ * @idle: task in question
-+ * @cpu: CPU the idle task belongs to
-+ *
-+ * NOTE: this function does not set the idle thread's NEED_RESCHED
-+ * flag, to make booting more robust.
-+ */
-+void __init init_idle(struct task_struct *idle, int cpu)
-+{
-+#ifdef CONFIG_SMP
-+	struct affinity_context ac = (struct affinity_context) {
-+		.new_mask  = cpumask_of(cpu),
-+		.flags     = 0,
-+	};
-+#endif
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	__sched_fork(0, idle);
-+
-+	raw_spin_lock_irqsave(&idle->pi_lock, flags);
-+	raw_spin_lock(&rq->lock);
-+
-+	idle->last_ran = rq->clock_task;
-+	idle->__state = TASK_RUNNING;
-+	/*
-+	 * PF_KTHREAD should already be set at this point; regardless, make it
-+	 * look like a proper per-CPU kthread.
-+	 */
-+	idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
-+	kthread_set_per_cpu(idle, cpu);
-+
-+	sched_queue_init_idle(&rq->queue, idle);
-+
-+#ifdef CONFIG_SMP
-+	/*
-+	 * It's possible that init_idle() gets called multiple times on a task,
-+	 * in that case do_set_cpus_allowed() will not do the right thing.
-+	 *
-+	 * And since this is boot we can forgo the serialisation.
-+	 */
-+	set_cpus_allowed_common(idle, &ac);
-+#endif
-+
-+	/* Silence PROVE_RCU */
-+	rcu_read_lock();
-+	__set_task_cpu(idle, cpu);
-+	rcu_read_unlock();
-+
-+	rq->idle = idle;
-+	rcu_assign_pointer(rq->curr, idle);
-+	idle->on_cpu = 1;
-+
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
-+
-+	/* Set the preempt count _outside_ the spinlocks! */
-+	init_idle_preempt_count(idle, cpu);
-+
-+	ftrace_graph_init_idle_task(idle, cpu);
-+	vtime_init_idle(idle, cpu);
-+#ifdef CONFIG_SMP
-+	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
-+#endif
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
-+			      const struct cpumask __maybe_unused *trial)
-+{
-+	return 1;
-+}
-+
-+int task_can_attach(struct task_struct *p)
-+{
-+	int ret = 0;
-+
-+	/*
-+	 * Kthreads which disallow setaffinity shouldn't be moved
-+	 * to a new cpuset; we don't want to change their CPU
-+	 * affinity and isolating such threads by their set of
-+	 * allowed nodes is unnecessary.  Thus, cpusets are not
-+	 * applicable for such threads.  This prevents checking for
-+	 * success of set_cpus_allowed_ptr() on all attached tasks
-+	 * before cpus_mask may be changed.
-+	 */
-+	if (p->flags & PF_NO_SETAFFINITY)
-+		ret = -EINVAL;
-+
-+	return ret;
-+}
-+
-+bool sched_smp_initialized __read_mostly;
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+/*
-+ * Ensures that the idle task is using init_mm right before its CPU goes
-+ * offline.
-+ */
-+void idle_task_exit(void)
-+{
-+	struct mm_struct *mm = current->active_mm;
-+
-+	BUG_ON(current != this_rq()->idle);
-+
-+	if (mm != &init_mm) {
-+		switch_mm(mm, &init_mm, current);
-+		finish_arch_post_lock_switch();
-+	}
-+
-+	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
-+}
-+
-+static int __balance_push_cpu_stop(void *arg)
-+{
-+	struct task_struct *p = arg;
-+	struct rq *rq = this_rq();
-+	struct rq_flags rf;
-+	int cpu;
-+
-+	raw_spin_lock_irq(&p->pi_lock);
-+	rq_lock(rq, &rf);
-+
-+	update_rq_clock(rq);
-+
-+	if (task_rq(p) == rq && task_on_rq_queued(p)) {
-+		cpu = select_fallback_rq(rq->cpu, p);
-+		rq = __migrate_task(rq, p, cpu);
-+	}
-+
-+	rq_unlock(rq, &rf);
-+	raw_spin_unlock_irq(&p->pi_lock);
-+
-+	put_task_struct(p);
-+
-+	return 0;
-+}
-+
-+static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
-+
-+/*
-+ * This is enabled below SCHED_AP_ACTIVE; when !cpu_active(), but only
-+ * effective when the hotplug motion is down.
-+ */
-+static void balance_push(struct rq *rq)
-+{
-+	struct task_struct *push_task = rq->curr;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	/*
-+	 * Ensure the thing is persistent until balance_push_set(.on = false);
-+	 */
-+	rq->balance_callback = &balance_push_callback;
-+
-+	/*
-+	 * Only active while going offline and when invoked on the outgoing
-+	 * CPU.
-+	 */
-+	if (!cpu_dying(rq->cpu) || rq != this_rq())
-+		return;
-+
-+	/*
-+	 * Both the cpu-hotplug and stop task are in this case and are
-+	 * required to complete the hotplug process.
-+	 */
-+	if (kthread_is_per_cpu(push_task) ||
-+	    is_migration_disabled(push_task)) {
-+
-+		/*
-+		 * If this is the idle task on the outgoing CPU try to wake
-+		 * up the hotplug control thread which might wait for the
-+		 * last task to vanish. The rcuwait_active() check is
-+		 * accurate here because the waiter is pinned on this CPU
-+		 * and can't obviously be running in parallel.
-+		 *
-+		 * On RT kernels this also has to check whether there are
-+		 * pinned and scheduled out tasks on the runqueue. They
-+		 * need to leave the migrate disabled section first.
-+		 */
-+		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
-+		    rcuwait_active(&rq->hotplug_wait)) {
-+			raw_spin_unlock(&rq->lock);
-+			rcuwait_wake_up(&rq->hotplug_wait);
-+			raw_spin_lock(&rq->lock);
-+		}
-+		return;
-+	}
-+
-+	get_task_struct(push_task);
-+	/*
-+	 * Temporarily drop rq->lock such that we can wake-up the stop task.
-+	 * Both preemption and IRQs are still disabled.
-+	 */
-+	preempt_disable();
-+	raw_spin_unlock(&rq->lock);
-+	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
-+			    this_cpu_ptr(&push_work));
-+	preempt_enable();
-+	/*
-+	 * At this point need_resched() is true and we'll take the loop in
-+	 * schedule(). The next pick is obviously going to be the stop task
-+	 * which kthread_is_per_cpu() and will push this task away.
-+	 */
-+	raw_spin_lock(&rq->lock);
-+}
-+
-+static void balance_push_set(int cpu, bool on)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	struct rq_flags rf;
-+
-+	rq_lock_irqsave(rq, &rf);
-+	if (on) {
-+		WARN_ON_ONCE(rq->balance_callback);
-+		rq->balance_callback = &balance_push_callback;
-+	} else if (rq->balance_callback == &balance_push_callback) {
-+		rq->balance_callback = NULL;
-+	}
-+	rq_unlock_irqrestore(rq, &rf);
-+}
-+
-+/*
-+ * Invoked from a CPU's hotplug control thread after the CPU has been marked
-+ * inactive. All tasks which are not per CPU kernel threads are either
-+ * pushed off this CPU now via balance_push() or placed on a different CPU
-+ * during wakeup. Wait until the CPU is quiescent.
-+ */
-+static void balance_hotplug_wait(void)
-+{
-+	struct rq *rq = this_rq();
-+
-+	rcuwait_wait_event(&rq->hotplug_wait,
-+			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
-+			   TASK_UNINTERRUPTIBLE);
-+}
-+
-+#else
-+
-+static void balance_push(struct rq *rq)
-+{
-+}
-+
-+static void balance_push_set(int cpu, bool on)
-+{
-+}
-+
-+static inline void balance_hotplug_wait(void)
-+{
-+}
-+#endif /* CONFIG_HOTPLUG_CPU */
-+
-+static void set_rq_offline(struct rq *rq)
-+{
-+	if (rq->online) {
-+		update_rq_clock(rq);
-+		rq->online = false;
-+	}
-+}
-+
-+static void set_rq_online(struct rq *rq)
-+{
-+	if (!rq->online)
-+		rq->online = true;
-+}
-+
-+/*
-+ * used to mark begin/end of suspend/resume:
-+ */
-+static int num_cpus_frozen;
-+
-+/*
-+ * Update cpusets according to cpu_active mask.  If cpusets are
-+ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
-+ * around partition_sched_domains().
-+ *
-+ * If we come here as part of a suspend/resume, don't touch cpusets because we
-+ * want to restore it back to its original state upon resume anyway.
-+ */
-+static void cpuset_cpu_active(void)
-+{
-+	if (cpuhp_tasks_frozen) {
-+		/*
-+		 * num_cpus_frozen tracks how many CPUs are involved in suspend
-+		 * resume sequence. As long as this is not the last online
-+		 * operation in the resume sequence, just build a single sched
-+		 * domain, ignoring cpusets.
-+		 */
-+		partition_sched_domains(1, NULL, NULL);
-+		if (--num_cpus_frozen)
-+			return;
-+		/*
-+		 * This is the last CPU online operation. So fall through and
-+		 * restore the original sched domains by considering the
-+		 * cpuset configurations.
-+		 */
-+		cpuset_force_rebuild();
-+	}
-+
-+	cpuset_update_active_cpus();
-+}
-+
-+static int cpuset_cpu_inactive(unsigned int cpu)
-+{
-+	if (!cpuhp_tasks_frozen) {
-+		cpuset_update_active_cpus();
-+	} else {
-+		num_cpus_frozen++;
-+		partition_sched_domains(1, NULL, NULL);
-+	}
-+	return 0;
-+}
-+
-+int sched_cpu_activate(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	/*
-+	 * Clear the balance_push callback and prepare to schedule
-+	 * regular tasks.
-+	 */
-+	balance_push_set(cpu, false);
-+
-+#ifdef CONFIG_SCHED_SMT
-+	/*
-+	 * When going up, increment the number of cores with SMT present.
-+	 */
-+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
-+		static_branch_inc_cpuslocked(&sched_smt_present);
-+#endif
-+	set_cpu_active(cpu, true);
-+
-+	if (sched_smp_initialized)
-+		cpuset_cpu_active();
-+
-+	/*
-+	 * Put the rq online, if not already. This happens:
-+	 *
-+	 * 1) In the early boot process, because we build the real domains
-+	 *    after all cpus have been brought up.
-+	 *
-+	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
-+	 *    domains.
-+	 */
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	set_rq_online(rq);
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+	return 0;
-+}
-+
-+int sched_cpu_deactivate(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+	int ret;
-+
-+	set_cpu_active(cpu, false);
-+
-+	/*
-+	 * From this point forward, this CPU will refuse to run any task that
-+	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
-+	 * push those tasks away until this gets cleared, see
-+	 * sched_cpu_dying().
-+	 */
-+	balance_push_set(cpu, true);
-+
-+	/*
-+	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
-+	 * users of this state to go away such that all new such users will
-+	 * observe it.
-+	 *
-+	 * Specifically, we rely on ttwu to no longer target this CPU, see
-+	 * ttwu_queue_cond() and is_cpu_allowed().
-+	 *
-+	 * Do sync before parking smpboot threads to take care of the RCU boost case.
-+	 */
-+	synchronize_rcu();
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	set_rq_offline(rq);
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+#ifdef CONFIG_SCHED_SMT
-+	/*
-+	 * When going down, decrement the number of cores with SMT present.
-+	 */
-+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
-+		static_branch_dec_cpuslocked(&sched_smt_present);
-+		if (!static_branch_likely(&sched_smt_present))
-+			cpumask_clear(sched_sg_idle_mask);
-+	}
-+#endif
-+
-+	if (!sched_smp_initialized)
-+		return 0;
-+
-+	ret = cpuset_cpu_inactive(cpu);
-+	if (ret) {
-+		balance_push_set(cpu, false);
-+		set_cpu_active(cpu, true);
-+		return ret;
-+	}
-+
-+	return 0;
-+}
-+
-+static void sched_rq_cpu_starting(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	rq->calc_load_update = calc_load_update;
-+}
-+
-+int sched_cpu_starting(unsigned int cpu)
-+{
-+	sched_rq_cpu_starting(cpu);
-+	sched_tick_start(cpu);
-+	return 0;
-+}
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+
-+/*
-+ * Invoked immediately before the stopper thread is invoked to bring the
-+ * CPU down completely. At this point all per CPU kthreads except the
-+ * hotplug thread (current) and the stopper thread (inactive) have been
-+ * either parked or have been unbound from the outgoing CPU. Ensure that
-+ * any of those which might be on the way out are gone.
-+ *
-+ * If after this point a bound task is being woken on this CPU then the
-+ * responsible hotplug callback has failed to do its job.
-+ * sched_cpu_dying() will catch it with the appropriate fireworks.
-+ */
-+int sched_cpu_wait_empty(unsigned int cpu)
-+{
-+	balance_hotplug_wait();
-+	return 0;
-+}
-+
-+/*
-+ * Since this CPU is going 'away' for a while, fold any nr_active delta we
-+ * might have. Called from the CPU stopper task after ensuring that the
-+ * stopper is the last running task on the CPU, so nr_active count is
-+ * stable. We need to take the teardown thread which is calling this into
-+ * account, so we hand in adjust = 1 to the load calculation.
-+ *
-+ * Also see the comment "Global load-average calculations".
-+ */
-+static void calc_load_migrate(struct rq *rq)
-+{
-+	long delta = calc_load_fold_active(rq, 1);
-+
-+	if (delta)
-+		atomic_long_add(delta, &calc_load_tasks);
-+}
-+
-+static void dump_rq_tasks(struct rq *rq, const char *loglvl)
-+{
-+	struct task_struct *g, *p;
-+	int cpu = cpu_of(rq);
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
-+	for_each_process_thread(g, p) {
-+		if (task_cpu(p) != cpu)
-+			continue;
-+
-+		if (!task_on_rq_queued(p))
-+			continue;
-+
-+		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
-+	}
-+}
-+
-+int sched_cpu_dying(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	/* Handle pending wakeups and then migrate everything off */
-+	sched_tick_stop(cpu);
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
-+		WARN(true, "Dying CPU not properly vacated!");
-+		dump_rq_tasks(rq, KERN_WARNING);
-+	}
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+	calc_load_migrate(rq);
-+	hrtick_clear(rq);
-+	return 0;
-+}
-+#endif
-+
-+#ifdef CONFIG_SMP
-+static void sched_init_topology_cpumask_early(void)
-+{
-+	int cpu;
-+	cpumask_t *tmp;
-+
-+	for_each_possible_cpu(cpu) {
-+		/* init topo masks */
-+		tmp = per_cpu(sched_cpu_topo_masks, cpu);
-+
-+		cpumask_copy(tmp, cpu_possible_mask);
-+		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
-+		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
-+	}
-+}
-+
-+#define TOPOLOGY_CPUMASK(name, mask, last)\
-+	if (cpumask_and(topo, topo, mask)) {					\
-+		cpumask_copy(topo, mask);					\
-+		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
-+		       cpu, (topo++)->bits[0]);					\
-+	}									\
-+	if (!last)								\
-+		bitmap_complement(cpumask_bits(topo), cpumask_bits(mask),	\
-+				  nr_cpumask_bits);
-+
-+static void sched_init_topology_cpumask(void)
-+{
-+	int cpu;
-+	cpumask_t *topo;
-+
-+	for_each_online_cpu(cpu) {
-+		/* take chance to reset time slice for idle tasks */
-+		/* take the chance to reset the time slice for idle tasks */
-+
-+		topo = per_cpu(sched_cpu_topo_masks, cpu);
-+
-+		bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
-+				  nr_cpumask_bits);
-+#ifdef CONFIG_SCHED_SMT
-+		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
-+#endif
-+		TOPOLOGY_CPUMASK(cluster, topology_cluster_cpumask(cpu), false);
-+
-+		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
-+		per_cpu(sched_cpu_llc_mask, cpu) = topo;
-+		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
-+
-+		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
-+
-+		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
-+
-+		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
-+		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
-+		       cpu, per_cpu(sd_llc_id, cpu),
-+		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
-+			      per_cpu(sched_cpu_topo_masks, cpu)));
-+	}
-+}
-+#endif
-+
-+void __init sched_init_smp(void)
-+{
-+	/* Move init over to a non-isolated CPU */
-+	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
-+		BUG();
-+	current->flags &= ~PF_NO_SETAFFINITY;
-+
-+	sched_init_topology_cpumask();
-+
-+	sched_smp_initialized = true;
-+}
-+
-+static int __init migration_init(void)
-+{
-+	sched_cpu_starting(smp_processor_id());
-+	return 0;
-+}
-+early_initcall(migration_init);
-+
-+#else
-+void __init sched_init_smp(void)
-+{
-+	cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
-+}
-+#endif /* CONFIG_SMP */
-+
-+int in_sched_functions(unsigned long addr)
-+{
-+	return in_lock_functions(addr) ||
-+		(addr >= (unsigned long)__sched_text_start
-+		&& addr < (unsigned long)__sched_text_end);
-+}
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+/*
-+ * Default task group.
-+ * Every task in the system belongs to this group at bootup.
-+ */
-+struct task_group root_task_group;
-+LIST_HEAD(task_groups);
-+
-+/* Cacheline aligned slab cache for task_group */
-+static struct kmem_cache *task_group_cache __ro_after_init;
-+#endif /* CONFIG_CGROUP_SCHED */
-+
-+void __init sched_init(void)
-+{
-+	int i;
-+	struct rq *rq;
-+#ifdef CONFIG_SCHED_SMT
-+	struct balance_arg balance_arg = {.cpumask = sched_sg_idle_mask, .active = 0};
-+#endif
-+
-+	printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
-+			 " by Alfred Chen.\n");
-+
-+	wait_bit_init();
-+
-+#ifdef CONFIG_SMP
-+	for (i = 0; i < SCHED_QUEUE_BITS; i++)
-+		cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
-+#endif
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+	task_group_cache = KMEM_CACHE(task_group, 0);
-+
-+	list_add(&root_task_group.list, &task_groups);
-+	INIT_LIST_HEAD(&root_task_group.children);
-+	INIT_LIST_HEAD(&root_task_group.siblings);
-+#endif /* CONFIG_CGROUP_SCHED */
-+	for_each_possible_cpu(i) {
-+		rq = cpu_rq(i);
-+
-+		sched_queue_init(&rq->queue);
-+		rq->prio = IDLE_TASK_SCHED_PRIO;
-+#ifdef CONFIG_SCHED_PDS
-+		rq->prio_idx = rq->prio;
-+#endif
-+
-+		raw_spin_lock_init(&rq->lock);
-+		rq->nr_running = rq->nr_uninterruptible = 0;
-+		rq->calc_load_active = 0;
-+		rq->calc_load_update = jiffies + LOAD_FREQ;
-+#ifdef CONFIG_SMP
-+		rq->online = false;
-+		rq->cpu = i;
-+
-+#ifdef CONFIG_SCHED_SMT
-+		rq->sg_balance_arg = balance_arg;
-+#endif
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
-+#endif
-+		rq->balance_callback = &balance_push_callback;
-+#ifdef CONFIG_HOTPLUG_CPU
-+		rcuwait_init(&rq->hotplug_wait);
-+#endif
-+#endif /* CONFIG_SMP */
-+		rq->nr_switches = 0;
-+
-+		hrtick_rq_init(rq);
-+		atomic_set(&rq->nr_iowait, 0);
-+
-+		zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
-+	}
-+#ifdef CONFIG_SMP
-+	/* Set rq->online for cpu 0 */
-+	cpu_rq(0)->online = true;
-+#endif
-+	/*
-+	 * The boot idle thread does lazy MMU switching as well:
-+	 */
-+	mmgrab(&init_mm);
-+	enter_lazy_tlb(&init_mm, current);
-+
-+	/*
-+	 * The idle task doesn't need the kthread struct to function, but it
-+	 * is dressed up as a per-CPU kthread and thus needs to play the part
-+	 * if we want to avoid special-casing it in code that deals with per-CPU
-+	 * kthreads.
-+	 */
-+	WARN_ON(!set_kthread_struct(current));
-+
-+	/*
-+	 * Make us the idle thread. Technically, schedule() should not be
-+	 * called from this thread, however somewhere below it might be,
-+	 * but because we are the idle thread, we just pick up running again
-+	 * when this runqueue becomes "idle".
-+	 */
-+	init_idle(current, smp_processor_id());
-+
-+	calc_load_update = jiffies + LOAD_FREQ;
-+
-+#ifdef CONFIG_SMP
-+	idle_thread_set_boot_cpu();
-+	balance_push_set(smp_processor_id(), false);
-+
-+	sched_init_topology_cpumask_early();
-+#endif /* SMP */
-+
-+	preempt_dynamic_init();
-+}
-+
-+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
-+
-+void __might_sleep(const char *file, int line)
-+{
-+	unsigned int state = get_current_state();
-+	/*
-+	 * Blocking primitives will set (and therefore destroy) current->state,
-+	 * since we will exit with TASK_RUNNING make sure we enter with it,
-+	 * otherwise we will destroy state.
-+	 */
-+	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
-+			"do not call blocking ops when !TASK_RUNNING; "
-+			"state=%x set at [<%p>] %pS\n", state,
-+			(void *)current->task_state_change,
-+			(void *)current->task_state_change);
-+
-+	__might_resched(file, line, 0);
-+}
-+EXPORT_SYMBOL(__might_sleep);
-+
-+static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
-+{
-+	if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
-+		return;
-+
-+	if (preempt_count() == preempt_offset)
-+		return;
-+
-+	pr_err("Preemption disabled at:");
-+	print_ip_sym(KERN_ERR, ip);
-+}
-+
-+static inline bool resched_offsets_ok(unsigned int offsets)
-+{
-+	unsigned int nested = preempt_count();
-+
-+	nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
-+
-+	return nested == offsets;
-+}
-+
-+void __might_resched(const char *file, int line, unsigned int offsets)
-+{
-+	/* Ratelimiting timestamp: */
-+	static unsigned long prev_jiffy;
-+
-+	unsigned long preempt_disable_ip;
-+
-+	/* WARN_ON_ONCE() by default, no rate limit required: */
-+	rcu_sleep_check();
-+
-+	if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
-+	     !is_idle_task(current) && !current->non_block_count) ||
-+	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
-+	    oops_in_progress)
-+		return;
-+	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+		return;
-+	prev_jiffy = jiffies;
-+
-+	/* Save this before calling printk(), since that will clobber it: */
-+	preempt_disable_ip = get_preempt_disable_ip(current);
-+
-+	pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
-+	       file, line);
-+	pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
-+	       in_atomic(), irqs_disabled(), current->non_block_count,
-+	       current->pid, current->comm);
-+	pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
-+	       offsets & MIGHT_RESCHED_PREEMPT_MASK);
-+
-+	if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
-+		pr_err("RCU nest depth: %d, expected: %u\n",
-+		       rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
-+	}
-+
-+	if (task_stack_end_corrupted(current))
-+		pr_emerg("Thread overran stack, or stack corrupted\n");
-+
-+	debug_show_held_locks(current);
-+	if (irqs_disabled())
-+		print_irqtrace_events(current);
-+
-+	print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
-+				 preempt_disable_ip);
-+
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL(__might_resched);
-+
-+void __cant_sleep(const char *file, int line, int preempt_offset)
-+{
-+	static unsigned long prev_jiffy;
-+
-+	if (irqs_disabled())
-+		return;
-+
-+	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-+		return;
-+
-+	if (preempt_count() > preempt_offset)
-+		return;
-+
-+	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+		return;
-+	prev_jiffy = jiffies;
-+
-+	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
-+	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
-+			in_atomic(), irqs_disabled(),
-+			current->pid, current->comm);
-+
-+	debug_show_held_locks(current);
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL_GPL(__cant_sleep);
-+
-+#ifdef CONFIG_SMP
-+void __cant_migrate(const char *file, int line)
-+{
-+	static unsigned long prev_jiffy;
-+
-+	if (irqs_disabled())
-+		return;
-+
-+	if (is_migration_disabled(current))
-+		return;
-+
-+	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-+		return;
-+
-+	if (preempt_count() > 0)
-+		return;
-+
-+	if (current->migration_flags & MDF_FORCE_ENABLED)
-+		return;
-+
-+	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+		return;
-+	prev_jiffy = jiffies;
-+
-+	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
-+	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
-+	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
-+	       current->pid, current->comm);
-+
-+	debug_show_held_locks(current);
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL_GPL(__cant_migrate);
-+#endif
-+#endif
-+
-+#ifdef CONFIG_MAGIC_SYSRQ
-+void normalize_rt_tasks(void)
-+{
-+	struct task_struct *g, *p;
-+	struct sched_attr attr = {
-+		.sched_policy = SCHED_NORMAL,
-+	};
-+
-+	read_lock(&tasklist_lock);
-+	for_each_process_thread(g, p) {
-+		/*
-+		 * Only normalize user tasks:
-+		 */
-+		if (p->flags & PF_KTHREAD)
-+			continue;
-+
-+		schedstat_set(p->stats.wait_start,  0);
-+		schedstat_set(p->stats.sleep_start, 0);
-+		schedstat_set(p->stats.block_start, 0);
-+
-+		if (!rt_task(p)) {
-+			/*
-+			 * Renice negative nice level userspace
-+			 * tasks back to 0:
-+			 */
-+			if (task_nice(p) < 0)
-+				set_user_nice(p, 0);
-+			continue;
-+		}
-+
-+		__sched_setscheduler(p, &attr, false, false);
-+	}
-+	read_unlock(&tasklist_lock);
-+}
-+#endif /* CONFIG_MAGIC_SYSRQ */
-+
-+#if defined(CONFIG_KGDB_KDB)
-+/*
-+ * These functions are only useful for kdb.
-+ *
-+ * They can only be called when the whole system has been
-+ * stopped - every CPU needs to be quiescent, and no scheduling
-+ * activity can take place. Using them for anything else would
-+ * be a serious bug, and as a result, they aren't even visible
-+ * under any other configuration.
-+ */
-+
-+/**
-+ * curr_task - return the current task for a given CPU.
-+ * @cpu: the processor in question.
-+ *
-+ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
-+ *
-+ * Return: The current task for @cpu.
-+ */
-+struct task_struct *curr_task(int cpu)
-+{
-+	return cpu_curr(cpu);
-+}
-+
-+#endif /* defined(CONFIG_KGDB_KDB) */
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+static void sched_free_group(struct task_group *tg)
-+{
-+	kmem_cache_free(task_group_cache, tg);
-+}
-+
-+static void sched_free_group_rcu(struct rcu_head *rhp)
-+{
-+	sched_free_group(container_of(rhp, struct task_group, rcu));
-+}
-+
-+static void sched_unregister_group(struct task_group *tg)
-+{
-+	/*
-+	 * We have to wait for yet another RCU grace period to expire, as
-+	 * print_cfs_stats() might run concurrently.
-+	 */
-+	call_rcu(&tg->rcu, sched_free_group_rcu);
-+}
-+
-+/* allocate runqueue etc for a new task group */
-+struct task_group *sched_create_group(struct task_group *parent)
-+{
-+	struct task_group *tg;
-+
-+	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
-+	if (!tg)
-+		return ERR_PTR(-ENOMEM);
-+
-+	return tg;
-+}
-+
-+void sched_online_group(struct task_group *tg, struct task_group *parent)
-+{
-+}
-+
-+/* rcu callback to free various structures associated with a task group */
-+static void sched_unregister_group_rcu(struct rcu_head *rhp)
-+{
-+	/* Now it should be safe to free those cfs_rqs: */
-+	sched_unregister_group(container_of(rhp, struct task_group, rcu));
-+}
-+
-+void sched_destroy_group(struct task_group *tg)
-+{
-+	/* Wait for possible concurrent references to cfs_rqs complete: */
-+	call_rcu(&tg->rcu, sched_unregister_group_rcu);
-+}
-+
-+void sched_release_group(struct task_group *tg)
-+{
-+}
-+
-+static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
-+{
-+	return css ? container_of(css, struct task_group, css) : NULL;
-+}
-+
-+static struct cgroup_subsys_state *
-+cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
-+{
-+	struct task_group *parent = css_tg(parent_css);
-+	struct task_group *tg;
-+
-+	if (!parent) {
-+		/* This is early initialization for the top cgroup */
-+		return &root_task_group.css;
-+	}
-+
-+	tg = sched_create_group(parent);
-+	if (IS_ERR(tg))
-+		return ERR_PTR(-ENOMEM);
-+	return &tg->css;
-+}
-+
-+/* Expose task group only after completing cgroup initialization */
-+static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
-+{
-+	struct task_group *tg = css_tg(css);
-+	struct task_group *parent = css_tg(css->parent);
-+
-+	if (parent)
-+		sched_online_group(tg, parent);
-+	return 0;
-+}
-+
-+static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
-+{
-+	struct task_group *tg = css_tg(css);
-+
-+	sched_release_group(tg);
-+}
-+
-+static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
-+{
-+	struct task_group *tg = css_tg(css);
-+
-+	/*
-+	 * Relies on the RCU grace period between css_released() and this.
-+	 */
-+	sched_unregister_group(tg);
-+}
-+
-+#ifdef CONFIG_RT_GROUP_SCHED
-+static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
-+{
-+	return 0;
-+}
-+#endif
-+
-+static void cpu_cgroup_attach(struct cgroup_taskset *tset)
-+{
-+}
-+
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+static DEFINE_MUTEX(shares_mutex);
-+
-+static int sched_group_set_shares(struct task_group *tg, unsigned long shares)
-+{
-+	/*
-+	 * We can't change the weight of the root cgroup.
-+	 */
-+	if (&root_task_group == tg)
-+		return -EINVAL;
-+
-+	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
-+
-+	mutex_lock(&shares_mutex);
-+	if (tg->shares == shares)
-+		goto done;
-+
-+	tg->shares = shares;
-+done:
-+	mutex_unlock(&shares_mutex);
-+	return 0;
-+}
-+
-+static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
-+				struct cftype *cftype, u64 shareval)
-+{
-+	if (shareval > scale_load_down(ULONG_MAX))
-+		shareval = MAX_SHARES;
-+	return sched_group_set_shares(css_tg(css), scale_load(shareval));
-+}
-+
-+static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	struct task_group *tg = css_tg(css);
-+
-+	return (u64) scale_load_down(tg->shares);
-+}
-+#endif
-+
-+static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
-+				  struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
-+				   struct cftype *cftype, s64 cfs_quota_us)
-+{
-+	return 0;
-+}
-+
-+static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
-+				   struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
-+				    struct cftype *cftype, u64 cfs_period_us)
-+{
-+	return 0;
-+}
-+
-+static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
-+				  struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
-+				   struct cftype *cftype, u64 cfs_burst_us)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
-+				struct cftype *cft, s64 val)
-+{
-+	return 0;
-+}
-+
-+static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
-+				    struct cftype *cftype, u64 rt_period_us)
-+{
-+	return 0;
-+}
-+
-+static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
-+				   struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
-+				    char *buf, size_t nbytes,
-+				    loff_t off)
-+{
-+	return nbytes;
-+}
-+
-+static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
-+				    char *buf, size_t nbytes,
-+				    loff_t off)
-+{
-+	return nbytes;
-+}
-+
-+static struct cftype cpu_legacy_files[] = {
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+	{
-+		.name = "shares",
-+		.read_u64 = cpu_shares_read_u64,
-+		.write_u64 = cpu_shares_write_u64,
-+	},
-+#endif
-+	{
-+		.name = "cfs_quota_us",
-+		.read_s64 = cpu_cfs_quota_read_s64,
-+		.write_s64 = cpu_cfs_quota_write_s64,
-+	},
-+	{
-+		.name = "cfs_period_us",
-+		.read_u64 = cpu_cfs_period_read_u64,
-+		.write_u64 = cpu_cfs_period_write_u64,
-+	},
-+	{
-+		.name = "cfs_burst_us",
-+		.read_u64 = cpu_cfs_burst_read_u64,
-+		.write_u64 = cpu_cfs_burst_write_u64,
-+	},
-+	{
-+		.name = "stat",
-+		.seq_show = cpu_cfs_stat_show,
-+	},
-+	{
-+		.name = "stat.local",
-+		.seq_show = cpu_cfs_local_stat_show,
-+	},
-+	{
-+		.name = "rt_runtime_us",
-+		.read_s64 = cpu_rt_runtime_read,
-+		.write_s64 = cpu_rt_runtime_write,
-+	},
-+	{
-+		.name = "rt_period_us",
-+		.read_u64 = cpu_rt_period_read_uint,
-+		.write_u64 = cpu_rt_period_write_uint,
-+	},
-+	{
-+		.name = "uclamp.min",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_uclamp_min_show,
-+		.write = cpu_uclamp_min_write,
-+	},
-+	{
-+		.name = "uclamp.max",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_uclamp_max_show,
-+		.write = cpu_uclamp_max_write,
-+	},
-+	{ }	/* Terminate */
-+};
-+
-+static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
-+				struct cftype *cft, u64 weight)
-+{
-+	return 0;
-+}
-+
-+static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
-+				    struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
-+				     struct cftype *cft, s64 nice)
-+{
-+	return 0;
-+}
-+
-+static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
-+				struct cftype *cft, s64 idle)
-+{
-+	return 0;
-+}
-+
-+static int cpu_max_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static ssize_t cpu_max_write(struct kernfs_open_file *of,
-+			     char *buf, size_t nbytes, loff_t off)
-+{
-+	return nbytes;
-+}
-+
-+static struct cftype cpu_files[] = {
-+	{
-+		.name = "weight",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.read_u64 = cpu_weight_read_u64,
-+		.write_u64 = cpu_weight_write_u64,
-+	},
-+	{
-+		.name = "weight.nice",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.read_s64 = cpu_weight_nice_read_s64,
-+		.write_s64 = cpu_weight_nice_write_s64,
-+	},
-+	{
-+		.name = "idle",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.read_s64 = cpu_idle_read_s64,
-+		.write_s64 = cpu_idle_write_s64,
-+	},
-+	{
-+		.name = "max",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_max_show,
-+		.write = cpu_max_write,
-+	},
-+	{
-+		.name = "max.burst",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.read_u64 = cpu_cfs_burst_read_u64,
-+		.write_u64 = cpu_cfs_burst_write_u64,
-+	},
-+	{
-+		.name = "uclamp.min",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_uclamp_min_show,
-+		.write = cpu_uclamp_min_write,
-+	},
-+	{
-+		.name = "uclamp.max",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_uclamp_max_show,
-+		.write = cpu_uclamp_max_write,
-+	},
-+	{ }	/* terminate */
-+};
-+
-+static int cpu_extra_stat_show(struct seq_file *sf,
-+			       struct cgroup_subsys_state *css)
-+{
-+	return 0;
-+}
-+
-+static int cpu_local_stat_show(struct seq_file *sf,
-+			       struct cgroup_subsys_state *css)
-+{
-+	return 0;
-+}
-+
-+struct cgroup_subsys cpu_cgrp_subsys = {
-+	.css_alloc	= cpu_cgroup_css_alloc,
-+	.css_online	= cpu_cgroup_css_online,
-+	.css_released	= cpu_cgroup_css_released,
-+	.css_free	= cpu_cgroup_css_free,
-+	.css_extra_stat_show = cpu_extra_stat_show,
-+	.css_local_stat_show = cpu_local_stat_show,
-+#ifdef CONFIG_RT_GROUP_SCHED
-+	.can_attach	= cpu_cgroup_can_attach,
-+#endif
-+	.attach		= cpu_cgroup_attach,
-+	.legacy_cftypes	= cpu_legacy_files,
-+	.dfl_cftypes	= cpu_files,
-+	.early_init	= true,
-+	.threaded	= true,
-+};
-+#endif	/* CONFIG_CGROUP_SCHED */
-+
-+#undef CREATE_TRACE_POINTS
-+
-+#ifdef CONFIG_SCHED_MM_CID
-+
-+/*
-+ * @cid_lock: Guarantee forward-progress of cid allocation.
-+ *
-+ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
-+ * is only used when contention is detected by the lock-free allocation so
-+ * forward progress can be guaranteed.
-+ */
-+DEFINE_RAW_SPINLOCK(cid_lock);
-+
-+/*
-+ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
-+ *
-+ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
-+ * detected, it is set to 1 to ensure that all newly coming allocations are
-+ * serialized by @cid_lock until the allocation which detected contention
-+ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
-+ * of a cid allocation.
-+ */
-+int use_cid_lock;
-+
-+/*
-+ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
-+ * concurrently with respect to the execution of the source runqueue context
-+ * switch.
-+ *
-+ * There is one basic property we want to guarantee here:
-+ *
-+ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
-+ * used by a task. That would lead to concurrent allocation of the cid and
-+ * userspace corruption.
-+ *
-+ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
-+ * that a pair of loads observe at least one of a pair of stores, which can be
-+ * shown as:
-+ *
-+ *      X = Y = 0
-+ *
-+ *      w[X]=1          w[Y]=1
-+ *      MB              MB
-+ *      r[Y]=y          r[X]=x
-+ *
-+ * Which guarantees that x==0 && y==0 is impossible. But rather than using
-+ * values 0 and 1, this algorithm cares about specific state transitions of the
-+ * runqueue current task (as updated by the scheduler context switch), and the
-+ * per-mm/cpu cid value.
-+ *
-+ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
-+ * task->mm != mm for the rest of the discussion. There are two scheduler state
-+ * transitions on context switch we care about:
-+ *
-+ * (TSA) Store to rq->curr with transition from (N) to (Y)
-+ *
-+ * (TSB) Store to rq->curr with transition from (Y) to (N)
-+ *
-+ * On the remote-clear side, there is one transition we care about:
-+ *
-+ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
-+ *
-+ * There is also a transition to UNSET state which can be performed from all
-+ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
-+ * guarantees that only a single thread will succeed:
-+ *
-+ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
-+ *
-+ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
-+ * when a thread is actively using the cid (property (1)).
-+ *
-+ * Let's look at the relevant combinations of TSA/TSB and TMA transitions.
-+ *
-+ * Scenario A) (TSA)+(TMA) (from next task perspective)
-+ *
-+ * CPU0                                      CPU1
-+ *
-+ * Context switch CS-1                       Remote-clear
-+ *   - store to rq->curr: (N)->(Y) (TSA)     - cmpxchg to *pcpu_id to LAZY (TMA)
-+ *                                             (implied barrier after cmpxchg)
-+ *   - switch_mm_cid()
-+ *     - memory barrier (see switch_mm_cid()
-+ *       comment explaining how this barrier
-+ *       is combined with other scheduler
-+ *       barriers)
-+ *     - mm_cid_get (next)
-+ *       - READ_ONCE(*pcpu_cid)              - rcu_dereference(src_rq->curr)
-+ *
-+ * This Dekker ensures that either task (Y) is observed by the
-+ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
-+ * observed.
-+ *
-+ * If task (Y) store is observed by rcu_dereference(), it means that there is
-+ * still an active task on the cpu. Remote-clear will therefore not transition
-+ * to UNSET, which fulfills property (1).
-+ *
-+ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
-+ * it will move its state to UNSET, which clears the percpu cid perhaps
-+ * uselessly (which is not an issue for correctness). Because task (Y) is not
-+ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
-+ * state to UNSET is done with a cmpxchg expecting that the old state has the
-+ * LAZY flag set, only one thread will successfully UNSET.
-+ *
-+ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
-+ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
-+ * CPU1 will observe task (Y) and do nothing more, which is fine.
-+ *
-+ * What we are effectively preventing with this Dekker is a scenario where
-+ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
-+ * because this would UNSET a cid which is actively used.
-+ */
-+
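/*
 * Illustrative user-space sketch of the Dekker store/load pairing described
 * above; an editorial aside, not part of the patch. With a full barrier
 * between each store and the following load (seq_cst here), both loads
 * observing zero is impossible, mirroring the rq->curr / pcpu_cid pairing
 * used by the scheduler and the remote-clear path.
 */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_int X, Y;
static int rx, ry;

static void *writer_x(void *arg)
{
	atomic_store(&X, 1);	/* w[X]=1 */
	ry = atomic_load(&Y);	/* MB; r[Y] */
	return NULL;
}

static void *writer_y(void *arg)
{
	atomic_store(&Y, 1);	/* w[Y]=1 */
	rx = atomic_load(&X);	/* MB; r[X] */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, writer_x, NULL);
	pthread_create(&b, NULL, writer_y, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	assert(!(rx == 0 && ry == 0));	/* at least one store is observed */
	return 0;
}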
-+void sched_mm_cid_migrate_from(struct task_struct *t)
-+{
-+	t->migrate_from_cpu = task_cpu(t);
-+}
-+
-+static
-+int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
-+					  struct task_struct *t,
-+					  struct mm_cid *src_pcpu_cid)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct task_struct *src_task;
-+	int src_cid, last_mm_cid;
-+
-+	if (!mm)
-+		return -1;
-+
-+	last_mm_cid = t->last_mm_cid;
-+	/*
-+	 * If the migrated task has no last cid, or if the current
-+	 * task on src rq uses the cid, it means the source cid does not need
-+	 * to be moved to the destination cpu.
-+	 */
-+	if (last_mm_cid == -1)
-+		return -1;
-+	src_cid = READ_ONCE(src_pcpu_cid->cid);
-+	if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
-+		return -1;
-+
-+	/*
-+	 * If we observe an active task using the mm on this rq, it means we
-+	 * are not the last task to be migrated from this cpu for this mm, so
-+	 * there is no need to move src_cid to the destination cpu.
-+	 */
-+	rcu_read_lock();
-+	src_task = rcu_dereference(src_rq->curr);
-+	if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
-+		rcu_read_unlock();
-+		t->last_mm_cid = -1;
-+		return -1;
-+	}
-+	rcu_read_unlock();
-+
-+	return src_cid;
-+}
-+
-+static
-+int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
-+					      struct task_struct *t,
-+					      struct mm_cid *src_pcpu_cid,
-+					      int src_cid)
-+{
-+	struct task_struct *src_task;
-+	struct mm_struct *mm = t->mm;
-+	int lazy_cid;
-+
-+	if (src_cid == -1)
-+		return -1;
-+
-+	/*
-+	 * Attempt to clear the source cpu cid to move it to the destination
-+	 * cpu.
-+	 */
-+	lazy_cid = mm_cid_set_lazy_put(src_cid);
-+	if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
-+		return -1;
-+
-+	/*
-+	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+	 * rq->curr->mm matches the scheduler barrier in context_switch()
-+	 * between store to rq->curr and load of prev and next task's
-+	 * per-mm/cpu cid.
-+	 *
-+	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+	 * rq->curr->mm_cid_active matches the barrier in
-+	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
-+	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
-+	 * load of per-mm/cpu cid.
-+	 */
-+
-+	/*
-+	 * If we observe an active task using the mm on this rq after setting
-+	 * the lazy-put flag, this task will be responsible for transitioning
-+	 * from lazy-put flag set to MM_CID_UNSET.
-+	 */
-+	scoped_guard (rcu) {
-+		src_task = rcu_dereference(src_rq->curr);
-+		if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
-+			/*
-+			 * We observed an active task for this mm, there is therefore
-+			 * no point in moving this cid to the destination cpu.
-+			 */
-+			t->last_mm_cid = -1;
-+			return -1;
-+		}
-+	}
-+
-+	/*
-+	 * The src_cid is unused, so it can be unset.
-+	 */
-+	if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
-+		return -1;
-+	return src_cid;
-+}
-+
-+/*
-+ * Migration to dst cpu. Called with dst_rq lock held.
-+ * Interrupts are disabled, which keeps the window of cid ownership without the
-+ * source rq lock held small.
-+ */
-+void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu)
-+{
-+	struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
-+	struct mm_struct *mm = t->mm;
-+	int src_cid, dst_cid;
-+	struct rq *src_rq;
-+
-+	lockdep_assert_rq_held(dst_rq);
-+
-+	if (!mm)
-+		return;
-+	if (src_cpu == -1) {
-+		t->last_mm_cid = -1;
-+		return;
-+	}
-+	/*
-+	 * Move the src cid if the dst cid is unset. This keeps id
-+	 * allocation closest to 0 in cases where few threads migrate around
-+	 * many cpus.
-+	 *
-+	 * If destination cid is already set, we may have to just clear
-+	 * the src cid to ensure compactness in frequent migrations
-+	 * scenarios.
-+	 *
-+	 * It is not useful to clear the src cid when the number of threads is
-+	 * greater or equal to the number of allowed cpus, because user-space
-+	 * can expect that the number of allowed cids can reach the number of
-+	 * allowed cpus.
-+	 */
-+	dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
-+	dst_cid = READ_ONCE(dst_pcpu_cid->cid);
-+	if (!mm_cid_is_unset(dst_cid) &&
-+	    atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
-+		return;
-+	src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
-+	src_rq = cpu_rq(src_cpu);
-+	src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
-+	if (src_cid == -1)
-+		return;
-+	src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
-+							    src_cid);
-+	if (src_cid == -1)
-+		return;
-+	if (!mm_cid_is_unset(dst_cid)) {
-+		__mm_cid_put(mm, src_cid);
-+		return;
-+	}
-+	/* Move src_cid to dst cpu. */
-+	mm_cid_snapshot_time(dst_rq, mm);
-+	WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
-+}
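/*
 * Recap of the migration decision above (illustrative aside, not part of
 * the patch; mm_users is used as an approximation of the thread count):
 *
 *   dst cid already set and mm_users >= nr_cpus_allowed -> do nothing
 *   src cid successfully stolen, dst cid already set    -> recycle src cid
 *   src cid successfully stolen, dst cid unset          -> install src cid
 *                                                          on the dst cpu
 */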
-+
-+static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
-+				      int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	struct task_struct *t;
-+	int cid, lazy_cid;
-+
-+	cid = READ_ONCE(pcpu_cid->cid);
-+	if (!mm_cid_is_valid(cid))
-+		return;
-+
-+	/*
-+	 * Clear the cpu cid if it is set to keep cid allocation compact.  If
-+	 * there happens to be other tasks left on the source cpu using this
-+	 * mm, the next task using this mm will reallocate its cid on context
-+	 * switch.
-+	 */
-+	lazy_cid = mm_cid_set_lazy_put(cid);
-+	if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
-+		return;
-+
-+	/*
-+	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+	 * rq->curr->mm matches the scheduler barrier in context_switch()
-+	 * between store to rq->curr and load of prev and next task's
-+	 * per-mm/cpu cid.
-+	 *
-+	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+	 * rq->curr->mm_cid_active matches the barrier in
-+	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
-+	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
-+	 * load of per-mm/cpu cid.
-+	 */
-+
-+	/*
-+	 * If we observe an active task using the mm on this rq after setting
-+	 * the lazy-put flag, that task will be responsible for transitioning
-+	 * from lazy-put flag set to MM_CID_UNSET.
-+	 */
-+	scoped_guard (rcu) {
-+		t = rcu_dereference(rq->curr);
-+		if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
-+			return;
-+	}
-+
-+	/*
-+	 * The cid is unused, so it can be unset.
-+	 * Disable interrupts to keep the window of cid ownership without rq
-+	 * lock small.
-+	 */
-+	scoped_guard (irqsave) {
-+		if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
-+			__mm_cid_put(mm, cid);
-+	}
-+}
-+
-+static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	struct mm_cid *pcpu_cid;
-+	struct task_struct *curr;
-+	u64 rq_clock;
-+
-+	/*
-+	 * rq->clock load is racy on 32-bit but one spurious clear once in a
-+	 * while is irrelevant.
-+	 */
-+	rq_clock = READ_ONCE(rq->clock);
-+	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
-+
-+	/*
-+	 * In order to take care of infrequently scheduled tasks, bump the time
-+	 * snapshot associated with this cid if an active task using the mm is
-+	 * observed on this rq.
-+	 */
-+	scoped_guard (rcu) {
-+		curr = rcu_dereference(rq->curr);
-+		if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
-+			WRITE_ONCE(pcpu_cid->time, rq_clock);
-+			return;
-+		}
-+	}
-+
-+	if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
-+		return;
-+	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
-+}
-+
-+static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
-+					     int weight)
-+{
-+	struct mm_cid *pcpu_cid;
-+	int cid;
-+
-+	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
-+	cid = READ_ONCE(pcpu_cid->cid);
-+	if (!mm_cid_is_valid(cid) || cid < weight)
-+		return;
-+	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
-+}
-+
-+static void task_mm_cid_work(struct callback_head *work)
-+{
-+	unsigned long now = jiffies, old_scan, next_scan;
-+	struct task_struct *t = current;
-+	struct cpumask *cidmask;
-+	struct mm_struct *mm;
-+	int weight, cpu;
-+
-+	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
-+
-+	work->next = work;	/* Prevent double-add */
-+	if (t->flags & PF_EXITING)
-+		return;
-+	mm = t->mm;
-+	if (!mm)
-+		return;
-+	old_scan = READ_ONCE(mm->mm_cid_next_scan);
-+	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-+	if (!old_scan) {
-+		unsigned long res;
-+
-+		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
-+		if (res != old_scan)
-+			old_scan = res;
-+		else
-+			old_scan = next_scan;
-+	}
-+	if (time_before(now, old_scan))
-+		return;
-+	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-+		return;
-+	cidmask = mm_cidmask(mm);
-+	/* Clear cids that were not recently used. */
-+	for_each_possible_cpu(cpu)
-+		sched_mm_cid_remote_clear_old(mm, cpu);
-+	weight = cpumask_weight(cidmask);
-+	/*
-+	 * Clear cids that are greater or equal to the cidmask weight to
-+	 * recompact it.
-+	 */
-+	for_each_possible_cpu(cpu)
-+		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
-+}
-+
-+void init_sched_mm_cid(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	int mm_users = 0;
-+
-+	if (mm) {
-+		mm_users = atomic_read(&mm->mm_users);
-+		if (mm_users == 1)
-+			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-+	}
-+	t->cid_work.next = &t->cid_work;	/* Protect against double add */
-+	init_task_work(&t->cid_work, task_mm_cid_work);
-+}
-+
-+void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
-+{
-+	struct callback_head *work = &curr->cid_work;
-+	unsigned long now = jiffies;
-+
-+	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
-+	    work->next != work)
-+		return;
-+	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
-+		return;
-+	task_work_add(curr, work, TWA_RESUME);
-+}
-+
-+void sched_mm_cid_exit_signals(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct rq *rq;
-+
-+	if (!mm)
-+		return;
-+
-+	preempt_disable();
-+	rq = this_rq();
-+	guard(rq_lock_irqsave)(rq);
-+	preempt_enable_no_resched();	/* holding spinlock */
-+	WRITE_ONCE(t->mm_cid_active, 0);
-+	/*
-+	 * Store t->mm_cid_active before loading per-mm/cpu cid.
-+	 * Matches barrier in sched_mm_cid_remote_clear_old().
-+	 */
-+	smp_mb();
-+	mm_cid_put(mm);
-+	t->last_mm_cid = t->mm_cid = -1;
-+}
-+
-+void sched_mm_cid_before_execve(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct rq *rq;
-+
-+	if (!mm)
-+		return;
-+
-+	preempt_disable();
-+	rq = this_rq();
-+	guard(rq_lock_irqsave)(rq);
-+	preempt_enable_no_resched();	/* holding spinlock */
-+	WRITE_ONCE(t->mm_cid_active, 0);
-+	/*
-+	 * Store t->mm_cid_active before loading per-mm/cpu cid.
-+	 * Matches barrier in sched_mm_cid_remote_clear_old().
-+	 */
-+	smp_mb();
-+	mm_cid_put(mm);
-+	t->last_mm_cid = t->mm_cid = -1;
-+}
-+
-+void sched_mm_cid_after_execve(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct rq *rq;
-+
-+	if (!mm)
-+		return;
-+
-+	preempt_disable();
-+	rq = this_rq();
-+	scoped_guard (rq_lock_irqsave, rq) {
-+		preempt_enable_no_resched();	/* holding spinlock */
-+		WRITE_ONCE(t->mm_cid_active, 1);
-+		/*
-+		 * Store t->mm_cid_active before loading per-mm/cpu cid.
-+		 * Matches barrier in sched_mm_cid_remote_clear_old().
-+		 */
-+		smp_mb();
-+		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
-+	}
-+	rseq_set_notify_resume(t);
-+}
-+
-+void sched_mm_cid_fork(struct task_struct *t)
-+{
-+	WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
-+	t->mm_cid_active = 1;
-+}
-+#endif
-diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
---- a/kernel/sched/alt_debug.c	1970-01-01 01:00:00.000000000 +0100
-+++ b/kernel/sched/alt_debug.c	2024-07-16 11:57:44.445146416 +0200
-@@ -0,0 +1,32 @@
-+/*
-+ * kernel/sched/alt_debug.c
-+ *
-+ * Print the alt scheduler debugging details
-+ *
-+ * Author: Alfred Chen
-+ * Date  : 2020
-+ */
-+#include "sched.h"
-+#include "linux/sched/debug.h"
-+
-+/*
-+ * This allows printing both to /proc/sched_debug and
-+ * to the console
-+ */
-+#define SEQ_printf(m, x...)			\
-+ do {						\
-+	if (m)					\
-+		seq_printf(m, x);		\
-+	else					\
-+		pr_cont(x);			\
-+ } while (0)
-+
-+void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
-+			  struct seq_file *m)
-+{
-+	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
-+						get_nr_threads(p));
-+}
-+
-+void proc_sched_set_task(struct task_struct *p)
-+{}
-diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
---- a/kernel/sched/alt_sched.h	1970-01-01 01:00:00.000000000 +0100
-+++ b/kernel/sched/alt_sched.h	2024-07-16 11:57:44.445146416 +0200
-@@ -0,0 +1,976 @@
-+#ifndef ALT_SCHED_H
-+#define ALT_SCHED_H
-+
-+#include <linux/context_tracking.h>
-+#include <linux/profile.h>
-+#include <linux/stop_machine.h>
-+#include <linux/syscalls.h>
-+#include <linux/tick.h>
-+
-+#include <trace/events/power.h>
-+#include <trace/events/sched.h>
-+
-+#include "../workqueue_internal.h"
-+
-+#include "cpupri.h"
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+/* task group related information */
-+struct task_group {
-+	struct cgroup_subsys_state css;
-+
-+	struct rcu_head rcu;
-+	struct list_head list;
-+
-+	struct task_group *parent;
-+	struct list_head siblings;
-+	struct list_head children;
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+	unsigned long		shares;
-+#endif
-+};
-+
-+extern struct task_group *sched_create_group(struct task_group *parent);
-+extern void sched_online_group(struct task_group *tg,
-+			       struct task_group *parent);
-+extern void sched_destroy_group(struct task_group *tg);
-+extern void sched_release_group(struct task_group *tg);
-+#endif /* CONFIG_CGROUP_SCHED */
-+
-+#define MIN_SCHED_NORMAL_PRIO	(32)
-+/*
-+ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
-+ *
-+ * -- BMQ --
-+ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
-+ * -- PDS --
-+ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
-+ */
-+#define SCHED_LEVELS		(64 + 1)
-+
-+#define IDLE_TASK_SCHED_PRIO	(SCHED_LEVELS - 1)
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
-+extern void resched_latency_warn(int cpu, u64 latency);
-+#else
-+# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
-+static inline void resched_latency_warn(int cpu, u64 latency) {}
-+#endif
-+
-+/*
-+ * Increase resolution of nice-level calculations for 64-bit architectures.
-+ * The extra resolution improves shares distribution and load balancing of
-+ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
-+ * hierarchies, especially on larger systems. This is not a user-visible change
-+ * and does not change the user-interface for setting shares/weights.
-+ *
-+ * We increase resolution only if we have enough bits to allow this increased
-+ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
-+ * are pretty high and the returns do not justify the increased costs.
-+ *
-+ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
-+ * increase coverage and consistency always enable it on 64-bit platforms.
-+ */
-+#ifdef CONFIG_64BIT
-+# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load_down(w) \
-+({ \
-+	unsigned long __w = (w); \
-+	if (__w) \
-+		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
-+	__w; \
-+})
-+#else
-+# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load(w)		(w)
-+# define scale_load_down(w)	(w)
-+#endif
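/*
 * Worked example (illustrative aside; assumes the usual value
 * SCHED_FIXEDPOINT_SHIFT == 10 on a 64-bit build):
 *
 *   scale_load(1024)         == 1024 << 10 == 1048576
 *   scale_load_down(1048576) == max(2, 1048576 >> 10) == 1024
 *   scale_load_down(1)       == max(2, 1 >> 10) == 2    (a non-zero weight
 *                                                        never collapses to 0)
 *   scale_load_down(0)       == 0
 */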
-+
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+#define ROOT_TASK_GROUP_LOAD	NICE_0_LOAD
-+
-+/*
-+ * A weight of 0 or 1 can cause arithmetic problems.
-+ * The weight of a cfs_rq is the sum of the weights of the entities
-+ * queued on it, so neither the weight of an entity nor the shares
-+ * value of a task group should be too large.
-+ * (The default weight is 1024 - so there's no practical
-+ *  limitation from this.)
-+ */
-+#define MIN_SHARES		(1UL <<  1)
-+#define MAX_SHARES		(1UL << 18)
-+#endif
-+
-+/*
-+ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
-+ */
-+#ifdef CONFIG_SCHED_DEBUG
-+# define const_debug __read_mostly
-+#else
-+# define const_debug const
-+#endif
-+
-+/* task_struct::on_rq states: */
-+#define TASK_ON_RQ_QUEUED	1
-+#define TASK_ON_RQ_MIGRATING	2
-+
-+static inline int task_on_rq_queued(struct task_struct *p)
-+{
-+	return p->on_rq == TASK_ON_RQ_QUEUED;
-+}
-+
-+static inline int task_on_rq_migrating(struct task_struct *p)
-+{
-+	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
-+}
-+
-+/* Wake flags. The first three directly map to some SD flag value */
-+#define WF_EXEC         0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
-+#define WF_FORK         0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
-+#define WF_TTWU         0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */
-+
-+#define WF_SYNC         0x10 /* Waker goes to sleep after wakeup */
-+#define WF_MIGRATED     0x20 /* Internal use, task got migrated */
-+#define WF_CURRENT_CPU  0x40 /* Prefer to move the wakee to the current CPU. */
-+
-+#ifdef CONFIG_SMP
-+static_assert(WF_EXEC == SD_BALANCE_EXEC);
-+static_assert(WF_FORK == SD_BALANCE_FORK);
-+static_assert(WF_TTWU == SD_BALANCE_WAKE);
-+#endif
-+
-+#define SCHED_QUEUE_BITS	(SCHED_LEVELS - 1)
-+
-+struct sched_queue {
-+	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
-+	struct list_head heads[SCHED_LEVELS];
-+};
-+
-+struct rq;
-+struct cpuidle_state;
-+
-+struct balance_callback {
-+	struct balance_callback *next;
-+	void (*func)(struct rq *rq);
-+};
-+
-+struct balance_arg {
-+	struct task_struct	*task;
-+	int			active;
-+	cpumask_t		*cpumask;
-+};
-+
-+/*
-+ * This is the main, per-CPU runqueue data structure.
-+ * This data should only be modified by the local cpu.
-+ */
-+struct rq {
-+	/* runqueue lock: */
-+	raw_spinlock_t			lock;
-+
-+	struct task_struct __rcu	*curr;
-+	struct task_struct		*idle;
-+	struct task_struct		*stop;
-+	struct mm_struct		*prev_mm;
-+
-+	struct sched_queue		queue		____cacheline_aligned;
-+
-+	int				prio;
-+#ifdef CONFIG_SCHED_PDS
-+	int				prio_idx;
-+	u64				time_edge;
-+#endif
-+
-+	/* switch count */
-+	u64 nr_switches;
-+
-+	atomic_t nr_iowait;
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+	u64 last_seen_need_resched_ns;
-+	int ticks_without_resched;
-+#endif
-+
-+#ifdef CONFIG_MEMBARRIER
-+	int membarrier_state;
-+#endif
-+
-+#ifdef CONFIG_SMP
-+	int cpu;		/* cpu of this runqueue */
-+	bool online;
-+
-+	unsigned int		ttwu_pending;
-+	unsigned char		nohz_idle_balance;
-+	unsigned char		idle_balance;
-+
-+#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
-+	struct sched_avg	avg_irq;
-+#endif
-+
-+#ifdef CONFIG_SCHED_SMT
-+	struct balance_arg	sg_balance_arg		____cacheline_aligned;
-+#endif
-+	struct cpu_stop_work	active_balance_work;
-+
-+	struct balance_callback	*balance_callback;
-+#ifdef CONFIG_HOTPLUG_CPU
-+	struct rcuwait		hotplug_wait;
-+#endif
-+	unsigned int		nr_pinned;
-+
-+#endif /* CONFIG_SMP */
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+	u64 prev_irq_time;
-+#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
-+#ifdef CONFIG_PARAVIRT
-+	u64 prev_steal_time;
-+#endif /* CONFIG_PARAVIRT */
-+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
-+	u64 prev_steal_time_rq;
-+#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
-+
-+	/* For general cpu load util */
-+	s32 load_history;
-+	u64 load_block;
-+	u64 load_stamp;
-+
-+	/* calc_load related fields */
-+	unsigned long calc_load_update;
-+	long calc_load_active;
-+
-+	/* Ensure that all clocks are in the same cache line */
-+	u64			clock ____cacheline_aligned;
-+	u64			clock_task;
-+
-+	unsigned int  nr_running;
-+	unsigned long nr_uninterruptible;
-+
-+#ifdef CONFIG_SCHED_HRTICK
-+#ifdef CONFIG_SMP
-+	call_single_data_t hrtick_csd;
-+#endif
-+	struct hrtimer		hrtick_timer;
-+	ktime_t			hrtick_time;
-+#endif
-+
-+#ifdef CONFIG_SCHEDSTATS
-+
-+	/* latency stats */
-+	struct sched_info rq_sched_info;
-+	unsigned long long rq_cpu_time;
-+	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
-+
-+	/* sys_sched_yield() stats */
-+	unsigned int yld_count;
-+
-+	/* schedule() stats */
-+	unsigned int sched_switch;
-+	unsigned int sched_count;
-+	unsigned int sched_goidle;
-+
-+	/* try_to_wake_up() stats */
-+	unsigned int ttwu_count;
-+	unsigned int ttwu_local;
-+#endif /* CONFIG_SCHEDSTATS */
-+
-+#ifdef CONFIG_CPU_IDLE
-+	/* Must be inspected within a rcu lock section */
-+	struct cpuidle_state *idle_state;
-+#endif
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+#ifdef CONFIG_SMP
-+	call_single_data_t	nohz_csd;
-+#endif
-+	atomic_t		nohz_flags;
-+#endif /* CONFIG_NO_HZ_COMMON */
-+
-+	/* Scratch cpumask to be temporarily used under rq_lock */
-+	cpumask_var_t		scratch_mask;
-+};
-+
-+extern unsigned int sysctl_sched_base_slice;
-+
-+extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
-+
-+extern unsigned long calc_load_update;
-+extern atomic_long_t calc_load_tasks;
-+
-+extern void calc_global_load_tick(struct rq *this_rq);
-+extern long calc_load_fold_active(struct rq *this_rq, long adjust);
-+
-+DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-+#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
-+#define this_rq()		this_cpu_ptr(&runqueues)
-+#define task_rq(p)		cpu_rq(task_cpu(p))
-+#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
-+#define raw_rq()		raw_cpu_ptr(&runqueues)
-+
-+#ifdef CONFIG_SMP
-+#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
-+void register_sched_domain_sysctl(void);
-+void unregister_sched_domain_sysctl(void);
-+#else
-+static inline void register_sched_domain_sysctl(void)
-+{
-+}
-+static inline void unregister_sched_domain_sysctl(void)
-+{
-+}
-+#endif
-+
-+extern bool sched_smp_initialized;
-+
-+enum {
-+#ifdef CONFIG_SCHED_SMT
-+	SMT_LEVEL_SPACE_HOLDER,
-+#endif
-+	COREGROUP_LEVEL_SPACE_HOLDER,
-+	CORE_LEVEL_SPACE_HOLDER,
-+	OTHER_LEVEL_SPACE_HOLDER,
-+	NR_CPU_AFFINITY_LEVELS
-+};
-+
-+DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
-+
-+static inline int
-+__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
-+{
-+	int cpu;
-+
-+	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
-+		mask++;
-+
-+	return cpu;
-+}
-+
-+static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
-+{
-+	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
-+}
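/*
 * Usage sketch (illustrative aside, not from the patch): sched_cpu_topo_masks[]
 * is ordered from the closest topology level outwards, so best_mask_cpu()
 * returns the CPU in @mask that is topologically nearest to @cpu, e.g.
 *
 *	int target = best_mask_cpu(this_cpu, &candidate_mask);
 *
 * preferring an SMT sibling, then a CPU in the same core group, then any
 * remaining CPU. (candidate_mask is a hypothetical caller-owned cpumask.)
 */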
-+
-+#endif
-+
-+#ifndef arch_scale_freq_tick
-+static __always_inline
-+void arch_scale_freq_tick(void)
-+{
-+}
-+#endif
-+
-+#ifndef arch_scale_freq_capacity
-+static __always_inline
-+unsigned long arch_scale_freq_capacity(int cpu)
-+{
-+	return SCHED_CAPACITY_SCALE;
-+}
-+#endif
-+
-+static inline u64 __rq_clock_broken(struct rq *rq)
-+{
-+	return READ_ONCE(rq->clock);
-+}
-+
-+static inline u64 rq_clock(struct rq *rq)
-+{
-+	/*
-+	 * Relax lockdep_assert_held() checking: as in VRQ, callers such as
-+	 * sched_info_xxxx() may not hold rq->lock.
-+	 * lockdep_assert_held(&rq->lock);
-+	 */
-+	return rq->clock;
-+}
-+
-+static inline u64 rq_clock_task(struct rq *rq)
-+{
-+	/*
-+	 * Relax lockdep_assert_held() checking: as in VRQ, callers such as
-+	 * sched_info_xxxx() may not hold rq->lock.
-+	 * lockdep_assert_held(&rq->lock);
-+	 */
-+	return rq->clock_task;
-+}
-+
-+/*
-+ * {de,en}queue flags:
-+ *
-+ * DEQUEUE_SLEEP  - task is no longer runnable
-+ * ENQUEUE_WAKEUP - task just became runnable
-+ *
-+ */
-+
-+#define DEQUEUE_SLEEP		0x01
-+
-+#define ENQUEUE_WAKEUP		0x01
-+
-+
-+/*
-+ * Below are scheduler APIs used in other kernel code.
-+ * They use the dummy rq_flags.
-+ * TODO: BMQ needs to support these APIs for compatibility with the
-+ * mainline scheduler code.
-+ */
-+struct rq_flags {
-+	unsigned long flags;
-+};
-+
-+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(rq->lock);
-+
-+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(p->pi_lock)
-+	__acquires(rq->lock);
-+
-+static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+static inline void
-+task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
-+	__releases(rq->lock)
-+	__releases(p->pi_lock)
-+{
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-+}
-+
-+static inline void
-+rq_lock(struct rq *rq, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	raw_spin_lock(&rq->lock);
-+}
-+
-+static inline void
-+rq_unlock(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+static inline void
-+rq_lock_irq(struct rq *rq, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	raw_spin_lock_irq(&rq->lock);
-+}
-+
-+static inline void
-+rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock_irq(&rq->lock);
-+}
-+
-+static inline struct rq *
-+this_rq_lock_irq(struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	struct rq *rq;
-+
-+	local_irq_disable();
-+	rq = this_rq();
-+	raw_spin_lock(&rq->lock);
-+
-+	return rq;
-+}
-+
-+static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
-+{
-+	return &rq->lock;
-+}
-+
-+static inline raw_spinlock_t *rq_lockp(struct rq *rq)
-+{
-+	return __rq_lockp(rq);
-+}
-+
-+static inline void lockdep_assert_rq_held(struct rq *rq)
-+{
-+	lockdep_assert_held(__rq_lockp(rq));
-+}
-+
-+extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
-+extern void raw_spin_rq_unlock(struct rq *rq);
-+
-+static inline void raw_spin_rq_lock(struct rq *rq)
-+{
-+	raw_spin_rq_lock_nested(rq, 0);
-+}
-+
-+static inline void raw_spin_rq_lock_irq(struct rq *rq)
-+{
-+	local_irq_disable();
-+	raw_spin_rq_lock(rq);
-+}
-+
-+static inline void raw_spin_rq_unlock_irq(struct rq *rq)
-+{
-+	raw_spin_rq_unlock(rq);
-+	local_irq_enable();
-+}
-+
-+static inline int task_current(struct rq *rq, struct task_struct *p)
-+{
-+	return rq->curr == p;
-+}
-+
-+static inline bool task_on_cpu(struct task_struct *p)
-+{
-+	return p->on_cpu;
-+}
-+
-+extern int task_running_nice(struct task_struct *p);
-+
-+extern struct static_key_false sched_schedstats;
-+
-+#ifdef CONFIG_CPU_IDLE
-+static inline void idle_set_state(struct rq *rq,
-+				  struct cpuidle_state *idle_state)
-+{
-+	rq->idle_state = idle_state;
-+}
-+
-+static inline struct cpuidle_state *idle_get_state(struct rq *rq)
-+{
-+	WARN_ON(!rcu_read_lock_held());
-+	return rq->idle_state;
-+}
-+#else
-+static inline void idle_set_state(struct rq *rq,
-+				  struct cpuidle_state *idle_state)
-+{
-+}
-+
-+static inline struct cpuidle_state *idle_get_state(struct rq *rq)
-+{
-+	return NULL;
-+}
-+#endif
-+
-+static inline int cpu_of(const struct rq *rq)
-+{
-+#ifdef CONFIG_SMP
-+	return rq->cpu;
-+#else
-+	return 0;
-+#endif
-+}
-+
-+extern void resched_cpu(int cpu);
-+
-+#include "stats.h"
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+#define NOHZ_BALANCE_KICK_BIT	0
-+#define NOHZ_STATS_KICK_BIT	1
-+
-+#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
-+#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
-+
-+#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
-+
-+#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
-+
-+/* TODO: needed?
-+extern void nohz_balance_exit_idle(struct rq *rq);
-+#else
-+static inline void nohz_balance_exit_idle(struct rq *rq) { }
-+*/
-+#endif
-+
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+struct irqtime {
-+	u64			total;
-+	u64			tick_delta;
-+	u64			irq_start_time;
-+	struct u64_stats_sync	sync;
-+};
-+
-+DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
-+
-+/*
-+ * Returns the irqtime minus the softirq time computed by ksoftirqd.
-+ * Otherwise ksoftirqd's own runtime would be subtracted from its
-+ * sum_exec_runtime and the total would never move forward.
-+ */
-+static inline u64 irq_time_read(int cpu)
-+{
-+	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
-+	unsigned int seq;
-+	u64 total;
-+
-+	do {
-+		seq = __u64_stats_fetch_begin(&irqtime->sync);
-+		total = irqtime->total;
-+	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
-+
-+	return total;
-+}
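/*
 * Note (illustrative aside): __u64_stats_fetch_begin()/__u64_stats_fetch_retry()
 * form a seqcount read section, so on 32-bit the 64-bit total is re-read until
 * no writer raced with the read; on 64-bit they compile to no-ops and the loop
 * body runs exactly once.
 */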
-+#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
-+
-+#ifdef CONFIG_CPU_FREQ
-+DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
-+#endif /* CONFIG_CPU_FREQ */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+extern int __init sched_tick_offload_init(void);
-+#else
-+static inline int sched_tick_offload_init(void) { return 0; }
-+#endif
-+
-+#ifdef arch_scale_freq_capacity
-+#ifndef arch_scale_freq_invariant
-+#define arch_scale_freq_invariant()	(true)
-+#endif
-+#else /* arch_scale_freq_capacity */
-+#define arch_scale_freq_invariant()	(false)
-+#endif
-+
-+#ifdef CONFIG_SMP
-+unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
-+				 unsigned long min,
-+				 unsigned long max);
-+#endif /* CONFIG_SMP */
-+
-+extern void schedule_idle(void);
-+
-+#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
-+
-+/*
-+ * !! For sched_setattr_nocheck() (kernel) only !!
-+ *
-+ * This is actually gross. :(
-+ *
-+ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
-+ * tasks, but still be able to sleep. We need this on platforms that cannot
-+ * atomically change clock frequency. Remove once fast switching will be
-+ * available on such platforms.
-+ *
-+ * SUGOV stands for SchedUtil GOVernor.
-+ */
-+#define SCHED_FLAG_SUGOV	0x10000000
-+
-+#ifdef CONFIG_MEMBARRIER
-+/*
-+ * The scheduler provides memory barriers required by membarrier between:
-+ * - prior user-space memory accesses and store to rq->membarrier_state,
-+ * - store to rq->membarrier_state and following user-space memory accesses.
-+ * In the same way it provides those guarantees around store to rq->curr.
-+ */
-+static inline void membarrier_switch_mm(struct rq *rq,
-+					struct mm_struct *prev_mm,
-+					struct mm_struct *next_mm)
-+{
-+	int membarrier_state;
-+
-+	if (prev_mm == next_mm)
-+		return;
-+
-+	membarrier_state = atomic_read(&next_mm->membarrier_state);
-+	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
-+		return;
-+
-+	WRITE_ONCE(rq->membarrier_state, membarrier_state);
-+}
-+#else
-+static inline void membarrier_switch_mm(struct rq *rq,
-+					struct mm_struct *prev_mm,
-+					struct mm_struct *next_mm)
-+{
-+}
-+#endif
-+
-+#ifdef CONFIG_NUMA
-+extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
-+#else
-+static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
-+{
-+	return nr_cpu_ids;
-+}
-+#endif
-+
-+extern void swake_up_all_locked(struct swait_queue_head *q);
-+extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
-+
-+extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+extern int preempt_dynamic_mode;
-+extern int sched_dynamic_mode(const char *str);
-+extern void sched_dynamic_update(int mode);
-+#endif
-+
-+static inline void nohz_run_idle_balance(int cpu) { }
-+
-+static inline
-+unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
-+				  struct task_struct *p)
-+{
-+	return util;
-+}
-+
-+static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
-+
-+#ifdef CONFIG_SCHED_MM_CID
-+
-+#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
-+#define MM_CID_SCAN_DELAY	100			/* 100ms */
-+
-+extern raw_spinlock_t cid_lock;
-+extern int use_cid_lock;
-+
-+extern void sched_mm_cid_migrate_from(struct task_struct *t);
-+extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu);
-+extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
-+extern void init_sched_mm_cid(struct task_struct *t);
-+
-+static inline void __mm_cid_put(struct mm_struct *mm, int cid)
-+{
-+	if (cid < 0)
-+		return;
-+	cpumask_clear_cpu(cid, mm_cidmask(mm));
-+}
-+
-+/*
-+ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
-+ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
-+ * be held to transition to other states.
-+ *
-+ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
-+ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
-+ */
-+static inline void mm_cid_put_lazy(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-+	int cid;
-+
-+	lockdep_assert_irqs_disabled();
-+	cid = __this_cpu_read(pcpu_cid->cid);
-+	if (!mm_cid_is_lazy_put(cid) ||
-+	    !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
-+		return;
-+	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
-+}
-+
-+static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
-+{
-+	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-+	int cid, res;
-+
-+	lockdep_assert_irqs_disabled();
-+	cid = __this_cpu_read(pcpu_cid->cid);
-+	for (;;) {
-+		if (mm_cid_is_unset(cid))
-+			return MM_CID_UNSET;
-+		/*
-+		 * Attempt transition from valid or lazy-put to unset.
-+		 */
-+		res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
-+		if (res == cid)
-+			break;
-+		cid = res;
-+	}
-+	return cid;
-+}
-+
-+static inline void mm_cid_put(struct mm_struct *mm)
-+{
-+	int cid;
-+
-+	lockdep_assert_irqs_disabled();
-+	cid = mm_cid_pcpu_unset(mm);
-+	if (cid == MM_CID_UNSET)
-+		return;
-+	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
-+}
-+
-+static inline int __mm_cid_try_get(struct mm_struct *mm)
-+{
-+	struct cpumask *cpumask;
-+	int cid;
-+
-+	cpumask = mm_cidmask(mm);
-+	/*
-+	 * Retry finding first zero bit if the mask is temporarily
-+	 * filled. This only happens during concurrent remote-clear
-+	 * which owns a cid without holding a rq lock.
-+	 */
-+	for (;;) {
-+		cid = cpumask_first_zero(cpumask);
-+		if (cid < nr_cpu_ids)
-+			break;
-+		cpu_relax();
-+	}
-+	if (cpumask_test_and_set_cpu(cid, cpumask))
-+		return -1;
-+	return cid;
-+}
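/*
 * User-space sketch of the "claim the first zero bit" pattern above
 * (illustrative aside, not part of the patch; a single 64-bit word stands in
 * for the per-mm cpumask and plain C11 atomics stand in for the cpumask API):
 */
#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long cid_bitmap;	/* bit n set => cid n in use */

static int cid_try_get(void)
{
	unsigned long snap = atomic_load(&cid_bitmap);
	int bit;

	for (bit = 0; bit < 64; bit++)		/* cpumask_first_zero() */
		if (!(snap & (1UL << bit)))
			break;
	if (bit == 64)
		return -1;			/* temporarily full: caller retries */
	/* cpumask_test_and_set_cpu(): losing the race also means "retry". */
	if (atomic_fetch_or(&cid_bitmap, 1UL << bit) & (1UL << bit))
		return -1;
	return bit;
}

int main(void)
{
	printf("first cid:  %d\n", cid_try_get());	/* 0 */
	printf("second cid: %d\n", cid_try_get());	/* 1 */
	return 0;
}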
-+
-+/*
-+ * Save a snapshot of the current runqueue time of this cpu
-+ * with the per-cpu cid value, allowing to estimate how recently it was used.
-+ */
-+static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
-+{
-+	struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
-+
-+	lockdep_assert_rq_held(rq);
-+	WRITE_ONCE(pcpu_cid->time, rq->clock);
-+}
-+
-+static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
-+{
-+	int cid;
-+
-+	/*
-+	 * All allocations (even those using the cid_lock) are lock-free. If
-+	 * use_cid_lock is set, hold the cid_lock to perform cid allocation to
-+	 * guarantee forward progress.
-+	 */
-+	if (!READ_ONCE(use_cid_lock)) {
-+		cid = __mm_cid_try_get(mm);
-+		if (cid >= 0)
-+			goto end;
-+		raw_spin_lock(&cid_lock);
-+	} else {
-+		raw_spin_lock(&cid_lock);
-+		cid = __mm_cid_try_get(mm);
-+		if (cid >= 0)
-+			goto unlock;
-+	}
-+
-+	/*
-+	 * cid concurrently allocated. Retry while forcing following
-+	 * allocations to use the cid_lock to ensure forward progress.
-+	 */
-+	WRITE_ONCE(use_cid_lock, 1);
-+	/*
-+	 * Set use_cid_lock before allocation. Only care about program order
-+	 * because this is only required for forward progress.
-+	 */
-+	barrier();
-+	/*
-+	 * Retry until it succeeds. It is guaranteed to eventually succeed once
-+	 * all newcoming allocations observe the use_cid_lock flag set.
-+	 */
-+	do {
-+		cid = __mm_cid_try_get(mm);
-+		cpu_relax();
-+	} while (cid < 0);
-+	/*
-+	 * Allocate before clearing use_cid_lock. Only care about
-+	 * program order because this is for forward progress.
-+	 */
-+	barrier();
-+	WRITE_ONCE(use_cid_lock, 0);
-+unlock:
-+	raw_spin_unlock(&cid_lock);
-+end:
-+	mm_cid_snapshot_time(rq, mm);
-+	return cid;
-+}
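/*
 * Recap of the forward-progress scheme above (illustrative aside): the fast
 * path is a single lock-free attempt; on failure the allocator takes cid_lock,
 * sets use_cid_lock so that every later allocation also serializes on
 * cid_lock, spins until a cid frees up, then clears use_cid_lock and drops
 * the lock.
 */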
-+
-+static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
-+{
-+	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-+	struct cpumask *cpumask;
-+	int cid;
-+
-+	lockdep_assert_rq_held(rq);
-+	cpumask = mm_cidmask(mm);
-+	cid = __this_cpu_read(pcpu_cid->cid);
-+	if (mm_cid_is_valid(cid)) {
-+		mm_cid_snapshot_time(rq, mm);
-+		return cid;
-+	}
-+	if (mm_cid_is_lazy_put(cid)) {
-+		if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
-+			__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
-+	}
-+	cid = __mm_cid_get(rq, mm);
-+	__this_cpu_write(pcpu_cid->cid, cid);
-+	return cid;
-+}
-+
-+static inline void switch_mm_cid(struct rq *rq,
-+				 struct task_struct *prev,
-+				 struct task_struct *next)
-+{
-+	/*
-+	 * Provide a memory barrier between rq->curr store and load of
-+	 * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
-+	 *
-+	 * Should be adapted if context_switch() is modified.
-+	 */
-+	if (!next->mm) {                                // to kernel
-+		/*
-+		 * user -> kernel transition does not guarantee a barrier, but
-+		 * we can use the fact that it performs an atomic operation in
-+		 * mmgrab().
-+		 */
-+		if (prev->mm)                           // from user
-+			smp_mb__after_mmgrab();
-+		/*
-+		 * kernel -> kernel transition does not change rq->curr->mm
-+		 * state. It stays NULL.
-+		 */
-+	} else {                                        // to user
-+		/*
-+		 * kernel -> user transition does not provide a barrier
-+		 * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
-+		 * Provide it here.
-+		 */
-+		if (!prev->mm)                          // from kernel
-+			smp_mb();
-+		/*
-+		 * user -> user transition guarantees a memory barrier through
-+		 * switch_mm() when current->mm changes. If current->mm is
-+		 * unchanged, no barrier is needed.
-+		 */
-+	}
-+	if (prev->mm_cid_active) {
-+		mm_cid_snapshot_time(rq, prev->mm);
-+		mm_cid_put_lazy(prev);
-+		prev->mm_cid = -1;
-+	}
-+	if (next->mm_cid_active)
-+		next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
-+}
-+
-+#else
-+static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
-+static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
-+static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu) { }
-+static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
-+static inline void init_sched_mm_cid(struct task_struct *t) { }
-+#endif
-+
-+#ifdef CONFIG_SMP
-+extern struct balance_callback balance_push_callback;
-+
-+static inline void
-+__queue_balance_callback(struct rq *rq,
-+			 struct balance_callback *head)
-+{
-+	lockdep_assert_rq_held(rq);
-+
-+	/*
-+	 * Don't (re)queue an already queued item; nor queue anything when
-+	 * balance_push() is active, see the comment with
-+	 * balance_push_callback.
-+	 */
-+	if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
-+		return;
-+
-+	head->next = rq->balance_callback;
-+	rq->balance_callback = head;
-+}
-+#endif /* CONFIG_SMP */
-+
-+#endif /* ALT_SCHED_H */
-diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
---- a/kernel/sched/bmq.h	1970-01-01 01:00:00.000000000 +0100
-+++ b/kernel/sched/bmq.h	2024-07-16 11:57:44.445146416 +0200
-@@ -0,0 +1,98 @@
-+#define ALT_SCHED_NAME "BMQ"
-+
-+/*
-+ * BMQ only routines
-+ */
-+static inline void boost_task(struct task_struct *p, int n)
-+{
-+	int limit;
-+
-+	switch (p->policy) {
-+	case SCHED_NORMAL:
-+		limit = -MAX_PRIORITY_ADJ;
-+		break;
-+	case SCHED_BATCH:
-+		limit = 0;
-+		break;
-+	default:
-+		return;
-+	}
-+
-+	p->boost_prio = max(limit, p->boost_prio - n);
-+}
-+
-+static inline void deboost_task(struct task_struct *p)
-+{
-+	if (p->boost_prio < MAX_PRIORITY_ADJ)
-+		p->boost_prio++;
-+}
-+
-+/*
-+ * Common interfaces
-+ */
-+static inline void sched_timeslice_imp(const int timeslice_ms) {}
-+
-+/* This API is used in task_prio(); the return value is read by human users */
-+static inline int
-+task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
-+{
-+	return p->prio + p->boost_prio - MIN_NORMAL_PRIO;
-+}
-+
-+static inline int task_sched_prio(const struct task_struct *p)
-+{
-+	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
-+		MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MIN_NORMAL_PRIO) / 2;
-+}
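/*
 * Mapping example derived from the function above (illustrative aside; the
 * exact MIN_NORMAL_PRIO and boost bounds are defined elsewhere in the patch):
 *
 *   - RT tasks:     level = p->prio >> 2, i.e. four RT priorities share one
 *                   bitmap level in the 0..31 range.
 *   - normal tasks: level = MIN_SCHED_NORMAL_PRIO (32)
 *                           + (static prio + boost_prio - MIN_NORMAL_PRIO) / 2,
 *                   so two points of effective priority correspond to one
 *                   queue level, and lowering boost_prio by two lifts the
 *                   task one level within the 32..63 NORMAL range.
 */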
-+
-+#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)	\
-+	prio = task_sched_prio(p);		\
-+	idx = prio;
-+
-+static inline int sched_prio2idx(int prio, struct rq *rq)
-+{
-+	return prio;
-+}
-+
-+static inline int sched_idx2prio(int idx, struct rq *rq)
-+{
-+	return idx;
-+}
-+
-+static inline int sched_rq_prio_idx(struct rq *rq)
-+{
-+	return rq->prio;
-+}
-+
-+inline int task_running_nice(struct task_struct *p)
-+{
-+	return (p->prio + p->boost_prio > DEFAULT_PRIO);
-+}
-+
-+static inline void sched_update_rq_clock(struct rq *rq) {}
-+
-+static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
-+{
-+	deboost_task(p);
-+}
-+
-+static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
-+static void sched_task_fork(struct task_struct *p, struct rq *rq) {}
-+
-+static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
-+{
-+	p->boost_prio = MAX_PRIORITY_ADJ;
-+}
-+
-+static inline void sched_task_ttwu(struct task_struct *p)
-+{
-+	s64 delta = this_rq()->clock_task > p->last_ran;
-+
-+	if (likely(delta > 0))
-+		boost_task(p, delta  >> 22);
-+}
-+
-+static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
-+{
-+	boost_task(p, 1);
-+}
-diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
---- a/kernel/sched/build_policy.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/build_policy.c	2024-07-16 11:57:44.445146416 +0200
-@@ -42,13 +42,19 @@
- 
- #include "idle.c"
- 
-+#ifndef CONFIG_SCHED_ALT
- #include "rt.c"
-+#endif
- 
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- # include "cpudeadline.c"
-+#endif
- # include "pelt.c"
- #endif
- 
- #include "cputime.c"
--#include "deadline.c"
- 
-+#ifndef CONFIG_SCHED_ALT
-+#include "deadline.c"
-+#endif
-diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
---- a/kernel/sched/build_utility.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/build_utility.c	2024-07-16 11:57:44.445146416 +0200
-@@ -84,7 +84,9 @@
- 
- #ifdef CONFIG_SMP
- # include "cpupri.c"
-+#ifndef CONFIG_SCHED_ALT
- # include "stop_task.c"
-+#endif
- # include "topology.c"
- #endif
- 
-diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
---- a/kernel/sched/cpufreq_schedutil.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/cpufreq_schedutil.c	2024-07-16 11:57:44.445146416 +0200
-@@ -197,12 +197,17 @@ unsigned long sugov_effective_cpu_perf(i
- 
- static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
- {
-+#ifndef CONFIG_SCHED_ALT
- 	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
- 
- 	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
- 	util = max(util, boost);
- 	sg_cpu->bw_min = min;
- 	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
-+#else /* CONFIG_SCHED_ALT */
-+	sg_cpu->bw_min = 0;
-+	sg_cpu->util = rq_load_util(cpu_rq(sg_cpu->cpu), arch_scale_cpu_capacity(sg_cpu->cpu));
-+#endif /* CONFIG_SCHED_ALT */
- }
- 
- /**
-@@ -343,8 +348,10 @@ static inline bool sugov_cpu_is_busy(str
-  */
- static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
- {
-+#ifndef CONFIG_SCHED_ALT
- 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
- 		sg_cpu->sg_policy->limits_changed = true;
-+#endif
- }
- 
- static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
-@@ -676,6 +683,7 @@ static int sugov_kthread_create(struct s
- 	}
- 
- 	ret = sched_setattr_nocheck(thread, &attr);
-+
- 	if (ret) {
- 		kthread_stop(thread);
- 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
-diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
---- a/kernel/sched/cputime.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/cputime.c	2024-07-16 11:57:44.445146416 +0200
-@@ -126,7 +126,7 @@ void account_user_time(struct task_struc
- 	p->utime += cputime;
- 	account_group_user_time(p, cputime);
- 
--	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
-+	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
- 
- 	/* Add user time to cpustat. */
- 	task_group_account_field(p, index, cputime);
-@@ -150,7 +150,7 @@ void account_guest_time(struct task_stru
- 	p->gtime += cputime;
- 
- 	/* Add guest time to cpustat. */
--	if (task_nice(p) > 0) {
-+	if (task_running_nice(p)) {
- 		task_group_account_field(p, CPUTIME_NICE, cputime);
- 		cpustat[CPUTIME_GUEST_NICE] += cputime;
- 	} else {
-@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64
- #ifdef CONFIG_64BIT
- static inline u64 read_sum_exec_runtime(struct task_struct *t)
- {
--	return t->se.sum_exec_runtime;
-+	return tsk_seruntime(t);
- }
- #else
- static u64 read_sum_exec_runtime(struct task_struct *t)
-@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct
- 	struct rq *rq;
- 
- 	rq = task_rq_lock(t, &rf);
--	ns = t->se.sum_exec_runtime;
-+	ns = tsk_seruntime(t);
- 	task_rq_unlock(rq, t, &rf);
- 
- 	return ns;
-@@ -617,7 +617,7 @@ out:
- void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
- {
- 	struct task_cputime cputime = {
--		.sum_exec_runtime = p->se.sum_exec_runtime,
-+		.sum_exec_runtime = tsk_seruntime(p),
- 	};
- 
- 	if (task_cputime(p, &cputime.utime, &cputime.stime))
-diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
---- a/kernel/sched/debug.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/debug.c	2024-07-16 11:57:44.445146416 +0200
-@@ -7,6 +7,7 @@
-  * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
-  */
- 
-+#ifndef CONFIG_SCHED_ALT
- /*
-  * This allows printing both to /sys/kernel/debug/sched/debug and
-  * to the console
-@@ -215,6 +216,7 @@ static const struct file_operations sche
- };
- 
- #endif /* SMP */
-+#endif /* !CONFIG_SCHED_ALT */
- 
- #ifdef CONFIG_PREEMPT_DYNAMIC
- 
-@@ -278,6 +280,7 @@ static const struct file_operations sche
- 
- #endif /* CONFIG_PREEMPT_DYNAMIC */
- 
-+#ifndef CONFIG_SCHED_ALT
- __read_mostly bool sched_debug_verbose;
- 
- #ifdef CONFIG_SMP
-@@ -332,6 +335,7 @@ static const struct file_operations sche
- 	.llseek		= seq_lseek,
- 	.release	= seq_release,
- };
-+#endif /* !CONFIG_SCHED_ALT */
- 
- static struct dentry *debugfs_sched;
- 
-@@ -341,14 +345,17 @@ static __init int sched_init_debug(void)
- 
- 	debugfs_sched = debugfs_create_dir("sched", NULL);
- 
-+#ifndef CONFIG_SCHED_ALT
- 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
- 	debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
-+#endif /* !CONFIG_SCHED_ALT */
- #ifdef CONFIG_PREEMPT_DYNAMIC
- 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
- #endif
- 
- 	debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
- 
-+#ifndef CONFIG_SCHED_ALT
- 	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
- 	debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
- 
-@@ -373,11 +380,13 @@ static __init int sched_init_debug(void)
- #endif
- 
- 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
-+#endif /* !CONFIG_SCHED_ALT */
- 
- 	return 0;
- }
- late_initcall(sched_init_debug);
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_SMP
- 
- static cpumask_var_t		sd_sysctl_cpus;
-@@ -1111,6 +1120,7 @@ void proc_sched_set_task(struct task_str
- 	memset(&p->stats, 0, sizeof(p->stats));
- #endif
- }
-+#endif /* !CONFIG_SCHED_ALT */
- 
- void resched_latency_warn(int cpu, u64 latency)
- {
-diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
---- a/kernel/sched/idle.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/idle.c	2024-07-16 11:57:44.449146659 +0200
-@@ -430,6 +430,7 @@ void cpu_startup_entry(enum cpuhp_state
- 		do_idle();
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- /*
-  * idle-task scheduling class.
-  */
-@@ -551,3 +552,4 @@ DEFINE_SCHED_CLASS(idle) = {
- 	.switched_to		= switched_to_idle,
- 	.update_curr		= update_curr_idle,
- };
-+#endif
-diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
---- a/kernel/sched/Makefile	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/Makefile	2024-07-16 11:57:44.441146173 +0200
-@@ -28,7 +28,12 @@ endif
- # These compilation units have roughly the same size and complexity - so their
- # build parallelizes well and finishes roughly at once:
- #
-+ifdef CONFIG_SCHED_ALT
-+obj-y += alt_core.o
-+obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
-+else
- obj-y += core.o
- obj-y += fair.o
-+endif
- obj-y += build_policy.o
- obj-y += build_utility.o
-diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
---- a/kernel/sched/pds.h	1970-01-01 01:00:00.000000000 +0100
-+++ b/kernel/sched/pds.h	2024-07-16 11:57:44.449146659 +0200
-@@ -0,0 +1,134 @@
-+#define ALT_SCHED_NAME "PDS"
-+
-+static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
-+
-+#define SCHED_NORMAL_PRIO_NUM	(32)
-+#define SCHED_EDGE_DELTA	(SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
-+
-+/* PDS assume SCHED_NORMAL_PRIO_NUM is power of 2 */
-+#define SCHED_NORMAL_PRIO_MOD(x)	((x) & (SCHED_NORMAL_PRIO_NUM - 1))
-+
-+/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
-+static __read_mostly int sched_timeslice_shift = 23;
-+
-+/*
-+ * Common interfaces
-+ */
-+static inline int
-+task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
-+{
-+	u64 sched_dl = max(p->deadline, rq->time_edge);
-+
-+#ifdef ALT_SCHED_DEBUG
-+	if (WARN_ONCE(sched_dl - rq->time_edge > NORMAL_PRIO_NUM - 1,
-+		      "pds: task_sched_prio_normal() delta %lld\n", sched_dl - rq->time_edge))
-+		return SCHED_NORMAL_PRIO_NUM - 1;
-+#endif
-+
-+	return sched_dl - rq->time_edge;
-+}
-+
-+static inline int task_sched_prio(const struct task_struct *p)
-+{
-+	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
-+		MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
-+}
-+
-+#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)							\
-+	if (p->prio < MIN_NORMAL_PRIO) {							\
-+		prio = p->prio >> 2;								\
-+		idx = prio;									\
-+	} else {										\
-+		u64 sched_dl = max(p->deadline, rq->time_edge);					\
-+		prio = MIN_SCHED_NORMAL_PRIO + sched_dl - rq->time_edge;			\
-+		idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_dl);			\
-+	}
-+
-+static inline int sched_prio2idx(int sched_prio, struct rq *rq)
-+{
-+	return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
-+		sched_prio :
-+		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
-+}
-+
-+static inline int sched_idx2prio(int sched_idx, struct rq *rq)
-+{
-+	return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
-+		sched_idx :
-+		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
-+}
-+
-+static inline int sched_rq_prio_idx(struct rq *rq)
-+{
-+	return rq->prio_idx;
-+}
-+
-+int task_running_nice(struct task_struct *p)
-+{
-+	return (p->prio > DEFAULT_PRIO);
-+}
-+
-+static inline void sched_update_rq_clock(struct rq *rq)
-+{
-+	struct list_head head;
-+	u64 old = rq->time_edge;
-+	u64 now = rq->clock >> sched_timeslice_shift;
-+	u64 prio, delta;
-+	DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
-+
-+	if (now == old)
-+		return;
-+
-+	rq->time_edge = now;
-+	delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
-+	INIT_LIST_HEAD(&head);
-+
-+	prio = MIN_SCHED_NORMAL_PRIO;
-+	for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
-+		list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
-+				      SCHED_NORMAL_PRIO_MOD(prio + old), &head);
-+
-+	bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
-+	if (!list_empty(&head)) {
-+		u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
-+
-+		__list_splice(&head, rq->queue.heads + idx, rq->queue.heads[idx].next);
-+		set_bit(MIN_SCHED_NORMAL_PRIO, normal);
-+	}
-+	bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
-+		       (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
-+
-+	if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
-+		return;
-+
-+	rq->prio = max_t(u64, MIN_SCHED_NORMAL_PRIO, rq->prio - delta);
-+	rq->prio_idx = sched_prio2idx(rq->prio, rq);
-+}
-+
-+static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
-+{
-+	if (p->prio >= MIN_NORMAL_PRIO)
-+		p->deadline = rq->time_edge + SCHED_EDGE_DELTA +
-+			      (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
-+}
-+
-+static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
-+{
-+	u64 max_dl = rq->time_edge + SCHED_EDGE_DELTA + NICE_WIDTH / 2 - 1;
-+	if (unlikely(p->deadline > max_dl))
-+		p->deadline = max_dl;
-+}
-+
-+static void sched_task_fork(struct task_struct *p, struct rq *rq)
-+{
-+	sched_task_renew(p, rq);
-+}
-+
-+static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
-+{
-+	p->time_slice = sysctl_sched_base_slice;
-+	sched_task_renew(p, rq);
-+}
-+
-+static inline void sched_task_ttwu(struct task_struct *p) {}
-+static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
-diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
---- a/kernel/sched/pelt.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/pelt.c	2024-07-16 11:57:44.449146659 +0200
-@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa,
- 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- /*
-  * sched_entity:
-  *
-@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struc
- 
- 	return 0;
- }
-+#endif
- 
--#ifdef CONFIG_SCHED_HW_PRESSURE
-+#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
- /*
-  * hardware:
-  *
-diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
---- a/kernel/sched/pelt.h	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/pelt.h	2024-07-16 11:57:44.449146659 +0200
-@@ -1,13 +1,15 @@
- #ifdef CONFIG_SMP
- #include "sched-pelt.h"
- 
-+#ifndef CONFIG_SCHED_ALT
- int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
- int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
- int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
- int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
- int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
-+#endif
- 
--#ifdef CONFIG_SCHED_HW_PRESSURE
-+#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
- int update_hw_load_avg(u64 now, struct rq *rq, u64 capacity);
- 
- static inline u64 hw_load_avg(struct rq *rq)
-@@ -44,6 +46,7 @@ static inline u32 get_pelt_divider(struc
- 	return PELT_MIN_DIVIDER + avg->period_contrib;
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- static inline void cfs_se_util_change(struct sched_avg *avg)
- {
- 	unsigned int enqueued;
-@@ -180,9 +183,11 @@ static inline u64 cfs_rq_clock_pelt(stru
- 	return rq_clock_pelt(rq_of(cfs_rq));
- }
- #endif
-+#endif /* CONFIG_SCHED_ALT */
- 
- #else
- 
-+#ifndef CONFIG_SCHED_ALT
- static inline int
- update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
- {
-@@ -200,6 +205,7 @@ update_dl_rq_load_avg(u64 now, struct rq
- {
- 	return 0;
- }
-+#endif
- 
- static inline int
- update_hw_load_avg(u64 now, struct rq *rq, u64 capacity)
-diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
---- a/kernel/sched/sched.h	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/sched.h	2024-07-16 11:57:44.449146659 +0200
-@@ -5,6 +5,10 @@
- #ifndef _KERNEL_SCHED_SCHED_H
- #define _KERNEL_SCHED_SCHED_H
- 
-+#ifdef CONFIG_SCHED_ALT
-+#include "alt_sched.h"
-+#else
-+
- #include <linux/sched/affinity.h>
- #include <linux/sched/autogroup.h>
- #include <linux/sched/cpufreq.h>
-@@ -3481,4 +3485,9 @@ static inline void init_sched_mm_cid(str
- extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
- extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);
- 
-+static inline int task_running_nice(struct task_struct *p)
-+{
-+	return (task_nice(p) > 0);
-+}
-+#endif /* !CONFIG_SCHED_ALT */
- #endif /* _KERNEL_SCHED_SCHED_H */
-diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
---- a/kernel/sched/stats.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/stats.c	2024-07-16 11:57:44.449146659 +0200
-@@ -125,9 +125,11 @@ static int show_schedstat(struct seq_fil
- 	} else {
- 		struct rq *rq;
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- 		struct sched_domain *sd;
- 		int dcount = 0;
- #endif
-+#endif
- 		cpu = (unsigned long)(v - 2);
- 		rq = cpu_rq(cpu);
- 
-@@ -143,6 +145,7 @@ static int show_schedstat(struct seq_fil
- 		seq_printf(seq, "\n");
- 
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- 		/* domain-specific stats */
- 		rcu_read_lock();
- 		for_each_domain(cpu, sd) {
-@@ -171,6 +174,7 @@ static int show_schedstat(struct seq_fil
- 		}
- 		rcu_read_unlock();
- #endif
-+#endif
- 	}
- 	return 0;
- }
-diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
---- a/kernel/sched/stats.h	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/stats.h	2024-07-16 11:57:44.449146659 +0200
-@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart
- 
- #endif /* CONFIG_SCHEDSTATS */
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_FAIR_GROUP_SCHED
- struct sched_entity_stats {
- 	struct sched_entity     se;
-@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity
- #endif
- 	return &task_of(se)->stats;
- }
-+#endif /* CONFIG_SCHED_ALT */
- 
- #ifdef CONFIG_PSI
- void psi_task_change(struct task_struct *task, int clear, int set);
-diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
---- a/kernel/sched/topology.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sched/topology.c	2024-07-16 11:57:44.449146659 +0200
-@@ -3,6 +3,7 @@
-  * Scheduler topology setup/handling methods
-  */
- 
-+#ifndef CONFIG_SCHED_ALT
- #include <linux/bsearch.h>
- 
- DEFINE_MUTEX(sched_domains_mutex);
-@@ -1451,8 +1452,10 @@ static void asym_cpu_capacity_scan(void)
-  */
- 
- static int default_relax_domain_level = -1;
-+#endif /* CONFIG_SCHED_ALT */
- int sched_domain_level_max;
- 
-+#ifndef CONFIG_SCHED_ALT
- static int __init setup_relax_domain_level(char *str)
- {
- 	if (kstrtoint(str, 0, &default_relax_domain_level))
-@@ -1687,6 +1690,7 @@ sd_init(struct sched_domain_topology_lev
- 
- 	return sd;
- }
-+#endif /* CONFIG_SCHED_ALT */
- 
- /*
-  * Topology list, bottom-up.
-@@ -1723,6 +1727,7 @@ void __init set_sched_topology(struct sc
- 	sched_domain_topology_saved = NULL;
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_NUMA
- 
- static const struct cpumask *sd_numa_mask(int cpu)
-@@ -2789,3 +2794,28 @@ void partition_sched_domains(int ndoms_n
- 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
- 	mutex_unlock(&sched_domains_mutex);
- }
-+#else /* CONFIG_SCHED_ALT */
-+DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
-+
-+void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
-+			     struct sched_domain_attr *dattr_new)
-+{}
-+
-+#ifdef CONFIG_NUMA
-+int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
-+{
-+	return best_mask_cpu(cpu, cpus);
-+}
-+
-+int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
-+{
-+	return cpumask_nth(cpu, cpus);
-+}
-+
-+const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
-+{
-+	return ERR_PTR(-EOPNOTSUPP);
-+}
-+EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
-+#endif /* CONFIG_NUMA */
-+#endif
-diff --git a/kernel/sysctl.c b/kernel/sysctl.c
---- a/kernel/sysctl.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/sysctl.c	2024-07-16 11:57:44.449146659 +0200
-@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
- 
- /* Constants used for minimum and maximum */
- 
-+#ifdef CONFIG_SCHED_ALT
-+extern int sched_yield_type;
-+#endif
-+
- #ifdef CONFIG_PERF_EVENTS
- static const int six_hundred_forty_kb = 640 * 1024;
- #endif
-@@ -1912,6 +1916,17 @@ static struct ctl_table kern_table[] = {
- 		.proc_handler	= proc_dointvec,
- 	},
- #endif
-+#ifdef CONFIG_SCHED_ALT
-+	{
-+		.procname	= "yield_type",
-+		.data		= &sched_yield_type,
-+		.maxlen		= sizeof (int),
-+		.mode		= 0644,
-+		.proc_handler	= &proc_dointvec_minmax,
-+		.extra1		= SYSCTL_ZERO,
-+		.extra2		= SYSCTL_TWO,
-+	},
-+#endif
- #if defined(CONFIG_S390) && defined(CONFIG_SMP)
- 	{
- 		.procname	= "spin_retry",
-diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
---- a/kernel/time/hrtimer.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/time/hrtimer.c	2024-07-16 11:57:44.453146902 +0200
-@@ -2074,8 +2074,10 @@ long hrtimer_nanosleep(ktime_t rqtp, con
- 	int ret = 0;
- 	u64 slack;
- 
-+#ifndef CONFIG_SCHED_ALT
- 	slack = current->timer_slack_ns;
--	if (rt_task(current))
-+	if (dl_task(current) || rt_task(current))
-+#endif
- 		slack = 0;
- 
- 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
-diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
---- a/kernel/time/posix-cpu-timers.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/time/posix-cpu-timers.c	2024-07-16 11:57:44.453146902 +0200
-@@ -223,7 +223,7 @@ static void task_sample_cputime(struct t
- 	u64 stime, utime;
- 
- 	task_cputime(p, &utime, &stime);
--	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
-+	store_samples(samples, stime, utime, tsk_seruntime(p));
- }
- 
- static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
-@@ -867,6 +867,7 @@ static void collect_posix_cputimers(stru
- 	}
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- static inline void check_dl_overrun(struct task_struct *tsk)
- {
- 	if (tsk->dl.dl_overrun) {
-@@ -874,6 +875,7 @@ static inline void check_dl_overrun(stru
- 		send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
- 	}
- }
-+#endif
- 
- static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
- {
-@@ -901,8 +903,10 @@ static void check_thread_timers(struct t
- 	u64 samples[CPUCLOCK_MAX];
- 	unsigned long soft;
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (dl_task(tsk))
- 		check_dl_overrun(tsk);
-+#endif
- 
- 	if (expiry_cache_is_inactive(pct))
- 		return;
-@@ -916,7 +920,7 @@ static void check_thread_timers(struct t
- 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
- 	if (soft != RLIM_INFINITY) {
- 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
--		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
-+		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
- 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
- 
- 		/* At the hard limit, send SIGKILL. No further action. */
-@@ -1152,8 +1156,10 @@ static inline bool fastpath_timer_check(
- 			return true;
- 	}
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (dl_task(tsk) && tsk->dl.dl_overrun)
- 		return true;
-+#endif
- 
- 	return false;
- }
-diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
---- a/kernel/trace/trace_selftest.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/trace/trace_selftest.c	2024-07-16 11:57:44.461147389 +0200
-@@ -1155,10 +1155,15 @@ static int trace_wakeup_test_thread(void
- {
- 	/* Make this a -deadline thread */
- 	static const struct sched_attr attr = {
-+#ifdef CONFIG_SCHED_ALT
-+		/* No deadline on BMQ/PDS, use RR */
-+		.sched_policy = SCHED_RR,
-+#else
- 		.sched_policy = SCHED_DEADLINE,
- 		.sched_runtime = 100000ULL,
- 		.sched_deadline = 10000000ULL,
- 		.sched_period = 10000000ULL
-+#endif
- 	};
- 	struct wakeup_test_data *x = data;
- 
-diff --git a/kernel/workqueue.c b/kernel/workqueue.c
---- a/kernel/workqueue.c	2024-07-15 00:43:32.000000000 +0200
-+++ b/kernel/workqueue.c	2024-07-16 11:57:44.465147632 +0200
-@@ -1248,6 +1248,7 @@ static bool kick_pool(struct worker_pool
- 
- 	p = worker->task;
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_SMP
- 	/*
- 	 * Idle @worker is about to execute @work and waking up provides an
-@@ -1277,6 +1278,8 @@ static bool kick_pool(struct worker_pool
- 		}
- 	}
- #endif
-+#endif /* !CONFIG_SCHED_ALT */
-+
- 	wake_up_process(p);
- 	return true;
- }
-@@ -1405,7 +1408,11 @@ void wq_worker_running(struct task_struc
- 	 * CPU intensive auto-detection cares about how long a work item hogged
- 	 * CPU without sleeping. Reset the starting timestamp on wakeup.
- 	 */
-+#ifdef CONFIG_SCHED_ALT
-+	worker->current_at = worker->task->sched_time;
-+#else
- 	worker->current_at = worker->task->se.sum_exec_runtime;
-+#endif
- 
- 	WRITE_ONCE(worker->sleeping, 0);
- }
-@@ -1490,7 +1497,11 @@ void wq_worker_tick(struct task_struct *
- 	 * We probably want to make this prettier in the future.
- 	 */
- 	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
-+#ifdef CONFIG_SCHED_ALT
-+	    worker->task->sched_time - worker->current_at <
-+#else
- 	    worker->task->se.sum_exec_runtime - worker->current_at <
-+#endif
- 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
- 		return;
- 
-@@ -3176,7 +3187,11 @@ __acquires(&pool->lock)
- 	worker->current_func = work->func;
- 	worker->current_pwq = pwq;
- 	if (worker->task)
-+#ifdef CONFIG_SCHED_ALT
-+		worker->current_at = worker->task->sched_time;
-+#else
- 		worker->current_at = worker->task->se.sum_exec_runtime;
-+#endif
- 	work_data = *work_data_bits(work);
- 	worker->current_color = get_work_color(work_data);
- 

diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
deleted file mode 100644
index 409dda6b..00000000
--- a/5021_BMQ-and-PDS-gentoo-defaults.patch
+++ /dev/null
@@ -1,12 +0,0 @@
---- a/init/Kconfig	2023-02-13 08:16:09.534315265 -0500
-+++ b/init/Kconfig	2023-02-13 08:17:24.130237204 -0500
-@@ -867,8 +867,9 @@ config UCLAMP_BUCKETS_COUNT
- 	  If in doubt, use the default value.
- 
- menuconfig SCHED_ALT
-+	depends on X86_64
- 	bool "Alternative CPU Schedulers"
--	default y
-+	default n
- 	help
- 	  This feature enable alternative CPU scheduler"



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-08-09 18:13 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-08-09 18:13 UTC (permalink / raw
  To: gentoo-commits

commit:     b5d74b80983311dd62baedc8e4e94656ed58b8eb
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug  9 18:13:16 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Aug  9 18:13:16 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b5d74b80

libbpf: workaround -Wmaybe-uninitialized false positive

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 ++
 ...workaround-Wmaybe-uninitialized-false-pos.patch | 67 ++++++++++++++++++++++
 2 files changed, 71 insertions(+)

diff --git a/0000_README b/0000_README
index 6ab23d2a..180f4b70 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  2950_jump-label-fix.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/
 Desc:   jump_label: Fix a regression
 
+Patch:  2990_libbpf-workaround-Wmaybe-uninitialized-false-pos.patch
+From:   https://lore.kernel.org/bpf/3ebbe7a4e93a5ddc3a26e2e11d329801d7c8de6b.1723217044.git.sam@gentoo.org/
+Desc:   libbpf: workaround -Wmaybe-uninitialized false positive
+
 Patch:  3000_Support-printing-firmware-info.patch
 From:   https://bugs.gentoo.org/732852
 Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev

diff --git a/2990_libbpf-workaround-Wmaybe-uninitialized-false-pos.patch b/2990_libbpf-workaround-Wmaybe-uninitialized-false-pos.patch
new file mode 100644
index 00000000..86de18d7
--- /dev/null
+++ b/2990_libbpf-workaround-Wmaybe-uninitialized-false-pos.patch
@@ -0,0 +1,67 @@
+From git@z Thu Jan  1 00:00:00 1970
+Subject: [PATCH] libbpf: workaround -Wmaybe-uninitialized false positive
+From: Sam James <sam@gentoo.org>
+Date: Fri, 09 Aug 2024 16:24:04 +0100
+Message-Id: <3ebbe7a4e93a5ddc3a26e2e11d329801d7c8de6b.1723217044.git.sam@gentoo.org>
+MIME-Version: 1.0
+Content-Type: text/plain; charset="utf-8"
+Content-Transfer-Encoding: 8bit
+
+In `elf_close`, we get this with GCC 15 -O3 (at least):
+```
+In function ‘elf_close’,
+    inlined from ‘elf_close’ at elf.c:53:6,
+    inlined from ‘elf_find_func_offset_from_file’ at elf.c:384:2:
+elf.c:57:9: warning: ‘elf_fd.elf’ may be used uninitialized [-Wmaybe-uninitialized]
+   57 |         elf_end(elf_fd->elf);
+      |         ^~~~~~~~~~~~~~~~~~~~
+elf.c: In function ‘elf_find_func_offset_from_file’:
+elf.c:377:23: note: ‘elf_fd.elf’ was declared here
+  377 |         struct elf_fd elf_fd;
+      |                       ^~~~~~
+In function ‘elf_close’,
+    inlined from ‘elf_close’ at elf.c:53:6,
+    inlined from ‘elf_find_func_offset_from_file’ at elf.c:384:2:
+elf.c:58:9: warning: ‘elf_fd.fd’ may be used uninitialized [-Wmaybe-uninitialized]
+   58 |         close(elf_fd->fd);
+      |         ^~~~~~~~~~~~~~~~~
+elf.c: In function ‘elf_find_func_offset_from_file’:
+elf.c:377:23: note: ‘elf_fd.fd’ was declared here
+  377 |         struct elf_fd elf_fd;
+      |                       ^~~~~~
+```
+
+In reality, our use is fine, it's just that GCC doesn't model errno
+here (see linked GCC bug). Suppress -Wmaybe-uninitialized accordingly.
+
+Link: https://gcc.gnu.org/PR114952
+Signed-off-by: Sam James <sam@gentoo.org>
+---
+ tools/lib/bpf/elf.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/tools/lib/bpf/elf.c b/tools/lib/bpf/elf.c
+index c92e02394159e..ee226bb8e1af0 100644
+--- a/tools/lib/bpf/elf.c
++++ b/tools/lib/bpf/elf.c
+@@ -369,6 +369,9 @@ long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name)
+ 	return ret;
+ }
+ 
++#pragma GCC diagnostic push
++/* https://gcc.gnu.org/PR114952 */
++#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
+ /* Find offset of function name in ELF object specified by path. "name" matches
+  * symbol name or name@@LIB for library functions.
+  */
+@@ -384,6 +387,7 @@ long elf_find_func_offset_from_file(const char *binary_path, const char *name)
+ 	elf_close(&elf_fd);
+ 	return ret;
+ }
++#pragma GCC diagnostic pop
+ 
+ struct symbol {
+ 	const char *name;
+-- 
+2.45.2
+
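
For context, the pragmas in the patch above scope the -Wmaybe-uninitialized suppression to a single function rather than disabling the warning build-wide. Below is a minimal standalone sketch of that same GCC pragma pattern; it is not taken from libbpf, and the struct and function names (handle, handle_open, use_handle) are invented for the example.

```c
/*
 * Minimal sketch of scoping a -Wmaybe-uninitialized suppression to one
 * function with GCC diagnostic pragmas. Names are illustrative only.
 */
#include <stdio.h>

struct handle {
	int fd;
};

/* On failure, *h is deliberately left untouched (mirrors the errno-style
 * pattern that GCC has trouble modelling in more complex code). */
static int handle_open(struct handle *h, int want_fail)
{
	if (want_fail)
		return -1;
	h->fd = 42;
	return 0;
}

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
static int use_handle(int want_fail)
{
	struct handle h;

	if (handle_open(&h, want_fail))
		return -1;
	/* Only reached when handle_open() initialized h.fd. */
	printf("fd = %d\n", h.fd);
	return 0;
}
#pragma GCC diagnostic pop

int main(void)
{
	return use_handle(0);
}
```

Compiling with `gcc -O3 -Wmaybe-uninitialized` keeps the warning active everywhere except inside the push/pop region, which is the same trade-off the patch makes around elf_find_func_offset_from_file().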



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-08-11 13:27 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-08-11 13:27 UTC (permalink / raw
  To: gentoo-commits

commit:     e04d2b24f04bf9588f97a3d1cb94143daf632b49
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 11 13:27:27 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Aug 11 13:27:27 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e04d2b24

Linux patch 6.10.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1003_linux-6.10.4.patch | 7592 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7596 insertions(+)

diff --git a/0000_README b/0000_README
index 180f4b70..8befc23c 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-6.10.3.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.3
 
+Patch:  1003_linux-6.10.4.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.4
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1003_linux-6.10.4.patch b/1003_linux-6.10.4.patch
new file mode 100644
index 00000000..a0fa4b2f
--- /dev/null
+++ b/1003_linux-6.10.4.patch
@@ -0,0 +1,7592 @@
+diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
+index d414d3f5592a8..1f901de208bca 100644
+--- a/Documentation/admin-guide/mm/transhuge.rst
++++ b/Documentation/admin-guide/mm/transhuge.rst
+@@ -202,12 +202,11 @@ PMD-mappable transparent hugepage::
+ 
+ 	cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size
+ 
+-khugepaged will be automatically started when one or more hugepage
+-sizes are enabled (either by directly setting "always" or "madvise",
+-or by setting "inherit" while the top-level enabled is set to "always"
+-or "madvise"), and it'll be automatically shutdown when the last
+-hugepage size is disabled (either by directly setting "never", or by
+-setting "inherit" while the top-level enabled is set to "never").
++khugepaged will be automatically started when PMD-sized THP is enabled
++(either of the per-size anon control or the top-level control are set
++to "always" or "madvise"), and it'll be automatically shutdown when
++PMD-sized THP is disabled (when both the per-size anon control and the
++top-level control are "never")
+ 
+ Khugepaged controls
+ -------------------
+diff --git a/Documentation/netlink/specs/ethtool.yaml b/Documentation/netlink/specs/ethtool.yaml
+index 4510e8d1adcb8..238145c31835e 100644
+--- a/Documentation/netlink/specs/ethtool.yaml
++++ b/Documentation/netlink/specs/ethtool.yaml
+@@ -1634,6 +1634,7 @@ operations:
+         request:
+           attributes:
+             - header
++            - context
+         reply:
+           attributes:
+             - header
+@@ -1642,7 +1643,6 @@ operations:
+             - indir
+             - hkey
+             - input_xfrm
+-      dump: *rss-get-op
+     -
+       name: plca-get-cfg
+       doc: Get PLCA params.
+diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst
+index 160bfb0ae8bae..0d8c487be3993 100644
+--- a/Documentation/networking/ethtool-netlink.rst
++++ b/Documentation/networking/ethtool-netlink.rst
+@@ -1800,6 +1800,7 @@ Kernel response contents:
+ 
+ =====================================  ======  ==========================
+   ``ETHTOOL_A_RSS_HEADER``             nested  reply header
++  ``ETHTOOL_A_RSS_CONTEXT``            u32     context number
+   ``ETHTOOL_A_RSS_HFUNC``              u32     RSS hash func
+   ``ETHTOOL_A_RSS_INDIR``              binary  Indir table bytes
+   ``ETHTOOL_A_RSS_HKEY``               binary  Hash key bytes
+diff --git a/Makefile b/Makefile
+index c0af6d8aeb05f..aec5cc0babf8c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/kernel/perf_callchain.c b/arch/arm/kernel/perf_callchain.c
+index 7147edbe56c67..1d230ac9d0eb5 100644
+--- a/arch/arm/kernel/perf_callchain.c
++++ b/arch/arm/kernel/perf_callchain.c
+@@ -85,8 +85,7 @@ static bool
+ callchain_trace(void *data, unsigned long pc)
+ {
+ 	struct perf_callchain_entry_ctx *entry = data;
+-	perf_callchain_store(entry, pc);
+-	return true;
++	return perf_callchain_store(entry, pc) == 0;
+ }
+ 
+ void
+diff --git a/arch/arm/mm/proc.c b/arch/arm/mm/proc.c
+index bdbbf65d1b366..2027845efefb6 100644
+--- a/arch/arm/mm/proc.c
++++ b/arch/arm/mm/proc.c
+@@ -17,7 +17,7 @@ void cpu_arm7tdmi_proc_init(void);
+ __ADDRESSABLE(cpu_arm7tdmi_proc_init);
+ void cpu_arm7tdmi_proc_fin(void);
+ __ADDRESSABLE(cpu_arm7tdmi_proc_fin);
+-void cpu_arm7tdmi_reset(void);
++void cpu_arm7tdmi_reset(unsigned long addr, bool hvc);
+ __ADDRESSABLE(cpu_arm7tdmi_reset);
+ int cpu_arm7tdmi_do_idle(void);
+ __ADDRESSABLE(cpu_arm7tdmi_do_idle);
+@@ -32,7 +32,7 @@ void cpu_arm720_proc_init(void);
+ __ADDRESSABLE(cpu_arm720_proc_init);
+ void cpu_arm720_proc_fin(void);
+ __ADDRESSABLE(cpu_arm720_proc_fin);
+-void cpu_arm720_reset(void);
++void cpu_arm720_reset(unsigned long addr, bool hvc);
+ __ADDRESSABLE(cpu_arm720_reset);
+ int cpu_arm720_do_idle(void);
+ __ADDRESSABLE(cpu_arm720_do_idle);
+@@ -49,7 +49,7 @@ void cpu_arm740_proc_init(void);
+ __ADDRESSABLE(cpu_arm740_proc_init);
+ void cpu_arm740_proc_fin(void);
+ __ADDRESSABLE(cpu_arm740_proc_fin);
+-void cpu_arm740_reset(void);
++void cpu_arm740_reset(unsigned long addr, bool hvc);
+ __ADDRESSABLE(cpu_arm740_reset);
+ int cpu_arm740_do_idle(void);
+ __ADDRESSABLE(cpu_arm740_do_idle);
+@@ -64,7 +64,7 @@ void cpu_arm9tdmi_proc_init(void);
+ __ADDRESSABLE(cpu_arm9tdmi_proc_init);
+ void cpu_arm9tdmi_proc_fin(void);
+ __ADDRESSABLE(cpu_arm9tdmi_proc_fin);
+-void cpu_arm9tdmi_reset(void);
++void cpu_arm9tdmi_reset(unsigned long addr, bool hvc);
+ __ADDRESSABLE(cpu_arm9tdmi_reset);
+ int cpu_arm9tdmi_do_idle(void);
+ __ADDRESSABLE(cpu_arm9tdmi_do_idle);
+@@ -79,7 +79,7 @@ void cpu_arm920_proc_init(void);
+ __ADDRESSABLE(cpu_arm920_proc_init);
+ void cpu_arm920_proc_fin(void);
+ __ADDRESSABLE(cpu_arm920_proc_fin);
+-void cpu_arm920_reset(void);
++void cpu_arm920_reset(unsigned long addr, bool hvc);
+ __ADDRESSABLE(cpu_arm920_reset);
+ int cpu_arm920_do_idle(void);
+ __ADDRESSABLE(cpu_arm920_do_idle);
+@@ -102,7 +102,7 @@ void cpu_arm922_proc_init(void);
+ __ADDRESSABLE(cpu_arm922_proc_init);
+ void cpu_arm922_proc_fin(void);
+ __ADDRESSABLE(cpu_arm922_proc_fin);
+-void cpu_arm922_reset(void);
++void cpu_arm922_reset(unsigned long addr, bool hvc);
+ __ADDRESSABLE(cpu_arm922_reset);
+ int cpu_arm922_do_idle(void);
+ __ADDRESSABLE(cpu_arm922_do_idle);
+@@ -119,7 +119,7 @@ void cpu_arm925_proc_init(void);
+ __ADDRESSABLE(cpu_arm925_proc_init);
+ void cpu_arm925_proc_fin(void);
+ __ADDRESSABLE(cpu_arm925_proc_fin);
+-void cpu_arm925_reset(void);
++void cpu_arm925_reset(unsigned long addr, bool hvc);
+ __ADDRESSABLE(cpu_arm925_reset);
+ int cpu_arm925_do_idle(void);
+ __ADDRESSABLE(cpu_arm925_do_idle);
+@@ -159,7 +159,7 @@ void cpu_arm940_proc_init(void);
+ __ADDRESSABLE(cpu_arm940_proc_init);
+ void cpu_arm940_proc_fin(void);
+ __ADDRESSABLE(cpu_arm940_proc_fin);
+-void cpu_arm940_reset(void);
++void cpu_arm940_reset(unsigned long addr, bool hvc);
+ __ADDRESSABLE(cpu_arm940_reset);
+ int cpu_arm940_do_idle(void);
+ __ADDRESSABLE(cpu_arm940_do_idle);
+@@ -174,7 +174,7 @@ void cpu_arm946_proc_init(void);
+ __ADDRESSABLE(cpu_arm946_proc_init);
+ void cpu_arm946_proc_fin(void);
+ __ADDRESSABLE(cpu_arm946_proc_fin);
+-void cpu_arm946_reset(void);
++void cpu_arm946_reset(unsigned long addr, bool hvc);
+ __ADDRESSABLE(cpu_arm946_reset);
+ int cpu_arm946_do_idle(void);
+ __ADDRESSABLE(cpu_arm946_do_idle);
+@@ -429,7 +429,7 @@ void cpu_v7_proc_init(void);
+ __ADDRESSABLE(cpu_v7_proc_init);
+ void cpu_v7_proc_fin(void);
+ __ADDRESSABLE(cpu_v7_proc_fin);
+-void cpu_v7_reset(void);
++void cpu_v7_reset(unsigned long addr, bool hvc);
+ __ADDRESSABLE(cpu_v7_reset);
+ int cpu_v7_do_idle(void);
+ __ADDRESSABLE(cpu_v7_do_idle);
+diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h
+index 4e753908b8018..a0a5bbae7229e 100644
+--- a/arch/arm64/include/asm/jump_label.h
++++ b/arch/arm64/include/asm/jump_label.h
+@@ -13,6 +13,7 @@
+ #include <linux/types.h>
+ #include <asm/insn.h>
+ 
++#define HAVE_JUMP_LABEL_BATCH
+ #define JUMP_LABEL_NOP_SIZE		AARCH64_INSN_SIZE
+ 
+ #define JUMP_TABLE_ENTRY(key, label)			\
+diff --git a/arch/arm64/kernel/jump_label.c b/arch/arm64/kernel/jump_label.c
+index faf88ec9c48e8..f63ea915d6ad2 100644
+--- a/arch/arm64/kernel/jump_label.c
++++ b/arch/arm64/kernel/jump_label.c
+@@ -7,11 +7,12 @@
+  */
+ #include <linux/kernel.h>
+ #include <linux/jump_label.h>
++#include <linux/smp.h>
+ #include <asm/insn.h>
+ #include <asm/patching.h>
+ 
+-void arch_jump_label_transform(struct jump_entry *entry,
+-			       enum jump_label_type type)
++bool arch_jump_label_transform_queue(struct jump_entry *entry,
++				     enum jump_label_type type)
+ {
+ 	void *addr = (void *)jump_entry_code(entry);
+ 	u32 insn;
+@@ -25,4 +26,10 @@ void arch_jump_label_transform(struct jump_entry *entry,
+ 	}
+ 
+ 	aarch64_insn_patch_text_nosync(addr, insn);
++	return true;
++}
++
++void arch_jump_label_transform_apply(void)
++{
++	kick_all_cpus_sync();
+ }
+diff --git a/arch/mips/boot/dts/loongson/loongson64-2k1000.dtsi b/arch/mips/boot/dts/loongson/loongson64-2k1000.dtsi
+index c0be84a6e81fd..cc7747c5f21f3 100644
+--- a/arch/mips/boot/dts/loongson/loongson64-2k1000.dtsi
++++ b/arch/mips/boot/dts/loongson/loongson64-2k1000.dtsi
+@@ -99,8 +99,8 @@ liointc1: interrupt-controller@1fe11440 {
+ 		rtc0: rtc@1fe07800 {
+ 			compatible = "loongson,ls2k1000-rtc";
+ 			reg = <0 0x1fe07800 0 0x78>;
+-			interrupt-parent = <&liointc0>;
+-			interrupts = <60 IRQ_TYPE_LEVEL_LOW>;
++			interrupt-parent = <&liointc1>;
++			interrupts = <8 IRQ_TYPE_LEVEL_HIGH>;
+ 		};
+ 
+ 		uart0: serial@1fe00000 {
+@@ -108,7 +108,7 @@ uart0: serial@1fe00000 {
+ 			reg = <0 0x1fe00000 0 0x8>;
+ 			clock-frequency = <125000000>;
+ 			interrupt-parent = <&liointc0>;
+-			interrupts = <0 IRQ_TYPE_LEVEL_LOW>;
++			interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
+ 			no-loopback-test;
+ 		};
+ 
+@@ -117,7 +117,6 @@ pci@1a000000 {
+ 			device_type = "pci";
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-			#interrupt-cells = <2>;
+ 
+ 			reg = <0 0x1a000000 0 0x02000000>,
+ 				<0xfe 0x00000000 0 0x20000000>;
+@@ -132,8 +131,8 @@ gmac@3,0 {
+ 						   "pciclass0c03";
+ 
+ 				reg = <0x1800 0x0 0x0 0x0 0x0>;
+-				interrupts = <12 IRQ_TYPE_LEVEL_LOW>,
+-					     <13 IRQ_TYPE_LEVEL_LOW>;
++				interrupts = <12 IRQ_TYPE_LEVEL_HIGH>,
++					     <13 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-names = "macirq", "eth_lpi";
+ 				interrupt-parent = <&liointc0>;
+ 				phy-mode = "rgmii-id";
+@@ -156,8 +155,8 @@ gmac@3,1 {
+ 						   "loongson, pci-gmac";
+ 
+ 				reg = <0x1900 0x0 0x0 0x0 0x0>;
+-				interrupts = <14 IRQ_TYPE_LEVEL_LOW>,
+-					     <15 IRQ_TYPE_LEVEL_LOW>;
++				interrupts = <14 IRQ_TYPE_LEVEL_HIGH>,
++					     <15 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-names = "macirq", "eth_lpi";
+ 				interrupt-parent = <&liointc0>;
+ 				phy-mode = "rgmii-id";
+@@ -179,7 +178,7 @@ ehci@4,1 {
+ 						   "pciclass0c03";
+ 
+ 				reg = <0x2100 0x0 0x0 0x0 0x0>;
+-				interrupts = <18 IRQ_TYPE_LEVEL_LOW>;
++				interrupts = <18 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-parent = <&liointc1>;
+ 			};
+ 
+@@ -190,7 +189,7 @@ ohci@4,2 {
+ 						   "pciclass0c03";
+ 
+ 				reg = <0x2200 0x0 0x0 0x0 0x0>;
+-				interrupts = <19 IRQ_TYPE_LEVEL_LOW>;
++				interrupts = <19 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-parent = <&liointc1>;
+ 			};
+ 
+@@ -201,97 +200,121 @@ sata@8,0 {
+ 						   "pciclass0106";
+ 
+ 				reg = <0x4000 0x0 0x0 0x0 0x0>;
+-				interrupts = <19 IRQ_TYPE_LEVEL_LOW>;
++				interrupts = <19 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-parent = <&liointc0>;
+ 			};
+ 
+-			pci_bridge@9,0 {
++			pcie@9,0 {
+ 				compatible = "pci0014,7a19.0",
+ 						   "pci0014,7a19",
+ 						   "pciclass060400",
+ 						   "pciclass0604";
+ 
+ 				reg = <0x4800 0x0 0x0 0x0 0x0>;
++				#address-cells = <3>;
++				#size-cells = <2>;
++				device_type = "pci";
+ 				#interrupt-cells = <1>;
+-				interrupts = <0 IRQ_TYPE_LEVEL_LOW>;
++				interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-parent = <&liointc1>;
+ 				interrupt-map-mask = <0 0 0 0>;
+-				interrupt-map = <0 0 0 0 &liointc1 0 IRQ_TYPE_LEVEL_LOW>;
++				interrupt-map = <0 0 0 0 &liointc1 0 IRQ_TYPE_LEVEL_HIGH>;
++				ranges;
+ 				external-facing;
+ 			};
+ 
+-			pci_bridge@a,0 {
++			pcie@a,0 {
+ 				compatible = "pci0014,7a09.0",
+ 						   "pci0014,7a09",
+ 						   "pciclass060400",
+ 						   "pciclass0604";
+ 
+ 				reg = <0x5000 0x0 0x0 0x0 0x0>;
++				#address-cells = <3>;
++				#size-cells = <2>;
++				device_type = "pci";
+ 				#interrupt-cells = <1>;
+-				interrupts = <1 IRQ_TYPE_LEVEL_LOW>;
++				interrupts = <1 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-parent = <&liointc1>;
+ 				interrupt-map-mask = <0 0 0 0>;
+-				interrupt-map = <0 0 0 0 &liointc1 1 IRQ_TYPE_LEVEL_LOW>;
++				interrupt-map = <0 0 0 0 &liointc1 1 IRQ_TYPE_LEVEL_HIGH>;
++				ranges;
+ 				external-facing;
+ 			};
+ 
+-			pci_bridge@b,0 {
++			pcie@b,0 {
+ 				compatible = "pci0014,7a09.0",
+ 						   "pci0014,7a09",
+ 						   "pciclass060400",
+ 						   "pciclass0604";
+ 
+ 				reg = <0x5800 0x0 0x0 0x0 0x0>;
++				#address-cells = <3>;
++				#size-cells = <2>;
++				device_type = "pci";
+ 				#interrupt-cells = <1>;
+-				interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
++				interrupts = <2 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-parent = <&liointc1>;
+ 				interrupt-map-mask = <0 0 0 0>;
+-				interrupt-map = <0 0 0 0 &liointc1 2 IRQ_TYPE_LEVEL_LOW>;
++				interrupt-map = <0 0 0 0 &liointc1 2 IRQ_TYPE_LEVEL_HIGH>;
++				ranges;
+ 				external-facing;
+ 			};
+ 
+-			pci_bridge@c,0 {
++			pcie@c,0 {
+ 				compatible = "pci0014,7a09.0",
+ 						   "pci0014,7a09",
+ 						   "pciclass060400",
+ 						   "pciclass0604";
+ 
+ 				reg = <0x6000 0x0 0x0 0x0 0x0>;
++				#address-cells = <3>;
++				#size-cells = <2>;
++				device_type = "pci";
+ 				#interrupt-cells = <1>;
+-				interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
++				interrupts = <3 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-parent = <&liointc1>;
+ 				interrupt-map-mask = <0 0 0 0>;
+-				interrupt-map = <0 0 0 0 &liointc1 3 IRQ_TYPE_LEVEL_LOW>;
++				interrupt-map = <0 0 0 0 &liointc1 3 IRQ_TYPE_LEVEL_HIGH>;
++				ranges;
+ 				external-facing;
+ 			};
+ 
+-			pci_bridge@d,0 {
++			pcie@d,0 {
+ 				compatible = "pci0014,7a19.0",
+ 						   "pci0014,7a19",
+ 						   "pciclass060400",
+ 						   "pciclass0604";
+ 
+ 				reg = <0x6800 0x0 0x0 0x0 0x0>;
++				#address-cells = <3>;
++				#size-cells = <2>;
++				device_type = "pci";
+ 				#interrupt-cells = <1>;
+-				interrupts = <4 IRQ_TYPE_LEVEL_LOW>;
++				interrupts = <4 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-parent = <&liointc1>;
+ 				interrupt-map-mask = <0 0 0 0>;
+-				interrupt-map = <0 0 0 0 &liointc1 4 IRQ_TYPE_LEVEL_LOW>;
++				interrupt-map = <0 0 0 0 &liointc1 4 IRQ_TYPE_LEVEL_HIGH>;
++				ranges;
+ 				external-facing;
+ 			};
+ 
+-			pci_bridge@e,0 {
++			pcie@e,0 {
+ 				compatible = "pci0014,7a09.0",
+ 						   "pci0014,7a09",
+ 						   "pciclass060400",
+ 						   "pciclass0604";
+ 
+ 				reg = <0x7000 0x0 0x0 0x0 0x0>;
++				#address-cells = <3>;
++				#size-cells = <2>;
++				device_type = "pci";
+ 				#interrupt-cells = <1>;
+-				interrupts = <5 IRQ_TYPE_LEVEL_LOW>;
++				interrupts = <5 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-parent = <&liointc1>;
+ 				interrupt-map-mask = <0 0 0 0>;
+-				interrupt-map = <0 0 0 0 &liointc1 5 IRQ_TYPE_LEVEL_LOW>;
++				interrupt-map = <0 0 0 0 &liointc1 5 IRQ_TYPE_LEVEL_HIGH>;
++				ranges;
+ 				external-facing;
+ 			};
+ 
+diff --git a/arch/riscv/kernel/sbi-ipi.c b/arch/riscv/kernel/sbi-ipi.c
+index 1026e22955ccc..0cc5559c08d8f 100644
+--- a/arch/riscv/kernel/sbi-ipi.c
++++ b/arch/riscv/kernel/sbi-ipi.c
+@@ -71,7 +71,7 @@ void __init sbi_ipi_init(void)
+ 	 * the masking/unmasking of virtual IPIs is done
+ 	 * via generic IPI-Mux
+ 	 */
+-	cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
++	cpuhp_setup_state(CPUHP_AP_IRQ_RISCV_SBI_IPI_STARTING,
+ 			  "irqchip/sbi-ipi:starting",
+ 			  sbi_ipi_starting_cpu, NULL);
+ 
+diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
+index 5224f37338022..a9f2b4af8f3f1 100644
+--- a/arch/riscv/mm/fault.c
++++ b/arch/riscv/mm/fault.c
+@@ -61,26 +61,27 @@ static inline void no_context(struct pt_regs *regs, unsigned long addr)
+ 
+ static inline void mm_fault_error(struct pt_regs *regs, unsigned long addr, vm_fault_t fault)
+ {
++	if (!user_mode(regs)) {
++		no_context(regs, addr);
++		return;
++	}
++
+ 	if (fault & VM_FAULT_OOM) {
+ 		/*
+ 		 * We ran out of memory, call the OOM killer, and return the userspace
+ 		 * (which will retry the fault, or kill us if we got oom-killed).
+ 		 */
+-		if (!user_mode(regs)) {
+-			no_context(regs, addr);
+-			return;
+-		}
+ 		pagefault_out_of_memory();
+ 		return;
+ 	} else if (fault & (VM_FAULT_SIGBUS | VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE)) {
+ 		/* Kernel mode? Handle exceptions or die */
+-		if (!user_mode(regs)) {
+-			no_context(regs, addr);
+-			return;
+-		}
+ 		do_trap(regs, SIGBUS, BUS_ADRERR, addr);
+ 		return;
++	} else if (fault & VM_FAULT_SIGSEGV) {
++		do_trap(regs, SIGSEGV, SEGV_MAPERR, addr);
++		return;
+ 	}
++
+ 	BUG();
+ }
+ 
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index e3405e4b99af5..7e25606f858aa 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -233,8 +233,6 @@ static void __init setup_bootmem(void)
+ 	 */
+ 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
+ 
+-	phys_ram_end = memblock_end_of_DRAM();
+-
+ 	/*
+ 	 * Make sure we align the start of the memory on a PMD boundary so that
+ 	 * at worst, we map the linear mapping with PMD mappings.
+@@ -249,6 +247,16 @@ static void __init setup_bootmem(void)
+ 	if (IS_ENABLED(CONFIG_64BIT) && IS_ENABLED(CONFIG_MMU))
+ 		kernel_map.va_pa_offset = PAGE_OFFSET - phys_ram_base;
+ 
++	/*
++	 * The size of the linear page mapping may restrict the amount of
++	 * usable RAM.
++	 */
++	if (IS_ENABLED(CONFIG_64BIT)) {
++		max_mapped_addr = __pa(PAGE_OFFSET) + KERN_VIRT_SIZE;
++		memblock_cap_memory_range(phys_ram_base,
++					  max_mapped_addr - phys_ram_base);
++	}
++
+ 	/*
+ 	 * Reserve physical address space that would be mapped to virtual
+ 	 * addresses greater than (void *)(-PAGE_SIZE) because:
+@@ -265,6 +273,7 @@ static void __init setup_bootmem(void)
+ 		memblock_reserve(max_mapped_addr, (phys_addr_t)-max_mapped_addr);
+ 	}
+ 
++	phys_ram_end = memblock_end_of_DRAM();
+ 	min_low_pfn = PFN_UP(phys_ram_base);
+ 	max_low_pfn = max_pfn = PFN_DOWN(phys_ram_end);
+ 	high_memory = (void *)(__va(PFN_PHYS(max_low_pfn)));
+@@ -1289,8 +1298,6 @@ static void __init create_linear_mapping_page_table(void)
+ 		if (start <= __pa(PAGE_OFFSET) &&
+ 		    __pa(PAGE_OFFSET) < end)
+ 			start = __pa(PAGE_OFFSET);
+-		if (end >= __pa(PAGE_OFFSET) + memory_limit)
+-			end = __pa(PAGE_OFFSET) + memory_limit;
+ 
+ 		create_linear_mapping_range(start, end, 0);
+ 	}
+diff --git a/arch/riscv/purgatory/entry.S b/arch/riscv/purgatory/entry.S
+index 5bcf3af903daa..0e6ca6d5ae4b4 100644
+--- a/arch/riscv/purgatory/entry.S
++++ b/arch/riscv/purgatory/entry.S
+@@ -7,6 +7,7 @@
+  * Author: Li Zhengyu (lizhengyu3@huawei.com)
+  *
+  */
++#include <asm/asm.h>
+ #include <linux/linkage.h>
+ 
+ .text
+@@ -34,6 +35,7 @@ SYM_CODE_END(purgatory_start)
+ 
+ .data
+ 
++.align LGREG
+ SYM_DATA(riscv_kernel_entry, .quad 0)
+ 
+ .end
+diff --git a/arch/s390/kernel/fpu.c b/arch/s390/kernel/fpu.c
+index fa90bbdc5ef94..6f2e87920288a 100644
+--- a/arch/s390/kernel/fpu.c
++++ b/arch/s390/kernel/fpu.c
+@@ -113,7 +113,7 @@ void load_fpu_state(struct fpu *state, int flags)
+ 	int mask;
+ 
+ 	if (flags & KERNEL_FPC)
+-		fpu_lfpc(&state->fpc);
++		fpu_lfpc_safe(&state->fpc);
+ 	if (!cpu_has_vx()) {
+ 		if (flags & KERNEL_VXR_V0V7)
+ 			load_fp_regs_vx(state->vxrs);
+diff --git a/arch/s390/mm/dump_pagetables.c b/arch/s390/mm/dump_pagetables.c
+index ffd07ed7b4af8..9d0805d6dc1b2 100644
+--- a/arch/s390/mm/dump_pagetables.c
++++ b/arch/s390/mm/dump_pagetables.c
+@@ -20,8 +20,8 @@ struct addr_marker {
+ };
+ 
+ enum address_markers_idx {
+-	IDENTITY_BEFORE_NR = 0,
+-	IDENTITY_BEFORE_END_NR,
++	LOWCORE_START_NR = 0,
++	LOWCORE_END_NR,
+ 	AMODE31_START_NR,
+ 	AMODE31_END_NR,
+ 	KERNEL_START_NR,
+@@ -30,8 +30,8 @@ enum address_markers_idx {
+ 	KFENCE_START_NR,
+ 	KFENCE_END_NR,
+ #endif
+-	IDENTITY_AFTER_NR,
+-	IDENTITY_AFTER_END_NR,
++	IDENTITY_START_NR,
++	IDENTITY_END_NR,
+ 	VMEMMAP_NR,
+ 	VMEMMAP_END_NR,
+ 	VMALLOC_NR,
+@@ -49,8 +49,10 @@ enum address_markers_idx {
+ };
+ 
+ static struct addr_marker address_markers[] = {
+-	[IDENTITY_BEFORE_NR]	= {0, "Identity Mapping Start"},
+-	[IDENTITY_BEFORE_END_NR] = {(unsigned long)_stext, "Identity Mapping End"},
++	[LOWCORE_START_NR]	= {0, "Lowcore Start"},
++	[LOWCORE_END_NR]	= {0, "Lowcore End"},
++	[IDENTITY_START_NR]	= {0, "Identity Mapping Start"},
++	[IDENTITY_END_NR]	= {0, "Identity Mapping End"},
+ 	[AMODE31_START_NR]	= {0, "Amode31 Area Start"},
+ 	[AMODE31_END_NR]	= {0, "Amode31 Area End"},
+ 	[KERNEL_START_NR]	= {(unsigned long)_stext, "Kernel Image Start"},
+@@ -59,8 +61,6 @@ static struct addr_marker address_markers[] = {
+ 	[KFENCE_START_NR]	= {0, "KFence Pool Start"},
+ 	[KFENCE_END_NR]		= {0, "KFence Pool End"},
+ #endif
+-	[IDENTITY_AFTER_NR]	= {(unsigned long)_end, "Identity Mapping Start"},
+-	[IDENTITY_AFTER_END_NR]	= {0, "Identity Mapping End"},
+ 	[VMEMMAP_NR]		= {0, "vmemmap Area Start"},
+ 	[VMEMMAP_END_NR]	= {0, "vmemmap Area End"},
+ 	[VMALLOC_NR]		= {0, "vmalloc Area Start"},
+@@ -290,7 +290,10 @@ static int pt_dump_init(void)
+ 	 */
+ 	max_addr = (S390_lowcore.kernel_asce.val & _REGION_ENTRY_TYPE_MASK) >> 2;
+ 	max_addr = 1UL << (max_addr * 11 + 31);
+-	address_markers[IDENTITY_AFTER_END_NR].start_address = ident_map_size;
++	address_markers[LOWCORE_START_NR].start_address = 0;
++	address_markers[LOWCORE_END_NR].start_address = sizeof(struct lowcore);
++	address_markers[IDENTITY_START_NR].start_address = __identity_base;
++	address_markers[IDENTITY_END_NR].start_address = __identity_base + ident_map_size;
+ 	address_markers[AMODE31_START_NR].start_address = (unsigned long)__samode31;
+ 	address_markers[AMODE31_END_NR].start_address = (unsigned long)__eamode31;
+ 	address_markers[MODULES_NR].start_address = MODULES_VADDR;
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 38c1b1f1deaad..101a21fe9c213 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4698,8 +4698,8 @@ static void intel_pmu_check_extra_regs(struct extra_reg *extra_regs);
+ static inline bool intel_pmu_broken_perf_cap(void)
+ {
+ 	/* The Perf Metric (Bit 15) is always cleared */
+-	if ((boot_cpu_data.x86_model == INTEL_FAM6_METEORLAKE) ||
+-	    (boot_cpu_data.x86_model == INTEL_FAM6_METEORLAKE_L))
++	if (boot_cpu_data.x86_vfm == INTEL_METEORLAKE ||
++	    boot_cpu_data.x86_vfm == INTEL_METEORLAKE_L)
+ 		return true;
+ 
+ 	return false;
+@@ -6238,19 +6238,19 @@ __init int intel_pmu_init(void)
+ 	/*
+ 	 * Install the hw-cache-events table:
+ 	 */
+-	switch (boot_cpu_data.x86_model) {
+-	case INTEL_FAM6_CORE_YONAH:
++	switch (boot_cpu_data.x86_vfm) {
++	case INTEL_CORE_YONAH:
+ 		pr_cont("Core events, ");
+ 		name = "core";
+ 		break;
+ 
+-	case INTEL_FAM6_CORE2_MEROM:
++	case INTEL_CORE2_MEROM:
+ 		x86_add_quirk(intel_clovertown_quirk);
+ 		fallthrough;
+ 
+-	case INTEL_FAM6_CORE2_MEROM_L:
+-	case INTEL_FAM6_CORE2_PENRYN:
+-	case INTEL_FAM6_CORE2_DUNNINGTON:
++	case INTEL_CORE2_MEROM_L:
++	case INTEL_CORE2_PENRYN:
++	case INTEL_CORE2_DUNNINGTON:
+ 		memcpy(hw_cache_event_ids, core2_hw_cache_event_ids,
+ 		       sizeof(hw_cache_event_ids));
+ 
+@@ -6262,9 +6262,9 @@ __init int intel_pmu_init(void)
+ 		name = "core2";
+ 		break;
+ 
+-	case INTEL_FAM6_NEHALEM:
+-	case INTEL_FAM6_NEHALEM_EP:
+-	case INTEL_FAM6_NEHALEM_EX:
++	case INTEL_NEHALEM:
++	case INTEL_NEHALEM_EP:
++	case INTEL_NEHALEM_EX:
+ 		memcpy(hw_cache_event_ids, nehalem_hw_cache_event_ids,
+ 		       sizeof(hw_cache_event_ids));
+ 		memcpy(hw_cache_extra_regs, nehalem_hw_cache_extra_regs,
+@@ -6296,11 +6296,11 @@ __init int intel_pmu_init(void)
+ 		name = "nehalem";
+ 		break;
+ 
+-	case INTEL_FAM6_ATOM_BONNELL:
+-	case INTEL_FAM6_ATOM_BONNELL_MID:
+-	case INTEL_FAM6_ATOM_SALTWELL:
+-	case INTEL_FAM6_ATOM_SALTWELL_MID:
+-	case INTEL_FAM6_ATOM_SALTWELL_TABLET:
++	case INTEL_ATOM_BONNELL:
++	case INTEL_ATOM_BONNELL_MID:
++	case INTEL_ATOM_SALTWELL:
++	case INTEL_ATOM_SALTWELL_MID:
++	case INTEL_ATOM_SALTWELL_TABLET:
+ 		memcpy(hw_cache_event_ids, atom_hw_cache_event_ids,
+ 		       sizeof(hw_cache_event_ids));
+ 
+@@ -6313,11 +6313,11 @@ __init int intel_pmu_init(void)
+ 		name = "bonnell";
+ 		break;
+ 
+-	case INTEL_FAM6_ATOM_SILVERMONT:
+-	case INTEL_FAM6_ATOM_SILVERMONT_D:
+-	case INTEL_FAM6_ATOM_SILVERMONT_MID:
+-	case INTEL_FAM6_ATOM_AIRMONT:
+-	case INTEL_FAM6_ATOM_AIRMONT_MID:
++	case INTEL_ATOM_SILVERMONT:
++	case INTEL_ATOM_SILVERMONT_D:
++	case INTEL_ATOM_SILVERMONT_MID:
++	case INTEL_ATOM_AIRMONT:
++	case INTEL_ATOM_AIRMONT_MID:
+ 		memcpy(hw_cache_event_ids, slm_hw_cache_event_ids,
+ 			sizeof(hw_cache_event_ids));
+ 		memcpy(hw_cache_extra_regs, slm_hw_cache_extra_regs,
+@@ -6335,8 +6335,8 @@ __init int intel_pmu_init(void)
+ 		name = "silvermont";
+ 		break;
+ 
+-	case INTEL_FAM6_ATOM_GOLDMONT:
+-	case INTEL_FAM6_ATOM_GOLDMONT_D:
++	case INTEL_ATOM_GOLDMONT:
++	case INTEL_ATOM_GOLDMONT_D:
+ 		memcpy(hw_cache_event_ids, glm_hw_cache_event_ids,
+ 		       sizeof(hw_cache_event_ids));
+ 		memcpy(hw_cache_extra_regs, glm_hw_cache_extra_regs,
+@@ -6362,7 +6362,7 @@ __init int intel_pmu_init(void)
+ 		name = "goldmont";
+ 		break;
+ 
+-	case INTEL_FAM6_ATOM_GOLDMONT_PLUS:
++	case INTEL_ATOM_GOLDMONT_PLUS:
+ 		memcpy(hw_cache_event_ids, glp_hw_cache_event_ids,
+ 		       sizeof(hw_cache_event_ids));
+ 		memcpy(hw_cache_extra_regs, glp_hw_cache_extra_regs,
+@@ -6391,9 +6391,9 @@ __init int intel_pmu_init(void)
+ 		name = "goldmont_plus";
+ 		break;
+ 
+-	case INTEL_FAM6_ATOM_TREMONT_D:
+-	case INTEL_FAM6_ATOM_TREMONT:
+-	case INTEL_FAM6_ATOM_TREMONT_L:
++	case INTEL_ATOM_TREMONT_D:
++	case INTEL_ATOM_TREMONT:
++	case INTEL_ATOM_TREMONT_L:
+ 		x86_pmu.late_ack = true;
+ 		memcpy(hw_cache_event_ids, glp_hw_cache_event_ids,
+ 		       sizeof(hw_cache_event_ids));
+@@ -6420,7 +6420,7 @@ __init int intel_pmu_init(void)
+ 		name = "Tremont";
+ 		break;
+ 
+-	case INTEL_FAM6_ATOM_GRACEMONT:
++	case INTEL_ATOM_GRACEMONT:
+ 		intel_pmu_init_grt(NULL);
+ 		intel_pmu_pebs_data_source_grt();
+ 		x86_pmu.pebs_latency_data = adl_latency_data_small;
+@@ -6432,8 +6432,8 @@ __init int intel_pmu_init(void)
+ 		name = "gracemont";
+ 		break;
+ 
+-	case INTEL_FAM6_ATOM_CRESTMONT:
+-	case INTEL_FAM6_ATOM_CRESTMONT_X:
++	case INTEL_ATOM_CRESTMONT:
++	case INTEL_ATOM_CRESTMONT_X:
+ 		intel_pmu_init_grt(NULL);
+ 		x86_pmu.extra_regs = intel_cmt_extra_regs;
+ 		intel_pmu_pebs_data_source_cmt();
+@@ -6446,9 +6446,9 @@ __init int intel_pmu_init(void)
+ 		name = "crestmont";
+ 		break;
+ 
+-	case INTEL_FAM6_WESTMERE:
+-	case INTEL_FAM6_WESTMERE_EP:
+-	case INTEL_FAM6_WESTMERE_EX:
++	case INTEL_WESTMERE:
++	case INTEL_WESTMERE_EP:
++	case INTEL_WESTMERE_EX:
+ 		memcpy(hw_cache_event_ids, westmere_hw_cache_event_ids,
+ 		       sizeof(hw_cache_event_ids));
+ 		memcpy(hw_cache_extra_regs, nehalem_hw_cache_extra_regs,
+@@ -6477,8 +6477,8 @@ __init int intel_pmu_init(void)
+ 		name = "westmere";
+ 		break;
+ 
+-	case INTEL_FAM6_SANDYBRIDGE:
+-	case INTEL_FAM6_SANDYBRIDGE_X:
++	case INTEL_SANDYBRIDGE:
++	case INTEL_SANDYBRIDGE_X:
+ 		x86_add_quirk(intel_sandybridge_quirk);
+ 		x86_add_quirk(intel_ht_bug);
+ 		memcpy(hw_cache_event_ids, snb_hw_cache_event_ids,
+@@ -6491,7 +6491,7 @@ __init int intel_pmu_init(void)
+ 		x86_pmu.event_constraints = intel_snb_event_constraints;
+ 		x86_pmu.pebs_constraints = intel_snb_pebs_event_constraints;
+ 		x86_pmu.pebs_aliases = intel_pebs_aliases_snb;
+-		if (boot_cpu_data.x86_model == INTEL_FAM6_SANDYBRIDGE_X)
++		if (boot_cpu_data.x86_vfm == INTEL_SANDYBRIDGE_X)
+ 			x86_pmu.extra_regs = intel_snbep_extra_regs;
+ 		else
+ 			x86_pmu.extra_regs = intel_snb_extra_regs;
+@@ -6517,8 +6517,8 @@ __init int intel_pmu_init(void)
+ 		name = "sandybridge";
+ 		break;
+ 
+-	case INTEL_FAM6_IVYBRIDGE:
+-	case INTEL_FAM6_IVYBRIDGE_X:
++	case INTEL_IVYBRIDGE:
++	case INTEL_IVYBRIDGE_X:
+ 		x86_add_quirk(intel_ht_bug);
+ 		memcpy(hw_cache_event_ids, snb_hw_cache_event_ids,
+ 		       sizeof(hw_cache_event_ids));
+@@ -6534,7 +6534,7 @@ __init int intel_pmu_init(void)
+ 		x86_pmu.pebs_constraints = intel_ivb_pebs_event_constraints;
+ 		x86_pmu.pebs_aliases = intel_pebs_aliases_ivb;
+ 		x86_pmu.pebs_prec_dist = true;
+-		if (boot_cpu_data.x86_model == INTEL_FAM6_IVYBRIDGE_X)
++		if (boot_cpu_data.x86_vfm == INTEL_IVYBRIDGE_X)
+ 			x86_pmu.extra_regs = intel_snbep_extra_regs;
+ 		else
+ 			x86_pmu.extra_regs = intel_snb_extra_regs;
+@@ -6556,10 +6556,10 @@ __init int intel_pmu_init(void)
+ 		break;
+ 
+ 
+-	case INTEL_FAM6_HASWELL:
+-	case INTEL_FAM6_HASWELL_X:
+-	case INTEL_FAM6_HASWELL_L:
+-	case INTEL_FAM6_HASWELL_G:
++	case INTEL_HASWELL:
++	case INTEL_HASWELL_X:
++	case INTEL_HASWELL_L:
++	case INTEL_HASWELL_G:
+ 		x86_add_quirk(intel_ht_bug);
+ 		x86_add_quirk(intel_pebs_isolation_quirk);
+ 		x86_pmu.late_ack = true;
+@@ -6589,10 +6589,10 @@ __init int intel_pmu_init(void)
+ 		name = "haswell";
+ 		break;
+ 
+-	case INTEL_FAM6_BROADWELL:
+-	case INTEL_FAM6_BROADWELL_D:
+-	case INTEL_FAM6_BROADWELL_G:
+-	case INTEL_FAM6_BROADWELL_X:
++	case INTEL_BROADWELL:
++	case INTEL_BROADWELL_D:
++	case INTEL_BROADWELL_G:
++	case INTEL_BROADWELL_X:
+ 		x86_add_quirk(intel_pebs_isolation_quirk);
+ 		x86_pmu.late_ack = true;
+ 		memcpy(hw_cache_event_ids, hsw_hw_cache_event_ids, sizeof(hw_cache_event_ids));
+@@ -6631,8 +6631,8 @@ __init int intel_pmu_init(void)
+ 		name = "broadwell";
+ 		break;
+ 
+-	case INTEL_FAM6_XEON_PHI_KNL:
+-	case INTEL_FAM6_XEON_PHI_KNM:
++	case INTEL_XEON_PHI_KNL:
++	case INTEL_XEON_PHI_KNM:
+ 		memcpy(hw_cache_event_ids,
+ 		       slm_hw_cache_event_ids, sizeof(hw_cache_event_ids));
+ 		memcpy(hw_cache_extra_regs,
+@@ -6651,15 +6651,15 @@ __init int intel_pmu_init(void)
+ 		name = "knights-landing";
+ 		break;
+ 
+-	case INTEL_FAM6_SKYLAKE_X:
++	case INTEL_SKYLAKE_X:
+ 		pmem = true;
+ 		fallthrough;
+-	case INTEL_FAM6_SKYLAKE_L:
+-	case INTEL_FAM6_SKYLAKE:
+-	case INTEL_FAM6_KABYLAKE_L:
+-	case INTEL_FAM6_KABYLAKE:
+-	case INTEL_FAM6_COMETLAKE_L:
+-	case INTEL_FAM6_COMETLAKE:
++	case INTEL_SKYLAKE_L:
++	case INTEL_SKYLAKE:
++	case INTEL_KABYLAKE_L:
++	case INTEL_KABYLAKE:
++	case INTEL_COMETLAKE_L:
++	case INTEL_COMETLAKE:
+ 		x86_add_quirk(intel_pebs_isolation_quirk);
+ 		x86_pmu.late_ack = true;
+ 		memcpy(hw_cache_event_ids, skl_hw_cache_event_ids, sizeof(hw_cache_event_ids));
+@@ -6708,16 +6708,16 @@ __init int intel_pmu_init(void)
+ 		name = "skylake";
+ 		break;
+ 
+-	case INTEL_FAM6_ICELAKE_X:
+-	case INTEL_FAM6_ICELAKE_D:
++	case INTEL_ICELAKE_X:
++	case INTEL_ICELAKE_D:
+ 		x86_pmu.pebs_ept = 1;
+ 		pmem = true;
+ 		fallthrough;
+-	case INTEL_FAM6_ICELAKE_L:
+-	case INTEL_FAM6_ICELAKE:
+-	case INTEL_FAM6_TIGERLAKE_L:
+-	case INTEL_FAM6_TIGERLAKE:
+-	case INTEL_FAM6_ROCKETLAKE:
++	case INTEL_ICELAKE_L:
++	case INTEL_ICELAKE:
++	case INTEL_TIGERLAKE_L:
++	case INTEL_TIGERLAKE:
++	case INTEL_ROCKETLAKE:
+ 		x86_pmu.late_ack = true;
+ 		memcpy(hw_cache_event_ids, skl_hw_cache_event_ids, sizeof(hw_cache_event_ids));
+ 		memcpy(hw_cache_extra_regs, skl_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
+@@ -6752,16 +6752,22 @@ __init int intel_pmu_init(void)
+ 		name = "icelake";
+ 		break;
+ 
+-	case INTEL_FAM6_SAPPHIRERAPIDS_X:
+-	case INTEL_FAM6_EMERALDRAPIDS_X:
++	case INTEL_SAPPHIRERAPIDS_X:
++	case INTEL_EMERALDRAPIDS_X:
+ 		x86_pmu.flags |= PMU_FL_MEM_LOADS_AUX;
+ 		x86_pmu.extra_regs = intel_glc_extra_regs;
+-		fallthrough;
+-	case INTEL_FAM6_GRANITERAPIDS_X:
+-	case INTEL_FAM6_GRANITERAPIDS_D:
++		pr_cont("Sapphire Rapids events, ");
++		name = "sapphire_rapids";
++		goto glc_common;
++
++	case INTEL_GRANITERAPIDS_X:
++	case INTEL_GRANITERAPIDS_D:
++		x86_pmu.extra_regs = intel_rwc_extra_regs;
++		pr_cont("Granite Rapids events, ");
++		name = "granite_rapids";
++
++	glc_common:
+ 		intel_pmu_init_glc(NULL);
+-		if (!x86_pmu.extra_regs)
+-			x86_pmu.extra_regs = intel_rwc_extra_regs;
+ 		x86_pmu.pebs_ept = 1;
+ 		x86_pmu.hw_config = hsw_hw_config;
+ 		x86_pmu.get_event_constraints = glc_get_event_constraints;
+@@ -6772,15 +6778,13 @@ __init int intel_pmu_init(void)
+ 		td_attr = glc_td_events_attrs;
+ 		tsx_attr = glc_tsx_events_attrs;
+ 		intel_pmu_pebs_data_source_skl(true);
+-		pr_cont("Sapphire Rapids events, ");
+-		name = "sapphire_rapids";
+ 		break;
+ 
+-	case INTEL_FAM6_ALDERLAKE:
+-	case INTEL_FAM6_ALDERLAKE_L:
+-	case INTEL_FAM6_RAPTORLAKE:
+-	case INTEL_FAM6_RAPTORLAKE_P:
+-	case INTEL_FAM6_RAPTORLAKE_S:
++	case INTEL_ALDERLAKE:
++	case INTEL_ALDERLAKE_L:
++	case INTEL_RAPTORLAKE:
++	case INTEL_RAPTORLAKE_P:
++	case INTEL_RAPTORLAKE_S:
+ 		/*
+ 		 * Alder Lake has 2 types of CPU, core and atom.
+ 		 *
+@@ -6838,8 +6842,8 @@ __init int intel_pmu_init(void)
+ 		name = "alderlake_hybrid";
+ 		break;
+ 
+-	case INTEL_FAM6_METEORLAKE:
+-	case INTEL_FAM6_METEORLAKE_L:
++	case INTEL_METEORLAKE:
++	case INTEL_METEORLAKE_L:
+ 		intel_pmu_init_hybrid(hybrid_big_small);
+ 
+ 		x86_pmu.pebs_latency_data = mtl_latency_data_small;
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 7ecc67deecb09..93900c37349c1 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -3012,6 +3012,9 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 		btintel_set_dsm_reset_method(hdev, &ver_tlv);
+ 
+ 		err = btintel_bootloader_setup_tlv(hdev, &ver_tlv);
++		if (err)
++			goto exit_error;
++
+ 		btintel_register_devcoredump_support(hdev);
+ 		btintel_print_fseq_info(hdev);
+ 		break;
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index 359b68adafc1b..79628ff837e6f 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -253,6 +253,7 @@ config DRM_EXEC
+ config DRM_GPUVM
+ 	tristate
+ 	depends on DRM
++	select DRM_EXEC
+ 	help
+ 	  GPU-VM representation providing helpers to manage a GPUs virtual
+ 	  address space
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index ec888fc6ead8d..13eb2bc69e342 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -1763,7 +1763,7 @@ int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
+ 	struct ttm_operation_ctx ctx = { false, false };
+ 	struct amdgpu_vm *vm = &fpriv->vm;
+ 	struct amdgpu_bo_va_mapping *mapping;
+-	int r;
++	int i, r;
+ 
+ 	addr /= AMDGPU_GPU_PAGE_SIZE;
+ 
+@@ -1778,13 +1778,13 @@ int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
+ 	if (dma_resv_locking_ctx((*bo)->tbo.base.resv) != &parser->exec.ticket)
+ 		return -EINVAL;
+ 
+-	if (!((*bo)->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)) {
+-		(*bo)->flags |= AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
+-		amdgpu_bo_placement_from_domain(*bo, (*bo)->allowed_domains);
+-		r = ttm_bo_validate(&(*bo)->tbo, &(*bo)->placement, &ctx);
+-		if (r)
+-			return r;
+-	}
++	(*bo)->flags |= AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
++	amdgpu_bo_placement_from_domain(*bo, (*bo)->allowed_domains);
++	for (i = 0; i < (*bo)->placement.num_placement; i++)
++		(*bo)->placements[i].flags |= TTM_PL_FLAG_CONTIGUOUS;
++	r = ttm_bo_validate(&(*bo)->tbo, &(*bo)->placement, &ctx);
++	if (r)
++		return r;
+ 
+ 	return amdgpu_ttm_alloc_gart(&(*bo)->tbo);
+ }
+diff --git a/drivers/gpu/drm/ast/ast_dp.c b/drivers/gpu/drm/ast/ast_dp.c
+index 1e9259416980e..e6c7f0d64e995 100644
+--- a/drivers/gpu/drm/ast/ast_dp.c
++++ b/drivers/gpu/drm/ast/ast_dp.c
+@@ -158,7 +158,14 @@ void ast_dp_launch(struct drm_device *dev)
+ 			       ASTDP_HOST_EDID_READ_DONE);
+ }
+ 
++bool ast_dp_power_is_on(struct ast_device *ast)
++{
++	u8 vgacre3;
++
++	vgacre3 = ast_get_index_reg(ast, AST_IO_VGACRI, 0xe3);
+ 
++	return !(vgacre3 & AST_DP_PHY_SLEEP);
++}
+ 
+ void ast_dp_power_on_off(struct drm_device *dev, bool on)
+ {
+diff --git a/drivers/gpu/drm/ast/ast_drv.c b/drivers/gpu/drm/ast/ast_drv.c
+index f8c49ba68e789..af2368f6f00f4 100644
+--- a/drivers/gpu/drm/ast/ast_drv.c
++++ b/drivers/gpu/drm/ast/ast_drv.c
+@@ -391,6 +391,11 @@ static int ast_drm_freeze(struct drm_device *dev)
+ 
+ static int ast_drm_thaw(struct drm_device *dev)
+ {
++	struct ast_device *ast = to_ast_device(dev);
++
++	ast_enable_vga(ast->ioregs);
++	ast_open_key(ast->ioregs);
++	ast_enable_mmio(dev->dev, ast->ioregs);
+ 	ast_post_gpu(dev);
+ 
+ 	return drm_mode_config_helper_resume(dev);
+diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
+index ba3d86973995f..47bab5596c16e 100644
+--- a/drivers/gpu/drm/ast/ast_drv.h
++++ b/drivers/gpu/drm/ast/ast_drv.h
+@@ -472,6 +472,7 @@ void ast_init_3rdtx(struct drm_device *dev);
+ bool ast_astdp_is_connected(struct ast_device *ast);
+ int ast_astdp_read_edid(struct drm_device *dev, u8 *ediddata);
+ void ast_dp_launch(struct drm_device *dev);
++bool ast_dp_power_is_on(struct ast_device *ast);
+ void ast_dp_power_on_off(struct drm_device *dev, bool no);
+ void ast_dp_set_on_off(struct drm_device *dev, bool no);
+ void ast_dp_set_mode(struct drm_crtc *crtc, struct ast_vbios_mode_info *vbios_mode);
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index 6695af70768f9..88f830a7d285a 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -28,6 +28,7 @@
+  * Authors: Dave Airlie <airlied@redhat.com>
+  */
+ 
++#include <linux/delay.h>
+ #include <linux/export.h>
+ #include <linux/pci.h>
+ 
+@@ -1641,11 +1642,35 @@ static int ast_astdp_connector_helper_detect_ctx(struct drm_connector *connector
+ 						 struct drm_modeset_acquire_ctx *ctx,
+ 						 bool force)
+ {
++	struct drm_device *dev = connector->dev;
+ 	struct ast_device *ast = to_ast_device(connector->dev);
++	enum drm_connector_status status = connector_status_disconnected;
++	struct drm_connector_state *connector_state = connector->state;
++	bool is_active = false;
++
++	mutex_lock(&ast->modeset_lock);
++
++	if (connector_state && connector_state->crtc) {
++		struct drm_crtc_state *crtc_state = connector_state->crtc->state;
++
++		if (crtc_state && crtc_state->active)
++			is_active = true;
++	}
++
++	if (!is_active && !ast_dp_power_is_on(ast)) {
++		ast_dp_power_on_off(dev, true);
++		msleep(50);
++	}
+ 
+ 	if (ast_astdp_is_connected(ast))
+-		return connector_status_connected;
+-	return connector_status_disconnected;
++		status = connector_status_connected;
++
++	if (!is_active && status == connector_status_disconnected)
++		ast_dp_power_on_off(dev, false);
++
++	mutex_unlock(&ast->modeset_lock);
++
++	return status;
+ }
+ 
+ static const struct drm_connector_helper_funcs ast_astdp_connector_helper_funcs = {
+diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c
+index fc16fddee5c59..02b1235c6d619 100644
+--- a/drivers/gpu/drm/drm_atomic_uapi.c
++++ b/drivers/gpu/drm/drm_atomic_uapi.c
+@@ -1066,7 +1066,10 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
+ 			break;
+ 		}
+ 
+-		if (async_flip && prop != config->prop_fb_id) {
++		if (async_flip &&
++		    prop != config->prop_fb_id &&
++		    prop != config->prop_in_fence_fd &&
++		    prop != config->prop_fb_damage_clips) {
+ 			ret = drm_atomic_plane_get_property(plane, plane_state,
+ 							    prop, &old_val);
+ 			ret = drm_atomic_check_prop_changes(ret, old_val, prop_value, prop);
+diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
+index 2803ac111bbd8..bfedcbf516dbe 100644
+--- a/drivers/gpu/drm/drm_client.c
++++ b/drivers/gpu/drm/drm_client.c
+@@ -355,7 +355,7 @@ int drm_client_buffer_vmap_local(struct drm_client_buffer *buffer,
+ 
+ err_drm_gem_vmap_unlocked:
+ 	drm_gem_unlock(gem);
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL(drm_client_buffer_vmap_local);
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_dpll_mgr.c b/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
+index 90998b037349d..292d163036b12 100644
+--- a/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
++++ b/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
+@@ -1658,7 +1658,7 @@ static void skl_wrpll_params_populate(struct skl_wrpll_params *params,
+ }
+ 
+ static int
+-skl_ddi_calculate_wrpll(int clock /* in Hz */,
++skl_ddi_calculate_wrpll(int clock,
+ 			int ref_clock,
+ 			struct skl_wrpll_params *wrpll_params)
+ {
+@@ -1683,7 +1683,7 @@ skl_ddi_calculate_wrpll(int clock /* in Hz */,
+ 	};
+ 	unsigned int dco, d, i;
+ 	unsigned int p0, p1, p2;
+-	u64 afe_clock = clock * 5; /* AFE Clock is 5x Pixel clock */
++	u64 afe_clock = (u64)clock * 1000 * 5; /* AFE Clock is 5x Pixel clock, in Hz */
+ 
+ 	for (d = 0; d < ARRAY_SIZE(dividers); d++) {
+ 		for (dco = 0; dco < ARRAY_SIZE(dco_central_freq); dco++) {
+@@ -1808,7 +1808,7 @@ static int skl_ddi_hdmi_pll_dividers(struct intel_crtc_state *crtc_state)
+ 	struct skl_wrpll_params wrpll_params = {};
+ 	int ret;
+ 
+-	ret = skl_ddi_calculate_wrpll(crtc_state->port_clock * 1000,
++	ret = skl_ddi_calculate_wrpll(crtc_state->port_clock,
+ 				      i915->display.dpll.ref_clks.nssc, &wrpll_params);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/gpu/drm/i915/display/intel_hdcp_regs.h b/drivers/gpu/drm/i915/display/intel_hdcp_regs.h
+index a568a457e5326..f590d7f48ba74 100644
+--- a/drivers/gpu/drm/i915/display/intel_hdcp_regs.h
++++ b/drivers/gpu/drm/i915/display/intel_hdcp_regs.h
+@@ -251,7 +251,7 @@
+ #define HDCP2_STREAM_STATUS(dev_priv, trans, port) \
+ 					(TRANS_HDCP(dev_priv) ? \
+ 					 TRANS_HDCP2_STREAM_STATUS(trans) : \
+-					 PIPE_HDCP2_STREAM_STATUS(pipe))
++					 PIPE_HDCP2_STREAM_STATUS(port))
+ 
+ #define _PORTA_HDCP2_AUTH_STREAM		0x66F00
+ #define _PORTB_HDCP2_AUTH_STREAM		0x66F04
+diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
+index 0b1cd4c7a525f..025a79fe5920e 100644
+--- a/drivers/gpu/drm/i915/i915_perf.c
++++ b/drivers/gpu/drm/i915/i915_perf.c
+@@ -2748,26 +2748,6 @@ oa_configure_all_contexts(struct i915_perf_stream *stream,
+ 	return 0;
+ }
+ 
+-static int
+-gen12_configure_all_contexts(struct i915_perf_stream *stream,
+-			     const struct i915_oa_config *oa_config,
+-			     struct i915_active *active)
+-{
+-	struct flex regs[] = {
+-		{
+-			GEN8_R_PWR_CLK_STATE(RENDER_RING_BASE),
+-			CTX_R_PWR_CLK_STATE,
+-		},
+-	};
+-
+-	if (stream->engine->class != RENDER_CLASS)
+-		return 0;
+-
+-	return oa_configure_all_contexts(stream,
+-					 regs, ARRAY_SIZE(regs),
+-					 active);
+-}
+-
+ static int
+ lrc_configure_all_contexts(struct i915_perf_stream *stream,
+ 			   const struct i915_oa_config *oa_config,
+@@ -2874,7 +2854,6 @@ gen12_enable_metric_set(struct i915_perf_stream *stream,
+ {
+ 	struct drm_i915_private *i915 = stream->perf->i915;
+ 	struct intel_uncore *uncore = stream->uncore;
+-	struct i915_oa_config *oa_config = stream->oa_config;
+ 	bool periodic = stream->periodic;
+ 	u32 period_exponent = stream->period_exponent;
+ 	u32 sqcnt1;
+@@ -2918,15 +2897,6 @@ gen12_enable_metric_set(struct i915_perf_stream *stream,
+ 
+ 	intel_uncore_rmw(uncore, GEN12_SQCNT1, 0, sqcnt1);
+ 
+-	/*
+-	 * Update all contexts prior writing the mux configurations as we need
+-	 * to make sure all slices/subslices are ON before writing to NOA
+-	 * registers.
+-	 */
+-	ret = gen12_configure_all_contexts(stream, oa_config, active);
+-	if (ret)
+-		return ret;
+-
+ 	/*
+ 	 * For Gen12, performance counters are context
+ 	 * saved/restored. Only enable it for the context that
+@@ -2980,9 +2950,6 @@ static void gen12_disable_metric_set(struct i915_perf_stream *stream)
+ 				   _MASKED_BIT_DISABLE(GEN12_DISABLE_DOP_GATING));
+ 	}
+ 
+-	/* Reset all contexts' slices/subslices configurations. */
+-	gen12_configure_all_contexts(stream, NULL, NULL);
+-
+ 	/* disable the context save/restore or OAR counters */
+ 	if (stream->ctx)
+ 		gen12_configure_oar_context(stream, NULL);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
+index b58ab595faf82..cd95446d68511 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
++++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
+@@ -64,7 +64,8 @@ struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
+ 	 * to the caller, instead of a normal nouveau_bo ttm reference. */
+ 	ret = drm_gem_object_init(dev, &nvbo->bo.base, size);
+ 	if (ret) {
+-		nouveau_bo_ref(NULL, &nvbo);
++		drm_gem_object_release(&nvbo->bo.base);
++		kfree(nvbo);
+ 		obj = ERR_PTR(-ENOMEM);
+ 		goto unlock;
+ 	}
+diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+index ee02cd833c5e4..84a36fe7c37dd 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+@@ -1803,6 +1803,7 @@ nouveau_uvmm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
+ {
+ 	struct nouveau_bo *nvbo = nouveau_gem_object(vm_bo->obj);
+ 
++	nouveau_bo_placement_set(nvbo, nvbo->valid_domains, 0);
+ 	return nouveau_bo_validate(nvbo, true, false);
+ }
+ 
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index a2c516fe6d796..1d535abedc57b 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -556,6 +556,10 @@ void v3d_mmu_insert_ptes(struct v3d_bo *bo);
+ void v3d_mmu_remove_ptes(struct v3d_bo *bo);
+ 
+ /* v3d_sched.c */
++void v3d_timestamp_query_info_free(struct v3d_timestamp_query_info *query_info,
++				   unsigned int count);
++void v3d_performance_query_info_free(struct v3d_performance_query_info *query_info,
++				     unsigned int count);
+ void v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue);
+ int v3d_sched_init(struct v3d_dev *v3d);
+ void v3d_sched_fini(struct v3d_dev *v3d);
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index 7cd8c335cd9b7..30d5366d62883 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -73,24 +73,44 @@ v3d_sched_job_free(struct drm_sched_job *sched_job)
+ 	v3d_job_cleanup(job);
+ }
+ 
++void
++v3d_timestamp_query_info_free(struct v3d_timestamp_query_info *query_info,
++			      unsigned int count)
++{
++	if (query_info->queries) {
++		unsigned int i;
++
++		for (i = 0; i < count; i++)
++			drm_syncobj_put(query_info->queries[i].syncobj);
++
++		kvfree(query_info->queries);
++	}
++}
++
++void
++v3d_performance_query_info_free(struct v3d_performance_query_info *query_info,
++				unsigned int count)
++{
++	if (query_info->queries) {
++		unsigned int i;
++
++		for (i = 0; i < count; i++)
++			drm_syncobj_put(query_info->queries[i].syncobj);
++
++		kvfree(query_info->queries);
++	}
++}
++
+ static void
+ v3d_cpu_job_free(struct drm_sched_job *sched_job)
+ {
+ 	struct v3d_cpu_job *job = to_cpu_job(sched_job);
+-	struct v3d_timestamp_query_info *timestamp_query = &job->timestamp_query;
+-	struct v3d_performance_query_info *performance_query = &job->performance_query;
+ 
+-	if (timestamp_query->queries) {
+-		for (int i = 0; i < timestamp_query->count; i++)
+-			drm_syncobj_put(timestamp_query->queries[i].syncobj);
+-		kvfree(timestamp_query->queries);
+-	}
++	v3d_timestamp_query_info_free(&job->timestamp_query,
++				      job->timestamp_query.count);
+ 
+-	if (performance_query->queries) {
+-		for (int i = 0; i < performance_query->count; i++)
+-			drm_syncobj_put(performance_query->queries[i].syncobj);
+-		kvfree(performance_query->queries);
+-	}
++	v3d_performance_query_info_free(&job->performance_query,
++					job->performance_query.count);
+ 
+ 	v3d_job_cleanup(&job->base);
+ }
+diff --git a/drivers/gpu/drm/v3d/v3d_submit.c b/drivers/gpu/drm/v3d/v3d_submit.c
+index 88f63d526b223..4cdfabbf4964f 100644
+--- a/drivers/gpu/drm/v3d/v3d_submit.c
++++ b/drivers/gpu/drm/v3d/v3d_submit.c
+@@ -452,6 +452,8 @@ v3d_get_cpu_timestamp_query_params(struct drm_file *file_priv,
+ {
+ 	u32 __user *offsets, *syncs;
+ 	struct drm_v3d_timestamp_query timestamp;
++	unsigned int i;
++	int err;
+ 
+ 	if (!job) {
+ 		DRM_DEBUG("CPU job extension was attached to a GPU job.\n");
+@@ -480,26 +482,34 @@ v3d_get_cpu_timestamp_query_params(struct drm_file *file_priv,
+ 	offsets = u64_to_user_ptr(timestamp.offsets);
+ 	syncs = u64_to_user_ptr(timestamp.syncs);
+ 
+-	for (int i = 0; i < timestamp.count; i++) {
++	for (i = 0; i < timestamp.count; i++) {
+ 		u32 offset, sync;
+ 
+ 		if (copy_from_user(&offset, offsets++, sizeof(offset))) {
+-			kvfree(job->timestamp_query.queries);
+-			return -EFAULT;
++			err = -EFAULT;
++			goto error;
+ 		}
+ 
+ 		job->timestamp_query.queries[i].offset = offset;
+ 
+ 		if (copy_from_user(&sync, syncs++, sizeof(sync))) {
+-			kvfree(job->timestamp_query.queries);
+-			return -EFAULT;
++			err = -EFAULT;
++			goto error;
+ 		}
+ 
+ 		job->timestamp_query.queries[i].syncobj = drm_syncobj_find(file_priv, sync);
++		if (!job->timestamp_query.queries[i].syncobj) {
++			err = -ENOENT;
++			goto error;
++		}
+ 	}
+ 	job->timestamp_query.count = timestamp.count;
+ 
+ 	return 0;
++
++error:
++	v3d_timestamp_query_info_free(&job->timestamp_query, i);
++	return err;
+ }
+ 
+ static int
+@@ -509,6 +519,8 @@ v3d_get_cpu_reset_timestamp_params(struct drm_file *file_priv,
+ {
+ 	u32 __user *syncs;
+ 	struct drm_v3d_reset_timestamp_query reset;
++	unsigned int i;
++	int err;
+ 
+ 	if (!job) {
+ 		DRM_DEBUG("CPU job extension was attached to a GPU job.\n");
+@@ -533,21 +545,29 @@ v3d_get_cpu_reset_timestamp_params(struct drm_file *file_priv,
+ 
+ 	syncs = u64_to_user_ptr(reset.syncs);
+ 
+-	for (int i = 0; i < reset.count; i++) {
++	for (i = 0; i < reset.count; i++) {
+ 		u32 sync;
+ 
+ 		job->timestamp_query.queries[i].offset = reset.offset + 8 * i;
+ 
+ 		if (copy_from_user(&sync, syncs++, sizeof(sync))) {
+-			kvfree(job->timestamp_query.queries);
+-			return -EFAULT;
++			err = -EFAULT;
++			goto error;
+ 		}
+ 
+ 		job->timestamp_query.queries[i].syncobj = drm_syncobj_find(file_priv, sync);
++		if (!job->timestamp_query.queries[i].syncobj) {
++			err = -ENOENT;
++			goto error;
++		}
+ 	}
+ 	job->timestamp_query.count = reset.count;
+ 
+ 	return 0;
++
++error:
++	v3d_timestamp_query_info_free(&job->timestamp_query, i);
++	return err;
+ }
+ 
+ /* Get data for the copy timestamp query results job submission. */
+@@ -558,7 +578,8 @@ v3d_get_cpu_copy_query_results_params(struct drm_file *file_priv,
+ {
+ 	u32 __user *offsets, *syncs;
+ 	struct drm_v3d_copy_timestamp_query copy;
+-	int i;
++	unsigned int i;
++	int err;
+ 
+ 	if (!job) {
+ 		DRM_DEBUG("CPU job extension was attached to a GPU job.\n");
+@@ -591,18 +612,22 @@ v3d_get_cpu_copy_query_results_params(struct drm_file *file_priv,
+ 		u32 offset, sync;
+ 
+ 		if (copy_from_user(&offset, offsets++, sizeof(offset))) {
+-			kvfree(job->timestamp_query.queries);
+-			return -EFAULT;
++			err = -EFAULT;
++			goto error;
+ 		}
+ 
+ 		job->timestamp_query.queries[i].offset = offset;
+ 
+ 		if (copy_from_user(&sync, syncs++, sizeof(sync))) {
+-			kvfree(job->timestamp_query.queries);
+-			return -EFAULT;
++			err = -EFAULT;
++			goto error;
+ 		}
+ 
+ 		job->timestamp_query.queries[i].syncobj = drm_syncobj_find(file_priv, sync);
++		if (!job->timestamp_query.queries[i].syncobj) {
++			err = -ENOENT;
++			goto error;
++		}
+ 	}
+ 	job->timestamp_query.count = copy.count;
+ 
+@@ -613,6 +638,10 @@ v3d_get_cpu_copy_query_results_params(struct drm_file *file_priv,
+ 	job->copy.stride = copy.stride;
+ 
+ 	return 0;
++
++error:
++	v3d_timestamp_query_info_free(&job->timestamp_query, i);
++	return err;
+ }
+ 
+ static int
+@@ -623,6 +652,8 @@ v3d_get_cpu_reset_performance_params(struct drm_file *file_priv,
+ 	u32 __user *syncs;
+ 	u64 __user *kperfmon_ids;
+ 	struct drm_v3d_reset_performance_query reset;
++	unsigned int i, j;
++	int err;
+ 
+ 	if (!job) {
+ 		DRM_DEBUG("CPU job extension was attached to a GPU job.\n");
+@@ -637,6 +668,9 @@ v3d_get_cpu_reset_performance_params(struct drm_file *file_priv,
+ 	if (copy_from_user(&reset, ext, sizeof(reset)))
+ 		return -EFAULT;
+ 
++	if (reset.nperfmons > V3D_MAX_PERFMONS)
++		return -EINVAL;
++
+ 	job->job_type = V3D_CPU_JOB_TYPE_RESET_PERFORMANCE_QUERY;
+ 
+ 	job->performance_query.queries = kvmalloc_array(reset.count,
+@@ -648,39 +682,47 @@ v3d_get_cpu_reset_performance_params(struct drm_file *file_priv,
+ 	syncs = u64_to_user_ptr(reset.syncs);
+ 	kperfmon_ids = u64_to_user_ptr(reset.kperfmon_ids);
+ 
+-	for (int i = 0; i < reset.count; i++) {
++	for (i = 0; i < reset.count; i++) {
+ 		u32 sync;
+ 		u64 ids;
+ 		u32 __user *ids_pointer;
+ 		u32 id;
+ 
+ 		if (copy_from_user(&sync, syncs++, sizeof(sync))) {
+-			kvfree(job->performance_query.queries);
+-			return -EFAULT;
++			err = -EFAULT;
++			goto error;
+ 		}
+ 
+-		job->performance_query.queries[i].syncobj = drm_syncobj_find(file_priv, sync);
+-
+ 		if (copy_from_user(&ids, kperfmon_ids++, sizeof(ids))) {
+-			kvfree(job->performance_query.queries);
+-			return -EFAULT;
++			err = -EFAULT;
++			goto error;
+ 		}
+ 
+ 		ids_pointer = u64_to_user_ptr(ids);
+ 
+-		for (int j = 0; j < reset.nperfmons; j++) {
++		for (j = 0; j < reset.nperfmons; j++) {
+ 			if (copy_from_user(&id, ids_pointer++, sizeof(id))) {
+-				kvfree(job->performance_query.queries);
+-				return -EFAULT;
++				err = -EFAULT;
++				goto error;
+ 			}
+ 
+ 			job->performance_query.queries[i].kperfmon_ids[j] = id;
+ 		}
++
++		job->performance_query.queries[i].syncobj = drm_syncobj_find(file_priv, sync);
++		if (!job->performance_query.queries[i].syncobj) {
++			err = -ENOENT;
++			goto error;
++		}
+ 	}
+ 	job->performance_query.count = reset.count;
+ 	job->performance_query.nperfmons = reset.nperfmons;
+ 
+ 	return 0;
++
++error:
++	v3d_performance_query_info_free(&job->performance_query, i);
++	return err;
+ }
+ 
+ static int
+@@ -691,6 +733,8 @@ v3d_get_cpu_copy_performance_query_params(struct drm_file *file_priv,
+ 	u32 __user *syncs;
+ 	u64 __user *kperfmon_ids;
+ 	struct drm_v3d_copy_performance_query copy;
++	unsigned int i, j;
++	int err;
+ 
+ 	if (!job) {
+ 		DRM_DEBUG("CPU job extension was attached to a GPU job.\n");
+@@ -708,6 +752,9 @@ v3d_get_cpu_copy_performance_query_params(struct drm_file *file_priv,
+ 	if (copy.pad)
+ 		return -EINVAL;
+ 
++	if (copy.nperfmons > V3D_MAX_PERFMONS)
++		return -EINVAL;
++
+ 	job->job_type = V3D_CPU_JOB_TYPE_COPY_PERFORMANCE_QUERY;
+ 
+ 	job->performance_query.queries = kvmalloc_array(copy.count,
+@@ -719,34 +766,38 @@ v3d_get_cpu_copy_performance_query_params(struct drm_file *file_priv,
+ 	syncs = u64_to_user_ptr(copy.syncs);
+ 	kperfmon_ids = u64_to_user_ptr(copy.kperfmon_ids);
+ 
+-	for (int i = 0; i < copy.count; i++) {
++	for (i = 0; i < copy.count; i++) {
+ 		u32 sync;
+ 		u64 ids;
+ 		u32 __user *ids_pointer;
+ 		u32 id;
+ 
+ 		if (copy_from_user(&sync, syncs++, sizeof(sync))) {
+-			kvfree(job->performance_query.queries);
+-			return -EFAULT;
++			err = -EFAULT;
++			goto error;
+ 		}
+ 
+-		job->performance_query.queries[i].syncobj = drm_syncobj_find(file_priv, sync);
+-
+ 		if (copy_from_user(&ids, kperfmon_ids++, sizeof(ids))) {
+-			kvfree(job->performance_query.queries);
+-			return -EFAULT;
++			err = -EFAULT;
++			goto error;
+ 		}
+ 
+ 		ids_pointer = u64_to_user_ptr(ids);
+ 
+-		for (int j = 0; j < copy.nperfmons; j++) {
++		for (j = 0; j < copy.nperfmons; j++) {
+ 			if (copy_from_user(&id, ids_pointer++, sizeof(id))) {
+-				kvfree(job->performance_query.queries);
+-				return -EFAULT;
++				err = -EFAULT;
++				goto error;
+ 			}
+ 
+ 			job->performance_query.queries[i].kperfmon_ids[j] = id;
+ 		}
++
++		job->performance_query.queries[i].syncobj = drm_syncobj_find(file_priv, sync);
++		if (!job->performance_query.queries[i].syncobj) {
++			err = -ENOENT;
++			goto error;
++		}
+ 	}
+ 	job->performance_query.count = copy.count;
+ 	job->performance_query.nperfmons = copy.nperfmons;
+@@ -759,6 +810,10 @@ v3d_get_cpu_copy_performance_query_params(struct drm_file *file_priv,
+ 	job->copy.stride = copy.stride;
+ 
+ 	return 0;
++
++error:
++	v3d_performance_query_info_free(&job->performance_query, i);
++	return err;
+ }
+ 
+ /* Whenever userspace sets ioctl extensions, v3d_get_extensions parses data
+diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virtio/virtgpu_submit.c
+index 1c7c7f61a2228..7d34cf83f5f2b 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_submit.c
++++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
+@@ -48,7 +48,7 @@ struct virtio_gpu_submit {
+ static int virtio_gpu_do_fence_wait(struct virtio_gpu_submit *submit,
+ 				    struct dma_fence *in_fence)
+ {
+-	u32 context = submit->fence_ctx + submit->ring_idx;
++	u64 context = submit->fence_ctx + submit->ring_idx;
+ 
+ 	if (dma_fence_match_context(in_fence, context))
+ 		return 0;
+diff --git a/drivers/gpu/drm/vmwgfx/vmw_surface_cache.h b/drivers/gpu/drm/vmwgfx/vmw_surface_cache.h
+index b0d87c5f58d8e..1ac3cb151b117 100644
+--- a/drivers/gpu/drm/vmwgfx/vmw_surface_cache.h
++++ b/drivers/gpu/drm/vmwgfx/vmw_surface_cache.h
+@@ -1,6 +1,8 @@
++/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+ /**********************************************************
+- * Copyright 2021 VMware, Inc.
+- * SPDX-License-Identifier: GPL-2.0 OR MIT
++ *
++ * Copyright (c) 2021-2024 Broadcom. All Rights Reserved. The term
++ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+  *
+  * Permission is hereby granted, free of charge, to any person
+  * obtaining a copy of this software and associated documentation
+@@ -31,6 +33,10 @@
+ 
+ #include <drm/vmwgfx_drm.h>
+ 
++#define SVGA3D_FLAGS_UPPER_32(svga3d_flags) ((svga3d_flags) >> 32)
++#define SVGA3D_FLAGS_LOWER_32(svga3d_flags) \
++	((svga3d_flags) & ((uint64_t)U32_MAX))
++
+ static inline u32 clamped_umul32(u32 a, u32 b)
+ {
+ 	uint64_t tmp = (uint64_t) a*b;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+index 00144632c600e..f42ebc4a7c225 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+@@ -1,8 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0 OR MIT
+ /**************************************************************************
+  *
+- * Copyright © 2011-2023 VMware, Inc., Palo Alto, CA., USA
+- * All Rights Reserved.
++ * Copyright (c) 2011-2024 Broadcom. All Rights Reserved. The term
++ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the
+@@ -28,15 +28,39 @@
+ 
+ #include "vmwgfx_bo.h"
+ #include "vmwgfx_drv.h"
+-
++#include "vmwgfx_resource_priv.h"
+ 
+ #include <drm/ttm/ttm_placement.h>
+ 
+ static void vmw_bo_release(struct vmw_bo *vbo)
+ {
++	struct vmw_resource *res;
++
+ 	WARN_ON(vbo->tbo.base.funcs &&
+ 		kref_read(&vbo->tbo.base.refcount) != 0);
+ 	vmw_bo_unmap(vbo);
++
++	xa_destroy(&vbo->detached_resources);
++	WARN_ON(vbo->is_dumb && !vbo->dumb_surface);
++	if (vbo->is_dumb && vbo->dumb_surface) {
++		res = &vbo->dumb_surface->res;
++		WARN_ON(vbo != res->guest_memory_bo);
++		WARN_ON(!res->guest_memory_bo);
++		if (res->guest_memory_bo) {
++			/* Reserve and switch the backing mob. */
++			mutex_lock(&res->dev_priv->cmdbuf_mutex);
++			(void)vmw_resource_reserve(res, false, true);
++			vmw_resource_mob_detach(res);
++			if (res->coherent)
++				vmw_bo_dirty_release(res->guest_memory_bo);
++			res->guest_memory_bo = NULL;
++			res->guest_memory_offset = 0;
++			vmw_resource_unreserve(res, false, false, false, NULL,
++					       0);
++			mutex_unlock(&res->dev_priv->cmdbuf_mutex);
++		}
++		vmw_surface_unreference(&vbo->dumb_surface);
++	}
+ 	drm_gem_object_release(&vbo->tbo.base);
+ }
+ 
+@@ -325,6 +349,11 @@ void vmw_bo_pin_reserved(struct vmw_bo *vbo, bool pin)
+  *
+  */
+ void *vmw_bo_map_and_cache(struct vmw_bo *vbo)
++{
++	return vmw_bo_map_and_cache_size(vbo, vbo->tbo.base.size);
++}
++
++void *vmw_bo_map_and_cache_size(struct vmw_bo *vbo, size_t size)
+ {
+ 	struct ttm_buffer_object *bo = &vbo->tbo;
+ 	bool not_used;
+@@ -335,9 +364,10 @@ void *vmw_bo_map_and_cache(struct vmw_bo *vbo)
+ 	if (virtual)
+ 		return virtual;
+ 
+-	ret = ttm_bo_kmap(bo, 0, PFN_UP(bo->base.size), &vbo->map);
++	ret = ttm_bo_kmap(bo, 0, PFN_UP(size), &vbo->map);
+ 	if (ret)
+-		DRM_ERROR("Buffer object map failed: %d.\n", ret);
++		DRM_ERROR("Buffer object map failed: %d (size: bo = %zu, map = %zu).\n",
++			  ret, bo->base.size, size);
+ 
+ 	return ttm_kmap_obj_virtual(&vbo->map, &not_used);
+ }
+@@ -390,6 +420,7 @@ static int vmw_bo_init(struct vmw_private *dev_priv,
+ 	BUILD_BUG_ON(TTM_MAX_BO_PRIORITY <= 3);
+ 	vmw_bo->tbo.priority = 3;
+ 	vmw_bo->res_tree = RB_ROOT;
++	xa_init(&vmw_bo->detached_resources);
+ 
+ 	params->size = ALIGN(params->size, PAGE_SIZE);
+ 	drm_gem_private_object_init(vdev, &vmw_bo->tbo.base, params->size);
+@@ -654,52 +685,6 @@ void vmw_bo_fence_single(struct ttm_buffer_object *bo,
+ 	dma_fence_put(&fence->base);
+ }
+ 
+-
+-/**
+- * vmw_dumb_create - Create a dumb kms buffer
+- *
+- * @file_priv: Pointer to a struct drm_file identifying the caller.
+- * @dev: Pointer to the drm device.
+- * @args: Pointer to a struct drm_mode_create_dumb structure
+- * Return: Zero on success, negative error code on failure.
+- *
+- * This is a driver callback for the core drm create_dumb functionality.
+- * Note that this is very similar to the vmw_bo_alloc ioctl, except
+- * that the arguments have a different format.
+- */
+-int vmw_dumb_create(struct drm_file *file_priv,
+-		    struct drm_device *dev,
+-		    struct drm_mode_create_dumb *args)
+-{
+-	struct vmw_private *dev_priv = vmw_priv(dev);
+-	struct vmw_bo *vbo;
+-	int cpp = DIV_ROUND_UP(args->bpp, 8);
+-	int ret;
+-
+-	switch (cpp) {
+-	case 1: /* DRM_FORMAT_C8 */
+-	case 2: /* DRM_FORMAT_RGB565 */
+-	case 4: /* DRM_FORMAT_XRGB8888 */
+-		break;
+-	default:
+-		/*
+-		 * Dumb buffers don't allow anything else.
+-		 * This is tested via IGT's dumb_buffers
+-		 */
+-		return -EINVAL;
+-	}
+-
+-	args->pitch = args->width * cpp;
+-	args->size = ALIGN(args->pitch * args->height, PAGE_SIZE);
+-
+-	ret = vmw_gem_object_create_with_handle(dev_priv, file_priv,
+-						args->size, &args->handle,
+-						&vbo);
+-	/* drop reference from allocate - handle holds it now */
+-	drm_gem_object_put(&vbo->tbo.base);
+-	return ret;
+-}
+-
+ /**
+  * vmw_bo_swap_notify - swapout notify callback.
+  *
+@@ -853,3 +838,43 @@ void vmw_bo_placement_set_default_accelerated(struct vmw_bo *bo)
+ 
+ 	vmw_bo_placement_set(bo, domain, domain);
+ }
++
++void vmw_bo_add_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res)
++{
++	xa_store(&vbo->detached_resources, (unsigned long)res, res, GFP_KERNEL);
++}
++
++void vmw_bo_del_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res)
++{
++	xa_erase(&vbo->detached_resources, (unsigned long)res);
++}
++
++struct vmw_surface *vmw_bo_surface(struct vmw_bo *vbo)
++{
++	unsigned long index;
++	struct vmw_resource *res = NULL;
++	struct vmw_surface *surf = NULL;
++	struct rb_node *rb_itr = vbo->res_tree.rb_node;
++
++	if (vbo->is_dumb && vbo->dumb_surface) {
++		res = &vbo->dumb_surface->res;
++		goto out;
++	}
++
++	xa_for_each(&vbo->detached_resources, index, res) {
++		if (res->func->res_type == vmw_res_surface)
++			goto out;
++	}
++
++	for (rb_itr = rb_first(&vbo->res_tree); rb_itr;
++	     rb_itr = rb_next(rb_itr)) {
++		res = rb_entry(rb_itr, struct vmw_resource, mob_node);
++		if (res->func->res_type == vmw_res_surface)
++			goto out;
++	}
++
++out:
++	if (res)
++		surf = vmw_res_to_srf(res);
++	return surf;
++}
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
+index f349642e6190d..62b4342d5f7c5 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
+@@ -1,7 +1,8 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR MIT */
+ /**************************************************************************
+  *
+- * Copyright 2023 VMware, Inc., Palo Alto, CA., USA
++ * Copyright (c) 2023-2024 Broadcom. All Rights Reserved. The term
++ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the
+@@ -35,11 +36,13 @@
+ 
+ #include <linux/rbtree_types.h>
+ #include <linux/types.h>
++#include <linux/xarray.h>
+ 
+ struct vmw_bo_dirty;
+ struct vmw_fence_obj;
+ struct vmw_private;
+ struct vmw_resource;
++struct vmw_surface;
+ 
+ enum vmw_bo_domain {
+ 	VMW_BO_DOMAIN_SYS           = BIT(0),
+@@ -85,11 +88,15 @@ struct vmw_bo {
+ 
+ 	struct rb_root res_tree;
+ 	u32 res_prios[TTM_MAX_BO_PRIORITY];
++	struct xarray detached_resources;
+ 
+ 	atomic_t cpu_writers;
+ 	/* Not ref-counted.  Protected by binding_mutex */
+ 	struct vmw_resource *dx_query_ctx;
+ 	struct vmw_bo_dirty *dirty;
++
++	bool is_dumb;
++	struct vmw_surface *dumb_surface;
+ };
+ 
+ void vmw_bo_placement_set(struct vmw_bo *bo, u32 domain, u32 busy_domain);
+@@ -124,15 +131,21 @@ void vmw_bo_fence_single(struct ttm_buffer_object *bo,
+ 			 struct vmw_fence_obj *fence);
+ 
+ void *vmw_bo_map_and_cache(struct vmw_bo *vbo);
++void *vmw_bo_map_and_cache_size(struct vmw_bo *vbo, size_t size);
+ void vmw_bo_unmap(struct vmw_bo *vbo);
+ 
+ void vmw_bo_move_notify(struct ttm_buffer_object *bo,
+ 			struct ttm_resource *mem);
+ void vmw_bo_swap_notify(struct ttm_buffer_object *bo);
+ 
++void vmw_bo_add_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res);
++void vmw_bo_del_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res);
++struct vmw_surface *vmw_bo_surface(struct vmw_bo *vbo);
++
+ int vmw_user_bo_lookup(struct drm_file *filp,
+ 		       u32 handle,
+ 		       struct vmw_bo **out);
++
+ /**
+  * vmw_bo_adjust_prio - Adjust the buffer object eviction priority
+  * according to attached resources
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+index a1ce41e1c4684..32f50e5958097 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+@@ -1,7 +1,8 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR MIT */
+ /**************************************************************************
+  *
+- * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA
++ * Copyright (c) 2009-2024 Broadcom. All Rights Reserved. The term
++ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the
+@@ -762,6 +763,26 @@ extern int vmw_gmr_bind(struct vmw_private *dev_priv,
+ 			int gmr_id);
+ extern void vmw_gmr_unbind(struct vmw_private *dev_priv, int gmr_id);
+ 
++/**
++ * User handles
++ */
++struct vmw_user_object {
++	struct vmw_surface *surface;
++	struct vmw_bo *buffer;
++};
++
++int vmw_user_object_lookup(struct vmw_private *dev_priv, struct drm_file *filp,
++			   u32 handle, struct vmw_user_object *uo);
++struct vmw_user_object *vmw_user_object_ref(struct vmw_user_object *uo);
++void vmw_user_object_unref(struct vmw_user_object *uo);
++bool vmw_user_object_is_null(struct vmw_user_object *uo);
++struct vmw_surface *vmw_user_object_surface(struct vmw_user_object *uo);
++struct vmw_bo *vmw_user_object_buffer(struct vmw_user_object *uo);
++void *vmw_user_object_map(struct vmw_user_object *uo);
++void *vmw_user_object_map_size(struct vmw_user_object *uo, size_t size);
++void vmw_user_object_unmap(struct vmw_user_object *uo);
++bool vmw_user_object_is_mapped(struct vmw_user_object *uo);
++
+ /**
+  * Resource utilities - vmwgfx_resource.c
+  */
+@@ -776,11 +797,6 @@ extern int vmw_resource_validate(struct vmw_resource *res, bool intr,
+ extern int vmw_resource_reserve(struct vmw_resource *res, bool interruptible,
+ 				bool no_backup);
+ extern bool vmw_resource_needs_backup(const struct vmw_resource *res);
+-extern int vmw_user_lookup_handle(struct vmw_private *dev_priv,
+-				  struct drm_file *filp,
+-				  uint32_t handle,
+-				  struct vmw_surface **out_surf,
+-				  struct vmw_bo **out_buf);
+ extern int vmw_user_resource_lookup_handle(
+ 	struct vmw_private *dev_priv,
+ 	struct ttm_object_file *tfile,
+@@ -1057,9 +1073,6 @@ int vmw_kms_suspend(struct drm_device *dev);
+ int vmw_kms_resume(struct drm_device *dev);
+ void vmw_kms_lost_device(struct drm_device *dev);
+ 
+-int vmw_dumb_create(struct drm_file *file_priv,
+-		    struct drm_device *dev,
+-		    struct drm_mode_create_dumb *args);
+ extern int vmw_resource_pin(struct vmw_resource *res, bool interruptible);
+ extern void vmw_resource_unpin(struct vmw_resource *res);
+ extern enum vmw_res_type vmw_res_type(const struct vmw_resource *res);
+@@ -1176,6 +1189,15 @@ extern int vmw_gb_surface_reference_ext_ioctl(struct drm_device *dev,
+ int vmw_gb_surface_define(struct vmw_private *dev_priv,
+ 			  const struct vmw_surface_metadata *req,
+ 			  struct vmw_surface **srf_out);
++struct vmw_surface *vmw_lookup_surface_for_buffer(struct vmw_private *vmw,
++						  struct vmw_bo *bo,
++						  u32 handle);
++u32 vmw_lookup_surface_handle_for_buffer(struct vmw_private *vmw,
++					 struct vmw_bo *bo,
++					 u32 handle);
++int vmw_dumb_create(struct drm_file *file_priv,
++		    struct drm_device *dev,
++		    struct drm_mode_create_dumb *args);
+ 
+ /*
+  * Shader management - vmwgfx_shader.c
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+index 5efc6a766f64e..588d50ababf60 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+@@ -32,7 +32,6 @@
+ #define VMW_FENCE_WRAP (1 << 31)
+ 
+ struct vmw_fence_manager {
+-	int num_fence_objects;
+ 	struct vmw_private *dev_priv;
+ 	spinlock_t lock;
+ 	struct list_head fence_list;
+@@ -124,13 +123,13 @@ static void vmw_fence_obj_destroy(struct dma_fence *f)
+ {
+ 	struct vmw_fence_obj *fence =
+ 		container_of(f, struct vmw_fence_obj, base);
+-
+ 	struct vmw_fence_manager *fman = fman_from_fence(fence);
+ 
+-	spin_lock(&fman->lock);
+-	list_del_init(&fence->head);
+-	--fman->num_fence_objects;
+-	spin_unlock(&fman->lock);
++	if (!list_empty(&fence->head)) {
++		spin_lock(&fman->lock);
++		list_del_init(&fence->head);
++		spin_unlock(&fman->lock);
++	}
+ 	fence->destroy(fence);
+ }
+ 
+@@ -257,7 +256,6 @@ static const struct dma_fence_ops vmw_fence_ops = {
+ 	.release = vmw_fence_obj_destroy,
+ };
+ 
+-
+ /*
+  * Execute signal actions on fences recently signaled.
+  * This is done from a workqueue so we don't have to execute
+@@ -355,7 +353,6 @@ static int vmw_fence_obj_init(struct vmw_fence_manager *fman,
+ 		goto out_unlock;
+ 	}
+ 	list_add_tail(&fence->head, &fman->fence_list);
+-	++fman->num_fence_objects;
+ 
+ out_unlock:
+ 	spin_unlock(&fman->lock);
+@@ -403,7 +400,7 @@ static bool vmw_fence_goal_new_locked(struct vmw_fence_manager *fman,
+ 				      u32 passed_seqno)
+ {
+ 	u32 goal_seqno;
+-	struct vmw_fence_obj *fence;
++	struct vmw_fence_obj *fence, *next_fence;
+ 
+ 	if (likely(!fman->seqno_valid))
+ 		return false;
+@@ -413,7 +410,7 @@ static bool vmw_fence_goal_new_locked(struct vmw_fence_manager *fman,
+ 		return false;
+ 
+ 	fman->seqno_valid = false;
+-	list_for_each_entry(fence, &fman->fence_list, head) {
++	list_for_each_entry_safe(fence, next_fence, &fman->fence_list, head) {
+ 		if (!list_empty(&fence->seq_passed_actions)) {
+ 			fman->seqno_valid = true;
+ 			vmw_fence_goal_write(fman->dev_priv,
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 00c4ff6841301..288ed0bb75cb9 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -1,7 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0 OR MIT
+ /**************************************************************************
+  *
+- * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA
++ * Copyright (c) 2009-2024 Broadcom. All Rights Reserved. The term
++ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the
+@@ -193,13 +194,16 @@ static u32 vmw_du_cursor_mob_size(u32 w, u32 h)
+  */
+ static u32 *vmw_du_cursor_plane_acquire_image(struct vmw_plane_state *vps)
+ {
+-	if (vps->surf) {
+-		if (vps->surf_mapped)
+-			return vmw_bo_map_and_cache(vps->surf->res.guest_memory_bo);
+-		return vps->surf->snooper.image;
+-	} else if (vps->bo)
+-		return vmw_bo_map_and_cache(vps->bo);
+-	return NULL;
++	struct vmw_surface *surf;
++
++	if (vmw_user_object_is_null(&vps->uo))
++		return NULL;
++
++	surf = vmw_user_object_surface(&vps->uo);
++	if (surf && !vmw_user_object_is_mapped(&vps->uo))
++		return surf->snooper.image;
++
++	return vmw_user_object_map(&vps->uo);
+ }
+ 
+ static bool vmw_du_cursor_plane_has_changed(struct vmw_plane_state *old_vps,
+@@ -536,22 +540,16 @@ void vmw_du_primary_plane_destroy(struct drm_plane *plane)
+  * vmw_du_plane_unpin_surf - unpins resource associated with a framebuffer surface
+  *
+  * @vps: plane state associated with the display surface
+- * @unreference: true if we also want to unreference the display.
+  */
+-void vmw_du_plane_unpin_surf(struct vmw_plane_state *vps,
+-			     bool unreference)
++void vmw_du_plane_unpin_surf(struct vmw_plane_state *vps)
+ {
+-	if (vps->surf) {
++	struct vmw_surface *surf = vmw_user_object_surface(&vps->uo);
++
++	if (surf) {
+ 		if (vps->pinned) {
+-			vmw_resource_unpin(&vps->surf->res);
++			vmw_resource_unpin(&surf->res);
+ 			vps->pinned--;
+ 		}
+-
+-		if (unreference) {
+-			if (vps->pinned)
+-				DRM_ERROR("Surface still pinned\n");
+-			vmw_surface_unreference(&vps->surf);
+-		}
+ 	}
+ }
+ 
+@@ -572,7 +570,7 @@ vmw_du_plane_cleanup_fb(struct drm_plane *plane,
+ {
+ 	struct vmw_plane_state *vps = vmw_plane_state_to_vps(old_state);
+ 
+-	vmw_du_plane_unpin_surf(vps, false);
++	vmw_du_plane_unpin_surf(vps);
+ }
+ 
+ 
+@@ -661,25 +659,14 @@ vmw_du_cursor_plane_cleanup_fb(struct drm_plane *plane,
+ 	struct vmw_cursor_plane *vcp = vmw_plane_to_vcp(plane);
+ 	struct vmw_plane_state *vps = vmw_plane_state_to_vps(old_state);
+ 
+-	if (vps->surf_mapped) {
+-		vmw_bo_unmap(vps->surf->res.guest_memory_bo);
+-		vps->surf_mapped = false;
+-	}
++	if (!vmw_user_object_is_null(&vps->uo))
++		vmw_user_object_unmap(&vps->uo);
+ 
+ 	vmw_du_cursor_plane_unmap_cm(vps);
+ 	vmw_du_put_cursor_mob(vcp, vps);
+ 
+-	vmw_du_plane_unpin_surf(vps, false);
+-
+-	if (vps->surf) {
+-		vmw_surface_unreference(&vps->surf);
+-		vps->surf = NULL;
+-	}
+-
+-	if (vps->bo) {
+-		vmw_bo_unreference(&vps->bo);
+-		vps->bo = NULL;
+-	}
++	vmw_du_plane_unpin_surf(vps);
++	vmw_user_object_unref(&vps->uo);
+ }
+ 
+ 
+@@ -698,64 +685,48 @@ vmw_du_cursor_plane_prepare_fb(struct drm_plane *plane,
+ 	struct drm_framebuffer *fb = new_state->fb;
+ 	struct vmw_cursor_plane *vcp = vmw_plane_to_vcp(plane);
+ 	struct vmw_plane_state *vps = vmw_plane_state_to_vps(new_state);
++	struct vmw_bo *bo = NULL;
+ 	int ret = 0;
+ 
+-	if (vps->surf) {
+-		if (vps->surf_mapped) {
+-			vmw_bo_unmap(vps->surf->res.guest_memory_bo);
+-			vps->surf_mapped = false;
+-		}
+-		vmw_surface_unreference(&vps->surf);
+-		vps->surf = NULL;
+-	}
+-
+-	if (vps->bo) {
+-		vmw_bo_unreference(&vps->bo);
+-		vps->bo = NULL;
++	if (!vmw_user_object_is_null(&vps->uo)) {
++		vmw_user_object_unmap(&vps->uo);
++		vmw_user_object_unref(&vps->uo);
+ 	}
+ 
+ 	if (fb) {
+ 		if (vmw_framebuffer_to_vfb(fb)->bo) {
+-			vps->bo = vmw_framebuffer_to_vfbd(fb)->buffer;
+-			vmw_bo_reference(vps->bo);
++			vps->uo.buffer = vmw_framebuffer_to_vfbd(fb)->buffer;
++			vps->uo.surface = NULL;
+ 		} else {
+-			vps->surf = vmw_framebuffer_to_vfbs(fb)->surface;
+-			vmw_surface_reference(vps->surf);
++			memcpy(&vps->uo, &vmw_framebuffer_to_vfbs(fb)->uo, sizeof(vps->uo));
+ 		}
++		vmw_user_object_ref(&vps->uo);
+ 	}
+ 
+-	if (!vps->surf && vps->bo) {
+-		const u32 size = new_state->crtc_w * new_state->crtc_h * sizeof(u32);
++	bo = vmw_user_object_buffer(&vps->uo);
++	if (bo) {
++		struct ttm_operation_ctx ctx = {false, false};
+ 
+-		/*
+-		 * Not using vmw_bo_map_and_cache() helper here as we need to
+-		 * reserve the ttm_buffer_object first which
+-		 * vmw_bo_map_and_cache() omits.
+-		 */
+-		ret = ttm_bo_reserve(&vps->bo->tbo, true, false, NULL);
+-
+-		if (unlikely(ret != 0))
++		ret = ttm_bo_reserve(&bo->tbo, true, false, NULL);
++		if (ret != 0)
+ 			return -ENOMEM;
+ 
+-		ret = ttm_bo_kmap(&vps->bo->tbo, 0, PFN_UP(size), &vps->bo->map);
+-
+-		ttm_bo_unreserve(&vps->bo->tbo);
+-
+-		if (unlikely(ret != 0))
++		ret = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
++		if (ret != 0)
+ 			return -ENOMEM;
+-	} else if (vps->surf && !vps->bo && vps->surf->res.guest_memory_bo) {
+ 
+-		WARN_ON(vps->surf->snooper.image);
+-		ret = ttm_bo_reserve(&vps->surf->res.guest_memory_bo->tbo, true, false,
+-				     NULL);
+-		if (unlikely(ret != 0))
+-			return -ENOMEM;
+-		vmw_bo_map_and_cache(vps->surf->res.guest_memory_bo);
+-		ttm_bo_unreserve(&vps->surf->res.guest_memory_bo->tbo);
+-		vps->surf_mapped = true;
++		vmw_bo_pin_reserved(bo, true);
++		if (vmw_framebuffer_to_vfb(fb)->bo) {
++			const u32 size = new_state->crtc_w * new_state->crtc_h * sizeof(u32);
++
++			(void)vmw_bo_map_and_cache_size(bo, size);
++		} else {
++			vmw_bo_map_and_cache(bo);
++		}
++		ttm_bo_unreserve(&bo->tbo);
+ 	}
+ 
+-	if (vps->surf || vps->bo) {
++	if (!vmw_user_object_is_null(&vps->uo)) {
+ 		vmw_du_get_cursor_mob(vcp, vps);
+ 		vmw_du_cursor_plane_map_cm(vps);
+ 	}
+@@ -777,14 +748,17 @@ vmw_du_cursor_plane_atomic_update(struct drm_plane *plane,
+ 	struct vmw_display_unit *du = vmw_crtc_to_du(crtc);
+ 	struct vmw_plane_state *vps = vmw_plane_state_to_vps(new_state);
+ 	struct vmw_plane_state *old_vps = vmw_plane_state_to_vps(old_state);
++	struct vmw_bo *old_bo = NULL;
++	struct vmw_bo *new_bo = NULL;
+ 	s32 hotspot_x, hotspot_y;
++	int ret;
+ 
+ 	hotspot_x = du->hotspot_x + new_state->hotspot_x;
+ 	hotspot_y = du->hotspot_y + new_state->hotspot_y;
+ 
+-	du->cursor_surface = vps->surf;
++	du->cursor_surface = vmw_user_object_surface(&vps->uo);
+ 
+-	if (!vps->surf && !vps->bo) {
++	if (vmw_user_object_is_null(&vps->uo)) {
+ 		vmw_cursor_update_position(dev_priv, false, 0, 0);
+ 		return;
+ 	}
+@@ -792,10 +766,26 @@ vmw_du_cursor_plane_atomic_update(struct drm_plane *plane,
+ 	vps->cursor.hotspot_x = hotspot_x;
+ 	vps->cursor.hotspot_y = hotspot_y;
+ 
+-	if (vps->surf) {
++	if (du->cursor_surface)
+ 		du->cursor_age = du->cursor_surface->snooper.age;
++
++	if (!vmw_user_object_is_null(&old_vps->uo)) {
++		old_bo = vmw_user_object_buffer(&old_vps->uo);
++		ret = ttm_bo_reserve(&old_bo->tbo, false, false, NULL);
++		if (ret != 0)
++			return;
+ 	}
+ 
++	if (!vmw_user_object_is_null(&vps->uo)) {
++		new_bo = vmw_user_object_buffer(&vps->uo);
++		if (old_bo != new_bo) {
++			ret = ttm_bo_reserve(&new_bo->tbo, false, false, NULL);
++			if (ret != 0)
++				return;
++		} else {
++			new_bo = NULL;
++		}
++	}
+ 	if (!vmw_du_cursor_plane_has_changed(old_vps, vps)) {
+ 		/*
+ 		 * If it hasn't changed, avoid making the device do extra
+@@ -813,6 +803,11 @@ vmw_du_cursor_plane_atomic_update(struct drm_plane *plane,
+ 						hotspot_x, hotspot_y);
+ 	}
+ 
++	if (old_bo)
++		ttm_bo_unreserve(&old_bo->tbo);
++	if (new_bo)
++		ttm_bo_unreserve(&new_bo->tbo);
++
+ 	du->cursor_x = new_state->crtc_x + du->set_gui_x;
+ 	du->cursor_y = new_state->crtc_y + du->set_gui_y;
+ 
+@@ -913,7 +908,7 @@ int vmw_du_cursor_plane_atomic_check(struct drm_plane *plane,
+ 	}
+ 
+ 	if (!vmw_framebuffer_to_vfb(fb)->bo) {
+-		surface = vmw_framebuffer_to_vfbs(fb)->surface;
++		surface = vmw_user_object_surface(&vmw_framebuffer_to_vfbs(fb)->uo);
+ 
+ 		WARN_ON(!surface);
+ 
+@@ -1074,12 +1069,7 @@ vmw_du_plane_duplicate_state(struct drm_plane *plane)
+ 	memset(&vps->cursor, 0, sizeof(vps->cursor));
+ 
+ 	/* Each ref counted resource needs to be acquired again */
+-	if (vps->surf)
+-		(void) vmw_surface_reference(vps->surf);
+-
+-	if (vps->bo)
+-		(void) vmw_bo_reference(vps->bo);
+-
++	vmw_user_object_ref(&vps->uo);
+ 	state = &vps->base;
+ 
+ 	__drm_atomic_helper_plane_duplicate_state(plane, state);
+@@ -1128,11 +1118,7 @@ vmw_du_plane_destroy_state(struct drm_plane *plane,
+ 	struct vmw_plane_state *vps = vmw_plane_state_to_vps(state);
+ 
+ 	/* Should have been freed by cleanup_fb */
+-	if (vps->surf)
+-		vmw_surface_unreference(&vps->surf);
+-
+-	if (vps->bo)
+-		vmw_bo_unreference(&vps->bo);
++	vmw_user_object_unref(&vps->uo);
+ 
+ 	drm_atomic_helper_plane_destroy_state(plane, state);
+ }
+@@ -1227,7 +1213,7 @@ static void vmw_framebuffer_surface_destroy(struct drm_framebuffer *framebuffer)
+ 		vmw_framebuffer_to_vfbs(framebuffer);
+ 
+ 	drm_framebuffer_cleanup(framebuffer);
+-	vmw_surface_unreference(&vfbs->surface);
++	vmw_user_object_unref(&vfbs->uo);
+ 
+ 	kfree(vfbs);
+ }
+@@ -1272,29 +1258,41 @@ int vmw_kms_readback(struct vmw_private *dev_priv,
+ 	return -ENOSYS;
+ }
+ 
++static int vmw_framebuffer_surface_create_handle(struct drm_framebuffer *fb,
++						 struct drm_file *file_priv,
++						 unsigned int *handle)
++{
++	struct vmw_framebuffer_surface *vfbs = vmw_framebuffer_to_vfbs(fb);
++	struct vmw_bo *bo = vmw_user_object_buffer(&vfbs->uo);
++
++	return drm_gem_handle_create(file_priv, &bo->tbo.base, handle);
++}
+ 
+ static const struct drm_framebuffer_funcs vmw_framebuffer_surface_funcs = {
++	.create_handle = vmw_framebuffer_surface_create_handle,
+ 	.destroy = vmw_framebuffer_surface_destroy,
+ 	.dirty = drm_atomic_helper_dirtyfb,
+ };
+ 
+ static int vmw_kms_new_framebuffer_surface(struct vmw_private *dev_priv,
+-					   struct vmw_surface *surface,
++					   struct vmw_user_object *uo,
+ 					   struct vmw_framebuffer **out,
+ 					   const struct drm_mode_fb_cmd2
+-					   *mode_cmd,
+-					   bool is_bo_proxy)
++					   *mode_cmd)
+ 
+ {
+ 	struct drm_device *dev = &dev_priv->drm;
+ 	struct vmw_framebuffer_surface *vfbs;
+ 	enum SVGA3dSurfaceFormat format;
++	struct vmw_surface *surface;
+ 	int ret;
+ 
+ 	/* 3D is only supported on HWv8 and newer hosts */
+ 	if (dev_priv->active_display_unit == vmw_du_legacy)
+ 		return -ENOSYS;
+ 
++	surface = vmw_user_object_surface(uo);
++
+ 	/*
+ 	 * Sanity checks.
+ 	 */
+@@ -1357,8 +1355,8 @@ static int vmw_kms_new_framebuffer_surface(struct vmw_private *dev_priv,
+ 	}
+ 
+ 	drm_helper_mode_fill_fb_struct(dev, &vfbs->base.base, mode_cmd);
+-	vfbs->surface = vmw_surface_reference(surface);
+-	vfbs->is_bo_proxy = is_bo_proxy;
++	memcpy(&vfbs->uo, uo, sizeof(vfbs->uo));
++	vmw_user_object_ref(&vfbs->uo);
+ 
+ 	*out = &vfbs->base;
+ 
+@@ -1370,7 +1368,7 @@ static int vmw_kms_new_framebuffer_surface(struct vmw_private *dev_priv,
+ 	return 0;
+ 
+ out_err2:
+-	vmw_surface_unreference(&surface);
++	vmw_user_object_unref(&vfbs->uo);
+ 	kfree(vfbs);
+ out_err1:
+ 	return ret;
+@@ -1386,7 +1384,6 @@ static int vmw_framebuffer_bo_create_handle(struct drm_framebuffer *fb,
+ {
+ 	struct vmw_framebuffer_bo *vfbd =
+ 			vmw_framebuffer_to_vfbd(fb);
+-
+ 	return drm_gem_handle_create(file_priv, &vfbd->buffer->tbo.base, handle);
+ }
+ 
+@@ -1407,86 +1404,6 @@ static const struct drm_framebuffer_funcs vmw_framebuffer_bo_funcs = {
+ 	.dirty = drm_atomic_helper_dirtyfb,
+ };
+ 
+-/**
+- * vmw_create_bo_proxy - create a proxy surface for the buffer object
+- *
+- * @dev: DRM device
+- * @mode_cmd: parameters for the new surface
+- * @bo_mob: MOB backing the buffer object
+- * @srf_out: newly created surface
+- *
+- * When the content FB is a buffer object, we create a surface as a proxy to the
+- * same buffer.  This way we can do a surface copy rather than a surface DMA.
+- * This is a more efficient approach
+- *
+- * RETURNS:
+- * 0 on success, error code otherwise
+- */
+-static int vmw_create_bo_proxy(struct drm_device *dev,
+-			       const struct drm_mode_fb_cmd2 *mode_cmd,
+-			       struct vmw_bo *bo_mob,
+-			       struct vmw_surface **srf_out)
+-{
+-	struct vmw_surface_metadata metadata = {0};
+-	uint32_t format;
+-	struct vmw_resource *res;
+-	unsigned int bytes_pp;
+-	int ret;
+-
+-	switch (mode_cmd->pixel_format) {
+-	case DRM_FORMAT_ARGB8888:
+-	case DRM_FORMAT_XRGB8888:
+-		format = SVGA3D_X8R8G8B8;
+-		bytes_pp = 4;
+-		break;
+-
+-	case DRM_FORMAT_RGB565:
+-	case DRM_FORMAT_XRGB1555:
+-		format = SVGA3D_R5G6B5;
+-		bytes_pp = 2;
+-		break;
+-
+-	case 8:
+-		format = SVGA3D_P8;
+-		bytes_pp = 1;
+-		break;
+-
+-	default:
+-		DRM_ERROR("Invalid framebuffer format %p4cc\n",
+-			  &mode_cmd->pixel_format);
+-		return -EINVAL;
+-	}
+-
+-	metadata.format = format;
+-	metadata.mip_levels[0] = 1;
+-	metadata.num_sizes = 1;
+-	metadata.base_size.width = mode_cmd->pitches[0] / bytes_pp;
+-	metadata.base_size.height =  mode_cmd->height;
+-	metadata.base_size.depth = 1;
+-	metadata.scanout = true;
+-
+-	ret = vmw_gb_surface_define(vmw_priv(dev), &metadata, srf_out);
+-	if (ret) {
+-		DRM_ERROR("Failed to allocate proxy content buffer\n");
+-		return ret;
+-	}
+-
+-	res = &(*srf_out)->res;
+-
+-	/* Reserve and switch the backing mob. */
+-	mutex_lock(&res->dev_priv->cmdbuf_mutex);
+-	(void) vmw_resource_reserve(res, false, true);
+-	vmw_user_bo_unref(&res->guest_memory_bo);
+-	res->guest_memory_bo = vmw_user_bo_ref(bo_mob);
+-	res->guest_memory_offset = 0;
+-	vmw_resource_unreserve(res, false, false, false, NULL, 0);
+-	mutex_unlock(&res->dev_priv->cmdbuf_mutex);
+-
+-	return 0;
+-}
+-
+-
+-
+ static int vmw_kms_new_framebuffer_bo(struct vmw_private *dev_priv,
+ 				      struct vmw_bo *bo,
+ 				      struct vmw_framebuffer **out,
+@@ -1565,55 +1482,24 @@ vmw_kms_srf_ok(struct vmw_private *dev_priv, uint32_t width, uint32_t height)
+  * vmw_kms_new_framebuffer - Create a new framebuffer.
+  *
+  * @dev_priv: Pointer to device private struct.
+- * @bo: Pointer to buffer object to wrap the kms framebuffer around.
+- * Either @bo or @surface must be NULL.
+- * @surface: Pointer to a surface to wrap the kms framebuffer around.
+- * Either @bo or @surface must be NULL.
+- * @only_2d: No presents will occur to this buffer object based framebuffer.
+- * This helps the code to do some important optimizations.
++ * @uo: Pointer to user object to wrap the kms framebuffer around.
++ * Either the buffer or surface inside the user object must be NULL.
+  * @mode_cmd: Frame-buffer metadata.
+  */
+ struct vmw_framebuffer *
+ vmw_kms_new_framebuffer(struct vmw_private *dev_priv,
+-			struct vmw_bo *bo,
+-			struct vmw_surface *surface,
+-			bool only_2d,
++			struct vmw_user_object *uo,
+ 			const struct drm_mode_fb_cmd2 *mode_cmd)
+ {
+ 	struct vmw_framebuffer *vfb = NULL;
+-	bool is_bo_proxy = false;
+ 	int ret;
+ 
+-	/*
+-	 * We cannot use the SurfaceDMA command in an non-accelerated VM,
+-	 * therefore, wrap the buffer object in a surface so we can use the
+-	 * SurfaceCopy command.
+-	 */
+-	if (vmw_kms_srf_ok(dev_priv, mode_cmd->width, mode_cmd->height)  &&
+-	    bo && only_2d &&
+-	    mode_cmd->width > 64 &&  /* Don't create a proxy for cursor */
+-	    dev_priv->active_display_unit == vmw_du_screen_target) {
+-		ret = vmw_create_bo_proxy(&dev_priv->drm, mode_cmd,
+-					  bo, &surface);
+-		if (ret)
+-			return ERR_PTR(ret);
+-
+-		is_bo_proxy = true;
+-	}
+-
+ 	/* Create the new framebuffer depending one what we have */
+-	if (surface) {
+-		ret = vmw_kms_new_framebuffer_surface(dev_priv, surface, &vfb,
+-						      mode_cmd,
+-						      is_bo_proxy);
+-		/*
+-		 * vmw_create_bo_proxy() adds a reference that is no longer
+-		 * needed
+-		 */
+-		if (is_bo_proxy)
+-			vmw_surface_unreference(&surface);
+-	} else if (bo) {
+-		ret = vmw_kms_new_framebuffer_bo(dev_priv, bo, &vfb,
++	if (vmw_user_object_surface(uo)) {
++		ret = vmw_kms_new_framebuffer_surface(dev_priv, uo, &vfb,
++						      mode_cmd);
++	} else if (uo->buffer) {
++		ret = vmw_kms_new_framebuffer_bo(dev_priv, uo->buffer, &vfb,
+ 						 mode_cmd);
+ 	} else {
+ 		BUG();
+@@ -1635,14 +1521,12 @@ static struct drm_framebuffer *vmw_kms_fb_create(struct drm_device *dev,
+ {
+ 	struct vmw_private *dev_priv = vmw_priv(dev);
+ 	struct vmw_framebuffer *vfb = NULL;
+-	struct vmw_surface *surface = NULL;
+-	struct vmw_bo *bo = NULL;
++	struct vmw_user_object uo = {0};
+ 	int ret;
+ 
+ 	/* returns either a bo or surface */
+-	ret = vmw_user_lookup_handle(dev_priv, file_priv,
+-				     mode_cmd->handles[0],
+-				     &surface, &bo);
++	ret = vmw_user_object_lookup(dev_priv, file_priv, mode_cmd->handles[0],
++				     &uo);
+ 	if (ret) {
+ 		DRM_ERROR("Invalid buffer object handle %u (0x%x).\n",
+ 			  mode_cmd->handles[0], mode_cmd->handles[0]);
+@@ -1650,7 +1534,7 @@ static struct drm_framebuffer *vmw_kms_fb_create(struct drm_device *dev,
+ 	}
+ 
+ 
+-	if (!bo &&
++	if (vmw_user_object_surface(&uo) &&
+ 	    !vmw_kms_srf_ok(dev_priv, mode_cmd->width, mode_cmd->height)) {
+ 		DRM_ERROR("Surface size cannot exceed %dx%d\n",
+ 			dev_priv->texture_max_width,
+@@ -1659,20 +1543,15 @@ static struct drm_framebuffer *vmw_kms_fb_create(struct drm_device *dev,
+ 	}
+ 
+ 
+-	vfb = vmw_kms_new_framebuffer(dev_priv, bo, surface,
+-				      !(dev_priv->capabilities & SVGA_CAP_3D),
+-				      mode_cmd);
++	vfb = vmw_kms_new_framebuffer(dev_priv, &uo, mode_cmd);
+ 	if (IS_ERR(vfb)) {
+ 		ret = PTR_ERR(vfb);
+ 		goto err_out;
+ 	}
+ 
+ err_out:
+-	/* vmw_user_lookup_handle takes one ref so does new_fb */
+-	if (bo)
+-		vmw_user_bo_unref(&bo);
+-	if (surface)
+-		vmw_surface_unreference(&surface);
++	/* vmw_user_object_lookup takes one ref so does new_fb */
++	vmw_user_object_unref(&uo);
+ 
+ 	if (ret) {
+ 		DRM_ERROR("failed to create vmw_framebuffer: %i\n", ret);
+@@ -2584,72 +2463,6 @@ void vmw_kms_helper_validation_finish(struct vmw_private *dev_priv,
+ 		vmw_fence_obj_unreference(&fence);
+ }
+ 
+-/**
+- * vmw_kms_update_proxy - Helper function to update a proxy surface from
+- * its backing MOB.
+- *
+- * @res: Pointer to the surface resource
+- * @clips: Clip rects in framebuffer (surface) space.
+- * @num_clips: Number of clips in @clips.
+- * @increment: Integer with which to increment the clip counter when looping.
+- * Used to skip a predetermined number of clip rects.
+- *
+- * This function makes sure the proxy surface is updated from its backing MOB
+- * using the region given by @clips. The surface resource @res and its backing
+- * MOB needs to be reserved and validated on call.
+- */
+-int vmw_kms_update_proxy(struct vmw_resource *res,
+-			 const struct drm_clip_rect *clips,
+-			 unsigned num_clips,
+-			 int increment)
+-{
+-	struct vmw_private *dev_priv = res->dev_priv;
+-	struct drm_vmw_size *size = &vmw_res_to_srf(res)->metadata.base_size;
+-	struct {
+-		SVGA3dCmdHeader header;
+-		SVGA3dCmdUpdateGBImage body;
+-	} *cmd;
+-	SVGA3dBox *box;
+-	size_t copy_size = 0;
+-	int i;
+-
+-	if (!clips)
+-		return 0;
+-
+-	cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd) * num_clips);
+-	if (!cmd)
+-		return -ENOMEM;
+-
+-	for (i = 0; i < num_clips; ++i, clips += increment, ++cmd) {
+-		box = &cmd->body.box;
+-
+-		cmd->header.id = SVGA_3D_CMD_UPDATE_GB_IMAGE;
+-		cmd->header.size = sizeof(cmd->body);
+-		cmd->body.image.sid = res->id;
+-		cmd->body.image.face = 0;
+-		cmd->body.image.mipmap = 0;
+-
+-		if (clips->x1 > size->width || clips->x2 > size->width ||
+-		    clips->y1 > size->height || clips->y2 > size->height) {
+-			DRM_ERROR("Invalid clips outsize of framebuffer.\n");
+-			return -EINVAL;
+-		}
+-
+-		box->x = clips->x1;
+-		box->y = clips->y1;
+-		box->z = 0;
+-		box->w = clips->x2 - clips->x1;
+-		box->h = clips->y2 - clips->y1;
+-		box->d = 1;
+-
+-		copy_size += sizeof(*cmd);
+-	}
+-
+-	vmw_cmd_commit(dev_priv, copy_size);
+-
+-	return 0;
+-}
+-
+ /**
+  * vmw_kms_create_implicit_placement_property - Set up the implicit placement
+  * property.
+@@ -2784,8 +2597,9 @@ int vmw_du_helper_plane_update(struct vmw_du_update_plane *update)
+ 	} else {
+ 		struct vmw_framebuffer_surface *vfbs =
+ 			container_of(update->vfb, typeof(*vfbs), base);
++		struct vmw_surface *surf = vmw_user_object_surface(&vfbs->uo);
+ 
+-		ret = vmw_validation_add_resource(&val_ctx, &vfbs->surface->res,
++		ret = vmw_validation_add_resource(&val_ctx, &surf->res,
+ 						  0, VMW_RES_DIRTY_NONE, NULL,
+ 						  NULL);
+ 	}
+@@ -2941,3 +2755,93 @@ int vmw_connector_get_modes(struct drm_connector *connector)
+ 
+ 	return num_modes;
+ }
++
++struct vmw_user_object *vmw_user_object_ref(struct vmw_user_object *uo)
++{
++	if (uo->buffer)
++		vmw_user_bo_ref(uo->buffer);
++	else if (uo->surface)
++		vmw_surface_reference(uo->surface);
++	return uo;
++}
++
++void vmw_user_object_unref(struct vmw_user_object *uo)
++{
++	if (uo->buffer)
++		vmw_user_bo_unref(&uo->buffer);
++	else if (uo->surface)
++		vmw_surface_unreference(&uo->surface);
++}
++
++struct vmw_bo *
++vmw_user_object_buffer(struct vmw_user_object *uo)
++{
++	if (uo->buffer)
++		return uo->buffer;
++	else if (uo->surface)
++		return uo->surface->res.guest_memory_bo;
++	return NULL;
++}
++
++struct vmw_surface *
++vmw_user_object_surface(struct vmw_user_object *uo)
++{
++	if (uo->buffer)
++		return uo->buffer->dumb_surface;
++	return uo->surface;
++}
++
++void *vmw_user_object_map(struct vmw_user_object *uo)
++{
++	struct vmw_bo *bo = vmw_user_object_buffer(uo);
++
++	WARN_ON(!bo);
++	return vmw_bo_map_and_cache(bo);
++}
++
++void *vmw_user_object_map_size(struct vmw_user_object *uo, size_t size)
++{
++	struct vmw_bo *bo = vmw_user_object_buffer(uo);
++
++	WARN_ON(!bo);
++	return vmw_bo_map_and_cache_size(bo, size);
++}
++
++void vmw_user_object_unmap(struct vmw_user_object *uo)
++{
++	struct vmw_bo *bo = vmw_user_object_buffer(uo);
++	int ret;
++
++	WARN_ON(!bo);
++
++	/* Fence the mob creation so we are guaranteed to have the mob */
++	ret = ttm_bo_reserve(&bo->tbo, false, false, NULL);
++	if (ret != 0)
++		return;
++
++	vmw_bo_unmap(bo);
++	vmw_bo_pin_reserved(bo, false);
++
++	ttm_bo_unreserve(&bo->tbo);
++}
++
++bool vmw_user_object_is_mapped(struct vmw_user_object *uo)
++{
++	struct vmw_bo *bo;
++
++	if (!uo || vmw_user_object_is_null(uo))
++		return false;
++
++	bo = vmw_user_object_buffer(uo);
++
++	if (WARN_ON(!bo))
++		return false;
++
++	WARN_ON(bo->map.bo && !bo->map.virtual);
++	return bo->map.virtual;
++}
++
++bool vmw_user_object_is_null(struct vmw_user_object *uo)
++{
++	return !uo->buffer && !uo->surface;
++}
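
The helpers added above consolidate the old surface/bo pointer pair into a single vmw_user_object that may be backed either by a GEM buffer (possibly a dumb buffer carrying its own surface) or by a surface with a guest-memory backing buffer. As a rough, self-contained sketch of that accessor pattern — the struct and field names below are simplified stand-ins, not the driver's real types:

#include <stdio.h>

/* Hypothetical stand-ins for the driver's types (not the real structs). */
struct surface;
struct buffer  { int id; struct surface *dumb_surface; }; /* set for dumb buffers */
struct surface { int id; struct buffer  *backing_bo;    };

/* One handle, backed by exactly one of the two. */
struct user_object {
	struct buffer  *buffer;
	struct surface *surface;
};

static struct surface *uo_surface(struct user_object *uo)
{
	if (uo->buffer)
		return uo->buffer->dumb_surface;
	return uo->surface;
}

static struct buffer *uo_buffer(struct user_object *uo)
{
	if (uo->buffer)
		return uo->buffer;
	return uo->surface ? uo->surface->backing_bo : NULL;
}

int main(void)
{
	struct buffer  bo  = { .id = 1 };
	struct surface srf = { .id = 2, .backing_bo = &bo };
	struct user_object a = { .buffer = &bo };   /* plain buffer object */
	struct user_object b = { .surface = &srf }; /* surface-backed object */

	printf("a: buffer %d, has surface: %s\n",
	       uo_buffer(&a)->id, uo_surface(&a) ? "yes" : "no");
	printf("b: buffer %d, surface %d\n",
	       uo_buffer(&b)->id, uo_surface(&b)->id);
	return 0;
}

Because callers only go through the accessors, the framebuffer and plane code can treat surface-backed and dumb-buffer-backed objects uniformly, which is what the remaining hunks of this patch rely on.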
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+index bf24f2f0dcfc9..6141fadf81efe 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+@@ -1,7 +1,8 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR MIT */
+ /**************************************************************************
+  *
+- * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA
++ * Copyright (c) 2009-2024 Broadcom. All Rights Reserved. The term
++ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the
+@@ -221,11 +222,9 @@ struct vmw_framebuffer {
+ 
+ struct vmw_framebuffer_surface {
+ 	struct vmw_framebuffer base;
+-	struct vmw_surface *surface;
+-	bool is_bo_proxy;  /* true if this is proxy surface for DMA buf */
++	struct vmw_user_object uo;
+ };
+ 
+-
+ struct vmw_framebuffer_bo {
+ 	struct vmw_framebuffer base;
+ 	struct vmw_bo *buffer;
+@@ -277,8 +276,7 @@ struct vmw_cursor_plane_state {
+  */
+ struct vmw_plane_state {
+ 	struct drm_plane_state base;
+-	struct vmw_surface *surf;
+-	struct vmw_bo *bo;
++	struct vmw_user_object uo;
+ 
+ 	int content_fb_type;
+ 	unsigned long bo_size;
+@@ -457,9 +455,7 @@ int vmw_kms_readback(struct vmw_private *dev_priv,
+ 		     uint32_t num_clips);
+ struct vmw_framebuffer *
+ vmw_kms_new_framebuffer(struct vmw_private *dev_priv,
+-			struct vmw_bo *bo,
+-			struct vmw_surface *surface,
+-			bool only_2d,
++			struct vmw_user_object *uo,
+ 			const struct drm_mode_fb_cmd2 *mode_cmd);
+ void vmw_guess_mode_timing(struct drm_display_mode *mode);
+ void vmw_kms_update_implicit_fb(struct vmw_private *dev_priv);
+@@ -486,8 +482,7 @@ void vmw_du_plane_reset(struct drm_plane *plane);
+ struct drm_plane_state *vmw_du_plane_duplicate_state(struct drm_plane *plane);
+ void vmw_du_plane_destroy_state(struct drm_plane *plane,
+ 				struct drm_plane_state *state);
+-void vmw_du_plane_unpin_surf(struct vmw_plane_state *vps,
+-			     bool unreference);
++void vmw_du_plane_unpin_surf(struct vmw_plane_state *vps);
+ 
+ int vmw_du_crtc_atomic_check(struct drm_crtc *crtc,
+ 			     struct drm_atomic_state *state);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+index 5befc2719a498..39949e0a493f3 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+@@ -1,7 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0 OR MIT
+ /**************************************************************************
+  *
+- * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA
++ * Copyright (c) 2009-2024 Broadcom. All Rights Reserved. The term
++ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the
+@@ -147,8 +148,9 @@ static int vmw_ldu_fb_pin(struct vmw_framebuffer *vfb)
+ 	struct vmw_bo *buf;
+ 	int ret;
+ 
+-	buf = vfb->bo ?  vmw_framebuffer_to_vfbd(&vfb->base)->buffer :
+-		vmw_framebuffer_to_vfbs(&vfb->base)->surface->res.guest_memory_bo;
++	buf = vfb->bo ?
++		vmw_framebuffer_to_vfbd(&vfb->base)->buffer :
++		vmw_user_object_buffer(&vmw_framebuffer_to_vfbs(&vfb->base)->uo);
+ 
+ 	if (!buf)
+ 		return 0;
+@@ -169,8 +171,10 @@ static int vmw_ldu_fb_unpin(struct vmw_framebuffer *vfb)
+ 	struct vmw_private *dev_priv = vmw_priv(vfb->base.dev);
+ 	struct vmw_bo *buf;
+ 
+-	buf = vfb->bo ?  vmw_framebuffer_to_vfbd(&vfb->base)->buffer :
+-		vmw_framebuffer_to_vfbs(&vfb->base)->surface->res.guest_memory_bo;
++	buf = vfb->bo ?
++		vmw_framebuffer_to_vfbd(&vfb->base)->buffer :
++		vmw_user_object_buffer(&vmw_framebuffer_to_vfbs(&vfb->base)->uo);
++
+ 
+ 	if (WARN_ON(!buf))
+ 		return 0;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c b/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
+index c45b4724e4141..e20f64b67b266 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
+@@ -92,7 +92,7 @@ static int vmw_overlay_send_put(struct vmw_private *dev_priv,
+ {
+ 	struct vmw_escape_video_flush *flush;
+ 	size_t fifo_size;
+-	bool have_so = (dev_priv->active_display_unit == vmw_du_screen_object);
++	bool have_so = (dev_priv->active_display_unit != vmw_du_legacy);
+ 	int i, num_items;
+ 	SVGAGuestPtr ptr;
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_prime.c b/drivers/gpu/drm/vmwgfx/vmwgfx_prime.c
+index c99cad4449915..598b90ac7590b 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_prime.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_prime.c
+@@ -1,7 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0 OR MIT
+ /**************************************************************************
+  *
+- * Copyright 2013 VMware, Inc., Palo Alto, CA., USA
++ * Copyright (c) 2013-2024 Broadcom. All Rights Reserved. The term
++ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the
+@@ -31,6 +32,7 @@
+  */
+ 
+ #include "vmwgfx_drv.h"
++#include "vmwgfx_bo.h"
+ #include "ttm_object.h"
+ #include <linux/dma-buf.h>
+ 
+@@ -88,13 +90,35 @@ int vmw_prime_handle_to_fd(struct drm_device *dev,
+ 			   uint32_t handle, uint32_t flags,
+ 			   int *prime_fd)
+ {
++	struct vmw_private *vmw = vmw_priv(dev);
+ 	struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
++	struct vmw_bo *vbo;
+ 	int ret;
++	int surf_handle;
+ 
+-	if (handle > VMWGFX_NUM_MOB)
++	if (handle > VMWGFX_NUM_MOB) {
+ 		ret = ttm_prime_handle_to_fd(tfile, handle, flags, prime_fd);
+-	else
+-		ret = drm_gem_prime_handle_to_fd(dev, file_priv, handle, flags, prime_fd);
++	} else {
++		ret = vmw_user_bo_lookup(file_priv, handle, &vbo);
++		if (ret)
++			return ret;
++		if (vbo && vbo->is_dumb) {
++			ret = drm_gem_prime_handle_to_fd(dev, file_priv, handle,
++							 flags, prime_fd);
++		} else {
++			surf_handle = vmw_lookup_surface_handle_for_buffer(vmw,
++									   vbo,
++									   handle);
++			if (surf_handle > 0)
++				ret = ttm_prime_handle_to_fd(tfile, surf_handle,
++							     flags, prime_fd);
++			else
++				ret = drm_gem_prime_handle_to_fd(dev, file_priv,
++								 handle, flags,
++								 prime_fd);
++		}
++		vmw_user_bo_unref(&vbo);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+index 848dba09981b0..a73af8a355fbf 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+@@ -1,7 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0 OR MIT
+ /**************************************************************************
+  *
+- * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA
++ * Copyright (c) 2009-2024 Broadcom. All Rights Reserved. The term
++ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the
+@@ -58,6 +59,7 @@ void vmw_resource_mob_attach(struct vmw_resource *res)
+ 
+ 	rb_link_node(&res->mob_node, parent, new);
+ 	rb_insert_color(&res->mob_node, &gbo->res_tree);
++	vmw_bo_del_detached_resource(gbo, res);
+ 
+ 	vmw_bo_prio_add(gbo, res->used_prio);
+ }
+@@ -287,28 +289,35 @@ int vmw_user_resource_lookup_handle(struct vmw_private *dev_priv,
+  *
+  * The pointer this pointed at by out_surf and out_buf needs to be null.
+  */
+-int vmw_user_lookup_handle(struct vmw_private *dev_priv,
++int vmw_user_object_lookup(struct vmw_private *dev_priv,
+ 			   struct drm_file *filp,
+-			   uint32_t handle,
+-			   struct vmw_surface **out_surf,
+-			   struct vmw_bo **out_buf)
++			   u32 handle,
++			   struct vmw_user_object *uo)
+ {
+ 	struct ttm_object_file *tfile = vmw_fpriv(filp)->tfile;
+ 	struct vmw_resource *res;
+ 	int ret;
+ 
+-	BUG_ON(*out_surf || *out_buf);
++	WARN_ON(uo->surface || uo->buffer);
+ 
+ 	ret = vmw_user_resource_lookup_handle(dev_priv, tfile, handle,
+ 					      user_surface_converter,
+ 					      &res);
+ 	if (!ret) {
+-		*out_surf = vmw_res_to_srf(res);
++		uo->surface = vmw_res_to_srf(res);
+ 		return 0;
+ 	}
+ 
+-	*out_surf = NULL;
+-	ret = vmw_user_bo_lookup(filp, handle, out_buf);
++	uo->surface = NULL;
++	ret = vmw_user_bo_lookup(filp, handle, &uo->buffer);
++	if (!ret && !uo->buffer->is_dumb) {
++		uo->surface = vmw_lookup_surface_for_buffer(dev_priv,
++							    uo->buffer,
++							    handle);
++		if (uo->surface)
++			vmw_user_bo_unref(&uo->buffer);
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
+index df0039a8ef29a..0f4bfd98480af 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
+@@ -1,7 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0 OR MIT
+ /**************************************************************************
+  *
+- * Copyright 2011-2023 VMware, Inc., Palo Alto, CA., USA
++ * Copyright (c) 2011-2024 Broadcom. All Rights Reserved. The term
++ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the
+@@ -240,7 +241,7 @@ static void vmw_sou_crtc_mode_set_nofb(struct drm_crtc *crtc)
+ 		struct vmw_connector_state *vmw_conn_state;
+ 		int x, y;
+ 
+-		sou->buffer = vps->bo;
++		sou->buffer = vmw_user_object_buffer(&vps->uo);
+ 
+ 		conn_state = sou->base.connector.state;
+ 		vmw_conn_state = vmw_connector_state_to_vcs(conn_state);
+@@ -376,10 +377,11 @@ vmw_sou_primary_plane_cleanup_fb(struct drm_plane *plane,
+ 	struct vmw_plane_state *vps = vmw_plane_state_to_vps(old_state);
+ 	struct drm_crtc *crtc = plane->state->crtc ?
+ 		plane->state->crtc : old_state->crtc;
++	struct vmw_bo *bo = vmw_user_object_buffer(&vps->uo);
+ 
+-	if (vps->bo)
+-		vmw_bo_unpin(vmw_priv(crtc->dev), vps->bo, false);
+-	vmw_bo_unreference(&vps->bo);
++	if (bo)
++		vmw_bo_unpin(vmw_priv(crtc->dev), bo, false);
++	vmw_user_object_unref(&vps->uo);
+ 	vps->bo_size = 0;
+ 
+ 	vmw_du_plane_cleanup_fb(plane, old_state);
+@@ -411,9 +413,10 @@ vmw_sou_primary_plane_prepare_fb(struct drm_plane *plane,
+ 		.bo_type = ttm_bo_type_device,
+ 		.pin = true
+ 	};
++	struct vmw_bo *bo = NULL;
+ 
+ 	if (!new_fb) {
+-		vmw_bo_unreference(&vps->bo);
++		vmw_user_object_unref(&vps->uo);
+ 		vps->bo_size = 0;
+ 
+ 		return 0;
+@@ -422,17 +425,17 @@ vmw_sou_primary_plane_prepare_fb(struct drm_plane *plane,
+ 	bo_params.size = new_state->crtc_w * new_state->crtc_h * 4;
+ 	dev_priv = vmw_priv(crtc->dev);
+ 
+-	if (vps->bo) {
++	bo = vmw_user_object_buffer(&vps->uo);
++	if (bo) {
+ 		if (vps->bo_size == bo_params.size) {
+ 			/*
+ 			 * Note that this might temporarily up the pin-count
+ 			 * to 2, until cleanup_fb() is called.
+ 			 */
+-			return vmw_bo_pin_in_vram(dev_priv, vps->bo,
+-						      true);
++			return vmw_bo_pin_in_vram(dev_priv, bo, true);
+ 		}
+ 
+-		vmw_bo_unreference(&vps->bo);
++		vmw_user_object_unref(&vps->uo);
+ 		vps->bo_size = 0;
+ 	}
+ 
+@@ -442,7 +445,7 @@ vmw_sou_primary_plane_prepare_fb(struct drm_plane *plane,
+ 	 * resume the overlays, this is preferred to failing to alloc.
+ 	 */
+ 	vmw_overlay_pause_all(dev_priv);
+-	ret = vmw_bo_create(dev_priv, &bo_params, &vps->bo);
++	ret = vmw_gem_object_create(dev_priv, &bo_params, &vps->uo.buffer);
+ 	vmw_overlay_resume_all(dev_priv);
+ 	if (ret)
+ 		return ret;
+@@ -453,7 +456,7 @@ vmw_sou_primary_plane_prepare_fb(struct drm_plane *plane,
+ 	 * TTM already thinks the buffer is pinned, but make sure the
+ 	 * pin_count is upped.
+ 	 */
+-	return vmw_bo_pin_in_vram(dev_priv, vps->bo, true);
++	return vmw_bo_pin_in_vram(dev_priv, vps->uo.buffer, true);
+ }
+ 
+ static uint32_t vmw_sou_bo_fifo_size(struct vmw_du_update_plane *update,
+@@ -580,6 +583,7 @@ static uint32_t vmw_sou_surface_pre_clip(struct vmw_du_update_plane *update,
+ {
+ 	struct vmw_kms_sou_dirty_cmd *blit = cmd;
+ 	struct vmw_framebuffer_surface *vfbs;
++	struct vmw_surface *surf = NULL;
+ 
+ 	vfbs = container_of(update->vfb, typeof(*vfbs), base);
+ 
+@@ -587,7 +591,8 @@ static uint32_t vmw_sou_surface_pre_clip(struct vmw_du_update_plane *update,
+ 	blit->header.size = sizeof(blit->body) + sizeof(SVGASignedRect) *
+ 		num_hits;
+ 
+-	blit->body.srcImage.sid = vfbs->surface->res.id;
++	surf = vmw_user_object_surface(&vfbs->uo);
++	blit->body.srcImage.sid = surf->res.id;
+ 	blit->body.destScreenId = update->du->unit;
+ 
+ 	/* Update the source and destination bounding box later in post_clip */
+@@ -1104,7 +1109,7 @@ int vmw_kms_sou_do_surface_dirty(struct vmw_private *dev_priv,
+ 	int ret;
+ 
+ 	if (!srf)
+-		srf = &vfbs->surface->res;
++		srf = &vmw_user_object_surface(&vfbs->uo)->res;
+ 
+ 	ret = vmw_validation_add_resource(&val_ctx, srf, 0, VMW_RES_DIRTY_NONE,
+ 					  NULL, NULL);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+index a04e0736318da..5453f7cf0e2d7 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+@@ -1,7 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0 OR MIT
+ /******************************************************************************
+  *
+- * COPYRIGHT (C) 2014-2023 VMware, Inc., Palo Alto, CA., USA
++ * Copyright (c) 2014-2024 Broadcom. All Rights Reserved. The term
++ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the
+@@ -29,6 +30,7 @@
+ #include "vmwgfx_kms.h"
+ #include "vmwgfx_vkms.h"
+ #include "vmw_surface_cache.h"
++#include <linux/fsnotify.h>
+ 
+ #include <drm/drm_atomic.h>
+ #include <drm/drm_atomic_helper.h>
+@@ -735,7 +737,7 @@ int vmw_kms_stdu_surface_dirty(struct vmw_private *dev_priv,
+ 	int ret;
+ 
+ 	if (!srf)
+-		srf = &vfbs->surface->res;
++		srf = &vmw_user_object_surface(&vfbs->uo)->res;
+ 
+ 	ret = vmw_validation_add_resource(&val_ctx, srf, 0, VMW_RES_DIRTY_NONE,
+ 					  NULL, NULL);
+@@ -746,12 +748,6 @@ int vmw_kms_stdu_surface_dirty(struct vmw_private *dev_priv,
+ 	if (ret)
+ 		goto out_unref;
+ 
+-	if (vfbs->is_bo_proxy) {
+-		ret = vmw_kms_update_proxy(srf, clips, num_clips, inc);
+-		if (ret)
+-			goto out_finish;
+-	}
+-
+ 	sdirty.base.fifo_commit = vmw_kms_stdu_surface_fifo_commit;
+ 	sdirty.base.clip = vmw_kms_stdu_surface_clip;
+ 	sdirty.base.fifo_reserve_size = sizeof(struct vmw_stdu_surface_copy) +
+@@ -765,7 +761,7 @@ int vmw_kms_stdu_surface_dirty(struct vmw_private *dev_priv,
+ 	ret = vmw_kms_helper_dirty(dev_priv, framebuffer, clips, vclips,
+ 				   dest_x, dest_y, num_clips, inc,
+ 				   &sdirty.base);
+-out_finish:
++
+ 	vmw_kms_helper_validation_finish(dev_priv, NULL, &val_ctx, out_fence,
+ 					 NULL);
+ 
+@@ -877,6 +873,32 @@ vmw_stdu_connector_mode_valid(struct drm_connector *connector,
+ 	return MODE_OK;
+ }
+ 
++/*
++ * Trigger a modeset if the X,Y position of the Screen Target changes.
++ * This is needed when multi-mon is cycled. The original Screen Target will have
++ * the same mode but its relative X,Y position in the topology will change.
++ */
++static int vmw_stdu_connector_atomic_check(struct drm_connector *conn,
++					   struct drm_atomic_state *state)
++{
++	struct drm_connector_state *conn_state;
++	struct vmw_screen_target_display_unit *du;
++	struct drm_crtc_state *new_crtc_state;
++
++	conn_state = drm_atomic_get_connector_state(state, conn);
++	du = vmw_connector_to_stdu(conn);
++
++	if (!conn_state->crtc)
++		return 0;
++
++	new_crtc_state = drm_atomic_get_new_crtc_state(state, conn_state->crtc);
++	if (du->base.gui_x != du->base.set_gui_x ||
++	    du->base.gui_y != du->base.set_gui_y)
++		new_crtc_state->mode_changed = true;
++
++	return 0;
++}
++
+ static const struct drm_connector_funcs vmw_stdu_connector_funcs = {
+ 	.dpms = vmw_du_connector_dpms,
+ 	.detect = vmw_du_connector_detect,
+@@ -891,7 +913,8 @@ static const struct drm_connector_funcs vmw_stdu_connector_funcs = {
+ static const struct
+ drm_connector_helper_funcs vmw_stdu_connector_helper_funcs = {
+ 	.get_modes = vmw_connector_get_modes,
+-	.mode_valid = vmw_stdu_connector_mode_valid
++	.mode_valid = vmw_stdu_connector_mode_valid,
++	.atomic_check = vmw_stdu_connector_atomic_check,
+ };
+ 
+ 
+@@ -918,9 +941,8 @@ vmw_stdu_primary_plane_cleanup_fb(struct drm_plane *plane,
+ {
+ 	struct vmw_plane_state *vps = vmw_plane_state_to_vps(old_state);
+ 
+-	if (vps->surf)
++	if (vmw_user_object_surface(&vps->uo))
+ 		WARN_ON(!vps->pinned);
+-
+ 	vmw_du_plane_cleanup_fb(plane, old_state);
+ 
+ 	vps->content_fb_type = SAME_AS_DISPLAY;
+@@ -928,7 +950,6 @@ vmw_stdu_primary_plane_cleanup_fb(struct drm_plane *plane,
+ }
+ 
+ 
+-
+ /**
+  * vmw_stdu_primary_plane_prepare_fb - Readies the display surface
+  *
+@@ -952,13 +973,15 @@ vmw_stdu_primary_plane_prepare_fb(struct drm_plane *plane,
+ 	enum stdu_content_type new_content_type;
+ 	struct vmw_framebuffer_surface *new_vfbs;
+ 	uint32_t hdisplay = new_state->crtc_w, vdisplay = new_state->crtc_h;
++	struct drm_plane_state *old_state = plane->state;
++	struct drm_rect rect;
+ 	int ret;
+ 
+ 	/* No FB to prepare */
+ 	if (!new_fb) {
+-		if (vps->surf) {
++		if (vmw_user_object_surface(&vps->uo)) {
+ 			WARN_ON(vps->pinned != 0);
+-			vmw_surface_unreference(&vps->surf);
++			vmw_user_object_unref(&vps->uo);
+ 		}
+ 
+ 		return 0;
+@@ -968,8 +991,8 @@ vmw_stdu_primary_plane_prepare_fb(struct drm_plane *plane,
+ 	new_vfbs = (vfb->bo) ? NULL : vmw_framebuffer_to_vfbs(new_fb);
+ 
+ 	if (new_vfbs &&
+-	    new_vfbs->surface->metadata.base_size.width == hdisplay &&
+-	    new_vfbs->surface->metadata.base_size.height == vdisplay)
++	    vmw_user_object_surface(&new_vfbs->uo)->metadata.base_size.width == hdisplay &&
++	    vmw_user_object_surface(&new_vfbs->uo)->metadata.base_size.height == vdisplay)
+ 		new_content_type = SAME_AS_DISPLAY;
+ 	else if (vfb->bo)
+ 		new_content_type = SEPARATE_BO;
+@@ -1007,29 +1030,29 @@ vmw_stdu_primary_plane_prepare_fb(struct drm_plane *plane,
+ 			metadata.num_sizes = 1;
+ 			metadata.scanout = true;
+ 		} else {
+-			metadata = new_vfbs->surface->metadata;
++			metadata = vmw_user_object_surface(&new_vfbs->uo)->metadata;
+ 		}
+ 
+ 		metadata.base_size.width = hdisplay;
+ 		metadata.base_size.height = vdisplay;
+ 		metadata.base_size.depth = 1;
+ 
+-		if (vps->surf) {
++		if (vmw_user_object_surface(&vps->uo)) {
+ 			struct drm_vmw_size cur_base_size =
+-				vps->surf->metadata.base_size;
++				vmw_user_object_surface(&vps->uo)->metadata.base_size;
+ 
+ 			if (cur_base_size.width != metadata.base_size.width ||
+ 			    cur_base_size.height != metadata.base_size.height ||
+-			    vps->surf->metadata.format != metadata.format) {
++			    vmw_user_object_surface(&vps->uo)->metadata.format != metadata.format) {
+ 				WARN_ON(vps->pinned != 0);
+-				vmw_surface_unreference(&vps->surf);
++				vmw_user_object_unref(&vps->uo);
+ 			}
+ 
+ 		}
+ 
+-		if (!vps->surf) {
++		if (!vmw_user_object_surface(&vps->uo)) {
+ 			ret = vmw_gb_surface_define(dev_priv, &metadata,
+-						    &vps->surf);
++						    &vps->uo.surface);
+ 			if (ret != 0) {
+ 				DRM_ERROR("Couldn't allocate STDU surface.\n");
+ 				return ret;
+@@ -1042,18 +1065,19 @@ vmw_stdu_primary_plane_prepare_fb(struct drm_plane *plane,
+ 		 * The only time we add a reference in prepare_fb is if the
+ 		 * state object doesn't have a reference to begin with
+ 		 */
+-		if (vps->surf) {
++		if (vmw_user_object_surface(&vps->uo)) {
+ 			WARN_ON(vps->pinned != 0);
+-			vmw_surface_unreference(&vps->surf);
++			vmw_user_object_unref(&vps->uo);
+ 		}
+ 
+-		vps->surf = vmw_surface_reference(new_vfbs->surface);
++		memcpy(&vps->uo, &new_vfbs->uo, sizeof(vps->uo));
++		vmw_user_object_ref(&vps->uo);
+ 	}
+ 
+-	if (vps->surf) {
++	if (vmw_user_object_surface(&vps->uo)) {
+ 
+ 		/* Pin new surface before flipping */
+-		ret = vmw_resource_pin(&vps->surf->res, false);
++		ret = vmw_resource_pin(&vmw_user_object_surface(&vps->uo)->res, false);
+ 		if (ret)
+ 			goto out_srf_unref;
+ 
+@@ -1062,6 +1086,34 @@ vmw_stdu_primary_plane_prepare_fb(struct drm_plane *plane,
+ 
+ 	vps->content_fb_type = new_content_type;
+ 
++	/*
++	 * The drm fb code will do blits via the vmap interface, which doesn't
++	 * trigger vmw_bo page dirty tracking because it is kernel side (and
++	 * thus doesn't require mmap'ing), so we have to update the surface's
++	 * dirty regions by hand, taking care not to overwrite the resource if
++	 * it has been written to by the gpu (res_dirty).
++	 */
++	if (vps->uo.buffer && vps->uo.buffer->is_dumb) {
++		struct vmw_surface *surf = vmw_user_object_surface(&vps->uo);
++		struct vmw_resource *res = &surf->res;
++
++		if (!res->res_dirty && drm_atomic_helper_damage_merged(old_state,
++								       new_state,
++								       &rect)) {
++			/*
++			 * At some point it might be useful to actually translate
++			 * (rect.x1, rect.y1) => start, and (rect.x2, rect.y2) => end,
++			 * but currently the fb code will just report the entire fb
++			 * dirty so in practice it doesn't matter.
++			 */
++			pgoff_t start = res->guest_memory_offset >> PAGE_SHIFT;
++			pgoff_t end = __KERNEL_DIV_ROUND_UP(res->guest_memory_offset +
++							    res->guest_memory_size,
++							    PAGE_SIZE);
++			vmw_resource_dirty_update(res, start, end);
++		}
++	}
++
+ 	/*
+ 	 * This should only happen if the buffer object is too large to create a
+ 	 * proxy surface for.
+@@ -1072,7 +1124,7 @@ vmw_stdu_primary_plane_prepare_fb(struct drm_plane *plane,
+ 	return 0;
+ 
+ out_srf_unref:
+-	vmw_surface_unreference(&vps->surf);
++	vmw_user_object_unref(&vps->uo);
+ 	return ret;
+ }
+ 
+@@ -1214,14 +1266,8 @@ static uint32_t
+ vmw_stdu_surface_fifo_size_same_display(struct vmw_du_update_plane *update,
+ 					uint32_t num_hits)
+ {
+-	struct vmw_framebuffer_surface *vfbs;
+ 	uint32_t size = 0;
+ 
+-	vfbs = container_of(update->vfb, typeof(*vfbs), base);
+-
+-	if (vfbs->is_bo_proxy)
+-		size += sizeof(struct vmw_stdu_update_gb_image) * num_hits;
+-
+ 	size += sizeof(struct vmw_stdu_update);
+ 
+ 	return size;
+@@ -1230,61 +1276,14 @@ vmw_stdu_surface_fifo_size_same_display(struct vmw_du_update_plane *update,
+ static uint32_t vmw_stdu_surface_fifo_size(struct vmw_du_update_plane *update,
+ 					   uint32_t num_hits)
+ {
+-	struct vmw_framebuffer_surface *vfbs;
+ 	uint32_t size = 0;
+ 
+-	vfbs = container_of(update->vfb, typeof(*vfbs), base);
+-
+-	if (vfbs->is_bo_proxy)
+-		size += sizeof(struct vmw_stdu_update_gb_image) * num_hits;
+-
+ 	size += sizeof(struct vmw_stdu_surface_copy) + sizeof(SVGA3dCopyBox) *
+ 		num_hits + sizeof(struct vmw_stdu_update);
+ 
+ 	return size;
+ }
+ 
+-static uint32_t
+-vmw_stdu_surface_update_proxy(struct vmw_du_update_plane *update, void *cmd)
+-{
+-	struct vmw_framebuffer_surface *vfbs;
+-	struct drm_plane_state *state = update->plane->state;
+-	struct drm_plane_state *old_state = update->old_state;
+-	struct vmw_stdu_update_gb_image *cmd_update = cmd;
+-	struct drm_atomic_helper_damage_iter iter;
+-	struct drm_rect clip;
+-	uint32_t copy_size = 0;
+-
+-	vfbs = container_of(update->vfb, typeof(*vfbs), base);
+-
+-	/*
+-	 * proxy surface is special where a buffer object type fb is wrapped
+-	 * in a surface and need an update gb image command to sync with device.
+-	 */
+-	drm_atomic_helper_damage_iter_init(&iter, old_state, state);
+-	drm_atomic_for_each_plane_damage(&iter, &clip) {
+-		SVGA3dBox *box = &cmd_update->body.box;
+-
+-		cmd_update->header.id = SVGA_3D_CMD_UPDATE_GB_IMAGE;
+-		cmd_update->header.size = sizeof(cmd_update->body);
+-		cmd_update->body.image.sid = vfbs->surface->res.id;
+-		cmd_update->body.image.face = 0;
+-		cmd_update->body.image.mipmap = 0;
+-
+-		box->x = clip.x1;
+-		box->y = clip.y1;
+-		box->z = 0;
+-		box->w = drm_rect_width(&clip);
+-		box->h = drm_rect_height(&clip);
+-		box->d = 1;
+-
+-		copy_size += sizeof(*cmd_update);
+-		cmd_update++;
+-	}
+-
+-	return copy_size;
+-}
+-
+ static uint32_t
+ vmw_stdu_surface_populate_copy(struct vmw_du_update_plane  *update, void *cmd,
+ 			       uint32_t num_hits)
+@@ -1299,7 +1298,7 @@ vmw_stdu_surface_populate_copy(struct vmw_du_update_plane  *update, void *cmd,
+ 	cmd_copy->header.id = SVGA_3D_CMD_SURFACE_COPY;
+ 	cmd_copy->header.size = sizeof(cmd_copy->body) + sizeof(SVGA3dCopyBox) *
+ 		num_hits;
+-	cmd_copy->body.src.sid = vfbs->surface->res.id;
++	cmd_copy->body.src.sid = vmw_user_object_surface(&vfbs->uo)->res.id;
+ 	cmd_copy->body.dest.sid = stdu->display_srf->res.id;
+ 
+ 	return sizeof(*cmd_copy);
+@@ -1370,10 +1369,7 @@ static int vmw_stdu_plane_update_surface(struct vmw_private *dev_priv,
+ 	srf_update.mutex = &dev_priv->cmdbuf_mutex;
+ 	srf_update.intr = true;
+ 
+-	if (vfbs->is_bo_proxy)
+-		srf_update.post_prepare = vmw_stdu_surface_update_proxy;
+-
+-	if (vfbs->surface->res.id != stdu->display_srf->res.id) {
++	if (vmw_user_object_surface(&vfbs->uo)->res.id != stdu->display_srf->res.id) {
+ 		srf_update.calc_fifo_size = vmw_stdu_surface_fifo_size;
+ 		srf_update.pre_clip = vmw_stdu_surface_populate_copy;
+ 		srf_update.clip = vmw_stdu_surface_populate_clip;
+@@ -1417,7 +1413,7 @@ vmw_stdu_primary_plane_atomic_update(struct drm_plane *plane,
+ 		stdu = vmw_crtc_to_stdu(crtc);
+ 		dev_priv = vmw_priv(crtc->dev);
+ 
+-		stdu->display_srf = vps->surf;
++		stdu->display_srf = vmw_user_object_surface(&vps->uo);
+ 		stdu->content_fb_type = vps->content_fb_type;
+ 		stdu->cpp = vps->cpp;
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+index e7a744dfcecfd..8ae6a761c9003 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+@@ -1,7 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0 OR MIT
+ /**************************************************************************
+  *
+- * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA
++ * Copyright (c) 2009-2024 Broadcom. All Rights Reserved. The term
++ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+  *
+  * Permission is hereby granted, free of charge, to any person obtaining a
+  * copy of this software and associated documentation files (the
+@@ -36,9 +37,6 @@
+ #include <drm/ttm/ttm_placement.h>
+ 
+ #define SVGA3D_FLAGS_64(upper32, lower32) (((uint64_t)upper32 << 32) | lower32)
+-#define SVGA3D_FLAGS_UPPER_32(svga3d_flags) (svga3d_flags >> 32)
+-#define SVGA3D_FLAGS_LOWER_32(svga3d_flags) \
+-	(svga3d_flags & ((uint64_t)U32_MAX))
+ 
+ /**
+  * struct vmw_user_surface - User-space visible surface resource
+@@ -686,6 +684,14 @@ static void vmw_user_surface_base_release(struct ttm_base_object **p_base)
+ 	struct vmw_resource *res = &user_srf->srf.res;
+ 
+ 	*p_base = NULL;
++
++	/*
++	 * Dumb buffers own the resource and they'll unref the
++	 * resource themselves
++	 */
++	if (res && res->guest_memory_bo && res->guest_memory_bo->is_dumb)
++		return;
++
+ 	vmw_resource_unreference(&res);
+ }
+ 
+@@ -812,7 +818,8 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
+ 		}
+ 	}
+ 	res->guest_memory_size = cur_bo_offset;
+-	if (metadata->scanout &&
++	if (!file_priv->atomic &&
++	    metadata->scanout &&
+ 	    metadata->num_sizes == 1 &&
+ 	    metadata->sizes[0].width == VMW_CURSOR_SNOOP_WIDTH &&
+ 	    metadata->sizes[0].height == VMW_CURSOR_SNOOP_HEIGHT &&
+@@ -864,6 +871,7 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
+ 			vmw_resource_unreference(&res);
+ 			goto out_unlock;
+ 		}
++		vmw_bo_add_detached_resource(res->guest_memory_bo, res);
+ 	}
+ 
+ 	tmp = vmw_resource_reference(&srf->res);
+@@ -892,6 +900,113 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
+ 	return ret;
+ }
+ 
++static struct vmw_user_surface *
++vmw_lookup_user_surface_for_buffer(struct vmw_private *vmw, struct vmw_bo *bo,
++				   u32 handle)
++{
++	struct vmw_user_surface *user_srf = NULL;
++	struct vmw_surface *surf;
++	struct ttm_base_object *base;
++
++	surf = vmw_bo_surface(bo);
++	if (surf) {
++		rcu_read_lock();
++		user_srf = container_of(surf, struct vmw_user_surface, srf);
++		base = &user_srf->prime.base;
++		if (base && !kref_get_unless_zero(&base->refcount)) {
++			drm_dbg_driver(&vmw->drm,
++				       "%s: referencing a stale surface handle %d\n",
++					__func__, handle);
++			base = NULL;
++			user_srf = NULL;
++		}
++		rcu_read_unlock();
++	}
++
++	return user_srf;
++}
++
++struct vmw_surface *vmw_lookup_surface_for_buffer(struct vmw_private *vmw,
++						  struct vmw_bo *bo,
++						  u32 handle)
++{
++	struct vmw_user_surface *user_srf =
++		vmw_lookup_user_surface_for_buffer(vmw, bo, handle);
++	struct vmw_surface *surf = NULL;
++	struct ttm_base_object *base;
++
++	if (user_srf) {
++		surf = vmw_surface_reference(&user_srf->srf);
++		base = &user_srf->prime.base;
++		ttm_base_object_unref(&base);
++	}
++	return surf;
++}
++
++u32 vmw_lookup_surface_handle_for_buffer(struct vmw_private *vmw,
++					 struct vmw_bo *bo,
++					 u32 handle)
++{
++	struct vmw_user_surface *user_srf =
++		vmw_lookup_user_surface_for_buffer(vmw, bo, handle);
++	int surf_handle = 0;
++	struct ttm_base_object *base;
++
++	if (user_srf) {
++		base = &user_srf->prime.base;
++		surf_handle = (u32)base->handle;
++		ttm_base_object_unref(&base);
++	}
++	return surf_handle;
++}
++
++static int vmw_buffer_prime_to_surface_base(struct vmw_private *dev_priv,
++					    struct drm_file *file_priv,
++					    u32 fd, u32 *handle,
++					    struct ttm_base_object **base_p)
++{
++	struct ttm_base_object *base;
++	struct vmw_bo *bo;
++	struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
++	struct vmw_user_surface *user_srf;
++	int ret;
++
++	ret = drm_gem_prime_fd_to_handle(&dev_priv->drm, file_priv, fd, handle);
++	if (ret) {
++		drm_warn(&dev_priv->drm,
++			 "Wasn't able to find user buffer for fd = %u.\n", fd);
++		return ret;
++	}
++
++	ret = vmw_user_bo_lookup(file_priv, *handle, &bo);
++	if (ret) {
++		drm_warn(&dev_priv->drm,
++			 "Wasn't able to lookup user buffer for handle = %u.\n", *handle);
++		return ret;
++	}
++
++	user_srf = vmw_lookup_user_surface_for_buffer(dev_priv, bo, *handle);
++	if (WARN_ON(!user_srf)) {
++		drm_warn(&dev_priv->drm,
++			 "User surface fd %d (handle %d) is null.\n", fd, *handle);
++		ret = -EINVAL;
++		goto out;
++	}
++
++	base = &user_srf->prime.base;
++	ret = ttm_ref_object_add(tfile, base, NULL, false);
++	if (ret) {
++		drm_warn(&dev_priv->drm,
++			 "Couldn't add an object ref for the buffer (%d).\n", *handle);
++		goto out;
++	}
++
++	*base_p = base;
++out:
++	vmw_user_bo_unref(&bo);
++
++	return ret;
++}
+ 
+ static int
+ vmw_surface_handle_reference(struct vmw_private *dev_priv,
+@@ -901,15 +1016,19 @@ vmw_surface_handle_reference(struct vmw_private *dev_priv,
+ 			     struct ttm_base_object **base_p)
+ {
+ 	struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
+-	struct vmw_user_surface *user_srf;
++	struct vmw_user_surface *user_srf = NULL;
+ 	uint32_t handle;
+ 	struct ttm_base_object *base;
+ 	int ret;
+ 
+ 	if (handle_type == DRM_VMW_HANDLE_PRIME) {
+ 		ret = ttm_prime_fd_to_handle(tfile, u_handle, &handle);
+-		if (unlikely(ret != 0))
+-			return ret;
++		if (ret)
++			return vmw_buffer_prime_to_surface_base(dev_priv,
++								file_priv,
++								u_handle,
++								&handle,
++								base_p);
+ 	} else {
+ 		handle = u_handle;
+ 	}
+@@ -1503,7 +1622,12 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
+ 		ret = vmw_user_bo_lookup(file_priv, req->base.buffer_handle,
+ 					 &res->guest_memory_bo);
+ 		if (ret == 0) {
+-			if (res->guest_memory_bo->tbo.base.size < res->guest_memory_size) {
++			if (res->guest_memory_bo->is_dumb) {
++				VMW_DEBUG_USER("Can't backup surface with a dumb buffer.\n");
++				vmw_user_bo_unref(&res->guest_memory_bo);
++				ret = -EINVAL;
++				goto out_unlock;
++			} else if (res->guest_memory_bo->tbo.base.size < res->guest_memory_size) {
+ 				VMW_DEBUG_USER("Surface backup buffer too small.\n");
+ 				vmw_user_bo_unref(&res->guest_memory_bo);
+ 				ret = -EINVAL;
+@@ -1560,6 +1684,7 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
+ 	rep->handle      = user_srf->prime.base.handle;
+ 	rep->backup_size = res->guest_memory_size;
+ 	if (res->guest_memory_bo) {
++		vmw_bo_add_detached_resource(res->guest_memory_bo, res);
+ 		rep->buffer_map_handle =
+ 			drm_vma_node_offset_addr(&res->guest_memory_bo->tbo.base.vma_node);
+ 		rep->buffer_size = res->guest_memory_bo->tbo.base.size;
+@@ -2100,3 +2225,140 @@ int vmw_gb_surface_define(struct vmw_private *dev_priv,
+ out_unlock:
+ 	return ret;
+ }
++
++static SVGA3dSurfaceFormat vmw_format_bpp_to_svga(struct vmw_private *vmw,
++						  int bpp)
++{
++	switch (bpp) {
++	case 8: /* DRM_FORMAT_C8 */
++		return SVGA3D_P8;
++	case 16: /* DRM_FORMAT_RGB565 */
++		return SVGA3D_R5G6B5;
++	case 32: /* DRM_FORMAT_XRGB8888 */
++		if (has_sm4_context(vmw))
++			return SVGA3D_B8G8R8X8_UNORM;
++		return SVGA3D_X8R8G8B8;
++	default:
++		drm_warn(&vmw->drm, "Unsupported format bpp: %d\n", bpp);
++		return SVGA3D_X8R8G8B8;
++	}
++}
++
++/**
++ * vmw_dumb_create - Create a dumb kms buffer
++ *
++ * @file_priv: Pointer to a struct drm_file identifying the caller.
++ * @dev: Pointer to the drm device.
++ * @args: Pointer to a struct drm_mode_create_dumb structure
++ * Return: Zero on success, negative error code on failure.
++ *
++ * This is a driver callback for the core drm create_dumb functionality.
++ * Note that this is very similar to the vmw_bo_alloc ioctl, except
++ * that the arguments have a different format.
++ */
++int vmw_dumb_create(struct drm_file *file_priv,
++		    struct drm_device *dev,
++		    struct drm_mode_create_dumb *args)
++{
++	struct vmw_private *dev_priv = vmw_priv(dev);
++	struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
++	struct vmw_bo *vbo = NULL;
++	struct vmw_resource *res = NULL;
++	union drm_vmw_gb_surface_create_ext_arg arg = { 0 };
++	struct drm_vmw_gb_surface_create_ext_req *req = &arg.req;
++	int ret;
++	struct drm_vmw_size drm_size = {
++		.width = args->width,
++		.height = args->height,
++		.depth = 1,
++	};
++	SVGA3dSurfaceFormat format = vmw_format_bpp_to_svga(dev_priv, args->bpp);
++	const struct SVGA3dSurfaceDesc *desc = vmw_surface_get_desc(format);
++	SVGA3dSurfaceAllFlags flags = SVGA3D_SURFACE_HINT_TEXTURE |
++				      SVGA3D_SURFACE_HINT_RENDERTARGET |
++				      SVGA3D_SURFACE_SCREENTARGET |
++				      SVGA3D_SURFACE_BIND_SHADER_RESOURCE |
++				      SVGA3D_SURFACE_BIND_RENDER_TARGET;
++
++	/*
++	 * Without mob support we're just going to use raw memory buffer
++	 * because we wouldn't be able to support full surface coherency
++	 * without mobs
++	 */
++	if (!dev_priv->has_mob) {
++		int cpp = DIV_ROUND_UP(args->bpp, 8);
++
++		switch (cpp) {
++		case 1: /* DRM_FORMAT_C8 */
++		case 2: /* DRM_FORMAT_RGB565 */
++		case 4: /* DRM_FORMAT_XRGB8888 */
++			break;
++		default:
++			/*
++			 * Dumb buffers don't allow anything else.
++			 * This is tested via IGT's dumb_buffers
++			 */
++			return -EINVAL;
++		}
++
++		args->pitch = args->width * cpp;
++		args->size = ALIGN(args->pitch * args->height, PAGE_SIZE);
++
++		ret = vmw_gem_object_create_with_handle(dev_priv, file_priv,
++							args->size, &args->handle,
++							&vbo);
++		/* drop reference from allocate - handle holds it now */
++		drm_gem_object_put(&vbo->tbo.base);
++		return ret;
++	}
++
++	req->version = drm_vmw_gb_surface_v1;
++	req->multisample_pattern = SVGA3D_MS_PATTERN_NONE;
++	req->quality_level = SVGA3D_MS_QUALITY_NONE;
++	req->buffer_byte_stride = 0;
++	req->must_be_zero = 0;
++	req->base.svga3d_flags = SVGA3D_FLAGS_LOWER_32(flags);
++	req->svga3d_flags_upper_32_bits = SVGA3D_FLAGS_UPPER_32(flags);
++	req->base.format = (uint32_t)format;
++	req->base.drm_surface_flags = drm_vmw_surface_flag_scanout;
++	req->base.drm_surface_flags |= drm_vmw_surface_flag_shareable;
++	req->base.drm_surface_flags |= drm_vmw_surface_flag_create_buffer;
++	req->base.drm_surface_flags |= drm_vmw_surface_flag_coherent;
++	req->base.base_size.width = args->width;
++	req->base.base_size.height = args->height;
++	req->base.base_size.depth = 1;
++	req->base.array_size = 0;
++	req->base.mip_levels = 1;
++	req->base.multisample_count = 0;
++	req->base.buffer_handle = SVGA3D_INVALID_ID;
++	req->base.autogen_filter = SVGA3D_TEX_FILTER_NONE;
++	ret = vmw_gb_surface_define_ext_ioctl(dev, &arg, file_priv);
++	if (ret) {
++		drm_warn(dev, "Unable to create a dumb buffer\n");
++		return ret;
++	}
++
++	args->handle = arg.rep.buffer_handle;
++	args->size = arg.rep.buffer_size;
++	args->pitch = vmw_surface_calculate_pitch(desc, &drm_size);
++
++	ret = vmw_user_resource_lookup_handle(dev_priv, tfile, arg.rep.handle,
++					      user_surface_converter,
++					      &res);
++	if (ret) {
++		drm_err(dev, "Created resource handle doesn't exist!\n");
++		goto err;
++	}
++
++	vbo = res->guest_memory_bo;
++	vbo->is_dumb = true;
++	vbo->dumb_surface = vmw_res_to_srf(res);
++
++err:
++	if (res)
++		vmw_resource_unreference(&res);
++	if (ret)
++		ttm_ref_object_base_unref(tfile, arg.rep.handle);
++
++	return ret;
++}
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_vkms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_vkms.c
+index 7e93a45948f79..ac002048d8e5e 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_vkms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_vkms.c
+@@ -76,7 +76,7 @@ vmw_surface_sync(struct vmw_private *vmw,
+ 	return ret;
+ }
+ 
+-static int
++static void
+ compute_crc(struct drm_crtc *crtc,
+ 	    struct vmw_surface *surf,
+ 	    u32 *crc)
+@@ -102,8 +102,6 @@ compute_crc(struct drm_crtc *crtc,
+ 	}
+ 
+ 	vmw_bo_unmap(bo);
+-
+-	return 0;
+ }
+ 
+ static void
+@@ -117,7 +115,6 @@ crc_generate_worker(struct work_struct *work)
+ 	u64 frame_start, frame_end;
+ 	u32 crc32 = 0;
+ 	struct vmw_surface *surf = 0;
+-	int ret;
+ 
+ 	spin_lock_irq(&du->vkms.crc_state_lock);
+ 	crc_pending = du->vkms.crc_pending;
+@@ -131,22 +128,24 @@ crc_generate_worker(struct work_struct *work)
+ 		return;
+ 
+ 	spin_lock_irq(&du->vkms.crc_state_lock);
+-	surf = du->vkms.surface;
++	surf = vmw_surface_reference(du->vkms.surface);
+ 	spin_unlock_irq(&du->vkms.crc_state_lock);
+ 
+-	if (vmw_surface_sync(vmw, surf)) {
+-		drm_warn(crtc->dev, "CRC worker wasn't able to sync the crc surface!\n");
+-		return;
+-	}
++	if (surf) {
++		if (vmw_surface_sync(vmw, surf)) {
++			drm_warn(
++				crtc->dev,
++				"CRC worker wasn't able to sync the crc surface!\n");
++			return;
++		}
+ 
+-	ret = compute_crc(crtc, surf, &crc32);
+-	if (ret)
+-		return;
++		compute_crc(crtc, surf, &crc32);
++		vmw_surface_unreference(&surf);
++	}
+ 
+ 	spin_lock_irq(&du->vkms.crc_state_lock);
+ 	frame_start = du->vkms.frame_start;
+ 	frame_end = du->vkms.frame_end;
+-	crc_pending = du->vkms.crc_pending;
+ 	du->vkms.frame_start = 0;
+ 	du->vkms.frame_end = 0;
+ 	du->vkms.crc_pending = false;
+@@ -165,7 +164,7 @@ vmw_vkms_vblank_simulate(struct hrtimer *timer)
+ 	struct vmw_display_unit *du = container_of(timer, struct vmw_display_unit, vkms.timer);
+ 	struct drm_crtc *crtc = &du->crtc;
+ 	struct vmw_private *vmw = vmw_priv(crtc->dev);
+-	struct vmw_surface *surf = NULL;
++	bool has_surface = false;
+ 	u64 ret_overrun;
+ 	bool locked, ret;
+ 
+@@ -180,10 +179,10 @@ vmw_vkms_vblank_simulate(struct hrtimer *timer)
+ 	WARN_ON(!ret);
+ 	if (!locked)
+ 		return HRTIMER_RESTART;
+-	surf = du->vkms.surface;
++	has_surface = du->vkms.surface != NULL;
+ 	vmw_vkms_unlock(crtc);
+ 
+-	if (du->vkms.crc_enabled && surf) {
++	if (du->vkms.crc_enabled && has_surface) {
+ 		u64 frame = drm_crtc_accurate_vblank_count(crtc);
+ 
+ 		spin_lock(&du->vkms.crc_state_lock);
+@@ -337,6 +336,8 @@ vmw_vkms_crtc_cleanup(struct drm_crtc *crtc)
+ {
+ 	struct vmw_display_unit *du = vmw_crtc_to_du(crtc);
+ 
++	if (du->vkms.surface)
++		vmw_surface_unreference(&du->vkms.surface);
+ 	WARN_ON(work_pending(&du->vkms.crc_generator_work));
+ 	hrtimer_cancel(&du->vkms.timer);
+ }
+@@ -498,9 +499,12 @@ vmw_vkms_set_crc_surface(struct drm_crtc *crtc,
+ 	struct vmw_display_unit *du = vmw_crtc_to_du(crtc);
+ 	struct vmw_private *vmw = vmw_priv(crtc->dev);
+ 
+-	if (vmw->vkms_enabled) {
++	if (vmw->vkms_enabled && du->vkms.surface != surf) {
+ 		WARN_ON(atomic_read(&du->vkms.atomic_lock) != VMW_VKMS_LOCK_MODESET);
+-		du->vkms.surface = surf;
++		if (du->vkms.surface)
++			vmw_surface_unreference(&du->vkms.surface);
++		if (surf)
++			du->vkms.surface = vmw_surface_reference(surf);
+ 	}
+ }
+ 
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_client.c b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+index bdb578e0899f5..4b59687ff5d82 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+@@ -288,12 +288,22 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata)
+ 		mp2_ops->start(privdata, info);
+ 		cl_data->sensor_sts[i] = amd_sfh_wait_for_response
+ 						(privdata, cl_data->sensor_idx[i], SENSOR_ENABLED);
++
++		if (cl_data->sensor_sts[i] == SENSOR_ENABLED)
++			cl_data->is_any_sensor_enabled = true;
++	}
++
++	if (!cl_data->is_any_sensor_enabled ||
++	    (mp2_ops->discovery_status && mp2_ops->discovery_status(privdata) == 0)) {
++		dev_warn(dev, "Failed to discover, sensors not enabled is %d\n",
++			 cl_data->is_any_sensor_enabled);
++		rc = -EOPNOTSUPP;
++		goto cleanup;
+ 	}
+ 
+ 	for (i = 0; i < cl_data->num_hid_devices; i++) {
+ 		cl_data->cur_hid_dev = i;
+ 		if (cl_data->sensor_sts[i] == SENSOR_ENABLED) {
+-			cl_data->is_any_sensor_enabled = true;
+ 			rc = amdtp_hid_probe(i, cl_data);
+ 			if (rc)
+ 				goto cleanup;
+@@ -305,12 +315,6 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata)
+ 			cl_data->sensor_sts[i]);
+ 	}
+ 
+-	if (!cl_data->is_any_sensor_enabled ||
+-	   (mp2_ops->discovery_status && mp2_ops->discovery_status(privdata) == 0)) {
+-		dev_warn(dev, "Failed to discover, sensors not enabled is %d\n", cl_data->is_any_sensor_enabled);
+-		rc = -EOPNOTSUPP;
+-		goto cleanup;
+-	}
+ 	schedule_delayed_work(&cl_data->work_buffer, msecs_to_jiffies(AMD_SFH_IDLE_LOOP));
+ 	return 0;
+ 
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index a44367aef6216..20de97ce0f5ee 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -714,13 +714,12 @@ static int wacom_intuos_get_tool_type(int tool_id)
+ 	case 0x8e2: /* IntuosHT2 pen */
+ 	case 0x022:
+ 	case 0x200: /* Pro Pen 3 */
+-	case 0x04200: /* Pro Pen 3 */
+ 	case 0x10842: /* MobileStudio Pro Pro Pen slim */
+ 	case 0x14802: /* Intuos4/5 13HD/24HD Classic Pen */
+ 	case 0x16802: /* Cintiq 13HD Pro Pen */
+ 	case 0x18802: /* DTH2242 Pen */
+ 	case 0x10802: /* Intuos4/5 13HD/24HD General Pen */
+-	case 0x80842: /* Intuos Pro and Cintiq Pro 3D Pen */
++	case 0x8842: /* Intuos Pro and Cintiq Pro 3D Pen */
+ 		tool_type = BTN_TOOL_PEN;
+ 		break;
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 43952689bfb0c..23627c973e40f 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -7491,8 +7491,8 @@ static int bnxt_get_avail_msix(struct bnxt *bp, int num);
+ static int __bnxt_reserve_rings(struct bnxt *bp)
+ {
+ 	struct bnxt_hw_rings hwr = {0};
++	int rx_rings, old_rx_rings, rc;
+ 	int cp = bp->cp_nr_rings;
+-	int rx_rings, rc;
+ 	int ulp_msix = 0;
+ 	bool sh = false;
+ 	int tx_cp;
+@@ -7526,6 +7526,7 @@ static int __bnxt_reserve_rings(struct bnxt *bp)
+ 	hwr.grp = bp->rx_nr_rings;
+ 	hwr.rss_ctx = bnxt_get_total_rss_ctxs(bp, &hwr);
+ 	hwr.stat = bnxt_get_func_stat_ctxs(bp);
++	old_rx_rings = bp->hw_resc.resv_rx_rings;
+ 
+ 	rc = bnxt_hwrm_reserve_rings(bp, &hwr);
+ 	if (rc)
+@@ -7580,7 +7581,8 @@ static int __bnxt_reserve_rings(struct bnxt *bp)
+ 	if (!bnxt_rings_ok(bp, &hwr))
+ 		return -ENOMEM;
+ 
+-	if (!netif_is_rxfh_configured(bp->dev))
++	if (old_rx_rings != bp->hw_resc.resv_rx_rings &&
++	    !netif_is_rxfh_configured(bp->dev))
+ 		bnxt_set_dflt_rss_indir_tbl(bp, NULL);
+ 
+ 	if (!bnxt_ulp_registered(bp->edev) && BNXT_NEW_RM(bp)) {
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 99a75a59078ef..caaa10157909e 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -765,18 +765,17 @@ static inline struct xsk_buff_pool *ice_get_xp_from_qid(struct ice_vsi *vsi,
+ }
+ 
+ /**
+- * ice_xsk_pool - get XSK buffer pool bound to a ring
++ * ice_rx_xsk_pool - assign XSK buff pool to Rx ring
+  * @ring: Rx ring to use
+  *
+- * Returns a pointer to xsk_buff_pool structure if there is a buffer pool
+- * present, NULL otherwise.
++ * Sets XSK buff pool pointer on Rx ring.
+  */
+-static inline struct xsk_buff_pool *ice_xsk_pool(struct ice_rx_ring *ring)
++static inline void ice_rx_xsk_pool(struct ice_rx_ring *ring)
+ {
+ 	struct ice_vsi *vsi = ring->vsi;
+ 	u16 qid = ring->q_index;
+ 
+-	return ice_get_xp_from_qid(vsi, qid);
++	WRITE_ONCE(ring->xsk_pool, ice_get_xp_from_qid(vsi, qid));
+ }
+ 
+ /**
+@@ -801,7 +800,7 @@ static inline void ice_tx_xsk_pool(struct ice_vsi *vsi, u16 qid)
+ 	if (!ring)
+ 		return;
+ 
+-	ring->xsk_pool = ice_get_xp_from_qid(vsi, qid);
++	WRITE_ONCE(ring->xsk_pool, ice_get_xp_from_qid(vsi, qid));
+ }
+ 
+ /**
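
In the ice hunks above, assignments to ring->xsk_pool move to WRITE_ONCE() and the hot paths read the pointer once with READ_ONCE(), so a NAPI poll cycle works on a single snapshot of the pool pointer while the XSK pool can be attached or detached concurrently. A minimal sketch of that publish/snapshot idiom follows; the WRITE_ONCE/READ_ONCE definitions here are simplified volatile-cast stand-ins for the kernel macros, and the struct names are hypothetical:

#include <stdio.h>

/* Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(): force a
 * single, untorn access through a volatile cast. The real macros live in
 * the kernel's compiler headers and handle more cases. */
#define WRITE_ONCE(var, val)	(*(volatile __typeof__(var) *)&(var) = (val))
#define READ_ONCE(var)		(*(volatile __typeof__(var) *)&(var))

struct pool { int id; };

struct ring {
	struct pool *xsk_pool;	/* published by config path, read by datapath */
};

/* Config path: publish (or clear) the pool pointer exactly once. */
static void ring_set_pool(struct ring *r, struct pool *p)
{
	WRITE_ONCE(r->xsk_pool, p);
}

/* Datapath: snapshot the pointer once per poll cycle and use only the
 * snapshot, so a concurrent reconfiguration can't change it mid-loop. */
static void ring_poll(struct ring *r)
{
	struct pool *p = READ_ONCE(r->xsk_pool);

	if (p)
		printf("polling with pool %d\n", p->id);
	else
		printf("polling without a pool\n");
}

int main(void)
{
	struct ring r = { 0 };
	struct pool p = { .id = 7 };

	ring_poll(&r);
	ring_set_pool(&r, &p);
	ring_poll(&r);
	return 0;
}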
+diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
+index 5d396c1a77314..1facf179a96fd 100644
+--- a/drivers/net/ethernet/intel/ice/ice_base.c
++++ b/drivers/net/ethernet/intel/ice/ice_base.c
+@@ -536,7 +536,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
+ 				return err;
+ 		}
+ 
+-		ring->xsk_pool = ice_xsk_pool(ring);
++		ice_rx_xsk_pool(ring);
+ 		if (ring->xsk_pool) {
+ 			xdp_rxq_info_unreg(&ring->xdp_rxq);
+ 
+@@ -597,7 +597,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
+ 			return 0;
+ 		}
+ 
+-		ok = ice_alloc_rx_bufs_zc(ring, num_bufs);
++		ok = ice_alloc_rx_bufs_zc(ring, ring->xsk_pool, num_bufs);
+ 		if (!ok) {
+ 			u16 pf_q = ring->vsi->rxq_map[ring->q_index];
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 55a42aad92a51..9b075dd48889e 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2949,7 +2949,7 @@ static void ice_vsi_rx_napi_schedule(struct ice_vsi *vsi)
+ 	ice_for_each_rxq(vsi, i) {
+ 		struct ice_rx_ring *rx_ring = vsi->rx_rings[i];
+ 
+-		if (rx_ring->xsk_pool)
++		if (READ_ONCE(rx_ring->xsk_pool))
+ 			napi_schedule(&rx_ring->q_vector->napi);
+ 	}
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 8bb743f78fcb4..8d25b69812698 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -456,7 +456,7 @@ void ice_free_rx_ring(struct ice_rx_ring *rx_ring)
+ 	if (rx_ring->vsi->type == ICE_VSI_PF)
+ 		if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
+ 			xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
+-	rx_ring->xdp_prog = NULL;
++	WRITE_ONCE(rx_ring->xdp_prog, NULL);
+ 	if (rx_ring->xsk_pool) {
+ 		kfree(rx_ring->xdp_buf);
+ 		rx_ring->xdp_buf = NULL;
+@@ -1521,10 +1521,11 @@ int ice_napi_poll(struct napi_struct *napi, int budget)
+ 	 * budget and be more aggressive about cleaning up the Tx descriptors.
+ 	 */
+ 	ice_for_each_tx_ring(tx_ring, q_vector->tx) {
++		struct xsk_buff_pool *xsk_pool = READ_ONCE(tx_ring->xsk_pool);
+ 		bool wd;
+ 
+-		if (tx_ring->xsk_pool)
+-			wd = ice_xmit_zc(tx_ring);
++		if (xsk_pool)
++			wd = ice_xmit_zc(tx_ring, xsk_pool);
+ 		else if (ice_ring_is_xdp(tx_ring))
+ 			wd = true;
+ 		else
+@@ -1550,6 +1551,7 @@ int ice_napi_poll(struct napi_struct *napi, int budget)
+ 		budget_per_ring = budget;
+ 
+ 	ice_for_each_rx_ring(rx_ring, q_vector->rx) {
++		struct xsk_buff_pool *xsk_pool = READ_ONCE(rx_ring->xsk_pool);
+ 		int cleaned;
+ 
+ 		/* A dedicated path for zero-copy allows making a single
+@@ -1557,7 +1559,7 @@ int ice_napi_poll(struct napi_struct *napi, int budget)
+ 		 * ice_clean_rx_irq function and makes the codebase cleaner.
+ 		 */
+ 		cleaned = rx_ring->xsk_pool ?
+-			  ice_clean_rx_irq_zc(rx_ring, budget_per_ring) :
++			  ice_clean_rx_irq_zc(rx_ring, xsk_pool, budget_per_ring) :
+ 			  ice_clean_rx_irq(rx_ring, budget_per_ring);
+ 		work_done += cleaned;
+ 		/* if we clean as many as budgeted, we must not be done */
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index a65955eb23c0b..240a7bec242be 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -52,10 +52,8 @@ static void ice_qp_reset_stats(struct ice_vsi *vsi, u16 q_idx)
+ static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
+ {
+ 	ice_clean_tx_ring(vsi->tx_rings[q_idx]);
+-	if (ice_is_xdp_ena_vsi(vsi)) {
+-		synchronize_rcu();
++	if (ice_is_xdp_ena_vsi(vsi))
+ 		ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
+-	}
+ 	ice_clean_rx_ring(vsi->rx_rings[q_idx]);
+ }
+ 
+@@ -112,25 +110,29 @@ ice_qvec_dis_irq(struct ice_vsi *vsi, struct ice_rx_ring *rx_ring,
+  * ice_qvec_cfg_msix - Enable IRQ for given queue vector
+  * @vsi: the VSI that contains queue vector
+  * @q_vector: queue vector
++ * @qid: queue index
+  */
+ static void
+-ice_qvec_cfg_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
++ice_qvec_cfg_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector, u16 qid)
+ {
+ 	u16 reg_idx = q_vector->reg_idx;
+ 	struct ice_pf *pf = vsi->back;
+ 	struct ice_hw *hw = &pf->hw;
+-	struct ice_tx_ring *tx_ring;
+-	struct ice_rx_ring *rx_ring;
++	int q, _qid = qid;
+ 
+ 	ice_cfg_itr(hw, q_vector);
+ 
+-	ice_for_each_tx_ring(tx_ring, q_vector->tx)
+-		ice_cfg_txq_interrupt(vsi, tx_ring->reg_idx, reg_idx,
+-				      q_vector->tx.itr_idx);
++	for (q = 0; q < q_vector->num_ring_tx; q++) {
++		ice_cfg_txq_interrupt(vsi, _qid, reg_idx, q_vector->tx.itr_idx);
++		_qid++;
++	}
++
++	_qid = qid;
+ 
+-	ice_for_each_rx_ring(rx_ring, q_vector->rx)
+-		ice_cfg_rxq_interrupt(vsi, rx_ring->reg_idx, reg_idx,
+-				      q_vector->rx.itr_idx);
++	for (q = 0; q < q_vector->num_ring_rx; q++) {
++		ice_cfg_rxq_interrupt(vsi, _qid, reg_idx, q_vector->rx.itr_idx);
++		_qid++;
++	}
+ 
+ 	ice_flush(hw);
+ }
+@@ -164,6 +166,7 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+ 	struct ice_tx_ring *tx_ring;
+ 	struct ice_rx_ring *rx_ring;
+ 	int timeout = 50;
++	int fail = 0;
+ 	int err;
+ 
+ 	if (q_idx >= vsi->num_rxq || q_idx >= vsi->num_txq)
+@@ -180,15 +183,17 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+ 		usleep_range(1000, 2000);
+ 	}
+ 
++	synchronize_net();
++	netif_carrier_off(vsi->netdev);
++	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
++
+ 	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
+ 	ice_qvec_toggle_napi(vsi, q_vector, false);
+ 
+-	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+-
+ 	ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
+ 	err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
+-	if (err)
+-		return err;
++	if (!fail)
++		fail = err;
+ 	if (ice_is_xdp_ena_vsi(vsi)) {
+ 		struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
+ 
+@@ -196,17 +201,15 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+ 		ice_fill_txq_meta(vsi, xdp_ring, &txq_meta);
+ 		err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, xdp_ring,
+ 					   &txq_meta);
+-		if (err)
+-			return err;
++		if (!fail)
++			fail = err;
+ 	}
+-	err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
+-	if (err)
+-		return err;
+ 
++	ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, false);
+ 	ice_qp_clean_rings(vsi, q_idx);
+ 	ice_qp_reset_stats(vsi, q_idx);
+ 
+-	return 0;
++	return fail;
+ }
+ 
+ /**
+@@ -219,40 +222,48 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
+ {
+ 	struct ice_q_vector *q_vector;
++	int fail = 0;
++	bool link_up;
+ 	int err;
+ 
+ 	err = ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx);
+-	if (err)
+-		return err;
++	if (!fail)
++		fail = err;
+ 
+ 	if (ice_is_xdp_ena_vsi(vsi)) {
+ 		struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
+ 
+ 		err = ice_vsi_cfg_single_txq(vsi, vsi->xdp_rings, q_idx);
+-		if (err)
+-			return err;
++		if (!fail)
++			fail = err;
+ 		ice_set_ring_xdp(xdp_ring);
+ 		ice_tx_xsk_pool(vsi, q_idx);
+ 	}
+ 
+ 	err = ice_vsi_cfg_single_rxq(vsi, q_idx);
+-	if (err)
+-		return err;
++	if (!fail)
++		fail = err;
+ 
+ 	q_vector = vsi->rx_rings[q_idx]->q_vector;
+-	ice_qvec_cfg_msix(vsi, q_vector);
++	ice_qvec_cfg_msix(vsi, q_vector, q_idx);
+ 
+ 	err = ice_vsi_ctrl_one_rx_ring(vsi, true, q_idx, true);
+-	if (err)
+-		return err;
++	if (!fail)
++		fail = err;
+ 
+ 	ice_qvec_toggle_napi(vsi, q_vector, true);
+ 	ice_qvec_ena_irq(vsi, q_vector);
+ 
+-	netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
++	/* make sure NAPI sees updated ice_{t,x}_ring::xsk_pool */
++	synchronize_net();
++	ice_get_link_status(vsi->port_info, &link_up);
++	if (link_up) {
++		netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
++		netif_carrier_on(vsi->netdev);
++	}
+ 	clear_bit(ICE_CFG_BUSY, vsi->state);
+ 
+-	return 0;
++	return fail;
+ }
+ 
+ /**
+@@ -459,6 +470,7 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
+ /**
+  * __ice_alloc_rx_bufs_zc - allocate a number of Rx buffers
+  * @rx_ring: Rx ring
++ * @xsk_pool: XSK buffer pool to pick buffers to be filled by HW
+  * @count: The number of buffers to allocate
+  *
+  * Place the @count of descriptors onto Rx ring. Handle the ring wrap
+@@ -467,7 +479,8 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
+  *
+  * Returns true if all allocations were successful, false if any fail.
+  */
+-static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
++static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring,
++				   struct xsk_buff_pool *xsk_pool, u16 count)
+ {
+ 	u32 nb_buffs_extra = 0, nb_buffs = 0;
+ 	union ice_32b_rx_flex_desc *rx_desc;
+@@ -479,8 +492,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
+ 	xdp = ice_xdp_buf(rx_ring, ntu);
+ 
+ 	if (ntu + count >= rx_ring->count) {
+-		nb_buffs_extra = ice_fill_rx_descs(rx_ring->xsk_pool, xdp,
+-						   rx_desc,
++		nb_buffs_extra = ice_fill_rx_descs(xsk_pool, xdp, rx_desc,
+ 						   rx_ring->count - ntu);
+ 		if (nb_buffs_extra != rx_ring->count - ntu) {
+ 			ntu += nb_buffs_extra;
+@@ -493,7 +505,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
+ 		ice_release_rx_desc(rx_ring, 0);
+ 	}
+ 
+-	nb_buffs = ice_fill_rx_descs(rx_ring->xsk_pool, xdp, rx_desc, count);
++	nb_buffs = ice_fill_rx_descs(xsk_pool, xdp, rx_desc, count);
+ 
+ 	ntu += nb_buffs;
+ 	if (ntu == rx_ring->count)
+@@ -509,6 +521,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
+ /**
+  * ice_alloc_rx_bufs_zc - allocate a number of Rx buffers
+  * @rx_ring: Rx ring
++ * @xsk_pool: XSK buffer pool to pick buffers to be filled by HW
+  * @count: The number of buffers to allocate
+  *
+  * Wrapper for internal allocation routine; figure out how many tail
+@@ -516,7 +529,8 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
+  *
+  * Returns true if all calls to internal alloc routine succeeded
+  */
+-bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
++bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring,
++			  struct xsk_buff_pool *xsk_pool, u16 count)
+ {
+ 	u16 rx_thresh = ICE_RING_QUARTER(rx_ring);
+ 	u16 leftover, i, tail_bumps;
+@@ -525,9 +539,9 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
+ 	leftover = count - (tail_bumps * rx_thresh);
+ 
+ 	for (i = 0; i < tail_bumps; i++)
+-		if (!__ice_alloc_rx_bufs_zc(rx_ring, rx_thresh))
++		if (!__ice_alloc_rx_bufs_zc(rx_ring, xsk_pool, rx_thresh))
+ 			return false;
+-	return __ice_alloc_rx_bufs_zc(rx_ring, leftover);
++	return __ice_alloc_rx_bufs_zc(rx_ring, xsk_pool, leftover);
+ }
+ 
+ /**
+@@ -596,8 +610,10 @@ ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
+ /**
+  * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ
+  * @xdp_ring: XDP Tx ring
++ * @xsk_pool: AF_XDP buffer pool pointer
+  */
+-static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
++static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring,
++				struct xsk_buff_pool *xsk_pool)
+ {
+ 	u16 ntc = xdp_ring->next_to_clean;
+ 	struct ice_tx_desc *tx_desc;
+@@ -648,7 +664,7 @@ static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
+ 	if (xdp_ring->next_to_clean >= cnt)
+ 		xdp_ring->next_to_clean -= cnt;
+ 	if (xsk_frames)
+-		xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames);
++		xsk_tx_completed(xsk_pool, xsk_frames);
+ 
+ 	return completed_frames;
+ }
+@@ -657,6 +673,7 @@ static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
+  * ice_xmit_xdp_tx_zc - AF_XDP ZC handler for XDP_TX
+  * @xdp: XDP buffer to xmit
+  * @xdp_ring: XDP ring to produce descriptor onto
++ * @xsk_pool: AF_XDP buffer pool pointer
+  *
+  * note that this function works directly on xdp_buff, no need to convert
+  * it to xdp_frame. xdp_buff pointer is stored to ice_tx_buf so that cleaning
+@@ -666,7 +683,8 @@ static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
+  * was not enough space on XDP ring
+  */
+ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
+-			      struct ice_tx_ring *xdp_ring)
++			      struct ice_tx_ring *xdp_ring,
++			      struct xsk_buff_pool *xsk_pool)
+ {
+ 	struct skb_shared_info *sinfo = NULL;
+ 	u32 size = xdp->data_end - xdp->data;
+@@ -680,7 +698,7 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
+ 
+ 	free_space = ICE_DESC_UNUSED(xdp_ring);
+ 	if (free_space < ICE_RING_QUARTER(xdp_ring))
+-		free_space += ice_clean_xdp_irq_zc(xdp_ring);
++		free_space += ice_clean_xdp_irq_zc(xdp_ring, xsk_pool);
+ 
+ 	if (unlikely(!free_space))
+ 		goto busy;
+@@ -700,7 +718,7 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
+ 		dma_addr_t dma;
+ 
+ 		dma = xsk_buff_xdp_get_dma(xdp);
+-		xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, size);
++		xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, size);
+ 
+ 		tx_buf->xdp = xdp;
+ 		tx_buf->type = ICE_TX_BUF_XSK_TX;
+@@ -742,12 +760,14 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
+  * @xdp: xdp_buff used as input to the XDP program
+  * @xdp_prog: XDP program to run
+  * @xdp_ring: ring to be used for XDP_TX action
++ * @xsk_pool: AF_XDP buffer pool pointer
+  *
+  * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
+  */
+ static int
+ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+-	       struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring)
++	       struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
++	       struct xsk_buff_pool *xsk_pool)
+ {
+ 	int err, result = ICE_XDP_PASS;
+ 	u32 act;
+@@ -758,7 +778,7 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+ 		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
+ 		if (!err)
+ 			return ICE_XDP_REDIR;
+-		if (xsk_uses_need_wakeup(rx_ring->xsk_pool) && err == -ENOBUFS)
++		if (xsk_uses_need_wakeup(xsk_pool) && err == -ENOBUFS)
+ 			result = ICE_XDP_EXIT;
+ 		else
+ 			result = ICE_XDP_CONSUMED;
+@@ -769,7 +789,7 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+ 	case XDP_PASS:
+ 		break;
+ 	case XDP_TX:
+-		result = ice_xmit_xdp_tx_zc(xdp, xdp_ring);
++		result = ice_xmit_xdp_tx_zc(xdp, xdp_ring, xsk_pool);
+ 		if (result == ICE_XDP_CONSUMED)
+ 			goto out_failure;
+ 		break;
+@@ -821,14 +841,16 @@ ice_add_xsk_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *first,
+ /**
+  * ice_clean_rx_irq_zc - consumes packets from the hardware ring
+  * @rx_ring: AF_XDP Rx ring
++ * @xsk_pool: AF_XDP buffer pool pointer
+  * @budget: NAPI budget
+  *
+  * Returns number of processed packets on success, remaining budget on failure.
+  */
+-int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
++int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring,
++			struct xsk_buff_pool *xsk_pool,
++			int budget)
+ {
+ 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+-	struct xsk_buff_pool *xsk_pool = rx_ring->xsk_pool;
+ 	u32 ntc = rx_ring->next_to_clean;
+ 	u32 ntu = rx_ring->next_to_use;
+ 	struct xdp_buff *first = NULL;
+@@ -891,7 +913,8 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
+ 		if (ice_is_non_eop(rx_ring, rx_desc))
+ 			continue;
+ 
+-		xdp_res = ice_run_xdp_zc(rx_ring, first, xdp_prog, xdp_ring);
++		xdp_res = ice_run_xdp_zc(rx_ring, first, xdp_prog, xdp_ring,
++					 xsk_pool);
+ 		if (likely(xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))) {
+ 			xdp_xmit |= xdp_res;
+ 		} else if (xdp_res == ICE_XDP_EXIT) {
+@@ -940,7 +963,8 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
+ 	rx_ring->next_to_clean = ntc;
+ 	entries_to_alloc = ICE_RX_DESC_UNUSED(rx_ring);
+ 	if (entries_to_alloc > ICE_RING_QUARTER(rx_ring))
+-		failure |= !ice_alloc_rx_bufs_zc(rx_ring, entries_to_alloc);
++		failure |= !ice_alloc_rx_bufs_zc(rx_ring, xsk_pool,
++						 entries_to_alloc);
+ 
+ 	ice_finalize_xdp_rx(xdp_ring, xdp_xmit, 0);
+ 	ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes);
+@@ -963,17 +987,19 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
+ /**
+  * ice_xmit_pkt - produce a single HW Tx descriptor out of AF_XDP descriptor
+  * @xdp_ring: XDP ring to produce the HW Tx descriptor on
++ * @xsk_pool: XSK buffer pool to pick buffers to be consumed by HW
+  * @desc: AF_XDP descriptor to pull the DMA address and length from
+  * @total_bytes: bytes accumulator that will be used for stats update
+  */
+-static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, struct xdp_desc *desc,
++static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring,
++			 struct xsk_buff_pool *xsk_pool, struct xdp_desc *desc,
+ 			 unsigned int *total_bytes)
+ {
+ 	struct ice_tx_desc *tx_desc;
+ 	dma_addr_t dma;
+ 
+-	dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, desc->addr);
+-	xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, desc->len);
++	dma = xsk_buff_raw_get_dma(xsk_pool, desc->addr);
++	xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, desc->len);
+ 
+ 	tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_to_use++);
+ 	tx_desc->buf_addr = cpu_to_le64(dma);
+@@ -986,10 +1012,13 @@ static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, struct xdp_desc *desc,
+ /**
+  * ice_xmit_pkt_batch - produce a batch of HW Tx descriptors out of AF_XDP descriptors
+  * @xdp_ring: XDP ring to produce the HW Tx descriptors on
++ * @xsk_pool: XSK buffer pool to pick buffers to be consumed by HW
+  * @descs: AF_XDP descriptors to pull the DMA addresses and lengths from
+  * @total_bytes: bytes accumulator that will be used for stats update
+  */
+-static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
++static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring,
++			       struct xsk_buff_pool *xsk_pool,
++			       struct xdp_desc *descs,
+ 			       unsigned int *total_bytes)
+ {
+ 	u16 ntu = xdp_ring->next_to_use;
+@@ -999,8 +1028,8 @@ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *de
+ 	loop_unrolled_for(i = 0; i < PKTS_PER_BATCH; i++) {
+ 		dma_addr_t dma;
+ 
+-		dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, descs[i].addr);
+-		xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, descs[i].len);
++		dma = xsk_buff_raw_get_dma(xsk_pool, descs[i].addr);
++		xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, descs[i].len);
+ 
+ 		tx_desc = ICE_TX_DESC(xdp_ring, ntu++);
+ 		tx_desc->buf_addr = cpu_to_le64(dma);
+@@ -1016,60 +1045,69 @@ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *de
+ /**
+  * ice_fill_tx_hw_ring - produce the number of Tx descriptors onto ring
+  * @xdp_ring: XDP ring to produce the HW Tx descriptors on
++ * @xsk_pool: XSK buffer pool to pick buffers to be consumed by HW
+  * @descs: AF_XDP descriptors to pull the DMA addresses and lengths from
+  * @nb_pkts: count of packets to be send
+  * @total_bytes: bytes accumulator that will be used for stats update
+  */
+-static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
+-				u32 nb_pkts, unsigned int *total_bytes)
++static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring,
++				struct xsk_buff_pool *xsk_pool,
++				struct xdp_desc *descs, u32 nb_pkts,
++				unsigned int *total_bytes)
+ {
+ 	u32 batched, leftover, i;
+ 
+ 	batched = ALIGN_DOWN(nb_pkts, PKTS_PER_BATCH);
+ 	leftover = nb_pkts & (PKTS_PER_BATCH - 1);
+ 	for (i = 0; i < batched; i += PKTS_PER_BATCH)
+-		ice_xmit_pkt_batch(xdp_ring, &descs[i], total_bytes);
++		ice_xmit_pkt_batch(xdp_ring, xsk_pool, &descs[i], total_bytes);
+ 	for (; i < batched + leftover; i++)
+-		ice_xmit_pkt(xdp_ring, &descs[i], total_bytes);
++		ice_xmit_pkt(xdp_ring, xsk_pool, &descs[i], total_bytes);
+ }
+ 
+ /**
+  * ice_xmit_zc - take entries from XSK Tx ring and place them onto HW Tx ring
+  * @xdp_ring: XDP ring to produce the HW Tx descriptors on
++ * @xsk_pool: AF_XDP buffer pool pointer
+  *
+  * Returns true if there is no more work that needs to be done, false otherwise
+  */
+-bool ice_xmit_zc(struct ice_tx_ring *xdp_ring)
++bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, struct xsk_buff_pool *xsk_pool)
+ {
+-	struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs;
++	struct xdp_desc *descs = xsk_pool->tx_descs;
+ 	u32 nb_pkts, nb_processed = 0;
+ 	unsigned int total_bytes = 0;
+ 	int budget;
+ 
+-	ice_clean_xdp_irq_zc(xdp_ring);
++	ice_clean_xdp_irq_zc(xdp_ring, xsk_pool);
++
++	if (!netif_carrier_ok(xdp_ring->vsi->netdev) ||
++	    !netif_running(xdp_ring->vsi->netdev))
++		return true;
+ 
+ 	budget = ICE_DESC_UNUSED(xdp_ring);
+ 	budget = min_t(u16, budget, ICE_RING_QUARTER(xdp_ring));
+ 
+-	nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget);
++	nb_pkts = xsk_tx_peek_release_desc_batch(xsk_pool, budget);
+ 	if (!nb_pkts)
+ 		return true;
+ 
+ 	if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) {
+ 		nb_processed = xdp_ring->count - xdp_ring->next_to_use;
+-		ice_fill_tx_hw_ring(xdp_ring, descs, nb_processed, &total_bytes);
++		ice_fill_tx_hw_ring(xdp_ring, xsk_pool, descs, nb_processed,
++				    &total_bytes);
+ 		xdp_ring->next_to_use = 0;
+ 	}
+ 
+-	ice_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed,
+-			    &total_bytes);
++	ice_fill_tx_hw_ring(xdp_ring, xsk_pool, &descs[nb_processed],
++			    nb_pkts - nb_processed, &total_bytes);
+ 
+ 	ice_set_rs_bit(xdp_ring);
+ 	ice_xdp_ring_update_tail(xdp_ring);
+ 	ice_update_tx_ring_stats(xdp_ring, nb_pkts, total_bytes);
+ 
+-	if (xsk_uses_need_wakeup(xdp_ring->xsk_pool))
+-		xsk_set_tx_need_wakeup(xdp_ring->xsk_pool);
++	if (xsk_uses_need_wakeup(xsk_pool))
++		xsk_set_tx_need_wakeup(xsk_pool);
+ 
+ 	return nb_pkts < budget;
+ }
+@@ -1091,7 +1129,7 @@ ice_xsk_wakeup(struct net_device *netdev, u32 queue_id,
+ 	struct ice_vsi *vsi = np->vsi;
+ 	struct ice_tx_ring *ring;
+ 
+-	if (test_bit(ICE_VSI_DOWN, vsi->state))
++	if (test_bit(ICE_VSI_DOWN, vsi->state) || !netif_carrier_ok(netdev))
+ 		return -ENETDOWN;
+ 
+ 	if (!ice_is_xdp_ena_vsi(vsi))
+@@ -1102,7 +1140,7 @@ ice_xsk_wakeup(struct net_device *netdev, u32 queue_id,
+ 
+ 	ring = vsi->rx_rings[queue_id]->xdp_ring;
+ 
+-	if (!ring->xsk_pool)
++	if (!READ_ONCE(ring->xsk_pool))
+ 		return -EINVAL;
+ 
+ 	/* The idea here is that if NAPI is running, mark a miss, so
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.h b/drivers/net/ethernet/intel/ice/ice_xsk.h
+index 6fa181f080ef1..45adeb513253a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.h
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.h
+@@ -20,16 +20,20 @@ struct ice_vsi;
+ #ifdef CONFIG_XDP_SOCKETS
+ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool,
+ 		       u16 qid);
+-int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget);
++int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring,
++			struct xsk_buff_pool *xsk_pool,
++			int budget);
+ int ice_xsk_wakeup(struct net_device *netdev, u32 queue_id, u32 flags);
+-bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count);
++bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring,
++			  struct xsk_buff_pool *xsk_pool, u16 count);
+ bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi);
+ void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring);
+ void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring);
+-bool ice_xmit_zc(struct ice_tx_ring *xdp_ring);
++bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, struct xsk_buff_pool *xsk_pool);
+ int ice_realloc_zc_buf(struct ice_vsi *vsi, bool zc);
+ #else
+-static inline bool ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring)
++static inline bool ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring,
++			       struct xsk_buff_pool __always_unused *xsk_pool)
+ {
+ 	return false;
+ }
+@@ -44,6 +48,7 @@ ice_xsk_pool_setup(struct ice_vsi __always_unused *vsi,
+ 
+ static inline int
+ ice_clean_rx_irq_zc(struct ice_rx_ring __always_unused *rx_ring,
++		    struct xsk_buff_pool __always_unused *xsk_pool,
+ 		    int __always_unused budget)
+ {
+ 	return 0;
+@@ -51,6 +56,7 @@ ice_clean_rx_irq_zc(struct ice_rx_ring __always_unused *rx_ring,
+ 
+ static inline bool
+ ice_alloc_rx_bufs_zc(struct ice_rx_ring __always_unused *rx_ring,
++		     struct xsk_buff_pool __always_unused *xsk_pool,
+ 		     u16 __always_unused count)
+ {
+ 	return false;
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 87b655b839c1c..33069880c86c0 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -6310,21 +6310,6 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
+ 	size_t n;
+ 	int i;
+ 
+-	switch (qopt->cmd) {
+-	case TAPRIO_CMD_REPLACE:
+-		break;
+-	case TAPRIO_CMD_DESTROY:
+-		return igc_tsn_clear_schedule(adapter);
+-	case TAPRIO_CMD_STATS:
+-		igc_taprio_stats(adapter->netdev, &qopt->stats);
+-		return 0;
+-	case TAPRIO_CMD_QUEUE_STATS:
+-		igc_taprio_queue_stats(adapter->netdev, &qopt->queue_stats);
+-		return 0;
+-	default:
+-		return -EOPNOTSUPP;
+-	}
+-
+ 	if (qopt->base_time < 0)
+ 		return -ERANGE;
+ 
+@@ -6433,7 +6418,23 @@ static int igc_tsn_enable_qbv_scheduling(struct igc_adapter *adapter,
+ 	if (hw->mac.type != igc_i225)
+ 		return -EOPNOTSUPP;
+ 
+-	err = igc_save_qbv_schedule(adapter, qopt);
++	switch (qopt->cmd) {
++	case TAPRIO_CMD_REPLACE:
++		err = igc_save_qbv_schedule(adapter, qopt);
++		break;
++	case TAPRIO_CMD_DESTROY:
++		err = igc_tsn_clear_schedule(adapter);
++		break;
++	case TAPRIO_CMD_STATS:
++		igc_taprio_stats(adapter->netdev, &qopt->stats);
++		return 0;
++	case TAPRIO_CMD_QUEUE_STATS:
++		igc_taprio_queue_stats(adapter->netdev, &qopt->queue_stats);
++		return 0;
++	default:
++		return -EOPNOTSUPP;
++	}
++
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 9adf4301c9b1d..a40b631188866 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -953,13 +953,13 @@ static void mvpp2_bm_pool_update_fc(struct mvpp2_port *port,
+ static void mvpp2_bm_pool_update_priv_fc(struct mvpp2 *priv, bool en)
+ {
+ 	struct mvpp2_port *port;
+-	int i;
++	int i, j;
+ 
+ 	for (i = 0; i < priv->port_count; i++) {
+ 		port = priv->port_list[i];
+ 		if (port->priv->percpu_pools) {
+-			for (i = 0; i < port->nrxqs; i++)
+-				mvpp2_bm_pool_update_fc(port, &port->priv->bm_pools[i],
++			for (j = 0; j < port->nrxqs; j++)
++				mvpp2_bm_pool_update_fc(port, &port->priv->bm_pools[j],
+ 							port->tx_fc & en);
+ 		} else {
+ 			mvpp2_bm_pool_update_fc(port, port->pool_long, port->tx_fc & en);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index fadfa8b50bebe..8c4e3ecef5901 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -920,6 +920,7 @@ mlx5_tc_ct_entry_replace_rule(struct mlx5_tc_ct_priv *ct_priv,
+ 	mlx5_tc_ct_entry_destroy_mod_hdr(ct_priv, zone_rule->attr, mh);
+ 	mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
+ err_mod_hdr:
++	*attr = *old_attr;
+ 	kfree(old_attr);
+ err_attr:
+ 	kvfree(spec);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
+index 6e00afe4671b7..797db853de363 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
+@@ -51,9 +51,10 @@ u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev)
+ 		    MLX5_CAP_FLOWTABLE_NIC_RX(mdev, decap))
+ 			caps |= MLX5_IPSEC_CAP_PACKET_OFFLOAD;
+ 
+-		if ((MLX5_CAP_FLOWTABLE_NIC_TX(mdev, ignore_flow_level) &&
+-		     MLX5_CAP_FLOWTABLE_NIC_RX(mdev, ignore_flow_level)) ||
+-		    MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, ignore_flow_level))
++		if (IS_ENABLED(CONFIG_MLX5_CLS_ACT) &&
++		    ((MLX5_CAP_FLOWTABLE_NIC_TX(mdev, ignore_flow_level) &&
++		      MLX5_CAP_FLOWTABLE_NIC_RX(mdev, ignore_flow_level)) ||
++		     MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, ignore_flow_level)))
+ 			caps |= MLX5_IPSEC_CAP_PRIO;
+ 
+ 		if (MLX5_CAP_FLOWTABLE_NIC_TX(mdev,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index 3320f12ba2dbd..58eb96a688533 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1409,7 +1409,12 @@ static int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv,
+ 	if (!an_changes && link_modes == eproto.admin)
+ 		goto out;
+ 
+-	mlx5_port_set_eth_ptys(mdev, an_disable, link_modes, ext);
++	err = mlx5_port_set_eth_ptys(mdev, an_disable, link_modes, ext);
++	if (err) {
++		netdev_err(priv->netdev, "%s: failed to set ptys reg: %d\n", __func__, err);
++		goto out;
++	}
++
+ 	mlx5_toggle_port_link(mdev);
+ 
+ out:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index 979c49ae6b5cc..b43ca0b762c30 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -207,6 +207,7 @@ int mlx5_fw_reset_set_live_patch(struct mlx5_core_dev *dev)
+ static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev, bool unloaded)
+ {
+ 	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
++	struct devlink *devlink = priv_to_devlink(dev);
+ 
+ 	/* if this is the driver that initiated the fw reset, devlink completed the reload */
+ 	if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags)) {
+@@ -218,9 +219,11 @@ static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev, bool unload
+ 			mlx5_core_err(dev, "reset reload flow aborted, PCI reads still not working\n");
+ 		else
+ 			mlx5_load_one(dev, true);
+-		devlink_remote_reload_actions_performed(priv_to_devlink(dev), 0,
++		devl_lock(devlink);
++		devlink_remote_reload_actions_performed(devlink, 0,
+ 							BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
+ 							BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE));
++		devl_unlock(devlink);
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c b/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
+index 612e666ec2635..e2230c8f18152 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
+@@ -48,6 +48,7 @@ static struct mlx5_irq *
+ irq_pool_request_irq(struct mlx5_irq_pool *pool, struct irq_affinity_desc *af_desc)
+ {
+ 	struct irq_affinity_desc auto_desc = {};
++	struct mlx5_irq *irq;
+ 	u32 irq_index;
+ 	int err;
+ 
+@@ -64,9 +65,12 @@ irq_pool_request_irq(struct mlx5_irq_pool *pool, struct irq_affinity_desc *af_de
+ 		else
+ 			cpu_get(pool, cpumask_first(&af_desc->mask));
+ 	}
+-	return mlx5_irq_alloc(pool, irq_index,
+-			      cpumask_empty(&auto_desc.mask) ? af_desc : &auto_desc,
+-			      NULL);
++	irq = mlx5_irq_alloc(pool, irq_index,
++			     cpumask_empty(&auto_desc.mask) ? af_desc : &auto_desc,
++			     NULL);
++	if (IS_ERR(irq))
++		xa_erase(&pool->irqs, irq_index);
++	return irq;
+ }
+ 
+ /* Looking for the IRQ with the smallest refcount that fits req_mask.
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+index d0871c46b8c54..cf8045b926892 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+@@ -1538,7 +1538,7 @@ u8 mlx5_lag_get_slave_port(struct mlx5_core_dev *dev,
+ 		goto unlock;
+ 
+ 	for (i = 0; i < ldev->ports; i++) {
+-		if (ldev->pf[MLX5_LAG_P1].netdev == slave) {
++		if (ldev->pf[i].netdev == slave) {
+ 			port = i;
+ 			break;
+ 		}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 459a836a5d9c1..3e55a6c6a7c9b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -2140,7 +2140,6 @@ static int mlx5_try_fast_unload(struct mlx5_core_dev *dev)
+ 	/* Panic tear down fw command will stop the PCI bus communication
+ 	 * with the HCA, so the health poll is no longer needed.
+ 	 */
+-	mlx5_drain_health_wq(dev);
+ 	mlx5_stop_health_poll(dev, false);
+ 
+ 	ret = mlx5_cmd_fast_teardown_hca(dev);
+@@ -2175,6 +2174,7 @@ static void shutdown(struct pci_dev *pdev)
+ 
+ 	mlx5_core_info(dev, "Shutdown was called\n");
+ 	set_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state);
++	mlx5_drain_health_wq(dev);
+ 	err = mlx5_try_fast_unload(dev);
+ 	if (err)
+ 		mlx5_unload_one(dev, false);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
+index b2986175d9afe..b706f1486504a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
+@@ -112,6 +112,7 @@ static void mlx5_sf_dev_shutdown(struct auxiliary_device *adev)
+ 	struct mlx5_core_dev *mdev = sf_dev->mdev;
+ 
+ 	set_bit(MLX5_BREAK_FW_WAIT, &mdev->intf_state);
++	mlx5_drain_health_wq(mdev);
+ 	mlx5_unload_one(mdev, false);
+ }
+ 
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 7b9e04884575e..b6e89fc5a4ae7 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4347,7 +4347,8 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
+ 	if (unlikely(!rtl_tx_slots_avail(tp))) {
+ 		if (net_ratelimit())
+ 			netdev_err(dev, "BUG! Tx Ring full when queue awake!\n");
+-		goto err_stop_0;
++		netif_stop_queue(dev);
++		return NETDEV_TX_BUSY;
+ 	}
+ 
+ 	opts[1] = rtl8169_tx_vlan_tag(skb);
+@@ -4403,11 +4404,6 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
+ 	dev_kfree_skb_any(skb);
+ 	dev->stats.tx_dropped++;
+ 	return NETDEV_TX_OK;
+-
+-err_stop_0:
+-	netif_stop_queue(dev);
+-	dev->stats.tx_dropped++;
+-	return NETDEV_TX_BUSY;
+ }
+ 
+ static unsigned int rtl_last_frag_len(struct sk_buff *skb)
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index c29809cd92015..fa510f4e26008 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -2219,9 +2219,9 @@ static void axienet_dma_err_handler(struct work_struct *work)
+ 			   ~(XAE_OPTION_TXEN | XAE_OPTION_RXEN));
+ 	axienet_set_mac_address(ndev, NULL);
+ 	axienet_set_multicast_list(ndev);
+-	axienet_setoptions(ndev, lp->options);
+ 	napi_enable(&lp->napi_rx);
+ 	napi_enable(&lp->napi_tx);
++	axienet_setoptions(ndev, lp->options);
+ }
+ 
+ /**
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index ebafedde0ab74..0803b6e83cf74 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -1389,6 +1389,8 @@ static int ksz9131_config_init(struct phy_device *phydev)
+ 	const struct device *dev_walker;
+ 	int ret;
+ 
++	phydev->mdix_ctrl = ETH_TP_MDI_AUTO;
++
+ 	dev_walker = &phydev->mdio.dev;
+ 	do {
+ 		of_node = dev_walker->of_node;
+@@ -1438,28 +1440,30 @@ static int ksz9131_config_init(struct phy_device *phydev)
+ #define MII_KSZ9131_AUTO_MDIX		0x1C
+ #define MII_KSZ9131_AUTO_MDI_SET	BIT(7)
+ #define MII_KSZ9131_AUTO_MDIX_SWAP_OFF	BIT(6)
++#define MII_KSZ9131_DIG_AXAN_STS	0x14
++#define MII_KSZ9131_DIG_AXAN_STS_LINK_DET	BIT(14)
++#define MII_KSZ9131_DIG_AXAN_STS_A_SELECT	BIT(12)
+ 
+ static int ksz9131_mdix_update(struct phy_device *phydev)
+ {
+ 	int ret;
+ 
+-	ret = phy_read(phydev, MII_KSZ9131_AUTO_MDIX);
+-	if (ret < 0)
+-		return ret;
+-
+-	if (ret & MII_KSZ9131_AUTO_MDIX_SWAP_OFF) {
+-		if (ret & MII_KSZ9131_AUTO_MDI_SET)
+-			phydev->mdix_ctrl = ETH_TP_MDI;
+-		else
+-			phydev->mdix_ctrl = ETH_TP_MDI_X;
++	if (phydev->mdix_ctrl != ETH_TP_MDI_AUTO) {
++		phydev->mdix = phydev->mdix_ctrl;
+ 	} else {
+-		phydev->mdix_ctrl = ETH_TP_MDI_AUTO;
+-	}
++		ret = phy_read(phydev, MII_KSZ9131_DIG_AXAN_STS);
++		if (ret < 0)
++			return ret;
+ 
+-	if (ret & MII_KSZ9131_AUTO_MDI_SET)
+-		phydev->mdix = ETH_TP_MDI;
+-	else
+-		phydev->mdix = ETH_TP_MDI_X;
++		if (ret & MII_KSZ9131_DIG_AXAN_STS_LINK_DET) {
++			if (ret & MII_KSZ9131_DIG_AXAN_STS_A_SELECT)
++				phydev->mdix = ETH_TP_MDI;
++			else
++				phydev->mdix = ETH_TP_MDI_X;
++		} else {
++			phydev->mdix = ETH_TP_MDI_INVALID;
++		}
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/phy/realtek.c b/drivers/net/phy/realtek.c
+index 7ab41f95dae5f..ffa07c3f04c26 100644
+--- a/drivers/net/phy/realtek.c
++++ b/drivers/net/phy/realtek.c
+@@ -1351,6 +1351,13 @@ static struct phy_driver realtek_drvs[] = {
+ 		.handle_interrupt = genphy_handle_interrupt_no_ack,
+ 		.suspend	= genphy_suspend,
+ 		.resume		= genphy_resume,
++	}, {
++		PHY_ID_MATCH_EXACT(0x001cc960),
++		.name		= "RTL8366S Gigabit Ethernet",
++		.suspend	= genphy_suspend,
++		.resume		= genphy_resume,
++		.read_mmd	= genphy_read_mmd_unsupported,
++		.write_mmd	= genphy_write_mmd_unsupported,
+ 	},
+ };
+ 
+diff --git a/drivers/net/usb/sr9700.c b/drivers/net/usb/sr9700.c
+index 0a662e42ed965..cb7d2f798fb43 100644
+--- a/drivers/net/usb/sr9700.c
++++ b/drivers/net/usb/sr9700.c
+@@ -179,6 +179,7 @@ static int sr_mdio_read(struct net_device *netdev, int phy_id, int loc)
+ 	struct usbnet *dev = netdev_priv(netdev);
+ 	__le16 res;
+ 	int rc = 0;
++	int err;
+ 
+ 	if (phy_id) {
+ 		netdev_dbg(netdev, "Only internal phy supported\n");
+@@ -189,11 +190,17 @@ static int sr_mdio_read(struct net_device *netdev, int phy_id, int loc)
+ 	if (loc == MII_BMSR) {
+ 		u8 value;
+ 
+-		sr_read_reg(dev, SR_NSR, &value);
++		err = sr_read_reg(dev, SR_NSR, &value);
++		if (err < 0)
++			return err;
++
+ 		if (value & NSR_LINKST)
+ 			rc = 1;
+ 	}
+-	sr_share_read_word(dev, 1, loc, &res);
++	err = sr_share_read_word(dev, 1, loc, &res);
++	if (err < 0)
++		return err;
++
+ 	if (rc == 1)
+ 		res = le16_to_cpu(res) | BMSR_LSTATUS;
+ 	else
+diff --git a/drivers/net/wan/fsl_qmc_hdlc.c b/drivers/net/wan/fsl_qmc_hdlc.c
+index c5e7ca793c433..8fcfbde31a1c6 100644
+--- a/drivers/net/wan/fsl_qmc_hdlc.c
++++ b/drivers/net/wan/fsl_qmc_hdlc.c
+@@ -18,6 +18,7 @@
+ #include <linux/hdlc.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+@@ -37,7 +38,7 @@ struct qmc_hdlc {
+ 	struct qmc_chan *qmc_chan;
+ 	struct net_device *netdev;
+ 	struct framer *framer;
+-	spinlock_t carrier_lock; /* Protect carrier detection */
++	struct mutex carrier_lock; /* Protect carrier detection */
+ 	struct notifier_block nb;
+ 	bool is_crc32;
+ 	spinlock_t tx_lock; /* Protect tx descriptors */
+@@ -60,7 +61,7 @@ static int qmc_hdlc_framer_set_carrier(struct qmc_hdlc *qmc_hdlc)
+ 	if (!qmc_hdlc->framer)
+ 		return 0;
+ 
+-	guard(spinlock_irqsave)(&qmc_hdlc->carrier_lock);
++	guard(mutex)(&qmc_hdlc->carrier_lock);
+ 
+ 	ret = framer_get_status(qmc_hdlc->framer, &framer_status);
+ 	if (ret) {
+@@ -249,6 +250,7 @@ static void qmc_hcld_recv_complete(void *context, size_t length, unsigned int fl
+ 	struct qmc_hdlc_desc *desc = context;
+ 	struct net_device *netdev;
+ 	struct qmc_hdlc *qmc_hdlc;
++	size_t crc_size;
+ 	int ret;
+ 
+ 	netdev = desc->netdev;
+@@ -267,15 +269,26 @@ static void qmc_hcld_recv_complete(void *context, size_t length, unsigned int fl
+ 		if (flags & QMC_RX_FLAG_HDLC_CRC) /* CRC error */
+ 			netdev->stats.rx_crc_errors++;
+ 		kfree_skb(desc->skb);
+-	} else {
+-		netdev->stats.rx_packets++;
+-		netdev->stats.rx_bytes += length;
++		goto re_queue;
++	}
+ 
+-		skb_put(desc->skb, length);
+-		desc->skb->protocol = hdlc_type_trans(desc->skb, netdev);
+-		netif_rx(desc->skb);
++	/* Discard the CRC */
++	crc_size = qmc_hdlc->is_crc32 ? 4 : 2;
++	if (length < crc_size) {
++		netdev->stats.rx_length_errors++;
++		kfree_skb(desc->skb);
++		goto re_queue;
+ 	}
++	length -= crc_size;
++
++	netdev->stats.rx_packets++;
++	netdev->stats.rx_bytes += length;
++
++	skb_put(desc->skb, length);
++	desc->skb->protocol = hdlc_type_trans(desc->skb, netdev);
++	netif_rx(desc->skb);
+ 
++re_queue:
+ 	/* Re-queue a transfer using the same descriptor */
+ 	ret = qmc_hdlc_recv_queue(qmc_hdlc, desc, desc->dma_size);
+ 	if (ret) {
+@@ -706,7 +719,7 @@ static int qmc_hdlc_probe(struct platform_device *pdev)
+ 
+ 	qmc_hdlc->dev = dev;
+ 	spin_lock_init(&qmc_hdlc->tx_lock);
+-	spin_lock_init(&qmc_hdlc->carrier_lock);
++	mutex_init(&qmc_hdlc->carrier_lock);
+ 
+ 	qmc_hdlc->qmc_chan = devm_qmc_chan_get_bychild(dev, dev->of_node);
+ 	if (IS_ERR(qmc_hdlc->qmc_chan))
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index 16af046c33d9e..55fde0d33183c 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -472,7 +472,8 @@ static void __ath12k_pci_ext_irq_disable(struct ath12k_base *ab)
+ {
+ 	int i;
+ 
+-	clear_bit(ATH12K_FLAG_EXT_IRQ_ENABLED, &ab->dev_flags);
++	if (!test_and_clear_bit(ATH12K_FLAG_EXT_IRQ_ENABLED, &ab->dev_flags))
++		return;
+ 
+ 	for (i = 0; i < ATH12K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ 		struct ath12k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index b1d0a1b3917d4..9d3c249207c47 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -485,7 +485,9 @@ int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot,
+ 	struct pci_dev *pdev = ctrl_dev(ctrl);
+ 
+ 	pci_config_pm_runtime_get(pdev);
+-	pcie_write_cmd_nowait(ctrl, FIELD_PREP(PCI_EXP_SLTCTL_AIC, status),
++
++	/* Attention and Power Indicator Control bits are supported */
++	pcie_write_cmd_nowait(ctrl, FIELD_PREP(PCI_EXP_SLTCTL_AIC | PCI_EXP_SLTCTL_PIC, status),
+ 			      PCI_EXP_SLTCTL_AIC | PCI_EXP_SLTCTL_PIC);
+ 	pci_config_pm_runtime_put(pdev);
+ 	return 0;
+diff --git a/drivers/perf/fsl_imx9_ddr_perf.c b/drivers/perf/fsl_imx9_ddr_perf.c
+index 72c2d3074cded..98af97750a6e3 100644
+--- a/drivers/perf/fsl_imx9_ddr_perf.c
++++ b/drivers/perf/fsl_imx9_ddr_perf.c
+@@ -476,12 +476,12 @@ static int ddr_perf_event_add(struct perf_event *event, int flags)
+ 	hwc->idx = counter;
+ 	hwc->state |= PERF_HES_STOPPED;
+ 
+-	if (flags & PERF_EF_START)
+-		ddr_perf_event_start(event, flags);
+-
+ 	/* read trans, write trans, read beat */
+ 	ddr_perf_monitor_config(pmu, cfg, cfg1, cfg2);
+ 
++	if (flags & PERF_EF_START)
++		ddr_perf_event_start(event, flags);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
+index 4e842dcedfbaa..11c7c85047ed4 100644
+--- a/drivers/perf/riscv_pmu_sbi.c
++++ b/drivers/perf/riscv_pmu_sbi.c
+@@ -412,7 +412,7 @@ static int pmu_sbi_ctr_get_idx(struct perf_event *event)
+ 	 * but not in the user access mode as we want to use the other counters
+ 	 * that support sampling/filtering.
+ 	 */
+-	if (hwc->flags & PERF_EVENT_FLAG_LEGACY) {
++	if ((hwc->flags & PERF_EVENT_FLAG_LEGACY) && (event->attr.type == PERF_TYPE_HARDWARE)) {
+ 		if (event->attr.config == PERF_COUNT_HW_CPU_CYCLES) {
+ 			cflags |= SBI_PMU_CFG_FLAG_SKIP_MATCH;
+ 			cmask = 1;
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index 945b1b15a04ca..f8242b8dda908 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -805,9 +805,11 @@ int cros_ec_get_next_event(struct cros_ec_device *ec_dev,
+ 	if (ret == -ENOPROTOOPT) {
+ 		dev_dbg(ec_dev->dev,
+ 			"GET_NEXT_EVENT returned invalid version error.\n");
++		mutex_lock(&ec_dev->lock);
+ 		ret = cros_ec_get_host_command_version_mask(ec_dev,
+ 							EC_CMD_GET_NEXT_EVENT,
+ 							&ver_mask);
++		mutex_unlock(&ec_dev->lock);
+ 		if (ret < 0 || ver_mask == 0)
+ 			/*
+ 			 * Do not change the MKBP supported version if we can't
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 60066822b5329..350cd1201cdbe 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1216,8 +1216,8 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ 	block_group->space_info->total_bytes -= block_group->length;
+ 	block_group->space_info->bytes_readonly -=
+ 		(block_group->length - block_group->zone_unusable);
+-	block_group->space_info->bytes_zone_unusable -=
+-		block_group->zone_unusable;
++	btrfs_space_info_update_bytes_zone_unusable(fs_info, block_group->space_info,
++						    -block_group->zone_unusable);
+ 	block_group->space_info->disk_total -= block_group->length * factor;
+ 
+ 	spin_unlock(&block_group->space_info->lock);
+@@ -1389,7 +1389,8 @@ static int inc_block_group_ro(struct btrfs_block_group *cache, int force)
+ 		if (btrfs_is_zoned(cache->fs_info)) {
+ 			/* Migrate zone_unusable bytes to readonly */
+ 			sinfo->bytes_readonly += cache->zone_unusable;
+-			sinfo->bytes_zone_unusable -= cache->zone_unusable;
++			btrfs_space_info_update_bytes_zone_unusable(cache->fs_info, sinfo,
++								    -cache->zone_unusable);
+ 			cache->zone_unusable = 0;
+ 		}
+ 		cache->ro++;
+@@ -3034,9 +3035,11 @@ void btrfs_dec_block_group_ro(struct btrfs_block_group *cache)
+ 		if (btrfs_is_zoned(cache->fs_info)) {
+ 			/* Migrate zone_unusable bytes back */
+ 			cache->zone_unusable =
+-				(cache->alloc_offset - cache->used) +
++				(cache->alloc_offset - cache->used - cache->pinned -
++				 cache->reserved) +
+ 				(cache->length - cache->zone_capacity);
+-			sinfo->bytes_zone_unusable += cache->zone_unusable;
++			btrfs_space_info_update_bytes_zone_unusable(cache->fs_info, sinfo,
++								    cache->zone_unusable);
+ 			sinfo->bytes_readonly -= cache->zone_unusable;
+ 		}
+ 		num_bytes = cache->length - cache->reserved -
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 3774c191e36dc..b75e14f399a01 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -2806,7 +2806,8 @@ static int unpin_extent_range(struct btrfs_fs_info *fs_info,
+ 			readonly = true;
+ 		} else if (btrfs_is_zoned(fs_info)) {
+ 			/* Need reset before reusing in a zoned block group */
+-			space_info->bytes_zone_unusable += len;
++			btrfs_space_info_update_bytes_zone_unusable(fs_info, space_info,
++								    len);
+ 			readonly = true;
+ 		}
+ 		spin_unlock(&cache->lock);
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index dabc3d0793cf4..d674f2106593a 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -2723,8 +2723,10 @@ static int __btrfs_add_free_space_zoned(struct btrfs_block_group *block_group,
+ 	 * If the block group is read-only, we should account freed space into
+ 	 * bytes_readonly.
+ 	 */
+-	if (!block_group->ro)
++	if (!block_group->ro) {
+ 		block_group->zone_unusable += to_unusable;
++		WARN_ON(block_group->zone_unusable > block_group->length);
++	}
+ 	spin_unlock(&ctl->tree_lock);
+ 	if (!used) {
+ 		spin_lock(&block_group->lock);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 3a2b902b2d1f9..39d22693e47b6 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -737,8 +737,9 @@ static noinline int __cow_file_range_inline(struct btrfs_inode *inode, u64 offse
+ 	return ret;
+ }
+ 
+-static noinline int cow_file_range_inline(struct btrfs_inode *inode, u64 offset,
+-					  u64 end,
++static noinline int cow_file_range_inline(struct btrfs_inode *inode,
++					  struct page *locked_page,
++					  u64 offset, u64 end,
+ 					  size_t compressed_size,
+ 					  int compress_type,
+ 					  struct folio *compressed_folio,
+@@ -762,7 +763,10 @@ static noinline int cow_file_range_inline(struct btrfs_inode *inode, u64 offset,
+ 		return ret;
+ 	}
+ 
+-	extent_clear_unlock_delalloc(inode, offset, end, NULL, &cached,
++	if (ret == 0)
++		locked_page = NULL;
++
++	extent_clear_unlock_delalloc(inode, offset, end, locked_page, &cached,
+ 				     clear_flags,
+ 				     PAGE_UNLOCK | PAGE_START_WRITEBACK |
+ 				     PAGE_END_WRITEBACK);
+@@ -1037,10 +1041,10 @@ static void compress_file_range(struct btrfs_work *work)
+ 	 * extent for the subpage case.
+ 	 */
+ 	if (total_in < actual_end)
+-		ret = cow_file_range_inline(inode, start, end, 0,
++		ret = cow_file_range_inline(inode, NULL, start, end, 0,
+ 					    BTRFS_COMPRESS_NONE, NULL, false);
+ 	else
+-		ret = cow_file_range_inline(inode, start, end, total_compressed,
++		ret = cow_file_range_inline(inode, NULL, start, end, total_compressed,
+ 					    compress_type, folios[0], false);
+ 	if (ret <= 0) {
+ 		if (ret < 0)
+@@ -1359,7 +1363,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
+ 
+ 	if (!no_inline) {
+ 		/* lets try to make an inline extent */
+-		ret = cow_file_range_inline(inode, start, end, 0,
++		ret = cow_file_range_inline(inode, locked_page, start, end, 0,
+ 					    BTRFS_COMPRESS_NONE, NULL, false);
+ 		if (ret <= 0) {
+ 			/*
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index ae8c56442549c..8f194eefd3f38 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -311,7 +311,7 @@ void btrfs_add_bg_to_space_info(struct btrfs_fs_info *info,
+ 	found->bytes_used += block_group->used;
+ 	found->disk_used += block_group->used * factor;
+ 	found->bytes_readonly += block_group->bytes_super;
+-	found->bytes_zone_unusable += block_group->zone_unusable;
++	btrfs_space_info_update_bytes_zone_unusable(info, found, block_group->zone_unusable);
+ 	if (block_group->length > 0)
+ 		found->full = 0;
+ 	btrfs_try_granting_tickets(info, found);
+@@ -573,8 +573,7 @@ void btrfs_dump_space_info(struct btrfs_fs_info *fs_info,
+ 
+ 		spin_lock(&cache->lock);
+ 		avail = cache->length - cache->used - cache->pinned -
+-			cache->reserved - cache->delalloc_bytes -
+-			cache->bytes_super - cache->zone_unusable;
++			cache->reserved - cache->bytes_super - cache->zone_unusable;
+ 		btrfs_info(fs_info,
+ "block group %llu has %llu bytes, %llu used %llu pinned %llu reserved %llu delalloc %llu super %llu zone_unusable (%llu bytes available) %s",
+ 			   cache->start, cache->length, cache->used, cache->pinned,
+diff --git a/fs/btrfs/space-info.h b/fs/btrfs/space-info.h
+index a733458fd13b3..3e304300fc624 100644
+--- a/fs/btrfs/space-info.h
++++ b/fs/btrfs/space-info.h
+@@ -207,6 +207,7 @@ btrfs_space_info_update_##name(struct btrfs_fs_info *fs_info,		\
+ 
+ DECLARE_SPACE_INFO_UPDATE(bytes_may_use, "space_info");
+ DECLARE_SPACE_INFO_UPDATE(bytes_pinned, "pinned");
++DECLARE_SPACE_INFO_UPDATE(bytes_zone_unusable, "zone_unusable");
+ 
+ int btrfs_init_space_info(struct btrfs_fs_info *fs_info);
+ void btrfs_add_bg_to_space_info(struct btrfs_fs_info *info,
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index c4941ba245ac3..68df96800ea99 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -2016,6 +2016,8 @@ bool __ceph_should_report_size(struct ceph_inode_info *ci)
+  *  CHECK_CAPS_AUTHONLY - we should only check the auth cap
+  *  CHECK_CAPS_FLUSH - we should flush any dirty caps immediately, without
+  *    further delay.
++ *  CHECK_CAPS_FLUSH_FORCE - we should flush any caps immediately, without
++ *    further delay.
+  */
+ void ceph_check_caps(struct ceph_inode_info *ci, int flags)
+ {
+@@ -2097,7 +2099,7 @@ void ceph_check_caps(struct ceph_inode_info *ci, int flags)
+ 	}
+ 
+ 	doutc(cl, "%p %llx.%llx file_want %s used %s dirty %s "
+-	      "flushing %s issued %s revoking %s retain %s %s%s%s\n",
++	      "flushing %s issued %s revoking %s retain %s %s%s%s%s\n",
+ 	     inode, ceph_vinop(inode), ceph_cap_string(file_wanted),
+ 	     ceph_cap_string(used), ceph_cap_string(ci->i_dirty_caps),
+ 	     ceph_cap_string(ci->i_flushing_caps),
+@@ -2105,7 +2107,8 @@ void ceph_check_caps(struct ceph_inode_info *ci, int flags)
+ 	     ceph_cap_string(retain),
+ 	     (flags & CHECK_CAPS_AUTHONLY) ? " AUTHONLY" : "",
+ 	     (flags & CHECK_CAPS_FLUSH) ? " FLUSH" : "",
+-	     (flags & CHECK_CAPS_NOINVAL) ? " NOINVAL" : "");
++	     (flags & CHECK_CAPS_NOINVAL) ? " NOINVAL" : "",
++	     (flags & CHECK_CAPS_FLUSH_FORCE) ? " FLUSH_FORCE" : "");
+ 
+ 	/*
+ 	 * If we no longer need to hold onto old our caps, and we may
+@@ -2180,6 +2183,11 @@ void ceph_check_caps(struct ceph_inode_info *ci, int flags)
+ 				queue_writeback = true;
+ 		}
+ 
++		if (flags & CHECK_CAPS_FLUSH_FORCE) {
++			doutc(cl, "force to flush caps\n");
++			goto ack;
++		}
++
+ 		if (cap == ci->i_auth_cap &&
+ 		    (cap->issued & CEPH_CAP_FILE_WR)) {
+ 			/* request larger max_size from MDS? */
+@@ -3504,6 +3512,8 @@ static void handle_cap_grant(struct inode *inode,
+ 	bool queue_invalidate = false;
+ 	bool deleted_inode = false;
+ 	bool fill_inline = false;
++	bool revoke_wait = false;
++	int flags = 0;
+ 
+ 	/*
+ 	 * If there is at least one crypto block then we'll trust
+@@ -3699,16 +3709,18 @@ static void handle_cap_grant(struct inode *inode,
+ 		      ceph_cap_string(cap->issued), ceph_cap_string(newcaps),
+ 		      ceph_cap_string(revoking));
+ 		if (S_ISREG(inode->i_mode) &&
+-		    (revoking & used & CEPH_CAP_FILE_BUFFER))
++		    (revoking & used & CEPH_CAP_FILE_BUFFER)) {
+ 			writeback = true;  /* initiate writeback; will delay ack */
+-		else if (queue_invalidate &&
++			revoke_wait = true;
++		} else if (queue_invalidate &&
+ 			 revoking == CEPH_CAP_FILE_CACHE &&
+-			 (newcaps & CEPH_CAP_FILE_LAZYIO) == 0)
+-			; /* do nothing yet, invalidation will be queued */
+-		else if (cap == ci->i_auth_cap)
++			 (newcaps & CEPH_CAP_FILE_LAZYIO) == 0) {
++			revoke_wait = true; /* do nothing yet, invalidation will be queued */
++		} else if (cap == ci->i_auth_cap) {
+ 			check_caps = 1; /* check auth cap only */
+-		else
++		} else {
+ 			check_caps = 2; /* check all caps */
++		}
+ 		/* If there is new caps, try to wake up the waiters */
+ 		if (~cap->issued & newcaps)
+ 			wake = true;
+@@ -3735,8 +3747,9 @@ static void handle_cap_grant(struct inode *inode,
+ 	BUG_ON(cap->issued & ~cap->implemented);
+ 
+ 	/* don't let check_caps skip sending a response to MDS for revoke msgs */
+-	if (le32_to_cpu(grant->op) == CEPH_CAP_OP_REVOKE) {
++	if (!revoke_wait && le32_to_cpu(grant->op) == CEPH_CAP_OP_REVOKE) {
+ 		cap->mds_wanted = 0;
++		flags |= CHECK_CAPS_FLUSH_FORCE;
+ 		if (cap == ci->i_auth_cap)
+ 			check_caps = 1; /* check auth cap only */
+ 		else
+@@ -3792,9 +3805,9 @@ static void handle_cap_grant(struct inode *inode,
+ 
+ 	mutex_unlock(&session->s_mutex);
+ 	if (check_caps == 1)
+-		ceph_check_caps(ci, CHECK_CAPS_AUTHONLY | CHECK_CAPS_NOINVAL);
++		ceph_check_caps(ci, flags | CHECK_CAPS_AUTHONLY | CHECK_CAPS_NOINVAL);
+ 	else if (check_caps == 2)
+-		ceph_check_caps(ci, CHECK_CAPS_NOINVAL);
++		ceph_check_caps(ci, flags | CHECK_CAPS_NOINVAL);
+ }
+ 
+ /*
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index b63b4cd9b5b68..6e817bf1337c6 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -200,9 +200,10 @@ struct ceph_cap {
+ 	struct list_head caps_item;
+ };
+ 
+-#define CHECK_CAPS_AUTHONLY   1  /* only check auth cap */
+-#define CHECK_CAPS_FLUSH      2  /* flush any dirty caps */
+-#define CHECK_CAPS_NOINVAL    4  /* don't invalidate pagecache */
++#define CHECK_CAPS_AUTHONLY     1  /* only check auth cap */
++#define CHECK_CAPS_FLUSH        2  /* flush any dirty caps */
++#define CHECK_CAPS_NOINVAL      4  /* don't invalidate pagecache */
++#define CHECK_CAPS_FLUSH_FORCE  8  /* force flush any caps */
+ 
+ struct ceph_cap_flush {
+ 	u64 tid;
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 4bae9ccf5fe01..4b0d64a76e88e 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -453,6 +453,35 @@ static void ext4_map_blocks_es_recheck(handle_t *handle,
+ }
+ #endif /* ES_AGGRESSIVE_TEST */
+ 
++static int ext4_map_query_blocks(handle_t *handle, struct inode *inode,
++				 struct ext4_map_blocks *map)
++{
++	unsigned int status;
++	int retval;
++
++	if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
++		retval = ext4_ext_map_blocks(handle, inode, map, 0);
++	else
++		retval = ext4_ind_map_blocks(handle, inode, map, 0);
++
++	if (retval <= 0)
++		return retval;
++
++	if (unlikely(retval != map->m_len)) {
++		ext4_warning(inode->i_sb,
++			     "ES len assertion failed for inode "
++			     "%lu: retval %d != map->m_len %d",
++			     inode->i_ino, retval, map->m_len);
++		WARN_ON(1);
++	}
++
++	status = map->m_flags & EXT4_MAP_UNWRITTEN ?
++			EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
++	ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
++			      map->m_pblk, status);
++	return retval;
++}
++
+ /*
+  * The ext4_map_blocks() function tries to look up the requested blocks,
+  * and returns if the blocks are already mapped.
+@@ -1708,6 +1737,7 @@ static int ext4_da_map_blocks(struct inode *inode, sector_t iblock,
+ 		if (ext4_es_is_hole(&es))
+ 			goto add_delayed;
+ 
++found:
+ 		/*
+ 		 * Delayed extent could be allocated by fallocate.
+ 		 * So we need to check it.
+@@ -1744,36 +1774,34 @@ static int ext4_da_map_blocks(struct inode *inode, sector_t iblock,
+ 	down_read(&EXT4_I(inode)->i_data_sem);
+ 	if (ext4_has_inline_data(inode))
+ 		retval = 0;
+-	else if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+-		retval = ext4_ext_map_blocks(NULL, inode, map, 0);
+ 	else
+-		retval = ext4_ind_map_blocks(NULL, inode, map, 0);
+-	if (retval < 0) {
+-		up_read(&EXT4_I(inode)->i_data_sem);
++		retval = ext4_map_query_blocks(NULL, inode, map);
++	up_read(&EXT4_I(inode)->i_data_sem);
++	if (retval)
+ 		return retval;
+-	}
+-	if (retval > 0) {
+-		unsigned int status;
+ 
+-		if (unlikely(retval != map->m_len)) {
+-			ext4_warning(inode->i_sb,
+-				     "ES len assertion failed for inode "
+-				     "%lu: retval %d != map->m_len %d",
+-				     inode->i_ino, retval, map->m_len);
+-			WARN_ON(1);
++add_delayed:
++	down_write(&EXT4_I(inode)->i_data_sem);
++	/*
++	 * Page fault path (ext4_page_mkwrite does not take i_rwsem)
++	 * and fallocate path (no folio lock) can race. Make sure we
++	 * lookup the extent status tree here again while i_data_sem
++	 * is held in write mode, before inserting a new da entry in
++	 * the extent status tree.
++	 */
++	if (ext4_es_lookup_extent(inode, iblock, NULL, &es)) {
++		if (!ext4_es_is_hole(&es)) {
++			up_write(&EXT4_I(inode)->i_data_sem);
++			goto found;
++		}
++	} else if (!ext4_has_inline_data(inode)) {
++		retval = ext4_map_query_blocks(NULL, inode, map);
++		if (retval) {
++			up_write(&EXT4_I(inode)->i_data_sem);
++			return retval;
+ 		}
+-
+-		status = map->m_flags & EXT4_MAP_UNWRITTEN ?
+-				EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
+-		ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
+-				      map->m_pblk, status);
+-		up_read(&EXT4_I(inode)->i_data_sem);
+-		return retval;
+ 	}
+-	up_read(&EXT4_I(inode)->i_data_sem);
+ 
+-add_delayed:
+-	down_write(&EXT4_I(inode)->i_data_sem);
+ 	retval = ext4_insert_delayed_block(inode, map->m_lblk);
+ 	up_write(&EXT4_I(inode)->i_data_sem);
+ 	if (retval)
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 259e235becc59..601825785226d 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -3483,7 +3483,9 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
+ 		if (page_private_gcing(fio->page)) {
+ 			if (fio->sbi->am.atgc_enabled &&
+ 				(fio->io_type == FS_DATA_IO) &&
+-				(fio->sbi->gc_mode != GC_URGENT_HIGH))
++				(fio->sbi->gc_mode != GC_URGENT_HIGH) &&
++				__is_valid_data_blkaddr(fio->old_blkaddr) &&
++				!is_inode_flag_set(inode, FI_OPU_WRITE))
+ 				return CURSEG_ALL_DATA_ATGC;
+ 			else
+ 				return CURSEG_COLD_DATA;
+diff --git a/fs/file.c b/fs/file.c
+index a3b72aa64f116..a11e59b5d6026 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -1248,6 +1248,7 @@ __releases(&files->file_lock)
+ 	 * tables and this condition does not arise without those.
+ 	 */
+ 	fdt = files_fdtable(files);
++	fd = array_index_nospec(fd, fdt->max_fds);
+ 	tofree = fdt->fd[fd];
+ 	if (!tofree && fd_is_open(fd, fdt))
+ 		goto Ebusy;
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index 7a5785f405b62..0a8fd4a3d04c9 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -147,6 +147,7 @@ enum cpuhp_state {
+ 	CPUHP_AP_IRQ_LOONGARCH_STARTING,
+ 	CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
+ 	CPUHP_AP_IRQ_RISCV_IMSIC_STARTING,
++	CPUHP_AP_IRQ_RISCV_SBI_IPI_STARTING,
+ 	CPUHP_AP_ARM_MVEBU_COHERENCY,
+ 	CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING,
+ 	CPUHP_AP_PERF_X86_STARTING,
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index c73ad77fa33d3..d73c7d89d27b9 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -132,18 +132,6 @@ static inline bool hugepage_global_always(void)
+ 			(1<<TRANSPARENT_HUGEPAGE_FLAG);
+ }
+ 
+-static inline bool hugepage_flags_enabled(void)
+-{
+-	/*
+-	 * We cover both the anon and the file-backed case here; we must return
+-	 * true if globally enabled, even when all anon sizes are set to never.
+-	 * So we don't need to look at huge_anon_orders_inherit.
+-	 */
+-	return hugepage_global_enabled() ||
+-	       huge_anon_orders_always ||
+-	       huge_anon_orders_madvise;
+-}
+-
+ static inline int highest_order(unsigned long orders)
+ {
+ 	return fls_long(orders) - 1;
+diff --git a/include/linux/migrate.h b/include/linux/migrate.h
+index 2ce13e8a309bd..9438cc7c2aeb5 100644
+--- a/include/linux/migrate.h
++++ b/include/linux/migrate.h
+@@ -142,9 +142,16 @@ const struct movable_operations *page_movable_ops(struct page *page)
+ }
+ 
+ #ifdef CONFIG_NUMA_BALANCING
++int migrate_misplaced_folio_prepare(struct folio *folio,
++		struct vm_area_struct *vma, int node);
+ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
+ 			   int node);
+ #else
++static inline int migrate_misplaced_folio_prepare(struct folio *folio,
++		struct vm_area_struct *vma, int node)
++{
++	return -EAGAIN; /* can't migrate now */
++}
+ static inline int migrate_misplaced_folio(struct folio *folio,
+ 					 struct vm_area_struct *vma, int node)
+ {
+diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
+index c978fa2893a53..9b38d015c53c1 100644
+--- a/include/trace/events/btrfs.h
++++ b/include/trace/events/btrfs.h
+@@ -2394,6 +2394,14 @@ DEFINE_EVENT(btrfs__space_info_update, update_bytes_pinned,
+ 	TP_ARGS(fs_info, sinfo, old, diff)
+ );
+ 
++DEFINE_EVENT(btrfs__space_info_update, update_bytes_zone_unusable,
++
++	TP_PROTO(const struct btrfs_fs_info *fs_info,
++		 const struct btrfs_space_info *sinfo, u64 old, s64 diff),
++
++	TP_ARGS(fs_info, sinfo, old, diff)
++);
++
+ DECLARE_EVENT_CLASS(btrfs_raid56_bio,
+ 
+ 	TP_PROTO(const struct btrfs_raid_bio *rbio,
+diff --git a/include/trace/events/mptcp.h b/include/trace/events/mptcp.h
+index 09e72215b9f9b..085b749cdd97e 100644
+--- a/include/trace/events/mptcp.h
++++ b/include/trace/events/mptcp.h
+@@ -34,7 +34,7 @@ TRACE_EVENT(mptcp_subflow_get_send,
+ 		struct sock *ssk;
+ 
+ 		__entry->active = mptcp_subflow_active(subflow);
+-		__entry->backup = subflow->backup;
++		__entry->backup = subflow->backup || subflow->request_bkup;
+ 
+ 		if (subflow->tcp_sock && sk_fullsock(subflow->tcp_sock))
+ 			__entry->free = sk_stream_memory_free(subflow->tcp_sock);
+diff --git a/init/Kconfig b/init/Kconfig
+index febdea2afc3be..d8a971b804d32 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -1906,6 +1906,7 @@ config RUST
+ 	depends on !MODVERSIONS
+ 	depends on !GCC_PLUGINS
+ 	depends on !RANDSTRUCT
++	depends on !SHADOW_CALL_STACK
+ 	depends on !DEBUG_INFO_BTF || PAHOLE_HAS_LANG_EXCLUDE
+ 	help
+ 	  Enables Rust support in the kernel.
+diff --git a/io_uring/poll.c b/io_uring/poll.c
+index 0a8e02944689f..1f63b60e85e7c 100644
+--- a/io_uring/poll.c
++++ b/io_uring/poll.c
+@@ -347,6 +347,7 @@ static int io_poll_check_events(struct io_kiocb *req, struct io_tw_state *ts)
+ 		v &= IO_POLL_REF_MASK;
+ 	} while (atomic_sub_return(v, &req->poll_refs) & IO_POLL_REF_MASK);
+ 
++	io_napi_add(req);
+ 	return IOU_POLL_NO_ACTION;
+ }
+ 
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 374a0d54b08df..5f32a196a612e 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -517,6 +517,13 @@ static ssize_t thpsize_enabled_store(struct kobject *kobj,
+ 	} else
+ 		ret = -EINVAL;
+ 
++	if (ret > 0) {
++		int err;
++
++		err = start_stop_khugepaged();
++		if (err)
++			ret = err;
++	}
+ 	return ret;
+ }
+ 
+@@ -1659,7 +1666,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
+ 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+ 	int nid = NUMA_NO_NODE;
+ 	int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
+-	bool migrated = false, writable = false;
++	bool writable = false;
+ 	int flags = 0;
+ 
+ 	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
+@@ -1695,16 +1702,17 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
+ 	if (node_is_toptier(nid))
+ 		last_cpupid = folio_last_cpupid(folio);
+ 	target_nid = numa_migrate_prep(folio, vmf, haddr, nid, &flags);
+-	if (target_nid == NUMA_NO_NODE) {
+-		folio_put(folio);
++	if (target_nid == NUMA_NO_NODE)
++		goto out_map;
++	if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
++		flags |= TNF_MIGRATE_FAIL;
+ 		goto out_map;
+ 	}
+-
++	/* The folio is isolated and isolation code holds a folio reference. */
+ 	spin_unlock(vmf->ptl);
+ 	writable = false;
+ 
+-	migrated = migrate_misplaced_folio(folio, vma, target_nid);
+-	if (migrated) {
++	if (!migrate_misplaced_folio(folio, vma, target_nid)) {
+ 		flags |= TNF_MIGRATED;
+ 		nid = target_nid;
+ 	} else {
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 774a97e6e2da3..92ecd59fffd41 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -416,6 +416,26 @@ static inline int hpage_collapse_test_exit_or_disable(struct mm_struct *mm)
+ 	       test_bit(MMF_DISABLE_THP, &mm->flags);
+ }
+ 
++static bool hugepage_pmd_enabled(void)
++{
++	/*
++	 * We cover both the anon and the file-backed case here; file-backed
++	 * hugepages, when configured in, are determined by the global control.
++	 * Anon pmd-sized hugepages are determined by the pmd-size control.
++	 */
++	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
++	    hugepage_global_enabled())
++		return true;
++	if (test_bit(PMD_ORDER, &huge_anon_orders_always))
++		return true;
++	if (test_bit(PMD_ORDER, &huge_anon_orders_madvise))
++		return true;
++	if (test_bit(PMD_ORDER, &huge_anon_orders_inherit) &&
++	    hugepage_global_enabled())
++		return true;
++	return false;
++}
++
+ void __khugepaged_enter(struct mm_struct *mm)
+ {
+ 	struct khugepaged_mm_slot *mm_slot;
+@@ -452,7 +472,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
+ 			  unsigned long vm_flags)
+ {
+ 	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
+-	    hugepage_flags_enabled()) {
++	    hugepage_pmd_enabled()) {
+ 		if (thp_vma_allowable_order(vma, vm_flags, TVA_ENFORCE_SYSFS,
+ 					    PMD_ORDER))
+ 			__khugepaged_enter(vma->vm_mm);
+@@ -2465,8 +2485,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
+ 
+ static int khugepaged_has_work(void)
+ {
+-	return !list_empty(&khugepaged_scan.mm_head) &&
+-		hugepage_flags_enabled();
++	return !list_empty(&khugepaged_scan.mm_head) && hugepage_pmd_enabled();
+ }
+ 
+ static int khugepaged_wait_event(void)
+@@ -2539,7 +2558,7 @@ static void khugepaged_wait_work(void)
+ 		return;
+ 	}
+ 
+-	if (hugepage_flags_enabled())
++	if (hugepage_pmd_enabled())
+ 		wait_event_freezable(khugepaged_wait, khugepaged_wait_event());
+ }
+ 
+@@ -2570,7 +2589,7 @@ static void set_recommended_min_free_kbytes(void)
+ 	int nr_zones = 0;
+ 	unsigned long recommended_min;
+ 
+-	if (!hugepage_flags_enabled()) {
++	if (!hugepage_pmd_enabled()) {
+ 		calculate_min_free_kbytes();
+ 		goto update_wmarks;
+ 	}
+@@ -2620,7 +2639,7 @@ int start_stop_khugepaged(void)
+ 	int err = 0;
+ 
+ 	mutex_lock(&khugepaged_mutex);
+-	if (hugepage_flags_enabled()) {
++	if (hugepage_pmd_enabled()) {
+ 		if (!khugepaged_thread)
+ 			khugepaged_thread = kthread_run(khugepaged, NULL,
+ 							"khugepaged");
+@@ -2646,7 +2665,7 @@ int start_stop_khugepaged(void)
+ void khugepaged_min_free_kbytes_update(void)
+ {
+ 	mutex_lock(&khugepaged_mutex);
+-	if (hugepage_flags_enabled() && khugepaged_thread)
++	if (hugepage_pmd_enabled() && khugepaged_thread)
+ 		set_recommended_min_free_kbytes();
+ 	mutex_unlock(&khugepaged_mutex);
+ }
+diff --git a/mm/memory.c b/mm/memory.c
+index f81760c93801f..755ffe082e217 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -5067,8 +5067,6 @@ int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+ 
+-	folio_get(folio);
+-
+ 	/* Record the current PID acceesing VMA */
+ 	vma_set_access_pid_bit(vma);
+ 
+@@ -5205,16 +5203,19 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
+ 	else
+ 		last_cpupid = folio_last_cpupid(folio);
+ 	target_nid = numa_migrate_prep(folio, vmf, vmf->address, nid, &flags);
+-	if (target_nid == NUMA_NO_NODE) {
+-		folio_put(folio);
++	if (target_nid == NUMA_NO_NODE)
++		goto out_map;
++	if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
++		flags |= TNF_MIGRATE_FAIL;
+ 		goto out_map;
+ 	}
++	/* The folio is isolated and isolation code holds a folio reference. */
+ 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+ 	writable = false;
+ 	ignore_writable = true;
+ 
+ 	/* Migrate to the requested node */
+-	if (migrate_misplaced_folio(folio, vma, target_nid)) {
++	if (!migrate_misplaced_folio(folio, vma, target_nid)) {
+ 		nid = target_nid;
+ 		flags |= TNF_MIGRATED;
+ 	} else {
+diff --git a/mm/migrate.c b/mm/migrate.c
+index a8c6f466e33ac..9dabeb90f772d 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -2557,16 +2557,44 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
+ 	return __folio_alloc_node(gfp, order, nid);
+ }
+ 
+-static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
++/*
++ * Prepare for calling migrate_misplaced_folio() by isolating the folio if
++ * permitted. Must be called with the PTL still held.
++ */
++int migrate_misplaced_folio_prepare(struct folio *folio,
++		struct vm_area_struct *vma, int node)
+ {
+ 	int nr_pages = folio_nr_pages(folio);
++	pg_data_t *pgdat = NODE_DATA(node);
++
++	if (folio_is_file_lru(folio)) {
++		/*
++		 * Do not migrate file folios that are mapped in multiple
++		 * processes with execute permissions as they are probably
++		 * shared libraries.
++		 *
++		 * See folio_likely_mapped_shared() on possible imprecision
++		 * when we cannot easily detect if a folio is shared.
++		 */
++		if ((vma->vm_flags & VM_EXEC) &&
++		    folio_likely_mapped_shared(folio))
++			return -EACCES;
++
++		/*
++		 * Do not migrate dirty folios as not all filesystems can move
++		 * dirty folios in MIGRATE_ASYNC mode which is a waste of
++		 * cycles.
++		 */
++		if (folio_test_dirty(folio))
++			return -EAGAIN;
++	}
+ 
+ 	/* Avoid migrating to a node that is nearly full */
+ 	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
+ 		int z;
+ 
+ 		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
+-			return 0;
++			return -EAGAIN;
+ 		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+ 			if (managed_zone(pgdat->node_zones + z))
+ 				break;
+@@ -2577,78 +2605,42 @@ static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
+ 		 * further.
+ 		 */
+ 		if (z < 0)
+-			return 0;
++			return -EAGAIN;
+ 
+ 		wakeup_kswapd(pgdat->node_zones + z, 0,
+ 			      folio_order(folio), ZONE_MOVABLE);
+-		return 0;
++		return -EAGAIN;
+ 	}
+ 
+ 	if (!folio_isolate_lru(folio))
+-		return 0;
++		return -EAGAIN;
+ 
+ 	node_stat_mod_folio(folio, NR_ISOLATED_ANON + folio_is_file_lru(folio),
+ 			    nr_pages);
+-
+-	/*
+-	 * Isolating the folio has taken another reference, so the
+-	 * caller's reference can be safely dropped without the folio
+-	 * disappearing underneath us during migration.
+-	 */
+-	folio_put(folio);
+-	return 1;
++	return 0;
+ }
+ 
+ /*
+  * Attempt to migrate a misplaced folio to the specified destination
+- * node. Caller is expected to have an elevated reference count on
+- * the folio that will be dropped by this function before returning.
++ * node. Caller is expected to have isolated the folio by calling
++ * migrate_misplaced_folio_prepare(), which will result in an
++ * elevated reference count on the folio. This function will un-isolate the
++ * folio, dereferencing the folio before returning.
+  */
+ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
+ 			    int node)
+ {
+ 	pg_data_t *pgdat = NODE_DATA(node);
+-	int isolated;
+ 	int nr_remaining;
+ 	unsigned int nr_succeeded;
+ 	LIST_HEAD(migratepages);
+-	int nr_pages = folio_nr_pages(folio);
+-
+-	/*
+-	 * Don't migrate file folios that are mapped in multiple processes
+-	 * with execute permissions as they are probably shared libraries.
+-	 *
+-	 * See folio_likely_mapped_shared() on possible imprecision when we
+-	 * cannot easily detect if a folio is shared.
+-	 */
+-	if (folio_likely_mapped_shared(folio) && folio_is_file_lru(folio) &&
+-	    (vma->vm_flags & VM_EXEC))
+-		goto out;
+-
+-	/*
+-	 * Also do not migrate dirty folios as not all filesystems can move
+-	 * dirty folios in MIGRATE_ASYNC mode which is a waste of cycles.
+-	 */
+-	if (folio_is_file_lru(folio) && folio_test_dirty(folio))
+-		goto out;
+-
+-	isolated = numamigrate_isolate_folio(pgdat, folio);
+-	if (!isolated)
+-		goto out;
+ 
+ 	list_add(&folio->lru, &migratepages);
+ 	nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_folio,
+ 				     NULL, node, MIGRATE_ASYNC,
+ 				     MR_NUMA_MISPLACED, &nr_succeeded);
+-	if (nr_remaining) {
+-		if (!list_empty(&migratepages)) {
+-			list_del(&folio->lru);
+-			node_stat_mod_folio(folio, NR_ISOLATED_ANON +
+-					folio_is_file_lru(folio), -nr_pages);
+-			folio_putback_lru(folio);
+-		}
+-		isolated = 0;
+-	}
++	if (nr_remaining && !list_empty(&migratepages))
++		putback_movable_pages(&migratepages);
+ 	if (nr_succeeded) {
+ 		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
+ 		if (!node_is_toptier(folio_nid(folio)) && node_is_toptier(node))
+@@ -2656,11 +2648,7 @@ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
+ 					    nr_succeeded);
+ 	}
+ 	BUG_ON(!list_empty(&migratepages));
+-	return isolated;
+-
+-out:
+-	folio_put(folio);
+-	return 0;
++	return nr_remaining ? -EAGAIN : 0;
+ }
+ #endif /* CONFIG_NUMA_BALANCING */
+ #endif /* CONFIG_NUMA */
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 7ae118a6d947b..6ecb110bf46bc 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -120,13 +120,6 @@ void hci_discovery_set_state(struct hci_dev *hdev, int state)
+ 	case DISCOVERY_STARTING:
+ 		break;
+ 	case DISCOVERY_FINDING:
+-		/* If discovery was not started then it was initiated by the
+-		 * MGMT interface so no MGMT event shall be generated either
+-		 */
+-		if (old_state != DISCOVERY_STARTING) {
+-			hdev->discovery.state = old_state;
+-			return;
+-		}
+ 		mgmt_discovering(hdev, 1);
+ 		break;
+ 	case DISCOVERY_RESOLVING:
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 4611a67d7dcc3..a78f6d706cd43 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -1722,9 +1722,10 @@ static void le_set_scan_enable_complete(struct hci_dev *hdev, u8 enable)
+ 	switch (enable) {
+ 	case LE_SCAN_ENABLE:
+ 		hci_dev_set_flag(hdev, HCI_LE_SCAN);
+-		if (hdev->le_scan_type == LE_SCAN_ACTIVE)
++		if (hdev->le_scan_type == LE_SCAN_ACTIVE) {
+ 			clear_pending_adv_report(hdev);
+-		hci_discovery_set_state(hdev, DISCOVERY_FINDING);
++			hci_discovery_set_state(hdev, DISCOVERY_FINDING);
++		}
+ 		break;
+ 
+ 	case LE_SCAN_DISABLE:
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index bb704088559fb..2f26147fdf3c9 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -2929,6 +2929,27 @@ static int hci_passive_scan_sync(struct hci_dev *hdev)
+ 	 */
+ 	filter_policy = hci_update_accept_list_sync(hdev);
+ 
++	/* If suspended and filter_policy set to 0x00 (no acceptlist) then
++	 * passive scanning cannot be started since that would require the host
++	 * to be woken up to process the reports.
++	 */
++	if (hdev->suspended && !filter_policy) {
++		/* If the accept list is empty then there is no need to scan
++		 * while suspended.
++		 */
++		if (list_empty(&hdev->le_accept_list))
++			return 0;
++
++		/* If there are devices in the accept_list, it means some
++		 * devices could not be programmed, which in the non-suspended
++		 * case means filter_policy needs to be set to 0x00 so the host
++		 * filters. Since this is the suspended case, we can ignore
++		 * devices needing the host to filter so that devices in the
++		 * acceptlist are able to wake up the system.
++		 */
++		filter_policy = 0x01;
++	}
++
+ 	/* When the controller is using random resolvable addresses and
+ 	 * with that having LE privacy enabled, then controllers with
+ 	 * Extended Scanner Filter Policies support can now enable support
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 4668d67180407..5e589f0a62bc5 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3288,7 +3288,7 @@ static int rtnl_dellink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	if (ifm->ifi_index > 0)
+ 		dev = __dev_get_by_index(tgt_net, ifm->ifi_index);
+ 	else if (tb[IFLA_IFNAME] || tb[IFLA_ALT_IFNAME])
+-		dev = rtnl_dev_get(net, tb);
++		dev = rtnl_dev_get(tgt_net, tb);
+ 	else if (tb[IFLA_GROUP])
+ 		err = rtnl_group_dellink(tgt_net, nla_get_u32(tb[IFLA_GROUP]));
+ 	else
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 223dcd25d88a2..fcc3dbef8b503 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -1277,11 +1277,11 @@ static noinline_for_stack int ethtool_set_rxfh(struct net_device *dev,
+ 	u32 rss_cfg_offset = offsetof(struct ethtool_rxfh, rss_config[0]);
+ 	const struct ethtool_ops *ops = dev->ethtool_ops;
+ 	u32 dev_indir_size = 0, dev_key_size = 0, i;
++	u32 user_indir_len = 0, indir_bytes = 0;
+ 	struct ethtool_rxfh_param rxfh_dev = {};
+ 	struct netlink_ext_ack *extack = NULL;
+ 	struct ethtool_rxnfc rx_rings;
+ 	struct ethtool_rxfh rxfh;
+-	u32 indir_bytes = 0;
+ 	u8 *rss_config;
+ 	int ret;
+ 
+@@ -1342,6 +1342,7 @@ static noinline_for_stack int ethtool_set_rxfh(struct net_device *dev,
+ 	 */
+ 	if (rxfh.indir_size &&
+ 	    rxfh.indir_size != ETH_RXFH_INDIR_NO_CHANGE) {
++		user_indir_len = indir_bytes;
+ 		rxfh_dev.indir = (u32 *)rss_config;
+ 		rxfh_dev.indir_size = dev_indir_size;
+ 		ret = ethtool_copy_validate_indir(rxfh_dev.indir,
+@@ -1368,7 +1369,7 @@ static noinline_for_stack int ethtool_set_rxfh(struct net_device *dev,
+ 		rxfh_dev.key_size = dev_key_size;
+ 		rxfh_dev.key = rss_config + indir_bytes;
+ 		if (copy_from_user(rxfh_dev.key,
+-				   useraddr + rss_cfg_offset + indir_bytes,
++				   useraddr + rss_cfg_offset + user_indir_len,
+ 				   rxfh.key_size)) {
+ 			ret = -EFAULT;
+ 			goto out;
+diff --git a/net/ethtool/rss.c b/net/ethtool/rss.c
+index 71679137eff21..5c4c4505ab9a4 100644
+--- a/net/ethtool/rss.c
++++ b/net/ethtool/rss.c
+@@ -111,7 +111,8 @@ rss_reply_size(const struct ethnl_req_info *req_base,
+ 	const struct rss_reply_data *data = RSS_REPDATA(reply_base);
+ 	int len;
+ 
+-	len = nla_total_size(sizeof(u32)) +	/* _RSS_HFUNC */
++	len = nla_total_size(sizeof(u32)) +	/* _RSS_CONTEXT */
++	      nla_total_size(sizeof(u32)) +	/* _RSS_HFUNC */
+ 	      nla_total_size(sizeof(u32)) +	/* _RSS_INPUT_XFRM */
+ 	      nla_total_size(sizeof(u32) * data->indir_size) + /* _RSS_INDIR */
+ 	      nla_total_size(data->hkey_size);	/* _RSS_HKEY */
+@@ -124,6 +125,11 @@ rss_fill_reply(struct sk_buff *skb, const struct ethnl_req_info *req_base,
+ 	       const struct ethnl_reply_data *reply_base)
+ {
+ 	const struct rss_reply_data *data = RSS_REPDATA(reply_base);
++	struct rss_req_info *request = RSS_REQINFO(req_base);
++
++	if (request->rss_context &&
++	    nla_put_u32(skb, ETHTOOL_A_RSS_CONTEXT, request->rss_context))
++		return -EMSGSIZE;
+ 
+ 	if ((data->hfunc &&
+ 	     nla_put_u32(skb, ETHTOOL_A_RSS_HFUNC, data->hfunc)) ||
+diff --git a/net/ipv4/netfilter/iptable_nat.c b/net/ipv4/netfilter/iptable_nat.c
+index 4d42d0756fd70..a5db7c67d61be 100644
+--- a/net/ipv4/netfilter/iptable_nat.c
++++ b/net/ipv4/netfilter/iptable_nat.c
+@@ -145,25 +145,27 @@ static struct pernet_operations iptable_nat_net_ops = {
+ 
+ static int __init iptable_nat_init(void)
+ {
+-	int ret = xt_register_template(&nf_nat_ipv4_table,
+-				       iptable_nat_table_init);
++	int ret;
+ 
++	/* net->gen->ptr[iptable_nat_net_id] must be allocated
++	 * before calling iptable_nat_table_init().
++	 */
++	ret = register_pernet_subsys(&iptable_nat_net_ops);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = register_pernet_subsys(&iptable_nat_net_ops);
+-	if (ret < 0) {
+-		xt_unregister_template(&nf_nat_ipv4_table);
+-		return ret;
+-	}
++	ret = xt_register_template(&nf_nat_ipv4_table,
++				   iptable_nat_table_init);
++	if (ret < 0)
++		unregister_pernet_subsys(&iptable_nat_net_ops);
+ 
+ 	return ret;
+ }
+ 
+ static void __exit iptable_nat_exit(void)
+ {
+-	unregister_pernet_subsys(&iptable_nat_net_ops);
+ 	xt_unregister_template(&nf_nat_ipv4_table);
++	unregister_pernet_subsys(&iptable_nat_net_ops);
+ }
+ 
+ module_init(iptable_nat_init);
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 570e87ad9a56e..ecd521108559f 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -754,8 +754,7 @@ void tcp_rcv_space_adjust(struct sock *sk)
+ 	 * <prev RTT . ><current RTT .. ><next RTT .... >
+ 	 */
+ 
+-	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf) &&
+-	    !(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
++	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf)) {
+ 		u64 rcvwin, grow;
+ 		int rcvbuf;
+ 
+@@ -771,12 +770,22 @@ void tcp_rcv_space_adjust(struct sock *sk)
+ 
+ 		rcvbuf = min_t(u64, tcp_space_from_win(sk, rcvwin),
+ 			       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
+-		if (rcvbuf > sk->sk_rcvbuf) {
+-			WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
++		if (!(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
++			if (rcvbuf > sk->sk_rcvbuf) {
++				WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
+ 
+-			/* Make the window clamp follow along.  */
+-			WRITE_ONCE(tp->window_clamp,
+-				   tcp_win_from_space(sk, rcvbuf));
++				/* Make the window clamp follow along.  */
++				WRITE_ONCE(tp->window_clamp,
++					   tcp_win_from_space(sk, rcvbuf));
++			}
++		} else {
++			/* Make the window clamp follow along while being bounded
++			 * by SO_RCVBUF.
++			 */
++			int clamp = tcp_win_from_space(sk, min(rcvbuf, sk->sk_rcvbuf));
++
++			if (clamp > tp->window_clamp)
++				WRITE_ONCE(tp->window_clamp, clamp);
+ 		}
+ 	}
+ 	tp->rcvq_space.space = copied;
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index d914b23256ce6..0282d15725095 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -227,6 +227,7 @@ struct ndisc_options *ndisc_parse_options(const struct net_device *dev,
+ 		return NULL;
+ 	memset(ndopts, 0, sizeof(*ndopts));
+ 	while (opt_len) {
++		bool unknown = false;
+ 		int l;
+ 		if (opt_len < sizeof(struct nd_opt_hdr))
+ 			return NULL;
+@@ -262,22 +263,23 @@ struct ndisc_options *ndisc_parse_options(const struct net_device *dev,
+ 			break;
+ #endif
+ 		default:
+-			if (ndisc_is_useropt(dev, nd_opt)) {
+-				ndopts->nd_useropts_end = nd_opt;
+-				if (!ndopts->nd_useropts)
+-					ndopts->nd_useropts = nd_opt;
+-			} else {
+-				/*
+-				 * Unknown options must be silently ignored,
+-				 * to accommodate future extension to the
+-				 * protocol.
+-				 */
+-				ND_PRINTK(2, notice,
+-					  "%s: ignored unsupported option; type=%d, len=%d\n",
+-					  __func__,
+-					  nd_opt->nd_opt_type,
+-					  nd_opt->nd_opt_len);
+-			}
++			unknown = true;
++		}
++		if (ndisc_is_useropt(dev, nd_opt)) {
++			ndopts->nd_useropts_end = nd_opt;
++			if (!ndopts->nd_useropts)
++				ndopts->nd_useropts = nd_opt;
++		} else if (unknown) {
++			/*
++			 * Unknown options must be silently ignored,
++			 * to accommodate future extension to the
++			 * protocol.
++			 */
++			ND_PRINTK(2, notice,
++				  "%s: ignored unsupported option; type=%d, len=%d\n",
++				  __func__,
++				  nd_opt->nd_opt_type,
++				  nd_opt->nd_opt_len);
+ 		}
+ next_opt:
+ 		opt_len -= l;
+diff --git a/net/ipv6/netfilter/ip6table_nat.c b/net/ipv6/netfilter/ip6table_nat.c
+index 52cf104e34788..e119d4f090cc8 100644
+--- a/net/ipv6/netfilter/ip6table_nat.c
++++ b/net/ipv6/netfilter/ip6table_nat.c
+@@ -147,23 +147,27 @@ static struct pernet_operations ip6table_nat_net_ops = {
+ 
+ static int __init ip6table_nat_init(void)
+ {
+-	int ret = xt_register_template(&nf_nat_ipv6_table,
+-				       ip6table_nat_table_init);
++	int ret;
+ 
++	/* net->gen->ptr[ip6table_nat_net_id] must be allocated
++	 * before calling ip6table_nat_table_init().
++	 */
++	ret = register_pernet_subsys(&ip6table_nat_net_ops);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = register_pernet_subsys(&ip6table_nat_net_ops);
++	ret = xt_register_template(&nf_nat_ipv6_table,
++				   ip6table_nat_table_init);
+ 	if (ret)
+-		xt_unregister_template(&nf_nat_ipv6_table);
++		unregister_pernet_subsys(&ip6table_nat_net_ops);
+ 
+ 	return ret;
+ }
+ 
+ static void __exit ip6table_nat_exit(void)
+ {
+-	unregister_pernet_subsys(&ip6table_nat_net_ops);
+ 	xt_unregister_template(&nf_nat_ipv6_table);
++	unregister_pernet_subsys(&ip6table_nat_net_ops);
+ }
+ 
+ module_init(ip6table_nat_init);
+diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
+index c3b0b610b0aa3..c00323fa9eb66 100644
+--- a/net/iucv/af_iucv.c
++++ b/net/iucv/af_iucv.c
+@@ -335,8 +335,8 @@ static void iucv_sever_path(struct sock *sk, int with_user_data)
+ 	struct iucv_sock *iucv = iucv_sk(sk);
+ 	struct iucv_path *path = iucv->path;
+ 
+-	if (iucv->path) {
+-		iucv->path = NULL;
++	/* Whoever resets the path pointer must sever and free it. */
++	if (xchg(&iucv->path, NULL)) {
+ 		if (with_user_data) {
+ 			low_nmcpy(user_data, iucv->src_name);
+ 			high_nmcpy(user_data, iucv->dst_name);
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 83ad6c9709fe6..87a7b569cc774 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -114,7 +114,7 @@ static int ieee80211_set_mon_options(struct ieee80211_sub_if_data *sdata,
+ 
+ 	/* apply all changes now - no failures allowed */
+ 
+-	if (monitor_sdata)
++	if (monitor_sdata && ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF))
+ 		ieee80211_set_mu_mimo_follow(monitor_sdata, params);
+ 
+ 	if (params->flags) {
+@@ -3038,6 +3038,9 @@ static int ieee80211_set_tx_power(struct wiphy *wiphy,
+ 		sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
+ 
+ 		if (sdata->vif.type == NL80211_IFTYPE_MONITOR) {
++			if (!ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF))
++				return -EOPNOTSUPP;
++
+ 			sdata = wiphy_dereference(local->hw.wiphy,
+ 						  local->monitor_sdata);
+ 			if (!sdata)
+@@ -3100,7 +3103,7 @@ static int ieee80211_set_tx_power(struct wiphy *wiphy,
+ 	if (has_monitor) {
+ 		sdata = wiphy_dereference(local->hw.wiphy,
+ 					  local->monitor_sdata);
+-		if (sdata) {
++		if (sdata && ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) {
+ 			sdata->deflink.user_power_level = local->user_power_level;
+ 			if (txp_type != sdata->vif.bss_conf.txpower_type)
+ 				update_txp_type = true;
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 72a9ba8bc5fd9..edba4a31844fb 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -1768,7 +1768,7 @@ static bool __ieee80211_tx(struct ieee80211_local *local,
+ 			break;
+ 		}
+ 		sdata = rcu_dereference(local->monitor_sdata);
+-		if (sdata) {
++		if (sdata && ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) {
+ 			vif = &sdata->vif;
+ 			info->hw_queue =
+ 				vif->hw_queue[skb_get_queue_mapping(skb)];
+@@ -3957,7 +3957,8 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
+ 			break;
+ 		}
+ 		tx.sdata = rcu_dereference(local->monitor_sdata);
+-		if (tx.sdata) {
++		if (tx.sdata &&
++		    ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) {
+ 			vif = &tx.sdata->vif;
+ 			info->hw_queue =
+ 				vif->hw_queue[skb_get_queue_mapping(skb)];
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 771c05640aa3a..c11dbe82ae1b3 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -776,7 +776,7 @@ static void __iterate_interfaces(struct ieee80211_local *local,
+ 	sdata = rcu_dereference_check(local->monitor_sdata,
+ 				      lockdep_is_held(&local->iflist_mtx) ||
+ 				      lockdep_is_held(&local->hw.wiphy->mtx));
+-	if (sdata &&
++	if (sdata && ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF) &&
+ 	    (iter_flags & IEEE80211_IFACE_ITER_RESUME_ALL || !active_only ||
+ 	     sdata->flags & IEEE80211_SDATA_IN_DRIVER))
+ 		iterator(data, sdata->vif.addr, &sdata->vif);
+diff --git a/net/mptcp/mib.c b/net/mptcp/mib.c
+index c30405e768337..7884217f33eb2 100644
+--- a/net/mptcp/mib.c
++++ b/net/mptcp/mib.c
+@@ -19,7 +19,9 @@ static const struct snmp_mib mptcp_snmp_list[] = {
+ 	SNMP_MIB_ITEM("MPTCPRetrans", MPTCP_MIB_RETRANSSEGS),
+ 	SNMP_MIB_ITEM("MPJoinNoTokenFound", MPTCP_MIB_JOINNOTOKEN),
+ 	SNMP_MIB_ITEM("MPJoinSynRx", MPTCP_MIB_JOINSYNRX),
++	SNMP_MIB_ITEM("MPJoinSynBackupRx", MPTCP_MIB_JOINSYNBACKUPRX),
+ 	SNMP_MIB_ITEM("MPJoinSynAckRx", MPTCP_MIB_JOINSYNACKRX),
++	SNMP_MIB_ITEM("MPJoinSynAckBackupRx", MPTCP_MIB_JOINSYNACKBACKUPRX),
+ 	SNMP_MIB_ITEM("MPJoinSynAckHMacFailure", MPTCP_MIB_JOINSYNACKMAC),
+ 	SNMP_MIB_ITEM("MPJoinAckRx", MPTCP_MIB_JOINACKRX),
+ 	SNMP_MIB_ITEM("MPJoinAckHMacFailure", MPTCP_MIB_JOINACKMAC),
+diff --git a/net/mptcp/mib.h b/net/mptcp/mib.h
+index 2704afd0dfe45..66aa67f49d032 100644
+--- a/net/mptcp/mib.h
++++ b/net/mptcp/mib.h
+@@ -14,7 +14,9 @@ enum linux_mptcp_mib_field {
+ 	MPTCP_MIB_RETRANSSEGS,		/* Segments retransmitted at the MPTCP-level */
+ 	MPTCP_MIB_JOINNOTOKEN,		/* Received MP_JOIN but the token was not found */
+ 	MPTCP_MIB_JOINSYNRX,		/* Received a SYN + MP_JOIN */
++	MPTCP_MIB_JOINSYNBACKUPRX,	/* Received a SYN + MP_JOIN + backup flag */
+ 	MPTCP_MIB_JOINSYNACKRX,		/* Received a SYN/ACK + MP_JOIN */
++	MPTCP_MIB_JOINSYNACKBACKUPRX,	/* Received a SYN/ACK + MP_JOIN + backup flag */
+ 	MPTCP_MIB_JOINSYNACKMAC,	/* HMAC was wrong on SYN/ACK + MP_JOIN */
+ 	MPTCP_MIB_JOINACKRX,		/* Received an ACK + MP_JOIN */
+ 	MPTCP_MIB_JOINACKMAC,		/* HMAC was wrong on ACK + MP_JOIN */
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 8e8dcfbc29938..8a68382a4fe91 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -909,7 +909,7 @@ bool mptcp_synack_options(const struct request_sock *req, unsigned int *size,
+ 		return true;
+ 	} else if (subflow_req->mp_join) {
+ 		opts->suboptions = OPTION_MPTCP_MPJ_SYNACK;
+-		opts->backup = subflow_req->backup;
++		opts->backup = subflow_req->request_bkup;
+ 		opts->join_id = subflow_req->local_id;
+ 		opts->thmac = subflow_req->thmac;
+ 		opts->nonce = subflow_req->local_nonce;
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index 55406720c6071..23bb89c94e90d 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -426,6 +426,18 @@ int mptcp_pm_get_local_id(struct mptcp_sock *msk, struct sock_common *skc)
+ 	return mptcp_pm_nl_get_local_id(msk, &skc_local);
+ }
+ 
++bool mptcp_pm_is_backup(struct mptcp_sock *msk, struct sock_common *skc)
++{
++	struct mptcp_addr_info skc_local;
++
++	mptcp_local_address((struct sock_common *)skc, &skc_local);
++
++	if (mptcp_pm_is_userspace(msk))
++		return mptcp_userspace_pm_is_backup(msk, &skc_local);
++
++	return mptcp_pm_nl_is_backup(msk, &skc_local);
++}
++
+ int mptcp_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk, unsigned int id,
+ 					 u8 *flags, int *ifindex)
+ {
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index ea9e5817b9e9d..37954a0b087d2 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -471,7 +471,6 @@ static void __mptcp_pm_send_ack(struct mptcp_sock *msk, struct mptcp_subflow_con
+ 	slow = lock_sock_fast(ssk);
+ 	if (prio) {
+ 		subflow->send_mp_prio = 1;
+-		subflow->backup = backup;
+ 		subflow->request_bkup = backup;
+ 	}
+ 
+@@ -1102,6 +1101,24 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc
+ 	return ret;
+ }
+ 
++bool mptcp_pm_nl_is_backup(struct mptcp_sock *msk, struct mptcp_addr_info *skc)
++{
++	struct pm_nl_pernet *pernet = pm_nl_get_pernet_from_msk(msk);
++	struct mptcp_pm_addr_entry *entry;
++	bool backup = false;
++
++	rcu_read_lock();
++	list_for_each_entry_rcu(entry, &pernet->local_addr_list, list) {
++		if (mptcp_addresses_equal(&entry->addr, skc, entry->addr.port)) {
++			backup = !!(entry->flags & MPTCP_PM_ADDR_FLAG_BACKUP);
++			break;
++		}
++	}
++	rcu_read_unlock();
++
++	return backup;
++}
++
+ #define MPTCP_PM_CMD_GRP_OFFSET       0
+ #define MPTCP_PM_EV_GRP_OFFSET        1
+ 
+@@ -1401,6 +1418,7 @@ static bool mptcp_pm_remove_anno_addr(struct mptcp_sock *msk,
+ 	ret = remove_anno_list_by_saddr(msk, addr);
+ 	if (ret || force) {
+ 		spin_lock_bh(&msk->pm.lock);
++		msk->pm.add_addr_signaled -= ret;
+ 		mptcp_pm_remove_addr(msk, &list);
+ 		spin_unlock_bh(&msk->pm.lock);
+ 	}
+@@ -1534,16 +1552,25 @@ void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list)
+ {
+ 	struct mptcp_rm_list alist = { .nr = 0 };
+ 	struct mptcp_pm_addr_entry *entry;
++	int anno_nr = 0;
+ 
+ 	list_for_each_entry(entry, rm_list, list) {
+-		if ((remove_anno_list_by_saddr(msk, &entry->addr) ||
+-		     lookup_subflow_by_saddr(&msk->conn_list, &entry->addr)) &&
+-		    alist.nr < MPTCP_RM_IDS_MAX)
+-			alist.ids[alist.nr++] = entry->addr.id;
++		if (alist.nr >= MPTCP_RM_IDS_MAX)
++			break;
++
++		/* only delete if either announced or matching a subflow */
++		if (remove_anno_list_by_saddr(msk, &entry->addr))
++			anno_nr++;
++		else if (!lookup_subflow_by_saddr(&msk->conn_list,
++						  &entry->addr))
++			continue;
++
++		alist.ids[alist.nr++] = entry->addr.id;
+ 	}
+ 
+ 	if (alist.nr) {
+ 		spin_lock_bh(&msk->pm.lock);
++		msk->pm.add_addr_signaled -= anno_nr;
+ 		mptcp_pm_remove_addr(msk, &alist);
+ 		spin_unlock_bh(&msk->pm.lock);
+ 	}
+@@ -1556,17 +1583,18 @@ static void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk,
+ 	struct mptcp_pm_addr_entry *entry;
+ 
+ 	list_for_each_entry(entry, rm_list, list) {
+-		if (lookup_subflow_by_saddr(&msk->conn_list, &entry->addr) &&
+-		    slist.nr < MPTCP_RM_IDS_MAX)
++		if (slist.nr < MPTCP_RM_IDS_MAX &&
++		    lookup_subflow_by_saddr(&msk->conn_list, &entry->addr))
+ 			slist.ids[slist.nr++] = entry->addr.id;
+ 
+-		if (remove_anno_list_by_saddr(msk, &entry->addr) &&
+-		    alist.nr < MPTCP_RM_IDS_MAX)
++		if (alist.nr < MPTCP_RM_IDS_MAX &&
++		    remove_anno_list_by_saddr(msk, &entry->addr))
+ 			alist.ids[alist.nr++] = entry->addr.id;
+ 	}
+ 
+ 	if (alist.nr) {
+ 		spin_lock_bh(&msk->pm.lock);
++		msk->pm.add_addr_signaled -= alist.nr;
+ 		mptcp_pm_remove_addr(msk, &alist);
+ 		spin_unlock_bh(&msk->pm.lock);
+ 	}
+diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
+index f0a4590506c69..8eaa9fbe3e343 100644
+--- a/net/mptcp/pm_userspace.c
++++ b/net/mptcp/pm_userspace.c
+@@ -165,6 +165,24 @@ int mptcp_userspace_pm_get_local_id(struct mptcp_sock *msk,
+ 	return mptcp_userspace_pm_append_new_local_addr(msk, &new_entry, true);
+ }
+ 
++bool mptcp_userspace_pm_is_backup(struct mptcp_sock *msk,
++				  struct mptcp_addr_info *skc)
++{
++	struct mptcp_pm_addr_entry *entry;
++	bool backup = false;
++
++	spin_lock_bh(&msk->pm.lock);
++	list_for_each_entry(entry, &msk->pm.userspace_pm_local_addr_list, list) {
++		if (mptcp_addresses_equal(&entry->addr, skc, false)) {
++			backup = !!(entry->flags & MPTCP_PM_ADDR_FLAG_BACKUP);
++			break;
++		}
++	}
++	spin_unlock_bh(&msk->pm.lock);
++
++	return backup;
++}
++
+ int mptcp_pm_nl_announce_doit(struct sk_buff *skb, struct genl_info *info)
+ {
+ 	struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN];
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index bb7dca8aa2d9c..ff8292d0cf4e6 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -350,8 +350,10 @@ static bool __mptcp_move_skb(struct mptcp_sock *msk, struct sock *ssk,
+ 	skb_orphan(skb);
+ 
+ 	/* try to fetch required memory from subflow */
+-	if (!mptcp_rmem_schedule(sk, ssk, skb->truesize))
++	if (!mptcp_rmem_schedule(sk, ssk, skb->truesize)) {
++		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RCVPRUNED);
+ 		goto drop;
++	}
+ 
+ 	has_rxtstamp = TCP_SKB_CB(skb)->has_rxtstamp;
+ 
+@@ -844,10 +846,8 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
+ 		sk_rbuf = ssk_rbuf;
+ 
+ 	/* over limit? can't append more skbs to msk, Also, no need to wake-up*/
+-	if (__mptcp_rmem(sk) > sk_rbuf) {
+-		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RCVPRUNED);
++	if (__mptcp_rmem(sk) > sk_rbuf)
+ 		return;
+-	}
+ 
+ 	/* Wake-up the reader only for in-sequence data */
+ 	mptcp_data_lock(sk);
+@@ -1422,13 +1422,15 @@ struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk)
+ 	}
+ 
+ 	mptcp_for_each_subflow(msk, subflow) {
++		bool backup = subflow->backup || subflow->request_bkup;
++
+ 		trace_mptcp_subflow_get_send(subflow);
+ 		ssk =  mptcp_subflow_tcp_sock(subflow);
+ 		if (!mptcp_subflow_active(subflow))
+ 			continue;
+ 
+ 		tout = max(tout, mptcp_timeout_from_subflow(subflow));
+-		nr_active += !subflow->backup;
++		nr_active += !backup;
+ 		pace = subflow->avg_pacing_rate;
+ 		if (unlikely(!pace)) {
+ 			/* init pacing rate from socket */
+@@ -1439,9 +1441,9 @@ struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk)
+ 		}
+ 
+ 		linger_time = div_u64((u64)READ_ONCE(ssk->sk_wmem_queued) << 32, pace);
+-		if (linger_time < send_info[subflow->backup].linger_time) {
+-			send_info[subflow->backup].ssk = ssk;
+-			send_info[subflow->backup].linger_time = linger_time;
++		if (linger_time < send_info[backup].linger_time) {
++			send_info[backup].ssk = ssk;
++			send_info[backup].linger_time = linger_time;
+ 		}
+ 	}
+ 	__mptcp_set_timeout(sk, tout);
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 7aa47e2dd52b1..8357046732f71 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -443,6 +443,7 @@ struct mptcp_subflow_request_sock {
+ 	u16	mp_capable : 1,
+ 		mp_join : 1,
+ 		backup : 1,
++		request_bkup : 1,
+ 		csum_reqd : 1,
+ 		allow_join_id0 : 1;
+ 	u8	local_id;
+@@ -1103,6 +1104,9 @@ bool mptcp_pm_rm_addr_signal(struct mptcp_sock *msk, unsigned int remaining,
+ int mptcp_pm_get_local_id(struct mptcp_sock *msk, struct sock_common *skc);
+ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc);
+ int mptcp_userspace_pm_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc);
++bool mptcp_pm_is_backup(struct mptcp_sock *msk, struct sock_common *skc);
++bool mptcp_pm_nl_is_backup(struct mptcp_sock *msk, struct mptcp_addr_info *skc);
++bool mptcp_userspace_pm_is_backup(struct mptcp_sock *msk, struct mptcp_addr_info *skc);
+ int mptcp_pm_dump_addr(struct sk_buff *msg, struct netlink_callback *cb);
+ int mptcp_pm_nl_dump_addr(struct sk_buff *msg,
+ 			  struct netlink_callback *cb);
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 612c38570a642..c330946384ba0 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -100,6 +100,7 @@ static struct mptcp_sock *subflow_token_join_request(struct request_sock *req)
+ 		return NULL;
+ 	}
+ 	subflow_req->local_id = local_id;
++	subflow_req->request_bkup = mptcp_pm_is_backup(msk, (struct sock_common *)req);
+ 
+ 	return msk;
+ }
+@@ -168,6 +169,9 @@ static int subflow_check_req(struct request_sock *req,
+ 			return 0;
+ 	} else if (opt_mp_join) {
+ 		SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINSYNRX);
++
++		if (mp_opt.backup)
++			SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINSYNBACKUPRX);
+ 	}
+ 
+ 	if (opt_mp_capable && listener->request_mptcp) {
+@@ -577,6 +581,9 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 		subflow->mp_join = 1;
+ 		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKRX);
+ 
++		if (subflow->backup)
++			MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKBACKUPRX);
++
+ 		if (subflow_use_different_dport(msk, sk)) {
+ 			pr_debug("synack inet_dport=%d %d",
+ 				 ntohs(inet_sk(sk)->inet_dport),
+@@ -614,6 +621,8 @@ static int subflow_chk_local_id(struct sock *sk)
+ 		return err;
+ 
+ 	subflow_set_local_id(subflow, err);
++	subflow->request_bkup = mptcp_pm_is_backup(msk, (struct sock_common *)sk);
++
+ 	return 0;
+ }
+ 
+@@ -1221,14 +1230,22 @@ static void mptcp_subflow_discard_data(struct sock *ssk, struct sk_buff *skb,
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+ 	bool fin = TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN;
+-	u32 incr;
++	struct tcp_sock *tp = tcp_sk(ssk);
++	u32 offset, incr, avail_len;
+ 
+-	incr = limit >= skb->len ? skb->len + fin : limit;
++	offset = tp->copied_seq - TCP_SKB_CB(skb)->seq;
++	if (WARN_ON_ONCE(offset > skb->len))
++		goto out;
+ 
+-	pr_debug("discarding=%d len=%d seq=%d", incr, skb->len,
+-		 subflow->map_subflow_seq);
++	avail_len = skb->len - offset;
++	incr = limit >= avail_len ? avail_len + fin : limit;
++
++	pr_debug("discarding=%d len=%d offset=%d seq=%d", incr, skb->len,
++		 offset, subflow->map_subflow_seq);
+ 	MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DUPDATA);
+ 	tcp_sk(ssk)->copied_seq += incr;
++
++out:
+ 	if (!before(tcp_sk(ssk)->copied_seq, TCP_SKB_CB(skb)->end_seq))
+ 		sk_eat_skb(ssk, skb);
+ 	if (mptcp_subflow_get_map_offset(subflow) >= subflow->map_data_len)
+@@ -2005,6 +2022,7 @@ static void subflow_ulp_clone(const struct request_sock *req,
+ 		new_ctx->fully_established = 1;
+ 		new_ctx->remote_key_valid = 1;
+ 		new_ctx->backup = subflow_req->backup;
++		new_ctx->request_bkup = subflow_req->request_bkup;
+ 		WRITE_ONCE(new_ctx->remote_id, subflow_req->remote_id);
+ 		new_ctx->token = subflow_req->token;
+ 		new_ctx->thmac = subflow_req->thmac;
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 6fa3cca87d346..9d451d77d54e2 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -44,6 +44,8 @@ static DEFINE_MUTEX(zones_mutex);
+ struct zones_ht_key {
+ 	struct net *net;
+ 	u16 zone;
++	/* Note : pad[] must be the last field. */
++	u8  pad[];
+ };
+ 
+ struct tcf_ct_flow_table {
+@@ -60,7 +62,7 @@ struct tcf_ct_flow_table {
+ static const struct rhashtable_params zones_params = {
+ 	.head_offset = offsetof(struct tcf_ct_flow_table, node),
+ 	.key_offset = offsetof(struct tcf_ct_flow_table, key),
+-	.key_len = sizeof_field(struct tcf_ct_flow_table, key),
++	.key_len = offsetof(struct zones_ht_key, pad),
+ 	.automatic_shrinking = true,
+ };
+ 
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 0222ede0feb60..292b530a6dd31 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -3136,8 +3136,7 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 			       struct ieee80211_mgmt *mgmt, size_t len,
+ 			       gfp_t gfp)
+ {
+-	size_t min_hdr_len = offsetof(struct ieee80211_mgmt,
+-				      u.probe_resp.variable);
++	size_t min_hdr_len;
+ 	struct ieee80211_ext *ext = NULL;
+ 	enum cfg80211_bss_frame_type ftype;
+ 	u16 beacon_interval;
+@@ -3160,10 +3159,16 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 
+ 	if (ieee80211_is_s1g_beacon(mgmt->frame_control)) {
+ 		ext = (void *) mgmt;
+-		min_hdr_len = offsetof(struct ieee80211_ext, u.s1g_beacon);
+ 		if (ieee80211_is_s1g_short_beacon(mgmt->frame_control))
+ 			min_hdr_len = offsetof(struct ieee80211_ext,
+ 					       u.s1g_short_beacon.variable);
++		else
++			min_hdr_len = offsetof(struct ieee80211_ext,
++					       u.s1g_beacon.variable);
++	} else {
++		/* same for beacons */
++		min_hdr_len = offsetof(struct ieee80211_mgmt,
++				       u.probe_resp.variable);
+ 	}
+ 
+ 	if (WARN_ON(len < min_hdr_len))
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index a8ad55f11133b..1cfe673bc52f3 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -1045,6 +1045,7 @@ void cfg80211_connect_done(struct net_device *dev,
+ 			cfg80211_hold_bss(
+ 				bss_from_pub(params->links[link].bss));
+ 		ev->cr.links[link].bss = params->links[link].bss;
++		ev->cr.links[link].status = params->links[link].status;
+ 
+ 		if (params->links[link].addr) {
+ 			ev->cr.links[link].addr = next;
+diff --git a/sound/core/seq/seq_ump_convert.c b/sound/core/seq/seq_ump_convert.c
+index e90b27a135e6f..d9dacfbe4a9ae 100644
+--- a/sound/core/seq/seq_ump_convert.c
++++ b/sound/core/seq/seq_ump_convert.c
+@@ -1192,44 +1192,53 @@ static int cvt_sysex_to_ump(struct snd_seq_client *dest,
+ {
+ 	struct snd_seq_ump_event ev_cvt;
+ 	unsigned char status;
+-	u8 buf[6], *xbuf;
++	u8 buf[8], *xbuf;
+ 	int offset = 0;
+ 	int len, err;
++	bool finished = false;
+ 
+ 	if (!snd_seq_ev_is_variable(event))
+ 		return 0;
+ 
+ 	setup_ump_event(&ev_cvt, event);
+-	for (;;) {
++	while (!finished) {
+ 		len = snd_seq_expand_var_event_at(event, sizeof(buf), buf, offset);
+ 		if (len <= 0)
+ 			break;
+-		if (WARN_ON(len > 6))
++		if (WARN_ON(len > sizeof(buf)))
+ 			break;
+-		offset += len;
++
+ 		xbuf = buf;
++		status = UMP_SYSEX_STATUS_CONTINUE;
++		/* truncate the sysex start-marker */
+ 		if (*xbuf == UMP_MIDI1_MSG_SYSEX_START) {
+ 			status = UMP_SYSEX_STATUS_START;
+-			xbuf++;
+ 			len--;
+-			if (len > 0 && xbuf[len - 1] == UMP_MIDI1_MSG_SYSEX_END) {
++			offset++;
++			xbuf++;
++		}
++
++		/* if the last of this packet or the 1st byte of the next packet
++		 * is the end-marker, finish the transfer with this packet
++		 */
++		if (len > 0 && len < 8 &&
++		    xbuf[len - 1] == UMP_MIDI1_MSG_SYSEX_END) {
++			if (status == UMP_SYSEX_STATUS_START)
+ 				status = UMP_SYSEX_STATUS_SINGLE;
+-				len--;
+-			}
+-		} else {
+-			if (xbuf[len - 1] == UMP_MIDI1_MSG_SYSEX_END) {
++			else
+ 				status = UMP_SYSEX_STATUS_END;
+-				len--;
+-			} else {
+-				status = UMP_SYSEX_STATUS_CONTINUE;
+-			}
++			len--;
++			finished = true;
+ 		}
++
++		len = min(len, 6);
+ 		fill_sysex7_ump(dest_port, ev_cvt.ump, status, xbuf, len);
+ 		err = __snd_seq_deliver_single_event(dest, dest_port,
+ 						     (struct snd_seq_event *)&ev_cvt,
+ 						     atomic, hop);
+ 		if (err < 0)
+ 			return err;
++		offset += len;
+ 	}
+ 	return 0;
+ }
+diff --git a/sound/firewire/amdtp-stream.c b/sound/firewire/amdtp-stream.c
+index 1a163bbcabd79..c827d7d8d8003 100644
+--- a/sound/firewire/amdtp-stream.c
++++ b/sound/firewire/amdtp-stream.c
+@@ -77,6 +77,8 @@
+ // overrun. Actual device can skip more, then this module stops the packet streaming.
+ #define IR_JUMBO_PAYLOAD_MAX_SKIP_CYCLES	5
+ 
++static void pcm_period_work(struct work_struct *work);
++
+ /**
+  * amdtp_stream_init - initialize an AMDTP stream structure
+  * @s: the AMDTP stream to initialize
+@@ -105,6 +107,7 @@ int amdtp_stream_init(struct amdtp_stream *s, struct fw_unit *unit,
+ 	s->flags = flags;
+ 	s->context = ERR_PTR(-1);
+ 	mutex_init(&s->mutex);
++	INIT_WORK(&s->period_work, pcm_period_work);
+ 	s->packet_index = 0;
+ 
+ 	init_waitqueue_head(&s->ready_wait);
+@@ -347,6 +350,7 @@ EXPORT_SYMBOL(amdtp_stream_get_max_payload);
+  */
+ void amdtp_stream_pcm_prepare(struct amdtp_stream *s)
+ {
++	cancel_work_sync(&s->period_work);
+ 	s->pcm_buffer_pointer = 0;
+ 	s->pcm_period_pointer = 0;
+ }
+@@ -611,19 +615,21 @@ static void update_pcm_pointers(struct amdtp_stream *s,
+ 		// The program in user process should periodically check the status of intermediate
+ 		// buffer associated to PCM substream to process PCM frames in the buffer, instead
+ 		// of receiving notification of period elapsed by poll wait.
+-		if (!pcm->runtime->no_period_wakeup) {
+-			if (in_softirq()) {
+-				// In software IRQ context for 1394 OHCI.
+-				snd_pcm_period_elapsed(pcm);
+-			} else {
+-				// In process context of ALSA PCM application under acquired lock of
+-				// PCM substream.
+-				snd_pcm_period_elapsed_under_stream_lock(pcm);
+-			}
+-		}
++		if (!pcm->runtime->no_period_wakeup)
++			queue_work(system_highpri_wq, &s->period_work);
+ 	}
+ }
+ 
++static void pcm_period_work(struct work_struct *work)
++{
++	struct amdtp_stream *s = container_of(work, struct amdtp_stream,
++					      period_work);
++	struct snd_pcm_substream *pcm = READ_ONCE(s->pcm);
++
++	if (pcm)
++		snd_pcm_period_elapsed(pcm);
++}
++
+ static int queue_packet(struct amdtp_stream *s, struct fw_iso_packet *params,
+ 			bool sched_irq)
+ {
+@@ -1849,11 +1855,14 @@ unsigned long amdtp_domain_stream_pcm_pointer(struct amdtp_domain *d,
+ {
+ 	struct amdtp_stream *irq_target = d->irq_target;
+ 
+-	// Process isochronous packets queued till recent isochronous cycle to handle PCM frames.
+ 	if (irq_target && amdtp_stream_running(irq_target)) {
+-		// In software IRQ context, the call causes dead-lock to disable the tasklet
+-		// synchronously.
+-		if (!in_softirq())
++		// Use the workqueue to prevent an AB/BA deadlock on the
++		// substream lock:
++		// fw_iso_context_flush_completions() acquires the
++		// lock via ohci_flush_iso_completions(), while the
++		// amdtp-stream process_rx_packets() path attempts to
++		// acquire the same lock via snd_pcm_period_elapsed().
++		if (current_work() != &s->period_work)
+ 			fw_iso_context_flush_completions(irq_target->context);
+ 	}
+ 
+@@ -1909,6 +1918,7 @@ static void amdtp_stream_stop(struct amdtp_stream *s)
+ 		return;
+ 	}
+ 
++	cancel_work_sync(&s->period_work);
+ 	fw_iso_context_stop(s->context);
+ 	fw_iso_context_destroy(s->context);
+ 	s->context = ERR_PTR(-1);
+diff --git a/sound/firewire/amdtp-stream.h b/sound/firewire/amdtp-stream.h
+index a1ed2e80f91a7..775db3fc4959f 100644
+--- a/sound/firewire/amdtp-stream.h
++++ b/sound/firewire/amdtp-stream.h
+@@ -191,6 +191,7 @@ struct amdtp_stream {
+ 
+ 	/* For a PCM substream processing. */
+ 	struct snd_pcm_substream *pcm;
++	struct work_struct period_work;
+ 	snd_pcm_uframes_t pcm_buffer_pointer;
+ 	unsigned int pcm_period_pointer;
+ 	unsigned int pcm_frame_multiplier;
+diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
+index c2d0109866e62..68c883f202ca5 100644
+--- a/sound/pci/hda/hda_controller.h
++++ b/sound/pci/hda/hda_controller.h
+@@ -28,7 +28,7 @@
+ #else
+ #define AZX_DCAPS_I915_COMPONENT 0		/* NOP */
+ #endif
+-/* 14 unused */
++#define AZX_DCAPS_AMD_ALLOC_FIX	(1 << 14)	/* AMD allocation workaround */
+ #define AZX_DCAPS_CTX_WORKAROUND (1 << 15)	/* X-Fi workaround */
+ #define AZX_DCAPS_POSFIX_LPIB	(1 << 16)	/* Use LPIB as default */
+ #define AZX_DCAPS_AMD_WORKAROUND (1 << 17)	/* AMD-specific workaround */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 3500108f6ba37..87203b819dd47 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -40,6 +40,7 @@
+ 
+ #ifdef CONFIG_X86
+ /* for snoop control */
++#include <linux/dma-map-ops.h>
+ #include <asm/set_memory.h>
+ #include <asm/cpufeature.h>
+ #endif
+@@ -306,7 +307,7 @@ enum {
+ 
+ /* quirks for ATI HDMI with snoop off */
+ #define AZX_DCAPS_PRESET_ATI_HDMI_NS \
+-	(AZX_DCAPS_PRESET_ATI_HDMI | AZX_DCAPS_SNOOP_OFF)
++	(AZX_DCAPS_PRESET_ATI_HDMI | AZX_DCAPS_AMD_ALLOC_FIX)
+ 
+ /* quirks for AMD SB */
+ #define AZX_DCAPS_PRESET_AMD_SB \
+@@ -1702,6 +1703,13 @@ static void azx_check_snoop_available(struct azx *chip)
+ 	if (chip->driver_caps & AZX_DCAPS_SNOOP_OFF)
+ 		snoop = false;
+ 
++#ifdef CONFIG_X86
++	/* check the presence of DMA ops (i.e. IOMMU), disable snoop conditionally */
++	if ((chip->driver_caps & AZX_DCAPS_AMD_ALLOC_FIX) &&
++	    !get_dma_ops(chip->card->dev))
++		snoop = false;
++#endif
++
+ 	chip->snoop = snoop;
+ 	if (!snoop) {
+ 		dev_info(chip->card->dev, "Force to non-snoop mode\n");
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 17389a3801bd1..4472923ba694b 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -21,12 +21,6 @@
+ #include "hda_jack.h"
+ #include "hda_generic.h"
+ 
+-enum {
+-	CX_HEADSET_NOPRESENT = 0,
+-	CX_HEADSET_PARTPRESENT,
+-	CX_HEADSET_ALLPRESENT,
+-};
+-
+ struct conexant_spec {
+ 	struct hda_gen_spec gen;
+ 
+@@ -48,7 +42,6 @@ struct conexant_spec {
+ 	unsigned int gpio_led;
+ 	unsigned int gpio_mute_led_mask;
+ 	unsigned int gpio_mic_led_mask;
+-	unsigned int headset_present_flag;
+ 	bool is_cx8070_sn6140;
+ };
+ 
+@@ -250,48 +243,19 @@ static void cx_process_headset_plugin(struct hda_codec *codec)
+ 	}
+ }
+ 
+-static void cx_update_headset_mic_vref(struct hda_codec *codec, unsigned int res)
++static void cx_update_headset_mic_vref(struct hda_codec *codec, struct hda_jack_callback *event)
+ {
+-	unsigned int phone_present, mic_persent, phone_tag, mic_tag;
+-	struct conexant_spec *spec = codec->spec;
++	unsigned int mic_present;
+ 
+ 	/* In cx8070 and sn6140, the node 16 can only be config to headphone or disabled,
+ 	 * the node 19 can only be config to microphone or disabled.
+ 	 * Check hp&mic tag to process headset pulgin&plugout.
+ 	 */
+-	phone_tag = snd_hda_codec_read(codec, 0x16, 0, AC_VERB_GET_UNSOLICITED_RESPONSE, 0x0);
+-	mic_tag = snd_hda_codec_read(codec, 0x19, 0, AC_VERB_GET_UNSOLICITED_RESPONSE, 0x0);
+-	if ((phone_tag & (res >> AC_UNSOL_RES_TAG_SHIFT)) ||
+-	    (mic_tag & (res >> AC_UNSOL_RES_TAG_SHIFT))) {
+-		phone_present = snd_hda_codec_read(codec, 0x16, 0, AC_VERB_GET_PIN_SENSE, 0x0);
+-		if (!(phone_present & AC_PINSENSE_PRESENCE)) {/* headphone plugout */
+-			spec->headset_present_flag = CX_HEADSET_NOPRESENT;
+-			snd_hda_codec_write(codec, 0x19, 0, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x20);
+-			return;
+-		}
+-		if (spec->headset_present_flag == CX_HEADSET_NOPRESENT) {
+-			spec->headset_present_flag = CX_HEADSET_PARTPRESENT;
+-		} else if (spec->headset_present_flag == CX_HEADSET_PARTPRESENT) {
+-			mic_persent = snd_hda_codec_read(codec, 0x19, 0,
+-							 AC_VERB_GET_PIN_SENSE, 0x0);
+-			/* headset is present */
+-			if ((phone_present & AC_PINSENSE_PRESENCE) &&
+-			    (mic_persent & AC_PINSENSE_PRESENCE)) {
+-				cx_process_headset_plugin(codec);
+-				spec->headset_present_flag = CX_HEADSET_ALLPRESENT;
+-			}
+-		}
+-	}
+-}
+-
+-static void cx_jack_unsol_event(struct hda_codec *codec, unsigned int res)
+-{
+-	struct conexant_spec *spec = codec->spec;
+-
+-	if (spec->is_cx8070_sn6140)
+-		cx_update_headset_mic_vref(codec, res);
+-
+-	snd_hda_jack_unsol_event(codec, res);
++	mic_present = snd_hda_codec_read(codec, 0x19, 0, AC_VERB_GET_PIN_SENSE, 0x0);
++	if (!(mic_present & AC_PINSENSE_PRESENCE)) /* mic plugout */
++		snd_hda_codec_write(codec, 0x19, 0, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x20);
++	else
++		cx_process_headset_plugin(codec);
+ }
+ 
+ static int cx_auto_suspend(struct hda_codec *codec)
+@@ -305,7 +269,7 @@ static const struct hda_codec_ops cx_auto_patch_ops = {
+ 	.build_pcms = snd_hda_gen_build_pcms,
+ 	.init = cx_auto_init,
+ 	.free = cx_auto_free,
+-	.unsol_event = cx_jack_unsol_event,
++	.unsol_event = snd_hda_jack_unsol_event,
+ 	.suspend = cx_auto_suspend,
+ 	.check_power_status = snd_hda_gen_check_power_status,
+ };
+@@ -1163,7 +1127,7 @@ static int patch_conexant_auto(struct hda_codec *codec)
+ 	case 0x14f11f86:
+ 	case 0x14f11f87:
+ 		spec->is_cx8070_sn6140 = true;
+-		spec->headset_present_flag = CX_HEADSET_NOPRESENT;
++		snd_hda_jack_detect_enable_callback(codec, 0x19, cx_update_headset_mic_vref);
+ 		break;
+ 	}
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d749769438ea5..a6c1e2199e703 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9866,6 +9866,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
+ 	SND_PCI_QUIRK(0x1025, 0x080d, "Acer Aspire V5-122P", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x0840, "Acer Aspire E1", ALC269VB_FIXUP_ASPIRE_E1_COEF),
++	SND_PCI_QUIRK(0x1025, 0x100c, "Acer Aspire E5-574G", ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x1025, 0x101c, "Acer Veriton N2510G", ALC269_FIXUP_LIFEBOOK),
+ 	SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC),
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index d5409f3879455..e14c725acebf2 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -244,8 +244,8 @@ static struct snd_pcm_chmap_elem *convert_chmap(int channels, unsigned int bits,
+ 		SNDRV_CHMAP_FR,		/* right front */
+ 		SNDRV_CHMAP_FC,		/* center front */
+ 		SNDRV_CHMAP_LFE,	/* LFE */
+-		SNDRV_CHMAP_SL,		/* left surround */
+-		SNDRV_CHMAP_SR,		/* right surround */
++		SNDRV_CHMAP_RL,		/* left surround */
++		SNDRV_CHMAP_RR,		/* right surround */
+ 		SNDRV_CHMAP_FLC,	/* left of center */
+ 		SNDRV_CHMAP_FRC,	/* right of center */
+ 		SNDRV_CHMAP_RC,		/* surround */
+diff --git a/tools/perf/pmu-events/arch/riscv/andes/ax45/firmware.json b/tools/perf/pmu-events/arch/riscv/andes/ax45/firmware.json
+index 9b4a032186a7b..7149caec4f80e 100644
+--- a/tools/perf/pmu-events/arch/riscv/andes/ax45/firmware.json
++++ b/tools/perf/pmu-events/arch/riscv/andes/ax45/firmware.json
+@@ -36,7 +36,7 @@
+     "ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
+   },
+   {
+-    "ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
++    "ArchStdEvent": "FW_SFENCE_VMA_ASID_SENT"
+   },
+   {
+     "ArchStdEvent": "FW_SFENCE_VMA_ASID_RECEIVED"
+diff --git a/tools/perf/pmu-events/arch/riscv/riscv-sbi-firmware.json b/tools/perf/pmu-events/arch/riscv/riscv-sbi-firmware.json
+index a9939823b14b5..0c9b9a2d2958a 100644
+--- a/tools/perf/pmu-events/arch/riscv/riscv-sbi-firmware.json
++++ b/tools/perf/pmu-events/arch/riscv/riscv-sbi-firmware.json
+@@ -74,7 +74,7 @@
+   {
+     "PublicDescription": "Sent SFENCE.VMA with ASID request to other HART event",
+     "ConfigCode": "0x800000000000000c",
+-    "EventName": "FW_SFENCE_VMA_RECEIVED",
++    "EventName": "FW_SFENCE_VMA_ASID_SENT",
+     "BriefDescription": "Sent SFENCE.VMA with ASID request to other HART event"
+   },
+   {
+diff --git a/tools/perf/pmu-events/arch/riscv/sifive/u74/firmware.json b/tools/perf/pmu-events/arch/riscv/sifive/u74/firmware.json
+index 9b4a032186a7b..7149caec4f80e 100644
+--- a/tools/perf/pmu-events/arch/riscv/sifive/u74/firmware.json
++++ b/tools/perf/pmu-events/arch/riscv/sifive/u74/firmware.json
+@@ -36,7 +36,7 @@
+     "ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
+   },
+   {
+-    "ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
++    "ArchStdEvent": "FW_SFENCE_VMA_ASID_SENT"
+   },
+   {
+     "ArchStdEvent": "FW_SFENCE_VMA_ASID_RECEIVED"
+diff --git a/tools/perf/pmu-events/arch/riscv/starfive/dubhe-80/firmware.json b/tools/perf/pmu-events/arch/riscv/starfive/dubhe-80/firmware.json
+index 9b4a032186a7b..7149caec4f80e 100644
+--- a/tools/perf/pmu-events/arch/riscv/starfive/dubhe-80/firmware.json
++++ b/tools/perf/pmu-events/arch/riscv/starfive/dubhe-80/firmware.json
+@@ -36,7 +36,7 @@
+     "ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
+   },
+   {
+-    "ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
++    "ArchStdEvent": "FW_SFENCE_VMA_ASID_SENT"
+   },
+   {
+     "ArchStdEvent": "FW_SFENCE_VMA_ASID_RECEIVED"
+diff --git a/tools/perf/pmu-events/arch/riscv/thead/c900-legacy/firmware.json b/tools/perf/pmu-events/arch/riscv/thead/c900-legacy/firmware.json
+index 9b4a032186a7b..7149caec4f80e 100644
+--- a/tools/perf/pmu-events/arch/riscv/thead/c900-legacy/firmware.json
++++ b/tools/perf/pmu-events/arch/riscv/thead/c900-legacy/firmware.json
+@@ -36,7 +36,7 @@
+     "ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
+   },
+   {
+-    "ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
++    "ArchStdEvent": "FW_SFENCE_VMA_ASID_SENT"
+   },
+   {
+     "ArchStdEvent": "FW_SFENCE_VMA_ASID_RECEIVED"
+diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
+index 1730b852a9474..6d075648d2ccf 100644
+--- a/tools/perf/util/callchain.c
++++ b/tools/perf/util/callchain.c
+@@ -1141,7 +1141,7 @@ int hist_entry__append_callchain(struct hist_entry *he, struct perf_sample *samp
+ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *node,
+ 			bool hide_unresolved)
+ {
+-	struct machine *machine = maps__machine(node->ms.maps);
++	struct machine *machine = node->ms.maps ? maps__machine(node->ms.maps) : NULL;
+ 
+ 	maps__put(al->maps);
+ 	al->maps = maps__get(node->ms.maps);
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index d2043ec3bf6d6..4209b95690394 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -1115,11 +1115,11 @@ int main_loop_s(int listensock)
+ 		return 1;
+ 	}
+ 
+-	if (--cfg_repeat > 0) {
+-		if (cfg_input)
+-			close(fd);
++	if (cfg_input)
++		close(fd);
++
++	if (--cfg_repeat > 0)
+ 		goto again;
+-	}
+ 
+ 	return 0;
+ }
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index 108aeeb84ef10..7043984b7e74a 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -661,7 +661,7 @@ pm_nl_check_endpoint()
+ 	done
+ 
+ 	if [ -z "${id}" ]; then
+-		test_fail "bad test - missing endpoint id"
++		fail_test "bad test - missing endpoint id"
+ 		return
+ 	fi
+ 
+@@ -1634,6 +1634,8 @@ chk_prio_nr()
+ {
+ 	local mp_prio_nr_tx=$1
+ 	local mp_prio_nr_rx=$2
++	local mpj_syn=$3
++	local mpj_syn_ack=$4
+ 	local count
+ 
+ 	print_check "ptx"
+@@ -1655,6 +1657,26 @@ chk_prio_nr()
+ 	else
+ 		print_ok
+ 	fi
++
++	print_check "syn backup"
++	count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMPJoinSynBackupRx")
++	if [ -z "$count" ]; then
++		print_skip
++	elif [ "$count" != "$mpj_syn" ]; then
++		fail_test "got $count JOIN[s] syn with Backup expected $mpj_syn"
++	else
++		print_ok
++	fi
++
++	print_check "synack backup"
++	count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtMPJoinSynAckBackupRx")
++	if [ -z "$count" ]; then
++		print_skip
++	elif [ "$count" != "$mpj_syn_ack" ]; then
++		fail_test "got $count JOIN[s] synack with Backup expected $mpj_syn_ack"
++	else
++		print_ok
++	fi
+ }
+ 
+ chk_subflow_nr()
+@@ -2612,11 +2634,24 @@ backup_tests()
+ 		sflags=nobackup speed=slow \
+ 			run_tests $ns1 $ns2 10.0.1.1
+ 		chk_join_nr 1 1 1
+-		chk_prio_nr 0 1
++		chk_prio_nr 0 1 1 0
+ 	fi
+ 
+ 	# single address, backup
+ 	if reset "single address, backup" &&
++	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
++		pm_nl_set_limits $ns1 0 1
++		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal,backup
++		pm_nl_set_limits $ns2 1 1
++		sflags=nobackup speed=slow \
++			run_tests $ns1 $ns2 10.0.1.1
++		chk_join_nr 1 1 1
++		chk_add_nr 1 1
++		chk_prio_nr 1 0 0 1
++	fi
++
++	# single address, switch to backup
++	if reset "single address, switch to backup" &&
+ 	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+ 		pm_nl_set_limits $ns1 0 1
+ 		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+@@ -2625,20 +2660,20 @@ backup_tests()
+ 			run_tests $ns1 $ns2 10.0.1.1
+ 		chk_join_nr 1 1 1
+ 		chk_add_nr 1 1
+-		chk_prio_nr 1 1
++		chk_prio_nr 1 1 0 0
+ 	fi
+ 
+ 	# single address with port, backup
+ 	if reset "single address with port, backup" &&
+ 	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+ 		pm_nl_set_limits $ns1 0 1
+-		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100
++		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal,backup port 10100
+ 		pm_nl_set_limits $ns2 1 1
+-		sflags=backup speed=slow \
++		sflags=nobackup speed=slow \
+ 			run_tests $ns1 $ns2 10.0.1.1
+ 		chk_join_nr 1 1 1
+ 		chk_add_nr 1 1
+-		chk_prio_nr 1 1
++		chk_prio_nr 1 0 0 1
+ 	fi
+ 
+ 	if reset "mpc backup" &&
+@@ -2647,17 +2682,26 @@ backup_tests()
+ 		speed=slow \
+ 			run_tests $ns1 $ns2 10.0.1.1
+ 		chk_join_nr 0 0 0
+-		chk_prio_nr 0 1
++		chk_prio_nr 0 1 0 0
+ 	fi
+ 
+ 	if reset "mpc backup both sides" &&
+ 	   continue_if mptcp_lib_kallsyms_doesnt_have "T mptcp_subflow_send_ack$"; then
+-		pm_nl_add_endpoint $ns1 10.0.1.1 flags subflow,backup
++		pm_nl_set_limits $ns1 0 2
++		pm_nl_set_limits $ns2 1 2
++		pm_nl_add_endpoint $ns1 10.0.1.1 flags signal,backup
+ 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,backup
++
++		# 10.0.2.2 (non-backup) -> 10.0.1.1 (backup)
++		pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
++		# 10.0.1.2 (backup) -> 10.0.2.1 (non-backup)
++		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
++		ip -net "$ns2" route add 10.0.2.1 via 10.0.1.1 dev ns2eth1 # force this path
++
+ 		speed=slow \
+ 			run_tests $ns1 $ns2 10.0.1.1
+-		chk_join_nr 0 0 0
+-		chk_prio_nr 1 1
++		chk_join_nr 2 2 2
++		chk_prio_nr 1 1 1 1
+ 	fi
+ 
+ 	if reset "mpc switch to backup" &&
+@@ -2666,7 +2710,7 @@ backup_tests()
+ 		sflags=backup speed=slow \
+ 			run_tests $ns1 $ns2 10.0.1.1
+ 		chk_join_nr 0 0 0
+-		chk_prio_nr 0 1
++		chk_prio_nr 0 1 0 0
+ 	fi
+ 
+ 	if reset "mpc switch to backup both sides" &&
+@@ -2676,7 +2720,7 @@ backup_tests()
+ 		sflags=backup speed=slow \
+ 			run_tests $ns1 $ns2 10.0.1.1
+ 		chk_join_nr 0 0 0
+-		chk_prio_nr 1 1
++		chk_prio_nr 1 1 0 0
+ 	fi
+ }
+ 
+@@ -3053,7 +3097,7 @@ fullmesh_tests()
+ 		addr_nr_ns2=1 sflags=backup,fullmesh speed=slow \
+ 			run_tests $ns1 $ns2 10.0.1.1
+ 		chk_join_nr 2 2 2
+-		chk_prio_nr 0 1
++		chk_prio_nr 0 1 1 0
+ 		chk_rm_nr 0 1
+ 	fi
+ 
+@@ -3066,7 +3110,7 @@ fullmesh_tests()
+ 		sflags=nobackup,nofullmesh speed=slow \
+ 			run_tests $ns1 $ns2 10.0.1.1
+ 		chk_join_nr 2 2 2
+-		chk_prio_nr 0 1
++		chk_prio_nr 0 1 1 0
+ 		chk_rm_nr 0 1
+ 	fi
+ }
+@@ -3318,7 +3362,7 @@ userspace_tests()
+ 		sflags=backup speed=slow \
+ 			run_tests $ns1 $ns2 10.0.1.1
+ 		chk_join_nr 1 1 0
+-		chk_prio_nr 0 0
++		chk_prio_nr 0 0 0 0
+ 	fi
+ 
+ 	# userspace pm type prevents rm_addr


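The hda_intel hunk earlier in this patch gates snooping on whether the device has DMA ops attached, i.e. whether an IOMMU is translating for it. A minimal user-space sketch of that decision follows; the DCAPS_* bits and the has_dma_ops flag are stand-ins for the driver's real state (assumptions for illustration, not the ALSA API), and only the portion of the logic visible in the hunk is modelled.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in capability bits; the values are illustrative only. */
#define DCAPS_SNOOP_OFF      (1u << 0)
#define DCAPS_AMD_ALLOC_FIX  (1u << 1)

/* Snoop defaults to on, is forced off by SNOOP_OFF, and is also turned
 * off for the AMD allocation fix when no DMA ops (no IOMMU) are present,
 * mirroring the two checks shown in the hunk. */
static bool snoop_enabled(unsigned int driver_caps, bool has_dma_ops)
{
	bool snoop = true;

	if (driver_caps & DCAPS_SNOOP_OFF)
		snoop = false;

	if ((driver_caps & DCAPS_AMD_ALLOC_FIX) && !has_dma_ops)
		snoop = false;

	return snoop;
}

int main(void)
{
	/* AMD/ATI HDMI without an IOMMU falls back to non-snoop mode. */
	printf("AMD alloc fix, no IOMMU: snoop=%d\n",
	       snoop_enabled(DCAPS_AMD_ALLOC_FIX, false));
	/* With an IOMMU present, snooping stays enabled. */
	printf("AMD alloc fix, IOMMU:    snoop=%d\n",
	       snoop_enabled(DCAPS_AMD_ALLOC_FIX, true));
	return 0;
}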
^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-08-14 14:08 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-08-14 14:08 UTC (permalink / raw
  To: gentoo-commits

commit:     d96679025b88dd13a8f0068426bf68f8144ac048
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug 13 17:13:24 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug 13 17:13:24 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d9667902

xfs: xfs_finobt_count_blocks() walks the wrong btree

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                            |  4 +++
 1900_xfs-finobt-count-blocks-fix.patch | 55 ++++++++++++++++++++++++++++++++++
 2 files changed, 59 insertions(+)

diff --git a/0000_README b/0000_README
index 8befc23c..aa55e020 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:	  https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:	  prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
+Patch:  1900_xfs-finobt-count-blocks-fix.patch
+From:   https://lore.kernel.org/linux-xfs/20240813152530.GF6051@frogsfrogsfrogs/T/#mdc718f38912ccc1b9b53b46d9adfaeff0828b55f
+Desc:   xfs: xfs_finobt_count_blocks() walks the wrong btree
+
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1900_xfs-finobt-count-blocks-fix.patch b/1900_xfs-finobt-count-blocks-fix.patch
new file mode 100644
index 00000000..02f60712
--- /dev/null
+++ b/1900_xfs-finobt-count-blocks-fix.patch
@@ -0,0 +1,55 @@
+xfs: xfs_finobt_count_blocks() walks the wrong btree
+
+From: Dave Chinner <dchinner@redhat.com>
+
+As a result of the factoring in commit 14dd46cf31f4 ("xfs: split
+xfs_inobt_init_cursor"), mount started taking a long time on a
+user's filesystem.  For Anders, this made mount times regress from
+under a second to over 15 minutes for a filesystem with only 30
+million inodes in it.
+
+Anders bisected it down to the above commit, but even then the bug
+was not obvious. In that commit, over 20 calls to
+xfs_inobt_init_cursor() were modified, and some were changed to call
+a new function named xfs_finobt_init_cursor().
+
+If that takes you a moment to reread those function names to see
+what the rename was, then you have realised why this bug wasn't
+spotted during review. And it wasn't spotted on inspection even
+after the bisect pointed at this commit - a single missing "f" isn't
+the easiest thing for a human eye to notice....
+
+The result is that xfs_finobt_count_blocks() now incorrectly calls
+xfs_inobt_init_cursor(), so it is now walking the inobt instead of
+the finobt. Hence when there are lots of allocated inodes in a
+filesystem, mount takes a -long- time to run because it now walks the
+massive allocated inode btrees instead of the small, nearly empty
+free inode btrees. It also means all the finobt space reservations
+are wrong, so mount could potentially give ENOSPC on a kernel
+upgrade.
+
+In hindsight, commit 14dd46cf31f4 should have been two commits - the
+first to convert the finobt callers to the new API, the second to
+modify the xfs_inobt_init_cursor() API for the inobt callers. That
+would have made the bug very obvious during review.
+
+Fixes: 14dd46cf31f4 ("xfs: split xfs_inobt_init_cursor")
+Reported-by: Anders Blomdell <anders.blomdell@gmail.com>
+Signed-off-by: Dave Chinner <dchinner@redhat.com>
+---
+ fs/xfs/libxfs/xfs_ialloc_btree.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/fs/xfs/libxfs/xfs_ialloc_btree.c b/fs/xfs/libxfs/xfs_ialloc_btree.c
+index 496e2f72a85b..797d5b5f7b72 100644
+--- a/fs/xfs/libxfs/xfs_ialloc_btree.c
++++ b/fs/xfs/libxfs/xfs_ialloc_btree.c
+@@ -749,7 +749,7 @@ xfs_finobt_count_blocks(
+ 	if (error)
+ 		return error;
+ 
+-	cur = xfs_inobt_init_cursor(pag, tp, agbp);
++	cur = xfs_finobt_init_cursor(pag, tp, agbp);
+ 	error = xfs_btree_count_blocks(cur, tree_blocks);
+ 	xfs_btree_del_cursor(cur, error);
+ 	xfs_trans_brelse(tp, agbp);


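The description above explains that the mis-typed cursor made xfs_finobt_count_blocks() walk the inobt, which covers every allocated inode, instead of the nearly empty finobt. Since counting blocks costs a visit per btree block, the walk scales with the size of whichever tree is chosen. A back-of-the-envelope sketch, with all record and fanout figures hypothetical rather than taken from XFS:

#include <stdio.h>

int main(void)
{
	/* Hypothetical scales: a large tree of allocated-inode records
	 * versus a nearly empty free-inode tree. */
	const long inobt_records  = 500000;
	const long finobt_records = 1000;
	const long recs_per_block = 250;   /* assumed leaf fanout */

	long inobt_blocks  = (inobt_records  + recs_per_block - 1) / recs_per_block;
	long finobt_blocks = (finobt_records + recs_per_block - 1) / recs_per_block;

	printf("blocks visited via inobt : ~%ld\n", inobt_blocks);
	printf("blocks visited via finobt: ~%ld\n", finobt_blocks);
	printf("extra work at mount      : ~%ldx\n",
	       inobt_blocks / finobt_blocks);
	return 0;
}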
^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-08-14 14:08 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-08-14 14:08 UTC (permalink / raw
  To: gentoo-commits

commit:     fab41ab6d1472887f05dd343aa0821750eed8c5a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 14 14:08:05 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 14 14:08:05 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fab41ab6

Linux patch 6.10.5

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1004_linux-6.10.5.patch | 11070 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11074 insertions(+)

diff --git a/0000_README b/0000_README
index aa55e020..308053b4 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-6.10.4.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.4
 
+Patch:  1004_linux-6.10.5.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.5
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1004_linux-6.10.5.patch b/1004_linux-6.10.5.patch
new file mode 100644
index 00000000..f00971be
--- /dev/null
+++ b/1004_linux-6.10.5.patch
@@ -0,0 +1,11070 @@
+diff --git a/Documentation/admin-guide/cifs/usage.rst b/Documentation/admin-guide/cifs/usage.rst
+index fd4b56c0996f4..c09674a75a9e3 100644
+--- a/Documentation/admin-guide/cifs/usage.rst
++++ b/Documentation/admin-guide/cifs/usage.rst
+@@ -742,7 +742,7 @@ SecurityFlags		Flags which control security negotiation and
+ 			  may use NTLMSSP               		0x00080
+ 			  must use NTLMSSP           			0x80080
+ 			  seal (packet encryption)			0x00040
+-			  must seal (not implemented yet)               0x40040
++			  must seal                                     0x40040
+ 
+ cifsFYI			If set to non-zero value, additional debug information
+ 			will be logged to the system error log.  This field
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 2569e7f19b476..c82446cef8e21 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4801,11 +4801,9 @@
+ 
+ 	profile=	[KNL] Enable kernel profiling via /proc/profile
+ 			Format: [<profiletype>,]<number>
+-			Param: <profiletype>: "schedule", "sleep", or "kvm"
++			Param: <profiletype>: "schedule" or "kvm"
+ 				[defaults to kernel profiling]
+ 			Param: "schedule" - profile schedule points.
+-			Param: "sleep" - profile D-state sleeping (millisecs).
+-				Requires CONFIG_SCHEDSTATS
+ 			Param: "kvm" - profile VM exits.
+ 			Param: <number> - step/bucket size as a power of 2 for
+ 				statistical time based profiling.
+diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
+index eb8af8032c315..50327c05be8d1 100644
+--- a/Documentation/arch/arm64/silicon-errata.rst
++++ b/Documentation/arch/arm64/silicon-errata.rst
+@@ -122,26 +122,50 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A76      | #1490853        | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A76      | #3324349        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A77      | #1491015        | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A77      | #1508412        | ARM64_ERRATUM_1508412       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A77      | #3324348        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A78      | #3324344        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A78C     | #3324346,3324347| ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A710     | #2119858        | ARM64_ERRATUM_2119858       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A710     | #2054223        | ARM64_ERRATUM_2054223       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A710     | #2224489        | ARM64_ERRATUM_2224489       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A710     | #3324338        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A715     | #2645198        | ARM64_ERRATUM_2645198       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A720     | #3456091        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A725     | #3456106        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-X1       | #1502854        | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X1       | #3324344        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X1C      | #3324346        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-X2       | #2119858        | ARM64_ERRATUM_2119858       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-X2       | #2224489        | ARM64_ERRATUM_2224489       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X2       | #3324338        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X3       | #3324335        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-X4       | #3194386        | ARM64_ERRATUM_3194386       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X925     | #3324334        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1188873,1418040| ARM64_ERRATUM_1418040       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1349291        | N/A                         |
+@@ -150,15 +174,23 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1542419        | ARM64_ERRATUM_1542419       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-N1     | #3324349        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N2     | #2139208        | ARM64_ERRATUM_2139208       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N2     | #2067961        | ARM64_ERRATUM_2067961       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N2     | #2253138        | ARM64_ERRATUM_2253138       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-N2     | #3324339        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-V1     | #1619801        | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
+-| ARM            | Neoverse-V3     | #3312417        | ARM64_ERRATUM_3312417       |
++| ARM            | Neoverse-V1     | #3324341        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-V2     | #3324336        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-V3     | #3312417        | ARM64_ERRATUM_3194386       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | MMU-500         | #841119,826419  | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/hwmon/corsair-psu.rst b/Documentation/hwmon/corsair-psu.rst
+index 16db34d464dd6..7ed794087f848 100644
+--- a/Documentation/hwmon/corsair-psu.rst
++++ b/Documentation/hwmon/corsair-psu.rst
+@@ -15,11 +15,11 @@ Supported devices:
+ 
+   Corsair HX850i
+ 
+-  Corsair HX1000i (Series 2022 and 2023)
++  Corsair HX1000i (Legacy and Series 2023)
+ 
+-  Corsair HX1200i
++  Corsair HX1200i (Legacy and Series 2023)
+ 
+-  Corsair HX1500i (Series 2022 and 2023)
++  Corsair HX1500i (Legacy and Series 2023)
+ 
+   Corsair RM550i
+ 
+diff --git a/Documentation/userspace-api/media/v4l/pixfmt-yuv-luma.rst b/Documentation/userspace-api/media/v4l/pixfmt-yuv-luma.rst
+index b3c5779521d8c..2e7d0d3151a17 100644
+--- a/Documentation/userspace-api/media/v4l/pixfmt-yuv-luma.rst
++++ b/Documentation/userspace-api/media/v4l/pixfmt-yuv-luma.rst
+@@ -21,9 +21,9 @@ are often referred to as greyscale formats.
+ 
+ .. raw:: latex
+ 
+-    \scriptsize
++    \tiny
+ 
+-.. tabularcolumns:: |p{3.6cm}|p{3.0cm}|p{1.3cm}|p{2.6cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|
++.. tabularcolumns:: |p{3.6cm}|p{2.4cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|
+ 
+ .. flat-table:: Luma-Only Image Formats
+     :header-rows: 1
+diff --git a/Makefile b/Makefile
+index aec5cc0babf8c..f9badb79ae8f4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 5d91259ee7b53..11bbdc15c6e5e 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1067,48 +1067,44 @@ config ARM64_ERRATUM_3117295
+ 
+ 	  If unsure, say Y.
+ 
+-config ARM64_WORKAROUND_SPECULATIVE_SSBS
+-	bool
+-
+ config ARM64_ERRATUM_3194386
+-	bool "Cortex-X4: 3194386: workaround for MSR SSBS not self-synchronizing"
+-	select ARM64_WORKAROUND_SPECULATIVE_SSBS
+-	default y
+-	help
+-	  This option adds the workaround for ARM Cortex-X4 erratum 3194386.
++	bool "Cortex-*/Neoverse-*: workaround for MSR SSBS not self-synchronizing"
++	default y
++	help
++	  This option adds the workaround for the following errata:
++
++	  * ARM Cortex-A76 erratum 3324349
++	  * ARM Cortex-A77 erratum 3324348
++	  * ARM Cortex-A78 erratum 3324344
++	  * ARM Cortex-A78C erratum 3324346
++	  * ARM Cortex-A78C erratum 3324347
++	  * ARM Cortex-A710 erratum 3324338
++	  * ARM Cortex-A720 erratum 3456091
++	  * ARM Cortex-A725 erratum 3456106
++	  * ARM Cortex-X1 erratum 3324344
++	  * ARM Cortex-X1C erratum 3324346
++	  * ARM Cortex-X2 erratum 3324338
++	  * ARM Cortex-X3 erratum 3324335
++	  * ARM Cortex-X4 erratum 3194386
++	  * ARM Cortex-X925 erratum 3324334
++	  * ARM Neoverse-N1 erratum 3324349
++	  * ARM Neoverse N2 erratum 3324339
++	  * ARM Neoverse-V1 erratum 3324341
++	  * ARM Neoverse V2 erratum 3324336
++	  * ARM Neoverse-V3 erratum 3312417
+ 
+ 	  On affected cores "MSR SSBS, #0" instructions may not affect
+ 	  subsequent speculative instructions, which may permit unexepected
+ 	  speculative store bypassing.
+ 
+-	  Work around this problem by placing a speculation barrier after
+-	  kernel changes to SSBS. The presence of the SSBS special-purpose
+-	  register is hidden from hwcaps and EL0 reads of ID_AA64PFR1_EL1, such
+-	  that userspace will use the PR_SPEC_STORE_BYPASS prctl to change
+-	  SSBS.
++	  Work around this problem by placing a Speculation Barrier (SB) or
++	  Instruction Synchronization Barrier (ISB) after kernel changes to
++	  SSBS. The presence of the SSBS special-purpose register is hidden
++	  from hwcaps and EL0 reads of ID_AA64PFR1_EL1, such that userspace
++	  will use the PR_SPEC_STORE_BYPASS prctl to change SSBS.
+ 
+ 	  If unsure, say Y.
+ 
+-config ARM64_ERRATUM_3312417
+-	bool "Neoverse-V3: 3312417: workaround for MSR SSBS not self-synchronizing"
+-	select ARM64_WORKAROUND_SPECULATIVE_SSBS
+-	default y
+-	help
+-	  This option adds the workaround for ARM Neoverse-V3 erratum 3312417.
+-
+-	  On affected cores "MSR SSBS, #0" instructions may not affect
+-	  subsequent speculative instructions, which may permit unexepected
+-	  speculative store bypassing.
+-
+-	  Work around this problem by placing a speculation barrier after
+-	  kernel changes to SSBS. The presence of the SSBS special-purpose
+-	  register is hidden from hwcaps and EL0 reads of ID_AA64PFR1_EL1, such
+-	  that userspace will use the PR_SPEC_STORE_BYPASS prctl to change
+-	  SSBS.
+-
+-	  If unsure, say Y.
+-
+-
+ config CAVIUM_ERRATUM_22375
+ 	bool "Cavium erratum 22375, 24313"
+ 	default y
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-verdin-dahlia.dtsi b/arch/arm64/boot/dts/ti/k3-am62-verdin-dahlia.dtsi
+index e8f4d136e5dfb..9202181fbd652 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-verdin-dahlia.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-verdin-dahlia.dtsi
+@@ -43,15 +43,6 @@ simple-audio-card,cpu {
+ 			sound-dai = <&mcasp0>;
+ 		};
+ 	};
+-
+-	reg_usb_hub: regulator-usb-hub {
+-		compatible = "regulator-fixed";
+-		enable-active-high;
+-		/* Verdin CTRL_SLEEP_MOCI# (SODIMM 256) */
+-		gpio = <&main_gpio0 31 GPIO_ACTIVE_HIGH>;
+-		regulator-boot-on;
+-		regulator-name = "HUB_PWR_EN";
+-	};
+ };
+ 
+ /* Verdin ETHs */
+@@ -193,11 +184,6 @@ &ospi0 {
+ 	status = "okay";
+ };
+ 
+-/* Do not force CTRL_SLEEP_MOCI# always enabled */
+-&reg_force_sleep_moci {
+-	status = "disabled";
+-};
+-
+ /* Verdin SD_1 */
+ &sdhci1 {
+ 	status = "okay";
+@@ -218,15 +204,7 @@ &usbss1 {
+ };
+ 
+ &usb1 {
+-	#address-cells = <1>;
+-	#size-cells = <0>;
+ 	status = "okay";
+-
+-	usb-hub@1 {
+-		compatible = "usb424,2744";
+-		reg = <1>;
+-		vdd-supply = <&reg_usb_hub>;
+-	};
+ };
+ 
+ /* Verdin CTRL_WAKE1_MICO# */
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
+index 359f53f3e019b..5bef31b8577be 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
+@@ -138,12 +138,6 @@ reg_1v8_eth: regulator-1v8-eth {
+ 		vin-supply = <&reg_1v8>;
+ 	};
+ 
+-	/*
+-	 * By default we enable CTRL_SLEEP_MOCI#, this is required to have
+-	 * peripherals on the carrier board powered.
+-	 * If more granularity or power saving is required this can be disabled
+-	 * in the carrier board device tree files.
+-	 */
+ 	reg_force_sleep_moci: regulator-force-sleep-moci {
+ 		compatible = "regulator-fixed";
+ 		enable-active-high;
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index 7529c02639332..a6e5b07b64fd5 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -59,7 +59,7 @@ cpucap_is_possible(const unsigned int cap)
+ 	case ARM64_WORKAROUND_REPEAT_TLBI:
+ 		return IS_ENABLED(CONFIG_ARM64_WORKAROUND_REPEAT_TLBI);
+ 	case ARM64_WORKAROUND_SPECULATIVE_SSBS:
+-		return IS_ENABLED(CONFIG_ARM64_WORKAROUND_SPECULATIVE_SSBS);
++		return IS_ENABLED(CONFIG_ARM64_ERRATUM_3194386);
+ 	}
+ 
+ 	return true;
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 7b32b99023a21..5fd7caea44193 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -86,9 +86,14 @@
+ #define ARM_CPU_PART_CORTEX_X2		0xD48
+ #define ARM_CPU_PART_NEOVERSE_N2	0xD49
+ #define ARM_CPU_PART_CORTEX_A78C	0xD4B
++#define ARM_CPU_PART_CORTEX_X1C		0xD4C
++#define ARM_CPU_PART_CORTEX_X3		0xD4E
+ #define ARM_CPU_PART_NEOVERSE_V2	0xD4F
++#define ARM_CPU_PART_CORTEX_A720	0xD81
+ #define ARM_CPU_PART_CORTEX_X4		0xD82
+ #define ARM_CPU_PART_NEOVERSE_V3	0xD84
++#define ARM_CPU_PART_CORTEX_X925	0xD85
++#define ARM_CPU_PART_CORTEX_A725	0xD87
+ 
+ #define APM_CPU_PART_XGENE		0x000
+ #define APM_CPU_VAR_POTENZA		0x00
+@@ -162,9 +167,14 @@
+ #define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
+ #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
+ #define MIDR_CORTEX_A78C	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C)
++#define MIDR_CORTEX_X1C	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1C)
++#define MIDR_CORTEX_X3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X3)
+ #define MIDR_NEOVERSE_V2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V2)
++#define MIDR_CORTEX_A720 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A720)
+ #define MIDR_CORTEX_X4 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X4)
+ #define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3)
++#define MIDR_CORTEX_X925 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X925)
++#define MIDR_CORTEX_A725 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A725)
+ #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
+ #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+ #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 828be635e7e1d..f6b6b45073571 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -432,14 +432,26 @@ static const struct midr_range erratum_spec_unpriv_load_list[] = {
+ };
+ #endif
+ 
+-#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_SSBS
+-static const struct midr_range erratum_spec_ssbs_list[] = {
+ #ifdef CONFIG_ARM64_ERRATUM_3194386
++static const struct midr_range erratum_spec_ssbs_list[] = {
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A77),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A720),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A725),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_X1C),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_X3),
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_X4),
+-#endif
+-#ifdef CONFIG_ARM64_ERRATUM_3312417
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_X925),
++	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
++	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
++	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
++	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V2),
+ 	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V3),
+-#endif
+ 	{}
+ };
+ #endif
+@@ -741,9 +753,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 		MIDR_FIXED(MIDR_CPU_VAR_REV(1,1), BIT(25)),
+ 	},
+ #endif
+-#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_SSBS
++#ifdef CONFIG_ARM64_ERRATUM_3194386
+ 	{
+-		.desc = "ARM errata 3194386, 3312417",
++		.desc = "SSBS not fully self-synchronizing",
+ 		.capability = ARM64_WORKAROUND_SPECULATIVE_SSBS,
+ 		ERRATA_MIDR_RANGE_LIST(erratum_spec_ssbs_list),
+ 	},
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index baca47bd443c8..da53722f95d41 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -567,7 +567,7 @@ static enum mitigation_state spectre_v4_enable_hw_mitigation(void)
+ 	 * Mitigate this with an unconditional speculation barrier, as CPUs
+ 	 * could mis-speculate branches and bypass a conditional barrier.
+ 	 */
+-	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_SPECULATIVE_SSBS))
++	if (IS_ENABLED(CONFIG_ARM64_ERRATUM_3194386))
+ 		spec_bar();
+ 
+ 	return SPECTRE_MITIGATED;
+diff --git a/arch/loongarch/kernel/efi.c b/arch/loongarch/kernel/efi.c
+index 000825406c1f6..2bf86aeda874c 100644
+--- a/arch/loongarch/kernel/efi.c
++++ b/arch/loongarch/kernel/efi.c
+@@ -66,6 +66,12 @@ void __init efi_runtime_init(void)
+ 	set_bit(EFI_RUNTIME_SERVICES, &efi.flags);
+ }
+ 
++bool efi_poweroff_required(void)
++{
++	return efi_enabled(EFI_RUNTIME_SERVICES) &&
++		(acpi_gbl_reduced_hardware || acpi_no_s5);
++}
++
+ unsigned long __initdata screen_info_table = EFI_INVALID_TABLE_ADDR;
+ 
+ #if defined(CONFIG_SYSFB) || defined(CONFIG_EFI_EARLYCON)
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index 9656e956ed135..4ecfa08ac3e1f 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -20,6 +20,7 @@ config PARISC
+ 	select ARCH_SUPPORTS_HUGETLBFS if PA20
+ 	select ARCH_SUPPORTS_MEMORY_FAILURE
+ 	select ARCH_STACKWALK
++	select ARCH_HAS_CACHE_LINE_SIZE
+ 	select ARCH_HAS_DEBUG_VM_PGTABLE
+ 	select HAVE_RELIABLE_STACKTRACE
+ 	select DMA_OPS
+diff --git a/arch/parisc/include/asm/cache.h b/arch/parisc/include/asm/cache.h
+index 2a60d7a72f1fa..a3f0f100f2194 100644
+--- a/arch/parisc/include/asm/cache.h
++++ b/arch/parisc/include/asm/cache.h
+@@ -20,7 +20,16 @@
+ 
+ #define SMP_CACHE_BYTES L1_CACHE_BYTES
+ 
+-#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES
++#ifdef CONFIG_PA20
++#define ARCH_DMA_MINALIGN	128
++#else
++#define ARCH_DMA_MINALIGN	32
++#endif
++#define ARCH_KMALLOC_MINALIGN	16	/* ldcw requires 16-byte alignment */
++
++#define arch_slab_minalign()	((unsigned)dcache_stride)
++#define cache_line_size()	dcache_stride
++#define dma_get_cache_alignment cache_line_size
+ 
+ #define __read_mostly __section(".data..read_mostly")
+ 
+diff --git a/arch/parisc/net/bpf_jit_core.c b/arch/parisc/net/bpf_jit_core.c
+index 979f45d4d1fbe..06cbcd6fe87b8 100644
+--- a/arch/parisc/net/bpf_jit_core.c
++++ b/arch/parisc/net/bpf_jit_core.c
+@@ -114,7 +114,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 			jit_data->header =
+ 				bpf_jit_binary_alloc(prog_size + extable_size,
+ 						     &jit_data->image,
+-						     sizeof(u32),
++						     sizeof(long),
+ 						     bpf_fill_ill_insns);
+ 			if (!jit_data->header) {
+ 				prog = orig_prog;
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index 1fc4ce44e743c..920e3a640cadd 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -432,8 +432,10 @@ static void __amd_put_nb_event_constraints(struct cpu_hw_events *cpuc,
+ 	 * be removed on one CPU at a time AND PMU is disabled
+ 	 * when we come here
+ 	 */
+-	for (i = 0; i < x86_pmu.num_counters; i++) {
+-		if (cmpxchg(nb->owners + i, event, NULL) == event)
++	for_each_set_bit(i, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
++		struct perf_event *tmp = event;
++
++		if (try_cmpxchg(nb->owners + i, &tmp, NULL))
+ 			break;
+ 	}
+ }
+@@ -499,7 +501,7 @@ __amd_get_nb_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *ev
+ 	 * because of successive calls to x86_schedule_events() from
+ 	 * hw_perf_group_sched_in() without hw_perf_enable()
+ 	 */
+-	for_each_set_bit(idx, c->idxmsk, x86_pmu.num_counters) {
++	for_each_set_bit(idx, c->idxmsk, x86_pmu_max_num_counters(NULL)) {
+ 		if (new == -1 || hwc->idx == idx)
+ 			/* assign free slot, prefer hwc->idx */
+ 			old = cmpxchg(nb->owners + idx, NULL, event);
+@@ -542,7 +544,7 @@ static struct amd_nb *amd_alloc_nb(int cpu)
+ 	/*
+ 	 * initialize all possible NB constraints
+ 	 */
+-	for (i = 0; i < x86_pmu.num_counters; i++) {
++	for_each_set_bit(i, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		__set_bit(i, nb->event_constraints[i].idxmsk);
+ 		nb->event_constraints[i].weight = 1;
+ 	}
+@@ -735,7 +737,7 @@ static void amd_pmu_check_overflow(void)
+ 	 * counters are always enabled when this function is called and
+ 	 * ARCH_PERFMON_EVENTSEL_INT is always set.
+ 	 */
+-	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++	for_each_set_bit(idx, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		if (!test_bit(idx, cpuc->active_mask))
+ 			continue;
+ 
+@@ -755,7 +757,7 @@ static void amd_pmu_enable_all(int added)
+ 
+ 	amd_brs_enable_all();
+ 
+-	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++	for_each_set_bit(idx, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		/* only activate events which are marked as active */
+ 		if (!test_bit(idx, cpuc->active_mask))
+ 			continue;
+@@ -978,7 +980,7 @@ static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
+ 	/* Clear any reserved bits set by buggy microcode */
+ 	status &= amd_pmu_global_cntr_mask;
+ 
+-	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++	for_each_set_bit(idx, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		if (!test_bit(idx, cpuc->active_mask))
+ 			continue;
+ 
+@@ -1313,7 +1315,7 @@ static __initconst const struct x86_pmu amd_pmu = {
+ 	.addr_offset            = amd_pmu_addr_offset,
+ 	.event_map		= amd_pmu_event_map,
+ 	.max_events		= ARRAY_SIZE(amd_perfmon_event_map),
+-	.num_counters		= AMD64_NUM_COUNTERS,
++	.cntr_mask64		= GENMASK_ULL(AMD64_NUM_COUNTERS - 1, 0),
+ 	.add			= amd_pmu_add_event,
+ 	.del			= amd_pmu_del_event,
+ 	.cntval_bits		= 48,
+@@ -1412,7 +1414,7 @@ static int __init amd_core_pmu_init(void)
+ 	 */
+ 	x86_pmu.eventsel	= MSR_F15H_PERF_CTL;
+ 	x86_pmu.perfctr		= MSR_F15H_PERF_CTR;
+-	x86_pmu.num_counters	= AMD64_NUM_COUNTERS_CORE;
++	x86_pmu.cntr_mask64	= GENMASK_ULL(AMD64_NUM_COUNTERS_CORE - 1, 0);
+ 
+ 	/* Check for Performance Monitoring v2 support */
+ 	if (boot_cpu_has(X86_FEATURE_PERFMON_V2)) {
+@@ -1422,9 +1424,9 @@ static int __init amd_core_pmu_init(void)
+ 		x86_pmu.version = 2;
+ 
+ 		/* Find the number of available Core PMCs */
+-		x86_pmu.num_counters = ebx.split.num_core_pmc;
++		x86_pmu.cntr_mask64 = GENMASK_ULL(ebx.split.num_core_pmc - 1, 0);
+ 
+-		amd_pmu_global_cntr_mask = (1ULL << x86_pmu.num_counters) - 1;
++		amd_pmu_global_cntr_mask = x86_pmu.cntr_mask64;
+ 
+ 		/* Update PMC handling functions */
+ 		x86_pmu.enable_all = amd_pmu_v2_enable_all;
+@@ -1452,12 +1454,12 @@ static int __init amd_core_pmu_init(void)
+ 		 * even numbered counter that has a consecutive adjacent odd
+ 		 * numbered counter following it.
+ 		 */
+-		for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
++		for (i = 0; i < x86_pmu_max_num_counters(NULL) - 1; i += 2)
+ 			even_ctr_mask |= BIT_ULL(i);
+ 
+ 		pair_constraint = (struct event_constraint)
+ 				    __EVENT_CONSTRAINT(0, even_ctr_mask, 0,
+-				    x86_pmu.num_counters / 2, 0,
++				    x86_pmu_max_num_counters(NULL) / 2, 0,
+ 				    PERF_X86_EVENT_PAIR);
+ 
+ 		x86_pmu.get_event_constraints = amd_get_event_constraints_f17h;
+diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
+index 5a4bfe9aea237..0bfde2ea5cb8c 100644
+--- a/arch/x86/events/amd/uncore.c
++++ b/arch/x86/events/amd/uncore.c
+@@ -162,7 +162,9 @@ static int amd_uncore_add(struct perf_event *event, int flags)
+ 	/* if not, take the first available counter */
+ 	hwc->idx = -1;
+ 	for (i = 0; i < pmu->num_counters; i++) {
+-		if (cmpxchg(&ctx->events[i], NULL, event) == NULL) {
++		struct perf_event *tmp = NULL;
++
++		if (try_cmpxchg(&ctx->events[i], &tmp, event)) {
+ 			hwc->idx = i;
+ 			break;
+ 		}
+@@ -196,7 +198,9 @@ static void amd_uncore_del(struct perf_event *event, int flags)
+ 	event->pmu->stop(event, PERF_EF_UPDATE);
+ 
+ 	for (i = 0; i < pmu->num_counters; i++) {
+-		if (cmpxchg(&ctx->events[i], event, NULL) == event)
++		struct perf_event *tmp = event;
++
++		if (try_cmpxchg(&ctx->events[i], &tmp, NULL))
+ 			break;
+ 	}
+ 
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index acd367c453341..83d12dd3f831a 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -189,29 +189,31 @@ static DEFINE_MUTEX(pmc_reserve_mutex);
+ 
+ #ifdef CONFIG_X86_LOCAL_APIC
+ 
+-static inline int get_possible_num_counters(void)
++static inline u64 get_possible_counter_mask(void)
+ {
+-	int i, num_counters = x86_pmu.num_counters;
++	u64 cntr_mask = x86_pmu.cntr_mask64;
++	int i;
+ 
+ 	if (!is_hybrid())
+-		return num_counters;
++		return cntr_mask;
+ 
+ 	for (i = 0; i < x86_pmu.num_hybrid_pmus; i++)
+-		num_counters = max_t(int, num_counters, x86_pmu.hybrid_pmu[i].num_counters);
++		cntr_mask |= x86_pmu.hybrid_pmu[i].cntr_mask64;
+ 
+-	return num_counters;
++	return cntr_mask;
+ }
+ 
+ static bool reserve_pmc_hardware(void)
+ {
+-	int i, num_counters = get_possible_num_counters();
++	u64 cntr_mask = get_possible_counter_mask();
++	int i, end;
+ 
+-	for (i = 0; i < num_counters; i++) {
++	for_each_set_bit(i, (unsigned long *)&cntr_mask, X86_PMC_IDX_MAX) {
+ 		if (!reserve_perfctr_nmi(x86_pmu_event_addr(i)))
+ 			goto perfctr_fail;
+ 	}
+ 
+-	for (i = 0; i < num_counters; i++) {
++	for_each_set_bit(i, (unsigned long *)&cntr_mask, X86_PMC_IDX_MAX) {
+ 		if (!reserve_evntsel_nmi(x86_pmu_config_addr(i)))
+ 			goto eventsel_fail;
+ 	}
+@@ -219,13 +221,14 @@ static bool reserve_pmc_hardware(void)
+ 	return true;
+ 
+ eventsel_fail:
+-	for (i--; i >= 0; i--)
++	end = i;
++	for_each_set_bit(i, (unsigned long *)&cntr_mask, end)
+ 		release_evntsel_nmi(x86_pmu_config_addr(i));
+-
+-	i = num_counters;
++	i = X86_PMC_IDX_MAX;
+ 
+ perfctr_fail:
+-	for (i--; i >= 0; i--)
++	end = i;
++	for_each_set_bit(i, (unsigned long *)&cntr_mask, end)
+ 		release_perfctr_nmi(x86_pmu_event_addr(i));
+ 
+ 	return false;
+@@ -233,9 +236,10 @@ static bool reserve_pmc_hardware(void)
+ 
+ static void release_pmc_hardware(void)
+ {
+-	int i, num_counters = get_possible_num_counters();
++	u64 cntr_mask = get_possible_counter_mask();
++	int i;
+ 
+-	for (i = 0; i < num_counters; i++) {
++	for_each_set_bit(i, (unsigned long *)&cntr_mask, X86_PMC_IDX_MAX) {
+ 		release_perfctr_nmi(x86_pmu_event_addr(i));
+ 		release_evntsel_nmi(x86_pmu_config_addr(i));
+ 	}
+@@ -248,7 +252,8 @@ static void release_pmc_hardware(void) {}
+ 
+ #endif
+ 
+-bool check_hw_exists(struct pmu *pmu, int num_counters, int num_counters_fixed)
++bool check_hw_exists(struct pmu *pmu, unsigned long *cntr_mask,
++		     unsigned long *fixed_cntr_mask)
+ {
+ 	u64 val, val_fail = -1, val_new= ~0;
+ 	int i, reg, reg_fail = -1, ret = 0;
+@@ -259,7 +264,7 @@ bool check_hw_exists(struct pmu *pmu, int num_counters, int num_counters_fixed)
+ 	 * Check to see if the BIOS enabled any of the counters, if so
+ 	 * complain and bail.
+ 	 */
+-	for (i = 0; i < num_counters; i++) {
++	for_each_set_bit(i, cntr_mask, X86_PMC_IDX_MAX) {
+ 		reg = x86_pmu_config_addr(i);
+ 		ret = rdmsrl_safe(reg, &val);
+ 		if (ret)
+@@ -273,12 +278,12 @@ bool check_hw_exists(struct pmu *pmu, int num_counters, int num_counters_fixed)
+ 		}
+ 	}
+ 
+-	if (num_counters_fixed) {
++	if (*(u64 *)fixed_cntr_mask) {
+ 		reg = MSR_ARCH_PERFMON_FIXED_CTR_CTRL;
+ 		ret = rdmsrl_safe(reg, &val);
+ 		if (ret)
+ 			goto msr_fail;
+-		for (i = 0; i < num_counters_fixed; i++) {
++		for_each_set_bit(i, fixed_cntr_mask, X86_PMC_IDX_MAX) {
+ 			if (fixed_counter_disabled(i, pmu))
+ 				continue;
+ 			if (val & (0x03ULL << i*4)) {
+@@ -679,7 +684,7 @@ void x86_pmu_disable_all(void)
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ 	int idx;
+ 
+-	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++	for_each_set_bit(idx, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		struct hw_perf_event *hwc = &cpuc->events[idx]->hw;
+ 		u64 val;
+ 
+@@ -736,7 +741,7 @@ void x86_pmu_enable_all(int added)
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ 	int idx;
+ 
+-	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++	for_each_set_bit(idx, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		struct hw_perf_event *hwc = &cpuc->events[idx]->hw;
+ 
+ 		if (!test_bit(idx, cpuc->active_mask))
+@@ -975,7 +980,6 @@ EXPORT_SYMBOL_GPL(perf_assign_events);
+ 
+ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
+ {
+-	int num_counters = hybrid(cpuc->pmu, num_counters);
+ 	struct event_constraint *c;
+ 	struct perf_event *e;
+ 	int n0, i, wmin, wmax, unsched = 0;
+@@ -1051,7 +1055,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
+ 
+ 	/* slow path */
+ 	if (i != n) {
+-		int gpmax = num_counters;
++		int gpmax = x86_pmu_max_num_counters(cpuc->pmu);
+ 
+ 		/*
+ 		 * Do not allow scheduling of more than half the available
+@@ -1072,7 +1076,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
+ 		 * the extra Merge events needed by large increment events.
+ 		 */
+ 		if (x86_pmu.flags & PMU_FL_PAIR) {
+-			gpmax = num_counters - cpuc->n_pair;
++			gpmax -= cpuc->n_pair;
+ 			WARN_ON(gpmax <= 0);
+ 		}
+ 
+@@ -1157,12 +1161,10 @@ static int collect_event(struct cpu_hw_events *cpuc, struct perf_event *event,
+  */
+ static int collect_events(struct cpu_hw_events *cpuc, struct perf_event *leader, bool dogrp)
+ {
+-	int num_counters = hybrid(cpuc->pmu, num_counters);
+-	int num_counters_fixed = hybrid(cpuc->pmu, num_counters_fixed);
+ 	struct perf_event *event;
+ 	int n, max_count;
+ 
+-	max_count = num_counters + num_counters_fixed;
++	max_count = x86_pmu_num_counters(cpuc->pmu) + x86_pmu_num_counters_fixed(cpuc->pmu);
+ 
+ 	/* current number of events already accepted */
+ 	n = cpuc->n_events;
+@@ -1519,19 +1521,22 @@ static void x86_pmu_start(struct perf_event *event, int flags)
+ void perf_event_print_debug(void)
+ {
+ 	u64 ctrl, status, overflow, pmc_ctrl, pmc_count, prev_left, fixed;
++	unsigned long *cntr_mask, *fixed_cntr_mask;
++	struct event_constraint *pebs_constraints;
++	struct cpu_hw_events *cpuc;
+ 	u64 pebs, debugctl;
+-	int cpu = smp_processor_id();
+-	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
+-	int num_counters = hybrid(cpuc->pmu, num_counters);
+-	int num_counters_fixed = hybrid(cpuc->pmu, num_counters_fixed);
+-	struct event_constraint *pebs_constraints = hybrid(cpuc->pmu, pebs_constraints);
+-	unsigned long flags;
+-	int idx;
++	int cpu, idx;
+ 
+-	if (!num_counters)
+-		return;
++	guard(irqsave)();
++
++	cpu = smp_processor_id();
++	cpuc = &per_cpu(cpu_hw_events, cpu);
++	cntr_mask = hybrid(cpuc->pmu, cntr_mask);
++	fixed_cntr_mask = hybrid(cpuc->pmu, fixed_cntr_mask);
++	pebs_constraints = hybrid(cpuc->pmu, pebs_constraints);
+ 
+-	local_irq_save(flags);
++	if (!*(u64 *)cntr_mask)
++		return;
+ 
+ 	if (x86_pmu.version >= 2) {
+ 		rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, ctrl);
+@@ -1555,7 +1560,7 @@ void perf_event_print_debug(void)
+ 	}
+ 	pr_info("CPU#%d: active:     %016llx\n", cpu, *(u64 *)cpuc->active_mask);
+ 
+-	for (idx = 0; idx < num_counters; idx++) {
++	for_each_set_bit(idx, cntr_mask, X86_PMC_IDX_MAX) {
+ 		rdmsrl(x86_pmu_config_addr(idx), pmc_ctrl);
+ 		rdmsrl(x86_pmu_event_addr(idx), pmc_count);
+ 
+@@ -1568,7 +1573,7 @@ void perf_event_print_debug(void)
+ 		pr_info("CPU#%d:   gen-PMC%d left:  %016llx\n",
+ 			cpu, idx, prev_left);
+ 	}
+-	for (idx = 0; idx < num_counters_fixed; idx++) {
++	for_each_set_bit(idx, fixed_cntr_mask, X86_PMC_IDX_MAX) {
+ 		if (fixed_counter_disabled(idx, cpuc->pmu))
+ 			continue;
+ 		rdmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + idx, pmc_count);
+@@ -1576,7 +1581,6 @@ void perf_event_print_debug(void)
+ 		pr_info("CPU#%d: fixed-PMC%d count: %016llx\n",
+ 			cpu, idx, pmc_count);
+ 	}
+-	local_irq_restore(flags);
+ }
+ 
+ void x86_pmu_stop(struct perf_event *event, int flags)
+@@ -1682,7 +1686,7 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
+ 	 */
+ 	apic_write(APIC_LVTPC, APIC_DM_NMI);
+ 
+-	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++	for_each_set_bit(idx, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		if (!test_bit(idx, cpuc->active_mask))
+ 			continue;
+ 
+@@ -2038,18 +2042,15 @@ static void _x86_pmu_read(struct perf_event *event)
+ 	static_call(x86_pmu_update)(event);
+ }
+ 
+-void x86_pmu_show_pmu_cap(int num_counters, int num_counters_fixed,
+-			  u64 intel_ctrl)
++void x86_pmu_show_pmu_cap(struct pmu *pmu)
+ {
+ 	pr_info("... version:                %d\n",     x86_pmu.version);
+ 	pr_info("... bit width:              %d\n",     x86_pmu.cntval_bits);
+-	pr_info("... generic registers:      %d\n",     num_counters);
++	pr_info("... generic registers:      %d\n",     x86_pmu_num_counters(pmu));
+ 	pr_info("... value mask:             %016Lx\n", x86_pmu.cntval_mask);
+ 	pr_info("... max period:             %016Lx\n", x86_pmu.max_period);
+-	pr_info("... fixed-purpose events:   %lu\n",
+-			hweight64((((1ULL << num_counters_fixed) - 1)
+-					<< INTEL_PMC_IDX_FIXED) & intel_ctrl));
+-	pr_info("... event mask:             %016Lx\n", intel_ctrl);
++	pr_info("... fixed-purpose events:   %d\n",     x86_pmu_num_counters_fixed(pmu));
++	pr_info("... event mask:             %016Lx\n", hybrid(pmu, intel_ctrl));
+ }
+ 
+ static int __init init_hw_perf_events(void)
+@@ -2086,7 +2087,7 @@ static int __init init_hw_perf_events(void)
+ 	pmu_check_apic();
+ 
+ 	/* sanity check that the hardware exists or is emulated */
+-	if (!check_hw_exists(&pmu, x86_pmu.num_counters, x86_pmu.num_counters_fixed))
++	if (!check_hw_exists(&pmu, x86_pmu.cntr_mask, x86_pmu.fixed_cntr_mask))
+ 		goto out_bad_pmu;
+ 
+ 	pr_cont("%s PMU driver.\n", x86_pmu.name);
+@@ -2097,14 +2098,14 @@ static int __init init_hw_perf_events(void)
+ 		quirk->func();
+ 
+ 	if (!x86_pmu.intel_ctrl)
+-		x86_pmu.intel_ctrl = (1 << x86_pmu.num_counters) - 1;
++		x86_pmu.intel_ctrl = x86_pmu.cntr_mask64;
+ 
+ 	perf_events_lapic_init();
+ 	register_nmi_handler(NMI_LOCAL, perf_event_nmi_handler, 0, "PMI");
+ 
+ 	unconstrained = (struct event_constraint)
+-		__EVENT_CONSTRAINT(0, (1ULL << x86_pmu.num_counters) - 1,
+-				   0, x86_pmu.num_counters, 0, 0);
++		__EVENT_CONSTRAINT(0, x86_pmu.cntr_mask64,
++				   0, x86_pmu_num_counters(NULL), 0, 0);
+ 
+ 	x86_pmu_format_group.attrs = x86_pmu.format_attrs;
+ 
+@@ -2113,11 +2114,8 @@ static int __init init_hw_perf_events(void)
+ 
+ 	pmu.attr_update = x86_pmu.attr_update;
+ 
+-	if (!is_hybrid()) {
+-		x86_pmu_show_pmu_cap(x86_pmu.num_counters,
+-				     x86_pmu.num_counters_fixed,
+-				     x86_pmu.intel_ctrl);
+-	}
++	if (!is_hybrid())
++		x86_pmu_show_pmu_cap(NULL);
+ 
+ 	if (!x86_pmu.read)
+ 		x86_pmu.read = _x86_pmu_read;
+@@ -2481,7 +2479,7 @@ void perf_clear_dirty_counters(void)
+ 	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
+ 		if (i >= INTEL_PMC_IDX_FIXED) {
+ 			/* Metrics and fake events don't have corresponding HW counters. */
+-			if ((i - INTEL_PMC_IDX_FIXED) >= hybrid(cpuc->pmu, num_counters_fixed))
++			if (!test_bit(i - INTEL_PMC_IDX_FIXED, hybrid(cpuc->pmu, fixed_cntr_mask)))
+ 				continue;
+ 
+ 			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
+@@ -2986,8 +2984,8 @@ void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
+ 	 * base PMU holds the correct number of counters for P-cores.
+ 	 */
+ 	cap->version		= x86_pmu.version;
+-	cap->num_counters_gp	= x86_pmu.num_counters;
+-	cap->num_counters_fixed	= x86_pmu.num_counters_fixed;
++	cap->num_counters_gp	= x86_pmu_num_counters(NULL);
++	cap->num_counters_fixed	= x86_pmu_num_counters_fixed(NULL);
+ 	cap->bit_width_gp	= x86_pmu.cntval_bits;
+ 	cap->bit_width_fixed	= x86_pmu.cntval_bits;
+ 	cap->events_mask	= (unsigned int)x86_pmu.events_maskl;
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 101a21fe9c213..f25205d047e83 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -2874,23 +2874,23 @@ static void intel_pmu_reset(void)
+ {
+ 	struct debug_store *ds = __this_cpu_read(cpu_hw_events.ds);
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+-	int num_counters_fixed = hybrid(cpuc->pmu, num_counters_fixed);
+-	int num_counters = hybrid(cpuc->pmu, num_counters);
++	unsigned long *cntr_mask = hybrid(cpuc->pmu, cntr_mask);
++	unsigned long *fixed_cntr_mask = hybrid(cpuc->pmu, fixed_cntr_mask);
+ 	unsigned long flags;
+ 	int idx;
+ 
+-	if (!num_counters)
++	if (!*(u64 *)cntr_mask)
+ 		return;
+ 
+ 	local_irq_save(flags);
+ 
+ 	pr_info("clearing PMU state on CPU#%d\n", smp_processor_id());
+ 
+-	for (idx = 0; idx < num_counters; idx++) {
++	for_each_set_bit(idx, cntr_mask, INTEL_PMC_MAX_GENERIC) {
+ 		wrmsrl_safe(x86_pmu_config_addr(idx), 0ull);
+ 		wrmsrl_safe(x86_pmu_event_addr(idx),  0ull);
+ 	}
+-	for (idx = 0; idx < num_counters_fixed; idx++) {
++	for_each_set_bit(idx, fixed_cntr_mask, INTEL_PMC_MAX_FIXED) {
+ 		if (fixed_counter_disabled(idx, cpuc->pmu))
+ 			continue;
+ 		wrmsrl_safe(MSR_ARCH_PERFMON_FIXED_CTR0 + idx, 0ull);
+@@ -2940,8 +2940,7 @@ static void x86_pmu_handle_guest_pebs(struct pt_regs *regs,
+ 	    !guest_pebs_idxs)
+ 		return;
+ 
+-	for_each_set_bit(bit, (unsigned long *)&guest_pebs_idxs,
+-			 INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed) {
++	for_each_set_bit(bit, (unsigned long *)&guest_pebs_idxs, X86_PMC_IDX_MAX) {
+ 		event = cpuc->events[bit];
+ 		if (!event->attr.precise_ip)
+ 			continue;
+@@ -4199,7 +4198,7 @@ static struct perf_guest_switch_msr *core_guest_get_msrs(int *nr, void *data)
+ 	struct perf_guest_switch_msr *arr = cpuc->guest_switch_msrs;
+ 	int idx;
+ 
+-	for (idx = 0; idx < x86_pmu.num_counters; idx++)  {
++	for_each_set_bit(idx, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		struct perf_event *event = cpuc->events[idx];
+ 
+ 		arr[idx].msr = x86_pmu_config_addr(idx);
+@@ -4217,7 +4216,7 @@ static struct perf_guest_switch_msr *core_guest_get_msrs(int *nr, void *data)
+ 			arr[idx].guest &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
+ 	}
+ 
+-	*nr = x86_pmu.num_counters;
++	*nr = x86_pmu_max_num_counters(cpuc->pmu);
+ 	return arr;
+ }
+ 
+@@ -4232,7 +4231,7 @@ static void core_pmu_enable_all(int added)
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ 	int idx;
+ 
+-	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++	for_each_set_bit(idx, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		struct hw_perf_event *hwc = &cpuc->events[idx]->hw;
+ 
+ 		if (!test_bit(idx, cpuc->active_mask) ||
+@@ -4684,13 +4683,33 @@ static void flip_smm_bit(void *data)
+ 	}
+ }
+ 
+-static void intel_pmu_check_num_counters(int *num_counters,
+-					 int *num_counters_fixed,
+-					 u64 *intel_ctrl, u64 fixed_mask);
++static void intel_pmu_check_counters_mask(u64 *cntr_mask,
++					  u64 *fixed_cntr_mask,
++					  u64 *intel_ctrl)
++{
++	unsigned int bit;
++
++	bit = fls64(*cntr_mask);
++	if (bit > INTEL_PMC_MAX_GENERIC) {
++		WARN(1, KERN_ERR "hw perf events %d > max(%d), clipping!",
++		     bit, INTEL_PMC_MAX_GENERIC);
++		*cntr_mask &= GENMASK_ULL(INTEL_PMC_MAX_GENERIC - 1, 0);
++	}
++	*intel_ctrl = *cntr_mask;
++
++	bit = fls64(*fixed_cntr_mask);
++	if (bit > INTEL_PMC_MAX_FIXED) {
++		WARN(1, KERN_ERR "hw perf events fixed %d > max(%d), clipping!",
++		     bit, INTEL_PMC_MAX_FIXED);
++		*fixed_cntr_mask &= GENMASK_ULL(INTEL_PMC_MAX_FIXED - 1, 0);
++	}
++
++	*intel_ctrl |= *fixed_cntr_mask << INTEL_PMC_IDX_FIXED;
++}
+ 
+ static void intel_pmu_check_event_constraints(struct event_constraint *event_constraints,
+-					      int num_counters,
+-					      int num_counters_fixed,
++					      u64 cntr_mask,
++					      u64 fixed_cntr_mask,
+ 					      u64 intel_ctrl);
+ 
+ static void intel_pmu_check_extra_regs(struct extra_reg *extra_regs);
+@@ -4713,11 +4732,10 @@ static void update_pmu_cap(struct x86_hybrid_pmu *pmu)
+ 	if (sub_bitmaps & ARCH_PERFMON_NUM_COUNTER_LEAF_BIT) {
+ 		cpuid_count(ARCH_PERFMON_EXT_LEAF, ARCH_PERFMON_NUM_COUNTER_LEAF,
+ 			    &eax, &ebx, &ecx, &edx);
+-		pmu->num_counters = fls(eax);
+-		pmu->num_counters_fixed = fls(ebx);
++		pmu->cntr_mask64 = eax;
++		pmu->fixed_cntr_mask64 = ebx;
+ 	}
+ 
+-
+ 	if (!intel_pmu_broken_perf_cap()) {
+ 		/* Perf Metric (Bit 15) and PEBS via PT (Bit 16) are hybrid enumeration */
+ 		rdmsrl(MSR_IA32_PERF_CAPABILITIES, pmu->intel_cap.capabilities);
+@@ -4726,12 +4744,12 @@ static void update_pmu_cap(struct x86_hybrid_pmu *pmu)
+ 
+ static void intel_pmu_check_hybrid_pmus(struct x86_hybrid_pmu *pmu)
+ {
+-	intel_pmu_check_num_counters(&pmu->num_counters, &pmu->num_counters_fixed,
+-				     &pmu->intel_ctrl, (1ULL << pmu->num_counters_fixed) - 1);
+-	pmu->max_pebs_events = min_t(unsigned, MAX_PEBS_EVENTS, pmu->num_counters);
++	intel_pmu_check_counters_mask(&pmu->cntr_mask64, &pmu->fixed_cntr_mask64,
++				      &pmu->intel_ctrl);
++	pmu->pebs_events_mask = intel_pmu_pebs_mask(pmu->cntr_mask64);
+ 	pmu->unconstrained = (struct event_constraint)
+-			     __EVENT_CONSTRAINT(0, (1ULL << pmu->num_counters) - 1,
+-						0, pmu->num_counters, 0, 0);
++			     __EVENT_CONSTRAINT(0, pmu->cntr_mask64,
++						0, x86_pmu_num_counters(&pmu->pmu), 0, 0);
+ 
+ 	if (pmu->intel_cap.perf_metrics)
+ 		pmu->intel_ctrl |= 1ULL << GLOBAL_CTRL_EN_PERF_METRICS;
+@@ -4744,8 +4762,8 @@ static void intel_pmu_check_hybrid_pmus(struct x86_hybrid_pmu *pmu)
+ 		pmu->pmu.capabilities &= ~PERF_PMU_CAP_AUX_OUTPUT;
+ 
+ 	intel_pmu_check_event_constraints(pmu->event_constraints,
+-					  pmu->num_counters,
+-					  pmu->num_counters_fixed,
++					  pmu->cntr_mask64,
++					  pmu->fixed_cntr_mask64,
+ 					  pmu->intel_ctrl);
+ 
+ 	intel_pmu_check_extra_regs(pmu->extra_regs);
+@@ -4806,7 +4824,7 @@ static bool init_hybrid_pmu(int cpu)
+ 
+ 	intel_pmu_check_hybrid_pmus(pmu);
+ 
+-	if (!check_hw_exists(&pmu->pmu, pmu->num_counters, pmu->num_counters_fixed))
++	if (!check_hw_exists(&pmu->pmu, pmu->cntr_mask, pmu->fixed_cntr_mask))
+ 		return false;
+ 
+ 	pr_info("%s PMU driver: ", pmu->name);
+@@ -4816,8 +4834,7 @@ static bool init_hybrid_pmu(int cpu)
+ 
+ 	pr_cont("\n");
+ 
+-	x86_pmu_show_pmu_cap(pmu->num_counters, pmu->num_counters_fixed,
+-			     pmu->intel_ctrl);
++	x86_pmu_show_pmu_cap(&pmu->pmu);
+ 
+ end:
+ 	cpumask_set_cpu(cpu, &pmu->supported_cpus);
+@@ -5955,29 +5972,9 @@ static const struct attribute_group *hybrid_attr_update[] = {
+ 
+ static struct attribute *empty_attrs;
+ 
+-static void intel_pmu_check_num_counters(int *num_counters,
+-					 int *num_counters_fixed,
+-					 u64 *intel_ctrl, u64 fixed_mask)
+-{
+-	if (*num_counters > INTEL_PMC_MAX_GENERIC) {
+-		WARN(1, KERN_ERR "hw perf events %d > max(%d), clipping!",
+-		     *num_counters, INTEL_PMC_MAX_GENERIC);
+-		*num_counters = INTEL_PMC_MAX_GENERIC;
+-	}
+-	*intel_ctrl = (1ULL << *num_counters) - 1;
+-
+-	if (*num_counters_fixed > INTEL_PMC_MAX_FIXED) {
+-		WARN(1, KERN_ERR "hw perf events fixed %d > max(%d), clipping!",
+-		     *num_counters_fixed, INTEL_PMC_MAX_FIXED);
+-		*num_counters_fixed = INTEL_PMC_MAX_FIXED;
+-	}
+-
+-	*intel_ctrl |= fixed_mask << INTEL_PMC_IDX_FIXED;
+-}
+-
+ static void intel_pmu_check_event_constraints(struct event_constraint *event_constraints,
+-					      int num_counters,
+-					      int num_counters_fixed,
++					      u64 cntr_mask,
++					      u64 fixed_cntr_mask,
+ 					      u64 intel_ctrl)
+ {
+ 	struct event_constraint *c;
+@@ -6014,10 +6011,9 @@ static void intel_pmu_check_event_constraints(struct event_constraint *event_con
+ 			 * generic counters
+ 			 */
+ 			if (!use_fixed_pseudo_encoding(c->code))
+-				c->idxmsk64 |= (1ULL << num_counters) - 1;
++				c->idxmsk64 |= cntr_mask;
+ 		}
+-		c->idxmsk64 &=
+-			~(~0ULL << (INTEL_PMC_IDX_FIXED + num_counters_fixed));
++		c->idxmsk64 &= cntr_mask | (fixed_cntr_mask << INTEL_PMC_IDX_FIXED);
+ 		c->weight = hweight64(c->idxmsk64);
+ 	}
+ }
+@@ -6068,12 +6064,12 @@ static __always_inline int intel_pmu_init_hybrid(enum hybrid_pmu_type pmus)
+ 		pmu->pmu_type = intel_hybrid_pmu_type_map[bit].id;
+ 		pmu->name = intel_hybrid_pmu_type_map[bit].name;
+ 
+-		pmu->num_counters = x86_pmu.num_counters;
+-		pmu->num_counters_fixed = x86_pmu.num_counters_fixed;
+-		pmu->max_pebs_events = min_t(unsigned, MAX_PEBS_EVENTS, pmu->num_counters);
++		pmu->cntr_mask64 = x86_pmu.cntr_mask64;
++		pmu->fixed_cntr_mask64 = x86_pmu.fixed_cntr_mask64;
++		pmu->pebs_events_mask = intel_pmu_pebs_mask(pmu->cntr_mask64);
+ 		pmu->unconstrained = (struct event_constraint)
+-				     __EVENT_CONSTRAINT(0, (1ULL << pmu->num_counters) - 1,
+-							0, pmu->num_counters, 0, 0);
++				     __EVENT_CONSTRAINT(0, pmu->cntr_mask64,
++							0, x86_pmu_num_counters(&pmu->pmu), 0, 0);
+ 
+ 		pmu->intel_cap.capabilities = x86_pmu.intel_cap.capabilities;
+ 		if (pmu->pmu_type & hybrid_small) {
+@@ -6186,14 +6182,14 @@ __init int intel_pmu_init(void)
+ 		x86_pmu = intel_pmu;
+ 
+ 	x86_pmu.version			= version;
+-	x86_pmu.num_counters		= eax.split.num_counters;
++	x86_pmu.cntr_mask64		= GENMASK_ULL(eax.split.num_counters - 1, 0);
+ 	x86_pmu.cntval_bits		= eax.split.bit_width;
+ 	x86_pmu.cntval_mask		= (1ULL << eax.split.bit_width) - 1;
+ 
+ 	x86_pmu.events_maskl		= ebx.full;
+ 	x86_pmu.events_mask_len		= eax.split.mask_length;
+ 
+-	x86_pmu.max_pebs_events		= min_t(unsigned, MAX_PEBS_EVENTS, x86_pmu.num_counters);
++	x86_pmu.pebs_events_mask	= intel_pmu_pebs_mask(x86_pmu.cntr_mask64);
+ 	x86_pmu.pebs_capable		= PEBS_COUNTER_MASK;
+ 
+ 	/*
+@@ -6203,12 +6199,10 @@ __init int intel_pmu_init(void)
+ 	if (version > 1 && version < 5) {
+ 		int assume = 3 * !boot_cpu_has(X86_FEATURE_HYPERVISOR);
+ 
+-		x86_pmu.num_counters_fixed =
+-			max((int)edx.split.num_counters_fixed, assume);
+-
+-		fixed_mask = (1L << x86_pmu.num_counters_fixed) - 1;
++		x86_pmu.fixed_cntr_mask64 =
++			GENMASK_ULL(max((int)edx.split.num_counters_fixed, assume) - 1, 0);
+ 	} else if (version >= 5)
+-		x86_pmu.num_counters_fixed = fls(fixed_mask);
++		x86_pmu.fixed_cntr_mask64 = fixed_mask;
+ 
+ 	if (boot_cpu_has(X86_FEATURE_PDCM)) {
+ 		u64 capabilities;
+@@ -6807,11 +6801,13 @@ __init int intel_pmu_init(void)
+ 		pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
+ 		intel_pmu_init_glc(&pmu->pmu);
+ 		if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) {
+-			pmu->num_counters = x86_pmu.num_counters + 2;
+-			pmu->num_counters_fixed = x86_pmu.num_counters_fixed + 1;
++			pmu->cntr_mask64 <<= 2;
++			pmu->cntr_mask64 |= 0x3;
++			pmu->fixed_cntr_mask64 <<= 1;
++			pmu->fixed_cntr_mask64 |= 0x1;
+ 		} else {
+-			pmu->num_counters = x86_pmu.num_counters;
+-			pmu->num_counters_fixed = x86_pmu.num_counters_fixed;
++			pmu->cntr_mask64 = x86_pmu.cntr_mask64;
++			pmu->fixed_cntr_mask64 = x86_pmu.fixed_cntr_mask64;
+ 		}
+ 
+ 		/*
+@@ -6821,15 +6817,16 @@ __init int intel_pmu_init(void)
+ 		 * mistakenly add extra counters for P-cores. Correct the number of
+ 		 * counters here.
+ 		 */
+-		if ((pmu->num_counters > 8) || (pmu->num_counters_fixed > 4)) {
+-			pmu->num_counters = x86_pmu.num_counters;
+-			pmu->num_counters_fixed = x86_pmu.num_counters_fixed;
++		if ((x86_pmu_num_counters(&pmu->pmu) > 8) || (x86_pmu_num_counters_fixed(&pmu->pmu) > 4)) {
++			pmu->cntr_mask64 = x86_pmu.cntr_mask64;
++			pmu->fixed_cntr_mask64 = x86_pmu.fixed_cntr_mask64;
+ 		}
+ 
+-		pmu->max_pebs_events = min_t(unsigned, MAX_PEBS_EVENTS, pmu->num_counters);
++		pmu->pebs_events_mask = intel_pmu_pebs_mask(pmu->cntr_mask64);
+ 		pmu->unconstrained = (struct event_constraint)
+-					__EVENT_CONSTRAINT(0, (1ULL << pmu->num_counters) - 1,
+-							   0, pmu->num_counters, 0, 0);
++				     __EVENT_CONSTRAINT(0, pmu->cntr_mask64,
++				     0, x86_pmu_num_counters(&pmu->pmu), 0, 0);
++
+ 		pmu->extra_regs = intel_glc_extra_regs;
+ 
+ 		/* Initialize Atom core specific PerfMon capabilities.*/
+@@ -6896,9 +6893,9 @@ __init int intel_pmu_init(void)
+ 			 * The constraints may be cut according to the CPUID enumeration
+ 			 * by inserting the EVENT_CONSTRAINT_END.
+ 			 */
+-			if (x86_pmu.num_counters_fixed > INTEL_PMC_MAX_FIXED)
+-				x86_pmu.num_counters_fixed = INTEL_PMC_MAX_FIXED;
+-			intel_v5_gen_event_constraints[x86_pmu.num_counters_fixed].weight = -1;
++			if (fls64(x86_pmu.fixed_cntr_mask64) > INTEL_PMC_MAX_FIXED)
++				x86_pmu.fixed_cntr_mask64 &= GENMASK_ULL(INTEL_PMC_MAX_FIXED - 1, 0);
++			intel_v5_gen_event_constraints[fls64(x86_pmu.fixed_cntr_mask64)].weight = -1;
+ 			x86_pmu.event_constraints = intel_v5_gen_event_constraints;
+ 			pr_cont("generic architected perfmon, ");
+ 			name = "generic_arch_v5+";
+@@ -6925,18 +6922,17 @@ __init int intel_pmu_init(void)
+ 		x86_pmu.attr_update = hybrid_attr_update;
+ 	}
+ 
+-	intel_pmu_check_num_counters(&x86_pmu.num_counters,
+-				     &x86_pmu.num_counters_fixed,
+-				     &x86_pmu.intel_ctrl,
+-				     (u64)fixed_mask);
++	intel_pmu_check_counters_mask(&x86_pmu.cntr_mask64,
++				      &x86_pmu.fixed_cntr_mask64,
++				      &x86_pmu.intel_ctrl);
+ 
+ 	/* AnyThread may be deprecated on arch perfmon v5 or later */
+ 	if (x86_pmu.intel_cap.anythread_deprecated)
+ 		x86_pmu.format_attrs = intel_arch_formats_attr;
+ 
+ 	intel_pmu_check_event_constraints(x86_pmu.event_constraints,
+-					  x86_pmu.num_counters,
+-					  x86_pmu.num_counters_fixed,
++					  x86_pmu.cntr_mask64,
++					  x86_pmu.fixed_cntr_mask64,
+ 					  x86_pmu.intel_ctrl);
+ 	/*
+ 	 * Access LBR MSR may cause #GP under certain circumstances.
+diff --git a/arch/x86/events/intel/cstate.c b/arch/x86/events/intel/cstate.c
+index dd18320558914..9f116dfc47284 100644
+--- a/arch/x86/events/intel/cstate.c
++++ b/arch/x86/events/intel/cstate.c
+@@ -41,7 +41,7 @@
+  *	MSR_CORE_C1_RES: CORE C1 Residency Counter
+  *			 perf code: 0x00
+  *			 Available model: SLM,AMT,GLM,CNL,ICX,TNT,ADL,RPL
+- *					  MTL,SRF,GRR
++ *					  MTL,SRF,GRR,ARL,LNL
+  *			 Scope: Core (each processor core has a MSR)
+  *	MSR_CORE_C3_RESIDENCY: CORE C3 Residency Counter
+  *			       perf code: 0x01
+@@ -53,30 +53,31 @@
+  *			       Available model: SLM,AMT,NHM,WSM,SNB,IVB,HSW,BDW,
+  *						SKL,KNL,GLM,CNL,KBL,CML,ICL,ICX,
+  *						TGL,TNT,RKL,ADL,RPL,SPR,MTL,SRF,
+- *						GRR
++ *						GRR,ARL,LNL
+  *			       Scope: Core
+  *	MSR_CORE_C7_RESIDENCY: CORE C7 Residency Counter
+  *			       perf code: 0x03
+  *			       Available model: SNB,IVB,HSW,BDW,SKL,CNL,KBL,CML,
+- *						ICL,TGL,RKL,ADL,RPL,MTL
++ *						ICL,TGL,RKL,ADL,RPL,MTL,ARL,LNL
+  *			       Scope: Core
+  *	MSR_PKG_C2_RESIDENCY:  Package C2 Residency Counter.
+  *			       perf code: 0x00
+  *			       Available model: SNB,IVB,HSW,BDW,SKL,KNL,GLM,CNL,
+  *						KBL,CML,ICL,ICX,TGL,TNT,RKL,ADL,
+- *						RPL,SPR,MTL
++ *						RPL,SPR,MTL,ARL,LNL,SRF
+  *			       Scope: Package (physical package)
+  *	MSR_PKG_C3_RESIDENCY:  Package C3 Residency Counter.
+  *			       perf code: 0x01
+  *			       Available model: NHM,WSM,SNB,IVB,HSW,BDW,SKL,KNL,
+  *						GLM,CNL,KBL,CML,ICL,TGL,TNT,RKL,
+- *						ADL,RPL,MTL
++ *						ADL,RPL,MTL,ARL,LNL
+  *			       Scope: Package (physical package)
+  *	MSR_PKG_C6_RESIDENCY:  Package C6 Residency Counter.
+  *			       perf code: 0x02
+  *			       Available model: SLM,AMT,NHM,WSM,SNB,IVB,HSW,BDW,
+  *						SKL,KNL,GLM,CNL,KBL,CML,ICL,ICX,
+- *						TGL,TNT,RKL,ADL,RPL,SPR,MTL,SRF
++ *						TGL,TNT,RKL,ADL,RPL,SPR,MTL,SRF,
++ *						ARL,LNL
+  *			       Scope: Package (physical package)
+  *	MSR_PKG_C7_RESIDENCY:  Package C7 Residency Counter.
+  *			       perf code: 0x03
+@@ -86,7 +87,7 @@
+  *	MSR_PKG_C8_RESIDENCY:  Package C8 Residency Counter.
+  *			       perf code: 0x04
+  *			       Available model: HSW ULT,KBL,CNL,CML,ICL,TGL,RKL,
+- *						ADL,RPL,MTL
++ *						ADL,RPL,MTL,ARL
+  *			       Scope: Package (physical package)
+  *	MSR_PKG_C9_RESIDENCY:  Package C9 Residency Counter.
+  *			       perf code: 0x05
+@@ -95,7 +96,7 @@
+  *	MSR_PKG_C10_RESIDENCY: Package C10 Residency Counter.
+  *			       perf code: 0x06
+  *			       Available model: HSW ULT,KBL,GLM,CNL,CML,ICL,TGL,
+- *						TNT,RKL,ADL,RPL,MTL
++ *						TNT,RKL,ADL,RPL,MTL,ARL,LNL
+  *			       Scope: Package (physical package)
+  *	MSR_MODULE_C6_RES_MS:  Module C6 Residency Counter.
+  *			       perf code: 0x00
+@@ -640,6 +641,17 @@ static const struct cstate_model adl_cstates __initconst = {
+ 				  BIT(PERF_CSTATE_PKG_C10_RES),
+ };
+ 
++static const struct cstate_model lnl_cstates __initconst = {
++	.core_events		= BIT(PERF_CSTATE_CORE_C1_RES) |
++				  BIT(PERF_CSTATE_CORE_C6_RES) |
++				  BIT(PERF_CSTATE_CORE_C7_RES),
++
++	.pkg_events		= BIT(PERF_CSTATE_PKG_C2_RES) |
++				  BIT(PERF_CSTATE_PKG_C3_RES) |
++				  BIT(PERF_CSTATE_PKG_C6_RES) |
++				  BIT(PERF_CSTATE_PKG_C10_RES),
++};
++
+ static const struct cstate_model slm_cstates __initconst = {
+ 	.core_events		= BIT(PERF_CSTATE_CORE_C1_RES) |
+ 				  BIT(PERF_CSTATE_CORE_C6_RES),
+@@ -681,7 +693,8 @@ static const struct cstate_model srf_cstates __initconst = {
+ 	.core_events		= BIT(PERF_CSTATE_CORE_C1_RES) |
+ 				  BIT(PERF_CSTATE_CORE_C6_RES),
+ 
+-	.pkg_events		= BIT(PERF_CSTATE_PKG_C6_RES),
++	.pkg_events		= BIT(PERF_CSTATE_PKG_C2_RES) |
++				  BIT(PERF_CSTATE_PKG_C6_RES),
+ 
+ 	.module_events		= BIT(PERF_CSTATE_MODULE_C6_RES),
+ };
+@@ -760,6 +773,10 @@ static const struct x86_cpu_id intel_cstates_match[] __initconst = {
+ 	X86_MATCH_VFM(INTEL_RAPTORLAKE_S,	&adl_cstates),
+ 	X86_MATCH_VFM(INTEL_METEORLAKE,		&adl_cstates),
+ 	X86_MATCH_VFM(INTEL_METEORLAKE_L,	&adl_cstates),
++	X86_MATCH_VFM(INTEL_ARROWLAKE,		&adl_cstates),
++	X86_MATCH_VFM(INTEL_ARROWLAKE_H,	&adl_cstates),
++	X86_MATCH_VFM(INTEL_ARROWLAKE_U,	&adl_cstates),
++	X86_MATCH_VFM(INTEL_LUNARLAKE_M,	&lnl_cstates),
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(x86cpu, intel_cstates_match);
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 80a4f712217b7..9212053f6f1d6 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -1137,8 +1137,7 @@ void intel_pmu_pebs_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sche
+ static inline void pebs_update_threshold(struct cpu_hw_events *cpuc)
+ {
+ 	struct debug_store *ds = cpuc->ds;
+-	int max_pebs_events = hybrid(cpuc->pmu, max_pebs_events);
+-	int num_counters_fixed = hybrid(cpuc->pmu, num_counters_fixed);
++	int max_pebs_events = intel_pmu_max_num_pebs(cpuc->pmu);
+ 	u64 threshold;
+ 	int reserved;
+ 
+@@ -1146,7 +1145,7 @@ static inline void pebs_update_threshold(struct cpu_hw_events *cpuc)
+ 		return;
+ 
+ 	if (x86_pmu.flags & PMU_FL_PEBS_ALL)
+-		reserved = max_pebs_events + num_counters_fixed;
++		reserved = max_pebs_events + x86_pmu_max_num_counters_fixed(cpuc->pmu);
+ 	else
+ 		reserved = max_pebs_events;
+ 
+@@ -2161,6 +2160,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
+ 	void *base, *at, *top;
+ 	short counts[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
+ 	short error[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
++	int max_pebs_events = intel_pmu_max_num_pebs(NULL);
+ 	int bit, i, size;
+ 	u64 mask;
+ 
+@@ -2172,11 +2172,11 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
+ 
+ 	ds->pebs_index = ds->pebs_buffer_base;
+ 
+-	mask = (1ULL << x86_pmu.max_pebs_events) - 1;
+-	size = x86_pmu.max_pebs_events;
++	mask = x86_pmu.pebs_events_mask;
++	size = max_pebs_events;
+ 	if (x86_pmu.flags & PMU_FL_PEBS_ALL) {
+-		mask |= ((1ULL << x86_pmu.num_counters_fixed) - 1) << INTEL_PMC_IDX_FIXED;
+-		size = INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed;
++		mask |= x86_pmu.fixed_cntr_mask64 << INTEL_PMC_IDX_FIXED;
++		size = INTEL_PMC_IDX_FIXED + x86_pmu_max_num_counters_fixed(NULL);
+ 	}
+ 
+ 	if (unlikely(base >= top)) {
+@@ -2212,8 +2212,9 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
+ 			pebs_status = p->status = cpuc->pebs_enabled;
+ 
+ 		bit = find_first_bit((unsigned long *)&pebs_status,
+-					x86_pmu.max_pebs_events);
+-		if (bit >= x86_pmu.max_pebs_events)
++				     max_pebs_events);
++
++		if (!(x86_pmu.pebs_events_mask & (1 << bit)))
+ 			continue;
+ 
+ 		/*
+@@ -2271,12 +2272,10 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
+ {
+ 	short counts[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+-	int max_pebs_events = hybrid(cpuc->pmu, max_pebs_events);
+-	int num_counters_fixed = hybrid(cpuc->pmu, num_counters_fixed);
+ 	struct debug_store *ds = cpuc->ds;
+ 	struct perf_event *event;
+ 	void *base, *at, *top;
+-	int bit, size;
++	int bit;
+ 	u64 mask;
+ 
+ 	if (!x86_pmu.pebs_active)
+@@ -2287,12 +2286,11 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
+ 
+ 	ds->pebs_index = ds->pebs_buffer_base;
+ 
+-	mask = ((1ULL << max_pebs_events) - 1) |
+-	       (((1ULL << num_counters_fixed) - 1) << INTEL_PMC_IDX_FIXED);
+-	size = INTEL_PMC_IDX_FIXED + num_counters_fixed;
++	mask = hybrid(cpuc->pmu, pebs_events_mask) |
++	       (hybrid(cpuc->pmu, fixed_cntr_mask64) << INTEL_PMC_IDX_FIXED);
+ 
+ 	if (unlikely(base >= top)) {
+-		intel_pmu_pebs_event_update_no_drain(cpuc, size);
++		intel_pmu_pebs_event_update_no_drain(cpuc, X86_PMC_IDX_MAX);
+ 		return;
+ 	}
+ 
+@@ -2302,11 +2300,11 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
+ 		pebs_status = get_pebs_status(at) & cpuc->pebs_enabled;
+ 		pebs_status &= mask;
+ 
+-		for_each_set_bit(bit, (unsigned long *)&pebs_status, size)
++		for_each_set_bit(bit, (unsigned long *)&pebs_status, X86_PMC_IDX_MAX)
+ 			counts[bit]++;
+ 	}
+ 
+-	for_each_set_bit(bit, (unsigned long *)&mask, size) {
++	for_each_set_bit(bit, (unsigned long *)&mask, X86_PMC_IDX_MAX) {
+ 		if (counts[bit] == 0)
+ 			continue;
+ 
+diff --git a/arch/x86/events/intel/knc.c b/arch/x86/events/intel/knc.c
+index 618001c208e81..034a1f6a457c6 100644
+--- a/arch/x86/events/intel/knc.c
++++ b/arch/x86/events/intel/knc.c
+@@ -303,7 +303,7 @@ static const struct x86_pmu knc_pmu __initconst = {
+ 	.apic			= 1,
+ 	.max_period		= (1ULL << 39) - 1,
+ 	.version		= 0,
+-	.num_counters		= 2,
++	.cntr_mask64		= 0x3,
+ 	.cntval_bits		= 40,
+ 	.cntval_mask		= (1ULL << 40) - 1,
+ 	.get_event_constraints	= x86_get_event_constraints,
+diff --git a/arch/x86/events/intel/p4.c b/arch/x86/events/intel/p4.c
+index 35936188db01b..844bc4fc4724d 100644
+--- a/arch/x86/events/intel/p4.c
++++ b/arch/x86/events/intel/p4.c
+@@ -919,7 +919,7 @@ static void p4_pmu_disable_all(void)
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ 	int idx;
+ 
+-	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++	for_each_set_bit(idx, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		struct perf_event *event = cpuc->events[idx];
+ 		if (!test_bit(idx, cpuc->active_mask))
+ 			continue;
+@@ -998,7 +998,7 @@ static void p4_pmu_enable_all(int added)
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ 	int idx;
+ 
+-	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++	for_each_set_bit(idx, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		struct perf_event *event = cpuc->events[idx];
+ 		if (!test_bit(idx, cpuc->active_mask))
+ 			continue;
+@@ -1040,7 +1040,7 @@ static int p4_pmu_handle_irq(struct pt_regs *regs)
+ 
+ 	cpuc = this_cpu_ptr(&cpu_hw_events);
+ 
+-	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++	for_each_set_bit(idx, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		int overflow;
+ 
+ 		if (!test_bit(idx, cpuc->active_mask)) {
+@@ -1353,7 +1353,7 @@ static __initconst const struct x86_pmu p4_pmu = {
+ 	 * though leave it restricted at moment assuming
+ 	 * HT is on
+ 	 */
+-	.num_counters		= ARCH_P4_MAX_CCCR,
++	.cntr_mask64		= GENMASK_ULL(ARCH_P4_MAX_CCCR - 1, 0),
+ 	.apic			= 1,
+ 	.cntval_bits		= ARCH_P4_CNTRVAL_BITS,
+ 	.cntval_mask		= ARCH_P4_CNTRVAL_MASK,
+@@ -1395,7 +1395,7 @@ __init int p4_pmu_init(void)
+ 	 *
+ 	 * Solve this by zero'ing out the registers to mimic a reset.
+ 	 */
+-	for (i = 0; i < x86_pmu.num_counters; i++) {
++	for_each_set_bit(i, x86_pmu.cntr_mask, X86_PMC_IDX_MAX) {
+ 		reg = x86_pmu_config_addr(i);
+ 		wrmsrl_safe(reg, 0ULL);
+ 	}
+diff --git a/arch/x86/events/intel/p6.c b/arch/x86/events/intel/p6.c
+index 408879b0c0d4e..a6cffb4f4ef52 100644
+--- a/arch/x86/events/intel/p6.c
++++ b/arch/x86/events/intel/p6.c
+@@ -214,7 +214,7 @@ static __initconst const struct x86_pmu p6_pmu = {
+ 	.apic			= 1,
+ 	.max_period		= (1ULL << 31) - 1,
+ 	.version		= 0,
+-	.num_counters		= 2,
++	.cntr_mask64		= 0x3,
+ 	/*
+ 	 * Events have 40 bits implemented. However they are designed such
+ 	 * that bits [32-39] are sign extensions of bit 31. As such the
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 72b022a1e16c5..745c174fc8809 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -684,9 +684,15 @@ struct x86_hybrid_pmu {
+ 	cpumask_t			supported_cpus;
+ 	union perf_capabilities		intel_cap;
+ 	u64				intel_ctrl;
+-	int				max_pebs_events;
+-	int				num_counters;
+-	int				num_counters_fixed;
++	u64				pebs_events_mask;
++	union {
++			u64		cntr_mask64;
++			unsigned long	cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
++	};
++	union {
++			u64		fixed_cntr_mask64;
++			unsigned long	fixed_cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
++	};
+ 	struct event_constraint		unconstrained;
+ 
+ 	u64				hw_cache_event_ids
+@@ -774,8 +780,14 @@ struct x86_pmu {
+ 	int		(*rdpmc_index)(int index);
+ 	u64		(*event_map)(int);
+ 	int		max_events;
+-	int		num_counters;
+-	int		num_counters_fixed;
++	union {
++			u64		cntr_mask64;
++			unsigned long	cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
++	};
++	union {
++			u64		fixed_cntr_mask64;
++			unsigned long	fixed_cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
++	};
+ 	int		cntval_bits;
+ 	u64		cntval_mask;
+ 	union {
+@@ -852,7 +864,7 @@ struct x86_pmu {
+ 			pebs_ept		:1;
+ 	int		pebs_record_size;
+ 	int		pebs_buffer_size;
+-	int		max_pebs_events;
++	u64		pebs_events_mask;
+ 	void		(*drain_pebs)(struct pt_regs *regs, struct perf_sample_data *data);
+ 	struct event_constraint *pebs_constraints;
+ 	void		(*pebs_aliases)(struct perf_event *event);
+@@ -1125,8 +1137,8 @@ static inline int x86_pmu_rdpmc_index(int index)
+ 	return x86_pmu.rdpmc_index ? x86_pmu.rdpmc_index(index) : index;
+ }
+ 
+-bool check_hw_exists(struct pmu *pmu, int num_counters,
+-		     int num_counters_fixed);
++bool check_hw_exists(struct pmu *pmu, unsigned long *cntr_mask,
++		     unsigned long *fixed_cntr_mask);
+ 
+ int x86_add_exclusive(unsigned int what);
+ 
+@@ -1197,8 +1209,27 @@ void x86_pmu_enable_event(struct perf_event *event);
+ 
+ int x86_pmu_handle_irq(struct pt_regs *regs);
+ 
+-void x86_pmu_show_pmu_cap(int num_counters, int num_counters_fixed,
+-			  u64 intel_ctrl);
++void x86_pmu_show_pmu_cap(struct pmu *pmu);
++
++static inline int x86_pmu_num_counters(struct pmu *pmu)
++{
++	return hweight64(hybrid(pmu, cntr_mask64));
++}
++
++static inline int x86_pmu_max_num_counters(struct pmu *pmu)
++{
++	return fls64(hybrid(pmu, cntr_mask64));
++}
++
++static inline int x86_pmu_num_counters_fixed(struct pmu *pmu)
++{
++	return hweight64(hybrid(pmu, fixed_cntr_mask64));
++}
++
++static inline int x86_pmu_max_num_counters_fixed(struct pmu *pmu)
++{
++	return fls64(hybrid(pmu, fixed_cntr_mask64));
++}
+ 
+ extern struct event_constraint emptyconstraint;
+ 
+@@ -1661,6 +1692,17 @@ static inline int is_ht_workaround_enabled(void)
+ 	return !!(x86_pmu.flags & PMU_FL_EXCL_ENABLED);
+ }
+ 
++static inline u64 intel_pmu_pebs_mask(u64 cntr_mask)
++{
++	return MAX_PEBS_EVENTS_MASK & cntr_mask;
++}
++
++static inline int intel_pmu_max_num_pebs(struct pmu *pmu)
++{
++	static_assert(MAX_PEBS_EVENTS == 32);
++	return fls((u32)hybrid(pmu, pebs_events_mask));
++}
++
+ #else /* CONFIG_CPU_SUP_INTEL */
+ 
+ static inline void reserve_ds_buffers(void)
+diff --git a/arch/x86/events/zhaoxin/core.c b/arch/x86/events/zhaoxin/core.c
+index 3e9acdaeed1ec..2fd9b0cf9a5e5 100644
+--- a/arch/x86/events/zhaoxin/core.c
++++ b/arch/x86/events/zhaoxin/core.c
+@@ -530,13 +530,13 @@ __init int zhaoxin_pmu_init(void)
+ 	pr_info("Version check pass!\n");
+ 
+ 	x86_pmu.version			= version;
+-	x86_pmu.num_counters		= eax.split.num_counters;
++	x86_pmu.cntr_mask64		= GENMASK_ULL(eax.split.num_counters - 1, 0);
+ 	x86_pmu.cntval_bits		= eax.split.bit_width;
+ 	x86_pmu.cntval_mask		= (1ULL << eax.split.bit_width) - 1;
+ 	x86_pmu.events_maskl		= ebx.full;
+ 	x86_pmu.events_mask_len		= eax.split.mask_length;
+ 
+-	x86_pmu.num_counters_fixed = edx.split.num_counters_fixed;
++	x86_pmu.fixed_cntr_mask64	= GENMASK_ULL(edx.split.num_counters_fixed - 1, 0);
+ 	x86_add_quirk(zhaoxin_arch_events_quirk);
+ 
+ 	switch (boot_cpu_data.x86) {
+@@ -604,13 +604,13 @@ __init int zhaoxin_pmu_init(void)
+ 		return -ENODEV;
+ 	}
+ 
+-	x86_pmu.intel_ctrl = (1 << (x86_pmu.num_counters)) - 1;
+-	x86_pmu.intel_ctrl |= ((1LL << x86_pmu.num_counters_fixed)-1) << INTEL_PMC_IDX_FIXED;
++	x86_pmu.intel_ctrl = x86_pmu.cntr_mask64;
++	x86_pmu.intel_ctrl |= x86_pmu.fixed_cntr_mask64 << INTEL_PMC_IDX_FIXED;
+ 
+ 	if (x86_pmu.event_constraints) {
+ 		for_each_event_constraint(c, x86_pmu.event_constraints) {
+-			c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1;
+-			c->weight += x86_pmu.num_counters;
++			c->idxmsk64 |= x86_pmu.cntr_mask64;
++			c->weight += x86_pmu_num_counters(NULL);
+ 		}
+ 	}
+ 
+diff --git a/arch/x86/include/asm/intel_ds.h b/arch/x86/include/asm/intel_ds.h
+index 2f9eeb5c3069a..5dbeac48a5b93 100644
+--- a/arch/x86/include/asm/intel_ds.h
++++ b/arch/x86/include/asm/intel_ds.h
+@@ -9,6 +9,7 @@
+ /* The maximal number of PEBS events: */
+ #define MAX_PEBS_EVENTS_FMT4	8
+ #define MAX_PEBS_EVENTS		32
++#define MAX_PEBS_EVENTS_MASK	GENMASK_ULL(MAX_PEBS_EVENTS - 1, 0)
+ #define MAX_FIXED_PEBS_EVENTS	16
+ 
+ /*
+diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
+index a053c12939751..68da67df304d5 100644
+--- a/arch/x86/include/asm/qspinlock.h
++++ b/arch/x86/include/asm/qspinlock.h
+@@ -66,13 +66,15 @@ static inline bool vcpu_is_preempted(long cpu)
+ 
+ #ifdef CONFIG_PARAVIRT
+ /*
+- * virt_spin_lock_key - enables (by default) the virt_spin_lock() hijack.
++ * virt_spin_lock_key - disables by default the virt_spin_lock() hijack.
+  *
+- * Native (and PV wanting native due to vCPU pinning) should disable this key.
+- * It is done in this backwards fashion to only have a single direction change,
+- * which removes ordering between native_pv_spin_init() and HV setup.
++ * Native (and PV wanting native due to vCPU pinning) should keep this key
++ * disabled. Native does not touch the key.
++ *
++ * When in a guest then native_pv_lock_init() enables the key first and
++ * KVM/XEN might conditionally disable it later in the boot process again.
+  */
+-DECLARE_STATIC_KEY_TRUE(virt_spin_lock_key);
++DECLARE_STATIC_KEY_FALSE(virt_spin_lock_key);
+ 
+ /*
+  * Shortcut for the queued_spin_lock_slowpath() function that allows
+diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
+index 767bf1c71aadd..2a2fc14955cd3 100644
+--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
++++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
+@@ -609,7 +609,7 @@ void mtrr_save_state(void)
+ {
+ 	int first_cpu;
+ 
+-	if (!mtrr_enabled())
++	if (!mtrr_enabled() || !mtrr_state.have_fixed)
+ 		return;
+ 
+ 	first_cpu = cpumask_first(cpu_online_mask);
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 5358d43886adc..fec3815335558 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -51,13 +51,12 @@ DEFINE_ASM_FUNC(pv_native_irq_enable, "sti", .noinstr.text);
+ DEFINE_ASM_FUNC(pv_native_read_cr2, "mov %cr2, %rax", .noinstr.text);
+ #endif
+ 
+-DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
++DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);
+ 
+ void __init native_pv_lock_init(void)
+ {
+-	if (IS_ENABLED(CONFIG_PARAVIRT_SPINLOCKS) &&
+-	    !boot_cpu_has(X86_FEATURE_HYPERVISOR))
+-		static_branch_disable(&virt_spin_lock_key);
++	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
++		static_branch_enable(&virt_spin_lock_key);
+ }
+ 
+ static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 2e69abf4f852a..bfdf5f45b1370 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -374,14 +374,14 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
+ 			 */
+ 			*target_pmd = *pmd;
+ 
+-			addr += PMD_SIZE;
++			addr = round_up(addr + 1, PMD_SIZE);
+ 
+ 		} else if (level == PTI_CLONE_PTE) {
+ 
+ 			/* Walk the page-table down to the pte level */
+ 			pte = pte_offset_kernel(pmd, addr);
+ 			if (pte_none(*pte)) {
+-				addr += PAGE_SIZE;
++				addr = round_up(addr + 1, PAGE_SIZE);
+ 				continue;
+ 			}
+ 
+@@ -401,7 +401,7 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
+ 			/* Clone the PTE */
+ 			*target_pte = *pte;
+ 
+-			addr += PAGE_SIZE;
++			addr = round_up(addr + 1, PAGE_SIZE);
+ 
+ 		} else {
+ 			BUG();
+@@ -496,7 +496,7 @@ static void pti_clone_entry_text(void)
+ {
+ 	pti_clone_pgtable((unsigned long) __entry_text_start,
+ 			  (unsigned long) __entry_text_end,
+-			  PTI_CLONE_PMD);
++			  PTI_LEVEL_KERNEL_IMAGE);
+ }
+ 
+ /*
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index b379401ff1c20..44ca989f16466 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -678,12 +678,18 @@ static ssize_t acpi_battery_alarm_store(struct device *dev,
+ 	return count;
+ }
+ 
+-static const struct device_attribute alarm_attr = {
++static struct device_attribute alarm_attr = {
+ 	.attr = {.name = "alarm", .mode = 0644},
+ 	.show = acpi_battery_alarm_show,
+ 	.store = acpi_battery_alarm_store,
+ };
+ 
++static struct attribute *acpi_battery_attrs[] = {
++	&alarm_attr.attr,
++	NULL
++};
++ATTRIBUTE_GROUPS(acpi_battery);
++
+ /*
+  * The Battery Hooking API
+  *
+@@ -823,7 +829,10 @@ static void __exit battery_hook_exit(void)
+ 
+ static int sysfs_add_battery(struct acpi_battery *battery)
+ {
+-	struct power_supply_config psy_cfg = { .drv_data = battery, };
++	struct power_supply_config psy_cfg = {
++		.drv_data = battery,
++		.attr_grp = acpi_battery_groups,
++	};
+ 	bool full_cap_broken = false;
+ 
+ 	if (!ACPI_BATTERY_CAPACITY_VALID(battery->full_charge_capacity) &&
+@@ -868,7 +877,7 @@ static int sysfs_add_battery(struct acpi_battery *battery)
+ 		return result;
+ 	}
+ 	battery_hook_add_battery(battery);
+-	return device_create_file(&battery->bat->dev, &alarm_attr);
++	return 0;
+ }
+ 
+ static void sysfs_remove_battery(struct acpi_battery *battery)
+@@ -879,7 +888,6 @@ static void sysfs_remove_battery(struct acpi_battery *battery)
+ 		return;
+ 	}
+ 	battery_hook_remove_battery(battery);
+-	device_remove_file(&battery->bat->dev, &alarm_attr);
+ 	power_supply_unregister(battery->bat);
+ 	battery->bat = NULL;
+ 	mutex_unlock(&battery->sysfs_lock);
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index b5bf8b81a050a..df5d5a554b388 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -524,6 +524,20 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "N6506MV"),
+ 		},
+ 	},
++	{
++		/* Asus Vivobook Pro N6506MU */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "N6506MU"),
++		},
++	},
++	{
++		/* Asus Vivobook Pro N6506MJ */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "N6506MJ"),
++		},
++	},
+ 	{
+ 		/* LG Electronics 17U70P */
+ 		.matches = {
+diff --git a/drivers/acpi/sbs.c b/drivers/acpi/sbs.c
+index dc8164b182dcc..442c5905d43be 100644
+--- a/drivers/acpi/sbs.c
++++ b/drivers/acpi/sbs.c
+@@ -77,7 +77,6 @@ struct acpi_battery {
+ 	u16 spec;
+ 	u8 id;
+ 	u8 present:1;
+-	u8 have_sysfs_alarm:1;
+ };
+ 
+ #define to_acpi_battery(x) power_supply_get_drvdata(x)
+@@ -462,12 +461,18 @@ static ssize_t acpi_battery_alarm_store(struct device *dev,
+ 	return count;
+ }
+ 
+-static const struct device_attribute alarm_attr = {
++static struct device_attribute alarm_attr = {
+ 	.attr = {.name = "alarm", .mode = 0644},
+ 	.show = acpi_battery_alarm_show,
+ 	.store = acpi_battery_alarm_store,
+ };
+ 
++static struct attribute *acpi_battery_attrs[] = {
++	&alarm_attr.attr,
++	NULL
++};
++ATTRIBUTE_GROUPS(acpi_battery);
++
+ /* --------------------------------------------------------------------------
+                                  Driver Interface
+    -------------------------------------------------------------------------- */
+@@ -518,7 +523,10 @@ static int acpi_battery_read(struct acpi_battery *battery)
+ static int acpi_battery_add(struct acpi_sbs *sbs, int id)
+ {
+ 	struct acpi_battery *battery = &sbs->battery[id];
+-	struct power_supply_config psy_cfg = { .drv_data = battery, };
++	struct power_supply_config psy_cfg = {
++		.drv_data = battery,
++		.attr_grp = acpi_battery_groups,
++	};
+ 	int result;
+ 
+ 	battery->id = id;
+@@ -548,10 +556,6 @@ static int acpi_battery_add(struct acpi_sbs *sbs, int id)
+ 		goto end;
+ 	}
+ 
+-	result = device_create_file(&battery->bat->dev, &alarm_attr);
+-	if (result)
+-		goto end;
+-	battery->have_sysfs_alarm = 1;
+       end:
+ 	pr_info("%s [%s]: Battery Slot [%s] (battery %s)\n",
+ 	       ACPI_SBS_DEVICE_NAME, acpi_device_bid(sbs->device),
+@@ -563,11 +567,8 @@ static void acpi_battery_remove(struct acpi_sbs *sbs, int id)
+ {
+ 	struct acpi_battery *battery = &sbs->battery[id];
+ 
+-	if (battery->bat) {
+-		if (battery->have_sysfs_alarm)
+-			device_remove_file(&battery->bat->dev, &alarm_attr);
++	if (battery->bat)
+ 		power_supply_unregister(battery->bat);
+-	}
+ }
+ 
+ static int acpi_charger_add(struct acpi_sbs *sbs)
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 2b4c0624b7043..b5399262198a6 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -25,6 +25,7 @@
+ #include <linux/mutex.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/netdevice.h>
++#include <linux/rcupdate.h>
+ #include <linux/sched/signal.h>
+ #include <linux/sched/mm.h>
+ #include <linux/string_helpers.h>
+@@ -2640,6 +2641,7 @@ static const char *dev_uevent_name(const struct kobject *kobj)
+ static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
+ {
+ 	const struct device *dev = kobj_to_dev(kobj);
++	struct device_driver *driver;
+ 	int retval = 0;
+ 
+ 	/* add device node properties if present */
+@@ -2668,8 +2670,12 @@ static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
+ 	if (dev->type && dev->type->name)
+ 		add_uevent_var(env, "DEVTYPE=%s", dev->type->name);
+ 
+-	if (dev->driver)
+-		add_uevent_var(env, "DRIVER=%s", dev->driver->name);
++	/* Synchronize with module_remove_driver() */
++	rcu_read_lock();
++	driver = READ_ONCE(dev->driver);
++	if (driver)
++		add_uevent_var(env, "DRIVER=%s", driver->name);
++	rcu_read_unlock();
+ 
+ 	/* Add common DT information about the device */
+ 	of_device_uevent(dev, env);
+@@ -2739,11 +2745,8 @@ static ssize_t uevent_show(struct device *dev, struct device_attribute *attr,
+ 	if (!env)
+ 		return -ENOMEM;
+ 
+-	/* Synchronize with really_probe() */
+-	device_lock(dev);
+ 	/* let the kset specific function add its keys */
+ 	retval = kset->uevent_ops->uevent(&dev->kobj, env);
+-	device_unlock(dev);
+ 	if (retval)
+ 		goto out;
+ 
+diff --git a/drivers/base/module.c b/drivers/base/module.c
+index a1b55da07127d..b0b79b9c189d4 100644
+--- a/drivers/base/module.c
++++ b/drivers/base/module.c
+@@ -7,6 +7,7 @@
+ #include <linux/errno.h>
+ #include <linux/slab.h>
+ #include <linux/string.h>
++#include <linux/rcupdate.h>
+ #include "base.h"
+ 
+ static char *make_driver_name(struct device_driver *drv)
+@@ -97,6 +98,9 @@ void module_remove_driver(struct device_driver *drv)
+ 	if (!drv)
+ 		return;
+ 
++	/* Synchronize with dev_uevent() */
++	synchronize_rcu();
++
+ 	sysfs_remove_link(&drv->p->kobj, "module");
+ 
+ 	if (drv->owner)
+diff --git a/drivers/base/regmap/regmap-kunit.c b/drivers/base/regmap/regmap-kunit.c
+index be32cd4e84da4..292e86f601978 100644
+--- a/drivers/base/regmap/regmap-kunit.c
++++ b/drivers/base/regmap/regmap-kunit.c
+@@ -145,9 +145,9 @@ static struct regmap *gen_regmap(struct kunit *test,
+ 	const struct regmap_test_param *param = test->param_value;
+ 	struct regmap_test_priv *priv = test->priv;
+ 	unsigned int *buf;
+-	struct regmap *ret;
++	struct regmap *ret = ERR_PTR(-ENOMEM);
+ 	size_t size;
+-	int i;
++	int i, error;
+ 	struct reg_default *defaults;
+ 
+ 	config->cache_type = param->cache;
+@@ -172,15 +172,17 @@ static struct regmap *gen_regmap(struct kunit *test,
+ 
+ 	*data = kzalloc(sizeof(**data), GFP_KERNEL);
+ 	if (!(*data))
+-		return ERR_PTR(-ENOMEM);
++		goto out_free;
+ 	(*data)->vals = buf;
+ 
+ 	if (config->num_reg_defaults) {
+-		defaults = kcalloc(config->num_reg_defaults,
+-				   sizeof(struct reg_default),
+-				   GFP_KERNEL);
++		defaults = kunit_kcalloc(test,
++					 config->num_reg_defaults,
++					 sizeof(struct reg_default),
++					 GFP_KERNEL);
+ 		if (!defaults)
+-			return ERR_PTR(-ENOMEM);
++			goto out_free;
++
+ 		config->reg_defaults = defaults;
+ 
+ 		for (i = 0; i < config->num_reg_defaults; i++) {
+@@ -190,12 +192,19 @@ static struct regmap *gen_regmap(struct kunit *test,
+ 	}
+ 
+ 	ret = regmap_init_ram(priv->dev, config, *data);
+-	if (IS_ERR(ret)) {
+-		kfree(buf);
+-		kfree(*data);
+-	} else {
+-		kunit_add_action(test, regmap_exit_action, ret);
+-	}
++	if (IS_ERR(ret))
++		goto out_free;
++
++	/* This calls regmap_exit() on failure, which frees buf and *data */
++	error = kunit_add_action_or_reset(test, regmap_exit_action, ret);
++	if (error)
++		ret = ERR_PTR(error);
++
++	return ret;
++
++out_free:
++	kfree(buf);
++	kfree(*data);
+ 
+ 	return ret;
+ }
+@@ -1497,9 +1506,9 @@ static struct regmap *gen_raw_regmap(struct kunit *test,
+ 	struct regmap_test_priv *priv = test->priv;
+ 	const struct regmap_test_param *param = test->param_value;
+ 	u16 *buf;
+-	struct regmap *ret;
++	struct regmap *ret = ERR_PTR(-ENOMEM);
+ 	size_t size = (config->max_register + 1) * config->reg_bits / 8;
+-	int i;
++	int i, error;
+ 	struct reg_default *defaults;
+ 
+ 	config->cache_type = param->cache;
+@@ -1515,15 +1524,16 @@ static struct regmap *gen_raw_regmap(struct kunit *test,
+ 
+ 	*data = kzalloc(sizeof(**data), GFP_KERNEL);
+ 	if (!(*data))
+-		return ERR_PTR(-ENOMEM);
++		goto out_free;
+ 	(*data)->vals = (void *)buf;
+ 
+ 	config->num_reg_defaults = config->max_register + 1;
+-	defaults = kcalloc(config->num_reg_defaults,
+-			   sizeof(struct reg_default),
+-			   GFP_KERNEL);
++	defaults = kunit_kcalloc(test,
++				 config->num_reg_defaults,
++				 sizeof(struct reg_default),
++				 GFP_KERNEL);
+ 	if (!defaults)
+-		return ERR_PTR(-ENOMEM);
++		goto out_free;
+ 	config->reg_defaults = defaults;
+ 
+ 	for (i = 0; i < config->num_reg_defaults; i++) {
+@@ -1536,7 +1546,8 @@ static struct regmap *gen_raw_regmap(struct kunit *test,
+ 			defaults[i].def = be16_to_cpu(buf[i]);
+ 			break;
+ 		default:
+-			return ERR_PTR(-EINVAL);
++			ret = ERR_PTR(-EINVAL);
++			goto out_free;
+ 		}
+ 	}
+ 
+@@ -1548,12 +1559,19 @@ static struct regmap *gen_raw_regmap(struct kunit *test,
+ 		config->num_reg_defaults = 0;
+ 
+ 	ret = regmap_init_raw_ram(priv->dev, config, *data);
+-	if (IS_ERR(ret)) {
+-		kfree(buf);
+-		kfree(*data);
+-	} else {
+-		kunit_add_action(test, regmap_exit_action, ret);
+-	}
++	if (IS_ERR(ret))
++		goto out_free;
++
++	/* This calls regmap_exit() on failure, which frees buf and *data */
++	error = kunit_add_action_or_reset(test, regmap_exit_action, ret);
++	if (error)
++		ret = ERR_PTR(error);
++
++	return ret;
++
++out_free:
++	kfree(buf);
++	kfree(*data);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/bluetooth/btnxpuart.c b/drivers/bluetooth/btnxpuart.c
+index 6a863328b8053..d310b525fbf00 100644
+--- a/drivers/bluetooth/btnxpuart.c
++++ b/drivers/bluetooth/btnxpuart.c
+@@ -344,7 +344,7 @@ static void ps_cancel_timer(struct btnxpuart_dev *nxpdev)
+ 	struct ps_data *psdata = &nxpdev->psdata;
+ 
+ 	flush_work(&psdata->work);
+-	del_timer_sync(&psdata->ps_timer);
++	timer_shutdown_sync(&psdata->ps_timer);
+ }
+ 
+ static void ps_control(struct hci_dev *hdev, u8 ps_state)
+diff --git a/drivers/clocksource/sh_cmt.c b/drivers/clocksource/sh_cmt.c
+index 26919556ef5f0..b72b36e0abed8 100644
+--- a/drivers/clocksource/sh_cmt.c
++++ b/drivers/clocksource/sh_cmt.c
+@@ -528,6 +528,7 @@ static void sh_cmt_set_next(struct sh_cmt_channel *ch, unsigned long delta)
+ static irqreturn_t sh_cmt_interrupt(int irq, void *dev_id)
+ {
+ 	struct sh_cmt_channel *ch = dev_id;
++	unsigned long flags;
+ 
+ 	/* clear flags */
+ 	sh_cmt_write_cmcsr(ch, sh_cmt_read_cmcsr(ch) &
+@@ -558,6 +559,8 @@ static irqreturn_t sh_cmt_interrupt(int irq, void *dev_id)
+ 
+ 	ch->flags &= ~FLAG_SKIPEVENT;
+ 
++	raw_spin_lock_irqsave(&ch->lock, flags);
++
+ 	if (ch->flags & FLAG_REPROGRAM) {
+ 		ch->flags &= ~FLAG_REPROGRAM;
+ 		sh_cmt_clock_event_program_verify(ch, 1);
+@@ -570,6 +573,8 @@ static irqreturn_t sh_cmt_interrupt(int irq, void *dev_id)
+ 
+ 	ch->flags &= ~FLAG_IRQCONTEXT;
+ 
++	raw_spin_unlock_irqrestore(&ch->lock, flags);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -780,12 +785,18 @@ static int sh_cmt_clock_event_next(unsigned long delta,
+ 				   struct clock_event_device *ced)
+ {
+ 	struct sh_cmt_channel *ch = ced_to_sh_cmt(ced);
++	unsigned long flags;
+ 
+ 	BUG_ON(!clockevent_state_oneshot(ced));
++
++	raw_spin_lock_irqsave(&ch->lock, flags);
++
+ 	if (likely(ch->flags & FLAG_IRQCONTEXT))
+ 		ch->next_match_value = delta - 1;
+ 	else
+-		sh_cmt_set_next(ch, delta - 1);
++		__sh_cmt_set_next(ch, delta - 1);
++
++	raw_spin_unlock_irqrestore(&ch->lock, flags);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index a092b13ffbc2f..67c4a6a0ef124 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -304,10 +304,8 @@ static int amd_pstate_set_energy_pref_index(struct amd_cpudata *cpudata,
+ 	int epp = -EINVAL;
+ 	int ret;
+ 
+-	if (!pref_index) {
+-		pr_debug("EPP pref_index is invalid\n");
+-		return -EINVAL;
+-	}
++	if (!pref_index)
++		epp = cpudata->epp_default;
+ 
+ 	if (epp == -EINVAL)
+ 		epp = epp_values[pref_index];
+@@ -1439,7 +1437,7 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
+ 
+ 	policy->driver_data = cpudata;
+ 
+-	cpudata->epp_cached = amd_pstate_get_epp(cpudata, 0);
++	cpudata->epp_cached = cpudata->epp_default = amd_pstate_get_epp(cpudata, 0);
+ 
+ 	policy->min = policy->cpuinfo.min_freq;
+ 	policy->max = policy->cpuinfo.max_freq;
+@@ -1766,8 +1764,13 @@ static int __init amd_pstate_init(void)
+ 	/* check if this machine need CPPC quirks */
+ 	dmi_check_system(amd_pstate_quirks_table);
+ 
+-	switch (cppc_state) {
+-	case AMD_PSTATE_UNDEFINED:
++	/*
++	* determine the driver mode from the command line or kernel config.
++	* If no command line input is provided, cppc_state will be AMD_PSTATE_UNDEFINED.
++	* command line options will override the kernel config settings.
++	*/
++
++	if (cppc_state == AMD_PSTATE_UNDEFINED) {
+ 		/* Disable on the following configs by default:
+ 		 * 1. Undefined platforms
+ 		 * 2. Server platforms
+@@ -1779,15 +1782,20 @@ static int __init amd_pstate_init(void)
+ 			pr_info("driver load is disabled, boot with specific mode to enable this\n");
+ 			return -ENODEV;
+ 		}
+-		ret = amd_pstate_set_driver(CONFIG_X86_AMD_PSTATE_DEFAULT_MODE);
+-		if (ret)
+-			return ret;
+-		break;
++		/* get driver mode from kernel config option [1:4] */
++		cppc_state = CONFIG_X86_AMD_PSTATE_DEFAULT_MODE;
++	}
++
++	switch (cppc_state) {
+ 	case AMD_PSTATE_DISABLE:
++		pr_info("driver load is disabled, boot with specific mode to enable this\n");
+ 		return -ENODEV;
+ 	case AMD_PSTATE_PASSIVE:
+ 	case AMD_PSTATE_ACTIVE:
+ 	case AMD_PSTATE_GUIDED:
++		ret = amd_pstate_set_driver(cppc_state);
++		if (ret)
++			return ret;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+@@ -1808,7 +1816,7 @@ static int __init amd_pstate_init(void)
+ 	/* enable amd pstate feature */
+ 	ret = amd_pstate_enable(true);
+ 	if (ret) {
+-		pr_err("failed to enable with return %d\n", ret);
++		pr_err("failed to enable driver mode(%d)\n", cppc_state);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h
+index e6a28e7f4dbf1..f80b33fa5d43a 100644
+--- a/drivers/cpufreq/amd-pstate.h
++++ b/drivers/cpufreq/amd-pstate.h
+@@ -99,6 +99,7 @@ struct amd_cpudata {
+ 	u32	policy;
+ 	u64	cppc_cap1_cached;
+ 	bool	suspended;
++	s16	epp_default;
+ };
+ 
+ #endif /* _LINUX_AMD_PSTATE_H */
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index fa62367ee9290..1a9aadd4c803c 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -17,6 +17,7 @@
+ #include <linux/list.h>
+ #include <linux/lockdep.h>
+ #include <linux/module.h>
++#include <linux/nospec.h>
+ #include <linux/of.h>
+ #include <linux/pinctrl/consumer.h>
+ #include <linux/seq_file.h>
+@@ -198,7 +199,7 @@ gpio_device_get_desc(struct gpio_device *gdev, unsigned int hwnum)
+ 	if (hwnum >= gdev->ngpio)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	return &gdev->descs[hwnum];
++	return &gdev->descs[array_index_nospec(hwnum, gdev->ngpio)];
+ }
+ EXPORT_SYMBOL_GPL(gpio_device_get_desc);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index ee7df1d84e028..89cf9ac6da174 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4048,6 +4048,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ 	mutex_init(&adev->grbm_idx_mutex);
+ 	mutex_init(&adev->mn_lock);
+ 	mutex_init(&adev->virt.vf_errors.lock);
++	mutex_init(&adev->virt.rlcg_reg_lock);
+ 	hash_init(adev->mn_hash);
+ 	mutex_init(&adev->psp.mutex);
+ 	mutex_init(&adev->notifier_lock);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+index e4742b65032d1..4a9cec002691a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+@@ -262,9 +262,8 @@ amdgpu_job_prepare_job(struct drm_sched_job *sched_job,
+ 	struct dma_fence *fence = NULL;
+ 	int r;
+ 
+-	/* Ignore soft recovered fences here */
+ 	r = drm_sched_entity_error(s_entity);
+-	if (r && r != -ENODATA)
++	if (r)
+ 		goto error;
+ 
+ 	if (!fence && job->gang_submit)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c
+index ca5c86e5f7cd6..8e8afbd237bcd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c
+@@ -334,7 +334,7 @@ static ssize_t ta_if_invoke_debugfs_write(struct file *fp, const char *buf, size
+ 
+ 	set_ta_context_funcs(psp, ta_type, &context);
+ 
+-	if (!context->initialized) {
++	if (!context || !context->initialized) {
+ 		dev_err(adev->dev, "TA is not initialized\n");
+ 		ret = -EINVAL;
+ 		goto err_free_shared_buf;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index 1adc81a55734d..0c4ee06451e9c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -2172,12 +2172,15 @@ static void amdgpu_ras_interrupt_process_handler(struct work_struct *work)
+ int amdgpu_ras_interrupt_dispatch(struct amdgpu_device *adev,
+ 		struct ras_dispatch_if *info)
+ {
+-	struct ras_manager *obj = amdgpu_ras_find_obj(adev, &info->head);
+-	struct ras_ih_data *data = &obj->ih_data;
++	struct ras_manager *obj;
++	struct ras_ih_data *data;
+ 
++	obj = amdgpu_ras_find_obj(adev, &info->head);
+ 	if (!obj)
+ 		return -EINVAL;
+ 
++	data = &obj->ih_data;
++
+ 	if (data->inuse == 0)
+ 		return 0;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+index 54ab51a4ada77..972a58f0f4924 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+@@ -980,6 +980,9 @@ u32 amdgpu_virt_rlcg_reg_rw(struct amdgpu_device *adev, u32 offset, u32 v, u32 f
+ 	scratch_reg1 = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->scratch_reg1;
+ 	scratch_reg2 = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->scratch_reg2;
+ 	scratch_reg3 = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->scratch_reg3;
++
++	mutex_lock(&adev->virt.rlcg_reg_lock);
++
+ 	if (reg_access_ctrl->spare_int)
+ 		spare_int = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->spare_int;
+ 
+@@ -1036,6 +1039,9 @@ u32 amdgpu_virt_rlcg_reg_rw(struct amdgpu_device *adev, u32 offset, u32 v, u32 f
+ 	}
+ 
+ 	ret = readl(scratch_reg0);
++
++	mutex_unlock(&adev->virt.rlcg_reg_lock);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+index 642f1fd287d83..0ec246c74570c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+@@ -272,6 +272,8 @@ struct amdgpu_virt {
+ 
+ 	/* the ucode id to signal the autoload */
+ 	uint32_t autoload_ucode_id;
++
++	struct mutex rlcg_reg_lock;
+ };
+ 
+ struct amdgpu_video_codec_info;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
+index 66e8a016126b8..9b748d7058b5c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
+@@ -102,6 +102,11 @@ static int amdgpu_vm_sdma_prepare(struct amdgpu_vm_update_params *p,
+ 	if (!r)
+ 		r = amdgpu_sync_push_to_job(&sync, p->job);
+ 	amdgpu_sync_free(&sync);
++
++	if (r) {
++		p->num_dw_left = 0;
++		amdgpu_job_free(p->job);
++	}
+ 	return r;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index 31e500859ab01..92485251247a0 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -1658,7 +1658,7 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
+ 	start = map_start << PAGE_SHIFT;
+ 	end = (map_last + 1) << PAGE_SHIFT;
+ 	for (addr = start; !r && addr < end; ) {
+-		struct hmm_range *hmm_range;
++		struct hmm_range *hmm_range = NULL;
+ 		unsigned long map_start_vma;
+ 		unsigned long map_last_vma;
+ 		struct vm_area_struct *vma;
+@@ -1696,7 +1696,12 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
+ 		}
+ 
+ 		svm_range_lock(prange);
+-		if (!r && amdgpu_hmm_range_get_pages_done(hmm_range)) {
++
++		/* Free backing memory of hmm_range if it was initialized
++		 * Overrride return value to TRY AGAIN only if prior returns
++		 * were successful
++		 */
++		if (hmm_range && amdgpu_hmm_range_get_pages_done(hmm_range) && !r) {
+ 			pr_debug("hmm update the range, need validate again\n");
+ 			r = -EAGAIN;
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 3cdcadd41be1a..964bb6d0a3833 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2701,7 +2701,8 @@ static int dm_suspend(void *handle)
+ 
+ 		dm->cached_dc_state = dc_state_create_copy(dm->dc->current_state);
+ 
+-		dm_gpureset_toggle_interrupts(adev, dm->cached_dc_state, false);
++		if (dm->cached_dc_state)
++			dm_gpureset_toggle_interrupts(adev, dm->cached_dc_state, false);
+ 
+ 		amdgpu_dm_commit_zero_streams(dm->dc);
+ 
+@@ -6788,7 +6789,8 @@ static void create_eml_sink(struct amdgpu_dm_connector *aconnector)
+ 		aconnector->dc_sink = aconnector->dc_link->local_sink ?
+ 		aconnector->dc_link->local_sink :
+ 		aconnector->dc_em_sink;
+-		dc_sink_retain(aconnector->dc_sink);
++		if (aconnector->dc_sink)
++			dc_sink_retain(aconnector->dc_sink);
+ 	}
+ }
+ 
+@@ -7615,7 +7617,8 @@ static int amdgpu_dm_connector_get_modes(struct drm_connector *connector)
+ 				drm_add_modes_noedid(connector, 1920, 1080);
+ 	} else {
+ 		amdgpu_dm_connector_ddc_get_modes(connector, edid);
+-		amdgpu_dm_connector_add_common_modes(encoder, connector);
++		if (encoder)
++			amdgpu_dm_connector_add_common_modes(encoder, connector);
+ 		amdgpu_dm_connector_add_freesync_modes(connector, edid);
+ 	}
+ 	amdgpu_dm_fbc_init(connector);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index a5e1a93ddaea2..bb4e5ab7edc6e 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -182,6 +182,8 @@ amdgpu_dm_mst_connector_early_unregister(struct drm_connector *connector)
+ 		dc_sink_release(dc_sink);
+ 		aconnector->dc_sink = NULL;
+ 		aconnector->edid = NULL;
++		aconnector->dsc_aux = NULL;
++		port->passthrough_aux = NULL;
+ 	}
+ 
+ 	aconnector->mst_status = MST_STATUS_DEFAULT;
+@@ -494,6 +496,8 @@ dm_dp_mst_detect(struct drm_connector *connector,
+ 		dc_sink_release(aconnector->dc_sink);
+ 		aconnector->dc_sink = NULL;
+ 		aconnector->edid = NULL;
++		aconnector->dsc_aux = NULL;
++		port->passthrough_aux = NULL;
+ 
+ 		amdgpu_dm_set_mst_status(&aconnector->mst_status,
+ 			MST_REMOTE_EDID | MST_ALLOCATE_NEW_PAYLOAD | MST_CLEAR_ALLOCATED_PAYLOAD,
+@@ -1233,14 +1237,6 @@ static bool is_dsc_need_re_compute(
+ 		if (!aconnector || !aconnector->dsc_aux)
+ 			continue;
+ 
+-		/*
+-		 *	check if cached virtual MST DSC caps are available and DSC is supported
+-		 *	as per specifications in their Virtual DPCD registers.
+-		*/
+-		if (!(aconnector->dc_sink->dsc_caps.dsc_dec_caps.is_dsc_supported ||
+-			aconnector->dc_link->dpcd_caps.dsc_caps.dsc_basic_caps.fields.dsc_support.DSC_PASSTHROUGH_SUPPORT))
+-			continue;
+-
+ 		stream_on_link[new_stream_on_link_num] = aconnector;
+ 		new_stream_on_link_num++;
+ 
+@@ -1268,6 +1264,9 @@ static bool is_dsc_need_re_compute(
+ 		}
+ 	}
+ 
++	if (new_stream_on_link_num == 0)
++		return false;
++
+ 	/* check current_state if there stream on link but it is not in
+ 	 * new request state
+ 	 */
+@@ -1595,109 +1594,171 @@ static bool is_dsc_common_config_possible(struct dc_stream_state *stream,
+ 	return bw_range->max_target_bpp_x16 && bw_range->min_target_bpp_x16;
+ }
+ 
++#if defined(CONFIG_DRM_AMD_DC_FP)
++static bool dp_get_link_current_set_bw(struct drm_dp_aux *aux, uint32_t *cur_link_bw)
++{
++	uint32_t total_data_bw_efficiency_x10000 = 0;
++	uint32_t link_rate_per_lane_kbps = 0;
++	enum dc_link_rate link_rate;
++	union lane_count_set lane_count;
++	u8 dp_link_encoding;
++	u8 link_bw_set = 0;
++
++	*cur_link_bw = 0;
++
++	if (drm_dp_dpcd_read(aux, DP_MAIN_LINK_CHANNEL_CODING_SET, &dp_link_encoding, 1) != 1 ||
++		drm_dp_dpcd_read(aux, DP_LANE_COUNT_SET, &lane_count.raw, 1) != 1 ||
++		drm_dp_dpcd_read(aux, DP_LINK_BW_SET, &link_bw_set, 1) != 1)
++		return false;
++
++	switch (dp_link_encoding) {
++	case DP_8b_10b_ENCODING:
++		link_rate = link_bw_set;
++		link_rate_per_lane_kbps = link_rate * LINK_RATE_REF_FREQ_IN_KHZ * BITS_PER_DP_BYTE;
++		total_data_bw_efficiency_x10000 = DATA_EFFICIENCY_8b_10b_x10000;
++		total_data_bw_efficiency_x10000 /= 100;
++		total_data_bw_efficiency_x10000 *= DATA_EFFICIENCY_8b_10b_FEC_EFFICIENCY_x100;
++		break;
++	case DP_128b_132b_ENCODING:
++		switch (link_bw_set) {
++		case DP_LINK_BW_10:
++			link_rate = LINK_RATE_UHBR10;
++			break;
++		case DP_LINK_BW_13_5:
++			link_rate = LINK_RATE_UHBR13_5;
++			break;
++		case DP_LINK_BW_20:
++			link_rate = LINK_RATE_UHBR20;
++			break;
++		default:
++			return false;
++		}
++
++		link_rate_per_lane_kbps = link_rate * 10000;
++		total_data_bw_efficiency_x10000 = DATA_EFFICIENCY_128b_132b_x10000;
++		break;
++	default:
++		return false;
++	}
++
++	*cur_link_bw = link_rate_per_lane_kbps * lane_count.bits.LANE_COUNT_SET / 10000 * total_data_bw_efficiency_x10000;
++	return true;
++}
++#endif
++
+ enum dc_status dm_dp_mst_is_port_support_mode(
+ 	struct amdgpu_dm_connector *aconnector,
+ 	struct dc_stream_state *stream)
+ {
+-	int pbn, branch_max_throughput_mps = 0;
++#if defined(CONFIG_DRM_AMD_DC_FP)
++	int branch_max_throughput_mps = 0;
+ 	struct dc_link_settings cur_link_settings;
+-	unsigned int end_to_end_bw_in_kbps = 0;
+-	unsigned int upper_link_bw_in_kbps = 0, down_link_bw_in_kbps = 0;
++	uint32_t end_to_end_bw_in_kbps = 0;
++	uint32_t root_link_bw_in_kbps = 0;
++	uint32_t virtual_channel_bw_in_kbps = 0;
+ 	struct dc_dsc_bw_range bw_range = {0};
+ 	struct dc_dsc_config_options dsc_options = {0};
++	uint32_t stream_kbps;
+ 
+-	/*
+-	 * Consider the case with the depth of the mst topology tree is equal or less than 2
+-	 * A. When dsc bitstream can be transmitted along the entire path
+-	 *    1. dsc is possible between source and branch/leaf device (common dsc params is possible), AND
+-	 *    2. dsc passthrough supported at MST branch, or
+-	 *    3. dsc decoding supported at leaf MST device
+-	 *    Use maximum dsc compression as bw constraint
+-	 * B. When dsc bitstream cannot be transmitted along the entire path
+-	 *    Use native bw as bw constraint
++	/* DSC unnecessary case
++	 * Check if timing could be supported within end-to-end BW
+ 	 */
+-	if (is_dsc_common_config_possible(stream, &bw_range) &&
+-	   (aconnector->mst_output_port->passthrough_aux ||
+-	    aconnector->dsc_aux == &aconnector->mst_output_port->aux)) {
+-		cur_link_settings = stream->link->verified_link_cap;
+-		upper_link_bw_in_kbps = dc_link_bandwidth_kbps(aconnector->dc_link, &cur_link_settings);
+-		down_link_bw_in_kbps = kbps_from_pbn(aconnector->mst_output_port->full_pbn);
+-
+-		/* pick the end to end bw bottleneck */
+-		end_to_end_bw_in_kbps = min(upper_link_bw_in_kbps, down_link_bw_in_kbps);
+-
+-		if (end_to_end_bw_in_kbps < bw_range.min_kbps) {
+-			DRM_DEBUG_DRIVER("maximum dsc compression cannot fit into end-to-end bandwidth\n");
++	stream_kbps =
++		dc_bandwidth_in_kbps_from_timing(&stream->timing,
++			dc_link_get_highest_encoding_format(stream->link));
++	cur_link_settings = stream->link->verified_link_cap;
++	root_link_bw_in_kbps = dc_link_bandwidth_kbps(aconnector->dc_link, &cur_link_settings);
++	virtual_channel_bw_in_kbps = kbps_from_pbn(aconnector->mst_output_port->full_pbn);
++
++	/* pick the end to end bw bottleneck */
++	end_to_end_bw_in_kbps = min(root_link_bw_in_kbps, virtual_channel_bw_in_kbps);
++
++	if (stream_kbps <= end_to_end_bw_in_kbps) {
++		DRM_DEBUG_DRIVER("No DSC needed. End-to-end bw sufficient.");
++		return DC_OK;
++	}
++
++	/* DSC necessary case */
++	if (!aconnector->dsc_aux)
++		return DC_FAIL_BANDWIDTH_VALIDATE;
++
++	if (is_dsc_common_config_possible(stream, &bw_range)) {
++
++		/* capable of dsc passthrough. dsc bitstream along the entire path */
++		if (aconnector->mst_output_port->passthrough_aux) {
++			if (bw_range.min_kbps > end_to_end_bw_in_kbps) {
++				DRM_DEBUG_DRIVER("DSC passthrough. Max dsc compression can't fit into end-to-end bw\n");
+ 			return DC_FAIL_BANDWIDTH_VALIDATE;
+-		}
++			}
++		} else {
++			/* dsc bitstream decoded at the dp last link */
++			struct drm_dp_mst_port *immediate_upstream_port = NULL;
++			uint32_t end_link_bw = 0;
++
++			/* Get last DP link BW capability */
++			if (dp_get_link_current_set_bw(&aconnector->mst_output_port->aux, &end_link_bw)) {
++				if (stream_kbps > end_link_bw) {
++					DRM_DEBUG_DRIVER("DSC decode at last link. Mode required bw can't fit into available bw\n");
++					return DC_FAIL_BANDWIDTH_VALIDATE;
++				}
++			}
+ 
+-		if (end_to_end_bw_in_kbps < bw_range.stream_kbps) {
+-			dc_dsc_get_default_config_option(stream->link->dc, &dsc_options);
+-			dsc_options.max_target_bpp_limit_override_x16 = aconnector->base.display_info.max_dsc_bpp * 16;
+-			if (dc_dsc_compute_config(stream->sink->ctx->dc->res_pool->dscs[0],
+-					&stream->sink->dsc_caps.dsc_dec_caps,
+-					&dsc_options,
+-					end_to_end_bw_in_kbps,
+-					&stream->timing,
+-					dc_link_get_highest_encoding_format(stream->link),
+-					&stream->timing.dsc_cfg)) {
+-				stream->timing.flags.DSC = 1;
+-				DRM_DEBUG_DRIVER("end-to-end bandwidth require dsc and dsc config found\n");
+-			} else {
+-				DRM_DEBUG_DRIVER("end-to-end bandwidth require dsc but dsc config not found\n");
+-				return DC_FAIL_BANDWIDTH_VALIDATE;
++			/* Get virtual channel bandwidth between source and the link before the last link */
++			if (aconnector->mst_output_port->parent->port_parent)
++				immediate_upstream_port = aconnector->mst_output_port->parent->port_parent;
++
++			if (immediate_upstream_port) {
++				virtual_channel_bw_in_kbps = kbps_from_pbn(immediate_upstream_port->full_pbn);
++				virtual_channel_bw_in_kbps = min(root_link_bw_in_kbps, virtual_channel_bw_in_kbps);
++				if (bw_range.min_kbps > virtual_channel_bw_in_kbps) {
++					DRM_DEBUG_DRIVER("DSC decode at last link. Max dsc compression can't fit into MST available bw\n");
++					return DC_FAIL_BANDWIDTH_VALIDATE;
++				}
+ 			}
+ 		}
+-	} else {
+-		/* Check if mode could be supported within max slot
+-		 * number of current mst link and full_pbn of mst links.
+-		 */
+-		int pbn_div, slot_num, max_slot_num;
+-		enum dc_link_encoding_format link_encoding;
+-		uint32_t stream_kbps =
+-			dc_bandwidth_in_kbps_from_timing(&stream->timing,
+-				dc_link_get_highest_encoding_format(stream->link));
+-
+-		pbn = kbps_to_peak_pbn(stream_kbps);
+-		pbn_div = dm_mst_get_pbn_divider(stream->link);
+-		slot_num = DIV_ROUND_UP(pbn, pbn_div);
+-
+-		link_encoding = dc_link_get_highest_encoding_format(stream->link);
+-		if (link_encoding == DC_LINK_ENCODING_DP_8b_10b)
+-			max_slot_num = 63;
+-		else if (link_encoding == DC_LINK_ENCODING_DP_128b_132b)
+-			max_slot_num = 64;
+-		else {
+-			DRM_DEBUG_DRIVER("Invalid link encoding format\n");
+-			return DC_FAIL_BANDWIDTH_VALIDATE;
+-		}
+ 
+-		if (slot_num > max_slot_num ||
+-			pbn > aconnector->mst_output_port->full_pbn) {
+-			DRM_DEBUG_DRIVER("Mode can not be supported within mst links!");
++		/* Confirm if we can obtain dsc config */
++		dc_dsc_get_default_config_option(stream->link->dc, &dsc_options);
++		dsc_options.max_target_bpp_limit_override_x16 = aconnector->base.display_info.max_dsc_bpp * 16;
++		if (dc_dsc_compute_config(stream->sink->ctx->dc->res_pool->dscs[0],
++				&stream->sink->dsc_caps.dsc_dec_caps,
++				&dsc_options,
++				end_to_end_bw_in_kbps,
++				&stream->timing,
++				dc_link_get_highest_encoding_format(stream->link),
++				&stream->timing.dsc_cfg)) {
++			stream->timing.flags.DSC = 1;
++			DRM_DEBUG_DRIVER("Require dsc and dsc config found\n");
++		} else {
++			DRM_DEBUG_DRIVER("Require dsc but can't find appropriate dsc config\n");
+ 			return DC_FAIL_BANDWIDTH_VALIDATE;
+ 		}
+-	}
+ 
+-	/* check is mst dsc output bandwidth branch_overall_throughput_0_mps */
+-	switch (stream->timing.pixel_encoding) {
+-	case PIXEL_ENCODING_RGB:
+-	case PIXEL_ENCODING_YCBCR444:
+-		branch_max_throughput_mps =
+-			aconnector->dc_sink->dsc_caps.dsc_dec_caps.branch_overall_throughput_0_mps;
+-		break;
+-	case PIXEL_ENCODING_YCBCR422:
+-	case PIXEL_ENCODING_YCBCR420:
+-		branch_max_throughput_mps =
+-			aconnector->dc_sink->dsc_caps.dsc_dec_caps.branch_overall_throughput_1_mps;
+-		break;
+-	default:
+-		break;
+-	}
++		/* check mst dsc output bandwidth against branch_overall_throughput_0_mps */
++		switch (stream->timing.pixel_encoding) {
++		case PIXEL_ENCODING_RGB:
++		case PIXEL_ENCODING_YCBCR444:
++			branch_max_throughput_mps =
++				aconnector->dc_sink->dsc_caps.dsc_dec_caps.branch_overall_throughput_0_mps;
++			break;
++		case PIXEL_ENCODING_YCBCR422:
++		case PIXEL_ENCODING_YCBCR420:
++			branch_max_throughput_mps =
++				aconnector->dc_sink->dsc_caps.dsc_dec_caps.branch_overall_throughput_1_mps;
++			break;
++		default:
++			break;
++		}
+ 
+-	if (branch_max_throughput_mps != 0 &&
+-		((stream->timing.pix_clk_100hz / 10) >  branch_max_throughput_mps * 1000))
++		if (branch_max_throughput_mps != 0 &&
++			((stream->timing.pix_clk_100hz / 10) >  branch_max_throughput_mps * 1000)) {
++			DRM_DEBUG_DRIVER("DSC is required but max throughput mps fails");
+ 		return DC_FAIL_BANDWIDTH_VALIDATE;
+-
++		}
++	} else {
++		DRM_DEBUG_DRIVER("DSC is required but can't find common dsc config.");
++		return DC_FAIL_BANDWIDTH_VALIDATE;
++	}
++#endif
+ 	return DC_OK;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 15819416a2f36..8ed599324693e 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -2267,6 +2267,10 @@ void resource_log_pipe_topology_update(struct dc *dc, struct dc_state *state)
+ 
+ 		otg_master = resource_get_otg_master_for_stream(
+ 				&state->res_ctx, state->streams[stream_idx]);
++
++		if (!otg_master)
++			continue;
++
+ 		resource_log_pipe_for_stream(dc, state, otg_master, stream_idx);
+ 	}
+ 	if (state->phantom_stream_count > 0) {
+@@ -2508,6 +2512,17 @@ static void remove_hpo_dp_link_enc_from_ctx(struct resource_context *res_ctx,
+ 	}
+ }
+ 
++static int get_num_of_free_pipes(const struct resource_pool *pool, const struct dc_state *context)
++{
++	int i;
++	int count = 0;
++
++	for (i = 0; i < pool->pipe_count; i++)
++		if (resource_is_pipe_type(&context->res_ctx.pipe_ctx[i], FREE_PIPE))
++			count++;
++	return count;
++}
++
+ enum dc_status resource_add_otg_master_for_stream_output(struct dc_state *new_ctx,
+ 		const struct resource_pool *pool,
+ 		struct dc_stream_state *stream)
+@@ -2641,37 +2656,33 @@ static bool acquire_secondary_dpp_pipes_and_add_plane(
+ 		struct dc_state *cur_ctx,
+ 		struct resource_pool *pool)
+ {
+-	struct pipe_ctx *opp_head_pipe, *sec_pipe, *tail_pipe;
++	struct pipe_ctx *sec_pipe, *tail_pipe;
++	struct pipe_ctx *opp_heads[MAX_PIPES];
++	int opp_head_count;
++	int i;
+ 
+ 	if (!pool->funcs->acquire_free_pipe_as_secondary_dpp_pipe) {
+ 		ASSERT(0);
+ 		return false;
+ 	}
+ 
+-	opp_head_pipe = otg_master_pipe;
+-	while (opp_head_pipe) {
++	opp_head_count = resource_get_opp_heads_for_otg_master(otg_master_pipe,
++			&new_ctx->res_ctx, opp_heads);
++	if (get_num_of_free_pipes(pool, new_ctx) < opp_head_count)
++		/* not enough free pipes */
++		return false;
++
++	for (i = 0; i < opp_head_count; i++) {
+ 		sec_pipe = pool->funcs->acquire_free_pipe_as_secondary_dpp_pipe(
+ 				cur_ctx,
+ 				new_ctx,
+ 				pool,
+-				opp_head_pipe);
+-		if (!sec_pipe) {
+-			/* try tearing down MPCC combine */
+-			int pipe_idx = acquire_first_split_pipe(
+-					&new_ctx->res_ctx, pool,
+-					otg_master_pipe->stream);
+-
+-			if (pipe_idx >= 0)
+-				sec_pipe = &new_ctx->res_ctx.pipe_ctx[pipe_idx];
+-		}
+-
+-		if (!sec_pipe)
+-			return false;
+-
++				opp_heads[i]);
++		ASSERT(sec_pipe);
+ 		sec_pipe->plane_state = plane_state;
+ 
+ 		/* establish pipe relationship */
+-		tail_pipe = get_tail_pipe(opp_head_pipe);
++		tail_pipe = get_tail_pipe(opp_heads[i]);
+ 		tail_pipe->bottom_pipe = sec_pipe;
+ 		sec_pipe->top_pipe = tail_pipe;
+ 		sec_pipe->bottom_pipe = NULL;
+@@ -2682,8 +2693,6 @@ static bool acquire_secondary_dpp_pipes_and_add_plane(
+ 		} else {
+ 			sec_pipe->prev_odm_pipe = NULL;
+ 		}
+-
+-		opp_head_pipe = opp_head_pipe->next_odm_pipe;
+ 	}
+ 	return true;
+ }
+@@ -2696,6 +2705,7 @@ bool resource_append_dpp_pipes_for_plane_composition(
+ 		struct dc_plane_state *plane_state)
+ {
+ 	bool success;
++
+ 	if (otg_master_pipe->plane_state == NULL)
+ 		success = add_plane_to_opp_head_pipes(otg_master_pipe,
+ 				plane_state, new_ctx);
+@@ -2703,10 +2713,15 @@ bool resource_append_dpp_pipes_for_plane_composition(
+ 		success = acquire_secondary_dpp_pipes_and_add_plane(
+ 				otg_master_pipe, plane_state, new_ctx,
+ 				cur_ctx, pool);
+-	if (success)
++	if (success) {
+ 		/* when appending a plane mpc slice count changes from 0 to 1 */
+ 		success = update_pipe_params_after_mpc_slice_count_change(
+ 				plane_state, new_ctx, pool);
++		if (!success)
++			resource_remove_dpp_pipes_for_plane_composition(new_ctx,
++					pool, plane_state);
++	}
++
+ 	return success;
+ }
+ 
+@@ -2716,6 +2731,7 @@ void resource_remove_dpp_pipes_for_plane_composition(
+ 		const struct dc_plane_state *plane_state)
+ {
+ 	int i;
++
+ 	for (i = pool->pipe_count - 1; i >= 0; i--) {
+ 		struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_state.c b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+index 76bb05f4d6bf3..52a1cfc5feed8 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_state.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+@@ -437,6 +437,19 @@ enum dc_status dc_state_remove_stream(
+ 	return DC_OK;
+ }
+ 
++static void remove_mpc_combine_for_stream(const struct dc *dc,
++		struct dc_state *new_ctx,
++		const struct dc_state *cur_ctx,
++		struct dc_stream_status *status)
++{
++	int i;
++
++	for (i = 0; i < status->plane_count; i++)
++		resource_update_pipes_for_plane_with_slice_count(
++				new_ctx, cur_ctx, dc->res_pool,
++				status->plane_states[i], 1);
++}
++
+ bool dc_state_add_plane(
+ 		const struct dc *dc,
+ 		struct dc_stream_state *stream,
+@@ -447,8 +460,12 @@ bool dc_state_add_plane(
+ 	struct pipe_ctx *otg_master_pipe;
+ 	struct dc_stream_status *stream_status = NULL;
+ 	bool added = false;
++	int odm_slice_count;
++	int i;
+ 
+ 	stream_status = dc_state_get_stream_status(state, stream);
++	otg_master_pipe = resource_get_otg_master_for_stream(
++			&state->res_ctx, stream);
+ 	if (stream_status == NULL) {
+ 		dm_error("Existing stream not found; failed to attach surface!\n");
+ 		goto out;
+@@ -456,22 +473,39 @@ bool dc_state_add_plane(
+ 		dm_error("Surface: can not attach plane_state %p! Maximum is: %d\n",
+ 				plane_state, MAX_SURFACE_NUM);
+ 		goto out;
++	} else if (!otg_master_pipe) {
++		goto out;
+ 	}
+ 
+-	if (stream_status->plane_count == 0 && dc->config.enable_windowed_mpo_odm)
+-		/* ODM combine could prevent us from supporting more planes
+-		 * we will reset ODM slice count back to 1 when all planes have
+-		 * been removed to maximize the amount of planes supported when
+-		 * new planes are added.
+-		 */
+-		resource_update_pipes_for_stream_with_slice_count(
+-				state, dc->current_state, dc->res_pool, stream, 1);
++	added = resource_append_dpp_pipes_for_plane_composition(state,
++			dc->current_state, pool, otg_master_pipe, plane_state);
+ 
+-	otg_master_pipe = resource_get_otg_master_for_stream(
+-			&state->res_ctx, stream);
+-	if (otg_master_pipe)
++	if (!added) {
++		/* try to remove MPC combine to free up pipes */
++		for (i = 0; i < state->stream_count; i++)
++			remove_mpc_combine_for_stream(dc, state,
++					dc->current_state,
++					&state->stream_status[i]);
+ 		added = resource_append_dpp_pipes_for_plane_composition(state,
+-				dc->current_state, pool, otg_master_pipe, plane_state);
++					dc->current_state, pool,
++					otg_master_pipe, plane_state);
++	}
++
++	if (!added) {
++		/* try to decrease ODM slice count gradually to free up pipes */
++		odm_slice_count = resource_get_odm_slice_count(otg_master_pipe);
++		for (i = odm_slice_count - 1; i > 0; i--) {
++			resource_update_pipes_for_stream_with_slice_count(state,
++					dc->current_state, dc->res_pool, stream,
++					i);
++			added = resource_append_dpp_pipes_for_plane_composition(
++					state,
++					dc->current_state, pool,
++					otg_master_pipe, plane_state);
++			if (added)
++				break;
++		}
++	}
+ 
+ 	if (added) {
+ 		stream_status->plane_states[stream_status->plane_count] =
+@@ -531,15 +565,6 @@ bool dc_state_remove_plane(
+ 
+ 	stream_status->plane_states[stream_status->plane_count] = NULL;
+ 
+-	if (stream_status->plane_count == 0 && dc->config.enable_windowed_mpo_odm)
+-		/* ODM combine could prevent us from supporting more planes
+-		 * we will reset ODM slice count back to 1 when all planes have
+-		 * been removed to maximize the amount of planes supported when
+-		 * new planes are added.
+-		 */
+-		resource_update_pipes_for_stream_with_slice_count(
+-				state, dc->current_state, dc->res_pool, stream, 1);
+-
+ 	return true;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c
+index 4f559a025cf00..09cf54586fd5d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c
+@@ -84,7 +84,7 @@ static void dmub_replay_enable(struct dmub_replay *dmub, bool enable, bool wait,
+ 
+ 	cmd.replay_enable.header.payload_bytes = sizeof(struct dmub_rb_cmd_replay_enable_data);
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	/* Below loops 1000 x 500us = 500 ms.
+ 	 *  Exit REPLAY may need to wait 1-2 frames to power up. Timeout after at
+@@ -127,7 +127,7 @@ static void dmub_replay_set_power_opt(struct dmub_replay *dmub, unsigned int pow
+ 	cmd.replay_set_power_opt.replay_set_power_opt_data.power_opt = power_opt;
+ 	cmd.replay_set_power_opt.replay_set_power_opt_data.panel_inst = panel_inst;
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ /*
+@@ -209,8 +209,7 @@ static bool dmub_replay_copy_settings(struct dmub_replay *dmub,
+ 	else
+ 		copy_settings_data->flags.bitfields.force_wakeup_by_tps3 = 0;
+ 
+-
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+@@ -231,7 +230,7 @@ static void dmub_replay_set_coasting_vtotal(struct dmub_replay *dmub,
+ 	cmd.replay_set_coasting_vtotal.header.payload_bytes = sizeof(struct dmub_cmd_replay_set_coasting_vtotal_data);
+ 	cmd.replay_set_coasting_vtotal.replay_set_coasting_vtotal_data.coasting_vtotal = coasting_vtotal;
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ /*
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+index 0c4aef8ffe2c5..3306684e805ac 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+@@ -288,6 +288,7 @@ static void dcn10_log_color_state(struct dc *dc,
+ {
+ 	struct dc_context *dc_ctx = dc->ctx;
+ 	struct resource_pool *pool = dc->res_pool;
++	bool is_gamut_remap_available = false;
+ 	int i;
+ 
+ 	DTN_INFO("DPP:    IGAM format    IGAM mode    DGAM mode    RGAM mode"
+@@ -300,16 +301,15 @@ static void dcn10_log_color_state(struct dc *dc,
+ 		struct dcn_dpp_state s = {0};
+ 
+ 		dpp->funcs->dpp_read_state(dpp, &s);
+-		dpp->funcs->dpp_get_gamut_remap(dpp, &s.gamut_remap);
++		if (dpp->funcs->dpp_get_gamut_remap) {
++			dpp->funcs->dpp_get_gamut_remap(dpp, &s.gamut_remap);
++			is_gamut_remap_available = true;
++		}
+ 
+ 		if (!s.is_enabled)
+ 			continue;
+ 
+-		DTN_INFO("[%2d]:  %11xh  %11s    %9s    %9s"
+-			 "  %12s  "
+-			 "%010lld %010lld %010lld %010lld "
+-			 "%010lld %010lld %010lld %010lld "
+-			 "%010lld %010lld %010lld %010lld",
++		DTN_INFO("[%2d]:  %11xh  %11s    %9s    %9s",
+ 				dpp->inst,
+ 				s.igam_input_format,
+ 				(s.igam_lut_mode == 0) ? "BypassFixed" :
+@@ -328,22 +328,27 @@ static void dcn10_log_color_state(struct dc *dc,
+ 					((s.rgam_lut_mode == 2) ? "Ycc" :
+ 					((s.rgam_lut_mode == 3) ? "RAM" :
+ 					((s.rgam_lut_mode == 4) ? "RAM" :
+-								 "Unknown")))),
+-				(s.gamut_remap.gamut_adjust_type == 0) ? "Bypass" :
+-					((s.gamut_remap.gamut_adjust_type == 1) ? "HW" :
+-										  "SW"),
+-				s.gamut_remap.temperature_matrix[0].value,
+-				s.gamut_remap.temperature_matrix[1].value,
+-				s.gamut_remap.temperature_matrix[2].value,
+-				s.gamut_remap.temperature_matrix[3].value,
+-				s.gamut_remap.temperature_matrix[4].value,
+-				s.gamut_remap.temperature_matrix[5].value,
+-				s.gamut_remap.temperature_matrix[6].value,
+-				s.gamut_remap.temperature_matrix[7].value,
+-				s.gamut_remap.temperature_matrix[8].value,
+-				s.gamut_remap.temperature_matrix[9].value,
+-				s.gamut_remap.temperature_matrix[10].value,
+-				s.gamut_remap.temperature_matrix[11].value);
++								 "Unknown")))));
++		if (is_gamut_remap_available)
++			DTN_INFO("  %12s  "
++				 "%010lld %010lld %010lld %010lld "
++				 "%010lld %010lld %010lld %010lld "
++				 "%010lld %010lld %010lld %010lld",
++				 (s.gamut_remap.gamut_adjust_type == 0) ? "Bypass" :
++					((s.gamut_remap.gamut_adjust_type == 1) ? "HW" : "SW"),
++				 s.gamut_remap.temperature_matrix[0].value,
++				 s.gamut_remap.temperature_matrix[1].value,
++				 s.gamut_remap.temperature_matrix[2].value,
++				 s.gamut_remap.temperature_matrix[3].value,
++				 s.gamut_remap.temperature_matrix[4].value,
++				 s.gamut_remap.temperature_matrix[5].value,
++				 s.gamut_remap.temperature_matrix[6].value,
++				 s.gamut_remap.temperature_matrix[7].value,
++				 s.gamut_remap.temperature_matrix[8].value,
++				 s.gamut_remap.temperature_matrix[9].value,
++				 s.gamut_remap.temperature_matrix[10].value,
++				 s.gamut_remap.temperature_matrix[11].value);
++
+ 		DTN_INFO("\n");
+ 	}
+ 	DTN_INFO("\n");
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+index ed9141a67db37..5b09d95cc5b8f 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+@@ -919,6 +919,9 @@ bool dcn30_apply_idle_power_optimizations(struct dc *dc, bool enable)
+ 			stream = dc->current_state->streams[0];
+ 			plane = (stream ? dc->current_state->stream_status[0].plane_states[0] : NULL);
+ 
++			if (!stream || !plane)
++				return false;
++
+ 			if (stream && plane) {
+ 				cursor_cache_enable = stream->cursor_position.enable &&
+ 						plane->address.grph.cursor_cache_addr.quad_part;
+diff --git a/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_fixed_vs_pe_retimer_dp.c b/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_fixed_vs_pe_retimer_dp.c
+index 3e6c7be7e2786..5302d2c9c7607 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_fixed_vs_pe_retimer_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_fixed_vs_pe_retimer_dp.c
+@@ -165,7 +165,12 @@ static void set_hpo_fixed_vs_pe_retimer_dp_link_test_pattern(struct dc_link *lin
+ 		link_res->hpo_dp_link_enc->funcs->set_link_test_pattern(
+ 				link_res->hpo_dp_link_enc, tp_params);
+ 	}
++
+ 	link->dc->link_srv->dp_trace_source_sequence(link, DPCD_SOURCE_SEQ_AFTER_SET_SOURCE_PATTERN);
++
++	// Give retimer extra time to lock before updating DP_TRAINING_PATTERN_SET to TPS1
++	if (tp_params->dp_phy_pattern == DP_TEST_PATTERN_128b_132b_TPS1_TRAINING_MODE)
++		msleep(30);
+ }
+ 
+ static void set_hpo_fixed_vs_pe_retimer_dp_lane_settings(struct dc_link *link,
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+index b53ad18dbfbca..ec9ff5f8bdc5d 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+@@ -2313,8 +2313,6 @@ void link_set_dpms_off(struct pipe_ctx *pipe_ctx)
+ 
+ 	dc->hwss.disable_audio_stream(pipe_ctx);
+ 
+-	edp_set_panel_assr(link, pipe_ctx, &panel_mode_dp, false);
+-
+ 	update_psp_stream_config(pipe_ctx, true);
+ 	dc->hwss.blank_stream(pipe_ctx);
+ 
+@@ -2368,6 +2366,7 @@ void link_set_dpms_off(struct pipe_ctx *pipe_ctx)
+ 		dc->hwss.disable_stream(pipe_ctx);
+ 		disable_link(pipe_ctx->stream->link, &pipe_ctx->link_res, pipe_ctx->stream->signal);
+ 	}
++	edp_set_panel_assr(link, pipe_ctx, &panel_mode_dp, false);
+ 
+ 	if (pipe_ctx->stream->timing.flags.DSC) {
+ 		if (dc_is_dp_signal(pipe_ctx->stream->signal))
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_irq_handler.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_irq_handler.c
+index 0fcf0b8530acf..564246983f37e 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_irq_handler.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_irq_handler.c
+@@ -454,7 +454,8 @@ bool dp_handle_hpd_rx_irq(struct dc_link *link,
+ 	 * If we got sink count changed it means
+ 	 * Downstream port status changed,
+ 	 * then DM should call DC to do the detection.
+-	 * NOTE: Do not handle link loss on eDP since it is internal link*/
++	 * NOTE: Do not handle link loss on eDP since it is internal link
++	 */
+ 	if ((link->connector_signal != SIGNAL_TYPE_EDP) &&
+ 			dp_parse_link_loss_status(
+ 					link,
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
+index 0a939437e19f1..6b380e037e3f8 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
+@@ -2193,10 +2193,11 @@ bool dcn20_get_dcc_compression_cap(const struct dc *dc,
+ 		const struct dc_dcc_surface_param *input,
+ 		struct dc_surface_dcc_cap *output)
+ {
+-	return dc->res_pool->hubbub->funcs->get_dcc_compression_cap(
+-			dc->res_pool->hubbub,
+-			input,
+-			output);
++	if (dc->res_pool->hubbub->funcs->get_dcc_compression_cap)
++		return dc->res_pool->hubbub->funcs->get_dcc_compression_cap(
++			dc->res_pool->hubbub, input, output);
++
++	return false;
+ }
+ 
+ static void dcn20_destroy_resource_pool(struct resource_pool **pool)
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+index 5fb21a0508cd9..f531ce1d2b1dc 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+@@ -929,7 +929,7 @@ static int pp_dpm_switch_power_profile(void *handle,
+ 		enum PP_SMC_POWER_PROFILE type, bool en)
+ {
+ 	struct pp_hwmgr *hwmgr = handle;
+-	long workload;
++	long workload[1];
+ 	uint32_t index;
+ 
+ 	if (!hwmgr || !hwmgr->pm_en)
+@@ -947,12 +947,12 @@ static int pp_dpm_switch_power_profile(void *handle,
+ 		hwmgr->workload_mask &= ~(1 << hwmgr->workload_prority[type]);
+ 		index = fls(hwmgr->workload_mask);
+ 		index = index > 0 && index <= Workload_Policy_Max ? index - 1 : 0;
+-		workload = hwmgr->workload_setting[index];
++		workload[0] = hwmgr->workload_setting[index];
+ 	} else {
+ 		hwmgr->workload_mask |= (1 << hwmgr->workload_prority[type]);
+ 		index = fls(hwmgr->workload_mask);
+ 		index = index <= Workload_Policy_Max ? index - 1 : 0;
+-		workload = hwmgr->workload_setting[index];
++		workload[0] = hwmgr->workload_setting[index];
+ 	}
+ 
+ 	if (type == PP_SMC_POWER_PROFILE_COMPUTE &&
+@@ -962,7 +962,7 @@ static int pp_dpm_switch_power_profile(void *handle,
+ 	}
+ 
+ 	if (hwmgr->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL)
+-		hwmgr->hwmgr_func->set_power_profile_mode(hwmgr, &workload, 0);
++		hwmgr->hwmgr_func->set_power_profile_mode(hwmgr, workload, 0);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c
+index 1d829402cd2e2..f4bd8e9357e22 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c
+@@ -269,7 +269,7 @@ int psm_adjust_power_state_dynamic(struct pp_hwmgr *hwmgr, bool skip_display_set
+ 						struct pp_power_state *new_ps)
+ {
+ 	uint32_t index;
+-	long workload;
++	long workload[1];
+ 
+ 	if (hwmgr->not_vf) {
+ 		if (!skip_display_settings)
+@@ -294,10 +294,10 @@ int psm_adjust_power_state_dynamic(struct pp_hwmgr *hwmgr, bool skip_display_set
+ 	if (hwmgr->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL) {
+ 		index = fls(hwmgr->workload_mask);
+ 		index = index > 0 && index <= Workload_Policy_Max ? index - 1 : 0;
+-		workload = hwmgr->workload_setting[index];
++		workload[0] = hwmgr->workload_setting[index];
+ 
+-		if (hwmgr->power_profile_mode != workload && hwmgr->hwmgr_func->set_power_profile_mode)
+-			hwmgr->hwmgr_func->set_power_profile_mode(hwmgr, &workload, 0);
++		if (hwmgr->power_profile_mode != workload[0] && hwmgr->hwmgr_func->set_power_profile_mode)
++			hwmgr->hwmgr_func->set_power_profile_mode(hwmgr, workload, 0);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+index 1fcd4451001fa..f1c369945ac5d 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+@@ -2957,6 +2957,7 @@ static int smu7_update_edc_leakage_table(struct pp_hwmgr *hwmgr)
+ 
+ static int smu7_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
+ {
++	struct amdgpu_device *adev = hwmgr->adev;
+ 	struct smu7_hwmgr *data;
+ 	int result = 0;
+ 
+@@ -2993,40 +2994,37 @@ static int smu7_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
+ 	/* Initalize Dynamic State Adjustment Rule Settings */
+ 	result = phm_initializa_dynamic_state_adjustment_rule_settings(hwmgr);
+ 
+-	if (0 == result) {
+-		struct amdgpu_device *adev = hwmgr->adev;
++	if (result)
++		goto fail;
+ 
+-		data->is_tlu_enabled = false;
++	data->is_tlu_enabled = false;
+ 
+-		hwmgr->platform_descriptor.hardwareActivityPerformanceLevels =
++	hwmgr->platform_descriptor.hardwareActivityPerformanceLevels =
+ 							SMU7_MAX_HARDWARE_POWERLEVELS;
+-		hwmgr->platform_descriptor.hardwarePerformanceLevels = 2;
+-		hwmgr->platform_descriptor.minimumClocksReductionPercentage = 50;
++	hwmgr->platform_descriptor.hardwarePerformanceLevels = 2;
++	hwmgr->platform_descriptor.minimumClocksReductionPercentage = 50;
+ 
+-		data->pcie_gen_cap = adev->pm.pcie_gen_mask;
+-		if (data->pcie_gen_cap & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
+-			data->pcie_spc_cap = 20;
+-		else
+-			data->pcie_spc_cap = 16;
+-		data->pcie_lane_cap = adev->pm.pcie_mlw_mask;
+-
+-		hwmgr->platform_descriptor.vbiosInterruptId = 0x20000400; /* IRQ_SOURCE1_SW_INT */
+-/* The true clock step depends on the frequency, typically 4.5 or 9 MHz. Here we use 5. */
+-		hwmgr->platform_descriptor.clockStep.engineClock = 500;
+-		hwmgr->platform_descriptor.clockStep.memoryClock = 500;
+-		smu7_thermal_parameter_init(hwmgr);
+-	} else {
+-		/* Ignore return value in here, we are cleaning up a mess. */
+-		smu7_hwmgr_backend_fini(hwmgr);
+-	}
++	data->pcie_gen_cap = adev->pm.pcie_gen_mask;
++	if (data->pcie_gen_cap & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
++		data->pcie_spc_cap = 20;
++	else
++		data->pcie_spc_cap = 16;
++	data->pcie_lane_cap = adev->pm.pcie_mlw_mask;
++
++	hwmgr->platform_descriptor.vbiosInterruptId = 0x20000400; /* IRQ_SOURCE1_SW_INT */
++	/* The true clock step depends on the frequency, typically 4.5 or 9 MHz. Here we use 5. */
++	hwmgr->platform_descriptor.clockStep.engineClock = 500;
++	hwmgr->platform_descriptor.clockStep.memoryClock = 500;
++	smu7_thermal_parameter_init(hwmgr);
+ 
+ 	result = smu7_update_edc_leakage_table(hwmgr);
+-	if (result) {
+-		smu7_hwmgr_backend_fini(hwmgr);
+-		return result;
+-	}
++	if (result)
++		goto fail;
+ 
+ 	return 0;
++fail:
++	smu7_hwmgr_backend_fini(hwmgr);
++	return result;
+ }
+ 
+ static int smu7_force_dpm_highest(struct pp_hwmgr *hwmgr)
+@@ -3316,8 +3314,7 @@ static int smu7_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+ 			const struct pp_power_state *current_ps)
+ {
+ 	struct amdgpu_device *adev = hwmgr->adev;
+-	struct smu7_power_state *smu7_ps =
+-				cast_phw_smu7_power_state(&request_ps->hardware);
++	struct smu7_power_state *smu7_ps;
+ 	uint32_t sclk;
+ 	uint32_t mclk;
+ 	struct PP_Clocks minimum_clocks = {0};
+@@ -3334,6 +3331,10 @@ static int smu7_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+ 	uint32_t latency;
+ 	bool latency_allowed = false;
+ 
++	smu7_ps = cast_phw_smu7_power_state(&request_ps->hardware);
++	if (!smu7_ps)
++		return -EINVAL;
++
+ 	data->battery_state = (PP_StateUILabel_Battery ==
+ 			request_ps->classification.ui_label);
+ 	data->mclk_ignore_signal = false;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
+index b015a601b385a..eb744401e0567 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
+@@ -1065,16 +1065,18 @@ static int smu8_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+ 				struct pp_power_state  *prequest_ps,
+ 			const struct pp_power_state *pcurrent_ps)
+ {
+-	struct smu8_power_state *smu8_ps =
+-				cast_smu8_power_state(&prequest_ps->hardware);
+-
+-	const struct smu8_power_state *smu8_current_ps =
+-				cast_const_smu8_power_state(&pcurrent_ps->hardware);
+-
++	struct smu8_power_state *smu8_ps;
++	const struct smu8_power_state *smu8_current_ps;
+ 	struct smu8_hwmgr *data = hwmgr->backend;
+ 	struct PP_Clocks clocks = {0, 0, 0, 0};
+ 	bool force_high;
+ 
++	smu8_ps = cast_smu8_power_state(&prequest_ps->hardware);
++	smu8_current_ps = cast_const_smu8_power_state(&pcurrent_ps->hardware);
++
++	if (!smu8_ps || !smu8_current_ps)
++		return -EINVAL;
++
+ 	smu8_ps->need_dfs_bypass = true;
+ 
+ 	data->battery_state = (PP_StateUILabel_Battery == prequest_ps->classification.ui_label);
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+index 9f5bd998c6bff..f4acdb2267416 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+@@ -3259,8 +3259,7 @@ static int vega10_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+ 			const struct pp_power_state *current_ps)
+ {
+ 	struct amdgpu_device *adev = hwmgr->adev;
+-	struct vega10_power_state *vega10_ps =
+-				cast_phw_vega10_power_state(&request_ps->hardware);
++	struct vega10_power_state *vega10_ps;
+ 	uint32_t sclk;
+ 	uint32_t mclk;
+ 	struct PP_Clocks minimum_clocks = {0};
+@@ -3278,6 +3277,10 @@ static int vega10_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+ 	uint32_t stable_pstate_sclk = 0, stable_pstate_mclk = 0;
+ 	uint32_t latency;
+ 
++	vega10_ps = cast_phw_vega10_power_state(&request_ps->hardware);
++	if (!vega10_ps)
++		return -EINVAL;
++
+ 	data->battery_state = (PP_StateUILabel_Battery ==
+ 			request_ps->classification.ui_label);
+ 
+@@ -3415,13 +3418,17 @@ static int vega10_find_dpm_states_clocks_in_dpm_table(struct pp_hwmgr *hwmgr, co
+ 	const struct vega10_power_state *vega10_ps =
+ 			cast_const_phw_vega10_power_state(states->pnew_state);
+ 	struct vega10_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table);
+-	uint32_t sclk = vega10_ps->performance_levels
+-			[vega10_ps->performance_level_count - 1].gfx_clock;
+ 	struct vega10_single_dpm_table *mclk_table = &(data->dpm_table.mem_table);
+-	uint32_t mclk = vega10_ps->performance_levels
+-			[vega10_ps->performance_level_count - 1].mem_clock;
++	uint32_t sclk, mclk;
+ 	uint32_t i;
+ 
++	if (vega10_ps == NULL)
++		return -EINVAL;
++	sclk = vega10_ps->performance_levels
++			[vega10_ps->performance_level_count - 1].gfx_clock;
++	mclk = vega10_ps->performance_levels
++			[vega10_ps->performance_level_count - 1].mem_clock;
++
+ 	for (i = 0; i < sclk_table->count; i++) {
+ 		if (sclk == sclk_table->dpm_levels[i].value)
+ 			break;
+@@ -3728,6 +3735,9 @@ static int vega10_generate_dpm_level_enable_mask(
+ 			cast_const_phw_vega10_power_state(states->pnew_state);
+ 	int i;
+ 
++	if (vega10_ps == NULL)
++		return -EINVAL;
++
+ 	PP_ASSERT_WITH_CODE(!vega10_trim_dpm_states(hwmgr, vega10_ps),
+ 			"Attempt to Trim DPM States Failed!",
+ 			return -1);
+@@ -4995,6 +5005,8 @@ static int vega10_check_states_equal(struct pp_hwmgr *hwmgr,
+ 
+ 	vega10_psa = cast_const_phw_vega10_power_state(pstate1);
+ 	vega10_psb = cast_const_phw_vega10_power_state(pstate2);
++	if (vega10_psa == NULL || vega10_psb == NULL)
++		return -EINVAL;
+ 
+ 	/* If the two states don't even have the same number of performance levels
+ 	 * they cannot be the same state.
+@@ -5128,6 +5140,8 @@ static int vega10_set_sclk_od(struct pp_hwmgr *hwmgr, uint32_t value)
+ 		return -EINVAL;
+ 
+ 	vega10_ps = cast_phw_vega10_power_state(&ps->hardware);
++	if (vega10_ps == NULL)
++		return -EINVAL;
+ 
+ 	vega10_ps->performance_levels
+ 	[vega10_ps->performance_level_count - 1].gfx_clock =
+@@ -5179,6 +5193,8 @@ static int vega10_set_mclk_od(struct pp_hwmgr *hwmgr, uint32_t value)
+ 		return -EINVAL;
+ 
+ 	vega10_ps = cast_phw_vega10_power_state(&ps->hardware);
++	if (vega10_ps == NULL)
++		return -EINVAL;
+ 
+ 	vega10_ps->performance_levels
+ 	[vega10_ps->performance_level_count - 1].mem_clock =
+@@ -5420,6 +5436,9 @@ static void vega10_odn_update_power_state(struct pp_hwmgr *hwmgr)
+ 		return;
+ 
+ 	vega10_ps = cast_phw_vega10_power_state(&ps->hardware);
++	if (vega10_ps == NULL)
++		return;
++
+ 	max_level = vega10_ps->performance_level_count - 1;
+ 
+ 	if (vega10_ps->performance_levels[max_level].gfx_clock !=
+@@ -5442,6 +5461,9 @@ static void vega10_odn_update_power_state(struct pp_hwmgr *hwmgr)
+ 
+ 	ps = (struct pp_power_state *)((unsigned long)(hwmgr->ps) + hwmgr->ps_size * (hwmgr->num_ps - 1));
+ 	vega10_ps = cast_phw_vega10_power_state(&ps->hardware);
++	if (vega10_ps == NULL)
++		return;
++
+ 	max_level = vega10_ps->performance_level_count - 1;
+ 
+ 	if (vega10_ps->performance_levels[max_level].gfx_clock !=
+@@ -5632,6 +5654,8 @@ static int vega10_get_performance_level(struct pp_hwmgr *hwmgr, const struct pp_
+ 		return -EINVAL;
+ 
+ 	vega10_ps = cast_const_phw_vega10_power_state(state);
++	if (vega10_ps == NULL)
++		return -EINVAL;
+ 
+ 	i = index > vega10_ps->performance_level_count - 1 ?
+ 			vega10_ps->performance_level_count - 1 : index;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index e1796ecf9c05c..06409133b09b1 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -2220,7 +2220,7 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+ {
+ 	int ret = 0;
+ 	int index = 0;
+-	long workload;
++	long workload[1];
+ 	struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
+ 
+ 	if (!skip_display_settings) {
+@@ -2260,10 +2260,10 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+ 		smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) {
+ 		index = fls(smu->workload_mask);
+ 		index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+-		workload = smu->workload_setting[index];
++		workload[0] = smu->workload_setting[index];
+ 
+-		if (smu->power_profile_mode != workload)
+-			smu_bump_power_profile_mode(smu, &workload, 0);
++		if (smu->power_profile_mode != workload[0])
++			smu_bump_power_profile_mode(smu, workload, 0);
+ 	}
+ 
+ 	return ret;
+@@ -2313,7 +2313,7 @@ static int smu_switch_power_profile(void *handle,
+ {
+ 	struct smu_context *smu = handle;
+ 	struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
+-	long workload;
++	long workload[1];
+ 	uint32_t index;
+ 
+ 	if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled)
+@@ -2326,17 +2326,17 @@ static int smu_switch_power_profile(void *handle,
+ 		smu->workload_mask &= ~(1 << smu->workload_prority[type]);
+ 		index = fls(smu->workload_mask);
+ 		index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+-		workload = smu->workload_setting[index];
++		workload[0] = smu->workload_setting[index];
+ 	} else {
+ 		smu->workload_mask |= (1 << smu->workload_prority[type]);
+ 		index = fls(smu->workload_mask);
+ 		index = index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+-		workload = smu->workload_setting[index];
++		workload[0] = smu->workload_setting[index];
+ 	}
+ 
+ 	if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
+ 		smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
+-		smu_bump_power_profile_mode(smu, &workload, 0);
++		smu_bump_power_profile_mode(smu, workload, 0);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
+index 6a4f20fccf841..7b0bc9704eacb 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
+@@ -1027,7 +1027,6 @@ ssize_t analogix_dp_transfer(struct analogix_dp_device *dp,
+ 	u32 status_reg;
+ 	u8 *buffer = msg->buffer;
+ 	unsigned int i;
+-	int num_transferred = 0;
+ 	int ret;
+ 
+ 	/* Buffer size of AUX CH is 16 bytes */
+@@ -1079,7 +1078,6 @@ ssize_t analogix_dp_transfer(struct analogix_dp_device *dp,
+ 			reg = buffer[i];
+ 			writel(reg, dp->reg_base + ANALOGIX_DP_BUF_DATA_0 +
+ 			       4 * i);
+-			num_transferred++;
+ 		}
+ 	}
+ 
+@@ -1127,7 +1125,6 @@ ssize_t analogix_dp_transfer(struct analogix_dp_device *dp,
+ 			reg = readl(dp->reg_base + ANALOGIX_DP_BUF_DATA_0 +
+ 				    4 * i);
+ 			buffer[i] = (unsigned char)reg;
+-			num_transferred++;
+ 		}
+ 	}
+ 
+@@ -1144,7 +1141,7 @@ ssize_t analogix_dp_transfer(struct analogix_dp_device *dp,
+ 		 (msg->request & ~DP_AUX_I2C_MOT) == DP_AUX_NATIVE_READ)
+ 		msg->reply = DP_AUX_NATIVE_REPLY_ACK;
+ 
+-	return num_transferred > 0 ? num_transferred : -EBUSY;
++	return msg->size;
+ 
+ aux_error:
+ 	/* if aux err happen, reset aux */
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index 68831f4e502a2..fc2ceae61db2d 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -4069,6 +4069,7 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ 	if (up_req->msg.req_type == DP_CONNECTION_STATUS_NOTIFY) {
+ 		const struct drm_dp_connection_status_notify *conn_stat =
+ 			&up_req->msg.u.conn_stat;
++		bool handle_csn;
+ 
+ 		drm_dbg_kms(mgr->dev, "Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n",
+ 			    conn_stat->port_number,
+@@ -4077,6 +4078,16 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ 			    conn_stat->message_capability_status,
+ 			    conn_stat->input_port,
+ 			    conn_stat->peer_device_type);
++
++		mutex_lock(&mgr->probe_lock);
++		handle_csn = mgr->mst_primary->link_address_sent;
++		mutex_unlock(&mgr->probe_lock);
++
++		if (!handle_csn) {
++			drm_dbg_kms(mgr->dev, "Got CSN before finishing topology probing. Skip it.");
++			kfree(up_req);
++			goto out;
++		}
+ 	} else if (up_req->msg.req_type == DP_RESOURCE_STATUS_NOTIFY) {
+ 		const struct drm_dp_resource_status_notify *res_stat =
+ 			&up_req->msg.u.resource_stat;
+diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c
+index 02b1235c6d619..106292d6ed268 100644
+--- a/drivers/gpu/drm/drm_atomic_uapi.c
++++ b/drivers/gpu/drm/drm_atomic_uapi.c
+@@ -1067,23 +1067,16 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
+ 		}
+ 
+ 		if (async_flip &&
+-		    prop != config->prop_fb_id &&
+-		    prop != config->prop_in_fence_fd &&
+-		    prop != config->prop_fb_damage_clips) {
++		    (plane_state->plane->type != DRM_PLANE_TYPE_PRIMARY ||
++		     (prop != config->prop_fb_id &&
++		      prop != config->prop_in_fence_fd &&
++		      prop != config->prop_fb_damage_clips))) {
+ 			ret = drm_atomic_plane_get_property(plane, plane_state,
+ 							    prop, &old_val);
+ 			ret = drm_atomic_check_prop_changes(ret, old_val, prop_value, prop);
+ 			break;
+ 		}
+ 
+-		if (async_flip && plane_state->plane->type != DRM_PLANE_TYPE_PRIMARY) {
+-			drm_dbg_atomic(prop->dev,
+-				       "[OBJECT:%d] Only primary planes can be changed during async flip\n",
+-				       obj->id);
+-			ret = -EINVAL;
+-			break;
+-		}
+-
+ 		ret = drm_atomic_plane_set_property(plane,
+ 				plane_state, file_priv,
+ 				prop, prop_value);
+diff --git a/drivers/gpu/drm/drm_client_modeset.c b/drivers/gpu/drm/drm_client_modeset.c
+index 31af5cf37a099..cee5eafbfb81a 100644
+--- a/drivers/gpu/drm/drm_client_modeset.c
++++ b/drivers/gpu/drm/drm_client_modeset.c
+@@ -880,6 +880,11 @@ int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width,
+ 
+ 			kfree(modeset->mode);
+ 			modeset->mode = drm_mode_duplicate(dev, mode);
++			if (!modeset->mode) {
++				ret = -ENOMEM;
++				break;
++			}
++
+ 			drm_connector_get(connector);
+ 			modeset->connectors[modeset->num_connectors++] = connector;
+ 			modeset->x = offset->x;
+diff --git a/drivers/gpu/drm/i915/display/intel_backlight.c b/drivers/gpu/drm/i915/display/intel_backlight.c
+index 071668bfe5d14..6c3333136737e 100644
+--- a/drivers/gpu/drm/i915/display/intel_backlight.c
++++ b/drivers/gpu/drm/i915/display/intel_backlight.c
+@@ -1449,6 +1449,9 @@ bxt_setup_backlight(struct intel_connector *connector, enum pipe unused)
+ 
+ static int cnp_num_backlight_controllers(struct drm_i915_private *i915)
+ {
++	if (INTEL_PCH_TYPE(i915) >= PCH_MTL)
++		return 2;
++
+ 	if (INTEL_PCH_TYPE(i915) >= PCH_DG1)
+ 		return 1;
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_pps.c b/drivers/gpu/drm/i915/display/intel_pps.c
+index 0ccbf9a85914c..eca07436d1bcd 100644
+--- a/drivers/gpu/drm/i915/display/intel_pps.c
++++ b/drivers/gpu/drm/i915/display/intel_pps.c
+@@ -351,6 +351,9 @@ static int intel_num_pps(struct drm_i915_private *i915)
+ 	if (IS_GEMINILAKE(i915) || IS_BROXTON(i915))
+ 		return 2;
+ 
++	if (INTEL_PCH_TYPE(i915) >= PCH_MTL)
++		return 2;
++
+ 	if (INTEL_PCH_TYPE(i915) >= PCH_DG1)
+ 		return 1;
+ 
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+index a2195e28b625f..cac6d4184506c 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+@@ -290,6 +290,41 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
+ 	return i915_error_to_vmf_fault(err);
+ }
+ 
++static void set_address_limits(struct vm_area_struct *area,
++			       struct i915_vma *vma,
++			       unsigned long obj_offset,
++			       unsigned long *start_vaddr,
++			       unsigned long *end_vaddr)
++{
++	unsigned long vm_start, vm_end, vma_size; /* user's memory parameters */
++	long start, end; /* memory boundaries */
++
++	/*
++	 * Let's move into the ">> PAGE_SHIFT"
++	 * domain to be sure not to lose bits
++	 */
++	vm_start = area->vm_start >> PAGE_SHIFT;
++	vm_end = area->vm_end >> PAGE_SHIFT;
++	vma_size = vma->size >> PAGE_SHIFT;
++
++	/*
++	 * Calculate the memory boundaries by considering the offset
++	 * provided by the user during memory mapping and the offset
++	 * provided for the partial mapping.
++	 */
++	start = vm_start;
++	start -= obj_offset;
++	start += vma->gtt_view.partial.offset;
++	end = start + vma_size;
++
++	start = max_t(long, start, vm_start);
++	end = min_t(long, end, vm_end);
++
++	/* Let's move back into the "<< PAGE_SHIFT" domain */
++	*start_vaddr = (unsigned long)start << PAGE_SHIFT;
++	*end_vaddr = (unsigned long)end << PAGE_SHIFT;
++}
++
+ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
+ {
+ #define MIN_CHUNK_PAGES (SZ_1M >> PAGE_SHIFT)
+@@ -302,14 +337,18 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
+ 	struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
+ 	bool write = area->vm_flags & VM_WRITE;
+ 	struct i915_gem_ww_ctx ww;
++	unsigned long obj_offset;
++	unsigned long start, end; /* memory boundaries */
+ 	intel_wakeref_t wakeref;
+ 	struct i915_vma *vma;
+ 	pgoff_t page_offset;
++	unsigned long pfn;
+ 	int srcu;
+ 	int ret;
+ 
+-	/* We don't use vmf->pgoff since that has the fake offset */
++	obj_offset = area->vm_pgoff - drm_vma_node_start(&mmo->vma_node);
+ 	page_offset = (vmf->address - area->vm_start) >> PAGE_SHIFT;
++	page_offset += obj_offset;
+ 
+ 	trace_i915_gem_object_fault(obj, page_offset, true, write);
+ 
+@@ -402,12 +441,14 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
+ 	if (ret)
+ 		goto err_unpin;
+ 
++	set_address_limits(area, vma, obj_offset, &start, &end);
++
++	pfn = (ggtt->gmadr.start + i915_ggtt_offset(vma)) >> PAGE_SHIFT;
++	pfn += (start - area->vm_start) >> PAGE_SHIFT;
++	pfn += obj_offset - vma->gtt_view.partial.offset;
++
+ 	/* Finally, remap it using the new GTT offset */
+-	ret = remap_io_mapping(area,
+-			       area->vm_start + (vma->gtt_view.partial.offset << PAGE_SHIFT),
+-			       (ggtt->gmadr.start + i915_ggtt_offset(vma)) >> PAGE_SHIFT,
+-			       min_t(u64, vma->size, area->vm_end - area->vm_start),
+-			       &ggtt->iomap);
++	ret = remap_io_mapping(area, start, pfn, end - start, &ggtt->iomap);
+ 	if (ret)
+ 		goto err_fence;
+ 
+@@ -1084,6 +1125,8 @@ int i915_gem_fb_mmap(struct drm_i915_gem_object *obj, struct vm_area_struct *vma
+ 		mmo = mmap_offset_attach(obj, mmap_type, NULL);
+ 		if (IS_ERR(mmo))
+ 			return PTR_ERR(mmo);
++
++		vma->vm_pgoff += drm_vma_node_start(&mmo->vma_node);
+ 	}
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+index e6f177183c0fa..5c72462d1f57e 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+@@ -165,7 +165,6 @@ i915_ttm_placement_from_obj(const struct drm_i915_gem_object *obj,
+ 	i915_ttm_place_from_region(num_allowed ? obj->mm.placements[0] :
+ 				   obj->mm.region, &places[0], obj->bo_offset,
+ 				   obj->base.size, flags);
+-	places[0].flags |= TTM_PL_FLAG_DESIRED;
+ 
+ 	/* Cache this on object? */
+ 	for (i = 0; i < num_allowed; ++i) {
+@@ -779,13 +778,16 @@ static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
+ 		.interruptible = true,
+ 		.no_wait_gpu = false,
+ 	};
+-	int real_num_busy;
++	struct ttm_placement initial_placement;
++	struct ttm_place initial_place;
+ 	int ret;
+ 
+ 	/* First try only the requested placement. No eviction. */
+-	real_num_busy = placement->num_placement;
+-	placement->num_placement = 1;
+-	ret = ttm_bo_validate(bo, placement, &ctx);
++	initial_placement.num_placement = 1;
++	memcpy(&initial_place, placement->placement, sizeof(struct ttm_place));
++	initial_place.flags |= TTM_PL_FLAG_DESIRED;
++	initial_placement.placement = &initial_place;
++	ret = ttm_bo_validate(bo, &initial_placement, &ctx);
+ 	if (ret) {
+ 		ret = i915_ttm_err_to_gem(ret);
+ 		/*
+@@ -800,7 +802,6 @@ static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
+ 		 * If the initial attempt fails, allow all accepted placements,
+ 		 * evicting if necessary.
+ 		 */
+-		placement->num_placement = real_num_busy;
+ 		ret = ttm_bo_validate(bo, placement, &ctx);
+ 		if (ret)
+ 			return i915_ttm_err_to_gem(ret);
+diff --git a/drivers/gpu/drm/lima/lima_drv.c b/drivers/gpu/drm/lima/lima_drv.c
+index 739c865b556f8..10bce18b7c31c 100644
+--- a/drivers/gpu/drm/lima/lima_drv.c
++++ b/drivers/gpu/drm/lima/lima_drv.c
+@@ -501,3 +501,4 @@ module_platform_driver(lima_platform_driver);
+ MODULE_AUTHOR("Lima Project Developers");
+ MODULE_DESCRIPTION("Lima DRM Driver");
+ MODULE_LICENSE("GPL v2");
++MODULE_SOFTDEP("pre: governor_simpleondemand");
+diff --git a/drivers/gpu/drm/mgag200/mgag200_i2c.c b/drivers/gpu/drm/mgag200/mgag200_i2c.c
+index 423eb302be7eb..4caeb68f3010c 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_i2c.c
++++ b/drivers/gpu/drm/mgag200/mgag200_i2c.c
+@@ -31,6 +31,8 @@
+ #include <linux/i2c.h>
+ #include <linux/pci.h>
+ 
++#include <drm/drm_managed.h>
++
+ #include "mgag200_drv.h"
+ 
+ static int mga_i2c_read_gpio(struct mga_device *mdev)
+@@ -86,7 +88,7 @@ static int mga_gpio_getscl(void *data)
+ 	return (mga_i2c_read_gpio(mdev) & i2c->clock) ? 1 : 0;
+ }
+ 
+-static void mgag200_i2c_release(void *res)
++static void mgag200_i2c_release(struct drm_device *dev, void *res)
+ {
+ 	struct mga_i2c_chan *i2c = res;
+ 
+@@ -114,7 +116,7 @@ int mgag200_i2c_init(struct mga_device *mdev, struct mga_i2c_chan *i2c)
+ 	i2c->adapter.algo_data = &i2c->bit;
+ 
+ 	i2c->bit.udelay = 10;
+-	i2c->bit.timeout = 2;
++	i2c->bit.timeout = usecs_to_jiffies(2200);
+ 	i2c->bit.data = i2c;
+ 	i2c->bit.setsda		= mga_gpio_setsda;
+ 	i2c->bit.setscl		= mga_gpio_setscl;
+@@ -125,5 +127,5 @@ int mgag200_i2c_init(struct mga_device *mdev, struct mga_i2c_chan *i2c)
+ 	if (ret)
+ 		return ret;
+ 
+-	return devm_add_action_or_reset(dev->dev, mgag200_i2c_release, i2c);
++	return drmm_add_action_or_reset(dev, mgag200_i2c_release, i2c);
+ }
+diff --git a/drivers/gpu/drm/radeon/pptable.h b/drivers/gpu/drm/radeon/pptable.h
+index b7f22597ee95e..969a8fb0ee9e0 100644
+--- a/drivers/gpu/drm/radeon/pptable.h
++++ b/drivers/gpu/drm/radeon/pptable.h
+@@ -439,7 +439,7 @@ typedef struct _StateArray{
+     //how many states we have 
+     UCHAR ucNumEntries;
+     
+-    ATOM_PPLIB_STATE_V2 states[] __counted_by(ucNumEntries);
++    ATOM_PPLIB_STATE_V2 states[] /* __counted_by(ucNumEntries) */;
+ }StateArray;
+ 
+ 
+diff --git a/drivers/gpu/drm/tests/drm_gem_shmem_test.c b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
+index 91202e40cde94..60c652782761b 100644
+--- a/drivers/gpu/drm/tests/drm_gem_shmem_test.c
++++ b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
+@@ -102,6 +102,17 @@ static void drm_gem_shmem_test_obj_create_private(struct kunit *test)
+ 
+ 	sg_init_one(sgt->sgl, buf, TEST_SIZE);
+ 
++	/*
++	 * Set the DMA mask to 64 bits and map the sgtables,
++	 * otherwise drm_gem_shmem_free will cause a warning
++	 * on debug kernels.
++	 */
++	ret = dma_set_mask(drm_dev->dev, DMA_BIT_MASK(64));
++	KUNIT_ASSERT_EQ(test, ret, 0);
++
++	ret = dma_map_sgtable(drm_dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
++	KUNIT_ASSERT_EQ(test, ret, 0);
++
+ 	/* Init a mock DMA-BUF */
+ 	buf_mock.size = TEST_SIZE;
+ 	attach_mock.dmabuf = &buf_mock;
+diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+index af71b87d80301..03c6d4d50a839 100644
+--- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+@@ -44,9 +44,10 @@
+ #define GSCCS_RING_BASE				0x11a000
+ 
+ #define RING_TAIL(base)				XE_REG((base) + 0x30)
++#define   TAIL_ADDR				REG_GENMASK(20, 3)
+ 
+ #define RING_HEAD(base)				XE_REG((base) + 0x34)
+-#define   HEAD_ADDR				0x001FFFFC
++#define   HEAD_ADDR				REG_GENMASK(20, 2)
+ 
+ #define RING_START(base)			XE_REG((base) + 0x38)
+ 
+@@ -135,7 +136,6 @@
+ #define   RING_VALID_MASK			0x00000001
+ #define   RING_VALID				0x00000001
+ #define   STOP_RING				REG_BIT(8)
+-#define   TAIL_ADDR				0x001FFFF8
+ 
+ #define RING_CTX_TIMESTAMP(base)		XE_REG((base) + 0x3a8)
+ #define CSBE_DEBUG_STATUS(base)			XE_REG((base) + 0x3fc)
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index e4e3658e6a138..0f42971ff0a83 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -1429,8 +1429,8 @@ static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
+ 			    !xe_sched_job_completed(job)) ||
+ 			    xe_sched_invalidate_job(job, 2)) {
+ 				trace_xe_sched_job_ban(job);
+-				xe_sched_tdr_queue_imm(&q->guc->sched);
+ 				set_exec_queue_banned(q);
++				xe_sched_tdr_queue_imm(&q->guc->sched);
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/xe/xe_hwmon.c b/drivers/gpu/drm/xe/xe_hwmon.c
+index 453e601ddd5e6..d37f1dea9f8b8 100644
+--- a/drivers/gpu/drm/xe/xe_hwmon.c
++++ b/drivers/gpu/drm/xe/xe_hwmon.c
+@@ -200,9 +200,10 @@ static int xe_hwmon_power_max_write(struct xe_hwmon *hwmon, int channel, long va
+ 				     PKG_PWR_LIM_1_EN, 0, channel);
+ 
+ 		if (reg_val & PKG_PWR_LIM_1_EN) {
++			drm_warn(&gt_to_xe(hwmon->gt)->drm, "PL1 disable is not supported!\n");
+ 			ret = -EOPNOTSUPP;
+-			goto unlock;
+ 		}
++		goto unlock;
+ 	}
+ 
+ 	/* Computation in 64-bits to avoid overflow. Round to nearest. */
+diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
+index 615bbc372ac62..d7bf7bc9dc145 100644
+--- a/drivers/gpu/drm/xe/xe_lrc.c
++++ b/drivers/gpu/drm/xe/xe_lrc.c
+@@ -1354,7 +1354,10 @@ struct xe_lrc_snapshot *xe_lrc_snapshot_capture(struct xe_lrc *lrc)
+ 	if (!snapshot)
+ 		return NULL;
+ 
+-	snapshot->context_desc = lower_32_bits(xe_lrc_ggtt_addr(lrc));
++	if (lrc->bo && lrc->bo->vm)
++		xe_vm_get(lrc->bo->vm);
++
++	snapshot->context_desc = xe_lrc_ggtt_addr(lrc);
+ 	snapshot->head = xe_lrc_ring_head(lrc);
+ 	snapshot->tail.internal = lrc->ring.tail;
+ 	snapshot->tail.memory = xe_lrc_read_ctx_reg(lrc, CTX_RING_TAIL);
+@@ -1370,12 +1373,14 @@ struct xe_lrc_snapshot *xe_lrc_snapshot_capture(struct xe_lrc *lrc)
+ void xe_lrc_snapshot_capture_delayed(struct xe_lrc_snapshot *snapshot)
+ {
+ 	struct xe_bo *bo;
++	struct xe_vm *vm;
+ 	struct iosys_map src;
+ 
+ 	if (!snapshot)
+ 		return;
+ 
+ 	bo = snapshot->lrc_bo;
++	vm = bo->vm;
+ 	snapshot->lrc_bo = NULL;
+ 
+ 	snapshot->lrc_snapshot = kvmalloc(snapshot->lrc_size, GFP_KERNEL);
+@@ -1395,6 +1400,8 @@ void xe_lrc_snapshot_capture_delayed(struct xe_lrc_snapshot *snapshot)
+ 	dma_resv_unlock(bo->ttm.base.resv);
+ put_bo:
+ 	xe_bo_put(bo);
++	if (vm)
++		xe_vm_put(vm);
+ }
+ 
+ void xe_lrc_snapshot_print(struct xe_lrc_snapshot *snapshot, struct drm_printer *p)
+@@ -1440,7 +1447,13 @@ void xe_lrc_snapshot_free(struct xe_lrc_snapshot *snapshot)
+ 		return;
+ 
+ 	kvfree(snapshot->lrc_snapshot);
+-	if (snapshot->lrc_bo)
++	if (snapshot->lrc_bo) {
++		struct xe_vm *vm;
++
++		vm = snapshot->lrc_bo->vm;
+ 		xe_bo_put(snapshot->lrc_bo);
++		if (vm)
++			xe_vm_put(vm);
++	}
+ 	kfree(snapshot);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_preempt_fence.c b/drivers/gpu/drm/xe/xe_preempt_fence.c
+index 7d50c6e89d8e7..5b243b7feb59d 100644
+--- a/drivers/gpu/drm/xe/xe_preempt_fence.c
++++ b/drivers/gpu/drm/xe/xe_preempt_fence.c
+@@ -23,11 +23,19 @@ static void preempt_fence_work_func(struct work_struct *w)
+ 		q->ops->suspend_wait(q);
+ 
+ 	dma_fence_signal(&pfence->base);
+-	dma_fence_end_signalling(cookie);
+-
++	/*
++	 * Opt for keeping everything in the fence critical section. This looks strange since we
++	 * have just signalled the fence; however, the preempt fences are all signalled via a single
++	 * global ordered wq, so anything that happens in this callback can easily block progress
++	 * on the entire wq, which itself may prevent other published preempt fences from ever
++	 * signalling. Therefore try to keep everything here in the callback inside the fence
++	 * critical section. For example, if something below grabs a lock such as vm->lock,
++	 * lockdep should complain, since we also hold that lock whilst waiting on preempt
++	 * fences to complete.
++	 */
+ 	xe_vm_queue_rebind_worker(q->vm);
+-
+ 	xe_exec_queue_put(q);
++	dma_fence_end_signalling(cookie);
+ }
+ 
+ static const char *
+diff --git a/drivers/gpu/drm/xe/xe_rtp.c b/drivers/gpu/drm/xe/xe_rtp.c
+index fb44cc7521d8c..10326bd1bfa3b 100644
+--- a/drivers/gpu/drm/xe/xe_rtp.c
++++ b/drivers/gpu/drm/xe/xe_rtp.c
+@@ -200,7 +200,7 @@ static void rtp_mark_active(struct xe_device *xe,
+ 	if (first == last)
+ 		bitmap_set(ctx->active_entries, first, 1);
+ 	else
+-		bitmap_set(ctx->active_entries, first, last - first + 2);
++		bitmap_set(ctx->active_entries, first, last - first + 1);
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
+index 65f1f16282356..2bfff998458ba 100644
+--- a/drivers/gpu/drm/xe/xe_sync.c
++++ b/drivers/gpu/drm/xe/xe_sync.c
+@@ -263,7 +263,7 @@ void xe_sync_entry_cleanup(struct xe_sync_entry *sync)
+ 	if (sync->fence)
+ 		dma_fence_put(sync->fence);
+ 	if (sync->chain_fence)
+-		dma_fence_put(&sync->chain_fence->base);
++		dma_fence_chain_free(sync->chain_fence);
+ 	if (sync->ufence)
+ 		user_fence_put(sync->ufence);
+ }
+diff --git a/drivers/hwmon/corsair-psu.c b/drivers/hwmon/corsair-psu.c
+index 2c7c92272fe39..f8f22b8a67cdf 100644
+--- a/drivers/hwmon/corsair-psu.c
++++ b/drivers/hwmon/corsair-psu.c
+@@ -875,15 +875,16 @@ static const struct hid_device_id corsairpsu_idtable[] = {
+ 	{ HID_USB_DEVICE(0x1b1c, 0x1c04) }, /* Corsair HX650i */
+ 	{ HID_USB_DEVICE(0x1b1c, 0x1c05) }, /* Corsair HX750i */
+ 	{ HID_USB_DEVICE(0x1b1c, 0x1c06) }, /* Corsair HX850i */
+-	{ HID_USB_DEVICE(0x1b1c, 0x1c07) }, /* Corsair HX1000i Series 2022 */
+-	{ HID_USB_DEVICE(0x1b1c, 0x1c08) }, /* Corsair HX1200i */
++	{ HID_USB_DEVICE(0x1b1c, 0x1c07) }, /* Corsair HX1000i Legacy */
++	{ HID_USB_DEVICE(0x1b1c, 0x1c08) }, /* Corsair HX1200i Legacy */
+ 	{ HID_USB_DEVICE(0x1b1c, 0x1c09) }, /* Corsair RM550i */
+ 	{ HID_USB_DEVICE(0x1b1c, 0x1c0a) }, /* Corsair RM650i */
+ 	{ HID_USB_DEVICE(0x1b1c, 0x1c0b) }, /* Corsair RM750i */
+ 	{ HID_USB_DEVICE(0x1b1c, 0x1c0c) }, /* Corsair RM850i */
+ 	{ HID_USB_DEVICE(0x1b1c, 0x1c0d) }, /* Corsair RM1000i */
+ 	{ HID_USB_DEVICE(0x1b1c, 0x1c1e) }, /* Corsair HX1000i Series 2023 */
+-	{ HID_USB_DEVICE(0x1b1c, 0x1c1f) }, /* Corsair HX1500i Series 2022 and 2023 */
++	{ HID_USB_DEVICE(0x1b1c, 0x1c1f) }, /* Corsair HX1500i Legacy and Series 2023 */
++	{ HID_USB_DEVICE(0x1b1c, 0x1c23) }, /* Corsair HX1200i Series 2023 */
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(hid, corsairpsu_idtable);
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 0a8b95ce35f79..365e37bba0f33 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -990,8 +990,11 @@ static int __maybe_unused geni_i2c_runtime_resume(struct device *dev)
+ 		return ret;
+ 
+ 	ret = geni_se_resources_on(&gi2c->se);
+-	if (ret)
++	if (ret) {
++		clk_disable_unprepare(gi2c->core_clk);
++		geni_icc_disable(&gi2c->se);
+ 		return ret;
++	}
+ 
+ 	enable_irq(gi2c->irq);
+ 	gi2c->suspended = 0;
+diff --git a/drivers/i2c/i2c-smbus.c b/drivers/i2c/i2c-smbus.c
+index 97f338b123b11..25bc7b8d98f0d 100644
+--- a/drivers/i2c/i2c-smbus.c
++++ b/drivers/i2c/i2c-smbus.c
+@@ -34,6 +34,7 @@ static int smbus_do_alert(struct device *dev, void *addrp)
+ 	struct i2c_client *client = i2c_verify_client(dev);
+ 	struct alert_data *data = addrp;
+ 	struct i2c_driver *driver;
++	int ret;
+ 
+ 	if (!client || client->addr != data->addr)
+ 		return 0;
+@@ -47,16 +48,47 @@ static int smbus_do_alert(struct device *dev, void *addrp)
+ 	device_lock(dev);
+ 	if (client->dev.driver) {
+ 		driver = to_i2c_driver(client->dev.driver);
+-		if (driver->alert)
++		if (driver->alert) {
++			/* Stop iterating after we find the device */
+ 			driver->alert(client, data->type, data->data);
+-		else
++			ret = -EBUSY;
++		} else {
+ 			dev_warn(&client->dev, "no driver alert()!\n");
+-	} else
++			ret = -EOPNOTSUPP;
++		}
++	} else {
+ 		dev_dbg(&client->dev, "alert with no driver\n");
++		ret = -ENODEV;
++	}
++	device_unlock(dev);
++
++	return ret;
++}
++
++/* Same as above, but call back all drivers with alert handler */
++
++static int smbus_do_alert_force(struct device *dev, void *addrp)
++{
++	struct i2c_client *client = i2c_verify_client(dev);
++	struct alert_data *data = addrp;
++	struct i2c_driver *driver;
++
++	if (!client || (client->flags & I2C_CLIENT_TEN))
++		return 0;
++
++	/*
++	 * Drivers should either disable alerts, or provide at least
++	 * a minimal handler. Lock so the driver won't change.
++	 */
++	device_lock(dev);
++	if (client->dev.driver) {
++		driver = to_i2c_driver(client->dev.driver);
++		if (driver->alert)
++			driver->alert(client, data->type, data->data);
++	}
+ 	device_unlock(dev);
+ 
+-	/* Stop iterating after we find the device */
+-	return -EBUSY;
++	return 0;
+ }
+ 
+ /*
+@@ -67,6 +99,7 @@ static irqreturn_t smbus_alert(int irq, void *d)
+ {
+ 	struct i2c_smbus_alert *alert = d;
+ 	struct i2c_client *ara;
++	unsigned short prev_addr = I2C_CLIENT_END; /* Not a valid address */
+ 
+ 	ara = alert->ara;
+ 
+@@ -94,8 +127,25 @@ static irqreturn_t smbus_alert(int irq, void *d)
+ 			data.addr, data.data);
+ 
+ 		/* Notify driver for the device which issued the alert */
+-		device_for_each_child(&ara->adapter->dev, &data,
+-				      smbus_do_alert);
++		status = device_for_each_child(&ara->adapter->dev, &data,
++					       smbus_do_alert);
++		/*
++		 * If we read the same address more than once, and the alert
++		 * was not handled by a driver, it won't do any good to repeat
++		 * the loop because it will never terminate. Try again, this
++		 * time calling the alert handlers of all devices connected to
++		 * the bus, and abort the loop afterwards. If this helps, we
++		 * are all set. If it doesn't, there is nothing else we can do,
++		 * so we might as well abort the loop.
++		 * Note: This assumes that a driver with alert handler handles
++		 * the alert properly and clears it if necessary.
++		 */
++		if (data.addr == prev_addr && status != -EBUSY) {
++			device_for_each_child(&ara->adapter->dev, &data,
++					      smbus_do_alert_force);
++			break;
++		}
++		prev_addr = data.addr;
+ 	}
+ 
+ 	return IRQ_HANDLED;
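The restructured smbus_do_alert()/smbus_alert() pair above leans on the device_for_each_child() contract: the walk stops as soon as the callback returns a non-zero value and that value is propagated, so -EBUSY now doubles as "found and handled" while the other return codes let the loop continue. A minimal sketch of that early-exit behaviour, mocked with a plain array instead of a struct device hierarchy (illustrative only, not kernel code):

#include <stdio.h>
#include <errno.h>

static int for_each_child(const int *children, int n, void *data,
			  int (*fn)(int child, void *data))
{
	for (int i = 0; i < n; i++) {
		int ret = fn(children[i], data);

		if (ret)	/* same early-exit rule as device_for_each_child() */
			return ret;
	}
	return 0;
}

static int do_alert(int child, void *data)
{
	int target = *(int *)data;

	if (child != target)
		return 0;	/* not the alerting address, keep walking */
	printf("handled alert for 0x%02x\n", child);
	return -EBUSY;		/* handled: stop the iteration */
}

int main(void)
{
	int children[] = { 0x1a, 0x2c, 0x4b };
	int target = 0x2c;
	int status = for_each_child(children, 3, &target, do_alert);

	printf("status=%d (handled: %d)\n", status, status == -EBUSY);
	return 0;
}

That is why the interrupt handler only falls back to smbus_do_alert_force() when the repeated address was not answered with -EBUSY.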
+diff --git a/drivers/irqchip/irq-loongarch-cpu.c b/drivers/irqchip/irq-loongarch-cpu.c
+index 9d8f2c4060431..b35903a06902f 100644
+--- a/drivers/irqchip/irq-loongarch-cpu.c
++++ b/drivers/irqchip/irq-loongarch-cpu.c
+@@ -18,11 +18,13 @@ struct fwnode_handle *cpuintc_handle;
+ 
+ static u32 lpic_gsi_to_irq(u32 gsi)
+ {
++	int irq = 0;
++
+ 	/* Only pch irqdomain transferring is required for LoongArch. */
+ 	if (gsi >= GSI_MIN_PCH_IRQ && gsi <= GSI_MAX_PCH_IRQ)
+-		return acpi_register_gsi(NULL, gsi, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_HIGH);
++		irq = acpi_register_gsi(NULL, gsi, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_HIGH);
+ 
+-	return 0;
++	return (irq > 0) ? irq : 0;
+ }
+ 
+ static struct fwnode_handle *lpic_get_gsi_domain_id(u32 gsi)
+diff --git a/drivers/irqchip/irq-mbigen.c b/drivers/irqchip/irq-mbigen.c
+index 58881d3139792..244a8d489cac6 100644
+--- a/drivers/irqchip/irq-mbigen.c
++++ b/drivers/irqchip/irq-mbigen.c
+@@ -64,6 +64,20 @@ struct mbigen_device {
+ 	void __iomem		*base;
+ };
+ 
++static inline unsigned int get_mbigen_node_offset(unsigned int nid)
++{
++	unsigned int offset = nid * MBIGEN_NODE_OFFSET;
++
++	/*
++	 * To avoid touching the clear register in an unexpected way, directly
++	 * skip over it when there are more than 10 mbigen nodes.
++	 */
++	if (nid >= (REG_MBIGEN_CLEAR_OFFSET / MBIGEN_NODE_OFFSET))
++		offset += MBIGEN_NODE_OFFSET;
++
++	return offset;
++}
++
+ static inline unsigned int get_mbigen_vec_reg(irq_hw_number_t hwirq)
+ {
+ 	unsigned int nid, pin;
+@@ -72,8 +86,7 @@ static inline unsigned int get_mbigen_vec_reg(irq_hw_number_t hwirq)
+ 	nid = hwirq / IRQS_PER_MBIGEN_NODE + 1;
+ 	pin = hwirq % IRQS_PER_MBIGEN_NODE;
+ 
+-	return pin * 4 + nid * MBIGEN_NODE_OFFSET
+-			+ REG_MBIGEN_VEC_OFFSET;
++	return pin * 4 + get_mbigen_node_offset(nid) + REG_MBIGEN_VEC_OFFSET;
+ }
+ 
+ static inline void get_mbigen_type_reg(irq_hw_number_t hwirq,
+@@ -88,8 +101,7 @@ static inline void get_mbigen_type_reg(irq_hw_number_t hwirq,
+ 	*mask = 1 << (irq_ofst % 32);
+ 	ofst = irq_ofst / 32 * 4;
+ 
+-	*addr = ofst + nid * MBIGEN_NODE_OFFSET
+-		+ REG_MBIGEN_TYPE_OFFSET;
++	*addr = ofst + get_mbigen_node_offset(nid) + REG_MBIGEN_TYPE_OFFSET;
+ }
+ 
+ static inline void get_mbigen_clear_reg(irq_hw_number_t hwirq,
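To make the new get_mbigen_node_offset() concrete, here is a standalone sketch of the offset calculation. The register constants are assumptions taken from mainline irq-mbigen.c (MBIGEN_NODE_OFFSET = 0x1000, REG_MBIGEN_CLEAR_OFFSET = 0xa000) and are not visible in this hunk:

#include <stdio.h>

#define MBIGEN_NODE_OFFSET	0x1000
#define REG_MBIGEN_CLEAR_OFFSET	0xa000

static unsigned int get_mbigen_node_offset(unsigned int nid)
{
	unsigned int offset = nid * MBIGEN_NODE_OFFSET;

	/* Node 10 and above would land on the clear registers, so their
	 * register window starts one node further up.
	 */
	if (nid >= (REG_MBIGEN_CLEAR_OFFSET / MBIGEN_NODE_OFFSET))
		offset += MBIGEN_NODE_OFFSET;

	return offset;
}

int main(void)
{
	for (unsigned int nid = 8; nid <= 11; nid++)
		printf("nid %u -> offset 0x%x\n", nid, get_mbigen_node_offset(nid));
	return 0;	/* prints 0x8000, 0x9000, 0xb000, 0xc000 */
}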
+diff --git a/drivers/irqchip/irq-meson-gpio.c b/drivers/irqchip/irq-meson-gpio.c
+index 9a1791908598d..a4c3b57098ba0 100644
+--- a/drivers/irqchip/irq-meson-gpio.c
++++ b/drivers/irqchip/irq-meson-gpio.c
+@@ -178,7 +178,7 @@ struct meson_gpio_irq_controller {
+ 	void __iomem *base;
+ 	u32 channel_irqs[MAX_NUM_CHANNEL];
+ 	DECLARE_BITMAP(channel_map, MAX_NUM_CHANNEL);
+-	spinlock_t lock;
++	raw_spinlock_t lock;
+ };
+ 
+ static void meson_gpio_irq_update_bits(struct meson_gpio_irq_controller *ctl,
+@@ -187,14 +187,14 @@ static void meson_gpio_irq_update_bits(struct meson_gpio_irq_controller *ctl,
+ 	unsigned long flags;
+ 	u32 tmp;
+ 
+-	spin_lock_irqsave(&ctl->lock, flags);
++	raw_spin_lock_irqsave(&ctl->lock, flags);
+ 
+ 	tmp = readl_relaxed(ctl->base + reg);
+ 	tmp &= ~mask;
+ 	tmp |= val;
+ 	writel_relaxed(tmp, ctl->base + reg);
+ 
+-	spin_unlock_irqrestore(&ctl->lock, flags);
++	raw_spin_unlock_irqrestore(&ctl->lock, flags);
+ }
+ 
+ static void meson_gpio_irq_init_dummy(struct meson_gpio_irq_controller *ctl)
+@@ -244,12 +244,12 @@ meson_gpio_irq_request_channel(struct meson_gpio_irq_controller *ctl,
+ 	unsigned long flags;
+ 	unsigned int idx;
+ 
+-	spin_lock_irqsave(&ctl->lock, flags);
++	raw_spin_lock_irqsave(&ctl->lock, flags);
+ 
+ 	/* Find a free channel */
+ 	idx = find_first_zero_bit(ctl->channel_map, ctl->params->nr_channels);
+ 	if (idx >= ctl->params->nr_channels) {
+-		spin_unlock_irqrestore(&ctl->lock, flags);
++		raw_spin_unlock_irqrestore(&ctl->lock, flags);
+ 		pr_err("No channel available\n");
+ 		return -ENOSPC;
+ 	}
+@@ -257,7 +257,7 @@ meson_gpio_irq_request_channel(struct meson_gpio_irq_controller *ctl,
+ 	/* Mark the channel as used */
+ 	set_bit(idx, ctl->channel_map);
+ 
+-	spin_unlock_irqrestore(&ctl->lock, flags);
++	raw_spin_unlock_irqrestore(&ctl->lock, flags);
+ 
+ 	/*
+ 	 * Setup the mux of the channel to route the signal of the pad
+@@ -567,7 +567,7 @@ static int meson_gpio_irq_of_init(struct device_node *node, struct device_node *
+ 	if (!ctl)
+ 		return -ENOMEM;
+ 
+-	spin_lock_init(&ctl->lock);
++	raw_spin_lock_init(&ctl->lock);
+ 
+ 	ctl->base = of_iomap(node, 0);
+ 	if (!ctl->base) {
+diff --git a/drivers/irqchip/irq-riscv-aplic-msi.c b/drivers/irqchip/irq-riscv-aplic-msi.c
+index 028444af48bd5..d7773f76e5d0a 100644
+--- a/drivers/irqchip/irq-riscv-aplic-msi.c
++++ b/drivers/irqchip/irq-riscv-aplic-msi.c
+@@ -32,15 +32,10 @@ static void aplic_msi_irq_unmask(struct irq_data *d)
+ 	aplic_irq_unmask(d);
+ }
+ 
+-static void aplic_msi_irq_eoi(struct irq_data *d)
++static void aplic_msi_irq_retrigger_level(struct irq_data *d)
+ {
+ 	struct aplic_priv *priv = irq_data_get_irq_chip_data(d);
+ 
+-	/*
+-	 * EOI handling is required only for level-triggered interrupts
+-	 * when APLIC is in MSI mode.
+-	 */
+-
+ 	switch (irqd_get_trigger_type(d)) {
+ 	case IRQ_TYPE_LEVEL_LOW:
+ 	case IRQ_TYPE_LEVEL_HIGH:
+@@ -59,6 +54,29 @@ static void aplic_msi_irq_eoi(struct irq_data *d)
+ 	}
+ }
+ 
++static void aplic_msi_irq_eoi(struct irq_data *d)
++{
++	/*
++	 * EOI handling is required only for level-triggered interrupts
++	 * when APLIC is in MSI mode.
++	 */
++	aplic_msi_irq_retrigger_level(d);
++}
++
++static int aplic_msi_irq_set_type(struct irq_data *d, unsigned int type)
++{
++	int rc = aplic_irq_set_type(d, type);
++
++	if (rc)
++		return rc;
++	/*
++	 * Updating sourcecfg register for level-triggered interrupts
++	 * requires interrupt retriggering when APLIC is in MSI mode.
++	 */
++	aplic_msi_irq_retrigger_level(d);
++	return 0;
++}
++
+ static void aplic_msi_write_msg(struct irq_data *d, struct msi_msg *msg)
+ {
+ 	unsigned int group_index, hart_index, guest_index, val;
+@@ -130,7 +148,7 @@ static const struct msi_domain_template aplic_msi_template = {
+ 		.name			= "APLIC-MSI",
+ 		.irq_mask		= aplic_msi_irq_mask,
+ 		.irq_unmask		= aplic_msi_irq_unmask,
+-		.irq_set_type		= aplic_irq_set_type,
++		.irq_set_type		= aplic_msi_irq_set_type,
+ 		.irq_eoi		= aplic_msi_irq_eoi,
+ #ifdef CONFIG_SMP
+ 		.irq_set_affinity	= irq_chip_set_affinity_parent,
+diff --git a/drivers/irqchip/irq-xilinx-intc.c b/drivers/irqchip/irq-xilinx-intc.c
+index 238d3d3449496..7e08714d507f4 100644
+--- a/drivers/irqchip/irq-xilinx-intc.c
++++ b/drivers/irqchip/irq-xilinx-intc.c
+@@ -189,7 +189,7 @@ static int __init xilinx_intc_of_init(struct device_node *intc,
+ 		irqc->intr_mask = 0;
+ 	}
+ 
+-	if (irqc->intr_mask >> irqc->nr_irq)
++	if ((u64)irqc->intr_mask >> irqc->nr_irq)
+ 		pr_warn("irq-xilinx: mismatch in kind-of-intr param\n");
+ 
+ 	pr_info("irq-xilinx: %pOF: num_irq=%d, edge=0x%x\n",
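The (u64) cast in the xilinx-intc hunk matters because shifting a 32-bit value by its full width is undefined behaviour in C; on common architectures the shift count may simply be truncated, so the mismatch warning could fire even for a valid 32-interrupt configuration. A tiny standalone illustration (not part of the patch):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t intr_mask = 0xffffffff;
	unsigned int nr_irq = 32;

	/* printf("%u\n", intr_mask >> nr_irq);  undefined: shift >= type width */
	printf("%llu\n", (unsigned long long)((uint64_t)intr_mask >> nr_irq)); /* 0 */
	return 0;
}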
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 9c5be016e5073..a5b5801baa9e8 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -479,7 +479,6 @@ int mddev_suspend(struct mddev *mddev, bool interruptible)
+ 	 */
+ 	WRITE_ONCE(mddev->suspended, mddev->suspended + 1);
+ 
+-	del_timer_sync(&mddev->safemode_timer);
+ 	/* restrict memory reclaim I/O during raid array is suspend */
+ 	mddev->noio_flag = memalloc_noio_save();
+ 
+@@ -8639,12 +8638,12 @@ EXPORT_SYMBOL(md_done_sync);
+  * A return value of 'false' means that the write wasn't recorded
+  * and cannot proceed as the array is being suspend.
+  */
+-bool md_write_start(struct mddev *mddev, struct bio *bi)
++void md_write_start(struct mddev *mddev, struct bio *bi)
+ {
+ 	int did_change = 0;
+ 
+ 	if (bio_data_dir(bi) != WRITE)
+-		return true;
++		return;
+ 
+ 	BUG_ON(mddev->ro == MD_RDONLY);
+ 	if (mddev->ro == MD_AUTO_READ) {
+@@ -8677,15 +8676,9 @@ bool md_write_start(struct mddev *mddev, struct bio *bi)
+ 	if (did_change)
+ 		sysfs_notify_dirent_safe(mddev->sysfs_state);
+ 	if (!mddev->has_superblocks)
+-		return true;
++		return;
+ 	wait_event(mddev->sb_wait,
+-		   !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags) ||
+-		   is_md_suspended(mddev));
+-	if (test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags)) {
+-		percpu_ref_put(&mddev->writes_pending);
+-		return false;
+-	}
+-	return true;
++		   !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags));
+ }
+ EXPORT_SYMBOL(md_write_start);
+ 
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index ca085ecad5044..487582058f741 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -785,7 +785,7 @@ extern void md_unregister_thread(struct mddev *mddev, struct md_thread __rcu **t
+ extern void md_wakeup_thread(struct md_thread __rcu *thread);
+ extern void md_check_recovery(struct mddev *mddev);
+ extern void md_reap_sync_thread(struct mddev *mddev);
+-extern bool md_write_start(struct mddev *mddev, struct bio *bi);
++extern void md_write_start(struct mddev *mddev, struct bio *bi);
+ extern void md_write_inc(struct mddev *mddev, struct bio *bi);
+ extern void md_write_end(struct mddev *mddev);
+ extern void md_done_sync(struct mddev *mddev, int blocks, int ok);
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 22bbd06ba6a29..5ea57b6748c53 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1688,8 +1688,7 @@ static bool raid1_make_request(struct mddev *mddev, struct bio *bio)
+ 	if (bio_data_dir(bio) == READ)
+ 		raid1_read_request(mddev, bio, sectors, NULL);
+ 	else {
+-		if (!md_write_start(mddev,bio))
+-			return false;
++		md_write_start(mddev,bio);
+ 		raid1_write_request(mddev, bio, sectors);
+ 	}
+ 	return true;
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index a4556d2e46bf9..f8d7c02c6ed56 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1836,8 +1836,7 @@ static bool raid10_make_request(struct mddev *mddev, struct bio *bio)
+ 	    && md_flush_request(mddev, bio))
+ 		return true;
+ 
+-	if (!md_write_start(mddev, bio))
+-		return false;
++	md_write_start(mddev, bio);
+ 
+ 	if (unlikely(bio_op(bio) == REQ_OP_DISCARD))
+ 		if (!raid10_handle_discard(mddev, bio))
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 1c6b58adec133..ff9f4751c0965 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -6097,8 +6097,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
+ 		ctx.do_flush = bi->bi_opf & REQ_PREFLUSH;
+ 	}
+ 
+-	if (!md_write_start(mddev, bi))
+-		return false;
++	md_write_start(mddev, bi);
+ 	/*
+ 	 * If array is degraded, better not do chunk aligned read because
+ 	 * later we might have to read it again in order to reconstruct
+@@ -6273,7 +6272,9 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
+ 	safepos = conf->reshape_safe;
+ 	sector_div(safepos, data_disks);
+ 	if (mddev->reshape_backwards) {
+-		BUG_ON(writepos < reshape_sectors);
++		if (WARN_ON(writepos < reshape_sectors))
++			return MaxSector;
++
+ 		writepos -= reshape_sectors;
+ 		readpos += reshape_sectors;
+ 		safepos += reshape_sectors;
+@@ -6291,14 +6292,18 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
+ 	 * to set 'stripe_addr' which is where we will write to.
+ 	 */
+ 	if (mddev->reshape_backwards) {
+-		BUG_ON(conf->reshape_progress == 0);
++		if (WARN_ON(conf->reshape_progress == 0))
++			return MaxSector;
++
+ 		stripe_addr = writepos;
+-		BUG_ON((mddev->dev_sectors &
+-			~((sector_t)reshape_sectors - 1))
+-		       - reshape_sectors - stripe_addr
+-		       != sector_nr);
++		if (WARN_ON((mddev->dev_sectors &
++		    ~((sector_t)reshape_sectors - 1)) -
++		    reshape_sectors - stripe_addr != sector_nr))
++			return MaxSector;
+ 	} else {
+-		BUG_ON(writepos != sector_nr + reshape_sectors);
++		if (WARN_ON(writepos != sector_nr + reshape_sectors))
++			return MaxSector;
++
+ 		stripe_addr = sector_nr;
+ 	}
+ 
+diff --git a/drivers/media/i2c/ov5647.c b/drivers/media/i2c/ov5647.c
+index 7e1ecdf2485f7..0fb4d7bff9d14 100644
+--- a/drivers/media/i2c/ov5647.c
++++ b/drivers/media/i2c/ov5647.c
+@@ -1360,24 +1360,21 @@ static int ov5647_parse_dt(struct ov5647 *sensor, struct device_node *np)
+ 	struct v4l2_fwnode_endpoint bus_cfg = {
+ 		.bus_type = V4L2_MBUS_CSI2_DPHY,
+ 	};
+-	struct device_node *ep;
++	struct device_node *ep __free(device_node) =
++		of_graph_get_endpoint_by_regs(np, 0, -1);
+ 	int ret;
+ 
+-	ep = of_graph_get_endpoint_by_regs(np, 0, -1);
+ 	if (!ep)
+ 		return -EINVAL;
+ 
+ 	ret = v4l2_fwnode_endpoint_parse(of_fwnode_handle(ep), &bus_cfg);
+ 	if (ret)
+-		goto out;
++		return ret;
+ 
+ 	sensor->clock_ncont = bus_cfg.bus.mipi_csi2.flags &
+ 			      V4L2_MBUS_CSI2_NONCONTINUOUS_CLOCK;
+ 
+-out:
+-	of_node_put(ep);
+-
+-	return ret;
++	return 0;
+ }
+ 
+ static int ov5647_probe(struct i2c_client *client)
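The ov5647_parse_dt() change above uses the kernel's scope-based cleanup helpers: __free(device_node) arranges for of_node_put() to run automatically when ep goes out of scope, which is why the out: label disappears. The mechanism is the GCC/Clang cleanup attribute; a plain C sketch of the same pattern (illustrative only, using malloc/free instead of of_node_put):

#include <errno.h>
#include <stdlib.h>

static void free_charp(char **p)
{
	free(*p);	/* runs automatically when the variable leaves scope */
}

#define __free_charp __attribute__((cleanup(free_charp)))

static int parse_something(void)
{
	char *buf __free_charp = malloc(32);

	if (!buf)
		return -ENOMEM;

	/* any early return from here on frees buf; no goto/out label needed */
	buf[0] = '\0';
	return 0;
}

int main(void)
{
	return parse_something() ? 1 : 0;
}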
+diff --git a/drivers/media/pci/intel/ipu6/Kconfig b/drivers/media/pci/intel/ipu6/Kconfig
+index 154343080c82a..40e20f0aa5ae5 100644
+--- a/drivers/media/pci/intel/ipu6/Kconfig
++++ b/drivers/media/pci/intel/ipu6/Kconfig
+@@ -3,13 +3,14 @@ config VIDEO_INTEL_IPU6
+ 	depends on ACPI || COMPILE_TEST
+ 	depends on VIDEO_DEV
+ 	depends on X86 && X86_64 && HAS_DMA
++	depends on IPU_BRIDGE || !IPU_BRIDGE
++	select AUXILIARY_BUS
+ 	select DMA_OPS
+ 	select IOMMU_IOVA
+ 	select VIDEO_V4L2_SUBDEV_API
+ 	select MEDIA_CONTROLLER
+ 	select VIDEOBUF2_DMA_CONTIG
+ 	select V4L2_FWNODE
+-	select IPU_BRIDGE
+ 	help
+ 	  This is the 6th Gen Intel Image Processing Unit, found in Intel SoCs
+ 	  and used for capturing images and video from camera sensors.
+diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c
+index a57f9f4f3b876..6a38a0fa0e2d4 100644
+--- a/drivers/media/platform/amphion/vdec.c
++++ b/drivers/media/platform/amphion/vdec.c
+@@ -195,7 +195,6 @@ static int vdec_op_s_ctrl(struct v4l2_ctrl *ctrl)
+ 	struct vdec_t *vdec = inst->priv;
+ 	int ret = 0;
+ 
+-	vpu_inst_lock(inst);
+ 	switch (ctrl->id) {
+ 	case V4L2_CID_MPEG_VIDEO_DEC_DISPLAY_DELAY_ENABLE:
+ 		vdec->params.display_delay_enable = ctrl->val;
+@@ -207,7 +206,6 @@ static int vdec_op_s_ctrl(struct v4l2_ctrl *ctrl)
+ 		ret = -EINVAL;
+ 		break;
+ 	}
+-	vpu_inst_unlock(inst);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/platform/amphion/venc.c b/drivers/media/platform/amphion/venc.c
+index 4eb57d793a9c0..16ed4d21519cd 100644
+--- a/drivers/media/platform/amphion/venc.c
++++ b/drivers/media/platform/amphion/venc.c
+@@ -518,7 +518,6 @@ static int venc_op_s_ctrl(struct v4l2_ctrl *ctrl)
+ 	struct venc_t *venc = inst->priv;
+ 	int ret = 0;
+ 
+-	vpu_inst_lock(inst);
+ 	switch (ctrl->id) {
+ 	case V4L2_CID_MPEG_VIDEO_H264_PROFILE:
+ 		venc->params.profile = ctrl->val;
+@@ -579,7 +578,6 @@ static int venc_op_s_ctrl(struct v4l2_ctrl *ctrl)
+ 		ret = -EINVAL;
+ 		break;
+ 	}
+-	vpu_inst_unlock(inst);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/tuners/xc2028.c b/drivers/media/tuners/xc2028.c
+index 5a967edceca93..352b8a3679b72 100644
+--- a/drivers/media/tuners/xc2028.c
++++ b/drivers/media/tuners/xc2028.c
+@@ -1361,9 +1361,16 @@ static void load_firmware_cb(const struct firmware *fw,
+ 			     void *context)
+ {
+ 	struct dvb_frontend *fe = context;
+-	struct xc2028_data *priv = fe->tuner_priv;
++	struct xc2028_data *priv;
+ 	int rc;
+ 
++	if (!fe) {
++		pr_warn("xc2028: No frontend in %s\n", __func__);
++		return;
++	}
++
++	priv = fe->tuner_priv;
++
+ 	tuner_dbg("request_firmware_nowait(): %s\n", fw ? "OK" : "error");
+ 	if (!fw) {
+ 		tuner_err("Could not load firmware %s.\n", priv->fname);
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index 51f4f653b983d..5bebe1460a9f7 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -214,13 +214,13 @@ static void uvc_fixup_video_ctrl(struct uvc_streaming *stream,
+ 		 * Compute a bandwidth estimation by multiplying the frame
+ 		 * size by the number of video frames per second, divide the
+ 		 * result by the number of USB frames (or micro-frames for
+-		 * high-speed devices) per second and add the UVC header size
+-		 * (assumed to be 12 bytes long).
++		 * high- and super-speed devices) per second and add the UVC
++		 * header size (assumed to be 12 bytes long).
+ 		 */
+ 		bandwidth = frame->wWidth * frame->wHeight / 8 * format->bpp;
+ 		bandwidth *= 10000000 / interval + 1;
+ 		bandwidth /= 1000;
+-		if (stream->dev->udev->speed == USB_SPEED_HIGH)
++		if (stream->dev->udev->speed >= USB_SPEED_HIGH)
+ 			bandwidth /= 8;
+ 		bandwidth += 12;
+ 
+@@ -478,6 +478,7 @@ uvc_video_clock_decode(struct uvc_streaming *stream, struct uvc_buffer *buf,
+ 	ktime_t time;
+ 	u16 host_sof;
+ 	u16 dev_sof;
++	u32 dev_stc;
+ 
+ 	switch (data[1] & (UVC_STREAM_PTS | UVC_STREAM_SCR)) {
+ 	case UVC_STREAM_PTS | UVC_STREAM_SCR:
+@@ -526,6 +527,34 @@ uvc_video_clock_decode(struct uvc_streaming *stream, struct uvc_buffer *buf,
+ 	if (dev_sof == stream->clock.last_sof)
+ 		return;
+ 
++	dev_stc = get_unaligned_le32(&data[header_size - 6]);
++
++	/*
++	 * STC (Source Time Clock) is the clock used by the camera. The UVC 1.5
++	 * standard states that it "must be captured when the first video data
++	 * of a video frame is put on the USB bus". This is generally understood
++	 * as requiring devices to clear the payload header's SCR bit before
++	 * the first packet containing video data.
++	 *
++	 * Most vendors follow that interpretation, but some (namely SunplusIT
++	 * on some devices) always set the `UVC_STREAM_SCR` bit, fill the SCR
++	 * field with 0's, and expect that the driver only processes the SCR if
++	 * there is data in the packet.
++	 *
++	 * Ignore all the hardware timestamp information if we haven't received
++	 * any data for this frame yet, the packet contains no data, and both
++	 * STC and SOF are zero. This heuristic should be safe with compliant
++	 * devices: in the very unlikely case where a UVC 1.1 device sends
++	 * timing information only before the first packet containing data,
++	 * and both STC and SOF happen to be zero for a particular frame, we
++	 * would only miss one clock sample out of many and the clock
++	 * recovery algorithm wouldn't suffer from this condition.
++	 */
++	if (buf && buf->bytesused == 0 && len == header_size &&
++	    dev_stc == 0 && dev_sof == 0)
++		return;
++
+ 	stream->clock.last_sof = dev_sof;
+ 
+ 	host_sof = usb_get_current_frame_number(stream->dev->udev);
+@@ -575,7 +604,7 @@ uvc_video_clock_decode(struct uvc_streaming *stream, struct uvc_buffer *buf,
+ 	spin_lock_irqsave(&stream->clock.lock, flags);
+ 
+ 	sample = &stream->clock.samples[stream->clock.head];
+-	sample->dev_stc = get_unaligned_le32(&data[header_size - 6]);
++	sample->dev_stc = dev_stc;
+ 	sample->dev_sof = dev_sof;
+ 	sample->host_sof = host_sof;
+ 	sample->host_time = time;
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
+index bfe4caa0c99d4..4cb79a4f24612 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
+@@ -485,6 +485,8 @@ int mcp251xfd_ring_alloc(struct mcp251xfd_priv *priv)
+ 		clear_bit(MCP251XFD_FLAGS_FD_MODE, priv->flags);
+ 	}
+ 
++	tx_ring->obj_num_shift_to_u8 = BITS_PER_TYPE(tx_ring->obj_num) -
++		ilog2(tx_ring->obj_num);
+ 	tx_ring->obj_size = tx_obj_size;
+ 
+ 	rem = priv->rx_obj_num;
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
+index e5bd57b65aafe..5b0c7890d4b44 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
+@@ -2,7 +2,7 @@
+ //
+ // mcp251xfd - Microchip MCP251xFD Family CAN controller driver
+ //
+-// Copyright (c) 2019, 2020, 2021 Pengutronix,
++// Copyright (c) 2019, 2020, 2021, 2023 Pengutronix,
+ //               Marc Kleine-Budde <kernel@pengutronix.de>
+ //
+ // Based on:
+@@ -16,6 +16,11 @@
+ 
+ #include "mcp251xfd.h"
+ 
++static inline bool mcp251xfd_tx_fifo_sta_full(u32 fifo_sta)
++{
++	return !(fifo_sta & MCP251XFD_REG_FIFOSTA_TFNRFNIF);
++}
++
+ static inline int
+ mcp251xfd_tef_tail_get_from_chip(const struct mcp251xfd_priv *priv,
+ 				 u8 *tef_tail)
+@@ -55,56 +60,39 @@ static int mcp251xfd_check_tef_tail(const struct mcp251xfd_priv *priv)
+ 	return 0;
+ }
+ 
+-static int
+-mcp251xfd_handle_tefif_recover(const struct mcp251xfd_priv *priv, const u32 seq)
+-{
+-	const struct mcp251xfd_tx_ring *tx_ring = priv->tx;
+-	u32 tef_sta;
+-	int err;
+-
+-	err = regmap_read(priv->map_reg, MCP251XFD_REG_TEFSTA, &tef_sta);
+-	if (err)
+-		return err;
+-
+-	if (tef_sta & MCP251XFD_REG_TEFSTA_TEFOVIF) {
+-		netdev_err(priv->ndev,
+-			   "Transmit Event FIFO buffer overflow.\n");
+-		return -ENOBUFS;
+-	}
+-
+-	netdev_info(priv->ndev,
+-		    "Transmit Event FIFO buffer %s. (seq=0x%08x, tef_tail=0x%08x, tef_head=0x%08x, tx_head=0x%08x).\n",
+-		    tef_sta & MCP251XFD_REG_TEFSTA_TEFFIF ?
+-		    "full" : tef_sta & MCP251XFD_REG_TEFSTA_TEFNEIF ?
+-		    "not empty" : "empty",
+-		    seq, priv->tef->tail, priv->tef->head, tx_ring->head);
+-
+-	/* The Sequence Number in the TEF doesn't match our tef_tail. */
+-	return -EAGAIN;
+-}
+-
+ static int
+ mcp251xfd_handle_tefif_one(struct mcp251xfd_priv *priv,
+ 			   const struct mcp251xfd_hw_tef_obj *hw_tef_obj,
+ 			   unsigned int *frame_len_ptr)
+ {
+ 	struct net_device_stats *stats = &priv->ndev->stats;
++	u32 seq, tef_tail_masked, tef_tail;
+ 	struct sk_buff *skb;
+-	u32 seq, seq_masked, tef_tail_masked, tef_tail;
+ 
+-	seq = FIELD_GET(MCP251XFD_OBJ_FLAGS_SEQ_MCP2518FD_MASK,
++	 /* Use the MCP2517FD mask on the MCP2518FD, too. We only
++	  * compare 7 bits, which is enough to detect old TEF objects.
++	  */
++	seq = FIELD_GET(MCP251XFD_OBJ_FLAGS_SEQ_MCP2517FD_MASK,
+ 			hw_tef_obj->flags);
+-
+-	/* Use the MCP2517FD mask on the MCP2518FD, too. We only
+-	 * compare 7 bits, this should be enough to detect
+-	 * net-yet-completed, i.e. old TEF objects.
+-	 */
+-	seq_masked = seq &
+-		field_mask(MCP251XFD_OBJ_FLAGS_SEQ_MCP2517FD_MASK);
+ 	tef_tail_masked = priv->tef->tail &
+ 		field_mask(MCP251XFD_OBJ_FLAGS_SEQ_MCP2517FD_MASK);
+-	if (seq_masked != tef_tail_masked)
+-		return mcp251xfd_handle_tefif_recover(priv, seq);
++
++	/* According to mcp2518fd erratum DS80000789E 6., the FIFOCI
++	 * bits of a FIFOSTA register (here: the TX FIFO tail index)
++	 * might be corrupted, and we might process past the TEF FIFO's
++	 * head into old CAN frames.
++	 *
++	 * Compare the sequence number of the currently processed CAN
++	 * frame with the expected sequence number. Abort with
++	 * -EBADMSG if an old CAN frame is detected.
++	 */
++	if (seq != tef_tail_masked) {
++		netdev_dbg(priv->ndev, "%s: chip=0x%02x ring=0x%02x\n", __func__,
++			   seq, tef_tail_masked);
++		stats->tx_fifo_errors++;
++
++		return -EBADMSG;
++	}
+ 
+ 	tef_tail = mcp251xfd_get_tef_tail(priv);
+ 	skb = priv->can.echo_skb[tef_tail];
+@@ -120,28 +108,44 @@ mcp251xfd_handle_tefif_one(struct mcp251xfd_priv *priv,
+ 	return 0;
+ }
+ 
+-static int mcp251xfd_tef_ring_update(struct mcp251xfd_priv *priv)
++static int
++mcp251xfd_get_tef_len(struct mcp251xfd_priv *priv, u8 *len_p)
+ {
+ 	const struct mcp251xfd_tx_ring *tx_ring = priv->tx;
+-	unsigned int new_head;
+-	u8 chip_tx_tail;
++	const u8 shift = tx_ring->obj_num_shift_to_u8;
++	u8 chip_tx_tail, tail, len;
++	u32 fifo_sta;
+ 	int err;
+ 
+-	err = mcp251xfd_tx_tail_get_from_chip(priv, &chip_tx_tail);
++	err = regmap_read(priv->map_reg, MCP251XFD_REG_FIFOSTA(priv->tx->fifo_nr),
++			  &fifo_sta);
+ 	if (err)
+ 		return err;
+ 
+-	/* chip_tx_tail, is the next TX-Object send by the HW.
+-	 * The new TEF head must be >= the old head, ...
++	if (mcp251xfd_tx_fifo_sta_full(fifo_sta)) {
++		*len_p = tx_ring->obj_num;
++		return 0;
++	}
++
++	chip_tx_tail = FIELD_GET(MCP251XFD_REG_FIFOSTA_FIFOCI_MASK, fifo_sta);
++
++	err =  mcp251xfd_check_tef_tail(priv);
++	if (err)
++		return err;
++	tail = mcp251xfd_get_tef_tail(priv);
++
++	/* First shift to full u8. The subtraction works on signed
++	 * values, which keeps the difference steady around the u8
++	 * overflow. The right shift acts on len, which is a u8.
+ 	 */
+-	new_head = round_down(priv->tef->head, tx_ring->obj_num) + chip_tx_tail;
+-	if (new_head <= priv->tef->head)
+-		new_head += tx_ring->obj_num;
++	BUILD_BUG_ON(sizeof(tx_ring->obj_num) != sizeof(chip_tx_tail));
++	BUILD_BUG_ON(sizeof(tx_ring->obj_num) != sizeof(tail));
++	BUILD_BUG_ON(sizeof(tx_ring->obj_num) != sizeof(len));
+ 
+-	/* ... but it cannot exceed the TX head. */
+-	priv->tef->head = min(new_head, tx_ring->head);
++	len = (chip_tx_tail << shift) - (tail << shift);
++	*len_p = len >> shift;
+ 
+-	return mcp251xfd_check_tef_tail(priv);
++	return 0;
+ }
+ 
+ static inline int
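The shift trick in mcp251xfd_get_tef_len() above can be looked at in isolation: scaling both FIFO indices to the full u8 range before subtracting makes the truncated difference wrap exactly at the FIFO size. A standalone sketch with obj_num = 8, i.e. shift = 8 - ilog2(8) = 5 (illustrative only, not part of the patch):

#include <stdio.h>
#include <stdint.h>

static uint8_t tef_len(uint8_t chip_tx_tail, uint8_t tail, unsigned int shift)
{
	/* the truncating assignment to a u8 provides the modulo-FIFO-size wrap */
	uint8_t len = (uint8_t)((chip_tx_tail << shift) - (tail << shift));

	return len >> shift;
}

int main(void)
{
	const unsigned int shift = 5;		/* 8 - ilog2(8) */

	printf("%u\n", tef_len(6, 2, shift));	/* 4: no wrap */
	printf("%u\n", tef_len(1, 6, shift));	/* 3: wrapped past index 7 */
	return 0;
}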
+@@ -182,13 +186,12 @@ int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
+ 	u8 tef_tail, len, l;
+ 	int err, i;
+ 
+-	err = mcp251xfd_tef_ring_update(priv);
++	err = mcp251xfd_get_tef_len(priv, &len);
+ 	if (err)
+ 		return err;
+ 
+ 	tef_tail = mcp251xfd_get_tef_tail(priv);
+-	len = mcp251xfd_get_tef_len(priv);
+-	l = mcp251xfd_get_tef_linear_len(priv);
++	l = mcp251xfd_get_tef_linear_len(priv, len);
+ 	err = mcp251xfd_tef_obj_read(priv, hw_tef_obj, tef_tail, l);
+ 	if (err)
+ 		return err;
+@@ -203,12 +206,12 @@ int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
+ 		unsigned int frame_len = 0;
+ 
+ 		err = mcp251xfd_handle_tefif_one(priv, &hw_tef_obj[i], &frame_len);
+-		/* -EAGAIN means the Sequence Number in the TEF
+-		 * doesn't match our tef_tail. This can happen if we
+-		 * read the TEF objects too early. Leave loop let the
+-		 * interrupt handler call us again.
++		/* -EBADMSG means we're affected by mcp2518fd erratum
++		 * DS80000789E 6., i.e. the Sequence Number in the TEF
++		 * doesn't match our tef_tail. Don't process any
++		 * further and mark processed frames as good.
+ 		 */
+-		if (err == -EAGAIN)
++		if (err == -EBADMSG)
+ 			goto out_netif_wake_queue;
+ 		if (err)
+ 			return err;
+@@ -223,6 +226,8 @@ int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
+ 		struct mcp251xfd_tx_ring *tx_ring = priv->tx;
+ 		int offset;
+ 
++		ring->head += len;
++
+ 		/* Increment the TEF FIFO tail pointer 'len' times in
+ 		 * a single SPI message.
+ 		 *
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+index b35bfebd23f29..4628bf847bc9b 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+@@ -524,6 +524,7 @@ struct mcp251xfd_tef_ring {
+ 
+ 	/* u8 obj_num equals tx_ring->obj_num */
+ 	/* u8 obj_size equals sizeof(struct mcp251xfd_hw_tef_obj) */
++	/* u8 obj_num_shift_to_u8 equals tx_ring->obj_num_shift_to_u8 */
+ 
+ 	union mcp251xfd_write_reg_buf irq_enable_buf;
+ 	struct spi_transfer irq_enable_xfer;
+@@ -542,6 +543,7 @@ struct mcp251xfd_tx_ring {
+ 	u8 nr;
+ 	u8 fifo_nr;
+ 	u8 obj_num;
++	u8 obj_num_shift_to_u8;
+ 	u8 obj_size;
+ 
+ 	struct mcp251xfd_tx_obj obj[MCP251XFD_TX_OBJ_NUM_MAX];
+@@ -861,17 +863,8 @@ static inline u8 mcp251xfd_get_tef_tail(const struct mcp251xfd_priv *priv)
+ 	return priv->tef->tail & (priv->tx->obj_num - 1);
+ }
+ 
+-static inline u8 mcp251xfd_get_tef_len(const struct mcp251xfd_priv *priv)
++static inline u8 mcp251xfd_get_tef_linear_len(const struct mcp251xfd_priv *priv, u8 len)
+ {
+-	return priv->tef->head - priv->tef->tail;
+-}
+-
+-static inline u8 mcp251xfd_get_tef_linear_len(const struct mcp251xfd_priv *priv)
+-{
+-	u8 len;
+-
+-	len = mcp251xfd_get_tef_len(priv);
+-
+ 	return min_t(u8, len, priv->tx->obj_num - mcp251xfd_get_tef_tail(priv));
+ }
+ 
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index ed1e6560df25e..0e663ec0c12a3 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -675,8 +675,10 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
+ 			of_remove_property(child, prop);
+ 
+ 		phydev = of_phy_find_device(child);
+-		if (phydev)
++		if (phydev) {
+ 			phy_device_remove(phydev);
++			phy_device_free(phydev);
++		}
+ 	}
+ 
+ 	err = mdiobus_register(priv->user_mii_bus);
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index baa1eeb9a1b04..3103e1b32d0ba 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -2578,7 +2578,11 @@ static u32 ksz_get_phy_flags(struct dsa_switch *ds, int port)
+ 		if (!port)
+ 			return MICREL_KSZ8_P1_ERRATA;
+ 		break;
++	case KSZ8567_CHIP_ID:
+ 	case KSZ9477_CHIP_ID:
++	case KSZ9567_CHIP_ID:
++	case KSZ9896_CHIP_ID:
++	case KSZ9897_CHIP_ID:
+ 		/* KSZ9477 Errata DS80000754C
+ 		 *
+ 		 * Module 4: Energy Efficient Ethernet (EEE) feature select must
+@@ -2588,6 +2592,13 @@ static u32 ksz_get_phy_flags(struct dsa_switch *ds, int port)
+ 		 *   controls. If not disabled, the PHY ports can auto-negotiate
+ 		 *   to enable EEE, and this feature can cause link drops when
+ 		 *   linked to another device supporting EEE.
++		 *
++		 * The same item appears in the errata for the KSZ9567, KSZ9896,
++		 * and KSZ9897.
++		 *
++		 * A similar item appears in the errata for the KSZ8567, but
++		 * provides an alternative workaround. For now, use the simple
++		 * workaround of disabling the EEE feature for this device too.
+ 		 */
+ 		return MICREL_NO_EEE;
+ 	}
+@@ -3763,6 +3774,11 @@ static int ksz_port_set_mac_address(struct dsa_switch *ds, int port,
+ 		return -EBUSY;
+ 	}
+ 
++	/* Need to initialize the variable, as the code that fills in the
++	 * settings may not be executed.
++	 */
++	wol.wolopts = 0;
++
+ 	ksz_get_wol(ds, dp->index, &wol);
+ 	if (wol.wolopts & WAKE_MAGIC) {
+ 		dev_err(ds->dev,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 23627c973e40f..a2d672a698e35 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -7433,19 +7433,20 @@ static bool bnxt_need_reserve_rings(struct bnxt *bp)
+ 	int rx = bp->rx_nr_rings, stat;
+ 	int vnic, grp = rx;
+ 
+-	if (hw_resc->resv_tx_rings != bp->tx_nr_rings &&
+-	    bp->hwrm_spec_code >= 0x10601)
+-		return true;
+-
+ 	/* Old firmware does not need RX ring reservations but we still
+ 	 * need to setup a default RSS map when needed.  With new firmware
+ 	 * we go through RX ring reservations first and then set up the
+ 	 * RSS map for the successfully reserved RX rings when needed.
+ 	 */
+-	if (!BNXT_NEW_RM(bp)) {
++	if (!BNXT_NEW_RM(bp))
+ 		bnxt_check_rss_tbl_no_rmgr(bp);
++
++	if (hw_resc->resv_tx_rings != bp->tx_nr_rings &&
++	    bp->hwrm_spec_code >= 0x10601)
++		return true;
++
++	if (!BNXT_NEW_RM(bp))
+ 		return false;
+-	}
+ 
+ 	vnic = bnxt_get_total_vnics(bp, rx);
+ 
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+index 1248792d7fd4d..0715ea5bf13ed 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+@@ -42,19 +42,15 @@ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 	struct device *kdev = &priv->pdev->dev;
+ 
+-	if (dev->phydev) {
++	if (dev->phydev)
+ 		phy_ethtool_get_wol(dev->phydev, wol);
+-		if (wol->supported)
+-			return;
+-	}
+ 
+-	if (!device_can_wakeup(kdev)) {
+-		wol->supported = 0;
+-		wol->wolopts = 0;
++	/* MAC is not wake-up capable, return what the PHY does */
++	if (!device_can_wakeup(kdev))
+ 		return;
+-	}
+ 
+-	wol->supported = WAKE_MAGIC | WAKE_MAGICSECURE | WAKE_FILTER;
++	/* Overlay MAC capabilities with that of the PHY queried before */
++	wol->supported |= WAKE_MAGIC | WAKE_MAGICSECURE | WAKE_FILTER;
+ 	wol->wolopts = priv->wolopts;
+ 	memset(wol->sopass, 0, sizeof(wol->sopass));
+ 
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index e32f6724f5681..2e4f3e1782a25 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -775,6 +775,9 @@ void fec_ptp_stop(struct platform_device *pdev)
+ 	struct net_device *ndev = platform_get_drvdata(pdev);
+ 	struct fec_enet_private *fep = netdev_priv(ndev);
+ 
++	if (fep->pps_enable)
++		fec_ptp_enable_pps(fep, 0);
++
+ 	cancel_delayed_work_sync(&fep->time_keep);
+ 	hrtimer_cancel(&fep->perout_timer);
+ 	if (fep->ptp_clock)
+diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
+index fe1741d482b4a..cf816ede05f69 100644
+--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
++++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
+@@ -492,7 +492,7 @@ static int gve_set_channels(struct net_device *netdev,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!netif_carrier_ok(netdev)) {
++	if (!netif_running(netdev)) {
+ 		priv->tx_cfg.num_queues = new_tx;
+ 		priv->rx_cfg.num_queues = new_rx;
+ 		return 0;
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index cabf7d4bcecb8..8b14efd14a505 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -1511,7 +1511,7 @@ static int gve_set_xdp(struct gve_priv *priv, struct bpf_prog *prog,
+ 	u32 status;
+ 
+ 	old_prog = READ_ONCE(priv->xdp_prog);
+-	if (!netif_carrier_ok(priv->dev)) {
++	if (!netif_running(priv->dev)) {
+ 		WRITE_ONCE(priv->xdp_prog, prog);
+ 		if (old_prog)
+ 			bpf_prog_put(old_prog);
+@@ -1784,7 +1784,7 @@ int gve_adjust_queues(struct gve_priv *priv,
+ 	rx_alloc_cfg.qcfg = &new_rx_config;
+ 	tx_alloc_cfg.num_rings = new_tx_config.num_queues;
+ 
+-	if (netif_carrier_ok(priv->dev)) {
++	if (netif_running(priv->dev)) {
+ 		err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg);
+ 		return err;
+ 	}
+@@ -2001,7 +2001,7 @@ static int gve_set_features(struct net_device *netdev,
+ 
+ 	if ((netdev->features & NETIF_F_LRO) != (features & NETIF_F_LRO)) {
+ 		netdev->features ^= NETIF_F_LRO;
+-		if (netif_carrier_ok(netdev)) {
++		if (netif_running(netdev)) {
+ 			err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg);
+ 			if (err) {
+ 				/* Revert the change on error. */
+@@ -2290,7 +2290,7 @@ static int gve_reset_recovery(struct gve_priv *priv, bool was_up)
+ 
+ int gve_reset(struct gve_priv *priv, bool attempt_teardown)
+ {
+-	bool was_up = netif_carrier_ok(priv->dev);
++	bool was_up = netif_running(priv->dev);
+ 	int err;
+ 
+ 	dev_info(&priv->pdev->dev, "Performing reset\n");
+@@ -2631,7 +2631,7 @@ static void gve_shutdown(struct pci_dev *pdev)
+ {
+ 	struct net_device *netdev = pci_get_drvdata(pdev);
+ 	struct gve_priv *priv = netdev_priv(netdev);
+-	bool was_up = netif_carrier_ok(priv->dev);
++	bool was_up = netif_running(priv->dev);
+ 
+ 	rtnl_lock();
+ 	if (was_up && gve_close(priv->dev)) {
+@@ -2649,7 +2649,7 @@ static int gve_suspend(struct pci_dev *pdev, pm_message_t state)
+ {
+ 	struct net_device *netdev = pci_get_drvdata(pdev);
+ 	struct gve_priv *priv = netdev_priv(netdev);
+-	bool was_up = netif_carrier_ok(priv->dev);
++	bool was_up = netif_running(priv->dev);
+ 
+ 	priv->suspend_cnt++;
+ 	rtnl_lock();
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 9b075dd48889e..f16d13e9ff6e3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -560,6 +560,8 @@ ice_prepare_for_reset(struct ice_pf *pf, enum ice_reset_req reset_type)
+ 	if (test_bit(ICE_PREPARED_FOR_RESET, pf->state))
+ 		return;
+ 
++	synchronize_irq(pf->oicr_irq.virq);
++
+ 	ice_unplug_aux_dev(pf);
+ 
+ 	/* Notify VFs of impending reset */
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index f1ee5584e8fa2..3ac9d7ab83f20 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -905,8 +905,8 @@ static void idpf_vport_stop(struct idpf_vport *vport)
+ 
+ 	vport->link_up = false;
+ 	idpf_vport_intr_deinit(vport);
+-	idpf_vport_intr_rel(vport);
+ 	idpf_vport_queues_rel(vport);
++	idpf_vport_intr_rel(vport);
+ 	np->state = __IDPF_VPORT_DOWN;
+ }
+ 
+@@ -1337,9 +1337,8 @@ static void idpf_rx_init_buf_tail(struct idpf_vport *vport)
+ /**
+  * idpf_vport_open - Bring up a vport
+  * @vport: vport to bring up
+- * @alloc_res: allocate queue resources
+  */
+-static int idpf_vport_open(struct idpf_vport *vport, bool alloc_res)
++static int idpf_vport_open(struct idpf_vport *vport)
+ {
+ 	struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+ 	struct idpf_adapter *adapter = vport->adapter;
+@@ -1352,45 +1351,43 @@ static int idpf_vport_open(struct idpf_vport *vport, bool alloc_res)
+ 	/* we do not allow interface up just yet */
+ 	netif_carrier_off(vport->netdev);
+ 
+-	if (alloc_res) {
+-		err = idpf_vport_queues_alloc(vport);
+-		if (err)
+-			return err;
+-	}
+-
+ 	err = idpf_vport_intr_alloc(vport);
+ 	if (err) {
+ 		dev_err(&adapter->pdev->dev, "Failed to allocate interrupts for vport %u: %d\n",
+ 			vport->vport_id, err);
+-		goto queues_rel;
++		return err;
+ 	}
+ 
++	err = idpf_vport_queues_alloc(vport);
++	if (err)
++		goto intr_rel;
++
+ 	err = idpf_vport_queue_ids_init(vport);
+ 	if (err) {
+ 		dev_err(&adapter->pdev->dev, "Failed to initialize queue ids for vport %u: %d\n",
+ 			vport->vport_id, err);
+-		goto intr_rel;
++		goto queues_rel;
+ 	}
+ 
+ 	err = idpf_vport_intr_init(vport);
+ 	if (err) {
+ 		dev_err(&adapter->pdev->dev, "Failed to initialize interrupts for vport %u: %d\n",
+ 			vport->vport_id, err);
+-		goto intr_rel;
++		goto queues_rel;
+ 	}
+ 
+ 	err = idpf_rx_bufs_init_all(vport);
+ 	if (err) {
+ 		dev_err(&adapter->pdev->dev, "Failed to initialize RX buffers for vport %u: %d\n",
+ 			vport->vport_id, err);
+-		goto intr_rel;
++		goto queues_rel;
+ 	}
+ 
+ 	err = idpf_queue_reg_init(vport);
+ 	if (err) {
+ 		dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for vport %u: %d\n",
+ 			vport->vport_id, err);
+-		goto intr_rel;
++		goto queues_rel;
+ 	}
+ 
+ 	idpf_rx_init_buf_tail(vport);
+@@ -1457,10 +1454,10 @@ static int idpf_vport_open(struct idpf_vport *vport, bool alloc_res)
+ 	idpf_send_map_unmap_queue_vector_msg(vport, false);
+ intr_deinit:
+ 	idpf_vport_intr_deinit(vport);
+-intr_rel:
+-	idpf_vport_intr_rel(vport);
+ queues_rel:
+ 	idpf_vport_queues_rel(vport);
++intr_rel:
++	idpf_vport_intr_rel(vport);
+ 
+ 	return err;
+ }
+@@ -1541,7 +1538,7 @@ void idpf_init_task(struct work_struct *work)
+ 	np = netdev_priv(vport->netdev);
+ 	np->state = __IDPF_VPORT_DOWN;
+ 	if (test_and_clear_bit(IDPF_VPORT_UP_REQUESTED, vport_config->flags))
+-		idpf_vport_open(vport, true);
++		idpf_vport_open(vport);
+ 
+ 	/* Spawn and return 'idpf_init_task' work queue until all the
+ 	 * default vports are created
+@@ -1900,9 +1897,6 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
+ 		goto free_vport;
+ 	}
+ 
+-	err = idpf_vport_queues_alloc(new_vport);
+-	if (err)
+-		goto free_vport;
+ 	if (current_state <= __IDPF_VPORT_DOWN) {
+ 		idpf_send_delete_queues_msg(vport);
+ 	} else {
+@@ -1974,17 +1968,23 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
+ 
+ 	err = idpf_set_real_num_queues(vport);
+ 	if (err)
+-		goto err_reset;
++		goto err_open;
+ 
+ 	if (current_state == __IDPF_VPORT_UP)
+-		err = idpf_vport_open(vport, false);
++		err = idpf_vport_open(vport);
+ 
+ 	kfree(new_vport);
+ 
+ 	return err;
+ 
+ err_reset:
+-	idpf_vport_queues_rel(new_vport);
++	idpf_send_add_queues_msg(vport, vport->num_txq, vport->num_complq,
++				 vport->num_rxq, vport->num_bufq);
++
++err_open:
++	if (current_state == __IDPF_VPORT_UP)
++		idpf_vport_open(vport);
++
+ free_vport:
+ 	kfree(new_vport);
+ 
+@@ -2213,7 +2213,7 @@ static int idpf_open(struct net_device *netdev)
+ 	idpf_vport_ctrl_lock(netdev);
+ 	vport = idpf_netdev_to_vport(netdev);
+ 
+-	err = idpf_vport_open(vport, true);
++	err = idpf_vport_open(vport);
+ 
+ 	idpf_vport_ctrl_unlock(netdev);
+ 
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index b023704bbbdab..20ca04320d4bd 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -3436,9 +3436,7 @@ static void idpf_vport_intr_napi_dis_all(struct idpf_vport *vport)
+  */
+ void idpf_vport_intr_rel(struct idpf_vport *vport)
+ {
+-	int i, j, v_idx;
+-
+-	for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
++	for (u32 v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
+ 		struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx];
+ 
+ 		kfree(q_vector->bufq);
+@@ -3449,26 +3447,6 @@ void idpf_vport_intr_rel(struct idpf_vport *vport)
+ 		q_vector->rx = NULL;
+ 	}
+ 
+-	/* Clean up the mapping of queues to vectors */
+-	for (i = 0; i < vport->num_rxq_grp; i++) {
+-		struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+-
+-		if (idpf_is_queue_model_split(vport->rxq_model))
+-			for (j = 0; j < rx_qgrp->splitq.num_rxq_sets; j++)
+-				rx_qgrp->splitq.rxq_sets[j]->rxq.q_vector = NULL;
+-		else
+-			for (j = 0; j < rx_qgrp->singleq.num_rxq; j++)
+-				rx_qgrp->singleq.rxqs[j]->q_vector = NULL;
+-	}
+-
+-	if (idpf_is_queue_model_split(vport->txq_model))
+-		for (i = 0; i < vport->num_txq_grp; i++)
+-			vport->txq_grps[i].complq->q_vector = NULL;
+-	else
+-		for (i = 0; i < vport->num_txq_grp; i++)
+-			for (j = 0; j < vport->txq_grps[i].num_txq; j++)
+-				vport->txq_grps[i].txqs[j]->q_vector = NULL;
+-
+ 	kfree(vport->q_vectors);
+ 	vport->q_vectors = NULL;
+ }
+@@ -3636,13 +3614,15 @@ void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector)
+ /**
+  * idpf_vport_intr_req_irq - get MSI-X vectors from the OS for the vport
+  * @vport: main vport structure
+- * @basename: name for the vector
+  */
+-static int idpf_vport_intr_req_irq(struct idpf_vport *vport, char *basename)
++static int idpf_vport_intr_req_irq(struct idpf_vport *vport)
+ {
+ 	struct idpf_adapter *adapter = vport->adapter;
++	const char *drv_name, *if_name, *vec_name;
+ 	int vector, err, irq_num, vidx;
+-	const char *vec_name;
++
++	drv_name = dev_driver_string(&adapter->pdev->dev);
++	if_name = netdev_name(vport->netdev);
+ 
+ 	for (vector = 0; vector < vport->num_q_vectors; vector++) {
+ 		struct idpf_q_vector *q_vector = &vport->q_vectors[vector];
+@@ -3659,8 +3639,8 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport, char *basename)
+ 		else
+ 			continue;
+ 
+-		q_vector->name = kasprintf(GFP_KERNEL, "%s-%s-%d",
+-					   basename, vec_name, vidx);
++		q_vector->name = kasprintf(GFP_KERNEL, "%s-%s-%s-%d", drv_name,
++					   if_name, vec_name, vidx);
+ 
+ 		err = request_irq(irq_num, idpf_vport_intr_clean_queues, 0,
+ 				  q_vector->name, q_vector);
+@@ -4170,7 +4150,6 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport)
+  */
+ int idpf_vport_intr_init(struct idpf_vport *vport)
+ {
+-	char *int_name;
+ 	int err;
+ 
+ 	err = idpf_vport_intr_init_vec_idx(vport);
+@@ -4184,11 +4163,7 @@ int idpf_vport_intr_init(struct idpf_vport *vport)
+ 	if (err)
+ 		goto unroll_vectors_alloc;
+ 
+-	int_name = kasprintf(GFP_KERNEL, "%s-%s",
+-			     dev_driver_string(&vport->adapter->pdev->dev),
+-			     vport->netdev->name);
+-
+-	err = idpf_vport_intr_req_irq(vport, int_name);
++	err = idpf_vport_intr_req_irq(vport);
+ 	if (err)
+ 		goto unroll_vectors_alloc;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index b5333da20e8a7..cdc84a27a04ed 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -2374,6 +2374,9 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
+ 	if (likely(wi->consumed_strides < rq->mpwqe.num_strides))
+ 		return;
+ 
++	if (unlikely(!cstrides))
++		return;
++
+ 	wq  = &rq->mpwqe.wq;
+ 	wqe = mlx5_wq_ll_get_wqe(wq, wqe_id);
+ 	mlx5_wq_ll_pop(wq, cqe->wqe_id, &wqe->next.next_wqe_index);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+index c0ced4d315f3d..d92f640bae575 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+@@ -1599,6 +1599,7 @@ static int mlxsw_pci_reset_at_pci_disable(struct mlxsw_pci *mlxsw_pci,
+ {
+ 	struct pci_dev *pdev = mlxsw_pci->pdev;
+ 	char mrsr_pl[MLXSW_REG_MRSR_LEN];
++	struct pci_dev *bridge;
+ 	int err;
+ 
+ 	if (!pci_reset_sbr_supported) {
+@@ -1615,6 +1616,9 @@ static int mlxsw_pci_reset_at_pci_disable(struct mlxsw_pci *mlxsw_pci,
+ sbr:
+ 	device_lock_assert(&pdev->dev);
+ 
++	bridge = pci_upstream_bridge(pdev);
++	if (bridge)
++		pci_cfg_access_lock(bridge);
+ 	pci_cfg_access_lock(pdev);
+ 	pci_save_state(pdev);
+ 
+@@ -1624,6 +1628,8 @@ static int mlxsw_pci_reset_at_pci_disable(struct mlxsw_pci *mlxsw_pci,
+ 
+ 	pci_restore_state(pdev);
+ 	pci_cfg_access_unlock(pdev);
++	if (bridge)
++		pci_cfg_access_unlock(bridge);
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
+index 466c4002f00d4..3a7f3a8b06718 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
+@@ -21,6 +21,7 @@
+ #define RGMII_IO_MACRO_CONFIG2		0x1C
+ #define RGMII_IO_MACRO_DEBUG1		0x20
+ #define EMAC_SYSTEM_LOW_POWER_DEBUG	0x28
++#define EMAC_WRAPPER_SGMII_PHY_CNTRL1	0xf4
+ 
+ /* RGMII_IO_MACRO_CONFIG fields */
+ #define RGMII_CONFIG_FUNC_CLK_EN		BIT(30)
+@@ -79,6 +80,9 @@
+ #define ETHQOS_MAC_CTRL_SPEED_MODE		BIT(14)
+ #define ETHQOS_MAC_CTRL_PORT_SEL		BIT(15)
+ 
++/* EMAC_WRAPPER_SGMII_PHY_CNTRL1 bits */
++#define SGMII_PHY_CNTRL1_SGMII_TX_TO_RX_LOOPBACK_EN	BIT(3)
++
+ #define SGMII_10M_RX_CLK_DVDR			0x31
+ 
+ struct ethqos_emac_por {
+@@ -95,6 +99,7 @@ struct ethqos_emac_driver_data {
+ 	bool has_integrated_pcs;
+ 	u32 dma_addr_width;
+ 	struct dwmac4_addrs dwmac4_addrs;
++	bool needs_sgmii_loopback;
+ };
+ 
+ struct qcom_ethqos {
+@@ -114,6 +119,7 @@ struct qcom_ethqos {
+ 	unsigned int num_por;
+ 	bool rgmii_config_loopback_en;
+ 	bool has_emac_ge_3;
++	bool needs_sgmii_loopback;
+ };
+ 
+ static int rgmii_readl(struct qcom_ethqos *ethqos, unsigned int offset)
+@@ -191,8 +197,22 @@ ethqos_update_link_clk(struct qcom_ethqos *ethqos, unsigned int speed)
+ 	clk_set_rate(ethqos->link_clk, ethqos->link_clk_rate);
+ }
+ 
++static void
++qcom_ethqos_set_sgmii_loopback(struct qcom_ethqos *ethqos, bool enable)
++{
++	if (!ethqos->needs_sgmii_loopback ||
++	    ethqos->phy_mode != PHY_INTERFACE_MODE_2500BASEX)
++		return;
++
++	rgmii_updatel(ethqos,
++		      SGMII_PHY_CNTRL1_SGMII_TX_TO_RX_LOOPBACK_EN,
++		      enable ? SGMII_PHY_CNTRL1_SGMII_TX_TO_RX_LOOPBACK_EN : 0,
++		      EMAC_WRAPPER_SGMII_PHY_CNTRL1);
++}
++
+ static void ethqos_set_func_clk_en(struct qcom_ethqos *ethqos)
+ {
++	qcom_ethqos_set_sgmii_loopback(ethqos, true);
+ 	rgmii_updatel(ethqos, RGMII_CONFIG_FUNC_CLK_EN,
+ 		      RGMII_CONFIG_FUNC_CLK_EN, RGMII_IO_MACRO_CONFIG);
+ }
+@@ -277,6 +297,7 @@ static const struct ethqos_emac_driver_data emac_v4_0_0_data = {
+ 	.has_emac_ge_3 = true,
+ 	.link_clk_name = "phyaux",
+ 	.has_integrated_pcs = true,
++	.needs_sgmii_loopback = true,
+ 	.dma_addr_width = 36,
+ 	.dwmac4_addrs = {
+ 		.dma_chan = 0x00008100,
+@@ -674,6 +695,7 @@ static void ethqos_fix_mac_speed(void *priv, unsigned int speed, unsigned int mo
+ {
+ 	struct qcom_ethqos *ethqos = priv;
+ 
++	qcom_ethqos_set_sgmii_loopback(ethqos, false);
+ 	ethqos->speed = speed;
+ 	ethqos_update_link_clk(ethqos, speed);
+ 	ethqos_configure(ethqos);
+@@ -809,6 +831,7 @@ static int qcom_ethqos_probe(struct platform_device *pdev)
+ 	ethqos->num_por = data->num_por;
+ 	ethqos->rgmii_config_loopback_en = data->rgmii_config_loopback_en;
+ 	ethqos->has_emac_ge_3 = data->has_emac_ge_3;
++	ethqos->needs_sgmii_loopback = data->needs_sgmii_loopback;
+ 
+ 	ethqos->link_clk = devm_clk_get(dev, data->link_clk_name ?: "rgmii");
+ 	if (IS_ERR(ethqos->link_clk))
+diff --git a/drivers/net/pse-pd/tps23881.c b/drivers/net/pse-pd/tps23881.c
+index 98ffbb1bbf13c..2d1c2e5706f8b 100644
+--- a/drivers/net/pse-pd/tps23881.c
++++ b/drivers/net/pse-pd/tps23881.c
+@@ -5,6 +5,7 @@
+  * Copyright (c) 2023 Bootlin, Kory Maincent <kory.maincent@bootlin.com>
+  */
+ 
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+ #include <linux/firmware.h>
+ #include <linux/i2c.h>
+@@ -29,6 +30,8 @@
+ #define TPS23881_REG_TPON	BIT(0)
+ #define TPS23881_REG_FWREV	0x41
+ #define TPS23881_REG_DEVID	0x43
++#define TPS23881_REG_DEVID_MASK	0xF0
++#define TPS23881_DEVICE_ID	0x02
+ #define TPS23881_REG_SRAM_CTRL	0x60
+ #define TPS23881_REG_SRAM_DATA	0x61
+ 
+@@ -750,7 +753,7 @@ static int tps23881_i2c_probe(struct i2c_client *client)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (ret != 0x22) {
++	if (FIELD_GET(TPS23881_REG_DEVID_MASK, ret) != TPS23881_DEVICE_ID) {
+ 		dev_err(dev, "Wrong device ID\n");
+ 		return -ENXIO;
+ 	}
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 386d62769dedb..cfda32047cffb 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -201,6 +201,7 @@ static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ 			break;
+ 		default:
+ 			/* not ip - do not know what to do */
++			kfree_skb(skbn);
+ 			goto skip;
+ 		}
+ 
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 5161e7efda2cb..f32e017b62e9b 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -3257,7 +3257,11 @@ static int virtnet_set_ringparam(struct net_device *dev,
+ 			err = virtnet_send_tx_ctrl_coal_vq_cmd(vi, i,
+ 							       vi->intr_coal_tx.max_usecs,
+ 							       vi->intr_coal_tx.max_packets);
+-			if (err)
++
++			/* Don't break the tx resize action if the vq coalescing is not
++			 * supported. The same is true for rx resize below.
++			 */
++			if (err && err != -EOPNOTSUPP)
+ 				return err;
+ 		}
+ 
+@@ -3272,7 +3276,7 @@ static int virtnet_set_ringparam(struct net_device *dev,
+ 							       vi->intr_coal_rx.max_usecs,
+ 							       vi->intr_coal_rx.max_packets);
+ 			mutex_unlock(&vi->rq[i].dim_lock);
+-			if (err)
++			if (err && err != -EOPNOTSUPP)
+ 				return err;
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index 121f27284be59..1d287ed25a949 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -2793,6 +2793,7 @@ int ath12k_dp_rx_peer_frag_setup(struct ath12k *ar, const u8 *peer_mac, int vdev
+ 	peer = ath12k_peer_find(ab, vdev_id, peer_mac);
+ 	if (!peer) {
+ 		spin_unlock_bh(&ab->base_lock);
++		crypto_free_shash(tfm);
+ 		ath12k_warn(ab, "failed to find the peer to set up fragment info\n");
+ 		return -ENOENT;
+ 	}
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index 55fde0d33183c..f92b4ce49dfd4 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -1091,14 +1091,14 @@ void ath12k_pci_ext_irq_enable(struct ath12k_base *ab)
+ {
+ 	int i;
+ 
+-	set_bit(ATH12K_FLAG_EXT_IRQ_ENABLED, &ab->dev_flags);
+-
+ 	for (i = 0; i < ATH12K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ 		struct ath12k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+ 
+ 		napi_enable(&irq_grp->napi);
+ 		ath12k_pci_ext_grp_enable(irq_grp);
+ 	}
++
++	set_bit(ATH12K_FLAG_EXT_IRQ_ENABLED, &ab->dev_flags);
+ }
+ 
+ void ath12k_pci_ext_irq_disable(struct ath12k_base *ab)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c
+index 2ea72d9e39577..4d2931e544278 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/usb.c
++++ b/drivers/net/wireless/realtek/rtlwifi/usb.c
+@@ -23,6 +23,8 @@ MODULE_DESCRIPTION("USB basic driver for rtlwifi");
+ 
+ #define MAX_USBCTRL_VENDORREQ_TIMES		10
+ 
++static void _rtl_usb_cleanup_tx(struct ieee80211_hw *hw);
++
+ static void _usbctrl_vendorreq_sync(struct usb_device *udev, u8 reqtype,
+ 				   u16 value, void *pdata, u16 len)
+ {
+@@ -285,9 +287,23 @@ static int _rtl_usb_init(struct ieee80211_hw *hw)
+ 	}
+ 	/* usb endpoint mapping */
+ 	err = rtlpriv->cfg->usb_interface_cfg->usb_endpoint_mapping(hw);
+-	rtlusb->usb_mq_to_hwq =  rtlpriv->cfg->usb_interface_cfg->usb_mq_to_hwq;
+-	_rtl_usb_init_tx(hw);
+-	_rtl_usb_init_rx(hw);
++	if (err)
++		return err;
++
++	rtlusb->usb_mq_to_hwq = rtlpriv->cfg->usb_interface_cfg->usb_mq_to_hwq;
++
++	err = _rtl_usb_init_tx(hw);
++	if (err)
++		return err;
++
++	err = _rtl_usb_init_rx(hw);
++	if (err)
++		goto err_out;
++
++	return 0;
++
++err_out:
++	_rtl_usb_cleanup_tx(hw);
+ 	return err;
+ }
+ 
+@@ -691,17 +707,13 @@ static int rtl_usb_start(struct ieee80211_hw *hw)
+ }
+ 
+ /*=======================  tx =========================================*/
+-static void rtl_usb_cleanup(struct ieee80211_hw *hw)
++static void _rtl_usb_cleanup_tx(struct ieee80211_hw *hw)
+ {
+ 	u32 i;
+ 	struct sk_buff *_skb;
+ 	struct rtl_usb *rtlusb = rtl_usbdev(rtl_usbpriv(hw));
+ 	struct ieee80211_tx_info *txinfo;
+ 
+-	/* clean up rx stuff. */
+-	_rtl_usb_cleanup_rx(hw);
+-
+-	/* clean up tx stuff */
+ 	for (i = 0; i < RTL_USB_MAX_EP_NUM; i++) {
+ 		while ((_skb = skb_dequeue(&rtlusb->tx_skb_queue[i]))) {
+ 			rtlusb->usb_tx_cleanup(hw, _skb);
+@@ -715,6 +727,12 @@ static void rtl_usb_cleanup(struct ieee80211_hw *hw)
+ 	usb_kill_anchored_urbs(&rtlusb->tx_submitted);
+ }
+ 
++static void rtl_usb_cleanup(struct ieee80211_hw *hw)
++{
++	_rtl_usb_cleanup_rx(hw);
++	_rtl_usb_cleanup_tx(hw);
++}
++
+ /* We may add some struct into struct rtl_usb later. Do deinit here.  */
+ static void rtl_usb_deinit(struct ieee80211_hw *hw)
+ {
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
+index b36aa9a6bb3fc..312b57d7da642 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.c
++++ b/drivers/net/wireless/realtek/rtw89/pci.c
+@@ -183,14 +183,17 @@ static void rtw89_pci_sync_skb_for_device(struct rtw89_dev *rtwdev,
+ static void rtw89_pci_rxbd_info_update(struct rtw89_dev *rtwdev,
+ 				       struct sk_buff *skb)
+ {
+-	struct rtw89_pci_rxbd_info *rxbd_info;
+ 	struct rtw89_pci_rx_info *rx_info = RTW89_PCI_RX_SKB_CB(skb);
++	struct rtw89_pci_rxbd_info *rxbd_info;
++	__le32 info;
+ 
+ 	rxbd_info = (struct rtw89_pci_rxbd_info *)skb->data;
+-	rx_info->fs = le32_get_bits(rxbd_info->dword, RTW89_PCI_RXBD_FS);
+-	rx_info->ls = le32_get_bits(rxbd_info->dword, RTW89_PCI_RXBD_LS);
+-	rx_info->len = le32_get_bits(rxbd_info->dword, RTW89_PCI_RXBD_WRITE_SIZE);
+-	rx_info->tag = le32_get_bits(rxbd_info->dword, RTW89_PCI_RXBD_TAG);
++	info = rxbd_info->dword;
++
++	rx_info->fs = le32_get_bits(info, RTW89_PCI_RXBD_FS);
++	rx_info->ls = le32_get_bits(info, RTW89_PCI_RXBD_LS);
++	rx_info->len = le32_get_bits(info, RTW89_PCI_RXBD_WRITE_SIZE);
++	rx_info->tag = le32_get_bits(info, RTW89_PCI_RXBD_TAG);
+ }
+ 
+ static int rtw89_pci_validate_rx_tag(struct rtw89_dev *rtwdev,
+diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c
+index 0cfa39361d3b6..25ecc1a005c5a 100644
+--- a/drivers/nvme/host/apple.c
++++ b/drivers/nvme/host/apple.c
+@@ -1388,7 +1388,7 @@ static void devm_apple_nvme_mempool_destroy(void *data)
+ 	mempool_destroy(data);
+ }
+ 
+-static int apple_nvme_probe(struct platform_device *pdev)
++static struct apple_nvme *apple_nvme_alloc(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct apple_nvme *anv;
+@@ -1396,7 +1396,7 @@ static int apple_nvme_probe(struct platform_device *pdev)
+ 
+ 	anv = devm_kzalloc(dev, sizeof(*anv), GFP_KERNEL);
+ 	if (!anv)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	anv->dev = get_device(dev);
+ 	anv->adminq.is_adminq = true;
+@@ -1516,10 +1516,26 @@ static int apple_nvme_probe(struct platform_device *pdev)
+ 		goto put_dev;
+ 	}
+ 
++	return anv;
++put_dev:
++	put_device(anv->dev);
++	return ERR_PTR(ret);
++}
++
++static int apple_nvme_probe(struct platform_device *pdev)
++{
++	struct apple_nvme *anv;
++	int ret;
++
++	anv = apple_nvme_alloc(pdev);
++	if (IS_ERR(anv))
++		return PTR_ERR(anv);
++
+ 	anv->ctrl.admin_q = blk_mq_alloc_queue(&anv->admin_tagset, NULL, NULL);
+ 	if (IS_ERR(anv->ctrl.admin_q)) {
+ 		ret = -ENOMEM;
+-		goto put_dev;
++		anv->ctrl.admin_q = NULL;
++		goto out_uninit_ctrl;
+ 	}
+ 
+ 	nvme_reset_ctrl(&anv->ctrl);
+@@ -1527,8 +1543,9 @@ static int apple_nvme_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
+-put_dev:
+-	put_device(anv->dev);
++out_uninit_ctrl:
++	nvme_uninit_ctrl(&anv->ctrl);
++	nvme_put_ctrl(&anv->ctrl);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 5a93f021ca4f1..7168ff4cc62bb 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -826,9 +826,9 @@ static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req,
+ 		struct nvme_command *cmnd)
+ {
+ 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
++	struct bio_vec bv = rq_integrity_vec(req);
+ 
+-	iod->meta_dma = dma_map_bvec(dev->dev, rq_integrity_vec(req),
+-			rq_dma_dir(req), 0);
++	iod->meta_dma = dma_map_bvec(dev->dev, &bv, rq_dma_dir(req), 0);
+ 	if (dma_mapping_error(dev->dev, iod->meta_dma))
+ 		return BLK_STS_IOERR;
+ 	cmnd->rw.metadata = cpu_to_le64(iod->meta_dma);
+@@ -968,7 +968,7 @@ static __always_inline void nvme_pci_unmap_rq(struct request *req)
+ 	        struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+ 
+ 		dma_unmap_page(dev->dev, iod->meta_dma,
+-			       rq_integrity_vec(req)->bv_len, rq_dma_dir(req));
++			       rq_integrity_vec(req).bv_len, rq_dma_dir(req));
+ 	}
+ 
+ 	if (blk_rq_nr_phys_segments(req))
+diff --git a/drivers/platform/chrome/cros_ec_lpc.c b/drivers/platform/chrome/cros_ec_lpc.c
+index ddfbfec44f4cc..43e0914256a3c 100644
+--- a/drivers/platform/chrome/cros_ec_lpc.c
++++ b/drivers/platform/chrome/cros_ec_lpc.c
+@@ -39,6 +39,11 @@ static bool cros_ec_lpc_acpi_device_found;
+  * be used as the base port for EC mapped memory.
+  */
+ #define CROS_EC_LPC_QUIRK_REMAP_MEMORY              BIT(0)
++/*
++ * Indicates that lpc_driver_data.quirk_acpi_id should be used to find
++ * the ACPI device.
++ */
++#define CROS_EC_LPC_QUIRK_ACPI_ID                   BIT(1)
+ 
+ /**
+  * struct lpc_driver_data - driver data attached to a DMI device ID to indicate
+@@ -46,10 +51,12 @@ static bool cros_ec_lpc_acpi_device_found;
+  * @quirks: a bitfield composed of quirks from CROS_EC_LPC_QUIRK_*
+  * @quirk_mmio_memory_base: The first I/O port addressing EC mapped memory (used
+  *                          when quirk ...REMAP_MEMORY is set.)
++ * @quirk_acpi_id: An ACPI HID to be used to find the ACPI device.
+  */
+ struct lpc_driver_data {
+ 	u32 quirks;
+ 	u16 quirk_mmio_memory_base;
++	const char *quirk_acpi_id;
+ };
+ 
+ /**
+@@ -374,6 +381,26 @@ static void cros_ec_lpc_acpi_notify(acpi_handle device, u32 value, void *data)
+ 		pm_system_wakeup();
+ }
+ 
++static acpi_status cros_ec_lpc_parse_device(acpi_handle handle, u32 level,
++					    void *context, void **retval)
++{
++	*(struct acpi_device **)context = acpi_fetch_acpi_dev(handle);
++	return AE_CTRL_TERMINATE;
++}
++
++static struct acpi_device *cros_ec_lpc_get_device(const char *id)
++{
++	struct acpi_device *adev = NULL;
++	acpi_status status = acpi_get_devices(id, cros_ec_lpc_parse_device,
++					      &adev, NULL);
++	if (ACPI_FAILURE(status)) {
++		pr_warn(DRV_NAME ": Looking for %s failed\n", id);
++		return NULL;
++	}
++
++	return adev;
++}
++
+ static int cros_ec_lpc_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -401,6 +428,16 @@ static int cros_ec_lpc_probe(struct platform_device *pdev)
+ 
+ 		if (quirks & CROS_EC_LPC_QUIRK_REMAP_MEMORY)
+ 			ec_lpc->mmio_memory_base = driver_data->quirk_mmio_memory_base;
++
++		if (quirks & CROS_EC_LPC_QUIRK_ACPI_ID) {
++			adev = cros_ec_lpc_get_device(driver_data->quirk_acpi_id);
++			if (!adev) {
++				dev_err(dev, "failed to get ACPI device '%s'",
++					driver_data->quirk_acpi_id);
++				return -ENODEV;
++			}
++			ACPI_COMPANION_SET(dev, adev);
++		}
+ 	}
+ 
+ 	/*
+@@ -661,23 +698,12 @@ static struct platform_device cros_ec_lpc_device = {
+ 	.name = DRV_NAME
+ };
+ 
+-static acpi_status cros_ec_lpc_parse_device(acpi_handle handle, u32 level,
+-					    void *context, void **retval)
+-{
+-	*(bool *)context = true;
+-	return AE_CTRL_TERMINATE;
+-}
+-
+ static int __init cros_ec_lpc_init(void)
+ {
+ 	int ret;
+-	acpi_status status;
+ 	const struct dmi_system_id *dmi_match;
+ 
+-	status = acpi_get_devices(ACPI_DRV_NAME, cros_ec_lpc_parse_device,
+-				  &cros_ec_lpc_acpi_device_found, NULL);
+-	if (ACPI_FAILURE(status))
+-		pr_warn(DRV_NAME ": Looking for %s failed\n", ACPI_DRV_NAME);
++	cros_ec_lpc_acpi_device_found = !!cros_ec_lpc_get_device(ACPI_DRV_NAME);
+ 
+ 	dmi_match = dmi_first_match(cros_ec_lpc_dmi_table);
+ 
+diff --git a/drivers/platform/x86/intel/ifs/runtest.c b/drivers/platform/x86/intel/ifs/runtest.c
+index 282e4bfe30da3..be3d51ed0e474 100644
+--- a/drivers/platform/x86/intel/ifs/runtest.c
++++ b/drivers/platform/x86/intel/ifs/runtest.c
+@@ -221,8 +221,8 @@ static int doscan(void *data)
+  */
+ static void ifs_test_core(int cpu, struct device *dev)
+ {
++	union ifs_status status = {};
+ 	union ifs_scan activate;
+-	union ifs_status status;
+ 	unsigned long timeout;
+ 	struct ifs_data *ifsd;
+ 	int to_start, to_stop;
+diff --git a/drivers/platform/x86/intel/vbtn.c b/drivers/platform/x86/intel/vbtn.c
+index 9b7ce03ba085c..a353e830b65fd 100644
+--- a/drivers/platform/x86/intel/vbtn.c
++++ b/drivers/platform/x86/intel/vbtn.c
+@@ -7,11 +7,13 @@
+  */
+ 
+ #include <linux/acpi.h>
++#include <linux/cleanup.h>
+ #include <linux/dmi.h>
+ #include <linux/input.h>
+ #include <linux/input/sparse-keymap.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/platform_device.h>
+ #include <linux/suspend.h>
+ #include "../dual_accel_detect.h"
+@@ -66,6 +68,7 @@ static const struct key_entry intel_vbtn_switchmap[] = {
+ };
+ 
+ struct intel_vbtn_priv {
++	struct mutex mutex; /* Avoid notify_handler() racing with itself */
+ 	struct input_dev *buttons_dev;
+ 	struct input_dev *switches_dev;
+ 	bool dual_accel;
+@@ -155,6 +158,8 @@ static void notify_handler(acpi_handle handle, u32 event, void *context)
+ 	bool autorelease;
+ 	int ret;
+ 
++	guard(mutex)(&priv->mutex);
++
+ 	if ((ke = sparse_keymap_entry_from_scancode(priv->buttons_dev, event))) {
+ 		if (!priv->has_buttons) {
+ 			dev_warn(&device->dev, "Warning: received 0x%02x button event on a device without buttons, please report this.\n",
+@@ -290,6 +295,10 @@ static int intel_vbtn_probe(struct platform_device *device)
+ 		return -ENOMEM;
+ 	dev_set_drvdata(&device->dev, priv);
+ 
++	err = devm_mutex_init(&device->dev, &priv->mutex);
++	if (err)
++		return err;
++
+ 	priv->dual_accel = dual_accel;
+ 	priv->has_buttons = has_buttons;
+ 	priv->has_switches = has_switches;
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index b5903193e2f96..ac05942e4e6ac 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -178,18 +178,18 @@ static inline int axp288_charger_set_cv(struct axp288_chrg_info *info, int cv)
+ 	u8 reg_val;
+ 	int ret;
+ 
+-	if (cv <= CV_4100MV) {
+-		reg_val = CHRG_CCCV_CV_4100MV;
+-		cv = CV_4100MV;
+-	} else if (cv <= CV_4150MV) {
+-		reg_val = CHRG_CCCV_CV_4150MV;
+-		cv = CV_4150MV;
+-	} else if (cv <= CV_4200MV) {
++	if (cv >= CV_4350MV) {
++		reg_val = CHRG_CCCV_CV_4350MV;
++		cv = CV_4350MV;
++	} else if (cv >= CV_4200MV) {
+ 		reg_val = CHRG_CCCV_CV_4200MV;
+ 		cv = CV_4200MV;
++	} else if (cv >= CV_4150MV) {
++		reg_val = CHRG_CCCV_CV_4150MV;
++		cv = CV_4150MV;
+ 	} else {
+-		reg_val = CHRG_CCCV_CV_4350MV;
+-		cv = CV_4350MV;
++		reg_val = CHRG_CCCV_CV_4100MV;
++		cv = CV_4100MV;
+ 	}
+ 
+ 	reg_val = reg_val << CHRG_CCCV_CV_BIT_POS;
+@@ -337,8 +337,8 @@ static int axp288_charger_usb_set_property(struct power_supply *psy,
+ 		}
+ 		break;
+ 	case POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE:
+-		scaled_val = min(val->intval, info->max_cv);
+-		scaled_val = DIV_ROUND_CLOSEST(scaled_val, 1000);
++		scaled_val = DIV_ROUND_CLOSEST(val->intval, 1000);
++		scaled_val = min(scaled_val, info->max_cv);
+ 		ret = axp288_charger_set_cv(info, scaled_val);
+ 		if (ret < 0) {
+ 			dev_warn(&info->pdev->dev, "set charge voltage failed\n");
+diff --git a/drivers/power/supply/qcom_battmgr.c b/drivers/power/supply/qcom_battmgr.c
+index ec163d1bcd189..44c6301f5f174 100644
+--- a/drivers/power/supply/qcom_battmgr.c
++++ b/drivers/power/supply/qcom_battmgr.c
+@@ -486,7 +486,7 @@ static int qcom_battmgr_bat_get_property(struct power_supply *psy,
+ 	int ret;
+ 
+ 	if (!battmgr->service_up)
+-		return -ENODEV;
++		return -EAGAIN;
+ 
+ 	if (battmgr->variant == QCOM_BATTMGR_SC8280XP)
+ 		ret = qcom_battmgr_bat_sc8280xp_update(battmgr, psp);
+@@ -683,7 +683,7 @@ static int qcom_battmgr_ac_get_property(struct power_supply *psy,
+ 	int ret;
+ 
+ 	if (!battmgr->service_up)
+-		return -ENODEV;
++		return -EAGAIN;
+ 
+ 	ret = qcom_battmgr_bat_sc8280xp_update(battmgr, psp);
+ 	if (ret)
+@@ -748,7 +748,7 @@ static int qcom_battmgr_usb_get_property(struct power_supply *psy,
+ 	int ret;
+ 
+ 	if (!battmgr->service_up)
+-		return -ENODEV;
++		return -EAGAIN;
+ 
+ 	if (battmgr->variant == QCOM_BATTMGR_SC8280XP)
+ 		ret = qcom_battmgr_bat_sc8280xp_update(battmgr, psp);
+@@ -867,7 +867,7 @@ static int qcom_battmgr_wls_get_property(struct power_supply *psy,
+ 	int ret;
+ 
+ 	if (!battmgr->service_up)
+-		return -ENODEV;
++		return -EAGAIN;
+ 
+ 	if (battmgr->variant == QCOM_BATTMGR_SC8280XP)
+ 		ret = qcom_battmgr_bat_sc8280xp_update(battmgr, psp);
+diff --git a/drivers/power/supply/rt5033_battery.c b/drivers/power/supply/rt5033_battery.c
+index 32eafe2c00af5..7a27b262fb84a 100644
+--- a/drivers/power/supply/rt5033_battery.c
++++ b/drivers/power/supply/rt5033_battery.c
+@@ -159,6 +159,7 @@ static int rt5033_battery_probe(struct i2c_client *client)
+ 		return -EINVAL;
+ 	}
+ 
++	i2c_set_clientdata(client, battery);
+ 	psy_cfg.of_node = client->dev.of_node;
+ 	psy_cfg.drv_data = battery;
+ 
+diff --git a/drivers/s390/char/sclp_sd.c b/drivers/s390/char/sclp_sd.c
+index f9e164be7568f..944e75beb160c 100644
+--- a/drivers/s390/char/sclp_sd.c
++++ b/drivers/s390/char/sclp_sd.c
+@@ -320,8 +320,14 @@ static int sclp_sd_store_data(struct sclp_sd_data *result, u8 di)
+ 			  &esize);
+ 	if (rc) {
+ 		/* Cancel running request if interrupted */
+-		if (rc == -ERESTARTSYS)
+-			sclp_sd_sync(page, SD_EQ_HALT, di, 0, 0, NULL, NULL);
++		if (rc == -ERESTARTSYS) {
++			if (sclp_sd_sync(page, SD_EQ_HALT, di, 0, 0, NULL, NULL)) {
++				pr_warn("Could not stop Store Data request - leaking at least %zu bytes\n",
++					(size_t)dsize * PAGE_SIZE);
++				data = NULL;
++				asce = 0;
++			}
++		}
+ 		vfree(data);
+ 		goto out;
+ 	}
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index bce639a6cca17..c051692395ccb 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -3453,6 +3453,17 @@ static int mpi3mr_prepare_sg_scmd(struct mpi3mr_ioc *mrioc,
+ 		    scmd->sc_data_direction);
+ 		priv->meta_sg_valid = 1; /* To unmap meta sg DMA */
+ 	} else {
++		/*
++		 * Some firmware versions byte-swap the REPORT ZONES command
++		 * reply from ATA-ZAC devices by directly accessing in the host
++		 * buffer. This does not respect the default command DMA
++		 * direction and causes IOMMU page faults on some architectures
++		 * with an IOMMU enforcing write mappings (e.g. AMD hosts).
++		 * Avoid such issue by making the REPORT ZONES buffer mapping
++		 * bi-directional.
++		 */
++		if (scmd->cmnd[0] == ZBC_IN && scmd->cmnd[1] == ZI_REPORT_ZONES)
++			scmd->sc_data_direction = DMA_BIDIRECTIONAL;
+ 		sg_scmd = scsi_sglist(scmd);
+ 		sges_left = scsi_dma_map(scmd);
+ 	}
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index b2bcf4a27ddcd..b785a7e88b498 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -2671,6 +2671,22 @@ _base_build_zero_len_sge_ieee(struct MPT3SAS_ADAPTER *ioc, void *paddr)
+ 	_base_add_sg_single_ieee(paddr, sgl_flags, 0, 0, -1);
+ }
+ 
++static inline int _base_scsi_dma_map(struct scsi_cmnd *cmd)
++{
++	/*
++	 * Some firmware versions byte-swap the REPORT ZONES command reply from
++	 * ATA-ZAC devices by directly accessing in the host buffer. This does
++	 * not respect the default command DMA direction and causes IOMMU page
++	 * faults on some architectures with an IOMMU enforcing write mappings
++	 * (e.g. AMD hosts). Avoid such issue by making the report zones buffer
++	 * mapping bi-directional.
++	 */
++	if (cmd->cmnd[0] == ZBC_IN && cmd->cmnd[1] == ZI_REPORT_ZONES)
++		cmd->sc_data_direction = DMA_BIDIRECTIONAL;
++
++	return scsi_dma_map(cmd);
++}
++
+ /**
+  * _base_build_sg_scmd - main sg creation routine
+  *		pcie_device is unused here!
+@@ -2717,7 +2733,7 @@ _base_build_sg_scmd(struct MPT3SAS_ADAPTER *ioc,
+ 	sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT;
+ 
+ 	sg_scmd = scsi_sglist(scmd);
+-	sges_left = scsi_dma_map(scmd);
++	sges_left = _base_scsi_dma_map(scmd);
+ 	if (sges_left < 0)
+ 		return -ENOMEM;
+ 
+@@ -2861,7 +2877,7 @@ _base_build_sg_scmd_ieee(struct MPT3SAS_ADAPTER *ioc,
+ 	}
+ 
+ 	sg_scmd = scsi_sglist(scmd);
+-	sges_left = scsi_dma_map(scmd);
++	sges_left = _base_scsi_dma_map(scmd);
+ 	if (sges_left < 0)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 1b7561abe05d9..6b64af7d49273 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -4119,6 +4119,8 @@ static int sd_resume(struct device *dev)
+ {
+ 	struct scsi_disk *sdkp = dev_get_drvdata(dev);
+ 
++	sd_printk(KERN_NOTICE, sdkp, "Starting disk\n");
++
+ 	if (opal_unlock_from_suspend(sdkp->opal_dev)) {
+ 		sd_printk(KERN_NOTICE, sdkp, "OPAL unlock failed\n");
+ 		return -EIO;
+@@ -4135,13 +4137,12 @@ static int sd_resume_common(struct device *dev, bool runtime)
+ 	if (!sdkp)	/* E.g.: runtime resume at the start of sd_probe() */
+ 		return 0;
+ 
+-	sd_printk(KERN_NOTICE, sdkp, "Starting disk\n");
+-
+ 	if (!sd_do_start_stop(sdkp->device, runtime)) {
+ 		sdkp->suspended = false;
+ 		return 0;
+ 	}
+ 
++	sd_printk(KERN_NOTICE, sdkp, "Starting disk\n");
+ 	ret = sd_start_stop_device(sdkp, 1);
+ 	if (!ret) {
+ 		sd_resume(dev);
+diff --git a/drivers/soc/qcom/icc-bwmon.c b/drivers/soc/qcom/icc-bwmon.c
+index ecddb60bd6650..e7851974084b6 100644
+--- a/drivers/soc/qcom/icc-bwmon.c
++++ b/drivers/soc/qcom/icc-bwmon.c
+@@ -783,9 +783,14 @@ static int bwmon_probe(struct platform_device *pdev)
+ 	bwmon->dev = dev;
+ 
+ 	bwmon_disable(bwmon);
+-	ret = devm_request_threaded_irq(dev, bwmon->irq, bwmon_intr,
+-					bwmon_intr_thread,
+-					IRQF_ONESHOT, dev_name(dev), bwmon);
++
++	/*
++	 * SoCs with multiple cpu-bwmon instances can end up using a shared interrupt
++	 * line. Using the devm_ variant might result in the IRQ handler being executed
++	 * after bwmon_disable in bwmon_remove()
++	 */
++	ret = request_threaded_irq(bwmon->irq, bwmon_intr, bwmon_intr_thread,
++				   IRQF_ONESHOT | IRQF_SHARED, dev_name(dev), bwmon);
+ 	if (ret)
+ 		return dev_err_probe(dev, ret, "failed to request IRQ\n");
+ 
+@@ -800,6 +805,7 @@ static void bwmon_remove(struct platform_device *pdev)
+ 	struct icc_bwmon *bwmon = platform_get_drvdata(pdev);
+ 
+ 	bwmon_disable(bwmon);
++	free_irq(bwmon->irq, bwmon);
+ }
+ 
+ static const struct icc_bwmon_data msm8998_bwmon_data = {
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index aa5ed254be46c..f2d7eedd324b7 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -296,7 +296,7 @@ static void fsl_lpspi_set_watermark(struct fsl_lpspi_data *fsl_lpspi)
+ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi)
+ {
+ 	struct lpspi_config config = fsl_lpspi->config;
+-	unsigned int perclk_rate, scldiv;
++	unsigned int perclk_rate, scldiv, div;
+ 	u8 prescale;
+ 
+ 	perclk_rate = clk_get_rate(fsl_lpspi->clk_per);
+@@ -313,8 +313,10 @@ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi)
+ 		return -EINVAL;
+ 	}
+ 
++	div = DIV_ROUND_UP(perclk_rate, config.speed_hz);
++
+ 	for (prescale = 0; prescale < 8; prescale++) {
+-		scldiv = perclk_rate / config.speed_hz / (1 << prescale) - 2;
++		scldiv = div / (1 << prescale) - 2;
+ 		if (scldiv < 256) {
+ 			fsl_lpspi->config.prescale = prescale;
+ 			break;
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index 05e6d007f9a7f..5304728c68c20 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -700,6 +700,7 @@ static const struct class spidev_class = {
+ };
+ 
+ static const struct spi_device_id spidev_spi_ids[] = {
++	{ .name = "bh2228fv" },
+ 	{ .name = "dh2228fv" },
+ 	{ .name = "ltc2488" },
+ 	{ .name = "sx1301" },
+diff --git a/drivers/spmi/spmi-pmic-arb.c b/drivers/spmi/spmi-pmic-arb.c
+index 791cdc160c515..f36f57bde0771 100644
+--- a/drivers/spmi/spmi-pmic-arb.c
++++ b/drivers/spmi/spmi-pmic-arb.c
+@@ -398,7 +398,7 @@ static int pmic_arb_fmt_read_cmd(struct spmi_pmic_arb_bus *bus, u8 opc, u8 sid,
+ 
+ 	*offset = rc;
+ 	if (bc >= PMIC_ARB_MAX_TRANS_BYTES) {
+-		dev_err(&bus->spmic->dev, "pmic-arb supports 1..%d bytes per trans, but:%zu requested",
++		dev_err(&bus->spmic->dev, "pmic-arb supports 1..%d bytes per trans, but:%zu requested\n",
+ 			PMIC_ARB_MAX_TRANS_BYTES, len);
+ 		return  -EINVAL;
+ 	}
+@@ -477,7 +477,7 @@ static int pmic_arb_fmt_write_cmd(struct spmi_pmic_arb_bus *bus, u8 opc,
+ 
+ 	*offset = rc;
+ 	if (bc >= PMIC_ARB_MAX_TRANS_BYTES) {
+-		dev_err(&bus->spmic->dev, "pmic-arb supports 1..%d bytes per trans, but:%zu requested",
++		dev_err(&bus->spmic->dev, "pmic-arb supports 1..%d bytes per trans, but:%zu requested\n",
+ 			PMIC_ARB_MAX_TRANS_BYTES, len);
+ 		return  -EINVAL;
+ 	}
+@@ -1702,7 +1702,7 @@ static int spmi_pmic_arb_bus_init(struct platform_device *pdev,
+ 
+ 	index = of_property_match_string(node, "reg-names", "cnfg");
+ 	if (index < 0) {
+-		dev_err(dev, "cnfg reg region missing");
++		dev_err(dev, "cnfg reg region missing\n");
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1712,7 +1712,7 @@ static int spmi_pmic_arb_bus_init(struct platform_device *pdev,
+ 
+ 	index = of_property_match_string(node, "reg-names", "intr");
+ 	if (index < 0) {
+-		dev_err(dev, "intr reg region missing");
++		dev_err(dev, "intr reg region missing\n");
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1737,8 +1737,7 @@ static int spmi_pmic_arb_bus_init(struct platform_device *pdev,
+ 
+ 	dev_dbg(&pdev->dev, "adding irq domain for bus %d\n", bus_index);
+ 
+-	bus->domain = irq_domain_add_tree(dev->of_node,
+-					  &pmic_arb_irq_domain_ops, bus);
++	bus->domain = irq_domain_add_tree(node, &pmic_arb_irq_domain_ops, bus);
+ 	if (!bus->domain) {
+ 		dev_err(&pdev->dev, "unable to create irq_domain\n");
+ 		return -ENOMEM;
+diff --git a/drivers/thermal/intel/intel_hfi.c b/drivers/thermal/intel/intel_hfi.c
+index a180a98bb9f15..5b18a46a10b06 100644
+--- a/drivers/thermal/intel/intel_hfi.c
++++ b/drivers/thermal/intel/intel_hfi.c
+@@ -401,10 +401,10 @@ static void hfi_disable(void)
+  * intel_hfi_online() - Enable HFI on @cpu
+  * @cpu:	CPU in which the HFI will be enabled
+  *
+- * Enable the HFI to be used in @cpu. The HFI is enabled at the die/package
+- * level. The first CPU in the die/package to come online does the full HFI
++ * Enable the HFI to be used in @cpu. The HFI is enabled at the package
++ * level. The first CPU in the package to come online does the full HFI
+  * initialization. Subsequent CPUs will just link themselves to the HFI
+- * instance of their die/package.
++ * instance of their package.
+  *
+  * This function is called before enabling the thermal vector in the local APIC
+  * in order to ensure that @cpu has an associated HFI instance when it receives
+@@ -414,31 +414,31 @@ void intel_hfi_online(unsigned int cpu)
+ {
+ 	struct hfi_instance *hfi_instance;
+ 	struct hfi_cpu_info *info;
+-	u16 die_id;
++	u16 pkg_id;
+ 
+ 	/* Nothing to do if hfi_instances are missing. */
+ 	if (!hfi_instances)
+ 		return;
+ 
+ 	/*
+-	 * Link @cpu to the HFI instance of its package/die. It does not
++	 * Link @cpu to the HFI instance of its package. It does not
+ 	 * matter whether the instance has been initialized.
+ 	 */
+ 	info = &per_cpu(hfi_cpu_info, cpu);
+-	die_id = topology_logical_die_id(cpu);
++	pkg_id = topology_logical_package_id(cpu);
+ 	hfi_instance = info->hfi_instance;
+ 	if (!hfi_instance) {
+-		if (die_id >= max_hfi_instances)
++		if (pkg_id >= max_hfi_instances)
+ 			return;
+ 
+-		hfi_instance = &hfi_instances[die_id];
++		hfi_instance = &hfi_instances[pkg_id];
+ 		info->hfi_instance = hfi_instance;
+ 	}
+ 
+ 	init_hfi_cpu_index(info);
+ 
+ 	/*
+-	 * Now check if the HFI instance of the package/die of @cpu has been
++	 * Now check if the HFI instance of the package of @cpu has been
+ 	 * initialized (by checking its header). In such case, all we have to
+ 	 * do is to add @cpu to this instance's cpumask and enable the instance
+ 	 * if needed.
+@@ -504,7 +504,7 @@ void intel_hfi_online(unsigned int cpu)
+  *
+  * On some processors, hardware remembers previous programming settings even
+  * after being reprogrammed. Thus, keep HFI enabled even if all CPUs in the
+- * die/package of @cpu are offline. See note in intel_hfi_online().
++ * package of @cpu are offline. See note in intel_hfi_online().
+  */
+ void intel_hfi_offline(unsigned int cpu)
+ {
+@@ -674,9 +674,13 @@ void __init intel_hfi_init(void)
+ 	if (hfi_parse_features())
+ 		return;
+ 
+-	/* There is one HFI instance per die/package. */
+-	max_hfi_instances = topology_max_packages() *
+-			    topology_max_dies_per_package();
++	/*
++	 * Note: HFI resources are managed at the physical package scope.
++	 * There could be platforms that enumerate packages as Linux dies.
++	 * Special handling would be needed if this happens on an HFI-capable
++	 * platform.
++	 */
++	max_hfi_instances = topology_max_packages();
+ 
+ 	/*
+ 	 * This allocation may fail. CPU hotplug callbacks must check
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index bf0065d1c8e9c..9547aa8f45205 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -326,6 +326,7 @@ struct sc16is7xx_one {
+ 	struct kthread_work		reg_work;
+ 	struct kthread_delayed_work	ms_work;
+ 	struct sc16is7xx_one_config	config;
++	unsigned char			buf[SC16IS7XX_FIFO_SIZE]; /* Rx buffer. */
+ 	unsigned int			old_mctrl;
+ 	u8				old_lcr; /* Value before EFR access. */
+ 	bool				irda_mode;
+@@ -339,7 +340,6 @@ struct sc16is7xx_port {
+ 	unsigned long			gpio_valid_mask;
+ #endif
+ 	u8				mctrl_mask;
+-	unsigned char			buf[SC16IS7XX_FIFO_SIZE];
+ 	struct kthread_worker		kworker;
+ 	struct task_struct		*kworker_task;
+ 	struct sc16is7xx_one		p[];
+@@ -591,6 +591,8 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
+ 			      SC16IS7XX_MCR_CLKSEL_BIT,
+ 			      prescaler == 1 ? 0 : SC16IS7XX_MCR_CLKSEL_BIT);
+ 
++	mutex_lock(&one->efr_lock);
++
+ 	/* Backup LCR and access special register set (DLL/DLH) */
+ 	lcr = sc16is7xx_port_read(port, SC16IS7XX_LCR_REG);
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG,
+@@ -605,24 +607,26 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
+ 	/* Restore LCR and access to general register set */
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, lcr);
+ 
++	mutex_unlock(&one->efr_lock);
++
+ 	return DIV_ROUND_CLOSEST((clk / prescaler) / 16, div);
+ }
+ 
+ static void sc16is7xx_handle_rx(struct uart_port *port, unsigned int rxlen,
+ 				unsigned int iir)
+ {
+-	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
++	struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+ 	unsigned int lsr = 0, bytes_read, i;
+ 	bool read_lsr = (iir == SC16IS7XX_IIR_RLSE_SRC) ? true : false;
+ 	u8 ch, flag;
+ 
+-	if (unlikely(rxlen >= sizeof(s->buf))) {
++	if (unlikely(rxlen >= sizeof(one->buf))) {
+ 		dev_warn_ratelimited(port->dev,
+ 				     "ttySC%i: Possible RX FIFO overrun: %d\n",
+ 				     port->line, rxlen);
+ 		port->icount.buf_overrun++;
+ 		/* Ensure sanity of RX level */
+-		rxlen = sizeof(s->buf);
++		rxlen = sizeof(one->buf);
+ 	}
+ 
+ 	while (rxlen) {
+@@ -635,10 +639,10 @@ static void sc16is7xx_handle_rx(struct uart_port *port, unsigned int rxlen,
+ 			lsr = 0;
+ 
+ 		if (read_lsr) {
+-			s->buf[0] = sc16is7xx_port_read(port, SC16IS7XX_RHR_REG);
++			one->buf[0] = sc16is7xx_port_read(port, SC16IS7XX_RHR_REG);
+ 			bytes_read = 1;
+ 		} else {
+-			sc16is7xx_fifo_read(port, s->buf, rxlen);
++			sc16is7xx_fifo_read(port, one->buf, rxlen);
+ 			bytes_read = rxlen;
+ 		}
+ 
+@@ -671,7 +675,7 @@ static void sc16is7xx_handle_rx(struct uart_port *port, unsigned int rxlen,
+ 		}
+ 
+ 		for (i = 0; i < bytes_read; ++i) {
+-			ch = s->buf[i];
++			ch = one->buf[i];
+ 			if (uart_handle_sysrq_char(port, ch))
+ 				continue;
+ 
+@@ -689,10 +693,10 @@ static void sc16is7xx_handle_rx(struct uart_port *port, unsigned int rxlen,
+ 
+ static void sc16is7xx_handle_tx(struct uart_port *port)
+ {
+-	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+ 	struct tty_port *tport = &port->state->port;
+ 	unsigned long flags;
+ 	unsigned int txlen;
++	unsigned char *tail;
+ 
+ 	if (unlikely(port->x_char)) {
+ 		sc16is7xx_port_write(port, SC16IS7XX_THR_REG, port->x_char);
+@@ -717,8 +721,9 @@ static void sc16is7xx_handle_tx(struct uart_port *port)
+ 		txlen = 0;
+ 	}
+ 
+-	txlen = uart_fifo_out(port, s->buf, txlen);
+-	sc16is7xx_fifo_write(port, s->buf, txlen);
++	txlen = kfifo_out_linear_ptr(&tport->xmit_fifo, &tail, txlen);
++	sc16is7xx_fifo_write(port, tail, txlen);
++	uart_xmit_advance(port, txlen);
+ 
+ 	uart_port_lock_irqsave(port, &flags);
+ 	if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS)
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 2a8006e3d6878..9967444eae10c 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -881,6 +881,14 @@ static int uart_set_info(struct tty_struct *tty, struct tty_port *port,
+ 	new_flags = (__force upf_t)new_info->flags;
+ 	old_custom_divisor = uport->custom_divisor;
+ 
++	if (!(uport->flags & UPF_FIXED_PORT)) {
++		unsigned int uartclk = new_info->baud_base * 16;
++		/* check needs to be done here before other settings made */
++		if (uartclk == 0) {
++			retval = -EINVAL;
++			goto exit;
++		}
++	}
+ 	if (!capable(CAP_SYS_ADMIN)) {
+ 		retval = -EPERM;
+ 		if (change_irq || change_port ||
+diff --git a/drivers/tty/vt/conmakehash.c b/drivers/tty/vt/conmakehash.c
+index dc2177fec7156..82d9db68b2ce8 100644
+--- a/drivers/tty/vt/conmakehash.c
++++ b/drivers/tty/vt/conmakehash.c
+@@ -11,6 +11,8 @@
+  * Copyright (C) 1995-1997 H. Peter Anvin
+  */
+ 
++#include <libgen.h>
++#include <linux/limits.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <sysexits.h>
+@@ -76,8 +78,8 @@ static void addpair(int fp, int un)
+ int main(int argc, char *argv[])
+ {
+   FILE *ctbl;
+-  const char *tblname, *rel_tblname;
+-  const char *abs_srctree;
++  const char *tblname;
++  char base_tblname[PATH_MAX];
+   char buffer[65536];
+   int fontlen;
+   int i, nuni, nent;
+@@ -102,16 +104,6 @@ int main(int argc, char *argv[])
+ 	}
+     }
+ 
+-  abs_srctree = getenv("abs_srctree");
+-  if (abs_srctree && !strncmp(abs_srctree, tblname, strlen(abs_srctree)))
+-    {
+-      rel_tblname = tblname + strlen(abs_srctree);
+-      while (*rel_tblname == '/')
+-	++rel_tblname;
+-    }
+-  else
+-    rel_tblname = tblname;
+-
+   /* For now we assume the default font is always 256 characters. */
+   fontlen = 256;
+ 
+@@ -253,6 +245,8 @@ int main(int argc, char *argv[])
+   for ( i = 0 ; i < fontlen ; i++ )
+     nuni += unicount[i];
+ 
++  strncpy(base_tblname, tblname, PATH_MAX);
++  base_tblname[PATH_MAX - 1] = 0;
+   printf("\
+ /*\n\
+  * Do not edit this file; it was automatically generated by\n\
+@@ -264,7 +258,7 @@ int main(int argc, char *argv[])
+ #include <linux/types.h>\n\
+ \n\
+ u8 dfont_unicount[%d] = \n\
+-{\n\t", rel_tblname, fontlen);
++{\n\t", basename(base_tblname), fontlen);
+ 
+   for ( i = 0 ; i < fontlen ; i++ )
+     {
+diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h
+index f42d99ce5bf1e..81d6f0cfb148b 100644
+--- a/drivers/ufs/core/ufshcd-priv.h
++++ b/drivers/ufs/core/ufshcd-priv.h
+@@ -329,6 +329,11 @@ static inline int ufshcd_rpm_get_sync(struct ufs_hba *hba)
+ 	return pm_runtime_get_sync(&hba->ufs_device_wlun->sdev_gendev);
+ }
+ 
++static inline int ufshcd_rpm_get_if_active(struct ufs_hba *hba)
++{
++	return pm_runtime_get_if_active(&hba->ufs_device_wlun->sdev_gendev);
++}
++
+ static inline int ufshcd_rpm_put_sync(struct ufs_hba *hba)
+ {
+ 	return pm_runtime_put_sync(&hba->ufs_device_wlun->sdev_gendev);
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 46433ecf0c4dc..5864d65448ce5 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -4086,11 +4086,16 @@ static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba)
+ 			min_sleep_time_us =
+ 				MIN_DELAY_BEFORE_DME_CMDS_US - delta;
+ 		else
+-			return; /* no more delay required */
++			min_sleep_time_us = 0; /* no more delay required */
+ 	}
+ 
+-	/* allow sleep for extra 50us if needed */
+-	usleep_range(min_sleep_time_us, min_sleep_time_us + 50);
++	if (min_sleep_time_us > 0) {
++		/* allow sleep for extra 50us if needed */
++		usleep_range(min_sleep_time_us, min_sleep_time_us + 50);
++	}
++
++	/* update the last_dme_cmd_tstamp */
++	hba->last_dme_cmd_tstamp = ktime_get();
+ }
+ 
+ /**
+@@ -8171,7 +8176,10 @@ static void ufshcd_update_rtc(struct ufs_hba *hba)
+ 	 */
+ 	val = ts64.tv_sec - hba->dev_info.rtc_time_baseline;
+ 
+-	ufshcd_rpm_get_sync(hba);
++	/* Skip update RTC if RPM state is not RPM_ACTIVE */
++	if (ufshcd_rpm_get_if_active(hba) <= 0)
++		return;
++
+ 	err = ufshcd_query_attr(hba, UPIU_QUERY_OPCODE_WRITE_ATTR, QUERY_ATTR_IDN_SECONDS_PASSED,
+ 				0, 0, &val);
+ 	ufshcd_rpm_put_sync(hba);
+@@ -10220,9 +10228,6 @@ int ufshcd_system_restore(struct device *dev)
+ 	 */
+ 	ufshcd_readl(hba, REG_UTP_TASK_REQ_LIST_BASE_H);
+ 
+-	/* Resuming from hibernate, assume that link was OFF */
+-	ufshcd_set_link_off(hba);
+-
+ 	return 0;
+ 
+ }
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 1f21459b1188c..a7bc715bcafa9 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -3731,10 +3731,10 @@ static int ffs_func_set_alt(struct usb_function *f,
+ 	struct ffs_data *ffs = func->ffs;
+ 	int ret = 0, intf;
+ 
+-	if (alt > MAX_ALT_SETTINGS)
+-		return -EINVAL;
+-
+ 	if (alt != (unsigned)-1) {
++		if (alt > MAX_ALT_SETTINGS)
++			return -EINVAL;
++
+ 		intf = ffs_func_revmap_intf(func, interface);
+ 		if (intf < 0)
+ 			return intf;
+diff --git a/drivers/usb/gadget/function/f_midi2.c b/drivers/usb/gadget/function/f_midi2.c
+index 0e38bb145e8f5..6908fdd4a83f3 100644
+--- a/drivers/usb/gadget/function/f_midi2.c
++++ b/drivers/usb/gadget/function/f_midi2.c
+@@ -642,12 +642,21 @@ static void process_ump_stream_msg(struct f_midi2_ep *ep, const u32 *data)
+ 		if (format)
+ 			return; // invalid
+ 		blk = (*data >> 8) & 0xff;
+-		if (blk >= ep->num_blks)
+-			return;
+-		if (*data & UMP_STREAM_MSG_REQUEST_FB_INFO)
+-			reply_ump_stream_fb_info(ep, blk);
+-		if (*data & UMP_STREAM_MSG_REQUEST_FB_NAME)
+-			reply_ump_stream_fb_name(ep, blk);
++		if (blk == 0xff) {
++			/* inquiry for all blocks */
++			for (blk = 0; blk < ep->num_blks; blk++) {
++				if (*data & UMP_STREAM_MSG_REQUEST_FB_INFO)
++					reply_ump_stream_fb_info(ep, blk);
++				if (*data & UMP_STREAM_MSG_REQUEST_FB_NAME)
++					reply_ump_stream_fb_name(ep, blk);
++			}
++		} else if (blk < ep->num_blks) {
++			/* only the specified block */
++			if (*data & UMP_STREAM_MSG_REQUEST_FB_INFO)
++				reply_ump_stream_fb_info(ep, blk);
++			if (*data & UMP_STREAM_MSG_REQUEST_FB_NAME)
++				reply_ump_stream_fb_name(ep, blk);
++		}
+ 		return;
+ 	}
+ }
+diff --git a/drivers/usb/gadget/function/u_audio.c b/drivers/usb/gadget/function/u_audio.c
+index 89af0feb75120..24299576972fe 100644
+--- a/drivers/usb/gadget/function/u_audio.c
++++ b/drivers/usb/gadget/function/u_audio.c
+@@ -592,16 +592,25 @@ int u_audio_start_capture(struct g_audio *audio_dev)
+ 	struct usb_ep *ep, *ep_fback;
+ 	struct uac_rtd_params *prm;
+ 	struct uac_params *params = &audio_dev->params;
+-	int req_len, i;
++	int req_len, i, ret;
+ 
+ 	prm = &uac->c_prm;
+ 	dev_dbg(dev, "start capture with rate %d\n", prm->srate);
+ 	ep = audio_dev->out_ep;
+-	config_ep_by_speed(gadget, &audio_dev->func, ep);
++	ret = config_ep_by_speed(gadget, &audio_dev->func, ep);
++	if (ret < 0) {
++		dev_err(dev, "config_ep_by_speed for out_ep failed (%d)\n", ret);
++		return ret;
++	}
++
+ 	req_len = ep->maxpacket;
+ 
+ 	prm->ep_enabled = true;
+-	usb_ep_enable(ep);
++	ret = usb_ep_enable(ep);
++	if (ret < 0) {
++		dev_err(dev, "usb_ep_enable failed for out_ep (%d)\n", ret);
++		return ret;
++	}
+ 
+ 	for (i = 0; i < params->req_number; i++) {
+ 		if (!prm->reqs[i]) {
+@@ -629,9 +638,18 @@ int u_audio_start_capture(struct g_audio *audio_dev)
+ 		return 0;
+ 
+ 	/* Setup feedback endpoint */
+-	config_ep_by_speed(gadget, &audio_dev->func, ep_fback);
++	ret = config_ep_by_speed(gadget, &audio_dev->func, ep_fback);
++	if (ret < 0) {
++		dev_err(dev, "config_ep_by_speed in_ep_fback failed (%d)\n", ret);
++		return ret; // TODO: Clean up out_ep
++	}
++
+ 	prm->fb_ep_enabled = true;
+-	usb_ep_enable(ep_fback);
++	ret = usb_ep_enable(ep_fback);
++	if (ret < 0) {
++		dev_err(dev, "usb_ep_enable failed for in_ep_fback (%d)\n", ret);
++		return ret; // TODO: Clean up out_ep
++	}
+ 	req_len = ep_fback->maxpacket;
+ 
+ 	req_fback = usb_ep_alloc_request(ep_fback, GFP_ATOMIC);
+@@ -687,13 +705,17 @@ int u_audio_start_playback(struct g_audio *audio_dev)
+ 	struct uac_params *params = &audio_dev->params;
+ 	unsigned int factor;
+ 	const struct usb_endpoint_descriptor *ep_desc;
+-	int req_len, i;
++	int req_len, i, ret;
+ 	unsigned int p_pktsize;
+ 
+ 	prm = &uac->p_prm;
+ 	dev_dbg(dev, "start playback with rate %d\n", prm->srate);
+ 	ep = audio_dev->in_ep;
+-	config_ep_by_speed(gadget, &audio_dev->func, ep);
++	ret = config_ep_by_speed(gadget, &audio_dev->func, ep);
++	if (ret < 0) {
++		dev_err(dev, "config_ep_by_speed for in_ep failed (%d)\n", ret);
++		return ret;
++	}
+ 
+ 	ep_desc = ep->desc;
+ 	/*
+@@ -720,7 +742,11 @@ int u_audio_start_playback(struct g_audio *audio_dev)
+ 	uac->p_residue_mil = 0;
+ 
+ 	prm->ep_enabled = true;
+-	usb_ep_enable(ep);
++	ret = usb_ep_enable(ep);
++	if (ret < 0) {
++		dev_err(dev, "usb_ep_enable failed for in_ep (%d)\n", ret);
++		return ret;
++	}
+ 
+ 	for (i = 0; i < params->req_number; i++) {
+ 		if (!prm->reqs[i]) {
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index a92eb6d909768..8962f96ae7294 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -1441,6 +1441,7 @@ void gserial_suspend(struct gserial *gser)
+ 	spin_lock(&port->port_lock);
+ 	spin_unlock(&serial_port_lock);
+ 	port->suspended = true;
++	port->start_delayed = true;
+ 	spin_unlock_irqrestore(&port->port_lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(gserial_suspend);
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 2dfae7a17b3f1..81f9140f36813 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -118,12 +118,10 @@ int usb_ep_enable(struct usb_ep *ep)
+ 		goto out;
+ 
+ 	/* UDC drivers can't handle endpoints with maxpacket size 0 */
+-	if (usb_endpoint_maxp(ep->desc) == 0) {
+-		/*
+-		 * We should log an error message here, but we can't call
+-		 * dev_err() because there's no way to find the gadget
+-		 * given only ep.
+-		 */
++	if (!ep->desc || usb_endpoint_maxp(ep->desc) == 0) {
++		WARN_ONCE(1, "%s: ep%d (%s) has %s\n", __func__, ep->address, ep->name,
++			  (!ep->desc) ? "NULL descriptor" : "maxpacket 0");
++
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+diff --git a/drivers/usb/serial/usb_debug.c b/drivers/usb/serial/usb_debug.c
+index 6934970f180d7..5a8869cd95d52 100644
+--- a/drivers/usb/serial/usb_debug.c
++++ b/drivers/usb/serial/usb_debug.c
+@@ -76,6 +76,11 @@ static void usb_debug_process_read_urb(struct urb *urb)
+ 	usb_serial_generic_process_read_urb(urb);
+ }
+ 
++static void usb_debug_init_termios(struct tty_struct *tty)
++{
++	tty->termios.c_lflag &= ~(ECHO | ECHONL);
++}
++
+ static struct usb_serial_driver debug_device = {
+ 	.driver = {
+ 		.owner =	THIS_MODULE,
+@@ -85,6 +90,7 @@ static struct usb_serial_driver debug_device = {
+ 	.num_ports =		1,
+ 	.bulk_out_size =	USB_DEBUG_MAX_PACKET_SIZE,
+ 	.break_ctl =		usb_debug_break_ctl,
++	.init_termios =		usb_debug_init_termios,
+ 	.process_read_urb =	usb_debug_process_read_urb,
+ };
+ 
+@@ -96,6 +102,7 @@ static struct usb_serial_driver dbc_device = {
+ 	.id_table =		dbc_id_table,
+ 	.num_ports =		1,
+ 	.break_ctl =		usb_debug_break_ctl,
++	.init_termios =		usb_debug_init_termios,
+ 	.process_read_urb =	usb_debug_process_read_urb,
+ };
+ 
+diff --git a/drivers/usb/typec/mux/fsa4480.c b/drivers/usb/typec/mux/fsa4480.c
+index cb7cdf90cb0aa..cd235339834b0 100644
+--- a/drivers/usb/typec/mux/fsa4480.c
++++ b/drivers/usb/typec/mux/fsa4480.c
+@@ -13,6 +13,10 @@
+ #include <linux/usb/typec_dp.h>
+ #include <linux/usb/typec_mux.h>
+ 
++#define FSA4480_DEVICE_ID	0x00
++ #define FSA4480_DEVICE_ID_VENDOR_ID	GENMASK(7, 6)
++ #define FSA4480_DEVICE_ID_VERSION_ID	GENMASK(5, 3)
++ #define FSA4480_DEVICE_ID_REV_ID	GENMASK(2, 0)
+ #define FSA4480_SWITCH_ENABLE	0x04
+ #define FSA4480_SWITCH_SELECT	0x05
+ #define FSA4480_SWITCH_STATUS1	0x07
+@@ -251,6 +255,7 @@ static int fsa4480_probe(struct i2c_client *client)
+ 	struct typec_switch_desc sw_desc = { };
+ 	struct typec_mux_desc mux_desc = { };
+ 	struct fsa4480 *fsa;
++	int val = 0;
+ 	int ret;
+ 
+ 	fsa = devm_kzalloc(dev, sizeof(*fsa), GFP_KERNEL);
+@@ -268,6 +273,15 @@ static int fsa4480_probe(struct i2c_client *client)
+ 	if (IS_ERR(fsa->regmap))
+ 		return dev_err_probe(dev, PTR_ERR(fsa->regmap), "failed to initialize regmap\n");
+ 
++	ret = regmap_read(fsa->regmap, FSA4480_DEVICE_ID, &val);
++	if (ret || !val)
++		return dev_err_probe(dev, -ENODEV, "FSA4480 not found\n");
++
++	dev_dbg(dev, "Found FSA4480 v%lu.%lu (Vendor ID = %lu)\n",
++		FIELD_GET(FSA4480_DEVICE_ID_VERSION_ID, val),
++		FIELD_GET(FSA4480_DEVICE_ID_REV_ID, val),
++		FIELD_GET(FSA4480_DEVICE_ID_VENDOR_ID, val));
++
+ 	/* Safe mode */
+ 	fsa->cur_enable = FSA4480_ENABLE_DEVICE | FSA4480_ENABLE_USB;
+ 	fsa->mode = TYPEC_STATE_SAFE;
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index 82650c11e4516..302a89aeb258a 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -745,6 +745,7 @@ static int vhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
+ 	 *
+ 	 */
+ 	if (usb_pipedevice(urb->pipe) == 0) {
++		struct usb_device *old;
+ 		__u8 type = usb_pipetype(urb->pipe);
+ 		struct usb_ctrlrequest *ctrlreq =
+ 			(struct usb_ctrlrequest *) urb->setup_packet;
+@@ -755,14 +756,15 @@ static int vhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
+ 			goto no_need_xmit;
+ 		}
+ 
++		old = vdev->udev;
+ 		switch (ctrlreq->bRequest) {
+ 		case USB_REQ_SET_ADDRESS:
+ 			/* set_address may come when a device is reset */
+ 			dev_info(dev, "SetAddress Request (%d) to port %d\n",
+ 				 ctrlreq->wValue, vdev->rhport);
+ 
+-			usb_put_dev(vdev->udev);
+ 			vdev->udev = usb_get_dev(urb->dev);
++			usb_put_dev(old);
+ 
+ 			spin_lock(&vdev->ud.lock);
+ 			vdev->ud.status = VDEV_ST_USED;
+@@ -781,8 +783,8 @@ static int vhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
+ 				usbip_dbg_vhci_hc(
+ 					"Not yet?:Get_Descriptor to device 0 (get max pipe size)\n");
+ 
+-			usb_put_dev(vdev->udev);
+ 			vdev->udev = usb_get_dev(urb->dev);
++			usb_put_dev(old);
+ 			goto out;
+ 
+ 		default:
+@@ -1067,6 +1069,7 @@ static void vhci_shutdown_connection(struct usbip_device *ud)
+ static void vhci_device_reset(struct usbip_device *ud)
+ {
+ 	struct vhci_device *vdev = container_of(ud, struct vhci_device, ud);
++	struct usb_device *old = vdev->udev;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&ud->lock, flags);
+@@ -1074,8 +1077,8 @@ static void vhci_device_reset(struct usbip_device *ud)
+ 	vdev->speed  = 0;
+ 	vdev->devid  = 0;
+ 
+-	usb_put_dev(vdev->udev);
+ 	vdev->udev = NULL;
++	usb_put_dev(old);
+ 
+ 	if (ud->tcp_socket) {
+ 		sockfd_put(ud->tcp_socket);
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index 63a53680a85cb..6b9c12acf4381 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -1483,13 +1483,7 @@ static vm_fault_t vhost_vdpa_fault(struct vm_fault *vmf)
+ 
+ 	notify = ops->get_vq_notification(vdpa, index);
+ 
+-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+-	if (remap_pfn_range(vma, vmf->address & PAGE_MASK,
+-			    PFN_DOWN(notify.addr), PAGE_SIZE,
+-			    vma->vm_page_prot))
+-		return VM_FAULT_SIGBUS;
+-
+-	return VM_FAULT_NOPAGE;
++	return vmf_insert_pfn(vma, vmf->address & PAGE_MASK, PFN_DOWN(notify.addr));
+ }
+ 
+ static const struct vm_operations_struct vhost_vdpa_vm_ops = {
+diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
+index 67dfa47788649..c9c620e32fa8b 100644
+--- a/drivers/xen/privcmd.c
++++ b/drivers/xen/privcmd.c
+@@ -845,7 +845,7 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
+ #ifdef CONFIG_XEN_PRIVCMD_EVENTFD
+ /* Irqfd support */
+ static struct workqueue_struct *irqfd_cleanup_wq;
+-static DEFINE_MUTEX(irqfds_lock);
++static DEFINE_SPINLOCK(irqfds_lock);
+ static LIST_HEAD(irqfds_list);
+ 
+ struct privcmd_kernel_irqfd {
+@@ -909,9 +909,11 @@ irqfd_wakeup(wait_queue_entry_t *wait, unsigned int mode, int sync, void *key)
+ 		irqfd_inject(kirqfd);
+ 
+ 	if (flags & EPOLLHUP) {
+-		mutex_lock(&irqfds_lock);
++		unsigned long flags;
++
++		spin_lock_irqsave(&irqfds_lock, flags);
+ 		irqfd_deactivate(kirqfd);
+-		mutex_unlock(&irqfds_lock);
++		spin_unlock_irqrestore(&irqfds_lock, flags);
+ 	}
+ 
+ 	return 0;
+@@ -929,6 +931,7 @@ irqfd_poll_func(struct file *file, wait_queue_head_t *wqh, poll_table *pt)
+ static int privcmd_irqfd_assign(struct privcmd_irqfd *irqfd)
+ {
+ 	struct privcmd_kernel_irqfd *kirqfd, *tmp;
++	unsigned long flags;
+ 	__poll_t events;
+ 	struct fd f;
+ 	void *dm_op;
+@@ -968,18 +971,18 @@ static int privcmd_irqfd_assign(struct privcmd_irqfd *irqfd)
+ 	init_waitqueue_func_entry(&kirqfd->wait, irqfd_wakeup);
+ 	init_poll_funcptr(&kirqfd->pt, irqfd_poll_func);
+ 
+-	mutex_lock(&irqfds_lock);
++	spin_lock_irqsave(&irqfds_lock, flags);
+ 
+ 	list_for_each_entry(tmp, &irqfds_list, list) {
+ 		if (kirqfd->eventfd == tmp->eventfd) {
+ 			ret = -EBUSY;
+-			mutex_unlock(&irqfds_lock);
++			spin_unlock_irqrestore(&irqfds_lock, flags);
+ 			goto error_eventfd;
+ 		}
+ 	}
+ 
+ 	list_add_tail(&kirqfd->list, &irqfds_list);
+-	mutex_unlock(&irqfds_lock);
++	spin_unlock_irqrestore(&irqfds_lock, flags);
+ 
+ 	/*
+ 	 * Check if there was an event already pending on the eventfd before we
+@@ -1011,12 +1014,13 @@ static int privcmd_irqfd_deassign(struct privcmd_irqfd *irqfd)
+ {
+ 	struct privcmd_kernel_irqfd *kirqfd;
+ 	struct eventfd_ctx *eventfd;
++	unsigned long flags;
+ 
+ 	eventfd = eventfd_ctx_fdget(irqfd->fd);
+ 	if (IS_ERR(eventfd))
+ 		return PTR_ERR(eventfd);
+ 
+-	mutex_lock(&irqfds_lock);
++	spin_lock_irqsave(&irqfds_lock, flags);
+ 
+ 	list_for_each_entry(kirqfd, &irqfds_list, list) {
+ 		if (kirqfd->eventfd == eventfd) {
+@@ -1025,7 +1029,7 @@ static int privcmd_irqfd_deassign(struct privcmd_irqfd *irqfd)
+ 		}
+ 	}
+ 
+-	mutex_unlock(&irqfds_lock);
++	spin_unlock_irqrestore(&irqfds_lock, flags);
+ 
+ 	eventfd_ctx_put(eventfd);
+ 
+@@ -1073,13 +1077,14 @@ static int privcmd_irqfd_init(void)
+ static void privcmd_irqfd_exit(void)
+ {
+ 	struct privcmd_kernel_irqfd *kirqfd, *tmp;
++	unsigned long flags;
+ 
+-	mutex_lock(&irqfds_lock);
++	spin_lock_irqsave(&irqfds_lock, flags);
+ 
+ 	list_for_each_entry_safe(kirqfd, tmp, &irqfds_list, list)
+ 		irqfd_deactivate(kirqfd);
+ 
+-	mutex_unlock(&irqfds_lock);
++	spin_unlock_irqrestore(&irqfds_lock, flags);
+ 
+ 	destroy_workqueue(irqfd_cleanup_wq);
+ }
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 1a49b92329908..8a791b648ac53 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -321,7 +321,7 @@ int btrfs_copy_root(struct btrfs_trans_handle *trans,
+ 	WARN_ON(test_bit(BTRFS_ROOT_SHAREABLE, &root->state) &&
+ 		trans->transid != fs_info->running_transaction->transid);
+ 	WARN_ON(test_bit(BTRFS_ROOT_SHAREABLE, &root->state) &&
+-		trans->transid != root->last_trans);
++		trans->transid != btrfs_get_root_last_trans(root));
+ 
+ 	level = btrfs_header_level(buf);
+ 	if (level == 0)
+@@ -551,7 +551,7 @@ int btrfs_force_cow_block(struct btrfs_trans_handle *trans,
+ 	WARN_ON(test_bit(BTRFS_ROOT_SHAREABLE, &root->state) &&
+ 		trans->transid != fs_info->running_transaction->transid);
+ 	WARN_ON(test_bit(BTRFS_ROOT_SHAREABLE, &root->state) &&
+-		trans->transid != root->last_trans);
++		trans->transid != btrfs_get_root_last_trans(root));
+ 
+ 	level = btrfs_header_level(buf);
+ 
+@@ -620,10 +620,16 @@ int btrfs_force_cow_block(struct btrfs_trans_handle *trans,
+ 		atomic_inc(&cow->refs);
+ 		rcu_assign_pointer(root->node, cow);
+ 
+-		btrfs_free_tree_block(trans, btrfs_root_id(root), buf,
+-				      parent_start, last_ref);
++		ret = btrfs_free_tree_block(trans, btrfs_root_id(root), buf,
++					    parent_start, last_ref);
+ 		free_extent_buffer(buf);
+ 		add_root_to_dirty_list(root);
++		if (ret < 0) {
++			btrfs_tree_unlock(cow);
++			free_extent_buffer(cow);
++			btrfs_abort_transaction(trans, ret);
++			return ret;
++		}
+ 	} else {
+ 		WARN_ON(trans->transid != btrfs_header_generation(parent));
+ 		ret = btrfs_tree_mod_log_insert_key(parent, parent_slot,
+@@ -648,8 +654,14 @@ int btrfs_force_cow_block(struct btrfs_trans_handle *trans,
+ 				return ret;
+ 			}
+ 		}
+-		btrfs_free_tree_block(trans, btrfs_root_id(root), buf,
+-				      parent_start, last_ref);
++		ret = btrfs_free_tree_block(trans, btrfs_root_id(root), buf,
++					    parent_start, last_ref);
++		if (ret < 0) {
++			btrfs_tree_unlock(cow);
++			free_extent_buffer(cow);
++			btrfs_abort_transaction(trans, ret);
++			return ret;
++		}
+ 	}
+ 	if (unlock_orig)
+ 		btrfs_tree_unlock(buf);
+@@ -983,9 +995,13 @@ static noinline int balance_level(struct btrfs_trans_handle *trans,
+ 		free_extent_buffer(mid);
+ 
+ 		root_sub_used_bytes(root);
+-		btrfs_free_tree_block(trans, btrfs_root_id(root), mid, 0, 1);
++		ret = btrfs_free_tree_block(trans, btrfs_root_id(root), mid, 0, 1);
+ 		/* once for the root ptr */
+ 		free_extent_buffer_stale(mid);
++		if (ret < 0) {
++			btrfs_abort_transaction(trans, ret);
++			goto out;
++		}
+ 		return 0;
+ 	}
+ 	if (btrfs_header_nritems(mid) >
+@@ -1053,10 +1069,14 @@ static noinline int balance_level(struct btrfs_trans_handle *trans,
+ 				goto out;
+ 			}
+ 			root_sub_used_bytes(root);
+-			btrfs_free_tree_block(trans, btrfs_root_id(root), right,
+-					      0, 1);
++			ret = btrfs_free_tree_block(trans, btrfs_root_id(root),
++						    right, 0, 1);
+ 			free_extent_buffer_stale(right);
+ 			right = NULL;
++			if (ret < 0) {
++				btrfs_abort_transaction(trans, ret);
++				goto out;
++			}
+ 		} else {
+ 			struct btrfs_disk_key right_key;
+ 			btrfs_node_key(right, &right_key, 0);
+@@ -1111,9 +1131,13 @@ static noinline int balance_level(struct btrfs_trans_handle *trans,
+ 			goto out;
+ 		}
+ 		root_sub_used_bytes(root);
+-		btrfs_free_tree_block(trans, btrfs_root_id(root), mid, 0, 1);
++		ret = btrfs_free_tree_block(trans, btrfs_root_id(root), mid, 0, 1);
+ 		free_extent_buffer_stale(mid);
+ 		mid = NULL;
++		if (ret < 0) {
++			btrfs_abort_transaction(trans, ret);
++			goto out;
++		}
+ 	} else {
+ 		/* update the parent key to reflect our changes */
+ 		struct btrfs_disk_key mid_key;
+@@ -2883,7 +2907,11 @@ static noinline int insert_new_root(struct btrfs_trans_handle *trans,
+ 	old = root->node;
+ 	ret = btrfs_tree_mod_log_insert_root(root->node, c, false);
+ 	if (ret < 0) {
+-		btrfs_free_tree_block(trans, btrfs_root_id(root), c, 0, 1);
++		int ret2;
++
++		ret2 = btrfs_free_tree_block(trans, btrfs_root_id(root), c, 0, 1);
++		if (ret2 < 0)
++			btrfs_abort_transaction(trans, ret2);
+ 		btrfs_tree_unlock(c);
+ 		free_extent_buffer(c);
+ 		return ret;
+@@ -4452,9 +4480,12 @@ static noinline int btrfs_del_leaf(struct btrfs_trans_handle *trans,
+ 	root_sub_used_bytes(root);
+ 
+ 	atomic_inc(&leaf->refs);
+-	btrfs_free_tree_block(trans, btrfs_root_id(root), leaf, 0, 1);
++	ret = btrfs_free_tree_block(trans, btrfs_root_id(root), leaf, 0, 1);
+ 	free_extent_buffer_stale(leaf);
+-	return 0;
++	if (ret < 0)
++		btrfs_abort_transaction(trans, ret);
++
++	return ret;
+ }
+ /*
+  * delete the item at the leaf level in path.  If that empties
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index c03c58246033b..a56209d275c15 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -354,6 +354,16 @@ static inline void btrfs_set_root_last_log_commit(struct btrfs_root *root, int c
+ 	WRITE_ONCE(root->last_log_commit, commit_id);
+ }
+ 
++static inline u64 btrfs_get_root_last_trans(const struct btrfs_root *root)
++{
++	return READ_ONCE(root->last_trans);
++}
++
++static inline void btrfs_set_root_last_trans(struct btrfs_root *root, u64 transid)
++{
++	WRITE_ONCE(root->last_trans, transid);
++}
++
+ /*
+  * Structure that conveys information about an extent that is going to replace
+  * all the extents in a file range.
+@@ -447,6 +457,7 @@ struct btrfs_file_private {
+ 	void *filldir_buf;
+ 	u64 last_index;
+ 	struct extent_state *llseek_cached_state;
++	bool fsync_skip_inode_lock;
+ };
+ 
+ static inline u32 BTRFS_LEAF_DATA_SIZE(const struct btrfs_fs_info *info)
+diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
+index 407ccec3e57ed..f664678c71d15 100644
+--- a/fs/btrfs/defrag.c
++++ b/fs/btrfs/defrag.c
+@@ -139,7 +139,7 @@ int btrfs_add_inode_defrag(struct btrfs_trans_handle *trans,
+ 	if (trans)
+ 		transid = trans->transid;
+ 	else
+-		transid = inode->root->last_trans;
++		transid = btrfs_get_root_last_trans(root);
+ 
+ 	defrag = kmem_cache_zalloc(btrfs_inode_defrag_cachep, GFP_NOFS);
+ 	if (!defrag)
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index cabb558dbdaa8..3791813dc7b62 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -658,7 +658,7 @@ static void __setup_root(struct btrfs_root *root, struct btrfs_fs_info *fs_info,
+ 	root->state = 0;
+ 	RB_CLEAR_NODE(&root->rb_node);
+ 
+-	root->last_trans = 0;
++	btrfs_set_root_last_trans(root, 0);
+ 	root->free_objectid = 0;
+ 	root->nr_delalloc_inodes = 0;
+ 	root->nr_ordered_extents = 0;
+@@ -1010,7 +1010,7 @@ int btrfs_add_log_tree(struct btrfs_trans_handle *trans,
+ 		return ret;
+ 	}
+ 
+-	log_root->last_trans = trans->transid;
++	btrfs_set_root_last_trans(log_root, trans->transid);
+ 	log_root->root_key.offset = btrfs_root_id(root);
+ 
+ 	inode_item = &log_root->root_item.inode;
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index b75e14f399a01..844b677d054ec 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -104,10 +104,7 @@ int btrfs_lookup_extent_info(struct btrfs_trans_handle *trans,
+ 	struct btrfs_delayed_ref_head *head;
+ 	struct btrfs_delayed_ref_root *delayed_refs;
+ 	struct btrfs_path *path;
+-	struct btrfs_extent_item *ei;
+-	struct extent_buffer *leaf;
+ 	struct btrfs_key key;
+-	u32 item_size;
+ 	u64 num_refs;
+ 	u64 extent_flags;
+ 	u64 owner = 0;
+@@ -157,16 +154,11 @@ int btrfs_lookup_extent_info(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	if (ret == 0) {
+-		leaf = path->nodes[0];
+-		item_size = btrfs_item_size(leaf, path->slots[0]);
+-		if (item_size >= sizeof(*ei)) {
+-			ei = btrfs_item_ptr(leaf, path->slots[0],
+-					    struct btrfs_extent_item);
+-			num_refs = btrfs_extent_refs(leaf, ei);
+-			extent_flags = btrfs_extent_flags(leaf, ei);
+-			owner = btrfs_get_extent_owner_root(fs_info, leaf,
+-							    path->slots[0]);
+-		} else {
++		struct extent_buffer *leaf = path->nodes[0];
++		struct btrfs_extent_item *ei;
++		const u32 item_size = btrfs_item_size(leaf, path->slots[0]);
++
++		if (unlikely(item_size < sizeof(*ei))) {
+ 			ret = -EUCLEAN;
+ 			btrfs_err(fs_info,
+ 			"unexpected extent item size, has %u expect >= %zu",
+@@ -179,6 +171,10 @@ int btrfs_lookup_extent_info(struct btrfs_trans_handle *trans,
+ 			goto out_free;
+ 		}
+ 
++		ei = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_extent_item);
++		num_refs = btrfs_extent_refs(leaf, ei);
++		extent_flags = btrfs_extent_flags(leaf, ei);
++		owner = btrfs_get_extent_owner_root(fs_info, leaf, path->slots[0]);
+ 		BUG_ON(num_refs == 0);
+ 	} else {
+ 		num_refs = 0;
+@@ -3420,10 +3416,10 @@ static noinline int check_ref_cleanup(struct btrfs_trans_handle *trans,
+ 	return 0;
+ }
+ 
+-void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
+-			   u64 root_id,
+-			   struct extent_buffer *buf,
+-			   u64 parent, int last_ref)
++int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
++			  u64 root_id,
++			  struct extent_buffer *buf,
++			  u64 parent, int last_ref)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+ 	struct btrfs_block_group *bg;
+@@ -3450,11 +3446,12 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
+ 		btrfs_init_tree_ref(&generic_ref, btrfs_header_level(buf), 0, false);
+ 		btrfs_ref_tree_mod(fs_info, &generic_ref);
+ 		ret = btrfs_add_delayed_tree_ref(trans, &generic_ref, NULL);
+-		BUG_ON(ret); /* -ENOMEM */
++		if (ret < 0)
++			return ret;
+ 	}
+ 
+ 	if (!last_ref)
+-		return;
++		return 0;
+ 
+ 	if (btrfs_header_generation(buf) != trans->transid)
+ 		goto out;
+@@ -3511,6 +3508,7 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
+ 	 * matter anymore.
+ 	 */
+ 	clear_bit(EXTENT_BUFFER_CORRUPT, &buf->bflags);
++	return 0;
+ }
+ 
+ /* Can return -ENOMEM */
+@@ -5644,7 +5642,7 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans,
+ 				 struct walk_control *wc)
+ {
+ 	struct btrfs_fs_info *fs_info = root->fs_info;
+-	int ret;
++	int ret = 0;
+ 	int level = wc->level;
+ 	struct extent_buffer *eb = path->nodes[level];
+ 	u64 parent = 0;
+@@ -5731,12 +5729,14 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans,
+ 			goto owner_mismatch;
+ 	}
+ 
+-	btrfs_free_tree_block(trans, btrfs_root_id(root), eb, parent,
+-			      wc->refs[level] == 1);
++	ret = btrfs_free_tree_block(trans, btrfs_root_id(root), eb, parent,
++				    wc->refs[level] == 1);
++	if (ret < 0)
++		btrfs_abort_transaction(trans, ret);
+ out:
+ 	wc->refs[level] = 0;
+ 	wc->flags[level] = 0;
+-	return 0;
++	return ret;
+ 
+ owner_mismatch:
+ 	btrfs_err_rl(fs_info, "unexpected tree owner, have %llu expect %llu",
+diff --git a/fs/btrfs/extent-tree.h b/fs/btrfs/extent-tree.h
+index af9f8800d5aca..2ad51130c037e 100644
+--- a/fs/btrfs/extent-tree.h
++++ b/fs/btrfs/extent-tree.h
+@@ -127,10 +127,10 @@ struct extent_buffer *btrfs_alloc_tree_block(struct btrfs_trans_handle *trans,
+ 					     u64 empty_size,
+ 					     u64 reloc_src_root,
+ 					     enum btrfs_lock_nesting nest);
+-void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
+-			   u64 root_id,
+-			   struct extent_buffer *buf,
+-			   u64 parent, int last_ref);
++int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
++			  u64 root_id,
++			  struct extent_buffer *buf,
++			  u64 parent, int last_ref);
+ int btrfs_alloc_reserved_file_extent(struct btrfs_trans_handle *trans,
+ 				     struct btrfs_root *root, u64 owner,
+ 				     u64 offset, u64 ram_bytes,
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 958155cc43a81..0486b1f911248 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2246,10 +2246,8 @@ void extent_write_locked_range(struct inode *inode, struct page *locked_page,
+ 
+ 		page = find_get_page(mapping, cur >> PAGE_SHIFT);
+ 		ASSERT(PageLocked(page));
+-		if (pages_dirty && page != locked_page) {
++		if (pages_dirty && page != locked_page)
+ 			ASSERT(PageDirty(page));
+-			clear_page_dirty_for_io(page);
+-		}
+ 
+ 		ret = __extent_writepage_io(BTRFS_I(inode), page, &bio_ctrl,
+ 					    i_size, &nr);
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index d90138683a0a3..ca434f0cd27f0 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1550,21 +1550,37 @@ static ssize_t btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
+ 	 * So here we disable page faults in the iov_iter and then retry if we
+ 	 * got -EFAULT, faulting in the pages before the retry.
+ 	 */
++again:
+ 	from->nofault = true;
+ 	dio = btrfs_dio_write(iocb, from, written);
+ 	from->nofault = false;
+ 
+-	/*
+-	 * iomap_dio_complete() will call btrfs_sync_file() if we have a dsync
+-	 * iocb, and that needs to lock the inode. So unlock it before calling
+-	 * iomap_dio_complete() to avoid a deadlock.
+-	 */
+-	btrfs_inode_unlock(BTRFS_I(inode), ilock_flags);
+-
+-	if (IS_ERR_OR_NULL(dio))
++	if (IS_ERR_OR_NULL(dio)) {
+ 		ret = PTR_ERR_OR_ZERO(dio);
+-	else
++	} else {
++		struct btrfs_file_private stack_private = { 0 };
++		struct btrfs_file_private *private;
++		const bool have_private = (file->private_data != NULL);
++
++		if (!have_private)
++			file->private_data = &stack_private;
++
++		/*
++		 * If we have a synchronous write, we must make sure the fsync
++		 * triggered by the iomap_dio_complete() call below doesn't
++		 * deadlock on the inode lock - we are already holding it and we
++		 * can't call it after unlocking because we may need to complete
++		 * partial writes due to the input buffer (or parts of it) not
++		 * being already faulted in.
++		 */
++		private = file->private_data;
++		private->fsync_skip_inode_lock = true;
+ 		ret = iomap_dio_complete(dio);
++		private->fsync_skip_inode_lock = false;
++
++		if (!have_private)
++			file->private_data = NULL;
++	}
+ 
+ 	/* No increment (+=) because iomap returns a cumulative value. */
+ 	if (ret > 0)
+@@ -1591,10 +1607,12 @@ static ssize_t btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
+ 		} else {
+ 			fault_in_iov_iter_readable(from, left);
+ 			prev_left = left;
+-			goto relock;
++			goto again;
+ 		}
+ 	}
+ 
++	btrfs_inode_unlock(BTRFS_I(inode), ilock_flags);
++
+ 	/*
+ 	 * If 'ret' is -ENOTBLK or we have not written all data, then it means
+ 	 * we must fallback to buffered IO.
+@@ -1793,6 +1811,7 @@ static inline bool skip_inode_logging(const struct btrfs_log_ctx *ctx)
+  */
+ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ {
++	struct btrfs_file_private *private = file->private_data;
+ 	struct dentry *dentry = file_dentry(file);
+ 	struct inode *inode = d_inode(dentry);
+ 	struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
+@@ -1802,6 +1821,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	int ret = 0, err;
+ 	u64 len;
+ 	bool full_sync;
++	const bool skip_ilock = (private ? private->fsync_skip_inode_lock : false);
+ 
+ 	trace_btrfs_sync_file(file, datasync);
+ 
+@@ -1829,7 +1849,10 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	if (ret)
+ 		goto out;
+ 
+-	btrfs_inode_lock(BTRFS_I(inode), BTRFS_ILOCK_MMAP);
++	if (skip_ilock)
++		down_write(&BTRFS_I(inode)->i_mmap_lock);
++	else
++		btrfs_inode_lock(BTRFS_I(inode), BTRFS_ILOCK_MMAP);
+ 
+ 	atomic_inc(&root->log_batch);
+ 
+@@ -1853,7 +1876,10 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	 */
+ 	ret = start_ordered_ops(inode, start, end);
+ 	if (ret) {
+-		btrfs_inode_unlock(BTRFS_I(inode), BTRFS_ILOCK_MMAP);
++		if (skip_ilock)
++			up_write(&BTRFS_I(inode)->i_mmap_lock);
++		else
++			btrfs_inode_unlock(BTRFS_I(inode), BTRFS_ILOCK_MMAP);
+ 		goto out;
+ 	}
+ 
+@@ -1982,7 +2008,10 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	 * file again, but that will end up using the synchronization
+ 	 * inside btrfs_sync_log to keep things safe.
+ 	 */
+-	btrfs_inode_unlock(BTRFS_I(inode), BTRFS_ILOCK_MMAP);
++	if (skip_ilock)
++		up_write(&BTRFS_I(inode)->i_mmap_lock);
++	else
++		btrfs_inode_unlock(BTRFS_I(inode), BTRFS_ILOCK_MMAP);
+ 
+ 	if (ret == BTRFS_NO_LOG_SYNC) {
+ 		ret = btrfs_end_transaction(trans);
+@@ -2051,7 +2080,10 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 
+ out_release_extents:
+ 	btrfs_release_log_ctx_extents(&ctx);
+-	btrfs_inode_unlock(BTRFS_I(inode), BTRFS_ILOCK_MMAP);
++	if (skip_ilock)
++		up_write(&BTRFS_I(inode)->i_mmap_lock);
++	else
++		btrfs_inode_unlock(BTRFS_I(inode), BTRFS_ILOCK_MMAP);
+ 	goto out;
+ }
+ 
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index d674f2106593a..62c3dea9572ab 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -858,6 +858,7 @@ static int __load_free_space_cache(struct btrfs_root *root, struct inode *inode,
+ 				spin_unlock(&ctl->tree_lock);
+ 				btrfs_err(fs_info,
+ 					"Duplicate entries in free space cache, dumping");
++				kmem_cache_free(btrfs_free_space_bitmap_cachep, e->bitmap);
+ 				kmem_cache_free(btrfs_free_space_cachep, e);
+ 				goto free_cache;
+ 			}
+diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c
+index 90f2938bd743d..7ba50e133921a 100644
+--- a/fs/btrfs/free-space-tree.c
++++ b/fs/btrfs/free-space-tree.c
+@@ -1300,10 +1300,14 @@ int btrfs_delete_free_space_tree(struct btrfs_fs_info *fs_info)
+ 	btrfs_tree_lock(free_space_root->node);
+ 	btrfs_clear_buffer_dirty(trans, free_space_root->node);
+ 	btrfs_tree_unlock(free_space_root->node);
+-	btrfs_free_tree_block(trans, btrfs_root_id(free_space_root),
+-			      free_space_root->node, 0, 1);
+-
++	ret = btrfs_free_tree_block(trans, btrfs_root_id(free_space_root),
++				    free_space_root->node, 0, 1);
+ 	btrfs_put_root(free_space_root);
++	if (ret < 0) {
++		btrfs_abort_transaction(trans, ret);
++		btrfs_end_transaction(trans);
++		return ret;
++	}
+ 
+ 	return btrfs_commit_transaction(trans);
+ }
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index efd5d6e9589e0..c1b0556e40368 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -719,6 +719,8 @@ static noinline int create_subvol(struct mnt_idmap *idmap,
+ 	ret = btrfs_insert_root(trans, fs_info->tree_root, &key,
+ 				root_item);
+ 	if (ret) {
++		int ret2;
++
+ 		/*
+ 		 * Since we don't abort the transaction in this case, free the
+ 		 * tree block so that we don't leak space and leave the
+@@ -729,7 +731,9 @@ static noinline int create_subvol(struct mnt_idmap *idmap,
+ 		btrfs_tree_lock(leaf);
+ 		btrfs_clear_buffer_dirty(trans, leaf);
+ 		btrfs_tree_unlock(leaf);
+-		btrfs_free_tree_block(trans, objectid, leaf, 0, 1);
++		ret2 = btrfs_free_tree_block(trans, objectid, leaf, 0, 1);
++		if (ret2 < 0)
++			btrfs_abort_transaction(trans, ret2);
+ 		free_extent_buffer(leaf);
+ 		goto out;
+ 	}
+diff --git a/fs/btrfs/print-tree.c b/fs/btrfs/print-tree.c
+index 7e46aa8a04446..b876156fcd035 100644
+--- a/fs/btrfs/print-tree.c
++++ b/fs/btrfs/print-tree.c
+@@ -14,7 +14,7 @@
+ 
+ struct root_name_map {
+ 	u64 id;
+-	char name[16];
++	const char *name;
+ };
+ 
+ static const struct root_name_map root_map[] = {
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 39a15cca58ca9..29d6ca3b874ec 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1446,9 +1446,11 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 	btrfs_tree_lock(quota_root->node);
+ 	btrfs_clear_buffer_dirty(trans, quota_root->node);
+ 	btrfs_tree_unlock(quota_root->node);
+-	btrfs_free_tree_block(trans, btrfs_root_id(quota_root),
+-			      quota_root->node, 0, 1);
++	ret = btrfs_free_tree_block(trans, btrfs_root_id(quota_root),
++				    quota_root->node, 0, 1);
+ 
++	if (ret < 0)
++		btrfs_abort_transaction(trans, ret);
+ 
+ out:
+ 	btrfs_put_root(quota_root);
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 8b24bb5a0aa18..f2935252b981a 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -817,7 +817,7 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
+ 		goto abort;
+ 	}
+ 	set_bit(BTRFS_ROOT_SHAREABLE, &reloc_root->state);
+-	reloc_root->last_trans = trans->transid;
++	btrfs_set_root_last_trans(reloc_root, trans->transid);
+ 	return reloc_root;
+ fail:
+ 	kfree(root_item);
+@@ -864,7 +864,7 @@ int btrfs_init_reloc_root(struct btrfs_trans_handle *trans,
+ 	 */
+ 	if (root->reloc_root) {
+ 		reloc_root = root->reloc_root;
+-		reloc_root->last_trans = trans->transid;
++		btrfs_set_root_last_trans(reloc_root, trans->transid);
+ 		return 0;
+ 	}
+ 
+@@ -1739,7 +1739,7 @@ static noinline_for_stack int merge_reloc_root(struct reloc_control *rc,
+ 		 * btrfs_update_reloc_root() and update our root item
+ 		 * appropriately.
+ 		 */
+-		reloc_root->last_trans = trans->transid;
++		btrfs_set_root_last_trans(reloc_root, trans->transid);
+ 		trans->block_rsv = rc->block_rsv;
+ 
+ 		replaced = 0;
+@@ -2082,7 +2082,7 @@ static int record_reloc_root_in_trans(struct btrfs_trans_handle *trans,
+ 	struct btrfs_root *root;
+ 	int ret;
+ 
+-	if (reloc_root->last_trans == trans->transid)
++	if (btrfs_get_root_last_trans(reloc_root) == trans->transid)
+ 		return 0;
+ 
+ 	root = btrfs_get_fs_root(fs_info, reloc_root->root_key.offset, false);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 3388c836b9a56..76117bb2c726c 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -405,7 +405,7 @@ static int record_root_in_trans(struct btrfs_trans_handle *trans,
+ 	int ret = 0;
+ 
+ 	if ((test_bit(BTRFS_ROOT_SHAREABLE, &root->state) &&
+-	    root->last_trans < trans->transid) || force) {
++	    btrfs_get_root_last_trans(root) < trans->transid) || force) {
+ 		WARN_ON(!force && root->commit_root != root->node);
+ 
+ 		/*
+@@ -421,7 +421,7 @@ static int record_root_in_trans(struct btrfs_trans_handle *trans,
+ 		smp_wmb();
+ 
+ 		spin_lock(&fs_info->fs_roots_radix_lock);
+-		if (root->last_trans == trans->transid && !force) {
++		if (btrfs_get_root_last_trans(root) == trans->transid && !force) {
+ 			spin_unlock(&fs_info->fs_roots_radix_lock);
+ 			return 0;
+ 		}
+@@ -429,7 +429,7 @@ static int record_root_in_trans(struct btrfs_trans_handle *trans,
+ 				   (unsigned long)btrfs_root_id(root),
+ 				   BTRFS_ROOT_TRANS_TAG);
+ 		spin_unlock(&fs_info->fs_roots_radix_lock);
+-		root->last_trans = trans->transid;
++		btrfs_set_root_last_trans(root, trans->transid);
+ 
+ 		/* this is pretty tricky.  We don't want to
+ 		 * take the relocation lock in btrfs_record_root_in_trans
+@@ -491,7 +491,7 @@ int btrfs_record_root_in_trans(struct btrfs_trans_handle *trans,
+ 	 * and barriers
+ 	 */
+ 	smp_rmb();
+-	if (root->last_trans == trans->transid &&
++	if (btrfs_get_root_last_trans(root) == trans->transid &&
+ 	    !test_bit(BTRFS_ROOT_IN_TRANS_SETUP, &root->state))
+ 		return 0;
+ 
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 8c19e705b9c33..645f0387dfe1d 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -2187,6 +2187,8 @@ static void __block_commit_write(struct folio *folio, size_t from, size_t to)
+ 	struct buffer_head *bh, *head;
+ 
+ 	bh = head = folio_buffers(folio);
++	if (!bh)
++		return;
+ 	blocksize = bh->b_size;
+ 
+ 	block_start = 0;
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index d5bd1e3a5d36c..e7a09a99837b9 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1410,7 +1410,11 @@ int ext4_inlinedir_to_tree(struct file *dir_file,
+ 			hinfo->hash = EXT4_DIRENT_HASH(de);
+ 			hinfo->minor_hash = EXT4_DIRENT_MINOR_HASH(de);
+ 		} else {
+-			ext4fs_dirhash(dir, de->name, de->name_len, hinfo);
++			err = ext4fs_dirhash(dir, de->name, de->name_len, hinfo);
++			if (err) {
++				ret = err;
++				goto out;
++			}
+ 		}
+ 		if ((hinfo->hash < start_hash) ||
+ 		    ((hinfo->hash == start_hash) &&
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 4b0d64a76e88e..238e196338234 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -2973,6 +2973,11 @@ static int ext4_da_do_write_end(struct address_space *mapping,
+ 	bool disksize_changed = false;
+ 	loff_t new_i_size;
+ 
++	if (unlikely(!folio_buffers(folio))) {
++		folio_unlock(folio);
++		folio_put(folio);
++		return -EIO;
++	}
+ 	/*
+ 	 * block_write_end() will mark the inode as dirty with I_DIRTY_PAGES
+ 	 * flag, which all that's needed to trigger page writeback.
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index ae5b544ed0cc0..c8d9d85e0e871 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -399,6 +399,7 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction,
+ 		tmp = jbd2_alloc(bh_in->b_size, GFP_NOFS);
+ 		if (!tmp) {
+ 			brelse(new_bh);
++			free_buffer_head(new_bh);
+ 			return -ENOMEM;
+ 		}
+ 		spin_lock(&jh_in->b_state_lock);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index c848ebe5d08f1..0f9b4f7b56cd8 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -2053,8 +2053,7 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info)
+ 			continue;
+ 		}
+ 
+-		ret = svc_xprt_create_from_sa(serv, xcl_name, net, sa,
+-					      SVC_SOCK_ANONYMOUS,
++		ret = svc_xprt_create_from_sa(serv, xcl_name, net, sa, 0,
+ 					      get_current_cred());
+ 		/* always save the latest error */
+ 		if (ret < 0)
+diff --git a/fs/smb/client/cifs_debug.c b/fs/smb/client/cifs_debug.c
+index c71ae5c043060..4a20e92474b23 100644
+--- a/fs/smb/client/cifs_debug.c
++++ b/fs/smb/client/cifs_debug.c
+@@ -1072,7 +1072,7 @@ static int cifs_security_flags_proc_open(struct inode *inode, struct file *file)
+ static void
+ cifs_security_flags_handle_must_flags(unsigned int *flags)
+ {
+-	unsigned int signflags = *flags & CIFSSEC_MUST_SIGN;
++	unsigned int signflags = *flags & (CIFSSEC_MUST_SIGN | CIFSSEC_MUST_SEAL);
+ 
+ 	if ((*flags & CIFSSEC_MUST_KRB5) == CIFSSEC_MUST_KRB5)
+ 		*flags = CIFSSEC_MUST_KRB5;
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index a865941724c02..d4bcc7da700c6 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -1901,7 +1901,7 @@ static inline bool is_replayable_error(int error)
+ #define   CIFSSEC_MAY_SIGN	0x00001
+ #define   CIFSSEC_MAY_NTLMV2	0x00004
+ #define   CIFSSEC_MAY_KRB5	0x00008
+-#define   CIFSSEC_MAY_SEAL	0x00040 /* not supported yet */
++#define   CIFSSEC_MAY_SEAL	0x00040
+ #define   CIFSSEC_MAY_NTLMSSP	0x00080 /* raw ntlmssp with ntlmv2 */
+ 
+ #define   CIFSSEC_MUST_SIGN	0x01001
+@@ -1911,11 +1911,11 @@ require use of the stronger protocol */
+ #define   CIFSSEC_MUST_NTLMV2	0x04004
+ #define   CIFSSEC_MUST_KRB5	0x08008
+ #ifdef CONFIG_CIFS_UPCALL
+-#define   CIFSSEC_MASK          0x8F08F /* flags supported if no weak allowed */
++#define   CIFSSEC_MASK          0xCF0CF /* flags supported if no weak allowed */
+ #else
+-#define	  CIFSSEC_MASK          0x87087 /* flags supported if no weak allowed */
++#define	  CIFSSEC_MASK          0xC70C7 /* flags supported if no weak allowed */
+ #endif /* UPCALL */
+-#define   CIFSSEC_MUST_SEAL	0x40040 /* not supported yet */
++#define   CIFSSEC_MUST_SEAL	0x40040
+ #define   CIFSSEC_MUST_NTLMSSP	0x80080 /* raw ntlmssp with ntlmv2 */
+ 
+ #define   CIFSSEC_DEF (CIFSSEC_MAY_SIGN | CIFSSEC_MAY_NTLMV2 | CIFSSEC_MAY_NTLMSSP | CIFSSEC_MAY_SEAL)
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index 4a8aa1de95223..dd0afa23734c8 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1042,13 +1042,26 @@ static int reparse_info_to_fattr(struct cifs_open_info_data *data,
+ 	}
+ 
+ 	rc = -EOPNOTSUPP;
+-	switch ((data->reparse.tag = tag)) {
+-	case 0: /* SMB1 symlink */
++	data->reparse.tag = tag;
++	if (!data->reparse.tag) {
+ 		if (server->ops->query_symlink) {
+ 			rc = server->ops->query_symlink(xid, tcon,
+ 							cifs_sb, full_path,
+ 							&data->symlink_target);
+ 		}
++		if (rc == -EOPNOTSUPP)
++			data->reparse.tag = IO_REPARSE_TAG_INTERNAL;
++	}
++
++	switch (data->reparse.tag) {
++	case 0: /* SMB1 symlink */
++		break;
++	case IO_REPARSE_TAG_INTERNAL:
++		rc = 0;
++		if (le32_to_cpu(data->fi.Attributes) & ATTR_DIRECTORY) {
++			cifs_create_junction_fattr(fattr, sb);
++			goto out;
++		}
+ 		break;
+ 	case IO_REPARSE_TAG_MOUNT_POINT:
+ 		cifs_create_junction_fattr(fattr, sb);
+diff --git a/fs/smb/client/misc.c b/fs/smb/client/misc.c
+index 07c468ddb88a8..65d4b72b4d51a 100644
+--- a/fs/smb/client/misc.c
++++ b/fs/smb/client/misc.c
+@@ -1288,6 +1288,7 @@ int cifs_inval_name_dfs_link_error(const unsigned int xid,
+ 				   const char *full_path,
+ 				   bool *islink)
+ {
++	struct TCP_Server_Info *server = tcon->ses->server;
+ 	struct cifs_ses *ses = tcon->ses;
+ 	size_t len;
+ 	char *path;
+@@ -1304,12 +1305,12 @@ int cifs_inval_name_dfs_link_error(const unsigned int xid,
+ 	    !is_tcon_dfs(tcon))
+ 		return 0;
+ 
+-	spin_lock(&tcon->tc_lock);
+-	if (!tcon->origin_fullpath) {
+-		spin_unlock(&tcon->tc_lock);
++	spin_lock(&server->srv_lock);
++	if (!server->leaf_fullpath) {
++		spin_unlock(&server->srv_lock);
+ 		return 0;
+ 	}
+-	spin_unlock(&tcon->tc_lock);
++	spin_unlock(&server->srv_lock);
+ 
+ 	/*
+ 	 * Slow path - tcon is DFS and @full_path has prefix path, so attempt
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index a0ffbda907331..689d8a506d459 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -505,6 +505,10 @@ bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb,
+ 	}
+ 
+ 	switch (tag) {
++	case IO_REPARSE_TAG_INTERNAL:
++		if (!(fattr->cf_cifsattrs & ATTR_DIRECTORY))
++			return false;
++		fallthrough;
+ 	case IO_REPARSE_TAG_DFS:
+ 	case IO_REPARSE_TAG_DFSR:
+ 	case IO_REPARSE_TAG_MOUNT_POINT:
+diff --git a/fs/smb/client/reparse.h b/fs/smb/client/reparse.h
+index 6b55d1df9e2f8..2c0644bc4e65a 100644
+--- a/fs/smb/client/reparse.h
++++ b/fs/smb/client/reparse.h
+@@ -12,6 +12,12 @@
+ #include "fs_context.h"
+ #include "cifsglob.h"
+ 
++/*
++ * Used only by cifs.ko to ignore reparse points from files when client or
++ * server doesn't support FSCTL_GET_REPARSE_POINT.
++ */
++#define IO_REPARSE_TAG_INTERNAL ((__u32)~0U)
++
+ static inline dev_t reparse_nfs_mkdev(struct reparse_posix_data *buf)
+ {
+ 	u64 v = le64_to_cpu(*(__le64 *)buf->DataBuffer);
+@@ -78,10 +84,19 @@ static inline u32 reparse_mode_wsl_tag(mode_t mode)
+ static inline bool reparse_inode_match(struct inode *inode,
+ 				       struct cifs_fattr *fattr)
+ {
++	struct cifsInodeInfo *cinode = CIFS_I(inode);
+ 	struct timespec64 ctime = inode_get_ctime(inode);
+ 
+-	return (CIFS_I(inode)->cifsAttrs & ATTR_REPARSE) &&
+-		CIFS_I(inode)->reparse_tag == fattr->cf_cifstag &&
++	/*
++	 * Do not match reparse tags when client or server doesn't support
++	 * FSCTL_GET_REPARSE_POINT.  @fattr->cf_cifstag should contain correct
++	 * reparse tag from query dir response but the client won't be able to
++	 * read the reparse point data anyway.  This spares us a revalidation.
++	 */
++	if (cinode->reparse_tag != IO_REPARSE_TAG_INTERNAL &&
++	    cinode->reparse_tag != fattr->cf_cifstag)
++		return false;
++	return (cinode->cifsAttrs & ATTR_REPARSE) &&
+ 		timespec64_equal(&ctime, &fattr->cf_ctime);
+ }
+ 
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 5c02a12251c84..062b86a4936fd 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -930,6 +930,8 @@ int smb2_query_path_info(const unsigned int xid,
+ 
+ 	switch (rc) {
+ 	case 0:
++		rc = parse_create_response(data, cifs_sb, &out_iov[0]);
++		break;
+ 	case -EOPNOTSUPP:
+ 		/*
+ 		 * BB TODO: When support for special files added to Samba
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index bb84a89e59059..896147ba6660e 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -82,6 +82,9 @@ int smb3_encryption_required(const struct cifs_tcon *tcon)
+ 	if (tcon->seal &&
+ 	    (tcon->ses->server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
+ 		return 1;
++	if (((global_secflags & CIFSSEC_MUST_SEAL) == CIFSSEC_MUST_SEAL) &&
++	    (tcon->ses->server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
++		return 1;
+ 	return 0;
+ }
+ 
+diff --git a/fs/tracefs/event_inode.c b/fs/tracefs/event_inode.c
+index 5d88c184f0fc1..01e99e98457dd 100644
+--- a/fs/tracefs/event_inode.c
++++ b/fs/tracefs/event_inode.c
+@@ -112,7 +112,7 @@ static void release_ei(struct kref *ref)
+ 			entry->release(entry->name, ei->data);
+ 	}
+ 
+-	call_rcu(&ei->rcu, free_ei_rcu);
++	call_srcu(&eventfs_srcu, &ei->rcu, free_ei_rcu);
+ }
+ 
+ static inline void put_ei(struct eventfs_inode *ei)
+@@ -736,7 +736,7 @@ struct eventfs_inode *eventfs_create_dir(const char *name, struct eventfs_inode
+ 	/* Was the parent freed? */
+ 	if (list_empty(&ei->list)) {
+ 		cleanup_ei(ei);
+-		ei = NULL;
++		ei = ERR_PTR(-EBUSY);
+ 	}
+ 	return ei;
+ }
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index 7c29f4afc23d5..507a61004ed37 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -42,7 +42,7 @@ static struct inode *tracefs_alloc_inode(struct super_block *sb)
+ 	struct tracefs_inode *ti;
+ 	unsigned long flags;
+ 
+-	ti = kmem_cache_alloc(tracefs_inode_cachep, GFP_KERNEL);
++	ti = alloc_inode_sb(sb, tracefs_inode_cachep, GFP_KERNEL);
+ 	if (!ti)
+ 		return NULL;
+ 
+@@ -53,15 +53,14 @@ static struct inode *tracefs_alloc_inode(struct super_block *sb)
+ 	return &ti->vfs_inode;
+ }
+ 
+-static void tracefs_free_inode_rcu(struct rcu_head *rcu)
++static void tracefs_free_inode(struct inode *inode)
+ {
+-	struct tracefs_inode *ti;
++	struct tracefs_inode *ti = get_tracefs(inode);
+ 
+-	ti = container_of(rcu, struct tracefs_inode, rcu);
+ 	kmem_cache_free(tracefs_inode_cachep, ti);
+ }
+ 
+-static void tracefs_free_inode(struct inode *inode)
++static void tracefs_destroy_inode(struct inode *inode)
+ {
+ 	struct tracefs_inode *ti = get_tracefs(inode);
+ 	unsigned long flags;
+@@ -69,8 +68,6 @@ static void tracefs_free_inode(struct inode *inode)
+ 	spin_lock_irqsave(&tracefs_inode_lock, flags);
+ 	list_del_rcu(&ti->list);
+ 	spin_unlock_irqrestore(&tracefs_inode_lock, flags);
+-
+-	call_rcu(&ti->rcu, tracefs_free_inode_rcu);
+ }
+ 
+ static ssize_t default_read_file(struct file *file, char __user *buf,
+@@ -445,6 +442,7 @@ static int tracefs_drop_inode(struct inode *inode)
+ static const struct super_operations tracefs_super_operations = {
+ 	.alloc_inode    = tracefs_alloc_inode,
+ 	.free_inode     = tracefs_free_inode,
++	.destroy_inode  = tracefs_destroy_inode,
+ 	.drop_inode     = tracefs_drop_inode,
+ 	.statfs		= simple_statfs,
+ 	.show_options	= tracefs_show_options,
+diff --git a/fs/tracefs/internal.h b/fs/tracefs/internal.h
+index f704d8348357e..d83c2a25f288e 100644
+--- a/fs/tracefs/internal.h
++++ b/fs/tracefs/internal.h
+@@ -10,10 +10,7 @@ enum {
+ };
+ 
+ struct tracefs_inode {
+-	union {
+-		struct inode            vfs_inode;
+-		struct rcu_head		rcu;
+-	};
++	struct inode            vfs_inode;
+ 	/* The below gets initialized with memset_after(ti, 0, vfs_inode) */
+ 	struct list_head	list;
+ 	unsigned long           flags;
+diff --git a/fs/udf/balloc.c b/fs/udf/balloc.c
+index 558ad046972ad..bb471ec364046 100644
+--- a/fs/udf/balloc.c
++++ b/fs/udf/balloc.c
+@@ -18,6 +18,7 @@
+ #include "udfdecl.h"
+ 
+ #include <linux/bitops.h>
++#include <linux/overflow.h>
+ 
+ #include "udf_i.h"
+ #include "udf_sb.h"
+@@ -140,7 +141,6 @@ static void udf_bitmap_free_blocks(struct super_block *sb,
+ {
+ 	struct udf_sb_info *sbi = UDF_SB(sb);
+ 	struct buffer_head *bh = NULL;
+-	struct udf_part_map *partmap;
+ 	unsigned long block;
+ 	unsigned long block_group;
+ 	unsigned long bit;
+@@ -149,19 +149,9 @@ static void udf_bitmap_free_blocks(struct super_block *sb,
+ 	unsigned long overflow;
+ 
+ 	mutex_lock(&sbi->s_alloc_mutex);
+-	partmap = &sbi->s_partmaps[bloc->partitionReferenceNum];
+-	if (bloc->logicalBlockNum + count < count ||
+-	    (bloc->logicalBlockNum + count) > partmap->s_partition_len) {
+-		udf_debug("%u < %d || %u + %u > %u\n",
+-			  bloc->logicalBlockNum, 0,
+-			  bloc->logicalBlockNum, count,
+-			  partmap->s_partition_len);
+-		goto error_return;
+-	}
+-
++	/* We make sure this cannot overflow when mounting the filesystem */
+ 	block = bloc->logicalBlockNum + offset +
+ 		(sizeof(struct spaceBitmapDesc) << 3);
+-
+ 	do {
+ 		overflow = 0;
+ 		block_group = block >> (sb->s_blocksize_bits + 3);
+@@ -391,7 +381,6 @@ static void udf_table_free_blocks(struct super_block *sb,
+ 				  uint32_t count)
+ {
+ 	struct udf_sb_info *sbi = UDF_SB(sb);
+-	struct udf_part_map *partmap;
+ 	uint32_t start, end;
+ 	uint32_t elen;
+ 	struct kernel_lb_addr eloc;
+@@ -400,16 +389,6 @@ static void udf_table_free_blocks(struct super_block *sb,
+ 	struct udf_inode_info *iinfo;
+ 
+ 	mutex_lock(&sbi->s_alloc_mutex);
+-	partmap = &sbi->s_partmaps[bloc->partitionReferenceNum];
+-	if (bloc->logicalBlockNum + count < count ||
+-	    (bloc->logicalBlockNum + count) > partmap->s_partition_len) {
+-		udf_debug("%u < %d || %u + %u > %u\n",
+-			  bloc->logicalBlockNum, 0,
+-			  bloc->logicalBlockNum, count,
+-			  partmap->s_partition_len);
+-		goto error_return;
+-	}
+-
+ 	iinfo = UDF_I(table);
+ 	udf_add_free_space(sb, sbi->s_partition, count);
+ 
+@@ -684,6 +663,17 @@ void udf_free_blocks(struct super_block *sb, struct inode *inode,
+ {
+ 	uint16_t partition = bloc->partitionReferenceNum;
+ 	struct udf_part_map *map = &UDF_SB(sb)->s_partmaps[partition];
++	uint32_t blk;
++
++	if (check_add_overflow(bloc->logicalBlockNum, offset, &blk) ||
++	    check_add_overflow(blk, count, &blk) ||
++	    bloc->logicalBlockNum + count > map->s_partition_len) {
++		udf_debug("Invalid request to free blocks: (%d, %u), off %u, "
++			  "len %u, partition len %u\n",
++			  partition, bloc->logicalBlockNum, offset, count,
++			  map->s_partition_len);
++		return;
++	}
+ 
+ 	if (map->s_partition_flags & UDF_PART_FLAG_UNALLOC_BITMAP) {
+ 		udf_bitmap_free_blocks(sb, map->s_uspace.s_bitmap,
+diff --git a/include/linux/blk-integrity.h b/include/linux/blk-integrity.h
+index 7428cb43952da..9fc4fed9a19f8 100644
+--- a/include/linux/blk-integrity.h
++++ b/include/linux/blk-integrity.h
+@@ -100,14 +100,13 @@ static inline bool blk_integrity_rq(struct request *rq)
+ }
+ 
+ /*
+- * Return the first bvec that contains integrity data.  Only drivers that are
+- * limited to a single integrity segment should use this helper.
++ * Return the current bvec that contains the integrity data. bip_iter may be
++ * advanced to iterate over the integrity data.
+  */
+-static inline struct bio_vec *rq_integrity_vec(struct request *rq)
++static inline struct bio_vec rq_integrity_vec(struct request *rq)
+ {
+-	if (WARN_ON_ONCE(queue_max_integrity_segments(rq->q) > 1))
+-		return NULL;
+-	return rq->bio->bi_integrity->bip_vec;
++	return mp_bvec_iter_bvec(rq->bio->bi_integrity->bip_vec,
++				 rq->bio->bi_integrity->bip_iter);
+ }
+ #else /* CONFIG_BLK_DEV_INTEGRITY */
+ static inline int blk_rq_count_integrity_sg(struct request_queue *q,
+@@ -167,9 +166,10 @@ static inline int blk_integrity_rq(struct request *rq)
+ 	return 0;
+ }
+ 
+-static inline struct bio_vec *rq_integrity_vec(struct request *rq)
++static inline struct bio_vec rq_integrity_vec(struct request *rq)
+ {
+-	return NULL;
++	/* the optimizer will remove all calls to this function */
++	return (struct bio_vec){ };
+ }
+ #endif /* CONFIG_BLK_DEV_INTEGRITY */
+ #endif /* _LINUX_BLK_INTEGRITY_H */
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 0283cf366c2a9..93ac1a859d699 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -629,6 +629,7 @@ struct inode {
+ 	umode_t			i_mode;
+ 	unsigned short		i_opflags;
+ 	kuid_t			i_uid;
++	struct list_head	i_lru;		/* inode LRU list */
+ 	kgid_t			i_gid;
+ 	unsigned int		i_flags;
+ 
+@@ -690,7 +691,6 @@ struct inode {
+ 	u16			i_wb_frn_avg_time;
+ 	u16			i_wb_frn_history;
+ #endif
+-	struct list_head	i_lru;		/* inode LRU list */
+ 	struct list_head	i_sb_list;
+ 	struct list_head	i_wb_list;	/* backing dev writeback list */
+ 	union {
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 942a587bb97e3..677aea20d3e11 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -2126,6 +2126,8 @@
+ 
+ #define PCI_VENDOR_ID_CHELSIO		0x1425
+ 
++#define PCI_VENDOR_ID_EDIMAX		0x1432
++
+ #define PCI_VENDOR_ID_ADLINK		0x144a
+ 
+ #define PCI_VENDOR_ID_SAMSUNG		0x144d
+diff --git a/include/linux/profile.h b/include/linux/profile.h
+index 04ae5ebcb637a..cb365a7518616 100644
+--- a/include/linux/profile.h
++++ b/include/linux/profile.h
+@@ -11,7 +11,6 @@
+ 
+ #define CPU_PROFILING	1
+ #define SCHED_PROFILING	2
+-#define SLEEP_PROFILING	3
+ #define KVM_PROFILING	4
+ 
+ struct proc_dir_entry;
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index dfd2399f2cde0..61cb3de236af1 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -209,7 +209,6 @@ void synchronize_rcu_tasks_rude(void);
+ 
+ #define rcu_note_voluntary_context_switch(t) rcu_tasks_qs(t, false)
+ void exit_tasks_rcu_start(void);
+-void exit_tasks_rcu_stop(void);
+ void exit_tasks_rcu_finish(void);
+ #else /* #ifdef CONFIG_TASKS_RCU_GENERIC */
+ #define rcu_tasks_classic_qs(t, preempt) do { } while (0)
+@@ -218,7 +217,6 @@ void exit_tasks_rcu_finish(void);
+ #define call_rcu_tasks call_rcu
+ #define synchronize_rcu_tasks synchronize_rcu
+ static inline void exit_tasks_rcu_start(void) { }
+-static inline void exit_tasks_rcu_stop(void) { }
+ static inline void exit_tasks_rcu_finish(void) { }
+ #endif /* #else #ifdef CONFIG_TASKS_RCU_GENERIC */
+ 
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index 9df3e2973626b..9435185c10ef7 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -880,7 +880,6 @@ do {									\
+ struct perf_event;
+ 
+ DECLARE_PER_CPU(struct pt_regs, perf_trace_regs);
+-DECLARE_PER_CPU(int, bpf_kprobe_override);
+ 
+ extern int  perf_trace_init(struct perf_event *event);
+ extern void perf_trace_destroy(struct perf_event *event);
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index d1d7825318c32..6c395a2600e8d 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -56,7 +56,6 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 	unsigned int thlen = 0;
+ 	unsigned int p_off = 0;
+ 	unsigned int ip_proto;
+-	u64 ret, remainder, gso_size;
+ 
+ 	if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
+ 		switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
+@@ -99,16 +98,6 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 		u32 off = __virtio16_to_cpu(little_endian, hdr->csum_offset);
+ 		u32 needed = start + max_t(u32, thlen, off + sizeof(__sum16));
+ 
+-		if (hdr->gso_size) {
+-			gso_size = __virtio16_to_cpu(little_endian, hdr->gso_size);
+-			ret = div64_u64_rem(skb->len, gso_size, &remainder);
+-			if (!(ret && (hdr->gso_size > needed) &&
+-						((remainder > needed) || (remainder == 0)))) {
+-				return -EINVAL;
+-			}
+-			skb_shinfo(skb)->tx_flags |= SKBFL_SHARED_FRAG;
+-		}
+-
+ 		if (!pskb_may_pull(skb, needed))
+ 			return -EINVAL;
+ 
+@@ -182,6 +171,11 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 			if (gso_type != SKB_GSO_UDP_L4)
+ 				return -EINVAL;
+ 			break;
++		case SKB_GSO_TCPV4:
++		case SKB_GSO_TCPV6:
++			if (skb->csum_offset != offsetof(struct tcphdr, check))
++				return -EINVAL;
++			break;
+ 		}
+ 
+ 		/* Kernel has a special handling for GSO_BY_FRAGS. */
+diff --git a/include/sound/cs35l56.h b/include/sound/cs35l56.h
+index dc627ebf01df8..347959585deb6 100644
+--- a/include/sound/cs35l56.h
++++ b/include/sound/cs35l56.h
+@@ -267,13 +267,23 @@ struct cs35l56_base {
+ 	bool fw_patched;
+ 	bool secured;
+ 	bool can_hibernate;
+-	bool fw_owns_asp1;
+ 	bool cal_data_valid;
+ 	s8 cal_index;
+ 	struct cirrus_amp_cal_data cal_data;
+ 	struct gpio_desc *reset_gpio;
+ };
+ 
++/* Temporary to avoid a build break with the HDA driver */
++static inline int cs35l56_force_sync_asp1_registers_from_cache(struct cs35l56_base *cs35l56_base)
++{
++	return 0;
++}
++
++static inline bool cs35l56_is_otp_register(unsigned int reg)
++{
++	return (reg >> 16) == 3;
++}
++
+ extern struct regmap_config cs35l56_regmap_i2c;
+ extern struct regmap_config cs35l56_regmap_spi;
+ extern struct regmap_config cs35l56_regmap_sdw;
+@@ -284,8 +294,6 @@ extern const char * const cs35l56_tx_input_texts[CS35L56_NUM_INPUT_SRC];
+ extern const unsigned int cs35l56_tx_input_values[CS35L56_NUM_INPUT_SRC];
+ 
+ int cs35l56_set_patch(struct cs35l56_base *cs35l56_base);
+-int cs35l56_init_asp1_regs_for_driver_control(struct cs35l56_base *cs35l56_base);
+-int cs35l56_force_sync_asp1_registers_from_cache(struct cs35l56_base *cs35l56_base);
+ int cs35l56_mbox_send(struct cs35l56_base *cs35l56_base, unsigned int command);
+ int cs35l56_firmware_shutdown(struct cs35l56_base *cs35l56_base);
+ int cs35l56_wait_for_firmware_boot(struct cs35l56_base *cs35l56_base);
+diff --git a/io_uring/net.c b/io_uring/net.c
+index cf742bdd2a93e..09bb82bc209a1 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -591,17 +591,18 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
+ 			.iovs = &kmsg->fast_iov,
+ 			.max_len = INT_MAX,
+ 			.nr_iovs = 1,
+-			.mode = KBUF_MODE_EXPAND,
+ 		};
+ 
+ 		if (kmsg->free_iov) {
+ 			arg.nr_iovs = kmsg->free_iov_nr;
+ 			arg.iovs = kmsg->free_iov;
+-			arg.mode |= KBUF_MODE_FREE;
++			arg.mode = KBUF_MODE_FREE;
+ 		}
+ 
+ 		if (!(sr->flags & IORING_RECVSEND_BUNDLE))
+ 			arg.nr_iovs = 1;
++		else
++			arg.mode |= KBUF_MODE_EXPAND;
+ 
+ 		ret = io_buffers_select(req, &arg, issue_flags);
+ 		if (unlikely(ret < 0))
+@@ -613,6 +614,7 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
+ 		if (arg.iovs != &kmsg->fast_iov && arg.iovs != kmsg->free_iov) {
+ 			kmsg->free_iov_nr = ret;
+ 			kmsg->free_iov = arg.iovs;
++			req->flags |= REQ_F_NEED_CLEANUP;
+ 		}
+ 	}
+ 
+@@ -1084,6 +1086,7 @@ static int io_recv_buf_select(struct io_kiocb *req, struct io_async_msghdr *kmsg
+ 		if (arg.iovs != &kmsg->fast_iov && arg.iovs != kmsg->free_iov) {
+ 			kmsg->free_iov_nr = ret;
+ 			kmsg->free_iov = arg.iovs;
++			req->flags |= REQ_F_NEED_CLEANUP;
+ 		}
+ 	} else {
+ 		void __user *buf;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 6b422c275f78c..a8845cc299fec 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -7716,6 +7716,13 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
+ 	struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
+ 	int err;
+ 
++	if (reg->type != PTR_TO_STACK && reg->type != CONST_PTR_TO_DYNPTR) {
++		verbose(env,
++			"arg#%d expected pointer to stack or const struct bpf_dynptr\n",
++			regno);
++		return -EINVAL;
++	}
++
+ 	/* MEM_UNINIT and MEM_RDONLY are exclusive, when applied to an
+ 	 * ARG_PTR_TO_DYNPTR (or ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_*):
+ 	 */
+@@ -9465,6 +9472,10 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
+ 				return -EINVAL;
+ 			}
+ 		} else if (arg->arg_type == (ARG_PTR_TO_DYNPTR | MEM_RDONLY)) {
++			ret = check_func_arg_reg_off(env, reg, regno, ARG_PTR_TO_DYNPTR);
++			if (ret)
++				return ret;
++
+ 			ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0);
+ 			if (ret)
+ 				return ret;
+@@ -11958,12 +11969,6 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ 			enum bpf_arg_type dynptr_arg_type = ARG_PTR_TO_DYNPTR;
+ 			int clone_ref_obj_id = 0;
+ 
+-			if (reg->type != PTR_TO_STACK &&
+-			    reg->type != CONST_PTR_TO_DYNPTR) {
+-				verbose(env, "arg#%d expected pointer to stack or dynptr_ptr\n", i);
+-				return -EINVAL;
+-			}
+-
+ 			if (reg->type == CONST_PTR_TO_DYNPTR)
+ 				dynptr_arg_type |= MEM_RDONLY;
+ 
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index 07e99c936ba5d..1dee88ba0ae44 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -530,6 +530,7 @@ static int alloc_descs(unsigned int start, unsigned int cnt, int node,
+ 				flags = IRQD_AFFINITY_MANAGED |
+ 					IRQD_MANAGED_SHUTDOWN;
+ 			}
++			flags |= IRQD_AFFINITY_SET;
+ 			mask = &affinity->mask;
+ 			node = cpu_to_node(cpumask_first(mask));
+ 			affinity++;
+diff --git a/kernel/jump_label.c b/kernel/jump_label.c
+index 1f05a19918f47..c6ac0d0377d72 100644
+--- a/kernel/jump_label.c
++++ b/kernel/jump_label.c
+@@ -231,7 +231,7 @@ void static_key_disable_cpuslocked(struct static_key *key)
+ 	}
+ 
+ 	jump_label_lock();
+-	if (atomic_cmpxchg(&key->enabled, 1, 0))
++	if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
+ 		jump_label_update(key);
+ 	jump_label_unlock();
+ }
+@@ -284,7 +284,7 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key)
+ 		return;
+ 
+ 	guard(mutex)(&jump_label_mutex);
+-	if (atomic_cmpxchg(&key->enabled, 1, 0))
++	if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
+ 		jump_label_update(key);
+ 	else
+ 		WARN_ON_ONCE(!static_key_slow_try_dec(key));
+diff --git a/kernel/kcov.c b/kernel/kcov.c
+index f0a69d402066e..274b6b7c718de 100644
+--- a/kernel/kcov.c
++++ b/kernel/kcov.c
+@@ -161,6 +161,15 @@ static void kcov_remote_area_put(struct kcov_remote_area *area,
+ 	kmsan_unpoison_memory(&area->list, sizeof(area->list));
+ }
+ 
++/*
++ * Unlike in_serving_softirq(), this function returns false when called during
++ * a hardirq or an NMI that happened in the softirq context.
++ */
++static inline bool in_softirq_really(void)
++{
++	return in_serving_softirq() && !in_hardirq() && !in_nmi();
++}
++
+ static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
+ {
+ 	unsigned int mode;
+@@ -170,7 +179,7 @@ static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_stru
+ 	 * so we ignore code executed in interrupts, unless we are in a remote
+ 	 * coverage collection section in a softirq.
+ 	 */
+-	if (!in_task() && !(in_serving_softirq() && t->kcov_softirq))
++	if (!in_task() && !(in_softirq_really() && t->kcov_softirq))
+ 		return false;
+ 	mode = READ_ONCE(t->kcov_mode);
+ 	/*
+@@ -849,7 +858,7 @@ void kcov_remote_start(u64 handle)
+ 
+ 	if (WARN_ON(!kcov_check_handle(handle, true, true, true)))
+ 		return;
+-	if (!in_task() && !in_serving_softirq())
++	if (!in_task() && !in_softirq_really())
+ 		return;
+ 
+ 	local_lock_irqsave(&kcov_percpu_data.lock, flags);
+@@ -991,7 +1000,7 @@ void kcov_remote_stop(void)
+ 	int sequence;
+ 	unsigned long flags;
+ 
+-	if (!in_task() && !in_serving_softirq())
++	if (!in_task() && !in_softirq_really())
+ 		return;
+ 
+ 	local_lock_irqsave(&kcov_percpu_data.lock, flags);
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 6a76a81000735..85251c254d8a6 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1557,8 +1557,8 @@ static bool is_cfi_preamble_symbol(unsigned long addr)
+ 	if (lookup_symbol_name(addr, symbuf))
+ 		return false;
+ 
+-	return str_has_prefix("__cfi_", symbuf) ||
+-		str_has_prefix("__pfx_", symbuf);
++	return str_has_prefix(symbuf, "__cfi_") ||
++		str_has_prefix(symbuf, "__pfx_");
+ }
+ 
+ static int check_kprobe_address_safe(struct kprobe *p,
+diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
+index f5a36e67b5935..ac2e225027410 100644
+--- a/kernel/locking/qspinlock_paravirt.h
++++ b/kernel/locking/qspinlock_paravirt.h
+@@ -357,7 +357,7 @@ static void pv_wait_node(struct mcs_spinlock *node, struct mcs_spinlock *prev)
+ static void pv_kick_node(struct qspinlock *lock, struct mcs_spinlock *node)
+ {
+ 	struct pv_node *pn = (struct pv_node *)node;
+-	enum vcpu_state old = vcpu_halted;
++	u8 old = vcpu_halted;
+ 	/*
+ 	 * If the vCPU is indeed halted, advance its state to match that of
+ 	 * pv_wait_node(). If OTOH this fails, the vCPU was running and will
+diff --git a/kernel/module/main.c b/kernel/module/main.c
+index d18a94b973e10..3f9da537024a1 100644
+--- a/kernel/module/main.c
++++ b/kernel/module/main.c
+@@ -3101,7 +3101,7 @@ static bool idempotent(struct idempotent *u, const void *cookie)
+ 	struct idempotent *existing;
+ 	bool first;
+ 
+-	u->ret = 0;
++	u->ret = -EINTR;
+ 	u->cookie = cookie;
+ 	init_completion(&u->complete);
+ 
+@@ -3137,7 +3137,7 @@ static int idempotent_complete(struct idempotent *u, int ret)
+ 	hlist_for_each_entry_safe(pos, next, head, entry) {
+ 		if (pos->cookie != cookie)
+ 			continue;
+-		hlist_del(&pos->entry);
++		hlist_del_init(&pos->entry);
+ 		pos->ret = ret;
+ 		complete(&pos->complete);
+ 	}
+@@ -3145,6 +3145,28 @@ static int idempotent_complete(struct idempotent *u, int ret)
+ 	return ret;
+ }
+ 
++/*
++ * Wait for the idempotent worker.
++ *
++ * If we get interrupted, we need to remove ourselves from the
++ * the idempotent list, and the completion may still come in.
++ *
++ * The 'idem_lock' protects against the race, and 'idem.ret' was
++ * initialized to -EINTR and is thus always the right return
++ * value even if the idempotent work then completes between
++ * the wait_for_completion and the cleanup.
++ */
++static int idempotent_wait_for_completion(struct idempotent *u)
++{
++	if (wait_for_completion_interruptible(&u->complete)) {
++		spin_lock(&idem_lock);
++		if (!hlist_unhashed(&u->entry))
++			hlist_del(&u->entry);
++		spin_unlock(&idem_lock);
++	}
++	return u->ret;
++}
++
+ static int init_module_from_file(struct file *f, const char __user * uargs, int flags)
+ {
+ 	struct load_info info = { };
+@@ -3180,15 +3202,16 @@ static int idempotent_init_module(struct file *f, const char __user * uargs, int
+ 	if (!f || !(f->f_mode & FMODE_READ))
+ 		return -EBADF;
+ 
+-	/* See if somebody else is doing the operation? */
+-	if (idempotent(&idem, file_inode(f))) {
+-		wait_for_completion(&idem.complete);
+-		return idem.ret;
++	/* Are we the winners of the race and get to do this? */
++	if (!idempotent(&idem, file_inode(f))) {
++		int ret = init_module_from_file(f, uargs, flags);
++		return idempotent_complete(&idem, ret);
+ 	}
+ 
+-	/* Otherwise, we'll do it and complete others */
+-	return idempotent_complete(&idem,
+-		init_module_from_file(f, uargs, flags));
++	/*
++	 * Somebody else won the race and is loading the module.
++	 */
++	return idempotent_wait_for_completion(&idem);
+ }
+ 
+ SYSCALL_DEFINE3(finit_module, int, fd, const char __user *, uargs, int, flags)
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 53f4bc9127127..0fa6c28954603 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -517,6 +517,13 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
+ 	ps.chunk_size = max(ps.chunk_size, job->min_chunk);
+ 	ps.chunk_size = roundup(ps.chunk_size, job->align);
+ 
++	/*
++	 * chunk_size can be 0 if the caller sets min_chunk to 0. So force it
++	 * to at least 1 to prevent divide-by-0 panic in padata_mt_helper().`
++	 */
++	if (!ps.chunk_size)
++		ps.chunk_size = 1U;
++
+ 	list_for_each_entry(pw, &works, pw_list)
+ 		if (job->numa_aware) {
+ 			int old_node = atomic_read(&last_used_nid);
+diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c
+index 25f3cf679b358..bdf0087d64423 100644
+--- a/kernel/pid_namespace.c
++++ b/kernel/pid_namespace.c
+@@ -249,24 +249,7 @@ void zap_pid_ns_processes(struct pid_namespace *pid_ns)
+ 		set_current_state(TASK_INTERRUPTIBLE);
+ 		if (pid_ns->pid_allocated == init_pids)
+ 			break;
+-		/*
+-		 * Release tasks_rcu_exit_srcu to avoid following deadlock:
+-		 *
+-		 * 1) TASK A unshare(CLONE_NEWPID)
+-		 * 2) TASK A fork() twice -> TASK B (child reaper for new ns)
+-		 *    and TASK C
+-		 * 3) TASK B exits, kills TASK C, waits for TASK A to reap it
+-		 * 4) TASK A calls synchronize_rcu_tasks()
+-		 *                   -> synchronize_srcu(tasks_rcu_exit_srcu)
+-		 * 5) *DEADLOCK*
+-		 *
+-		 * It is considered safe to release tasks_rcu_exit_srcu here
+-		 * because we assume the current task can not be concurrently
+-		 * reaped at this point.
+-		 */
+-		exit_tasks_rcu_stop();
+ 		schedule();
+-		exit_tasks_rcu_start();
+ 	}
+ 	__set_current_state(TASK_RUNNING);
+ 
+diff --git a/kernel/profile.c b/kernel/profile.c
+index 2b775cc5c28f9..f2bb82c69f720 100644
+--- a/kernel/profile.c
++++ b/kernel/profile.c
+@@ -57,20 +57,11 @@ static DEFINE_MUTEX(profile_flip_mutex);
+ int profile_setup(char *str)
+ {
+ 	static const char schedstr[] = "schedule";
+-	static const char sleepstr[] = "sleep";
+ 	static const char kvmstr[] = "kvm";
+ 	const char *select = NULL;
+ 	int par;
+ 
+-	if (!strncmp(str, sleepstr, strlen(sleepstr))) {
+-#ifdef CONFIG_SCHEDSTATS
+-		force_schedstat_enabled();
+-		prof_on = SLEEP_PROFILING;
+-		select = sleepstr;
+-#else
+-		pr_warn("kernel sleep profiling requires CONFIG_SCHEDSTATS\n");
+-#endif /* CONFIG_SCHEDSTATS */
+-	} else if (!strncmp(str, schedstr, strlen(schedstr))) {
++	if (!strncmp(str, schedstr, strlen(schedstr))) {
+ 		prof_on = SCHED_PROFILING;
+ 		select = schedstr;
+ 	} else if (!strncmp(str, kvmstr, strlen(kvmstr))) {
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 807fbf6123a77..251cead744603 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -2626,7 +2626,7 @@ static void rcu_torture_fwd_cb_cr(struct rcu_head *rhp)
+ 	spin_lock_irqsave(&rfp->rcu_fwd_lock, flags);
+ 	rfcpp = rfp->rcu_fwd_cb_tail;
+ 	rfp->rcu_fwd_cb_tail = &rfcp->rfc_next;
+-	WRITE_ONCE(*rfcpp, rfcp);
++	smp_store_release(rfcpp, rfcp);
+ 	WRITE_ONCE(rfp->n_launders_cb, rfp->n_launders_cb + 1);
+ 	i = ((jiffies - rfp->rcu_fwd_startat) / (HZ / FWD_CBS_HIST_DIV));
+ 	if (i >= ARRAY_SIZE(rfp->n_launders_hist))
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 098e82bcc427f..ba3440a45b6dd 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -858,7 +858,7 @@ static void rcu_tasks_wait_gp(struct rcu_tasks *rtp)
+ //	not know to synchronize with this RCU Tasks grace period) have
+ //	completed exiting.  The synchronize_rcu() in rcu_tasks_postgp()
+ //	will take care of any tasks stuck in the non-preemptible region
+-//	of do_exit() following its call to exit_tasks_rcu_stop().
++//	of do_exit() following its call to exit_tasks_rcu_finish().
+ // check_all_holdout_tasks(), repeatedly until holdout list is empty:
+ //	Scans the holdout list, attempting to identify a quiescent state
+ //	for each task on the list.  If there is a quiescent state, the
+@@ -1220,7 +1220,7 @@ void exit_tasks_rcu_start(void)
+  * Remove the task from the "yet another list" because do_exit() is now
+  * non-preemptible, allowing synchronize_rcu() to wait beyond this point.
+  */
+-void exit_tasks_rcu_stop(void)
++void exit_tasks_rcu_finish(void)
+ {
+ 	unsigned long flags;
+ 	struct rcu_tasks_percpu *rtpcp;
+@@ -1231,22 +1231,12 @@ void exit_tasks_rcu_stop(void)
+ 	raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
+ 	list_del_init(&t->rcu_tasks_exit_list);
+ 	raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
+-}
+ 
+-/*
+- * Contribute to protect against tasklist scan blind spot while the
+- * task is exiting and may be removed from the tasklist. See
+- * corresponding synchronize_srcu() for further details.
+- */
+-void exit_tasks_rcu_finish(void)
+-{
+-	exit_tasks_rcu_stop();
+-	exit_tasks_rcu_finish_trace(current);
++	exit_tasks_rcu_finish_trace(t);
+ }
+ 
+ #else /* #ifdef CONFIG_TASKS_RCU */
+ void exit_tasks_rcu_start(void) { }
+-void exit_tasks_rcu_stop(void) { }
+ void exit_tasks_rcu_finish(void) { exit_tasks_rcu_finish_trace(current); }
+ #endif /* #else #ifdef CONFIG_TASKS_RCU */
+ 
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 28c7031711a3f..63fb007beeaf5 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -5110,11 +5110,15 @@ void rcutree_migrate_callbacks(int cpu)
+ 	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
+ 	bool needwake;
+ 
+-	if (rcu_rdp_is_offloaded(rdp) ||
+-	    rcu_segcblist_empty(&rdp->cblist))
+-		return;  /* No callbacks to migrate. */
++	if (rcu_rdp_is_offloaded(rdp))
++		return;
+ 
+ 	raw_spin_lock_irqsave(&rcu_state.barrier_lock, flags);
++	if (rcu_segcblist_empty(&rdp->cblist)) {
++		raw_spin_unlock_irqrestore(&rcu_state.barrier_lock, flags);
++		return;  /* No callbacks to migrate. */
++	}
++
+ 	WARN_ON_ONCE(rcu_rdp_cpu_online(rdp));
+ 	rcu_barrier_entrain(rdp);
+ 	my_rdp = this_cpu_ptr(&rcu_data);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index ebf21373f6634..3e84a3b7b7bb9 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -9604,6 +9604,30 @@ void set_rq_offline(struct rq *rq)
+ 	}
+ }
+ 
++static inline void sched_set_rq_online(struct rq *rq, int cpu)
++{
++	struct rq_flags rf;
++
++	rq_lock_irqsave(rq, &rf);
++	if (rq->rd) {
++		BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
++		set_rq_online(rq);
++	}
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++static inline void sched_set_rq_offline(struct rq *rq, int cpu)
++{
++	struct rq_flags rf;
++
++	rq_lock_irqsave(rq, &rf);
++	if (rq->rd) {
++		BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
++		set_rq_offline(rq);
++	}
++	rq_unlock_irqrestore(rq, &rf);
++}
++
+ /*
+  * used to mark begin/end of suspend/resume:
+  */
+@@ -9654,10 +9678,25 @@ static int cpuset_cpu_inactive(unsigned int cpu)
+ 	return 0;
+ }
+ 
++static inline void sched_smt_present_inc(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
++		static_branch_inc_cpuslocked(&sched_smt_present);
++#endif
++}
++
++static inline void sched_smt_present_dec(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
++		static_branch_dec_cpuslocked(&sched_smt_present);
++#endif
++}
++
+ int sched_cpu_activate(unsigned int cpu)
+ {
+ 	struct rq *rq = cpu_rq(cpu);
+-	struct rq_flags rf;
+ 
+ 	/*
+ 	 * Clear the balance_push callback and prepare to schedule
+@@ -9665,13 +9704,10 @@ int sched_cpu_activate(unsigned int cpu)
+ 	 */
+ 	balance_push_set(cpu, false);
+ 
+-#ifdef CONFIG_SCHED_SMT
+ 	/*
+ 	 * When going up, increment the number of cores with SMT present.
+ 	 */
+-	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+-		static_branch_inc_cpuslocked(&sched_smt_present);
+-#endif
++	sched_smt_present_inc(cpu);
+ 	set_cpu_active(cpu, true);
+ 
+ 	if (sched_smp_initialized) {
+@@ -9689,12 +9725,7 @@ int sched_cpu_activate(unsigned int cpu)
+ 	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
+ 	 *    domains.
+ 	 */
+-	rq_lock_irqsave(rq, &rf);
+-	if (rq->rd) {
+-		BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
+-		set_rq_online(rq);
+-	}
+-	rq_unlock_irqrestore(rq, &rf);
++	sched_set_rq_online(rq, cpu);
+ 
+ 	return 0;
+ }
+@@ -9702,7 +9733,6 @@ int sched_cpu_activate(unsigned int cpu)
+ int sched_cpu_deactivate(unsigned int cpu)
+ {
+ 	struct rq *rq = cpu_rq(cpu);
+-	struct rq_flags rf;
+ 	int ret;
+ 
+ 	/*
+@@ -9733,20 +9763,14 @@ int sched_cpu_deactivate(unsigned int cpu)
+ 	 */
+ 	synchronize_rcu();
+ 
+-	rq_lock_irqsave(rq, &rf);
+-	if (rq->rd) {
+-		BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
+-		set_rq_offline(rq);
+-	}
+-	rq_unlock_irqrestore(rq, &rf);
++	sched_set_rq_offline(rq, cpu);
+ 
+-#ifdef CONFIG_SCHED_SMT
+ 	/*
+ 	 * When going down, decrement the number of cores with SMT present.
+ 	 */
+-	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+-		static_branch_dec_cpuslocked(&sched_smt_present);
++	sched_smt_present_dec(cpu);
+ 
++#ifdef CONFIG_SCHED_SMT
+ 	sched_core_cpu_deactivate(cpu);
+ #endif
+ 
+@@ -9756,6 +9780,8 @@ int sched_cpu_deactivate(unsigned int cpu)
+ 	sched_update_numa(cpu, false);
+ 	ret = cpuset_cpu_inactive(cpu);
+ 	if (ret) {
++		sched_smt_present_inc(cpu);
++		sched_set_rq_online(rq, cpu);
+ 		balance_push_set(cpu, false);
+ 		set_cpu_active(cpu, true);
+ 		sched_update_numa(cpu, true);
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index aa48b2ec879df..4feef0d4e4494 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -582,6 +582,12 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ 	}
+ 
+ 	stime = mul_u64_u64_div_u64(stime, rtime, stime + utime);
++	/*
++	 * Because mul_u64_u64_div_u64() can approximate on some
++	 * architectures; enforce the constraint that: a*b/(b+c) <= a.
++	 */
++	if (unlikely(stime > rtime))
++		stime = rtime;
+ 
+ update:
+ 	/*
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index 78e48f5426ee1..eb0cdcd4d9212 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -92,16 +92,6 @@ void __update_stats_enqueue_sleeper(struct rq *rq, struct task_struct *p,
+ 
+ 			trace_sched_stat_blocked(p, delta);
+ 
+-			/*
+-			 * Blocking time is in units of nanosecs, so shift by
+-			 * 20 to get a milliseconds-range estimation of the
+-			 * amount of time that the task spent sleeping:
+-			 */
+-			if (unlikely(prof_on == SLEEP_PROFILING)) {
+-				profile_hits(SLEEP_PROFILING,
+-					     (void *)get_wchan(p),
+-					     delta >> 20);
+-			}
+ 			account_scheduler_latency(p, delta >> 10, 0);
+ 		}
+ 	}
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index d25ba49e313cc..d0538a75f4c63 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -246,7 +246,7 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow,
+ 
+ 		wd_delay = cycles_to_nsec_safe(watchdog, *wdnow, wd_end);
+ 		if (wd_delay <= WATCHDOG_MAX_SKEW) {
+-			if (nretries > 1 || nretries >= max_retries) {
++			if (nretries > 1 && nretries >= max_retries) {
+ 				pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
+ 					smp_processor_id(), watchdog->name, nretries);
+ 			}
+diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
+index 406dccb79c2b6..8d2dd214ec682 100644
+--- a/kernel/time/ntp.c
++++ b/kernel/time/ntp.c
+@@ -727,17 +727,16 @@ static inline void process_adjtimex_modes(const struct __kernel_timex *txc,
+ 	}
+ 
+ 	if (txc->modes & ADJ_MAXERROR)
+-		time_maxerror = txc->maxerror;
++		time_maxerror = clamp(txc->maxerror, 0, NTP_PHASE_LIMIT);
+ 
+ 	if (txc->modes & ADJ_ESTERROR)
+-		time_esterror = txc->esterror;
++		time_esterror = clamp(txc->esterror, 0, NTP_PHASE_LIMIT);
+ 
+ 	if (txc->modes & ADJ_TIMECONST) {
+-		time_constant = txc->constant;
++		time_constant = clamp(txc->constant, 0, MAXTC);
+ 		if (!(time_status & STA_NANO))
+ 			time_constant += 4;
+-		time_constant = min(time_constant, (long)MAXTC);
+-		time_constant = max(time_constant, 0l);
++		time_constant = clamp(time_constant, 0, MAXTC);
+ 	}
+ 
+ 	if (txc->modes & ADJ_TAI &&
+diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
+index b4843099a8da7..ed58eebb4e8f4 100644
+--- a/kernel/time/tick-broadcast.c
++++ b/kernel/time/tick-broadcast.c
+@@ -1141,7 +1141,6 @@ void tick_broadcast_switch_to_oneshot(void)
+ #ifdef CONFIG_HOTPLUG_CPU
+ void hotplug_cpu__broadcast_tick_pull(int deadcpu)
+ {
+-	struct tick_device *td = this_cpu_ptr(&tick_cpu_device);
+ 	struct clock_event_device *bc;
+ 	unsigned long flags;
+ 
+@@ -1167,6 +1166,8 @@ void hotplug_cpu__broadcast_tick_pull(int deadcpu)
+ 		 * device to avoid the starvation.
+ 		 */
+ 		if (tick_check_broadcast_expired()) {
++			struct tick_device *td = this_cpu_ptr(&tick_cpu_device);
++
+ 			cpumask_clear_cpu(smp_processor_id(), tick_broadcast_force_mask);
+ 			tick_program_event(td->evtdev->next_event, 1);
+ 		}
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 4e18db1819f84..0381366c541e0 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -2479,7 +2479,7 @@ int do_adjtimex(struct __kernel_timex *txc)
+ 		clock_set |= timekeeping_advance(TK_ADV_FREQ);
+ 
+ 	if (clock_set)
+-		clock_was_set(CLOCK_REALTIME);
++		clock_was_set(CLOCK_SET_WALL);
+ 
+ 	ntp_notify_cmos_timer();
+ 
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 749a182dab480..8edab43580d5a 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -1573,6 +1573,29 @@ static inline void *event_file_data(struct file *filp)
+ extern struct mutex event_mutex;
+ extern struct list_head ftrace_events;
+ 
++/*
++ * When the trace_event_file is the filp->i_private pointer,
++ * it must be taken under the event_mutex lock, and then checked
++ * if the EVENT_FILE_FL_FREED flag is set. If it is, then the
++ * data pointed to by the trace_event_file can not be trusted.
++ *
++ * Use the event_file_file() to access the trace_event_file from
++ * the filp the first time under the event_mutex and check for
++ * NULL. If it needs to be retrieved again while the event_mutex
++ * is still held, then event_file_data() can be used and it is
++ * guaranteed to be valid.
++ */
++static inline struct trace_event_file *event_file_file(struct file *filp)
++{
++	struct trace_event_file *file;
++
++	lockdep_assert_held(&event_mutex);
++	file = READ_ONCE(file_inode(filp)->i_private);
++	if (!file || file->flags & EVENT_FILE_FL_FREED)
++		return NULL;
++	return file;
++}
++
+ extern const struct file_operations event_trigger_fops;
+ extern const struct file_operations event_hist_fops;
+ extern const struct file_operations event_hist_debug_fops;
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 6ef29eba90ceb..f08fbaf8cad67 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -1386,12 +1386,12 @@ event_enable_read(struct file *filp, char __user *ubuf, size_t cnt,
+ 	char buf[4] = "0";
+ 
+ 	mutex_lock(&event_mutex);
+-	file = event_file_data(filp);
++	file = event_file_file(filp);
+ 	if (likely(file))
+ 		flags = file->flags;
+ 	mutex_unlock(&event_mutex);
+ 
+-	if (!file || flags & EVENT_FILE_FL_FREED)
++	if (!file)
+ 		return -ENODEV;
+ 
+ 	if (flags & EVENT_FILE_FL_ENABLED &&
+@@ -1424,8 +1424,8 @@ event_enable_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ 	case 1:
+ 		ret = -ENODEV;
+ 		mutex_lock(&event_mutex);
+-		file = event_file_data(filp);
+-		if (likely(file && !(file->flags & EVENT_FILE_FL_FREED))) {
++		file = event_file_file(filp);
++		if (likely(file)) {
+ 			ret = tracing_update_buffers(file->tr);
+ 			if (ret < 0) {
+ 				mutex_unlock(&event_mutex);
+@@ -1540,7 +1540,8 @@ enum {
+ 
+ static void *f_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+-	struct trace_event_call *call = event_file_data(m->private);
++	struct trace_event_file *file = event_file_data(m->private);
++	struct trace_event_call *call = file->event_call;
+ 	struct list_head *common_head = &ftrace_common_fields;
+ 	struct list_head *head = trace_get_fields(call);
+ 	struct list_head *node = v;
+@@ -1572,7 +1573,8 @@ static void *f_next(struct seq_file *m, void *v, loff_t *pos)
+ 
+ static int f_show(struct seq_file *m, void *v)
+ {
+-	struct trace_event_call *call = event_file_data(m->private);
++	struct trace_event_file *file = event_file_data(m->private);
++	struct trace_event_call *call = file->event_call;
+ 	struct ftrace_event_field *field;
+ 	const char *array_descriptor;
+ 
+@@ -1627,12 +1629,14 @@ static int f_show(struct seq_file *m, void *v)
+ 
+ static void *f_start(struct seq_file *m, loff_t *pos)
+ {
++	struct trace_event_file *file;
+ 	void *p = (void *)FORMAT_HEADER;
+ 	loff_t l = 0;
+ 
+ 	/* ->stop() is called even if ->start() fails */
+ 	mutex_lock(&event_mutex);
+-	if (!event_file_data(m->private))
++	file = event_file_file(m->private);
++	if (!file)
+ 		return ERR_PTR(-ENODEV);
+ 
+ 	while (l < *pos && p)
+@@ -1706,8 +1710,8 @@ event_filter_read(struct file *filp, char __user *ubuf, size_t cnt,
+ 	trace_seq_init(s);
+ 
+ 	mutex_lock(&event_mutex);
+-	file = event_file_data(filp);
+-	if (file && !(file->flags & EVENT_FILE_FL_FREED))
++	file = event_file_file(filp);
++	if (file)
+ 		print_event_filter(file, s);
+ 	mutex_unlock(&event_mutex);
+ 
+@@ -1736,9 +1740,13 @@ event_filter_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ 		return PTR_ERR(buf);
+ 
+ 	mutex_lock(&event_mutex);
+-	file = event_file_data(filp);
+-	if (file)
+-		err = apply_event_filter(file, buf);
++	file = event_file_file(filp);
++	if (file) {
++		if (file->flags & EVENT_FILE_FL_FREED)
++			err = -ENODEV;
++		else
++			err = apply_event_filter(file, buf);
++	}
+ 	mutex_unlock(&event_mutex);
+ 
+ 	kfree(buf);
+@@ -2485,7 +2493,6 @@ static int event_callback(const char *name, umode_t *mode, void **data,
+ 	if (strcmp(name, "format") == 0) {
+ 		*mode = TRACE_MODE_READ;
+ 		*fops = &ftrace_event_format_fops;
+-		*data = call;
+ 		return 1;
+ 	}
+ 
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 6ece1308d36a0..5f9119eb7c67f 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -5601,7 +5601,7 @@ static int hist_show(struct seq_file *m, void *v)
+ 
+ 	mutex_lock(&event_mutex);
+ 
+-	event_file = event_file_data(m->private);
++	event_file = event_file_file(m->private);
+ 	if (unlikely(!event_file)) {
+ 		ret = -ENODEV;
+ 		goto out_unlock;
+@@ -5880,7 +5880,7 @@ static int hist_debug_show(struct seq_file *m, void *v)
+ 
+ 	mutex_lock(&event_mutex);
+ 
+-	event_file = event_file_data(m->private);
++	event_file = event_file_file(m->private);
+ 	if (unlikely(!event_file)) {
+ 		ret = -ENODEV;
+ 		goto out_unlock;
+diff --git a/kernel/trace/trace_events_inject.c b/kernel/trace/trace_events_inject.c
+index 8650562bdaa98..a8f076809db4d 100644
+--- a/kernel/trace/trace_events_inject.c
++++ b/kernel/trace/trace_events_inject.c
+@@ -299,7 +299,7 @@ event_inject_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ 	strim(buf);
+ 
+ 	mutex_lock(&event_mutex);
+-	file = event_file_data(filp);
++	file = event_file_file(filp);
+ 	if (file) {
+ 		call = file->event_call;
+ 		size = parse_entry(buf, call, &entry);
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index 4bec043c8690d..a5e3d6acf1e1e 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -159,7 +159,7 @@ static void *trigger_start(struct seq_file *m, loff_t *pos)
+ 
+ 	/* ->stop() is called even if ->start() fails */
+ 	mutex_lock(&event_mutex);
+-	event_file = event_file_data(m->private);
++	event_file = event_file_file(m->private);
+ 	if (unlikely(!event_file))
+ 		return ERR_PTR(-ENODEV);
+ 
+@@ -213,7 +213,7 @@ static int event_trigger_regex_open(struct inode *inode, struct file *file)
+ 
+ 	mutex_lock(&event_mutex);
+ 
+-	if (unlikely(!event_file_data(file))) {
++	if (unlikely(!event_file_file(file))) {
+ 		mutex_unlock(&event_mutex);
+ 		return -ENODEV;
+ 	}
+@@ -293,7 +293,7 @@ static ssize_t event_trigger_regex_write(struct file *file,
+ 	strim(buf);
+ 
+ 	mutex_lock(&event_mutex);
+-	event_file = event_file_data(file);
++	event_file = event_file_file(file);
+ 	if (unlikely(!event_file)) {
+ 		mutex_unlock(&event_mutex);
+ 		kfree(buf);
+diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
+index a4dcf0f243521..3a56e7c8aa4f6 100644
+--- a/kernel/trace/tracing_map.c
++++ b/kernel/trace/tracing_map.c
+@@ -454,7 +454,7 @@ static struct tracing_map_elt *get_free_elt(struct tracing_map *map)
+ 	struct tracing_map_elt *elt = NULL;
+ 	int idx;
+ 
+-	idx = atomic_inc_return(&map->next_elt);
++	idx = atomic_fetch_add_unless(&map->next_elt, 1, map->max_elts);
+ 	if (idx < map->max_elts) {
+ 		elt = *(TRACING_MAP_ELT(map->elts, idx));
+ 		if (map->ops && map->ops->elt_init)
+@@ -699,7 +699,7 @@ void tracing_map_clear(struct tracing_map *map)
+ {
+ 	unsigned int i;
+ 
+-	atomic_set(&map->next_elt, -1);
++	atomic_set(&map->next_elt, 0);
+ 	atomic64_set(&map->hits, 0);
+ 	atomic64_set(&map->drops, 0);
+ 
+@@ -783,7 +783,7 @@ struct tracing_map *tracing_map_create(unsigned int map_bits,
+ 
+ 	map->map_bits = map_bits;
+ 	map->max_elts = (1 << map_bits);
+-	atomic_set(&map->next_elt, -1);
++	atomic_set(&map->next_elt, 0);
+ 
+ 	map->map_size = (1 << (map_bits + 1));
+ 	map->ops = ops;
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index fb12a9bacd2fa..7cea91e193a8f 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -78,16 +78,17 @@ static bool			obj_freeing;
+ /* The number of objs on the global free list */
+ static int			obj_nr_tofree;
+ 
+-static int			debug_objects_maxchain __read_mostly;
+-static int __maybe_unused	debug_objects_maxchecked __read_mostly;
+-static int			debug_objects_fixups __read_mostly;
+-static int			debug_objects_warnings __read_mostly;
+-static int			debug_objects_enabled __read_mostly
+-				= CONFIG_DEBUG_OBJECTS_ENABLE_DEFAULT;
+-static int			debug_objects_pool_size __read_mostly
+-				= ODEBUG_POOL_SIZE;
+-static int			debug_objects_pool_min_level __read_mostly
+-				= ODEBUG_POOL_MIN_LEVEL;
++static int __data_racy			debug_objects_maxchain __read_mostly;
++static int __data_racy __maybe_unused	debug_objects_maxchecked __read_mostly;
++static int __data_racy			debug_objects_fixups __read_mostly;
++static int __data_racy			debug_objects_warnings __read_mostly;
++static int __data_racy			debug_objects_enabled __read_mostly
++					= CONFIG_DEBUG_OBJECTS_ENABLE_DEFAULT;
++static int __data_racy			debug_objects_pool_size __read_mostly
++					= ODEBUG_POOL_SIZE;
++static int __data_racy			debug_objects_pool_min_level __read_mostly
++					= ODEBUG_POOL_MIN_LEVEL;
++
+ static const struct debug_obj_descr *descr_test  __read_mostly;
+ static struct kmem_cache	*obj_cache __ro_after_init;
+ 
+diff --git a/mm/list_lru.c b/mm/list_lru.c
+index 3fd64736bc458..6ac393ea7439f 100644
+--- a/mm/list_lru.c
++++ b/mm/list_lru.c
+@@ -85,6 +85,7 @@ list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx)
+ }
+ #endif /* CONFIG_MEMCG_KMEM */
+ 
++/* The caller must ensure the memcg lifetime. */
+ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+ 		    struct mem_cgroup *memcg)
+ {
+@@ -109,14 +110,22 @@ EXPORT_SYMBOL_GPL(list_lru_add);
+ 
+ bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
+ {
++	bool ret;
+ 	int nid = page_to_nid(virt_to_page(item));
+-	struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+-		mem_cgroup_from_slab_obj(item) : NULL;
+ 
+-	return list_lru_add(lru, item, nid, memcg);
++	if (list_lru_memcg_aware(lru)) {
++		rcu_read_lock();
++		ret = list_lru_add(lru, item, nid, mem_cgroup_from_slab_obj(item));
++		rcu_read_unlock();
++	} else {
++		ret = list_lru_add(lru, item, nid, NULL);
++	}
++
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(list_lru_add_obj);
+ 
++/* The caller must ensure the memcg lifetime. */
+ bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+ 		    struct mem_cgroup *memcg)
+ {
+@@ -139,11 +148,18 @@ EXPORT_SYMBOL_GPL(list_lru_del);
+ 
+ bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
+ {
++	bool ret;
+ 	int nid = page_to_nid(virt_to_page(item));
+-	struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+-		mem_cgroup_from_slab_obj(item) : NULL;
+ 
+-	return list_lru_del(lru, item, nid, memcg);
++	if (list_lru_memcg_aware(lru)) {
++		rcu_read_lock();
++		ret = list_lru_del(lru, item, nid, mem_cgroup_from_slab_obj(item));
++		rcu_read_unlock();
++	} else {
++		ret = list_lru_del(lru, item, nid, NULL);
++	}
++
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(list_lru_del_obj);
+ 
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 8f2f1bb18c9cb..88c7a0861017c 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -5568,11 +5568,28 @@ static struct cftype mem_cgroup_legacy_files[] = {
+ 
+ #define MEM_CGROUP_ID_MAX	((1UL << MEM_CGROUP_ID_SHIFT) - 1)
+ static DEFINE_IDR(mem_cgroup_idr);
++static DEFINE_SPINLOCK(memcg_idr_lock);
++
++static int mem_cgroup_alloc_id(void)
++{
++	int ret;
++
++	idr_preload(GFP_KERNEL);
++	spin_lock(&memcg_idr_lock);
++	ret = idr_alloc(&mem_cgroup_idr, NULL, 1, MEM_CGROUP_ID_MAX + 1,
++			GFP_NOWAIT);
++	spin_unlock(&memcg_idr_lock);
++	idr_preload_end();
++	return ret;
++}
+ 
+ static void mem_cgroup_id_remove(struct mem_cgroup *memcg)
+ {
+ 	if (memcg->id.id > 0) {
++		spin_lock(&memcg_idr_lock);
+ 		idr_remove(&mem_cgroup_idr, memcg->id.id);
++		spin_unlock(&memcg_idr_lock);
++
+ 		memcg->id.id = 0;
+ 	}
+ }
+@@ -5706,8 +5723,7 @@ static struct mem_cgroup *mem_cgroup_alloc(struct mem_cgroup *parent)
+ 	if (!memcg)
+ 		return ERR_PTR(error);
+ 
+-	memcg->id.id = idr_alloc(&mem_cgroup_idr, NULL,
+-				 1, MEM_CGROUP_ID_MAX + 1, GFP_KERNEL);
++	memcg->id.id = mem_cgroup_alloc_id();
+ 	if (memcg->id.id < 0) {
+ 		error = memcg->id.id;
+ 		goto fail;
+@@ -5854,7 +5870,9 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
+ 	 * publish it here at the end of onlining. This matches the
+ 	 * regular ID destruction during offlining.
+ 	 */
++	spin_lock(&memcg_idr_lock);
+ 	idr_replace(&mem_cgroup_idr, memcg, memcg->id.id);
++	spin_unlock(&memcg_idr_lock);
+ 
+ 	return 0;
+ offline_kmem:
+diff --git a/mm/slub.c b/mm/slub.c
+index 4927edec6a8c9..849e8972e2ffc 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -4655,6 +4655,9 @@ static void __kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
+ 		if (!df.slab)
+ 			continue;
+ 
++		if (kfence_free(df.freelist))
++			continue;
++
+ 		do_slab_free(df.s, df.slab, df.freelist, df.tail, df.cnt,
+ 			     _RET_IP_);
+ 	} while (likely(size));
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 2f26147fdf3c9..4e90bd722e7b5 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -2972,6 +2972,20 @@ static int hci_passive_scan_sync(struct hci_dev *hdev)
+ 	} else if (hci_is_adv_monitoring(hdev)) {
+ 		window = hdev->le_scan_window_adv_monitor;
+ 		interval = hdev->le_scan_int_adv_monitor;
++
++		/* Disable duplicates filter when scanning for advertisement
++		 * monitor for the following reasons.
++		 *
++		 * For HW pattern filtering (ex. MSFT), Realtek and Qualcomm
++		 * controllers ignore RSSI_Sampling_Period when the duplicates
++		 * filter is enabled.
++		 *
++		 * For SW pattern filtering, when we're not doing interleaved
++		 * scanning, it is necessary to disable duplicates filter,
++		 * otherwise hosts can only receive one advertisement and it's
++		 * impossible to know if a peer is still in range.
++		 */
++		filter_dups = LE_SCAN_FILTER_DUP_DISABLE;
+ 	} else {
+ 		window = hdev->le_scan_window;
+ 		interval = hdev->le_scan_interval;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index c3c26bbb5ddae..9988ba382b686 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -6774,6 +6774,7 @@ static void l2cap_conless_channel(struct l2cap_conn *conn, __le16 psm,
+ 	bt_cb(skb)->l2cap.psm = psm;
+ 
+ 	if (!chan->ops->recv(chan, skb)) {
++		l2cap_chan_unlock(chan);
+ 		l2cap_chan_put(chan);
+ 		return;
+ 	}
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 9a1cb5079a7a0..b2ae0d2434d2e 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -2045,16 +2045,14 @@ void br_multicast_del_port(struct net_bridge_port *port)
+ {
+ 	struct net_bridge *br = port->br;
+ 	struct net_bridge_port_group *pg;
+-	HLIST_HEAD(deleted_head);
+ 	struct hlist_node *n;
+ 
+ 	/* Take care of the remaining groups, only perm ones should be left */
+ 	spin_lock_bh(&br->multicast_lock);
+ 	hlist_for_each_entry_safe(pg, n, &port->mglist, mglist)
+ 		br_multicast_find_del_pg(br, pg);
+-	hlist_move_list(&br->mcast_gc_list, &deleted_head);
+ 	spin_unlock_bh(&br->multicast_lock);
+-	br_multicast_gc(&deleted_head);
++	flush_work(&br->mcast_gc_work);
+ 	br_multicast_port_ctx_deinit(&port->multicast_ctx);
+ 	free_percpu(port->mcast_stats);
+ }
+diff --git a/net/core/link_watch.c b/net/core/link_watch.c
+index 8ec35194bfcb8..ab150641142aa 100644
+--- a/net/core/link_watch.c
++++ b/net/core/link_watch.c
+@@ -148,9 +148,9 @@ static void linkwatch_schedule_work(int urgent)
+ 	 * override the existing timer.
+ 	 */
+ 	if (test_bit(LW_URGENT, &linkwatch_flags))
+-		mod_delayed_work(system_wq, &linkwatch_work, 0);
++		mod_delayed_work(system_unbound_wq, &linkwatch_work, 0);
+ 	else
+-		schedule_delayed_work(&linkwatch_work, delay);
++		queue_delayed_work(system_unbound_wq, &linkwatch_work, delay);
+ }
+ 
+ 
+diff --git a/net/ipv4/tcp_ao.c b/net/ipv4/tcp_ao.c
+index 09c0fa6756b7d..9171e1066808f 100644
+--- a/net/ipv4/tcp_ao.c
++++ b/net/ipv4/tcp_ao.c
+@@ -266,32 +266,49 @@ static void tcp_ao_key_free_rcu(struct rcu_head *head)
+ 	kfree_sensitive(key);
+ }
+ 
+-void tcp_ao_destroy_sock(struct sock *sk, bool twsk)
++static void tcp_ao_info_free_rcu(struct rcu_head *head)
+ {
+-	struct tcp_ao_info *ao;
++	struct tcp_ao_info *ao = container_of(head, struct tcp_ao_info, rcu);
+ 	struct tcp_ao_key *key;
+ 	struct hlist_node *n;
+ 
++	hlist_for_each_entry_safe(key, n, &ao->head, node) {
++		hlist_del(&key->node);
++		tcp_sigpool_release(key->tcp_sigpool_id);
++		kfree_sensitive(key);
++	}
++	kfree(ao);
++	static_branch_slow_dec_deferred(&tcp_ao_needed);
++}
++
++static void tcp_ao_sk_omem_free(struct sock *sk, struct tcp_ao_info *ao)
++{
++	size_t total_ao_sk_mem = 0;
++	struct tcp_ao_key *key;
++
++	hlist_for_each_entry(key,  &ao->head, node)
++		total_ao_sk_mem += tcp_ao_sizeof_key(key);
++	atomic_sub(total_ao_sk_mem, &sk->sk_omem_alloc);
++}
++
++void tcp_ao_destroy_sock(struct sock *sk, bool twsk)
++{
++	struct tcp_ao_info *ao;
++
+ 	if (twsk) {
+ 		ao = rcu_dereference_protected(tcp_twsk(sk)->ao_info, 1);
+-		tcp_twsk(sk)->ao_info = NULL;
++		rcu_assign_pointer(tcp_twsk(sk)->ao_info, NULL);
+ 	} else {
+ 		ao = rcu_dereference_protected(tcp_sk(sk)->ao_info, 1);
+-		tcp_sk(sk)->ao_info = NULL;
++		rcu_assign_pointer(tcp_sk(sk)->ao_info, NULL);
+ 	}
+ 
+ 	if (!ao || !refcount_dec_and_test(&ao->refcnt))
+ 		return;
+ 
+-	hlist_for_each_entry_safe(key, n, &ao->head, node) {
+-		hlist_del_rcu(&key->node);
+-		if (!twsk)
+-			atomic_sub(tcp_ao_sizeof_key(key), &sk->sk_omem_alloc);
+-		call_rcu(&key->rcu, tcp_ao_key_free_rcu);
+-	}
+-
+-	kfree_rcu(ao, rcu);
+-	static_branch_slow_dec_deferred(&tcp_ao_needed);
++	if (!twsk)
++		tcp_ao_sk_omem_free(sk, ao);
++	call_rcu(&ao->rcu, tcp_ao_info_free_rcu);
+ }
+ 
+ void tcp_ao_time_wait(struct tcp_timewait_sock *tcptw, struct tcp_sock *tp)
+diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
+index 4b791e74529e1..e4ad3311e1489 100644
+--- a/net/ipv4/tcp_offload.c
++++ b/net/ipv4/tcp_offload.c
+@@ -140,6 +140,9 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb,
+ 	if (thlen < sizeof(*th))
+ 		goto out;
+ 
++	if (unlikely(skb_checksum_start(skb) != skb_transport_header(skb)))
++		goto out;
++
+ 	if (!pskb_may_pull(skb, thlen))
+ 		goto out;
+ 
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 59448a2dbf2c7..ee9af921556a7 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -278,6 +278,10 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+ 	if (gso_skb->len <= sizeof(*uh) + mss)
+ 		return ERR_PTR(-EINVAL);
+ 
++	if (unlikely(skb_checksum_start(gso_skb) !=
++		     skb_transport_header(gso_skb)))
++		return ERR_PTR(-EINVAL);
++
+ 	if (skb_gso_ok(gso_skb, features | NETIF_F_GSO_ROBUST)) {
+ 		/* Packet is from an untrusted source, reset gso_segs. */
+ 		skb_shinfo(gso_skb)->gso_segs = DIV_ROUND_UP(gso_skb->len - sizeof(*uh),
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 88a34db265d86..7ea4adf81d859 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -88,6 +88,11 @@
+ /* Default trace flags */
+ #define L2TP_DEFAULT_DEBUG_FLAGS	0
+ 
++#define L2TP_DEPTH_NESTING		2
++#if L2TP_DEPTH_NESTING == SINGLE_DEPTH_NESTING
++#error "L2TP requires its own lockdep subclass"
++#endif
++
+ /* Private data stored for received packets in the skb.
+  */
+ struct l2tp_skb_cb {
+@@ -1085,7 +1090,13 @@ static int l2tp_xmit_core(struct l2tp_session *session, struct sk_buff *skb, uns
+ 	IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | IPSKB_REROUTED);
+ 	nf_reset_ct(skb);
+ 
+-	bh_lock_sock_nested(sk);
++	/* L2TP uses its own lockdep subclass to avoid lockdep splats caused by
++	 * nested socket calls on the same lockdep socket class. This can
++	 * happen when data from a user socket is routed over l2tp, which uses
++	 * another userspace socket.
++	 */
++	spin_lock_nested(&sk->sk_lock.slock, L2TP_DEPTH_NESTING);
++
+ 	if (sock_owned_by_user(sk)) {
+ 		kfree_skb(skb);
+ 		ret = NET_XMIT_DROP;
+@@ -1137,7 +1148,7 @@ static int l2tp_xmit_core(struct l2tp_session *session, struct sk_buff *skb, uns
+ 	ret = l2tp_xmit_queue(tunnel, skb, &inet->cork.fl);
+ 
+ out_unlock:
+-	bh_unlock_sock(sk);
++	spin_unlock(&sk->sk_lock.slock);
+ 
+ 	return ret;
+ }
+diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
+index 21d55dc539f6c..677bbbac9f169 100644
+--- a/net/mac80211/agg-tx.c
++++ b/net/mac80211/agg-tx.c
+@@ -616,7 +616,9 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid,
+ 		return -EINVAL;
+ 
+ 	if (!pubsta->deflink.ht_cap.ht_supported &&
+-	    sta->sdata->vif.bss_conf.chanreq.oper.chan->band != NL80211_BAND_6GHZ)
++	    !pubsta->deflink.vht_cap.vht_supported &&
++	    !pubsta->deflink.he_cap.has_he &&
++	    !pubsta->deflink.eht_cap.has_eht)
+ 		return -EINVAL;
+ 
+ 	if (WARN_ON_ONCE(!local->ops->ampdu_action))
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 8a68382a4fe91..ac2f1a54cc43a 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -958,7 +958,8 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
+ 
+ 	if (subflow->remote_key_valid &&
+ 	    (((mp_opt->suboptions & OPTION_MPTCP_DSS) && mp_opt->use_ack) ||
+-	     ((mp_opt->suboptions & OPTION_MPTCP_ADD_ADDR) && !mp_opt->echo))) {
++	     ((mp_opt->suboptions & OPTION_MPTCP_ADD_ADDR) &&
++	      (!mp_opt->echo || subflow->mp_join)))) {
+ 		/* subflows are fully established as soon as we get any
+ 		 * additional ack, including ADD_ADDR.
+ 		 */
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 37954a0b087d2..4cae2aa7be5cb 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -348,7 +348,7 @@ bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
+ 	add_entry = mptcp_lookup_anno_list_by_saddr(msk, addr);
+ 
+ 	if (add_entry) {
+-		if (mptcp_pm_is_kernel(msk))
++		if (WARN_ON_ONCE(mptcp_pm_is_kernel(msk)))
+ 			return false;
+ 
+ 		sk_reset_timer(sk, &add_entry->add_timer,
+@@ -512,8 +512,8 @@ __lookup_addr(struct pm_nl_pernet *pernet, const struct mptcp_addr_info *info)
+ 
+ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ {
++	struct mptcp_pm_addr_entry *local, *signal_and_subflow = NULL;
+ 	struct sock *sk = (struct sock *)msk;
+-	struct mptcp_pm_addr_entry *local;
+ 	unsigned int add_addr_signal_max;
+ 	unsigned int local_addr_max;
+ 	struct pm_nl_pernet *pernet;
+@@ -555,8 +555,6 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ 
+ 	/* check first for announce */
+ 	if (msk->pm.add_addr_signaled < add_addr_signal_max) {
+-		local = select_signal_address(pernet, msk);
+-
+ 		/* due to racing events on both ends we can reach here while
+ 		 * previous add address is still running: if we invoke now
+ 		 * mptcp_pm_announce_addr(), that will fail and the
+@@ -567,16 +565,26 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ 		if (msk->pm.addr_signal & BIT(MPTCP_ADD_ADDR_SIGNAL))
+ 			return;
+ 
+-		if (local) {
+-			if (mptcp_pm_alloc_anno_list(msk, &local->addr)) {
+-				__clear_bit(local->addr.id, msk->pm.id_avail_bitmap);
+-				msk->pm.add_addr_signaled++;
+-				mptcp_pm_announce_addr(msk, &local->addr, false);
+-				mptcp_pm_nl_addr_send_ack(msk);
+-			}
+-		}
++		local = select_signal_address(pernet, msk);
++		if (!local)
++			goto subflow;
++
++		/* If the alloc fails, we are on memory pressure, not worth
++		 * continuing, and trying to create subflows.
++		 */
++		if (!mptcp_pm_alloc_anno_list(msk, &local->addr))
++			return;
++
++		__clear_bit(local->addr.id, msk->pm.id_avail_bitmap);
++		msk->pm.add_addr_signaled++;
++		mptcp_pm_announce_addr(msk, &local->addr, false);
++		mptcp_pm_nl_addr_send_ack(msk);
++
++		if (local->flags & MPTCP_PM_ADDR_FLAG_SUBFLOW)
++			signal_and_subflow = local;
+ 	}
+ 
++subflow:
+ 	/* check if should create a new subflow */
+ 	while (msk->pm.local_addr_used < local_addr_max &&
+ 	       msk->pm.subflows < subflows_max) {
+@@ -584,9 +592,14 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ 		bool fullmesh;
+ 		int i, nr;
+ 
+-		local = select_local_address(pernet, msk);
+-		if (!local)
+-			break;
++		if (signal_and_subflow) {
++			local = signal_and_subflow;
++			signal_and_subflow = NULL;
++		} else {
++			local = select_local_address(pernet, msk);
++			if (!local)
++				break;
++		}
+ 
+ 		fullmesh = !!(local->flags & MPTCP_PM_ADDR_FLAG_FULLMESH);
+ 
+@@ -1328,8 +1341,8 @@ int mptcp_pm_nl_add_addr_doit(struct sk_buff *skb, struct genl_info *info)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (addr.addr.port && !(addr.flags & MPTCP_PM_ADDR_FLAG_SIGNAL)) {
+-		GENL_SET_ERR_MSG(info, "flags must have signal when using port");
++	if (addr.addr.port && !address_use_port(&addr)) {
++		GENL_SET_ERR_MSG(info, "flags must have signal and not subflow when using port");
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 17fcaa9b0df94..a8a254a5008e5 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -735,15 +735,19 @@ static int __sctp_hash_endpoint(struct sctp_endpoint *ep)
+ 	struct sock *sk = ep->base.sk;
+ 	struct net *net = sock_net(sk);
+ 	struct sctp_hashbucket *head;
++	int err = 0;
+ 
+ 	ep->hashent = sctp_ep_hashfn(net, ep->base.bind_addr.port);
+ 	head = &sctp_ep_hashtable[ep->hashent];
+ 
++	write_lock(&head->lock);
+ 	if (sk->sk_reuseport) {
+ 		bool any = sctp_is_ep_boundall(sk);
+ 		struct sctp_endpoint *ep2;
+ 		struct list_head *list;
+-		int cnt = 0, err = 1;
++		int cnt = 0;
++
++		err = 1;
+ 
+ 		list_for_each(list, &ep->base.bind_addr.address_list)
+ 			cnt++;
+@@ -761,24 +765,24 @@ static int __sctp_hash_endpoint(struct sctp_endpoint *ep)
+ 			if (!err) {
+ 				err = reuseport_add_sock(sk, sk2, any);
+ 				if (err)
+-					return err;
++					goto out;
+ 				break;
+ 			} else if (err < 0) {
+-				return err;
++				goto out;
+ 			}
+ 		}
+ 
+ 		if (err) {
+ 			err = reuseport_alloc(sk, any);
+ 			if (err)
+-				return err;
++				goto out;
+ 		}
+ 	}
+ 
+-	write_lock(&head->lock);
+ 	hlist_add_head(&ep->node, &head->chain);
++out:
+ 	write_unlock(&head->lock);
+-	return 0;
++	return err;
+ }
+ 
+ /* Add an endpoint to the hash. Local BH-safe. */
+@@ -803,10 +807,9 @@ static void __sctp_unhash_endpoint(struct sctp_endpoint *ep)
+ 
+ 	head = &sctp_ep_hashtable[ep->hashent];
+ 
++	write_lock(&head->lock);
+ 	if (rcu_access_pointer(sk->sk_reuseport_cb))
+ 		reuseport_detach_sock(sk);
+-
+-	write_lock(&head->lock);
+ 	hlist_del_init(&ep->node);
+ 	write_unlock(&head->lock);
+ }
+diff --git a/net/smc/smc_stats.h b/net/smc/smc_stats.h
+index 9d32058db2b5d..e19177ce40923 100644
+--- a/net/smc/smc_stats.h
++++ b/net/smc/smc_stats.h
+@@ -19,7 +19,7 @@
+ 
+ #include "smc_clc.h"
+ 
+-#define SMC_MAX_FBACK_RSN_CNT 30
++#define SMC_MAX_FBACK_RSN_CNT 36
+ 
+ enum {
+ 	SMC_BUF_8K,
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index 6debf4fd42d4e..cef623ea15060 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -369,8 +369,10 @@ static void rpc_make_runnable(struct workqueue_struct *wq,
+ 	if (RPC_IS_ASYNC(task)) {
+ 		INIT_WORK(&task->u.tk_work, rpc_async_schedule);
+ 		queue_work(wq, &task->u.tk_work);
+-	} else
++	} else {
++		smp_mb__after_atomic();
+ 		wake_up_bit(&task->tk_runstate, RPC_TASK_QUEUED);
++	}
+ }
+ 
+ /*
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 11cb5badafb6d..be5266007b489 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -1473,6 +1473,7 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ 	struct unix_sock *u = unix_sk(sk), *newu, *otheru;
+ 	struct net *net = sock_net(sk);
+ 	struct sk_buff *skb = NULL;
++	unsigned char state;
+ 	long timeo;
+ 	int err;
+ 
+@@ -1523,7 +1524,6 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ 		goto out;
+ 	}
+ 
+-	/* Latch state of peer */
+ 	unix_state_lock(other);
+ 
+ 	/* Apparently VFS overslept socket death. Retry. */
+@@ -1553,37 +1553,21 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ 		goto restart;
+ 	}
+ 
+-	/* Latch our state.
+-
+-	   It is tricky place. We need to grab our state lock and cannot
+-	   drop lock on peer. It is dangerous because deadlock is
+-	   possible. Connect to self case and simultaneous
+-	   attempt to connect are eliminated by checking socket
+-	   state. other is TCP_LISTEN, if sk is TCP_LISTEN we
+-	   check this before attempt to grab lock.
+-
+-	   Well, and we have to recheck the state after socket locked.
++	/* self connect and simultaneous connect are eliminated
++	 * by rejecting TCP_LISTEN socket to avoid deadlock.
+ 	 */
+-	switch (READ_ONCE(sk->sk_state)) {
+-	case TCP_CLOSE:
+-		/* This is ok... continue with connect */
+-		break;
+-	case TCP_ESTABLISHED:
+-		/* Socket is already connected */
+-		err = -EISCONN;
+-		goto out_unlock;
+-	default:
+-		err = -EINVAL;
++	state = READ_ONCE(sk->sk_state);
++	if (unlikely(state != TCP_CLOSE)) {
++		err = state == TCP_ESTABLISHED ? -EISCONN : -EINVAL;
+ 		goto out_unlock;
+ 	}
+ 
+ 	unix_state_lock_nested(sk, U_LOCK_SECOND);
+ 
+-	if (sk->sk_state != TCP_CLOSE) {
++	if (unlikely(sk->sk_state != TCP_CLOSE)) {
++		err = sk->sk_state == TCP_ESTABLISHED ? -EISCONN : -EINVAL;
+ 		unix_state_unlock(sk);
+-		unix_state_unlock(other);
+-		sock_put(other);
+-		goto restart;
++		goto out_unlock;
+ 	}
+ 
+ 	err = security_unix_stream_connect(sk, other, newsk);
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 0fd075238fc74..c2829d673bc76 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3422,6 +3422,33 @@ static int __nl80211_set_channel(struct cfg80211_registered_device *rdev,
+ 			if (chandef.chan != cur_chan)
+ 				return -EBUSY;
+ 
++			/* only allow this for regular channel widths */
++			switch (wdev->links[link_id].ap.chandef.width) {
++			case NL80211_CHAN_WIDTH_20_NOHT:
++			case NL80211_CHAN_WIDTH_20:
++			case NL80211_CHAN_WIDTH_40:
++			case NL80211_CHAN_WIDTH_80:
++			case NL80211_CHAN_WIDTH_80P80:
++			case NL80211_CHAN_WIDTH_160:
++			case NL80211_CHAN_WIDTH_320:
++				break;
++			default:
++				return -EINVAL;
++			}
++
++			switch (chandef.width) {
++			case NL80211_CHAN_WIDTH_20_NOHT:
++			case NL80211_CHAN_WIDTH_20:
++			case NL80211_CHAN_WIDTH_40:
++			case NL80211_CHAN_WIDTH_80:
++			case NL80211_CHAN_WIDTH_80P80:
++			case NL80211_CHAN_WIDTH_160:
++			case NL80211_CHAN_WIDTH_320:
++				break;
++			default:
++				return -EINVAL;
++			}
++
+ 			result = rdev_set_ap_chanwidth(rdev, dev, link_id,
+ 						       &chandef);
+ 			if (result)
+@@ -4458,10 +4485,7 @@ static void get_key_callback(void *c, struct key_params *params)
+ 	struct nlattr *key;
+ 	struct get_key_cookie *cookie = c;
+ 
+-	if ((params->key &&
+-	     nla_put(cookie->msg, NL80211_ATTR_KEY_DATA,
+-		     params->key_len, params->key)) ||
+-	    (params->seq &&
++	if ((params->seq &&
+ 	     nla_put(cookie->msg, NL80211_ATTR_KEY_SEQ,
+ 		     params->seq_len, params->seq)) ||
+ 	    (params->cipher &&
+@@ -4473,10 +4497,7 @@ static void get_key_callback(void *c, struct key_params *params)
+ 	if (!key)
+ 		goto nla_put_failure;
+ 
+-	if ((params->key &&
+-	     nla_put(cookie->msg, NL80211_KEY_DATA,
+-		     params->key_len, params->key)) ||
+-	    (params->seq &&
++	if ((params->seq &&
+ 	     nla_put(cookie->msg, NL80211_KEY_SEQ,
+ 		     params->seq_len, params->seq)) ||
+ 	    (params->cipher &&
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 707d203ba6527..78042ac2b71f2 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1989,6 +1989,8 @@ static int hdmi_add_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
+ }
+ 
+ static const struct snd_pci_quirk force_connect_list[] = {
++	SND_PCI_QUIRK(0x103c, 0x83e2, "HP EliteDesk 800 G4", 1),
++	SND_PCI_QUIRK(0x103c, 0x83ef, "HP MP9 G4 Retail System AMS", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x870f, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x871a, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x8711, "HP", 1),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index a6c1e2199e703..3840565ef8b02 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10671,6 +10671,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x8086, 0x3038, "Intel NUC 13", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0xf111, 0x0006, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0xf111, 0x0009, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 
+ #if 0
+ 	/* Below is a quirk table taken from the old code.
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 36dddf230c2c4..d597e59863ee3 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -409,6 +409,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "8A43"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
++			DMI_MATCH(DMI_BOARD_NAME, "8A44"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/codecs/cs-amp-lib.c b/sound/soc/codecs/cs-amp-lib.c
+index 287ac01a38735..605964af8afad 100644
+--- a/sound/soc/codecs/cs-amp-lib.c
++++ b/sound/soc/codecs/cs-amp-lib.c
+@@ -108,7 +108,7 @@ static efi_status_t cs_amp_get_efi_variable(efi_char16_t *name,
+ 
+ 	KUNIT_STATIC_STUB_REDIRECT(cs_amp_get_efi_variable, name, guid, size, buf);
+ 
+-	if (IS_ENABLED(CONFIG_EFI))
++	if (efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
+ 		return efi.get_variable(name, guid, &attr, size, buf);
+ 
+ 	return EFI_NOT_FOUND;
+diff --git a/sound/soc/codecs/cs35l56-sdw.c b/sound/soc/codecs/cs35l56-sdw.c
+index 70ff55c1517fe..29a5476af95ae 100644
+--- a/sound/soc/codecs/cs35l56-sdw.c
++++ b/sound/soc/codecs/cs35l56-sdw.c
+@@ -23,6 +23,79 @@
+ /* Register addresses are offset when sent over SoundWire */
+ #define CS35L56_SDW_ADDR_OFFSET		0x8000
+ 
++/* Cirrus bus bridge registers */
++#define CS35L56_SDW_MEM_ACCESS_STATUS	0xd0
++#define CS35L56_SDW_MEM_READ_DATA	0xd8
++
++#define CS35L56_SDW_LAST_LATE		BIT(3)
++#define CS35L56_SDW_CMD_IN_PROGRESS	BIT(2)
++#define CS35L56_SDW_RDATA_RDY		BIT(0)
++
++#define CS35L56_LATE_READ_POLL_US	10
++#define CS35L56_LATE_READ_TIMEOUT_US	1000
++
++static int cs35l56_sdw_poll_mem_status(struct sdw_slave *peripheral,
++				       unsigned int mask,
++				       unsigned int match)
++{
++	int ret, val;
++
++	ret = read_poll_timeout(sdw_read_no_pm, val,
++				(val < 0) || ((val & mask) == match),
++				CS35L56_LATE_READ_POLL_US, CS35L56_LATE_READ_TIMEOUT_US,
++				false, peripheral, CS35L56_SDW_MEM_ACCESS_STATUS);
++	if (ret < 0)
++		return ret;
++
++	if (val < 0)
++		return val;
++
++	return 0;
++}
++
++static int cs35l56_sdw_slow_read(struct sdw_slave *peripheral, unsigned int reg,
++				 u8 *buf, size_t val_size)
++{
++	int ret, i;
++
++	reg += CS35L56_SDW_ADDR_OFFSET;
++
++	for (i = 0; i < val_size; i += sizeof(u32)) {
++		/* Poll for bus bridge idle */
++		ret = cs35l56_sdw_poll_mem_status(peripheral,
++						  CS35L56_SDW_CMD_IN_PROGRESS,
++						  0);
++		if (ret < 0) {
++			dev_err(&peripheral->dev, "!CMD_IN_PROGRESS fail: %d\n", ret);
++			return ret;
++		}
++
++		/* Reading LSByte triggers read of register to holding buffer */
++		sdw_read_no_pm(peripheral, reg + i);
++
++		/* Wait for data available */
++		ret = cs35l56_sdw_poll_mem_status(peripheral,
++						  CS35L56_SDW_RDATA_RDY,
++						  CS35L56_SDW_RDATA_RDY);
++		if (ret < 0) {
++			dev_err(&peripheral->dev, "RDATA_RDY fail: %d\n", ret);
++			return ret;
++		}
++
++		/* Read data from buffer */
++		ret = sdw_nread_no_pm(peripheral, CS35L56_SDW_MEM_READ_DATA,
++				      sizeof(u32), &buf[i]);
++		if (ret) {
++			dev_err(&peripheral->dev, "Late read @%#x failed: %d\n", reg + i, ret);
++			return ret;
++		}
++
++		swab32s((u32 *)&buf[i]);
++	}
++
++	return 0;
++}
++
+ static int cs35l56_sdw_read_one(struct sdw_slave *peripheral, unsigned int reg, void *buf)
+ {
+ 	int ret;
+@@ -48,6 +121,10 @@ static int cs35l56_sdw_read(void *context, const void *reg_buf,
+ 	int ret;
+ 
+ 	reg = le32_to_cpu(*(const __le32 *)reg_buf);
++
++	if (cs35l56_is_otp_register(reg))
++		return cs35l56_sdw_slow_read(peripheral, reg, buf8, val_size);
++
+ 	reg += CS35L56_SDW_ADDR_OFFSET;
+ 
+ 	if (val_size == 4)
+diff --git a/sound/soc/codecs/cs35l56-shared.c b/sound/soc/codecs/cs35l56-shared.c
+index f609cade805d7..6d821a793045e 100644
+--- a/sound/soc/codecs/cs35l56-shared.c
++++ b/sound/soc/codecs/cs35l56-shared.c
+@@ -20,6 +20,18 @@ static const struct reg_sequence cs35l56_patch[] = {
+ 	 * Firmware can change these to non-defaults to satisfy SDCA.
+ 	 * Ensure that they are at known defaults.
+ 	 */
++	{ CS35L56_ASP1_ENABLES1,		0x00000000 },
++	{ CS35L56_ASP1_CONTROL1,		0x00000028 },
++	{ CS35L56_ASP1_CONTROL2,		0x18180200 },
++	{ CS35L56_ASP1_CONTROL3,		0x00000002 },
++	{ CS35L56_ASP1_FRAME_CONTROL1,		0x03020100 },
++	{ CS35L56_ASP1_FRAME_CONTROL5,		0x00020100 },
++	{ CS35L56_ASP1_DATA_CONTROL1,		0x00000018 },
++	{ CS35L56_ASP1_DATA_CONTROL5,		0x00000018 },
++	{ CS35L56_ASP1TX1_INPUT,		0x00000000 },
++	{ CS35L56_ASP1TX2_INPUT,		0x00000000 },
++	{ CS35L56_ASP1TX3_INPUT,		0x00000000 },
++	{ CS35L56_ASP1TX4_INPUT,		0x00000000 },
+ 	{ CS35L56_SWIRE_DP3_CH1_INPUT,		0x00000018 },
+ 	{ CS35L56_SWIRE_DP3_CH2_INPUT,		0x00000019 },
+ 	{ CS35L56_SWIRE_DP3_CH3_INPUT,		0x00000029 },
+@@ -41,12 +53,18 @@ EXPORT_SYMBOL_NS_GPL(cs35l56_set_patch, SND_SOC_CS35L56_SHARED);
+ static const struct reg_default cs35l56_reg_defaults[] = {
+ 	/* no defaults for OTP_MEM - first read populates cache */
+ 
+-	/*
+-	 * No defaults for ASP1 control or ASP1TX mixer. See
+-	 * cs35l56_populate_asp1_register_defaults() and
+-	 * cs35l56_sync_asp1_mixer_widgets_with_firmware().
+-	 */
+-
++	{ CS35L56_ASP1_ENABLES1,		0x00000000 },
++	{ CS35L56_ASP1_CONTROL1,		0x00000028 },
++	{ CS35L56_ASP1_CONTROL2,		0x18180200 },
++	{ CS35L56_ASP1_CONTROL3,		0x00000002 },
++	{ CS35L56_ASP1_FRAME_CONTROL1,		0x03020100 },
++	{ CS35L56_ASP1_FRAME_CONTROL5,		0x00020100 },
++	{ CS35L56_ASP1_DATA_CONTROL1,		0x00000018 },
++	{ CS35L56_ASP1_DATA_CONTROL5,		0x00000018 },
++	{ CS35L56_ASP1TX1_INPUT,		0x00000000 },
++	{ CS35L56_ASP1TX2_INPUT,		0x00000000 },
++	{ CS35L56_ASP1TX3_INPUT,		0x00000000 },
++	{ CS35L56_ASP1TX4_INPUT,		0x00000000 },
+ 	{ CS35L56_SWIRE_DP3_CH1_INPUT,		0x00000018 },
+ 	{ CS35L56_SWIRE_DP3_CH2_INPUT,		0x00000019 },
+ 	{ CS35L56_SWIRE_DP3_CH3_INPUT,		0x00000029 },
+@@ -206,77 +224,6 @@ static bool cs35l56_volatile_reg(struct device *dev, unsigned int reg)
+ 	}
+ }
+ 
+-static const struct reg_sequence cs35l56_asp1_defaults[] = {
+-	REG_SEQ0(CS35L56_ASP1_ENABLES1,		0x00000000),
+-	REG_SEQ0(CS35L56_ASP1_CONTROL1,		0x00000028),
+-	REG_SEQ0(CS35L56_ASP1_CONTROL2,		0x18180200),
+-	REG_SEQ0(CS35L56_ASP1_CONTROL3,		0x00000002),
+-	REG_SEQ0(CS35L56_ASP1_FRAME_CONTROL1,	0x03020100),
+-	REG_SEQ0(CS35L56_ASP1_FRAME_CONTROL5,	0x00020100),
+-	REG_SEQ0(CS35L56_ASP1_DATA_CONTROL1,	0x00000018),
+-	REG_SEQ0(CS35L56_ASP1_DATA_CONTROL5,	0x00000018),
+-	REG_SEQ0(CS35L56_ASP1TX1_INPUT,		0x00000000),
+-	REG_SEQ0(CS35L56_ASP1TX2_INPUT,		0x00000000),
+-	REG_SEQ0(CS35L56_ASP1TX3_INPUT,		0x00000000),
+-	REG_SEQ0(CS35L56_ASP1TX4_INPUT,		0x00000000),
+-};
+-
+-/*
+- * The firmware can have control of the ASP so we don't provide regmap
+- * with defaults for these registers, to prevent a regcache_sync() from
+- * overwriting the firmware settings. But if the machine driver hooks up
+- * the ASP it means the driver is taking control of the ASP, so then the
+- * registers are populated with the defaults.
+- */
+-int cs35l56_init_asp1_regs_for_driver_control(struct cs35l56_base *cs35l56_base)
+-{
+-	if (!cs35l56_base->fw_owns_asp1)
+-		return 0;
+-
+-	cs35l56_base->fw_owns_asp1 = false;
+-
+-	return regmap_multi_reg_write(cs35l56_base->regmap, cs35l56_asp1_defaults,
+-				      ARRAY_SIZE(cs35l56_asp1_defaults));
+-}
+-EXPORT_SYMBOL_NS_GPL(cs35l56_init_asp1_regs_for_driver_control, SND_SOC_CS35L56_SHARED);
+-
+-/*
+- * The firmware boot sequence can overwrite the ASP1 config registers so that
+- * they don't match regmap's view of their values. Rewrite the values from the
+- * regmap cache into the hardware registers.
+- */
+-int cs35l56_force_sync_asp1_registers_from_cache(struct cs35l56_base *cs35l56_base)
+-{
+-	struct reg_sequence asp1_regs[ARRAY_SIZE(cs35l56_asp1_defaults)];
+-	int i, ret;
+-
+-	if (cs35l56_base->fw_owns_asp1)
+-		return 0;
+-
+-	memcpy(asp1_regs, cs35l56_asp1_defaults, sizeof(asp1_regs));
+-
+-	/* Read current values from regmap cache into the write sequence */
+-	for (i = 0; i < ARRAY_SIZE(asp1_regs); ++i) {
+-		ret = regmap_read(cs35l56_base->regmap, asp1_regs[i].reg, &asp1_regs[i].def);
+-		if (ret)
+-			goto err;
+-	}
+-
+-	/* Write the values cache-bypassed so that they will be written to silicon */
+-	ret = regmap_multi_reg_write_bypassed(cs35l56_base->regmap, asp1_regs,
+-					      ARRAY_SIZE(asp1_regs));
+-	if (ret)
+-		goto err;
+-
+-	return 0;
+-
+-err:
+-	dev_err(cs35l56_base->dev, "Failed to sync ASP1 registers: %d\n", ret);
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL_NS_GPL(cs35l56_force_sync_asp1_registers_from_cache, SND_SOC_CS35L56_SHARED);
+-
+ int cs35l56_mbox_send(struct cs35l56_base *cs35l56_base, unsigned int command)
+ {
+ 	unsigned int val;
+diff --git a/sound/soc/codecs/cs35l56.c b/sound/soc/codecs/cs35l56.c
+index 7f2f2f8c13fae..84c34f5b1a516 100644
+--- a/sound/soc/codecs/cs35l56.c
++++ b/sound/soc/codecs/cs35l56.c
+@@ -63,131 +63,6 @@ static int cs35l56_dspwait_put_volsw(struct snd_kcontrol *kcontrol,
+ 	return snd_soc_put_volsw(kcontrol, ucontrol);
+ }
+ 
+-static const unsigned short cs35l56_asp1_mixer_regs[] = {
+-	CS35L56_ASP1TX1_INPUT, CS35L56_ASP1TX2_INPUT,
+-	CS35L56_ASP1TX3_INPUT, CS35L56_ASP1TX4_INPUT,
+-};
+-
+-static const char * const cs35l56_asp1_mux_control_names[] = {
+-	"ASP1 TX1 Source", "ASP1 TX2 Source", "ASP1 TX3 Source", "ASP1 TX4 Source"
+-};
+-
+-static int cs35l56_sync_asp1_mixer_widgets_with_firmware(struct cs35l56_private *cs35l56)
+-{
+-	struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(cs35l56->component);
+-	const char *prefix = cs35l56->component->name_prefix;
+-	char full_name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN];
+-	const char *name;
+-	struct snd_kcontrol *kcontrol;
+-	struct soc_enum *e;
+-	unsigned int val[4];
+-	int i, item, ret;
+-
+-	if (cs35l56->asp1_mixer_widgets_initialized)
+-		return 0;
+-
+-	/*
+-	 * Resume so we can read the registers from silicon if the regmap
+-	 * cache has not yet been populated.
+-	 */
+-	ret = pm_runtime_resume_and_get(cs35l56->base.dev);
+-	if (ret < 0)
+-		return ret;
+-
+-	/* Wait for firmware download and reboot */
+-	cs35l56_wait_dsp_ready(cs35l56);
+-
+-	ret = regmap_bulk_read(cs35l56->base.regmap, CS35L56_ASP1TX1_INPUT,
+-			       val, ARRAY_SIZE(val));
+-
+-	pm_runtime_mark_last_busy(cs35l56->base.dev);
+-	pm_runtime_put_autosuspend(cs35l56->base.dev);
+-
+-	if (ret) {
+-		dev_err(cs35l56->base.dev, "Failed to read ASP1 mixer regs: %d\n", ret);
+-		return ret;
+-	}
+-
+-	for (i = 0; i < ARRAY_SIZE(cs35l56_asp1_mux_control_names); ++i) {
+-		name = cs35l56_asp1_mux_control_names[i];
+-
+-		if (prefix) {
+-			snprintf(full_name, sizeof(full_name), "%s %s", prefix, name);
+-			name = full_name;
+-		}
+-
+-		kcontrol = snd_soc_card_get_kcontrol_locked(dapm->card, name);
+-		if (!kcontrol) {
+-			dev_warn(cs35l56->base.dev, "Could not find control %s\n", name);
+-			continue;
+-		}
+-
+-		e = (struct soc_enum *)kcontrol->private_value;
+-		item = snd_soc_enum_val_to_item(e, val[i] & CS35L56_ASP_TXn_SRC_MASK);
+-		snd_soc_dapm_mux_update_power(dapm, kcontrol, item, e, NULL);
+-	}
+-
+-	cs35l56->asp1_mixer_widgets_initialized = true;
+-
+-	return 0;
+-}
+-
+-static int cs35l56_dspwait_asp1tx_get(struct snd_kcontrol *kcontrol,
+-				      struct snd_ctl_elem_value *ucontrol)
+-{
+-	struct snd_soc_component *component = snd_soc_dapm_kcontrol_component(kcontrol);
+-	struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(component);
+-	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+-	int index = e->shift_l;
+-	unsigned int addr, val;
+-	int ret;
+-
+-	ret = cs35l56_sync_asp1_mixer_widgets_with_firmware(cs35l56);
+-	if (ret)
+-		return ret;
+-
+-	addr = cs35l56_asp1_mixer_regs[index];
+-	ret = regmap_read(cs35l56->base.regmap, addr, &val);
+-	if (ret)
+-		return ret;
+-
+-	val &= CS35L56_ASP_TXn_SRC_MASK;
+-	ucontrol->value.enumerated.item[0] = snd_soc_enum_val_to_item(e, val);
+-
+-	return 0;
+-}
+-
+-static int cs35l56_dspwait_asp1tx_put(struct snd_kcontrol *kcontrol,
+-				      struct snd_ctl_elem_value *ucontrol)
+-{
+-	struct snd_soc_component *component = snd_soc_dapm_kcontrol_component(kcontrol);
+-	struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol);
+-	struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(component);
+-	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+-	int item = ucontrol->value.enumerated.item[0];
+-	int index = e->shift_l;
+-	unsigned int addr, val;
+-	bool changed;
+-	int ret;
+-
+-	ret = cs35l56_sync_asp1_mixer_widgets_with_firmware(cs35l56);
+-	if (ret)
+-		return ret;
+-
+-	addr = cs35l56_asp1_mixer_regs[index];
+-	val = snd_soc_enum_item_to_val(e, item);
+-
+-	ret = regmap_update_bits_check(cs35l56->base.regmap, addr,
+-				       CS35L56_ASP_TXn_SRC_MASK, val, &changed);
+-	if (ret)
+-		return ret;
+-
+-	if (changed)
+-		snd_soc_dapm_mux_update_power(dapm, kcontrol, item, e, NULL);
+-
+-	return changed;
+-}
+-
+ static DECLARE_TLV_DB_SCALE(vol_tlv, -10000, 25, 0);
+ 
+ static const struct snd_kcontrol_new cs35l56_controls[] = {
+@@ -210,44 +85,40 @@ static const struct snd_kcontrol_new cs35l56_controls[] = {
+ };
+ 
+ static SOC_VALUE_ENUM_SINGLE_DECL(cs35l56_asp1tx1_enum,
+-				  SND_SOC_NOPM,
+-				  0, 0,
++				  CS35L56_ASP1TX1_INPUT,
++				  0, CS35L56_ASP_TXn_SRC_MASK,
+ 				  cs35l56_tx_input_texts,
+ 				  cs35l56_tx_input_values);
+ 
+ static const struct snd_kcontrol_new asp1_tx1_mux =
+-	SOC_DAPM_ENUM_EXT("ASP1TX1 SRC", cs35l56_asp1tx1_enum,
+-			  cs35l56_dspwait_asp1tx_get, cs35l56_dspwait_asp1tx_put);
++	SOC_DAPM_ENUM("ASP1TX1 SRC", cs35l56_asp1tx1_enum);
+ 
+ static SOC_VALUE_ENUM_SINGLE_DECL(cs35l56_asp1tx2_enum,
+-				  SND_SOC_NOPM,
+-				  1, 0,
++				  CS35L56_ASP1TX2_INPUT,
++				  0, CS35L56_ASP_TXn_SRC_MASK,
+ 				  cs35l56_tx_input_texts,
+ 				  cs35l56_tx_input_values);
+ 
+ static const struct snd_kcontrol_new asp1_tx2_mux =
+-	SOC_DAPM_ENUM_EXT("ASP1TX2 SRC", cs35l56_asp1tx2_enum,
+-			  cs35l56_dspwait_asp1tx_get, cs35l56_dspwait_asp1tx_put);
++	SOC_DAPM_ENUM("ASP1TX2 SRC", cs35l56_asp1tx2_enum);
+ 
+ static SOC_VALUE_ENUM_SINGLE_DECL(cs35l56_asp1tx3_enum,
+-				  SND_SOC_NOPM,
+-				  2, 0,
++				  CS35L56_ASP1TX3_INPUT,
++				  0, CS35L56_ASP_TXn_SRC_MASK,
+ 				  cs35l56_tx_input_texts,
+ 				  cs35l56_tx_input_values);
+ 
+ static const struct snd_kcontrol_new asp1_tx3_mux =
+-	SOC_DAPM_ENUM_EXT("ASP1TX3 SRC", cs35l56_asp1tx3_enum,
+-			  cs35l56_dspwait_asp1tx_get, cs35l56_dspwait_asp1tx_put);
++	SOC_DAPM_ENUM("ASP1TX3 SRC", cs35l56_asp1tx3_enum);
+ 
+ static SOC_VALUE_ENUM_SINGLE_DECL(cs35l56_asp1tx4_enum,
+-				  SND_SOC_NOPM,
+-				  3, 0,
++				  CS35L56_ASP1TX4_INPUT,
++				  0, CS35L56_ASP_TXn_SRC_MASK,
+ 				  cs35l56_tx_input_texts,
+ 				  cs35l56_tx_input_values);
+ 
+ static const struct snd_kcontrol_new asp1_tx4_mux =
+-	SOC_DAPM_ENUM_EXT("ASP1TX4 SRC", cs35l56_asp1tx4_enum,
+-			  cs35l56_dspwait_asp1tx_get, cs35l56_dspwait_asp1tx_put);
++	SOC_DAPM_ENUM("ASP1TX4 SRC", cs35l56_asp1tx4_enum);
+ 
+ static SOC_VALUE_ENUM_SINGLE_DECL(cs35l56_sdw1tx1_enum,
+ 				CS35L56_SWIRE_DP3_CH1_INPUT,
+@@ -285,21 +156,6 @@ static SOC_VALUE_ENUM_SINGLE_DECL(cs35l56_sdw1tx4_enum,
+ static const struct snd_kcontrol_new sdw1_tx4_mux =
+ 	SOC_DAPM_ENUM("SDW1TX4 SRC", cs35l56_sdw1tx4_enum);
+ 
+-static int cs35l56_asp1_cfg_event(struct snd_soc_dapm_widget *w,
+-				  struct snd_kcontrol *kcontrol, int event)
+-{
+-	struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+-	struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(component);
+-
+-	switch (event) {
+-	case SND_SOC_DAPM_PRE_PMU:
+-		/* Override register values set by firmware boot */
+-		return cs35l56_force_sync_asp1_registers_from_cache(&cs35l56->base);
+-	default:
+-		return 0;
+-	}
+-}
+-
+ static int cs35l56_play_event(struct snd_soc_dapm_widget *w,
+ 			      struct snd_kcontrol *kcontrol, int event)
+ {
+@@ -336,9 +192,6 @@ static const struct snd_soc_dapm_widget cs35l56_dapm_widgets[] = {
+ 	SND_SOC_DAPM_REGULATOR_SUPPLY("VDD_B", 0, 0),
+ 	SND_SOC_DAPM_REGULATOR_SUPPLY("VDD_AMP", 0, 0),
+ 
+-	SND_SOC_DAPM_SUPPLY("ASP1 CFG", SND_SOC_NOPM, 0, 0, cs35l56_asp1_cfg_event,
+-			    SND_SOC_DAPM_PRE_PMU),
+-
+ 	SND_SOC_DAPM_SUPPLY("PLAY", SND_SOC_NOPM, 0, 0, cs35l56_play_event,
+ 			    SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
+ 
+@@ -406,9 +259,6 @@ static const struct snd_soc_dapm_route cs35l56_audio_map[] = {
+ 	{ "AMP", NULL, "VDD_B" },
+ 	{ "AMP", NULL, "VDD_AMP" },
+ 
+-	{ "ASP1 Playback", NULL, "ASP1 CFG" },
+-	{ "ASP1 Capture", NULL, "ASP1 CFG" },
+-
+ 	{ "ASP1 Playback", NULL, "PLAY" },
+ 	{ "SDW1 Playback", NULL, "PLAY" },
+ 
+@@ -459,14 +309,9 @@ static int cs35l56_asp_dai_set_fmt(struct snd_soc_dai *codec_dai, unsigned int f
+ {
+ 	struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(codec_dai->component);
+ 	unsigned int val;
+-	int ret;
+ 
+ 	dev_dbg(cs35l56->base.dev, "%s: %#x\n", __func__, fmt);
+ 
+-	ret = cs35l56_init_asp1_regs_for_driver_control(&cs35l56->base);
+-	if (ret)
+-		return ret;
+-
+ 	switch (fmt & SND_SOC_DAIFMT_CLOCK_PROVIDER_MASK) {
+ 	case SND_SOC_DAIFMT_CBC_CFC:
+ 		break;
+@@ -540,11 +385,6 @@ static int cs35l56_asp_dai_set_tdm_slot(struct snd_soc_dai *dai, unsigned int tx
+ 					unsigned int rx_mask, int slots, int slot_width)
+ {
+ 	struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(dai->component);
+-	int ret;
+-
+-	ret = cs35l56_init_asp1_regs_for_driver_control(&cs35l56->base);
+-	if (ret)
+-		return ret;
+ 
+ 	if ((slots == 0) || (slot_width == 0)) {
+ 		dev_dbg(cs35l56->base.dev, "tdm config cleared\n");
+@@ -593,11 +433,6 @@ static int cs35l56_asp_dai_hw_params(struct snd_pcm_substream *substream,
+ 	struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(dai->component);
+ 	unsigned int rate = params_rate(params);
+ 	u8 asp_width, asp_wl;
+-	int ret;
+-
+-	ret = cs35l56_init_asp1_regs_for_driver_control(&cs35l56->base);
+-	if (ret)
+-		return ret;
+ 
+ 	asp_wl = params_width(params);
+ 	if (cs35l56->asp_slot_width)
+@@ -654,11 +489,7 @@ static int cs35l56_asp_dai_set_sysclk(struct snd_soc_dai *dai,
+ 				      int clk_id, unsigned int freq, int dir)
+ {
+ 	struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(dai->component);
+-	int freq_id, ret;
+-
+-	ret = cs35l56_init_asp1_regs_for_driver_control(&cs35l56->base);
+-	if (ret)
+-		return ret;
++	int freq_id;
+ 
+ 	if (freq == 0) {
+ 		cs35l56->sysclk_set = false;
+@@ -1039,13 +870,6 @@ static int cs35l56_component_probe(struct snd_soc_component *component)
+ 	debugfs_create_bool("can_hibernate", 0444, debugfs_root, &cs35l56->base.can_hibernate);
+ 	debugfs_create_bool("fw_patched", 0444, debugfs_root, &cs35l56->base.fw_patched);
+ 
+-	/*
+-	 * The widgets for the ASP1TX mixer can't be initialized
+-	 * until the firmware has been downloaded and rebooted.
+-	 */
+-	regcache_drop_region(cs35l56->base.regmap, CS35L56_ASP1TX1_INPUT, CS35L56_ASP1TX4_INPUT);
+-	cs35l56->asp1_mixer_widgets_initialized = false;
+-
+ 	queue_work(cs35l56->dsp_wq, &cs35l56->dsp_work);
+ 
+ 	return 0;
+@@ -1436,9 +1260,6 @@ int cs35l56_common_probe(struct cs35l56_private *cs35l56)
+ 	cs35l56->base.cal_index = -1;
+ 	cs35l56->speaker_id = -ENOENT;
+ 
+-	/* Assume that the firmware owns ASP1 until we know different */
+-	cs35l56->base.fw_owns_asp1 = true;
+-
+ 	dev_set_drvdata(cs35l56->base.dev, cs35l56);
+ 
+ 	cs35l56_fill_supply_names(cs35l56->supplies);
+diff --git a/sound/soc/codecs/cs35l56.h b/sound/soc/codecs/cs35l56.h
+index b000e7365e406..200f695efca3d 100644
+--- a/sound/soc/codecs/cs35l56.h
++++ b/sound/soc/codecs/cs35l56.h
+@@ -51,7 +51,6 @@ struct cs35l56_private {
+ 	u8 asp_slot_count;
+ 	bool tdm_mode;
+ 	bool sysclk_set;
+-	bool asp1_mixer_widgets_initialized;
+ 	u8 old_sdw_clock_scale;
+ };
+ 
+diff --git a/sound/soc/codecs/wcd938x-sdw.c b/sound/soc/codecs/wcd938x-sdw.c
+index a1f04010da95f..132c1d24f8f6e 100644
+--- a/sound/soc/codecs/wcd938x-sdw.c
++++ b/sound/soc/codecs/wcd938x-sdw.c
+@@ -1252,12 +1252,12 @@ static int wcd9380_probe(struct sdw_slave *pdev,
+ 	pdev->prop.lane_control_support = true;
+ 	pdev->prop.simple_clk_stop_capable = true;
+ 	if (wcd->is_tx) {
+-		pdev->prop.source_ports = GENMASK(WCD938X_MAX_SWR_PORTS, 0);
++		pdev->prop.source_ports = GENMASK(WCD938X_MAX_SWR_PORTS - 1, 0);
+ 		pdev->prop.src_dpn_prop = wcd938x_dpn_prop;
+ 		wcd->ch_info = &wcd938x_sdw_tx_ch_info[0];
+ 		pdev->prop.wake_capable = true;
+ 	} else {
+-		pdev->prop.sink_ports = GENMASK(WCD938X_MAX_SWR_PORTS, 0);
++		pdev->prop.sink_ports = GENMASK(WCD938X_MAX_SWR_PORTS - 1, 0);
+ 		pdev->prop.sink_dpn_prop = wcd938x_dpn_prop;
+ 		wcd->ch_info = &wcd938x_sdw_rx_ch_info[0];
+ 	}
+diff --git a/sound/soc/codecs/wcd939x-sdw.c b/sound/soc/codecs/wcd939x-sdw.c
+index 8acb5651c5bca..392f4dcab3e09 100644
+--- a/sound/soc/codecs/wcd939x-sdw.c
++++ b/sound/soc/codecs/wcd939x-sdw.c
+@@ -1453,12 +1453,12 @@ static int wcd9390_probe(struct sdw_slave *pdev, const struct sdw_device_id *id)
+ 	pdev->prop.lane_control_support = true;
+ 	pdev->prop.simple_clk_stop_capable = true;
+ 	if (wcd->is_tx) {
+-		pdev->prop.source_ports = GENMASK(WCD939X_MAX_TX_SWR_PORTS, 0);
++		pdev->prop.source_ports = GENMASK(WCD939X_MAX_TX_SWR_PORTS - 1, 0);
+ 		pdev->prop.src_dpn_prop = wcd939x_tx_dpn_prop;
+ 		wcd->ch_info = &wcd939x_sdw_tx_ch_info[0];
+ 		pdev->prop.wake_capable = true;
+ 	} else {
+-		pdev->prop.sink_ports = GENMASK(WCD939X_MAX_RX_SWR_PORTS, 0);
++		pdev->prop.sink_ports = GENMASK(WCD939X_MAX_RX_SWR_PORTS - 1, 0);
+ 		pdev->prop.sink_dpn_prop = wcd939x_rx_dpn_prop;
+ 		wcd->ch_info = &wcd939x_sdw_rx_ch_info[0];
+ 	}
+diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c
+index 1253695bebd86..53b828f681020 100644
+--- a/sound/soc/codecs/wsa881x.c
++++ b/sound/soc/codecs/wsa881x.c
+@@ -1152,7 +1152,7 @@ static int wsa881x_probe(struct sdw_slave *pdev,
+ 	wsa881x->sconfig.frame_rate = 48000;
+ 	wsa881x->sconfig.direction = SDW_DATA_DIR_RX;
+ 	wsa881x->sconfig.type = SDW_STREAM_PDM;
+-	pdev->prop.sink_ports = GENMASK(WSA881X_MAX_SWR_PORTS, 0);
++	pdev->prop.sink_ports = GENMASK(WSA881X_MAX_SWR_PORTS - 1, 0);
+ 	pdev->prop.sink_dpn_prop = wsa_sink_dpn_prop;
+ 	pdev->prop.scp_int1_mask = SDW_SCP_INT1_BUS_CLASH | SDW_SCP_INT1_PARITY;
+ 	pdev->prop.clk_stop_mode1 = true;
+diff --git a/sound/soc/codecs/wsa883x.c b/sound/soc/codecs/wsa883x.c
+index a2e86ef7d18f5..2169d93989841 100644
+--- a/sound/soc/codecs/wsa883x.c
++++ b/sound/soc/codecs/wsa883x.c
+@@ -1399,7 +1399,15 @@ static int wsa883x_probe(struct sdw_slave *pdev,
+ 	wsa883x->sconfig.direction = SDW_DATA_DIR_RX;
+ 	wsa883x->sconfig.type = SDW_STREAM_PDM;
+ 
+-	pdev->prop.sink_ports = GENMASK(WSA883X_MAX_SWR_PORTS, 0);
++	/**
++	 * Port map index starts with 0, however the data port for this codec
++	 * are from index 1
++	 */
++	if (of_property_read_u32_array(dev->of_node, "qcom,port-mapping", &pdev->m_port_map[1],
++					WSA883X_MAX_SWR_PORTS))
++		dev_dbg(dev, "Static Port mapping not specified\n");
++
++	pdev->prop.sink_ports = GENMASK(WSA883X_MAX_SWR_PORTS - 1, 0);
+ 	pdev->prop.simple_clk_stop_capable = true;
+ 	pdev->prop.sink_dpn_prop = wsa_sink_dpn_prop;
+ 	pdev->prop.scp_int1_mask = SDW_SCP_INT1_BUS_CLASH | SDW_SCP_INT1_PARITY;
+diff --git a/sound/soc/codecs/wsa884x.c b/sound/soc/codecs/wsa884x.c
+index a9767ef0e39d1..de4caf61eef9e 100644
+--- a/sound/soc/codecs/wsa884x.c
++++ b/sound/soc/codecs/wsa884x.c
+@@ -1887,7 +1887,15 @@ static int wsa884x_probe(struct sdw_slave *pdev,
+ 	wsa884x->sconfig.direction = SDW_DATA_DIR_RX;
+ 	wsa884x->sconfig.type = SDW_STREAM_PDM;
+ 
+-	pdev->prop.sink_ports = GENMASK(WSA884X_MAX_SWR_PORTS, 0);
++	/**
++	 * Port map index starts with 0, however the data port for this codec
++	 * are from index 1
++	 */
++	if (of_property_read_u32_array(dev->of_node, "qcom,port-mapping", &pdev->m_port_map[1],
++					WSA884X_MAX_SWR_PORTS))
++		dev_dbg(dev, "Static Port mapping not specified\n");
++
++	pdev->prop.sink_ports = GENMASK(WSA884X_MAX_SWR_PORTS - 1, 0);
+ 	pdev->prop.simple_clk_stop_capable = true;
+ 	pdev->prop.sink_dpn_prop = wsa884x_sink_dpn_prop;
+ 	pdev->prop.scp_int1_mask = SDW_SCP_INT1_BUS_CLASH | SDW_SCP_INT1_PARITY;
+diff --git a/sound/soc/meson/axg-fifo.c b/sound/soc/meson/axg-fifo.c
+index 59abe0b3c59fb..486c56a84552d 100644
+--- a/sound/soc/meson/axg-fifo.c
++++ b/sound/soc/meson/axg-fifo.c
+@@ -207,25 +207,18 @@ static irqreturn_t axg_fifo_pcm_irq_block(int irq, void *dev_id)
+ 	status = FIELD_GET(STATUS1_INT_STS, status);
+ 	axg_fifo_ack_irq(fifo, status);
+ 
+-	/* Use the thread to call period elapsed on nonatomic links */
+-	if (status & FIFO_INT_COUNT_REPEAT)
+-		return IRQ_WAKE_THREAD;
++	if (status & ~FIFO_INT_COUNT_REPEAT)
++		dev_dbg(axg_fifo_dev(ss), "unexpected irq - STS 0x%02x\n",
++			status);
+ 
+-	dev_dbg(axg_fifo_dev(ss), "unexpected irq - STS 0x%02x\n",
+-		status);
++	if (status & FIFO_INT_COUNT_REPEAT) {
++		snd_pcm_period_elapsed(ss);
++		return IRQ_HANDLED;
++	}
+ 
+ 	return IRQ_NONE;
+ }
+ 
+-static irqreturn_t axg_fifo_pcm_irq_block_thread(int irq, void *dev_id)
+-{
+-	struct snd_pcm_substream *ss = dev_id;
+-
+-	snd_pcm_period_elapsed(ss);
+-
+-	return IRQ_HANDLED;
+-}
+-
+ int axg_fifo_pcm_open(struct snd_soc_component *component,
+ 		      struct snd_pcm_substream *ss)
+ {
+@@ -251,8 +244,9 @@ int axg_fifo_pcm_open(struct snd_soc_component *component,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = request_threaded_irq(fifo->irq, axg_fifo_pcm_irq_block,
+-				   axg_fifo_pcm_irq_block_thread,
++	/* Use the threaded irq handler only with non-atomic links */
++	ret = request_threaded_irq(fifo->irq, NULL,
++				   axg_fifo_pcm_irq_block,
+ 				   IRQF_ONESHOT, dev_name(dev), ss);
+ 	if (ret)
+ 		return ret;
+diff --git a/sound/soc/sof/mediatek/mt8195/mt8195.c b/sound/soc/sof/mediatek/mt8195/mt8195.c
+index 31dc98d1b1d8b..8d3fc167cd810 100644
+--- a/sound/soc/sof/mediatek/mt8195/mt8195.c
++++ b/sound/soc/sof/mediatek/mt8195/mt8195.c
+@@ -573,7 +573,7 @@ static const struct snd_sof_dsp_ops sof_mt8195_ops = {
+ static struct snd_sof_of_mach sof_mt8195_machs[] = {
+ 	{
+ 		.compatible = "google,tomato",
+-		.sof_tplg_filename = "sof-mt8195-mt6359-rt1019-rt5682-dts.tplg"
++		.sof_tplg_filename = "sof-mt8195-mt6359-rt1019-rt5682.tplg"
+ 	}, {
+ 		.compatible = "mediatek,mt8195",
+ 		.sof_tplg_filename = "sof-mt8195.tplg"
+diff --git a/sound/soc/sti/sti_uniperif.c b/sound/soc/sti/sti_uniperif.c
+index ba824f14a39cf..a7956e5a4ee5d 100644
+--- a/sound/soc/sti/sti_uniperif.c
++++ b/sound/soc/sti/sti_uniperif.c
+@@ -352,7 +352,7 @@ static int sti_uniperiph_resume(struct snd_soc_component *component)
+ 	return ret;
+ }
+ 
+-static int sti_uniperiph_dai_probe(struct snd_soc_dai *dai)
++int sti_uniperiph_dai_probe(struct snd_soc_dai *dai)
+ {
+ 	struct sti_uniperiph_data *priv = snd_soc_dai_get_drvdata(dai);
+ 	struct sti_uniperiph_dai *dai_data = &priv->dai_data;
+diff --git a/sound/soc/sti/uniperif.h b/sound/soc/sti/uniperif.h
+index 2a5de328501c1..74e51f0ff85c8 100644
+--- a/sound/soc/sti/uniperif.h
++++ b/sound/soc/sti/uniperif.h
+@@ -1380,6 +1380,7 @@ int uni_reader_init(struct platform_device *pdev,
+ 		    struct uniperif *reader);
+ 
+ /* common */
++int sti_uniperiph_dai_probe(struct snd_soc_dai *dai);
+ int sti_uniperiph_dai_set_fmt(struct snd_soc_dai *dai,
+ 			      unsigned int fmt);
+ 
+diff --git a/sound/soc/sti/uniperif_player.c b/sound/soc/sti/uniperif_player.c
+index dd9013c476649..6d1ce030963c6 100644
+--- a/sound/soc/sti/uniperif_player.c
++++ b/sound/soc/sti/uniperif_player.c
+@@ -1038,6 +1038,7 @@ static const struct snd_soc_dai_ops uni_player_dai_ops = {
+ 		.startup = uni_player_startup,
+ 		.shutdown = uni_player_shutdown,
+ 		.prepare = uni_player_prepare,
++		.probe = sti_uniperiph_dai_probe,
+ 		.trigger = uni_player_trigger,
+ 		.hw_params = sti_uniperiph_dai_hw_params,
+ 		.set_fmt = sti_uniperiph_dai_set_fmt,
+diff --git a/sound/soc/sti/uniperif_reader.c b/sound/soc/sti/uniperif_reader.c
+index 065c5f0d1f5f0..05ea2b794eb92 100644
+--- a/sound/soc/sti/uniperif_reader.c
++++ b/sound/soc/sti/uniperif_reader.c
+@@ -401,6 +401,7 @@ static const struct snd_soc_dai_ops uni_reader_dai_ops = {
+ 		.startup = uni_reader_startup,
+ 		.shutdown = uni_reader_shutdown,
+ 		.prepare = uni_reader_prepare,
++		.probe = sti_uniperiph_dai_probe,
+ 		.trigger = uni_reader_trigger,
+ 		.hw_params = sti_uniperiph_dai_hw_params,
+ 		.set_fmt = sti_uniperiph_dai_set_fmt,
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index f4437015d43a7..9df49a880b750 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -286,12 +286,14 @@ static void line6_data_received(struct urb *urb)
+ {
+ 	struct usb_line6 *line6 = (struct usb_line6 *)urb->context;
+ 	struct midi_buffer *mb = &line6->line6midi->midibuf_in;
++	unsigned long flags;
+ 	int done;
+ 
+ 	if (urb->status == -ESHUTDOWN)
+ 		return;
+ 
+ 	if (line6->properties->capabilities & LINE6_CAP_CONTROL_MIDI) {
++		spin_lock_irqsave(&line6->line6midi->lock, flags);
+ 		done =
+ 			line6_midibuf_write(mb, urb->transfer_buffer, urb->actual_length);
+ 
+@@ -300,12 +302,15 @@ static void line6_data_received(struct urb *urb)
+ 			dev_dbg(line6->ifcdev, "%d %d buffer overflow - message skipped\n",
+ 				done, urb->actual_length);
+ 		}
++		spin_unlock_irqrestore(&line6->line6midi->lock, flags);
+ 
+ 		for (;;) {
++			spin_lock_irqsave(&line6->line6midi->lock, flags);
+ 			done =
+ 				line6_midibuf_read(mb, line6->buffer_message,
+ 						   LINE6_MIDI_MESSAGE_MAXLEN,
+ 						   LINE6_MIDIBUF_READ_RX);
++			spin_unlock_irqrestore(&line6->line6midi->lock, flags);
+ 
+ 			if (done <= 0)
+ 				break;
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 73abc38a54006..f13a8d63a019a 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2594,6 +2594,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	}
+ },
+ 
++/* Stanton ScratchAmp */
++{ USB_DEVICE(0x103d, 0x0100) },
++{ USB_DEVICE(0x103d, 0x0101) },
++
+ /* Novation EMS devices */
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x1235, 0x0001),
+diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
+index 920aee41bd58c..6cc69900b3106 100644
+--- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
++++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
+@@ -156,7 +156,8 @@ static void test_send_signal_tracepoint(bool signal_thread)
+ static void test_send_signal_perf(bool signal_thread)
+ {
+ 	struct perf_event_attr attr = {
+-		.sample_period = 1,
++		.freq = 1,
++		.sample_freq = 1000,
+ 		.type = PERF_TYPE_SOFTWARE,
+ 		.config = PERF_COUNT_SW_CPU_CLOCK,
+ 	};
+diff --git a/tools/testing/selftests/devices/ksft.py b/tools/testing/selftests/devices/ksft.py
+index cd89fb2bc10e7..bf215790a89d7 100644
+--- a/tools/testing/selftests/devices/ksft.py
++++ b/tools/testing/selftests/devices/ksft.py
+@@ -70,7 +70,7 @@ def test_result(condition, description=""):
+ 
+ 
+ def finished():
+-    if ksft_cnt["pass"] == ksft_num_tests:
++    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
+         exit_code = KSFT_PASS
+     else:
+         exit_code = KSFT_FAIL
+diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
+index 3b49bc3d0a3b2..478cd26f62fdd 100644
+--- a/tools/testing/selftests/mm/Makefile
++++ b/tools/testing/selftests/mm/Makefile
+@@ -106,7 +106,7 @@ endif
+ 
+ endif
+ 
+-ifneq (,$(filter $(ARCH),arm64 ia64 mips64 parisc64 powerpc riscv64 s390x sparc64 x86_64))
++ifneq (,$(filter $(ARCH),arm64 ia64 mips64 parisc64 powerpc riscv64 s390x sparc64 x86_64 s390))
+ TEST_GEN_FILES += va_high_addr_switch
+ TEST_GEN_FILES += virtual_address_range
+ TEST_GEN_FILES += write_to_hugetlbfs
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index 7043984b7e74a..a3293043c85dd 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -1415,18 +1415,28 @@ chk_add_nr()
+ 	local add_nr=$1
+ 	local echo_nr=$2
+ 	local port_nr=${3:-0}
+-	local syn_nr=${4:-$port_nr}
+-	local syn_ack_nr=${5:-$port_nr}
+-	local ack_nr=${6:-$port_nr}
+-	local mis_syn_nr=${7:-0}
+-	local mis_ack_nr=${8:-0}
++	local ns_invert=${4:-""}
++	local syn_nr=$port_nr
++	local syn_ack_nr=$port_nr
++	local ack_nr=$port_nr
++	local mis_syn_nr=0
++	local mis_ack_nr=0
++	local ns_tx=$ns1
++	local ns_rx=$ns2
++	local extra_msg=""
+ 	local count
+ 	local timeout
+ 
+-	timeout=$(ip netns exec $ns1 sysctl -n net.mptcp.add_addr_timeout)
++	if [[ $ns_invert = "invert" ]]; then
++		ns_tx=$ns2
++		ns_rx=$ns1
++		extra_msg="invert"
++	fi
++
++	timeout=$(ip netns exec ${ns_tx} sysctl -n net.mptcp.add_addr_timeout)
+ 
+ 	print_check "add"
+-	count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtAddAddr")
++	count=$(mptcp_lib_get_counter ${ns_rx} "MPTcpExtAddAddr")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	# if the test configured a short timeout tolerate greater then expected
+@@ -1438,7 +1448,7 @@ chk_add_nr()
+ 	fi
+ 
+ 	print_check "echo"
+-	count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtEchoAdd")
++	count=$(mptcp_lib_get_counter ${ns_tx} "MPTcpExtEchoAdd")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$echo_nr" ]; then
+@@ -1449,7 +1459,7 @@ chk_add_nr()
+ 
+ 	if [ $port_nr -gt 0 ]; then
+ 		print_check "pt"
+-		count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtPortAdd")
++		count=$(mptcp_lib_get_counter ${ns_rx} "MPTcpExtPortAdd")
+ 		if [ -z "$count" ]; then
+ 			print_skip
+ 		elif [ "$count" != "$port_nr" ]; then
+@@ -1459,7 +1469,7 @@ chk_add_nr()
+ 		fi
+ 
+ 		print_check "syn"
+-		count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMPJoinPortSynRx")
++		count=$(mptcp_lib_get_counter ${ns_tx} "MPTcpExtMPJoinPortSynRx")
+ 		if [ -z "$count" ]; then
+ 			print_skip
+ 		elif [ "$count" != "$syn_nr" ]; then
+@@ -1470,7 +1480,7 @@ chk_add_nr()
+ 		fi
+ 
+ 		print_check "synack"
+-		count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtMPJoinPortSynAckRx")
++		count=$(mptcp_lib_get_counter ${ns_rx} "MPTcpExtMPJoinPortSynAckRx")
+ 		if [ -z "$count" ]; then
+ 			print_skip
+ 		elif [ "$count" != "$syn_ack_nr" ]; then
+@@ -1481,7 +1491,7 @@ chk_add_nr()
+ 		fi
+ 
+ 		print_check "ack"
+-		count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMPJoinPortAckRx")
++		count=$(mptcp_lib_get_counter ${ns_tx} "MPTcpExtMPJoinPortAckRx")
+ 		if [ -z "$count" ]; then
+ 			print_skip
+ 		elif [ "$count" != "$ack_nr" ]; then
+@@ -1492,7 +1502,7 @@ chk_add_nr()
+ 		fi
+ 
+ 		print_check "syn"
+-		count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMismatchPortSynRx")
++		count=$(mptcp_lib_get_counter ${ns_tx} "MPTcpExtMismatchPortSynRx")
+ 		if [ -z "$count" ]; then
+ 			print_skip
+ 		elif [ "$count" != "$mis_syn_nr" ]; then
+@@ -1503,7 +1513,7 @@ chk_add_nr()
+ 		fi
+ 
+ 		print_check "ack"
+-		count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMismatchPortAckRx")
++		count=$(mptcp_lib_get_counter ${ns_tx} "MPTcpExtMismatchPortAckRx")
+ 		if [ -z "$count" ]; then
+ 			print_skip
+ 		elif [ "$count" != "$mis_ack_nr" ]; then
+@@ -1513,6 +1523,8 @@ chk_add_nr()
+ 			print_ok
+ 		fi
+ 	fi
++
++	print_info "$extra_msg"
+ }
+ 
+ chk_add_tx_nr()
+@@ -1977,6 +1989,21 @@ signal_address_tests()
+ 		chk_add_nr 1 1
+ 	fi
+ 
++	# uncommon: subflow and signal flags on the same endpoint
++	# or because the user wrongly picked both, but still expects the client
++	# to create additional subflows
++	if reset "subflow and signal together"; then
++		pm_nl_set_limits $ns1 0 2
++		pm_nl_set_limits $ns2 0 2
++		pm_nl_add_endpoint $ns2 10.0.3.2 flags signal,subflow
++		run_tests $ns1 $ns2 10.0.1.1
++		chk_join_nr 1 1 1
++		chk_add_nr 1 1 0 invert  # only initiated by ns2
++		chk_add_nr 0 0 0         # none initiated by ns1
++		chk_rst_nr 0 0 invert    # no RST sent by the client
++		chk_rst_nr 0 0           # no RST sent by the server
++	fi
++
+ 	# accept and use add_addr with additional subflows
+ 	if reset "multiple subflows and signal"; then
+ 		pm_nl_set_limits $ns1 0 3


* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-08-14 14:49 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-08-14 14:49 UTC (permalink / raw
  To: gentoo-commits

commit:     59cf1147d40237fbf00353e17a5cf22141622e1b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 14 14:49:16 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 14 14:49:16 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=59cf1147

libbpf: v2 workaround -Wmaybe-uninitialized false positive

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 +-
 ...workaround-Wmaybe-uninitialized-false-pos.patch | 47 ++++++++++++++++++----
 2 files changed, 41 insertions(+), 10 deletions(-)

diff --git a/0000_README b/0000_README
index 308053b4..04764583 100644
--- a/0000_README
+++ b/0000_README
@@ -99,8 +99,8 @@ Patch:  2950_jump-label-fix.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/
 Desc:   jump_label: Fix a regression
 
-Patch:  2990_libbpf-workaround-Wmaybe-uninitialized-false-pos.patch
-From:   https://lore.kernel.org/bpf/3ebbe7a4e93a5ddc3a26e2e11d329801d7c8de6b.1723217044.git.sam@gentoo.org/
+Patch:  2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
+From:   https://lore.kernel.org/bpf/
 Desc:   libbpf: workaround -Wmaybe-uninitialized false positive
 
 Patch:  3000_Support-printing-firmware-info.patch

diff --git a/2990_libbpf-workaround-Wmaybe-uninitialized-false-pos.patch b/2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
similarity index 59%
rename from 2990_libbpf-workaround-Wmaybe-uninitialized-false-pos.patch
rename to 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
index 86de18d7..af5e117f 100644
--- a/2990_libbpf-workaround-Wmaybe-uninitialized-false-pos.patch
+++ b/2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
@@ -1,8 +1,8 @@
 From git@z Thu Jan  1 00:00:00 1970
-Subject: [PATCH] libbpf: workaround -Wmaybe-uninitialized false positive
+Subject: [PATCH v2] libbpf: workaround -Wmaybe-uninitialized false positive
 From: Sam James <sam@gentoo.org>
-Date: Fri, 09 Aug 2024 16:24:04 +0100
-Message-Id: <3ebbe7a4e93a5ddc3a26e2e11d329801d7c8de6b.1723217044.git.sam@gentoo.org>
+Date: Fri, 09 Aug 2024 18:26:41 +0100
+Message-Id: <8f5c3b173e4cb216322ae19ade2766940c6fbebb.1723224401.git.sam@gentoo.org>
 MIME-Version: 1.0
 Content-Type: text/plain; charset="utf-8"
 Content-Transfer-Encoding: 8bit
@@ -37,28 +37,59 @@ here (see linked GCC bug). Suppress -Wmaybe-uninitialized accordingly.
 Link: https://gcc.gnu.org/PR114952
 Signed-off-by: Sam James <sam@gentoo.org>
 ---
- tools/lib/bpf/elf.c | 4 ++++
- 1 file changed, 4 insertions(+)
+v2: Fix Clang build.
+
+Range-diff against v1:
+1:  3ebbe7a4e93a ! 1:  8f5c3b173e4c libbpf: workaround -Wmaybe-uninitialized false positive
+    @@ tools/lib/bpf/elf.c: long elf_find_func_offset(Elf *elf, const char *binary_path
+      	return ret;
+      }
+      
+    ++#if !defined(__clang__)
+     +#pragma GCC diagnostic push
+     +/* https://gcc.gnu.org/PR114952 */
+     +#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
+    ++#endif
+      /* Find offset of function name in ELF object specified by path. "name" matches
+       * symbol name or name@@LIB for library functions.
+       */
+    @@ tools/lib/bpf/elf.c: long elf_find_func_offset_from_file(const char *binary_path
+      	elf_close(&elf_fd);
+      	return ret;
+      }
+    ++#if !defined(__clang__)
+     +#pragma GCC diagnostic pop
+    ++#endif
+      
+      struct symbol {
+      	const char *name;
+
+ tools/lib/bpf/elf.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
 
 diff --git a/tools/lib/bpf/elf.c b/tools/lib/bpf/elf.c
-index c92e02394159e..ee226bb8e1af0 100644
+index c92e02394159..7058425ca85b 100644
 --- a/tools/lib/bpf/elf.c
 +++ b/tools/lib/bpf/elf.c
-@@ -369,6 +369,9 @@ long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name)
+@@ -369,6 +369,11 @@ long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name)
  	return ret;
  }
  
++#if !defined(__clang__)
 +#pragma GCC diagnostic push
 +/* https://gcc.gnu.org/PR114952 */
 +#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
++#endif
  /* Find offset of function name in ELF object specified by path. "name" matches
   * symbol name or name@@LIB for library functions.
   */
-@@ -384,6 +387,7 @@ long elf_find_func_offset_from_file(const char *binary_path, const char *name)
+@@ -384,6 +389,9 @@ long elf_find_func_offset_from_file(const char *binary_path, const char *name)
  	elf_close(&elf_fd);
  	return ret;
  }
++#if !defined(__clang__)
 +#pragma GCC diagnostic pop
++#endif
  
  struct symbol {
  	const char *name;


* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-08-14 15:18 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-08-14 15:18 UTC (permalink / raw
  To: gentoo-commits

commit:     7acf5823d708e0a03078c1e068cd7b10f00c8465
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 14 15:18:13 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 14 15:18:13 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7acf5823

Remove redundant patch

Removed:
2950_jump-label-fix.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |  4 ----
 2950_jump-label-fix.patch | 57 -----------------------------------------------
 2 files changed, 61 deletions(-)

diff --git a/0000_README b/0000_README
index 04764583..46799647 100644
--- a/0000_README
+++ b/0000_README
@@ -95,10 +95,6 @@ Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL
 
-Patch:  2950_jump-label-fix.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/
-Desc:   jump_label: Fix a regression
-
 Patch:  2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
 From:   https://lore.kernel.org/bpf/
 Desc:   libbpf: workaround -Wmaybe-uninitialized false positive

diff --git a/2950_jump-label-fix.patch b/2950_jump-label-fix.patch
deleted file mode 100644
index 1a5fdf7a..00000000
--- a/2950_jump-label-fix.patch
+++ /dev/null
@@ -1,57 +0,0 @@
-From 224fa3552029a3d14bec7acf72ded8171d551b88 Mon Sep 17 00:00:00 2001
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Wed, 31 Jul 2024 12:43:21 +0200
-Subject: jump_label: Fix the fix, brown paper bags galore
-
-Per the example of:
-
-  !atomic_cmpxchg(&key->enabled, 0, 1)
-
-the inverse was written as:
-
-  atomic_cmpxchg(&key->enabled, 1, 0)
-
-except of course, that while !old is only true for old == 0, old is
-true for everything except old == 0.
-
-Fix it to read:
-
-  atomic_cmpxchg(&key->enabled, 1, 0) == 1
-
-such that only the 1->0 transition returns true and goes on to disable
-the keys.
-
-Fixes: 83ab38ef0a0b ("jump_label: Fix concurrency issues in static_key_slow_dec()")
-Reported-by: Darrick J. Wong <djwong@kernel.org>
-Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
-Tested-by: Darrick J. Wong <djwong@kernel.org>
-Link: https://lkml.kernel.org/r/20240731105557.GY33588@noisy.programming.kicks-ass.net
----
- kernel/jump_label.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/kernel/jump_label.c b/kernel/jump_label.c
-index 4ad5ed8adf9691..6dc76b590703ed 100644
---- a/kernel/jump_label.c
-+++ b/kernel/jump_label.c
-@@ -236,7 +236,7 @@ void static_key_disable_cpuslocked(struct static_key *key)
- 	}
- 
- 	jump_label_lock();
--	if (atomic_cmpxchg(&key->enabled, 1, 0))
-+	if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
- 		jump_label_update(key);
- 	jump_label_unlock();
- }
-@@ -289,7 +289,7 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key)
- 		return;
- 
- 	guard(mutex)(&jump_label_mutex);
--	if (atomic_cmpxchg(&key->enabled, 1, 0))
-+	if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
- 		jump_label_update(key);
- 	else
- 		WARN_ON_ONCE(!static_key_slow_try_dec(key));
--- 
-cgit 1.2.3-korg
-


* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-08-15 13:19 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-08-15 13:19 UTC (permalink / raw
  To: gentoo-commits

commit:     3b50392746348d310486cd36f2067df9812990a6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 15 13:19:10 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 15 13:19:10 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3b503927

tools lib subcmd: Fixed uninitialized use of variable in parse-options

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                             |  4 +++
 2901_tools-lib-subcmd-compile-fix.patch | 54 +++++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/0000_README b/0000_README
index 46799647..845e8cff 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
 
+Patch:  2901_tools-lib-subcmd-compile-fix.patch
+From:   https://lore.kernel.org/all/20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de/
+Desc:   tools lib subcmd: Fixed uninitialized use of variable in parse-options
+
 Patch:  2910_bfp-mark-get-entry-ip-as--maybe-unused.patch
 From:   https://www.spinics.net/lists/stable/msg604665.html
 Desc:   bpf: mark get_entry_ip as __maybe_unused

diff --git a/2901_tools-lib-subcmd-compile-fix.patch b/2901_tools-lib-subcmd-compile-fix.patch
new file mode 100644
index 00000000..bb1f7ffd
--- /dev/null
+++ b/2901_tools-lib-subcmd-compile-fix.patch
@@ -0,0 +1,54 @@
+From git@z Thu Jan  1 00:00:00 1970
+Subject: [PATCH] tools lib subcmd: Fixed uninitialized use of variable in
+ parse-options
+From: Michael Weiß <michael.weiss@aisec.fraunhofer.de>
+Date: Wed, 31 Jul 2024 10:52:17 +0200
+Message-Id: <20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de>
+MIME-Version: 1.0
+Content-Type: text/plain; charset="utf-8"
+Content-Transfer-Encoding: 8bit
+
+Since commit ea558c86248b ("tools lib subcmd: Show parent options in
+help"), our debug images fail to build.
+
+For our Yocto-based GyroidOS, we build debug images with debugging enabled
+for all binaries including the kernel. Yocto passes the corresponding gcc
+option "-Og" also to the kernel HOSTCFLAGS. This results in the following
+build error:
+
+  parse-options.c: In function ‘options__order’:
+  parse-options.c:834:9: error: ‘o’ may be used uninitialized [-Werror=maybe-uninitialized]
+    834 |         memcpy(&ordered[nr_opts], o, sizeof(*o));
+        |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+  parse-options.c:812:30: note: ‘o’ was declared here
+    812 |         const struct option *o, *p = opts;
+        |                              ^
+  ..
+
+Fix it by initializing 'o' instead of 'p' in the above failing line 812.
+'p' is initialized afterwards in the following for-loop anyway.
+I think that was the intention of the commit ea558c86248b ("tools lib
+subcmd: Show parent options in help") in the first place.
+
+Fixes: ea558c86248b ("tools lib subcmd: Show parent options in help")
+Signed-off-by: Michael Weiß <michael.weiss@aisec.fraunhofer.de>
+---
+ tools/lib/subcmd/parse-options.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tools/lib/subcmd/parse-options.c b/tools/lib/subcmd/parse-options.c
+index 4b60ec03b0bb..2a3b51a690c7 100644
+--- a/tools/lib/subcmd/parse-options.c
++++ b/tools/lib/subcmd/parse-options.c
+@@ -809,7 +809,7 @@ static int option__cmp(const void *va, const void *vb)
+ static struct option *options__order(const struct option *opts)
+ {
+ 	int nr_opts = 0, nr_group = 0, nr_parent = 0, len;
+-	const struct option *o, *p = opts;
++	const struct option *o = opts, *p;
+ 	struct option *opt, *ordered = NULL, *group;
+ 
+ 	/* flatten the options that have parents */
+-- 
+2.39.2
+


* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-08-15 14:05 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-08-15 14:05 UTC (permalink / raw
  To: gentoo-commits

commit:     9d1ed4e98b1b3b271a2fd0986f46af57dfcf7bb5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 15 14:04:39 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 15 14:04:39 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9d1ed4e9

BMQ Sched Patch. USE=experimental

A new CPU scheduler developed from PDS (also included). Inspired by the scheduler in Zircon.
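
A minimal sketch of how the new scheduler is selected once these patches are
applied (the Kconfig symbols and the yield_type tunable are the ones introduced
by the hunks below; the kernel.yield_type sysctl path is an assumption based on
the Documentation/admin-guide/sysctl/kernel.rst hunk):

    # Kernel configuration (menu "Alternative CPU Schedulers")
    CONFIG_SCHED_ALT=y
    CONFIG_SCHED_BMQ=y        # CONFIG_SCHED_PDS is the other choice

    # Runtime yield behaviour: 0 - no yield, 1 - requeue task (default)
    sysctl -w kernel.yield_type=1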

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                 |     8 +
 5020_BMQ-and-PDS-io-scheduler-v6.9-r2.patch | 11843 ++++++++++++++++++++++++++
 5021_BMQ-and-PDS-gentoo-defaults.patch      |    13 +
 3 files changed, 11864 insertions(+)

diff --git a/0000_README b/0000_README
index 845e8cff..b1582141 100644
--- a/0000_README
+++ b/0000_README
@@ -114,3 +114,11 @@ Desc:   Add Gentoo Linux support config settings and defaults.
 Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
+
+Patch:  5020_BMQ-and-PDS-io-scheduler-v6.10-r2.patch
+From:   https://github.com/Frogging-Family/linux-tkg
+Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
+
+Patch:  5021_BMQ-and-PDS-gentoo-defaults.patch
+From:   https://gitweb.gentoo.org/proj/linux-patches.git/
+Desc:   Set defaults for BMQ. default to n

diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.9-r2.patch b/5020_BMQ-and-PDS-io-scheduler-v6.9-r2.patch
new file mode 100644
index 00000000..77443a4c
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.9-r2.patch
@@ -0,0 +1,11843 @@
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index 7fd43947832f..a07de351a5c2 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1673,3 +1673,12 @@ is 10 seconds.
+ 
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines what type of yield calls
++to sched_yield() will be performed.
++
++  0 - No yield.
++  1 - Requeue task. (default)
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++                         BitMap queue CPU Scheduler
++                         --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++   Overview
++   Task policy
++   Priority management
++   BitMap Queue
++   CPU Assignment and Migration
++
++
++Background
++==========
++
++BitMap Queue CPU scheduler, referred to as BMQ from here on, is an evolution
++of previous Priority and Deadline based Skiplist multiple queue scheduler(PDS),
++and inspired by Zircon scheduler. The goal of it is to keep the scheduler code
++simple, while efficiency and scalable for interactive tasks, such as desktop,
++movie playback and gaming etc.
++
++Design
++======
++
++Overview
++--------
++
++BMQ use per CPU run queue design, each CPU(logical) has it's own run queue,
++each CPU is responsible for scheduling the tasks that are putting into it's
++run queue.
++
++The run queue is a set of priority queues. Note that these queues are fifo
++queue for non-rt tasks or priority queue for rt tasks in data structure. See
++BitMap Queue below for details. BMQ is optimized for non-rt tasks in the fact
++that most applications are non-rt tasks. No matter the queue is fifo or
++priority, In each queue is an ordered list of runnable tasks awaiting execution
++and the data structures are the same. When it is time for a new task to run,
++the scheduler simply looks the lowest numbered queueue that contains a task,
++and runs the first task from the head of that queue. And per CPU idle task is
++also in the run queue, so the scheduler can always find a task to run on from
++its run queue.
++
++Each task will assigned the same timeslice(default 4ms) when it is picked to
++start running. Task will be reinserted at the end of the appropriate priority
++queue when it uses its whole timeslice. When the scheduler selects a new task
++from the priority queue it sets the CPU's preemption timer for the remainder of
++the previous timeslice. When that timer fires the scheduler will stop execution
++on that task, select another task and start over again.
++
++If a task blocks waiting for a shared resource then it's taken out of its
++priority queue and is placed in a wait queue for the shared resource. When it
++is unblocked it will be reinserted in the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policy like the
++mainline CFS scheduler. But BMQ is heavy optimized for non-rt task, that's
++NORMAL/BATCH/IDLE policy tasks. Below is the implementation detail of each
++policy.
++
++DEADLINE
++	It is squashed as priority 0 FIFO task.
++
++FIFO/RR
++	All RT tasks share one single priority queue in BMQ run queue designed. The
++complexity of insert operation is O(n). BMQ is not designed for system runs
++with major rt policy tasks.
++
++NORMAL/BATCH/IDLE
++	BATCH and IDLE tasks are treated as the same policy. They compete CPU with
++NORMAL policy tasks, but they just don't boost. To control the priority of
++NORMAL/BATCH/IDLE tasks, simply use nice level.
++
++ISO
++	ISO policy is not supported in BMQ. Please use nice level -20 NORMAL policy
++task instead.
++
++Priority management
++-------------------
++
++RT tasks have priority from 0-99. For non-rt tasks, there are three different
++factors used to determine the effective priority of a task. The effective
++priority being what is used to determine which queue it will be in.
++
++The first factor is simply the task’s static priority. Which is assigned from
++task's nice level, within [-20, 19] in userland's point of view and [0, 39]
++internally.
++
++The second factor is the priority boost. This is a value bounded between
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] used to offset the base priority, it is
++modified by the following cases:
++
++*When a thread has used up its entire timeslice, always deboost its boost by
++increasing by one.
++*When a thread gives up cpu control(voluntary or non-voluntary) to reschedule,
++and its switch-in time(time after last switch and run) below the thredhold
++based on its priority boost, will boost its boost by decreasing by one buti is
++capped at 0 (won’t go negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 72a1acd03675..e69ab1acbdbd 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -481,7 +481,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ 		seq_puts(m, "0 0 0\n");
+ 	else
+ 		seq_printf(m, "%llu %llu %lu\n",
+-		   (unsigned long long)task->se.sum_exec_runtime,
++		   (unsigned long long)tsk_seruntime(task),
+ 		   (unsigned long long)task->sched_info.run_delay,
+ 		   task->sched_info.pcount);
+ 
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
+ 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
+-	[RLIMIT_NICE]		= { 0, 0 },				\
++	[RLIMIT_NICE]		= { 30, 30 },				\
+ 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
+ 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ }
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index a5f4b48fca18..b775fc92142a 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -774,9 +774,13 @@ struct task_struct {
+ 	struct alloc_tag		*alloc_tag;
+ #endif
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
+ 	int				on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
+ 	struct __call_single_node	wake_entry;
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned int			wakee_flips;
+ 	unsigned long			wakee_flip_decay_ts;
+ 	struct task_struct		*last_wakee;
+@@ -790,6 +794,7 @@ struct task_struct {
+ 	 */
+ 	int				recent_used_cpu;
+ 	int				wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ 	int				on_rq;
+ 
+@@ -798,6 +803,19 @@ struct task_struct {
+ 	int				normal_prio;
+ 	unsigned int			rt_priority;
+ 
++#ifdef CONFIG_SCHED_ALT
++	u64				last_ran;
++	s64				time_slice;
++	struct list_head		sq_node;
++#ifdef CONFIG_SCHED_BMQ
++	int				boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++	u64				deadline;
++#endif /* CONFIG_SCHED_PDS */
++	/* sched_clock time spent running */
++	u64				sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ 	struct sched_entity		se;
+ 	struct sched_rt_entity		rt;
+ 	struct sched_dl_entity		dl;
+@@ -809,6 +827,7 @@ struct task_struct {
+ 	unsigned long			core_cookie;
+ 	unsigned int			core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_CGROUP_SCHED
+ 	struct task_group		*sched_task_group;
+@@ -1571,6 +1590,15 @@ struct task_struct {
+ 	 */
+ };
+ 
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t)		((t)->sched_time)
++/* replace the uncertian rt_timeout with 0UL */
++#define tsk_rttimeout(t)		(0UL)
++#else /* CFS */
++#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t)	((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ #define TASK_REPORT_IDLE	(TASK_REPORT + 1)
+ #define TASK_REPORT_MAX		(TASK_REPORT_IDLE << 1)
+ 
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index df3aca89d4f5..1df1f7635188 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -2,6 +2,25 @@
+ #ifndef _LINUX_SCHED_DEADLINE_H
+ #define _LINUX_SCHED_DEADLINE_H
+ 
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++	return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p)	(0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p)	((p)->dl.deadline)
++
+ /*
+  * SCHED_DEADLINE tasks has negative priorities, reflecting
+  * the fact that any of them has higher prio than RT and
+@@ -23,6 +42,7 @@ static inline int dl_task(struct task_struct *p)
+ {
+ 	return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index ab83d85e1183..e66dfb553bc5 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -18,6 +18,28 @@
+ #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
+ 
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ	(12)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ	(0)
++#endif
++
++#define MIN_NORMAL_PRIO		(128)
++#define NORMAL_PRIO_NUM		(64)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO		(MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+  * Convert user-nice values [ -20 ... 0 ... 19 ]
+  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index b2b9e6eb9683..09bd4d8758b2 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -24,8 +24,10 @@ static inline bool task_is_realtime(struct task_struct *tsk)
+ 
+ 	if (policy == SCHED_FIFO || policy == SCHED_RR)
+ 		return true;
++#ifndef CONFIG_SCHED_ALT
+ 	if (policy == SCHED_DEADLINE)
+ 		return true;
++#endif
+ 	return false;
+ }
+ 
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index 4237daa5ac7a..3cebd93c49c8 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -244,7 +244,8 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
+ 
+ #endif	/* !CONFIG_SMP */
+ 
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++	!defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --git a/init/Kconfig b/init/Kconfig
+index febdea2afc3b..5b2295d7c8e5 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -638,6 +638,7 @@ config TASK_IO_ACCOUNTING
+ 
+ config PSI
+ 	bool "Pressure stall information tracking"
++	depends on !SCHED_ALT
+ 	select KERNFS
+ 	help
+ 	  Collect metrics that indicate how overcommitted the CPU, memory,
+@@ -803,6 +804,7 @@ menu "Scheduler features"
+ config UCLAMP_TASK
+ 	bool "Enable utilization clamping for RT/FAIR tasks"
+ 	depends on CPU_FREQ_GOV_SCHEDUTIL
++	depends on !SCHED_ALT
+ 	help
+ 	  This feature enables the scheduler to track the clamped utilization
+ 	  of each CPU based on RUNNABLE tasks scheduled on that CPU.
+@@ -849,6 +851,35 @@ config UCLAMP_BUCKETS_COUNT
+ 
+ 	  If in doubt, use the default value.
+ 
++menuconfig SCHED_ALT
++	bool "Alternative CPU Schedulers"
++	default y
++	help
++	  This feature enable alternative CPU scheduler"
++
++if SCHED_ALT
++
++choice
++	prompt "Alternative CPU Scheduler"
++	default SCHED_BMQ
++
++config SCHED_BMQ
++	bool "BMQ CPU scheduler"
++	help
++	  The BitMap Queue CPU scheduler for excellent interactivity and
++	  responsiveness on the desktop and solid scalability on normal
++	  hardware and commodity servers.
++
++config SCHED_PDS
++	bool "PDS CPU scheduler"
++	help
++	  The Priority and Deadline based Skip list multiple queue CPU
++	  Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+ 
+ #
+@@ -914,6 +945,7 @@ config NUMA_BALANCING
+ 	depends on ARCH_SUPPORTS_NUMA_BALANCING
+ 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ 	depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++	depends on !SCHED_ALT
+ 	help
+ 	  This option adds support for automatic NUMA aware memory/task placement.
+ 	  The mechanism is quite primitive and is based on migrating memory when
+@@ -1015,6 +1047,7 @@ config FAIR_GROUP_SCHED
+ 	depends on CGROUP_SCHED
+ 	default CGROUP_SCHED
+ 
++if !SCHED_ALT
+ config CFS_BANDWIDTH
+ 	bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
+ 	depends on FAIR_GROUP_SCHED
+@@ -1037,6 +1070,7 @@ config RT_GROUP_SCHED
+ 	  realtime bandwidth for them.
+ 	  See Documentation/scheduler/sched-rt-group.rst for more information.
+ 
++endif #!SCHED_ALT
+ endif #CGROUP_SCHED
+ 
+ config SCHED_MM_CID
+@@ -1285,6 +1319,7 @@ config CHECKPOINT_RESTORE
+ 
+ config SCHED_AUTOGROUP
+ 	bool "Automatic process group scheduling"
++	depends on !SCHED_ALT
+ 	select CGROUPS
+ 	select CGROUP_SCHED
+ 	select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index eeb110c65fe2..9d5ac5c3af07 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -70,9 +70,15 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ 	.stack		= init_stack,
+ 	.usage		= REFCOUNT_INIT(2),
+ 	.flags		= PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++	.prio		= DEFAULT_PRIO,
++	.static_prio	= DEFAULT_PRIO,
++	.normal_prio	= DEFAULT_PRIO,
++#else
+ 	.prio		= MAX_PRIO - 20,
+ 	.static_prio	= MAX_PRIO - 20,
+ 	.normal_prio	= MAX_PRIO - 20,
++#endif
+ 	.policy		= SCHED_NORMAL,
+ 	.cpus_ptr	= &init_task.cpus_mask,
+ 	.user_cpus_ptr	= NULL,
+@@ -85,6 +91,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ 	.restart_block	= {
+ 		.fn = do_no_restart_syscall,
+ 	},
++#ifdef CONFIG_SCHED_ALT
++	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++	.boost_prio	= 0,
++#endif
++#ifdef CONFIG_SCHED_PDS
++	.deadline	= 0,
++#endif
++	.time_slice	= HZ,
++#else
+ 	.se		= {
+ 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
+ 	},
+@@ -92,6 +108,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
+ 		.time_slice	= RR_TIMESLICE,
+ 	},
++#endif
+ 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
+ #ifdef CONFIG_SMP
+ 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index c2f1fd95a821..41654679b1b2 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
+ 
+ config SCHED_CORE
+ 	bool "Core Scheduling for SMT"
+-	depends on SCHED_SMT
++	depends on SCHED_SMT && !SCHED_ALT
+ 	help
+ 	  This option permits Core Scheduling, a means of coordinated task
+ 	  selection across SMT siblings. When enabled -- see
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index c12b9fdb22a4..f7fa31c1be91 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -846,7 +846,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * Helper routine for generate_sched_domains().
+  * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1245,7 +1245,7 @@ static void rebuild_sched_domains_locked(void)
+ 	/* Have scheduler rebuild the domains */
+ 	partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ static void rebuild_sched_domains_locked(void)
+ {
+ }
+@@ -3301,12 +3301,15 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 				goto out_unlock;
+ 		}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 		if (dl_task(task)) {
+ 			cs->nr_migrate_dl_tasks++;
+ 			cs->sum_migrate_dl_bw += task->dl.dl_bw;
+ 		}
++#endif
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (!cs->nr_migrate_dl_tasks)
+ 		goto out_success;
+ 
+@@ -3327,6 +3330,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 	}
+ 
+ out_success:
++#endif
+ 	/*
+ 	 * Mark attach is in progress.  This makes validate_change() fail
+ 	 * changes which zero cpus/mems_allowed.
+@@ -3350,12 +3354,14 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
+ 	if (!cs->attach_in_progress)
+ 		wake_up(&cpuset_attach_wq);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (cs->nr_migrate_dl_tasks) {
+ 		int cpu = cpumask_any(cs->effective_cpus);
+ 
+ 		dl_bw_free(cpu, cs->sum_migrate_dl_bw);
+ 		reset_migrate_dl_data(cs);
+ 	}
++#endif
+ 
+ 	mutex_unlock(&cpuset_mutex);
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index e039b0f99a0b..7fc16698a514 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -149,7 +149,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ 	 */
+ 	t1 = tsk->sched_info.pcount;
+ 	t2 = tsk->sched_info.run_delay;
+-	t3 = tsk->se.sum_exec_runtime;
++	t3 = tsk_seruntime(tsk);
+ 
+ 	d->cpu_count += t1;
+ 
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 81fcee45d630..76076803a464 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -175,7 +175,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 			sig->curr_target = next_thread(tsk);
+ 	}
+ 
+-	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++	add_device_randomness((const void*) &tsk_seruntime(tsk),
+ 			      sizeof(unsigned long long));
+ 
+ 	/*
+@@ -196,7 +196,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 	sig->inblock += task_io_get_inblock(tsk);
+ 	sig->oublock += task_io_get_oublock(tsk);
+ 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
+-	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++	sig->sum_sched_runtime += tsk_seruntime(tsk);
+ 	sig->nr_threads--;
+ 	__unhash_process(tsk, group_dead);
+ 	write_sequnlock(&sig->stats_lock);
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 88d08eeb8bc0..03e8793badfc 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -363,7 +363,7 @@ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ 	lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
+ 
+ 	waiter->tree.prio = __waiter_prio(task);
+-	waiter->tree.deadline = task->dl.deadline;
++	waiter->tree.deadline = __tsk_deadline(task);
+ }
+ 
+ /*
+@@ -384,16 +384,20 @@ waiter_clone_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+  * Only use with rt_waiter_node_{less,equal}()
+  */
+ #define task_to_waiter_node(p)	\
+-	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+ #define task_to_waiter(p)	\
+ 	&(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
+ 
+ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ 					       struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline < right->deadline);
++#else
+ 	if (left->prio < right->prio)
+ 		return 1;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -402,16 +406,22 @@ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return dl_time_before(left->deadline, right->deadline);
++#endif
+ 
+ 	return 0;
++#endif
+ }
+ 
+ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ 						 struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline == right->deadline);
++#else
+ 	if (left->prio != right->prio)
+ 		return 0;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -420,8 +430,10 @@ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return left->deadline == right->deadline;
++#endif
+ 
+ 	return 1;
++#endif
+ }
+ 
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 976092b7bd45..31d587c16ec1 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -28,7 +28,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..6fa0058d2649
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,8809 @@
++/*
++ *  kernel/sched/alt_core.c
++ *
++ *  Core alternative kernel scheduler code and related syscalls
++ *
++ *  Copyright (C) 1991-2002  Linus Torvalds
++ *
++ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
++ *		a whole lot of those previous things.
++ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
++ *		scheduler by Alfred Chen.
++ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/clock.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/hotplug.h>
++#include <linux/sched/init.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/nmi.h>
++#include <linux/rseq.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/irq_regs.h>
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#include <trace/events/ipi.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++#include "smp.h"
++
++#include "pelt.h"
++
++#include "../../io_uring/io-wq.h"
++#include "../smpboot.h"
++
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x)	(1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x)	(0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v6.10-r2"
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p)		rt_prio((p)->prio)
++#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p)	(rt_policy((p)->policy))
++
++#define STOP_PRIO		(MAX_RT_PRIO - 1)
++
++/*
++ * Time slice
++ * (default: 4 msec, units: nanoseconds)
++ */
++unsigned int sysctl_sched_base_slice __read_mostly	= (4 << 20);
++
++#include "alt_core.h"
++#include "alt_topology.h"
++
++struct affinity_context {
++	const struct cpumask *new_mask;
++	struct cpumask *user_mask;
++	unsigned int flags;
++};
++
++/* Reschedule if less than this many nanoseconds (~100us) are left */
++#define RESCHED_NS		(100 << 10)
++
++/**
++ * sched_yield_type - Type of yield that sched_yield() will perform.
++ * 0: No yield.
++ * 1: Requeue task. (default)
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++
++cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPU's number in the cpumask of
++ * the domain); this allows us to quickly tell if two CPUs are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++static DEFINE_MUTEX(sched_hotcpu_mutex);
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS + 2] ____cacheline_aligned_in_smp;
++
++cpumask_t *const sched_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS - 1];
++cpumask_t *const sched_sg_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_pcore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_ecore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS + 1];
++
++/* task function */
++static inline const struct cpumask *task_user_cpus(struct task_struct *p)
++{
++	if (!p->user_cpus_ptr)
++		return cpu_possible_mask; /* &init_task.cpus_mask */
++	return p->user_cpus_ptr;
++}
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++	int i;
++
++	bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++	for(i = 0; i < SCHED_LEVELS; i++)
++		INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init idle task and put into queue structure of rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++					 struct task_struct *idle)
++{
++	INIT_LIST_HEAD(&q->heads[IDLE_TASK_SCHED_PRIO]);
++	list_add_tail(&idle->sq_node, &q->heads[IDLE_TASK_SCHED_PRIO]);
++	idle->on_rq = TASK_ON_RQ_QUEUED;
++}
++
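++/*
++ * For illustration: each rq carries one list head per scheduling level
++ * (q->heads[]) plus a bitmap with one bit per level.  A bit is set iff
++ * the matching list is non-empty (see __SCHED_ENQUEUE_TASK() and
++ * __SCHED_DEQUEUE_TASK() below), so a single find_first_bit() on the
++ * bitmap locates the highest-priority non-empty level.  The idle task
++ * is parked on the IDLE_TASK_SCHED_PRIO list at init time, above.
++ */
++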
++#define CLEAR_CACHED_PREEMPT_MASK(pr, low, high, cpu)		\
++	if (low < pr && pr <= high)				\
++		cpumask_clear_cpu(cpu, sched_preempt_mask + pr);
++
++#define SET_CACHED_PREEMPT_MASK(pr, low, high, cpu)		\
++	if (low < pr && pr <= high)				\
++		cpumask_set_cpu(cpu, sched_preempt_mask + pr);
++
++static atomic_t sched_prio_record = ATOMIC_INIT(0);
++
++/* water mark related functions */
++static inline void update_sched_preempt_mask(struct rq *rq)
++{
++	int prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	int last_prio = rq->prio;
++	int cpu, pr;
++
++	if (prio == last_prio)
++		return;
++
++	rq->prio = prio;
++#ifdef CONFIG_SCHED_PDS
++	rq->prio_idx = sched_prio2idx(rq->prio, rq);
++#endif
++	cpu = cpu_of(rq);
++	pr = atomic_read(&sched_prio_record);
++
++	if (prio < last_prio) {
++		if (IDLE_TASK_SCHED_PRIO == last_prio) {
++			rq->clear_idle_mask_func(cpu, sched_idle_mask);
++			last_prio -= 2;
++		}
++		CLEAR_CACHED_PREEMPT_MASK(pr, prio, last_prio, cpu);
++
++		return;
++	}
++	/* last_prio < prio */
++	if (IDLE_TASK_SCHED_PRIO == prio) {
++		rq->set_idle_mask_func(cpu, sched_idle_mask);
++		prio -= 2;
++	}
++	SET_CACHED_PREEMPT_MASK(pr, last_prio, prio, cpu);
++}
++
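++/*
++ * Rough picture of the cached masks above: a CPU sits in
++ * sched_preempt_mask[prio] when its current highest runnable level is
++ * numerically above prio, i.e. when a task queued at level prio could
++ * preempt it.  Only the single mask whose index is recorded in
++ * sched_prio_record is kept up to date eagerly (the idle transition is
++ * handled separately through the rq->set/clear_idle_mask_func hooks);
++ * when select_task_rq() later asks about a different priority,
++ * preempt_mask_check() rebuilds that mask via sched_preempt_mask_flush().
++ */
++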
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ *   p->pi_lock
++ *     rq->lock
++ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ *  rq1->lock
++ *    rq2->lock  where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ *  - sched_setaffinity()/
++ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
++ *  - set_user_nice():		p->se.load, p->*prio
++ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
++ *				p->se.load, p->rt_priority,
++ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ *  - sched_setnuma():		p->numa_preferred_nid
++ *  - sched_move_task():        p->sched_task_group
++ *  - uclamp_update_active()	p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ *   is changed locklessly using set_current_state(), __set_current_state() or
++ *   set_special_state(), see their respective comments, or by
++ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ *   concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ *   is set by activate_task() and cleared by deactivate_task(), under
++ *   rq->lock. Non-zero indicates the task is runnable, the special
++ *   ON_RQ_MIGRATING state is used for migration without holding both
++ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ *   is set by prepare_task() and cleared by finish_task() such that it will be
++ *   set before p is scheduled-in and cleared after p is scheduled-out, both
++ *   under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ *   [ The astute reader will observe that it is possible for two tasks on one
++ *     CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ *  - Don't call set_task_cpu() on a blocked task:
++ *
++ *    We don't care what CPU we're not running on, this simplifies hotplug,
++ *    the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ *  - for try_to_wake_up(), called under p->pi_lock:
++ *
++ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ *  - for migration called under rq->lock:
++ *    [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ *    o move_queued_task()
++ *    o detach_task()
++ *
++ *  - for migration called under double_rq_lock():
++ *
++ *    o __migrate_swap_task()
++ *    o push_rt_task() / pull_rt_task()
++ *    o push_dl_task() / pull_dl_task()
++ *    o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
++static inline struct rq *__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock(&rq->lock);
++			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock(&rq->lock);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			*plock = NULL;
++			return rq;
++		}
++	}
++}
++
++static inline void __task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++	if (NULL != lock)
++		raw_spin_unlock(lock);
++}
++
++static inline struct rq *
++task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock, unsigned long *flags)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock_irqsave(&rq->lock, *flags);
++			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&rq->lock, *flags);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			raw_spin_lock_irqsave(&p->pi_lock, *flags);
++			if (likely(!p->on_cpu && !p->on_rq && rq == task_rq(p))) {
++				*plock = &p->pi_lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++		}
++	}
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock, unsigned long *flags)
++{
++	raw_spin_unlock_irqrestore(lock, *flags);
++}
++
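++/*
++ * Typical usage of the pair above, sketched roughly (see
++ * wait_task_inactive() below for a real caller):
++ *
++ *	raw_spinlock_t *lock;
++ *	unsigned long flags;
++ *
++ *	rq = task_access_lock_irqsave(p, &lock, &flags);
++ *	... p's rq/on_cpu/on_rq state is stable here ...
++ *	task_access_unlock_irqrestore(p, lock, &flags);
++ */
++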
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	lockdep_assert_held(&p->pi_lock);
++
++	for (;;) {
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++			return rq;
++		raw_spin_unlock(&rq->lock);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	for (;;) {
++		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		/*
++		 *	move_queued_task()		task_rq_lock()
++		 *
++		 *	ACQUIRE (rq->lock)
++		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
++		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
++		 *	[S] ->cpu = new_cpu		[L] task_rq()
++		 *					[L] ->on_rq
++		 *	RELEASE (rq->lock)
++		 *
++		 * If we observe the old CPU in task_rq_lock(), the acquire of
++		 * the old rq->lock will fully serialize against the stores.
++		 *
++		 * If we observe the new CPU in task_rq_lock(), the address
++		 * dependency headed by '[L] rq = task_rq()' and the acquire
++		 * will pair with the WMB to ensure we then also see migrating.
++		 */
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++			return rq;
++		}
++		raw_spin_unlock(&rq->lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
++		    rq_lock_irqsave(_T->lock, &_T->rf),
++		    rq_unlock_irqrestore(_T->lock, &_T->rf),
++		    struct rq_flags rf)
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++	raw_spinlock_t *lock;
++
++	/* Matches synchronize_rcu() in __sched_core_enable() */
++	preempt_disable();
++
++	for (;;) {
++		lock = __rq_lockp(rq);
++		raw_spin_lock_nested(lock, subclass);
++		if (likely(lock == __rq_lockp(rq))) {
++			/* preempt_count *MUST* be > 1 */
++			preempt_enable_no_resched();
++			return;
++		}
++		raw_spin_unlock(lock);
++	}
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++	raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compiler should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++	s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++	/*
++	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
++	 * this case when a previous update_rq_clock() happened inside a
++	 * {soft,}irq region.
++	 *
++	 * When this happens, we stop ->clock_task and only update the
++	 * prev_irq_time stamp to account for the part that fit, so that a next
++	 * update will consume the rest. This ensures ->clock_task is
++	 * monotonic.
++	 *
++	 * It does however cause some slight misattribution of {soft,}irq
++	 * time, a more accurate solution would be to update the irq_time using
++	 * the current rq->clock timestamp, except that would require using
++	 * atomic ops.
++	 */
++	if (irq_delta > delta)
++		irq_delta = delta;
++
++	rq->prev_irq_time += irq_delta;
++	delta -= irq_delta;
++	delayacct_irq(rq->curr, irq_delta);
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	if (static_key_false((&paravirt_steal_rq_enabled))) {
++		steal = paravirt_steal_clock(cpu_of(rq));
++		steal -= rq->prev_steal_time_rq;
++
++		if (unlikely(steal > delta))
++			steal = delta;
++
++		rq->prev_steal_time_rq += steal;
++		delta -= steal;
++	}
++#endif
++
++	rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	if ((irq_delta + steal))
++		update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++	if (unlikely(delta <= 0))
++		return;
++	rq->clock += delta;
++	sched_update_rq_clock(rq);
++	update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT			(8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t)		((t) >> 17)
++#define LOAD_HALF_BLOCK(t)	((t) >> 16)
++#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
++
++static inline void rq_load_update(struct rq *rq)
++{
++	u64 time = rq->clock;
++	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp), RQ_LOAD_HISTORY_BITS - 1);
++	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++	u64 curr = !!rq->nr_running;
++
++	if (delta) {
++		rq->load_history = rq->load_history >> delta;
++
++		if (delta < RQ_UTIL_SHIFT) {
++			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++				rq->load_history ^= LOAD_BLOCK_BIT(delta);
++		}
++
++		rq->load_block = BLOCK_MASK(time) * prev;
++	} else {
++		rq->load_block += (time - rq->load_stamp) * prev;
++	}
++	if (prev ^ curr)
++		rq->load_history ^= CURRENT_LOAD_BIT;
++	rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
++
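++/*
++ * Back-of-the-envelope reading of the above, using the values as
++ * defined here: rq->clock is in nanoseconds, so one history block
++ * spans 2^17 ns (roughly 131us) and the s32 history word covers the
++ * last 32 blocks, about 4.2ms.  RQ_LOAD_HISTORY_TO_UTIL() pulls an
++ * 8-bit busy/idle sample out of the top of that word and rq_load_util()
++ * scales it by max/256: e.g. with max == 1024, a sample of 0xff (busy
++ * in every recent block) yields 1020 ~= max, while 0x80 yields 512,
++ * i.e. roughly half utilized.
++ */
++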
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu)
++{
++	return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid.  Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++	struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, cpu_of(rq)));
++	if (data)
++		data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If tick is needed, let's send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++	int cpu = cpu_of(rq);
++
++	if (!tick_nohz_full_cpu(cpu))
++		return;
++
++	if (rq->nr_running < 2)
++		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++	else
++		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++	return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++	unsigned long ip = 0;
++	unsigned int state;
++
++	if (!p || p == current)
++		return 0;
++
++	/* Only get wchan if task is blocked and we can keep it that way. */
++	raw_spin_lock_irq(&p->pi_lock);
++	state = READ_ONCE(p->__state);
++	smp_rmb(); /* see try_to_wake_up() */
++	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++		ip = __get_wchan(p);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	return ip;
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)					\
++	sched_info_dequeue(rq, p);							\
++											\
++	__list_del_entry(&p->sq_node);							\
++	if (p->sq_node.prev == p->sq_node.next) {					\
++		clear_bit(sched_idx2prio(p->sq_node.next - &rq->queue.heads[0], rq),	\
++			  rq->queue.bitmap);						\
++		func;									\
++	}
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags, func)					\
++	sched_info_enqueue(rq, p);							\
++	{										\
++	int idx, prio;									\
++	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);						\
++	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);				\
++	if (list_is_first(&p->sq_node, &rq->queue.heads[idx])) {			\
++		set_bit(prio, rq->queue.bitmap);					\
++		func;									\
++	}										\
++	}
++
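++/*
++ * A note on the pointer games above: __list_del_entry() leaves the
++ * removed node's ->prev/->next pointing at its old neighbours, so
++ * "prev == next" afterwards means both point at the list head and the
++ * level just went empty.  The "node->next - &rq->queue.heads[0]"
++ * arithmetic then recovers the index of that head, and sched_idx2prio()
++ * maps it to the bitmap bit to clear.  The enqueue side mirrors this
++ * with list_is_first(): the bit is only set when the task becomes the
++ * first entry of its level.
++ */
++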
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++	--rq->nr_running;
++#ifdef CONFIG_SMP
++	if (1 == rq->nr_running)
++		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_ENQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++	++rq->nr_running;
++#ifdef CONFIG_SMP
++	if (2 == rq->nr_running)
++		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq)
++{
++	struct list_head *node = &p->sq_node;
++	int deq_idx, idx, prio;
++
++	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++		  cpu_of(rq), task_cpu(p));
++#endif
++	if (list_is_last(node, &rq->queue.heads[idx]))
++		return;
++
++	__list_del_entry(node);
++	if (node->prev == node->next && (deq_idx = node->next - &rq->queue.heads[0]) != idx)
++		clear_bit(sched_idx2prio(deq_idx, rq), rq->queue.bitmap);
++
++	list_add_tail(node, &rq->queue.heads[idx]);
++	if (list_is_first(node, &rq->queue.heads[idx]))
++		set_bit(prio, rq->queue.bitmap);
++	update_sched_preempt_mask(rq);
++}
++
++/*
++ * cmpxchg based fetch_or, macro so it works for different integer types
++ */
++#define fetch_or(ptr, mask)						\
++	({								\
++		typeof(ptr) _ptr = (ptr);				\
++		typeof(mask) _mask = (mask);				\
++		typeof(*_ptr) _val = *_ptr;				\
++									\
++		do {							\
++		} while (!try_cmpxchg(_ptr, &_val, _val | _mask));	\
++	_val;								\
++})
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	typeof(ti->flags) val = READ_ONCE(ti->flags);
++
++	do {
++		if (!(val & _TIF_POLLING_NRFLAG))
++			return false;
++		if (val & _TIF_NEED_RESCHED)
++			return true;
++	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
++
++	return true;
++}
++
++#else
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++	set_tsk_need_resched(p);
++	return true;
++}
++
++#ifdef CONFIG_SMP
++static inline bool set_nr_if_polling(struct task_struct *p)
++{
++	return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	struct wake_q_node *node = &task->wake_q;
++
++	/*
++	 * Atomically grab the task, if ->wake_q is !nil already it means
++	 * it's already queued (either by us or someone else) and will get the
++	 * wakeup due to that.
++	 *
++	 * In order to ensure that a pending wakeup will observe our pending
++	 * state, even in the failed case, an explicit smp_mb() must be used.
++	 */
++	smp_mb__before_atomic();
++	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++		return false;
++
++	/*
++	 * The head is context local, there can be no concurrency.
++	 */
++	*head->lastp = node;
++	head->lastp = &node->next;
++	return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	if (__wake_q_add(head, task))
++		get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++	if (!__wake_q_add(head, task))
++		put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++	struct wake_q_node *node = head->first;
++
++	while (node != WAKE_Q_TAIL) {
++		struct task_struct *task;
++
++		task = container_of(node, struct task_struct, wake_q);
++		/* task can safely be re-inserted now: */
++		node = node->next;
++		task->wake_q.next = NULL;
++
++		/*
++		 * wake_up_process() executes a full barrier, which pairs with
++		 * the queueing in wake_q_add() so as not to miss wakeups.
++		 */
++		wake_up_process(task);
++		put_task_struct(task);
++	}
++}
++
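++/*
++ * The usual wake_q pattern, roughly (DEFINE_WAKE_Q() comes from
++ * <linux/sched/wake_q.h>):
++ *
++ *	DEFINE_WAKE_Q(wake_q);
++ *
++ *	spin_lock(&some_lock);
++ *	wake_q_add(&wake_q, p);
++ *	spin_unlock(&some_lock);
++ *	wake_up_q(&wake_q);
++ *
++ * so the actual wakeups happen only after some_lock (illustrative name)
++ * has been dropped.
++ */
++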
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++static inline void resched_curr(struct rq *rq)
++{
++	struct task_struct *curr = rq->curr;
++	int cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	if (test_tsk_need_resched(curr))
++		return;
++
++	cpu = cpu_of(rq);
++	if (cpu == smp_processor_id()) {
++		set_tsk_need_resched(curr);
++		set_preempt_need_resched();
++		return;
++	}
++
++	if (set_nr_and_not_polling(curr))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++void resched_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (cpu_online(cpu) || cpu == smp_processor_id())
++		resched_curr(cpu_rq(cpu));
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++/*
++ * This routine will record that the CPU is going idle with tick stopped.
++ * This info will be used in performing idle load balancing in the future.
++ */
++void nohz_balance_enter_idle(int cpu) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU.  This is good for power-savings.
++ *
++ * We don't do a similar optimization for a completely idle system, as
++ * selecting an idle CPU will add more delays to the timers than intended
++ * (as that CPU's timer base may not be up to date wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++	int i, cpu = smp_processor_id(), default_cpu = -1;
++	struct cpumask *mask;
++	const struct cpumask *hk_mask;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
++		if (!idle_cpu(cpu))
++			return cpu;
++		default_cpu = cpu;
++	}
++
++	hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
++
++	for (mask = per_cpu(sched_cpu_topo_masks, cpu);
++	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++		for_each_cpu_and(i, mask, hk_mask)
++			if (!idle_cpu(i))
++				return i;
++
++	if (default_cpu == -1)
++		default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
++	cpu = default_cpu;
++
++	return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (cpu == smp_processor_id())
++		return;
++
++	/*
++	 * Set TIF_NEED_RESCHED and send an IPI if in the non-polling
++	 * part of the idle loop. This forces an exit from the idle loop
++	 * and a round trip to schedule(). Now this could be optimized
++	 * because a simple new idle loop iteration is enough to
++	 * re-evaluate the next tick. Provided some re-ordering of tick
++	 * nohz functions that would need to follow TIF_POLLING_NRFLAG
++	 * clearing:
++	 *
++	 * - On most archs, a simple fetch_or on ti::flags with a
++	 *   "0" value would be enough to know if an IPI needs to be sent.
++	 *
++	 * - x86 needs to perform a last need_resched() check between
++	 *   monitor and mwait which doesn't take timers into account.
++	 *   There a dedicated TIF_TIMER flag would be required to
++	 *   fetch_or here and be checked along with TIF_NEED_RESCHED
++	 *   before mwait().
++	 *
++	 * However, remote timer enqueue is not such a frequent event
++	 * and testing of the above solutions didn't appear to report
++	 * much benefits.
++	 */
++	if (set_nr_and_not_polling(rq->idle))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++	/*
++	 * We just need the target to call irq_exit() and re-evaluate
++	 * the next tick. The nohz full kick at least implies that.
++	 * If needed we can still optimize that later with an
++	 * empty IRQ.
++	 */
++	if (cpu_is_offline(cpu))
++		return true;  /* Don't try to wake offline CPUs. */
++	if (tick_nohz_full_cpu(cpu)) {
++		if (cpu != smp_processor_id() ||
++		    tick_nohz_tick_stopped())
++			tick_nohz_full_kick_cpu(cpu);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++	if (!wake_up_full_nohz_cpu(cpu))
++		wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++	struct rq *rq = info;
++	int cpu = cpu_of(rq);
++	unsigned int flags;
++
++	/*
++	 * Release the rq::nohz_csd.
++	 */
++	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++	WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++	rq->idle_balance = idle_cpu(cpu);
++	if (rq->idle_balance && !need_resched()) {
++		rq->nohz_idle_balance = flags;
++		raise_softirq_irqoff(SCHED_SOFTIRQ);
++	}
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void wakeup_preempt(struct rq *rq)
++{
++	if (sched_rq_first_task(rq) != rq->curr)
++		resched_curr(rq);
++}
++
++static __always_inline
++int __task_state_match(struct task_struct *p, unsigned int state)
++{
++	if (READ_ONCE(p->__state) & state)
++		return 1;
++
++	if (READ_ONCE(p->saved_state) & state)
++		return -1;
++
++	return 0;
++}
++
++static __always_inline
++int task_state_match(struct task_struct *p, unsigned int state)
++{
++	/*
++	 * Serialize against current_save_and_set_rtlock_wait_state(),
++	 * current_restore_rtlock_saved_state(), and __refrigerator().
++	 */
++	guard(raw_spinlock_irq)(&p->pi_lock);
++
++	return __task_state_match(p, state);
++}
++
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * Wait for the thread to block in any of the states set in @match_state.
++ * If it changes, i.e. @p might have woken up, then return zero.  When we
++ * succeed in waiting for @p to be off its CPU, we return a positive number
++ * (its total switch count).  If a second call a short while later returns the
++ * same number, the caller can be sure that @p has remained unscheduled the
++ * whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++	unsigned long flags;
++	int running, queued, match;
++	unsigned long ncsw;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	for (;;) {
++		rq = task_rq(p);
++
++		/*
++		 * If the task is actively running on another CPU
++		 * still, just relax and busy-wait without holding
++		 * any locks.
++		 *
++		 * NOTE! Since we don't hold any locks, it's not
++		 * even sure that "rq" stays as the right runqueue!
++		 * But we don't care, since this will return false
++		 * if the runqueue has changed and p is actually now
++		 * running somewhere else!
++		 */
++		while (task_on_cpu(p)) {
++			if (!task_state_match(p, match_state))
++				return 0;
++			cpu_relax();
++		}
++
++		/*
++		 * Ok, time to look more closely! We need the rq
++		 * lock now, to be *sure*. If we're wrong, we'll
++		 * just go back and repeat.
++		 */
++		task_access_lock_irqsave(p, &lock, &flags);
++		trace_sched_wait_task(p);
++		running = task_on_cpu(p);
++		queued = p->on_rq;
++		ncsw = 0;
++		if ((match = __task_state_match(p, match_state))) {
++			/*
++			 * When matching on p->saved_state, consider this task
++			 * still queued so it will wait.
++			 */
++			if (match < 0)
++				queued = 1;
++			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++		}
++		task_access_unlock_irqrestore(p, lock, &flags);
++
++		/*
++		 * If it changed from the expected state, bail out now.
++		 */
++		if (unlikely(!ncsw))
++			break;
++
++		/*
++		 * Was it really running after all now that we
++		 * checked with the proper locks actually held?
++		 *
++		 * Oops. Go back and try again..
++		 */
++		if (unlikely(running)) {
++			cpu_relax();
++			continue;
++		}
++
++		/*
++		 * It's not enough that it's not actively running,
++		 * it must be off the runqueue _entirely_, and not
++		 * preempted!
++		 *
++		 * So if it was still runnable (but just not actively
++		 * running right now), it's preempted, and we should
++		 * yield - it could be a while.
++		 */
++		if (unlikely(queued)) {
++			ktime_t to = NSEC_PER_SEC / HZ;
++
++			set_current_state(TASK_UNINTERRUPTIBLE);
++			schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++			continue;
++		}
++
++		/*
++		 * Ahh, all good. It wasn't running, and it wasn't
++		 * runnable, which means that it will never become
++		 * running in the future either. We're all done!
++		 */
++		break;
++	}
++
++	return ncsw;
++}
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++	if (hrtimer_active(&rq->hrtick_timer))
++		hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++	raw_spin_lock(&rq->lock);
++	resched_curr(rq);
++	raw_spin_unlock(&rq->lock);
++
++	return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ *  - enabled by features
++ *  - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	/**
++	 * Alt schedule FW doesn't support sched_feat yet
++	if (!sched_feat(HRTICK))
++		return 0;
++	*/
++	if (!cpu_active(cpu_of(rq)))
++		return 0;
++	return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	ktime_t time = rq->hrtick_time;
++
++	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++	struct rq *rq = arg;
++
++	raw_spin_lock(&rq->lock);
++	__hrtick_restart(rq);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	s64 delta;
++
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense and can cause timer DoS.
++	 */
++	delta = max_t(s64, delay, 10000LL);
++
++	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++	if (rq == this_rq())
++		__hrtick_restart(rq);
++	else
++		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense. Rely on vruntime for fairness.
++	 */
++	delay = max_t(u64, delay, 10000LL);
++	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++		      HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++	rq->hrtick_timer.function = hrtick;
++}
++#else	/* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif	/* CONFIG_SCHED_HRTICK */
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
++}
++
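++/*
++ * Worked examples for the mapping above (with the usual
++ * MAX_RT_PRIO == 100): a SCHED_FIFO/SCHED_RR task with rt_priority 99
++ * maps to prio 0 (MAX_RT_PRIO - 1 - 99), rt_priority 1 maps to prio 98,
++ * while a SCHED_NORMAL task simply keeps its static_prio (120 for
++ * nice 0), so lower numbers always mean higher scheduling priority.
++ */
++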
++/*
++ * Calculate the expected normal priority: i.e. priority
++ * without taking RT-inheritance into account. Might be
++ * boosted by interactivity modifiers. Changes upon fork,
++ * setprio syscalls, and whenever the interactivity
++ * estimator recalculates.
++ */
++static inline int normal_prio(struct task_struct *p)
++{
++	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++}
++
++/*
++ * Calculate the current priority, i.e. the priority
++ * taken into account by the scheduler. This value might
++ * be boosted by RT tasks as it will be RT if the task got
++ * RT-boosted. If not then it returns p->normal_prio.
++ */
++static int effective_prio(struct task_struct *p)
++{
++	p->normal_prio = normal_prio(p);
++	/*
++	 * If we are RT tasks or we were boosted to RT priority,
++	 * keep the priority unchanged. Otherwise, update priority
++	 * to the normal priority:
++	 */
++	if (!rt_prio(p->prio))
++		return p->normal_prio;
++	return p->prio;
++}
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++	enqueue_task(p, rq, ENQUEUE_WAKEUP);
++
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++	/*
++	 * If in_iowait is set, the code below may not trigger any cpufreq
++	 * utilization updates, so do it here explicitly with the IOWAIT flag
++	 * passed.
++	 */
++	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++/*
++ * deactivate_task - remove a task from the runqueue.
++ *
++ * Context: rq->lock
++ */
++static inline void deactivate_task(struct task_struct *p, struct rq *rq)
++{
++	WRITE_ONCE(p->on_rq, 0);
++	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++	dequeue_task(p, rq, DEQUEUE_SLEEP);
++
++	cpufreq_update_util(rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++	 * successfully executed on another CPU. We must ensure that updates of
++	 * per-task data have been completed by this moment.
++	 */
++	smp_wmb();
++
++	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++#define SCA_CHECK		0x01
++#define SCA_USER		0x08
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * We should never call set_task_cpu() on a blocked task,
++	 * ttwu() will sort out the placement.
++	 */
++	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++	/*
++	 * The caller should hold either p->pi_lock or rq->lock, when changing
++	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++	 *
++	 * sched_move_task() holds both and thus holding either pins the cgroup,
++	 * see task_group().
++	 */
++	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++				      lockdep_is_held(&task_rq(p)->lock)));
++#endif
++	/*
++	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++	 */
++	WARN_ON_ONCE(!cpu_online(new_cpu));
++
++	WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++	trace_sched_migrate_task(p, new_cpu);
++
++	if (task_cpu(p) != new_cpu)
++	{
++		rseq_migrate(p);
++		perf_event_task_migrate(p);
++	}
++
++	__set_task_cpu(p, new_cpu);
++}
++
++#define MDF_FORCE_ENABLED	0x80
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	/*
++	 * This here violates the locking rules for affinity, since we're only
++	 * supposed to change these variables while holding both rq->lock and
++	 * p->pi_lock.
++	 *
++	 * HOWEVER, it magically works, because ttwu() is the only code that
++	 * accesses these variables under p->pi_lock and only does so after
++	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++	 * before finish_task().
++	 *
++	 * XXX do further audits, this smells like something putrid.
++	 */
++	SCHED_WARN_ON(!p->on_cpu);
++	p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++	struct task_struct *p = current;
++	int cpu;
++
++	if (p->migration_disabled) {
++		p->migration_disabled++;
++		return;
++	}
++
++	guard(preempt)();
++	cpu = smp_processor_id();
++	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++		cpu_rq(cpu)->nr_pinned++;
++		p->migration_disabled = 1;
++		p->migration_flags &= ~MDF_FORCE_ENABLED;
++
++		/*
++		 * Violates locking rules! see comment in __do_set_cpus_ptr().
++		 */
++		if (p->cpus_ptr == &p->cpus_mask)
++			__do_set_cpus_ptr(p, cpumask_of(cpu));
++	}
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++	struct task_struct *p = current;
++
++	if (0 == p->migration_disabled)
++		return;
++
++	if (p->migration_disabled > 1) {
++		p->migration_disabled--;
++		return;
++	}
++
++	if (WARN_ON_ONCE(!p->migration_disabled))
++		return;
++
++	/*
++	 * Ensure stop_task runs either before or after this, and that
++	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++	 */
++	guard(preempt)();
++	/*
++	 * Assumption: current should be running on allowed cpu
++	 */
++	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++	if (p->cpus_ptr != &p->cpus_mask)
++		__do_set_cpus_ptr(p, &p->cpus_mask);
++	/*
++	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
++	 * regular cpus_mask, otherwise things that race (eg.
++	 * select_fallback_rq) get confused.
++	 */
++	barrier();
++	p->migration_disabled = 0;
++	this_rq()->nr_pinned--;
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++	/* When not in the task's cpumask, no point in looking further. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/* migrate_disabled() must be allowed to finish. */
++	if (is_migration_disabled(p))
++		return cpu_online(cpu);
++
++	/* Non kernel threads are not allowed during either online or offline. */
++	if (!(p->flags & PF_KTHREAD))
++		return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++	/* KTHREAD_IS_PER_CPU is always allowed. */
++	if (kthread_is_per_cpu(p))
++		return cpu_online(cpu);
++
++	/* Regular kernel threads don't get to stay during offline. */
++	if (cpu_dying(cpu))
++		return false;
++
++	/* But are allowed during online. */
++	return cpu_online(cpu);
++}
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ *    stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ *    off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ *    it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ *    is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
++{
++	int src_cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	src_cpu = cpu_of(rq);
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++	dequeue_task(p, rq, 0);
++	set_task_cpu(p, new_cpu);
++	raw_spin_unlock(&rq->lock);
++
++	rq = cpu_rq(new_cpu);
++
++	raw_spin_lock(&rq->lock);
++	WARN_ON_ONCE(task_cpu(p) != new_cpu);
++
++	sched_mm_cid_migrate_to(rq, p, src_cpu);
++
++	sched_task_sanity_check(p, rq);
++	enqueue_task(p, rq, 0);
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++	wakeup_preempt(rq);
++
++	return rq;
++}
++
++struct migration_arg {
++	struct task_struct *task;
++	int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
++{
++	/* Affinity changed (again). */
++	if (!is_cpu_allowed(p, dest_cpu))
++		return rq;
++
++	return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a highprio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++	struct migration_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++
++	/*
++	 * The original target CPU might have gone down and we might
++	 * be on another CPU but it doesn't matter.
++	 */
++	local_irq_save(flags);
++	/*
++	 * We need to explicitly wake pending tasks before running
++	 * __migrate_task() such that we will not miss enforcing cpus_ptr
++	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++	 */
++	flush_smp_call_function_queue();
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++	/*
++	 * If task_rq(p) != rq, it cannot be migrated here, because we're
++	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++	 * we're holding p->pi_lock.
++	 */
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		rq = __migrate_task(rq, p, arg->dest_cpu);
++	}
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
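++/*
++ * For reference, the caller side of this: affine_move_task() below
++ * fills in a struct migration_arg with the task and destination CPU,
++ * drops both p->pi_lock and the rq lock, and then runs
++ *
++ *	stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++ *
++ * which is step 1) of the "how migration works" recipe further up.
++ */
++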
++static inline void
++set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
++{
++	cpumask_copy(&p->cpus_mask, ctx->new_mask);
++	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
++
++	/*
++	 * Swap in a new user_cpus_ptr if SCA_USER flag set
++	 */
++	if (ctx->flags & SCA_USER)
++		swap(p->user_cpus_ptr, ctx->user_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
++{
++	lockdep_assert_held(&p->pi_lock);
++	set_cpus_allowed_common(p, ctx);
++}
++
++/*
++ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
++ * affinity (if any) should be destroyed too.
++ */
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.user_mask = NULL,
++		.flags     = SCA_USER,	/* clear the user requested mask */
++	};
++	union cpumask_rcuhead {
++		cpumask_t cpumask;
++		struct rcu_head rcu;
++	};
++
++	__do_set_cpus_allowed(p, &ac);
++
++	/*
++	 * Because this is called with p->pi_lock held, it is not possible
++	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
++	 * kfree_rcu().
++	 */
++	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
++}
++
++static cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	/*
++	 * See do_set_cpus_allowed() above for the rcu_head usage.
++	 */
++	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
++
++	return kmalloc_node(size, GFP_KERNEL, node);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++		      int node)
++{
++	cpumask_t *user_mask;
++	unsigned long flags;
++
++	/*
++	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
++	 * may differ by now due to racing.
++	 */
++	dst->user_cpus_ptr = NULL;
++
++	/*
++	 * This check is racy and losing the race is a valid situation.
++	 * It is not worth the extra overhead of taking the pi_lock on
++	 * every fork/clone.
++	 */
++	if (data_race(!src->user_cpus_ptr))
++		return 0;
++
++	user_mask = alloc_user_cpus_ptr(node);
++	if (!user_mask)
++		return -ENOMEM;
++
++	/*
++	 * Use pi_lock to protect content of user_cpus_ptr
++	 *
++	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
++	 * do_set_cpus_allowed().
++	 */
++	raw_spin_lock_irqsave(&src->pi_lock, flags);
++	if (src->user_cpus_ptr) {
++		swap(dst->user_cpus_ptr, user_mask);
++		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++	}
++	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
++
++	if (unlikely(user_mask))
++		kfree(user_mask);
++
++	return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++	struct cpumask *user_mask = NULL;
++
++	swap(p->user_cpus_ptr, user_mask);
++
++	return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++	kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++	return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++	guard(preempt)();
++	int cpu = task_cpu(p);
++
++	if ((cpu != smp_processor_id()) && task_curr(p))
++		smp_send_reschedule(cpu);
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ *  - cpu_active must be a subset of cpu_online
++ *
++ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ *    see __set_cpus_allowed_ptr(). At this point the newly online
++ *    CPU isn't yet part of the sched domains, and balancing will not
++ *    see it.
++ *
++ *  - on cpu-down we clear cpu_active() to mask the sched domains and
++ *    prevent the load balancer from placing new tasks on the to-be-removed
++ *    CPU. Existing tasks will remain running there and will be taken
++ *    off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++	int nid = cpu_to_node(cpu);
++	const struct cpumask *nodemask = NULL;
++	enum { cpuset, possible, fail } state = cpuset;
++	int dest_cpu;
++
++	/*
++	 * If the node that the CPU is on has been offlined, cpu_to_node()
++	 * will return -1. There is no CPU on the node, and we should
++	 * select a CPU on another node.
++	 */
++	if (nid != -1) {
++		nodemask = cpumask_of_node(nid);
++
++		/* Look for allowed, online CPU in same node. */
++		for_each_cpu(dest_cpu, nodemask) {
++			if (is_cpu_allowed(p, dest_cpu))
++				return dest_cpu;
++		}
++	}
++
++	for (;;) {
++		/* Any allowed, online CPU? */
++		for_each_cpu(dest_cpu, p->cpus_ptr) {
++			if (!is_cpu_allowed(p, dest_cpu))
++				continue;
++			goto out;
++		}
++
++		/* No more Mr. Nice Guy. */
++		switch (state) {
++		case cpuset:
++			if (cpuset_cpus_allowed_fallback(p)) {
++				state = possible;
++				break;
++			}
++			fallthrough;
++		case possible:
++			/*
++			 * XXX When called from select_task_rq() we only
++			 * hold p->pi_lock and again violate locking order.
++			 *
++			 * More yuck to audit.
++			 */
++			do_set_cpus_allowed(p, task_cpu_possible_mask(p));
++			state = fail;
++			break;
++
++		case fail:
++			BUG();
++			break;
++		}
++	}
++
++out:
++	if (state != cpuset) {
++		/*
++		 * Don't tell them about moving exiting tasks or
++		 * kernel threads (both mm NULL), since they never
++		 * leave kernel.
++		 */
++		if (p->mm && printk_ratelimit()) {
++			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++					task_pid_nr(p), p->comm, cpu);
++		}
++	}
++
++	return dest_cpu;
++}
++
++static inline void
++sched_preempt_mask_flush(cpumask_t *mask, int prio, int ref)
++{
++	int cpu;
++
++	cpumask_copy(mask, sched_preempt_mask + ref);
++	if (prio < ref) {
++		for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
++			if (prio < cpu_rq(cpu)->prio)
++				cpumask_set_cpu(cpu, mask);
++		}
++	} else {
++		for_each_cpu_andnot(cpu, mask, sched_idle_mask) {
++			if (prio >= cpu_rq(cpu)->prio)
++				cpumask_clear_cpu(cpu, mask);
++		}
++	}
++}
++
++static inline int
++preempt_mask_check(cpumask_t *preempt_mask, cpumask_t *allow_mask, int prio)
++{
++	cpumask_t *mask = sched_preempt_mask + prio;
++	int pr = atomic_read(&sched_prio_record);
++
++	if (pr != prio && SCHED_QUEUE_BITS - 1 != prio) {
++		sched_preempt_mask_flush(mask, prio, pr);
++		atomic_set(&sched_prio_record, prio);
++	}
++
++	return cpumask_and(preempt_mask, allow_mask, mask);
++}
++
++__read_mostly idle_select_func_t idle_select_func ____cacheline_aligned_in_smp = cpumask_and;
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	cpumask_t allow_mask, mask;
++
++	if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
++		return select_fallback_rq(task_cpu(p), p);
++
++	if (idle_select_func(&mask, &allow_mask, sched_idle_mask)	||
++	    preempt_mask_check(&mask, &allow_mask, task_sched_prio(p)))
++		return best_mask_cpu(task_cpu(p), &mask);
++
++	return best_mask_cpu(task_cpu(p), &allow_mask);
++}
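++
++/*
++ * Selection order, roughly: prefer an idle CPU from the allowed set
++ * (idle_select_func), then a CPU whose cached runqueue priority suggests it
++ * can be preempted by @p (preempt_mask_check), and otherwise fall back to
++ * any allowed CPU; best_mask_cpu() then picks one CPU out of the chosen
++ * mask, using the task's previous CPU as the reference point.
++ */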
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++	static struct lock_class_key stop_pi_lock;
++	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++	struct sched_param start_param = { .sched_priority = 0 };
++	struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++	if (stop) {
++		/*
++		 * Make it appear like a SCHED_FIFO task; it's something
++		 * userspace knows about and won't get confused about.
++		 *
++		 * Also, it will make PI more or less work without too
++		 * much confusion -- but then, stop work should not
++		 * rely on PI working anyway.
++		 */
++		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++		/*
++		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++		 * adjust the effective priority of a task. As a result,
++		 * rt_mutex_setprio() can trigger (RT) balancing operations,
++		 * which can then trigger wakeups of the stop thread to push
++		 * around the current task.
++		 *
++		 * The stop task itself will never be part of the PI-chain, it
++		 * never blocks, therefore that ->pi_lock recursion is safe.
++		 * Tell lockdep about this by placing the stop->pi_lock in its
++		 * own class.
++		 */
++		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++	}
++
++	cpu_rq(cpu)->stop = stop;
++
++	if (old_stop) {
++		/*
++		 * Reset it back to a normal scheduling policy so that
++		 * it can die in pieces.
++		 */
++		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++	}
++}
++
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++			    raw_spinlock_t *lock, unsigned long irq_flags)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	/* Can the task run on the task's current CPU? If so, we're done */
++	if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++		if (p->migration_disabled) {
++			if (likely(p->cpus_ptr != &p->cpus_mask))
++				__do_set_cpus_ptr(p, &p->cpus_mask);
++			p->migration_disabled = 0;
++			p->migration_flags |= MDF_FORCE_ENABLED;
++			/* When p is migrate_disabled, rq->lock should be held */
++			rq->nr_pinned--;
++		}
++
++		if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++			struct migration_arg arg = { p, dest_cpu };
++
++			/* Need help from migration thread: drop lock and wait. */
++			__task_access_unlock(p, lock);
++			raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++			stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++			return 0;
++		}
++		if (task_on_rq_queued(p)) {
++			/*
++			 * OK, since we're going to drop the lock immediately
++			 * afterwards anyway.
++			 */
++			update_rq_clock(rq);
++			rq = move_queued_task(rq, p, dest_cpu);
++			lock = &rq->lock;
++		}
++	}
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++					 struct affinity_context *ctx,
++					 struct rq *rq,
++					 raw_spinlock_t *lock,
++					 unsigned long irq_flags)
++{
++	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++	const struct cpumask *cpu_valid_mask = cpu_active_mask;
++	bool kthread = p->flags & PF_KTHREAD;
++	int dest_cpu;
++	int ret = 0;
++
++	if (kthread || is_migration_disabled(p)) {
++		/*
++		 * Kernel threads are allowed on online && !active CPUs,
++		 * however, during cpu-hot-unplug, even these might get pushed
++		 * away if not KTHREAD_IS_PER_CPU.
++		 *
++		 * Specifically, migration_disabled() tasks must not fail the
++		 * cpumask_any_and_distribute() pick below, esp. so on
++		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
++		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++		 */
++		cpu_valid_mask = cpu_online_mask;
++	}
++
++	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	/*
++	 * Must re-check here, to close a race against __kthread_bind(),
++	 * sched_setaffinity() is not guaranteed to observe the flag.
++	 */
++	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
++		goto out;
++
++	dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
++	if (dest_cpu >= nr_cpu_ids) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	__do_set_cpus_allowed(p, ctx);
++
++	return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++out:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++	return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it's executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++static int __set_cpus_allowed_ptr(struct task_struct *p,
++				  struct affinity_context *ctx)
++{
++	unsigned long irq_flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
++	 * flags are set.
++	 */
++	if (p->user_cpus_ptr &&
++	    !(ctx->flags & SCA_USER) &&
++	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
++		ctx->new_mask = rq->scratch_mask;
++
++
++	return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++
++	return __set_cpus_allowed_ptr(p, &ac);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
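++
++/*
++ * Usage sketch (hypothetical caller, p being any task the caller holds a
++ * reference to): pinning it to CPU 1 could look like:
++ *
++ *	cpumask_var_t mask;
++ *
++ *	if (alloc_cpumask_var(&mask, GFP_KERNEL)) {
++ *		cpumask_clear(mask);
++ *		cpumask_set_cpu(1, mask);
++ *		WARN_ON_ONCE(set_cpus_allowed_ptr(p, mask));
++ *		free_cpumask_var(mask);
++ *	}
++ */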
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
++ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
++ * affinity or use cpu_online_mask instead.
++ *
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++				     struct cpumask *new_mask,
++				     const struct cpumask *subset_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++	unsigned long irq_flags;
++	raw_spinlock_t *lock;
++	struct rq *rq;
++	int err;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++
++	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
++		err = -EINVAL;
++		goto err_unlock;
++	}
++
++	return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
++
++err_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	cpumask_var_t new_mask;
++	const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++	alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++	/*
++	 * __migrate_task() can fail silently in the face of concurrent
++	 * offlining of the chosen destination CPU, so take the hotplug
++	 * lock to ensure that the migration succeeds.
++	 */
++	cpus_read_lock();
++	if (!cpumask_available(new_mask))
++		goto out_set_mask;
++
++	if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++		goto out_free_mask;
++
++	/*
++	 * We failed to find a valid subset of the affinity mask for the
++	 * task, so override it based on its cpuset hierarchy.
++	 */
++	cpuset_cpus_allowed(p, new_mask);
++	override_mask = new_mask;
++
++out_set_mask:
++	if (printk_ratelimit()) {
++		printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++				task_pid_nr(p), p->comm,
++				cpumask_pr_args(override_mask));
++	}
++
++	WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++	cpus_read_unlock();
++	free_cpumask_var(new_mask);
++}
++
++static int
++__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr().
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	struct affinity_context ac = {
++		.new_mask  = task_user_cpus(p),
++		.flags     = 0,
++	};
++	int ret;
++
++	/*
++	 * Try to restore the old affinity mask with __sched_setaffinity().
++	 * Cpuset masking will be done there too.
++	 */
++	ret = __sched_setaffinity(p, &ac);
++	WARN_ON_ONCE(ret);
++}
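++
++/*
++ * Typical (assumed) pairing of the two helpers above: an architecture with
++ * asymmetric CPU capabilities can call force_compatible_cpus_allowed_ptr()
++ * when a task execs an image that only a subset of CPUs can run, and
++ * relax_compatible_cpus_allowed_ptr() when it execs one that any CPU can
++ * run, restoring the user-requested affinity kept in user_cpus_ptr.
++ */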
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	return 0;
++}
++
++static inline int
++__set_cpus_allowed_ptr(struct task_struct *p,
++		       struct affinity_context *ctx)
++{
++	return set_cpus_allowed_ptr(p, ctx->new_mask);
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return false;
++}
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	return NULL;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq;
++
++	if (!schedstat_enabled())
++		return;
++
++	rq = this_rq();
++
++#ifdef CONFIG_SMP
++	if (cpu == rq->cpu) {
++		__schedstat_inc(rq->ttwu_local);
++		__schedstat_inc(p->stats.nr_wakeups_local);
++	} else {
++		/* Alt schedule FW ToDo:
++		 * How to do ttwu_wake_remote
++		 */
++	}
++#endif /* CONFIG_SMP */
++
++	__schedstat_inc(rq->ttwu_count);
++	__schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable.
++ */
++static inline void ttwu_do_wakeup(struct task_struct *p)
++{
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++	if (p->sched_contributes_to_load)
++		rq->nr_uninterruptible--;
++
++	if (
++#ifdef CONFIG_SMP
++	    !(wake_flags & WF_MIGRATED) &&
++#endif
++	    p->in_iowait) {
++		delayacct_blkio_end(p);
++		atomic_dec(&task_rq(p)->nr_iowait);
++	}
++
++	activate_task(p, rq);
++	wakeup_preempt(rq);
++
++	ttwu_do_wakeup(p);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ *   for (;;) {
++ *      set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ *      if (CONDITION)
++ *         break;
++ *
++ *      schedule();
++ *   }
++ *   __set_current_state(TASK_RUNNING);
++ *
++ * and a wakeup for @p arriving between set_current_state() and schedule().
++ * In this case @p is still runnable, so all that needs doing is to change
++ * p->state back to TASK_RUNNING in an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ *          %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	int ret = 0;
++
++	rq = __task_access_lock(p, &lock);
++	if (task_on_rq_queued(p)) {
++		if (!task_on_cpu(p)) {
++			/*
++			 * When on_rq && !on_cpu the task is preempted, see if
++			 * it should preempt the task that is current now.
++			 */
++			update_rq_clock(rq);
++			wakeup_preempt(rq);
++		}
++		ttwu_do_wakeup(p);
++		ret = 1;
++	}
++	__task_access_unlock(p, lock);
++
++	return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++	struct llist_node *llist = arg;
++	struct rq *rq = this_rq();
++	struct task_struct *p, *t;
++	struct rq_flags rf;
++
++	if (!llist)
++		return;
++
++	rq_lock_irqsave(rq, &rf);
++	update_rq_clock(rq);
++
++	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++		if (WARN_ON_ONCE(p->on_cpu))
++			smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++			set_task_cpu(p, cpu_of(rq));
++
++		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++	}
++
++	/*
++	 * Must be after enqueueing at least one task such that
++	 * idle_cpu() does not observe a false-negative -- if it does,
++	 * it is possible for select_idle_siblings() to stack a number
++	 * of tasks on this CPU during that window.
++	 *
++	 * It is ok to clear ttwu_pending when another task is pending.
++	 * We will receive IPI after local irq enabled and then enqueue it.
++	 * Since now nr_running > 0, idle_cpu() will always get correct result.
++	 */
++	WRITE_ONCE(rq->ttwu_pending, 0);
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Prepare the scene for sending an IPI for a remote smp_call
++ *
++ * Returns true if the caller can proceed with sending the IPI.
++ * Returns false otherwise.
++ */
++bool call_function_single_prep_ipi(int cpu)
++{
++	if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
++		trace_sched_wake_idle_without_ipi(cpu);
++		return false;
++	}
++
++	return true;
++}
++
++/*
++ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_pending() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++	WRITE_ONCE(rq->ttwu_pending, 1);
++	__smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
++{
++	/*
++	 * Do not complicate things with the async wake_list while the CPU is
++	 * in hotplug state.
++	 */
++	if (!cpu_active(cpu))
++		return false;
++
++	/* Ensure the task will still be allowed to run on the CPU. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/*
++	 * If the CPU does not share cache, then queue the task on the
++	 * remote rq's wakelist to avoid accessing remote data.
++	 */
++	if (!cpus_share_cache(smp_processor_id(), cpu))
++		return true;
++
++	if (cpu == smp_processor_id())
++		return false;
++
++	/*
++	 * If the wakee CPU is idle, or the task is descheduling and the
++	 * only running task on the CPU, then use the wakelist to offload
++	 * the task activation to the idle (or soon-to-be-idle) CPU as
++	 * the current CPU is likely busy. nr_running is checked to
++	 * avoid unnecessary task stacking.
++	 *
++	 * Note that we can only get here with (wakee) p->on_rq=0,
++	 * p->on_cpu can be whatever, we've done the dequeue, so
++	 * the wakee has been accounted out of ->nr_running.
++	 */
++	if (!cpu_rq(cpu)->nr_running)
++		return true;
++
++	return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
++		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++		__ttwu_queue_wakelist(p, cpu, wake_flags);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_if_idle(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	guard(rcu)();
++	if (is_idle_task(rcu_dereference(rq->curr))) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		if (is_idle_task(rq->curr))
++			resched_curr(rq);
++	}
++}
++
++extern struct static_key_false sched_asym_cpucapacity;
++
++static __always_inline bool sched_asym_cpucap_active(void)
++{
++	return static_branch_unlikely(&sched_asym_cpucapacity);
++}
++
++bool cpus_equal_capacity(int this_cpu, int that_cpu)
++{
++	if (!sched_asym_cpucap_active())
++		return true;
++
++	if (this_cpu == that_cpu)
++		return true;
++
++	return arch_scale_cpu_capacity(this_cpu) == arch_scale_cpu_capacity(that_cpu);
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++	if (this_cpu == that_cpu)
++		return true;
++
++	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (ttwu_queue_wakelist(p, cpu, wake_flags))
++		return;
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++	ttwu_do_activate(rq, p, wake_flags);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of saved_state:
++ *
++ *   The related locking code always holds p::pi_lock when updating
++ *   p::saved_state, which means the code is fully serialized in both cases.
++ *
++ *  For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
++ *  No other bits set. This allows to distinguish all wakeup scenarios.
++ *
++ *  For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
++ *  allows us to prevent early wakeup of tasks before they can be run on
++ *  asymmetric ISA architectures (eg ARMv9).
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++	int match;
++
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++			     state != TASK_RTLOCK_WAIT);
++	}
++
++	*success = !!(match = __task_state_match(p, state));
++
++	/*
++	 * Saved state preserves the task state across blocking on
++	 * an RT lock or TASK_FREEZABLE tasks.  If the state matches,
++	 * set p::saved_state to TASK_RUNNING, but do not wake the task
++	 * because it waits for a lock wakeup or __thaw_task(). Also
++	 * indicate success because from the regular waker's point of
++	 * view this has succeeded.
++	 *
++	 * After acquiring the lock the task will restore p::__state
++	 * from p::saved_state which ensures that the regular
++	 * wakeup is not lost. The restore will also set
++	 * p::saved_state to TASK_RUNNING so any further tests will
++	 * not result in false positives vs. @success
++	 */
++	if (match < 0)
++		p->saved_state = TASK_RUNNING;
++
++	return match > 0;
++}
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ *  MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
++ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
++ *     rq(c1)->lock (if not at the same time, then in that order).
++ *  C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ *   CPU0            CPU1            CPU2
++ *
++ *   LOCK rq(0)->lock
++ *   sched-out X
++ *   sched-in Y
++ *   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(0)->lock // orders against CPU0
++ *                                   dequeue X
++ *                                   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(1)->lock
++ *                                   enqueue X
++ *                                   UNLOCK rq(1)->lock
++ *
++ *                   LOCK rq(1)->lock // orders against CPU2
++ *                   sched-out Z
++ *                   sched-in X
++ *                   UNLOCK rq(1)->lock
++ *
++ *
++ *  BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
++ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ *   LOCK rq(0)->lock LOCK X->pi_lock
++ *   dequeue X
++ *   sched-out X
++ *   smp_store_release(X->on_cpu, 0);
++ *
++ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
++ *                    X->state = WAKING
++ *                    set_task_cpu(X,2)
++ *
++ *                    LOCK rq(2)->lock
++ *                    enqueue X
++ *                    X->state = RUNNING
++ *                    UNLOCK rq(2)->lock
++ *
++ *                                          LOCK rq(2)->lock // orders against CPU1
++ *                                          sched-out Z
++ *                                          sched-in X
++ *                                          UNLOCK rq(2)->lock
++ *
++ *                    UNLOCK X->pi_lock
++ *   UNLOCK rq(0)->lock
++ *
++ *
++ * However; for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ *   If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ *  - p->sched_class
++ *  - p->cpus_ptr
++ *  - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
++ *  - ttwu_queue()       -- new rq, for enqueue of the task;
++ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ *	   %false otherwise.
++ */
++int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
++{
++	guard(preempt)();
++	int cpu, success = 0;
++
++	if (p == current) {
++		/*
++		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++		 * == smp_processor_id()'. Together this means we can special
++		 * case the whole 'p->on_rq && ttwu_runnable()' case below
++		 * without taking any locks.
++		 *
++		 * In particular:
++		 *  - we rely on Program-Order guarantees for all the ordering,
++		 *  - we're serialized against set_special_state() by virtue of
++		 *    it disabling IRQs (this allows not taking ->pi_lock).
++		 */
++		if (!ttwu_state_match(p, state, &success))
++			goto out;
++
++		trace_sched_waking(p);
++		ttwu_do_wakeup(p);
++		goto out;
++	}
++
++	/*
++	 * If we are going to wake up a thread waiting for CONDITION we
++	 * need to ensure that CONDITION=1 done by the caller can not be
++	 * reordered with p->state check below. This pairs with smp_store_mb()
++	 * in set_current_state() that the waiting thread does.
++	 */
++	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
++		smp_mb__after_spinlock();
++		if (!ttwu_state_match(p, state, &success))
++			break;
++
++		trace_sched_waking(p);
++
++		/*
++		 * Ensure we load p->on_rq _after_ p->state, otherwise it would
++		 * be possible to, falsely, observe p->on_rq == 0 and get stuck
++		 * in smp_cond_load_acquire() below.
++		 *
++		 * sched_ttwu_pending()			try_to_wake_up()
++		 *   STORE p->on_rq = 1			  LOAD p->state
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   UNLOCK rq->lock
++		 *
++		 * [task p]
++		 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * A similar smp_rmb() lives in __task_needs_rq_lock().
++		 */
++		smp_rmb();
++		if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++			break;
++
++#ifdef CONFIG_SMP
++		/*
++		 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++		 * possible to, falsely, observe p->on_cpu == 0.
++		 *
++		 * One must be running (->on_cpu == 1) in order to remove oneself
++		 * from the runqueue.
++		 *
++		 * __schedule() (switch to task 'p')	try_to_wake_up()
++		 *   STORE p->on_cpu = 1		  LOAD p->on_rq
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (put 'p' to sleep)
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   STORE p->on_rq = 0			  LOAD p->on_cpu
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++		 * schedule()'s deactivate_task() has 'happened' and p will no longer
++		 * care about its own p->state. See the comment in __schedule().
++		 */
++		smp_acquire__after_ctrl_dep();
++
++		/*
++		 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++		 * == 0), which means we need to do an enqueue, change p->state to
++		 * TASK_WAKING such that we can unlock p->pi_lock before doing the
++		 * enqueue, such as ttwu_queue_wakelist().
++		 */
++		WRITE_ONCE(p->__state, TASK_WAKING);
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, consider queueing p on the remote CPU's wake_list,
++		 * which potentially sends an IPI instead of spinning on p->on_cpu to
++		 * let the waker make forward progress. This is safe because IRQs are
++		 * disabled and the IPI will deliver after on_cpu is cleared.
++		 *
++		 * Ensure we load task_cpu(p) after p->on_cpu:
++		 *
++		 * set_task_cpu(p, cpu);
++		 *   STORE p->cpu = @cpu
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock
++		 *   smp_mb__after_spin_lock()          smp_cond_load_acquire(&p->on_cpu)
++		 *   STORE p->on_cpu = 1                LOAD p->cpu
++		 *
++		 * to ensure we observe the correct CPU on which the task is currently
++		 * scheduling.
++		 */
++		if (smp_load_acquire(&p->on_cpu) &&
++		    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
++			break;
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, wait until it's done referencing the task.
++		 *
++		 * Pairs with the smp_store_release() in finish_task().
++		 *
++		 * This ensures that tasks getting woken will be fully ordered against
++		 * their previous state and preserve Program Order.
++		 */
++		smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		sched_task_ttwu(p);
++
++		if ((wake_flags & WF_CURRENT_CPU) &&
++		    cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
++			cpu = smp_processor_id();
++		else
++			cpu = select_task_rq(p);
++
++		if (cpu != task_cpu(p)) {
++			if (p->in_iowait) {
++				delayacct_blkio_end(p);
++				atomic_dec(&task_rq(p)->nr_iowait);
++			}
++
++			wake_flags |= WF_MIGRATED;
++			set_task_cpu(p, cpu);
++		}
++#else
++		sched_task_ttwu(p);
++
++		cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++		ttwu_queue(p, cpu, wake_flags);
++	}
++out:
++	if (success)
++		ttwu_stat(p, task_cpu(p), wake_flags);
++
++	return success;
++}
++
++static bool __task_needs_rq_lock(struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++	 * the task is blocked. Make sure to check @state since ttwu() can drop
++	 * locks at the end, see ttwu_queue_wakelist().
++	 */
++	if (state == TASK_RUNNING || state == TASK_WAKING)
++		return true;
++
++	/*
++	 * Ensure we load p->on_rq after p->__state, otherwise it would be
++	 * possible to, falsely, observe p->on_rq == 0.
++	 *
++	 * See try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	if (p->on_rq)
++		return true;
++
++#ifdef CONFIG_SMP
++	/*
++	 * Ensure the task has finished __schedule() and will not be referenced
++	 * anymore. Again, see try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	smp_cond_load_acquire(&p->on_cpu, !VAL);
++#endif
++
++	return false;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in its current state by avoiding wakeups and/or rq operations
++ * and call @func(@arg) on it.  This function can use ->on_rq and task_curr()
++ * to work out what the state is, if required.  Given that @func can be invoked
++ * with a runqueue lock held, it had better be quite lightweight.
++ *
++ * Returns:
++ *   Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++	struct rq *rq = NULL;
++	struct rq_flags rf;
++	int ret;
++
++	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++	if (__task_needs_rq_lock(p))
++		rq = __task_rq_lock(p, &rf);
++
++	/*
++	 * At this point the task is pinned; either:
++	 *  - blocked and we're holding off wakeups      (pi->lock)
++	 *  - woken, and we're holding off enqueue       (rq->lock)
++	 *  - queued, and we're holding off schedule     (rq->lock)
++	 *  - running, and we're holding off de-schedule (rq->lock)
++	 *
++	 * The called function (@func) can use: task_curr(), p->on_rq and
++	 * p->__state to differentiate between these states.
++	 */
++	ret = func(p, arg);
++
++	if (rq)
++		__task_rq_unlock(rq, &rf);
++
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++	return ret;
++}
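++
++/*
++ * Usage sketch (hypothetical callback): sample a field of a task while it
++ * is pinned in its current state:
++ *
++ *	static int read_prio(struct task_struct *t, void *arg)
++ *	{
++ *		*(int *)arg = t->prio;
++ *		return 0;
++ *	}
++ *
++ *	int prio;
++ *
++ *	task_call_func(p, read_prio, &prio);
++ */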
++
++/**
++ * cpu_curr_snapshot - Return a snapshot of the currently running task
++ * @cpu: The CPU on which to snapshot the task.
++ *
++ * Returns the task_struct pointer of the task "currently" running on
++ * the specified CPU.  If the same task is running on that CPU throughout,
++ * the return value will be a pointer to that task's task_struct structure.
++ * If the CPU did any context switches even vaguely concurrently with the
++ * execution of this function, the return value will be a pointer to the
++ * task_struct structure of a randomly chosen task that was running on
++ * that CPU somewhere around the time that this function was executing.
++ *
++ * If the specified CPU was offline, the return value is whatever it
++ * is, perhaps a pointer to the task_struct structure of that CPU's idle
++ * task, but there is no guarantee.  Callers wishing a useful return
++ * value must take some action to ensure that the specified CPU remains
++ * online throughout.
++ *
++ * This function executes full memory barriers before and after fetching
++ * the pointer, which permits the caller to confine this function's fetch
++ * with respect to the caller's accesses to other shared variables.
++ */
++struct task_struct *cpu_curr_snapshot(int cpu)
++{
++	struct task_struct *t;
++
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	t = rcu_dereference(cpu_curr(cpu));
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	return t;
++}
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++	return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++	return try_to_wake_up(p, state, 0);
++}
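++
++/*
++ * For example, wake_up_state(p, TASK_INTERRUPTIBLE) wakes @p only if it is
++ * sleeping interruptibly, whereas wake_up_process() uses TASK_NORMAL and
++ * also covers TASK_UNINTERRUPTIBLE sleeps.
++ */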
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup used by init_idle() too:
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	p->on_rq			= 0;
++	p->on_cpu			= 0;
++	p->utime			= 0;
++	p->stime			= 0;
++	p->sched_time			= 0;
++
++#ifdef CONFIG_SCHEDSTATS
++	/* Even if schedstat is disabled, there should not be garbage */
++	memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++	INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++	p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++	p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++	init_sched_mm_cid(p);
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	__sched_fork(clone_flags, p);
++	/*
++	 * We mark the process as NEW here. This guarantees that
++	 * nobody will actually run it, and a signal or other external
++	 * event cannot wake it up and insert it on the runqueue either.
++	 */
++	p->__state = TASK_NEW;
++
++	/*
++	 * Make sure we do not leak PI boosting priority to the child.
++	 */
++	p->prio = current->normal_prio;
++
++	/*
++	 * Revert to default priority/policy on fork if requested.
++	 */
++	if (unlikely(p->sched_reset_on_fork)) {
++		if (task_has_rt_policy(p)) {
++			p->policy = SCHED_NORMAL;
++			p->static_prio = NICE_TO_PRIO(0);
++			p->rt_priority = 0;
++		} else if (PRIO_TO_NICE(p->static_prio) < 0)
++			p->static_prio = NICE_TO_PRIO(0);
++
++		p->prio = p->normal_prio = p->static_prio;
++
++		/*
++		 * We don't need the reset flag anymore after the fork. It has
++		 * fulfilled its duty:
++		 */
++		p->sched_reset_on_fork = 0;
++	}
++
++#ifdef CONFIG_SCHED_INFO
++	if (unlikely(sched_info_on()))
++		memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++	init_task_preempt_count(p);
++
++	return 0;
++}
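++
++/*
++ * Example of the reset path above: a SCHED_FIFO parent that requested
++ * SCHED_RESET_ON_FORK forks a child that starts life as SCHED_NORMAL with
++ * nice 0 (static_prio = NICE_TO_PRIO(0)) and no RT priority.
++ */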
++
++void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	/*
++	 * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++	 * required yet, but lockdep gets upset if rules are violated.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	/*
++	 * Share the timeslice between parent and child, thus the
++	 * total amount of pending timeslices in the system doesn't change,
++	 * resulting in more scheduling fairness.
++	 */
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	rq->curr->time_slice /= 2;
++	p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++	hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++	if (p->time_slice < RESCHED_NS) {
++		p->time_slice = sysctl_sched_base_slice;
++		resched_curr(rq);
++	}
++	sched_task_fork(p, rq);
++	raw_spin_unlock(&rq->lock);
++
++	rseq_migrate(p);
++	/*
++	 * We're setting the CPU for the first time, we don't migrate,
++	 * so use __set_task_cpu().
++	 */
++	__set_task_cpu(p, smp_processor_id());
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
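++
++/*
++ * Timeslice split example: with the forking parent holding 4ms of slice,
++ * parent and child each end up with 2ms; if the halved slice would fall
++ * below RESCHED_NS, the child instead gets a full sysctl_sched_base_slice
++ * and the parent is marked for rescheduling.
++ */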
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++	if (enabled)
++		static_branch_enable(&sched_schedstats);
++	else
++		static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++	if (!schedstat_enabled()) {
++		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++		static_branch_enable(&sched_schedstats);
++	}
++}
++
++static int __init setup_schedstats(char *str)
++{
++	int ret = 0;
++	if (!str)
++		goto out;
++
++	if (!strcmp(str, "enable")) {
++		set_schedstats(true);
++		ret = 1;
++	} else if (!strcmp(str, "disable")) {
++		set_schedstats(false);
++		ret = 1;
++	}
++out:
++	if (!ret)
++		pr_warn("Unable to parse schedstats=\n");
++
++	return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
++		size_t *lenp, loff_t *ppos)
++{
++	struct ctl_table t;
++	int err;
++	int state = static_branch_likely(&sched_schedstats);
++
++	if (write && !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
++	t = *table;
++	t.data = &state;
++	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++	if (err < 0)
++		return err;
++	if (write)
++		set_schedstats(state);
++	return err;
++}
++
++static struct ctl_table sched_core_sysctls[] = {
++	{
++		.procname       = "sched_schedstats",
++		.data           = NULL,
++		.maxlen         = sizeof(unsigned int),
++		.mode           = 0644,
++		.proc_handler   = sysctl_schedstats,
++		.extra1         = SYSCTL_ZERO,
++		.extra2         = SYSCTL_ONE,
++	},
++};
++static int __init sched_core_sysctl_init(void)
++{
++	register_sysctl_init("kernel", sched_core_sysctls);
++	return 0;
++}
++late_initcall(sched_core_sysctl_init);
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++	rseq_migrate(p);
++	/*
++	 * Fork balancing, do it here and not earlier because:
++	 * - cpus_ptr can change in the fork path
++	 * - any previously selected CPU might disappear through hotplug
++	 *
++	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++	 * as we're not fully set-up yet.
++	 */
++	__set_task_cpu(p, cpu_of(rq));
++#endif
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	activate_task(p, rq);
++	trace_sched_wakeup_new(p);
++	wakeup_preempt(rq);
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++	static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++	static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++	if (!static_branch_unlikely(&preempt_notifier_key))
++		WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++	hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++	hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				   struct task_struct *next)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++	/*
++	 * Claim the task as running, we do this before switching to it
++	 * such that any running task will have this set.
++	 *
++	 * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++	 * its ordering comment.
++	 */
++	WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * This must be the very last reference to @prev from this CPU. After
++	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
++	 * must ensure this doesn't happen until the switch is completely
++	 * finished.
++	 *
++	 * In particular, the load of prev->state in finish_task_switch() must
++	 * happen before this.
++	 *
++	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++	 */
++	smp_store_release(&prev->on_cpu, 0);
++#else
++	prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	void (*func)(struct rq *rq);
++	struct balance_callback *next;
++
++	lockdep_assert_held(&rq->lock);
++
++	while (head) {
++		func = (void (*)(struct rq *))head->func;
++		next = head->next;
++		head->next = NULL;
++		head = next;
++
++		func(rq);
++	}
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be run in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct balance_callback balance_push_callback = {
++	.next = NULL,
++	.func = balance_push,
++};
++
++static inline struct balance_callback *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++	struct balance_callback *head = rq->balance_callback;
++
++	if (likely(!head))
++		return NULL;
++
++	lockdep_assert_rq_held(rq);
++	/*
++	 * Must not take balance_push_callback off the list when
++	 * splice_balance_callbacks() and balance_callbacks() are not
++	 * in the same rq->lock section.
++	 *
++	 * In that case it would be possible for __schedule() to interleave
++	 * and observe the list empty.
++	 */
++	if (split && head == &balance_push_callback)
++		head = NULL;
++	else
++		rq->balance_callback = NULL;
++
++	return head;
++}
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	unsigned long flags;
++
++	if (unlikely(head)) {
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		do_balance_callbacks(rq, head);
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	}
++}
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++}
++
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++	/*
++	 * The runqueue lock will be released by the next
++	 * task (which is an invalid locking op but in the case
++	 * of the scheduler it's an obvious special-case), so we
++	 * do an early lockdep release here:
++	 */
++	spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++	/* this is a valid case when another task releases the spinlock */
++	rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++	/*
++	 * If we are tracking spinlock dependencies then we have to
++	 * fix up the runqueue lock - which gets 'carried over' from
++	 * prev into current:
++	 */
++	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++	__balance_callbacks(rq);
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++		    struct task_struct *next)
++{
++	kcov_prepare_switch(prev);
++	sched_info_switch(rq, prev, next);
++	perf_event_task_sched_out(prev, next);
++	rseq_preempt(prev);
++	fire_sched_out_preempt_notifiers(prev, next);
++	kmap_local_sched_out();
++	prepare_task(next);
++	prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock.  (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. prev == current is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	struct rq *rq = this_rq();
++	struct mm_struct *mm = rq->prev_mm;
++	unsigned int prev_state;
++
++	/*
++	 * The previous task will have left us with a preempt_count of 2
++	 * because it left us after:
++	 *
++	 *	schedule()
++	 *	  preempt_disable();			// 1
++	 *	  __schedule()
++	 *	    raw_spin_lock_irq(&rq->lock)	// 2
++	 *
++	 * Also, see FORK_PREEMPT_COUNT.
++	 */
++	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++		      "corrupted preempt_count: %s/%d/0x%x\n",
++		      current->comm, current->pid, preempt_count()))
++		preempt_count_set(FORK_PREEMPT_COUNT);
++
++	rq->prev_mm = NULL;
++
++	/*
++	 * A task struct has one reference for the use as "current".
++	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++	 * schedule one last time. The schedule call will never return, and
++	 * the scheduled task must drop that reference.
++	 *
++	 * We must observe prev->state before clearing prev->on_cpu (in
++	 * finish_task), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could race with its RUNNING -> DEAD
++	 * transition, resulting in a double drop.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	vtime_task_switch(prev);
++	perf_event_task_sched_in(prev, current);
++	finish_task(prev);
++	tick_nohz_task_switch();
++	finish_lock_switch(rq);
++	finish_arch_post_lock_switch();
++	kcov_finish_switch(current);
++	/*
++	 * kmap_local_sched_out() is invoked with rq::lock held and
++	 * interrupts disabled. There is no requirement for that, but the
++	 * sched out code does not have an interrupt enabled section.
++	 * Restoring the maps on sched in does not require interrupts being
++	 * disabled either.
++	 */
++	kmap_local_sched_in();
++
++	fire_sched_in_preempt_notifiers(current);
++	/*
++	 * When switching through a kernel thread, the loop in
++	 * membarrier_{private,global}_expedited() may have observed that
++	 * kernel thread and not issued an IPI. It is therefore possible to
++	 * schedule between user->kernel->user threads without passing through
++	 * switch_mm(). Membarrier requires a barrier after storing to
++	 * rq->curr, before returning to userspace, so provide them here:
++	 *
++	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++	 *   provided by mmdrop(),
++	 * - a sync_core for SYNC_CORE.
++	 */
++	if (mm) {
++		membarrier_mm_sync_core_before_usermode(mm);
++		mmdrop_sched(mm);
++	}
++	if (unlikely(prev_state == TASK_DEAD)) {
++		/* Task is done with its stack. */
++		put_task_stack(prev);
++
++		put_task_struct_rcu_user(prev);
++	}
++
++	return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	/*
++	 * New tasks start with FORK_PREEMPT_COUNT, see there and
++	 * finish_task_switch() for details.
++	 *
++	 * finish_task_switch() will drop rq->lock() and lower preempt_count
++	 * and the preempt_enable() will end up enabling preemption (on
++	 * PREEMPT_COUNT kernels).
++	 */
++
++	finish_task_switch(prev);
++	preempt_enable();
++
++	if (current->set_child_tid)
++		put_user(task_pid_vnr(current), current->set_child_tid);
++
++	calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++	       struct task_struct *next)
++{
++	prepare_task_switch(rq, prev, next);
++
++	/*
++	 * For paravirt, this is coupled with an exit in switch_to to
++	 * combine the page table reload and the switch backend into
++	 * one hypercall.
++	 */
++	arch_start_context_switch(prev);
++
++	/*
++	 * kernel -> kernel   lazy + transfer active
++	 *   user -> kernel   lazy + mmgrab() active
++	 *
++	 * kernel ->   user   switch + mmdrop() active
++	 *   user ->   user   switch
++	 *
++	 * switch_mm_cid() needs to be updated if the barriers provided
++	 * by context_switch() are modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		enter_lazy_tlb(prev->active_mm, next);
++
++		next->active_mm = prev->active_mm;
++		if (prev->mm)                           // from user
++			mmgrab(prev->active_mm);
++		else
++			prev->active_mm = NULL;
++	} else {                                        // to user
++		membarrier_switch_mm(rq, prev->active_mm, next->mm);
++		/*
++		 * sys_membarrier() requires an smp_mb() between setting
++		 * rq->curr / membarrier_switch_mm() and returning to userspace.
++		 *
++		 * The below provides this either through switch_mm(), or in
++		 * case 'prev->active_mm == next->mm' through
++		 * finish_task_switch()'s mmdrop().
++		 */
++		switch_mm_irqs_off(prev->active_mm, next->mm, next);
++		lru_gen_use_mm(next->mm);
++
++		if (!prev->mm) {                        // from kernel
++			/* will mmdrop() in finish_task_switch(). */
++			rq->prev_mm = prev->active_mm;
++			prev->active_mm = NULL;
++		}
++	}
++
++	/* switch_mm_cid() requires the memory barriers above. */
++	switch_mm_cid(rq, prev, next);
++
++	prepare_lock_switch(rq, next);
++
++	/* Here we just switch the register state and the stack. */
++	switch_to(prev, next, prev);
++	barrier();
++
++	return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_online_cpu(i)
++		sum += cpu_rq(i)->nr_running;
++
++	return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race.  The caller is responsible to use it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++	return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
++
++unsigned long long nr_context_switches_cpu(int cpu)
++{
++	return cpu_rq(cpu)->nr_switches;
++}
++
++unsigned long long nr_context_switches(void)
++{
++	int i;
++	unsigned long long sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += cpu_rq(i)->nr_switches;
++
++	return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: they prefer shallow idle state
++ * selection for a CPU that has IO-wait, even though that CPU might not
++ * even end up running the task when it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++	return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait account is to account the idle time that we could
++ * have spend running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU: only that
++ * CPU will have IO-wait accounted, while the other has regular idle, even
++ * though, had the storage been faster, both could have run at the same time,
++ * utilising both CPUs.
++ *
++ * This means, that when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, by reason of under accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU, it can wake to another CPU than it
++ * blocked on. This means the per CPU IO-wait number is meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += nr_iowait_cpu(i);
++
++	return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++	s64 ns = rq->clock_task - p->last_ran;
++
++	p->sched_time += ns;
++	cgroup_account_cputime(p, ns);
++	account_group_exec_runtime(p, ns);
++
++	p->time_slice -= ns;
++	p->last_ran = rq->clock_task;
++}
++
++/*
++ * Return accounted runtime for the task.
++ * If the task is currently running, also include the pending runtime that
++ * has not been accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++	/*
++	 * 64-bit doesn't need locks to atomically read a 64-bit value.
++	 * So we have an optimization chance when the task's delta_exec is 0.
++	 * Reading ->on_cpu is racy, but this is ok.
++	 *
++	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
++	 * If we race with it entering CPU, unaccounted time is 0. This is
++	 * indistinguishable from the read occurring a few cycles earlier.
++	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++	 * been accounted, so we're correct here as well.
++	 */
++	if (!p->on_cpu || !task_on_rq_queued(p))
++		return tsk_seruntime(p);
++#endif
++
++	rq = task_access_lock_irqsave(p, &lock, &flags);
++	/*
++	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
++	 * project cycles that may never be accounted to this
++	 * thread, breaking clock_gettime().
++	 */
++	if (p == rq->curr && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		update_curr(rq, p);
++	}
++	ns = tsk_seruntime(p);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++	return ns;
++}
++
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++	struct task_struct *p = rq->curr;
++
++	if (is_idle_task(p))
++		return;
++
++	update_curr(rq, p);
++	cpufreq_update_util(rq, 0);
++
++	/*
++	 * Tasks that have less than RESCHED_NS of time slice left will be
++	 * rescheduled.
++	 */
++	if (p->time_slice >= RESCHED_NS)
++		return;
++	set_tsk_need_resched(p);
++	set_preempt_need_resched();
++}
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++	u64 resched_latency, now = rq_clock(rq);
++	static bool warned_once;
++
++	if (sysctl_resched_latency_warn_once && warned_once)
++		return 0;
++
++	if (!need_resched() || !latency_warn_ms)
++		return 0;
++
++	if (system_state == SYSTEM_BOOTING)
++		return 0;
++
++	if (!rq->last_seen_need_resched_ns) {
++		rq->last_seen_need_resched_ns = now;
++		rq->ticks_without_resched = 0;
++		return 0;
++	}
++
++	rq->ticks_without_resched++;
++	resched_latency = now - rq->last_seen_need_resched_ns;
++	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++		return 0;
++
++	warned_once = true;
++
++	return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++	long val;
++
++	if ((kstrtol(str, 0, &val))) {
++		pr_warn("Unable to set resched_latency_warn_ms\n");
++		return 1;
++	}
++
++	sysctl_resched_latency_warn_ms = val;
++	return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
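
For reference, the __setup() hook above takes the threshold from the kernel command line; the value in the note below is only an illustrative choice:

	/*
	 * Illustrative usage: booting with "resched_latency_warn_ms=50" sets the
	 * warning threshold to 50ms. The warning also requires CONFIG_SCHED_DEBUG
	 * and the LATENCY_WARN scheduler feature checked in sched_tick() below.
	 */
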
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void sched_tick(void)
++{
++	int cpu __maybe_unused = smp_processor_id();
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *curr = rq->curr;
++	u64 resched_latency;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		arch_scale_freq_tick();
++
++	sched_clock_tick();
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	scheduler_task_tick(rq);
++	if (sched_feat(LATENCY_WARN))
++		resched_latency = cpu_resched_latency(rq);
++	calc_global_load_tick(rq);
++
++	task_tick_mm_cid(rq, rq->curr);
++
++	raw_spin_unlock(&rq->lock);
++
++	if (sched_feat(LATENCY_WARN) && resched_latency)
++		resched_latency_warn(cpu, resched_latency);
++
++	perf_event_task_tick();
++
++	if (curr->flags & PF_WQ_WORKER)
++		wq_worker_tick(curr);
++}
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++	int			cpu;
++	atomic_t		state;
++	struct delayed_work	work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE	0
++#define TICK_SCHED_REMOTE_OFFLINING	1
++#define TICK_SCHED_REMOTE_RUNNING	2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ *          TICK_SCHED_REMOTE_OFFLINE
++ *                    |   ^
++ *                    |   |
++ *                    |   | sched_tick_remote()
++ *                    |   |
++ *                    |   |
++ *                    +--TICK_SCHED_REMOTE_OFFLINING
++ *                    |   ^
++ *                    |   |
++ * sched_tick_start() |   | sched_tick_stop()
++ *                    |   |
++ *                    V   |
++ *          TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct tick_work *twork = container_of(dwork, struct tick_work, work);
++	int cpu = twork->cpu;
++	struct rq *rq = cpu_rq(cpu);
++	int os;
++
++	/*
++	 * Handle the tick only if it appears the remote CPU is running in full
++	 * dynticks mode. The check is racy by nature, but missing a tick or
++	 * having one too many is no big deal because the scheduler tick updates
++	 * statistics and checks timeslices in a time-independent way, regardless
++	 * of when exactly it is running.
++	 */
++	if (tick_nohz_tick_stopped_cpu(cpu)) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		struct task_struct *curr = rq->curr;
++
++		if (cpu_online(cpu)) {
++			update_rq_clock(rq);
++
++			if (!is_idle_task(curr)) {
++				/*
++				 * Make sure the next tick runs within a
++				 * reasonable amount of time.
++				 */
++				u64 delta = rq_clock_task(rq) - curr->last_ran;
++				WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++			}
++			scheduler_task_tick(rq);
++
++			calc_load_nohz_remote(rq);
++		}
++	}
++
++	/*
++	 * Run the remote tick once per second (1Hz). This arbitrary
++	 * frequency is large enough to avoid overload but short enough
++	 * to keep scheduler internal stats reasonably up to date.  But
++	 * first update state to reflect hotplug activity if required.
++	 */
++	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++	if (os == TICK_SCHED_REMOTE_RUNNING)
++		queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++	int os;
++	struct tick_work *twork;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++	if (os == TICK_SCHED_REMOTE_OFFLINE) {
++		twork->cpu = cpu;
++		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++	}
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++	struct tick_work *twork;
++	int os;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	/* There cannot be competing actions, but don't rely on stop-machine. */
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++	WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++	/* Don't cancel, as this would mess up the state machine. */
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++	tick_work_cpu = alloc_percpu(struct tick_work);
++	BUG_ON(!tick_work_cpu);
++	return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++				defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++	if (preempt_count() == val) {
++		unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++		current->preempt_disable_ip = ip;
++#endif
++		trace_preempt_off(CALLER_ADDR0, ip);
++	}
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++		return;
++#endif
++	__preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Spinlock count overflowing soon?
++	 */
++	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++				PREEMPT_MASK - 10);
++#endif
++	preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in equals the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++	if (preempt_count() == val)
++		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++		return;
++	/*
++	 * Is the spinlock portion underflowing?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++			!(preempt_count() & PREEMPT_MASK)))
++		return;
++#endif
++
++	preempt_latency_stop(val);
++	__preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
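
A hedged sketch of how the preempt_count() == val checks above behave under nesting (these hooks are only built when CONFIG_PREEMPTION together with CONFIG_DEBUG_PREEMPT or CONFIG_PREEMPT_TRACER is enabled, per the #if above); nested_preempt_example() is purely illustrative:

	/* Illustrative nesting: the latency probes fire only at the outermost pair. */
	static void nested_preempt_example(void)
	{
		preempt_disable();	/* 0 -> 1: preempt_latency_start() records this IP */
		preempt_disable();	/* 1 -> 2: inner disable, no tracepoint */
		/* ... short critical work ... */
		preempt_enable();	/* 2 -> 1: no tracepoint */
		preempt_enable();	/* 1 -> 0: preempt_latency_stop(), may reschedule */
	}
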
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	return p->preempt_disable_ip;
++#else
++	return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++	/* Save this before calling printk(), since that will clobber it */
++	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++	if (oops_in_progress)
++		return;
++
++	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++		prev->comm, prev->pid, preempt_count());
++
++	debug_show_held_locks(prev);
++	print_modules();
++	if (irqs_disabled())
++		print_irqtrace_events(prev);
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		pr_err("Preemption disabled at:");
++		print_ip_sym(KERN_ERR, preempt_disable_ip);
++	}
++	check_panic_on_warn("scheduling while atomic");
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++	if (task_stack_end_corrupted(prev))
++		panic("corrupted stack end detected inside scheduler\n");
++
++	if (task_scs_end_corrupted(prev))
++		panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++			prev->comm, prev->pid, prev->non_block_count);
++		dump_stack();
++		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++	}
++#endif
++
++	if (unlikely(in_atomic_preempt_off())) {
++		__schedule_bug(prev);
++		preempt_count_set(PREEMPT_DISABLED);
++	}
++	rcu_sleep_check();
++	SCHED_WARN_ON(ct_state() == CONTEXT_USER);
++
++	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++	schedstat_inc(this_rq()->sched_count);
++}
++
++#ifdef ALT_SCHED_DEBUG
++static void alt_sched_debug(void)
++{
++	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx,"
++	       " ecore_idle: 0x%04lx\n",
++	       sched_rq_pending_mask.bits[0],
++	       sched_idle_mask->bits[0],
++	       sched_pcore_idle_mask->bits[0],
++	       sched_ecore_idle_mask->bits[0]);
++}
++#else
++inline void alt_sched_debug(void) {}
++#endif
++
++#ifdef	CONFIG_SMP
++
++#ifdef CONFIG_PREEMPT_RT
++#define SCHED_NR_MIGRATE_BREAK 8
++#else
++#define SCHED_NR_MIGRATE_BREAK 32
++#endif
++
++const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
++
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++	struct task_struct *p, *skip = rq->curr;
++	int nr_migrated = 0;
++	int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
++
++	/* WA to check rq->curr is still on rq */
++	if (!task_on_rq_queued(skip))
++		return 0;
++
++	while (skip != rq->idle && nr_tries &&
++	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++		skip = sched_rq_next_task(p, rq);
++		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++			__SCHED_DEQUEUE_TASK(p, rq, 0, );
++			set_task_cpu(p, dest_cpu);
++			sched_task_sanity_check(p, dest_rq);
++			sched_mm_cid_migrate_to(dest_rq, p, cpu_of(rq));
++			__SCHED_ENQUEUE_TASK(p, dest_rq, 0, );
++			nr_migrated++;
++		}
++		nr_tries--;
++	}
++
++	return nr_migrated;
++}
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++	cpumask_t *topo_mask, *end_mask, chk;
++
++	if (unlikely(!rq->online))
++		return 0;
++
++	if (cpumask_empty(&sched_rq_pending_mask))
++		return 0;
++
++	topo_mask = per_cpu(sched_cpu_topo_masks, cpu);
++	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++	do {
++		int i;
++
++		if (!cpumask_and(&chk, &sched_rq_pending_mask, topo_mask))
++			continue;
++
++		for_each_cpu_wrap(i, &chk, cpu) {
++			int nr_migrated;
++			struct rq *src_rq;
++
++			src_rq = cpu_rq(i);
++			if (!do_raw_spin_trylock(&src_rq->lock))
++				continue;
++			spin_acquire(&src_rq->lock.dep_map,
++				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++				src_rq->nr_running -= nr_migrated;
++				if (src_rq->nr_running < 2)
++					cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++				spin_release(&src_rq->lock.dep_map, _RET_IP_);
++				do_raw_spin_unlock(&src_rq->lock);
++
++				rq->nr_running += nr_migrated;
++				if (rq->nr_running > 1)
++					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++				update_sched_preempt_mask(rq);
++				cpufreq_update_util(rq, 0);
++
++				return 1;
++			}
++
++			spin_release(&src_rq->lock.dep_map, _RET_IP_);
++			do_raw_spin_unlock(&src_rq->lock);
++		}
++	} while (++topo_mask < end_mask);
++
++	return 0;
++}
++#endif
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sysctl_sched_base_slice;
++
++	sched_task_renew(p, rq);
++
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++		requeue_task(p, rq);
++}
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired as there's no
++ * point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++	if (unlikely(rq->idle == p))
++		return;
++
++	update_curr(rq, p);
++
++	if (p->time_slice < RESCHED_NS)
++		time_slice_expired(p, rq);
++}
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu)
++{
++	struct task_struct *next = sched_rq_first_task(rq);
++
++	if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++		if (!take_other_rq_tasks(rq, cpu)) {
++			if (likely(rq->balance_func && rq->online))
++				rq->balance_func(rq, cpu);
++#endif /* CONFIG_SMP */
++
++			schedstat_inc(rq->sched_goidle);
++			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++			return next;
++#ifdef	CONFIG_SMP
++		}
++		next = sched_rq_first_task(rq);
++#endif
++	}
++#ifdef CONFIG_HIGH_RES_TIMERS
++	hrtick_start(rq, next->time_slice);
++#endif
++	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++	return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock. Note that
++ * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
++ * optimize the AND operation out and just check for zero.
++ */
++#define SM_NONE			0x0
++#define SM_PREEMPT		0x1
++#define SM_RTLOCK_WAIT		0x2
++
++#ifndef CONFIG_PREEMPT_RT
++# define SM_MASK_PREEMPT	(~0U)
++#else
++# define SM_MASK_PREEMPT	SM_PREEMPT
++#endif
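
To make the mask trick described above concrete, an illustrative reading (not part of the patch):

	/*
	 * On !PREEMPT_RT, SM_MASK_PREEMPT is ~0U, so the test in __schedule()
	 *   if (!(sched_mode & SM_MASK_PREEMPT) && prev_state)
	 * reduces to "if (!sched_mode && prev_state)". On PREEMPT_RT only
	 * SM_PREEMPT is kept, so an SM_RTLOCK_WAIT sleep takes the same
	 * deactivation path as a voluntary schedule rather than a preemption.
	 */
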
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ *      paths. For example, see arch/x86/entry_64.S.
++ *
++ *      To drive preemption between tasks, the scheduler sets the flag in timer
++ *      interrupt handler sched_tick().
++ *
++ *   3. Wakeups don't really cause entry into schedule(). They add a
++ *      task to the run-queue and that's it.
++ *
++ *      Now, if the new task added to the run-queue preempts the current
++ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ *      called on the nearest possible occasion:
++ *
++ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ *         - in syscall or exception context, at the next outmost
++ *           preempt_enable(). (this might be as soon as the wake_up()'s
++ *           spin_unlock()!)
++ *
++ *         - in IRQ context, return from interrupt-handler to
++ *           preemptible context
++ *
++ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ *         then at the next:
++ *
++ *          - cond_resched() call
++ *          - explicit schedule() call
++ *          - return from syscall or exception to user-space
++ *          - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(unsigned int sched_mode)
++{
++	struct task_struct *prev, *next;
++	unsigned long *switch_count;
++	unsigned long prev_state;
++	struct rq *rq;
++	int cpu;
++
++	cpu = smp_processor_id();
++	rq = cpu_rq(cpu);
++	prev = rq->curr;
++
++	schedule_debug(prev, !!sched_mode);
++
++	/* Bypass the sched_feat(HRTICK) check, which Alt schedule FW doesn't support */
++	hrtick_clear(rq);
++
++	local_irq_disable();
++	rcu_note_context_switch(!!sched_mode);
++
++	/*
++	 * Make sure that signal_pending_state()->signal_pending() below
++	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++	 * done by the caller to avoid the race with signal_wake_up():
++	 *
++	 * __set_current_state(@state)		signal_wake_up()
++	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
++	 *					  wake_up_state(p, state)
++	 *   LOCK rq->lock			    LOCK p->pi_state
++	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
++	 *     if (signal_pending_state())	    if (p->state & @state)
++	 *
++	 * Also, the membarrier system call requires a full memory barrier
++	 * after coming from user-space, before storing to rq->curr; this
++	 * barrier matches a full barrier in the proximity of the membarrier
++	 * system call exit.
++	 */
++	raw_spin_lock(&rq->lock);
++	smp_mb__after_spinlock();
++
++	update_rq_clock(rq);
++
++	switch_count = &prev->nivcsw;
++	/*
++	 * We must load prev->state once (task_struct::state is volatile), such
++	 * that we form a control dependency vs deactivate_task() below.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
++		if (signal_pending_state(prev_state, prev)) {
++			WRITE_ONCE(prev->__state, TASK_RUNNING);
++		} else {
++			prev->sched_contributes_to_load =
++				(prev_state & TASK_UNINTERRUPTIBLE) &&
++				!(prev_state & TASK_NOLOAD) &&
++				!(prev_state & TASK_FROZEN);
++
++			if (prev->sched_contributes_to_load)
++				rq->nr_uninterruptible++;
++
++			/*
++			 * __schedule()			ttwu()
++			 *   prev_state = prev->state;    if (p->on_rq && ...)
++			 *   if (prev_state)		    goto out;
++			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
++			 *				  p->state = TASK_WAKING
++			 *
++			 * Where __schedule() and ttwu() have matching control dependencies.
++			 *
++			 * After this, schedule() must not care about p->state any more.
++			 */
++			sched_task_deactivate(prev, rq);
++			deactivate_task(prev, rq);
++
++			if (prev->in_iowait) {
++				atomic_inc(&rq->nr_iowait);
++				delayacct_blkio_start();
++			}
++		}
++		switch_count = &prev->nvcsw;
++	}
++
++	check_curr(prev, rq);
++
++	next = choose_next_task(rq, cpu);
++	clear_tsk_need_resched(prev);
++	clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++	rq->last_seen_need_resched_ns = 0;
++#endif
++
++	if (likely(prev != next)) {
++		next->last_ran = rq->clock_task;
++
++		/*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
++		rq->nr_switches++;
++		/*
++		 * RCU users of rcu_dereference(rq->curr) may not see
++		 * changes to task_struct made by pick_next_task().
++		 */
++		RCU_INIT_POINTER(rq->curr, next);
++		/*
++		 * The membarrier system call requires each architecture
++		 * to have a full memory barrier after updating
++		 * rq->curr, before returning to user-space.
++		 *
++		 * Here are the schemes providing that barrier on the
++		 * various architectures:
++		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
++		 *   RISC-V.  switch_mm() relies on membarrier_arch_switch_mm()
++		 *   on PowerPC and on RISC-V.
++		 * - finish_lock_switch() for weakly-ordered
++		 *   architectures where spin_unlock is a full barrier,
++		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
++		 *   is a RELEASE barrier),
++		 *
++		 * The barrier matches a full barrier in the proximity of
++		 * the membarrier system call entry.
++		 *
++		 * On RISC-V, this barrier pairing is also needed for the
++		 * SYNC_CORE command when switching between processes, cf.
++		 * the inline comments in membarrier_arch_switch_mm().
++		 */
++		++*switch_count;
++
++		trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
++
++		/* Also unlocks the rq: */
++		rq = context_switch(rq, prev, next);
++
++		cpu = cpu_of(rq);
++	} else {
++		__balance_callbacks(rq);
++		raw_spin_unlock_irq(&rq->lock);
++	}
++}
++
++void __noreturn do_task_dead(void)
++{
++	/* Causes final put_task_struct in finish_task_switch(): */
++	set_special_state(TASK_DEAD);
++
++	/* Tell freezer to ignore us: */
++	current->flags |= PF_NOFREEZE;
++
++	__schedule(SM_NONE);
++	BUG();
++
++	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++	for (;;)
++		cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++	static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
++	unsigned int task_flags;
++
++	/*
++	 * Establish LD_WAIT_CONFIG context to ensure none of the code called
++	 * will use a blocking primitive -- which would lead to recursion.
++	 */
++	lock_map_acquire_try(&sched_map);
++
++	task_flags = tsk->flags;
++	/*
++	 * If a worker goes to sleep, notify and ask workqueue whether it
++	 * wants to wake up a task to maintain concurrency.
++	 */
++	if (task_flags & PF_WQ_WORKER)
++		wq_worker_sleeping(tsk);
++	else if (task_flags & PF_IO_WORKER)
++		io_wq_worker_sleeping(tsk);
++
++	/*
++	 * spinlock and rwlock must not flush block requests.  This will
++	 * deadlock if the callback attempts to acquire a lock which is
++	 * already acquired.
++	 */
++	SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
++
++	/*
++	 * If we are going to sleep and we have plugged IO queued,
++	 * make sure to submit it to avoid deadlocks.
++	 */
++	blk_flush_plug(tsk->plug, true);
++
++	lock_map_release(&sched_map);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER | PF_BLOCK_TS)) {
++		if (tsk->flags & PF_BLOCK_TS)
++			blk_plug_invalidate_ts(tsk);
++		if (tsk->flags & PF_WQ_WORKER)
++			wq_worker_running(tsk);
++		else if (tsk->flags & PF_IO_WORKER)
++			io_wq_worker_running(tsk);
++	}
++}
++
++static __always_inline void __schedule_loop(unsigned int sched_mode)
++{
++	do {
++		preempt_disable();
++		__schedule(sched_mode);
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++	struct task_struct *tsk = current;
++
++#ifdef CONFIG_RT_MUTEXES
++	lockdep_assert(!tsk->sched_rt_mutex);
++#endif
++
++	if (!task_is_running(tsk))
++		sched_submit_work(tsk);
++	__schedule_loop(SM_NONE);
++	sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++	/*
++	 * As this skips calling sched_submit_work(), which the idle task does
++	 * regardless because that function is a nop when the task is in a
++	 * TASK_RUNNING state, make sure this isn't used someplace that the
++	 * current task can be in any other state. Note, idle is always in the
++	 * TASK_RUNNING state.
++	 */
++	WARN_ON_ONCE(current->__state);
++	do {
++		__schedule(SM_NONE);
++	} while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++	/*
++	 * If we come here after a random call to set_need_resched(),
++	 * or we have been woken up remotely but the IPI has not yet arrived,
++	 * we haven't yet exited the RCU idle mode. Do it here manually until
++	 * we find a better solution.
++	 *
++	 * NB: There are buggy callers of this function.  Ideally we
++	 * should warn if prev_state != CONTEXT_USER, but that will trigger
++	 * too frequently to make sense yet.
++	 */
++	enum ctx_state prev_state = exception_enter();
++	schedule();
++	exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++	sched_preempt_enable_no_resched();
++	schedule();
++	preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++	__schedule_loop(SM_RTLOCK_WAIT);
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		__schedule(SM_PREEMPT);
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++
++		/*
++		 * Check again in case we missed a preemption opportunity
++		 * between schedule and now.
++		 */
++	} while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++	/*
++	 * If there is a non-zero preempt_count or interrupts are disabled,
++	 * we do not want to preempt the current task. Just return.
++	 */
++	if (likely(!preemptible()))
++		return;
++
++	preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled	preempt_schedule
++#define preempt_schedule_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++		return;
++	preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++	enum ctx_state prev_ctx;
++
++	if (likely(!preemptible()))
++		return;
++
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		/*
++		 * Needs preempt disabled in case user_exit() is traced
++		 * and the tracer calls preempt_enable_notrace() causing
++		 * an infinite recursion.
++		 */
++		prev_ctx = exception_enter();
++		__schedule(SM_PREEMPT);
++		exception_exit(prev_ctx);
++
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++	} while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++		return;
++	preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of irq context.
++ * Note, that this is called and return with irqs disabled. This will
++ * protect us against recursive calling from irq.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++	enum ctx_state prev_state;
++
++	/* Catch callers which need to be fixed */
++	BUG_ON(preempt_count() || !irqs_disabled());
++
++	prev_state = exception_enter();
++
++	do {
++		preempt_disable();
++		local_irq_enable();
++		__schedule(SM_PREEMPT);
++		local_irq_disable();
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++
++	exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++			  void *key)
++{
++	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
++	return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++static inline void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++	/* Trigger resched if task sched_prio has been modified. */
++	if (task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		requeue_task(p, rq);
++		wakeup_preempt(rq);
++	}
++}
++
++static void __setscheduler_prio(struct task_struct *p, int prio)
++{
++	p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++/*
++ * Would be more useful with typeof()/auto_type but they don't mix with
++ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
++ * name such that if someone were to implement this function we get to compare
++ * notes.
++ */
++#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
++
++void rt_mutex_pre_schedule(void)
++{
++	lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
++	sched_submit_work(current);
++}
++
++void rt_mutex_schedule(void)
++{
++	lockdep_assert(current->sched_rt_mutex);
++	__schedule_loop(SM_NONE);
++}
++
++void rt_mutex_post_schedule(void)
++{
++	sched_update_worker(current);
++	lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
++}
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++	if (pi_task)
++		prio = min(prio, pi_task->prio);
++
++	return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++	return __rt_effective_prio(pi_task, prio);
++}
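
A worked example of the boosting arithmetic, with illustrative numbers only:

	/*
	 * A SCHED_NORMAL task at nice 0 has prio 120. If a SCHED_FIFO waiter with
	 * rt_priority 50 (kernel prio 99 - 50 = 49) blocks on a rt_mutex the task
	 * owns, rt_effective_prio() returns min(120, 49) = 49 and rt_mutex_setprio()
	 * below boosts the owner to 49 until the lock is released.
	 */
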
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++	int prio;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	/* XXX used to be waiter->prio, not waiter->task->prio */
++	prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++	/*
++	 * If nothing changed; bail early.
++	 */
++	if (p->pi_top_task == pi_task && prio == p->prio)
++		return;
++
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Set under pi_lock && rq->lock, such that the value can be used under
++	 * either lock.
++	 *
++	 * Note that it takes loads of trickery to make this pointer cache work
++	 * right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
++	 * ensure a task is de-boosted (pi_task is set to NULL) before the
++	 * task is allowed to run again (and can exit). This ensures the pointer
++	 * points to a blocked task -- which guarantees the task is present.
++	 */
++	p->pi_top_task = pi_task;
++
++	/*
++	 * For FIFO/RR we only need to set prio, if that matches we're done.
++	 */
++	if (prio == p->prio)
++		goto out_unlock;
++
++	/*
++	 * Idle task boosting is a no-no in general. There is one
++	 * exception, when PREEMPT_RT and NOHZ is active:
++	 *
++	 * The idle task calls get_next_timer_interrupt() and holds
++	 * the timer wheel base->lock on the CPU and another CPU wants
++	 * to access the timer (probably to cancel it). We can safely
++	 * ignore the boosting request, as the idle CPU runs this code
++	 * with interrupts disabled and will complete the lock
++	 * protected section without being interrupted. So there is no
++	 * real need to boost.
++	 */
++	if (unlikely(p == rq->idle)) {
++		WARN_ON(p != rq->curr);
++		WARN_ON(p->pi_blocked_on);
++		goto out_unlock;
++	}
++
++	trace_sched_pi_setprio(p, pi_task);
++
++	__setscheduler_prio(p, prio);
++
++	check_task_changed(p, rq);
++out_unlock:
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++
++	if (task_on_rq_queued(p))
++		__balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++
++	preempt_enable();
++}
++#else
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	return prio;
++}
++#endif
++
++void set_user_nice(struct task_struct *p, long nice)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++		return;
++	/*
++	 * We have to be careful, if called from sys_setpriority(),
++	 * the task might be in the middle of scheduling on another CPU.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	rq = __task_access_lock(p, &lock);
++
++	p->static_prio = NICE_TO_PRIO(nice);
++	/*
++	 * The RT priorities are set via sched_setscheduler(), but we still
++	 * allow the 'normal' nice value to be set - but as expected
++	 * it won't have any effect on scheduling while the task is
++	 * not SCHED_NORMAL/SCHED_BATCH:
++	 */
++	if (task_has_rt_policy(p))
++		goto out_unlock;
++
++	p->prio = effective_prio(p);
++
++	check_task_changed(p, rq);
++out_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++EXPORT_SYMBOL(set_user_nice);
++
++/*
++ * is_nice_reduction - check if nice value is an actual reduction
++ *
++ * Similar to can_nice() but does not perform a capability check.
++ *
++ * @p: task
++ * @nice: nice value
++ */
++static bool is_nice_reduction(const struct task_struct *p, const int nice)
++{
++	/* Convert nice value [19,-20] to rlimit style value [1,40]: */
++	int nice_rlim = nice_to_rlimit(nice);
++
++	return (nice_rlim <= task_rlimit(p, RLIMIT_NICE));
++}
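
The rlimit-style conversion is linear (rlimit value = 20 - nice); the snippet below is a hedged illustration and nice_rlimit_example() is not part of the patch:

	/* Worked example of the conversion above: nice 19 -> 1, nice 0 -> 20, nice -20 -> 40. */
	static void nice_rlimit_example(const struct task_struct *p)
	{
		pr_info("nice -5 maps to rlimit value %ld\n", nice_to_rlimit(-5));	/* prints 25 */
		pr_info("%s may lower nice to -5: %d\n", p->comm,
			nice_to_rlimit(-5) <= task_rlimit(p, RLIMIT_NICE));
	}
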
++
++/*
++ * can_nice - check if a task can reduce its nice value
++ * @p: task
++ * @nice: nice value
++ */
++int can_nice(const struct task_struct *p, const int nice)
++{
++	return is_nice_reduction(p, nice) || capable(CAP_SYS_NICE);
++}
++
++#ifdef __ARCH_WANT_SYS_NICE
++
++/*
++ * sys_nice - change the priority of the current process.
++ * @increment: priority increment
++ *
++ * sys_setpriority is a more generic, but much slower function that
++ * does similar things.
++ */
++SYSCALL_DEFINE1(nice, int, increment)
++{
++	long nice, retval;
++
++	/*
++	 * Setpriority might change our priority at the same moment.
++	 * We don't have to worry. Conceptually one call occurs first
++	 * and we have a single winner.
++	 */
++
++	increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
++	nice = task_nice(current) + increment;
++
++	nice = clamp_val(nice, MIN_NICE, MAX_NICE);
++	if (increment < 0 && !can_nice(current, nice))
++		return -EPERM;
++
++	retval = security_task_setnice(current, nice);
++	if (retval)
++		return retval;
++
++	set_user_nice(current, nice);
++	return 0;
++}
++
++#endif
++
++/**
++ * task_prio - return the priority value of a given task.
++ * @p: the task in question.
++ *
++ * Return: The priority value as seen by users in /proc.
++ *
++ * sched policy         return value   kernel prio    user prio/nice
++ *
++ * (BMQ)normal, batch, idle[0 ... 53]  [100 ... 139]          0/[-20 ... 19]/[-7 ... 7]
++ * (PDS)normal, batch, idle[0 ... 39]            100          0/[-20 ... 19]
++ * fifo, rr             [-1 ... -100]     [99 ... 0]  [0 ... 99]
++ */
++int task_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++		task_sched_prio_normal(p, task_rq(p));
++}
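
Reading the table in the comment above, a couple of worked values (illustrative only):

	/*
	 * SCHED_FIFO, rt_priority 50: p->prio = 99 - 50 = 49, task_prio() = 49 - 100 = -51
	 * SCHED_FIFO, rt_priority  1: p->prio = 98,           task_prio() = -2
	 * SCHED_NORMAL, nice 0:       falls through to task_sched_prio_normal(),
	 *                             i.e. 0..53 under BMQ, 0..39 under PDS.
	 */
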
++
++/**
++ * idle_cpu - is a given CPU idle currently?
++ * @cpu: the processor in question.
++ *
++ * Return: 1 if the CPU is currently idle. 0 otherwise.
++ */
++int idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (rq->curr != rq->idle)
++		return 0;
++
++	if (rq->nr_running)
++		return 0;
++
++#ifdef CONFIG_SMP
++	if (rq->ttwu_pending)
++		return 0;
++#endif
++
++	return 1;
++}
++
++/**
++ * idle_task - return the idle task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * Return: The idle task for the cpu @cpu.
++ */
++struct task_struct *idle_task(int cpu)
++{
++	return cpu_rq(cpu)->idle;
++}
++
++/**
++ * find_process_by_pid - find a process with a matching PID value.
++ * @pid: the pid in question.
++ *
++ * The task of @pid, if found. %NULL otherwise.
++ */
++static inline struct task_struct *find_process_by_pid(pid_t pid)
++{
++	return pid ? find_task_by_vpid(pid) : current;
++}
++
++static struct task_struct *find_get_task(pid_t pid)
++{
++	struct task_struct *p;
++	guard(rcu)();
++
++	p = find_process_by_pid(pid);
++	if (likely(p))
++		get_task_struct(p);
++
++	return p;
++}
++
++DEFINE_CLASS(find_get_task, struct task_struct *, if (_T) put_task_struct(_T),
++	     find_get_task(pid), pid_t pid)
++
++/*
++ * sched_setparam() passes in -1 for its policy, to let the functions
++ * it calls know not to change it.
++ */
++#define SETPARAM_POLICY -1
++
++static void __setscheduler_params(struct task_struct *p,
++		const struct sched_attr *attr)
++{
++	int policy = attr->sched_policy;
++
++	if (policy == SETPARAM_POLICY)
++		policy = p->policy;
++
++	p->policy = policy;
++
++	/*
++	 * Allow the normal nice value to be set, but it will not have any
++	 * effect on scheduling while the task is not SCHED_NORMAL/
++	 * SCHED_BATCH.
++	 */
++	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++
++	/*
++	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
++	 * !rt_policy. Always setting this ensures that things like
++	 * getparam()/getattr() don't report silly values for !rt tasks.
++	 */
++	p->rt_priority = attr->sched_priority;
++	p->normal_prio = normal_prio(p);
++}
++
++/*
++ * check the target process has a UID that matches the current process's
++ */
++static bool check_same_owner(struct task_struct *p)
++{
++	const struct cred *cred = current_cred(), *pcred;
++	guard(rcu)();
++
++	pcred = __task_cred(p);
++	return (uid_eq(cred->euid, pcred->euid) ||
++	        uid_eq(cred->euid, pcred->uid));
++}
++
++/*
++ * Allow unprivileged RT tasks to decrease priority.
++ * Only issue a capable test if needed and only once to avoid an audit
++ * event on permitted non-privileged operations:
++ */
++static int user_check_sched_setscheduler(struct task_struct *p,
++					 const struct sched_attr *attr,
++					 int policy, int reset_on_fork)
++{
++	if (rt_policy(policy)) {
++		unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
++
++		/* Can't set/change the rt policy: */
++		if (policy != p->policy && !rlim_rtprio)
++			goto req_priv;
++
++		/* Can't increase priority: */
++		if (attr->sched_priority > p->rt_priority &&
++		    attr->sched_priority > rlim_rtprio)
++			goto req_priv;
++	}
++
++	/* Can't change other user's priorities: */
++	if (!check_same_owner(p))
++		goto req_priv;
++
++	/* Normal users shall not reset the sched_reset_on_fork flag: */
++	if (p->sched_reset_on_fork && !reset_on_fork)
++		goto req_priv;
++
++	return 0;
++
++req_priv:
++	if (!capable(CAP_SYS_NICE))
++		return -EPERM;
++
++	return 0;
++}
++
++static int __sched_setscheduler(struct task_struct *p,
++				const struct sched_attr *attr,
++				bool user, bool pi)
++{
++	const struct sched_attr dl_squash_attr = {
++		.size		= sizeof(struct sched_attr),
++		.sched_policy	= SCHED_FIFO,
++		.sched_nice	= 0,
++		.sched_priority = 99,
++	};
++	int oldpolicy = -1, policy = attr->sched_policy;
++	int retval, newprio;
++	struct balance_callback *head;
++	unsigned long flags;
++	struct rq *rq;
++	int reset_on_fork;
++	raw_spinlock_t *lock;
++
++	/* The pi code expects interrupts enabled */
++	BUG_ON(pi && in_interrupt());
++
++	/*
++	 * Alt schedule FW supports SCHED_DEADLINE by squashing it into prio 0 SCHED_FIFO
++	 */
++	if (unlikely(SCHED_DEADLINE == policy)) {
++		attr = &dl_squash_attr;
++		policy = attr->sched_policy;
++	}
++recheck:
++	/* Double check policy once rq lock held */
++	if (policy < 0) {
++		reset_on_fork = p->sched_reset_on_fork;
++		policy = oldpolicy = p->policy;
++	} else {
++		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++		if (policy > SCHED_IDLE)
++			return -EINVAL;
++	}
++
++	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++		return -EINVAL;
++
++	/*
++	 * Valid priorities for SCHED_FIFO and SCHED_RR are
++	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++	 * SCHED_BATCH and SCHED_IDLE is 0.
++	 */
++	if (attr->sched_priority < 0 ||
++	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++		return -EINVAL;
++	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++	    (attr->sched_priority != 0))
++		return -EINVAL;
++
++	if (user) {
++		retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++		if (retval)
++			return retval;
++
++		retval = security_task_setscheduler(p);
++		if (retval)
++			return retval;
++	}
++
++	/*
++	 * Make sure no PI-waiters arrive (or leave) while we are
++	 * changing the priority of the task:
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++	/*
++	 * To be able to change p->policy safely, task_access_lock()
++	 * must be called.
++	 * If task_access_lock() is used here:
++	 * for a task p which is not running, reading rq->stop is
++	 * racy but acceptable as ->stop doesn't change much.
++	 * An enhancement can be made to read rq->stop safely.
++	 */
++	rq = __task_access_lock(p, &lock);
++
++	/*
++	 * Changing the policy of the stop threads is a very bad idea.
++	 */
++	if (p == rq->stop) {
++		retval = -EINVAL;
++		goto unlock;
++	}
++
++	/*
++	 * If not changing anything there's no need to proceed further:
++	 */
++	if (unlikely(policy == p->policy)) {
++		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++			goto change;
++		if (!rt_policy(policy) &&
++		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++			goto change;
++
++		p->sched_reset_on_fork = reset_on_fork;
++		retval = 0;
++		goto unlock;
++	}
++change:
++
++	/* Re-check policy now with rq lock held */
++	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++		policy = oldpolicy = -1;
++		__task_access_unlock(p, lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++		goto recheck;
++	}
++
++	p->sched_reset_on_fork = reset_on_fork;
++
++	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++	if (pi) {
++		/*
++		 * Take priority boosted tasks into account. If the new
++		 * effective priority is unchanged, we just store the new
++		 * normal parameters and do not touch the scheduler class and
++		 * the runqueue. This will be done when the task deboosts
++		 * itself.
++		 */
++		newprio = rt_effective_prio(p, newprio);
++	}
++
++	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++		__setscheduler_params(p, attr);
++		__setscheduler_prio(p, newprio);
++	}
++
++	check_task_changed(p, rq);
++
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++	head = splice_balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	if (pi)
++		rt_mutex_adjust_pi(p);
++
++	/* Run balance callbacks after we've adjusted the PI chain: */
++	balance_callbacks(rq, head);
++	preempt_enable();
++
++	return 0;
++
++unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++	return retval;
++}
++
++static int _sched_setscheduler(struct task_struct *p, int policy,
++			       const struct sched_param *param, bool check)
++{
++	struct sched_attr attr = {
++		.sched_policy   = policy,
++		.sched_priority = param->sched_priority,
++		.sched_nice     = PRIO_TO_NICE(p->static_prio),
++	};
++
++	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
++	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
++		attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		policy &= ~SCHED_RESET_ON_FORK;
++		attr.sched_policy = policy;
++	}
++
++	return __sched_setscheduler(p, &attr, check, true);
++}
++
++/**
++ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Use sched_set_fifo(), read its comment.
++ *
++ * Return: 0 on success. An error code otherwise.
++ *
++ * NOTE that the task may be already dead.
++ */
++int sched_setscheduler(struct task_struct *p, int policy,
++		       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, true);
++}
++
++int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, true, true);
++}
++
++int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, false, true);
++}
++EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
++
++/**
++ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Just like sched_setscheduler, only don't bother checking if the
++ * current context has permission.  For example, this is needed in
++ * stop_machine(): we create temporary high priority worker threads,
++ * but our caller might not have that capability.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++int sched_setscheduler_nocheck(struct task_struct *p, int policy,
++			       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, false);
++}
++
++/*
++ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
++ * incapable of resource management, which is the one thing an OS really should
++ * be doing.
++ *
++ * This is of course the reason it is limited to privileged users only.
++ *
++ * Worse still; it is fundamentally impossible to compose static priority
++ * workloads. You cannot take two correctly working static prio workloads
++ * and smash them together and still expect them to work.
++ *
++ * For this reason 'all' FIFO tasks the kernel creates are basically at:
++ *
++ *   MAX_RT_PRIO / 2
++ *
++ * The administrator _MUST_ configure the system, the kernel simply doesn't
++ * know enough information to make a sensible choice.
++ */
++void sched_set_fifo(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo);
++
++/*
++ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
++ */
++void sched_set_fifo_low(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = 1 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo_low);
++
++void sched_set_normal(struct task_struct *p, int nice)
++{
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++		.sched_nice = nice,
++	};
++	WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_normal);
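
A hedged usage sketch for sched_set_fifo(); the worker function and thread name are invented for illustration:

	/* Hypothetical worker body; a real one would do the actual device work here. */
	static int my_irq_thread_fn(void *data)
	{
		while (!kthread_should_stop())
			schedule_timeout_interruptible(HZ);
		return 0;
	}

	/* Start the worker and let the scheduler pick the default FIFO priority. */
	static struct task_struct *start_rt_worker(void *data)
	{
		struct task_struct *tsk = kthread_run(my_irq_thread_fn, data, "my-rt-worker");

		if (!IS_ERR(tsk))
			sched_set_fifo(tsk);	/* lands at MAX_RT_PRIO / 2, see comment above */
		return tsk;
	}
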
++
++static int
++do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
++{
++	struct sched_param lparam;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
++		return -EFAULT;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	return sched_setscheduler(p, policy, &lparam);
++}
++
++/*
++ * Mimics kernel/events/core.c perf_copy_attr().
++ */
++static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
++{
++	u32 size;
++	int ret;
++
++	/* Zero the full structure, so that a short copy will be nice: */
++	memset(attr, 0, sizeof(*attr));
++
++	ret = get_user(size, &uattr->size);
++	if (ret)
++		return ret;
++
++	/* ABI compatibility quirk: */
++	if (!size)
++		size = SCHED_ATTR_SIZE_VER0;
++
++	if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
++		goto err_size;
++
++	ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
++	if (ret) {
++		if (ret == -E2BIG)
++			goto err_size;
++		return ret;
++	}
++
++	/*
++	 * XXX: Do we want to be lenient like existing syscalls; or do we want
++	 * to be strict and return an error on out-of-bounds values?
++	 */
++	attr->sched_nice = clamp(attr->sched_nice, -20, 19);
++
++	/* sched/core.c uses zero here but we already know ret is zero */
++	return 0;
++
++err_size:
++	put_user(sizeof(*attr), &uattr->size);
++	return -E2BIG;
++}
++
++/**
++ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
++ * @pid: the pid in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
++{
++	if (policy < 0)
++		return -EINVAL;
++
++	return do_sched_setscheduler(pid, policy, param);
++}
++
++/**
++ * sys_sched_setparam - set/change the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
++{
++	return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
++}
++
++static void get_params(struct task_struct *p, struct sched_attr *attr)
++{
++	if (task_has_rt_policy(p))
++		attr->sched_priority = p->rt_priority;
++	else
++		attr->sched_nice = task_nice(p);
++}
++
++/**
++ * sys_sched_setattr - same as above, but with extended sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
++			       unsigned int, flags)
++{
++	struct sched_attr attr;
++	int retval;
++
++	if (!uattr || pid < 0 || flags)
++		return -EINVAL;
++
++	retval = sched_copy_attr(uattr, &attr);
++	if (retval)
++		return retval;
++
++	if ((int)attr.sched_policy < 0)
++		return -EINVAL;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	if (attr.sched_flags & SCHED_FLAG_KEEP_PARAMS)
++		get_params(p, &attr);
++
++	return sched_setattr(p, &attr);
++}
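
From userspace the extended interface is typically reached through the raw syscall (older C libraries ship no wrapper, and the header that provides struct sched_attr varies by distribution); the sketch below is an assumption-laden illustration, with the policy and nice value chosen arbitrarily:

	/* Userspace sketch: switch the calling thread to SCHED_BATCH at nice 5. */
	#include <linux/sched.h>		/* SCHED_BATCH */
	#include <linux/sched/types.h>		/* struct sched_attr (location may vary) */
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <string.h>
	#include <stdio.h>

	static int set_batch_nice5(void)
	{
		struct sched_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);	/* versioned ABI, see sched_copy_attr() above */
		attr.sched_policy = SCHED_BATCH;
		attr.sched_nice = 5;

		if (syscall(SYS_sched_setattr, 0, &attr, 0) < 0) {
			perror("sched_setattr");
			return -1;
		}
		return 0;
	}
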
++
++/**
++ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
++ * @pid: the pid in question.
++ *
++ * Return: On success, the policy of the thread. Otherwise, a negative error
++ * code.
++ */
++SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
++{
++	struct task_struct *p;
++	int retval = -EINVAL;
++
++	if (pid < 0)
++		return -ESRCH;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -ESRCH;
++
++	retval = security_task_getscheduler(p);
++	if (!retval)
++		retval = p->policy;
++
++	return retval;
++}
++
++/**
++ * sys_sched_getparam - get the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the RT priority.
++ *
++ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
++ * code.
++ */
++SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
++{
++	struct sched_param lp = { .sched_priority = 0 };
++	struct task_struct *p;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++
++	scoped_guard (rcu) {
++		int retval;
++
++		p = find_process_by_pid(pid);
++		if (!p)
++			return -EINVAL;
++
++		retval = security_task_getscheduler(p);
++		if (retval)
++			return retval;
++
++		if (task_has_rt_policy(p))
++			lp.sched_priority = p->rt_priority;
++	}
++
++	/*
++	 * This one might sleep, we cannot do it with a spinlock held ...
++	 */
++	return copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
++}
++
++/*
++ * Copy the kernel size attribute structure (which might be larger
++ * than what user-space knows about) to user-space.
++ *
++ * Note that all cases are valid: user-space buffer can be larger or
++ * smaller than the kernel-space buffer. The usual case is that both
++ * have the same size.
++ */
++static int
++sched_attr_copy_to_user(struct sched_attr __user *uattr,
++			struct sched_attr *kattr,
++			unsigned int usize)
++{
++	unsigned int ksize = sizeof(*kattr);
++
++	if (!access_ok(uattr, usize))
++		return -EFAULT;
++
++	/*
++	 * sched_getattr() ABI forwards and backwards compatibility:
++	 *
++	 * If usize == ksize then we just copy everything to user-space and all is good.
++	 *
++	 * If usize < ksize then we only copy as much as user-space has space for,
++	 * this keeps ABI compatibility as well. We skip the rest.
++	 *
++	 * If usize > ksize then user-space is using a newer version of the ABI,
++	 * which part the kernel doesn't know about. Just ignore it - tooling can
++	 * detect the kernel's knowledge of attributes from the attr->size value
++	 * which is set to ksize in this case.
++	 */
++	kattr->size = min(usize, ksize);
++
++	if (copy_to_user(uattr, kattr, kattr->size))
++		return -EFAULT;
++
++	return 0;
++}
++
++/**
++ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @usize: sizeof(attr) for fwd/bwd comp.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
++		unsigned int, usize, unsigned int, flags)
++{
++	struct sched_attr kattr = { };
++	struct task_struct *p;
++	int retval;
++
++	if (!uattr || pid < 0 || usize > PAGE_SIZE ||
++	    usize < SCHED_ATTR_SIZE_VER0 || flags)
++		return -EINVAL;
++
++	scoped_guard (rcu) {
++		p = find_process_by_pid(pid);
++		if (!p)
++			return -ESRCH;
++
++		retval = security_task_getscheduler(p);
++		if (retval)
++			return retval;
++
++		kattr.sched_policy = p->policy;
++		if (p->sched_reset_on_fork)
++			kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		get_params(p, &kattr);
++		kattr.sched_flags &= SCHED_FLAG_ALL;
++
++#ifdef CONFIG_UCLAMP_TASK
++		kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
++		kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
++#endif
++	}
++
++	return sched_attr_copy_to_user(uattr, &kattr, usize);
++}
++
++#ifdef CONFIG_SMP
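++/*
++ * dl_task_check_affinity() is a stub here: the requested affinity change
++ * is always allowed, no deadline bandwidth check is performed.
++ */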
++int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
++{
++	return 0;
++}
++#endif
++
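++/*
++ * Apply @ctx->new_mask restricted to the task's cpuset, then re-read the
++ * cpuset afterwards to detect a concurrent cpuset update; if we raced,
++ * the mask is reset to the cpuset's cpus_allowed (further limited by any
++ * user mask carried in @ctx) and -EINVAL is returned.
++ */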
++static int
++__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
++{
++	int retval;
++	cpumask_var_t cpus_allowed, new_mask;
++
++	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
++		return -ENOMEM;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
++		retval = -ENOMEM;
++		goto out_free_cpus_allowed;
++	}
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	cpumask_and(new_mask, ctx->new_mask, cpus_allowed);
++
++	ctx->new_mask = new_mask;
++	ctx->flags |= SCA_CHECK;
++
++	retval = __set_cpus_allowed_ptr(p, ctx);
++	if (retval)
++		goto out_free_new_mask;
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	if (!cpumask_subset(new_mask, cpus_allowed)) {
++		/*
++		 * We must have raced with a concurrent cpuset
++		 * update. Just reset the cpus_allowed to the
++		 * cpuset's cpus_allowed
++		 */
++		cpumask_copy(new_mask, cpus_allowed);
++
++		/*
++		 * If SCA_USER is set, a 2nd call to __set_cpus_allowed_ptr()
++		 * will restore the previous user_cpus_ptr value.
++		 *
++		 * In the unlikely event a previous user_cpus_ptr exists,
++		 * we need to further restrict the mask to what is allowed
++		 * by that old user_cpus_ptr.
++		 */
++		if (unlikely((ctx->flags & SCA_USER) && ctx->user_mask)) {
++			bool empty = !cpumask_and(new_mask, new_mask,
++						  ctx->user_mask);
++
++			if (WARN_ON_ONCE(empty))
++				cpumask_copy(new_mask, cpus_allowed);
++		}
++		__set_cpus_allowed_ptr(p, ctx);
++		retval = -EINVAL;
++	}
++
++out_free_new_mask:
++	free_cpumask_var(new_mask);
++out_free_cpus_allowed:
++	free_cpumask_var(cpus_allowed);
++	return retval;
++}
++
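++/*
++ * Kernel entry point for changing a task's affinity: checks ownership or
++ * CAP_SYS_NICE plus the LSM hook, duplicates the requested mask into
++ * ctx.user_mask, and hands off to __sched_setaffinity().
++ */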
++long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
++{
++	struct affinity_context ac;
++	struct cpumask *user_mask;
++	int retval;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	if (p->flags & PF_NO_SETAFFINITY)
++		return -EINVAL;
++
++	if (!check_same_owner(p)) {
++		guard(rcu)();
++		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE))
++			return -EPERM;
++	}
++
++	retval = security_task_setscheduler(p);
++	if (retval)
++		return retval;
++
++	/*
++	 * With non-SMP configs, user_cpus_ptr/user_mask isn't used and
++	 * alloc_user_cpus_ptr() returns NULL.
++	 */
++	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
++	if (user_mask) {
++		cpumask_copy(user_mask, in_mask);
++	} else if (IS_ENABLED(CONFIG_SMP)) {
++		return -ENOMEM;
++	}
++
++	ac = (struct affinity_context){
++		.new_mask  = in_mask,
++		.user_mask = user_mask,
++		.flags     = SCA_USER,
++	};
++
++	retval = __sched_setaffinity(p, &ac);
++	kfree(ac.user_mask);
++
++	return retval;
++}
++
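++/*
++ * Copy a user-supplied CPU bitmask into @new_mask, zero-filling when the
++ * user buffer is shorter than cpumask_size() and truncating when longer.
++ */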
++static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
++			     struct cpumask *new_mask)
++{
++	if (len < cpumask_size())
++		cpumask_clear(new_mask);
++	else if (len > cpumask_size())
++		len = cpumask_size();
++
++	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
++}
++
++/**
++ * sys_sched_setaffinity - set the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to the new CPU mask
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	cpumask_var_t new_mask;
++	int retval;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
++	if (retval == 0)
++		retval = sched_setaffinity(pid, new_mask);
++	free_cpumask_var(new_mask);
++	return retval;
++}
++
++long sched_getaffinity(pid_t pid, cpumask_t *mask)
++{
++	struct task_struct *p;
++	int retval;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -ESRCH;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		return retval;
++
++	guard(raw_spinlock_irqsave)(&p->pi_lock);
++	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
++
++	return retval;
++}
++
++/**
++ * sys_sched_getaffinity - get the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to hold the current CPU mask
++ *
++ * Return: size of CPU mask copied to user_mask_ptr on success. An
++ * error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	int ret;
++	cpumask_var_t mask;
++
++	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
++		return -EINVAL;
++	if (len & (sizeof(unsigned long)-1))
++		return -EINVAL;
++
++	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	ret = sched_getaffinity(pid, mask);
++	if (ret == 0) {
++		unsigned int retlen = min(len, cpumask_size());
++
++		if (copy_to_user(user_mask_ptr, cpumask_bits(mask), retlen))
++			ret = -EFAULT;
++		else
++			ret = retlen;
++	}
++	free_cpumask_var(mask);
++
++	return ret;
++}
++
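++/*
++ * BMQ/PDS yield implementation: when sched_yield_type is 0 this is a
++ * complete no-op (no requeue, no schedule()).  Otherwise RT tasks are
++ * requeued (if still queued), while normal tasks on a busy runqueue first
++ * pass through do_sched_yield_type_1() (defined elsewhere) before being
++ * requeued, and schedule() is then called.
++ */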
++static void do_sched_yield(void)
++{
++	struct rq *rq;
++	struct rq_flags rf;
++	struct task_struct *p;
++
++	if (!sched_yield_type)
++		return;
++
++	rq = this_rq_lock_irq(&rf);
++
++	schedstat_inc(rq->yld_count);
++
++	p = current;
++	if (rt_task(p)) {
++		if (task_on_rq_queued(p))
++			requeue_task(p, rq);
++	} else if (rq->nr_running > 1) {
++		do_sched_yield_type_1(p, rq);
++		if (task_on_rq_queued(p))
++			requeue_task(p, rq);
++	}
++
++	preempt_disable();
++	raw_spin_unlock_irq(&rq->lock);
++	sched_preempt_enable_no_resched();
++
++	schedule();
++}
++
++/**
++ * sys_sched_yield - yield the current processor to other threads.
++ *
++ * This function yields the current CPU to other tasks. If there are no
++ * other threads running on this CPU then this function will return.
++ *
++ * Return: 0.
++ */
++SYSCALL_DEFINE0(sched_yield)
++{
++	do_sched_yield();
++	return 0;
++}
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++	if (should_resched(0)) {
++		preempt_schedule_common();
++		return 1;
++	}
++	/*
++	 * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
++	 * whether the current CPU is in an RCU read-side critical section,
++	 * so the tick can report quiescent states even for CPUs looping
++	 * in kernel context.  In contrast, in non-preemptible kernels,
++	 * RCU readers leave no in-memory hints, which means that CPU-bound
++	 * processes executing in kernel context might never report an
++	 * RCU quiescent state.  Therefore, the following code causes
++	 * cond_resched() to report a quiescent state, but only when RCU
++	 * is in urgent need of one.
++	 */
++#ifndef CONFIG_PREEMPT_RCU
++	rcu_all_qs();
++#endif
++	return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled	__cond_resched
++#define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled	__cond_resched
++#define might_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++	klp_sched_try_switch();
++	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_might_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held(lock);
++
++	if (spin_needbreak(lock) || resched) {
++		spin_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		spin_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_read(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		read_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		read_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_write(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		write_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		write_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * VOLUNTARY:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- __cond_resched
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * FULL:
++ *   cond_resched               <- RET0
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- preempt_schedule
++ *   preempt_schedule_notrace   <- preempt_schedule_notrace
++ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ */
++
++enum {
++	preempt_dynamic_undefined = -1,
++	preempt_dynamic_none,
++	preempt_dynamic_voluntary,
++	preempt_dynamic_full,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++	if (!strcmp(str, "none"))
++		return preempt_dynamic_none;
++
++	if (!strcmp(str, "voluntary"))
++		return preempt_dynamic_voluntary;
++
++	if (!strcmp(str, "full"))
++		return preempt_dynamic_full;
++
++	return -EINVAL;
++}
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f)	static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_disable(f)	static_key_disable(&sk_dynamic_##f.key)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
++
++static DEFINE_MUTEX(sched_dynamic_mutex);
++static bool klp_override;
++
++static void __sched_dynamic_update(int mode)
++{
++	/*
++	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++	 * the ZERO state, which is invalid.
++	 */
++	if (!klp_override)
++		preempt_dynamic_enable(cond_resched);
++	preempt_dynamic_enable(might_resched);
++	preempt_dynamic_enable(preempt_schedule);
++	preempt_dynamic_enable(preempt_schedule_notrace);
++	preempt_dynamic_enable(irqentry_exit_cond_resched);
++
++	switch (mode) {
++	case preempt_dynamic_none:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: none\n");
++		break;
++
++	case preempt_dynamic_voluntary:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_enable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: voluntary\n");
++		break;
++
++	case preempt_dynamic_full:
++		if (!klp_override)
++			preempt_dynamic_disable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_enable(preempt_schedule);
++		preempt_dynamic_enable(preempt_schedule_notrace);
++		preempt_dynamic_enable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: full\n");
++		break;
++	}
++
++	preempt_dynamic_mode = mode;
++}
++
++void sched_dynamic_update(int mode)
++{
++	mutex_lock(&sched_dynamic_mutex);
++	__sched_dynamic_update(mode);
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
++
++static int klp_cond_resched(void)
++{
++	__klp_sched_try_switch();
++	return __cond_resched();
++}
++
++void sched_dynamic_klp_enable(void)
++{
++	mutex_lock(&sched_dynamic_mutex);
++
++	klp_override = true;
++	static_call_update(cond_resched, klp_cond_resched);
++
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++void sched_dynamic_klp_disable(void)
++{
++	mutex_lock(&sched_dynamic_mutex);
++
++	klp_override = false;
++	__sched_dynamic_update(preempt_dynamic_mode);
++
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
++
++
++static int __init setup_preempt_mode(char *str)
++{
++	int mode = sched_dynamic_mode(str);
++	if (mode < 0) {
++		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++		return 0;
++	}
++
++	sched_dynamic_update(mode);
++	return 1;
++}
++__setup("preempt=", setup_preempt_mode);
++
++static void __init preempt_dynamic_init(void)
++{
++	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++			sched_dynamic_update(preempt_dynamic_none);
++		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++			sched_dynamic_update(preempt_dynamic_voluntary);
++		} else {
++			/* Default static call setting, nothing to do */
++			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++			preempt_dynamic_mode = preempt_dynamic_full;
++			pr_info("Dynamic Preempt: full\n");
++		}
++	}
++}
++
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++	bool preempt_model_##mode(void)						 \
++	{									 \
++		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++		return preempt_dynamic_mode == preempt_dynamic_##mode;		 \
++	}									 \
++	EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
++
++#else /* !CONFIG_PREEMPT_DYNAMIC */
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* #ifdef CONFIG_PREEMPT_DYNAMIC */
++
++/**
++ * yield - yield the current processor to other threads.
++ *
++ * Do not ever use this function, there's a 99% chance you're doing it wrong.
++ *
++ * The scheduler is at all times free to pick the calling task as the most
++ * eligible task to run, if removing the yield() call from your code breaks
++ * it, it's already broken.
++ *
++ * Typical broken usage is:
++ *
++ * while (!event)
++ * 	yield();
++ *
++ * where one assumes that yield() will let 'the other' process run that will
++ * make event true. If the current task is a SCHED_FIFO task that will never
++ * happen. Never use yield() as a progress guarantee!!
++ *
++ * If you want to use yield() to wait for something, use wait_event().
++ * If you want to use yield() to be 'nice' for others, use cond_resched().
++ * If you still want to use yield(), do not!
++ */
++void __sched yield(void)
++{
++	set_current_state(TASK_RUNNING);
++	do_sched_yield();
++}
++EXPORT_SYMBOL(yield);
++
++/**
++ * yield_to - yield the current processor to another thread in
++ * your thread group, or accelerate that thread toward the
++ * processor it's on.
++ * @p: target task
++ * @preempt: whether task preemption is allowed or not
++ *
++ * It's the caller's job to ensure that the target task struct
++ * can't go away on us before we can do any checks.
++ *
++ * In Alt schedule FW, yield_to is not supported.
++ *
++ * Return:
++ *	true (>0) if we indeed boosted the target task.
++ *	false (0) if we failed to boost the target.
++ *	-ESRCH if there's no task to yield to.
++ */
++int __sched yield_to(struct task_struct *p, bool preempt)
++{
++	return 0;
++}
++EXPORT_SYMBOL_GPL(yield_to);
++
++int io_schedule_prepare(void)
++{
++	int old_iowait = current->in_iowait;
++
++	current->in_iowait = 1;
++	blk_flush_plug(current->plug, true);
++	return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++	current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++	int token;
++	long ret;
++
++	token = io_schedule_prepare();
++	ret = schedule_timeout(timeout);
++	io_schedule_finish(token);
++
++	return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++	int token;
++
++	token = io_schedule_prepare();
++	schedule();
++	io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
++
++/**
++ * sys_sched_get_priority_max - return maximum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the maximum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = MAX_RT_PRIO - 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
++/**
++ * sys_sched_get_priority_min - return minimum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the minimum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
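++/*
++ * Under this scheduler the reported round-robin interval is simply the
++ * base time slice (sysctl_sched_base_slice) for every task, regardless
++ * of its policy.
++ */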
++static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
++{
++	struct task_struct *p;
++	int retval;
++
++	alt_sched_debug();
++
++	if (pid < 0)
++		return -EINVAL;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -EINVAL;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		return retval;
++
++	*t = ns_to_timespec64(sysctl_sched_base_slice);
++	return 0;
++}
++
++/**
++ * sys_sched_rr_get_interval - return the default timeslice of a process.
++ * @pid: pid of the process.
++ * @interval: userspace pointer to the timeslice value.
++ *
++ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
++ * an error code.
++ */
++SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
++		struct __kernel_timespec __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_timespec64(&t, interval);
++
++	return retval;
++}
++
++#ifdef CONFIG_COMPAT_32BIT_TIME
++SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
++		struct old_timespec32 __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_old_timespec32(&t, interval);
++	return retval;
++}
++#endif
++
++void sched_show_task(struct task_struct *p)
++{
++	unsigned long free = 0;
++	int ppid;
++
++	if (!try_get_task_stack(p))
++		return;
++
++	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++	if (task_is_running(p))
++		pr_cont("  running task    ");
++#ifdef CONFIG_DEBUG_STACK_USAGE
++	free = stack_not_used(p);
++#endif
++	ppid = 0;
++	rcu_read_lock();
++	if (pid_alive(p))
++		ppid = task_pid_nr(rcu_dereference(p->real_parent));
++	rcu_read_unlock();
++	pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n",
++		free, task_pid_nr(p), task_tgid_nr(p),
++		ppid, read_task_thread_flags(p));
++
++	print_worker_info(KERN_INFO, p);
++	print_stop_info(KERN_INFO, p);
++	show_stack(p, NULL, KERN_INFO);
++	put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/* no filter, everything matches */
++	if (!state_filter)
++		return true;
++
++	/* filter, but doesn't match */
++	if (!(state & state_filter))
++		return false;
++
++	/*
++	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++	 * TASK_KILLABLE).
++	 */
++	if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
++		return false;
++
++	return true;
++}
++
++
++void show_state_filter(unsigned int state_filter)
++{
++	struct task_struct *g, *p;
++
++	rcu_read_lock();
++	for_each_process_thread(g, p) {
++		/*
++		 * reset the NMI-timeout, listing all files on a slow
++		 * console might take a lot of time:
++		 * Also, reset softlockup watchdogs on all CPUs, because
++		 * another CPU might be blocked waiting for us to process
++		 * an IPI.
++		 */
++		touch_nmi_watchdog();
++		touch_all_softlockup_watchdogs();
++		if (state_filter_match(state_filter, p))
++			sched_show_task(p);
++	}
++
++#ifdef CONFIG_SCHED_DEBUG
++	/* TODO: Alt schedule FW should support this
++	if (!state_filter)
++		sysrq_sched_debug_show();
++	*/
++#endif
++	rcu_read_unlock();
++	/*
++	 * Only show locks if all tasks are dumped:
++	 */
++	if (!state_filter)
++		debug_show_all_locks();
++}
++
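++/*
++ * Dump the task currently running on @cpu: show the interrupted registers
++ * if we are in a hardirq on that CPU, otherwise try triggering a remote
++ * backtrace and fall back to sched_show_task() on the CPU's current task.
++ */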
++void dump_cpu_task(int cpu)
++{
++	if (cpu == smp_processor_id() && in_hardirq()) {
++		struct pt_regs *regs;
++
++		regs = get_irq_regs();
++		if (regs) {
++			show_regs(regs);
++			return;
++		}
++	}
++
++	if (trigger_single_cpu_backtrace(cpu))
++		return;
++
++	pr_info("Task dump for CPU %d:\n", cpu);
++	sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++#ifdef CONFIG_SMP
++	struct affinity_context ac = (struct affinity_context) {
++		.new_mask  = cpumask_of(cpu),
++		.flags     = 0,
++	};
++#endif
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	__sched_fork(0, idle);
++
++	raw_spin_lock_irqsave(&idle->pi_lock, flags);
++	raw_spin_lock(&rq->lock);
++
++	idle->last_ran = rq->clock_task;
++	idle->__state = TASK_RUNNING;
++	/*
++	 * PF_KTHREAD should already be set at this point; regardless, make it
++	 * look like a proper per-CPU kthread.
++	 */
++	idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
++	kthread_set_per_cpu(idle, cpu);
++
++	sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++	/*
++	 * It's possible that init_idle() gets called multiple times on a task,
++	 * in that case do_set_cpus_allowed() will not do the right thing.
++	 *
++	 * And since this is boot we can forgo the serialisation.
++	 */
++	set_cpus_allowed_common(idle, &ac);
++#endif
++
++	/* Silence PROVE_RCU */
++	rcu_read_lock();
++	__set_task_cpu(idle, cpu);
++	rcu_read_unlock();
++
++	rq->idle = idle;
++	rcu_assign_pointer(rq->curr, idle);
++	idle->on_cpu = 1;
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++	/* Set the preempt count _outside_ the spinlocks! */
++	init_idle_preempt_count(idle, cpu);
++
++	ftrace_graph_init_idle_task(idle, cpu);
++	vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++			      const struct cpumask __maybe_unused *trial)
++{
++	return 1;
++}
++
++int task_can_attach(struct task_struct *p)
++{
++	int ret = 0;
++
++	/*
++	 * Kthreads which disallow setaffinity shouldn't be moved
++	 * to a new cpuset; we don't want to change their CPU
++	 * affinity and isolating such threads by their set of
++	 * allowed nodes is unnecessary.  Thus, cpusets are not
++	 * applicable for such threads.  This prevents checking for
++	 * success of set_cpus_allowed_ptr() on all attached tasks
++	 * before cpus_mask may be changed.
++	 */
++	if (p->flags & PF_NO_SETAFFINITY)
++		ret = -EINVAL;
++
++	return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Ensures that the idle task is using init_mm right before its CPU goes
++ * offline.
++ */
++void idle_task_exit(void)
++{
++	struct mm_struct *mm = current->active_mm;
++
++	BUG_ON(current != this_rq()->idle);
++
++	if (mm != &init_mm) {
++		switch_mm(mm, &init_mm, current);
++		finish_arch_post_lock_switch();
++	}
++
++	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
++}
++
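++/*
++ * Stopper callback used by balance_push(): once the stopper runs on the
++ * outgoing CPU it migrates the pushed task to a fallback runqueue and
++ * drops the reference taken by balance_push().
++ */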
++static int __balance_push_cpu_stop(void *arg)
++{
++	struct task_struct *p = arg;
++	struct rq *rq = this_rq();
++	struct rq_flags rf;
++	int cpu;
++
++	raw_spin_lock_irq(&p->pi_lock);
++	rq_lock(rq, &rf);
++
++	update_rq_clock(rq);
++
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		cpu = select_fallback_rq(rq->cpu, p);
++		rq = __migrate_task(rq, p, cpu);
++	}
++
++	rq_unlock(rq, &rf);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	put_task_struct(p);
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE; when !cpu_active(), but only
++ * effective when the hotplug motion is down.
++ */
++static void balance_push(struct rq *rq)
++{
++	struct task_struct *push_task = rq->curr;
++
++	lockdep_assert_held(&rq->lock);
++
++	/*
++	 * Ensure the thing is persistent until balance_push_set(.on = false);
++	 */
++	rq->balance_callback = &balance_push_callback;
++
++	/*
++	 * Only active while going offline and when invoked on the outgoing
++	 * CPU.
++	 */
++	if (!cpu_dying(rq->cpu) || rq != this_rq())
++		return;
++
++	/*
++	 * Both the cpu-hotplug and stop task are in this case and are
++	 * required to complete the hotplug process.
++	 */
++	if (kthread_is_per_cpu(push_task) ||
++	    is_migration_disabled(push_task)) {
++
++		/*
++		 * If this is the idle task on the outgoing CPU try to wake
++		 * up the hotplug control thread which might wait for the
++		 * last task to vanish. The rcuwait_active() check is
++		 * accurate here because the waiter is pinned on this CPU
++		 * and can't obviously be running in parallel.
++		 *
++		 * On RT kernels this also has to check whether there are
++		 * pinned and scheduled out tasks on the runqueue. They
++		 * need to leave the migrate disabled section first.
++		 */
++		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++		    rcuwait_active(&rq->hotplug_wait)) {
++			raw_spin_unlock(&rq->lock);
++			rcuwait_wake_up(&rq->hotplug_wait);
++			raw_spin_lock(&rq->lock);
++		}
++		return;
++	}
++
++	get_task_struct(push_task);
++	/*
++	 * Temporarily drop rq->lock such that we can wake-up the stop task.
++	 * Both preemption and IRQs are still disabled.
++	 */
++	preempt_disable();
++	raw_spin_unlock(&rq->lock);
++	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++			    this_cpu_ptr(&push_work));
++	preempt_enable();
++	/*
++	 * At this point need_resched() is true and we'll take the loop in
++	 * schedule(). The next pick is obviously going to be the stop task
++	 * which kthread_is_per_cpu() and will push this task away.
++	 */
++	raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct rq_flags rf;
++
++	rq_lock_irqsave(rq, &rf);
++	if (on) {
++		WARN_ON_ONCE(rq->balance_callback);
++		rq->balance_callback = &balance_push_callback;
++	} else if (rq->balance_callback == &balance_push_callback) {
++		rq->balance_callback = NULL;
++	}
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPU's hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++	struct rq *rq = this_rq();
++
++	rcuwait_wait_event(&rq->hotplug_wait,
++			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++			   TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++	if (rq->online) {
++		update_rq_clock(rq);
++		rq->online = false;
++	}
++}
++
++static void set_rq_online(struct rq *rq)
++{
++	if (!rq->online)
++		rq->online = true;
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask.  If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++	if (cpuhp_tasks_frozen) {
++		/*
++		 * num_cpus_frozen tracks how many CPUs are involved in suspend
++		 * resume sequence. As long as this is not the last online
++		 * operation in the resume sequence, just build a single sched
++		 * domain, ignoring cpusets.
++		 */
++		partition_sched_domains(1, NULL, NULL);
++		if (--num_cpus_frozen)
++			return;
++		/*
++		 * This is the last CPU online operation. So fall through and
++		 * restore the original sched domains by considering the
++		 * cpuset configurations.
++		 */
++		cpuset_force_rebuild();
++	}
++
++	cpuset_update_active_cpus();
++}
++
++static int cpuset_cpu_inactive(unsigned int cpu)
++{
++	if (!cpuhp_tasks_frozen) {
++		cpuset_update_active_cpus();
++	} else {
++		num_cpus_frozen++;
++		partition_sched_domains(1, NULL, NULL);
++	}
++	return 0;
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/*
++	 * Clear the balance_push callback and prepare to schedule
++	 * regular tasks.
++	 */
++	balance_push_set(cpu, false);
++
++	set_cpu_active(cpu, true);
++
++	if (sched_smp_initialized)
++		cpuset_cpu_active();
++
++	/*
++	 * Put the rq online, if not already. This happens:
++	 *
++	 * 1) In the early boot process, because we build the real domains
++	 *    after all cpus have been brought up.
++	 *
++	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++	 *    domains.
++	 */
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_online(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going up, increment the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_inc_cpuslocked(&sched_smt_present);
++		cpumask_or(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++	}
++#endif
++
++	return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++	int ret;
++
++	set_cpu_active(cpu, false);
++
++	/*
++	 * From this point forward, this CPU will refuse to run any task that
++	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++	 * push those tasks away until this gets cleared, see
++	 * sched_cpu_dying().
++	 */
++	balance_push_set(cpu, true);
++
++	/*
++	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++	 * users of this state to go away such that all new such users will
++	 * observe it.
++	 *
++	 * Specifically, we rely on ttwu to no longer target this CPU, see
++	 * ttwu_queue_cond() and is_cpu_allowed().
++	 *
++	 * Do sync before park smpboot threads to take care the rcu boost case.
++	 */
++	synchronize_rcu();
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_offline(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going down, decrement the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_dec_cpuslocked(&sched_smt_present);
++		if (!static_branch_likely(&sched_smt_present))
++			cpumask_clear(sched_pcore_idle_mask);
++		cpumask_andnot(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++	}
++#endif
++
++	if (!sched_smp_initialized)
++		return 0;
++
++	ret = cpuset_cpu_inactive(cpu);
++	if (ret) {
++		balance_push_set(cpu, false);
++		set_cpu_active(cpu, true);
++		return ret;
++	}
++
++	return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++	sched_rq_cpu_starting(cpu);
++	sched_tick_start(cpu);
++	return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++	balance_hotplug_wait();
++	return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the teardown thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++	long delta = calc_load_fold_active(rq, 1);
++
++	if (delta)
++		atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++	struct task_struct *g, *p;
++	int cpu = cpu_of(rq);
++
++	lockdep_assert_held(&rq->lock);
++
++	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++	for_each_process_thread(g, p) {
++		if (task_cpu(p) != cpu)
++			continue;
++
++		if (!task_on_rq_queued(p))
++			continue;
++
++		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++	}
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/* Handle pending wakeups and then migrate everything off */
++	sched_tick_stop(cpu);
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++		WARN(true, "Dying CPU not properly vacated!");
++		dump_rq_tasks(rq, KERN_WARNING);
++	}
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	calc_load_migrate(rq);
++	hrtick_clear(rq);
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
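++/*
++ * Early-boot topology setup: before the real topology is known, each CPU's
++ * first topology mask slot is set to cpu_possible_mask and the LLC mask
++ * pointer is aimed at it.
++ */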
++static void sched_init_topology_cpumask_early(void)
++{
++	int cpu;
++	cpumask_t *tmp;
++
++	for_each_possible_cpu(cpu) {
++		/* init topo masks */
++		tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++		cpumask_copy(tmp, cpu_possible_mask);
++		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++	}
++}
++
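++/*
++ * Record one topology level in the per-CPU mask array: if @mask covers CPUs
++ * not already covered by narrower levels (tracked via the running complement
++ * held at the cursor), store the level's full cpumask there and advance the
++ * cursor; unless this is the last level, write the complement of @mask at
++ * the cursor so the following level only matches CPUs outside this one.
++ */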
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++	if (cpumask_and(topo, topo, mask)) {					\
++		cpumask_copy(topo, mask);					\
++		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
++		       cpu, (topo++)->bits[0]);					\
++	}									\
++	if (!last)								\
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(mask),	\
++				  nr_cpumask_bits);
++
++static void sched_init_topology_cpumask(void)
++{
++	int cpu;
++	cpumask_t *topo;
++
++	for_each_online_cpu(cpu) {
++		topo = per_cpu(sched_cpu_topo_masks, cpu);
++
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
++				  nr_cpumask_bits);
++#ifdef CONFIG_SCHED_SMT
++		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++		TOPOLOGY_CPUMASK(cluster, topology_cluster_cpumask(cpu), false);
++
++		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++		per_cpu(sched_cpu_llc_mask, cpu) = topo;
++		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++		       cpu, per_cpu(sd_llc_id, cpu),
++		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++			      per_cpu(sched_cpu_topo_masks, cpu)));
++	}
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++	/* Move init over to a non-isolated CPU */
++	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++		BUG();
++	current->flags &= ~PF_NO_SETAFFINITY;
++
++	sched_init_topology();
++	sched_init_topology_cpumask();
++
++	sched_smp_initialized = true;
++}
++
++static int __init migration_init(void)
++{
++	sched_cpu_starting(smp_processor_id());
++	return 0;
++}
++early_initcall(migration_init);
++
++#else
++void __init sched_init_smp(void)
++{
++	cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++	return in_lock_functions(addr) ||
++		(addr >= (unsigned long)__sched_text_start
++		&& addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/*
++ * Default task group.
++ * Every task in system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __ro_after_init;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++	int i;
++	struct rq *rq;
++
++	printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
++			 " by Alfred Chen.\n");
++
++	wait_bit_init();
++
++#ifdef CONFIG_SMP
++	for (i = 0; i < SCHED_QUEUE_BITS; i++)
++		cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++	task_group_cache = KMEM_CACHE(task_group, 0);
++
++	list_add(&root_task_group.list, &task_groups);
++	INIT_LIST_HEAD(&root_task_group.children);
++	INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++	for_each_possible_cpu(i) {
++		rq = cpu_rq(i);
++
++		sched_queue_init(&rq->queue);
++		rq->prio = IDLE_TASK_SCHED_PRIO;
++#ifdef CONFIG_SCHED_PDS
++		rq->prio_idx = rq->prio;
++#endif
++
++		raw_spin_lock_init(&rq->lock);
++		rq->nr_running = rq->nr_uninterruptible = 0;
++		rq->calc_load_active = 0;
++		rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++		rq->online = false;
++		rq->cpu = i;
++
++		rq->clear_idle_mask_func = cpumask_clear_cpu;
++		rq->set_idle_mask_func = cpumask_set_cpu;
++		rq->balance_func = NULL;
++		rq->active_balance_arg.active = 0;
++
++#ifdef CONFIG_NO_HZ_COMMON
++		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++		rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++		rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++		rq->nr_switches = 0;
++
++		hrtick_rq_init(rq);
++		atomic_set(&rq->nr_iowait, 0);
++
++		zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
++	}
++#ifdef CONFIG_SMP
++	/* Set rq->online for cpu 0 */
++	cpu_rq(0)->online = true;
++#endif
++	/*
++	 * The boot idle thread does lazy MMU switching as well:
++	 */
++	mmgrab(&init_mm);
++	enter_lazy_tlb(&init_mm, current);
++
++	/*
++	 * The idle task doesn't need the kthread struct to function, but it
++	 * is dressed up as a per-CPU kthread and thus needs to play the part
++	 * if we want to avoid special-casing it in code that deals with per-CPU
++	 * kthreads.
++	 */
++	WARN_ON(!set_kthread_struct(current));
++
++	/*
++	 * Make us the idle thread. Technically, schedule() should not be
++	 * called from this thread, however somewhere below it might be,
++	 * but because we are the idle thread, we just pick up running again
++	 * when this runqueue becomes "idle".
++	 */
++	init_idle(current, smp_processor_id());
++
++	calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++	idle_thread_set_boot_cpu();
++	balance_push_set(smp_processor_id(), false);
++
++	sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++	preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++	unsigned int state = get_current_state();
++	/*
++	 * Blocking primitives will set (and therefore destroy) current->state,
++	 * since we will exit with TASK_RUNNING make sure we enter with it,
++	 * otherwise we will destroy state.
++	 */
++	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++			"do not call blocking ops when !TASK_RUNNING; "
++			"state=%x set at [<%p>] %pS\n", state,
++			(void *)current->task_state_change,
++			(void *)current->task_state_change);
++
++	__might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++	if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++		return;
++
++	if (preempt_count() == preempt_offset)
++		return;
++
++	pr_err("Preemption disabled at:");
++	print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++	unsigned int nested = preempt_count();
++
++	nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++	return nested == offsets;
++}
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++	/* Ratelimiting timestamp: */
++	static unsigned long prev_jiffy;
++
++	unsigned long preempt_disable_ip;
++
++	/* WARN_ON_ONCE() by default, no rate limit required: */
++	rcu_sleep_check();
++
++	if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++	     !is_idle_task(current) && !current->non_block_count) ||
++	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++	    oops_in_progress)
++		return;
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	/* Save this before calling printk(), since that will clobber it: */
++	preempt_disable_ip = get_preempt_disable_ip(current);
++
++	pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++	       file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), current->non_block_count,
++	       current->pid, current->comm);
++	pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++	       offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++	if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++		pr_err("RCU nest depth: %d, expected: %u\n",
++		       rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++	}
++
++	if (task_stack_end_corrupted(current))
++		pr_emerg("Thread overran stack, or stack corrupted\n");
++
++	debug_show_held_locks(current);
++	if (irqs_disabled())
++		print_irqtrace_events(current);
++
++	print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++				 preempt_disable_ip);
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > preempt_offset)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++			in_atomic(), irqs_disabled(),
++			current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (is_migration_disabled(current))
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > 0)
++		return;
++
++	if (current->migration_flags & MDF_FORCE_ENABLED)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
++	       current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++	struct task_struct *g, *p;
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++	};
++
++	read_lock(&tasklist_lock);
++	for_each_process_thread(g, p) {
++		/*
++		 * Only normalize user tasks:
++		 */
++		if (p->flags & PF_KTHREAD)
++			continue;
++
++		schedstat_set(p->stats.wait_start,  0);
++		schedstat_set(p->stats.sleep_start, 0);
++		schedstat_set(p->stats.block_start, 0);
++
++		if (!rt_task(p)) {
++			/*
++			 * Renice negative nice level userspace
++			 * tasks back to 0:
++			 */
++			if (task_nice(p) < 0)
++				set_user_nice(p, 0);
++			continue;
++		}
++
++		__sched_setscheduler(p, &attr, false, false);
++	}
++	read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for kdb.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++	return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++	kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++	sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++	/*
++	 * We have to wait for yet another RCU grace period to expire, as
++	 * print_cfs_stats() might run concurrently.
++	 */
++	call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++	struct task_group *tg;
++
++	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++	if (!tg)
++		return ERR_PTR(-ENOMEM);
++
++	return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* rcu callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++	/* Now it should be safe to free those cfs_rqs: */
++	sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++	/* Wait for possible concurrent references to cfs_rqs complete: */
++	call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++	return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++	struct task_group *parent = css_tg(parent_css);
++	struct task_group *tg;
++
++	if (!parent) {
++		/* This is early initialization for the top cgroup */
++		return &root_task_group.css;
++	}
++
++	tg = sched_create_group(parent);
++	if (IS_ERR(tg))
++		return ERR_PTR(-ENOMEM);
++	return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++	struct task_group *parent = css_tg(css->parent);
++
++	if (parent)
++		sched_online_group(tg, parent);
++	return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	/*
++	 * Relies on the RCU grace period between css_released() and this.
++	 */
++	sched_unregister_group(tg);
++}
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++	return 0;
++}
++#endif
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++static DEFINE_MUTEX(shares_mutex);
++
++static int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++	/*
++	 * We can't change the weight of the root cgroup.
++	 */
++	if (&root_task_group == tg)
++		return -EINVAL;
++
++	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
++
++	mutex_lock(&shares_mutex);
++	if (tg->shares == shares)
++		goto done;
++
++	tg->shares = shares;
++done:
++	mutex_unlock(&shares_mutex);
++	return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cftype, u64 shareval)
++{
++	if (shareval > scale_load_down(ULONG_MAX))
++		shareval = MAX_SHARES;
++	return sched_group_set_shares(css_tg(css), scale_load(shareval));
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	struct task_group *tg = css_tg(css);
++
++	return (u64) scale_load_down(tg->shares);
++}
++#endif
++
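++/*
++ * The remaining cpu controller attributes (cfs quota/period/burst, stats,
++ * rt runtime/period, uclamp, weight, idle, max) are no-op stubs here:
++ * reads report zero and writes are accepted but ignored, presumably to
++ * keep the familiar cgroup interface files present under this scheduler.
++ */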
++static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, s64 cfs_quota_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 cfs_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, u64 cfs_burst_us)
++{
++	return 0;
++}
++
++static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 val)
++{
++	return 0;
++}
++
++static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 rt_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++
++static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	{
++		.name = "shares",
++		.read_u64 = cpu_shares_read_u64,
++		.write_u64 = cpu_shares_write_u64,
++	},
++#endif
++	{
++		.name = "cfs_quota_us",
++		.read_s64 = cpu_cfs_quota_read_s64,
++		.write_s64 = cpu_cfs_quota_write_s64,
++	},
++	{
++		.name = "cfs_period_us",
++		.read_u64 = cpu_cfs_period_read_u64,
++		.write_u64 = cpu_cfs_period_write_u64,
++	},
++	{
++		.name = "cfs_burst_us",
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++	{
++		.name = "stat",
++		.seq_show = cpu_cfs_stat_show,
++	},
++	{
++		.name = "stat.local",
++		.seq_show = cpu_cfs_local_stat_show,
++	},
++	{
++		.name = "rt_runtime_us",
++		.read_s64 = cpu_rt_runtime_read,
++		.write_s64 = cpu_rt_runtime_write,
++	},
++	{
++		.name = "rt_period_us",
++		.read_u64 = cpu_rt_period_read_uint,
++		.write_u64 = cpu_rt_period_write_uint,
++	},
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
++	{ }	/* Terminate */
++};
++
++static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cft, u64 weight)
++{
++	return 0;
++}
++
++static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
++				    struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
++				     struct cftype *cft, s64 nice)
++{
++	return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 idle)
++{
++	return 0;
++}
++
++static int cpu_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_max_write(struct kernfs_open_file *of,
++			     char *buf, size_t nbytes, loff_t off)
++{
++	return nbytes;
++}
++
++static struct cftype cpu_files[] = {
++	{
++		.name = "weight",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_weight_read_u64,
++		.write_u64 = cpu_weight_write_u64,
++	},
++	{
++		.name = "weight.nice",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_weight_nice_read_s64,
++		.write_s64 = cpu_weight_nice_write_s64,
++	},
++	{
++		.name = "idle",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_idle_read_s64,
++		.write_s64 = cpu_idle_write_s64,
++	},
++	{
++		.name = "max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_max_show,
++		.write = cpu_max_write,
++	},
++	{
++		.name = "max.burst",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
++	{ }	/* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++static int cpu_local_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++	.css_alloc	= cpu_cgroup_css_alloc,
++	.css_online	= cpu_cgroup_css_online,
++	.css_released	= cpu_cgroup_css_released,
++	.css_free	= cpu_cgroup_css_free,
++	.css_extra_stat_show = cpu_extra_stat_show,
++	.css_local_stat_show = cpu_local_stat_show,
++#ifdef CONFIG_RT_GROUP_SCHED
++	.can_attach	= cpu_cgroup_can_attach,
++#endif
++	.attach		= cpu_cgroup_attach,
++	.legacy_cftypes	= cpu_legacy_files,
++	.dfl_cftypes	= cpu_files,
++	.early_init	= true,
++	.threaded	= true,
++};
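++
++/*
++ * Note: judging by these stubs, most of the cgroup cpu controller callbacks
++ * above simply report zeros and accept writes without acting on them, so the
++ * controller's legacy and default cftype interfaces stay present and cgroup
++ * tooling keeps working even though BMQ/PDS does not act on CFS bandwidth,
++ * RT group or uclamp settings; only the cpu.shares value is actually stored.
++ */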
++#endif	/* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
++
++#ifdef CONFIG_SCHED_MM_CID
++
++/*
++ * @cid_lock: Guarantee forward-progress of cid allocation.
++ *
++ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
++ * is only used when contention is detected by the lock-free allocation so
++ * forward progress can be guaranteed.
++ */
++DEFINE_RAW_SPINLOCK(cid_lock);
++
++/*
++ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
++ *
++ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
++ * detected, it is set to 1 to ensure that all newly coming allocations are
++ * serialized by @cid_lock until the allocation which detected contention
++ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
++ * of a cid allocation.
++ */
++int use_cid_lock;
++
++/*
++ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
++ * concurrently with respect to the execution of the source runqueue context
++ * switch.
++ *
++ * There is one basic property we want to guarantee here:
++ *
++ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
++ * used by a task. That would lead to concurrent allocation of the cid and
++ * userspace corruption.
++ *
++ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
++ * that a pair of loads observe at least one of a pair of stores, which can be
++ * shown as:
++ *
++ *      X = Y = 0
++ *
++ *      w[X]=1          w[Y]=1
++ *      MB              MB
++ *      r[Y]=y          r[X]=x
++ *
++ * Which guarantees that x==0 && y==0 is impossible. But rather than using
++ * values 0 and 1, this algorithm cares about specific state transitions of the
++ * runqueue current task (as updated by the scheduler context switch), and the
++ * per-mm/cpu cid value.
++ *
++ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
++ * task->mm != mm for the rest of the discussion. There are two scheduler state
++ * transitions on context switch we care about:
++ *
++ * (TSA) Store to rq->curr with transition from (N) to (Y)
++ *
++ * (TSB) Store to rq->curr with transition from (Y) to (N)
++ *
++ * On the remote-clear side, there is one transition we care about:
++ *
++ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
++ *
++ * There is also a transition to UNSET state which can be performed from all
++ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
++ * guarantees that only a single thread will succeed:
++ *
++ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
++ *
++ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
++ * when a thread is actively using the cid (property (1)).
++ *
++ * Let's look at the relevant combinations of TSA/TSB, and TMA transitions.
++ *
++ * Scenario A) (TSA)+(TMA) (from next task perspective)
++ *
++ * CPU0                                      CPU1
++ *
++ * Context switch CS-1                       Remote-clear
++ *   - store to rq->curr: (N)->(Y) (TSA)     - cmpxchg to *pcpu_id to LAZY (TMA)
++ *                                             (implied barrier after cmpxchg)
++ *   - switch_mm_cid()
++ *     - memory barrier (see switch_mm_cid()
++ *       comment explaining how this barrier
++ *       is combined with other scheduler
++ *       barriers)
++ *     - mm_cid_get (next)
++ *       - READ_ONCE(*pcpu_cid)              - rcu_dereference(src_rq->curr)
++ *
++ * This Dekker ensures that either task (Y) is observed by the
++ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
++ * observed.
++ *
++ * If task (Y) store is observed by rcu_dereference(), it means that there is
++ * still an active task on the cpu. Remote-clear will therefore not transition
++ * to UNSET, which fulfills property (1).
++ *
++ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
++ * it will move its state to UNSET, which clears the percpu cid perhaps
++ * uselessly (which is not an issue for correctness). Because task (Y) is not
++ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
++ * state to UNSET is done with a cmpxchg expecting that the old state has the
++ * LAZY flag set, only one thread will successfully UNSET.
++ *
++ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
++ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
++ * CPU1 will observe task (Y) and do nothing more, which is fine.
++ *
++ * What we are effectively preventing with this Dekker is a scenario where
++ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
++ * because this would UNSET a cid which is actively used.
++ */
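++
++/*
++ * Mapping the abstract litmus test above onto the code: X plays the role of
++ * rq->curr and Y the role of the per-mm/cpu cid. w[X]=1 is the store to
++ * rq->curr (TSA), w[Y]=1 is the cmpxchg setting the LAZY flag (TMA), r[Y] is
++ * the READ_ONCE(*pcpu_cid) in mm_cid_get() and r[X] is the
++ * rcu_dereference(src_rq->curr) on the remote-clear side.
++ */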
++
++void sched_mm_cid_migrate_from(struct task_struct *t)
++{
++	t->migrate_from_cpu = task_cpu(t);
++}
++
++static
++int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
++					  struct task_struct *t,
++					  struct mm_cid *src_pcpu_cid)
++{
++	struct mm_struct *mm = t->mm;
++	struct task_struct *src_task;
++	int src_cid, last_mm_cid;
++
++	if (!mm)
++		return -1;
++
++	last_mm_cid = t->last_mm_cid;
++	/*
++	 * If the migrated task has no last cid, or if the current
++	 * task on src rq uses the cid, it means the source cid does not need
++	 * to be moved to the destination cpu.
++	 */
++	if (last_mm_cid == -1)
++		return -1;
++	src_cid = READ_ONCE(src_pcpu_cid->cid);
++	if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
++		return -1;
++
++	/*
++	 * If we observe an active task using the mm on this rq, it means we
++	 * are not the last task to be migrated from this cpu for this mm, so
++	 * there is no need to move src_cid to the destination cpu.
++	 */
++	rcu_read_lock();
++	src_task = rcu_dereference(src_rq->curr);
++	if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++		rcu_read_unlock();
++		t->last_mm_cid = -1;
++		return -1;
++	}
++	rcu_read_unlock();
++
++	return src_cid;
++}
++
++static
++int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
++					      struct task_struct *t,
++					      struct mm_cid *src_pcpu_cid,
++					      int src_cid)
++{
++	struct task_struct *src_task;
++	struct mm_struct *mm = t->mm;
++	int lazy_cid;
++
++	if (src_cid == -1)
++		return -1;
++
++	/*
++	 * Attempt to clear the source cpu cid to move it to the destination
++	 * cpu.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(src_cid);
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
++		return -1;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, this task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		src_task = rcu_dereference(src_rq->curr);
++		if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++			/*
++			 * We observed an active task for this mm, there is therefore
++			 * no point in moving this cid to the destination cpu.
++			 */
++			t->last_mm_cid = -1;
++			return -1;
++		}
++	}
++
++	/*
++	 * The src_cid is unused, so it can be unset.
++	 */
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++		return -1;
++	return src_cid;
++}
++
++/*
++ * Migration to dst cpu. Called with dst_rq lock held.
++ * Interrupts are disabled, which keeps the window of cid ownership without the
++ * source rq lock held small.
++ */
++void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu)
++{
++	struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
++	struct mm_struct *mm = t->mm;
++	int src_cid, dst_cid;
++	struct rq *src_rq;
++
++	lockdep_assert_rq_held(dst_rq);
++
++	if (!mm)
++		return;
++	if (src_cpu == -1) {
++		t->last_mm_cid = -1;
++		return;
++	}
++	/*
++	 * Move the src cid if the dst cid is unset. This keeps id
++	 * allocation closest to 0 in cases where few threads migrate around
++	 * many cpus.
++	 *
++	 * If the destination cid is already set, we may have to just clear
++	 * the src cid to ensure compactness in frequent-migration
++	 * scenarios.
++	 *
++	 * It is not useful to clear the src cid when the number of threads is
++	 * greater than or equal to the number of allowed cpus, because
++	 * user-space can expect the number of allowed cids to reach the
++	 * number of allowed cpus.
++	 */
++	dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
++	dst_cid = READ_ONCE(dst_pcpu_cid->cid);
++	if (!mm_cid_is_unset(dst_cid) &&
++	    atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
++		return;
++	src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
++	src_rq = cpu_rq(src_cpu);
++	src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
++	if (src_cid == -1)
++		return;
++	src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
++							    src_cid);
++	if (src_cid == -1)
++		return;
++	if (!mm_cid_is_unset(dst_cid)) {
++		__mm_cid_put(mm, src_cid);
++		return;
++	}
++	/* Move src_cid to dst cpu. */
++	mm_cid_snapshot_time(dst_rq, mm);
++	WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
++}
++
++static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
++				      int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *t;
++	int cid, lazy_cid;
++
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid))
++		return;
++
++	/*
++	 * Clear the cpu cid if it is set to keep cid allocation compact.  If
++	 * there happens to be other tasks left on the source cpu using this
++	 * mm, the next task using this mm will reallocate its cid on context
++	 * switch.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(cid);
++	if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
++		return;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, that task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		t = rcu_dereference(rq->curr);
++		if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
++			return;
++	}
++
++	/*
++	 * The cid is unused, so it can be unset.
++	 * Disable interrupts to keep the window of cid ownership without rq
++	 * lock small.
++	 */
++	scoped_guard (irqsave) {
++		if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++			__mm_cid_put(mm, cid);
++	}
++}
++
++static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct mm_cid *pcpu_cid;
++	struct task_struct *curr;
++	u64 rq_clock;
++
++	/*
++	 * rq->clock load is racy on 32-bit but one spurious clear once in a
++	 * while is irrelevant.
++	 */
++	rq_clock = READ_ONCE(rq->clock);
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++
++	/*
++	 * In order to take care of infrequently scheduled tasks, bump the time
++	 * snapshot associated with this cid if an active task using the mm is
++	 * observed on this rq.
++	 */
++	scoped_guard (rcu) {
++		curr = rcu_dereference(rq->curr);
++		if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
++			WRITE_ONCE(pcpu_cid->time, rq_clock);
++			return;
++		}
++	}
++
++	if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
++					     int weight)
++{
++	struct mm_cid *pcpu_cid;
++	int cid;
++
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid) || cid < weight)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void task_mm_cid_work(struct callback_head *work)
++{
++	unsigned long now = jiffies, old_scan, next_scan;
++	struct task_struct *t = current;
++	struct cpumask *cidmask;
++	struct mm_struct *mm;
++	int weight, cpu;
++
++	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
++
++	work->next = work;	/* Prevent double-add */
++	if (t->flags & PF_EXITING)
++		return;
++	mm = t->mm;
++	if (!mm)
++		return;
++	old_scan = READ_ONCE(mm->mm_cid_next_scan);
++	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	if (!old_scan) {
++		unsigned long res;
++
++		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
++		if (res != old_scan)
++			old_scan = res;
++		else
++			old_scan = next_scan;
++	}
++	if (time_before(now, old_scan))
++		return;
++	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
++		return;
++	cidmask = mm_cidmask(mm);
++	/* Clear cids that were not recently used. */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_old(mm, cpu);
++	weight = cpumask_weight(cidmask);
++	/*
++	 * Clear cids that are greater than or equal to the cidmask weight to
++	 * recompact it.
++	 */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
++}
++
++void init_sched_mm_cid(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	int mm_users = 0;
++
++	if (mm) {
++		mm_users = atomic_read(&mm->mm_users);
++		if (mm_users == 1)
++			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	}
++	t->cid_work.next = &t->cid_work;	/* Protect against double add */
++	init_task_work(&t->cid_work, task_mm_cid_work);
++}
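++
++/*
++ * The self-referencing cid_work.next initialised above doubles as a
++ * "not queued" sentinel: task_tick_mm_cid() below only calls task_work_add()
++ * when work->next == work, and task_mm_cid_work() resets the sentinel as
++ * soon as it runs, so the next periodic scan can be queued again without
++ * risking a double-add.
++ */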
++
++void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
++{
++	struct callback_head *work = &curr->cid_work;
++	unsigned long now = jiffies;
++
++	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
++	    work->next != work)
++		return;
++	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
++		return;
++	task_work_add(curr, work, TWA_RESUME);
++}
++
++void sched_mm_cid_exit_signals(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_before_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_after_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	scoped_guard (rq_lock_irqsave, rq) {
++		preempt_enable_no_resched();	/* holding spinlock */
++		WRITE_ONCE(t->mm_cid_active, 1);
++		/*
++		 * Store t->mm_cid_active before loading per-mm/cpu cid.
++		 * Matches barrier in sched_mm_cid_remote_clear_old().
++		 */
++		smp_mb();
++		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
++	}
++	rseq_set_notify_resume(t);
++}
++
++void sched_mm_cid_fork(struct task_struct *t)
++{
++	WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
++	t->mm_cid_active = 1;
++}
++#endif
+diff --git a/kernel/sched/alt_core.h b/kernel/sched/alt_core.h
+new file mode 100644
+index 000000000000..7b546c1bc9d0
+--- /dev/null
++++ b/kernel/sched/alt_core.h
+@@ -0,0 +1,74 @@
++#ifndef _KERNEL_SCHED_ALT_CORE_H
++#define _KERNEL_SCHED_ALT_CORE_H
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++/*
++ * Task related inlined functions
++ */
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++	return p->migration_disabled;
++#else
++	return false;
++#endif
++}
++
++/*
++ * RQ related inlined functions
++ */
++
++/*
++ * This routine assumes that the idle task is always in the queue.
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++	const struct list_head *head = &rq->queue.heads[sched_rq_prio_idx(rq)];
++
++	return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct * sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++	struct list_head *next = p->sq_node.next;
++
++	if (&rq->queue.heads[0] <= next && next < &rq->queue.heads[SCHED_LEVELS]) {
++		struct list_head *head;
++		unsigned long idx = next - &rq->queue.heads[0];
++
++		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++				    sched_idx2prio(idx, rq) + 1);
++		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++		return list_first_entry(head, struct task_struct, sq_node);
++	}
++
++	return list_next_entry(p, sq_node);
++}
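++
++/*
++ * Note on sched_rq_next_task(): when p->sq_node.next points back into the
++ * heads[] array, p was the last entry of its priority list; the code then
++ * converts that list head back to a queue index, looks up the next set bit
++ * in the queue bitmap and returns the first task of that level. Otherwise it
++ * simply returns the next task on the same list.
++ */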
++
++#ifdef CONFIG_SMP
++extern cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DECLARE_STATIC_KEY_FALSE(sched_smt_present);
++DECLARE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++
++extern cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++
++extern cpumask_t *const sched_idle_mask;
++extern cpumask_t *const sched_sg_idle_mask;
++extern cpumask_t *const sched_pcore_idle_mask;
++extern cpumask_t *const sched_ecore_idle_mask;
++
++extern struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu);
++
++typedef bool (*idle_select_func_t)(struct cpumask *dstp, const struct cpumask *src1p,
++				   const struct cpumask *src2p);
++
++extern idle_select_func_t idle_select_func;
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_CORE_H */
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1dbd7eb6a434
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,32 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date  : 2020
++ */
++#include "sched.h"
++#include "linux/sched/debug.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...)			\
++ do {						\
++	if (m)					\
++		seq_printf(m, x);		\
++	else					\
++		pr_cont(x);			\
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++			  struct seq_file *m)
++{
++	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++						get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..e8e61a59eae8
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,989 @@
++#ifndef _KERNEL_SCHED_ALT_SCHED_H
++#define _KERNEL_SCHED_ALT_SCHED_H
++
++#include <linux/context_tracking.h>
++#include <linux/profile.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++	struct cgroup_subsys_state css;
++
++	struct rcu_head rcu;
++	struct list_head list;
++
++	struct task_group *parent;
++	struct list_head siblings;
++	struct list_head children;
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	unsigned long		shares;
++#endif
++};
++
++extern struct task_group *sched_create_group(struct task_group *parent);
++extern void sched_online_group(struct task_group *tg,
++			       struct task_group *parent);
++extern void sched_destroy_group(struct task_group *tg);
++extern void sched_release_group(struct task_group *tg);
++#endif /* CONFIG_CGROUP_SCHED */
++
++#define MIN_SCHED_NORMAL_PRIO	(32)
++/*
++ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
++ *
++ * -- BMQ --
++ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
++ * -- PDS --
++ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
++ */
++#define SCHED_LEVELS		(64 + 1)
++
++#define IDLE_TASK_SCHED_PRIO	(SCHED_LEVELS - 1)
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (e.g. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++	unsigned long __w = (w); \
++	if (__w) \
++		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++	__w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		(w)
++# define scale_load_down(w)	(w)
++#endif
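++
++/*
++ * Worked example (assuming the usual SCHED_FIXEDPOINT_SHIFT of 10): on
++ * 64-bit, scale_load(1024) == 1024 << 10 == 1048576 and
++ * scale_load_down(1048576) == 1048576 >> 10 == 1024, while any non-zero
++ * weight is clamped so scale_load_down() never returns less than 2. On
++ * 32-bit both macros are identity operations.
++ */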
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++#define ROOT_TASK_GROUP_LOAD	NICE_0_LOAD
++
++/*
++ * A weight of 0 or 1 can cause arithmetic problems.
++ * The weight of a cfs_rq is the sum of the weights of the entities
++ * queued on that cfs_rq, so the weight of an entity should not be
++ * too large, and neither should the shares value of a task group.
++ * (The default weight is 1024 - so there's no practical
++ *  limitation from this.)
++ */
++#define MIN_SHARES		(1UL <<  1)
++#define MAX_SHARES		(1UL << 18)
++#endif
++
++/*
++ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
++ */
++#ifdef CONFIG_SCHED_DEBUG
++# define const_debug __read_mostly
++#else
++# define const_debug const
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED	1
++#define TASK_ON_RQ_MIGRATING	2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++	return p->on_rq == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/* Wake flags. The first three directly map to some SD flag value */
++#define WF_EXEC         0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
++#define WF_FORK         0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
++#define WF_TTWU         0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */
++
++#define WF_SYNC         0x10 /* Waker goes to sleep after wakeup */
++#define WF_MIGRATED     0x20 /* Internal use, task got migrated */
++#define WF_CURRENT_CPU  0x40 /* Prefer to move the wakee to the current CPU. */
++
++#ifdef CONFIG_SMP
++static_assert(WF_EXEC == SD_BALANCE_EXEC);
++static_assert(WF_FORK == SD_BALANCE_FORK);
++static_assert(WF_TTWU == SD_BALANCE_WAKE);
++#endif
++
++#define SCHED_QUEUE_BITS	(SCHED_LEVELS - 1)
++
++struct sched_queue {
++	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++	struct list_head heads[SCHED_LEVELS];
++};
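++
++/*
++ * Each heads[] entry is the list of runnable tasks at one priority level and
++ * the bitmap is meant to mirror which levels are non-empty, so the
++ * highest-priority runnable task can be found with a bitmap scan plus
++ * list_first_entry() (see sched_rq_first_task()/sched_rq_next_task() in
++ * alt_core.h).
++ */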
++
++struct rq;
++struct cpuidle_state;
++
++struct balance_callback {
++	struct balance_callback *next;
++	void (*func)(struct rq *rq);
++};
++
++typedef void (*balance_func_t)(struct rq *rq, int cpu);
++typedef void (*set_idle_mask_func_t)(unsigned int cpu, struct cpumask *dstp);
++typedef void (*clear_idle_mask_func_t)(int cpu, struct cpumask *dstp);
++
++struct balance_arg {
++	struct task_struct	*task;
++	int			active;
++	cpumask_t		*cpumask;
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++	/* runqueue lock: */
++	raw_spinlock_t			lock;
++
++	struct task_struct __rcu	*curr;
++	struct task_struct		*idle;
++	struct task_struct		*stop;
++	struct mm_struct		*prev_mm;
++
++	struct sched_queue		queue		____cacheline_aligned;
++
++	int				prio;
++#ifdef CONFIG_SCHED_PDS
++	int				prio_idx;
++	u64				time_edge;
++#endif
++
++	/* switch count */
++	u64 nr_switches;
++
++	atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++	u64 last_seen_need_resched_ns;
++	int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++	int membarrier_state;
++#endif
++
++	set_idle_mask_func_t	set_idle_mask_func;
++	clear_idle_mask_func_t	clear_idle_mask_func;
++
++#ifdef CONFIG_SMP
++	int cpu;		/* cpu of this runqueue */
++	bool online;
++
++	unsigned int		ttwu_pending;
++	unsigned char		nohz_idle_balance;
++	unsigned char		idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	struct sched_avg	avg_irq;
++#endif
++
++	balance_func_t		balance_func;
++	struct balance_arg	active_balance_arg		____cacheline_aligned;
++	struct cpu_stop_work	active_balance_work;
++
++	struct balance_callback	*balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++	struct rcuwait		hotplug_wait;
++#endif
++	unsigned int		nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++	u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For general cpu load util */
++	s32 load_history;
++	u64 load_block;
++	u64 load_stamp;
++
++	/* calc_load related fields */
++	unsigned long calc_load_update;
++	long calc_load_active;
++
++	/* Ensure that all clocks are in the same cache line */
++	u64			clock ____cacheline_aligned;
++	u64			clock_task;
++
++	unsigned int  nr_running;
++	unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++	call_single_data_t hrtick_csd;
++#endif
++	struct hrtimer		hrtick_timer;
++	ktime_t			hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++	/* latency stats */
++	struct sched_info rq_sched_info;
++	unsigned long long rq_cpu_time;
++	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++	/* sys_sched_yield() stats */
++	unsigned int yld_count;
++
++	/* schedule() stats */
++	unsigned int sched_switch;
++	unsigned int sched_count;
++	unsigned int sched_goidle;
++
++	/* try_to_wake_up() stats */
++	unsigned int ttwu_count;
++	unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++	/* Must be inspected within a rcu lock section */
++	struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++	call_single_data_t	nohz_csd;
++#endif
++	atomic_t		nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++
++	/* Scratch cpumask to be temporarily used under rq_lock */
++	cpumask_var_t		scratch_mask;
++};
++
++extern unsigned int sysctl_sched_base_slice;
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
++#define this_rq()		this_cpu_ptr(&runqueues)
++#define task_rq(p)		cpu_rq(task_cpu(p))
++#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
++#define raw_rq()		raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++#ifdef CONFIG_SCHED_SMT
++	SMT_LEVEL_SPACE_HOLDER,
++#endif
++	COREGROUP_LEVEL_SPACE_HOLDER,
++	CORE_LEVEL_SPACE_HOLDER,
++	OTHER_LEVEL_SPACE_HOLDER,
++	NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++	int cpu;
++
++	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++		mask++;
++
++	return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
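++
++/*
++ * The per-CPU sched_cpu_topo_masks[] array appears to be ordered from the
++ * closest affinity level outwards (SMT siblings when available, then
++ * coregroup/LLC, then core, then the rest), so __best_mask_cpu() simply
++ * walks the array with mask++ until one of the levels intersects the
++ * candidate set.
++ */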
++
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++	return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++	return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ, a call to
++	 * sched_info_xxxx() may not hold rq->lock:
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ, a call to
++	 * sched_info_xxxx() may not hold rq->lock:
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP  - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP		0x01
++
++#define ENQUEUE_WAKEUP		0x01
++
++
++/*
++ * Below are scheduler APIs used in other kernel code.
++ * They use the dummy rq_flags.
++ * TODO: BMQ needs to support these APIs for compatibility with the
++ * mainline scheduler code.
++ */
++struct rq_flags {
++	unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	local_irq_disable();
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++	return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++	return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++	lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++	raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++	local_irq_disable();
++	raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++	raw_spin_rq_unlock(rq);
++	local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++	return rq->curr == p;
++}
++
++static inline bool task_on_cpu(struct task_struct *p)
++{
++	return p->on_cpu;
++}
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++	rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	WARN_ON(!rcu_read_lock_held());
++	return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	return rq->cpu;
++#else
++	return 0;
++#endif
++}
++
++extern void resched_cpu(int cpu);
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT	0
++#define NOHZ_STATS_KICK_BIT	1
++
++#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++	u64			total;
++	u64			tick_delta;
++	u64			irq_start_time;
++	struct u64_stats_sync	sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
++ * subtracted and would never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++	unsigned int seq;
++	u64 total;
++
++	do {
++		seq = __u64_stats_fetch_begin(&irqtime->sync);
++		total = irqtime->total;
++	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++	return total;
++}
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant()	(true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant()	(false)
++#endif
++
++#ifdef CONFIG_SMP
++unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
++				 unsigned long min,
++				 unsigned long max);
++#endif /* CONFIG_SMP */
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching will be
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV	0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++	int membarrier_state;
++
++	if (prev_mm == next_mm)
++		return;
++
++	membarrier_state = atomic_read(&next_mm->membarrier_state);
++	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++		return;
++
++	WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline
++unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
++				  struct task_struct *p)
++{
++	return util;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++#ifdef CONFIG_SCHED_MM_CID
++
++#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
++#define MM_CID_SCAN_DELAY	100			/* 100ms */
++
++extern raw_spinlock_t cid_lock;
++extern int use_cid_lock;
++
++extern void sched_mm_cid_migrate_from(struct task_struct *t);
++extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu);
++extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
++extern void init_sched_mm_cid(struct task_struct *t);
++
++static inline void __mm_cid_put(struct mm_struct *mm, int cid)
++{
++	if (cid < 0)
++		return;
++	cpumask_clear_cpu(cid, mm_cidmask(mm));
++}
++
++/*
++ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
++ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
++ * be held to transition to other states.
++ *
++ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
++ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
++ */
++static inline void mm_cid_put_lazy(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (!mm_cid_is_lazy_put(cid) ||
++	    !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid, res;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	for (;;) {
++		if (mm_cid_is_unset(cid))
++			return MM_CID_UNSET;
++		/*
++		 * Attempt transition from valid or lazy-put to unset.
++		 */
++		res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
++		if (res == cid)
++			break;
++		cid = res;
++	}
++	return cid;
++}
++
++static inline void mm_cid_put(struct mm_struct *mm)
++{
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = mm_cid_pcpu_unset(mm);
++	if (cid == MM_CID_UNSET)
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int __mm_cid_try_get(struct mm_struct *mm)
++{
++	struct cpumask *cpumask;
++	int cid;
++
++	cpumask = mm_cidmask(mm);
++	/*
++	 * Retry finding first zero bit if the mask is temporarily
++	 * filled. This only happens during concurrent remote-clear
++	 * which owns a cid without holding a rq lock.
++	 */
++	for (;;) {
++		cid = cpumask_first_zero(cpumask);
++		if (cid < nr_cpu_ids)
++			break;
++		cpu_relax();
++	}
++	if (cpumask_test_and_set_cpu(cid, cpumask))
++		return -1;
++	return cid;
++}
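++
++/*
++ * The cid handed out here is simply the index of the first zero bit in the
++ * per-mm cidmask, which is what keeps concurrency ids small and dense; the
++ * periodic scan in alt_core.c then clears stale entries so the allocated ids
++ * stay compact over time (see task_mm_cid_work()).
++ */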
++
++/*
++ * Save a snapshot of the current runqueue time of this cpu
++ * with the per-cpu cid value, so we can estimate how recently it was used.
++ */
++static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
++{
++	struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
++
++	lockdep_assert_rq_held(rq);
++	WRITE_ONCE(pcpu_cid->time, rq->clock);
++}
++
++static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++	int cid;
++
++	/*
++	 * All allocations (even those using the cid_lock) are lock-free. If
++	 * use_cid_lock is set, hold the cid_lock to perform cid allocation to
++	 * guarantee forward progress.
++	 */
++	if (!READ_ONCE(use_cid_lock)) {
++		cid = __mm_cid_try_get(mm);
++		if (cid >= 0)
++			goto end;
++		raw_spin_lock(&cid_lock);
++	} else {
++		raw_spin_lock(&cid_lock);
++		cid = __mm_cid_try_get(mm);
++		if (cid >= 0)
++			goto unlock;
++	}
++
++	/*
++	 * cid concurrently allocated. Retry while forcing following
++	 * allocations to use the cid_lock to ensure forward progress.
++	 */
++	WRITE_ONCE(use_cid_lock, 1);
++	/*
++	 * Set use_cid_lock before allocation. Only care about program order
++	 * because this is only required for forward progress.
++	 */
++	barrier();
++	/*
++	 * Retry until it succeeds. It is guaranteed to eventually succeed once
++	 * all newly arriving allocations observe the use_cid_lock flag set.
++	 */
++	do {
++		cid = __mm_cid_try_get(mm);
++		cpu_relax();
++	} while (cid < 0);
++	/*
++	 * Allocate before clearing use_cid_lock. Only care about
++	 * program order because this is for forward progress.
++	 */
++	barrier();
++	WRITE_ONCE(use_cid_lock, 0);
++unlock:
++	raw_spin_unlock(&cid_lock);
++end:
++	mm_cid_snapshot_time(rq, mm);
++	return cid;
++}
++
++static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	struct cpumask *cpumask;
++	int cid;
++
++	lockdep_assert_rq_held(rq);
++	cpumask = mm_cidmask(mm);
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (mm_cid_is_valid(cid)) {
++		mm_cid_snapshot_time(rq, mm);
++		return cid;
++	}
++	if (mm_cid_is_lazy_put(cid)) {
++		if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++			__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++	}
++	cid = __mm_cid_get(rq, mm);
++	__this_cpu_write(pcpu_cid->cid, cid);
++	return cid;
++}
++
++static inline void switch_mm_cid(struct rq *rq,
++				 struct task_struct *prev,
++				 struct task_struct *next)
++{
++	/*
++	 * Provide a memory barrier between rq->curr store and load of
++	 * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
++	 *
++	 * Should be adapted if context_switch() is modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		/*
++		 * user -> kernel transition does not guarantee a barrier, but
++		 * we can use the fact that it performs an atomic operation in
++		 * mmgrab().
++		 */
++		if (prev->mm)                           // from user
++			smp_mb__after_mmgrab();
++		/*
++		 * kernel -> kernel transition does not change rq->curr->mm
++		 * state. It stays NULL.
++		 */
++	} else {                                        // to user
++		/*
++		 * kernel -> user transition does not provide a barrier
++		 * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
++		 * Provide it here.
++		 */
++		if (!prev->mm)                          // from kernel
++			smp_mb();
++		/*
++		 * user -> user transition guarantees a memory barrier through
++		 * switch_mm() when current->mm changes. If current->mm is
++		 * unchanged, no barrier is needed.
++		 */
++	}
++	if (prev->mm_cid_active) {
++		mm_cid_snapshot_time(rq, prev->mm);
++		mm_cid_put_lazy(prev);
++		prev->mm_cid = -1;
++	}
++	if (next->mm_cid_active)
++		next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
++}
++
++#else
++static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
++static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
++static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu) { }
++static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
++static inline void init_sched_mm_cid(struct task_struct *t) { }
++#endif
++
++#ifdef CONFIG_SMP
++extern struct balance_callback balance_push_callback;
++
++static inline void
++queue_balance_callback(struct rq *rq,
++		       struct balance_callback *head,
++		       void (*func)(struct rq *rq))
++{
++	lockdep_assert_rq_held(rq);
++
++	/*
++	 * Don't (re)queue an already queued item; nor queue anything when
++	 * balance_push() is active, see the comment with
++	 * balance_push_callback.
++	 */
++	if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
++		return;
++
++	head->func = func;
++	head->next = rq->balance_callback;
++	rq->balance_callback = head;
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_SCHED_H */
+diff --git a/kernel/sched/alt_topology.c b/kernel/sched/alt_topology.c
+new file mode 100644
+index 000000000000..2266138ee783
+--- /dev/null
++++ b/kernel/sched/alt_topology.c
+@@ -0,0 +1,350 @@
++#include "alt_core.h"
++#include "alt_topology.h"
++
++#ifdef CONFIG_SMP
++
++static cpumask_t sched_pcore_mask ____cacheline_aligned_in_smp;
++
++static int __init sched_pcore_mask_setup(char *str)
++{
++	if (cpulist_parse(str, &sched_pcore_mask))
++		pr_warn("sched/alt: pcore_cpus= incorrect CPU range\n");
++
++	return 0;
++}
++__setup("pcore_cpus=", sched_pcore_mask_setup);
++
++/*
++ * set/clear idle mask functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static void set_idle_mask_smt(unsigned int cpu, struct cpumask *dstp)
++{
++	cpumask_set_cpu(cpu, dstp);
++	if (cpumask_subset(cpu_smt_mask(cpu), sched_idle_mask))
++		cpumask_or(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++
++static void clear_idle_mask_smt(int cpu, struct cpumask *dstp)
++{
++	cpumask_clear_cpu(cpu, dstp);
++	cpumask_andnot(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++#endif
++
++static void set_idle_mask_pcore(unsigned int cpu, struct cpumask *dstp)
++{
++	cpumask_set_cpu(cpu, dstp);
++	cpumask_set_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void clear_idle_mask_pcore(int cpu, struct cpumask *dstp)
++{
++	cpumask_clear_cpu(cpu, dstp);
++	cpumask_clear_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void set_idle_mask_ecore(unsigned int cpu, struct cpumask *dstp)
++{
++	cpumask_set_cpu(cpu, dstp);
++	cpumask_set_cpu(cpu, sched_ecore_idle_mask);
++}
++
++static void clear_idle_mask_ecore(int cpu, struct cpumask *dstp)
++{
++	cpumask_clear_cpu(cpu, dstp);
++	cpumask_clear_cpu(cpu, sched_ecore_idle_mask);
++}
++
++/*
++ * Idle cpu/rq selection functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static bool p1_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++				 const struct cpumask *src2p)
++{
++	return cpumask_and(dstp, src1p, src2p + 1)	||
++	       cpumask_and(dstp, src1p, src2p);
++}
++#endif
++
++static bool p1p2_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++					const struct cpumask *src2p)
++{
++	return cpumask_and(dstp, src1p, src2p + 1)	||
++	       cpumask_and(dstp, src1p, src2p + 2)	||
++	       cpumask_and(dstp, src1p, src2p);
++}
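++
++/*
++ * The src2p + 1 / src2p + 2 arithmetic above presumably relies on the idle
++ * masks (sched_idle_mask and the sg/pcore/ecore variants declared in
++ * alt_core.h) being laid out as consecutive cpumask_t entries, so the more
++ * specific "preferred idle" masks are consulted before the plain idle mask.
++ * That layout is an inference from the declarations, not visible in this
++ * hunk.
++ */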
++
++/* common balance functions */
++static int active_balance_cpu_stop(void *data)
++{
++	struct balance_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++	cpumask_t tmp;
++
++	local_irq_save(flags);
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++
++	arg->active = 0;
++
++	if (task_on_rq_queued(p) && task_rq(p) == rq &&
++	    cpumask_and(&tmp, p->cpus_ptr, arg->cpumask) &&
++	    !is_migration_disabled(p)) {
++		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu_of(rq)));
++		rq = move_queued_task(rq, p, dcpu);
++	}
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++/* trigger_active_balance - for @rq */
++static inline int
++trigger_active_balance(struct rq *src_rq, struct rq *rq, cpumask_t *target_mask)
++{
++	struct balance_arg *arg;
++	unsigned long flags;
++	struct task_struct *p;
++	int res;
++
++	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++		return 0;
++
++	arg = &rq->active_balance_arg;
++	res = (1 == rq->nr_running) &&
++	      !is_migration_disabled((p = sched_rq_first_task(rq))) &&
++	      cpumask_intersects(p->cpus_ptr, target_mask) &&
++	      !arg->active;
++	if (res) {
++		arg->task = p;
++		arg->cpumask = target_mask;
++
++		arg->active = 1;
++	}
++
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	if (res) {
++		preempt_disable();
++		raw_spin_unlock(&src_rq->lock);
++
++		stop_one_cpu_nowait(cpu_of(rq), active_balance_cpu_stop, arg,
++				    &rq->active_balance_work);
++
++		preempt_enable();
++		raw_spin_lock(&src_rq->lock);
++	}
++
++	return res;
++}
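++
++/*
++ * trigger_active_balance() only sets up the balance_arg under the target rq
++ * lock; the actual migration happens in active_balance_cpu_stop() via the
++ * stopper thread. The source rq lock is dropped around stop_one_cpu_nowait()
++ * and re-taken afterwards, presumably to avoid lock-ordering problems when
++ * waking the stopper.
++ */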
++
++static inline int
++ecore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++	if (cpumask_andnot(single_task_mask, single_task_mask, &sched_pcore_mask)) {
++		int i, cpu = cpu_of(rq);
++
++		for_each_cpu_wrap(i, single_task_mask, cpu)
++			if (trigger_active_balance(rq, cpu_rq(i), target_mask))
++				return 1;
++	}
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct balance_callback, active_balance_head);
++
++#ifdef CONFIG_SCHED_SMT
++static inline int
++smt_pcore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++	cpumask_t smt_single_mask;
++
++	if (cpumask_and(&smt_single_mask, single_task_mask, &sched_smt_mask)) {
++		int i, cpu = cpu_of(rq);
++
++		for_each_cpu_wrap(i, &smt_single_mask, cpu) {
++			if (cpumask_subset(cpu_smt_mask(i), &smt_single_mask) &&
++			    trigger_active_balance(rq, cpu_rq(i), target_mask))
++				return 1;
++		}
++	}
++
++	return 0;
++}
++
++/* smt p core balance functions */
++static inline void smt_pcore_balance(struct rq *rq)
++{
++	cpumask_t single_task_mask;
++
++	if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++	    (/* smt core group balance */
++	     (static_key_count(&sched_smt_present.key) > 1 &&
++	      smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)
++	     ) ||
++	     /* e core to idle smt core balance */
++	     ecore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)))
++		return;
++}
++
++static void smt_pcore_balance_func(struct rq *rq, const int cpu)
++{
++	if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++		queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_pcore_balance);
++}
++
++/* smt balance functions */
++static inline void smt_balance(struct rq *rq)
++{
++	cpumask_t single_task_mask;
++
++	if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++	    static_key_count(&sched_smt_present.key) > 1 &&
++	    smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask))
++		return;
++}
++
++static void smt_balance_func(struct rq *rq, const int cpu)
++{
++	if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++		queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_balance);
++}
++
++/* e core balance functions */
++static inline void ecore_balance(struct rq *rq)
++{
++	cpumask_t single_task_mask;
++
++	if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++	    /* smt occupied p core to idle e core balance */
++	    smt_pcore_source_balance(rq, &single_task_mask, sched_ecore_idle_mask))
++		return;
++}
++
++static void ecore_balance_func(struct rq *rq, const int cpu)
++{
++	queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), ecore_balance);
++}
++#endif /* CONFIG_SCHED_SMT */
++
++/* p core balance functions */
++static inline void pcore_balance(struct rq *rq)
++{
++	cpumask_t single_task_mask;
++
++	if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++	    /* e core to idle p core balance */
++	    ecore_source_balance(rq, &single_task_mask, sched_pcore_idle_mask))
++		return;
++}
++
++static void pcore_balance_func(struct rq *rq, const int cpu)
++{
++	queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), pcore_balance);
++}
++
++#ifdef ALT_SCHED_DEBUG
++#define SCHED_DEBUG_INFO(...)	printk(KERN_INFO __VA_ARGS__)
++#else
++#define SCHED_DEBUG_INFO(...)	do { } while(0)
++#endif
++
++#define SET_IDLE_SELECT_FUNC(func)						\
++{										\
++	idle_select_func = func;						\
++	printk(KERN_INFO "sched: "#func);					\
++}
++
++#define SET_RQ_BALANCE_FUNC(rq, cpu, func)					\
++{										\
++	rq->balance_func = func;						\
++	SCHED_DEBUG_INFO("sched: cpu#%02d -> "#func, cpu);			\
++}
++
++#define SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_func, clear_func)			\
++{										\
++	rq->set_idle_mask_func		= set_func;				\
++	rq->clear_idle_mask_func	= clear_func;				\
++	SCHED_DEBUG_INFO("sched: cpu#%02d -> "#set_func" "#clear_func, cpu);	\
++}
++
++void sched_init_topology(void)
++{
++	int cpu;
++	struct rq *rq;
++	cpumask_t sched_ecore_mask = { CPU_BITS_NONE };
++	int ecore_present = 0;
++
++#ifdef CONFIG_SCHED_SMT
++	if (!cpumask_empty(&sched_smt_mask))
++		printk(KERN_INFO "sched: smt mask: 0x%08lx\n", sched_smt_mask.bits[0]);
++#endif
++
++	if (!cpumask_empty(&sched_pcore_mask)) {
++		cpumask_andnot(&sched_ecore_mask, cpu_online_mask, &sched_pcore_mask);
++		printk(KERN_INFO "sched: pcore mask: 0x%08lx, ecore mask: 0x%08lx\n",
++		       sched_pcore_mask.bits[0], sched_ecore_mask.bits[0]);
++
++		ecore_present = !cpumask_empty(&sched_ecore_mask);
++	}
++
++#ifdef CONFIG_SCHED_SMT
++	/* idle select function */
++	if (cpumask_equal(&sched_smt_mask, cpu_online_mask)) {
++		SET_IDLE_SELECT_FUNC(p1_idle_select_func);
++	} else
++#endif
++	if (!cpumask_empty(&sched_pcore_mask)) {
++		SET_IDLE_SELECT_FUNC(p1p2_idle_select_func);
++	}
++
++	for_each_online_cpu(cpu) {
++		rq = cpu_rq(cpu);
++		/* take the chance to reset the time slice for idle tasks */
++		rq->idle->time_slice = sysctl_sched_base_slice;
++
++#ifdef CONFIG_SCHED_SMT
++		if (cpumask_weight(cpu_smt_mask(cpu)) > 1) {
++			SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_smt, clear_idle_mask_smt);
++
++			if (cpumask_test_cpu(cpu, &sched_pcore_mask) &&
++			    !cpumask_intersects(&sched_ecore_mask, &sched_smt_mask)) {
++				SET_RQ_BALANCE_FUNC(rq, cpu, smt_pcore_balance_func);
++			} else {
++				SET_RQ_BALANCE_FUNC(rq, cpu, smt_balance_func);
++			}
++
++			continue;
++		}
++#endif
++		/* !SMT or only one cpu in sg */
++		if (cpumask_test_cpu(cpu, &sched_pcore_mask)) {
++			SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_pcore, clear_idle_mask_pcore);
++
++			if (ecore_present)
++				SET_RQ_BALANCE_FUNC(rq, cpu, pcore_balance_func);
++
++			continue;
++		}
++		if (cpumask_test_cpu(cpu, &sched_ecore_mask)) {
++			SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_ecore, clear_idle_mask_ecore);
++#ifdef CONFIG_SCHED_SMT
++			if (cpumask_intersects(&sched_pcore_mask, &sched_smt_mask))
++				SET_RQ_BALANCE_FUNC(rq, cpu, ecore_balance_func);
++#endif
++		}
++	}
++}
++#endif /* CONFIG_SMP */
+diff --git a/kernel/sched/alt_topology.h b/kernel/sched/alt_topology.h
+new file mode 100644
+index 000000000000..076174cd2bc6
+--- /dev/null
++++ b/kernel/sched/alt_topology.h
+@@ -0,0 +1,6 @@
++#ifndef _KERNEL_SCHED_ALT_TOPOLOGY_H
++#define _KERNEL_SCHED_ALT_TOPOLOGY_H
++
++extern void sched_init_topology(void);
++
++#endif /* _KERNEL_SCHED_ALT_TOPOLOGY_H */
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..5a7835246ec3
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,103 @@
++#ifndef _KERNEL_SCHED_BMQ_H
++#define _KERNEL_SCHED_BMQ_H
++
++#define ALT_SCHED_NAME "BMQ"
++
++/*
++ * BMQ only routines
++ */
++static inline void boost_task(struct task_struct *p, int n)
++{
++	int limit;
++
++	switch (p->policy) {
++	case SCHED_NORMAL:
++		limit = -MAX_PRIORITY_ADJ;
++		break;
++	case SCHED_BATCH:
++		limit = 0;
++		break;
++	default:
++		return;
++	}
++
++	p->boost_prio = max(limit, p->boost_prio - n);
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++	if (p->boost_prio < MAX_PRIORITY_ADJ)
++		p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++/* This API is used in task_prio(); the return value is read by human users */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	return p->prio + p->boost_prio - MIN_NORMAL_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO)? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MIN_NORMAL_PRIO) / 2;
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)	\
++	prio = task_sched_prio(p);		\
++	idx = prio;
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++	return idx;
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++	return rq->prio;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio + p->boost_prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq) {}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	deboost_task(p);
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq) {}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++	s64 delta = this_rq()->clock_task - p->last_ran;
++
++	if (likely(delta > 0))
++		boost_task(p, delta  >> 22);
++}
++
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++	boost_task(p, 1);
++}
++
++#endif /* _KERNEL_SCHED_BMQ_H */
+diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
+index d9dc9ab3773f..71a25540d65e 100644
+--- a/kernel/sched/build_policy.c
++++ b/kernel/sched/build_policy.c
+@@ -42,13 +42,19 @@
+ 
+ #include "idle.c"
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+ 
+ #include "cputime.c"
+-#include "deadline.c"
+ 
++#ifndef CONFIG_SCHED_ALT
++#include "deadline.c"
++#endif
+diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
+index 80a3df49ab47..58d04aa73634 100644
+--- a/kernel/sched/build_utility.c
++++ b/kernel/sched/build_utility.c
+@@ -56,6 +56,10 @@
+ 
+ #include "clock.c"
+ 
++#ifdef CONFIG_SCHED_ALT
++# include "alt_topology.c"
++#endif
++
+ #ifdef CONFIG_CGROUP_CPUACCT
+ # include "cpuacct.c"
+ #endif
+@@ -84,7 +88,9 @@
+ 
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+ 
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index eece6244f9d2..3075127f9e95 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -197,12 +197,17 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
+ 
+ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
+ 
+ 	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
+ 	util = max(util, boost);
+ 	sg_cpu->bw_min = min;
+ 	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
++#else /* CONFIG_SCHED_ALT */
++	sg_cpu->bw_min = 0;
++	sg_cpu->util = rq_load_util(cpu_rq(sg_cpu->cpu), arch_scale_cpu_capacity(sg_cpu->cpu));
++#endif /* CONFIG_SCHED_ALT */
+ }
+ 
+ /**
+@@ -343,8 +348,10 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
+  */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
+ 		sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+ 
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -676,6 +683,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ 	}
+ 
+ 	ret = sched_setattr_nocheck(thread, &attr);
++
+ 	if (ret) {
+ 		kthread_stop(thread);
+ 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index aa48b2ec879d..3034c06528bc 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -126,7 +126,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ 	p->utime += cputime;
+ 	account_group_user_time(p, cputime);
+ 
+-	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+ 
+ 	/* Add user time to cpustat. */
+ 	task_group_account_field(p, index, cputime);
+@@ -150,7 +150,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ 	p->gtime += cputime;
+ 
+ 	/* Add guest time to cpustat. */
+-	if (task_nice(p) > 0) {
++	if (task_running_nice(p)) {
+ 		task_group_account_field(p, CPUTIME_NICE, cputime);
+ 		cpustat[CPUTIME_GUEST_NICE] += cputime;
+ 	} else {
+@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+-	return t->se.sum_exec_runtime;
++	return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ 	struct rq *rq;
+ 
+ 	rq = task_rq_lock(t, &rf);
+-	ns = t->se.sum_exec_runtime;
++	ns = tsk_seruntime(t);
+ 	task_rq_unlock(rq, t, &rf);
+ 
+ 	return ns;
+@@ -617,7 +617,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ 	struct task_cputime cputime = {
+-		.sum_exec_runtime = p->se.sum_exec_runtime,
++		.sum_exec_runtime = tsk_seruntime(p),
+ 	};
+ 
+ 	if (task_cputime(p, &cputime.utime, &cputime.stime))
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index c1eb9a1afd13..d3aec989797d 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -7,6 +7,7 @@
+  * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * This allows printing both to /sys/kernel/debug/sched/debug and
+  * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+ 
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 
+@@ -278,6 +280,7 @@ static const struct file_operations sched_dynamic_fops = {
+ 
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+ 
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+ 
+ #ifdef CONFIG_SMP
+@@ -332,6 +335,7 @@ static const struct file_operations sched_debug_fops = {
+ 	.llseek		= seq_lseek,
+ 	.release	= seq_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ static struct dentry *debugfs_sched;
+ 
+@@ -341,14 +345,17 @@ static __init int sched_init_debug(void)
+ 
+ 	debugfs_sched = debugfs_create_dir("sched", NULL);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ 	debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+ 
+ 	debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
+ 	debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
+ 
+@@ -373,11 +380,13 @@ static __init int sched_init_debug(void)
+ #endif
+ 
+ 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	return 0;
+ }
+ late_initcall(sched_init_debug);
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 
+ static cpumask_var_t		sd_sysctl_cpus;
+@@ -1111,6 +1120,7 @@ void proc_sched_set_task(struct task_struct *p)
+ 	memset(&p->stats, 0, sizeof(p->stats));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 6135fbe83d68..57708dc54827 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -430,6 +430,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ 		do_idle();
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * idle-task scheduling class.
+  */
+@@ -551,3 +552,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ 	.switched_to		= switched_to_idle,
+ 	.update_curr		= update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..fe3099071eb7
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,139 @@
++#ifndef _KERNEL_SCHED_PDS_H
++#define _KERNEL_SCHED_PDS_H
++
++#define ALT_SCHED_NAME "PDS"
++
++static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
++
++#define SCHED_NORMAL_PRIO_NUM	(32)
++#define SCHED_EDGE_DELTA	(SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
++
++/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
++#define SCHED_NORMAL_PRIO_MOD(x)	((x) & (SCHED_NORMAL_PRIO_NUM - 1))
++
++/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
++static __read_mostly int sched_timeslice_shift = 23;
++
++/*
++ * Common interfaces
++ */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	u64 sched_dl = max(p->deadline, rq->time_edge);
++
++#ifdef ALT_SCHED_DEBUG
++	if (WARN_ONCE(sched_dl - rq->time_edge > NORMAL_PRIO_NUM - 1,
++		      "pds: task_sched_prio_normal() delta %lld\n", sched_dl - rq->time_edge))
++		return SCHED_NORMAL_PRIO_NUM - 1;
++#endif
++
++	return sched_dl - rq->time_edge;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)							\
++	if (p->prio < MIN_NORMAL_PRIO) {							\
++		prio = p->prio >> 2;								\
++		idx = prio;									\
++	} else {										\
++		u64 sched_dl = max(p->deadline, rq->time_edge);					\
++		prio = MIN_SCHED_NORMAL_PRIO + sched_dl - rq->time_edge;			\
++		idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_dl);			\
++	}
++
++static inline int sched_prio2idx(int sched_prio, struct rq *rq)
++{
++	return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
++		sched_prio :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
++}
++
++static inline int sched_idx2prio(int sched_idx, struct rq *rq)
++{
++	return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
++		sched_idx :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++	return rq->prio_idx;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq)
++{
++	struct list_head head;
++	u64 old = rq->time_edge;
++	u64 now = rq->clock >> sched_timeslice_shift;
++	u64 prio, delta;
++	DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
++
++	if (now == old)
++		return;
++
++	rq->time_edge = now;
++	delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
++	INIT_LIST_HEAD(&head);
++
++	prio = MIN_SCHED_NORMAL_PRIO;
++	for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
++		list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
++				      SCHED_NORMAL_PRIO_MOD(prio + old), &head);
++
++	bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
++	if (!list_empty(&head)) {
++		u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
++
++		__list_splice(&head, rq->queue.heads + idx, rq->queue.heads[idx].next);
++		set_bit(MIN_SCHED_NORMAL_PRIO, normal);
++	}
++	bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
++		       (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
++
++	if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
++		return;
++
++	rq->prio = max_t(u64, MIN_SCHED_NORMAL_PRIO, rq->prio - delta);
++	rq->prio_idx = sched_prio2idx(rq->prio, rq);
++}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	if (p->prio >= MIN_NORMAL_PRIO)
++		p->deadline = rq->time_edge + SCHED_EDGE_DELTA +
++			      (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++	u64 max_dl = rq->time_edge + SCHED_EDGE_DELTA + NICE_WIDTH / 2 - 1;
++	if (unlikely(p->deadline > max_dl))
++		p->deadline = max_dl;
++}
++
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	sched_task_renew(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sysctl_sched_base_slice;
++	sched_task_renew(p, rq);
++}
++
++static inline void sched_task_ttwu(struct task_struct *p) {}
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
++
++#endif /* _KERNEL_SCHED_PDS_H */
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index ef00382de595..9b8362284b9e 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * sched_entity:
+  *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ 
+ 	return 0;
+ }
++#endif
+ 
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * hardware:
+  *
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index 2150062949d4..a82bff3231a4 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,13 +1,15 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
++#endif
+ 
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_hw_load_avg(u64 now, struct rq *rq, u64 capacity);
+ 
+ static inline u64 hw_load_avg(struct rq *rq)
+@@ -44,6 +46,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ 	return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ 	unsigned int enqueued;
+@@ -180,9 +183,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ 	return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #else
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -200,6 +205,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ 	return 0;
+ }
++#endif
+ 
+ static inline int
+ update_hw_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index ef20c61004eb..a38785bb8048 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+ 
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -3481,4 +3485,9 @@ static inline void init_sched_mm_cid(struct task_struct *t) { }
+ extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
+ extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);
+ 
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index 78e48f5426ee..2b31bcb9f683 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -125,8 +125,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 	} else {
+ 		struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		struct sched_domain *sd;
+ 		int dcount = 0;
++#endif
+ #endif
+ 		cpu = (unsigned long)(v - 2);
+ 		rq = cpu_rq(cpu);
+@@ -143,6 +145,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 		seq_printf(seq, "\n");
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		/* domain-specific stats */
+ 		rcu_read_lock();
+ 		for_each_domain(cpu, sd) {
+@@ -170,6 +173,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 			    sd->ttwu_move_balance);
+ 		}
+ 		rcu_read_unlock();
++#endif
+ #endif
+ 	}
+ 	return 0;
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index b02dfc322951..49f94ed07ce6 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart  (struct rq *rq, unsigned long long delt
+ 
+ #endif /* CONFIG_SCHEDSTATS */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ 	struct sched_entity     se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
+ #endif
+ 	return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PSI
+ void psi_task_change(struct task_struct *task, int clear, int set);
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index a6994a1fcc90..22d542998c5c 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -3,6 +3,7 @@
+  * Scheduler topology setup/handling methods
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include <linux/bsearch.h>
+ 
+ DEFINE_MUTEX(sched_domains_mutex);
+@@ -1451,8 +1452,10 @@ static void asym_cpu_capacity_scan(void)
+  */
+ 
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+ 
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ 	if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1687,6 +1690,7 @@ sd_init(struct sched_domain_topology_level *tl,
+ 
+ 	return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ /*
+  * Topology list, bottom-up.
+@@ -1723,6 +1727,7 @@ void __init set_sched_topology(struct sched_domain_topology_level *tl)
+ 	sched_domain_topology_saved = NULL;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+ 
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2789,3 +2794,28 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ 	mutex_unlock(&sched_domains_mutex);
+ }
++#else /* CONFIG_SCHED_ALT */
++DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
++
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++			     struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return best_mask_cpu(cpu, cpus);
++}
++
++int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
++{
++	return cpumask_nth(cpu, cpus);
++}
++
++const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
++{
++	return ERR_PTR(-EOPNOTSUPP);
++}
++EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index e0b917328cf9..c72067bfb880 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
+ 
+ /* Constants used for minimum and maximum */
+ 
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PERF_EVENTS
+ static const int six_hundred_forty_kb = 640 * 1024;
+ #endif
+@@ -1912,6 +1916,17 @@ static struct ctl_table kern_table[] = {
+ 		.proc_handler	= proc_dointvec,
+ 	},
+ #endif
++#ifdef CONFIG_SCHED_ALT
++	{
++		.procname	= "yield_type",
++		.data		= &sched_yield_type,
++		.maxlen		= sizeof (int),
++		.mode		= 0644,
++		.proc_handler	= &proc_dointvec_minmax,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_TWO,
++	},
++#endif
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ 	{
+ 		.procname	= "spin_retry",
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index b8ee320208d4..087b252383cb 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2074,8 +2074,10 @@ long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
+ 	int ret = 0;
+ 	u64 slack;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	slack = current->timer_slack_ns;
+-	if (rt_task(current))
++	if (dl_task(current) || rt_task(current))
++#endif
+ 		slack = 0;
+ 
+ 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index e9c6f9d0e42c..43ee0a94abdd 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ 	u64 stime, utime;
+ 
+ 	task_cputime(p, &utime, &stime);
+-	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++	store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+ 
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -867,6 +867,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ 	}
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ 	if (tsk->dl.dl_overrun) {
+@@ -874,6 +875,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ 		send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ 	}
+ }
++#endif
+ 
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -901,8 +903,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	u64 samples[CPUCLOCK_MAX];
+ 	unsigned long soft;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk))
+ 		check_dl_overrun(tsk);
++#endif
+ 
+ 	if (expiry_cache_is_inactive(pct))
+ 		return;
+@@ -916,7 +920,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ 	if (soft != RLIM_INFINITY) {
+ 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
+-		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+ 
+ 		/* At the hard limit, send SIGKILL. No further action. */
+@@ -1152,8 +1156,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ 			return true;
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk) && tsk->dl.dl_overrun)
+ 		return true;
++#endif
+ 
+ 	return false;
+ }
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index e9c5058a8efd..8c23dc364046 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1155,10 +1155,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ 	/* Make this a -deadline thread */
+ 	static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++		/* No deadline on BMQ/PDS, use RR */
++		.sched_policy = SCHED_RR,
++#else
+ 		.sched_policy = SCHED_DEADLINE,
+ 		.sched_runtime = 100000ULL,
+ 		.sched_deadline = 10000000ULL,
+ 		.sched_period = 10000000ULL
++#endif
+ 	};
+ 	struct wakeup_test_data *x = data;
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 3fbaecfc88c2..e2e493d2316b 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1248,6 +1248,7 @@ static bool kick_pool(struct worker_pool *pool)
+ 
+ 	p = worker->task;
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 	/*
+ 	 * Idle @worker is about to execute @work and waking up provides an
+@@ -1277,6 +1278,8 @@ static bool kick_pool(struct worker_pool *pool)
+ 		}
+ 	}
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
++
+ 	wake_up_process(p);
+ 	return true;
+ }
+@@ -1405,7 +1408,11 @@ void wq_worker_running(struct task_struct *task)
+ 	 * CPU intensive auto-detection cares about how long a work item hogged
+ 	 * CPU without sleeping. Reset the starting timestamp on wakeup.
+ 	 */
++#ifdef CONFIG_SCHED_ALT
++	worker->current_at = worker->task->sched_time;
++#else
+ 	worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 
+ 	WRITE_ONCE(worker->sleeping, 0);
+ }
+@@ -1490,7 +1497,11 @@ void wq_worker_tick(struct task_struct *task)
+ 	 * We probably want to make this prettier in the future.
+ 	 */
+ 	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
++#ifdef CONFIG_SCHED_ALT
++	    worker->task->sched_time - worker->current_at <
++#else
+ 	    worker->task->se.sum_exec_runtime - worker->current_at <
++#endif
+ 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
+ 		return;
+ 
+@@ -3176,7 +3187,11 @@ __acquires(&pool->lock)
+ 	worker->current_func = work->func;
+ 	worker->current_pwq = pwq;
+ 	if (worker->task)
++#ifdef CONFIG_SCHED_ALT
++		worker->current_at = worker->task->sched_time;
++#else
+ 		worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 	work_data = *work_data_bits(work);
+ 	worker->current_color = get_work_color(work_data);
+ 
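
For readers skimming the scheduler hunks above: the run-queue index used by BMQ
comes from the task_sched_prio() helper in the kernel/sched/bmq.h hunk, which
folds a task's boost_prio into its effective priority. The stand-alone C sketch
below only mirrors that arithmetic for illustration; it is not part of the
patch, the MIN_NORMAL_PRIO/MIN_SCHED_NORMAL_PRIO values are placeholders (the
real definitions live in alt_sched.h, outside this excerpt), and the example
priorities are arbitrary.

/* Illustrative-only sketch of the BMQ queue-index mapping; not part of the patch. */
#include <stdio.h>

#define MIN_SCHED_NORMAL_PRIO	32	/* placeholder value */
#define MIN_NORMAL_PRIO		128	/* placeholder value */

/* Mirrors task_sched_prio() from the bmq.h hunk: RT priorities collapse four
 * levels per queue (prio >> 2), while SCHED_NORMAL/SCHED_BATCH tasks collapse
 * two (prio + boost) levels per queue above MIN_SCHED_NORMAL_PRIO. */
static int demo_task_sched_prio(int prio, int boost_prio)
{
	return (prio < MIN_NORMAL_PRIO) ? (prio >> 2) :
		MIN_SCHED_NORMAL_PRIO + (prio + boost_prio - MIN_NORMAL_PRIO) / 2;
}

int main(void)
{
	/* arbitrary example priorities, just to show the effect of boost_prio */
	printf("normal, no boost: %d\n", demo_task_sched_prio(MIN_NORMAL_PRIO + 20, 0));
	printf("normal, boosted:  %d\n", demo_task_sched_prio(MIN_NORMAL_PRIO + 20, -2));
	printf("rt prio 8:        %d\n", demo_task_sched_prio(8, 0));
	return 0;
}

A lower index lands in an earlier bitmap slot, so the negative boost_prio that
boost_task() applies on wakeup nudges an interactive task ahead of its plain
nice level.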

diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 00000000..6dc48eec
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,13 @@
+--- a/init/Kconfig	2023-02-13 08:16:09.534315265 -0500
++++ b/init/Kconfig	2023-02-13 08:17:24.130237204 -0500
+@@ -867,8 +867,9 @@ config UCLAMP_BUCKETS_COUNT
+ 	  If in doubt, use the default value.
+ 
+ menuconfig SCHED_ALT
++	depends on X86_64
+ 	bool "Alternative CPU Schedulers"
+-	default y
++	default n
+ 	help
+ 	  This feature enables the alternative CPU schedulers.
+ 
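
A quick way to see whether a built kernel actually carries the alternative
scheduler: the kernel/sysctl.c hunk above registers the yield_type entry (mode
0644, clamped to 0..2 by proc_dointvec_minmax) only under CONFIG_SCHED_ALT, and
the 5021_BMQ-and-PDS-gentoo-defaults.patch above switches the Kconfig default
to n, so the knob is absent unless SCHED_ALT was enabled explicitly. A minimal
user-space sketch, assuming the usual procfs mount at /proc:

/* Illustrative-only; not part of either patch. */
#include <stdio.h>

int main(void)
{
	/* The file exists only when the kernel was built with CONFIG_SCHED_ALT. */
	FILE *f = fopen("/proc/sys/kernel/yield_type", "r");
	int val;

	if (!f) {
		puts("CONFIG_SCHED_ALT not enabled: no yield_type sysctl");
		return 1;
	}
	if (fscanf(f, "%d", &val) == 1)
		printf("BMQ/PDS yield_type = %d (valid range 0-2)\n", val);
	fclose(f);
	return 0;
}

Writing a value back (as root) goes through the same proc_dointvec_minmax
handler, so anything outside 0..2 is rejected.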


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-08-19 10:23 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-08-19 10:23 UTC (permalink / raw
  To: gentoo-commits

commit:     9ea6291b7fa333834b62299defaeaa1444daf91f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug 19 10:22:56 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Aug 19 10:22:56 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9ea6291b

Linux patch 6.10.6

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1005_linux-6.10.6.patch | 1626 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1630 insertions(+)

diff --git a/0000_README b/0000_README
index b1582141..75bbf8b4 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-6.10.5.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.5
 
+Patch:  1005_linux-6.10.6.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.6
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1005_linux-6.10.6.patch b/1005_linux-6.10.6.patch
new file mode 100644
index 00000000..f226d534
--- /dev/null
+++ b/1005_linux-6.10.6.patch
@@ -0,0 +1,1626 @@
+diff --git a/Makefile b/Makefile
+index f9badb79ae8f4e..361a70264e1fb0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/loongarch/include/uapi/asm/unistd.h b/arch/loongarch/include/uapi/asm/unistd.h
+index fcb668984f0336..b344b1f917153b 100644
+--- a/arch/loongarch/include/uapi/asm/unistd.h
++++ b/arch/loongarch/include/uapi/asm/unistd.h
+@@ -1,4 +1,5 @@
+ /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
++#define __ARCH_WANT_NEW_STAT
+ #define __ARCH_WANT_SYS_CLONE
+ #define __ARCH_WANT_SYS_CLONE3
+ 
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 076fbeadce0153..4e084760110396 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -941,8 +941,19 @@ static void ata_gen_passthru_sense(struct ata_queued_cmd *qc)
+ 				   &sense_key, &asc, &ascq);
+ 		ata_scsi_set_sense(qc->dev, cmd, sense_key, asc, ascq);
+ 	} else {
+-		/* ATA PASS-THROUGH INFORMATION AVAILABLE */
+-		ata_scsi_set_sense(qc->dev, cmd, RECOVERED_ERROR, 0, 0x1D);
++		/*
++		 * ATA PASS-THROUGH INFORMATION AVAILABLE
++		 *
++		 * Note: we are supposed to call ata_scsi_set_sense(), which
++		 * respects the D_SENSE bit, instead of unconditionally
++		 * generating the sense data in descriptor format. However,
++		 * because hdparm, hddtemp, and udisks incorrectly assume sense
++		 * data in descriptor format, without even looking at the
++		 * RESPONSE CODE field in the returned sense data (to see which
++		 * format the returned sense data is in), we are stuck with
++		 * being bug compatible with older kernels.
++		 */
++		scsi_build_sense(cmd, 1, RECOVERED_ERROR, 0, 0x1D);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 964bb6d0a38331..836bf9ba620d19 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2944,6 +2944,7 @@ static int dm_resume(void *handle)
+ 
+ 		commit_params.streams = dc_state->streams;
+ 		commit_params.stream_count = dc_state->stream_count;
++		dc_exit_ips_for_hw_access(dm->dc);
+ 		WARN_ON(!dc_commit_streams(dm->dc, &commit_params));
+ 
+ 		dm_gpureset_commit_state(dm->cached_dc_state, dm);
+@@ -3016,7 +3017,8 @@ static int dm_resume(void *handle)
+ 			emulated_link_detect(aconnector->dc_link);
+ 		} else {
+ 			mutex_lock(&dm->dc_lock);
+-			dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD);
++			dc_exit_ips_for_hw_access(dm->dc);
++			dc_link_detect(aconnector->dc_link, DETECT_REASON_RESUMEFROMS3S4);
+ 			mutex_unlock(&dm->dc_lock);
+ 		}
+ 
+@@ -3352,6 +3354,7 @@ static void handle_hpd_irq_helper(struct amdgpu_dm_connector *aconnector)
+ 	enum dc_connection_type new_connection_type = dc_connection_none;
+ 	struct amdgpu_device *adev = drm_to_adev(dev);
+ 	struct dm_connector_state *dm_con_state = to_dm_connector_state(connector->state);
++	struct dc *dc = aconnector->dc_link->ctx->dc;
+ 	bool ret = false;
+ 
+ 	if (adev->dm.disable_hpd_irq)
+@@ -3386,6 +3389,7 @@ static void handle_hpd_irq_helper(struct amdgpu_dm_connector *aconnector)
+ 			drm_kms_helper_connector_hotplug_event(connector);
+ 	} else {
+ 		mutex_lock(&adev->dm.dc_lock);
++		dc_exit_ips_for_hw_access(dc);
+ 		ret = dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD);
+ 		mutex_unlock(&adev->dm.dc_lock);
+ 		if (ret) {
+@@ -3445,6 +3449,7 @@ static void handle_hpd_rx_irq(void *param)
+ 	bool has_left_work = false;
+ 	int idx = dc_link->link_index;
+ 	struct hpd_rx_irq_offload_work_queue *offload_wq = &adev->dm.hpd_rx_offload_wq[idx];
++	struct dc *dc = aconnector->dc_link->ctx->dc;
+ 
+ 	memset(&hpd_irq_data, 0, sizeof(hpd_irq_data));
+ 
+@@ -3534,6 +3539,7 @@ static void handle_hpd_rx_irq(void *param)
+ 			bool ret = false;
+ 
+ 			mutex_lock(&adev->dm.dc_lock);
++			dc_exit_ips_for_hw_access(dc);
+ 			ret = dc_link_detect(dc_link, DETECT_REASON_HPDRX);
+ 			mutex_unlock(&adev->dm.dc_lock);
+ 
+@@ -4640,6 +4646,7 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 			bool ret = false;
+ 
+ 			mutex_lock(&dm->dc_lock);
++			dc_exit_ips_for_hw_access(dm->dc);
+ 			ret = dc_link_detect(link, DETECT_REASON_BOOT);
+ 			mutex_unlock(&dm->dc_lock);
+ 
+@@ -8948,7 +8955,8 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ 
+ 			memset(&position, 0, sizeof(position));
+ 			mutex_lock(&dm->dc_lock);
+-			dc_stream_set_cursor_position(dm_old_crtc_state->stream, &position);
++			dc_exit_ips_for_hw_access(dm->dc);
++			dc_stream_program_cursor_position(dm_old_crtc_state->stream, &position);
+ 			mutex_unlock(&dm->dc_lock);
+ 		}
+ 
+@@ -9017,6 +9025,7 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ 
+ 	dm_enable_per_frame_crtc_master_sync(dc_state);
+ 	mutex_lock(&dm->dc_lock);
++	dc_exit_ips_for_hw_access(dm->dc);
+ 	WARN_ON(!dc_commit_streams(dm->dc, &params));
+ 
+ 	/* Allow idle optimization when vblank count is 0 for display off */
+@@ -9382,6 +9391,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ 
+ 
+ 		mutex_lock(&dm->dc_lock);
++		dc_exit_ips_for_hw_access(dm->dc);
+ 		dc_update_planes_and_stream(dm->dc,
+ 					    dummy_updates,
+ 					    status->plane_count,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index bb4e5ab7edc6e4..b50010ed763327 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1594,171 +1594,109 @@ static bool is_dsc_common_config_possible(struct dc_stream_state *stream,
+ 	return bw_range->max_target_bpp_x16 && bw_range->min_target_bpp_x16;
+ }
+ 
+-#if defined(CONFIG_DRM_AMD_DC_FP)
+-static bool dp_get_link_current_set_bw(struct drm_dp_aux *aux, uint32_t *cur_link_bw)
+-{
+-	uint32_t total_data_bw_efficiency_x10000 = 0;
+-	uint32_t link_rate_per_lane_kbps = 0;
+-	enum dc_link_rate link_rate;
+-	union lane_count_set lane_count;
+-	u8 dp_link_encoding;
+-	u8 link_bw_set = 0;
+-
+-	*cur_link_bw = 0;
+-
+-	if (drm_dp_dpcd_read(aux, DP_MAIN_LINK_CHANNEL_CODING_SET, &dp_link_encoding, 1) != 1 ||
+-		drm_dp_dpcd_read(aux, DP_LANE_COUNT_SET, &lane_count.raw, 1) != 1 ||
+-		drm_dp_dpcd_read(aux, DP_LINK_BW_SET, &link_bw_set, 1) != 1)
+-		return false;
+-
+-	switch (dp_link_encoding) {
+-	case DP_8b_10b_ENCODING:
+-		link_rate = link_bw_set;
+-		link_rate_per_lane_kbps = link_rate * LINK_RATE_REF_FREQ_IN_KHZ * BITS_PER_DP_BYTE;
+-		total_data_bw_efficiency_x10000 = DATA_EFFICIENCY_8b_10b_x10000;
+-		total_data_bw_efficiency_x10000 /= 100;
+-		total_data_bw_efficiency_x10000 *= DATA_EFFICIENCY_8b_10b_FEC_EFFICIENCY_x100;
+-		break;
+-	case DP_128b_132b_ENCODING:
+-		switch (link_bw_set) {
+-		case DP_LINK_BW_10:
+-			link_rate = LINK_RATE_UHBR10;
+-			break;
+-		case DP_LINK_BW_13_5:
+-			link_rate = LINK_RATE_UHBR13_5;
+-			break;
+-		case DP_LINK_BW_20:
+-			link_rate = LINK_RATE_UHBR20;
+-			break;
+-		default:
+-			return false;
+-		}
+-
+-		link_rate_per_lane_kbps = link_rate * 10000;
+-		total_data_bw_efficiency_x10000 = DATA_EFFICIENCY_128b_132b_x10000;
+-		break;
+-	default:
+-		return false;
+-	}
+-
+-	*cur_link_bw = link_rate_per_lane_kbps * lane_count.bits.LANE_COUNT_SET / 10000 * total_data_bw_efficiency_x10000;
+-	return true;
+-}
+-#endif
+-
+ enum dc_status dm_dp_mst_is_port_support_mode(
+ 	struct amdgpu_dm_connector *aconnector,
+ 	struct dc_stream_state *stream)
+ {
+-#if defined(CONFIG_DRM_AMD_DC_FP)
+-	int branch_max_throughput_mps = 0;
++	int pbn, branch_max_throughput_mps = 0;
+ 	struct dc_link_settings cur_link_settings;
+-	uint32_t end_to_end_bw_in_kbps = 0;
+-	uint32_t root_link_bw_in_kbps = 0;
+-	uint32_t virtual_channel_bw_in_kbps = 0;
++	unsigned int end_to_end_bw_in_kbps = 0;
++	unsigned int upper_link_bw_in_kbps = 0, down_link_bw_in_kbps = 0;
+ 	struct dc_dsc_bw_range bw_range = {0};
+ 	struct dc_dsc_config_options dsc_options = {0};
+-	uint32_t stream_kbps;
+ 
+-	/* DSC unnecessary case
+-	 * Check if timing could be supported within end-to-end BW
++	/*
++	 * Consider the case with the depth of the mst topology tree is equal or less than 2
++	 * A. When dsc bitstream can be transmitted along the entire path
++	 *    1. dsc is possible between source and branch/leaf device (common dsc params is possible), AND
++	 *    2. dsc passthrough supported at MST branch, or
++	 *    3. dsc decoding supported at leaf MST device
++	 *    Use maximum dsc compression as bw constraint
++	 * B. When dsc bitstream cannot be transmitted along the entire path
++	 *    Use native bw as bw constraint
+ 	 */
+-	stream_kbps =
+-		dc_bandwidth_in_kbps_from_timing(&stream->timing,
+-			dc_link_get_highest_encoding_format(stream->link));
+-	cur_link_settings = stream->link->verified_link_cap;
+-	root_link_bw_in_kbps = dc_link_bandwidth_kbps(aconnector->dc_link, &cur_link_settings);
+-	virtual_channel_bw_in_kbps = kbps_from_pbn(aconnector->mst_output_port->full_pbn);
+-
+-	/* pick the end to end bw bottleneck */
+-	end_to_end_bw_in_kbps = min(root_link_bw_in_kbps, virtual_channel_bw_in_kbps);
+-
+-	if (stream_kbps <= end_to_end_bw_in_kbps) {
+-		DRM_DEBUG_DRIVER("No DSC needed. End-to-end bw sufficient.");
+-		return DC_OK;
+-	}
+-
+-	/*DSC necessary case*/
+-	if (!aconnector->dsc_aux)
+-		return DC_FAIL_BANDWIDTH_VALIDATE;
+-
+-	if (is_dsc_common_config_possible(stream, &bw_range)) {
+-
+-		/*capable of dsc passthough. dsc bitstream along the entire path*/
+-		if (aconnector->mst_output_port->passthrough_aux) {
+-			if (bw_range.min_kbps > end_to_end_bw_in_kbps) {
+-				DRM_DEBUG_DRIVER("DSC passthrough. Max dsc compression can't fit into end-to-end bw\n");
++	if (is_dsc_common_config_possible(stream, &bw_range) &&
++	   (aconnector->mst_output_port->passthrough_aux ||
++	    aconnector->dsc_aux == &aconnector->mst_output_port->aux)) {
++		cur_link_settings = stream->link->verified_link_cap;
++		upper_link_bw_in_kbps = dc_link_bandwidth_kbps(aconnector->dc_link, &cur_link_settings);
++		down_link_bw_in_kbps = kbps_from_pbn(aconnector->mst_output_port->full_pbn);
++
++		/* pick the end to end bw bottleneck */
++		end_to_end_bw_in_kbps = min(upper_link_bw_in_kbps, down_link_bw_in_kbps);
++
++		if (end_to_end_bw_in_kbps < bw_range.min_kbps) {
++			DRM_DEBUG_DRIVER("maximum dsc compression cannot fit into end-to-end bandwidth\n");
+ 			return DC_FAIL_BANDWIDTH_VALIDATE;
+-			}
+-		} else {
+-			/*dsc bitstream decoded at the dp last link*/
+-			struct drm_dp_mst_port *immediate_upstream_port = NULL;
+-			uint32_t end_link_bw = 0;
+-
+-			/*Get last DP link BW capability*/
+-			if (dp_get_link_current_set_bw(&aconnector->mst_output_port->aux, &end_link_bw)) {
+-				if (stream_kbps > end_link_bw) {
+-					DRM_DEBUG_DRIVER("DSC decode at last link. Mode required bw can't fit into available bw\n");
+-					return DC_FAIL_BANDWIDTH_VALIDATE;
+-				}
+-			}
+-
+-			/*Get virtual channel bandwidth between source and the link before the last link*/
+-			if (aconnector->mst_output_port->parent->port_parent)
+-				immediate_upstream_port = aconnector->mst_output_port->parent->port_parent;
++		}
+ 
+-			if (immediate_upstream_port) {
+-				virtual_channel_bw_in_kbps = kbps_from_pbn(immediate_upstream_port->full_pbn);
+-				virtual_channel_bw_in_kbps = min(root_link_bw_in_kbps, virtual_channel_bw_in_kbps);
+-				if (bw_range.min_kbps > virtual_channel_bw_in_kbps) {
+-					DRM_DEBUG_DRIVER("DSC decode at last link. Max dsc compression can't fit into MST available bw\n");
+-					return DC_FAIL_BANDWIDTH_VALIDATE;
+-				}
++		if (end_to_end_bw_in_kbps < bw_range.stream_kbps) {
++			dc_dsc_get_default_config_option(stream->link->dc, &dsc_options);
++			dsc_options.max_target_bpp_limit_override_x16 = aconnector->base.display_info.max_dsc_bpp * 16;
++			if (dc_dsc_compute_config(stream->sink->ctx->dc->res_pool->dscs[0],
++					&stream->sink->dsc_caps.dsc_dec_caps,
++					&dsc_options,
++					end_to_end_bw_in_kbps,
++					&stream->timing,
++					dc_link_get_highest_encoding_format(stream->link),
++					&stream->timing.dsc_cfg)) {
++				stream->timing.flags.DSC = 1;
++				DRM_DEBUG_DRIVER("end-to-end bandwidth require dsc and dsc config found\n");
++			} else {
++				DRM_DEBUG_DRIVER("end-to-end bandwidth require dsc but dsc config not found\n");
++				return DC_FAIL_BANDWIDTH_VALIDATE;
+ 			}
+ 		}
+-
+-		/*Confirm if we can obtain dsc config*/
+-		dc_dsc_get_default_config_option(stream->link->dc, &dsc_options);
+-		dsc_options.max_target_bpp_limit_override_x16 = aconnector->base.display_info.max_dsc_bpp * 16;
+-		if (dc_dsc_compute_config(stream->sink->ctx->dc->res_pool->dscs[0],
+-				&stream->sink->dsc_caps.dsc_dec_caps,
+-				&dsc_options,
+-				end_to_end_bw_in_kbps,
+-				&stream->timing,
+-				dc_link_get_highest_encoding_format(stream->link),
+-				&stream->timing.dsc_cfg)) {
+-			stream->timing.flags.DSC = 1;
+-			DRM_DEBUG_DRIVER("Require dsc and dsc config found\n");
+-		} else {
+-			DRM_DEBUG_DRIVER("Require dsc but can't find appropriate dsc config\n");
++	} else {
++		/* Check if mode could be supported within max slot
++		 * number of current mst link and full_pbn of mst links.
++		 */
++		int pbn_div, slot_num, max_slot_num;
++		enum dc_link_encoding_format link_encoding;
++		uint32_t stream_kbps =
++			dc_bandwidth_in_kbps_from_timing(&stream->timing,
++				dc_link_get_highest_encoding_format(stream->link));
++
++		pbn = kbps_to_peak_pbn(stream_kbps);
++		pbn_div = dm_mst_get_pbn_divider(stream->link);
++		slot_num = DIV_ROUND_UP(pbn, pbn_div);
++
++		link_encoding = dc_link_get_highest_encoding_format(stream->link);
++		if (link_encoding == DC_LINK_ENCODING_DP_8b_10b)
++			max_slot_num = 63;
++		else if (link_encoding == DC_LINK_ENCODING_DP_128b_132b)
++			max_slot_num = 64;
++		else {
++			DRM_DEBUG_DRIVER("Invalid link encoding format\n");
+ 			return DC_FAIL_BANDWIDTH_VALIDATE;
+ 		}
+ 
+-		/* check is mst dsc output bandwidth branch_overall_throughput_0_mps */
+-		switch (stream->timing.pixel_encoding) {
+-		case PIXEL_ENCODING_RGB:
+-		case PIXEL_ENCODING_YCBCR444:
+-			branch_max_throughput_mps =
+-				aconnector->dc_sink->dsc_caps.dsc_dec_caps.branch_overall_throughput_0_mps;
+-			break;
+-		case PIXEL_ENCODING_YCBCR422:
+-		case PIXEL_ENCODING_YCBCR420:
+-			branch_max_throughput_mps =
+-				aconnector->dc_sink->dsc_caps.dsc_dec_caps.branch_overall_throughput_1_mps;
+-			break;
+-		default:
+-			break;
++		if (slot_num > max_slot_num ||
++			pbn > aconnector->mst_output_port->full_pbn) {
++			DRM_DEBUG_DRIVER("Mode can not be supported within mst links!");
++			return DC_FAIL_BANDWIDTH_VALIDATE;
+ 		}
++	}
+ 
+-		if (branch_max_throughput_mps != 0 &&
+-			((stream->timing.pix_clk_100hz / 10) >  branch_max_throughput_mps * 1000)) {
+-			DRM_DEBUG_DRIVER("DSC is required but max throughput mps fails");
+-		return DC_FAIL_BANDWIDTH_VALIDATE;
+-		}
+-	} else {
+-		DRM_DEBUG_DRIVER("DSC is required but can't find common dsc config.");
+-		return DC_FAIL_BANDWIDTH_VALIDATE;
++	/* check is mst dsc output bandwidth branch_overall_throughput_0_mps */
++	switch (stream->timing.pixel_encoding) {
++	case PIXEL_ENCODING_RGB:
++	case PIXEL_ENCODING_YCBCR444:
++		branch_max_throughput_mps =
++			aconnector->dc_sink->dsc_caps.dsc_dec_caps.branch_overall_throughput_0_mps;
++		break;
++	case PIXEL_ENCODING_YCBCR422:
++	case PIXEL_ENCODING_YCBCR420:
++		branch_max_throughput_mps =
++			aconnector->dc_sink->dsc_caps.dsc_dec_caps.branch_overall_throughput_1_mps;
++		break;
++	default:
++		break;
+ 	}
+-#endif
++
++	if (branch_max_throughput_mps != 0 &&
++		((stream->timing.pix_clk_100hz / 10) >  branch_max_throughput_mps * 1000))
++		return DC_FAIL_BANDWIDTH_VALIDATE;
++
+ 	return DC_OK;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+index 8a4c40b4c27e4f..311c62d2d1ebbf 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+@@ -1254,7 +1254,7 @@ void amdgpu_dm_plane_handle_cursor_update(struct drm_plane *plane,
+ 		/* turn off cursor */
+ 		if (crtc_state && crtc_state->stream) {
+ 			mutex_lock(&adev->dm.dc_lock);
+-			dc_stream_set_cursor_position(crtc_state->stream,
++			dc_stream_program_cursor_position(crtc_state->stream,
+ 						      &position);
+ 			mutex_unlock(&adev->dm.dc_lock);
+ 		}
+@@ -1284,11 +1284,11 @@ void amdgpu_dm_plane_handle_cursor_update(struct drm_plane *plane,
+ 
+ 	if (crtc_state->stream) {
+ 		mutex_lock(&adev->dm.dc_lock);
+-		if (!dc_stream_set_cursor_attributes(crtc_state->stream,
++		if (!dc_stream_program_cursor_attributes(crtc_state->stream,
+ 							 &attributes))
+ 			DRM_ERROR("DC failed to set cursor attributes\n");
+ 
+-		if (!dc_stream_set_cursor_position(crtc_state->stream,
++		if (!dc_stream_program_cursor_position(crtc_state->stream,
+ 						   &position))
+ 			DRM_ERROR("DC failed to set cursor position\n");
+ 		mutex_unlock(&adev->dm.dc_lock);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+index 5c7e4884cac2c5..53bc991b6e6737 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+@@ -266,7 +266,6 @@ bool dc_stream_set_cursor_attributes(
+ 	const struct dc_cursor_attributes *attributes)
+ {
+ 	struct dc  *dc;
+-	bool reset_idle_optimizations = false;
+ 
+ 	if (NULL == stream) {
+ 		dm_error("DC: dc_stream is NULL!\n");
+@@ -297,20 +296,36 @@ bool dc_stream_set_cursor_attributes(
+ 
+ 	stream->cursor_attributes = *attributes;
+ 
+-	dc_z10_restore(dc);
+-	/* disable idle optimizations while updating cursor */
+-	if (dc->idle_optimizations_allowed) {
+-		dc_allow_idle_optimizations(dc, false);
+-		reset_idle_optimizations = true;
+-	}
++	return true;
++}
+ 
+-	program_cursor_attributes(dc, stream, attributes);
++bool dc_stream_program_cursor_attributes(
++	struct dc_stream_state *stream,
++	const struct dc_cursor_attributes *attributes)
++{
++	struct dc  *dc;
++	bool reset_idle_optimizations = false;
+ 
+-	/* re-enable idle optimizations if necessary */
+-	if (reset_idle_optimizations && !dc->debug.disable_dmub_reallow_idle)
+-		dc_allow_idle_optimizations(dc, true);
++	dc = stream ? stream->ctx->dc : NULL;
+ 
+-	return true;
++	if (dc_stream_set_cursor_attributes(stream, attributes)) {
++		dc_z10_restore(dc);
++		/* disable idle optimizations while updating cursor */
++		if (dc->idle_optimizations_allowed) {
++			dc_allow_idle_optimizations(dc, false);
++			reset_idle_optimizations = true;
++		}
++
++		program_cursor_attributes(dc, stream, attributes);
++
++		/* re-enable idle optimizations if necessary */
++		if (reset_idle_optimizations && !dc->debug.disable_dmub_reallow_idle)
++			dc_allow_idle_optimizations(dc, true);
++
++		return true;
++	}
++
++	return false;
+ }
+ 
+ static void program_cursor_position(
+@@ -355,9 +370,6 @@ bool dc_stream_set_cursor_position(
+ 	struct dc_stream_state *stream,
+ 	const struct dc_cursor_position *position)
+ {
+-	struct dc *dc;
+-	bool reset_idle_optimizations = false;
+-
+ 	if (NULL == stream) {
+ 		dm_error("DC: dc_stream is NULL!\n");
+ 		return false;
+@@ -368,24 +380,46 @@ bool dc_stream_set_cursor_position(
+ 		return false;
+ 	}
+ 
++	stream->cursor_position = *position;
++
++
++	return true;
++}
++
++bool dc_stream_program_cursor_position(
++	struct dc_stream_state *stream,
++	const struct dc_cursor_position *position)
++{
++	struct dc *dc;
++	bool reset_idle_optimizations = false;
++	const struct dc_cursor_position *old_position;
++
++	if (!stream)
++		return false;
++
++	old_position = &stream->cursor_position;
+ 	dc = stream->ctx->dc;
+-	dc_z10_restore(dc);
+ 
+-	/* disable idle optimizations if enabling cursor */
+-	if (dc->idle_optimizations_allowed && (!stream->cursor_position.enable || dc->debug.exit_idle_opt_for_cursor_updates)
+-			&& position->enable) {
+-		dc_allow_idle_optimizations(dc, false);
+-		reset_idle_optimizations = true;
+-	}
++	if (dc_stream_set_cursor_position(stream, position)) {
++		dc_z10_restore(dc);
+ 
+-	stream->cursor_position = *position;
++		/* disable idle optimizations if enabling cursor */
++		if (dc->idle_optimizations_allowed &&
++		    (!old_position->enable || dc->debug.exit_idle_opt_for_cursor_updates) &&
++		    position->enable) {
++			dc_allow_idle_optimizations(dc, false);
++			reset_idle_optimizations = true;
++		}
+ 
+-	program_cursor_position(dc, stream, position);
+-	/* re-enable idle optimizations if necessary */
+-	if (reset_idle_optimizations && !dc->debug.disable_dmub_reallow_idle)
+-		dc_allow_idle_optimizations(dc, true);
++		program_cursor_position(dc, stream, position);
++		/* re-enable idle optimizations if necessary */
++		if (reset_idle_optimizations && !dc->debug.disable_dmub_reallow_idle)
++			dc_allow_idle_optimizations(dc, true);
+ 
+-	return true;
++		return true;
++	}
++
++	return false;
+ }
+ 
+ bool dc_stream_add_writeback(struct dc *dc,
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+index e5dbbc6089a5e4..1039dfb0b071a7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+@@ -470,10 +470,18 @@ bool dc_stream_set_cursor_attributes(
+ 	struct dc_stream_state *stream,
+ 	const struct dc_cursor_attributes *attributes);
+ 
++bool dc_stream_program_cursor_attributes(
++	struct dc_stream_state *stream,
++	const struct dc_cursor_attributes *attributes);
++
+ bool dc_stream_set_cursor_position(
+ 	struct dc_stream_state *stream,
+ 	const struct dc_cursor_position *position);
+ 
++bool dc_stream_program_cursor_position(
++	struct dc_stream_state *stream,
++	const struct dc_cursor_position *position);
++
+ 
+ bool dc_stream_adjust_vmin_vmax(struct dc *dc,
+ 				struct dc_stream_state *stream,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+index 5b09d95cc5b8fc..4c470615330509 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+@@ -1041,7 +1041,7 @@ bool dcn30_apply_idle_power_optimizations(struct dc *dc, bool enable)
+ 
+ 					/* Use copied cursor, and it's okay to not switch back */
+ 					cursor_attr.address.quad_part = cmd.mall.cursor_copy_dst.quad_part;
+-					dc_stream_set_cursor_attributes(stream, &cursor_attr);
++					dc_stream_program_cursor_attributes(stream, &cursor_attr);
+ 				}
+ 
+ 				/* Enable MALL */
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+index 22d83ac18eb735..fbf58012becdf2 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-init.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+@@ -23,40 +23,11 @@ static int dvb_usb_force_pid_filter_usage;
+ module_param_named(force_pid_filter_usage, dvb_usb_force_pid_filter_usage, int, 0444);
+ MODULE_PARM_DESC(force_pid_filter_usage, "force all dvb-usb-devices to use a PID filter, if any (default: 0).");
+ 
+-static int dvb_usb_check_bulk_endpoint(struct dvb_usb_device *d, u8 endpoint)
+-{
+-	if (endpoint) {
+-		int ret;
+-
+-		ret = usb_pipe_type_check(d->udev, usb_sndbulkpipe(d->udev, endpoint));
+-		if (ret)
+-			return ret;
+-		ret = usb_pipe_type_check(d->udev, usb_rcvbulkpipe(d->udev, endpoint));
+-		if (ret)
+-			return ret;
+-	}
+-	return 0;
+-}
+-
+-static void dvb_usb_clear_halt(struct dvb_usb_device *d, u8 endpoint)
+-{
+-	if (endpoint) {
+-		usb_clear_halt(d->udev, usb_sndbulkpipe(d->udev, endpoint));
+-		usb_clear_halt(d->udev, usb_rcvbulkpipe(d->udev, endpoint));
+-	}
+-}
+-
+ static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs)
+ {
+ 	struct dvb_usb_adapter *adap;
+ 	int ret, n, o;
+ 
+-	ret = dvb_usb_check_bulk_endpoint(d, d->props.generic_bulk_ctrl_endpoint);
+-	if (ret)
+-		return ret;
+-	ret = dvb_usb_check_bulk_endpoint(d, d->props.generic_bulk_ctrl_endpoint_response);
+-	if (ret)
+-		return ret;
+ 	for (n = 0; n < d->props.num_adapters; n++) {
+ 		adap = &d->adapter[n];
+ 		adap->dev = d;
+@@ -132,8 +103,10 @@ static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs)
+ 	 * when reloading the driver w/o replugging the device
+ 	 * sometimes a timeout occurs, this helps
+ 	 */
+-	dvb_usb_clear_halt(d, d->props.generic_bulk_ctrl_endpoint);
+-	dvb_usb_clear_halt(d, d->props.generic_bulk_ctrl_endpoint_response);
++	if (d->props.generic_bulk_ctrl_endpoint != 0) {
++		usb_clear_halt(d->udev, usb_sndbulkpipe(d->udev, d->props.generic_bulk_ctrl_endpoint));
++		usb_clear_halt(d->udev, usb_rcvbulkpipe(d->udev, d->props.generic_bulk_ctrl_endpoint));
++	}
+ 
+ 	return 0;
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 7168ff4cc62bba..a823330567ff8b 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2933,6 +2933,13 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev)
+ 			return NVME_QUIRK_FORCE_NO_SIMPLE_SUSPEND;
+ 	}
+ 
++	/*
++	 * NVMe SSD drops off the PCIe bus after system idle
++	 * for 10 hours on a Lenovo N60z board.
++	 */
++	if (dmi_match(DMI_BOARD_NAME, "LXKT-ZXEG-N6"))
++		return NVME_QUIRK_NO_APST;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig
+index 665fa952498659..ddfccc226751f4 100644
+--- a/drivers/platform/x86/Kconfig
++++ b/drivers/platform/x86/Kconfig
+@@ -477,6 +477,7 @@ config LENOVO_YMC
+ 	tristate "Lenovo Yoga Tablet Mode Control"
+ 	depends on ACPI_WMI
+ 	depends on INPUT
++	depends on IDEAPAD_LAPTOP
+ 	select INPUT_SPARSEKMAP
+ 	help
+ 	  This driver maps the Tablet Mode Control switch to SW_TABLET_MODE input
+diff --git a/drivers/platform/x86/amd/pmf/spc.c b/drivers/platform/x86/amd/pmf/spc.c
+index a3dec14c30043e..3c153fb1425e9f 100644
+--- a/drivers/platform/x86/amd/pmf/spc.c
++++ b/drivers/platform/x86/amd/pmf/spc.c
+@@ -150,36 +150,26 @@ static int amd_pmf_get_slider_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_
+ 	return 0;
+ }
+ 
+-static int amd_pmf_get_sensor_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
++static void amd_pmf_get_sensor_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
+ {
+ 	struct amd_sfh_info sfh_info;
+-	int ret;
++
++	/* Get the latest information from SFH */
++	in->ev_info.user_present = false;
+ 
+ 	/* Get ALS data */
+-	ret = amd_get_sfh_info(&sfh_info, MT_ALS);
+-	if (!ret)
++	if (!amd_get_sfh_info(&sfh_info, MT_ALS))
+ 		in->ev_info.ambient_light = sfh_info.ambient_light;
+ 	else
+-		return ret;
++		dev_dbg(dev->dev, "ALS is not enabled/detected\n");
+ 
+ 	/* get HPD data */
+-	ret = amd_get_sfh_info(&sfh_info, MT_HPD);
+-	if (ret)
+-		return ret;
+-
+-	switch (sfh_info.user_present) {
+-	case SFH_NOT_DETECTED:
+-		in->ev_info.user_present = 0xff; /* assume no sensors connected */
+-		break;
+-	case SFH_USER_PRESENT:
+-		in->ev_info.user_present = 1;
+-		break;
+-	case SFH_USER_AWAY:
+-		in->ev_info.user_present = 0;
+-		break;
++	if (!amd_get_sfh_info(&sfh_info, MT_HPD)) {
++		if (sfh_info.user_present == SFH_USER_PRESENT)
++			in->ev_info.user_present = true;
++	} else {
++		dev_dbg(dev->dev, "HPD is not enabled/detected\n");
+ 	}
+-
+-	return 0;
+ }
+ 
+ void amd_pmf_populate_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index fcf13d88fd6ed4..490815917adec8 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -125,6 +125,7 @@ struct ideapad_rfk_priv {
+ 
+ struct ideapad_private {
+ 	struct acpi_device *adev;
++	struct mutex vpc_mutex; /* protects the VPC calls */
+ 	struct rfkill *rfk[IDEAPAD_RFKILL_DEV_NUM];
+ 	struct ideapad_rfk_priv rfk_priv[IDEAPAD_RFKILL_DEV_NUM];
+ 	struct platform_device *platform_device;
+@@ -145,6 +146,7 @@ struct ideapad_private {
+ 		bool touchpad_ctrl_via_ec : 1;
+ 		bool ctrl_ps2_aux_port    : 1;
+ 		bool usb_charging         : 1;
++		bool ymc_ec_trigger       : 1;
+ 	} features;
+ 	struct {
+ 		bool initialized;
+@@ -193,6 +195,12 @@ MODULE_PARM_DESC(touchpad_ctrl_via_ec,
+ 	"Enable registering a 'touchpad' sysfs-attribute which can be used to manually "
+ 	"tell the EC to enable/disable the touchpad. This may not work on all models.");
+ 
++static bool ymc_ec_trigger __read_mostly;
++module_param(ymc_ec_trigger, bool, 0444);
++MODULE_PARM_DESC(ymc_ec_trigger,
++	"Enable EC triggering work-around to force emitting tablet mode events. "
++	"If you need this please report this to: platform-driver-x86@vger.kernel.org");
++
+ /*
+  * shared data
+  */
+@@ -297,6 +305,8 @@ static int debugfs_status_show(struct seq_file *s, void *data)
+ 	struct ideapad_private *priv = s->private;
+ 	unsigned long value;
+ 
++	guard(mutex)(&priv->vpc_mutex);
++
+ 	if (!read_ec_data(priv->adev->handle, VPCCMD_R_BL_MAX, &value))
+ 		seq_printf(s, "Backlight max:  %lu\n", value);
+ 	if (!read_ec_data(priv->adev->handle, VPCCMD_R_BL, &value))
+@@ -415,7 +425,8 @@ static ssize_t camera_power_show(struct device *dev,
+ 	unsigned long result;
+ 	int err;
+ 
+-	err = read_ec_data(priv->adev->handle, VPCCMD_R_CAMERA, &result);
++	scoped_guard(mutex, &priv->vpc_mutex)
++		err = read_ec_data(priv->adev->handle, VPCCMD_R_CAMERA, &result);
+ 	if (err)
+ 		return err;
+ 
+@@ -434,7 +445,8 @@ static ssize_t camera_power_store(struct device *dev,
+ 	if (err)
+ 		return err;
+ 
+-	err = write_ec_cmd(priv->adev->handle, VPCCMD_W_CAMERA, state);
++	scoped_guard(mutex, &priv->vpc_mutex)
++		err = write_ec_cmd(priv->adev->handle, VPCCMD_W_CAMERA, state);
+ 	if (err)
+ 		return err;
+ 
+@@ -487,7 +499,8 @@ static ssize_t fan_mode_show(struct device *dev,
+ 	unsigned long result;
+ 	int err;
+ 
+-	err = read_ec_data(priv->adev->handle, VPCCMD_R_FAN, &result);
++	scoped_guard(mutex, &priv->vpc_mutex)
++		err = read_ec_data(priv->adev->handle, VPCCMD_R_FAN, &result);
+ 	if (err)
+ 		return err;
+ 
+@@ -509,7 +522,8 @@ static ssize_t fan_mode_store(struct device *dev,
+ 	if (state > 4 || state == 3)
+ 		return -EINVAL;
+ 
+-	err = write_ec_cmd(priv->adev->handle, VPCCMD_W_FAN, state);
++	scoped_guard(mutex, &priv->vpc_mutex)
++		err = write_ec_cmd(priv->adev->handle, VPCCMD_W_FAN, state);
+ 	if (err)
+ 		return err;
+ 
+@@ -594,7 +608,8 @@ static ssize_t touchpad_show(struct device *dev,
+ 	unsigned long result;
+ 	int err;
+ 
+-	err = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &result);
++	scoped_guard(mutex, &priv->vpc_mutex)
++		err = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &result);
+ 	if (err)
+ 		return err;
+ 
+@@ -615,7 +630,8 @@ static ssize_t touchpad_store(struct device *dev,
+ 	if (err)
+ 		return err;
+ 
+-	err = write_ec_cmd(priv->adev->handle, VPCCMD_W_TOUCHPAD, state);
++	scoped_guard(mutex, &priv->vpc_mutex)
++		err = write_ec_cmd(priv->adev->handle, VPCCMD_W_TOUCHPAD, state);
+ 	if (err)
+ 		return err;
+ 
+@@ -1012,6 +1028,8 @@ static int ideapad_rfk_set(void *data, bool blocked)
+ 	struct ideapad_rfk_priv *priv = data;
+ 	int opcode = ideapad_rfk_data[priv->dev].opcode;
+ 
++	guard(mutex)(&priv->priv->vpc_mutex);
++
+ 	return write_ec_cmd(priv->priv->adev->handle, opcode, !blocked);
+ }
+ 
+@@ -1025,6 +1043,8 @@ static void ideapad_sync_rfk_state(struct ideapad_private *priv)
+ 	int i;
+ 
+ 	if (priv->features.hw_rfkill_switch) {
++		guard(mutex)(&priv->vpc_mutex);
++
+ 		if (read_ec_data(priv->adev->handle, VPCCMD_R_RF, &hw_blocked))
+ 			return;
+ 		hw_blocked = !hw_blocked;
+@@ -1198,8 +1218,9 @@ static void ideapad_input_novokey(struct ideapad_private *priv)
+ {
+ 	unsigned long long_pressed;
+ 
+-	if (read_ec_data(priv->adev->handle, VPCCMD_R_NOVO, &long_pressed))
+-		return;
++	scoped_guard(mutex, &priv->vpc_mutex)
++		if (read_ec_data(priv->adev->handle, VPCCMD_R_NOVO, &long_pressed))
++			return;
+ 
+ 	if (long_pressed)
+ 		ideapad_input_report(priv, 17);
+@@ -1211,8 +1232,9 @@ static void ideapad_check_special_buttons(struct ideapad_private *priv)
+ {
+ 	unsigned long bit, value;
+ 
+-	if (read_ec_data(priv->adev->handle, VPCCMD_R_SPECIAL_BUTTONS, &value))
+-		return;
++	scoped_guard(mutex, &priv->vpc_mutex)
++		if (read_ec_data(priv->adev->handle, VPCCMD_R_SPECIAL_BUTTONS, &value))
++			return;
+ 
+ 	for_each_set_bit (bit, &value, 16) {
+ 		switch (bit) {
+@@ -1245,6 +1267,8 @@ static int ideapad_backlight_get_brightness(struct backlight_device *blightdev)
+ 	unsigned long now;
+ 	int err;
+ 
++	guard(mutex)(&priv->vpc_mutex);
++
+ 	err = read_ec_data(priv->adev->handle, VPCCMD_R_BL, &now);
+ 	if (err)
+ 		return err;
+@@ -1257,6 +1281,8 @@ static int ideapad_backlight_update_status(struct backlight_device *blightdev)
+ 	struct ideapad_private *priv = bl_get_data(blightdev);
+ 	int err;
+ 
++	guard(mutex)(&priv->vpc_mutex);
++
+ 	err = write_ec_cmd(priv->adev->handle, VPCCMD_W_BL,
+ 			   blightdev->props.brightness);
+ 	if (err)
+@@ -1334,6 +1360,8 @@ static void ideapad_backlight_notify_power(struct ideapad_private *priv)
+ 	if (!blightdev)
+ 		return;
+ 
++	guard(mutex)(&priv->vpc_mutex);
++
+ 	if (read_ec_data(priv->adev->handle, VPCCMD_R_BL_POWER, &power))
+ 		return;
+ 
+@@ -1346,7 +1374,8 @@ static void ideapad_backlight_notify_brightness(struct ideapad_private *priv)
+ 
+ 	/* if we control brightness via acpi video driver */
+ 	if (!priv->blightdev)
+-		read_ec_data(priv->adev->handle, VPCCMD_R_BL, &now);
++		scoped_guard(mutex, &priv->vpc_mutex)
++			read_ec_data(priv->adev->handle, VPCCMD_R_BL, &now);
+ 	else
+ 		backlight_force_update(priv->blightdev, BACKLIGHT_UPDATE_HOTKEY);
+ }
+@@ -1571,7 +1600,8 @@ static void ideapad_sync_touchpad_state(struct ideapad_private *priv, bool send_
+ 	int ret;
+ 
+ 	/* Without reading from EC touchpad LED doesn't switch state */
+-	ret = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &value);
++	scoped_guard(mutex, &priv->vpc_mutex)
++		ret = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &value);
+ 	if (ret)
+ 		return;
+ 
+@@ -1599,16 +1629,92 @@ static void ideapad_sync_touchpad_state(struct ideapad_private *priv, bool send_
+ 	priv->r_touchpad_val = value;
+ }
+ 
++static const struct dmi_system_id ymc_ec_trigger_quirk_dmi_table[] = {
++	{
++		/* Lenovo Yoga 7 14ARB7 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "82QF"),
++		},
++	},
++	{
++		/* Lenovo Yoga 7 14ACN6 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "82N7"),
++		},
++	},
++	{ }
++};
++
++static void ideapad_laptop_trigger_ec(void)
++{
++	struct ideapad_private *priv;
++	int ret;
++
++	guard(mutex)(&ideapad_shared_mutex);
++
++	priv = ideapad_shared;
++	if (!priv)
++		return;
++
++	if (!priv->features.ymc_ec_trigger)
++		return;
++
++	scoped_guard(mutex, &priv->vpc_mutex)
++		ret = write_ec_cmd(priv->adev->handle, VPCCMD_W_YMC, 1);
++	if (ret)
++		dev_warn(&priv->platform_device->dev, "Could not write YMC: %d\n", ret);
++}
++
++static int ideapad_laptop_nb_notify(struct notifier_block *nb,
++				    unsigned long action, void *data)
++{
++	switch (action) {
++	case IDEAPAD_LAPTOP_YMC_EVENT:
++		ideapad_laptop_trigger_ec();
++		break;
++	}
++
++	return 0;
++}
++
++static struct notifier_block ideapad_laptop_notifier = {
++	.notifier_call = ideapad_laptop_nb_notify,
++};
++
++static BLOCKING_NOTIFIER_HEAD(ideapad_laptop_chain_head);
++
++int ideapad_laptop_register_notifier(struct notifier_block *nb)
++{
++	return blocking_notifier_chain_register(&ideapad_laptop_chain_head, nb);
++}
++EXPORT_SYMBOL_NS_GPL(ideapad_laptop_register_notifier, IDEAPAD_LAPTOP);
++
++int ideapad_laptop_unregister_notifier(struct notifier_block *nb)
++{
++	return blocking_notifier_chain_unregister(&ideapad_laptop_chain_head, nb);
++}
++EXPORT_SYMBOL_NS_GPL(ideapad_laptop_unregister_notifier, IDEAPAD_LAPTOP);
++
++void ideapad_laptop_call_notifier(unsigned long action, void *data)
++{
++	blocking_notifier_call_chain(&ideapad_laptop_chain_head, action, data);
++}
++EXPORT_SYMBOL_NS_GPL(ideapad_laptop_call_notifier, IDEAPAD_LAPTOP);
++
+ static void ideapad_acpi_notify(acpi_handle handle, u32 event, void *data)
+ {
+ 	struct ideapad_private *priv = data;
+ 	unsigned long vpc1, vpc2, bit;
+ 
+-	if (read_ec_data(handle, VPCCMD_R_VPC1, &vpc1))
+-		return;
++	scoped_guard(mutex, &priv->vpc_mutex) {
++		if (read_ec_data(handle, VPCCMD_R_VPC1, &vpc1))
++			return;
+ 
+-	if (read_ec_data(handle, VPCCMD_R_VPC2, &vpc2))
+-		return;
++		if (read_ec_data(handle, VPCCMD_R_VPC2, &vpc2))
++			return;
++	}
+ 
+ 	vpc1 = (vpc2 << 8) | vpc1;
+ 
+@@ -1735,6 +1841,8 @@ static void ideapad_check_features(struct ideapad_private *priv)
+ 	priv->features.ctrl_ps2_aux_port =
+ 		ctrl_ps2_aux_port || dmi_check_system(ctrl_ps2_aux_port_list);
+ 	priv->features.touchpad_ctrl_via_ec = touchpad_ctrl_via_ec;
++	priv->features.ymc_ec_trigger =
++		ymc_ec_trigger || dmi_check_system(ymc_ec_trigger_quirk_dmi_table);
+ 
+ 	if (!read_ec_data(handle, VPCCMD_R_FAN, &val))
+ 		priv->features.fan_mode = true;
+@@ -1915,6 +2023,10 @@ static int ideapad_acpi_add(struct platform_device *pdev)
+ 	priv->adev = adev;
+ 	priv->platform_device = pdev;
+ 
++	err = devm_mutex_init(&pdev->dev, &priv->vpc_mutex);
++	if (err)
++		return err;
++
+ 	ideapad_check_features(priv);
+ 
+ 	err = ideapad_sysfs_init(priv);
+@@ -1983,6 +2095,8 @@ static int ideapad_acpi_add(struct platform_device *pdev)
+ 	if (err)
+ 		goto shared_init_failed;
+ 
++	ideapad_laptop_register_notifier(&ideapad_laptop_notifier);
++
+ 	return 0;
+ 
+ shared_init_failed:
+@@ -2015,6 +2129,8 @@ static void ideapad_acpi_remove(struct platform_device *pdev)
+ 	struct ideapad_private *priv = dev_get_drvdata(&pdev->dev);
+ 	int i;
+ 
++	ideapad_laptop_unregister_notifier(&ideapad_laptop_notifier);
++
+ 	ideapad_shared_exit(priv);
+ 
+ 	acpi_remove_notify_handler(priv->adev->handle,
+diff --git a/drivers/platform/x86/ideapad-laptop.h b/drivers/platform/x86/ideapad-laptop.h
+index 4498a96de59769..948cc61800a950 100644
+--- a/drivers/platform/x86/ideapad-laptop.h
++++ b/drivers/platform/x86/ideapad-laptop.h
+@@ -12,6 +12,15 @@
+ #include <linux/acpi.h>
+ #include <linux/jiffies.h>
+ #include <linux/errno.h>
++#include <linux/notifier.h>
++
++enum ideapad_laptop_notifier_actions {
++	IDEAPAD_LAPTOP_YMC_EVENT,
++};
++
++int ideapad_laptop_register_notifier(struct notifier_block *nb);
++int ideapad_laptop_unregister_notifier(struct notifier_block *nb);
++void ideapad_laptop_call_notifier(unsigned long action, void *data);
+ 
+ enum {
+ 	VPCCMD_R_VPC1 = 0x10,
+diff --git a/drivers/platform/x86/lenovo-ymc.c b/drivers/platform/x86/lenovo-ymc.c
+index e1fbc35504d498..e0bbd6a14a89cb 100644
+--- a/drivers/platform/x86/lenovo-ymc.c
++++ b/drivers/platform/x86/lenovo-ymc.c
+@@ -20,32 +20,10 @@
+ #define LENOVO_YMC_QUERY_INSTANCE 0
+ #define LENOVO_YMC_QUERY_METHOD 0x01
+ 
+-static bool ec_trigger __read_mostly;
+-module_param(ec_trigger, bool, 0444);
+-MODULE_PARM_DESC(ec_trigger, "Enable EC triggering work-around to force emitting tablet mode events");
+-
+ static bool force;
+ module_param(force, bool, 0444);
+ MODULE_PARM_DESC(force, "Force loading on boards without a convertible DMI chassis-type");
+ 
+-static const struct dmi_system_id ec_trigger_quirk_dmi_table[] = {
+-	{
+-		/* Lenovo Yoga 7 14ARB7 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "82QF"),
+-		},
+-	},
+-	{
+-		/* Lenovo Yoga 7 14ACN6 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "82N7"),
+-		},
+-	},
+-	{ }
+-};
+-
+ static const struct dmi_system_id allowed_chasis_types_dmi_table[] = {
+ 	{
+ 		.matches = {
+@@ -62,21 +40,8 @@ static const struct dmi_system_id allowed_chasis_types_dmi_table[] = {
+ 
+ struct lenovo_ymc_private {
+ 	struct input_dev *input_dev;
+-	struct acpi_device *ec_acpi_dev;
+ };
+ 
+-static void lenovo_ymc_trigger_ec(struct wmi_device *wdev, struct lenovo_ymc_private *priv)
+-{
+-	int err;
+-
+-	if (!priv->ec_acpi_dev)
+-		return;
+-
+-	err = write_ec_cmd(priv->ec_acpi_dev->handle, VPCCMD_W_YMC, 1);
+-	if (err)
+-		dev_warn(&wdev->dev, "Could not write YMC: %d\n", err);
+-}
+-
+ static const struct key_entry lenovo_ymc_keymap[] = {
+ 	/* Laptop */
+ 	{ KE_SW, 0x01, { .sw = { SW_TABLET_MODE, 0 } } },
+@@ -125,11 +90,9 @@ static void lenovo_ymc_notify(struct wmi_device *wdev, union acpi_object *data)
+ 
+ free_obj:
+ 	kfree(obj);
+-	lenovo_ymc_trigger_ec(wdev, priv);
++	ideapad_laptop_call_notifier(IDEAPAD_LAPTOP_YMC_EVENT, &code);
+ }
+ 
+-static void acpi_dev_put_helper(void *p) { acpi_dev_put(p); }
+-
+ static int lenovo_ymc_probe(struct wmi_device *wdev, const void *ctx)
+ {
+ 	struct lenovo_ymc_private *priv;
+@@ -143,29 +106,10 @@ static int lenovo_ymc_probe(struct wmi_device *wdev, const void *ctx)
+ 			return -ENODEV;
+ 	}
+ 
+-	ec_trigger |= dmi_check_system(ec_trigger_quirk_dmi_table);
+-
+ 	priv = devm_kzalloc(&wdev->dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
+ 
+-	if (ec_trigger) {
+-		pr_debug("Lenovo YMC enable EC triggering.\n");
+-		priv->ec_acpi_dev = acpi_dev_get_first_match_dev("VPC2004", NULL, -1);
+-
+-		if (!priv->ec_acpi_dev) {
+-			dev_err(&wdev->dev, "Could not find EC ACPI device.\n");
+-			return -ENODEV;
+-		}
+-		err = devm_add_action_or_reset(&wdev->dev,
+-				acpi_dev_put_helper, priv->ec_acpi_dev);
+-		if (err) {
+-			dev_err(&wdev->dev,
+-				"Could not clean up EC ACPI device: %d\n", err);
+-			return err;
+-		}
+-	}
+-
+ 	input_dev = devm_input_allocate_device(&wdev->dev);
+ 	if (!input_dev)
+ 		return -ENOMEM;
+@@ -192,7 +136,6 @@ static int lenovo_ymc_probe(struct wmi_device *wdev, const void *ctx)
+ 	dev_set_drvdata(&wdev->dev, priv);
+ 
+ 	/* Report the state for the first time on probe */
+-	lenovo_ymc_trigger_ec(wdev, priv);
+ 	lenovo_ymc_notify(wdev, NULL);
+ 	return 0;
+ }
+@@ -217,3 +160,4 @@ module_wmi_driver(lenovo_ymc_driver);
+ MODULE_AUTHOR("Gergo Koteles <soyer@irl.hu>");
+ MODULE_DESCRIPTION("Lenovo Yoga Mode Control driver");
+ MODULE_LICENSE("GPL");
++MODULE_IMPORT_NS(IDEAPAD_LAPTOP);
+diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c
+index c26545d71d39a3..cd6d5bbb4b9df5 100644
+--- a/fs/binfmt_flat.c
++++ b/fs/binfmt_flat.c
+@@ -72,8 +72,10 @@
+ 
+ #ifdef CONFIG_BINFMT_FLAT_NO_DATA_START_OFFSET
+ #define DATA_START_OFFSET_WORDS		(0)
++#define MAX_SHARED_LIBS_UPDATE		(0)
+ #else
+ #define DATA_START_OFFSET_WORDS		(MAX_SHARED_LIBS)
++#define MAX_SHARED_LIBS_UPDATE		(MAX_SHARED_LIBS)
+ #endif
+ 
+ struct lib_info {
+@@ -880,7 +882,7 @@ static int load_flat_binary(struct linux_binprm *bprm)
+ 		return res;
+ 
+ 	/* Update data segment pointers for all libraries */
+-	for (i = 0; i < MAX_SHARED_LIBS; i++) {
++	for (i = 0; i < MAX_SHARED_LIBS_UPDATE; i++) {
+ 		if (!libinfo.lib_list[i].loaded)
+ 			continue;
+ 		for (j = 0; j < MAX_SHARED_LIBS; j++) {
+diff --git a/fs/exec.c b/fs/exec.c
+index 40073142288f7a..0c17e59e3767b8 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1668,6 +1668,7 @@ static void bprm_fill_uid(struct linux_binprm *bprm, struct file *file)
+ 	unsigned int mode;
+ 	vfsuid_t vfsuid;
+ 	vfsgid_t vfsgid;
++	int err;
+ 
+ 	if (!mnt_may_suid(file->f_path.mnt))
+ 		return;
+@@ -1684,12 +1685,17 @@ static void bprm_fill_uid(struct linux_binprm *bprm, struct file *file)
+ 	/* Be careful if suid/sgid is set */
+ 	inode_lock(inode);
+ 
+-	/* reload atomically mode/uid/gid now that lock held */
++	/* Atomically reload and check mode/uid/gid now that lock held. */
+ 	mode = inode->i_mode;
+ 	vfsuid = i_uid_into_vfsuid(idmap, inode);
+ 	vfsgid = i_gid_into_vfsgid(idmap, inode);
++	err = inode_permission(idmap, inode, MAY_EXEC);
+ 	inode_unlock(inode);
+ 
++	/* Did the exec bit vanish out from under us? Give up. */
++	if (err)
++		return;
++
+ 	/* We ignore suid/sgid if there are no mappings for them in the ns */
+ 	if (!vfsuid_has_mapping(bprm->cred->user_ns, vfsuid) ||
+ 	    !vfsgid_has_mapping(bprm->cred->user_ns, vfsgid))
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 48048fa3642766..fd1fc06359eea3 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -19,33 +19,23 @@
+ #include "node.h"
+ #include <trace/events/f2fs.h>
+ 
+-bool sanity_check_extent_cache(struct inode *inode)
++bool sanity_check_extent_cache(struct inode *inode, struct page *ipage)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+-	struct f2fs_inode_info *fi = F2FS_I(inode);
+-	struct extent_tree *et = fi->extent_tree[EX_READ];
+-	struct extent_info *ei;
+-
+-	if (!et)
+-		return true;
++	struct f2fs_extent *i_ext = &F2FS_INODE(ipage)->i_ext;
++	struct extent_info ei;
+ 
+-	ei = &et->largest;
+-	if (!ei->len)
+-		return true;
++	get_read_extent_info(&ei, i_ext);
+ 
+-	/* Let's drop, if checkpoint got corrupted. */
+-	if (is_set_ckpt_flags(sbi, CP_ERROR_FLAG)) {
+-		ei->len = 0;
+-		et->largest_updated = true;
++	if (!ei.len)
+ 		return true;
+-	}
+ 
+-	if (!f2fs_is_valid_blkaddr(sbi, ei->blk, DATA_GENERIC_ENHANCE) ||
+-	    !f2fs_is_valid_blkaddr(sbi, ei->blk + ei->len - 1,
++	if (!f2fs_is_valid_blkaddr(sbi, ei.blk, DATA_GENERIC_ENHANCE) ||
++	    !f2fs_is_valid_blkaddr(sbi, ei.blk + ei.len - 1,
+ 					DATA_GENERIC_ENHANCE)) {
+ 		f2fs_warn(sbi, "%s: inode (ino=%lx) extent info [%u, %u, %u] is incorrect, run fsck to fix",
+ 			  __func__, inode->i_ino,
+-			  ei->blk, ei->fofs, ei->len);
++			  ei.blk, ei.fofs, ei.len);
+ 		return false;
+ 	}
+ 	return true;
+@@ -394,24 +384,22 @@ void f2fs_init_read_extent_tree(struct inode *inode, struct page *ipage)
+ 
+ 	if (!__may_extent_tree(inode, EX_READ)) {
+ 		/* drop largest read extent */
+-		if (i_ext && i_ext->len) {
++		if (i_ext->len) {
+ 			f2fs_wait_on_page_writeback(ipage, NODE, true, true);
+ 			i_ext->len = 0;
+ 			set_page_dirty(ipage);
+ 		}
+-		goto out;
++		set_inode_flag(inode, FI_NO_EXTENT);
++		return;
+ 	}
+ 
+ 	et = __grab_extent_tree(inode, EX_READ);
+ 
+-	if (!i_ext || !i_ext->len)
+-		goto out;
+-
+ 	get_read_extent_info(&ei, i_ext);
+ 
+ 	write_lock(&et->lock);
+-	if (atomic_read(&et->node_cnt))
+-		goto unlock_out;
++	if (atomic_read(&et->node_cnt) || !ei.len)
++		goto skip;
+ 
+ 	en = __attach_extent_node(sbi, et, &ei, NULL,
+ 				&et->root.rb_root.rb_node, true);
+@@ -423,11 +411,13 @@ void f2fs_init_read_extent_tree(struct inode *inode, struct page *ipage)
+ 		list_add_tail(&en->list, &eti->extent_list);
+ 		spin_unlock(&eti->extent_lock);
+ 	}
+-unlock_out:
++skip:
++	/* Let's drop, if checkpoint got corrupted. */
++	if (f2fs_cp_error(sbi)) {
++		et->largest.len = 0;
++		et->largest_updated = true;
++	}
+ 	write_unlock(&et->lock);
+-out:
+-	if (!F2FS_I(inode)->extent_tree[EX_READ])
+-		set_inode_flag(inode, FI_NO_EXTENT);
+ }
+ 
+ void f2fs_init_age_extent_tree(struct inode *inode)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 66680159a29680..5556ab491368da 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -4195,7 +4195,7 @@ void f2fs_leave_shrinker(struct f2fs_sb_info *sbi);
+ /*
+  * extent_cache.c
+  */
+-bool sanity_check_extent_cache(struct inode *inode);
++bool sanity_check_extent_cache(struct inode *inode, struct page *ipage);
+ void f2fs_init_extent_tree(struct inode *inode);
+ void f2fs_drop_extent_tree(struct inode *inode);
+ void f2fs_destroy_extent_node(struct inode *inode);
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index b2951cd930d808..448c75e80b89e6 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1566,6 +1566,16 @@ static int gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 				continue;
+ 			}
+ 
++			if (f2fs_has_inline_data(inode)) {
++				iput(inode);
++				set_sbi_flag(sbi, SBI_NEED_FSCK);
++				f2fs_err_ratelimited(sbi,
++					"inode %lx has both inline_data flag and "
++					"data block, nid=%u, ofs_in_node=%u",
++					inode->i_ino, dni.nid, ofs_in_node);
++				continue;
++			}
++
+ 			err = f2fs_gc_pinned_control(inode, gc_type, segno);
+ 			if (err == -EAGAIN) {
+ 				iput(inode);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index c6b55aedc27627..ed629dabbfda43 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -511,16 +511,16 @@ static int do_read_inode(struct inode *inode)
+ 
+ 	init_idisk_time(inode);
+ 
+-	/* Need all the flag bits */
+-	f2fs_init_read_extent_tree(inode, node_page);
+-	f2fs_init_age_extent_tree(inode);
+-
+-	if (!sanity_check_extent_cache(inode)) {
++	if (!sanity_check_extent_cache(inode, node_page)) {
+ 		f2fs_put_page(node_page, 1);
+ 		f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE);
+ 		return -EFSCORRUPTED;
+ 	}
+ 
++	/* Need all the flag bits */
++	f2fs_init_read_extent_tree(inode, node_page);
++	f2fs_init_age_extent_tree(inode);
++
+ 	f2fs_put_page(node_page, 1);
+ 
+ 	stat_inc_inline_xattr(inode);
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index cb3cda1390adb1..5713994328cbcb 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -1626,6 +1626,8 @@ s64 dbDiscardAG(struct inode *ip, int agno, s64 minlen)
+ 		} else if (rc == -ENOSPC) {
+ 			/* search for next smaller log2 block */
+ 			l2nb = BLKSTOL2(nblocks) - 1;
++			if (unlikely(l2nb < 0))
++				break;
+ 			nblocks = 1LL << l2nb;
+ 		} else {
+ 			/* Trim any already allocated blocks */
+diff --git a/fs/jfs/jfs_dtree.c b/fs/jfs/jfs_dtree.c
+index 031d8f570f581f..5d3127ca68a42d 100644
+--- a/fs/jfs/jfs_dtree.c
++++ b/fs/jfs/jfs_dtree.c
+@@ -834,6 +834,8 @@ int dtInsert(tid_t tid, struct inode *ip,
+ 	 * the full page.
+ 	 */
+ 	DT_GETSEARCH(ip, btstack->top, bn, mp, p, index);
++	if (p->header.freelist == 0)
++		return -EINVAL;
+ 
+ 	/*
+ 	 *	insert entry for new key
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 4822cfd6351c27..ded451a84b773b 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -1896,6 +1896,47 @@ enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr,
+ 	return REPARSE_LINK;
+ }
+ 
++/*
++ * fiemap_fill_next_extent_k - a copy of fiemap_fill_next_extent
++ * but it accepts kernel address for fi_extents_start
++ */
++static int fiemap_fill_next_extent_k(struct fiemap_extent_info *fieinfo,
++				     u64 logical, u64 phys, u64 len, u32 flags)
++{
++	struct fiemap_extent extent;
++	struct fiemap_extent __user *dest = fieinfo->fi_extents_start;
++
++	/* only count the extents */
++	if (fieinfo->fi_extents_max == 0) {
++		fieinfo->fi_extents_mapped++;
++		return (flags & FIEMAP_EXTENT_LAST) ? 1 : 0;
++	}
++
++	if (fieinfo->fi_extents_mapped >= fieinfo->fi_extents_max)
++		return 1;
++
++	if (flags & FIEMAP_EXTENT_DELALLOC)
++		flags |= FIEMAP_EXTENT_UNKNOWN;
++	if (flags & FIEMAP_EXTENT_DATA_ENCRYPTED)
++		flags |= FIEMAP_EXTENT_ENCODED;
++	if (flags & (FIEMAP_EXTENT_DATA_TAIL | FIEMAP_EXTENT_DATA_INLINE))
++		flags |= FIEMAP_EXTENT_NOT_ALIGNED;
++
++	memset(&extent, 0, sizeof(extent));
++	extent.fe_logical = logical;
++	extent.fe_physical = phys;
++	extent.fe_length = len;
++	extent.fe_flags = flags;
++
++	dest += fieinfo->fi_extents_mapped;
++	memcpy(dest, &extent, sizeof(extent));
++
++	fieinfo->fi_extents_mapped++;
++	if (fieinfo->fi_extents_mapped == fieinfo->fi_extents_max)
++		return 1;
++	return (flags & FIEMAP_EXTENT_LAST) ? 1 : 0;
++}
++
+ /*
+  * ni_fiemap - Helper for file_fiemap().
+  *
+@@ -1906,6 +1947,8 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ 	      __u64 vbo, __u64 len)
+ {
+ 	int err = 0;
++	struct fiemap_extent __user *fe_u = fieinfo->fi_extents_start;
++	struct fiemap_extent *fe_k = NULL;
+ 	struct ntfs_sb_info *sbi = ni->mi.sbi;
+ 	u8 cluster_bits = sbi->cluster_bits;
+ 	struct runs_tree *run;
+@@ -1953,6 +1996,18 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ 		goto out;
+ 	}
+ 
++	/*
++	 * To avoid lock problems replace pointer to user memory by pointer to kernel memory.
++	 */
++	fe_k = kmalloc_array(fieinfo->fi_extents_max,
++			     sizeof(struct fiemap_extent),
++			     GFP_NOFS | __GFP_ZERO);
++	if (!fe_k) {
++		err = -ENOMEM;
++		goto out;
++	}
++	fieinfo->fi_extents_start = fe_k;
++
+ 	end = vbo + len;
+ 	alloc_size = le64_to_cpu(attr->nres.alloc_size);
+ 	if (end > alloc_size)
+@@ -2041,8 +2096,9 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ 			if (vbo + dlen >= end)
+ 				flags |= FIEMAP_EXTENT_LAST;
+ 
+-			err = fiemap_fill_next_extent(fieinfo, vbo, lbo, dlen,
+-						      flags);
++			err = fiemap_fill_next_extent_k(fieinfo, vbo, lbo, dlen,
++							flags);
++
+ 			if (err < 0)
+ 				break;
+ 			if (err == 1) {
+@@ -2062,7 +2118,8 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ 		if (vbo + bytes >= end)
+ 			flags |= FIEMAP_EXTENT_LAST;
+ 
+-		err = fiemap_fill_next_extent(fieinfo, vbo, lbo, bytes, flags);
++		err = fiemap_fill_next_extent_k(fieinfo, vbo, lbo, bytes,
++						flags);
+ 		if (err < 0)
+ 			break;
+ 		if (err == 1) {
+@@ -2075,7 +2132,19 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ 
+ 	up_read(run_lock);
+ 
++	/*
++	 * Copy to user memory out of lock
++	 */
++	if (copy_to_user(fe_u, fe_k,
++			 fieinfo->fi_extents_max *
++				 sizeof(struct fiemap_extent))) {
++		err = -EFAULT;
++	}
++
+ out:
++	/* Restore original pointer. */
++	fieinfo->fi_extents_start = fe_u;
++	kfree(fe_k);
+ 	return err;
+ }
+ 
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 110692c1dd95a5..ab0455c64e49ad 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2279,12 +2279,12 @@ static int __bpf_redirect_neigh_v6(struct sk_buff *skb, struct net_device *dev,
+ 
+ 	err = bpf_out_neigh_v6(net, skb, dev, nh);
+ 	if (unlikely(net_xmit_eval(err)))
+-		dev->stats.tx_errors++;
++		DEV_STATS_INC(dev, tx_errors);
+ 	else
+ 		ret = NET_XMIT_SUCCESS;
+ 	goto out_xmit;
+ out_drop:
+-	dev->stats.tx_errors++;
++	DEV_STATS_INC(dev, tx_errors);
+ 	kfree_skb(skb);
+ out_xmit:
+ 	return ret;
+@@ -2385,12 +2385,12 @@ static int __bpf_redirect_neigh_v4(struct sk_buff *skb, struct net_device *dev,
+ 
+ 	err = bpf_out_neigh_v4(net, skb, dev, nh);
+ 	if (unlikely(net_xmit_eval(err)))
+-		dev->stats.tx_errors++;
++		DEV_STATS_INC(dev, tx_errors);
+ 	else
+ 		ret = NET_XMIT_SUCCESS;
+ 	goto out_xmit;
+ out_drop:
+-	dev->stats.tx_errors++;
++	DEV_STATS_INC(dev, tx_errors);
+ 	kfree_skb(skb);
+ out_xmit:
+ 	return ret;
+diff --git a/net/ipv4/fou_core.c b/net/ipv4/fou_core.c
+index a8494f796dca33..0abbc413e0fe51 100644
+--- a/net/ipv4/fou_core.c
++++ b/net/ipv4/fou_core.c
+@@ -433,7 +433,7 @@ static struct sk_buff *gue_gro_receive(struct sock *sk,
+ 
+ 	offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
+ 	ops = rcu_dereference(offloads[proto]);
+-	if (WARN_ON_ONCE(!ops || !ops->callbacks.gro_receive))
++	if (!ops || !ops->callbacks.gro_receive)
+ 		goto out;
+ 
+ 	pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+diff --git a/sound/soc/codecs/cs35l56-shared.c b/sound/soc/codecs/cs35l56-shared.c
+index 6d821a793045ee..56cd60d33a28e7 100644
+--- a/sound/soc/codecs/cs35l56-shared.c
++++ b/sound/soc/codecs/cs35l56-shared.c
+@@ -36,6 +36,7 @@ static const struct reg_sequence cs35l56_patch[] = {
+ 	{ CS35L56_SWIRE_DP3_CH2_INPUT,		0x00000019 },
+ 	{ CS35L56_SWIRE_DP3_CH3_INPUT,		0x00000029 },
+ 	{ CS35L56_SWIRE_DP3_CH4_INPUT,		0x00000028 },
++	{ CS35L56_IRQ1_MASK_18,			0x1f7df0ff },
+ 
+ 	/* These are not reset by a soft-reset, so patch to defaults. */
+ 	{ CS35L56_MAIN_RENDER_USER_MUTE,	0x00000000 },
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index d1bdb0b93bda0c..8cc2d4937f3403 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -2021,6 +2021,13 @@ static int parse_audio_feature_unit(struct mixer_build *state, int unitid,
+ 		bmaControls = ftr->bmaControls;
+ 	}
+ 
++	if (channels > 32) {
++		usb_audio_info(state->chip,
++			       "usbmixer: too many channels (%d) in unit %d\n",
++			       channels, unitid);
++		return -EINVAL;
++	}
++
+ 	/* parse the source unit */
+ 	err = parse_audio_unit(state, hdr->bSourceID);
+ 	if (err < 0)



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-08-29 16:48 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-08-29 16:48 UTC (permalink / raw
  To: gentoo-commits

commit:     ef27e3d3833a5fc6966eba377d74239c139c169e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 29 16:48:11 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 29 16:48:11 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ef27e3d3

Linux patch 6.10.7

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1006_linux-6.10.7.patch | 12217 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 12221 insertions(+)

diff --git a/0000_README b/0000_README
index 75bbf8b4..82c9265b 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-6.10.6.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.6
 
+Patch:  1006_linux-6.10.7.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.7
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1006_linux-6.10.7.patch b/1006_linux-6.10.7.patch
new file mode 100644
index 00000000..00a04ca2
--- /dev/null
+++ b/1006_linux-6.10.7.patch
@@ -0,0 +1,12217 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index e7e160954e798..0579860b55299 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -562,7 +562,8 @@ Description:	Control Symmetric Multi Threading (SMT)
+ 			 ================ =========================================
+ 
+ 			 If control status is "forceoff" or "notsupported" writes
+-			 are rejected.
++			 are rejected. Note that enabling SMT on PowerPC skips
++			 offline cores.
+ 
+ What:		/sys/devices/system/cpu/cpuX/power/energy_perf_bias
+ Date:		March 2019
+diff --git a/Makefile b/Makefile
+index 361a70264e1fb..ab77d171e268d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+@@ -1986,7 +1986,7 @@ nsdeps: modules
+ quiet_cmd_gen_compile_commands = GEN     $@
+       cmd_gen_compile_commands = $(PYTHON3) $< -a $(AR) -o $@ $(filter-out $<, $(real-prereqs))
+ 
+-$(extmod_prefix)compile_commands.json: scripts/clang-tools/gen_compile_commands.py \
++$(extmod_prefix)compile_commands.json: $(srctree)/scripts/clang-tools/gen_compile_commands.py \
+ 	$(if $(KBUILD_EXTMOD),, vmlinux.a $(KBUILD_VMLINUX_LIBS)) \
+ 	$(if $(CONFIG_MODULES), $(MODORDER)) FORCE
+ 	$(call if_changed,gen_compile_commands)
+diff --git a/arch/arm64/kernel/acpi_numa.c b/arch/arm64/kernel/acpi_numa.c
+index e51535a5f939a..ccbff21ce1faf 100644
+--- a/arch/arm64/kernel/acpi_numa.c
++++ b/arch/arm64/kernel/acpi_numa.c
+@@ -27,7 +27,7 @@
+ 
+ #include <asm/numa.h>
+ 
+-static int acpi_early_node_map[NR_CPUS] __initdata = { NUMA_NO_NODE };
++static int acpi_early_node_map[NR_CPUS] __initdata = { [0 ... NR_CPUS - 1] = NUMA_NO_NODE };
+ 
+ int __init acpi_numa_get_nid(unsigned int cpu)
+ {
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index a096e2451044d..b22d28ec80284 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -355,9 +355,6 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
+ 	smp_init_cpus();
+ 	smp_build_mpidr_hash();
+ 
+-	/* Init percpu seeds for random tags after cpus are set up. */
+-	kasan_init_sw_tags();
+-
+ #ifdef CONFIG_ARM64_SW_TTBR0_PAN
+ 	/*
+ 	 * Make sure init_thread_info.ttbr0 always generates translation
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index 5de85dccc09cd..05688f6a275f1 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -469,6 +469,8 @@ void __init smp_prepare_boot_cpu(void)
+ 		init_gic_priority_masking();
+ 
+ 	kasan_init_hw_tags();
++	/* Init percpu seeds for random tags after cpus are set up. */
++	kasan_init_sw_tags();
+ }
+ 
+ /*
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index 22b45a15d0688..33d866ca8f655 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -33,6 +33,7 @@
+ #include <trace/events/kvm.h>
+ 
+ #include "sys_regs.h"
++#include "vgic/vgic.h"
+ 
+ #include "trace.h"
+ 
+@@ -428,6 +429,11 @@ static bool access_gic_sgi(struct kvm_vcpu *vcpu,
+ {
+ 	bool g1;
+ 
++	if (!kvm_has_gicv3(vcpu->kvm)) {
++		kvm_inject_undefined(vcpu);
++		return false;
++	}
++
+ 	if (!p->is_write)
+ 		return read_from_write_only(vcpu, p, r);
+ 
+diff --git a/arch/arm64/kvm/vgic/vgic-debug.c b/arch/arm64/kvm/vgic/vgic-debug.c
+index bcbc8c986b1d6..980fbe4fd82cb 100644
+--- a/arch/arm64/kvm/vgic/vgic-debug.c
++++ b/arch/arm64/kvm/vgic/vgic-debug.c
+@@ -84,7 +84,7 @@ static void iter_unmark_lpis(struct kvm *kvm)
+ 	struct vgic_irq *irq;
+ 	unsigned long intid;
+ 
+-	xa_for_each(&dist->lpi_xa, intid, irq) {
++	xa_for_each_marked(&dist->lpi_xa, intid, irq, LPI_XA_MARK_DEBUG_ITER) {
+ 		xa_clear_mark(&dist->lpi_xa, intid, LPI_XA_MARK_DEBUG_ITER);
+ 		vgic_put_irq(kvm, irq);
+ 	}
+diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h
+index 03d356a123771..55e3b7108dc9b 100644
+--- a/arch/arm64/kvm/vgic/vgic.h
++++ b/arch/arm64/kvm/vgic/vgic.h
+@@ -346,4 +346,11 @@ void vgic_v4_configure_vsgis(struct kvm *kvm);
+ void vgic_v4_get_vlpi_state(struct vgic_irq *irq, bool *val);
+ int vgic_v4_request_vpe_irq(struct kvm_vcpu *vcpu, int irq);
+ 
++static inline bool kvm_has_gicv3(struct kvm *kvm)
++{
++	return (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif) &&
++		irqchip_in_kernel(kvm) &&
++		kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3);
++}
++
+ #endif
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index bda7f193baab9..af7412549e6ea 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1724,12 +1724,16 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ 		c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM |
+ 			MIPS_ASE_LOONGSON_EXT | MIPS_ASE_LOONGSON_EXT2);
+ 		c->ases &= ~MIPS_ASE_VZ; /* VZ of Loongson-3A2000/3000 is incomplete */
++		change_c0_config6(LOONGSON_CONF6_EXTIMER | LOONGSON_CONF6_INTIMER,
++				  LOONGSON_CONF6_INTIMER);
+ 		break;
+ 	case PRID_IMP_LOONGSON_64G:
+ 		__cpu_name[cpu] = "ICT Loongson-3";
+ 		set_elf_platform(cpu, "loongson3a");
+ 		set_isa(c, MIPS_CPU_ISA_M64R2);
+ 		decode_cpucfg(c);
++		change_c0_config6(LOONGSON_CONF6_EXTIMER | LOONGSON_CONF6_INTIMER,
++				  LOONGSON_CONF6_INTIMER);
+ 		break;
+ 	default:
+ 		panic("Unknown Loongson Processor ID!");
+diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
+index f4e6f2dd04b73..16bacfe8c7a2c 100644
+--- a/arch/powerpc/include/asm/topology.h
++++ b/arch/powerpc/include/asm/topology.h
+@@ -145,6 +145,7 @@ static inline int cpu_to_coregroup_id(int cpu)
+ 
+ #ifdef CONFIG_HOTPLUG_SMT
+ #include <linux/cpu_smt.h>
++#include <linux/cpumask.h>
+ #include <asm/cputhreads.h>
+ 
+ static inline bool topology_is_primary_thread(unsigned int cpu)
+@@ -156,6 +157,18 @@ static inline bool topology_smt_thread_allowed(unsigned int cpu)
+ {
+ 	return cpu_thread_in_core(cpu) < cpu_smt_num_threads;
+ }
++
++#define topology_is_core_online topology_is_core_online
++static inline bool topology_is_core_online(unsigned int cpu)
++{
++	int i, first_cpu = cpu_first_thread_sibling(cpu);
++
++	for (i = first_cpu; i < first_cpu + threads_per_core; ++i) {
++		if (cpu_online(i))
++			return true;
++	}
++	return false;
++}
+ #endif
+ 
+ #endif /* __KERNEL__ */
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index 05a16b1f0aee8..51ebfd23e0076 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -319,6 +319,7 @@ void do_trap_ecall_u(struct pt_regs *regs)
+ 
+ 		regs->epc += 4;
+ 		regs->orig_a0 = regs->a0;
++		regs->a0 = -ENOSYS;
+ 
+ 		riscv_v_vstate_discard(regs);
+ 
+@@ -328,8 +329,7 @@ void do_trap_ecall_u(struct pt_regs *regs)
+ 
+ 		if (syscall >= 0 && syscall < NR_syscalls)
+ 			syscall_handler(regs, syscall);
+-		else if (syscall != -1)
+-			regs->a0 = -ENOSYS;
++
+ 		/*
+ 		 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
+ 		 * so the maximum stack offset is 1k bytes (10 bits).
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 7e25606f858aa..c5c66f53971af 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -931,7 +931,7 @@ static void __init create_kernel_page_table(pgd_t *pgdir,
+ 				   PMD_SIZE, PAGE_KERNEL_EXEC);
+ 
+ 	/* Map the data in RAM */
+-	end_va = kernel_map.virt_addr + XIP_OFFSET + kernel_map.size;
++	end_va = kernel_map.virt_addr + kernel_map.size;
+ 	for (va = kernel_map.virt_addr + XIP_OFFSET; va < end_va; va += PMD_SIZE)
+ 		create_pgd_mapping(pgdir, va,
+ 				   kernel_map.phys_addr + (va - (kernel_map.virt_addr + XIP_OFFSET)),
+@@ -1100,7 +1100,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+ 
+ 	phys_ram_base = CONFIG_PHYS_RAM_BASE;
+ 	kernel_map.phys_addr = (uintptr_t)CONFIG_PHYS_RAM_BASE;
+-	kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_sdata);
++	kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_start);
+ 
+ 	kernel_map.va_kernel_xip_pa_offset = kernel_map.virt_addr - kernel_map.xiprom;
+ #else
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index 5a36d5538dae8..6d88f241dd43a 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -161,7 +161,7 @@ static void kaslr_adjust_relocs(unsigned long min_addr, unsigned long max_addr,
+ 		loc = (long)*reloc + phys_offset;
+ 		if (loc < min_addr || loc > max_addr)
+ 			error("64-bit relocation outside of kernel!\n");
+-		*(u64 *)loc += offset - __START_KERNEL;
++		*(u64 *)loc += offset;
+ 	}
+ }
+ 
+@@ -176,7 +176,7 @@ static void kaslr_adjust_got(unsigned long offset)
+ 	 */
+ 	for (entry = (u64 *)vmlinux.got_start; entry < (u64 *)vmlinux.got_end; entry++) {
+ 		if (*entry)
+-			*entry += offset - __START_KERNEL;
++			*entry += offset;
+ 	}
+ }
+ 
+@@ -251,7 +251,7 @@ static unsigned long setup_kernel_memory_layout(unsigned long kernel_size)
+ 	vmemmap_size = SECTION_ALIGN_UP(pages) * sizeof(struct page);
+ 
+ 	/* choose kernel address space layout: 4 or 3 levels. */
+-	BUILD_BUG_ON(!IS_ALIGNED(__START_KERNEL, THREAD_SIZE));
++	BUILD_BUG_ON(!IS_ALIGNED(TEXT_OFFSET, THREAD_SIZE));
+ 	BUILD_BUG_ON(!IS_ALIGNED(__NO_KASLR_START_KERNEL, THREAD_SIZE));
+ 	BUILD_BUG_ON(__NO_KASLR_END_KERNEL > _REGION1_SIZE);
+ 	vsize = get_vmem_size(ident_map_size, vmemmap_size, vmalloc_size, _REGION3_SIZE);
+@@ -378,31 +378,25 @@ static void kaslr_adjust_vmlinux_info(long offset)
+ #endif
+ }
+ 
+-static void fixup_vmlinux_info(void)
+-{
+-	vmlinux.entry -= __START_KERNEL;
+-	kaslr_adjust_vmlinux_info(-__START_KERNEL);
+-}
+-
+ void startup_kernel(void)
+ {
+-	unsigned long kernel_size = vmlinux.image_size + vmlinux.bss_size;
+-	unsigned long nokaslr_offset_phys, kaslr_large_page_offset;
+-	unsigned long amode31_lma = 0;
++	unsigned long vmlinux_size = vmlinux.image_size + vmlinux.bss_size;
++	unsigned long nokaslr_text_lma, text_lma = 0, amode31_lma = 0;
++	unsigned long kernel_size = TEXT_OFFSET + vmlinux_size;
++	unsigned long kaslr_large_page_offset;
+ 	unsigned long max_physmem_end;
+ 	unsigned long asce_limit;
+ 	unsigned long safe_addr;
+ 	psw_t psw;
+ 
+-	fixup_vmlinux_info();
+ 	setup_lpp();
+ 
+ 	/*
+ 	 * Non-randomized kernel physical start address must be _SEGMENT_SIZE
+ 	 * aligned (see blow).
+ 	 */
+-	nokaslr_offset_phys = ALIGN(mem_safe_offset(), _SEGMENT_SIZE);
+-	safe_addr = PAGE_ALIGN(nokaslr_offset_phys + kernel_size);
++	nokaslr_text_lma = ALIGN(mem_safe_offset(), _SEGMENT_SIZE);
++	safe_addr = PAGE_ALIGN(nokaslr_text_lma + vmlinux_size);
+ 
+ 	/*
+ 	 * Reserve decompressor memory together with decompression heap,
+@@ -446,16 +440,27 @@ void startup_kernel(void)
+ 	 */
+ 	kaslr_large_page_offset = __kaslr_offset & ~_SEGMENT_MASK;
+ 	if (kaslr_enabled()) {
+-		unsigned long end = ident_map_size - kaslr_large_page_offset;
++		unsigned long size = vmlinux_size + kaslr_large_page_offset;
+ 
+-		__kaslr_offset_phys = randomize_within_range(kernel_size, _SEGMENT_SIZE, 0, end);
++		text_lma = randomize_within_range(size, _SEGMENT_SIZE, TEXT_OFFSET, ident_map_size);
+ 	}
+-	if (!__kaslr_offset_phys)
+-		__kaslr_offset_phys = nokaslr_offset_phys;
+-	__kaslr_offset_phys |= kaslr_large_page_offset;
++	if (!text_lma)
++		text_lma = nokaslr_text_lma;
++	text_lma |= kaslr_large_page_offset;
++
++	/*
++	 * [__kaslr_offset_phys..__kaslr_offset_phys + TEXT_OFFSET] region is
++	 * never accessed via the kernel image mapping as per the linker script:
++	 *
++	 *	. = TEXT_OFFSET;
++	 *
++	 * Therefore, this region could be used for something else and does
++	 * not need to be reserved. See how it is skipped in setup_vmem().
++	 */
++	__kaslr_offset_phys = text_lma - TEXT_OFFSET;
+ 	kaslr_adjust_vmlinux_info(__kaslr_offset_phys);
+-	physmem_reserve(RR_VMLINUX, __kaslr_offset_phys, kernel_size);
+-	deploy_kernel((void *)__kaslr_offset_phys);
++	physmem_reserve(RR_VMLINUX, text_lma, vmlinux_size);
++	deploy_kernel((void *)text_lma);
+ 
+ 	/* vmlinux decompression is done, shrink reserved low memory */
+ 	physmem_reserve(RR_DECOMPRESSOR, 0, (unsigned long)_decompressor_end);
+@@ -474,7 +479,7 @@ void startup_kernel(void)
+ 	if (kaslr_enabled())
+ 		amode31_lma = randomize_within_range(vmlinux.amode31_size, PAGE_SIZE, 0, SZ_2G);
+ 	if (!amode31_lma)
+-		amode31_lma = __kaslr_offset_phys - vmlinux.amode31_size;
++		amode31_lma = text_lma - vmlinux.amode31_size;
+ 	physmem_reserve(RR_AMODE31, amode31_lma, vmlinux.amode31_size);
+ 
+ 	/*
+@@ -490,8 +495,8 @@ void startup_kernel(void)
+ 	 * - copy_bootdata() must follow setup_vmem() to propagate changes
+ 	 *   to bootdata made by setup_vmem()
+ 	 */
+-	clear_bss_section(__kaslr_offset_phys);
+-	kaslr_adjust_relocs(__kaslr_offset_phys, __kaslr_offset_phys + vmlinux.image_size,
++	clear_bss_section(text_lma);
++	kaslr_adjust_relocs(text_lma, text_lma + vmlinux.image_size,
+ 			    __kaslr_offset, __kaslr_offset_phys);
+ 	kaslr_adjust_got(__kaslr_offset);
+ 	setup_vmem(__kaslr_offset, __kaslr_offset + kernel_size, asce_limit);
+diff --git a/arch/s390/boot/vmem.c b/arch/s390/boot/vmem.c
+index 40cfce2687c43..3d4dae9905cd8 100644
+--- a/arch/s390/boot/vmem.c
++++ b/arch/s390/boot/vmem.c
+@@ -89,7 +89,7 @@ static void kasan_populate_shadow(unsigned long kernel_start, unsigned long kern
+ 		}
+ 		memgap_start = end;
+ 	}
+-	kasan_populate(kernel_start, kernel_end, POPULATE_KASAN_MAP_SHADOW);
++	kasan_populate(kernel_start + TEXT_OFFSET, kernel_end, POPULATE_KASAN_MAP_SHADOW);
+ 	kasan_populate(0, (unsigned long)__identity_va(0), POPULATE_KASAN_ZERO_SHADOW);
+ 	kasan_populate(AMODE31_START, AMODE31_END, POPULATE_KASAN_ZERO_SHADOW);
+ 	if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
+@@ -466,7 +466,17 @@ void setup_vmem(unsigned long kernel_start, unsigned long kernel_end, unsigned l
+ 				 (unsigned long)__identity_va(end),
+ 				 POPULATE_IDENTITY);
+ 	}
+-	pgtable_populate(kernel_start, kernel_end, POPULATE_KERNEL);
++
++	/*
++	 * [kernel_start..kernel_start + TEXT_OFFSET] region is never
++	 * accessed as per the linker script:
++	 *
++	 *	. = TEXT_OFFSET;
++	 *
++	 * Therefore, skip mapping TEXT_OFFSET bytes to prevent access to
++	 * [__kaslr_offset_phys..__kaslr_offset_phys + TEXT_OFFSET] region.
++	 */
++	pgtable_populate(kernel_start + TEXT_OFFSET, kernel_end, POPULATE_KERNEL);
+ 	pgtable_populate(AMODE31_START, AMODE31_END, POPULATE_DIRECT);
+ 	pgtable_populate(__abs_lowcore, __abs_lowcore + sizeof(struct lowcore),
+ 			 POPULATE_ABS_LOWCORE);
+diff --git a/arch/s390/boot/vmlinux.lds.S b/arch/s390/boot/vmlinux.lds.S
+index a750711d44c86..66670212a3611 100644
+--- a/arch/s390/boot/vmlinux.lds.S
++++ b/arch/s390/boot/vmlinux.lds.S
+@@ -109,7 +109,12 @@ SECTIONS
+ #ifdef CONFIG_KERNEL_UNCOMPRESSED
+ 	. = ALIGN(PAGE_SIZE);
+ 	. += AMODE31_SIZE;		/* .amode31 section */
+-	. = ALIGN(1 << 20);		/* _SEGMENT_SIZE */
++
++	/*
++	 * Make sure the location counter is not less than TEXT_OFFSET.
++	 * _SEGMENT_SIZE is not available, use ALIGN(1 << 20) instead.
++	 */
++	. = MAX(TEXT_OFFSET, ALIGN(1 << 20));
+ #else
+ 	. = ALIGN(8);
+ #endif
+diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
+index 224ff9d433ead..8cac1a737424d 100644
+--- a/arch/s390/include/asm/page.h
++++ b/arch/s390/include/asm/page.h
+@@ -276,8 +276,9 @@ static inline unsigned long virt_to_pfn(const void *kaddr)
+ #define AMODE31_SIZE		(3 * PAGE_SIZE)
+ 
+ #define KERNEL_IMAGE_SIZE	(512 * 1024 * 1024)
+-#define __START_KERNEL		0x100000
+ #define __NO_KASLR_START_KERNEL	CONFIG_KERNEL_IMAGE_BASE
+ #define __NO_KASLR_END_KERNEL	(__NO_KASLR_START_KERNEL + KERNEL_IMAGE_SIZE)
+ 
++#define TEXT_OFFSET		0x100000
++
+ #endif /* _S390_PAGE_H */
+diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
+index 0e7bd3873907f..b2e2f9a4163c5 100644
+--- a/arch/s390/include/asm/uv.h
++++ b/arch/s390/include/asm/uv.h
+@@ -442,7 +442,10 @@ static inline int share(unsigned long addr, u16 cmd)
+ 
+ 	if (!uv_call(0, (u64)&uvcb))
+ 		return 0;
+-	return -EINVAL;
++	pr_err("%s UVC failed (rc: 0x%x, rrc: 0x%x), possible hypervisor bug.\n",
++	       uvcb.header.cmd == UVC_CMD_SET_SHARED_ACCESS ? "Share" : "Unshare",
++	       uvcb.header.rc, uvcb.header.rrc);
++	panic("System security cannot be guaranteed unless the system panics now.\n");
+ }
+ 
+ /*
+diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
+index a1ce3925ec719..52bd969b28283 100644
+--- a/arch/s390/kernel/vmlinux.lds.S
++++ b/arch/s390/kernel/vmlinux.lds.S
+@@ -39,7 +39,7 @@ PHDRS {
+ 
+ SECTIONS
+ {
+-	. = __START_KERNEL;
++	. = TEXT_OFFSET;
+ 	.text : {
+ 		_stext = .;		/* Start of text section */
+ 		_text = .;		/* Text and read-only data */
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index bf8534218af3d..e680c6bf0c9d9 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -267,7 +267,12 @@ static inline unsigned long kvm_s390_get_gfn_end(struct kvm_memslots *slots)
+ 
+ static inline u32 kvm_s390_get_gisa_desc(struct kvm *kvm)
+ {
+-	u32 gd = virt_to_phys(kvm->arch.gisa_int.origin);
++	u32 gd;
++
++	if (!kvm->arch.gisa_int.origin)
++		return 0;
++
++	gd = virt_to_phys(kvm->arch.gisa_int.origin);
+ 
+ 	if (gd && sclp.has_gisaf)
+ 		gd |= GISA_FORMAT1;
+diff --git a/arch/s390/tools/relocs.c b/arch/s390/tools/relocs.c
+index a74dbd5c9896a..30a732c808f35 100644
+--- a/arch/s390/tools/relocs.c
++++ b/arch/s390/tools/relocs.c
+@@ -280,7 +280,7 @@ static int do_reloc(struct section *sec, Elf_Rel *rel)
+ 	case R_390_GOTOFF64:
+ 		break;
+ 	case R_390_64:
+-		add_reloc(&relocs64, offset - ehdr.e_entry);
++		add_reloc(&relocs64, offset);
+ 		break;
+ 	default:
+ 		die("Unsupported relocation type: %d\n", r_type);
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index cc57e2dd9a0bb..2cafcf11ee8be 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -38,6 +38,7 @@ static void blk_mq_update_wake_batch(struct blk_mq_tags *tags,
+ void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+ {
+ 	unsigned int users;
++	unsigned long flags;
+ 	struct blk_mq_tags *tags = hctx->tags;
+ 
+ 	/*
+@@ -56,11 +57,11 @@ void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+ 			return;
+ 	}
+ 
+-	spin_lock_irq(&tags->lock);
++	spin_lock_irqsave(&tags->lock, flags);
+ 	users = tags->active_queues + 1;
+ 	WRITE_ONCE(tags->active_queues, users);
+ 	blk_mq_update_wake_batch(tags, users);
+-	spin_unlock_irq(&tags->lock);
++	spin_unlock_irqrestore(&tags->lock, flags);
+ }
+ 
+ /*
+diff --git a/drivers/acpi/acpica/acevents.h b/drivers/acpi/acpica/acevents.h
+index 2133085deda77..1c5218b79fc2a 100644
+--- a/drivers/acpi/acpica/acevents.h
++++ b/drivers/acpi/acpica/acevents.h
+@@ -188,13 +188,9 @@ acpi_ev_detach_region(union acpi_operand_object *region_obj,
+ 		      u8 acpi_ns_is_locked);
+ 
+ void
+-acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
++acpi_ev_execute_reg_methods(struct acpi_namespace_node *node, u32 max_depth,
+ 			    acpi_adr_space_type space_id, u32 function);
+ 
+-void
+-acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *node,
+-				  acpi_adr_space_type space_id);
+-
+ acpi_status
+ acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function);
+ 
+diff --git a/drivers/acpi/acpica/evregion.c b/drivers/acpi/acpica/evregion.c
+index dc6004daf624b..cf53b9535f18e 100644
+--- a/drivers/acpi/acpica/evregion.c
++++ b/drivers/acpi/acpica/evregion.c
+@@ -20,6 +20,10 @@ extern u8 acpi_gbl_default_address_spaces[];
+ 
+ /* Local prototypes */
+ 
++static void
++acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *device_node,
++				  acpi_adr_space_type space_id);
++
+ static acpi_status
+ acpi_ev_reg_run(acpi_handle obj_handle,
+ 		u32 level, void *context, void **return_value);
+@@ -61,6 +65,7 @@ acpi_status acpi_ev_initialize_op_regions(void)
+ 						acpi_gbl_default_address_spaces
+ 						[i])) {
+ 			acpi_ev_execute_reg_methods(acpi_gbl_root_node,
++						    ACPI_UINT32_MAX,
+ 						    acpi_gbl_default_address_spaces
+ 						    [i], ACPI_REG_CONNECT);
+ 		}
+@@ -668,6 +673,7 @@ acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function)
+  * FUNCTION:    acpi_ev_execute_reg_methods
+  *
+  * PARAMETERS:  node            - Namespace node for the device
++ *              max_depth       - Depth to which search for _REG
+  *              space_id        - The address space ID
+  *              function        - Passed to _REG: On (1) or Off (0)
+  *
+@@ -679,7 +685,7 @@ acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function)
+  ******************************************************************************/
+ 
+ void
+-acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
++acpi_ev_execute_reg_methods(struct acpi_namespace_node *node, u32 max_depth,
+ 			    acpi_adr_space_type space_id, u32 function)
+ {
+ 	struct acpi_reg_walk_info info;
+@@ -713,7 +719,7 @@ acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
+ 	 * regions and _REG methods. (i.e. handlers must be installed for all
+ 	 * regions of this Space ID before we can run any _REG methods)
+ 	 */
+-	(void)acpi_ns_walk_namespace(ACPI_TYPE_ANY, node, ACPI_UINT32_MAX,
++	(void)acpi_ns_walk_namespace(ACPI_TYPE_ANY, node, max_depth,
+ 				     ACPI_NS_WALK_UNLOCK, acpi_ev_reg_run, NULL,
+ 				     &info, NULL);
+ 
+@@ -814,7 +820,7 @@ acpi_ev_reg_run(acpi_handle obj_handle,
+  *
+  ******************************************************************************/
+ 
+-void
++static void
+ acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *device_node,
+ 				  acpi_adr_space_type space_id)
+ {
+diff --git a/drivers/acpi/acpica/evxfregn.c b/drivers/acpi/acpica/evxfregn.c
+index 624361a5f34d8..95f78383bbdba 100644
+--- a/drivers/acpi/acpica/evxfregn.c
++++ b/drivers/acpi/acpica/evxfregn.c
+@@ -85,7 +85,8 @@ acpi_install_address_space_handler_internal(acpi_handle device,
+ 	/* Run all _REG methods for this address space */
+ 
+ 	if (run_reg) {
+-		acpi_ev_execute_reg_methods(node, space_id, ACPI_REG_CONNECT);
++		acpi_ev_execute_reg_methods(node, ACPI_UINT32_MAX, space_id,
++					    ACPI_REG_CONNECT);
+ 	}
+ 
+ unlock_and_exit:
+@@ -263,6 +264,7 @@ ACPI_EXPORT_SYMBOL(acpi_remove_address_space_handler)
+  * FUNCTION:    acpi_execute_reg_methods
+  *
+  * PARAMETERS:  device          - Handle for the device
++ *              max_depth       - Depth to which search for _REG
+  *              space_id        - The address space ID
+  *
+  * RETURN:      Status
+@@ -271,7 +273,8 @@ ACPI_EXPORT_SYMBOL(acpi_remove_address_space_handler)
+  *
+  ******************************************************************************/
+ acpi_status
+-acpi_execute_reg_methods(acpi_handle device, acpi_adr_space_type space_id)
++acpi_execute_reg_methods(acpi_handle device, u32 max_depth,
++			 acpi_adr_space_type space_id)
+ {
+ 	struct acpi_namespace_node *node;
+ 	acpi_status status;
+@@ -296,7 +299,8 @@ acpi_execute_reg_methods(acpi_handle device, acpi_adr_space_type space_id)
+ 
+ 		/* Run all _REG methods for this address space */
+ 
+-		acpi_ev_execute_reg_methods(node, space_id, ACPI_REG_CONNECT);
++		acpi_ev_execute_reg_methods(node, max_depth, space_id,
++					    ACPI_REG_CONNECT);
+ 	} else {
+ 		status = AE_BAD_PARAMETER;
+ 	}
+@@ -306,57 +310,3 @@ acpi_execute_reg_methods(acpi_handle device, acpi_adr_space_type space_id)
+ }
+ 
+ ACPI_EXPORT_SYMBOL(acpi_execute_reg_methods)
+-
+-/*******************************************************************************
+- *
+- * FUNCTION:    acpi_execute_orphan_reg_method
+- *
+- * PARAMETERS:  device          - Handle for the device
+- *              space_id        - The address space ID
+- *
+- * RETURN:      Status
+- *
+- * DESCRIPTION: Execute an "orphan" _REG method that appears under an ACPI
+- *              device. This is a _REG method that has no corresponding region
+- *              within the device's scope.
+- *
+- ******************************************************************************/
+-acpi_status
+-acpi_execute_orphan_reg_method(acpi_handle device, acpi_adr_space_type space_id)
+-{
+-	struct acpi_namespace_node *node;
+-	acpi_status status;
+-
+-	ACPI_FUNCTION_TRACE(acpi_execute_orphan_reg_method);
+-
+-	/* Parameter validation */
+-
+-	if (!device) {
+-		return_ACPI_STATUS(AE_BAD_PARAMETER);
+-	}
+-
+-	status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+-	if (ACPI_FAILURE(status)) {
+-		return_ACPI_STATUS(status);
+-	}
+-
+-	/* Convert and validate the device handle */
+-
+-	node = acpi_ns_validate_handle(device);
+-	if (node) {
+-
+-		/*
+-		 * If an "orphan" _REG method is present in the device's scope
+-		 * for the given address space ID, run it.
+-		 */
+-
+-		acpi_ev_execute_orphan_reg_method(node, space_id);
+-	} else {
+-		status = AE_BAD_PARAMETER;
+-	}
+-
+-	(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+-	return_ACPI_STATUS(status);
+-}
+-
+-ACPI_EXPORT_SYMBOL(acpi_execute_orphan_reg_method)
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 299ec653388ce..38d2f6e6b12b4 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -1487,12 +1487,13 @@ static bool install_gpio_irq_event_handler(struct acpi_ec *ec)
+ static int ec_install_handlers(struct acpi_ec *ec, struct acpi_device *device,
+ 			       bool call_reg)
+ {
+-	acpi_handle scope_handle = ec == first_ec ? ACPI_ROOT_OBJECT : ec->handle;
+ 	acpi_status status;
+ 
+ 	acpi_ec_start(ec, false);
+ 
+ 	if (!test_bit(EC_FLAGS_EC_HANDLER_INSTALLED, &ec->flags)) {
++		acpi_handle scope_handle = ec == first_ec ? ACPI_ROOT_OBJECT : ec->handle;
++
+ 		acpi_ec_enter_noirq(ec);
+ 		status = acpi_install_address_space_handler_no_reg(scope_handle,
+ 								   ACPI_ADR_SPACE_EC,
+@@ -1506,10 +1507,7 @@ static int ec_install_handlers(struct acpi_ec *ec, struct acpi_device *device,
+ 	}
+ 
+ 	if (call_reg && !test_bit(EC_FLAGS_EC_REG_CALLED, &ec->flags)) {
+-		acpi_execute_reg_methods(scope_handle, ACPI_ADR_SPACE_EC);
+-		if (scope_handle != ec->handle)
+-			acpi_execute_orphan_reg_method(ec->handle, ACPI_ADR_SPACE_EC);
+-
++		acpi_execute_reg_methods(ec->handle, ACPI_UINT32_MAX, ACPI_ADR_SPACE_EC);
+ 		set_bit(EC_FLAGS_EC_REG_CALLED, &ec->flags);
+ 	}
+ 
+@@ -1724,6 +1722,12 @@ static void acpi_ec_remove(struct acpi_device *device)
+ 	}
+ }
+ 
++void acpi_ec_register_opregions(struct acpi_device *adev)
++{
++	if (first_ec && first_ec->handle != adev->handle)
++		acpi_execute_reg_methods(adev->handle, 1, ACPI_ADR_SPACE_EC);
++}
++
+ static acpi_status
+ ec_parse_io_ports(struct acpi_resource *resource, void *context)
+ {
+diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
+index 601b670356e50..aadd4c218b320 100644
+--- a/drivers/acpi/internal.h
++++ b/drivers/acpi/internal.h
+@@ -223,6 +223,7 @@ int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
+ 			      acpi_handle handle, acpi_ec_query_func func,
+ 			      void *data);
+ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit);
++void acpi_ec_register_opregions(struct acpi_device *adev);
+ 
+ #ifdef CONFIG_PM_SLEEP
+ void acpi_ec_flush_work(void);
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 503773707e012..cdc5a74092c77 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -2264,6 +2264,8 @@ static int acpi_bus_attach(struct acpi_device *device, void *first_pass)
+ 	if (device->handler)
+ 		goto ok;
+ 
++	acpi_ec_register_opregions(device);
++
+ 	if (!device->flags.initialized) {
+ 		device->flags.power_manageable =
+ 			device->power.states[ACPI_STATE_D0].flags.valid;
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 2cc3821b2b16e..ff6f260433a11 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -54,6 +54,8 @@ static void acpi_video_parse_cmdline(void)
+ 		acpi_backlight_cmdline = acpi_backlight_nvidia_wmi_ec;
+ 	if (!strcmp("apple_gmux", acpi_video_backlight_string))
+ 		acpi_backlight_cmdline = acpi_backlight_apple_gmux;
++	if (!strcmp("dell_uart", acpi_video_backlight_string))
++		acpi_backlight_cmdline = acpi_backlight_dell_uart;
+ 	if (!strcmp("none", acpi_video_backlight_string))
+ 		acpi_backlight_cmdline = acpi_backlight_none;
+ }
+@@ -805,6 +807,21 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		},
+ 	},
+ 
++	/*
++	 * Dell AIO (All in Ones) which advertise an UART attached backlight
++	 * controller board in their ACPI tables (and may even have one), but
++	 * which need native backlight control nevertheless.
++	 */
++	{
++	 /* https://bugzilla.redhat.com/show_bug.cgi?id=2303936 */
++	 .callback = video_detect_force_native,
++	 /* Dell OptiPlex 7760 AIO */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 7760 AIO"),
++		},
++	},
++
+ 	/*
+ 	 * Models which have nvidia-ec-wmi support, but should not use it.
+ 	 * Note this indicates a likely firmware bug on these models and should
+@@ -902,6 +919,7 @@ enum acpi_backlight_type __acpi_video_get_backlight_type(bool native, bool *auto
+ 	static DEFINE_MUTEX(init_mutex);
+ 	static bool nvidia_wmi_ec_present;
+ 	static bool apple_gmux_present;
++	static bool dell_uart_present;
+ 	static bool native_available;
+ 	static bool init_done;
+ 	static long video_caps;
+@@ -916,6 +934,7 @@ enum acpi_backlight_type __acpi_video_get_backlight_type(bool native, bool *auto
+ 				    &video_caps, NULL);
+ 		nvidia_wmi_ec_present = nvidia_wmi_ec_supported();
+ 		apple_gmux_present = apple_gmux_detect(NULL, NULL);
++		dell_uart_present = acpi_dev_present("DELL0501", NULL, -1);
+ 		init_done = true;
+ 	}
+ 	if (native)
+@@ -946,6 +965,9 @@ enum acpi_backlight_type __acpi_video_get_backlight_type(bool native, bool *auto
+ 	if (apple_gmux_present)
+ 		return acpi_backlight_apple_gmux;
+ 
++	if (dell_uart_present)
++		return acpi_backlight_dell_uart;
++
+ 	/* Use ACPI video if available, except when native should be preferred. */
+ 	if ((video_caps & ACPI_VIDEO_BACKLIGHT) &&
+ 	     !(native_available && prefer_native_over_acpi_video()))
+diff --git a/drivers/ata/pata_macio.c b/drivers/ata/pata_macio.c
+index 3cb455a32d926..99fc5d9d95d7a 100644
+--- a/drivers/ata/pata_macio.c
++++ b/drivers/ata/pata_macio.c
+@@ -208,6 +208,19 @@ static const char* macio_ata_names[] = {
+ /* Don't let a DMA segment go all the way to 64K */
+ #define MAX_DBDMA_SEG		0xff00
+ 
++#ifdef CONFIG_PAGE_SIZE_64KB
++/*
++ * The SCSI core requires the segment size to cover at least a page, so
++ * for 64K page size kernels it must be at least 64K. However the
++ * hardware can't handle 64K, so pata_macio_qc_prep() will split large
++ * requests. To handle the split requests the tablesize must be halved.
++ */
++#define PATA_MACIO_MAX_SEGMENT_SIZE	SZ_64K
++#define PATA_MACIO_SG_TABLESIZE		(MAX_DCMDS / 2)
++#else
++#define PATA_MACIO_MAX_SEGMENT_SIZE	MAX_DBDMA_SEG
++#define PATA_MACIO_SG_TABLESIZE		MAX_DCMDS
++#endif
+ 
+ /*
+  * Wait 1s for disk to answer on IDE bus after a hard reset
+@@ -912,16 +925,10 @@ static int pata_macio_do_resume(struct pata_macio_priv *priv)
+ 
+ static const struct scsi_host_template pata_macio_sht = {
+ 	__ATA_BASE_SHT(DRV_NAME),
+-	.sg_tablesize		= MAX_DCMDS,
++	.sg_tablesize		= PATA_MACIO_SG_TABLESIZE,
+ 	/* We may not need that strict one */
+ 	.dma_boundary		= ATA_DMA_BOUNDARY,
+-	/*
+-	 * The SCSI core requires the segment size to cover at least a page, so
+-	 * for 64K page size kernels this must be at least 64K. However the
+-	 * hardware can't handle 64K, so pata_macio_qc_prep() will split large
+-	 * requests.
+-	 */
+-	.max_segment_size	= SZ_64K,
++	.max_segment_size	= PATA_MACIO_MAX_SEGMENT_SIZE,
+ 	.device_configure	= pata_macio_device_configure,
+ 	.sdev_groups		= ata_common_sdev_groups,
+ 	.can_queue		= ATA_DEF_QUEUE,
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index e7f713cd70d3f..a876024d8a05f 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -1118,8 +1118,8 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
+ 	rpp->len += skb->len;
+ 
+ 	if (stat & SAR_RSQE_EPDU) {
++		unsigned int len, truesize;
+ 		unsigned char *l1l2;
+-		unsigned int len;
+ 
+ 		l1l2 = (unsigned char *) ((unsigned long) skb->data + skb->len - 6);
+ 
+@@ -1189,14 +1189,15 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
+ 		ATM_SKB(skb)->vcc = vcc;
+ 		__net_timestamp(skb);
+ 
++		truesize = skb->truesize;
+ 		vcc->push(vcc, skb);
+ 		atomic_inc(&vcc->stats->rx);
+ 
+-		if (skb->truesize > SAR_FB_SIZE_3)
++		if (truesize > SAR_FB_SIZE_3)
+ 			add_rx_skb(card, 3, SAR_FB_SIZE_3, 1);
+-		else if (skb->truesize > SAR_FB_SIZE_2)
++		else if (truesize > SAR_FB_SIZE_2)
+ 			add_rx_skb(card, 2, SAR_FB_SIZE_2, 1);
+-		else if (skb->truesize > SAR_FB_SIZE_1)
++		else if (truesize > SAR_FB_SIZE_1)
+ 			add_rx_skb(card, 1, SAR_FB_SIZE_1, 1);
+ 		else
+ 			add_rx_skb(card, 0, SAR_FB_SIZE_0, 1);
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 93900c37349c1..c084dc88d3d91 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -2876,9 +2876,6 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 					       INTEL_ROM_LEGACY_NO_WBS_SUPPORT))
+ 				set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+ 					&hdev->quirks);
+-			if (ver.hw_variant == 0x08 && ver.fw_variant == 0x22)
+-				set_bit(HCI_QUIRK_VALID_LE_STATES,
+-					&hdev->quirks);
+ 
+ 			err = btintel_legacy_rom_setup(hdev, &ver);
+ 			break;
+@@ -2887,7 +2884,6 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 		case 0x12:      /* ThP */
+ 		case 0x13:      /* HrP */
+ 		case 0x14:      /* CcP */
+-			set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
+ 			fallthrough;
+ 		case 0x0c:	/* WsP */
+ 			/* Apply the device specific HCI quirks
+@@ -2979,9 +2975,6 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 		/* These variants don't seem to support LE Coded PHY */
+ 		set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
+ 
+-		/* Set Valid LE States quirk */
+-		set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
+-
+ 		/* Setup MSFT Extension support */
+ 		btintel_set_msft_opcode(hdev, ver.hw_variant);
+ 
+@@ -3003,9 +2996,6 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 		 */
+ 		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+ 
+-		/* Apply LE States quirk from solar onwards */
+-		set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
+-
+ 		/* Setup MSFT Extension support */
+ 		btintel_set_msft_opcode(hdev,
+ 					INTEL_HW_VARIANT(ver_tlv.cnvi_bt));
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index b8120b98a2395..1fd3b7073ab90 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -1182,9 +1182,6 @@ static int btintel_pcie_setup(struct hci_dev *hdev)
+ 		 */
+ 		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+ 
+-		/* Apply LE States quirk from solar onwards */
+-		set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
+-
+ 		/* Setup MSFT Extension support */
+ 		btintel_set_msft_opcode(hdev,
+ 					INTEL_HW_VARIANT(ver_tlv.cnvi_bt));
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index 8ded9ef8089a2..bc4700ed3b782 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -1144,9 +1144,6 @@ static int btmtksdio_setup(struct hci_dev *hdev)
+ 			}
+ 		}
+ 
+-		/* Valid LE States quirk for MediaTek 7921 */
+-		set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
+-
+ 		break;
+ 	case 0x7663:
+ 	case 0x7668:
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index 4f1e37b4f7802..bfcb41a57655f 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -1287,7 +1287,6 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
+ 	case CHIP_ID_8852C:
+ 	case CHIP_ID_8851B:
+ 	case CHIP_ID_8852BT:
+-		set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
+ 		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+ 
+ 		/* RTL8852C needs to transmit mSBC data continuously without
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 789c492df6fa2..0927f51867c26 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -4545,8 +4545,8 @@ static int btusb_probe(struct usb_interface *intf,
+ 	if (id->driver_info & BTUSB_WIDEBAND_SPEECH)
+ 		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+ 
+-	if (id->driver_info & BTUSB_VALID_LE_STATES)
+-		set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
++	if (!(id->driver_info & BTUSB_VALID_LE_STATES))
++		set_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks);
+ 
+ 	if (id->driver_info & BTUSB_DIGIANSWER) {
+ 		data->cmdreq_type = USB_TYPE_VENDOR;
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 9a0bc86f9aace..34c36f0f781ea 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -2408,8 +2408,8 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ 			set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+ 				&hdev->quirks);
+ 
+-		if (data->capabilities & QCA_CAP_VALID_LE_STATES)
+-			set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
++		if (!(data->capabilities & QCA_CAP_VALID_LE_STATES))
++			set_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c
+index 28750a40f0ed5..b652d68f0ee14 100644
+--- a/drivers/bluetooth/hci_vhci.c
++++ b/drivers/bluetooth/hci_vhci.c
+@@ -425,8 +425,6 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
+ 	if (opcode & 0x80)
+ 		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
+ 
+-	set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
+-
+ 	if (hci_register_dev(hdev) < 0) {
+ 		BT_ERR("Can't register HCI device");
+ 		hci_free_dev(hdev);
+diff --git a/drivers/char/xillybus/xillyusb.c b/drivers/char/xillybus/xillyusb.c
+index 5a5afa14ca8cb..45771b1a3716a 100644
+--- a/drivers/char/xillybus/xillyusb.c
++++ b/drivers/char/xillybus/xillyusb.c
+@@ -50,6 +50,7 @@ MODULE_LICENSE("GPL v2");
+ static const char xillyname[] = "xillyusb";
+ 
+ static unsigned int fifo_buf_order;
++static struct workqueue_struct *wakeup_wq;
+ 
+ #define USB_VENDOR_ID_XILINX		0x03fd
+ #define USB_VENDOR_ID_ALTERA		0x09fb
+@@ -569,10 +570,6 @@ static void cleanup_dev(struct kref *kref)
+  * errors if executed. The mechanism relies on that xdev->error is assigned
+  * a non-zero value by report_io_error() prior to queueing wakeup_all(),
+  * which prevents bulk_in_work() from calling process_bulk_in().
+- *
+- * The fact that wakeup_all() and bulk_in_work() are queued on the same
+- * workqueue makes their concurrent execution very unlikely, however the
+- * kernel's API doesn't seem to ensure this strictly.
+  */
+ 
+ static void wakeup_all(struct work_struct *work)
+@@ -627,7 +624,7 @@ static void report_io_error(struct xillyusb_dev *xdev,
+ 
+ 	if (do_once) {
+ 		kref_get(&xdev->kref); /* xdev is used by work item */
+-		queue_work(xdev->workq, &xdev->wakeup_workitem);
++		queue_work(wakeup_wq, &xdev->wakeup_workitem);
+ 	}
+ }
+ 
+@@ -1906,6 +1903,13 @@ static const struct file_operations xillyusb_fops = {
+ 
+ static int xillyusb_setup_base_eps(struct xillyusb_dev *xdev)
+ {
++	struct usb_device *udev = xdev->udev;
++
++	/* Verify that device has the two fundamental bulk in/out endpoints */
++	if (usb_pipe_type_check(udev, usb_sndbulkpipe(udev, MSG_EP_NUM)) ||
++	    usb_pipe_type_check(udev, usb_rcvbulkpipe(udev, IN_EP_NUM)))
++		return -ENODEV;
++
+ 	xdev->msg_ep = endpoint_alloc(xdev, MSG_EP_NUM | USB_DIR_OUT,
+ 				      bulk_out_work, 1, 2);
+ 	if (!xdev->msg_ep)
+@@ -1935,14 +1939,15 @@ static int setup_channels(struct xillyusb_dev *xdev,
+ 			  __le16 *chandesc,
+ 			  int num_channels)
+ {
+-	struct xillyusb_channel *chan;
++	struct usb_device *udev = xdev->udev;
++	struct xillyusb_channel *chan, *new_channels;
+ 	int i;
+ 
+ 	chan = kcalloc(num_channels, sizeof(*chan), GFP_KERNEL);
+ 	if (!chan)
+ 		return -ENOMEM;
+ 
+-	xdev->channels = chan;
++	new_channels = chan;
+ 
+ 	for (i = 0; i < num_channels; i++, chan++) {
+ 		unsigned int in_desc = le16_to_cpu(*chandesc++);
+@@ -1971,6 +1976,15 @@ static int setup_channels(struct xillyusb_dev *xdev,
+ 		 */
+ 
+ 		if ((out_desc & 0x80) && i < 14) { /* Entry is valid */
++			if (usb_pipe_type_check(udev,
++						usb_sndbulkpipe(udev, i + 2))) {
++				dev_err(xdev->dev,
++					"Missing BULK OUT endpoint %d\n",
++					i + 2);
++				kfree(new_channels);
++				return -ENODEV;
++			}
++
+ 			chan->writable = 1;
+ 			chan->out_synchronous = !!(out_desc & 0x40);
+ 			chan->out_seekable = !!(out_desc & 0x20);
+@@ -1980,6 +1994,7 @@ static int setup_channels(struct xillyusb_dev *xdev,
+ 		}
+ 	}
+ 
++	xdev->channels = new_channels;
+ 	return 0;
+ }
+ 
+@@ -2096,9 +2111,11 @@ static int xillyusb_discovery(struct usb_interface *interface)
+ 	 * just after responding with the IDT, there is no reason for any
+ 	 * work item to be running now. To be sure that xdev->channels
+ 	 * is updated on anything that might run in parallel, flush the
+-	 * workqueue, which rarely does anything.
++	 * device's workqueue and the wakeup work item. This rarely
++	 * does anything.
+ 	 */
+ 	flush_workqueue(xdev->workq);
++	flush_work(&xdev->wakeup_workitem);
+ 
+ 	xdev->num_channels = num_channels;
+ 
+@@ -2258,6 +2275,10 @@ static int __init xillyusb_init(void)
+ {
+ 	int rc = 0;
+ 
++	wakeup_wq = alloc_workqueue(xillyname, 0, 0);
++	if (!wakeup_wq)
++		return -ENOMEM;
++
+ 	if (LOG2_INITIAL_FIFO_BUF_SIZE > PAGE_SHIFT)
+ 		fifo_buf_order = LOG2_INITIAL_FIFO_BUF_SIZE - PAGE_SHIFT;
+ 	else
+@@ -2265,12 +2286,17 @@ static int __init xillyusb_init(void)
+ 
+ 	rc = usb_register(&xillyusb_driver);
+ 
++	if (rc)
++		destroy_workqueue(wakeup_wq);
++
+ 	return rc;
+ }
+ 
+ static void __exit xillyusb_exit(void)
+ {
+ 	usb_deregister(&xillyusb_driver);
++
++	destroy_workqueue(wakeup_wq);
+ }
+ 
+ module_init(xillyusb_init);
+diff --git a/drivers/gpio/gpio-mlxbf3.c b/drivers/gpio/gpio-mlxbf3.c
+index d5906d419b0ab..10ea71273c891 100644
+--- a/drivers/gpio/gpio-mlxbf3.c
++++ b/drivers/gpio/gpio-mlxbf3.c
+@@ -39,6 +39,8 @@
+ #define MLXBF_GPIO_CAUSE_OR_EVTEN0        0x14
+ #define MLXBF_GPIO_CAUSE_OR_CLRCAUSE      0x18
+ 
++#define MLXBF_GPIO_CLR_ALL_INTS           GENMASK(31, 0)
++
+ struct mlxbf3_gpio_context {
+ 	struct gpio_chip gc;
+ 
+@@ -82,6 +84,8 @@ static void mlxbf3_gpio_irq_disable(struct irq_data *irqd)
+ 	val = readl(gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_EVTEN0);
+ 	val &= ~BIT(offset);
+ 	writel(val, gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_EVTEN0);
++
++	writel(BIT(offset), gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_CLRCAUSE);
+ 	raw_spin_unlock_irqrestore(&gs->gc.bgpio_lock, flags);
+ 
+ 	gpiochip_disable_irq(gc, offset);
+@@ -253,6 +257,15 @@ static int mlxbf3_gpio_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static void mlxbf3_gpio_shutdown(struct platform_device *pdev)
++{
++	struct mlxbf3_gpio_context *gs = platform_get_drvdata(pdev);
++
++	/* Disable and clear all interrupts */
++	writel(0, gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_EVTEN0);
++	writel(MLXBF_GPIO_CLR_ALL_INTS, gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_CLRCAUSE);
++}
++
+ static const struct acpi_device_id mlxbf3_gpio_acpi_match[] = {
+ 	{ "MLNXBF33", 0 },
+ 	{}
+@@ -265,6 +278,7 @@ static struct platform_driver mlxbf3_gpio_driver = {
+ 		.acpi_match_table = mlxbf3_gpio_acpi_match,
+ 	},
+ 	.probe    = mlxbf3_gpio_probe,
++	.shutdown = mlxbf3_gpio_shutdown,
+ };
+ module_platform_driver(mlxbf3_gpio_driver);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 13eb2bc69e342..936c98a13a240 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -1057,6 +1057,9 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p,
+ 			r = amdgpu_ring_parse_cs(ring, p, job, ib);
+ 			if (r)
+ 				return r;
++
++			if (ib->sa_bo)
++				ib->gpu_addr =  amdgpu_sa_bo_gpu_addr(ib->sa_bo);
+ 		} else {
+ 			ib->ptr = (uint32_t *)kptr;
+ 			r = amdgpu_ring_patch_cs_in_place(ring, p, job, ib);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+index 5cb33ac99f708..c43d1b6e5d66b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+@@ -685,16 +685,24 @@ int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
+ 
+ 	switch (args->in.op) {
+ 	case AMDGPU_CTX_OP_ALLOC_CTX:
++		if (args->in.flags)
++			return -EINVAL;
+ 		r = amdgpu_ctx_alloc(adev, fpriv, filp, priority, &id);
+ 		args->out.alloc.ctx_id = id;
+ 		break;
+ 	case AMDGPU_CTX_OP_FREE_CTX:
++		if (args->in.flags)
++			return -EINVAL;
+ 		r = amdgpu_ctx_free(fpriv, id);
+ 		break;
+ 	case AMDGPU_CTX_OP_QUERY_STATE:
++		if (args->in.flags)
++			return -EINVAL;
+ 		r = amdgpu_ctx_query(adev, fpriv, id, &args->out);
+ 		break;
+ 	case AMDGPU_CTX_OP_QUERY_STATE2:
++		if (args->in.flags)
++			return -EINVAL;
+ 		r = amdgpu_ctx_query2(adev, fpriv, id, &args->out);
+ 		break;
+ 	case AMDGPU_CTX_OP_GET_STABLE_PSTATE:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c
+index 8e8afbd237bcd..9aff579c6abf5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c
+@@ -166,6 +166,9 @@ static ssize_t ta_if_load_debugfs_write(struct file *fp, const char *buf, size_t
+ 	if (ret)
+ 		return -EFAULT;
+ 
++	if (ta_bin_len > PSP_1_MEG)
++		return -EINVAL;
++
+ 	copy_pos += sizeof(uint32_t);
+ 
+ 	ta_bin = kzalloc(ta_bin_len, GFP_KERNEL);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index 677eb141554e0..ceb3f1e4ed1dc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -151,6 +151,10 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
+ 		}
+ 	}
+ 
++	/* from vcn4 and above, only unified queue is used */
++	adev->vcn.using_unified_queue =
++		amdgpu_ip_version(adev, UVD_HWIP, 0) >= IP_VERSION(4, 0, 0);
++
+ 	hdr = (const struct common_firmware_header *)adev->vcn.fw[0]->data;
+ 	adev->vcn.fw_version = le32_to_cpu(hdr->ucode_version);
+ 
+@@ -279,18 +283,6 @@ int amdgpu_vcn_sw_fini(struct amdgpu_device *adev)
+ 	return 0;
+ }
+ 
+-/* from vcn4 and above, only unified queue is used */
+-static bool amdgpu_vcn_using_unified_queue(struct amdgpu_ring *ring)
+-{
+-	struct amdgpu_device *adev = ring->adev;
+-	bool ret = false;
+-
+-	if (amdgpu_ip_version(adev, UVD_HWIP, 0) >= IP_VERSION(4, 0, 0))
+-		ret = true;
+-
+-	return ret;
+-}
+-
+ bool amdgpu_vcn_is_disabled_vcn(struct amdgpu_device *adev, enum vcn_ring_type type, uint32_t vcn_instance)
+ {
+ 	bool ret = false;
+@@ -401,7 +393,9 @@ static void amdgpu_vcn_idle_work_handler(struct work_struct *work)
+ 		for (i = 0; i < adev->vcn.num_enc_rings; ++i)
+ 			fence[j] += amdgpu_fence_count_emitted(&adev->vcn.inst[j].ring_enc[i]);
+ 
+-		if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG)	{
++		/* Only set DPG pause for VCN3 or below, VCN4 and above will be handled by FW */
++		if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG &&
++		    !adev->vcn.using_unified_queue) {
+ 			struct dpg_pause_state new_state;
+ 
+ 			if (fence[j] ||
+@@ -447,7 +441,9 @@ void amdgpu_vcn_ring_begin_use(struct amdgpu_ring *ring)
+ 	amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,
+ 	       AMD_PG_STATE_UNGATE);
+ 
+-	if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG)	{
++	/* Only set DPG pause for VCN3 or below, VCN4 and above will be handled by FW */
++	if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG &&
++	    !adev->vcn.using_unified_queue) {
+ 		struct dpg_pause_state new_state;
+ 
+ 		if (ring->funcs->type == AMDGPU_RING_TYPE_VCN_ENC) {
+@@ -473,8 +469,12 @@ void amdgpu_vcn_ring_begin_use(struct amdgpu_ring *ring)
+ 
+ void amdgpu_vcn_ring_end_use(struct amdgpu_ring *ring)
+ {
++	struct amdgpu_device *adev = ring->adev;
++
++	/* Only set DPG pause for VCN3 or below, VCN4 and above will be handled by FW */
+ 	if (ring->adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG &&
+-		ring->funcs->type == AMDGPU_RING_TYPE_VCN_ENC)
++	    ring->funcs->type == AMDGPU_RING_TYPE_VCN_ENC &&
++	    !adev->vcn.using_unified_queue)
+ 		atomic_dec(&ring->adev->vcn.inst[ring->me].dpg_enc_submission_cnt);
+ 
+ 	atomic_dec(&ring->adev->vcn.total_submission_cnt);
+@@ -728,12 +728,11 @@ static int amdgpu_vcn_dec_sw_send_msg(struct amdgpu_ring *ring,
+ 	struct amdgpu_job *job;
+ 	struct amdgpu_ib *ib;
+ 	uint64_t addr = AMDGPU_GPU_PAGE_ALIGN(ib_msg->gpu_addr);
+-	bool sq = amdgpu_vcn_using_unified_queue(ring);
+ 	uint32_t *ib_checksum;
+ 	uint32_t ib_pack_in_dw;
+ 	int i, r;
+ 
+-	if (sq)
++	if (adev->vcn.using_unified_queue)
+ 		ib_size_dw += 8;
+ 
+ 	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
+@@ -746,7 +745,7 @@ static int amdgpu_vcn_dec_sw_send_msg(struct amdgpu_ring *ring,
+ 	ib->length_dw = 0;
+ 
+ 	/* single queue headers */
+-	if (sq) {
++	if (adev->vcn.using_unified_queue) {
+ 		ib_pack_in_dw = sizeof(struct amdgpu_vcn_decode_buffer) / sizeof(uint32_t)
+ 						+ 4 + 2; /* engine info + decoding ib in dw */
+ 		ib_checksum = amdgpu_vcn_unified_ring_ib_header(ib, ib_pack_in_dw, false);
+@@ -765,7 +764,7 @@ static int amdgpu_vcn_dec_sw_send_msg(struct amdgpu_ring *ring,
+ 	for (i = ib->length_dw; i < ib_size_dw; ++i)
+ 		ib->ptr[i] = 0x0;
+ 
+-	if (sq)
++	if (adev->vcn.using_unified_queue)
+ 		amdgpu_vcn_unified_ring_ib_checksum(&ib_checksum, ib_pack_in_dw);
+ 
+ 	r = amdgpu_job_submit_direct(job, ring, &f);
+@@ -855,15 +854,15 @@ static int amdgpu_vcn_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t hand
+ 					 struct dma_fence **fence)
+ {
+ 	unsigned int ib_size_dw = 16;
++	struct amdgpu_device *adev = ring->adev;
+ 	struct amdgpu_job *job;
+ 	struct amdgpu_ib *ib;
+ 	struct dma_fence *f = NULL;
+ 	uint32_t *ib_checksum = NULL;
+ 	uint64_t addr;
+-	bool sq = amdgpu_vcn_using_unified_queue(ring);
+ 	int i, r;
+ 
+-	if (sq)
++	if (adev->vcn.using_unified_queue)
+ 		ib_size_dw += 8;
+ 
+ 	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
+@@ -877,7 +876,7 @@ static int amdgpu_vcn_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t hand
+ 
+ 	ib->length_dw = 0;
+ 
+-	if (sq)
++	if (adev->vcn.using_unified_queue)
+ 		ib_checksum = amdgpu_vcn_unified_ring_ib_header(ib, 0x11, true);
+ 
+ 	ib->ptr[ib->length_dw++] = 0x00000018;
+@@ -899,7 +898,7 @@ static int amdgpu_vcn_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t hand
+ 	for (i = ib->length_dw; i < ib_size_dw; ++i)
+ 		ib->ptr[i] = 0x0;
+ 
+-	if (sq)
++	if (adev->vcn.using_unified_queue)
+ 		amdgpu_vcn_unified_ring_ib_checksum(&ib_checksum, 0x11);
+ 
+ 	r = amdgpu_job_submit_direct(job, ring, &f);
+@@ -922,15 +921,15 @@ static int amdgpu_vcn_enc_get_destroy_msg(struct amdgpu_ring *ring, uint32_t han
+ 					  struct dma_fence **fence)
+ {
+ 	unsigned int ib_size_dw = 16;
++	struct amdgpu_device *adev = ring->adev;
+ 	struct amdgpu_job *job;
+ 	struct amdgpu_ib *ib;
+ 	struct dma_fence *f = NULL;
+ 	uint32_t *ib_checksum = NULL;
+ 	uint64_t addr;
+-	bool sq = amdgpu_vcn_using_unified_queue(ring);
+ 	int i, r;
+ 
+-	if (sq)
++	if (adev->vcn.using_unified_queue)
+ 		ib_size_dw += 8;
+ 
+ 	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
+@@ -944,7 +943,7 @@ static int amdgpu_vcn_enc_get_destroy_msg(struct amdgpu_ring *ring, uint32_t han
+ 
+ 	ib->length_dw = 0;
+ 
+-	if (sq)
++	if (adev->vcn.using_unified_queue)
+ 		ib_checksum = amdgpu_vcn_unified_ring_ib_header(ib, 0x11, true);
+ 
+ 	ib->ptr[ib->length_dw++] = 0x00000018;
+@@ -966,7 +965,7 @@ static int amdgpu_vcn_enc_get_destroy_msg(struct amdgpu_ring *ring, uint32_t han
+ 	for (i = ib->length_dw; i < ib_size_dw; ++i)
+ 		ib->ptr[i] = 0x0;
+ 
+-	if (sq)
++	if (adev->vcn.using_unified_queue)
+ 		amdgpu_vcn_unified_ring_ib_checksum(&ib_checksum, 0x11);
+ 
+ 	r = amdgpu_job_submit_direct(job, ring, &f);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+index 9f06def236fdc..1a5439abd1a04 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+@@ -329,6 +329,7 @@ struct amdgpu_vcn {
+ 
+ 	uint16_t inst_mask;
+ 	uint8_t	num_inst_per_aid;
++	bool using_unified_queue;
+ };
+ 
+ struct amdgpu_fw_shared_rb_ptrs_struct {
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+index ef3e42f6b8411..63f84ef6dfcf2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+@@ -543,11 +543,11 @@ void jpeg_v2_0_dec_ring_emit_ib(struct amdgpu_ring *ring,
+ 
+ 	amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JRBC_IB_VMID_INTERNAL_OFFSET,
+ 		0, 0, PACKETJ_TYPE0));
+-	amdgpu_ring_write(ring, (vmid | (vmid << 4)));
++	amdgpu_ring_write(ring, (vmid | (vmid << 4) | (vmid << 8)));
+ 
+ 	amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JPEG_VMID_INTERNAL_OFFSET,
+ 		0, 0, PACKETJ_TYPE0));
+-	amdgpu_ring_write(ring, (vmid | (vmid << 4)));
++	amdgpu_ring_write(ring, (vmid | (vmid << 4) | (vmid << 8)));
+ 
+ 	amdgpu_ring_write(ring,	PACKETJ(mmUVD_LMI_JRBC_IB_64BIT_BAR_LOW_INTERNAL_OFFSET,
+ 		0, 0, PACKETJ_TYPE0));
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+index d66af11aa66c7..d24d06f6d682a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+@@ -23,6 +23,7 @@
+ 
+ #include "amdgpu.h"
+ #include "amdgpu_jpeg.h"
++#include "amdgpu_cs.h"
+ #include "soc15.h"
+ #include "soc15d.h"
+ #include "jpeg_v4_0_3.h"
+@@ -773,11 +774,15 @@ void jpeg_v4_0_3_dec_ring_emit_ib(struct amdgpu_ring *ring,
+ 
+ 	amdgpu_ring_write(ring, PACKETJ(regUVD_LMI_JRBC_IB_VMID_INTERNAL_OFFSET,
+ 		0, 0, PACKETJ_TYPE0));
+-	amdgpu_ring_write(ring, (vmid | (vmid << 4)));
++
++	if (ring->funcs->parse_cs)
++		amdgpu_ring_write(ring, 0);
++	else
++		amdgpu_ring_write(ring, (vmid | (vmid << 4) | (vmid << 8)));
+ 
+ 	amdgpu_ring_write(ring, PACKETJ(regUVD_LMI_JPEG_VMID_INTERNAL_OFFSET,
+ 		0, 0, PACKETJ_TYPE0));
+-	amdgpu_ring_write(ring, (vmid | (vmid << 4)));
++	amdgpu_ring_write(ring, (vmid | (vmid << 4) | (vmid << 8)));
+ 
+ 	amdgpu_ring_write(ring,	PACKETJ(regUVD_LMI_JRBC_IB_64BIT_BAR_LOW_INTERNAL_OFFSET,
+ 		0, 0, PACKETJ_TYPE0));
+@@ -1063,6 +1068,7 @@ static const struct amdgpu_ring_funcs jpeg_v4_0_3_dec_ring_vm_funcs = {
+ 	.get_rptr = jpeg_v4_0_3_dec_ring_get_rptr,
+ 	.get_wptr = jpeg_v4_0_3_dec_ring_get_wptr,
+ 	.set_wptr = jpeg_v4_0_3_dec_ring_set_wptr,
++	.parse_cs = jpeg_v4_0_3_dec_ring_parse_cs,
+ 	.emit_frame_size =
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+ 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+@@ -1227,3 +1233,56 @@ static void jpeg_v4_0_3_set_ras_funcs(struct amdgpu_device *adev)
+ {
+ 	adev->jpeg.ras = &jpeg_v4_0_3_ras;
+ }
++
++/**
++ * jpeg_v4_0_3_dec_ring_parse_cs - command submission parser
++ *
++ * @parser: Command submission parser context
++ * @job: the job to parse
++ * @ib: the IB to parse
++ *
++ * Parse the command stream, return -EINVAL for invalid packet,
++ * 0 otherwise
++ */
++int jpeg_v4_0_3_dec_ring_parse_cs(struct amdgpu_cs_parser *parser,
++			     struct amdgpu_job *job,
++			     struct amdgpu_ib *ib)
++{
++	uint32_t i, reg, res, cond, type;
++	struct amdgpu_device *adev = parser->adev;
++
++	for (i = 0; i < ib->length_dw ; i += 2) {
++		reg  = CP_PACKETJ_GET_REG(ib->ptr[i]);
++		res  = CP_PACKETJ_GET_RES(ib->ptr[i]);
++		cond = CP_PACKETJ_GET_COND(ib->ptr[i]);
++		type = CP_PACKETJ_GET_TYPE(ib->ptr[i]);
++
++		if (res) /* only support 0 at the moment */
++			return -EINVAL;
++
++		switch (type) {
++		case PACKETJ_TYPE0:
++			if (cond != PACKETJ_CONDITION_CHECK0 || reg < JPEG_REG_RANGE_START || reg > JPEG_REG_RANGE_END) {
++				dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]);
++				return -EINVAL;
++			}
++			break;
++		case PACKETJ_TYPE3:
++			if (cond != PACKETJ_CONDITION_CHECK3 || reg < JPEG_REG_RANGE_START || reg > JPEG_REG_RANGE_END) {
++				dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]);
++				return -EINVAL;
++			}
++			break;
++		case PACKETJ_TYPE6:
++			if (ib->ptr[i] == CP_PACKETJ_NOP)
++				continue;
++			dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]);
++			return -EINVAL;
++		default:
++			dev_err(adev->dev, "Unknown packet type %d !\n", type);
++			return -EINVAL;
++		}
++	}
++
++	return 0;
++}
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h
+index 747a3e5f68564..71c54b294e157 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h
+@@ -46,6 +46,9 @@
+ 
+ #define JRBC_DEC_EXTERNAL_REG_WRITE_ADDR				0x18000
+ 
++#define JPEG_REG_RANGE_START						0x4000
++#define JPEG_REG_RANGE_END						0x41c2
++
+ extern const struct amdgpu_ip_block_version jpeg_v4_0_3_ip_block;
+ 
+ void jpeg_v4_0_3_dec_ring_emit_ib(struct amdgpu_ring *ring,
+@@ -62,5 +65,7 @@ void jpeg_v4_0_3_dec_ring_insert_end(struct amdgpu_ring *ring);
+ void jpeg_v4_0_3_dec_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg, uint32_t val);
+ void jpeg_v4_0_3_dec_ring_emit_reg_wait(struct amdgpu_ring *ring, uint32_t reg,
+ 					uint32_t val, uint32_t mask);
+-
++int jpeg_v4_0_3_dec_ring_parse_cs(struct amdgpu_cs_parser *parser,
++				  struct amdgpu_job *job,
++				  struct amdgpu_ib *ib);
+ #endif /* __JPEG_V4_0_3_H__ */
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
+index 64c856bfe0cbb..90299f66a4445 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
+@@ -523,6 +523,7 @@ static const struct amdgpu_ring_funcs jpeg_v5_0_0_dec_ring_vm_funcs = {
+ 	.get_rptr = jpeg_v5_0_0_dec_ring_get_rptr,
+ 	.get_wptr = jpeg_v5_0_0_dec_ring_get_wptr,
+ 	.set_wptr = jpeg_v5_0_0_dec_ring_set_wptr,
++	.parse_cs = jpeg_v4_0_3_dec_ring_parse_cs,
+ 	.emit_frame_size =
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+ 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+index af1e90159ce36..2e72d445415f9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+@@ -176,14 +176,16 @@ static void sdma_v5_2_ring_set_wptr(struct amdgpu_ring *ring)
+ 		DRM_DEBUG("calling WDOORBELL64(0x%08x, 0x%016llx)\n",
+ 				ring->doorbell_index, ring->wptr << 2);
+ 		WDOORBELL64(ring->doorbell_index, ring->wptr << 2);
+-		/* SDMA seems to miss doorbells sometimes when powergating kicks in.
+-		 * Updating the wptr directly will wake it. This is only safe because
+-		 * we disallow gfxoff in begin_use() and then allow it again in end_use().
+-		 */
+-		WREG32(sdma_v5_2_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR),
+-		       lower_32_bits(ring->wptr << 2));
+-		WREG32(sdma_v5_2_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR_HI),
+-		       upper_32_bits(ring->wptr << 2));
++		if (amdgpu_ip_version(adev, SDMA0_HWIP, 0) == IP_VERSION(5, 2, 1)) {
++			/* SDMA seems to miss doorbells sometimes when powergating kicks in.
++			 * Updating the wptr directly will wake it. This is only safe because
++			 * we disallow gfxoff in begin_use() and then allow it again in end_use().
++			 */
++			WREG32(sdma_v5_2_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR),
++			       lower_32_bits(ring->wptr << 2));
++			WREG32(sdma_v5_2_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR_HI),
++			       upper_32_bits(ring->wptr << 2));
++		}
+ 	} else {
+ 		DRM_DEBUG("Not using doorbell -- "
+ 				"mmSDMA%i_GFX_RB_WPTR == 0x%08x "
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15d.h b/drivers/gpu/drm/amd/amdgpu/soc15d.h
+index 2357ff39323f0..e74e1983da53a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15d.h
++++ b/drivers/gpu/drm/amd/amdgpu/soc15d.h
+@@ -76,6 +76,12 @@
+ 			 ((cond & 0xF) << 24) |				\
+ 			 ((type & 0xF) << 28))
+ 
++#define CP_PACKETJ_NOP		0x60000000
++#define CP_PACKETJ_GET_REG(x)  ((x) & 0x3FFFF)
++#define CP_PACKETJ_GET_RES(x)  (((x) >> 18) & 0x3F)
++#define CP_PACKETJ_GET_COND(x) (((x) >> 24) & 0xF)
++#define CP_PACKETJ_GET_TYPE(x) (((x) >> 28) & 0xF)
++
+ /* Packet 3 types */
+ #define	PACKET3_NOP					0x10
+ #define	PACKET3_SET_BASE				0x11
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 836bf9ba620d1..8d4ad15b8e171 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2724,6 +2724,9 @@ static int dm_suspend(void *handle)
+ 
+ 	hpd_rx_irq_work_suspend(dm);
+ 
++	if (adev->dm.dc->caps.ips_support)
++		dc_allow_idle_optimizations(adev->dm.dc, true);
++
+ 	dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3);
+ 	dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D3);
+ 
+@@ -6559,12 +6562,34 @@ static const struct attribute_group amdgpu_group = {
+ 	.attrs = amdgpu_attrs
+ };
+ 
++static bool
++amdgpu_dm_should_create_sysfs(struct amdgpu_dm_connector *amdgpu_dm_connector)
++{
++	if (amdgpu_dm_abm_level >= 0)
++		return false;
++
++	if (amdgpu_dm_connector->base.connector_type != DRM_MODE_CONNECTOR_eDP)
++		return false;
++
++	/* check for OLED panels */
++	if (amdgpu_dm_connector->bl_idx >= 0) {
++		struct drm_device *drm = amdgpu_dm_connector->base.dev;
++		struct amdgpu_display_manager *dm = &drm_to_adev(drm)->dm;
++		struct amdgpu_dm_backlight_caps *caps;
++
++		caps = &dm->backlight_caps[amdgpu_dm_connector->bl_idx];
++		if (caps->aux_support)
++			return false;
++	}
++
++	return true;
++}
++
+ static void amdgpu_dm_connector_unregister(struct drm_connector *connector)
+ {
+ 	struct amdgpu_dm_connector *amdgpu_dm_connector = to_amdgpu_dm_connector(connector);
+ 
+-	if (connector->connector_type == DRM_MODE_CONNECTOR_eDP &&
+-	    amdgpu_dm_abm_level < 0)
++	if (amdgpu_dm_should_create_sysfs(amdgpu_dm_connector))
+ 		sysfs_remove_group(&connector->kdev->kobj, &amdgpu_group);
+ 
+ 	drm_dp_aux_unregister(&amdgpu_dm_connector->dm_dp_aux.aux);
+@@ -6671,8 +6696,7 @@ amdgpu_dm_connector_late_register(struct drm_connector *connector)
+ 		to_amdgpu_dm_connector(connector);
+ 	int r;
+ 
+-	if (connector->connector_type == DRM_MODE_CONNECTOR_eDP &&
+-	    amdgpu_dm_abm_level < 0) {
++	if (amdgpu_dm_should_create_sysfs(amdgpu_dm_connector)) {
+ 		r = sysfs_create_group(&connector->kdev->kobj,
+ 				       &amdgpu_group);
+ 		if (r)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+index 3306684e805ac..872f994dd356e 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+@@ -3605,7 +3605,7 @@ void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
+ 						(int)hubp->curs_attr.width || pos_cpy.x
+ 						<= (int)hubp->curs_attr.width +
+ 						pipe_ctx->plane_state->src_rect.x) {
+-						pos_cpy.x = temp_x + viewport_width;
++						pos_cpy.x = 2 * viewport_width - temp_x;
+ 					}
+ 				}
+ 			} else {
+@@ -3698,7 +3698,7 @@ void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
+ 						(int)hubp->curs_attr.width || pos_cpy.x
+ 						<= (int)hubp->curs_attr.width +
+ 						pipe_ctx->plane_state->src_rect.x) {
+-						pos_cpy.x = 2 * viewport_width - temp_x;
++						pos_cpy.x = temp_x + viewport_width;
+ 					}
+ 				}
+ 			} else {
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
+index 9a3cc0514a36e..8e0588b1cf305 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
+@@ -1778,6 +1778,9 @@ static bool dcn321_resource_construct(
+ 	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
+ 	dc->caps.color.mpc.ocsc = 1;
+ 
++	/* Use pipe context based otg sync logic */
++	dc->config.use_pipe_ctx_sync_logic = true;
++
+ 	dc->config.dc_mode_clk_limit_support = true;
+ 	dc->config.enable_windowed_mpo_odm = true;
+ 	/* read VBIOS LTTPR caps */
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_hdcp.c b/drivers/gpu/drm/i915/display/intel_dp_hdcp.c
+index 92b03073acdd5..555428606e127 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_hdcp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_hdcp.c
+@@ -39,7 +39,9 @@ static u32 transcoder_to_stream_enc_status(enum transcoder cpu_transcoder)
+ static void intel_dp_hdcp_wait_for_cp_irq(struct intel_connector *connector,
+ 					  int timeout)
+ {
+-	struct intel_hdcp *hdcp = &connector->hdcp;
++	struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
++	struct intel_dp *dp = &dig_port->dp;
++	struct intel_hdcp *hdcp = &dp->attached_connector->hdcp;
+ 	long ret;
+ 
+ #define C (hdcp->cp_irq_count_cached != atomic_read(&hdcp->cp_irq_count))
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 697ad4a640516..a6c5e3bc9bf15 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -1179,8 +1179,6 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc,
+ 
+ 	cstate->num_mixers = num_lm;
+ 
+-	dpu_enc->connector = conn_state->connector;
+-
+ 	for (i = 0; i < dpu_enc->num_phys_encs; i++) {
+ 		struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i];
+ 
+@@ -1277,6 +1275,8 @@ static void dpu_encoder_virt_atomic_enable(struct drm_encoder *drm_enc,
+ 
+ 	dpu_enc->commit_done_timedout = false;
+ 
++	dpu_enc->connector = drm_atomic_get_new_connector_for_encoder(state, drm_enc);
++
+ 	cur_mode = &dpu_enc->base.crtc->state->adjusted_mode;
+ 
+ 	dpu_enc->wide_bus_en = dpu_encoder_is_widebus_enabled(drm_enc);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index 9b72977feafa4..e61b5681f3bbd 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -308,8 +308,8 @@ static const u32 wb2_formats_rgb_yuv[] = {
+ 	{ \
+ 	.maxdwnscale = SSPP_UNITY_SCALE, \
+ 	.maxupscale = SSPP_UNITY_SCALE, \
+-	.format_list = plane_formats_yuv, \
+-	.num_formats = ARRAY_SIZE(plane_formats_yuv), \
++	.format_list = plane_formats, \
++	.num_formats = ARRAY_SIZE(plane_formats), \
+ 	.virt_format_list = plane_formats, \
+ 	.virt_num_formats = ARRAY_SIZE(plane_formats), \
+ 	}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+index e2adc937ea63b..935ff6fd172c4 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+@@ -31,24 +31,14 @@
+  * @fmt: Pointer to format string
+  */
+ #define DPU_DEBUG(fmt, ...)                                                \
+-	do {                                                               \
+-		if (drm_debug_enabled(DRM_UT_KMS))                         \
+-			DRM_DEBUG(fmt, ##__VA_ARGS__); \
+-		else                                                       \
+-			pr_debug(fmt, ##__VA_ARGS__);                      \
+-	} while (0)
++	DRM_DEBUG_DRIVER(fmt, ##__VA_ARGS__)
+ 
+ /**
+  * DPU_DEBUG_DRIVER - macro for hardware driver logging
+  * @fmt: Pointer to format string
+  */
+ #define DPU_DEBUG_DRIVER(fmt, ...)                                         \
+-	do {                                                               \
+-		if (drm_debug_enabled(DRM_UT_DRIVER))                      \
+-			DRM_ERROR(fmt, ##__VA_ARGS__); \
+-		else                                                       \
+-			pr_debug(fmt, ##__VA_ARGS__);                      \
+-	} while (0)
++	DRM_DEBUG_DRIVER(fmt, ##__VA_ARGS__)
+ 
+ #define DPU_ERROR(fmt, ...) pr_err("[dpu error]" fmt, ##__VA_ARGS__)
+ #define DPU_ERROR_RATELIMITED(fmt, ...) pr_err_ratelimited("[dpu error]" fmt, ##__VA_ARGS__)
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+index 1c3a2657450c6..c31d283d1c6c1 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+@@ -680,6 +680,9 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
+ 			new_state->fb, &layout);
+ 	if (ret) {
+ 		DPU_ERROR_PLANE(pdpu, "failed to get format layout, %d\n", ret);
++		if (pstate->aspace)
++			msm_framebuffer_cleanup(new_state->fb, pstate->aspace,
++						pstate->needs_dirtyfb);
+ 		return ret;
+ 	}
+ 
+@@ -743,10 +746,9 @@ static int dpu_plane_atomic_check_pipe(struct dpu_plane *pdpu,
+ 	min_src_size = MSM_FORMAT_IS_YUV(fmt) ? 2 : 1;
+ 
+ 	if (MSM_FORMAT_IS_YUV(fmt) &&
+-	    (!pipe->sspp->cap->sblk->scaler_blk.len ||
+-	     !pipe->sspp->cap->sblk->csc_blk.len)) {
++	    !pipe->sspp->cap->sblk->csc_blk.len) {
+ 		DPU_DEBUG_PLANE(pdpu,
+-				"plane doesn't have scaler/csc for yuv\n");
++				"plane doesn't have csc for yuv\n");
+ 		return -EINVAL;
+ 	}
+ 
+@@ -863,6 +865,10 @@ static int dpu_plane_atomic_check(struct drm_plane *plane,
+ 
+ 	max_linewidth = pdpu->catalog->caps->max_linewidth;
+ 
++	drm_rect_rotate(&pipe_cfg->src_rect,
++			new_plane_state->fb->width, new_plane_state->fb->height,
++			new_plane_state->rotation);
++
+ 	if ((drm_rect_width(&pipe_cfg->src_rect) > max_linewidth) ||
+ 	     _dpu_plane_calc_clk(&crtc_state->adjusted_mode, pipe_cfg) > max_mdp_clk_rate) {
+ 		/*
+@@ -912,6 +918,14 @@ static int dpu_plane_atomic_check(struct drm_plane *plane,
+ 		r_pipe_cfg->dst_rect.x1 = pipe_cfg->dst_rect.x2;
+ 	}
+ 
++	drm_rect_rotate_inv(&pipe_cfg->src_rect,
++			    new_plane_state->fb->width, new_plane_state->fb->height,
++			    new_plane_state->rotation);
++	if (r_pipe->sspp)
++		drm_rect_rotate_inv(&r_pipe_cfg->src_rect,
++				    new_plane_state->fb->width, new_plane_state->fb->height,
++				    new_plane_state->rotation);
++
+ 	ret = dpu_plane_atomic_check_pipe(pdpu, pipe, pipe_cfg, fmt, &crtc_state->adjusted_mode);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 7bc8a9f0657a9..f342fc5ae41ec 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1286,6 +1286,8 @@ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl,
+ 	link_info.rate = ctrl->link->link_params.rate;
+ 	link_info.capabilities = DP_LINK_CAP_ENHANCED_FRAMING;
+ 
++	dp_link_reset_phy_params_vx_px(ctrl->link);
++
+ 	dp_aux_link_configure(ctrl->aux, &link_info);
+ 
+ 	if (drm_dp_max_downspread(dpcd))
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 07db8f37cd06a..017fb8cc8ab67 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -90,22 +90,22 @@ static int dp_panel_read_dpcd(struct dp_panel *dp_panel)
+ static u32 dp_panel_get_supported_bpp(struct dp_panel *dp_panel,
+ 		u32 mode_edid_bpp, u32 mode_pclk_khz)
+ {
+-	struct dp_link_info *link_info;
++	const struct dp_link_info *link_info;
+ 	const u32 max_supported_bpp = 30, min_supported_bpp = 18;
+-	u32 bpp = 0, data_rate_khz = 0;
++	u32 bpp, data_rate_khz;
+ 
+-	bpp = min_t(u32, mode_edid_bpp, max_supported_bpp);
++	bpp = min(mode_edid_bpp, max_supported_bpp);
+ 
+ 	link_info = &dp_panel->link_info;
+ 	data_rate_khz = link_info->num_lanes * link_info->rate * 8;
+ 
+-	while (bpp > min_supported_bpp) {
++	do {
+ 		if (mode_pclk_khz * bpp <= data_rate_khz)
+-			break;
++			return bpp;
+ 		bpp -= 6;
+-	}
++	} while (bpp > min_supported_bpp);
+ 
+-	return bpp;
++	return min_supported_bpp;
+ }
+ 
+ static int dp_panel_update_modes(struct drm_connector *connector,
+@@ -442,8 +442,9 @@ int dp_panel_init_panel_info(struct dp_panel *dp_panel)
+ 				drm_mode->clock);
+ 	drm_dbg_dp(panel->drm_dev, "bpp = %d\n", dp_panel->dp_mode.bpp);
+ 
+-	dp_panel->dp_mode.bpp = max_t(u32, 18,
+-				min_t(u32, dp_panel->dp_mode.bpp, 30));
++	dp_panel->dp_mode.bpp = dp_panel_get_mode_bpp(dp_panel, dp_panel->dp_mode.bpp,
++						      dp_panel->dp_mode.drm_mode.clock);
++
+ 	drm_dbg_dp(panel->drm_dev, "updated bpp = %d\n",
+ 				dp_panel->dp_mode.bpp);
+ 
+diff --git a/drivers/gpu/drm/msm/msm_mdss.c b/drivers/gpu/drm/msm/msm_mdss.c
+index fab6ad4e5107c..ec75274178028 100644
+--- a/drivers/gpu/drm/msm/msm_mdss.c
++++ b/drivers/gpu/drm/msm/msm_mdss.c
+@@ -577,7 +577,7 @@ static const struct msm_mdss_data sc7180_data = {
+ 	.ubwc_enc_version = UBWC_2_0,
+ 	.ubwc_dec_version = UBWC_2_0,
+ 	.ubwc_static = 0x1e,
+-	.highest_bank_bit = 0x3,
++	.highest_bank_bit = 0x1,
+ 	.reg_bus_bw = 76800,
+ };
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/core/firmware.c b/drivers/gpu/drm/nouveau/nvkm/core/firmware.c
+index adc60b25f8e6c..0af01a0ec6016 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/core/firmware.c
++++ b/drivers/gpu/drm/nouveau/nvkm/core/firmware.c
+@@ -205,7 +205,8 @@ nvkm_firmware_dtor(struct nvkm_firmware *fw)
+ 		break;
+ 	case NVKM_FIRMWARE_IMG_DMA:
+ 		nvkm_memory_unref(&memory);
+-		dma_free_coherent(fw->device->dev, sg_dma_len(&fw->mem.sgl), fw->img, fw->phys);
++		dma_free_noncoherent(fw->device->dev, sg_dma_len(&fw->mem.sgl),
++				     fw->img, fw->phys, DMA_TO_DEVICE);
+ 		break;
+ 	case NVKM_FIRMWARE_IMG_SGT:
+ 		nvkm_memory_unref(&memory);
+@@ -236,10 +237,12 @@ nvkm_firmware_ctor(const struct nvkm_firmware_func *func, const char *name,
+ 		break;
+ 	case NVKM_FIRMWARE_IMG_DMA: {
+ 		dma_addr_t addr;
+-
+ 		len = ALIGN(fw->len, PAGE_SIZE);
+ 
+-		fw->img = dma_alloc_coherent(fw->device->dev, len, &addr, GFP_KERNEL);
++		fw->img = dma_alloc_noncoherent(fw->device->dev,
++						len, &addr,
++						DMA_TO_DEVICE,
++						GFP_KERNEL);
+ 		if (fw->img) {
+ 			memcpy(fw->img, src, fw->len);
+ 			fw->phys = addr;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/fw.c b/drivers/gpu/drm/nouveau/nvkm/falcon/fw.c
+index 80a480b121746..a1c8545f1249a 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/falcon/fw.c
++++ b/drivers/gpu/drm/nouveau/nvkm/falcon/fw.c
+@@ -89,6 +89,12 @@ nvkm_falcon_fw_boot(struct nvkm_falcon_fw *fw, struct nvkm_subdev *user,
+ 		nvkm_falcon_fw_dtor_sigs(fw);
+ 	}
+ 
++	/* after last write to the img, sync dma mappings */
++	dma_sync_single_for_device(fw->fw.device->dev,
++				   fw->fw.phys,
++				   sg_dma_len(&fw->fw.mem.sgl),
++				   DMA_TO_DEVICE);
++
+ 	FLCNFW_DBG(fw, "resetting");
+ 	fw->func->reset(fw);
+ 
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index 30d5366d62883..0132403b8159f 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -315,7 +315,7 @@ v3d_csd_job_run(struct drm_sched_job *sched_job)
+ 	struct v3d_dev *v3d = job->base.v3d;
+ 	struct drm_device *dev = &v3d->drm;
+ 	struct dma_fence *fence;
+-	int i, csd_cfg0_reg, csd_cfg_reg_count;
++	int i, csd_cfg0_reg;
+ 
+ 	v3d->csd_job = job;
+ 
+@@ -335,9 +335,17 @@ v3d_csd_job_run(struct drm_sched_job *sched_job)
+ 	v3d_switch_perfmon(v3d, &job->base);
+ 
+ 	csd_cfg0_reg = V3D_CSD_QUEUED_CFG0(v3d->ver);
+-	csd_cfg_reg_count = v3d->ver < 71 ? 6 : 7;
+-	for (i = 1; i <= csd_cfg_reg_count; i++)
++	for (i = 1; i <= 6; i++)
+ 		V3D_CORE_WRITE(0, csd_cfg0_reg + 4 * i, job->args.cfg[i]);
++
++	/* Although V3D 7.1 has an eighth configuration register, we are not
++	 * using it. Therefore, make sure it remains unused.
++	 *
++	 * XXX: Set the CFG7 register
++	 */
++	if (v3d->ver >= 71)
++		V3D_CORE_WRITE(0, V3D_V7_CSD_QUEUED_CFG7, 0);
++
+ 	/* CFG0 write kicks off the job. */
+ 	V3D_CORE_WRITE(0, csd_cfg0_reg, job->args.cfg[0]);
+ 
+diff --git a/drivers/gpu/drm/xe/display/xe_display.c b/drivers/gpu/drm/xe/display/xe_display.c
+index 0de0566e5b394..7cdc03dc40ed9 100644
+--- a/drivers/gpu/drm/xe/display/xe_display.c
++++ b/drivers/gpu/drm/xe/display/xe_display.c
+@@ -134,7 +134,7 @@ static void xe_display_fini_noirq(struct drm_device *dev, void *dummy)
+ 		return;
+ 
+ 	intel_display_driver_remove_noirq(xe);
+-	intel_power_domains_driver_remove(xe);
++	intel_opregion_cleanup(xe);
+ }
+ 
+ int xe_display_init_noirq(struct xe_device *xe)
+@@ -160,8 +160,10 @@ int xe_display_init_noirq(struct xe_device *xe)
+ 	intel_display_device_info_runtime_init(xe);
+ 
+ 	err = intel_display_driver_probe_noirq(xe);
+-	if (err)
++	if (err) {
++		intel_opregion_cleanup(xe);
+ 		return err;
++	}
+ 
+ 	return drmm_add_action_or_reset(&xe->drm, xe_display_fini_noirq, NULL);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
+index 5ef9b50a20d01..a1cbdafbff75e 100644
+--- a/drivers/gpu/drm/xe/xe_device.c
++++ b/drivers/gpu/drm/xe/xe_device.c
+@@ -551,7 +551,9 @@ int xe_device_probe(struct xe_device *xe)
+ 	if (err)
+ 		return err;
+ 
+-	xe_mmio_probe_tiles(xe);
++	err = xe_mmio_probe_tiles(xe);
++	if (err)
++		return err;
+ 
+ 	xe_ttm_sys_mgr_init(xe);
+ 
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
+index 395de93579fa6..2ae4420e29353 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue.c
++++ b/drivers/gpu/drm/xe/xe_exec_queue.c
+@@ -96,17 +96,11 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
+ 		}
+ 	}
+ 
+-	if (xe_exec_queue_is_parallel(q)) {
+-		q->parallel.composite_fence_ctx = dma_fence_context_alloc(1);
+-		q->parallel.composite_fence_seqno = XE_FENCE_INITIAL_SEQNO;
+-	}
+-
+ 	return q;
+ }
+ 
+ static int __xe_exec_queue_init(struct xe_exec_queue *q)
+ {
+-	struct xe_device *xe = gt_to_xe(q->gt);
+ 	int i, err;
+ 
+ 	for (i = 0; i < q->width; ++i) {
+@@ -119,17 +113,6 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
+ 	if (err)
+ 		goto err_lrc;
+ 
+-	/*
+-	 * Normally the user vm holds an rpm ref to keep the device
+-	 * awake, and the context holds a ref for the vm, however for
+-	 * some engines we use the kernels migrate vm underneath which offers no
+-	 * such rpm ref, or we lack a vm. Make sure we keep a ref here, so we
+-	 * can perform GuC CT actions when needed. Caller is expected to have
+-	 * already grabbed the rpm ref outside any sensitive locks.
+-	 */
+-	if (!(q->flags & EXEC_QUEUE_FLAG_PERMANENT) && (q->flags & EXEC_QUEUE_FLAG_VM || !q->vm))
+-		xe_pm_runtime_get_noresume(xe);
+-
+ 	return 0;
+ 
+ err_lrc:
+@@ -216,8 +199,6 @@ void xe_exec_queue_fini(struct xe_exec_queue *q)
+ 
+ 	for (i = 0; i < q->width; ++i)
+ 		xe_lrc_finish(q->lrc + i);
+-	if (!(q->flags & EXEC_QUEUE_FLAG_PERMANENT) && (q->flags & EXEC_QUEUE_FLAG_VM || !q->vm))
+-		xe_pm_runtime_put(gt_to_xe(q->gt));
+ 	__xe_exec_queue_free(q);
+ }
+ 
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
+index ee78d497d838a..f0c40e8ad80a1 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
++++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
+@@ -103,16 +103,6 @@ struct xe_exec_queue {
+ 		struct xe_guc_exec_queue *guc;
+ 	};
+ 
+-	/**
+-	 * @parallel: parallel submission state
+-	 */
+-	struct {
+-		/** @parallel.composite_fence_ctx: context composite fence */
+-		u64 composite_fence_ctx;
+-		/** @parallel.composite_fence_seqno: seqno for composite fence */
+-		u32 composite_fence_seqno;
+-	} parallel;
+-
+ 	/** @sched_props: scheduling properties */
+ 	struct {
+ 		/** @sched_props.timeslice_us: timeslice period in micro-seconds */
+diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
+index fa9e9853c53ba..67e8efcaa93f1 100644
+--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
++++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
+@@ -402,6 +402,18 @@ static void pf_queue_work_func(struct work_struct *w)
+ 
+ static void acc_queue_work_func(struct work_struct *w);
+ 
++static void pagefault_fini(void *arg)
++{
++	struct xe_gt *gt = arg;
++	struct xe_device *xe = gt_to_xe(gt);
++
++	if (!xe->info.has_usm)
++		return;
++
++	destroy_workqueue(gt->usm.acc_wq);
++	destroy_workqueue(gt->usm.pf_wq);
++}
++
+ int xe_gt_pagefault_init(struct xe_gt *gt)
+ {
+ 	struct xe_device *xe = gt_to_xe(gt);
+@@ -429,10 +441,12 @@ int xe_gt_pagefault_init(struct xe_gt *gt)
+ 	gt->usm.acc_wq = alloc_workqueue("xe_gt_access_counter_work_queue",
+ 					 WQ_UNBOUND | WQ_HIGHPRI,
+ 					 NUM_ACC_QUEUE);
+-	if (!gt->usm.acc_wq)
++	if (!gt->usm.acc_wq) {
++		destroy_workqueue(gt->usm.pf_wq);
+ 		return -ENOMEM;
++	}
+ 
+-	return 0;
++	return devm_add_action_or_reset(xe->drm.dev, pagefault_fini, gt);
+ }
+ 
+ void xe_gt_pagefault_reset(struct xe_gt *gt)
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 0f42971ff0a83..e48285c81bf57 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -35,6 +35,7 @@
+ #include "xe_macros.h"
+ #include "xe_map.h"
+ #include "xe_mocs.h"
++#include "xe_pm.h"
+ #include "xe_ring_ops_types.h"
+ #include "xe_sched_job.h"
+ #include "xe_trace.h"
+@@ -916,8 +917,9 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
+ 		return DRM_GPU_SCHED_STAT_NOMINAL;
+ 	}
+ 
+-	drm_notice(&xe->drm, "Timedout job: seqno=%u, guc_id=%d, flags=0x%lx",
+-		   xe_sched_job_seqno(job), q->guc->id, q->flags);
++	drm_notice(&xe->drm, "Timedout job: seqno=%u, lrc_seqno=%u, guc_id=%d, flags=0x%lx",
++		   xe_sched_job_seqno(job), xe_sched_job_lrc_seqno(job),
++		   q->guc->id, q->flags);
+ 	xe_gt_WARN(q->gt, q->flags & EXEC_QUEUE_FLAG_KERNEL,
+ 		   "Kernel-submitted job timed out\n");
+ 	xe_gt_WARN(q->gt, q->flags & EXEC_QUEUE_FLAG_VM && !exec_queue_killed(q),
+@@ -1011,6 +1013,7 @@ static void __guc_exec_queue_fini_async(struct work_struct *w)
+ 	struct xe_exec_queue *q = ge->q;
+ 	struct xe_guc *guc = exec_queue_to_guc(q);
+ 
++	xe_pm_runtime_get(guc_to_xe(guc));
+ 	trace_xe_exec_queue_destroy(q);
+ 
+ 	if (xe_exec_queue_is_lr(q))
+@@ -1021,6 +1024,7 @@ static void __guc_exec_queue_fini_async(struct work_struct *w)
+ 
+ 	kfree(ge);
+ 	xe_exec_queue_fini(q);
++	xe_pm_runtime_put(guc_to_xe(guc));
+ }
+ 
+ static void guc_exec_queue_fini_async(struct xe_exec_queue *q)
+diff --git a/drivers/gpu/drm/xe/xe_hw_fence.c b/drivers/gpu/drm/xe/xe_hw_fence.c
+index f872ef1031272..35c0063a831af 100644
+--- a/drivers/gpu/drm/xe/xe_hw_fence.c
++++ b/drivers/gpu/drm/xe/xe_hw_fence.c
+@@ -208,23 +208,58 @@ static struct xe_hw_fence *to_xe_hw_fence(struct dma_fence *fence)
+ 	return container_of(fence, struct xe_hw_fence, dma);
+ }
+ 
+-struct xe_hw_fence *xe_hw_fence_create(struct xe_hw_fence_ctx *ctx,
+-				       struct iosys_map seqno_map)
++/**
++ * xe_hw_fence_alloc() -  Allocate an hw fence.
++ *
++ * Allocate but don't initialize an hw fence.
++ *
++ * Return: Pointer to the allocated fence or
++ * negative error pointer on error.
++ */
++struct dma_fence *xe_hw_fence_alloc(void)
+ {
+-	struct xe_hw_fence *fence;
++	struct xe_hw_fence *hw_fence = fence_alloc();
+ 
+-	fence = fence_alloc();
+-	if (!fence)
++	if (!hw_fence)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	fence->ctx = ctx;
+-	fence->seqno_map = seqno_map;
+-	INIT_LIST_HEAD(&fence->irq_link);
++	return &hw_fence->dma;
++}
+ 
+-	dma_fence_init(&fence->dma, &xe_hw_fence_ops, &ctx->irq->lock,
+-		       ctx->dma_fence_ctx, ctx->next_seqno++);
++/**
++ * xe_hw_fence_free() - Free an hw fence.
++ * @fence: Pointer to the fence to free.
++ *
++ * Frees an hw fence that hasn't yet been
++ * initialized.
++ */
++void xe_hw_fence_free(struct dma_fence *fence)
++{
++	fence_free(&fence->rcu);
++}
+ 
+-	trace_xe_hw_fence_create(fence);
++/**
++ * xe_hw_fence_init() - Initialize an hw fence.
++ * @fence: Pointer to the fence to initialize.
++ * @ctx: Pointer to the struct xe_hw_fence_ctx fence context.
++ * @seqno_map: Pointer to the map into where the seqno is blitted.
++ *
++ * Initializes a pre-allocated hw fence.
++ * After initialization, the fence is subject to normal
++ * dma-fence refcounting.
++ */
++void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
++		      struct iosys_map seqno_map)
++{
++	struct xe_hw_fence *hw_fence =
++		container_of(fence, typeof(*hw_fence), dma);
++
++	hw_fence->ctx = ctx;
++	hw_fence->seqno_map = seqno_map;
++	INIT_LIST_HEAD(&hw_fence->irq_link);
++
++	dma_fence_init(fence, &xe_hw_fence_ops, &ctx->irq->lock,
++		       ctx->dma_fence_ctx, ctx->next_seqno++);
+ 
+-	return fence;
++	trace_xe_hw_fence_create(hw_fence);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_hw_fence.h b/drivers/gpu/drm/xe/xe_hw_fence.h
+index cfe5fd603787e..f13a1c4982c73 100644
+--- a/drivers/gpu/drm/xe/xe_hw_fence.h
++++ b/drivers/gpu/drm/xe/xe_hw_fence.h
+@@ -24,7 +24,10 @@ void xe_hw_fence_ctx_init(struct xe_hw_fence_ctx *ctx, struct xe_gt *gt,
+ 			  struct xe_hw_fence_irq *irq, const char *name);
+ void xe_hw_fence_ctx_finish(struct xe_hw_fence_ctx *ctx);
+ 
+-struct xe_hw_fence *xe_hw_fence_create(struct xe_hw_fence_ctx *ctx,
+-				       struct iosys_map seqno_map);
++struct dma_fence *xe_hw_fence_alloc(void);
+ 
++void xe_hw_fence_free(struct dma_fence *fence);
++
++void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
++		      struct iosys_map seqno_map);
+ #endif
+diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
+index d7bf7bc9dc145..995f47eb365c3 100644
+--- a/drivers/gpu/drm/xe/xe_lrc.c
++++ b/drivers/gpu/drm/xe/xe_lrc.c
+@@ -901,10 +901,54 @@ u32 xe_lrc_seqno_ggtt_addr(struct xe_lrc *lrc)
+ 	return __xe_lrc_seqno_ggtt_addr(lrc);
+ }
+ 
++/**
++ * xe_lrc_alloc_seqno_fence() - Allocate an lrc seqno fence.
++ *
++ * Allocate but don't initialize an lrc seqno fence.
++ *
++ * Return: Pointer to the allocated fence or
++ * negative error pointer on error.
++ */
++struct dma_fence *xe_lrc_alloc_seqno_fence(void)
++{
++	return xe_hw_fence_alloc();
++}
++
++/**
++ * xe_lrc_free_seqno_fence() - Free an lrc seqno fence.
++ * @fence: Pointer to the fence to free.
++ *
++ * Frees an lrc seqno fence that hasn't yet been
++ * initialized.
++ */
++void xe_lrc_free_seqno_fence(struct dma_fence *fence)
++{
++	xe_hw_fence_free(fence);
++}
++
++/**
++ * xe_lrc_init_seqno_fence() - Initialize an lrc seqno fence.
++ * @lrc: Pointer to the lrc.
++ * @fence: Pointer to the fence to initialize.
++ *
++ * Initializes a pre-allocated lrc seqno fence.
++ * After initialization, the fence is subject to normal
++ * dma-fence refcounting.
++ */
++void xe_lrc_init_seqno_fence(struct xe_lrc *lrc, struct dma_fence *fence)
++{
++	xe_hw_fence_init(fence, &lrc->fence_ctx, __xe_lrc_seqno_map(lrc));
++}
++
+ struct dma_fence *xe_lrc_create_seqno_fence(struct xe_lrc *lrc)
+ {
+-	return &xe_hw_fence_create(&lrc->fence_ctx,
+-				   __xe_lrc_seqno_map(lrc))->dma;
++	struct dma_fence *fence = xe_lrc_alloc_seqno_fence();
++
++	if (IS_ERR(fence))
++		return fence;
++
++	xe_lrc_init_seqno_fence(lrc, fence);
++	return fence;
+ }
+ 
+ s32 xe_lrc_seqno(struct xe_lrc *lrc)
+diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
+index d32fa31faa2cf..f57c5836dab87 100644
+--- a/drivers/gpu/drm/xe/xe_lrc.h
++++ b/drivers/gpu/drm/xe/xe_lrc.h
+@@ -38,6 +38,9 @@ void xe_lrc_write_ctx_reg(struct xe_lrc *lrc, int reg_nr, u32 val);
+ u64 xe_lrc_descriptor(struct xe_lrc *lrc);
+ 
+ u32 xe_lrc_seqno_ggtt_addr(struct xe_lrc *lrc);
++struct dma_fence *xe_lrc_alloc_seqno_fence(void);
++void xe_lrc_free_seqno_fence(struct dma_fence *fence);
++void xe_lrc_init_seqno_fence(struct xe_lrc *lrc, struct dma_fence *fence);
+ struct dma_fence *xe_lrc_create_seqno_fence(struct xe_lrc *lrc);
+ s32 xe_lrc_seqno(struct xe_lrc *lrc);
+ 
+diff --git a/drivers/gpu/drm/xe/xe_mmio.c b/drivers/gpu/drm/xe/xe_mmio.c
+index 334637511e750..beb4e276ba845 100644
+--- a/drivers/gpu/drm/xe/xe_mmio.c
++++ b/drivers/gpu/drm/xe/xe_mmio.c
+@@ -254,6 +254,21 @@ static int xe_mmio_tile_vram_size(struct xe_tile *tile, u64 *vram_size,
+ 	return xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
+ }
+ 
++static void vram_fini(void *arg)
++{
++	struct xe_device *xe = arg;
++	struct xe_tile *tile;
++	int id;
++
++	if (xe->mem.vram.mapping)
++		iounmap(xe->mem.vram.mapping);
++
++	xe->mem.vram.mapping = NULL;
++
++	for_each_tile(tile, xe, id)
++		tile->mem.vram.mapping = NULL;
++}
++
+ int xe_mmio_probe_vram(struct xe_device *xe)
+ {
+ 	struct xe_tile *tile;
+@@ -330,10 +345,21 @@ int xe_mmio_probe_vram(struct xe_device *xe)
+ 	drm_info(&xe->drm, "Available VRAM: %pa, %pa\n", &xe->mem.vram.io_start,
+ 		 &available_size);
+ 
+-	return 0;
++	return devm_add_action_or_reset(xe->drm.dev, vram_fini, xe);
+ }
+ 
+-void xe_mmio_probe_tiles(struct xe_device *xe)
++static void tiles_fini(void *arg)
++{
++	struct xe_device *xe = arg;
++	struct xe_tile *tile;
++	int id;
++
++	for_each_tile(tile, xe, id)
++		if (tile != xe_device_get_root_tile(xe))
++			tile->mmio.regs = NULL;
++}
++
++int xe_mmio_probe_tiles(struct xe_device *xe)
+ {
+ 	size_t tile_mmio_size = SZ_16M, tile_mmio_ext_size = xe->info.tile_mmio_ext_size;
+ 	u8 id, tile_count = xe->info.tile_count;
+@@ -384,15 +410,18 @@ void xe_mmio_probe_tiles(struct xe_device *xe)
+ 			regs += tile_mmio_ext_size;
+ 		}
+ 	}
++
++	return devm_add_action_or_reset(xe->drm.dev, tiles_fini, xe);
+ }
+ 
+-static void mmio_fini(struct drm_device *drm, void *arg)
++static void mmio_fini(void *arg)
+ {
+ 	struct xe_device *xe = arg;
++	struct xe_tile *root_tile = xe_device_get_root_tile(xe);
+ 
+ 	pci_iounmap(to_pci_dev(xe->drm.dev), xe->mmio.regs);
+-	if (xe->mem.vram.mapping)
+-		iounmap(xe->mem.vram.mapping);
++	xe->mmio.regs = NULL;
++	root_tile->mmio.regs = NULL;
+ }
+ 
+ int xe_mmio_init(struct xe_device *xe)
+@@ -417,7 +446,7 @@ int xe_mmio_init(struct xe_device *xe)
+ 	root_tile->mmio.size = SZ_16M;
+ 	root_tile->mmio.regs = xe->mmio.regs;
+ 
+-	return drmm_add_action_or_reset(&xe->drm, mmio_fini, xe);
++	return devm_add_action_or_reset(xe->drm.dev, mmio_fini, xe);
+ }
+ 
+ u8 xe_mmio_read8(struct xe_gt *gt, struct xe_reg reg)
+diff --git a/drivers/gpu/drm/xe/xe_mmio.h b/drivers/gpu/drm/xe/xe_mmio.h
+index a3cd7b3036c73..a929d090bb2f1 100644
+--- a/drivers/gpu/drm/xe/xe_mmio.h
++++ b/drivers/gpu/drm/xe/xe_mmio.h
+@@ -21,7 +21,7 @@ struct xe_device;
+ #define LMEM_BAR		2
+ 
+ int xe_mmio_init(struct xe_device *xe);
+-void xe_mmio_probe_tiles(struct xe_device *xe);
++int xe_mmio_probe_tiles(struct xe_device *xe);
+ 
+ u8 xe_mmio_read8(struct xe_gt *gt, struct xe_reg reg);
+ u16 xe_mmio_read16(struct xe_gt *gt, struct xe_reg reg);
+diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
+index aca7a9af6e846..c59c4373aefad 100644
+--- a/drivers/gpu/drm/xe/xe_ring_ops.c
++++ b/drivers/gpu/drm/xe/xe_ring_ops.c
+@@ -380,7 +380,7 @@ static void emit_migration_job_gen12(struct xe_sched_job *job,
+ 
+ 	dw[i++] = MI_ARB_ON_OFF | MI_ARB_DISABLE; /* Enabled again below */
+ 
+-	i = emit_bb_start(job->batch_addr[0], BIT(8), dw, i);
++	i = emit_bb_start(job->ptrs[0].batch_addr, BIT(8), dw, i);
+ 
+ 	if (!IS_SRIOV_VF(gt_to_xe(job->q->gt))) {
+ 		/* XXX: Do we need this? Leaving for now. */
+@@ -389,7 +389,7 @@ static void emit_migration_job_gen12(struct xe_sched_job *job,
+ 		dw[i++] = preparser_disable(false);
+ 	}
+ 
+-	i = emit_bb_start(job->batch_addr[1], BIT(8), dw, i);
++	i = emit_bb_start(job->ptrs[1].batch_addr, BIT(8), dw, i);
+ 
+ 	dw[i++] = MI_FLUSH_DW | MI_INVALIDATE_TLB | job->migrate_flush_flags |
+ 		MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_IMM_DW;
+@@ -411,8 +411,8 @@ static void emit_job_gen12_gsc(struct xe_sched_job *job)
+ 	xe_gt_assert(gt, job->q->width <= 1); /* no parallel submission for GSCCS */
+ 
+ 	__emit_job_gen12_simple(job, job->q->lrc,
+-				job->batch_addr[0],
+-				xe_sched_job_seqno(job));
++				job->ptrs[0].batch_addr,
++				xe_sched_job_lrc_seqno(job));
+ }
+ 
+ static void emit_job_gen12_copy(struct xe_sched_job *job)
+@@ -421,14 +421,14 @@ static void emit_job_gen12_copy(struct xe_sched_job *job)
+ 
+ 	if (xe_sched_job_is_migration(job->q)) {
+ 		emit_migration_job_gen12(job, job->q->lrc,
+-					 xe_sched_job_seqno(job));
++					 xe_sched_job_lrc_seqno(job));
+ 		return;
+ 	}
+ 
+ 	for (i = 0; i < job->q->width; ++i)
+ 		__emit_job_gen12_simple(job, job->q->lrc + i,
+-				        job->batch_addr[i],
+-				        xe_sched_job_seqno(job));
++					job->ptrs[i].batch_addr,
++					xe_sched_job_lrc_seqno(job));
+ }
+ 
+ static void emit_job_gen12_video(struct xe_sched_job *job)
+@@ -438,8 +438,8 @@ static void emit_job_gen12_video(struct xe_sched_job *job)
+ 	/* FIXME: Not doing parallel handshake for now */
+ 	for (i = 0; i < job->q->width; ++i)
+ 		__emit_job_gen12_video(job, job->q->lrc + i,
+-				       job->batch_addr[i],
+-				       xe_sched_job_seqno(job));
++				       job->ptrs[i].batch_addr,
++				       xe_sched_job_lrc_seqno(job));
+ }
+ 
+ static void emit_job_gen12_render_compute(struct xe_sched_job *job)
+@@ -448,8 +448,8 @@ static void emit_job_gen12_render_compute(struct xe_sched_job *job)
+ 
+ 	for (i = 0; i < job->q->width; ++i)
+ 		__emit_job_gen12_render_compute(job, job->q->lrc + i,
+-						job->batch_addr[i],
+-						xe_sched_job_seqno(job));
++						job->ptrs[i].batch_addr,
++						xe_sched_job_lrc_seqno(job));
+ }
+ 
+ static const struct xe_ring_ops ring_ops_gen12_gsc = {
+diff --git a/drivers/gpu/drm/xe/xe_sched_job.c b/drivers/gpu/drm/xe/xe_sched_job.c
+index cd8a2fba54389..2b064680abb96 100644
+--- a/drivers/gpu/drm/xe/xe_sched_job.c
++++ b/drivers/gpu/drm/xe/xe_sched_job.c
+@@ -6,7 +6,7 @@
+ #include "xe_sched_job.h"
+ 
+ #include <drm/xe_drm.h>
+-#include <linux/dma-fence-array.h>
++#include <linux/dma-fence-chain.h>
+ #include <linux/slab.h>
+ 
+ #include "xe_device.h"
+@@ -29,7 +29,7 @@ int __init xe_sched_job_module_init(void)
+ 	xe_sched_job_slab =
+ 		kmem_cache_create("xe_sched_job",
+ 				  sizeof(struct xe_sched_job) +
+-				  sizeof(u64), 0,
++				  sizeof(struct xe_job_ptrs), 0,
+ 				  SLAB_HWCACHE_ALIGN, NULL);
+ 	if (!xe_sched_job_slab)
+ 		return -ENOMEM;
+@@ -37,7 +37,7 @@ int __init xe_sched_job_module_init(void)
+ 	xe_sched_job_parallel_slab =
+ 		kmem_cache_create("xe_sched_job_parallel",
+ 				  sizeof(struct xe_sched_job) +
+-				  sizeof(u64) *
++				  sizeof(struct xe_job_ptrs) *
+ 				  XE_HW_ENGINE_MAX_INSTANCE, 0,
+ 				  SLAB_HWCACHE_ALIGN, NULL);
+ 	if (!xe_sched_job_parallel_slab) {
+@@ -79,26 +79,33 @@ static struct xe_device *job_to_xe(struct xe_sched_job *job)
+ 	return gt_to_xe(job->q->gt);
+ }
+ 
++/* Free unused pre-allocated fences */
++static void xe_sched_job_free_fences(struct xe_sched_job *job)
++{
++	int i;
++
++	for (i = 0; i < job->q->width; ++i) {
++		struct xe_job_ptrs *ptrs = &job->ptrs[i];
++
++		if (ptrs->lrc_fence)
++			xe_lrc_free_seqno_fence(ptrs->lrc_fence);
++		if (ptrs->chain_fence)
++			dma_fence_chain_free(ptrs->chain_fence);
++	}
++}
++
+ struct xe_sched_job *xe_sched_job_create(struct xe_exec_queue *q,
+ 					 u64 *batch_addr)
+ {
+-	struct xe_sched_job *job;
+-	struct dma_fence **fences;
+ 	bool is_migration = xe_sched_job_is_migration(q);
++	struct xe_sched_job *job;
+ 	int err;
+-	int i, j;
++	int i;
+ 	u32 width;
+ 
+ 	/* only a kernel context can submit a vm-less job */
+ 	XE_WARN_ON(!q->vm && !(q->flags & EXEC_QUEUE_FLAG_KERNEL));
+ 
+-	/* Migration and kernel engines have their own locking */
+-	if (!(q->flags & (EXEC_QUEUE_FLAG_KERNEL | EXEC_QUEUE_FLAG_VM))) {
+-		lockdep_assert_held(&q->vm->lock);
+-		if (!xe_vm_in_lr_mode(q->vm))
+-			xe_vm_assert_held(q->vm);
+-	}
+-
+ 	job = job_alloc(xe_exec_queue_is_parallel(q) || is_migration);
+ 	if (!job)
+ 		return ERR_PTR(-ENOMEM);
+@@ -111,44 +118,25 @@ struct xe_sched_job *xe_sched_job_create(struct xe_exec_queue *q,
+ 	if (err)
+ 		goto err_free;
+ 
+-	if (!xe_exec_queue_is_parallel(q)) {
+-		job->fence = xe_lrc_create_seqno_fence(q->lrc);
+-		if (IS_ERR(job->fence)) {
+-			err = PTR_ERR(job->fence);
+-			goto err_sched_job;
+-		}
+-	} else {
+-		struct dma_fence_array *cf;
++	for (i = 0; i < q->width; ++i) {
++		struct dma_fence *fence = xe_lrc_alloc_seqno_fence();
++		struct dma_fence_chain *chain;
+ 
+-		fences = kmalloc_array(q->width, sizeof(*fences), GFP_KERNEL);
+-		if (!fences) {
+-			err = -ENOMEM;
++		if (IS_ERR(fence)) {
++			err = PTR_ERR(fence);
+ 			goto err_sched_job;
+ 		}
++		job->ptrs[i].lrc_fence = fence;
+ 
+-		for (j = 0; j < q->width; ++j) {
+-			fences[j] = xe_lrc_create_seqno_fence(q->lrc + j);
+-			if (IS_ERR(fences[j])) {
+-				err = PTR_ERR(fences[j]);
+-				goto err_fences;
+-			}
+-		}
++		if (i + 1 == q->width)
++			continue;
+ 
+-		cf = dma_fence_array_create(q->width, fences,
+-					    q->parallel.composite_fence_ctx,
+-					    q->parallel.composite_fence_seqno++,
+-					    false);
+-		if (!cf) {
+-			--q->parallel.composite_fence_seqno;
++		chain = dma_fence_chain_alloc();
++		if (!chain) {
+ 			err = -ENOMEM;
+-			goto err_fences;
++			goto err_sched_job;
+ 		}
+-
+-		/* Sanity check */
+-		for (j = 0; j < q->width; ++j)
+-			xe_assert(job_to_xe(job), cf->base.seqno == fences[j]->seqno);
+-
+-		job->fence = &cf->base;
++		job->ptrs[i].chain_fence = chain;
+ 	}
+ 
+ 	width = q->width;
+@@ -156,23 +144,14 @@ struct xe_sched_job *xe_sched_job_create(struct xe_exec_queue *q,
+ 		width = 2;
+ 
+ 	for (i = 0; i < width; ++i)
+-		job->batch_addr[i] = batch_addr[i];
+-
+-	/* All other jobs require a VM to be open which has a ref */
+-	if (unlikely(q->flags & EXEC_QUEUE_FLAG_KERNEL))
+-		xe_pm_runtime_get_noresume(job_to_xe(job));
+-	xe_device_assert_mem_access(job_to_xe(job));
++		job->ptrs[i].batch_addr = batch_addr[i];
+ 
++	xe_pm_runtime_get_noresume(job_to_xe(job));
+ 	trace_xe_sched_job_create(job);
+ 	return job;
+ 
+-err_fences:
+-	for (j = j - 1; j >= 0; --j) {
+-		--q->lrc[j].fence_ctx.next_seqno;
+-		dma_fence_put(fences[j]);
+-	}
+-	kfree(fences);
+ err_sched_job:
++	xe_sched_job_free_fences(job);
+ 	drm_sched_job_cleanup(&job->drm);
+ err_free:
+ 	xe_exec_queue_put(q);
+@@ -191,36 +170,43 @@ void xe_sched_job_destroy(struct kref *ref)
+ {
+ 	struct xe_sched_job *job =
+ 		container_of(ref, struct xe_sched_job, refcount);
++	struct xe_device *xe = job_to_xe(job);
++	struct xe_exec_queue *q = job->q;
+ 
+-	if (unlikely(job->q->flags & EXEC_QUEUE_FLAG_KERNEL))
+-		xe_pm_runtime_put(job_to_xe(job));
+-	xe_exec_queue_put(job->q);
++	xe_sched_job_free_fences(job);
+ 	dma_fence_put(job->fence);
+ 	drm_sched_job_cleanup(&job->drm);
+ 	job_free(job);
++	xe_exec_queue_put(q);
++	xe_pm_runtime_put(xe);
+ }
+ 
+-void xe_sched_job_set_error(struct xe_sched_job *job, int error)
++/* Set the error status under the fence to avoid racing with signaling */
++static bool xe_fence_set_error(struct dma_fence *fence, int error)
+ {
+-	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &job->fence->flags))
+-		return;
++	unsigned long irq_flags;
++	bool signaled;
+ 
+-	dma_fence_set_error(job->fence, error);
++	spin_lock_irqsave(fence->lock, irq_flags);
++	signaled = test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags);
++	if (!signaled)
++		dma_fence_set_error(fence, error);
++	spin_unlock_irqrestore(fence->lock, irq_flags);
+ 
+-	if (dma_fence_is_array(job->fence)) {
+-		struct dma_fence_array *array =
+-			to_dma_fence_array(job->fence);
+-		struct dma_fence **child = array->fences;
+-		unsigned int nchild = array->num_fences;
++	return signaled;
++}
++
++void xe_sched_job_set_error(struct xe_sched_job *job, int error)
++{
++	if (xe_fence_set_error(job->fence, error))
++		return;
+ 
+-		do {
+-			struct dma_fence *current_fence = *child++;
++	if (dma_fence_is_chain(job->fence)) {
++		struct dma_fence *iter;
+ 
+-			if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
+-				     &current_fence->flags))
+-				continue;
+-			dma_fence_set_error(current_fence, error);
+-		} while (--nchild);
++		dma_fence_chain_for_each(iter, job->fence)
++			xe_fence_set_error(dma_fence_chain_contained(iter),
++					   error);
+ 	}
+ 
+ 	trace_xe_sched_job_set_error(job);
+@@ -233,9 +219,9 @@ bool xe_sched_job_started(struct xe_sched_job *job)
+ {
+ 	struct xe_lrc *lrc = job->q->lrc;
+ 
+-	return !__dma_fence_is_later(xe_sched_job_seqno(job),
++	return !__dma_fence_is_later(xe_sched_job_lrc_seqno(job),
+ 				     xe_lrc_start_seqno(lrc),
+-				     job->fence->ops);
++				     dma_fence_chain_contained(job->fence)->ops);
+ }
+ 
+ bool xe_sched_job_completed(struct xe_sched_job *job)
+@@ -247,14 +233,26 @@ bool xe_sched_job_completed(struct xe_sched_job *job)
+ 	 * parallel handshake is done.
+ 	 */
+ 
+-	return !__dma_fence_is_later(xe_sched_job_seqno(job), xe_lrc_seqno(lrc),
+-				     job->fence->ops);
++	return !__dma_fence_is_later(xe_sched_job_lrc_seqno(job),
++				     xe_lrc_seqno(lrc),
++				     dma_fence_chain_contained(job->fence)->ops);
+ }
+ 
+ void xe_sched_job_arm(struct xe_sched_job *job)
+ {
+ 	struct xe_exec_queue *q = job->q;
++	struct dma_fence *fence, *prev;
+ 	struct xe_vm *vm = q->vm;
++	u64 seqno = 0;
++	int i;
++
++	/* Migration and kernel engines have their own locking */
++	if (IS_ENABLED(CONFIG_LOCKDEP) &&
++	    !(q->flags & (EXEC_QUEUE_FLAG_KERNEL | EXEC_QUEUE_FLAG_VM))) {
++		lockdep_assert_held(&q->vm->lock);
++		if (!xe_vm_in_lr_mode(q->vm))
++			xe_vm_assert_held(q->vm);
++	}
+ 
+ 	if (vm && !xe_sched_job_is_migration(q) && !xe_vm_in_lr_mode(vm) &&
+ 	    (vm->batch_invalidate_tlb || vm->tlb_flush_seqno != q->tlb_flush_seqno)) {
+@@ -263,6 +261,27 @@ void xe_sched_job_arm(struct xe_sched_job *job)
+ 		job->ring_ops_flush_tlb = true;
+ 	}
+ 
++	/* Arm the pre-allocated fences */
++	for (i = 0; i < q->width; prev = fence, ++i) {
++		struct dma_fence_chain *chain;
++
++		fence = job->ptrs[i].lrc_fence;
++		xe_lrc_init_seqno_fence(&q->lrc[i], fence);
++		job->ptrs[i].lrc_fence = NULL;
++		if (!i) {
++			job->lrc_seqno = fence->seqno;
++			continue;
++		} else {
++			xe_assert(gt_to_xe(q->gt), job->lrc_seqno == fence->seqno);
++		}
++
++		chain = job->ptrs[i - 1].chain_fence;
++		dma_fence_chain_init(chain, prev, fence, seqno++);
++		job->ptrs[i - 1].chain_fence = NULL;
++		fence = &chain->base;
++	}
++
++	job->fence = fence;
+ 	drm_sched_job_arm(&job->drm);
+ }
+ 
+@@ -322,7 +341,8 @@ xe_sched_job_snapshot_capture(struct xe_sched_job *job)
+ 
+ 	snapshot->batch_addr_len = q->width;
+ 	for (i = 0; i < q->width; i++)
+-		snapshot->batch_addr[i] = xe_device_uncanonicalize_addr(xe, job->batch_addr[i]);
++		snapshot->batch_addr[i] =
++			xe_device_uncanonicalize_addr(xe, job->ptrs[i].batch_addr);
+ 
+ 	return snapshot;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_sched_job.h b/drivers/gpu/drm/xe/xe_sched_job.h
+index c75018f4660dc..f362e28455dbf 100644
+--- a/drivers/gpu/drm/xe/xe_sched_job.h
++++ b/drivers/gpu/drm/xe/xe_sched_job.h
+@@ -70,7 +70,12 @@ to_xe_sched_job(struct drm_sched_job *drm)
+ 
+ static inline u32 xe_sched_job_seqno(struct xe_sched_job *job)
+ {
+-	return job->fence->seqno;
++	return job->fence ? job->fence->seqno : 0;
++}
++
++static inline u32 xe_sched_job_lrc_seqno(struct xe_sched_job *job)
++{
++	return job->lrc_seqno;
+ }
+ 
+ static inline void
+diff --git a/drivers/gpu/drm/xe/xe_sched_job_types.h b/drivers/gpu/drm/xe/xe_sched_job_types.h
+index 5e12724219fdd..0d3f76fb05cea 100644
+--- a/drivers/gpu/drm/xe/xe_sched_job_types.h
++++ b/drivers/gpu/drm/xe/xe_sched_job_types.h
+@@ -11,6 +11,20 @@
+ #include <drm/gpu_scheduler.h>
+ 
+ struct xe_exec_queue;
++struct dma_fence;
++struct dma_fence_chain;
++
++/**
++ * struct xe_job_ptrs - Per hw engine instance data
++ */
++struct xe_job_ptrs {
++	/** @lrc_fence: Pre-allocated uninitialized lrc fence. */
++	struct dma_fence *lrc_fence;
++	/** @chain_fence: Pre-allocated uninitialized fence chain node. */
++	struct dma_fence_chain *chain_fence;
++	/** @batch_addr: Batch buffer address. */
++	u64 batch_addr;
++};
+ 
+ /**
+  * struct xe_sched_job - XE schedule job (batch buffer tracking)
+@@ -37,12 +51,14 @@ struct xe_sched_job {
+ 		/** @user_fence.value: write back value */
+ 		u64 value;
+ 	} user_fence;
++	/** @lrc_seqno: LRC seqno */
++	u32 lrc_seqno;
+ 	/** @migrate_flush_flags: Additional flush flags for migration jobs */
+ 	u32 migrate_flush_flags;
+ 	/** @ring_ops_flush_tlb: The ring ops need to flush TLB before payload. */
+ 	bool ring_ops_flush_tlb;
+-	/** @batch_addr: batch buffer address of job */
+-	u64 batch_addr[];
++	/** @ptrs: per instance pointers. */
++	struct xe_job_ptrs ptrs[];
+ };
+ 
+ struct xe_sched_job_snapshot {
+diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h
+index 2d56cfc09e421..e4cba64474e6d 100644
+--- a/drivers/gpu/drm/xe/xe_trace.h
++++ b/drivers/gpu/drm/xe/xe_trace.h
+@@ -254,6 +254,7 @@ DECLARE_EVENT_CLASS(xe_sched_job,
+ 
+ 		    TP_STRUCT__entry(
+ 			     __field(u32, seqno)
++			     __field(u32, lrc_seqno)
+ 			     __field(u16, guc_id)
+ 			     __field(u32, guc_state)
+ 			     __field(u32, flags)
+@@ -264,17 +265,19 @@ DECLARE_EVENT_CLASS(xe_sched_job,
+ 
+ 		    TP_fast_assign(
+ 			   __entry->seqno = xe_sched_job_seqno(job);
++			   __entry->lrc_seqno = xe_sched_job_lrc_seqno(job);
+ 			   __entry->guc_id = job->q->guc->id;
+ 			   __entry->guc_state =
+ 			   atomic_read(&job->q->guc->state);
+ 			   __entry->flags = job->q->flags;
+-			   __entry->error = job->fence->error;
++			   __entry->error = job->fence ? job->fence->error : 0;
+ 			   __entry->fence = job->fence;
+-			   __entry->batch_addr = (u64)job->batch_addr[0];
++			   __entry->batch_addr = (u64)job->ptrs[0].batch_addr;
+ 			   ),
+ 
+-		    TP_printk("fence=%p, seqno=%u, guc_id=%d, batch_addr=0x%012llx, guc_state=0x%x, flags=0x%x, error=%d",
+-			      __entry->fence, __entry->seqno, __entry->guc_id,
++		    TP_printk("fence=%p, seqno=%u, lrc_seqno=%u, guc_id=%d, batch_addr=0x%012llx, guc_state=0x%x, flags=0x%x, error=%d",
++			      __entry->fence, __entry->seqno,
++			      __entry->lrc_seqno, __entry->guc_id,
+ 			      __entry->batch_addr, __entry->guc_state,
+ 			      __entry->flags, __entry->error)
+ );
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 20de97ce0f5ee..d84740be96426 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1924,12 +1924,14 @@ static void wacom_map_usage(struct input_dev *input, struct hid_usage *usage,
+ 	int fmax = field->logical_maximum;
+ 	unsigned int equivalent_usage = wacom_equivalent_usage(usage->hid);
+ 	int resolution_code = code;
+-	int resolution = hidinput_calc_abs_res(field, resolution_code);
++	int resolution;
+ 
+ 	if (equivalent_usage == HID_DG_TWIST) {
+ 		resolution_code = ABS_RZ;
+ 	}
+ 
++	resolution = hidinput_calc_abs_res(field, resolution_code);
++
+ 	if (equivalent_usage == HID_GD_X) {
+ 		fmin += features->offset_left;
+ 		fmax -= features->offset_right;
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 365e37bba0f33..06e836e3e8773 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -986,8 +986,10 @@ static int __maybe_unused geni_i2c_runtime_resume(struct device *dev)
+ 		return ret;
+ 
+ 	ret = clk_prepare_enable(gi2c->core_clk);
+-	if (ret)
++	if (ret) {
++		geni_icc_disable(&gi2c->se);
+ 		return ret;
++	}
+ 
+ 	ret = geni_se_resources_on(&gi2c->se);
+ 	if (ret) {
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index 85b31edc558df..1df5b42041427 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -1802,9 +1802,9 @@ static int tegra_i2c_probe(struct platform_device *pdev)
+ 	 * domain.
+ 	 *
+ 	 * VI I2C device shouldn't be marked as IRQ-safe because VI I2C won't
+-	 * be used for atomic transfers.
++	 * be used for atomic transfers. The ACPI device is also not IRQ safe.
+ 	 */
+-	if (!IS_VI(i2c_dev))
++	if (!IS_VI(i2c_dev) && !has_acpi_companion(i2c_dev->dev))
+ 		pm_runtime_irq_safe(i2c_dev->dev);
+ 
+ 	pm_runtime_enable(i2c_dev->dev);
+diff --git a/drivers/input/input-mt.c b/drivers/input/input-mt.c
+index 14b53dac1253b..6b04a674f832a 100644
+--- a/drivers/input/input-mt.c
++++ b/drivers/input/input-mt.c
+@@ -46,6 +46,9 @@ int input_mt_init_slots(struct input_dev *dev, unsigned int num_slots,
+ 		return 0;
+ 	if (mt)
+ 		return mt->num_slots != num_slots ? -EINVAL : 0;
++	/* Arbitrary limit for avoiding too large memory allocation. */
++	if (num_slots > 1024)
++		return -EINVAL;
+ 
+ 	mt = kzalloc(struct_size(mt, slots, num_slots), GFP_KERNEL);
+ 	if (!mt)
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index 5b50475ec4140..e9eb9554dd7bd 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -83,6 +83,7 @@ static inline void i8042_write_command(int val)
+ #define SERIO_QUIRK_KBDRESET		BIT(12)
+ #define SERIO_QUIRK_DRITEK		BIT(13)
+ #define SERIO_QUIRK_NOPNP		BIT(14)
++#define SERIO_QUIRK_FORCENORESTORE	BIT(15)
+ 
+ /* Quirk table for different mainboards. Options similar or identical to i8042
+  * module parameters.
+@@ -1149,18 +1150,10 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
+ 	{
+-		/*
+-		 * Setting SERIO_QUIRK_NOMUX or SERIO_QUIRK_RESET_ALWAYS makes
+-		 * the keyboard very laggy for ~5 seconds after boot and
+-		 * sometimes also after resume.
+-		 * However both are required for the keyboard to not fail
+-		 * completely sometimes after boot or resume.
+-		 */
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "N150CU"),
+ 		},
+-		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+-					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++		.driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ 	},
+ 	{
+ 		.matches = {
+@@ -1685,6 +1678,8 @@ static void __init i8042_check_quirks(void)
+ 	if (quirks & SERIO_QUIRK_NOPNP)
+ 		i8042_nopnp = true;
+ #endif
++	if (quirks & SERIO_QUIRK_FORCENORESTORE)
++		i8042_forcenorestore = true;
+ }
+ #else
+ static inline void i8042_check_quirks(void) {}
+@@ -1718,7 +1713,7 @@ static int __init i8042_platform_init(void)
+ 
+ 	i8042_check_quirks();
+ 
+-	pr_debug("Active quirks (empty means none):%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
++	pr_debug("Active quirks (empty means none):%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
+ 		i8042_nokbd ? " nokbd" : "",
+ 		i8042_noaux ? " noaux" : "",
+ 		i8042_nomux ? " nomux" : "",
+@@ -1738,10 +1733,11 @@ static int __init i8042_platform_init(void)
+ 		"",
+ #endif
+ #ifdef CONFIG_PNP
+-		i8042_nopnp ? " nopnp" : "");
++		i8042_nopnp ? " nopnp" : "",
+ #else
+-		"");
++		"",
+ #endif
++		i8042_forcenorestore ? " forcenorestore" : "");
+ 
+ 	retval = i8042_pnp_init();
+ 	if (retval)
+diff --git a/drivers/input/serio/i8042.c b/drivers/input/serio/i8042.c
+index 9fbb8d31575ae..2233d93f90e81 100644
+--- a/drivers/input/serio/i8042.c
++++ b/drivers/input/serio/i8042.c
+@@ -115,6 +115,10 @@ module_param_named(nopnp, i8042_nopnp, bool, 0);
+ MODULE_PARM_DESC(nopnp, "Do not use PNP to detect controller settings");
+ #endif
+ 
++static bool i8042_forcenorestore;
++module_param_named(forcenorestore, i8042_forcenorestore, bool, 0);
++MODULE_PARM_DESC(forcenorestore, "Force no restore on s3 resume, copying s2idle behaviour");
++
+ #define DEBUG
+ #ifdef DEBUG
+ static bool i8042_debug;
+@@ -1232,7 +1236,7 @@ static int i8042_pm_suspend(struct device *dev)
+ {
+ 	int i;
+ 
+-	if (pm_suspend_via_firmware())
++	if (!i8042_forcenorestore && pm_suspend_via_firmware())
+ 		i8042_controller_reset(true);
+ 
+ 	/* Set up serio interrupts for system wakeup. */
+@@ -1248,7 +1252,7 @@ static int i8042_pm_suspend(struct device *dev)
+ 
+ static int i8042_pm_resume_noirq(struct device *dev)
+ {
+-	if (!pm_resume_via_firmware())
++	if (i8042_forcenorestore || !pm_resume_via_firmware())
+ 		i8042_interrupt(0, NULL);
+ 
+ 	return 0;
+@@ -1271,7 +1275,7 @@ static int i8042_pm_resume(struct device *dev)
+ 	 * not restore the controller state to whatever it had been at boot
+ 	 * time, so we do not need to do anything.
+ 	 */
+-	if (!pm_suspend_via_firmware())
++	if (i8042_forcenorestore || !pm_suspend_via_firmware())
+ 		return 0;
+ 
+ 	/*
+diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
+index 06d78fcc79fdb..f2c87c695a17c 100644
+--- a/drivers/iommu/io-pgfault.c
++++ b/drivers/iommu/io-pgfault.c
+@@ -192,6 +192,7 @@ void iommu_report_device_fault(struct device *dev, struct iopf_fault *evt)
+ 		report_partial_fault(iopf_param, fault);
+ 		iopf_put_dev_fault_param(iopf_param);
+ 		/* A request that is not the last does not need to be ack'd */
++		return;
+ 	}
+ 
+ 	/*
+diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
+index 873630c111c1f..e333793e88eb7 100644
+--- a/drivers/iommu/iommufd/device.c
++++ b/drivers/iommu/iommufd/device.c
+@@ -525,7 +525,7 @@ iommufd_device_do_replace(struct iommufd_device *idev,
+ err_unresv:
+ 	if (hwpt_is_paging(hwpt))
+ 		iommufd_group_remove_reserved_iova(igroup,
+-						   to_hwpt_paging(old_hwpt));
++						   to_hwpt_paging(hwpt));
+ err_unlock:
+ 	mutex_unlock(&idev->igroup->lock);
+ 	return ERR_PTR(rc);
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index c2c07bfa64719..f299ff393a6a2 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1181,8 +1181,26 @@ static int do_resume(struct dm_ioctl *param)
+ 			suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG;
+ 		if (param->flags & DM_NOFLUSH_FLAG)
+ 			suspend_flags |= DM_SUSPEND_NOFLUSH_FLAG;
+-		if (!dm_suspended_md(md))
+-			dm_suspend(md, suspend_flags);
++		if (!dm_suspended_md(md)) {
++			r = dm_suspend(md, suspend_flags);
++			if (r) {
++				down_write(&_hash_lock);
++				hc = dm_get_mdptr(md);
++				if (hc && !hc->new_map) {
++					hc->new_map = new_map;
++					new_map = NULL;
++				} else {
++					r = -ENXIO;
++				}
++				up_write(&_hash_lock);
++				if (new_map) {
++					dm_sync_table(md);
++					dm_table_destroy(new_map);
++				}
++				dm_put(md);
++				return r;
++			}
++		}
+ 
+ 		old_size = dm_get_size(md);
+ 		old_map = dm_swap_table(md, new_map);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 13037d6a6f62a..6e15ac4e0845c 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -2594,7 +2594,7 @@ static int dm_wait_for_bios_completion(struct mapped_device *md, unsigned int ta
+ 			break;
+ 
+ 		if (signal_pending_state(task_state, current)) {
+-			r = -EINTR;
++			r = -ERESTARTSYS;
+ 			break;
+ 		}
+ 
+@@ -2619,7 +2619,7 @@ static int dm_wait_for_completion(struct mapped_device *md, unsigned int task_st
+ 			break;
+ 
+ 		if (signal_pending_state(task_state, current)) {
+-			r = -EINTR;
++			r = -ERESTARTSYS;
+ 			break;
+ 		}
+ 
+diff --git a/drivers/md/persistent-data/dm-space-map-metadata.c b/drivers/md/persistent-data/dm-space-map-metadata.c
+index 04698fd03e606..d48c4fafc7798 100644
+--- a/drivers/md/persistent-data/dm-space-map-metadata.c
++++ b/drivers/md/persistent-data/dm-space-map-metadata.c
+@@ -277,7 +277,7 @@ static void sm_metadata_destroy(struct dm_space_map *sm)
+ {
+ 	struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+ 
+-	kfree(smm);
++	kvfree(smm);
+ }
+ 
+ static int sm_metadata_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count)
+@@ -772,7 +772,7 @@ struct dm_space_map *dm_sm_metadata_init(void)
+ {
+ 	struct sm_metadata *smm;
+ 
+-	smm = kmalloc(sizeof(*smm), GFP_KERNEL);
++	smm = kvmalloc(sizeof(*smm), GFP_KERNEL);
+ 	if (!smm)
+ 		return ERR_PTR(-ENOMEM);
+ 
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 5ea57b6748c53..687bd374cde89 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -617,6 +617,12 @@ static int choose_first_rdev(struct r1conf *conf, struct r1bio *r1_bio,
+ 	return -1;
+ }
+ 
++static bool rdev_in_recovery(struct md_rdev *rdev, struct r1bio *r1_bio)
++{
++	return !test_bit(In_sync, &rdev->flags) &&
++	       rdev->recovery_offset < r1_bio->sector + r1_bio->sectors;
++}
++
+ static int choose_bb_rdev(struct r1conf *conf, struct r1bio *r1_bio,
+ 			  int *max_sectors)
+ {
+@@ -635,6 +641,7 @@ static int choose_bb_rdev(struct r1conf *conf, struct r1bio *r1_bio,
+ 
+ 		rdev = conf->mirrors[disk].rdev;
+ 		if (!rdev || test_bit(Faulty, &rdev->flags) ||
++		    rdev_in_recovery(rdev, r1_bio) ||
+ 		    test_bit(WriteMostly, &rdev->flags))
+ 			continue;
+ 
+@@ -673,7 +680,8 @@ static int choose_slow_rdev(struct r1conf *conf, struct r1bio *r1_bio,
+ 
+ 		rdev = conf->mirrors[disk].rdev;
+ 		if (!rdev || test_bit(Faulty, &rdev->flags) ||
+-		    !test_bit(WriteMostly, &rdev->flags))
++		    !test_bit(WriteMostly, &rdev->flags) ||
++		    rdev_in_recovery(rdev, r1_bio))
+ 			continue;
+ 
+ 		/* there are no bad blocks, we can use this disk */
+@@ -733,9 +741,7 @@ static bool rdev_readable(struct md_rdev *rdev, struct r1bio *r1_bio)
+ 	if (!rdev || test_bit(Faulty, &rdev->flags))
+ 		return false;
+ 
+-	/* still in recovery */
+-	if (!test_bit(In_sync, &rdev->flags) &&
+-	    rdev->recovery_offset < r1_bio->sector + r1_bio->sectors)
++	if (rdev_in_recovery(rdev, r1_bio))
+ 		return false;
+ 
+ 	/* don't read from slow disk unless have to */
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index a7a2bcedb37e4..5680856c0fb82 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -2087,16 +2087,6 @@ static int fastrpc_req_mem_map(struct fastrpc_user *fl, char __user *argp)
+ 	return err;
+ }
+ 
+-static int is_attach_rejected(struct fastrpc_user *fl)
+-{
+-	/* Check if the device node is non-secure */
+-	if (!fl->is_secure_dev) {
+-		dev_dbg(&fl->cctx->rpdev->dev, "untrusted app trying to attach to privileged DSP PD\n");
+-		return -EACCES;
+-	}
+-	return 0;
+-}
+-
+ static long fastrpc_device_ioctl(struct file *file, unsigned int cmd,
+ 				 unsigned long arg)
+ {
+@@ -2109,19 +2099,13 @@ static long fastrpc_device_ioctl(struct file *file, unsigned int cmd,
+ 		err = fastrpc_invoke(fl, argp);
+ 		break;
+ 	case FASTRPC_IOCTL_INIT_ATTACH:
+-		err = is_attach_rejected(fl);
+-		if (!err)
+-			err = fastrpc_init_attach(fl, ROOT_PD);
++		err = fastrpc_init_attach(fl, ROOT_PD);
+ 		break;
+ 	case FASTRPC_IOCTL_INIT_ATTACH_SNS:
+-		err = is_attach_rejected(fl);
+-		if (!err)
+-			err = fastrpc_init_attach(fl, SENSORS_PD);
++		err = fastrpc_init_attach(fl, SENSORS_PD);
+ 		break;
+ 	case FASTRPC_IOCTL_INIT_CREATE_STATIC:
+-		err = is_attach_rejected(fl);
+-		if (!err)
+-			err = fastrpc_init_create_static_process(fl, argp);
++		err = fastrpc_init_create_static_process(fl, argp);
+ 		break;
+ 	case FASTRPC_IOCTL_INIT_CREATE:
+ 		err = fastrpc_init_create_process(fl, argp);
+diff --git a/drivers/mmc/core/mmc_test.c b/drivers/mmc/core/mmc_test.c
+index 8f7f587a0025b..b7f627a9fdeab 100644
+--- a/drivers/mmc/core/mmc_test.c
++++ b/drivers/mmc/core/mmc_test.c
+@@ -3125,13 +3125,13 @@ static ssize_t mtf_test_write(struct file *file, const char __user *buf,
+ 	test->buffer = kzalloc(BUFFER_SIZE, GFP_KERNEL);
+ #ifdef CONFIG_HIGHMEM
+ 	test->highmem = alloc_pages(GFP_KERNEL | __GFP_HIGHMEM, BUFFER_ORDER);
++	if (!test->highmem) {
++		count = -ENOMEM;
++		goto free_test_buffer;
++	}
+ #endif
+ 
+-#ifdef CONFIG_HIGHMEM
+-	if (test->buffer && test->highmem) {
+-#else
+ 	if (test->buffer) {
+-#endif
+ 		mutex_lock(&mmc_test_lock);
+ 		mmc_test_run(test, testcase);
+ 		mutex_unlock(&mmc_test_lock);
+@@ -3139,6 +3139,7 @@ static ssize_t mtf_test_write(struct file *file, const char __user *buf,
+ 
+ #ifdef CONFIG_HIGHMEM
+ 	__free_pages(test->highmem, BUFFER_ORDER);
++free_test_buffer:
+ #endif
+ 	kfree(test->buffer);
+ 	kfree(test);
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 8e2d676b92399..a4f813ea177a8 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -3293,6 +3293,10 @@ int dw_mci_probe(struct dw_mci *host)
+ 	host->biu_clk = devm_clk_get(host->dev, "biu");
+ 	if (IS_ERR(host->biu_clk)) {
+ 		dev_dbg(host->dev, "biu clock not available\n");
++		ret = PTR_ERR(host->biu_clk);
++		if (ret == -EPROBE_DEFER)
++			return ret;
++
+ 	} else {
+ 		ret = clk_prepare_enable(host->biu_clk);
+ 		if (ret) {
+@@ -3304,6 +3308,10 @@ int dw_mci_probe(struct dw_mci *host)
+ 	host->ciu_clk = devm_clk_get(host->dev, "ciu");
+ 	if (IS_ERR(host->ciu_clk)) {
+ 		dev_dbg(host->dev, "ciu clock not available\n");
++		ret = PTR_ERR(host->ciu_clk);
++		if (ret == -EPROBE_DEFER)
++			goto err_clk_biu;
++
+ 		host->bus_hz = host->pdata->bus_hz;
+ 	} else {
+ 		ret = clk_prepare_enable(host->ciu_clk);
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index a94835b8ab939..e386f78e32679 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -1230,7 +1230,7 @@ static bool msdc_cmd_done(struct msdc_host *host, int events,
+ 	}
+ 
+ 	if (!sbc_error && !(events & MSDC_INT_CMDRDY)) {
+-		if (events & MSDC_INT_CMDTMO ||
++		if ((events & MSDC_INT_CMDTMO && !host->hs400_tuning) ||
+ 		    (!mmc_op_tuning(cmd->opcode) && !host->hs400_tuning))
+ 			/*
+ 			 * should not clear fifo/interrupt as the tune data
+@@ -1323,9 +1323,9 @@ static void msdc_start_command(struct msdc_host *host,
+ static void msdc_cmd_next(struct msdc_host *host,
+ 		struct mmc_request *mrq, struct mmc_command *cmd)
+ {
+-	if ((cmd->error &&
+-	    !(cmd->error == -EILSEQ &&
+-	      (mmc_op_tuning(cmd->opcode) || host->hs400_tuning))) ||
++	if ((cmd->error && !host->hs400_tuning &&
++	     !(cmd->error == -EILSEQ &&
++	     mmc_op_tuning(cmd->opcode))) ||
+ 	    (mrq->sbc && mrq->sbc->error))
+ 		msdc_request_done(host, mrq);
+ 	else if (cmd == mrq->sbc)
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 2ed0da0684906..b257504a85347 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -582,7 +582,6 @@ static void bond_ipsec_del_sa_all(struct bonding *bond)
+ 		} else {
+ 			slave->dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
+ 		}
+-		ipsec->xs->xso.real_dev = NULL;
+ 	}
+ 	spin_unlock_bh(&bond->ipsec_lock);
+ 	rcu_read_unlock();
+@@ -599,34 +598,30 @@ static bool bond_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *xs)
+ 	struct net_device *real_dev;
+ 	struct slave *curr_active;
+ 	struct bonding *bond;
+-	int err;
++	bool ok = false;
+ 
+ 	bond = netdev_priv(bond_dev);
+ 	rcu_read_lock();
+ 	curr_active = rcu_dereference(bond->curr_active_slave);
++	if (!curr_active)
++		goto out;
+ 	real_dev = curr_active->dev;
+ 
+-	if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP) {
+-		err = false;
++	if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP)
+ 		goto out;
+-	}
+ 
+-	if (!xs->xso.real_dev) {
+-		err = false;
++	if (!xs->xso.real_dev)
+ 		goto out;
+-	}
+ 
+ 	if (!real_dev->xfrmdev_ops ||
+ 	    !real_dev->xfrmdev_ops->xdo_dev_offload_ok ||
+-	    netif_is_bond_master(real_dev)) {
+-		err = false;
++	    netif_is_bond_master(real_dev))
+ 		goto out;
+-	}
+ 
+-	err = real_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs);
++	ok = real_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs);
+ out:
+ 	rcu_read_unlock();
+-	return err;
++	return ok;
+ }
+ 
+ static const struct xfrmdev_ops bond_xfrmdev_ops = {
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index bc80fb6397dcd..95d59a18c0223 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -936,7 +936,7 @@ static int bond_option_active_slave_set(struct bonding *bond,
+ 	/* check to see if we are clearing active */
+ 	if (!slave_dev) {
+ 		netdev_dbg(bond->dev, "Clearing current active slave\n");
+-		RCU_INIT_POINTER(bond->curr_active_slave, NULL);
++		bond_change_active_slave(bond, NULL);
+ 		bond_select_active_slave(bond);
+ 	} else {
+ 		struct slave *old_active = rtnl_dereference(bond->curr_active_slave);
+diff --git a/drivers/net/dsa/microchip/ksz_ptp.c b/drivers/net/dsa/microchip/ksz_ptp.c
+index 1fe105913c758..beb969f391be5 100644
+--- a/drivers/net/dsa/microchip/ksz_ptp.c
++++ b/drivers/net/dsa/microchip/ksz_ptp.c
+@@ -266,7 +266,6 @@ static int ksz_ptp_enable_mode(struct ksz_device *dev)
+ 	struct ksz_port *prt;
+ 	struct dsa_port *dp;
+ 	bool tag_en = false;
+-	int ret;
+ 
+ 	dsa_switch_for_each_user_port(dp, dev->ds) {
+ 		prt = &dev->ports[dp->index];
+@@ -277,9 +276,7 @@ static int ksz_ptp_enable_mode(struct ksz_device *dev)
+ 	}
+ 
+ 	if (tag_en) {
+-		ret = ptp_schedule_worker(ptp_data->clock, 0);
+-		if (ret)
+-			return ret;
++		ptp_schedule_worker(ptp_data->clock, 0);
+ 	} else {
+ 		ptp_cancel_worker_sync(ptp_data->clock);
+ 	}
+diff --git a/drivers/net/dsa/mv88e6xxx/global1_atu.c b/drivers/net/dsa/mv88e6xxx/global1_atu.c
+index ce3b3690c3c05..c47f068f56b32 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1_atu.c
++++ b/drivers/net/dsa/mv88e6xxx/global1_atu.c
+@@ -457,7 +457,8 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
+ 		trace_mv88e6xxx_atu_full_violation(chip->dev, spid,
+ 						   entry.portvec, entry.mac,
+ 						   fid);
+-		chip->ports[spid].atu_full_violation++;
++		if (spid < ARRAY_SIZE(chip->ports))
++			chip->ports[spid].atu_full_violation++;
+ 	}
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index 61e95487732dc..f5d26e724ae65 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -528,7 +528,9 @@ static int felix_tag_8021q_setup(struct dsa_switch *ds)
+ 	 * so we need to be careful that there are no extra frames to be
+ 	 * dequeued over MMIO, since we would never know to discard them.
+ 	 */
++	ocelot_lock_xtr_grp_bh(ocelot, 0);
+ 	ocelot_drain_cpu_queue(ocelot, 0);
++	ocelot_unlock_xtr_grp_bh(ocelot, 0);
+ 
+ 	return 0;
+ }
+@@ -1504,6 +1506,8 @@ static void felix_port_deferred_xmit(struct kthread_work *work)
+ 	int port = xmit_work->dp->index;
+ 	int retries = 10;
+ 
++	ocelot_lock_inj_grp(ocelot, 0);
++
+ 	do {
+ 		if (ocelot_can_inject(ocelot, 0))
+ 			break;
+@@ -1512,6 +1516,7 @@ static void felix_port_deferred_xmit(struct kthread_work *work)
+ 	} while (--retries);
+ 
+ 	if (!retries) {
++		ocelot_unlock_inj_grp(ocelot, 0);
+ 		dev_err(ocelot->dev, "port %d failed to inject skb\n",
+ 			port);
+ 		ocelot_port_purge_txtstamp_skb(ocelot, port, skb);
+@@ -1521,6 +1526,8 @@ static void felix_port_deferred_xmit(struct kthread_work *work)
+ 
+ 	ocelot_port_inject_frame(ocelot, port, 0, rew_op, skb);
+ 
++	ocelot_unlock_inj_grp(ocelot, 0);
++
+ 	consume_skb(skb);
+ 	kfree(xmit_work);
+ }
+@@ -1671,6 +1678,8 @@ static bool felix_check_xtr_pkt(struct ocelot *ocelot)
+ 	if (!felix->info->quirk_no_xtr_irq)
+ 		return false;
+ 
++	ocelot_lock_xtr_grp(ocelot, grp);
++
+ 	while (ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp)) {
+ 		struct sk_buff *skb;
+ 		unsigned int type;
+@@ -1707,6 +1716,8 @@ static bool felix_check_xtr_pkt(struct ocelot *ocelot)
+ 		ocelot_drain_cpu_queue(ocelot, 0);
+ 	}
+ 
++	ocelot_unlock_xtr_grp(ocelot, grp);
++
+ 	return true;
+ }
+ 
+diff --git a/drivers/net/dsa/vitesse-vsc73xx-core.c b/drivers/net/dsa/vitesse-vsc73xx-core.c
+index 4b031fefcec68..56bb77dbd28a2 100644
+--- a/drivers/net/dsa/vitesse-vsc73xx-core.c
++++ b/drivers/net/dsa/vitesse-vsc73xx-core.c
+@@ -38,6 +38,10 @@
+ #define VSC73XX_BLOCK_ARBITER	0x5 /* Only subblock 0 */
+ #define VSC73XX_BLOCK_SYSTEM	0x7 /* Only subblock 0 */
+ 
++/* MII Block subblock */
++#define VSC73XX_BLOCK_MII_INTERNAL	0x0 /* Internal MDIO subblock */
++#define VSC73XX_BLOCK_MII_EXTERNAL	0x1 /* External MDIO subblock */
++
+ #define CPU_PORT	6 /* CPU port */
+ 
+ /* MAC Block registers */
+@@ -196,6 +200,8 @@
+ #define VSC73XX_MII_CMD		0x1
+ #define VSC73XX_MII_DATA	0x2
+ 
++#define VSC73XX_MII_STAT_BUSY	BIT(3)
++
+ /* Arbiter block 5 registers */
+ #define VSC73XX_ARBEMPTY		0x0c
+ #define VSC73XX_ARBDISC			0x0e
+@@ -270,6 +276,7 @@
+ #define IS_739X(a) (IS_7395(a) || IS_7398(a))
+ 
+ #define VSC73XX_POLL_SLEEP_US		1000
++#define VSC73XX_MDIO_POLL_SLEEP_US	5
+ #define VSC73XX_POLL_TIMEOUT_US		10000
+ 
+ struct vsc73xx_counter {
+@@ -487,6 +494,22 @@ static int vsc73xx_detect(struct vsc73xx *vsc)
+ 	return 0;
+ }
+ 
++static int vsc73xx_mdio_busy_check(struct vsc73xx *vsc)
++{
++	int ret, err;
++	u32 val;
++
++	ret = read_poll_timeout(vsc73xx_read, err,
++				err < 0 || !(val & VSC73XX_MII_STAT_BUSY),
++				VSC73XX_MDIO_POLL_SLEEP_US,
++				VSC73XX_POLL_TIMEOUT_US, false, vsc,
++				VSC73XX_BLOCK_MII, VSC73XX_BLOCK_MII_INTERNAL,
++				VSC73XX_MII_STAT, &val);
++	if (ret)
++		return ret;
++	return err;
++}
++
+ static int vsc73xx_phy_read(struct dsa_switch *ds, int phy, int regnum)
+ {
+ 	struct vsc73xx *vsc = ds->priv;
+@@ -494,12 +517,20 @@ static int vsc73xx_phy_read(struct dsa_switch *ds, int phy, int regnum)
+ 	u32 val;
+ 	int ret;
+ 
++	ret = vsc73xx_mdio_busy_check(vsc);
++	if (ret)
++		return ret;
++
+ 	/* Setting bit 26 means "read" */
+ 	cmd = BIT(26) | (phy << 21) | (regnum << 16);
+ 	ret = vsc73xx_write(vsc, VSC73XX_BLOCK_MII, 0, 1, cmd);
+ 	if (ret)
+ 		return ret;
+-	msleep(2);
++
++	ret = vsc73xx_mdio_busy_check(vsc);
++	if (ret)
++		return ret;
++
+ 	ret = vsc73xx_read(vsc, VSC73XX_BLOCK_MII, 0, 2, &val);
+ 	if (ret)
+ 		return ret;
+@@ -523,6 +554,10 @@ static int vsc73xx_phy_write(struct dsa_switch *ds, int phy, int regnum,
+ 	u32 cmd;
+ 	int ret;
+ 
++	ret = vsc73xx_mdio_busy_check(vsc);
++	if (ret)
++		return ret;
++
+ 	/* It was found through tedious experiments that this router
+ 	 * chip really hates to have its PHYs reset. They
+ 	 * never recover if that happens: autonegotiation stops
+@@ -534,7 +569,7 @@ static int vsc73xx_phy_write(struct dsa_switch *ds, int phy, int regnum,
+ 		return 0;
+ 	}
+ 
+-	cmd = (phy << 21) | (regnum << 16);
++	cmd = (phy << 21) | (regnum << 16) | val;
+ 	ret = vsc73xx_write(vsc, VSC73XX_BLOCK_MII, 0, 1, cmd);
+ 	if (ret)
+ 		return ret;
+@@ -817,6 +852,11 @@ static void vsc73xx_mac_link_up(struct phylink_config *config,
+ 
+ 	if (duplex == DUPLEX_FULL)
+ 		val |= VSC73XX_MAC_CFG_FDX;
++	else
++		/* In datasheet description ("Port Mode Procedure" in 5.6.2)
++		 * this bit is configured only for half duplex.
++		 */
++		val |= VSC73XX_MAC_CFG_WEXC_DIS;
+ 
+ 	/* This routine is described in the datasheet (below ARBDISC register
+ 	 * description)
+@@ -827,7 +867,6 @@ static void vsc73xx_mac_link_up(struct phylink_config *config,
+ 	get_random_bytes(&seed, 1);
+ 	val |= seed << VSC73XX_MAC_CFG_SEED_OFFSET;
+ 	val |= VSC73XX_MAC_CFG_SEED_LOAD;
+-	val |= VSC73XX_MAC_CFG_WEXC_DIS;
+ 	vsc73xx_write(vsc, VSC73XX_BLOCK_MAC, port, VSC73XX_MAC_CFG, val);
+ 
+ 	/* Flow control for the PHY facing ports:
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+index 345681d5007e3..f88b641533fcc 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+@@ -297,11 +297,6 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
+ 		 * redirect is coming from a frame received by the
+ 		 * bnxt_en driver.
+ 		 */
+-		rx_buf = &rxr->rx_buf_ring[cons];
+-		mapping = rx_buf->mapping - bp->rx_dma_offset;
+-		dma_unmap_page_attrs(&pdev->dev, mapping,
+-				     BNXT_RX_PAGE_SIZE, bp->rx_dir,
+-				     DMA_ATTR_WEAK_ORDERING);
+ 
+ 		/* if we are unable to allocate a new buffer, abort and reuse */
+ 		if (bnxt_alloc_rx_data(bp, rxr, rxr->rx_prod, GFP_ATOMIC)) {
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+index 786ceae344887..dd9e68465e697 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+@@ -1244,7 +1244,8 @@ static u64 hash_filter_ntuple(struct ch_filter_specification *fs,
+ 	 * in the Compressed Filter Tuple.
+ 	 */
+ 	if (tp->vlan_shift >= 0 && fs->mask.ivlan)
+-		ntuple |= (FT_VLAN_VLD_F | fs->val.ivlan) << tp->vlan_shift;
++		ntuple |= (u64)(FT_VLAN_VLD_F |
++				fs->val.ivlan) << tp->vlan_shift;
+ 
+ 	if (tp->port_shift >= 0 && fs->mask.iport)
+ 		ntuple |= (u64)fs->val.iport << tp->port_shift;
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+index a71f848adc054..a293b08f36d46 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+@@ -2638,13 +2638,14 @@ static int dpaa2_switch_refill_bp(struct ethsw_core *ethsw)
+ 
+ static int dpaa2_switch_seed_bp(struct ethsw_core *ethsw)
+ {
+-	int *count, i;
++	int *count, ret, i;
+ 
+ 	for (i = 0; i < DPAA2_ETHSW_NUM_BUFS; i += BUFS_PER_CMD) {
++		ret = dpaa2_switch_add_bufs(ethsw, ethsw->bpid);
+ 		count = &ethsw->buf_count;
+-		*count += dpaa2_switch_add_bufs(ethsw, ethsw->bpid);
++		*count += ret;
+ 
+-		if (unlikely(*count < BUFS_PER_CMD))
++		if (unlikely(ret < BUFS_PER_CMD))
+ 			return -ENOMEM;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index a5fc0209d628e..4cbc4d069a1f3 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -5724,6 +5724,9 @@ static int hns3_reset_notify_uninit_enet(struct hnae3_handle *handle)
+ 	struct net_device *netdev = handle->kinfo.netdev;
+ 	struct hns3_nic_priv *priv = netdev_priv(netdev);
+ 
++	if (!test_bit(HNS3_NIC_STATE_DOWN, &priv->state))
++		hns3_nic_net_stop(netdev);
++
+ 	if (!test_and_clear_bit(HNS3_NIC_STATE_INITED, &priv->state)) {
+ 		netdev_warn(netdev, "already uninitialized\n");
+ 		return 0;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 82574ce0194fb..465f0d5822837 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -2653,8 +2653,17 @@ static int hclge_cfg_mac_speed_dup_h(struct hnae3_handle *handle, int speed,
+ {
+ 	struct hclge_vport *vport = hclge_get_vport(handle);
+ 	struct hclge_dev *hdev = vport->back;
++	int ret;
++
++	ret = hclge_cfg_mac_speed_dup(hdev, speed, duplex, lane_num);
++
++	if (ret)
++		return ret;
+ 
+-	return hclge_cfg_mac_speed_dup(hdev, speed, duplex, lane_num);
++	hdev->hw.mac.req_speed = speed;
++	hdev->hw.mac.req_duplex = duplex;
++
++	return 0;
+ }
+ 
+ static int hclge_set_autoneg_en(struct hclge_dev *hdev, bool enable)
+@@ -2956,17 +2965,20 @@ static int hclge_mac_init(struct hclge_dev *hdev)
+ 	if (!test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
+ 		hdev->hw.mac.duplex = HCLGE_MAC_FULL;
+ 
+-	ret = hclge_cfg_mac_speed_dup_hw(hdev, hdev->hw.mac.speed,
+-					 hdev->hw.mac.duplex, hdev->hw.mac.lane_num);
+-	if (ret)
+-		return ret;
+-
+ 	if (hdev->hw.mac.support_autoneg) {
+ 		ret = hclge_set_autoneg_en(hdev, hdev->hw.mac.autoneg);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
++	if (!hdev->hw.mac.autoneg) {
++		ret = hclge_cfg_mac_speed_dup_hw(hdev, hdev->hw.mac.req_speed,
++						 hdev->hw.mac.req_duplex,
++						 hdev->hw.mac.lane_num);
++		if (ret)
++			return ret;
++	}
++
+ 	mac->link = 0;
+ 
+ 	if (mac->user_fec_mode & BIT(HNAE3_FEC_USER_DEF)) {
+@@ -11516,8 +11528,8 @@ static void hclge_reset_done(struct hnae3_ae_dev *ae_dev)
+ 		dev_err(&hdev->pdev->dev, "fail to rebuild, ret=%d\n", ret);
+ 
+ 	hdev->reset_type = HNAE3_NONE_RESET;
+-	clear_bit(HCLGE_STATE_RST_HANDLING, &hdev->state);
+-	up(&hdev->reset_sem);
++	if (test_and_clear_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
++		up(&hdev->reset_sem);
+ }
+ 
+ static void hclge_clear_resetting_state(struct hclge_dev *hdev)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+index 85fb11de43a12..80079657afebe 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+@@ -191,6 +191,9 @@ static void hclge_mac_adjust_link(struct net_device *netdev)
+ 	if (ret)
+ 		netdev_err(netdev, "failed to adjust link.\n");
+ 
++	hdev->hw.mac.req_speed = (u32)speed;
++	hdev->hw.mac.req_duplex = (u8)duplex;
++
+ 	ret = hclge_cfg_flowctrl(hdev);
+ 	if (ret)
+ 		netdev_err(netdev, "failed to configure flow control.\n");
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 3735d2fed11f7..094a7c7b55921 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1747,8 +1747,8 @@ static void hclgevf_reset_done(struct hnae3_ae_dev *ae_dev)
+ 			 ret);
+ 
+ 	hdev->reset_type = HNAE3_NONE_RESET;
+-	clear_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state);
+-	up(&hdev->reset_sem);
++	if (test_and_clear_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state))
++		up(&hdev->reset_sem);
+ }
+ 
+ static u32 hclgevf_get_fw_version(struct hnae3_handle *handle)
+diff --git a/drivers/net/ethernet/intel/ice/devlink/devlink_port.c b/drivers/net/ethernet/intel/ice/devlink/devlink_port.c
+index 13e6790d3cae7..afcf64dab48a1 100644
+--- a/drivers/net/ethernet/intel/ice/devlink/devlink_port.c
++++ b/drivers/net/ethernet/intel/ice/devlink/devlink_port.c
+@@ -337,7 +337,7 @@ int ice_devlink_create_pf_port(struct ice_pf *pf)
+ 		return -EIO;
+ 
+ 	attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
+-	attrs.phys.port_number = pf->hw.bus.func;
++	attrs.phys.port_number = pf->hw.pf_id;
+ 
+ 	/* As FW supports only port split options for whole device,
+ 	 * set port split options only for first PF.
+@@ -399,7 +399,7 @@ int ice_devlink_create_vf_port(struct ice_vf *vf)
+ 		return -EINVAL;
+ 
+ 	attrs.flavour = DEVLINK_PORT_FLAVOUR_PCI_VF;
+-	attrs.pci_vf.pf = pf->hw.bus.func;
++	attrs.pci_vf.pf = pf->hw.pf_id;
+ 	attrs.pci_vf.vf = vf->vf_id;
+ 
+ 	ice_devlink_set_switch_id(pf, &attrs.switch_id);
+diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
+index 1facf179a96fd..f448d3a845642 100644
+--- a/drivers/net/ethernet/intel/ice/ice_base.c
++++ b/drivers/net/ethernet/intel/ice/ice_base.c
+@@ -512,6 +512,25 @@ static void ice_xsk_pool_fill_cb(struct ice_rx_ring *ring)
+ 	xsk_pool_fill_cb(ring->xsk_pool, &desc);
+ }
+ 
++/**
++ * ice_get_frame_sz - calculate xdp_buff::frame_sz
++ * @rx_ring: the ring being configured
++ *
++ * Return frame size based on underlying PAGE_SIZE
++ */
++static unsigned int ice_get_frame_sz(struct ice_rx_ring *rx_ring)
++{
++	unsigned int frame_sz;
++
++#if (PAGE_SIZE >= 8192)
++	frame_sz = rx_ring->rx_buf_len;
++#else
++	frame_sz = ice_rx_pg_size(rx_ring) / 2;
++#endif
++
++	return frame_sz;
++}
++
+ /**
+  * ice_vsi_cfg_rxq - Configure an Rx queue
+  * @ring: the ring being configured
+@@ -576,7 +595,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
+ 		}
+ 	}
+ 
+-	xdp_init_buff(&ring->xdp, ice_rx_pg_size(ring) / 2, &ring->xdp_rxq);
++	xdp_init_buff(&ring->xdp, ice_get_frame_sz(ring), &ring->xdp_rxq);
+ 	ring->xdp.data = NULL;
+ 	ring->xdp_ext.pkt_ctx = &ring->pkt_ctx;
+ 	err = ice_setup_rx_ctx(ring);
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 8d25b69812698..c9bc3f1add5d3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -521,30 +521,6 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring)
+ 	return -ENOMEM;
+ }
+ 
+-/**
+- * ice_rx_frame_truesize
+- * @rx_ring: ptr to Rx ring
+- * @size: size
+- *
+- * calculate the truesize with taking into the account PAGE_SIZE of
+- * underlying arch
+- */
+-static unsigned int
+-ice_rx_frame_truesize(struct ice_rx_ring *rx_ring, const unsigned int size)
+-{
+-	unsigned int truesize;
+-
+-#if (PAGE_SIZE < 8192)
+-	truesize = ice_rx_pg_size(rx_ring) / 2; /* Must be power-of-2 */
+-#else
+-	truesize = rx_ring->rx_offset ?
+-		SKB_DATA_ALIGN(rx_ring->rx_offset + size) +
+-		SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) :
+-		SKB_DATA_ALIGN(size);
+-#endif
+-	return truesize;
+-}
+-
+ /**
+  * ice_run_xdp - Executes an XDP program on initialized xdp_buff
+  * @rx_ring: Rx ring
+@@ -837,16 +813,15 @@ ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf)
+ 	if (!dev_page_is_reusable(page))
+ 		return false;
+ 
+-#if (PAGE_SIZE < 8192)
+ 	/* if we are only owner of page we can reuse it */
+ 	if (unlikely(rx_buf->pgcnt - pagecnt_bias > 1))
+ 		return false;
+-#else
++#if (PAGE_SIZE >= 8192)
+ #define ICE_LAST_OFFSET \
+-	(SKB_WITH_OVERHEAD(PAGE_SIZE) - ICE_RXBUF_2048)
++	(SKB_WITH_OVERHEAD(PAGE_SIZE) - ICE_RXBUF_3072)
+ 	if (rx_buf->page_offset > ICE_LAST_OFFSET)
+ 		return false;
+-#endif /* PAGE_SIZE < 8192) */
++#endif /* PAGE_SIZE >= 8192) */
+ 
+ 	/* If we have drained the page fragment pool we need to update
+ 	 * the pagecnt_bias and page count so that we fully restock the
+@@ -949,12 +924,7 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
+ 	struct ice_rx_buf *rx_buf;
+ 
+ 	rx_buf = &rx_ring->rx_buf[ntc];
+-	rx_buf->pgcnt =
+-#if (PAGE_SIZE < 8192)
+-		page_count(rx_buf->page);
+-#else
+-		0;
+-#endif
++	rx_buf->pgcnt = page_count(rx_buf->page);
+ 	prefetchw(rx_buf->page);
+ 
+ 	if (!size)
+@@ -1160,11 +1130,6 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ 	bool failure;
+ 	u32 first;
+ 
+-	/* Frame size depend on rx_ring setup when PAGE_SIZE=4K */
+-#if (PAGE_SIZE < 8192)
+-	xdp->frame_sz = ice_rx_frame_truesize(rx_ring, 0);
+-#endif
+-
+ 	xdp_prog = READ_ONCE(rx_ring->xdp_prog);
+ 	if (xdp_prog) {
+ 		xdp_ring = rx_ring->xdp_ring;
+@@ -1223,10 +1188,6 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ 			hard_start = page_address(rx_buf->page) + rx_buf->page_offset -
+ 				     offset;
+ 			xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
+-#if (PAGE_SIZE > 4096)
+-			/* At larger PAGE_SIZE, frame_sz depend on len size */
+-			xdp->frame_sz = ice_rx_frame_truesize(rx_ring, size);
+-#endif
+ 			xdp_buff_clear_frags_flag(xdp);
+ 		} else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) {
+ 			break;
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index fce2930ae6af7..b6aa449aa56af 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -4809,6 +4809,7 @@ static void igb_set_rx_buffer_len(struct igb_adapter *adapter,
+ 
+ #if (PAGE_SIZE < 8192)
+ 	if (adapter->max_frame_size > IGB_MAX_FRAME_BUILD_SKB ||
++	    IGB_2K_TOO_SMALL_WITH_PADDING ||
+ 	    rd32(E1000_RCTL) & E1000_RCTL_SBP)
+ 		set_ring_uses_large_buffer(rx_ring);
+ #endif
+diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h
+index 5f92b3c7c3d4a..511384f3ec5cb 100644
+--- a/drivers/net/ethernet/intel/igc/igc_defines.h
++++ b/drivers/net/ethernet/intel/igc/igc_defines.h
+@@ -404,6 +404,12 @@
+ #define IGC_DTXMXPKTSZ_TSN	0x19 /* 1600 bytes of max TX DMA packet size */
+ #define IGC_DTXMXPKTSZ_DEFAULT	0x98 /* 9728-byte Jumbo frames */
+ 
++/* Retry Buffer Control */
++#define IGC_RETX_CTL			0x041C
++#define IGC_RETX_CTL_WATERMARK_MASK	0xF
++#define IGC_RETX_CTL_QBVFULLTH_SHIFT	8 /* QBV Retry Buffer Full Threshold */
++#define IGC_RETX_CTL_QBVFULLEN	0x1000 /* Enable QBV Retry Buffer Full Threshold */
++
+ /* Transmit Scheduling Latency */
+ /* Latency between transmission scheduling (LaunchTime) and the time
+  * the packet is transmitted to the network in nanosecond.
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 33069880c86c0..3041f8142324f 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -6319,12 +6319,16 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
+ 	if (!validate_schedule(adapter, qopt))
+ 		return -EINVAL;
+ 
++	igc_ptp_read(adapter, &now);
++
++	if (igc_tsn_is_taprio_activated_by_user(adapter) &&
++	    is_base_time_past(qopt->base_time, &now))
++		adapter->qbv_config_change_errors++;
++
+ 	adapter->cycle_time = qopt->cycle_time;
+ 	adapter->base_time = qopt->base_time;
+ 	adapter->taprio_offload_enable = true;
+ 
+-	igc_ptp_read(adapter, &now);
+-
+ 	for (n = 0; n < qopt->num_entries; n++) {
+ 		struct tc_taprio_sched_entry *e = &qopt->entries[n];
+ 
+diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.c b/drivers/net/ethernet/intel/igc/igc_tsn.c
+index 22cefb1eeedfa..d68fa7f3d5f07 100644
+--- a/drivers/net/ethernet/intel/igc/igc_tsn.c
++++ b/drivers/net/ethernet/intel/igc/igc_tsn.c
+@@ -49,12 +49,19 @@ static unsigned int igc_tsn_new_flags(struct igc_adapter *adapter)
+ 	return new_flags;
+ }
+ 
++static bool igc_tsn_is_tx_mode_in_tsn(struct igc_adapter *adapter)
++{
++	struct igc_hw *hw = &adapter->hw;
++
++	return !!(rd32(IGC_TQAVCTRL) & IGC_TQAVCTRL_TRANSMIT_MODE_TSN);
++}
++
+ void igc_tsn_adjust_txtime_offset(struct igc_adapter *adapter)
+ {
+ 	struct igc_hw *hw = &adapter->hw;
+ 	u16 txoffset;
+ 
+-	if (!is_any_launchtime(adapter))
++	if (!igc_tsn_is_tx_mode_in_tsn(adapter))
+ 		return;
+ 
+ 	switch (adapter->link_speed) {
+@@ -78,6 +85,23 @@ void igc_tsn_adjust_txtime_offset(struct igc_adapter *adapter)
+ 	wr32(IGC_GTXOFFSET, txoffset);
+ }
+ 
++static void igc_tsn_restore_retx_default(struct igc_adapter *adapter)
++{
++	struct igc_hw *hw = &adapter->hw;
++	u32 retxctl;
++
++	retxctl = rd32(IGC_RETX_CTL) & IGC_RETX_CTL_WATERMARK_MASK;
++	wr32(IGC_RETX_CTL, retxctl);
++}
++
++bool igc_tsn_is_taprio_activated_by_user(struct igc_adapter *adapter)
++{
++	struct igc_hw *hw = &adapter->hw;
++
++	return (rd32(IGC_BASET_H) || rd32(IGC_BASET_L)) &&
++		adapter->taprio_offload_enable;
++}
++
+ /* Returns the TSN specific registers to their default values after
+  * the adapter is reset.
+  */
+@@ -91,6 +115,9 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter)
+ 	wr32(IGC_TXPBS, I225_TXPBSIZE_DEFAULT);
+ 	wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_DEFAULT);
+ 
++	if (igc_is_device_id_i226(hw))
++		igc_tsn_restore_retx_default(adapter);
++
+ 	tqavctrl = rd32(IGC_TQAVCTRL);
+ 	tqavctrl &= ~(IGC_TQAVCTRL_TRANSMIT_MODE_TSN |
+ 		      IGC_TQAVCTRL_ENHANCED_QAV | IGC_TQAVCTRL_FUTSCDDIS);
+@@ -111,6 +138,25 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter)
+ 	return 0;
+ }
+ 
++/* To partially fix i226 HW errata, reduce MAC internal buffering from 192 Bytes
++ * to 88 Bytes by setting RETX_CTL register using the recommendation from:
++ * a) Ethernet Controller I225/I226 Specification Update Rev 2.1
++ *    Item 9: TSN: Packet Transmission Might Cross the Qbv Window
++ * b) I225/6 SW User Manual Rev 1.2.4: Section 8.11.5 Retry Buffer Control
++ */
++static void igc_tsn_set_retx_qbvfullthreshold(struct igc_adapter *adapter)
++{
++	struct igc_hw *hw = &adapter->hw;
++	u32 retxctl, watermark;
++
++	retxctl = rd32(IGC_RETX_CTL);
++	watermark = retxctl & IGC_RETX_CTL_WATERMARK_MASK;
++	/* Set QBVFULLTH value using watermark and set QBVFULLEN */
++	retxctl |= (watermark << IGC_RETX_CTL_QBVFULLTH_SHIFT) |
++		   IGC_RETX_CTL_QBVFULLEN;
++	wr32(IGC_RETX_CTL, retxctl);
++}
++
+ static int igc_tsn_enable_offload(struct igc_adapter *adapter)
+ {
+ 	struct igc_hw *hw = &adapter->hw;
+@@ -123,6 +169,9 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter)
+ 	wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_TSN);
+ 	wr32(IGC_TXPBS, IGC_TXPBSIZE_TSN);
+ 
++	if (igc_is_device_id_i226(hw))
++		igc_tsn_set_retx_qbvfullthreshold(adapter);
++
+ 	for (i = 0; i < adapter->num_tx_queues; i++) {
+ 		struct igc_ring *ring = adapter->tx_ring[i];
+ 		u32 txqctl = 0;
+@@ -262,14 +311,6 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter)
+ 		s64 n = div64_s64(ktime_sub_ns(systim, base_time), cycle);
+ 
+ 		base_time = ktime_add_ns(base_time, (n + 1) * cycle);
+-
+-		/* Increase the counter if scheduling into the past while
+-		 * Gate Control List (GCL) is running.
+-		 */
+-		if ((rd32(IGC_BASET_H) || rd32(IGC_BASET_L)) &&
+-		    (adapter->tc_setup_type == TC_SETUP_QDISC_TAPRIO) &&
+-		    (adapter->qbv_count > 1))
+-			adapter->qbv_config_change_errors++;
+ 	} else {
+ 		if (igc_is_device_id_i226(hw)) {
+ 			ktime_t adjust_time, expires_time;
+@@ -331,15 +372,22 @@ int igc_tsn_reset(struct igc_adapter *adapter)
+ 	return err;
+ }
+ 
+-int igc_tsn_offload_apply(struct igc_adapter *adapter)
++static bool igc_tsn_will_tx_mode_change(struct igc_adapter *adapter)
+ {
+-	struct igc_hw *hw = &adapter->hw;
++	bool any_tsn_enabled = !!(igc_tsn_new_flags(adapter) &
++				  IGC_FLAG_TSN_ANY_ENABLED);
+ 
+-	/* Per I225/6 HW Design Section 7.5.2.1, transmit mode
+-	 * cannot be changed dynamically. Require reset the adapter.
++	return (any_tsn_enabled && !igc_tsn_is_tx_mode_in_tsn(adapter)) ||
++	       (!any_tsn_enabled && igc_tsn_is_tx_mode_in_tsn(adapter));
++}
++
++int igc_tsn_offload_apply(struct igc_adapter *adapter)
++{
++	/* Per I225/6 HW Design Section 7.5.2.1 guideline, if the Tx mode
++	 * changes between legacy and TSN, the adapter must be reset.
+ 	 */
+ 	if (netif_running(adapter->netdev) &&
+-	    (igc_is_device_id_i225(hw) || !adapter->qbv_count)) {
++	    igc_tsn_will_tx_mode_change(adapter)) {
+ 		schedule_work(&adapter->reset_task);
+ 		return 0;
+ 	}
+diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.h b/drivers/net/ethernet/intel/igc/igc_tsn.h
+index b53e6af560b73..98ec845a86bf0 100644
+--- a/drivers/net/ethernet/intel/igc/igc_tsn.h
++++ b/drivers/net/ethernet/intel/igc/igc_tsn.h
+@@ -7,5 +7,6 @@
+ int igc_tsn_offload_apply(struct igc_adapter *adapter);
+ int igc_tsn_reset(struct igc_adapter *adapter);
+ void igc_tsn_adjust_txtime_offset(struct igc_adapter *adapter);
++bool igc_tsn_is_taprio_activated_by_user(struct igc_adapter *adapter);
+ 
+ #endif /* _IGC_BASE_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+index 3e09d22858147..daf4b951e9059 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+@@ -632,7 +632,9 @@ int rvu_mbox_handler_cpt_inline_ipsec_cfg(struct rvu *rvu,
+ 	return ret;
+ }
+ 
+-static bool is_valid_offset(struct rvu *rvu, struct cpt_rd_wr_reg_msg *req)
++static bool validate_and_update_reg_offset(struct rvu *rvu,
++					   struct cpt_rd_wr_reg_msg *req,
++					   u64 *reg_offset)
+ {
+ 	u64 offset = req->reg_offset;
+ 	int blkaddr, num_lfs, lf;
+@@ -663,6 +665,11 @@ static bool is_valid_offset(struct rvu *rvu, struct cpt_rd_wr_reg_msg *req)
+ 		if (lf < 0)
+ 			return false;
+ 
++		/* Translate local LF's offset to global CPT LF's offset to
++		 * access LFX register.
++		 */
++		*reg_offset = (req->reg_offset & 0xFF000) + (lf << 3);
++
+ 		return true;
+ 	} else if (!(req->hdr.pcifunc & RVU_PFVF_FUNC_MASK)) {
+ 		/* Registers that can be accessed from PF */
+@@ -697,7 +704,7 @@ int rvu_mbox_handler_cpt_rd_wr_register(struct rvu *rvu,
+ 					struct cpt_rd_wr_reg_msg *rsp)
+ {
+ 	u64 offset = req->reg_offset;
+-	int blkaddr, lf;
++	int blkaddr;
+ 
+ 	blkaddr = validate_and_get_cpt_blkaddr(req->blkaddr);
+ 	if (blkaddr < 0)
+@@ -708,18 +715,10 @@ int rvu_mbox_handler_cpt_rd_wr_register(struct rvu *rvu,
+ 	    !is_cpt_vf(rvu, req->hdr.pcifunc))
+ 		return CPT_AF_ERR_ACCESS_DENIED;
+ 
+-	if (!is_valid_offset(rvu, req))
++	if (!validate_and_update_reg_offset(rvu, req, &offset))
+ 		return CPT_AF_ERR_ACCESS_DENIED;
+ 
+-	/* Translate local LF used by VFs to global CPT LF */
+-	lf = rvu_get_lf(rvu, &rvu->hw->block[blkaddr], req->hdr.pcifunc,
+-			(offset & 0xFFF) >> 3);
+-
+-	/* Translate local LF's offset to global CPT LF's offset */
+-	offset &= 0xFF000;
+-	offset += lf << 3;
+-
+-	rsp->reg_offset = offset;
++	rsp->reg_offset = req->reg_offset;
+ 	rsp->ret_val = req->ret_val;
+ 	rsp->is_write = req->is_write;
+ 
+diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c
+index 61334a71058c7..e212a4ba92751 100644
+--- a/drivers/net/ethernet/mediatek/mtk_wed.c
++++ b/drivers/net/ethernet/mediatek/mtk_wed.c
+@@ -2666,14 +2666,15 @@ mtk_wed_setup_tc_block_cb(enum tc_setup_type type, void *type_data, void *cb_pri
+ {
+ 	struct mtk_wed_flow_block_priv *priv = cb_priv;
+ 	struct flow_cls_offload *cls = type_data;
+-	struct mtk_wed_hw *hw = priv->hw;
++	struct mtk_wed_hw *hw = NULL;
+ 
+-	if (!tc_can_offload(priv->dev))
++	if (!priv || !tc_can_offload(priv->dev))
+ 		return -EOPNOTSUPP;
+ 
+ 	if (type != TC_SETUP_CLSFLOWER)
+ 		return -EOPNOTSUPP;
+ 
++	hw = priv->hw;
+ 	return mtk_flow_offload_cmd(hw->eth, cls, hw->index);
+ }
+ 
+@@ -2729,6 +2730,7 @@ mtk_wed_setup_tc_block(struct mtk_wed_hw *hw, struct net_device *dev,
+ 			flow_block_cb_remove(block_cb, f);
+ 			list_del(&block_cb->driver_list);
+ 			kfree(block_cb->cb_priv);
++			block_cb->cb_priv = NULL;
+ 		}
+ 		return 0;
+ 	default:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+index 22918b2ef7f12..09433b91be176 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+@@ -146,7 +146,9 @@ static int mlx5e_tx_reporter_timeout_recover(void *ctx)
+ 		return err;
+ 	}
+ 
++	mutex_lock(&priv->state_lock);
+ 	err = mlx5e_safe_reopen_channels(priv);
++	mutex_unlock(&priv->state_lock);
+ 	if (!err) {
+ 		to_ctx->status = 1; /* all channels recovered */
+ 		return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
+index 3eccdadc03578..773624bb2c5d5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
+@@ -734,7 +734,7 @@ mlx5e_ethtool_flow_replace(struct mlx5e_priv *priv,
+ 	if (num_tuples <= 0) {
+ 		netdev_warn(priv->netdev, "%s: flow is not valid %d\n",
+ 			    __func__, num_tuples);
+-		return num_tuples;
++		return num_tuples < 0 ? num_tuples : -EINVAL;
+ 	}
+ 
+ 	eth_ft = get_flow_table(priv, fs, num_tuples);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index eedbcba226894..409f525f1703c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3005,15 +3005,18 @@ int mlx5e_update_tx_netdev_queues(struct mlx5e_priv *priv)
+ static void mlx5e_set_default_xps_cpumasks(struct mlx5e_priv *priv,
+ 					   struct mlx5e_params *params)
+ {
+-	struct mlx5_core_dev *mdev = priv->mdev;
+-	int num_comp_vectors, ix, irq;
+-
+-	num_comp_vectors = mlx5_comp_vectors_max(mdev);
++	int ix;
+ 
+ 	for (ix = 0; ix < params->num_channels; ix++) {
++		int num_comp_vectors, irq, vec_ix;
++		struct mlx5_core_dev *mdev;
++
++		mdev = mlx5_sd_ch_ix_get_dev(priv->mdev, ix);
++		num_comp_vectors = mlx5_comp_vectors_max(mdev);
+ 		cpumask_clear(priv->scratchpad.cpumask);
++		vec_ix = mlx5_sd_ch_ix_get_vec_ix(mdev, ix);
+ 
+-		for (irq = ix; irq < num_comp_vectors; irq += params->num_channels) {
++		for (irq = vec_ix; irq < num_comp_vectors; irq += params->num_channels) {
+ 			int cpu = mlx5_comp_vector_get_cpu(mdev, irq);
+ 
+ 			cpumask_set_cpu(cpu, priv->scratchpad.cpumask);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c
+index 234cd00f71a1c..b7d4b1a2baf2e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c
+@@ -386,7 +386,8 @@ static int ipsec_fs_roce_tx_mpv_create(struct mlx5_core_dev *mdev,
+ 		return -EOPNOTSUPP;
+ 
+ 	peer_priv = mlx5_devcom_get_next_peer_data(*ipsec_roce->devcom, &tmp);
+-	if (!peer_priv) {
++	if (!peer_priv || !peer_priv->ipsec) {
++		mlx5_core_err(mdev, "IPsec not supported on master device\n");
+ 		err = -EOPNOTSUPP;
+ 		goto release_peer;
+ 	}
+@@ -455,7 +456,8 @@ static int ipsec_fs_roce_rx_mpv_create(struct mlx5_core_dev *mdev,
+ 		return -EOPNOTSUPP;
+ 
+ 	peer_priv = mlx5_devcom_get_next_peer_data(*ipsec_roce->devcom, &tmp);
+-	if (!peer_priv) {
++	if (!peer_priv || !peer_priv->ipsec) {
++		mlx5_core_err(mdev, "IPsec not supported on master device\n");
+ 		err = -EOPNOTSUPP;
+ 		goto release_peer;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/sd.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/sd.c
+index f6deb5a3f8202..eeb0b7ea05f12 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/sd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/sd.c
+@@ -126,7 +126,7 @@ static bool mlx5_sd_is_supported(struct mlx5_core_dev *dev, u8 host_buses)
+ }
+ 
+ static int mlx5_query_sd(struct mlx5_core_dev *dev, bool *sdm,
+-			 u8 *host_buses, u8 *sd_group)
++			 u8 *host_buses)
+ {
+ 	u32 out[MLX5_ST_SZ_DW(mpir_reg)];
+ 	int err;
+@@ -135,10 +135,6 @@ static int mlx5_query_sd(struct mlx5_core_dev *dev, bool *sdm,
+ 	if (err)
+ 		return err;
+ 
+-	err = mlx5_query_nic_vport_sd_group(dev, sd_group);
+-	if (err)
+-		return err;
+-
+ 	*sdm = MLX5_GET(mpir_reg, out, sdm);
+ 	*host_buses = MLX5_GET(mpir_reg, out, host_buses);
+ 
+@@ -166,19 +162,23 @@ static int sd_init(struct mlx5_core_dev *dev)
+ 	if (mlx5_core_is_ecpf(dev))
+ 		return 0;
+ 
++	err = mlx5_query_nic_vport_sd_group(dev, &sd_group);
++	if (err)
++		return err;
++
++	if (!sd_group)
++		return 0;
++
+ 	if (!MLX5_CAP_MCAM_REG(dev, mpir))
+ 		return 0;
+ 
+-	err = mlx5_query_sd(dev, &sdm, &host_buses, &sd_group);
++	err = mlx5_query_sd(dev, &sdm, &host_buses);
+ 	if (err)
+ 		return err;
+ 
+ 	if (!sdm)
+ 		return 0;
+ 
+-	if (!sd_group)
+-		return 0;
+-
+ 	group_id = mlx5_sd_group_id(dev, sd_group);
+ 
+ 	if (!mlx5_sd_is_supported(dev, host_buses)) {
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h
+index bc94e75a7aebd..e7777700ee18a 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h
+@@ -40,6 +40,7 @@
+  */
+ #define MLXBF_GIGE_BCAST_MAC_FILTER_IDX 0
+ #define MLXBF_GIGE_LOCAL_MAC_FILTER_IDX 1
++#define MLXBF_GIGE_MAX_FILTER_IDX       3
+ 
+ /* Define for broadcast MAC literal */
+ #define BCAST_MAC_ADDR 0xFFFFFFFFFFFF
+@@ -175,6 +176,13 @@ enum mlxbf_gige_res {
+ int mlxbf_gige_mdio_probe(struct platform_device *pdev,
+ 			  struct mlxbf_gige *priv);
+ void mlxbf_gige_mdio_remove(struct mlxbf_gige *priv);
++
++void mlxbf_gige_enable_multicast_rx(struct mlxbf_gige *priv);
++void mlxbf_gige_disable_multicast_rx(struct mlxbf_gige *priv);
++void mlxbf_gige_enable_mac_rx_filter(struct mlxbf_gige *priv,
++				     unsigned int index);
++void mlxbf_gige_disable_mac_rx_filter(struct mlxbf_gige *priv,
++				      unsigned int index);
+ void mlxbf_gige_set_mac_rx_filter(struct mlxbf_gige *priv,
+ 				  unsigned int index, u64 dmac);
+ void mlxbf_gige_get_mac_rx_filter(struct mlxbf_gige *priv,
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
+index b157f0f1c5a88..385a56ac73481 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
+@@ -168,6 +168,10 @@ static int mlxbf_gige_open(struct net_device *netdev)
+ 	if (err)
+ 		goto napi_deinit;
+ 
++	mlxbf_gige_enable_mac_rx_filter(priv, MLXBF_GIGE_BCAST_MAC_FILTER_IDX);
++	mlxbf_gige_enable_mac_rx_filter(priv, MLXBF_GIGE_LOCAL_MAC_FILTER_IDX);
++	mlxbf_gige_enable_multicast_rx(priv);
++
+ 	/* Set bits in INT_EN that we care about */
+ 	int_en = MLXBF_GIGE_INT_EN_HW_ACCESS_ERROR |
+ 		 MLXBF_GIGE_INT_EN_TX_CHECKSUM_INPUTS |
+@@ -379,6 +383,7 @@ static int mlxbf_gige_probe(struct platform_device *pdev)
+ 	void __iomem *plu_base;
+ 	void __iomem *base;
+ 	int addr, phy_irq;
++	unsigned int i;
+ 	int err;
+ 
+ 	base = devm_platform_ioremap_resource(pdev, MLXBF_GIGE_RES_MAC);
+@@ -423,6 +428,11 @@ static int mlxbf_gige_probe(struct platform_device *pdev)
+ 	priv->rx_q_entries = MLXBF_GIGE_DEFAULT_RXQ_SZ;
+ 	priv->tx_q_entries = MLXBF_GIGE_DEFAULT_TXQ_SZ;
+ 
++	for (i = 0; i <= MLXBF_GIGE_MAX_FILTER_IDX; i++)
++		mlxbf_gige_disable_mac_rx_filter(priv, i);
++	mlxbf_gige_disable_multicast_rx(priv);
++	mlxbf_gige_disable_promisc(priv);
++
+ 	/* Write initial MAC address to hardware */
+ 	mlxbf_gige_initial_mac(priv);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h
+index 98a8681c21b9c..4d14cb13fd64e 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h
+@@ -62,6 +62,8 @@
+ #define MLXBF_GIGE_TX_STATUS_DATA_FIFO_FULL           BIT(1)
+ #define MLXBF_GIGE_RX_MAC_FILTER_DMAC_RANGE_START     0x0520
+ #define MLXBF_GIGE_RX_MAC_FILTER_DMAC_RANGE_END       0x0528
++#define MLXBF_GIGE_RX_MAC_FILTER_GENERAL              0x0530
++#define MLXBF_GIGE_RX_MAC_FILTER_EN_MULTICAST         BIT(1)
+ #define MLXBF_GIGE_RX_MAC_FILTER_COUNT_DISC           0x0540
+ #define MLXBF_GIGE_RX_MAC_FILTER_COUNT_DISC_EN        BIT(0)
+ #define MLXBF_GIGE_RX_MAC_FILTER_COUNT_PASS           0x0548
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c
+index 6999843584934..eb62620b63c7f 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c
+@@ -11,15 +11,31 @@
+ #include "mlxbf_gige.h"
+ #include "mlxbf_gige_regs.h"
+ 
+-void mlxbf_gige_set_mac_rx_filter(struct mlxbf_gige *priv,
+-				  unsigned int index, u64 dmac)
++void mlxbf_gige_enable_multicast_rx(struct mlxbf_gige *priv)
+ {
+ 	void __iomem *base = priv->base;
+-	u64 control;
++	u64 data;
+ 
+-	/* Write destination MAC to specified MAC RX filter */
+-	writeq(dmac, base + MLXBF_GIGE_RX_MAC_FILTER +
+-	       (index * MLXBF_GIGE_RX_MAC_FILTER_STRIDE));
++	data = readq(base + MLXBF_GIGE_RX_MAC_FILTER_GENERAL);
++	data |= MLXBF_GIGE_RX_MAC_FILTER_EN_MULTICAST;
++	writeq(data, base + MLXBF_GIGE_RX_MAC_FILTER_GENERAL);
++}
++
++void mlxbf_gige_disable_multicast_rx(struct mlxbf_gige *priv)
++{
++	void __iomem *base = priv->base;
++	u64 data;
++
++	data = readq(base + MLXBF_GIGE_RX_MAC_FILTER_GENERAL);
++	data &= ~MLXBF_GIGE_RX_MAC_FILTER_EN_MULTICAST;
++	writeq(data, base + MLXBF_GIGE_RX_MAC_FILTER_GENERAL);
++}
++
++void mlxbf_gige_enable_mac_rx_filter(struct mlxbf_gige *priv,
++				     unsigned int index)
++{
++	void __iomem *base = priv->base;
++	u64 control;
+ 
+ 	/* Enable MAC receive filter mask for specified index */
+ 	control = readq(base + MLXBF_GIGE_CONTROL);
+@@ -27,6 +43,28 @@ void mlxbf_gige_set_mac_rx_filter(struct mlxbf_gige *priv,
+ 	writeq(control, base + MLXBF_GIGE_CONTROL);
+ }
+ 
++void mlxbf_gige_disable_mac_rx_filter(struct mlxbf_gige *priv,
++				      unsigned int index)
++{
++	void __iomem *base = priv->base;
++	u64 control;
++
++	/* Disable MAC receive filter mask for specified index */
++	control = readq(base + MLXBF_GIGE_CONTROL);
++	control &= ~(MLXBF_GIGE_CONTROL_EN_SPECIFIC_MAC << index);
++	writeq(control, base + MLXBF_GIGE_CONTROL);
++}
++
++void mlxbf_gige_set_mac_rx_filter(struct mlxbf_gige *priv,
++				  unsigned int index, u64 dmac)
++{
++	void __iomem *base = priv->base;
++
++	/* Write destination MAC to specified MAC RX filter */
++	writeq(dmac, base + MLXBF_GIGE_RX_MAC_FILTER +
++	       (index * MLXBF_GIGE_RX_MAC_FILTER_STRIDE));
++}
++
+ void mlxbf_gige_get_mac_rx_filter(struct mlxbf_gige *priv,
+ 				  unsigned int index, u64 *dmac)
+ {
+diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
+index ad7ae7ba2b8fc..482b9cd369508 100644
+--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
++++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
+@@ -599,7 +599,11 @@ static void mana_get_rxbuf_cfg(int mtu, u32 *datasize, u32 *alloc_size,
+ 	else
+ 		*headroom = XDP_PACKET_HEADROOM;
+ 
+-	*alloc_size = mtu + MANA_RXBUF_PAD + *headroom;
++	*alloc_size = SKB_DATA_ALIGN(mtu + MANA_RXBUF_PAD + *headroom);
++
++	/* Using page pool in this case, so alloc_size is PAGE_SIZE */
++	if (*alloc_size < PAGE_SIZE)
++		*alloc_size = PAGE_SIZE;
+ 
+ 	*datasize = mtu + ETH_HLEN;
+ }
+@@ -1773,7 +1777,6 @@ static void mana_poll_rx_cq(struct mana_cq *cq)
+ static int mana_cq_handler(void *context, struct gdma_queue *gdma_queue)
+ {
+ 	struct mana_cq *cq = context;
+-	u8 arm_bit;
+ 	int w;
+ 
+ 	WARN_ON_ONCE(cq->gdma_cq != gdma_queue);
+@@ -1784,16 +1787,23 @@ static int mana_cq_handler(void *context, struct gdma_queue *gdma_queue)
+ 		mana_poll_tx_cq(cq);
+ 
+ 	w = cq->work_done;
+-
+-	if (w < cq->budget &&
+-	    napi_complete_done(&cq->napi, w)) {
+-		arm_bit = SET_ARM_BIT;
+-	} else {
+-		arm_bit = 0;
++	cq->work_done_since_doorbell += w;
++
++	if (w < cq->budget) {
++		mana_gd_ring_cq(gdma_queue, SET_ARM_BIT);
++		cq->work_done_since_doorbell = 0;
++		napi_complete_done(&cq->napi, w);
++	} else if (cq->work_done_since_doorbell >
++		   cq->gdma_cq->queue_size / COMP_ENTRY_SIZE * 4) {
++		/* MANA hardware requires at least one doorbell ring every 8
++		 * wraparounds of CQ even if there is no need to arm the CQ.
++		 * This driver rings the doorbell as soon as we have exceeded
++		 * 4 wraparounds.
++		 */
++		mana_gd_ring_cq(gdma_queue, 0);
++		cq->work_done_since_doorbell = 0;
+ 	}
+ 
+-	mana_gd_ring_cq(gdma_queue, arm_bit);
+-
+ 	return w;
+ }
+ 
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index ed2fb44500b0c..f4e027a6fe955 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1099,6 +1099,48 @@ void ocelot_ptp_rx_timestamp(struct ocelot *ocelot, struct sk_buff *skb,
+ }
+ EXPORT_SYMBOL(ocelot_ptp_rx_timestamp);
+ 
++void ocelot_lock_inj_grp(struct ocelot *ocelot, int grp)
++			 __acquires(&ocelot->inj_lock)
++{
++	spin_lock(&ocelot->inj_lock);
++}
++EXPORT_SYMBOL_GPL(ocelot_lock_inj_grp);
++
++void ocelot_unlock_inj_grp(struct ocelot *ocelot, int grp)
++			   __releases(&ocelot->inj_lock)
++{
++	spin_unlock(&ocelot->inj_lock);
++}
++EXPORT_SYMBOL_GPL(ocelot_unlock_inj_grp);
++
++void ocelot_lock_xtr_grp(struct ocelot *ocelot, int grp)
++			 __acquires(&ocelot->xtr_lock)
++{
++	spin_lock(&ocelot->xtr_lock);
++}
++EXPORT_SYMBOL_GPL(ocelot_lock_xtr_grp);
++
++void ocelot_unlock_xtr_grp(struct ocelot *ocelot, int grp)
++			   __releases(&ocelot->xtr_lock)
++{
++	spin_unlock(&ocelot->xtr_lock);
++}
++EXPORT_SYMBOL_GPL(ocelot_unlock_xtr_grp);
++
++void ocelot_lock_xtr_grp_bh(struct ocelot *ocelot, int grp)
++			    __acquires(&ocelot->xtr_lock)
++{
++	spin_lock_bh(&ocelot->xtr_lock);
++}
++EXPORT_SYMBOL_GPL(ocelot_lock_xtr_grp_bh);
++
++void ocelot_unlock_xtr_grp_bh(struct ocelot *ocelot, int grp)
++			      __releases(&ocelot->xtr_lock)
++{
++	spin_unlock_bh(&ocelot->xtr_lock);
++}
++EXPORT_SYMBOL_GPL(ocelot_unlock_xtr_grp_bh);
++
+ int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp, struct sk_buff **nskb)
+ {
+ 	u64 timestamp, src_port, len;
+@@ -1109,6 +1151,8 @@ int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp, struct sk_buff **nskb)
+ 	u32 val, *buf;
+ 	int err;
+ 
++	lockdep_assert_held(&ocelot->xtr_lock);
++
+ 	err = ocelot_xtr_poll_xfh(ocelot, grp, xfh);
+ 	if (err)
+ 		return err;
+@@ -1184,6 +1228,8 @@ bool ocelot_can_inject(struct ocelot *ocelot, int grp)
+ {
+ 	u32 val = ocelot_read(ocelot, QS_INJ_STATUS);
+ 
++	lockdep_assert_held(&ocelot->inj_lock);
++
+ 	if (!(val & QS_INJ_STATUS_FIFO_RDY(BIT(grp))))
+ 		return false;
+ 	if (val & QS_INJ_STATUS_WMARK_REACHED(BIT(grp)))
+@@ -1193,28 +1239,55 @@ bool ocelot_can_inject(struct ocelot *ocelot, int grp)
+ }
+ EXPORT_SYMBOL(ocelot_can_inject);
+ 
+-void ocelot_ifh_port_set(void *ifh, int port, u32 rew_op, u32 vlan_tag)
++/**
++ * ocelot_ifh_set_basic - Set basic information in Injection Frame Header
++ * @ifh: Pointer to Injection Frame Header memory
++ * @ocelot: Switch private data structure
++ * @port: Egress port number
++ * @rew_op: Egress rewriter operation for PTP
++ * @skb: Pointer to socket buffer (packet)
++ *
++ * Populate the Injection Frame Header with basic information for this skb: the
++ * analyzer bypass bit, destination port, VLAN info, egress rewriter info.
++ */
++void ocelot_ifh_set_basic(void *ifh, struct ocelot *ocelot, int port,
++			  u32 rew_op, struct sk_buff *skb)
+ {
++	struct ocelot_port *ocelot_port = ocelot->ports[port];
++	struct net_device *dev = skb->dev;
++	u64 vlan_tci, tag_type;
++	int qos_class;
++
++	ocelot_xmit_get_vlan_info(skb, ocelot_port->bridge, &vlan_tci,
++				  &tag_type);
++
++	qos_class = netdev_get_num_tc(dev) ?
++		    netdev_get_prio_tc_map(dev, skb->priority) : skb->priority;
++
++	memset(ifh, 0, OCELOT_TAG_LEN);
+ 	ocelot_ifh_set_bypass(ifh, 1);
++	ocelot_ifh_set_src(ifh, BIT_ULL(ocelot->num_phys_ports));
+ 	ocelot_ifh_set_dest(ifh, BIT_ULL(port));
+-	ocelot_ifh_set_tag_type(ifh, IFH_TAG_TYPE_C);
+-	if (vlan_tag)
+-		ocelot_ifh_set_vlan_tci(ifh, vlan_tag);
++	ocelot_ifh_set_qos_class(ifh, qos_class);
++	ocelot_ifh_set_tag_type(ifh, tag_type);
++	ocelot_ifh_set_vlan_tci(ifh, vlan_tci);
+ 	if (rew_op)
+ 		ocelot_ifh_set_rew_op(ifh, rew_op);
+ }
+-EXPORT_SYMBOL(ocelot_ifh_port_set);
++EXPORT_SYMBOL(ocelot_ifh_set_basic);
+ 
+ void ocelot_port_inject_frame(struct ocelot *ocelot, int port, int grp,
+ 			      u32 rew_op, struct sk_buff *skb)
+ {
+-	u32 ifh[OCELOT_TAG_LEN / 4] = {0};
++	u32 ifh[OCELOT_TAG_LEN / 4];
+ 	unsigned int i, count, last;
+ 
++	lockdep_assert_held(&ocelot->inj_lock);
++
+ 	ocelot_write_rix(ocelot, QS_INJ_CTRL_GAP_SIZE(1) |
+ 			 QS_INJ_CTRL_SOF, QS_INJ_CTRL, grp);
+ 
+-	ocelot_ifh_port_set(ifh, port, rew_op, skb_vlan_tag_get(skb));
++	ocelot_ifh_set_basic(ifh, ocelot, port, rew_op, skb);
+ 
+ 	for (i = 0; i < OCELOT_TAG_LEN / 4; i++)
+ 		ocelot_write_rix(ocelot, ifh[i], QS_INJ_WR, grp);
+@@ -1247,6 +1320,8 @@ EXPORT_SYMBOL(ocelot_port_inject_frame);
+ 
+ void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp)
+ {
++	lockdep_assert_held(&ocelot->xtr_lock);
++
+ 	while (ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp))
+ 		ocelot_read_rix(ocelot, QS_XTR_RD, grp);
+ }
+@@ -2929,6 +3004,8 @@ int ocelot_init(struct ocelot *ocelot)
+ 	mutex_init(&ocelot->fwd_domain_lock);
+ 	spin_lock_init(&ocelot->ptp_clock_lock);
+ 	spin_lock_init(&ocelot->ts_id_lock);
++	spin_lock_init(&ocelot->inj_lock);
++	spin_lock_init(&ocelot->xtr_lock);
+ 
+ 	ocelot->owq = alloc_ordered_workqueue("ocelot-owq", 0);
+ 	if (!ocelot->owq)
+diff --git a/drivers/net/ethernet/mscc/ocelot_fdma.c b/drivers/net/ethernet/mscc/ocelot_fdma.c
+index 312a468321544..00326ae8c708b 100644
+--- a/drivers/net/ethernet/mscc/ocelot_fdma.c
++++ b/drivers/net/ethernet/mscc/ocelot_fdma.c
+@@ -665,8 +665,7 @@ static int ocelot_fdma_prepare_skb(struct ocelot *ocelot, int port, u32 rew_op,
+ 
+ 	ifh = skb_push(skb, OCELOT_TAG_LEN);
+ 	skb_put(skb, ETH_FCS_LEN);
+-	memset(ifh, 0, OCELOT_TAG_LEN);
+-	ocelot_ifh_port_set(ifh, port, rew_op, skb_vlan_tag_get(skb));
++	ocelot_ifh_set_basic(ifh, ocelot, port, rew_op, skb);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mscc/ocelot_vsc7514.c b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+index 993212c3a7da6..c09dd2e3343cb 100644
+--- a/drivers/net/ethernet/mscc/ocelot_vsc7514.c
++++ b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+@@ -51,6 +51,8 @@ static irqreturn_t ocelot_xtr_irq_handler(int irq, void *arg)
+ 	struct ocelot *ocelot = arg;
+ 	int grp = 0, err;
+ 
++	ocelot_lock_xtr_grp(ocelot, grp);
++
+ 	while (ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp)) {
+ 		struct sk_buff *skb;
+ 
+@@ -69,6 +71,8 @@ static irqreturn_t ocelot_xtr_irq_handler(int irq, void *arg)
+ 	if (err < 0)
+ 		ocelot_drain_cpu_queue(ocelot, 0);
+ 
++	ocelot_unlock_xtr_grp(ocelot, grp);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c
+index ec54b18c5fe73..a5e9b779c44d0 100644
+--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c
++++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c
+@@ -124,8 +124,12 @@ static int ngbe_phylink_init(struct wx *wx)
+ 				   MAC_SYM_PAUSE | MAC_ASYM_PAUSE;
+ 	config->mac_managed_pm = true;
+ 
+-	phy_mode = PHY_INTERFACE_MODE_RGMII_ID;
+-	__set_bit(PHY_INTERFACE_MODE_RGMII_ID, config->supported_interfaces);
++	/* The MAC can only add the Tx delay and it cannot be modified.
++	 * So just disable the Tx delay in the PHY; this does not matter
++	 * for the internal PHY.
++	 */
++	phy_mode = PHY_INTERFACE_MODE_RGMII_RXID;
++	__set_bit(PHY_INTERFACE_MODE_RGMII_RXID, config->supported_interfaces);
+ 
+ 	phylink = phylink_create(config, NULL, phy_mode, &ngbe_mac_ops);
+ 	if (IS_ERR(phylink))
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h b/drivers/net/ethernet/xilinx/xilinx_axienet.h
+index fa5500decc960..09c9f9787180b 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
+@@ -160,16 +160,17 @@
+ #define XAE_RCW1_OFFSET		0x00000404 /* Rx Configuration Word 1 */
+ #define XAE_TC_OFFSET		0x00000408 /* Tx Configuration */
+ #define XAE_FCC_OFFSET		0x0000040C /* Flow Control Configuration */
+-#define XAE_EMMC_OFFSET		0x00000410 /* EMAC mode configuration */
+-#define XAE_PHYC_OFFSET		0x00000414 /* RGMII/SGMII configuration */
++#define XAE_EMMC_OFFSET		0x00000410 /* MAC speed configuration */
++#define XAE_PHYC_OFFSET		0x00000414 /* RX Max Frame Configuration */
+ #define XAE_ID_OFFSET		0x000004F8 /* Identification register */
+-#define XAE_MDIO_MC_OFFSET	0x00000500 /* MII Management Config */
+-#define XAE_MDIO_MCR_OFFSET	0x00000504 /* MII Management Control */
+-#define XAE_MDIO_MWD_OFFSET	0x00000508 /* MII Management Write Data */
+-#define XAE_MDIO_MRD_OFFSET	0x0000050C /* MII Management Read Data */
++#define XAE_MDIO_MC_OFFSET	0x00000500 /* MDIO Setup */
++#define XAE_MDIO_MCR_OFFSET	0x00000504 /* MDIO Control */
++#define XAE_MDIO_MWD_OFFSET	0x00000508 /* MDIO Write Data */
++#define XAE_MDIO_MRD_OFFSET	0x0000050C /* MDIO Read Data */
+ #define XAE_UAW0_OFFSET		0x00000700 /* Unicast address word 0 */
+ #define XAE_UAW1_OFFSET		0x00000704 /* Unicast address word 1 */
+-#define XAE_FMI_OFFSET		0x00000708 /* Filter Mask Index */
++#define XAE_FMI_OFFSET		0x00000708 /* Frame Filter Control */
++#define XAE_FFE_OFFSET		0x0000070C /* Frame Filter Enable */
+ #define XAE_AF0_OFFSET		0x00000710 /* Address Filter 0 */
+ #define XAE_AF1_OFFSET		0x00000714 /* Address Filter 1 */
+ 
+@@ -308,7 +309,7 @@
+  */
+ #define XAE_UAW1_UNICASTADDR_MASK	0x0000FFFF
+ 
+-/* Bit masks for Axi Ethernet FMI register */
++/* Bit masks for Axi Ethernet FMC register */
+ #define XAE_FMI_PM_MASK			0x80000000 /* Promis. mode enable */
+ #define XAE_FMI_IND_MASK		0x00000003 /* Index Mask */
+ 
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index fa510f4e26008..559c0d60d9483 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -432,7 +432,7 @@ static int netdev_set_mac_address(struct net_device *ndev, void *p)
+  */
+ static void axienet_set_multicast_list(struct net_device *ndev)
+ {
+-	int i;
++	int i = 0;
+ 	u32 reg, af0reg, af1reg;
+ 	struct axienet_local *lp = netdev_priv(ndev);
+ 
+@@ -450,7 +450,10 @@ static void axienet_set_multicast_list(struct net_device *ndev)
+ 	} else if (!netdev_mc_empty(ndev)) {
+ 		struct netdev_hw_addr *ha;
+ 
+-		i = 0;
++		reg = axienet_ior(lp, XAE_FMI_OFFSET);
++		reg &= ~XAE_FMI_PM_MASK;
++		axienet_iow(lp, XAE_FMI_OFFSET, reg);
++
+ 		netdev_for_each_mc_addr(ha, ndev) {
+ 			if (i >= XAE_MULTICAST_CAM_TABLE_NUM)
+ 				break;
+@@ -469,6 +472,7 @@ static void axienet_set_multicast_list(struct net_device *ndev)
+ 			axienet_iow(lp, XAE_FMI_OFFSET, reg);
+ 			axienet_iow(lp, XAE_AF0_OFFSET, af0reg);
+ 			axienet_iow(lp, XAE_AF1_OFFSET, af1reg);
++			axienet_iow(lp, XAE_FFE_OFFSET, 1);
+ 			i++;
+ 		}
+ 	} else {
+@@ -476,18 +480,15 @@ static void axienet_set_multicast_list(struct net_device *ndev)
+ 		reg &= ~XAE_FMI_PM_MASK;
+ 
+ 		axienet_iow(lp, XAE_FMI_OFFSET, reg);
+-
+-		for (i = 0; i < XAE_MULTICAST_CAM_TABLE_NUM; i++) {
+-			reg = axienet_ior(lp, XAE_FMI_OFFSET) & 0xFFFFFF00;
+-			reg |= i;
+-
+-			axienet_iow(lp, XAE_FMI_OFFSET, reg);
+-			axienet_iow(lp, XAE_AF0_OFFSET, 0);
+-			axienet_iow(lp, XAE_AF1_OFFSET, 0);
+-		}
+-
+ 		dev_info(&ndev->dev, "Promiscuous mode disabled.\n");
+ 	}
++
++	for (; i < XAE_MULTICAST_CAM_TABLE_NUM; i++) {
++		reg = axienet_ior(lp, XAE_FMI_OFFSET) & 0xFFFFFF00;
++		reg |= i;
++		axienet_iow(lp, XAE_FMI_OFFSET, reg);
++		axienet_iow(lp, XAE_FFE_OFFSET, 0);
++	}
+ }
+ 
+ /**
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 427b91aca50d3..0696faf60013e 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -1269,6 +1269,9 @@ static netdev_tx_t gtp_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	if (skb_cow_head(skb, dev->needed_headroom))
+ 		goto tx_err;
+ 
++	if (!pskb_inet_may_pull(skb))
++		goto tx_err;
++
+ 	skb_reset_inner_headers(skb);
+ 
+ 	/* PDP context lookups in gtp_build_skb_*() need rcu read-side lock. */
+diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c
+index a7c7a868c14ce..fca9f7e510b41 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_tx.c
+@@ -124,6 +124,60 @@ static void ath12k_hal_tx_cmd_ext_desc_setup(struct ath12k_base *ab,
+ 						 HAL_TX_MSDU_EXT_INFO1_ENCRYPT_TYPE);
+ }
+ 
++static void ath12k_dp_tx_move_payload(struct sk_buff *skb,
++				      unsigned long delta,
++				      bool head)
++{
++	unsigned long len = skb->len;
++
++	if (head) {
++		skb_push(skb, delta);
++		memmove(skb->data, skb->data + delta, len);
++		skb_trim(skb, len);
++	} else {
++		skb_put(skb, delta);
++		memmove(skb->data + delta, skb->data, len);
++		skb_pull(skb, delta);
++	}
++}
++
++static int ath12k_dp_tx_align_payload(struct ath12k_base *ab,
++				      struct sk_buff **pskb)
++{
++	u32 iova_mask = ab->hw_params->iova_mask;
++	unsigned long offset, delta1, delta2;
++	struct sk_buff *skb2, *skb = *pskb;
++	unsigned int headroom = skb_headroom(skb);
++	int tailroom = skb_tailroom(skb);
++	int ret = 0;
++
++	offset = (unsigned long)skb->data & iova_mask;
++	delta1 = offset;
++	delta2 = iova_mask - offset + 1;
++
++	if (headroom >= delta1) {
++		ath12k_dp_tx_move_payload(skb, delta1, true);
++	} else if (tailroom >= delta2) {
++		ath12k_dp_tx_move_payload(skb, delta2, false);
++	} else {
++		skb2 = skb_realloc_headroom(skb, iova_mask);
++		if (!skb2) {
++			ret = -ENOMEM;
++			goto out;
++		}
++
++		dev_kfree_skb_any(skb);
++
++		offset = (unsigned long)skb2->data & iova_mask;
++		if (offset)
++			ath12k_dp_tx_move_payload(skb2, offset, true);
++		*pskb = skb2;
++	}
++
++out:
++	return ret;
++}
++
+ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_vif *arvif,
+ 		 struct sk_buff *skb)
+ {
+@@ -145,6 +199,7 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_vif *arvif,
+ 	u8 ring_selector, ring_map = 0;
+ 	bool tcl_ring_retry;
+ 	bool msdu_ext_desc = false;
++	u32 iova_mask = ab->hw_params->iova_mask;
+ 
+ 	if (test_bit(ATH12K_FLAG_CRASH_FLUSH, &ar->ab->dev_flags))
+ 		return -ESHUTDOWN;
+@@ -240,6 +295,23 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_vif *arvif,
+ 		goto fail_remove_tx_buf;
+ 	}
+ 
++	if (iova_mask &&
++	    (unsigned long)skb->data & iova_mask) {
++		ret = ath12k_dp_tx_align_payload(ab, &skb);
++		if (ret) {
++			ath12k_warn(ab, "failed to align TX buffer %d\n", ret);
++			/* don't bail out; give the original buffer
++			 * a chance even if it is unaligned.
++			 */
++			goto map;
++		}
++
++		/* hdr is pointing to a wrong place after alignment,
++		 * so refresh it for later use.
++		 */
++		hdr = (void *)skb->data;
++	}
++map:
+ 	ti.paddr = dma_map_single(ab->dev, skb->data, skb->len, DMA_TO_DEVICE);
+ 	if (dma_mapping_error(ab->dev, ti.paddr)) {
+ 		atomic_inc(&ab->soc_stats.tx_err.misc_fail);
+diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
+index bff8cf97a18c6..2a92147d15fa1 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.c
++++ b/drivers/net/wireless/ath/ath12k/hw.c
+@@ -922,6 +922,8 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
+ 		.supports_sta_ps = false,
+ 
+ 		.acpi_guid = NULL,
++
++		.iova_mask = 0,
+ 	},
+ 	{
+ 		.name = "wcn7850 hw2.0",
+@@ -997,6 +999,8 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
+ 		.supports_sta_ps = true,
+ 
+ 		.acpi_guid = &wcn7850_uuid,
++
++		.iova_mask = ATH12K_PCIE_MAX_PAYLOAD_SIZE - 1,
+ 	},
+ 	{
+ 		.name = "qcn9274 hw2.0",
+@@ -1067,6 +1071,8 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
+ 		.supports_sta_ps = false,
+ 
+ 		.acpi_guid = NULL,
++
++		.iova_mask = 0,
+ 	},
+ };
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/hw.h b/drivers/net/wireless/ath/ath12k/hw.h
+index 2a314cfc8cb84..400bda17e02f6 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.h
++++ b/drivers/net/wireless/ath/ath12k/hw.h
+@@ -96,6 +96,8 @@
+ #define ATH12K_M3_FILE			"m3.bin"
+ #define ATH12K_REGDB_FILE_NAME		"regdb.bin"
+ 
++#define ATH12K_PCIE_MAX_PAYLOAD_SIZE	128
++
+ enum ath12k_hw_rate_cck {
+ 	ATH12K_HW_RATE_CCK_LP_11M = 0,
+ 	ATH12K_HW_RATE_CCK_LP_5_5M,
+@@ -214,6 +216,8 @@ struct ath12k_hw_params {
+ 	bool supports_sta_ps;
+ 
+ 	const guid_t *acpi_guid;
++
++	u32 iova_mask;
+ };
+ 
+ struct ath12k_hw_ops {
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index ead37a4e002a2..8474e25d2ac64 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -8737,6 +8737,7 @@ static int ath12k_mac_hw_register(struct ath12k_hw *ah)
+ 
+ 	hw->vif_data_size = sizeof(struct ath12k_vif);
+ 	hw->sta_data_size = sizeof(struct ath12k_sta);
++	hw->extra_tx_headroom = ab->hw_params->iova_mask;
+ 
+ 	wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_CQM_RSSI_LIST);
+ 	wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_STA_TX_PWR);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 5fe0e671ecb36..826b768196e28 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -4320,9 +4320,16 @@ brcmf_pmksa_v3_op(struct brcmf_if *ifp, struct cfg80211_pmksa *pmksa,
+ 		/* Single PMK operation */
+ 		pmk_op->count = cpu_to_le16(1);
+ 		length += sizeof(struct brcmf_pmksa_v3);
+-		memcpy(pmk_op->pmk[0].bssid, pmksa->bssid, ETH_ALEN);
+-		memcpy(pmk_op->pmk[0].pmkid, pmksa->pmkid, WLAN_PMKID_LEN);
+-		pmk_op->pmk[0].pmkid_len = WLAN_PMKID_LEN;
++		if (pmksa->bssid)
++			memcpy(pmk_op->pmk[0].bssid, pmksa->bssid, ETH_ALEN);
++		if (pmksa->pmkid) {
++			memcpy(pmk_op->pmk[0].pmkid, pmksa->pmkid, WLAN_PMKID_LEN);
++			pmk_op->pmk[0].pmkid_len = WLAN_PMKID_LEN;
++		}
++		if (pmksa->ssid && pmksa->ssid_len) {
++			memcpy(pmk_op->pmk[0].ssid.SSID, pmksa->ssid, pmksa->ssid_len);
++			pmk_op->pmk[0].ssid.SSID_len = pmksa->ssid_len;
++		}
+ 		pmk_op->pmk[0].time_left = cpu_to_le32(alive ? BRCMF_PMKSA_NO_EXPIRY : 0);
+ 	}
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 782090ce0bc10..d973d063bbf50 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4522,7 +4522,6 @@ void nvme_stop_ctrl(struct nvme_ctrl *ctrl)
+ {
+ 	nvme_mpath_stop(ctrl);
+ 	nvme_auth_stop(ctrl);
+-	nvme_stop_keep_alive(ctrl);
+ 	nvme_stop_failfast_work(ctrl);
+ 	flush_work(&ctrl->async_event_work);
+ 	cancel_work_sync(&ctrl->fw_act_work);
+@@ -4558,6 +4557,7 @@ EXPORT_SYMBOL_GPL(nvme_start_ctrl);
+ 
+ void nvme_uninit_ctrl(struct nvme_ctrl *ctrl)
+ {
++	nvme_stop_keep_alive(ctrl);
+ 	nvme_hwmon_exit(ctrl);
+ 	nvme_fault_inject_fini(&ctrl->fault_inject);
+ 	dev_pm_qos_hide_latency_tolerance(ctrl->device);
+diff --git a/drivers/platform/surface/aggregator/controller.c b/drivers/platform/surface/aggregator/controller.c
+index 7fc602e01487d..7e89f547999b2 100644
+--- a/drivers/platform/surface/aggregator/controller.c
++++ b/drivers/platform/surface/aggregator/controller.c
+@@ -1354,7 +1354,8 @@ void ssam_controller_destroy(struct ssam_controller *ctrl)
+ 	if (ctrl->state == SSAM_CONTROLLER_UNINITIALIZED)
+ 		return;
+ 
+-	WARN_ON(ctrl->state != SSAM_CONTROLLER_STOPPED);
++	WARN_ON(ctrl->state != SSAM_CONTROLLER_STOPPED &&
++		ctrl->state != SSAM_CONTROLLER_INITIALIZED);
+ 
+ 	/*
+ 	 * Note: New events could still have been received after the previous
+diff --git a/drivers/platform/x86/dell/Kconfig b/drivers/platform/x86/dell/Kconfig
+index 195a8bf532cc6..de0bfd8f4b66e 100644
+--- a/drivers/platform/x86/dell/Kconfig
++++ b/drivers/platform/x86/dell/Kconfig
+@@ -148,6 +148,7 @@ config DELL_SMO8800
+ config DELL_UART_BACKLIGHT
+ 	tristate "Dell AIO UART Backlight driver"
+ 	depends on ACPI
++	depends on ACPI_VIDEO
+ 	depends on BACKLIGHT_CLASS_DEVICE
+ 	depends on SERIAL_DEV_BUS
+ 	help
+diff --git a/drivers/platform/x86/dell/dell-uart-backlight.c b/drivers/platform/x86/dell/dell-uart-backlight.c
+index 87d2a20b4cb3d..3995f90add456 100644
+--- a/drivers/platform/x86/dell/dell-uart-backlight.c
++++ b/drivers/platform/x86/dell/dell-uart-backlight.c
+@@ -20,6 +20,7 @@
+ #include <linux/string.h>
+ #include <linux/types.h>
+ #include <linux/wait.h>
++#include <acpi/video.h>
+ #include "../serdev_helpers.h"
+ 
+ /* The backlight controller must respond within 1 second */
+@@ -332,10 +333,17 @@ struct serdev_device_driver dell_uart_bl_serdev_driver = {
+ 
+ static int dell_uart_bl_pdev_probe(struct platform_device *pdev)
+ {
++	enum acpi_backlight_type bl_type;
+ 	struct serdev_device *serdev;
+ 	struct device *ctrl_dev;
+ 	int ret;
+ 
++	bl_type = acpi_video_get_backlight_type();
++	if (bl_type != acpi_backlight_dell_uart) {
++		dev_dbg(&pdev->dev, "Not loading (ACPI backlight type = %d)\n", bl_type);
++		return -ENODEV;
++	}
++
+ 	ctrl_dev = get_serdev_controller("DELL0501", NULL, 0, "serial0");
+ 	if (IS_ERR(ctrl_dev))
+ 		return PTR_ERR(ctrl_dev);
+diff --git a/drivers/platform/x86/intel/speed_select_if/isst_tpmi_core.c b/drivers/platform/x86/intel/speed_select_if/isst_tpmi_core.c
+index 7fa360073f6ef..4045823071091 100644
+--- a/drivers/platform/x86/intel/speed_select_if/isst_tpmi_core.c
++++ b/drivers/platform/x86/intel/speed_select_if/isst_tpmi_core.c
+@@ -1549,8 +1549,7 @@ int tpmi_sst_dev_add(struct auxiliary_device *auxdev)
+ 			goto unlock_free;
+ 		}
+ 
+-		ret = sst_main(auxdev, &pd_info[i]);
+-		if (ret) {
++		if (sst_main(auxdev, &pd_info[i])) {
+ 			/*
+ 			 * This entry is not valid, hardware can partially
+ 			 * populate dies. In this case MMIO will have 0xFFs.
+diff --git a/drivers/pmdomain/imx/imx93-pd.c b/drivers/pmdomain/imx/imx93-pd.c
+index 1e94b499c19bc..d750a7dc58d21 100644
+--- a/drivers/pmdomain/imx/imx93-pd.c
++++ b/drivers/pmdomain/imx/imx93-pd.c
+@@ -20,6 +20,7 @@
+ #define FUNC_STAT_PSW_STAT_MASK		BIT(0)
+ #define FUNC_STAT_RST_STAT_MASK		BIT(2)
+ #define FUNC_STAT_ISO_STAT_MASK		BIT(4)
++#define FUNC_STAT_SSAR_STAT_MASK	BIT(8)
+ 
+ struct imx93_power_domain {
+ 	struct generic_pm_domain genpd;
+@@ -50,7 +51,7 @@ static int imx93_pd_on(struct generic_pm_domain *genpd)
+ 	writel(val, addr + MIX_SLICE_SW_CTRL_OFF);
+ 
+ 	ret = readl_poll_timeout(addr + MIX_FUNC_STAT_OFF, val,
+-				 !(val & FUNC_STAT_ISO_STAT_MASK), 1, 10000);
++				 !(val & FUNC_STAT_SSAR_STAT_MASK), 1, 10000);
+ 	if (ret) {
+ 		dev_err(domain->dev, "pd_on timeout: name: %s, stat: %x\n", genpd->name, val);
+ 		return ret;
+@@ -72,7 +73,7 @@ static int imx93_pd_off(struct generic_pm_domain *genpd)
+ 	writel(val, addr + MIX_SLICE_SW_CTRL_OFF);
+ 
+ 	ret = readl_poll_timeout(addr + MIX_FUNC_STAT_OFF, val,
+-				 val & FUNC_STAT_PSW_STAT_MASK, 1, 1000);
++				 val & FUNC_STAT_PSW_STAT_MASK, 1, 10000);
+ 	if (ret) {
+ 		dev_err(domain->dev, "pd_off timeout: name: %s, stat: %x\n", genpd->name, val);
+ 		return ret;
+diff --git a/drivers/pmdomain/imx/scu-pd.c b/drivers/pmdomain/imx/scu-pd.c
+index 05841b0bf7f30..01d465d88f60d 100644
+--- a/drivers/pmdomain/imx/scu-pd.c
++++ b/drivers/pmdomain/imx/scu-pd.c
+@@ -223,11 +223,6 @@ static const struct imx_sc_pd_range imx8qxp_scu_pd_ranges[] = {
+ 	{ "lvds1-pwm", IMX_SC_R_LVDS_1_PWM_0, 1, false, 0 },
+ 	{ "lvds1-lpi2c", IMX_SC_R_LVDS_1_I2C_0, 2, true, 0 },
+ 
+-	{ "mipi1", IMX_SC_R_MIPI_1, 1, 0 },
+-	{ "mipi1-pwm0", IMX_SC_R_MIPI_1_PWM_0, 1, 0 },
+-	{ "mipi1-i2c", IMX_SC_R_MIPI_1_I2C_0, 2, 1 },
+-	{ "lvds1", IMX_SC_R_LVDS_1, 1, 0 },
+-
+ 	/* DC SS */
+ 	{ "dc0", IMX_SC_R_DC_0, 1, false, 0 },
+ 	{ "dc0-pll", IMX_SC_R_DC_0_PLL_0, 2, true, 0 },
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index 0a97cfedd7060..42a4a996defbe 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -1601,9 +1601,15 @@ static int dasd_ese_needs_format(struct dasd_block *block, struct irb *irb)
+ 	if (!sense)
+ 		return 0;
+ 
+-	return !!(sense[1] & SNS1_NO_REC_FOUND) ||
+-		!!(sense[1] & SNS1_FILE_PROTECTED) ||
+-		scsw_cstat(&irb->scsw) == SCHN_STAT_INCORR_LEN;
++	if (sense[1] & SNS1_NO_REC_FOUND)
++		return 1;
++
++	if ((sense[1] & SNS1_INV_TRACK_FORMAT) &&
++	    scsw_is_tm(&irb->scsw) &&
++	    !(sense[2] & SNS2_ENV_DATA_PRESENT))
++		return 1;
++
++	return 0;
+ }
+ 
+ static int dasd_ese_oos_cond(u8 *sense)
+@@ -1624,7 +1630,7 @@ void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ 	struct dasd_device *device;
+ 	unsigned long now;
+ 	int nrf_suppressed = 0;
+-	int fp_suppressed = 0;
++	int it_suppressed = 0;
+ 	struct request *req;
+ 	u8 *sense = NULL;
+ 	int expires;
+@@ -1679,8 +1685,9 @@ void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ 		 */
+ 		sense = dasd_get_sense(irb);
+ 		if (sense) {
+-			fp_suppressed = (sense[1] & SNS1_FILE_PROTECTED) &&
+-				test_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags);
++			it_suppressed =	(sense[1] & SNS1_INV_TRACK_FORMAT) &&
++				!(sense[2] & SNS2_ENV_DATA_PRESENT) &&
++				test_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags);
+ 			nrf_suppressed = (sense[1] & SNS1_NO_REC_FOUND) &&
+ 				test_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags);
+ 
+@@ -1695,7 +1702,7 @@ void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ 				return;
+ 			}
+ 		}
+-		if (!(fp_suppressed || nrf_suppressed))
++		if (!(it_suppressed || nrf_suppressed))
+ 			device->discipline->dump_sense_dbf(device, irb, "int");
+ 
+ 		if (device->features & DASD_FEATURE_ERPLOG)
+@@ -2459,14 +2466,17 @@ static int _dasd_sleep_on_queue(struct list_head *ccw_queue, int interruptible)
+ 	rc = 0;
+ 	list_for_each_entry_safe(cqr, n, ccw_queue, blocklist) {
+ 		/*
+-		 * In some cases the 'File Protected' or 'Incorrect Length'
+-		 * error might be expected and error recovery would be
+-		 * unnecessary in these cases.	Check if the according suppress
+-		 * bit is set.
++		 * In some cases certain errors might be expected and
++		 * error recovery would be unnecessary in these cases.
++		 * Check if the according suppress bit is set.
+ 		 */
+ 		sense = dasd_get_sense(&cqr->irb);
+-		if (sense && sense[1] & SNS1_FILE_PROTECTED &&
+-		    test_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags))
++		if (sense && (sense[1] & SNS1_INV_TRACK_FORMAT) &&
++		    !(sense[2] & SNS2_ENV_DATA_PRESENT) &&
++		    test_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags))
++			continue;
++		if (sense && (sense[1] & SNS1_NO_REC_FOUND) &&
++		    test_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags))
+ 			continue;
+ 		if (scsw_cstat(&cqr->irb.scsw) == 0x40 &&
+ 		    test_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags))
+diff --git a/drivers/s390/block/dasd_3990_erp.c b/drivers/s390/block/dasd_3990_erp.c
+index bbbacfc386f28..d0aa267462c50 100644
+--- a/drivers/s390/block/dasd_3990_erp.c
++++ b/drivers/s390/block/dasd_3990_erp.c
+@@ -1386,14 +1386,8 @@ dasd_3990_erp_file_prot(struct dasd_ccw_req * erp)
+ 
+ 	struct dasd_device *device = erp->startdev;
+ 
+-	/*
+-	 * In some cases the 'File Protected' error might be expected and
+-	 * log messages shouldn't be written then.
+-	 * Check if the according suppress bit is set.
+-	 */
+-	if (!test_bit(DASD_CQR_SUPPRESS_FP, &erp->flags))
+-		dev_err(&device->cdev->dev,
+-			"Accessing the DASD failed because of a hardware error\n");
++	dev_err(&device->cdev->dev,
++		"Accessing the DASD failed because of a hardware error\n");
+ 
+ 	return dasd_3990_erp_cleanup(erp, DASD_CQR_FAILED);
+ 
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index a76c6af9ea638..a47443d3cc366 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -2274,6 +2274,7 @@ dasd_eckd_analysis_ccw(struct dasd_device *device)
+ 	cqr->status = DASD_CQR_FILLED;
+ 	/* Set flags to suppress output for expected errors */
+ 	set_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags);
++	set_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags);
+ 
+ 	return cqr;
+ }
+@@ -2555,7 +2556,6 @@ dasd_eckd_build_check_tcw(struct dasd_device *base, struct format_data_t *fdata,
+ 	cqr->buildclk = get_tod_clock();
+ 	cqr->status = DASD_CQR_FILLED;
+ 	/* Set flags to suppress output for expected errors */
+-	set_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags);
+ 	set_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags);
+ 
+ 	return cqr;
+@@ -4129,8 +4129,6 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_cmd_single(
+ 
+ 	/* Set flags to suppress output for expected errors */
+ 	if (dasd_eckd_is_ese(basedev)) {
+-		set_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags);
+-		set_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags);
+ 		set_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags);
+ 	}
+ 
+@@ -4632,9 +4630,8 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_tpm_track(
+ 
+ 	/* Set flags to suppress output for expected errors */
+ 	if (dasd_eckd_is_ese(basedev)) {
+-		set_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags);
+-		set_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags);
+ 		set_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags);
++		set_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags);
+ 	}
+ 
+ 	return cqr;
+@@ -5779,36 +5776,32 @@ static void dasd_eckd_dump_sense(struct dasd_device *device,
+ {
+ 	u8 *sense = dasd_get_sense(irb);
+ 
+-	if (scsw_is_tm(&irb->scsw)) {
+-		/*
+-		 * In some cases the 'File Protected' or 'Incorrect Length'
+-		 * error might be expected and log messages shouldn't be written
+-		 * then. Check if the according suppress bit is set.
+-		 */
+-		if (sense && (sense[1] & SNS1_FILE_PROTECTED) &&
+-		    test_bit(DASD_CQR_SUPPRESS_FP, &req->flags))
+-			return;
+-		if (scsw_cstat(&irb->scsw) == 0x40 &&
+-		    test_bit(DASD_CQR_SUPPRESS_IL, &req->flags))
+-			return;
++	/*
++	 * In some cases certain errors might be expected and
++	 * log messages shouldn't be written then.
++	 * Check if the according suppress bit is set.
++	 */
++	if (sense && (sense[1] & SNS1_INV_TRACK_FORMAT) &&
++	    !(sense[2] & SNS2_ENV_DATA_PRESENT) &&
++	    test_bit(DASD_CQR_SUPPRESS_IT, &req->flags))
++		return;
+ 
+-		dasd_eckd_dump_sense_tcw(device, req, irb);
+-	} else {
+-		/*
+-		 * In some cases the 'Command Reject' or 'No Record Found'
+-		 * error might be expected and log messages shouldn't be
+-		 * written then. Check if the according suppress bit is set.
+-		 */
+-		if (sense && sense[0] & SNS0_CMD_REJECT &&
+-		    test_bit(DASD_CQR_SUPPRESS_CR, &req->flags))
+-			return;
++	if (sense && sense[0] & SNS0_CMD_REJECT &&
++	    test_bit(DASD_CQR_SUPPRESS_CR, &req->flags))
++		return;
+ 
+-		if (sense && sense[1] & SNS1_NO_REC_FOUND &&
+-		    test_bit(DASD_CQR_SUPPRESS_NRF, &req->flags))
+-			return;
++	if (sense && sense[1] & SNS1_NO_REC_FOUND &&
++	    test_bit(DASD_CQR_SUPPRESS_NRF, &req->flags))
++		return;
+ 
++	if (scsw_cstat(&irb->scsw) == 0x40 &&
++	    test_bit(DASD_CQR_SUPPRESS_IL, &req->flags))
++		return;
++
++	if (scsw_is_tm(&irb->scsw))
++		dasd_eckd_dump_sense_tcw(device, req, irb);
++	else
+ 		dasd_eckd_dump_sense_ccw(device, req, irb);
+-	}
+ }
+ 
+ static int dasd_eckd_reload_device(struct dasd_device *device)
+diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c
+index 4533dd055ca8e..23d6e638f381d 100644
+--- a/drivers/s390/block/dasd_genhd.c
++++ b/drivers/s390/block/dasd_genhd.c
+@@ -41,7 +41,6 @@ int dasd_gendisk_alloc(struct dasd_block *block)
+ 		 */
+ 		.max_segment_size = PAGE_SIZE,
+ 		.seg_boundary_mask = PAGE_SIZE - 1,
+-		.dma_alignment = PAGE_SIZE - 1,
+ 		.max_segments = USHRT_MAX,
+ 	};
+ 	struct gendisk *gdp;
+diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h
+index e5f40536b4254..81cfb5c89681b 100644
+--- a/drivers/s390/block/dasd_int.h
++++ b/drivers/s390/block/dasd_int.h
+@@ -196,7 +196,7 @@ struct dasd_ccw_req {
+  * The following flags are used to suppress output of certain errors.
+  */
+ #define DASD_CQR_SUPPRESS_NRF	4	/* Suppress 'No Record Found' error */
+-#define DASD_CQR_SUPPRESS_FP	5	/* Suppress 'File Protected' error*/
++#define DASD_CQR_SUPPRESS_IT	5	/* Suppress 'Invalid Track' error*/
+ #define DASD_CQR_SUPPRESS_IL	6	/* Suppress 'Incorrect Length' error */
+ #define DASD_CQR_SUPPRESS_CR	7	/* Suppress 'Command Reject' error */
+ 
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index 898865be0dad9..99fadfb4cd9f2 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -971,11 +971,16 @@ int ap_driver_register(struct ap_driver *ap_drv, struct module *owner,
+ 		       char *name)
+ {
+ 	struct device_driver *drv = &ap_drv->driver;
++	int rc;
+ 
+ 	drv->bus = &ap_bus_type;
+ 	drv->owner = owner;
+ 	drv->name = name;
+-	return driver_register(drv);
++	rc = driver_register(drv);
++
++	ap_check_bindings_complete();
++
++	return rc;
+ }
+ EXPORT_SYMBOL(ap_driver_register);
+ 
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 05ebb03d319fc..d4607cb89c484 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -2000,13 +2000,25 @@ static int cqspi_runtime_resume(struct device *dev)
+ static int cqspi_suspend(struct device *dev)
+ {
+ 	struct cqspi_st *cqspi = dev_get_drvdata(dev);
++	int ret;
+ 
+-	return spi_controller_suspend(cqspi->host);
++	ret = spi_controller_suspend(cqspi->host);
++	if (ret)
++		return ret;
++
++	return pm_runtime_force_suspend(dev);
+ }
+ 
+ static int cqspi_resume(struct device *dev)
+ {
+ 	struct cqspi_st *cqspi = dev_get_drvdata(dev);
++	int ret;
++
++	ret = pm_runtime_force_resume(dev);
++	if (ret) {
++		dev_err(dev, "pm_runtime_force_resume failed on resume\n");
++		return ret;
++	}
+ 
+ 	return spi_controller_resume(cqspi->host);
+ }
+diff --git a/drivers/staging/media/atomisp/pci/ia_css_stream_public.h b/drivers/staging/media/atomisp/pci/ia_css_stream_public.h
+index 961c612880833..aad860e54d3a7 100644
+--- a/drivers/staging/media/atomisp/pci/ia_css_stream_public.h
++++ b/drivers/staging/media/atomisp/pci/ia_css_stream_public.h
+@@ -27,12 +27,16 @@
+ #include "ia_css_prbs.h"
+ #include "ia_css_input_port.h"
+ 
+-/* Input modes, these enumerate all supported input modes.
+- *  Note that not all ISP modes support all input modes.
++/*
++ * Input modes, these enumerate all supported input modes.
++ * This enum is part of the atomisp firmware ABI and must
++ * NOT be changed!
++ * Note that not all ISP modes support all input modes.
+  */
+ enum ia_css_input_mode {
+ 	IA_CSS_INPUT_MODE_SENSOR, /** data from sensor */
+ 	IA_CSS_INPUT_MODE_FIFO,   /** data from input-fifo */
++	IA_CSS_INPUT_MODE_TPG,    /** data from test-pattern generator */
+ 	IA_CSS_INPUT_MODE_PRBS,   /** data from pseudo-random bit stream */
+ 	IA_CSS_INPUT_MODE_MEMORY, /** data from a frame in memory */
+ 	IA_CSS_INPUT_MODE_BUFFERED_SENSOR /** data is sent through mipi buffer */
+diff --git a/drivers/staging/media/atomisp/pci/sh_css_internal.h b/drivers/staging/media/atomisp/pci/sh_css_internal.h
+index bef2b8c5132ba..419aba9b0e49e 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css_internal.h
++++ b/drivers/staging/media/atomisp/pci/sh_css_internal.h
+@@ -341,7 +341,14 @@ struct sh_css_sp_input_formatter_set {
+ 
+ #define IA_CSS_MIPI_SIZE_CHECK_MAX_NOF_ENTRIES_PER_PORT (3)
+ 
+-/* SP configuration information */
++/*
++ * SP configuration information
++ *
++ * This struct is part of the atomisp firmware ABI and is directly copied
++ * to ISP DRAM by sh_css_store_sp_group_to_ddr()
++ *
++ * Do NOT change this struct's layout or remove seemingly unused fields!
++ */
+ struct sh_css_sp_config {
+ 	u8			no_isp_sync; /* Signal host immediately after start */
+ 	u8			enable_raw_pool_locking; /** Enable Raw Buffer Locking for HALv3 Support */
+@@ -351,6 +358,10 @@ struct sh_css_sp_config {
+ 	     host (true) or when they are passed to the preview/video pipe
+ 	     (false). */
+ 
++	 /*
++	  * Note the fields below are only used on the ISP2400, not on the ISP2401;
++	  * sh_css_store_sp_group_to_ddr() skips copying these when run on the ISP2401.
++	  */
+ 	struct {
+ 		u8					a_changed;
+ 		u8					b_changed;
+@@ -360,11 +371,13 @@ struct sh_css_sp_config {
+ 	} input_formatter;
+ 
+ 	sync_generator_cfg_t	sync_gen;
++	tpg_cfg_t		tpg;
+ 	prbs_cfg_t		prbs;
+ 	input_system_cfg_t	input_circuit;
+ 	u8			input_circuit_cfg_changed;
+-	u32		mipi_sizes_for_check[N_CSI_PORTS][IA_CSS_MIPI_SIZE_CHECK_MAX_NOF_ENTRIES_PER_PORT];
+-	u8                 enable_isys_event_queue;
++	u32			mipi_sizes_for_check[N_CSI_PORTS][IA_CSS_MIPI_SIZE_CHECK_MAX_NOF_ENTRIES_PER_PORT];
++	/* These last 2 fields are used on both the ISP2400 and the ISP2401 */
++	u8			enable_isys_event_queue;
+ 	u8			disable_cont_vf;
+ };
+ 
+diff --git a/drivers/thermal/gov_bang_bang.c b/drivers/thermal/gov_bang_bang.c
+index acb52c9ee10f1..daed67d19efb8 100644
+--- a/drivers/thermal/gov_bang_bang.c
++++ b/drivers/thermal/gov_bang_bang.c
+@@ -13,6 +13,28 @@
+ 
+ #include "thermal_core.h"
+ 
++static void bang_bang_set_instance_target(struct thermal_instance *instance,
++					  unsigned int target)
++{
++	if (instance->target != 0 && instance->target != 1 &&
++	    instance->target != THERMAL_NO_TARGET)
++		pr_debug("Unexpected state %ld of thermal instance %s in bang-bang\n",
++			 instance->target, instance->name);
++
++	/*
++	 * Enable the fan when the trip is crossed on the way up and disable it
++	 * when the trip is crossed on the way down.
++	 */
++	instance->target = target;
++	instance->initialized = true;
++
++	dev_dbg(&instance->cdev->device, "target=%ld\n", instance->target);
++
++	mutex_lock(&instance->cdev->lock);
++	__thermal_cdev_update(instance->cdev);
++	mutex_unlock(&instance->cdev->lock);
++}
++
+ /**
+  * bang_bang_control - controls devices associated with the given zone
+  * @tz: thermal_zone_device
+@@ -54,41 +76,60 @@ static void bang_bang_control(struct thermal_zone_device *tz,
+ 		tz->temperature, trip->hysteresis);
+ 
+ 	list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+-		if (instance->trip != trip)
+-			continue;
++		if (instance->trip == trip)
++			bang_bang_set_instance_target(instance, crossed_up);
++	}
++}
++
++static void bang_bang_manage(struct thermal_zone_device *tz)
++{
++	const struct thermal_trip_desc *td;
++	struct thermal_instance *instance;
+ 
+-		if (instance->target == THERMAL_NO_TARGET)
+-			instance->target = 0;
++	/* If the code below has run already, nothing needs to be done. */
++	if (tz->governor_data)
++		return;
+ 
+-		if (instance->target != 0 && instance->target != 1) {
+-			pr_debug("Unexpected state %ld of thermal instance %s in bang-bang\n",
+-				 instance->target, instance->name);
++	for_each_trip_desc(tz, td) {
++		const struct thermal_trip *trip = &td->trip;
+ 
+-			instance->target = 1;
+-		}
++		if (tz->temperature >= td->threshold ||
++		    trip->temperature == THERMAL_TEMP_INVALID ||
++		    trip->type == THERMAL_TRIP_CRITICAL ||
++		    trip->type == THERMAL_TRIP_HOT)
++			continue;
+ 
+ 		/*
+-		 * Enable the fan when the trip is crossed on the way up and
+-		 * disable it when the trip is crossed on the way down.
++		 * If the initial cooling device state is "on", but the zone
++		 * temperature is not above the trip point, the core will not
++		 * call bang_bang_control() until the zone temperature reaches
++		 * the trip point temperature which may be never.  In those
++		 * cases, set the initial state of the cooling device to 0.
+ 		 */
+-		if (instance->target == 0 && crossed_up)
+-			instance->target = 1;
+-		else if (instance->target == 1 && !crossed_up)
+-			instance->target = 0;
+-
+-		dev_dbg(&instance->cdev->device, "target=%ld\n", instance->target);
+-
+-		mutex_lock(&instance->cdev->lock);
+-		instance->cdev->updated = false; /* cdev needs update */
+-		mutex_unlock(&instance->cdev->lock);
++		list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
++			if (!instance->initialized && instance->trip == trip)
++				bang_bang_set_instance_target(instance, 0);
++		}
+ 	}
+ 
+-	list_for_each_entry(instance, &tz->thermal_instances, tz_node)
+-		thermal_cdev_update(instance->cdev);
++	tz->governor_data = (void *)true;
++}
++
++static void bang_bang_update_tz(struct thermal_zone_device *tz,
++				enum thermal_notify_event reason)
++{
++	/*
++	 * Let bang_bang_manage() know that it needs to walk trips after binding
++	 * a new cdev and after system resume.
++	 */
++	if (reason == THERMAL_TZ_BIND_CDEV || reason == THERMAL_TZ_RESUME)
++		tz->governor_data = NULL;
+ }
+ 
+ static struct thermal_governor thermal_gov_bang_bang = {
+ 	.name		= "bang_bang",
+ 	.trip_crossed	= bang_bang_control,
++	.manage		= bang_bang_manage,
++	.update_tz	= bang_bang_update_tz,
+ };
+ THERMAL_GOVERNOR_DECLARE(thermal_gov_bang_bang);
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index f2d31bc48f529..b8d889ef4fa5e 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -1715,7 +1715,8 @@ static void thermal_zone_device_resume(struct work_struct *work)
+ 	tz->suspended = false;
+ 
+ 	thermal_zone_device_init(tz);
+-	__thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED);
++	thermal_governor_update_tz(tz, THERMAL_TZ_RESUME);
++	__thermal_zone_device_update(tz, THERMAL_TZ_RESUME);
+ 
+ 	complete(&tz->resume);
+ 	tz->resuming = false;
+diff --git a/drivers/thermal/thermal_debugfs.c b/drivers/thermal/thermal_debugfs.c
+index 9424472291570..6d55f5fc4ca0f 100644
+--- a/drivers/thermal/thermal_debugfs.c
++++ b/drivers/thermal/thermal_debugfs.c
+@@ -178,11 +178,11 @@ struct thermal_debugfs {
+ void thermal_debug_init(void)
+ {
+ 	d_root = debugfs_create_dir("thermal", NULL);
+-	if (!d_root)
++	if (IS_ERR(d_root))
+ 		return;
+ 
+ 	d_cdev = debugfs_create_dir("cooling_devices", d_root);
+-	if (!d_cdev)
++	if (IS_ERR(d_cdev))
+ 		return;
+ 
+ 	d_tz = debugfs_create_dir("thermal_zones", d_root);
+@@ -202,7 +202,7 @@ static struct thermal_debugfs *thermal_debugfs_add_id(struct dentry *d, int id)
+ 	snprintf(ids, IDSLENGTH, "%d", id);
+ 
+ 	thermal_dbg->d_top = debugfs_create_dir(ids, d);
+-	if (!thermal_dbg->d_top) {
++	if (IS_ERR(thermal_dbg->d_top)) {
+ 		kfree(thermal_dbg);
+ 		return NULL;
+ 	}
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index aa34b6e82e268..1f252692815a1 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -125,7 +125,7 @@ static int thermal_of_populate_trip(struct device_node *np,
+ static struct thermal_trip *thermal_of_trips_init(struct device_node *np, int *ntrips)
+ {
+ 	struct thermal_trip *tt;
+-	struct device_node *trips, *trip;
++	struct device_node *trips;
+ 	int ret, count;
+ 
+ 	trips = of_get_child_by_name(np, "trips");
+@@ -150,7 +150,7 @@ static struct thermal_trip *thermal_of_trips_init(struct device_node *np, int *n
+ 	*ntrips = count;
+ 
+ 	count = 0;
+-	for_each_child_of_node(trips, trip) {
++	for_each_child_of_node_scoped(trips, trip) {
+ 		ret = thermal_of_populate_trip(trip, &tt[count++]);
+ 		if (ret)
+ 			goto out_kfree;
+@@ -184,14 +184,14 @@ static struct device_node *of_thermal_zone_find(struct device_node *sensor, int
+ 	 * Search for each thermal zone, a defined sensor
+ 	 * corresponding to the one passed as parameter
+ 	 */
+-	for_each_available_child_of_node(np, tz) {
++	for_each_available_child_of_node_scoped(np, child) {
+ 
+ 		int count, i;
+ 
+-		count = of_count_phandle_with_args(tz, "thermal-sensors",
++		count = of_count_phandle_with_args(child, "thermal-sensors",
+ 						   "#thermal-sensor-cells");
+ 		if (count <= 0) {
+-			pr_err("%pOFn: missing thermal sensor\n", tz);
++			pr_err("%pOFn: missing thermal sensor\n", child);
+ 			tz = ERR_PTR(-EINVAL);
+ 			goto out;
+ 		}
+@@ -200,18 +200,19 @@ static struct device_node *of_thermal_zone_find(struct device_node *sensor, int
+ 
+ 			int ret;
+ 
+-			ret = of_parse_phandle_with_args(tz, "thermal-sensors",
++			ret = of_parse_phandle_with_args(child, "thermal-sensors",
+ 							 "#thermal-sensor-cells",
+ 							 i, &sensor_specs);
+ 			if (ret < 0) {
+-				pr_err("%pOFn: Failed to read thermal-sensors cells: %d\n", tz, ret);
++				pr_err("%pOFn: Failed to read thermal-sensors cells: %d\n", child, ret);
+ 				tz = ERR_PTR(ret);
+ 				goto out;
+ 			}
+ 
+ 			if ((sensor == sensor_specs.np) && id == (sensor_specs.args_count ?
+ 								  sensor_specs.args[0] : 0)) {
+-				pr_debug("sensor %pOFn id=%d belongs to %pOFn\n", sensor, id, tz);
++				pr_debug("sensor %pOFn id=%d belongs to %pOFn\n", sensor, id, child);
++				tz = no_free_ptr(child);
+ 				goto out;
+ 			}
+ 		}
+@@ -491,7 +492,8 @@ static struct thermal_zone_device *thermal_of_zone_register(struct device_node *
+ 	trips = thermal_of_trips_init(np, &ntrips);
+ 	if (IS_ERR(trips)) {
+ 		pr_err("Failed to find trip points for %pOFn id=%d\n", sensor, id);
+-		return ERR_CAST(trips);
++		ret = PTR_ERR(trips);
++		goto out_of_node_put;
+ 	}
+ 
+ 	ret = thermal_of_monitor_init(np, &delay, &pdelay);
+@@ -519,6 +521,7 @@ static struct thermal_zone_device *thermal_of_zone_register(struct device_node *
+ 		goto out_kfree_trips;
+ 	}
+ 
++	of_node_put(np);
+ 	kfree(trips);
+ 
+ 	ret = thermal_zone_device_enable(tz);
+@@ -533,6 +536,8 @@ static struct thermal_zone_device *thermal_of_zone_register(struct device_node *
+ 
+ out_kfree_trips:
+ 	kfree(trips);
++out_of_node_put:
++	of_node_put(np);
+ 
+ 	return ERR_PTR(ret);
+ }
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index 326433df5880e..6a2116cbb06f9 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -3392,6 +3392,7 @@ void tb_switch_remove(struct tb_switch *sw)
+ 			tb_switch_remove(port->remote->sw);
+ 			port->remote = NULL;
+ 		} else if (port->xdomain) {
++			port->xdomain->is_unplugged = true;
+ 			tb_xdomain_remove(port->xdomain);
+ 			port->xdomain = NULL;
+ 		}
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 1af9aed99c651..afef1dd4ddf49 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -27,7 +27,6 @@
+ #include <linux/pm_wakeirq.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/sys_soc.h>
+-#include <linux/pm_domain.h>
+ 
+ #include "8250.h"
+ 
+@@ -119,12 +118,6 @@
+ #define UART_OMAP_TO_L                 0x26
+ #define UART_OMAP_TO_H                 0x27
+ 
+-/*
+- * Copy of the genpd flags for the console.
+- * Only used if console suspend is disabled
+- */
+-static unsigned int genpd_flags_console;
+-
+ struct omap8250_priv {
+ 	void __iomem *membase;
+ 	int line;
+@@ -1655,7 +1648,6 @@ static int omap8250_suspend(struct device *dev)
+ {
+ 	struct omap8250_priv *priv = dev_get_drvdata(dev);
+ 	struct uart_8250_port *up = serial8250_get_port(priv->line);
+-	struct generic_pm_domain *genpd = pd_to_genpd(dev->pm_domain);
+ 	int err = 0;
+ 
+ 	serial8250_suspend_port(priv->line);
+@@ -1666,19 +1658,8 @@ static int omap8250_suspend(struct device *dev)
+ 	if (!device_may_wakeup(dev))
+ 		priv->wer = 0;
+ 	serial_out(up, UART_OMAP_WER, priv->wer);
+-	if (uart_console(&up->port)) {
+-		if (console_suspend_enabled)
+-			err = pm_runtime_force_suspend(dev);
+-		else {
+-			/*
+-			 * The pd shall not be powered-off (no console suspend).
+-			 * Make copy of genpd flags before to set it always on.
+-			 * The original value is restored during the resume.
+-			 */
+-			genpd_flags_console = genpd->flags;
+-			genpd->flags |= GENPD_FLAG_ALWAYS_ON;
+-		}
+-	}
++	if (uart_console(&up->port) && console_suspend_enabled)
++		err = pm_runtime_force_suspend(dev);
+ 	flush_work(&priv->qos_work);
+ 
+ 	return err;
+@@ -1688,16 +1669,12 @@ static int omap8250_resume(struct device *dev)
+ {
+ 	struct omap8250_priv *priv = dev_get_drvdata(dev);
+ 	struct uart_8250_port *up = serial8250_get_port(priv->line);
+-	struct generic_pm_domain *genpd = pd_to_genpd(dev->pm_domain);
+ 	int err;
+ 
+ 	if (uart_console(&up->port) && console_suspend_enabled) {
+-		if (console_suspend_enabled) {
+-			err = pm_runtime_force_resume(dev);
+-			if (err)
+-				return err;
+-		} else
+-			genpd->flags = genpd_flags_console;
++		err = pm_runtime_force_resume(dev);
++		if (err)
++			return err;
+ 	}
+ 
+ 	serial8250_resume_port(priv->line);
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 0a90964d6d107..09b246c9e389e 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -2514,7 +2514,7 @@ static const struct uart_ops atmel_pops = {
+ };
+ 
+ static const struct serial_rs485 atmel_rs485_supported = {
+-	.flags = SER_RS485_ENABLED | SER_RS485_RTS_AFTER_SEND | SER_RS485_RX_DURING_TX,
++	.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND | SER_RS485_RX_DURING_TX,
+ 	.delay_rts_before_send = 1,
+ 	.delay_rts_after_send = 1,
+ };
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 615291ea9b5e9..77efa7ee6eda2 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2923,6 +2923,7 @@ static int lpuart_probe(struct platform_device *pdev)
+ 	pm_runtime_set_autosuspend_delay(&pdev->dev, UART_AUTOSUSPEND_TIMEOUT);
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
++	pm_runtime_mark_last_busy(&pdev->dev);
+ 
+ 	ret = lpuart_global_reset(sport);
+ 	if (ret)
+diff --git a/drivers/tty/vt/conmakehash.c b/drivers/tty/vt/conmakehash.c
+index 82d9db68b2ce8..a931fcde7ad98 100644
+--- a/drivers/tty/vt/conmakehash.c
++++ b/drivers/tty/vt/conmakehash.c
+@@ -11,8 +11,6 @@
+  * Copyright (C) 1995-1997 H. Peter Anvin
+  */
+ 
+-#include <libgen.h>
+-#include <linux/limits.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <sysexits.h>
+@@ -79,7 +77,6 @@ int main(int argc, char *argv[])
+ {
+   FILE *ctbl;
+   const char *tblname;
+-  char base_tblname[PATH_MAX];
+   char buffer[65536];
+   int fontlen;
+   int i, nuni, nent;
+@@ -245,20 +242,15 @@ int main(int argc, char *argv[])
+   for ( i = 0 ; i < fontlen ; i++ )
+     nuni += unicount[i];
+ 
+-  strncpy(base_tblname, tblname, PATH_MAX);
+-  base_tblname[PATH_MAX - 1] = 0;
+   printf("\
+ /*\n\
+- * Do not edit this file; it was automatically generated by\n\
+- *\n\
+- * conmakehash %s > [this file]\n\
+- *\n\
++ * Automatically generated file; Do not edit.\n\
+  */\n\
+ \n\
+ #include <linux/types.h>\n\
+ \n\
+ u8 dfont_unicount[%d] = \n\
+-{\n\t", basename(base_tblname), fontlen);
++{\n\t", fontlen);
+ 
+   for ( i = 0 ; i < fontlen ; i++ )
+     {
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 3100219d6496d..f591ddd086627 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1877,7 +1877,7 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
+ 
+ 	cancel_delayed_work_sync(&xhci->cmd_timer);
+ 
+-	for (i = 0; i < xhci->max_interrupters; i++) {
++	for (i = 0; xhci->interrupters && i < xhci->max_interrupters; i++) {
+ 		if (xhci->interrupters[i]) {
+ 			xhci_remove_interrupter(xhci, xhci->interrupters[i]);
+ 			xhci_free_interrupter(xhci, xhci->interrupters[i]);
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 0a8cf6c17f827..efdf4c228b8c0 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -2837,7 +2837,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
+ 				xhci->num_active_eps);
+ 		return -ENOMEM;
+ 	}
+-	if ((xhci->quirks & XHCI_SW_BW_CHECKING) &&
++	if ((xhci->quirks & XHCI_SW_BW_CHECKING) && !ctx_change &&
+ 	    xhci_reserve_bandwidth(xhci, virt_dev, command->in_ctx)) {
+ 		if ((xhci->quirks & XHCI_EP_LIMIT_QUIRK))
+ 			xhci_free_host_resources(xhci, ctrl_ctx);
+@@ -4200,8 +4200,10 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 		mutex_unlock(&xhci->mutex);
+ 		ret = xhci_disable_slot(xhci, udev->slot_id);
+ 		xhci_free_virt_device(xhci, udev->slot_id);
+-		if (!ret)
+-			xhci_alloc_dev(hcd, udev);
++		if (!ret) {
++			if (xhci_alloc_dev(hcd, udev) == 1)
++				xhci_setup_addressable_virt_dev(xhci, udev);
++		}
+ 		kfree(command->completion);
+ 		kfree(command);
+ 		return -EPROTO;
+diff --git a/drivers/usb/misc/usb-ljca.c b/drivers/usb/misc/usb-ljca.c
+index 2d30fc1be3066..1a8d5e80b9aec 100644
+--- a/drivers/usb/misc/usb-ljca.c
++++ b/drivers/usb/misc/usb-ljca.c
+@@ -169,6 +169,7 @@ static const struct acpi_device_id ljca_gpio_hids[] = {
+ 	{ "INTC1096" },
+ 	{ "INTC100B" },
+ 	{ "INTC10D1" },
++	{ "INTC10B5" },
+ 	{},
+ };
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 5d4da962acc87..ea388e793f882 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -5630,7 +5630,6 @@ static void run_state_machine(struct tcpm_port *port)
+ 		break;
+ 	case PORT_RESET:
+ 		tcpm_reset_port(port);
+-		port->pd_events = 0;
+ 		if (port->self_powered)
+ 			tcpm_set_cc(port, TYPEC_CC_OPEN);
+ 		else
+diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
+index 6cc80fb10da23..5eb06fccc32e5 100644
+--- a/fs/btrfs/delayed-ref.c
++++ b/fs/btrfs/delayed-ref.c
+@@ -1169,6 +1169,73 @@ btrfs_find_delayed_ref_head(struct btrfs_delayed_ref_root *delayed_refs, u64 byt
+ 	return find_ref_head(delayed_refs, bytenr, false);
+ }
+ 
++static int find_comp(struct btrfs_delayed_ref_node *entry, u64 root, u64 parent)
++{
++	int type = parent ? BTRFS_SHARED_BLOCK_REF_KEY : BTRFS_TREE_BLOCK_REF_KEY;
++
++	if (type < entry->type)
++		return -1;
++	if (type > entry->type)
++		return 1;
++
++	if (type == BTRFS_TREE_BLOCK_REF_KEY) {
++		if (root < entry->ref_root)
++			return -1;
++		if (root > entry->ref_root)
++			return 1;
++	} else {
++		if (parent < entry->parent)
++			return -1;
++		if (parent > entry->parent)
++			return 1;
++	}
++	return 0;
++}
++
++/*
++ * Check to see if a given root/parent reference is attached to the head.  This
++ * only checks for BTRFS_ADD_DELAYED_REF references that match, as that
++ * indicates the reference exists for the given root or parent.  This is for
++ * tree blocks only.
++ *
++ * @head: the head of the bytenr we're searching.
++ * @root: the root objectid of the reference if it is a normal reference.
++ * @parent: the parent if this is a shared backref.
++ */
++bool btrfs_find_delayed_tree_ref(struct btrfs_delayed_ref_head *head,
++				 u64 root, u64 parent)
++{
++	struct rb_node *node;
++	bool found = false;
++
++	lockdep_assert_held(&head->mutex);
++
++	spin_lock(&head->lock);
++	node = head->ref_tree.rb_root.rb_node;
++	while (node) {
++		struct btrfs_delayed_ref_node *entry;
++		int ret;
++
++		entry = rb_entry(node, struct btrfs_delayed_ref_node, ref_node);
++		ret = find_comp(entry, root, parent);
++		if (ret < 0) {
++			node = node->rb_left;
++		} else if (ret > 0) {
++			node = node->rb_right;
++		} else {
++			/*
++			 * We only want to count ADD actions, as drops mean the
++			 * ref doesn't exist.
++			 */
++			if (entry->action == BTRFS_ADD_DELAYED_REF)
++				found = true;
++			break;
++		}
++	}
++	spin_unlock(&head->lock);
++	return found;
++}
++
+ void __cold btrfs_delayed_ref_exit(void)
+ {
+ 	kmem_cache_destroy(btrfs_delayed_ref_head_cachep);
+diff --git a/fs/btrfs/delayed-ref.h b/fs/btrfs/delayed-ref.h
+index 04b180ebe1fe0..1d11a1a6f9cb1 100644
+--- a/fs/btrfs/delayed-ref.h
++++ b/fs/btrfs/delayed-ref.h
+@@ -389,6 +389,8 @@ int btrfs_delayed_refs_rsv_refill(struct btrfs_fs_info *fs_info,
+ void btrfs_migrate_to_delayed_refs_rsv(struct btrfs_fs_info *fs_info,
+ 				       u64 num_bytes);
+ bool btrfs_check_space_for_delayed_refs(struct btrfs_fs_info *fs_info);
++bool btrfs_find_delayed_tree_ref(struct btrfs_delayed_ref_head *head,
++				 u64 root, u64 parent);
+ 
+ static inline u64 btrfs_delayed_ref_owner(struct btrfs_delayed_ref_node *node)
+ {
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 844b677d054ec..8bf980123c5c5 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -5387,23 +5387,62 @@ static int check_ref_exists(struct btrfs_trans_handle *trans,
+ 			    struct btrfs_root *root, u64 bytenr, u64 parent,
+ 			    int level)
+ {
++	struct btrfs_delayed_ref_root *delayed_refs;
++	struct btrfs_delayed_ref_head *head;
+ 	struct btrfs_path *path;
+ 	struct btrfs_extent_inline_ref *iref;
+ 	int ret;
++	bool exists = false;
+ 
+ 	path = btrfs_alloc_path();
+ 	if (!path)
+ 		return -ENOMEM;
+-
++again:
+ 	ret = lookup_extent_backref(trans, path, &iref, bytenr,
+ 				    root->fs_info->nodesize, parent,
+ 				    btrfs_root_id(root), level, 0);
++	if (ret != -ENOENT) {
++		/*
++		 * If we get 0 then we found our reference, return 1, otherwise
++		 * return the error as it's not -ENOENT.
++		 */
++		btrfs_free_path(path);
++		return (ret < 0) ? ret : 1;
++	}
++
++	/*
++	 * We could have a delayed ref with this reference, so look it up while
++	 * we're holding the path open to make sure we don't race with the
++	 * delayed ref running.
++	 */
++	delayed_refs = &trans->transaction->delayed_refs;
++	spin_lock(&delayed_refs->lock);
++	head = btrfs_find_delayed_ref_head(delayed_refs, bytenr);
++	if (!head)
++		goto out;
++	if (!mutex_trylock(&head->mutex)) {
++		/*
++		 * We're contended, means that the delayed ref is running, get a
++		 * reference and wait for the ref head to be complete and then
++		 * try again.
++		 */
++		refcount_inc(&head->refs);
++		spin_unlock(&delayed_refs->lock);
++
++		btrfs_release_path(path);
++
++		mutex_lock(&head->mutex);
++		mutex_unlock(&head->mutex);
++		btrfs_put_delayed_ref_head(head);
++		goto again;
++	}
++
++	exists = btrfs_find_delayed_tree_ref(head, root->root_key.objectid, parent);
++	mutex_unlock(&head->mutex);
++out:
++	spin_unlock(&delayed_refs->lock);
+ 	btrfs_free_path(path);
+-	if (ret == -ENOENT)
+-		return 0;
+-	if (ret < 0)
+-		return ret;
+-	return 1;
++	return exists ? 1 : 0;
+ }
+ 
+ /*
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 0486b1f911248..3bad7c0be1f10 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -1420,6 +1420,13 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
+ 		free_extent_map(em);
+ 		em = NULL;
+ 
++		/*
++		 * Although the PageDirty bit might be cleared before entering
++		 * this function, subpage dirty bit is not cleared.
++		 * So clear subpage dirty bit here so next time we won't submit
++		 * page for range already written to disk.
++		 */
++		btrfs_folio_clear_dirty(fs_info, page_folio(page), cur, iosize);
+ 		btrfs_set_range_writeback(inode, cur, cur + iosize - 1);
+ 		if (!PageWriteback(page)) {
+ 			btrfs_err(inode->root->fs_info,
+@@ -1427,13 +1434,6 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
+ 			       page->index, cur, end);
+ 		}
+ 
+-		/*
+-		 * Although the PageDirty bit is cleared before entering this
+-		 * function, subpage dirty bit is not cleared.
+-		 * So clear subpage dirty bit here so next time we won't submit
+-		 * page for range already written to disk.
+-		 */
+-		btrfs_folio_clear_dirty(fs_info, page_folio(page), cur, iosize);
+ 
+ 		submit_extent_page(bio_ctrl, disk_bytenr, page, iosize,
+ 				   cur - page_offset(page));
+diff --git a/fs/btrfs/extent_map.c b/fs/btrfs/extent_map.c
+index b4c9a6aa118cd..6853f043c2c14 100644
+--- a/fs/btrfs/extent_map.c
++++ b/fs/btrfs/extent_map.c
+@@ -1065,8 +1065,7 @@ static long btrfs_scan_inode(struct btrfs_inode *inode, struct btrfs_em_shrink_c
+ 		return 0;
+ 
+ 	/*
+-	 * We want to be fast because we can be called from any path trying to
+-	 * allocate memory, so if the lock is busy we don't want to spend time
++	 * We want to be fast so if the lock is busy we don't want to spend time
+ 	 * waiting for it - either some task is about to do IO for the inode or
+ 	 * we may have another task shrinking extent maps, here in this code, so
+ 	 * skip this inode.
+@@ -1109,9 +1108,7 @@ static long btrfs_scan_inode(struct btrfs_inode *inode, struct btrfs_em_shrink_c
+ 		/*
+ 		 * Stop if we need to reschedule or there's contention on the
+ 		 * lock. This is to avoid slowing other tasks trying to take the
+-		 * lock and because the shrinker might be called during a memory
+-		 * allocation path and we want to avoid taking a very long time
+-		 * and slowing down all sorts of tasks.
++		 * lock.
+ 		 */
+ 		if (need_resched() || rwlock_needbreak(&tree->lock))
+ 			break;
+@@ -1139,12 +1136,7 @@ static long btrfs_scan_root(struct btrfs_root *root, struct btrfs_em_shrink_ctx
+ 		if (ctx->scanned >= ctx->nr_to_scan)
+ 			break;
+ 
+-		/*
+-		 * We may be called from memory allocation paths, so we don't
+-		 * want to take too much time and slowdown tasks.
+-		 */
+-		if (need_resched())
+-			break;
++		cond_resched();
+ 
+ 		inode = btrfs_find_first_inode(root, min_ino);
+ 	}
+@@ -1202,14 +1194,12 @@ long btrfs_free_extent_maps(struct btrfs_fs_info *fs_info, long nr_to_scan)
+ 							   ctx.last_ino);
+ 	}
+ 
+-	/*
+-	 * We may be called from memory allocation paths, so we don't want to
+-	 * take too much time and slowdown tasks, so stop if we need reschedule.
+-	 */
+-	while (ctx.scanned < ctx.nr_to_scan && !need_resched()) {
++	while (ctx.scanned < ctx.nr_to_scan) {
+ 		struct btrfs_root *root;
+ 		unsigned long count;
+ 
++		cond_resched();
++
+ 		spin_lock(&fs_info->fs_roots_radix_lock);
+ 		count = radix_tree_gang_lookup(&fs_info->fs_roots_radix,
+ 					       (void **)&root,
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index 62c3dea9572ab..1926a228d0ba0 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -2698,15 +2698,16 @@ static int __btrfs_add_free_space_zoned(struct btrfs_block_group *block_group,
+ 	u64 offset = bytenr - block_group->start;
+ 	u64 to_free, to_unusable;
+ 	int bg_reclaim_threshold = 0;
+-	bool initial = ((size == block_group->length) && (block_group->alloc_offset == 0));
++	bool initial;
+ 	u64 reclaimable_unusable;
+ 
+-	WARN_ON(!initial && offset + size > block_group->zone_capacity);
++	spin_lock(&block_group->lock);
+ 
++	initial = ((size == block_group->length) && (block_group->alloc_offset == 0));
++	WARN_ON(!initial && offset + size > block_group->zone_capacity);
+ 	if (!initial)
+ 		bg_reclaim_threshold = READ_ONCE(sinfo->bg_reclaim_threshold);
+ 
+-	spin_lock(&ctl->tree_lock);
+ 	if (!used)
+ 		to_free = size;
+ 	else if (initial)
+@@ -2719,7 +2720,9 @@ static int __btrfs_add_free_space_zoned(struct btrfs_block_group *block_group,
+ 		to_free = offset + size - block_group->alloc_offset;
+ 	to_unusable = size - to_free;
+ 
++	spin_lock(&ctl->tree_lock);
+ 	ctl->free_space += to_free;
++	spin_unlock(&ctl->tree_lock);
+ 	/*
+ 	 * If the block group is read-only, we should account freed space into
+ 	 * bytes_readonly.
+@@ -2728,11 +2731,8 @@ static int __btrfs_add_free_space_zoned(struct btrfs_block_group *block_group,
+ 		block_group->zone_unusable += to_unusable;
+ 		WARN_ON(block_group->zone_unusable > block_group->length);
+ 	}
+-	spin_unlock(&ctl->tree_lock);
+ 	if (!used) {
+-		spin_lock(&block_group->lock);
+ 		block_group->alloc_offset -= size;
+-		spin_unlock(&block_group->lock);
+ 	}
+ 
+ 	reclaimable_unusable = block_group->zone_unusable -
+@@ -2746,6 +2746,8 @@ static int __btrfs_add_free_space_zoned(struct btrfs_block_group *block_group,
+ 		btrfs_mark_bg_to_reclaim(block_group);
+ 	}
+ 
++	spin_unlock(&block_group->lock);
++
+ 	return 0;
+ }
+ 
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 3dd4a48479a9e..d1a04c0c576ed 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -6158,25 +6158,51 @@ static int send_write_or_clone(struct send_ctx *sctx,
+ 	u64 offset = key->offset;
+ 	u64 end;
+ 	u64 bs = sctx->send_root->fs_info->sectorsize;
++	struct btrfs_file_extent_item *ei;
++	u64 disk_byte;
++	u64 data_offset;
++	u64 num_bytes;
++	struct btrfs_inode_info info = { 0 };
+ 
+ 	end = min_t(u64, btrfs_file_extent_end(path), sctx->cur_inode_size);
+ 	if (offset >= end)
+ 		return 0;
+ 
+-	if (clone_root && IS_ALIGNED(end, bs)) {
+-		struct btrfs_file_extent_item *ei;
+-		u64 disk_byte;
+-		u64 data_offset;
++	num_bytes = end - offset;
+ 
+-		ei = btrfs_item_ptr(path->nodes[0], path->slots[0],
+-				    struct btrfs_file_extent_item);
+-		disk_byte = btrfs_file_extent_disk_bytenr(path->nodes[0], ei);
+-		data_offset = btrfs_file_extent_offset(path->nodes[0], ei);
+-		ret = clone_range(sctx, path, clone_root, disk_byte,
+-				  data_offset, offset, end - offset);
+-	} else {
+-		ret = send_extent_data(sctx, path, offset, end - offset);
+-	}
++	if (!clone_root)
++		goto write_data;
++
++	if (IS_ALIGNED(end, bs))
++		goto clone_data;
++
++	/*
++	 * If the extent end is not aligned, we can clone if the extent ends at
++	 * the i_size of the inode and the clone range ends at the i_size of the
++	 * source inode, otherwise the clone operation fails with -EINVAL.
++	 */
++	if (end != sctx->cur_inode_size)
++		goto write_data;
++
++	ret = get_inode_info(clone_root->root, clone_root->ino, &info);
++	if (ret < 0)
++		return ret;
++
++	if (clone_root->offset + num_bytes == info.size)
++		goto clone_data;
++
++write_data:
++	ret = send_extent_data(sctx, path, offset, num_bytes);
++	sctx->cur_inode_next_write_offset = end;
++	return ret;
++
++clone_data:
++	ei = btrfs_item_ptr(path->nodes[0], path->slots[0],
++			    struct btrfs_file_extent_item);
++	disk_byte = btrfs_file_extent_disk_bytenr(path->nodes[0], ei);
++	data_offset = btrfs_file_extent_offset(path->nodes[0], ei);
++	ret = clone_range(sctx, path, clone_root, disk_byte, data_offset, offset,
++			  num_bytes);
+ 	sctx->cur_inode_next_write_offset = end;
+ 	return ret;
+ }
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index f05cce7c8b8d1..03107c01b2875 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -28,6 +28,7 @@
+ #include <linux/btrfs.h>
+ #include <linux/security.h>
+ #include <linux/fs_parser.h>
++#include <linux/swap.h>
+ #include "messages.h"
+ #include "delayed-inode.h"
+ #include "ctree.h"
+@@ -2386,7 +2387,13 @@ static long btrfs_nr_cached_objects(struct super_block *sb, struct shrink_contro
+ 
+ 	trace_btrfs_extent_map_shrinker_count(fs_info, nr);
+ 
+-	return nr;
++	/*
++	 * Only report the real number for DEBUG builds, as there are reports of
++	 * serious performance degradation caused by too frequent shrinks.
++	 */
++	if (IS_ENABLED(CONFIG_BTRFS_DEBUG))
++		return nr;
++	return 0;
+ }
+ 
+ static long btrfs_free_cached_objects(struct super_block *sb, struct shrink_control *sc)
+@@ -2394,6 +2401,15 @@ static long btrfs_free_cached_objects(struct super_block *sb, struct shrink_cont
+ 	const long nr_to_scan = min_t(unsigned long, LONG_MAX, sc->nr_to_scan);
+ 	struct btrfs_fs_info *fs_info = btrfs_sb(sb);
+ 
++	/*
++	 * We may be called from any task trying to allocate memory and we don't
++	 * want to slow it down with scanning and dropping extent maps. It would
++	 * also cause heavy lock contention if many tasks concurrently enter
++	 * here. Therefore only allow kswapd tasks to scan and drop extent maps.
++	 */
++	if (!current_is_kswapd())
++		return 0;
++
+ 	return btrfs_free_extent_maps(fs_info, nr_to_scan);
+ }
+ 
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index a2c3651a3d8fc..897e19790522d 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -551,9 +551,10 @@ static int check_dir_item(struct extent_buffer *leaf,
+ 
+ 		/* dir type check */
+ 		dir_type = btrfs_dir_ftype(leaf, di);
+-		if (unlikely(dir_type >= BTRFS_FT_MAX)) {
++		if (unlikely(dir_type <= BTRFS_FT_UNKNOWN ||
++			     dir_type >= BTRFS_FT_MAX)) {
+ 			dir_item_err(leaf, slot,
+-			"invalid dir item type, have %u expect [0, %u)",
++			"invalid dir item type, have %u expect (0, %u)",
+ 				dir_type, BTRFS_FT_MAX);
+ 			return -EUCLEAN;
+ 		}
+@@ -1717,6 +1718,72 @@ static int check_raid_stripe_extent(const struct extent_buffer *leaf,
+ 	return 0;
+ }
+ 
++static int check_dev_extent_item(const struct extent_buffer *leaf,
++				 const struct btrfs_key *key,
++				 int slot,
++				 struct btrfs_key *prev_key)
++{
++	struct btrfs_dev_extent *de;
++	const u32 sectorsize = leaf->fs_info->sectorsize;
++
++	de = btrfs_item_ptr(leaf, slot, struct btrfs_dev_extent);
++	/* Basic fixed member checks. */
++	if (unlikely(btrfs_dev_extent_chunk_tree(leaf, de) !=
++		     BTRFS_CHUNK_TREE_OBJECTID)) {
++		generic_err(leaf, slot,
++			    "invalid dev extent chunk tree id, has %llu expect %llu",
++			    btrfs_dev_extent_chunk_tree(leaf, de),
++			    BTRFS_CHUNK_TREE_OBJECTID);
++		return -EUCLEAN;
++	}
++	if (unlikely(btrfs_dev_extent_chunk_objectid(leaf, de) !=
++		     BTRFS_FIRST_CHUNK_TREE_OBJECTID)) {
++		generic_err(leaf, slot,
++			    "invalid dev extent chunk objectid, has %llu expect %llu",
++			    btrfs_dev_extent_chunk_objectid(leaf, de),
++			    BTRFS_FIRST_CHUNK_TREE_OBJECTID);
++		return -EUCLEAN;
++	}
++	/* Alignment check. */
++	if (unlikely(!IS_ALIGNED(key->offset, sectorsize))) {
++		generic_err(leaf, slot,
++			    "invalid dev extent key.offset, has %llu not aligned to %u",
++			    key->offset, sectorsize);
++		return -EUCLEAN;
++	}
++	if (unlikely(!IS_ALIGNED(btrfs_dev_extent_chunk_offset(leaf, de),
++				 sectorsize))) {
++		generic_err(leaf, slot,
++			    "invalid dev extent chunk offset, has %llu not aligned to %u",
++			    btrfs_dev_extent_chunk_offset(leaf, de),
++			    sectorsize);
++		return -EUCLEAN;
++	}
++	if (unlikely(!IS_ALIGNED(btrfs_dev_extent_length(leaf, de),
++				 sectorsize))) {
++		generic_err(leaf, slot,
++			    "invalid dev extent length, has %llu not aligned to %u",
++			    btrfs_dev_extent_length(leaf, de), sectorsize);
++		return -EUCLEAN;
++	}
++	/* Overlap check with previous dev extent. */
++	if (slot && prev_key->objectid == key->objectid &&
++	    prev_key->type == key->type) {
++		struct btrfs_dev_extent *prev_de;
++		u64 prev_len;
++
++		prev_de = btrfs_item_ptr(leaf, slot - 1, struct btrfs_dev_extent);
++		prev_len = btrfs_dev_extent_length(leaf, prev_de);
++		if (unlikely(prev_key->offset + prev_len > key->offset)) {
++			generic_err(leaf, slot,
++		"dev extent overlap, prev offset %llu len %llu current offset %llu",
++				    prev_key->offset, prev_len, key->offset);
++			return -EUCLEAN;
++		}
++	}
++	return 0;
++}
++
+ /*
+  * Common point to switch the item-specific validation.
+  */
+@@ -1753,6 +1820,9 @@ static enum btrfs_tree_block_status check_leaf_item(struct extent_buffer *leaf,
+ 	case BTRFS_DEV_ITEM_KEY:
+ 		ret = check_dev_item(leaf, key, slot);
+ 		break;
++	case BTRFS_DEV_EXTENT_KEY:
++		ret = check_dev_extent_item(leaf, key, slot, prev_key);
++		break;
+ 	case BTRFS_INODE_ITEM_KEY:
+ 		ret = check_inode_item(leaf, key, slot);
+ 		break;
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index 8c16bc5250ef5..73b5a07bf94de 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -498,6 +498,11 @@ const struct netfs_request_ops ceph_netfs_ops = {
+ };
+ 
+ #ifdef CONFIG_CEPH_FSCACHE
++static void ceph_set_page_fscache(struct page *page)
++{
++	folio_start_private_2(page_folio(page)); /* [DEPRECATED] */
++}
++
+ static void ceph_fscache_write_terminated(void *priv, ssize_t error, bool was_async)
+ {
+ 	struct inode *inode = priv;
+@@ -515,6 +520,10 @@ static void ceph_fscache_write_to_cache(struct inode *inode, u64 off, u64 len, b
+ 			       ceph_fscache_write_terminated, inode, true, caching);
+ }
+ #else
++static inline void ceph_set_page_fscache(struct page *page)
++{
++}
++
+ static inline void ceph_fscache_write_to_cache(struct inode *inode, u64 off, u64 len, bool caching)
+ {
+ }
+@@ -706,6 +715,8 @@ static int writepage_nounlock(struct page *page, struct writeback_control *wbc)
+ 		len = wlen;
+ 
+ 	set_page_writeback(page);
++	if (caching)
++		ceph_set_page_fscache(page);
+ 	ceph_fscache_write_to_cache(inode, page_off, len, caching);
+ 
+ 	if (IS_ENCRYPTED(inode)) {
+@@ -789,6 +800,8 @@ static int ceph_writepage(struct page *page, struct writeback_control *wbc)
+ 		return AOP_WRITEPAGE_ACTIVATE;
+ 	}
+ 
++	folio_wait_private_2(page_folio(page)); /* [DEPRECATED] */
++
+ 	err = writepage_nounlock(page, wbc);
+ 	if (err == -ERESTARTSYS) {
+ 		/* direct memory reclaimer was killed by SIGKILL. return 0
+@@ -1062,7 +1075,8 @@ static int ceph_writepages_start(struct address_space *mapping,
+ 				unlock_page(page);
+ 				break;
+ 			}
+-			if (PageWriteback(page)) {
++			if (PageWriteback(page) ||
++			    PagePrivate2(page) /* [DEPRECATED] */) {
+ 				if (wbc->sync_mode == WB_SYNC_NONE) {
+ 					doutc(cl, "%p under writeback\n", page);
+ 					unlock_page(page);
+@@ -1070,6 +1084,7 @@ static int ceph_writepages_start(struct address_space *mapping,
+ 				}
+ 				doutc(cl, "waiting on writeback %p\n", page);
+ 				wait_on_page_writeback(page);
++				folio_wait_private_2(page_folio(page)); /* [DEPRECATED] */
+ 			}
+ 
+ 			if (!clear_page_dirty_for_io(page)) {
+@@ -1254,6 +1269,8 @@ static int ceph_writepages_start(struct address_space *mapping,
+ 			}
+ 
+ 			set_page_writeback(page);
++			if (caching)
++				ceph_set_page_fscache(page);
+ 			len += thp_size(page);
+ 		}
+ 		ceph_fscache_write_to_cache(inode, offset, len, caching);
+diff --git a/fs/file.c b/fs/file.c
+index a11e59b5d6026..655338effe9c7 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -46,27 +46,23 @@ static void free_fdtable_rcu(struct rcu_head *rcu)
+ #define BITBIT_NR(nr)	BITS_TO_LONGS(BITS_TO_LONGS(nr))
+ #define BITBIT_SIZE(nr)	(BITBIT_NR(nr) * sizeof(long))
+ 
++#define fdt_words(fdt) ((fdt)->max_fds / BITS_PER_LONG) // words in ->open_fds
+ /*
+  * Copy 'count' fd bits from the old table to the new table and clear the extra
+  * space if any.  This does not copy the file pointers.  Called with the files
+  * spinlock held for write.
+  */
+-static void copy_fd_bitmaps(struct fdtable *nfdt, struct fdtable *ofdt,
+-			    unsigned int count)
++static inline void copy_fd_bitmaps(struct fdtable *nfdt, struct fdtable *ofdt,
++			    unsigned int copy_words)
+ {
+-	unsigned int cpy, set;
+-
+-	cpy = count / BITS_PER_BYTE;
+-	set = (nfdt->max_fds - count) / BITS_PER_BYTE;
+-	memcpy(nfdt->open_fds, ofdt->open_fds, cpy);
+-	memset((char *)nfdt->open_fds + cpy, 0, set);
+-	memcpy(nfdt->close_on_exec, ofdt->close_on_exec, cpy);
+-	memset((char *)nfdt->close_on_exec + cpy, 0, set);
+-
+-	cpy = BITBIT_SIZE(count);
+-	set = BITBIT_SIZE(nfdt->max_fds) - cpy;
+-	memcpy(nfdt->full_fds_bits, ofdt->full_fds_bits, cpy);
+-	memset((char *)nfdt->full_fds_bits + cpy, 0, set);
++	unsigned int nwords = fdt_words(nfdt);
++
++	bitmap_copy_and_extend(nfdt->open_fds, ofdt->open_fds,
++			copy_words * BITS_PER_LONG, nwords * BITS_PER_LONG);
++	bitmap_copy_and_extend(nfdt->close_on_exec, ofdt->close_on_exec,
++			copy_words * BITS_PER_LONG, nwords * BITS_PER_LONG);
++	bitmap_copy_and_extend(nfdt->full_fds_bits, ofdt->full_fds_bits,
++			copy_words, nwords);
+ }
+ 
+ /*
+@@ -84,7 +80,7 @@ static void copy_fdtable(struct fdtable *nfdt, struct fdtable *ofdt)
+ 	memcpy(nfdt->fd, ofdt->fd, cpy);
+ 	memset((char *)nfdt->fd + cpy, 0, set);
+ 
+-	copy_fd_bitmaps(nfdt, ofdt, ofdt->max_fds);
++	copy_fd_bitmaps(nfdt, ofdt, fdt_words(ofdt));
+ }
+ 
+ /*
+@@ -379,7 +375,7 @@ struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int
+ 		open_files = sane_fdtable_size(old_fdt, max_fds);
+ 	}
+ 
+-	copy_fd_bitmaps(new_fdt, old_fdt, open_files);
++	copy_fd_bitmaps(new_fdt, old_fdt, open_files / BITS_PER_LONG);
+ 
+ 	old_fds = old_fdt->fd;
+ 	new_fds = new_fdt->fd;
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 9eb191b5c4de1..7146038b2fe7d 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -1618,9 +1618,11 @@ static int fuse_notify_store(struct fuse_conn *fc, unsigned int size,
+ 
+ 		this_num = min_t(unsigned, num, PAGE_SIZE - offset);
+ 		err = fuse_copy_page(cs, &page, offset, this_num, 0);
+-		if (!err && offset == 0 &&
+-		    (this_num == PAGE_SIZE || file_size == end))
++		if (!PageUptodate(page) && !err && offset == 0 &&
++		    (this_num == PAGE_SIZE || file_size == end)) {
++			zero_user_segment(page, this_num, PAGE_SIZE);
+ 			SetPageUptodate(page);
++		}
+ 		unlock_page(page);
+ 		put_page(page);
+ 
+diff --git a/fs/inode.c b/fs/inode.c
+index 3a41f83a4ba55..f5add7222c98e 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -486,6 +486,39 @@ static void inode_lru_list_del(struct inode *inode)
+ 		this_cpu_dec(nr_unused);
+ }
+ 
++static void inode_pin_lru_isolating(struct inode *inode)
++{
++	lockdep_assert_held(&inode->i_lock);
++	WARN_ON(inode->i_state & (I_LRU_ISOLATING | I_FREEING | I_WILL_FREE));
++	inode->i_state |= I_LRU_ISOLATING;
++}
++
++static void inode_unpin_lru_isolating(struct inode *inode)
++{
++	spin_lock(&inode->i_lock);
++	WARN_ON(!(inode->i_state & I_LRU_ISOLATING));
++	inode->i_state &= ~I_LRU_ISOLATING;
++	smp_mb();
++	wake_up_bit(&inode->i_state, __I_LRU_ISOLATING);
++	spin_unlock(&inode->i_lock);
++}
++
++static void inode_wait_for_lru_isolating(struct inode *inode)
++{
++	spin_lock(&inode->i_lock);
++	if (inode->i_state & I_LRU_ISOLATING) {
++		DEFINE_WAIT_BIT(wq, &inode->i_state, __I_LRU_ISOLATING);
++		wait_queue_head_t *wqh;
++
++		wqh = bit_waitqueue(&inode->i_state, __I_LRU_ISOLATING);
++		spin_unlock(&inode->i_lock);
++		__wait_on_bit(wqh, &wq, bit_wait, TASK_UNINTERRUPTIBLE);
++		spin_lock(&inode->i_lock);
++		WARN_ON(inode->i_state & I_LRU_ISOLATING);
++	}
++	spin_unlock(&inode->i_lock);
++}
++
+ /**
+  * inode_sb_list_add - add inode to the superblock list of inodes
+  * @inode: inode to add
+@@ -655,6 +688,8 @@ static void evict(struct inode *inode)
+ 
+ 	inode_sb_list_del(inode);
+ 
++	inode_wait_for_lru_isolating(inode);
++
+ 	/*
+ 	 * Wait for flusher thread to be done with the inode so that filesystem
+ 	 * does not start destroying it while writeback is still running. Since
+@@ -843,7 +878,7 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
+ 	 * be under pressure before the cache inside the highmem zone.
+ 	 */
+ 	if (inode_has_buffers(inode) || !mapping_empty(&inode->i_data)) {
+-		__iget(inode);
++		inode_pin_lru_isolating(inode);
+ 		spin_unlock(&inode->i_lock);
+ 		spin_unlock(lru_lock);
+ 		if (remove_inode_buffers(inode)) {
+@@ -855,7 +890,7 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
+ 				__count_vm_events(PGINODESTEAL, reap);
+ 			mm_account_reclaimed_pages(reap);
+ 		}
+-		iput(inode);
++		inode_unpin_lru_isolating(inode);
+ 		spin_lock(lru_lock);
+ 		return LRU_RETRY;
+ 	}
+diff --git a/fs/libfs.c b/fs/libfs.c
+index b635ee5adbcce..65279e53fbf27 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -450,6 +450,14 @@ void simple_offset_destroy(struct offset_ctx *octx)
+ 	mtree_destroy(&octx->mt);
+ }
+ 
++static int offset_dir_open(struct inode *inode, struct file *file)
++{
++	struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode);
++
++	file->private_data = (void *)ctx->next_offset;
++	return 0;
++}
++
+ /**
+  * offset_dir_llseek - Advance the read position of a directory descriptor
+  * @file: an open directory whose position is to be updated
+@@ -463,6 +471,9 @@ void simple_offset_destroy(struct offset_ctx *octx)
+  */
+ static loff_t offset_dir_llseek(struct file *file, loff_t offset, int whence)
+ {
++	struct inode *inode = file->f_inode;
++	struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode);
++
+ 	switch (whence) {
+ 	case SEEK_CUR:
+ 		offset += file->f_pos;
+@@ -476,7 +487,8 @@ static loff_t offset_dir_llseek(struct file *file, loff_t offset, int whence)
+ 	}
+ 
+ 	/* In this case, ->private_data is protected by f_pos_lock */
+-	file->private_data = NULL;
++	if (!offset)
++		file->private_data = (void *)ctx->next_offset;
+ 	return vfs_setpos(file, offset, LONG_MAX);
+ }
+ 
+@@ -507,7 +519,7 @@ static bool offset_dir_emit(struct dir_context *ctx, struct dentry *dentry)
+ 			  inode->i_ino, fs_umode_to_dtype(inode->i_mode));
+ }
+ 
+-static void *offset_iterate_dir(struct inode *inode, struct dir_context *ctx)
++static void offset_iterate_dir(struct inode *inode, struct dir_context *ctx, long last_index)
+ {
+ 	struct offset_ctx *octx = inode->i_op->get_offset_ctx(inode);
+ 	struct dentry *dentry;
+@@ -515,17 +527,21 @@ static void *offset_iterate_dir(struct inode *inode, struct dir_context *ctx)
+ 	while (true) {
+ 		dentry = offset_find_next(octx, ctx->pos);
+ 		if (!dentry)
+-			return ERR_PTR(-ENOENT);
++			return;
++
++		if (dentry2offset(dentry) >= last_index) {
++			dput(dentry);
++			return;
++		}
+ 
+ 		if (!offset_dir_emit(ctx, dentry)) {
+ 			dput(dentry);
+-			break;
++			return;
+ 		}
+ 
+ 		ctx->pos = dentry2offset(dentry) + 1;
+ 		dput(dentry);
+ 	}
+-	return NULL;
+ }
+ 
+ /**
+@@ -552,22 +568,19 @@ static void *offset_iterate_dir(struct inode *inode, struct dir_context *ctx)
+ static int offset_readdir(struct file *file, struct dir_context *ctx)
+ {
+ 	struct dentry *dir = file->f_path.dentry;
++	long last_index = (long)file->private_data;
+ 
+ 	lockdep_assert_held(&d_inode(dir)->i_rwsem);
+ 
+ 	if (!dir_emit_dots(file, ctx))
+ 		return 0;
+ 
+-	/* In this case, ->private_data is protected by f_pos_lock */
+-	if (ctx->pos == DIR_OFFSET_MIN)
+-		file->private_data = NULL;
+-	else if (file->private_data == ERR_PTR(-ENOENT))
+-		return 0;
+-	file->private_data = offset_iterate_dir(d_inode(dir), ctx);
++	offset_iterate_dir(d_inode(dir), ctx, last_index);
+ 	return 0;
+ }
+ 
+ const struct file_operations simple_offset_dir_operations = {
++	.open		= offset_dir_open,
+ 	.llseek		= offset_dir_llseek,
+ 	.iterate_shared	= offset_readdir,
+ 	.read		= generic_read_dir,
+diff --git a/fs/locks.c b/fs/locks.c
+index 9afb16e0683ff..e45cad40f8b6b 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -2984,7 +2984,7 @@ static int __init filelock_init(void)
+ 	filelock_cache = kmem_cache_create("file_lock_cache",
+ 			sizeof(struct file_lock), 0, SLAB_PANIC, NULL);
+ 
+-	filelease_cache = kmem_cache_create("file_lock_cache",
++	filelease_cache = kmem_cache_create("file_lease_cache",
+ 			sizeof(struct file_lease), 0, SLAB_PANIC, NULL);
+ 
+ 	for_each_possible_cpu(i) {
+diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
+index 4c0401dbbfcfa..9aa045e2ab0ae 100644
+--- a/fs/netfs/buffered_read.c
++++ b/fs/netfs/buffered_read.c
+@@ -466,7 +466,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
+ 	if (!netfs_is_cache_enabled(ctx) &&
+ 	    netfs_skip_folio_read(folio, pos, len, false)) {
+ 		netfs_stat(&netfs_n_rh_write_zskip);
+-		goto have_folio;
++		goto have_folio_no_wait;
+ 	}
+ 
+ 	rreq = netfs_alloc_request(mapping, file,
+@@ -507,6 +507,12 @@ int netfs_write_begin(struct netfs_inode *ctx,
+ 	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
+ 
+ have_folio:
++	if (test_bit(NETFS_ICTX_USE_PGPRIV2, &ctx->flags)) {
++		ret = folio_wait_private_2_killable(folio);
++		if (ret < 0)
++			goto error;
++	}
++have_folio_no_wait:
+ 	*_folio = folio;
+ 	kleave(" = 0");
+ 	return 0;
+diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
+index ecbc99ec7d367..18055c1e01835 100644
+--- a/fs/netfs/buffered_write.c
++++ b/fs/netfs/buffered_write.c
+@@ -184,7 +184,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
+ 	unsigned int bdp_flags = (iocb->ki_flags & IOCB_NOWAIT) ? BDP_ASYNC : 0;
+ 	ssize_t written = 0, ret, ret2;
+ 	loff_t i_size, pos = iocb->ki_pos, from, to;
+-	size_t max_chunk = PAGE_SIZE << MAX_PAGECACHE_ORDER;
++	size_t max_chunk = mapping_max_folio_size(mapping);
+ 	bool maybe_trouble = false;
+ 
+ 	if (unlikely(test_bit(NETFS_ICTX_WRITETHROUGH, &ctx->flags) ||
+diff --git a/fs/netfs/fscache_cookie.c b/fs/netfs/fscache_cookie.c
+index 4d1e8bf4c615f..431a1905fcb4c 100644
+--- a/fs/netfs/fscache_cookie.c
++++ b/fs/netfs/fscache_cookie.c
+@@ -741,6 +741,10 @@ static void fscache_cookie_state_machine(struct fscache_cookie *cookie)
+ 			spin_lock(&cookie->lock);
+ 		}
+ 		if (test_bit(FSCACHE_COOKIE_DO_LRU_DISCARD, &cookie->flags)) {
++			if (atomic_read(&cookie->n_accesses) != 0)
++				/* still being accessed: postpone it */
++				break;
++
+ 			__fscache_set_cookie_state(cookie,
+ 						   FSCACHE_COOKIE_STATE_LRU_DISCARDING);
+ 			wake = true;
+diff --git a/fs/netfs/io.c b/fs/netfs/io.c
+index c7576481c321d..f3abc5dfdbc0c 100644
+--- a/fs/netfs/io.c
++++ b/fs/netfs/io.c
+@@ -98,6 +98,146 @@ static void netfs_rreq_completed(struct netfs_io_request *rreq, bool was_async)
+ 	netfs_put_request(rreq, was_async, netfs_rreq_trace_put_complete);
+ }
+ 
++/*
++ * [DEPRECATED] Deal with the completion of writing the data to the cache.  We
++ * have to clear the PG_fscache bits on the folios involved and release the
++ * caller's ref.
++ *
++ * May be called in softirq mode and we inherit a ref from the caller.
++ */
++static void netfs_rreq_unmark_after_write(struct netfs_io_request *rreq,
++					  bool was_async)
++{
++	struct netfs_io_subrequest *subreq;
++	struct folio *folio;
++	pgoff_t unlocked = 0;
++	bool have_unlocked = false;
++
++	rcu_read_lock();
++
++	list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
++		XA_STATE(xas, &rreq->mapping->i_pages, subreq->start / PAGE_SIZE);
++
++		xas_for_each(&xas, folio, (subreq->start + subreq->len - 1) / PAGE_SIZE) {
++			if (xas_retry(&xas, folio))
++				continue;
++
++			/* We might have multiple writes from the same huge
++			 * folio, but we mustn't unlock a folio more than once.
++			 */
++			if (have_unlocked && folio->index <= unlocked)
++				continue;
++			unlocked = folio_next_index(folio) - 1;
++			trace_netfs_folio(folio, netfs_folio_trace_end_copy);
++			folio_end_private_2(folio);
++			have_unlocked = true;
++		}
++	}
++
++	rcu_read_unlock();
++	netfs_rreq_completed(rreq, was_async);
++}
++
++static void netfs_rreq_copy_terminated(void *priv, ssize_t transferred_or_error,
++				       bool was_async) /* [DEPRECATED] */
++{
++	struct netfs_io_subrequest *subreq = priv;
++	struct netfs_io_request *rreq = subreq->rreq;
++
++	if (IS_ERR_VALUE(transferred_or_error)) {
++		netfs_stat(&netfs_n_rh_write_failed);
++		trace_netfs_failure(rreq, subreq, transferred_or_error,
++				    netfs_fail_copy_to_cache);
++	} else {
++		netfs_stat(&netfs_n_rh_write_done);
++	}
++
++	trace_netfs_sreq(subreq, netfs_sreq_trace_write_term);
++
++	/* If we decrement nr_copy_ops to 0, the ref belongs to us. */
++	if (atomic_dec_and_test(&rreq->nr_copy_ops))
++		netfs_rreq_unmark_after_write(rreq, was_async);
++
++	netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated);
++}
++
++/*
++ * [DEPRECATED] Perform any outstanding writes to the cache.  We inherit a ref
++ * from the caller.
++ */
++static void netfs_rreq_do_write_to_cache(struct netfs_io_request *rreq)
++{
++	struct netfs_cache_resources *cres = &rreq->cache_resources;
++	struct netfs_io_subrequest *subreq, *next, *p;
++	struct iov_iter iter;
++	int ret;
++
++	trace_netfs_rreq(rreq, netfs_rreq_trace_copy);
++
++	/* We don't want terminating writes trying to wake us up whilst we're
++	 * still going through the list.
++	 */
++	atomic_inc(&rreq->nr_copy_ops);
++
++	list_for_each_entry_safe(subreq, p, &rreq->subrequests, rreq_link) {
++		if (!test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) {
++			list_del_init(&subreq->rreq_link);
++			netfs_put_subrequest(subreq, false,
++					     netfs_sreq_trace_put_no_copy);
++		}
++	}
++
++	list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
++		/* Amalgamate adjacent writes */
++		while (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) {
++			next = list_next_entry(subreq, rreq_link);
++			if (next->start != subreq->start + subreq->len)
++				break;
++			subreq->len += next->len;
++			list_del_init(&next->rreq_link);
++			netfs_put_subrequest(next, false,
++					     netfs_sreq_trace_put_merged);
++		}
++
++		ret = cres->ops->prepare_write(cres, &subreq->start, &subreq->len,
++					       subreq->len, rreq->i_size, true);
++		if (ret < 0) {
++			trace_netfs_failure(rreq, subreq, ret, netfs_fail_prepare_write);
++			trace_netfs_sreq(subreq, netfs_sreq_trace_write_skip);
++			continue;
++		}
++
++		iov_iter_xarray(&iter, ITER_SOURCE, &rreq->mapping->i_pages,
++				subreq->start, subreq->len);
++
++		atomic_inc(&rreq->nr_copy_ops);
++		netfs_stat(&netfs_n_rh_write);
++		netfs_get_subrequest(subreq, netfs_sreq_trace_get_copy_to_cache);
++		trace_netfs_sreq(subreq, netfs_sreq_trace_write);
++		cres->ops->write(cres, subreq->start, &iter,
++				 netfs_rreq_copy_terminated, subreq);
++	}
++
++	/* If we decrement nr_copy_ops to 0, the usage ref belongs to us. */
++	if (atomic_dec_and_test(&rreq->nr_copy_ops))
++		netfs_rreq_unmark_after_write(rreq, false);
++}
++
++static void netfs_rreq_write_to_cache_work(struct work_struct *work) /* [DEPRECATED] */
++{
++	struct netfs_io_request *rreq =
++		container_of(work, struct netfs_io_request, work);
++
++	netfs_rreq_do_write_to_cache(rreq);
++}
++
++static void netfs_rreq_write_to_cache(struct netfs_io_request *rreq) /* [DEPRECATED] */
++{
++	rreq->work.func = netfs_rreq_write_to_cache_work;
++	if (!queue_work(system_unbound_wq, &rreq->work))
++		BUG();
++}
++
+ /*
+  * Handle a short read.
+  */
+@@ -275,6 +415,10 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async)
+ 	clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
+ 	wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS);
+ 
++	if (test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags) &&
++	    test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags))
++		return netfs_rreq_write_to_cache(rreq);
++
+ 	netfs_rreq_completed(rreq, was_async);
+ }
+ 
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index d4bcc7da700c6..0a271b9fbc622 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -290,7 +290,7 @@ struct smb_version_operations {
+ 	int (*check_receive)(struct mid_q_entry *, struct TCP_Server_Info *,
+ 			     bool);
+ 	void (*add_credits)(struct TCP_Server_Info *server,
+-			    const struct cifs_credits *credits,
++			    struct cifs_credits *credits,
+ 			    const int optype);
+ 	void (*set_credits)(struct TCP_Server_Info *, const int);
+ 	int * (*get_credits_field)(struct TCP_Server_Info *, const int);
+@@ -550,8 +550,8 @@ struct smb_version_operations {
+ 				size_t *, struct cifs_credits *);
+ 	/* adjust previously taken mtu credits to request size */
+ 	int (*adjust_credits)(struct TCP_Server_Info *server,
+-			      struct cifs_credits *credits,
+-			      const unsigned int payload_size);
++			      struct cifs_io_subrequest *subreq,
++			      unsigned int /*enum smb3_rw_credits_trace*/ trace);
+ 	/* check if we need to issue closedir */
+ 	bool (*dir_needs_close)(struct cifsFileInfo *);
+ 	long (*fallocate)(struct file *, struct cifs_tcon *, int, loff_t,
+@@ -848,6 +848,9 @@ static inline void cifs_server_unlock(struct TCP_Server_Info *server)
+ struct cifs_credits {
+ 	unsigned int value;
+ 	unsigned int instance;
++	unsigned int in_flight_check;
++	unsigned int rreq_debug_id;
++	unsigned int rreq_debug_index;
+ };
+ 
+ static inline unsigned int
+@@ -873,7 +876,7 @@ has_credits(struct TCP_Server_Info *server, int *credits, int num_credits)
+ }
+ 
+ static inline void
+-add_credits(struct TCP_Server_Info *server, const struct cifs_credits *credits,
++add_credits(struct TCP_Server_Info *server, struct cifs_credits *credits,
+ 	    const int optype)
+ {
+ 	server->ops->add_credits(server, credits, optype);
+@@ -897,11 +900,11 @@ set_credits(struct TCP_Server_Info *server, const int val)
+ }
+ 
+ static inline int
+-adjust_credits(struct TCP_Server_Info *server, struct cifs_credits *credits,
+-	       const unsigned int payload_size)
++adjust_credits(struct TCP_Server_Info *server, struct cifs_io_subrequest *subreq,
++	       unsigned int /* enum smb3_rw_credits_trace */ trace)
+ {
+ 	return server->ops->adjust_credits ?
+-		server->ops->adjust_credits(server, credits, payload_size) : 0;
++		server->ops->adjust_credits(server, subreq, trace) : 0;
+ }
+ 
+ static inline __le64
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 04ec1b9737a89..b202eac6584e1 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -80,6 +80,16 @@ static void cifs_prepare_write(struct netfs_io_subrequest *subreq)
+ 		return netfs_prepare_write_failed(subreq);
+ 	}
+ 
++	wdata->credits.rreq_debug_id = subreq->rreq->debug_id;
++	wdata->credits.rreq_debug_index = subreq->debug_index;
++	wdata->credits.in_flight_check = 1;
++	trace_smb3_rw_credits(wdata->rreq->debug_id,
++			      wdata->subreq.debug_index,
++			      wdata->credits.value,
++			      server->credits, server->in_flight,
++			      wdata->credits.value,
++			      cifs_trace_rw_credits_write_prepare);
++
+ #ifdef CONFIG_CIFS_SMB_DIRECT
+ 	if (server->smbd_conn)
+ 		subreq->max_nr_segs = server->smbd_conn->max_frmr_depth;
+@@ -101,7 +111,7 @@ static void cifs_issue_write(struct netfs_io_subrequest *subreq)
+ 		goto fail;
+ 	}
+ 
+-	rc = adjust_credits(wdata->server, &wdata->credits, wdata->subreq.len);
++	rc = adjust_credits(wdata->server, wdata, cifs_trace_rw_credits_issue_write_adjust);
+ 	if (rc)
+ 		goto fail;
+ 
+@@ -163,7 +173,18 @@ static bool cifs_clamp_length(struct netfs_io_subrequest *subreq)
+ 		return false;
+ 	}
+ 
++	rdata->credits.in_flight_check = 1;
++	rdata->credits.rreq_debug_id = rreq->debug_id;
++	rdata->credits.rreq_debug_index = subreq->debug_index;
++
++	trace_smb3_rw_credits(rdata->rreq->debug_id,
++			      rdata->subreq.debug_index,
++			      rdata->credits.value,
++			      server->credits, server->in_flight, 0,
++			      cifs_trace_rw_credits_read_submit);
++
+ 	subreq->len = min_t(size_t, subreq->len, rsize);
++
+ #ifdef CONFIG_CIFS_SMB_DIRECT
+ 	if (server->smbd_conn)
+ 		subreq->max_nr_segs = server->smbd_conn->max_frmr_depth;
+@@ -294,7 +315,20 @@ static void cifs_free_subrequest(struct netfs_io_subrequest *subreq)
+ #endif
+ 	}
+ 
+-	add_credits_and_wake_if(rdata->server, &rdata->credits, 0);
++	if (rdata->credits.value != 0) {
++		trace_smb3_rw_credits(rdata->rreq->debug_id,
++				      rdata->subreq.debug_index,
++				      rdata->credits.value,
++				      rdata->server ? rdata->server->credits : 0,
++				      rdata->server ? rdata->server->in_flight : 0,
++				      -rdata->credits.value,
++				      cifs_trace_rw_credits_free_subreq);
++		if (rdata->server)
++			add_credits_and_wake_if(rdata->server, &rdata->credits, 0);
++		else
++			rdata->credits.value = 0;
++	}
++
+ 	if (rdata->have_xid)
+ 		free_xid(rdata->xid);
+ }
+@@ -2719,6 +2753,7 @@ cifs_writev(struct kiocb *iocb, struct iov_iter *from)
+ 	struct inode *inode = file->f_mapping->host;
+ 	struct cifsInodeInfo *cinode = CIFS_I(inode);
+ 	struct TCP_Server_Info *server = tlink_tcon(cfile->tlink)->ses->server;
++	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+ 	ssize_t rc;
+ 
+ 	rc = netfs_start_io_write(inode);
+@@ -2735,12 +2770,16 @@ cifs_writev(struct kiocb *iocb, struct iov_iter *from)
+ 	if (rc <= 0)
+ 		goto out;
+ 
+-	if (!cifs_find_lock_conflict(cfile, iocb->ki_pos, iov_iter_count(from),
++	if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) &&
++	    (cifs_find_lock_conflict(cfile, iocb->ki_pos, iov_iter_count(from),
+ 				     server->vals->exclusive_lock_type, 0,
+-				     NULL, CIFS_WRITE_OP))
+-		rc = netfs_buffered_write_iter_locked(iocb, from, NULL);
+-	else
++				     NULL, CIFS_WRITE_OP))) {
+ 		rc = -EACCES;
++		goto out;
++	}
++
++	rc = netfs_buffered_write_iter_locked(iocb, from, NULL);
++
+ out:
+ 	up_read(&cinode->lock_sem);
+ 	netfs_end_io_write(inode);
+@@ -2872,9 +2911,7 @@ cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to)
+ 	if (!CIFS_CACHE_READ(cinode))
+ 		return netfs_unbuffered_read_iter(iocb, to);
+ 
+-	if (cap_unix(tcon->ses) &&
+-	    (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) &&
+-	    ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0)) {
++	if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0) {
+ 		if (iocb->ki_flags & IOCB_DIRECT)
+ 			return netfs_unbuffered_read_iter(iocb, to);
+ 		return netfs_buffered_read_iter(iocb, to);
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index 689d8a506d459..48c27581ec511 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -378,6 +378,8 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ 			u32 plen, struct cifs_sb_info *cifs_sb,
+ 			bool unicode, struct cifs_open_info_data *data)
+ {
++	struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
++
+ 	data->reparse.buf = buf;
+ 
+ 	/* See MS-FSCC 2.1.2 */
+@@ -394,12 +396,13 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ 	case IO_REPARSE_TAG_LX_FIFO:
+ 	case IO_REPARSE_TAG_LX_CHR:
+ 	case IO_REPARSE_TAG_LX_BLK:
+-		return 0;
++		break;
+ 	default:
+-		cifs_dbg(VFS, "%s: unhandled reparse tag: 0x%08x\n",
+-			 __func__, le32_to_cpu(buf->ReparseTag));
+-		return -EOPNOTSUPP;
++		cifs_tcon_dbg(VFS | ONCE, "unhandled reparse tag: 0x%08x\n",
++			      le32_to_cpu(buf->ReparseTag));
++		break;
+ 	}
++	return 0;
+ }
+ 
+ int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
+diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
+index 212ec6f66ec65..e1f2feb56f45f 100644
+--- a/fs/smb/client/smb1ops.c
++++ b/fs/smb/client/smb1ops.c
+@@ -108,7 +108,7 @@ cifs_find_mid(struct TCP_Server_Info *server, char *buffer)
+ 
+ static void
+ cifs_add_credits(struct TCP_Server_Info *server,
+-		 const struct cifs_credits *credits, const int optype)
++		 struct cifs_credits *credits, const int optype)
+ {
+ 	spin_lock(&server->req_lock);
+ 	server->credits += credits->value;
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index c8e536540895a..7fe59235f0901 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -66,7 +66,7 @@ change_conf(struct TCP_Server_Info *server)
+ 
+ static void
+ smb2_add_credits(struct TCP_Server_Info *server,
+-		 const struct cifs_credits *credits, const int optype)
++		 struct cifs_credits *credits, const int optype)
+ {
+ 	int *val, rc = -1;
+ 	int scredits, in_flight;
+@@ -94,7 +94,21 @@ smb2_add_credits(struct TCP_Server_Info *server,
+ 					    server->conn_id, server->hostname, *val,
+ 					    add, server->in_flight);
+ 	}
+-	WARN_ON_ONCE(server->in_flight == 0);
++	if (credits->in_flight_check > 1) {
++		pr_warn_once("rreq R=%08x[%x] Credits not in flight\n",
++			     credits->rreq_debug_id, credits->rreq_debug_index);
++	} else {
++		credits->in_flight_check = 2;
++	}
++	if (WARN_ON_ONCE(server->in_flight == 0)) {
++		pr_warn_once("rreq R=%08x[%x] Zero in_flight\n",
++			     credits->rreq_debug_id, credits->rreq_debug_index);
++		trace_smb3_rw_credits(credits->rreq_debug_id,
++				      credits->rreq_debug_index,
++				      credits->value,
++				      server->credits, server->in_flight, 0,
++				      cifs_trace_rw_credits_zero_in_flight);
++	}
+ 	server->in_flight--;
+ 	if (server->in_flight == 0 &&
+ 	   ((optype & CIFS_OP_MASK) != CIFS_NEG_OP) &&
+@@ -283,16 +297,23 @@ smb2_wait_mtu_credits(struct TCP_Server_Info *server, size_t size,
+ 
+ static int
+ smb2_adjust_credits(struct TCP_Server_Info *server,
+-		    struct cifs_credits *credits,
+-		    const unsigned int payload_size)
++		    struct cifs_io_subrequest *subreq,
++		    unsigned int /*enum smb3_rw_credits_trace*/ trace)
+ {
+-	int new_val = DIV_ROUND_UP(payload_size, SMB2_MAX_BUFFER_SIZE);
++	struct cifs_credits *credits = &subreq->credits;
++	int new_val = DIV_ROUND_UP(subreq->subreq.len, SMB2_MAX_BUFFER_SIZE);
+ 	int scredits, in_flight;
+ 
+ 	if (!credits->value || credits->value == new_val)
+ 		return 0;
+ 
+ 	if (credits->value < new_val) {
++		trace_smb3_rw_credits(subreq->rreq->debug_id,
++				      subreq->subreq.debug_index,
++				      credits->value,
++				      server->credits, server->in_flight,
++				      new_val - credits->value,
++				      cifs_trace_rw_credits_no_adjust_up);
+ 		trace_smb3_too_many_credits(server->CurrentMid,
+ 				server->conn_id, server->hostname, 0, credits->value - new_val, 0);
+ 		cifs_server_dbg(VFS, "request has less credits (%d) than required (%d)",
+@@ -308,6 +329,12 @@ smb2_adjust_credits(struct TCP_Server_Info *server,
+ 		in_flight = server->in_flight;
+ 		spin_unlock(&server->req_lock);
+ 
++		trace_smb3_rw_credits(subreq->rreq->debug_id,
++				      subreq->subreq.debug_index,
++				      credits->value,
++				      server->credits, server->in_flight,
++				      new_val - credits->value,
++				      cifs_trace_rw_credits_old_session);
+ 		trace_smb3_reconnect_detected(server->CurrentMid,
+ 			server->conn_id, server->hostname, scredits,
+ 			credits->value - new_val, in_flight);
+@@ -316,6 +343,11 @@ smb2_adjust_credits(struct TCP_Server_Info *server,
+ 		return -EAGAIN;
+ 	}
+ 
++	trace_smb3_rw_credits(subreq->rreq->debug_id,
++			      subreq->subreq.debug_index,
++			      credits->value,
++			      server->credits, server->in_flight,
++			      new_val - credits->value, trace);
+ 	server->credits += credits->value - new_val;
+ 	scredits = server->credits;
+ 	in_flight = server->in_flight;
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 896147ba6660e..4cd5c33be2a1a 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -4505,8 +4505,15 @@ smb2_readv_callback(struct mid_q_entry *mid)
+ 	struct TCP_Server_Info *server = rdata->server;
+ 	struct smb2_hdr *shdr =
+ 				(struct smb2_hdr *)rdata->iov[0].iov_base;
+-	struct cifs_credits credits = { .value = 0, .instance = 0 };
++	struct cifs_credits credits = {
++		.value = 0,
++		.instance = 0,
++		.rreq_debug_id = rdata->rreq->debug_id,
++		.rreq_debug_index = rdata->subreq.debug_index,
++	};
+ 	struct smb_rqst rqst = { .rq_iov = &rdata->iov[1], .rq_nvec = 1 };
++	unsigned int rreq_debug_id = rdata->rreq->debug_id;
++	unsigned int subreq_debug_index = rdata->subreq.debug_index;
+ 
+ 	if (rdata->got_bytes) {
+ 		rqst.rq_iter	  = rdata->subreq.io_iter;
+@@ -4590,10 +4597,16 @@ smb2_readv_callback(struct mid_q_entry *mid)
+ 		if (rdata->subreq.start < rdata->subreq.rreq->i_size)
+ 			rdata->result = 0;
+ 	}
++	trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, rdata->credits.value,
++			      server->credits, server->in_flight,
++			      0, cifs_trace_rw_credits_read_response_clear);
+ 	rdata->credits.value = 0;
+ 	INIT_WORK(&rdata->subreq.work, smb2_readv_worker);
+ 	queue_work(cifsiod_wq, &rdata->subreq.work);
+ 	release_mid(mid);
++	trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, 0,
++			      server->credits, server->in_flight,
++			      credits.value, cifs_trace_rw_credits_read_response_add);
+ 	add_credits(server, &credits, 0);
+ }
+ 
+@@ -4650,7 +4663,7 @@ smb2_async_readv(struct cifs_io_subrequest *rdata)
+ 				min_t(int, server->max_credits -
+ 						server->credits, credit_request));
+ 
+-		rc = adjust_credits(server, &rdata->credits, rdata->subreq.len);
++		rc = adjust_credits(server, rdata, cifs_trace_rw_credits_call_readv_adjust);
+ 		if (rc)
+ 			goto async_readv_out;
+ 
+@@ -4769,7 +4782,14 @@ smb2_writev_callback(struct mid_q_entry *mid)
+ 	struct cifs_tcon *tcon = tlink_tcon(wdata->req->cfile->tlink);
+ 	struct TCP_Server_Info *server = wdata->server;
+ 	struct smb2_write_rsp *rsp = (struct smb2_write_rsp *)mid->resp_buf;
+-	struct cifs_credits credits = { .value = 0, .instance = 0 };
++	struct cifs_credits credits = {
++		.value = 0,
++		.instance = 0,
++		.rreq_debug_id = wdata->rreq->debug_id,
++		.rreq_debug_index = wdata->subreq.debug_index,
++	};
++	unsigned int rreq_debug_id = wdata->rreq->debug_id;
++	unsigned int subreq_debug_index = wdata->subreq.debug_index;
+ 	ssize_t result = 0;
+ 	size_t written;
+ 
+@@ -4840,9 +4860,15 @@ smb2_writev_callback(struct mid_q_entry *mid)
+ 				      tcon->tid, tcon->ses->Suid,
+ 				      wdata->subreq.start, wdata->subreq.len);
+ 
++	trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, wdata->credits.value,
++			      server->credits, server->in_flight,
++			      0, cifs_trace_rw_credits_write_response_clear);
+ 	wdata->credits.value = 0;
+ 	cifs_write_subrequest_terminated(wdata, result ?: written, true);
+ 	release_mid(mid);
++	trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, 0,
++			      server->credits, server->in_flight,
++			      credits.value, cifs_trace_rw_credits_write_response_add);
+ 	add_credits(server, &credits, 0);
+ }
+ 
+@@ -4972,7 +4998,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
+ 				min_t(int, server->max_credits -
+ 						server->credits, credit_request));
+ 
+-		rc = adjust_credits(server, &wdata->credits, io_parms->length);
++		rc = adjust_credits(server, wdata, cifs_trace_rw_credits_call_writev_adjust);
+ 		if (rc)
+ 			goto async_writev_out;
+ 
+@@ -4997,6 +5023,12 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
+ 	cifs_small_buf_release(req);
+ out:
+ 	if (rc) {
++		trace_smb3_rw_credits(wdata->rreq->debug_id,
++				      wdata->subreq.debug_index,
++				      wdata->credits.value,
++				      server->credits, server->in_flight,
++				      -(int)wdata->credits.value,
++				      cifs_trace_rw_credits_write_response_clear);
+ 		add_credits_and_wake_if(wdata->server, &wdata->credits, 0);
+ 		cifs_write_subrequest_terminated(wdata, rc, true);
+ 	}
+diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h
+index 36d47ce596317..36d5295c2a6f9 100644
+--- a/fs/smb/client/trace.h
++++ b/fs/smb/client/trace.h
+@@ -20,6 +20,22 @@
+ /*
+  * Specify enums for tracing information.
+  */
++#define smb3_rw_credits_traces \
++	EM(cifs_trace_rw_credits_call_readv_adjust,	"rd-call-adj") \
++	EM(cifs_trace_rw_credits_call_writev_adjust,	"wr-call-adj") \
++	EM(cifs_trace_rw_credits_free_subreq,		"free-subreq") \
++	EM(cifs_trace_rw_credits_issue_read_adjust,	"rd-issu-adj") \
++	EM(cifs_trace_rw_credits_issue_write_adjust,	"wr-issu-adj") \
++	EM(cifs_trace_rw_credits_no_adjust_up,		"no-adj-up  ") \
++	EM(cifs_trace_rw_credits_old_session,		"old-session") \
++	EM(cifs_trace_rw_credits_read_response_add,	"rd-resp-add") \
++	EM(cifs_trace_rw_credits_read_response_clear,	"rd-resp-clr") \
++	EM(cifs_trace_rw_credits_read_submit,		"rd-submit  ") \
++	EM(cifs_trace_rw_credits_write_prepare,		"wr-prepare ") \
++	EM(cifs_trace_rw_credits_write_response_add,	"wr-resp-add") \
++	EM(cifs_trace_rw_credits_write_response_clear,	"wr-resp-clr") \
++	E_(cifs_trace_rw_credits_zero_in_flight,	"ZERO-IN-FLT")
++
+ #define smb3_tcon_ref_traces					      \
+ 	EM(netfs_trace_tcon_ref_dec_dfs_refer,		"DEC DfsRef") \
+ 	EM(netfs_trace_tcon_ref_free,			"FRE       ") \
+@@ -59,7 +75,8 @@
+ #define EM(a, b) a,
+ #define E_(a, b) a
+ 
+-enum smb3_tcon_ref_trace { smb3_tcon_ref_traces } __mode(byte);
++enum smb3_rw_credits_trace	{ smb3_rw_credits_traces } __mode(byte);
++enum smb3_tcon_ref_trace	{ smb3_tcon_ref_traces } __mode(byte);
+ 
+ #undef EM
+ #undef E_
+@@ -71,6 +88,7 @@ enum smb3_tcon_ref_trace { smb3_tcon_ref_traces } __mode(byte);
+ #define EM(a, b) TRACE_DEFINE_ENUM(a);
+ #define E_(a, b) TRACE_DEFINE_ENUM(a);
+ 
++smb3_rw_credits_traces;
+ smb3_tcon_ref_traces;
+ 
+ #undef EM
+@@ -1316,6 +1334,41 @@ TRACE_EVENT(smb3_tcon_ref,
+ 		      __entry->ref)
+ 	    );
+ 
++TRACE_EVENT(smb3_rw_credits,
++	    TP_PROTO(unsigned int rreq_debug_id,
++		     unsigned int subreq_debug_index,
++		     unsigned int subreq_credits,
++		     unsigned int server_credits,
++		     int server_in_flight,
++		     int credit_change,
++		     enum smb3_rw_credits_trace trace),
++	    TP_ARGS(rreq_debug_id, subreq_debug_index, subreq_credits,
++		    server_credits, server_in_flight, credit_change, trace),
++	    TP_STRUCT__entry(
++		    __field(unsigned int, rreq_debug_id)
++		    __field(unsigned int, subreq_debug_index)
++		    __field(unsigned int, subreq_credits)
++		    __field(unsigned int, server_credits)
++		    __field(int,	  in_flight)
++		    __field(int,	  credit_change)
++		    __field(enum smb3_rw_credits_trace, trace)
++			     ),
++	    TP_fast_assign(
++		    __entry->rreq_debug_id	= rreq_debug_id;
++		    __entry->subreq_debug_index	= subreq_debug_index;
++		    __entry->subreq_credits	= subreq_credits;
++		    __entry->server_credits	= server_credits;
++		    __entry->in_flight		= server_in_flight;
++		    __entry->credit_change	= credit_change;
++		    __entry->trace		= trace;
++			   ),
++	    TP_printk("R=%08x[%x] %s cred=%u chg=%d pool=%u ifl=%d",
++		      __entry->rreq_debug_id, __entry->subreq_debug_index,
++		      __print_symbolic(__entry->trace, smb3_rw_credits_traces),
++		      __entry->subreq_credits, __entry->credit_change,
++		      __entry->server_credits, __entry->in_flight)
++	    );
++
+ 
+ #undef EM
+ #undef E_
+diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
+index 012b9bd069952..adfe0d0587010 100644
+--- a/fs/smb/client/transport.c
++++ b/fs/smb/client/transport.c
+@@ -988,10 +988,10 @@ static void
+ cifs_compound_callback(struct mid_q_entry *mid)
+ {
+ 	struct TCP_Server_Info *server = mid->server;
+-	struct cifs_credits credits;
+-
+-	credits.value = server->ops->get_credits(mid);
+-	credits.instance = server->reconnect_instance;
++	struct cifs_credits credits = {
++		.value = server->ops->get_credits(mid),
++		.instance = server->reconnect_instance,
++	};
+ 
+ 	add_credits(server, &credits, mid->optype);
+ 
+diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c
+index 09e1e7771592f..7889df8112b4e 100644
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -165,11 +165,43 @@ void ksmbd_all_conn_set_status(u64 sess_id, u32 status)
+ 	up_read(&conn_list_lock);
+ }
+ 
+-void ksmbd_conn_wait_idle(struct ksmbd_conn *conn, u64 sess_id)
++void ksmbd_conn_wait_idle(struct ksmbd_conn *conn)
+ {
+ 	wait_event(conn->req_running_q, atomic_read(&conn->req_running) < 2);
+ }
+ 
++int ksmbd_conn_wait_idle_sess_id(struct ksmbd_conn *curr_conn, u64 sess_id)
++{
++	struct ksmbd_conn *conn;
++	int rc, retry_count = 0, max_timeout = 120;
++	int rcount = 1;
++
++retry_idle:
++	if (retry_count >= max_timeout)
++		return -EIO;
++
++	down_read(&conn_list_lock);
++	list_for_each_entry(conn, &conn_list, conns_list) {
++		if (conn->binding || xa_load(&conn->sessions, sess_id)) {
++			if (conn == curr_conn)
++				rcount = 2;
++			if (atomic_read(&conn->req_running) >= rcount) {
++				rc = wait_event_timeout(conn->req_running_q,
++					atomic_read(&conn->req_running) < rcount,
++					HZ);
++				if (!rc) {
++					up_read(&conn_list_lock);
++					retry_count++;
++					goto retry_idle;
++				}
++			}
++		}
++	}
++	up_read(&conn_list_lock);
++
++	return 0;
++}
++
+ int ksmbd_conn_write(struct ksmbd_work *work)
+ {
+ 	struct ksmbd_conn *conn = work->conn;
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index 0e04cf8b1d896..b93e5437793e0 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -145,7 +145,8 @@ extern struct list_head conn_list;
+ extern struct rw_semaphore conn_list_lock;
+ 
+ bool ksmbd_conn_alive(struct ksmbd_conn *conn);
+-void ksmbd_conn_wait_idle(struct ksmbd_conn *conn, u64 sess_id);
++void ksmbd_conn_wait_idle(struct ksmbd_conn *conn);
++int ksmbd_conn_wait_idle_sess_id(struct ksmbd_conn *curr_conn, u64 sess_id);
+ struct ksmbd_conn *ksmbd_conn_alloc(void);
+ void ksmbd_conn_free(struct ksmbd_conn *conn);
+ bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c);
+diff --git a/fs/smb/server/mgmt/user_session.c b/fs/smb/server/mgmt/user_session.c
+index aec0a7a124052..dac5f984f1754 100644
+--- a/fs/smb/server/mgmt/user_session.c
++++ b/fs/smb/server/mgmt/user_session.c
+@@ -310,6 +310,7 @@ void destroy_previous_session(struct ksmbd_conn *conn,
+ {
+ 	struct ksmbd_session *prev_sess;
+ 	struct ksmbd_user *prev_user;
++	int err;
+ 
+ 	down_write(&sessions_table_lock);
+ 	down_write(&conn->session_lock);
+@@ -324,8 +325,15 @@ void destroy_previous_session(struct ksmbd_conn *conn,
+ 	    memcmp(user->passkey, prev_user->passkey, user->passkey_sz))
+ 		goto out;
+ 
++	ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_RECONNECT);
++	err = ksmbd_conn_wait_idle_sess_id(conn, id);
++	if (err) {
++		ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_NEGOTIATE);
++		goto out;
++	}
+ 	ksmbd_destroy_file_table(&prev_sess->file_table);
+ 	prev_sess->state = SMB2_SESSION_EXPIRED;
++	ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_NEGOTIATE);
+ out:
+ 	up_write(&conn->session_lock);
+ 	up_write(&sessions_table_lock);
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 840c71c66b30b..8cfdf0d1a186e 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -2210,7 +2210,7 @@ int smb2_session_logoff(struct ksmbd_work *work)
+ 	ksmbd_conn_unlock(conn);
+ 
+ 	ksmbd_close_session_fds(work);
+-	ksmbd_conn_wait_idle(conn, sess_id);
++	ksmbd_conn_wait_idle(conn);
+ 
+ 	/*
+ 	 * Re-lookup session to validate if session is deleted
+@@ -4406,7 +4406,8 @@ int smb2_query_dir(struct ksmbd_work *work)
+ 		rsp->OutputBufferLength = cpu_to_le32(0);
+ 		rsp->Buffer[0] = 0;
+ 		rc = ksmbd_iov_pin_rsp(work, (void *)rsp,
+-				       sizeof(struct smb2_query_directory_rsp));
++				       offsetof(struct smb2_query_directory_rsp, Buffer)
++				       + 1);
+ 		if (rc)
+ 			goto err_out;
+ 	} else {
+diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
+index 80dc36f9d5274..9f1c1d225e32c 100644
+--- a/include/acpi/acpixf.h
++++ b/include/acpi/acpixf.h
+@@ -660,12 +660,9 @@ ACPI_EXTERNAL_RETURN_STATUS(acpi_status
+ 			     void *context))
+ ACPI_EXTERNAL_RETURN_STATUS(acpi_status
+ 			    acpi_execute_reg_methods(acpi_handle device,
++						     u32 nax_depth,
+ 						     acpi_adr_space_type
+ 						     space_id))
+-ACPI_EXTERNAL_RETURN_STATUS(acpi_status
+-			    acpi_execute_orphan_reg_method(acpi_handle device,
+-							   acpi_adr_space_type
+-							   space_id))
+ ACPI_EXTERNAL_RETURN_STATUS(acpi_status
+ 			    acpi_remove_address_space_handler(acpi_handle
+ 							      device,
+diff --git a/include/acpi/video.h b/include/acpi/video.h
+index 3d538d4178abb..044c463138df8 100644
+--- a/include/acpi/video.h
++++ b/include/acpi/video.h
+@@ -50,6 +50,7 @@ enum acpi_backlight_type {
+ 	acpi_backlight_native,
+ 	acpi_backlight_nvidia_wmi_ec,
+ 	acpi_backlight_apple_gmux,
++	acpi_backlight_dell_uart,
+ };
+ 
+ #if IS_ENABLED(CONFIG_ACPI_VIDEO)
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 70bf1004076b2..f00a8e18f389f 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -451,30 +451,11 @@
+ #endif
+ #endif
+ 
+-/*
+- * Some symbol definitions will not exist yet during the first pass of the
+- * link, but are guaranteed to exist in the final link. Provide preliminary
+- * definitions that will be superseded in the final link to avoid having to
+- * rely on weak external linkage, which requires a GOT when used in position
+- * independent code.
+- */
+-#define PRELIMINARY_SYMBOL_DEFINITIONS					\
+-	PROVIDE(kallsyms_addresses = .);				\
+-	PROVIDE(kallsyms_offsets = .);					\
+-	PROVIDE(kallsyms_names = .);					\
+-	PROVIDE(kallsyms_num_syms = .);					\
+-	PROVIDE(kallsyms_relative_base = .);				\
+-	PROVIDE(kallsyms_token_table = .);				\
+-	PROVIDE(kallsyms_token_index = .);				\
+-	PROVIDE(kallsyms_markers = .);					\
+-	PROVIDE(kallsyms_seqs_of_names = .);
+-
+ /*
+  * Read only Data
+  */
+ #define RO_DATA(align)							\
+ 	. = ALIGN((align));						\
+-	PRELIMINARY_SYMBOL_DEFINITIONS					\
+ 	.rodata           : AT(ADDR(.rodata) - LOAD_OFFSET) {		\
+ 		__start_rodata = .;					\
+ 		*(.rodata) *(.rodata.*)					\
+diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
+index 8c4768c44a01b..d3b66d77df7a3 100644
+--- a/include/linux/bitmap.h
++++ b/include/linux/bitmap.h
+@@ -270,6 +270,18 @@ static inline void bitmap_copy_clear_tail(unsigned long *dst,
+ 		dst[nbits / BITS_PER_LONG] &= BITMAP_LAST_WORD_MASK(nbits);
+ }
+ 
++static inline void bitmap_copy_and_extend(unsigned long *to,
++					  const unsigned long *from,
++					  unsigned int count, unsigned int size)
++{
++	unsigned int copy = BITS_TO_LONGS(count);
++
++	memcpy(to, from, copy * sizeof(long));
++	if (count % BITS_PER_LONG)
++		to[copy - 1] &= BITMAP_LAST_WORD_MASK(count);
++	memset(to + copy, 0, bitmap_size(size) - copy * sizeof(long));
++}
++
+ /*
+  * On 32-bit systems bitmaps are represented as u32 arrays internally. On LE64
+  * machines the order of hi and lo parts of numbers match the bitmap structure.
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index ff2a6cdb1fa3f..5db4a3f354804 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -846,8 +846,8 @@ static inline u32 type_flag(u32 type)
+ /* only use after check_attach_btf_id() */
+ static inline enum bpf_prog_type resolve_prog_type(const struct bpf_prog *prog)
+ {
+-	return (prog->type == BPF_PROG_TYPE_EXT && prog->aux->dst_prog) ?
+-		prog->aux->dst_prog->type : prog->type;
++	return (prog->type == BPF_PROG_TYPE_EXT && prog->aux->saved_dst_prog_type) ?
++		prog->aux->saved_dst_prog_type : prog->type;
+ }
+ 
+ static inline bool bpf_prog_check_recur(const struct bpf_prog *prog)
+diff --git a/include/linux/dsa/ocelot.h b/include/linux/dsa/ocelot.h
+index dca2969015d80..6fbfbde68a37c 100644
+--- a/include/linux/dsa/ocelot.h
++++ b/include/linux/dsa/ocelot.h
+@@ -5,6 +5,8 @@
+ #ifndef _NET_DSA_TAG_OCELOT_H
+ #define _NET_DSA_TAG_OCELOT_H
+ 
++#include <linux/if_bridge.h>
++#include <linux/if_vlan.h>
+ #include <linux/kthread.h>
+ #include <linux/packing.h>
+ #include <linux/skbuff.h>
+@@ -273,4 +275,49 @@ static inline u32 ocelot_ptp_rew_op(struct sk_buff *skb)
+ 	return rew_op;
+ }
+ 
++/**
++ * ocelot_xmit_get_vlan_info: Determine VLAN_TCI and TAG_TYPE for injected frame
++ * @skb: Pointer to socket buffer
++ * @br: Pointer to bridge device that the port is under, if any
++ * @vlan_tci:
++ * @tag_type:
++ *
++ * If the port is under a VLAN-aware bridge, remove the VLAN header from the
++ * payload and move it into the DSA tag, which will make the switch classify
++ * the packet to the bridge VLAN. Otherwise, leave the classified VLAN at zero,
++ * which is the pvid of standalone ports (OCELOT_STANDALONE_PVID), although not
++ * of VLAN-unaware bridge ports (that would be ocelot_vlan_unaware_pvid()).
++ * Anyway, VID 0 is fine because it is stripped on egress for these port modes,
++ * and source address learning is not performed for packets injected from the
++ * CPU anyway, so it doesn't matter that the VID is "wrong".
++ */
++static inline void ocelot_xmit_get_vlan_info(struct sk_buff *skb,
++					     struct net_device *br,
++					     u64 *vlan_tci, u64 *tag_type)
++{
++	struct vlan_ethhdr *hdr;
++	u16 proto, tci;
++
++	if (!br || !br_vlan_enabled(br)) {
++		*vlan_tci = 0;
++		*tag_type = IFH_TAG_TYPE_C;
++		return;
++	}
++
++	hdr = (struct vlan_ethhdr *)skb_mac_header(skb);
++	br_vlan_get_proto(br, &proto);
++
++	if (ntohs(hdr->h_vlan_proto) == proto) {
++		vlan_remove_tag(skb, &tci);
++		*vlan_tci = tci;
++	} else {
++		rcu_read_lock();
++		br_vlan_get_pvid_rcu(br, &tci);
++		rcu_read_unlock();
++		*vlan_tci = tci;
++	}
++
++	*tag_type = (proto != ETH_P_8021Q) ? IFH_TAG_TYPE_S : IFH_TAG_TYPE_C;
++}
++
+ #endif
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 93ac1a859d699..36b9e87439221 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2370,6 +2370,9 @@ static inline void kiocb_clone(struct kiocb *kiocb, struct kiocb *kiocb_src,
+  *
+  * I_PINNING_FSCACHE_WB	Inode is pinning an fscache object for writeback.
+  *
++ * I_LRU_ISOLATING	Inode is pinned being isolated from LRU without holding
++ *			i_count.
++ *
+  * Q: What is the difference between I_WILL_FREE and I_FREEING?
+  */
+ #define I_DIRTY_SYNC		(1 << 0)
+@@ -2393,6 +2396,8 @@ static inline void kiocb_clone(struct kiocb *kiocb, struct kiocb *kiocb_src,
+ #define I_DONTCACHE		(1 << 16)
+ #define I_SYNC_QUEUED		(1 << 17)
+ #define I_PINNING_NETFS_WB	(1 << 18)
++#define __I_LRU_ISOLATING	19
++#define I_LRU_ISOLATING		(1 << __I_LRU_ISOLATING)
+ 
+ #define I_DIRTY_INODE (I_DIRTY_SYNC | I_DIRTY_DATASYNC)
+ #define I_DIRTY (I_DIRTY_INODE | I_DIRTY_PAGES)
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 8120d1976188c..18cf440866d9f 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -967,10 +967,37 @@ static inline bool htlb_allow_alloc_fallback(int reason)
+ static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
+ 					   struct mm_struct *mm, pte_t *pte)
+ {
+-	if (huge_page_size(h) == PMD_SIZE)
++	const unsigned long size = huge_page_size(h);
++
++	VM_WARN_ON(size == PAGE_SIZE);
++
++	/*
++	 * hugetlb must use the exact same PT locks as core-mm page table
++	 * walkers would. When modifying a PTE table, hugetlb must take the
++	 * PTE PT lock, when modifying a PMD table, hugetlb must take the PMD
++	 * PT lock etc.
++	 *
++	 * The expectation is that any hugetlb folio smaller than a PMD is
++	 * always mapped into a single PTE table and that any hugetlb folio
++	 * smaller than a PUD (but at least as big as a PMD) is always mapped
++	 * into a single PMD table.
++	 *
++	 * If that does not hold for an architecture, then that architecture
++	 * must disable split PT locks such that all *_lockptr() functions
++	 * will give us the same result: the per-MM PT lock.
++	 *
++	 * Note that with e.g., CONFIG_PGTABLE_LEVELS=2 where
++	 * PGDIR_SIZE==P4D_SIZE==PUD_SIZE==PMD_SIZE, we'd use pud_lockptr()
++	 * and core-mm would use pmd_lockptr(). However, in such configurations
++	 * split PMD locks are disabled -- they don't make sense on a single
++	 * PGDIR page table -- and the end result is the same.
++	 */
++	if (size >= PUD_SIZE)
++		return pud_lockptr(mm, (pud_t *) pte);
++	else if (size >= PMD_SIZE || IS_ENABLED(CONFIG_HIGHPTE))
+ 		return pmd_lockptr(mm, (pmd_t *) pte);
+-	VM_BUG_ON(huge_page_size(h) == PAGE_SIZE);
+-	return &mm->page_table_lock;
++	/* pte_alloc_huge() only applies with !CONFIG_HIGHPTE */
++	return ptep_lockptr(mm, pte);
+ }
+ 
+ #ifndef hugepages_supported
+diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
+index 7abdc09271245..b18e998c8b887 100644
+--- a/include/linux/io_uring_types.h
++++ b/include/linux/io_uring_types.h
+@@ -410,7 +410,7 @@ struct io_ring_ctx {
+ 	spinlock_t		napi_lock;	/* napi_list lock */
+ 
+ 	/* napi busy poll default timeout */
+-	unsigned int		napi_busy_poll_to;
++	ktime_t			napi_busy_poll_dt;
+ 	bool			napi_prefer_busy_poll;
+ 	bool			napi_enabled;
+ 
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index b58bad248eefd..f32177152921e 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2960,6 +2960,13 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
+ 	return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
+ }
+ 
++static inline spinlock_t *ptep_lockptr(struct mm_struct *mm, pte_t *pte)
++{
++	BUILD_BUG_ON(IS_ENABLED(CONFIG_HIGHPTE));
++	BUILD_BUG_ON(MAX_PTRS_PER_PTE * sizeof(pte_t) > PAGE_SIZE);
++	return ptlock_ptr(virt_to_ptdesc(pte));
++}
++
+ static inline bool ptlock_init(struct ptdesc *ptdesc)
+ {
+ 	/*
+@@ -2984,6 +2991,10 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
+ {
+ 	return &mm->page_table_lock;
+ }
++static inline spinlock_t *ptep_lockptr(struct mm_struct *mm, pte_t *pte)
++{
++	return &mm->page_table_lock;
++}
+ static inline void ptlock_cache_init(void) {}
+ static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
+ static inline void ptlock_free(struct ptdesc *ptdesc) {}
+diff --git a/include/linux/panic.h b/include/linux/panic.h
+index 6717b15e798c3..556b4e2ad9aa5 100644
+--- a/include/linux/panic.h
++++ b/include/linux/panic.h
+@@ -16,6 +16,7 @@ extern void oops_enter(void);
+ extern void oops_exit(void);
+ extern bool oops_may_print(void);
+ 
++extern bool panic_triggering_all_cpu_backtrace;
+ extern int panic_timeout;
+ extern unsigned long panic_print;
+ extern int panic_on_oops;
+diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
+index 18cd0c0c73d93..207f0c83c8e97 100644
+--- a/include/linux/pgalloc_tag.h
++++ b/include/linux/pgalloc_tag.h
+@@ -43,6 +43,18 @@ static inline void put_page_tag_ref(union codetag_ref *ref)
+ 	page_ext_put(page_ext_from_codetag_ref(ref));
+ }
+ 
++static inline void clear_page_tag_ref(struct page *page)
++{
++	if (mem_alloc_profiling_enabled()) {
++		union codetag_ref *ref = get_page_tag_ref(page);
++
++		if (ref) {
++			set_codetag_empty(ref);
++			put_page_tag_ref(ref);
++		}
++	}
++}
++
+ static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
+ 				   unsigned int nr)
+ {
+@@ -126,6 +138,7 @@ static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr)
+ 
+ static inline union codetag_ref *get_page_tag_ref(struct page *page) { return NULL; }
+ static inline void put_page_tag_ref(union codetag_ref *ref) {}
++static inline void clear_page_tag_ref(struct page *page) {}
+ static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
+ 				   unsigned int nr) {}
+ static inline void pgalloc_tag_sub(struct page *page, unsigned int nr) {}
+diff --git a/include/linux/thermal.h b/include/linux/thermal.h
+index f1155c0439c4b..13f88317b81bf 100644
+--- a/include/linux/thermal.h
++++ b/include/linux/thermal.h
+@@ -55,6 +55,7 @@ enum thermal_notify_event {
+ 	THERMAL_TZ_BIND_CDEV, /* Cooling dev is bind to the thermal zone */
+ 	THERMAL_TZ_UNBIND_CDEV, /* Cooling dev is unbind from the thermal zone */
+ 	THERMAL_INSTANCE_WEIGHT_CHANGED, /* Thermal instance weight changed */
++	THERMAL_TZ_RESUME, /* Thermal zone is resuming after system sleep */
+ };
+ 
+ /**
+diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
+index 535701efc1e5c..24d970f7a4fa2 100644
+--- a/include/net/af_vsock.h
++++ b/include/net/af_vsock.h
+@@ -230,8 +230,12 @@ struct vsock_tap {
+ int vsock_add_tap(struct vsock_tap *vt);
+ int vsock_remove_tap(struct vsock_tap *vt);
+ void vsock_deliver_tap(struct sk_buff *build_skb(void *opaque), void *opaque);
++int __vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
++				int flags);
+ int vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 			      int flags);
++int __vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
++			  size_t len, int flags);
+ int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
+ 			size_t len, int flags);
+ 
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index e372a88e8c3f6..d1d073089f384 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -206,14 +206,17 @@ enum {
+ 	 */
+ 	HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+ 
+-	/* When this quirk is set, the controller has validated that
+-	 * LE states reported through the HCI_LE_READ_SUPPORTED_STATES are
+-	 * valid.  This mechanism is necessary as many controllers have
+-	 * been seen has having trouble initiating a connectable
+-	 * advertisement despite the state combination being reported as
+-	 * supported.
++	/* When this quirk is set, the LE states reported through the
++	 * HCI_LE_READ_SUPPORTED_STATES are invalid/broken.
++	 *
++	 * This mechanism is necessary as many controllers have been seen has
++	 * having trouble initiating a connectable advertisement despite the
++	 * state combination being reported as supported.
++	 *
++	 * This quirk can be set before hci_register_dev is called or
++	 * during the hdev->setup vendor callback.
+ 	 */
+-	HCI_QUIRK_VALID_LE_STATES,
++	HCI_QUIRK_BROKEN_LE_STATES,
+ 
+ 	/* When this quirk is set, then erroneous data reporting
+ 	 * is ignored. This is mainly due to the fact that the HCI
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index b15f51ae3bfd9..c97ff64c9189f 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -826,7 +826,7 @@ extern struct mutex hci_cb_list_lock;
+ 	} while (0)
+ 
+ #define hci_dev_le_state_simultaneous(hdev) \
+-	(test_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks) && \
++	(!test_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks) && \
+ 	 (hdev->le_states[4] & 0x08) &&	/* Central */ \
+ 	 (hdev->le_states[4] & 0x40) &&	/* Peripheral */ \
+ 	 (hdev->le_states[3] & 0x10))	/* Simultaneous */
+diff --git a/include/net/kcm.h b/include/net/kcm.h
+index 90279e5e09a5c..441e993be634c 100644
+--- a/include/net/kcm.h
++++ b/include/net/kcm.h
+@@ -70,6 +70,7 @@ struct kcm_sock {
+ 	struct work_struct tx_work;
+ 	struct list_head wait_psock_list;
+ 	struct sk_buff *seq_skb;
++	struct mutex tx_mutex;
+ 	u32 tx_stopped : 1;
+ 
+ 	/* Don't use bit fields here, these are set under different locks */
+diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
+index f207a6e1042ae..9bdb1fdc7c51b 100644
+--- a/include/net/mana/mana.h
++++ b/include/net/mana/mana.h
+@@ -274,6 +274,7 @@ struct mana_cq {
+ 	/* NAPI data */
+ 	struct napi_struct napi;
+ 	int work_done;
++	int work_done_since_doorbell;
+ 	int budget;
+ };
+ 
+diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
+index 45c40d200154d..8ecfb94049db5 100644
+--- a/include/scsi/scsi_cmnd.h
++++ b/include/scsi/scsi_cmnd.h
+@@ -234,7 +234,7 @@ static inline sector_t scsi_get_lba(struct scsi_cmnd *scmd)
+ 
+ static inline unsigned int scsi_logical_block_count(struct scsi_cmnd *scmd)
+ {
+-	unsigned int shift = ilog2(scmd->device->sector_size) - SECTOR_SHIFT;
++	unsigned int shift = ilog2(scmd->device->sector_size);
+ 
+ 	return blk_rq_bytes(scsi_cmd_to_rq(scmd)) >> shift;
+ }
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index 1e1b40f4e664e..846132ca5503d 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -813,6 +813,9 @@ struct ocelot {
+ 	const u32 *const		*map;
+ 	struct list_head		stats_regions;
+ 
++	spinlock_t			inj_lock;
++	spinlock_t			xtr_lock;
++
+ 	u32				pool_size[OCELOT_SB_NUM][OCELOT_SB_POOL_NUM];
+ 	int				packet_buffer_size;
+ 	int				num_frame_refs;
+@@ -966,10 +969,17 @@ void __ocelot_target_write_ix(struct ocelot *ocelot, enum ocelot_target target,
+ 			      u32 val, u32 reg, u32 offset);
+ 
+ /* Packet I/O */
++void ocelot_lock_inj_grp(struct ocelot *ocelot, int grp);
++void ocelot_unlock_inj_grp(struct ocelot *ocelot, int grp);
++void ocelot_lock_xtr_grp(struct ocelot *ocelot, int grp);
++void ocelot_unlock_xtr_grp(struct ocelot *ocelot, int grp);
++void ocelot_lock_xtr_grp_bh(struct ocelot *ocelot, int grp);
++void ocelot_unlock_xtr_grp_bh(struct ocelot *ocelot, int grp);
+ bool ocelot_can_inject(struct ocelot *ocelot, int grp);
+ void ocelot_port_inject_frame(struct ocelot *ocelot, int port, int grp,
+ 			      u32 rew_op, struct sk_buff *skb);
+-void ocelot_ifh_port_set(void *ifh, int port, u32 rew_op, u32 vlan_tag);
++void ocelot_ifh_set_basic(void *ifh, struct ocelot *ocelot, int port,
++			  u32 rew_op, struct sk_buff *skb);
+ int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp, struct sk_buff **skb);
+ void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp);
+ void ocelot_ptp_rx_timestamp(struct ocelot *ocelot, struct sk_buff *skb,
+diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
+index da23484268dfc..24ec3434d32ee 100644
+--- a/include/trace/events/netfs.h
++++ b/include/trace/events/netfs.h
+@@ -145,6 +145,7 @@
+ 	EM(netfs_folio_trace_clear_g,		"clear-g")	\
+ 	EM(netfs_folio_trace_clear_s,		"clear-s")	\
+ 	EM(netfs_folio_trace_copy_to_cache,	"mark-copy")	\
++	EM(netfs_folio_trace_end_copy,		"end-copy")	\
+ 	EM(netfs_folio_trace_filled_gaps,	"filled-gaps")	\
+ 	EM(netfs_folio_trace_kill,		"kill")		\
+ 	EM(netfs_folio_trace_kill_cc,		"kill-cc")	\
+diff --git a/include/uapi/misc/fastrpc.h b/include/uapi/misc/fastrpc.h
+index 91583690bddc5..f33d914d8f469 100644
+--- a/include/uapi/misc/fastrpc.h
++++ b/include/uapi/misc/fastrpc.h
+@@ -8,14 +8,11 @@
+ #define FASTRPC_IOCTL_ALLOC_DMA_BUFF	_IOWR('R', 1, struct fastrpc_alloc_dma_buf)
+ #define FASTRPC_IOCTL_FREE_DMA_BUFF	_IOWR('R', 2, __u32)
+ #define FASTRPC_IOCTL_INVOKE		_IOWR('R', 3, struct fastrpc_invoke)
+-/* This ioctl is only supported with secure device nodes */
+ #define FASTRPC_IOCTL_INIT_ATTACH	_IO('R', 4)
+ #define FASTRPC_IOCTL_INIT_CREATE	_IOWR('R', 5, struct fastrpc_init_create)
+ #define FASTRPC_IOCTL_MMAP		_IOWR('R', 6, struct fastrpc_req_mmap)
+ #define FASTRPC_IOCTL_MUNMAP		_IOWR('R', 7, struct fastrpc_req_munmap)
+-/* This ioctl is only supported with secure device nodes */
+ #define FASTRPC_IOCTL_INIT_ATTACH_SNS	_IO('R', 8)
+-/* This ioctl is only supported with secure device nodes */
+ #define FASTRPC_IOCTL_INIT_CREATE_STATIC _IOWR('R', 9, struct fastrpc_init_create_static)
+ #define FASTRPC_IOCTL_MEM_MAP		_IOWR('R', 10, struct fastrpc_mem_map)
+ #define FASTRPC_IOCTL_MEM_UNMAP		_IOWR('R', 11, struct fastrpc_mem_unmap)
+diff --git a/init/Kconfig b/init/Kconfig
+index d8a971b804d32..9684e5d2b81c6 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -1789,24 +1789,6 @@ config KALLSYMS_ABSOLUTE_PERCPU
+ 	depends on KALLSYMS
+ 	default X86_64 && SMP
+ 
+-config KALLSYMS_BASE_RELATIVE
+-	bool
+-	depends on KALLSYMS
+-	default y
+-	help
+-	  Instead of emitting them as absolute values in the native word size,
+-	  emit the symbol references in the kallsyms table as 32-bit entries,
+-	  each containing a relative value in the range [base, base + U32_MAX]
+-	  or, when KALLSYMS_ABSOLUTE_PERCPU is in effect, each containing either
+-	  an absolute value in the range [0, S32_MAX] or a relative value in the
+-	  range [base, base + S32_MAX], where base is the lowest relative symbol
+-	  address encountered in the image.
+-
+-	  On 64-bit builds, this reduces the size of the address table by 50%,
+-	  but more importantly, it results in entries whose values are build
+-	  time constants, and no relocation pass is required at runtime to fix
+-	  up the entries based on the runtime load address of the kernel.
+-
+ # end of the "standard kernel features (expert users)" menu
+ 
+ config ARCH_HAS_MEMBARRIER_CALLBACKS
+@@ -1924,12 +1906,15 @@ config RUST
+ config RUSTC_VERSION_TEXT
+ 	string
+ 	depends on RUST
+-	default $(shell,command -v $(RUSTC) >/dev/null 2>&1 && $(RUSTC) --version || echo n)
++	default "$(shell,$(RUSTC) --version 2>/dev/null)"
+ 
+ config BINDGEN_VERSION_TEXT
+ 	string
+ 	depends on RUST
+-	default $(shell,command -v $(BINDGEN) >/dev/null 2>&1 && $(BINDGEN) --version || echo n)
++	# The dummy parameter `workaround-for-0.69.0` is required to support 0.69.0
++	# (https://github.com/rust-lang/rust-bindgen/pull/2678). It can be removed when
++	# the minimum version is upgraded past that (0.69.1 already fixed the issue).
++	default "$(shell,$(BINDGEN) --version workaround-for-0.69.0 2>/dev/null)"
+ 
+ #
+ # Place an empty function call at each tracepoint site. Can be
+diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
+index 726e6367af4d3..af46d03d58847 100644
+--- a/io_uring/io_uring.h
++++ b/io_uring/io_uring.h
+@@ -43,7 +43,7 @@ struct io_wait_queue {
+ 	ktime_t timeout;
+ 
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+-	unsigned int napi_busy_poll_to;
++	ktime_t napi_busy_poll_dt;
+ 	bool napi_prefer_busy_poll;
+ #endif
+ };
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index c95dc1736dd93..1af2bd56af44a 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -218,10 +218,13 @@ static int io_ring_buffers_peek(struct io_kiocb *req, struct buf_sel_arg *arg,
+ 
+ 	buf = io_ring_head_to_buf(br, head, bl->mask);
+ 	if (arg->max_len) {
+-		int needed;
++		u32 len = READ_ONCE(buf->len);
++		size_t needed;
+ 
+-		needed = (arg->max_len + buf->len - 1) / buf->len;
+-		needed = min(needed, PEEK_MAX_IMPORT);
++		if (unlikely(!len))
++			return -ENOBUFS;
++		needed = (arg->max_len + len - 1) / len;
++		needed = min_not_zero(needed, (size_t) PEEK_MAX_IMPORT);
+ 		if (nr_avail > needed)
+ 			nr_avail = needed;
+ 	}
+diff --git a/io_uring/napi.c b/io_uring/napi.c
+index 080d10e0e0afd..ab5d68d4440c4 100644
+--- a/io_uring/napi.c
++++ b/io_uring/napi.c
+@@ -33,6 +33,12 @@ static struct io_napi_entry *io_napi_hash_find(struct hlist_head *hash_list,
+ 	return NULL;
+ }
+ 
++static inline ktime_t net_to_ktime(unsigned long t)
++{
++	/* napi approximating usecs, reverse busy_loop_current_time */
++	return ns_to_ktime(t << 10);
++}
++
+ void __io_napi_add(struct io_ring_ctx *ctx, struct socket *sock)
+ {
+ 	struct hlist_head *hash_list;
+@@ -102,14 +108,14 @@ static inline void io_napi_remove_stale(struct io_ring_ctx *ctx, bool is_stale)
+ 		__io_napi_remove_stale(ctx);
+ }
+ 
+-static inline bool io_napi_busy_loop_timeout(unsigned long start_time,
+-					     unsigned long bp_usec)
++static inline bool io_napi_busy_loop_timeout(ktime_t start_time,
++					     ktime_t bp)
+ {
+-	if (bp_usec) {
+-		unsigned long end_time = start_time + bp_usec;
+-		unsigned long now = busy_loop_current_time();
++	if (bp) {
++		ktime_t end_time = ktime_add(start_time, bp);
++		ktime_t now = net_to_ktime(busy_loop_current_time());
+ 
+-		return time_after(now, end_time);
++		return ktime_after(now, end_time);
+ 	}
+ 
+ 	return true;
+@@ -124,7 +130,8 @@ static bool io_napi_busy_loop_should_end(void *data,
+ 		return true;
+ 	if (io_should_wake(iowq) || io_has_work(iowq->ctx))
+ 		return true;
+-	if (io_napi_busy_loop_timeout(start_time, iowq->napi_busy_poll_to))
++	if (io_napi_busy_loop_timeout(net_to_ktime(start_time),
++				      iowq->napi_busy_poll_dt))
+ 		return true;
+ 
+ 	return false;
+@@ -181,10 +188,12 @@ static void io_napi_blocking_busy_loop(struct io_ring_ctx *ctx,
+  */
+ void io_napi_init(struct io_ring_ctx *ctx)
+ {
++	u64 sys_dt = READ_ONCE(sysctl_net_busy_poll) * NSEC_PER_USEC;
++
+ 	INIT_LIST_HEAD(&ctx->napi_list);
+ 	spin_lock_init(&ctx->napi_lock);
+ 	ctx->napi_prefer_busy_poll = false;
+-	ctx->napi_busy_poll_to = READ_ONCE(sysctl_net_busy_poll);
++	ctx->napi_busy_poll_dt = ns_to_ktime(sys_dt);
+ }
+ 
+ /*
+@@ -217,7 +226,7 @@ void io_napi_free(struct io_ring_ctx *ctx)
+ int io_register_napi(struct io_ring_ctx *ctx, void __user *arg)
+ {
+ 	const struct io_uring_napi curr = {
+-		.busy_poll_to 	  = ctx->napi_busy_poll_to,
++		.busy_poll_to 	  = ktime_to_us(ctx->napi_busy_poll_dt),
+ 		.prefer_busy_poll = ctx->napi_prefer_busy_poll
+ 	};
+ 	struct io_uring_napi napi;
+@@ -232,7 +241,7 @@ int io_register_napi(struct io_ring_ctx *ctx, void __user *arg)
+ 	if (copy_to_user(arg, &curr, sizeof(curr)))
+ 		return -EFAULT;
+ 
+-	WRITE_ONCE(ctx->napi_busy_poll_to, napi.busy_poll_to);
++	WRITE_ONCE(ctx->napi_busy_poll_dt, napi.busy_poll_to * NSEC_PER_USEC);
+ 	WRITE_ONCE(ctx->napi_prefer_busy_poll, !!napi.prefer_busy_poll);
+ 	WRITE_ONCE(ctx->napi_enabled, true);
+ 	return 0;
+@@ -249,14 +258,14 @@ int io_register_napi(struct io_ring_ctx *ctx, void __user *arg)
+ int io_unregister_napi(struct io_ring_ctx *ctx, void __user *arg)
+ {
+ 	const struct io_uring_napi curr = {
+-		.busy_poll_to 	  = ctx->napi_busy_poll_to,
++		.busy_poll_to 	  = ktime_to_us(ctx->napi_busy_poll_dt),
+ 		.prefer_busy_poll = ctx->napi_prefer_busy_poll
+ 	};
+ 
+ 	if (arg && copy_to_user(arg, &curr, sizeof(curr)))
+ 		return -EFAULT;
+ 
+-	WRITE_ONCE(ctx->napi_busy_poll_to, 0);
++	WRITE_ONCE(ctx->napi_busy_poll_dt, 0);
+ 	WRITE_ONCE(ctx->napi_prefer_busy_poll, false);
+ 	WRITE_ONCE(ctx->napi_enabled, false);
+ 	return 0;
+@@ -275,23 +284,20 @@ int io_unregister_napi(struct io_ring_ctx *ctx, void __user *arg)
+ void __io_napi_adjust_timeout(struct io_ring_ctx *ctx, struct io_wait_queue *iowq,
+ 			      struct timespec64 *ts)
+ {
+-	unsigned int poll_to = READ_ONCE(ctx->napi_busy_poll_to);
++	ktime_t poll_dt = READ_ONCE(ctx->napi_busy_poll_dt);
+ 
+ 	if (ts) {
+ 		struct timespec64 poll_to_ts;
+ 
+-		poll_to_ts = ns_to_timespec64(1000 * (s64)poll_to);
++		poll_to_ts = ns_to_timespec64(ktime_to_ns(poll_dt));
+ 		if (timespec64_compare(ts, &poll_to_ts) < 0) {
+ 			s64 poll_to_ns = timespec64_to_ns(ts);
+-			if (poll_to_ns > 0) {
+-				u64 val = poll_to_ns + 999;
+-				do_div(val, (s64) 1000);
+-				poll_to = val;
+-			}
++			if (poll_to_ns > 0)
++				poll_dt = ns_to_ktime(poll_to_ns);
+ 		}
+ 	}
+ 
+-	iowq->napi_busy_poll_to = poll_to;
++	iowq->napi_busy_poll_dt = poll_dt;
+ }
+ 
+ /*
+@@ -305,7 +311,7 @@ void __io_napi_busy_loop(struct io_ring_ctx *ctx, struct io_wait_queue *iowq)
+ {
+ 	iowq->napi_prefer_busy_poll = READ_ONCE(ctx->napi_prefer_busy_poll);
+ 
+-	if (!(ctx->flags & IORING_SETUP_SQPOLL) && ctx->napi_enabled)
++	if (!(ctx->flags & IORING_SETUP_SQPOLL))
+ 		io_napi_blocking_busy_loop(ctx, iowq);
+ }
+ 
+@@ -320,7 +326,7 @@ int io_napi_sqpoll_busy_poll(struct io_ring_ctx *ctx)
+ 	LIST_HEAD(napi_list);
+ 	bool is_stale = false;
+ 
+-	if (!READ_ONCE(ctx->napi_busy_poll_to))
++	if (!READ_ONCE(ctx->napi_busy_poll_dt))
+ 		return 0;
+ 	if (list_empty_careful(&ctx->napi_list))
+ 		return 0;
+diff --git a/io_uring/napi.h b/io_uring/napi.h
+index 6fc0393d0dbef..341d010cf66bc 100644
+--- a/io_uring/napi.h
++++ b/io_uring/napi.h
+@@ -55,7 +55,7 @@ static inline void io_napi_add(struct io_kiocb *req)
+ 	struct io_ring_ctx *ctx = req->ctx;
+ 	struct socket *sock;
+ 
+-	if (!READ_ONCE(ctx->napi_busy_poll_to))
++	if (!READ_ONCE(ctx->napi_enabled))
+ 		return;
+ 
+ 	sock = sock_from_file(req->file);
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index a8845cc299fec..521bd7efae038 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -16881,8 +16881,9 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
+ 		spi = i / BPF_REG_SIZE;
+ 
+ 		if (exact != NOT_EXACT &&
+-		    old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
+-		    cur->stack[spi].slot_type[i % BPF_REG_SIZE])
++		    (i >= cur->allocated_stack ||
++		     old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
++		     cur->stack[spi].slot_type[i % BPF_REG_SIZE]))
+ 			return false;
+ 
+ 		if (!(old->stack[spi].spilled_ptr.live & REG_LIVE_READ)
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 5e468db958104..fc1c6236460d2 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1940,6 +1940,8 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
+ 			part_error = PERR_CPUSEMPTY;
+ 			goto write_error;
+ 		}
++		/* Check newmask again, whether cpus are available for parent/cs */
++		nocpu |= tasks_nocpu_error(parent, cs, newmask);
+ 
+ 		/*
+ 		 * partcmd_update with newmask:
+@@ -2472,7 +2474,8 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ 	 */
+ 	if (!*buf) {
+ 		cpumask_clear(trialcs->cpus_allowed);
+-		cpumask_clear(trialcs->effective_xcpus);
++		if (cpumask_empty(trialcs->exclusive_cpus))
++			cpumask_clear(trialcs->effective_xcpus);
+ 	} else {
+ 		retval = cpulist_parse(buf, trialcs->cpus_allowed);
+ 		if (retval < 0)
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 3d2bf1d50a0c4..6dee328bfe6fd 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2679,6 +2679,16 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
+ 	return ret;
+ }
+ 
++/**
++ * Check if the core a CPU belongs to is online
++ */
++#if !defined(topology_is_core_online)
++static inline bool topology_is_core_online(unsigned int cpu)
++{
++	return true;
++}
++#endif
++
+ int cpuhp_smt_enable(void)
+ {
+ 	int cpu, ret = 0;
+@@ -2689,7 +2699,7 @@ int cpuhp_smt_enable(void)
+ 		/* Skip online CPUs and CPUs on offline nodes */
+ 		if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
+ 			continue;
+-		if (!cpu_smt_thread_allowed(cpu))
++		if (!cpu_smt_thread_allowed(cpu) || !topology_is_core_online(cpu))
+ 			continue;
+ 		ret = _cpu_up(cpu, 0, CPUHP_ONLINE);
+ 		if (ret)
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index b2a6aec118f3a..7891d5a75526a 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -9708,7 +9708,8 @@ static int __perf_event_overflow(struct perf_event *event,
+ 
+ 	ret = __perf_event_account_interrupt(event, throttle);
+ 
+-	if (event->prog && !bpf_overflow_handler(event, data, regs))
++	if (event->prog && event->prog->type == BPF_PROG_TYPE_PERF_EVENT &&
++	    !bpf_overflow_handler(event, data, regs))
+ 		return ret;
+ 
+ 	/*
+diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
+index 98b9622d372e4..a9a0ca605d4a8 100644
+--- a/kernel/kallsyms.c
++++ b/kernel/kallsyms.c
+@@ -148,9 +148,6 @@ static unsigned int get_symbol_offset(unsigned long pos)
+ 
+ unsigned long kallsyms_sym_address(int idx)
+ {
+-	if (!IS_ENABLED(CONFIG_KALLSYMS_BASE_RELATIVE))
+-		return kallsyms_addresses[idx];
+-
+ 	/* values are unsigned offsets if --absolute-percpu is not in effect */
+ 	if (!IS_ENABLED(CONFIG_KALLSYMS_ABSOLUTE_PERCPU))
+ 		return kallsyms_relative_base + (u32)kallsyms_offsets[idx];
+@@ -163,38 +160,6 @@ unsigned long kallsyms_sym_address(int idx)
+ 	return kallsyms_relative_base - 1 - kallsyms_offsets[idx];
+ }
+ 
+-static void cleanup_symbol_name(char *s)
+-{
+-	char *res;
+-
+-	if (!IS_ENABLED(CONFIG_LTO_CLANG))
+-		return;
+-
+-	/*
+-	 * LLVM appends various suffixes for local functions and variables that
+-	 * must be promoted to global scope as part of LTO.  This can break
+-	 * hooking of static functions with kprobes. '.' is not a valid
+-	 * character in an identifier in C. Suffixes only in LLVM LTO observed:
+-	 * - foo.llvm.[0-9a-f]+
+-	 */
+-	res = strstr(s, ".llvm.");
+-	if (res)
+-		*res = '\0';
+-
+-	return;
+-}
+-
+-static int compare_symbol_name(const char *name, char *namebuf)
+-{
+-	/* The kallsyms_seqs_of_names is sorted based on names after
+-	 * cleanup_symbol_name() (see scripts/kallsyms.c) if clang lto is enabled.
+-	 * To ensure correct bisection in kallsyms_lookup_names(), do
+-	 * cleanup_symbol_name(namebuf) before comparing name and namebuf.
+-	 */
+-	cleanup_symbol_name(namebuf);
+-	return strcmp(name, namebuf);
+-}
+-
+ static unsigned int get_symbol_seq(int index)
+ {
+ 	unsigned int i, seq = 0;
+@@ -222,7 +187,7 @@ static int kallsyms_lookup_names(const char *name,
+ 		seq = get_symbol_seq(mid);
+ 		off = get_symbol_offset(seq);
+ 		kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf));
+-		ret = compare_symbol_name(name, namebuf);
++		ret = strcmp(name, namebuf);
+ 		if (ret > 0)
+ 			low = mid + 1;
+ 		else if (ret < 0)
+@@ -239,7 +204,7 @@ static int kallsyms_lookup_names(const char *name,
+ 		seq = get_symbol_seq(low - 1);
+ 		off = get_symbol_offset(seq);
+ 		kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf));
+-		if (compare_symbol_name(name, namebuf))
++		if (strcmp(name, namebuf))
+ 			break;
+ 		low--;
+ 	}
+@@ -251,7 +216,7 @@ static int kallsyms_lookup_names(const char *name,
+ 			seq = get_symbol_seq(high + 1);
+ 			off = get_symbol_offset(seq);
+ 			kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf));
+-			if (compare_symbol_name(name, namebuf))
++			if (strcmp(name, namebuf))
+ 				break;
+ 			high++;
+ 		}
+@@ -325,7 +290,7 @@ static unsigned long get_symbol_pos(unsigned long addr,
+ 	unsigned long symbol_start = 0, symbol_end = 0;
+ 	unsigned long i, low, high, mid;
+ 
+-	/* Do a binary search on the sorted kallsyms_addresses array. */
++	/* Do a binary search on the sorted kallsyms_offsets array. */
+ 	low = 0;
+ 	high = kallsyms_num_syms;
+ 
+@@ -410,8 +375,7 @@ static int kallsyms_lookup_buildid(unsigned long addr,
+ 		if (modbuildid)
+ 			*modbuildid = NULL;
+ 
+-		ret = strlen(namebuf);
+-		goto found;
++		return strlen(namebuf);
+ 	}
+ 
+ 	/* See if it's in a module or a BPF JITed image. */
+@@ -425,8 +389,6 @@ static int kallsyms_lookup_buildid(unsigned long addr,
+ 		ret = ftrace_mod_address_lookup(addr, symbolsize,
+ 						offset, modname, namebuf);
+ 
+-found:
+-	cleanup_symbol_name(namebuf);
+ 	return ret;
+ }
+ 
+@@ -453,8 +415,6 @@ const char *kallsyms_lookup(unsigned long addr,
+ 
+ int lookup_symbol_name(unsigned long addr, char *symname)
+ {
+-	int res;
+-
+ 	symname[0] = '\0';
+ 	symname[KSYM_NAME_LEN - 1] = '\0';
+ 
+@@ -465,16 +425,10 @@ int lookup_symbol_name(unsigned long addr, char *symname)
+ 		/* Grab name */
+ 		kallsyms_expand_symbol(get_symbol_offset(pos),
+ 				       symname, KSYM_NAME_LEN);
+-		goto found;
++		return 0;
+ 	}
+ 	/* See if it's in a module. */
+-	res = lookup_module_symbol_name(addr, symname);
+-	if (res)
+-		return res;
+-
+-found:
+-	cleanup_symbol_name(symname);
+-	return 0;
++	return lookup_module_symbol_name(addr, symname);
+ }
+ 
+ /* Look up a kernel symbol and return it in a text buffer. */
+diff --git a/kernel/kallsyms_internal.h b/kernel/kallsyms_internal.h
+index 85480274fc8fb..9633782f82500 100644
+--- a/kernel/kallsyms_internal.h
++++ b/kernel/kallsyms_internal.h
+@@ -4,12 +4,6 @@
+ 
+ #include <linux/types.h>
+ 
+-/*
+- * These will be re-linked against their real values during the second link
+- * stage. Preliminary values must be provided in the linker script using the
+- * PROVIDE() directive so that the first link stage can complete successfully.
+- */
+-extern const unsigned long kallsyms_addresses[];
+ extern const int kallsyms_offsets[];
+ extern const u8 kallsyms_names[];
+ 
+diff --git a/kernel/kallsyms_selftest.c b/kernel/kallsyms_selftest.c
+index 2f84896a7bcbd..873f7c445488c 100644
+--- a/kernel/kallsyms_selftest.c
++++ b/kernel/kallsyms_selftest.c
+@@ -187,31 +187,11 @@ static void test_perf_kallsyms_lookup_name(void)
+ 		stat.min, stat.max, div_u64(stat.sum, stat.real_cnt));
+ }
+ 
+-static bool match_cleanup_name(const char *s, const char *name)
+-{
+-	char *p;
+-	int len;
+-
+-	if (!IS_ENABLED(CONFIG_LTO_CLANG))
+-		return false;
+-
+-	p = strstr(s, ".llvm.");
+-	if (!p)
+-		return false;
+-
+-	len = strlen(name);
+-	if (p - s != len)
+-		return false;
+-
+-	return !strncmp(s, name, len);
+-}
+-
+ static int find_symbol(void *data, const char *name, unsigned long addr)
+ {
+ 	struct test_stat *stat = (struct test_stat *)data;
+ 
+-	if (strcmp(name, stat->name) == 0 ||
+-	    (!stat->perf && match_cleanup_name(name, stat->name))) {
++	if (!strcmp(name, stat->name)) {
+ 		stat->real_cnt++;
+ 		stat->addr = addr;
+ 
+diff --git a/kernel/panic.c b/kernel/panic.c
+index 8bff183d6180e..30342568e935f 100644
+--- a/kernel/panic.c
++++ b/kernel/panic.c
+@@ -63,6 +63,8 @@ unsigned long panic_on_taint;
+ bool panic_on_taint_nousertaint = false;
+ static unsigned int warn_limit __read_mostly;
+ 
++bool panic_triggering_all_cpu_backtrace;
++
+ int panic_timeout = CONFIG_PANIC_TIMEOUT;
+ EXPORT_SYMBOL_GPL(panic_timeout);
+ 
+@@ -252,8 +254,12 @@ void check_panic_on_warn(const char *origin)
+  */
+ static void panic_other_cpus_shutdown(bool crash_kexec)
+ {
+-	if (panic_print & PANIC_PRINT_ALL_CPU_BT)
++	if (panic_print & PANIC_PRINT_ALL_CPU_BT) {
++		/* Temporary allow non-panic CPUs to write their backtraces. */
++		panic_triggering_all_cpu_backtrace = true;
+ 		trigger_all_cpu_backtrace();
++		panic_triggering_all_cpu_backtrace = false;
++	}
+ 
+ 	/*
+ 	 * Note that smp_send_stop() is the usual SMP shutdown function,
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index dddb15f48d595..c5d844f727f63 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -2316,7 +2316,7 @@ asmlinkage int vprintk_emit(int facility, int level,
+ 	 * non-panic CPUs are generating any messages, they will be
+ 	 * silently dropped.
+ 	 */
+-	if (other_cpu_in_panic())
++	if (other_cpu_in_panic() && !panic_triggering_all_cpu_backtrace)
+ 		return 0;
+ 
+ 	if (level == LOGLEVEL_SCHED) {
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 578a49ff5c32e..cb507860163d0 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -7956,7 +7956,7 @@ tracing_buffers_read(struct file *filp, char __user *ubuf,
+ 	trace_access_unlock(iter->cpu_file);
+ 
+ 	if (ret < 0) {
+-		if (trace_empty(iter)) {
++		if (trace_empty(iter) && !iter->closed) {
+ 			if ((filp->f_flags & O_NONBLOCK))
+ 				return -EAGAIN;
+ 
+diff --git a/kernel/vmcore_info.c b/kernel/vmcore_info.c
+index 1d5eadd9dd61c..8b4f8cc2e0ec0 100644
+--- a/kernel/vmcore_info.c
++++ b/kernel/vmcore_info.c
+@@ -216,12 +216,8 @@ static int __init crash_save_vmcoreinfo_init(void)
+ 	VMCOREINFO_SYMBOL(kallsyms_num_syms);
+ 	VMCOREINFO_SYMBOL(kallsyms_token_table);
+ 	VMCOREINFO_SYMBOL(kallsyms_token_index);
+-#ifdef CONFIG_KALLSYMS_BASE_RELATIVE
+ 	VMCOREINFO_SYMBOL(kallsyms_offsets);
+ 	VMCOREINFO_SYMBOL(kallsyms_relative_base);
+-#else
+-	VMCOREINFO_SYMBOL(kallsyms_addresses);
+-#endif /* CONFIG_KALLSYMS_BASE_RELATIVE */
+ #endif /* CONFIG_KALLSYMS */
+ 
+ 	arch_crash_save_vmcoreinfo();
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index f98247ec99c20..c970eec25c5a0 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -896,7 +896,7 @@ static struct worker_pool *get_work_pool(struct work_struct *work)
+ 
+ static unsigned long shift_and_mask(unsigned long v, u32 shift, u32 bits)
+ {
+-	return (v >> shift) & ((1 << bits) - 1);
++	return (v >> shift) & ((1U << bits) - 1);
+ }
+ 
+ static void work_offqd_unpack(struct work_offq_data *offqd, unsigned long data)
+@@ -4190,7 +4190,6 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
+ static bool __flush_work(struct work_struct *work, bool from_cancel)
+ {
+ 	struct wq_barrier barr;
+-	unsigned long data;
+ 
+ 	if (WARN_ON(!wq_online))
+ 		return false;
+@@ -4208,29 +4207,35 @@ static bool __flush_work(struct work_struct *work, bool from_cancel)
+ 	 * was queued on a BH workqueue, we also know that it was running in the
+ 	 * BH context and thus can be busy-waited.
+ 	 */
+-	data = *work_data_bits(work);
+-	if (from_cancel &&
+-	    !WARN_ON_ONCE(data & WORK_STRUCT_PWQ) && (data & WORK_OFFQ_BH)) {
+-		/*
+-		 * On RT, prevent a live lock when %current preempted soft
+-		 * interrupt processing or prevents ksoftirqd from running by
+-		 * keeping flipping BH. If the BH work item runs on a different
+-		 * CPU then this has no effect other than doing the BH
+-		 * disable/enable dance for nothing. This is copied from
+-		 * kernel/softirq.c::tasklet_unlock_spin_wait().
+-		 */
+-		while (!try_wait_for_completion(&barr.done)) {
+-			if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
+-				local_bh_disable();
+-				local_bh_enable();
+-			} else {
+-				cpu_relax();
++	if (from_cancel) {
++		unsigned long data = *work_data_bits(work);
++
++		if (!WARN_ON_ONCE(data & WORK_STRUCT_PWQ) &&
++		    (data & WORK_OFFQ_BH)) {
++			/*
++			 * On RT, prevent a live lock when %current preempted
++			 * soft interrupt processing or prevents ksoftirqd from
++			 * running by keeping flipping BH. If the BH work item
++			 * runs on a different CPU then this has no effect other
++			 * than doing the BH disable/enable dance for nothing.
++			 * This is copied from
++			 * kernel/softirq.c::tasklet_unlock_spin_wait().
++			 */
++			while (!try_wait_for_completion(&barr.done)) {
++				if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
++					local_bh_disable();
++					local_bh_enable();
++				} else {
++					cpu_relax();
++				}
+ 			}
++			goto out_destroy;
+ 		}
+-	} else {
+-		wait_for_completion(&barr.done);
+ 	}
+ 
++	wait_for_completion(&barr.done);
++
++out_destroy:
+ 	destroy_work_on_stack(&barr.work);
+ 	return true;
+ }
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 5f32a196a612e..4d9c1277e5e4d 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1672,7 +1672,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
+ 	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
+ 	if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
+ 		spin_unlock(vmf->ptl);
+-		goto out;
++		return 0;
+ 	}
+ 
+ 	pmd = pmd_modify(oldpmd, vma->vm_page_prot);
+@@ -1715,22 +1715,16 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
+ 	if (!migrate_misplaced_folio(folio, vma, target_nid)) {
+ 		flags |= TNF_MIGRATED;
+ 		nid = target_nid;
+-	} else {
+-		flags |= TNF_MIGRATE_FAIL;
+-		vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
+-		if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
+-			spin_unlock(vmf->ptl);
+-			goto out;
+-		}
+-		goto out_map;
+-	}
+-
+-out:
+-	if (nid != NUMA_NO_NODE)
+ 		task_numa_fault(last_cpupid, nid, HPAGE_PMD_NR, flags);
++		return 0;
++	}
+ 
+-	return 0;
+-
++	flags |= TNF_MIGRATE_FAIL;
++	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
++	if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
++		spin_unlock(vmf->ptl);
++		return 0;
++	}
+ out_map:
+ 	/* Restore the PMD */
+ 	pmd = pmd_modify(oldpmd, vma->vm_page_prot);
+@@ -1740,7 +1734,10 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
+ 	set_pmd_at(vma->vm_mm, haddr, vmf->pmd, pmd);
+ 	update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
+ 	spin_unlock(vmf->ptl);
+-	goto out;
++
++	if (nid != NUMA_NO_NODE)
++		task_numa_fault(last_cpupid, nid, HPAGE_PMD_NR, flags);
++	return 0;
+ }
+ 
+ /*
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 88c7a0861017c..332f190bf3d6b 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -5282,9 +5282,12 @@ static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
+ 	buf = endp + 1;
+ 
+ 	cfd = simple_strtoul(buf, &endp, 10);
+-	if ((*endp != ' ') && (*endp != '\0'))
++	if (*endp == '\0')
++		buf = endp;
++	else if (*endp == ' ')
++		buf = endp + 1;
++	else
+ 		return -EINVAL;
+-	buf = endp + 1;
+ 
+ 	event = kzalloc(sizeof(*event), GFP_KERNEL);
+ 	if (!event)
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index d3c830e817e35..7e2e454142bcd 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -2406,7 +2406,7 @@ struct memory_failure_entry {
+ struct memory_failure_cpu {
+ 	DECLARE_KFIFO(fifo, struct memory_failure_entry,
+ 		      MEMORY_FAILURE_FIFO_SIZE);
+-	spinlock_t lock;
++	raw_spinlock_t lock;
+ 	struct work_struct work;
+ };
+ 
+@@ -2432,20 +2432,22 @@ void memory_failure_queue(unsigned long pfn, int flags)
+ {
+ 	struct memory_failure_cpu *mf_cpu;
+ 	unsigned long proc_flags;
++	bool buffer_overflow;
+ 	struct memory_failure_entry entry = {
+ 		.pfn =		pfn,
+ 		.flags =	flags,
+ 	};
+ 
+ 	mf_cpu = &get_cpu_var(memory_failure_cpu);
+-	spin_lock_irqsave(&mf_cpu->lock, proc_flags);
+-	if (kfifo_put(&mf_cpu->fifo, entry))
++	raw_spin_lock_irqsave(&mf_cpu->lock, proc_flags);
++	buffer_overflow = !kfifo_put(&mf_cpu->fifo, entry);
++	if (!buffer_overflow)
+ 		schedule_work_on(smp_processor_id(), &mf_cpu->work);
+-	else
++	raw_spin_unlock_irqrestore(&mf_cpu->lock, proc_flags);
++	put_cpu_var(memory_failure_cpu);
++	if (buffer_overflow)
+ 		pr_err("buffer overflow when queuing memory failure at %#lx\n",
+ 		       pfn);
+-	spin_unlock_irqrestore(&mf_cpu->lock, proc_flags);
+-	put_cpu_var(memory_failure_cpu);
+ }
+ EXPORT_SYMBOL_GPL(memory_failure_queue);
+ 
+@@ -2458,9 +2460,9 @@ static void memory_failure_work_func(struct work_struct *work)
+ 
+ 	mf_cpu = container_of(work, struct memory_failure_cpu, work);
+ 	for (;;) {
+-		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
++		raw_spin_lock_irqsave(&mf_cpu->lock, proc_flags);
+ 		gotten = kfifo_get(&mf_cpu->fifo, &entry);
+-		spin_unlock_irqrestore(&mf_cpu->lock, proc_flags);
++		raw_spin_unlock_irqrestore(&mf_cpu->lock, proc_flags);
+ 		if (!gotten)
+ 			break;
+ 		if (entry.flags & MF_SOFT_OFFLINE)
+@@ -2490,7 +2492,7 @@ static int __init memory_failure_init(void)
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		mf_cpu = &per_cpu(memory_failure_cpu, cpu);
+-		spin_lock_init(&mf_cpu->lock);
++		raw_spin_lock_init(&mf_cpu->lock);
+ 		INIT_KFIFO(mf_cpu->fifo);
+ 		INIT_WORK(&mf_cpu->work, memory_failure_work_func);
+ 	}
+diff --git a/mm/memory.c b/mm/memory.c
+index 755ffe082e217..72d00a38585d0 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -5155,7 +5155,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
+ 
+ 	if (unlikely(!pte_same(old_pte, vmf->orig_pte))) {
+ 		pte_unmap_unlock(vmf->pte, vmf->ptl);
+-		goto out;
++		return 0;
+ 	}
+ 
+ 	pte = pte_modify(old_pte, vma->vm_page_prot);
+@@ -5218,23 +5218,19 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
+ 	if (!migrate_misplaced_folio(folio, vma, target_nid)) {
+ 		nid = target_nid;
+ 		flags |= TNF_MIGRATED;
+-	} else {
+-		flags |= TNF_MIGRATE_FAIL;
+-		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
+-					       vmf->address, &vmf->ptl);
+-		if (unlikely(!vmf->pte))
+-			goto out;
+-		if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+-			pte_unmap_unlock(vmf->pte, vmf->ptl);
+-			goto out;
+-		}
+-		goto out_map;
++		task_numa_fault(last_cpupid, nid, nr_pages, flags);
++		return 0;
+ 	}
+ 
+-out:
+-	if (nid != NUMA_NO_NODE)
+-		task_numa_fault(last_cpupid, nid, nr_pages, flags);
+-	return 0;
++	flags |= TNF_MIGRATE_FAIL;
++	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
++				       vmf->address, &vmf->ptl);
++	if (unlikely(!vmf->pte))
++		return 0;
++	if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
++		pte_unmap_unlock(vmf->pte, vmf->ptl);
++		return 0;
++	}
+ out_map:
+ 	/*
+ 	 * Make it present again, depending on how arch implements
+@@ -5247,7 +5243,10 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
+ 		numa_rebuild_single_mapping(vmf, vma, vmf->address, vmf->pte,
+ 					    writable);
+ 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+-	goto out;
++
++	if (nid != NUMA_NO_NODE)
++		task_numa_fault(last_cpupid, nid, nr_pages, flags);
++	return 0;
+ }
+ 
+ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
+diff --git a/mm/mm_init.c b/mm/mm_init.c
+index 3ec04933f7fd8..2addc701790ae 100644
+--- a/mm/mm_init.c
++++ b/mm/mm_init.c
+@@ -2293,6 +2293,8 @@ void __init init_cma_reserved_pageblock(struct page *page)
+ 
+ 	set_pageblock_migratetype(page, MIGRATE_CMA);
+ 	set_page_refcounted(page);
++	/* pages were reserved and not allocated */
++	clear_page_tag_ref(page);
+ 	__free_pages(page, pageblock_order);
+ 
+ 	adjust_managed_page_count(page, pageblock_nr_pages);
+@@ -2505,15 +2507,7 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
+ 	}
+ 
+ 	/* pages were reserved and not allocated */
+-	if (mem_alloc_profiling_enabled()) {
+-		union codetag_ref *ref = get_page_tag_ref(page);
+-
+-		if (ref) {
+-			set_codetag_empty(ref);
+-			put_page_tag_ref(ref);
+-		}
+-	}
+-
++	clear_page_tag_ref(page);
+ 	__free_pages_core(page, order);
+ }
+ 
+diff --git a/mm/mseal.c b/mm/mseal.c
+index bf783bba8ed0b..15bba28acc005 100644
+--- a/mm/mseal.c
++++ b/mm/mseal.c
+@@ -40,9 +40,17 @@ static bool can_modify_vma(struct vm_area_struct *vma)
+ 
+ static bool is_madv_discard(int behavior)
+ {
+-	return	behavior &
+-		(MADV_FREE | MADV_DONTNEED | MADV_DONTNEED_LOCKED |
+-		 MADV_REMOVE | MADV_DONTFORK | MADV_WIPEONFORK);
++	switch (behavior) {
++	case MADV_FREE:
++	case MADV_DONTNEED:
++	case MADV_DONTNEED_LOCKED:
++	case MADV_REMOVE:
++	case MADV_DONTFORK:
++	case MADV_WIPEONFORK:
++		return true;
++	}
++
++	return false;
+ }
+ 
+ static bool is_ro_anon(struct vm_area_struct *vma)
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index df2c442f1c47b..b50060405d947 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -287,7 +287,7 @@ EXPORT_SYMBOL(nr_online_nodes);
+ 
+ static bool page_contains_unaccepted(struct page *page, unsigned int order);
+ static void accept_page(struct page *page, unsigned int order);
+-static bool try_to_accept_memory(struct zone *zone, unsigned int order);
++static bool cond_accept_memory(struct zone *zone, unsigned int order);
+ static inline bool has_unaccepted_memory(void);
+ static bool __free_unaccepted(struct page *page);
+ 
+@@ -3059,9 +3059,6 @@ static inline long __zone_watermark_unusable_free(struct zone *z,
+ 	if (!(alloc_flags & ALLOC_CMA))
+ 		unusable_free += zone_page_state(z, NR_FREE_CMA_PAGES);
+ #endif
+-#ifdef CONFIG_UNACCEPTED_MEMORY
+-	unusable_free += zone_page_state(z, NR_UNACCEPTED);
+-#endif
+ 
+ 	return unusable_free;
+ }
+@@ -3355,6 +3352,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
+ 			}
+ 		}
+ 
++		cond_accept_memory(zone, order);
++
+ 		/*
+ 		 * Detect whether the number of free pages is below high
+ 		 * watermark.  If so, we will decrease pcp->high and free
+@@ -3380,10 +3379,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
+ 				       gfp_mask)) {
+ 			int ret;
+ 
+-			if (has_unaccepted_memory()) {
+-				if (try_to_accept_memory(zone, order))
+-					goto try_this_zone;
+-			}
++			if (cond_accept_memory(zone, order))
++				goto try_this_zone;
+ 
+ #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
+ 			/*
+@@ -3437,10 +3434,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
+ 
+ 			return page;
+ 		} else {
+-			if (has_unaccepted_memory()) {
+-				if (try_to_accept_memory(zone, order))
+-					goto try_this_zone;
+-			}
++			if (cond_accept_memory(zone, order))
++				goto try_this_zone;
+ 
+ #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
+ 			/* Try again if zone has deferred pages */
+@@ -5811,14 +5806,7 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char
+ 
+ void free_reserved_page(struct page *page)
+ {
+-	if (mem_alloc_profiling_enabled()) {
+-		union codetag_ref *ref = get_page_tag_ref(page);
+-
+-		if (ref) {
+-			set_codetag_empty(ref);
+-			put_page_tag_ref(ref);
+-		}
+-	}
++	clear_page_tag_ref(page);
+ 	ClearPageReserved(page);
+ 	init_page_count(page);
+ 	__free_page(page);
+@@ -6933,9 +6921,6 @@ static bool try_to_accept_memory_one(struct zone *zone)
+ 	struct page *page;
+ 	bool last;
+ 
+-	if (list_empty(&zone->unaccepted_pages))
+-		return false;
+-
+ 	spin_lock_irqsave(&zone->lock, flags);
+ 	page = list_first_entry_or_null(&zone->unaccepted_pages,
+ 					struct page, lru);
+@@ -6961,23 +6946,29 @@ static bool try_to_accept_memory_one(struct zone *zone)
+ 	return true;
+ }
+ 
+-static bool try_to_accept_memory(struct zone *zone, unsigned int order)
++static bool cond_accept_memory(struct zone *zone, unsigned int order)
+ {
+ 	long to_accept;
+-	int ret = false;
++	bool ret = false;
++
++	if (!has_unaccepted_memory())
++		return false;
++
++	if (list_empty(&zone->unaccepted_pages))
++		return false;
+ 
+ 	/* How much to accept to get to high watermark? */
+ 	to_accept = high_wmark_pages(zone) -
+ 		    (zone_page_state(zone, NR_FREE_PAGES) -
+-		    __zone_watermark_unusable_free(zone, order, 0));
++		    __zone_watermark_unusable_free(zone, order, 0) -
++		    zone_page_state(zone, NR_UNACCEPTED));
+ 
+-	/* Accept at least one page */
+-	do {
++	while (to_accept > 0) {
+ 		if (!try_to_accept_memory_one(zone))
+ 			break;
+ 		ret = true;
+ 		to_accept -= MAX_ORDER_NR_PAGES;
+-	} while (to_accept > 0);
++	}
+ 
+ 	return ret;
+ }
+@@ -7020,7 +7011,7 @@ static void accept_page(struct page *page, unsigned int order)
+ {
+ }
+ 
+-static bool try_to_accept_memory(struct zone *zone, unsigned int order)
++static bool cond_accept_memory(struct zone *zone, unsigned int order)
+ {
+ 	return false;
+ }
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index e34ea860153f2..881e497137e5d 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -3583,15 +3583,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
+ 			page = alloc_pages_noprof(alloc_gfp, order);
+ 		else
+ 			page = alloc_pages_node_noprof(nid, alloc_gfp, order);
+-		if (unlikely(!page)) {
+-			if (!nofail)
+-				break;
+-
+-			/* fall back to the zero order allocations */
+-			alloc_gfp |= __GFP_NOFAIL;
+-			order = 0;
+-			continue;
+-		}
++		if (unlikely(!page))
++			break;
+ 
+ 		/*
+ 		 * Higher order allocations must be able to be treated as
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 6ecb110bf46bc..b488d0742c966 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3674,19 +3674,19 @@ static void hci_sched_le(struct hci_dev *hdev)
+ {
+ 	struct hci_chan *chan;
+ 	struct sk_buff *skb;
+-	int quote, cnt, tmp;
++	int quote, *cnt, tmp;
+ 
+ 	BT_DBG("%s", hdev->name);
+ 
+ 	if (!hci_conn_num(hdev, LE_LINK))
+ 		return;
+ 
+-	cnt = hdev->le_pkts ? hdev->le_cnt : hdev->acl_cnt;
++	cnt = hdev->le_pkts ? &hdev->le_cnt : &hdev->acl_cnt;
+ 
+-	__check_timeout(hdev, cnt, LE_LINK);
++	__check_timeout(hdev, *cnt, LE_LINK);
+ 
+-	tmp = cnt;
+-	while (cnt && (chan = hci_chan_sent(hdev, LE_LINK, &quote))) {
++	tmp = *cnt;
++	while (*cnt && (chan = hci_chan_sent(hdev, LE_LINK, &quote))) {
+ 		u32 priority = (skb_peek(&chan->data_q))->priority;
+ 		while (quote-- && (skb = skb_peek(&chan->data_q))) {
+ 			BT_DBG("chan %p skb %p len %d priority %u", chan, skb,
+@@ -3701,7 +3701,7 @@ static void hci_sched_le(struct hci_dev *hdev)
+ 			hci_send_frame(hdev, skb);
+ 			hdev->le_last_tx = jiffies;
+ 
+-			cnt--;
++			(*cnt)--;
+ 			chan->sent++;
+ 			chan->conn->sent++;
+ 
+@@ -3711,12 +3711,7 @@ static void hci_sched_le(struct hci_dev *hdev)
+ 		}
+ 	}
+ 
+-	if (hdev->le_pkts)
+-		hdev->le_cnt = cnt;
+-	else
+-		hdev->acl_cnt = cnt;
+-
+-	if (cnt != tmp)
++	if (*cnt != tmp)
+ 		hci_prio_recalculate(hdev, LE_LINK);
+ }
+ 
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index a78f6d706cd43..59d9086db75fe 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5921,7 +5921,7 @@ static struct hci_conn *check_pending_le_conn(struct hci_dev *hdev,
+ 	 * while we have an existing one in peripheral role.
+ 	 */
+ 	if (hdev->conn_hash.le_num_peripheral > 0 &&
+-	    (!test_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks) ||
++	    (test_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks) ||
+ 	     !(hdev->le_states[3] & 0x10)))
+ 		return NULL;
+ 
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 80f220b7e19d5..ad4793ea052df 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -3457,6 +3457,10 @@ static int pair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 		 * will be kept and this function does nothing.
+ 		 */
+ 		p = hci_conn_params_add(hdev, &cp->addr.bdaddr, addr_type);
++		if (!p) {
++			err = -EIO;
++			goto unlock;
++		}
+ 
+ 		if (p->auto_connect == HCI_AUTO_CONN_EXPLICIT)
+ 			p->auto_connect = HCI_AUTO_CONN_DISABLED;
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 1e7ea3a4b7ef3..4f9fdf400584e 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -914,7 +914,7 @@ static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth,
+ 	 * Confirms and the responder Enters the passkey.
+ 	 */
+ 	if (smp->method == OVERLAP) {
+-		if (hcon->role == HCI_ROLE_MASTER)
++		if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 			smp->method = CFM_PASSKEY;
+ 		else
+ 			smp->method = REQ_PASSKEY;
+@@ -964,7 +964,7 @@ static u8 smp_confirm(struct smp_chan *smp)
+ 
+ 	smp_send_cmd(smp->conn, SMP_CMD_PAIRING_CONFIRM, sizeof(cp), &cp);
+ 
+-	if (conn->hcon->out)
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_CONFIRM);
+ 	else
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
+@@ -980,7 +980,8 @@ static u8 smp_random(struct smp_chan *smp)
+ 	int ret;
+ 
+ 	bt_dev_dbg(conn->hcon->hdev, "conn %p %s", conn,
+-		   conn->hcon->out ? "initiator" : "responder");
++		   test_bit(SMP_FLAG_INITIATOR, &smp->flags) ? "initiator" :
++		   "responder");
+ 
+ 	ret = smp_c1(smp->tk, smp->rrnd, smp->preq, smp->prsp,
+ 		     hcon->init_addr_type, &hcon->init_addr,
+@@ -994,7 +995,7 @@ static u8 smp_random(struct smp_chan *smp)
+ 		return SMP_CONFIRM_FAILED;
+ 	}
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		u8 stk[16];
+ 		__le64 rand = 0;
+ 		__le16 ediv = 0;
+@@ -1256,14 +1257,15 @@ static void smp_distribute_keys(struct smp_chan *smp)
+ 	rsp = (void *) &smp->prsp[1];
+ 
+ 	/* The responder sends its keys first */
+-	if (hcon->out && (smp->remote_key_dist & KEY_DIST_MASK)) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags) &&
++	    (smp->remote_key_dist & KEY_DIST_MASK)) {
+ 		smp_allow_key_dist(smp);
+ 		return;
+ 	}
+ 
+ 	req = (void *) &smp->preq[1];
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		keydist = &rsp->init_key_dist;
+ 		*keydist &= req->init_key_dist;
+ 	} else {
+@@ -1432,7 +1434,7 @@ static int sc_mackey_and_ltk(struct smp_chan *smp, u8 mackey[16], u8 ltk[16])
+ 	struct hci_conn *hcon = smp->conn->hcon;
+ 	u8 *na, *nb, a[7], b[7];
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		na   = smp->prnd;
+ 		nb   = smp->rrnd;
+ 	} else {
+@@ -1460,7 +1462,7 @@ static void sc_dhkey_check(struct smp_chan *smp)
+ 	a[6] = hcon->init_addr_type;
+ 	b[6] = hcon->resp_addr_type;
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		local_addr = a;
+ 		remote_addr = b;
+ 		memcpy(io_cap, &smp->preq[1], 3);
+@@ -1539,7 +1541,7 @@ static u8 sc_passkey_round(struct smp_chan *smp, u8 smp_op)
+ 		/* The round is only complete when the initiator
+ 		 * receives pairing random.
+ 		 */
+-		if (!hcon->out) {
++		if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 			smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM,
+ 				     sizeof(smp->prnd), smp->prnd);
+ 			if (smp->passkey_round == 20)
+@@ -1567,7 +1569,7 @@ static u8 sc_passkey_round(struct smp_chan *smp, u8 smp_op)
+ 
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
+ 
+-		if (hcon->out) {
++		if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 			smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM,
+ 				     sizeof(smp->prnd), smp->prnd);
+ 			return 0;
+@@ -1578,7 +1580,7 @@ static u8 sc_passkey_round(struct smp_chan *smp, u8 smp_op)
+ 	case SMP_CMD_PUBLIC_KEY:
+ 	default:
+ 		/* Initiating device starts the round */
+-		if (!hcon->out)
++		if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 			return 0;
+ 
+ 		bt_dev_dbg(hdev, "Starting passkey round %u",
+@@ -1623,7 +1625,7 @@ static int sc_user_reply(struct smp_chan *smp, u16 mgmt_op, __le32 passkey)
+ 	}
+ 
+ 	/* Initiator sends DHKey check first */
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		sc_dhkey_check(smp);
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_DHKEY_CHECK);
+ 	} else if (test_and_clear_bit(SMP_FLAG_DHKEY_PENDING, &smp->flags)) {
+@@ -1746,7 +1748,7 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	struct smp_cmd_pairing rsp, *req = (void *) skb->data;
+ 	struct l2cap_chan *chan = conn->smp;
+ 	struct hci_dev *hdev = conn->hcon->hdev;
+-	struct smp_chan *smp;
++	struct smp_chan *smp = chan->data;
+ 	u8 key_size, auth, sec_level;
+ 	int ret;
+ 
+@@ -1755,16 +1757,14 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	if (skb->len < sizeof(*req))
+ 		return SMP_INVALID_PARAMS;
+ 
+-	if (conn->hcon->role != HCI_ROLE_SLAVE)
++	if (smp && test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 		return SMP_CMD_NOTSUPP;
+ 
+-	if (!chan->data)
++	if (!smp) {
+ 		smp = smp_chan_create(conn);
+-	else
+-		smp = chan->data;
+-
+-	if (!smp)
+-		return SMP_UNSPECIFIED;
++		if (!smp)
++			return SMP_UNSPECIFIED;
++	}
+ 
+ 	/* We didn't start the pairing, so match remote */
+ 	auth = req->auth_req & AUTH_REQ_MASK(hdev);
+@@ -1946,7 +1946,7 @@ static u8 smp_cmd_pairing_rsp(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	if (skb->len < sizeof(*rsp))
+ 		return SMP_INVALID_PARAMS;
+ 
+-	if (conn->hcon->role != HCI_ROLE_MASTER)
++	if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 		return SMP_CMD_NOTSUPP;
+ 
+ 	skb_pull(skb, sizeof(*rsp));
+@@ -2041,7 +2041,7 @@ static u8 sc_check_confirm(struct smp_chan *smp)
+ 	if (smp->method == REQ_PASSKEY || smp->method == DSP_PASSKEY)
+ 		return sc_passkey_round(smp, SMP_CMD_PAIRING_CONFIRM);
+ 
+-	if (conn->hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM, sizeof(smp->prnd),
+ 			     smp->prnd);
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
+@@ -2063,7 +2063,7 @@ static int fixup_sc_false_positive(struct smp_chan *smp)
+ 	u8 auth;
+ 
+ 	/* The issue is only observed when we're in responder role */
+-	if (hcon->out)
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 		return SMP_UNSPECIFIED;
+ 
+ 	if (hci_dev_test_flag(hdev, HCI_SC_ONLY)) {
+@@ -2099,7 +2099,8 @@ static u8 smp_cmd_pairing_confirm(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	struct hci_dev *hdev = hcon->hdev;
+ 
+ 	bt_dev_dbg(hdev, "conn %p %s", conn,
+-		   hcon->out ? "initiator" : "responder");
++		   test_bit(SMP_FLAG_INITIATOR, &smp->flags) ? "initiator" :
++		   "responder");
+ 
+ 	if (skb->len < sizeof(smp->pcnf))
+ 		return SMP_INVALID_PARAMS;
+@@ -2121,7 +2122,7 @@ static u8 smp_cmd_pairing_confirm(struct l2cap_conn *conn, struct sk_buff *skb)
+ 			return ret;
+ 	}
+ 
+-	if (conn->hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM, sizeof(smp->prnd),
+ 			     smp->prnd);
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
+@@ -2156,7 +2157,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	if (!test_bit(SMP_FLAG_SC, &smp->flags))
+ 		return smp_random(smp);
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		pkax = smp->local_pk;
+ 		pkbx = smp->remote_pk;
+ 		na   = smp->prnd;
+@@ -2169,7 +2170,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	}
+ 
+ 	if (smp->method == REQ_OOB) {
+-		if (!hcon->out)
++		if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 			smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM,
+ 				     sizeof(smp->prnd), smp->prnd);
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_DHKEY_CHECK);
+@@ -2180,7 +2181,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	if (smp->method == REQ_PASSKEY || smp->method == DSP_PASSKEY)
+ 		return sc_passkey_round(smp, SMP_CMD_PAIRING_RANDOM);
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		u8 cfm[16];
+ 
+ 		err = smp_f4(smp->tfm_cmac, smp->remote_pk, smp->local_pk,
+@@ -2221,7 +2222,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
+ 		return SMP_UNSPECIFIED;
+ 
+ 	if (smp->method == REQ_OOB) {
+-		if (hcon->out) {
++		if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 			sc_dhkey_check(smp);
+ 			SMP_ALLOW_CMD(smp, SMP_CMD_DHKEY_CHECK);
+ 		}
+@@ -2295,10 +2296,27 @@ bool smp_sufficient_security(struct hci_conn *hcon, u8 sec_level,
+ 	return false;
+ }
+ 
++static void smp_send_pairing_req(struct smp_chan *smp, __u8 auth)
++{
++	struct smp_cmd_pairing cp;
++
++	if (smp->conn->hcon->type == ACL_LINK)
++		build_bredr_pairing_cmd(smp, &cp, NULL);
++	else
++		build_pairing_cmd(smp->conn, &cp, NULL, auth);
++
++	smp->preq[0] = SMP_CMD_PAIRING_REQ;
++	memcpy(&smp->preq[1], &cp, sizeof(cp));
++
++	smp_send_cmd(smp->conn, SMP_CMD_PAIRING_REQ, sizeof(cp), &cp);
++	SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
++
++	set_bit(SMP_FLAG_INITIATOR, &smp->flags);
++}
++
+ static u8 smp_cmd_security_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ {
+ 	struct smp_cmd_security_req *rp = (void *) skb->data;
+-	struct smp_cmd_pairing cp;
+ 	struct hci_conn *hcon = conn->hcon;
+ 	struct hci_dev *hdev = hcon->hdev;
+ 	struct smp_chan *smp;
+@@ -2347,16 +2365,20 @@ static u8 smp_cmd_security_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ 
+ 	skb_pull(skb, sizeof(*rp));
+ 
+-	memset(&cp, 0, sizeof(cp));
+-	build_pairing_cmd(conn, &cp, NULL, auth);
++	smp_send_pairing_req(smp, auth);
+ 
+-	smp->preq[0] = SMP_CMD_PAIRING_REQ;
+-	memcpy(&smp->preq[1], &cp, sizeof(cp));
++	return 0;
++}
+ 
+-	smp_send_cmd(conn, SMP_CMD_PAIRING_REQ, sizeof(cp), &cp);
+-	SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
++static void smp_send_security_req(struct smp_chan *smp, __u8 auth)
++{
++	struct smp_cmd_security_req cp;
+ 
+-	return 0;
++	cp.auth_req = auth;
++	smp_send_cmd(smp->conn, SMP_CMD_SECURITY_REQ, sizeof(cp), &cp);
++	SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_REQ);
++
++	clear_bit(SMP_FLAG_INITIATOR, &smp->flags);
+ }
+ 
+ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level)
+@@ -2427,23 +2449,11 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level)
+ 			authreq |= SMP_AUTH_MITM;
+ 	}
+ 
+-	if (hcon->role == HCI_ROLE_MASTER) {
+-		struct smp_cmd_pairing cp;
+-
+-		build_pairing_cmd(conn, &cp, NULL, authreq);
+-		smp->preq[0] = SMP_CMD_PAIRING_REQ;
+-		memcpy(&smp->preq[1], &cp, sizeof(cp));
+-
+-		smp_send_cmd(conn, SMP_CMD_PAIRING_REQ, sizeof(cp), &cp);
+-		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
+-	} else {
+-		struct smp_cmd_security_req cp;
+-		cp.auth_req = authreq;
+-		smp_send_cmd(conn, SMP_CMD_SECURITY_REQ, sizeof(cp), &cp);
+-		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_REQ);
+-	}
++	if (hcon->role == HCI_ROLE_MASTER)
++		smp_send_pairing_req(smp, authreq);
++	else
++		smp_send_security_req(smp, authreq);
+ 
+-	set_bit(SMP_FLAG_INITIATOR, &smp->flags);
+ 	ret = 0;
+ 
+ unlock:
+@@ -2694,8 +2704,6 @@ static int smp_cmd_sign_info(struct l2cap_conn *conn, struct sk_buff *skb)
+ 
+ static u8 sc_select_method(struct smp_chan *smp)
+ {
+-	struct l2cap_conn *conn = smp->conn;
+-	struct hci_conn *hcon = conn->hcon;
+ 	struct smp_cmd_pairing *local, *remote;
+ 	u8 local_mitm, remote_mitm, local_io, remote_io, method;
+ 
+@@ -2708,7 +2716,7 @@ static u8 sc_select_method(struct smp_chan *smp)
+ 	 * the "struct smp_cmd_pairing" from them we need to skip the
+ 	 * first byte which contains the opcode.
+ 	 */
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		local = (void *) &smp->preq[1];
+ 		remote = (void *) &smp->prsp[1];
+ 	} else {
+@@ -2777,7 +2785,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	/* Non-initiating device sends its public key after receiving
+ 	 * the key from the initiating device.
+ 	 */
+-	if (!hcon->out) {
++	if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		err = sc_send_public_key(smp);
+ 		if (err)
+ 			return err;
+@@ -2839,7 +2847,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	}
+ 
+ 	if (smp->method == REQ_OOB) {
+-		if (hcon->out)
++		if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 			smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM,
+ 				     sizeof(smp->prnd), smp->prnd);
+ 
+@@ -2848,7 +2856,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 		return 0;
+ 	}
+ 
+-	if (hcon->out)
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 		SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_CONFIRM);
+ 
+ 	if (smp->method == REQ_PASSKEY) {
+@@ -2863,7 +2871,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	/* The Initiating device waits for the non-initiating device to
+ 	 * send the confirm value.
+ 	 */
+-	if (conn->hcon->out)
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
+ 		return 0;
+ 
+ 	err = smp_f4(smp->tfm_cmac, smp->local_pk, smp->remote_pk, smp->prnd,
+@@ -2897,7 +2905,7 @@ static int smp_cmd_dhkey_check(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	a[6] = hcon->init_addr_type;
+ 	b[6] = hcon->resp_addr_type;
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		local_addr = a;
+ 		remote_addr = b;
+ 		memcpy(io_cap, &smp->prsp[1], 3);
+@@ -2922,7 +2930,7 @@ static int smp_cmd_dhkey_check(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	if (crypto_memneq(check->e, e, 16))
+ 		return SMP_DHKEY_CHECK_FAILED;
+ 
+-	if (!hcon->out) {
++	if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		if (test_bit(SMP_FLAG_WAIT_USER, &smp->flags)) {
+ 			set_bit(SMP_FLAG_DHKEY_PENDING, &smp->flags);
+ 			return 0;
+@@ -2934,7 +2942,7 @@ static int smp_cmd_dhkey_check(struct l2cap_conn *conn, struct sk_buff *skb)
+ 
+ 	sc_add_ltk(smp);
+ 
+-	if (hcon->out) {
++	if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
+ 		hci_le_start_enc(hcon, 0, 0, smp->tk, smp->enc_key_size);
+ 		hcon->enc_key_size = smp->enc_key_size;
+ 	}
+@@ -3083,7 +3091,6 @@ static void bredr_pairing(struct l2cap_chan *chan)
+ 	struct l2cap_conn *conn = chan->conn;
+ 	struct hci_conn *hcon = conn->hcon;
+ 	struct hci_dev *hdev = hcon->hdev;
+-	struct smp_cmd_pairing req;
+ 	struct smp_chan *smp;
+ 
+ 	bt_dev_dbg(hdev, "chan %p", chan);
+@@ -3135,14 +3142,7 @@ static void bredr_pairing(struct l2cap_chan *chan)
+ 
+ 	bt_dev_dbg(hdev, "starting SMP over BR/EDR");
+ 
+-	/* Prepare and send the BR/EDR SMP Pairing Request */
+-	build_bredr_pairing_cmd(smp, &req, NULL);
+-
+-	smp->preq[0] = SMP_CMD_PAIRING_REQ;
+-	memcpy(&smp->preq[1], &req, sizeof(req));
+-
+-	smp_send_cmd(conn, SMP_CMD_PAIRING_REQ, sizeof(req), &req);
+-	SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
++	smp_send_pairing_req(smp, 0x00);
+ }
+ 
+ static void smp_resume_cb(struct l2cap_chan *chan)
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index bf30c50b56895..a9e1b56f854d4 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -619,8 +619,12 @@ static unsigned int br_nf_local_in(void *priv,
+ 	if (likely(nf_ct_is_confirmed(ct)))
+ 		return NF_ACCEPT;
+ 
++	if (WARN_ON_ONCE(refcount_read(&nfct->use) != 1)) {
++		nf_reset_ct(skb);
++		return NF_ACCEPT;
++	}
++
+ 	WARN_ON_ONCE(skb_shared(skb));
+-	WARN_ON_ONCE(refcount_read(&nfct->use) != 1);
+ 
+ 	/* We can't call nf_confirm here, it would create a dependency
+ 	 * on nf_conntrack module.
+diff --git a/net/dsa/tag_ocelot.c b/net/dsa/tag_ocelot.c
+index e0e4300bfbd3f..bf6608fc6be70 100644
+--- a/net/dsa/tag_ocelot.c
++++ b/net/dsa/tag_ocelot.c
+@@ -8,40 +8,6 @@
+ #define OCELOT_NAME	"ocelot"
+ #define SEVILLE_NAME	"seville"
+ 
+-/* If the port is under a VLAN-aware bridge, remove the VLAN header from the
+- * payload and move it into the DSA tag, which will make the switch classify
+- * the packet to the bridge VLAN. Otherwise, leave the classified VLAN at zero,
+- * which is the pvid of standalone and VLAN-unaware bridge ports.
+- */
+-static void ocelot_xmit_get_vlan_info(struct sk_buff *skb, struct dsa_port *dp,
+-				      u64 *vlan_tci, u64 *tag_type)
+-{
+-	struct net_device *br = dsa_port_bridge_dev_get(dp);
+-	struct vlan_ethhdr *hdr;
+-	u16 proto, tci;
+-
+-	if (!br || !br_vlan_enabled(br)) {
+-		*vlan_tci = 0;
+-		*tag_type = IFH_TAG_TYPE_C;
+-		return;
+-	}
+-
+-	hdr = skb_vlan_eth_hdr(skb);
+-	br_vlan_get_proto(br, &proto);
+-
+-	if (ntohs(hdr->h_vlan_proto) == proto) {
+-		vlan_remove_tag(skb, &tci);
+-		*vlan_tci = tci;
+-	} else {
+-		rcu_read_lock();
+-		br_vlan_get_pvid_rcu(br, &tci);
+-		rcu_read_unlock();
+-		*vlan_tci = tci;
+-	}
+-
+-	*tag_type = (proto != ETH_P_8021Q) ? IFH_TAG_TYPE_S : IFH_TAG_TYPE_C;
+-}
+-
+ static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
+ 			       __be32 ifh_prefix, void **ifh)
+ {
+@@ -53,7 +19,8 @@ static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
+ 	u32 rew_op = 0;
+ 	u64 qos_class;
+ 
+-	ocelot_xmit_get_vlan_info(skb, dp, &vlan_tci, &tag_type);
++	ocelot_xmit_get_vlan_info(skb, dsa_port_bridge_dev_get(dp), &vlan_tci,
++				  &tag_type);
+ 
+ 	qos_class = netdev_get_num_tc(netdev) ?
+ 		    netdev_get_prio_tc_map(netdev, skb->priority) : skb->priority;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index ecd521108559f..2c52f6dcbd290 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -238,9 +238,14 @@ static void tcp_measure_rcv_mss(struct sock *sk, const struct sk_buff *skb)
+ 		 */
+ 		if (unlikely(len != icsk->icsk_ack.rcv_mss)) {
+ 			u64 val = (u64)skb->len << TCP_RMEM_TO_WIN_SCALE;
++			u8 old_ratio = tcp_sk(sk)->scaling_ratio;
+ 
+ 			do_div(val, skb->truesize);
+ 			tcp_sk(sk)->scaling_ratio = val ? val : 1;
++
++			if (old_ratio != tcp_sk(sk)->scaling_ratio)
++				WRITE_ONCE(tcp_sk(sk)->window_clamp,
++					   tcp_win_from_space(sk, sk->sk_rcvbuf));
+ 		}
+ 		icsk->icsk_ack.rcv_mss = min_t(unsigned int, len,
+ 					       tcp_sk(sk)->advmss);
+@@ -754,7 +759,8 @@ void tcp_rcv_space_adjust(struct sock *sk)
+ 	 * <prev RTT . ><current RTT .. ><next RTT .... >
+ 	 */
+ 
+-	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf)) {
++	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf) &&
++	    !(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
+ 		u64 rcvwin, grow;
+ 		int rcvbuf;
+ 
+@@ -770,22 +776,12 @@ void tcp_rcv_space_adjust(struct sock *sk)
+ 
+ 		rcvbuf = min_t(u64, tcp_space_from_win(sk, rcvwin),
+ 			       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
+-		if (!(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
+-			if (rcvbuf > sk->sk_rcvbuf) {
+-				WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
+-
+-				/* Make the window clamp follow along.  */
+-				WRITE_ONCE(tp->window_clamp,
+-					   tcp_win_from_space(sk, rcvbuf));
+-			}
+-		} else {
+-			/* Make the window clamp follow along while being bounded
+-			 * by SO_RCVBUF.
+-			 */
+-			int clamp = tcp_win_from_space(sk, min(rcvbuf, sk->sk_rcvbuf));
++		if (rcvbuf > sk->sk_rcvbuf) {
++			WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
+ 
+-			if (clamp > tp->window_clamp)
+-				WRITE_ONCE(tp->window_clamp, clamp);
++			/* Make the window clamp follow along.  */
++			WRITE_ONCE(tp->window_clamp,
++				   tcp_win_from_space(sk, rcvbuf));
+ 		}
+ 	}
+ 	tp->rcvq_space.space = copied;
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index a541659b6562b..8f8f93716ff85 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -95,6 +95,8 @@ EXPORT_SYMBOL(tcp_hashinfo);
+ 
+ static DEFINE_PER_CPU(struct sock *, ipv4_tcp_sk);
+ 
++static DEFINE_MUTEX(tcp_exit_batch_mutex);
++
+ static u32 tcp_v4_init_seq(const struct sk_buff *skb)
+ {
+ 	return secure_tcp_seq(ip_hdr(skb)->daddr,
+@@ -3509,6 +3511,16 @@ static void __net_exit tcp_sk_exit_batch(struct list_head *net_exit_list)
+ {
+ 	struct net *net;
+ 
++	/* make sure concurrent calls to tcp_sk_exit_batch from net_cleanup_work
++	 * and failed setup_net error unwinding path are serialized.
++	 *
++	 * tcp_twsk_purge() handles twsk in any dead netns, not just those in
++	 * net_exit_list, the thread that dismantles a particular twsk must
++	 * do so without other thread progressing to refcount_dec_and_test() of
++	 * tcp_death_row.tw_refcount.
++	 */
++	mutex_lock(&tcp_exit_batch_mutex);
++
+ 	tcp_twsk_purge(net_exit_list);
+ 
+ 	list_for_each_entry(net, net_exit_list, exit_list) {
+@@ -3516,6 +3528,8 @@ static void __net_exit tcp_sk_exit_batch(struct list_head *net_exit_list)
+ 		WARN_ON_ONCE(!refcount_dec_and_test(&net->ipv4.tcp_death_row.tw_refcount));
+ 		tcp_fastopen_ctx_destroy(net);
+ 	}
++
++	mutex_unlock(&tcp_exit_batch_mutex);
+ }
+ 
+ static struct pernet_operations __net_initdata tcp_sk_ops = {
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index ee9af921556a7..5b54f4f32b1cd 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -279,7 +279,8 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	if (unlikely(skb_checksum_start(gso_skb) !=
+-		     skb_transport_header(gso_skb)))
++		     skb_transport_header(gso_skb) &&
++		     !(skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST)))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	if (skb_gso_ok(gso_skb, features | NETIF_F_GSO_ROBUST)) {
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 784424ac41477..c49344d8311ab 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -70,11 +70,15 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
+ 
+ 	/* Be paranoid, rather than too clever. */
+ 	if (unlikely(hh_len > skb_headroom(skb)) && dev->header_ops) {
++		/* Make sure idev stays alive */
++		rcu_read_lock();
+ 		skb = skb_expand_head(skb, hh_len);
+ 		if (!skb) {
+ 			IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
++			rcu_read_unlock();
+ 			return -ENOMEM;
+ 		}
++		rcu_read_unlock();
+ 	}
+ 
+ 	hdr = ipv6_hdr(skb);
+@@ -283,11 +287,15 @@ int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6,
+ 		head_room += opt->opt_nflen + opt->opt_flen;
+ 
+ 	if (unlikely(head_room > skb_headroom(skb))) {
++		/* Make sure idev stays alive */
++		rcu_read_lock();
+ 		skb = skb_expand_head(skb, head_room);
+ 		if (!skb) {
+ 			IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
++			rcu_read_unlock();
+ 			return -ENOBUFS;
+ 		}
++		rcu_read_unlock();
+ 	}
+ 
+ 	if (opt) {
+@@ -1953,6 +1961,7 @@ int ip6_send_skb(struct sk_buff *skb)
+ 	struct rt6_info *rt = dst_rt6_info(skb_dst(skb));
+ 	int err;
+ 
++	rcu_read_lock();
+ 	err = ip6_local_out(net, skb->sk, skb);
+ 	if (err) {
+ 		if (err > 0)
+@@ -1962,6 +1971,7 @@ int ip6_send_skb(struct sk_buff *skb)
+ 				      IPSTATS_MIB_OUTDISCARDS);
+ 	}
+ 
++	rcu_read_unlock();
+ 	return err;
+ }
+ 
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 9dee0c1279554..87dfb565a9f81 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1507,7 +1507,8 @@ static void ip6_tnl_link_config(struct ip6_tnl *t)
+ 			tdev = __dev_get_by_index(t->net, p->link);
+ 
+ 		if (tdev) {
+-			dev->hard_header_len = tdev->hard_header_len + t_hlen;
++			dev->needed_headroom = tdev->hard_header_len +
++				tdev->needed_headroom + t_hlen;
+ 			mtu = min_t(unsigned int, tdev->mtu, IP6_MAX_MTU);
+ 
+ 			mtu = mtu - t_hlen;
+@@ -1731,7 +1732,9 @@ ip6_tnl_siocdevprivate(struct net_device *dev, struct ifreq *ifr,
+ int ip6_tnl_change_mtu(struct net_device *dev, int new_mtu)
+ {
+ 	struct ip6_tnl *tnl = netdev_priv(dev);
++	int t_hlen;
+ 
++	t_hlen = tnl->hlen + sizeof(struct ipv6hdr);
+ 	if (tnl->parms.proto == IPPROTO_IPV6) {
+ 		if (new_mtu < IPV6_MIN_MTU)
+ 			return -EINVAL;
+@@ -1740,10 +1743,10 @@ int ip6_tnl_change_mtu(struct net_device *dev, int new_mtu)
+ 			return -EINVAL;
+ 	}
+ 	if (tnl->parms.proto == IPPROTO_IPV6 || tnl->parms.proto == 0) {
+-		if (new_mtu > IP6_MAX_MTU - dev->hard_header_len)
++		if (new_mtu > IP6_MAX_MTU - dev->hard_header_len - t_hlen)
+ 			return -EINVAL;
+ 	} else {
+-		if (new_mtu > IP_MAX_MTU - dev->hard_header_len)
++		if (new_mtu > IP_MAX_MTU - dev->hard_header_len - t_hlen)
+ 			return -EINVAL;
+ 	}
+ 	WRITE_ONCE(dev->mtu, new_mtu);
+@@ -1887,12 +1890,11 @@ ip6_tnl_dev_init_gen(struct net_device *dev)
+ 	t_hlen = t->hlen + sizeof(struct ipv6hdr);
+ 
+ 	dev->type = ARPHRD_TUNNEL6;
+-	dev->hard_header_len = LL_MAX_HEADER + t_hlen;
+ 	dev->mtu = ETH_DATA_LEN - t_hlen;
+ 	if (!(t->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT))
+ 		dev->mtu -= 8;
+ 	dev->min_mtu = ETH_MIN_MTU;
+-	dev->max_mtu = IP6_MAX_MTU - dev->hard_header_len;
++	dev->max_mtu = IP6_MAX_MTU - dev->hard_header_len - t_hlen;
+ 
+ 	netdev_hold(dev, &t->dev_tracker, GFP_KERNEL);
+ 	netdev_lockdep_set_classes(dev);
+diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
+index 5e1b50c6a44d2..3e9779ed7daec 100644
+--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
++++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
+@@ -154,6 +154,10 @@ static struct frag_queue *fq_find(struct net *net, __be32 id, u32 user,
+ 	};
+ 	struct inet_frag_queue *q;
+ 
++	if (!(ipv6_addr_type(&hdr->daddr) & (IPV6_ADDR_MULTICAST |
++					    IPV6_ADDR_LINKLOCAL)))
++		key.iif = 0;
++
+ 	q = inet_frag_find(nf_frag->fqdir, &key);
+ 	if (!q)
+ 		return NULL;
+diff --git a/net/iucv/iucv.c b/net/iucv/iucv.c
+index b7bf34a5eb37a..1235307020075 100644
+--- a/net/iucv/iucv.c
++++ b/net/iucv/iucv.c
+@@ -86,13 +86,15 @@ struct device *iucv_alloc_device(const struct attribute_group **attrs,
+ {
+ 	struct device *dev;
+ 	va_list vargs;
++	char buf[20];
+ 	int rc;
+ 
+ 	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ 	if (!dev)
+ 		goto out_error;
+ 	va_start(vargs, fmt);
+-	rc = dev_set_name(dev, fmt, vargs);
++	vsnprintf(buf, sizeof(buf), fmt, vargs);
++	rc = dev_set_name(dev, "%s", buf);
+ 	va_end(vargs);
+ 	if (rc)
+ 		goto out_error;
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 2f191e50d4fc9..d4118c796290e 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -755,6 +755,7 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 		  !(msg->msg_flags & MSG_MORE) : !!(msg->msg_flags & MSG_EOR);
+ 	int err = -EPIPE;
+ 
++	mutex_lock(&kcm->tx_mutex);
+ 	lock_sock(sk);
+ 
+ 	/* Per tcp_sendmsg this should be in poll */
+@@ -926,6 +927,7 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 	KCM_STATS_ADD(kcm->stats.tx_bytes, copied);
+ 
+ 	release_sock(sk);
++	mutex_unlock(&kcm->tx_mutex);
+ 	return copied;
+ 
+ out_error:
+@@ -951,6 +953,7 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 		sk->sk_write_space(sk);
+ 
+ 	release_sock(sk);
++	mutex_unlock(&kcm->tx_mutex);
+ 	return err;
+ }
+ 
+@@ -1204,6 +1207,7 @@ static void init_kcm_sock(struct kcm_sock *kcm, struct kcm_mux *mux)
+ 	spin_unlock_bh(&mux->lock);
+ 
+ 	INIT_WORK(&kcm->tx_work, kcm_tx_work);
++	mutex_init(&kcm->tx_mutex);
+ 
+ 	spin_lock_bh(&mux->rx_lock);
+ 	kcm_rcv_ready(kcm);
+diff --git a/net/mctp/test/route-test.c b/net/mctp/test/route-test.c
+index 77e5dd4222580..8551dab1d1e69 100644
+--- a/net/mctp/test/route-test.c
++++ b/net/mctp/test/route-test.c
+@@ -366,7 +366,7 @@ static void mctp_test_route_input_sk(struct kunit *test)
+ 
+ 		skb2 = skb_recv_datagram(sock->sk, MSG_DONTWAIT, &rc);
+ 		KUNIT_EXPECT_NOT_ERR_OR_NULL(test, skb2);
+-		KUNIT_EXPECT_EQ(test, skb->len, 1);
++		KUNIT_EXPECT_EQ(test, skb2->len, 1);
+ 
+ 		skb_free_datagram(sock->sk, skb2);
+ 
+diff --git a/net/mptcp/diag.c b/net/mptcp/diag.c
+index 3ae46b545d2c2..2d3efb405437d 100644
+--- a/net/mptcp/diag.c
++++ b/net/mptcp/diag.c
+@@ -94,7 +94,7 @@ static size_t subflow_get_info_size(const struct sock *sk)
+ 		nla_total_size(4) +	/* MPTCP_SUBFLOW_ATTR_RELWRITE_SEQ */
+ 		nla_total_size_64bit(8) +	/* MPTCP_SUBFLOW_ATTR_MAP_SEQ */
+ 		nla_total_size(4) +	/* MPTCP_SUBFLOW_ATTR_MAP_SFSEQ */
+-		nla_total_size(2) +	/* MPTCP_SUBFLOW_ATTR_SSN_OFFSET */
++		nla_total_size(4) +	/* MPTCP_SUBFLOW_ATTR_SSN_OFFSET */
+ 		nla_total_size(2) +	/* MPTCP_SUBFLOW_ATTR_MAP_DATALEN */
+ 		nla_total_size(4) +	/* MPTCP_SUBFLOW_ATTR_FLAGS */
+ 		nla_total_size(1) +	/* MPTCP_SUBFLOW_ATTR_ID_REM */
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index 23bb89c94e90d..3e6e0f5510bb1 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -60,16 +60,6 @@ int mptcp_pm_remove_addr(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_
+ 	return 0;
+ }
+ 
+-int mptcp_pm_remove_subflow(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_list)
+-{
+-	pr_debug("msk=%p, rm_list_nr=%d", msk, rm_list->nr);
+-
+-	spin_lock_bh(&msk->pm.lock);
+-	mptcp_pm_nl_rm_subflow_received(msk, rm_list);
+-	spin_unlock_bh(&msk->pm.lock);
+-	return 0;
+-}
+-
+ /* path manager event handlers */
+ 
+ void mptcp_pm_new_connection(struct mptcp_sock *msk, const struct sock *ssk, int server_side)
+@@ -444,9 +434,6 @@ int mptcp_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk, unsigned int id
+ 	*flags = 0;
+ 	*ifindex = 0;
+ 
+-	if (!id)
+-		return 0;
+-
+ 	if (mptcp_pm_is_userspace(msk))
+ 		return mptcp_userspace_pm_get_flags_and_ifindex_by_id(msk, id, flags, ifindex);
+ 	return mptcp_pm_nl_get_flags_and_ifindex_by_id(msk, id, flags, ifindex);
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 4cae2aa7be5cb..3e4ad801786f2 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -143,11 +143,13 @@ static bool lookup_subflow_by_daddr(const struct list_head *list,
+ 	return false;
+ }
+ 
+-static struct mptcp_pm_addr_entry *
++static bool
+ select_local_address(const struct pm_nl_pernet *pernet,
+-		     const struct mptcp_sock *msk)
++		     const struct mptcp_sock *msk,
++		     struct mptcp_pm_addr_entry *new_entry)
+ {
+-	struct mptcp_pm_addr_entry *entry, *ret = NULL;
++	struct mptcp_pm_addr_entry *entry;
++	bool found = false;
+ 
+ 	msk_owned_by_me(msk);
+ 
+@@ -159,17 +161,21 @@ select_local_address(const struct pm_nl_pernet *pernet,
+ 		if (!test_bit(entry->addr.id, msk->pm.id_avail_bitmap))
+ 			continue;
+ 
+-		ret = entry;
++		*new_entry = *entry;
++		found = true;
+ 		break;
+ 	}
+ 	rcu_read_unlock();
+-	return ret;
++
++	return found;
+ }
+ 
+-static struct mptcp_pm_addr_entry *
+-select_signal_address(struct pm_nl_pernet *pernet, const struct mptcp_sock *msk)
++static bool
++select_signal_address(struct pm_nl_pernet *pernet, const struct mptcp_sock *msk,
++		      struct mptcp_pm_addr_entry *new_entry)
+ {
+-	struct mptcp_pm_addr_entry *entry, *ret = NULL;
++	struct mptcp_pm_addr_entry *entry;
++	bool found = false;
+ 
+ 	rcu_read_lock();
+ 	/* do not keep any additional per socket state, just signal
+@@ -184,11 +190,13 @@ select_signal_address(struct pm_nl_pernet *pernet, const struct mptcp_sock *msk)
+ 		if (!(entry->flags & MPTCP_PM_ADDR_FLAG_SIGNAL))
+ 			continue;
+ 
+-		ret = entry;
++		*new_entry = *entry;
++		found = true;
+ 		break;
+ 	}
+ 	rcu_read_unlock();
+-	return ret;
++
++	return found;
+ }
+ 
+ unsigned int mptcp_pm_get_add_addr_signal_max(const struct mptcp_sock *msk)
+@@ -512,9 +520,10 @@ __lookup_addr(struct pm_nl_pernet *pernet, const struct mptcp_addr_info *info)
+ 
+ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ {
+-	struct mptcp_pm_addr_entry *local, *signal_and_subflow = NULL;
+ 	struct sock *sk = (struct sock *)msk;
++	struct mptcp_pm_addr_entry local;
+ 	unsigned int add_addr_signal_max;
++	bool signal_and_subflow = false;
+ 	unsigned int local_addr_max;
+ 	struct pm_nl_pernet *pernet;
+ 	unsigned int subflows_max;
+@@ -565,23 +574,22 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ 		if (msk->pm.addr_signal & BIT(MPTCP_ADD_ADDR_SIGNAL))
+ 			return;
+ 
+-		local = select_signal_address(pernet, msk);
+-		if (!local)
++		if (!select_signal_address(pernet, msk, &local))
+ 			goto subflow;
+ 
+ 		/* If the alloc fails, we are on memory pressure, not worth
+ 		 * continuing, and trying to create subflows.
+ 		 */
+-		if (!mptcp_pm_alloc_anno_list(msk, &local->addr))
++		if (!mptcp_pm_alloc_anno_list(msk, &local.addr))
+ 			return;
+ 
+-		__clear_bit(local->addr.id, msk->pm.id_avail_bitmap);
++		__clear_bit(local.addr.id, msk->pm.id_avail_bitmap);
+ 		msk->pm.add_addr_signaled++;
+-		mptcp_pm_announce_addr(msk, &local->addr, false);
++		mptcp_pm_announce_addr(msk, &local.addr, false);
+ 		mptcp_pm_nl_addr_send_ack(msk);
+ 
+-		if (local->flags & MPTCP_PM_ADDR_FLAG_SUBFLOW)
+-			signal_and_subflow = local;
++		if (local.flags & MPTCP_PM_ADDR_FLAG_SUBFLOW)
++			signal_and_subflow = true;
+ 	}
+ 
+ subflow:
+@@ -592,26 +600,22 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ 		bool fullmesh;
+ 		int i, nr;
+ 
+-		if (signal_and_subflow) {
+-			local = signal_and_subflow;
+-			signal_and_subflow = NULL;
+-		} else {
+-			local = select_local_address(pernet, msk);
+-			if (!local)
+-				break;
+-		}
++		if (signal_and_subflow)
++			signal_and_subflow = false;
++		else if (!select_local_address(pernet, msk, &local))
++			break;
+ 
+-		fullmesh = !!(local->flags & MPTCP_PM_ADDR_FLAG_FULLMESH);
++		fullmesh = !!(local.flags & MPTCP_PM_ADDR_FLAG_FULLMESH);
+ 
+ 		msk->pm.local_addr_used++;
+-		__clear_bit(local->addr.id, msk->pm.id_avail_bitmap);
+-		nr = fill_remote_addresses_vec(msk, &local->addr, fullmesh, addrs);
++		__clear_bit(local.addr.id, msk->pm.id_avail_bitmap);
++		nr = fill_remote_addresses_vec(msk, &local.addr, fullmesh, addrs);
+ 		if (nr == 0)
+ 			continue;
+ 
+ 		spin_unlock_bh(&msk->pm.lock);
+ 		for (i = 0; i < nr; i++)
+-			__mptcp_subflow_connect(sk, &local->addr, &addrs[i]);
++			__mptcp_subflow_connect(sk, &local.addr, &addrs[i]);
+ 		spin_lock_bh(&msk->pm.lock);
+ 	}
+ 	mptcp_pm_nl_check_work_pending(msk);
+@@ -636,6 +640,7 @@ static unsigned int fill_local_addresses_vec(struct mptcp_sock *msk,
+ {
+ 	struct sock *sk = (struct sock *)msk;
+ 	struct mptcp_pm_addr_entry *entry;
++	struct mptcp_addr_info mpc_addr;
+ 	struct pm_nl_pernet *pernet;
+ 	unsigned int subflows_max;
+ 	int i = 0;
+@@ -643,6 +648,8 @@ static unsigned int fill_local_addresses_vec(struct mptcp_sock *msk,
+ 	pernet = pm_nl_get_pernet_from_msk(msk);
+ 	subflows_max = mptcp_pm_get_subflows_max(msk);
+ 
++	mptcp_local_address((struct sock_common *)msk, &mpc_addr);
++
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(entry, &pernet->local_addr_list, list) {
+ 		if (!(entry->flags & MPTCP_PM_ADDR_FLAG_FULLMESH))
+@@ -653,7 +660,13 @@ static unsigned int fill_local_addresses_vec(struct mptcp_sock *msk,
+ 
+ 		if (msk->pm.subflows < subflows_max) {
+ 			msk->pm.subflows++;
+-			addrs[i++] = entry->addr;
++			addrs[i] = entry->addr;
++
++			/* Special case for ID0: set the correct ID */
++			if (mptcp_addresses_equal(&entry->addr, &mpc_addr, entry->addr.port))
++				addrs[i].id = 0;
++
++			i++;
+ 		}
+ 	}
+ 	rcu_read_unlock();
+@@ -829,25 +842,27 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
+ 			mptcp_close_ssk(sk, ssk, subflow);
+ 			spin_lock_bh(&msk->pm.lock);
+ 
+-			removed = true;
++			removed |= subflow->request_join;
+ 			if (rm_type == MPTCP_MIB_RMSUBFLOW)
+ 				__MPTCP_INC_STATS(sock_net(sk), rm_type);
+ 		}
+-		if (rm_type == MPTCP_MIB_RMSUBFLOW)
+-			__set_bit(rm_id ? rm_id : msk->mpc_endpoint_id, msk->pm.id_avail_bitmap);
+-		else if (rm_type == MPTCP_MIB_RMADDR)
++
++		if (rm_type == MPTCP_MIB_RMADDR)
+ 			__MPTCP_INC_STATS(sock_net(sk), rm_type);
++
+ 		if (!removed)
+ 			continue;
+ 
+ 		if (!mptcp_pm_is_kernel(msk))
+ 			continue;
+ 
+-		if (rm_type == MPTCP_MIB_RMADDR) {
+-			msk->pm.add_addr_accepted--;
+-			WRITE_ONCE(msk->pm.accept_addr, true);
+-		} else if (rm_type == MPTCP_MIB_RMSUBFLOW) {
+-			msk->pm.local_addr_used--;
++		if (rm_type == MPTCP_MIB_RMADDR && rm_id &&
++		    !WARN_ON_ONCE(msk->pm.add_addr_accepted == 0)) {
++			/* Note: if the subflow has been closed before, this
++			 * add_addr_accepted counter will not be decremented.
++			 */
++			if (--msk->pm.add_addr_accepted < mptcp_pm_get_add_addr_accept_max(msk))
++				WRITE_ONCE(msk->pm.accept_addr, true);
+ 		}
+ 	}
+ }
+@@ -857,8 +872,8 @@ static void mptcp_pm_nl_rm_addr_received(struct mptcp_sock *msk)
+ 	mptcp_pm_nl_rm_addr_or_subflow(msk, &msk->pm.rm_list_rx, MPTCP_MIB_RMADDR);
+ }
+ 
+-void mptcp_pm_nl_rm_subflow_received(struct mptcp_sock *msk,
+-				     const struct mptcp_rm_list *rm_list)
++static void mptcp_pm_nl_rm_subflow_received(struct mptcp_sock *msk,
++					    const struct mptcp_rm_list *rm_list)
+ {
+ 	mptcp_pm_nl_rm_addr_or_subflow(msk, rm_list, MPTCP_MIB_RMSUBFLOW);
+ }
+@@ -1393,6 +1408,10 @@ int mptcp_pm_nl_get_flags_and_ifindex_by_id(struct mptcp_sock *msk, unsigned int
+ 	struct sock *sk = (struct sock *)msk;
+ 	struct net *net = sock_net(sk);
+ 
++	/* No entries with ID 0 */
++	if (id == 0)
++		return 0;
++
+ 	rcu_read_lock();
+ 	entry = __lookup_addr_by_id(pm_nl_get_pernet(net), id);
+ 	if (entry) {
+@@ -1431,13 +1450,24 @@ static bool mptcp_pm_remove_anno_addr(struct mptcp_sock *msk,
+ 	ret = remove_anno_list_by_saddr(msk, addr);
+ 	if (ret || force) {
+ 		spin_lock_bh(&msk->pm.lock);
+-		msk->pm.add_addr_signaled -= ret;
++		if (ret) {
++			__set_bit(addr->id, msk->pm.id_avail_bitmap);
++			msk->pm.add_addr_signaled--;
++		}
+ 		mptcp_pm_remove_addr(msk, &list);
+ 		spin_unlock_bh(&msk->pm.lock);
+ 	}
+ 	return ret;
+ }
+ 
++static void __mark_subflow_endp_available(struct mptcp_sock *msk, u8 id)
++{
++	/* If it was marked as used, and not ID 0, decrement local_addr_used */
++	if (!__test_and_set_bit(id ? : msk->mpc_endpoint_id, msk->pm.id_avail_bitmap) &&
++	    id && !WARN_ON_ONCE(msk->pm.local_addr_used == 0))
++		msk->pm.local_addr_used--;
++}
++
+ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
+ 						   const struct mptcp_pm_addr_entry *entry)
+ {
+@@ -1466,8 +1496,19 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
+ 		remove_subflow = lookup_subflow_by_saddr(&msk->conn_list, addr);
+ 		mptcp_pm_remove_anno_addr(msk, addr, remove_subflow &&
+ 					  !(entry->flags & MPTCP_PM_ADDR_FLAG_IMPLICIT));
+-		if (remove_subflow)
+-			mptcp_pm_remove_subflow(msk, &list);
++
++		if (remove_subflow) {
++			spin_lock_bh(&msk->pm.lock);
++			mptcp_pm_nl_rm_subflow_received(msk, &list);
++			spin_unlock_bh(&msk->pm.lock);
++		}
++
++		if (entry->flags & MPTCP_PM_ADDR_FLAG_SUBFLOW) {
++			spin_lock_bh(&msk->pm.lock);
++			__mark_subflow_endp_available(msk, list.ids[0]);
++			spin_unlock_bh(&msk->pm.lock);
++		}
++
+ 		release_sock(sk);
+ 
+ next:
+@@ -1502,6 +1543,7 @@ static int mptcp_nl_remove_id_zero_address(struct net *net,
+ 		spin_lock_bh(&msk->pm.lock);
+ 		mptcp_pm_remove_addr(msk, &list);
+ 		mptcp_pm_nl_rm_subflow_received(msk, &list);
++		__mark_subflow_endp_available(msk, 0);
+ 		spin_unlock_bh(&msk->pm.lock);
+ 		release_sock(sk);
+ 
+@@ -1605,14 +1647,17 @@ static void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk,
+ 			alist.ids[alist.nr++] = entry->addr.id;
+ 	}
+ 
++	spin_lock_bh(&msk->pm.lock);
+ 	if (alist.nr) {
+-		spin_lock_bh(&msk->pm.lock);
+ 		msk->pm.add_addr_signaled -= alist.nr;
+ 		mptcp_pm_remove_addr(msk, &alist);
+-		spin_unlock_bh(&msk->pm.lock);
+ 	}
+ 	if (slist.nr)
+-		mptcp_pm_remove_subflow(msk, &slist);
++		mptcp_pm_nl_rm_subflow_received(msk, &slist);
++	/* Reset counters: maybe some subflows have been removed before */
++	bitmap_fill(msk->pm.id_avail_bitmap, MPTCP_PM_MAX_ADDR_ID + 1);
++	msk->pm.local_addr_used = 0;
++	spin_unlock_bh(&msk->pm.lock);
+ }
+ 
+ static void mptcp_nl_remove_addrs_list(struct net *net,
+@@ -1900,6 +1945,7 @@ static void mptcp_pm_nl_fullmesh(struct mptcp_sock *msk,
+ 
+ 	spin_lock_bh(&msk->pm.lock);
+ 	mptcp_pm_nl_rm_subflow_received(msk, &list);
++	__mark_subflow_endp_available(msk, list.ids[0]);
+ 	mptcp_pm_create_subflow_or_signal_addr(msk);
+ 	spin_unlock_bh(&msk->pm.lock);
+ }
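
[Editor's note on the pm_netlink.c hunks above: select_local_address()/select_signal_address() now fill a caller-provided copy and return bool instead of handing back a pointer into the RCU-protected endpoint list. Below is a minimal userspace sketch of that copy-out idea, using a plain mutex in place of RCU; the names (addr_entry, select_entry) are invented for illustration and are not kernel APIs.]

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct addr_entry {
	int id;
	char label[16];
	struct addr_entry *next;
};

static struct addr_entry *addr_list;
static pthread_mutex_t addr_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fill a caller-owned copy instead of returning a pointer that would
 * dangle once the lock protecting the list is released. */
static bool select_entry(int wanted_id, struct addr_entry *out)
{
	struct addr_entry *e;
	bool found = false;

	pthread_mutex_lock(&addr_lock);
	for (e = addr_list; e; e = e->next) {
		if (e->id == wanted_id) {
			*out = *e;
			found = true;
			break;
		}
	}
	pthread_mutex_unlock(&addr_lock);
	return found;
}

int main(void)
{
	struct addr_entry a = { .id = 1, .label = "endpoint-1" };
	struct addr_entry local;

	addr_list = &a;
	if (select_entry(1, &local))
		printf("using %s (id %d)\n", local.label, local.id);
	return 0;
}
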
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 8357046732f71..c7c846805c4e1 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -1021,7 +1021,6 @@ int mptcp_pm_announce_addr(struct mptcp_sock *msk,
+ 			   const struct mptcp_addr_info *addr,
+ 			   bool echo);
+ int mptcp_pm_remove_addr(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_list);
+-int mptcp_pm_remove_subflow(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_list);
+ void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list);
+ 
+ void mptcp_free_local_addr_list(struct mptcp_sock *msk);
+@@ -1128,8 +1127,6 @@ static inline u8 subflow_get_local_id(const struct mptcp_subflow_context *subflo
+ 
+ void __init mptcp_pm_nl_init(void);
+ void mptcp_pm_nl_work(struct mptcp_sock *msk);
+-void mptcp_pm_nl_rm_subflow_received(struct mptcp_sock *msk,
+-				     const struct mptcp_rm_list *rm_list);
+ unsigned int mptcp_pm_get_add_addr_signal_max(const struct mptcp_sock *msk);
+ unsigned int mptcp_pm_get_add_addr_accept_max(const struct mptcp_sock *msk);
+ unsigned int mptcp_pm_get_subflows_max(const struct mptcp_sock *msk);
+diff --git a/net/netfilter/nf_flow_table_inet.c b/net/netfilter/nf_flow_table_inet.c
+index 6eef15648b7b0..b0f1991719324 100644
+--- a/net/netfilter/nf_flow_table_inet.c
++++ b/net/netfilter/nf_flow_table_inet.c
+@@ -17,6 +17,9 @@ nf_flow_offload_inet_hook(void *priv, struct sk_buff *skb,
+ 
+ 	switch (skb->protocol) {
+ 	case htons(ETH_P_8021Q):
++		if (!pskb_may_pull(skb, skb_mac_offset(skb) + sizeof(*veth)))
++			return NF_ACCEPT;
++
+ 		veth = (struct vlan_ethhdr *)skb_mac_header(skb);
+ 		proto = veth->h_vlan_encapsulated_proto;
+ 		break;
+diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
+index c2c005234dcd3..98edcaa37b38d 100644
+--- a/net/netfilter/nf_flow_table_ip.c
++++ b/net/netfilter/nf_flow_table_ip.c
+@@ -281,6 +281,9 @@ static bool nf_flow_skb_encap_protocol(struct sk_buff *skb, __be16 proto,
+ 
+ 	switch (skb->protocol) {
+ 	case htons(ETH_P_8021Q):
++		if (!pskb_may_pull(skb, skb_mac_offset(skb) + sizeof(*veth)))
++			return false;
++
+ 		veth = (struct vlan_ethhdr *)skb_mac_header(skb);
+ 		if (veth->h_vlan_encapsulated_proto == proto) {
+ 			*offset += VLAN_HLEN;
+diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
+index a010b25076ca0..3d46372b538e5 100644
+--- a/net/netfilter/nf_flow_table_offload.c
++++ b/net/netfilter/nf_flow_table_offload.c
+@@ -841,8 +841,8 @@ static int nf_flow_offload_tuple(struct nf_flowtable *flowtable,
+ 				 struct list_head *block_cb_list)
+ {
+ 	struct flow_cls_offload cls_flow = {};
++	struct netlink_ext_ack extack = {};
+ 	struct flow_block_cb *block_cb;
+-	struct netlink_ext_ack extack;
+ 	__be16 proto = ETH_P_ALL;
+ 	int err, i = 0;
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 91cc3a81ba8f1..41d7faeb101cf 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -7977,6 +7977,19 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+ 	return skb->len;
+ }
+ 
++static int nf_tables_dumpreset_obj(struct sk_buff *skb,
++				   struct netlink_callback *cb)
++{
++	struct nftables_pernet *nft_net = nft_pernet(sock_net(skb->sk));
++	int ret;
++
++	mutex_lock(&nft_net->commit_mutex);
++	ret = nf_tables_dump_obj(skb, cb);
++	mutex_unlock(&nft_net->commit_mutex);
++
++	return ret;
++}
++
+ static int nf_tables_dump_obj_start(struct netlink_callback *cb)
+ {
+ 	struct nft_obj_dump_ctx *ctx = (void *)cb->ctx;
+@@ -7993,12 +8006,18 @@ static int nf_tables_dump_obj_start(struct netlink_callback *cb)
+ 	if (nla[NFTA_OBJ_TYPE])
+ 		ctx->type = ntohl(nla_get_be32(nla[NFTA_OBJ_TYPE]));
+ 
+-	if (NFNL_MSG_TYPE(cb->nlh->nlmsg_type) == NFT_MSG_GETOBJ_RESET)
+-		ctx->reset = true;
+-
+ 	return 0;
+ }
+ 
++static int nf_tables_dumpreset_obj_start(struct netlink_callback *cb)
++{
++	struct nft_obj_dump_ctx *ctx = (void *)cb->ctx;
++
++	ctx->reset = true;
++
++	return nf_tables_dump_obj_start(cb);
++}
++
+ static int nf_tables_dump_obj_done(struct netlink_callback *cb)
+ {
+ 	struct nft_obj_dump_ctx *ctx = (void *)cb->ctx;
+@@ -8009,8 +8028,9 @@ static int nf_tables_dump_obj_done(struct netlink_callback *cb)
+ }
+ 
+ /* called with rcu_read_lock held */
+-static int nf_tables_getobj(struct sk_buff *skb, const struct nfnl_info *info,
+-			    const struct nlattr * const nla[])
++static struct sk_buff *
++nf_tables_getobj_single(u32 portid, const struct nfnl_info *info,
++			const struct nlattr * const nla[], bool reset)
+ {
+ 	struct netlink_ext_ack *extack = info->extack;
+ 	u8 genmask = nft_genmask_cur(info->net);
+@@ -8019,72 +8039,109 @@ static int nf_tables_getobj(struct sk_buff *skb, const struct nfnl_info *info,
+ 	struct net *net = info->net;
+ 	struct nft_object *obj;
+ 	struct sk_buff *skb2;
+-	bool reset = false;
+ 	u32 objtype;
+ 	int err;
+ 
+-	if (info->nlh->nlmsg_flags & NLM_F_DUMP) {
+-		struct netlink_dump_control c = {
+-			.start = nf_tables_dump_obj_start,
+-			.dump = nf_tables_dump_obj,
+-			.done = nf_tables_dump_obj_done,
+-			.module = THIS_MODULE,
+-			.data = (void *)nla,
+-		};
+-
+-		return nft_netlink_dump_start_rcu(info->sk, skb, info->nlh, &c);
+-	}
+-
+ 	if (!nla[NFTA_OBJ_NAME] ||
+ 	    !nla[NFTA_OBJ_TYPE])
+-		return -EINVAL;
++		return ERR_PTR(-EINVAL);
+ 
+ 	table = nft_table_lookup(net, nla[NFTA_OBJ_TABLE], family, genmask, 0);
+ 	if (IS_ERR(table)) {
+ 		NL_SET_BAD_ATTR(extack, nla[NFTA_OBJ_TABLE]);
+-		return PTR_ERR(table);
++		return ERR_CAST(table);
+ 	}
+ 
+ 	objtype = ntohl(nla_get_be32(nla[NFTA_OBJ_TYPE]));
+ 	obj = nft_obj_lookup(net, table, nla[NFTA_OBJ_NAME], objtype, genmask);
+ 	if (IS_ERR(obj)) {
+ 		NL_SET_BAD_ATTR(extack, nla[NFTA_OBJ_NAME]);
+-		return PTR_ERR(obj);
++		return ERR_CAST(obj);
+ 	}
+ 
+ 	skb2 = alloc_skb(NLMSG_GOODSIZE, GFP_ATOMIC);
+ 	if (!skb2)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
+ 
+-	if (NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_GETOBJ_RESET)
+-		reset = true;
++	err = nf_tables_fill_obj_info(skb2, net, portid,
++				      info->nlh->nlmsg_seq, NFT_MSG_NEWOBJ, 0,
++				      family, table, obj, reset);
++	if (err < 0) {
++		kfree_skb(skb2);
++		return ERR_PTR(err);
++	}
+ 
+-	if (reset) {
+-		const struct nftables_pernet *nft_net;
+-		char *buf;
++	return skb2;
++}
+ 
+-		nft_net = nft_pernet(net);
+-		buf = kasprintf(GFP_ATOMIC, "%s:%u", table->name, nft_net->base_seq);
++static int nf_tables_getobj(struct sk_buff *skb, const struct nfnl_info *info,
++			    const struct nlattr * const nla[])
++{
++	u32 portid = NETLINK_CB(skb).portid;
++	struct sk_buff *skb2;
+ 
+-		audit_log_nfcfg(buf,
+-				family,
+-				1,
+-				AUDIT_NFT_OP_OBJ_RESET,
+-				GFP_ATOMIC);
+-		kfree(buf);
++	if (info->nlh->nlmsg_flags & NLM_F_DUMP) {
++		struct netlink_dump_control c = {
++			.start = nf_tables_dump_obj_start,
++			.dump = nf_tables_dump_obj,
++			.done = nf_tables_dump_obj_done,
++			.module = THIS_MODULE,
++			.data = (void *)nla,
++		};
++
++		return nft_netlink_dump_start_rcu(info->sk, skb, info->nlh, &c);
+ 	}
+ 
+-	err = nf_tables_fill_obj_info(skb2, net, NETLINK_CB(skb).portid,
+-				      info->nlh->nlmsg_seq, NFT_MSG_NEWOBJ, 0,
+-				      family, table, obj, reset);
+-	if (err < 0)
+-		goto err_fill_obj_info;
++	skb2 = nf_tables_getobj_single(portid, info, nla, false);
++	if (IS_ERR(skb2))
++		return PTR_ERR(skb2);
+ 
+-	return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
++	return nfnetlink_unicast(skb2, info->net, portid);
++}
+ 
+-err_fill_obj_info:
+-	kfree_skb(skb2);
+-	return err;
++static int nf_tables_getobj_reset(struct sk_buff *skb,
++				  const struct nfnl_info *info,
++				  const struct nlattr * const nla[])
++{
++	struct nftables_pernet *nft_net = nft_pernet(info->net);
++	u32 portid = NETLINK_CB(skb).portid;
++	struct net *net = info->net;
++	struct sk_buff *skb2;
++	char *buf;
++
++	if (info->nlh->nlmsg_flags & NLM_F_DUMP) {
++		struct netlink_dump_control c = {
++			.start = nf_tables_dumpreset_obj_start,
++			.dump = nf_tables_dumpreset_obj,
++			.done = nf_tables_dump_obj_done,
++			.module = THIS_MODULE,
++			.data = (void *)nla,
++		};
++
++		return nft_netlink_dump_start_rcu(info->sk, skb, info->nlh, &c);
++	}
++
++	if (!try_module_get(THIS_MODULE))
++		return -EINVAL;
++	rcu_read_unlock();
++	mutex_lock(&nft_net->commit_mutex);
++	skb2 = nf_tables_getobj_single(portid, info, nla, true);
++	mutex_unlock(&nft_net->commit_mutex);
++	rcu_read_lock();
++	module_put(THIS_MODULE);
++
++	if (IS_ERR(skb2))
++		return PTR_ERR(skb2);
++
++	buf = kasprintf(GFP_ATOMIC, "%.*s:%u",
++			nla_len(nla[NFTA_OBJ_TABLE]),
++			(char *)nla_data(nla[NFTA_OBJ_TABLE]),
++			nft_net->base_seq);
++	audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1,
++			AUDIT_NFT_OP_OBJ_RESET, GFP_ATOMIC);
++	kfree(buf);
++
++	return nfnetlink_unicast(skb2, net, portid);
+ }
+ 
+ static void nft_obj_destroy(const struct nft_ctx *ctx, struct nft_object *obj)
+@@ -9367,7 +9424,7 @@ static const struct nfnl_callback nf_tables_cb[NFT_MSG_MAX] = {
+ 		.policy		= nft_obj_policy,
+ 	},
+ 	[NFT_MSG_GETOBJ_RESET] = {
+-		.call		= nf_tables_getobj,
++		.call		= nf_tables_getobj_reset,
+ 		.type		= NFNL_CB_RCU,
+ 		.attr_count	= NFTA_OBJ_MAX,
+ 		.policy		= nft_obj_policy,
+diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
+index 4abf660c7baff..932b3ddb34f13 100644
+--- a/net/netfilter/nfnetlink.c
++++ b/net/netfilter/nfnetlink.c
+@@ -427,8 +427,10 @@ static void nfnetlink_rcv_batch(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 	nfnl_unlock(subsys_id);
+ 
+-	if (nlh->nlmsg_flags & NLM_F_ACK)
++	if (nlh->nlmsg_flags & NLM_F_ACK) {
++		memset(&extack, 0, sizeof(extack));
+ 		nfnl_err_add(&err_list, nlh, 0, &extack);
++	}
+ 
+ 	while (skb->len >= nlmsg_total_size(0)) {
+ 		int msglen, type;
+@@ -577,6 +579,7 @@ static void nfnetlink_rcv_batch(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 			ss->abort(net, oskb, NFNL_ABORT_NONE);
+ 			netlink_ack(oskb, nlmsg_hdr(oskb), err, NULL);
+ 		} else if (nlh->nlmsg_flags & NLM_F_ACK) {
++			memset(&extack, 0, sizeof(extack));
+ 			nfnl_err_add(&err_list, nlh, 0, &extack);
+ 		}
+ 	} else {
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index 55e28e1da66ec..e0716da256bf5 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -820,10 +820,41 @@ static bool nf_ct_drop_unconfirmed(const struct nf_queue_entry *entry)
+ {
+ #if IS_ENABLED(CONFIG_NF_CONNTRACK)
+ 	static const unsigned long flags = IPS_CONFIRMED | IPS_DYING;
+-	const struct nf_conn *ct = (void *)skb_nfct(entry->skb);
++	struct nf_conn *ct = (void *)skb_nfct(entry->skb);
++	unsigned long status;
++	unsigned int use;
+ 
+-	if (ct && ((ct->status & flags) == IPS_DYING))
++	if (!ct)
++		return false;
++
++	status = READ_ONCE(ct->status);
++	if ((status & flags) == IPS_DYING)
+ 		return true;
++
++	if (status & IPS_CONFIRMED)
++		return false;
++
++	/* in some cases skb_clone() can occur after initial conntrack
++	 * pickup, but conntrack assumes exclusive skb->_nfct ownership for
++	 * unconfirmed entries.
++	 *
++	 * This happens for br_netfilter and with ip multicast routing.
++	 * This can't be solved with serialization here because one clone could
++	 * have been queued for local delivery.
++	 */
++	use = refcount_read(&ct->ct_general.use);
++	if (likely(use == 1))
++		return false;
++
++	/* Can't decrement further? Exclusive ownership. */
++	if (!refcount_dec_not_one(&ct->ct_general.use))
++		return false;
++
++	skb_set_nfct(entry->skb, 0);
++	/* No nf_ct_put(): we already decremented .use and it cannot
++	 * drop down to 0.
++	 */
++	return true;
+ #endif
+ 	return false;
+ }
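
[Editor's note on the nfnetlink_queue hunk above: the new check depends on dropping one reference only when it is not the last one. A hedged userspace sketch of that "decrement unless last" operation follows, built on a C11 compare-and-swap loop; it illustrates the idea only and is not the kernel's refcount_t implementation.]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Decrement *r only if it is not the last reference; returns true when a
 * reference was dropped, false when the caller holds the only one. */
static bool dec_not_one(atomic_uint *r)
{
	unsigned int old = atomic_load(r);

	while (old > 1) {
		/* try old -> old - 1; on failure 'old' is reloaded */
		if (atomic_compare_exchange_weak(r, &old, old - 1))
			return true;
	}
	return false;
}

int main(void)
{
	atomic_uint use = 2;

	if (dec_not_one(&use))
		printf("shared: dropped our reference, %u left\n",
		       atomic_load(&use));
	if (!dec_not_one(&use))
		printf("exclusive: kept the last reference\n");
	return 0;
}
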
+diff --git a/net/netfilter/nft_counter.c b/net/netfilter/nft_counter.c
+index 291ed2026367e..eab0dc66bee6b 100644
+--- a/net/netfilter/nft_counter.c
++++ b/net/netfilter/nft_counter.c
+@@ -107,11 +107,16 @@ static void nft_counter_reset(struct nft_counter_percpu_priv *priv,
+ 			      struct nft_counter *total)
+ {
+ 	struct nft_counter *this_cpu;
++	seqcount_t *myseq;
+ 
+ 	local_bh_disable();
+ 	this_cpu = this_cpu_ptr(priv->counter);
++	myseq = this_cpu_ptr(&nft_counter_seq);
++
++	write_seqcount_begin(myseq);
+ 	this_cpu->packets -= total->packets;
+ 	this_cpu->bytes -= total->bytes;
++	write_seqcount_end(myseq);
+ 	local_bh_enable();
+ }
+ 
+@@ -265,7 +270,7 @@ static void nft_counter_offload_stats(struct nft_expr *expr,
+ 	struct nft_counter *this_cpu;
+ 	seqcount_t *myseq;
+ 
+-	preempt_disable();
++	local_bh_disable();
+ 	this_cpu = this_cpu_ptr(priv->counter);
+ 	myseq = this_cpu_ptr(&nft_counter_seq);
+ 
+@@ -273,7 +278,7 @@ static void nft_counter_offload_stats(struct nft_expr *expr,
+ 	this_cpu->packets += stats->pkts;
+ 	this_cpu->bytes += stats->bytes;
+ 	write_seqcount_end(myseq);
+-	preempt_enable();
++	local_bh_enable();
+ }
+ 
+ void nft_counter_init_seqcount(void)
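
[Editor's note on the nft_counter.c hunks above: the reset path is now wrapped in a sequence-counter write section so 64-bit counters cannot be observed half-updated. The single-threaded sketch below shows the writer/reader-retry pattern with plain C11 atomics; it is not the kernel's seqcount API and omits the ordering details a real implementation needs.]

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct counter {
	atomic_uint seq;	/* odd while an update is in flight */
	uint64_t packets;
	uint64_t bytes;
};

static void counter_update(struct counter *c, uint64_t pkts, uint64_t len)
{
	atomic_fetch_add(&c->seq, 1);	/* enter write section */
	c->packets += pkts;
	c->bytes += len;
	atomic_fetch_add(&c->seq, 1);	/* leave write section */
}

static void counter_read(struct counter *c, uint64_t *pkts, uint64_t *len)
{
	unsigned int s;

	do {				/* retry if a writer interfered */
		s = atomic_load(&c->seq);
		*pkts = c->packets;
		*len = c->bytes;
	} while ((s & 1) || s != atomic_load(&c->seq));
}

int main(void)
{
	struct counter c = { .seq = 0 };
	uint64_t p, b;

	counter_update(&c, 3, 1500);
	counter_read(&c, &p, &b);
	printf("packets=%llu bytes=%llu\n",
	       (unsigned long long)p, (unsigned long long)b);
	return 0;
}
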
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index 99d72543abd3a..78d9961fcd446 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -2706,7 +2706,7 @@ static struct pernet_operations ovs_net_ops = {
+ };
+ 
+ static const char * const ovs_drop_reasons[] = {
+-#define S(x)	(#x),
++#define S(x) [(x) & ~SKB_DROP_REASON_SUBSYS_MASK] = (#x),
+ 	OVS_DROP_REASONS(S)
+ #undef S
+ };
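
[Editor's note on the one-line openvswitch change above: the string table becomes a designated-initializer array indexed by the masked drop-reason value instead of relying on declaration order. A self-contained sketch of that macro trick follows; SUBSYS_MASK and the enum values are made up for illustration.]

#include <stdio.h>

#define SUBSYS_SHIFT	16
#define SUBSYS_MASK	0xffff0000u

enum drop_reason {
	DROP_LAST_ACTION = (1u << SUBSYS_SHIFT) | 0,
	DROP_FLOW_LOOKUP = (1u << SUBSYS_SHIFT) | 3,	/* note the gap */
};

#define DROP_REASONS(X)		\
	X(DROP_LAST_ACTION)	\
	X(DROP_FLOW_LOOKUP)

static const char * const drop_names[] = {
/* place each string at its masked enum value, not at the "next slot" */
#define S(x) [(x) & ~SUBSYS_MASK] = #x,
	DROP_REASONS(S)
#undef S
};

int main(void)
{
	unsigned int reason = DROP_FLOW_LOOKUP;

	printf("%s\n", drop_names[reason & ~SUBSYS_MASK]);
	return 0;
}
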
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index edc72962ae63a..0f8d581438c39 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -446,12 +446,10 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 	struct netem_sched_data *q = qdisc_priv(sch);
+ 	/* We don't fill cb now as skb_unshare() may invalidate it */
+ 	struct netem_skb_cb *cb;
+-	struct sk_buff *skb2;
++	struct sk_buff *skb2 = NULL;
+ 	struct sk_buff *segs = NULL;
+ 	unsigned int prev_len = qdisc_pkt_len(skb);
+ 	int count = 1;
+-	int rc = NET_XMIT_SUCCESS;
+-	int rc_drop = NET_XMIT_DROP;
+ 
+ 	/* Do not fool qdisc_drop_all() */
+ 	skb->prev = NULL;
+@@ -480,19 +478,11 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 		skb_orphan_partial(skb);
+ 
+ 	/*
+-	 * If we need to duplicate packet, then re-insert at top of the
+-	 * qdisc tree, since parent queuer expects that only one
+-	 * skb will be queued.
++	 * If we need to duplicate packet, then clone it before
++	 * original is modified.
+ 	 */
+-	if (count > 1 && (skb2 = skb_clone(skb, GFP_ATOMIC)) != NULL) {
+-		struct Qdisc *rootq = qdisc_root_bh(sch);
+-		u32 dupsave = q->duplicate; /* prevent duplicating a dup... */
+-
+-		q->duplicate = 0;
+-		rootq->enqueue(skb2, rootq, to_free);
+-		q->duplicate = dupsave;
+-		rc_drop = NET_XMIT_SUCCESS;
+-	}
++	if (count > 1)
++		skb2 = skb_clone(skb, GFP_ATOMIC);
+ 
+ 	/*
+ 	 * Randomized packet corruption.
+@@ -504,7 +494,8 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 		if (skb_is_gso(skb)) {
+ 			skb = netem_segment(skb, sch, to_free);
+ 			if (!skb)
+-				return rc_drop;
++				goto finish_segs;
++
+ 			segs = skb->next;
+ 			skb_mark_not_on_list(skb);
+ 			qdisc_skb_cb(skb)->pkt_len = skb->len;
+@@ -530,7 +521,24 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 		/* re-link segs, so that qdisc_drop_all() frees them all */
+ 		skb->next = segs;
+ 		qdisc_drop_all(skb, sch, to_free);
+-		return rc_drop;
++		if (skb2)
++			__qdisc_drop(skb2, to_free);
++		return NET_XMIT_DROP;
++	}
++
++	/*
++	 * If doing duplication then re-insert at top of the
++	 * qdisc tree, since parent queuer expects that only one
++	 * skb will be queued.
++	 */
++	if (skb2) {
++		struct Qdisc *rootq = qdisc_root_bh(sch);
++		u32 dupsave = q->duplicate; /* prevent duplicating a dup... */
++
++		q->duplicate = 0;
++		rootq->enqueue(skb2, rootq, to_free);
++		q->duplicate = dupsave;
++		skb2 = NULL;
+ 	}
+ 
+ 	qdisc_qstats_backlog_inc(sch, skb);
+@@ -601,9 +609,12 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 	}
+ 
+ finish_segs:
++	if (skb2)
++		__qdisc_drop(skb2, to_free);
++
+ 	if (segs) {
+ 		unsigned int len, last_len;
+-		int nb;
++		int rc, nb;
+ 
+ 		len = skb ? skb->len : 0;
+ 		nb = skb ? 1 : 0;
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 4b040285aa78c..0ff9b2dd86bac 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1270,25 +1270,28 @@ static int vsock_dgram_connect(struct socket *sock,
+ 	return err;
+ }
+ 
++int __vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
++			  size_t len, int flags)
++{
++	struct sock *sk = sock->sk;
++	struct vsock_sock *vsk = vsock_sk(sk);
++
++	return vsk->transport->dgram_dequeue(vsk, msg, len, flags);
++}
++
+ int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
+ 			size_t len, int flags)
+ {
+ #ifdef CONFIG_BPF_SYSCALL
++	struct sock *sk = sock->sk;
+ 	const struct proto *prot;
+-#endif
+-	struct vsock_sock *vsk;
+-	struct sock *sk;
+ 
+-	sk = sock->sk;
+-	vsk = vsock_sk(sk);
+-
+-#ifdef CONFIG_BPF_SYSCALL
+ 	prot = READ_ONCE(sk->sk_prot);
+ 	if (prot != &vsock_proto)
+ 		return prot->recvmsg(sk, msg, len, flags, NULL);
+ #endif
+ 
+-	return vsk->transport->dgram_dequeue(vsk, msg, len, flags);
++	return __vsock_dgram_recvmsg(sock, msg, len, flags);
+ }
+ EXPORT_SYMBOL_GPL(vsock_dgram_recvmsg);
+ 
+@@ -2174,15 +2177,12 @@ static int __vsock_seqpacket_recvmsg(struct sock *sk, struct msghdr *msg,
+ }
+ 
+ int
+-vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+-			  int flags)
++__vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
++			    int flags)
+ {
+ 	struct sock *sk;
+ 	struct vsock_sock *vsk;
+ 	const struct vsock_transport *transport;
+-#ifdef CONFIG_BPF_SYSCALL
+-	const struct proto *prot;
+-#endif
+ 	int err;
+ 
+ 	sk = sock->sk;
+@@ -2233,14 +2233,6 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 		goto out;
+ 	}
+ 
+-#ifdef CONFIG_BPF_SYSCALL
+-	prot = READ_ONCE(sk->sk_prot);
+-	if (prot != &vsock_proto) {
+-		release_sock(sk);
+-		return prot->recvmsg(sk, msg, len, flags, NULL);
+-	}
+-#endif
+-
+ 	if (sk->sk_type == SOCK_STREAM)
+ 		err = __vsock_stream_recvmsg(sk, msg, len, flags);
+ 	else
+@@ -2250,6 +2242,22 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 	release_sock(sk);
+ 	return err;
+ }
++
++int
++vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
++			  int flags)
++{
++#ifdef CONFIG_BPF_SYSCALL
++	struct sock *sk = sock->sk;
++	const struct proto *prot;
++
++	prot = READ_ONCE(sk->sk_prot);
++	if (prot != &vsock_proto)
++		return prot->recvmsg(sk, msg, len, flags, NULL);
++#endif
++
++	return __vsock_connectible_recvmsg(sock, msg, len, flags);
++}
+ EXPORT_SYMBOL_GPL(vsock_connectible_recvmsg);
+ 
+ static int vsock_set_rcvlowat(struct sock *sk, int val)
+diff --git a/net/vmw_vsock/vsock_bpf.c b/net/vmw_vsock/vsock_bpf.c
+index a3c97546ab84a..c42c5cc18f324 100644
+--- a/net/vmw_vsock/vsock_bpf.c
++++ b/net/vmw_vsock/vsock_bpf.c
+@@ -64,9 +64,9 @@ static int __vsock_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int
+ 	int err;
+ 
+ 	if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET)
+-		err = vsock_connectible_recvmsg(sock, msg, len, flags);
++		err = __vsock_connectible_recvmsg(sock, msg, len, flags);
+ 	else if (sk->sk_type == SOCK_DGRAM)
+-		err = vsock_dgram_recvmsg(sock, msg, len, flags);
++		err = __vsock_dgram_recvmsg(sock, msg, len, flags);
+ 	else
+ 		err = -EPROTOTYPE;
+ 
+diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
+index 47978efe4797c..839d9c49f28ce 100644
+--- a/scripts/kallsyms.c
++++ b/scripts/kallsyms.c
+@@ -5,8 +5,7 @@
+  * This software may be used and distributed according to the terms
+  * of the GNU General Public License, incorporated herein by reference.
+  *
+- * Usage: kallsyms [--all-symbols] [--absolute-percpu]
+- *                         [--base-relative] [--lto-clang] in.map > out.S
++ * Usage: kallsyms [--all-symbols] [--absolute-percpu]  in.map > out.S
+  *
+  *      Table compression uses all the unused char codes on the symbols and
+  *  maps these to the most used substrings (tokens). For instance, it might
+@@ -63,8 +62,6 @@ static struct sym_entry **table;
+ static unsigned int table_size, table_cnt;
+ static int all_symbols;
+ static int absolute_percpu;
+-static int base_relative;
+-static int lto_clang;
+ 
+ static int token_profit[0x10000];
+ 
+@@ -75,8 +72,7 @@ static unsigned char best_table_len[256];
+ 
+ static void usage(void)
+ {
+-	fprintf(stderr, "Usage: kallsyms [--all-symbols] [--absolute-percpu] "
+-			"[--base-relative] [--lto-clang] in.map > out.S\n");
++	fprintf(stderr, "Usage: kallsyms [--all-symbols] [--absolute-percpu] in.map > out.S\n");
+ 	exit(1);
+ }
+ 
+@@ -259,12 +255,6 @@ static void shrink_table(void)
+ 		}
+ 	}
+ 	table_cnt = pos;
+-
+-	/* When valid symbol is not registered, exit to error */
+-	if (!table_cnt) {
+-		fprintf(stderr, "No valid symbol.\n");
+-		exit(1);
+-	}
+ }
+ 
+ static void read_map(const char *in)
+@@ -352,25 +342,6 @@ static int symbol_absolute(const struct sym_entry *s)
+ 	return s->percpu_absolute;
+ }
+ 
+-static void cleanup_symbol_name(char *s)
+-{
+-	char *p;
+-
+-	/*
+-	 * ASCII[.]   = 2e
+-	 * ASCII[0-9] = 30,39
+-	 * ASCII[A-Z] = 41,5a
+-	 * ASCII[_]   = 5f
+-	 * ASCII[a-z] = 61,7a
+-	 *
+-	 * As above, replacing the first '.' in ".llvm." with '\0' does not
+-	 * affect the main sorting, but it helps us with subsorting.
+-	 */
+-	p = strstr(s, ".llvm.");
+-	if (p)
+-		*p = '\0';
+-}
+-
+ static int compare_names(const void *a, const void *b)
+ {
+ 	int ret;
+@@ -497,58 +468,43 @@ static void write_src(void)
+ 		printf("\t.short\t%d\n", best_idx[i]);
+ 	printf("\n");
+ 
+-	if (!base_relative)
+-		output_label("kallsyms_addresses");
+-	else
+-		output_label("kallsyms_offsets");
++	output_label("kallsyms_offsets");
+ 
+ 	for (i = 0; i < table_cnt; i++) {
+-		if (base_relative) {
+-			/*
+-			 * Use the offset relative to the lowest value
+-			 * encountered of all relative symbols, and emit
+-			 * non-relocatable fixed offsets that will be fixed
+-			 * up at runtime.
+-			 */
++		/*
++		 * Use the offset relative to the lowest value
++		 * encountered of all relative symbols, and emit
++		 * non-relocatable fixed offsets that will be fixed
++		 * up at runtime.
++		 */
+ 
+-			long long offset;
+-			int overflow;
+-
+-			if (!absolute_percpu) {
+-				offset = table[i]->addr - relative_base;
+-				overflow = (offset < 0 || offset > UINT_MAX);
+-			} else if (symbol_absolute(table[i])) {
+-				offset = table[i]->addr;
+-				overflow = (offset < 0 || offset > INT_MAX);
+-			} else {
+-				offset = relative_base - table[i]->addr - 1;
+-				overflow = (offset < INT_MIN || offset >= 0);
+-			}
+-			if (overflow) {
+-				fprintf(stderr, "kallsyms failure: "
+-					"%s symbol value %#llx out of range in relative mode\n",
+-					symbol_absolute(table[i]) ? "absolute" : "relative",
+-					table[i]->addr);
+-				exit(EXIT_FAILURE);
+-			}
+-			printf("\t.long\t%#x	/* %s */\n", (int)offset, table[i]->sym);
+-		} else if (!symbol_absolute(table[i])) {
+-			output_address(table[i]->addr);
++		long long offset;
++		int overflow;
++
++		if (!absolute_percpu) {
++			offset = table[i]->addr - relative_base;
++			overflow = (offset < 0 || offset > UINT_MAX);
++		} else if (symbol_absolute(table[i])) {
++			offset = table[i]->addr;
++			overflow = (offset < 0 || offset > INT_MAX);
+ 		} else {
+-			printf("\tPTR\t%#llx\n", table[i]->addr);
++			offset = relative_base - table[i]->addr - 1;
++			overflow = (offset < INT_MIN || offset >= 0);
++		}
++		if (overflow) {
++			fprintf(stderr, "kallsyms failure: "
++				"%s symbol value %#llx out of range in relative mode\n",
++				symbol_absolute(table[i]) ? "absolute" : "relative",
++				table[i]->addr);
++			exit(EXIT_FAILURE);
+ 		}
++		printf("\t.long\t%#x	/* %s */\n", (int)offset, table[i]->sym);
+ 	}
+ 	printf("\n");
+ 
+-	if (base_relative) {
+-		output_label("kallsyms_relative_base");
+-		output_address(relative_base);
+-		printf("\n");
+-	}
+-
+-	if (lto_clang)
+-		for (i = 0; i < table_cnt; i++)
+-			cleanup_symbol_name((char *)table[i]->sym);
++	output_label("kallsyms_relative_base");
++	output_address(relative_base);
++	printf("\n");
+ 
+ 	sort_symbols_by_name();
+ 	output_label("kallsyms_seqs_of_names");
+@@ -826,8 +782,6 @@ int main(int argc, char **argv)
+ 		static const struct option long_options[] = {
+ 			{"all-symbols",     no_argument, &all_symbols,     1},
+ 			{"absolute-percpu", no_argument, &absolute_percpu, 1},
+-			{"base-relative",   no_argument, &base_relative,   1},
+-			{"lto-clang",       no_argument, &lto_clang,       1},
+ 			{},
+ 		};
+ 
+@@ -847,8 +801,7 @@ int main(int argc, char **argv)
+ 	if (absolute_percpu)
+ 		make_percpus_absolute();
+ 	sort_symbols();
+-	if (base_relative)
+-		record_relative_base();
++	record_relative_base();
+ 	optimize_token_table();
+ 	write_src();
+ 
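
[Editor's note on the kallsyms.c hunks above: the former --base-relative encoding becomes unconditional, so every symbol is emitted as a 32-bit offset from the lowest relative address, with separate overflow rules for absolute per-CPU symbols. A small standalone sketch of that encoding and its range checks follows; the sample addresses are arbitrary.]

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

/* Encode one symbol address the way the generated kallsyms_offsets table
 * expects; returns -1 on the same overflow conditions checked above. */
static int encode_offset(uint64_t addr, uint64_t relative_base,
			 int absolute_percpu, int symbol_absolute,
			 int32_t *out)
{
	long long offset;
	int overflow;

	if (!absolute_percpu) {
		offset = (long long)(addr - relative_base);
		overflow = (offset < 0 || offset > UINT_MAX);
	} else if (symbol_absolute) {
		offset = (long long)addr;
		overflow = (offset < 0 || offset > INT_MAX);
	} else {
		offset = (long long)(relative_base - addr) - 1;
		overflow = (offset < INT_MIN || offset >= 0);
	}
	if (overflow)
		return -1;
	*out = (int32_t)offset;
	return 0;
}

int main(void)
{
	int32_t off;

	if (!encode_offset(0xffffffff81001000ull, 0xffffffff81000000ull,
			   0, 0, &off))
		printf("stored as 32-bit offset %#x\n", (unsigned int)off);
	return 0;
}
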
+diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
+index 518c70b8db507..070a319140e89 100755
+--- a/scripts/link-vmlinux.sh
++++ b/scripts/link-vmlinux.sh
+@@ -45,7 +45,6 @@ info()
+ 
+ # Link of vmlinux
+ # ${1} - output file
+-# ${2}, ${3}, ... - optional extra .o files
+ vmlinux_link()
+ {
+ 	local output=${1}
+@@ -90,7 +89,7 @@ vmlinux_link()
+ 	ldflags="${ldflags} ${wl}--script=${objtree}/${KBUILD_LDS}"
+ 
+ 	# The kallsyms linking does not need debug symbols included.
+-	if [ "$output" != "${output#.tmp_vmlinux.kallsyms}" ] ; then
++	if [ -n "${strip_debug}" ] ; then
+ 		ldflags="${ldflags} ${wl}--strip-debug"
+ 	fi
+ 
+@@ -101,15 +100,15 @@ vmlinux_link()
+ 	${ld} ${ldflags} -o ${output}					\
+ 		${wl}--whole-archive ${objs} ${wl}--no-whole-archive	\
+ 		${wl}--start-group ${libs} ${wl}--end-group		\
+-		$@ ${ldlibs}
++		${kallsymso} ${btf_vmlinux_bin_o} ${ldlibs}
+ }
+ 
+ # generate .BTF typeinfo from DWARF debuginfo
+ # ${1} - vmlinux image
+-# ${2} - file to dump raw BTF data into
+ gen_btf()
+ {
+ 	local pahole_ver
++	local btf_data=${1}.btf.o
+ 
+ 	if ! [ -x "$(command -v ${PAHOLE})" ]; then
+ 		echo >&2 "BTF: ${1}: pahole (${PAHOLE}) is not available"
+@@ -122,18 +121,16 @@ gen_btf()
+ 		return 1
+ 	fi
+ 
+-	vmlinux_link ${1}
+-
+-	info "BTF" ${2}
++	info BTF "${btf_data}"
+ 	LLVM_OBJCOPY="${OBJCOPY}" ${PAHOLE} -J ${PAHOLE_FLAGS} ${1}
+ 
+-	# Create ${2} which contains just .BTF section but no symbols. Add
++	# Create ${btf_data} which contains just .BTF section but no symbols. Add
+ 	# SHF_ALLOC because .BTF will be part of the vmlinux image. --strip-all
+ 	# deletes all symbols including __start_BTF and __stop_BTF, which will
+ 	# be redefined in the linker script. Add 2>/dev/null to suppress GNU
+ 	# objcopy warnings: "empty loadable segment detected at ..."
+ 	${OBJCOPY} --only-section=.BTF --set-section-flags .BTF=alloc,readonly \
+-		--strip-all ${1} ${2} 2>/dev/null
++		--strip-all ${1} "${btf_data}" 2>/dev/null
+ 	# Change e_type to ET_REL so that it can be used to link final vmlinux.
+ 	# GNU ld 2.35+ and lld do not allow an ET_EXEC input.
+ 	if is_enabled CONFIG_CPU_BIG_ENDIAN; then
+@@ -141,10 +138,12 @@ gen_btf()
+ 	else
+ 		et_rel='\1\0'
+ 	fi
+-	printf "${et_rel}" | dd of=${2} conv=notrunc bs=1 seek=16 status=none
++	printf "${et_rel}" | dd of="${btf_data}" conv=notrunc bs=1 seek=16 status=none
++
++	btf_vmlinux_bin_o=${btf_data}
+ }
+ 
+-# Create ${2} .S file with all symbols from the ${1} object file
++# Create ${2}.o file with all symbols from the ${1} object file
+ kallsyms()
+ {
+ 	local kallsymopt;
+@@ -157,35 +156,23 @@ kallsyms()
+ 		kallsymopt="${kallsymopt} --absolute-percpu"
+ 	fi
+ 
+-	if is_enabled CONFIG_KALLSYMS_BASE_RELATIVE; then
+-		kallsymopt="${kallsymopt} --base-relative"
+-	fi
++	info KSYMS "${2}.S"
++	scripts/kallsyms ${kallsymopt} "${1}" > "${2}.S"
+ 
+-	if is_enabled CONFIG_LTO_CLANG; then
+-		kallsymopt="${kallsymopt} --lto-clang"
+-	fi
++	info AS "${2}.o"
++	${CC} ${NOSTDINC_FLAGS} ${LINUXINCLUDE} ${KBUILD_CPPFLAGS} \
++	      ${KBUILD_AFLAGS} ${KBUILD_AFLAGS_KERNEL} -c -o "${2}.o" "${2}.S"
+ 
+-	info KSYMS ${2}
+-	scripts/kallsyms ${kallsymopt} ${1} > ${2}
++	kallsymso=${2}.o
+ }
+ 
+-# Perform one step in kallsyms generation, including temporary linking of
+-# vmlinux.
+-kallsyms_step()
++# Perform kallsyms for the given temporary vmlinux.
++sysmap_and_kallsyms()
+ {
+-	kallsymso_prev=${kallsymso}
+-	kallsyms_vmlinux=.tmp_vmlinux.kallsyms${1}
+-	kallsymso=${kallsyms_vmlinux}.o
+-	kallsyms_S=${kallsyms_vmlinux}.S
++	mksysmap "${1}" "${1}.syms"
++	kallsyms "${1}.syms" "${1}.kallsyms"
+ 
+-	vmlinux_link ${kallsyms_vmlinux} "${kallsymso_prev}" ${btf_vmlinux_bin_o}
+-	mksysmap ${kallsyms_vmlinux} ${kallsyms_vmlinux}.syms
+-	kallsyms ${kallsyms_vmlinux}.syms ${kallsyms_S}
+-
+-	info AS ${kallsymso}
+-	${CC} ${NOSTDINC_FLAGS} ${LINUXINCLUDE} ${KBUILD_CPPFLAGS} \
+-	      ${KBUILD_AFLAGS} ${KBUILD_AFLAGS_KERNEL} \
+-	      -c -o ${kallsymso} ${kallsyms_S}
++	kallsyms_sysmap=${1}.syms
+ }
+ 
+ # Create map file with all symbols from ${1}
+@@ -223,26 +210,41 @@ fi
+ 
+ ${MAKE} -f "${srctree}/scripts/Makefile.build" obj=init init/version-timestamp.o
+ 
+-btf_vmlinux_bin_o=""
++btf_vmlinux_bin_o=
++kallsymso=
++strip_debug=
++
++if is_enabled CONFIG_KALLSYMS; then
++	truncate -s0 .tmp_vmlinux.kallsyms0.syms
++	kallsyms .tmp_vmlinux.kallsyms0.syms .tmp_vmlinux0.kallsyms
++fi
++
++if is_enabled CONFIG_KALLSYMS || is_enabled CONFIG_DEBUG_INFO_BTF; then
++
++	# The kallsyms linking does not need debug symbols, but the BTF does.
++	if ! is_enabled CONFIG_DEBUG_INFO_BTF; then
++		strip_debug=1
++	fi
++
++	vmlinux_link .tmp_vmlinux1
++fi
++
+ if is_enabled CONFIG_DEBUG_INFO_BTF; then
+-	btf_vmlinux_bin_o=.btf.vmlinux.bin.o
+-	if ! gen_btf .tmp_vmlinux.btf $btf_vmlinux_bin_o ; then
++	if ! gen_btf .tmp_vmlinux1; then
+ 		echo >&2 "Failed to generate BTF for vmlinux"
+ 		echo >&2 "Try to disable CONFIG_DEBUG_INFO_BTF"
+ 		exit 1
+ 	fi
+ fi
+ 
+-kallsymso=""
+-kallsymso_prev=""
+-kallsyms_vmlinux=""
+ if is_enabled CONFIG_KALLSYMS; then
+ 
+ 	# kallsyms support
+ 	# Generate section listing all symbols and add it into vmlinux
+-	# It's a three step process:
++	# It's a four step process:
++	# 0)  Generate a dummy __kallsyms with empty symbol list.
+ 	# 1)  Link .tmp_vmlinux.kallsyms1 so it has all symbols and sections,
+-	#     but __kallsyms is empty.
++	#     with a dummy __kallsyms.
+ 	#     Running kallsyms on that gives us .tmp_kallsyms1.o with
+ 	#     the right size
+ 	# 2)  Link .tmp_vmlinux.kallsyms2 so it now has a __kallsyms section of
+@@ -261,19 +263,25 @@ if is_enabled CONFIG_KALLSYMS; then
+ 	# a)  Verify that the System.map from vmlinux matches the map from
+ 	#     ${kallsymso}.
+ 
+-	kallsyms_step 1
+-	kallsyms_step 2
++	# The kallsyms linking does not need debug symbols included.
++	strip_debug=1
++
++	sysmap_and_kallsyms .tmp_vmlinux1
++	size1=$(${CONFIG_SHELL} "${srctree}/scripts/file-size.sh" ${kallsymso})
+ 
+-	# step 3
+-	size1=$(${CONFIG_SHELL} "${srctree}/scripts/file-size.sh" ${kallsymso_prev})
++	vmlinux_link .tmp_vmlinux2
++	sysmap_and_kallsyms .tmp_vmlinux2
+ 	size2=$(${CONFIG_SHELL} "${srctree}/scripts/file-size.sh" ${kallsymso})
+ 
+ 	if [ $size1 -ne $size2 ] || [ -n "${KALLSYMS_EXTRA_PASS}" ]; then
+-		kallsyms_step 3
++		vmlinux_link .tmp_vmlinux3
++		sysmap_and_kallsyms .tmp_vmlinux3
+ 	fi
+ fi
+ 
+-vmlinux_link vmlinux "${kallsymso}" ${btf_vmlinux_bin_o}
++strip_debug=
++
++vmlinux_link vmlinux
+ 
+ # fill in BTF IDs
+ if is_enabled CONFIG_DEBUG_INFO_BTF && is_enabled CONFIG_BPF; then
+@@ -293,7 +301,7 @@ fi
+ 
+ # step a (see comment above)
+ if is_enabled CONFIG_KALLSYMS; then
+-	if ! cmp -s System.map ${kallsyms_vmlinux}.syms; then
++	if ! cmp -s System.map "${kallsyms_sysmap}"; then
+ 		echo >&2 Inconsistent kallsyms data
+ 		echo >&2 'Try "make KALLSYMS_EXTRA_PASS=1" as a workaround'
+ 		exit 1
+diff --git a/scripts/rust_is_available.sh b/scripts/rust_is_available.sh
+index 117018946b577..a6fdcf13e0e53 100755
+--- a/scripts/rust_is_available.sh
++++ b/scripts/rust_is_available.sh
+@@ -129,8 +129,12 @@ fi
+ # Check that the Rust bindings generator is suitable.
+ #
+ # Non-stable and distributions' versions may have a version suffix, e.g. `-dev`.
++#
++# The dummy parameter `workaround-for-0.69.0` is required to support 0.69.0
++# (https://github.com/rust-lang/rust-bindgen/pull/2678). It can be removed when
++# the minimum version is upgraded past that (0.69.1 already fixed the issue).
+ rust_bindings_generator_output=$( \
+-	LC_ALL=C "$BINDGEN" --version 2>/dev/null
++	LC_ALL=C "$BINDGEN" --version workaround-for-0.69.0 2>/dev/null
+ ) || rust_bindings_generator_code=$?
+ if [ -n "$rust_bindings_generator_code" ]; then
+ 	echo >&2 "***"
+diff --git a/security/keys/trusted-keys/trusted_dcp.c b/security/keys/trusted-keys/trusted_dcp.c
+index b5f81a05be367..4edc5bbbcda3c 100644
+--- a/security/keys/trusted-keys/trusted_dcp.c
++++ b/security/keys/trusted-keys/trusted_dcp.c
+@@ -186,20 +186,21 @@ static int do_aead_crypto(u8 *in, u8 *out, size_t len, u8 *key, u8 *nonce,
+ 	return ret;
+ }
+ 
+-static int decrypt_blob_key(u8 *key)
++static int decrypt_blob_key(u8 *encrypted_key, u8 *plain_key)
+ {
+-	return do_dcp_crypto(key, key, false);
++	return do_dcp_crypto(encrypted_key, plain_key, false);
+ }
+ 
+-static int encrypt_blob_key(u8 *key)
++static int encrypt_blob_key(u8 *plain_key, u8 *encrypted_key)
+ {
+-	return do_dcp_crypto(key, key, true);
++	return do_dcp_crypto(plain_key, encrypted_key, true);
+ }
+ 
+ static int trusted_dcp_seal(struct trusted_key_payload *p, char *datablob)
+ {
+ 	struct dcp_blob_fmt *b = (struct dcp_blob_fmt *)p->blob;
+ 	int blen, ret;
++	u8 plain_blob_key[AES_KEYSIZE_128];
+ 
+ 	blen = calc_blob_len(p->key_len);
+ 	if (blen > MAX_BLOB_SIZE)
+@@ -207,30 +208,36 @@ static int trusted_dcp_seal(struct trusted_key_payload *p, char *datablob)
+ 
+ 	b->fmt_version = DCP_BLOB_VERSION;
+ 	get_random_bytes(b->nonce, AES_KEYSIZE_128);
+-	get_random_bytes(b->blob_key, AES_KEYSIZE_128);
++	get_random_bytes(plain_blob_key, AES_KEYSIZE_128);
+ 
+-	ret = do_aead_crypto(p->key, b->payload, p->key_len, b->blob_key,
++	ret = do_aead_crypto(p->key, b->payload, p->key_len, plain_blob_key,
+ 			     b->nonce, true);
+ 	if (ret) {
+ 		pr_err("Unable to encrypt blob payload: %i\n", ret);
+-		return ret;
++		goto out;
+ 	}
+ 
+-	ret = encrypt_blob_key(b->blob_key);
++	ret = encrypt_blob_key(plain_blob_key, b->blob_key);
+ 	if (ret) {
+ 		pr_err("Unable to encrypt blob key: %i\n", ret);
+-		return ret;
++		goto out;
+ 	}
+ 
+-	b->payload_len = get_unaligned_le32(&p->key_len);
++	put_unaligned_le32(p->key_len, &b->payload_len);
+ 	p->blob_len = blen;
+-	return 0;
++	ret = 0;
++
++out:
++	memzero_explicit(plain_blob_key, sizeof(plain_blob_key));
++
++	return ret;
+ }
+ 
+ static int trusted_dcp_unseal(struct trusted_key_payload *p, char *datablob)
+ {
+ 	struct dcp_blob_fmt *b = (struct dcp_blob_fmt *)p->blob;
+ 	int blen, ret;
++	u8 plain_blob_key[AES_KEYSIZE_128];
+ 
+ 	if (b->fmt_version != DCP_BLOB_VERSION) {
+ 		pr_err("DCP blob has bad version: %i, expected %i\n",
+@@ -248,14 +255,14 @@ static int trusted_dcp_unseal(struct trusted_key_payload *p, char *datablob)
+ 		goto out;
+ 	}
+ 
+-	ret = decrypt_blob_key(b->blob_key);
++	ret = decrypt_blob_key(b->blob_key, plain_blob_key);
+ 	if (ret) {
+ 		pr_err("Unable to decrypt blob key: %i\n", ret);
+ 		goto out;
+ 	}
+ 
+ 	ret = do_aead_crypto(b->payload, p->key, p->key_len + DCP_BLOB_AUTHLEN,
+-			     b->blob_key, b->nonce, false);
++			     plain_blob_key, b->nonce, false);
+ 	if (ret) {
+ 		pr_err("Unwrap of DCP payload failed: %i\n", ret);
+ 		goto out;
+@@ -263,6 +270,8 @@ static int trusted_dcp_unseal(struct trusted_key_payload *p, char *datablob)
+ 
+ 	ret = 0;
+ out:
++	memzero_explicit(plain_blob_key, sizeof(plain_blob_key));
++
+ 	return ret;
+ }
+ 
+diff --git a/security/selinux/avc.c b/security/selinux/avc.c
+index 32eb67fb3e42c..b49c44869dc46 100644
+--- a/security/selinux/avc.c
++++ b/security/selinux/avc.c
+@@ -330,12 +330,12 @@ static int avc_add_xperms_decision(struct avc_node *node,
+ {
+ 	struct avc_xperms_decision_node *dest_xpd;
+ 
+-	node->ae.xp_node->xp.len++;
+ 	dest_xpd = avc_xperms_decision_alloc(src->used);
+ 	if (!dest_xpd)
+ 		return -ENOMEM;
+ 	avc_copy_xperms_decision(&dest_xpd->xpd, src);
+ 	list_add(&dest_xpd->xpd_list, &node->ae.xp_node->xpd_head);
++	node->ae.xp_node->xp.len++;
+ 	return 0;
+ }
+ 
+@@ -907,7 +907,11 @@ static int avc_update_node(u32 event, u32 perms, u8 driver, u8 xperm, u32 ssid,
+ 		node->ae.avd.auditdeny &= ~perms;
+ 		break;
+ 	case AVC_CALLBACK_ADD_XPERMS:
+-		avc_add_xperms_decision(node, xpd);
++		rc = avc_add_xperms_decision(node, xpd);
++		if (rc) {
++			avc_node_kill(node);
++			goto out_unlock;
++		}
+ 		break;
+ 	}
+ 	avc_node_replace(node, orig);
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 55c78c318ccd7..bfa61e005aace 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -3852,7 +3852,17 @@ static int selinux_file_mprotect(struct vm_area_struct *vma,
+ 	if (default_noexec &&
+ 	    (prot & PROT_EXEC) && !(vma->vm_flags & VM_EXEC)) {
+ 		int rc = 0;
+-		if (vma_is_initial_heap(vma)) {
++		/*
++		 * We don't use the vma_is_initial_heap() helper as it has
++		 * a history of problems and is currently broken on systems
++		 * where there is no heap, e.g. brk == start_brk.  Before
++		 * replacing the conditional below with vma_is_initial_heap(),
++		 * or something similar, please ensure that the logic is the
++		 * same as what we have below or you have tested every possible
++		 * corner case you can think to test.
++		 */
++		if (vma->vm_start >= vma->vm_mm->start_brk &&
++		    vma->vm_end <= vma->vm_mm->brk) {
+ 			rc = avc_has_perm(sid, sid, SECCLASS_PROCESS,
+ 					  PROCESS__EXECHEAP, NULL);
+ 		} else if (!vma->vm_file && (vma_is_initial_stack(vma) ||
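
[Editor's note on the SELinux hunk above: the hook open-codes the heap check because an overlap-style helper misfires when the heap is empty (brk == start_brk). The toy sketch below contrasts the two tests on that corner case; the struct names and addresses are made up.]

#include <stdbool.h>
#include <stdio.h>

struct toy_vma { unsigned long vm_start, vm_end; };
struct toy_mm  { unsigned long start_brk, brk; };

/* overlap-style check, as a vma_is_initial_heap()-like helper might do */
static bool heap_by_overlap(const struct toy_vma *v, const struct toy_mm *m)
{
	return v->vm_start < m->brk && v->vm_end > m->start_brk;
}

/* containment check, matching the conditional kept by the patched hook */
static bool heap_by_containment(const struct toy_vma *v, const struct toy_mm *m)
{
	return v->vm_start >= m->start_brk && v->vm_end <= m->brk;
}

int main(void)
{
	struct toy_mm mm = { .start_brk = 0x5000, .brk = 0x5000 }; /* empty heap */
	struct toy_vma vma = { .vm_start = 0x4000, .vm_end = 0x6000 };

	printf("overlap:     %d\n", heap_by_overlap(&vma, &mm));     /* 1: false positive */
	printf("containment: %d\n", heap_by_containment(&vma, &mm)); /* 0 */
	return 0;
}
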
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index d104adc75a8b0..71a07c1662f5c 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -547,7 +547,7 @@ static int snd_timer_start1(struct snd_timer_instance *timeri,
+ 	/* check the actual time for the start tick;
+ 	 * bail out as error if it's way too low (< 100us)
+ 	 */
+-	if (start) {
++	if (start && !(timer->hw.flags & SNDRV_TIMER_HW_SLAVE)) {
+ 		if ((u64)snd_timer_hw_resolution(timer) * ticks < 100000)
+ 			return -EINVAL;
+ 	}
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 3840565ef8b02..c9d76bca99232 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -583,7 +583,6 @@ static void alc_shutup_pins(struct hda_codec *codec)
+ 	switch (codec->core.vendor_id) {
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+-	case 0x10ec0257:
+ 	case 0x19e58326:
+ 	case 0x10ec0283:
+ 	case 0x10ec0285:
+diff --git a/sound/pci/hda/tas2781_hda_i2c.c b/sound/pci/hda/tas2781_hda_i2c.c
+index fdee6592c502d..9e88d39eac1e2 100644
+--- a/sound/pci/hda/tas2781_hda_i2c.c
++++ b/sound/pci/hda/tas2781_hda_i2c.c
+@@ -2,10 +2,12 @@
+ //
+ // TAS2781 HDA I2C driver
+ //
+-// Copyright 2023 Texas Instruments, Inc.
++// Copyright 2023 - 2024 Texas Instruments, Inc.
+ //
+ // Author: Shenghao Ding <shenghao-ding@ti.com>
++// Current maintainer: Baojun Xu <baojun.xu@ti.com>
+ 
++#include <asm/unaligned.h>
+ #include <linux/acpi.h>
+ #include <linux/crc8.h>
+ #include <linux/crc32.h>
+@@ -519,20 +521,22 @@ static void tas2781_apply_calib(struct tasdevice_priv *tas_priv)
+ 	static const unsigned char rgno_array[CALIB_MAX] = {
+ 		0x74, 0x0c, 0x14, 0x70, 0x7c,
+ 	};
+-	unsigned char *data;
++	int offset = 0;
+ 	int i, j, rc;
++	__be32 data;
+ 
+ 	for (i = 0; i < tas_priv->ndev; i++) {
+-		data = tas_priv->cali_data.data +
+-			i * TASDEVICE_SPEAKER_CALIBRATION_SIZE;
+ 		for (j = 0; j < CALIB_MAX; j++) {
++			data = cpu_to_be32(
++				*(uint32_t *)&tas_priv->cali_data.data[offset]);
+ 			rc = tasdevice_dev_bulk_write(tas_priv, i,
+ 				TASDEVICE_REG(0, page_array[j], rgno_array[j]),
+-				&(data[4 * j]), 4);
++				(unsigned char *)&data, 4);
+ 			if (rc < 0)
+ 				dev_err(tas_priv->dev,
+ 					"chn %d calib %d bulk_wr err = %d\n",
+ 					i, j, rc);
++			offset += 4;
+ 		}
+ 	}
+ }
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index f13a8d63a019a..aaa6a515d0f8a 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -273,6 +273,7 @@ YAMAHA_DEVICE(0x105a, NULL),
+ YAMAHA_DEVICE(0x105b, NULL),
+ YAMAHA_DEVICE(0x105c, NULL),
+ YAMAHA_DEVICE(0x105d, NULL),
++YAMAHA_DEVICE(0x1718, "P-125"),
+ {
+ 	USB_DEVICE(0x0499, 0x1503),
+ 	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index ea063a14cdd8f..e7b68c67852e9 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2221,6 +2221,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ 	DEVICE_FLG(0x2b53, 0x0031, /* Fiero SC-01 (firmware v1.1.0) */
+ 		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
++	DEVICE_FLG(0x2d95, 0x8021, /* VIVO USB-C-XE710 HEADSET */
++		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+ 		   QUIRK_FLAG_IGNORE_CTL_ERROR),
+ 	DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */
+diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
+index e30fd55f8e51d..cd3b480d20bd6 100644
+--- a/tools/perf/tests/vmlinux-kallsyms.c
++++ b/tools/perf/tests/vmlinux-kallsyms.c
+@@ -26,7 +26,6 @@ static bool is_ignored_symbol(const char *name, char type)
+ 		 * when --all-symbols is specified so exclude them to get a
+ 		 * stable symbol list.
+ 		 */
+-		"kallsyms_addresses",
+ 		"kallsyms_offsets",
+ 		"kallsyms_relative_base",
+ 		"kallsyms_num_syms",
+diff --git a/tools/testing/selftests/bpf/progs/iters.c b/tools/testing/selftests/bpf/progs/iters.c
+index fe65e0952a1e0..179bfe25dbc61 100644
+--- a/tools/testing/selftests/bpf/progs/iters.c
++++ b/tools/testing/selftests/bpf/progs/iters.c
+@@ -1434,4 +1434,58 @@ int iter_arr_with_actual_elem_count(const void *ctx)
+ 	return sum;
+ }
+ 
++__u32 upper, select_n, result;
++__u64 global;
++
++static __noinline bool nest_2(char *str)
++{
++	/* some insns (including branch insns) to ensure stacksafe() is triggered
++	 * in nest_2(). This way, stacksafe() can compare the frame associated with nest_1().
++	 */
++	if (str[0] == 't')
++		return true;
++	if (str[1] == 'e')
++		return true;
++	if (str[2] == 's')
++		return true;
++	if (str[3] == 't')
++		return true;
++	return false;
++}
++
++static __noinline bool nest_1(int n)
++{
++	/* case 0: allocate stack, case 1: no allocate stack */
++	switch (n) {
++	case 0: {
++		char comm[16];
++
++		if (bpf_get_current_comm(comm, 16))
++			return false;
++		return nest_2(comm);
++	}
++	case 1:
++		return nest_2((char *)&global);
++	default:
++		return false;
++	}
++}
++
++SEC("raw_tp")
++__success
++int iter_subprog_check_stacksafe(const void *ctx)
++{
++	long i;
++
++	bpf_for(i, 0, upper) {
++		if (!nest_1(select_n)) {
++			result = 1;
++			return 0;
++		}
++	}
++
++	result = 2;
++	return 0;
++}
++
+ char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/core/close_range_test.c b/tools/testing/selftests/core/close_range_test.c
+index 991c473e38593..12b4eb9d04347 100644
+--- a/tools/testing/selftests/core/close_range_test.c
++++ b/tools/testing/selftests/core/close_range_test.c
+@@ -589,4 +589,39 @@ TEST(close_range_cloexec_unshare_syzbot)
+ 	EXPECT_EQ(close(fd3), 0);
+ }
+ 
++TEST(close_range_bitmap_corruption)
++{
++	pid_t pid;
++	int status;
++	struct __clone_args args = {
++		.flags = CLONE_FILES,
++		.exit_signal = SIGCHLD,
++	};
++
++	/* get the first 128 descriptors open */
++	for (int i = 2; i < 128; i++)
++		EXPECT_GE(dup2(0, i), 0);
++
++	/* get descriptor table shared */
++	pid = sys_clone3(&args, sizeof(args));
++	ASSERT_GE(pid, 0);
++
++	if (pid == 0) {
++		/* unshare and truncate descriptor table down to 64 */
++		if (sys_close_range(64, ~0U, CLOSE_RANGE_UNSHARE))
++			exit(EXIT_FAILURE);
++
++		ASSERT_EQ(fcntl(64, F_GETFD), -1);
++		/* ... and verify that the range 64..127 is not
++		   stuck "fully used" according to secondary bitmap */
++		EXPECT_EQ(dup(0), 64)
++			exit(EXIT_FAILURE);
++		exit(EXIT_SUCCESS);
++	}
++
++	EXPECT_EQ(waitpid(pid, &status, 0), pid);
++	EXPECT_EQ(true, WIFEXITED(status));
++	EXPECT_EQ(0, WEXITSTATUS(status));
++}
++
+ TEST_HARNESS_MAIN
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/ethtool_lanes.sh b/tools/testing/selftests/drivers/net/mlxsw/ethtool_lanes.sh
+index 877cd6df94a10..fe905a7f34b3c 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/ethtool_lanes.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/ethtool_lanes.sh
+@@ -2,6 +2,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ lib_dir=$(dirname $0)/../../../net/forwarding
++ethtool_lib_dir=$(dirname $0)/../hw
+ 
+ ALL_TESTS="
+ 	autoneg
+@@ -11,7 +12,7 @@ ALL_TESTS="
+ NUM_NETIFS=2
+ : ${TIMEOUT:=30000} # ms
+ source $lib_dir/lib.sh
+-source $lib_dir/ethtool_lib.sh
++source $ethtool_lib_dir/ethtool_lib.sh
+ 
+ setup_prepare()
+ {
+diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
+index 478cd26f62fdd..1285b82fbd8a2 100644
+--- a/tools/testing/selftests/mm/Makefile
++++ b/tools/testing/selftests/mm/Makefile
+@@ -51,7 +51,9 @@ TEST_GEN_FILES += madv_populate
+ TEST_GEN_FILES += map_fixed_noreplace
+ TEST_GEN_FILES += map_hugetlb
+ TEST_GEN_FILES += map_populate
++ifneq (,$(filter $(ARCH),arm64 riscv riscv64 x86 x86_64))
+ TEST_GEN_FILES += memfd_secret
++endif
+ TEST_GEN_FILES += migration
+ TEST_GEN_FILES += mkdirty
+ TEST_GEN_FILES += mlock-random-test
+diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
+index 3157204b90476..1c7cdfd29fcc8 100755
+--- a/tools/testing/selftests/mm/run_vmtests.sh
++++ b/tools/testing/selftests/mm/run_vmtests.sh
+@@ -367,8 +367,11 @@ CATEGORY="hmm" run_test bash ./test_hmm.sh smoke
+ # MADV_POPULATE_READ and MADV_POPULATE_WRITE tests
+ CATEGORY="madv_populate" run_test ./madv_populate
+ 
++if [ -x ./memfd_secret ]
++then
+ (echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope 2>&1) | tap_prefix
+ CATEGORY="memfd_secret" run_test ./memfd_secret
++fi
+ 
+ # KSM KSM_MERGE_TIME_HUGE_PAGES test with size of 100
+ CATEGORY="ksm" run_test ./ksm_tests -H -s 100
+diff --git a/tools/testing/selftests/net/af_unix/msg_oob.c b/tools/testing/selftests/net/af_unix/msg_oob.c
+index 16d0c172eaebe..535eb2c3d7d1c 100644
+--- a/tools/testing/selftests/net/af_unix/msg_oob.c
++++ b/tools/testing/selftests/net/af_unix/msg_oob.c
+@@ -209,7 +209,7 @@ static void __sendpair(struct __test_metadata *_metadata,
+ 
+ static void __recvpair(struct __test_metadata *_metadata,
+ 		       FIXTURE_DATA(msg_oob) *self,
+-		       const void *expected_buf, int expected_len,
++		       const char *expected_buf, int expected_len,
+ 		       int buf_len, int flags)
+ {
+ 	int i, ret[2], recv_errno[2], expected_errno = 0;
+diff --git a/tools/testing/selftests/net/lib.sh b/tools/testing/selftests/net/lib.sh
+index 9155c914c064f..93de05fedd91a 100644
+--- a/tools/testing/selftests/net/lib.sh
++++ b/tools/testing/selftests/net/lib.sh
+@@ -128,25 +128,18 @@ slowwait_for_counter()
+ cleanup_ns()
+ {
+ 	local ns=""
+-	local errexit=0
+ 	local ret=0
+ 
+-	# disable errexit temporary
+-	if [[ $- =~ "e" ]]; then
+-		errexit=1
+-		set +e
+-	fi
+-
+ 	for ns in "$@"; do
+ 		[ -z "${ns}" ] && continue
+-		ip netns delete "${ns}" &> /dev/null
++		ip netns pids "${ns}" 2> /dev/null | xargs -r kill || true
++		ip netns delete "${ns}" &> /dev/null || true
+ 		if ! busywait $BUSYWAIT_TIMEOUT ip netns list \| grep -vq "^$ns$" &> /dev/null; then
+ 			echo "Warn: Failed to remove namespace $ns"
+ 			ret=1
+ 		fi
+ 	done
+ 
+-	[ $errexit -eq 1 ] && set -e
+ 	return $ret
+ }
+ 
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index a3293043c85dd..ed30c0ed55e7a 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -436,9 +436,10 @@ reset_with_tcp_filter()
+ 	local ns="${!1}"
+ 	local src="${2}"
+ 	local target="${3}"
++	local chain="${4:-INPUT}"
+ 
+ 	if ! ip netns exec "${ns}" ${iptables} \
+-			-A INPUT \
++			-A "${chain}" \
+ 			-s "${src}" \
+ 			-p tcp \
+ 			-j "${target}"; then
+@@ -3058,6 +3059,7 @@ fullmesh_tests()
+ 		pm_nl_set_limits $ns1 1 3
+ 		pm_nl_set_limits $ns2 1 3
+ 		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
++		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,fullmesh
+ 		fullmesh=1 speed=slow \
+ 			run_tests $ns1 $ns2 10.0.1.1
+ 		chk_join_nr 3 3 3
+@@ -3571,10 +3573,10 @@ endpoint_tests()
+ 		mptcp_lib_kill_wait $tests_pid
+ 	fi
+ 
+-	if reset "delete and re-add" &&
++	if reset_with_tcp_filter "delete and re-add" ns2 10.0.3.2 REJECT OUTPUT &&
+ 	   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+-		pm_nl_set_limits $ns1 1 1
+-		pm_nl_set_limits $ns2 1 1
++		pm_nl_set_limits $ns1 0 2
++		pm_nl_set_limits $ns2 0 2
+ 		pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow
+ 		test_linkfail=4 speed=20 \
+ 			run_tests $ns1 $ns2 10.0.1.1 &
+@@ -3591,11 +3593,27 @@ endpoint_tests()
+ 		chk_subflow_nr "after delete" 1
+ 		chk_mptcp_info subflows 0 subflows 0
+ 
+-		pm_nl_add_endpoint $ns2 10.0.2.2 dev ns2eth2 flags subflow
++		pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow
+ 		wait_mpj $ns2
+ 		chk_subflow_nr "after re-add" 2
+ 		chk_mptcp_info subflows 1 subflows 1
++
++		pm_nl_add_endpoint $ns2 10.0.3.2 id 3 flags subflow
++		wait_attempt_fail $ns2
++		chk_subflow_nr "after new reject" 2
++		chk_mptcp_info subflows 1 subflows 1
++
++		ip netns exec "${ns2}" ${iptables} -D OUTPUT -s "10.0.3.2" -p tcp -j REJECT
++		pm_nl_del_endpoint $ns2 3 10.0.3.2
++		pm_nl_add_endpoint $ns2 10.0.3.2 id 3 flags subflow
++		wait_mpj $ns2
++		chk_subflow_nr "after no reject" 3
++		chk_mptcp_info subflows 2 subflows 2
++
+ 		mptcp_lib_kill_wait $tests_pid
++
++		chk_join_nr 3 3 3
++		chk_rm_nr 1 1
+ 	fi
+ }
+ 
+diff --git a/tools/testing/selftests/net/udpgro.sh b/tools/testing/selftests/net/udpgro.sh
+index 11a1ebda564fd..d5ffd8c9172e1 100755
+--- a/tools/testing/selftests/net/udpgro.sh
++++ b/tools/testing/selftests/net/udpgro.sh
+@@ -7,8 +7,6 @@ source net_helper.sh
+ 
+ readonly PEER_NS="ns-peer-$(mktemp -u XXXXXX)"
+ 
+-BPF_FILE="xdp_dummy.bpf.o"
+-
+ # set global exit status, but never reset nonzero one.
+ check_err()
+ {
+@@ -38,7 +36,7 @@ cfg_veth() {
+ 	ip -netns "${PEER_NS}" addr add dev veth1 192.168.1.1/24
+ 	ip -netns "${PEER_NS}" addr add dev veth1 2001:db8::1/64 nodad
+ 	ip -netns "${PEER_NS}" link set dev veth1 up
+-	ip -n "${PEER_NS}" link set veth1 xdp object ${BPF_FILE} section xdp
++	ip netns exec "${PEER_NS}" ethtool -K veth1 gro on
+ }
+ 
+ run_one() {
+@@ -46,17 +44,19 @@ run_one() {
+ 	local -r all="$@"
+ 	local -r tx_args=${all%rx*}
+ 	local -r rx_args=${all#*rx}
++	local ret=0
+ 
+ 	cfg_veth
+ 
+-	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${rx_args} && \
+-		echo "ok" || \
+-		echo "failed" &
++	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${rx_args} &
++	local PID1=$!
+ 
+ 	wait_local_port_listen ${PEER_NS} 8000 udp
+ 	./udpgso_bench_tx ${tx_args}
+-	ret=$?
+-	wait $(jobs -p)
++	check_err $?
++	wait ${PID1}
++	check_err $?
++	[ "$ret" -eq 0 ] && echo "ok" || echo "failed"
+ 	return $ret
+ }
+ 
+@@ -73,6 +73,7 @@ run_one_nat() {
+ 	local -r all="$@"
+ 	local -r tx_args=${all%rx*}
+ 	local -r rx_args=${all#*rx}
++	local ret=0
+ 
+ 	if [[ ${tx_args} = *-4* ]]; then
+ 		ipt_cmd=iptables
+@@ -93,16 +94,17 @@ run_one_nat() {
+ 	# ... so that GRO will match the UDP_GRO enabled socket, but packets
+ 	# will land on the 'plain' one
+ 	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -G ${family} -b ${addr1} -n 0 &
+-	pid=$!
+-	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${family} -b ${addr2%/*} ${rx_args} && \
+-		echo "ok" || \
+-		echo "failed"&
++	local PID1=$!
++	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${family} -b ${addr2%/*} ${rx_args} &
++	local PID2=$!
+ 
+ 	wait_local_port_listen "${PEER_NS}" 8000 udp
+ 	./udpgso_bench_tx ${tx_args}
+-	ret=$?
+-	kill -INT $pid
+-	wait $(jobs -p)
++	check_err $?
++	kill -INT ${PID1}
++	wait ${PID2}
++	check_err $?
++	[ "$ret" -eq 0 ] && echo "ok" || echo "failed"
+ 	return $ret
+ }
+ 
+@@ -111,20 +113,26 @@ run_one_2sock() {
+ 	local -r all="$@"
+ 	local -r tx_args=${all%rx*}
+ 	local -r rx_args=${all#*rx}
++	local ret=0
+ 
+ 	cfg_veth
+ 
+ 	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${rx_args} -p 12345 &
+-	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 2000 -R 10 ${rx_args} && \
+-		echo "ok" || \
+-		echo "failed" &
++	local PID1=$!
++	ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 2000 -R 10 ${rx_args} &
++	local PID2=$!
+ 
+ 	wait_local_port_listen "${PEER_NS}" 12345 udp
+ 	./udpgso_bench_tx ${tx_args} -p 12345
++	check_err $?
+ 	wait_local_port_listen "${PEER_NS}" 8000 udp
+ 	./udpgso_bench_tx ${tx_args}
+-	ret=$?
+-	wait $(jobs -p)
++	check_err $?
++	wait ${PID1}
++	check_err $?
++	wait ${PID2}
++	check_err $?
++	[ "$ret" -eq 0 ] && echo "ok" || echo "failed"
+ 	return $ret
+ }
+ 
+@@ -196,11 +204,6 @@ run_all() {
+ 	return $ret
+ }
+ 
+-if [ ! -f ${BPF_FILE} ]; then
+-	echo "Missing ${BPF_FILE}. Run 'make' first"
+-	exit -1
+-fi
+-
+ if [[ $# -eq 0 ]]; then
+ 	run_all
+ elif [[ $1 == "__subprocess" ]]; then
+diff --git a/tools/testing/selftests/tc-testing/tdc.py b/tools/testing/selftests/tc-testing/tdc.py
+index ee349187636fc..4f255cec0c22e 100755
+--- a/tools/testing/selftests/tc-testing/tdc.py
++++ b/tools/testing/selftests/tc-testing/tdc.py
+@@ -143,7 +143,6 @@ class PluginMgr:
+             except Exception as ee:
+                 print('exception {} in call to pre_case for {} plugin'.
+                       format(ee, pgn_inst.__class__))
+-                print('test_ordinal is {}'.format(test_ordinal))
+                 print('testid is {}'.format(caseinfo['id']))
+                 raise
+ 
+diff --git a/tools/tracing/rtla/src/osnoise_top.c b/tools/tracing/rtla/src/osnoise_top.c
+index 07ba55d4ec06f..de33ea5005b6b 100644
+--- a/tools/tracing/rtla/src/osnoise_top.c
++++ b/tools/tracing/rtla/src/osnoise_top.c
+@@ -640,8 +640,10 @@ struct osnoise_tool *osnoise_init_top(struct osnoise_top_params *params)
+ 		return NULL;
+ 
+ 	tool->data = osnoise_alloc_top(nr_cpus);
+-	if (!tool->data)
+-		goto out_err;
++	if (!tool->data) {
++		osnoise_destroy_tool(tool);
++		return NULL;
++	}
+ 
+ 	tool->params = params;
+ 
+@@ -649,11 +651,6 @@ struct osnoise_tool *osnoise_init_top(struct osnoise_top_params *params)
+ 				   osnoise_top_handler, NULL);
+ 
+ 	return tool;
+-
+-out_err:
+-	osnoise_free_top(tool->data);
+-	osnoise_destroy_tool(tool);
+-	return NULL;
+ }
+ 
+ static int stop_tracing;


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-04 13:50 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-04 13:50 UTC (permalink / raw
  To: gentoo-commits

commit:     94737e9422147d7840950af0a7890faafbb1f337
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep  4 13:49:42 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep  4 13:49:42 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=94737e94

dtrace patch thanks to Sam

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 2995_dtrace-6.10_p1.patch | 2390 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2394 insertions(+)

diff --git a/0000_README b/0000_README
index 82c9265b..f658d7dc 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch:  2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
 From:   https://lore.kernel.org/bpf/
 Desc:   libbpf: workaround -Wmaybe-uninitialized false positive
 
+Patch:  2995_dtrace-6.10_p1.patch
+From:   https://github.com/thesamesam/linux/tree/dtrace-sam/v2/6.10
+Desc:   dtrace patch 6.10_p1
+
 Patch:  3000_Support-printing-firmware-info.patch
 From:   https://bugs.gentoo.org/732852
 Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev

diff --git a/2995_dtrace-6.10_p1.patch b/2995_dtrace-6.10_p1.patch
new file mode 100644
index 00000000..6e03cbf6
--- /dev/null
+++ b/2995_dtrace-6.10_p1.patch
@@ -0,0 +1,2390 @@
+diff --git a/.gitignore b/.gitignore
+index c59dc60ba62ef..6722f45561f3a 100644
+--- a/.gitignore
++++ b/.gitignore
+@@ -16,6 +16,7 @@
+ *.bin
+ *.bz2
+ *.c.[012]*.*
++*.ctf
+ *.dt.yaml
+ *.dtb
+ *.dtbo
+@@ -54,6 +55,7 @@
+ Module.symvers
+ dtbs-list
+ modules.order
++objects.builtin
+ 
+ #
+ # Top-level generic files
+@@ -64,12 +66,13 @@ modules.order
+ /vmlinux.32
+ /vmlinux.map
+ /vmlinux.symvers
++/vmlinux.ctfa
+ /vmlinux-gdb.py
+ /vmlinuz
+ /System.map
+ /Module.markers
+ /modules.builtin
+-/modules.builtin.modinfo
++/modules.builtin.*
+ /modules.nsdeps
+ 
+ #
+diff --git a/Documentation/dontdiff b/Documentation/dontdiff
+index 3c399f132e2db..75b9655e57914 100644
+--- a/Documentation/dontdiff
++++ b/Documentation/dontdiff
+@@ -179,7 +179,7 @@ mkutf8data
+ modpost
+ modules-only.symvers
+ modules.builtin
+-modules.builtin.modinfo
++modules.builtin.*
+ modules.nsdeps
+ modules.order
+ modversions.h*
+diff --git a/Documentation/kbuild/kbuild.rst b/Documentation/kbuild/kbuild.rst
+index 9c8d1d046ea56..79e104ffee715 100644
+--- a/Documentation/kbuild/kbuild.rst
++++ b/Documentation/kbuild/kbuild.rst
+@@ -17,6 +17,11 @@ modules.builtin
+ This file lists all modules that are built into the kernel. This is used
+ by modprobe to not fail when trying to load something builtin.
+ 
++modules.builtin.objs
++-----------------------
++This file maps each module that is built into the kernel to the object
++files that were used to build that module.
++
+ modules.builtin.modinfo
+ -----------------------
+ This file contains modinfo from all modules that are built into the kernel.
+diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
+index 5685d7bfe4d0f..8db62fe4dadff 100644
+--- a/Documentation/process/changes.rst
++++ b/Documentation/process/changes.rst
+@@ -63,9 +63,13 @@ cpio                   any              cpio --version
+ GNU tar                1.28             tar --version
+ gtags (optional)       6.6.5            gtags --version
+ mkimage (optional)     2017.01          mkimage --version
++GNU AWK (optional)     5.1.0            gawk --version
++GNU C\ [#f2]_          12.0             gcc --version
++binutils\ [#f2]_       2.36             ld -v
+ ====================== ===============  ========================================
+ 
+ .. [#f1] Sphinx is needed only to build the Kernel documentation
++.. [#f2] These are needed at build-time when CONFIG_CTF is enabled
+ 
+ Kernel compilation
+ ******************
+@@ -198,6 +202,12 @@ platforms. The tool is available via the ``u-boot-tools`` package or can be
+ built from the U-Boot source code. See the instructions at
+ https://docs.u-boot.org/en/latest/build/tools.html#building-tools-for-linux
+ 
++GNU AWK
++-------
++
++GNU AWK is needed if you want kernel builds to generate address range data for
++builtin modules (CONFIG_BUILTIN_MODULE_RANGES).
++
+ System utilities
+ ****************
+ 
+diff --git a/Makefile b/Makefile
+index ab77d171e268d..425dd4d723155 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1024,6 +1024,7 @@ include-$(CONFIG_UBSAN)		+= scripts/Makefile.ubsan
+ include-$(CONFIG_KCOV)		+= scripts/Makefile.kcov
+ include-$(CONFIG_RANDSTRUCT)	+= scripts/Makefile.randstruct
+ include-$(CONFIG_GCC_PLUGINS)	+= scripts/Makefile.gcc-plugins
++include-$(CONFIG_CTF)		+= scripts/Makefile.ctfa-toplevel
+ 
+ include $(addprefix $(srctree)/, $(include-y))
+ 
+@@ -1151,7 +1152,11 @@ PHONY += vmlinux_o
+ vmlinux_o: vmlinux.a $(KBUILD_VMLINUX_LIBS)
+ 	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.vmlinux_o
+ 
+-vmlinux.o modules.builtin.modinfo modules.builtin: vmlinux_o
++MODULES_BUILTIN := modules.builtin.modinfo
++MODULES_BUILTIN += modules.builtin
++MODULES_BUILTIN += modules.builtin.objs
++
++vmlinux.o $(MODULES_BUILTIN): vmlinux_o
+ 	@:
+ 
+ PHONY += vmlinux
+@@ -1490,9 +1495,10 @@ endif # CONFIG_MODULES
+ 
+ # Directories & files removed with 'make clean'
+ CLEAN_FILES += vmlinux.symvers modules-only.symvers \
+-	       modules.builtin modules.builtin.modinfo modules.nsdeps \
++	       modules.builtin modules.builtin.* modules.nsdeps \
+ 	       compile_commands.json rust/test \
+-	       rust-project.json .vmlinux.objs .vmlinux.export.c
++	       rust-project.json .vmlinux.objs .vmlinux.export.c \
++	       vmlinux.ctfa
+ 
+ # Directories & files removed with 'make mrproper'
+ MRPROPER_FILES += include/config include/generated          \
+@@ -1586,6 +1592,8 @@ help:
+ 	@echo  '                    (requires a recent binutils and recent build (System.map))'
+ 	@echo  '  dir/file.ko     - Build module including final link'
+ 	@echo  '  modules_prepare - Set up for building external modules'
++	@echo  '  ctf             - Generate CTF type information, installed by make ctf_install'
++	@echo  '  ctf_install     - Install CTF to INSTALL_MOD_PATH (default: /)'
+ 	@echo  '  tags/TAGS	  - Generate tags file for editors'
+ 	@echo  '  cscope	  - Generate cscope index'
+ 	@echo  '  gtags           - Generate GNU GLOBAL index'
+@@ -1942,7 +1950,7 @@ clean: $(clean-dirs)
+ 	$(call cmd,rmfiles)
+ 	@find $(or $(KBUILD_EXTMOD), .) $(RCS_FIND_IGNORE) \
+ 		\( -name '*.[aios]' -o -name '*.rsi' -o -name '*.ko' -o -name '.*.cmd' \
+-		-o -name '*.ko.*' \
++		-o -name '*.ko.*' -o -name '*.ctf' \
+ 		-o -name '*.dtb' -o -name '*.dtbo' \
+ 		-o -name '*.dtb.S' -o -name '*.dtbo.S' \
+ 		-o -name '*.dt.yaml' -o -name 'dtbs-list' \
+diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
+index 01067a2bc43b7..d2193b8dfad83 100644
+--- a/arch/arm/vdso/Makefile
++++ b/arch/arm/vdso/Makefile
+@@ -14,6 +14,10 @@ obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+ ccflags-y := -fPIC -fno-common -fno-builtin -fno-stack-protector
+ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO32
+ 
++# CTF in the vDSO would introduce a new section, which would
++# expand the vDSO to more than a page.
++ccflags-y += $(call cc-option,-gctf0)
++
+ ldflags-$(CONFIG_CPU_ENDIAN_BE8) := --be8
+ ldflags-y := -Bsymbolic --no-undefined -soname=linux-vdso.so.1 \
+ 	    -z max-page-size=4096 -shared $(ldflags-y) \
+diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
+index d63930c828397..6e84d3822cfe3 100644
+--- a/arch/arm64/kernel/vdso/Makefile
++++ b/arch/arm64/kernel/vdso/Makefile
+@@ -33,6 +33,10 @@ ldflags-y += -T
+ ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
+ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+ 
++# CTF in the vDSO would introduce a new section, which would
++# expand the vDSO to more than a page.
++ccflags-y += $(call cc-option,-gctf0)
++
+ # -Wmissing-prototypes and -Wmissing-declarations are removed from
+ # the CFLAGS of vgettimeofday.c to make possible to build the
+ # kernel with CONFIG_WERROR enabled.
+diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
+index d724d46b07c84..fbedb95223ae1 100644
+--- a/arch/loongarch/vdso/Makefile
++++ b/arch/loongarch/vdso/Makefile
+@@ -21,7 +21,8 @@ cflags-vdso := $(ccflags-vdso) \
+ 	-O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
+ 	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+ 	$(call cc-option, -fno-asynchronous-unwind-tables) \
+-	$(call cc-option, -fno-stack-protector)
++	$(call cc-option, -fno-stack-protector) \
++	$(call cc-option,-gctf0)
+ aflags-vdso := $(ccflags-vdso) \
+ 	-D__ASSEMBLY__ -Wa,-gdwarf-2
+ 
+diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
+index b289b2c1b2946..6c8d777525f9b 100644
+--- a/arch/mips/vdso/Makefile
++++ b/arch/mips/vdso/Makefile
+@@ -30,7 +30,8 @@ cflags-vdso := $(ccflags-vdso) \
+ 	-O3 -g -fPIC -fno-strict-aliasing -fno-common -fno-builtin -G 0 \
+ 	-mrelax-pic-calls $(call cc-option, -mexplicit-relocs) \
+ 	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+-	$(call cc-option, -fno-asynchronous-unwind-tables)
++	$(call cc-option, -fno-asynchronous-unwind-tables) \
++	$(call cc-option,-gctf0)
+ aflags-vdso := $(ccflags-vdso) \
+ 	-D__ASSEMBLY__ -Wa,-gdwarf-2
+ 
+diff --git a/arch/sparc/vdso/Makefile b/arch/sparc/vdso/Makefile
+index 243dbfc4609d8..e4f3e47074e9d 100644
+--- a/arch/sparc/vdso/Makefile
++++ b/arch/sparc/vdso/Makefile
+@@ -44,7 +44,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=medlow -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
+        -fno-omit-frame-pointer -foptimize-sibling-calls \
+-       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
++       $(call cc-option,-gctf0) -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+ 
+ SPARC_REG_CFLAGS = -ffixed-g4 -ffixed-g5 -fcall-used-g5 -fcall-used-g7
+ 
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index 215a1b202a918..2fa1613a06275 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -54,6 +54,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
+        -fno-omit-frame-pointer -foptimize-sibling-calls \
++       $(call cc-option,-gctf0) \
+        -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+ 
+ ifdef CONFIG_MITIGATION_RETPOLINE
+@@ -131,6 +132,7 @@ KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
+ KBUILD_CFLAGS_32 += -fno-stack-protector
+ KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
+ KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
++KBUILD_CFLAGS_32 += $(call cc-option,-gctf0)
+ KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
+ 
+ ifdef CONFIG_MITIGATION_RETPOLINE
+diff --git a/arch/x86/um/vdso/Makefile b/arch/x86/um/vdso/Makefile
+index 6a77ea6434ffd..6db233b5edd75 100644
+--- a/arch/x86/um/vdso/Makefile
++++ b/arch/x86/um/vdso/Makefile
+@@ -40,7 +40,7 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
+ #
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
+-       -fno-omit-frame-pointer -foptimize-sibling-calls
++       -fno-omit-frame-pointer -foptimize-sibling-calls $(call cc-option,-gctf0)
+ 
+ $(vobjs): KBUILD_CFLAGS += $(CFL)
+ 
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index f00a8e18f389f..2e307c0824574 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -1014,6 +1014,7 @@
+ 	*(.discard.*)							\
+ 	*(.export_symbol)						\
+ 	*(.modinfo)							\
++	*(.ctf)								\
+ 	/* ld.bfd warns about .gnu.version* even when not emitted */	\
+ 	*(.gnu.version*)						\
+ 
+diff --git a/include/linux/module.h b/include/linux/module.h
+index 330ffb59efe51..ec828908470c9 100644
+--- a/include/linux/module.h
++++ b/include/linux/module.h
+@@ -180,7 +180,9 @@ extern void cleanup_module(void);
+ #ifdef MODULE
+ #define MODULE_FILE
+ #else
+-#define MODULE_FILE	MODULE_INFO(file, KBUILD_MODFILE);
++#define MODULE_FILE					                      \
++			MODULE_INFO(file, KBUILD_MODFILE);                    \
++			MODULE_INFO(objs, KBUILD_MODOBJS);
+ #endif
+ 
+ /*
+diff --git a/init/Kconfig b/init/Kconfig
+index 9684e5d2b81c6..c1b00b2e4a43d 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -111,6 +111,12 @@ config PAHOLE_VERSION
+ 	int
+ 	default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
+ 
++config HAVE_CTF_TOOLCHAIN
++	def_bool $(cc-option,-gctf) && $(ld-option,-lbfd -liberty -lctf -lbfd -liberty -lz -ldl -lc -o /dev/null)
++	depends on CC_IS_GCC
++	help
++	  GCC and binutils support CTF generation.
++
+ config CONSTRUCTORS
+ 	bool
+ 
+diff --git a/lib/Kconfig b/lib/Kconfig
+index b0a76dff5c182..61d0be30b3562 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -633,6 +633,16 @@ config DIMLIB
+ #
+ config LIBFDT
+ 	bool
++#
++# CTF support is select'ed if needed
++#
++config CTF
++        bool "Compact Type Format generation"
++        depends on HAVE_CTF_TOOLCHAIN
++        help
++          Emit a compact, compressed description of the kernel's datatypes and
++          global variables into the vmlinux.ctfa archive (for in-tree modules)
++          or into .ctf sections in kernel modules (for out-of-tree modules).
+ 
+ config OID_REGISTRY
+ 	tristate
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 59b6765d86b8f..dab7e6983eace 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -571,6 +571,21 @@ config VMLINUX_MAP
+ 	  pieces of code get eliminated with
+ 	  CONFIG_LD_DEAD_CODE_DATA_ELIMINATION.
+ 
++config BUILTIN_MODULE_RANGES
++	bool "Generate address range information for builtin modules"
++	depends on !LTO
++	depends on VMLINUX_MAP
++	help
++	 When modules are built into the kernel, there will be no module name
++	 associated with their symbols in /proc/kallsyms.  Tracers may want to
++	 identify symbols by module name and symbol name regardless of whether
++	 the module is configured as loadable or not.
++
++	 This option generates modules.builtin.ranges in the build tree with
++	 offset ranges (per ELF section) for the module(s) they belong to.
++	 It also records an anchor symbol to determine the load address of the
++	 section.
++
+ config DEBUG_FORCE_WEAK_PER_CPU
+ 	bool "Force weak per-cpu definitions"
+ 	depends on DEBUG_KERNEL
+diff --git a/scripts/.gitignore b/scripts/.gitignore
+index 3dbb8bb2457bc..11339fa922abd 100644
+--- a/scripts/.gitignore
++++ b/scripts/.gitignore
+@@ -11,3 +11,4 @@
+ /sorttable
+ /target.json
+ /unifdef
++y!/Makefile.ctf
+diff --git a/scripts/Makefile b/scripts/Makefile
+index fe56eeef09dd4..8e7eb174d3154 100644
+--- a/scripts/Makefile
++++ b/scripts/Makefile
+@@ -54,6 +54,7 @@ targets += module.lds
+ 
+ subdir-$(CONFIG_GCC_PLUGINS) += gcc-plugins
+ subdir-$(CONFIG_MODVERSIONS) += genksyms
++subdir-$(CONFIG_CTF)         += ctf
+ subdir-$(CONFIG_SECURITY_SELINUX) += selinux
+ 
+ # Let clean descend into subdirs
+diff --git a/scripts/Makefile.ctfa b/scripts/Makefile.ctfa
+new file mode 100644
+index 0000000000000..2b10de139dce5
+--- /dev/null
++++ b/scripts/Makefile.ctfa
+@@ -0,0 +1,92 @@
++# SPDX-License-Identifier: GPL-2.0-only
++# ===========================================================================
++# Module CTF/CTFA generation
++# ===========================================================================
++
++include include/config/auto.conf
++include $(srctree)/scripts/Kbuild.include
++
++# CTF is already present in every object file if CONFIG_CTF is enabled.
++# vmlinux.lds.h strips it out of the finished kernel, but if nothing is done
++# it will be deduplicated into module .ko's.  For out-of-tree module builds,
++# this is what we want, but for in-tree modules we can save substantial
++# space by deduplicating it against all the core kernel types as well.  So
++# split the CTF out of in-tree module .ko's into separate .ctf files so that
++# it doesn't take up space in the modules on disk, and let the specialized
++# ctfarchive tool consume it and all the CTF in the vmlinux.o files when
++# 'make ctf' is invoked, and use the same machinery that the linker uses to
++# do CTF deduplication to emit vmlinux.ctfa containing the deduplicated CTF.
++
++# Nothing special needs to be done if CTF is turned off or if a standalone
++# module is being built.
++module-ctf-postlink = mv $(1).tmp $(1)
++
++ifdef CONFIG_CTF
++
++# This is quite tricky.  The CTF machinery needs to be told about all the
++# built-in objects as well as all the external modules -- but Makefile.modfinal
++# only knows about the latter.  So the toplevel makefile emits the names of the
++# built-in objects into a temporary file, which is then catted and its contents
++# used as prerequisites by this rule.
++#
++# We write the names of the object files to be scanned for CTF content into a
++# file, then use that, to avoid hitting command-line length limits.
++
++ifeq ($(KBUILD_EXTMOD),)
++ctf-modules := $(shell find . -name '*.ko.ctf' -print)
++quiet_cmd_ctfa_raw = CTFARAW
++      cmd_ctfa_raw = scripts/ctf/ctfarchive $@ .tmp_objects.builtin modules.builtin.objs $(ctf-filelist)
++ctf-builtins := .tmp_objects.builtin
++ctf-filelist := .tmp_ctf.filelist
++ctf-filelist-raw := .tmp_ctf.filelist.raw
++
++define module-ctf-postlink =
++	$(OBJCOPY) --only-section=.ctf $(1).tmp $(1).ctf && \
++	$(OBJCOPY) --remove-section=.ctf $(1).tmp $(1) && rm -f $(1).tmp
++endef
++
++# Split a list up like shell xargs does.
++define xargs =
++$(1) $(wordlist 1,1024,$(2))
++$(if $(word 1025,$(2)),$(call xargs,$(1),$(wordlist 1025,$(words $(2)),$(2))))
++endef
++
++$(ctf-filelist-raw): $(ctf-builtins) $(ctf-modules)
++	@rm -f $(ctf-filelist-raw);
++	$(call xargs,@printf "%s\n" >> $(ctf-filelist-raw),$^)
++	@touch $(ctf-filelist-raw)
++
++$(ctf-filelist): $(ctf-filelist-raw)
++	@rm -f $(ctf-filelist);
++	@cat $(ctf-filelist-raw) | while read -r obj; do \
++		case $$obj in \
++		$(ctf-builtins)) cat $$obj >> $(ctf-filelist);; \
++		*.a) ar t $$obj > $(ctf-filelist);; \
++		*.builtin) cat $$obj >> $(ctf-filelist);; \
++		*) echo "$$obj" >> $(ctf-filelist);; \
++		esac; \
++	done
++	@touch $(ctf-filelist)
++
++# The raw CTF depends on the output CTF file list, and that depends
++# on the .ko files for the modules.
++.tmp_vmlinux.ctfa.raw: $(ctf-filelist) FORCE
++	$(call if_changed,ctfa_raw)
++
++quiet_cmd_ctfa = CTFA
++      cmd_ctfa = { echo 'int main () { return 0; } ' | \
++		gcc -x c -c -o $<.stub -; \
++	$(OBJCOPY) '--remove-section=.*' --add-section=.ctf=$< \
++		 $<.stub $@; }
++
++# The CTF itself is an ELF executable with one section: the CTF.  This lets
++# objdump work on it, at minimal size cost.
++vmlinux.ctfa: .tmp_vmlinux.ctfa.raw FORCE
++	$(call if_changed,ctfa)
++
++targets += vmlinux.ctfa
++
++endif		# KBUILD_EXTMOD
++
++endif		# CONFIG_CTF
++
+diff --git a/scripts/Makefile.ctfa-toplevel b/scripts/Makefile.ctfa-toplevel
+new file mode 100644
+index 0000000000000..210bef3854e9b
+--- /dev/null
++++ b/scripts/Makefile.ctfa-toplevel
+@@ -0,0 +1,54 @@
++# SPDX-License-Identifier: GPL-2.0-only
++# ===========================================================================
++# CTF rules for the top-level makefile only
++# ===========================================================================
++
++KBUILD_CFLAGS	+= $(call cc-option,-gctf)
++KBUILD_LDFLAGS	+= $(call ld-option, --ctf-variables)
++
++ifeq ($(KBUILD_EXTMOD),)
++
++# CTF generation for in-tree code (modules, built-in and not, and core kernel)
++
++# This contains all the object files that are built directly into the
++# kernel (including built-in modules), for consumption by ctfarchive in
++# Makefile.modfinal.
++# This is made doubly annoying by the presence of '.o' files which are actually
++# thin ar archives, and the need to support file(1) versions too old to
++# recognize them as archives at all.  (So we assume that everything that is not
++# an ELF object is an archive.)
++ifeq ($(SRCARCH),x86)
++.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),bzImage) FORCE
++else
++ifeq ($(SRCARCH),arm64)
++.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),Image) FORCE
++else
++.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),vmlinux) FORCE
++endif
++endif
++	@echo $(KBUILD_VMLINUX_OBJS) | \
++		tr " " "\n" | grep "\.o$$" | xargs -r file | \
++		grep ELF | cut -d: -f1 > .tmp_objects.builtin
++	@for archive in $$(echo $(KBUILD_VMLINUX_OBJS) |\
++		tr " " "\n" | xargs -r file | grep -v ELF | cut -d: -f1); do \
++		$(AR) t "$$archive" >> .tmp_objects.builtin; \
++	done
++
++ctf: vmlinux.ctfa
++PHONY += ctf ctf_install
++
++# Making CTF needs the builtin files.  We need to force everything to be
++# built if not already done, since we need the .o files for the machinery
++# above to work.
++vmlinux.ctfa: KBUILD_BUILTIN := 1
++vmlinux.ctfa: modules.builtin.objs .tmp_objects.builtin
++	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modfinal vmlinux.ctfa
++
++ctf_install:
++	$(Q)mkdir -p $(MODLIB)/kernel
++	@ln -sf $(abspath $(srctree)) $(MODLIB)/source
++	$(Q)cp -f $(objtree)/vmlinux.ctfa $(MODLIB)/kernel
++
++CLEAN_FILES += vmlinux.ctfa
++
++endif
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index 7f8ec77bf35c9..97b0ea2eee9d4 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -118,6 +118,8 @@ modname-multi = $(sort $(foreach m,$(multi-obj-ym),\
+ __modname = $(or $(modname-multi),$(basetarget))
+ 
+ modname = $(subst $(space),:,$(__modname))
++modname-objs = $($(modname)-objs) $($(modname)-y) $($(modname)-Y)
++modname-objs-prefixed = $(sort $(strip $(addprefix $(obj)/, $(modname-objs))))
+ modfile = $(addprefix $(obj)/,$(__modname))
+ 
+ # target with $(obj)/ and its suffix stripped
+@@ -131,7 +133,8 @@ name-fix = $(call stringify,$(call name-fix-token,$1))
+ basename_flags = -DKBUILD_BASENAME=$(call name-fix,$(basetarget))
+ modname_flags  = -DKBUILD_MODNAME=$(call name-fix,$(modname)) \
+ 		 -D__KBUILD_MODNAME=kmod_$(call name-fix-token,$(modname))
+-modfile_flags  = -DKBUILD_MODFILE=$(call stringify,$(modfile))
++modfile_flags  = -DKBUILD_MODFILE=$(call stringify,$(modfile)) \
++                 -DKBUILD_MODOBJS=$(call stringify,$(modfile).o:$(subst $(space),|,$(modname-objs-prefixed)))
+ 
+ _c_flags       = $(filter-out $(CFLAGS_REMOVE_$(target-stem).o), \
+                      $(filter-out $(ccflags-remove-y), \
+@@ -238,7 +241,7 @@ modkern_rustflags =                                              \
+ 
+ modkern_aflags = $(if $(part-of-module),				\
+ 			$(KBUILD_AFLAGS_MODULE) $(AFLAGS_MODULE),	\
+-			$(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL))
++			$(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL) $(modfile_flags))
+ 
+ c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+ 		 -include $(srctree)/include/linux/compiler_types.h       \
+@@ -248,7 +251,7 @@ c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+ rust_flags     = $(_rust_flags) $(modkern_rustflags) @$(objtree)/include/generated/rustc_cfg
+ 
+ a_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+-		 $(_a_flags) $(modkern_aflags)
++		 $(_a_flags) $(modkern_aflags) $(modname_flags)
+ 
+ cpp_flags      = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+ 		 $(_cpp_flags)
+diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
+index 3bec9043e4f38..06807e403d162 100644
+--- a/scripts/Makefile.modfinal
++++ b/scripts/Makefile.modfinal
+@@ -30,11 +30,16 @@ quiet_cmd_cc_o_c = CC [M]  $@
+ %.mod.o: %.mod.c FORCE
+ 	$(call if_changed_dep,cc_o_c)
+ 
++# for module-ctf-postlink
++include $(srctree)/scripts/Makefile.ctfa
++
+ quiet_cmd_ld_ko_o = LD [M]  $@
+       cmd_ld_ko_o +=							\
+ 	$(LD) -r $(KBUILD_LDFLAGS)					\
+ 		$(KBUILD_LDFLAGS_MODULE) $(LDFLAGS_MODULE)		\
+-		-T scripts/module.lds -o $@ $(filter %.o, $^)
++		-T scripts/module.lds $(LDFLAGS_$(modname)) -o $@.tmp	\
++		$(filter %.o, $^) &&					\
++	$(call module-ctf-postlink,$@)					\
+ 
+ quiet_cmd_btf_ko = BTF [M] $@
+       cmd_btf_ko = 							\
+diff --git a/scripts/Makefile.modinst b/scripts/Makefile.modinst
+index 0afd75472679f..e668469ce098c 100644
+--- a/scripts/Makefile.modinst
++++ b/scripts/Makefile.modinst
+@@ -30,10 +30,12 @@ $(MODLIB)/modules.order: modules.order FORCE
+ quiet_cmd_install_modorder = INSTALL $@
+       cmd_install_modorder = sed 's:^\(.*\)\.o$$:kernel/\1.ko:' $< > $@
+ 
+-# Install modules.builtin(.modinfo) even when CONFIG_MODULES is disabled.
+-install-y += $(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo)
++# Install modules.builtin(.modinfo,.ranges,.objs) even when CONFIG_MODULES is disabled.
++install-y += $(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo modules.builtin.objs)
+ 
+-$(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo): $(MODLIB)/%: % FORCE
++install-$(CONFIG_BUILTIN_MODULE_RANGES) += $(MODLIB)/modules.builtin.ranges
++
++$(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo modules.builtin.ranges modules.builtin.objs): $(MODLIB)/%: % FORCE
+ 	$(call cmd,install)
+ 
+ endif
+diff --git a/scripts/Makefile.vmlinux b/scripts/Makefile.vmlinux
+index 49946cb968440..7e8b703799c85 100644
+--- a/scripts/Makefile.vmlinux
++++ b/scripts/Makefile.vmlinux
+@@ -33,6 +33,24 @@ targets += vmlinux
+ vmlinux: scripts/link-vmlinux.sh vmlinux.o $(KBUILD_LDS) FORCE
+ 	+$(call if_changed_dep,link_vmlinux)
+ 
++# module.builtin.ranges
++# ---------------------------------------------------------------------------
++ifdef CONFIG_BUILTIN_MODULE_RANGES
++__default: modules.builtin.ranges
++
++quiet_cmd_modules_builtin_ranges = GEN     $@
++      cmd_modules_builtin_ranges = $(real-prereqs) > $@
++
++targets += modules.builtin.ranges
++modules.builtin.ranges: $(srctree)/scripts/generate_builtin_ranges.awk \
++			modules.builtin vmlinux.map vmlinux.o.map FORCE
++	$(call if_changed,modules_builtin_ranges)
++
++vmlinux.map: vmlinux
++	@:
++
++endif
++
+ # Add FORCE to the prequisites of a target to force it to be always rebuilt.
+ # ---------------------------------------------------------------------------
+ 
+diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
+index 6de297916ce68..48ba44ce3b64b 100644
+--- a/scripts/Makefile.vmlinux_o
++++ b/scripts/Makefile.vmlinux_o
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ 
+ PHONY := __default
+-__default: vmlinux.o modules.builtin.modinfo modules.builtin
++__default: vmlinux.o modules.builtin.modinfo modules.builtin modules.builtin.objs
+ 
+ include include/config/auto.conf
+ include $(srctree)/scripts/Kbuild.include
+@@ -27,6 +27,18 @@ ifdef CONFIG_LTO_CLANG
+ initcalls-lds := .tmp_initcalls.lds
+ endif
+ 
++# Generate a linker script to delete CTF sections
++# -----------------------------------------------
++
++quiet_cmd_gen_remove_ctf.lds = GEN     $@
++      cmd_gen_remove_ctf.lds = \
++	$(LD) -r --verbose | awk -f $(real-prereqs) > $@
++
++.tmp_remove-ctf.lds: $(srctree)/scripts/remove-ctf-lds.awk FORCE
++	$(call if_changed,gen_remove_ctf.lds)
++
++targets := .tmp_remove-ctf.lds
++
+ # objtool for vmlinux.o
+ # ---------------------------------------------------------------------------
+ #
+@@ -42,13 +54,18 @@ vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION)	+= --noinstr \
+ 
+ objtool-args = $(vmlinux-objtool-args-y) --link
+ 
+-# Link of vmlinux.o used for section mismatch analysis
++# Link of vmlinux.o used for section mismatch analysis: we also strip the CTF
++# section out at this stage, since ctfarchive gets it from the underlying object
++# files and linking it further is a waste of time.
+ # ---------------------------------------------------------------------------
+ 
++vmlinux-o-ld-args-$(CONFIG_BUILTIN_MODULE_RANGES)	+= -Map=$@.map
++
+ quiet_cmd_ld_vmlinux.o = LD      $@
+       cmd_ld_vmlinux.o = \
+ 	$(LD) ${KBUILD_LDFLAGS} -r -o $@ \
+-	$(addprefix -T , $(initcalls-lds)) \
++	$(vmlinux-o-ld-args-y) \
++	$(addprefix -T , $(initcalls-lds)) -T .tmp_remove-ctf.lds \
+ 	--whole-archive vmlinux.a --no-whole-archive \
+ 	--start-group $(KBUILD_VMLINUX_LIBS) --end-group \
+ 	$(cmd_objtool)
+@@ -58,7 +75,7 @@ define rule_ld_vmlinux.o
+ 	$(call cmd,gen_objtooldep)
+ endef
+ 
+-vmlinux.o: $(initcalls-lds) vmlinux.a $(KBUILD_VMLINUX_LIBS) FORCE
++vmlinux.o: $(initcalls-lds) vmlinux.a $(KBUILD_VMLINUX_LIBS) .tmp_remove-ctf.lds FORCE
+ 	$(call if_changed_rule,ld_vmlinux.o)
+ 
+ targets += vmlinux.o
+@@ -87,6 +104,19 @@ targets += modules.builtin
+ modules.builtin: modules.builtin.modinfo FORCE
+ 	$(call if_changed,modules_builtin)
+ 
++# module.builtin.objs
++# ---------------------------------------------------------------------------
++quiet_cmd_modules_builtin_objs = GEN     $@
++      cmd_modules_builtin_objs = \
++	tr '\0' '\n' < $< | \
++	sed -n 's/^[[:alnum:]:_]*\.objs=//p' | \
++	tr ' ' '\n' | uniq | sed -e 's|:|: |' -e 's:|: :g' | \
++	tr -s ' ' > $@
++
++targets += modules.builtin.objs
++modules.builtin.objs: modules.builtin.modinfo FORCE
++	$(call if_changed,modules_builtin_objs)
++
+ # Add FORCE to the prequisites of a target to force it to be always rebuilt.
+ # ---------------------------------------------------------------------------
+ 
+diff --git a/scripts/ctf/.gitignore b/scripts/ctf/.gitignore
+new file mode 100644
+index 0000000000000..6a0eb1c3ceeab
+--- /dev/null
++++ b/scripts/ctf/.gitignore
+@@ -0,0 +1 @@
++ctfarchive
+diff --git a/scripts/ctf/Makefile b/scripts/ctf/Makefile
+new file mode 100644
+index 0000000000000..3b83f93bb9f9a
+--- /dev/null
++++ b/scripts/ctf/Makefile
+@@ -0,0 +1,5 @@
++ifdef CONFIG_CTF
++hostprogs-always-y	:= ctfarchive
++ctfarchive-objs		:= ctfarchive.o modules_builtin.o
++HOSTLDLIBS_ctfarchive := -lctf
++endif
+diff --git a/scripts/ctf/ctfarchive.c b/scripts/ctf/ctfarchive.c
+new file mode 100644
+index 0000000000000..92cc4912ed0ee
+--- /dev/null
++++ b/scripts/ctf/ctfarchive.c
+@@ -0,0 +1,413 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * ctfarchive.c: Read in CTF extracted from generated object files from a
++ * specified directory and generate a CTF archive whose members are the
++ * deduplicated CTF derived from those object files, split up by kernel
++ * module.
++ *
++ * Copyright (c) 2019, 2023, Oracle and/or its affiliates.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#define _GNU_SOURCE 1
++#include <errno.h>
++#include <stdio.h>
++#include <stdlib.h>
++#include <string.h>
++#include <ctf-api.h>
++#include "modules_builtin.h"
++
++static ctf_file_t *output;
++
++static int private_ctf_link_add_ctf(ctf_file_t *fp,
++				    const char *name)
++{
++#if !defined (CTF_LINK_FINAL)
++	return ctf_link_add_ctf(fp, NULL, name);
++#else
++	/* Non-upstreamed, erroneously-broken API.  */
++	return ctf_link_add_ctf(fp, NULL, name, NULL, 0);
++#endif
++}
++
++/*
++ * Add a file to the link.
++ */
++static void add_to_link(const char *fn)
++{
++	if (private_ctf_link_add_ctf(output, fn) < 0)
++	{
++		fprintf(stderr, "Cannot add CTF file %s: %s\n", fn,
++			ctf_errmsg(ctf_errno(output)));
++		exit(1);
++	}
++}
++
++struct from_to
++{
++	char *from;
++	char *to;
++};
++
++/*
++ * The world's stupidest hash table of FROM -> TO.
++ */
++static struct from_to **from_tos[256];
++static size_t alloc_from_tos[256];
++static size_t num_from_tos[256];
++
++static unsigned char from_to_hash(const char *from)
++{
++	unsigned char hval = 0;
++
++	const char *p;
++	for (p = from; *p; p++)
++		hval += *p;
++
++	return hval;
++}
++
++/*
++ * Note that we will need to add a CU mapping later on.
++ *
++ * Present purely to work around a binutils bug that stops
++ * ctf_link_add_cu_mapping() working right when called repeatedly
++ * with the same FROM.
++ */
++static int add_cu_mapping(const char *from, const char *to)
++{
++	ssize_t i, j;
++
++	i = from_to_hash(from);
++
++	for (j = 0; j < num_from_tos[i]; j++)
++		if (strcmp(from, from_tos[i][j]->from) == 0) {
++			char *tmp;
++
++			free(from_tos[i][j]->to);
++			tmp = strdup(to);
++			if (!tmp)
++				goto oom;
++			from_tos[i][j]->to = tmp;
++			return 0;
++		    }
++
++	if (num_from_tos[i] >= alloc_from_tos[i]) {
++		struct from_to **tmp;
++		if (alloc_from_tos[i] < 16)
++			alloc_from_tos[i] = 16;
++		else
++			alloc_from_tos[i] *= 2;
++
++		tmp = realloc(from_tos[i], alloc_from_tos[i] * sizeof(struct from_to *));
++		if (!tmp)
++			goto oom;
++
++		from_tos[i] = tmp;
++	}
++
++	j = num_from_tos[i];
++	from_tos[i][j] = malloc(sizeof(struct from_to));
++	if (from_tos[i][j] == NULL)
++		goto oom;
++	from_tos[i][j]->from = strdup(from);
++	from_tos[i][j]->to = strdup(to);
++	if (!from_tos[i][j]->from || !from_tos[i][j]->to)
++		goto oom;
++	num_from_tos[i]++;
++
++	return 0;
++  oom:
++	fprintf(stderr,
++		"out of memory in add_cu_mapping\n");
++	exit(1);
++}
++
++/*
++ * Finally tell binutils to add all the CU mappings, with duplicate FROMs
++ * replaced with the most recent one.
++ */
++static void commit_cu_mappings(void)
++{
++	ssize_t i, j;
++
++	for (i = 0; i < 256; i++)
++		for (j = 0; j < num_from_tos[i]; j++)
++			ctf_link_add_cu_mapping(output, from_tos[i][j]->from,
++						from_tos[i][j]->to);
++}
++
++/*
++ * Add a CU mapping to the link.
++ *
++ * CU mappings for built-in modules are added by suck_in_modules, below: here,
++ * we only want to add mappings for names ending in '.ko.ctf', i.e. external
++ * modules, which appear only in the filelist (since they are not built-in).
++ * The pathnames are stripped off because modules don't have any, and hyphens
++ * are translated into underscores.
++ */
++static void add_cu_mappings(const char *fn)
++{
++	const char *last_slash;
++	const char *modname = fn;
++	char *dynmodname = NULL;
++	char *dash;
++	size_t n;
++
++	last_slash = strrchr(modname, '/');
++	if (last_slash)
++		last_slash++;
++	else
++		last_slash = modname;
++	modname = last_slash;
++	if (strchr(modname, '-') != NULL)
++	{
++		dynmodname = strdup(last_slash);
++		dash = dynmodname;
++		while (dash != NULL) {
++			dash = strchr(dash, '-');
++			if (dash != NULL)
++				*dash = '_';
++		}
++		modname = dynmodname;
++	}
++
++	n = strlen(modname);
++	if (strcmp(modname + n - strlen(".ko.ctf"), ".ko.ctf") == 0) {
++		char *mod;
++
++		n -= strlen(".ko.ctf");
++		mod = strndup(modname, n);
++		add_cu_mapping(fn, mod);
++		free(mod);
++	}
++	free(dynmodname);
++}
++
++/*
++ * Add the passed names as mappings to "vmlinux".
++ */
++static void add_builtins(const char *fn)
++{
++	if (add_cu_mapping(fn, "vmlinux") < 0)
++	{
++		fprintf(stderr, "Cannot add CTF CU mapping from %s to \"vmlinux\"\n",
++			ctf_errmsg(ctf_errno(output)));
++		exit(1);
++	}
++}
++
++/*
++ * Do something with a file, line by line.
++ */
++static void suck_in_lines(const char *filename, void (*func)(const char *line))
++{
++	FILE *f;
++	char *line = NULL;
++	size_t line_size = 0;
++
++	f = fopen(filename, "r");
++	if (f == NULL) {
++		fprintf(stderr, "Cannot open %s: %s\n", filename,
++			strerror(errno));
++		exit(1);
++	}
++
++	while (getline(&line, &line_size, f) >= 0) {
++		size_t len = strlen(line);
++
++		if (len == 0)
++			continue;
++
++		if (line[len-1] == '\n')
++			line[len-1] = '\0';
++
++		func(line);
++	}
++	free(line);
++
++	if (ferror(f)) {
++		fprintf(stderr, "Error reading from %s: %s\n", filename,
++			strerror(errno));
++		exit(1);
++	}
++
++	fclose(f);
++}
++
++/*
++ * Pull in modules.builtin.objs and turn it into CU mappings.
++ */
++static void suck_in_modules(const char *modules_builtin_name)
++{
++	struct modules_builtin_iter *i;
++	char *module_name = NULL;
++	char **paths;
++
++	i = modules_builtin_iter_new(modules_builtin_name);
++	if (i == NULL) {
++		fprintf(stderr, "Cannot iterate over builtin module file.\n");
++		exit(1);
++	}
++
++	while ((paths = modules_builtin_iter_next(i, &module_name)) != NULL) {
++		size_t j;
++
++		for (j = 0; paths[j] != NULL; j++) {
++			char *alloc = NULL;
++			char *path = paths[j];
++			/*
++			 * If the name doesn't start in ./, add it, to match the names
++			 * passed to add_builtins.
++			 */
++			if (strncmp(paths[j], "./", 2) != 0) {
++				char *p;
++				if ((alloc = malloc(strlen(paths[j]) + 3)) == NULL) {
++					fprintf(stderr, "Cannot allocate memory for "
++						"builtin module object name %s.\n",
++						paths[j]);
++					exit(1);
++				}
++				p = alloc;
++				p = stpcpy(p, "./");
++				p = stpcpy(p, paths[j]);
++				path = alloc;
++			}
++			if (add_cu_mapping(path, module_name) < 0) {
++				fprintf(stderr, "Cannot add path -> module mapping for "
++					"%s -> %s: %s\n", path, module_name,
++					ctf_errmsg(ctf_errno(output)));
++				exit(1);
++			}
++			free (alloc);
++		}
++		free(paths);
++	}
++	free(module_name);
++	modules_builtin_iter_free(i);
++}
++
++/*
++ * Strip the leading .ctf. off all the module names: transform the default name
++ * from _CTF_SECTION into shared_ctf, and chop any trailing .ctf off (since that
++ * derives from the intermediate file used to keep the CTF out of the final
++ * module).
++ */
++static char *transform_module_names(ctf_file_t *fp __attribute__((__unused__)),
++				    const char *name,
++				    void *arg __attribute__((__unused__)))
++{
++	if (strcmp(name, ".ctf") == 0)
++		return strdup("shared_ctf");
++
++	if (strncmp(name, ".ctf", 4) == 0) {
++		size_t n = strlen (name);
++		if (strcmp(name + n - 4, ".ctf") == 0)
++			n -= 4;
++		return strndup(name + 4, n - 4);
++	}
++	return NULL;
++}
++
++int main(int argc, char *argv[])
++{
++	int err;
++	const char *output_file;
++	unsigned char *file_data = NULL;
++	size_t file_size;
++	FILE *fp;
++
++	if (argc != 5) {
++		fprintf(stderr, "Syntax: ctfarchive output-file objects.builtin modules.builtin\n");
++		fprintf(stderr, "                   filelist\n");
++		exit(1);
++	}
++
++	output_file = argv[1];
++
++	/*
++	 * First pull in the input files and add them to the link.
++	 */
++
++	output = ctf_create(&err);
++	if (!output) {
++		fprintf(stderr, "Cannot create output CTF archive: %s\n",
++			ctf_errmsg(err));
++		return 1;
++	}
++
++	suck_in_lines(argv[4], add_to_link);
++
++	/*
++	 * Make sure that, even if all their types are shared, all modules have
++	 * a ctf member that can be used as a child of the shared CTF.
++	 */
++	suck_in_lines(argv[4], add_cu_mappings);
++
++	/*
++	 * Then pull in the builtin objects list and add them as
++	 * mappings to "vmlinux".
++	 */
++
++	suck_in_lines(argv[2], add_builtins);
++
++	/*
++	 * Finally, pull in the object -> module mapping and add it
++	 * as appropriate mappings.
++	 */
++	suck_in_modules(argv[3]);
++
++	/*
++	 * Commit the added CU mappings.
++	 */
++	commit_cu_mappings();
++
++	/*
++	 * Arrange to fix up the module names.
++	 */
++	ctf_link_set_memb_name_changer(output, transform_module_names, NULL);
++
++	/*
++	 * Do the link.
++	 */
++	if (ctf_link(output, CTF_LINK_SHARE_DUPLICATED |
++                     CTF_LINK_EMPTY_CU_MAPPINGS) < 0)
++		goto ctf_err;
++
++	/*
++	 * Write the output.
++	 */
++
++	file_data = ctf_link_write(output, &file_size, 4096);
++	if (!file_data)
++		goto ctf_err;
++
++	fp = fopen(output_file, "w");
++	if (!fp)
++		goto err;
++
++	while ((err = fwrite(file_data, file_size, 1, fp)) == 0);
++	if (ferror(fp)) {
++		errno = ferror(fp);
++		goto err;
++	}
++	if (fclose(fp) < 0)
++		goto err;
++	free(file_data);
++	ctf_file_close(output);
++
++	return 0;
++err:
++	free(file_data);
++	fprintf(stderr, "Cannot create output CTF archive: %s\n",
++		strerror(errno));
++	return 1;
++ctf_err:
++	fprintf(stderr, "Cannot create output CTF archive: %s\n",
++		ctf_errmsg(ctf_errno(output)));
++	return 1;
++}
+diff --git a/scripts/ctf/modules_builtin.c b/scripts/ctf/modules_builtin.c
+new file mode 100644
+index 0000000000000..10af2bbc80e0c
+--- /dev/null
++++ b/scripts/ctf/modules_builtin.c
+@@ -0,0 +1,2 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#include "../modules_builtin.c"
+diff --git a/scripts/ctf/modules_builtin.h b/scripts/ctf/modules_builtin.h
+new file mode 100644
+index 0000000000000..5e0299e5600c2
+--- /dev/null
++++ b/scripts/ctf/modules_builtin.h
+@@ -0,0 +1,2 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#include "../modules_builtin.h"
+diff --git a/scripts/generate_builtin_ranges.awk b/scripts/generate_builtin_ranges.awk
+new file mode 100755
+index 0000000000000..51ae0458ffbdd
+--- /dev/null
++++ b/scripts/generate_builtin_ranges.awk
+@@ -0,0 +1,516 @@
++#!/usr/bin/gawk -f
++# SPDX-License-Identifier: GPL-2.0
++# generate_builtin_ranges.awk: Generate address range data for builtin modules
++# Written by Kris Van Hees <kris.van.hees@oracle.com>
++#
++# Usage: generate_builtin_ranges.awk modules.builtin vmlinux.map \
++#		vmlinux.o.map [ <build-dir> ] > modules.builtin.ranges
++#
++
++# Return the module name(s) (if any) associated with the given object.
++#
++# If we have seen this object before, return information from the cache.
++# Otherwise, retrieve it from the corresponding .cmd file.
++#
++function get_module_info(fn, mod, obj, s) {
++	if (fn in omod)
++		return omod[fn];
++
++	if (match(fn, /\/[^/]+$/) == 0)
++		return "";
++
++	obj = fn;
++	mod = "";
++	fn = kdir "/" substr(fn, 1, RSTART) "." substr(fn, RSTART + 1) ".cmd";
++	if (getline s <fn == 1) {
++		if (match(s, /DKBUILD_MODFILE=['"]+[^'"]+/) > 0) {
++			mod = substr(s, RSTART + 16, RLENGTH - 16);
++			gsub(/['"]/, "", mod);
++		} else if (match(s, /RUST_MODFILE=[^ ]+/) > 0)
++			mod = substr(s, RSTART + 13, RLENGTH - 13);
++	}
++	close(fn);
++
++	# A single module (common case) also reflects objects that are not part
++	# of a module.  Some of those objects have names that are also a module
++	# name (e.g. core).  We check the associated module file name, and if
++	# they do not match, the object is not part of a module.
++	if (mod !~ / /) {
++		if (!(mod in mods))
++			mod = "";
++	}
++
++	gsub(/([^/ ]*\/)+/, "", mod);
++	gsub(/-/, "_", mod);
++
++	# At this point, mod is a single (valid) module name, or a list of
++	# module names (that do not need validation).
++	omod[obj] = mod;
++
++	return mod;
++}
++
++# Update the ranges entry for the given module 'mod' in section 'osect'.
++#
++# We use a modified absolute start address (soff + base) as index because we
++# may need to insert an anchor record later that must be at the start of the
++# section data, and the first module may very well start at the same address.
++# So, we use (addr << 1) + 1 to allow a possible anchor record to be placed at
++# (addr << 1).  This is safe because the index is only used to sort the entries
++# before writing them out.
++#
++function update_entry(osect, mod, soff, eoff, sect, idx) {
++	sect = sect_in[osect];
++	idx = (soff + sect_base[osect]) * 2 + 1;
++	entries[idx] = sprintf("%s %08x-%08x %s", sect, soff, eoff, mod);
++	count[sect]++;
++}
++
++# Determine the kernel build directory to use (default is .).
++#
++BEGIN {
++	if (ARGC > 4) {
++		kdir = ARGV[ARGC - 1];
++		ARGV[ARGC - 1] = "";
++	} else
++		kdir = ".";
++}
++
++# (1) Build a lookup map of built-in module names.
++#
++# The first file argument is used as input (modules.builtin).
++#
++# Lines will be like:
++#	kernel/crypto/lzo-rle.ko
++# and we record the object name "crypto/lzo-rle".
++#
++ARGIND == 1 {
++	sub(/kernel\//, "");			# strip off "kernel/" prefix
++	sub(/\.ko$/, "");			# strip off .ko suffix
++
++	mods[$1] = 1;
++	next;
++}
++
++# (2) Collect address information for each section.
++#
++# The second file argument is used as input (vmlinux.map).
++#
++# We collect the base address of the section in order to convert all addresses
++# in the section into offset values.
++#
++# We collect the address of the anchor (or first symbol in the section if there
++# is no explicit anchor) to allow users of the range data to calculate address
++# ranges based on the actual load address of the section in the running kernel.
++#
++# We collect the start address of any sub-section (section included in the top
++# level section being processed).  This is needed when the final linking was
++# done using vmlinux.a because then the list of objects contained in each
++# section is to be obtained from vmlinux.o.map.  The offset of the sub-section
++# is recorded here, to be used as an addend when processing vmlinux.o.map
++# later.
++#
++
++# Both GNU ld and LLVM lld linker map format are supported by converting LLVM
++# lld linker map records into equivalent GNU ld linker map records.
++#
++# The first record of the vmlinux.map file provides enough information to know
++# which format we are dealing with.
++#
++ARGIND == 2 && FNR == 1 && NF == 7 && $1 == "VMA" && $7 == "Symbol" {
++	map_is_lld = 1;
++	if (dbg)
++		printf "NOTE: %s uses LLVM lld linker map format\n", FILENAME >"/dev/stderr";
++	next;
++}
++
++# (LLD) Convert a section record from lld format to ld format.
++#
++# lld: ffffffff82c00000          2c00000   2493c0  8192 .data
++#  ->
++# ld:  .data           0xffffffff82c00000   0x2493c0 load address 0x0000000002c00000
++#
++ARGIND == 2 && map_is_lld && NF == 5 && /[0-9] [^ ]+$/ {
++	$0 = $5 " 0x"$1 " 0x"$3 " load address 0x"$2;
++}
++
++# (LLD) Convert an anchor record from lld format to ld format.
++#
++# lld: ffffffff81000000          1000000        0     1         _text = .
++#  ->
++# ld:                  0xffffffff81000000                _text = .
++#
++ARGIND == 2 && map_is_lld && !anchor && NF == 7 && raw_addr == "0x"$1 && $6 == "=" && $7 == "." {
++	$0 = "  0x"$1 " " $5 " = .";
++}
++
++# (LLD) Convert an object record from lld format to ld format.
++#
++# lld:            11480            11480     1f07    16         vmlinux.a(arch/x86/events/amd/uncore.o):(.text)
++#  ->
++# ld:   .text          0x0000000000011480     0x1f07 arch/x86/events/amd/uncore.o
++#
++ARGIND == 2 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
++	gsub(/\)/, "");
++	sub(/ vmlinux\.a\(/, " ");
++	sub(/:\(/, " ");
++	$0 = " "$6 " 0x"$1 " 0x"$3 " " $5;
++}
++
++# (LLD) Convert a symbol record from lld format to ld format.
++#
++# We only care about these while processing a section for which no anchor has
++# been determined yet.
++#
++# lld: ffffffff82a859a4          2a859a4        0     1                 btf_ksym_iter_id
++#  ->
++# ld:                  0xffffffff82a859a4                btf_ksym_iter_id
++#
++ARGIND == 2 && map_is_lld && sect && !anchor && NF == 5 && $5 ~ /^[_A-Za-z][_A-Za-z0-9]*$/ {
++	$0 = "  0x"$1 " " $5;
++}
++
++# (LLD) We do not need any other lld linker map records.
++#
++ARGIND == 2 && map_is_lld && /^[0-9a-f]{16} / {
++	next;
++}
++
++# (LD) Section records with just the section name at the start of the line
++#      need to have the next line pulled in to determine whether it is a
++#      loadable section.  If it is, the next line will contain a hex value
++#      as its first and second items.
++#
++ARGIND == 2 && !map_is_lld && NF == 1 && /^[^ ]/ {
++	s = $0;
++	getline;
++	if ($1 !~ /^0x/ || $2 !~ /^0x/)
++		next;
++
++	$0 = s " " $0;
++}
++
++# (LD) Object records with just the section name denote records with a long
++#      section name for which the remainder of the record can be found on the
++#      next line.
++#
++# (This is also needed for vmlinux.o.map, when used.)
++#
++ARGIND >= 2 && !map_is_lld && NF == 1 && /^ [^ \*]/ {
++	s = $0;
++	getline;
++	$0 = s " " $0;
++}
++
++# Beginning a new section - done with the previous one (if any).
++#
++ARGIND == 2 && /^[^ ]/ {
++	sect = 0;
++}
++
++# Process a loadable section (we only care about .-sections).
++#
++# Record the section name and its base address.
++# We also record the raw (non-stripped) address of the section because it can
++# be used to identify an anchor record.
++#
++# Note:
++# Since some AWK implementations cannot handle large integers, we strip off the
++# first 4 hex digits from the address.  This is safe because the kernel space
++# is not large enough for addresses to extend into those digits.  The portion
++# to strip off is stored in addr_prefix as a regexp, so further clauses can
++# perform a simple substitution to do the address stripping.
++#
++ARGIND == 2 && /^\./ {
++	# Explicitly ignore a few sections that are not relevant here.
++	if ($1 ~ /^\.orc_/ || $1 ~ /_sites$/ || $1 ~ /\.percpu/)
++		next;
++
++	# Sections with a 0-address can be ignored as well.
++	if ($2 ~ /^0x0+$/)
++		next;
++
++	raw_addr = $2;
++	addr_prefix = "^" substr($2, 1, 6);
++	base = $2;
++	sub(addr_prefix, "0x", base);
++	base = strtonum(base);
++	sect = $1;
++	anchor = 0;
++	sect_base[sect] = base;
++	sect_size[sect] = strtonum($3);
++
++	if (dbg)
++		printf "[%s] BASE   %016x\n", sect, base >"/dev/stderr";
++
++	next;
++}
++
++# If we are not in a section we care about, we ignore the record.
++#
++ARGIND == 2 && !sect {
++	next;
++}
++
++# Record the first anchor symbol for the current section.
++#
++# An anchor record for the section bears the same raw address as the section
++# record.
++#
++ARGIND == 2 && !anchor && NF == 4 && raw_addr == $1 && $3 == "=" && $4 == "." {
++	anchor = sprintf("%s %08x-%08x = %s", sect, 0, 0, $2);
++	sect_anchor[sect] = anchor;
++
++	if (dbg)
++		printf "[%s] ANCHOR %016x = %s (.)\n", sect, 0, $2 >"/dev/stderr";
++
++	next;
++}
++
++# If no anchor record was found for the current section, use the first symbol
++# in the section as anchor.
++#
++ARGIND == 2 && !anchor && NF == 2 && $1 ~ /^0x/ && $2 !~ /^0x/ {
++	addr = $1;
++	sub(addr_prefix, "0x", addr);
++	addr = strtonum(addr) - base;
++	anchor = sprintf("%s %08x-%08x = %s", sect, addr, addr, $2);
++	sect_anchor[sect] = anchor;
++
++	if (dbg)
++		printf "[%s] ANCHOR %016x = %s\n", sect, addr, $2 >"/dev/stderr";
++
++	next;
++}
++
++# The first occurrence of a section name in an object record establishes the
++# addend (often 0) for that section.  This information is needed to handle
++# sections that get combined in the final linking of vmlinux (e.g. .head.text
++# getting included at the start of .text).
++#
++# If the section does not have a base yet, use the base of the encapsulating
++# section.
++#
++ARGIND == 2 && sect && NF == 4 && /^ [^ \*]/ && !($1 in sect_addend) {
++	if (!($1 in sect_base)) {
++		sect_base[$1] = base;
++
++		if (dbg)
++			printf "[%s] BASE   %016x\n", $1, base >"/dev/stderr";
++	}
++
++	addr = $2;
++	sub(addr_prefix, "0x", addr);
++	addr = strtonum(addr);
++	sect_addend[$1] = addr - sect_base[$1];
++	sect_in[$1] = sect;
++
++	if (dbg)
++		printf "[%s] ADDEND %016x - %016x = %016x\n",  $1, addr, base, sect_addend[$1] >"/dev/stderr";
++
++	# If the object is vmlinux.o then we will need vmlinux.o.map to get the
++	# actual offsets of objects.
++	if ($4 == "vmlinux.o")
++		need_o_map = 1;
++}
++
++# (3) Collect offset ranges (relative to the section base address) for built-in
++# modules.
++#
++# If the final link was done using the actual objects, vmlinux.map contains all
++# the information we need (see section (3a)).
++# If linking was done using vmlinux.a as intermediary, we will need to process
++# vmlinux.o.map (see section (3b)).
++
++# (3a) Determine offset range info using vmlinux.map.
++#
++# Since we are already processing vmlinux.map, the top level section that is
++# being processed is already known.  If we do not have a base address for it,
++# we do not need to process records for it.
++#
++# Given the object name, we determine the module(s) (if any) that the current
++# object is associated with.
++#
++# If we were already processing objects for a (list of) module(s):
++#  - If the current object belongs to the same module(s), update the range data
++#    to include the current object.
++#  - Otherwise, ensure that the end offset of the range is valid.
++#
++# If the current object does not belong to a built-in module, ignore it.
++#
++# If it does, we add a new built-in module offset range record.
++#
++ARGIND == 2 && !need_o_map && /^ [^ ]/ && NF == 4 && $3 != "0x0" {
++	if (!(sect in sect_base))
++		next;
++
++	# Turn the address into an offset from the section base.
++	soff = $2;
++	sub(addr_prefix, "0x", soff);
++	soff = strtonum(soff) - sect_base[sect];
++	eoff = soff + strtonum($3);
++
++	# Determine which (if any) built-in modules the object belongs to.
++	mod = get_module_info($4);
++
++	# If we are processing a built-in module:
++	#   - If the current object is within the same module, we update its
++	#     entry by extending the range and move on
++	#   - Otherwise:
++	#       + If we are still processing within the same main section, we
++	#         validate the end offset against the start offset of the
++	#         current object (e.g. .rodata.str1.[18] objects are often
++	#         listed with an incorrect size in the linker map)
++	#       + Otherwise, we validate the end offset against the section
++	#         size
++	if (mod_name) {
++		if (mod == mod_name) {
++			mod_eoff = eoff;
++			update_entry(mod_sect, mod_name, mod_soff, eoff);
++
++			next;
++		} else if (sect == sect_in[mod_sect]) {
++			if (mod_eoff > soff)
++				update_entry(mod_sect, mod_name, mod_soff, soff);
++		} else {
++			v = sect_size[sect_in[mod_sect]];
++			if (mod_eoff > v)
++				update_entry(mod_sect, mod_name, mod_soff, v);
++		}
++	}
++
++	mod_name = mod;
++
++	# If we encountered an object that is not part of a built-in module, we
++	# do not need to record any data.
++	if (!mod)
++		next;
++
++	# At this point, we encountered the start of a new built-in module.
++	mod_name = mod;
++	mod_soff = soff;
++	mod_eoff = eoff;
++	mod_sect = $1;
++	update_entry($1, mod, soff, mod_eoff);
++
++	next;
++}
++
++# If we do not need to parse the vmlinux.o.map file, we are done.
++#
++ARGIND == 3 && !need_o_map {
++	if (dbg)
++		printf "Note: %s is not needed.\n", FILENAME >"/dev/stderr";
++	exit;
++}
++
++# (3) Collect offset ranges (relative to the section base address) for built-in
++# modules.
++#
++
++# (LLD) Convert an object record from lld format to ld format.
++#
++ARGIND == 3 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
++	gsub(/\)/, "");
++	sub(/:\(/, " ");
++
++	sect = $6;
++	if (!(sect in sect_addend))
++		next;
++
++	sub(/ vmlinux\.a\(/, " ");
++	$0 = " "sect " 0x"$1 " 0x"$3 " " $5;
++}
++
++# (3b) Determine offset range info using vmlinux.o.map.
++#
++# If we do not know an addend for the object's section, we are interested in
++# anything within that section.
++#
++# Determine the top-level section that the object's section was included in
++# during the final link.  This is the section name offset range data will be
++# associated with for this object.
++#
++# The remainder of the processing of the current object record follows the
++# procedure outlined in (3a).
++#
++ARGIND == 3 && /^ [^ ]/ && NF == 4 && $3 != "0x0" {
++	osect = $1;
++	if (!(osect in sect_addend))
++		next;
++
++	# We need to work with the main section.
++	sect = sect_in[osect];
++
++	# Turn the address into an offset from the section base.
++	soff = $2;
++	sub(addr_prefix, "0x", soff);
++	soff = strtonum(soff) + sect_addend[osect];
++	eoff = soff + strtonum($3);
++
++	# Determine which (if any) built-in modules the object belongs to.
++	mod = get_module_info($4);
++
++	# If we are processing a built-in module:
++	#   - If the current object is within the same module, we update its
++	#     entry by extending the range and move on
++	#   - Otherwise:
++	#       + If we are still processing within the same main section, we
++	#         validate the end offset against the start offset of the
++	#         current object (e.g. .rodata.str1.[18] objects are often
++	#         listed with an incorrect size in the linker map)
++	#       + Otherwise, we validate the end offset against the section
++	#         size
++	if (mod_name) {
++		if (mod == mod_name) {
++			mod_eoff = eoff;
++			update_entry(mod_sect, mod_name, mod_soff, eoff);
++
++			next;
++		} else if (sect == sect_in[mod_sect]) {
++			if (mod_eoff > soff)
++				update_entry(mod_sect, mod_name, mod_soff, soff);
++		} else {
++			v = sect_size[sect_in[mod_sect]];
++			if (mod_eoff > v)
++				update_entry(mod_sect, mod_name, mod_soff, v);
++		}
++	}
++
++	mod_name = mod;
++
++	# If we encountered an object that is not part of a built-in module, we
++	# do not need to record any data.
++	if (!mod)
++		next;
++
++	# At this point, we encountered the start of a new built-in module.
++	mod_name = mod;
++	mod_soff = soff;
++	mod_eoff = eoff;
++	mod_sect = osect;
++	update_entry(osect, mod, soff, mod_eoff);
++
++	next;
++}
++
++# (4) Generate the output.
++#
++# Anchor records are added for each section that contains offset range data
++# records.  They are added at an adjusted section base address (base << 1) to
++# ensure they come first in the sorted records (see update_entry() above for
++# more information).
++#
++# All entries are sorted by (adjusted) address to ensure that the output can be
++# parsed in strict ascending address order.
++#
++END {
++	for (sect in count) {
++		if (sect in sect_anchor)
++			entries[sect_base[sect] * 2] = sect_anchor[sect];
++	}
++
++	n = asorti(entries, indices);
++	for (i = 1; i <= n; i++)
++		print entries[indices[i]];
++}
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index f48d72d22dc2a..d7e6cd7781256 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -733,6 +733,7 @@ static const char *const section_white_list[] =
+ 	".comment*",
+ 	".debug*",
+ 	".zdebug*",		/* Compressed debug sections. */
++        ".ctf",			/* Type info */
+ 	".GCC.command.line",	/* record-gcc-switches */
+ 	".mdebug*",        /* alpha, score, mips etc. */
+ 	".pdr",            /* alpha, score, mips etc. */
+diff --git a/scripts/modules_builtin.c b/scripts/modules_builtin.c
+new file mode 100644
+index 0000000000000..df52932a4417b
+--- /dev/null
++++ b/scripts/modules_builtin.c
+@@ -0,0 +1,200 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * A simple modules_builtin reader.
++ *
++ * (C) 2014, 2022 Oracle, Inc.  All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#include <errno.h>
++#include <stdio.h>
++#include <stdlib.h>
++#include <string.h>
++
++#include "modules_builtin.h"
++
++/*
++ * Read a modules.builtin.objs file and translate it into a stream of
++ * name / module-name pairs.
++ */
++
++/*
++ * Construct a modules.builtin.objs iterator.
++ */
++struct modules_builtin_iter *
++modules_builtin_iter_new(const char *modules_builtin_file)
++{
++	struct modules_builtin_iter *i;
++
++	i = calloc(1, sizeof(struct modules_builtin_iter));
++	if (i == NULL)
++		return NULL;
++
++	i->f = fopen(modules_builtin_file, "r");
++
++	if (i->f == NULL) {
++		fprintf(stderr, "Cannot open builtin module file %s: %s\n",
++			modules_builtin_file, strerror(errno));
++		return NULL;
++	}
++
++	return i;
++}
++
++/*
++ * Iterate, returning a new null-terminated array of object file names, and a
++ * new dynamically-allocated module name.  (The module name passed in is freed.)
++ *
++ * The array of object file names should be freed by the caller: the strings it
++ * points to are owned by the iterator, and should not be freed.
++ */
++
++char ** __attribute__((__nonnull__))
++modules_builtin_iter_next(struct modules_builtin_iter *i, char **module_name)
++{
++	size_t npaths = 1;
++	char **module_paths;
++	char *last_slash;
++	char *last_dot;
++	char *trailing_linefeed;
++	char *object_name = i->line;
++	char *dash;
++	int composite = 0;
++
++	/*
++	 * Read in all module entries, computing the suffixless, pathless name
++	 * of the module and building the next arrayful of object file names for
++	 * return.
++	 *
++	 * Modules can consist of multiple files: in this case, the portion
++	 * before the colon is the path to the module (as before): the portion
++	 * after the colon is a space-separated list of files that should be
++	 * considered part of this module.  In this case, the portion before the
++	 * colon is an "object file" that does not actually exist: it is merged
++	 * into built-in.a without ever being written out.
++	 *
++	 * All module names have - translated to _, to match what is done to the
++	 * names of the same things when built as modules.
++	 */
++
++	/*
++	 * Reinvocation of exhausted iterator. Return NULL, once.
++	 */
++retry:
++	if (getline(&i->line, &i->line_size, i->f) < 0) {
++		if (ferror(i->f)) {
++			fprintf(stderr, "Error reading from modules_builtin file:"
++				" %s\n", strerror(errno));
++			exit(1);
++		}
++		rewind(i->f);
++		return NULL;
++	}
++
++	if (i->line[0] == '\0')
++		goto retry;
++
++	trailing_linefeed = strchr(i->line, '\n');
++	if (trailing_linefeed != NULL)
++		*trailing_linefeed = '\0';
++
++	/*
++	 * Slice the line in two at the colon, if any.  If there is anything
++	 * past the ': ', this is a composite module.  (We allow for no colon
++	 * for robustness, even though one should always be present.)
++	 */
++	if (strchr(i->line, ':') != NULL) {
++		char *name_start;
++
++		object_name = strchr(i->line, ':');
++		*object_name = '\0';
++		object_name++;
++		name_start = object_name + strspn(object_name, " \n");
++		if (*name_start != '\0') {
++			composite = 1;
++			object_name = name_start;
++		}
++	}
++
++	/*
++	 * Figure out the module name.
++	 */
++	last_slash = strrchr(i->line, '/');
++	last_slash = (!last_slash) ? i->line :
++		last_slash + 1;
++	free(*module_name);
++	*module_name = strdup(last_slash);
++	dash = *module_name;
++
++	while (dash != NULL) {
++		dash = strchr(dash, '-');
++		if (dash != NULL)
++			*dash = '_';
++	}
++
++	last_dot = strrchr(*module_name, '.');
++	if (last_dot != NULL)
++		*last_dot = '\0';
++
++	/*
++	 * Multifile separator? Object file names explicitly stated:
++	 * slice them up and shuffle them in.
++	 *
++	 * The array size may be an overestimate if any object file
++	 * names start or end with spaces (very unlikely) but cannot be
++	 * an underestimate.  (Check for it anyway.)
++	 */
++	if (composite) {
++		char *one_object;
++
++		for (npaths = 0, one_object = object_name;
++		     one_object != NULL;
++		     npaths++, one_object = strchr(one_object + 1, ' '));
++	}
++
++	module_paths = malloc((npaths + 1) * sizeof(char *));
++	if (!module_paths) {
++		fprintf(stderr, "%s: out of memory on module %s\n", __func__,
++			*module_name);
++		exit(1);
++	}
++
++	if (composite) {
++		char *one_object;
++		size_t i = 0;
++
++		while ((one_object = strsep(&object_name, " ")) != NULL) {
++			if (i >= npaths) {
++				fprintf(stderr, "%s: num_objs overflow on module "
++					"%s: this is a bug.\n", __func__,
++					*module_name);
++				exit(1);
++			}
++
++			module_paths[i++] = one_object;
++		}
++	} else
++		module_paths[0] = i->line;	/* untransformed module name */
++
++	module_paths[npaths] = NULL;
++
++	return module_paths;
++}
++
++/*
++ * Free an iterator. Can be called while iteration is underway, so even
++ * state that is freed at the end of iteration must be freed here too.
++ */
++void
++modules_builtin_iter_free(struct modules_builtin_iter *i)
++{
++	if (i == NULL)
++		return;
++	fclose(i->f);
++	free(i->line);
++	free(i);
++}
+diff --git a/scripts/modules_builtin.h b/scripts/modules_builtin.h
+new file mode 100644
+index 0000000000000..5138792b42ef0
+--- /dev/null
++++ b/scripts/modules_builtin.h
+@@ -0,0 +1,48 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * A simple modules.builtin.objs reader.
++ *
++ * (C) 2014, 2022 Oracle, Inc.  All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#ifndef _LINUX_MODULES_BUILTIN_H
++#define _LINUX_MODULES_BUILTIN_H
++
++#include <stdio.h>
++#include <stddef.h>
++
++/*
++ * modules.builtin.objs iteration state.
++ */
++struct modules_builtin_iter {
++	FILE *f;
++	char *line;
++	size_t line_size;
++};
++
++/*
++ * Construct a modules.builtin.objs iterator.
++ */
++struct modules_builtin_iter *
++modules_builtin_iter_new(const char *modules_builtin_file);
++
++/*
++ * Iterate, returning a new null-terminated array of object file names, and a
++ * new dynamically-allocated module name.  (The module name passed in is freed.)
++ *
++ * The array of object file names should be freed by the caller: the strings it
++ * points to are owned by the iterator, and should not be freed.
++ */
++
++char ** __attribute__((__nonnull__))
++modules_builtin_iter_next(struct modules_builtin_iter *i, char **module_name);
++
++void
++modules_builtin_iter_free(struct modules_builtin_iter *i);
++
++#endif
+diff --git a/scripts/package/kernel.spec b/scripts/package/kernel.spec
+index c52d517b93647..8f75906a96314 100644
+--- a/scripts/package/kernel.spec
++++ b/scripts/package/kernel.spec
+@@ -53,12 +53,18 @@ patch -p1 < %{SOURCE2}
+ 
+ %build
+ %{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release}
++%if %{with_ctf}
++%{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release} ctf
++%endif
+ 
+ %install
+ mkdir -p %{buildroot}/lib/modules/%{KERNELRELEASE}
+ cp $(%{make} %{makeflags} -s image_name) %{buildroot}/lib/modules/%{KERNELRELEASE}/vmlinuz
+ # DEPMOD=true makes depmod no-op. We do not package depmod-generated files.
+ %{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} DEPMOD=true modules_install
++%if %{with_ctf}
++%{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} ctf_install
++%endif
+ %{make} %{makeflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install
+ cp System.map %{buildroot}/lib/modules/%{KERNELRELEASE}
+ cp .config %{buildroot}/lib/modules/%{KERNELRELEASE}/config
+diff --git a/scripts/package/mkspec b/scripts/package/mkspec
+index ce201bfa8377c..aeb43c7ab1229 100755
+--- a/scripts/package/mkspec
++++ b/scripts/package/mkspec
+@@ -21,10 +21,16 @@ else
+ echo '%define with_devel 0'
+ fi
+ 
++if grep -q CONFIG_CTF=y include/config/auto.conf; then
++echo '%define with_ctf %{?_without_ctf: 0} %{?!_without_ctf: 1}'
++else
++echo '%define with_ctf 0'
++fi
+ cat<<EOF
+ %define ARCH ${ARCH}
+ %define KERNELRELEASE ${KERNELRELEASE}
+ %define pkg_release $("${srctree}/init/build-version")
++
+ EOF
+ 
+ cat "${srctree}/scripts/package/kernel.spec"
+diff --git a/scripts/remove-ctf-lds.awk b/scripts/remove-ctf-lds.awk
+new file mode 100644
+index 0000000000000..5d94d6ee99227
+--- /dev/null
++++ b/scripts/remove-ctf-lds.awk
+@@ -0,0 +1,12 @@
++# SPDX-License-Identifier: GPL-2.0
++# See Makefile.vmlinux_o
++
++BEGIN {
++    discards = 0; p = 0
++}
++
++/^====/ { p = 1; next; }
++p && /\.ctf/ { next; }
++p && !discards && /DISCARD/ { sub(/\} *$/, " *(.ctf) }"); discards = 1 }
++p && /^\}/ && !discards { print "  /DISCARD/ : { *(.ctf) }"; }
++p { print $0; }
+diff --git a/scripts/verify_builtin_ranges.awk b/scripts/verify_builtin_ranges.awk
+new file mode 100755
+index 0000000000000..f513841da83e1
+--- /dev/null
++++ b/scripts/verify_builtin_ranges.awk
+@@ -0,0 +1,356 @@
++#!/usr/bin/gawk -f
++# SPDX-License-Identifier: GPL-2.0
++# verify_builtin_ranges.awk: Verify address range data for builtin modules
++# Written by Kris Van Hees <kris.van.hees@oracle.com>
++#
++# Usage: verify_builtin_ranges.awk modules.builtin.ranges System.map \
++#				   modules.builtin vmlinux.map vmlinux.o.map \
++#				   [ <build-dir> ]
++#
++
++# Return the module name(s) (if any) associated with the given object.
++#
++# If we have seen this object before, return information from the cache.
++# Otherwise, retrieve it from the corresponding .cmd file.
++#
++function get_module_info(fn, mod, obj, s) {
++	if (fn in omod)
++		return omod[fn];
++
++	if (match(fn, /\/[^/]+$/) == 0)
++		return "";
++
++	obj = fn;
++	mod = "";
++	fn = kdir "/" substr(fn, 1, RSTART) "." substr(fn, RSTART + 1) ".cmd";
++	if (getline s <fn == 1) {
++		if (match(s, /DKBUILD_MODFILE=['"]+[^'"]+/) > 0) {
++			mod = substr(s, RSTART + 16, RLENGTH - 16);
++			gsub(/['"]/, "", mod);
++		} else if (match(s, /RUST_MODFILE=[^ ]+/) > 0)
++			mod = substr(s, RSTART + 13, RLENGTH - 13);
++	} else {
++		print "ERROR: Failed to read: " fn "\n\n" \
++		      "  Invalid kernel build directory (" kdir ")\n" \
++		      "  or its content does not match " ARGV[1] >"/dev/stderr";
++		close(fn);
++		total = 0;
++		exit(1);
++	}
++	close(fn);
++
++	# A single module (common case) also reflects objects that are not part
++	# of a module.  Some of those objects have names that are also a module
++	# name (e.g. core).  We check the associated module file name, and if
++	# they do not match, the object is not part of a module.
++	if (mod !~ / /) {
++		if (!(mod in mods))
++			mod = "";
++	}
++
++	gsub(/([^/ ]*\/)+/, "", mod);
++	gsub(/-/, "_", mod);
++
++	# At this point, mod is a single (valid) module name, or a list of
++	# module names (that do not need validation).
++	omod[obj] = mod;
++
++	return mod;
++}
++
++# Return a representative integer value for a given hexadecimal address.
++#
++# Since all kernel addresses fall within the same memory region, we can safely
++# strip off the first 4 hex digits before performing the hex-to-dec conversion,
++# thereby avoiding integer overflows.
++#
++function addr2val(val) {
++	sub(/^0x/, "", val);
++	if (length(val) == 16)
++		val = substr(val, 5);
++	return strtonum("0x" val);
++}
++
++# Determine the kernel build directory to use (default is .).
++#
++BEGIN {
++	if (ARGC > 6) {
++		kdir = ARGV[ARGC - 1];
++		ARGV[ARGC - 1] = "";
++	} else
++		kdir = ".";
++}
++
++# (1) Load the built-in module address range data.
++#
++ARGIND == 1 {
++	ranges[FNR] = $0;
++	rcnt++;
++	next;
++}
++
++# (2) Annotate System.map symbols with module names.
++#
++ARGIND == 2 {
++	addr = addr2val($1);
++	name = $3;
++
++	while (addr >= mod_eaddr) {
++		if (sect_symb) {
++			if (sect_symb != name)
++				next;
++
++			sect_base = addr - sect_off;
++			if (dbg)
++				printf "[%s] BASE (%s) %016x - %016x = %016x\n", sect_name, sect_symb, addr, sect_off, sect_base >"/dev/stderr";
++			sect_symb = 0;
++		}
++
++		if (++ridx > rcnt)
++			break;
++
++		$0 = ranges[ridx];
++		sub(/-/, " ");
++		if ($4 != "=") {
++			sub(/-/, " ");
++			mod_saddr = strtonum("0x" $2) + sect_base;
++			mod_eaddr = strtonum("0x" $3) + sect_base;
++			$1 = $2 = $3 = "";
++			sub(/^ +/, "");
++			mod_name = $0;
++
++			if (dbg)
++				printf "[%s] %s from %016x to %016x\n", sect_name, mod_name, mod_saddr, mod_eaddr >"/dev/stderr";
++		} else {
++			sect_name = $1;
++			sect_off = strtonum("0x" $2);
++			sect_symb = $5;
++		}
++	}
++
++	idx = addr"-"name;
++	if (addr >= mod_saddr && addr < mod_eaddr)
++		sym2mod[idx] = mod_name;
++
++	next;
++}
++
++# Once we are done annotating the System.map, we no longer need the ranges data.
++#
++FNR == 1 && ARGIND == 3 {
++	delete ranges;
++}
++
++# (3) Build a lookup map of built-in module names.
++#
++# Lines from modules.builtin will be like:
++#	kernel/crypto/lzo-rle.ko
++# and we record the object name "crypto/lzo-rle".
++#
++ARGIND == 3 {
++	sub(/kernel\//, "");			# strip off "kernel/" prefix
++	sub(/\.ko$/, "");			# strip off .ko suffix
++
++	mods[$1] = 1;
++	next;
++}
++
++# (4) Get a list of symbols (per object).
++#
++# Symbols by object are read from vmlinux.map, with fallback to vmlinux.o.map
++# if vmlinux is found to have linked in vmlinux.o.
++#
++
++# If we were able to get the data we need from vmlinux.map, there is no need to
++# process vmlinux.o.map.
++#
++FNR == 1 && ARGIND == 5 && total > 0 {
++	if (dbg)
++		printf "Note: %s is not needed.\n", FILENAME >"/dev/stderr";
++	exit;
++}
++
++# First determine whether we are dealing with a GNU ld or LLVM lld linker map.
++#
++ARGIND >= 4 && FNR == 1 && NF == 7 && $1 == "VMA" && $7 == "Symbol" {
++	map_is_lld = 1;
++	next;
++}
++
++# (LLD) Convert a section record from lld format to ld format.
++#
++ARGIND >= 4 && map_is_lld && NF == 5 && /[0-9] [^ ]/ {
++	$0 = $5 " 0x"$1 " 0x"$3 " load address 0x"$2;
++}
++
++# (LLD) Convert an object record from lld format to ld format.
++#
++ARGIND >= 4 && map_is_lld && NF == 5 && $5 ~ /:\(\./ {
++	gsub(/\)/, "");
++	sub(/:\(/, " ");
++	sub(/ vmlinux\.a\(/, " ");
++	$0 = " "$6 " 0x"$1 " 0x"$3 " " $5;
++}
++
++# (LLD) Convert a symbol record from lld format to ld format.
++#
++ARGIND >= 4 && map_is_lld && NF == 5 && $5 ~ /^[A-Za-z_][A-Za-z0-9_]*$/ {
++	$0 = "  0x" $1 " " $5;
++}
++
++# (LLD) We do not need any other lld linker map records.
++#
++ARGIND >= 4 && map_is_lld && /^[0-9a-f]{16} / {
++	next;
++}
++
++# Handle section records with long section names (spilling onto a 2nd line).
++#
++ARGIND >= 4 && !map_is_lld && NF == 1 && /^[^ ]/ {
++	s = $0;
++	getline;
++	$0 = s " " $0;
++}
++
++# Next section - previous one is done.
++#
++ARGIND >= 4 && /^[^ ]/ {
++	sect = 0;
++}
++
++# Get the (top level) section name.
++#
++ARGIND >= 4 && /^[^ ]/ && $2 ~ /^0x/ && $3 ~ /^0x/ {
++	# Empty section or per-CPU section - ignore.
++	if (NF < 3 || $1 ~ /\.percpu/) {
++		sect = 0;
++		next;
++	}
++
++	sect = $1;
++
++	next;
++}
++
++# If we are not currently in a section we care about, ignore records.
++#
++!sect {
++	next;
++}
++
++# Handle object records with long section names (spilling onto a 2nd line).
++#
++ARGIND >= 4 && /^ [^ \*]/ && NF == 1 {
++	# If the section name is long, the remainder of the entry is found on
++	# the next line.
++	s = $0;
++	getline;
++	$0 = s " " $0;
++}
++
++# If the object is vmlinux.o, we need to consult vmlinux.o.map for per-object
++# symbol information
++#
++ARGIND == 4 && /^ [^ ]/ && NF == 4 {
++	idx = sect":"$1;
++	if (!(idx in sect_addend)) {
++		sect_addend[idx] = addr2val($2);
++		if (dbg)
++			printf "ADDEND %s = %016x\n", idx, sect_addend[idx] >"/dev/stderr";
++	}
++	if ($4 == "vmlinux.o") {
++		need_o_map = 1;
++		next;
++	}
++}
++
++# If data from vmlinux.o.map is needed, we only process section and object
++# records from vmlinux.map to determine which section we need to pay attention
++# to in vmlinux.o.map.  So skip everything else from vmlinux.map.
++#
++ARGIND == 4 && need_o_map {
++	next;
++}
++
++# Get module information for the current object.
++#
++ARGIND >= 4 && /^ [^ ]/ && NF == 4 {
++	msect = $1;
++	mod_name = get_module_info($4);
++	mod_eaddr = addr2val($2) + addr2val($3);
++
++	next;
++}
++
++# Process a symbol record.
++#
++# Evaluate the module information obtained from vmlinux.map (or vmlinux.o.map)
++# as follows:
++#  - For all symbols in a given object:
++#     - If the symbol is annotated with the same module name(s) that the object
++#       belongs to, count it as a match.
++#     - Otherwise:
++#        - If the symbol is known to have duplicates of which at least one is
++#          in a built-in module, disregard it.
++#        - If the symbol is not annotated with any module name(s) AND the
++#          object belongs to built-in modules, count it as missing.
++#        - Otherwise, count it as a mismatch.
++#
++ARGIND >= 4 && /^ / && NF == 2 && $1 ~ /^0x/ {
++	idx = sect":"msect;
++	if (!(idx in sect_addend))
++		next;
++
++	addr = addr2val($1);
++
++	# Handle the rare but annoying case where a 0-size symbol is placed at
++	# the byte *after* the module range.  Based on vmlinux.map it will be
++	# considered part of the current object, but it falls just beyond the
++	# module address range.  Unfortunately, its address could be at the
++	# start of another built-in module, so the only safe thing to do is to
++	# ignore it.
++	if (mod_name && addr == mod_eaddr)
++		next;
++
++	# If we are processing vmlinux.o.map, we need to apply the base address
++	# of the section to the relative address on the record.
++	#
++	if (ARGIND == 5)
++		addr += sect_addend[idx];
++
++	idx = addr"-"$2;
++	mod = "";
++	if (idx in sym2mod) {
++		mod = sym2mod[idx];
++		if (sym2mod[idx] == mod_name) {
++			mod_matches++;
++			matches++;
++		} else if (mod_name == "") {
++			print $2 " in " sym2mod[idx] " (should NOT be)";
++			mismatches++;
++		} else {
++			print $2 " in " sym2mod[idx] " (should be " mod_name ")";
++			mismatches++;
++		}
++	} else if (mod_name != "") {
++		print $2 " should be in " mod_name;
++		missing++;
++	} else
++		matches++;
++
++	total++;
++
++	next;
++}
++
++# Issue the comparison report.
++#
++END {
++	if (total) {
++		printf "Verification of %s:\n", ARGV[1];
++		printf "  Correct matches:  %6d (%d%% of total)\n", matches, 100 * matches / total;
++		printf "    Module matches: %6d (%d%% of matches)\n", mod_matches, 100 * mod_matches / matches;
++		printf "  Mismatches:       %6d (%d%% of total)\n", mismatches, 100 * mismatches / total;
++		printf "  Missing:          %6d (%d%% of total)\n", missing, 100 * missing / total;
++	}
++}
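
For reference, a minimal standalone gawk sketch (not part of the patch above) of
the address-truncation trick that generate_builtin_ranges.awk and
verify_builtin_ranges.awk both rely on: the leading hex digits of a kernel
address are stripped so strtonum() never has to represent the full 64-bit
value.  The input lines below are hypothetical.

#!/usr/bin/gawk -f
# Mirror addr2val() above: drop the top 4 hex digits of a 16-digit
# address before converting, which is safe because kernel addresses
# only differ in their low-order digits.
#
# Hypothetical System.map-style input:
#   ffffffff81000000 T _text
#   ffffffff81002af0 T some_symbol
{
	val = $1;
	sub(/^0x/, "", val);
	if (length(val) == 16)
		val = substr(val, 5);
	printf "%s -> 0x%x\n", $1, strtonum("0x" val);
}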

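In the same spirit, a standalone sketch (again not part of the patch) of the
LLVM lld to GNU ld object-record conversion that both scripts perform; the
input record follows the example quoted in the script comments above.

#!/usr/bin/gawk -f
# Rewrite an lld map object record into the ld layout the scripts expect:
#   lld:  11480  11480  1f07  16  vmlinux.a(arch/x86/events/amd/uncore.o):(.text)
#   ld:   .text 0x11480 0x1f07 arch/x86/events/amd/uncore.o
NF == 5 && $5 ~ /:\(/ {
	gsub(/\)/, "");			# drop closing parentheses
	sub(/ vmlinux\.a\(/, " ");	# strip the archive wrapper
	sub(/:\(/, " ");		# split object path and section name
	print " " $6 " 0x"$1 " 0x"$3 " " $5;
	next;
}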

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-04 13:50 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-04 13:50 UTC (permalink / raw
  To: gentoo-commits

commit:     1dac9a861cfca1e7fbacf1065567d710ab7ca580
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep  4 13:48:09 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep  4 13:48:09 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1dac9a86

Linux patch 6.10.8

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 1007_linux-6.10.8.patch | 6428 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 6428 insertions(+)

diff --git a/1007_linux-6.10.8.patch b/1007_linux-6.10.8.patch
new file mode 100644
index 00000000..6758e614
--- /dev/null
+++ b/1007_linux-6.10.8.patch
@@ -0,0 +1,6428 @@
+diff --git a/Documentation/devicetree/bindings/usb/microchip,usb2514.yaml b/Documentation/devicetree/bindings/usb/microchip,usb2514.yaml
+index 783c27591e564..c39affb5f9983 100644
+--- a/Documentation/devicetree/bindings/usb/microchip,usb2514.yaml
++++ b/Documentation/devicetree/bindings/usb/microchip,usb2514.yaml
+@@ -10,7 +10,7 @@ maintainers:
+   - Fabio Estevam <festevam@gmail.com>
+ 
+ allOf:
+-  - $ref: usb-hcd.yaml#
++  - $ref: usb-device.yaml#
+ 
+ properties:
+   compatible:
+@@ -35,6 +35,13 @@ required:
+   - compatible
+   - reg
+ 
++patternProperties:
++  "^.*@[0-9a-f]{1,2}$":
++    description: The hard wired USB devices
++    type: object
++    $ref: /schemas/usb/usb-device.yaml
++    additionalProperties: true
++
+ unevaluatedProperties: false
+ 
+ examples:
+diff --git a/Makefile b/Makefile
+index ab77d171e268d..2e5ac6ab3d476 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6dl-yapp43-common.dtsi b/arch/arm/boot/dts/nxp/imx/imx6dl-yapp43-common.dtsi
+index 52a0f6ee426f9..bcf4d9c870ec9 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6dl-yapp43-common.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6dl-yapp43-common.dtsi
+@@ -274,24 +274,24 @@ leds: led-controller@30 {
+ 
+ 		led@0 {
+ 			chan-name = "R";
+-			led-cur = /bits/ 8 <0x20>;
+-			max-cur = /bits/ 8 <0x60>;
++			led-cur = /bits/ 8 <0x6e>;
++			max-cur = /bits/ 8 <0xc8>;
+ 			reg = <0>;
+ 			color = <LED_COLOR_ID_RED>;
+ 		};
+ 
+ 		led@1 {
+ 			chan-name = "G";
+-			led-cur = /bits/ 8 <0x20>;
+-			max-cur = /bits/ 8 <0x60>;
++			led-cur = /bits/ 8 <0xbe>;
++			max-cur = /bits/ 8 <0xc8>;
+ 			reg = <1>;
+ 			color = <LED_COLOR_ID_GREEN>;
+ 		};
+ 
+ 		led@2 {
+ 			chan-name = "B";
+-			led-cur = /bits/ 8 <0x20>;
+-			max-cur = /bits/ 8 <0x60>;
++			led-cur = /bits/ 8 <0xbe>;
++			max-cur = /bits/ 8 <0xc8>;
+ 			reg = <2>;
+ 			color = <LED_COLOR_ID_BLUE>;
+ 		};
+diff --git a/arch/arm/boot/dts/ti/omap/omap3-n900.dts b/arch/arm/boot/dts/ti/omap/omap3-n900.dts
+index 07c5b963af78a..4bde3342bb959 100644
+--- a/arch/arm/boot/dts/ti/omap/omap3-n900.dts
++++ b/arch/arm/boot/dts/ti/omap/omap3-n900.dts
+@@ -781,7 +781,7 @@ accelerometer@1d {
+ 
+ 		mount-matrix =	 "-1",  "0",  "0",
+ 				  "0",  "1",  "0",
+-				  "0",  "0",  "1";
++				  "0",  "0",  "-1";
+ 	};
+ 
+ 	cam1: camera@3e {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-beacon-kit.dts b/arch/arm64/boot/dts/freescale/imx8mp-beacon-kit.dts
+index e5d3901f29136..e1f59bdcae497 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-beacon-kit.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-beacon-kit.dts
+@@ -211,13 +211,12 @@ sound-wm8962 {
+ 
+ 		simple-audio-card,cpu {
+ 			sound-dai = <&sai3>;
++			frame-master;
++			bitclock-master;
+ 		};
+ 
+ 		simple-audio-card,codec {
+ 			sound-dai = <&wm8962>;
+-			clocks = <&clk IMX8MP_CLK_IPP_DO_CLKO1>;
+-			frame-master;
+-			bitclock-master;
+ 		};
+ 	};
+ };
+@@ -499,10 +498,9 @@ &pcie_phy {
+ &sai3 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_sai3>;
+-	assigned-clocks = <&clk IMX8MP_CLK_SAI3>,
+-			  <&clk IMX8MP_AUDIO_PLL2> ;
+-	assigned-clock-parents = <&clk IMX8MP_AUDIO_PLL2_OUT>;
+-	assigned-clock-rates = <12288000>, <361267200>;
++	assigned-clocks = <&clk IMX8MP_CLK_SAI3>;
++	assigned-clock-parents = <&clk IMX8MP_AUDIO_PLL1_OUT>;
++	assigned-clock-rates = <12288000>;
+ 	fsl,sai-mclk-direction-output;
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx93-tqma9352-mba93xxla.dts b/arch/arm64/boot/dts/freescale/imx93-tqma9352-mba93xxla.dts
+index eb3f4cfb69863..ad77a96c5617b 100644
+--- a/arch/arm64/boot/dts/freescale/imx93-tqma9352-mba93xxla.dts
++++ b/arch/arm64/boot/dts/freescale/imx93-tqma9352-mba93xxla.dts
+@@ -438,7 +438,7 @@ &usdhc2 {
+ 	pinctrl-0 = <&pinctrl_usdhc2_hs>, <&pinctrl_usdhc2_gpio>;
+ 	pinctrl-1 = <&pinctrl_usdhc2_uhs>, <&pinctrl_usdhc2_gpio>;
+ 	pinctrl-2 = <&pinctrl_usdhc2_uhs>, <&pinctrl_usdhc2_gpio>;
+-	cd-gpios = <&gpio3 00 GPIO_ACTIVE_LOW>;
++	cd-gpios = <&gpio3 0 GPIO_ACTIVE_LOW>;
+ 	vmmc-supply = <&reg_usdhc2_vmmc>;
+ 	bus-width = <4>;
+ 	no-sdio;
+diff --git a/arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi b/arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi
+index 9d2328c185c90..fe951f86a96bd 100644
+--- a/arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi
+@@ -19,7 +19,7 @@ reserved-memory {
+ 		linux,cma {
+ 			compatible = "shared-dma-pool";
+ 			reusable;
+-			alloc-ranges = <0 0x60000000 0 0x40000000>;
++			alloc-ranges = <0 0x80000000 0 0x40000000>;
+ 			size = <0 0x10000000>;
+ 			linux,cma-default;
+ 		};
+diff --git a/arch/arm64/boot/dts/freescale/imx93.dtsi b/arch/arm64/boot/dts/freescale/imx93.dtsi
+index 4a3f42355cb8f..a0993022c102d 100644
+--- a/arch/arm64/boot/dts/freescale/imx93.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx93.dtsi
+@@ -1105,7 +1105,7 @@ eqos: ethernet@428a0000 {
+ 							 <&clk IMX93_CLK_SYS_PLL_PFD0_DIV2>;
+ 				assigned-clock-rates = <100000000>, <250000000>;
+ 				intf_mode = <&wakeupmix_gpr 0x28>;
+-				snps,clk-csr = <0>;
++				snps,clk-csr = <6>;
+ 				nvmem-cells = <&eth_mac2>;
+ 				nvmem-cell-names = "mac-address";
+ 				status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/ipq5332.dtsi b/arch/arm64/boot/dts/qcom/ipq5332.dtsi
+index 770d9c2fb4562..e3064568f0221 100644
+--- a/arch/arm64/boot/dts/qcom/ipq5332.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq5332.dtsi
+@@ -321,8 +321,8 @@ usb: usb@8af8800 {
+ 			reg = <0x08af8800 0x400>;
+ 
+ 			interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 53 IRQ_TYPE_EDGE_BOTH>,
+-				     <GIC_SPI 52 IRQ_TYPE_EDGE_BOTH>;
++				     <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "pwr_event",
+ 					  "dp_hs_phy_irq",
+ 					  "dm_hs_phy_irq";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+index b063dd28149e7..7d03316c279df 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+@@ -648,7 +648,7 @@ &pcie4 {
+ };
+ 
+ &pcie4_phy {
+-	vdda-phy-supply = <&vreg_l3j_0p8>;
++	vdda-phy-supply = <&vreg_l3i_0p8>;
+ 	vdda-pll-supply = <&vreg_l3e_1p2>;
+ 
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+index df3577fcd93c9..2d7dedb7e30f2 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+@@ -459,7 +459,7 @@ &pcie4 {
+ };
+ 
+ &pcie4_phy {
+-	vdda-phy-supply = <&vreg_l3j_0p8>;
++	vdda-phy-supply = <&vreg_l3i_0p8>;
+ 	vdda-pll-supply = <&vreg_l3e_1p2>;
+ 
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index 05e4d491ec18c..36c398e5fe501 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -2756,7 +2756,7 @@ pcie6a: pci@1bf8000 {
+ 
+ 			dma-coherent;
+ 
+-			linux,pci-domain = <7>;
++			linux,pci-domain = <6>;
+ 			num-lanes = <2>;
+ 
+ 			interrupts = <GIC_SPI 773 IRQ_TYPE_LEVEL_HIGH>,
+@@ -2814,6 +2814,7 @@ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
+ 				      "link_down";
+ 
+ 			power-domains = <&gcc GCC_PCIE_6A_GDSC>;
++			required-opps = <&rpmhpd_opp_nom>;
+ 
+ 			phys = <&pcie6a_phy>;
+ 			phy-names = "pciephy";
+@@ -2877,7 +2878,7 @@ pcie4: pci@1c08000 {
+ 
+ 			dma-coherent;
+ 
+-			linux,pci-domain = <5>;
++			linux,pci-domain = <4>;
+ 			num-lanes = <2>;
+ 
+ 			interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
+@@ -2935,6 +2936,7 @@ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
+ 				      "link_down";
+ 
+ 			power-domains = <&gcc GCC_PCIE_4_GDSC>;
++			required-opps = <&rpmhpd_opp_nom>;
+ 
+ 			phys = <&pcie4_phy>;
+ 			phy-names = "pciephy";
+diff --git a/arch/loongarch/include/asm/dma-direct.h b/arch/loongarch/include/asm/dma-direct.h
+deleted file mode 100644
+index 75ccd808a2af3..0000000000000
+--- a/arch/loongarch/include/asm/dma-direct.h
++++ /dev/null
+@@ -1,11 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- * Copyright (C) 2020-2022 Loongson Technology Corporation Limited
+- */
+-#ifndef _LOONGARCH_DMA_DIRECT_H
+-#define _LOONGARCH_DMA_DIRECT_H
+-
+-dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr);
+-phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr);
+-
+-#endif /* _LOONGARCH_DMA_DIRECT_H */
+diff --git a/arch/loongarch/kernel/fpu.S b/arch/loongarch/kernel/fpu.S
+index 69a85f2479fba..6ab640101457c 100644
+--- a/arch/loongarch/kernel/fpu.S
++++ b/arch/loongarch/kernel/fpu.S
+@@ -530,6 +530,10 @@ SYM_FUNC_END(_restore_lasx_context)
+ 
+ #ifdef CONFIG_CPU_HAS_LBT
+ STACK_FRAME_NON_STANDARD _restore_fp
++#ifdef CONFIG_CPU_HAS_LSX
+ STACK_FRAME_NON_STANDARD _restore_lsx
++#endif
++#ifdef CONFIG_CPU_HAS_LASX
+ STACK_FRAME_NON_STANDARD _restore_lasx
+ #endif
++#endif
+diff --git a/arch/loongarch/kvm/switch.S b/arch/loongarch/kvm/switch.S
+index 80e988985a6ad..0c292f8184927 100644
+--- a/arch/loongarch/kvm/switch.S
++++ b/arch/loongarch/kvm/switch.S
+@@ -277,6 +277,10 @@ SYM_DATA(kvm_enter_guest_size, .quad kvm_enter_guest_end - kvm_enter_guest)
+ 
+ #ifdef CONFIG_CPU_HAS_LBT
+ STACK_FRAME_NON_STANDARD kvm_restore_fpu
++#ifdef CONFIG_CPU_HAS_LSX
+ STACK_FRAME_NON_STANDARD kvm_restore_lsx
++#endif
++#ifdef CONFIG_CPU_HAS_LASX
+ STACK_FRAME_NON_STANDARD kvm_restore_lasx
+ #endif
++#endif
+diff --git a/drivers/bluetooth/btnxpuart.c b/drivers/bluetooth/btnxpuart.c
+index d310b525fbf00..eeba2d26d1cb9 100644
+--- a/drivers/bluetooth/btnxpuart.c
++++ b/drivers/bluetooth/btnxpuart.c
+@@ -29,6 +29,7 @@
+ #define BTNXPUART_CHECK_BOOT_SIGNATURE	3
+ #define BTNXPUART_SERDEV_OPEN		4
+ #define BTNXPUART_IR_IN_PROGRESS	5
++#define BTNXPUART_FW_DOWNLOAD_ABORT	6
+ 
+ /* NXP HW err codes */
+ #define BTNXPUART_IR_HW_ERR		0xb0
+@@ -159,6 +160,7 @@ struct btnxpuart_dev {
+ 	u8 fw_name[MAX_FW_FILE_NAME_LEN];
+ 	u32 fw_dnld_v1_offset;
+ 	u32 fw_v1_sent_bytes;
++	u32 fw_dnld_v3_offset;
+ 	u32 fw_v3_offset_correction;
+ 	u32 fw_v1_expected_len;
+ 	u32 boot_reg_offset;
+@@ -436,6 +438,23 @@ static bool ps_wakeup(struct btnxpuart_dev *nxpdev)
+ 	return false;
+ }
+ 
++static void ps_cleanup(struct btnxpuart_dev *nxpdev)
++{
++	struct ps_data *psdata = &nxpdev->psdata;
++	u8 ps_state;
++
++	mutex_lock(&psdata->ps_lock);
++	ps_state = psdata->ps_state;
++	mutex_unlock(&psdata->ps_lock);
++
++	if (ps_state != PS_STATE_AWAKE)
++		ps_control(psdata->hdev, PS_STATE_AWAKE);
++
++	ps_cancel_timer(nxpdev);
++	cancel_work_sync(&psdata->work);
++	mutex_destroy(&psdata->ps_lock);
++}
++
+ static int send_ps_cmd(struct hci_dev *hdev, void *data)
+ {
+ 	struct btnxpuart_dev *nxpdev = hci_get_drvdata(hdev);
+@@ -566,6 +585,7 @@ static int nxp_download_firmware(struct hci_dev *hdev)
+ 	nxpdev->fw_v1_sent_bytes = 0;
+ 	nxpdev->fw_v1_expected_len = HDR_LEN;
+ 	nxpdev->boot_reg_offset = 0;
++	nxpdev->fw_dnld_v3_offset = 0;
+ 	nxpdev->fw_v3_offset_correction = 0;
+ 	nxpdev->baudrate_changed = false;
+ 	nxpdev->timeout_changed = false;
+@@ -580,14 +600,23 @@ static int nxp_download_firmware(struct hci_dev *hdev)
+ 					       !test_bit(BTNXPUART_FW_DOWNLOADING,
+ 							 &nxpdev->tx_state),
+ 					       msecs_to_jiffies(60000));
++
++	release_firmware(nxpdev->fw);
++	memset(nxpdev->fw_name, 0, sizeof(nxpdev->fw_name));
++
+ 	if (err == 0) {
+-		bt_dev_err(hdev, "FW Download Timeout.");
++		bt_dev_err(hdev, "FW Download Timeout. offset: %d",
++				nxpdev->fw_dnld_v1_offset ?
++				nxpdev->fw_dnld_v1_offset :
++				nxpdev->fw_dnld_v3_offset);
+ 		return -ETIMEDOUT;
+ 	}
++	if (test_bit(BTNXPUART_FW_DOWNLOAD_ABORT, &nxpdev->tx_state)) {
++		bt_dev_err(hdev, "FW Download Aborted");
++		return -EINTR;
++	}
+ 
+ 	serdev_device_set_flow_control(nxpdev->serdev, true);
+-	release_firmware(nxpdev->fw);
+-	memset(nxpdev->fw_name, 0, sizeof(nxpdev->fw_name));
+ 
+ 	/* Allow the downloaded FW to initialize */
+ 	msleep(1200);
+@@ -998,8 +1027,9 @@ static int nxp_recv_fw_req_v3(struct hci_dev *hdev, struct sk_buff *skb)
+ 		goto free_skb;
+ 	}
+ 
+-	serdev_device_write_buf(nxpdev->serdev, nxpdev->fw->data + offset -
+-				nxpdev->fw_v3_offset_correction, len);
++	nxpdev->fw_dnld_v3_offset = offset - nxpdev->fw_v3_offset_correction;
++	serdev_device_write_buf(nxpdev->serdev, nxpdev->fw->data +
++				nxpdev->fw_dnld_v3_offset, len);
+ 
+ free_skb:
+ 	kfree_skb(skb);
+@@ -1294,7 +1324,6 @@ static int btnxpuart_close(struct hci_dev *hdev)
+ {
+ 	struct btnxpuart_dev *nxpdev = hci_get_drvdata(hdev);
+ 
+-	ps_wakeup(nxpdev);
+ 	serdev_device_close(nxpdev->serdev);
+ 	skb_queue_purge(&nxpdev->txq);
+ 	kfree_skb(nxpdev->rx_skb);
+@@ -1429,16 +1458,22 @@ static void nxp_serdev_remove(struct serdev_device *serdev)
+ 	struct btnxpuart_dev *nxpdev = serdev_device_get_drvdata(serdev);
+ 	struct hci_dev *hdev = nxpdev->hdev;
+ 
+-	/* Restore FW baudrate to fw_init_baudrate if changed.
+-	 * This will ensure FW baudrate is in sync with
+-	 * driver baudrate in case this driver is re-inserted.
+-	 */
+-	if (nxpdev->current_baudrate != nxpdev->fw_init_baudrate) {
+-		nxpdev->new_baudrate = nxpdev->fw_init_baudrate;
+-		nxp_set_baudrate_cmd(hdev, NULL);
++	if (is_fw_downloading(nxpdev)) {
++		set_bit(BTNXPUART_FW_DOWNLOAD_ABORT, &nxpdev->tx_state);
++		clear_bit(BTNXPUART_FW_DOWNLOADING, &nxpdev->tx_state);
++		wake_up_interruptible(&nxpdev->check_boot_sign_wait_q);
++		wake_up_interruptible(&nxpdev->fw_dnld_done_wait_q);
++	} else {
++		/* Restore FW baudrate to fw_init_baudrate if changed.
++		 * This will ensure FW baudrate is in sync with
++		 * driver baudrate in case this driver is re-inserted.
++		 */
++		if (nxpdev->current_baudrate != nxpdev->fw_init_baudrate) {
++			nxpdev->new_baudrate = nxpdev->fw_init_baudrate;
++			nxp_set_baudrate_cmd(hdev, NULL);
++		}
+ 	}
+-
+-	ps_cancel_timer(nxpdev);
++	ps_cleanup(nxpdev);
+ 	hci_unregister_dev(hdev);
+ 	hci_free_dev(hdev);
+ }
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index d3989b257f422..1e5b107d1f3bd 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -698,6 +698,10 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
+ 		rc = tpm2_get_cc_attrs_tbl(chip);
+ 		if (rc)
+ 			goto init_irq_cleanup;
++
++		rc = tpm2_sessions_init(chip);
++		if (rc)
++			goto init_irq_cleanup;
+ 	}
+ 
+ 	return tpm_chip_register(chip);
+diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
+index 66b73c308ce67..b7318669485e4 100644
+--- a/drivers/cpufreq/amd-pstate-ut.c
++++ b/drivers/cpufreq/amd-pstate-ut.c
+@@ -160,14 +160,17 @@ static void amd_pstate_ut_check_perf(u32 index)
+ 			lowest_perf = AMD_CPPC_LOWEST_PERF(cap1);
+ 		}
+ 
+-		if ((highest_perf != READ_ONCE(cpudata->highest_perf)) ||
+-			(nominal_perf != READ_ONCE(cpudata->nominal_perf)) ||
++		if (highest_perf != READ_ONCE(cpudata->highest_perf) && !cpudata->hw_prefcore) {
++			pr_err("%s cpu%d highest=%d %d highest perf doesn't match\n",
++				__func__, cpu, highest_perf, cpudata->highest_perf);
++			goto skip_test;
++		}
++		if ((nominal_perf != READ_ONCE(cpudata->nominal_perf)) ||
+ 			(lowest_nonlinear_perf != READ_ONCE(cpudata->lowest_nonlinear_perf)) ||
+ 			(lowest_perf != READ_ONCE(cpudata->lowest_perf))) {
+ 			amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
+-			pr_err("%s cpu%d highest=%d %d nominal=%d %d lowest_nonlinear=%d %d lowest=%d %d, they should be equal!\n",
+-				__func__, cpu, highest_perf, cpudata->highest_perf,
+-				nominal_perf, cpudata->nominal_perf,
++			pr_err("%s cpu%d nominal=%d %d lowest_nonlinear=%d %d lowest=%d %d, they should be equal!\n",
++				__func__, cpu, nominal_perf, cpudata->nominal_perf,
+ 				lowest_nonlinear_perf, cpudata->lowest_nonlinear_perf,
+ 				lowest_perf, cpudata->lowest_perf);
+ 			goto skip_test;
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 67c4a6a0ef124..6f59403c91e03 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -329,7 +329,7 @@ static inline int pstate_enable(bool enable)
+ 		return 0;
+ 
+ 	for_each_present_cpu(cpu) {
+-		unsigned long logical_id = topology_logical_die_id(cpu);
++		unsigned long logical_id = topology_logical_package_id(cpu);
+ 
+ 		if (test_bit(logical_id, &logical_proc_id_mask))
+ 			continue;
+diff --git a/drivers/dma/dw-edma/dw-hdma-v0-core.c b/drivers/dma/dw-edma/dw-hdma-v0-core.c
+index 10e8f0715114f..e3f8db4fe909a 100644
+--- a/drivers/dma/dw-edma/dw-hdma-v0-core.c
++++ b/drivers/dma/dw-edma/dw-hdma-v0-core.c
+@@ -17,8 +17,8 @@ enum dw_hdma_control {
+ 	DW_HDMA_V0_CB					= BIT(0),
+ 	DW_HDMA_V0_TCB					= BIT(1),
+ 	DW_HDMA_V0_LLP					= BIT(2),
+-	DW_HDMA_V0_LIE					= BIT(3),
+-	DW_HDMA_V0_RIE					= BIT(4),
++	DW_HDMA_V0_LWIE					= BIT(3),
++	DW_HDMA_V0_RWIE					= BIT(4),
+ 	DW_HDMA_V0_CCS					= BIT(8),
+ 	DW_HDMA_V0_LLE					= BIT(9),
+ };
+@@ -195,25 +195,14 @@ static void dw_hdma_v0_write_ll_link(struct dw_edma_chunk *chunk,
+ static void dw_hdma_v0_core_write_chunk(struct dw_edma_chunk *chunk)
+ {
+ 	struct dw_edma_burst *child;
+-	struct dw_edma_chan *chan = chunk->chan;
+ 	u32 control = 0, i = 0;
+-	int j;
+ 
+ 	if (chunk->cb)
+ 		control = DW_HDMA_V0_CB;
+ 
+-	j = chunk->bursts_alloc;
+-	list_for_each_entry(child, &chunk->burst->list, list) {
+-		j--;
+-		if (!j) {
+-			control |= DW_HDMA_V0_LIE;
+-			if (!(chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL))
+-				control |= DW_HDMA_V0_RIE;
+-		}
+-
++	list_for_each_entry(child, &chunk->burst->list, list)
+ 		dw_hdma_v0_write_ll_data(chunk, i++, control, child->sz,
+ 					 child->sar, child->dar);
+-	}
+ 
+ 	control = DW_HDMA_V0_LLP | DW_HDMA_V0_TCB;
+ 	if (!chunk->cb)
+@@ -247,10 +236,11 @@ static void dw_hdma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
+ 	if (first) {
+ 		/* Enable engine */
+ 		SET_CH_32(dw, chan->dir, chan->id, ch_en, BIT(0));
+-		/* Interrupt enable&unmask - done, abort */
+-		tmp = GET_CH_32(dw, chan->dir, chan->id, int_setup) |
+-		      HDMA_V0_STOP_INT_MASK | HDMA_V0_ABORT_INT_MASK |
+-		      HDMA_V0_LOCAL_STOP_INT_EN | HDMA_V0_LOCAL_ABORT_INT_EN;
++		/* Interrupt unmask - stop, abort */
++		tmp = GET_CH_32(dw, chan->dir, chan->id, int_setup);
++		tmp &= ~(HDMA_V0_STOP_INT_MASK | HDMA_V0_ABORT_INT_MASK);
++		/* Interrupt enable - stop, abort */
++		tmp |= HDMA_V0_LOCAL_STOP_INT_EN | HDMA_V0_LOCAL_ABORT_INT_EN;
+ 		if (!(dw->chip->flags & DW_EDMA_CHIP_LOCAL))
+ 			tmp |= HDMA_V0_REMOTE_STOP_INT_EN | HDMA_V0_REMOTE_ABORT_INT_EN;
+ 		SET_CH_32(dw, chan->dir, chan->id, int_setup, tmp);
+diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
+index 5f7d690e3dbae..b341a6f1b0438 100644
+--- a/drivers/dma/dw/core.c
++++ b/drivers/dma/dw/core.c
+@@ -16,6 +16,7 @@
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
++#include <linux/log2.h>
+ #include <linux/mm.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
+@@ -621,12 +622,10 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 	struct dw_desc		*prev;
+ 	struct dw_desc		*first;
+ 	u32			ctllo, ctlhi;
+-	u8			m_master = dwc->dws.m_master;
+-	u8			lms = DWC_LLP_LMS(m_master);
++	u8			lms = DWC_LLP_LMS(dwc->dws.m_master);
+ 	dma_addr_t		reg;
+ 	unsigned int		reg_width;
+ 	unsigned int		mem_width;
+-	unsigned int		data_width = dw->pdata->data_width[m_master];
+ 	unsigned int		i;
+ 	struct scatterlist	*sg;
+ 	size_t			total_len = 0;
+@@ -660,7 +659,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 			mem = sg_dma_address(sg);
+ 			len = sg_dma_len(sg);
+ 
+-			mem_width = __ffs(data_width | mem | len);
++			mem_width = __ffs(sconfig->src_addr_width | mem | len);
+ 
+ slave_sg_todev_fill_desc:
+ 			desc = dwc_desc_get(dwc);
+@@ -720,7 +719,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 			lli_write(desc, sar, reg);
+ 			lli_write(desc, dar, mem);
+ 			lli_write(desc, ctlhi, ctlhi);
+-			mem_width = __ffs(data_width | mem);
++			mem_width = __ffs(sconfig->dst_addr_width | mem);
+ 			lli_write(desc, ctllo, ctllo | DWC_CTLL_DST_WIDTH(mem_width));
+ 			desc->len = dlen;
+ 
+@@ -780,17 +779,93 @@ bool dw_dma_filter(struct dma_chan *chan, void *param)
+ }
+ EXPORT_SYMBOL_GPL(dw_dma_filter);
+ 
++static int dwc_verify_p_buswidth(struct dma_chan *chan)
++{
++	struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
++	struct dw_dma *dw = to_dw_dma(chan->device);
++	u32 reg_width, max_width;
++
++	if (dwc->dma_sconfig.direction == DMA_MEM_TO_DEV)
++		reg_width = dwc->dma_sconfig.dst_addr_width;
++	else if (dwc->dma_sconfig.direction == DMA_DEV_TO_MEM)
++		reg_width = dwc->dma_sconfig.src_addr_width;
++	else /* DMA_MEM_TO_MEM */
++		return 0;
++
++	max_width = dw->pdata->data_width[dwc->dws.p_master];
++
++	/* Fall-back to 1-byte transfer width if undefined */
++	if (reg_width == DMA_SLAVE_BUSWIDTH_UNDEFINED)
++		reg_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
++	else if (!is_power_of_2(reg_width) || reg_width > max_width)
++		return -EINVAL;
++	else /* bus width is valid */
++		return 0;
++
++	/* Update undefined addr width value */
++	if (dwc->dma_sconfig.direction == DMA_MEM_TO_DEV)
++		dwc->dma_sconfig.dst_addr_width = reg_width;
++	else /* DMA_DEV_TO_MEM */
++		dwc->dma_sconfig.src_addr_width = reg_width;
++
++	return 0;
++}
++
++static int dwc_verify_m_buswidth(struct dma_chan *chan)
++{
++	struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
++	struct dw_dma *dw = to_dw_dma(chan->device);
++	u32 reg_width, reg_burst, mem_width;
++
++	mem_width = dw->pdata->data_width[dwc->dws.m_master];
++
++	/*
++	 * It's possible to have a data portion locked in the DMA FIFO in case
++	 * of the channel suspension. Subsequent channel disabling will cause
++	 * that data silent loss. In order to prevent that maintain the src and
++	 * dst transfer widths coherency by means of the relation:
++	 * (CTLx.SRC_TR_WIDTH * CTLx.SRC_MSIZE >= CTLx.DST_TR_WIDTH)
++	 * Look for the details in the commit message that brings this change.
++	 *
++	 * Note the DMA configs utilized in the calculations below must have
++	 * been verified to have correct values by this method call.
++	 */
++	if (dwc->dma_sconfig.direction == DMA_MEM_TO_DEV) {
++		reg_width = dwc->dma_sconfig.dst_addr_width;
++		if (mem_width < reg_width)
++			return -EINVAL;
++
++		dwc->dma_sconfig.src_addr_width = mem_width;
++	} else if (dwc->dma_sconfig.direction == DMA_DEV_TO_MEM) {
++		reg_width = dwc->dma_sconfig.src_addr_width;
++		reg_burst = rounddown_pow_of_two(dwc->dma_sconfig.src_maxburst);
++
++		dwc->dma_sconfig.dst_addr_width = min(mem_width, reg_width * reg_burst);
++	}
++
++	return 0;
++}
++
+ static int dwc_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
+ {
+ 	struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+ 	struct dw_dma *dw = to_dw_dma(chan->device);
++	int ret;
+ 
+ 	memcpy(&dwc->dma_sconfig, sconfig, sizeof(*sconfig));
+ 
+ 	dwc->dma_sconfig.src_maxburst =
+-		clamp(dwc->dma_sconfig.src_maxburst, 0U, dwc->max_burst);
++		clamp(dwc->dma_sconfig.src_maxburst, 1U, dwc->max_burst);
+ 	dwc->dma_sconfig.dst_maxburst =
+-		clamp(dwc->dma_sconfig.dst_maxburst, 0U, dwc->max_burst);
++		clamp(dwc->dma_sconfig.dst_maxburst, 1U, dwc->max_burst);
++
++	ret = dwc_verify_p_buswidth(chan);
++	if (ret)
++		return ret;
++
++	ret = dwc_verify_m_buswidth(chan);
++	if (ret)
++		return ret;
+ 
+ 	dw->encode_maxburst(dwc, &dwc->dma_sconfig.src_maxburst);
+ 	dw->encode_maxburst(dwc, &dwc->dma_sconfig.dst_maxburst);
+diff --git a/drivers/dma/ti/omap-dma.c b/drivers/dma/ti/omap-dma.c
+index b9e0e22383b72..984fbec2c4bae 100644
+--- a/drivers/dma/ti/omap-dma.c
++++ b/drivers/dma/ti/omap-dma.c
+@@ -1186,10 +1186,10 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_cyclic(
+ 	d->dev_addr = dev_addr;
+ 	d->fi = burst;
+ 	d->es = es;
++	d->sglen = 1;
+ 	d->sg[0].addr = buf_addr;
+ 	d->sg[0].en = period_len / es_bytes[es];
+ 	d->sg[0].fn = buf_len / period_len;
+-	d->sglen = 1;
+ 
+ 	d->ccr = c->ccr;
+ 	if (dir == DMA_DEV_TO_MEM)
+@@ -1258,10 +1258,10 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_memcpy(
+ 	d->dev_addr = src;
+ 	d->fi = 0;
+ 	d->es = data_type;
++	d->sglen = 1;
+ 	d->sg[0].en = len / BIT(data_type);
+ 	d->sg[0].fn = 1;
+ 	d->sg[0].addr = dest;
+-	d->sglen = 1;
+ 	d->ccr = c->ccr;
+ 	d->ccr |= CCR_DST_AMODE_POSTINC | CCR_SRC_AMODE_POSTINC;
+ 
+@@ -1309,6 +1309,7 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_interleaved(
+ 	if (data_type > CSDP_DATA_TYPE_32)
+ 		data_type = CSDP_DATA_TYPE_32;
+ 
++	d->sglen = 1;
+ 	sg = &d->sg[0];
+ 	d->dir = DMA_MEM_TO_MEM;
+ 	d->dev_addr = xt->src_start;
+@@ -1316,7 +1317,6 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_interleaved(
+ 	sg->en = xt->sgl[0].size / BIT(data_type);
+ 	sg->fn = xt->numf;
+ 	sg->addr = xt->dst_start;
+-	d->sglen = 1;
+ 	d->ccr = c->ccr;
+ 
+ 	src_icg = dmaengine_get_src_icg(xt, &xt->sgl[0]);
+diff --git a/drivers/firmware/microchip/mpfs-auto-update.c b/drivers/firmware/microchip/mpfs-auto-update.c
+index 835a19a7a3a09..4a95fbbf4733e 100644
+--- a/drivers/firmware/microchip/mpfs-auto-update.c
++++ b/drivers/firmware/microchip/mpfs-auto-update.c
+@@ -153,7 +153,7 @@ static enum fw_upload_err mpfs_auto_update_poll_complete(struct fw_upload *fw_up
+ 	 */
+ 	ret = wait_for_completion_timeout(&priv->programming_complete,
+ 					  msecs_to_jiffies(AUTO_UPDATE_TIMEOUT_MS));
+-	if (ret)
++	if (!ret)
+ 		return FW_UPLOAD_ERR_TIMEOUT;
+ 
+ 	return FW_UPLOAD_ERR_NONE;
+diff --git a/drivers/firmware/qcom/qcom_scm-smc.c b/drivers/firmware/qcom/qcom_scm-smc.c
+index 16cf88acfa8ee..0a2a2c794d0ed 100644
+--- a/drivers/firmware/qcom/qcom_scm-smc.c
++++ b/drivers/firmware/qcom/qcom_scm-smc.c
+@@ -71,7 +71,7 @@ int scm_get_wq_ctx(u32 *wq_ctx, u32 *flags, u32 *more_pending)
+ 	struct arm_smccc_res get_wq_res;
+ 	struct arm_smccc_args get_wq_ctx = {0};
+ 
+-	get_wq_ctx.args[0] = ARM_SMCCC_CALL_VAL(ARM_SMCCC_STD_CALL,
++	get_wq_ctx.args[0] = ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,
+ 				ARM_SMCCC_SMC_64, ARM_SMCCC_OWNER_SIP,
+ 				SCM_SMC_FNID(QCOM_SCM_SVC_WAITQ, QCOM_SCM_WAITQ_GET_WQ_CTX));
+ 
+diff --git a/drivers/firmware/sysfb.c b/drivers/firmware/sysfb.c
+index 921f61507ae83..02a07d3d0d40a 100644
+--- a/drivers/firmware/sysfb.c
++++ b/drivers/firmware/sysfb.c
+@@ -39,6 +39,8 @@ static struct platform_device *pd;
+ static DEFINE_MUTEX(disable_lock);
+ static bool disabled;
+ 
++static struct device *sysfb_parent_dev(const struct screen_info *si);
++
+ static bool sysfb_unregister(void)
+ {
+ 	if (IS_ERR_OR_NULL(pd))
+@@ -52,6 +54,7 @@ static bool sysfb_unregister(void)
+ 
+ /**
+  * sysfb_disable() - disable the Generic System Framebuffers support
++ * @dev:	the device to check if non-NULL
+  *
+  * This disables the registration of system framebuffer devices that match the
+  * generic drivers that make use of the system framebuffer set up by firmware.
+@@ -61,17 +64,21 @@ static bool sysfb_unregister(void)
+  * Context: The function can sleep. A @disable_lock mutex is acquired to serialize
+  *          against sysfb_init(), that registers a system framebuffer device.
+  */
+-void sysfb_disable(void)
++void sysfb_disable(struct device *dev)
+ {
++	struct screen_info *si = &screen_info;
++
+ 	mutex_lock(&disable_lock);
+-	sysfb_unregister();
+-	disabled = true;
++	if (!dev || dev == sysfb_parent_dev(si)) {
++		sysfb_unregister();
++		disabled = true;
++	}
+ 	mutex_unlock(&disable_lock);
+ }
+ EXPORT_SYMBOL_GPL(sysfb_disable);
+ 
+ #if defined(CONFIG_PCI)
+-static __init bool sysfb_pci_dev_is_enabled(struct pci_dev *pdev)
++static bool sysfb_pci_dev_is_enabled(struct pci_dev *pdev)
+ {
+ 	/*
+ 	 * TODO: Try to integrate this code into the PCI subsystem
+@@ -87,13 +94,13 @@ static __init bool sysfb_pci_dev_is_enabled(struct pci_dev *pdev)
+ 	return true;
+ }
+ #else
+-static __init bool sysfb_pci_dev_is_enabled(struct pci_dev *pdev)
++static bool sysfb_pci_dev_is_enabled(struct pci_dev *pdev)
+ {
+ 	return false;
+ }
+ #endif
+ 
+-static __init struct device *sysfb_parent_dev(const struct screen_info *si)
++static struct device *sysfb_parent_dev(const struct screen_info *si)
+ {
+ 	struct pci_dev *pdev;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index 0e31bdb4b7cb6..f1b08893765cf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -256,19 +256,21 @@ static int amdgpu_discovery_read_binary_from_mem(struct amdgpu_device *adev,
+ 	u32 msg;
+ 	int i, ret = 0;
+ 
+-	/* It can take up to a second for IFWI init to complete on some dGPUs,
+-	 * but generally it should be in the 60-100ms range.  Normally this starts
+-	 * as soon as the device gets power so by the time the OS loads this has long
+-	 * completed.  However, when a card is hotplugged via e.g., USB4, we need to
+-	 * wait for this to complete.  Once the C2PMSG is updated, we can
+-	 * continue.
+-	 */
++	if (!amdgpu_sriov_vf(adev)) {
++		/* It can take up to a second for IFWI init to complete on some dGPUs,
++		 * but generally it should be in the 60-100ms range.  Normally this starts
++		 * as soon as the device gets power so by the time the OS loads this has long
++		 * completed.  However, when a card is hotplugged via e.g., USB4, we need to
++		 * wait for this to complete.  Once the C2PMSG is updated, we can
++		 * continue.
++		 */
+ 
+-	for (i = 0; i < 1000; i++) {
+-		msg = RREG32(mmMP0_SMN_C2PMSG_33);
+-		if (msg & 0x80000000)
+-			break;
+-		usleep_range(1000, 1100);
++		for (i = 0; i < 1000; i++) {
++			msg = RREG32(mmMP0_SMN_C2PMSG_33);
++			if (msg & 0x80000000)
++				break;
++			msleep(1);
++		}
+ 	}
+ 
+ 	vram_size = (uint64_t)RREG32(mmRCC_CONFIG_MEMSIZE) << 20;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index 06f0a6534a94f..88ffb15e25ccc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -212,6 +212,8 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
+ 	 */
+ 	if (ring->funcs->type == AMDGPU_RING_TYPE_KIQ)
+ 		sched_hw_submission = max(sched_hw_submission, 256);
++	if (ring->funcs->type == AMDGPU_RING_TYPE_MES)
++		sched_hw_submission = 8;
+ 	else if (ring == &adev->sdma.instance[0].page)
+ 		sched_hw_submission = 256;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+index 32d4519541c6b..e1a66d585f5e9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+@@ -163,7 +163,7 @@ static int mes_v11_0_submit_pkt_and_poll_completion(struct amdgpu_mes *mes,
+ 	const char *op_str, *misc_op_str;
+ 	unsigned long flags;
+ 	u64 status_gpu_addr;
+-	u32 status_offset;
++	u32 seq, status_offset;
+ 	u64 *status_ptr;
+ 	signed long r;
+ 	int ret;
+@@ -191,6 +191,13 @@ static int mes_v11_0_submit_pkt_and_poll_completion(struct amdgpu_mes *mes,
+ 	if (r)
+ 		goto error_unlock_free;
+ 
++	seq = ++ring->fence_drv.sync_seq;
++	r = amdgpu_fence_wait_polling(ring,
++				      seq - ring->fence_drv.num_fences_mask,
++				      timeout);
++	if (r < 1)
++		goto error_undo;
++
+ 	api_status = (struct MES_API_STATUS *)((char *)pkt + api_status_off);
+ 	api_status->api_completion_fence_addr = status_gpu_addr;
+ 	api_status->api_completion_fence_value = 1;
+@@ -203,8 +210,7 @@ static int mes_v11_0_submit_pkt_and_poll_completion(struct amdgpu_mes *mes,
+ 	mes_status_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS;
+ 	mes_status_pkt.api_status.api_completion_fence_addr =
+ 		ring->fence_drv.gpu_addr;
+-	mes_status_pkt.api_status.api_completion_fence_value =
+-		++ring->fence_drv.sync_seq;
++	mes_status_pkt.api_status.api_completion_fence_value = seq;
+ 
+ 	amdgpu_ring_write_multiple(ring, &mes_status_pkt,
+ 				   sizeof(mes_status_pkt) / 4);
+@@ -224,7 +230,7 @@ static int mes_v11_0_submit_pkt_and_poll_completion(struct amdgpu_mes *mes,
+ 		dev_dbg(adev->dev, "MES msg=%d was emitted\n",
+ 			x_pkt->header.opcode);
+ 
+-	r = amdgpu_fence_wait_polling(ring, ring->fence_drv.sync_seq, timeout);
++	r = amdgpu_fence_wait_polling(ring, seq, timeout);
+ 	if (r < 1 || !*status_ptr) {
+ 
+ 		if (misc_op_str)
+@@ -247,6 +253,10 @@ static int mes_v11_0_submit_pkt_and_poll_completion(struct amdgpu_mes *mes,
+ 	amdgpu_device_wb_free(adev, status_offset);
+ 	return 0;
+ 
++error_undo:
++	dev_err(adev->dev, "MES ring buffer is full.\n");
++	amdgpu_ring_undo(ring);
++
+ error_unlock_free:
+ 	spin_unlock_irqrestore(&mes->ring_lock, flags);
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+index 311c62d2d1ebb..70e45d980bb93 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+@@ -28,6 +28,7 @@
+ #include <drm/drm_blend.h>
+ #include <drm/drm_gem_atomic_helper.h>
+ #include <drm/drm_plane_helper.h>
++#include <drm/drm_gem_framebuffer_helper.h>
+ #include <drm/drm_fourcc.h>
+ 
+ #include "amdgpu.h"
+@@ -854,10 +855,14 @@ static int amdgpu_dm_plane_helper_prepare_fb(struct drm_plane *plane,
+ 	}
+ 
+ 	afb = to_amdgpu_framebuffer(new_state->fb);
+-	obj = new_state->fb->obj[0];
++	obj = drm_gem_fb_get_obj(new_state->fb, 0);
++	if (!obj) {
++		DRM_ERROR("Failed to get obj from framebuffer\n");
++		return -EINVAL;
++	}
++
+ 	rbo = gem_to_amdgpu_bo(obj);
+ 	adev = amdgpu_ttm_adev(rbo->tbo.bdev);
+-
+ 	r = amdgpu_bo_reserve(rbo, true);
+ 	if (r) {
+ 		dev_err(adev->dev, "fail to reserve bo (%d)\n", r);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 06409133b09b1..95f690b70c057 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -2215,8 +2215,9 @@ static int smu_bump_power_profile_mode(struct smu_context *smu,
+ }
+ 
+ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+-				   enum amd_dpm_forced_level level,
+-				   bool skip_display_settings)
++					  enum amd_dpm_forced_level level,
++					  bool skip_display_settings,
++					  bool force_update)
+ {
+ 	int ret = 0;
+ 	int index = 0;
+@@ -2245,7 +2246,7 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+ 		}
+ 	}
+ 
+-	if (smu_dpm_ctx->dpm_level != level) {
++	if (force_update || smu_dpm_ctx->dpm_level != level) {
+ 		ret = smu_asic_set_performance_level(smu, level);
+ 		if (ret) {
+ 			dev_err(smu->adev->dev, "Failed to set performance level!");
+@@ -2256,13 +2257,12 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+ 		smu_dpm_ctx->dpm_level = level;
+ 	}
+ 
+-	if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
+-		smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) {
++	if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) {
+ 		index = fls(smu->workload_mask);
+ 		index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+ 		workload[0] = smu->workload_setting[index];
+ 
+-		if (smu->power_profile_mode != workload[0])
++		if (force_update || smu->power_profile_mode != workload[0])
+ 			smu_bump_power_profile_mode(smu, workload, 0);
+ 	}
+ 
+@@ -2283,11 +2283,13 @@ static int smu_handle_task(struct smu_context *smu,
+ 		ret = smu_pre_display_config_changed(smu);
+ 		if (ret)
+ 			return ret;
+-		ret = smu_adjust_power_state_dynamic(smu, level, false);
++		ret = smu_adjust_power_state_dynamic(smu, level, false, false);
+ 		break;
+ 	case AMD_PP_TASK_COMPLETE_INIT:
++		ret = smu_adjust_power_state_dynamic(smu, level, true, true);
++		break;
+ 	case AMD_PP_TASK_READJUST_POWER_STATE:
+-		ret = smu_adjust_power_state_dynamic(smu, level, true);
++		ret = smu_adjust_power_state_dynamic(smu, level, true, false);
+ 		break;
+ 	default:
+ 		break;
+@@ -2334,8 +2336,7 @@ static int smu_switch_power_profile(void *handle,
+ 		workload[0] = smu->workload_setting[index];
+ 	}
+ 
+-	if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
+-		smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
++	if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
+ 		smu_bump_power_profile_mode(smu, workload, 0);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 9c9e060476c72..8ea00c8ee74a3 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -5860,6 +5860,18 @@ intel_dp_detect(struct drm_connector *connector,
+ 	else
+ 		status = connector_status_disconnected;
+ 
++	if (status != connector_status_disconnected &&
++	    !intel_dp_mst_verify_dpcd_state(intel_dp))
++		/*
++		 * This requires retrying detection for instance to re-enable
++		 * the MST mode that got reset via a long HPD pulse. The retry
++		 * will happen either via the hotplug handler's retry logic,
++		 * ensured by setting the connector here to SST/disconnected,
++		 * or via a userspace connector probing in response to the
++		 * hotplug uevent sent when removing the MST connectors.
++		 */
++		status = connector_status_disconnected;
++
+ 	if (status == connector_status_disconnected) {
+ 		memset(&intel_dp->compliance, 0, sizeof(intel_dp->compliance));
+ 		memset(intel_connector->dp.dsc_dpcd, 0, sizeof(intel_connector->dp.dsc_dpcd));
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+index 715d2f59f5652..de141744e1c35 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+@@ -1986,3 +1986,43 @@ bool intel_dp_mst_crtc_needs_modeset(struct intel_atomic_state *state,
+ 
+ 	return false;
+ }
++
++/*
++ * intel_dp_mst_verify_dpcd_state - verify the MST SW enabled state wrt. the DPCD
++ * @intel_dp: DP port object
++ *
++ * Verify if @intel_dp's MST enabled SW state matches the corresponding DPCD
++ * state. A long HPD pulse - not long enough to be detected as a disconnected
++ * state - could've reset the DPCD state, which requires tearing
++ * down/recreating the MST topology.
++ *
++ * Returns %true if the SW MST enabled and DPCD states match, %false
++ * otherwise.
++ */
++bool intel_dp_mst_verify_dpcd_state(struct intel_dp *intel_dp)
++{
++	struct intel_display *display = to_intel_display(intel_dp);
++	struct intel_connector *connector = intel_dp->attached_connector;
++	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
++	struct intel_encoder *encoder = &dig_port->base;
++	int ret;
++	u8 val;
++
++	if (!intel_dp->is_mst)
++		return true;
++
++	ret = drm_dp_dpcd_readb(intel_dp->mst_mgr.aux, DP_MSTM_CTRL, &val);
++
++	/* Adjust the expected register value for SST + SideBand. */
++	if (ret < 0 || val != (DP_MST_EN | DP_UP_REQ_EN | DP_UPSTREAM_IS_SRC)) {
++		drm_dbg_kms(display->drm,
++			    "[CONNECTOR:%d:%s][ENCODER:%d:%s] MST mode got reset, removing topology (ret=%d, ctrl=0x%02x)\n",
++			    connector->base.base.id, connector->base.name,
++			    encoder->base.base.id, encoder->base.name,
++			    ret, val);
++
++		return false;
++	}
++
++	return true;
++}
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.h b/drivers/gpu/drm/i915/display/intel_dp_mst.h
+index 8ca1d599091c6..9e4c7679f1c3a 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.h
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.h
+@@ -27,5 +27,6 @@ int intel_dp_mst_atomic_check_link(struct intel_atomic_state *state,
+ 				   struct intel_link_bw_limits *limits);
+ bool intel_dp_mst_crtc_needs_modeset(struct intel_atomic_state *state,
+ 				     struct intel_crtc *crtc);
++bool intel_dp_mst_verify_dpcd_state(struct intel_dp *intel_dp);
+ 
+ #endif /* __INTEL_DP_MST_H__ */
+diff --git a/drivers/gpu/drm/i915/display/vlv_dsi.c b/drivers/gpu/drm/i915/display/vlv_dsi.c
+index ee9923c7b1158..b674b8d157080 100644
+--- a/drivers/gpu/drm/i915/display/vlv_dsi.c
++++ b/drivers/gpu/drm/i915/display/vlv_dsi.c
+@@ -1869,7 +1869,6 @@ static const struct dmi_system_id vlv_dsi_dmi_quirk_table[] = {
+ 		/* Lenovo Yoga Tab 3 Pro YT3-X90F */
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
+ 			DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"),
+ 		},
+ 		.driver_data = (void *)vlv_dsi_lenovo_yoga_tab3_backlight_fixup,
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index 0132403b8159f..d0f55ec15ab0a 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -134,6 +134,8 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
+ 	struct v3d_stats *local_stats = &file->stats[queue];
+ 	u64 now = local_clock();
+ 
++	preempt_disable();
++
+ 	write_seqcount_begin(&local_stats->lock);
+ 	local_stats->start_ns = now;
+ 	write_seqcount_end(&local_stats->lock);
+@@ -141,6 +143,8 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
+ 	write_seqcount_begin(&global_stats->lock);
+ 	global_stats->start_ns = now;
+ 	write_seqcount_end(&global_stats->lock);
++
++	preempt_enable();
+ }
+ 
+ static void
+@@ -162,8 +166,10 @@ v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue)
+ 	struct v3d_stats *local_stats = &file->stats[queue];
+ 	u64 now = local_clock();
+ 
++	preempt_disable();
+ 	v3d_stats_update(local_stats, now);
+ 	v3d_stats_update(global_stats, now);
++	preempt_enable();
+ }
+ 
+ static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job)
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
+index 717d624e9a052..890a66a2361f4 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
+@@ -27,6 +27,8 @@
+  **************************************************************************/
+ 
+ #include "vmwgfx_drv.h"
++
++#include "vmwgfx_bo.h"
+ #include <linux/highmem.h>
+ 
+ /*
+@@ -420,13 +422,105 @@ static int vmw_bo_cpu_blit_line(struct vmw_bo_blit_line_data *d,
+ 	return 0;
+ }
+ 
++static void *map_external(struct vmw_bo *bo, struct iosys_map *map)
++{
++	struct vmw_private *vmw =
++		container_of(bo->tbo.bdev, struct vmw_private, bdev);
++	void *ptr = NULL;
++	int ret;
++
++	if (bo->tbo.base.import_attach) {
++		ret = dma_buf_vmap(bo->tbo.base.dma_buf, map);
++		if (ret) {
++			drm_dbg_driver(&vmw->drm,
++				       "Wasn't able to map external bo!\n");
++			goto out;
++		}
++		ptr = map->vaddr;
++	} else {
++		ptr = vmw_bo_map_and_cache(bo);
++	}
++
++out:
++	return ptr;
++}
++
++static void unmap_external(struct vmw_bo *bo, struct iosys_map *map)
++{
++	if (bo->tbo.base.import_attach)
++		dma_buf_vunmap(bo->tbo.base.dma_buf, map);
++	else
++		vmw_bo_unmap(bo);
++}
++
++static int vmw_external_bo_copy(struct vmw_bo *dst, u32 dst_offset,
++				u32 dst_stride, struct vmw_bo *src,
++				u32 src_offset, u32 src_stride,
++				u32 width_in_bytes, u32 height,
++				struct vmw_diff_cpy *diff)
++{
++	struct vmw_private *vmw =
++		container_of(dst->tbo.bdev, struct vmw_private, bdev);
++	size_t dst_size = dst->tbo.resource->size;
++	size_t src_size = src->tbo.resource->size;
++	struct iosys_map dst_map = {0};
++	struct iosys_map src_map = {0};
++	int ret, i;
++	int x_in_bytes;
++	u8 *vsrc;
++	u8 *vdst;
++
++	vsrc = map_external(src, &src_map);
++	if (!vsrc) {
++		drm_dbg_driver(&vmw->drm, "Wasn't able to map src\n");
++		ret = -ENOMEM;
++		goto out;
++	}
++
++	vdst = map_external(dst, &dst_map);
++	if (!vdst) {
++		drm_dbg_driver(&vmw->drm, "Wasn't able to map dst\n");
++		ret = -ENOMEM;
++		goto out;
++	}
++
++	vsrc += src_offset;
++	vdst += dst_offset;
++	if (src_stride == dst_stride) {
++		dst_size -= dst_offset;
++		src_size -= src_offset;
++		memcpy(vdst, vsrc,
++		       min(dst_stride * height, min(dst_size, src_size)));
++	} else {
++		WARN_ON(dst_stride < width_in_bytes);
++		for (i = 0; i < height; ++i) {
++			memcpy(vdst, vsrc, width_in_bytes);
++			vsrc += src_stride;
++			vdst += dst_stride;
++		}
++	}
++
++	x_in_bytes = (dst_offset % dst_stride);
++	diff->rect.x1 =  x_in_bytes / diff->cpp;
++	diff->rect.y1 = ((dst_offset - x_in_bytes) / dst_stride);
++	diff->rect.x2 = diff->rect.x1 + width_in_bytes / diff->cpp;
++	diff->rect.y2 = diff->rect.y1 + height;
++
++	ret = 0;
++out:
++	unmap_external(src, &src_map);
++	unmap_external(dst, &dst_map);
++
++	return ret;
++}
++
+ /**
+  * vmw_bo_cpu_blit - in-kernel cpu blit.
+  *
+- * @dst: Destination buffer object.
++ * @vmw_dst: Destination buffer object.
+  * @dst_offset: Destination offset of blit start in bytes.
+  * @dst_stride: Destination stride in bytes.
+- * @src: Source buffer object.
++ * @vmw_src: Source buffer object.
+  * @src_offset: Source offset of blit start in bytes.
+  * @src_stride: Source stride in bytes.
+  * @w: Width of blit.
+@@ -444,13 +538,15 @@ static int vmw_bo_cpu_blit_line(struct vmw_bo_blit_line_data *d,
+  * Neither of the buffer objects may be placed in PCI memory
+  * (Fixed memory in TTM terminology) when using this function.
+  */
+-int vmw_bo_cpu_blit(struct ttm_buffer_object *dst,
++int vmw_bo_cpu_blit(struct vmw_bo *vmw_dst,
+ 		    u32 dst_offset, u32 dst_stride,
+-		    struct ttm_buffer_object *src,
++		    struct vmw_bo *vmw_src,
+ 		    u32 src_offset, u32 src_stride,
+ 		    u32 w, u32 h,
+ 		    struct vmw_diff_cpy *diff)
+ {
++	struct ttm_buffer_object *src = &vmw_src->tbo;
++	struct ttm_buffer_object *dst = &vmw_dst->tbo;
+ 	struct ttm_operation_ctx ctx = {
+ 		.interruptible = false,
+ 		.no_wait_gpu = false
+@@ -460,6 +556,11 @@ int vmw_bo_cpu_blit(struct ttm_buffer_object *dst,
+ 	int ret = 0;
+ 	struct page **dst_pages = NULL;
+ 	struct page **src_pages = NULL;
++	bool src_external = (src->ttm->page_flags & TTM_TT_FLAG_EXTERNAL) != 0;
++	bool dst_external = (dst->ttm->page_flags & TTM_TT_FLAG_EXTERNAL) != 0;
++
++	if (WARN_ON(dst == src))
++		return -EINVAL;
+ 
+ 	/* Buffer objects need to be either pinned or reserved: */
+ 	if (!(dst->pin_count))
+@@ -479,6 +580,11 @@ int vmw_bo_cpu_blit(struct ttm_buffer_object *dst,
+ 			return ret;
+ 	}
+ 
++	if (src_external || dst_external)
++		return vmw_external_bo_copy(vmw_dst, dst_offset, dst_stride,
++					    vmw_src, src_offset, src_stride,
++					    w, h, diff);
++
+ 	if (!src->ttm->pages && src->ttm->sg) {
+ 		src_pages = kvmalloc_array(src->ttm->num_pages,
+ 					   sizeof(struct page *), GFP_KERNEL);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+index f42ebc4a7c225..a0e433fbcba67 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+@@ -360,6 +360,8 @@ void *vmw_bo_map_and_cache_size(struct vmw_bo *vbo, size_t size)
+ 	void *virtual;
+ 	int ret;
+ 
++	atomic_inc(&vbo->map_count);
++
+ 	virtual = ttm_kmap_obj_virtual(&vbo->map, &not_used);
+ 	if (virtual)
+ 		return virtual;
+@@ -383,11 +385,17 @@ void *vmw_bo_map_and_cache_size(struct vmw_bo *vbo, size_t size)
+  */
+ void vmw_bo_unmap(struct vmw_bo *vbo)
+ {
++	int map_count;
++
+ 	if (vbo->map.bo == NULL)
+ 		return;
+ 
+-	ttm_bo_kunmap(&vbo->map);
+-	vbo->map.bo = NULL;
++	map_count = atomic_dec_return(&vbo->map_count);
++
++	if (!map_count) {
++		ttm_bo_kunmap(&vbo->map);
++		vbo->map.bo = NULL;
++	}
+ }
+ 
+ 
+@@ -421,6 +429,7 @@ static int vmw_bo_init(struct vmw_private *dev_priv,
+ 	vmw_bo->tbo.priority = 3;
+ 	vmw_bo->res_tree = RB_ROOT;
+ 	xa_init(&vmw_bo->detached_resources);
++	atomic_set(&vmw_bo->map_count, 0);
+ 
+ 	params->size = ALIGN(params->size, PAGE_SIZE);
+ 	drm_gem_private_object_init(vdev, &vmw_bo->tbo.base, params->size);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
+index 62b4342d5f7c5..43b5439ec9f76 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
+@@ -71,6 +71,8 @@ struct vmw_bo_params {
+  * @map: Kmap object for semi-persistent mappings
+  * @res_tree: RB tree of resources using this buffer object as a backing MOB
+  * @res_prios: Eviction priority counts for attached resources
++ * @map_count: The number of currently active maps. Will differ from the
++ * cpu_writers because it includes kernel maps.
+  * @cpu_writers: Number of synccpu write grabs. Protected by reservation when
+  * increased. May be decreased without reservation.
+  * @dx_query_ctx: DX context if this buffer object is used as a DX query MOB
+@@ -90,6 +92,7 @@ struct vmw_bo {
+ 	u32 res_prios[TTM_MAX_BO_PRIORITY];
+ 	struct xarray detached_resources;
+ 
++	atomic_t map_count;
+ 	atomic_t cpu_writers;
+ 	/* Not ref-counted.  Protected by binding_mutex */
+ 	struct vmw_resource *dx_query_ctx;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+index 32f50e5958097..3f4719b3c2681 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+@@ -1353,9 +1353,9 @@ void vmw_diff_memcpy(struct vmw_diff_cpy *diff, u8 *dest, const u8 *src,
+ 
+ void vmw_memcpy(struct vmw_diff_cpy *diff, u8 *dest, const u8 *src, size_t n);
+ 
+-int vmw_bo_cpu_blit(struct ttm_buffer_object *dst,
++int vmw_bo_cpu_blit(struct vmw_bo *dst,
+ 		    u32 dst_offset, u32 dst_stride,
+-		    struct ttm_buffer_object *src,
++		    struct vmw_bo *src,
+ 		    u32 src_offset, u32 src_stride,
+ 		    u32 w, u32 h,
+ 		    struct vmw_diff_cpy *diff);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+index 5453f7cf0e2d7..fab155a68054a 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+@@ -502,7 +502,7 @@ static void vmw_stdu_bo_cpu_commit(struct vmw_kms_dirty *dirty)
+ 		container_of(dirty->unit, typeof(*stdu), base);
+ 	s32 width, height;
+ 	s32 src_pitch, dst_pitch;
+-	struct ttm_buffer_object *src_bo, *dst_bo;
++	struct vmw_bo *src_bo, *dst_bo;
+ 	u32 src_offset, dst_offset;
+ 	struct vmw_diff_cpy diff = VMW_CPU_BLIT_DIFF_INITIALIZER(stdu->cpp);
+ 
+@@ -517,11 +517,11 @@ static void vmw_stdu_bo_cpu_commit(struct vmw_kms_dirty *dirty)
+ 
+ 	/* Assume we are blitting from Guest (bo) to Host (display_srf) */
+ 	src_pitch = stdu->display_srf->metadata.base_size.width * stdu->cpp;
+-	src_bo = &stdu->display_srf->res.guest_memory_bo->tbo;
++	src_bo = stdu->display_srf->res.guest_memory_bo;
+ 	src_offset = ddirty->top * src_pitch + ddirty->left * stdu->cpp;
+ 
+ 	dst_pitch = ddirty->pitch;
+-	dst_bo = &ddirty->buf->tbo;
++	dst_bo = ddirty->buf;
+ 	dst_offset = ddirty->fb_top * dst_pitch + ddirty->fb_left * stdu->cpp;
+ 
+ 	(void) vmw_bo_cpu_blit(dst_bo, dst_offset, dst_pitch,
+@@ -1170,7 +1170,7 @@ vmw_stdu_bo_populate_update_cpu(struct vmw_du_update_plane  *update, void *cmd,
+ 	struct vmw_diff_cpy diff = VMW_CPU_BLIT_DIFF_INITIALIZER(0);
+ 	struct vmw_stdu_update_gb_image *cmd_img = cmd;
+ 	struct vmw_stdu_update *cmd_update;
+-	struct ttm_buffer_object *src_bo, *dst_bo;
++	struct vmw_bo *src_bo, *dst_bo;
+ 	u32 src_offset, dst_offset;
+ 	s32 src_pitch, dst_pitch;
+ 	s32 width, height;
+@@ -1184,11 +1184,11 @@ vmw_stdu_bo_populate_update_cpu(struct vmw_du_update_plane  *update, void *cmd,
+ 
+ 	diff.cpp = stdu->cpp;
+ 
+-	dst_bo = &stdu->display_srf->res.guest_memory_bo->tbo;
++	dst_bo = stdu->display_srf->res.guest_memory_bo;
+ 	dst_pitch = stdu->display_srf->metadata.base_size.width * stdu->cpp;
+ 	dst_offset = bb->y1 * dst_pitch + bb->x1 * stdu->cpp;
+ 
+-	src_bo = &vfbbo->buffer->tbo;
++	src_bo = vfbbo->buffer;
+ 	src_pitch = update->vfb->base.pitches[0];
+ 	src_offset = bo_update->fb_top * src_pitch + bo_update->fb_left *
+ 		stdu->cpp;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+index 8ae6a761c9003..1625b30d99700 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+@@ -2283,9 +2283,11 @@ int vmw_dumb_create(struct drm_file *file_priv,
+ 	/*
+ 	 * Without mob support we're just going to use raw memory buffer
+ 	 * because we wouldn't be able to support full surface coherency
+-	 * without mobs
++	 * without mobs. There is also no reason to support surface coherency
++	 * without 3d (i.e. gpu usage on the host) because then all the
++	 * content is going to be rendered guest side.
+ 	 */
+-	if (!dev_priv->has_mob) {
++	if (!dev_priv->has_mob || !vmw_supports_3d(dev_priv)) {
+ 		int cpp = DIV_ROUND_UP(args->bpp, 8);
+ 
+ 		switch (cpp) {
+diff --git a/drivers/gpu/drm/xe/display/xe_display.c b/drivers/gpu/drm/xe/display/xe_display.c
+index 7cdc03dc40ed9..96835ffa5734e 100644
+--- a/drivers/gpu/drm/xe/display/xe_display.c
++++ b/drivers/gpu/drm/xe/display/xe_display.c
+@@ -302,7 +302,28 @@ static bool suspend_to_idle(void)
+ 	return false;
+ }
+ 
+-void xe_display_pm_suspend(struct xe_device *xe)
++static void xe_display_flush_cleanup_work(struct xe_device *xe)
++{
++	struct intel_crtc *crtc;
++
++	for_each_intel_crtc(&xe->drm, crtc) {
++		struct drm_crtc_commit *commit;
++
++		spin_lock(&crtc->base.commit_lock);
++		commit = list_first_entry_or_null(&crtc->base.commit_list,
++						  struct drm_crtc_commit, commit_entry);
++		if (commit)
++			drm_crtc_commit_get(commit);
++		spin_unlock(&crtc->base.commit_lock);
++
++		if (commit) {
++			wait_for_completion(&commit->cleanup_done);
++			drm_crtc_commit_put(commit);
++		}
++	}
++}
++
++void xe_display_pm_suspend(struct xe_device *xe, bool runtime)
+ {
+ 	bool s2idle = suspend_to_idle();
+ 	if (!xe->info.enable_display)
+@@ -316,7 +337,10 @@ void xe_display_pm_suspend(struct xe_device *xe)
+ 	if (has_display(xe))
+ 		drm_kms_helper_poll_disable(&xe->drm);
+ 
+-	intel_display_driver_suspend(xe);
++	if (!runtime)
++		intel_display_driver_suspend(xe);
++
++	xe_display_flush_cleanup_work(xe);
+ 
+ 	intel_dp_mst_suspend(xe);
+ 
+@@ -352,7 +376,7 @@ void xe_display_pm_resume_early(struct xe_device *xe)
+ 	intel_power_domains_resume(xe);
+ }
+ 
+-void xe_display_pm_resume(struct xe_device *xe)
++void xe_display_pm_resume(struct xe_device *xe, bool runtime)
+ {
+ 	if (!xe->info.enable_display)
+ 		return;
+@@ -367,7 +391,8 @@ void xe_display_pm_resume(struct xe_device *xe)
+ 
+ 	/* MST sideband requires HPD interrupts enabled */
+ 	intel_dp_mst_resume(xe);
+-	intel_display_driver_resume(xe);
++	if (!runtime)
++		intel_display_driver_resume(xe);
+ 
+ 	intel_hpd_poll_disable(xe);
+ 	if (has_display(xe))
+diff --git a/drivers/gpu/drm/xe/display/xe_display.h b/drivers/gpu/drm/xe/display/xe_display.h
+index 710e56180b52d..93d1f779b9788 100644
+--- a/drivers/gpu/drm/xe/display/xe_display.h
++++ b/drivers/gpu/drm/xe/display/xe_display.h
+@@ -34,10 +34,10 @@ void xe_display_irq_enable(struct xe_device *xe, u32 gu_misc_iir);
+ void xe_display_irq_reset(struct xe_device *xe);
+ void xe_display_irq_postinstall(struct xe_device *xe, struct xe_gt *gt);
+ 
+-void xe_display_pm_suspend(struct xe_device *xe);
++void xe_display_pm_suspend(struct xe_device *xe, bool runtime);
+ void xe_display_pm_suspend_late(struct xe_device *xe);
+ void xe_display_pm_resume_early(struct xe_device *xe);
+-void xe_display_pm_resume(struct xe_device *xe);
++void xe_display_pm_resume(struct xe_device *xe, bool runtime);
+ 
+ #else
+ 
+@@ -63,10 +63,10 @@ static inline void xe_display_irq_enable(struct xe_device *xe, u32 gu_misc_iir)
+ static inline void xe_display_irq_reset(struct xe_device *xe) {}
+ static inline void xe_display_irq_postinstall(struct xe_device *xe, struct xe_gt *gt) {}
+ 
+-static inline void xe_display_pm_suspend(struct xe_device *xe) {}
++static inline void xe_display_pm_suspend(struct xe_device *xe, bool runtime) {}
+ static inline void xe_display_pm_suspend_late(struct xe_device *xe) {}
+ static inline void xe_display_pm_resume_early(struct xe_device *xe) {}
+-static inline void xe_display_pm_resume(struct xe_device *xe) {}
++static inline void xe_display_pm_resume(struct xe_device *xe, bool runtime) {}
+ 
+ #endif /* CONFIG_DRM_XE_DISPLAY */
+ #endif /* _XE_DISPLAY_H_ */
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
+index 2ae4420e29353..ba7013f82c8b6 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue.c
++++ b/drivers/gpu/drm/xe/xe_exec_queue.c
+@@ -67,7 +67,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
+ 	q->fence_irq = &gt->fence_irq[hwe->class];
+ 	q->ring_ops = gt->ring_ops[hwe->class];
+ 	q->ops = gt->exec_queue_ops;
+-	INIT_LIST_HEAD(&q->compute.link);
++	INIT_LIST_HEAD(&q->lr.link);
+ 	INIT_LIST_HEAD(&q->multi_gt_link);
+ 
+ 	q->sched_props.timeslice_us = hwe->eclass->sched_props.timeslice_us;
+@@ -631,8 +631,7 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
+ 			return PTR_ERR(q);
+ 
+ 		if (xe_vm_in_preempt_fence_mode(vm)) {
+-			q->compute.context = dma_fence_context_alloc(1);
+-			spin_lock_init(&q->compute.lock);
++			q->lr.context = dma_fence_context_alloc(1);
+ 
+ 			err = xe_vm_add_compute_exec_queue(vm, q);
+ 			if (XE_IOCTL_DBG(xe, err))
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
+index f0c40e8ad80a1..a5aa43942d8cf 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
++++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
+@@ -115,19 +115,17 @@ struct xe_exec_queue {
+ 		enum xe_exec_queue_priority priority;
+ 	} sched_props;
+ 
+-	/** @compute: compute exec queue state */
++	/** @lr: long-running exec queue state */
+ 	struct {
+-		/** @compute.pfence: preemption fence */
++		/** @lr.pfence: preemption fence */
+ 		struct dma_fence *pfence;
+-		/** @compute.context: preemption fence context */
++		/** @lr.context: preemption fence context */
+ 		u64 context;
+-		/** @compute.seqno: preemption fence seqno */
++		/** @lr.seqno: preemption fence seqno */
+ 		u32 seqno;
+-		/** @compute.link: link into VM's list of exec queues */
++		/** @lr.link: link into VM's list of exec queues */
+ 		struct list_head link;
+-		/** @compute.lock: preemption fences lock */
+-		spinlock_t lock;
+-	} compute;
++	} lr;
+ 
+ 	/** @ops: submission backend exec queue operations */
+ 	const struct xe_exec_queue_ops *ops;
+diff --git a/drivers/gpu/drm/xe/xe_hwmon.c b/drivers/gpu/drm/xe/xe_hwmon.c
+index d37f1dea9f8b8..bb815dbde63a6 100644
+--- a/drivers/gpu/drm/xe/xe_hwmon.c
++++ b/drivers/gpu/drm/xe/xe_hwmon.c
+@@ -443,7 +443,7 @@ static int xe_hwmon_pcode_write_i1(struct xe_gt *gt, u32 uval)
+ {
+ 	return xe_pcode_write(gt, PCODE_MBOX(PCODE_POWER_SETUP,
+ 			      POWER_SETUP_SUBCOMMAND_WRITE_I1, 0),
+-			      uval);
++			      (uval & POWER_SETUP_I1_DATA_MASK));
+ }
+ 
+ static int xe_hwmon_power_curr_crit_read(struct xe_hwmon *hwmon, int channel,
+diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
+index 37fbeda12d3bd..cf80679ceb701 100644
+--- a/drivers/gpu/drm/xe/xe_pm.c
++++ b/drivers/gpu/drm/xe/xe_pm.c
+@@ -91,17 +91,17 @@ int xe_pm_suspend(struct xe_device *xe)
+ 	for_each_gt(gt, xe, id)
+ 		xe_gt_suspend_prepare(gt);
+ 
++	xe_display_pm_suspend(xe, false);
++
+ 	/* FIXME: Super racey... */
+ 	err = xe_bo_evict_all(xe);
+ 	if (err)
+ 		goto err;
+ 
+-	xe_display_pm_suspend(xe);
+-
+ 	for_each_gt(gt, xe, id) {
+ 		err = xe_gt_suspend(gt);
+ 		if (err) {
+-			xe_display_pm_resume(xe);
++			xe_display_pm_resume(xe, false);
+ 			goto err;
+ 		}
+ 	}
+@@ -151,11 +151,11 @@ int xe_pm_resume(struct xe_device *xe)
+ 
+ 	xe_irq_resume(xe);
+ 
+-	xe_display_pm_resume(xe);
+-
+ 	for_each_gt(gt, xe, id)
+ 		xe_gt_resume(gt);
+ 
++	xe_display_pm_resume(xe, false);
++
+ 	err = xe_bo_restore_user(xe);
+ 	if (err)
+ 		goto err;
+@@ -363,6 +363,8 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
+ 	mutex_unlock(&xe->mem_access.vram_userfault.lock);
+ 
+ 	if (xe->d3cold.allowed) {
++		xe_display_pm_suspend(xe, true);
++
+ 		err = xe_bo_evict_all(xe);
+ 		if (err)
+ 			goto out;
+@@ -375,7 +377,12 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
+ 	}
+ 
+ 	xe_irq_suspend(xe);
++
++	if (xe->d3cold.allowed)
++		xe_display_pm_suspend_late(xe);
+ out:
++	if (err)
++		xe_display_pm_resume(xe, true);
+ 	lock_map_release(&xe_pm_runtime_lockdep_map);
+ 	xe_pm_write_callback_task(xe, NULL);
+ 	return err;
+@@ -411,6 +418,8 @@ int xe_pm_runtime_resume(struct xe_device *xe)
+ 		if (err)
+ 			goto out;
+ 
++		xe_display_pm_resume_early(xe);
++
+ 		/*
+ 		 * This only restores pinned memory which is the memory
+ 		 * required for the GT(s) to resume.
+@@ -426,6 +435,7 @@ int xe_pm_runtime_resume(struct xe_device *xe)
+ 		xe_gt_resume(gt);
+ 
+ 	if (xe->d3cold.allowed && xe->d3cold.power_lost) {
++		xe_display_pm_resume(xe, true);
+ 		err = xe_bo_restore_user(xe);
+ 		if (err)
+ 			goto out;
+diff --git a/drivers/gpu/drm/xe/xe_preempt_fence.c b/drivers/gpu/drm/xe/xe_preempt_fence.c
+index 5b243b7feb59d..c453f45328b1c 100644
+--- a/drivers/gpu/drm/xe/xe_preempt_fence.c
++++ b/drivers/gpu/drm/xe/xe_preempt_fence.c
+@@ -128,8 +128,9 @@ xe_preempt_fence_arm(struct xe_preempt_fence *pfence, struct xe_exec_queue *q,
+ {
+ 	list_del_init(&pfence->link);
+ 	pfence->q = xe_exec_queue_get(q);
++	spin_lock_init(&pfence->lock);
+ 	dma_fence_init(&pfence->base, &preempt_fence_ops,
+-		      &q->compute.lock, context, seqno);
++		      &pfence->lock, context, seqno);
+ 
+ 	return &pfence->base;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_preempt_fence_types.h b/drivers/gpu/drm/xe/xe_preempt_fence_types.h
+index b54b5c29b5331..312c3372a49f9 100644
+--- a/drivers/gpu/drm/xe/xe_preempt_fence_types.h
++++ b/drivers/gpu/drm/xe/xe_preempt_fence_types.h
+@@ -25,6 +25,8 @@ struct xe_preempt_fence {
+ 	struct xe_exec_queue *q;
+ 	/** @preempt_work: work struct which issues preemption */
+ 	struct work_struct preempt_work;
++	/** @lock: dma-fence fence lock */
++	spinlock_t lock;
+ 	/** @error: preempt fence is in error state */
+ 	int error;
+ };
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index 4aa3943e6f292..fd5612cc6f19b 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -83,10 +83,10 @@ static bool preempt_fences_waiting(struct xe_vm *vm)
+ 	lockdep_assert_held(&vm->lock);
+ 	xe_vm_assert_held(vm);
+ 
+-	list_for_each_entry(q, &vm->preempt.exec_queues, compute.link) {
+-		if (!q->compute.pfence ||
+-		    (q->compute.pfence && test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
+-						   &q->compute.pfence->flags))) {
++	list_for_each_entry(q, &vm->preempt.exec_queues, lr.link) {
++		if (!q->lr.pfence ||
++		    test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
++			     &q->lr.pfence->flags)) {
+ 			return true;
+ 		}
+ 	}
+@@ -129,14 +129,14 @@ static int wait_for_existing_preempt_fences(struct xe_vm *vm)
+ 
+ 	xe_vm_assert_held(vm);
+ 
+-	list_for_each_entry(q, &vm->preempt.exec_queues, compute.link) {
+-		if (q->compute.pfence) {
+-			long timeout = dma_fence_wait(q->compute.pfence, false);
++	list_for_each_entry(q, &vm->preempt.exec_queues, lr.link) {
++		if (q->lr.pfence) {
++			long timeout = dma_fence_wait(q->lr.pfence, false);
+ 
+ 			if (timeout < 0)
+ 				return -ETIME;
+-			dma_fence_put(q->compute.pfence);
+-			q->compute.pfence = NULL;
++			dma_fence_put(q->lr.pfence);
++			q->lr.pfence = NULL;
+ 		}
+ 	}
+ 
+@@ -148,7 +148,7 @@ static bool xe_vm_is_idle(struct xe_vm *vm)
+ 	struct xe_exec_queue *q;
+ 
+ 	xe_vm_assert_held(vm);
+-	list_for_each_entry(q, &vm->preempt.exec_queues, compute.link) {
++	list_for_each_entry(q, &vm->preempt.exec_queues, lr.link) {
+ 		if (!xe_exec_queue_is_idle(q))
+ 			return false;
+ 	}
+@@ -161,17 +161,17 @@ static void arm_preempt_fences(struct xe_vm *vm, struct list_head *list)
+ 	struct list_head *link;
+ 	struct xe_exec_queue *q;
+ 
+-	list_for_each_entry(q, &vm->preempt.exec_queues, compute.link) {
++	list_for_each_entry(q, &vm->preempt.exec_queues, lr.link) {
+ 		struct dma_fence *fence;
+ 
+ 		link = list->next;
+ 		xe_assert(vm->xe, link != list);
+ 
+ 		fence = xe_preempt_fence_arm(to_preempt_fence_from_link(link),
+-					     q, q->compute.context,
+-					     ++q->compute.seqno);
+-		dma_fence_put(q->compute.pfence);
+-		q->compute.pfence = fence;
++					     q, q->lr.context,
++					     ++q->lr.seqno);
++		dma_fence_put(q->lr.pfence);
++		q->lr.pfence = fence;
+ 	}
+ }
+ 
+@@ -191,10 +191,10 @@ static int add_preempt_fences(struct xe_vm *vm, struct xe_bo *bo)
+ 	if (err)
+ 		goto out_unlock;
+ 
+-	list_for_each_entry(q, &vm->preempt.exec_queues, compute.link)
+-		if (q->compute.pfence) {
++	list_for_each_entry(q, &vm->preempt.exec_queues, lr.link)
++		if (q->lr.pfence) {
+ 			dma_resv_add_fence(bo->ttm.base.resv,
+-					   q->compute.pfence,
++					   q->lr.pfence,
+ 					   DMA_RESV_USAGE_BOOKKEEP);
+ 		}
+ 
+@@ -211,10 +211,10 @@ static void resume_and_reinstall_preempt_fences(struct xe_vm *vm,
+ 	lockdep_assert_held(&vm->lock);
+ 	xe_vm_assert_held(vm);
+ 
+-	list_for_each_entry(q, &vm->preempt.exec_queues, compute.link) {
++	list_for_each_entry(q, &vm->preempt.exec_queues, lr.link) {
+ 		q->ops->resume(q);
+ 
+-		drm_gpuvm_resv_add_fence(&vm->gpuvm, exec, q->compute.pfence,
++		drm_gpuvm_resv_add_fence(&vm->gpuvm, exec, q->lr.pfence,
+ 					 DMA_RESV_USAGE_BOOKKEEP, DMA_RESV_USAGE_BOOKKEEP);
+ 	}
+ }
+@@ -238,16 +238,16 @@ int xe_vm_add_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
+ 	if (err)
+ 		goto out_up_write;
+ 
+-	pfence = xe_preempt_fence_create(q, q->compute.context,
+-					 ++q->compute.seqno);
++	pfence = xe_preempt_fence_create(q, q->lr.context,
++					 ++q->lr.seqno);
+ 	if (!pfence) {
+ 		err = -ENOMEM;
+ 		goto out_fini;
+ 	}
+ 
+-	list_add(&q->compute.link, &vm->preempt.exec_queues);
++	list_add(&q->lr.link, &vm->preempt.exec_queues);
+ 	++vm->preempt.num_exec_queues;
+-	q->compute.pfence = pfence;
++	q->lr.pfence = pfence;
+ 
+ 	down_read(&vm->userptr.notifier_lock);
+ 
+@@ -284,12 +284,12 @@ void xe_vm_remove_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
+ 		return;
+ 
+ 	down_write(&vm->lock);
+-	list_del(&q->compute.link);
++	list_del(&q->lr.link);
+ 	--vm->preempt.num_exec_queues;
+-	if (q->compute.pfence) {
+-		dma_fence_enable_sw_signaling(q->compute.pfence);
+-		dma_fence_put(q->compute.pfence);
+-		q->compute.pfence = NULL;
++	if (q->lr.pfence) {
++		dma_fence_enable_sw_signaling(q->lr.pfence);
++		dma_fence_put(q->lr.pfence);
++		q->lr.pfence = NULL;
+ 	}
+ 	up_write(&vm->lock);
+ }
+@@ -325,7 +325,7 @@ static void xe_vm_kill(struct xe_vm *vm)
+ 	vm->flags |= XE_VM_FLAG_BANNED;
+ 	trace_xe_vm_kill(vm);
+ 
+-	list_for_each_entry(q, &vm->preempt.exec_queues, compute.link)
++	list_for_each_entry(q, &vm->preempt.exec_queues, lr.link)
+ 		q->ops->kill(q);
+ 	xe_vm_unlock(vm);
+ 
+diff --git a/drivers/hwmon/pt5161l.c b/drivers/hwmon/pt5161l.c
+index b0d58a26d499d..a9f0b23f9e76e 100644
+--- a/drivers/hwmon/pt5161l.c
++++ b/drivers/hwmon/pt5161l.c
+@@ -427,7 +427,7 @@ static int pt5161l_read(struct device *dev, enum hwmon_sensor_types type,
+ 	struct pt5161l_data *data = dev_get_drvdata(dev);
+ 	int ret;
+ 	u8 buf[8];
+-	long adc_code;
++	u32 adc_code;
+ 
+ 	switch (attr) {
+ 	case hwmon_temp_input:
+@@ -449,7 +449,7 @@ static int pt5161l_read(struct device *dev, enum hwmon_sensor_types type,
+ 
+ 		adc_code = buf[3] << 24 | buf[2] << 16 | buf[1] << 8 | buf[0];
+ 		if (adc_code == 0 || adc_code >= 0x3ff) {
+-			dev_dbg(dev, "Invalid adc_code %lx\n", adc_code);
++			dev_dbg(dev, "Invalid adc_code %x\n", adc_code);
+ 			return -EIO;
+ 		}
+ 
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index 75f244a3e12df..06ffc683b28fe 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -552,9 +552,8 @@ static int arm_v7s_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+ 		    paddr >= (1ULL << data->iop.cfg.oas)))
+ 		return -ERANGE;
+ 
+-	/* If no access, then nothing to do */
+ 	if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
+-		return 0;
++		return -EINVAL;
+ 
+ 	while (pgcount--) {
+ 		ret = __arm_v7s_map(data, iova, paddr, pgsize, prot, 1, data->pgd,
+diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
+index 3d23b924cec16..07c9b90eab2ed 100644
+--- a/drivers/iommu/io-pgtable-arm.c
++++ b/drivers/iommu/io-pgtable-arm.c
+@@ -495,9 +495,8 @@ static int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+ 	if (WARN_ON(iaext || paddr >> cfg->oas))
+ 		return -ERANGE;
+ 
+-	/* If no access, then nothing to do */
+ 	if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
+-		return 0;
++		return -EINVAL;
+ 
+ 	prot = arm_lpae_prot_to_pte(data, iommu_prot);
+ 	ret = __arm_lpae_map(data, iova, paddr, pgsize, pgcount, prot, lvl,
+diff --git a/drivers/iommu/io-pgtable-dart.c b/drivers/iommu/io-pgtable-dart.c
+index ad28031e1e93d..c004640640ee5 100644
+--- a/drivers/iommu/io-pgtable-dart.c
++++ b/drivers/iommu/io-pgtable-dart.c
+@@ -245,9 +245,8 @@ static int dart_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+ 	if (WARN_ON(paddr >> cfg->oas))
+ 		return -ERANGE;
+ 
+-	/* If no access, then nothing to do */
+ 	if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
+-		return 0;
++		return -EINVAL;
+ 
+ 	tbl = dart_get_table(data, iova);
+ 
+diff --git a/drivers/iommu/iommufd/ioas.c b/drivers/iommu/iommufd/ioas.c
+index 7422482765481..157a89b993e43 100644
+--- a/drivers/iommu/iommufd/ioas.c
++++ b/drivers/iommu/iommufd/ioas.c
+@@ -213,6 +213,10 @@ int iommufd_ioas_map(struct iommufd_ucmd *ucmd)
+ 	if (cmd->iova >= ULONG_MAX || cmd->length >= ULONG_MAX)
+ 		return -EOVERFLOW;
+ 
++	if (!(cmd->flags &
++	      (IOMMU_IOAS_MAP_WRITEABLE | IOMMU_IOAS_MAP_READABLE)))
++		return -EINVAL;
++
+ 	ioas = iommufd_get_ioas(ucmd->ictx, cmd->ioas_id);
+ 	if (IS_ERR(ioas))
+ 		return PTR_ERR(ioas);
+@@ -253,6 +257,10 @@ int iommufd_ioas_copy(struct iommufd_ucmd *ucmd)
+ 	    cmd->dst_iova >= ULONG_MAX)
+ 		return -EOVERFLOW;
+ 
++	if (!(cmd->flags &
++	      (IOMMU_IOAS_MAP_WRITEABLE | IOMMU_IOAS_MAP_READABLE)))
++		return -EINVAL;
++
+ 	src_ioas = iommufd_get_ioas(ucmd->ictx, cmd->src_ioas_id);
+ 	if (IS_ERR(src_ioas))
+ 		return PTR_ERR(src_ioas);
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index b257504a85347..60db34095a255 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -427,6 +427,8 @@ static int bond_ipsec_add_sa(struct xfrm_state *xs,
+ 			     struct netlink_ext_ack *extack)
+ {
+ 	struct net_device *bond_dev = xs->xso.dev;
++	struct net_device *real_dev;
++	netdevice_tracker tracker;
+ 	struct bond_ipsec *ipsec;
+ 	struct bonding *bond;
+ 	struct slave *slave;
+@@ -438,74 +440,80 @@ static int bond_ipsec_add_sa(struct xfrm_state *xs,
+ 	rcu_read_lock();
+ 	bond = netdev_priv(bond_dev);
+ 	slave = rcu_dereference(bond->curr_active_slave);
+-	if (!slave) {
+-		rcu_read_unlock();
+-		return -ENODEV;
++	real_dev = slave ? slave->dev : NULL;
++	netdev_hold(real_dev, &tracker, GFP_ATOMIC);
++	rcu_read_unlock();
++	if (!real_dev) {
++		err = -ENODEV;
++		goto out;
+ 	}
+ 
+-	if (!slave->dev->xfrmdev_ops ||
+-	    !slave->dev->xfrmdev_ops->xdo_dev_state_add ||
+-	    netif_is_bond_master(slave->dev)) {
++	if (!real_dev->xfrmdev_ops ||
++	    !real_dev->xfrmdev_ops->xdo_dev_state_add ||
++	    netif_is_bond_master(real_dev)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "Slave does not support ipsec offload");
+-		rcu_read_unlock();
+-		return -EINVAL;
++		err = -EINVAL;
++		goto out;
+ 	}
+ 
+-	ipsec = kmalloc(sizeof(*ipsec), GFP_ATOMIC);
++	ipsec = kmalloc(sizeof(*ipsec), GFP_KERNEL);
+ 	if (!ipsec) {
+-		rcu_read_unlock();
+-		return -ENOMEM;
++		err = -ENOMEM;
++		goto out;
+ 	}
+-	xs->xso.real_dev = slave->dev;
+ 
+-	err = slave->dev->xfrmdev_ops->xdo_dev_state_add(xs, extack);
++	xs->xso.real_dev = real_dev;
++	err = real_dev->xfrmdev_ops->xdo_dev_state_add(xs, extack);
+ 	if (!err) {
+ 		ipsec->xs = xs;
+ 		INIT_LIST_HEAD(&ipsec->list);
+-		spin_lock_bh(&bond->ipsec_lock);
++		mutex_lock(&bond->ipsec_lock);
+ 		list_add(&ipsec->list, &bond->ipsec_list);
+-		spin_unlock_bh(&bond->ipsec_lock);
++		mutex_unlock(&bond->ipsec_lock);
+ 	} else {
+ 		kfree(ipsec);
+ 	}
+-	rcu_read_unlock();
++out:
++	netdev_put(real_dev, &tracker);
+ 	return err;
+ }
+ 
+ static void bond_ipsec_add_sa_all(struct bonding *bond)
+ {
+ 	struct net_device *bond_dev = bond->dev;
++	struct net_device *real_dev;
+ 	struct bond_ipsec *ipsec;
+ 	struct slave *slave;
+ 
+-	rcu_read_lock();
+-	slave = rcu_dereference(bond->curr_active_slave);
+-	if (!slave)
+-		goto out;
++	slave = rtnl_dereference(bond->curr_active_slave);
++	real_dev = slave ? slave->dev : NULL;
++	if (!real_dev)
++		return;
+ 
+-	if (!slave->dev->xfrmdev_ops ||
+-	    !slave->dev->xfrmdev_ops->xdo_dev_state_add ||
+-	    netif_is_bond_master(slave->dev)) {
+-		spin_lock_bh(&bond->ipsec_lock);
++	mutex_lock(&bond->ipsec_lock);
++	if (!real_dev->xfrmdev_ops ||
++	    !real_dev->xfrmdev_ops->xdo_dev_state_add ||
++	    netif_is_bond_master(real_dev)) {
+ 		if (!list_empty(&bond->ipsec_list))
+-			slave_warn(bond_dev, slave->dev,
++			slave_warn(bond_dev, real_dev,
+ 				   "%s: no slave xdo_dev_state_add\n",
+ 				   __func__);
+-		spin_unlock_bh(&bond->ipsec_lock);
+ 		goto out;
+ 	}
+ 
+-	spin_lock_bh(&bond->ipsec_lock);
+ 	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
+-		ipsec->xs->xso.real_dev = slave->dev;
+-		if (slave->dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs, NULL)) {
+-			slave_warn(bond_dev, slave->dev, "%s: failed to add SA\n", __func__);
++		/* If a new state was added before ipsec_lock was acquired */
++		if (ipsec->xs->xso.real_dev == real_dev)
++			continue;
++
++		ipsec->xs->xso.real_dev = real_dev;
++		if (real_dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs, NULL)) {
++			slave_warn(bond_dev, real_dev, "%s: failed to add SA\n", __func__);
+ 			ipsec->xs->xso.real_dev = NULL;
+ 		}
+ 	}
+-	spin_unlock_bh(&bond->ipsec_lock);
+ out:
+-	rcu_read_unlock();
++	mutex_unlock(&bond->ipsec_lock);
+ }
+ 
+ /**
+@@ -515,6 +523,8 @@ static void bond_ipsec_add_sa_all(struct bonding *bond)
+ static void bond_ipsec_del_sa(struct xfrm_state *xs)
+ {
+ 	struct net_device *bond_dev = xs->xso.dev;
++	struct net_device *real_dev;
++	netdevice_tracker tracker;
+ 	struct bond_ipsec *ipsec;
+ 	struct bonding *bond;
+ 	struct slave *slave;
+@@ -525,6 +535,9 @@ static void bond_ipsec_del_sa(struct xfrm_state *xs)
+ 	rcu_read_lock();
+ 	bond = netdev_priv(bond_dev);
+ 	slave = rcu_dereference(bond->curr_active_slave);
++	real_dev = slave ? slave->dev : NULL;
++	netdev_hold(real_dev, &tracker, GFP_ATOMIC);
++	rcu_read_unlock();
+ 
+ 	if (!slave)
+ 		goto out;
+@@ -532,18 +545,19 @@ static void bond_ipsec_del_sa(struct xfrm_state *xs)
+ 	if (!xs->xso.real_dev)
+ 		goto out;
+ 
+-	WARN_ON(xs->xso.real_dev != slave->dev);
++	WARN_ON(xs->xso.real_dev != real_dev);
+ 
+-	if (!slave->dev->xfrmdev_ops ||
+-	    !slave->dev->xfrmdev_ops->xdo_dev_state_delete ||
+-	    netif_is_bond_master(slave->dev)) {
+-		slave_warn(bond_dev, slave->dev, "%s: no slave xdo_dev_state_delete\n", __func__);
++	if (!real_dev->xfrmdev_ops ||
++	    !real_dev->xfrmdev_ops->xdo_dev_state_delete ||
++	    netif_is_bond_master(real_dev)) {
++		slave_warn(bond_dev, real_dev, "%s: no slave xdo_dev_state_delete\n", __func__);
+ 		goto out;
+ 	}
+ 
+-	slave->dev->xfrmdev_ops->xdo_dev_state_delete(xs);
++	real_dev->xfrmdev_ops->xdo_dev_state_delete(xs);
+ out:
+-	spin_lock_bh(&bond->ipsec_lock);
++	netdev_put(real_dev, &tracker);
++	mutex_lock(&bond->ipsec_lock);
+ 	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
+ 		if (ipsec->xs == xs) {
+ 			list_del(&ipsec->list);
+@@ -551,40 +565,72 @@ static void bond_ipsec_del_sa(struct xfrm_state *xs)
+ 			break;
+ 		}
+ 	}
+-	spin_unlock_bh(&bond->ipsec_lock);
+-	rcu_read_unlock();
++	mutex_unlock(&bond->ipsec_lock);
+ }
+ 
+ static void bond_ipsec_del_sa_all(struct bonding *bond)
+ {
+ 	struct net_device *bond_dev = bond->dev;
++	struct net_device *real_dev;
+ 	struct bond_ipsec *ipsec;
+ 	struct slave *slave;
+ 
+-	rcu_read_lock();
+-	slave = rcu_dereference(bond->curr_active_slave);
+-	if (!slave) {
+-		rcu_read_unlock();
++	slave = rtnl_dereference(bond->curr_active_slave);
++	real_dev = slave ? slave->dev : NULL;
++	if (!real_dev)
+ 		return;
+-	}
+ 
+-	spin_lock_bh(&bond->ipsec_lock);
++	mutex_lock(&bond->ipsec_lock);
+ 	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
+ 		if (!ipsec->xs->xso.real_dev)
+ 			continue;
+ 
+-		if (!slave->dev->xfrmdev_ops ||
+-		    !slave->dev->xfrmdev_ops->xdo_dev_state_delete ||
+-		    netif_is_bond_master(slave->dev)) {
+-			slave_warn(bond_dev, slave->dev,
++		if (!real_dev->xfrmdev_ops ||
++		    !real_dev->xfrmdev_ops->xdo_dev_state_delete ||
++		    netif_is_bond_master(real_dev)) {
++			slave_warn(bond_dev, real_dev,
+ 				   "%s: no slave xdo_dev_state_delete\n",
+ 				   __func__);
+ 		} else {
+-			slave->dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
++			real_dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
++			if (real_dev->xfrmdev_ops->xdo_dev_state_free)
++				real_dev->xfrmdev_ops->xdo_dev_state_free(ipsec->xs);
+ 		}
+ 	}
+-	spin_unlock_bh(&bond->ipsec_lock);
++	mutex_unlock(&bond->ipsec_lock);
++}
++
++static void bond_ipsec_free_sa(struct xfrm_state *xs)
++{
++	struct net_device *bond_dev = xs->xso.dev;
++	struct net_device *real_dev;
++	netdevice_tracker tracker;
++	struct bonding *bond;
++	struct slave *slave;
++
++	if (!bond_dev)
++		return;
++
++	rcu_read_lock();
++	bond = netdev_priv(bond_dev);
++	slave = rcu_dereference(bond->curr_active_slave);
++	real_dev = slave ? slave->dev : NULL;
++	netdev_hold(real_dev, &tracker, GFP_ATOMIC);
+ 	rcu_read_unlock();
++
++	if (!slave)
++		goto out;
++
++	if (!xs->xso.real_dev)
++		goto out;
++
++	WARN_ON(xs->xso.real_dev != real_dev);
++
++	if (real_dev && real_dev->xfrmdev_ops &&
++	    real_dev->xfrmdev_ops->xdo_dev_state_free)
++		real_dev->xfrmdev_ops->xdo_dev_state_free(xs);
++out:
++	netdev_put(real_dev, &tracker);
+ }
+ 
+ /**
+@@ -627,6 +673,7 @@ static bool bond_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *xs)
+ static const struct xfrmdev_ops bond_xfrmdev_ops = {
+ 	.xdo_dev_state_add = bond_ipsec_add_sa,
+ 	.xdo_dev_state_delete = bond_ipsec_del_sa,
++	.xdo_dev_state_free = bond_ipsec_free_sa,
+ 	.xdo_dev_offload_ok = bond_ipsec_offload_ok,
+ };
+ #endif /* CONFIG_XFRM_OFFLOAD */
+@@ -5877,7 +5924,7 @@ void bond_setup(struct net_device *bond_dev)
+ 	/* set up xfrm device ops (only supported in active-backup right now) */
+ 	bond_dev->xfrmdev_ops = &bond_xfrmdev_ops;
+ 	INIT_LIST_HEAD(&bond->ipsec_list);
+-	spin_lock_init(&bond->ipsec_lock);
++	mutex_init(&bond->ipsec_lock);
+ #endif /* CONFIG_XFRM_OFFLOAD */
+ 
+ 	/* don't acquire bond device's netif_tx_lock when transmitting */
+@@ -5926,6 +5973,10 @@ static void bond_uninit(struct net_device *bond_dev)
+ 		__bond_release_one(bond_dev, slave->dev, true, true);
+ 	netdev_info(bond_dev, "Released all slaves\n");
+ 
++#ifdef CONFIG_XFRM_OFFLOAD
++	mutex_destroy(&bond->ipsec_lock);
++#endif /* CONFIG_XFRM_OFFLOAD */
++
+ 	bond_set_slave_arr(bond, NULL, NULL);
+ 
+ 	list_del_rcu(&bond->bond_list);
+diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.c b/drivers/net/ethernet/microsoft/mana/hw_channel.c
+index bbc4f9e16c989..0a868679d342e 100644
+--- a/drivers/net/ethernet/microsoft/mana/hw_channel.c
++++ b/drivers/net/ethernet/microsoft/mana/hw_channel.c
+@@ -52,9 +52,33 @@ static int mana_hwc_verify_resp_msg(const struct hwc_caller_ctx *caller_ctx,
+ 	return 0;
+ }
+ 
++static int mana_hwc_post_rx_wqe(const struct hwc_wq *hwc_rxq,
++				struct hwc_work_request *req)
++{
++	struct device *dev = hwc_rxq->hwc->dev;
++	struct gdma_sge *sge;
++	int err;
++
++	sge = &req->sge;
++	sge->address = (u64)req->buf_sge_addr;
++	sge->mem_key = hwc_rxq->msg_buf->gpa_mkey;
++	sge->size = req->buf_len;
++
++	memset(&req->wqe_req, 0, sizeof(struct gdma_wqe_request));
++	req->wqe_req.sgl = sge;
++	req->wqe_req.num_sge = 1;
++	req->wqe_req.client_data_unit = 0;
++
++	err = mana_gd_post_and_ring(hwc_rxq->gdma_wq, &req->wqe_req, NULL);
++	if (err)
++		dev_err(dev, "Failed to post WQE on HWC RQ: %d\n", err);
++	return err;
++}
++
+ static void mana_hwc_handle_resp(struct hw_channel_context *hwc, u32 resp_len,
+-				 const struct gdma_resp_hdr *resp_msg)
++				 struct hwc_work_request *rx_req)
+ {
++	const struct gdma_resp_hdr *resp_msg = rx_req->buf_va;
+ 	struct hwc_caller_ctx *ctx;
+ 	int err;
+ 
+@@ -62,6 +86,7 @@ static void mana_hwc_handle_resp(struct hw_channel_context *hwc, u32 resp_len,
+ 		      hwc->inflight_msg_res.map)) {
+ 		dev_err(hwc->dev, "hwc_rx: invalid msg_id = %u\n",
+ 			resp_msg->response.hwc_msg_id);
++		mana_hwc_post_rx_wqe(hwc->rxq, rx_req);
+ 		return;
+ 	}
+ 
+@@ -75,30 +100,13 @@ static void mana_hwc_handle_resp(struct hw_channel_context *hwc, u32 resp_len,
+ 	memcpy(ctx->output_buf, resp_msg, resp_len);
+ out:
+ 	ctx->error = err;
+-	complete(&ctx->comp_event);
+-}
+-
+-static int mana_hwc_post_rx_wqe(const struct hwc_wq *hwc_rxq,
+-				struct hwc_work_request *req)
+-{
+-	struct device *dev = hwc_rxq->hwc->dev;
+-	struct gdma_sge *sge;
+-	int err;
+-
+-	sge = &req->sge;
+-	sge->address = (u64)req->buf_sge_addr;
+-	sge->mem_key = hwc_rxq->msg_buf->gpa_mkey;
+-	sge->size = req->buf_len;
+ 
+-	memset(&req->wqe_req, 0, sizeof(struct gdma_wqe_request));
+-	req->wqe_req.sgl = sge;
+-	req->wqe_req.num_sge = 1;
+-	req->wqe_req.client_data_unit = 0;
++	/* Must post rx wqe before complete(), otherwise the next rx may
++	 * hit no_wqe error.
++	 */
++	mana_hwc_post_rx_wqe(hwc->rxq, rx_req);
+ 
+-	err = mana_gd_post_and_ring(hwc_rxq->gdma_wq, &req->wqe_req, NULL);
+-	if (err)
+-		dev_err(dev, "Failed to post WQE on HWC RQ: %d\n", err);
+-	return err;
++	complete(&ctx->comp_event);
+ }
+ 
+ static void mana_hwc_init_event_handler(void *ctx, struct gdma_queue *q_self,
+@@ -235,14 +243,12 @@ static void mana_hwc_rx_event_handler(void *ctx, u32 gdma_rxq_id,
+ 		return;
+ 	}
+ 
+-	mana_hwc_handle_resp(hwc, rx_oob->tx_oob_data_size, resp);
++	mana_hwc_handle_resp(hwc, rx_oob->tx_oob_data_size, rx_req);
+ 
+-	/* Do no longer use 'resp', because the buffer is posted to the HW
+-	 * in the below mana_hwc_post_rx_wqe().
++	/* Can no longer use 'resp', because the buffer is posted to the HW
++	 * in mana_hwc_handle_resp() above.
+ 	 */
+ 	resp = NULL;
+-
+-	mana_hwc_post_rx_wqe(hwc_rxq, rx_req);
+ }
+ 
+ static void mana_hwc_tx_event_handler(void *ctx, u32 gdma_txq_id,
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 0696faf60013e..2e94d10348cce 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -1653,7 +1653,7 @@ static struct sock *gtp_encap_enable_socket(int fd, int type,
+ 	sock = sockfd_lookup(fd, &err);
+ 	if (!sock) {
+ 		pr_debug("gtp socket fd=%d not found\n", fd);
+-		return NULL;
++		return ERR_PTR(err);
+ 	}
+ 
+ 	sk = sock->sk;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index fa339791223b8..ba9e656037a20 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -724,22 +724,25 @@ int iwl_acpi_get_wgds_table(struct iwl_fw_runtime *fwrt)
+ 				entry = &wifi_pkg->package.elements[entry_idx];
+ 				entry_idx++;
+ 				if (entry->type != ACPI_TYPE_INTEGER ||
+-				    entry->integer.value > num_profiles) {
++				    entry->integer.value > num_profiles ||
++				    entry->integer.value <
++					rev_data[idx].min_profiles) {
+ 					ret = -EINVAL;
+ 					goto out_free;
+ 				}
+-				num_profiles = entry->integer.value;
+ 
+ 				/*
+-				 * this also validates >= min_profiles since we
+-				 * otherwise wouldn't have gotten the data when
+-				 * looking up in ACPI
++				 * Check that the received package count
++				 * matches the max # of profiles
+ 				 */
+ 				if (wifi_pkg->package.count !=
+ 				    hdr_size + profile_size * num_profiles) {
+ 					ret = -EINVAL;
+ 					goto out_free;
+ 				}
++
++				/* Number of valid profiles */
++				num_profiles = entry->integer.value;
+ 			}
+ 			goto read_table;
+ 		}
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index d343432474db0..1380ae5155f35 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -1190,10 +1190,12 @@ static void iwl_mvm_trig_link_selection(struct wiphy *wiphy,
+ 	struct iwl_mvm *mvm =
+ 		container_of(wk, struct iwl_mvm, trig_link_selection_wk);
+ 
++	mutex_lock(&mvm->mutex);
+ 	ieee80211_iterate_active_interfaces(mvm->hw,
+ 					    IEEE80211_IFACE_ITER_NORMAL,
+ 					    iwl_mvm_find_link_selection_vif,
+ 					    NULL);
++	mutex_unlock(&mvm->mutex);
+ }
+ 
+ static struct iwl_op_mode *
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index e975f5ff17b5d..7615c91a55c62 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -1659,6 +1659,17 @@ iwl_mvm_umac_scan_cfg_channels_v7(struct iwl_mvm *mvm,
+ 		cfg->v2.channel_num = channels[i]->hw_value;
+ 		if (cfg80211_channel_is_psc(channels[i]))
+ 			cfg->flags = 0;
++
++		if (band == NL80211_BAND_6GHZ) {
++			/* 6 GHz channels should only appear in a scan request
++			 * that has scan_6ghz set. The only exception is MLO
++			 * scan, which has to be passive.
++			 */
++			WARN_ON_ONCE(cfg->flags != 0);
++			cfg->flags =
++				cpu_to_le32(IWL_UHB_CHAN_CFG_FLAG_FORCE_PASSIVE);
++		}
++
+ 		cfg->v2.iter_count = 1;
+ 		cfg->v2.iter_interval = 0;
+ 		if (version < 17)
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 155eb0fab12a4..bf35c92f91d7e 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -4363,11 +4363,27 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
+ 	if (ISSUPP_ADHOC_ENABLED(adapter->fw_cap_info))
+ 		wiphy->interface_modes |= BIT(NL80211_IFTYPE_ADHOC);
+ 
+-	wiphy->bands[NL80211_BAND_2GHZ] = &mwifiex_band_2ghz;
+-	if (adapter->config_bands & BAND_A)
+-		wiphy->bands[NL80211_BAND_5GHZ] = &mwifiex_band_5ghz;
+-	else
++	wiphy->bands[NL80211_BAND_2GHZ] = devm_kmemdup(adapter->dev,
++						       &mwifiex_band_2ghz,
++						       sizeof(mwifiex_band_2ghz),
++						       GFP_KERNEL);
++	if (!wiphy->bands[NL80211_BAND_2GHZ]) {
++		ret = -ENOMEM;
++		goto err;
++	}
++
++	if (adapter->config_bands & BAND_A) {
++		wiphy->bands[NL80211_BAND_5GHZ] = devm_kmemdup(adapter->dev,
++							       &mwifiex_band_5ghz,
++							       sizeof(mwifiex_band_5ghz),
++							       GFP_KERNEL);
++		if (!wiphy->bands[NL80211_BAND_5GHZ]) {
++			ret = -ENOMEM;
++			goto err;
++		}
++	} else {
+ 		wiphy->bands[NL80211_BAND_5GHZ] = NULL;
++	}
+ 
+ 	if (adapter->drcs_enabled && ISSUPP_DRCS_ENABLED(adapter->fw_cap_info))
+ 		wiphy->iface_combinations = &mwifiex_iface_comb_ap_sta_drcs;
+@@ -4461,8 +4477,7 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
+ 	if (ret < 0) {
+ 		mwifiex_dbg(adapter, ERROR,
+ 			    "%s: wiphy_register failed: %d\n", __func__, ret);
+-		wiphy_free(wiphy);
+-		return ret;
++		goto err;
+ 	}
+ 
+ 	if (!adapter->regd) {
+@@ -4504,4 +4519,9 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
+ 
+ 	adapter->wiphy = wiphy;
+ 	return ret;
++
++err:
++	wiphy_free(wiphy);
++
++	return ret;
+ }
+diff --git a/drivers/net/wireless/silabs/wfx/sta.c b/drivers/net/wireless/silabs/wfx/sta.c
+index a904602f02ce2..5930ff49c43ea 100644
+--- a/drivers/net/wireless/silabs/wfx/sta.c
++++ b/drivers/net/wireless/silabs/wfx/sta.c
+@@ -352,8 +352,11 @@ static int wfx_set_mfp_ap(struct wfx_vif *wvif)
+ 
+ 	ptr = (u16 *)cfg80211_find_ie(WLAN_EID_RSN, skb->data + ieoffset,
+ 				      skb->len - ieoffset);
+-	if (unlikely(!ptr))
++	if (!ptr) {
++		/* No RSN IE is fine in open networks */
++		ret = 0;
+ 		goto free_skb;
++	}
+ 
+ 	ptr += pairwise_cipher_suite_count_offset;
+ 	if (WARN_ON(ptr > (u16 *)skb_tail_pointer(skb)))
+diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
+index b19c39dcfbd93..e2bc67300a915 100644
+--- a/drivers/nfc/pn533/pn533.c
++++ b/drivers/nfc/pn533/pn533.c
+@@ -1723,6 +1723,11 @@ static int pn533_start_poll(struct nfc_dev *nfc_dev,
+ 	}
+ 
+ 	pn533_poll_create_mod_list(dev, im_protocols, tm_protocols);
++	if (!dev->poll_mod_count) {
++		nfc_err(dev->dev,
++			"Poll mod list is empty\n");
++		return -EINVAL;
++	}
+ 
+ 	/* Do not always start polling from the same modulation */
+ 	get_random_bytes(&rand_mod, sizeof(rand_mod));
+diff --git a/drivers/of/platform.c b/drivers/of/platform.c
+index 389d4ea6bfc15..ef622d41eb5b2 100644
+--- a/drivers/of/platform.c
++++ b/drivers/of/platform.c
+@@ -592,7 +592,7 @@ static int __init of_platform_default_populate_init(void)
+ 			 * This can happen for example on DT systems that do EFI
+ 			 * booting and may provide a GOP handle to the EFI stub.
+ 			 */
+-			sysfb_disable();
++			sysfb_disable(NULL);
+ 			of_platform_device_create(node, NULL, NULL);
+ 			of_node_put(node);
+ 		}
+diff --git a/drivers/phy/freescale/phy-fsl-imx8mq-usb.c b/drivers/phy/freescale/phy-fsl-imx8mq-usb.c
+index 0b9a59d5b8f02..adc6394626ce8 100644
+--- a/drivers/phy/freescale/phy-fsl-imx8mq-usb.c
++++ b/drivers/phy/freescale/phy-fsl-imx8mq-usb.c
+@@ -176,7 +176,7 @@ static void imx8m_get_phy_tuning_data(struct imx8mq_usb_phy *imx_phy)
+ 		imx_phy->comp_dis_tune =
+ 			phy_comp_dis_tune_from_property(imx_phy->comp_dis_tune);
+ 
+-	if (device_property_read_u32(dev, "fsl,pcs-tx-deemph-3p5db-attenuation-db",
++	if (device_property_read_u32(dev, "fsl,phy-pcs-tx-deemph-3p5db-attenuation-db",
+ 				     &imx_phy->pcs_tx_deemph_3p5db))
+ 		imx_phy->pcs_tx_deemph_3p5db = PHY_TUNE_DEFAULT;
+ 	else
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+index 8fcdcb193d241..e99cbcc378908 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+@@ -1008,8 +1008,8 @@ static const struct qmp_phy_init_tbl x1e80100_qmp_gen4x2_pcie_serdes_tbl[] = {
+ static const struct qmp_phy_init_tbl x1e80100_qmp_gen4x2_pcie_ln_shrd_tbl[] = {
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_RXCLK_DIV2_CTRL, 0x01),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_DFE_DAC_ENABLE1, 0x88),
+-	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_TX_ADAPT_POST_THRESH1, 0x00),
+-	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_TX_ADAPT_POST_THRESH2, 0x1f),
++	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_TX_ADAPT_POST_THRESH1, 0x02),
++	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_TX_ADAPT_POST_THRESH2, 0x0d),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_RX_MODE_RATE_0_1_B0, 0xd4),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_RX_MODE_RATE_0_1_B1, 0x12),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_RX_MODE_RATE_0_1_B2, 0xdb),
+@@ -1026,6 +1026,7 @@ static const struct qmp_phy_init_tbl x1e80100_qmp_gen4x2_pcie_ln_shrd_tbl[] = {
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_RX_MARG_COARSE_THRESH4_RATE3, 0x1f),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_RX_MARG_COARSE_THRESH5_RATE3, 0x1f),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_RX_MARG_COARSE_THRESH6_RATE3, 0x1f),
++	QMP_PHY_INIT_CFG(QSERDES_V6_LN_SHRD_RX_SUMMER_CAL_SPD_MODE, 0x5b),
+ };
+ 
+ static const struct qmp_phy_init_tbl x1e80100_qmp_gen4x2_pcie_tx_tbl[] = {
+@@ -1049,12 +1050,15 @@ static const struct qmp_phy_init_tbl x1e80100_qmp_gen4x2_pcie_rx_tbl[] = {
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_DFE_1, 0x01),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_DFE_2, 0x01),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_DFE_3, 0x45),
+-	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_VGA_CAL_MAN_VAL, 0x0b),
++	QMP_PHY_INIT_CFG_LANE(QSERDES_V6_20_RX_VGA_CAL_MAN_VAL, 0x0a, 1),
++	QMP_PHY_INIT_CFG_LANE(QSERDES_V6_20_RX_VGA_CAL_MAN_VAL, 0x0b, 2),
++	QMP_PHY_INIT_CFG(QSERDES_V6_20_VGA_CAL_CNTRL1, 0x00),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_GM_CAL, 0x0d),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_EQU_ADAPTOR_CNTRL4, 0x0b),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_SIGDET_ENABLES, 0x1c),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_PHPRE_CTRL, 0x20),
+-	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_DFE_CTLE_POST_CAL_OFFSET, 0x38),
++	QMP_PHY_INIT_CFG_LANE(QSERDES_V6_20_RX_DFE_CTLE_POST_CAL_OFFSET, 0x3a, 1),
++	QMP_PHY_INIT_CFG_LANE(QSERDES_V6_20_RX_DFE_CTLE_POST_CAL_OFFSET, 0x38, 2),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_Q_PI_INTRINSIC_BIAS_RATE32, 0x39),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_MODE_RATE2_B0, 0x14),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_MODE_RATE2_B1, 0xb3),
+@@ -1070,6 +1074,7 @@ static const struct qmp_phy_init_tbl x1e80100_qmp_gen4x2_pcie_rx_tbl[] = {
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_MODE_RATE3_B4, 0x4b),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_MODE_RATE3_B5, 0x76),
+ 	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_MODE_RATE3_B6, 0xff),
++	QMP_PHY_INIT_CFG(QSERDES_V6_20_RX_TX_ADPT_CTRL, 0x10),
+ };
+ 
+ static const struct qmp_phy_init_tbl x1e80100_qmp_gen4x2_pcie_pcs_tbl[] = {
+@@ -1077,6 +1082,8 @@ static const struct qmp_phy_init_tbl x1e80100_qmp_gen4x2_pcie_pcs_tbl[] = {
+ 	QMP_PHY_INIT_CFG(QPHY_V6_20_PCS_RX_SIGDET_LVL, 0xcc),
+ 	QMP_PHY_INIT_CFG(QPHY_V6_20_PCS_EQ_CONFIG4, 0x00),
+ 	QMP_PHY_INIT_CFG(QPHY_V6_20_PCS_EQ_CONFIG5, 0x22),
++	QMP_PHY_INIT_CFG(QPHY_V6_20_PCS_TX_RX_CONFIG1, 0x04),
++	QMP_PHY_INIT_CFG(QPHY_V6_20_PCS_TX_RX_CONFIG2, 0x02),
+ };
+ 
+ static const struct qmp_phy_init_tbl x1e80100_qmp_gen4x2_pcie_pcs_misc_tbl[] = {
+@@ -1087,11 +1094,13 @@ static const struct qmp_phy_init_tbl x1e80100_qmp_gen4x2_pcie_pcs_misc_tbl[] = {
+ 	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_G4_PRE_GAIN, 0x2e),
+ 	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_RX_MARGINING_CONFIG1, 0x03),
+ 	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_RX_MARGINING_CONFIG3, 0x28),
++	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_G3_RXEQEVAL_TIME, 0x27),
++	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_G4_RXEQEVAL_TIME, 0x27),
+ 	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_TX_RX_CONFIG, 0xc0),
+ 	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_POWER_STATE_CONFIG2, 0x1d),
+-	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_RX_MARGINING_CONFIG5, 0x0f),
+-	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_G3_FOM_EQ_CONFIG5, 0xf2),
+-	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_G4_FOM_EQ_CONFIG5, 0xf2),
++	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_RX_MARGINING_CONFIG5, 0x18),
++	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_G3_FOM_EQ_CONFIG5, 0x7a),
++	QMP_PHY_INIT_CFG(QPHY_PCIE_V6_20_PCS_G4_FOM_EQ_CONFIG5, 0x8a),
+ };
+ 
+ static const struct qmp_phy_init_tbl sm8250_qmp_pcie_serdes_tbl[] = {
+diff --git a/drivers/phy/xilinx/phy-zynqmp.c b/drivers/phy/xilinx/phy-zynqmp.c
+index f2bff7f25f05a..d7d12cf3011a2 100644
+--- a/drivers/phy/xilinx/phy-zynqmp.c
++++ b/drivers/phy/xilinx/phy-zynqmp.c
+@@ -166,6 +166,24 @@
+ /* Timeout values */
+ #define TIMEOUT_US			1000
+ 
++/* Lane 0/1/2/3 offset */
++#define DIG_8(n)		((0x4000 * (n)) + 0x1074)
++#define ILL13(n)		((0x4000 * (n)) + 0x1994)
++#define DIG_10(n)		((0x4000 * (n)) + 0x107c)
++#define RST_DLY(n)		((0x4000 * (n)) + 0x19a4)
++#define BYP_15(n)		((0x4000 * (n)) + 0x1038)
++#define BYP_12(n)		((0x4000 * (n)) + 0x102c)
++#define MISC3(n)		((0x4000 * (n)) + 0x19ac)
++#define EQ11(n)			((0x4000 * (n)) + 0x1978)
++
++static u32 save_reg_address[] = {
++	/* Lane 0/1/2/3 Register */
++	DIG_8(0), ILL13(0), DIG_10(0), RST_DLY(0), BYP_15(0), BYP_12(0), MISC3(0), EQ11(0),
++	DIG_8(1), ILL13(1), DIG_10(1), RST_DLY(1), BYP_15(1), BYP_12(1), MISC3(1), EQ11(1),
++	DIG_8(2), ILL13(2), DIG_10(2), RST_DLY(2), BYP_15(2), BYP_12(2), MISC3(2), EQ11(2),
++	DIG_8(3), ILL13(3), DIG_10(3), RST_DLY(3), BYP_15(3), BYP_12(3), MISC3(3), EQ11(3),
++};
++
+ struct xpsgtr_dev;
+ 
+ /**
+@@ -214,6 +232,7 @@ struct xpsgtr_phy {
+  * @tx_term_fix: fix for GT issue
+  * @saved_icm_cfg0: stored value of ICM CFG0 register
+  * @saved_icm_cfg1: stored value of ICM CFG1 register
++ * @saved_regs: registers to be saved/restored during suspend/resume
+  */
+ struct xpsgtr_dev {
+ 	struct device *dev;
+@@ -226,6 +245,7 @@ struct xpsgtr_dev {
+ 	bool tx_term_fix;
+ 	unsigned int saved_icm_cfg0;
+ 	unsigned int saved_icm_cfg1;
++	u32 *saved_regs;
+ };
+ 
+ /*
+@@ -299,6 +319,32 @@ static inline void xpsgtr_clr_set_phy(struct xpsgtr_phy *gtr_phy,
+ 	writel((readl(addr) & ~clr) | set, addr);
+ }
+ 
++/**
++ * xpsgtr_save_lane_regs - Saves registers on suspend
++ * @gtr_dev: pointer to phy controller context structure
++ */
++static void xpsgtr_save_lane_regs(struct xpsgtr_dev *gtr_dev)
++{
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(save_reg_address); i++)
++		gtr_dev->saved_regs[i] = xpsgtr_read(gtr_dev,
++						     save_reg_address[i]);
++}
++
++/**
++ * xpsgtr_restore_lane_regs - Restores registers on resume
++ * @gtr_dev: pointer to phy controller context structure
++ */
++static void xpsgtr_restore_lane_regs(struct xpsgtr_dev *gtr_dev)
++{
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(save_reg_address); i++)
++		xpsgtr_write(gtr_dev, save_reg_address[i],
++			     gtr_dev->saved_regs[i]);
++}
++
+ /*
+  * Hardware Configuration
+  */
+@@ -839,6 +885,8 @@ static int xpsgtr_runtime_suspend(struct device *dev)
+ 	gtr_dev->saved_icm_cfg0 = xpsgtr_read(gtr_dev, ICM_CFG0);
+ 	gtr_dev->saved_icm_cfg1 = xpsgtr_read(gtr_dev, ICM_CFG1);
+ 
++	xpsgtr_save_lane_regs(gtr_dev);
++
+ 	return 0;
+ }
+ 
+@@ -849,6 +897,8 @@ static int xpsgtr_runtime_resume(struct device *dev)
+ 	unsigned int i;
+ 	bool skip_phy_init;
+ 
++	xpsgtr_restore_lane_regs(gtr_dev);
++
+ 	icm_cfg0 = xpsgtr_read(gtr_dev, ICM_CFG0);
+ 	icm_cfg1 = xpsgtr_read(gtr_dev, ICM_CFG1);
+ 
+@@ -994,6 +1044,12 @@ static int xpsgtr_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	gtr_dev->saved_regs = devm_kmalloc(gtr_dev->dev,
++					   sizeof(save_reg_address),
++					   GFP_KERNEL);
++	if (!gtr_dev->saved_regs)
++		return -ENOMEM;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+index b7921b59eb7b1..54301fbba524a 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+@@ -709,32 +709,35 @@ static int mtk_pinconf_bias_set_rsel(struct mtk_pinctrl *hw,
+ {
+ 	int err, rsel_val;
+ 
+-	if (!pullup && arg == MTK_DISABLE)
+-		return 0;
+-
+ 	if (hw->rsel_si_unit) {
+ 		/* find pin rsel_index from pin_rsel array*/
+ 		err = mtk_hw_pin_rsel_lookup(hw, desc, pullup, arg, &rsel_val);
+ 		if (err)
+-			goto out;
++			return err;
+ 	} else {
+-		if (arg < MTK_PULL_SET_RSEL_000 ||
+-		    arg > MTK_PULL_SET_RSEL_111) {
+-			err = -EINVAL;
+-			goto out;
+-		}
++		if (arg < MTK_PULL_SET_RSEL_000 || arg > MTK_PULL_SET_RSEL_111)
++			return -EINVAL;
+ 
+ 		rsel_val = arg - MTK_PULL_SET_RSEL_000;
+ 	}
+ 
+-	err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_RSEL, rsel_val);
+-	if (err)
+-		goto out;
++	return mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_RSEL, rsel_val);
++}
+ 
+-	err = mtk_pinconf_bias_set_pu_pd(hw, desc, pullup, MTK_ENABLE);
++static int mtk_pinconf_bias_set_pu_pd_rsel(struct mtk_pinctrl *hw,
++					   const struct mtk_pin_desc *desc,
++					   u32 pullup, u32 arg)
++{
++	u32 enable = arg == MTK_DISABLE ? MTK_DISABLE : MTK_ENABLE;
++	int err;
+ 
+-out:
+-	return err;
++	if (arg != MTK_DISABLE) {
++		err = mtk_pinconf_bias_set_rsel(hw, desc, pullup, arg);
++		if (err)
++			return err;
++	}
++
++	return mtk_pinconf_bias_set_pu_pd(hw, desc, pullup, enable);
+ }
+ 
+ int mtk_pinconf_bias_set_combo(struct mtk_pinctrl *hw,
+@@ -750,22 +753,22 @@ int mtk_pinconf_bias_set_combo(struct mtk_pinctrl *hw,
+ 		try_all_type = MTK_PULL_TYPE_MASK;
+ 
+ 	if (try_all_type & MTK_PULL_RSEL_TYPE) {
+-		err = mtk_pinconf_bias_set_rsel(hw, desc, pullup, arg);
++		err = mtk_pinconf_bias_set_pu_pd_rsel(hw, desc, pullup, arg);
+ 		if (!err)
+-			return err;
++			return 0;
+ 	}
+ 
+ 	if (try_all_type & MTK_PULL_PU_PD_TYPE) {
+ 		err = mtk_pinconf_bias_set_pu_pd(hw, desc, pullup, arg);
+ 		if (!err)
+-			return err;
++			return 0;
+ 	}
+ 
+ 	if (try_all_type & MTK_PULL_PULLSEL_TYPE) {
+ 		err = mtk_pinconf_bias_set_pullsel_pullen(hw, desc,
+ 							  pullup, arg);
+ 		if (!err)
+-			return err;
++			return 0;
+ 	}
+ 
+ 	if (try_all_type & MTK_PULL_PUPD_R1R0_TYPE)
+@@ -803,9 +806,9 @@ static int mtk_rsel_get_si_unit(struct mtk_pinctrl *hw,
+ 	return 0;
+ }
+ 
+-static int mtk_pinconf_bias_get_rsel(struct mtk_pinctrl *hw,
+-				     const struct mtk_pin_desc *desc,
+-				     u32 *pullup, u32 *enable)
++static int mtk_pinconf_bias_get_pu_pd_rsel(struct mtk_pinctrl *hw,
++					   const struct mtk_pin_desc *desc,
++					   u32 *pullup, u32 *enable)
+ {
+ 	int pu, pd, rsel, err;
+ 
+@@ -939,22 +942,22 @@ int mtk_pinconf_bias_get_combo(struct mtk_pinctrl *hw,
+ 		try_all_type = MTK_PULL_TYPE_MASK;
+ 
+ 	if (try_all_type & MTK_PULL_RSEL_TYPE) {
+-		err = mtk_pinconf_bias_get_rsel(hw, desc, pullup, enable);
++		err = mtk_pinconf_bias_get_pu_pd_rsel(hw, desc, pullup, enable);
+ 		if (!err)
+-			return err;
++			return 0;
+ 	}
+ 
+ 	if (try_all_type & MTK_PULL_PU_PD_TYPE) {
+ 		err = mtk_pinconf_bias_get_pu_pd(hw, desc, pullup, enable);
+ 		if (!err)
+-			return err;
++			return 0;
+ 	}
+ 
+ 	if (try_all_type & MTK_PULL_PULLSEL_TYPE) {
+ 		err = mtk_pinconf_bias_get_pullsel_pullen(hw, desc,
+ 							  pullup, enable);
+ 		if (!err)
+-			return err;
++			return 0;
+ 	}
+ 
+ 	if (try_all_type & MTK_PULL_PUPD_R1R0_TYPE)
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 6a74619786300..7b7b8601d01a1 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -3800,7 +3800,7 @@ static struct rockchip_pin_bank rk3328_pin_banks[] = {
+ 	PIN_BANK_IOMUX_FLAGS(0, 32, "gpio0", 0, 0, 0, 0),
+ 	PIN_BANK_IOMUX_FLAGS(1, 32, "gpio1", 0, 0, 0, 0),
+ 	PIN_BANK_IOMUX_FLAGS(2, 32, "gpio2", 0,
+-			     0,
++			     IOMUX_WIDTH_2BIT,
+ 			     IOMUX_WIDTH_3BIT,
+ 			     0),
+ 	PIN_BANK_IOMUX_FLAGS(3, 32, "gpio3",
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index 4c6bfabb6bd7d..4da3c3f422b69 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -345,6 +345,8 @@ static int pcs_get_function(struct pinctrl_dev *pctldev, unsigned pin,
+ 		return -ENOTSUPP;
+ 	fselector = setting->func;
+ 	function = pinmux_generic_get_function(pctldev, fselector);
++	if (!function)
++		return -EINVAL;
+ 	*func = function->data;
+ 	if (!(*func)) {
+ 		dev_err(pcs->dev, "%s could not find function%i\n",
+diff --git a/drivers/pinctrl/qcom/pinctrl-x1e80100.c b/drivers/pinctrl/qcom/pinctrl-x1e80100.c
+index e30e938403574..65ed933f05ce1 100644
+--- a/drivers/pinctrl/qcom/pinctrl-x1e80100.c
++++ b/drivers/pinctrl/qcom/pinctrl-x1e80100.c
+@@ -1805,26 +1805,29 @@ static const struct msm_pingroup x1e80100_groups[] = {
+ 	[235] = PINGROUP(235, aon_cci, qdss_gpio, _, _, _, _, _, _, _),
+ 	[236] = PINGROUP(236, aon_cci, qdss_gpio, _, _, _, _, _, _, _),
+ 	[237] = PINGROUP(237, _, _, _, _, _, _, _, _, _),
+-	[238] = UFS_RESET(ufs_reset, 0x1f9000),
+-	[239] = SDC_QDSD_PINGROUP(sdc2_clk, 0x1f2000, 14, 6),
+-	[240] = SDC_QDSD_PINGROUP(sdc2_cmd, 0x1f2000, 11, 3),
+-	[241] = SDC_QDSD_PINGROUP(sdc2_data, 0x1f2000, 9, 0),
++	[238] = UFS_RESET(ufs_reset, 0xf9000),
++	[239] = SDC_QDSD_PINGROUP(sdc2_clk, 0xf2000, 14, 6),
++	[240] = SDC_QDSD_PINGROUP(sdc2_cmd, 0xf2000, 11, 3),
++	[241] = SDC_QDSD_PINGROUP(sdc2_data, 0xf2000, 9, 0),
+ };
+ 
+ static const struct msm_gpio_wakeirq_map x1e80100_pdc_map[] = {
+ 	{ 0, 72 }, { 2, 70 }, { 3, 71 }, { 6, 123 }, { 7, 67 }, { 11, 85 },
+-	{ 15, 68 }, { 18, 122 }, { 19, 69 }, { 21, 158 }, { 23, 143 }, { 26, 129 },
+-	{ 27, 144 }, { 28, 77 }, { 29, 78 }, { 30, 92 }, { 32, 145 }, { 33, 115 },
+-	{ 34, 130 }, { 35, 146 }, { 36, 147 }, { 39, 80 }, { 43, 148 }, { 47, 149 },
+-	{ 51, 79 }, { 53, 89 }, { 59, 87 }, { 64, 90 }, { 65, 106 }, { 66, 142 },
+-	{ 67, 88 }, { 71, 91 }, { 75, 152 }, { 79, 153 }, { 80, 125 }, { 81, 128 },
+-	{ 84, 137 }, { 85, 155 }, { 87, 156 }, { 91, 157 }, { 92, 138 }, { 94, 140 },
+-	{ 95, 141 }, { 113, 84 }, { 121, 73 }, { 123, 74 }, { 129, 76 }, { 131, 82 },
+-	{ 134, 83 }, { 141, 93 }, { 144, 94 }, { 147, 96 }, { 148, 97 }, { 150, 102 },
+-	{ 151, 103 }, { 153, 104 }, { 156, 105 }, { 157, 107 }, { 163, 98 }, { 166, 112 },
+-	{ 172, 99 }, { 181, 101 }, { 184, 116 }, { 193, 40 }, { 193, 117 }, { 196, 108 },
+-	{ 203, 133 }, { 212, 120 }, { 213, 150 }, { 214, 121 }, { 215, 118 }, { 217, 109 },
+-	{ 220, 110 }, { 221, 111 }, { 222, 124 }, { 224, 131 }, { 225, 132 },
++	{ 13, 86 }, { 15, 68 }, { 18, 122 }, { 19, 69 }, { 21, 158 }, { 23, 143 },
++	{ 24, 126 }, { 26, 129 }, { 27, 144 }, { 28, 77 }, { 29, 78 }, { 30, 92 },
++	{ 31, 159 }, { 32, 145 }, { 33, 115 }, { 34, 130 }, { 35, 146 }, { 36, 147 },
++	{ 38, 113 }, { 39, 80 }, { 43, 148 }, { 47, 149 }, { 51, 79 }, { 53, 89 },
++	{ 55, 81 }, { 59, 87 }, { 64, 90 }, { 65, 106 }, { 66, 142 }, { 67, 88 },
++	{ 68, 151 }, { 71, 91 }, { 75, 152 }, { 79, 153 }, { 80, 125 }, { 81, 128 },
++	{ 83, 154 }, { 84, 137 }, { 85, 155 }, { 87, 156 }, { 91, 157 }, { 92, 138 },
++	{ 93, 139 }, { 94, 140 }, { 95, 141 }, { 113, 84 }, { 121, 73 }, { 123, 74 },
++	{ 125, 75 }, { 129, 76 }, { 131, 82 }, { 134, 83 }, { 141, 93 }, { 144, 94 },
++	{ 145, 95 }, { 147, 96 }, { 148, 97 }, { 150, 102 }, { 151, 103 }, { 153, 104 },
++	{ 154, 100 }, { 156, 105 }, { 157, 107 }, { 163, 98 }, { 166, 112 }, { 172, 99 },
++	{ 175, 114 }, { 181, 101 }, { 184, 116 }, { 193, 117 }, { 196, 108 }, { 203, 133 },
++	{ 208, 134 }, { 212, 120 }, { 213, 150 }, { 214, 121 }, { 215, 118 }, { 217, 109 },
++	{ 219, 119 }, { 220, 110 }, { 221, 111 }, { 222, 124 }, { 224, 131 }, { 225, 132 },
++	{ 228, 135 }, { 230, 136 }, { 232, 162 },
+ };
+ 
+ static const struct msm_pinctrl_soc_data x1e80100_pinctrl = {
+diff --git a/drivers/pinctrl/starfive/pinctrl-starfive-jh7110.c b/drivers/pinctrl/starfive/pinctrl-starfive-jh7110.c
+index 9609eb1ecc3d8..7637de7452b91 100644
+--- a/drivers/pinctrl/starfive/pinctrl-starfive-jh7110.c
++++ b/drivers/pinctrl/starfive/pinctrl-starfive-jh7110.c
+@@ -795,12 +795,12 @@ static int jh7110_irq_set_type(struct irq_data *d, unsigned int trigger)
+ 	case IRQ_TYPE_LEVEL_HIGH:
+ 		irq_type  = 0;    /* 0: level triggered */
+ 		edge_both = 0;    /* 0: ignored */
+-		polarity  = mask; /* 1: high level */
++		polarity  = 0;    /* 0: high level */
+ 		break;
+ 	case IRQ_TYPE_LEVEL_LOW:
+ 		irq_type  = 0;    /* 0: level triggered */
+ 		edge_both = 0;    /* 0: ignored */
+-		polarity  = 0;    /* 0: low level */
++		polarity  = mask; /* 1: low level */
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/power/supply/qcom_battmgr.c b/drivers/power/supply/qcom_battmgr.c
+index 44c6301f5f174..5b3681b9100c1 100644
+--- a/drivers/power/supply/qcom_battmgr.c
++++ b/drivers/power/supply/qcom_battmgr.c
+@@ -1384,12 +1384,16 @@ static int qcom_battmgr_probe(struct auxiliary_device *adev,
+ 					     "failed to register wireless charing power supply\n");
+ 	}
+ 
+-	battmgr->client = devm_pmic_glink_register_client(dev,
+-							  PMIC_GLINK_OWNER_BATTMGR,
+-							  qcom_battmgr_callback,
+-							  qcom_battmgr_pdr_notify,
+-							  battmgr);
+-	return PTR_ERR_OR_ZERO(battmgr->client);
++	battmgr->client = devm_pmic_glink_client_alloc(dev, PMIC_GLINK_OWNER_BATTMGR,
++						       qcom_battmgr_callback,
++						       qcom_battmgr_pdr_notify,
++						       battmgr);
++	if (IS_ERR(battmgr->client))
++		return PTR_ERR(battmgr->client);
++
++	pmic_glink_client_register(battmgr->client);
++
++	return 0;
+ }
+ 
+ static const struct auxiliary_device_id qcom_battmgr_id_table[] = {
+diff --git a/drivers/scsi/aacraid/comminit.c b/drivers/scsi/aacraid/comminit.c
+index bd99c5492b7d4..0f64b02443037 100644
+--- a/drivers/scsi/aacraid/comminit.c
++++ b/drivers/scsi/aacraid/comminit.c
+@@ -642,6 +642,7 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
+ 
+ 	if (aac_comm_init(dev)<0){
+ 		kfree(dev->queues);
++		dev->queues = NULL;
+ 		return NULL;
+ 	}
+ 	/*
+@@ -649,6 +650,7 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
+ 	 */
+ 	if (aac_fib_setup(dev) < 0) {
+ 		kfree(dev->queues);
++		dev->queues = NULL;
+ 		return NULL;
+ 	}
+ 		
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 6b64af7d49273..7d2a294ebc3d7 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1711,13 +1711,15 @@ static int sd_sync_cache(struct scsi_disk *sdkp)
+ 			    (sshdr.asc == 0x74 && sshdr.ascq == 0x71))	/* drive is password locked */
+ 				/* this is no error here */
+ 				return 0;
++
+ 			/*
+-			 * This drive doesn't support sync and there's not much
+-			 * we can do because this is called during shutdown
+-			 * or suspend so just return success so those operations
+-			 * can proceed.
++			 * If a format is in progress or if the drive does not
++			 * support sync, there is not much we can do because
++			 * this is called during shutdown or suspend so just
++			 * return success so those operations can proceed.
+ 			 */
+-			if (sshdr.sense_key == ILLEGAL_REQUEST)
++			if ((sshdr.asc == 0x04 && sshdr.ascq == 0x04) ||
++			    sshdr.sense_key == ILLEGAL_REQUEST)
+ 				return 0;
+ 		}
+ 
+diff --git a/drivers/soc/qcom/cmd-db.c b/drivers/soc/qcom/cmd-db.c
+index d845726620175..ae66c2623d250 100644
+--- a/drivers/soc/qcom/cmd-db.c
++++ b/drivers/soc/qcom/cmd-db.c
+@@ -349,7 +349,7 @@ static int cmd_db_dev_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	cmd_db_header = memremap(rmem->base, rmem->size, MEMREMAP_WB);
++	cmd_db_header = memremap(rmem->base, rmem->size, MEMREMAP_WC);
+ 	if (!cmd_db_header) {
+ 		ret = -ENOMEM;
+ 		cmd_db_header = NULL;
+diff --git a/drivers/soc/qcom/pmic_glink.c b/drivers/soc/qcom/pmic_glink.c
+index 9ebc0ba359477..9606222993fd7 100644
+--- a/drivers/soc/qcom/pmic_glink.c
++++ b/drivers/soc/qcom/pmic_glink.c
+@@ -66,15 +66,14 @@ static void _devm_pmic_glink_release_client(struct device *dev, void *res)
+ 	spin_unlock_irqrestore(&pg->client_lock, flags);
+ }
+ 
+-struct pmic_glink_client *devm_pmic_glink_register_client(struct device *dev,
+-							  unsigned int id,
+-							  void (*cb)(const void *, size_t, void *),
+-							  void (*pdr)(void *, int),
+-							  void *priv)
++struct pmic_glink_client *devm_pmic_glink_client_alloc(struct device *dev,
++						       unsigned int id,
++						       void (*cb)(const void *, size_t, void *),
++						       void (*pdr)(void *, int),
++						       void *priv)
+ {
+ 	struct pmic_glink_client *client;
+ 	struct pmic_glink *pg = dev_get_drvdata(dev->parent);
+-	unsigned long flags;
+ 
+ 	client = devres_alloc(_devm_pmic_glink_release_client, sizeof(*client), GFP_KERNEL);
+ 	if (!client)
+@@ -85,6 +84,18 @@ struct pmic_glink_client *devm_pmic_glink_register_client(struct device *dev,
+ 	client->cb = cb;
+ 	client->pdr_notify = pdr;
+ 	client->priv = priv;
++	INIT_LIST_HEAD(&client->node);
++
++	devres_add(dev, client);
++
++	return client;
++}
++EXPORT_SYMBOL_GPL(devm_pmic_glink_client_alloc);
++
++void pmic_glink_client_register(struct pmic_glink_client *client)
++{
++	struct pmic_glink *pg = client->pg;
++	unsigned long flags;
+ 
+ 	mutex_lock(&pg->state_lock);
+ 	spin_lock_irqsave(&pg->client_lock, flags);
+@@ -95,17 +106,22 @@ struct pmic_glink_client *devm_pmic_glink_register_client(struct device *dev,
+ 	spin_unlock_irqrestore(&pg->client_lock, flags);
+ 	mutex_unlock(&pg->state_lock);
+ 
+-	devres_add(dev, client);
+-
+-	return client;
+ }
+-EXPORT_SYMBOL_GPL(devm_pmic_glink_register_client);
++EXPORT_SYMBOL_GPL(pmic_glink_client_register);
+ 
+ int pmic_glink_send(struct pmic_glink_client *client, void *data, size_t len)
+ {
+ 	struct pmic_glink *pg = client->pg;
++	int ret;
+ 
+-	return rpmsg_send(pg->ept, data, len);
++	mutex_lock(&pg->state_lock);
++	if (!pg->ept)
++		ret = -ECONNRESET;
++	else
++		ret = rpmsg_send(pg->ept, data, len);
++	mutex_unlock(&pg->state_lock);
++
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(pmic_glink_send);
+ 
+@@ -175,7 +191,7 @@ static void pmic_glink_state_notify_clients(struct pmic_glink *pg)
+ 		if (pg->pdr_state == SERVREG_SERVICE_STATE_UP && pg->ept)
+ 			new_state = SERVREG_SERVICE_STATE_UP;
+ 	} else {
+-		if (pg->pdr_state == SERVREG_SERVICE_STATE_UP && pg->ept)
++		if (pg->pdr_state == SERVREG_SERVICE_STATE_DOWN || !pg->ept)
+ 			new_state = SERVREG_SERVICE_STATE_DOWN;
+ 	}
+ 
+diff --git a/drivers/soc/qcom/pmic_glink_altmode.c b/drivers/soc/qcom/pmic_glink_altmode.c
+index b3808fc24c695..d9c299330894a 100644
+--- a/drivers/soc/qcom/pmic_glink_altmode.c
++++ b/drivers/soc/qcom/pmic_glink_altmode.c
+@@ -520,12 +520,17 @@ static int pmic_glink_altmode_probe(struct auxiliary_device *adev,
+ 			return ret;
+ 	}
+ 
+-	altmode->client = devm_pmic_glink_register_client(dev,
+-							  altmode->owner_id,
+-							  pmic_glink_altmode_callback,
+-							  pmic_glink_altmode_pdr_notify,
+-							  altmode);
+-	return PTR_ERR_OR_ZERO(altmode->client);
++	altmode->client = devm_pmic_glink_client_alloc(dev,
++						       altmode->owner_id,
++						       pmic_glink_altmode_callback,
++						       pmic_glink_altmode_pdr_notify,
++						       altmode);
++	if (IS_ERR(altmode->client))
++		return PTR_ERR(altmode->client);
++
++	pmic_glink_client_register(altmode->client);
++
++	return 0;
+ }
+ 
+ static const struct auxiliary_device_id pmic_glink_altmode_id_table[] = {
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index 4e9e7d2a942d8..00191b1d22601 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -1286,18 +1286,18 @@ struct sdw_dpn_prop *sdw_get_slave_dpn_prop(struct sdw_slave *slave,
+ 					    unsigned int port_num)
+ {
+ 	struct sdw_dpn_prop *dpn_prop;
+-	u8 num_ports;
++	unsigned long mask;
+ 	int i;
+ 
+ 	if (direction == SDW_DATA_DIR_TX) {
+-		num_ports = hweight32(slave->prop.source_ports);
++		mask = slave->prop.source_ports;
+ 		dpn_prop = slave->prop.src_dpn_prop;
+ 	} else {
+-		num_ports = hweight32(slave->prop.sink_ports);
++		mask = slave->prop.sink_ports;
+ 		dpn_prop = slave->prop.sink_dpn_prop;
+ 	}
+ 
+-	for (i = 0; i < num_ports; i++) {
++	for_each_set_bit(i, &mask, 32) {
+ 		if (dpn_prop[i].num == port_num)
+ 			return &dpn_prop[i];
+ 	}
+diff --git a/drivers/usb/cdns3/cdnsp-gadget.h b/drivers/usb/cdns3/cdnsp-gadget.h
+index dbee6f0852777..84887dfea7635 100644
+--- a/drivers/usb/cdns3/cdnsp-gadget.h
++++ b/drivers/usb/cdns3/cdnsp-gadget.h
+@@ -811,6 +811,7 @@ struct cdnsp_stream_info {
+  *        generate Missed Service Error Event.
+  *        Set skip flag when receive a Missed Service Error Event and
+  *        process the missed tds on the endpoint ring.
++ * @wa1_nop_trb: hold pointer to NOP trb.
+  */
+ struct cdnsp_ep {
+ 	struct usb_ep endpoint;
+@@ -838,6 +839,8 @@ struct cdnsp_ep {
+ #define EP_UNCONFIGURED		BIT(7)
+ 
+ 	bool skip;
++	union cdnsp_trb	 *wa1_nop_trb;
++
+ };
+ 
+ /**
+diff --git a/drivers/usb/cdns3/cdnsp-ring.c b/drivers/usb/cdns3/cdnsp-ring.c
+index 02f297f5637d7..dbd83d321bca0 100644
+--- a/drivers/usb/cdns3/cdnsp-ring.c
++++ b/drivers/usb/cdns3/cdnsp-ring.c
+@@ -402,7 +402,7 @@ static u64 cdnsp_get_hw_deq(struct cdnsp_device *pdev,
+ 	struct cdnsp_stream_ctx *st_ctx;
+ 	struct cdnsp_ep *pep;
+ 
+-	pep = &pdev->eps[stream_id];
++	pep = &pdev->eps[ep_index];
+ 
+ 	if (pep->ep_state & EP_HAS_STREAMS) {
+ 		st_ctx = &pep->stream_info.stream_ctx_array[stream_id];
+@@ -1904,6 +1904,23 @@ int cdnsp_queue_bulk_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq)
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * workaround 1: STOP EP command on LINK TRB with TC bit set to 1
++	 * causes that internal cycle bit can have incorrect state after
++	 * command complete. In consequence empty transfer ring can be
++	 * incorrectly detected when EP is resumed.
++	 * NOP TRB before LINK TRB avoid such scenario. STOP EP command is
++	 * then on NOP TRB and internal cycle bit is not changed and have
++	 * correct value.
++	 */
++	if (pep->wa1_nop_trb) {
++		field = le32_to_cpu(pep->wa1_nop_trb->trans_event.flags);
++		field ^= TRB_CYCLE;
++
++		pep->wa1_nop_trb->trans_event.flags = cpu_to_le32(field);
++		pep->wa1_nop_trb = NULL;
++	}
++
+ 	/*
+ 	 * Don't give the first TRB to the hardware (by toggling the cycle bit)
+ 	 * until we've finished creating all the other TRBs. The ring's cycle
+@@ -1999,6 +2016,17 @@ int cdnsp_queue_bulk_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq)
+ 		send_addr = addr;
+ 	}
+ 
++	if (cdnsp_trb_is_link(ring->enqueue + 1)) {
++		field = TRB_TYPE(TRB_TR_NOOP) | TRB_IOC;
++		if (!ring->cycle_state)
++			field |= TRB_CYCLE;
++
++		pep->wa1_nop_trb = ring->enqueue;
++
++		cdnsp_queue_trb(pdev, ring, 0, 0x0, 0x0,
++				TRB_INTR_TARGET(0), field);
++	}
++
+ 	cdnsp_check_trb_math(preq, enqd_len);
+ 	ret = cdnsp_giveback_first_trb(pdev, pep, preq->request.stream_id,
+ 				       start_cycle, start_trb);
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 0e7439dba8fe8..0c1b69d944ca4 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1761,6 +1761,9 @@ static const struct usb_device_id acm_ids[] = {
+ 	{ USB_DEVICE(0x11ca, 0x0201), /* VeriFone Mx870 Gadget Serial */
+ 	.driver_info = SINGLE_RX_URB,
+ 	},
++	{ USB_DEVICE(0x1901, 0x0006), /* GE Healthcare Patient Monitor UI Controller */
++	.driver_info = DISABLE_ECHO, /* DISABLE ECHO in termios flag */
++	},
+ 	{ USB_DEVICE(0x1965, 0x0018), /* Uniden UBC125XLT */
+ 	.driver_info = NO_UNION_NORMAL, /* has no union descriptor */
+ 	},
+diff --git a/drivers/usb/core/sysfs.c b/drivers/usb/core/sysfs.c
+index d83231d6736ac..61b6d978892c7 100644
+--- a/drivers/usb/core/sysfs.c
++++ b/drivers/usb/core/sysfs.c
+@@ -670,6 +670,7 @@ static int add_power_attributes(struct device *dev)
+ 
+ static void remove_power_attributes(struct device *dev)
+ {
++	sysfs_unmerge_group(&dev->kobj, &usb3_hardware_lpm_attr_group);
+ 	sysfs_unmerge_group(&dev->kobj, &usb2_hardware_lpm_attr_group);
+ 	sysfs_unmerge_group(&dev->kobj, &power_attr_group);
+ }
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index cb82557678ddd..31df6fdc233ef 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -559,9 +559,17 @@ int dwc3_event_buffers_setup(struct dwc3 *dwc)
+ void dwc3_event_buffers_cleanup(struct dwc3 *dwc)
+ {
+ 	struct dwc3_event_buffer	*evt;
++	u32				reg;
+ 
+ 	if (!dwc->ev_buf)
+ 		return;
++	/*
++	 * Exynos platforms may not be able to access event buffer if the
++	 * controller failed to halt on dwc3_core_exit().
++	 */
++	reg = dwc3_readl(dwc->regs, DWC3_DSTS);
++	if (!(reg & DWC3_DSTS_DEVCTRLHLT))
++		return;
+ 
+ 	evt = dwc->ev_buf;
+ 
+diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c
+index d5c77db4daa92..2a11fc0ee84f1 100644
+--- a/drivers/usb/dwc3/dwc3-omap.c
++++ b/drivers/usb/dwc3/dwc3-omap.c
+@@ -522,11 +522,13 @@ static int dwc3_omap_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_err(dev, "failed to request IRQ #%d --> %d\n",
+ 			omap->irq, ret);
+-		goto err1;
++		goto err2;
+ 	}
+ 	dwc3_omap_enable_irqs(omap);
+ 	return 0;
+ 
++err2:
++	of_platform_depopulate(dev);
+ err1:
+ 	pm_runtime_put_sync(dev);
+ 	pm_runtime_disable(dev);
+diff --git a/drivers/usb/dwc3/dwc3-st.c b/drivers/usb/dwc3/dwc3-st.c
+index 211360eee95a0..c8c7cd0c17969 100644
+--- a/drivers/usb/dwc3/dwc3-st.c
++++ b/drivers/usb/dwc3/dwc3-st.c
+@@ -219,10 +219,8 @@ static int st_dwc3_probe(struct platform_device *pdev)
+ 	dwc3_data->regmap = regmap;
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "syscfg-reg");
+-	if (!res) {
+-		ret = -ENXIO;
+-		goto undo_platform_dev_alloc;
+-	}
++	if (!res)
++		return -ENXIO;
+ 
+ 	dwc3_data->syscfg_reg_off = res->start;
+ 
+@@ -233,8 +231,7 @@ static int st_dwc3_probe(struct platform_device *pdev)
+ 		devm_reset_control_get_exclusive(dev, "powerdown");
+ 	if (IS_ERR(dwc3_data->rstc_pwrdn)) {
+ 		dev_err(&pdev->dev, "could not get power controller\n");
+-		ret = PTR_ERR(dwc3_data->rstc_pwrdn);
+-		goto undo_platform_dev_alloc;
++		return PTR_ERR(dwc3_data->rstc_pwrdn);
+ 	}
+ 
+ 	/* Manage PowerDown */
+@@ -269,7 +266,7 @@ static int st_dwc3_probe(struct platform_device *pdev)
+ 	if (!child_pdev) {
+ 		dev_err(dev, "failed to find dwc3 core device\n");
+ 		ret = -ENODEV;
+-		goto err_node_put;
++		goto depopulate;
+ 	}
+ 
+ 	dwc3_data->dr_mode = usb_get_dr_mode(&child_pdev->dev);
+@@ -285,6 +282,7 @@ static int st_dwc3_probe(struct platform_device *pdev)
+ 	ret = st_dwc3_drd_init(dwc3_data);
+ 	if (ret) {
+ 		dev_err(dev, "drd initialisation failed\n");
++		of_platform_depopulate(dev);
+ 		goto undo_softreset;
+ 	}
+ 
+@@ -294,14 +292,14 @@ static int st_dwc3_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, dwc3_data);
+ 	return 0;
+ 
++depopulate:
++	of_platform_depopulate(dev);
+ err_node_put:
+ 	of_node_put(child);
+ undo_softreset:
+ 	reset_control_assert(dwc3_data->rstc_rst);
+ undo_powerdown:
+ 	reset_control_assert(dwc3_data->rstc_pwrdn);
+-undo_platform_dev_alloc:
+-	platform_device_put(pdev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/dwc3/dwc3-xilinx.c b/drivers/usb/dwc3/dwc3-xilinx.c
+index 6095f4dee6ceb..c9eafb6f3bdc2 100644
+--- a/drivers/usb/dwc3/dwc3-xilinx.c
++++ b/drivers/usb/dwc3/dwc3-xilinx.c
+@@ -298,9 +298,14 @@ static int dwc3_xlnx_probe(struct platform_device *pdev)
+ 		goto err_pm_set_suspended;
+ 
+ 	pm_suspend_ignore_children(dev, false);
+-	return pm_runtime_resume_and_get(dev);
++	ret = pm_runtime_resume_and_get(dev);
++	if (ret < 0)
++		goto err_pm_set_suspended;
++
++	return 0;
+ 
+ err_pm_set_suspended:
++	of_platform_depopulate(dev);
+ 	pm_runtime_set_suspended(dev);
+ 
+ err_clk_put:
+diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
+index d96ffbe520397..c9533a99e47c8 100644
+--- a/drivers/usb/dwc3/ep0.c
++++ b/drivers/usb/dwc3/ep0.c
+@@ -232,7 +232,8 @@ void dwc3_ep0_stall_and_restart(struct dwc3 *dwc)
+ 	/* stall is always issued on EP0 */
+ 	dep = dwc->eps[0];
+ 	__dwc3_gadget_ep_set_halt(dep, 1, false);
+-	dep->flags = DWC3_EP_ENABLED;
++	dep->flags &= DWC3_EP_RESOURCE_ALLOCATED;
++	dep->flags |= DWC3_EP_ENABLED;
+ 	dwc->delayed_status = false;
+ 
+ 	if (!list_empty(&dep->pending_list)) {
+diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
+index d41f5f31dadd5..a9edd60fbbf77 100644
+--- a/drivers/usb/gadget/function/uvc_video.c
++++ b/drivers/usb/gadget/function/uvc_video.c
+@@ -753,6 +753,7 @@ int uvcg_video_enable(struct uvc_video *video)
+ 	video->req_int_count = 0;
+ 
+ 	uvc_video_ep_queue_initial_requests(video);
++	queue_work(video->async_wq, &video->pump);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 311040f9b9352..176f38750ad58 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -619,6 +619,8 @@ static void option_instat_callback(struct urb *urb);
+ 
+ /* MeiG Smart Technology products */
+ #define MEIGSMART_VENDOR_ID			0x2dee
++/* MeiG Smart SRM825L based on Qualcomm 315 */
++#define MEIGSMART_PRODUCT_SRM825L		0x4d22
+ /* MeiG Smart SLM320 based on UNISOC UIS8910 */
+ #define MEIGSMART_PRODUCT_SLM320		0x4d41
+ 
+@@ -2366,6 +2368,9 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x30) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x40) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x60) },
+ 	{ } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/drivers/usb/typec/mux/fsa4480.c b/drivers/usb/typec/mux/fsa4480.c
+index cd235339834b0..f71dba8bf07c9 100644
+--- a/drivers/usb/typec/mux/fsa4480.c
++++ b/drivers/usb/typec/mux/fsa4480.c
+@@ -274,7 +274,7 @@ static int fsa4480_probe(struct i2c_client *client)
+ 		return dev_err_probe(dev, PTR_ERR(fsa->regmap), "failed to initialize regmap\n");
+ 
+ 	ret = regmap_read(fsa->regmap, FSA4480_DEVICE_ID, &val);
+-	if (ret || !val)
++	if (ret)
+ 		return dev_err_probe(dev, -ENODEV, "FSA4480 not found\n");
+ 
+ 	dev_dbg(dev, "Found FSA4480 v%lu.%lu (Vendor ID = %lu)\n",
+diff --git a/drivers/usb/typec/ucsi/ucsi_glink.c b/drivers/usb/typec/ucsi/ucsi_glink.c
+index 2fa973afe4e68..2fc4e06937d87 100644
+--- a/drivers/usb/typec/ucsi/ucsi_glink.c
++++ b/drivers/usb/typec/ucsi/ucsi_glink.c
+@@ -72,6 +72,9 @@ struct pmic_glink_ucsi {
+ 
+ 	struct work_struct notify_work;
+ 	struct work_struct register_work;
++	spinlock_t state_lock;
++	bool ucsi_registered;
++	bool pd_running;
+ 
+ 	u8 read_buf[UCSI_BUF_SIZE];
+ };
+@@ -271,8 +274,20 @@ static void pmic_glink_ucsi_notify(struct work_struct *work)
+ static void pmic_glink_ucsi_register(struct work_struct *work)
+ {
+ 	struct pmic_glink_ucsi *ucsi = container_of(work, struct pmic_glink_ucsi, register_work);
++	unsigned long flags;
++	bool pd_running;
+ 
+-	ucsi_register(ucsi->ucsi);
++	spin_lock_irqsave(&ucsi->state_lock, flags);
++	pd_running = ucsi->pd_running;
++	spin_unlock_irqrestore(&ucsi->state_lock, flags);
++
++	if (!ucsi->ucsi_registered && pd_running) {
++		ucsi_register(ucsi->ucsi);
++		ucsi->ucsi_registered = true;
++	} else if (ucsi->ucsi_registered && !pd_running) {
++		ucsi_unregister(ucsi->ucsi);
++		ucsi->ucsi_registered = false;
++	}
+ }
+ 
+ static void pmic_glink_ucsi_callback(const void *data, size_t len, void *priv)
+@@ -296,11 +311,12 @@ static void pmic_glink_ucsi_callback(const void *data, size_t len, void *priv)
+ static void pmic_glink_ucsi_pdr_notify(void *priv, int state)
+ {
+ 	struct pmic_glink_ucsi *ucsi = priv;
++	unsigned long flags;
+ 
+-	if (state == SERVREG_SERVICE_STATE_UP)
+-		schedule_work(&ucsi->register_work);
+-	else if (state == SERVREG_SERVICE_STATE_DOWN)
+-		ucsi_unregister(ucsi->ucsi);
++	spin_lock_irqsave(&ucsi->state_lock, flags);
++	ucsi->pd_running = (state == SERVREG_SERVICE_STATE_UP);
++	spin_unlock_irqrestore(&ucsi->state_lock, flags);
++	schedule_work(&ucsi->register_work);
+ }
+ 
+ static void pmic_glink_ucsi_destroy(void *data)
+@@ -348,6 +364,7 @@ static int pmic_glink_ucsi_probe(struct auxiliary_device *adev,
+ 	init_completion(&ucsi->read_ack);
+ 	init_completion(&ucsi->write_ack);
+ 	init_completion(&ucsi->sync_ack);
++	spin_lock_init(&ucsi->state_lock);
+ 	mutex_init(&ucsi->lock);
+ 
+ 	ucsi->ucsi = ucsi_create(dev, &pmic_glink_ucsi_ops);
+@@ -395,12 +412,16 @@ static int pmic_glink_ucsi_probe(struct auxiliary_device *adev,
+ 		ucsi->port_orientation[port] = desc;
+ 	}
+ 
+-	ucsi->client = devm_pmic_glink_register_client(dev,
+-						       PMIC_GLINK_OWNER_USBC,
+-						       pmic_glink_ucsi_callback,
+-						       pmic_glink_ucsi_pdr_notify,
+-						       ucsi);
+-	return PTR_ERR_OR_ZERO(ucsi->client);
++	ucsi->client = devm_pmic_glink_client_alloc(dev, PMIC_GLINK_OWNER_USBC,
++						    pmic_glink_ucsi_callback,
++						    pmic_glink_ucsi_pdr_notify,
++						    ucsi);
++	if (IS_ERR(ucsi->client))
++		return PTR_ERR(ucsi->client);
++
++	pmic_glink_client_register(ucsi->client);
++
++	return 0;
+ }
+ 
+ static void pmic_glink_ucsi_remove(struct auxiliary_device *adev)
+diff --git a/drivers/video/aperture.c b/drivers/video/aperture.c
+index 561be8feca96c..2b5a1e666e9b2 100644
+--- a/drivers/video/aperture.c
++++ b/drivers/video/aperture.c
+@@ -293,7 +293,7 @@ int aperture_remove_conflicting_devices(resource_size_t base, resource_size_t si
+ 	 * ask for this, so let's assume that a real driver for the display
+ 	 * was already probed and prevent sysfb to register devices later.
+ 	 */
+-	sysfb_disable();
++	sysfb_disable(NULL);
+ 
+ 	aperture_detach_devices(base, size);
+ 
+@@ -346,15 +346,10 @@ EXPORT_SYMBOL(__aperture_remove_legacy_vga_devices);
+  */
+ int aperture_remove_conflicting_pci_devices(struct pci_dev *pdev, const char *name)
+ {
+-	bool primary = false;
+ 	resource_size_t base, size;
+ 	int bar, ret = 0;
+ 
+-	if (pdev == vga_default_device())
+-		primary = true;
+-
+-	if (primary)
+-		sysfb_disable();
++	sysfb_disable(&pdev->dev);
+ 
+ 	for (bar = 0; bar < PCI_STD_NUM_BARS; ++bar) {
+ 		if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM))
+@@ -370,7 +365,7 @@ int aperture_remove_conflicting_pci_devices(struct pci_dev *pdev, const char *na
+ 	 * that consumes the VGA framebuffer I/O range. Remove this
+ 	 * device as well.
+ 	 */
+-	if (primary)
++	if (pdev == vga_default_device())
+ 		ret = __aperture_remove_legacy_vga_devices(pdev);
+ 
+ 	return ret;
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 3acf5e0500728..a95e77670b494 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -695,13 +695,18 @@ static void afs_setattr_edit_file(struct afs_operation *op)
+ {
+ 	struct afs_vnode_param *vp = &op->file[0];
+ 	struct afs_vnode *vnode = vp->vnode;
++	struct inode *inode = &vnode->netfs.inode;
+ 
+ 	if (op->setattr.attr->ia_valid & ATTR_SIZE) {
+ 		loff_t size = op->setattr.attr->ia_size;
+-		loff_t i_size = op->setattr.old_i_size;
++		loff_t old = op->setattr.old_i_size;
++
++		/* Note: inode->i_size was updated by afs_apply_status() inside
++		 * the I/O and callback locks.
++		 */
+ 
+-		if (size != i_size) {
+-			truncate_setsize(&vnode->netfs.inode, size);
++		if (size != old) {
++			truncate_pagecache(inode, size);
+ 			netfs_resize_file(&vnode->netfs, size, true);
+ 			fscache_resize_cookie(afs_vnode_cache(vnode), size);
+ 		}
+diff --git a/fs/attr.c b/fs/attr.c
+index 960a310581ebb..0dbf43b6555c8 100644
+--- a/fs/attr.c
++++ b/fs/attr.c
+@@ -489,9 +489,17 @@ int notify_change(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	error = security_inode_setattr(idmap, dentry, attr);
+ 	if (error)
+ 		return error;
+-	error = try_break_deleg(inode, delegated_inode);
+-	if (error)
+-		return error;
++
++	/*
++	 * If ATTR_DELEG is set, then these attributes are being set on
++	 * behalf of the holder of a write delegation. We want to avoid
++	 * breaking the delegation in this case.
++	 */
++	if (!(ia_valid & ATTR_DELEG)) {
++		error = try_break_deleg(inode, delegated_inode);
++		if (error)
++			return error;
++	}
+ 
+ 	if (inode->i_op->setattr)
+ 		error = inode->i_op->setattr(idmap, dentry, attr);
+diff --git a/fs/backing-file.c b/fs/backing-file.c
+index afb557446c27c..8860dac58c37e 100644
+--- a/fs/backing-file.c
++++ b/fs/backing-file.c
+@@ -303,13 +303,16 @@ ssize_t backing_file_splice_write(struct pipe_inode_info *pipe,
+ 	if (WARN_ON_ONCE(!(out->f_mode & FMODE_BACKING)))
+ 		return -EIO;
+ 
++	if (!out->f_op->splice_write)
++		return -EINVAL;
++
+ 	ret = file_remove_privs(ctx->user_file);
+ 	if (ret)
+ 		return ret;
+ 
+ 	old_cred = override_creds(ctx->cred);
+ 	file_start_write(out);
+-	ret = iter_file_splice_write(pipe, out, ppos, len, flags);
++	ret = out->f_op->splice_write(pipe, out, ppos, len, flags);
+ 	file_end_write(out);
+ 	revert_creds(old_cred);
+ 
+diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
+index b799701454a96..7d35f0e1bc764 100644
+--- a/fs/binfmt_elf_fdpic.c
++++ b/fs/binfmt_elf_fdpic.c
+@@ -592,6 +592,9 @@ static int create_elf_fdpic_tables(struct linux_binprm *bprm,
+ 
+ 	if (bprm->have_execfd)
+ 		nitems++;
++#ifdef ELF_HWCAP2
++	nitems++;
++#endif
+ 
+ 	csp = sp;
+ 	sp -= nitems * 2 * sizeof(unsigned long);
+diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
+index e3a57196b0ee0..bb4ae84fa64c6 100644
+--- a/fs/btrfs/bio.c
++++ b/fs/btrfs/bio.c
+@@ -668,7 +668,6 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+ {
+ 	struct btrfs_inode *inode = bbio->inode;
+ 	struct btrfs_fs_info *fs_info = bbio->fs_info;
+-	struct btrfs_bio *orig_bbio = bbio;
+ 	struct bio *bio = &bbio->bio;
+ 	u64 logical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
+ 	u64 length = bio->bi_iter.bi_size;
+@@ -706,7 +705,7 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+ 		bbio->saved_iter = bio->bi_iter;
+ 		ret = btrfs_lookup_bio_sums(bbio);
+ 		if (ret)
+-			goto fail_put_bio;
++			goto fail;
+ 	}
+ 
+ 	if (btrfs_op(bio) == BTRFS_MAP_WRITE) {
+@@ -740,13 +739,13 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+ 
+ 			ret = btrfs_bio_csum(bbio);
+ 			if (ret)
+-				goto fail_put_bio;
++				goto fail;
+ 		} else if (use_append ||
+ 			   (btrfs_is_zoned(fs_info) && inode &&
+ 			    inode->flags & BTRFS_INODE_NODATASUM)) {
+ 			ret = btrfs_alloc_dummy_sum(bbio);
+ 			if (ret)
+-				goto fail_put_bio;
++				goto fail;
+ 		}
+ 	}
+ 
+@@ -754,12 +753,23 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+ done:
+ 	return map_length == length;
+ 
+-fail_put_bio:
+-	if (map_length < length)
+-		btrfs_cleanup_bio(bbio);
+ fail:
+ 	btrfs_bio_counter_dec(fs_info);
+-	btrfs_bio_end_io(orig_bbio, ret);
++	/*
++	 * We have split the original bbio, now we have to end both the current
++	 * @bbio and remaining one, as the remaining one will never be submitted.
++	 */
++	if (map_length < length) {
++		struct btrfs_bio *remaining = bbio->private;
++
++		ASSERT(bbio->bio.bi_pool == &btrfs_clone_bioset);
++		ASSERT(remaining);
++
++		remaining->bio.bi_status = ret;
++		btrfs_orig_bbio_end_io(remaining);
++	}
++	bbio->bio.bi_status = ret;
++	btrfs_orig_bbio_end_io(bbio);
+ 	/* Do not submit another chunk */
+ 	return true;
+ }
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 29d6ca3b874ec..3faf2181d1ee8 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -4100,6 +4100,8 @@ static int try_flush_qgroup(struct btrfs_root *root)
+ 		return 0;
+ 	}
+ 
++	btrfs_run_delayed_iputs(root->fs_info);
++	btrfs_wait_on_delayed_iputs(root->fs_info);
+ 	ret = btrfs_start_delalloc_snapshot(root, true);
+ 	if (ret < 0)
+ 		goto out;
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 249ddfbb1b03e..6b64cf6065598 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -697,6 +697,7 @@ void ceph_evict_inode(struct inode *inode)
+ 
+ 	percpu_counter_dec(&mdsc->metric.total_inodes);
+ 
++	netfs_wait_for_outstanding_io(inode);
+ 	truncate_inode_pages_final(&inode->i_data);
+ 	if (inode->i_state & I_PINNING_NETFS_WB)
+ 		ceph_fscache_unuse_cookie(inode, true);
+diff --git a/fs/erofs/zutil.c b/fs/erofs/zutil.c
+index 9b53883e5caf8..37afe20248409 100644
+--- a/fs/erofs/zutil.c
++++ b/fs/erofs/zutil.c
+@@ -111,7 +111,8 @@ int z_erofs_gbuf_growsize(unsigned int nrpages)
+ out:
+ 	if (i < z_erofs_gbuf_count && tmp_pages) {
+ 		for (j = 0; j < nrpages; ++j)
+-			if (tmp_pages[j] && tmp_pages[j] != gbuf->pages[j])
++			if (tmp_pages[j] && (j >= gbuf->nrpages ||
++					     tmp_pages[j] != gbuf->pages[j]))
+ 				__free_page(tmp_pages[j]);
+ 		kfree(tmp_pages);
+ 	}
+diff --git a/fs/netfs/io.c b/fs/netfs/io.c
+index f3abc5dfdbc0c..c96431d3da6d8 100644
+--- a/fs/netfs/io.c
++++ b/fs/netfs/io.c
+@@ -313,6 +313,7 @@ static bool netfs_rreq_perform_resubmissions(struct netfs_io_request *rreq)
+ 			netfs_reset_subreq_iter(rreq, subreq);
+ 			netfs_read_from_server(rreq, subreq);
+ 		} else if (test_bit(NETFS_SREQ_SHORT_IO, &subreq->flags)) {
++			netfs_reset_subreq_iter(rreq, subreq);
+ 			netfs_rreq_short_read(rreq, subreq);
+ 		}
+ 	}
+diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
+index 172808e83ca81..a46bf569303fc 100644
+--- a/fs/netfs/misc.c
++++ b/fs/netfs/misc.c
+@@ -97,10 +97,22 @@ EXPORT_SYMBOL(netfs_clear_inode_writeback);
+ void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
+ {
+ 	struct netfs_folio *finfo;
++	struct netfs_inode *ctx = netfs_inode(folio_inode(folio));
+ 	size_t flen = folio_size(folio);
+ 
+ 	kenter("{%lx},%zx,%zx", folio->index, offset, length);
+ 
++	if (offset == 0 && length == flen) {
++		unsigned long long i_size = i_size_read(&ctx->inode);
++		unsigned long long fpos = folio_pos(folio), end;
++
++		end = umin(fpos + flen, i_size);
++		if (fpos < i_size && end > ctx->zero_point)
++			ctx->zero_point = end;
++	}
++
++	folio_wait_private_2(folio); /* [DEPRECATED] */
++
+ 	if (!folio_test_private(folio))
+ 		return;
+ 
+@@ -113,18 +125,34 @@ void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
+ 		/* We have a partially uptodate page from a streaming write. */
+ 		unsigned int fstart = finfo->dirty_offset;
+ 		unsigned int fend = fstart + finfo->dirty_len;
+-		unsigned int end = offset + length;
++		unsigned int iend = offset + length;
+ 
+ 		if (offset >= fend)
+ 			return;
+-		if (end <= fstart)
++		if (iend <= fstart)
++			return;
++
++		/* The invalidation region overlaps the data.  If the region
++		 * covers the start of the data, we either move along the start
++		 * or just erase the data entirely.
++		 */
++		if (offset <= fstart) {
++			if (iend >= fend)
++				goto erase_completely;
++			/* Move the start of the data. */
++			finfo->dirty_len = fend - iend;
++			finfo->dirty_offset = offset;
++			return;
++		}
++
++		/* Reduce the length of the data if the invalidation region
++		 * covers the tail part.
++		 */
++		if (iend >= fend) {
++			finfo->dirty_len = offset - fstart;
+ 			return;
+-		if (offset <= fstart && end >= fend)
+-			goto erase_completely;
+-		if (offset <= fstart && end > fstart)
+-			goto reduce_len;
+-		if (offset > fstart && end >= fend)
+-			goto move_start;
++		}
++
+ 		/* A partial write was split.  The caller has already zeroed
+ 		 * it, so just absorb the hole.
+ 		 */
+@@ -137,12 +165,6 @@ void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
+ 	folio_clear_uptodate(folio);
+ 	kfree(finfo);
+ 	return;
+-reduce_len:
+-	finfo->dirty_len = offset + length - finfo->dirty_offset;
+-	return;
+-move_start:
+-	finfo->dirty_len -= offset - finfo->dirty_offset;
+-	finfo->dirty_offset = offset;
+ }
+ EXPORT_SYMBOL(netfs_invalidate_folio);
+ 
+@@ -159,12 +181,20 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp)
+ 	struct netfs_inode *ctx = netfs_inode(folio_inode(folio));
+ 	unsigned long long end;
+ 
+-	end = folio_pos(folio) + folio_size(folio);
++	if (folio_test_dirty(folio))
++		return false;
++
++	end = umin(folio_pos(folio) + folio_size(folio), i_size_read(&ctx->inode));
+ 	if (end > ctx->zero_point)
+ 		ctx->zero_point = end;
+ 
+ 	if (folio_test_private(folio))
+ 		return false;
++	if (unlikely(folio_test_private_2(folio))) { /* [DEPRECATED] */
++		if (current_is_kswapd() || !(gfp & __GFP_FS))
++			return false;
++		folio_wait_private_2(folio);
++	}
+ 	fscache_note_page_release(netfs_i_cookie(ctx));
+ 	return true;
+ }
+diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
+index 488147439fe0f..a2b697b4aa401 100644
+--- a/fs/netfs/write_collect.c
++++ b/fs/netfs/write_collect.c
+@@ -33,6 +33,7 @@
+ int netfs_folio_written_back(struct folio *folio)
+ {
+ 	enum netfs_folio_trace why = netfs_folio_trace_clear;
++	struct netfs_inode *ictx = netfs_inode(folio->mapping->host);
+ 	struct netfs_folio *finfo;
+ 	struct netfs_group *group = NULL;
+ 	int gcount = 0;
+@@ -41,6 +42,12 @@ int netfs_folio_written_back(struct folio *folio)
+ 		/* Streaming writes cannot be redirtied whilst under writeback,
+ 		 * so discard the streaming record.
+ 		 */
++		unsigned long long fend;
++
++		fend = folio_pos(folio) + finfo->dirty_offset + finfo->dirty_len;
++		if (fend > ictx->zero_point)
++			ictx->zero_point = fend;
++
+ 		folio_detach_private(folio);
+ 		group = finfo->netfs_group;
+ 		gcount++;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index a20c2c9d7d457..a366fb1c1b9b4 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -2789,15 +2789,18 @@ static int nfs4_show_open(struct seq_file *s, struct nfs4_stid *st)
+ 		deny & NFS4_SHARE_ACCESS_READ ? "r" : "-",
+ 		deny & NFS4_SHARE_ACCESS_WRITE ? "w" : "-");
+ 
+-	spin_lock(&nf->fi_lock);
+-	file = find_any_file_locked(nf);
+-	if (file) {
+-		nfs4_show_superblock(s, file);
+-		seq_puts(s, ", ");
+-		nfs4_show_fname(s, file);
+-		seq_puts(s, ", ");
+-	}
+-	spin_unlock(&nf->fi_lock);
++	if (nf) {
++		spin_lock(&nf->fi_lock);
++		file = find_any_file_locked(nf);
++		if (file) {
++			nfs4_show_superblock(s, file);
++			seq_puts(s, ", ");
++			nfs4_show_fname(s, file);
++			seq_puts(s, ", ");
++		}
++		spin_unlock(&nf->fi_lock);
++	} else
++		seq_puts(s, "closed, ");
+ 	nfs4_show_owner(s, oo);
+ 	if (st->sc_status & SC_STATUS_ADMIN_REVOKED)
+ 		seq_puts(s, ", admin-revoked");
+@@ -3075,9 +3078,9 @@ nfsd4_cb_getattr_release(struct nfsd4_callback *cb)
+ 	struct nfs4_delegation *dp =
+ 			container_of(ncf, struct nfs4_delegation, dl_cb_fattr);
+ 
+-	nfs4_put_stid(&dp->dl_stid);
+ 	clear_bit(CB_GETATTR_BUSY, &ncf->ncf_cb_flags);
+ 	wake_up_bit(&ncf->ncf_cb_flags, CB_GETATTR_BUSY);
++	nfs4_put_stid(&dp->dl_stid);
+ }
+ 
+ static const struct nfsd4_callback_ops nfsd4_cb_recall_any_ops = {
+@@ -8812,7 +8815,7 @@ nfsd4_get_writestateid(struct nfsd4_compound_state *cstate,
+ /**
+  * nfsd4_deleg_getattr_conflict - Recall if GETATTR causes conflict
+  * @rqstp: RPC transaction context
+- * @inode: file to be checked for a conflict
++ * @dentry: dentry of inode to be checked for a conflict
+  * @modified: return true if file was modified
+  * @size: new size of file if modified is true
+  *
+@@ -8827,16 +8830,16 @@ nfsd4_get_writestateid(struct nfsd4_compound_state *cstate,
+  * code is returned.
+  */
+ __be32
+-nfsd4_deleg_getattr_conflict(struct svc_rqst *rqstp, struct inode *inode,
++nfsd4_deleg_getattr_conflict(struct svc_rqst *rqstp, struct dentry *dentry,
+ 				bool *modified, u64 *size)
+ {
+ 	__be32 status;
+ 	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 	struct file_lock_context *ctx;
+ 	struct file_lease *fl;
+-	struct nfs4_delegation *dp;
+ 	struct iattr attrs;
+ 	struct nfs4_cb_fattr *ncf;
++	struct inode *inode = d_inode(dentry);
+ 
+ 	*modified = false;
+ 	ctx = locks_inode_context(inode);
+@@ -8856,17 +8859,26 @@ nfsd4_deleg_getattr_conflict(struct svc_rqst *rqstp, struct inode *inode,
+ 			 */
+ 			if (type == F_RDLCK)
+ 				break;
+-			goto break_lease;
++
++			nfsd_stats_wdeleg_getattr_inc(nn);
++			spin_unlock(&ctx->flc_lock);
++
++			status = nfserrno(nfsd_open_break_lease(inode, NFSD_MAY_READ));
++			if (status != nfserr_jukebox ||
++			    !nfsd_wait_for_delegreturn(rqstp, inode))
++				return status;
++			return 0;
+ 		}
+ 		if (type == F_WRLCK) {
+-			dp = fl->c.flc_owner;
++			struct nfs4_delegation *dp = fl->c.flc_owner;
++
+ 			if (dp->dl_recall.cb_clp == *(rqstp->rq_lease_breaker)) {
+ 				spin_unlock(&ctx->flc_lock);
+ 				return 0;
+ 			}
+-break_lease:
+ 			nfsd_stats_wdeleg_getattr_inc(nn);
+ 			dp = fl->c.flc_owner;
++			refcount_inc(&dp->dl_stid.sc_count);
+ 			ncf = &dp->dl_cb_fattr;
+ 			nfs4_cb_getattr(&dp->dl_cb_fattr);
+ 			spin_unlock(&ctx->flc_lock);
+@@ -8876,27 +8888,37 @@ nfsd4_deleg_getattr_conflict(struct svc_rqst *rqstp, struct inode *inode,
+ 				/* Recall delegation only if client didn't respond */
+ 				status = nfserrno(nfsd_open_break_lease(inode, NFSD_MAY_READ));
+ 				if (status != nfserr_jukebox ||
+-						!nfsd_wait_for_delegreturn(rqstp, inode))
++						!nfsd_wait_for_delegreturn(rqstp, inode)) {
++					nfs4_put_stid(&dp->dl_stid);
+ 					return status;
++				}
+ 			}
+ 			if (!ncf->ncf_file_modified &&
+ 					(ncf->ncf_initial_cinfo != ncf->ncf_cb_change ||
+ 					ncf->ncf_cur_fsize != ncf->ncf_cb_fsize))
+ 				ncf->ncf_file_modified = true;
+ 			if (ncf->ncf_file_modified) {
++				int err;
++
+ 				/*
+ 				 * Per section 10.4.3 of RFC 8881, the server would
+ 				 * not update the file's metadata with the client's
+ 				 * modified size
+ 				 */
+ 				attrs.ia_mtime = attrs.ia_ctime = current_time(inode);
+-				attrs.ia_valid = ATTR_MTIME | ATTR_CTIME;
+-				setattr_copy(&nop_mnt_idmap, inode, &attrs);
+-				mark_inode_dirty(inode);
++				attrs.ia_valid = ATTR_MTIME | ATTR_CTIME | ATTR_DELEG;
++				inode_lock(inode);
++				err = notify_change(&nop_mnt_idmap, dentry, &attrs, NULL);
++				inode_unlock(inode);
++				if (err) {
++					nfs4_put_stid(&dp->dl_stid);
++					return nfserrno(err);
++				}
+ 				ncf->ncf_cur_fsize = ncf->ncf_cb_fsize;
+ 				*size = ncf->ncf_cur_fsize;
+ 				*modified = true;
+ 			}
++			nfs4_put_stid(&dp->dl_stid);
+ 			return 0;
+ 		}
+ 		break;
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index c7bfd2180e3f2..0869062280ccc 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3545,6 +3545,9 @@ nfsd4_encode_fattr4(struct svc_rqst *rqstp, struct xdr_stream *xdr,
+ 	args.dentry = dentry;
+ 	args.ignore_crossmnt = (ignore_crossmnt != 0);
+ 	args.acl = NULL;
++#ifdef CONFIG_NFSD_V4_SECURITY_LABEL
++	args.context = NULL;
++#endif
+ 
+ 	/*
+ 	 * Make a local copy of the attribute bitmap that can be modified.
+@@ -3562,7 +3565,7 @@ nfsd4_encode_fattr4(struct svc_rqst *rqstp, struct xdr_stream *xdr,
+ 	}
+ 	args.size = 0;
+ 	if (attrmask[0] & (FATTR4_WORD0_CHANGE | FATTR4_WORD0_SIZE)) {
+-		status = nfsd4_deleg_getattr_conflict(rqstp, d_inode(dentry),
++		status = nfsd4_deleg_getattr_conflict(rqstp, dentry,
+ 					&file_modified, &size);
+ 		if (status)
+ 			goto out;
+@@ -3617,7 +3620,6 @@ nfsd4_encode_fattr4(struct svc_rqst *rqstp, struct xdr_stream *xdr,
+ 	args.contextsupport = false;
+ 
+ #ifdef CONFIG_NFSD_V4_SECURITY_LABEL
+-	args.context = NULL;
+ 	if ((attrmask[2] & FATTR4_WORD2_SECURITY_LABEL) ||
+ 	     attrmask[0] & FATTR4_WORD0_SUPPORTED_ATTRS) {
+ 		if (exp->ex_flags & NFSEXP_SECURITY_LABEL)
+diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
+index ffc217099d191..ec4559ecd193b 100644
+--- a/fs/nfsd/state.h
++++ b/fs/nfsd/state.h
+@@ -781,5 +781,5 @@ static inline bool try_to_expire_client(struct nfs4_client *clp)
+ }
+ 
+ extern __be32 nfsd4_deleg_getattr_conflict(struct svc_rqst *rqstp,
+-		struct inode *inode, bool *file_modified, u64 *size);
++		struct dentry *dentry, bool *file_modified, u64 *size);
+ #endif   /* NFSD4_STATE_H */
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 0a271b9fbc622..1e4da268de3b4 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -254,7 +254,6 @@ struct cifs_open_info_data {
+ struct smb_rqst {
+ 	struct kvec	*rq_iov;	/* array of kvecs */
+ 	unsigned int	rq_nvec;	/* number of kvecs in array */
+-	size_t		rq_iter_size;	/* Amount of data in ->rq_iter */
+ 	struct iov_iter	rq_iter;	/* Data iterator */
+ 	struct xarray	rq_buffer;	/* Page buffer for encryption */
+ };
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index 595c4b673707e..6dce70f172082 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -1713,7 +1713,6 @@ cifs_async_writev(struct cifs_io_subrequest *wdata)
+ 	rqst.rq_iov = iov;
+ 	rqst.rq_nvec = 2;
+ 	rqst.rq_iter = wdata->subreq.io_iter;
+-	rqst.rq_iter_size = iov_iter_count(&wdata->subreq.io_iter);
+ 
+ 	cifs_dbg(FYI, "async write at %llu %zu bytes\n",
+ 		 wdata->subreq.start, wdata->subreq.len);
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 7fe59235f0901..f44f5f2494006 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -3287,6 +3287,7 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+ 	struct inode *inode = file_inode(file);
+ 	struct cifsFileInfo *cfile = file->private_data;
+ 	struct file_zero_data_information fsctl_buf;
++	unsigned long long end = offset + len, i_size, remote_i_size;
+ 	long rc;
+ 	unsigned int xid;
+ 	__u8 set_sparse = 1;
+@@ -3318,6 +3319,27 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+ 			(char *)&fsctl_buf,
+ 			sizeof(struct file_zero_data_information),
+ 			CIFSMaxBufSize, NULL, NULL);
++
++	if (rc)
++		goto unlock;
++
++	/* If there's dirty data in the buffer that would extend the EOF if it
++	 * were written, then we need to move the EOF marker over to the lower
++	 * of the high end of the hole and the proposed EOF.  The problem is
++	 * that if we locally hole-punch the tail of the dirty data, the proposed
++	 * EOF update will end up in the wrong place.
++	 */
++	i_size = i_size_read(inode);
++	remote_i_size = netfs_inode(inode)->remote_i_size;
++	if (end > remote_i_size && i_size > remote_i_size) {
++		unsigned long long extend_to = umin(end, i_size);
++		rc = SMB2_set_eof(xid, tcon, cfile->fid.persistent_fid,
++				  cfile->fid.volatile_fid, cfile->pid, extend_to);
++		if (rc >= 0)
++			netfs_inode(inode)->remote_i_size = extend_to;
++	}
++
++unlock:
+ 	filemap_invalidate_unlock(inode->i_mapping);
+ out:
+ 	inode_unlock(inode);
+@@ -4428,7 +4450,6 @@ smb3_init_transform_rq(struct TCP_Server_Info *server, int num_rqst,
+ 			}
+ 			iov_iter_xarray(&new->rq_iter, ITER_SOURCE,
+ 					buffer, 0, size);
+-			new->rq_iter_size = size;
+ 		}
+ 	}
+ 
+@@ -4474,7 +4495,6 @@ decrypt_raw_data(struct TCP_Server_Info *server, char *buf,
+ 	rqst.rq_nvec = 2;
+ 	if (iter) {
+ 		rqst.rq_iter = *iter;
+-		rqst.rq_iter_size = iov_iter_count(iter);
+ 		iter_size = iov_iter_count(iter);
+ 	}
+ 
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 4cd5c33be2a1a..d262e70100c9c 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -4435,7 +4435,7 @@ smb2_new_read_req(void **buf, unsigned int *total_len,
+ 	 * If we want to do a RDMA write, fill in and append
+ 	 * smbd_buffer_descriptor_v1 to the end of read request
+ 	 */
+-	if (smb3_use_rdma_offload(io_parms)) {
++	if (rdata && smb3_use_rdma_offload(io_parms)) {
+ 		struct smbd_buffer_descriptor_v1 *v1;
+ 		bool need_invalidate = server->dialect == SMB30_PROT_ID;
+ 
+@@ -4517,7 +4517,6 @@ smb2_readv_callback(struct mid_q_entry *mid)
+ 
+ 	if (rdata->got_bytes) {
+ 		rqst.rq_iter	  = rdata->subreq.io_iter;
+-		rqst.rq_iter_size = iov_iter_count(&rdata->subreq.io_iter);
+ 	}
+ 
+ 	WARN_ONCE(rdata->server != mid->server,
+@@ -4969,7 +4968,6 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
+ 	rqst.rq_iov = iov;
+ 	rqst.rq_nvec = 1;
+ 	rqst.rq_iter = wdata->subreq.io_iter;
+-	rqst.rq_iter_size = iov_iter_count(&rqst.rq_iter);
+ 	if (test_bit(NETFS_SREQ_RETRYING, &wdata->subreq.flags))
+ 		smb2_set_replay(server, &rqst);
+ #ifdef CONFIG_CIFS_SMB_DIRECT
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 36b9e87439221..5f07c1c377df6 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -208,6 +208,7 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
+ #define ATTR_OPEN	(1 << 15) /* Truncating from open(O_TRUNC) */
+ #define ATTR_TIMES_SET	(1 << 16)
+ #define ATTR_TOUCH	(1 << 17)
++#define ATTR_DELEG	(1 << 18) /* Delegated attrs. Don't break write delegations */
+ 
+ /*
+  * Whiteout is represented by a char device.  The following constants define the
+diff --git a/include/linux/soc/qcom/pmic_glink.h b/include/linux/soc/qcom/pmic_glink.h
+index fd124aa18c81a..7cddf10277528 100644
+--- a/include/linux/soc/qcom/pmic_glink.h
++++ b/include/linux/soc/qcom/pmic_glink.h
+@@ -23,10 +23,11 @@ struct pmic_glink_hdr {
+ 
+ int pmic_glink_send(struct pmic_glink_client *client, void *data, size_t len);
+ 
+-struct pmic_glink_client *devm_pmic_glink_register_client(struct device *dev,
+-							  unsigned int id,
+-							  void (*cb)(const void *, size_t, void *),
+-							  void (*pdr)(void *, int),
+-							  void *priv);
++struct pmic_glink_client *devm_pmic_glink_client_alloc(struct device *dev,
++						       unsigned int id,
++						       void (*cb)(const void *, size_t, void *),
++						       void (*pdr)(void *, int),
++						       void *priv);
++void pmic_glink_client_register(struct pmic_glink_client *client);
+ 
+ #endif
+diff --git a/include/linux/sysfb.h b/include/linux/sysfb.h
+index c9cb657dad08a..bef5f06a91de6 100644
+--- a/include/linux/sysfb.h
++++ b/include/linux/sysfb.h
+@@ -58,11 +58,11 @@ struct efifb_dmi_info {
+ 
+ #ifdef CONFIG_SYSFB
+ 
+-void sysfb_disable(void);
++void sysfb_disable(struct device *dev);
+ 
+ #else /* CONFIG_SYSFB */
+ 
+-static inline void sysfb_disable(void)
++static inline void sysfb_disable(struct device *dev)
+ {
+ }
+ 
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index b61fb1aa3a56b..8bb5f016969f1 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -260,7 +260,7 @@ struct bonding {
+ #ifdef CONFIG_XFRM_OFFLOAD
+ 	struct list_head ipsec_list;
+ 	/* protecting ipsec_list */
+-	spinlock_t ipsec_lock;
++	struct mutex ipsec_lock;
+ #endif /* CONFIG_XFRM_OFFLOAD */
+ 	struct bpf_prog *xdp_prog;
+ };
+diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
+index 9b09acac538ee..522f1da8b747a 100644
+--- a/include/net/busy_poll.h
++++ b/include/net/busy_poll.h
+@@ -68,7 +68,7 @@ static inline bool sk_can_busy_loop(struct sock *sk)
+ static inline unsigned long busy_loop_current_time(void)
+ {
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+-	return (unsigned long)(local_clock() >> 10);
++	return (unsigned long)(ktime_get_ns() >> 10);
+ #else
+ 	return 0;
+ #endif
+diff --git a/include/net/netfilter/nf_tables_ipv4.h b/include/net/netfilter/nf_tables_ipv4.h
+index 60a7d0ce30804..fcf967286e37c 100644
+--- a/include/net/netfilter/nf_tables_ipv4.h
++++ b/include/net/netfilter/nf_tables_ipv4.h
+@@ -19,7 +19,7 @@ static inline void nft_set_pktinfo_ipv4(struct nft_pktinfo *pkt)
+ static inline int __nft_set_pktinfo_ipv4_validate(struct nft_pktinfo *pkt)
+ {
+ 	struct iphdr *iph, _iph;
+-	u32 len, thoff;
++	u32 len, thoff, skb_len;
+ 
+ 	iph = skb_header_pointer(pkt->skb, skb_network_offset(pkt->skb),
+ 				 sizeof(*iph), &_iph);
+@@ -30,8 +30,10 @@ static inline int __nft_set_pktinfo_ipv4_validate(struct nft_pktinfo *pkt)
+ 		return -1;
+ 
+ 	len = iph_totlen(pkt->skb, iph);
+-	thoff = skb_network_offset(pkt->skb) + (iph->ihl * 4);
+-	if (pkt->skb->len < len)
++	thoff = iph->ihl * 4;
++	skb_len = pkt->skb->len - skb_network_offset(pkt->skb);
++
++	if (skb_len < len)
+ 		return -1;
+ 	else if (len < thoff)
+ 		return -1;
+@@ -40,7 +42,7 @@ static inline int __nft_set_pktinfo_ipv4_validate(struct nft_pktinfo *pkt)
+ 
+ 	pkt->flags = NFT_PKTINFO_L4PROTO;
+ 	pkt->tprot = iph->protocol;
+-	pkt->thoff = thoff;
++	pkt->thoff = skb_network_offset(pkt->skb) + thoff;
+ 	pkt->fragoff = ntohs(iph->frag_off) & IP_OFFSET;
+ 
+ 	return 0;
+diff --git a/include/net/netfilter/nf_tables_ipv6.h b/include/net/netfilter/nf_tables_ipv6.h
+index 467d59b9e5334..a0633eeaec977 100644
+--- a/include/net/netfilter/nf_tables_ipv6.h
++++ b/include/net/netfilter/nf_tables_ipv6.h
+@@ -31,8 +31,8 @@ static inline int __nft_set_pktinfo_ipv6_validate(struct nft_pktinfo *pkt)
+ 	struct ipv6hdr *ip6h, _ip6h;
+ 	unsigned int thoff = 0;
+ 	unsigned short frag_off;
++	u32 pkt_len, skb_len;
+ 	int protohdr;
+-	u32 pkt_len;
+ 
+ 	ip6h = skb_header_pointer(pkt->skb, skb_network_offset(pkt->skb),
+ 				  sizeof(*ip6h), &_ip6h);
+@@ -43,7 +43,8 @@ static inline int __nft_set_pktinfo_ipv6_validate(struct nft_pktinfo *pkt)
+ 		return -1;
+ 
+ 	pkt_len = ntohs(ip6h->payload_len);
+-	if (pkt_len + sizeof(*ip6h) > pkt->skb->len)
++	skb_len = pkt->skb->len - skb_network_offset(pkt->skb);
++	if (pkt_len + sizeof(*ip6h) > skb_len)
+ 		return -1;
+ 
+ 	protohdr = ipv6_find_hdr(pkt->skb, &thoff, -1, &frag_off, &flags);
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index 1af2bd56af44a..bdfa30b38321b 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -129,7 +129,7 @@ static int io_provided_buffers_select(struct io_kiocb *req, size_t *len,
+ 
+ 	iov[0].iov_base = buf;
+ 	iov[0].iov_len = *len;
+-	return 0;
++	return 1;
+ }
+ 
+ static struct io_uring_buf *io_ring_head_to_buf(struct io_uring_buf_ring *br,
+diff --git a/mm/truncate.c b/mm/truncate.c
+index e99085bf3d34d..a2af7f088407f 100644
+--- a/mm/truncate.c
++++ b/mm/truncate.c
+@@ -174,7 +174,7 @@ static void truncate_cleanup_folio(struct folio *folio)
+ 	if (folio_mapped(folio))
+ 		unmap_mapping_folio(folio);
+ 
+-	if (folio_has_private(folio))
++	if (folio_needs_release(folio))
+ 		folio_invalidate(folio, 0, folio_size(folio));
+ 
+ 	/*
+@@ -235,7 +235,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
+ 	 */
+ 	folio_zero_range(folio, offset, length);
+ 
+-	if (folio_has_private(folio))
++	if (folio_needs_release(folio))
+ 		folio_invalidate(folio, offset, length);
+ 	if (!folio_test_large(folio))
+ 		return true;
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index b488d0742c966..9493966cf389f 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -2431,10 +2431,16 @@ static int hci_suspend_notifier(struct notifier_block *nb, unsigned long action,
+ 	/* To avoid a potential race with hci_unregister_dev. */
+ 	hci_dev_hold(hdev);
+ 
+-	if (action == PM_SUSPEND_PREPARE)
++	switch (action) {
++	case PM_HIBERNATION_PREPARE:
++	case PM_SUSPEND_PREPARE:
+ 		ret = hci_suspend_dev(hdev);
+-	else if (action == PM_POST_SUSPEND)
++		break;
++	case PM_POST_HIBERNATION:
++	case PM_POST_SUSPEND:
+ 		ret = hci_resume_dev(hdev);
++		break;
++	}
+ 
+ 	if (ret)
+ 		bt_dev_err(hdev, "Suspend notifier action (%lu) failed: %d",
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index 4c27a360c2948..dc91921da4ea0 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -235,7 +235,7 @@ static ssize_t speed_show(struct device *dev,
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+-	if (netif_running(netdev) && netif_device_present(netdev)) {
++	if (netif_running(netdev)) {
+ 		struct ethtool_link_ksettings cmd;
+ 
+ 		if (!__ethtool_get_link_ksettings(netdev, &cmd))
+diff --git a/net/core/pktgen.c b/net/core/pktgen.c
+index ea55a758a475a..197a50ef8e2e1 100644
+--- a/net/core/pktgen.c
++++ b/net/core/pktgen.c
+@@ -3654,7 +3654,7 @@ static int pktgen_thread_worker(void *arg)
+ 	struct pktgen_dev *pkt_dev = NULL;
+ 	int cpu = t->cpu;
+ 
+-	WARN_ON(smp_processor_id() != cpu);
++	WARN_ON_ONCE(smp_processor_id() != cpu);
+ 
+ 	init_waitqueue_head(&t->queue);
+ 	complete(&t->start_done);
+@@ -3989,6 +3989,7 @@ static int __net_init pg_net_init(struct net *net)
+ 		goto remove;
+ 	}
+ 
++	cpus_read_lock();
+ 	for_each_online_cpu(cpu) {
+ 		int err;
+ 
+@@ -3997,6 +3998,7 @@ static int __net_init pg_net_init(struct net *net)
+ 			pr_warn("Cannot create thread for cpu %d (%d)\n",
+ 				   cpu, err);
+ 	}
++	cpus_read_unlock();
+ 
+ 	if (list_empty(&pn->pktgen_threads)) {
+ 		pr_err("Initialization failed for all threads\n");
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index fcc3dbef8b503..f99fd564d0ee5 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -441,6 +441,9 @@ int __ethtool_get_link_ksettings(struct net_device *dev,
+ 	if (!dev->ethtool_ops->get_link_ksettings)
+ 		return -EOPNOTSUPP;
+ 
++	if (!netif_device_present(dev))
++		return -ENODEV;
++
+ 	memset(link_ksettings, 0, sizeof(*link_ksettings));
+ 	return dev->ethtool_ops->get_link_ksettings(dev, link_ksettings);
+ }
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index ec6911034138f..2edbd5b181e29 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -4573,6 +4573,13 @@ int tcp_abort(struct sock *sk, int err)
+ 		/* Don't race with userspace socket closes such as tcp_close. */
+ 		lock_sock(sk);
+ 
++	/* Avoid closing the same socket twice. */
++	if (sk->sk_state == TCP_CLOSE) {
++		if (!has_current_bpf_ctx())
++			release_sock(sk);
++		return -ENOENT;
++	}
++
+ 	if (sk->sk_state == TCP_LISTEN) {
+ 		tcp_set_state(sk, TCP_CLOSE);
+ 		inet_csk_listen_stop(sk);
+@@ -4582,16 +4589,13 @@ int tcp_abort(struct sock *sk, int err)
+ 	local_bh_disable();
+ 	bh_lock_sock(sk);
+ 
+-	if (!sock_flag(sk, SOCK_DEAD)) {
+-		if (tcp_need_reset(sk->sk_state))
+-			tcp_send_active_reset(sk, GFP_ATOMIC,
+-					      SK_RST_REASON_NOT_SPECIFIED);
+-		tcp_done_with_error(sk, err);
+-	}
++	if (tcp_need_reset(sk->sk_state))
++		tcp_send_active_reset(sk, GFP_ATOMIC,
++				      SK_RST_REASON_NOT_SPECIFIED);
++	tcp_done_with_error(sk, err);
+ 
+ 	bh_unlock_sock(sk);
+ 	local_bh_enable();
+-	tcp_write_queue_purge(sk);
+ 	if (!has_current_bpf_ctx())
+ 		release_sock(sk);
+ 	return 0;
+diff --git a/net/mptcp/fastopen.c b/net/mptcp/fastopen.c
+index ad28da655f8bc..a29ff901df758 100644
+--- a/net/mptcp/fastopen.c
++++ b/net/mptcp/fastopen.c
+@@ -68,12 +68,12 @@ void __mptcp_fastopen_gen_msk_ackseq(struct mptcp_sock *msk, struct mptcp_subflo
+ 	skb = skb_peek_tail(&sk->sk_receive_queue);
+ 	if (skb) {
+ 		WARN_ON_ONCE(MPTCP_SKB_CB(skb)->end_seq);
+-		pr_debug("msk %p moving seq %llx -> %llx end_seq %llx -> %llx", sk,
++		pr_debug("msk %p moving seq %llx -> %llx end_seq %llx -> %llx\n", sk,
+ 			 MPTCP_SKB_CB(skb)->map_seq, MPTCP_SKB_CB(skb)->map_seq + msk->ack_seq,
+ 			 MPTCP_SKB_CB(skb)->end_seq, MPTCP_SKB_CB(skb)->end_seq + msk->ack_seq);
+ 		MPTCP_SKB_CB(skb)->map_seq += msk->ack_seq;
+ 		MPTCP_SKB_CB(skb)->end_seq += msk->ack_seq;
+ 	}
+ 
+-	pr_debug("msk=%p ack_seq=%llx", msk, msk->ack_seq);
++	pr_debug("msk=%p ack_seq=%llx\n", msk, msk->ack_seq);
+ }
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index ac2f1a54cc43a..370c3836b7712 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -117,7 +117,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 			mp_opt->suboptions |= OPTION_MPTCP_CSUMREQD;
+ 			ptr += 2;
+ 		}
+-		pr_debug("MP_CAPABLE version=%x, flags=%x, optlen=%d sndr=%llu, rcvr=%llu len=%d csum=%u",
++		pr_debug("MP_CAPABLE version=%x, flags=%x, optlen=%d sndr=%llu, rcvr=%llu len=%d csum=%u\n",
+ 			 version, flags, opsize, mp_opt->sndr_key,
+ 			 mp_opt->rcvr_key, mp_opt->data_len, mp_opt->csum);
+ 		break;
+@@ -131,7 +131,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 			ptr += 4;
+ 			mp_opt->nonce = get_unaligned_be32(ptr);
+ 			ptr += 4;
+-			pr_debug("MP_JOIN bkup=%u, id=%u, token=%u, nonce=%u",
++			pr_debug("MP_JOIN bkup=%u, id=%u, token=%u, nonce=%u\n",
+ 				 mp_opt->backup, mp_opt->join_id,
+ 				 mp_opt->token, mp_opt->nonce);
+ 		} else if (opsize == TCPOLEN_MPTCP_MPJ_SYNACK) {
+@@ -142,19 +142,19 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 			ptr += 8;
+ 			mp_opt->nonce = get_unaligned_be32(ptr);
+ 			ptr += 4;
+-			pr_debug("MP_JOIN bkup=%u, id=%u, thmac=%llu, nonce=%u",
++			pr_debug("MP_JOIN bkup=%u, id=%u, thmac=%llu, nonce=%u\n",
+ 				 mp_opt->backup, mp_opt->join_id,
+ 				 mp_opt->thmac, mp_opt->nonce);
+ 		} else if (opsize == TCPOLEN_MPTCP_MPJ_ACK) {
+ 			mp_opt->suboptions |= OPTION_MPTCP_MPJ_ACK;
+ 			ptr += 2;
+ 			memcpy(mp_opt->hmac, ptr, MPTCPOPT_HMAC_LEN);
+-			pr_debug("MP_JOIN hmac");
++			pr_debug("MP_JOIN hmac\n");
+ 		}
+ 		break;
+ 
+ 	case MPTCPOPT_DSS:
+-		pr_debug("DSS");
++		pr_debug("DSS\n");
+ 		ptr++;
+ 
+ 		/* we must clear 'mpc_map' be able to detect MP_CAPABLE
+@@ -169,7 +169,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 		mp_opt->ack64 = (flags & MPTCP_DSS_ACK64) != 0;
+ 		mp_opt->use_ack = (flags & MPTCP_DSS_HAS_ACK);
+ 
+-		pr_debug("data_fin=%d dsn64=%d use_map=%d ack64=%d use_ack=%d",
++		pr_debug("data_fin=%d dsn64=%d use_map=%d ack64=%d use_ack=%d\n",
+ 			 mp_opt->data_fin, mp_opt->dsn64,
+ 			 mp_opt->use_map, mp_opt->ack64,
+ 			 mp_opt->use_ack);
+@@ -207,7 +207,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 				ptr += 4;
+ 			}
+ 
+-			pr_debug("data_ack=%llu", mp_opt->data_ack);
++			pr_debug("data_ack=%llu\n", mp_opt->data_ack);
+ 		}
+ 
+ 		if (mp_opt->use_map) {
+@@ -231,7 +231,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 				ptr += 2;
+ 			}
+ 
+-			pr_debug("data_seq=%llu subflow_seq=%u data_len=%u csum=%d:%u",
++			pr_debug("data_seq=%llu subflow_seq=%u data_len=%u csum=%d:%u\n",
+ 				 mp_opt->data_seq, mp_opt->subflow_seq,
+ 				 mp_opt->data_len, !!(mp_opt->suboptions & OPTION_MPTCP_CSUMREQD),
+ 				 mp_opt->csum);
+@@ -293,7 +293,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 			mp_opt->ahmac = get_unaligned_be64(ptr);
+ 			ptr += 8;
+ 		}
+-		pr_debug("ADD_ADDR%s: id=%d, ahmac=%llu, echo=%d, port=%d",
++		pr_debug("ADD_ADDR%s: id=%d, ahmac=%llu, echo=%d, port=%d\n",
+ 			 (mp_opt->addr.family == AF_INET6) ? "6" : "",
+ 			 mp_opt->addr.id, mp_opt->ahmac, mp_opt->echo, ntohs(mp_opt->addr.port));
+ 		break;
+@@ -309,7 +309,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 		mp_opt->rm_list.nr = opsize - TCPOLEN_MPTCP_RM_ADDR_BASE;
+ 		for (i = 0; i < mp_opt->rm_list.nr; i++)
+ 			mp_opt->rm_list.ids[i] = *ptr++;
+-		pr_debug("RM_ADDR: rm_list_nr=%d", mp_opt->rm_list.nr);
++		pr_debug("RM_ADDR: rm_list_nr=%d\n", mp_opt->rm_list.nr);
+ 		break;
+ 
+ 	case MPTCPOPT_MP_PRIO:
+@@ -318,7 +318,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 
+ 		mp_opt->suboptions |= OPTION_MPTCP_PRIO;
+ 		mp_opt->backup = *ptr++ & MPTCP_PRIO_BKUP;
+-		pr_debug("MP_PRIO: prio=%d", mp_opt->backup);
++		pr_debug("MP_PRIO: prio=%d\n", mp_opt->backup);
+ 		break;
+ 
+ 	case MPTCPOPT_MP_FASTCLOSE:
+@@ -329,7 +329,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 		mp_opt->rcvr_key = get_unaligned_be64(ptr);
+ 		ptr += 8;
+ 		mp_opt->suboptions |= OPTION_MPTCP_FASTCLOSE;
+-		pr_debug("MP_FASTCLOSE: recv_key=%llu", mp_opt->rcvr_key);
++		pr_debug("MP_FASTCLOSE: recv_key=%llu\n", mp_opt->rcvr_key);
+ 		break;
+ 
+ 	case MPTCPOPT_RST:
+@@ -343,7 +343,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 		flags = *ptr++;
+ 		mp_opt->reset_transient = flags & MPTCP_RST_TRANSIENT;
+ 		mp_opt->reset_reason = *ptr;
+-		pr_debug("MP_RST: transient=%u reason=%u",
++		pr_debug("MP_RST: transient=%u reason=%u\n",
+ 			 mp_opt->reset_transient, mp_opt->reset_reason);
+ 		break;
+ 
+@@ -354,7 +354,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 		ptr += 2;
+ 		mp_opt->suboptions |= OPTION_MPTCP_FAIL;
+ 		mp_opt->fail_seq = get_unaligned_be64(ptr);
+-		pr_debug("MP_FAIL: data_seq=%llu", mp_opt->fail_seq);
++		pr_debug("MP_FAIL: data_seq=%llu\n", mp_opt->fail_seq);
+ 		break;
+ 
+ 	default:
+@@ -417,7 +417,7 @@ bool mptcp_syn_options(struct sock *sk, const struct sk_buff *skb,
+ 		*size = TCPOLEN_MPTCP_MPC_SYN;
+ 		return true;
+ 	} else if (subflow->request_join) {
+-		pr_debug("remote_token=%u, nonce=%u", subflow->remote_token,
++		pr_debug("remote_token=%u, nonce=%u\n", subflow->remote_token,
+ 			 subflow->local_nonce);
+ 		opts->suboptions = OPTION_MPTCP_MPJ_SYN;
+ 		opts->join_id = subflow->local_id;
+@@ -500,7 +500,7 @@ static bool mptcp_established_options_mp(struct sock *sk, struct sk_buff *skb,
+ 			*size = TCPOLEN_MPTCP_MPC_ACK;
+ 		}
+ 
+-		pr_debug("subflow=%p, local_key=%llu, remote_key=%llu map_len=%d",
++		pr_debug("subflow=%p, local_key=%llu, remote_key=%llu map_len=%d\n",
+ 			 subflow, subflow->local_key, subflow->remote_key,
+ 			 data_len);
+ 
+@@ -509,7 +509,7 @@ static bool mptcp_established_options_mp(struct sock *sk, struct sk_buff *skb,
+ 		opts->suboptions = OPTION_MPTCP_MPJ_ACK;
+ 		memcpy(opts->hmac, subflow->hmac, MPTCPOPT_HMAC_LEN);
+ 		*size = TCPOLEN_MPTCP_MPJ_ACK;
+-		pr_debug("subflow=%p", subflow);
++		pr_debug("subflow=%p\n", subflow);
+ 
+ 		/* we can use the full delegate action helper only from BH context
+ 		 * If we are in process context - sk is flushing the backlog at
+@@ -675,7 +675,7 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff *
+ 
+ 	*size = len;
+ 	if (drop_other_suboptions) {
+-		pr_debug("drop other suboptions");
++		pr_debug("drop other suboptions\n");
+ 		opts->suboptions = 0;
+ 
+ 		/* note that e.g. DSS could have written into the memory
+@@ -695,7 +695,7 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff *
+ 	} else {
+ 		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_ECHOADDTX);
+ 	}
+-	pr_debug("addr_id=%d, ahmac=%llu, echo=%d, port=%d",
++	pr_debug("addr_id=%d, ahmac=%llu, echo=%d, port=%d\n",
+ 		 opts->addr.id, opts->ahmac, echo, ntohs(opts->addr.port));
+ 
+ 	return true;
+@@ -726,7 +726,7 @@ static bool mptcp_established_options_rm_addr(struct sock *sk,
+ 	opts->rm_list = rm_list;
+ 
+ 	for (i = 0; i < opts->rm_list.nr; i++)
+-		pr_debug("rm_list_ids[%d]=%d", i, opts->rm_list.ids[i]);
++		pr_debug("rm_list_ids[%d]=%d\n", i, opts->rm_list.ids[i]);
+ 	MPTCP_ADD_STATS(sock_net(sk), MPTCP_MIB_RMADDRTX, opts->rm_list.nr);
+ 	return true;
+ }
+@@ -752,7 +752,7 @@ static bool mptcp_established_options_mp_prio(struct sock *sk,
+ 	opts->suboptions |= OPTION_MPTCP_PRIO;
+ 	opts->backup = subflow->request_bkup;
+ 
+-	pr_debug("prio=%d", opts->backup);
++	pr_debug("prio=%d\n", opts->backup);
+ 
+ 	return true;
+ }
+@@ -794,7 +794,7 @@ static bool mptcp_established_options_fastclose(struct sock *sk,
+ 	opts->suboptions |= OPTION_MPTCP_FASTCLOSE;
+ 	opts->rcvr_key = READ_ONCE(msk->remote_key);
+ 
+-	pr_debug("FASTCLOSE key=%llu", opts->rcvr_key);
++	pr_debug("FASTCLOSE key=%llu\n", opts->rcvr_key);
+ 	MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFASTCLOSETX);
+ 	return true;
+ }
+@@ -816,7 +816,7 @@ static bool mptcp_established_options_mp_fail(struct sock *sk,
+ 	opts->suboptions |= OPTION_MPTCP_FAIL;
+ 	opts->fail_seq = subflow->map_seq;
+ 
+-	pr_debug("MP_FAIL fail_seq=%llu", opts->fail_seq);
++	pr_debug("MP_FAIL fail_seq=%llu\n", opts->fail_seq);
+ 	MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFAILTX);
+ 
+ 	return true;
+@@ -904,7 +904,7 @@ bool mptcp_synack_options(const struct request_sock *req, unsigned int *size,
+ 		opts->csum_reqd = subflow_req->csum_reqd;
+ 		opts->allow_join_id0 = subflow_req->allow_join_id0;
+ 		*size = TCPOLEN_MPTCP_MPC_SYNACK;
+-		pr_debug("subflow_req=%p, local_key=%llu",
++		pr_debug("subflow_req=%p, local_key=%llu\n",
+ 			 subflow_req, subflow_req->local_key);
+ 		return true;
+ 	} else if (subflow_req->mp_join) {
+@@ -913,7 +913,7 @@ bool mptcp_synack_options(const struct request_sock *req, unsigned int *size,
+ 		opts->join_id = subflow_req->local_id;
+ 		opts->thmac = subflow_req->thmac;
+ 		opts->nonce = subflow_req->local_nonce;
+-		pr_debug("req=%p, bkup=%u, id=%u, thmac=%llu, nonce=%u",
++		pr_debug("req=%p, bkup=%u, id=%u, thmac=%llu, nonce=%u\n",
+ 			 subflow_req, opts->backup, opts->join_id,
+ 			 opts->thmac, opts->nonce);
+ 		*size = TCPOLEN_MPTCP_MPJ_SYNACK;
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index 3e6e0f5510bb1..37f6dbcd8434d 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -19,7 +19,7 @@ int mptcp_pm_announce_addr(struct mptcp_sock *msk,
+ {
+ 	u8 add_addr = READ_ONCE(msk->pm.addr_signal);
+ 
+-	pr_debug("msk=%p, local_id=%d, echo=%d", msk, addr->id, echo);
++	pr_debug("msk=%p, local_id=%d, echo=%d\n", msk, addr->id, echo);
+ 
+ 	lockdep_assert_held(&msk->pm.lock);
+ 
+@@ -45,7 +45,7 @@ int mptcp_pm_remove_addr(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_
+ {
+ 	u8 rm_addr = READ_ONCE(msk->pm.addr_signal);
+ 
+-	pr_debug("msk=%p, rm_list_nr=%d", msk, rm_list->nr);
++	pr_debug("msk=%p, rm_list_nr=%d\n", msk, rm_list->nr);
+ 
+ 	if (rm_addr) {
+ 		MPTCP_ADD_STATS(sock_net((struct sock *)msk),
+@@ -66,7 +66,7 @@ void mptcp_pm_new_connection(struct mptcp_sock *msk, const struct sock *ssk, int
+ {
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 
+-	pr_debug("msk=%p, token=%u side=%d", msk, READ_ONCE(msk->token), server_side);
++	pr_debug("msk=%p, token=%u side=%d\n", msk, READ_ONCE(msk->token), server_side);
+ 
+ 	WRITE_ONCE(pm->server_side, server_side);
+ 	mptcp_event(MPTCP_EVENT_CREATED, msk, ssk, GFP_ATOMIC);
+@@ -90,7 +90,7 @@ bool mptcp_pm_allow_new_subflow(struct mptcp_sock *msk)
+ 
+ 	subflows_max = mptcp_pm_get_subflows_max(msk);
+ 
+-	pr_debug("msk=%p subflows=%d max=%d allow=%d", msk, pm->subflows,
++	pr_debug("msk=%p subflows=%d max=%d allow=%d\n", msk, pm->subflows,
+ 		 subflows_max, READ_ONCE(pm->accept_subflow));
+ 
+ 	/* try to avoid acquiring the lock below */
+@@ -114,7 +114,7 @@ bool mptcp_pm_allow_new_subflow(struct mptcp_sock *msk)
+ static bool mptcp_pm_schedule_work(struct mptcp_sock *msk,
+ 				   enum mptcp_pm_status new_status)
+ {
+-	pr_debug("msk=%p status=%x new=%lx", msk, msk->pm.status,
++	pr_debug("msk=%p status=%x new=%lx\n", msk, msk->pm.status,
+ 		 BIT(new_status));
+ 	if (msk->pm.status & BIT(new_status))
+ 		return false;
+@@ -129,7 +129,7 @@ void mptcp_pm_fully_established(struct mptcp_sock *msk, const struct sock *ssk)
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 	bool announce = false;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	spin_lock_bh(&pm->lock);
+ 
+@@ -153,14 +153,14 @@ void mptcp_pm_fully_established(struct mptcp_sock *msk, const struct sock *ssk)
+ 
+ void mptcp_pm_connection_closed(struct mptcp_sock *msk)
+ {
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ }
+ 
+ void mptcp_pm_subflow_established(struct mptcp_sock *msk)
+ {
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	if (!READ_ONCE(pm->work_pending))
+ 		return;
+@@ -212,7 +212,7 @@ void mptcp_pm_add_addr_received(const struct sock *ssk,
+ 	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 
+-	pr_debug("msk=%p remote_id=%d accept=%d", msk, addr->id,
++	pr_debug("msk=%p remote_id=%d accept=%d\n", msk, addr->id,
+ 		 READ_ONCE(pm->accept_addr));
+ 
+ 	mptcp_event_addr_announced(ssk, addr);
+@@ -226,7 +226,9 @@ void mptcp_pm_add_addr_received(const struct sock *ssk,
+ 		} else {
+ 			__MPTCP_INC_STATS(sock_net((struct sock *)msk), MPTCP_MIB_ADDADDRDROP);
+ 		}
+-	} else if (!READ_ONCE(pm->accept_addr)) {
++	/* id0 should not have a different address */
++	} else if ((addr->id == 0 && !mptcp_pm_nl_is_init_remote_addr(msk, addr)) ||
++		   (addr->id > 0 && !READ_ONCE(pm->accept_addr))) {
+ 		mptcp_pm_announce_addr(msk, addr, true);
+ 		mptcp_pm_add_addr_send_ack(msk);
+ 	} else if (mptcp_pm_schedule_work(msk, MPTCP_PM_ADD_ADDR_RECEIVED)) {
+@@ -243,7 +245,7 @@ void mptcp_pm_add_addr_echoed(struct mptcp_sock *msk,
+ {
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	spin_lock_bh(&pm->lock);
+ 
+@@ -267,7 +269,7 @@ void mptcp_pm_rm_addr_received(struct mptcp_sock *msk,
+ 	struct mptcp_pm_data *pm = &msk->pm;
+ 	u8 i;
+ 
+-	pr_debug("msk=%p remote_ids_nr=%d", msk, rm_list->nr);
++	pr_debug("msk=%p remote_ids_nr=%d\n", msk, rm_list->nr);
+ 
+ 	for (i = 0; i < rm_list->nr; i++)
+ 		mptcp_event_addr_removed(msk, rm_list->ids[i]);
+@@ -299,19 +301,19 @@ void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq)
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+ 	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
+ 
+-	pr_debug("fail_seq=%llu", fail_seq);
++	pr_debug("fail_seq=%llu\n", fail_seq);
+ 
+ 	if (!READ_ONCE(msk->allow_infinite_fallback))
+ 		return;
+ 
+ 	if (!subflow->fail_tout) {
+-		pr_debug("send MP_FAIL response and infinite map");
++		pr_debug("send MP_FAIL response and infinite map\n");
+ 
+ 		subflow->send_mp_fail = 1;
+ 		subflow->send_infinite_map = 1;
+ 		tcp_send_ack(sk);
+ 	} else {
+-		pr_debug("MP_FAIL response received");
++		pr_debug("MP_FAIL response received\n");
+ 		WRITE_ONCE(subflow->fail_tout, 0);
+ 	}
+ }
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 3e4ad801786f2..f891bc714668c 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -130,12 +130,15 @@ static bool lookup_subflow_by_daddr(const struct list_head *list,
+ {
+ 	struct mptcp_subflow_context *subflow;
+ 	struct mptcp_addr_info cur;
+-	struct sock_common *skc;
+ 
+ 	list_for_each_entry(subflow, list, node) {
+-		skc = (struct sock_common *)mptcp_subflow_tcp_sock(subflow);
++		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+ 
+-		remote_address(skc, &cur);
++		if (!((1 << inet_sk_state_load(ssk)) &
++		      (TCPF_ESTABLISHED | TCPF_SYN_SENT | TCPF_SYN_RECV)))
++			continue;
++
++		remote_address((struct sock_common *)ssk, &cur);
+ 		if (mptcp_addresses_equal(&cur, daddr, daddr->port))
+ 			return true;
+ 	}
+@@ -287,7 +290,7 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
+ 	struct mptcp_sock *msk = entry->sock;
+ 	struct sock *sk = (struct sock *)msk;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	if (!msk)
+ 		return;
+@@ -306,7 +309,7 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
+ 	spin_lock_bh(&msk->pm.lock);
+ 
+ 	if (!mptcp_pm_should_add_signal_addr(msk)) {
+-		pr_debug("retransmit ADD_ADDR id=%d", entry->addr.id);
++		pr_debug("retransmit ADD_ADDR id=%d\n", entry->addr.id);
+ 		mptcp_pm_announce_addr(msk, &entry->addr, false);
+ 		mptcp_pm_add_addr_send_ack(msk);
+ 		entry->retrans_times++;
+@@ -387,7 +390,7 @@ void mptcp_pm_free_anno_list(struct mptcp_sock *msk)
+ 	struct sock *sk = (struct sock *)msk;
+ 	LIST_HEAD(free_list);
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	spin_lock_bh(&msk->pm.lock);
+ 	list_splice_init(&msk->pm.anno_list, &free_list);
+@@ -473,7 +476,7 @@ static void __mptcp_pm_send_ack(struct mptcp_sock *msk, struct mptcp_subflow_con
+ 	struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+ 	bool slow;
+ 
+-	pr_debug("send ack for %s",
++	pr_debug("send ack for %s\n",
+ 		 prio ? "mp_prio" : (mptcp_pm_should_add_signal(msk) ? "add_addr" : "rm_addr"));
+ 
+ 	slow = lock_sock_fast(ssk);
+@@ -585,6 +588,11 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ 
+ 		__clear_bit(local.addr.id, msk->pm.id_avail_bitmap);
+ 		msk->pm.add_addr_signaled++;
++
++		/* Special case for ID0: set the correct ID */
++		if (local.addr.id == msk->mpc_endpoint_id)
++			local.addr.id = 0;
++
+ 		mptcp_pm_announce_addr(msk, &local.addr, false);
+ 		mptcp_pm_nl_addr_send_ack(msk);
+ 
+@@ -607,8 +615,14 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
+ 
+ 		fullmesh = !!(local.flags & MPTCP_PM_ADDR_FLAG_FULLMESH);
+ 
+-		msk->pm.local_addr_used++;
+ 		__clear_bit(local.addr.id, msk->pm.id_avail_bitmap);
++
++		/* Special case for ID0: set the correct ID */
++		if (local.addr.id == msk->mpc_endpoint_id)
++			local.addr.id = 0;
++		else /* local_addr_used is not decr for ID 0 */
++			msk->pm.local_addr_used++;
++
+ 		nr = fill_remote_addresses_vec(msk, &local.addr, fullmesh, addrs);
+ 		if (nr == 0)
+ 			continue;
+@@ -708,7 +722,7 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
+ 	add_addr_accept_max = mptcp_pm_get_add_addr_accept_max(msk);
+ 	subflows_max = mptcp_pm_get_subflows_max(msk);
+ 
+-	pr_debug("accepted %d:%d remote family %d",
++	pr_debug("accepted %d:%d remote family %d\n",
+ 		 msk->pm.add_addr_accepted, add_addr_accept_max,
+ 		 msk->pm.remote.family);
+ 
+@@ -737,13 +751,24 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
+ 	spin_lock_bh(&msk->pm.lock);
+ 
+ 	if (sf_created) {
+-		msk->pm.add_addr_accepted++;
++		/* add_addr_accepted is not decr for ID 0 */
++		if (remote.id)
++			msk->pm.add_addr_accepted++;
+ 		if (msk->pm.add_addr_accepted >= add_addr_accept_max ||
+ 		    msk->pm.subflows >= subflows_max)
+ 			WRITE_ONCE(msk->pm.accept_addr, false);
+ 	}
+ }
+ 
++bool mptcp_pm_nl_is_init_remote_addr(struct mptcp_sock *msk,
++				     const struct mptcp_addr_info *remote)
++{
++	struct mptcp_addr_info mpc_remote;
++
++	remote_address((struct sock_common *)msk, &mpc_remote);
++	return mptcp_addresses_equal(&mpc_remote, remote, remote->port);
++}
++
+ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
+ {
+ 	struct mptcp_subflow_context *subflow;
+@@ -755,9 +780,12 @@ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
+ 	    !mptcp_pm_should_rm_signal(msk))
+ 		return;
+ 
+-	subflow = list_first_entry_or_null(&msk->conn_list, typeof(*subflow), node);
+-	if (subflow)
+-		mptcp_pm_send_ack(msk, subflow, false, false);
++	mptcp_for_each_subflow(msk, subflow) {
++		if (__mptcp_subflow_active(subflow)) {
++			mptcp_pm_send_ack(msk, subflow, false, false);
++			break;
++		}
++	}
+ }
+ 
+ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+@@ -767,7 +795,7 @@ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+ {
+ 	struct mptcp_subflow_context *subflow;
+ 
+-	pr_debug("bkup=%d", bkup);
++	pr_debug("bkup=%d\n", bkup);
+ 
+ 	mptcp_for_each_subflow(msk, subflow) {
+ 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+@@ -790,11 +818,6 @@ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+ 	return -EINVAL;
+ }
+ 
+-static bool mptcp_local_id_match(const struct mptcp_sock *msk, u8 local_id, u8 id)
+-{
+-	return local_id == id || (!local_id && msk->mpc_endpoint_id == id);
+-}
+-
+ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
+ 					   const struct mptcp_rm_list *rm_list,
+ 					   enum linux_mptcp_mib_field rm_type)
+@@ -803,7 +826,7 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
+ 	struct sock *sk = (struct sock *)msk;
+ 	u8 i;
+ 
+-	pr_debug("%s rm_list_nr %d",
++	pr_debug("%s rm_list_nr %d\n",
+ 		 rm_type == MPTCP_MIB_RMADDR ? "address" : "subflow", rm_list->nr);
+ 
+ 	msk_owned_by_me(msk);
+@@ -827,12 +850,14 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
+ 			int how = RCV_SHUTDOWN | SEND_SHUTDOWN;
+ 			u8 id = subflow_get_local_id(subflow);
+ 
++			if (inet_sk_state_load(ssk) == TCP_CLOSE)
++				continue;
+ 			if (rm_type == MPTCP_MIB_RMADDR && remote_id != rm_id)
+ 				continue;
+-			if (rm_type == MPTCP_MIB_RMSUBFLOW && !mptcp_local_id_match(msk, id, rm_id))
++			if (rm_type == MPTCP_MIB_RMSUBFLOW && id != rm_id)
+ 				continue;
+ 
+-			pr_debug(" -> %s rm_list_ids[%d]=%u local_id=%u remote_id=%u mpc_id=%u",
++			pr_debug(" -> %s rm_list_ids[%d]=%u local_id=%u remote_id=%u mpc_id=%u\n",
+ 				 rm_type == MPTCP_MIB_RMADDR ? "address" : "subflow",
+ 				 i, rm_id, id, remote_id, msk->mpc_endpoint_id);
+ 			spin_unlock_bh(&msk->pm.lock);
+@@ -889,7 +914,7 @@ void mptcp_pm_nl_work(struct mptcp_sock *msk)
+ 
+ 	spin_lock_bh(&msk->pm.lock);
+ 
+-	pr_debug("msk=%p status=%x", msk, pm->status);
++	pr_debug("msk=%p status=%x\n", msk, pm->status);
+ 	if (pm->status & BIT(MPTCP_PM_ADD_ADDR_RECEIVED)) {
+ 		pm->status &= ~BIT(MPTCP_PM_ADD_ADDR_RECEIVED);
+ 		mptcp_pm_nl_add_addr_received(msk);
+@@ -1307,20 +1332,27 @@ static struct pm_nl_pernet *genl_info_pm_nl(struct genl_info *info)
+ 	return pm_nl_get_pernet(genl_info_net(info));
+ }
+ 
+-static int mptcp_nl_add_subflow_or_signal_addr(struct net *net)
++static int mptcp_nl_add_subflow_or_signal_addr(struct net *net,
++					       struct mptcp_addr_info *addr)
+ {
+ 	struct mptcp_sock *msk;
+ 	long s_slot = 0, s_num = 0;
+ 
+ 	while ((msk = mptcp_token_iter_next(net, &s_slot, &s_num)) != NULL) {
+ 		struct sock *sk = (struct sock *)msk;
++		struct mptcp_addr_info mpc_addr;
+ 
+ 		if (!READ_ONCE(msk->fully_established) ||
+ 		    mptcp_pm_is_userspace(msk))
+ 			goto next;
+ 
++		/* if the endp linked to the init sf is re-added with a != ID */
++		mptcp_local_address((struct sock_common *)msk, &mpc_addr);
++
+ 		lock_sock(sk);
+ 		spin_lock_bh(&msk->pm.lock);
++		if (mptcp_addresses_equal(addr, &mpc_addr, addr->port))
++			msk->mpc_endpoint_id = addr->id;
+ 		mptcp_pm_create_subflow_or_signal_addr(msk);
+ 		spin_unlock_bh(&msk->pm.lock);
+ 		release_sock(sk);
+@@ -1393,7 +1425,7 @@ int mptcp_pm_nl_add_addr_doit(struct sk_buff *skb, struct genl_info *info)
+ 		goto out_free;
+ 	}
+ 
+-	mptcp_nl_add_subflow_or_signal_addr(sock_net(skb->sk));
++	mptcp_nl_add_subflow_or_signal_addr(sock_net(skb->sk), &entry->addr);
+ 	return 0;
+ 
+ out_free:
+@@ -1438,6 +1470,12 @@ static bool remove_anno_list_by_saddr(struct mptcp_sock *msk,
+ 	return false;
+ }
+ 
++static u8 mptcp_endp_get_local_id(struct mptcp_sock *msk,
++				  const struct mptcp_addr_info *addr)
++{
++	return msk->mpc_endpoint_id == addr->id ? 0 : addr->id;
++}
++
+ static bool mptcp_pm_remove_anno_addr(struct mptcp_sock *msk,
+ 				      const struct mptcp_addr_info *addr,
+ 				      bool force)
+@@ -1445,7 +1483,7 @@ static bool mptcp_pm_remove_anno_addr(struct mptcp_sock *msk,
+ 	struct mptcp_rm_list list = { .nr = 0 };
+ 	bool ret;
+ 
+-	list.ids[list.nr++] = addr->id;
++	list.ids[list.nr++] = mptcp_endp_get_local_id(msk, addr);
+ 
+ 	ret = remove_anno_list_by_saddr(msk, addr);
+ 	if (ret || force) {
+@@ -1472,13 +1510,11 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
+ 						   const struct mptcp_pm_addr_entry *entry)
+ {
+ 	const struct mptcp_addr_info *addr = &entry->addr;
+-	struct mptcp_rm_list list = { .nr = 0 };
++	struct mptcp_rm_list list = { .nr = 1 };
+ 	long s_slot = 0, s_num = 0;
+ 	struct mptcp_sock *msk;
+ 
+-	pr_debug("remove_id=%d", addr->id);
+-
+-	list.ids[list.nr++] = addr->id;
++	pr_debug("remove_id=%d\n", addr->id);
+ 
+ 	while ((msk = mptcp_token_iter_next(net, &s_slot, &s_num)) != NULL) {
+ 		struct sock *sk = (struct sock *)msk;
+@@ -1497,6 +1533,7 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
+ 		mptcp_pm_remove_anno_addr(msk, addr, remove_subflow &&
+ 					  !(entry->flags & MPTCP_PM_ADDR_FLAG_IMPLICIT));
+ 
++		list.ids[0] = mptcp_endp_get_local_id(msk, addr);
+ 		if (remove_subflow) {
+ 			spin_lock_bh(&msk->pm.lock);
+ 			mptcp_pm_nl_rm_subflow_received(msk, &list);
+@@ -1509,6 +1546,8 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
+ 			spin_unlock_bh(&msk->pm.lock);
+ 		}
+ 
++		if (msk->mpc_endpoint_id == entry->addr.id)
++			msk->mpc_endpoint_id = 0;
+ 		release_sock(sk);
+ 
+ next:
+@@ -1603,6 +1642,7 @@ int mptcp_pm_nl_del_addr_doit(struct sk_buff *skb, struct genl_info *info)
+ 	return ret;
+ }
+ 
++/* Called from the userspace PM only */
+ void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list)
+ {
+ 	struct mptcp_rm_list alist = { .nr = 0 };
+@@ -1631,6 +1671,7 @@ void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list)
+ 	}
+ }
+ 
++/* Called from the in-kernel PM only */
+ static void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk,
+ 					       struct list_head *rm_list)
+ {
+@@ -1640,11 +1681,11 @@ static void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk,
+ 	list_for_each_entry(entry, rm_list, list) {
+ 		if (slist.nr < MPTCP_RM_IDS_MAX &&
+ 		    lookup_subflow_by_saddr(&msk->conn_list, &entry->addr))
+-			slist.ids[slist.nr++] = entry->addr.id;
++			slist.ids[slist.nr++] = mptcp_endp_get_local_id(msk, &entry->addr);
+ 
+ 		if (alist.nr < MPTCP_RM_IDS_MAX &&
+ 		    remove_anno_list_by_saddr(msk, &entry->addr))
+-			alist.ids[alist.nr++] = entry->addr.id;
++			alist.ids[alist.nr++] = mptcp_endp_get_local_id(msk, &entry->addr);
+ 	}
+ 
+ 	spin_lock_bh(&msk->pm.lock);
+@@ -1941,7 +1982,7 @@ static void mptcp_pm_nl_fullmesh(struct mptcp_sock *msk,
+ {
+ 	struct mptcp_rm_list list = { .nr = 0 };
+ 
+-	list.ids[list.nr++] = addr->id;
++	list.ids[list.nr++] = mptcp_endp_get_local_id(msk, addr);
+ 
+ 	spin_lock_bh(&msk->pm.lock);
+ 	mptcp_pm_nl_rm_subflow_received(msk, &list);
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index ff8292d0cf4e6..4c8c21d3c3f7c 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -139,7 +139,7 @@ static bool mptcp_try_coalesce(struct sock *sk, struct sk_buff *to,
+ 	    !skb_try_coalesce(to, from, &fragstolen, &delta))
+ 		return false;
+ 
+-	pr_debug("colesced seq %llx into %llx new len %d new end seq %llx",
++	pr_debug("colesced seq %llx into %llx new len %d new end seq %llx\n",
+ 		 MPTCP_SKB_CB(from)->map_seq, MPTCP_SKB_CB(to)->map_seq,
+ 		 to->len, MPTCP_SKB_CB(from)->end_seq);
+ 	MPTCP_SKB_CB(to)->end_seq = MPTCP_SKB_CB(from)->end_seq;
+@@ -217,7 +217,7 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
+ 	end_seq = MPTCP_SKB_CB(skb)->end_seq;
+ 	max_seq = atomic64_read(&msk->rcv_wnd_sent);
+ 
+-	pr_debug("msk=%p seq=%llx limit=%llx empty=%d", msk, seq, max_seq,
++	pr_debug("msk=%p seq=%llx limit=%llx empty=%d\n", msk, seq, max_seq,
+ 		 RB_EMPTY_ROOT(&msk->out_of_order_queue));
+ 	if (after64(end_seq, max_seq)) {
+ 		/* out of window */
+@@ -643,7 +643,7 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
+ 		}
+ 	}
+ 
+-	pr_debug("msk=%p ssk=%p", msk, ssk);
++	pr_debug("msk=%p ssk=%p\n", msk, ssk);
+ 	tp = tcp_sk(ssk);
+ 	do {
+ 		u32 map_remaining, offset;
+@@ -724,7 +724,7 @@ static bool __mptcp_ofo_queue(struct mptcp_sock *msk)
+ 	u64 end_seq;
+ 
+ 	p = rb_first(&msk->out_of_order_queue);
+-	pr_debug("msk=%p empty=%d", msk, RB_EMPTY_ROOT(&msk->out_of_order_queue));
++	pr_debug("msk=%p empty=%d\n", msk, RB_EMPTY_ROOT(&msk->out_of_order_queue));
+ 	while (p) {
+ 		skb = rb_to_skb(p);
+ 		if (after64(MPTCP_SKB_CB(skb)->map_seq, msk->ack_seq))
+@@ -746,7 +746,7 @@ static bool __mptcp_ofo_queue(struct mptcp_sock *msk)
+ 			int delta = msk->ack_seq - MPTCP_SKB_CB(skb)->map_seq;
+ 
+ 			/* skip overlapping data, if any */
+-			pr_debug("uncoalesced seq=%llx ack seq=%llx delta=%d",
++			pr_debug("uncoalesced seq=%llx ack seq=%llx delta=%d\n",
+ 				 MPTCP_SKB_CB(skb)->map_seq, msk->ack_seq,
+ 				 delta);
+ 			MPTCP_SKB_CB(skb)->offset += delta;
+@@ -1240,7 +1240,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ 	size_t copy;
+ 	int i;
+ 
+-	pr_debug("msk=%p ssk=%p sending dfrag at seq=%llu len=%u already sent=%u",
++	pr_debug("msk=%p ssk=%p sending dfrag at seq=%llu len=%u already sent=%u\n",
+ 		 msk, ssk, dfrag->data_seq, dfrag->data_len, info->sent);
+ 
+ 	if (WARN_ON_ONCE(info->sent > info->limit ||
+@@ -1341,7 +1341,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ 	mpext->use_map = 1;
+ 	mpext->dsn64 = 1;
+ 
+-	pr_debug("data_seq=%llu subflow_seq=%u data_len=%u dsn64=%d",
++	pr_debug("data_seq=%llu subflow_seq=%u data_len=%u dsn64=%d\n",
+ 		 mpext->data_seq, mpext->subflow_seq, mpext->data_len,
+ 		 mpext->dsn64);
+ 
+@@ -1892,7 +1892,7 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 			if (!msk->first_pending)
+ 				WRITE_ONCE(msk->first_pending, dfrag);
+ 		}
+-		pr_debug("msk=%p dfrag at seq=%llu len=%u sent=%u new=%d", msk,
++		pr_debug("msk=%p dfrag at seq=%llu len=%u sent=%u new=%d\n", msk,
+ 			 dfrag->data_seq, dfrag->data_len, dfrag->already_sent,
+ 			 !dfrag_collapsed);
+ 
+@@ -2248,7 +2248,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 			}
+ 		}
+ 
+-		pr_debug("block timeout %ld", timeo);
++		pr_debug("block timeout %ld\n", timeo);
+ 		sk_wait_data(sk, &timeo, NULL);
+ 	}
+ 
+@@ -2264,7 +2264,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 		}
+ 	}
+ 
+-	pr_debug("msk=%p rx queue empty=%d:%d copied=%d",
++	pr_debug("msk=%p rx queue empty=%d:%d copied=%d\n",
+ 		 msk, skb_queue_empty_lockless(&sk->sk_receive_queue),
+ 		 skb_queue_empty(&msk->receive_queue), copied);
+ 	if (!(flags & MSG_PEEK))
+@@ -2326,7 +2326,7 @@ struct sock *mptcp_subflow_get_retrans(struct mptcp_sock *msk)
+ 			continue;
+ 		}
+ 
+-		if (subflow->backup) {
++		if (subflow->backup || subflow->request_bkup) {
+ 			if (!backup)
+ 				backup = ssk;
+ 			continue;
+@@ -2508,6 +2508,12 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
+ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
+ 		     struct mptcp_subflow_context *subflow)
+ {
++	/* The first subflow can already be closed and still in the list */
++	if (subflow->close_event_done)
++		return;
++
++	subflow->close_event_done = true;
++
+ 	if (sk->sk_state == TCP_ESTABLISHED)
+ 		mptcp_event(MPTCP_EVENT_SUB_CLOSED, mptcp_sk(sk), ssk, GFP_KERNEL);
+ 
+@@ -2533,8 +2539,11 @@ static void __mptcp_close_subflow(struct sock *sk)
+ 
+ 	mptcp_for_each_subflow_safe(msk, subflow, tmp) {
+ 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
++		int ssk_state = inet_sk_state_load(ssk);
+ 
+-		if (inet_sk_state_load(ssk) != TCP_CLOSE)
++		if (ssk_state != TCP_CLOSE &&
++		    (ssk_state != TCP_CLOSE_WAIT ||
++		     inet_sk_state_load(sk) != TCP_ESTABLISHED))
+ 			continue;
+ 
+ 		/* 'subflow_data_ready' will re-sched once rx queue is empty */
+@@ -2714,7 +2723,7 @@ static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
+ 	if (!ssk)
+ 		return;
+ 
+-	pr_debug("MP_FAIL doesn't respond, reset the subflow");
++	pr_debug("MP_FAIL doesn't respond, reset the subflow\n");
+ 
+ 	slow = lock_sock_fast(ssk);
+ 	mptcp_subflow_reset(ssk);
+@@ -2888,7 +2897,7 @@ void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how)
+ 		break;
+ 	default:
+ 		if (__mptcp_check_fallback(mptcp_sk(sk))) {
+-			pr_debug("Fallback");
++			pr_debug("Fallback\n");
+ 			ssk->sk_shutdown |= how;
+ 			tcp_shutdown(ssk, how);
+ 
+@@ -2898,7 +2907,7 @@ void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how)
+ 			WRITE_ONCE(mptcp_sk(sk)->snd_una, mptcp_sk(sk)->snd_nxt);
+ 			mptcp_schedule_work(sk);
+ 		} else {
+-			pr_debug("Sending DATA_FIN on subflow %p", ssk);
++			pr_debug("Sending DATA_FIN on subflow %p\n", ssk);
+ 			tcp_send_ack(ssk);
+ 			if (!mptcp_rtx_timer_pending(sk))
+ 				mptcp_reset_rtx_timer(sk);
+@@ -2964,7 +2973,7 @@ static void mptcp_check_send_data_fin(struct sock *sk)
+ 	struct mptcp_subflow_context *subflow;
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 
+-	pr_debug("msk=%p snd_data_fin_enable=%d pending=%d snd_nxt=%llu write_seq=%llu",
++	pr_debug("msk=%p snd_data_fin_enable=%d pending=%d snd_nxt=%llu write_seq=%llu\n",
+ 		 msk, msk->snd_data_fin_enable, !!mptcp_send_head(sk),
+ 		 msk->snd_nxt, msk->write_seq);
+ 
+@@ -2988,7 +2997,7 @@ static void __mptcp_wr_shutdown(struct sock *sk)
+ {
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 
+-	pr_debug("msk=%p snd_data_fin_enable=%d shutdown=%x state=%d pending=%d",
++	pr_debug("msk=%p snd_data_fin_enable=%d shutdown=%x state=%d pending=%d\n",
+ 		 msk, msk->snd_data_fin_enable, sk->sk_shutdown, sk->sk_state,
+ 		 !!mptcp_send_head(sk));
+ 
+@@ -3003,7 +3012,7 @@ static void __mptcp_destroy_sock(struct sock *sk)
+ {
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	might_sleep();
+ 
+@@ -3111,7 +3120,7 @@ bool __mptcp_close(struct sock *sk, long timeout)
+ 		mptcp_set_state(sk, TCP_CLOSE);
+ 
+ 	sock_hold(sk);
+-	pr_debug("msk=%p state=%d", sk, sk->sk_state);
++	pr_debug("msk=%p state=%d\n", sk, sk->sk_state);
+ 	if (msk->token)
+ 		mptcp_event(MPTCP_EVENT_CLOSED, msk, NULL, GFP_KERNEL);
+ 
+@@ -3543,7 +3552,7 @@ static int mptcp_get_port(struct sock *sk, unsigned short snum)
+ {
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 
+-	pr_debug("msk=%p, ssk=%p", msk, msk->first);
++	pr_debug("msk=%p, ssk=%p\n", msk, msk->first);
+ 	if (WARN_ON_ONCE(!msk->first))
+ 		return -EINVAL;
+ 
+@@ -3560,7 +3569,7 @@ void mptcp_finish_connect(struct sock *ssk)
+ 	sk = subflow->conn;
+ 	msk = mptcp_sk(sk);
+ 
+-	pr_debug("msk=%p, token=%u", sk, subflow->token);
++	pr_debug("msk=%p, token=%u\n", sk, subflow->token);
+ 
+ 	subflow->map_seq = subflow->iasn;
+ 	subflow->map_subflow_seq = 1;
+@@ -3589,7 +3598,7 @@ bool mptcp_finish_join(struct sock *ssk)
+ 	struct sock *parent = (void *)msk;
+ 	bool ret = true;
+ 
+-	pr_debug("msk=%p, subflow=%p", msk, subflow);
++	pr_debug("msk=%p, subflow=%p\n", msk, subflow);
+ 
+ 	/* mptcp socket already closing? */
+ 	if (!mptcp_is_fully_established(parent)) {
+@@ -3635,7 +3644,7 @@ bool mptcp_finish_join(struct sock *ssk)
+ 
+ static void mptcp_shutdown(struct sock *sk, int how)
+ {
+-	pr_debug("sk=%p, how=%d", sk, how);
++	pr_debug("sk=%p, how=%d\n", sk, how);
+ 
+ 	if ((how & SEND_SHUTDOWN) && mptcp_close_state(sk))
+ 		__mptcp_wr_shutdown(sk);
+@@ -3856,7 +3865,7 @@ static int mptcp_listen(struct socket *sock, int backlog)
+ 	struct sock *ssk;
+ 	int err;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	lock_sock(sk);
+ 
+@@ -3895,7 +3904,7 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
+ 	struct mptcp_sock *msk = mptcp_sk(sock->sk);
+ 	struct sock *ssk, *newsk;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	/* Buggy applications can call accept on socket states other then LISTEN
+ 	 * but no need to allocate the first subflow just to error out.
+@@ -3904,12 +3913,12 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
+ 	if (!ssk)
+ 		return -EINVAL;
+ 
+-	pr_debug("ssk=%p, listener=%p", ssk, mptcp_subflow_ctx(ssk));
++	pr_debug("ssk=%p, listener=%p\n", ssk, mptcp_subflow_ctx(ssk));
+ 	newsk = inet_csk_accept(ssk, arg);
+ 	if (!newsk)
+ 		return arg->err;
+ 
+-	pr_debug("newsk=%p, subflow is mptcp=%d", newsk, sk_is_mptcp(newsk));
++	pr_debug("newsk=%p, subflow is mptcp=%d\n", newsk, sk_is_mptcp(newsk));
+ 	if (sk_is_mptcp(newsk)) {
+ 		struct mptcp_subflow_context *subflow;
+ 		struct sock *new_mptcp_sock;
+@@ -4002,7 +4011,7 @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock,
+ 	sock_poll_wait(file, sock, wait);
+ 
+ 	state = inet_sk_state_load(sk);
+-	pr_debug("msk=%p state=%d flags=%lx", msk, state, msk->flags);
++	pr_debug("msk=%p state=%d flags=%lx\n", msk, state, msk->flags);
+ 	if (state == TCP_LISTEN) {
+ 		struct sock *ssk = READ_ONCE(msk->first);
+ 
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index c7c846805c4e1..0241b0dbab3ca 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -519,7 +519,8 @@ struct mptcp_subflow_context {
+ 		stale : 1,	    /* unable to snd/rcv data, do not use for xmit */
+ 		valid_csum_seen : 1,        /* at least one csum validated */
+ 		is_mptfo : 1,	    /* subflow is doing TFO */
+-		__unused : 10;
++		close_event_done : 1,       /* has done the post-closed part */
++		__unused : 9;
+ 	bool	data_avail;
+ 	bool	scheduled;
+ 	u32	remote_nonce;
+@@ -987,6 +988,8 @@ void mptcp_pm_add_addr_received(const struct sock *ssk,
+ void mptcp_pm_add_addr_echoed(struct mptcp_sock *msk,
+ 			      const struct mptcp_addr_info *addr);
+ void mptcp_pm_add_addr_send_ack(struct mptcp_sock *msk);
++bool mptcp_pm_nl_is_init_remote_addr(struct mptcp_sock *msk,
++				     const struct mptcp_addr_info *remote);
+ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk);
+ void mptcp_pm_rm_addr_received(struct mptcp_sock *msk,
+ 			       const struct mptcp_rm_list *rm_list);
+@@ -1172,7 +1175,7 @@ static inline bool mptcp_check_fallback(const struct sock *sk)
+ static inline void __mptcp_do_fallback(struct mptcp_sock *msk)
+ {
+ 	if (__mptcp_check_fallback(msk)) {
+-		pr_debug("TCP fallback already done (msk=%p)", msk);
++		pr_debug("TCP fallback already done (msk=%p)\n", msk);
+ 		return;
+ 	}
+ 	set_bit(MPTCP_FALLBACK_DONE, &msk->flags);
+@@ -1208,7 +1211,7 @@ static inline void mptcp_do_fallback(struct sock *ssk)
+ 	}
+ }
+ 
+-#define pr_fallback(a) pr_debug("%s:fallback to TCP (msk=%p)", __func__, a)
++#define pr_fallback(a) pr_debug("%s:fallback to TCP (msk=%p)\n", __func__, a)
+ 
+ static inline bool mptcp_check_infinite_map(struct sk_buff *skb)
+ {
+diff --git a/net/mptcp/sched.c b/net/mptcp/sched.c
+index 4a7fd0508ad28..78ed508ebc1b8 100644
+--- a/net/mptcp/sched.c
++++ b/net/mptcp/sched.c
+@@ -86,7 +86,7 @@ int mptcp_register_scheduler(struct mptcp_sched_ops *sched)
+ 	list_add_tail_rcu(&sched->list, &mptcp_sched_list);
+ 	spin_unlock(&mptcp_sched_list_lock);
+ 
+-	pr_debug("%s registered", sched->name);
++	pr_debug("%s registered\n", sched->name);
+ 	return 0;
+ }
+ 
+@@ -118,7 +118,7 @@ int mptcp_init_sched(struct mptcp_sock *msk,
+ 	if (msk->sched->init)
+ 		msk->sched->init(msk);
+ 
+-	pr_debug("sched=%s", msk->sched->name);
++	pr_debug("sched=%s\n", msk->sched->name);
+ 
+ 	return 0;
+ }
+diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
+index f9a4fb17b5b78..499bd002ceb89 100644
+--- a/net/mptcp/sockopt.c
++++ b/net/mptcp/sockopt.c
+@@ -873,7 +873,7 @@ int mptcp_setsockopt(struct sock *sk, int level, int optname,
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 	struct sock *ssk;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	if (level == SOL_SOCKET)
+ 		return mptcp_setsockopt_sol_socket(msk, optname, optval, optlen);
+@@ -1453,7 +1453,7 @@ int mptcp_getsockopt(struct sock *sk, int level, int optname,
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 	struct sock *ssk;
+ 
+-	pr_debug("msk=%p", msk);
++	pr_debug("msk=%p\n", msk);
+ 
+ 	/* @@ the meaning of setsockopt() when the socket is connected and
+ 	 * there are multiple subflows is not yet defined. It is up to the
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index c330946384ba0..fc813d538a5a2 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -39,7 +39,7 @@ static void subflow_req_destructor(struct request_sock *req)
+ {
+ 	struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req);
+ 
+-	pr_debug("subflow_req=%p", subflow_req);
++	pr_debug("subflow_req=%p\n", subflow_req);
+ 
+ 	if (subflow_req->msk)
+ 		sock_put((struct sock *)subflow_req->msk);
+@@ -146,7 +146,7 @@ static int subflow_check_req(struct request_sock *req,
+ 	struct mptcp_options_received mp_opt;
+ 	bool opt_mp_capable, opt_mp_join;
+ 
+-	pr_debug("subflow_req=%p, listener=%p", subflow_req, listener);
++	pr_debug("subflow_req=%p, listener=%p\n", subflow_req, listener);
+ 
+ #ifdef CONFIG_TCP_MD5SIG
+ 	/* no MPTCP if MD5SIG is enabled on this socket or we may run out of
+@@ -221,7 +221,7 @@ static int subflow_check_req(struct request_sock *req,
+ 		}
+ 
+ 		if (subflow_use_different_sport(subflow_req->msk, sk_listener)) {
+-			pr_debug("syn inet_sport=%d %d",
++			pr_debug("syn inet_sport=%d %d\n",
+ 				 ntohs(inet_sk(sk_listener)->inet_sport),
+ 				 ntohs(inet_sk((struct sock *)subflow_req->msk)->inet_sport));
+ 			if (!mptcp_pm_sport_in_anno_list(subflow_req->msk, sk_listener)) {
+@@ -243,7 +243,7 @@ static int subflow_check_req(struct request_sock *req,
+ 			subflow_init_req_cookie_join_save(subflow_req, skb);
+ 		}
+ 
+-		pr_debug("token=%u, remote_nonce=%u msk=%p", subflow_req->token,
++		pr_debug("token=%u, remote_nonce=%u msk=%p\n", subflow_req->token,
+ 			 subflow_req->remote_nonce, subflow_req->msk);
+ 	}
+ 
+@@ -527,7 +527,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 	subflow->rel_write_seq = 1;
+ 	subflow->conn_finished = 1;
+ 	subflow->ssn_offset = TCP_SKB_CB(skb)->seq;
+-	pr_debug("subflow=%p synack seq=%x", subflow, subflow->ssn_offset);
++	pr_debug("subflow=%p synack seq=%x\n", subflow, subflow->ssn_offset);
+ 
+ 	mptcp_get_options(skb, &mp_opt);
+ 	if (subflow->request_mptcp) {
+@@ -559,7 +559,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 		subflow->thmac = mp_opt.thmac;
+ 		subflow->remote_nonce = mp_opt.nonce;
+ 		WRITE_ONCE(subflow->remote_id, mp_opt.join_id);
+-		pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u backup=%d",
++		pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u backup=%d\n",
+ 			 subflow, subflow->thmac, subflow->remote_nonce,
+ 			 subflow->backup);
+ 
+@@ -585,7 +585,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 			MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKBACKUPRX);
+ 
+ 		if (subflow_use_different_dport(msk, sk)) {
+-			pr_debug("synack inet_dport=%d %d",
++			pr_debug("synack inet_dport=%d %d\n",
+ 				 ntohs(inet_sk(sk)->inet_dport),
+ 				 ntohs(inet_sk(parent)->inet_dport));
+ 			MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINPORTSYNACKRX);
+@@ -655,7 +655,7 @@ static int subflow_v4_conn_request(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+ 
+-	pr_debug("subflow=%p", subflow);
++	pr_debug("subflow=%p\n", subflow);
+ 
+ 	/* Never answer to SYNs sent to broadcast or multicast */
+ 	if (skb_rtable(skb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))
+@@ -686,7 +686,7 @@ static int subflow_v6_conn_request(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+ 
+-	pr_debug("subflow=%p", subflow);
++	pr_debug("subflow=%p\n", subflow);
+ 
+ 	if (skb->protocol == htons(ETH_P_IP))
+ 		return subflow_v4_conn_request(sk, skb);
+@@ -807,7 +807,7 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ 	struct mptcp_sock *owner;
+ 	struct sock *child;
+ 
+-	pr_debug("listener=%p, req=%p, conn=%p", listener, req, listener->conn);
++	pr_debug("listener=%p, req=%p, conn=%p\n", listener, req, listener->conn);
+ 
+ 	/* After child creation we must look for MPC even when options
+ 	 * are not parsed
+@@ -898,7 +898,7 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ 			ctx->conn = (struct sock *)owner;
+ 
+ 			if (subflow_use_different_sport(owner, sk)) {
+-				pr_debug("ack inet_sport=%d %d",
++				pr_debug("ack inet_sport=%d %d\n",
+ 					 ntohs(inet_sk(sk)->inet_sport),
+ 					 ntohs(inet_sk((struct sock *)owner)->inet_sport));
+ 				if (!mptcp_pm_sport_in_anno_list(owner, sk)) {
+@@ -961,7 +961,7 @@ enum mapping_status {
+ 
+ static void dbg_bad_map(struct mptcp_subflow_context *subflow, u32 ssn)
+ {
+-	pr_debug("Bad mapping: ssn=%d map_seq=%d map_data_len=%d",
++	pr_debug("Bad mapping: ssn=%d map_seq=%d map_data_len=%d\n",
+ 		 ssn, subflow->map_subflow_seq, subflow->map_data_len);
+ }
+ 
+@@ -1121,7 +1121,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ 
+ 	data_len = mpext->data_len;
+ 	if (data_len == 0) {
+-		pr_debug("infinite mapping received");
++		pr_debug("infinite mapping received\n");
+ 		MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX);
+ 		subflow->map_data_len = 0;
+ 		return MAPPING_INVALID;
+@@ -1133,7 +1133,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ 		if (data_len == 1) {
+ 			bool updated = mptcp_update_rcv_data_fin(msk, mpext->data_seq,
+ 								 mpext->dsn64);
+-			pr_debug("DATA_FIN with no payload seq=%llu", mpext->data_seq);
++			pr_debug("DATA_FIN with no payload seq=%llu\n", mpext->data_seq);
+ 			if (subflow->map_valid) {
+ 				/* A DATA_FIN might arrive in a DSS
+ 				 * option before the previous mapping
+@@ -1159,7 +1159,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ 			data_fin_seq &= GENMASK_ULL(31, 0);
+ 
+ 		mptcp_update_rcv_data_fin(msk, data_fin_seq, mpext->dsn64);
+-		pr_debug("DATA_FIN with mapping seq=%llu dsn64=%d",
++		pr_debug("DATA_FIN with mapping seq=%llu dsn64=%d\n",
+ 			 data_fin_seq, mpext->dsn64);
+ 
+ 		/* Adjust for DATA_FIN using 1 byte of sequence space */
+@@ -1205,7 +1205,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ 	if (unlikely(subflow->map_csum_reqd != csum_reqd))
+ 		return MAPPING_INVALID;
+ 
+-	pr_debug("new map seq=%llu subflow_seq=%u data_len=%u csum=%d:%u",
++	pr_debug("new map seq=%llu subflow_seq=%u data_len=%u csum=%d:%u\n",
+ 		 subflow->map_seq, subflow->map_subflow_seq,
+ 		 subflow->map_data_len, subflow->map_csum_reqd,
+ 		 subflow->map_data_csum);
+@@ -1240,7 +1240,7 @@ static void mptcp_subflow_discard_data(struct sock *ssk, struct sk_buff *skb,
+ 	avail_len = skb->len - offset;
+ 	incr = limit >= avail_len ? avail_len + fin : limit;
+ 
+-	pr_debug("discarding=%d len=%d offset=%d seq=%d", incr, skb->len,
++	pr_debug("discarding=%d len=%d offset=%d seq=%d\n", incr, skb->len,
+ 		 offset, subflow->map_subflow_seq);
+ 	MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DUPDATA);
+ 	tcp_sk(ssk)->copied_seq += incr;
+@@ -1255,12 +1255,16 @@ static void mptcp_subflow_discard_data(struct sock *ssk, struct sk_buff *skb,
+ /* sched mptcp worker to remove the subflow if no more data is pending */
+ static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ssk)
+ {
+-	if (likely(ssk->sk_state != TCP_CLOSE))
++	struct sock *sk = (struct sock *)msk;
++
++	if (likely(ssk->sk_state != TCP_CLOSE &&
++		   (ssk->sk_state != TCP_CLOSE_WAIT ||
++		    inet_sk_state_load(sk) != TCP_ESTABLISHED)))
+ 		return;
+ 
+ 	if (skb_queue_empty(&ssk->sk_receive_queue) &&
+ 	    !test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags))
+-		mptcp_schedule_work((struct sock *)msk);
++		mptcp_schedule_work(sk);
+ }
+ 
+ static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
+@@ -1337,7 +1341,7 @@ static bool subflow_check_data_avail(struct sock *ssk)
+ 
+ 		old_ack = READ_ONCE(msk->ack_seq);
+ 		ack_seq = mptcp_subflow_get_mapped_dsn(subflow);
+-		pr_debug("msk ack_seq=%llx subflow ack_seq=%llx", old_ack,
++		pr_debug("msk ack_seq=%llx subflow ack_seq=%llx\n", old_ack,
+ 			 ack_seq);
+ 		if (unlikely(before64(ack_seq, old_ack))) {
+ 			mptcp_subflow_discard_data(ssk, skb, old_ack - ack_seq);
+@@ -1409,7 +1413,7 @@ bool mptcp_subflow_data_available(struct sock *sk)
+ 		subflow->map_valid = 0;
+ 		WRITE_ONCE(subflow->data_avail, false);
+ 
+-		pr_debug("Done with mapping: seq=%u data_len=%u",
++		pr_debug("Done with mapping: seq=%u data_len=%u\n",
+ 			 subflow->map_subflow_seq,
+ 			 subflow->map_data_len);
+ 	}
+@@ -1519,7 +1523,7 @@ void mptcpv6_handle_mapped(struct sock *sk, bool mapped)
+ 
+ 	target = mapped ? &subflow_v6m_specific : subflow_default_af_ops(sk);
+ 
+-	pr_debug("subflow=%p family=%d ops=%p target=%p mapped=%d",
++	pr_debug("subflow=%p family=%d ops=%p target=%p mapped=%d\n",
+ 		 subflow, sk->sk_family, icsk->icsk_af_ops, target, mapped);
+ 
+ 	if (likely(icsk->icsk_af_ops == target))
+@@ -1612,7 +1616,7 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
+ 		goto failed;
+ 
+ 	mptcp_crypto_key_sha(subflow->remote_key, &remote_token, NULL);
+-	pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d", msk,
++	pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d\n", msk,
+ 		 remote_token, local_id, remote_id);
+ 	subflow->remote_token = remote_token;
+ 	WRITE_ONCE(subflow->remote_id, remote_id);
+@@ -1747,7 +1751,7 @@ int mptcp_subflow_create_socket(struct sock *sk, unsigned short family,
+ 	SOCK_INODE(sf)->i_gid = SOCK_INODE(sk->sk_socket)->i_gid;
+ 
+ 	subflow = mptcp_subflow_ctx(sf->sk);
+-	pr_debug("subflow=%p", subflow);
++	pr_debug("subflow=%p\n", subflow);
+ 
+ 	*new_sock = sf;
+ 	sock_hold(sk);
+@@ -1776,7 +1780,7 @@ static struct mptcp_subflow_context *subflow_create_ctx(struct sock *sk,
+ 	INIT_LIST_HEAD(&ctx->node);
+ 	INIT_LIST_HEAD(&ctx->delegated_node);
+ 
+-	pr_debug("subflow=%p", ctx);
++	pr_debug("subflow=%p\n", ctx);
+ 
+ 	ctx->tcp_sock = sk;
+ 	WRITE_ONCE(ctx->local_id, -1);
+@@ -1927,7 +1931,7 @@ static int subflow_ulp_init(struct sock *sk)
+ 		goto out;
+ 	}
+ 
+-	pr_debug("subflow=%p, family=%d", ctx, sk->sk_family);
++	pr_debug("subflow=%p, family=%d\n", ctx, sk->sk_family);
+ 
+ 	tp->is_mptcp = 1;
+ 	ctx->icsk_af_ops = icsk->icsk_af_ops;
+diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
+index 2389747256793..19a49af5a9e52 100644
+--- a/net/sched/sch_fq.c
++++ b/net/sched/sch_fq.c
+@@ -663,7 +663,9 @@ static struct sk_buff *fq_dequeue(struct Qdisc *sch)
+ 			pband = &q->band_flows[q->band_nr];
+ 			pband->credit = min(pband->credit + pband->quantum,
+ 					    pband->quantum);
+-			goto begin;
++			if (pband->credit > 0)
++				goto begin;
++			retry = 0;
+ 		}
+ 		if (q->time_next_delayed_flow != ~0ULL)
+ 			qdisc_watchdog_schedule_range_ns(&q->watchdog,
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index 5adf0c0a6c1ac..7d315a18612ba 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -2260,12 +2260,6 @@ enum sctp_disposition sctp_sf_do_5_2_4_dupcook(
+ 		}
+ 	}
+ 
+-	/* Update socket peer label if first association. */
+-	if (security_sctp_assoc_request(new_asoc, chunk->head_skb ?: chunk->skb)) {
+-		sctp_association_free(new_asoc);
+-		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+-	}
+-
+ 	/* Set temp so that it won't be added into hashtable */
+ 	new_asoc->temp = 1;
+ 
+@@ -2274,6 +2268,22 @@ enum sctp_disposition sctp_sf_do_5_2_4_dupcook(
+ 	 */
+ 	action = sctp_tietags_compare(new_asoc, asoc);
+ 
++	/* In cases C and E the association doesn't enter the ESTABLISHED
++	 * state, so there is no need to call security_sctp_assoc_request().
++	 */
++	switch (action) {
++	case 'A': /* Association restart. */
++	case 'B': /* Collision case B. */
++	case 'D': /* Collision case D. */
++		/* Update socket peer label if first association. */
++		if (security_sctp_assoc_request((struct sctp_association *)asoc,
++						chunk->head_skb ?: chunk->skb)) {
++			sctp_association_free(new_asoc);
++			return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++		}
++		break;
++	}
++
+ 	switch (action) {
+ 	case 'A': /* Association restart. */
+ 		retval = sctp_sf_do_dupcook_a(net, ep, asoc, chunk, commands,
+diff --git a/security/apparmor/policy_unpack_test.c b/security/apparmor/policy_unpack_test.c
+index 5c9bde25e56df..2b8003eb4f463 100644
+--- a/security/apparmor/policy_unpack_test.c
++++ b/security/apparmor/policy_unpack_test.c
+@@ -80,14 +80,14 @@ static struct aa_ext *build_aa_ext_struct(struct policy_unpack_fixture *puf,
+ 	*(buf + 1) = strlen(TEST_U32_NAME) + 1;
+ 	strscpy(buf + 3, TEST_U32_NAME, e->end - (void *)(buf + 3));
+ 	*(buf + 3 + strlen(TEST_U32_NAME) + 1) = AA_U32;
+-	*((u32 *)(buf + 3 + strlen(TEST_U32_NAME) + 2)) = TEST_U32_DATA;
++	*((__le32 *)(buf + 3 + strlen(TEST_U32_NAME) + 2)) = cpu_to_le32(TEST_U32_DATA);
+ 
+ 	buf = e->start + TEST_NAMED_U64_BUF_OFFSET;
+ 	*buf = AA_NAME;
+ 	*(buf + 1) = strlen(TEST_U64_NAME) + 1;
+ 	strscpy(buf + 3, TEST_U64_NAME, e->end - (void *)(buf + 3));
+ 	*(buf + 3 + strlen(TEST_U64_NAME) + 1) = AA_U64;
+-	*((u64 *)(buf + 3 + strlen(TEST_U64_NAME) + 2)) = TEST_U64_DATA;
++	*((__le64 *)(buf + 3 + strlen(TEST_U64_NAME) + 2)) = cpu_to_le64(TEST_U64_DATA);
+ 
+ 	buf = e->start + TEST_NAMED_BLOB_BUF_OFFSET;
+ 	*buf = AA_NAME;
+@@ -103,7 +103,7 @@ static struct aa_ext *build_aa_ext_struct(struct policy_unpack_fixture *puf,
+ 	*(buf + 1) = strlen(TEST_ARRAY_NAME) + 1;
+ 	strscpy(buf + 3, TEST_ARRAY_NAME, e->end - (void *)(buf + 3));
+ 	*(buf + 3 + strlen(TEST_ARRAY_NAME) + 1) = AA_ARRAY;
+-	*((u16 *)(buf + 3 + strlen(TEST_ARRAY_NAME) + 2)) = TEST_ARRAY_SIZE;
++	*((__le16 *)(buf + 3 + strlen(TEST_ARRAY_NAME) + 2)) = cpu_to_le16(TEST_ARRAY_SIZE);
+ 
+ 	return e;
+ }
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index bfa61e005aace..400eca4ad0fb6 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -6660,8 +6660,8 @@ static int selinux_inode_notifysecctx(struct inode *inode, void *ctx, u32 ctxlen
+  */
+ static int selinux_inode_setsecctx(struct dentry *dentry, void *ctx, u32 ctxlen)
+ {
+-	return __vfs_setxattr_noperm(&nop_mnt_idmap, dentry, XATTR_NAME_SELINUX,
+-				     ctx, ctxlen, 0);
++	return __vfs_setxattr_locked(&nop_mnt_idmap, dentry, XATTR_NAME_SELINUX,
++				     ctx, ctxlen, 0, NULL);
+ }
+ 
+ static int selinux_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen)
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index c1fe422cfbe19..081129be5b62c 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -4874,8 +4874,8 @@ static int smack_inode_notifysecctx(struct inode *inode, void *ctx, u32 ctxlen)
+ 
+ static int smack_inode_setsecctx(struct dentry *dentry, void *ctx, u32 ctxlen)
+ {
+-	return __vfs_setxattr_noperm(&nop_mnt_idmap, dentry, XATTR_NAME_SMACK,
+-				     ctx, ctxlen, 0);
++	return __vfs_setxattr_locked(&nop_mnt_idmap, dentry, XATTR_NAME_SMACK,
++				     ctx, ctxlen, 0, NULL);
+ }
+ 
+ static int smack_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen)
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 42a7051410501..e115fe1836349 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -537,6 +537,9 @@ static struct snd_seq_client *get_event_dest_client(struct snd_seq_event *event,
+ 		return NULL;
+ 	if (! dest->accept_input)
+ 		goto __not_avail;
++	if (snd_seq_ev_is_ump(event))
++		return dest; /* ok - no filter checks */
++
+ 	if ((dest->filter & SNDRV_SEQ_FILTER_USE_EVENT) &&
+ 	    ! test_bit(event->type, dest->event_filter))
+ 		goto __not_avail;
+diff --git a/sound/pci/hda/cs35l56_hda.c b/sound/pci/hda/cs35l56_hda.c
+index e134ede6c5aa5..357fd59aa49e4 100644
+--- a/sound/pci/hda/cs35l56_hda.c
++++ b/sound/pci/hda/cs35l56_hda.c
+@@ -980,7 +980,7 @@ int cs35l56_hda_common_probe(struct cs35l56_hda *cs35l56, int hid, int id)
+ 		goto err;
+ 	}
+ 
+-	cs35l56->base.cal_index = cs35l56->index;
++	cs35l56->base.cal_index = -1;
+ 
+ 	cs35l56_init_cs_dsp(&cs35l56->base, &cs35l56->cs_dsp);
+ 	cs35l56->cs_dsp.client_ops = &cs35l56_hda_client_ops;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index c9d76bca99232..1a7b7e790fca9 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10221,6 +10221,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8c15, "HP Spectre x360 2-in-1 Laptop 14-eu0xxx", ALC245_FIXUP_HP_SPECTRE_X360_EU0XXX),
+ 	SND_PCI_QUIRK(0x103c, 0x8c16, "HP Spectre 16", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8c17, "HP Spectre 16", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x103c, 0x8c21, "HP Pavilion Plus Laptop 14-ey0XXX", ALC245_FIXUP_HP_X360_MUTE_LEDS),
+ 	SND_PCI_QUIRK(0x103c, 0x8c46, "HP EliteBook 830 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c47, "HP EliteBook 840 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c48, "HP EliteBook 860 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+@@ -10259,6 +10260,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8ca2, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca4, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca7, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8cbd, "HP Pavilion Aero Laptop 13-bg0xxx", ALC245_FIXUP_HP_X360_MUTE_LEDS),
+ 	SND_PCI_QUIRK(0x103c, 0x8cdd, "HP Spectre", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8cde, "HP Spectre", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8cdf, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+diff --git a/sound/soc/amd/acp/acp-legacy-mach.c b/sound/soc/amd/acp/acp-legacy-mach.c
+index 47c3b5f167f59..0d529e32e552b 100644
+--- a/sound/soc/amd/acp/acp-legacy-mach.c
++++ b/sound/soc/amd/acp/acp-legacy-mach.c
+@@ -227,6 +227,8 @@ static const struct platform_device_id board_ids[] = {
+ 	},
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(platform, board_ids);
++
+ static struct platform_driver acp_asoc_audio = {
+ 	.driver = {
+ 		.pm = &snd_soc_pm_ops,
+diff --git a/sound/soc/codecs/cs-amp-lib-test.c b/sound/soc/codecs/cs-amp-lib-test.c
+index 15f991b2e16e2..8169ec88a8ba8 100644
+--- a/sound/soc/codecs/cs-amp-lib-test.c
++++ b/sound/soc/codecs/cs-amp-lib-test.c
+@@ -38,6 +38,7 @@ static void cs_amp_lib_test_init_dummy_cal_blob(struct kunit *test, int num_amps
+ {
+ 	struct cs_amp_lib_test_priv *priv = test->priv;
+ 	unsigned int blob_size;
++	int i;
+ 
+ 	blob_size = offsetof(struct cirrus_amp_efi_data, data) +
+ 		    sizeof(struct cirrus_amp_cal_data) * num_amps;
+@@ -49,6 +50,14 @@ static void cs_amp_lib_test_init_dummy_cal_blob(struct kunit *test, int num_amps
+ 	priv->cal_blob->count = num_amps;
+ 
+ 	get_random_bytes(priv->cal_blob->data, sizeof(struct cirrus_amp_cal_data) * num_amps);
++
++	/* Ensure all timestamps are non-zero to mark the entry valid. */
++	for (i = 0; i < num_amps; i++)
++		priv->cal_blob->data[i].calTime[0] |= 1;
++
++	/* Ensure that all UIDs are non-zero and unique. */
++	for (i = 0; i < num_amps; i++)
++		*(u8 *)&priv->cal_blob->data[i].calTarget[0] = i + 1;
+ }
+ 
+ static u64 cs_amp_lib_test_get_target_uid(struct kunit *test)
+diff --git a/sound/soc/codecs/cs-amp-lib.c b/sound/soc/codecs/cs-amp-lib.c
+index 605964af8afad..51b128c806718 100644
+--- a/sound/soc/codecs/cs-amp-lib.c
++++ b/sound/soc/codecs/cs-amp-lib.c
+@@ -182,6 +182,10 @@ static int _cs_amp_get_efi_calibration_data(struct device *dev, u64 target_uid,
+ 		for (i = 0; i < efi_data->count; ++i) {
+ 			u64 cal_target = cs_amp_cal_target_u64(&efi_data->data[i]);
+ 
++			/* Skip empty entries */
++			if (!efi_data->data[i].calTime[0] && !efi_data->data[i].calTime[1])
++				continue;
++
+ 			/* Skip entries with unpopulated silicon ID */
+ 			if (cal_target == 0)
+ 				continue;
+@@ -193,7 +197,8 @@ static int _cs_amp_get_efi_calibration_data(struct device *dev, u64 target_uid,
+ 		}
+ 	}
+ 
+-	if (!cal && (amp_index >= 0) && (amp_index < efi_data->count)) {
++	if (!cal && (amp_index >= 0) && (amp_index < efi_data->count) &&
++	    (efi_data->data[amp_index].calTime[0] || efi_data->data[amp_index].calTime[1])) {
+ 		u64 cal_target = cs_amp_cal_target_u64(&efi_data->data[amp_index]);
+ 
+ 		/*
+diff --git a/sound/soc/sof/amd/acp-dsp-offset.h b/sound/soc/sof/amd/acp-dsp-offset.h
+index 59afbe2e0f420..072b703f9b3f3 100644
+--- a/sound/soc/sof/amd/acp-dsp-offset.h
++++ b/sound/soc/sof/amd/acp-dsp-offset.h
+@@ -76,13 +76,15 @@
+ #define DSP_SW_INTR_CNTL_OFFSET			0x0
+ #define DSP_SW_INTR_STAT_OFFSET			0x4
+ #define DSP_SW_INTR_TRIG_OFFSET			0x8
+-#define ACP_ERROR_STATUS			0x18C4
++#define ACP3X_ERROR_STATUS			0x18C4
++#define ACP6X_ERROR_STATUS			0x1A4C
+ #define ACP3X_AXI2DAGB_SEM_0			0x1880
+ #define ACP5X_AXI2DAGB_SEM_0			0x1884
+ #define ACP6X_AXI2DAGB_SEM_0			0x1874
+ 
+ /* ACP common registers to report errors related to I2S & SoundWire interfaces */
+-#define ACP_SW0_I2S_ERROR_REASON		0x18B4
++#define ACP3X_SW_I2S_ERROR_REASON		0x18C8
++#define ACP6X_SW0_I2S_ERROR_REASON		0x18B4
+ #define ACP_SW1_I2S_ERROR_REASON		0x1A50
+ 
+ /* Registers from ACP_SHA block */
+diff --git a/sound/soc/sof/amd/acp.c b/sound/soc/sof/amd/acp.c
+index 74fd5f2b148b8..85b58c8ccd0da 100644
+--- a/sound/soc/sof/amd/acp.c
++++ b/sound/soc/sof/amd/acp.c
+@@ -92,6 +92,7 @@ static int config_dma_channel(struct acp_dev_data *adata, unsigned int ch,
+ 			      unsigned int idx, unsigned int dscr_count)
+ {
+ 	struct snd_sof_dev *sdev = adata->dev;
++	const struct sof_amd_acp_desc *desc = get_chip_info(sdev->pdata);
+ 	unsigned int val, status;
+ 	int ret;
+ 
+@@ -102,7 +103,7 @@ static int config_dma_channel(struct acp_dev_data *adata, unsigned int ch,
+ 					    val & (1 << ch), ACP_REG_POLL_INTERVAL,
+ 					    ACP_REG_POLL_TIMEOUT_US);
+ 	if (ret < 0) {
+-		status = snd_sof_dsp_read(sdev, ACP_DSP_BAR, ACP_ERROR_STATUS);
++		status = snd_sof_dsp_read(sdev, ACP_DSP_BAR, desc->acp_error_stat);
+ 		val = snd_sof_dsp_read(sdev, ACP_DSP_BAR, ACP_DMA_ERR_STS_0 + ch * sizeof(u32));
+ 
+ 		dev_err(sdev->dev, "ACP_DMA_ERR_STS :0x%x ACP_ERROR_STATUS :0x%x\n", val, status);
+@@ -263,6 +264,17 @@ int configure_and_run_sha_dma(struct acp_dev_data *adata, void *image_addr,
+ 	snd_sof_dsp_write(sdev, ACP_DSP_BAR, ACP_SHA_DMA_STRT_ADDR, start_addr);
+ 	snd_sof_dsp_write(sdev, ACP_DSP_BAR, ACP_SHA_DMA_DESTINATION_ADDR, dest_addr);
+ 	snd_sof_dsp_write(sdev, ACP_DSP_BAR, ACP_SHA_MSG_LENGTH, image_length);
++
++	/* psp_send_cmd only required for vangogh platform (rev - 5) */
++	if (desc->rev == 5 && !(adata->quirks && adata->quirks->skip_iram_dram_size_mod)) {
++		/* Modify IRAM and DRAM size */
++		ret = psp_send_cmd(adata, MBOX_ACP_IRAM_DRAM_FENCE_COMMAND | IRAM_DRAM_FENCE_2);
++		if (ret)
++			return ret;
++		ret = psp_send_cmd(adata, MBOX_ACP_IRAM_DRAM_FENCE_COMMAND | MBOX_ISREADY_FLAG);
++		if (ret)
++			return ret;
++	}
+ 	snd_sof_dsp_write(sdev, ACP_DSP_BAR, ACP_SHA_DMA_CMD, ACP_SHA_RUN);
+ 
+ 	ret = snd_sof_dsp_read_poll_timeout(sdev, ACP_DSP_BAR, ACP_SHA_TRANSFER_BYTE_CNT,
+@@ -280,17 +292,6 @@ int configure_and_run_sha_dma(struct acp_dev_data *adata, void *image_addr,
+ 			return ret;
+ 	}
+ 
+-	/* psp_send_cmd only required for vangogh platform (rev - 5) */
+-	if (desc->rev == 5 && !(adata->quirks && adata->quirks->skip_iram_dram_size_mod)) {
+-		/* Modify IRAM and DRAM size */
+-		ret = psp_send_cmd(adata, MBOX_ACP_IRAM_DRAM_FENCE_COMMAND | IRAM_DRAM_FENCE_2);
+-		if (ret)
+-			return ret;
+-		ret = psp_send_cmd(adata, MBOX_ACP_IRAM_DRAM_FENCE_COMMAND | MBOX_ISREADY_FLAG);
+-		if (ret)
+-			return ret;
+-	}
+-
+ 	ret = snd_sof_dsp_read_poll_timeout(sdev, ACP_DSP_BAR, ACP_SHA_DSP_FW_QUALIFIER,
+ 					    fw_qualifier, fw_qualifier & DSP_FW_RUN_ENABLE,
+ 					    ACP_REG_POLL_INTERVAL, ACP_DMA_COMPLETE_TIMEOUT_US);
+@@ -402,9 +403,11 @@ static irqreturn_t acp_irq_handler(int irq, void *dev_id)
+ 
+ 	if (val & ACP_ERROR_IRQ_MASK) {
+ 		snd_sof_dsp_write(sdev, ACP_DSP_BAR, desc->ext_intr_stat, ACP_ERROR_IRQ_MASK);
+-		snd_sof_dsp_write(sdev, ACP_DSP_BAR, base + ACP_SW0_I2S_ERROR_REASON, 0);
+-		snd_sof_dsp_write(sdev, ACP_DSP_BAR, base + ACP_SW1_I2S_ERROR_REASON, 0);
+-		snd_sof_dsp_write(sdev, ACP_DSP_BAR, base + ACP_ERROR_STATUS, 0);
++		snd_sof_dsp_write(sdev, ACP_DSP_BAR, desc->acp_sw0_i2s_err_reason, 0);
++		/* ACP_SW1_I2S_ERROR_REASON is newly added register from rmb platform onwards */
++		if (desc->rev >= 6)
++			snd_sof_dsp_write(sdev, ACP_DSP_BAR, ACP_SW1_I2S_ERROR_REASON, 0);
++		snd_sof_dsp_write(sdev, ACP_DSP_BAR, desc->acp_error_stat, 0);
+ 		irq_flag = 1;
+ 	}
+ 
+@@ -430,6 +433,7 @@ static int acp_power_on(struct snd_sof_dev *sdev)
+ 	const struct sof_amd_acp_desc *desc = get_chip_info(sdev->pdata);
+ 	unsigned int base = desc->pgfsm_base;
+ 	unsigned int val;
++	unsigned int acp_pgfsm_status_mask, acp_pgfsm_cntl_mask;
+ 	int ret;
+ 
+ 	val = snd_sof_dsp_read(sdev, ACP_DSP_BAR, base + PGFSM_STATUS_OFFSET);
+@@ -437,9 +441,23 @@ static int acp_power_on(struct snd_sof_dev *sdev)
+ 	if (val == ACP_POWERED_ON)
+ 		return 0;
+ 
+-	if (val & ACP_PGFSM_STATUS_MASK)
++	switch (desc->rev) {
++	case 3:
++	case 5:
++		acp_pgfsm_status_mask = ACP3X_PGFSM_STATUS_MASK;
++		acp_pgfsm_cntl_mask = ACP3X_PGFSM_CNTL_POWER_ON_MASK;
++		break;
++	case 6:
++		acp_pgfsm_status_mask = ACP6X_PGFSM_STATUS_MASK;
++		acp_pgfsm_cntl_mask = ACP6X_PGFSM_CNTL_POWER_ON_MASK;
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	if (val & acp_pgfsm_status_mask)
+ 		snd_sof_dsp_write(sdev, ACP_DSP_BAR, base + PGFSM_CONTROL_OFFSET,
+-				  ACP_PGFSM_CNTL_POWER_ON_MASK);
++				  acp_pgfsm_cntl_mask);
+ 
+ 	ret = snd_sof_dsp_read_poll_timeout(sdev, ACP_DSP_BAR, base + PGFSM_STATUS_OFFSET, val,
+ 					    !val, ACP_REG_POLL_INTERVAL, ACP_REG_POLL_TIMEOUT_US);
+diff --git a/sound/soc/sof/amd/acp.h b/sound/soc/sof/amd/acp.h
+index 87e79d500865a..61b28df8c9081 100644
+--- a/sound/soc/sof/amd/acp.h
++++ b/sound/soc/sof/amd/acp.h
+@@ -25,8 +25,11 @@
+ #define ACP_REG_POLL_TIMEOUT_US                 2000
+ #define ACP_DMA_COMPLETE_TIMEOUT_US		5000
+ 
+-#define ACP_PGFSM_CNTL_POWER_ON_MASK		0x01
+-#define ACP_PGFSM_STATUS_MASK			0x03
++#define ACP3X_PGFSM_CNTL_POWER_ON_MASK		0x01
++#define ACP3X_PGFSM_STATUS_MASK			0x03
++#define ACP6X_PGFSM_CNTL_POWER_ON_MASK		0x07
++#define ACP6X_PGFSM_STATUS_MASK			0x0F
++
+ #define ACP_POWERED_ON				0x00
+ #define ACP_ASSERT_RESET			0x01
+ #define ACP_RELEASE_RESET			0x00
+@@ -203,6 +206,8 @@ struct sof_amd_acp_desc {
+ 	u32 probe_reg_offset;
+ 	u32 reg_start_addr;
+ 	u32 reg_end_addr;
++	u32 acp_error_stat;
++	u32 acp_sw0_i2s_err_reason;
+ 	u32 sdw_max_link_count;
+ 	u64 sdw_acpi_dev_addr;
+ };
+diff --git a/sound/soc/sof/amd/pci-acp63.c b/sound/soc/sof/amd/pci-acp63.c
+index fc89844473657..986f5928caedd 100644
+--- a/sound/soc/sof/amd/pci-acp63.c
++++ b/sound/soc/sof/amd/pci-acp63.c
+@@ -35,6 +35,8 @@ static const struct sof_amd_acp_desc acp63_chip_info = {
+ 	.ext_intr_cntl = ACP6X_EXTERNAL_INTR_CNTL,
+ 	.ext_intr_stat	= ACP6X_EXT_INTR_STAT,
+ 	.ext_intr_stat1	= ACP6X_EXT_INTR_STAT1,
++	.acp_error_stat = ACP6X_ERROR_STATUS,
++	.acp_sw0_i2s_err_reason = ACP6X_SW0_I2S_ERROR_REASON,
+ 	.dsp_intr_base	= ACP6X_DSP_SW_INTR_BASE,
+ 	.sram_pte_offset = ACP6X_SRAM_PTE_OFFSET,
+ 	.hw_semaphore_offset = ACP6X_AXI2DAGB_SEM_0,
+diff --git a/sound/soc/sof/amd/pci-rmb.c b/sound/soc/sof/amd/pci-rmb.c
+index 4bc30951f8b0d..a366f904e6f31 100644
+--- a/sound/soc/sof/amd/pci-rmb.c
++++ b/sound/soc/sof/amd/pci-rmb.c
+@@ -33,6 +33,8 @@ static const struct sof_amd_acp_desc rembrandt_chip_info = {
+ 	.pgfsm_base	= ACP6X_PGFSM_BASE,
+ 	.ext_intr_stat	= ACP6X_EXT_INTR_STAT,
+ 	.dsp_intr_base	= ACP6X_DSP_SW_INTR_BASE,
++	.acp_error_stat = ACP6X_ERROR_STATUS,
++	.acp_sw0_i2s_err_reason = ACP6X_SW0_I2S_ERROR_REASON,
+ 	.sram_pte_offset = ACP6X_SRAM_PTE_OFFSET,
+ 	.hw_semaphore_offset = ACP6X_AXI2DAGB_SEM_0,
+ 	.fusion_dsp_offset = ACP6X_DSP_FUSION_RUNSTALL,
+diff --git a/sound/soc/sof/amd/pci-rn.c b/sound/soc/sof/amd/pci-rn.c
+index e08875bdfa8b1..2b7c53470ce82 100644
+--- a/sound/soc/sof/amd/pci-rn.c
++++ b/sound/soc/sof/amd/pci-rn.c
+@@ -33,6 +33,8 @@ static const struct sof_amd_acp_desc renoir_chip_info = {
+ 	.pgfsm_base	= ACP3X_PGFSM_BASE,
+ 	.ext_intr_stat	= ACP3X_EXT_INTR_STAT,
+ 	.dsp_intr_base	= ACP3X_DSP_SW_INTR_BASE,
++	.acp_error_stat = ACP3X_ERROR_STATUS,
++	.acp_sw0_i2s_err_reason = ACP3X_SW_I2S_ERROR_REASON,
+ 	.sram_pte_offset = ACP3X_SRAM_PTE_OFFSET,
+ 	.hw_semaphore_offset = ACP3X_AXI2DAGB_SEM_0,
+ 	.acp_clkmux_sel	= ACP3X_CLKMUX_SEL,
+diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
+index 5f7d5a5ba89b0..d79bd5fc08db5 100644
+--- a/tools/testing/selftests/iommu/iommufd.c
++++ b/tools/testing/selftests/iommu/iommufd.c
+@@ -803,7 +803,7 @@ TEST_F(iommufd_ioas, copy_area)
+ {
+ 	struct iommu_ioas_copy copy_cmd = {
+ 		.size = sizeof(copy_cmd),
+-		.flags = IOMMU_IOAS_MAP_FIXED_IOVA,
++		.flags = IOMMU_IOAS_MAP_FIXED_IOVA | IOMMU_IOAS_MAP_WRITEABLE,
+ 		.dst_ioas_id = self->ioas_id,
+ 		.src_ioas_id = self->ioas_id,
+ 		.length = PAGE_SIZE,
+@@ -1296,7 +1296,7 @@ TEST_F(iommufd_ioas, copy_sweep)
+ {
+ 	struct iommu_ioas_copy copy_cmd = {
+ 		.size = sizeof(copy_cmd),
+-		.flags = IOMMU_IOAS_MAP_FIXED_IOVA,
++		.flags = IOMMU_IOAS_MAP_FIXED_IOVA | IOMMU_IOAS_MAP_WRITEABLE,
+ 		.src_ioas_id = self->ioas_id,
+ 		.dst_iova = MOCK_APERTURE_START,
+ 		.length = MOCK_PAGE_SIZE,
+@@ -1586,7 +1586,7 @@ TEST_F(iommufd_mock_domain, user_copy)
+ 	};
+ 	struct iommu_ioas_copy copy_cmd = {
+ 		.size = sizeof(copy_cmd),
+-		.flags = IOMMU_IOAS_MAP_FIXED_IOVA,
++		.flags = IOMMU_IOAS_MAP_FIXED_IOVA | IOMMU_IOAS_MAP_WRITEABLE,
+ 		.dst_ioas_id = self->ioas_id,
+ 		.dst_iova = MOCK_APERTURE_START,
+ 		.length = BUFFER_SIZE,
+diff --git a/tools/testing/selftests/net/forwarding/local_termination.sh b/tools/testing/selftests/net/forwarding/local_termination.sh
+index 4b364cdf3ef0c..656b1a82d1dca 100755
+--- a/tools/testing/selftests/net/forwarding/local_termination.sh
++++ b/tools/testing/selftests/net/forwarding/local_termination.sh
+@@ -284,6 +284,10 @@ bridge()
+ cleanup()
+ {
+ 	pre_cleanup
++
++	ip link set $h2 down
++	ip link set $h1 down
++
+ 	vrf_cleanup
+ }
+ 
+diff --git a/tools/testing/selftests/net/forwarding/no_forwarding.sh b/tools/testing/selftests/net/forwarding/no_forwarding.sh
+index af3b398d13f01..9e677aa64a06a 100755
+--- a/tools/testing/selftests/net/forwarding/no_forwarding.sh
++++ b/tools/testing/selftests/net/forwarding/no_forwarding.sh
+@@ -233,6 +233,9 @@ cleanup()
+ {
+ 	pre_cleanup
+ 
++	ip link set dev $swp2 down
++	ip link set dev $swp1 down
++
+ 	h2_destroy
+ 	h1_destroy
+ 
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index ed30c0ed55e7a..c0ba79a8ad6da 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -1112,26 +1112,26 @@ chk_csum_nr()
+ 
+ 	print_check "sum"
+ 	count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtDataCsumErr")
+-	if [ "$count" != "$csum_ns1" ]; then
++	if [ -n "$count" ] && [ "$count" != "$csum_ns1" ]; then
+ 		extra_msg+=" ns1=$count"
+ 	fi
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif { [ "$count" != $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 0 ]; } ||
+-	   { [ "$count" -lt $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 1 ]; }; then
++	     { [ "$count" -lt $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 1 ]; }; then
+ 		fail_test "got $count data checksum error[s] expected $csum_ns1"
+ 	else
+ 		print_ok
+ 	fi
+ 	print_check "csum"
+ 	count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtDataCsumErr")
+-	if [ "$count" != "$csum_ns2" ]; then
++	if [ -n "$count" ] && [ "$count" != "$csum_ns2" ]; then
+ 		extra_msg+=" ns2=$count"
+ 	fi
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif { [ "$count" != $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 0 ]; } ||
+-	   { [ "$count" -lt $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 1 ]; }; then
++	     { [ "$count" -lt $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 1 ]; }; then
+ 		fail_test "got $count data checksum error[s] expected $csum_ns2"
+ 	else
+ 		print_ok
+@@ -1169,13 +1169,13 @@ chk_fail_nr()
+ 
+ 	print_check "ftx"
+ 	count=$(mptcp_lib_get_counter ${ns_tx} "MPTcpExtMPFailTx")
+-	if [ "$count" != "$fail_tx" ]; then
++	if [ -n "$count" ] && [ "$count" != "$fail_tx" ]; then
+ 		extra_msg+=",tx=$count"
+ 	fi
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif { [ "$count" != "$fail_tx" ] && [ $allow_tx_lost -eq 0 ]; } ||
+-	   { [ "$count" -gt "$fail_tx" ] && [ $allow_tx_lost -eq 1 ]; }; then
++	     { [ "$count" -gt "$fail_tx" ] && [ $allow_tx_lost -eq 1 ]; }; then
+ 		fail_test "got $count MP_FAIL[s] TX expected $fail_tx"
+ 	else
+ 		print_ok
+@@ -1183,13 +1183,13 @@ chk_fail_nr()
+ 
+ 	print_check "failrx"
+ 	count=$(mptcp_lib_get_counter ${ns_rx} "MPTcpExtMPFailRx")
+-	if [ "$count" != "$fail_rx" ]; then
++	if [ -n "$count" ] && [ "$count" != "$fail_rx" ]; then
+ 		extra_msg+=",rx=$count"
+ 	fi
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif { [ "$count" != "$fail_rx" ] && [ $allow_rx_lost -eq 0 ]; } ||
+-	   { [ "$count" -gt "$fail_rx" ] && [ $allow_rx_lost -eq 1 ]; }; then
++	     { [ "$count" -gt "$fail_rx" ] && [ $allow_rx_lost -eq 1 ]; }; then
+ 		fail_test "got $count MP_FAIL[s] RX expected $fail_rx"
+ 	else
+ 		print_ok
+@@ -3429,14 +3429,12 @@ userspace_tests()
+ 			"signal"
+ 		userspace_pm_chk_get_addr "${ns1}" "10" "id 10 flags signal 10.0.2.1"
+ 		userspace_pm_chk_get_addr "${ns1}" "20" "id 20 flags signal 10.0.3.1"
+-		userspace_pm_rm_addr $ns1 10
+ 		userspace_pm_rm_sf $ns1 "::ffff:10.0.2.1" $MPTCP_LIB_EVENT_SUB_ESTABLISHED
+ 		userspace_pm_chk_dump_addr "${ns1}" \
+-			"id 20 flags signal 10.0.3.1" "after rm_addr 10"
++			"id 20 flags signal 10.0.3.1" "after rm_sf 10"
+ 		userspace_pm_rm_addr $ns1 20
+-		userspace_pm_rm_sf $ns1 10.0.3.1 $MPTCP_LIB_EVENT_SUB_ESTABLISHED
+ 		userspace_pm_chk_dump_addr "${ns1}" "" "after rm_addr 20"
+-		chk_rm_nr 2 2 invert
++		chk_rm_nr 1 1 invert
+ 		chk_mptcp_info subflows 0 subflows 0
+ 		chk_subflows_total 1 1
+ 		kill_events_pids
+@@ -3460,12 +3458,11 @@ userspace_tests()
+ 			"id 20 flags subflow 10.0.3.2" \
+ 			"subflow"
+ 		userspace_pm_chk_get_addr "${ns2}" "20" "id 20 flags subflow 10.0.3.2"
+-		userspace_pm_rm_addr $ns2 20
+ 		userspace_pm_rm_sf $ns2 10.0.3.2 $MPTCP_LIB_EVENT_SUB_ESTABLISHED
+ 		userspace_pm_chk_dump_addr "${ns2}" \
+ 			"" \
+-			"after rm_addr 20"
+-		chk_rm_nr 1 1
++			"after rm_sf 20"
++		chk_rm_nr 0 1
+ 		chk_mptcp_info subflows 0 subflows 0
+ 		chk_subflows_total 1 1
+ 		kill_events_pids
+@@ -3575,27 +3572,28 @@ endpoint_tests()
+ 
+ 	if reset_with_tcp_filter "delete and re-add" ns2 10.0.3.2 REJECT OUTPUT &&
+ 	   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+-		pm_nl_set_limits $ns1 0 2
+-		pm_nl_set_limits $ns2 0 2
++		pm_nl_set_limits $ns1 0 3
++		pm_nl_set_limits $ns2 0 3
++		pm_nl_add_endpoint $ns2 10.0.1.2 id 1 dev ns2eth1 flags subflow
+ 		pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow
+-		test_linkfail=4 speed=20 \
++		test_linkfail=4 speed=5 \
+ 			run_tests $ns1 $ns2 10.0.1.1 &
+ 		local tests_pid=$!
+ 
+ 		wait_mpj $ns2
+ 		pm_nl_check_endpoint "creation" \
+ 			$ns2 10.0.2.2 id 2 flags subflow dev ns2eth2
+-		chk_subflow_nr "before delete" 2
++		chk_subflow_nr "before delete id 2" 2
+ 		chk_mptcp_info subflows 1 subflows 1
+ 
+ 		pm_nl_del_endpoint $ns2 2 10.0.2.2
+ 		sleep 0.5
+-		chk_subflow_nr "after delete" 1
++		chk_subflow_nr "after delete id 2" 1
+ 		chk_mptcp_info subflows 0 subflows 0
+ 
+ 		pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow
+ 		wait_mpj $ns2
+-		chk_subflow_nr "after re-add" 2
++		chk_subflow_nr "after re-add id 2" 2
+ 		chk_mptcp_info subflows 1 subflows 1
+ 
+ 		pm_nl_add_endpoint $ns2 10.0.3.2 id 3 flags subflow
+@@ -3610,10 +3608,23 @@ endpoint_tests()
+ 		chk_subflow_nr "after no reject" 3
+ 		chk_mptcp_info subflows 2 subflows 2
+ 
++		local i
++		for i in $(seq 3); do
++			pm_nl_del_endpoint $ns2 1 10.0.1.2
++			sleep 0.5
++			chk_subflow_nr "after delete id 0 ($i)" 2
++			chk_mptcp_info subflows 2 subflows 2 # only decr for additional sf
++
++			pm_nl_add_endpoint $ns2 10.0.1.2 id 1 dev ns2eth1 flags subflow
++			wait_mpj $ns2
++			chk_subflow_nr "after re-add id 0 ($i)" 3
++			chk_mptcp_info subflows 3 subflows 3
++		done
++
+ 		mptcp_lib_kill_wait $tests_pid
+ 
+-		chk_join_nr 3 3 3
+-		chk_rm_nr 1 1
++		chk_join_nr 6 6 6
++		chk_rm_nr 4 4
+ 	fi
+ }
+ 



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-04 13:51 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-04 13:51 UTC (permalink / raw
  To: gentoo-commits

commit:     da7a9b74230d2b2d4e158cc04234651235bb173c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep  4 13:50:43 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep  4 13:50:43 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=da7a9b74

Update README

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/0000_README b/0000_README
index f658d7dc..21f04f84 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-6.10.7.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.7
 
+Patch:  1007_linux-6.10.8.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.8
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-04 14:06 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-04 14:06 UTC (permalink / raw
  To: gentoo-commits

commit:     090e7ec1c8858bc2901ddad77f9cb6cd62721fc2
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep  4 14:05:29 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep  4 14:05:29 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=090e7ec1

Remove .gitignore section from dtrace patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 2995_dtrace-6.10_p1.patch | 35 -----------------------------------
 1 file changed, 35 deletions(-)

diff --git a/2995_dtrace-6.10_p1.patch b/2995_dtrace-6.10_p1.patch
index 6e03cbf6..97983060 100644
--- a/2995_dtrace-6.10_p1.patch
+++ b/2995_dtrace-6.10_p1.patch
@@ -1,38 +1,3 @@
-diff --git a/.gitignore b/.gitignore
-index c59dc60ba62ef..6722f45561f3a 100644
---- a/.gitignore
-+++ b/.gitignore
-@@ -16,6 +16,7 @@
- *.bin
- *.bz2
- *.c.[012]*.*
-+*.ctf
- *.dt.yaml
- *.dtb
- *.dtbo
-@@ -54,6 +55,7 @@
- Module.symvers
- dtbs-list
- modules.order
-+objects.builtin
- 
- #
- # Top-level generic files
-@@ -64,12 +66,13 @@ modules.order
- /vmlinux.32
- /vmlinux.map
- /vmlinux.symvers
-+/vmlinux.ctfa
- /vmlinux-gdb.py
- /vmlinuz
- /System.map
- /Module.markers
- /modules.builtin
--/modules.builtin.modinfo
-+/modules.builtin.*
- /modules.nsdeps
- 
- #
 diff --git a/Documentation/dontdiff b/Documentation/dontdiff
 index 3c399f132e2db..75b9655e57914 100644
 --- a/Documentation/dontdiff



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-05 13:58 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-05 13:58 UTC (permalink / raw
  To: gentoo-commits

commit:     c7e567fb7f29fa11e8063bdf16407bafdf75ed1e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep  5 13:58:06 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep  5 13:58:06 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c7e567fb

parisc: Delay write-protection until mark_rodata_ro() call

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                              |  4 +++
 1710_parisc-Delay-write-protection.patch | 61 ++++++++++++++++++++++++++++++++
 2 files changed, 65 insertions(+)

diff --git a/0000_README b/0000_README
index 21f04f84..d179ade8 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1700_sparc-address-warray-bound-warnings.patch
 From:		https://github.com/KSPP/linux/issues/109
 Desc:		Address -Warray-bounds warnings 
 
+Patch:  1710_parisc-Delay-write-protection.patch
+From:		https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
+Desc:		parisc: Delay write-protection until mark_rodata_ro() call
+
 Patch:  1730_parisc-Disable-prctl.patch
 From:	  https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:	  prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc

diff --git a/1710_parisc-Delay-write-protection.patch b/1710_parisc-Delay-write-protection.patch
new file mode 100644
index 00000000..efdd4c27
--- /dev/null
+++ b/1710_parisc-Delay-write-protection.patch
@@ -0,0 +1,61 @@
+From 213aa670153ed675a007c1f35c5db544b0fefc94 Mon Sep 17 00:00:00 2001
+From: Helge Deller <deller@gmx.de>
+Date: Sat, 31 Aug 2024 14:02:06 +0200
+Subject: parisc: Delay write-protection until mark_rodata_ro() call
+
+Do not write-protect the kernel read-only and __ro_after_init sections
+earlier than before mark_rodata_ro() is called.  This fixes a boot issue on
+parisc which is triggered by commit 91a1d97ef482 ("jump_label,module: Don't
+alloc static_key_mod for __ro_after_init keys"). That commit may modify
+static key contents in the __ro_after_init section at bootup, so this
+section needs to be writable at least until mark_rodata_ro() is called.
+
+Signed-off-by: Helge Deller <deller@gmx.de>
+Reported-by: matoro <matoro_mailinglist_kernel@matoro.tk>
+Reported-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
+Tested-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
+Link: https://lore.kernel.org/linux-parisc/096cad5aada514255cd7b0b9dbafc768@matoro.tk/#r
+Fixes: 91a1d97ef482 ("jump_label,module: Don't alloc static_key_mod for __ro_after_init keys")
+Cc: stable@vger.kernel.org # v6.10+
+---
+ arch/parisc/mm/init.c | 16 +++++++++++-----
+ 1 file changed, 11 insertions(+), 5 deletions(-)
+
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 34d91cb8b25905..96970fa75e4ac9 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -459,7 +459,6 @@ void free_initmem(void)
+ 	unsigned long kernel_end  = (unsigned long)&_end;
+ 
+ 	/* Remap kernel text and data, but do not touch init section yet. */
+-	kernel_set_to_readonly = true;
+ 	map_pages(init_end, __pa(init_end), kernel_end - init_end,
+ 		  PAGE_KERNEL, 0);
+ 
+@@ -493,11 +492,18 @@ void free_initmem(void)
+ #ifdef CONFIG_STRICT_KERNEL_RWX
+ void mark_rodata_ro(void)
+ {
+-	/* rodata memory was already mapped with KERNEL_RO access rights by
+-           pagetable_init() and map_pages(). No need to do additional stuff here */
+-	unsigned long roai_size = __end_ro_after_init - __start_ro_after_init;
++	unsigned long start = (unsigned long) &__start_rodata;
++	unsigned long end = (unsigned long) &__end_rodata;
++
++	pr_info("Write protecting the kernel read-only data: %luk\n",
++	       (end - start) >> 10);
++
++	kernel_set_to_readonly = true;
++	map_pages(start, __pa(start), end - start, PAGE_KERNEL, 0);
+ 
+-	pr_info("Write protected read-only-after-init data: %luk\n", roai_size >> 10);
++	/* force the kernel to see the new page table entries */
++	flush_cache_all();
++	flush_tlb_all();
+ }
+ #endif
+ 
+-- 
+cgit 1.2.3-korg
+



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-07 14:23 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-07 14:23 UTC (permalink / raw
  To: gentoo-commits

commit:     72f43edb42071a1e39ccfe03e310ad57f675af65
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep  5 19:09:56 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep  5 19:09:56 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=72f43edb

dtrace patch for 6.10.X (CTF, modules.builtin.objs) p2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +-
 2995_dtrace-6.10_p2.patch | 2368 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2370 insertions(+), 2 deletions(-)

diff --git a/0000_README b/0000_README
index d179ade8..ad96829d 100644
--- a/0000_README
+++ b/0000_README
@@ -119,9 +119,9 @@ Patch:  2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
 From:   https://lore.kernel.org/bpf/
 Desc:   libbpf: workaround -Wmaybe-uninitialized false positive
 
-Patch:  2995_dtrace-6.10_p1.patch
+Patch:  2995_dtrace-6.10_p2.patch
 From:   https://github.com/thesamesam/linux/tree/dtrace-sam/v2/6.10
-Desc:   dtrace patch 6.10_p1
+Desc:   dtrace patch for 6.10.X (CTF, modules.builtin.objs)
 
 Patch:  3000_Support-printing-firmware-info.patch
 From:   https://bugs.gentoo.org/732852

diff --git a/2995_dtrace-6.10_p2.patch b/2995_dtrace-6.10_p2.patch
new file mode 100644
index 00000000..3686ca5b
--- /dev/null
+++ b/2995_dtrace-6.10_p2.patch
@@ -0,0 +1,2368 @@
+diff --git a/Documentation/dontdiff b/Documentation/dontdiff
+index 3c399f132e2db..75b9655e57914 100644
+--- a/Documentation/dontdiff
++++ b/Documentation/dontdiff
+@@ -179,7 +179,7 @@ mkutf8data
+ modpost
+ modules-only.symvers
+ modules.builtin
+-modules.builtin.modinfo
++modules.builtin.*
+ modules.nsdeps
+ modules.order
+ modversions.h*
+diff --git a/Documentation/kbuild/kbuild.rst b/Documentation/kbuild/kbuild.rst
+index 9c8d1d046ea56..79e104ffee715 100644
+--- a/Documentation/kbuild/kbuild.rst
++++ b/Documentation/kbuild/kbuild.rst
+@@ -17,6 +17,11 @@ modules.builtin
+ This file lists all modules that are built into the kernel. This is used
+ by modprobe to not fail when trying to load something builtin.
+ 
++modules.builtin.objs
++-----------------------
++This file contains object mapping of modules that are built into the kernel
++to their corresponding object files used to build the module.
++
+ modules.builtin.modinfo
+ -----------------------
+ This file contains modinfo from all modules that are built into the kernel.
+diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
+index 5685d7bfe4d0f..8db62fe4dadff 100644
+--- a/Documentation/process/changes.rst
++++ b/Documentation/process/changes.rst
+@@ -63,9 +63,13 @@ cpio                   any              cpio --version
+ GNU tar                1.28             tar --version
+ gtags (optional)       6.6.5            gtags --version
+ mkimage (optional)     2017.01          mkimage --version
++GNU AWK (optional)     5.1.0            gawk --version
++GNU C\ [#f2]_          12.0             gcc --version
++binutils\ [#f2]_       2.36             ld -v
+ ====================== ===============  ========================================
+ 
+ .. [#f1] Sphinx is needed only to build the Kernel documentation
++.. [#f2] These are needed at build-time when CONFIG_CTF is enabled
+ 
+ Kernel compilation
+ ******************
+@@ -198,6 +202,12 @@ platforms. The tool is available via the ``u-boot-tools`` package or can be
+ built from the U-Boot source code. See the instructions at
+ https://docs.u-boot.org/en/latest/build/tools.html#building-tools-for-linux
+ 
++GNU AWK
++-------
++
++GNU AWK is needed if you want kernel builds to generate address range data for
++builtin modules (CONFIG_BUILTIN_MODULE_RANGES).
++
+ System utilities
+ ****************
+ 
+diff --git a/Makefile b/Makefile
+index 2e5ac6ab3d476..635896f269f1f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1024,6 +1024,7 @@ include-$(CONFIG_UBSAN)		+= scripts/Makefile.ubsan
+ include-$(CONFIG_KCOV)		+= scripts/Makefile.kcov
+ include-$(CONFIG_RANDSTRUCT)	+= scripts/Makefile.randstruct
+ include-$(CONFIG_GCC_PLUGINS)	+= scripts/Makefile.gcc-plugins
++include-$(CONFIG_CTF)		+= scripts/Makefile.ctfa-toplevel
+ 
+ include $(addprefix $(srctree)/, $(include-y))
+ 
+@@ -1151,7 +1152,11 @@ PHONY += vmlinux_o
+ vmlinux_o: vmlinux.a $(KBUILD_VMLINUX_LIBS)
+ 	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.vmlinux_o
+ 
+-vmlinux.o modules.builtin.modinfo modules.builtin: vmlinux_o
++MODULES_BUILTIN := modules.builtin.modinfo
++MODULES_BUILTIN += modules.builtin
++MODULES_BUILTIN += modules.builtin.objs
++
++vmlinux.o $(MODULES_BUILTIN): vmlinux_o
+ 	@:
+ 
+ PHONY += vmlinux
+@@ -1490,9 +1495,10 @@ endif # CONFIG_MODULES
+ 
+ # Directories & files removed with 'make clean'
+ CLEAN_FILES += vmlinux.symvers modules-only.symvers \
+-	       modules.builtin modules.builtin.modinfo modules.nsdeps \
++	       modules.builtin modules.builtin.* modules.nsdeps \
+ 	       compile_commands.json rust/test \
+-	       rust-project.json .vmlinux.objs .vmlinux.export.c
++	       rust-project.json .vmlinux.objs .vmlinux.export.c \
++	       vmlinux.ctfa
+ 
+ # Directories & files removed with 'make mrproper'
+ MRPROPER_FILES += include/config include/generated          \
+@@ -1586,6 +1592,8 @@ help:
+ 	@echo  '                    (requires a recent binutils and recent build (System.map))'
+ 	@echo  '  dir/file.ko     - Build module including final link'
+ 	@echo  '  modules_prepare - Set up for building external modules'
++	@echo  '  ctf             - Generate CTF type information, installed by make ctf_install'
++	@echo  '  ctf_install     - Install CTF to INSTALL_MOD_PATH (default: /)'
+ 	@echo  '  tags/TAGS	  - Generate tags file for editors'
+ 	@echo  '  cscope	  - Generate cscope index'
+ 	@echo  '  gtags           - Generate GNU GLOBAL index'
+@@ -1942,7 +1950,7 @@ clean: $(clean-dirs)
+ 	$(call cmd,rmfiles)
+ 	@find $(or $(KBUILD_EXTMOD), .) $(RCS_FIND_IGNORE) \
+ 		\( -name '*.[aios]' -o -name '*.rsi' -o -name '*.ko' -o -name '.*.cmd' \
+-		-o -name '*.ko.*' \
++		-o -name '*.ko.*' -o -name '*.ctf' \
+ 		-o -name '*.dtb' -o -name '*.dtbo' \
+ 		-o -name '*.dtb.S' -o -name '*.dtbo.S' \
+ 		-o -name '*.dt.yaml' -o -name 'dtbs-list' \
+diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
+index 01067a2bc43b7..6464089596088 100644
+--- a/arch/arm/vdso/Makefile
++++ b/arch/arm/vdso/Makefile
+@@ -14,6 +14,12 @@ obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+ ccflags-y := -fPIC -fno-common -fno-builtin -fno-stack-protector
+ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO32
+ 
++ifdef CONFIG_CTF
++  # CTF in the vDSO would introduce a new section, which would
++  # expand the vDSO to more than a page.
++  ccflags-y += $(call cc-option,-gctf0)
++endif
++
+ ldflags-$(CONFIG_CPU_ENDIAN_BE8) := --be8
+ ldflags-y := -Bsymbolic --no-undefined -soname=linux-vdso.so.1 \
+ 	    -z max-page-size=4096 -shared $(ldflags-y) \
+diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
+index d63930c828397..9ef2050690653 100644
+--- a/arch/arm64/kernel/vdso/Makefile
++++ b/arch/arm64/kernel/vdso/Makefile
+@@ -33,6 +33,12 @@ ldflags-y += -T
+ ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
+ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+ 
++ifdef CONFIG_CTF
++  # CTF in the vDSO would introduce a new section, which would
++  # expand the vDSO to more than a page.
++  ccflags-y += $(call cc-option,-gctf0)
++endif
++
+ # -Wmissing-prototypes and -Wmissing-declarations are removed from
+ # the CFLAGS of vgettimeofday.c to make possible to build the
+ # kernel with CONFIG_WERROR enabled.
+diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
+index d724d46b07c84..95ef763218f7b 100644
+--- a/arch/loongarch/vdso/Makefile
++++ b/arch/loongarch/vdso/Makefile
+@@ -22,6 +22,11 @@ cflags-vdso := $(ccflags-vdso) \
+ 	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+ 	$(call cc-option, -fno-asynchronous-unwind-tables) \
+ 	$(call cc-option, -fno-stack-protector)
++
++ifdef CONFIG_CTF
++  cflags-vdso += $(call cc-option,-gctf0)
++endif
++
+ aflags-vdso := $(ccflags-vdso) \
+ 	-D__ASSEMBLY__ -Wa,-gdwarf-2
+ 
+diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
+index b289b2c1b2946..6b5603f3b6a2d 100644
+--- a/arch/mips/vdso/Makefile
++++ b/arch/mips/vdso/Makefile
+@@ -34,6 +34,10 @@ cflags-vdso := $(ccflags-vdso) \
+ aflags-vdso := $(ccflags-vdso) \
+ 	-D__ASSEMBLY__ -Wa,-gdwarf-2
+ 
++ifdef CONFIG_CTF
++cflags-vdso += $(call cc-option,-gctf0)
++endif
++
+ ifneq ($(c-gettimeofday-y),)
+ CFLAGS_vgettimeofday.o = -include $(c-gettimeofday-y)
+ 
+diff --git a/arch/sparc/vdso/Makefile b/arch/sparc/vdso/Makefile
+index 243dbfc4609d8..432422fc3febb 100644
+--- a/arch/sparc/vdso/Makefile
++++ b/arch/sparc/vdso/Makefile
+@@ -46,6 +46,10 @@ CFL := $(PROFILING) -mcmodel=medlow -fPIC -O2 -fasynchronous-unwind-tables -m64
+        -fno-omit-frame-pointer -foptimize-sibling-calls \
+        -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+ 
++ifdef CONFIG_CTF
++CFL += $(call cc-option,-gctf0)
++endif
++
+ SPARC_REG_CFLAGS = -ffixed-g4 -ffixed-g5 -fcall-used-g5 -fcall-used-g7
+ 
+ $(vobjs): KBUILD_CFLAGS := $(filter-out $(RANDSTRUCT_CFLAGS) $(GCC_PLUGINS_CFLAGS) $(SPARC_REG_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index 215a1b202a918..ffb33a1a7315b 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -54,6 +54,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
+        -fno-omit-frame-pointer -foptimize-sibling-calls \
++       $(call cc-option,-gctf0) \
+        -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+ 
+ ifdef CONFIG_MITIGATION_RETPOLINE
+@@ -131,6 +132,9 @@ KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
+ KBUILD_CFLAGS_32 += -fno-stack-protector
+ KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
+ KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
++ifdef CONFIG_CTF
++  KBUILD_CFLAGS_32 += $(call cc-option,-gctf0)
++endif
+ KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
+ 
+ ifdef CONFIG_MITIGATION_RETPOLINE
+diff --git a/arch/x86/um/vdso/Makefile b/arch/x86/um/vdso/Makefile
+index 6a77ea6434ffd..d99a60494286a 100644
+--- a/arch/x86/um/vdso/Makefile
++++ b/arch/x86/um/vdso/Makefile
+@@ -42,6 +42,10 @@ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
+        -fno-omit-frame-pointer -foptimize-sibling-calls
+ 
++if CONFIG_CTF
++CFL += $(call cc-option,-gctf0)
++endif
++
+ $(vobjs): KBUILD_CFLAGS += $(CFL)
+ 
+ #
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index f00a8e18f389f..2e307c0824574 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -1014,6 +1014,7 @@
+ 	*(.discard.*)							\
+ 	*(.export_symbol)						\
+ 	*(.modinfo)							\
++	*(.ctf)								\
+ 	/* ld.bfd warns about .gnu.version* even when not emitted */	\
+ 	*(.gnu.version*)						\
+ 
+diff --git a/include/linux/module.h b/include/linux/module.h
+index 330ffb59efe51..ec828908470c9 100644
+--- a/include/linux/module.h
++++ b/include/linux/module.h
+@@ -180,7 +180,9 @@ extern void cleanup_module(void);
+ #ifdef MODULE
+ #define MODULE_FILE
+ #else
+-#define MODULE_FILE	MODULE_INFO(file, KBUILD_MODFILE);
++#define MODULE_FILE					                      \
++			MODULE_INFO(file, KBUILD_MODFILE);                    \
++			MODULE_INFO(objs, KBUILD_MODOBJS);
+ #endif
+ 
+ /*
+diff --git a/init/Kconfig b/init/Kconfig
+index 9684e5d2b81c6..c1b00b2e4a43d 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -111,6 +111,12 @@ config PAHOLE_VERSION
+ 	int
+ 	default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
+ 
++config HAVE_CTF_TOOLCHAIN
++	def_bool $(cc-option,-gctf) && $(ld-option,-lbfd -liberty -lctf -lbfd -liberty -lz -ldl -lc -o /dev/null)
++	depends on CC_IS_GCC
++	help
++	  GCC and binutils support CTF generation.
++
+ config CONSTRUCTORS
+ 	bool
+ 
+diff --git a/lib/Kconfig b/lib/Kconfig
+index b0a76dff5c182..61d0be30b3562 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -633,6 +633,16 @@ config DIMLIB
+ #
+ config LIBFDT
+ 	bool
++#
++# CTF support is select'ed if needed
++#
++config CTF
++	bool "Compact Type Format generation"
++	depends on HAVE_CTF_TOOLCHAIN
++	help
++	  Emit a compact, compressed description of the kernel's datatypes and
++	  global variables into the vmlinux.ctfa archive (for in-tree modules)
++	  or into .ctf sections in kernel modules (for out-of-tree modules).
+ 
+ config OID_REGISTRY
+ 	tristate
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 59b6765d86b8f..dab7e6983eace 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -571,6 +571,21 @@ config VMLINUX_MAP
+ 	  pieces of code get eliminated with
+ 	  CONFIG_LD_DEAD_CODE_DATA_ELIMINATION.
+ 
++config BUILTIN_MODULE_RANGES
++	bool "Generate address range information for builtin modules"
++	depends on !LTO
++	depends on VMLINUX_MAP
++	help
++	 When modules are built into the kernel, there will be no module name
++	 associated with its symbols in /proc/kallsyms.  Tracers may want to
++	 identify symbols by module name and symbol name regardless of whether
++	 the module is configured as loadable or not.
++
++	 This option generates modules.builtin.ranges in the build tree with
++	 offset ranges (per ELF section) for the module(s) they belong to.
++	 It also records an anchor symbol to determine the load address of the
++	 section.
++
+ config DEBUG_FORCE_WEAK_PER_CPU
+ 	bool "Force weak per-cpu definitions"
+ 	depends on DEBUG_KERNEL
+diff --git a/scripts/.gitignore b/scripts/.gitignore
+index 3dbb8bb2457bc..11339fa922abd 100644
+--- a/scripts/.gitignore
++++ b/scripts/.gitignore
+@@ -11,3 +11,4 @@
+ /sorttable
+ /target.json
+ /unifdef
+!/Makefile.ctf
+diff --git a/scripts/Makefile b/scripts/Makefile
+index fe56eeef09dd4..8e7eb174d3154 100644
+--- a/scripts/Makefile
++++ b/scripts/Makefile
+@@ -54,6 +54,7 @@ targets += module.lds
+ 
+ subdir-$(CONFIG_GCC_PLUGINS) += gcc-plugins
+ subdir-$(CONFIG_MODVERSIONS) += genksyms
++subdir-$(CONFIG_CTF)         += ctf
+ subdir-$(CONFIG_SECURITY_SELINUX) += selinux
+ 
+ # Let clean descend into subdirs
+diff --git a/scripts/Makefile.ctfa b/scripts/Makefile.ctfa
+new file mode 100644
+index 0000000000000..b65d9d391c29c
+--- /dev/null
++++ b/scripts/Makefile.ctfa
+@@ -0,0 +1,92 @@
++# SPDX-License-Identifier: GPL-2.0-only
++# ===========================================================================
++# Module CTF/CTFA generation
++# ===========================================================================
++
++include include/config/auto.conf
++include $(srctree)/scripts/Kbuild.include
++
++# CTF is already present in every object file if CONFIG_CTF is enabled.
++# vmlinux.lds.h strips it out of the finished kernel, but if nothing is done
++# it will be deduplicated into module .ko's.  For out-of-tree module builds,
++# this is what we want, but for in-tree modules we can save substantial
++# space by deduplicating it against all the core kernel types as well.  So
++# split the CTF out of in-tree module .ko's into separate .ctf files so that
++# it doesn't take up space in the modules on disk, and let the specialized
++# ctfarchive tool consume it and all the CTF in the vmlinux.o files when
++# 'make ctf' is invoked, and use the same machinery that the linker uses to
++# do CTF deduplication to emit vmlinux.ctfa containing the deduplicated CTF.
++
++# Nothing special needs to be done if CTF is turned off or if a standalone
++# module is being built.
++module-ctf-postlink = mv $(1).tmp $(1)
++
++ifdef CONFIG_CTF
++
++# This is quite tricky.  The CTF machinery needs to be told about all the
++# built-in objects as well as all the external modules -- but Makefile.modfinal
++# only knows about the latter.  So the toplevel makefile emits the names of the
++# built-in objects into a temporary file, which is then catted and its contents
++# used as prerequisites by this rule.
++#
++# We write the names of the object files to be scanned for CTF content into a
++# file, then use that, to avoid hitting command-line length limits.
++
++ifeq ($(KBUILD_EXTMOD),)
++ctf-modules := $(shell find . -name '*.ko.ctf' -print)
++quiet_cmd_ctfa_raw = CTFARAW
++      cmd_ctfa_raw = scripts/ctf/ctfarchive $@ .tmp_objects.builtin modules.builtin.objs $(ctf-filelist)
++ctf-builtins := .tmp_objects.builtin
++ctf-filelist := .tmp_ctf.filelist
++ctf-filelist-raw := .tmp_ctf.filelist.raw
++
++define module-ctf-postlink =
++	$(OBJCOPY) --only-section=.ctf $(1).tmp $(1).ctf && \
++	$(OBJCOPY) --remove-section=.ctf $(1).tmp $(1) && rm -f $(1).tmp
++endef
++
++# Split a list up like shell xargs does.
++define xargs =
++$(1) $(wordlist 1,1024,$(2))
++$(if $(word 1025,$(2)),$(call xargs,$(1),$(wordlist 1025,$(words $(2)),$(2))))
++endef
++
++$(ctf-filelist-raw): $(ctf-builtins) $(ctf-modules)
++	@rm -f $(ctf-filelist-raw);
++	$(call xargs,@printf "%s\n" >> $(ctf-filelist-raw),$^)
++	@touch $(ctf-filelist-raw)
++
++$(ctf-filelist): $(ctf-filelist-raw)
++	@rm -f $(ctf-filelist);
++	@cat $(ctf-filelist-raw) | while read -r obj; do \
++		case $$obj in \
++		$(ctf-builtins)) cat $$obj >> $(ctf-filelist);; \
++		*.a) $(AR) t $$obj >> $(ctf-filelist);; \
++		*.builtin) cat $$obj >> $(ctf-filelist);; \
++		*) echo "$$obj" >> $(ctf-filelist);; \
++		esac; \
++	done
++	@touch $(ctf-filelist)
++
++# The raw CTF depends on the output CTF file list, and that depends
++# on the .ko files for the modules.
++.tmp_vmlinux.ctfa.raw: $(ctf-filelist) FORCE
++	$(call if_changed,ctfa_raw)
++
++quiet_cmd_ctfa = CTFA
++      cmd_ctfa = { echo 'int main () { return 0; } ' | \
++		$(CC) -x c -c -o $<.stub -; \
++	$(OBJCOPY) '--remove-section=.*' --add-section=.ctf=$< \
++		 $<.stub $@; }
++
++# The CTF itself is an ELF executable with one section: the CTF.  This lets
++# objdump work on it, at minimal size cost.
++vmlinux.ctfa: .tmp_vmlinux.ctfa.raw FORCE
++	$(call if_changed,ctfa)
++
++targets += vmlinux.ctfa
++
++endif		# KBUILD_EXTMOD
++
++endif		# CONFIG_CTF
++
+diff --git a/scripts/Makefile.ctfa-toplevel b/scripts/Makefile.ctfa-toplevel
+new file mode 100644
+index 0000000000000..210bef3854e9b
+--- /dev/null
++++ b/scripts/Makefile.ctfa-toplevel
+@@ -0,0 +1,54 @@
++# SPDX-License-Identifier: GPL-2.0-only
++# ===========================================================================
++# CTF rules for the top-level makefile only
++# ===========================================================================
++
++KBUILD_CFLAGS	+= $(call cc-option,-gctf)
++KBUILD_LDFLAGS	+= $(call ld-option, --ctf-variables)
++
++ifeq ($(KBUILD_EXTMOD),)
++
++# CTF generation for in-tree code (modules, built-in and not, and core kernel)
++
++# This contains all the object files that are built directly into the
++# kernel (including built-in modules), for consumption by ctfarchive in
++# Makefile.modfinal.
++# This is made doubly annoying by the presence of '.o' files which are actually
++# thin ar archives, and the need to support file(1) versions too old to
++# recognize them as archives at all.  (So we assume that everything that is notr
++# an ELF object is an archive.)
++ifeq ($(SRCARCH),x86)
++.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),bzImage) FORCE
++else
++ifeq ($(SRCARCH),arm64)
++.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),Image) FORCE
++else
++.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),vmlinux) FORCE
++endif
++endif
++	@echo $(KBUILD_VMLINUX_OBJS) | \
++		tr " " "\n" | grep "\.o$$" | xargs -r file | \
++		grep ELF | cut -d: -f1 > .tmp_objects.builtin
++	@for archive in $$(echo $(KBUILD_VMLINUX_OBJS) |\
++		tr " " "\n" | xargs -r file | grep -v ELF | cut -d: -f1); do \
++		$(AR) t "$$archive" >> .tmp_objects.builtin; \
++	done
++
++ctf: vmlinux.ctfa
++PHONY += ctf ctf_install
++
++# Making CTF needs the builtin files.  We need to force everything to be
++# built if not already done, since we need the .o files for the machinery
++# above to work.
++vmlinux.ctfa: KBUILD_BUILTIN := 1
++vmlinux.ctfa: modules.builtin.objs .tmp_objects.builtin
++	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modfinal vmlinux.ctfa
++
++ctf_install:
++	$(Q)mkdir -p $(MODLIB)/kernel
++	@ln -sf $(abspath $(srctree)) $(MODLIB)/source
++	$(Q)cp -f $(objtree)/vmlinux.ctfa $(MODLIB)/kernel
++
++CLEAN_FILES += vmlinux.ctfa
++
++endif
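
The .tmp_objects.builtin recipe above relies on file(1) and grep to separate ELF
objects from (possibly thin) archives.  As a rough sketch of that test only (not part
of the patch; is_elf_object is a made-up name), the check boils down to the four-byte
ELF magic:

	#include <stdio.h>
	#include <string.h>

	/*
	 * Anything that does not start with the ELF magic is assumed to be an
	 * archive whose members still have to be listed with $(AR) t, exactly
	 * as the recipe above does with "file | grep -v ELF".
	 */
	static int is_elf_object(const char *path)
	{
		unsigned char magic[4];
		FILE *f = fopen(path, "rb");
		int elf = 0;

		if (!f)
			return 0;
		if (fread(magic, 1, sizeof(magic), f) == sizeof(magic))
			elf = (memcmp(magic, "\177ELF", 4) == 0);
		fclose(f);
		return elf;
	}
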
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index 7f8ec77bf35c9..97b0ea2eee9d4 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -118,6 +118,8 @@ modname-multi = $(sort $(foreach m,$(multi-obj-ym),\
+ __modname = $(or $(modname-multi),$(basetarget))
+ 
+ modname = $(subst $(space),:,$(__modname))
++modname-objs = $($(modname)-objs) $($(modname)-y) $($(modname)-Y)
++modname-objs-prefixed = $(sort $(strip $(addprefix $(obj)/, $(modname-objs))))
+ modfile = $(addprefix $(obj)/,$(__modname))
+ 
+ # target with $(obj)/ and its suffix stripped
+@@ -131,7 +133,8 @@ name-fix = $(call stringify,$(call name-fix-token,$1))
+ basename_flags = -DKBUILD_BASENAME=$(call name-fix,$(basetarget))
+ modname_flags  = -DKBUILD_MODNAME=$(call name-fix,$(modname)) \
+ 		 -D__KBUILD_MODNAME=kmod_$(call name-fix-token,$(modname))
+-modfile_flags  = -DKBUILD_MODFILE=$(call stringify,$(modfile))
++modfile_flags  = -DKBUILD_MODFILE=$(call stringify,$(modfile)) \
++                 -DKBUILD_MODOBJS=$(call stringify,$(modfile).o:$(subst $(space),|,$(modname-objs-prefixed)))
+ 
+ _c_flags       = $(filter-out $(CFLAGS_REMOVE_$(target-stem).o), \
+                      $(filter-out $(ccflags-remove-y), \
+@@ -238,7 +241,7 @@ modkern_rustflags =                                              \
+ 
+ modkern_aflags = $(if $(part-of-module),				\
+ 			$(KBUILD_AFLAGS_MODULE) $(AFLAGS_MODULE),	\
+-			$(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL))
++			$(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL) $(modfile_flags))
+ 
+ c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+ 		 -include $(srctree)/include/linux/compiler_types.h       \
+@@ -248,7 +251,7 @@ c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+ rust_flags     = $(_rust_flags) $(modkern_rustflags) @$(objtree)/include/generated/rustc_cfg
+ 
+ a_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+-		 $(_a_flags) $(modkern_aflags)
++		 $(_a_flags) $(modkern_aflags) $(modname_flags)
+ 
+ cpp_flags      = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+ 		 $(_cpp_flags)
+diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
+index 3bec9043e4f38..06807e403d162 100644
+--- a/scripts/Makefile.modfinal
++++ b/scripts/Makefile.modfinal
+@@ -30,11 +30,16 @@ quiet_cmd_cc_o_c = CC [M]  $@
+ %.mod.o: %.mod.c FORCE
+ 	$(call if_changed_dep,cc_o_c)
+ 
++# for module-ctf-postlink
++include $(srctree)/scripts/Makefile.ctfa
++
+ quiet_cmd_ld_ko_o = LD [M]  $@
+       cmd_ld_ko_o +=							\
+ 	$(LD) -r $(KBUILD_LDFLAGS)					\
+ 		$(KBUILD_LDFLAGS_MODULE) $(LDFLAGS_MODULE)		\
+-		-T scripts/module.lds -o $@ $(filter %.o, $^)
++		-T scripts/module.lds $(LDFLAGS_$(modname)) -o $@.tmp	\
++		$(filter %.o, $^) &&					\
++	$(call module-ctf-postlink,$@)					\
+ 
+ quiet_cmd_btf_ko = BTF [M] $@
+       cmd_btf_ko = 							\
+diff --git a/scripts/Makefile.modinst b/scripts/Makefile.modinst
+index 0afd75472679f..e668469ce098c 100644
+--- a/scripts/Makefile.modinst
++++ b/scripts/Makefile.modinst
+@@ -30,10 +30,12 @@ $(MODLIB)/modules.order: modules.order FORCE
+ quiet_cmd_install_modorder = INSTALL $@
+       cmd_install_modorder = sed 's:^\(.*\)\.o$$:kernel/\1.ko:' $< > $@
+ 
+-# Install modules.builtin(.modinfo) even when CONFIG_MODULES is disabled.
+-install-y += $(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo)
++# Install modules.builtin(.modinfo,.ranges,.objs) even when CONFIG_MODULES is disabled.
++install-y += $(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo modules.builtin.objs)
+ 
+-$(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo): $(MODLIB)/%: % FORCE
++install-$(CONFIG_BUILTIN_MODULE_RANGES) += $(MODLIB)/modules.builtin.ranges
++
++$(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo modules.builtin.ranges modules.builtin.objs): $(MODLIB)/%: % FORCE
+ 	$(call cmd,install)
+ 
+ endif
+diff --git a/scripts/Makefile.vmlinux b/scripts/Makefile.vmlinux
+index 49946cb968440..7e8b703799c85 100644
+--- a/scripts/Makefile.vmlinux
++++ b/scripts/Makefile.vmlinux
+@@ -33,6 +33,24 @@ targets += vmlinux
+ vmlinux: scripts/link-vmlinux.sh vmlinux.o $(KBUILD_LDS) FORCE
+ 	+$(call if_changed_dep,link_vmlinux)
+ 
++# module.builtin.ranges
++# ---------------------------------------------------------------------------
++ifdef CONFIG_BUILTIN_MODULE_RANGES
++__default: modules.builtin.ranges
++
++quiet_cmd_modules_builtin_ranges = GEN     $@
++      cmd_modules_builtin_ranges = $(real-prereqs) > $@
++
++targets += modules.builtin.ranges
++modules.builtin.ranges: $(srctree)/scripts/generate_builtin_ranges.awk \
++			modules.builtin vmlinux.map vmlinux.o.map FORCE
++	$(call if_changed,modules_builtin_ranges)
++
++vmlinux.map: vmlinux
++	@:
++
++endif
++
+ # Add FORCE to the prequisites of a target to force it to be always rebuilt.
+ # ---------------------------------------------------------------------------
+ 
+diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
+index 6de297916ce68..86d6f8887313f 100644
+--- a/scripts/Makefile.vmlinux_o
++++ b/scripts/Makefile.vmlinux_o
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ 
+ PHONY := __default
+-__default: vmlinux.o modules.builtin.modinfo modules.builtin
++__default: vmlinux.o modules.builtin.modinfo modules.builtin modules.builtin.objs
+ 
+ include include/config/auto.conf
+ include $(srctree)/scripts/Kbuild.include
+@@ -27,6 +27,18 @@ ifdef CONFIG_LTO_CLANG
+ initcalls-lds := .tmp_initcalls.lds
+ endif
+ 
++# Generate a linker script to delete CTF sections
++# -----------------------------------------------
++
++quiet_cmd_gen_remove_ctf.lds = GEN     $@
++      cmd_gen_remove_ctf.lds = \
++	$(LD) $(KBUILD_LDFLAGS) -r --verbose | awk -f $(real-prereqs) > $@
++
++.tmp_remove-ctf.lds: $(srctree)/scripts/remove-ctf-lds.awk FORCE
++	$(call if_changed,gen_remove_ctf.lds)
++
++targets := .tmp_remove-ctf.lds
++
+ # objtool for vmlinux.o
+ # ---------------------------------------------------------------------------
+ #
+@@ -42,13 +54,18 @@ vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION)	+= --noinstr \
+ 
+ objtool-args = $(vmlinux-objtool-args-y) --link
+ 
+-# Link of vmlinux.o used for section mismatch analysis
++# Link of vmlinux.o used for section mismatch analysis: we also strip the CTF
++# section out at this stage, since ctfarchive gets it from the underlying object
++# files and linking it further is a waste of time.
+ # ---------------------------------------------------------------------------
+ 
++vmlinux-o-ld-args-$(CONFIG_BUILTIN_MODULE_RANGES)	+= -Map=$@.map
++
+ quiet_cmd_ld_vmlinux.o = LD      $@
+       cmd_ld_vmlinux.o = \
+ 	$(LD) ${KBUILD_LDFLAGS} -r -o $@ \
+-	$(addprefix -T , $(initcalls-lds)) \
++	$(vmlinux-o-ld-args-y) \
++	$(addprefix -T , $(initcalls-lds)) -T .tmp_remove-ctf.lds \
+ 	--whole-archive vmlinux.a --no-whole-archive \
+ 	--start-group $(KBUILD_VMLINUX_LIBS) --end-group \
+ 	$(cmd_objtool)
+@@ -58,7 +75,7 @@ define rule_ld_vmlinux.o
+ 	$(call cmd,gen_objtooldep)
+ endef
+ 
+-vmlinux.o: $(initcalls-lds) vmlinux.a $(KBUILD_VMLINUX_LIBS) FORCE
++vmlinux.o: $(initcalls-lds) vmlinux.a $(KBUILD_VMLINUX_LIBS) .tmp_remove-ctf.lds FORCE
+ 	$(call if_changed_rule,ld_vmlinux.o)
+ 
+ targets += vmlinux.o
+@@ -87,6 +104,19 @@ targets += modules.builtin
+ modules.builtin: modules.builtin.modinfo FORCE
+ 	$(call if_changed,modules_builtin)
+ 
++# module.builtin.objs
++# ---------------------------------------------------------------------------
++quiet_cmd_modules_builtin_objs = GEN     $@
++      cmd_modules_builtin_objs = \
++	tr '\0' '\n' < $< | \
++	sed -n 's/^[[:alnum:]:_]*\.objs=//p' | \
++	tr ' ' '\n' | uniq | sed -e 's|:|: |' -e 's:|: :g' | \
++	tr -s ' ' > $@
++
++targets += modules.builtin.objs
++modules.builtin.objs: modules.builtin.modinfo FORCE
++	$(call if_changed,modules_builtin_objs)
++
+ # Add FORCE to the prequisites of a target to force it to be always rebuilt.
+ # ---------------------------------------------------------------------------
+ 
+diff --git a/scripts/ctf/.gitignore b/scripts/ctf/.gitignore
+new file mode 100644
+index 0000000000000..6a0eb1c3ceeab
+--- /dev/null
++++ b/scripts/ctf/.gitignore
+@@ -0,0 +1 @@
++ctfarchive
+diff --git a/scripts/ctf/Makefile b/scripts/ctf/Makefile
+new file mode 100644
+index 0000000000000..3b83f93bb9f9a
+--- /dev/null
++++ b/scripts/ctf/Makefile
+@@ -0,0 +1,5 @@
++ifdef CONFIG_CTF
++hostprogs-always-y	:= ctfarchive
++ctfarchive-objs		:= ctfarchive.o modules_builtin.o
++HOSTLDLIBS_ctfarchive := -lctf
++endif
+diff --git a/scripts/ctf/ctfarchive.c b/scripts/ctf/ctfarchive.c
+new file mode 100644
+index 0000000000000..92cc4912ed0ee
+--- /dev/null
++++ b/scripts/ctf/ctfarchive.c
+@@ -0,0 +1,413 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * ctfarchive.c: Read in CTF extracted from generated object files from a
++ * specified directory and generate a CTF archive whose members are the
++ * deduplicated CTF derived from those object files, split up by kernel
++ * module.
++ *
++ * Copyright (c) 2019, 2023, Oracle and/or its affiliates.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#define _GNU_SOURCE 1
++#include <errno.h>
++#include <stdio.h>
++#include <stdlib.h>
++#include <string.h>
++#include <ctf-api.h>
++#include "modules_builtin.h"
++
++static ctf_file_t *output;
++
++static int private_ctf_link_add_ctf(ctf_file_t *fp,
++				    const char *name)
++{
++#if !defined (CTF_LINK_FINAL)
++	return ctf_link_add_ctf(fp, NULL, name);
++#else
++	/* Non-upstreamed, erroneously-broken API.  */
++	return ctf_link_add_ctf(fp, NULL, name, NULL, 0);
++#endif
++}
++
++/*
++ * Add a file to the link.
++ */
++static void add_to_link(const char *fn)
++{
++	if (private_ctf_link_add_ctf(output, fn) < 0)
++	{
++		fprintf(stderr, "Cannot add CTF file %s: %s\n", fn,
++			ctf_errmsg(ctf_errno(output)));
++		exit(1);
++	}
++}
++
++struct from_to
++{
++	char *from;
++	char *to;
++};
++
++/*
++ * The world's stupidest hash table of FROM -> TO.
++ */
++static struct from_to **from_tos[256];
++static size_t alloc_from_tos[256];
++static size_t num_from_tos[256];
++
++static unsigned char from_to_hash(const char *from)
++{
++	unsigned char hval = 0;
++
++	const char *p;
++	for (p = from; *p; p++)
++		hval += *p;
++
++	return hval;
++}
++
++/*
++ * Note that we will need to add a CU mapping later on.
++ *
++ * Present purely to work around a binutils bug that stops
++ * ctf_link_add_cu_mapping() working right when called repeatedly
++ * with the same FROM.
++ */
++static int add_cu_mapping(const char *from, const char *to)
++{
++	ssize_t i, j;
++
++	i = from_to_hash(from);
++
++	for (j = 0; j < num_from_tos[i]; j++)
++		if (strcmp(from, from_tos[i][j]->from) == 0) {
++			char *tmp;
++
++			free(from_tos[i][j]->to);
++			tmp = strdup(to);
++			if (!tmp)
++				goto oom;
++			from_tos[i][j]->to = tmp;
++			return 0;
++		    }
++
++	if (num_from_tos[i] >= alloc_from_tos[i]) {
++		struct from_to **tmp;
++		if (alloc_from_tos[i] < 16)
++			alloc_from_tos[i] = 16;
++		else
++			alloc_from_tos[i] *= 2;
++
++		tmp = realloc(from_tos[i], alloc_from_tos[i] * sizeof(struct from_to *));
++		if (!tmp)
++			goto oom;
++
++		from_tos[i] = tmp;
++	}
++
++	j = num_from_tos[i];
++	from_tos[i][j] = malloc(sizeof(struct from_to));
++	if (from_tos[i][j] == NULL)
++		goto oom;
++	from_tos[i][j]->from = strdup(from);
++	from_tos[i][j]->to = strdup(to);
++	if (!from_tos[i][j]->from || !from_tos[i][j]->to)
++		goto oom;
++	num_from_tos[i]++;
++
++	return 0;
++  oom:
++	fprintf(stderr,
++		"out of memory in add_cu_mapping\n");
++	exit(1);
++}
++
++/*
++ * Finally tell binutils to add all the CU mappings, with duplicate FROMs
++ * replaced with the most recent one.
++ */
++static void commit_cu_mappings(void)
++{
++	ssize_t i, j;
++
++	for (i = 0; i < 256; i++)
++		for (j = 0; j < num_from_tos[i]; j++)
++			ctf_link_add_cu_mapping(output, from_tos[i][j]->from,
++						from_tos[i][j]->to);
++}
++
++/*
++ * Add a CU mapping to the link.
++ *
++ * CU mappings for built-in modules are added by suck_in_modules, below: here,
++ * we only want to add mappings for names ending in '.ko.ctf', i.e. external
++ * modules, which appear only in the filelist (since they are not built-in).
++ * The pathnames are stripped off because modules don't have any, and hyphens
++ * are translated into underscores.
++ */
++static void add_cu_mappings(const char *fn)
++{
++	const char *last_slash;
++	const char *modname = fn;
++	char *dynmodname = NULL;
++	char *dash;
++	size_t n;
++
++	last_slash = strrchr(modname, '/');
++	if (last_slash)
++		last_slash++;
++	else
++		last_slash = modname;
++	modname = last_slash;
++	if (strchr(modname, '-') != NULL)
++	{
++		dynmodname = strdup(last_slash);
++		dash = dynmodname;
++		while (dash != NULL) {
++			dash = strchr(dash, '-');
++			if (dash != NULL)
++				*dash = '_';
++		}
++		modname = dynmodname;
++	}
++
++	n = strlen(modname);
++	if (strcmp(modname + n - strlen(".ko.ctf"), ".ko.ctf") == 0) {
++		char *mod;
++
++		n -= strlen(".ko.ctf");
++		mod = strndup(modname, n);
++		add_cu_mapping(fn, mod);
++		free(mod);
++	}
++	free(dynmodname);
++}
++
++/*
++ * Add the passed names as mappings to "vmlinux".
++ */
++static void add_builtins(const char *fn)
++{
++	if (add_cu_mapping(fn, "vmlinux") < 0)
++	{
++		fprintf(stderr, "Cannot add CTF CU mapping from %s to \"vmlinux\": %s\n",
++			fn, ctf_errmsg(ctf_errno(output)));
++		exit(1);
++	}
++}
++
++/*
++ * Do something with a file, line by line.
++ */
++static void suck_in_lines(const char *filename, void (*func)(const char *line))
++{
++	FILE *f;
++	char *line = NULL;
++	size_t line_size = 0;
++
++	f = fopen(filename, "r");
++	if (f == NULL) {
++		fprintf(stderr, "Cannot open %s: %s\n", filename,
++			strerror(errno));
++		exit(1);
++	}
++
++	while (getline(&line, &line_size, f) >= 0) {
++		size_t len = strlen(line);
++
++		if (len == 0)
++			continue;
++
++		if (line[len-1] == '\n')
++			line[len-1] = '\0';
++
++		func(line);
++	}
++	free(line);
++
++	if (ferror(f)) {
++		fprintf(stderr, "Error reading from %s: %s\n", filename,
++			strerror(errno));
++		exit(1);
++	}
++
++	fclose(f);
++}
++
++/*
++ * Pull in modules.builtin.objs and turn it into CU mappings.
++ */
++static void suck_in_modules(const char *modules_builtin_name)
++{
++	struct modules_builtin_iter *i;
++	char *module_name = NULL;
++	char **paths;
++
++	i = modules_builtin_iter_new(modules_builtin_name);
++	if (i == NULL) {
++		fprintf(stderr, "Cannot iterate over builtin module file.\n");
++		exit(1);
++	}
++
++	while ((paths = modules_builtin_iter_next(i, &module_name)) != NULL) {
++		size_t j;
++
++		for (j = 0; paths[j] != NULL; j++) {
++			char *alloc = NULL;
++			char *path = paths[j];
++			/*
++			 * If the name doesn't start in ./, add it, to match the names
++			 * passed to add_builtins.
++			 */
++			if (strncmp(paths[j], "./", 2) != 0) {
++				char *p;
++				if ((alloc = malloc(strlen(paths[j]) + 3)) == NULL) {
++					fprintf(stderr, "Cannot allocate memory for "
++						"builtin module object name %s.\n",
++						paths[j]);
++					exit(1);
++				}
++				p = alloc;
++				p = stpcpy(p, "./");
++				p = stpcpy(p, paths[j]);
++				path = alloc;
++			}
++			if (add_cu_mapping(path, module_name) < 0) {
++				fprintf(stderr, "Cannot add path -> module mapping for "
++					"%s -> %s: %s\n", path, module_name,
++					ctf_errmsg(ctf_errno(output)));
++				exit(1);
++			}
++			free (alloc);
++		}
++		free(paths);
++	}
++	free(module_name);
++	modules_builtin_iter_free(i);
++}
++
++/*
++ * Strip the leading .ctf. off all the module names: transform the default name
++ * from _CTF_SECTION into shared_ctf, and chop any trailing .ctf off (since that
++ * derives from the intermediate file used to keep the CTF out of the final
++ * module).
++ */
++static char *transform_module_names(ctf_file_t *fp __attribute__((__unused__)),
++				    const char *name,
++				    void *arg __attribute__((__unused__)))
++{
++	if (strcmp(name, ".ctf") == 0)
++		return strdup("shared_ctf");
++
++	if (strncmp(name, ".ctf", 4) == 0) {
++		size_t n = strlen (name);
++		if (strcmp(name + n - 4, ".ctf") == 0)
++			n -= 4;
++		return strndup(name + 4, n - 4);
++	}
++	return NULL;
++}
++
++int main(int argc, char *argv[])
++{
++	int err;
++	const char *output_file;
++	unsigned char *file_data = NULL;
++	size_t file_size;
++	FILE *fp;
++
++	if (argc != 5) {
++		fprintf(stderr, "Syntax: ctfarchive output-file objects.builtin modules.builtin\n");
++		fprintf(stderr, "                   filelist\n");
++		exit(1);
++	}
++
++	output_file = argv[1];
++
++	/*
++	 * First pull in the input files and add them to the link.
++	 */
++
++	output = ctf_create(&err);
++	if (!output) {
++		fprintf(stderr, "Cannot create output CTF archive: %s\n",
++			ctf_errmsg(err));
++		return 1;
++	}
++
++	suck_in_lines(argv[4], add_to_link);
++
++	/*
++	 * Make sure that, even if all their types are shared, all modules have
++	 * a ctf member that can be used as a child of the shared CTF.
++	 */
++	suck_in_lines(argv[4], add_cu_mappings);
++
++	/*
++	 * Then pull in the builtin objects list and add them as
++	 * mappings to "vmlinux".
++	 */
++
++	suck_in_lines(argv[2], add_builtins);
++
++	/*
++	 * Finally, pull in the object -> module mapping and add it
++	 * as appropriate mappings.
++	 */
++	suck_in_modules(argv[3]);
++
++	/*
++	 * Commit the added CU mappings.
++	 */
++	commit_cu_mappings();
++
++	/*
++	 * Arrange to fix up the module names.
++	 */
++	ctf_link_set_memb_name_changer(output, transform_module_names, NULL);
++
++	/*
++	 * Do the link.
++	 */
++	if (ctf_link(output, CTF_LINK_SHARE_DUPLICATED |
++                     CTF_LINK_EMPTY_CU_MAPPINGS) < 0)
++		goto ctf_err;
++
++	/*
++	 * Write the output.
++	 */
++
++	file_data = ctf_link_write(output, &file_size, 4096);
++	if (!file_data)
++		goto ctf_err;
++
++	fp = fopen(output_file, "w");
++	if (!fp)
++		goto err;
++
++	while ((err = fwrite(file_data, file_size, 1, fp)) == 0);
++	if (ferror(fp)) {
++		errno = ferror(fp);
++		goto err;
++	}
++	if (fclose(fp) < 0)
++		goto err;
++	free(file_data);
++	ctf_file_close(output);
++
++	return 0;
++err:
++	free(file_data);
++	fprintf(stderr, "Cannot create output CTF archive: %s\n",
++		strerror(errno));
++	return 1;
++ctf_err:
++	fprintf(stderr, "Cannot create output CTF archive: %s\n",
++		ctf_errmsg(ctf_errno(output)));
++	return 1;
++}
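
For reference, the vmlinux.ctfa written by this tool can be opened again through the
libctf archive API from the same <ctf-api.h> (linking with -lctf, as in the Makefile
above).  The following is only a consumer sketch, and the exact archive entry points
(ctf_arc_open/ctf_arc_close) are an assumption about the binutils/libctf version in
use:

	#include <stdio.h>
	#include <ctf-api.h>

	int main(int argc, char *argv[])
	{
		ctf_archive_t *arc;
		int err;

		if (argc != 2) {
			fprintf(stderr, "Usage: %s vmlinux.ctfa\n", argv[0]);
			return 1;
		}

		/* Open the deduplicated archive emitted by ctf_link_write() above. */
		arc = ctf_arc_open(argv[1], &err);
		if (arc == NULL) {
			fprintf(stderr, "Cannot open %s: %s\n", argv[1], ctf_errmsg(err));
			return 1;
		}

		printf("%s: CTF archive opened\n", argv[1]);
		ctf_arc_close(arc);
		return 0;
	}
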
+diff --git a/scripts/ctf/modules_builtin.c b/scripts/ctf/modules_builtin.c
+new file mode 100644
+index 0000000000000..10af2bbc80e0c
+--- /dev/null
++++ b/scripts/ctf/modules_builtin.c
+@@ -0,0 +1,2 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#include "../modules_builtin.c"
+diff --git a/scripts/ctf/modules_builtin.h b/scripts/ctf/modules_builtin.h
+new file mode 100644
+index 0000000000000..5e0299e5600c2
+--- /dev/null
++++ b/scripts/ctf/modules_builtin.h
+@@ -0,0 +1,2 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#include "../modules_builtin.h"
+diff --git a/scripts/generate_builtin_ranges.awk b/scripts/generate_builtin_ranges.awk
+new file mode 100755
+index 0000000000000..51ae0458ffbdd
+--- /dev/null
++++ b/scripts/generate_builtin_ranges.awk
+@@ -0,0 +1,516 @@
++#!/usr/bin/gawk -f
++# SPDX-License-Identifier: GPL-2.0
++# generate_builtin_ranges.awk: Generate address range data for builtin modules
++# Written by Kris Van Hees <kris.van.hees@oracle.com>
++#
++# Usage: generate_builtin_ranges.awk modules.builtin vmlinux.map \
++#		vmlinux.o.map [ <build-dir> ] > modules.builtin.ranges
++#
++
++# Return the module name(s) (if any) associated with the given object.
++#
++# If we have seen this object before, return information from the cache.
++# Otherwise, retrieve it from the corresponding .cmd file.
++#
++function get_module_info(fn, mod, obj, s) {
++	if (fn in omod)
++		return omod[fn];
++
++	if (match(fn, /\/[^/]+$/) == 0)
++		return "";
++
++	obj = fn;
++	mod = "";
++	fn = kdir "/" substr(fn, 1, RSTART) "." substr(fn, RSTART + 1) ".cmd";
++	if (getline s <fn == 1) {
++		if (match(s, /DKBUILD_MODFILE=['"]+[^'"]+/) > 0) {
++			mod = substr(s, RSTART + 16, RLENGTH - 16);
++			gsub(/['"]/, "", mod);
++		} else if (match(s, /RUST_MODFILE=[^ ]+/) > 0)
++			mod = substr(s, RSTART + 13, RLENGTH - 13);
++	}
++	close(fn);
++
++	# A single module (common case) also reflects objects that are not part
++	# of a module.  Some of those objects have names that are also a module
++	# name (e.g. core).  We check the associated module file name, and if
++	# they do not match, the object is not part of a module.
++	if (mod !~ / /) {
++		if (!(mod in mods))
++			mod = "";
++	}
++
++	gsub(/([^/ ]*\/)+/, "", mod);
++	gsub(/-/, "_", mod);
++
++	# At this point, mod is a single (valid) module name, or a list of
++	# module names (that do not need validation).
++	omod[obj] = mod;
++
++	return mod;
++}
++
++# Update the ranges entry for the given module 'mod' in section 'osect'.
++#
++# We use a modified absolute start address (soff + base) as index because we
++# may need to insert an anchor record later that must be at the start of the
++# section data, and the first module may very well start at the same address.
++# So, we use (addr << 1) + 1 to allow a possible anchor record to be placed at
++# (addr << 1).  This is safe because the index is only used to sort the entries
++# before writing them out.
++#
++function update_entry(osect, mod, soff, eoff, sect, idx) {
++	sect = sect_in[osect];
++	idx = (soff + sect_base[osect]) * 2 + 1;
++	entries[idx] = sprintf("%s %08x-%08x %s", sect, soff, eoff, mod);
++	count[sect]++;
++}
++
++# Determine the kernel build directory to use (default is .).
++#
++BEGIN {
++	if (ARGC > 4) {
++		kdir = ARGV[ARGC - 1];
++		ARGV[ARGC - 1] = "";
++	} else
++		kdir = ".";
++}
++
++# (1) Build a lookup map of built-in module names.
++#
++# The first file argument is used as input (modules.builtin).
++#
++# Lines will be like:
++#	kernel/crypto/lzo-rle.ko
++# and we record the object name "crypto/lzo-rle".
++#
++ARGIND == 1 {
++	sub(/kernel\//, "");			# strip off "kernel/" prefix
++	sub(/\.ko$/, "");			# strip off .ko suffix
++
++	mods[$1] = 1;
++	next;
++}
++
++# (2) Collect address information for each section.
++#
++# The second file argument is used as input (vmlinux.map).
++#
++# We collect the base address of the section in order to convert all addresses
++# in the section into offset values.
++#
++# We collect the address of the anchor (or first symbol in the section if there
++# is no explicit anchor) to allow users of the range data to calculate address
++# ranges based on the actual load address of the section in the running kernel.
++#
++# We collect the start address of any sub-section (section included in the top
++# level section being processed).  This is needed when the final linking was
++# done using vmlinux.a because then the list of objects contained in each
++# section is to be obtained from vmlinux.o.map.  The offset of the sub-section
++# is recorded here, to be used as an addend when processing vmlinux.o.map
++# later.
++#
++
++# Both GNU ld and LLVM lld linker map format are supported by converting LLVM
++# lld linker map records into equivalent GNU ld linker map records.
++#
++# The first record of the vmlinux.map file provides enough information to know
++# which format we are dealing with.
++#
++ARGIND == 2 && FNR == 1 && NF == 7 && $1 == "VMA" && $7 == "Symbol" {
++	map_is_lld = 1;
++	if (dbg)
++		printf "NOTE: %s uses LLVM lld linker map format\n", FILENAME >"/dev/stderr";
++	next;
++}
++
++# (LLD) Convert a section record from lld format to ld format.
++#
++# lld: ffffffff82c00000          2c00000   2493c0  8192 .data
++#  ->
++# ld:  .data           0xffffffff82c00000   0x2493c0 load address 0x0000000002c00000
++#
++ARGIND == 2 && map_is_lld && NF == 5 && /[0-9] [^ ]+$/ {
++	$0 = $5 " 0x"$1 " 0x"$3 " load address 0x"$2;
++}
++
++# (LLD) Convert an anchor record from lld format to ld format.
++#
++# lld: ffffffff81000000          1000000        0     1         _text = .
++#  ->
++# ld:                  0xffffffff81000000                _text = .
++#
++ARGIND == 2 && map_is_lld && !anchor && NF == 7 && raw_addr == "0x"$1 && $6 == "=" && $7 == "." {
++	$0 = "  0x"$1 " " $5 " = .";
++}
++
++# (LLD) Convert an object record from lld format to ld format.
++#
++# lld:            11480            11480     1f07    16         vmlinux.a(arch/x86/events/amd/uncore.o):(.text)
++#  ->
++# ld:   .text          0x0000000000011480     0x1f07 arch/x86/events/amd/uncore.o
++#
++ARGIND == 2 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
++	gsub(/\)/, "");
++	sub(/ vmlinux\.a\(/, " ");
++	sub(/:\(/, " ");
++	$0 = " "$6 " 0x"$1 " 0x"$3 " " $5;
++}
++
++# (LLD) Convert a symbol record from lld format to ld format.
++#
++# We only care about these while processing a section for which no anchor has
++# been determined yet.
++#
++# lld: ffffffff82a859a4          2a859a4        0     1                 btf_ksym_iter_id
++#  ->
++# ld:                  0xffffffff82a859a4                btf_ksym_iter_id
++#
++ARGIND == 2 && map_is_lld && sect && !anchor && NF == 5 && $5 ~ /^[_A-Za-z][_A-Za-z0-9]*$/ {
++	$0 = "  0x"$1 " " $5;
++}
++
++# (LLD) We do not need any other lld linker map records.
++#
++ARGIND == 2 && map_is_lld && /^[0-9a-f]{16} / {
++	next;
++}
++
++# (LD) Section records with just the section name at the start of the line
++#      need to have the next line pulled in to determine whether it is a
++#      loadable section.  If it is, the next line will contain a hex value
++#      as first and second items.
++#
++ARGIND == 2 && !map_is_lld && NF == 1 && /^[^ ]/ {
++	s = $0;
++	getline;
++	if ($1 !~ /^0x/ || $2 !~ /^0x/)
++		next;
++
++	$0 = s " " $0;
++}
++
++# (LD) Object records with just the section name denote records with a long
++#      section name for which the remainder of the record can be found on the
++#      next line.
++#
++# (This is also needed for vmlinux.o.map, when used.)
++#
++ARGIND >= 2 && !map_is_lld && NF == 1 && /^ [^ \*]/ {
++	s = $0;
++	getline;
++	$0 = s " " $0;
++}
++
++# Beginning a new section - done with the previous one (if any).
++#
++ARGIND == 2 && /^[^ ]/ {
++	sect = 0;
++}
++
++# Process a loadable section (we only care about .-sections).
++#
++# Record the section name and its base address.
++# We also record the raw (non-stripped) address of the section because it can
++# be used to identify an anchor record.
++#
++# Note:
++# Since some AWK implementations cannot handle large integers, we strip off the
++# first 4 hex digits from the address.  This is safe because the kernel space
++# is not large enough for addresses to extend into those digits.  The portion
++# to strip off is stored in addr_prefix as a regexp, so further clauses can
++# perform a simple substitution to do the address stripping.
++#
++ARGIND == 2 && /^\./ {
++	# Explicitly ignore a few sections that are not relevant here.
++	if ($1 ~ /^\.orc_/ || $1 ~ /_sites$/ || $1 ~ /\.percpu/)
++		next;
++
++	# Sections with a 0-address can be ignored as well.
++	if ($2 ~ /^0x0+$/)
++		next;
++
++	raw_addr = $2;
++	addr_prefix = "^" substr($2, 1, 6);
++	base = $2;
++	sub(addr_prefix, "0x", base);
++	base = strtonum(base);
++	sect = $1;
++	anchor = 0;
++	sect_base[sect] = base;
++	sect_size[sect] = strtonum($3);
++
++	if (dbg)
++		printf "[%s] BASE   %016x\n", sect, base >"/dev/stderr";
++
++	next;
++}
++
++# If we are not in a section we care about, we ignore the record.
++#
++ARGIND == 2 && !sect {
++	next;
++}
++
++# Record the first anchor symbol for the current section.
++#
++# An anchor record for the section bears the same raw address as the section
++# record.
++#
++ARGIND == 2 && !anchor && NF == 4 && raw_addr == $1 && $3 == "=" && $4 == "." {
++	anchor = sprintf("%s %08x-%08x = %s", sect, 0, 0, $2);
++	sect_anchor[sect] = anchor;
++
++	if (dbg)
++		printf "[%s] ANCHOR %016x = %s (.)\n", sect, 0, $2 >"/dev/stderr";
++
++	next;
++}
++
++# If no anchor record was found for the current section, use the first symbol
++# in the section as anchor.
++#
++ARGIND == 2 && !anchor && NF == 2 && $1 ~ /^0x/ && $2 !~ /^0x/ {
++	addr = $1;
++	sub(addr_prefix, "0x", addr);
++	addr = strtonum(addr) - base;
++	anchor = sprintf("%s %08x-%08x = %s", sect, addr, addr, $2);
++	sect_anchor[sect] = anchor;
++
++	if (dbg)
++		printf "[%s] ANCHOR %016x = %s\n", sect, addr, $2 >"/dev/stderr";
++
++	next;
++}
++
++# The first occurrence of a section name in an object record establishes the
++# addend (often 0) for that section.  This information is needed to handle
++# sections that get combined in the final linking of vmlinux (e.g. .head.text
++# getting included at the start of .text).
++#
++# If the section does not have a base yet, use the base of the encapsulating
++# section.
++#
++ARGIND == 2 && sect && NF == 4 && /^ [^ \*]/ && !($1 in sect_addend) {
++	if (!($1 in sect_base)) {
++		sect_base[$1] = base;
++
++		if (dbg)
++			printf "[%s] BASE   %016x\n", $1, base >"/dev/stderr";
++	}
++
++	addr = $2;
++	sub(addr_prefix, "0x", addr);
++	addr = strtonum(addr);
++	sect_addend[$1] = addr - sect_base[$1];
++	sect_in[$1] = sect;
++
++	if (dbg)
++		printf "[%s] ADDEND %016x - %016x = %016x\n",  $1, addr, base, sect_addend[$1] >"/dev/stderr";
++
++	# If the object is vmlinux.o then we will need vmlinux.o.map to get the
++	# actual offsets of objects.
++	if ($4 == "vmlinux.o")
++		need_o_map = 1;
++}
++
++# (3) Collect offset ranges (relative to the section base address) for built-in
++# modules.
++#
++# If the final link was done using the actual objects, vmlinux.map contains all
++# the information we need (see section (3a)).
++# If linking was done using vmlinux.a as intermediary, we will need to process
++# vmlinux.o.map (see section (3b)).
++
++# (3a) Determine offset range info using vmlinux.map.
++#
++# Since we are already processing vmlinux.map, the top level section that is
++# being processed is already known.  If we do not have a base address for it,
++# we do not need to process records for it.
++#
++# Given the object name, we determine the module(s) (if any) that the current
++# object is associated with.
++#
++# If we were already processing objects for a (list of) module(s):
++#  - If the current object belongs to the same module(s), update the range data
++#    to include the current object.
++#  - Otherwise, ensure that the end offset of the range is valid.
++#
++# If the current object does not belong to a built-in module, ignore it.
++#
++# If it does, we add a new built-in module offset range record.
++#
++ARGIND == 2 && !need_o_map && /^ [^ ]/ && NF == 4 && $3 != "0x0" {
++	if (!(sect in sect_base))
++		next;
++
++	# Turn the address into an offset from the section base.
++	soff = $2;
++	sub(addr_prefix, "0x", soff);
++	soff = strtonum(soff) - sect_base[sect];
++	eoff = soff + strtonum($3);
++
++	# Determine which (if any) built-in modules the object belongs to.
++	mod = get_module_info($4);
++
++	# If we are processing a built-in module:
++	#   - If the current object is within the same module, we update its
++	#     entry by extending the range and move on
++	#   - Otherwise:
++	#       + If we are still processing within the same main section, we
++	#         validate the end offset against the start offset of the
++	#         current object (e.g. .rodata.str1.[18] objects are often
++	#         listed with an incorrect size in the linker map)
++	#       + Otherwise, we validate the end offset against the section
++	#         size
++	if (mod_name) {
++		if (mod == mod_name) {
++			mod_eoff = eoff;
++			update_entry(mod_sect, mod_name, mod_soff, eoff);
++
++			next;
++		} else if (sect == sect_in[mod_sect]) {
++			if (mod_eoff > soff)
++				update_entry(mod_sect, mod_name, mod_soff, soff);
++		} else {
++			v = sect_size[sect_in[mod_sect]];
++			if (mod_eoff > v)
++				update_entry(mod_sect, mod_name, mod_soff, v);
++		}
++	}
++
++	mod_name = mod;
++
++	# If we encountered an object that is not part of a built-in module, we
++	# do not need to record any data.
++	if (!mod)
++		next;
++
++	# At this point, we encountered the start of a new built-in module.
++	mod_name = mod;
++	mod_soff = soff;
++	mod_eoff = eoff;
++	mod_sect = $1;
++	update_entry($1, mod, soff, mod_eoff);
++
++	next;
++}
++
++# If we do not need to parse the vmlinux.o.map file, we are done.
++#
++ARGIND == 3 && !need_o_map {
++	if (dbg)
++		printf "Note: %s is not needed.\n", FILENAME >"/dev/stderr";
++	exit;
++}
++
++# (3) Collect offset ranges (relative to the section base address) for built-in
++# modules.
++#
++
++# (LLD) Convert an object record from lld format to ld format.
++#
++ARGIND == 3 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
++	gsub(/\)/, "");
++	sub(/:\(/, " ");
++
++	sect = $6;
++	if (!(sect in sect_addend))
++		next;
++
++	sub(/ vmlinux\.a\(/, " ");
++	$0 = " "sect " 0x"$1 " 0x"$3 " " $5;
++}
++
++# (3b) Determine offset range info using vmlinux.o.map.
++#
++# If we do not know an addend for the object's section, we are interested in
++# anything within that section.
++#
++# Determine the top-level section that the object's section was included in
++# during the final link.  This is the section name offset range data will be
++# associated with for this object.
++#
++# The remainder of the processing of the current object record follows the
++# procedure outlined in (3a).
++#
++ARGIND == 3 && /^ [^ ]/ && NF == 4 && $3 != "0x0" {
++	osect = $1;
++	if (!(osect in sect_addend))
++		next;
++
++	# We need to work with the main section.
++	sect = sect_in[osect];
++
++	# Turn the address into an offset from the section base.
++	soff = $2;
++	sub(addr_prefix, "0x", soff);
++	soff = strtonum(soff) + sect_addend[osect];
++	eoff = soff + strtonum($3);
++
++	# Determine which (if any) built-in modules the object belongs to.
++	mod = get_module_info($4);
++
++	# If we are processing a built-in module:
++	#   - If the current object is within the same module, we update its
++	#     entry by extending the range and move on
++	#   - Otherwise:
++	#       + If we are still processing within the same main section, we
++	#         validate the end offset against the start offset of the
++	#         current object (e.g. .rodata.str1.[18] objects are often
++	#         listed with an incorrect size in the linker map)
++	#       + Otherwise, we validate the end offset against the section
++	#         size
++	if (mod_name) {
++		if (mod == mod_name) {
++			mod_eoff = eoff;
++			update_entry(mod_sect, mod_name, mod_soff, eoff);
++
++			next;
++		} else if (sect == sect_in[mod_sect]) {
++			if (mod_eoff > soff)
++				update_entry(mod_sect, mod_name, mod_soff, soff);
++		} else {
++			v = sect_size[sect_in[mod_sect]];
++			if (mod_eoff > v)
++				update_entry(mod_sect, mod_name, mod_soff, v);
++		}
++	}
++
++	mod_name = mod;
++
++	# If we encountered an object that is not part of a built-in module, we
++	# do not need to record any data.
++	if (!mod)
++		next;
++
++	# At this point, we encountered the start of a new built-in module.
++	mod_name = mod;
++	mod_soff = soff;
++	mod_eoff = eoff;
++	mod_sect = osect;
++	update_entry(osect, mod, soff, mod_eoff);
++
++	next;
++}
++
++# (4) Generate the output.
++#
++# Anchor records are added for each section that contains offset range data
++# records.  They are added at an adjusted section base address (base << 1) to
++# ensure they come first in the sorted records (see update_entry() above for
++# more information).
++#
++# All entries are sorted by (adjusted) address to ensure that the output can be
++# parsed in strict ascending address order.
++#
++END {
++	for (sect in count) {
++		if (sect in sect_anchor)
++			entries[sect_base[sect] * 2] = sect_anchor[sect];
++	}
++
++	n = asorti(entries, indices);
++	for (i = 1; i <= n; i++)
++		print entries[indices[i]];
++}
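
The entries written out above follow two fixed formats: the anchor records built in
sect_anchor[] and the per-module records built by update_entry().  A small consumer
sketch for modules.builtin.ranges (illustrative only; parse_range_line and foo_mod are
made-up names, and entries whose module field is a space-separated list of names are
not split here):

	#include <stdio.h>

	/* ".text 00000000-00000000 = _text"  -> section anchor record           */
	/* ".text 000b4e80-000b6f10 foo_mod"  -> offset range for module foo_mod */
	static int parse_range_line(const char *line)
	{
		char sect[64], name[64];
		unsigned long start, end;

		if (sscanf(line, "%63s %lx-%lx = %63s", sect, &start, &end, name) == 4) {
			printf("anchor %s for %s at offset 0x%lx\n", name, sect, start);
			return 0;
		}
		if (sscanf(line, "%63s %lx-%lx %63s", sect, &start, &end, name) == 4) {
			printf("%s: %s 0x%lx-0x%lx\n", name, sect, start, end);
			return 0;
		}
		return -1;
	}

	int main(void)
	{
		char line[256];

		while (fgets(line, sizeof(line), stdin) != NULL)
			parse_range_line(line);
		return 0;
	}
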
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index f48d72d22dc2a..d7e6cd7781256 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -733,6 +733,7 @@ static const char *const section_white_list[] =
+ 	".comment*",
+ 	".debug*",
+ 	".zdebug*",		/* Compressed debug sections. */
++	".ctf",			/* Type info */
+ 	".GCC.command.line",	/* record-gcc-switches */
+ 	".mdebug*",        /* alpha, score, mips etc. */
+ 	".pdr",            /* alpha, score, mips etc. */
+diff --git a/scripts/modules_builtin.c b/scripts/modules_builtin.c
+new file mode 100644
+index 0000000000000..df52932a4417b
+--- /dev/null
++++ b/scripts/modules_builtin.c
+@@ -0,0 +1,200 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * A simple modules_builtin reader.
++ *
++ * (C) 2014, 2022 Oracle, Inc.  All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#include <errno.h>
++#include <stdio.h>
++#include <stdlib.h>
++#include <string.h>
++
++#include "modules_builtin.h"
++
++/*
++ * Read a modules.builtin.objs file and translate it into a stream of
++ * name / module-name pairs.
++ */
++
++/*
++ * Construct a modules.builtin.objs iterator.
++ */
++struct modules_builtin_iter *
++modules_builtin_iter_new(const char *modules_builtin_file)
++{
++	struct modules_builtin_iter *i;
++
++	i = calloc(1, sizeof(struct modules_builtin_iter));
++	if (i == NULL)
++		return NULL;
++
++	i->f = fopen(modules_builtin_file, "r");
++
++	if (i->f == NULL) {
++		fprintf(stderr, "Cannot open builtin module file %s: %s\n",
++			modules_builtin_file, strerror(errno));
++		return NULL;
++	}
++
++	return i;
++}
++
++/*
++ * Iterate, returning a new null-terminated array of object file names, and a
++ * new dynamically-allocated module name.  (The module name passed in is freed.)
++ *
++ * The array of object file names should be freed by the caller: the strings it
++ * points to are owned by the iterator, and should not be freed.
++ */
++
++char ** __attribute__((__nonnull__))
++modules_builtin_iter_next(struct modules_builtin_iter *i, char **module_name)
++{
++	size_t npaths = 1;
++	char **module_paths;
++	char *last_slash;
++	char *last_dot;
++	char *trailing_linefeed;
++	char *object_name = i->line;
++	char *dash;
++	int composite = 0;
++
++	/*
++	 * Read in all module entries, computing the suffixless, pathless name
++	 * of the module and building the next arrayful of object file names for
++	 * return.
++	 *
++	 * Modules can consist of multiple files: in this case, the portion
++	 * before the colon is the path to the module (as before): the portion
++	 * after the colon is a space-separated list of files that should be
++	 * considered part of this module.  In this case, the portion before the
++	 * colon names an "object file" that does not actually exist: it is merged
++	 * into built-in.a without ever being written out.
++	 *
++	 * All module names have - translated to _, to match what is done to the
++	 * names of the same things when built as modules.
++	 */
++
++	/*
++	 * Reinvocation of exhausted iterator. Return NULL, once.
++	 */
++retry:
++	if (getline(&i->line, &i->line_size, i->f) < 0) {
++		if (ferror(i->f)) {
++			fprintf(stderr, "Error reading from modules_builtin file:"
++				" %s\n", strerror(errno));
++			exit(1);
++		}
++		rewind(i->f);
++		return NULL;
++	}
++
++	if (i->line[0] == '\0')
++		goto retry;
++
++	trailing_linefeed = strchr(i->line, '\n');
++	if (trailing_linefeed != NULL)
++		*trailing_linefeed = '\0';
++
++	/*
++	 * Slice the line in two at the colon, if any.  If there is anything
++	 * past the ': ', this is a composite module.  (We allow for no colon
++	 * for robustness, even though one should always be present.)
++	 */
++	if (strchr(i->line, ':') != NULL) {
++		char *name_start;
++
++		object_name = strchr(i->line, ':');
++		*object_name = '\0';
++		object_name++;
++		name_start = object_name + strspn(object_name, " \n");
++		if (*name_start != '\0') {
++			composite = 1;
++			object_name = name_start;
++		}
++	}
++
++	/*
++	 * Figure out the module name.
++	 */
++	last_slash = strrchr(i->line, '/');
++	last_slash = (!last_slash) ? i->line :
++		last_slash + 1;
++	free(*module_name);
++	*module_name = strdup(last_slash);
++	dash = *module_name;
++
++	while (dash != NULL) {
++		dash = strchr(dash, '-');
++		if (dash != NULL)
++			*dash = '_';
++	}
++
++	last_dot = strrchr(*module_name, '.');
++	if (last_dot != NULL)
++		*last_dot = '\0';
++
++	/*
++	 * Multifile separator? Object file names explicitly stated:
++	 * slice them up and shuffle them in.
++	 *
++	 * The array size may be an overestimate if any object file
++	 * names start or end with spaces (very unlikely) but cannot be
++	 * an underestimate.  (Check for it anyway.)
++	 */
++	if (composite) {
++		char *one_object;
++
++		for (npaths = 0, one_object = object_name;
++		     one_object != NULL;
++		     npaths++, one_object = strchr(one_object + 1, ' '));
++	}
++
++	module_paths = malloc((npaths + 1) * sizeof(char *));
++	if (!module_paths) {
++		fprintf(stderr, "%s: out of memory on module %s\n", __func__,
++			*module_name);
++		exit(1);
++	}
++
++	if (composite) {
++		char *one_object;
++		size_t i = 0;
++
++		while ((one_object = strsep(&object_name, " ")) != NULL) {
++			if (i >= npaths) {
++				fprintf(stderr, "%s: num_objs overflow on module "
++					"%s: this is a bug.\n", __func__,
++					*module_name);
++				exit(1);
++			}
++
++			module_paths[i++] = one_object;
++		}
++	} else
++		module_paths[0] = i->line;	/* untransformed module name */
++
++	module_paths[npaths] = NULL;
++
++	return module_paths;
++}
++
++/*
++ * Free an iterator. Can be called while iteration is underway, so even
++ * state that is freed at the end of iteration must be freed here too.
++ */
++void
++modules_builtin_iter_free(struct modules_builtin_iter *i)
++{
++	if (i == NULL)
++		return;
++	fclose(i->f);
++	free(i->line);
++	free(i);
++}
+diff --git a/scripts/modules_builtin.h b/scripts/modules_builtin.h
+new file mode 100644
+index 0000000000000..5138792b42ef0
+--- /dev/null
++++ b/scripts/modules_builtin.h
+@@ -0,0 +1,48 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * A simple modules.builtin.objs reader.
++ *
++ * (C) 2014, 2022 Oracle, Inc.  All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#ifndef _LINUX_MODULES_BUILTIN_H
++#define _LINUX_MODULES_BUILTIN_H
++
++#include <stdio.h>
++#include <stddef.h>
++
++/*
++ * modules.builtin.objs iteration state.
++ */
++struct modules_builtin_iter {
++	FILE *f;
++	char *line;
++	size_t line_size;
++};
++
++/*
++ * Construct a modules_builtin.objs iterator.
++ */
++struct modules_builtin_iter *
++modules_builtin_iter_new(const char *modules_builtin_file);
++
++/*
++ * Iterate, returning a new null-terminated array of object file names, and a
++ * new dynamically-allocated module name.  (The module name passed in is freed.)
++ *
++ * The array of object file names should be freed by the caller: the strings it
++ * points to are owned by the iterator, and should not be freed.
++ */
++
++char ** __attribute__((__nonnull__))
++modules_builtin_iter_next(struct modules_builtin_iter *i, char **module_name);
++
++void
++modules_builtin_iter_free(struct modules_builtin_iter *i);
++
++#endif
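
A minimal driver for the iterator declared above (hypothetical example, not part of
the patch), mirroring how suck_in_modules() in ctfarchive.c consumes it; module_name
must start out NULL because modules_builtin_iter_next() frees the previous value:

	#include <stdio.h>
	#include <stdlib.h>
	#include "modules_builtin.h"

	int main(int argc, char *argv[])
	{
		struct modules_builtin_iter *it;
		char *module_name = NULL;
		char **paths;

		if (argc != 2) {
			fprintf(stderr, "Usage: %s modules.builtin.objs\n", argv[0]);
			return 1;
		}

		it = modules_builtin_iter_new(argv[1]);
		if (it == NULL)
			return 1;

		while ((paths = modules_builtin_iter_next(it, &module_name)) != NULL) {
			size_t j;

			printf("%s:", module_name);
			for (j = 0; paths[j] != NULL; j++)
				printf(" %s", paths[j]);
			printf("\n");
			/* The array is ours to free; the strings belong to the iterator. */
			free(paths);
		}

		free(module_name);
		modules_builtin_iter_free(it);
		return 0;
	}
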
+diff --git a/scripts/package/kernel.spec b/scripts/package/kernel.spec
+index c52d517b93647..8f75906a96314 100644
+--- a/scripts/package/kernel.spec
++++ b/scripts/package/kernel.spec
+@@ -53,12 +53,18 @@ patch -p1 < %{SOURCE2}
+ 
+ %build
+ %{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release}
++%if %{with_ctf}
++%{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release} ctf
++%endif
+ 
+ %install
+ mkdir -p %{buildroot}/lib/modules/%{KERNELRELEASE}
+ cp $(%{make} %{makeflags} -s image_name) %{buildroot}/lib/modules/%{KERNELRELEASE}/vmlinuz
+ # DEPMOD=true makes depmod no-op. We do not package depmod-generated files.
+ %{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} DEPMOD=true modules_install
++%if %{with_ctf}
++%{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} ctf_install
++%endif
+ %{make} %{makeflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install
+ cp System.map %{buildroot}/lib/modules/%{KERNELRELEASE}
+ cp .config %{buildroot}/lib/modules/%{KERNELRELEASE}/config
+diff --git a/scripts/package/mkspec b/scripts/package/mkspec
+index ce201bfa8377c..aeb43c7ab1229 100755
+--- a/scripts/package/mkspec
++++ b/scripts/package/mkspec
+@@ -21,10 +21,16 @@ else
+ echo '%define with_devel 0'
+ fi
+ 
++if grep -q CONFIG_CTF=y include/config/auto.conf; then
++echo '%define with_ctf %{?_without_ctf: 0} %{?!_without_ctf: 1}'
++else
++echo '%define with_ctf 0'
++fi
+ cat<<EOF
+ %define ARCH ${ARCH}
+ %define KERNELRELEASE ${KERNELRELEASE}
+ %define pkg_release $("${srctree}/init/build-version")
++
+ EOF
+ 
+ cat "${srctree}/scripts/package/kernel.spec"
+diff --git a/scripts/remove-ctf-lds.awk b/scripts/remove-ctf-lds.awk
+new file mode 100644
+index 0000000000000..5d94d6ee99227
+--- /dev/null
++++ b/scripts/remove-ctf-lds.awk
+@@ -0,0 +1,12 @@
++# SPDX-License-Identifier: GPL-2.0
++# See Makefile.vmlinux_o
++
++BEGIN {
++    discards = 0; p = 0
++}
++
++/^====/ { p = 1; next; }
++p && /\.ctf/ { next; }
++p && !discards && /DISCARD/ { sub(/\} *$/, " *(.ctf) }"); discards = 1 }
++p && /^\}/ && !discards { print "  /DISCARD/ : { *(.ctf) }"; }
++p { print $0; }
+diff --git a/scripts/verify_builtin_ranges.awk b/scripts/verify_builtin_ranges.awk
+new file mode 100755
+index 0000000000000..f513841da83e1
+--- /dev/null
++++ b/scripts/verify_builtin_ranges.awk
+@@ -0,0 +1,356 @@
++#!/usr/bin/gawk -f
++# SPDX-License-Identifier: GPL-2.0
++# verify_builtin_ranges.awk: Verify address range data for builtin modules
++# Written by Kris Van Hees <kris.van.hees@oracle.com>
++#
++# Usage: verify_builtin_ranges.awk modules.builtin.ranges System.map \
++#				   modules.builtin vmlinux.map vmlinux.o.map \
++#				   [ <build-dir> ]
++#
++
++# Return the module name(s) (if any) associated with the given object.
++#
++# If we have seen this object before, return information from the cache.
++# Otherwise, retrieve it from the corresponding .cmd file.
++#
++function get_module_info(fn, mod, obj, s) {
++	if (fn in omod)
++		return omod[fn];
++
++	if (match(fn, /\/[^/]+$/) == 0)
++		return "";
++
++	obj = fn;
++	mod = "";
++	fn = kdir "/" substr(fn, 1, RSTART) "." substr(fn, RSTART + 1) ".cmd";
++	if (getline s <fn == 1) {
++		if (match(s, /DKBUILD_MODFILE=['"]+[^'"]+/) > 0) {
++			mod = substr(s, RSTART + 16, RLENGTH - 16);
++			gsub(/['"]/, "", mod);
++		} else if (match(s, /RUST_MODFILE=[^ ]+/) > 0)
++			mod = substr(s, RSTART + 13, RLENGTH - 13);
++	} else {
++		print "ERROR: Failed to read: " fn "\n\n" \
++		      "  Invalid kernel build directory (" kdir ")\n" \
++		      "  or its content does not match " ARGV[1] >"/dev/stderr";
++		close(fn);
++		total = 0;
++		exit(1);
++	}
++	close(fn);
++
++	# A single module (common case) also reflects objects that are not part
++	# of a module.  Some of those objects have names that are also a module
++	# name (e.g. core).  We check the associated module file name, and if
++	# they do not match, the object is not part of a module.
++	if (mod !~ / /) {
++		if (!(mod in mods))
++			mod = "";
++	}
++
++	gsub(/([^/ ]*\/)+/, "", mod);
++	gsub(/-/, "_", mod);
++
++	# At this point, mod is a single (valid) module name, or a list of
++	# module names (that do not need validation).
++	omod[obj] = mod;
++
++	return mod;
++}
++
++# Return a representative integer value for a given hexadecimal address.
++#
++# Since all kernel addresses fall within the same memory region, we can safely
++# strip off the first 6 hex digits before performing the hex-to-dec conversion,
++# thereby avoiding integer overflows.
++#
++function addr2val(val) {
++	sub(/^0x/, "", val);
++	if (length(val) == 16)
++		val = substr(val, 5);
++	return strtonum("0x" val);
++}
++
++# Determine the kernel build directory to use (default is .).
++#
++BEGIN {
++	if (ARGC > 6) {
++		kdir = ARGV[ARGC - 1];
++		ARGV[ARGC - 1] = "";
++	} else
++		kdir = ".";
++}
++
++# (1) Load the built-in module address range data.
++#
++ARGIND == 1 {
++	ranges[FNR] = $0;
++	rcnt++;
++	next;
++}
++
++# (2) Annotate System.map symbols with module names.
++#
++ARGIND == 2 {
++	addr = addr2val($1);
++	name = $3;
++
++	while (addr >= mod_eaddr) {
++		if (sect_symb) {
++			if (sect_symb != name)
++				next;
++
++			sect_base = addr - sect_off;
++			if (dbg)
++				printf "[%s] BASE (%s) %016x - %016x = %016x\n", sect_name, sect_symb, addr, sect_off, sect_base >"/dev/stderr";
++			sect_symb = 0;
++		}
++
++		if (++ridx > rcnt)
++			break;
++
++		$0 = ranges[ridx];
++		sub(/-/, " ");
++		if ($4 != "=") {
++			sub(/-/, " ");
++			mod_saddr = strtonum("0x" $2) + sect_base;
++			mod_eaddr = strtonum("0x" $3) + sect_base;
++			$1 = $2 = $3 = "";
++			sub(/^ +/, "");
++			mod_name = $0;
++
++			if (dbg)
++				printf "[%s] %s from %016x to %016x\n", sect_name, mod_name, mod_saddr, mod_eaddr >"/dev/stderr";
++		} else {
++			sect_name = $1;
++			sect_off = strtonum("0x" $2);
++			sect_symb = $5;
++		}
++	}
++
++	idx = addr"-"name;
++	if (addr >= mod_saddr && addr < mod_eaddr)
++		sym2mod[idx] = mod_name;
++
++	next;
++}
++
++# Once we are done annotating the System.map, we no longer need the ranges data.
++#
++FNR == 1 && ARGIND == 3 {
++	delete ranges;
++}
++
++# (3) Build a lookup map of built-in module names.
++#
++# Lines from modules.builtin will be like:
++#	kernel/crypto/lzo-rle.ko
++# and we record the object name "crypto/lzo-rle".
++#
++ARGIND == 3 {
++	sub(/kernel\//, "");			# strip off "kernel/" prefix
++	sub(/\.ko$/, "");			# strip off .ko suffix
++
++	mods[$1] = 1;
++	next;
++}
++
++# (4) Get a list of symbols (per object).
++#
++# Symbols by object are read from vmlinux.map, with fallback to vmlinux.o.map
++# if vmlinux is found to have linked in vmlinux.o.
++#
++
++# If we were able to get the data we need from vmlinux.map, there is no need to
++# process vmlinux.o.map.
++#
++FNR == 1 && ARGIND == 5 && total > 0 {
++	if (dbg)
++		printf "Note: %s is not needed.\n", FILENAME >"/dev/stderr";
++	exit;
++}
++
++# First determine whether we are dealing with a GNU ld or LLVM lld linker map.
++#
++ARGIND >= 4 && FNR == 1 && NF == 7 && $1 == "VMA" && $7 == "Symbol" {
++	map_is_lld = 1;
++	next;
++}
++
++# (LLD) Convert a section record from lld format to ld format.
++#
++ARGIND >= 4 && map_is_lld && NF == 5 && /[0-9] [^ ]/ {
++	$0 = $5 " 0x"$1 " 0x"$3 " load address 0x"$2;
++}
++
++# (LLD) Convert an object record from lld format to ld format.
++#
++ARGIND >= 4 && map_is_lld && NF == 5 && $5 ~ /:\(\./ {
++	gsub(/\)/, "");
++	sub(/:\(/, " ");
++	sub(/ vmlinux\.a\(/, " ");
++	$0 = " "$6 " 0x"$1 " 0x"$3 " " $5;
++}
++
++# (LLD) Convert a symbol record from lld format to ld format.
++#
++ARGIND >= 4 && map_is_lld && NF == 5 && $5 ~ /^[A-Za-z_][A-Za-z0-9_]*$/ {
++	$0 = "  0x" $1 " " $5;
++}
++
++# (LLD) We do not need any other lld linker map records.
++#
++ARGIND >= 4 && map_is_lld && /^[0-9a-f]{16} / {
++	next;
++}
++
++# Handle section records with long section names (spilling onto a 2nd line).
++#
++ARGIND >= 4 && !map_is_lld && NF == 1 && /^[^ ]/ {
++	s = $0;
++	getline;
++	$0 = s " " $0;
++}
++
++# Next section - previous one is done.
++#
++ARGIND >= 4 && /^[^ ]/ {
++	sect = 0;
++}
++
++# Get the (top level) section name.
++#
++ARGIND >= 4 && /^[^ ]/ && $2 ~ /^0x/ && $3 ~ /^0x/ {
++	# Empty section or per-CPU section - ignore.
++	if (NF < 3 || $1 ~ /\.percpu/) {
++		sect = 0;
++		next;
++	}
++
++	sect = $1;
++
++	next;
++}
++
++# If we are not currently in a section we care about, ignore records.
++#
++!sect {
++	next;
++}
++
++# Handle object records with long section names (spilling onto a 2nd line).
++#
++ARGIND >= 4 && /^ [^ \*]/ && NF == 1 {
++	# If the section name is long, the remainder of the entry is found on
++	# the next line.
++	s = $0;
++	getline;
++	$0 = s " " $0;
++}
++
++# If the object is vmlinux.o, we need to consult vmlinux.o.map for per-object
++# symbol information.
++#
++ARGIND == 4 && /^ [^ ]/ && NF == 4 {
++	idx = sect":"$1;
++	if (!(idx in sect_addend)) {
++		sect_addend[idx] = addr2val($2);
++		if (dbg)
++			printf "ADDEND %s = %016x\n", idx, sect_addend[idx] >"/dev/stderr";
++	}
++	if ($4 == "vmlinux.o") {
++		need_o_map = 1;
++		next;
++	}
++}
++
++# If data from vmlinux.o.map is needed, we only process section and object
++# records from vmlinux.map to determine which section we need to pay attention
++# to in vmlinux.o.map.  So skip everything else from vmlinux.map.
++#
++ARGIND == 4 && need_o_map {
++	next;
++}
++
++# Get module information for the current object.
++#
++ARGIND >= 4 && /^ [^ ]/ && NF == 4 {
++	msect = $1;
++	mod_name = get_module_info($4);
++	mod_eaddr = addr2val($2) + addr2val($3);
++
++	next;
++}
++
++# Process a symbol record.
++#
++# Evaluate the module information obtained from vmlinux.map (or vmlinux.o.map)
++# as follows:
++#  - For all symbols in a given object:
++#     - If the symbol is annotated with the same module name(s) that the object
++#       belongs to, count it as a match.
++#     - Otherwise:
++#        - If the symbol is known to have duplicates of which at least one is
++#          in a built-in module, disregard it.
++#        - If the symbol is not annotated with any module name(s) AND the
++#          object belongs to built-in modules, count it as missing.
++#        - Otherwise, count it as a mismatch.
++#
++ARGIND >= 4 && /^ / && NF == 2 && $1 ~ /^0x/ {
++	idx = sect":"msect;
++	if (!(idx in sect_addend))
++		next;
++
++	addr = addr2val($1);
++
++	# Handle the rare but annoying case where a 0-size symbol is placed at
++	# the byte *after* the module range.  Based on vmlinux.map it will be
++	# considered part of the current object, but it falls just beyond the
++	# module address range.  Unfortunately, its address could be at the
++	# start of another built-in module, so the only safe thing to do is to
++	# ignore it.
++	if (mod_name && addr == mod_eaddr)
++		next;
++
++	# If we are processing vmlinux.o.map, we need to apply the base address
++	# of the section to the relative address on the record.
++	#
++	if (ARGIND == 5)
++		addr += sect_addend[idx];
++
++	idx = addr"-"$2;
++	mod = "";
++	if (idx in sym2mod) {
++		mod = sym2mod[idx];
++		if (sym2mod[idx] == mod_name) {
++			mod_matches++;
++			matches++;
++		} else if (mod_name == "") {
++			print $2 " in " sym2mod[idx] " (should NOT be)";
++			mismatches++;
++		} else {
++			print $2 " in " sym2mod[idx] " (should be " mod_name ")";
++			mismatches++;
++		}
++	} else if (mod_name != "") {
++		print $2 " should be in " mod_name;
++		missing++;
++	} else
++		matches++;
++
++	total++;
++
++	next;
++}
++
++# Issue the comparison report.
++#
++END {
++	if (total) {
++		printf "Verification of %s:\n", ARGV[1];
++		printf "  Correct matches:  %6d (%d%% of total)\n", matches, 100 * matches / total;
++		printf "    Module matches: %6d (%d%% of matches)\n", mod_matches, 100 * mod_matches / matches;
++		printf "  Mismatches:       %6d (%d%% of total)\n", mismatches, 100 * mismatches / total;
++		printf "  Missing:          %6d (%d%% of total)\n", missing, 100 * missing / total;
++	}
++}

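For completeness, the verifier is run exactly as its usage comment says, from the top of a built object tree (the optional trailing build-dir argument defaults to '.'); its END block prints a short summary of correct matches, mismatches and missing symbols:

    $ gawk -f scripts/verify_builtin_ranges.awk \
          modules.builtin.ranges System.map modules.builtin \
          vmlinux.map vmlinux.o.map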

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-07 14:23 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-07 14:23 UTC (permalink / raw
  To: gentoo-commits

commit:     5076bab3888698a67d7503e02d12976f689a60b1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep  7 14:22:49 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep  7 14:22:49 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5076bab3

Remove old dtrace patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 2995_dtrace-6.10_p1.patch | 2355 ---------------------------------------------
 1 file changed, 2355 deletions(-)

diff --git a/2995_dtrace-6.10_p1.patch b/2995_dtrace-6.10_p1.patch
deleted file mode 100644
index 97983060..00000000
--- a/2995_dtrace-6.10_p1.patch
+++ /dev/null
@@ -1,2355 +0,0 @@
-diff --git a/Documentation/dontdiff b/Documentation/dontdiff
-index 3c399f132e2db..75b9655e57914 100644
---- a/Documentation/dontdiff
-+++ b/Documentation/dontdiff
-@@ -179,7 +179,7 @@ mkutf8data
- modpost
- modules-only.symvers
- modules.builtin
--modules.builtin.modinfo
-+modules.builtin.*
- modules.nsdeps
- modules.order
- modversions.h*
-diff --git a/Documentation/kbuild/kbuild.rst b/Documentation/kbuild/kbuild.rst
-index 9c8d1d046ea56..79e104ffee715 100644
---- a/Documentation/kbuild/kbuild.rst
-+++ b/Documentation/kbuild/kbuild.rst
-@@ -17,6 +17,11 @@ modules.builtin
- This file lists all modules that are built into the kernel. This is used
- by modprobe to not fail when trying to load something builtin.
- 
-+modules.builtin.objs
-+-----------------------
-+This file contains object mapping of modules that are built into the kernel
-+to their corresponding object files used to build the module.
-+
- modules.builtin.modinfo
- -----------------------
- This file contains modinfo from all modules that are built into the kernel.
-diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
-index 5685d7bfe4d0f..8db62fe4dadff 100644
---- a/Documentation/process/changes.rst
-+++ b/Documentation/process/changes.rst
-@@ -63,9 +63,13 @@ cpio                   any              cpio --version
- GNU tar                1.28             tar --version
- gtags (optional)       6.6.5            gtags --version
- mkimage (optional)     2017.01          mkimage --version
-+GNU AWK (optional)     5.1.0            gawk --version
-+GNU C\ [#f2]_          12.0             gcc --version
-+binutils\ [#f2]_       2.36             ld -v
- ====================== ===============  ========================================
- 
- .. [#f1] Sphinx is needed only to build the Kernel documentation
-+.. [#f2] These are needed at build-time when CONFIG_CTF is enabled
- 
- Kernel compilation
- ******************
-@@ -198,6 +202,12 @@ platforms. The tool is available via the ``u-boot-tools`` package or can be
- built from the U-Boot source code. See the instructions at
- https://docs.u-boot.org/en/latest/build/tools.html#building-tools-for-linux
- 
-+GNU AWK
-+-------
-+
-+GNU AWK is needed if you want kernel builds to generate address range data for
-+builtin modules (CONFIG_BUILTIN_MODULE_RANGES).
-+
- System utilities
- ****************
- 
-diff --git a/Makefile b/Makefile
-index ab77d171e268d..425dd4d723155 100644
---- a/Makefile
-+++ b/Makefile
-@@ -1024,6 +1024,7 @@ include-$(CONFIG_UBSAN)		+= scripts/Makefile.ubsan
- include-$(CONFIG_KCOV)		+= scripts/Makefile.kcov
- include-$(CONFIG_RANDSTRUCT)	+= scripts/Makefile.randstruct
- include-$(CONFIG_GCC_PLUGINS)	+= scripts/Makefile.gcc-plugins
-+include-$(CONFIG_CTF)		+= scripts/Makefile.ctfa-toplevel
- 
- include $(addprefix $(srctree)/, $(include-y))
- 
-@@ -1151,7 +1152,11 @@ PHONY += vmlinux_o
- vmlinux_o: vmlinux.a $(KBUILD_VMLINUX_LIBS)
- 	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.vmlinux_o
- 
--vmlinux.o modules.builtin.modinfo modules.builtin: vmlinux_o
-+MODULES_BUILTIN := modules.builtin.modinfo
-+MODULES_BUILTIN += modules.builtin
-+MODULES_BUILTIN += modules.builtin.objs
-+
-+vmlinux.o $(MODULES_BUILTIN): vmlinux_o
- 	@:
- 
- PHONY += vmlinux
-@@ -1490,9 +1495,10 @@ endif # CONFIG_MODULES
- 
- # Directories & files removed with 'make clean'
- CLEAN_FILES += vmlinux.symvers modules-only.symvers \
--	       modules.builtin modules.builtin.modinfo modules.nsdeps \
-+	       modules.builtin modules.builtin.* modules.nsdeps \
- 	       compile_commands.json rust/test \
--	       rust-project.json .vmlinux.objs .vmlinux.export.c
-+	       rust-project.json .vmlinux.objs .vmlinux.export.c \
-+	       vmlinux.ctfa
- 
- # Directories & files removed with 'make mrproper'
- MRPROPER_FILES += include/config include/generated          \
-@@ -1586,6 +1592,8 @@ help:
- 	@echo  '                    (requires a recent binutils and recent build (System.map))'
- 	@echo  '  dir/file.ko     - Build module including final link'
- 	@echo  '  modules_prepare - Set up for building external modules'
-+	@echo  '  ctf             - Generate CTF type information, installed by make ctf_install'
-+	@echo  '  ctf_install     - Install CTF to INSTALL_MOD_PATH (default: /)'
- 	@echo  '  tags/TAGS	  - Generate tags file for editors'
- 	@echo  '  cscope	  - Generate cscope index'
- 	@echo  '  gtags           - Generate GNU GLOBAL index'
-@@ -1942,7 +1950,7 @@ clean: $(clean-dirs)
- 	$(call cmd,rmfiles)
- 	@find $(or $(KBUILD_EXTMOD), .) $(RCS_FIND_IGNORE) \
- 		\( -name '*.[aios]' -o -name '*.rsi' -o -name '*.ko' -o -name '.*.cmd' \
--		-o -name '*.ko.*' \
-+		-o -name '*.ko.*' -o -name '*.ctf' \
- 		-o -name '*.dtb' -o -name '*.dtbo' \
- 		-o -name '*.dtb.S' -o -name '*.dtbo.S' \
- 		-o -name '*.dt.yaml' -o -name 'dtbs-list' \
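For context on the help entries being dropped along with the rest of this patch: the documented workflow was just a normal build followed by the two extra targets, along the lines of the sketch below (staging path illustrative; both steps also needed CONFIG_CTF=y and a CTF-capable gcc/binutils, per HAVE_CTF_TOOLCHAIN further down).

    $ make -j$(nproc) ctf                       # forces the kernel build, then emits vmlinux.ctfa
    $ make INSTALL_MOD_PATH=/tmp/staging modules_install ctf_install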
-diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
-index 01067a2bc43b7..d2193b8dfad83 100644
---- a/arch/arm/vdso/Makefile
-+++ b/arch/arm/vdso/Makefile
-@@ -14,6 +14,10 @@ obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
- ccflags-y := -fPIC -fno-common -fno-builtin -fno-stack-protector
- ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO32
- 
-+# CTF in the vDSO would introduce a new section, which would
-+# expand the vDSO to more than a page.
-+ccflags-y += $(call cc-option,-gctf0)
-+
- ldflags-$(CONFIG_CPU_ENDIAN_BE8) := --be8
- ldflags-y := -Bsymbolic --no-undefined -soname=linux-vdso.so.1 \
- 	    -z max-page-size=4096 -shared $(ldflags-y) \
-diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
-index d63930c828397..6e84d3822cfe3 100644
---- a/arch/arm64/kernel/vdso/Makefile
-+++ b/arch/arm64/kernel/vdso/Makefile
-@@ -33,6 +33,10 @@ ldflags-y += -T
- ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
- ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
- 
-+# CTF in the vDSO would introduce a new section, which would
-+# expand the vDSO to more than a page.
-+ccflags-y += $(call cc-option,-gctf0)
-+
- # -Wmissing-prototypes and -Wmissing-declarations are removed from
- # the CFLAGS of vgettimeofday.c to make possible to build the
- # kernel with CONFIG_WERROR enabled.
-diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
-index d724d46b07c84..fbedb95223ae1 100644
---- a/arch/loongarch/vdso/Makefile
-+++ b/arch/loongarch/vdso/Makefile
-@@ -21,7 +21,8 @@ cflags-vdso := $(ccflags-vdso) \
- 	-O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
- 	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
- 	$(call cc-option, -fno-asynchronous-unwind-tables) \
--	$(call cc-option, -fno-stack-protector)
-+	$(call cc-option, -fno-stack-protector) \
-+	$(call cc-option,-gctf0)
- aflags-vdso := $(ccflags-vdso) \
- 	-D__ASSEMBLY__ -Wa,-gdwarf-2
- 
-diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
-index b289b2c1b2946..6c8d777525f9b 100644
---- a/arch/mips/vdso/Makefile
-+++ b/arch/mips/vdso/Makefile
-@@ -30,7 +30,8 @@ cflags-vdso := $(ccflags-vdso) \
- 	-O3 -g -fPIC -fno-strict-aliasing -fno-common -fno-builtin -G 0 \
- 	-mrelax-pic-calls $(call cc-option, -mexplicit-relocs) \
- 	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
--	$(call cc-option, -fno-asynchronous-unwind-tables)
-+	$(call cc-option, -fno-asynchronous-unwind-tables) \
-+	$(call cc-option,-gctf0)
- aflags-vdso := $(ccflags-vdso) \
- 	-D__ASSEMBLY__ -Wa,-gdwarf-2
- 
-diff --git a/arch/sparc/vdso/Makefile b/arch/sparc/vdso/Makefile
-index 243dbfc4609d8..e4f3e47074e9d 100644
---- a/arch/sparc/vdso/Makefile
-+++ b/arch/sparc/vdso/Makefile
-@@ -44,7 +44,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
- CFL := $(PROFILING) -mcmodel=medlow -fPIC -O2 -fasynchronous-unwind-tables -m64 \
-        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
-        -fno-omit-frame-pointer -foptimize-sibling-calls \
--       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
-+       $(call cc-option,-gctf0) -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
- 
- SPARC_REG_CFLAGS = -ffixed-g4 -ffixed-g5 -fcall-used-g5 -fcall-used-g7
- 
-diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
-index 215a1b202a918..2fa1613a06275 100644
---- a/arch/x86/entry/vdso/Makefile
-+++ b/arch/x86/entry/vdso/Makefile
-@@ -54,6 +54,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
- CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
-        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
-        -fno-omit-frame-pointer -foptimize-sibling-calls \
-+       $(call cc-option,-gctf0) \
-        -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
- 
- ifdef CONFIG_MITIGATION_RETPOLINE
-@@ -131,6 +132,7 @@ KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
- KBUILD_CFLAGS_32 += -fno-stack-protector
- KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
- KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
-+KBUILD_CFLAGS_32 += $(call cc-option,-gctf0)
- KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
- 
- ifdef CONFIG_MITIGATION_RETPOLINE
-diff --git a/arch/x86/um/vdso/Makefile b/arch/x86/um/vdso/Makefile
-index 6a77ea6434ffd..6db233b5edd75 100644
---- a/arch/x86/um/vdso/Makefile
-+++ b/arch/x86/um/vdso/Makefile
-@@ -40,7 +40,7 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
- #
- CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
-        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
--       -fno-omit-frame-pointer -foptimize-sibling-calls
-+       -fno-omit-frame-pointer -foptimize-sibling-calls $(call cc-option,-gctf0)
- 
- $(vobjs): KBUILD_CFLAGS += $(CFL)
- 
-diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
-index f00a8e18f389f..2e307c0824574 100644
---- a/include/asm-generic/vmlinux.lds.h
-+++ b/include/asm-generic/vmlinux.lds.h
-@@ -1014,6 +1014,7 @@
- 	*(.discard.*)							\
- 	*(.export_symbol)						\
- 	*(.modinfo)							\
-+	*(.ctf)								\
- 	/* ld.bfd warns about .gnu.version* even when not emitted */	\
- 	*(.gnu.version*)						\
- 
-diff --git a/include/linux/module.h b/include/linux/module.h
-index 330ffb59efe51..ec828908470c9 100644
---- a/include/linux/module.h
-+++ b/include/linux/module.h
-@@ -180,7 +180,9 @@ extern void cleanup_module(void);
- #ifdef MODULE
- #define MODULE_FILE
- #else
--#define MODULE_FILE	MODULE_INFO(file, KBUILD_MODFILE);
-+#define MODULE_FILE					                      \
-+			MODULE_INFO(file, KBUILD_MODFILE);                    \
-+			MODULE_INFO(objs, KBUILD_MODOBJS);
- #endif
- 
- /*
-diff --git a/init/Kconfig b/init/Kconfig
-index 9684e5d2b81c6..c1b00b2e4a43d 100644
---- a/init/Kconfig
-+++ b/init/Kconfig
-@@ -111,6 +111,12 @@ config PAHOLE_VERSION
- 	int
- 	default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
- 
-+config HAVE_CTF_TOOLCHAIN
-+	def_bool $(cc-option,-gctf) && $(ld-option,-lbfd -liberty -lctf -lbfd -liberty -lz -ldl -lc -o /dev/null)
-+	depends on CC_IS_GCC
-+	help
-+	  GCC and binutils support CTF generation.
-+
- config CONSTRUCTORS
- 	bool
- 
-diff --git a/lib/Kconfig b/lib/Kconfig
-index b0a76dff5c182..61d0be30b3562 100644
---- a/lib/Kconfig
-+++ b/lib/Kconfig
-@@ -633,6 +633,16 @@ config DIMLIB
- #
- config LIBFDT
- 	bool
-+#
-+# CTF support is select'ed if needed
-+#
-+config CTF
-+        bool "Compact Type Format generation"
-+        depends on HAVE_CTF_TOOLCHAIN
-+        help
-+          Emit a compact, compressed description of the kernel's datatypes and
-+          global variables into the vmlinux.ctfa archive (for in-tree modules)
-+          or into .ctf sections in kernel modules (for out-of-tree modules).
- 
- config OID_REGISTRY
- 	tristate
-diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
-index 59b6765d86b8f..dab7e6983eace 100644
---- a/lib/Kconfig.debug
-+++ b/lib/Kconfig.debug
-@@ -571,6 +571,21 @@ config VMLINUX_MAP
- 	  pieces of code get eliminated with
- 	  CONFIG_LD_DEAD_CODE_DATA_ELIMINATION.
- 
-+config BUILTIN_MODULE_RANGES
-+	bool "Generate address range information for builtin modules"
-+	depends on !LTO
-+	depends on VMLINUX_MAP
-+	help
-+	 When modules are built into the kernel, there will be no module name
-+	 associated with its symbols in /proc/kallsyms.  Tracers may want to
-+	 identify symbols by module name and symbol name regardless of whether
-+	 the module is configured as loadable or not.
-+
-+	 This option generates modules.builtin.ranges in the build tree with
-+	 offset ranges (per ELF section) for the module(s) they belong to.
-+	 It also records an anchor symbol to determine the load address of the
-+	 section.
-+
- config DEBUG_FORCE_WEAK_PER_CPU
- 	bool "Force weak per-cpu definitions"
- 	depends on DEBUG_KERNEL
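For reference, the modules.builtin.ranges file described in this (now removed) help text was plain text: one '<section> <start>-<end> <module>' line per offset range, plus '<section> <off>-<off> = <symbol>' anchor records that let consumers translate offsets into load addresses (see generate_builtin_ranges.awk further down). A hypothetical excerpt, offsets and module names invented:

    $ head -n 3 modules.builtin.ranges
    .text 00000000-00000000 = _text
    .text 000bd110-000bd32c intel_uncore
    .init.text 00001a40-00001c80 intel_uncore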
-diff --git a/scripts/.gitignore b/scripts/.gitignore
-index 3dbb8bb2457bc..11339fa922abd 100644
---- a/scripts/.gitignore
-+++ b/scripts/.gitignore
-@@ -11,3 +11,4 @@
- /sorttable
- /target.json
- /unifdef
-+y!/Makefile.ctf
-diff --git a/scripts/Makefile b/scripts/Makefile
-index fe56eeef09dd4..8e7eb174d3154 100644
---- a/scripts/Makefile
-+++ b/scripts/Makefile
-@@ -54,6 +54,7 @@ targets += module.lds
- 
- subdir-$(CONFIG_GCC_PLUGINS) += gcc-plugins
- subdir-$(CONFIG_MODVERSIONS) += genksyms
-+subdir-$(CONFIG_CTF)         += ctf
- subdir-$(CONFIG_SECURITY_SELINUX) += selinux
- 
- # Let clean descend into subdirs
-diff --git a/scripts/Makefile.ctfa b/scripts/Makefile.ctfa
-new file mode 100644
-index 0000000000000..2b10de139dce5
---- /dev/null
-+++ b/scripts/Makefile.ctfa
-@@ -0,0 +1,92 @@
-+# SPDX-License-Identifier: GPL-2.0-only
-+# ===========================================================================
-+# Module CTF/CTFA generation
-+# ===========================================================================
-+
-+include include/config/auto.conf
-+include $(srctree)/scripts/Kbuild.include
-+
-+# CTF is already present in every object file if CONFIG_CTF is enabled.
-+# vmlinux.lds.h strips it out of the finished kernel, but if nothing is done
-+# it will be deduplicated into module .ko's.  For out-of-tree module builds,
-+# this is what we want, but for in-tree modules we can save substantial
-+# space by deduplicating it against all the core kernel types as well.  So
-+# split the CTF out of in-tree module .ko's into separate .ctf files so that
-+# it doesn't take up space in the modules on disk, and let the specialized
-+# ctfarchive tool consume it and all the CTF in the vmlinux.o files when
-+# 'make ctf' is invoked, and use the same machinery that the linker uses to
-+# do CTF deduplication to emit vmlinux.ctfa containing the deduplicated CTF.
-+
-+# Nothing special needs to be done if CTF is turned off or if a standalone
-+# module is being built.
-+module-ctf-postlink = mv $(1).tmp $(1)
-+
-+ifdef CONFIG_CTF
-+
-+# This is quite tricky.  The CTF machinery needs to be told about all the
-+# built-in objects as well as all the external modules -- but Makefile.modfinal
-+# only knows about the latter.  So the toplevel makefile emits the names of the
-+# built-in objects into a temporary file, which is then catted and its contents
-+# used as prerequisites by this rule.
-+#
-+# We write the names of the object files to be scanned for CTF content into a
-+# file, then use that, to avoid hitting command-line length limits.
-+
-+ifeq ($(KBUILD_EXTMOD),)
-+ctf-modules := $(shell find . -name '*.ko.ctf' -print)
-+quiet_cmd_ctfa_raw = CTFARAW
-+      cmd_ctfa_raw = scripts/ctf/ctfarchive $@ .tmp_objects.builtin modules.builtin.objs $(ctf-filelist)
-+ctf-builtins := .tmp_objects.builtin
-+ctf-filelist := .tmp_ctf.filelist
-+ctf-filelist-raw := .tmp_ctf.filelist.raw
-+
-+define module-ctf-postlink =
-+	$(OBJCOPY) --only-section=.ctf $(1).tmp $(1).ctf && \
-+	$(OBJCOPY) --remove-section=.ctf $(1).tmp $(1) && rm -f $(1).tmp
-+endef
-+
-+# Split a list up like shell xargs does.
-+define xargs =
-+$(1) $(wordlist 1,1024,$(2))
-+$(if $(word 1025,$(2)),$(call xargs,$(1),$(wordlist 1025,$(words $(2)),$(2))))
-+endef
-+
-+$(ctf-filelist-raw): $(ctf-builtins) $(ctf-modules)
-+	@rm -f $(ctf-filelist-raw);
-+	$(call xargs,@printf "%s\n" >> $(ctf-filelist-raw),$^)
-+	@touch $(ctf-filelist-raw)
-+
-+$(ctf-filelist): $(ctf-filelist-raw)
-+	@rm -f $(ctf-filelist);
-+	@cat $(ctf-filelist-raw) | while read -r obj; do \
-+		case $$obj in \
-+		$(ctf-builtins)) cat $$obj >> $(ctf-filelist);; \
-+		*.a) ar t $$obj > $(ctf-filelist);; \
-+		*.builtin) cat $$obj >> $(ctf-filelist);; \
-+		*) echo "$$obj" >> $(ctf-filelist);; \
-+		esac; \
-+	done
-+	@touch $(ctf-filelist)
-+
-+# The raw CTF depends on the output CTF file list, and that depends
-+# on the .ko files for the modules.
-+.tmp_vmlinux.ctfa.raw: $(ctf-filelist) FORCE
-+	$(call if_changed,ctfa_raw)
-+
-+quiet_cmd_ctfa = CTFA
-+      cmd_ctfa = { echo 'int main () { return 0; } ' | \
-+		gcc -x c -c -o $<.stub -; \
-+	$(OBJCOPY) '--remove-section=.*' --add-section=.ctf=$< \
-+		 $<.stub $@; }
-+
-+# The CTF itself is an ELF executable with one section: the CTF.  This lets
-+# objdump work on it, at minimal size cost.
-+vmlinux.ctfa: .tmp_vmlinux.ctfa.raw FORCE
-+	$(call if_changed,ctfa)
-+
-+targets += vmlinux.ctfa
-+
-+endif		# KBUILD_EXTMOD
-+
-+endif		# !CONFIG_CTF
-+
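Stripped of make syntax, the in-tree postlink step defined above boils down to two objcopy calls per module, splitting the type data out of the just-linked object (foo.ko is a hypothetical module name):

    $ objcopy --only-section=.ctf foo.ko.tmp foo.ko.ctf
    $ objcopy --remove-section=.ctf foo.ko.tmp foo.ko && rm -f foo.ko.tmp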
-diff --git a/scripts/Makefile.ctfa-toplevel b/scripts/Makefile.ctfa-toplevel
-new file mode 100644
-index 0000000000000..210bef3854e9b
---- /dev/null
-+++ b/scripts/Makefile.ctfa-toplevel
-@@ -0,0 +1,54 @@
-+# SPDX-License-Identifier: GPL-2.0-only
-+# ===========================================================================
-+# CTF rules for the top-level makefile only
-+# ===========================================================================
-+
-+KBUILD_CFLAGS	+= $(call cc-option,-gctf)
-+KBUILD_LDFLAGS	+= $(call ld-option, --ctf-variables)
-+
-+ifeq ($(KBUILD_EXTMOD),)
-+
-+# CTF generation for in-tree code (modules, built-in and not, and core kernel)
-+
-+# This contains all the object files that are built directly into the
-+# kernel (including built-in modules), for consumption by ctfarchive in
-+# Makefile.modfinal.
-+# This is made doubly annoying by the presence of '.o' files which are actually
-+# thin ar archives, and the need to support file(1) versions too old to
-+# recognize them as archives at all.  (So we assume that everything that is not
-+# an ELF object is an archive.)
-+ifeq ($(SRCARCH),x86)
-+.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),bzImage) FORCE
-+else
-+ifeq ($(SRCARCH),arm64)
-+.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),Image) FORCE
-+else
-+.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),vmlinux) FORCE
-+endif
-+endif
-+	@echo $(KBUILD_VMLINUX_OBJS) | \
-+		tr " " "\n" | grep "\.o$$" | xargs -r file | \
-+		grep ELF | cut -d: -f1 > .tmp_objects.builtin
-+	@for archive in $$(echo $(KBUILD_VMLINUX_OBJS) |\
-+		tr " " "\n" | xargs -r file | grep -v ELF | cut -d: -f1); do \
-+		$(AR) t "$$archive" >> .tmp_objects.builtin; \
-+	done
-+
-+ctf: vmlinux.ctfa
-+PHONY += ctf ctf_install
-+
-+# Making CTF needs the builtin files.  We need to force everything to be
-+# built if not already done, since we need the .o files for the machinery
-+# above to work.
-+vmlinux.ctfa: KBUILD_BUILTIN := 1
-+vmlinux.ctfa: modules.builtin.objs .tmp_objects.builtin
-+	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modfinal vmlinux.ctfa
-+
-+ctf_install:
-+	$(Q)mkdir -p $(MODLIB)/kernel
-+	@ln -sf $(abspath $(srctree)) $(MODLIB)/source
-+	$(Q)cp -f $(objtree)/vmlinux.ctfa $(MODLIB)/kernel
-+
-+CLEAN_FILES += vmlinux.ctfa
-+
-+endif
-diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
-index 7f8ec77bf35c9..97b0ea2eee9d4 100644
---- a/scripts/Makefile.lib
-+++ b/scripts/Makefile.lib
-@@ -118,6 +118,8 @@ modname-multi = $(sort $(foreach m,$(multi-obj-ym),\
- __modname = $(or $(modname-multi),$(basetarget))
- 
- modname = $(subst $(space),:,$(__modname))
-+modname-objs = $($(modname)-objs) $($(modname)-y) $($(modname)-Y)
-+modname-objs-prefixed = $(sort $(strip $(addprefix $(obj)/, $(modname-objs))))
- modfile = $(addprefix $(obj)/,$(__modname))
- 
- # target with $(obj)/ and its suffix stripped
-@@ -131,7 +133,8 @@ name-fix = $(call stringify,$(call name-fix-token,$1))
- basename_flags = -DKBUILD_BASENAME=$(call name-fix,$(basetarget))
- modname_flags  = -DKBUILD_MODNAME=$(call name-fix,$(modname)) \
- 		 -D__KBUILD_MODNAME=kmod_$(call name-fix-token,$(modname))
--modfile_flags  = -DKBUILD_MODFILE=$(call stringify,$(modfile))
-+modfile_flags  = -DKBUILD_MODFILE=$(call stringify,$(modfile)) \
-+                 -DKBUILD_MODOBJS=$(call stringify,$(modfile).o:$(subst $(space),|,$(modname-objs-prefixed)))
- 
- _c_flags       = $(filter-out $(CFLAGS_REMOVE_$(target-stem).o), \
-                      $(filter-out $(ccflags-remove-y), \
-@@ -238,7 +241,7 @@ modkern_rustflags =                                              \
- 
- modkern_aflags = $(if $(part-of-module),				\
- 			$(KBUILD_AFLAGS_MODULE) $(AFLAGS_MODULE),	\
--			$(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL))
-+			$(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL) $(modfile_flags))
- 
- c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
- 		 -include $(srctree)/include/linux/compiler_types.h       \
-@@ -248,7 +251,7 @@ c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
- rust_flags     = $(_rust_flags) $(modkern_rustflags) @$(objtree)/include/generated/rustc_cfg
- 
- a_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
--		 $(_a_flags) $(modkern_aflags)
-+		 $(_a_flags) $(modkern_aflags) $(modname_flags)
- 
- cpp_flags      = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
- 		 $(_cpp_flags)
-diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
-index 3bec9043e4f38..06807e403d162 100644
---- a/scripts/Makefile.modfinal
-+++ b/scripts/Makefile.modfinal
-@@ -30,11 +30,16 @@ quiet_cmd_cc_o_c = CC [M]  $@
- %.mod.o: %.mod.c FORCE
- 	$(call if_changed_dep,cc_o_c)
- 
-+# for module-ctf-postlink
-+include $(srctree)/scripts/Makefile.ctfa
-+
- quiet_cmd_ld_ko_o = LD [M]  $@
-       cmd_ld_ko_o +=							\
- 	$(LD) -r $(KBUILD_LDFLAGS)					\
- 		$(KBUILD_LDFLAGS_MODULE) $(LDFLAGS_MODULE)		\
--		-T scripts/module.lds -o $@ $(filter %.o, $^)
-+		-T scripts/module.lds $(LDFLAGS_$(modname)) -o $@.tmp	\
-+		$(filter %.o, $^) &&					\
-+	$(call module-ctf-postlink,$@)					\
- 
- quiet_cmd_btf_ko = BTF [M] $@
-       cmd_btf_ko = 							\
-diff --git a/scripts/Makefile.modinst b/scripts/Makefile.modinst
-index 0afd75472679f..e668469ce098c 100644
---- a/scripts/Makefile.modinst
-+++ b/scripts/Makefile.modinst
-@@ -30,10 +30,12 @@ $(MODLIB)/modules.order: modules.order FORCE
- quiet_cmd_install_modorder = INSTALL $@
-       cmd_install_modorder = sed 's:^\(.*\)\.o$$:kernel/\1.ko:' $< > $@
- 
--# Install modules.builtin(.modinfo) even when CONFIG_MODULES is disabled.
--install-y += $(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo)
-+# Install modules.builtin(.modinfo,.ranges,.objs) even when CONFIG_MODULES is disabled.
-+install-y += $(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo modules.builtin.objs)
- 
--$(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo): $(MODLIB)/%: % FORCE
-+install-$(CONFIG_BUILTIN_MODULE_RANGES) += $(MODLIB)/modules.builtin.ranges
-+
-+$(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo modules.builtin.ranges modules.builtin.objs): $(MODLIB)/%: % FORCE
- 	$(call cmd,install)
- 
- endif
-diff --git a/scripts/Makefile.vmlinux b/scripts/Makefile.vmlinux
-index 49946cb968440..7e8b703799c85 100644
---- a/scripts/Makefile.vmlinux
-+++ b/scripts/Makefile.vmlinux
-@@ -33,6 +33,24 @@ targets += vmlinux
- vmlinux: scripts/link-vmlinux.sh vmlinux.o $(KBUILD_LDS) FORCE
- 	+$(call if_changed_dep,link_vmlinux)
- 
-+# module.builtin.ranges
-+# ---------------------------------------------------------------------------
-+ifdef CONFIG_BUILTIN_MODULE_RANGES
-+__default: modules.builtin.ranges
-+
-+quiet_cmd_modules_builtin_ranges = GEN     $@
-+      cmd_modules_builtin_ranges = $(real-prereqs) > $@
-+
-+targets += modules.builtin.ranges
-+modules.builtin.ranges: $(srctree)/scripts/generate_builtin_ranges.awk \
-+			modules.builtin vmlinux.map vmlinux.o.map FORCE
-+	$(call if_changed,modules_builtin_ranges)
-+
-+vmlinux.map: vmlinux
-+	@:
-+
-+endif
-+
- # Add FORCE to the prequisites of a target to force it to be always rebuilt.
- # ---------------------------------------------------------------------------
- 
-diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
-index 6de297916ce68..48ba44ce3b64b 100644
---- a/scripts/Makefile.vmlinux_o
-+++ b/scripts/Makefile.vmlinux_o
-@@ -1,7 +1,7 @@
- # SPDX-License-Identifier: GPL-2.0-only
- 
- PHONY := __default
--__default: vmlinux.o modules.builtin.modinfo modules.builtin
-+__default: vmlinux.o modules.builtin.modinfo modules.builtin modules.builtin.objs
- 
- include include/config/auto.conf
- include $(srctree)/scripts/Kbuild.include
-@@ -27,6 +27,18 @@ ifdef CONFIG_LTO_CLANG
- initcalls-lds := .tmp_initcalls.lds
- endif
- 
-+# Generate a linker script to delete CTF sections
-+# -----------------------------------------------
-+
-+quiet_cmd_gen_remove_ctf.lds = GEN     $@
-+      cmd_gen_remove_ctf.lds = \
-+	$(LD) -r --verbose | awk -f $(real-prereqs) > $@
-+
-+.tmp_remove-ctf.lds: $(srctree)/scripts/remove-ctf-lds.awk FORCE
-+	$(call if_changed,gen_remove_ctf.lds)
-+
-+targets := .tmp_remove-ctf.lds
-+
- # objtool for vmlinux.o
- # ---------------------------------------------------------------------------
- #
-@@ -42,13 +54,18 @@ vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION)	+= --noinstr \
- 
- objtool-args = $(vmlinux-objtool-args-y) --link
- 
--# Link of vmlinux.o used for section mismatch analysis
-+# Link of vmlinux.o used for section mismatch analysis: we also strip the CTF
-+# section out at this stage, since ctfarchive gets it from the underlying object
-+# files and linking it further is a waste of time.
- # ---------------------------------------------------------------------------
- 
-+vmlinux-o-ld-args-$(CONFIG_BUILTIN_MODULE_RANGES)	+= -Map=$@.map
-+
- quiet_cmd_ld_vmlinux.o = LD      $@
-       cmd_ld_vmlinux.o = \
- 	$(LD) ${KBUILD_LDFLAGS} -r -o $@ \
--	$(addprefix -T , $(initcalls-lds)) \
-+	$(vmlinux-o-ld-args-y) \
-+	$(addprefix -T , $(initcalls-lds)) -T .tmp_remove-ctf.lds \
- 	--whole-archive vmlinux.a --no-whole-archive \
- 	--start-group $(KBUILD_VMLINUX_LIBS) --end-group \
- 	$(cmd_objtool)
-@@ -58,7 +75,7 @@ define rule_ld_vmlinux.o
- 	$(call cmd,gen_objtooldep)
- endef
- 
--vmlinux.o: $(initcalls-lds) vmlinux.a $(KBUILD_VMLINUX_LIBS) FORCE
-+vmlinux.o: $(initcalls-lds) vmlinux.a $(KBUILD_VMLINUX_LIBS) .tmp_remove-ctf.lds FORCE
- 	$(call if_changed_rule,ld_vmlinux.o)
- 
- targets += vmlinux.o
-@@ -87,6 +104,19 @@ targets += modules.builtin
- modules.builtin: modules.builtin.modinfo FORCE
- 	$(call if_changed,modules_builtin)
- 
-+# module.builtin.objs
-+# ---------------------------------------------------------------------------
-+quiet_cmd_modules_builtin_objs = GEN     $@
-+      cmd_modules_builtin_objs = \
-+	tr '\0' '\n' < $< | \
-+	sed -n 's/^[[:alnum:]:_]*\.objs=//p' | \
-+	tr ' ' '\n' | uniq | sed -e 's|:|: |' -e 's:|: :g' | \
-+	tr -s ' ' > $@
-+
-+targets += modules.builtin.objs
-+modules.builtin.objs: modules.builtin.modinfo FORCE
-+	$(call if_changed,modules_builtin_objs)
-+
- # Add FORCE to the prequisites of a target to force it to be always rebuilt.
- # ---------------------------------------------------------------------------
- 
-diff --git a/scripts/ctf/.gitignore b/scripts/ctf/.gitignore
-new file mode 100644
-index 0000000000000..6a0eb1c3ceeab
---- /dev/null
-+++ b/scripts/ctf/.gitignore
-@@ -0,0 +1 @@
-+ctfarchive
-diff --git a/scripts/ctf/Makefile b/scripts/ctf/Makefile
-new file mode 100644
-index 0000000000000..3b83f93bb9f9a
---- /dev/null
-+++ b/scripts/ctf/Makefile
-@@ -0,0 +1,5 @@
-+ifdef CONFIG_CTF
-+hostprogs-always-y	:= ctfarchive
-+ctfarchive-objs		:= ctfarchive.o modules_builtin.o
-+HOSTLDLIBS_ctfarchive := -lctf
-+endif
-diff --git a/scripts/ctf/ctfarchive.c b/scripts/ctf/ctfarchive.c
-new file mode 100644
-index 0000000000000..92cc4912ed0ee
---- /dev/null
-+++ b/scripts/ctf/ctfarchive.c
-@@ -0,0 +1,413 @@
-+/* SPDX-License-Identifier: GPL-2.0 */
-+/*
-+ * ctfmerge.c: Read in CTF extracted from generated object files from a
-+ * specified directory and generate a CTF archive whose members are the
-+ * deduplicated CTF derived from those object files, split up by kernel
-+ * module.
-+ *
-+ * Copyright (c) 2019, 2023, Oracle and/or its affiliates.
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
-+ */
-+
-+#define _GNU_SOURCE 1
-+#include <errno.h>
-+#include <stdio.h>
-+#include <stdlib.h>
-+#include <string.h>
-+#include <ctf-api.h>
-+#include "modules_builtin.h"
-+
-+static ctf_file_t *output;
-+
-+static int private_ctf_link_add_ctf(ctf_file_t *fp,
-+				    const char *name)
-+{
-+#if !defined (CTF_LINK_FINAL)
-+	return ctf_link_add_ctf(fp, NULL, name);
-+#else
-+	/* Non-upstreamed, erroneously-broken API.  */
-+	return ctf_link_add_ctf(fp, NULL, name, NULL, 0);
-+#endif
-+}
-+
-+/*
-+ * Add a file to the link.
-+ */
-+static void add_to_link(const char *fn)
-+{
-+	if (private_ctf_link_add_ctf(output, fn) < 0)
-+	{
-+		fprintf(stderr, "Cannot add CTF file %s: %s\n", fn,
-+			ctf_errmsg(ctf_errno(output)));
-+		exit(1);
-+	}
-+}
-+
-+struct from_to
-+{
-+	char *from;
-+	char *to;
-+};
-+
-+/*
-+ * The world's stupidest hash table of FROM -> TO.
-+ */
-+static struct from_to **from_tos[256];
-+static size_t alloc_from_tos[256];
-+static size_t num_from_tos[256];
-+
-+static unsigned char from_to_hash(const char *from)
-+{
-+	unsigned char hval = 0;
-+
-+	const char *p;
-+	for (p = from; *p; p++)
-+		hval += *p;
-+
-+	return hval;
-+}
-+
-+/*
-+ * Note that we will need to add a CU mapping later on.
-+ *
-+ * Present purely to work around a binutils bug that stops
-+ * ctf_link_add_cu_mapping() working right when called repeatedly
-+ * with the same FROM.
-+ */
-+static int add_cu_mapping(const char *from, const char *to)
-+{
-+	ssize_t i, j;
-+
-+	i = from_to_hash(from);
-+
-+	for (j = 0; j < num_from_tos[i]; j++)
-+		if (strcmp(from, from_tos[i][j]->from) == 0) {
-+			char *tmp;
-+
-+			free(from_tos[i][j]->to);
-+			tmp = strdup(to);
-+			if (!tmp)
-+				goto oom;
-+			from_tos[i][j]->to = tmp;
-+			return 0;
-+		    }
-+
-+	if (num_from_tos[i] >= alloc_from_tos[i]) {
-+		struct from_to **tmp;
-+		if (alloc_from_tos[i] < 16)
-+			alloc_from_tos[i] = 16;
-+		else
-+			alloc_from_tos[i] *= 2;
-+
-+		tmp = realloc(from_tos[i], alloc_from_tos[i] * sizeof(struct from_to *));
-+		if (!tmp)
-+			goto oom;
-+
-+		from_tos[i] = tmp;
-+	}
-+
-+	j = num_from_tos[i];
-+	from_tos[i][j] = malloc(sizeof(struct from_to));
-+	if (from_tos[i][j] == NULL)
-+		goto oom;
-+	from_tos[i][j]->from = strdup(from);
-+	from_tos[i][j]->to = strdup(to);
-+	if (!from_tos[i][j]->from || !from_tos[i][j]->to)
-+		goto oom;
-+	num_from_tos[i]++;
-+
-+	return 0;
-+  oom:
-+	fprintf(stderr,
-+		"out of memory in add_cu_mapping\n");
-+	exit(1);
-+}
-+
-+/*
-+ * Finally tell binutils to add all the CU mappings, with duplicate FROMs
-+ * replaced with the most recent one.
-+ */
-+static void commit_cu_mappings(void)
-+{
-+	ssize_t i, j;
-+
-+	for (i = 0; i < 256; i++)
-+		for (j = 0; j < num_from_tos[i]; j++)
-+			ctf_link_add_cu_mapping(output, from_tos[i][j]->from,
-+						from_tos[i][j]->to);
-+}
-+
-+/*
-+ * Add a CU mapping to the link.
-+ *
-+ * CU mappings for built-in modules are added by suck_in_modules, below: here,
-+ * we only want to add mappings for names ending in '.ko.ctf', i.e. external
-+ * modules, which appear only in the filelist (since they are not built-in).
-+ * The pathnames are stripped off because modules don't have any, and hyphens
-+ * are translated into underscores.
-+ */
-+static void add_cu_mappings(const char *fn)
-+{
-+	const char *last_slash;
-+	const char *modname = fn;
-+	char *dynmodname = NULL;
-+	char *dash;
-+	size_t n;
-+
-+	last_slash = strrchr(modname, '/');
-+	if (last_slash)
-+		last_slash++;
-+	else
-+		last_slash = modname;
-+	modname = last_slash;
-+	if (strchr(modname, '-') != NULL)
-+	{
-+		dynmodname = strdup(last_slash);
-+		dash = dynmodname;
-+		while (dash != NULL) {
-+			dash = strchr(dash, '-');
-+			if (dash != NULL)
-+				*dash = '_';
-+		}
-+		modname = dynmodname;
-+	}
-+
-+	n = strlen(modname);
-+	if (strcmp(modname + n - strlen(".ko.ctf"), ".ko.ctf") == 0) {
-+		char *mod;
-+
-+		n -= strlen(".ko.ctf");
-+		mod = strndup(modname, n);
-+		add_cu_mapping(fn, mod);
-+		free(mod);
-+	}
-+	free(dynmodname);
-+}
-+
-+/*
-+ * Add the passed names as mappings to "vmlinux".
-+ */
-+static void add_builtins(const char *fn)
-+{
-+	if (add_cu_mapping(fn, "vmlinux") < 0)
-+	{
-+		fprintf(stderr, "Cannot add CTF CU mapping from %s to \"vmlinux\"\n",
-+			ctf_errmsg(ctf_errno(output)));
-+		exit(1);
-+	}
-+}
-+
-+/*
-+ * Do something with a file, line by line.
-+ */
-+static void suck_in_lines(const char *filename, void (*func)(const char *line))
-+{
-+	FILE *f;
-+	char *line = NULL;
-+	size_t line_size = 0;
-+
-+	f = fopen(filename, "r");
-+	if (f == NULL) {
-+		fprintf(stderr, "Cannot open %s: %s\n", filename,
-+			strerror(errno));
-+		exit(1);
-+	}
-+
-+	while (getline(&line, &line_size, f) >= 0) {
-+		size_t len = strlen(line);
-+
-+		if (len == 0)
-+			continue;
-+
-+		if (line[len-1] == '\n')
-+			line[len-1] = '\0';
-+
-+		func(line);
-+	}
-+	free(line);
-+
-+	if (ferror(f)) {
-+		fprintf(stderr, "Error reading from %s: %s\n", filename,
-+			strerror(errno));
-+		exit(1);
-+	}
-+
-+	fclose(f);
-+}
-+
-+/*
-+ * Pull in modules.builtin.objs and turn it into CU mappings.
-+ */
-+static void suck_in_modules(const char *modules_builtin_name)
-+{
-+	struct modules_builtin_iter *i;
-+	char *module_name = NULL;
-+	char **paths;
-+
-+	i = modules_builtin_iter_new(modules_builtin_name);
-+	if (i == NULL) {
-+		fprintf(stderr, "Cannot iterate over builtin module file.\n");
-+		exit(1);
-+	}
-+
-+	while ((paths = modules_builtin_iter_next(i, &module_name)) != NULL) {
-+		size_t j;
-+
-+		for (j = 0; paths[j] != NULL; j++) {
-+			char *alloc = NULL;
-+			char *path = paths[j];
-+			/*
-+			 * If the name doesn't start in ./, add it, to match the names
-+			 * passed to add_builtins.
-+			 */
-+			if (strncmp(paths[j], "./", 2) != 0) {
-+				char *p;
-+				if ((alloc = malloc(strlen(paths[j]) + 3)) == NULL) {
-+					fprintf(stderr, "Cannot allocate memory for "
-+						"builtin module object name %s.\n",
-+						paths[j]);
-+					exit(1);
-+				}
-+				p = alloc;
-+				p = stpcpy(p, "./");
-+				p = stpcpy(p, paths[j]);
-+				path = alloc;
-+			}
-+			if (add_cu_mapping(path, module_name) < 0) {
-+				fprintf(stderr, "Cannot add path -> module mapping for "
-+					"%s -> %s: %s\n", path, module_name,
-+					ctf_errmsg(ctf_errno(output)));
-+				exit(1);
-+			}
-+			free (alloc);
-+		}
-+		free(paths);
-+	}
-+	free(module_name);
-+	modules_builtin_iter_free(i);
-+}
-+
-+/*
-+ * Strip the leading .ctf. off all the module names: transform the default name
-+ * from _CTF_SECTION into shared_ctf, and chop any trailing .ctf off (since that
-+ * derives from the intermediate file used to keep the CTF out of the final
-+ * module).
-+ */
-+static char *transform_module_names(ctf_file_t *fp __attribute__((__unused__)),
-+				    const char *name,
-+				    void *arg __attribute__((__unused__)))
-+{
-+	if (strcmp(name, ".ctf") == 0)
-+		return strdup("shared_ctf");
-+
-+	if (strncmp(name, ".ctf", 4) == 0) {
-+		size_t n = strlen (name);
-+		if (strcmp(name + n - 4, ".ctf") == 0)
-+			n -= 4;
-+		return strndup(name + 4, n - 4);
-+	}
-+	return NULL;
-+}
-+
-+int main(int argc, char *argv[])
-+{
-+	int err;
-+	const char *output_file;
-+	unsigned char *file_data = NULL;
-+	size_t file_size;
-+	FILE *fp;
-+
-+	if (argc != 5) {
-+		fprintf(stderr, "Syntax: ctfarchive output-file objects.builtin modules.builtin\n");
-+		fprintf(stderr, "                   filelist\n");
-+		exit(1);
-+	}
-+
-+	output_file = argv[1];
-+
-+	/*
-+	 * First pull in the input files and add them to the link.
-+	 */
-+
-+	output = ctf_create(&err);
-+	if (!output) {
-+		fprintf(stderr, "Cannot create output CTF archive: %s\n",
-+			ctf_errmsg(err));
-+		return 1;
-+	}
-+
-+	suck_in_lines(argv[4], add_to_link);
-+
-+	/*
-+	 * Make sure that, even if all their types are shared, all modules have
-+	 * a ctf member that can be used as a child of the shared CTF.
-+	 */
-+	suck_in_lines(argv[4], add_cu_mappings);
-+
-+	/*
-+	 * Then pull in the builtin objects list and add them as
-+	 * mappings to "vmlinux".
-+	 */
-+
-+	suck_in_lines(argv[2], add_builtins);
-+
-+	/*
-+	 * Finally, pull in the object -> module mapping and add it
-+	 * as appropriate mappings.
-+	 */
-+	suck_in_modules(argv[3]);
-+
-+	/*
-+	 * Commit the added CU mappings.
-+	 */
-+	commit_cu_mappings();
-+
-+	/*
-+	 * Arrange to fix up the module names.
-+	 */
-+	ctf_link_set_memb_name_changer(output, transform_module_names, NULL);
-+
-+	/*
-+	 * Do the link.
-+	 */
-+	if (ctf_link(output, CTF_LINK_SHARE_DUPLICATED |
-+                     CTF_LINK_EMPTY_CU_MAPPINGS) < 0)
-+		goto ctf_err;
-+
-+	/*
-+	 * Write the output.
-+	 */
-+
-+	file_data = ctf_link_write(output, &file_size, 4096);
-+	if (!file_data)
-+		goto ctf_err;
-+
-+	fp = fopen(output_file, "w");
-+	if (!fp)
-+		goto err;
-+
-+	while ((err = fwrite(file_data, file_size, 1, fp)) == 0);
-+	if (ferror(fp)) {
-+		errno = ferror(fp);
-+		goto err;
-+	}
-+	if (fclose(fp) < 0)
-+		goto err;
-+	free(file_data);
-+	ctf_file_close(output);
-+
-+	return 0;
-+err:
-+	free(file_data);
-+	fprintf(stderr, "Cannot create output CTF archive: %s\n",
-+		strerror(errno));
-+	return 1;
-+ctf_err:
-+	fprintf(stderr, "Cannot create output CTF archive: %s\n",
-+		ctf_errmsg(ctf_errno(output)));
-+	return 1;
-+}
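As wired up in Makefile.ctfa above, the tool was invoked with the output archive, the builtin-objects list, modules.builtin.objs and the per-object CTF file list; the raw result was then wrapped into a one-section ELF container so objdump could inspect it. Roughly:

    $ scripts/ctf/ctfarchive .tmp_vmlinux.ctfa.raw .tmp_objects.builtin \
          modules.builtin.objs .tmp_ctf.filelist
    # cmd_ctfa then repacks .tmp_vmlinux.ctfa.raw as the sole .ctf section of a
    # tiny stub object, producing vmlinux.ctfa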
-diff --git a/scripts/ctf/modules_builtin.c b/scripts/ctf/modules_builtin.c
-new file mode 100644
-index 0000000000000..10af2bbc80e0c
---- /dev/null
-+++ b/scripts/ctf/modules_builtin.c
-@@ -0,0 +1,2 @@
-+/* SPDX-License-Identifier: GPL-2.0 */
-+#include "../modules_builtin.c"
-diff --git a/scripts/ctf/modules_builtin.h b/scripts/ctf/modules_builtin.h
-new file mode 100644
-index 0000000000000..5e0299e5600c2
---- /dev/null
-+++ b/scripts/ctf/modules_builtin.h
-@@ -0,0 +1,2 @@
-+/* SPDX-License-Identifier: GPL-2.0 */
-+#include "../modules_builtin.h"
-diff --git a/scripts/generate_builtin_ranges.awk b/scripts/generate_builtin_ranges.awk
-new file mode 100755
-index 0000000000000..51ae0458ffbdd
---- /dev/null
-+++ b/scripts/generate_builtin_ranges.awk
-@@ -0,0 +1,516 @@
-+#!/usr/bin/gawk -f
-+# SPDX-License-Identifier: GPL-2.0
-+# generate_builtin_ranges.awk: Generate address range data for builtin modules
-+# Written by Kris Van Hees <kris.van.hees@oracle.com>
-+#
-+# Usage: generate_builtin_ranges.awk modules.builtin vmlinux.map \
-+#		vmlinux.o.map [ <build-dir> ] > modules.builtin.ranges
-+#
-+
-+# Return the module name(s) (if any) associated with the given object.
-+#
-+# If we have seen this object before, return information from the cache.
-+# Otherwise, retrieve it from the corresponding .cmd file.
-+#
-+function get_module_info(fn, mod, obj, s) {
-+	if (fn in omod)
-+		return omod[fn];
-+
-+	if (match(fn, /\/[^/]+$/) == 0)
-+		return "";
-+
-+	obj = fn;
-+	mod = "";
-+	fn = kdir "/" substr(fn, 1, RSTART) "." substr(fn, RSTART + 1) ".cmd";
-+	if (getline s <fn == 1) {
-+		if (match(s, /DKBUILD_MODFILE=['"]+[^'"]+/) > 0) {
-+			mod = substr(s, RSTART + 16, RLENGTH - 16);
-+			gsub(/['"]/, "", mod);
-+		} else if (match(s, /RUST_MODFILE=[^ ]+/) > 0)
-+			mod = substr(s, RSTART + 13, RLENGTH - 13);
-+	}
-+	close(fn);
-+
-+	# A single module (common case) also reflects objects that are not part
-+	# of a module.  Some of those objects have names that are also a module
-+	# name (e.g. core).  We check the associated module file name, and if
-+	# they do not match, the object is not part of a module.
-+	if (mod !~ / /) {
-+		if (!(mod in mods))
-+			mod = "";
-+	}
-+
-+	gsub(/([^/ ]*\/)+/, "", mod);
-+	gsub(/-/, "_", mod);
-+
-+	# At this point, mod is a single (valid) module name, or a list of
-+	# module names (that do not need validation).
-+	omod[obj] = mod;
-+
-+	return mod;
-+}
-+
-+# Update the ranges entry for the given module 'mod' in section 'osect'.
-+#
-+# We use a modified absolute start address (soff + base) as index because we
-+# may need to insert an anchor record later that must be at the start of the
-+# section data, and the first module may very well start at the same address.
-+# So, we use (addr << 1) + 1 to allow a possible anchor record to be placed at
-+# (addr << 1).  This is safe because the index is only used to sort the entries
-+# before writing them out.
-+#
-+function update_entry(osect, mod, soff, eoff, sect, idx) {
-+	sect = sect_in[osect];
-+	idx = (soff + sect_base[osect]) * 2 + 1;
-+	entries[idx] = sprintf("%s %08x-%08x %s", sect, soff, eoff, mod);
-+	count[sect]++;
-+}
-+
-+# Determine the kernel build directory to use (default is .).
-+#
-+BEGIN {
-+	if (ARGC > 4) {
-+		kdir = ARGV[ARGC - 1];
-+		ARGV[ARGC - 1] = "";
-+	} else
-+		kdir = ".";
-+}
-+
-+# (1) Build a lookup map of built-in module names.
-+#
-+# The first file argument is used as input (modules.builtin).
-+#
-+# Lines will be like:
-+#	kernel/crypto/lzo-rle.ko
-+# and we record the object name "crypto/lzo-rle".
-+#
-+ARGIND == 1 {
-+	sub(/kernel\//, "");			# strip off "kernel/" prefix
-+	sub(/\.ko$/, "");			# strip off .ko suffix
-+
-+	mods[$1] = 1;
-+	next;
-+}
-+
-+# (2) Collect address information for each section.
-+#
-+# The second file argument is used as input (vmlinux.map).
-+#
-+# We collect the base address of the section in order to convert all addresses
-+# in the section into offset values.
-+#
-+# We collect the address of the anchor (or first symbol in the section if there
-+# is no explicit anchor) to allow users of the range data to calculate address
-+# ranges based on the actual load address of the section in the running kernel.
-+#
-+# We collect the start address of any sub-section (section included in the top
-+# level section being processed).  This is needed when the final linking was
-+# done using vmlinux.a because then the list of objects contained in each
-+# section is to be obtained from vmlinux.o.map.  The offset of the sub-section
-+# is recorded here, to be used as an addend when processing vmlinux.o.map
-+# later.
-+#
-+
-+# Both GNU ld and LLVM lld linker map format are supported by converting LLVM
-+# lld linker map records into equivalent GNU ld linker map records.
-+#
-+# The first record of the vmlinux.map file provides enough information to know
-+# which format we are dealing with.
-+#
-+ARGIND == 2 && FNR == 1 && NF == 7 && $1 == "VMA" && $7 == "Symbol" {
-+	map_is_lld = 1;
-+	if (dbg)
-+		printf "NOTE: %s uses LLVM lld linker map format\n", FILENAME >"/dev/stderr";
-+	next;
-+}
-+
-+# (LLD) Convert a section record from lld format to ld format.
-+#
-+# lld: ffffffff82c00000          2c00000   2493c0  8192 .data
-+#  ->
-+# ld:  .data           0xffffffff82c00000   0x2493c0 load address 0x0000000002c00000
-+#
-+ARGIND == 2 && map_is_lld && NF == 5 && /[0-9] [^ ]+$/ {
-+	$0 = $5 " 0x"$1 " 0x"$3 " load address 0x"$2;
-+}
-+
-+# (LLD) Convert an anchor record from lld format to ld format.
-+#
-+# lld: ffffffff81000000          1000000        0     1         _text = .
-+#  ->
-+# ld:                  0xffffffff81000000                _text = .
-+#
-+ARGIND == 2 && map_is_lld && !anchor && NF == 7 && raw_addr == "0x"$1 && $6 == "=" && $7 == "." {
-+	$0 = "  0x"$1 " " $5 " = .";
-+}
-+
-+# (LLD) Convert an object record from lld format to ld format.
-+#
-+# lld:            11480            11480     1f07    16         vmlinux.a(arch/x86/events/amd/uncore.o):(.text)
-+#  ->
-+# ld:   .text          0x0000000000011480     0x1f07 arch/x86/events/amd/uncore.o
-+#
-+ARGIND == 2 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
-+	gsub(/\)/, "");
-+	sub(/ vmlinux\.a\(/, " ");
-+	sub(/:\(/, " ");
-+	$0 = " "$6 " 0x"$1 " 0x"$3 " " $5;
-+}
-+
-+# (LLD) Convert a symbol record from lld format to ld format.
-+#
-+# We only care about these while processing a section for which no anchor has
-+# been determined yet.
-+#
-+# lld: ffffffff82a859a4          2a859a4        0     1                 btf_ksym_iter_id
-+#  ->
-+# ld:                  0xffffffff82a859a4                btf_ksym_iter_id
-+#
-+ARGIND == 2 && map_is_lld && sect && !anchor && NF == 5 && $5 ~ /^[_A-Za-z][_A-Za-z0-9]*$/ {
-+	$0 = "  0x"$1 " " $5;
-+}
-+
-+# (LLD) We do not need any other lld linker map records.
-+#
-+ARGIND == 2 && map_is_lld && /^[0-9a-f]{16} / {
-+	next;
-+}
-+
-+# (LD) Section records with just the section name at the start of the line
-+#      need to have the next line pulled in to determine whether it is a
-+#      loadable section.  If it is, the next line will contain a hex value
-+#      as first and second items.
-+#
-+ARGIND == 2 && !map_is_lld && NF == 1 && /^[^ ]/ {
-+	s = $0;
-+	getline;
-+	if ($1 !~ /^0x/ || $2 !~ /^0x/)
-+		next;
-+
-+	$0 = s " " $0;
-+}
-+
-+# (LD) Object records with just the section name denote records with a long
-+#      section name for which the remainder of the record can be found on the
-+#      next line.
-+#
-+# (This is also needed for vmlinux.o.map, when used.)
-+#
-+ARGIND >= 2 && !map_is_lld && NF == 1 && /^ [^ \*]/ {
-+	s = $0;
-+	getline;
-+	$0 = s " " $0;
-+}
-+
-+# Beginning a new section - done with the previous one (if any).
-+#
-+ARGIND == 2 && /^[^ ]/ {
-+	sect = 0;
-+}
-+
-+# Process a loadable section (we only care about .-sections).
-+#
-+# Record the section name and its base address.
-+# We also record the raw (non-stripped) address of the section because it can
-+# be used to identify an anchor record.
-+#
-+# Note:
-+# Since some AWK implementations cannot handle large integers, we strip off the
-+# first 4 hex digits from the address.  This is safe because the kernel space
-+# is not large enough for addresses to extend into those digits.  The portion
-+# to strip off is stored in addr_prefix as a regexp, so further clauses can
-+# perform a simple substitution to do the address stripping.
-+#
-+ARGIND == 2 && /^\./ {
-+	# Explicitly ignore a few sections that are not relevant here.
-+	if ($1 ~ /^\.orc_/ || $1 ~ /_sites$/ || $1 ~ /\.percpu/)
-+		next;
-+
-+	# Sections with a 0-address can be ignored as well.
-+	if ($2 ~ /^0x0+$/)
-+		next;
-+
-+	raw_addr = $2;
-+	addr_prefix = "^" substr($2, 1, 6);
-+	base = $2;
-+	sub(addr_prefix, "0x", base);
-+	base = strtonum(base);
-+	sect = $1;
-+	anchor = 0;
-+	sect_base[sect] = base;
-+	sect_size[sect] = strtonum($3);
-+
-+	if (dbg)
-+		printf "[%s] BASE   %016x\n", sect, base >"/dev/stderr";
-+
-+	next;
-+}
-+
-+# If we are not in a section we care about, we ignore the record.
-+#
-+ARGIND == 2 && !sect {
-+	next;
-+}
-+
-+# Record the first anchor symbol for the current section.
-+#
-+# An anchor record for the section bears the same raw address as the section
-+# record.
-+#
-+ARGIND == 2 && !anchor && NF == 4 && raw_addr == $1 && $3 == "=" && $4 == "." {
-+	anchor = sprintf("%s %08x-%08x = %s", sect, 0, 0, $2);
-+	sect_anchor[sect] = anchor;
-+
-+	if (dbg)
-+		printf "[%s] ANCHOR %016x = %s (.)\n", sect, 0, $2 >"/dev/stderr";
-+
-+	next;
-+}
-+
-+# If no anchor record was found for the current section, use the first symbol
-+# in the section as anchor.
-+#
-+ARGIND == 2 && !anchor && NF == 2 && $1 ~ /^0x/ && $2 !~ /^0x/ {
-+	addr = $1;
-+	sub(addr_prefix, "0x", addr);
-+	addr = strtonum(addr) - base;
-+	anchor = sprintf("%s %08x-%08x = %s", sect, addr, addr, $2);
-+	sect_anchor[sect] = anchor;
-+
-+	if (dbg)
-+		printf "[%s] ANCHOR %016x = %s\n", sect, addr, $2 >"/dev/stderr";
-+
-+	next;
-+}
-+
-+# The first occurrence of a section name in an object record establishes the
-+# addend (often 0) for that section.  This information is needed to handle
-+# sections that get combined in the final linking of vmlinux (e.g. .head.text
-+# getting included at the start of .text).
-+#
-+# If the section does not have a base yet, use the base of the encapsulating
-+# section.
-+#
-+ARGIND == 2 && sect && NF == 4 && /^ [^ \*]/ && !($1 in sect_addend) {
-+	if (!($1 in sect_base)) {
-+		sect_base[$1] = base;
-+
-+		if (dbg)
-+			printf "[%s] BASE   %016x\n", $1, base >"/dev/stderr";
-+	}
-+
-+	addr = $2;
-+	sub(addr_prefix, "0x", addr);
-+	addr = strtonum(addr);
-+	sect_addend[$1] = addr - sect_base[$1];
-+	sect_in[$1] = sect;
-+
-+	if (dbg)
-+		printf "[%s] ADDEND %016x - %016x = %016x\n",  $1, addr, base, sect_addend[$1] >"/dev/stderr";
-+
-+	# If the object is vmlinux.o then we will need vmlinux.o.map to get the
-+	# actual offsets of objects.
-+	if ($4 == "vmlinux.o")
-+		need_o_map = 1;
-+}
-+
-+# (3) Collect offset ranges (relative to the section base address) for built-in
-+# modules.
-+#
-+# If the final link was done using the actual objects, vmlinux.map contains all
-+# the information we need (see section (3a)).
-+# If linking was done using vmlinux.a as intermediary, we will need to process
-+# vmlinux.o.map (see section (3b)).
-+
-+# (3a) Determine offset range info using vmlinux.map.
-+#
-+# Since we are already processing vmlinux.map, the top level section that is
-+# being processed is already known.  If we do not have a base address for it,
-+# we do not need to process records for it.
-+#
-+# Given the object name, we determine the module(s) (if any) that the current
-+# object is associated with.
-+#
-+# If we were already processing objects for a (list of) module(s):
-+#  - If the current object belongs to the same module(s), update the range data
-+#    to include the current object.
-+#  - Otherwise, ensure that the end offset of the range is valid.
-+#
-+# If the current object does not belong to a built-in module, ignore it.
-+#
-+# If it does, we add a new built-in module offset range record.
-+#
-+ARGIND == 2 && !need_o_map && /^ [^ ]/ && NF == 4 && $3 != "0x0" {
-+	if (!(sect in sect_base))
-+		next;
-+
-+	# Turn the address into an offset from the section base.
-+	soff = $2;
-+	sub(addr_prefix, "0x", soff);
-+	soff = strtonum(soff) - sect_base[sect];
-+	eoff = soff + strtonum($3);
-+
-+	# Determine which (if any) built-in modules the object belongs to.
-+	mod = get_module_info($4);
-+
-+	# If we are processing a built-in module:
-+	#   - If the current object is within the same module, we update its
-+	#     entry by extending the range and move on
-+	#   - Otherwise:
-+	#       + If we are still processing within the same main section, we
-+	#         validate the end offset against the start offset of the
-+	#         current object (e.g. .rodata.str1.[18] objects are often
-+	#         listed with an incorrect size in the linker map)
-+	#       + Otherwise, we validate the end offset against the section
-+	#         size
-+	if (mod_name) {
-+		if (mod == mod_name) {
-+			mod_eoff = eoff;
-+			update_entry(mod_sect, mod_name, mod_soff, eoff);
-+
-+			next;
-+		} else if (sect == sect_in[mod_sect]) {
-+			if (mod_eoff > soff)
-+				update_entry(mod_sect, mod_name, mod_soff, soff);
-+		} else {
-+			v = sect_size[sect_in[mod_sect]];
-+			if (mod_eoff > v)
-+				update_entry(mod_sect, mod_name, mod_soff, v);
-+		}
-+	}
-+
-+	mod_name = mod;
-+
-+	# If we encountered an object that is not part of a built-in module, we
-+	# do not need to record any data.
-+	if (!mod)
-+		next;
-+
-+	# At this point, we encountered the start of a new built-in module.
-+	mod_name = mod;
-+	mod_soff = soff;
-+	mod_eoff = eoff;
-+	mod_sect = $1;
-+	update_entry($1, mod, soff, mod_eoff);
-+
-+	next;
-+}
-+
-+# If we do not need to parse the vmlinux.o.map file, we are done.
-+#
-+ARGIND == 3 && !need_o_map {
-+	if (dbg)
-+		printf "Note: %s is not needed.\n", FILENAME >"/dev/stderr";
-+	exit;
-+}
-+
-+# (3) Collect offset ranges (relative to the section base address) for built-in
-+# modules.
-+#
-+
-+# (LLD) Convert an object record from lld format to ld format.
-+#
-+ARGIND == 3 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
-+	gsub(/\)/, "");
-+	sub(/:\(/, " ");
-+
-+	sect = $6;
-+	if (!(sect in sect_addend))
-+		next;
-+
-+	sub(/ vmlinux\.a\(/, " ");
-+	$0 = " "sect " 0x"$1 " 0x"$3 " " $5;
-+}
-+
-+# (3b) Determine offset range info using vmlinux.o.map.
-+#
-+# If we do not know an addend for the object's section, we are interested in
-+# anything within that section.
-+#
-+# Determine the top-level section that the object's section was included in
-+# during the final link.  This is the section name offset range data will be
-+# associated with for this object.
-+#
-+# The remainder of the processing of the current object record follows the
-+# procedure outlined in (3a).
-+#
-+ARGIND == 3 && /^ [^ ]/ && NF == 4 && $3 != "0x0" {
-+	osect = $1;
-+	if (!(osect in sect_addend))
-+		next;
-+
-+	# We need to work with the main section.
-+	sect = sect_in[osect];
-+
-+	# Turn the address into an offset from the section base.
-+	soff = $2;
-+	sub(addr_prefix, "0x", soff);
-+	soff = strtonum(soff) + sect_addend[osect];
-+	eoff = soff + strtonum($3);
-+
-+	# Determine which (if any) built-in modules the object belongs to.
-+	mod = get_module_info($4);
-+
-+	# If we are processing a built-in module:
-+	#   - If the current object is within the same module, we update its
-+	#     entry by extending the range and move on
-+	#   - Otherwise:
-+	#       + If we are still processing within the same main section, we
-+	#         validate the end offset against the start offset of the
-+	#         current object (e.g. .rodata.str1.[18] objects are often
-+	#         listed with an incorrect size in the linker map)
-+	#       + Otherwise, we validate the end offset against the section
-+	#         size
-+	if (mod_name) {
-+		if (mod == mod_name) {
-+			mod_eoff = eoff;
-+			update_entry(mod_sect, mod_name, mod_soff, eoff);
-+
-+			next;
-+		} else if (sect == sect_in[mod_sect]) {
-+			if (mod_eoff > soff)
-+				update_entry(mod_sect, mod_name, mod_soff, soff);
-+		} else {
-+			v = sect_size[sect_in[mod_sect]];
-+			if (mod_eoff > v)
-+				update_entry(mod_sect, mod_name, mod_soff, v);
-+		}
-+	}
-+
-+	mod_name = mod;
-+
-+	# If we encountered an object that is not part of a built-in module, we
-+	# do not need to record any data.
-+	if (!mod)
-+		next;
-+
-+	# At this point, we encountered the start of a new built-in module.
-+	mod_name = mod;
-+	mod_soff = soff;
-+	mod_eoff = eoff;
-+	mod_sect = osect;
-+	update_entry(osect, mod, soff, mod_eoff);
-+
-+	next;
-+}
-+
-+# (4) Generate the output.
-+#
-+# Anchor records are added for each section that contains offset range data
-+# records.  They are added at an adjusted section base address (base << 1) to
-+# ensure they come first in the sorted records (see update_entry() above for
-+# more information).
-+#
-+# All entries are sorted by (adjusted) address to ensure that the output can be
-+# parsed in strict ascending address order.
-+#
-+END {
-+	for (sect in count) {
-+		if (sect in sect_anchor)
-+			entries[sect_base[sect] * 2] = sect_anchor[sect];
-+	}
-+
-+	n = asorti(entries, indices);
-+	for (i = 1; i <= n; i++)
-+		print entries[indices[i]];
-+}
-diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
-index f48d72d22dc2a..d7e6cd7781256 100644
---- a/scripts/mod/modpost.c
-+++ b/scripts/mod/modpost.c
-@@ -733,6 +733,7 @@ static const char *const section_white_list[] =
- 	".comment*",
- 	".debug*",
- 	".zdebug*",		/* Compressed debug sections. */
-+        ".ctf",			/* Type info */
- 	".GCC.command.line",	/* record-gcc-switches */
- 	".mdebug*",        /* alpha, score, mips etc. */
- 	".pdr",            /* alpha, score, mips etc. */
-diff --git a/scripts/modules_builtin.c b/scripts/modules_builtin.c
-new file mode 100644
-index 0000000000000..df52932a4417b
---- /dev/null
-+++ b/scripts/modules_builtin.c
-@@ -0,0 +1,200 @@
-+/* SPDX-License-Identifier: GPL-2.0 */
-+/*
-+ * A simple modules_builtin reader.
-+ *
-+ * (C) 2014, 2022 Oracle, Inc.  All rights reserved.
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
-+ */
-+
-+#include <errno.h>
-+#include <stdio.h>
-+#include <stdlib.h>
-+#include <string.h>
-+
-+#include "modules_builtin.h"
-+
-+/*
-+ * Read a modules.builtin.objs file and translate it into a stream of
-+ * name / module-name pairs.
-+ */
-+
-+/*
-+ * Construct a modules.builtin.objs iterator.
-+ */
-+struct modules_builtin_iter *
-+modules_builtin_iter_new(const char *modules_builtin_file)
-+{
-+	struct modules_builtin_iter *i;
-+
-+	i = calloc(1, sizeof(struct modules_builtin_iter));
-+	if (i == NULL)
-+		return NULL;
-+
-+	i->f = fopen(modules_builtin_file, "r");
-+
-+	if (i->f == NULL) {
-+		fprintf(stderr, "Cannot open builtin module file %s: %s\n",
-+			modules_builtin_file, strerror(errno));
-+		return NULL;
-+	}
-+
-+	return i;
-+}
-+
-+/*
-+ * Iterate, returning a new null-terminated array of object file names, and a
-+ * new dynamically-allocated module name.  (The module name passed in is freed.)
-+ *
-+ * The array of object file names should be freed by the caller: the strings it
-+ * points to are owned by the iterator, and should not be freed.
-+ */
-+
-+char ** __attribute__((__nonnull__))
-+modules_builtin_iter_next(struct modules_builtin_iter *i, char **module_name)
-+{
-+	size_t npaths = 1;
-+	char **module_paths;
-+	char *last_slash;
-+	char *last_dot;
-+	char *trailing_linefeed;
-+	char *object_name = i->line;
-+	char *dash;
-+	int composite = 0;
-+
-+	/*
-+	 * Read in all module entries, computing the suffixless, pathless name
-+	 * of the module and building the next arrayful of object file names for
-+	 * return.
-+	 *
-+	 * Modules can consist of multiple files: in this case, the portion
-+	 * before the colon is the path to the module (as before): the portion
-+	 * after the colon is a space-separated list of files that should be
-+	 * considered part of this module.  In this case, the portion before the
-+	 * name is an "object file" that does not actually exist: it is merged
-+	 * into built-in.a without ever being written out.
-+	 *
-+	 * All module names have - translated to _, to match what is done to the
-+	 * names of the same things when built as modules.
-+	 */
-+
-+	/*
-+	 * Reinvocation of exhausted iterator. Return NULL, once.
-+	 */
-+retry:
-+	if (getline(&i->line, &i->line_size, i->f) < 0) {
-+		if (ferror(i->f)) {
-+			fprintf(stderr, "Error reading from modules_builtin file:"
-+				" %s\n", strerror(errno));
-+			exit(1);
-+		}
-+		rewind(i->f);
-+		return NULL;
-+	}
-+
-+	if (i->line[0] == '\0')
-+		goto retry;
-+
-+	trailing_linefeed = strchr(i->line, '\n');
-+	if (trailing_linefeed != NULL)
-+		*trailing_linefeed = '\0';
-+
-+	/*
-+	 * Slice the line in two at the colon, if any.  If there is anything
-+	 * past the ': ', this is a composite module.  (We allow for no colon
-+	 * for robustness, even though one should always be present.)
-+	 */
-+	if (strchr(i->line, ':') != NULL) {
-+		char *name_start;
-+
-+		object_name = strchr(i->line, ':');
-+		*object_name = '\0';
-+		object_name++;
-+		name_start = object_name + strspn(object_name, " \n");
-+		if (*name_start != '\0') {
-+			composite = 1;
-+			object_name = name_start;
-+		}
-+	}
-+
-+	/*
-+	 * Figure out the module name.
-+	 */
-+	last_slash = strrchr(i->line, '/');
-+	last_slash = (!last_slash) ? i->line :
-+		last_slash + 1;
-+	free(*module_name);
-+	*module_name = strdup(last_slash);
-+	dash = *module_name;
-+
-+	while (dash != NULL) {
-+		dash = strchr(dash, '-');
-+		if (dash != NULL)
-+			*dash = '_';
-+	}
-+
-+	last_dot = strrchr(*module_name, '.');
-+	if (last_dot != NULL)
-+		*last_dot = '\0';
-+
-+	/*
-+	 * Multifile separator? Object file names explicitly stated:
-+	 * slice them up and shuffle them in.
-+	 *
-+	 * The array size may be an overestimate if any object file
-+	 * names start or end with spaces (very unlikely) but cannot be
-+	 * an underestimate.  (Check for it anyway.)
-+	 */
-+	if (composite) {
-+		char *one_object;
-+
-+		for (npaths = 0, one_object = object_name;
-+		     one_object != NULL;
-+		     npaths++, one_object = strchr(one_object + 1, ' '));
-+	}
-+
-+	module_paths = malloc((npaths + 1) * sizeof(char *));
-+	if (!module_paths) {
-+		fprintf(stderr, "%s: out of memory on module %s\n", __func__,
-+			*module_name);
-+		exit(1);
-+	}
-+
-+	if (composite) {
-+		char *one_object;
-+		size_t i = 0;
-+
-+		while ((one_object = strsep(&object_name, " ")) != NULL) {
-+			if (i >= npaths) {
-+				fprintf(stderr, "%s: num_objs overflow on module "
-+					"%s: this is a bug.\n", __func__,
-+					*module_name);
-+				exit(1);
-+			}
-+
-+			module_paths[i++] = one_object;
-+		}
-+	} else
-+		module_paths[0] = i->line;	/* untransformed module name */
-+
-+	module_paths[npaths] = NULL;
-+
-+	return module_paths;
-+}
-+
-+/*
-+ * Free an iterator. Can be called while iteration is underway, so even
-+ * state that is freed at the end of iteration must be freed here too.
-+ */
-+void
-+modules_builtin_iter_free(struct modules_builtin_iter *i)
-+{
-+	if (i == NULL)
-+		return;
-+	fclose(i->f);
-+	free(i->line);
-+	free(i);
-+}
-diff --git a/scripts/modules_builtin.h b/scripts/modules_builtin.h
-new file mode 100644
-index 0000000000000..5138792b42ef0
---- /dev/null
-+++ b/scripts/modules_builtin.h
-@@ -0,0 +1,48 @@
-+/* SPDX-License-Identifier: GPL-2.0 */
-+/*
-+ * A simple modules.builtin.objs reader.
-+ *
-+ * (C) 2014, 2022 Oracle, Inc.  All rights reserved.
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
-+ */
-+
-+#ifndef _LINUX_MODULES_BUILTIN_H
-+#define _LINUX_MODULES_BUILTIN_H
-+
-+#include <stdio.h>
-+#include <stddef.h>
-+
-+/*
-+ * modules.builtin.objs iteration state.
-+ */
-+struct modules_builtin_iter {
-+	FILE *f;
-+	char *line;
-+	size_t line_size;
-+};
-+
-+/*
-+ * Construct a modules_builtin.objs iterator.
-+ */
-+struct modules_builtin_iter *
-+modules_builtin_iter_new(const char *modules_builtin_file);
-+
-+/*
-+ * Iterate, returning a new null-terminated array of object file names, and a
-+ * new dynamically-allocated module name.  (The module name passed in is freed.)
-+ *
-+ * The array of object file names should be freed by the caller: the strings it
-+ * points to are owned by the iterator, and should not be freed.
-+ */
-+
-+char ** __attribute__((__nonnull__))
-+modules_builtin_iter_next(struct modules_builtin_iter *i, char **module_name);
-+
-+void
-+modules_builtin_iter_free(struct modules_builtin_iter *i);
-+
-+#endif
-diff --git a/scripts/package/kernel.spec b/scripts/package/kernel.spec
-index c52d517b93647..8f75906a96314 100644
---- a/scripts/package/kernel.spec
-+++ b/scripts/package/kernel.spec
-@@ -53,12 +53,18 @@ patch -p1 < %{SOURCE2}
- 
- %build
- %{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release}
-+%if %{with_ctf}
-+%{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release} ctf
-+%endif
- 
- %install
- mkdir -p %{buildroot}/lib/modules/%{KERNELRELEASE}
- cp $(%{make} %{makeflags} -s image_name) %{buildroot}/lib/modules/%{KERNELRELEASE}/vmlinuz
- # DEPMOD=true makes depmod no-op. We do not package depmod-generated files.
- %{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} DEPMOD=true modules_install
-+%if %{with_ctf}
-+%{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} ctf_install
-+%endif
- %{make} %{makeflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install
- cp System.map %{buildroot}/lib/modules/%{KERNELRELEASE}
- cp .config %{buildroot}/lib/modules/%{KERNELRELEASE}/config
-diff --git a/scripts/package/mkspec b/scripts/package/mkspec
-index ce201bfa8377c..aeb43c7ab1229 100755
---- a/scripts/package/mkspec
-+++ b/scripts/package/mkspec
-@@ -21,10 +21,16 @@ else
- echo '%define with_devel 0'
- fi
- 
-+if grep -q CONFIG_CTF=y include/config/auto.conf; then
-+echo '%define with_ctf %{?_without_ctf: 0} %{?!_without_ctf: 1}'
-+else
-+echo '%define with_ctf 0'
-+fi
- cat<<EOF
- %define ARCH ${ARCH}
- %define KERNELRELEASE ${KERNELRELEASE}
- %define pkg_release $("${srctree}/init/build-version")
-+
- EOF
- 
- cat "${srctree}/scripts/package/kernel.spec"
-diff --git a/scripts/remove-ctf-lds.awk b/scripts/remove-ctf-lds.awk
-new file mode 100644
-index 0000000000000..5d94d6ee99227
---- /dev/null
-+++ b/scripts/remove-ctf-lds.awk
-@@ -0,0 +1,12 @@
-+# SPDX-License-Identifier: GPL-2.0
-+# See Makefile.vmlinux_o
-+
-+BEGIN {
-+    discards = 0; p = 0
-+}
-+
-+/^====/ { p = 1; next; }
-+p && /\.ctf/ { next; }
-+p && !discards && /DISCARD/ { sub(/\} *$/, " *(.ctf) }"); discards = 1 }
-+p && /^\}/ && !discards { print "  /DISCARD/ : { *(.ctf) }"; }
-+p { print $0; }
-diff --git a/scripts/verify_builtin_ranges.awk b/scripts/verify_builtin_ranges.awk
-new file mode 100755
-index 0000000000000..f513841da83e1
---- /dev/null
-+++ b/scripts/verify_builtin_ranges.awk
-@@ -0,0 +1,356 @@
-+#!/usr/bin/gawk -f
-+# SPDX-License-Identifier: GPL-2.0
-+# verify_builtin_ranges.awk: Verify address range data for builtin modules
-+# Written by Kris Van Hees <kris.van.hees@oracle.com>
-+#
-+# Usage: verify_builtin_ranges.awk modules.builtin.ranges System.map \
-+#				   modules.builtin vmlinux.map vmlinux.o.map \
-+#				   [ <build-dir> ]
-+#
-+
-+# Return the module name(s) (if any) associated with the given object.
-+#
-+# If we have seen this object before, return information from the cache.
-+# Otherwise, retrieve it from the corresponding .cmd file.
-+#
-+function get_module_info(fn, mod, obj, s) {
-+	if (fn in omod)
-+		return omod[fn];
-+
-+	if (match(fn, /\/[^/]+$/) == 0)
-+		return "";
-+
-+	obj = fn;
-+	mod = "";
-+	fn = kdir "/" substr(fn, 1, RSTART) "." substr(fn, RSTART + 1) ".cmd";
-+	if (getline s <fn == 1) {
-+		if (match(s, /DKBUILD_MODFILE=['"]+[^'"]+/) > 0) {
-+			mod = substr(s, RSTART + 16, RLENGTH - 16);
-+			gsub(/['"]/, "", mod);
-+		} else if (match(s, /RUST_MODFILE=[^ ]+/) > 0)
-+			mod = substr(s, RSTART + 13, RLENGTH - 13);
-+	} else {
-+		print "ERROR: Failed to read: " fn "\n\n" \
-+		      "  Invalid kernel build directory (" kdir ")\n" \
-+		      "  or its content does not match " ARGV[1] >"/dev/stderr";
-+		close(fn);
-+		total = 0;
-+		exit(1);
-+	}
-+	close(fn);
-+
-+	# A single module (common case) also reflects objects that are not part
-+	# of a module.  Some of those objects have names that are also a module
-+	# name (e.g. core).  We check the associated module file name, and if
-+	# they do not match, the object is not part of a module.
-+	if (mod !~ / /) {
-+		if (!(mod in mods))
-+			mod = "";
-+	}
-+
-+	gsub(/([^/ ]*\/)+/, "", mod);
-+	gsub(/-/, "_", mod);
-+
-+	# At this point, mod is a single (valid) module name, or a list of
-+	# module names (that do not need validation).
-+	omod[obj] = mod;
-+
-+	return mod;
-+}
-+
-+# Return a representative integer value for a given hexadecimal address.
-+#
-+# Since all kernel addresses fall within the same memory region, we can safely
-+# strip off the first 6 hex digits before performing the hex-to-dec conversion,
-+# thereby avoiding integer overflows.
-+#
-+function addr2val(val) {
-+	sub(/^0x/, "", val);
-+	if (length(val) == 16)
-+		val = substr(val, 5);
-+	return strtonum("0x" val);
-+}
-+
-+# Determine the kernel build directory to use (default is .).
-+#
-+BEGIN {
-+	if (ARGC > 6) {
-+		kdir = ARGV[ARGC - 1];
-+		ARGV[ARGC - 1] = "";
-+	} else
-+		kdir = ".";
-+}
-+
-+# (1) Load the built-in module address range data.
-+#
-+ARGIND == 1 {
-+	ranges[FNR] = $0;
-+	rcnt++;
-+	next;
-+}
-+
-+# (2) Annotate System.map symbols with module names.
-+#
-+ARGIND == 2 {
-+	addr = addr2val($1);
-+	name = $3;
-+
-+	while (addr >= mod_eaddr) {
-+		if (sect_symb) {
-+			if (sect_symb != name)
-+				next;
-+
-+			sect_base = addr - sect_off;
-+			if (dbg)
-+				printf "[%s] BASE (%s) %016x - %016x = %016x\n", sect_name, sect_symb, addr, sect_off, sect_base >"/dev/stderr";
-+			sect_symb = 0;
-+		}
-+
-+		if (++ridx > rcnt)
-+			break;
-+
-+		$0 = ranges[ridx];
-+		sub(/-/, " ");
-+		if ($4 != "=") {
-+			sub(/-/, " ");
-+			mod_saddr = strtonum("0x" $2) + sect_base;
-+			mod_eaddr = strtonum("0x" $3) + sect_base;
-+			$1 = $2 = $3 = "";
-+			sub(/^ +/, "");
-+			mod_name = $0;
-+
-+			if (dbg)
-+				printf "[%s] %s from %016x to %016x\n", sect_name, mod_name, mod_saddr, mod_eaddr >"/dev/stderr";
-+		} else {
-+			sect_name = $1;
-+			sect_off = strtonum("0x" $2);
-+			sect_symb = $5;
-+		}
-+	}
-+
-+	idx = addr"-"name;
-+	if (addr >= mod_saddr && addr < mod_eaddr)
-+		sym2mod[idx] = mod_name;
-+
-+	next;
-+}
-+
-+# Once we are done annotating the System.map, we no longer need the ranges data.
-+#
-+FNR == 1 && ARGIND == 3 {
-+	delete ranges;
-+}
-+
-+# (3) Build a lookup map of built-in module names.
-+#
-+# Lines from modules.builtin will be like:
-+#	kernel/crypto/lzo-rle.ko
-+# and we record the object name "crypto/lzo-rle".
-+#
-+ARGIND == 3 {
-+	sub(/kernel\//, "");			# strip off "kernel/" prefix
-+	sub(/\.ko$/, "");			# strip off .ko suffix
-+
-+	mods[$1] = 1;
-+	next;
-+}
-+
-+# (4) Get a list of symbols (per object).
-+#
-+# Symbols by object are read from vmlinux.map, with fallback to vmlinux.o.map
-+# if vmlinux is found to have linked in vmlinux.o.
-+#
-+
-+# If we were able to get the data we need from vmlinux.map, there is no need to
-+# process vmlinux.o.map.
-+#
-+FNR == 1 && ARGIND == 5 && total > 0 {
-+	if (dbg)
-+		printf "Note: %s is not needed.\n", FILENAME >"/dev/stderr";
-+	exit;
-+}
-+
-+# First determine whether we are dealing with a GNU ld or LLVM lld linker map.
-+#
-+ARGIND >= 4 && FNR == 1 && NF == 7 && $1 == "VMA" && $7 == "Symbol" {
-+	map_is_lld = 1;
-+	next;
-+}
-+
-+# (LLD) Convert a section record from lld format to ld format.
-+#
-+ARGIND >= 4 && map_is_lld && NF == 5 && /[0-9] [^ ]/ {
-+	$0 = $5 " 0x"$1 " 0x"$3 " load address 0x"$2;
-+}
-+
-+# (LLD) Convert an object record from lld format to ld format.
-+#
-+ARGIND >= 4 && map_is_lld && NF == 5 && $5 ~ /:\(\./ {
-+	gsub(/\)/, "");
-+	sub(/:\(/, " ");
-+	sub(/ vmlinux\.a\(/, " ");
-+	$0 = " "$6 " 0x"$1 " 0x"$3 " " $5;
-+}
-+
-+# (LLD) Convert a symbol record from lld format to ld format.
-+#
-+ARGIND >= 4 && map_is_lld && NF == 5 && $5 ~ /^[A-Za-z_][A-Za-z0-9_]*$/ {
-+	$0 = "  0x" $1 " " $5;
-+}
-+
-+# (LLD) We do not need any other lld linker map records.
-+#
-+ARGIND >= 4 && map_is_lld && /^[0-9a-f]{16} / {
-+	next;
-+}
-+
-+# Handle section records with long section names (spilling onto a 2nd line).
-+#
-+ARGIND >= 4 && !map_is_lld && NF == 1 && /^[^ ]/ {
-+	s = $0;
-+	getline;
-+	$0 = s " " $0;
-+}
-+
-+# Next section - previous one is done.
-+#
-+ARGIND >= 4 && /^[^ ]/ {
-+	sect = 0;
-+}
-+
-+# Get the (top level) section name.
-+#
-+ARGIND >= 4 && /^[^ ]/ && $2 ~ /^0x/ && $3 ~ /^0x/ {
-+	# Empty section or per-CPU section - ignore.
-+	if (NF < 3 || $1 ~ /\.percpu/) {
-+		sect = 0;
-+		next;
-+	}
-+
-+	sect = $1;
-+
-+	next;
-+}
-+
-+# If we are not currently in a section we care about, ignore records.
-+#
-+!sect {
-+	next;
-+}
-+
-+# Handle object records with long section names (spilling onto a 2nd line).
-+#
-+ARGIND >= 4 && /^ [^ \*]/ && NF == 1 {
-+	# If the section name is long, the remainder of the entry is found on
-+	# the next line.
-+	s = $0;
-+	getline;
-+	$0 = s " " $0;
-+}
-+
-+# If the object is vmlinux.o, we need to consult vmlinux.o.map for per-object
-+# symbol information.
-+#
-+ARGIND == 4 && /^ [^ ]/ && NF == 4 {
-+	idx = sect":"$1;
-+	if (!(idx in sect_addend)) {
-+		sect_addend[idx] = addr2val($2);
-+		if (dbg)
-+			printf "ADDEND %s = %016x\n", idx, sect_addend[idx] >"/dev/stderr";
-+	}
-+	if ($4 == "vmlinux.o") {
-+		need_o_map = 1;
-+		next;
-+	}
-+}
-+
-+# If data from vmlinux.o.map is needed, we only process section and object
-+# records from vmlinux.map to determine which section we need to pay attention
-+# to in vmlinux.o.map.  So skip everything else from vmlinux.map.
-+#
-+ARGIND == 4 && need_o_map {
-+	next;
-+}
-+
-+# Get module information for the current object.
-+#
-+ARGIND >= 4 && /^ [^ ]/ && NF == 4 {
-+	msect = $1;
-+	mod_name = get_module_info($4);
-+	mod_eaddr = addr2val($2) + addr2val($3);
-+
-+	next;
-+}
-+
-+# Process a symbol record.
-+#
-+# Evaluate the module information obtained from vmlinux.map (or vmlinux.o.map)
-+# as follows:
-+#  - For all symbols in a given object:
-+#     - If the symbol is annotated with the same module name(s) that the object
-+#       belongs to, count it as a match.
-+#     - Otherwise:
-+#        - If the symbol is known to have duplicates of which at least one is
-+#          in a built-in module, disregard it.
-+#        - If the symbol is not annotated with any module name(s) AND the
-+#          object belongs to built-in modules, count it as missing.
-+#        - Otherwise, count it as a mismatch.
-+#
-+ARGIND >= 4 && /^ / && NF == 2 && $1 ~ /^0x/ {
-+	idx = sect":"msect;
-+	if (!(idx in sect_addend))
-+		next;
-+
-+	addr = addr2val($1);
-+
-+	# Handle the rare but annoying case where a 0-size symbol is placed at
-+	# the byte *after* the module range.  Based on vmlinux.map it will be
-+	# considered part of the current object, but it falls just beyond the
-+	# module address range.  Unfortunately, its address could be at the
-+	# start of another built-in module, so the only safe thing to do is to
-+	# ignore it.
-+	if (mod_name && addr == mod_eaddr)
-+		next;
-+
-+	# If we are processing vmlinux.o.map, we need to apply the base address
-+	# of the section to the relative address on the record.
-+	#
-+	if (ARGIND == 5)
-+		addr += sect_addend[idx];
-+
-+	idx = addr"-"$2;
-+	mod = "";
-+	if (idx in sym2mod) {
-+		mod = sym2mod[idx];
-+		if (sym2mod[idx] == mod_name) {
-+			mod_matches++;
-+			matches++;
-+		} else if (mod_name == "") {
-+			print $2 " in " sym2mod[idx] " (should NOT be)";
-+			mismatches++;
-+		} else {
-+			print $2 " in " sym2mod[idx] " (should be " mod_name ")";
-+			mismatches++;
-+		}
-+	} else if (mod_name != "") {
-+		print $2 " should be in " mod_name;
-+		missing++;
-+	} else
-+		matches++;
-+
-+	total++;
-+
-+	next;
-+}
-+
-+# Issue the comparison report.
-+#
-+END {
-+	if (total) {
-+		printf "Verification of %s:\n", ARGV[1];
-+		printf "  Correct matches:  %6d (%d%% of total)\n", matches, 100 * matches / total;
-+		printf "    Module matches: %6d (%d%% of matches)\n", mod_matches, 100 * mod_matches / matches;
-+		printf "  Mismatches:       %6d (%d%% of total)\n", mismatches, 100 * mismatches / total;
-+		printf "  Missing:          %6d (%d%% of total)\n", missing, 100 * missing / total;
-+	}
-+}


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-07 14:26 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-07 14:26 UTC (permalink / raw
  To: gentoo-commits

commit:     e6e998f862f051598cd9b9c6cdcb7ad1f9f16d65
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep  7 14:26:05 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep  7 14:26:05 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e6e998f8

libbpf: workaround (another) -Wmaybe-uninitialized false positive

Thanks to Sam

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 ++
 ...workaround-Wmaybe-uninitialized-false-pos.patch | 64 ++++++++++++++++++++++
 2 files changed, 68 insertions(+)

diff --git a/0000_README b/0000_README
index ad96829d..9948b161 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch:  2910_bfp-mark-get-entry-ip-as--maybe-unused.patch
 From:   https://www.spinics.net/lists/stable/msg604665.html
 Desc:   bpf: mark get_entry_ip as __maybe_unused
 
+Patch:  2911_libbpf-second-workaround-Wmaybe-uninitialized-false-pos.patch
+From:   https://lore.kernel.org/bpf/f6962729197ae7cdf4f6d1512625bd92f2322d31.1725630494.git.sam@gentoo.org/
+Desc:   libbpf: workaround (another) -Wmaybe-uninitialized false positive
+
 Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL

diff --git a/2911_libbpf-second-workaround-Wmaybe-uninitialized-false-pos.patch b/2911_libbpf-second-workaround-Wmaybe-uninitialized-false-pos.patch
new file mode 100644
index 00000000..f01221c7
--- /dev/null
+++ b/2911_libbpf-second-workaround-Wmaybe-uninitialized-false-pos.patch
@@ -0,0 +1,64 @@
+From git@z Thu Jan  1 00:00:00 1970
+Subject: [PATCH] libbpf: workaround (another) -Wmaybe-uninitialized false
+ positive
+From: Sam James <sam@gentoo.org>
+Date: Fri, 06 Sep 2024 14:48:14 +0100
+Message-Id: <f6962729197ae7cdf4f6d1512625bd92f2322d31.1725630494.git.sam@gentoo.org>
+MIME-Version: 1.0
+Content-Type: text/plain; charset="utf-8"
+Content-Transfer-Encoding: 8bit
+
+We get this with GCC 15 -O3 (at least):
+```
+libbpf.c: In function ‘bpf_map__init_kern_struct_ops’:
+libbpf.c:1109:18: error: ‘mod_btf’ may be used uninitialized [-Werror=maybe-uninitialized]
+ 1109 |         kern_btf = mod_btf ? mod_btf->btf : obj->btf_vmlinux;
+      |         ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+libbpf.c:1094:28: note: ‘mod_btf’ was declared here
+ 1094 |         struct module_btf *mod_btf;
+      |                            ^~~~~~~
+In function ‘find_struct_ops_kern_types’,
+    inlined from ‘bpf_map__init_kern_struct_ops’ at libbpf.c:1102:8:
+libbpf.c:982:21: error: ‘btf’ may be used uninitialized [-Werror=maybe-uninitialized]
+  982 |         kern_type = btf__type_by_id(btf, kern_type_id);
+      |                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+libbpf.c: In function ‘bpf_map__init_kern_struct_ops’:
+libbpf.c:967:21: note: ‘btf’ was declared here
+  967 |         struct btf *btf;
+      |                     ^~~
+```
+
+This is similar to the other libbpf fix from a few weeks ago for
+the same modelling-errno issue (fab45b962749184e1a1a57c7c583782b78fad539).
+
+Link: https://bugs.gentoo.org/939106
+Signed-off-by: Sam James <sam@gentoo.org>
+---
+ tools/lib/bpf/libbpf.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index a3be6f8fac09e..7315120574c29 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -988,7 +988,7 @@ find_struct_ops_kern_types(struct bpf_object *obj, const char *tname_raw,
+ {
+ 	const struct btf_type *kern_type, *kern_vtype;
+ 	const struct btf_member *kern_data_member;
+-	struct btf *btf;
++	struct btf *btf = NULL;
+ 	__s32 kern_vtype_id, kern_type_id;
+ 	char tname[256];
+ 	__u32 i;
+@@ -1115,7 +1115,7 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map)
+ 	const struct btf *btf = obj->btf;
+ 	struct bpf_struct_ops *st_ops;
+ 	const struct btf *kern_btf;
+-	struct module_btf *mod_btf;
++	struct module_btf *mod_btf = NULL;
+ 	void *data, *kern_data;
+ 	const char *tname;
+ 	int err;
+-- 
+2.46.0
+


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-07 18:10 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-07 18:10 UTC (permalink / raw
  To: gentoo-commits

commit:     b938df58480fa18cda9e6ffd6b0793832d6f04d0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep  7 18:10:27 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep  7 18:10:27 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b938df58

dtrace patch for 6.10.X (CTF, modules.builtin.objs) p3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   2 +-
 ...race-6.10_p2.patch => 2995_dtrace-6.10_p3.patch | 126 +++++++++------------
 2 files changed, 56 insertions(+), 72 deletions(-)

diff --git a/0000_README b/0000_README
index 9948b161..574cd30b 100644
--- a/0000_README
+++ b/0000_README
@@ -123,7 +123,7 @@ Patch:  2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
 From:   https://lore.kernel.org/bpf/
 Desc:   libbpf: workaround -Wmaybe-uninitialized false positive
 
-Patch:  2995_dtrace-6.10_p2.patch
+Patch:  2995_dtrace-6.10_p3.patch
 From:   https://github.com/thesamesam/linux/tree/dtrace-sam/v2/6.10
 Desc:   dtrace patch for 6.10.X (CTF, modules.builtin.objs)
 

diff --git a/2995_dtrace-6.10_p2.patch b/2995_dtrace-6.10_p3.patch
similarity index 96%
rename from 2995_dtrace-6.10_p2.patch
rename to 2995_dtrace-6.10_p3.patch
index 3686ca5b..775a7868 100644
--- a/2995_dtrace-6.10_p2.patch
+++ b/2995_dtrace-6.10_p3.patch
@@ -115,87 +115,78 @@ index 2e5ac6ab3d476..635896f269f1f 100644
  		-o -name '*.dtb.S' -o -name '*.dtbo.S' \
  		-o -name '*.dt.yaml' -o -name 'dtbs-list' \
 diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
-index 01067a2bc43b7..6464089596088 100644
+index 01067a2bc43b7..d2193b8dfad83 100644
 --- a/arch/arm/vdso/Makefile
 +++ b/arch/arm/vdso/Makefile
-@@ -14,6 +14,12 @@ obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+@@ -14,6 +14,10 @@ obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
  ccflags-y := -fPIC -fno-common -fno-builtin -fno-stack-protector
  ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO32
  
-+ifdef CONFIG_CTF
-+  # CTF in the vDSO would introduce a new section, which would
-+  # expand the vDSO to more than a page.
-+  ccflags-y += $(call cc-option,-gctf0)
-+endif
++# CTF in the vDSO would introduce a new section, which would
++# expand the vDSO to more than a page.
++ccflags-y += $(call cc-option,-gctf0)
 +
  ldflags-$(CONFIG_CPU_ENDIAN_BE8) := --be8
  ldflags-y := -Bsymbolic --no-undefined -soname=linux-vdso.so.1 \
  	    -z max-page-size=4096 -shared $(ldflags-y) \
 diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
-index d63930c828397..9ef2050690653 100644
+index d63930c828397..6e84d3822cfe3 100644
 --- a/arch/arm64/kernel/vdso/Makefile
 +++ b/arch/arm64/kernel/vdso/Makefile
-@@ -33,6 +33,12 @@ ldflags-y += -T
+@@ -33,6 +33,10 @@ ldflags-y += -T
  ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
  ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
  
-+ifdef CONFIG_CTF
-+  # CTF in the vDSO would introduce a new section, which would
-+  # expand the vDSO to more than a page.
-+  ccflags-y += $(call cc-option,-gctf0)
-+endif
++# CTF in the vDSO would introduce a new section, which would
++# expand the vDSO to more than a page.
++ccflags-y += $(call cc-option,-gctf0)
 +
  # -Wmissing-prototypes and -Wmissing-declarations are removed from
  # the CFLAGS of vgettimeofday.c to make possible to build the
  # kernel with CONFIG_WERROR enabled.
 diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
-index d724d46b07c84..95ef763218f7b 100644
+index d724d46b07c84..fbedb95223ae1 100644
 --- a/arch/loongarch/vdso/Makefile
 +++ b/arch/loongarch/vdso/Makefile
-@@ -22,6 +22,11 @@ cflags-vdso := $(ccflags-vdso) \
+@@ -21,7 +21,8 @@ cflags-vdso := $(ccflags-vdso) \
+ 	-O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
  	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
  	$(call cc-option, -fno-asynchronous-unwind-tables) \
- 	$(call cc-option, -fno-stack-protector)
-+
-+ifdef CONFIG_CTF
-+  cflags-vdso += $(call cc-option,-gctf0)
-+endif
-+
+-	$(call cc-option, -fno-stack-protector)
++	$(call cc-option, -fno-stack-protector) \
++	$(call cc-option,-gctf0)
  aflags-vdso := $(ccflags-vdso) \
  	-D__ASSEMBLY__ -Wa,-gdwarf-2
  
 diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
-index b289b2c1b2946..6b5603f3b6a2d 100644
+index b289b2c1b2946..6c8d777525f9b 100644
 --- a/arch/mips/vdso/Makefile
 +++ b/arch/mips/vdso/Makefile
-@@ -34,6 +34,10 @@ cflags-vdso := $(ccflags-vdso) \
+@@ -30,7 +30,8 @@ cflags-vdso := $(ccflags-vdso) \
+ 	-O3 -g -fPIC -fno-strict-aliasing -fno-common -fno-builtin -G 0 \
+ 	-mrelax-pic-calls $(call cc-option, -mexplicit-relocs) \
+ 	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+-	$(call cc-option, -fno-asynchronous-unwind-tables)
++	$(call cc-option, -fno-asynchronous-unwind-tables) \
++	$(call cc-option,-gctf0)
  aflags-vdso := $(ccflags-vdso) \
  	-D__ASSEMBLY__ -Wa,-gdwarf-2
  
-+ifdef CONFIG_CTF
-+cflags-vdso += $(call cc-option,-gctf0)
-+endif
-+
- ifneq ($(c-gettimeofday-y),)
- CFLAGS_vgettimeofday.o = -include $(c-gettimeofday-y)
- 
 diff --git a/arch/sparc/vdso/Makefile b/arch/sparc/vdso/Makefile
-index 243dbfc4609d8..432422fc3febb 100644
+index 243dbfc4609d8..e4f3e47074e9d 100644
 --- a/arch/sparc/vdso/Makefile
 +++ b/arch/sparc/vdso/Makefile
-@@ -46,6 +46,10 @@ CFL := $(PROFILING) -mcmodel=medlow -fPIC -O2 -fasynchronous-unwind-tables -m64
+@@ -44,7 +44,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=medlow -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
         -fno-omit-frame-pointer -foptimize-sibling-calls \
-        -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+-       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
++       $(call cc-option,-gctf0) -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
  
-+ifdef CONFIG_CTF
-+CFL += $(call cc-option,-gctf0)
-+endif
-+
  SPARC_REG_CFLAGS = -ffixed-g4 -ffixed-g5 -fcall-used-g5 -fcall-used-g7
  
- $(vobjs): KBUILD_CFLAGS := $(filter-out $(RANDSTRUCT_CFLAGS) $(GCC_PLUGINS_CFLAGS) $(SPARC_REG_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
 diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
-index 215a1b202a918..ffb33a1a7315b 100644
+index 215a1b202a918..2fa1613a06275 100644
 --- a/arch/x86/entry/vdso/Makefile
 +++ b/arch/x86/entry/vdso/Makefile
 @@ -54,6 +54,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
@@ -206,31 +197,27 @@ index 215a1b202a918..ffb33a1a7315b 100644
         -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
  
  ifdef CONFIG_MITIGATION_RETPOLINE
-@@ -131,6 +132,9 @@ KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
+@@ -131,6 +132,7 @@ KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
  KBUILD_CFLAGS_32 += -fno-stack-protector
  KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
  KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
-+ifdef CONFIG_CTF
-+  KBUILD_CFLAGS_32 += $(call cc-option,-gctf0)
-+endif
++KBUILD_CFLAGS_32 += $(call cc-option,-gctf0)
  KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
  
  ifdef CONFIG_MITIGATION_RETPOLINE
 diff --git a/arch/x86/um/vdso/Makefile b/arch/x86/um/vdso/Makefile
-index 6a77ea6434ffd..d99a60494286a 100644
+index 6a77ea6434ffd..6db233b5edd75 100644
 --- a/arch/x86/um/vdso/Makefile
 +++ b/arch/x86/um/vdso/Makefile
-@@ -42,6 +42,10 @@ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+@@ -40,7 +40,7 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
+ #
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
         $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
-        -fno-omit-frame-pointer -foptimize-sibling-calls
+-       -fno-omit-frame-pointer -foptimize-sibling-calls
++       -fno-omit-frame-pointer -foptimize-sibling-calls $(call cc-option,-gctf0)
  
-+if CONFIG_CTF
-+CFL += $(call cc-option,-gctf0)
-+endif
-+
  $(vobjs): KBUILD_CFLAGS += $(CFL)
  
- #
 diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
 index f00a8e18f389f..2e307c0824574 100644
 --- a/include/asm-generic/vmlinux.lds.h
@@ -244,17 +231,21 @@ index f00a8e18f389f..2e307c0824574 100644
  	*(.gnu.version*)						\
  
 diff --git a/include/linux/module.h b/include/linux/module.h
-index 330ffb59efe51..ec828908470c9 100644
+index 330ffb59efe51..2d9fcca542d13 100644
 --- a/include/linux/module.h
 +++ b/include/linux/module.h
-@@ -180,7 +180,9 @@ extern void cleanup_module(void);
+@@ -180,7 +180,13 @@ extern void cleanup_module(void);
  #ifdef MODULE
  #define MODULE_FILE
  #else
 -#define MODULE_FILE	MODULE_INFO(file, KBUILD_MODFILE);
++#ifdef CONFIG_CTF
 +#define MODULE_FILE					                      \
 +			MODULE_INFO(file, KBUILD_MODFILE);                    \
 +			MODULE_INFO(objs, KBUILD_MODOBJS);
++#else
++#define MODULE_FILE MODULE_INFO(file, KBUILD_MODFILE);
++#endif
  #endif
  
  /*
@@ -322,14 +313,6 @@ index 59b6765d86b8f..dab7e6983eace 100644
  config DEBUG_FORCE_WEAK_PER_CPU
  	bool "Force weak per-cpu definitions"
  	depends on DEBUG_KERNEL
-diff --git a/scripts/.gitignore b/scripts/.gitignore
-index 3dbb8bb2457bc..11339fa922abd 100644
---- a/scripts/.gitignore
-+++ b/scripts/.gitignore
-@@ -11,3 +11,4 @@
- /sorttable
- /target.json
- /unifdef
 +y!/Makefile.ctf
 diff --git a/scripts/Makefile b/scripts/Makefile
 index fe56eeef09dd4..8e7eb174d3154 100644
@@ -502,7 +485,7 @@ index 0000000000000..210bef3854e9b
 +
 +endif
 diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
-index 7f8ec77bf35c9..97b0ea2eee9d4 100644
+index 7f8ec77bf35c9..8e67961ba2ec9 100644
 --- a/scripts/Makefile.lib
 +++ b/scripts/Makefile.lib
 @@ -118,6 +118,8 @@ modname-multi = $(sort $(foreach m,$(multi-obj-ym),\
@@ -514,17 +497,18 @@ index 7f8ec77bf35c9..97b0ea2eee9d4 100644
  modfile = $(addprefix $(obj)/,$(__modname))
  
  # target with $(obj)/ and its suffix stripped
-@@ -131,7 +133,8 @@ name-fix = $(call stringify,$(call name-fix-token,$1))
- basename_flags = -DKBUILD_BASENAME=$(call name-fix,$(basetarget))
- modname_flags  = -DKBUILD_MODNAME=$(call name-fix,$(modname)) \
+@@ -133,6 +135,10 @@ modname_flags  = -DKBUILD_MODNAME=$(call name-fix,$(modname)) \
  		 -D__KBUILD_MODNAME=kmod_$(call name-fix-token,$(modname))
--modfile_flags  = -DKBUILD_MODFILE=$(call stringify,$(modfile))
-+modfile_flags  = -DKBUILD_MODFILE=$(call stringify,$(modfile)) \
-+                 -DKBUILD_MODOBJS=$(call stringify,$(modfile).o:$(subst $(space),|,$(modname-objs-prefixed)))
+ modfile_flags  = -DKBUILD_MODFILE=$(call stringify,$(modfile))
  
++ifdef CONFIG_CTF
++modfile_flags  += -DKBUILD_MODOBJS=$(call stringify,$(modfile).o:$(subst $(space),|,$(modname-objs-prefixed)))
++endif
++
  _c_flags       = $(filter-out $(CFLAGS_REMOVE_$(target-stem).o), \
                       $(filter-out $(ccflags-remove-y), \
-@@ -238,7 +241,7 @@ modkern_rustflags =                                              \
+                          $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(ccflags-y)) \
+@@ -238,7 +244,7 @@ modkern_rustflags =                                              \
  
  modkern_aflags = $(if $(part-of-module),				\
  			$(KBUILD_AFLAGS_MODULE) $(AFLAGS_MODULE),	\
@@ -533,7 +517,7 @@ index 7f8ec77bf35c9..97b0ea2eee9d4 100644
  
  c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
  		 -include $(srctree)/include/linux/compiler_types.h       \
-@@ -248,7 +251,7 @@ c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+@@ -248,7 +254,7 @@ c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
  rust_flags     = $(_rust_flags) $(modkern_rustflags) @$(objtree)/include/generated/rustc_cfg
  
  a_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-08 11:05 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-08 11:05 UTC (permalink / raw
  To: gentoo-commits

commit:     a965b43020306cd4562bffa7f427356df8ab29de
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Sep  8 11:05:11 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Sep  8 11:05:11 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a965b430

Linux patch 6.10.9

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1008_linux-6.10.9.patch | 7005 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7009 insertions(+)

diff --git a/0000_README b/0000_README
index 574cd30b..20bc0dd9 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-6.10.8.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.8
 
+Patch:  1008_linux-6.10.9.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.9
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1008_linux-6.10.9.patch b/1008_linux-6.10.9.patch
new file mode 100644
index 00000000..d0922587
--- /dev/null
+++ b/1008_linux-6.10.9.patch
@@ -0,0 +1,7005 @@
+diff --git a/Documentation/locking/hwspinlock.rst b/Documentation/locking/hwspinlock.rst
+index 6f03713b70039..2ffaa3cbd63f1 100644
+--- a/Documentation/locking/hwspinlock.rst
++++ b/Documentation/locking/hwspinlock.rst
+@@ -85,6 +85,17 @@ is already free).
+ 
+ Should be called from a process context (might sleep).
+ 
++::
++
++  int hwspin_lock_bust(struct hwspinlock *hwlock, unsigned int id);
++
++After verifying the owner of the hwspinlock, release a previously acquired
++hwspinlock; returns 0 on success, or an appropriate error code on failure
++(e.g. -EOPNOTSUPP if the bust operation is not defined for the specific
++hwspinlock).
++
++Should be called from a process context (might sleep).
++
+ ::
+ 
+   int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);
+diff --git a/Makefile b/Makefile
+index 2e5ac6ab3d476..5945cce6b0663 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+index 7d03316c279df..9f72e748c8041 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+@@ -173,6 +173,20 @@ vreg_edp_3p3: regulator-edp-3p3 {
+ 		regulator-always-on;
+ 		regulator-boot-on;
+ 	};
++
++	vreg_nvme: regulator-nvme {
++		compatible = "regulator-fixed";
++
++		regulator-name = "VREG_NVME_3P3";
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++
++		gpio = <&tlmm 18 GPIO_ACTIVE_HIGH>;
++		enable-active-high;
++
++		pinctrl-names = "default";
++		pinctrl-0 = <&nvme_reg_en>;
++	};
+ };
+ 
+ &apps_rsc {
+@@ -644,6 +658,12 @@ &mdss_dp3_phy {
+ };
+ 
+ &pcie4 {
++	perst-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>;
++	wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>;
++
++	pinctrl-0 = <&pcie4_default>;
++	pinctrl-names = "default";
++
+ 	status = "okay";
+ };
+ 
+@@ -655,6 +675,14 @@ &pcie4_phy {
+ };
+ 
+ &pcie6a {
++	perst-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>;
++	wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>;
++
++	vddpe-3v3-supply = <&vreg_nvme>;
++
++	pinctrl-names = "default";
++	pinctrl-0 = <&pcie6a_default>;
++
+ 	status = "okay";
+ };
+ 
+@@ -804,6 +832,59 @@ kybd_default: kybd-default-state {
+ 		bias-disable;
+ 	};
+ 
++	nvme_reg_en: nvme-reg-en-state {
++		pins = "gpio18";
++		function = "gpio";
++		drive-strength = <2>;
++		bias-disable;
++	};
++
++	pcie4_default: pcie4-default-state {
++		clkreq-n-pins {
++			pins = "gpio147";
++			function = "pcie4_clk";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++
++		perst-n-pins {
++			pins = "gpio146";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-disable;
++		};
++
++		wake-n-pins {
++			pins = "gpio148";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++	};
++
++	pcie6a_default: pcie6a-default-state {
++		clkreq-n-pins {
++			pins = "gpio153";
++			function = "pcie6a_clk";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++
++		perst-n-pins {
++			pins = "gpio152";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-pull-down;
++		};
++
++		wake-n-pins {
++			pins = "gpio154";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++	};
++
+ 	tpad_default: tpad-default-state {
+ 		pins = "gpio3";
+ 		function = "gpio";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+index 2d7dedb7e30f2..f90177a662b7d 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+@@ -59,6 +59,20 @@ vreg_edp_3p3: regulator-edp-3p3 {
+ 		regulator-always-on;
+ 		regulator-boot-on;
+ 	};
++
++	vreg_nvme: regulator-nvme {
++		compatible = "regulator-fixed";
++
++		regulator-name = "VREG_NVME_3P3";
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++
++		gpio = <&tlmm 18 GPIO_ACTIVE_HIGH>;
++		enable-active-high;
++
++		pinctrl-names = "default";
++		pinctrl-0 = <&nvme_reg_en>;
++	};
+ };
+ 
+ &apps_rsc {
+@@ -455,6 +469,12 @@ &mdss_dp3_phy {
+ };
+ 
+ &pcie4 {
++	perst-gpios = <&tlmm 146 GPIO_ACTIVE_LOW>;
++	wake-gpios = <&tlmm 148 GPIO_ACTIVE_LOW>;
++
++	pinctrl-0 = <&pcie4_default>;
++	pinctrl-names = "default";
++
+ 	status = "okay";
+ };
+ 
+@@ -466,6 +486,14 @@ &pcie4_phy {
+ };
+ 
+ &pcie6a {
++	perst-gpios = <&tlmm 152 GPIO_ACTIVE_LOW>;
++	wake-gpios = <&tlmm 154 GPIO_ACTIVE_LOW>;
++
++	vddpe-3v3-supply = <&vreg_nvme>;
++
++	pinctrl-names = "default";
++	pinctrl-0 = <&pcie6a_default>;
++
+ 	status = "okay";
+ };
+ 
+@@ -528,6 +556,59 @@ edp_reg_en: edp-reg-en-state {
+ 		drive-strength = <16>;
+ 		bias-disable;
+ 	};
++
++	nvme_reg_en: nvme-reg-en-state {
++		pins = "gpio18";
++		function = "gpio";
++		drive-strength = <2>;
++		bias-disable;
++	};
++
++	pcie4_default: pcie4-default-state {
++		clkreq-n-pins {
++			pins = "gpio147";
++			function = "pcie4_clk";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++
++		perst-n-pins {
++			pins = "gpio146";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-disable;
++		};
++
++		wake-n-pins {
++			pins = "gpio148";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++	};
++
++	pcie6a_default: pcie6a-default-state {
++		clkreq-n-pins {
++			pins = "gpio153";
++			function = "pcie6a_clk";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++
++		perst-n-pins {
++			pins = "gpio152";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-pull-down;
++		};
++
++		wake-n-pins {
++			pins = "gpio154";
++			function = "gpio";
++			drive-strength = <2>;
++			bias-pull-up;
++		};
++	};
+ };
+ 
+ &uart21 {
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 44df3f11e7319..7b4940530b462 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -462,7 +462,7 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
+ 		switch (c->x86_model) {
+ 		case 0x00 ... 0x2f:
+ 		case 0x40 ... 0x4f:
+-		case 0x70 ... 0x7f:
++		case 0x60 ... 0x7f:
+ 			setup_force_cpu_cap(X86_FEATURE_ZEN5);
+ 			break;
+ 		default:
+diff --git a/block/blk-integrity.c b/block/blk-integrity.c
+index ccbeb6dfa87a4..8dd8a0126274b 100644
+--- a/block/blk-integrity.c
++++ b/block/blk-integrity.c
+@@ -397,8 +397,6 @@ void blk_integrity_unregister(struct gendisk *disk)
+ 	if (!bi->profile)
+ 		return;
+ 
+-	/* ensure all bios are off the integrity workqueue */
+-	blk_flush_integrity();
+ 	blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, disk->queue);
+ 	memset(bi, 0, sizeof(*bi));
+ }
+diff --git a/crypto/ecc.c b/crypto/ecc.c
+index fe761256e335b..dd48d9928a210 100644
+--- a/crypto/ecc.c
++++ b/crypto/ecc.c
+@@ -78,7 +78,7 @@ void ecc_digits_from_bytes(const u8 *in, unsigned int nbytes,
+ 	/* diff > 0: not enough input bytes: set most significant digits to 0 */
+ 	if (diff > 0) {
+ 		ndigits -= diff;
+-		memset(&out[ndigits - 1], 0, diff * sizeof(u64));
++		memset(&out[ndigits], 0, diff * sizeof(u64));
+ 	}
+ 
+ 	if (o) {
+diff --git a/drivers/base/regmap/regmap-spi.c b/drivers/base/regmap/regmap-spi.c
+index 094cf2a2ca3cd..14b1d88997cbe 100644
+--- a/drivers/base/regmap/regmap-spi.c
++++ b/drivers/base/regmap/regmap-spi.c
+@@ -122,8 +122,7 @@ static const struct regmap_bus *regmap_get_spi_bus(struct spi_device *spi,
+ 			return ERR_PTR(-ENOMEM);
+ 
+ 		max_msg_size = spi_max_message_size(spi);
+-		reg_reserve_size = config->reg_bits / BITS_PER_BYTE
+-				 + config->pad_bits / BITS_PER_BYTE;
++		reg_reserve_size = (config->reg_bits + config->pad_bits) / BITS_PER_BYTE;
+ 		if (max_size + reg_reserve_size > max_msg_size)
+ 			max_size -= reg_reserve_size;
+ 
+diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
+index 3b4f6bfb2f4cf..b87fd127aa433 100644
+--- a/drivers/cpufreq/scmi-cpufreq.c
++++ b/drivers/cpufreq/scmi-cpufreq.c
+@@ -63,9 +63,9 @@ static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy,
+ 					     unsigned int target_freq)
+ {
+ 	struct scmi_data *priv = policy->driver_data;
++	unsigned long freq = target_freq;
+ 
+-	if (!perf_ops->freq_set(ph, priv->domain_id,
+-				target_freq * 1000, true))
++	if (!perf_ops->freq_set(ph, priv->domain_id, freq * 1000, true))
+ 		return target_freq;
+ 
+ 	return 0;
+diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
+index 11ad4ffdce0d4..84f5f30d5dddb 100644
+--- a/drivers/crypto/stm32/stm32-cryp.c
++++ b/drivers/crypto/stm32/stm32-cryp.c
+@@ -11,6 +11,7 @@
+ #include <crypto/internal/des.h>
+ #include <crypto/internal/skcipher.h>
+ #include <crypto/scatterwalk.h>
++#include <linux/bottom_half.h>
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+ #include <linux/err.h>
+@@ -1665,8 +1666,11 @@ static irqreturn_t stm32_cryp_irq_thread(int irq, void *arg)
+ 		it_mask &= ~IMSCR_OUT;
+ 	stm32_cryp_write(cryp, cryp->caps->imsc, it_mask);
+ 
+-	if (!cryp->payload_in && !cryp->header_in && !cryp->payload_out)
++	if (!cryp->payload_in && !cryp->header_in && !cryp->payload_out) {
++		local_bh_disable();
+ 		stm32_cryp_finish_req(cryp, 0);
++		local_bh_enable();
++	}
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/dma/altera-msgdma.c b/drivers/dma/altera-msgdma.c
+index a8e3615235b8e..041f549c8c990 100644
+--- a/drivers/dma/altera-msgdma.c
++++ b/drivers/dma/altera-msgdma.c
+@@ -233,7 +233,7 @@ static void msgdma_free_descriptor(struct msgdma_device *mdev,
+ 	struct msgdma_sw_desc *child, *next;
+ 
+ 	mdev->desc_free_cnt++;
+-	list_add_tail(&desc->node, &mdev->free_list);
++	list_move_tail(&desc->node, &mdev->free_list);
+ 	list_for_each_entry_safe(child, next, &desc->tx_list, node) {
+ 		mdev->desc_free_cnt++;
+ 		list_move_tail(&child->node, &mdev->free_list);
+@@ -583,17 +583,16 @@ static void msgdma_issue_pending(struct dma_chan *chan)
+ static void msgdma_chan_desc_cleanup(struct msgdma_device *mdev)
+ {
+ 	struct msgdma_sw_desc *desc, *next;
++	unsigned long irqflags;
+ 
+ 	list_for_each_entry_safe(desc, next, &mdev->done_list, node) {
+ 		struct dmaengine_desc_callback cb;
+ 
+-		list_del(&desc->node);
+-
+ 		dmaengine_desc_get_callback(&desc->async_tx, &cb);
+ 		if (dmaengine_desc_callback_valid(&cb)) {
+-			spin_unlock(&mdev->lock);
++			spin_unlock_irqrestore(&mdev->lock, irqflags);
+ 			dmaengine_desc_callback_invoke(&cb, NULL);
+-			spin_lock(&mdev->lock);
++			spin_lock_irqsave(&mdev->lock, irqflags);
+ 		}
+ 
+ 		/* Run any dependencies, then free the descriptor */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
+index c50202215f6b1..9baee7c246b6d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
+@@ -534,7 +534,7 @@ int amdgpu_aca_get_error_data(struct amdgpu_device *adev, struct aca_handle *han
+ 	if (aca_handle_is_valid(handle))
+ 		return -EOPNOTSUPP;
+ 
+-	if (!(BIT(type) & handle->mask))
++	if ((type < 0) || (!(BIT(type) & handle->mask)))
+ 		return  0;
+ 
+ 	return __aca_get_error_data(adev, handle, type, err_data, qctx);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c
+index a4d65973bf7cf..80771b1480fff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c
+@@ -100,6 +100,7 @@ struct amdgpu_afmt_acr amdgpu_afmt_acr(uint32_t clock)
+ 	amdgpu_afmt_calc_cts(clock, &res.cts_32khz, &res.n_32khz, 32000);
+ 	amdgpu_afmt_calc_cts(clock, &res.cts_44_1khz, &res.n_44_1khz, 44100);
+ 	amdgpu_afmt_calc_cts(clock, &res.cts_48khz, &res.n_48khz, 48000);
++	res.clock = clock;
+ 
+ 	return res;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index 48ad0c04aa72b..e675e4815650b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -415,6 +415,10 @@ static int amdgpu_amdkfd_bo_validate(struct amdgpu_bo *bo, uint32_t domain,
+ 		 "Called with userptr BO"))
+ 		return -EINVAL;
+ 
++	/* bo has been pinned, not need validate it */
++	if (bo->tbo.pin_count)
++		return 0;
++
+ 	amdgpu_bo_placement_from_domain(bo, domain);
+ 
+ 	ret = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
+@@ -2712,7 +2716,7 @@ static int confirm_valid_user_pages_locked(struct amdkfd_process_info *process_i
+ 
+ 		/* keep mem without hmm range at userptr_inval_list */
+ 		if (!mem->range)
+-			 continue;
++			continue;
+ 
+ 		/* Only check mem with hmm range associated */
+ 		valid = amdgpu_ttm_tt_get_user_pages_done(
+@@ -2957,9 +2961,6 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence __rcu *
+ 			if (!attachment->is_mapped)
+ 				continue;
+ 
+-			if (attachment->bo_va->base.bo->tbo.pin_count)
+-				continue;
+-
+ 			kfd_mem_dmaunmap_attachment(mem, attachment);
+ 			ret = update_gpuvm_pte(mem, attachment, &sync_obj);
+ 			if (ret) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+index 52b12c1718eb0..7dc102f0bc1d3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+@@ -1484,6 +1484,8 @@ int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
+ 										(u32)le32_to_cpu(*((u32 *)reg_data + j));
+ 									j++;
+ 								} else if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
++									if (i == 0)
++										continue;
+ 									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
+ 										reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
+ 								}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+index b8280be6225d9..c3d89088123db 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+@@ -213,6 +213,9 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
+ 		struct amdgpu_firmware_info *ucode;
+ 
+ 		id = fw_type_convert(cgs_device, type);
++		if (id >= AMDGPU_UCODE_ID_MAXIMUM)
++			return -EINVAL;
++
+ 		ucode = &adev->firmware.ucode[id];
+ 		if (ucode->fw == NULL)
+ 			return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 89cf9ac6da174..d24d7a1086240 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -5012,7 +5012,8 @@ static int amdgpu_device_recover_vram(struct amdgpu_device *adev)
+ 		shadow = vmbo->shadow;
+ 
+ 		/* No need to recover an evicted BO */
+-		if (shadow->tbo.resource->mem_type != TTM_PL_TT ||
++		if (!shadow->tbo.resource ||
++		    shadow->tbo.resource->mem_type != TTM_PL_TT ||
+ 		    shadow->tbo.resource->start == AMDGPU_BO_INVALID_OFFSET ||
+ 		    shadow->parent->tbo.resource->mem_type != TTM_PL_VRAM)
+ 			continue;
+@@ -5726,7 +5727,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ 	 * to put adev in the 1st position.
+ 	 */
+ 	INIT_LIST_HEAD(&device_list);
+-	if (!amdgpu_sriov_vf(adev) && (adev->gmc.xgmi.num_physical_nodes > 1)) {
++	if (!amdgpu_sriov_vf(adev) && (adev->gmc.xgmi.num_physical_nodes > 1) && hive) {
+ 		list_for_each_entry(tmp_adev, &hive->device_list, gmc.xgmi.head) {
+ 			list_add_tail(&tmp_adev->reset_list, &device_list);
+ 			if (adev->shutdown)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index f1b08893765cf..1ea55ee4796e0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -1597,7 +1597,7 @@ static int amdgpu_discovery_get_mall_info(struct amdgpu_device *adev)
+ 		break;
+ 	case 2:
+ 		mall_size_per_umc = le32_to_cpu(mall_info->v2.mall_size_per_umc);
+-		adev->gmc.mall_size = mall_size_per_umc * adev->gmc.num_umc;
++		adev->gmc.mall_size = (uint64_t)mall_size_per_umc * adev->gmc.num_umc;
+ 		break;
+ 	default:
+ 		dev_err(adev->dev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_eeprom.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_eeprom.c
+index e71768661ca8d..09a34c7258e22 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_eeprom.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_eeprom.c
+@@ -179,7 +179,7 @@ static int __amdgpu_eeprom_xfer(struct i2c_adapter *i2c_adap, u32 eeprom_addr,
+  * Returns the number of bytes read/written; -errno on error.
+  */
+ static int amdgpu_eeprom_xfer(struct i2c_adapter *i2c_adap, u32 eeprom_addr,
+-			      u8 *eeprom_buf, u16 buf_size, bool read)
++			      u8 *eeprom_buf, u32 buf_size, bool read)
+ {
+ 	const struct i2c_adapter_quirks *quirks = i2c_adap->quirks;
+ 	u16 limit;
+@@ -225,7 +225,7 @@ static int amdgpu_eeprom_xfer(struct i2c_adapter *i2c_adap, u32 eeprom_addr,
+ 
+ int amdgpu_eeprom_read(struct i2c_adapter *i2c_adap,
+ 		       u32 eeprom_addr, u8 *eeprom_buf,
+-		       u16 bytes)
++		       u32 bytes)
+ {
+ 	return amdgpu_eeprom_xfer(i2c_adap, eeprom_addr, eeprom_buf, bytes,
+ 				  true);
+@@ -233,7 +233,7 @@ int amdgpu_eeprom_read(struct i2c_adapter *i2c_adap,
+ 
+ int amdgpu_eeprom_write(struct i2c_adapter *i2c_adap,
+ 			u32 eeprom_addr, u8 *eeprom_buf,
+-			u16 bytes)
++			u32 bytes)
+ {
+ 	return amdgpu_eeprom_xfer(i2c_adap, eeprom_addr, eeprom_buf, bytes,
+ 				  false);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_eeprom.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_eeprom.h
+index 6935adb2be1f1..8083b8253ef43 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_eeprom.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_eeprom.h
+@@ -28,10 +28,10 @@
+ 
+ int amdgpu_eeprom_read(struct i2c_adapter *i2c_adap,
+ 		       u32 eeprom_addr, u8 *eeprom_buf,
+-		       u16 bytes);
++		       u32 bytes);
+ 
+ int amdgpu_eeprom_write(struct i2c_adapter *i2c_adap,
+ 			u32 eeprom_addr, u8 *eeprom_buf,
+-			u16 bytes);
++			u32 bytes);
+ 
+ #endif
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
+index c623e23049d1d..a6ddffbf8b4df 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
+@@ -34,6 +34,7 @@
+ #include <asm/set_memory.h>
+ #endif
+ #include "amdgpu.h"
++#include "amdgpu_reset.h"
+ #include <drm/drm_drv.h>
+ #include <drm/ttm/ttm_tt.h>
+ 
+@@ -408,7 +409,10 @@ void amdgpu_gart_invalidate_tlb(struct amdgpu_device *adev)
+ 		return;
+ 
+ 	mb();
+-	amdgpu_device_flush_hdp(adev, NULL);
++	if (down_read_trylock(&adev->reset_domain->sem)) {
++		amdgpu_device_flush_hdp(adev, NULL);
++		up_read(&adev->reset_domain->sem);
++	}
+ 	for_each_set_bit(i, adev->vmhubs_mask, AMDGPU_MAX_VMHUBS)
+ 		amdgpu_gmc_flush_gpu_tlb(adev, 0, i, 0);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index a0ea6fe8d0606..977cde6d13626 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -623,25 +623,32 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 			switch (type) {
+ 			case AMD_IP_BLOCK_TYPE_GFX:
+ 				ret = amdgpu_xcp_get_inst_details(xcp, AMDGPU_XCP_GFX, &inst_mask);
++				if (ret)
++					return ret;
+ 				count = hweight32(inst_mask);
+ 				break;
+ 			case AMD_IP_BLOCK_TYPE_SDMA:
+ 				ret = amdgpu_xcp_get_inst_details(xcp, AMDGPU_XCP_SDMA, &inst_mask);
++				if (ret)
++					return ret;
+ 				count = hweight32(inst_mask);
+ 				break;
+ 			case AMD_IP_BLOCK_TYPE_JPEG:
+ 				ret = amdgpu_xcp_get_inst_details(xcp, AMDGPU_XCP_VCN, &inst_mask);
++				if (ret)
++					return ret;
+ 				count = hweight32(inst_mask) * adev->jpeg.num_jpeg_rings;
+ 				break;
+ 			case AMD_IP_BLOCK_TYPE_VCN:
+ 				ret = amdgpu_xcp_get_inst_details(xcp, AMDGPU_XCP_VCN, &inst_mask);
++				if (ret)
++					return ret;
+ 				count = hweight32(inst_mask);
+ 				break;
+ 			default:
+ 				return -EINVAL;
+ 			}
+-			if (ret)
+-				return ret;
++
+ 			return copy_to_user(out, &count, min(size, 4u)) ? -EFAULT : 0;
+ 		}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index cef9dd0a012b5..b3df27ce76634 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -1375,6 +1375,9 @@ static void psp_xgmi_reflect_topology_info(struct psp_context *psp,
+ 	uint8_t dst_num_links = node_info.num_links;
+ 
+ 	hive = amdgpu_get_xgmi_hive(psp->adev);
++	if (WARN_ON(!hive))
++		return;
++
+ 	list_for_each_entry(mirror_adev, &hive->device_list, gmc.xgmi.head) {
+ 		struct psp_xgmi_topology_info *mirror_top_info;
+ 		int j;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index 0c4ee06451e9c..7ba90c5974ed3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -2112,6 +2112,7 @@ static void amdgpu_ras_interrupt_umc_handler(struct ras_manager *obj,
+ 	/* Let IP handle its data, maybe we need get the output
+ 	 * from the callback to update the error type/count, etc
+ 	 */
++	amdgpu_ras_set_fed(obj->adev, true);
+ 	ret = data->cb(obj->adev, &err_data, entry);
+ 	/* ue will trigger an interrupt, and in that case
+ 	 * we need do a reset to recovery the whole system.
+@@ -4504,3 +4505,21 @@ int amdgpu_ras_reserve_page(struct amdgpu_device *adev, uint64_t pfn)
+ 
+ 	return ret;
+ }
++
++void amdgpu_ras_event_log_print(struct amdgpu_device *adev, u64 event_id,
++				const char *fmt, ...)
++{
++	struct va_format vaf;
++	va_list args;
++
++	va_start(args, fmt);
++	vaf.fmt = fmt;
++	vaf.va = &args;
++
++	if (amdgpu_ras_event_id_is_valid(adev, event_id))
++		dev_printk(KERN_INFO, adev->dev, "{%llu}%pV", event_id, &vaf);
++	else
++		dev_printk(KERN_INFO, adev->dev, "%pV", &vaf);
++
++	va_end(args);
++}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h
+index 7021c4a66fb5e..18d20f6faa5fc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h
+@@ -67,13 +67,8 @@ struct amdgpu_iv_entry;
+ /* The high three bits indicates socketid */
+ #define AMDGPU_RAS_GET_FEATURES(val)  ((val) & ~AMDGPU_RAS_FEATURES_SOCKETID_MASK)
+ 
+-#define RAS_EVENT_LOG(_adev, _id, _fmt, ...)				\
+-do {									\
+-	if (amdgpu_ras_event_id_is_valid((_adev), (_id)))			\
+-	    dev_info((_adev)->dev, "{%llu}" _fmt, (_id), ##__VA_ARGS__);	\
+-	else								\
+-	    dev_info((_adev)->dev, _fmt, ##__VA_ARGS__);			\
+-} while (0)
++#define RAS_EVENT_LOG(adev, id, fmt, ...)	\
++	amdgpu_ras_event_log_print((adev), (id), (fmt), ##__VA_ARGS__)
+ 
+ enum amdgpu_ras_block {
+ 	AMDGPU_RAS_BLOCK__UMC = 0,
+@@ -956,4 +951,8 @@ int amdgpu_ras_put_poison_req(struct amdgpu_device *adev,
+ 		enum amdgpu_ras_block block, uint16_t pasid,
+ 		pasid_notify pasid_fn, void *data, uint32_t reset);
+ 
++__printf(3, 4)
++void amdgpu_ras_event_log_print(struct amdgpu_device *adev, u64 event_id,
++				const char *fmt, ...);
++
+ #endif
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index 88ffb15e25ccc..e6344a6b0a9f6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -354,7 +354,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
+ 	ring->max_dw = max_dw;
+ 	ring->hw_prio = hw_prio;
+ 
+-	if (!ring->no_scheduler) {
++	if (!ring->no_scheduler && ring->funcs->type < AMDGPU_HW_IP_NUM) {
+ 		hw_ip = ring->funcs->type;
+ 		num_sched = &adev->gpu_sched[hw_ip][hw_prio].num_scheds;
+ 		adev->gpu_sched[hw_ip][hw_prio].sched[(*num_sched)++] =
+@@ -475,8 +475,9 @@ static ssize_t amdgpu_debugfs_ring_read(struct file *f, char __user *buf,
+ 					size_t size, loff_t *pos)
+ {
+ 	struct amdgpu_ring *ring = file_inode(f)->i_private;
+-	int r, i;
+ 	uint32_t value, result, early[3];
++	loff_t i;
++	int r;
+ 
+ 	if (*pos & 3 || size & 3)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_securedisplay.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_securedisplay.c
+index 8ed0e073656f8..41ebe690eeffa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_securedisplay.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_securedisplay.c
+@@ -135,6 +135,10 @@ static ssize_t amdgpu_securedisplay_debugfs_write(struct file *f, const char __u
+ 		mutex_unlock(&psp->securedisplay_context.mutex);
+ 		break;
+ 	case 2:
++		if (size < 3 || phy_id >= TA_SECUREDISPLAY_MAX_PHY) {
++			dev_err(adev->dev, "Invalid input: %s\n", str);
++			return -EINVAL;
++		}
+ 		mutex_lock(&psp->securedisplay_context.mutex);
+ 		psp_prep_securedisplay_cmd_buf(psp, &securedisplay_cmd,
+ 			TA_SECUREDISPLAY_COMMAND__SEND_ROI_CRC);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+index 972a58f0f4924..2359d1d602751 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+@@ -395,6 +395,8 @@ static void amdgpu_virt_add_bad_page(struct amdgpu_device *adev,
+ 	else
+ 		vram_usage_va = adev->mman.drv_vram_usage_va;
+ 
++	memset(&bp, 0, sizeof(bp));
++
+ 	if (bp_block_size) {
+ 		bp_cnt = bp_block_size / sizeof(uint64_t);
+ 		for (bp_idx = 0; bp_idx < bp_cnt; bp_idx++) {
+@@ -583,7 +585,7 @@ static int amdgpu_virt_write_vf2pf_data(struct amdgpu_device *adev)
+ 	}
+ 	vf2pf_info->checksum =
+ 		amd_sriov_msg_checksum(
+-		vf2pf_info, vf2pf_info->header.size, 0, 0);
++		vf2pf_info, sizeof(*vf2pf_info), 0, 0);
+ 
+ 	return 0;
+ }
+@@ -600,7 +602,7 @@ static void amdgpu_virt_update_vf2pf_work_item(struct work_struct *work)
+ 		    amdgpu_sriov_runtime(adev) && !amdgpu_in_reset(adev)) {
+ 			amdgpu_ras_set_fed(adev, true);
+ 			if (amdgpu_reset_domain_schedule(adev->reset_domain,
+-							  &adev->virt.flr_work))
++							  &adev->kfd.reset_work))
+ 				return;
+ 			else
+ 				dev_err(adev->dev, "Failed to queue work! at %s", __func__);
+@@ -975,6 +977,9 @@ u32 amdgpu_virt_rlcg_reg_rw(struct amdgpu_device *adev, u32 offset, u32 v, u32 f
+ 		return 0;
+ 	}
+ 
++	if (amdgpu_device_skip_hw_access(adev))
++		return 0;
++
+ 	reg_access_ctrl = &adev->gfx.rlc.reg_access_ctrl[xcc_id];
+ 	scratch_reg0 = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->scratch_reg0;
+ 	scratch_reg1 = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->scratch_reg1;
+@@ -1051,6 +1056,9 @@ void amdgpu_sriov_wreg(struct amdgpu_device *adev,
+ {
+ 	u32 rlcg_flag;
+ 
++	if (amdgpu_device_skip_hw_access(adev))
++		return;
++
+ 	if (!amdgpu_sriov_runtime(adev) &&
+ 		amdgpu_virt_get_rlcg_reg_access_flag(adev, acc_flags, hwip, true, &rlcg_flag)) {
+ 		amdgpu_virt_rlcg_reg_rw(adev, offset, value, rlcg_flag, xcc_id);
+@@ -1068,6 +1076,9 @@ u32 amdgpu_sriov_rreg(struct amdgpu_device *adev,
+ {
+ 	u32 rlcg_flag;
+ 
++	if (amdgpu_device_skip_hw_access(adev))
++		return 0;
++
+ 	if (!amdgpu_sriov_runtime(adev) &&
+ 		amdgpu_virt_get_rlcg_reg_access_flag(adev, acc_flags, hwip, false, &rlcg_flag))
+ 		return amdgpu_virt_rlcg_reg_rw(adev, offset, 0, rlcg_flag, xcc_id);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index 6c30eceec8965..f91cc149d06c8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -31,6 +31,8 @@
+ #include "amdgpu_atomfirmware.h"
+ #include "atom.h"
+ 
++#define AMDGPU_MAX_SG_SEGMENT_SIZE	(2UL << 30)
++
+ struct amdgpu_vram_reservation {
+ 	u64 start;
+ 	u64 size;
+@@ -518,9 +520,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
+ 		else
+ 			min_block_size = mgr->default_page_size;
+ 
+-		/* Limit maximum size to 2GiB due to SG table limitations */
+-		size = min(remaining_size, 2ULL << 30);
+-
++		size = remaining_size;
+ 		if ((size >= (u64)pages_per_block << PAGE_SHIFT) &&
+ 		    !(size & (((u64)pages_per_block << PAGE_SHIFT) - 1)))
+ 			min_block_size = (u64)pages_per_block << PAGE_SHIFT;
+@@ -660,7 +660,7 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
+ 	amdgpu_res_first(res, offset, length, &cursor);
+ 	while (cursor.remaining) {
+ 		num_entries++;
+-		amdgpu_res_next(&cursor, cursor.size);
++		amdgpu_res_next(&cursor, min(cursor.size, AMDGPU_MAX_SG_SEGMENT_SIZE));
+ 	}
+ 
+ 	r = sg_alloc_table(*sgt, num_entries, GFP_KERNEL);
+@@ -680,7 +680,7 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
+ 	amdgpu_res_first(res, offset, length, &cursor);
+ 	for_each_sgtable_sg((*sgt), sg, i) {
+ 		phys_addr_t phys = cursor.start + adev->gmc.aper_base;
+-		size_t size = cursor.size;
++		unsigned long size = min(cursor.size, AMDGPU_MAX_SG_SEGMENT_SIZE);
+ 		dma_addr_t addr;
+ 
+ 		addr = dma_map_resource(dev, phys, size, dir,
+@@ -693,7 +693,7 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
+ 		sg_dma_address(sg) = addr;
+ 		sg_dma_len(sg) = size;
+ 
+-		amdgpu_res_next(&cursor, cursor.size);
++		amdgpu_res_next(&cursor, size);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+index dd2ec48cf5c26..4a14f9c1bfe89 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+@@ -434,6 +434,9 @@ static ssize_t amdgpu_xgmi_show_connected_port_num(struct device *dev,
+ 		}
+ 	}
+ 
++	if (i == top->num_nodes)
++		return -EINVAL;
++
+ 	for (i = 0; i < top->num_nodes; i++) {
+ 		for (j = 0; j < top->nodes[i].num_links; j++)
+ 			/* node id in sysfs starts from 1 rather than 0 so +1 here */
+diff --git a/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c b/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
+index d4e2aed2efa33..2c9a0aa41e2d5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
++++ b/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
+@@ -501,6 +501,12 @@ static int aqua_vanjaram_switch_partition_mode(struct amdgpu_xcp_mgr *xcp_mgr,
+ 
+ 	if (mode == AMDGPU_AUTO_COMPUTE_PARTITION_MODE) {
+ 		mode = __aqua_vanjaram_get_auto_mode(xcp_mgr);
++		if (mode == AMDGPU_UNKNOWN_COMPUTE_PARTITION_MODE) {
++			dev_err(adev->dev,
++				"Invalid config, no compatible compute partition mode found, available memory partitions: %d",
++				adev->gmc.num_mem_partitions);
++			return -EINVAL;
++		}
+ 	} else if (!__aqua_vanjaram_is_valid_mode(xcp_mgr, mode)) {
+ 		dev_err(adev->dev,
+ 			"Invalid compute partition mode requested, requested: %s, available memory partitions: %d",
+diff --git a/drivers/gpu/drm/amd/amdgpu/df_v1_7.c b/drivers/gpu/drm/amd/amdgpu/df_v1_7.c
+index 5dfab80ffff21..cd298556f7a60 100644
+--- a/drivers/gpu/drm/amd/amdgpu/df_v1_7.c
++++ b/drivers/gpu/drm/amd/amdgpu/df_v1_7.c
+@@ -70,6 +70,8 @@ static u32 df_v1_7_get_hbm_channel_number(struct amdgpu_device *adev)
+ 	int fb_channel_number;
+ 
+ 	fb_channel_number = adev->df.funcs->get_fb_channel_number(adev);
++	if (fb_channel_number >= ARRAY_SIZE(df_v1_7_channel_number))
++		fb_channel_number = 0;
+ 
+ 	return df_v1_7_channel_number[fb_channel_number];
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c
+index da6bb9022b804..4c8f9772437b5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c
+@@ -187,7 +187,7 @@ static int jpeg_v4_0_5_hw_init(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 	struct amdgpu_ring *ring;
+-	int r, i;
++	int i, r = 0;
+ 
+ 	// TODO: Enable ring test with DPG support
+ 	if (adev->pg_flags & AMD_PG_SUPPORT_JPEG_DPG) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_7.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_7.c
+index 92432cd2c0c7b..9689e2b5d4e51 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_7.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_7.c
+@@ -544,7 +544,7 @@ static int mmhub_v1_7_set_clockgating(struct amdgpu_device *adev,
+ 
+ static void mmhub_v1_7_get_clockgating(struct amdgpu_device *adev, u64 *flags)
+ {
+-	int data, data1;
++	u32 data, data1;
+ 
+ 	if (amdgpu_sriov_vf(adev))
+ 		*flags = 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
+index 02fd45261399c..a0cc8e218ca1e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
+@@ -671,7 +671,7 @@ static int mmhub_v2_0_set_clockgating(struct amdgpu_device *adev,
+ 
+ static void mmhub_v2_0_get_clockgating(struct amdgpu_device *adev, u64 *flags)
+ {
+-	int data, data1;
++	u32 data, data1;
+ 
+ 	if (amdgpu_sriov_vf(adev))
+ 		*flags = 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c
+index 238ea40c24500..d7c3178254973 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c
+@@ -560,7 +560,7 @@ static int mmhub_v3_3_set_clockgating(struct amdgpu_device *adev,
+ 
+ static void mmhub_v3_3_get_clockgating(struct amdgpu_device *adev, u64 *flags)
+ {
+-	int data;
++	u32 data;
+ 
+ 	if (amdgpu_sriov_vf(adev))
+ 		*flags = 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c
+index 1b7da4aff2b8f..ff1b58e446892 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c
+@@ -657,7 +657,7 @@ static int mmhub_v9_4_set_clockgating(struct amdgpu_device *adev,
+ 
+ static void mmhub_v9_4_get_clockgating(struct amdgpu_device *adev, u64 *flags)
+ {
+-	int data, data1;
++	u32 data, data1;
+ 
+ 	if (amdgpu_sriov_vf(adev))
+ 		*flags = 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+index 19986ff6a48d7..e326d6f06ca92 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+@@ -387,7 +387,7 @@ static void nbio_v7_4_handle_ras_controller_intr_no_bifring(struct amdgpu_device
+ 		else
+ 			WREG32_SOC15(NBIO, 0, mmBIF_DOORBELL_INT_CNTL, bif_doorbell_intr_cntl);
+ 
+-		if (!ras->disable_ras_err_cnt_harvest) {
++		if (ras && !ras->disable_ras_err_cnt_harvest && obj) {
+ 			/*
+ 			 * clear error status after ras_controller_intr
+ 			 * according to hw team and count ue number
+@@ -418,6 +418,7 @@ static void nbio_v7_4_handle_ras_controller_intr_no_bifring(struct amdgpu_device
+ 		/* ras_controller_int is dedicated for nbif ras error,
+ 		 * not the global interrupt for sync flood
+ 		 */
++		amdgpu_ras_set_fed(adev, true);
+ 		amdgpu_ras_reset_gpu(adev);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+index fbd3f7a582c12..55465b8a3df6c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+@@ -229,8 +229,6 @@ static int vcn_v5_0_0_hw_fini(void *handle)
+ 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+ 		if (adev->vcn.harvest_config & (1 << i))
+ 			continue;
+-
+-		amdgpu_irq_put(adev, &adev->vcn.inst[i].irq, 0);
+ 	}
+ 
+ 	return 0;
+@@ -1232,22 +1230,6 @@ static int vcn_v5_0_0_set_powergating_state(void *handle, enum amd_powergating_s
+ 	return ret;
+ }
+ 
+-/**
+- * vcn_v5_0_0_set_interrupt_state - set VCN block interrupt state
+- *
+- * @adev: amdgpu_device pointer
+- * @source: interrupt sources
+- * @type: interrupt types
+- * @state: interrupt states
+- *
+- * Set VCN block interrupt state
+- */
+-static int vcn_v5_0_0_set_interrupt_state(struct amdgpu_device *adev, struct amdgpu_irq_src *source,
+-	unsigned type, enum amdgpu_interrupt_state state)
+-{
+-	return 0;
+-}
+-
+ /**
+  * vcn_v5_0_0_process_interrupt - process VCN block interrupt
+  *
+@@ -1293,7 +1275,6 @@ static int vcn_v5_0_0_process_interrupt(struct amdgpu_device *adev, struct amdgp
+ }
+ 
+ static const struct amdgpu_irq_src_funcs vcn_v5_0_0_irq_funcs = {
+-	.set = vcn_v5_0_0_set_interrupt_state,
+ 	.process = vcn_v5_0_0_process_interrupt,
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.h b/drivers/gpu/drm/amd/amdkfd/kfd_crat.h
+index 300634b9f6683..a8ca7ecb6d271 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.h
+@@ -42,8 +42,6 @@
+ #define CRAT_OEMTABLEID_LENGTH	8
+ #define CRAT_RESERVED_LENGTH	6
+ 
+-#define CRAT_OEMID_64BIT_MASK ((1ULL << (CRAT_OEMID_LENGTH * 8)) - 1)
+-
+ /* Compute Unit flags */
+ #define COMPUTE_UNIT_CPU	(1 << 0)  /* Create Virtual CRAT for CPU */
+ #define COMPUTE_UNIT_GPU	(1 << 1)  /* Create Virtual CRAT for GPU */
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_debug.c b/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
+index d889e3545120a..6c2f6a26c479c 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
+@@ -103,7 +103,8 @@ void debug_event_write_work_handler(struct work_struct *work)
+ 			struct kfd_process,
+ 			debug_event_workarea);
+ 
+-	kernel_write(process->dbg_ev_file, &write_data, 1, &pos);
++	if (process->debug_trap_enabled && process->dbg_ev_file)
++		kernel_write(process->dbg_ev_file, &write_data, 1, &pos);
+ }
+ 
+ /* update process/device/queue exception status, write to descriptor
+@@ -645,6 +646,7 @@ int kfd_dbg_trap_disable(struct kfd_process *target)
+ 	else if (target->runtime_info.runtime_state != DEBUG_RUNTIME_STATE_DISABLED)
+ 		target->runtime_info.runtime_state = DEBUG_RUNTIME_STATE_ENABLED;
+ 
++	cancel_work_sync(&target->debug_event_workarea);
+ 	fput(target->dbg_ev_file);
+ 	target->dbg_ev_file = NULL;
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
+index e1c21d2506112..78dde62fb04ad 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
+@@ -164,7 +164,7 @@ static void event_interrupt_poison_consumption_v9(struct kfd_node *dev,
+ 	case SOC15_IH_CLIENTID_SE3SH:
+ 	case SOC15_IH_CLIENTID_UTCL2:
+ 		block = AMDGPU_RAS_BLOCK__GFX;
+-		reset = AMDGPU_RAS_GPU_RESET_MODE2_RESET;
++		reset = AMDGPU_RAS_GPU_RESET_MODE1_RESET;
+ 		break;
+ 	case SOC15_IH_CLIENTID_VMC:
+ 	case SOC15_IH_CLIENTID_VMC1:
+@@ -177,7 +177,7 @@ static void event_interrupt_poison_consumption_v9(struct kfd_node *dev,
+ 	case SOC15_IH_CLIENTID_SDMA3:
+ 	case SOC15_IH_CLIENTID_SDMA4:
+ 		block = AMDGPU_RAS_BLOCK__SDMA;
+-		reset = AMDGPU_RAS_GPU_RESET_MODE2_RESET;
++		reset = AMDGPU_RAS_GPU_RESET_MODE1_RESET;
+ 		break;
+ 	default:
+ 		dev_warn(dev->adev->dev,
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index 4858112f9a53b..a5bdc3258ae54 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -28,6 +28,7 @@
+ #include "kfd_priv.h"
+ #include "kfd_kernel_queue.h"
+ #include "amdgpu_amdkfd.h"
++#include "amdgpu_reset.h"
+ 
+ static inline struct process_queue_node *get_queue_by_qid(
+ 			struct process_queue_manager *pqm, unsigned int qid)
+@@ -87,8 +88,12 @@ void kfd_process_dequeue_from_device(struct kfd_process_device *pdd)
+ 		return;
+ 
+ 	dev->dqm->ops.process_termination(dev->dqm, &pdd->qpd);
+-	if (dev->kfd->shared_resources.enable_mes)
+-		amdgpu_mes_flush_shader_debugger(dev->adev, pdd->proc_ctx_gpu_addr);
++	if (dev->kfd->shared_resources.enable_mes &&
++	    down_read_trylock(&dev->adev->reset_domain->sem)) {
++		amdgpu_mes_flush_shader_debugger(dev->adev,
++						 pdd->proc_ctx_gpu_addr);
++		up_read(&dev->adev->reset_domain->sem);
++	}
+ 	pdd->already_dequeued = true;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index bc9eb847ecfe7..1d271ecc386f0 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -958,8 +958,7 @@ static void kfd_update_system_properties(void)
+ 	dev = list_last_entry(&topology_device_list,
+ 			struct kfd_topology_device, list);
+ 	if (dev) {
+-		sys_props.platform_id =
+-			(*((uint64_t *)dev->oem_id)) & CRAT_OEMID_64BIT_MASK;
++		sys_props.platform_id = dev->oem_id64;
+ 		sys_props.platform_oem = *((uint64_t *)dev->oem_table_id);
+ 		sys_props.platform_rev = dev->oem_revision;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.h b/drivers/gpu/drm/amd/amdkfd/kfd_topology.h
+index 27386ce9a021d..2d1c9d771bef2 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.h
+@@ -154,7 +154,10 @@ struct kfd_topology_device {
+ 	struct attribute		attr_gpuid;
+ 	struct attribute		attr_name;
+ 	struct attribute		attr_props;
+-	uint8_t				oem_id[CRAT_OEMID_LENGTH];
++	union {
++		uint8_t				oem_id[CRAT_OEMID_LENGTH];
++		uint64_t			oem_id64;
++	};
+ 	uint8_t				oem_table_id[CRAT_OEMTABLEID_LENGTH];
+ 	uint32_t			oem_revision;
+ };
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 8d4ad15b8e171..382a41c5b5152 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -594,12 +594,14 @@ static void dm_crtc_high_irq(void *interrupt_params)
+ 	if (!acrtc)
+ 		return;
+ 
+-	if (acrtc->wb_pending) {
+-		if (acrtc->wb_conn) {
+-			spin_lock_irqsave(&acrtc->wb_conn->job_lock, flags);
++	if (acrtc->wb_conn) {
++		spin_lock_irqsave(&acrtc->wb_conn->job_lock, flags);
++
++		if (acrtc->wb_pending) {
+ 			job = list_first_entry_or_null(&acrtc->wb_conn->job_queue,
+ 						       struct drm_writeback_job,
+ 						       list_entry);
++			acrtc->wb_pending = false;
+ 			spin_unlock_irqrestore(&acrtc->wb_conn->job_lock, flags);
+ 
+ 			if (job) {
+@@ -617,8 +619,7 @@ static void dm_crtc_high_irq(void *interrupt_params)
+ 							       acrtc->dm_irq_params.stream, 0);
+ 			}
+ 		} else
+-			DRM_ERROR("%s: no amdgpu_crtc wb_conn\n", __func__);
+-		acrtc->wb_pending = false;
++			spin_unlock_irqrestore(&acrtc->wb_conn->job_lock, flags);
+ 	}
+ 
+ 	vrr_active = amdgpu_dm_crtc_vrr_active_irq(acrtc);
+@@ -4129,8 +4130,11 @@ static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
+ 	}
+ 
+ #ifdef AMD_PRIVATE_COLOR
+-	if (amdgpu_dm_create_color_properties(adev))
++	if (amdgpu_dm_create_color_properties(adev)) {
++		dc_state_release(state->context);
++		kfree(state);
+ 		return -ENOMEM;
++	}
+ #endif
+ 
+ 	r = amdgpu_dm_audio_init(adev);
+@@ -4466,7 +4470,10 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 
+ 	/* There is one primary plane per CRTC */
+ 	primary_planes = dm->dc->caps.max_streams;
+-	ASSERT(primary_planes <= AMDGPU_MAX_PLANES);
++	if (primary_planes > AMDGPU_MAX_PLANES) {
++		DRM_ERROR("DM: Plane nums out of 6 planes\n");
++		return -EINVAL;
++	}
+ 
+ 	/*
+ 	 * Initialize primary planes, implicit planes for legacy IOCTLS.
+@@ -4584,17 +4591,17 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 		}
+ 	}
+ 
++	if (link_cnt > MAX_LINKS) {
++		DRM_ERROR(
++			"KMS: Cannot support more than %d display indexes\n",
++				MAX_LINKS);
++		goto fail;
++	}
++
+ 	/* loops over all connectors on the board */
+ 	for (i = 0; i < link_cnt; i++) {
+ 		struct dc_link *link = NULL;
+ 
+-		if (i > AMDGPU_DM_MAX_DISPLAY_INDEX) {
+-			DRM_ERROR(
+-				"KMS: Cannot support more than %d display indexes\n",
+-					AMDGPU_DM_MAX_DISPLAY_INDEX);
+-			continue;
+-		}
+-
+ 		link = dc_get_link_at_index(dm->dc, i);
+ 
+ 		if (link->connector_signal == SIGNAL_TYPE_VIRTUAL) {
+@@ -8639,15 +8646,13 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 				bundle->stream_update.vrr_infopacket =
+ 					&acrtc_state->stream->vrr_infopacket;
+ 		}
+-	} else if (cursor_update && acrtc_state->active_planes > 0 &&
+-		   acrtc_attach->base.state->event) {
+-		drm_crtc_vblank_get(pcrtc);
+-
++	} else if (cursor_update && acrtc_state->active_planes > 0) {
+ 		spin_lock_irqsave(&pcrtc->dev->event_lock, flags);
+-
+-		acrtc_attach->event = acrtc_attach->base.state->event;
+-		acrtc_attach->base.state->event = NULL;
+-
++		if (acrtc_attach->base.state->event) {
++			drm_crtc_vblank_get(pcrtc);
++			acrtc_attach->event = acrtc_attach->base.state->event;
++			acrtc_attach->base.state->event = NULL;
++		}
+ 		spin_unlock_irqrestore(&pcrtc->dev->event_lock, flags);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index 09519b7abf67b..5c9d32dff8538 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -50,7 +50,7 @@
+ 
+ #define AMDGPU_DM_MAX_NUM_EDP 2
+ 
+-#define AMDGPU_DMUB_NOTIFICATION_MAX 5
++#define AMDGPU_DMUB_NOTIFICATION_MAX 6
+ 
+ #define HDMI_AMD_VENDOR_SPECIFIC_DATA_BLOCK_IEEE_REGISTRATION_ID 0x00001A
+ #define AMD_VSDB_VERSION_3_FEATURECAP_REPLAYMODE 0x40
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
+index bc16db69a6636..3bacf470f7c5b 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
+@@ -665,6 +665,9 @@ static enum bp_result get_ss_info_v3_1(
+ 	ss_table_header_include = ((ATOM_ASIC_INTERNAL_SS_INFO_V3 *) bios_get_image(&bp->base,
+ 				DATA_TABLES(ASIC_InternalSS_Info),
+ 				struct_size(ss_table_header_include, asSpreadSpectrum, 1)));
++	if (!ss_table_header_include)
++		return BP_RESULT_UNSUPPORTED;
++
+ 	table_size =
+ 		(le16_to_cpu(ss_table_header_include->sHeader.usStructureSize)
+ 				- sizeof(ATOM_COMMON_TABLE_HEADER))
+@@ -1034,6 +1037,8 @@ static enum bp_result get_ss_info_from_internal_ss_info_tbl_V2_1(
+ 				&bp->base,
+ 				DATA_TABLES(ASIC_InternalSS_Info),
+ 				struct_size(header, asSpreadSpectrum, 1)));
++	if (!header)
++		return result;
+ 
+ 	memset(info, 0, sizeof(struct spread_spectrum_info));
+ 
+@@ -1107,6 +1112,8 @@ static enum bp_result get_ss_info_from_ss_info_table(
+ 	get_atom_data_table_revision(header, &revision);
+ 
+ 	tbl = GET_IMAGE(ATOM_SPREAD_SPECTRUM_INFO, DATA_TABLES(SS_Info));
++	if (!tbl)
++		return result;
+ 
+ 	if (1 != revision.major || 2 > revision.minor)
+ 		return result;
+@@ -1634,6 +1641,8 @@ static uint32_t get_ss_entry_number_from_ss_info_tbl(
+ 
+ 	tbl = GET_IMAGE(ATOM_SPREAD_SPECTRUM_INFO,
+ 			DATA_TABLES(SS_Info));
++	if (!tbl)
++		return number;
+ 
+ 	if (1 != revision.major || 2 > revision.minor)
+ 		return number;
+@@ -1716,6 +1725,8 @@ static uint32_t get_ss_entry_number_from_internal_ss_info_tbl_v2_1(
+ 				&bp->base,
+ 				DATA_TABLES(ASIC_InternalSS_Info),
+ 				struct_size(header_include, asSpreadSpectrum, 1)));
++	if (!header_include)
++		return 0;
+ 
+ 	size = (le16_to_cpu(header_include->sHeader.usStructureSize)
+ 			- sizeof(ATOM_COMMON_TABLE_HEADER))
+@@ -1755,6 +1766,9 @@ static uint32_t get_ss_entry_number_from_internal_ss_info_tbl_V3_1(
+ 	header_include = ((ATOM_ASIC_INTERNAL_SS_INFO_V3 *) bios_get_image(&bp->base,
+ 				DATA_TABLES(ASIC_InternalSS_Info),
+ 				struct_size(header_include, asSpreadSpectrum, 1)));
++	if (!header_include)
++		return number;
++
+ 	size = (le16_to_cpu(header_include->sHeader.usStructureSize) -
+ 			sizeof(ATOM_COMMON_TABLE_HEADER)) /
+ 					sizeof(ATOM_ASIC_SS_ASSIGNMENT_V3);
+@@ -2551,8 +2565,8 @@ static enum bp_result construct_integrated_info(
+ 
+ 	/* Sort voltage table from low to high*/
+ 	if (result == BP_RESULT_OK) {
+-		uint32_t i;
+-		uint32_t j;
++		int32_t i;
++		int32_t j;
+ 
+ 		for (i = 1; i < NUMBER_OF_DISP_CLK_VOLTAGE; ++i) {
+ 			for (j = i; j > 0; --j) {
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+index 9fe0020bcb9c2..c8c8587a059d9 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+@@ -2920,8 +2920,11 @@ static enum bp_result construct_integrated_info(
+ 	struct atom_common_table_header *header;
+ 	struct atom_data_revision revision;
+ 
+-	uint32_t i;
+-	uint32_t j;
++	int32_t i;
++	int32_t j;
++
++	if (!info)
++		return result;
+ 
+ 	if (info && DATA_TABLES(integratedsysteminfo)) {
+ 		header = GET_IMAGE(struct atom_common_table_header,
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+index 5ef0879f6ad9c..aea4bb46856ef 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+@@ -484,7 +484,8 @@ static void build_watermark_ranges(struct clk_bw_params *bw_params, struct pp_sm
+ 			ranges->reader_wm_sets[num_valid_sets].max_fill_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
+ 
+ 			/* Modify previous watermark range to cover up to max */
+-			ranges->reader_wm_sets[num_valid_sets - 1].max_fill_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
++			if (num_valid_sets > 0)
++				ranges->reader_wm_sets[num_valid_sets - 1].max_fill_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
+ 		}
+ 		num_valid_sets++;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 236876d95185b..da237f718dbdd 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1421,6 +1421,7 @@ struct dc *dc_create(const struct dc_init_data *init_params)
+ 		return NULL;
+ 
+ 	if (init_params->dce_environment == DCE_ENV_VIRTUAL_HW) {
++		dc->caps.linear_pitch_alignment = 64;
+ 		if (!dc_construct_ctx(dc, init_params))
+ 			goto destruct_dc;
+ 	} else {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_exports.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_exports.c
+index c6c35037bdb8b..dfdfe22d9e851 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_exports.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_exports.c
+@@ -37,6 +37,9 @@
+ #include "dce/dce_i2c.h"
+ struct dc_link *dc_get_link_at_index(struct dc *dc, uint32_t link_index)
+ {
++	if (link_index >= MAX_LINKS)
++		return NULL;
++
+ 	return dc->links[link_index];
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 8ed599324693e..786b56e96a816 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -2283,6 +2283,9 @@ void resource_log_pipe_topology_update(struct dc *dc, struct dc_state *state)
+ 					state->stream_status[stream_idx].mall_stream_config.paired_stream);
+ 			otg_master = resource_get_otg_master_for_stream(
+ 					&state->res_ctx, state->streams[phantom_stream_idx]);
++			if (!otg_master)
++				continue;
++
+ 			resource_log_pipe_for_stream(dc, state, otg_master, stream_idx);
+ 		}
+ 	}
+@@ -3508,7 +3511,7 @@ static bool acquire_otg_master_pipe_for_stream(
+ 		if (pool->dpps[pipe_idx])
+ 			pipe_ctx->plane_res.mpcc_inst = pool->dpps[pipe_idx]->inst;
+ 
+-		if (pipe_idx >= pool->timing_generator_count) {
++		if (pipe_idx >= pool->timing_generator_count && pool->timing_generator_count != 0) {
+ 			int tg_inst = pool->timing_generator_count - 1;
+ 
+ 			pipe_ctx->stream_res.tg = pool->timing_generators[tg_inst];
+@@ -4669,6 +4672,9 @@ void resource_build_bit_depth_reduction_params(struct dc_stream_state *stream,
+ 
+ enum dc_status dc_validate_stream(struct dc *dc, struct dc_stream_state *stream)
+ {
++	if (dc == NULL || stream == NULL)
++		return DC_ERROR_UNEXPECTED;
++
+ 	struct dc_link *link = stream->link;
+ 	struct timing_generator *tg = dc->res_pool->timing_generators[0];
+ 	enum dc_status res = DC_OK;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_state.c b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+index 52a1cfc5feed8..502740f6fb2ce 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_state.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+@@ -191,7 +191,7 @@ static void init_state(struct dc *dc, struct dc_state *state)
+ struct dc_state *dc_state_create(struct dc *dc, struct dc_state_create_params *params)
+ {
+ #ifdef CONFIG_DRM_AMD_DC_FP
+-	struct dml2_configuration_options *dml2_opt = &dc->dml2_options;
++	struct dml2_configuration_options dml2_opt = dc->dml2_options;
+ #endif
+ 	struct dc_state *state = kvzalloc(sizeof(struct dc_state),
+ 			GFP_KERNEL);
+@@ -205,11 +205,11 @@ struct dc_state *dc_state_create(struct dc *dc, struct dc_state_create_params *p
+ 
+ #ifdef CONFIG_DRM_AMD_DC_FP
+ 	if (dc->debug.using_dml2) {
+-		dml2_opt->use_clock_dc_limits = false;
+-		dml2_create(dc, dml2_opt, &state->bw_ctx.dml2);
++		dml2_opt.use_clock_dc_limits = false;
++		dml2_create(dc, &dml2_opt, &state->bw_ctx.dml2);
+ 
+-		dml2_opt->use_clock_dc_limits = true;
+-		dml2_create(dc, dml2_opt, &state->bw_ctx.dml2_dc_power_source);
++		dml2_opt.use_clock_dc_limits = true;
++		dml2_create(dc, &dml2_opt, &state->bw_ctx.dml2_dc_power_source);
+ 	}
+ #endif
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c
+index b851fc65f5b7c..5c2d6642633d9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c
+@@ -258,7 +258,7 @@ bool dmub_abm_set_pipe(struct abm *abm,
+ {
+ 	union dmub_rb_cmd cmd;
+ 	struct dc_context *dc = abm->ctx;
+-	uint32_t ramping_boundary = 0xFFFF;
++	uint8_t ramping_boundary = 0xFF;
+ 
+ 	memset(&cmd, 0, sizeof(cmd));
+ 	cmd.abm_set_pipe.header.type = DMUB_CMD__ABM;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c
+index 09cf54586fd5d..424669632b3b9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c
+@@ -102,7 +102,8 @@ static void dmub_replay_enable(struct dmub_replay *dmub, bool enable, bool wait,
+ 					break;
+ 			}
+ 
+-			fsleep(500);
++			/* must *not* be fsleep - this can be called from high irq levels */
++			udelay(500);
+ 		}
+ 
+ 		/* assert if max retry hit */
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dwb_scl.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dwb_scl.c
+index 994fb732a7cb7..a0d437f0ce2ba 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dwb_scl.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dwb_scl.c
+@@ -690,6 +690,9 @@ static void wbscl_set_scaler_filter(
+ 	int pair;
+ 	uint16_t odd_coef, even_coef;
+ 
++	if (!filter)
++		return;
++
+ 	for (phase = 0; phase < (NUM_PHASES / 2 + 1); phase++) {
+ 		for (pair = 0; pair < tap_pairs; pair++) {
+ 			even_coef = filter[phase * taps + 2 * pair];
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_dccg.c
+index 58dd3c5bbff09..4677eb485f945 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_dccg.c
+@@ -940,7 +940,7 @@ static uint8_t dccg35_get_other_enabled_symclk_fe(struct dccg *dccg, uint32_t st
+ 	/* for DPMST, this backend could be used by multiple front end.
+ 	only disable the backend if this stream_enc_ins is the last active stream enc connected to this back_end*/
+ 		uint8_t i;
+-		for (i = 0; i != link_enc_inst && i < sizeof(fe_clk_en); i++) {
++		for (i = 0; i != link_enc_inst && i < ARRAY_SIZE(fe_clk_en); i++) {
+ 			if (fe_clk_en[i] && be_clk_sel[i] == link_enc_inst)
+ 				num_enabled_symclk_fe++;
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/calcs/dcn_calcs.c b/drivers/gpu/drm/amd/display/dc/dml/calcs/dcn_calcs.c
+index 0c4a8fe8e5ca6..f1cde1e4265f3 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/calcs/dcn_calcs.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/calcs/dcn_calcs.c
+@@ -1453,10 +1453,9 @@ void dcn_bw_update_from_pplib_fclks(
+ 	ASSERT(fclks->num_levels);
+ 
+ 	vmin0p65_idx = 0;
+-	vmid0p72_idx = fclks->num_levels -
+-		(fclks->num_levels > 2 ? 3 : (fclks->num_levels > 1 ? 2 : 1));
+-	vnom0p8_idx = fclks->num_levels - (fclks->num_levels > 1 ? 2 : 1);
+-	vmax0p9_idx = fclks->num_levels - 1;
++	vmid0p72_idx = fclks->num_levels > 2 ? fclks->num_levels - 3 : 0;
++	vnom0p8_idx = fclks->num_levels > 1 ? fclks->num_levels - 2 : 0;
++	vmax0p9_idx = fclks->num_levels > 0 ? fclks->num_levels - 1 : 0;
+ 
+ 	dc->dcn_soc->fabric_and_dram_bandwidth_vmin0p65 =
+ 		32 * (fclks->data[vmin0p65_idx].clocks_in_khz / 1000.0) / 1000.0;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn302/dcn302_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn302/dcn302_fpu.c
+index e2bcd205aa936..8da97a96b1ceb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn302/dcn302_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn302/dcn302_fpu.c
+@@ -304,6 +304,16 @@ void dcn302_fpu_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_p
+ 			dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+ 		}
+ 
++		/* bw_params->clk_table.entries[MAX_NUM_DPM_LVL].
++		 * MAX_NUM_DPM_LVL is 8.
++		 * dcn3_02_soc.clock_limits[DC__VOLTAGE_STATES].
++		 * DC__VOLTAGE_STATES is 40.
++		 */
++		if (num_states > MAX_NUM_DPM_LVL) {
++			ASSERT(0);
++			return;
++		}
++
+ 		dcn3_02_soc.num_states = num_states;
+ 		for (i = 0; i < dcn3_02_soc.num_states; i++) {
+ 			dcn3_02_soc.clock_limits[i].state = i;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn303/dcn303_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn303/dcn303_fpu.c
+index 3f02bb806d421..e968870a4b810 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn303/dcn303_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn303/dcn303_fpu.c
+@@ -310,6 +310,16 @@ void dcn303_fpu_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_p
+ 			dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+ 		}
+ 
++		/* bw_params->clk_table.entries[MAX_NUM_DPM_LVL].
++		 * MAX_NUM_DPM_LVL is 8.
++		 * dcn3_02_soc.clock_limits[DC__VOLTAGE_STATES].
++		 * DC__VOLTAGE_STATES is 40.
++		 */
++		if (num_states > MAX_NUM_DPM_LVL) {
++			ASSERT(0);
++			return;
++		}
++
+ 		dcn3_03_soc.num_states = num_states;
+ 		for (i = 0; i < dcn3_03_soc.num_states; i++) {
+ 			dcn3_03_soc.clock_limits[i].state = i;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+index f6fe0a64beacf..ebcf5ece209a4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+@@ -3232,6 +3232,16 @@ void dcn32_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_pa
+ 				dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+ 			}
+ 
++			/* bw_params->clk_table.entries[MAX_NUM_DPM_LVL].
++			 * MAX_NUM_DPM_LVL is 8.
++			 * dcn3_02_soc.clock_limits[DC__VOLTAGE_STATES].
++			 * DC__VOLTAGE_STATES is 40.
++			 */
++			if (num_states > MAX_NUM_DPM_LVL) {
++				ASSERT(0);
++				return;
++			}
++
+ 			dcn3_2_soc.num_states = num_states;
+ 			for (i = 0; i < dcn3_2_soc.num_states; i++) {
+ 				dcn3_2_soc.clock_limits[i].state = i;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c
+index ff4d795c79664..4297402bdab39 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c
+@@ -803,6 +803,16 @@ void dcn321_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_p
+ 			dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+ 		}
+ 
++		/* bw_params->clk_table.entries[MAX_NUM_DPM_LVL].
++		 * MAX_NUM_DPM_LVL is 8.
++		 * dcn3_02_soc.clock_limits[DC__VOLTAGE_STATES].
++		 * DC__VOLTAGE_STATES is 40.
++		 */
++		if (num_states > MAX_NUM_DPM_LVL) {
++			ASSERT(0);
++			return;
++		}
++
+ 		dcn3_21_soc.num_states = num_states;
+ 		for (i = 0; i < dcn3_21_soc.num_states; i++) {
+ 			dcn3_21_soc.clock_limits[i].state = i;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
+index 9a3ded3111952..85453bbb4f9b1 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
+@@ -1099,8 +1099,13 @@ void ModeSupportAndSystemConfiguration(struct display_mode_lib *mode_lib)
+ 
+ 	// Total Available Pipes Support Check
+ 	for (k = 0; k < mode_lib->vba.NumberOfActivePlanes; ++k) {
+-		total_pipes += mode_lib->vba.DPPPerPlane[k];
+ 		pipe_idx = get_pipe_idx(mode_lib, k);
++		if (pipe_idx == -1) {
++			ASSERT(0);
++			continue; // skip inactive planes
++		}
++		total_pipes += mode_lib->vba.DPPPerPlane[k];
++
+ 		if (mode_lib->vba.cache_pipes[pipe_idx].clks_cfg.dppclk_mhz > 0.0)
+ 			mode_lib->vba.DPPCLK[k] = mode_lib->vba.cache_pipes[pipe_idx].clks_cfg.dppclk_mhz;
+ 		else
+diff --git a/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c b/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
+index 663c17f52779c..f344478e9bd47 100644
+--- a/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
++++ b/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
+@@ -56,7 +56,7 @@ struct gpio_service *dal_gpio_service_create(
+ 	struct dc_context *ctx)
+ {
+ 	struct gpio_service *service;
+-	uint32_t index_of_id;
++	int32_t index_of_id;
+ 
+ 	service = kzalloc(sizeof(struct gpio_service), GFP_KERNEL);
+ 
+@@ -112,7 +112,7 @@ struct gpio_service *dal_gpio_service_create(
+ 	return service;
+ 
+ failure_2:
+-	while (index_of_id) {
++	while (index_of_id > 0) {
+ 		--index_of_id;
+ 		kfree(service->busyness[index_of_id]);
+ 	}
+@@ -239,6 +239,9 @@ static bool is_pin_busy(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
++	if (id == GPIO_ID_UNKNOWN)
++		return false;
++
+ 	return service->busyness[id][en];
+ }
+ 
+@@ -247,6 +250,9 @@ static void set_pin_busy(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
++	if (id == GPIO_ID_UNKNOWN)
++		return;
++
+ 	service->busyness[id][en] = true;
+ }
+ 
+@@ -255,6 +261,9 @@ static void set_pin_free(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
++	if (id == GPIO_ID_UNKNOWN)
++		return;
++
+ 	service->busyness[id][en] = false;
+ }
+ 
+@@ -263,7 +272,7 @@ enum gpio_result dal_gpio_service_lock(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
+-	if (!service->busyness[id]) {
++	if (id != GPIO_ID_UNKNOWN && !service->busyness[id]) {
+ 		ASSERT_CRITICAL(false);
+ 		return GPIO_RESULT_OPEN_FAILED;
+ 	}
+@@ -277,7 +286,7 @@ enum gpio_result dal_gpio_service_unlock(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
+-	if (!service->busyness[id]) {
++	if (id != GPIO_ID_UNKNOWN && !service->busyness[id]) {
+ 		ASSERT_CRITICAL(false);
+ 		return GPIO_RESULT_OPEN_FAILED;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
+index 99e17c164ce7b..1d3e8f0b915b6 100644
+--- a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
++++ b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
+@@ -128,13 +128,21 @@ static bool hdmi_14_process_transaction(
+ 	const uint8_t hdcp_i2c_addr_link_primary = 0x3a; /* 0x74 >> 1*/
+ 	const uint8_t hdcp_i2c_addr_link_secondary = 0x3b; /* 0x76 >> 1*/
+ 	struct i2c_command i2c_command;
+-	uint8_t offset = hdcp_i2c_offsets[message_info->msg_id];
++	uint8_t offset;
+ 	struct i2c_payload i2c_payloads[] = {
+-		{ true, 0, 1, &offset },
++		{ true, 0, 1, 0 },
+ 		/* actual hdcp payload, will be filled later, zeroed for now*/
+ 		{ 0 }
+ 	};
+ 
++	if (message_info->msg_id == HDCP_MESSAGE_ID_INVALID) {
++		DC_LOG_ERROR("%s: Invalid message_info msg_id - %d\n", __func__, message_info->msg_id);
++		return false;
++	}
++
++	offset = hdcp_i2c_offsets[message_info->msg_id];
++	i2c_payloads[0].data = &offset;
++
+ 	switch (message_info->link) {
+ 	case HDCP_LINK_SECONDARY:
+ 		i2c_payloads[0].address = hdcp_i2c_addr_link_secondary;
+@@ -308,6 +316,11 @@ static bool dp_11_process_transaction(
+ 	struct dc_link *link,
+ 	struct hdcp_protection_message *message_info)
+ {
++	if (message_info->msg_id == HDCP_MESSAGE_ID_INVALID) {
++		DC_LOG_ERROR("%s: Invalid message_info msg_id - %d\n", __func__, message_info->msg_id);
++		return false;
++	}
++
+ 	return dpcd_access_helper(
+ 		link,
+ 		message_info->length,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn201/dcn201_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn201/dcn201_hwseq.c
+index 6be846635a798..59f46df015511 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn201/dcn201_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn201/dcn201_hwseq.c
+@@ -95,8 +95,11 @@ static bool gpu_addr_to_uma(struct dce_hwseq *hwseq,
+ 	} else if (hwseq->fb_offset.quad_part <= addr->quad_part &&
+ 			addr->quad_part <= hwseq->uma_top.quad_part) {
+ 		is_in_uma = true;
++	} else if (addr->quad_part == 0) {
++		is_in_uma = false;
+ 	} else {
+ 		is_in_uma = false;
++		BREAK_TO_DEBUGGER();
+ 	}
+ 	return is_in_uma;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
+index 804be977ea47b..3de65a9f0e6f2 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
+@@ -142,7 +142,7 @@ static bool dmub_abm_set_pipe(struct abm *abm, uint32_t otg_inst,
+ {
+ 	union dmub_rb_cmd cmd;
+ 	struct dc_context *dc = abm->ctx;
+-	uint32_t ramping_boundary = 0xFFFF;
++	uint8_t ramping_boundary = 0xFF;
+ 
+ 	memset(&cmd, 0, sizeof(cmd));
+ 	cmd.abm_set_pipe.header.type = DMUB_CMD__ABM;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index dcced89c07b38..f829ff82797e7 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -1077,7 +1077,8 @@ void dcn35_calc_blocks_to_ungate(struct dc *dc, struct dc_state *context,
+ 			continue;
+ 
+ 		if ((!cur_pipe->plane_state && new_pipe->plane_state) ||
+-			(!cur_pipe->stream && new_pipe->stream)) {
++			(!cur_pipe->stream && new_pipe->stream) ||
++			(cur_pipe->stream != new_pipe->stream && new_pipe->stream)) {
+ 			// New pipe addition
+ 			for (j = 0; j < PG_HW_PIPE_RESOURCES_NUM_ELEMENT; j++) {
+ 				if (j == PG_HUBP && new_pipe->plane_res.hubp)
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+index d487dfcd219b0..a3df1b55e48b7 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+@@ -534,7 +534,7 @@ static bool decide_fallback_link_setting_max_bw_policy(
+ 		struct dc_link_settings *cur,
+ 		enum link_training_result training_result)
+ {
+-	uint8_t cur_idx = 0, next_idx;
++	uint32_t cur_idx = 0, next_idx;
+ 	bool found = false;
+ 
+ 	if (training_result == LINK_TRAINING_ABORT)
+@@ -914,21 +914,17 @@ bool link_decide_link_settings(struct dc_stream_state *stream,
+ 
+ 	memset(link_setting, 0, sizeof(*link_setting));
+ 
+-	/* if preferred is specified through AMDDP, use it, if it's enough
+-	 * to drive the mode
+-	 */
+-	if (link->preferred_link_setting.lane_count !=
+-			LANE_COUNT_UNKNOWN &&
+-			link->preferred_link_setting.link_rate !=
+-					LINK_RATE_UNKNOWN) {
++	if (dc_is_dp_signal(stream->signal)  &&
++			link->preferred_link_setting.lane_count != LANE_COUNT_UNKNOWN &&
++			link->preferred_link_setting.link_rate != LINK_RATE_UNKNOWN) {
++		/* if preferred is specified through AMDDP, use it, if it's enough
++		 * to drive the mode
++		 */
+ 		*link_setting = link->preferred_link_setting;
+-		return true;
+-	}
+-
+-	/* MST doesn't perform link training for now
+-	 * TODO: add MST specific link training routine
+-	 */
+-	if (stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST) {
++	} else if (stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST) {
++		/* MST doesn't perform link training for now
++		 * TODO: add MST specific link training routine
++		 */
+ 		decide_mst_link_settings(link, link_setting);
+ 	} else if (link->connector_signal == SIGNAL_TYPE_EDP) {
+ 		/* enable edp link optimization for DSC eDP case */
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+index 1818970b8eaf7..b8e704dbe9567 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+@@ -914,10 +914,10 @@ static enum dc_status configure_lttpr_mode_non_transparent(
+ 			/* Driver does not need to train the first hop. Skip DPCD read and clear
+ 			 * AUX_RD_INTERVAL for DPTX-to-DPIA hop.
+ 			 */
+-			if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA)
++			if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA && repeater_cnt > 0 && repeater_cnt < MAX_REPEATER_CNT)
+ 				link->dpcd_caps.lttpr_caps.aux_rd_interval[--repeater_cnt] = 0;
+ 
+-			for (repeater_id = repeater_cnt; repeater_id > 0; repeater_id--) {
++			for (repeater_id = repeater_cnt; repeater_id > 0 && repeater_id < MAX_REPEATER_CNT; repeater_id--) {
+ 				aux_interval_address = DP_TRAINING_AUX_RD_INTERVAL_PHY_REPEATER1 +
+ 						((DP_REPEATER_CONFIGURATION_AND_STATUS_SIZE) * (repeater_id - 1));
+ 				core_link_read_dpcd(
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dpcd.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dpcd.c
+index a72c898b64fab..584b9295a12af 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dpcd.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dpcd.c
+@@ -165,6 +165,7 @@ static void dpcd_extend_address_range(
+ 		*out_address = new_addr_range.start;
+ 		*out_size = ADDRESS_RANGE_SIZE(new_addr_range.start, new_addr_range.end);
+ 		*out_data = kcalloc(*out_size, sizeof(**out_data), GFP_KERNEL);
++		ASSERT(*out_data);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dce80/dce80_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dce80/dce80_resource.c
+index 56ee45e12b461..a73d3c6ef4258 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dce80/dce80_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dce80/dce80_resource.c
+@@ -1538,6 +1538,7 @@ struct resource_pool *dce83_create_resource_pool(
+ 	if (dce83_construct(num_virtual_links, dc, pool))
+ 		return &pool->base;
+ 
++	kfree(pool);
+ 	BREAK_TO_DEBUGGER();
+ 	return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+index d4c3e2754f516..5d1801dce2730 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+@@ -1864,6 +1864,7 @@ static struct clock_source *dcn30_clock_source_create(
+ 		return &clk_src->base;
+ 	}
+ 
++	kfree(clk_src);
+ 	BREAK_TO_DEBUGGER();
+ 	return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+index ff50f43e4c000..da73e842c55c8 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+@@ -1660,8 +1660,8 @@ static struct clock_source *dcn31_clock_source_create(
+ 		return &clk_src->base;
+ 	}
+ 
+-	BREAK_TO_DEBUGGER();
+ 	kfree(clk_src);
++	BREAK_TO_DEBUGGER();
+ 	return NULL;
+ }
+ 
+@@ -1821,8 +1821,8 @@ static struct clock_source *dcn30_clock_source_create(
+ 		return &clk_src->base;
+ 	}
+ 
+-	BREAK_TO_DEBUGGER();
+ 	kfree(clk_src);
++	BREAK_TO_DEBUGGER();
+ 	return NULL;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+index 2df8a742516c8..915d68cc04e9c 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+@@ -785,6 +785,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ 	.ips2_entry_delay_us = 800,
+ 	.disable_dmub_reallow_idle = false,
+ 	.static_screen_wait_frames = 2,
++	.disable_timeout = true,
+ };
+ 
+ static const struct dc_panel_config panel_config_defaults = {
+@@ -1716,6 +1717,7 @@ static struct clock_source *dcn35_clock_source_create(
+ 		return &clk_src->base;
+ 	}
+ 
++	kfree(clk_src);
+ 	BREAK_TO_DEBUGGER();
+ 	return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+index ddf9560ab7722..b7bd0f36125a4 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+@@ -1696,6 +1696,7 @@ static struct clock_source *dcn35_clock_source_create(
+ 		return &clk_src->base;
+ 	}
+ 
++	kfree(clk_src);
+ 	BREAK_TO_DEBUGGER();
+ 	return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c
+index 70e63aeb8f89b..a330827f900c3 100644
+--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c
++++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c
+@@ -459,7 +459,7 @@ uint32_t dmub_dcn35_get_current_time(struct dmub_srv *dmub)
+ void dmub_dcn35_get_diagnostic_data(struct dmub_srv *dmub, struct dmub_diagnostic_data *diag_data)
+ {
+ 	uint32_t is_dmub_enabled, is_soft_reset, is_sec_reset;
+-	uint32_t is_traceport_enabled, is_cw0_enabled, is_cw6_enabled;
++	uint32_t is_traceport_enabled, is_cw6_enabled;
+ 
+ 	if (!dmub || !diag_data)
+ 		return;
+@@ -510,9 +510,6 @@ void dmub_dcn35_get_diagnostic_data(struct dmub_srv *dmub, struct dmub_diagnosti
+ 	REG_GET(DMCUB_CNTL, DMCUB_TRACEPORT_EN, &is_traceport_enabled);
+ 	diag_data->is_traceport_en  = is_traceport_enabled;
+ 
+-	REG_GET(DMCUB_REGION3_CW0_TOP_ADDRESS, DMCUB_REGION3_CW0_ENABLE, &is_cw0_enabled);
+-	diag_data->is_cw0_enabled = is_cw0_enabled;
+-
+ 	REG_GET(DMCUB_REGION3_CW6_TOP_ADDRESS, DMCUB_REGION3_CW6_ENABLE, &is_cw6_enabled);
+ 	diag_data->is_cw6_enabled = is_cw6_enabled;
+ 
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c
+index 8e9caae7c9559..1b2df97226a3f 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c
+@@ -156,11 +156,16 @@ static enum mod_hdcp_status read(struct mod_hdcp *hdcp,
+ 	uint32_t cur_size = 0;
+ 	uint32_t data_offset = 0;
+ 
+-	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID) {
++	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID ||
++		msg_id >= MOD_HDCP_MESSAGE_ID_MAX)
+ 		return MOD_HDCP_STATUS_DDC_FAILURE;
+-	}
+ 
+ 	if (is_dp_hdcp(hdcp)) {
++		int num_dpcd_addrs = sizeof(hdcp_dpcd_addrs) /
++			sizeof(hdcp_dpcd_addrs[0]);
++		if (msg_id >= num_dpcd_addrs)
++			return MOD_HDCP_STATUS_DDC_FAILURE;
++
+ 		while (buf_len > 0) {
+ 			cur_size = MIN(buf_len, HDCP_MAX_AUX_TRANSACTION_SIZE);
+ 			success = hdcp->config.ddc.funcs.read_dpcd(hdcp->config.ddc.handle,
+@@ -175,6 +180,11 @@ static enum mod_hdcp_status read(struct mod_hdcp *hdcp,
+ 			data_offset += cur_size;
+ 		}
+ 	} else {
++		int num_i2c_offsets = sizeof(hdcp_i2c_offsets) /
++			sizeof(hdcp_i2c_offsets[0]);
++		if (msg_id >= num_i2c_offsets)
++			return MOD_HDCP_STATUS_DDC_FAILURE;
++
+ 		success = hdcp->config.ddc.funcs.read_i2c(
+ 				hdcp->config.ddc.handle,
+ 				HDCP_I2C_ADDR,
+@@ -219,11 +229,16 @@ static enum mod_hdcp_status write(struct mod_hdcp *hdcp,
+ 	uint32_t cur_size = 0;
+ 	uint32_t data_offset = 0;
+ 
+-	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID) {
++	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID ||
++		msg_id >= MOD_HDCP_MESSAGE_ID_MAX)
+ 		return MOD_HDCP_STATUS_DDC_FAILURE;
+-	}
+ 
+ 	if (is_dp_hdcp(hdcp)) {
++		int num_dpcd_addrs = sizeof(hdcp_dpcd_addrs) /
++			sizeof(hdcp_dpcd_addrs[0]);
++		if (msg_id >= num_dpcd_addrs)
++			return MOD_HDCP_STATUS_DDC_FAILURE;
++
+ 		while (buf_len > 0) {
+ 			cur_size = MIN(buf_len, HDCP_MAX_AUX_TRANSACTION_SIZE);
+ 			success = hdcp->config.ddc.funcs.write_dpcd(
+@@ -239,6 +254,11 @@ static enum mod_hdcp_status write(struct mod_hdcp *hdcp,
+ 			data_offset += cur_size;
+ 		}
+ 	} else {
++		int num_i2c_offsets = sizeof(hdcp_i2c_offsets) /
++			sizeof(hdcp_i2c_offsets[0]);
++		if (msg_id >= num_i2c_offsets)
++			return MOD_HDCP_STATUS_DDC_FAILURE;
++
+ 		hdcp->buf[0] = hdcp_i2c_offsets[msg_id];
+ 		memmove(&hdcp->buf[1], buf, buf_len);
+ 		success = hdcp->config.ddc.funcs.write_i2c(
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+index f531ce1d2b1dc..a71c6117d7e54 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+@@ -99,7 +99,7 @@ static void pp_swctf_delayed_work_handler(struct work_struct *work)
+ 	struct amdgpu_device *adev = hwmgr->adev;
+ 	struct amdgpu_dpm_thermal *range =
+ 				&adev->pm.dpm.thermal;
+-	uint32_t gpu_temperature, size;
++	uint32_t gpu_temperature, size = sizeof(gpu_temperature);
+ 	int ret;
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c
+index f4bd8e9357e22..18f00038d8441 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c
+@@ -30,9 +30,8 @@ int psm_init_power_state_table(struct pp_hwmgr *hwmgr)
+ {
+ 	int result;
+ 	unsigned int i;
+-	unsigned int table_entries;
+ 	struct pp_power_state *state;
+-	int size;
++	int size, table_entries;
+ 
+ 	if (hwmgr->hwmgr_func->get_num_of_pp_table_entries == NULL)
+ 		return 0;
+@@ -40,15 +39,19 @@ int psm_init_power_state_table(struct pp_hwmgr *hwmgr)
+ 	if (hwmgr->hwmgr_func->get_power_state_size == NULL)
+ 		return 0;
+ 
+-	hwmgr->num_ps = table_entries = hwmgr->hwmgr_func->get_num_of_pp_table_entries(hwmgr);
++	table_entries = hwmgr->hwmgr_func->get_num_of_pp_table_entries(hwmgr);
+ 
+-	hwmgr->ps_size = size = hwmgr->hwmgr_func->get_power_state_size(hwmgr) +
++	size = hwmgr->hwmgr_func->get_power_state_size(hwmgr) +
+ 					  sizeof(struct pp_power_state);
+ 
+-	if (table_entries == 0 || size == 0) {
++	if (table_entries <= 0 || size == 0) {
+ 		pr_warn("Please check whether power state management is supported on this asic\n");
++		hwmgr->num_ps = 0;
++		hwmgr->ps_size = 0;
+ 		return 0;
+ 	}
++	hwmgr->num_ps = table_entries;
++	hwmgr->ps_size = size;
+ 
+ 	hwmgr->ps = kcalloc(table_entries, size, GFP_KERNEL);
+ 	if (hwmgr->ps == NULL)
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
+index b1b4c09c34671..b56298d9da98f 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
+@@ -73,8 +73,9 @@ static int atomctrl_retrieve_ac_timing(
+ 					j++;
+ 				} else if ((table->mc_reg_address[i].uc_pre_reg_data &
+ 							LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
+-					table->mc_reg_table_entry[num_ranges].mc_data[i] =
+-						table->mc_reg_table_entry[num_ranges].mc_data[i-1];
++					if (i)
++						table->mc_reg_table_entry[num_ranges].mc_data[i] =
++							table->mc_reg_table_entry[num_ranges].mc_data[i-1];
+ 				}
+ 			}
+ 			num_ranges++;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
+index 02ba68d7c6546..f62381b189ade 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
+@@ -1036,7 +1036,9 @@ static int smu10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 
+ 	switch (type) {
+ 	case PP_SCLK:
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetGfxclkFrequency, &now);
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetGfxclkFrequency, &now);
++		if (ret)
++			return ret;
+ 
+ 	/* driver only know min/max gfx_clk, Add level 1 for all other gfx clks */
+ 		if (now == data->gfx_max_freq_limit/100)
+@@ -1057,7 +1059,9 @@ static int smu10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 					i == 2 ? "*" : "");
+ 		break;
+ 	case PP_MCLK:
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetFclkFrequency, &now);
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetFclkFrequency, &now);
++		if (ret)
++			return ret;
+ 
+ 		for (i = 0; i < mclk_table->count; i++)
+ 			size += sprintf(buf + size, "%d: %uMhz %s\n",
+@@ -1550,7 +1554,10 @@ static int smu10_set_fine_grain_clk_vol(struct pp_hwmgr *hwmgr,
+ 		}
+ 
+ 		if (input[0] == 0) {
+-			smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMinGfxclkFrequency, &min_freq);
++			ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMinGfxclkFrequency, &min_freq);
++			if (ret)
++				return ret;
++
+ 			if (input[1] < min_freq) {
+ 				pr_err("Fine grain setting minimum sclk (%ld) MHz is less than the minimum allowed (%d) MHz\n",
+ 					input[1], min_freq);
+@@ -1558,7 +1565,10 @@ static int smu10_set_fine_grain_clk_vol(struct pp_hwmgr *hwmgr,
+ 			}
+ 			smu10_data->gfx_actual_soft_min_freq = input[1];
+ 		} else if (input[0] == 1) {
+-			smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxGfxclkFrequency, &max_freq);
++			ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxGfxclkFrequency, &max_freq);
++			if (ret)
++				return ret;
++
+ 			if (input[1] > max_freq) {
+ 				pr_err("Fine grain setting maximum sclk (%ld) MHz is greater than the maximum allowed (%d) MHz\n",
+ 					input[1], max_freq);
+@@ -1573,10 +1583,15 @@ static int smu10_set_fine_grain_clk_vol(struct pp_hwmgr *hwmgr,
+ 			pr_err("Input parameter number not correct\n");
+ 			return -EINVAL;
+ 		}
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMinGfxclkFrequency, &min_freq);
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxGfxclkFrequency, &max_freq);
+-
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMinGfxclkFrequency, &min_freq);
++		if (ret)
++			return ret;
+ 		smu10_data->gfx_actual_soft_min_freq = min_freq;
++
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxGfxclkFrequency, &max_freq);
++		if (ret)
++			return ret;
++
+ 		smu10_data->gfx_actual_soft_max_freq = max_freq;
+ 	} else if (type == PP_OD_COMMIT_DPM_TABLE) {
+ 		if (size != 0) {
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+index f1c369945ac5d..bc27a70a1224f 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+@@ -5641,7 +5641,7 @@ static int smu7_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, uint
+ 	mode = input[size];
+ 	switch (mode) {
+ 	case PP_SMC_POWER_PROFILE_CUSTOM:
+-		if (size < 8 && size != 0)
++		if (size != 8 && size != 0)
+ 			return -EINVAL;
+ 		/* If only CUSTOM is passed in, use the saved values. Check
+ 		 * that we actually have a CUSTOM profile by ensuring that
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
+index eb744401e0567..7e11974208732 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
+@@ -584,6 +584,7 @@ static int smu8_init_uvd_limit(struct pp_hwmgr *hwmgr)
+ 				hwmgr->dyn_state.uvd_clock_voltage_dependency_table;
+ 	unsigned long clock = 0;
+ 	uint32_t level;
++	int ret;
+ 
+ 	if (NULL == table || table->count <= 0)
+ 		return -EINVAL;
+@@ -591,7 +592,9 @@ static int smu8_init_uvd_limit(struct pp_hwmgr *hwmgr)
+ 	data->uvd_dpm.soft_min_clk = 0;
+ 	data->uvd_dpm.hard_min_clk = 0;
+ 
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxUvdLevel, &level);
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxUvdLevel, &level);
++	if (ret)
++		return ret;
+ 
+ 	if (level < table->count)
+ 		clock = table->entries[level].vclk;
+@@ -611,6 +614,7 @@ static int smu8_init_vce_limit(struct pp_hwmgr *hwmgr)
+ 				hwmgr->dyn_state.vce_clock_voltage_dependency_table;
+ 	unsigned long clock = 0;
+ 	uint32_t level;
++	int ret;
+ 
+ 	if (NULL == table || table->count <= 0)
+ 		return -EINVAL;
+@@ -618,7 +622,9 @@ static int smu8_init_vce_limit(struct pp_hwmgr *hwmgr)
+ 	data->vce_dpm.soft_min_clk = 0;
+ 	data->vce_dpm.hard_min_clk = 0;
+ 
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxEclkLevel, &level);
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxEclkLevel, &level);
++	if (ret)
++		return ret;
+ 
+ 	if (level < table->count)
+ 		clock = table->entries[level].ecclk;
+@@ -638,6 +644,7 @@ static int smu8_init_acp_limit(struct pp_hwmgr *hwmgr)
+ 				hwmgr->dyn_state.acp_clock_voltage_dependency_table;
+ 	unsigned long clock = 0;
+ 	uint32_t level;
++	int ret;
+ 
+ 	if (NULL == table || table->count <= 0)
+ 		return -EINVAL;
+@@ -645,7 +652,9 @@ static int smu8_init_acp_limit(struct pp_hwmgr *hwmgr)
+ 	data->acp_dpm.soft_min_clk = 0;
+ 	data->acp_dpm.hard_min_clk = 0;
+ 
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxAclkLevel, &level);
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetMaxAclkLevel, &level);
++	if (ret)
++		return ret;
+ 
+ 	if (level < table->count)
+ 		clock = table->entries[level].acpclk;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+index f4acdb2267416..ff605063d7ef0 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+@@ -354,13 +354,13 @@ static int vega10_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
+ 	return 0;
+ }
+ 
+-static void vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
++static int vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ {
+ 	struct vega10_hwmgr *data = hwmgr->backend;
+-	int i;
+ 	uint32_t sub_vendor_id, hw_revision;
+ 	uint32_t top32, bottom32;
+ 	struct amdgpu_device *adev = hwmgr->adev;
++	int ret, i;
+ 
+ 	vega10_initialize_power_tune_defaults(hwmgr);
+ 
+@@ -485,9 +485,12 @@ static void vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ 	if (data->registry_data.vr0hot_enabled)
+ 		data->smu_features[GNLD_VR0HOT].supported = true;
+ 
+-	smum_send_msg_to_smc(hwmgr,
++	ret = smum_send_msg_to_smc(hwmgr,
+ 			PPSMC_MSG_GetSmuVersion,
+ 			&hwmgr->smu_version);
++	if (ret)
++		return ret;
++
+ 		/* ACG firmware has major version 5 */
+ 	if ((hwmgr->smu_version & 0xff000000) == 0x5000000)
+ 		data->smu_features[GNLD_ACG].supported = true;
+@@ -505,10 +508,16 @@ static void vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ 		data->smu_features[GNLD_PCC_LIMIT].supported = true;
+ 
+ 	/* Get the SN to turn into a Unique ID */
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumTop32, &top32);
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumBottom32, &bottom32);
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumTop32, &top32);
++	if (ret)
++		return ret;
++
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumBottom32, &bottom32);
++	if (ret)
++		return ret;
+ 
+ 	adev->unique_id = ((uint64_t)bottom32 << 32) | top32;
++	return 0;
+ }
+ 
+ #ifdef PPLIB_VEGA10_EVV_SUPPORT
+@@ -882,7 +891,9 @@ static int vega10_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
+ 
+ 	vega10_set_features_platform_caps(hwmgr);
+ 
+-	vega10_init_dpm_defaults(hwmgr);
++	result = vega10_init_dpm_defaults(hwmgr);
++	if (result)
++		return result;
+ 
+ #ifdef PPLIB_VEGA10_EVV_SUPPORT
+ 	/* Get leakage voltage based on leakage ID. */
+@@ -2350,15 +2361,20 @@ static int vega10_acg_enable(struct pp_hwmgr *hwmgr)
+ {
+ 	struct vega10_hwmgr *data = hwmgr->backend;
+ 	uint32_t agc_btc_response;
++	int ret;
+ 
+ 	if (data->smu_features[GNLD_ACG].supported) {
+ 		if (0 == vega10_enable_smc_features(hwmgr, true,
+ 					data->smu_features[GNLD_DPM_PREFETCHER].smu_feature_bitmap))
+ 			data->smu_features[GNLD_DPM_PREFETCHER].enabled = true;
+ 
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_InitializeAcg, NULL);
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_InitializeAcg, NULL);
++		if (ret)
++			return ret;
+ 
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_RunAcgBtc, &agc_btc_response);
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_RunAcgBtc, &agc_btc_response);
++		if (ret)
++			agc_btc_response = 0;
+ 
+ 		if (1 == agc_btc_response) {
+ 			if (1 == data->acg_loop_state)
+@@ -2571,8 +2587,11 @@ static int vega10_init_smc_table(struct pp_hwmgr *hwmgr)
+ 		}
+ 	}
+ 
+-	pp_atomfwctrl_get_voltage_table_v4(hwmgr, VOLTAGE_TYPE_VDDC,
++	result = pp_atomfwctrl_get_voltage_table_v4(hwmgr, VOLTAGE_TYPE_VDDC,
+ 			VOLTAGE_OBJ_SVID2,  &voltage_table);
++	PP_ASSERT_WITH_CODE(!result,
++			"Failed to get voltage table!",
++			return result);
+ 	pp_table->MaxVidStep = voltage_table.max_vid_step;
+ 
+ 	pp_table->GfxDpmVoltageMode =
+@@ -3910,11 +3929,14 @@ static int vega10_get_gpu_power(struct pp_hwmgr *hwmgr,
+ 		uint32_t *query)
+ {
+ 	uint32_t value;
++	int ret;
+ 
+ 	if (!query)
+ 		return -EINVAL;
+ 
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrPkgPwr, &value);
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrPkgPwr, &value);
++	if (ret)
++		return ret;
+ 
+ 	/* SMC returning actual watts, keep consistent with legacy asics, low 8 bit as 8 fractional bits */
+ 	*query = value << 8;
+@@ -4810,14 +4832,16 @@ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 	uint32_t gen_speed, lane_width, current_gen_speed, current_lane_width;
+ 	PPTable_t *pptable = &(data->smc_state_table.pp_table);
+ 
+-	int i, now, size = 0, count = 0;
++	int i, ret, now,  size = 0, count = 0;
+ 
+ 	switch (type) {
+ 	case PP_SCLK:
+ 		if (data->registry_data.sclk_dpm_key_disabled)
+ 			break;
+ 
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentGfxclkIndex, &now);
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentGfxclkIndex, &now);
++		if (ret)
++			break;
+ 
+ 		if (hwmgr->pp_one_vf &&
+ 		    (hwmgr->dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK))
+@@ -4833,7 +4857,9 @@ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 		if (data->registry_data.mclk_dpm_key_disabled)
+ 			break;
+ 
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentUclkIndex, &now);
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentUclkIndex, &now);
++		if (ret)
++			break;
+ 
+ 		for (i = 0; i < mclk_table->count; i++)
+ 			size += sprintf(buf + size, "%d: %uMhz %s\n",
+@@ -4844,7 +4870,9 @@ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 		if (data->registry_data.socclk_dpm_key_disabled)
+ 			break;
+ 
+-		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentSocclkIndex, &now);
++		ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentSocclkIndex, &now);
++		if (ret)
++			break;
+ 
+ 		for (i = 0; i < soc_table->count; i++)
+ 			size += sprintf(buf + size, "%d: %uMhz %s\n",
+@@ -4855,8 +4883,10 @@ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
+ 		if (data->registry_data.dcefclk_dpm_key_disabled)
+ 			break;
+ 
+-		smum_send_msg_to_smc_with_parameter(hwmgr,
++		ret = smum_send_msg_to_smc_with_parameter(hwmgr,
+ 				PPSMC_MSG_GetClockFreqMHz, CLK_DCEFCLK, &now);
++		if (ret)
++			break;
+ 
+ 		for (i = 0; i < dcef_table->count; i++)
+ 			size += sprintf(buf + size, "%d: %uMhz %s\n",
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
+index c223e3a6bfca5..10fd4e9f016cd 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
+@@ -293,12 +293,12 @@ static int vega12_set_features_platform_caps(struct pp_hwmgr *hwmgr)
+ 	return 0;
+ }
+ 
+-static void vega12_init_dpm_defaults(struct pp_hwmgr *hwmgr)
++static int vega12_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ {
+ 	struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
+ 	struct amdgpu_device *adev = hwmgr->adev;
+ 	uint32_t top32, bottom32;
+-	int i;
++	int i, ret;
+ 
+ 	data->smu_features[GNLD_DPM_PREFETCHER].smu_feature_id =
+ 			FEATURE_DPM_PREFETCHER_BIT;
+@@ -364,10 +364,16 @@ static void vega12_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ 	}
+ 
+ 	/* Get the SN to turn into a Unique ID */
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumTop32, &top32);
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumBottom32, &bottom32);
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumTop32, &top32);
++	if (ret)
++		return ret;
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumBottom32, &bottom32);
++	if (ret)
++		return ret;
+ 
+ 	adev->unique_id = ((uint64_t)bottom32 << 32) | top32;
++
++	return 0;
+ }
+ 
+ static int vega12_set_private_data_based_on_pptable(struct pp_hwmgr *hwmgr)
+@@ -410,7 +416,11 @@ static int vega12_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
+ 
+ 	vega12_set_features_platform_caps(hwmgr);
+ 
+-	vega12_init_dpm_defaults(hwmgr);
++	result = vega12_init_dpm_defaults(hwmgr);
++	if (result) {
++		pr_err("%s failed\n", __func__);
++		return result;
++	}
+ 
+ 	/* Parse pptable data read from VBIOS */
+ 	vega12_set_private_data_based_on_pptable(hwmgr);
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
+index f9efb0bad8072..baf251fe5d828 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
+@@ -328,12 +328,12 @@ static int vega20_set_features_platform_caps(struct pp_hwmgr *hwmgr)
+ 	return 0;
+ }
+ 
+-static void vega20_init_dpm_defaults(struct pp_hwmgr *hwmgr)
++static int vega20_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ {
+ 	struct vega20_hwmgr *data = (struct vega20_hwmgr *)(hwmgr->backend);
+ 	struct amdgpu_device *adev = hwmgr->adev;
+ 	uint32_t top32, bottom32;
+-	int i;
++	int i, ret;
+ 
+ 	data->smu_features[GNLD_DPM_PREFETCHER].smu_feature_id =
+ 			FEATURE_DPM_PREFETCHER_BIT;
+@@ -404,10 +404,17 @@ static void vega20_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ 	}
+ 
+ 	/* Get the SN to turn into a Unique ID */
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumTop32, &top32);
+-	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumBottom32, &bottom32);
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumTop32, &top32);
++	if (ret)
++		return ret;
++
++	ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ReadSerialNumBottom32, &bottom32);
++	if (ret)
++		return ret;
+ 
+ 	adev->unique_id = ((uint64_t)bottom32 << 32) | top32;
++
++	return 0;
+ }
+ 
+ static int vega20_set_private_data_based_on_pptable(struct pp_hwmgr *hwmgr)
+@@ -427,6 +434,7 @@ static int vega20_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
+ {
+ 	struct vega20_hwmgr *data;
+ 	struct amdgpu_device *adev = hwmgr->adev;
++	int result;
+ 
+ 	data = kzalloc(sizeof(struct vega20_hwmgr), GFP_KERNEL);
+ 	if (data == NULL)
+@@ -452,8 +460,11 @@ static int vega20_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
+ 
+ 	vega20_set_features_platform_caps(hwmgr);
+ 
+-	vega20_init_dpm_defaults(hwmgr);
+-
++	result = vega20_init_dpm_defaults(hwmgr);
++	if (result) {
++		pr_err("%s failed\n", __func__);
++		return result;
++	}
+ 	/* Parse pptable data read from VBIOS */
+ 	vega20_set_private_data_based_on_pptable(hwmgr);
+ 
+@@ -4091,9 +4102,11 @@ static int vega20_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, ui
+ 	if (power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+ 		struct vega20_hwmgr *data =
+ 			(struct vega20_hwmgr *)(hwmgr->backend);
+-		if (size == 0 && !data->is_custom_profile_set)
++
++		if (size != 10 && size != 0)
+ 			return -EINVAL;
+-		if (size < 10 && size != 0)
++
++		if (size == 0 && !data->is_custom_profile_set)
+ 			return -EINVAL;
+ 
+ 		result = vega20_get_activity_monitor_coeff(hwmgr,
+@@ -4155,6 +4168,8 @@ static int vega20_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, ui
+ 			activity_monitor.Fclk_PD_Data_error_coeff = input[8];
+ 			activity_monitor.Fclk_PD_Data_error_rate_coeff = input[9];
+ 			break;
++		default:
++			return -EINVAL;
+ 		}
+ 
+ 		result = vega20_set_activity_monitor_coeff(hwmgr,
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
+index a70d738966490..f9c0f117725dd 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
+@@ -130,13 +130,17 @@ int vega10_get_enabled_smc_features(struct pp_hwmgr *hwmgr,
+ 			    uint64_t *features_enabled)
+ {
+ 	uint32_t enabled_features;
++	int ret;
+ 
+ 	if (features_enabled == NULL)
+ 		return -EINVAL;
+ 
+-	smum_send_msg_to_smc(hwmgr,
++	ret = smum_send_msg_to_smc(hwmgr,
+ 			PPSMC_MSG_GetEnabledSmuFeatures,
+ 			&enabled_features);
++	if (ret)
++		return ret;
++
+ 	*features_enabled = enabled_features;
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
+index d9700a3f28d2f..e58220a7ee2f7 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
+@@ -298,5 +298,9 @@ int smu_v13_0_enable_uclk_shadow(struct smu_context *smu, bool enable);
+ 
+ int smu_v13_0_set_wbrf_exclusion_ranges(struct smu_context *smu,
+ 						 struct freq_band_range *exclusion_ranges);
++
++int smu_v13_0_get_boot_freq_by_index(struct smu_context *smu,
++				     enum smu_clk_type clk_type,
++				     uint32_t *value);
+ #endif
+ #endif
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index 6d334a2aff672..623f6052f97ed 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -1416,6 +1416,9 @@ static int arcturus_set_power_profile_mode(struct smu_context *smu,
+ 
+ 	if ((profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) &&
+ 	     (smu->smc_fw_version >= 0x360d00)) {
++		if (size != 10)
++			return -EINVAL;
++
+ 		ret = smu_cmn_update_table(smu,
+ 				       SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+ 				       WORKLOAD_PPLIB_CUSTOM_BIT,
+@@ -1449,6 +1452,8 @@ static int arcturus_set_power_profile_mode(struct smu_context *smu,
+ 			activity_monitor.Mem_PD_Data_error_coeff = input[8];
+ 			activity_monitor.Mem_PD_Data_error_rate_coeff = input[9];
+ 			break;
++		default:
++			return -EINVAL;
+ 		}
+ 
+ 		ret = smu_cmn_update_table(smu,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index 5a68d365967f7..01039cdd456b0 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -1219,19 +1219,22 @@ static int navi10_get_current_clk_freq_by_table(struct smu_context *smu,
+ 					   value);
+ }
+ 
+-static bool navi10_is_support_fine_grained_dpm(struct smu_context *smu, enum smu_clk_type clk_type)
++static int navi10_is_support_fine_grained_dpm(struct smu_context *smu, enum smu_clk_type clk_type)
+ {
+ 	PPTable_t *pptable = smu->smu_table.driver_pptable;
+ 	DpmDescriptor_t *dpm_desc = NULL;
+-	uint32_t clk_index = 0;
++	int clk_index = 0;
+ 
+ 	clk_index = smu_cmn_to_asic_specific_index(smu,
+ 						   CMN2ASIC_MAPPING_CLK,
+ 						   clk_type);
++	if (clk_index < 0)
++		return clk_index;
++
+ 	dpm_desc = &pptable->DpmDescriptor[clk_index];
+ 
+ 	/* 0 - Fine grained DPM, 1 - Discrete DPM */
+-	return dpm_desc->SnapToDiscrete == 0;
++	return dpm_desc->SnapToDiscrete == 0 ? 1 : 0;
+ }
+ 
+ static inline bool navi10_od_feature_is_supported(struct smu_11_0_overdrive_table *od_table, enum SMU_11_0_ODFEATURE_CAP cap)
+@@ -1287,7 +1290,11 @@ static int navi10_emit_clk_levels(struct smu_context *smu,
+ 		if (ret)
+ 			return ret;
+ 
+-		if (!navi10_is_support_fine_grained_dpm(smu, clk_type)) {
++		ret = navi10_is_support_fine_grained_dpm(smu, clk_type);
++		if (ret < 0)
++			return ret;
++
++		if (!ret) {
+ 			for (i = 0; i < count; i++) {
+ 				ret = smu_v11_0_get_dpm_freq_by_index(smu,
+ 								      clk_type, i, &value);
+@@ -1496,7 +1503,11 @@ static int navi10_print_clk_levels(struct smu_context *smu,
+ 		if (ret)
+ 			return size;
+ 
+-		if (!navi10_is_support_fine_grained_dpm(smu, clk_type)) {
++		ret = navi10_is_support_fine_grained_dpm(smu, clk_type);
++		if (ret < 0)
++			return ret;
++
++		if (!ret) {
+ 			for (i = 0; i < count; i++) {
+ 				ret = smu_v11_0_get_dpm_freq_by_index(smu, clk_type, i, &value);
+ 				if (ret)
+@@ -1665,7 +1676,11 @@ static int navi10_force_clk_levels(struct smu_context *smu,
+ 	case SMU_UCLK:
+ 	case SMU_FCLK:
+ 		/* There is only 2 levels for fine grained DPM */
+-		if (navi10_is_support_fine_grained_dpm(smu, clk_type)) {
++		ret = navi10_is_support_fine_grained_dpm(smu, clk_type);
++		if (ret < 0)
++			return ret;
++
++		if (ret) {
+ 			soft_max_level = (soft_max_level >= 1 ? 1 : 0);
+ 			soft_min_level = (soft_min_level >= 1 ? 1 : 0);
+ 		}
+@@ -2006,6 +2021,8 @@ static int navi10_set_power_profile_mode(struct smu_context *smu, long *input, u
+ 	}
+ 
+ 	if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
++		if (size != 10)
++			return -EINVAL;
+ 
+ 		ret = smu_cmn_update_table(smu,
+ 				       SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+@@ -2049,6 +2066,8 @@ static int navi10_set_power_profile_mode(struct smu_context *smu, long *input, u
+ 			activity_monitor.Mem_PD_Data_error_coeff = input[8];
+ 			activity_monitor.Mem_PD_Data_error_rate_coeff = input[9];
+ 			break;
++		default:
++			return -EINVAL;
+ 		}
+ 
+ 		ret = smu_cmn_update_table(smu,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index e426f457a017f..d5a21d7836cc6 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -1722,6 +1722,8 @@ static int sienna_cichlid_set_power_profile_mode(struct smu_context *smu, long *
+ 	}
+ 
+ 	if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
++		if (size != 10)
++			return -EINVAL;
+ 
+ 		ret = smu_cmn_update_table(smu,
+ 				       SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+@@ -1765,6 +1767,8 @@ static int sienna_cichlid_set_power_profile_mode(struct smu_context *smu, long *
+ 			activity_monitor->Mem_PD_Data_error_coeff = input[8];
+ 			activity_monitor->Mem_PD_Data_error_rate_coeff = input[9];
+ 			break;
++		default:
++			return -EINVAL;
+ 		}
+ 
+ 		ret = smu_cmn_update_table(smu,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+index 379e44eb0019d..22737b11b1bfb 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+@@ -976,6 +976,18 @@ static int vangogh_get_dpm_ultimate_freq(struct smu_context *smu,
+ 		}
+ 	}
+ 	if (min) {
++		ret = vangogh_get_profiling_clk_mask(smu,
++						     AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK,
++						     NULL,
++						     NULL,
++						     &mclk_mask,
++						     &fclk_mask,
++						     &soc_mask);
++		if (ret)
++			goto failed;
++
++		vclk_mask = dclk_mask = 0;
++
+ 		switch (clk_type) {
+ 		case SMU_UCLK:
+ 		case SMU_MCLK:
+@@ -2450,6 +2462,8 @@ static u32 vangogh_set_gfxoff_residency(struct smu_context *smu, bool start)
+ 
+ 	ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_LogGfxOffResidency,
+ 					      start, &residency);
++	if (ret)
++		return ret;
+ 
+ 	if (!start)
+ 		adev->gfx.gfx_off_residency = residency;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+index ce941fbb9cfbe..a22eb6bbb05ed 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+@@ -1886,7 +1886,8 @@ static int aldebaran_mode2_reset(struct smu_context *smu)
+ 
+ 	index = smu_cmn_to_asic_specific_index(smu, CMN2ASIC_MAPPING_MSG,
+ 						SMU_MSG_GfxDeviceDriverReset);
+-
++	if (index < 0 )
++		return -EINVAL;
+ 	mutex_lock(&smu->message_lock);
+ 	if (smu->smc_fw_version >= 0x00441400) {
+ 		ret = smu_cmn_send_msg_without_waiting(smu, (uint16_t)index, SMU_RESET_MODE_2);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index b63ad9cb24bfd..933fe93c8d1e6 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -1559,22 +1559,9 @@ int smu_v13_0_get_dpm_ultimate_freq(struct smu_context *smu, enum smu_clk_type c
+ 	uint32_t clock_limit;
+ 
+ 	if (!smu_cmn_clk_dpm_is_enabled(smu, clk_type)) {
+-		switch (clk_type) {
+-		case SMU_MCLK:
+-		case SMU_UCLK:
+-			clock_limit = smu->smu_table.boot_values.uclk;
+-			break;
+-		case SMU_GFXCLK:
+-		case SMU_SCLK:
+-			clock_limit = smu->smu_table.boot_values.gfxclk;
+-			break;
+-		case SMU_SOCCLK:
+-			clock_limit = smu->smu_table.boot_values.socclk;
+-			break;
+-		default:
+-			clock_limit = 0;
+-			break;
+-		}
++		ret = smu_v13_0_get_boot_freq_by_index(smu, clk_type, &clock_limit);
++		if (ret)
++			return ret;
+ 
+ 		/* clock in Mhz unit */
+ 		if (min)
+@@ -1894,6 +1881,40 @@ int smu_v13_0_set_power_source(struct smu_context *smu,
+ 					       NULL);
+ }
+ 
++int smu_v13_0_get_boot_freq_by_index(struct smu_context *smu,
++				     enum smu_clk_type clk_type,
++				     uint32_t *value)
++{
++	int ret = 0;
++
++	switch (clk_type) {
++	case SMU_MCLK:
++	case SMU_UCLK:
++		*value = smu->smu_table.boot_values.uclk;
++		break;
++	case SMU_FCLK:
++		*value = smu->smu_table.boot_values.fclk;
++		break;
++	case SMU_GFXCLK:
++	case SMU_SCLK:
++		*value = smu->smu_table.boot_values.gfxclk;
++		break;
++	case SMU_SOCCLK:
++		*value = smu->smu_table.boot_values.socclk;
++		break;
++	case SMU_VCLK:
++		*value = smu->smu_table.boot_values.vclk;
++		break;
++	case SMU_DCLK:
++		*value = smu->smu_table.boot_values.dclk;
++		break;
++	default:
++		ret = -EINVAL;
++		break;
++	}
++	return ret;
++}
++
+ int smu_v13_0_get_dpm_freq_by_index(struct smu_context *smu,
+ 				    enum smu_clk_type clk_type, uint16_t level,
+ 				    uint32_t *value)
+@@ -1905,7 +1926,7 @@ int smu_v13_0_get_dpm_freq_by_index(struct smu_context *smu,
+ 		return -EINVAL;
+ 
+ 	if (!smu_cmn_clk_dpm_is_enabled(smu, clk_type))
+-		return 0;
++		return smu_v13_0_get_boot_freq_by_index(smu, clk_type, value);
+ 
+ 	clk_id = smu_cmn_to_asic_specific_index(smu,
+ 						CMN2ASIC_MAPPING_CLK,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index 1e09d5f2d82fe..f7e756ca36dcd 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2495,6 +2495,9 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ 	}
+ 
+ 	if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
++		if (size != 9)
++			return -EINVAL;
++
+ 		ret = smu_cmn_update_table(smu,
+ 					   SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+ 					   WORKLOAD_PPLIB_CUSTOM_BIT,
+@@ -2526,6 +2529,8 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ 			activity_monitor->Fclk_PD_Data_error_coeff = input[7];
+ 			activity_monitor->Fclk_PD_Data_error_rate_coeff = input[8];
+ 			break;
++		default:
++			return -EINVAL;
+ 		}
+ 
+ 		ret = smu_cmn_update_table(smu,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
+index b6257f34a7c65..b081ae3e8f43c 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
+@@ -758,31 +758,9 @@ static int smu_v13_0_4_get_dpm_ultimate_freq(struct smu_context *smu,
+ 	int ret = 0;
+ 
+ 	if (!smu_v13_0_4_clk_dpm_is_enabled(smu, clk_type)) {
+-		switch (clk_type) {
+-		case SMU_MCLK:
+-		case SMU_UCLK:
+-			clock_limit = smu->smu_table.boot_values.uclk;
+-			break;
+-		case SMU_FCLK:
+-			clock_limit = smu->smu_table.boot_values.fclk;
+-			break;
+-		case SMU_GFXCLK:
+-		case SMU_SCLK:
+-			clock_limit = smu->smu_table.boot_values.gfxclk;
+-			break;
+-		case SMU_SOCCLK:
+-			clock_limit = smu->smu_table.boot_values.socclk;
+-			break;
+-		case SMU_VCLK:
+-			clock_limit = smu->smu_table.boot_values.vclk;
+-			break;
+-		case SMU_DCLK:
+-			clock_limit = smu->smu_table.boot_values.dclk;
+-			break;
+-		default:
+-			clock_limit = 0;
+-			break;
+-		}
++		ret = smu_v13_0_get_boot_freq_by_index(smu, clk_type, &clock_limit);
++		if (ret)
++			return ret;
+ 
+ 		/* clock in Mhz unit */
+ 		if (min)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_5_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_5_ppt.c
+index 218f209c37756..59854465d7115 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_5_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_5_ppt.c
+@@ -733,31 +733,9 @@ static int smu_v13_0_5_get_dpm_ultimate_freq(struct smu_context *smu,
+ 	int ret = 0;
+ 
+ 	if (!smu_v13_0_5_clk_dpm_is_enabled(smu, clk_type)) {
+-		switch (clk_type) {
+-		case SMU_MCLK:
+-		case SMU_UCLK:
+-			clock_limit = smu->smu_table.boot_values.uclk;
+-			break;
+-		case SMU_FCLK:
+-			clock_limit = smu->smu_table.boot_values.fclk;
+-			break;
+-		case SMU_GFXCLK:
+-		case SMU_SCLK:
+-			clock_limit = smu->smu_table.boot_values.gfxclk;
+-			break;
+-		case SMU_SOCCLK:
+-			clock_limit = smu->smu_table.boot_values.socclk;
+-			break;
+-		case SMU_VCLK:
+-			clock_limit = smu->smu_table.boot_values.vclk;
+-			break;
+-		case SMU_DCLK:
+-			clock_limit = smu->smu_table.boot_values.dclk;
+-			break;
+-		default:
+-			clock_limit = 0;
+-			break;
+-		}
++		ret = smu_v13_0_get_boot_freq_by_index(smu, clk_type, &clock_limit);
++		if (ret)
++			return ret;
+ 
+ 		/* clock in Mhz unit */
+ 		if (min)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
+index 4d3eca2fc3f11..f4469d001d7c4 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
+@@ -2333,6 +2333,8 @@ static int smu_v13_0_6_mode2_reset(struct smu_context *smu)
+ 
+ 	index = smu_cmn_to_asic_specific_index(smu, CMN2ASIC_MAPPING_MSG,
+ 					       SMU_MSG_GfxDeviceDriverReset);
++	if (index < 0)
++		return index;
+ 
+ 	mutex_lock(&smu->message_lock);
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index e996a0a4d33e1..4f98869e02848 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2450,6 +2450,8 @@ static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu, long *inp
+ 	}
+ 
+ 	if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
++		if (size != 8)
++			return -EINVAL;
+ 
+ 		ret = smu_cmn_update_table(smu,
+ 				       SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+@@ -2478,6 +2480,8 @@ static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu, long *inp
+ 			activity_monitor->Fclk_MinActiveFreq = input[6];
+ 			activity_monitor->Fclk_BoosterFreq = input[7];
+ 			break;
++		default:
++			return -EINVAL;
+ 		}
+ 
+ 		ret = smu_cmn_update_table(smu,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
+index d8bcf765a8038..5917c88cc87d6 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
+@@ -867,31 +867,9 @@ static int yellow_carp_get_dpm_ultimate_freq(struct smu_context *smu,
+ 	int ret = 0;
+ 
+ 	if (!yellow_carp_clk_dpm_is_enabled(smu, clk_type)) {
+-		switch (clk_type) {
+-		case SMU_MCLK:
+-		case SMU_UCLK:
+-			clock_limit = smu->smu_table.boot_values.uclk;
+-			break;
+-		case SMU_FCLK:
+-			clock_limit = smu->smu_table.boot_values.fclk;
+-			break;
+-		case SMU_GFXCLK:
+-		case SMU_SCLK:
+-			clock_limit = smu->smu_table.boot_values.gfxclk;
+-			break;
+-		case SMU_SOCCLK:
+-			clock_limit = smu->smu_table.boot_values.socclk;
+-			break;
+-		case SMU_VCLK:
+-			clock_limit = smu->smu_table.boot_values.vclk;
+-			break;
+-		case SMU_DCLK:
+-			clock_limit = smu->smu_table.boot_values.dclk;
+-			break;
+-		default:
+-			clock_limit = 0;
+-			break;
+-		}
++		ret = smu_v13_0_get_boot_freq_by_index(smu, clk_type, &clock_limit);
++		if (ret)
++			return ret;
+ 
+ 		/* clock in Mhz unit */
+ 		if (min)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index 90703f4542aba..06b65159f7b4a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -1364,6 +1364,9 @@ static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
+ 	}
+ 
+ 	if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
++		if (size != 9)
++			return -EINVAL;
++
+ 		ret = smu_cmn_update_table(smu,
+ 					   SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+ 					   WORKLOAD_PPLIB_CUSTOM_BIT,
+@@ -1395,6 +1398,8 @@ static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
+ 			activity_monitor->Fclk_PD_Data_error_coeff = input[7];
+ 			activity_monitor->Fclk_PD_Data_error_rate_coeff = input[8];
+ 			break;
++		default:
++			return -EINVAL;
+ 		}
+ 
+ 		ret = smu_cmn_update_table(smu,
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index 166f9a3e9622d..332f0aa50fee4 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -2135,7 +2135,7 @@ static irqreturn_t tc_irq_handler(int irq, void *arg)
+ 		dev_err(tc->dev, "syserr %x\n", stat);
+ 	}
+ 
+-	if (tc->hpd_pin >= 0 && tc->bridge.dev) {
++	if (tc->hpd_pin >= 0 && tc->bridge.dev && tc->aux.drm_dev) {
+ 		/*
+ 		 * H is triggered when the GPIO goes high.
+ 		 *
+diff --git a/drivers/gpu/drm/drm_bridge.c b/drivers/gpu/drm/drm_bridge.c
+index 28abe9aa99ca5..584d109330ab1 100644
+--- a/drivers/gpu/drm/drm_bridge.c
++++ b/drivers/gpu/drm/drm_bridge.c
+@@ -353,13 +353,8 @@ int drm_bridge_attach(struct drm_encoder *encoder, struct drm_bridge *bridge,
+ 	bridge->encoder = NULL;
+ 	list_del(&bridge->chain_node);
+ 
+-#ifdef CONFIG_OF
+ 	DRM_ERROR("failed to attach bridge %pOF to encoder %s: %d\n",
+ 		  bridge->of_node, encoder->name, ret);
+-#else
+-	DRM_ERROR("failed to attach bridge to encoder %s: %d\n",
+-		  encoder->name, ret);
+-#endif
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 117237d3528bd..618b045230336 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -631,6 +631,17 @@ static void drm_fb_helper_add_damage_clip(struct drm_fb_helper *helper, u32 x, u
+ static void drm_fb_helper_damage(struct drm_fb_helper *helper, u32 x, u32 y,
+ 				 u32 width, u32 height)
+ {
++	/*
++	 * This function may be invoked by panic() to flush the frame
++	 * buffer, where all CPUs except the panic CPU are stopped.
++	 * During the following schedule_work(), the panic CPU needs
++	 * the worker_pool lock, which might be held by a stopped CPU,
++	 * causing schedule_work() and panic() to block. Return early on
++	 * oops_in_progress to prevent this blocking.
++	 */
++	if (oops_in_progress)
++		return;
++
+ 	drm_fb_helper_add_damage_clip(helper, x, y, width, height);
+ 
+ 	schedule_work(&helper->damage_work);
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 3860a8ce1e2d4..903f4bfea7e83 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -414,6 +414,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ONE XPLAYER"),
+ 		},
+ 		.driver_data = (void *)&lcd1600x2560_leftside_up,
++	}, {	/* OrangePi Neo */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "OrangePi"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "NEO-01"),
++		},
++		.driver_data = (void *)&lcd1200x1920_rightside_up,
+ 	}, {	/* Samsung GalaxyBook 10.6 */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
+diff --git a/drivers/gpu/drm/meson/meson_plane.c b/drivers/gpu/drm/meson/meson_plane.c
+index 815dfe30492b6..b43ac61201f31 100644
+--- a/drivers/gpu/drm/meson/meson_plane.c
++++ b/drivers/gpu/drm/meson/meson_plane.c
+@@ -534,6 +534,7 @@ int meson_plane_create(struct meson_drm *priv)
+ 	struct meson_plane *meson_plane;
+ 	struct drm_plane *plane;
+ 	const uint64_t *format_modifiers = format_modifiers_default;
++	int ret;
+ 
+ 	meson_plane = devm_kzalloc(priv->drm->dev, sizeof(*meson_plane),
+ 				   GFP_KERNEL);
+@@ -548,12 +549,16 @@ int meson_plane_create(struct meson_drm *priv)
+ 	else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A))
+ 		format_modifiers = format_modifiers_afbc_g12a;
+ 
+-	drm_universal_plane_init(priv->drm, plane, 0xFF,
+-				 &meson_plane_funcs,
+-				 supported_drm_formats,
+-				 ARRAY_SIZE(supported_drm_formats),
+-				 format_modifiers,
+-				 DRM_PLANE_TYPE_PRIMARY, "meson_primary_plane");
++	ret = drm_universal_plane_init(priv->drm, plane, 0xFF,
++					&meson_plane_funcs,
++					supported_drm_formats,
++					ARRAY_SIZE(supported_drm_formats),
++					format_modifiers,
++					DRM_PLANE_TYPE_PRIMARY, "meson_primary_plane");
++	if (ret) {
++		devm_kfree(priv->drm->dev, meson_plane);
++		return ret;
++	}
+ 
+ 	drm_plane_helper_add(plane, &meson_plane_helper_funcs);
+ 
+diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
+index 83cb157da7cc6..a2577672f4e3e 100644
+--- a/drivers/gpu/drm/xe/xe_force_wake.h
++++ b/drivers/gpu/drm/xe/xe_force_wake.h
+@@ -24,14 +24,25 @@ static inline int
+ xe_force_wake_ref(struct xe_force_wake *fw,
+ 		  enum xe_force_wake_domains domain)
+ {
+-	xe_gt_assert(fw->gt, domain);
++	xe_gt_assert(fw->gt, domain != XE_FORCEWAKE_ALL);
+ 	return fw->domains[ffs(domain) - 1].ref;
+ }
+ 
++/**
++ * xe_force_wake_assert_held - asserts domain is awake
++ * @fw : xe_force_wake structure
++ * @domain: xe_force_wake_domains apart from XE_FORCEWAKE_ALL
++ *
++ * xe_force_wake_assert_held() is designed to confirm a particular
++ * forcewake domain's wakefulness; it doesn't verify the wakefulness of
++ * multiple domains. Make sure the caller doesn't input multiple
++ * domains(XE_FORCEWAKE_ALL) as a parameter.
++ */
+ static inline void
+ xe_force_wake_assert_held(struct xe_force_wake *fw,
+ 			  enum xe_force_wake_domains domain)
+ {
++	xe_gt_assert(fw->gt, domain != XE_FORCEWAKE_ALL);
+ 	xe_gt_assert(fw->gt, fw->awake_domains & domain);
+ }
+ 
+diff --git a/drivers/gpu/drm/xe/xe_gt_ccs_mode.c b/drivers/gpu/drm/xe/xe_gt_ccs_mode.c
+index 396aeb5b99242..a34c9a24dafc7 100644
+--- a/drivers/gpu/drm/xe/xe_gt_ccs_mode.c
++++ b/drivers/gpu/drm/xe/xe_gt_ccs_mode.c
+@@ -68,8 +68,8 @@ static void __xe_gt_apply_ccs_mode(struct xe_gt *gt, u32 num_engines)
+ 
+ 	xe_mmio_write32(gt, CCS_MODE, mode);
+ 
+-	xe_gt_info(gt, "CCS_MODE=%x config:%08x, num_engines:%d, num_slices:%d\n",
+-		   mode, config, num_engines, num_slices);
++	xe_gt_dbg(gt, "CCS_MODE=%x config:%08x, num_engines:%d, num_slices:%d\n",
++		  mode, config, num_engines, num_slices);
+ }
+ 
+ void xe_gt_apply_ccs_mode(struct xe_gt *gt)
+diff --git a/drivers/gpu/drm/xe/xe_gt_topology.c b/drivers/gpu/drm/xe/xe_gt_topology.c
+index 3733e7a6860d3..d224ed1b5c0f9 100644
+--- a/drivers/gpu/drm/xe/xe_gt_topology.c
++++ b/drivers/gpu/drm/xe/xe_gt_topology.c
+@@ -108,7 +108,9 @@ gen_l3_mask_from_pattern(struct xe_device *xe, xe_l3_bank_mask_t dst,
+ {
+ 	unsigned long bit;
+ 
+-	xe_assert(xe, fls(mask) <= patternbits);
++	xe_assert(xe, find_last_bit(pattern, XE_MAX_L3_BANK_MASK_BITS) < patternbits ||
++		  bitmap_empty(pattern, XE_MAX_L3_BANK_MASK_BITS));
++	xe_assert(xe, !mask || patternbits * (__fls(mask) + 1) <= XE_MAX_L3_BANK_MASK_BITS);
+ 	for_each_set_bit(bit, &mask, 32) {
+ 		xe_l3_bank_mask_t shifted_pattern = {};
+ 
+diff --git a/drivers/gpu/drm/xe/xe_guc_relay.c b/drivers/gpu/drm/xe/xe_guc_relay.c
+index c0a2d8d5d3b3c..b49137ea6d843 100644
+--- a/drivers/gpu/drm/xe/xe_guc_relay.c
++++ b/drivers/gpu/drm/xe/xe_guc_relay.c
+@@ -757,7 +757,14 @@ static void relay_process_incoming_action(struct xe_guc_relay *relay)
+ 
+ static bool relay_needs_worker(struct xe_guc_relay *relay)
+ {
+-	return !list_empty(&relay->incoming_actions);
++	bool is_empty;
++
++	spin_lock(&relay->lock);
++	is_empty = list_empty(&relay->incoming_actions);
++	spin_unlock(&relay->lock);
++
++	return !is_empty;
++
+ }
+ 
+ static void relay_kick_worker(struct xe_guc_relay *relay)
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index e48285c81bf57..0a496612c810f 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -1551,6 +1551,11 @@ static void deregister_exec_queue(struct xe_guc *guc, struct xe_exec_queue *q)
+ 		q->guc->id,
+ 	};
+ 
++	xe_gt_assert(guc_to_gt(guc), exec_queue_destroyed(q));
++	xe_gt_assert(guc_to_gt(guc), exec_queue_registered(q));
++	xe_gt_assert(guc_to_gt(guc), !exec_queue_pending_disable(q));
++	xe_gt_assert(guc_to_gt(guc), !exec_queue_pending_enable(q));
++
+ 	trace_xe_exec_queue_deregister(q);
+ 
+ 	xe_guc_ct_send_g2h_handler(&guc->ct, action, ARRAY_SIZE(action));
+diff --git a/drivers/gpu/drm/xe/xe_hwmon.c b/drivers/gpu/drm/xe/xe_hwmon.c
+index bb815dbde63a6..daf0d15354fea 100644
+--- a/drivers/gpu/drm/xe/xe_hwmon.c
++++ b/drivers/gpu/drm/xe/xe_hwmon.c
+@@ -551,12 +551,17 @@ xe_hwmon_curr_is_visible(const struct xe_hwmon *hwmon, u32 attr, int channel)
+ {
+ 	u32 uval;
+ 
++	/* hwmon sysfs attribute of current available only for package */
++	if (channel != CHANNEL_PKG)
++		return 0;
++
+ 	switch (attr) {
+ 	case hwmon_curr_crit:
+-	case hwmon_curr_label:
+-		if (channel == CHANNEL_PKG)
+ 			return (xe_hwmon_pcode_read_i1(hwmon->gt, &uval) ||
+ 				(uval & POWER_SETUP_I1_WATTS)) ? 0 : 0644;
++	case hwmon_curr_label:
++			return (xe_hwmon_pcode_read_i1(hwmon->gt, &uval) ||
++				(uval & POWER_SETUP_I1_WATTS)) ? 0 : 0444;
+ 		break;
+ 	default:
+ 		return 0;
+diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
+index 198f5c2189cb4..208649436fdb3 100644
+--- a/drivers/gpu/drm/xe/xe_migrate.c
++++ b/drivers/gpu/drm/xe/xe_migrate.c
+@@ -69,7 +69,7 @@ struct xe_migrate {
+ 
+ #define MAX_PREEMPTDISABLE_TRANSFER SZ_8M /* Around 1ms. */
+ #define MAX_CCS_LIMITED_TRANSFER SZ_4M /* XE_PAGE_SIZE * (FIELD_MAX(XE2_CCS_SIZE_MASK) + 1) */
+-#define NUM_KERNEL_PDE 17
++#define NUM_KERNEL_PDE 15
+ #define NUM_PT_SLOTS 32
+ #define LEVEL0_PAGE_TABLE_ENCODE_SIZE SZ_2M
+ #define MAX_NUM_PTE 512
+@@ -137,10 +137,11 @@ static int xe_migrate_prepare_vm(struct xe_tile *tile, struct xe_migrate *m,
+ 	struct xe_device *xe = tile_to_xe(tile);
+ 	u16 pat_index = xe->pat.idx[XE_CACHE_WB];
+ 	u8 id = tile->id;
+-	u32 num_entries = NUM_PT_SLOTS, num_level = vm->pt_root[id]->level;
++	u32 num_entries = NUM_PT_SLOTS, num_level = vm->pt_root[id]->level,
++	    num_setup = num_level + 1;
+ 	u32 map_ofs, level, i;
+ 	struct xe_bo *bo, *batch = tile->mem.kernel_bb_pool->bo;
+-	u64 entry;
++	u64 entry, pt30_ofs;
+ 
+ 	/* Can't bump NUM_PT_SLOTS too high */
+ 	BUILD_BUG_ON(NUM_PT_SLOTS > SZ_2M/XE_PAGE_SIZE);
+@@ -160,10 +161,12 @@ static int xe_migrate_prepare_vm(struct xe_tile *tile, struct xe_migrate *m,
+ 	if (IS_ERR(bo))
+ 		return PTR_ERR(bo);
+ 
+-	entry = vm->pt_ops->pde_encode_bo(bo, bo->size - XE_PAGE_SIZE, pat_index);
++	/* PT31 reserved for 2M identity map */
++	pt30_ofs = bo->size - 2 * XE_PAGE_SIZE;
++	entry = vm->pt_ops->pde_encode_bo(bo, pt30_ofs, pat_index);
+ 	xe_pt_write(xe, &vm->pt_root[id]->bo->vmap, 0, entry);
+ 
+-	map_ofs = (num_entries - num_level) * XE_PAGE_SIZE;
++	map_ofs = (num_entries - num_setup) * XE_PAGE_SIZE;
+ 
+ 	/* Map the entire BO in our level 0 pt */
+ 	for (i = 0, level = 0; i < num_entries; level++) {
+@@ -234,7 +237,7 @@ static int xe_migrate_prepare_vm(struct xe_tile *tile, struct xe_migrate *m,
+ 	}
+ 
+ 	/* Write PDE's that point to our BO. */
+-	for (i = 0; i < num_entries - num_level; i++) {
++	for (i = 0; i < map_ofs / PAGE_SIZE; i++) {
+ 		entry = vm->pt_ops->pde_encode_bo(bo, (u64)i * XE_PAGE_SIZE,
+ 						  pat_index);
+ 
+@@ -252,28 +255,54 @@ static int xe_migrate_prepare_vm(struct xe_tile *tile, struct xe_migrate *m,
+ 	/* Identity map the entire vram at 256GiB offset */
+ 	if (IS_DGFX(xe)) {
+ 		u64 pos, ofs, flags;
++		/* XXX: Unclear if this should be usable_size? */
++		u64 vram_limit =  xe->mem.vram.actual_physical_size +
++			xe->mem.vram.dpa_base;
+ 
+ 		level = 2;
+ 		ofs = map_ofs + XE_PAGE_SIZE * level + 256 * 8;
+ 		flags = vm->pt_ops->pte_encode_addr(xe, 0, pat_index, level,
+ 						    true, 0);
+ 
++		xe_assert(xe, IS_ALIGNED(xe->mem.vram.usable_size, SZ_2M));
++
+ 		/*
+-		 * Use 1GB pages, it shouldn't matter the physical amount of
+-		 * vram is less, when we don't access it.
++		 * Use 1GB pages when possible, last chunk always use 2M
++		 * pages as mixing reserved memory (stolen, WOCPM) with a single
++		 * mapping is not allowed on certain platforms.
+ 		 */
+-		for (pos = xe->mem.vram.dpa_base;
+-		     pos < xe->mem.vram.actual_physical_size + xe->mem.vram.dpa_base;
+-		     pos += SZ_1G, ofs += 8)
++		for (pos = xe->mem.vram.dpa_base; pos < vram_limit;
++		     pos += SZ_1G, ofs += 8) {
++			if (pos + SZ_1G >= vram_limit) {
++				u64 pt31_ofs = bo->size - XE_PAGE_SIZE;
++
++				entry = vm->pt_ops->pde_encode_bo(bo, pt31_ofs,
++								  pat_index);
++				xe_map_wr(xe, &bo->vmap, ofs, u64, entry);
++
++				flags = vm->pt_ops->pte_encode_addr(xe, 0,
++								    pat_index,
++								    level - 1,
++								    true, 0);
++
++				for (ofs = pt31_ofs; pos < vram_limit;
++				     pos += SZ_2M, ofs += 8)
++					xe_map_wr(xe, &bo->vmap, ofs, u64, pos | flags);
++				break;	/* Ensure pos == vram_limit assert correct */
++			}
++
+ 			xe_map_wr(xe, &bo->vmap, ofs, u64, pos | flags);
++		}
++
++		xe_assert(xe, pos == vram_limit);
+ 	}
+ 
+ 	/*
+ 	 * Example layout created above, with root level = 3:
+ 	 * [PT0...PT7]: kernel PT's for copy/clear; 64 or 4KiB PTE's
+ 	 * [PT8]: Kernel PT for VM_BIND, 4 KiB PTE's
+-	 * [PT9...PT28]: Userspace PT's for VM_BIND, 4 KiB PTE's
+-	 * [PT29 = PDE 0] [PT30 = PDE 1] [PT31 = PDE 2]
++	 * [PT9...PT27]: Userspace PT's for VM_BIND, 4 KiB PTE's
++	 * [PT28 = PDE 0] [PT29 = PDE 1] [PT30 = PDE 2] [PT31 = 2M vram identity map]
+ 	 *
+ 	 * This makes the lowest part of the VM point to the pagetables.
+ 	 * Hence the lowest 2M in the vm should point to itself, with a few writes
+diff --git a/drivers/gpu/drm/xe/xe_pcode.c b/drivers/gpu/drm/xe/xe_pcode.c
+index a5e7da8cf9441..9c4eefdf66428 100644
+--- a/drivers/gpu/drm/xe/xe_pcode.c
++++ b/drivers/gpu/drm/xe/xe_pcode.c
+@@ -10,6 +10,7 @@
+ 
+ #include <drm/drm_managed.h>
+ 
++#include "xe_assert.h"
+ #include "xe_device.h"
+ #include "xe_gt.h"
+ #include "xe_mmio.h"
+@@ -124,6 +125,8 @@ static int pcode_try_request(struct xe_gt *gt, u32 mbox,
+ {
+ 	int slept, wait = 10;
+ 
++	xe_gt_assert(gt, timeout_us > 0);
++
+ 	for (slept = 0; slept < timeout_us; slept += wait) {
+ 		if (locked)
+ 			*status = pcode_mailbox_rw(gt, mbox, &request, NULL, 1, true,
+@@ -169,6 +172,8 @@ int xe_pcode_request(struct xe_gt *gt, u32 mbox, u32 request,
+ 	u32 status;
+ 	int ret;
+ 
++	xe_gt_assert(gt, timeout_base_ms <= 3);
++
+ 	mutex_lock(&gt->pcode.lock);
+ 
+ 	ret = pcode_try_request(gt, mbox, request, reply_mask, reply, &status,
+@@ -188,7 +193,6 @@ int xe_pcode_request(struct xe_gt *gt, u32 mbox, u32 request,
+ 	 */
+ 	drm_err(&gt_to_xe(gt)->drm,
+ 		"PCODE timeout, retrying with preemption disabled\n");
+-	drm_WARN_ON_ONCE(&gt_to_xe(gt)->drm, timeout_base_ms > 1);
+ 	preempt_disable();
+ 	ret = pcode_try_request(gt, mbox, request, reply_mask, reply, &status,
+ 				true, 50 * 1000, true);
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index 8092312c0a877..6cad35e7f1828 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -153,8 +153,9 @@ static void read_tempreg_nb_f15(struct pci_dev *pdev, u32 *regval)
+ 
+ static void read_tempreg_nb_zen(struct pci_dev *pdev, u32 *regval)
+ {
+-	amd_smn_read(amd_pci_dev_to_node_id(pdev),
+-		     ZEN_REPORTED_TEMP_CTRL_BASE, regval);
++	if (amd_smn_read(amd_pci_dev_to_node_id(pdev),
++			 ZEN_REPORTED_TEMP_CTRL_BASE, regval))
++		*regval = 0;
+ }
+ 
+ static long get_raw_temp(struct k10temp_data *data)
+@@ -205,6 +206,7 @@ static int k10temp_read_temp(struct device *dev, u32 attr, int channel,
+ 			     long *val)
+ {
+ 	struct k10temp_data *data = dev_get_drvdata(dev);
++	int ret = -EOPNOTSUPP;
+ 	u32 regval;
+ 
+ 	switch (attr) {
+@@ -221,13 +223,17 @@ static int k10temp_read_temp(struct device *dev, u32 attr, int channel,
+ 				*val = 0;
+ 			break;
+ 		case 2 ... 13:		/* Tccd{1-12} */
+-			amd_smn_read(amd_pci_dev_to_node_id(data->pdev),
+-				     ZEN_CCD_TEMP(data->ccd_offset, channel - 2),
+-						  &regval);
++			ret = amd_smn_read(amd_pci_dev_to_node_id(data->pdev),
++					   ZEN_CCD_TEMP(data->ccd_offset, channel - 2),
++					   &regval);
++
++			if (ret)
++				return ret;
++
+ 			*val = (regval & ZEN_CCD_TEMP_MASK) * 125 - 49000;
+ 			break;
+ 		default:
+-			return -EOPNOTSUPP;
++			return ret;
+ 		}
+ 		break;
+ 	case hwmon_temp_max:
+@@ -243,7 +249,7 @@ static int k10temp_read_temp(struct device *dev, u32 attr, int channel,
+ 			- ((regval >> 24) & 0xf)) * 500 + 52000;
+ 		break;
+ 	default:
+-		return -EOPNOTSUPP;
++		return ret;
+ 	}
+ 	return 0;
+ }
+@@ -381,8 +387,20 @@ static void k10temp_get_ccd_support(struct pci_dev *pdev,
+ 	int i;
+ 
+ 	for (i = 0; i < limit; i++) {
+-		amd_smn_read(amd_pci_dev_to_node_id(pdev),
+-			     ZEN_CCD_TEMP(data->ccd_offset, i), &regval);
++		/*
++		 * Ignore inaccessible CCDs.
++		 *
++		 * Some systems will return a register value of 0, and the TEMP_VALID
++		 * bit check below will naturally fail.
++		 *
++		 * Other systems will return a PCI_ERROR_RESPONSE (0xFFFFFFFF) for
++		 * the register value. And this will incorrectly pass the TEMP_VALID
++		 * bit check.
++		 */
++		if (amd_smn_read(amd_pci_dev_to_node_id(pdev),
++				 ZEN_CCD_TEMP(data->ccd_offset, i), &regval))
++			continue;
++
+ 		if (regval & ZEN_CCD_TEMP_VALID)
+ 			data->show_temp |= BIT(TCCD_BIT(i));
+ 	}
+diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
+index 0c0a932c00f35..6505261e60686 100644
+--- a/drivers/hwspinlock/hwspinlock_core.c
++++ b/drivers/hwspinlock/hwspinlock_core.c
+@@ -305,6 +305,34 @@ void __hwspin_unlock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
+ }
+ EXPORT_SYMBOL_GPL(__hwspin_unlock);
+ 
++/**
++ * hwspin_lock_bust() - bust a specific hwspinlock
++ * @hwlock: a previously-acquired hwspinlock which we want to bust
++ * @id: identifier of the remote lock holder, if applicable
++ *
++ * This function will bust a hwspinlock that was previously acquired as
++ * long as the current owner of the lock matches the id given by the caller.
++ *
++ * Context: Process context.
++ *
++ * Returns: 0 on success, or -EINVAL if the hwspinlock does not exist, or
++ * the bust operation fails, and -EOPNOTSUPP if the bust operation is not
++ * defined for the hwspinlock.
++ */
++int hwspin_lock_bust(struct hwspinlock *hwlock, unsigned int id)
++{
++	if (WARN_ON(!hwlock))
++		return -EINVAL;
++
++	if (!hwlock->bank->ops->bust) {
++		pr_err("bust operation not defined\n");
++		return -EOPNOTSUPP;
++	}
++
++	return hwlock->bank->ops->bust(hwlock, id);
++}
++EXPORT_SYMBOL_GPL(hwspin_lock_bust);
++
+ /**
+  * of_hwspin_lock_simple_xlate - translate hwlock_spec to return a lock id
+  * @hwlock_spec: hwlock specifier as found in the device tree
+diff --git a/drivers/hwspinlock/hwspinlock_internal.h b/drivers/hwspinlock/hwspinlock_internal.h
+index 29892767bb7a0..f298fc0ee5adb 100644
+--- a/drivers/hwspinlock/hwspinlock_internal.h
++++ b/drivers/hwspinlock/hwspinlock_internal.h
+@@ -21,6 +21,8 @@ struct hwspinlock_device;
+  * @trylock: make a single attempt to take the lock. returns 0 on
+  *	     failure and true on success. may _not_ sleep.
+  * @unlock:  release the lock. always succeed. may _not_ sleep.
++ * @bust:    optional, platform-specific bust handler, called by hwspinlock
++ *	     core to bust a specific lock.
+  * @relax:   optional, platform-specific relax handler, called by hwspinlock
+  *	     core while spinning on a lock, between two successive
+  *	     invocations of @trylock. may _not_ sleep.
+@@ -28,6 +30,7 @@ struct hwspinlock_device;
+ struct hwspinlock_ops {
+ 	int (*trylock)(struct hwspinlock *lock);
+ 	void (*unlock)(struct hwspinlock *lock);
++	int (*bust)(struct hwspinlock *lock, unsigned int id);
+ 	void (*relax)(struct hwspinlock *lock);
+ };
+ 
+diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c
+index fa7cc051b4c49..2f185b3869495 100644
+--- a/drivers/iio/industrialio-core.c
++++ b/drivers/iio/industrialio-core.c
+@@ -758,9 +758,11 @@ static ssize_t iio_read_channel_info(struct device *dev,
+ 							INDIO_MAX_RAW_ELEMENTS,
+ 							vals, &val_len,
+ 							this_attr->address);
+-	else
++	else if (indio_dev->info->read_raw)
+ 		ret = indio_dev->info->read_raw(indio_dev, this_attr->c,
+ 				    &vals[0], &vals[1], this_attr->address);
++	else
++		return -EINVAL;
+ 
+ 	if (ret < 0)
+ 		return ret;
+@@ -842,6 +844,9 @@ static ssize_t iio_read_channel_info_avail(struct device *dev,
+ 	int length;
+ 	int type;
+ 
++	if (!indio_dev->info->read_avail)
++		return -EINVAL;
++
+ 	ret = indio_dev->info->read_avail(indio_dev, this_attr->c,
+ 					  &vals, &type, &length,
+ 					  this_attr->address);
+diff --git a/drivers/iio/industrialio-event.c b/drivers/iio/industrialio-event.c
+index 910c1f14abd55..a64f8fbac597e 100644
+--- a/drivers/iio/industrialio-event.c
++++ b/drivers/iio/industrialio-event.c
+@@ -285,6 +285,9 @@ static ssize_t iio_ev_state_store(struct device *dev,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	if (!indio_dev->info->write_event_config)
++		return -EINVAL;
++
+ 	ret = indio_dev->info->write_event_config(indio_dev,
+ 		this_attr->c, iio_ev_attr_type(this_attr),
+ 		iio_ev_attr_dir(this_attr), val);
+@@ -300,6 +303,9 @@ static ssize_t iio_ev_state_show(struct device *dev,
+ 	struct iio_dev_attr *this_attr = to_iio_dev_attr(attr);
+ 	int val;
+ 
++	if (!indio_dev->info->read_event_config)
++		return -EINVAL;
++
+ 	val = indio_dev->info->read_event_config(indio_dev,
+ 		this_attr->c, iio_ev_attr_type(this_attr),
+ 		iio_ev_attr_dir(this_attr));
+@@ -318,6 +324,9 @@ static ssize_t iio_ev_value_show(struct device *dev,
+ 	int val, val2, val_arr[2];
+ 	int ret;
+ 
++	if (!indio_dev->info->read_event_value)
++		return -EINVAL;
++
+ 	ret = indio_dev->info->read_event_value(indio_dev,
+ 		this_attr->c, iio_ev_attr_type(this_attr),
+ 		iio_ev_attr_dir(this_attr), iio_ev_attr_info(this_attr),
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index 485e6fc44a04c..39cf26d69d17a 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -543,6 +543,7 @@ EXPORT_SYMBOL_GPL(devm_iio_channel_get_all);
+ static int iio_channel_read(struct iio_channel *chan, int *val, int *val2,
+ 			    enum iio_chan_info_enum info)
+ {
++	const struct iio_info *iio_info = chan->indio_dev->info;
+ 	int unused;
+ 	int vals[INDIO_MAX_RAW_ELEMENTS];
+ 	int ret;
+@@ -554,15 +555,18 @@ static int iio_channel_read(struct iio_channel *chan, int *val, int *val2,
+ 	if (!iio_channel_has_info(chan->channel, info))
+ 		return -EINVAL;
+ 
+-	if (chan->indio_dev->info->read_raw_multi) {
+-		ret = chan->indio_dev->info->read_raw_multi(chan->indio_dev,
+-					chan->channel, INDIO_MAX_RAW_ELEMENTS,
+-					vals, &val_len, info);
++	if (iio_info->read_raw_multi) {
++		ret = iio_info->read_raw_multi(chan->indio_dev,
++					       chan->channel,
++					       INDIO_MAX_RAW_ELEMENTS,
++					       vals, &val_len, info);
+ 		*val = vals[0];
+ 		*val2 = vals[1];
++	} else if (iio_info->read_raw) {
++		ret = iio_info->read_raw(chan->indio_dev,
++					 chan->channel, val, val2, info);
+ 	} else {
+-		ret = chan->indio_dev->info->read_raw(chan->indio_dev,
+-					chan->channel, val, val2, info);
++		return -EINVAL;
+ 	}
+ 
+ 	return ret;
+@@ -750,11 +754,15 @@ static int iio_channel_read_avail(struct iio_channel *chan,
+ 				  const int **vals, int *type, int *length,
+ 				  enum iio_chan_info_enum info)
+ {
++	const struct iio_info *iio_info = chan->indio_dev->info;
++
+ 	if (!iio_channel_has_available(chan->channel, info))
+ 		return -EINVAL;
+ 
+-	return chan->indio_dev->info->read_avail(chan->indio_dev, chan->channel,
+-						 vals, type, length, info);
++	if (iio_info->read_avail)
++		return iio_info->read_avail(chan->indio_dev, chan->channel,
++					    vals, type, length, info);
++	return -EINVAL;
+ }
+ 
+ int iio_read_avail_channel_attribute(struct iio_channel *chan,
+@@ -917,8 +925,12 @@ EXPORT_SYMBOL_GPL(iio_get_channel_type);
+ static int iio_channel_write(struct iio_channel *chan, int val, int val2,
+ 			     enum iio_chan_info_enum info)
+ {
+-	return chan->indio_dev->info->write_raw(chan->indio_dev,
+-						chan->channel, val, val2, info);
++	const struct iio_info *iio_info = chan->indio_dev->info;
++
++	if (iio_info->write_raw)
++		return iio_info->write_raw(chan->indio_dev,
++					   chan->channel, val, val2, info);
++	return -EINVAL;
+ }
+ 
+ int iio_write_channel_attribute(struct iio_channel *chan, int val, int val2,
+diff --git a/drivers/infiniband/hw/efa/efa_com.c b/drivers/infiniband/hw/efa/efa_com.c
+index 16a24a05fc2a6..bafd210dd43e8 100644
+--- a/drivers/infiniband/hw/efa/efa_com.c
++++ b/drivers/infiniband/hw/efa/efa_com.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+ /*
+- * Copyright 2018-2021 Amazon.com, Inc. or its affiliates. All rights reserved.
++ * Copyright 2018-2024 Amazon.com, Inc. or its affiliates. All rights reserved.
+  */
+ 
+ #include "efa_com.h"
+@@ -406,8 +406,8 @@ static struct efa_comp_ctx *efa_com_submit_admin_cmd(struct efa_com_admin_queue
+ 	return comp_ctx;
+ }
+ 
+-static void efa_com_handle_single_admin_completion(struct efa_com_admin_queue *aq,
+-						   struct efa_admin_acq_entry *cqe)
++static int efa_com_handle_single_admin_completion(struct efa_com_admin_queue *aq,
++						  struct efa_admin_acq_entry *cqe)
+ {
+ 	struct efa_comp_ctx *comp_ctx;
+ 	u16 cmd_id;
+@@ -416,11 +416,11 @@ static void efa_com_handle_single_admin_completion(struct efa_com_admin_queue *a
+ 			 EFA_ADMIN_ACQ_COMMON_DESC_COMMAND_ID);
+ 
+ 	comp_ctx = efa_com_get_comp_ctx(aq, cmd_id, false);
+-	if (!comp_ctx) {
++	if (comp_ctx->status != EFA_CMD_SUBMITTED) {
+ 		ibdev_err(aq->efa_dev,
+-			  "comp_ctx is NULL. Changing the admin queue running state\n");
+-		clear_bit(EFA_AQ_STATE_RUNNING_BIT, &aq->state);
+-		return;
++			  "Received completion with unexpected command id[%d], sq producer: %d, sq consumer: %d, cq consumer: %d\n",
++			  cmd_id, aq->sq.pc, aq->sq.cc, aq->cq.cc);
++		return -EINVAL;
+ 	}
+ 
+ 	comp_ctx->status = EFA_CMD_COMPLETED;
+@@ -428,14 +428,17 @@ static void efa_com_handle_single_admin_completion(struct efa_com_admin_queue *a
+ 
+ 	if (!test_bit(EFA_AQ_STATE_POLLING_BIT, &aq->state))
+ 		complete(&comp_ctx->wait_event);
++
++	return 0;
+ }
+ 
+ static void efa_com_handle_admin_completion(struct efa_com_admin_queue *aq)
+ {
+ 	struct efa_admin_acq_entry *cqe;
+ 	u16 queue_size_mask;
+-	u16 comp_num = 0;
++	u16 comp_cmds = 0;
+ 	u8 phase;
++	int err;
+ 	u16 ci;
+ 
+ 	queue_size_mask = aq->depth - 1;
+@@ -453,10 +456,12 @@ static void efa_com_handle_admin_completion(struct efa_com_admin_queue *aq)
+ 		 * phase bit was validated
+ 		 */
+ 		dma_rmb();
+-		efa_com_handle_single_admin_completion(aq, cqe);
++		err = efa_com_handle_single_admin_completion(aq, cqe);
++		if (!err)
++			comp_cmds++;
+ 
++		aq->cq.cc++;
+ 		ci++;
+-		comp_num++;
+ 		if (ci == aq->depth) {
+ 			ci = 0;
+ 			phase = !phase;
+@@ -465,10 +470,9 @@ static void efa_com_handle_admin_completion(struct efa_com_admin_queue *aq)
+ 		cqe = &aq->cq.entries[ci];
+ 	}
+ 
+-	aq->cq.cc += comp_num;
+ 	aq->cq.phase = phase;
+-	aq->sq.cc += comp_num;
+-	atomic64_add(comp_num, &aq->stats.completed_cmd);
++	aq->sq.cc += comp_cmds;
++	atomic64_add(comp_cmds, &aq->stats.completed_cmd);
+ }
+ 
+ static int efa_com_comp_status_to_errno(u8 comp_status)
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index d435b6a6c295d..13c2c11cfdf64 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -687,16 +687,26 @@ static int uvc_parse_streaming(struct uvc_device *dev,
+ 		goto error;
+ 	}
+ 
+-	size = nformats * sizeof(*format) + nframes * sizeof(*frame)
++	/*
++	 * Allocate memory for the formats, the frames and the intervals,
++	 * plus any required padding to guarantee that everything has the
++	 * correct alignment.
++	 */
++	size = nformats * sizeof(*format);
++	size = ALIGN(size, __alignof__(*frame)) + nframes * sizeof(*frame);
++	size = ALIGN(size, __alignof__(*interval))
+ 	     + nintervals * sizeof(*interval);
++
+ 	format = kzalloc(size, GFP_KERNEL);
+-	if (format == NULL) {
++	if (!format) {
+ 		ret = -ENOMEM;
+ 		goto error;
+ 	}
+ 
+-	frame = (struct uvc_frame *)&format[nformats];
+-	interval = (u32 *)&frame[nframes];
++	frame = (void *)format + nformats * sizeof(*format);
++	frame = PTR_ALIGN(frame, __alignof__(*frame));
++	interval = (void *)frame + nframes * sizeof(*frame);
++	interval = PTR_ALIGN(interval, __alignof__(*interval));
+ 
+ 	streaming->formats = format;
+ 	streaming->nformats = 0;
+diff --git a/drivers/media/v4l2-core/v4l2-cci.c b/drivers/media/v4l2-core/v4l2-cci.c
+index ee3475bed37fa..1ff94affbaf3a 100644
+--- a/drivers/media/v4l2-core/v4l2-cci.c
++++ b/drivers/media/v4l2-core/v4l2-cci.c
+@@ -23,6 +23,15 @@ int cci_read(struct regmap *map, u32 reg, u64 *val, int *err)
+ 	u8 buf[8];
+ 	int ret;
+ 
++	/*
++	 * TODO: Fix smatch. Assign *val to 0 here in order to avoid
++	 * failing a smatch check on caller when the caller proceeds to
++	 * read *val without initialising it on caller's side. *val is set
++	 * to a valid value whenever this function returns 0 but smatch
++	 * can't figure that out currently.
++	 */
++	*val = 0;
++
+ 	if (err && *err)
+ 		return *err;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index cdc84a27a04ed..0138f77eaeed0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -2369,7 +2369,8 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
+ 	if (flush)
+ 		mlx5e_shampo_flush_skb(rq, cqe, match);
+ free_hd_entry:
+-	mlx5e_free_rx_shampo_hd_entry(rq, header_index);
++	if (likely(head_size))
++		mlx5e_free_rx_shampo_hd_entry(rq, header_index);
+ mpwrq_cqe_out:
+ 	if (likely(wi->consumed_strides < rq->mpwqe.num_strides))
+ 		return;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
+index 042ca03491243..d1db04baa1fa6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
+@@ -7,7 +7,7 @@
+ /* don't try to optimize STE allocation if the stack is too constaraining */
+ #define DR_RULE_MAX_STES_OPTIMIZED 0
+ #else
+-#define DR_RULE_MAX_STES_OPTIMIZED 5
++#define DR_RULE_MAX_STES_OPTIMIZED 2
+ #endif
+ #define DR_RULE_MAX_STE_CHAIN_OPTIMIZED (DR_RULE_MAX_STES_OPTIMIZED + DR_ACTION_MAX_STES)
+ 
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 1837a30ba08ac..f64d18949f68b 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -242,7 +242,7 @@ static int ionic_request_irq(struct ionic_lif *lif, struct ionic_qcq *qcq)
+ 		name = dev_name(dev);
+ 
+ 	snprintf(intr->name, sizeof(intr->name),
+-		 "%s-%s-%s", IONIC_DRV_NAME, name, q->name);
++		 "%.5s-%.16s-%.8s", IONIC_DRV_NAME, name, q->name);
+ 
+ 	return devm_request_irq(dev, intr->vector, ionic_isr,
+ 				0, intr->name, &qcq->napi);
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index cfda32047cffb..4823dbdf54656 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1432,6 +1432,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_QUIRK_SET_DTR(0x1546, 0x1312, 4)},	/* u-blox LARA-R6 01B */
+ 	{QMI_QUIRK_SET_DTR(0x1546, 0x1342, 4)},	/* u-blox LARA-L6 */
+ 	{QMI_QUIRK_SET_DTR(0x33f8, 0x0104, 4)}, /* Rolling RW101 RMNET */
++	{QMI_FIXED_INTF(0x2dee, 0x4d22, 5)},    /* MeiG Smart SRM825L */
+ 
+ 	/* 4. Gobi 1000 devices */
+ 	{QMI_GOBI1K_DEVICE(0x05c6, 0x9212)},	/* Acer Gobi Modem Device */
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index f32e017b62e9b..21bd0c127b05a 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -3172,6 +3172,9 @@ static int virtnet_send_rx_ctrl_coal_vq_cmd(struct virtnet_info *vi,
+ {
+ 	int err;
+ 
++	if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_VQ_NOTF_COAL))
++		return -EOPNOTSUPP;
++
+ 	err = virtnet_send_ctrl_coal_vq_cmd(vi, rxq2vq(queue),
+ 					    max_usecs, max_packets);
+ 	if (err)
+@@ -3189,6 +3192,9 @@ static int virtnet_send_tx_ctrl_coal_vq_cmd(struct virtnet_info *vi,
+ {
+ 	int err;
+ 
++	if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_VQ_NOTF_COAL))
++		return -EOPNOTSUPP;
++
+ 	err = virtnet_send_ctrl_coal_vq_cmd(vi, txq2vq(queue),
+ 					    max_usecs, max_packets);
+ 	if (err)
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index d4a243b64f6c3..aa160e6fe24f1 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -2293,7 +2293,7 @@ static int ath11k_qmi_load_file_target_mem(struct ath11k_base *ab,
+ 	struct qmi_txn txn;
+ 	const u8 *temp = data;
+ 	void __iomem *bdf_addr = NULL;
+-	int ret;
++	int ret = 0;
+ 	u32 remaining = len;
+ 
+ 	req = kzalloc(sizeof(*req), GFP_KERNEL);
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index 1d287ed25a949..3cdc4c51d6dfe 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -4058,7 +4058,7 @@ int ath12k_dp_rxdma_ring_sel_config_wcn7850(struct ath12k_base *ab)
+ 	struct ath12k_dp *dp = &ab->dp;
+ 	struct htt_rx_ring_tlv_filter tlv_filter = {0};
+ 	u32 ring_id;
+-	int ret;
++	int ret = 0;
+ 	u32 hal_rx_desc_sz = ab->hal.hal_desc_sz;
+ 	int i;
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/qmi.c b/drivers/net/wireless/ath/ath12k/qmi.c
+index 5484112859a66..6d1ebbba17d9b 100644
+--- a/drivers/net/wireless/ath/ath12k/qmi.c
++++ b/drivers/net/wireless/ath/ath12k/qmi.c
+@@ -2538,7 +2538,7 @@ static int ath12k_qmi_load_file_target_mem(struct ath12k_base *ab,
+ 	struct qmi_wlanfw_bdf_download_resp_msg_v01 resp = {};
+ 	struct qmi_txn txn;
+ 	const u8 *temp = data;
+-	int ret;
++	int ret = 0;
+ 	u32 remaining = len;
+ 
+ 	req = kzalloc(sizeof(*req), GFP_KERNEL);
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c b/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
+index 751a125a1566f..893b21fcaf87c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
+@@ -230,8 +230,7 @@ static ssize_t iwl_dbgfs_send_hcmd_write(struct iwl_fw_runtime *fwrt, char *buf,
+ 		.data = { NULL, },
+ 	};
+ 
+-	if (fwrt->ops && fwrt->ops->fw_running &&
+-	    !fwrt->ops->fw_running(fwrt->ops_ctx))
++	if (!iwl_trans_fw_running(fwrt->trans))
+ 		return -EIO;
+ 
+ 	if (count < header_size + 1 || count > 1024 * 4)
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
+index 9122f9a1260ae..d201440066ea9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
+@@ -19,7 +19,6 @@
+ struct iwl_fw_runtime_ops {
+ 	void (*dump_start)(void *ctx);
+ 	void (*dump_end)(void *ctx);
+-	bool (*fw_running)(void *ctx);
+ 	int (*send_hcmd)(void *ctx, struct iwl_host_cmd *host_cmd);
+ 	bool (*d3_debug_enable)(void *ctx);
+ };
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/link.c b/drivers/net/wireless/intel/iwlwifi/mvm/link.c
+index 92ac6cc40faa7..61b5648d3ab07 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/link.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/link.c
+@@ -504,17 +504,27 @@ iwl_mvm_get_puncturing_factor(const struct ieee80211_bss_conf *link_conf)
+ static unsigned int
+ iwl_mvm_get_chan_load(struct ieee80211_bss_conf *link_conf)
+ {
++	struct ieee80211_vif *vif = link_conf->vif;
+ 	struct iwl_mvm_vif_link_info *mvm_link =
+ 		iwl_mvm_vif_from_mac80211(link_conf->vif)->link[link_conf->link_id];
+ 	const struct element *bss_load_elem;
+ 	const struct ieee80211_bss_load_elem *bss_load;
+ 	enum nl80211_band band = link_conf->chanreq.oper.chan->band;
++	const struct cfg80211_bss_ies *ies;
+ 	unsigned int chan_load;
+ 	u32 chan_load_by_us;
+ 
+ 	rcu_read_lock();
+-	bss_load_elem = ieee80211_bss_get_elem(link_conf->bss,
+-					       WLAN_EID_QBSS_LOAD);
++	if (ieee80211_vif_link_active(vif, link_conf->link_id))
++		ies = rcu_dereference(link_conf->bss->beacon_ies);
++	else
++		ies = rcu_dereference(link_conf->bss->ies);
++
++	if (ies)
++		bss_load_elem = cfg80211_find_elem(WLAN_EID_QBSS_LOAD,
++						   ies->data, ies->len);
++	else
++		bss_load_elem = NULL;
+ 
+ 	/* If there isn't BSS Load element, take the defaults */
+ 	if (!bss_load_elem ||
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 1380ae5155f35..498afbe4ee6be 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -770,11 +770,6 @@ static void iwl_mvm_fwrt_dump_end(void *ctx)
+ 	mutex_unlock(&mvm->mutex);
+ }
+ 
+-static bool iwl_mvm_fwrt_fw_running(void *ctx)
+-{
+-	return iwl_mvm_firmware_running(ctx);
+-}
+-
+ static int iwl_mvm_fwrt_send_hcmd(void *ctx, struct iwl_host_cmd *host_cmd)
+ {
+ 	struct iwl_mvm *mvm = (struct iwl_mvm *)ctx;
+@@ -795,7 +790,6 @@ static bool iwl_mvm_d3_debug_enable(void *ctx)
+ static const struct iwl_fw_runtime_ops iwl_mvm_fwrt_ops = {
+ 	.dump_start = iwl_mvm_fwrt_dump_start,
+ 	.dump_end = iwl_mvm_fwrt_dump_end,
+-	.fw_running = iwl_mvm_fwrt_fw_running,
+ 	.send_hcmd = iwl_mvm_fwrt_send_hcmd,
+ 	.d3_debug_enable = iwl_mvm_d3_debug_enable,
+ };
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tests/links.c b/drivers/net/wireless/intel/iwlwifi/mvm/tests/links.c
+index f49e3c98b1ba4..991dc875a7ead 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tests/links.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tests/links.c
+@@ -208,6 +208,7 @@ static void setup_link_conf(struct kunit *test)
+ 	bss_load->channel_util = params->channel_util;
+ 
+ 	rcu_assign_pointer(bss.ies, ies);
++	rcu_assign_pointer(bss.beacon_ies, ies);
+ }
+ 
+ static void test_link_grading(struct kunit *test)
+diff --git a/drivers/net/wireless/realtek/rtw89/ser.c b/drivers/net/wireless/realtek/rtw89/ser.c
+index 99896d85d2f81..5fc2faa9ba5a7 100644
+--- a/drivers/net/wireless/realtek/rtw89/ser.c
++++ b/drivers/net/wireless/realtek/rtw89/ser.c
+@@ -308,9 +308,13 @@ static void ser_reset_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ 
+ static void ser_sta_deinit_cam_iter(void *data, struct ieee80211_sta *sta)
+ {
+-	struct rtw89_vif *rtwvif = (struct rtw89_vif *)data;
+-	struct rtw89_dev *rtwdev = rtwvif->rtwdev;
++	struct rtw89_vif *target_rtwvif = (struct rtw89_vif *)data;
+ 	struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++	struct rtw89_vif *rtwvif = rtwsta->rtwvif;
++	struct rtw89_dev *rtwdev = rtwvif->rtwdev;
++
++	if (rtwvif != target_rtwvif)
++		return;
+ 
+ 	if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls)
+ 		rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta->addr_cam);
+diff --git a/drivers/pci/controller/dwc/pcie-al.c b/drivers/pci/controller/dwc/pcie-al.c
+index 6dfdda59f3283..643115f74092d 100644
+--- a/drivers/pci/controller/dwc/pcie-al.c
++++ b/drivers/pci/controller/dwc/pcie-al.c
+@@ -242,18 +242,24 @@ static struct pci_ops al_child_pci_ops = {
+ 	.write = pci_generic_config_write,
+ };
+ 
+-static void al_pcie_config_prepare(struct al_pcie *pcie)
++static int al_pcie_config_prepare(struct al_pcie *pcie)
+ {
+ 	struct al_pcie_target_bus_cfg *target_bus_cfg;
+ 	struct dw_pcie_rp *pp = &pcie->pci->pp;
+ 	unsigned int ecam_bus_mask;
++	struct resource_entry *ft;
+ 	u32 cfg_control_offset;
++	struct resource *bus;
+ 	u8 subordinate_bus;
+ 	u8 secondary_bus;
+ 	u32 cfg_control;
+ 	u32 reg;
+-	struct resource *bus = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS)->res;
+ 
++	ft = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS);
++	if (!ft)
++		return -ENODEV;
++
++	bus = ft->res;
+ 	target_bus_cfg = &pcie->target_bus_cfg;
+ 
+ 	ecam_bus_mask = (pcie->ecam_size >> PCIE_ECAM_BUS_SHIFT) - 1;
+@@ -287,6 +293,8 @@ static void al_pcie_config_prepare(struct al_pcie *pcie)
+ 	       FIELD_PREP(CFG_CONTROL_SEC_BUS_MASK, secondary_bus);
+ 
+ 	al_pcie_controller_writel(pcie, cfg_control_offset, reg);
++
++	return 0;
+ }
+ 
+ static int al_pcie_host_init(struct dw_pcie_rp *pp)
+@@ -305,7 +313,9 @@ static int al_pcie_host_init(struct dw_pcie_rp *pp)
+ 	if (rc)
+ 		return rc;
+ 
+-	al_pcie_config_prepare(pcie);
++	rc = al_pcie_config_prepare(pcie);
++	if (rc)
++		return rc;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 4438f3b4b5ef9..60f866f1e6d78 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -1670,6 +1670,7 @@ static int pinctrl_pins_show(struct seq_file *s, void *what)
+ 		seq_printf(s, "pin %d (%s) ", pin, desc->name);
+ 
+ #ifdef CONFIG_GPIOLIB
++		gdev = NULL;
+ 		gpio_num = -1;
+ 		list_for_each_entry(range, &pctldev->gpio_ranges, node) {
+ 			if ((pin >= range->pin_base) &&
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index 60be78da9f529..389602e4d7ab3 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -2583,8 +2583,10 @@ static int rzg2l_pinctrl_suspend_noirq(struct device *dev)
+ 	rzg2l_pinctrl_pm_setup_dedicated_regs(pctrl, true);
+ 
+ 	for (u8 i = 0; i < 2; i++) {
+-		cache->sd_ch[i] = readb(pctrl->base + SD_CH(regs->sd_ch, i));
+-		cache->eth_poc[i] = readb(pctrl->base + ETH_POC(regs->eth_poc, i));
++		if (regs->sd_ch)
++			cache->sd_ch[i] = readb(pctrl->base + SD_CH(regs->sd_ch, i));
++		if (regs->eth_poc)
++			cache->eth_poc[i] = readb(pctrl->base + ETH_POC(regs->eth_poc, i));
+ 	}
+ 
+ 	cache->qspi = readb(pctrl->base + QSPI);
+@@ -2615,8 +2617,10 @@ static int rzg2l_pinctrl_resume_noirq(struct device *dev)
+ 	writeb(cache->qspi, pctrl->base + QSPI);
+ 	writeb(cache->eth_mode, pctrl->base + ETH_MODE);
+ 	for (u8 i = 0; i < 2; i++) {
+-		writeb(cache->sd_ch[i], pctrl->base + SD_CH(regs->sd_ch, i));
+-		writeb(cache->eth_poc[i], pctrl->base + ETH_POC(regs->eth_poc, i));
++		if (regs->sd_ch)
++			writeb(cache->sd_ch[i], pctrl->base + SD_CH(regs->sd_ch, i));
++		if (regs->eth_poc)
++			writeb(cache->eth_poc[i], pctrl->base + ETH_POC(regs->eth_poc, i));
+ 	}
+ 
+ 	rzg2l_pinctrl_pm_setup_pfc(pctrl);
+diff --git a/drivers/platform/chrome/cros_ec_lpc_mec.c b/drivers/platform/chrome/cros_ec_lpc_mec.c
+index 0d9c79b270ce2..63b6b261b8e58 100644
+--- a/drivers/platform/chrome/cros_ec_lpc_mec.c
++++ b/drivers/platform/chrome/cros_ec_lpc_mec.c
+@@ -10,13 +10,65 @@
+ 
+ #include "cros_ec_lpc_mec.h"
+ 
++#define ACPI_LOCK_DELAY_MS 500
++
+ /*
+  * This mutex must be held while accessing the EMI unit. We can't rely on the
+  * EC mutex because memmap data may be accessed without it being held.
+  */
+ static DEFINE_MUTEX(io_mutex);
++/*
++ * An alternative mutex to be used when the ACPI AML code may also
++ * access memmap data.  When set, this mutex is used in preference to
++ * io_mutex.
++ */
++static acpi_handle aml_mutex;
++
+ static u16 mec_emi_base, mec_emi_end;
+ 
++/**
++ * cros_ec_lpc_mec_lock() - Acquire mutex for EMI
++ *
++ * @return: Negative error code, or zero for success
++ */
++static int cros_ec_lpc_mec_lock(void)
++{
++	bool success;
++
++	if (!aml_mutex) {
++		mutex_lock(&io_mutex);
++		return 0;
++	}
++
++	success = ACPI_SUCCESS(acpi_acquire_mutex(aml_mutex,
++						  NULL, ACPI_LOCK_DELAY_MS));
++	if (!success)
++		return -EBUSY;
++
++	return 0;
++}
++
++/**
++ * cros_ec_lpc_mec_unlock() - Release mutex for EMI
++ *
++ * @return: Negative error code, or zero for success
++ */
++static int cros_ec_lpc_mec_unlock(void)
++{
++	bool success;
++
++	if (!aml_mutex) {
++		mutex_unlock(&io_mutex);
++		return 0;
++	}
++
++	success = ACPI_SUCCESS(acpi_release_mutex(aml_mutex, NULL));
++	if (!success)
++		return -EBUSY;
++
++	return 0;
++}
++
+ /**
+  * cros_ec_lpc_mec_emi_write_address() - Initialize EMI at a given address.
+  *
+@@ -77,6 +129,7 @@ u8 cros_ec_lpc_io_bytes_mec(enum cros_ec_lpc_mec_io_type io_type,
+ 	int io_addr;
+ 	u8 sum = 0;
+ 	enum cros_ec_lpc_mec_emi_access_mode access, new_access;
++	int ret;
+ 
+ 	/* Return checksum of 0 if window is not initialized */
+ 	WARN_ON(mec_emi_base == 0 || mec_emi_end == 0);
+@@ -92,7 +145,9 @@ u8 cros_ec_lpc_io_bytes_mec(enum cros_ec_lpc_mec_io_type io_type,
+ 	else
+ 		access = ACCESS_TYPE_LONG_AUTO_INCREMENT;
+ 
+-	mutex_lock(&io_mutex);
++	ret = cros_ec_lpc_mec_lock();
++	if (ret)
++		return ret;
+ 
+ 	/* Initialize I/O at desired address */
+ 	cros_ec_lpc_mec_emi_write_address(offset, access);
+@@ -134,7 +189,9 @@ u8 cros_ec_lpc_io_bytes_mec(enum cros_ec_lpc_mec_io_type io_type,
+ 	}
+ 
+ done:
+-	mutex_unlock(&io_mutex);
++	ret = cros_ec_lpc_mec_unlock();
++	if (ret)
++		return ret;
+ 
+ 	return sum;
+ }
+@@ -146,3 +203,18 @@ void cros_ec_lpc_mec_init(unsigned int base, unsigned int end)
+ 	mec_emi_end = end;
+ }
+ EXPORT_SYMBOL(cros_ec_lpc_mec_init);
++
++int cros_ec_lpc_mec_acpi_mutex(struct acpi_device *adev, const char *pathname)
++{
++	int status;
++
++	if (!adev)
++		return -ENOENT;
++
++	status = acpi_get_handle(adev->handle, pathname, &aml_mutex);
++	if (ACPI_FAILURE(status))
++		return -ENOENT;
++
++	return 0;
++}
++EXPORT_SYMBOL(cros_ec_lpc_mec_acpi_mutex);
+diff --git a/drivers/platform/chrome/cros_ec_lpc_mec.h b/drivers/platform/chrome/cros_ec_lpc_mec.h
+index 9d0521b23e8ae..3f3af37e58a50 100644
+--- a/drivers/platform/chrome/cros_ec_lpc_mec.h
++++ b/drivers/platform/chrome/cros_ec_lpc_mec.h
+@@ -8,6 +8,8 @@
+ #ifndef __CROS_EC_LPC_MEC_H
+ #define __CROS_EC_LPC_MEC_H
+ 
++#include <linux/acpi.h>
++
+ enum cros_ec_lpc_mec_emi_access_mode {
+ 	/* 8-bit access */
+ 	ACCESS_TYPE_BYTE = 0x0,
+@@ -45,6 +47,15 @@ enum cros_ec_lpc_mec_io_type {
+  */
+ void cros_ec_lpc_mec_init(unsigned int base, unsigned int end);
+ 
++/**
++ * cros_ec_lpc_mec_acpi_mutex() - Find and set ACPI mutex for MEC
++ *
++ * @adev:     Parent ACPI device
++ * @pathname: Name of AML mutex
++ * @return:   Negative error code, or zero for success
++ */
++int cros_ec_lpc_mec_acpi_mutex(struct acpi_device *adev, const char *pathname);
++
+ /**
+  * cros_ec_lpc_mec_in_range() - Determine if addresses are in MEC EMI range.
+  *
+diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c
+index 2d6e2558863c5..8f1f719befa3e 100644
+--- a/drivers/platform/x86/amd/pmf/core.c
++++ b/drivers/platform/x86/amd/pmf/core.c
+@@ -41,6 +41,7 @@
+ #define AMD_CPU_ID_RMB			0x14b5
+ #define AMD_CPU_ID_PS			0x14e8
+ #define PCI_DEVICE_ID_AMD_1AH_M20H_ROOT	0x1507
++#define PCI_DEVICE_ID_AMD_1AH_M60H_ROOT	0x1122
+ 
+ #define PMF_MSG_DELAY_MIN_US		50
+ #define RESPONSE_REGISTER_LOOP_MAX	20000
+@@ -249,6 +250,7 @@ static const struct pci_device_id pmf_pci_ids[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, AMD_CPU_ID_RMB) },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, AMD_CPU_ID_PS) },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M20H_ROOT) },
++	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M60H_ROOT) },
+ 	{ }
+ };
+ 
+@@ -382,6 +384,7 @@ static const struct acpi_device_id amd_pmf_acpi_ids[] = {
+ 	{"AMDI0102", 0},
+ 	{"AMDI0103", 0},
+ 	{"AMDI0105", 0},
++	{"AMDI0107", 0},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(acpi, amd_pmf_acpi_ids);
+diff --git a/drivers/platform/x86/amd/pmf/pmf-quirks.c b/drivers/platform/x86/amd/pmf/pmf-quirks.c
+index 0b2eb0ae85feb..460444cda1b29 100644
+--- a/drivers/platform/x86/amd/pmf/pmf-quirks.c
++++ b/drivers/platform/x86/amd/pmf/pmf-quirks.c
+@@ -29,6 +29,14 @@ static const struct dmi_system_id fwbug_list[] = {
+ 		},
+ 		.driver_data = &quirk_no_sps_bug,
+ 	},
++	{
++		.ident = "ROG Ally X",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "RC72LA"),
++		},
++		.driver_data = &quirk_no_sps_bug,
++	},
+ 	{}
+ };
+ 
+@@ -48,4 +56,3 @@ void amd_pmf_quirks_init(struct amd_pmf_dev *dev)
+ 			dmi_id->ident);
+ 	}
+ }
+-
+diff --git a/drivers/ras/amd/atl/dehash.c b/drivers/ras/amd/atl/dehash.c
+index 4ea46262c4f58..d4ee7ecabaeee 100644
+--- a/drivers/ras/amd/atl/dehash.c
++++ b/drivers/ras/amd/atl/dehash.c
+@@ -12,41 +12,10 @@
+ 
+ #include "internal.h"
+ 
+-/*
+- * Verify the interleave bits are correct in the different interleaving
+- * settings.
+- *
+- * If @num_intlv_dies and/or @num_intlv_sockets are 1, it means the
+- * respective interleaving is disabled.
+- */
+-static inline bool map_bits_valid(struct addr_ctx *ctx, u8 bit1, u8 bit2,
+-				  u8 num_intlv_dies, u8 num_intlv_sockets)
+-{
+-	if (!(ctx->map.intlv_bit_pos == bit1 || ctx->map.intlv_bit_pos == bit2)) {
+-		pr_debug("Invalid interleave bit: %u", ctx->map.intlv_bit_pos);
+-		return false;
+-	}
+-
+-	if (ctx->map.num_intlv_dies > num_intlv_dies) {
+-		pr_debug("Invalid number of interleave dies: %u", ctx->map.num_intlv_dies);
+-		return false;
+-	}
+-
+-	if (ctx->map.num_intlv_sockets > num_intlv_sockets) {
+-		pr_debug("Invalid number of interleave sockets: %u", ctx->map.num_intlv_sockets);
+-		return false;
+-	}
+-
+-	return true;
+-}
+-
+ static int df2_dehash_addr(struct addr_ctx *ctx)
+ {
+ 	u8 hashed_bit, intlv_bit, intlv_bit_pos;
+ 
+-	if (!map_bits_valid(ctx, 8, 9, 1, 1))
+-		return -EINVAL;
+-
+ 	intlv_bit_pos = ctx->map.intlv_bit_pos;
+ 	intlv_bit = !!(BIT_ULL(intlv_bit_pos) & ctx->ret_addr);
+ 
+@@ -67,9 +36,6 @@ static int df3_dehash_addr(struct addr_ctx *ctx)
+ 	bool hash_ctl_64k, hash_ctl_2M, hash_ctl_1G;
+ 	u8 hashed_bit, intlv_bit, intlv_bit_pos;
+ 
+-	if (!map_bits_valid(ctx, 8, 9, 1, 1))
+-		return -EINVAL;
+-
+ 	hash_ctl_64k = FIELD_GET(DF3_HASH_CTL_64K, ctx->map.ctl);
+ 	hash_ctl_2M  = FIELD_GET(DF3_HASH_CTL_2M, ctx->map.ctl);
+ 	hash_ctl_1G  = FIELD_GET(DF3_HASH_CTL_1G, ctx->map.ctl);
+@@ -171,9 +137,6 @@ static int df4_dehash_addr(struct addr_ctx *ctx)
+ 	bool hash_ctl_64k, hash_ctl_2M, hash_ctl_1G;
+ 	u8 hashed_bit, intlv_bit;
+ 
+-	if (!map_bits_valid(ctx, 8, 8, 1, 2))
+-		return -EINVAL;
+-
+ 	hash_ctl_64k = FIELD_GET(DF4_HASH_CTL_64K, ctx->map.ctl);
+ 	hash_ctl_2M  = FIELD_GET(DF4_HASH_CTL_2M, ctx->map.ctl);
+ 	hash_ctl_1G  = FIELD_GET(DF4_HASH_CTL_1G, ctx->map.ctl);
+@@ -247,9 +210,6 @@ static int df4p5_dehash_addr(struct addr_ctx *ctx)
+ 	u8 hashed_bit, intlv_bit;
+ 	u64 rehash_vector;
+ 
+-	if (!map_bits_valid(ctx, 8, 8, 1, 2))
+-		return -EINVAL;
+-
+ 	hash_ctl_64k = FIELD_GET(DF4_HASH_CTL_64K, ctx->map.ctl);
+ 	hash_ctl_2M  = FIELD_GET(DF4_HASH_CTL_2M, ctx->map.ctl);
+ 	hash_ctl_1G  = FIELD_GET(DF4_HASH_CTL_1G, ctx->map.ctl);
+@@ -360,9 +320,6 @@ static int mi300_dehash_addr(struct addr_ctx *ctx)
+ 	bool hashed_bit, intlv_bit, test_bit;
+ 	u8 num_intlv_bits, base_bit, i;
+ 
+-	if (!map_bits_valid(ctx, 8, 8, 4, 1))
+-		return -EINVAL;
+-
+ 	hash_ctl_4k  = FIELD_GET(DF4p5_HASH_CTL_4K, ctx->map.ctl);
+ 	hash_ctl_64k = FIELD_GET(DF4_HASH_CTL_64K,  ctx->map.ctl);
+ 	hash_ctl_2M  = FIELD_GET(DF4_HASH_CTL_2M,   ctx->map.ctl);
+diff --git a/drivers/ras/amd/atl/map.c b/drivers/ras/amd/atl/map.c
+index 8b908e8d7495f..04419923f0884 100644
+--- a/drivers/ras/amd/atl/map.c
++++ b/drivers/ras/amd/atl/map.c
+@@ -642,6 +642,79 @@ static int get_global_map_data(struct addr_ctx *ctx)
+ 	return 0;
+ }
+ 
++/*
++ * Verify the interleave bits are correct in the different interleaving
++ * settings.
++ *
++ * If @num_intlv_dies and/or @num_intlv_sockets are 1, it means the
++ * respective interleaving is disabled.
++ */
++static inline bool map_bits_valid(struct addr_ctx *ctx, u8 bit1, u8 bit2,
++				  u8 num_intlv_dies, u8 num_intlv_sockets)
++{
++	if (!(ctx->map.intlv_bit_pos == bit1 || ctx->map.intlv_bit_pos == bit2)) {
++		pr_debug("Invalid interleave bit: %u", ctx->map.intlv_bit_pos);
++		return false;
++	}
++
++	if (ctx->map.num_intlv_dies > num_intlv_dies) {
++		pr_debug("Invalid number of interleave dies: %u", ctx->map.num_intlv_dies);
++		return false;
++	}
++
++	if (ctx->map.num_intlv_sockets > num_intlv_sockets) {
++		pr_debug("Invalid number of interleave sockets: %u", ctx->map.num_intlv_sockets);
++		return false;
++	}
++
++	return true;
++}
++
++static int validate_address_map(struct addr_ctx *ctx)
++{
++	switch (ctx->map.intlv_mode) {
++	case DF2_2CHAN_HASH:
++	case DF3_COD4_2CHAN_HASH:
++	case DF3_COD2_4CHAN_HASH:
++	case DF3_COD1_8CHAN_HASH:
++		if (!map_bits_valid(ctx, 8, 9, 1, 1))
++			goto err;
++		break;
++
++	case DF4_NPS4_2CHAN_HASH:
++	case DF4_NPS2_4CHAN_HASH:
++	case DF4_NPS1_8CHAN_HASH:
++	case DF4p5_NPS4_2CHAN_1K_HASH:
++	case DF4p5_NPS4_2CHAN_2K_HASH:
++	case DF4p5_NPS2_4CHAN_1K_HASH:
++	case DF4p5_NPS2_4CHAN_2K_HASH:
++	case DF4p5_NPS1_8CHAN_1K_HASH:
++	case DF4p5_NPS1_8CHAN_2K_HASH:
++	case DF4p5_NPS1_16CHAN_1K_HASH:
++	case DF4p5_NPS1_16CHAN_2K_HASH:
++		if (!map_bits_valid(ctx, 8, 8, 1, 2))
++			goto err;
++		break;
++
++	case MI3_HASH_8CHAN:
++	case MI3_HASH_16CHAN:
++	case MI3_HASH_32CHAN:
++		if (!map_bits_valid(ctx, 8, 8, 4, 1))
++			goto err;
++		break;
++
++	/* Nothing to do for modes that don't need special validation checks. */
++	default:
++		break;
++	}
++
++	return 0;
++
++err:
++	atl_debug(ctx, "Inconsistent address map");
++	return -EINVAL;
++}
++
+ static void dump_address_map(struct dram_addr_map *map)
+ {
+ 	u8 i;
+@@ -678,5 +751,9 @@ int get_address_map(struct addr_ctx *ctx)
+ 
+ 	dump_address_map(&ctx->map);
+ 
++	ret = validate_address_map(ctx);
++	if (ret)
++		return ret;
++
+ 	return ret;
+ }
+diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c
+index abf7b371b8604..e744c07507eed 100644
+--- a/drivers/remoteproc/mtk_scp.c
++++ b/drivers/remoteproc/mtk_scp.c
+@@ -117,8 +117,8 @@ static void scp_ipi_handler(struct mtk_scp *scp)
+ 		return;
+ 	}
+ 
+-	memset(scp->share_buf, 0, scp_sizes->ipi_share_buffer_size);
+ 	memcpy_fromio(scp->share_buf, &rcv_obj->share_buf, len);
++	memset(&scp->share_buf[len], 0, scp_sizes->ipi_share_buffer_size - len);
+ 	handler(scp->share_buf, len, ipi_desc[id].priv);
+ 	scp_ipi_unlock(scp, id);
+ 
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index 54d8005d40a34..8458bcfe9e19e 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -52,6 +52,7 @@ struct adsp_data {
+ 	const char *ssr_name;
+ 	const char *sysmon_name;
+ 	int ssctl_id;
++	unsigned int smem_host_id;
+ 
+ 	int region_assign_idx;
+ 	int region_assign_count;
+@@ -81,6 +82,7 @@ struct qcom_adsp {
+ 	int lite_pas_id;
+ 	unsigned int minidump_id;
+ 	int crash_reason_smem;
++	unsigned int smem_host_id;
+ 	bool decrypt_shutdown;
+ 	const char *info_name;
+ 
+@@ -399,6 +401,9 @@ static int adsp_stop(struct rproc *rproc)
+ 	if (handover)
+ 		qcom_pas_handover(&adsp->q6v5);
+ 
++	if (adsp->smem_host_id)
++		ret = qcom_smem_bust_hwspin_lock_by_host(adsp->smem_host_id);
++
+ 	return ret;
+ }
+ 
+@@ -727,6 +732,7 @@ static int adsp_probe(struct platform_device *pdev)
+ 	adsp->pas_id = desc->pas_id;
+ 	adsp->lite_pas_id = desc->lite_pas_id;
+ 	adsp->info_name = desc->sysmon_name;
++	adsp->smem_host_id = desc->smem_host_id;
+ 	adsp->decrypt_shutdown = desc->decrypt_shutdown;
+ 	adsp->region_assign_idx = desc->region_assign_idx;
+ 	adsp->region_assign_count = min_t(int, MAX_ASSIGN_COUNT, desc->region_assign_count);
+@@ -1196,6 +1202,7 @@ static const struct adsp_data sm8550_adsp_resource = {
+ 	.ssr_name = "lpass",
+ 	.sysmon_name = "adsp",
+ 	.ssctl_id = 0x14,
++	.smem_host_id = 2,
+ };
+ 
+ static const struct adsp_data sm8550_cdsp_resource = {
+@@ -1216,6 +1223,7 @@ static const struct adsp_data sm8550_cdsp_resource = {
+ 	.ssr_name = "cdsp",
+ 	.sysmon_name = "cdsp",
+ 	.ssctl_id = 0x17,
++	.smem_host_id = 5,
+ };
+ 
+ static const struct adsp_data sm8550_mpss_resource = {
+@@ -1236,6 +1244,7 @@ static const struct adsp_data sm8550_mpss_resource = {
+ 	.ssr_name = "mpss",
+ 	.sysmon_name = "modem",
+ 	.ssctl_id = 0x12,
++	.smem_host_id = 1,
+ 	.region_assign_idx = 2,
+ 	.region_assign_count = 1,
+ 	.region_assign_vmid = QCOM_SCM_VMID_MSS_MSA,
+@@ -1275,6 +1284,7 @@ static const struct adsp_data sm8650_cdsp_resource = {
+ 	.ssr_name = "cdsp",
+ 	.sysmon_name = "cdsp",
+ 	.ssctl_id = 0x17,
++	.smem_host_id = 5,
+ 	.region_assign_idx = 2,
+ 	.region_assign_count = 1,
+ 	.region_assign_shared = true,
+@@ -1299,6 +1309,7 @@ static const struct adsp_data sm8650_mpss_resource = {
+ 	.ssr_name = "mpss",
+ 	.sysmon_name = "modem",
+ 	.ssctl_id = 0x12,
++	.smem_host_id = 1,
+ 	.region_assign_idx = 2,
+ 	.region_assign_count = 3,
+ 	.region_assign_vmid = QCOM_SCM_VMID_MSS_MSA,
+diff --git a/drivers/soc/qcom/smem.c b/drivers/soc/qcom/smem.c
+index 7191fa0c087f2..50039e983ebaa 100644
+--- a/drivers/soc/qcom/smem.c
++++ b/drivers/soc/qcom/smem.c
+@@ -359,6 +359,32 @@ static struct qcom_smem *__smem;
+ /* Timeout (ms) for the trylock of remote spinlocks */
+ #define HWSPINLOCK_TIMEOUT	1000
+ 
++/* The qcom hwspinlock id is always plus one from the smem host id */
++#define SMEM_HOST_ID_TO_HWSPINLOCK_ID(__x) ((__x) + 1)
++
++/**
++ * qcom_smem_bust_hwspin_lock_by_host() - bust the smem hwspinlock for a host
++ * @host:	remote processor id
++ *
++ * Busts the hwspin_lock for the given smem host id. This helper is intended
++ * for remoteproc drivers that manage remoteprocs with an equivalent smem
++ * driver instance in the remote firmware. Drivers can force a release of the
++ * smem hwspin_lock if the rproc unexpectedly goes into a bad state.
++ *
++ * Context: Process context.
++ *
++ * Returns: 0 on success, otherwise negative errno.
++ */
++int qcom_smem_bust_hwspin_lock_by_host(unsigned int host)
++{
++	/* This function is for remote procs, so ignore SMEM_HOST_APPS */
++	if (host == SMEM_HOST_APPS || host >= SMEM_HOST_COUNT)
++		return -EINVAL;
++
++	return hwspin_lock_bust(__smem->hwlock, SMEM_HOST_ID_TO_HWSPINLOCK_ID(host));
++}
++EXPORT_SYMBOL_GPL(qcom_smem_bust_hwspin_lock_by_host);
++
+ /**
+  * qcom_smem_is_available() - Check if SMEM is available
+  *
+diff --git a/drivers/spi/spi-hisi-kunpeng.c b/drivers/spi/spi-hisi-kunpeng.c
+index 77e9738e42f60..6910b4d4c427b 100644
+--- a/drivers/spi/spi-hisi-kunpeng.c
++++ b/drivers/spi/spi-hisi-kunpeng.c
+@@ -495,6 +495,7 @@ static int hisi_spi_probe(struct platform_device *pdev)
+ 	host->transfer_one = hisi_spi_transfer_one;
+ 	host->handle_err = hisi_spi_handle_err;
+ 	host->dev.fwnode = dev->fwnode;
++	host->min_speed_hz = DIV_ROUND_UP(host->max_speed_hz, CLK_DIV_MAX);
+ 
+ 	hisi_spi_hw_init(hs);
+ 
+diff --git a/drivers/thermal/thermal_sysfs.c b/drivers/thermal/thermal_sysfs.c
+index 88211ccdfbd62..5be6113e7c80f 100644
+--- a/drivers/thermal/thermal_sysfs.c
++++ b/drivers/thermal/thermal_sysfs.c
+@@ -150,7 +150,7 @@ trip_point_temp_show(struct device *dev, struct device_attribute *attr,
+ 	if (sscanf(attr->attr.name, "trip_point_%d_temp", &trip_id) != 1)
+ 		return -EINVAL;
+ 
+-	return sprintf(buf, "%d\n", tz->trips[trip_id].trip.temperature);
++	return sprintf(buf, "%d\n", READ_ONCE(tz->trips[trip_id].trip.temperature));
+ }
+ 
+ static ssize_t
+@@ -174,7 +174,7 @@ trip_point_hyst_store(struct device *dev, struct device_attribute *attr,
+ 	trip = &tz->trips[trip_id].trip;
+ 
+ 	if (hyst != trip->hysteresis) {
+-		trip->hysteresis = hyst;
++		WRITE_ONCE(trip->hysteresis, hyst);
+ 
+ 		thermal_zone_trip_updated(tz, trip);
+ 	}
+@@ -194,7 +194,7 @@ trip_point_hyst_show(struct device *dev, struct device_attribute *attr,
+ 	if (sscanf(attr->attr.name, "trip_point_%d_hyst", &trip_id) != 1)
+ 		return -EINVAL;
+ 
+-	return sprintf(buf, "%d\n", tz->trips[trip_id].trip.hysteresis);
++	return sprintf(buf, "%d\n", READ_ONCE(tz->trips[trip_id].trip.hysteresis));
+ }
+ 
+ static ssize_t
+diff --git a/drivers/thermal/thermal_trip.c b/drivers/thermal/thermal_trip.c
+index 49e63db685172..b4e7411b2fe74 100644
+--- a/drivers/thermal/thermal_trip.c
++++ b/drivers/thermal/thermal_trip.c
+@@ -152,7 +152,7 @@ void thermal_zone_set_trip_temp(struct thermal_zone_device *tz,
+ 	if (trip->temperature == temp)
+ 		return;
+ 
+-	trip->temperature = temp;
++	WRITE_ONCE(trip->temperature, temp);
+ 	thermal_notify_tz_trip_change(tz, trip);
+ 
+ 	if (temp == THERMAL_TEMP_INVALID) {
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 5864d65448ce5..91bfdc17eedb3 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -2412,7 +2412,17 @@ static inline int ufshcd_hba_capabilities(struct ufs_hba *hba)
+ 		return err;
+ 	}
+ 
++	/*
++	 * The UFSHCI 3.0 specification does not define MCQ_SUPPORT and
++	 * LSDB_SUPPORT, but [31:29] as reserved bits with reset value 0s, which
++	 * means we can simply read values regardless of version.
++	 */
+ 	hba->mcq_sup = FIELD_GET(MASK_MCQ_SUPPORT, hba->capabilities);
++	/*
++	 * 0h: legacy single doorbell support is available
++	 * 1h: indicate that legacy single doorbell support has been removed
++	 */
++	hba->lsdb_sup = !FIELD_GET(MASK_LSDB_SUPPORT, hba->capabilities);
+ 	if (!hba->mcq_sup)
+ 		return 0;
+ 
+@@ -6550,7 +6560,8 @@ static void ufshcd_err_handler(struct work_struct *work)
+ 	if (ufshcd_err_handling_should_stop(hba))
+ 		goto skip_err_handling;
+ 
+-	if (hba->dev_quirks & UFS_DEVICE_QUIRK_RECOVERY_FROM_DL_NAC_ERRORS) {
++	if ((hba->dev_quirks & UFS_DEVICE_QUIRK_RECOVERY_FROM_DL_NAC_ERRORS) &&
++	    !hba->force_reset) {
+ 		bool ret;
+ 
+ 		spin_unlock_irqrestore(hba->host->host_lock, flags);
+@@ -10456,6 +10467,12 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ 	}
+ 
+ 	if (!is_mcq_supported(hba)) {
++		if (!hba->lsdb_sup) {
++			dev_err(hba->dev, "%s: failed to initialize (legacy doorbell mode not supported)\n",
++				__func__);
++			err = -EINVAL;
++			goto out_disable;
++		}
+ 		err = scsi_add_host(host, hba->dev);
+ 		if (err) {
+ 			dev_err(hba->dev, "scsi_add_host failed\n");
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index c4d103db9d0f8..f66224a270bc6 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -496,7 +496,7 @@ ucsi_register_displayport(struct ucsi_connector *con,
+ 			  bool override, int offset,
+ 			  struct typec_altmode_desc *desc)
+ {
+-	return NULL;
++	return typec_port_register_altmode(con->port, desc);
+ }
+ 
+ static inline void
+diff --git a/drivers/usb/usbip/stub_rx.c b/drivers/usb/usbip/stub_rx.c
+index fc01b31bbb875..6338d818bc8bc 100644
+--- a/drivers/usb/usbip/stub_rx.c
++++ b/drivers/usb/usbip/stub_rx.c
+@@ -144,53 +144,62 @@ static int tweak_set_configuration_cmd(struct urb *urb)
+ 	if (err && err != -ENODEV)
+ 		dev_err(&sdev->udev->dev, "can't set config #%d, error %d\n",
+ 			config, err);
+-	return 0;
++	return err;
+ }
+ 
+ static int tweak_reset_device_cmd(struct urb *urb)
+ {
+ 	struct stub_priv *priv = (struct stub_priv *) urb->context;
+ 	struct stub_device *sdev = priv->sdev;
++	int err;
+ 
+ 	dev_info(&urb->dev->dev, "usb_queue_reset_device\n");
+ 
+-	if (usb_lock_device_for_reset(sdev->udev, NULL) < 0) {
++	err = usb_lock_device_for_reset(sdev->udev, NULL);
++	if (err < 0) {
+ 		dev_err(&urb->dev->dev, "could not obtain lock to reset device\n");
+-		return 0;
++		return err;
+ 	}
+-	usb_reset_device(sdev->udev);
++	err = usb_reset_device(sdev->udev);
+ 	usb_unlock_device(sdev->udev);
+ 
+-	return 0;
++	return err;
+ }
+ 
+ /*
+  * clear_halt, set_interface, and set_configuration require special tricks.
++ * Returns 1 if request was tweaked, 0 otherwise.
+  */
+-static void tweak_special_requests(struct urb *urb)
++static int tweak_special_requests(struct urb *urb)
+ {
++	int err;
++
+ 	if (!urb || !urb->setup_packet)
+-		return;
++		return 0;
+ 
+ 	if (usb_pipetype(urb->pipe) != PIPE_CONTROL)
+-		return;
++		return 0;
+ 
+ 	if (is_clear_halt_cmd(urb))
+ 		/* tweak clear_halt */
+-		 tweak_clear_halt_cmd(urb);
++		err = tweak_clear_halt_cmd(urb);
+ 
+ 	else if (is_set_interface_cmd(urb))
+ 		/* tweak set_interface */
+-		tweak_set_interface_cmd(urb);
++		err = tweak_set_interface_cmd(urb);
+ 
+ 	else if (is_set_configuration_cmd(urb))
+ 		/* tweak set_configuration */
+-		tweak_set_configuration_cmd(urb);
++		err = tweak_set_configuration_cmd(urb);
+ 
+ 	else if (is_reset_device_cmd(urb))
+-		tweak_reset_device_cmd(urb);
+-	else
++		err = tweak_reset_device_cmd(urb);
++	else {
+ 		usbip_dbg_stub_rx("no need to tweak\n");
++		return 0;
++	}
++
++	return !err;
+ }
+ 
+ /*
+@@ -468,6 +477,7 @@ static void stub_recv_cmd_submit(struct stub_device *sdev,
+ 	int support_sg = 1;
+ 	int np = 0;
+ 	int ret, i;
++	int is_tweaked;
+ 
+ 	if (pipe == -1)
+ 		return;
+@@ -580,8 +590,11 @@ static void stub_recv_cmd_submit(struct stub_device *sdev,
+ 		priv->urbs[i]->pipe = pipe;
+ 		priv->urbs[i]->complete = stub_complete;
+ 
+-		/* no need to submit an intercepted request, but harmless? */
+-		tweak_special_requests(priv->urbs[i]);
++		/*
++		 * all URBs belong to a single PDU, so a global is_tweaked flag is
++		 * enough
++		 */
++		is_tweaked = tweak_special_requests(priv->urbs[i]);
+ 
+ 		masking_bogus_flags(priv->urbs[i]);
+ 	}
+@@ -594,22 +607,32 @@ static void stub_recv_cmd_submit(struct stub_device *sdev,
+ 
+ 	/* urb is now ready to submit */
+ 	for (i = 0; i < priv->num_urbs; i++) {
+-		ret = usb_submit_urb(priv->urbs[i], GFP_KERNEL);
++		if (!is_tweaked) {
++			ret = usb_submit_urb(priv->urbs[i], GFP_KERNEL);
+ 
+-		if (ret == 0)
+-			usbip_dbg_stub_rx("submit urb ok, seqnum %u\n",
+-					pdu->base.seqnum);
+-		else {
+-			dev_err(&udev->dev, "submit_urb error, %d\n", ret);
+-			usbip_dump_header(pdu);
+-			usbip_dump_urb(priv->urbs[i]);
++			if (ret == 0)
++				usbip_dbg_stub_rx("submit urb ok, seqnum %u\n",
++						pdu->base.seqnum);
++			else {
++				dev_err(&udev->dev, "submit_urb error, %d\n", ret);
++				usbip_dump_header(pdu);
++				usbip_dump_urb(priv->urbs[i]);
+ 
++				/*
++				 * Pessimistic.
++				 * This connection will be discarded.
++				 */
++				usbip_event_add(ud, SDEV_EVENT_ERROR_SUBMIT);
++				break;
++			}
++		} else {
+ 			/*
+-			 * Pessimistic.
+-			 * This connection will be discarded.
++			 * An identical URB was already submitted in
++			 * tweak_special_requests(). Skip submitting this URB to not
++			 * duplicate the request.
+ 			 */
+-			usbip_event_add(ud, SDEV_EVENT_ERROR_SUBMIT);
+-			break;
++			priv->urbs[i]->status = 0;
++			stub_complete(priv->urbs[i]);
+ 		}
+ 	}
+ 
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 39d22693e47b6..c2f48fc159e5a 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1586,6 +1586,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
+ 					     locked_page, &cached,
+ 					     clear_bits,
+ 					     page_ops);
++		btrfs_qgroup_free_data(inode, NULL, start, cur_alloc_size, NULL);
+ 		start += cur_alloc_size;
+ 	}
+ 
+@@ -1599,6 +1600,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
+ 		clear_bits |= EXTENT_CLEAR_DATA_RESV;
+ 		extent_clear_unlock_delalloc(inode, start, end, locked_page,
+ 					     &cached, clear_bits, page_ops);
++		btrfs_qgroup_free_data(inode, NULL, start, cur_alloc_size, NULL);
+ 	}
+ 	return ret;
+ }
+@@ -2269,6 +2271,7 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
+ 					     EXTENT_DO_ACCOUNTING, PAGE_UNLOCK |
+ 					     PAGE_START_WRITEBACK |
+ 					     PAGE_END_WRITEBACK);
++		btrfs_qgroup_free_data(inode, NULL, cur_offset, end - cur_offset + 1, NULL);
+ 	}
+ 	btrfs_free_path(path);
+ 	return ret;
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index d7caa3732f074..731d7d562db1a 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -1648,14 +1648,20 @@ static void scrub_reset_stripe(struct scrub_stripe *stripe)
+ 	}
+ }
+ 
++static u32 stripe_length(const struct scrub_stripe *stripe)
++{
++	ASSERT(stripe->bg);
++
++	return min(BTRFS_STRIPE_LEN,
++		   stripe->bg->start + stripe->bg->length - stripe->logical);
++}
++
+ static void scrub_submit_extent_sector_read(struct scrub_ctx *sctx,
+ 					    struct scrub_stripe *stripe)
+ {
+ 	struct btrfs_fs_info *fs_info = stripe->bg->fs_info;
+ 	struct btrfs_bio *bbio = NULL;
+-	unsigned int nr_sectors = min(BTRFS_STRIPE_LEN, stripe->bg->start +
+-				      stripe->bg->length - stripe->logical) >>
+-				  fs_info->sectorsize_bits;
++	unsigned int nr_sectors = stripe_length(stripe) >> fs_info->sectorsize_bits;
+ 	u64 stripe_len = BTRFS_STRIPE_LEN;
+ 	int mirror = stripe->mirror_num;
+ 	int i;
+@@ -1729,9 +1735,7 @@ static void scrub_submit_initial_read(struct scrub_ctx *sctx,
+ {
+ 	struct btrfs_fs_info *fs_info = sctx->fs_info;
+ 	struct btrfs_bio *bbio;
+-	unsigned int nr_sectors = min(BTRFS_STRIPE_LEN, stripe->bg->start +
+-				      stripe->bg->length - stripe->logical) >>
+-				  fs_info->sectorsize_bits;
++	unsigned int nr_sectors = stripe_length(stripe) >> fs_info->sectorsize_bits;
+ 	int mirror = stripe->mirror_num;
+ 
+ 	ASSERT(stripe->bg);
+@@ -1871,6 +1875,9 @@ static int flush_scrub_stripes(struct scrub_ctx *sctx)
+ 		stripe = &sctx->stripes[i];
+ 
+ 		wait_scrub_stripe_io(stripe);
++		spin_lock(&sctx->stat_lock);
++		sctx->stat.last_physical = stripe->physical + stripe_length(stripe);
++		spin_unlock(&sctx->stat_lock);
+ 		scrub_reset_stripe(stripe);
+ 	}
+ out:
+@@ -2139,7 +2146,9 @@ static int scrub_simple_mirror(struct scrub_ctx *sctx,
+ 					 cur_physical, &found_logical);
+ 		if (ret > 0) {
+ 			/* No more extent, just update the accounting */
++			spin_lock(&sctx->stat_lock);
+ 			sctx->stat.last_physical = physical + logical_length;
++			spin_unlock(&sctx->stat_lock);
+ 			ret = 0;
+ 			break;
+ 		}
+@@ -2336,6 +2345,10 @@ static noinline_for_stack int scrub_stripe(struct scrub_ctx *sctx,
+ 			stripe_logical += chunk_logical;
+ 			ret = scrub_raid56_parity_stripe(sctx, scrub_dev, bg,
+ 							 map, stripe_logical);
++			spin_lock(&sctx->stat_lock);
++			sctx->stat.last_physical = min(physical + BTRFS_STRIPE_LEN,
++						       physical_end);
++			spin_unlock(&sctx->stat_lock);
+ 			if (ret)
+ 				goto out;
+ 			goto next;
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 897e19790522d..de1c063bc39db 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -1272,6 +1272,19 @@ static void extent_err(const struct extent_buffer *eb, int slot,
+ 	va_end(args);
+ }
+ 
++static bool is_valid_dref_root(u64 rootid)
++{
++	/*
++	 * The following tree root objectids are allowed to have a data backref:
++	 * - subvolume trees
++	 * - data reloc tree
++	 * - tree root
++	 *   For v1 space cache
++	 */
++	return is_fstree(rootid) || rootid == BTRFS_DATA_RELOC_TREE_OBJECTID ||
++	       rootid == BTRFS_ROOT_TREE_OBJECTID;
++}
++
+ static int check_extent_item(struct extent_buffer *leaf,
+ 			     struct btrfs_key *key, int slot,
+ 			     struct btrfs_key *prev_key)
+@@ -1424,6 +1437,8 @@ static int check_extent_item(struct extent_buffer *leaf,
+ 		struct btrfs_extent_data_ref *dref;
+ 		struct btrfs_shared_data_ref *sref;
+ 		u64 seq;
++		u64 dref_root;
++		u64 dref_objectid;
+ 		u64 dref_offset;
+ 		u64 inline_offset;
+ 		u8 inline_type;
+@@ -1467,11 +1482,26 @@ static int check_extent_item(struct extent_buffer *leaf,
+ 		 */
+ 		case BTRFS_EXTENT_DATA_REF_KEY:
+ 			dref = (struct btrfs_extent_data_ref *)(&iref->offset);
++			dref_root = btrfs_extent_data_ref_root(leaf, dref);
++			dref_objectid = btrfs_extent_data_ref_objectid(leaf, dref);
+ 			dref_offset = btrfs_extent_data_ref_offset(leaf, dref);
+ 			seq = hash_extent_data_ref(
+ 					btrfs_extent_data_ref_root(leaf, dref),
+ 					btrfs_extent_data_ref_objectid(leaf, dref),
+ 					btrfs_extent_data_ref_offset(leaf, dref));
++			if (unlikely(!is_valid_dref_root(dref_root))) {
++				extent_err(leaf, slot,
++					   "invalid data ref root value %llu",
++					   dref_root);
++				return -EUCLEAN;
++			}
++			if (unlikely(dref_objectid < BTRFS_FIRST_FREE_OBJECTID ||
++				     dref_objectid > BTRFS_LAST_FREE_OBJECTID)) {
++				extent_err(leaf, slot,
++					   "invalid data ref objectid value %llu",
++					   dref_objectid);
++				return -EUCLEAN;
++			}
+ 			if (unlikely(!IS_ALIGNED(dref_offset,
+ 						 fs_info->sectorsize))) {
+ 				extent_err(leaf, slot,
+@@ -1610,6 +1640,8 @@ static int check_extent_data_ref(struct extent_buffer *leaf,
+ 		return -EUCLEAN;
+ 	}
+ 	for (; ptr < end; ptr += sizeof(*dref)) {
++		u64 root;
++		u64 objectid;
+ 		u64 offset;
+ 
+ 		/*
+@@ -1617,7 +1649,22 @@ static int check_extent_data_ref(struct extent_buffer *leaf,
+ 		 * overflow from the leaf due to hash collisions.
+ 		 */
+ 		dref = (struct btrfs_extent_data_ref *)ptr;
++		root = btrfs_extent_data_ref_root(leaf, dref);
++		objectid = btrfs_extent_data_ref_objectid(leaf, dref);
+ 		offset = btrfs_extent_data_ref_offset(leaf, dref);
++		if (unlikely(!is_valid_dref_root(root))) {
++			extent_err(leaf, slot,
++				   "invalid extent data backref root value %llu",
++				   root);
++			return -EUCLEAN;
++		}
++		if (unlikely(objectid < BTRFS_FIRST_FREE_OBJECTID ||
++			     objectid > BTRFS_LAST_FREE_OBJECTID)) {
++			extent_err(leaf, slot,
++				   "invalid extent data backref objectid value %llu",
++				   objectid);
++			return -EUCLEAN;
++		}
+ 		if (unlikely(!IS_ALIGNED(offset, leaf->fs_info->sectorsize))) {
+ 			extent_err(leaf, slot,
+ 	"invalid extent data backref offset, have %llu expect aligned to %u",
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 5556ab491368d..92fda31c68cdc 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -4154,7 +4154,7 @@ extern struct kmem_cache *f2fs_inode_entry_slab;
+  * inline.c
+  */
+ bool f2fs_may_inline_data(struct inode *inode);
+-bool f2fs_sanity_check_inline_data(struct inode *inode);
++bool f2fs_sanity_check_inline_data(struct inode *inode, struct page *ipage);
+ bool f2fs_may_inline_dentry(struct inode *inode);
+ void f2fs_do_read_inline_data(struct folio *folio, struct page *ipage);
+ void f2fs_truncate_inline_inode(struct inode *inode,
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 215daa71dc18a..cca7d448e55cb 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -33,11 +33,29 @@ bool f2fs_may_inline_data(struct inode *inode)
+ 	return !f2fs_post_read_required(inode);
+ }
+ 
+-bool f2fs_sanity_check_inline_data(struct inode *inode)
++static bool inode_has_blocks(struct inode *inode, struct page *ipage)
++{
++	struct f2fs_inode *ri = F2FS_INODE(ipage);
++	int i;
++
++	if (F2FS_HAS_BLOCKS(inode))
++		return true;
++
++	for (i = 0; i < DEF_NIDS_PER_INODE; i++) {
++		if (ri->i_nid[i])
++			return true;
++	}
++	return false;
++}
++
++bool f2fs_sanity_check_inline_data(struct inode *inode, struct page *ipage)
+ {
+ 	if (!f2fs_has_inline_data(inode))
+ 		return false;
+ 
++	if (inode_has_blocks(inode, ipage))
++		return false;
++
+ 	if (!support_inline_data(inode))
+ 		return true;
+ 
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index ed629dabbfda4..57da02bfa823e 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -347,7 +347,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
+ 		}
+ 	}
+ 
+-	if (f2fs_sanity_check_inline_data(inode)) {
++	if (f2fs_sanity_check_inline_data(inode, node_page)) {
+ 		f2fs_warn(sbi, "%s: inode (ino=%lx, mode=%u) should not have inline_data, run fsck to fix",
+ 			  __func__, inode->i_ino, inode->i_mode);
+ 		return false;
+diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
+index aa9cf01028489..52556b6bae6b1 100644
+--- a/fs/gfs2/quota.c
++++ b/fs/gfs2/quota.c
+@@ -75,9 +75,6 @@
+ #define GFS2_QD_HASH_SIZE       BIT(GFS2_QD_HASH_SHIFT)
+ #define GFS2_QD_HASH_MASK       (GFS2_QD_HASH_SIZE - 1)
+ 
+-#define QC_CHANGE 0
+-#define QC_SYNC 1
+-
+ /* Lock order: qd_lock -> bucket lock -> qd->lockref.lock -> lru lock */
+ /*                     -> sd_bitmap_lock                              */
+ static DEFINE_SPINLOCK(qd_lock);
+@@ -710,7 +707,7 @@ static int sort_qd(const void *a, const void *b)
+ 	return 0;
+ }
+ 
+-static void do_qc(struct gfs2_quota_data *qd, s64 change, int qc_type)
++static void do_qc(struct gfs2_quota_data *qd, s64 change)
+ {
+ 	struct gfs2_sbd *sdp = qd->qd_sbd;
+ 	struct gfs2_inode *ip = GFS2_I(sdp->sd_qc_inode);
+@@ -735,18 +732,16 @@ static void do_qc(struct gfs2_quota_data *qd, s64 change, int qc_type)
+ 	qd->qd_change = x;
+ 	spin_unlock(&qd_lock);
+ 
+-	if (qc_type == QC_CHANGE) {
+-		if (!test_and_set_bit(QDF_CHANGE, &qd->qd_flags)) {
+-			qd_hold(qd);
+-			slot_hold(qd);
+-		}
+-	} else {
++	if (!x) {
+ 		gfs2_assert_warn(sdp, test_bit(QDF_CHANGE, &qd->qd_flags));
+ 		clear_bit(QDF_CHANGE, &qd->qd_flags);
+ 		qc->qc_flags = 0;
+ 		qc->qc_id = 0;
+ 		slot_put(qd);
+ 		qd_put(qd);
++	} else if (!test_and_set_bit(QDF_CHANGE, &qd->qd_flags)) {
++		qd_hold(qd);
++		slot_hold(qd);
+ 	}
+ 
+ 	if (change < 0) /* Reset quiet flag if we freed some blocks */
+@@ -992,7 +987,7 @@ static int do_sync(unsigned int num_qd, struct gfs2_quota_data **qda)
+ 		if (error)
+ 			goto out_end_trans;
+ 
+-		do_qc(qd, -qd->qd_change_sync, QC_SYNC);
++		do_qc(qd, -qd->qd_change_sync);
+ 		set_bit(QDF_REFRESH, &qd->qd_flags);
+ 	}
+ 
+@@ -1312,7 +1307,7 @@ void gfs2_quota_change(struct gfs2_inode *ip, s64 change,
+ 
+ 		if (qid_eq(qd->qd_id, make_kqid_uid(uid)) ||
+ 		    qid_eq(qd->qd_id, make_kqid_gid(gid))) {
+-			do_qc(qd, change, QC_CHANGE);
++			do_qc(qd, change);
+ 		}
+ 	}
+ }
+diff --git a/fs/gfs2/util.c b/fs/gfs2/util.c
+index af4758d8d894c..551efd7820ad8 100644
+--- a/fs/gfs2/util.c
++++ b/fs/gfs2/util.c
+@@ -99,12 +99,12 @@ int check_journal_clean(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
+  */
+ int gfs2_freeze_lock_shared(struct gfs2_sbd *sdp)
+ {
++	int flags = LM_FLAG_NOEXP | GL_EXACT;
+ 	int error;
+ 
+-	error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
+-				   LM_FLAG_NOEXP | GL_EXACT,
++	error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED, flags,
+ 				   &sdp->sd_freeze_gh);
+-	if (error)
++	if (error && error != GLR_TRYFAILED)
+ 		fs_err(sdp, "can't lock the freeze glock: %d\n", error);
+ 	return error;
+ }
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index ff69ae24c4e89..272c8a1dab3c2 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -117,17 +117,13 @@ void fsnotify_sb_free(struct super_block *sb)
+  * parent cares.  Thus when an event happens on a child it can quickly tell
+  * if there is a need to find a parent and send the event to the parent.
+  */
+-void __fsnotify_update_child_dentry_flags(struct inode *inode)
++void fsnotify_set_children_dentry_flags(struct inode *inode)
+ {
+ 	struct dentry *alias;
+-	int watched;
+ 
+ 	if (!S_ISDIR(inode->i_mode))
+ 		return;
+ 
+-	/* determine if the children should tell inode about their events */
+-	watched = fsnotify_inode_watches_children(inode);
+-
+ 	spin_lock(&inode->i_lock);
+ 	/* run all of the dentries associated with this inode.  Since this is a
+ 	 * directory, there damn well better only be one item on this list */
+@@ -143,10 +139,7 @@ void __fsnotify_update_child_dentry_flags(struct inode *inode)
+ 				continue;
+ 
+ 			spin_lock_nested(&child->d_lock, DENTRY_D_LOCK_NESTED);
+-			if (watched)
+-				child->d_flags |= DCACHE_FSNOTIFY_PARENT_WATCHED;
+-			else
+-				child->d_flags &= ~DCACHE_FSNOTIFY_PARENT_WATCHED;
++			child->d_flags |= DCACHE_FSNOTIFY_PARENT_WATCHED;
+ 			spin_unlock(&child->d_lock);
+ 		}
+ 		spin_unlock(&alias->d_lock);
+@@ -154,6 +147,24 @@ void __fsnotify_update_child_dentry_flags(struct inode *inode)
+ 	spin_unlock(&inode->i_lock);
+ }
+ 
++/*
++ * Lazily clear false positive PARENT_WATCHED flag for child whose parent had
++ * stopped watching children.
++ */
++static void fsnotify_clear_child_dentry_flag(struct inode *pinode,
++					     struct dentry *dentry)
++{
++	spin_lock(&dentry->d_lock);
++	/*
++	 * d_lock is a sufficient barrier to prevent observing a non-watched
++	 * parent state from before the fsnotify_set_children_dentry_flags()
++	 * or fsnotify_update_flags() call that had set PARENT_WATCHED.
++	 */
++	if (!fsnotify_inode_watches_children(pinode))
++		dentry->d_flags &= ~DCACHE_FSNOTIFY_PARENT_WATCHED;
++	spin_unlock(&dentry->d_lock);
++}
++
+ /* Are inode/sb/mount interested in parent and name info with this event? */
+ static bool fsnotify_event_needs_parent(struct inode *inode, __u32 mnt_mask,
+ 					__u32 mask)
+@@ -228,7 +239,7 @@ int __fsnotify_parent(struct dentry *dentry, __u32 mask, const void *data,
+ 	p_inode = parent->d_inode;
+ 	p_mask = fsnotify_inode_watches_children(p_inode);
+ 	if (unlikely(parent_watched && !p_mask))
+-		__fsnotify_update_child_dentry_flags(p_inode);
++		fsnotify_clear_child_dentry_flag(p_inode, dentry);
+ 
+ 	/*
+ 	 * Include parent/name in notification either if some notification
+diff --git a/fs/notify/fsnotify.h b/fs/notify/fsnotify.h
+index 2d059f789ee35..663759ed6fbc1 100644
+--- a/fs/notify/fsnotify.h
++++ b/fs/notify/fsnotify.h
+@@ -93,7 +93,7 @@ static inline void fsnotify_clear_marks_by_sb(struct super_block *sb)
+  * update the dentry->d_flags of all of inode's children to indicate if inode cares
+  * about events that happen to its children.
+  */
+-extern void __fsnotify_update_child_dentry_flags(struct inode *inode);
++extern void fsnotify_set_children_dentry_flags(struct inode *inode);
+ 
+ extern struct kmem_cache *fsnotify_mark_connector_cachep;
+ 
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index c3eefa70633c4..5e170e7130886 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -250,6 +250,24 @@ static void *__fsnotify_recalc_mask(struct fsnotify_mark_connector *conn)
+ 	return fsnotify_update_iref(conn, want_iref);
+ }
+ 
++static bool fsnotify_conn_watches_children(
++					struct fsnotify_mark_connector *conn)
++{
++	if (conn->type != FSNOTIFY_OBJ_TYPE_INODE)
++		return false;
++
++	return fsnotify_inode_watches_children(fsnotify_conn_inode(conn));
++}
++
++static void fsnotify_conn_set_children_dentry_flags(
++					struct fsnotify_mark_connector *conn)
++{
++	if (conn->type != FSNOTIFY_OBJ_TYPE_INODE)
++		return;
++
++	fsnotify_set_children_dentry_flags(fsnotify_conn_inode(conn));
++}
++
+ /*
+  * Calculate mask of events for a list of marks. The caller must make sure
+  * connector and connector->obj cannot disappear under us.  Callers achieve
+@@ -258,15 +276,23 @@ static void *__fsnotify_recalc_mask(struct fsnotify_mark_connector *conn)
+  */
+ void fsnotify_recalc_mask(struct fsnotify_mark_connector *conn)
+ {
++	bool update_children;
++
+ 	if (!conn)
+ 		return;
+ 
+ 	spin_lock(&conn->lock);
++	update_children = !fsnotify_conn_watches_children(conn);
+ 	__fsnotify_recalc_mask(conn);
++	update_children &= fsnotify_conn_watches_children(conn);
+ 	spin_unlock(&conn->lock);
+-	if (conn->type == FSNOTIFY_OBJ_TYPE_INODE)
+-		__fsnotify_update_child_dentry_flags(
+-					fsnotify_conn_inode(conn));
++	/*
++	 * Set children's PARENT_WATCHED flags only if parent started watching.
++	 * When parent stops watching, we clear false positive PARENT_WATCHED
++	 * flags lazily in __fsnotify_parent().
++	 */
++	if (update_children)
++		fsnotify_conn_set_children_dentry_flags(conn);
+ }
+ 
+ /* Free all connectors queued for freeing once SRCU period ends */
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 062b86a4936fd..9f5bc41433c15 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -950,7 +950,8 @@ int smb2_query_path_info(const unsigned int xid,
+ 			cmds[num_cmds++] = SMB2_OP_GET_REPARSE;
+ 
+ 		oparms = CIFS_OPARMS(cifs_sb, tcon, full_path,
+-				     FILE_READ_ATTRIBUTES | FILE_READ_EA,
++				     FILE_READ_ATTRIBUTES |
++				     FILE_READ_EA | SYNCHRONIZE,
+ 				     FILE_OPEN, create_options |
+ 				     OPEN_REPARSE_POINT, ACL_NO_MODE);
+ 		cifs_get_readable_path(tcon, full_path, &cfile);
+@@ -1258,7 +1259,8 @@ int smb2_query_reparse_point(const unsigned int xid,
+ 	cifs_dbg(FYI, "%s: path: %s\n", __func__, full_path);
+ 
+ 	cifs_get_readable_path(tcon, full_path, &cfile);
+-	oparms = CIFS_OPARMS(cifs_sb, tcon, full_path, FILE_READ_ATTRIBUTES,
++	oparms = CIFS_OPARMS(cifs_sb, tcon, full_path,
++			     FILE_READ_ATTRIBUTES | FILE_READ_EA | SYNCHRONIZE,
+ 			     FILE_OPEN, OPEN_REPARSE_POINT, ACL_NO_MODE);
+ 	rc = smb2_compound_op(xid, tcon, cifs_sb,
+ 			      full_path, &oparms, &in_iov,
+diff --git a/include/clocksource/timer-xilinx.h b/include/clocksource/timer-xilinx.h
+index c0f56fe6d22ae..d116f18de899c 100644
+--- a/include/clocksource/timer-xilinx.h
++++ b/include/clocksource/timer-xilinx.h
+@@ -41,7 +41,7 @@ struct regmap;
+ struct xilinx_timer_priv {
+ 	struct regmap *map;
+ 	struct clk *clk;
+-	u32 max;
++	u64 max;
+ };
+ 
+ /**
+diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
+index 4dd6143db2716..8be029bc50b1e 100644
+--- a/include/linux/fsnotify_backend.h
++++ b/include/linux/fsnotify_backend.h
+@@ -594,12 +594,14 @@ static inline __u32 fsnotify_parent_needed_mask(__u32 mask)
+ 
+ static inline int fsnotify_inode_watches_children(struct inode *inode)
+ {
++	__u32 parent_mask = READ_ONCE(inode->i_fsnotify_mask);
++
+ 	/* FS_EVENT_ON_CHILD is set if the inode may care */
+-	if (!(inode->i_fsnotify_mask & FS_EVENT_ON_CHILD))
++	if (!(parent_mask & FS_EVENT_ON_CHILD))
+ 		return 0;
+ 	/* this inode might care about child events, does it care about the
+ 	 * specific set of events that can happen on a child? */
+-	return inode->i_fsnotify_mask & FS_EVENTS_POSS_ON_CHILD;
++	return parent_mask & FS_EVENTS_POSS_ON_CHILD;
+ }
+ 
+ /*
+@@ -613,7 +615,7 @@ static inline void fsnotify_update_flags(struct dentry *dentry)
+ 	/*
+ 	 * Serialisation of setting PARENT_WATCHED on the dentries is provided
+ 	 * by d_lock. If inotify_inode_watched changes after we have taken
+-	 * d_lock, the following __fsnotify_update_child_dentry_flags call will
++	 * d_lock, the following fsnotify_set_children_dentry_flags call will
+ 	 * find our entry, so it will spin until we complete here, and update
+ 	 * us with the new state.
+ 	 */
+diff --git a/include/linux/hwspinlock.h b/include/linux/hwspinlock.h
+index bfe7c1f1ac6d1..f0231dbc47771 100644
+--- a/include/linux/hwspinlock.h
++++ b/include/linux/hwspinlock.h
+@@ -68,6 +68,7 @@ int __hwspin_lock_timeout(struct hwspinlock *, unsigned int, int,
+ int __hwspin_trylock(struct hwspinlock *, int, unsigned long *);
+ void __hwspin_unlock(struct hwspinlock *, int, unsigned long *);
+ int of_hwspin_lock_get_id_byname(struct device_node *np, const char *name);
++int hwspin_lock_bust(struct hwspinlock *hwlock, unsigned int id);
+ int devm_hwspin_lock_free(struct device *dev, struct hwspinlock *hwlock);
+ struct hwspinlock *devm_hwspin_lock_request(struct device *dev);
+ struct hwspinlock *devm_hwspin_lock_request_specific(struct device *dev,
+@@ -127,6 +128,11 @@ void __hwspin_unlock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
+ {
+ }
+ 
++static inline int hwspin_lock_bust(struct hwspinlock *hwlock, unsigned int id)
++{
++	return 0;
++}
++
+ static inline int of_hwspin_lock_get_id(struct device_node *np, int index)
+ {
+ 	return 0;
+diff --git a/include/linux/i2c.h b/include/linux/i2c.h
+index 424acb98c7c26..c11624a3d9c04 100644
+--- a/include/linux/i2c.h
++++ b/include/linux/i2c.h
+@@ -1053,7 +1053,7 @@ static inline int of_i2c_get_board_info(struct device *dev,
+ struct acpi_resource;
+ struct acpi_resource_i2c_serialbus;
+ 
+-#if IS_ENABLED(CONFIG_ACPI)
++#if IS_REACHABLE(CONFIG_ACPI) && IS_REACHABLE(CONFIG_I2C)
+ bool i2c_acpi_get_i2c_resource(struct acpi_resource *ares,
+ 			       struct acpi_resource_i2c_serialbus **i2c);
+ int i2c_acpi_client_count(struct acpi_device *adev);
+diff --git a/include/linux/soc/qcom/smem.h b/include/linux/soc/qcom/smem.h
+index a36a3b9d4929e..03187bc958518 100644
+--- a/include/linux/soc/qcom/smem.h
++++ b/include/linux/soc/qcom/smem.h
+@@ -14,4 +14,6 @@ phys_addr_t qcom_smem_virt_to_phys(void *p);
+ 
+ int qcom_smem_get_soc_id(u32 *id);
+ 
++int qcom_smem_bust_hwspin_lock_by_host(unsigned int host);
++
+ #endif
+diff --git a/include/net/inet_timewait_sock.h b/include/net/inet_timewait_sock.h
+index 2a536eea9424e..5b43d220243de 100644
+--- a/include/net/inet_timewait_sock.h
++++ b/include/net/inet_timewait_sock.h
+@@ -93,8 +93,10 @@ struct inet_timewait_sock *inet_twsk_alloc(const struct sock *sk,
+ 					   struct inet_timewait_death_row *dr,
+ 					   const int state);
+ 
+-void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
+-			 struct inet_hashinfo *hashinfo);
++void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
++				  struct sock *sk,
++				  struct inet_hashinfo *hashinfo,
++				  int timeo);
+ 
+ void __inet_twsk_schedule(struct inet_timewait_sock *tw, int timeo,
+ 			  bool rearm);
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 6d735e00d3f3e..c5606cadb1a55 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -506,8 +506,7 @@ static inline unsigned int ip_skb_dst_mtu(struct sock *sk,
+ 	return mtu - lwtunnel_headroom(skb_dst(skb)->lwtstate, mtu);
+ }
+ 
+-struct dst_metrics *ip_fib_metrics_init(struct net *net, struct nlattr *fc_mx,
+-					int fc_mx_len,
++struct dst_metrics *ip_fib_metrics_init(struct nlattr *fc_mx, int fc_mx_len,
+ 					struct netlink_ext_ack *extack);
+ static inline void ip_fib_metrics_put(struct dst_metrics *fib_metrics)
+ {
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 32815a40dea16..45bbb54e42e85 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1216,7 +1216,7 @@ extern struct tcp_congestion_ops tcp_reno;
+ 
+ struct tcp_congestion_ops *tcp_ca_find(const char *name);
+ struct tcp_congestion_ops *tcp_ca_find_key(u32 key);
+-u32 tcp_ca_get_key_by_name(struct net *net, const char *name, bool *ecn_ca);
++u32 tcp_ca_get_key_by_name(const char *name, bool *ecn_ca);
+ #ifdef CONFIG_INET
+ char *tcp_ca_get_name_by_key(u32 key, char *buffer);
+ #else
+diff --git a/include/sound/ump_convert.h b/include/sound/ump_convert.h
+index 28c364c63245d..d099ae27f8491 100644
+--- a/include/sound/ump_convert.h
++++ b/include/sound/ump_convert.h
+@@ -13,6 +13,7 @@ struct ump_cvt_to_ump_bank {
+ 	unsigned char cc_nrpn_msb, cc_nrpn_lsb;
+ 	unsigned char cc_data_msb, cc_data_lsb;
+ 	unsigned char cc_bank_msb, cc_bank_lsb;
++	bool cc_data_msb_set, cc_data_lsb_set;
+ };
+ 
+ /* context for converting from MIDI1 byte stream to UMP packet */
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index d965e4d1277e6..52f0094a8c083 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -1074,6 +1074,7 @@ struct ufs_hba {
+ 	bool ext_iid_sup;
+ 	bool scsi_host_added;
+ 	bool mcq_sup;
++	bool lsdb_sup;
+ 	bool mcq_enabled;
+ 	struct ufshcd_res_info res[RES_MAX];
+ 	void __iomem *mcq_base;
+diff --git a/include/ufs/ufshci.h b/include/ufs/ufshci.h
+index 385e1c6b8d604..22ba85e81d8c9 100644
+--- a/include/ufs/ufshci.h
++++ b/include/ufs/ufshci.h
+@@ -75,6 +75,7 @@ enum {
+ 	MASK_OUT_OF_ORDER_DATA_DELIVERY_SUPPORT	= 0x02000000,
+ 	MASK_UIC_DME_TEST_MODE_SUPPORT		= 0x04000000,
+ 	MASK_CRYPTO_SUPPORT			= 0x10000000,
++	MASK_LSDB_SUPPORT			= 0x20000000,
+ 	MASK_MCQ_SUPPORT			= 0x40000000,
+ };
+ 
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index a6e3792b15f8a..d570535342cb7 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -416,8 +416,11 @@ static unsigned long long phys_addr(struct dma_debug_entry *entry)
+  * dma_active_cacheline entry to track per event.  dma_map_sg(), on the
+  * other hand, consumes a single dma_debug_entry, but inserts 'nents'
+  * entries into the tree.
++ *
++ * Use __GFP_NOWARN because the printk from an OOM, to netconsole, could end
++ * up right back in the DMA debugging code, leading to a deadlock.
+  */
+-static RADIX_TREE(dma_active_cacheline, GFP_ATOMIC);
++static RADIX_TREE(dma_active_cacheline, GFP_ATOMIC | __GFP_NOWARN);
+ static DEFINE_SPINLOCK(radix_lock);
+ #define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << RADIX_TREE_MAX_TAGS) - 1)
+ #define CACHELINE_PER_PAGE_SHIFT (PAGE_SHIFT - L1_CACHE_SHIFT)
+diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
+index bae7925c497fe..179f60ca03130 100644
+--- a/kernel/rcu/tree.h
++++ b/kernel/rcu/tree.h
+@@ -223,7 +223,6 @@ struct rcu_data {
+ 	struct swait_queue_head nocb_state_wq; /* For offloading state changes */
+ 	struct task_struct *nocb_gp_kthread;
+ 	raw_spinlock_t nocb_lock;	/* Guard following pair of fields. */
+-	atomic_t nocb_lock_contended;	/* Contention experienced. */
+ 	int nocb_defer_wakeup;		/* Defer wakeup of nocb_kthread. */
+ 	struct timer_list nocb_timer;	/* Enforce finite deferral. */
+ 	unsigned long nocb_gp_adv_time;	/* Last call_rcu() CB adv (jiffies). */
+diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
+index 3f85577bddd4e..2d9eed2bf7509 100644
+--- a/kernel/rcu/tree_nocb.h
++++ b/kernel/rcu/tree_nocb.h
+@@ -91,8 +91,7 @@ module_param(nocb_nobypass_lim_per_jiffy, int, 0);
+ 
+ /*
+  * Acquire the specified rcu_data structure's ->nocb_bypass_lock.  If the
+- * lock isn't immediately available, increment ->nocb_lock_contended to
+- * flag the contention.
++ * lock isn't immediately available, perform minimal sanity check.
+  */
+ static void rcu_nocb_bypass_lock(struct rcu_data *rdp)
+ 	__acquires(&rdp->nocb_bypass_lock)
+@@ -100,29 +99,12 @@ static void rcu_nocb_bypass_lock(struct rcu_data *rdp)
+ 	lockdep_assert_irqs_disabled();
+ 	if (raw_spin_trylock(&rdp->nocb_bypass_lock))
+ 		return;
+-	atomic_inc(&rdp->nocb_lock_contended);
++	/*
++	 * Contention expected only when local enqueue collide with
++	 * remote flush from kthreads.
++	 */
+ 	WARN_ON_ONCE(smp_processor_id() != rdp->cpu);
+-	smp_mb__after_atomic(); /* atomic_inc() before lock. */
+ 	raw_spin_lock(&rdp->nocb_bypass_lock);
+-	smp_mb__before_atomic(); /* atomic_dec() after lock. */
+-	atomic_dec(&rdp->nocb_lock_contended);
+-}
+-
+-/*
+- * Spinwait until the specified rcu_data structure's ->nocb_lock is
+- * not contended.  Please note that this is extremely special-purpose,
+- * relying on the fact that at most two kthreads and one CPU contend for
+- * this lock, and also that the two kthreads are guaranteed to have frequent
+- * grace-period-duration time intervals between successive acquisitions
+- * of the lock.  This allows us to use an extremely simple throttling
+- * mechanism, and further to apply it only to the CPU doing floods of
+- * call_rcu() invocations.  Don't try this at home!
+- */
+-static void rcu_nocb_wait_contended(struct rcu_data *rdp)
+-{
+-	WARN_ON_ONCE(smp_processor_id() != rdp->cpu);
+-	while (WARN_ON_ONCE(atomic_read(&rdp->nocb_lock_contended)))
+-		cpu_relax();
+ }
+ 
+ /*
+@@ -510,7 +492,6 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
+ 	}
+ 
+ 	// We need to use the bypass.
+-	rcu_nocb_wait_contended(rdp);
+ 	rcu_nocb_bypass_lock(rdp);
+ 	ncbs = rcu_cblist_n_cbs(&rdp->nocb_bypass);
+ 	rcu_segcblist_inc_len(&rdp->cblist); /* Must precede enqueue. */
+@@ -1678,12 +1659,11 @@ static void show_rcu_nocb_state(struct rcu_data *rdp)
+ 
+ 	sprintf(bufw, "%ld", rsclp->gp_seq[RCU_WAIT_TAIL]);
+ 	sprintf(bufr, "%ld", rsclp->gp_seq[RCU_NEXT_READY_TAIL]);
+-	pr_info("   CB %d^%d->%d %c%c%c%c%c%c F%ld L%ld C%d %c%c%s%c%s%c%c q%ld %c CPU %d%s\n",
++	pr_info("   CB %d^%d->%d %c%c%c%c%c F%ld L%ld C%d %c%c%s%c%s%c%c q%ld %c CPU %d%s\n",
+ 		rdp->cpu, rdp->nocb_gp_rdp->cpu,
+ 		nocb_next_rdp ? nocb_next_rdp->cpu : -1,
+ 		"kK"[!!rdp->nocb_cb_kthread],
+ 		"bB"[raw_spin_is_locked(&rdp->nocb_bypass_lock)],
+-		"cC"[!!atomic_read(&rdp->nocb_lock_contended)],
+ 		"lL"[raw_spin_is_locked(&rdp->nocb_lock)],
+ 		"sS"[!!rdp->nocb_cb_sleep],
+ 		".W"[swait_active(&rdp->nocb_cb_wq)],
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 657bcd887fdb8..3577f94f69d97 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -4221,7 +4221,7 @@ int filemap_invalidate_inode(struct inode *inode, bool flush,
+ 	}
+ 
+ 	/* Wait for writeback to complete on all folios and discard. */
+-	truncate_inode_pages_range(mapping, start, end);
++	invalidate_inode_pages2_range(mapping, start / PAGE_SIZE, end / PAGE_SIZE);
+ 
+ unlock:
+ 	filemap_invalidate_unlock(mapping);
+diff --git a/net/dccp/minisocks.c b/net/dccp/minisocks.c
+index 251a57cf58223..deb52d7d31b48 100644
+--- a/net/dccp/minisocks.c
++++ b/net/dccp/minisocks.c
+@@ -59,11 +59,10 @@ void dccp_time_wait(struct sock *sk, int state, int timeo)
+ 		 * we complete the initialization.
+ 		 */
+ 		local_bh_disable();
+-		inet_twsk_schedule(tw, timeo);
+ 		/* Linkage updates.
+ 		 * Note that access to tw after this point is illegal.
+ 		 */
+-		inet_twsk_hashdance(tw, sk, &dccp_hashinfo);
++		inet_twsk_hashdance_schedule(tw, sk, &dccp_hashinfo, timeo);
+ 		local_bh_enable();
+ 	} else {
+ 		/* Sorry, if we're out of memory, just CLOSE this
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 8956026bc0a2c..2b57cd2b96e2a 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -1030,7 +1030,7 @@ bool fib_metrics_match(struct fib_config *cfg, struct fib_info *fi)
+ 			bool ecn_ca = false;
+ 
+ 			nla_strscpy(tmp, nla, sizeof(tmp));
+-			val = tcp_ca_get_key_by_name(fi->fib_net, tmp, &ecn_ca);
++			val = tcp_ca_get_key_by_name(tmp, &ecn_ca);
+ 		} else {
+ 			if (nla_len(nla) != sizeof(u32))
+ 				return false;
+@@ -1459,8 +1459,7 @@ struct fib_info *fib_create_info(struct fib_config *cfg,
+ 	fi = kzalloc(struct_size(fi, fib_nh, nhs), GFP_KERNEL);
+ 	if (!fi)
+ 		goto failure;
+-	fi->fib_metrics = ip_fib_metrics_init(fi->fib_net, cfg->fc_mx,
+-					      cfg->fc_mx_len, extack);
++	fi->fib_metrics = ip_fib_metrics_init(cfg->fc_mx, cfg->fc_mx_len, extack);
+ 	if (IS_ERR(fi->fib_metrics)) {
+ 		err = PTR_ERR(fi->fib_metrics);
+ 		kfree(fi);
+diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
+index e28075f0006e3..628d33a41ce5f 100644
+--- a/net/ipv4/inet_timewait_sock.c
++++ b/net/ipv4/inet_timewait_sock.c
+@@ -96,9 +96,13 @@ static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
+  * Enter the time wait state. This is called with locally disabled BH.
+  * Essentially we whip up a timewait bucket, copy the relevant info into it
+  * from the SK, and mess with hash chains and list linkage.
++ *
++ * The caller must not access @tw anymore after this function returns.
+  */
+-void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
+-			   struct inet_hashinfo *hashinfo)
++void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
++				  struct sock *sk,
++				  struct inet_hashinfo *hashinfo,
++				  int timeo)
+ {
+ 	const struct inet_sock *inet = inet_sk(sk);
+ 	const struct inet_connection_sock *icsk = inet_csk(sk);
+@@ -129,26 +133,33 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
+ 
+ 	spin_lock(lock);
+ 
++	/* Step 2: Hash TW into tcp ehash chain */
+ 	inet_twsk_add_node_rcu(tw, &ehead->chain);
+ 
+ 	/* Step 3: Remove SK from hash chain */
+ 	if (__sk_nulls_del_node_init_rcu(sk))
+ 		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+ 
+-	spin_unlock(lock);
+ 
++	/* Ensure above writes are committed into memory before updating the
++	 * refcount.
++	 * Provides ordering vs later refcount_inc().
++	 */
++	smp_wmb();
+ 	/* tw_refcnt is set to 3 because we have :
+ 	 * - one reference for bhash chain.
+ 	 * - one reference for ehash chain.
+ 	 * - one reference for timer.
+-	 * We can use atomic_set() because prior spin_lock()/spin_unlock()
+-	 * committed into memory all tw fields.
+ 	 * Also note that after this point, we lost our implicit reference
+ 	 * so we are not allowed to use tw anymore.
+ 	 */
+ 	refcount_set(&tw->tw_refcnt, 3);
++
++	inet_twsk_schedule(tw, timeo);
++
++	spin_unlock(lock);
+ }
+-EXPORT_SYMBOL_GPL(inet_twsk_hashdance);
++EXPORT_SYMBOL_GPL(inet_twsk_hashdance_schedule);
+ 
+ static void tw_timer_handler(struct timer_list *t)
+ {
+@@ -217,7 +228,34 @@ EXPORT_SYMBOL_GPL(inet_twsk_alloc);
+  */
+ void inet_twsk_deschedule_put(struct inet_timewait_sock *tw)
+ {
+-	if (del_timer_sync(&tw->tw_timer))
++	struct inet_hashinfo *hashinfo = tw->tw_dr->hashinfo;
++	spinlock_t *lock = inet_ehash_lockp(hashinfo, tw->tw_hash);
++
++	/* inet_twsk_purge() walks over all sockets, including tw ones,
++	 * and removes them via inet_twsk_deschedule_put() after a
++	 * refcount_inc_not_zero().
++	 *
++	 * inet_twsk_hashdance_schedule() must (re)init the refcount before
++	 * arming the timer, i.e. inet_twsk_purge can obtain a reference to
++	 * a twsk that did not yet schedule the timer.
++	 *
++	 * The ehash lock synchronizes these two:
++	 * After acquiring the lock, the timer is always scheduled (else
++	 * timer_shutdown returns false), because hashdance_schedule releases
++	 * the ehash lock only after completing the timer initialization.
++	 *
++	 * Without grabbing the ehash lock, we get:
++	 * 1) cpu x sets twsk refcount to 3
++	 * 2) cpu y bumps refcount to 4
++	 * 3) cpu y calls inet_twsk_deschedule_put() and shuts timer down
++	 * 4) cpu x tries to start timer, but mod_timer is a noop post-shutdown
++	 * -> timer refcount is never decremented.
++	 */
++	spin_lock(lock);
++	/*  Makes sure hashdance_schedule() has completed */
++	spin_unlock(lock);
++
++	if (timer_shutdown_sync(&tw->tw_timer))
+ 		inet_twsk_kill(tw);
+ 	inet_twsk_put(tw);
+ }
+diff --git a/net/ipv4/metrics.c b/net/ipv4/metrics.c
+index 0e3ee1532848c..8ddac1f595ed8 100644
+--- a/net/ipv4/metrics.c
++++ b/net/ipv4/metrics.c
+@@ -7,7 +7,7 @@
+ #include <net/net_namespace.h>
+ #include <net/tcp.h>
+ 
+-static int ip_metrics_convert(struct net *net, struct nlattr *fc_mx,
++static int ip_metrics_convert(struct nlattr *fc_mx,
+ 			      int fc_mx_len, u32 *metrics,
+ 			      struct netlink_ext_ack *extack)
+ {
+@@ -31,7 +31,7 @@ static int ip_metrics_convert(struct net *net, struct nlattr *fc_mx,
+ 			char tmp[TCP_CA_NAME_MAX];
+ 
+ 			nla_strscpy(tmp, nla, sizeof(tmp));
+-			val = tcp_ca_get_key_by_name(net, tmp, &ecn_ca);
++			val = tcp_ca_get_key_by_name(tmp, &ecn_ca);
+ 			if (val == TCP_CA_UNSPEC) {
+ 				NL_SET_ERR_MSG(extack, "Unknown tcp congestion algorithm");
+ 				return -EINVAL;
+@@ -63,7 +63,7 @@ static int ip_metrics_convert(struct net *net, struct nlattr *fc_mx,
+ 	return 0;
+ }
+ 
+-struct dst_metrics *ip_fib_metrics_init(struct net *net, struct nlattr *fc_mx,
++struct dst_metrics *ip_fib_metrics_init(struct nlattr *fc_mx,
+ 					int fc_mx_len,
+ 					struct netlink_ext_ack *extack)
+ {
+@@ -77,7 +77,7 @@ struct dst_metrics *ip_fib_metrics_init(struct net *net, struct nlattr *fc_mx,
+ 	if (unlikely(!fib_metrics))
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	err = ip_metrics_convert(net, fc_mx, fc_mx_len, fib_metrics->metrics,
++	err = ip_metrics_convert(fc_mx, fc_mx_len, fib_metrics->metrics,
+ 				 extack);
+ 	if (!err) {
+ 		refcount_set(&fib_metrics->refcnt, 1);
+diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
+index 28ffcfbeef14e..48617d99abb0d 100644
+--- a/net/ipv4/tcp_cong.c
++++ b/net/ipv4/tcp_cong.c
+@@ -46,8 +46,7 @@ void tcp_set_ca_state(struct sock *sk, const u8 ca_state)
+ }
+ 
+ /* Must be called with rcu lock held */
+-static struct tcp_congestion_ops *tcp_ca_find_autoload(struct net *net,
+-						       const char *name)
++static struct tcp_congestion_ops *tcp_ca_find_autoload(const char *name)
+ {
+ 	struct tcp_congestion_ops *ca = tcp_ca_find(name);
+ 
+@@ -178,7 +177,7 @@ int tcp_update_congestion_control(struct tcp_congestion_ops *ca, struct tcp_cong
+ 	return ret;
+ }
+ 
+-u32 tcp_ca_get_key_by_name(struct net *net, const char *name, bool *ecn_ca)
++u32 tcp_ca_get_key_by_name(const char *name, bool *ecn_ca)
+ {
+ 	const struct tcp_congestion_ops *ca;
+ 	u32 key = TCP_CA_UNSPEC;
+@@ -186,7 +185,7 @@ u32 tcp_ca_get_key_by_name(struct net *net, const char *name, bool *ecn_ca)
+ 	might_sleep();
+ 
+ 	rcu_read_lock();
+-	ca = tcp_ca_find_autoload(net, name);
++	ca = tcp_ca_find_autoload(name);
+ 	if (ca) {
+ 		key = ca->key;
+ 		*ecn_ca = ca->flags & TCP_CONG_NEEDS_ECN;
+@@ -283,7 +282,7 @@ int tcp_set_default_congestion_control(struct net *net, const char *name)
+ 	int ret;
+ 
+ 	rcu_read_lock();
+-	ca = tcp_ca_find_autoload(net, name);
++	ca = tcp_ca_find_autoload(name);
+ 	if (!ca) {
+ 		ret = -ENOENT;
+ 	} else if (!bpf_try_module_get(ca, ca->owner)) {
+@@ -421,7 +420,7 @@ int tcp_set_congestion_control(struct sock *sk, const char *name, bool load,
+ 	if (!load)
+ 		ca = tcp_ca_find(name);
+ 	else
+-		ca = tcp_ca_find_autoload(sock_net(sk), name);
++		ca = tcp_ca_find_autoload(name);
+ 
+ 	/* No change asking for existing value */
+ 	if (ca == icsk->icsk_ca_ops) {
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 8f8f93716ff85..da0f502553991 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -116,6 +116,7 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp)
+ 	const struct inet_timewait_sock *tw = inet_twsk(sktw);
+ 	const struct tcp_timewait_sock *tcptw = tcp_twsk(sktw);
+ 	struct tcp_sock *tp = tcp_sk(sk);
++	int ts_recent_stamp;
+ 
+ 	if (reuse == 2) {
+ 		/* Still does not detect *everything* that goes through
+@@ -154,10 +155,11 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp)
+ 	   If TW bucket has been already destroyed we fall back to VJ's scheme
+ 	   and use initial timestamp retrieved from peer table.
+ 	 */
+-	if (tcptw->tw_ts_recent_stamp &&
++	ts_recent_stamp = READ_ONCE(tcptw->tw_ts_recent_stamp);
++	if (ts_recent_stamp &&
+ 	    (!twp || (reuse && time_after32(ktime_get_seconds(),
+-					    tcptw->tw_ts_recent_stamp)))) {
+-		/* inet_twsk_hashdance() sets sk_refcnt after putting twsk
++					    ts_recent_stamp)))) {
++		/* inet_twsk_hashdance_schedule() sets sk_refcnt after putting twsk
+ 		 * and releasing the bucket lock.
+ 		 */
+ 		if (unlikely(!refcount_inc_not_zero(&sktw->sk_refcnt)))
+@@ -180,8 +182,8 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp)
+ 			if (!seq)
+ 				seq = 1;
+ 			WRITE_ONCE(tp->write_seq, seq);
+-			tp->rx_opt.ts_recent	   = tcptw->tw_ts_recent;
+-			tp->rx_opt.ts_recent_stamp = tcptw->tw_ts_recent_stamp;
++			tp->rx_opt.ts_recent	   = READ_ONCE(tcptw->tw_ts_recent);
++			tp->rx_opt.ts_recent_stamp = ts_recent_stamp;
+ 		}
+ 
+ 		return 1;
+@@ -1066,7 +1068,7 @@ static void tcp_v4_timewait_ack(struct sock *sk, struct sk_buff *skb)
+ 			tcptw->tw_snd_nxt, tcptw->tw_rcv_nxt,
+ 			tcptw->tw_rcv_wnd >> tw->tw_rcv_wscale,
+ 			tcp_tw_tsval(tcptw),
+-			tcptw->tw_ts_recent,
++			READ_ONCE(tcptw->tw_ts_recent),
+ 			tw->tw_bound_dev_if, &key,
+ 			tw->tw_transparent ? IP_REPLY_ARG_NOSRCCHECK : 0,
+ 			tw->tw_tos,
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 0fbebf6266e91..d5da3ec8f846e 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -101,16 +101,18 @@ tcp_timewait_state_process(struct inet_timewait_sock *tw, struct sk_buff *skb,
+ 	struct tcp_options_received tmp_opt;
+ 	struct tcp_timewait_sock *tcptw = tcp_twsk((struct sock *)tw);
+ 	bool paws_reject = false;
++	int ts_recent_stamp;
+ 
+ 	tmp_opt.saw_tstamp = 0;
+-	if (th->doff > (sizeof(*th) >> 2) && tcptw->tw_ts_recent_stamp) {
++	ts_recent_stamp = READ_ONCE(tcptw->tw_ts_recent_stamp);
++	if (th->doff > (sizeof(*th) >> 2) && ts_recent_stamp) {
+ 		tcp_parse_options(twsk_net(tw), skb, &tmp_opt, 0, NULL);
+ 
+ 		if (tmp_opt.saw_tstamp) {
+ 			if (tmp_opt.rcv_tsecr)
+ 				tmp_opt.rcv_tsecr -= tcptw->tw_ts_offset;
+-			tmp_opt.ts_recent	= tcptw->tw_ts_recent;
+-			tmp_opt.ts_recent_stamp	= tcptw->tw_ts_recent_stamp;
++			tmp_opt.ts_recent	= READ_ONCE(tcptw->tw_ts_recent);
++			tmp_opt.ts_recent_stamp	= ts_recent_stamp;
+ 			paws_reject = tcp_paws_reject(&tmp_opt, th->rst);
+ 		}
+ 	}
+@@ -152,8 +154,10 @@ tcp_timewait_state_process(struct inet_timewait_sock *tw, struct sk_buff *skb,
+ 		twsk_rcv_nxt_update(tcptw, TCP_SKB_CB(skb)->end_seq);
+ 
+ 		if (tmp_opt.saw_tstamp) {
+-			tcptw->tw_ts_recent_stamp = ktime_get_seconds();
+-			tcptw->tw_ts_recent	  = tmp_opt.rcv_tsval;
++			WRITE_ONCE(tcptw->tw_ts_recent_stamp,
++				  ktime_get_seconds());
++			WRITE_ONCE(tcptw->tw_ts_recent,
++				   tmp_opt.rcv_tsval);
+ 		}
+ 
+ 		inet_twsk_reschedule(tw, TCP_TIMEWAIT_LEN);
+@@ -197,8 +201,10 @@ tcp_timewait_state_process(struct inet_timewait_sock *tw, struct sk_buff *skb,
+ 		}
+ 
+ 		if (tmp_opt.saw_tstamp) {
+-			tcptw->tw_ts_recent	  = tmp_opt.rcv_tsval;
+-			tcptw->tw_ts_recent_stamp = ktime_get_seconds();
++			WRITE_ONCE(tcptw->tw_ts_recent,
++				   tmp_opt.rcv_tsval);
++			WRITE_ONCE(tcptw->tw_ts_recent_stamp,
++				   ktime_get_seconds());
+ 		}
+ 
+ 		inet_twsk_put(tw);
+@@ -225,7 +231,7 @@ tcp_timewait_state_process(struct inet_timewait_sock *tw, struct sk_buff *skb,
+ 	if (th->syn && !th->rst && !th->ack && !paws_reject &&
+ 	    (after(TCP_SKB_CB(skb)->seq, tcptw->tw_rcv_nxt) ||
+ 	     (tmp_opt.saw_tstamp &&
+-	      (s32)(tcptw->tw_ts_recent - tmp_opt.rcv_tsval) < 0))) {
++	      (s32)(READ_ONCE(tcptw->tw_ts_recent) - tmp_opt.rcv_tsval) < 0))) {
+ 		u32 isn = tcptw->tw_snd_nxt + 65535 + 2;
+ 		if (isn == 0)
+ 			isn++;
+@@ -344,11 +350,10 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
+ 		 * we complete the initialization.
+ 		 */
+ 		local_bh_disable();
+-		inet_twsk_schedule(tw, timeo);
+ 		/* Linkage updates.
+ 		 * Note that access to tw after this point is illegal.
+ 		 */
+-		inet_twsk_hashdance(tw, sk, net->ipv4.tcp_death_row.hashinfo);
++		inet_twsk_hashdance_schedule(tw, sk, net->ipv4.tcp_death_row.hashinfo, timeo);
+ 		local_bh_enable();
+ 	} else {
+ 		/* Sorry, if we're out of memory, just CLOSE this
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index c9a9506b714d7..a9644a8edb960 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3764,7 +3764,7 @@ static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
+ 	if (!rt)
+ 		goto out;
+ 
+-	rt->fib6_metrics = ip_fib_metrics_init(net, cfg->fc_mx, cfg->fc_mx_len,
++	rt->fib6_metrics = ip_fib_metrics_init(cfg->fc_mx, cfg->fc_mx_len,
+ 					       extack);
+ 	if (IS_ERR(rt->fib6_metrics)) {
+ 		err = PTR_ERR(rt->fib6_metrics);
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 3385faf1d5dcb..66f6fe5afb030 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1196,9 +1196,9 @@ static void tcp_v6_timewait_ack(struct sock *sk, struct sk_buff *skb)
+ 	tcp_v6_send_ack(sk, skb, tcptw->tw_snd_nxt, tcptw->tw_rcv_nxt,
+ 			tcptw->tw_rcv_wnd >> tw->tw_rcv_wscale,
+ 			tcp_tw_tsval(tcptw),
+-			tcptw->tw_ts_recent, tw->tw_bound_dev_if, &key,
+-			tw->tw_tclass, cpu_to_be32(tw->tw_flowlabel), tw->tw_priority,
+-			tw->tw_txhash);
++			READ_ONCE(tcptw->tw_ts_recent), tw->tw_bound_dev_if,
++			&key, tw->tw_tclass, cpu_to_be32(tw->tw_flowlabel),
++			tw->tw_priority, tw->tw_txhash);
+ 
+ #ifdef CONFIG_TCP_AO
+ out:
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 7ba329ebdda91..e44b2a26354b5 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -337,6 +337,8 @@ void ieee80211_bss_info_change_notify(struct ieee80211_sub_if_data *sdata,
+ 
+ 	might_sleep();
+ 
++	WARN_ON_ONCE(ieee80211_vif_is_mld(&sdata->vif));
++
+ 	if (!changed || sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
+ 		return;
+ 
+@@ -369,7 +371,6 @@ void ieee80211_bss_info_change_notify(struct ieee80211_sub_if_data *sdata,
+ 	if (changed & ~BSS_CHANGED_VIF_CFG_FLAGS) {
+ 		u64 ch = changed & ~BSS_CHANGED_VIF_CFG_FLAGS;
+ 
+-		/* FIXME: should be for each link */
+ 		trace_drv_link_info_changed(local, sdata, &sdata->vif.bss_conf,
+ 					    changed);
+ 		if (local->ops->link_info_changed)
+diff --git a/net/wireless/ibss.c b/net/wireless/ibss.c
+index 9f02ee5f08beb..34e5acff39351 100644
+--- a/net/wireless/ibss.c
++++ b/net/wireless/ibss.c
+@@ -3,7 +3,7 @@
+  * Some IBSS support code for cfg80211.
+  *
+  * Copyright 2009	Johannes Berg <johannes@sipsolutions.net>
+- * Copyright (C) 2020-2023 Intel Corporation
++ * Copyright (C) 2020-2024 Intel Corporation
+  */
+ 
+ #include <linux/etherdevice.h>
+@@ -94,6 +94,9 @@ int __cfg80211_join_ibss(struct cfg80211_registered_device *rdev,
+ 
+ 	lockdep_assert_held(&rdev->wiphy.mtx);
+ 
++	if (wdev->cac_started)
++		return -EBUSY;
++
+ 	if (wdev->u.ibss.ssid_len)
+ 		return -EALREADY;
+ 
+diff --git a/net/wireless/mesh.c b/net/wireless/mesh.c
+index 83306979fbe21..aaca65b66af48 100644
+--- a/net/wireless/mesh.c
++++ b/net/wireless/mesh.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /*
+  * Portions
+- * Copyright (C) 2022-2023 Intel Corporation
++ * Copyright (C) 2022-2024 Intel Corporation
+  */
+ #include <linux/ieee80211.h>
+ #include <linux/export.h>
+@@ -127,6 +127,9 @@ int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev,
+ 	if (!rdev->ops->join_mesh)
+ 		return -EOPNOTSUPP;
+ 
++	if (wdev->cac_started)
++		return -EBUSY;
++
+ 	if (!setup->chandef.chan) {
+ 		/* if no channel explicitly given, use preset channel */
+ 		setup->chandef = wdev->u.mesh.preset_chandef;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index c2829d673bc76..967bc4935b4ed 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -5965,6 +5965,9 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ 	if (!rdev->ops->start_ap)
+ 		return -EOPNOTSUPP;
+ 
++	if (wdev->cac_started)
++		return -EBUSY;
++
+ 	if (wdev->links[link_id].ap.beacon_interval)
+ 		return -EALREADY;
+ 
+@@ -9957,6 +9960,17 @@ static int nl80211_start_radar_detection(struct sk_buff *skb,
+ 
+ 	flush_delayed_work(&rdev->dfs_update_channels_wk);
+ 
++	switch (wdev->iftype) {
++	case NL80211_IFTYPE_AP:
++	case NL80211_IFTYPE_P2P_GO:
++	case NL80211_IFTYPE_MESH_POINT:
++	case NL80211_IFTYPE_ADHOC:
++		break;
++	default:
++		/* caution - see cfg80211_beaconing_iface_active() below */
++		return -EINVAL;
++	}
++
+ 	wiphy_lock(wiphy);
+ 
+ 	dfs_region = reg_get_dfs_region(wiphy);
+@@ -9987,12 +10001,7 @@ static int nl80211_start_radar_detection(struct sk_buff *skb,
+ 		goto unlock;
+ 	}
+ 
+-	if (netif_carrier_ok(dev)) {
+-		err = -EBUSY;
+-		goto unlock;
+-	}
+-
+-	if (wdev->cac_started) {
++	if (cfg80211_beaconing_iface_active(wdev) || wdev->cac_started) {
+ 		err = -EBUSY;
+ 		goto unlock;
+ 	}
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 292b530a6dd31..64c779788a646 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1604,7 +1604,7 @@ struct cfg80211_bss *__cfg80211_get_bss(struct wiphy *wiphy,
+ }
+ EXPORT_SYMBOL(__cfg80211_get_bss);
+ 
+-static void rb_insert_bss(struct cfg80211_registered_device *rdev,
++static bool rb_insert_bss(struct cfg80211_registered_device *rdev,
+ 			  struct cfg80211_internal_bss *bss)
+ {
+ 	struct rb_node **p = &rdev->bss_tree.rb_node;
+@@ -1620,7 +1620,7 @@ static void rb_insert_bss(struct cfg80211_registered_device *rdev,
+ 
+ 		if (WARN_ON(!cmp)) {
+ 			/* will sort of leak this BSS */
+-			return;
++			return false;
+ 		}
+ 
+ 		if (cmp < 0)
+@@ -1631,6 +1631,7 @@ static void rb_insert_bss(struct cfg80211_registered_device *rdev,
+ 
+ 	rb_link_node(&bss->rbn, parent, p);
+ 	rb_insert_color(&bss->rbn, &rdev->bss_tree);
++	return true;
+ }
+ 
+ static struct cfg80211_internal_bss *
+@@ -1657,6 +1658,34 @@ rb_find_bss(struct cfg80211_registered_device *rdev,
+ 	return NULL;
+ }
+ 
++static void cfg80211_insert_bss(struct cfg80211_registered_device *rdev,
++				struct cfg80211_internal_bss *bss)
++{
++	lockdep_assert_held(&rdev->bss_lock);
++
++	if (!rb_insert_bss(rdev, bss))
++		return;
++	list_add_tail(&bss->list, &rdev->bss_list);
++	rdev->bss_entries++;
++}
++
++static void cfg80211_rehash_bss(struct cfg80211_registered_device *rdev,
++                                struct cfg80211_internal_bss *bss)
++{
++	lockdep_assert_held(&rdev->bss_lock);
++
++	rb_erase(&bss->rbn, &rdev->bss_tree);
++	if (!rb_insert_bss(rdev, bss)) {
++		list_del(&bss->list);
++		if (!list_empty(&bss->hidden_list))
++			list_del_init(&bss->hidden_list);
++		if (!list_empty(&bss->pub.nontrans_list))
++			list_del_init(&bss->pub.nontrans_list);
++		rdev->bss_entries--;
++	}
++	rdev->bss_generation++;
++}
++
+ static bool cfg80211_combine_bsses(struct cfg80211_registered_device *rdev,
+ 				   struct cfg80211_internal_bss *new)
+ {
+@@ -1969,9 +1998,7 @@ __cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 			bss_ref_get(rdev, bss_from_pub(tmp->pub.transmitted_bss));
+ 		}
+ 
+-		list_add_tail(&new->list, &rdev->bss_list);
+-		rdev->bss_entries++;
+-		rb_insert_bss(rdev, new);
++		cfg80211_insert_bss(rdev, new);
+ 		found = new;
+ 	}
+ 
+@@ -3354,19 +3381,14 @@ void cfg80211_update_assoc_bss_entry(struct wireless_dev *wdev,
+ 		if (!WARN_ON(!__cfg80211_unlink_bss(rdev, new)))
+ 			rdev->bss_generation++;
+ 	}
+-
+-	rb_erase(&cbss->rbn, &rdev->bss_tree);
+-	rb_insert_bss(rdev, cbss);
+-	rdev->bss_generation++;
++	cfg80211_rehash_bss(rdev, cbss);
+ 
+ 	list_for_each_entry_safe(nontrans_bss, tmp,
+ 				 &cbss->pub.nontrans_list,
+ 				 nontrans_list) {
+ 		bss = bss_from_pub(nontrans_bss);
+ 		bss->pub.channel = chan;
+-		rb_erase(&bss->rbn, &rdev->bss_tree);
+-		rb_insert_bss(rdev, bss);
+-		rdev->bss_generation++;
++		cfg80211_rehash_bss(rdev, bss);
+ 	}
+ 
+ done:
+diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
+index bcfea073e3f2e..01b923d97a446 100644
+--- a/security/apparmor/apparmorfs.c
++++ b/security/apparmor/apparmorfs.c
+@@ -1692,6 +1692,10 @@ int __aafs_profile_mkdir(struct aa_profile *profile, struct dentry *parent)
+ 		struct aa_profile *p;
+ 		p = aa_deref_parent(profile);
+ 		dent = prof_dir(p);
++		if (!dent) {
++			error = -ENOENT;
++			goto fail2;
++		}
+ 		/* adding to parent that previously didn't have children */
+ 		dent = aafs_create_dir("profiles", dent);
+ 		if (IS_ERR(dent))
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 081129be5b62c..ab939e6449e41 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -4456,7 +4456,7 @@ static int smack_inet_conn_request(const struct sock *sk, struct sk_buff *skb,
+ 	rcu_read_unlock();
+ 
+ 	if (hskp == NULL)
+-		rc = netlbl_req_setattr(req, &skp->smk_netlabel);
++		rc = netlbl_req_setattr(req, &ssp->smk_out->smk_netlabel);
+ 	else
+ 		netlbl_req_delattr(req);
+ 
+diff --git a/sound/core/seq/seq_ports.h b/sound/core/seq/seq_ports.h
+index b111382f697aa..9e36738c0dd04 100644
+--- a/sound/core/seq/seq_ports.h
++++ b/sound/core/seq/seq_ports.h
+@@ -7,6 +7,7 @@
+ #define __SND_SEQ_PORTS_H
+ 
+ #include <sound/seq_kernel.h>
++#include <sound/ump_convert.h>
+ #include "seq_lock.h"
+ 
+ /* list of 'exported' ports */
+@@ -42,17 +43,6 @@ struct snd_seq_port_subs_info {
+ 	int (*close)(void *private_data, struct snd_seq_port_subscribe *info);
+ };
+ 
+-/* context for converting from legacy control event to UMP packet */
+-struct snd_seq_ump_midi2_bank {
+-	bool rpn_set;
+-	bool nrpn_set;
+-	bool bank_set;
+-	unsigned char cc_rpn_msb, cc_rpn_lsb;
+-	unsigned char cc_nrpn_msb, cc_nrpn_lsb;
+-	unsigned char cc_data_msb, cc_data_lsb;
+-	unsigned char cc_bank_msb, cc_bank_lsb;
+-};
+-
+ struct snd_seq_client_port {
+ 
+ 	struct snd_seq_addr addr;	/* client/port number */
+@@ -88,7 +78,7 @@ struct snd_seq_client_port {
+ 	unsigned char ump_group;
+ 
+ #if IS_ENABLED(CONFIG_SND_SEQ_UMP)
+-	struct snd_seq_ump_midi2_bank midi2_bank[16]; /* per channel */
++	struct ump_cvt_to_ump_bank midi2_bank[16]; /* per channel */
+ #endif
+ };
+ 
+diff --git a/sound/core/seq/seq_ump_convert.c b/sound/core/seq/seq_ump_convert.c
+index d9dacfbe4a9ae..4dd540cbb1cbb 100644
+--- a/sound/core/seq/seq_ump_convert.c
++++ b/sound/core/seq/seq_ump_convert.c
+@@ -368,7 +368,7 @@ static int cvt_ump_midi1_to_midi2(struct snd_seq_client *dest,
+ 	struct snd_seq_ump_event ev_cvt;
+ 	const union snd_ump_midi1_msg *midi1 = (const union snd_ump_midi1_msg *)event->ump;
+ 	union snd_ump_midi2_msg *midi2 = (union snd_ump_midi2_msg *)ev_cvt.ump;
+-	struct snd_seq_ump_midi2_bank *cc;
++	struct ump_cvt_to_ump_bank *cc;
+ 
+ 	ev_cvt = *event;
+ 	memset(&ev_cvt.ump, 0, sizeof(ev_cvt.ump));
+@@ -789,28 +789,45 @@ static int paf_ev_to_ump_midi2(const struct snd_seq_event *event,
+ 	return 1;
+ }
+ 
++static void reset_rpn(struct ump_cvt_to_ump_bank *cc)
++{
++	cc->rpn_set = 0;
++	cc->nrpn_set = 0;
++	cc->cc_rpn_msb = cc->cc_rpn_lsb = 0;
++	cc->cc_data_msb = cc->cc_data_lsb = 0;
++	cc->cc_data_msb_set = cc->cc_data_lsb_set = 0;
++}
++
+ /* set up the MIDI2 RPN/NRPN packet data from the parsed info */
+-static void fill_rpn(struct snd_seq_ump_midi2_bank *cc,
+-		     union snd_ump_midi2_msg *data,
+-		     unsigned char channel)
++static int fill_rpn(struct ump_cvt_to_ump_bank *cc,
++		    union snd_ump_midi2_msg *data,
++		    unsigned char channel,
++		    bool flush)
+ {
++	if (!(cc->cc_data_lsb_set || cc->cc_data_msb_set))
++		return 0; // skip
++	/* when not flushing, wait for complete data set */
++	if (!flush && (!cc->cc_data_lsb_set || !cc->cc_data_msb_set))
++		return 0; // skip
++
+ 	if (cc->rpn_set) {
+ 		data->rpn.status = UMP_MSG_STATUS_RPN;
+ 		data->rpn.bank = cc->cc_rpn_msb;
+ 		data->rpn.index = cc->cc_rpn_lsb;
+-		cc->rpn_set = 0;
+-		cc->cc_rpn_msb = cc->cc_rpn_lsb = 0;
+-	} else {
++	} else if (cc->nrpn_set) {
+ 		data->rpn.status = UMP_MSG_STATUS_NRPN;
+ 		data->rpn.bank = cc->cc_nrpn_msb;
+ 		data->rpn.index = cc->cc_nrpn_lsb;
+-		cc->nrpn_set = 0;
+-		cc->cc_nrpn_msb = cc->cc_nrpn_lsb = 0;
++	} else {
++		return 0; // skip
+ 	}
++
+ 	data->rpn.data = upscale_14_to_32bit((cc->cc_data_msb << 7) |
+ 					     cc->cc_data_lsb);
+ 	data->rpn.channel = channel;
+-	cc->cc_data_msb = cc->cc_data_lsb = 0;
++
++	reset_rpn(cc);
++	return 1;
+ }
+ 
+ /* convert CC event to MIDI 2.0 UMP */
+@@ -822,29 +839,39 @@ static int cc_ev_to_ump_midi2(const struct snd_seq_event *event,
+ 	unsigned char channel = event->data.control.channel & 0x0f;
+ 	unsigned char index = event->data.control.param & 0x7f;
+ 	unsigned char val = event->data.control.value & 0x7f;
+-	struct snd_seq_ump_midi2_bank *cc = &dest_port->midi2_bank[channel];
++	struct ump_cvt_to_ump_bank *cc = &dest_port->midi2_bank[channel];
++	int ret;
+ 
+ 	/* process special CC's (bank/rpn/nrpn) */
+ 	switch (index) {
+ 	case UMP_CC_RPN_MSB:
++		ret = fill_rpn(cc, data, channel, true);
+ 		cc->rpn_set = 1;
+ 		cc->cc_rpn_msb = val;
+-		return 0; // skip
++		if (cc->cc_rpn_msb == 0x7f && cc->cc_rpn_lsb == 0x7f)
++			reset_rpn(cc);
++		return ret;
+ 	case UMP_CC_RPN_LSB:
++		ret = fill_rpn(cc, data, channel, true);
+ 		cc->rpn_set = 1;
+ 		cc->cc_rpn_lsb = val;
+-		return 0; // skip
++		if (cc->cc_rpn_msb == 0x7f && cc->cc_rpn_lsb == 0x7f)
++			reset_rpn(cc);
++		return ret;
+ 	case UMP_CC_NRPN_MSB:
++		ret = fill_rpn(cc, data, channel, true);
+ 		cc->nrpn_set = 1;
+ 		cc->cc_nrpn_msb = val;
+-		return 0; // skip
++		return ret;
+ 	case UMP_CC_NRPN_LSB:
++		ret = fill_rpn(cc, data, channel, true);
+ 		cc->nrpn_set = 1;
+ 		cc->cc_nrpn_lsb = val;
+-		return 0; // skip
++		return ret;
+ 	case UMP_CC_DATA:
++		cc->cc_data_msb_set = 1;
+ 		cc->cc_data_msb = val;
+-		return 0; // skip
++		return fill_rpn(cc, data, channel, false);
+ 	case UMP_CC_BANK_SELECT:
+ 		cc->bank_set = 1;
+ 		cc->cc_bank_msb = val;
+@@ -854,11 +881,9 @@ static int cc_ev_to_ump_midi2(const struct snd_seq_event *event,
+ 		cc->cc_bank_lsb = val;
+ 		return 0; // skip
+ 	case UMP_CC_DATA_LSB:
++		cc->cc_data_lsb_set = 1;
+ 		cc->cc_data_lsb = val;
+-		if (!(cc->rpn_set || cc->nrpn_set))
+-			return 0; // skip
+-		fill_rpn(cc, data, channel);
+-		return 1;
++		return fill_rpn(cc, data, channel, false);
+ 	}
+ 
+ 	data->cc.status = status;
+@@ -887,7 +912,7 @@ static int pgm_ev_to_ump_midi2(const struct snd_seq_event *event,
+ 			       unsigned char status)
+ {
+ 	unsigned char channel = event->data.control.channel & 0x0f;
+-	struct snd_seq_ump_midi2_bank *cc = &dest_port->midi2_bank[channel];
++	struct ump_cvt_to_ump_bank *cc = &dest_port->midi2_bank[channel];
+ 
+ 	data->pg.status = status;
+ 	data->pg.channel = channel;
+@@ -924,8 +949,9 @@ static int ctrl14_ev_to_ump_midi2(const struct snd_seq_event *event,
+ {
+ 	unsigned char channel = event->data.control.channel & 0x0f;
+ 	unsigned char index = event->data.control.param & 0x7f;
+-	struct snd_seq_ump_midi2_bank *cc = &dest_port->midi2_bank[channel];
++	struct ump_cvt_to_ump_bank *cc = &dest_port->midi2_bank[channel];
+ 	unsigned char msb, lsb;
++	int ret;
+ 
+ 	msb = (event->data.control.value >> 7) & 0x7f;
+ 	lsb = event->data.control.value & 0x7f;
+@@ -939,28 +965,27 @@ static int ctrl14_ev_to_ump_midi2(const struct snd_seq_event *event,
+ 		cc->cc_bank_lsb = lsb;
+ 		return 0; // skip
+ 	case UMP_CC_RPN_MSB:
+-		cc->cc_rpn_msb = msb;
+-		fallthrough;
+ 	case UMP_CC_RPN_LSB:
+-		cc->rpn_set = 1;
++		ret = fill_rpn(cc, data, channel, true);
++		cc->cc_rpn_msb = msb;
+ 		cc->cc_rpn_lsb = lsb;
+-		return 0; // skip
++		cc->rpn_set = 1;
++		if (cc->cc_rpn_msb == 0x7f && cc->cc_rpn_lsb == 0x7f)
++			reset_rpn(cc);
++		return ret;
+ 	case UMP_CC_NRPN_MSB:
+-		cc->cc_nrpn_msb = msb;
+-		fallthrough;
+ 	case UMP_CC_NRPN_LSB:
++		ret = fill_rpn(cc, data, channel, true);
++		cc->cc_nrpn_msb = msb;
+ 		cc->nrpn_set = 1;
+ 		cc->cc_nrpn_lsb = lsb;
+-		return 0; // skip
++		return ret;
+ 	case UMP_CC_DATA:
+-		cc->cc_data_msb = msb;
+-		fallthrough;
+ 	case UMP_CC_DATA_LSB:
++		cc->cc_data_msb_set = cc->cc_data_lsb_set = 1;
++		cc->cc_data_msb = msb;
+ 		cc->cc_data_lsb = lsb;
+-		if (!(cc->rpn_set || cc->nrpn_set))
+-			return 0; // skip
+-		fill_rpn(cc, data, channel);
+-		return 1;
++		return fill_rpn(cc, data, channel, false);
+ 	}
+ 
+ 	data->cc.status = UMP_MSG_STATUS_CC;
+diff --git a/sound/core/ump_convert.c b/sound/core/ump_convert.c
+index f67c44c83fde4..0fe13d0316568 100644
+--- a/sound/core/ump_convert.c
++++ b/sound/core/ump_convert.c
+@@ -287,25 +287,42 @@ static int cvt_legacy_system_to_ump(struct ump_cvt_to_ump *cvt,
+ 	return 4;
+ }
+ 
+-static void fill_rpn(struct ump_cvt_to_ump_bank *cc,
+-		     union snd_ump_midi2_msg *midi2)
++static void reset_rpn(struct ump_cvt_to_ump_bank *cc)
+ {
++	cc->rpn_set = 0;
++	cc->nrpn_set = 0;
++	cc->cc_rpn_msb = cc->cc_rpn_lsb = 0;
++	cc->cc_data_msb = cc->cc_data_lsb = 0;
++	cc->cc_data_msb_set = cc->cc_data_lsb_set = 0;
++}
++
++static int fill_rpn(struct ump_cvt_to_ump_bank *cc,
++		    union snd_ump_midi2_msg *midi2,
++		    bool flush)
++{
++	if (!(cc->cc_data_lsb_set || cc->cc_data_msb_set))
++		return 0; // skip
++	/* when not flushing, wait for complete data set */
++	if (!flush && (!cc->cc_data_lsb_set || !cc->cc_data_msb_set))
++		return 0; // skip
++
+ 	if (cc->rpn_set) {
+ 		midi2->rpn.status = UMP_MSG_STATUS_RPN;
+ 		midi2->rpn.bank = cc->cc_rpn_msb;
+ 		midi2->rpn.index = cc->cc_rpn_lsb;
+-		cc->rpn_set = 0;
+-		cc->cc_rpn_msb = cc->cc_rpn_lsb = 0;
+-	} else {
++	} else if (cc->nrpn_set) {
+ 		midi2->rpn.status = UMP_MSG_STATUS_NRPN;
+ 		midi2->rpn.bank = cc->cc_nrpn_msb;
+ 		midi2->rpn.index = cc->cc_nrpn_lsb;
+-		cc->nrpn_set = 0;
+-		cc->cc_nrpn_msb = cc->cc_nrpn_lsb = 0;
++	} else {
++		return 0; // skip
+ 	}
++
+ 	midi2->rpn.data = upscale_14_to_32bit((cc->cc_data_msb << 7) |
+ 					      cc->cc_data_lsb);
+-	cc->cc_data_msb = cc->cc_data_lsb = 0;
++
++	reset_rpn(cc);
++	return 1;
+ }
+ 
+ /* convert to a MIDI 1.0 Channel Voice message */
+@@ -318,6 +335,7 @@ static int cvt_legacy_cmd_to_ump(struct ump_cvt_to_ump *cvt,
+ 	struct ump_cvt_to_ump_bank *cc;
+ 	union snd_ump_midi2_msg *midi2 = (union snd_ump_midi2_msg *)data;
+ 	unsigned char status, channel;
++	int ret;
+ 
+ 	BUILD_BUG_ON(sizeof(union snd_ump_midi1_msg) != 4);
+ 	BUILD_BUG_ON(sizeof(union snd_ump_midi2_msg) != 8);
+@@ -358,24 +376,33 @@ static int cvt_legacy_cmd_to_ump(struct ump_cvt_to_ump *cvt,
+ 	case UMP_MSG_STATUS_CC:
+ 		switch (buf[1]) {
+ 		case UMP_CC_RPN_MSB:
++			ret = fill_rpn(cc, midi2, true);
+ 			cc->rpn_set = 1;
+ 			cc->cc_rpn_msb = buf[2];
+-			return 0; // skip
++			if (cc->cc_rpn_msb == 0x7f && cc->cc_rpn_lsb == 0x7f)
++				reset_rpn(cc);
++			return ret;
+ 		case UMP_CC_RPN_LSB:
++			ret = fill_rpn(cc, midi2, true);
+ 			cc->rpn_set = 1;
+ 			cc->cc_rpn_lsb = buf[2];
+-			return 0; // skip
++			if (cc->cc_rpn_msb == 0x7f && cc->cc_rpn_lsb == 0x7f)
++				reset_rpn(cc);
++			return ret;
+ 		case UMP_CC_NRPN_MSB:
++			ret = fill_rpn(cc, midi2, true);
+ 			cc->nrpn_set = 1;
+ 			cc->cc_nrpn_msb = buf[2];
+-			return 0; // skip
++			return ret;
+ 		case UMP_CC_NRPN_LSB:
++			ret = fill_rpn(cc, midi2, true);
+ 			cc->nrpn_set = 1;
+ 			cc->cc_nrpn_lsb = buf[2];
+-			return 0; // skip
++			return ret;
+ 		case UMP_CC_DATA:
++			cc->cc_data_msb_set = 1;
+ 			cc->cc_data_msb = buf[2];
+-			return 0; // skip
++			return fill_rpn(cc, midi2, false);
+ 		case UMP_CC_BANK_SELECT:
+ 			cc->bank_set = 1;
+ 			cc->cc_bank_msb = buf[2];
+@@ -385,12 +412,9 @@ static int cvt_legacy_cmd_to_ump(struct ump_cvt_to_ump *cvt,
+ 			cc->cc_bank_lsb = buf[2];
+ 			return 0; // skip
+ 		case UMP_CC_DATA_LSB:
++			cc->cc_data_lsb_set = 1;
+ 			cc->cc_data_lsb = buf[2];
+-			if (cc->rpn_set || cc->nrpn_set)
+-				fill_rpn(cc, midi2);
+-			else
+-				return 0; // skip
+-			break;
++			return fill_rpn(cc, midi2, false);
+ 		default:
+ 			midi2->cc.index = buf[1];
+ 			midi2->cc.data = upscale_7_to_32bit(buf[2]);
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index f64d9dc197a31..9cff87dfbecbb 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -4955,6 +4955,69 @@ void snd_hda_gen_stream_pm(struct hda_codec *codec, hda_nid_t nid, bool on)
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_gen_stream_pm);
+ 
++/* forcibly mute the speaker output without caching; return true if updated */
++static bool force_mute_output_path(struct hda_codec *codec, hda_nid_t nid)
++{
++	if (!nid)
++		return false;
++	if (!nid_has_mute(codec, nid, HDA_OUTPUT))
++		return false; /* no mute, skip */
++	if (snd_hda_codec_amp_read(codec, nid, 0, HDA_OUTPUT, 0) &
++	    snd_hda_codec_amp_read(codec, nid, 1, HDA_OUTPUT, 0) &
++	    HDA_AMP_MUTE)
++		return false; /* both channels already muted, skip */
++
++	/* direct amp update without caching */
++	snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_AMP_GAIN_MUTE,
++			    AC_AMP_SET_OUTPUT | AC_AMP_SET_LEFT |
++			    AC_AMP_SET_RIGHT | HDA_AMP_MUTE);
++	return true;
++}
++
++/**
++ * snd_hda_gen_shutup_speakers - Forcibly mute the speaker outputs
++ * @codec: the HDA codec
++ *
++ * Forcibly mute the speaker outputs, to be called at suspend or shutdown.
++ *
++ * The mute state done by this function isn't cached, hence the original state
++ * will be restored at resume.
++ *
++ * Return true if the mute state has been changed.
++ */
++bool snd_hda_gen_shutup_speakers(struct hda_codec *codec)
++{
++	struct hda_gen_spec *spec = codec->spec;
++	const int *paths;
++	const struct nid_path *path;
++	int i, p, num_paths;
++	bool updated = false;
++
++	/* if already powered off, do nothing */
++	if (!snd_hdac_is_power_on(&codec->core))
++		return false;
++
++	if (spec->autocfg.line_out_type == AUTO_PIN_SPEAKER_OUT) {
++		paths = spec->out_paths;
++		num_paths = spec->autocfg.line_outs;
++	} else {
++		paths = spec->speaker_paths;
++		num_paths = spec->autocfg.speaker_outs;
++	}
++
++	for (i = 0; i < num_paths; i++) {
++		path = snd_hda_get_path_from_idx(codec, paths[i]);
++		if (!path)
++			continue;
++		for (p = 0; p < path->depth; p++)
++			if (force_mute_output_path(codec, path->path[p]))
++				updated = true;
++	}
++
++	return updated;
++}
++EXPORT_SYMBOL_GPL(snd_hda_gen_shutup_speakers);
++
+ /**
+  * snd_hda_gen_parse_auto_config - Parse the given BIOS configuration and
+  * set up the hda_gen_spec
+diff --git a/sound/pci/hda/hda_generic.h b/sound/pci/hda/hda_generic.h
+index 8f5ecf740c491..08544601b4ce2 100644
+--- a/sound/pci/hda/hda_generic.h
++++ b/sound/pci/hda/hda_generic.h
+@@ -353,5 +353,6 @@ int snd_hda_gen_add_mute_led_cdev(struct hda_codec *codec,
+ int snd_hda_gen_add_micmute_led_cdev(struct hda_codec *codec,
+ 				     int (*callback)(struct led_classdev *,
+ 						     enum led_brightness));
++bool snd_hda_gen_shutup_speakers(struct hda_codec *codec);
+ 
+ #endif /* __SOUND_HDA_GENERIC_H */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 4472923ba694b..f030669243f9a 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -205,6 +205,8 @@ static void cx_auto_shutdown(struct hda_codec *codec)
+ {
+ 	struct conexant_spec *spec = codec->spec;
+ 
++	snd_hda_gen_shutup_speakers(codec);
++
+ 	/* Turn the problematic codec into D3 to avoid spurious noises
+ 	   from the internal speaker during (and after) reboot */
+ 	cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, false);
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index d597e59863ee3..f6c1dbd0ebcf5 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -220,6 +220,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21J6"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "21M3"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -430,6 +437,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "8A3E"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
++			DMI_MATCH(DMI_BOARD_NAME, "8B27"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/codecs/es8326.c b/sound/soc/codecs/es8326.c
+index 6a4e42e5e35b9..e620af9b864cb 100644
+--- a/sound/soc/codecs/es8326.c
++++ b/sound/soc/codecs/es8326.c
+@@ -825,6 +825,8 @@ static void es8326_jack_detect_handler(struct work_struct *work)
+ 		es8326_disable_micbias(es8326->component);
+ 		if (es8326->jack->status & SND_JACK_HEADPHONE) {
+ 			dev_dbg(comp->dev, "Report hp remove event\n");
++			snd_soc_jack_report(es8326->jack, 0,
++				    SND_JACK_BTN_0 | SND_JACK_BTN_1 | SND_JACK_BTN_2);
+ 			snd_soc_jack_report(es8326->jack, 0, SND_JACK_HEADSET);
+ 			/* mute adc when mic path switch */
+ 			regmap_write(es8326->regmap, ES8326_ADC1_SRC, 0x44);
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index c0ba79a8ad6da..a4762c49a8786 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -420,12 +420,17 @@ reset_with_fail()
+ 	fi
+ }
+ 
++start_events()
++{
++	mptcp_lib_events "${ns1}" "${evts_ns1}" evts_ns1_pid
++	mptcp_lib_events "${ns2}" "${evts_ns2}" evts_ns2_pid
++}
++
+ reset_with_events()
+ {
+ 	reset "${1}" || return 1
+ 
+-	mptcp_lib_events "${ns1}" "${evts_ns1}" evts_ns1_pid
+-	mptcp_lib_events "${ns2}" "${evts_ns2}" evts_ns2_pid
++	start_events
+ }
+ 
+ reset_with_tcp_filter()
+@@ -3333,6 +3338,36 @@ userspace_pm_chk_get_addr()
+ 	fi
+ }
+ 
++# $1: ns ; $2: event type ; $3: count
++chk_evt_nr()
++{
++	local ns=${1}
++	local evt_name="${2}"
++	local exp="${3}"
++
++	local evts="${evts_ns1}"
++	local evt="${!evt_name}"
++	local count
++
++	evt_name="${evt_name:16}" # without MPTCP_LIB_EVENT_
++	[ "${ns}" == "ns2" ] && evts="${evts_ns2}"
++
++	print_check "event ${ns} ${evt_name} (${exp})"
++
++	if [[ "${evt_name}" = "LISTENER_"* ]] &&
++	   ! mptcp_lib_kallsyms_has "mptcp_event_pm_listener$"; then
++		print_skip "event not supported"
++		return
++	fi
++
++	count=$(grep -cw "type:${evt}" "${evts}")
++	if [ "${count}" != "${exp}" ]; then
++		fail_test "got ${count} events, expected ${exp}"
++	else
++		print_ok
++	fi
++}
++
+ userspace_tests()
+ {
+ 	# userspace pm type prevents add_addr
+@@ -3572,6 +3607,7 @@ endpoint_tests()
+ 
+ 	if reset_with_tcp_filter "delete and re-add" ns2 10.0.3.2 REJECT OUTPUT &&
+ 	   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
++		start_events
+ 		pm_nl_set_limits $ns1 0 3
+ 		pm_nl_set_limits $ns2 0 3
+ 		pm_nl_add_endpoint $ns2 10.0.1.2 id 1 dev ns2eth1 flags subflow
+@@ -3623,9 +3659,129 @@ endpoint_tests()
+ 
+ 		mptcp_lib_kill_wait $tests_pid
+ 
++		kill_events_pids
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_LISTENER_CREATED 1
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_CREATED 1
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_ESTABLISHED 1
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_ANNOUNCED 0
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_REMOVED 4
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_SUB_ESTABLISHED 6
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_SUB_CLOSED 4
++
++		chk_evt_nr ns2 MPTCP_LIB_EVENT_CREATED 1
++		chk_evt_nr ns2 MPTCP_LIB_EVENT_ESTABLISHED 1
++		chk_evt_nr ns2 MPTCP_LIB_EVENT_ANNOUNCED 0
++		chk_evt_nr ns2 MPTCP_LIB_EVENT_REMOVED 0
++		chk_evt_nr ns2 MPTCP_LIB_EVENT_SUB_ESTABLISHED 6
++		chk_evt_nr ns2 MPTCP_LIB_EVENT_SUB_CLOSED 5 # one has been closed before estab
++
+ 		chk_join_nr 6 6 6
+ 		chk_rm_nr 4 4
+ 	fi
++
++	# remove and re-add
++	if reset_with_events "delete re-add signal" &&
++	   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
++		pm_nl_set_limits $ns1 0 3
++		pm_nl_set_limits $ns2 3 3
++		pm_nl_add_endpoint $ns1 10.0.2.1 id 1 flags signal
++		# broadcast IP: no packet for this address will be received on ns1
++		pm_nl_add_endpoint $ns1 224.0.0.1 id 2 flags signal
++		pm_nl_add_endpoint $ns1 10.0.1.1 id 42 flags signal
++		test_linkfail=4 speed=5 \
++			run_tests $ns1 $ns2 10.0.1.1 &
++		local tests_pid=$!
++
++		wait_mpj $ns2
++		pm_nl_check_endpoint "creation" \
++			$ns1 10.0.2.1 id 1 flags signal
++		chk_subflow_nr "before delete" 2
++		chk_mptcp_info subflows 1 subflows 1
++
++		pm_nl_del_endpoint $ns1 1 10.0.2.1
++		pm_nl_del_endpoint $ns1 2 224.0.0.1
++		sleep 0.5
++		chk_subflow_nr "after delete" 1
++		chk_mptcp_info subflows 0 subflows 0
++
++		pm_nl_add_endpoint $ns1 10.0.2.1 id 1 flags signal
++		pm_nl_add_endpoint $ns1 10.0.3.1 id 2 flags signal
++		wait_mpj $ns2
++		chk_subflow_nr "after re-add" 3
++		chk_mptcp_info subflows 2 subflows 2
++
++		pm_nl_del_endpoint $ns1 42 10.0.1.1
++		sleep 0.5
++		chk_subflow_nr "after delete ID 0" 2
++		chk_mptcp_info subflows 2 subflows 2
++
++		pm_nl_add_endpoint $ns1 10.0.1.1 id 99 flags signal
++		wait_mpj $ns2
++		chk_subflow_nr "after re-add ID 0" 3
++		chk_mptcp_info subflows 3 subflows 3
++
++		pm_nl_del_endpoint $ns1 99 10.0.1.1
++		sleep 0.5
++		chk_subflow_nr "after re-delete ID 0" 2
++		chk_mptcp_info subflows 2 subflows 2
++
++		pm_nl_add_endpoint $ns1 10.0.1.1 id 88 flags signal
++		wait_mpj $ns2
++		chk_subflow_nr "after re-re-add ID 0" 3
++		chk_mptcp_info subflows 3 subflows 3
++		mptcp_lib_kill_wait $tests_pid
++
++		kill_events_pids
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_LISTENER_CREATED 1
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_CREATED 1
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_ESTABLISHED 1
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_ANNOUNCED 0
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_REMOVED 0
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_SUB_ESTABLISHED 5
++		chk_evt_nr ns1 MPTCP_LIB_EVENT_SUB_CLOSED 3
++
++		chk_evt_nr ns2 MPTCP_LIB_EVENT_CREATED 1
++		chk_evt_nr ns2 MPTCP_LIB_EVENT_ESTABLISHED 1
++		chk_evt_nr ns2 MPTCP_LIB_EVENT_ANNOUNCED 6
++		chk_evt_nr ns2 MPTCP_LIB_EVENT_REMOVED 4
++		chk_evt_nr ns2 MPTCP_LIB_EVENT_SUB_ESTABLISHED 5
++		chk_evt_nr ns2 MPTCP_LIB_EVENT_SUB_CLOSED 3
++
++		chk_join_nr 5 5 5
++		chk_add_nr 6 6
++		chk_rm_nr 4 3 invert
++	fi
++
++	# flush and re-add
++	if reset_with_tcp_filter "flush re-add" ns2 10.0.3.2 REJECT OUTPUT &&
++	   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
++		pm_nl_set_limits $ns1 0 2
++		pm_nl_set_limits $ns2 1 2
++		# broadcast IP: no packet for this address will be received on ns1
++		pm_nl_add_endpoint $ns1 224.0.0.1 id 2 flags signal
++		pm_nl_add_endpoint $ns2 10.0.3.2 id 3 flags subflow
++		test_linkfail=4 speed=20 \
++			run_tests $ns1 $ns2 10.0.1.1 &
++		local tests_pid=$!
++
++		wait_attempt_fail $ns2
++		chk_subflow_nr "before flush" 1
++		chk_mptcp_info subflows 0 subflows 0
++
++		pm_nl_flush_endpoint $ns2
++		pm_nl_flush_endpoint $ns1
++		wait_rm_addr $ns2 0
++		ip netns exec "${ns2}" ${iptables} -D OUTPUT -s "10.0.3.2" -p tcp -j REJECT
++		pm_nl_add_endpoint $ns2 10.0.3.2 id 3 flags subflow
++		wait_mpj $ns2
++		pm_nl_add_endpoint $ns1 10.0.3.1 id 2 flags signal
++		wait_mpj $ns2
++		mptcp_lib_kill_wait $tests_pid
++
++		chk_join_nr 2 2 2
++		chk_add_nr 2 2
++		chk_rm_nr 1 0 invert
++	fi
+ }
+ 
+ # [$1: error message]
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_lib.sh b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+index 6ffa9b7a3260d..e299090eb0426 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_lib.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+@@ -9,10 +9,14 @@ readonly KSFT_SKIP=4
+ readonly KSFT_TEST="${MPTCP_LIB_KSFT_TEST:-$(basename "${0}" .sh)}"
+ 
+ # These variables are used in some selftests, read-only
++declare -rx MPTCP_LIB_EVENT_CREATED=1           # MPTCP_EVENT_CREATED
++declare -rx MPTCP_LIB_EVENT_ESTABLISHED=2       # MPTCP_EVENT_ESTABLISHED
++declare -rx MPTCP_LIB_EVENT_CLOSED=3            # MPTCP_EVENT_CLOSED
+ declare -rx MPTCP_LIB_EVENT_ANNOUNCED=6         # MPTCP_EVENT_ANNOUNCED
+ declare -rx MPTCP_LIB_EVENT_REMOVED=7           # MPTCP_EVENT_REMOVED
+ declare -rx MPTCP_LIB_EVENT_SUB_ESTABLISHED=10  # MPTCP_EVENT_SUB_ESTABLISHED
+ declare -rx MPTCP_LIB_EVENT_SUB_CLOSED=11       # MPTCP_EVENT_SUB_CLOSED
++declare -rx MPTCP_LIB_EVENT_SUB_PRIORITY=13     # MPTCP_EVENT_SUB_PRIORITY
+ declare -rx MPTCP_LIB_EVENT_LISTENER_CREATED=15 # MPTCP_EVENT_LISTENER_CREATED
+ declare -rx MPTCP_LIB_EVENT_LISTENER_CLOSED=16  # MPTCP_EVENT_LISTENER_CLOSED
+ 



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-12 12:30 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-12 12:30 UTC (permalink / raw
  To: gentoo-commits

commit:     0e58d5899781a7e637af315dada54546b9564ee9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 12 12:30:16 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 12 12:30:16 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0e58d589

Linux patch 6.10.10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1009_linux-6.10.10.patch | 18867 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 18871 insertions(+)

diff --git a/0000_README b/0000_README
index 20bc0dd9..3b3f17cf 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-6.10.9.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.9
 
+Patch:  1009_linux-6.10.10.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.10
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1009_linux-6.10.10.patch b/1009_linux-6.10.10.patch
new file mode 100644
index 00000000..3481e02a
--- /dev/null
+++ b/1009_linux-6.10.10.patch
@@ -0,0 +1,18867 @@
+diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
+index 8fbb0519d5569..4a7a59bbf76f8 100644
+--- a/Documentation/admin-guide/cgroup-v2.rst
++++ b/Documentation/admin-guide/cgroup-v2.rst
+@@ -1706,9 +1706,10 @@ PAGE_SIZE multiple when read back.
+ 	entries fault back in or are written out to disk.
+ 
+   memory.zswap.writeback
+-	A read-write single value file. The default value is "1". The
+-	initial value of the root cgroup is 1, and when a new cgroup is
+-	created, it inherits the current value of its parent.
++	A read-write single value file. The default value is "1".
++	Note that this setting is hierarchical, i.e. the writeback would be
++	implicitly disabled for child cgroups if the upper hierarchy
++	does so.
+ 
+ 	When this is set to 0, all swapping attempts to swapping devices
+ 	are disabled. This included both zswap writebacks, and swapping due
+@@ -2346,8 +2347,12 @@ Cpuset Interface Files
+ 	is always a subset of it.
+ 
+ 	Users can manually set it to a value that is different from
+-	"cpuset.cpus".	The only constraint in setting it is that the
+-	list of CPUs must be exclusive with respect to its sibling.
++	"cpuset.cpus".	One constraint in setting it is that the list of
++	CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
++	of its sibling.  If "cpuset.cpus.exclusive" of a sibling cgroup
++	isn't set, its "cpuset.cpus" value, if set, cannot be a subset
++	of it to leave at least one CPU available when the exclusive
++	CPUs are taken away.
+ 
+ 	For a parent cgroup, any one of its exclusive CPUs can only
+ 	be distributed to at most one of its child cgroups.  Having an
+diff --git a/Documentation/devicetree/bindings/nvmem/xlnx,zynqmp-nvmem.yaml b/Documentation/devicetree/bindings/nvmem/xlnx,zynqmp-nvmem.yaml
+index 917c40d5c382f..1cbe44ab23b1d 100644
+--- a/Documentation/devicetree/bindings/nvmem/xlnx,zynqmp-nvmem.yaml
++++ b/Documentation/devicetree/bindings/nvmem/xlnx,zynqmp-nvmem.yaml
+@@ -28,7 +28,7 @@ unevaluatedProperties: false
+ 
+ examples:
+   - |
+-    nvmem {
++    soc-nvmem {
+         compatible = "xlnx,zynqmp-nvmem-fw";
+         nvmem-layout {
+             compatible = "fixed-layout";
+diff --git a/Makefile b/Makefile
+index 5945cce6b0663..9b4614c0fcbb6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
+index 6792a1f83f2ad..a407f9cd549ed 100644
+--- a/arch/arm64/include/asm/acpi.h
++++ b/arch/arm64/include/asm/acpi.h
+@@ -119,6 +119,18 @@ static inline u32 get_acpi_id_for_cpu(unsigned int cpu)
+ 	return	acpi_cpu_get_madt_gicc(cpu)->uid;
+ }
+ 
++static inline int get_cpu_for_acpi_id(u32 uid)
++{
++	int cpu;
++
++	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
++		if (acpi_cpu_get_madt_gicc(cpu) &&
++		    uid == get_acpi_id_for_cpu(cpu))
++			return cpu;
++
++	return -EINVAL;
++}
++
+ static inline void arch_fix_phys_package_id(int num, u32 slot) { }
+ void __init acpi_init_cpus(void);
+ int apei_claim_sea(struct pt_regs *regs);
+diff --git a/arch/arm64/kernel/acpi_numa.c b/arch/arm64/kernel/acpi_numa.c
+index ccbff21ce1faf..2465f291c7e17 100644
+--- a/arch/arm64/kernel/acpi_numa.c
++++ b/arch/arm64/kernel/acpi_numa.c
+@@ -34,17 +34,6 @@ int __init acpi_numa_get_nid(unsigned int cpu)
+ 	return acpi_early_node_map[cpu];
+ }
+ 
+-static inline int get_cpu_for_acpi_id(u32 uid)
+-{
+-	int cpu;
+-
+-	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
+-		if (uid == get_acpi_id_for_cpu(cpu))
+-			return cpu;
+-
+-	return -EINVAL;
+-}
+-
+ static int __init acpi_parse_gicc_pxm(union acpi_subtable_headers *header,
+ 				      const unsigned long end)
+ {
+diff --git a/arch/loongarch/include/asm/hugetlb.h b/arch/loongarch/include/asm/hugetlb.h
+index aa44b3fe43dde..5da32c00d483f 100644
+--- a/arch/loongarch/include/asm/hugetlb.h
++++ b/arch/loongarch/include/asm/hugetlb.h
+@@ -34,7 +34,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ 					    unsigned long addr, pte_t *ptep)
+ {
+ 	pte_t clear;
+-	pte_t pte = *ptep;
++	pte_t pte = ptep_get(ptep);
+ 
+ 	pte_val(clear) = (unsigned long)invalid_pte_table;
+ 	set_pte_at(mm, addr, ptep, clear);
+@@ -65,7 +65,7 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+ 					     pte_t *ptep, pte_t pte,
+ 					     int dirty)
+ {
+-	int changed = !pte_same(*ptep, pte);
++	int changed = !pte_same(ptep_get(ptep), pte);
+ 
+ 	if (changed) {
+ 		set_pte_at(vma->vm_mm, addr, ptep, pte);
+diff --git a/arch/loongarch/include/asm/kfence.h b/arch/loongarch/include/asm/kfence.h
+index 92636e82957c7..da9e93024626c 100644
+--- a/arch/loongarch/include/asm/kfence.h
++++ b/arch/loongarch/include/asm/kfence.h
+@@ -53,13 +53,13 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
+ {
+ 	pte_t *pte = virt_to_kpte(addr);
+ 
+-	if (WARN_ON(!pte) || pte_none(*pte))
++	if (WARN_ON(!pte) || pte_none(ptep_get(pte)))
+ 		return false;
+ 
+ 	if (protect)
+-		set_pte(pte, __pte(pte_val(*pte) & ~(_PAGE_VALID | _PAGE_PRESENT)));
++		set_pte(pte, __pte(pte_val(ptep_get(pte)) & ~(_PAGE_VALID | _PAGE_PRESENT)));
+ 	else
+-		set_pte(pte, __pte(pte_val(*pte) | (_PAGE_VALID | _PAGE_PRESENT)));
++		set_pte(pte, __pte(pte_val(ptep_get(pte)) | (_PAGE_VALID | _PAGE_PRESENT)));
+ 
+ 	preempt_disable();
+ 	local_flush_tlb_one(addr);
+diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
+index af3acdf3481a6..c4b1a7595c4eb 100644
+--- a/arch/loongarch/include/asm/pgtable.h
++++ b/arch/loongarch/include/asm/pgtable.h
+@@ -106,6 +106,9 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+ #define KFENCE_AREA_START	(VMEMMAP_END + 1)
+ #define KFENCE_AREA_END		(KFENCE_AREA_START + KFENCE_AREA_SIZE - 1)
+ 
++#define ptep_get(ptep) READ_ONCE(*(ptep))
++#define pmdp_get(pmdp) READ_ONCE(*(pmdp))
++
+ #define pte_ERROR(e) \
+ 	pr_err("%s:%d: bad pte %016lx.\n", __FILE__, __LINE__, pte_val(e))
+ #ifndef __PAGETABLE_PMD_FOLDED
+@@ -147,11 +150,6 @@ static inline int p4d_present(p4d_t p4d)
+ 	return p4d_val(p4d) != (unsigned long)invalid_pud_table;
+ }
+ 
+-static inline void p4d_clear(p4d_t *p4dp)
+-{
+-	p4d_val(*p4dp) = (unsigned long)invalid_pud_table;
+-}
+-
+ static inline pud_t *p4d_pgtable(p4d_t p4d)
+ {
+ 	return (pud_t *)p4d_val(p4d);
+@@ -159,7 +157,12 @@ static inline pud_t *p4d_pgtable(p4d_t p4d)
+ 
+ static inline void set_p4d(p4d_t *p4d, p4d_t p4dval)
+ {
+-	*p4d = p4dval;
++	WRITE_ONCE(*p4d, p4dval);
++}
++
++static inline void p4d_clear(p4d_t *p4dp)
++{
++	set_p4d(p4dp, __p4d((unsigned long)invalid_pud_table));
+ }
+ 
+ #define p4d_phys(p4d)		PHYSADDR(p4d_val(p4d))
+@@ -193,17 +196,20 @@ static inline int pud_present(pud_t pud)
+ 	return pud_val(pud) != (unsigned long)invalid_pmd_table;
+ }
+ 
+-static inline void pud_clear(pud_t *pudp)
++static inline pmd_t *pud_pgtable(pud_t pud)
+ {
+-	pud_val(*pudp) = ((unsigned long)invalid_pmd_table);
++	return (pmd_t *)pud_val(pud);
+ }
+ 
+-static inline pmd_t *pud_pgtable(pud_t pud)
++static inline void set_pud(pud_t *pud, pud_t pudval)
+ {
+-	return (pmd_t *)pud_val(pud);
++	WRITE_ONCE(*pud, pudval);
+ }
+ 
+-#define set_pud(pudptr, pudval) do { *(pudptr) = (pudval); } while (0)
++static inline void pud_clear(pud_t *pudp)
++{
++	set_pud(pudp, __pud((unsigned long)invalid_pmd_table));
++}
+ 
+ #define pud_phys(pud)		PHYSADDR(pud_val(pud))
+ #define pud_page(pud)		(pfn_to_page(pud_phys(pud) >> PAGE_SHIFT))
+@@ -231,12 +237,15 @@ static inline int pmd_present(pmd_t pmd)
+ 	return pmd_val(pmd) != (unsigned long)invalid_pte_table;
+ }
+ 
+-static inline void pmd_clear(pmd_t *pmdp)
++static inline void set_pmd(pmd_t *pmd, pmd_t pmdval)
+ {
+-	pmd_val(*pmdp) = ((unsigned long)invalid_pte_table);
++	WRITE_ONCE(*pmd, pmdval);
+ }
+ 
+-#define set_pmd(pmdptr, pmdval) do { *(pmdptr) = (pmdval); } while (0)
++static inline void pmd_clear(pmd_t *pmdp)
++{
++	set_pmd(pmdp, __pmd((unsigned long)invalid_pte_table));
++}
+ 
+ #define pmd_phys(pmd)		PHYSADDR(pmd_val(pmd))
+ 
+@@ -314,7 +323,8 @@ extern void paging_init(void);
+ 
+ static inline void set_pte(pte_t *ptep, pte_t pteval)
+ {
+-	*ptep = pteval;
++	WRITE_ONCE(*ptep, pteval);
++
+ 	if (pte_val(pteval) & _PAGE_GLOBAL) {
+ 		pte_t *buddy = ptep_buddy(ptep);
+ 		/*
+@@ -341,8 +351,8 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
+ 		: [buddy] "+m" (buddy->pte), [tmp] "=&r" (tmp)
+ 		: [global] "r" (page_global));
+ #else /* !CONFIG_SMP */
+-		if (pte_none(*buddy))
+-			pte_val(*buddy) = pte_val(*buddy) | _PAGE_GLOBAL;
++		if (pte_none(ptep_get(buddy)))
++			WRITE_ONCE(*buddy, __pte(pte_val(ptep_get(buddy)) | _PAGE_GLOBAL));
+ #endif /* CONFIG_SMP */
+ 	}
+ }
+@@ -350,7 +360,7 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
+ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ {
+ 	/* Preserve global status for the pair */
+-	if (pte_val(*ptep_buddy(ptep)) & _PAGE_GLOBAL)
++	if (pte_val(ptep_get(ptep_buddy(ptep))) & _PAGE_GLOBAL)
+ 		set_pte(ptep, __pte(_PAGE_GLOBAL));
+ 	else
+ 		set_pte(ptep, __pte(0));
+@@ -589,7 +599,7 @@ static inline pmd_t pmd_mkinvalid(pmd_t pmd)
+ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
+ 					    unsigned long address, pmd_t *pmdp)
+ {
+-	pmd_t old = *pmdp;
++	pmd_t old = pmdp_get(pmdp);
+ 
+ 	pmd_clear(pmdp);
+ 
+diff --git a/arch/loongarch/kernel/relocate.c b/arch/loongarch/kernel/relocate.c
+index 1acfa704c8d09..0eddd4a66b874 100644
+--- a/arch/loongarch/kernel/relocate.c
++++ b/arch/loongarch/kernel/relocate.c
+@@ -13,6 +13,7 @@
+ #include <asm/bootinfo.h>
+ #include <asm/early_ioremap.h>
+ #include <asm/inst.h>
++#include <asm/io.h>
+ #include <asm/sections.h>
+ #include <asm/setup.h>
+ 
+@@ -170,7 +171,7 @@ unsigned long __init relocate_kernel(void)
+ 	unsigned long kernel_length;
+ 	unsigned long random_offset = 0;
+ 	void *location_new = _text; /* Default to original kernel start */
+-	char *cmdline = early_ioremap(fw_arg1, COMMAND_LINE_SIZE); /* Boot command line is passed in fw_arg1 */
++	char *cmdline = early_memremap_ro(fw_arg1, COMMAND_LINE_SIZE); /* Boot command line is passed in fw_arg1 */
+ 
+ 	strscpy(boot_command_line, cmdline, COMMAND_LINE_SIZE);
+ 
+@@ -182,6 +183,7 @@ unsigned long __init relocate_kernel(void)
+ 		random_offset = (unsigned long)location_new - (unsigned long)(_text);
+ #endif
+ 	reloc_offset = (unsigned long)_text - VMLINUX_LOAD_ADDRESS;
++	early_memunmap(cmdline, COMMAND_LINE_SIZE);
+ 
+ 	if (random_offset) {
+ 		kernel_length = (long)(_end) - (long)(_text);
+diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
+index 98883aa23ab82..357f775abd49d 100644
+--- a/arch/loongarch/kvm/mmu.c
++++ b/arch/loongarch/kvm/mmu.c
+@@ -695,19 +695,19 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
+ 	 * value) and then p*d_offset() walks into the target huge page instead
+ 	 * of the old page table (sees the new value).
+ 	 */
+-	pgd = READ_ONCE(*pgd_offset(kvm->mm, hva));
++	pgd = pgdp_get(pgd_offset(kvm->mm, hva));
+ 	if (pgd_none(pgd))
+ 		goto out;
+ 
+-	p4d = READ_ONCE(*p4d_offset(&pgd, hva));
++	p4d = p4dp_get(p4d_offset(&pgd, hva));
+ 	if (p4d_none(p4d) || !p4d_present(p4d))
+ 		goto out;
+ 
+-	pud = READ_ONCE(*pud_offset(&p4d, hva));
++	pud = pudp_get(pud_offset(&p4d, hva));
+ 	if (pud_none(pud) || !pud_present(pud))
+ 		goto out;
+ 
+-	pmd = READ_ONCE(*pmd_offset(&pud, hva));
++	pmd = pmdp_get(pmd_offset(&pud, hva));
+ 	if (pmd_none(pmd) || !pmd_present(pmd))
+ 		goto out;
+ 
+diff --git a/arch/loongarch/mm/hugetlbpage.c b/arch/loongarch/mm/hugetlbpage.c
+index 12222c56cb594..e4068906143b3 100644
+--- a/arch/loongarch/mm/hugetlbpage.c
++++ b/arch/loongarch/mm/hugetlbpage.c
+@@ -39,11 +39,11 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
+ 	pmd_t *pmd = NULL;
+ 
+ 	pgd = pgd_offset(mm, addr);
+-	if (pgd_present(*pgd)) {
++	if (pgd_present(pgdp_get(pgd))) {
+ 		p4d = p4d_offset(pgd, addr);
+-		if (p4d_present(*p4d)) {
++		if (p4d_present(p4dp_get(p4d))) {
+ 			pud = pud_offset(p4d, addr);
+-			if (pud_present(*pud))
++			if (pud_present(pudp_get(pud)))
+ 				pmd = pmd_offset(pud, addr);
+ 		}
+ 	}
+diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
+index bf789d114c2d7..8a87a482c8f44 100644
+--- a/arch/loongarch/mm/init.c
++++ b/arch/loongarch/mm/init.c
+@@ -141,7 +141,7 @@ void __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
+ int __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
+ 				unsigned long addr, unsigned long next)
+ {
+-	int huge = pmd_val(*pmd) & _PAGE_HUGE;
++	int huge = pmd_val(pmdp_get(pmd)) & _PAGE_HUGE;
+ 
+ 	if (huge)
+ 		vmemmap_verify((pte_t *)pmd, node, addr, next);
+@@ -173,7 +173,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
+ 	pud_t *pud;
+ 	pmd_t *pmd;
+ 
+-	if (p4d_none(*p4d)) {
++	if (p4d_none(p4dp_get(p4d))) {
+ 		pud = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ 		if (!pud)
+ 			panic("%s: Failed to allocate memory\n", __func__);
+@@ -184,7 +184,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
+ 	}
+ 
+ 	pud = pud_offset(p4d, addr);
+-	if (pud_none(*pud)) {
++	if (pud_none(pudp_get(pud))) {
+ 		pmd = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ 		if (!pmd)
+ 			panic("%s: Failed to allocate memory\n", __func__);
+@@ -195,7 +195,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
+ 	}
+ 
+ 	pmd = pmd_offset(pud, addr);
+-	if (!pmd_present(*pmd)) {
++	if (!pmd_present(pmdp_get(pmd))) {
+ 		pte_t *pte;
+ 
+ 		pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+@@ -216,7 +216,7 @@ void __init __set_fixmap(enum fixed_addresses idx,
+ 	BUG_ON(idx <= FIX_HOLE || idx >= __end_of_fixed_addresses);
+ 
+ 	ptep = populate_kernel_pte(addr);
+-	if (!pte_none(*ptep)) {
++	if (!pte_none(ptep_get(ptep))) {
+ 		pte_ERROR(*ptep);
+ 		return;
+ 	}
+diff --git a/arch/loongarch/mm/kasan_init.c b/arch/loongarch/mm/kasan_init.c
+index c608adc998458..427d6b1aec09e 100644
+--- a/arch/loongarch/mm/kasan_init.c
++++ b/arch/loongarch/mm/kasan_init.c
+@@ -105,7 +105,7 @@ static phys_addr_t __init kasan_alloc_zeroed_page(int node)
+ 
+ static pte_t *__init kasan_pte_offset(pmd_t *pmdp, unsigned long addr, int node, bool early)
+ {
+-	if (__pmd_none(early, READ_ONCE(*pmdp))) {
++	if (__pmd_none(early, pmdp_get(pmdp))) {
+ 		phys_addr_t pte_phys = early ?
+ 				__pa_symbol(kasan_early_shadow_pte) : kasan_alloc_zeroed_page(node);
+ 		if (!early)
+@@ -118,7 +118,7 @@ static pte_t *__init kasan_pte_offset(pmd_t *pmdp, unsigned long addr, int node,
+ 
+ static pmd_t *__init kasan_pmd_offset(pud_t *pudp, unsigned long addr, int node, bool early)
+ {
+-	if (__pud_none(early, READ_ONCE(*pudp))) {
++	if (__pud_none(early, pudp_get(pudp))) {
+ 		phys_addr_t pmd_phys = early ?
+ 				__pa_symbol(kasan_early_shadow_pmd) : kasan_alloc_zeroed_page(node);
+ 		if (!early)
+@@ -131,7 +131,7 @@ static pmd_t *__init kasan_pmd_offset(pud_t *pudp, unsigned long addr, int node,
+ 
+ static pud_t *__init kasan_pud_offset(p4d_t *p4dp, unsigned long addr, int node, bool early)
+ {
+-	if (__p4d_none(early, READ_ONCE(*p4dp))) {
++	if (__p4d_none(early, p4dp_get(p4dp))) {
+ 		phys_addr_t pud_phys = early ?
+ 			__pa_symbol(kasan_early_shadow_pud) : kasan_alloc_zeroed_page(node);
+ 		if (!early)
+@@ -154,7 +154,7 @@ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
+ 					      : kasan_alloc_zeroed_page(node);
+ 		next = addr + PAGE_SIZE;
+ 		set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
+-	} while (ptep++, addr = next, addr != end && __pte_none(early, READ_ONCE(*ptep)));
++	} while (ptep++, addr = next, addr != end && __pte_none(early, ptep_get(ptep)));
+ }
+ 
+ static void __init kasan_pmd_populate(pud_t *pudp, unsigned long addr,
+@@ -166,7 +166,7 @@ static void __init kasan_pmd_populate(pud_t *pudp, unsigned long addr,
+ 	do {
+ 		next = pmd_addr_end(addr, end);
+ 		kasan_pte_populate(pmdp, addr, next, node, early);
+-	} while (pmdp++, addr = next, addr != end && __pmd_none(early, READ_ONCE(*pmdp)));
++	} while (pmdp++, addr = next, addr != end && __pmd_none(early, pmdp_get(pmdp)));
+ }
+ 
+ static void __init kasan_pud_populate(p4d_t *p4dp, unsigned long addr,
+diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c
+index bda018150000e..eb6a29b491a72 100644
+--- a/arch/loongarch/mm/pgtable.c
++++ b/arch/loongarch/mm/pgtable.c
+@@ -128,7 +128,7 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot)
+ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
+ 		pmd_t *pmdp, pmd_t pmd)
+ {
+-	*pmdp = pmd;
++	WRITE_ONCE(*pmdp, pmd);
+ 	flush_tlb_all();
+ }
+ 
+diff --git a/arch/mips/kernel/cevt-r4k.c b/arch/mips/kernel/cevt-r4k.c
+index 368e8475870f0..5f6e9e2ebbdbb 100644
+--- a/arch/mips/kernel/cevt-r4k.c
++++ b/arch/mips/kernel/cevt-r4k.c
+@@ -303,13 +303,6 @@ int r4k_clockevent_init(void)
+ 	if (!c0_compare_int_usable())
+ 		return -ENXIO;
+ 
+-	/*
+-	 * With vectored interrupts things are getting platform specific.
+-	 * get_c0_compare_int is a hook to allow a platform to return the
+-	 * interrupt number of its liking.
+-	 */
+-	irq = get_c0_compare_int();
+-
+ 	cd = &per_cpu(mips_clockevent_device, cpu);
+ 
+ 	cd->name		= "MIPS";
+@@ -320,7 +313,6 @@ int r4k_clockevent_init(void)
+ 	min_delta		= calculate_min_delta();
+ 
+ 	cd->rating		= 300;
+-	cd->irq			= irq;
+ 	cd->cpumask		= cpumask_of(cpu);
+ 	cd->set_next_event	= mips_next_event;
+ 	cd->event_handler	= mips_event_handler;
+@@ -332,6 +324,13 @@ int r4k_clockevent_init(void)
+ 
+ 	cp0_timer_irq_installed = 1;
+ 
++	/*
++	 * With vectored interrupts things are getting platform specific.
++	 * get_c0_compare_int is a hook to allow a platform to return the
++	 * interrupt number of its liking.
++	 */
++	irq = get_c0_compare_int();
++
+ 	if (request_irq(irq, c0_compare_interrupt, flags, "timer",
+ 			c0_compare_interrupt))
+ 		pr_err("Failed to request irq %d (timer)\n", irq);
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 34d91cb8b2590..96970fa75e4ac 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -459,7 +459,6 @@ void free_initmem(void)
+ 	unsigned long kernel_end  = (unsigned long)&_end;
+ 
+ 	/* Remap kernel text and data, but do not touch init section yet. */
+-	kernel_set_to_readonly = true;
+ 	map_pages(init_end, __pa(init_end), kernel_end - init_end,
+ 		  PAGE_KERNEL, 0);
+ 
+@@ -493,11 +492,18 @@ void free_initmem(void)
+ #ifdef CONFIG_STRICT_KERNEL_RWX
+ void mark_rodata_ro(void)
+ {
+-	/* rodata memory was already mapped with KERNEL_RO access rights by
+-           pagetable_init() and map_pages(). No need to do additional stuff here */
+-	unsigned long roai_size = __end_ro_after_init - __start_ro_after_init;
++	unsigned long start = (unsigned long) &__start_rodata;
++	unsigned long end = (unsigned long) &__end_rodata;
++
++	pr_info("Write protecting the kernel read-only data: %luk\n",
++	       (end - start) >> 10);
++
++	kernel_set_to_readonly = true;
++	map_pages(start, __pa(start), end - start, PAGE_KERNEL, 0);
+ 
+-	pr_info("Write protected read-only-after-init data: %luk\n", roai_size >> 10);
++	/* force the kernel to see the new page table entries */
++	flush_cache_all();
++	flush_tlb_all();
+ }
+ #endif
+ 
+diff --git a/arch/powerpc/include/asm/nohash/mmu-e500.h b/arch/powerpc/include/asm/nohash/mmu-e500.h
+index 6ddced0415cb5..7dc24b8632d7c 100644
+--- a/arch/powerpc/include/asm/nohash/mmu-e500.h
++++ b/arch/powerpc/include/asm/nohash/mmu-e500.h
+@@ -303,8 +303,7 @@ extern unsigned long linear_map_top;
+ extern int book3e_htw_mode;
+ 
+ #define PPC_HTW_NONE	0
+-#define PPC_HTW_IBM	1
+-#define PPC_HTW_E6500	2
++#define PPC_HTW_E6500	1
+ 
+ /*
+  * 64-bit booke platforms don't load the tlb in the tlb miss handler code.
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index 8064d9c3de862..f7e86e09c49fa 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -19,6 +19,7 @@
+ #include <linux/lockdep.h>
+ #include <linux/memblock.h>
+ #include <linux/mutex.h>
++#include <linux/nospec.h>
+ #include <linux/of.h>
+ #include <linux/of_fdt.h>
+ #include <linux/reboot.h>
+@@ -1916,6 +1917,9 @@ SYSCALL_DEFINE1(rtas, struct rtas_args __user *, uargs)
+ 	    || nargs + nret > ARRAY_SIZE(args.args))
+ 		return -EINVAL;
+ 
++	nargs = array_index_nospec(nargs, ARRAY_SIZE(args.args));
++	nret = array_index_nospec(nret, ARRAY_SIZE(args.args) - nargs);
++
+ 	/* Copy in args. */
+ 	if (copy_from_user(args.args, uargs->args,
+ 			   nargs * sizeof(rtas_arg_t)) != 0)
+diff --git a/arch/powerpc/kernel/vdso/vdso32.lds.S b/arch/powerpc/kernel/vdso/vdso32.lds.S
+index 426e1ccc6971a..8f57107000a24 100644
+--- a/arch/powerpc/kernel/vdso/vdso32.lds.S
++++ b/arch/powerpc/kernel/vdso/vdso32.lds.S
+@@ -74,6 +74,8 @@ SECTIONS
+ 	.got		: { *(.got) }			:text
+ 	.plt		: { *(.plt) }
+ 
++	.rela.dyn	: { *(.rela .rela*) }
++
+ 	_end = .;
+ 	__end = .;
+ 	PROVIDE(end = .);
+@@ -87,7 +89,7 @@ SECTIONS
+ 		*(.branch_lt)
+ 		*(.data .data.* .gnu.linkonce.d.* .sdata*)
+ 		*(.bss .sbss .dynbss .dynsbss)
+-		*(.got1 .glink .iplt .rela*)
++		*(.got1 .glink .iplt)
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/kernel/vdso/vdso64.lds.S b/arch/powerpc/kernel/vdso/vdso64.lds.S
+index bda6c8cdd459c..400819258c06b 100644
+--- a/arch/powerpc/kernel/vdso/vdso64.lds.S
++++ b/arch/powerpc/kernel/vdso/vdso64.lds.S
+@@ -69,7 +69,7 @@ SECTIONS
+ 	.eh_frame_hdr	: { *(.eh_frame_hdr) }		:text	:eh_frame_hdr
+ 	.eh_frame	: { KEEP (*(.eh_frame)) }	:text
+ 	.gcc_except_table : { *(.gcc_except_table) }
+-	.rela.dyn ALIGN(8) : { *(.rela.dyn) }
++	.rela.dyn ALIGN(8) : { *(.rela .rela*) }
+ 
+ 	.got ALIGN(8)	: { *(.got .toc) }
+ 
+@@ -86,7 +86,7 @@ SECTIONS
+ 		*(.data .data.* .gnu.linkonce.d.* .sdata*)
+ 		*(.bss .sbss .dynbss .dynsbss)
+ 		*(.opd)
+-		*(.glink .iplt .plt .rela*)
++		*(.glink .iplt .plt)
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/lib/qspinlock.c b/arch/powerpc/lib/qspinlock.c
+index 5de4dd549f6ec..bcc7e4dff8c30 100644
+--- a/arch/powerpc/lib/qspinlock.c
++++ b/arch/powerpc/lib/qspinlock.c
+@@ -697,7 +697,15 @@ static __always_inline void queued_spin_lock_mcs_queue(struct qspinlock *lock, b
+ 	}
+ 
+ release:
+-	qnodesp->count--; /* release the node */
++	/*
++	 * Clear the lock before releasing the node, as another CPU might see stale
++	 * values if an interrupt occurs after we increment qnodesp->count
++	 * but before node->lock is initialized. The barrier ensures that
++	 * there are no further stores to the node after it has been released.
++	 */
++	node->lock = NULL;
++	barrier();
++	qnodesp->count--;
+ }
+ 
+ void queued_spin_lock_slowpath(struct qspinlock *lock)
+diff --git a/arch/powerpc/mm/nohash/Makefile b/arch/powerpc/mm/nohash/Makefile
+index b3f0498dd42f3..90e846f0c46c7 100644
+--- a/arch/powerpc/mm/nohash/Makefile
++++ b/arch/powerpc/mm/nohash/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ obj-y				+= mmu_context.o tlb.o tlb_low.o kup.o
+-obj-$(CONFIG_PPC_BOOK3E_64)  	+= tlb_low_64e.o book3e_pgtable.o
++obj-$(CONFIG_PPC_BOOK3E_64)  	+= tlb_64e.o tlb_low_64e.o book3e_pgtable.o
+ obj-$(CONFIG_40x)		+= 40x.o
+ obj-$(CONFIG_44x)		+= 44x.o
+ obj-$(CONFIG_PPC_8xx)		+= 8xx.o
+diff --git a/arch/powerpc/mm/nohash/tlb.c b/arch/powerpc/mm/nohash/tlb.c
+index 5ffa0af4328af..f57dc721d0636 100644
+--- a/arch/powerpc/mm/nohash/tlb.c
++++ b/arch/powerpc/mm/nohash/tlb.c
+@@ -110,28 +110,6 @@ struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT] = {
+ };
+ #endif
+ 
+-/* The variables below are currently only used on 64-bit Book3E
+- * though this will probably be made common with other nohash
+- * implementations at some point
+- */
+-#ifdef CONFIG_PPC64
+-
+-int mmu_pte_psize;		/* Page size used for PTE pages */
+-int mmu_vmemmap_psize;		/* Page size used for the virtual mem map */
+-int book3e_htw_mode;		/* HW tablewalk?  Value is PPC_HTW_* */
+-unsigned long linear_map_top;	/* Top of linear mapping */
+-
+-
+-/*
+- * Number of bytes to add to SPRN_SPRG_TLB_EXFRAME on crit/mcheck/debug
+- * exceptions.  This is used for bolted and e6500 TLB miss handlers which
+- * do not modify this SPRG in the TLB miss code; for other TLB miss handlers,
+- * this is set to zero.
+- */
+-int extlb_level_exc;
+-
+-#endif /* CONFIG_PPC64 */
+-
+ #ifdef CONFIG_PPC_E500
+ /* next_tlbcam_idx is used to round-robin tlbcam entry assignment */
+ DEFINE_PER_CPU(int, next_tlbcam_idx);
+@@ -358,381 +336,7 @@ void tlb_flush(struct mmu_gather *tlb)
+ 	flush_tlb_mm(tlb->mm);
+ }
+ 
+-/*
+- * Below are functions specific to the 64-bit variant of Book3E though that
+- * may change in the future
+- */
+-
+-#ifdef CONFIG_PPC64
+-
+-/*
+- * Handling of virtual linear page tables or indirect TLB entries
+- * flushing when PTE pages are freed
+- */
+-void tlb_flush_pgtable(struct mmu_gather *tlb, unsigned long address)
+-{
+-	int tsize = mmu_psize_defs[mmu_pte_psize].enc;
+-
+-	if (book3e_htw_mode != PPC_HTW_NONE) {
+-		unsigned long start = address & PMD_MASK;
+-		unsigned long end = address + PMD_SIZE;
+-		unsigned long size = 1UL << mmu_psize_defs[mmu_pte_psize].shift;
+-
+-		/* This isn't the most optimal, ideally we would factor out the
+-		 * while preempt & CPU mask mucking around, or even the IPI but
+-		 * it will do for now
+-		 */
+-		while (start < end) {
+-			__flush_tlb_page(tlb->mm, start, tsize, 1);
+-			start += size;
+-		}
+-	} else {
+-		unsigned long rmask = 0xf000000000000000ul;
+-		unsigned long rid = (address & rmask) | 0x1000000000000000ul;
+-		unsigned long vpte = address & ~rmask;
+-
+-		vpte = (vpte >> (PAGE_SHIFT - 3)) & ~0xffful;
+-		vpte |= rid;
+-		__flush_tlb_page(tlb->mm, vpte, tsize, 0);
+-	}
+-}
+-
+-static void __init setup_page_sizes(void)
+-{
+-	unsigned int tlb0cfg;
+-	unsigned int tlb0ps;
+-	unsigned int eptcfg;
+-	int i, psize;
+-
+-#ifdef CONFIG_PPC_E500
+-	unsigned int mmucfg = mfspr(SPRN_MMUCFG);
+-	int fsl_mmu = mmu_has_feature(MMU_FTR_TYPE_FSL_E);
+-
+-	if (fsl_mmu && (mmucfg & MMUCFG_MAVN) == MMUCFG_MAVN_V1) {
+-		unsigned int tlb1cfg = mfspr(SPRN_TLB1CFG);
+-		unsigned int min_pg, max_pg;
+-
+-		min_pg = (tlb1cfg & TLBnCFG_MINSIZE) >> TLBnCFG_MINSIZE_SHIFT;
+-		max_pg = (tlb1cfg & TLBnCFG_MAXSIZE) >> TLBnCFG_MAXSIZE_SHIFT;
+-
+-		for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
+-			struct mmu_psize_def *def;
+-			unsigned int shift;
+-
+-			def = &mmu_psize_defs[psize];
+-			shift = def->shift;
+-
+-			if (shift == 0 || shift & 1)
+-				continue;
+-
+-			/* adjust to be in terms of 4^shift Kb */
+-			shift = (shift - 10) >> 1;
+-
+-			if ((shift >= min_pg) && (shift <= max_pg))
+-				def->flags |= MMU_PAGE_SIZE_DIRECT;
+-		}
+-
+-		goto out;
+-	}
+-
+-	if (fsl_mmu && (mmucfg & MMUCFG_MAVN) == MMUCFG_MAVN_V2) {
+-		u32 tlb1cfg, tlb1ps;
+-
+-		tlb0cfg = mfspr(SPRN_TLB0CFG);
+-		tlb1cfg = mfspr(SPRN_TLB1CFG);
+-		tlb1ps = mfspr(SPRN_TLB1PS);
+-		eptcfg = mfspr(SPRN_EPTCFG);
+-
+-		if ((tlb1cfg & TLBnCFG_IND) && (tlb0cfg & TLBnCFG_PT))
+-			book3e_htw_mode = PPC_HTW_E6500;
+-
+-		/*
+-		 * We expect 4K subpage size and unrestricted indirect size.
+-		 * The lack of a restriction on indirect size is a Freescale
+-		 * extension, indicated by PSn = 0 but SPSn != 0.
+-		 */
+-		if (eptcfg != 2)
+-			book3e_htw_mode = PPC_HTW_NONE;
+-
+-		for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
+-			struct mmu_psize_def *def = &mmu_psize_defs[psize];
+-
+-			if (!def->shift)
+-				continue;
+-
+-			if (tlb1ps & (1U << (def->shift - 10))) {
+-				def->flags |= MMU_PAGE_SIZE_DIRECT;
+-
+-				if (book3e_htw_mode && psize == MMU_PAGE_2M)
+-					def->flags |= MMU_PAGE_SIZE_INDIRECT;
+-			}
+-		}
+-
+-		goto out;
+-	}
+-#endif
+-
+-	tlb0cfg = mfspr(SPRN_TLB0CFG);
+-	tlb0ps = mfspr(SPRN_TLB0PS);
+-	eptcfg = mfspr(SPRN_EPTCFG);
+-
+-	/* Look for supported direct sizes */
+-	for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
+-		struct mmu_psize_def *def = &mmu_psize_defs[psize];
+-
+-		if (tlb0ps & (1U << (def->shift - 10)))
+-			def->flags |= MMU_PAGE_SIZE_DIRECT;
+-	}
+-
+-	/* Indirect page sizes supported ? */
+-	if ((tlb0cfg & TLBnCFG_IND) == 0 ||
+-	    (tlb0cfg & TLBnCFG_PT) == 0)
+-		goto out;
+-
+-	book3e_htw_mode = PPC_HTW_IBM;
+-
+-	/* Now, we only deal with one IND page size for each
+-	 * direct size. Hopefully all implementations today are
+-	 * unambiguous, but we might want to be careful in the
+-	 * future.
+-	 */
+-	for (i = 0; i < 3; i++) {
+-		unsigned int ps, sps;
+-
+-		sps = eptcfg & 0x1f;
+-		eptcfg >>= 5;
+-		ps = eptcfg & 0x1f;
+-		eptcfg >>= 5;
+-		if (!ps || !sps)
+-			continue;
+-		for (psize = 0; psize < MMU_PAGE_COUNT; psize++) {
+-			struct mmu_psize_def *def = &mmu_psize_defs[psize];
+-
+-			if (ps == (def->shift - 10))
+-				def->flags |= MMU_PAGE_SIZE_INDIRECT;
+-			if (sps == (def->shift - 10))
+-				def->ind = ps + 10;
+-		}
+-	}
+-
+-out:
+-	/* Cleanup array and print summary */
+-	pr_info("MMU: Supported page sizes\n");
+-	for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
+-		struct mmu_psize_def *def = &mmu_psize_defs[psize];
+-		const char *__page_type_names[] = {
+-			"unsupported",
+-			"direct",
+-			"indirect",
+-			"direct & indirect"
+-		};
+-		if (def->flags == 0) {
+-			def->shift = 0;	
+-			continue;
+-		}
+-		pr_info("  %8ld KB as %s\n", 1ul << (def->shift - 10),
+-			__page_type_names[def->flags & 0x3]);
+-	}
+-}
+-
+-static void __init setup_mmu_htw(void)
+-{
+-	/*
+-	 * If we want to use HW tablewalk, enable it by patching the TLB miss
+-	 * handlers to branch to the one dedicated to it.
+-	 */
+-
+-	switch (book3e_htw_mode) {
+-	case PPC_HTW_IBM:
+-		patch_exception(0x1c0, exc_data_tlb_miss_htw_book3e);
+-		patch_exception(0x1e0, exc_instruction_tlb_miss_htw_book3e);
+-		break;
+-#ifdef CONFIG_PPC_E500
+-	case PPC_HTW_E6500:
+-		extlb_level_exc = EX_TLB_SIZE;
+-		patch_exception(0x1c0, exc_data_tlb_miss_e6500_book3e);
+-		patch_exception(0x1e0, exc_instruction_tlb_miss_e6500_book3e);
+-		break;
+-#endif
+-	}
+-	pr_info("MMU: Book3E HW tablewalk %s\n",
+-		book3e_htw_mode != PPC_HTW_NONE ? "enabled" : "not supported");
+-}
+-
+-/*
+- * Early initialization of the MMU TLB code
+- */
+-static void early_init_this_mmu(void)
+-{
+-	unsigned int mas4;
+-
+-	/* Set MAS4 based on page table setting */
+-
+-	mas4 = 0x4 << MAS4_WIMGED_SHIFT;
+-	switch (book3e_htw_mode) {
+-	case PPC_HTW_E6500:
+-		mas4 |= MAS4_INDD;
+-		mas4 |= BOOK3E_PAGESZ_2M << MAS4_TSIZED_SHIFT;
+-		mas4 |= MAS4_TLBSELD(1);
+-		mmu_pte_psize = MMU_PAGE_2M;
+-		break;
+-
+-	case PPC_HTW_IBM:
+-		mas4 |= MAS4_INDD;
+-		mas4 |=	BOOK3E_PAGESZ_1M << MAS4_TSIZED_SHIFT;
+-		mmu_pte_psize = MMU_PAGE_1M;
+-		break;
+-
+-	case PPC_HTW_NONE:
+-		mas4 |=	BOOK3E_PAGESZ_4K << MAS4_TSIZED_SHIFT;
+-		mmu_pte_psize = mmu_virtual_psize;
+-		break;
+-	}
+-	mtspr(SPRN_MAS4, mas4);
+-
+-#ifdef CONFIG_PPC_E500
+-	if (mmu_has_feature(MMU_FTR_TYPE_FSL_E)) {
+-		unsigned int num_cams;
+-		bool map = true;
+-
+-		/* use a quarter of the TLBCAM for bolted linear map */
+-		num_cams = (mfspr(SPRN_TLB1CFG) & TLBnCFG_N_ENTRY) / 4;
+-
+-		/*
+-		 * Only do the mapping once per core, or else the
+-		 * transient mapping would cause problems.
+-		 */
+-#ifdef CONFIG_SMP
+-		if (hweight32(get_tensr()) > 1)
+-			map = false;
+-#endif
+-
+-		if (map)
+-			linear_map_top = map_mem_in_cams(linear_map_top,
+-							 num_cams, false, true);
+-	}
+-#endif
+-
+-	/* A sync won't hurt us after mucking around with
+-	 * the MMU configuration
+-	 */
+-	mb();
+-}
+-
+-static void __init early_init_mmu_global(void)
+-{
+-	/* XXX This should be decided at runtime based on supported
+-	 * page sizes in the TLB, but for now let's assume 16M is
+-	 * always there and a good fit (which it probably is)
+-	 *
+-	 * Freescale booke only supports 4K pages in TLB0, so use that.
+-	 */
+-	if (mmu_has_feature(MMU_FTR_TYPE_FSL_E))
+-		mmu_vmemmap_psize = MMU_PAGE_4K;
+-	else
+-		mmu_vmemmap_psize = MMU_PAGE_16M;
+-
+-	/* XXX This code only checks for TLB 0 capabilities and doesn't
+-	 *     check what page size combos are supported by the HW. It
+-	 *     also doesn't handle the case where a separate array holds
+-	 *     the IND entries from the array loaded by the PT.
+-	 */
+-	/* Look for supported page sizes */
+-	setup_page_sizes();
+-
+-	/* Look for HW tablewalk support */
+-	setup_mmu_htw();
+-
+-#ifdef CONFIG_PPC_E500
+-	if (mmu_has_feature(MMU_FTR_TYPE_FSL_E)) {
+-		if (book3e_htw_mode == PPC_HTW_NONE) {
+-			extlb_level_exc = EX_TLB_SIZE;
+-			patch_exception(0x1c0, exc_data_tlb_miss_bolted_book3e);
+-			patch_exception(0x1e0,
+-				exc_instruction_tlb_miss_bolted_book3e);
+-		}
+-	}
+-#endif
+-
+-	/* Set the global containing the top of the linear mapping
+-	 * for use by the TLB miss code
+-	 */
+-	linear_map_top = memblock_end_of_DRAM();
+-
+-	ioremap_bot = IOREMAP_BASE;
+-}
+-
+-static void __init early_mmu_set_memory_limit(void)
+-{
+-#ifdef CONFIG_PPC_E500
+-	if (mmu_has_feature(MMU_FTR_TYPE_FSL_E)) {
+-		/*
+-		 * Limit memory so we dont have linear faults.
+-		 * Unlike memblock_set_current_limit, which limits
+-		 * memory available during early boot, this permanently
+-		 * reduces the memory available to Linux.  We need to
+-		 * do this because highmem is not supported on 64-bit.
+-		 */
+-		memblock_enforce_memory_limit(linear_map_top);
+-	}
+-#endif
+-
+-	memblock_set_current_limit(linear_map_top);
+-}
+-
+-/* boot cpu only */
+-void __init early_init_mmu(void)
+-{
+-	early_init_mmu_global();
+-	early_init_this_mmu();
+-	early_mmu_set_memory_limit();
+-}
+-
+-void early_init_mmu_secondary(void)
+-{
+-	early_init_this_mmu();
+-}
+-
+-void setup_initial_memory_limit(phys_addr_t first_memblock_base,
+-				phys_addr_t first_memblock_size)
+-{
+-	/* On non-FSL Embedded 64-bit, we adjust the RMA size to match
+-	 * the bolted TLB entry. We know for now that only 1G
+-	 * entries are supported though that may eventually
+-	 * change.
+-	 *
+-	 * on FSL Embedded 64-bit, usually all RAM is bolted, but with
+-	 * unusual memory sizes it's possible for some RAM to not be mapped
+-	 * (such RAM is not used at all by Linux, since we don't support
+-	 * highmem on 64-bit).  We limit ppc64_rma_size to what would be
+-	 * mappable if this memblock is the only one.  Additional memblocks
+-	 * can only increase, not decrease, the amount that ends up getting
+-	 * mapped.  We still limit max to 1G even if we'll eventually map
+-	 * more.  This is due to what the early init code is set up to do.
+-	 *
+-	 * We crop it to the size of the first MEMBLOCK to
+-	 * avoid going over total available memory just in case...
+-	 */
+-#ifdef CONFIG_PPC_E500
+-	if (early_mmu_has_feature(MMU_FTR_TYPE_FSL_E)) {
+-		unsigned long linear_sz;
+-		unsigned int num_cams;
+-
+-		/* use a quarter of the TLBCAM for bolted linear map */
+-		num_cams = (mfspr(SPRN_TLB1CFG) & TLBnCFG_N_ENTRY) / 4;
+-
+-		linear_sz = map_mem_in_cams(first_memblock_size, num_cams,
+-					    true, true);
+-
+-		ppc64_rma_size = min_t(u64, linear_sz, 0x40000000);
+-	} else
+-#endif
+-		ppc64_rma_size = min_t(u64, first_memblock_size, 0x40000000);
+-
+-	/* Finally limit subsequent allocations */
+-	memblock_set_current_limit(first_memblock_base + ppc64_rma_size);
+-}
+-#else /* ! CONFIG_PPC64 */
++#ifndef CONFIG_PPC64
+ void __init early_init_mmu(void)
+ {
+ 	unsigned long root = of_get_flat_dt_root();
+diff --git a/arch/powerpc/mm/nohash/tlb_64e.c b/arch/powerpc/mm/nohash/tlb_64e.c
+new file mode 100644
+index 0000000000000..b6af3ec4d001d
+--- /dev/null
++++ b/arch/powerpc/mm/nohash/tlb_64e.c
+@@ -0,0 +1,361 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ * Copyright 2008,2009 Ben Herrenschmidt <benh@kernel.crashing.org>
++ *                     IBM Corp.
++ *
++ *  Derived from arch/ppc/mm/init.c:
++ *    Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
++ *
++ *  Modifications by Paul Mackerras (PowerMac) (paulus@cs.anu.edu.au)
++ *  and Cort Dougan (PReP) (cort@cs.nmt.edu)
++ *    Copyright (C) 1996 Paul Mackerras
++ *
++ *  Derived from "arch/i386/mm/init.c"
++ *    Copyright (C) 1991, 1992, 1993, 1994  Linus Torvalds
++ */
++
++#include <linux/kernel.h>
++#include <linux/export.h>
++#include <linux/mm.h>
++#include <linux/init.h>
++#include <linux/pagemap.h>
++#include <linux/memblock.h>
++
++#include <asm/pgalloc.h>
++#include <asm/tlbflush.h>
++#include <asm/tlb.h>
++#include <asm/code-patching.h>
++#include <asm/cputhreads.h>
++
++#include <mm/mmu_decl.h>
++
++/* The variables below are currently only used on 64-bit Book3E
++ * though this will probably be made common with other nohash
++ * implementations at some point
++ */
++static int mmu_pte_psize;	/* Page size used for PTE pages */
++int mmu_vmemmap_psize;		/* Page size used for the virtual mem map */
++int book3e_htw_mode;		/* HW tablewalk?  Value is PPC_HTW_* */
++unsigned long linear_map_top;	/* Top of linear mapping */
++
++
++/*
++ * Number of bytes to add to SPRN_SPRG_TLB_EXFRAME on crit/mcheck/debug
++ * exceptions.  This is used for bolted and e6500 TLB miss handlers which
++ * do not modify this SPRG in the TLB miss code; for other TLB miss handlers,
++ * this is set to zero.
++ */
++int extlb_level_exc;
++
++/*
++ * Handling of virtual linear page tables or indirect TLB entries
++ * flushing when PTE pages are freed
++ */
++void tlb_flush_pgtable(struct mmu_gather *tlb, unsigned long address)
++{
++	int tsize = mmu_psize_defs[mmu_pte_psize].enc;
++
++	if (book3e_htw_mode != PPC_HTW_NONE) {
++		unsigned long start = address & PMD_MASK;
++		unsigned long end = address + PMD_SIZE;
++		unsigned long size = 1UL << mmu_psize_defs[mmu_pte_psize].shift;
++
++		/* This isn't the most optimal, ideally we would factor out the
++		 * while preempt & CPU mask mucking around, or even the IPI but
++		 * it will do for now
++		 */
++		while (start < end) {
++			__flush_tlb_page(tlb->mm, start, tsize, 1);
++			start += size;
++		}
++	} else {
++		unsigned long rmask = 0xf000000000000000ul;
++		unsigned long rid = (address & rmask) | 0x1000000000000000ul;
++		unsigned long vpte = address & ~rmask;
++
++		vpte = (vpte >> (PAGE_SHIFT - 3)) & ~0xffful;
++		vpte |= rid;
++		__flush_tlb_page(tlb->mm, vpte, tsize, 0);
++	}
++}
++
++static void __init setup_page_sizes(void)
++{
++	unsigned int tlb0cfg;
++	unsigned int eptcfg;
++	int psize;
++
++#ifdef CONFIG_PPC_E500
++	unsigned int mmucfg = mfspr(SPRN_MMUCFG);
++	int fsl_mmu = mmu_has_feature(MMU_FTR_TYPE_FSL_E);
++
++	if (fsl_mmu && (mmucfg & MMUCFG_MAVN) == MMUCFG_MAVN_V1) {
++		unsigned int tlb1cfg = mfspr(SPRN_TLB1CFG);
++		unsigned int min_pg, max_pg;
++
++		min_pg = (tlb1cfg & TLBnCFG_MINSIZE) >> TLBnCFG_MINSIZE_SHIFT;
++		max_pg = (tlb1cfg & TLBnCFG_MAXSIZE) >> TLBnCFG_MAXSIZE_SHIFT;
++
++		for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
++			struct mmu_psize_def *def;
++			unsigned int shift;
++
++			def = &mmu_psize_defs[psize];
++			shift = def->shift;
++
++			if (shift == 0 || shift & 1)
++				continue;
++
++			/* adjust to be in terms of 4^shift Kb */
++			shift = (shift - 10) >> 1;
++
++			if ((shift >= min_pg) && (shift <= max_pg))
++				def->flags |= MMU_PAGE_SIZE_DIRECT;
++		}
++
++		goto out;
++	}
++
++	if (fsl_mmu && (mmucfg & MMUCFG_MAVN) == MMUCFG_MAVN_V2) {
++		u32 tlb1cfg, tlb1ps;
++
++		tlb0cfg = mfspr(SPRN_TLB0CFG);
++		tlb1cfg = mfspr(SPRN_TLB1CFG);
++		tlb1ps = mfspr(SPRN_TLB1PS);
++		eptcfg = mfspr(SPRN_EPTCFG);
++
++		if ((tlb1cfg & TLBnCFG_IND) && (tlb0cfg & TLBnCFG_PT))
++			book3e_htw_mode = PPC_HTW_E6500;
++
++		/*
++		 * We expect 4K subpage size and unrestricted indirect size.
++		 * The lack of a restriction on indirect size is a Freescale
++		 * extension, indicated by PSn = 0 but SPSn != 0.
++		 */
++		if (eptcfg != 2)
++			book3e_htw_mode = PPC_HTW_NONE;
++
++		for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
++			struct mmu_psize_def *def = &mmu_psize_defs[psize];
++
++			if (!def->shift)
++				continue;
++
++			if (tlb1ps & (1U << (def->shift - 10))) {
++				def->flags |= MMU_PAGE_SIZE_DIRECT;
++
++				if (book3e_htw_mode && psize == MMU_PAGE_2M)
++					def->flags |= MMU_PAGE_SIZE_INDIRECT;
++			}
++		}
++
++		goto out;
++	}
++#endif
++out:
++	/* Cleanup array and print summary */
++	pr_info("MMU: Supported page sizes\n");
++	for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
++		struct mmu_psize_def *def = &mmu_psize_defs[psize];
++		const char *__page_type_names[] = {
++			"unsupported",
++			"direct",
++			"indirect",
++			"direct & indirect"
++		};
++		if (def->flags == 0) {
++			def->shift = 0;
++			continue;
++		}
++		pr_info("  %8ld KB as %s\n", 1ul << (def->shift - 10),
++			__page_type_names[def->flags & 0x3]);
++	}
++}
++
++static void __init setup_mmu_htw(void)
++{
++	/*
++	 * If we want to use HW tablewalk, enable it by patching the TLB miss
++	 * handlers to branch to the one dedicated to it.
++	 */
++
++	switch (book3e_htw_mode) {
++#ifdef CONFIG_PPC_E500
++	case PPC_HTW_E6500:
++		extlb_level_exc = EX_TLB_SIZE;
++		patch_exception(0x1c0, exc_data_tlb_miss_e6500_book3e);
++		patch_exception(0x1e0, exc_instruction_tlb_miss_e6500_book3e);
++		break;
++#endif
++	}
++	pr_info("MMU: Book3E HW tablewalk %s\n",
++		book3e_htw_mode != PPC_HTW_NONE ? "enabled" : "not supported");
++}
++
++/*
++ * Early initialization of the MMU TLB code
++ */
++static void early_init_this_mmu(void)
++{
++	unsigned int mas4;
++
++	/* Set MAS4 based on page table setting */
++
++	mas4 = 0x4 << MAS4_WIMGED_SHIFT;
++	switch (book3e_htw_mode) {
++	case PPC_HTW_E6500:
++		mas4 |= MAS4_INDD;
++		mas4 |= BOOK3E_PAGESZ_2M << MAS4_TSIZED_SHIFT;
++		mas4 |= MAS4_TLBSELD(1);
++		mmu_pte_psize = MMU_PAGE_2M;
++		break;
++
++	case PPC_HTW_NONE:
++		mas4 |=	BOOK3E_PAGESZ_4K << MAS4_TSIZED_SHIFT;
++		mmu_pte_psize = mmu_virtual_psize;
++		break;
++	}
++	mtspr(SPRN_MAS4, mas4);
++
++#ifdef CONFIG_PPC_E500
++	if (mmu_has_feature(MMU_FTR_TYPE_FSL_E)) {
++		unsigned int num_cams;
++		bool map = true;
++
++		/* use a quarter of the TLBCAM for bolted linear map */
++		num_cams = (mfspr(SPRN_TLB1CFG) & TLBnCFG_N_ENTRY) / 4;
++
++		/*
++		 * Only do the mapping once per core, or else the
++		 * transient mapping would cause problems.
++		 */
++#ifdef CONFIG_SMP
++		if (hweight32(get_tensr()) > 1)
++			map = false;
++#endif
++
++		if (map)
++			linear_map_top = map_mem_in_cams(linear_map_top,
++							 num_cams, false, true);
++	}
++#endif
++
++	/* A sync won't hurt us after mucking around with
++	 * the MMU configuration
++	 */
++	mb();
++}
++
++static void __init early_init_mmu_global(void)
++{
++	/* XXX This should be decided at runtime based on supported
++	 * page sizes in the TLB, but for now let's assume 16M is
++	 * always there and a good fit (which it probably is)
++	 *
++	 * Freescale booke only supports 4K pages in TLB0, so use that.
++	 */
++	if (mmu_has_feature(MMU_FTR_TYPE_FSL_E))
++		mmu_vmemmap_psize = MMU_PAGE_4K;
++	else
++		mmu_vmemmap_psize = MMU_PAGE_16M;
++
++	/* XXX This code only checks for TLB 0 capabilities and doesn't
++	 *     check what page size combos are supported by the HW. It
++	 *     also doesn't handle the case where a separate array holds
++	 *     the IND entries from the array loaded by the PT.
++	 */
++	/* Look for supported page sizes */
++	setup_page_sizes();
++
++	/* Look for HW tablewalk support */
++	setup_mmu_htw();
++
++#ifdef CONFIG_PPC_E500
++	if (mmu_has_feature(MMU_FTR_TYPE_FSL_E)) {
++		if (book3e_htw_mode == PPC_HTW_NONE) {
++			extlb_level_exc = EX_TLB_SIZE;
++			patch_exception(0x1c0, exc_data_tlb_miss_bolted_book3e);
++			patch_exception(0x1e0,
++				exc_instruction_tlb_miss_bolted_book3e);
++		}
++	}
++#endif
++
++	/* Set the global containing the top of the linear mapping
++	 * for use by the TLB miss code
++	 */
++	linear_map_top = memblock_end_of_DRAM();
++
++	ioremap_bot = IOREMAP_BASE;
++}
++
++static void __init early_mmu_set_memory_limit(void)
++{
++#ifdef CONFIG_PPC_E500
++	if (mmu_has_feature(MMU_FTR_TYPE_FSL_E)) {
++		/*
++		 * Limit memory so we dont have linear faults.
++		 * Unlike memblock_set_current_limit, which limits
++		 * memory available during early boot, this permanently
++		 * reduces the memory available to Linux.  We need to
++		 * do this because highmem is not supported on 64-bit.
++		 */
++		memblock_enforce_memory_limit(linear_map_top);
++	}
++#endif
++
++	memblock_set_current_limit(linear_map_top);
++}
++
++/* boot cpu only */
++void __init early_init_mmu(void)
++{
++	early_init_mmu_global();
++	early_init_this_mmu();
++	early_mmu_set_memory_limit();
++}
++
++void early_init_mmu_secondary(void)
++{
++	early_init_this_mmu();
++}
++
++void setup_initial_memory_limit(phys_addr_t first_memblock_base,
++				phys_addr_t first_memblock_size)
++{
++	/* On non-FSL Embedded 64-bit, we adjust the RMA size to match
++	 * the bolted TLB entry. We know for now that only 1G
++	 * entries are supported though that may eventually
++	 * change.
++	 *
++	 * on FSL Embedded 64-bit, usually all RAM is bolted, but with
++	 * unusual memory sizes it's possible for some RAM to not be mapped
++	 * (such RAM is not used at all by Linux, since we don't support
++	 * highmem on 64-bit).  We limit ppc64_rma_size to what would be
++	 * mappable if this memblock is the only one.  Additional memblocks
++	 * can only increase, not decrease, the amount that ends up getting
++	 * mapped.  We still limit max to 1G even if we'll eventually map
++	 * more.  This is due to what the early init code is set up to do.
++	 *
++	 * We crop it to the size of the first MEMBLOCK to
++	 * avoid going over total available memory just in case...
++	 */
++#ifdef CONFIG_PPC_E500
++	if (early_mmu_has_feature(MMU_FTR_TYPE_FSL_E)) {
++		unsigned long linear_sz;
++		unsigned int num_cams;
++
++		/* use a quarter of the TLBCAM for bolted linear map */
++		num_cams = (mfspr(SPRN_TLB1CFG) & TLBnCFG_N_ENTRY) / 4;
++
++		linear_sz = map_mem_in_cams(first_memblock_size, num_cams,
++					    true, true);
++
++		ppc64_rma_size = min_t(u64, linear_sz, 0x40000000);
++	} else
++#endif
++		ppc64_rma_size = min_t(u64, first_memblock_size, 0x40000000);
++
++	/* Finally limit subsequent allocations */
++	memblock_set_current_limit(first_memblock_base + ppc64_rma_size);
++}
+diff --git a/arch/powerpc/mm/nohash/tlb_low_64e.S b/arch/powerpc/mm/nohash/tlb_low_64e.S
+index 7e0b8fe1c2797..b0eb3f7eaed14 100644
+--- a/arch/powerpc/mm/nohash/tlb_low_64e.S
++++ b/arch/powerpc/mm/nohash/tlb_low_64e.S
+@@ -893,201 +893,6 @@ virt_page_table_tlb_miss_whacko_fault:
+ 	TLB_MISS_EPILOG_ERROR
+ 	b	exc_data_storage_book3e
+ 
+-
+-/**************************************************************
+- *                                                            *
+- * TLB miss handling for Book3E with hw page table support    *
+- *                                                            *
+- **************************************************************/
+-
+-
+-/* Data TLB miss */
+-	START_EXCEPTION(data_tlb_miss_htw)
+-	TLB_MISS_PROLOG
+-
+-	/* Now we handle the fault proper. We only save DEAR in normal
+-	 * fault case since that's the only interesting values here.
+-	 * We could probably also optimize by not saving SRR0/1 in the
+-	 * linear mapping case but I'll leave that for later
+-	 */
+-	mfspr	r14,SPRN_ESR
+-	mfspr	r16,SPRN_DEAR		/* get faulting address */
+-	srdi	r11,r16,44		/* get region */
+-	xoris	r11,r11,0xc
+-	cmpldi	cr0,r11,0		/* linear mapping ? */
+-	beq	tlb_load_linear		/* yes -> go to linear map load */
+-	cmpldi	cr1,r11,1		/* vmalloc mapping ? */
+-
+-	/* We do the user/kernel test for the PID here along with the RW test
+-	 */
+-	srdi.	r11,r16,60		/* Check for user region */
+-	ld	r15,PACAPGD(r13)	/* Load user pgdir */
+-	beq	htw_tlb_miss
+-
+-	/* XXX replace the RMW cycles with immediate loads + writes */
+-1:	mfspr	r10,SPRN_MAS1
+-	rlwinm	r10,r10,0,16,1		/* Clear TID */
+-	mtspr	SPRN_MAS1,r10
+-	ld	r15,PACA_KERNELPGD(r13)	/* Load kernel pgdir */
+-	beq+	cr1,htw_tlb_miss
+-
+-	/* We got a crappy address, just fault with whatever DEAR and ESR
+-	 * are here
+-	 */
+-	TLB_MISS_EPILOG_ERROR
+-	b	exc_data_storage_book3e
+-
+-/* Instruction TLB miss */
+-	START_EXCEPTION(instruction_tlb_miss_htw)
+-	TLB_MISS_PROLOG
+-
+-	/* If we take a recursive fault, the second level handler may need
+-	 * to know whether we are handling a data or instruction fault in
+-	 * order to get to the right store fault handler. We provide that
+-	 * info by keeping a crazy value for ESR in r14
+-	 */
+-	li	r14,-1	/* store to exception frame is done later */
+-
+-	/* Now we handle the fault proper. We only save DEAR in the non
+-	 * linear mapping case since we know the linear mapping case will
+-	 * not re-enter. We could indeed optimize and also not save SRR0/1
+-	 * in the linear mapping case but I'll leave that for later
+-	 *
+-	 * Faulting address is SRR0 which is already in r16
+-	 */
+-	srdi	r11,r16,44		/* get region */
+-	xoris	r11,r11,0xc
+-	cmpldi	cr0,r11,0		/* linear mapping ? */
+-	beq	tlb_load_linear		/* yes -> go to linear map load */
+-	cmpldi	cr1,r11,1		/* vmalloc mapping ? */
+-
+-	/* We do the user/kernel test for the PID here along with the RW test
+-	 */
+-	srdi.	r11,r16,60		/* Check for user region */
+-	ld	r15,PACAPGD(r13)		/* Load user pgdir */
+-	beq	htw_tlb_miss
+-
+-	/* XXX replace the RMW cycles with immediate loads + writes */
+-1:	mfspr	r10,SPRN_MAS1
+-	rlwinm	r10,r10,0,16,1			/* Clear TID */
+-	mtspr	SPRN_MAS1,r10
+-	ld	r15,PACA_KERNELPGD(r13)		/* Load kernel pgdir */
+-	beq+	htw_tlb_miss
+-
+-	/* We got a crappy address, just fault */
+-	TLB_MISS_EPILOG_ERROR
+-	b	exc_instruction_storage_book3e
+-
+-
+-/*
+- * This is the guts of the second-level TLB miss handler for direct
+- * misses. We are entered with:
+- *
+- * r16 = virtual page table faulting address
+- * r15 = PGD pointer
+- * r14 = ESR
+- * r13 = PACA
+- * r12 = TLB exception frame in PACA
+- * r11 = crap (free to use)
+- * r10 = crap (free to use)
+- *
+- * It can be re-entered by the linear mapping miss handler. However, to
+- * avoid too much complication, it will save/restore things for us
+- */
+-htw_tlb_miss:
+-#ifdef CONFIG_PPC_KUAP
+-	mfspr	r10,SPRN_MAS1
+-	rlwinm.	r10,r10,0,0x3fff0000
+-	beq-	htw_tlb_miss_fault /* KUAP fault */
+-#endif
+-	/* Search if we already have a TLB entry for that virtual address, and
+-	 * if we do, bail out.
+-	 *
+-	 * MAS1:IND should be already set based on MAS4
+-	 */
+-	PPC_TLBSRX_DOT(0,R16)
+-	beq	htw_tlb_miss_done
+-
+-	/* Now, we need to walk the page tables. First check if we are in
+-	 * range.
+-	 */
+-	rldicl.	r10,r16,64-PGTABLE_EADDR_SIZE,PGTABLE_EADDR_SIZE+4
+-	bne-	htw_tlb_miss_fault
+-
+-	/* Get the PGD pointer */
+-	cmpldi	cr0,r15,0
+-	beq-	htw_tlb_miss_fault
+-
+-	/* Get to PGD entry */
+-	rldicl	r11,r16,64-(PGDIR_SHIFT-3),64-PGD_INDEX_SIZE-3
+-	clrrdi	r10,r11,3
+-	ldx	r15,r10,r15
+-	cmpdi	cr0,r15,0
+-	bge	htw_tlb_miss_fault
+-
+-	/* Get to PUD entry */
+-	rldicl	r11,r16,64-(PUD_SHIFT-3),64-PUD_INDEX_SIZE-3
+-	clrrdi	r10,r11,3
+-	ldx	r15,r10,r15
+-	cmpdi	cr0,r15,0
+-	bge	htw_tlb_miss_fault
+-
+-	/* Get to PMD entry */
+-	rldicl	r11,r16,64-(PMD_SHIFT-3),64-PMD_INDEX_SIZE-3
+-	clrrdi	r10,r11,3
+-	ldx	r15,r10,r15
+-	cmpdi	cr0,r15,0
+-	bge	htw_tlb_miss_fault
+-
+-	/* Ok, we're all right, we can now create an indirect entry for
+-	 * a 1M or 256M page.
+-	 *
+-	 * The last trick is now that because we use "half" pages for
+-	 * the HTW (1M IND is 2K and 256M IND is 32K) we need to account
+-	 * for an added LSB bit to the RPN. For 64K pages, there is no
+-	 * problem as we already use 32K arrays (half PTE pages), but for
+-	 * 4K page we need to extract a bit from the virtual address and
+-	 * insert it into the "PA52" bit of the RPN.
+-	 */
+-	rlwimi	r15,r16,32-9,20,20
+-	/* Now we build the MAS:
+-	 *
+-	 * MAS 0   :	Fully setup with defaults in MAS4 and TLBnCFG
+-	 * MAS 1   :	Almost fully setup
+-	 *               - PID already updated by caller if necessary
+-	 *               - TSIZE for now is base ind page size always
+-	 * MAS 2   :	Use defaults
+-	 * MAS 3+7 :	Needs to be done
+-	 */
+-	ori	r10,r15,(BOOK3E_PAGESZ_4K << MAS3_SPSIZE_SHIFT)
+-
+-	srdi	r16,r10,32
+-	mtspr	SPRN_MAS3,r10
+-	mtspr	SPRN_MAS7,r16
+-
+-	tlbwe
+-
+-htw_tlb_miss_done:
+-	/* We don't bother with restoring DEAR or ESR since we know we are
+-	 * level 0 and just going back to userland. They are only needed
+-	 * if you are going to take an access fault
+-	 */
+-	TLB_MISS_EPILOG_SUCCESS
+-	rfi
+-
+-htw_tlb_miss_fault:
+-	/* We need to check if it was an instruction miss. We know this
+-	 * though because r14 would contain -1
+-	 */
+-	cmpdi	cr0,r14,-1
+-	beq	1f
+-	mtspr	SPRN_DEAR,r16
+-	mtspr	SPRN_ESR,r14
+-	TLB_MISS_EPILOG_ERROR
+-	b	exc_data_storage_book3e
+-1:	TLB_MISS_EPILOG_ERROR
+-	b	exc_instruction_storage_book3e
+-
+ /*
+  * This is the guts of "any" level TLB miss handler for kernel linear
+  * mapping misses. We are entered with:
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 0525ee2d63c71..006232b67b467 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -545,8 +545,8 @@ config RISCV_ISA_SVPBMT
+ config TOOLCHAIN_HAS_V
+ 	bool
+ 	default y
+-	depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64iv)
+-	depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32iv)
++	depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64imv)
++	depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32imv)
+ 	depends on LLD_VERSION >= 140000 || LD_VERSION >= 23800
+ 	depends on AS_HAS_OPTION_ARCH
+ 
+diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
+index 68c3432dc6ea4..6c129144ef190 100644
+--- a/arch/riscv/include/asm/processor.h
++++ b/arch/riscv/include/asm/processor.h
+@@ -14,36 +14,14 @@
+ 
+ #include <asm/ptrace.h>
+ 
+-/*
+- * addr is a hint to the maximum userspace address that mmap should provide, so
+- * this macro needs to return the largest address space available so that
+- * mmap_end < addr, being mmap_end the top of that address space.
+- * See Documentation/arch/riscv/vm-layout.rst for more details.
+- */
+ #define arch_get_mmap_end(addr, len, flags)			\
+ ({								\
+-	unsigned long mmap_end;					\
+-	typeof(addr) _addr = (addr);				\
+-	if ((_addr) == 0 || is_compat_task() ||			\
+-	    ((_addr + len) > BIT(VA_BITS - 1)))			\
+-		mmap_end = STACK_TOP_MAX;			\
+-	else							\
+-		mmap_end = (_addr + len);			\
+-	mmap_end;						\
++	STACK_TOP_MAX;						\
+ })
+ 
+ #define arch_get_mmap_base(addr, base)				\
+ ({								\
+-	unsigned long mmap_base;				\
+-	typeof(addr) _addr = (addr);				\
+-	typeof(base) _base = (base);				\
+-	unsigned long rnd_gap = DEFAULT_MAP_WINDOW - (_base);	\
+-	if ((_addr) == 0 || is_compat_task() || 		\
+-	    ((_addr + len) > BIT(VA_BITS - 1)))			\
+-		mmap_base = (_base);				\
+-	else							\
+-		mmap_base = (_addr + len) - rnd_gap;		\
+-	mmap_base;						\
++	base;							\
+ })
+ 
+ #ifdef CONFIG_64BIT
+diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
+index 1079e214fe855..7bd3746028c9e 100644
+--- a/arch/riscv/include/asm/sbi.h
++++ b/arch/riscv/include/asm/sbi.h
+@@ -9,6 +9,7 @@
+ 
+ #include <linux/types.h>
+ #include <linux/cpumask.h>
++#include <linux/jump_label.h>
+ 
+ #ifdef CONFIG_RISCV_SBI
+ enum sbi_ext_id {
+@@ -304,10 +305,13 @@ struct sbiret {
+ };
+ 
+ void sbi_init(void);
+-struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
+-			unsigned long arg1, unsigned long arg2,
+-			unsigned long arg3, unsigned long arg4,
+-			unsigned long arg5);
++long __sbi_base_ecall(int fid);
++struct sbiret __sbi_ecall(unsigned long arg0, unsigned long arg1,
++			  unsigned long arg2, unsigned long arg3,
++			  unsigned long arg4, unsigned long arg5,
++			  int fid, int ext);
++#define sbi_ecall(e, f, a0, a1, a2, a3, a4, a5)	\
++		__sbi_ecall(a0, a1, a2, a3, a4, a5, f, e)
+ 
+ #ifdef CONFIG_RISCV_SBI_V01
+ void sbi_console_putchar(int ch);
+@@ -371,7 +375,23 @@ static inline unsigned long sbi_mk_version(unsigned long major,
+ 		| (minor & SBI_SPEC_VERSION_MINOR_MASK);
+ }
+ 
+-int sbi_err_map_linux_errno(int err);
++static inline int sbi_err_map_linux_errno(int err)
++{
++	switch (err) {
++	case SBI_SUCCESS:
++		return 0;
++	case SBI_ERR_DENIED:
++		return -EPERM;
++	case SBI_ERR_INVALID_PARAM:
++		return -EINVAL;
++	case SBI_ERR_INVALID_ADDRESS:
++		return -EFAULT;
++	case SBI_ERR_NOT_SUPPORTED:
++	case SBI_ERR_FAILURE:
++	default:
++		return -ENOTSUPP;
++	};
++}
+ 
+ extern bool sbi_debug_console_available;
+ int sbi_debug_console_write(const char *bytes, unsigned int num_bytes);
+diff --git a/arch/riscv/include/asm/trace.h b/arch/riscv/include/asm/trace.h
+new file mode 100644
+index 0000000000000..6151cee5450cd
+--- /dev/null
++++ b/arch/riscv/include/asm/trace.h
+@@ -0,0 +1,54 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#undef TRACE_SYSTEM
++#define TRACE_SYSTEM riscv
++
++#if !defined(_TRACE_RISCV_H) || defined(TRACE_HEADER_MULTI_READ)
++#define _TRACE_RISCV_H
++
++#include <linux/tracepoint.h>
++
++TRACE_EVENT_CONDITION(sbi_call,
++	TP_PROTO(int ext, int fid),
++	TP_ARGS(ext, fid),
++	TP_CONDITION(ext != SBI_EXT_HSM),
++
++	TP_STRUCT__entry(
++		__field(int, ext)
++		__field(int, fid)
++	),
++
++	TP_fast_assign(
++		__entry->ext = ext;
++		__entry->fid = fid;
++	),
++
++	TP_printk("ext=0x%x fid=%d", __entry->ext, __entry->fid)
++);
++
++TRACE_EVENT_CONDITION(sbi_return,
++	TP_PROTO(int ext, long error, long value),
++	TP_ARGS(ext, error, value),
++	TP_CONDITION(ext != SBI_EXT_HSM),
++
++	TP_STRUCT__entry(
++		__field(long, error)
++		__field(long, value)
++	),
++
++	TP_fast_assign(
++		__entry->error = error;
++		__entry->value = value;
++	),
++
++	TP_printk("error=%ld value=0x%lx", __entry->error, __entry->value)
++);
++
++#endif /* _TRACE_RISCV_H */
++
++#undef TRACE_INCLUDE_PATH
++#undef TRACE_INCLUDE_FILE
++
++#define TRACE_INCLUDE_PATH asm
++#define TRACE_INCLUDE_FILE trace
++
++#include <trace/define_trace.h>
+diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
+index 5b243d46f4b17..1d71002e4f7b4 100644
+--- a/arch/riscv/kernel/Makefile
++++ b/arch/riscv/kernel/Makefile
+@@ -20,17 +20,21 @@ endif
+ ifdef CONFIG_RISCV_ALTERNATIVE_EARLY
+ CFLAGS_alternative.o := -mcmodel=medany
+ CFLAGS_cpufeature.o := -mcmodel=medany
++CFLAGS_sbi_ecall.o := -mcmodel=medany
+ ifdef CONFIG_FTRACE
+ CFLAGS_REMOVE_alternative.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_cpufeature.o = $(CC_FLAGS_FTRACE)
++CFLAGS_REMOVE_sbi_ecall.o = $(CC_FLAGS_FTRACE)
+ endif
+ ifdef CONFIG_RELOCATABLE
+ CFLAGS_alternative.o += -fno-pie
+ CFLAGS_cpufeature.o += -fno-pie
++CFLAGS_sbi_ecall.o += -fno-pie
+ endif
+ ifdef CONFIG_KASAN
+ KASAN_SANITIZE_alternative.o := n
+ KASAN_SANITIZE_cpufeature.o := n
++KASAN_SANITIZE_sbi_ecall.o := n
+ endif
+ endif
+ 
+@@ -86,7 +90,7 @@ obj-$(CONFIG_DYNAMIC_FTRACE)	+= mcount-dyn.o
+ 
+ obj-$(CONFIG_PERF_EVENTS)	+= perf_callchain.o
+ obj-$(CONFIG_HAVE_PERF_REGS)	+= perf_regs.o
+-obj-$(CONFIG_RISCV_SBI)		+= sbi.o
++obj-$(CONFIG_RISCV_SBI)		+= sbi.o sbi_ecall.o
+ ifeq ($(CONFIG_RISCV_SBI), y)
+ obj-$(CONFIG_SMP)		+= sbi-ipi.o
+ obj-$(CONFIG_SMP) += cpu_ops_sbi.o
+diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
+index a00f7523cb91f..356d5397b2a25 100644
+--- a/arch/riscv/kernel/head.S
++++ b/arch/riscv/kernel/head.S
+@@ -305,6 +305,9 @@ SYM_CODE_START(_start_kernel)
+ #else
+ 	mv a0, a1
+ #endif /* CONFIG_BUILTIN_DTB */
++	/* Set trap vector to spin forever to help debug */
++	la a3, .Lsecondary_park
++	csrw CSR_TVEC, a3
+ 	call setup_vm
+ #ifdef CONFIG_MMU
+ 	la a0, early_pg_dir
+diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
+index dfb28e57d9001..03cd103b84494 100644
+--- a/arch/riscv/kernel/probes/kprobes.c
++++ b/arch/riscv/kernel/probes/kprobes.c
+@@ -29,9 +29,8 @@ static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
+ 
+ 	p->ainsn.api.restore = (unsigned long)p->addr + offset;
+ 
+-	patch_text(p->ainsn.api.insn, &p->opcode, 1);
+-	patch_text((void *)((unsigned long)(p->ainsn.api.insn) + offset),
+-		   &insn, 1);
++	patch_text_nosync(p->ainsn.api.insn, &p->opcode, 1);
++	patch_text_nosync(p->ainsn.api.insn + offset, &insn, 1);
+ }
+ 
+ static void __kprobes arch_prepare_simulate(struct kprobe *p)
+diff --git a/arch/riscv/kernel/sbi.c b/arch/riscv/kernel/sbi.c
+index e66e0999a8005..1989b8cade1b9 100644
+--- a/arch/riscv/kernel/sbi.c
++++ b/arch/riscv/kernel/sbi.c
+@@ -24,51 +24,6 @@ static int (*__sbi_rfence)(int fid, const struct cpumask *cpu_mask,
+ 			   unsigned long start, unsigned long size,
+ 			   unsigned long arg4, unsigned long arg5) __ro_after_init;
+ 
+-struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
+-			unsigned long arg1, unsigned long arg2,
+-			unsigned long arg3, unsigned long arg4,
+-			unsigned long arg5)
+-{
+-	struct sbiret ret;
+-
+-	register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);
+-	register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);
+-	register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);
+-	register uintptr_t a3 asm ("a3") = (uintptr_t)(arg3);
+-	register uintptr_t a4 asm ("a4") = (uintptr_t)(arg4);
+-	register uintptr_t a5 asm ("a5") = (uintptr_t)(arg5);
+-	register uintptr_t a6 asm ("a6") = (uintptr_t)(fid);
+-	register uintptr_t a7 asm ("a7") = (uintptr_t)(ext);
+-	asm volatile ("ecall"
+-		      : "+r" (a0), "+r" (a1)
+-		      : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
+-		      : "memory");
+-	ret.error = a0;
+-	ret.value = a1;
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL(sbi_ecall);
+-
+-int sbi_err_map_linux_errno(int err)
+-{
+-	switch (err) {
+-	case SBI_SUCCESS:
+-		return 0;
+-	case SBI_ERR_DENIED:
+-		return -EPERM;
+-	case SBI_ERR_INVALID_PARAM:
+-		return -EINVAL;
+-	case SBI_ERR_INVALID_ADDRESS:
+-		return -EFAULT;
+-	case SBI_ERR_NOT_SUPPORTED:
+-	case SBI_ERR_FAILURE:
+-	default:
+-		return -ENOTSUPP;
+-	};
+-}
+-EXPORT_SYMBOL(sbi_err_map_linux_errno);
+-
+ #ifdef CONFIG_RISCV_SBI_V01
+ static unsigned long __sbi_v01_cpumask_to_hartmask(const struct cpumask *cpu_mask)
+ {
+@@ -528,17 +483,6 @@ long sbi_probe_extension(int extid)
+ }
+ EXPORT_SYMBOL(sbi_probe_extension);
+ 
+-static long __sbi_base_ecall(int fid)
+-{
+-	struct sbiret ret;
+-
+-	ret = sbi_ecall(SBI_EXT_BASE, fid, 0, 0, 0, 0, 0, 0);
+-	if (!ret.error)
+-		return ret.value;
+-	else
+-		return sbi_err_map_linux_errno(ret.error);
+-}
+-
+ static inline long sbi_get_spec_version(void)
+ {
+ 	return __sbi_base_ecall(SBI_EXT_BASE_GET_SPEC_VERSION);
+diff --git a/arch/riscv/kernel/sbi_ecall.c b/arch/riscv/kernel/sbi_ecall.c
+new file mode 100644
+index 0000000000000..24aabb4fbde3a
+--- /dev/null
++++ b/arch/riscv/kernel/sbi_ecall.c
+@@ -0,0 +1,48 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Copyright (c) 2024 Rivos Inc. */
++
++#include <asm/sbi.h>
++#define CREATE_TRACE_POINTS
++#include <asm/trace.h>
++
++long __sbi_base_ecall(int fid)
++{
++	struct sbiret ret;
++
++	ret = sbi_ecall(SBI_EXT_BASE, fid, 0, 0, 0, 0, 0, 0);
++	if (!ret.error)
++		return ret.value;
++	else
++		return sbi_err_map_linux_errno(ret.error);
++}
++EXPORT_SYMBOL(__sbi_base_ecall);
++
++struct sbiret __sbi_ecall(unsigned long arg0, unsigned long arg1,
++			  unsigned long arg2, unsigned long arg3,
++			  unsigned long arg4, unsigned long arg5,
++			  int fid, int ext)
++{
++	struct sbiret ret;
++
++	trace_sbi_call(ext, fid);
++
++	register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);
++	register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);
++	register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);
++	register uintptr_t a3 asm ("a3") = (uintptr_t)(arg3);
++	register uintptr_t a4 asm ("a4") = (uintptr_t)(arg4);
++	register uintptr_t a5 asm ("a5") = (uintptr_t)(arg5);
++	register uintptr_t a6 asm ("a6") = (uintptr_t)(fid);
++	register uintptr_t a7 asm ("a7") = (uintptr_t)(ext);
++	asm volatile ("ecall"
++		       : "+r" (a0), "+r" (a1)
++		       : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
++		       : "memory");
++	ret.error = a0;
++	ret.value = a1;
++
++	trace_sbi_return(ext, ret.error, ret.value);
++
++	return ret;
++}
++EXPORT_SYMBOL(__sbi_ecall);
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index b62d5a2f4541e..1a76f99ff185a 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -417,7 +417,7 @@ int handle_misaligned_load(struct pt_regs *regs)
+ 
+ 	val.data_u64 = 0;
+ 	if (user_mode(regs)) {
+-		if (raw_copy_from_user(&val, (u8 __user *)addr, len))
++		if (copy_from_user(&val, (u8 __user *)addr, len))
+ 			return -1;
+ 	} else {
+ 		memcpy(&val, (u8 *)addr, len);
+@@ -515,7 +515,7 @@ int handle_misaligned_store(struct pt_regs *regs)
+ 		return -EOPNOTSUPP;
+ 
+ 	if (user_mode(regs)) {
+-		if (raw_copy_to_user((u8 __user *)addr, &val, len))
++		if (copy_to_user((u8 __user *)addr, &val, len))
+ 			return -1;
+ 	} else {
+ 		memcpy((u8 *)addr, &val, len);
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index c5c66f53971af..91346c9da8ef4 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -251,7 +251,7 @@ static void __init setup_bootmem(void)
+ 	 * The size of the linear page mapping may restrict the amount of
+ 	 * usable RAM.
+ 	 */
+-	if (IS_ENABLED(CONFIG_64BIT)) {
++	if (IS_ENABLED(CONFIG_64BIT) && IS_ENABLED(CONFIG_MMU)) {
+ 		max_mapped_addr = __pa(PAGE_OFFSET) + KERN_VIRT_SIZE;
+ 		memblock_cap_memory_range(phys_ram_base,
+ 					  max_mapped_addr - phys_ram_base);
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index 6d88f241dd43a..66ee97ac803de 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -476,8 +476,12 @@ void startup_kernel(void)
+ 	 * before the kernel started. Therefore, in case the two sections
+ 	 * overlap there is no risk of corrupting any data.
+ 	 */
+-	if (kaslr_enabled())
+-		amode31_lma = randomize_within_range(vmlinux.amode31_size, PAGE_SIZE, 0, SZ_2G);
++	if (kaslr_enabled()) {
++		unsigned long amode31_min;
++
++		amode31_min = (unsigned long)_decompressor_end;
++		amode31_lma = randomize_within_range(vmlinux.amode31_size, PAGE_SIZE, amode31_min, SZ_2G);
++	}
+ 	if (!amode31_lma)
+ 		amode31_lma = text_lma - vmlinux.amode31_size;
+ 	physmem_reserve(RR_AMODE31, amode31_lma, vmlinux.amode31_size);
+diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
+index 52bd969b28283..779162c664c4e 100644
+--- a/arch/s390/kernel/vmlinux.lds.S
++++ b/arch/s390/kernel/vmlinux.lds.S
+@@ -59,14 +59,6 @@ SECTIONS
+ 	} :text = 0x0700
+ 
+ 	RO_DATA(PAGE_SIZE)
+-	.data.rel.ro : {
+-		*(.data.rel.ro .data.rel.ro.*)
+-	}
+-	.got : {
+-		__got_start = .;
+-		*(.got)
+-		__got_end = .;
+-	}
+ 
+ 	. = ALIGN(PAGE_SIZE);
+ 	_sdata = .;		/* Start of data section */
+@@ -80,6 +72,15 @@ SECTIONS
+ 	. = ALIGN(PAGE_SIZE);
+ 	__end_ro_after_init = .;
+ 
++	.data.rel.ro : {
++		*(.data.rel.ro .data.rel.ro.*)
++	}
++	.got : {
++		__got_start = .;
++		*(.got)
++		__got_end = .;
++	}
++
+ 	RW_DATA(0x100, PAGE_SIZE, THREAD_SIZE)
+ 	.data.rel : {
+ 		*(.data.rel*)
+diff --git a/arch/um/drivers/line.c b/arch/um/drivers/line.c
+index d82bc3fdb86e7..43d8959cc746f 100644
+--- a/arch/um/drivers/line.c
++++ b/arch/um/drivers/line.c
+@@ -383,6 +383,7 @@ int setup_one_line(struct line *lines, int n, char *init,
+ 			parse_chan_pair(NULL, line, n, opts, error_out);
+ 			err = 0;
+ 		}
++		*error_out = "configured as 'none'";
+ 	} else {
+ 		char *new = kstrdup(init, GFP_KERNEL);
+ 		if (!new) {
+@@ -406,6 +407,7 @@ int setup_one_line(struct line *lines, int n, char *init,
+ 			}
+ 		}
+ 		if (err) {
++			*error_out = "failed to parse channel pair";
+ 			line->init_str = NULL;
+ 			line->valid = 0;
+ 			kfree(new);
+diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
+index c1cb90369915b..8fe4c2b07128e 100644
+--- a/arch/x86/coco/tdx/tdx.c
++++ b/arch/x86/coco/tdx/tdx.c
+@@ -385,7 +385,6 @@ static bool mmio_read(int size, unsigned long addr, unsigned long *val)
+ 		.r12 = size,
+ 		.r13 = EPT_READ,
+ 		.r14 = addr,
+-		.r15 = *val,
+ 	};
+ 
+ 	if (__tdx_hypercall(&args))
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index f25205d047e83..dcac96133cb6a 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4529,6 +4529,25 @@ static enum hybrid_cpu_type adl_get_hybrid_cpu_type(void)
+ 	return HYBRID_INTEL_CORE;
+ }
+ 
++static inline bool erratum_hsw11(struct perf_event *event)
++{
++	return (event->hw.config & INTEL_ARCH_EVENT_MASK) ==
++		X86_CONFIG(.event=0xc0, .umask=0x01);
++}
++
++/*
++ * The HSW11 requires a period larger than 100 which is the same as the BDM11.
++ * A minimum period of 128 is enforced as well for the INST_RETIRED.ALL.
++ *
++ * The message 'interrupt took too long' can be observed on any counter which
++ * was armed with a period < 32 and two events expired in the same NMI.
++ * A minimum period of 32 is enforced for the rest of the events.
++ */
++static void hsw_limit_period(struct perf_event *event, s64 *left)
++{
++	*left = max(*left, erratum_hsw11(event) ? 128 : 32);
++}
++
+ /*
+  * Broadwell:
+  *
+@@ -4546,8 +4565,7 @@ static enum hybrid_cpu_type adl_get_hybrid_cpu_type(void)
+  */
+ static void bdw_limit_period(struct perf_event *event, s64 *left)
+ {
+-	if ((event->hw.config & INTEL_ARCH_EVENT_MASK) ==
+-			X86_CONFIG(.event=0xc0, .umask=0x01)) {
++	if (erratum_hsw11(event)) {
+ 		if (*left < 128)
+ 			*left = 128;
+ 		*left &= ~0x3fULL;
+@@ -5715,8 +5733,22 @@ exra_is_visible(struct kobject *kobj, struct attribute *attr, int i)
+ 	return x86_pmu.version >= 2 ? attr->mode : 0;
+ }
+ 
++static umode_t
++td_is_visible(struct kobject *kobj, struct attribute *attr, int i)
++{
++	/*
++	 * Hide the perf metrics topdown events
++	 * if the feature is not enumerated.
++	 */
++	if (x86_pmu.num_topdown_events)
++		return x86_pmu.intel_cap.perf_metrics ? attr->mode : 0;
++
++	return attr->mode;
++}
++
+ static struct attribute_group group_events_td  = {
+ 	.name = "events",
++	.is_visible = td_is_visible,
+ };
+ 
+ static struct attribute_group group_events_mem = {
+@@ -5918,9 +5950,27 @@ static umode_t hybrid_format_is_visible(struct kobject *kobj,
+ 	return (cpu >= 0) && (pmu->pmu_type & pmu_attr->pmu_type) ? attr->mode : 0;
+ }
+ 
++static umode_t hybrid_td_is_visible(struct kobject *kobj,
++				    struct attribute *attr, int i)
++{
++	struct device *dev = kobj_to_dev(kobj);
++	struct x86_hybrid_pmu *pmu =
++		 container_of(dev_get_drvdata(dev), struct x86_hybrid_pmu, pmu);
++
++	if (!is_attr_for_this_pmu(kobj, attr))
++		return 0;
++
++
++	/* Only the big core supports perf metrics */
++	if (pmu->pmu_type == hybrid_big)
++		return pmu->intel_cap.perf_metrics ? attr->mode : 0;
++
++	return attr->mode;
++}
++
+ static struct attribute_group hybrid_group_events_td  = {
+ 	.name		= "events",
+-	.is_visible	= hybrid_events_is_visible,
++	.is_visible	= hybrid_td_is_visible,
+ };
+ 
+ static struct attribute_group hybrid_group_events_mem = {
+@@ -6573,6 +6623,7 @@ __init int intel_pmu_init(void)
+ 
+ 		x86_pmu.hw_config = hsw_hw_config;
+ 		x86_pmu.get_event_constraints = hsw_get_event_constraints;
++		x86_pmu.limit_period = hsw_limit_period;
+ 		x86_pmu.lbr_double_abort = true;
+ 		extra_attr = boot_cpu_has(X86_FEATURE_RTM) ?
+ 			hsw_format_attr : nhm_format_attr;
+diff --git a/arch/x86/include/asm/fpu/types.h b/arch/x86/include/asm/fpu/types.h
+index eb17f31b06d25..de16862bf230b 100644
+--- a/arch/x86/include/asm/fpu/types.h
++++ b/arch/x86/include/asm/fpu/types.h
+@@ -591,6 +591,13 @@ struct fpu_state_config {
+ 	 * even without XSAVE support, i.e. legacy features FP + SSE
+ 	 */
+ 	u64 legacy_features;
++	/*
++	 * @independent_features:
++	 *
++	 * Features that are supported by XSAVES, but not managed as part of
++	 * the FPU core, such as LBR
++	 */
++	u64 independent_features;
+ };
+ 
+ /* FPU state configuration information */
+diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
+index cc6b8e087192e..9dab85aba7afd 100644
+--- a/arch/x86/include/asm/page_64.h
++++ b/arch/x86/include/asm/page_64.h
+@@ -17,6 +17,7 @@ extern unsigned long phys_base;
+ extern unsigned long page_offset_base;
+ extern unsigned long vmalloc_base;
+ extern unsigned long vmemmap_base;
++extern unsigned long physmem_end;
+ 
+ static __always_inline unsigned long __phys_addr_nodebug(unsigned long x)
+ {
+diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
+index 9053dfe9fa03f..a98e53491a4e6 100644
+--- a/arch/x86/include/asm/pgtable_64_types.h
++++ b/arch/x86/include/asm/pgtable_64_types.h
+@@ -140,6 +140,10 @@ extern unsigned int ptrs_per_p4d;
+ # define VMEMMAP_START		__VMEMMAP_BASE_L4
+ #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */
+ 
++#ifdef CONFIG_RANDOMIZE_MEMORY
++# define PHYSMEM_END		physmem_end
++#endif
++
+ /*
+  * End of the region for which vmalloc page tables are pre-allocated.
+  * For non-KMSAN builds, this is the same as VMALLOC_END.
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 66fd4b2a37a3a..373638691cd48 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -1775,12 +1775,9 @@ static __init void apic_set_fixmap(bool read_apic);
+ 
+ static __init void x2apic_disable(void)
+ {
+-	u32 x2apic_id, state = x2apic_state;
++	u32 x2apic_id;
+ 
+-	x2apic_mode = 0;
+-	x2apic_state = X2APIC_DISABLED;
+-
+-	if (state != X2APIC_ON)
++	if (x2apic_state < X2APIC_ON)
+ 		return;
+ 
+ 	x2apic_id = read_apic_id();
+@@ -1793,6 +1790,10 @@ static __init void x2apic_disable(void)
+ 	}
+ 
+ 	__x2apic_disable();
++
++	x2apic_mode = 0;
++	x2apic_state = X2APIC_DISABLED;
++
+ 	/*
+ 	 * Don't reread the APIC ID as it was already done from
+ 	 * check_x2apic() and the APIC driver still is a x2APIC variant,
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index c5a026fee5e06..1339f8328db5a 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -788,6 +788,9 @@ void __init fpu__init_system_xstate(unsigned int legacy_size)
+ 		goto out_disable;
+ 	}
+ 
++	fpu_kernel_cfg.independent_features = fpu_kernel_cfg.max_features &
++					      XFEATURE_MASK_INDEPENDENT;
++
+ 	/*
+ 	 * Clear XSAVE features that are disabled in the normal CPUID.
+ 	 */
+diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
+index 05df04f396282..f2611145f3caa 100644
+--- a/arch/x86/kernel/fpu/xstate.h
++++ b/arch/x86/kernel/fpu/xstate.h
+@@ -62,9 +62,9 @@ static inline u64 xfeatures_mask_supervisor(void)
+ static inline u64 xfeatures_mask_independent(void)
+ {
+ 	if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR))
+-		return XFEATURE_MASK_INDEPENDENT & ~XFEATURE_MASK_LBR;
++		return fpu_kernel_cfg.independent_features & ~XFEATURE_MASK_LBR;
+ 
+-	return XFEATURE_MASK_INDEPENDENT;
++	return fpu_kernel_cfg.independent_features;
+ }
+ 
+ /* XSAVE/XRSTOR wrapper functions */
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index c95d3900fe564..0357f7af55966 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -2863,6 +2863,12 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_CSTAR:
+ 		msr_info->data = svm->vmcb01.ptr->save.cstar;
+ 		break;
++	case MSR_GS_BASE:
++		msr_info->data = svm->vmcb01.ptr->save.gs.base;
++		break;
++	case MSR_FS_BASE:
++		msr_info->data = svm->vmcb01.ptr->save.fs.base;
++		break;
+ 	case MSR_KERNEL_GS_BASE:
+ 		msr_info->data = svm->vmcb01.ptr->save.kernel_gs_base;
+ 		break;
+@@ -3088,6 +3094,12 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ 	case MSR_CSTAR:
+ 		svm->vmcb01.ptr->save.cstar = data;
+ 		break;
++	case MSR_GS_BASE:
++		svm->vmcb01.ptr->save.gs.base = data;
++		break;
++	case MSR_FS_BASE:
++		svm->vmcb01.ptr->save.fs.base = data;
++		break;
+ 	case MSR_KERNEL_GS_BASE:
+ 		svm->vmcb01.ptr->save.kernel_gs_base = data;
+ 		break;
+@@ -5211,6 +5223,9 @@ static __init void svm_set_cpu_caps(void)
+ 
+ 	/* CPUID 0x8000001F (SME/SEV features) */
+ 	sev_set_cpu_caps();
++
++	/* Don't advertise Bus Lock Detect to guest if SVM support is absent */
++	kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT);
+ }
+ 
+ static __init int svm_hardware_setup(void)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0b7adf3bc58a6..d8fc894706eff 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -6040,7 +6040,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ 		if (copy_from_user(&events, argp, sizeof(struct kvm_vcpu_events)))
+ 			break;
+ 
++		kvm_vcpu_srcu_read_lock(vcpu);
+ 		r = kvm_vcpu_ioctl_x86_set_vcpu_events(vcpu, &events);
++		kvm_vcpu_srcu_read_unlock(vcpu);
+ 		break;
+ 	}
+ 	case KVM_GET_DEBUGREGS: {
+diff --git a/arch/x86/lib/iomem.c b/arch/x86/lib/iomem.c
+index e0411a3774d49..5eecb45d05d5d 100644
+--- a/arch/x86/lib/iomem.c
++++ b/arch/x86/lib/iomem.c
+@@ -25,6 +25,9 @@ static __always_inline void rep_movs(void *to, const void *from, size_t n)
+ 
+ static void string_memcpy_fromio(void *to, const volatile void __iomem *from, size_t n)
+ {
++	const void *orig_to = to;
++	const size_t orig_n = n;
++
+ 	if (unlikely(!n))
+ 		return;
+ 
+@@ -39,7 +42,7 @@ static void string_memcpy_fromio(void *to, const volatile void __iomem *from, si
+ 	}
+ 	rep_movs(to, (const void *)from, n);
+ 	/* KMSAN must treat values read from devices as initialized. */
+-	kmsan_unpoison_memory(to, n);
++	kmsan_unpoison_memory(orig_to, orig_n);
+ }
+ 
+ static void string_memcpy_toio(volatile void __iomem *to, const void *from, size_t n)
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 7e177856ee4fe..59087c8712469 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -950,8 +950,12 @@ static void update_end_of_memory_vars(u64 start, u64 size)
+ int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
+ 	      struct mhp_params *params)
+ {
++	unsigned long end = ((start_pfn + nr_pages) << PAGE_SHIFT) - 1;
+ 	int ret;
+ 
++	if (WARN_ON_ONCE(end > PHYSMEM_END))
++		return -ERANGE;
++
+ 	ret = __add_pages(nid, start_pfn, nr_pages, params);
+ 	WARN_ON_ONCE(ret);
+ 
+diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
+index 37db264866b64..230f1dee4f095 100644
+--- a/arch/x86/mm/kaslr.c
++++ b/arch/x86/mm/kaslr.c
+@@ -47,13 +47,24 @@ static const unsigned long vaddr_end = CPU_ENTRY_AREA_BASE;
+  */
+ static __initdata struct kaslr_memory_region {
+ 	unsigned long *base;
++	unsigned long *end;
+ 	unsigned long size_tb;
+ } kaslr_regions[] = {
+-	{ &page_offset_base, 0 },
+-	{ &vmalloc_base, 0 },
+-	{ &vmemmap_base, 0 },
++	{
++		.base	= &page_offset_base,
++		.end	= &physmem_end,
++	},
++	{
++		.base	= &vmalloc_base,
++	},
++	{
++		.base	= &vmemmap_base,
++	},
+ };
+ 
++/* The end of the possible address space for physical memory */
++unsigned long physmem_end __ro_after_init;
++
+ /* Get size in bytes used by the memory region */
+ static inline unsigned long get_padding(struct kaslr_memory_region *region)
+ {
+@@ -82,6 +93,8 @@ void __init kernel_randomize_memory(void)
+ 	BUILD_BUG_ON(vaddr_end != CPU_ENTRY_AREA_BASE);
+ 	BUILD_BUG_ON(vaddr_end > __START_KERNEL_map);
+ 
++	/* Preset the end of the possible address space for physical memory */
++	physmem_end = ((1ULL << MAX_PHYSMEM_BITS) - 1);
+ 	if (!kaslr_memory_enabled())
+ 		return;
+ 
+@@ -128,11 +141,18 @@ void __init kernel_randomize_memory(void)
+ 		vaddr += entropy;
+ 		*kaslr_regions[i].base = vaddr;
+ 
++		/* Calculate the end of the region */
++		vaddr += get_padding(&kaslr_regions[i]);
+ 		/*
+-		 * Jump the region and add a minimum padding based on
+-		 * randomization alignment.
++		 * KASLR trims the maximum possible size of the
++		 * direct-map. Update the physmem_end boundary.
++		 * No rounding required as the region starts
++		 * PUD aligned and size is in units of TB.
+ 		 */
+-		vaddr += get_padding(&kaslr_regions[i]);
++		if (kaslr_regions[i].end)
++			*kaslr_regions[i].end = __pa_nodebug(vaddr - 1);
++
++		/* Add a minimum padding based on randomization alignment. */
+ 		vaddr = round_up(vaddr + 1, PUD_SIZE);
+ 		remain_entropy -= entropy;
+ 	}
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index bfdf5f45b1370..851ec8f1363a8 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -241,7 +241,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+  *
+  * Returns a pointer to a PTE on success, or NULL on failure.
+  */
+-static pte_t *pti_user_pagetable_walk_pte(unsigned long address)
++static pte_t *pti_user_pagetable_walk_pte(unsigned long address, bool late_text)
+ {
+ 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+ 	pmd_t *pmd;
+@@ -251,10 +251,15 @@ static pte_t *pti_user_pagetable_walk_pte(unsigned long address)
+ 	if (!pmd)
+ 		return NULL;
+ 
+-	/* We can't do anything sensible if we hit a large mapping. */
++	/* Large PMD mapping found */
+ 	if (pmd_leaf(*pmd)) {
+-		WARN_ON(1);
+-		return NULL;
++		/* Clear the PMD if we hit a large mapping from the first round */
++		if (late_text) {
++			set_pmd(pmd, __pmd(0));
++		} else {
++			WARN_ON_ONCE(1);
++			return NULL;
++		}
+ 	}
+ 
+ 	if (pmd_none(*pmd)) {
+@@ -283,7 +288,7 @@ static void __init pti_setup_vsyscall(void)
+ 	if (!pte || WARN_ON(level != PG_LEVEL_4K) || pte_none(*pte))
+ 		return;
+ 
+-	target_pte = pti_user_pagetable_walk_pte(VSYSCALL_ADDR);
++	target_pte = pti_user_pagetable_walk_pte(VSYSCALL_ADDR, false);
+ 	if (WARN_ON(!target_pte))
+ 		return;
+ 
+@@ -301,7 +306,7 @@ enum pti_clone_level {
+ 
+ static void
+ pti_clone_pgtable(unsigned long start, unsigned long end,
+-		  enum pti_clone_level level)
++		  enum pti_clone_level level, bool late_text)
+ {
+ 	unsigned long addr;
+ 
+@@ -390,7 +395,7 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
+ 				return;
+ 
+ 			/* Allocate PTE in the user page-table */
+-			target_pte = pti_user_pagetable_walk_pte(addr);
++			target_pte = pti_user_pagetable_walk_pte(addr, late_text);
+ 			if (WARN_ON(!target_pte))
+ 				return;
+ 
+@@ -452,7 +457,7 @@ static void __init pti_clone_user_shared(void)
+ 		phys_addr_t pa = per_cpu_ptr_to_phys((void *)va);
+ 		pte_t *target_pte;
+ 
+-		target_pte = pti_user_pagetable_walk_pte(va);
++		target_pte = pti_user_pagetable_walk_pte(va, false);
+ 		if (WARN_ON(!target_pte))
+ 			return;
+ 
+@@ -475,7 +480,7 @@ static void __init pti_clone_user_shared(void)
+ 	start = CPU_ENTRY_AREA_BASE;
+ 	end   = start + (PAGE_SIZE * CPU_ENTRY_AREA_PAGES);
+ 
+-	pti_clone_pgtable(start, end, PTI_CLONE_PMD);
++	pti_clone_pgtable(start, end, PTI_CLONE_PMD, false);
+ }
+ #endif /* CONFIG_X86_64 */
+ 
+@@ -492,11 +497,11 @@ static void __init pti_setup_espfix64(void)
+ /*
+  * Clone the populated PMDs of the entry text and force it RO.
+  */
+-static void pti_clone_entry_text(void)
++static void pti_clone_entry_text(bool late)
+ {
+ 	pti_clone_pgtable((unsigned long) __entry_text_start,
+ 			  (unsigned long) __entry_text_end,
+-			  PTI_LEVEL_KERNEL_IMAGE);
++			  PTI_LEVEL_KERNEL_IMAGE, late);
+ }
+ 
+ /*
+@@ -571,7 +576,7 @@ static void pti_clone_kernel_text(void)
+ 	 * pti_set_kernel_image_nonglobal() did to clear the
+ 	 * global bit.
+ 	 */
+-	pti_clone_pgtable(start, end_clone, PTI_LEVEL_KERNEL_IMAGE);
++	pti_clone_pgtable(start, end_clone, PTI_LEVEL_KERNEL_IMAGE, false);
+ 
+ 	/*
+ 	 * pti_clone_pgtable() will set the global bit in any PMDs
+@@ -638,8 +643,15 @@ void __init pti_init(void)
+ 
+ 	/* Undo all global bits from the init pagetables in head_64.S: */
+ 	pti_set_kernel_image_nonglobal();
++
+ 	/* Replace some of the global bits just for shared entry text: */
+-	pti_clone_entry_text();
++	/*
++	 * This is very early in boot. Device and Late initcalls can do
++	 * modprobe before free_initmem() and mark_readonly(). This
++	 * pti_clone_entry_text() allows those user-mode-helpers to function,
++	 * but notably the text is still RW.
++	 */
++	pti_clone_entry_text(false);
+ 	pti_setup_espfix64();
+ 	pti_setup_vsyscall();
+ }
+@@ -656,10 +668,11 @@ void pti_finalize(void)
+ 	if (!boot_cpu_has(X86_FEATURE_PTI))
+ 		return;
+ 	/*
+-	 * We need to clone everything (again) that maps parts of the
+-	 * kernel image.
++	 * This is after free_initmem() (all initcalls are done) and we've done
++	 * mark_readonly(). Text is now NX which might've split some PMDs
++	 * relative to the early clone.
+ 	 */
+-	pti_clone_entry_text();
++	pti_clone_entry_text(true);
+ 	pti_clone_kernel_text();
+ 
+ 	debug_checkwx_user();
+diff --git a/block/bio.c b/block/bio.c
+index e9e809a63c597..c7a4bc05c43e7 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1630,8 +1630,18 @@ void bio_endio(struct bio *bio)
+ 		goto again;
+ 	}
+ 
+-	/* release cgroup info */
+-	bio_uninit(bio);
++#ifdef CONFIG_BLK_CGROUP
++	/*
++	 * Release cgroup info.  We shouldn't have to do this here, but quite
++	 * a few callers of bio_init fail to call bio_uninit, so we cover up
++	 * for that here at least for now.
++	 */
++	if (bio->bi_blkg) {
++		blkg_put(bio->bi_blkg);
++		bio->bi_blkg = NULL;
++	}
++#endif
++
+ 	if (bio->bi_end_io)
+ 		bio->bi_end_io(bio);
+ }
+diff --git a/drivers/accel/habanalabs/gaudi2/gaudi2_security.c b/drivers/accel/habanalabs/gaudi2/gaudi2_security.c
+index 34bf80c5a44bf..307ccb912ccd6 100644
+--- a/drivers/accel/habanalabs/gaudi2/gaudi2_security.c
++++ b/drivers/accel/habanalabs/gaudi2/gaudi2_security.c
+@@ -479,6 +479,7 @@ static const u32 gaudi2_pb_dcr0_edma0_unsecured_regs[] = {
+ 	mmDCORE0_EDMA0_CORE_CTX_TE_NUMROWS,
+ 	mmDCORE0_EDMA0_CORE_CTX_IDX,
+ 	mmDCORE0_EDMA0_CORE_CTX_IDX_INC,
++	mmDCORE0_EDMA0_CORE_WR_COMP_MAX_OUTSTAND,
+ 	mmDCORE0_EDMA0_CORE_RD_LBW_RATE_LIM_CFG,
+ 	mmDCORE0_EDMA0_QM_CQ_CFG0_0,
+ 	mmDCORE0_EDMA0_QM_CQ_CFG0_1,
+diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
+index 7a0dd35d62c94..5f5a01ccfc3ab 100644
+--- a/drivers/acpi/acpi_processor.c
++++ b/drivers/acpi/acpi_processor.c
+@@ -400,7 +400,7 @@ static int acpi_processor_add(struct acpi_device *device,
+ 
+ 	result = acpi_processor_get_info(device);
+ 	if (result) /* Processor is not physically present or unavailable */
+-		return 0;
++		goto err_clear_driver_data;
+ 
+ 	BUG_ON(pr->id >= nr_cpu_ids);
+ 
+@@ -415,7 +415,7 @@ static int acpi_processor_add(struct acpi_device *device,
+ 			"BIOS reported wrong ACPI id %d for the processor\n",
+ 			pr->id);
+ 		/* Give up, but do not abort the namespace scan. */
+-		goto err;
++		goto err_clear_driver_data;
+ 	}
+ 	/*
+ 	 * processor_device_array is not cleared on errors to allow buggy BIOS
+@@ -427,12 +427,12 @@ static int acpi_processor_add(struct acpi_device *device,
+ 	dev = get_cpu_device(pr->id);
+ 	if (!dev) {
+ 		result = -ENODEV;
+-		goto err;
++		goto err_clear_per_cpu;
+ 	}
+ 
+ 	result = acpi_bind_one(dev, device);
+ 	if (result)
+-		goto err;
++		goto err_clear_per_cpu;
+ 
+ 	pr->dev = dev;
+ 
+@@ -443,10 +443,11 @@ static int acpi_processor_add(struct acpi_device *device,
+ 	dev_err(dev, "Processor driver could not be attached\n");
+ 	acpi_unbind_one(dev);
+ 
+- err:
+-	free_cpumask_var(pr->throttling.shared_cpu_map);
+-	device->driver_data = NULL;
++ err_clear_per_cpu:
+ 	per_cpu(processors, pr->id) = NULL;
++ err_clear_driver_data:
++	device->driver_data = NULL;
++	free_cpumask_var(pr->throttling.shared_cpu_map);
+  err_free_pr:
+ 	kfree(pr);
+ 	return result;
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 2d0a24a565084..2ca71b1d36780 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -3342,6 +3342,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 		 */
+ 		copy_size = object_offset - user_offset;
+ 		if (copy_size && (user_offset > object_offset ||
++				object_offset > tr->data_size ||
+ 				binder_alloc_copy_user_to_buffer(
+ 					&target_proc->alloc,
+ 					t->buffer, user_offset,
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 74b59b78d278c..71801001ee09f 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5583,8 +5583,10 @@ struct ata_host *ata_host_alloc(struct device *dev, int max_ports)
+ 	}
+ 
+ 	dr = devres_alloc(ata_devres_release, 0, GFP_KERNEL);
+-	if (!dr)
++	if (!dr) {
++		kfree(host);
+ 		goto err_out;
++	}
+ 
+ 	devres_add(dev, dr);
+ 	dev_set_drvdata(dev, host);
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 4e08476011039..4116ae0887191 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -242,10 +242,17 @@ void ata_scsi_set_sense_information(struct ata_device *dev,
+  */
+ static void ata_scsi_set_passthru_sense_fields(struct ata_queued_cmd *qc)
+ {
++	struct ata_device *dev = qc->dev;
+ 	struct scsi_cmnd *cmd = qc->scsicmd;
+ 	struct ata_taskfile *tf = &qc->result_tf;
+ 	unsigned char *sb = cmd->sense_buffer;
+ 
++	if (!(qc->flags & ATA_QCFLAG_RTF_FILLED)) {
++		ata_dev_dbg(dev,
++			    "missing result TF: can't set ATA PT sense fields\n");
++		return;
++	}
++
+ 	if ((sb[0] & 0x7f) >= 0x72) {
+ 		unsigned char *desc;
+ 		u8 len;
+@@ -924,12 +931,16 @@ static void ata_to_sense_error(unsigned id, u8 drv_stat, u8 drv_err, u8 *sk,
+  */
+ static void ata_gen_passthru_sense(struct ata_queued_cmd *qc)
+ {
++	struct ata_device *dev = qc->dev;
+ 	struct scsi_cmnd *cmd = qc->scsicmd;
+ 	struct ata_taskfile *tf = &qc->result_tf;
+-	unsigned char *sb = cmd->sense_buffer;
+ 	u8 sense_key, asc, ascq;
+ 
+-	memset(sb, 0, SCSI_SENSE_BUFFERSIZE);
++	if (!(qc->flags & ATA_QCFLAG_RTF_FILLED)) {
++		ata_dev_dbg(dev,
++			    "missing result TF: can't generate ATA PT sense data\n");
++		return;
++	}
+ 
+ 	/*
+ 	 * Use ata_to_sense_error() to map status register bits
+@@ -976,14 +987,19 @@ static void ata_gen_ata_sense(struct ata_queued_cmd *qc)
+ 	u64 block;
+ 	u8 sense_key, asc, ascq;
+ 
+-	memset(sb, 0, SCSI_SENSE_BUFFERSIZE);
+-
+ 	if (ata_dev_disabled(dev)) {
+ 		/* Device disabled after error recovery */
+ 		/* LOGICAL UNIT NOT READY, HARD RESET REQUIRED */
+ 		ata_scsi_set_sense(dev, cmd, NOT_READY, 0x04, 0x21);
+ 		return;
+ 	}
++
++	if (!(qc->flags & ATA_QCFLAG_RTF_FILLED)) {
++		ata_dev_dbg(dev,
++			    "missing result TF: can't generate sense data\n");
++		return;
++	}
++
+ 	/* Use ata_to_sense_error() to map status register bits
+ 	 * onto sense key, asc & ascq.
+ 	 */
+diff --git a/drivers/ata/pata_macio.c b/drivers/ata/pata_macio.c
+index 99fc5d9d95d7a..cac022eb1492f 100644
+--- a/drivers/ata/pata_macio.c
++++ b/drivers/ata/pata_macio.c
+@@ -554,7 +554,8 @@ static enum ata_completion_errors pata_macio_qc_prep(struct ata_queued_cmd *qc)
+ 
+ 		while (sg_len) {
+ 			/* table overflow should never happen */
+-			BUG_ON (pi++ >= MAX_DCMDS);
++			if (WARN_ON_ONCE(pi >= MAX_DCMDS))
++				return AC_ERR_SYSTEM;
+ 
+ 			len = (sg_len < MAX_DBDMA_SEG) ? sg_len : MAX_DBDMA_SEG;
+ 			table->command = cpu_to_le16(write ? OUTPUT_MORE: INPUT_MORE);
+@@ -566,11 +567,13 @@ static enum ata_completion_errors pata_macio_qc_prep(struct ata_queued_cmd *qc)
+ 			addr += len;
+ 			sg_len -= len;
+ 			++table;
++			++pi;
+ 		}
+ 	}
+ 
+ 	/* Should never happen according to Tejun */
+-	BUG_ON(!pi);
++	if (WARN_ON_ONCE(!pi))
++		return AC_ERR_SYSTEM;
+ 
+ 	/* Convert the last command to an input/output */
+ 	table--;
+diff --git a/drivers/base/devres.c b/drivers/base/devres.c
+index 8d709dbd4e0c1..e9b0d94aeabd9 100644
+--- a/drivers/base/devres.c
++++ b/drivers/base/devres.c
+@@ -567,6 +567,7 @@ void * devres_open_group(struct device *dev, void *id, gfp_t gfp)
+ 	grp->id = grp;
+ 	if (id)
+ 		grp->id = id;
++	grp->color = 0;
+ 
+ 	spin_lock_irqsave(&dev->devres_lock, flags);
+ 	add_dr(dev, &grp->node[0]);
+diff --git a/drivers/base/regmap/regcache-maple.c b/drivers/base/regmap/regcache-maple.c
+index e424334048549..4c034c8131267 100644
+--- a/drivers/base/regmap/regcache-maple.c
++++ b/drivers/base/regmap/regcache-maple.c
+@@ -110,7 +110,8 @@ static int regcache_maple_drop(struct regmap *map, unsigned int min,
+ 	struct maple_tree *mt = map->cache;
+ 	MA_STATE(mas, mt, min, max);
+ 	unsigned long *entry, *lower, *upper;
+-	unsigned long lower_index, lower_last;
++	/* initialized to work around false-positive -Wuninitialized warning */
++	unsigned long lower_index = 0, lower_last = 0;
+ 	unsigned long upper_index, upper_last;
+ 	int ret = 0;
+ 
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 3b58839321333..fc001e9f95f61 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -2664,6 +2664,8 @@ static int ublk_ctrl_start_recovery(struct ublk_device *ub,
+ 	mutex_lock(&ub->mutex);
+ 	if (!ublk_can_use_recovery(ub))
+ 		goto out_unlock;
++	if (!ub->nr_queues_ready)
++		goto out_unlock;
+ 	/*
+ 	 * START_RECOVERY is only allowd after:
+ 	 *
+diff --git a/drivers/bluetooth/btnxpuart.c b/drivers/bluetooth/btnxpuart.c
+index eeba2d26d1cb9..5890ecd8e9488 100644
+--- a/drivers/bluetooth/btnxpuart.c
++++ b/drivers/bluetooth/btnxpuart.c
+@@ -1326,8 +1326,10 @@ static int btnxpuart_close(struct hci_dev *hdev)
+ 
+ 	serdev_device_close(nxpdev->serdev);
+ 	skb_queue_purge(&nxpdev->txq);
+-	kfree_skb(nxpdev->rx_skb);
+-	nxpdev->rx_skb = NULL;
++	if (!IS_ERR_OR_NULL(nxpdev->rx_skb)) {
++		kfree_skb(nxpdev->rx_skb);
++		nxpdev->rx_skb = NULL;
++	}
+ 	clear_bit(BTNXPUART_SERDEV_OPEN, &nxpdev->tx_state);
+ 	return 0;
+ }
+@@ -1342,8 +1344,10 @@ static int btnxpuart_flush(struct hci_dev *hdev)
+ 
+ 	cancel_work_sync(&nxpdev->tx_work);
+ 
+-	kfree_skb(nxpdev->rx_skb);
+-	nxpdev->rx_skb = NULL;
++	if (!IS_ERR_OR_NULL(nxpdev->rx_skb)) {
++		kfree_skb(nxpdev->rx_skb);
++		nxpdev->rx_skb = NULL;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 34c36f0f781ea..c5606a62f230a 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -1090,6 +1090,7 @@ static void qca_controller_memdump(struct work_struct *work)
+ 				qca->memdump_state = QCA_MEMDUMP_COLLECTED;
+ 				cancel_delayed_work(&qca->ctrl_memdump_timeout);
+ 				clear_bit(QCA_MEMDUMP_COLLECTION, &qca->flags);
++				clear_bit(QCA_IBS_DISABLED, &qca->flags);
+ 				mutex_unlock(&qca->hci_memdump_lock);
+ 				return;
+ 			}
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index c51647e37df8e..25a7b4b15c56a 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -40,7 +40,7 @@
+ 
+ #define PLL_USER_CTL(p)		((p)->offset + (p)->regs[PLL_OFF_USER_CTL])
+ # define PLL_POST_DIV_SHIFT	8
+-# define PLL_POST_DIV_MASK(p)	GENMASK((p)->width, 0)
++# define PLL_POST_DIV_MASK(p)	GENMASK((p)->width - 1, 0)
+ # define PLL_ALPHA_EN		BIT(24)
+ # define PLL_ALPHA_MODE		BIT(25)
+ # define PLL_VCO_SHIFT		20
+@@ -1505,8 +1505,8 @@ clk_trion_pll_postdiv_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	}
+ 
+ 	return regmap_update_bits(regmap, PLL_USER_CTL(pll),
+-				  PLL_POST_DIV_MASK(pll) << PLL_POST_DIV_SHIFT,
+-				  val << PLL_POST_DIV_SHIFT);
++				  PLL_POST_DIV_MASK(pll) << pll->post_div_shift,
++				  val << pll->post_div_shift);
+ }
+ 
+ const struct clk_ops clk_alpha_pll_postdiv_trion_ops = {
+diff --git a/drivers/clk/qcom/clk-rcg.h b/drivers/clk/qcom/clk-rcg.h
+index d7414361e432e..8e0f3372dc7a8 100644
+--- a/drivers/clk/qcom/clk-rcg.h
++++ b/drivers/clk/qcom/clk-rcg.h
+@@ -198,6 +198,7 @@ extern const struct clk_ops clk_byte2_ops;
+ extern const struct clk_ops clk_pixel_ops;
+ extern const struct clk_ops clk_gfx3d_ops;
+ extern const struct clk_ops clk_rcg2_shared_ops;
++extern const struct clk_ops clk_rcg2_shared_no_init_park_ops;
+ extern const struct clk_ops clk_dp_ops;
+ 
+ struct clk_rcg_dfs_data {
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index 30b19bd39d087..bf26c5448f006 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -1348,6 +1348,36 @@ const struct clk_ops clk_rcg2_shared_ops = {
+ };
+ EXPORT_SYMBOL_GPL(clk_rcg2_shared_ops);
+ 
++static int clk_rcg2_shared_no_init_park(struct clk_hw *hw)
++{
++	struct clk_rcg2 *rcg = to_clk_rcg2(hw);
++
++	/*
++	 * Read the config register so that the parent is properly mapped at
++	 * registration time.
++	 */
++	regmap_read(rcg->clkr.regmap, rcg->cmd_rcgr + CFG_REG, &rcg->parked_cfg);
++
++	return 0;
++}
++
++/*
++ * Like clk_rcg2_shared_ops but skip the init so that the clk frequency is left
++ * unchanged at registration time.
++ */
++const struct clk_ops clk_rcg2_shared_no_init_park_ops = {
++	.init = clk_rcg2_shared_no_init_park,
++	.enable = clk_rcg2_shared_enable,
++	.disable = clk_rcg2_shared_disable,
++	.get_parent = clk_rcg2_shared_get_parent,
++	.set_parent = clk_rcg2_shared_set_parent,
++	.recalc_rate = clk_rcg2_shared_recalc_rate,
++	.determine_rate = clk_rcg2_determine_rate,
++	.set_rate = clk_rcg2_shared_set_rate,
++	.set_rate_and_parent = clk_rcg2_shared_set_rate_and_parent,
++};
++EXPORT_SYMBOL_GPL(clk_rcg2_shared_no_init_park_ops);
++
+ /* Common APIs to be used for DFS based RCGR */
+ static void clk_rcg2_dfs_populate_freq(struct clk_hw *hw, unsigned int l,
+ 				       struct freq_tbl *f)
+diff --git a/drivers/clk/qcom/gcc-ipq9574.c b/drivers/clk/qcom/gcc-ipq9574.c
+index f8b9a1e93bef2..cdbbf2cc9c5d1 100644
+--- a/drivers/clk/qcom/gcc-ipq9574.c
++++ b/drivers/clk/qcom/gcc-ipq9574.c
+@@ -65,7 +65,7 @@ static const struct clk_parent_data gcc_sleep_clk_data[] = {
+ 
+ static struct clk_alpha_pll gpll0_main = {
+ 	.offset = 0x20000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT_EVO],
+ 	.clkr = {
+ 		.enable_reg = 0x0b000,
+ 		.enable_mask = BIT(0),
+@@ -93,7 +93,7 @@ static struct clk_fixed_factor gpll0_out_main_div2 = {
+ 
+ static struct clk_alpha_pll_postdiv gpll0 = {
+ 	.offset = 0x20000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT_EVO],
+ 	.width = 4,
+ 	.clkr.hw.init = &(const struct clk_init_data) {
+ 		.name = "gpll0",
+@@ -107,7 +107,7 @@ static struct clk_alpha_pll_postdiv gpll0 = {
+ 
+ static struct clk_alpha_pll gpll4_main = {
+ 	.offset = 0x22000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT_EVO],
+ 	.clkr = {
+ 		.enable_reg = 0x0b000,
+ 		.enable_mask = BIT(2),
+@@ -122,7 +122,7 @@ static struct clk_alpha_pll gpll4_main = {
+ 
+ static struct clk_alpha_pll_postdiv gpll4 = {
+ 	.offset = 0x22000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT_EVO],
+ 	.width = 4,
+ 	.clkr.hw.init = &(const struct clk_init_data) {
+ 		.name = "gpll4",
+@@ -136,7 +136,7 @@ static struct clk_alpha_pll_postdiv gpll4 = {
+ 
+ static struct clk_alpha_pll gpll2_main = {
+ 	.offset = 0x21000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT_EVO],
+ 	.clkr = {
+ 		.enable_reg = 0x0b000,
+ 		.enable_mask = BIT(1),
+@@ -151,7 +151,7 @@ static struct clk_alpha_pll gpll2_main = {
+ 
+ static struct clk_alpha_pll_postdiv gpll2 = {
+ 	.offset = 0x21000,
+-	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT_EVO],
+ 	.width = 4,
+ 	.clkr.hw.init = &(const struct clk_init_data) {
+ 		.name = "gpll2",
+diff --git a/drivers/clk/qcom/gcc-sm8550.c b/drivers/clk/qcom/gcc-sm8550.c
+index 26d7349e76424..eae42f756c13a 100644
+--- a/drivers/clk/qcom/gcc-sm8550.c
++++ b/drivers/clk/qcom/gcc-sm8550.c
+@@ -536,7 +536,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s0_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -551,7 +551,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s1_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -566,7 +566,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s2_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -581,7 +581,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s3_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -596,7 +596,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s4_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -611,7 +611,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s5_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -626,7 +626,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s6_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -641,7 +641,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s7_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -656,7 +656,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s8_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -671,7 +671,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s9_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -700,7 +700,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s0_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
+@@ -717,7 +717,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s1_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
+@@ -750,7 +750,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s2_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = {
+@@ -767,7 +767,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s3_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
+@@ -784,7 +784,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s4_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
+@@ -801,7 +801,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s5_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
+@@ -818,7 +818,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s6_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s6_clk_src = {
+@@ -835,7 +835,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s7_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s7_clk_src = {
+@@ -852,7 +852,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s0_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = {
+@@ -869,7 +869,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s1_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = {
+@@ -886,7 +886,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s2_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = {
+@@ -903,7 +903,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s3_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = {
+@@ -920,7 +920,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s4_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = {
+@@ -937,7 +937,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s5_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = {
+@@ -975,7 +975,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s6_clk_src_init = {
+ 	.parent_data = gcc_parent_data_8,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_8),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s6_clk_src = {
+@@ -992,7 +992,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s7_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s7_clk_src = {
+@@ -1159,7 +1159,7 @@ static struct clk_rcg2 gcc_usb30_prim_master_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_shared_no_init_park_ops,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/qcom/gcc-x1e80100.c b/drivers/clk/qcom/gcc-x1e80100.c
+index a263f0c412f5a..52ea2a0888f37 100644
+--- a/drivers/clk/qcom/gcc-x1e80100.c
++++ b/drivers/clk/qcom/gcc-x1e80100.c
+@@ -670,7 +670,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s0_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = {
+@@ -687,7 +687,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s1_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = {
+@@ -719,7 +719,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s2_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = {
+@@ -736,7 +736,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s3_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = {
+@@ -768,7 +768,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s4_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = {
+@@ -785,7 +785,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s5_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = {
+@@ -802,7 +802,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s6_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s6_clk_src = {
+@@ -819,7 +819,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s7_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s7_clk_src = {
+@@ -836,7 +836,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s0_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
+@@ -853,7 +853,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s1_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
+@@ -870,7 +870,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s2_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = {
+@@ -887,7 +887,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s3_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
+@@ -904,7 +904,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s4_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
+@@ -921,7 +921,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s5_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
+@@ -938,7 +938,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s6_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s6_clk_src = {
+@@ -955,7 +955,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s7_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s7_clk_src = {
+@@ -972,7 +972,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s0_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = {
+@@ -989,7 +989,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s1_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = {
+@@ -1006,7 +1006,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s2_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = {
+@@ -1023,7 +1023,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s3_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = {
+@@ -1040,7 +1040,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s4_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = {
+@@ -1057,7 +1057,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s5_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = {
+@@ -1074,7 +1074,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s6_clk_src_init = {
+ 	.parent_data = gcc_parent_data_8,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_8),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s6_clk_src = {
+@@ -1091,7 +1091,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s7_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s7_clk_src = {
+@@ -6203,7 +6203,7 @@ static struct gdsc gcc_usb_0_phy_gdsc = {
+ 	.pd = {
+ 		.name = "gcc_usb_0_phy_gdsc",
+ 	},
+-	.pwrsts = PWRSTS_OFF_ON,
++	.pwrsts = PWRSTS_RET_ON,
+ 	.flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+ 
+@@ -6215,7 +6215,7 @@ static struct gdsc gcc_usb_1_phy_gdsc = {
+ 	.pd = {
+ 		.name = "gcc_usb_1_phy_gdsc",
+ 	},
+-	.pwrsts = PWRSTS_OFF_ON,
++	.pwrsts = PWRSTS_RET_ON,
+ 	.flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+ 
+diff --git a/drivers/clk/starfive/clk-starfive-jh7110-sys.c b/drivers/clk/starfive/clk-starfive-jh7110-sys.c
+index 8f5e5abfa178d..17325f17696f6 100644
+--- a/drivers/clk/starfive/clk-starfive-jh7110-sys.c
++++ b/drivers/clk/starfive/clk-starfive-jh7110-sys.c
+@@ -385,6 +385,32 @@ int jh7110_reset_controller_register(struct jh71x0_clk_priv *priv,
+ }
+ EXPORT_SYMBOL_GPL(jh7110_reset_controller_register);
+ 
++/*
++ * This clock notifier is called when the rate of the PLL0 clock is about to change.
++ * The cpu_root clock should save its current parent clock and switch its parent
++ * clock to osc before the PLL0 rate changes, then switch its parent clock
++ * back after the PLL0 rate change has completed.
++ */
++static int jh7110_pll0_clk_notifier_cb(struct notifier_block *nb,
++				       unsigned long action, void *data)
++{
++	struct jh71x0_clk_priv *priv = container_of(nb, struct jh71x0_clk_priv, pll_clk_nb);
++	struct clk *cpu_root = priv->reg[JH7110_SYSCLK_CPU_ROOT].hw.clk;
++	int ret = 0;
++
++	if (action == PRE_RATE_CHANGE) {
++		struct clk *osc = clk_get(priv->dev, "osc");
++
++		priv->original_clk = clk_get_parent(cpu_root);
++		ret = clk_set_parent(cpu_root, osc);
++		clk_put(osc);
++	} else if (action == POST_RATE_CHANGE) {
++		ret = clk_set_parent(cpu_root, priv->original_clk);
++	}
++
++	return notifier_from_errno(ret);
++}
++
+ static int __init jh7110_syscrg_probe(struct platform_device *pdev)
+ {
+ 	struct jh71x0_clk_priv *priv;
+@@ -413,7 +439,10 @@ static int __init jh7110_syscrg_probe(struct platform_device *pdev)
+ 		if (IS_ERR(priv->pll[0]))
+ 			return PTR_ERR(priv->pll[0]);
+ 	} else {
+-		clk_put(pllclk);
++		priv->pll_clk_nb.notifier_call = jh7110_pll0_clk_notifier_cb;
++		ret = clk_notifier_register(pllclk, &priv->pll_clk_nb);
++		if (ret)
++			return ret;
+ 		priv->pll[0] = NULL;
+ 	}
+ 
+diff --git a/drivers/clk/starfive/clk-starfive-jh71x0.h b/drivers/clk/starfive/clk-starfive-jh71x0.h
+index 23e052fc15495..e3f441393e48f 100644
+--- a/drivers/clk/starfive/clk-starfive-jh71x0.h
++++ b/drivers/clk/starfive/clk-starfive-jh71x0.h
+@@ -114,6 +114,8 @@ struct jh71x0_clk_priv {
+ 	spinlock_t rmw_lock;
+ 	struct device *dev;
+ 	void __iomem *base;
++	struct clk *original_clk;
++	struct notifier_block pll_clk_nb;
+ 	struct clk_hw *pll[3];
+ 	struct jh71x0_clk reg[];
+ };
+diff --git a/drivers/clocksource/timer-imx-tpm.c b/drivers/clocksource/timer-imx-tpm.c
+index bd64a8a8427f3..92c025b70eb62 100644
+--- a/drivers/clocksource/timer-imx-tpm.c
++++ b/drivers/clocksource/timer-imx-tpm.c
+@@ -83,20 +83,28 @@ static u64 notrace tpm_read_sched_clock(void)
+ static int tpm_set_next_event(unsigned long delta,
+ 				struct clock_event_device *evt)
+ {
+-	unsigned long next, now;
++	unsigned long next, prev, now;
+ 
+-	next = tpm_read_counter();
+-	next += delta;
++	prev = tpm_read_counter();
++	next = prev + delta;
+ 	writel(next, timer_base + TPM_C0V);
+ 	now = tpm_read_counter();
+ 
++	/*
++	 * Need to wait for CNT to increase by at least 1 cycle to make sure
++	 * the C0V has been updated in HW.
++	 */
++	if ((next & 0xffffffff) != readl(timer_base + TPM_C0V))
++		while (now == tpm_read_counter())
++			;
++
+ 	/*
+ 	 * NOTE: We observed in a very small probability, the bus fabric
+ 	 * contention between GPU and A7 may results a few cycles delay
+ 	 * of writing CNT registers which may cause the min_delta event got
+ 	 * missed, so we need add a ETIME check here in case it happened.
+ 	 */
+-	return (int)(next - now) <= 0 ? -ETIME : 0;
++	return (now - prev) >= delta ? -ETIME : 0;
+ }
+ 
+ static int tpm_set_state_oneshot(struct clock_event_device *evt)
+diff --git a/drivers/clocksource/timer-of.c b/drivers/clocksource/timer-of.c
+index c3f54d9912be7..420202bf76e42 100644
+--- a/drivers/clocksource/timer-of.c
++++ b/drivers/clocksource/timer-of.c
+@@ -25,10 +25,7 @@ static __init void timer_of_irq_exit(struct of_timer_irq *of_irq)
+ 
+ 	struct clock_event_device *clkevt = &to->clkevt;
+ 
+-	if (of_irq->percpu)
+-		free_percpu_irq(of_irq->irq, clkevt);
+-	else
+-		free_irq(of_irq->irq, clkevt);
++	free_irq(of_irq->irq, clkevt);
+ }
+ 
+ /**
+@@ -42,9 +39,6 @@ static __init void timer_of_irq_exit(struct of_timer_irq *of_irq)
+  * - Get interrupt number by name
+  * - Get interrupt number by index
+  *
+- * When the interrupt is per CPU, 'request_percpu_irq()' is called,
+- * otherwise 'request_irq()' is used.
+- *
+  * Returns 0 on success, < 0 otherwise
+  */
+ static __init int timer_of_irq_init(struct device_node *np,
+@@ -69,12 +63,9 @@ static __init int timer_of_irq_init(struct device_node *np,
+ 		return -EINVAL;
+ 	}
+ 
+-	ret = of_irq->percpu ?
+-		request_percpu_irq(of_irq->irq, of_irq->handler,
+-				   np->full_name, clkevt) :
+-		request_irq(of_irq->irq, of_irq->handler,
+-			    of_irq->flags ? of_irq->flags : IRQF_TIMER,
+-			    np->full_name, clkevt);
++	ret = request_irq(of_irq->irq, of_irq->handler,
++			  of_irq->flags ? of_irq->flags : IRQF_TIMER,
++			  np->full_name, clkevt);
+ 	if (ret) {
+ 		pr_err("Failed to request irq %d for %pOF\n", of_irq->irq, np);
+ 		return ret;
+diff --git a/drivers/clocksource/timer-of.h b/drivers/clocksource/timer-of.h
+index a5478f3e8589d..01a2c6b7db065 100644
+--- a/drivers/clocksource/timer-of.h
++++ b/drivers/clocksource/timer-of.h
+@@ -11,7 +11,6 @@
+ struct of_timer_irq {
+ 	int irq;
+ 	int index;
+-	int percpu;
+ 	const char *name;
+ 	unsigned long flags;
+ 	irq_handler_t handler;
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen2_pfvf.c b/drivers/crypto/intel/qat/qat_common/adf_gen2_pfvf.c
+index 70ef119639381..43af81fcab868 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_gen2_pfvf.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_gen2_pfvf.c
+@@ -100,7 +100,9 @@ static u32 adf_gen2_disable_pending_vf2pf_interrupts(void __iomem *pmisc_addr)
+ 	errmsk3 |= ADF_GEN2_ERR_MSK_VF2PF(ADF_GEN2_VF_MSK);
+ 	ADF_CSR_WR(pmisc_addr, ADF_GEN2_ERRMSK3, errmsk3);
+ 
+-	errmsk3 &= ADF_GEN2_ERR_MSK_VF2PF(sources | disabled);
++	/* Update only section of errmsk3 related to VF2PF */
++	errmsk3 &= ~ADF_GEN2_ERR_MSK_VF2PF(ADF_GEN2_VF_MSK);
++	errmsk3 |= ADF_GEN2_ERR_MSK_VF2PF(sources | disabled);
+ 	ADF_CSR_WR(pmisc_addr, ADF_GEN2_ERRMSK3, errmsk3);
+ 
+ 	/* Return the sources of the (new) interrupt(s) */
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_rl.c b/drivers/crypto/intel/qat/qat_common/adf_rl.c
+index 346ef8bee99d9..e782c23fc1bfc 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_rl.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_rl.c
+@@ -1106,6 +1106,7 @@ int adf_rl_init(struct adf_accel_dev *accel_dev)
+ 	mutex_init(&rl->rl_lock);
+ 	rl->device_data = &accel_dev->hw_device->rl_data;
+ 	rl->accel_dev = accel_dev;
++	init_rwsem(&rl->user_input.lock);
+ 	accel_dev->rate_limiting = rl;
+ 
+ err_ret:
+diff --git a/drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
+index 6e24d57e6b98e..c0661ff5e9292 100644
+--- a/drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
+@@ -193,8 +193,12 @@ static u32 disable_pending_vf2pf_interrupts(void __iomem *pmisc_addr)
+ 	ADF_CSR_WR(pmisc_addr, ADF_GEN2_ERRMSK3, errmsk3);
+ 	ADF_CSR_WR(pmisc_addr, ADF_GEN2_ERRMSK5, errmsk5);
+ 
+-	errmsk3 &= ADF_DH895XCC_ERR_MSK_VF2PF_L(sources | disabled);
+-	errmsk5 &= ADF_DH895XCC_ERR_MSK_VF2PF_U(sources | disabled);
++	/* Update only section of errmsk3 and errmsk5 related to VF2PF */
++	errmsk3 &= ~ADF_DH895XCC_ERR_MSK_VF2PF_L(ADF_DH895XCC_VF_MSK);
++	errmsk5 &= ~ADF_DH895XCC_ERR_MSK_VF2PF_U(ADF_DH895XCC_VF_MSK);
++
++	errmsk3 |= ADF_DH895XCC_ERR_MSK_VF2PF_L(sources | disabled);
++	errmsk5 |= ADF_DH895XCC_ERR_MSK_VF2PF_U(sources | disabled);
+ 	ADF_CSR_WR(pmisc_addr, ADF_GEN2_ERRMSK3, errmsk3);
+ 	ADF_CSR_WR(pmisc_addr, ADF_GEN2_ERRMSK5, errmsk5);
+ 
+diff --git a/drivers/crypto/starfive/jh7110-cryp.h b/drivers/crypto/starfive/jh7110-cryp.h
+index 494a74f527064..5ed4ba5da7f9e 100644
+--- a/drivers/crypto/starfive/jh7110-cryp.h
++++ b/drivers/crypto/starfive/jh7110-cryp.h
+@@ -30,6 +30,7 @@
+ #define MAX_KEY_SIZE				SHA512_BLOCK_SIZE
+ #define STARFIVE_AES_IV_LEN			AES_BLOCK_SIZE
+ #define STARFIVE_AES_CTR_LEN			AES_BLOCK_SIZE
++#define STARFIVE_RSA_MAX_KEYSZ			256
+ 
+ union starfive_aes_csr {
+ 	u32 v;
+@@ -217,12 +218,11 @@ struct starfive_cryp_request_ctx {
+ 	struct scatterlist			*out_sg;
+ 	struct ahash_request			ahash_fbk_req;
+ 	size_t					total;
+-	size_t					nents;
+ 	unsigned int				blksize;
+ 	unsigned int				digsize;
+ 	unsigned long				in_sg_len;
+ 	unsigned char				*adata;
+-	u8 rsa_data[] __aligned(sizeof(u32));
++	u8 rsa_data[STARFIVE_RSA_MAX_KEYSZ] __aligned(sizeof(u32));
+ };
+ 
+ struct starfive_cryp_dev *starfive_cryp_find_dev(struct starfive_cryp_ctx *ctx);
+diff --git a/drivers/crypto/starfive/jh7110-rsa.c b/drivers/crypto/starfive/jh7110-rsa.c
+index 33093ba4b13af..a778c48460253 100644
+--- a/drivers/crypto/starfive/jh7110-rsa.c
++++ b/drivers/crypto/starfive/jh7110-rsa.c
+@@ -31,7 +31,6 @@
+ /* A * A * R mod N ==> A */
+ #define CRYPTO_CMD_AARN			0x7
+ 
+-#define STARFIVE_RSA_MAX_KEYSZ		256
+ #define STARFIVE_RSA_RESET		0x2
+ 
+ static inline int starfive_pka_wait_done(struct starfive_cryp_ctx *ctx)
+@@ -74,7 +73,7 @@ static int starfive_rsa_montgomery_form(struct starfive_cryp_ctx *ctx,
+ {
+ 	struct starfive_cryp_dev *cryp = ctx->cryp;
+ 	struct starfive_cryp_request_ctx *rctx = ctx->rctx;
+-	int count = rctx->total / sizeof(u32) - 1;
++	int count = (ALIGN(rctx->total, 4) / 4) - 1;
+ 	int loop;
+ 	u32 temp;
+ 	u8 opsize;
+@@ -251,12 +250,17 @@ static int starfive_rsa_enc_core(struct starfive_cryp_ctx *ctx, int enc)
+ 	struct starfive_cryp_dev *cryp = ctx->cryp;
+ 	struct starfive_cryp_request_ctx *rctx = ctx->rctx;
+ 	struct starfive_rsa_key *key = &ctx->rsa_key;
+-	int ret = 0;
++	int ret = 0, shift = 0;
+ 
+ 	writel(STARFIVE_RSA_RESET, cryp->base + STARFIVE_PKA_CACR_OFFSET);
+ 
+-	rctx->total = sg_copy_to_buffer(rctx->in_sg, rctx->nents,
+-					rctx->rsa_data, rctx->total);
++	if (!IS_ALIGNED(rctx->total, sizeof(u32))) {
++		shift = sizeof(u32) - (rctx->total & 0x3);
++		memset(rctx->rsa_data, 0, shift);
++	}
++
++	rctx->total = sg_copy_to_buffer(rctx->in_sg, sg_nents(rctx->in_sg),
++					rctx->rsa_data + shift, rctx->total);
+ 
+ 	if (enc) {
+ 		key->bitlen = key->e_bitlen;
+@@ -305,7 +309,6 @@ static int starfive_rsa_enc(struct akcipher_request *req)
+ 	rctx->in_sg = req->src;
+ 	rctx->out_sg = req->dst;
+ 	rctx->total = req->src_len;
+-	rctx->nents = sg_nents(rctx->in_sg);
+ 	ctx->rctx = rctx;
+ 
+ 	return starfive_rsa_enc_core(ctx, 1);
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index 538ebd5a64fd9..0e30e0a29d400 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -1632,10 +1632,13 @@ static int cxl_region_attach_position(struct cxl_region *cxlr,
+ 				      const struct cxl_dport *dport, int pos)
+ {
+ 	struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
++	struct cxl_switch_decoder *cxlsd = &cxlrd->cxlsd;
++	struct cxl_decoder *cxld = &cxlsd->cxld;
++	int iw = cxld->interleave_ways;
+ 	struct cxl_port *iter;
+ 	int rc;
+ 
+-	if (cxlrd->calc_hb(cxlrd, pos) != dport) {
++	if (dport != cxlrd->cxlsd.target[pos % iw]) {
+ 		dev_dbg(&cxlr->dev, "%s:%s invalid target position for %s\n",
+ 			dev_name(&cxlmd->dev), dev_name(&cxled->cxld.dev),
+ 			dev_name(&cxlrd->cxlsd.cxld.dev));
+@@ -2386,14 +2389,25 @@ static bool cxl_region_update_coordinates(struct cxl_region *cxlr, int nid)
+ 	return true;
+ }
+ 
++static int cxl_region_nid(struct cxl_region *cxlr)
++{
++	struct cxl_region_params *p = &cxlr->params;
++	struct cxl_endpoint_decoder *cxled;
++	struct cxl_decoder *cxld;
++
++	guard(rwsem_read)(&cxl_region_rwsem);
++	cxled = p->targets[0];
++	if (!cxled)
++		return NUMA_NO_NODE;
++	cxld = &cxled->cxld;
++	return phys_to_target_node(cxld->hpa_range.start);
++}
++
+ static int cxl_region_perf_attrs_callback(struct notifier_block *nb,
+ 					  unsigned long action, void *arg)
+ {
+ 	struct cxl_region *cxlr = container_of(nb, struct cxl_region,
+ 					       memory_notifier);
+-	struct cxl_region_params *p = &cxlr->params;
+-	struct cxl_endpoint_decoder *cxled = p->targets[0];
+-	struct cxl_decoder *cxld = &cxled->cxld;
+ 	struct memory_notify *mnb = arg;
+ 	int nid = mnb->status_change_nid;
+ 	int region_nid;
+@@ -2401,7 +2415,7 @@ static int cxl_region_perf_attrs_callback(struct notifier_block *nb,
+ 	if (nid == NUMA_NO_NODE || action != MEM_ONLINE)
+ 		return NOTIFY_DONE;
+ 
+-	region_nid = phys_to_target_node(cxld->hpa_range.start);
++	region_nid = cxl_region_nid(cxlr);
+ 	if (nid != region_nid)
+ 		return NOTIFY_DONE;
+ 
+diff --git a/drivers/firmware/cirrus/cs_dsp.c b/drivers/firmware/cirrus/cs_dsp.c
+index 8a347b9384064..89fd63205a6e1 100644
+--- a/drivers/firmware/cirrus/cs_dsp.c
++++ b/drivers/firmware/cirrus/cs_dsp.c
+@@ -796,6 +796,9 @@ int cs_dsp_coeff_write_ctrl(struct cs_dsp_coeff_ctl *ctl,
+ 
+ 	lockdep_assert_held(&ctl->dsp->pwr_lock);
+ 
++	if (ctl->flags && !(ctl->flags & WMFW_CTL_FLAG_WRITEABLE))
++		return -EPERM;
++
+ 	if (len + off * sizeof(u32) > ctl->len)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/gpio/gpio-rockchip.c b/drivers/gpio/gpio-rockchip.c
+index 0bd339813110e..365ab947983ca 100644
+--- a/drivers/gpio/gpio-rockchip.c
++++ b/drivers/gpio/gpio-rockchip.c
+@@ -713,6 +713,7 @@ static int rockchip_gpio_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 
+ 	pctldev = of_pinctrl_get(pctlnp);
++	of_node_put(pctlnp);
+ 	if (!pctldev)
+ 		return -EPROBE_DEFER;
+ 
+diff --git a/drivers/gpio/gpio-zynqmp-modepin.c b/drivers/gpio/gpio-zynqmp-modepin.c
+index a0d69387c1532..2f3c9ebfa78d1 100644
+--- a/drivers/gpio/gpio-zynqmp-modepin.c
++++ b/drivers/gpio/gpio-zynqmp-modepin.c
+@@ -146,6 +146,7 @@ static const struct of_device_id modepin_platform_id[] = {
+ 	{ .compatible = "xlnx,zynqmp-gpio-modepin", },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, modepin_platform_id);
+ 
+ static struct platform_driver modepin_platform_driver = {
+ 	.driver = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 936c98a13a240..6dfdff58bffd1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -1096,6 +1096,21 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
+ 	unsigned int i;
+ 	int r;
+ 
++	/*
++	 * We can't use gang submit with reserved VMIDs when the VM changes
++	 * can't be invalidated by more than one engine at the same time.
++	 */
++	if (p->gang_size > 1 && !p->adev->vm_manager.concurrent_flush) {
++		for (i = 0; i < p->gang_size; ++i) {
++			struct drm_sched_entity *entity = p->entities[i];
++			struct drm_gpu_scheduler *sched = entity->rq->sched;
++			struct amdgpu_ring *ring = to_amdgpu_ring(sched);
++
++			if (amdgpu_vmid_uses_reserved(vm, ring->vm_hub))
++				return -EINVAL;
++		}
++	}
++
+ 	r = amdgpu_vm_clear_freed(adev, vm, NULL);
+ 	if (r)
+ 		return r;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index d24d7a1086240..e66546df0bc19 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -5057,29 +5057,26 @@ static int amdgpu_device_recover_vram(struct amdgpu_device *adev)
+  * amdgpu_device_reset_sriov - reset ASIC for SR-IOV vf
+  *
+  * @adev: amdgpu_device pointer
+- * @from_hypervisor: request from hypervisor
++ * @reset_context: amdgpu reset context pointer
+  *
+  * do VF FLR and reinitialize Asic
+  * return 0 means succeeded otherwise failed
+  */
+ static int amdgpu_device_reset_sriov(struct amdgpu_device *adev,
+-				     bool from_hypervisor)
++				     struct amdgpu_reset_context *reset_context)
+ {
+ 	int r;
+ 	struct amdgpu_hive_info *hive = NULL;
+-	int retry_limit = 0;
+-
+-retry:
+-	amdgpu_amdkfd_pre_reset(adev);
+-
+-	amdgpu_device_stop_pending_resets(adev);
+ 
+-	if (from_hypervisor)
++	if (test_bit(AMDGPU_HOST_FLR, &reset_context->flags)) {
++		clear_bit(AMDGPU_HOST_FLR, &reset_context->flags);
+ 		r = amdgpu_virt_request_full_gpu(adev, true);
+-	else
++	} else {
+ 		r = amdgpu_virt_reset_gpu(adev);
++	}
+ 	if (r)
+ 		return r;
++
+ 	amdgpu_ras_set_fed(adev, false);
+ 	amdgpu_irq_gpu_reset_resume_helper(adev);
+ 
+@@ -5089,7 +5086,7 @@ static int amdgpu_device_reset_sriov(struct amdgpu_device *adev,
+ 	/* Resume IP prior to SMC */
+ 	r = amdgpu_device_ip_reinit_early_sriov(adev);
+ 	if (r)
+-		goto error;
++		return r;
+ 
+ 	amdgpu_virt_init_data_exchange(adev);
+ 
+@@ -5100,38 +5097,35 @@ static int amdgpu_device_reset_sriov(struct amdgpu_device *adev,
+ 	/* now we are okay to resume SMC/CP/SDMA */
+ 	r = amdgpu_device_ip_reinit_late_sriov(adev);
+ 	if (r)
+-		goto error;
++		return r;
+ 
+ 	hive = amdgpu_get_xgmi_hive(adev);
+ 	/* Update PSP FW topology after reset */
+ 	if (hive && adev->gmc.xgmi.num_physical_nodes > 1)
+ 		r = amdgpu_xgmi_update_topology(hive, adev);
+-
+ 	if (hive)
+ 		amdgpu_put_xgmi_hive(hive);
++	if (r)
++		return r;
+ 
+-	if (!r) {
+-		r = amdgpu_ib_ring_tests(adev);
+-
+-		amdgpu_amdkfd_post_reset(adev);
+-	}
++	r = amdgpu_ib_ring_tests(adev);
++	if (r)
++		return r;
+ 
+-error:
+-	if (!r && adev->virt.gim_feature & AMDGIM_FEATURE_GIM_FLR_VRAMLOST) {
++	if (adev->virt.gim_feature & AMDGIM_FEATURE_GIM_FLR_VRAMLOST) {
+ 		amdgpu_inc_vram_lost(adev);
+ 		r = amdgpu_device_recover_vram(adev);
+ 	}
+-	amdgpu_virt_release_full_gpu(adev, true);
++	if (r)
++		return r;
+ 
+-	if (AMDGPU_RETRY_SRIOV_RESET(r)) {
+-		if (retry_limit < AMDGPU_MAX_RETRY_LIMIT) {
+-			retry_limit++;
+-			goto retry;
+-		} else
+-			DRM_ERROR("GPU reset retry is beyond the retry limit\n");
+-	}
++	/* Needs to be called during full access, so we can't do it later like
++	 * bare-metal does.
++	 */
++	amdgpu_amdkfd_post_reset(adev);
++	amdgpu_virt_release_full_gpu(adev, true);
+ 
+-	return r;
++	return 0;
+ }
+ 
+ /**
+@@ -5693,6 +5687,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ 	int i, r = 0;
+ 	bool need_emergency_restart = false;
+ 	bool audio_suspended = false;
++	int retry_limit = AMDGPU_MAX_RETRY_LIMIT;
+ 
+ 	/*
+ 	 * Special case: RAS triggered and full reset isn't supported
+@@ -5774,8 +5769,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ 
+ 		cancel_delayed_work_sync(&tmp_adev->delayed_init_work);
+ 
+-		if (!amdgpu_sriov_vf(tmp_adev))
+-			amdgpu_amdkfd_pre_reset(tmp_adev);
++		amdgpu_amdkfd_pre_reset(tmp_adev);
+ 
+ 		/*
+ 		 * Mark these ASICs to be reseted as untracked first
+@@ -5828,19 +5822,16 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ 				  r, adev_to_drm(tmp_adev)->unique);
+ 			tmp_adev->asic_reset_res = r;
+ 		}
+-
+-		if (!amdgpu_sriov_vf(tmp_adev))
+-			/*
+-			* Drop all pending non scheduler resets. Scheduler resets
+-			* were already dropped during drm_sched_stop
+-			*/
+-			amdgpu_device_stop_pending_resets(tmp_adev);
+ 	}
+ 
+ 	/* Actual ASIC resets if needed.*/
+ 	/* Host driver will handle XGMI hive reset for SRIOV */
+ 	if (amdgpu_sriov_vf(adev)) {
+-		r = amdgpu_device_reset_sriov(adev, job ? false : true);
++		r = amdgpu_device_reset_sriov(adev, reset_context);
++		if (AMDGPU_RETRY_SRIOV_RESET(r) && (retry_limit--) > 0) {
++			amdgpu_virt_release_full_gpu(adev, true);
++			goto retry;
++		}
+ 		if (r)
+ 			adev->asic_reset_res = r;
+ 
+@@ -5856,6 +5847,16 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ 			goto retry;
+ 	}
+ 
++	list_for_each_entry(tmp_adev, device_list_handle, reset_list) {
++		/*
++		 * Drop any pending non scheduler resets queued before reset is done.
++		 * Any reset scheduled after this point would be valid. Scheduler resets
++		 * were already dropped during drm_sched_stop and no new ones can come
++		 * in before drm_sched_start.
++		 */
++		amdgpu_device_stop_pending_resets(tmp_adev);
++	}
++
+ skip_hw_reset:
+ 
+ 	/* Post ASIC reset for all devs .*/
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+index 3ecc7ef95172c..30755ce4002dd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+@@ -917,8 +917,7 @@ static int check_tiling_flags_gfx6(struct amdgpu_framebuffer *afb)
+ {
+ 	u64 micro_tile_mode;
+ 
+-	/* Zero swizzle mode means linear */
+-	if (AMDGPU_TILING_GET(afb->tiling_flags, SWIZZLE_MODE) == 0)
++	if (AMDGPU_TILING_GET(afb->tiling_flags, ARRAY_MODE) == 1) /* LINEAR_ALIGNED */
+ 		return 0;
+ 
+ 	micro_tile_mode = AMDGPU_TILING_GET(afb->tiling_flags, MICRO_TILE_MODE);
+@@ -1042,6 +1041,30 @@ static int amdgpu_display_verify_sizes(struct amdgpu_framebuffer *rfb)
+ 			block_width = 256 / format_info->cpp[i];
+ 			block_height = 1;
+ 			block_size_log2 = 8;
++		} else if (AMD_FMT_MOD_GET(TILE_VERSION, modifier) >= AMD_FMT_MOD_TILE_VER_GFX12) {
++			int swizzle = AMD_FMT_MOD_GET(TILE, modifier);
++
++			switch (swizzle) {
++			case AMD_FMT_MOD_TILE_GFX12_256B_2D:
++				block_size_log2 = 8;
++				break;
++			case AMD_FMT_MOD_TILE_GFX12_4K_2D:
++				block_size_log2 = 12;
++				break;
++			case AMD_FMT_MOD_TILE_GFX12_64K_2D:
++				block_size_log2 = 16;
++				break;
++			case AMD_FMT_MOD_TILE_GFX12_256K_2D:
++				block_size_log2 = 18;
++				break;
++			default:
++				drm_dbg_kms(rfb->base.dev,
++					    "Gfx12 swizzle mode with unknown block size: %d\n", swizzle);
++				return -EINVAL;
++			}
++
++			get_block_dimensions(block_size_log2, format_info->cpp[i],
++					     &block_width, &block_height);
+ 		} else {
+ 			int swizzle = AMD_FMT_MOD_GET(TILE, modifier);
+ 
+@@ -1077,7 +1100,8 @@ static int amdgpu_display_verify_sizes(struct amdgpu_framebuffer *rfb)
+ 			return ret;
+ 	}
+ 
+-	if (AMD_FMT_MOD_GET(DCC, modifier)) {
++	if (AMD_FMT_MOD_GET(TILE_VERSION, modifier) <= AMD_FMT_MOD_TILE_VER_GFX11 &&
++	    AMD_FMT_MOD_GET(DCC, modifier)) {
+ 		if (AMD_FMT_MOD_GET(DCC_RETILE, modifier)) {
+ 			block_size_log2 = get_dcc_block_size(modifier, false, false);
+ 			get_block_dimensions(block_size_log2 + 8, format_info->cpp[0],
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+index 3adaa46701036..dc65c32bb1b2f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+@@ -347,6 +347,9 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void *data,
+ 		return -EINVAL;
+ 	}
+ 
++	/* always clear VRAM */
++	flags |= AMDGPU_GEM_CREATE_VRAM_CLEARED;
++
+ 	/* create a gem object to contain this object in */
+ 	if (args->in.domains & (AMDGPU_GEM_DOMAIN_GDS |
+ 	    AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA)) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+index 86b096ad0319c..f4478f2d5305c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+@@ -720,7 +720,11 @@ int amdgpu_gmc_flush_gpu_tlb_pasid(struct amdgpu_device *adev, uint16_t pasid,
+ 			ndw += kiq->pmf->invalidate_tlbs_size;
+ 
+ 		spin_lock(&adev->gfx.kiq[inst].ring_lock);
+-		amdgpu_ring_alloc(ring, ndw);
++		r = amdgpu_ring_alloc(ring, ndw);
++		if (r) {
++			spin_unlock(&adev->gfx.kiq[inst].ring_lock);
++			goto error_unlock_reset;
++		}
+ 		if (adev->gmc.flush_tlb_needs_extra_type_2)
+ 			kiq->pmf->kiq_invalidate_tlbs(ring, pasid, 2, all_hub);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+index 3d7fcdeaf8cf0..e8f6e4dbc5a4e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+@@ -406,7 +406,7 @@ int amdgpu_vmid_grab(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
+ 	if (r || !idle)
+ 		goto error;
+ 
+-	if (vm->reserved_vmid[vmhub] || (enforce_isolation && (vmhub == AMDGPU_GFXHUB(0)))) {
++	if (amdgpu_vmid_uses_reserved(vm, vmhub)) {
+ 		r = amdgpu_vmid_grab_reserved(vm, ring, job, &id, fence);
+ 		if (r || !id)
+ 			goto error;
+@@ -456,6 +456,19 @@ int amdgpu_vmid_grab(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
+ 	return r;
+ }
+ 
++/*
++ * amdgpu_vmid_uses_reserved - check if a VM will use a reserved VMID
++ * @vm: the VM to check
++ * @vmhub: the VMHUB which will be used
++ *
++ * Returns: True if the VM will use a reserved VMID.
++ */
++bool amdgpu_vmid_uses_reserved(struct amdgpu_vm *vm, unsigned int vmhub)
++{
++	return vm->reserved_vmid[vmhub] ||
++		(enforce_isolation && (vmhub == AMDGPU_GFXHUB(0)));
++}
++
+ int amdgpu_vmid_alloc_reserved(struct amdgpu_device *adev,
+ 			       unsigned vmhub)
+ {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.h
+index fa8c42c83d5d2..240fa67512602 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.h
+@@ -78,6 +78,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
+ 
+ bool amdgpu_vmid_had_gpu_reset(struct amdgpu_device *adev,
+ 			       struct amdgpu_vmid *id);
++bool amdgpu_vmid_uses_reserved(struct amdgpu_vm *vm, unsigned int vmhub);
+ int amdgpu_vmid_alloc_reserved(struct amdgpu_device *adev,
+ 				unsigned vmhub);
+ void amdgpu_vmid_free_reserved(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index b3df27ce76634..ee19af2d20fb1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -1584,6 +1584,68 @@ static void psp_ras_ta_check_status(struct psp_context *psp)
+ 	}
+ }
+ 
++static int psp_ras_send_cmd(struct psp_context *psp,
++		enum ras_command cmd_id, void *in, void *out)
++{
++	struct ta_ras_shared_memory *ras_cmd;
++	uint32_t cmd = cmd_id;
++	int ret = 0;
++
++	if (!in)
++		return -EINVAL;
++
++	mutex_lock(&psp->ras_context.mutex);
++	ras_cmd = (struct ta_ras_shared_memory *)psp->ras_context.context.mem_context.shared_buf;
++	memset(ras_cmd, 0, sizeof(struct ta_ras_shared_memory));
++
++	switch (cmd) {
++	case TA_RAS_COMMAND__ENABLE_FEATURES:
++	case TA_RAS_COMMAND__DISABLE_FEATURES:
++		memcpy(&ras_cmd->ras_in_message,
++			in, sizeof(ras_cmd->ras_in_message));
++		break;
++	case TA_RAS_COMMAND__TRIGGER_ERROR:
++		memcpy(&ras_cmd->ras_in_message.trigger_error,
++			in, sizeof(ras_cmd->ras_in_message.trigger_error));
++		break;
++	case TA_RAS_COMMAND__QUERY_ADDRESS:
++		memcpy(&ras_cmd->ras_in_message.address,
++			in, sizeof(ras_cmd->ras_in_message.address));
++		break;
++	default:
++		dev_err(psp->adev->dev, "Invalid ras cmd id: %u\n", cmd);
++		ret = -EINVAL;
++		goto err_out;
++	}
++
++	ras_cmd->cmd_id = cmd;
++	ret = psp_ras_invoke(psp, ras_cmd->cmd_id);
++
++	switch (cmd) {
++	case TA_RAS_COMMAND__TRIGGER_ERROR:
++		if (ret || psp->cmd_buf_mem->resp.status)
++			ret = -EINVAL;
++		else if (out)
++			memcpy(out, &ras_cmd->ras_status, sizeof(ras_cmd->ras_status));
++		break;
++	case TA_RAS_COMMAND__QUERY_ADDRESS:
++		if (ret || ras_cmd->ras_status || psp->cmd_buf_mem->resp.status)
++			ret = -EINVAL;
++		else if (out)
++			memcpy(out,
++				&ras_cmd->ras_out_message.address,
++				sizeof(ras_cmd->ras_out_message.address));
++		break;
++	default:
++		break;
++	}
++
++err_out:
++	mutex_unlock(&psp->ras_context.mutex);
++
++	return ret;
++}
++
+ int psp_ras_invoke(struct psp_context *psp, uint32_t ta_cmd_id)
+ {
+ 	struct ta_ras_shared_memory *ras_cmd;
+@@ -1625,23 +1687,15 @@ int psp_ras_invoke(struct psp_context *psp, uint32_t ta_cmd_id)
+ int psp_ras_enable_features(struct psp_context *psp,
+ 		union ta_ras_cmd_input *info, bool enable)
+ {
+-	struct ta_ras_shared_memory *ras_cmd;
++	enum ras_command cmd_id;
+ 	int ret;
+ 
+-	if (!psp->ras_context.context.initialized)
++	if (!psp->ras_context.context.initialized || !info)
+ 		return -EINVAL;
+ 
+-	ras_cmd = (struct ta_ras_shared_memory *)psp->ras_context.context.mem_context.shared_buf;
+-	memset(ras_cmd, 0, sizeof(struct ta_ras_shared_memory));
+-
+-	if (enable)
+-		ras_cmd->cmd_id = TA_RAS_COMMAND__ENABLE_FEATURES;
+-	else
+-		ras_cmd->cmd_id = TA_RAS_COMMAND__DISABLE_FEATURES;
+-
+-	ras_cmd->ras_in_message = *info;
+-
+-	ret = psp_ras_invoke(psp, ras_cmd->cmd_id);
++	cmd_id = enable ?
++		TA_RAS_COMMAND__ENABLE_FEATURES : TA_RAS_COMMAND__DISABLE_FEATURES;
++	ret = psp_ras_send_cmd(psp, cmd_id, info, NULL);
+ 	if (ret)
+ 		return -EINVAL;
+ 
+@@ -1665,6 +1719,8 @@ int psp_ras_terminate(struct psp_context *psp)
+ 
+ 	psp->ras_context.context.initialized = false;
+ 
++	mutex_destroy(&psp->ras_context.mutex);
++
+ 	return ret;
+ }
+ 
+@@ -1749,9 +1805,10 @@ int psp_ras_initialize(struct psp_context *psp)
+ 
+ 	ret = psp_ta_load(psp, &psp->ras_context.context);
+ 
+-	if (!ret && !ras_cmd->ras_status)
++	if (!ret && !ras_cmd->ras_status) {
+ 		psp->ras_context.context.initialized = true;
+-	else {
++		mutex_init(&psp->ras_context.mutex);
++	} else {
+ 		if (ras_cmd->ras_status)
+ 			dev_warn(adev->dev, "RAS Init Status: 0x%X\n", ras_cmd->ras_status);
+ 
+@@ -1765,12 +1822,12 @@ int psp_ras_initialize(struct psp_context *psp)
+ int psp_ras_trigger_error(struct psp_context *psp,
+ 			  struct ta_ras_trigger_error_input *info, uint32_t instance_mask)
+ {
+-	struct ta_ras_shared_memory *ras_cmd;
+ 	struct amdgpu_device *adev = psp->adev;
+ 	int ret;
+ 	uint32_t dev_mask;
++	uint32_t ras_status = 0;
+ 
+-	if (!psp->ras_context.context.initialized)
++	if (!psp->ras_context.context.initialized || !info)
+ 		return -EINVAL;
+ 
+ 	switch (info->block_id) {
+@@ -1794,13 +1851,8 @@ int psp_ras_trigger_error(struct psp_context *psp,
+ 	dev_mask &= AMDGPU_RAS_INST_MASK;
+ 	info->sub_block_index |= dev_mask;
+ 
+-	ras_cmd = (struct ta_ras_shared_memory *)psp->ras_context.context.mem_context.shared_buf;
+-	memset(ras_cmd, 0, sizeof(struct ta_ras_shared_memory));
+-
+-	ras_cmd->cmd_id = TA_RAS_COMMAND__TRIGGER_ERROR;
+-	ras_cmd->ras_in_message.trigger_error = *info;
+-
+-	ret = psp_ras_invoke(psp, ras_cmd->cmd_id);
++	ret = psp_ras_send_cmd(psp,
++			TA_RAS_COMMAND__TRIGGER_ERROR, info, &ras_status);
+ 	if (ret)
+ 		return -EINVAL;
+ 
+@@ -1810,9 +1862,9 @@ int psp_ras_trigger_error(struct psp_context *psp,
+ 	if (amdgpu_ras_intr_triggered())
+ 		return 0;
+ 
+-	if (ras_cmd->ras_status == TA_RAS_STATUS__TEE_ERROR_ACCESS_DENIED)
++	if (ras_status == TA_RAS_STATUS__TEE_ERROR_ACCESS_DENIED)
+ 		return -EACCES;
+-	else if (ras_cmd->ras_status)
++	else if (ras_status)
+ 		return -EINVAL;
+ 
+ 	return 0;
+@@ -1822,25 +1874,16 @@ int psp_ras_query_address(struct psp_context *psp,
+ 			  struct ta_ras_query_address_input *addr_in,
+ 			  struct ta_ras_query_address_output *addr_out)
+ {
+-	struct ta_ras_shared_memory *ras_cmd;
+ 	int ret;
+ 
+-	if (!psp->ras_context.context.initialized)
+-		return -EINVAL;
+-
+-	ras_cmd = (struct ta_ras_shared_memory *)psp->ras_context.context.mem_context.shared_buf;
+-	memset(ras_cmd, 0, sizeof(struct ta_ras_shared_memory));
+-
+-	ras_cmd->cmd_id = TA_RAS_COMMAND__QUERY_ADDRESS;
+-	ras_cmd->ras_in_message.address = *addr_in;
+-
+-	ret = psp_ras_invoke(psp, ras_cmd->cmd_id);
+-	if (ret || ras_cmd->ras_status || psp->cmd_buf_mem->resp.status)
++	if (!psp->ras_context.context.initialized ||
++		!addr_in || !addr_out)
+ 		return -EINVAL;
+ 
+-	*addr_out = ras_cmd->ras_out_message.address;
++	ret = psp_ras_send_cmd(psp,
++			TA_RAS_COMMAND__QUERY_ADDRESS, addr_in, addr_out);
+ 
+-	return 0;
++	return ret;
+ }
+ // ras end
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+index 3635303e65484..74a96516c9138 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+@@ -200,6 +200,7 @@ struct psp_xgmi_context {
+ struct psp_ras_context {
+ 	struct ta_context		context;
+ 	struct amdgpu_ras		*ras;
++	struct mutex			mutex;
+ };
+ 
+ #define MEM_TRAIN_SYSTEM_SIGNATURE		0x54534942
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c
+index 9aff579c6abf5..38face981c3e3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c
+@@ -351,6 +351,7 @@ static ssize_t ta_if_invoke_debugfs_write(struct file *fp, const char *buf, size
+ 
+ 	context->session_id = ta_id;
+ 
++	mutex_lock(&psp->ras_context.mutex);
+ 	ret = prep_ta_mem_context(&context->mem_context, shared_buf, shared_buf_len);
+ 	if (ret)
+ 		goto err_free_shared_buf;
+@@ -369,6 +370,7 @@ static ssize_t ta_if_invoke_debugfs_write(struct file *fp, const char *buf, size
+ 		ret = -EFAULT;
+ 
+ err_free_shared_buf:
++	mutex_unlock(&psp->ras_context.mutex);
+ 	kfree(shared_buf);
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
+index b11d190ece535..5a9cc043b8583 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
+@@ -33,6 +33,7 @@ enum AMDGPU_RESET_FLAGS {
+ 	AMDGPU_NEED_FULL_RESET = 0,
+ 	AMDGPU_SKIP_HW_RESET = 1,
+ 	AMDGPU_SKIP_COREDUMP = 2,
++	AMDGPU_HOST_FLR = 3,
+ };
+ 
+ struct amdgpu_reset_context {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+index 2359d1d602751..e12d179a451b6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+@@ -86,8 +86,10 @@ int amdgpu_virt_request_full_gpu(struct amdgpu_device *adev, bool init)
+ 
+ 	if (virt->ops && virt->ops->req_full_gpu) {
+ 		r = virt->ops->req_full_gpu(adev, init);
+-		if (r)
++		if (r) {
++			adev->no_hw_access = true;
+ 			return r;
++		}
+ 
+ 		adev->virt.caps &= ~AMDGPU_SRIOV_CAPS_RUNTIME;
+ 	}
+@@ -599,7 +601,7 @@ static void amdgpu_virt_update_vf2pf_work_item(struct work_struct *work)
+ 	if (ret) {
+ 		adev->virt.vf2pf_update_retry_cnt++;
+ 		if ((adev->virt.vf2pf_update_retry_cnt >= AMDGPU_VF2PF_UPDATE_MAX_RETRY_LIMIT) &&
+-		    amdgpu_sriov_runtime(adev) && !amdgpu_in_reset(adev)) {
++		    amdgpu_sriov_runtime(adev)) {
+ 			amdgpu_ras_set_fed(adev, true);
+ 			if (amdgpu_reset_domain_schedule(adev->reset_domain,
+ 							  &adev->kfd.reset_work))
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index ad6431013c738..4ba8eb45ac174 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -4293,11 +4293,11 @@ static int gfx_v11_0_hw_init(void *handle)
+ 			/* RLC autoload sequence 1: Program rlc ram */
+ 			if (adev->gfx.imu.funcs->program_rlc_ram)
+ 				adev->gfx.imu.funcs->program_rlc_ram(adev);
++			/* rlc autoload firmware */
++			r = gfx_v11_0_rlc_backdoor_autoload_enable(adev);
++			if (r)
++				return r;
+ 		}
+-		/* rlc autoload firmware */
+-		r = gfx_v11_0_rlc_backdoor_autoload_enable(adev);
+-		if (r)
+-			return r;
+ 	} else {
+ 		if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
+ 			if (adev->gfx.imu.funcs && (amdgpu_dpm > 0)) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_2.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_2.c
+index 77df8c9cbad2f..9e10e552952e1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_2.c
+@@ -627,9 +627,11 @@ static bool gfxhub_v1_2_query_utcl2_poison_status(struct amdgpu_device *adev,
+ 
+ 	status = RREG32_SOC15(GC, GET_INST(GC, xcc_id), regVM_L2_PROTECTION_FAULT_STATUS);
+ 	fed = REG_GET_FIELD(status, VM_L2_PROTECTION_FAULT_STATUS, FED);
+-	/* reset page fault status */
+-	WREG32_P(SOC15_REG_OFFSET(GC, GET_INST(GC, xcc_id),
+-			regVM_L2_PROTECTION_FAULT_STATUS), 1, ~1);
++	if (!amdgpu_sriov_vf(adev)) {
++		/* clear page fault status and address */
++		WREG32_P(SOC15_REG_OFFSET(GC, GET_INST(GC, xcc_id),
++			 regVM_L2_PROTECTION_FAULT_CNTL), 1, ~1);
++	}
+ 
+ 	return fed;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index f7f4924751020..bd55a7e43f077 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -671,7 +671,8 @@ static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
+ 	    (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(9, 4, 2)))
+ 		return 0;
+ 
+-	WREG32_P(hub->vm_l2_pro_fault_cntl, 1, ~1);
++	if (!amdgpu_sriov_vf(adev))
++		WREG32_P(hub->vm_l2_pro_fault_cntl, 1, ~1);
+ 
+ 	amdgpu_vm_update_fault_cache(adev, entry->pasid, addr, status, vmhub);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c b/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c
+index 3cb64c8f71758..18a761d6ef330 100644
+--- a/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c
+@@ -135,6 +135,34 @@ static int ih_v6_0_toggle_ring_interrupts(struct amdgpu_device *adev,
+ 
+ 	tmp = RREG32(ih_regs->ih_rb_cntl);
+ 	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, RB_ENABLE, (enable ? 1 : 0));
++
++	if (enable) {
++		/* Unset the CLEAR_OVERFLOW bit to make sure the next step
++		 * is switching the bit from 0 to 1
++		 */
++		tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++		if (amdgpu_sriov_vf(adev) && amdgpu_sriov_reg_indirect_ih(adev)) {
++			if (psp_reg_program(&adev->psp, ih_regs->psp_reg_id, tmp))
++				return -ETIMEDOUT;
++		} else {
++			WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
++		}
++
++		/* Clear RB_OVERFLOW bit */
++		tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
++		if (amdgpu_sriov_vf(adev) && amdgpu_sriov_reg_indirect_ih(adev)) {
++			if (psp_reg_program(&adev->psp, ih_regs->psp_reg_id, tmp))
++				return -ETIMEDOUT;
++		} else {
++			WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
++		}
++
++		/* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++		 * can be detected.
++		 */
++		tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++	}
++
+ 	/* enable_intr field is only valid in ring0 */
+ 	if (ih == &adev->irq.ih)
+ 		tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, ENABLE_INTR, (enable ? 1 : 0));
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_8.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_8.c
+index 7a1ff298417ab..621761a17ac74 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_8.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_8.c
+@@ -566,9 +566,11 @@ static bool mmhub_v1_8_query_utcl2_poison_status(struct amdgpu_device *adev,
+ 
+ 	status = RREG32_SOC15(MMHUB, hub_inst, regVM_L2_PROTECTION_FAULT_STATUS);
+ 	fed = REG_GET_FIELD(status, VM_L2_PROTECTION_FAULT_STATUS, FED);
+-	/* reset page fault status */
+-	WREG32_P(SOC15_REG_OFFSET(MMHUB, hub_inst,
+-			regVM_L2_PROTECTION_FAULT_STATUS), 1, ~1);
++	if (!amdgpu_sriov_vf(adev)) {
++		/* clear page fault status and address */
++		WREG32_P(SOC15_REG_OFFSET(MMHUB, hub_inst,
++			 regVM_L2_PROTECTION_FAULT_CNTL), 1, ~1);
++	}
+ 
+ 	return fed;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
+index 0c7275bca8f73..f4c47492e0cd5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
++++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
+@@ -292,6 +292,7 @@ static void xgpu_ai_mailbox_flr_work(struct work_struct *work)
+ 		reset_context.method = AMD_RESET_METHOD_NONE;
+ 		reset_context.reset_req_dev = adev;
+ 		clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
++		set_bit(AMDGPU_HOST_FLR, &reset_context.flags);
+ 
+ 		amdgpu_device_gpu_recover(adev, NULL, &reset_context);
+ 	}
+@@ -319,7 +320,7 @@ static int xgpu_ai_mailbox_rcv_irq(struct amdgpu_device *adev,
+ 
+ 	switch (event) {
+ 		case IDH_FLR_NOTIFICATION:
+-		if (amdgpu_sriov_runtime(adev) && !amdgpu_in_reset(adev))
++		if (amdgpu_sriov_runtime(adev))
+ 			WARN_ONCE(!amdgpu_reset_domain_schedule(adev->reset_domain,
+ 								&adev->virt.flr_work),
+ 				  "Failed to queue work! at %s",
+diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c b/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c
+index aba00d961627b..14cc7910e5cfc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c
++++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c
+@@ -328,6 +328,7 @@ static void xgpu_nv_mailbox_flr_work(struct work_struct *work)
+ 		reset_context.method = AMD_RESET_METHOD_NONE;
+ 		reset_context.reset_req_dev = adev;
+ 		clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
++		set_bit(AMDGPU_HOST_FLR, &reset_context.flags);
+ 
+ 		amdgpu_device_gpu_recover(adev, NULL, &reset_context);
+ 	}
+@@ -358,7 +359,7 @@ static int xgpu_nv_mailbox_rcv_irq(struct amdgpu_device *adev,
+ 
+ 	switch (event) {
+ 	case IDH_FLR_NOTIFICATION:
+-		if (amdgpu_sriov_runtime(adev) && !amdgpu_in_reset(adev))
++		if (amdgpu_sriov_runtime(adev))
+ 			WARN_ONCE(!amdgpu_reset_domain_schedule(adev->reset_domain,
+ 				   &adev->virt.flr_work),
+ 				  "Failed to queue work! at %s",
+diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
+index 59f53c7433620..78cd07744ebe4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
++++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
+@@ -529,6 +529,7 @@ static void xgpu_vi_mailbox_flr_work(struct work_struct *work)
+ 		reset_context.method = AMD_RESET_METHOD_NONE;
+ 		reset_context.reset_req_dev = adev;
+ 		clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
++		set_bit(AMDGPU_HOST_FLR, &reset_context.flags);
+ 
+ 		amdgpu_device_gpu_recover(adev, NULL, &reset_context);
+ 	}
+@@ -560,7 +561,7 @@ static int xgpu_vi_mailbox_rcv_irq(struct amdgpu_device *adev,
+ 		r = xgpu_vi_mailbox_rcv_msg(adev, IDH_FLR_NOTIFICATION);
+ 
+ 		/* only handle FLR_NOTIFY now */
+-		if (!r && !amdgpu_in_reset(adev))
++		if (!r)
+ 			WARN_ONCE(!amdgpu_reset_domain_schedule(adev->reset_domain,
+ 								&adev->virt.flr_work),
+ 				  "Failed to queue work! at %s",
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 382a41c5b5152..27e641f176289 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4237,7 +4237,7 @@ static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
+ 	struct amdgpu_dm_backlight_caps caps;
+ 	struct dc_link *link;
+ 	u32 brightness;
+-	bool rc;
++	bool rc, reallow_idle = false;
+ 
+ 	amdgpu_dm_update_backlight_caps(dm, bl_idx);
+ 	caps = dm->backlight_caps[bl_idx];
+@@ -4250,6 +4250,12 @@ static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
+ 	link = (struct dc_link *)dm->backlight_link[bl_idx];
+ 
+ 	/* Change brightness based on AUX property */
++	mutex_lock(&dm->dc_lock);
++	if (dm->dc->caps.ips_support && dm->dc->ctx->dmub_srv->idle_allowed) {
++		dc_allow_idle_optimizations(dm->dc, false);
++		reallow_idle = true;
++	}
++
+ 	if (caps.aux_support) {
+ 		rc = dc_link_set_backlight_level_nits(link, true, brightness,
+ 						      AUX_BL_DEFAULT_TRANSITION_TIME_MS);
+@@ -4261,6 +4267,11 @@ static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
+ 			DRM_DEBUG("DM: Failed to update backlight on eDP[%d]\n", bl_idx);
+ 	}
+ 
++	if (dm->dc->caps.ips_support && reallow_idle)
++		dc_allow_idle_optimizations(dm->dc, true);
++
++	mutex_unlock(&dm->dc_lock);
++
+ 	if (rc)
+ 		dm->actual_brightness[bl_idx] = user_brightness;
+ }
+@@ -7291,7 +7302,7 @@ static int dm_update_mst_vcpi_slots_for_dsc(struct drm_atomic_state *state,
+ 			}
+ 		}
+ 
+-		if (j == dc_state->stream_count)
++		if (j == dc_state->stream_count || pbn_div == 0)
+ 			continue;
+ 
+ 		slot_num = DIV_ROUND_UP(pbn, pbn_div);
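
The first amdgpu_dm.c hunk above takes dm->dc_lock around the AUX backlight write and, when IPS idle optimizations were enabled, switches them off for the duration and restores them afterwards via the reallow_idle flag. Here is a loose sketch of that "pause only if it was on, restore exactly what you changed" pattern, with a pthread mutex standing in for dc_lock and invented function names; it is not the DC API (build with gcc -pthread).

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t dc_lock = PTHREAD_MUTEX_INITIALIZER;
static bool idle_allowed = true;	/* stand-in for dmub_srv->idle_allowed */

static void allow_idle_optimizations(bool allow)
{
	idle_allowed = allow;
	printf("idle optimizations %s\n", allow ? "re-enabled" : "blocked");
}

static void set_backlight_level(unsigned int level)
{
	bool reallow_idle = false;

	pthread_mutex_lock(&dc_lock);
	if (idle_allowed) {			/* only touch it if it was on */
		allow_idle_optimizations(false);
		reallow_idle = true;
	}

	printf("programming backlight to %u over AUX\n", level);

	if (reallow_idle)			/* restore only what we changed */
		allow_idle_optimizations(true);
	pthread_mutex_unlock(&dc_lock);
}

int main(void)
{
	set_backlight_level(128);
	return 0;
}
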
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+index 70e45d980bb93..7d47acdd11d55 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+@@ -1400,8 +1400,6 @@ static bool amdgpu_dm_plane_format_mod_supported(struct drm_plane *plane,
+ 	const struct drm_format_info *info = drm_format_info(format);
+ 	int i;
+ 
+-	enum dm_micro_swizzle microtile = amdgpu_dm_plane_modifier_gfx9_swizzle_mode(modifier) & 3;
+-
+ 	if (!info)
+ 		return false;
+ 
+@@ -1423,29 +1421,34 @@ static bool amdgpu_dm_plane_format_mod_supported(struct drm_plane *plane,
+ 	if (i == plane->modifier_count)
+ 		return false;
+ 
+-	/*
+-	 * For D swizzle the canonical modifier depends on the bpp, so check
+-	 * it here.
+-	 */
+-	if (AMD_FMT_MOD_GET(TILE_VERSION, modifier) == AMD_FMT_MOD_TILE_VER_GFX9 &&
+-	    adev->family >= AMDGPU_FAMILY_NV) {
+-		if (microtile == MICRO_SWIZZLE_D && info->cpp[0] == 4)
+-			return false;
+-	}
+-
+-	if (adev->family >= AMDGPU_FAMILY_RV && microtile == MICRO_SWIZZLE_D &&
+-	    info->cpp[0] < 8)
+-		return false;
++	/* GFX12 doesn't have these limitations. */
++	if (AMD_FMT_MOD_GET(TILE_VERSION, modifier) <= AMD_FMT_MOD_TILE_VER_GFX11) {
++		enum dm_micro_swizzle microtile = amdgpu_dm_plane_modifier_gfx9_swizzle_mode(modifier) & 3;
+ 
+-	if (amdgpu_dm_plane_modifier_has_dcc(modifier)) {
+-		/* Per radeonsi comments 16/64 bpp are more complicated. */
+-		if (info->cpp[0] != 4)
+-			return false;
+-		/* We support multi-planar formats, but not when combined with
+-		 * additional DCC metadata planes.
++		/*
++		 * For D swizzle the canonical modifier depends on the bpp, so check
++		 * it here.
+ 		 */
+-		if (info->num_planes > 1)
++		if (AMD_FMT_MOD_GET(TILE_VERSION, modifier) == AMD_FMT_MOD_TILE_VER_GFX9 &&
++		    adev->family >= AMDGPU_FAMILY_NV) {
++			if (microtile == MICRO_SWIZZLE_D && info->cpp[0] == 4)
++				return false;
++		}
++
++		if (adev->family >= AMDGPU_FAMILY_RV && microtile == MICRO_SWIZZLE_D &&
++		    info->cpp[0] < 8)
+ 			return false;
++
++		if (amdgpu_dm_plane_modifier_has_dcc(modifier)) {
++			/* Per radeonsi comments 16/64 bpp are more complicated. */
++			if (info->cpp[0] != 4)
++				return false;
++			/* We support multi-planar formats, but not when combined with
++			 * additional DCC metadata planes.
++			 */
++			if (info->num_planes > 1)
++				return false;
++		}
+ 	}
+ 
+ 	return true;
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+index 2293a92df3bed..22d2ab8ce7f8b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
++++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+@@ -245,7 +245,9 @@ bool dc_dmub_srv_cmd_run_list(struct dc_dmub_srv *dc_dmub_srv, unsigned int coun
+ 			if (status == DMUB_STATUS_POWER_STATE_D3)
+ 				return false;
+ 
+-			dmub_srv_wait_for_idle(dmub, 100000);
++			status = dmub_srv_wait_for_idle(dmub, 100000);
++			if (status != DMUB_STATUS_OK)
++				return false;
+ 
+ 			/* Requeue the command. */
+ 			status = dmub_srv_cmd_queue(dmub, &cmd_list[i]);
+@@ -511,7 +513,8 @@ void dc_dmub_srv_get_visual_confirm_color_cmd(struct dc *dc, struct pipe_ctx *pi
+ 	union dmub_rb_cmd cmd = { 0 };
+ 	unsigned int panel_inst = 0;
+ 
+-	dc_get_edp_link_panel_inst(dc, pipe_ctx->stream->link, &panel_inst);
++	if (!dc_get_edp_link_panel_inst(dc, pipe_ctx->stream->link, &panel_inst))
++		return;
+ 
+ 	memset(&cmd, 0, sizeof(cmd));
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubbub.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubbub.c
+index c6f859871d11e..7e4ca2022d649 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubbub.c
+@@ -595,7 +595,8 @@ static bool hubbub2_program_watermarks(
+ 		hubbub1->base.ctx->dc->clk_mgr->clks.p_state_change_support == false)
+ 		safe_to_lower = true;
+ 
+-	hubbub1_program_pstate_watermarks(hubbub, watermarks, refclk_mhz, safe_to_lower);
++	if (hubbub1_program_pstate_watermarks(hubbub, watermarks, refclk_mhz, safe_to_lower))
++		wm_pending = true;
+ 
+ 	REG_SET(DCHUBBUB_ARB_SAT_LEVEL, 0,
+ 			DCHUBBUB_ARB_SAT_LEVEL, 60 * refclk_mhz);
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+index 3e919f5c00ca2..fee1df342f122 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+@@ -4282,7 +4282,7 @@ static void CalculateSwathAndDETConfiguration(struct display_mode_lib_scratch_st
+ 	}
+ 
+ 	*p->compbuf_reserved_space_64b = 2 * p->PixelChunkSizeInKByte * 1024 / 64;
+-	if (p->UnboundedRequestEnabled) {
++	if (*p->UnboundedRequestEnabled) {
+ 		*p->compbuf_reserved_space_64b = dml_max(*p->compbuf_reserved_space_64b,
+ 				(dml_float_t)(p->ROBBufferSizeInKByte * 1024/64)
+ 				- (dml_float_t)(RoundedUpSwathSizeBytesY[SurfaceDoingUnboundedRequest] * TTUFIFODEPTH / MAXIMUMCOMPRESSION/64));
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_factory.c b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+index cf22b8f28ba6c..72df9bdfb23ff 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_factory.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+@@ -611,14 +611,14 @@ static bool construct_phy(struct dc_link *link,
+ 	link->link_enc =
+ 		link->dc->res_pool->funcs->link_enc_create(dc_ctx, &enc_init_data);
+ 
+-	DC_LOG_DC("BIOS object table - DP_IS_USB_C: %d", link->link_enc->features.flags.bits.DP_IS_USB_C);
+-	DC_LOG_DC("BIOS object table - IS_DP2_CAPABLE: %d", link->link_enc->features.flags.bits.IS_DP2_CAPABLE);
+-
+ 	if (!link->link_enc) {
+ 		DC_ERROR("Failed to create link encoder!\n");
+ 		goto link_enc_create_fail;
+ 	}
+ 
++	DC_LOG_DC("BIOS object table - DP_IS_USB_C: %d", link->link_enc->features.flags.bits.DP_IS_USB_C);
++	DC_LOG_DC("BIOS object table - IS_DP2_CAPABLE: %d", link->link_enc->features.flags.bits.IS_DP2_CAPABLE);
++
+ 	/* Update link encoder tracking variables. These are used for the dynamic
+ 	 * assignment of link encoders to streams.
+ 	 */
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+index b8e704dbe9567..8c0dea6f75bf1 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+@@ -1659,8 +1659,7 @@ bool perform_link_training_with_retries(
+ 		if (status == LINK_TRAINING_ABORT) {
+ 			enum dc_connection_type type = dc_connection_none;
+ 
+-			link_detect_connection_type(link, &type);
+-			if (type == dc_connection_none) {
++			if (link_detect_connection_type(link, &type) && type == dc_connection_none) {
+ 				DC_LOG_HW_LINK_TRAINING("%s: Aborting training because sink unplugged\n", __func__);
+ 				break;
+ 			}
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
+index 4ce0f4bf1d9bb..3329eaecfb15b 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
+@@ -1756,7 +1756,7 @@ static int dcn315_populate_dml_pipes_from_context(
+ 				bool split_required = pipe->stream->timing.pix_clk_100hz >= dcn_get_max_non_odm_pix_rate_100hz(&dc->dml.soc)
+ 						|| (pipe->plane_state && pipe->plane_state->src_rect.width > 5120);
+ 
+-				if (remaining_det_segs > MIN_RESERVED_DET_SEGS)
++				if (remaining_det_segs > MIN_RESERVED_DET_SEGS && crb_pipes != 0)
+ 					pipes[pipe_cnt].pipe.src.det_size_override += (remaining_det_segs - MIN_RESERVED_DET_SEGS) / crb_pipes +
+ 							(crb_idx < (remaining_det_segs - MIN_RESERVED_DET_SEGS) % crb_pipes ? 1 : 0);
+ 				if (pipes[pipe_cnt].pipe.src.det_size_override > 2 * DCN3_15_MAX_DET_SEGS) {
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_execution.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_execution.c
+index 182e7532dda8a..d77836cef5635 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_execution.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_execution.c
+@@ -433,17 +433,20 @@ static enum mod_hdcp_status authenticated_dp(struct mod_hdcp *hdcp,
+ 	}
+ 
+ 	if (status == MOD_HDCP_STATUS_SUCCESS)
+-		mod_hdcp_execute_and_set(mod_hdcp_read_bstatus,
++		if (!mod_hdcp_execute_and_set(mod_hdcp_read_bstatus,
+ 				&input->bstatus_read, &status,
+-				hdcp, "bstatus_read");
++				hdcp, "bstatus_read"))
++			goto out;
+ 	if (status == MOD_HDCP_STATUS_SUCCESS)
+-		mod_hdcp_execute_and_set(check_link_integrity_dp,
++		if (!mod_hdcp_execute_and_set(check_link_integrity_dp,
+ 				&input->link_integrity_check, &status,
+-				hdcp, "link_integrity_check");
++				hdcp, "link_integrity_check"))
++			goto out;
+ 	if (status == MOD_HDCP_STATUS_SUCCESS)
+-		mod_hdcp_execute_and_set(check_no_reauthentication_request_dp,
++		if (!mod_hdcp_execute_and_set(check_no_reauthentication_request_dp,
+ 				&input->reauth_request_check, &status,
+-				hdcp, "reauth_request_check");
++				hdcp, "reauth_request_check"))
++			goto out;
+ out:
+ 	return status;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 95f690b70c057..45d053497f497 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -2257,7 +2257,8 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+ 		smu_dpm_ctx->dpm_level = level;
+ 	}
+ 
+-	if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) {
++	if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
++		smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) {
+ 		index = fls(smu->workload_mask);
+ 		index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+ 		workload[0] = smu->workload_setting[index];
+@@ -2336,7 +2337,8 @@ static int smu_switch_power_profile(void *handle,
+ 		workload[0] = smu->workload_setting[index];
+ 	}
+ 
+-	if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
++	if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
++		smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
+ 		smu_bump_power_profile_mode(smu, workload, 0);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
+index 6747c10da298e..48c5809e01b57 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_types.h
++++ b/drivers/gpu/drm/i915/display/intel_display_types.h
+@@ -1840,6 +1840,10 @@ struct intel_dp {
+ 	unsigned long last_oui_write;
+ 
+ 	bool colorimetry_support;
++
++	struct {
++		unsigned long mask;
++	} quirks;
+ };
+ 
+ enum lspcon_vendor {
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 8ea00c8ee74a3..a7d91ca1d8baf 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -79,6 +79,7 @@
+ #include "intel_pch_display.h"
+ #include "intel_pps.h"
+ #include "intel_psr.h"
++#include "intel_quirks.h"
+ #include "intel_tc.h"
+ #include "intel_vdsc.h"
+ #include "intel_vrr.h"
+@@ -3941,6 +3942,7 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp, struct intel_connector *connector
+ 
+ 	drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc,
+ 			 drm_dp_is_branch(intel_dp->dpcd));
++	intel_init_dpcd_quirks(intel_dp, &intel_dp->desc.ident);
+ 
+ 	/*
+ 	 * Read the eDP display control registers.
+@@ -4053,6 +4055,8 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
+ 		drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc,
+ 				 drm_dp_is_branch(intel_dp->dpcd));
+ 
++		intel_init_dpcd_quirks(intel_dp, &intel_dp->desc.ident);
++
+ 		intel_dp_update_sink_caps(intel_dp);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_aux.c b/drivers/gpu/drm/i915/display/intel_dp_aux.c
+index b8a53bb174dab..be58185a77c01 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_aux.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_aux.c
+@@ -13,6 +13,7 @@
+ #include "intel_dp_aux.h"
+ #include "intel_dp_aux_regs.h"
+ #include "intel_pps.h"
++#include "intel_quirks.h"
+ #include "intel_tc.h"
+ 
+ #define AUX_CH_NAME_BUFSIZE	6
+@@ -142,16 +143,21 @@ static int intel_dp_aux_sync_len(void)
+ 	return precharge + preamble;
+ }
+ 
+-int intel_dp_aux_fw_sync_len(void)
++int intel_dp_aux_fw_sync_len(struct intel_dp *intel_dp)
+ {
++	int precharge = 10; /* 10-16 */
++	int preamble = 8;
++
+ 	/*
+ 	 * We faced some glitches on Dell Precision 5490 MTL laptop with panel:
+ 	 * "Manufacturer: AUO, Model: 63898" when using HW default 18. Using 20
+ 	 * is fixing these problems with the panel. It is still within range
+-	 * mentioned in eDP specification.
++	 * mentioned in eDP specification. Increasing Fast Wake sync length is
++	 * causing problems with other panels: increase length as a quirk for
++	 * this specific laptop.
+ 	 */
+-	int precharge = 12; /* 10-16 */
+-	int preamble = 8;
++	if (intel_has_dpcd_quirk(intel_dp, QUIRK_FW_SYNC_LEN))
++		precharge += 2;
+ 
+ 	return precharge + preamble;
+ }
+@@ -211,7 +217,7 @@ static u32 skl_get_aux_send_ctl(struct intel_dp *intel_dp,
+ 		DP_AUX_CH_CTL_TIME_OUT_MAX |
+ 		DP_AUX_CH_CTL_RECEIVE_ERROR |
+ 		DP_AUX_CH_CTL_MESSAGE_SIZE(send_bytes) |
+-		DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(intel_dp_aux_fw_sync_len()) |
++		DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(intel_dp_aux_fw_sync_len(intel_dp)) |
+ 		DP_AUX_CH_CTL_SYNC_PULSE_SKL(intel_dp_aux_sync_len());
+ 
+ 	if (intel_tc_port_in_tbt_alt_mode(dig_port))
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_aux.h b/drivers/gpu/drm/i915/display/intel_dp_aux.h
+index 76d1f2ed7c2f4..593f58fafab71 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_aux.h
++++ b/drivers/gpu/drm/i915/display/intel_dp_aux.h
+@@ -20,6 +20,6 @@ enum aux_ch intel_dp_aux_ch(struct intel_encoder *encoder);
+ 
+ void intel_dp_aux_irq_handler(struct drm_i915_private *i915);
+ u32 intel_dp_aux_pack(const u8 *src, int src_bytes);
+-int intel_dp_aux_fw_sync_len(void);
++int intel_dp_aux_fw_sync_len(struct intel_dp *intel_dp);
+ 
+ #endif /* __INTEL_DP_AUX_H__ */
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index 3c7da862222bf..7173ffc7c66c1 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -1356,7 +1356,7 @@ static bool _compute_alpm_params(struct intel_dp *intel_dp,
+ 	int tfw_exit_latency = 20; /* eDP spec */
+ 	int phy_wake = 4;	   /* eDP spec */
+ 	int preamble = 8;	   /* eDP spec */
+-	int precharge = intel_dp_aux_fw_sync_len() - preamble;
++	int precharge = intel_dp_aux_fw_sync_len(intel_dp) - preamble;
+ 	u8 max_wake_lines;
+ 
+ 	io_wake_time = max(precharge, io_buffer_wake_time(crtc_state)) +
+diff --git a/drivers/gpu/drm/i915/display/intel_quirks.c b/drivers/gpu/drm/i915/display/intel_quirks.c
+index 14d5fefc9c5b2..dfd8b4960e6d6 100644
+--- a/drivers/gpu/drm/i915/display/intel_quirks.c
++++ b/drivers/gpu/drm/i915/display/intel_quirks.c
+@@ -14,6 +14,11 @@ static void intel_set_quirk(struct intel_display *display, enum intel_quirk_id q
+ 	display->quirks.mask |= BIT(quirk);
+ }
+ 
++static void intel_set_dpcd_quirk(struct intel_dp *intel_dp, enum intel_quirk_id quirk)
++{
++	intel_dp->quirks.mask |= BIT(quirk);
++}
++
+ /*
+  * Some machines (Lenovo U160) do not work with SSC on LVDS for some reason
+  */
+@@ -65,6 +70,14 @@ static void quirk_no_pps_backlight_power_hook(struct intel_display *display)
+ 	drm_info(display->drm, "Applying no pps backlight power quirk\n");
+ }
+ 
++static void quirk_fw_sync_len(struct intel_dp *intel_dp)
++{
++	struct intel_display *display = to_intel_display(intel_dp);
++
++	intel_set_dpcd_quirk(intel_dp, QUIRK_FW_SYNC_LEN);
++	drm_info(display->drm, "Applying Fast Wake sync pulse count quirk\n");
++}
++
+ struct intel_quirk {
+ 	int device;
+ 	int subsystem_vendor;
+@@ -72,6 +85,21 @@ struct intel_quirk {
+ 	void (*hook)(struct intel_display *display);
+ };
+ 
++struct intel_dpcd_quirk {
++	int device;
++	int subsystem_vendor;
++	int subsystem_device;
++	u8 sink_oui[3];
++	u8 sink_device_id[6];
++	void (*hook)(struct intel_dp *intel_dp);
++};
++
++#define SINK_OUI(first, second, third) { (first), (second), (third) }
++#define SINK_DEVICE_ID(first, second, third, fourth, fifth, sixth) \
++	{ (first), (second), (third), (fourth), (fifth), (sixth) }
++
++#define SINK_DEVICE_ID_ANY	SINK_DEVICE_ID(0, 0, 0, 0, 0, 0)
++
+ /* For systems that don't have a meaningful PCI subdevice/subvendor ID */
+ struct intel_dmi_quirk {
+ 	void (*hook)(struct intel_display *display);
+@@ -203,6 +231,18 @@ static struct intel_quirk intel_quirks[] = {
+ 	{ 0x0f31, 0x103c, 0x220f, quirk_invert_brightness },
+ };
+ 
++static struct intel_dpcd_quirk intel_dpcd_quirks[] = {
++	/* Dell Precision 5490 */
++	{
++		.device = 0x7d55,
++		.subsystem_vendor = 0x1028,
++		.subsystem_device = 0x0cc7,
++		.sink_oui = SINK_OUI(0x38, 0xec, 0x11),
++		.hook = quirk_fw_sync_len,
++	},
++
++};
++
+ void intel_init_quirks(struct intel_display *display)
+ {
+ 	struct pci_dev *d = to_pci_dev(display->drm->dev);
+@@ -224,7 +264,35 @@ void intel_init_quirks(struct intel_display *display)
+ 	}
+ }
+ 
++void intel_init_dpcd_quirks(struct intel_dp *intel_dp,
++			    const struct drm_dp_dpcd_ident *ident)
++{
++	struct intel_display *display = to_intel_display(intel_dp);
++	struct pci_dev *d = to_pci_dev(display->drm->dev);
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(intel_dpcd_quirks); i++) {
++		struct intel_dpcd_quirk *q = &intel_dpcd_quirks[i];
++
++		if (d->device == q->device &&
++		    (d->subsystem_vendor == q->subsystem_vendor ||
++		     q->subsystem_vendor == PCI_ANY_ID) &&
++		    (d->subsystem_device == q->subsystem_device ||
++		     q->subsystem_device == PCI_ANY_ID) &&
++		    !memcmp(q->sink_oui, ident->oui, sizeof(ident->oui)) &&
++		    (!memcmp(q->sink_device_id, ident->device_id,
++			    sizeof(ident->device_id)) ||
++		     !memchr_inv(q->sink_device_id, 0, sizeof(q->sink_device_id))))
++			q->hook(intel_dp);
++	}
++}
++
+ bool intel_has_quirk(struct intel_display *display, enum intel_quirk_id quirk)
+ {
+ 	return display->quirks.mask & BIT(quirk);
+ }
++
++bool intel_has_dpcd_quirk(struct intel_dp *intel_dp, enum intel_quirk_id quirk)
++{
++	return intel_dp->quirks.mask & BIT(quirk);
++}
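
The intel_quirks.c hunk above introduces a second, DPCD-based quirk table that matches on the sink OUI and optionally the sink device id, where an all-zero device id acts as a wildcard (checked with memchr_inv() in the kernel). The stand-alone sketch below shows that table-driven match with invented structure names and without the PCI device/subsystem-ID comparison the real code also performs; only the OUI bytes are taken from the table in the hunk.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct dpcd_ident {			/* stand-in for drm_dp_dpcd_ident */
	uint8_t oui[3];
	uint8_t device_id[6];
};

struct dpcd_quirk {
	uint8_t sink_oui[3];
	uint8_t sink_device_id[6];	/* all zero: match any device id */
	const char *name;
};

/* roughly what memchr_inv(buf, 0, len) == NULL answers in the kernel */
static bool all_zero(const uint8_t *buf, size_t len)
{
	for (size_t i = 0; i < len; i++)
		if (buf[i])
			return false;
	return true;
}

static const struct dpcd_quirk quirks[] = {
	{ { 0x38, 0xec, 0x11 }, { 0 }, "fw-sync-len" },
};

static const char *match_quirk(const struct dpcd_ident *id)
{
	for (size_t i = 0; i < sizeof(quirks) / sizeof(quirks[0]); i++) {
		const struct dpcd_quirk *q = &quirks[i];

		if (memcmp(q->sink_oui, id->oui, sizeof(id->oui)))
			continue;
		if (!all_zero(q->sink_device_id, sizeof(q->sink_device_id)) &&
		    memcmp(q->sink_device_id, id->device_id,
			   sizeof(id->device_id)))
			continue;
		return q->name;
	}
	return NULL;
}

int main(void)
{
	struct dpcd_ident id = {
		.oui = { 0x38, 0xec, 0x11 },
		.device_id = { 1, 2, 3, 4, 5, 6 },
	};
	const char *hit = match_quirk(&id);

	printf("matched quirk: %s\n", hit ? hit : "none");
	return 0;
}
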
+diff --git a/drivers/gpu/drm/i915/display/intel_quirks.h b/drivers/gpu/drm/i915/display/intel_quirks.h
+index 151c8f4ae5760..cafdebda75354 100644
+--- a/drivers/gpu/drm/i915/display/intel_quirks.h
++++ b/drivers/gpu/drm/i915/display/intel_quirks.h
+@@ -9,6 +9,8 @@
+ #include <linux/types.h>
+ 
+ struct intel_display;
++struct intel_dp;
++struct drm_dp_dpcd_ident;
+ 
+ enum intel_quirk_id {
+ 	QUIRK_BACKLIGHT_PRESENT,
+@@ -17,9 +19,13 @@ enum intel_quirk_id {
+ 	QUIRK_INVERT_BRIGHTNESS,
+ 	QUIRK_LVDS_SSC_DISABLE,
+ 	QUIRK_NO_PPS_BACKLIGHT_POWER_HOOK,
++	QUIRK_FW_SYNC_LEN,
+ };
+ 
+ void intel_init_quirks(struct intel_display *display);
++void intel_init_dpcd_quirks(struct intel_dp *intel_dp,
++			    const struct drm_dp_dpcd_ident *ident);
+ bool intel_has_quirk(struct intel_display *display, enum intel_quirk_id quirk);
++bool intel_has_dpcd_quirk(struct intel_dp *intel_dp, enum intel_quirk_id quirk);
+ 
+ #endif /* __INTEL_QUIRKS_H__ */
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c
+index 453d855dd1de7..3d3191deb0ab9 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c
+@@ -302,7 +302,7 @@ void intel_gsc_uc_load_start(struct intel_gsc_uc *gsc)
+ {
+ 	struct intel_gt *gt = gsc_uc_to_gt(gsc);
+ 
+-	if (!intel_uc_fw_is_loadable(&gsc->fw))
++	if (!intel_uc_fw_is_loadable(&gsc->fw) || intel_uc_fw_is_in_error(&gsc->fw))
+ 		return;
+ 
+ 	if (intel_gsc_uc_fw_init_done(gsc))
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h
+index 9a431726c8d5b..ac7b3aad2222e 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h
++++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h
+@@ -258,6 +258,11 @@ static inline bool intel_uc_fw_is_running(struct intel_uc_fw *uc_fw)
+ 	return __intel_uc_fw_status(uc_fw) == INTEL_UC_FIRMWARE_RUNNING;
+ }
+ 
++static inline bool intel_uc_fw_is_in_error(struct intel_uc_fw *uc_fw)
++{
++	return intel_uc_fw_status_to_error(__intel_uc_fw_status(uc_fw)) != 0;
++}
++
+ static inline bool intel_uc_fw_is_overridden(const struct intel_uc_fw *uc_fw)
+ {
+ 	return uc_fw->user_overridden;
+diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
+index 8a9aad523eec2..1d4cc91c0e40d 100644
+--- a/drivers/gpu/drm/i915/i915_sw_fence.c
++++ b/drivers/gpu/drm/i915/i915_sw_fence.c
+@@ -51,7 +51,7 @@ static inline void debug_fence_init(struct i915_sw_fence *fence)
+ 	debug_object_init(fence, &i915_sw_fence_debug_descr);
+ }
+ 
+-static inline void debug_fence_init_onstack(struct i915_sw_fence *fence)
++static inline __maybe_unused void debug_fence_init_onstack(struct i915_sw_fence *fence)
+ {
+ 	debug_object_init_on_stack(fence, &i915_sw_fence_debug_descr);
+ }
+@@ -77,7 +77,7 @@ static inline void debug_fence_destroy(struct i915_sw_fence *fence)
+ 	debug_object_destroy(fence, &i915_sw_fence_debug_descr);
+ }
+ 
+-static inline void debug_fence_free(struct i915_sw_fence *fence)
++static inline __maybe_unused void debug_fence_free(struct i915_sw_fence *fence)
+ {
+ 	debug_object_free(fence, &i915_sw_fence_debug_descr);
+ 	smp_wmb(); /* flush the change in state before reallocation */
+@@ -94,7 +94,7 @@ static inline void debug_fence_init(struct i915_sw_fence *fence)
+ {
+ }
+ 
+-static inline void debug_fence_init_onstack(struct i915_sw_fence *fence)
++static inline __maybe_unused void debug_fence_init_onstack(struct i915_sw_fence *fence)
+ {
+ }
+ 
+@@ -115,7 +115,7 @@ static inline void debug_fence_destroy(struct i915_sw_fence *fence)
+ {
+ }
+ 
+-static inline void debug_fence_free(struct i915_sw_fence *fence)
++static inline __maybe_unused void debug_fence_free(struct i915_sw_fence *fence)
+ {
+ }
+ 
+diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
+index e59517ba039ef..97c0f772ed65f 100644
+--- a/drivers/gpu/drm/imagination/pvr_vm.c
++++ b/drivers/gpu/drm/imagination/pvr_vm.c
+@@ -114,6 +114,8 @@ struct pvr_vm_gpuva {
+ 	struct drm_gpuva base;
+ };
+ 
++#define to_pvr_vm_gpuva(va) container_of_const(va, struct pvr_vm_gpuva, base)
++
+ enum pvr_vm_bind_type {
+ 	PVR_VM_BIND_TYPE_MAP,
+ 	PVR_VM_BIND_TYPE_UNMAP,
+@@ -386,6 +388,7 @@ pvr_vm_gpuva_unmap(struct drm_gpuva_op *op, void *op_ctx)
+ 
+ 	drm_gpuva_unmap(&op->unmap);
+ 	drm_gpuva_unlink(op->unmap.va);
++	kfree(to_pvr_vm_gpuva(op->unmap.va));
+ 
+ 	return 0;
+ }
+@@ -433,6 +436,7 @@ pvr_vm_gpuva_remap(struct drm_gpuva_op *op, void *op_ctx)
+ 	}
+ 
+ 	drm_gpuva_unlink(op->remap.unmap->va);
++	kfree(to_pvr_vm_gpuva(op->remap.unmap->va));
+ 
+ 	return 0;
+ }
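
The pvr_vm.c hunk above plugs a leak: drm_gpuva_unlink() only drops the embedded drm_gpuva, so the surrounding pvr_vm_gpuva allocation has to be recovered with container_of() and freed explicitly. A small user-space sketch of that pattern follows, with stand-in types rather than the DRM structures.

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct gpuva {				/* stand-in for struct drm_gpuva */
	unsigned long addr;
};

struct vm_gpuva {			/* stand-in for struct pvr_vm_gpuva */
	int driver_private_state;
	struct gpuva base;
};

#define to_vm_gpuva(va)	container_of(va, struct vm_gpuva, base)

int main(void)
{
	struct vm_gpuva *wrapper = calloc(1, sizeof(*wrapper));
	struct gpuva *va;

	if (!wrapper)
		return 1;
	va = &wrapper->base;		/* all the unmap callback gets */
	va->addr = 0x1000;

	/* unmap path: unlink the va, then free the wrapper, not just va */
	printf("freeing wrapper %p recovered from va %p\n",
	       (void *)to_vm_gpuva(va), (void *)va);
	free(to_vm_gpuva(va));
	return 0;
}
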
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c
+index 330d72b1a4af1..52412965fac10 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c
+@@ -324,7 +324,7 @@ nvkm_gsp_fwsec_sb(struct nvkm_gsp *gsp)
+ 		return ret;
+ 
+ 	/* Verify. */
+-	err = nvkm_rd32(device, 0x001400 + (0xf * 4)) & 0x0000ffff;
++	err = nvkm_rd32(device, 0x001400 + (0x15 * 4)) & 0x0000ffff;
+ 	if (err) {
+ 		nvkm_error(subdev, "fwsec-sb: 0x%04x\n", err);
+ 		return -EIO;
+diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
+index b5e7b919f241e..34182f67136c1 100644
+--- a/drivers/gpu/drm/panthor/panthor_drv.c
++++ b/drivers/gpu/drm/panthor/panthor_drv.c
+@@ -10,6 +10,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/pm_runtime.h>
+ 
++#include <drm/drm_auth.h>
+ #include <drm/drm_debugfs.h>
+ #include <drm/drm_drv.h>
+ #include <drm/drm_exec.h>
+@@ -996,6 +997,24 @@ static int panthor_ioctl_group_destroy(struct drm_device *ddev, void *data,
+ 	return panthor_group_destroy(pfile, args->group_handle);
+ }
+ 
++static int group_priority_permit(struct drm_file *file,
++				 u8 priority)
++{
++	/* Ensure that priority is valid */
++	if (priority > PANTHOR_GROUP_PRIORITY_HIGH)
++		return -EINVAL;
++
++	/* Medium priority and below are always allowed */
++	if (priority <= PANTHOR_GROUP_PRIORITY_MEDIUM)
++		return 0;
++
++	/* Higher priorities require CAP_SYS_NICE or DRM_MASTER */
++	if (capable(CAP_SYS_NICE) || drm_is_current_master(file))
++		return 0;
++
++	return -EACCES;
++}
++
+ static int panthor_ioctl_group_create(struct drm_device *ddev, void *data,
+ 				      struct drm_file *file)
+ {
+@@ -1011,6 +1030,10 @@ static int panthor_ioctl_group_create(struct drm_device *ddev, void *data,
+ 	if (ret)
+ 		return ret;
+ 
++	ret = group_priority_permit(file, args->priority);
++	if (ret)
++		return ret;
++
+ 	ret = panthor_group_create(pfile, args, queue_args);
+ 	if (ret >= 0) {
+ 		args->group_handle = ret;
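
The panthor_drv.c hunk above gates group priorities: anything up to medium is open to every client, while higher levels require CAP_SYS_NICE or DRM master status. Below is a compact sketch of that tiered permission check, with a plain boolean standing in for the capability/master test and an invented enum; the real driver uses the uAPI PANTHOR_GROUP_PRIORITY_* values.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

enum group_priority {			/* invented, mirrors the shape only */
	PRIO_LOW,
	PRIO_MEDIUM,
	PRIO_HIGH,
	PRIO_COUNT,			/* sentinel, never a valid value */
};

static int group_priority_permit(unsigned int prio, bool privileged)
{
	if (prio >= PRIO_COUNT)
		return -EINVAL;		/* not a valid level at all */
	if (prio <= PRIO_MEDIUM)
		return 0;		/* always allowed */
	return privileged ? 0 : -EACCES;/* high needs elevated rights */
}

int main(void)
{
	printf("high, unprivileged: %d\n", group_priority_permit(PRIO_HIGH, false));
	printf("high, privileged:   %d\n", group_priority_permit(PRIO_HIGH, true));
	printf("medium, anyone:     %d\n", group_priority_permit(PRIO_MEDIUM, false));
	return 0;
}

Checking against the COUNT sentinel rather than the current highest enumerator is also why the related panthor_sched.c hunk later in this patch switches its bound to PANTHOR_CSG_PRIORITY_COUNT.
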
+diff --git a/drivers/gpu/drm/panthor/panthor_fw.c b/drivers/gpu/drm/panthor/panthor_fw.c
+index 857f3f11258aa..ef232c0c20493 100644
+--- a/drivers/gpu/drm/panthor/panthor_fw.c
++++ b/drivers/gpu/drm/panthor/panthor_fw.c
+@@ -1089,6 +1089,12 @@ int panthor_fw_post_reset(struct panthor_device *ptdev)
+ 		panthor_fw_stop(ptdev);
+ 		ptdev->fw->fast_reset = false;
+ 		drm_err(&ptdev->base, "FW fast reset failed, trying a slow reset");
++
++		ret = panthor_vm_flush_all(ptdev->fw->vm);
++		if (ret) {
++			drm_err(&ptdev->base, "FW slow reset failed (couldn't flush FW's AS l2cache)");
++			return ret;
++		}
+ 	}
+ 
+ 	/* Reload all sections, including RO ones. We're not supposed
+@@ -1099,7 +1105,7 @@ int panthor_fw_post_reset(struct panthor_device *ptdev)
+ 
+ 	ret = panthor_fw_start(ptdev);
+ 	if (ret) {
+-		drm_err(&ptdev->base, "FW slow reset failed");
++		drm_err(&ptdev->base, "FW slow reset failed (couldn't start the FW)");
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
+index fa0a002b1016e..cc6e13a977835 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.c
++++ b/drivers/gpu/drm/panthor/panthor_mmu.c
+@@ -576,6 +576,12 @@ static int mmu_hw_do_operation_locked(struct panthor_device *ptdev, int as_nr,
+ 	if (as_nr < 0)
+ 		return 0;
+ 
++	/*
++	 * If the AS number is greater than zero, then we can be sure
++	 * the device is up and running, so we don't need to explicitly
++	 * power it up
++	 */
++
+ 	if (op != AS_COMMAND_UNLOCK)
+ 		lock_region(ptdev, as_nr, iova, size);
+ 
+@@ -874,14 +880,23 @@ static int panthor_vm_flush_range(struct panthor_vm *vm, u64 iova, u64 size)
+ 	if (!drm_dev_enter(&ptdev->base, &cookie))
+ 		return 0;
+ 
+-	/* Flush the PTs only if we're already awake */
+-	if (pm_runtime_active(ptdev->base.dev))
+-		ret = mmu_hw_do_operation(vm, iova, size, AS_COMMAND_FLUSH_PT);
++	ret = mmu_hw_do_operation(vm, iova, size, AS_COMMAND_FLUSH_PT);
+ 
+ 	drm_dev_exit(cookie);
+ 	return ret;
+ }
+ 
++/**
++ * panthor_vm_flush_all() - Flush L2 caches for the entirety of a VM's AS
++ * @vm: VM whose cache to flush
++ *
++ * Return: 0 on success, a negative error code if flush failed.
++ */
++int panthor_vm_flush_all(struct panthor_vm *vm)
++{
++	return panthor_vm_flush_range(vm, vm->base.mm_start, vm->base.mm_range);
++}
++
+ static int panthor_vm_unmap_pages(struct panthor_vm *vm, u64 iova, u64 size)
+ {
+ 	struct panthor_device *ptdev = vm->ptdev;
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.h b/drivers/gpu/drm/panthor/panthor_mmu.h
+index f3c1ed19f973f..6788771071e35 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.h
++++ b/drivers/gpu/drm/panthor/panthor_mmu.h
+@@ -31,6 +31,7 @@ panthor_vm_get_bo_for_va(struct panthor_vm *vm, u64 va, u64 *bo_offset);
+ int panthor_vm_active(struct panthor_vm *vm);
+ void panthor_vm_idle(struct panthor_vm *vm);
+ int panthor_vm_as(struct panthor_vm *vm);
++int panthor_vm_flush_all(struct panthor_vm *vm);
+ 
+ struct panthor_heap_pool *
+ panthor_vm_get_heap_pool(struct panthor_vm *vm, bool create);
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
+index 463bcd3cf00f3..12b272a912f86 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.c
++++ b/drivers/gpu/drm/panthor/panthor_sched.c
+@@ -3092,7 +3092,7 @@ int panthor_group_create(struct panthor_file *pfile,
+ 	if (group_args->pad)
+ 		return -EINVAL;
+ 
+-	if (group_args->priority > PANTHOR_CSG_PRIORITY_HIGH)
++	if (group_args->priority >= PANTHOR_CSG_PRIORITY_COUNT)
+ 		return -EINVAL;
+ 
+ 	if ((group_args->compute_core_mask & ~ptdev->gpu_info.shader_present) ||
+diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+index 94445810ccc93..23c302af4cd57 100644
+--- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+@@ -350,6 +350,7 @@
+ 
+ #define HALF_SLICE_CHICKEN7				XE_REG_MCR(0xe194, XE_REG_OPTION_MASKED)
+ #define   DG2_DISABLE_ROUND_ENABLE_ALLOW_FOR_SSLA	REG_BIT(15)
++#define   CLEAR_OPTIMIZATION_DISABLE			REG_BIT(6)
+ 
+ #define CACHE_MODE_SS				XE_REG_MCR(0xe420, XE_REG_OPTION_MASKED)
+ #define   DISABLE_ECC				REG_BIT(5)
+diff --git a/drivers/gpu/drm/xe/xe_gsc.c b/drivers/gpu/drm/xe/xe_gsc.c
+index 60202b9036877..95c17b72fa574 100644
+--- a/drivers/gpu/drm/xe/xe_gsc.c
++++ b/drivers/gpu/drm/xe/xe_gsc.c
+@@ -511,10 +511,22 @@ int xe_gsc_init_post_hwconfig(struct xe_gsc *gsc)
+ void xe_gsc_load_start(struct xe_gsc *gsc)
+ {
+ 	struct xe_gt *gt = gsc_to_gt(gsc);
++	struct xe_device *xe = gt_to_xe(gt);
+ 
+ 	if (!xe_uc_fw_is_loadable(&gsc->fw) || !gsc->q)
+ 		return;
+ 
++	/*
++	 * The GSC HW is only reset by driver FLR or D3cold entry. We don't
++	 * support the former at runtime, while the latter is only supported on
++	 * DGFX, for which we don't support GSC. Therefore, if GSC failed to
++	 * load previously there is no need to try again because the HW is
++	 * stuck in the error state.
++	 */
++	xe_assert(xe, !IS_DGFX(xe));
++	if (xe_uc_fw_is_in_error_state(&gsc->fw))
++		return;
++
+ 	/* GSC FW survives GT reset and D3Hot */
+ 	if (gsc_fw_is_loaded(gt)) {
+ 		xe_uc_fw_change_status(&gsc->fw, XE_UC_FIRMWARE_TRANSFERRED);
+diff --git a/drivers/gpu/drm/xe/xe_uc_fw.h b/drivers/gpu/drm/xe/xe_uc_fw.h
+index 35078038797e7..03951cb6de1ce 100644
+--- a/drivers/gpu/drm/xe/xe_uc_fw.h
++++ b/drivers/gpu/drm/xe/xe_uc_fw.h
+@@ -65,7 +65,7 @@ const char *xe_uc_fw_status_repr(enum xe_uc_fw_status status)
+ 	return "<invalid>";
+ }
+ 
+-static inline int xe_uc_fw_status_to_error(enum xe_uc_fw_status status)
++static inline int xe_uc_fw_status_to_error(const enum xe_uc_fw_status status)
+ {
+ 	switch (status) {
+ 	case XE_UC_FIRMWARE_NOT_SUPPORTED:
+@@ -108,7 +108,7 @@ static inline const char *xe_uc_fw_type_repr(enum xe_uc_fw_type type)
+ }
+ 
+ static inline enum xe_uc_fw_status
+-__xe_uc_fw_status(struct xe_uc_fw *uc_fw)
++__xe_uc_fw_status(const struct xe_uc_fw *uc_fw)
+ {
+ 	/* shouldn't call this before checking hw/blob availability */
+ 	XE_WARN_ON(uc_fw->status == XE_UC_FIRMWARE_UNINITIALIZED);
+@@ -156,6 +156,11 @@ static inline bool xe_uc_fw_is_overridden(const struct xe_uc_fw *uc_fw)
+ 	return uc_fw->user_overridden;
+ }
+ 
++static inline bool xe_uc_fw_is_in_error_state(const struct xe_uc_fw *uc_fw)
++{
++	return xe_uc_fw_status_to_error(__xe_uc_fw_status(uc_fw)) < 0;
++}
++
+ static inline void xe_uc_fw_sanitize(struct xe_uc_fw *uc_fw)
+ {
+ 	if (xe_uc_fw_is_loaded(uc_fw))
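
The xe_uc_fw.h hunk above, like the matching i915 one earlier, derives an "is in error state" predicate from the existing status-to-errno mapping so that a firmware which already failed to load is not retried. The sketch below mirrors that shape with a simplified, invented status enum; the real mappings cover more states, and the i915 variant compares against != 0 rather than < 0.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

enum fw_status {
	FW_NOT_SUPPORTED,
	FW_MISSING,
	FW_LOAD_FAIL,
	FW_AVAILABLE,
	FW_RUNNING,
};

static int fw_status_to_error(enum fw_status status)
{
	switch (status) {
	case FW_NOT_SUPPORTED:
		return 0;		/* absent but not an error */
	case FW_MISSING:
		return -ENOENT;
	case FW_LOAD_FAIL:
		return -EIO;
	case FW_AVAILABLE:
	case FW_RUNNING:
		return 0;
	}
	return -EINVAL;
}

static bool fw_is_in_error_state(enum fw_status status)
{
	return fw_status_to_error(status) < 0;
}

int main(void)
{
	enum fw_status s = FW_LOAD_FAIL;

	if (fw_is_in_error_state(s))
		printf("skip reload: firmware already failed (%d)\n",
		       fw_status_to_error(s));
	return 0;
}
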
+diff --git a/drivers/gpu/drm/xe/xe_wa.c b/drivers/gpu/drm/xe/xe_wa.c
+index dd214d95e4b63..66dafe980b9c2 100644
+--- a/drivers/gpu/drm/xe/xe_wa.c
++++ b/drivers/gpu/drm/xe/xe_wa.c
+@@ -485,6 +485,10 @@ static const struct xe_rtp_entry_sr engine_was[] = {
+ 	  XE_RTP_RULES(GRAPHICS_VERSION(2004), FUNC(xe_rtp_match_first_render_or_compute)),
+ 	  XE_RTP_ACTIONS(SET(TDL_TSL_CHICKEN, SLM_WMTP_RESTORE))
+ 	},
++	{ XE_RTP_NAME("14021402888"),
++	  XE_RTP_RULES(GRAPHICS_VERSION(2004), ENGINE_CLASS(RENDER)),
++	  XE_RTP_ACTIONS(SET(HALF_SLICE_CHICKEN7, CLEAR_OPTIMIZATION_DISABLE))
++	},
+ 
+ 	/* Xe2_HPG */
+ 
+@@ -533,6 +537,10 @@ static const struct xe_rtp_entry_sr engine_was[] = {
+ 		       FUNC(xe_rtp_match_first_render_or_compute)),
+ 	  XE_RTP_ACTIONS(SET(LSC_CHICKEN_BIT_0, WR_REQ_CHAINING_DIS))
+ 	},
++	{ XE_RTP_NAME("14021402888"),
++	  XE_RTP_RULES(GRAPHICS_VERSION(2001), ENGINE_CLASS(RENDER)),
++	  XE_RTP_ACTIONS(SET(HALF_SLICE_CHICKEN7, CLEAR_OPTIMIZATION_DISABLE))
++	},
+ 
+ 	/* Xe2_HPM */
+ 
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_hid.c b/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
+index 705b523370684..81f3024b7b1b5 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
+@@ -171,11 +171,13 @@ int amdtp_hid_probe(u32 cur_hid_dev, struct amdtp_cl_data *cli_data)
+ void amdtp_hid_remove(struct amdtp_cl_data *cli_data)
+ {
+ 	int i;
++	struct amdtp_hid_data *hid_data;
+ 
+ 	for (i = 0; i < cli_data->num_hid_devices; ++i) {
+ 		if (cli_data->hid_sensor_hubs[i]) {
+-			kfree(cli_data->hid_sensor_hubs[i]->driver_data);
++			hid_data = cli_data->hid_sensor_hubs[i]->driver_data;
+ 			hid_destroy_device(cli_data->hid_sensor_hubs[i]);
++			kfree(hid_data);
+ 			cli_data->hid_sensor_hubs[i] = NULL;
+ 		}
+ 	}
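
The amd_sfh_hid.c hunk above fixes a use-after-free: driver_data was freed before hid_destroy_device(), whose teardown path can still reach it. The sketch below shows the corrected ordering with stand-in types, saving the pointer first and freeing it only after the destroy call returns.

#include <stdio.h>
#include <stdlib.h>

struct fake_hid_device {
	void *driver_data;
};

/* teardown may still look at driver_data, so it must outlive this call */
static void destroy_device(struct fake_hid_device *hdev)
{
	printf("remove callback sees driver_data=%p\n", hdev->driver_data);
	free(hdev);
}

int main(void)
{
	struct fake_hid_device *hdev = calloc(1, sizeof(*hdev));
	void *hid_data;

	if (!hdev)
		return 1;
	hdev->driver_data = malloc(32);

	/*
	 * Buggy order: free(hdev->driver_data); destroy_device(hdev);
	 * would let the remove callback touch freed memory.
	 */
	hid_data = hdev->driver_data;	/* grab it while hdev is alive */
	destroy_device(hdev);		/* may still use driver_data */
	free(hid_data);			/* freed only after teardown */
	return 0;
}
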
+diff --git a/drivers/hid/bpf/Kconfig b/drivers/hid/bpf/Kconfig
+index 83214bae67687..d65482e02a6c2 100644
+--- a/drivers/hid/bpf/Kconfig
++++ b/drivers/hid/bpf/Kconfig
+@@ -3,7 +3,7 @@ menu "HID-BPF support"
+ 
+ config HID_BPF
+ 	bool "HID-BPF support"
+-	depends on BPF
++	depends on BPF_JIT
+ 	depends on BPF_SYSCALL
+ 	depends on DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ 	help
+diff --git a/drivers/hid/hid-cougar.c b/drivers/hid/hid-cougar.c
+index cb8bd8aae15b5..0fa785f52707a 100644
+--- a/drivers/hid/hid-cougar.c
++++ b/drivers/hid/hid-cougar.c
+@@ -106,7 +106,7 @@ static void cougar_fix_g6_mapping(void)
+ static __u8 *cougar_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 				 unsigned int *rsize)
+ {
+-	if (rdesc[2] == 0x09 && rdesc[3] == 0x02 &&
++	if (*rsize >= 117 && rdesc[2] == 0x09 && rdesc[3] == 0x02 &&
+ 	    (rdesc[115] | rdesc[116] << 8) >= HID_MAX_USAGES) {
+ 		hid_info(hdev,
+ 			"usage count exceeds max: fixing up report descriptor\n");
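
The hid-cougar.c hunk above adds a length check before the report-descriptor fixup dereferences fixed offsets 115 and 116, since the descriptor comes from the device and may be shorter than expected. A self-contained sketch of the same guard; the byte values and the usage limit are only loose stand-ins for the driver's constants.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_USAGES	12288	/* stand-in for HID_MAX_USAGES */

static bool needs_fixup(const uint8_t *rdesc, unsigned int rsize)
{
	/* the descriptor is device-provided: check its length first */
	if (rsize < 117)
		return false;

	return rdesc[2] == 0x09 && rdesc[3] == 0x02 &&
	       (rdesc[115] | rdesc[116] << 8) >= MAX_USAGES;
}

int main(void)
{
	uint8_t short_desc[4] = { 0x05, 0x01, 0x09, 0x02 };

	/* without the rsize check this would read past the buffer */
	printf("short descriptor needs fixup: %s\n",
	       needs_fixup(short_desc, sizeof(short_desc)) ? "yes" : "no");
	return 0;
}
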
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 12a707ab73f85..4d8378b677a62 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1952,6 +1952,7 @@ void vmbus_device_unregister(struct hv_device *device_obj)
+ 	 */
+ 	device_unregister(&device_obj->device);
+ }
++EXPORT_SYMBOL_GPL(vmbus_device_unregister);
+ 
+ #ifdef CONFIG_ACPI
+ /*
+diff --git a/drivers/hwmon/adc128d818.c b/drivers/hwmon/adc128d818.c
+index 8ac6e735ec5cf..5e805d4ee76ab 100644
+--- a/drivers/hwmon/adc128d818.c
++++ b/drivers/hwmon/adc128d818.c
+@@ -175,7 +175,7 @@ static ssize_t adc128_in_store(struct device *dev,
+ 
+ 	mutex_lock(&data->update_lock);
+ 	/* 10 mV LSB on limit registers */
+-	regval = clamp_val(DIV_ROUND_CLOSEST(val, 10), 0, 255);
++	regval = DIV_ROUND_CLOSEST(clamp_val(val, 0, 2550), 10);
+ 	data->in[index][nr] = regval << 4;
+ 	reg = index == 1 ? ADC128_REG_IN_MIN(nr) : ADC128_REG_IN_MAX(nr);
+ 	i2c_smbus_write_byte_data(data->client, reg, regval);
+@@ -213,7 +213,7 @@ static ssize_t adc128_temp_store(struct device *dev,
+ 		return err;
+ 
+ 	mutex_lock(&data->update_lock);
+-	regval = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -128, 127);
++	regval = DIV_ROUND_CLOSEST(clamp_val(val, -128000, 127000), 1000);
+ 	data->temp[index] = regval << 1;
+ 	i2c_smbus_write_byte_data(data->client,
+ 				  index == 1 ? ADC128_REG_TEMP_MAX
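
The adc128d818 hunk above, like the lm95234, nct6775 and w83627ehf hunks further down, reorders the limit-attribute stores to clamp the user-supplied milli-unit value first and divide afterwards, because DIV_ROUND_CLOSEST() on an unclamped LONG_MIN/LONG_MAX input overflows before the clamp can help. A small demo of the reordering follows, with simplified macro stand-ins for the kernel helpers (they only cover the non-negative case shown here).

#include <limits.h>
#include <stdio.h>

/* simplified stand-ins for the kernel helpers, valid for the demo values */
#define clamp_val(v, lo, hi)	((v) < (lo) ? (lo) : (v) > (hi) ? (hi) : (v))
#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))

int main(void)
{
	long val = LONG_MAX;	/* e.g. parsed from a sysfs limit write */

	/*
	 * Old order, clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 127):
	 * the rounding step adds 500 to LONG_MAX and overflows before
	 * the clamp ever runs.
	 */

	/* new order: clamp to the representable milli-unit range, then scale */
	long reg = DIV_ROUND_CLOSEST(clamp_val(val, 0L, 127000L), 1000);

	printf("stored register value: %ld\n", reg);
	return 0;
}
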
+diff --git a/drivers/hwmon/hp-wmi-sensors.c b/drivers/hwmon/hp-wmi-sensors.c
+index b5325d0e72b9c..dfa1d6926deac 100644
+--- a/drivers/hwmon/hp-wmi-sensors.c
++++ b/drivers/hwmon/hp-wmi-sensors.c
+@@ -1637,6 +1637,8 @@ static void hp_wmi_notify(u32 value, void *context)
+ 		goto out_unlock;
+ 
+ 	wobj = out.pointer;
++	if (!wobj)
++		goto out_unlock;
+ 
+ 	err = populate_event_from_wobj(dev, &event, wobj);
+ 	if (err) {
+diff --git a/drivers/hwmon/lm95234.c b/drivers/hwmon/lm95234.c
+index 67b9d7636ee42..37e8e9679aeb6 100644
+--- a/drivers/hwmon/lm95234.c
++++ b/drivers/hwmon/lm95234.c
+@@ -301,7 +301,8 @@ static ssize_t tcrit2_store(struct device *dev, struct device_attribute *attr,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, index ? 255 : 127);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, (index ? 255 : 127) * 1000),
++				1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->tcrit2[index] = val;
+@@ -350,7 +351,7 @@ static ssize_t tcrit1_store(struct device *dev, struct device_attribute *attr,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 255);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 255000), 1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->tcrit1[index] = val;
+@@ -391,7 +392,7 @@ static ssize_t tcrit1_hyst_store(struct device *dev,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	val = DIV_ROUND_CLOSEST(val, 1000);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, -255000, 255000), 1000);
+ 	val = clamp_val((int)data->tcrit1[index] - val, 0, 31);
+ 
+ 	mutex_lock(&data->update_lock);
+@@ -431,7 +432,7 @@ static ssize_t offset_store(struct device *dev, struct device_attribute *attr,
+ 		return ret;
+ 
+ 	/* Accuracy is 1/2 degrees C */
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 500), -128, 127);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, -64000, 63500), 500);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->toffset[index] = val;
+diff --git a/drivers/hwmon/ltc2991.c b/drivers/hwmon/ltc2991.c
+index f74ce9c25bf71..d5e120dfd592d 100644
+--- a/drivers/hwmon/ltc2991.c
++++ b/drivers/hwmon/ltc2991.c
+@@ -42,9 +42,9 @@
+ #define LTC2991_V7_V8_FILT_EN		BIT(7)
+ #define LTC2991_V7_V8_TEMP_EN		BIT(5)
+ #define LTC2991_V7_V8_DIFF_EN		BIT(4)
+-#define LTC2991_V5_V6_FILT_EN		BIT(7)
+-#define LTC2991_V5_V6_TEMP_EN		BIT(5)
+-#define LTC2991_V5_V6_DIFF_EN		BIT(4)
++#define LTC2991_V5_V6_FILT_EN		BIT(3)
++#define LTC2991_V5_V6_TEMP_EN		BIT(1)
++#define LTC2991_V5_V6_DIFF_EN		BIT(0)
+ 
+ #define LTC2991_REPEAT_ACQ_EN		BIT(4)
+ #define LTC2991_T_INT_FILT_EN		BIT(3)
+diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c
+index 9fbab8f023340..934fed3dd5866 100644
+--- a/drivers/hwmon/nct6775-core.c
++++ b/drivers/hwmon/nct6775-core.c
+@@ -2262,7 +2262,7 @@ store_temp_offset(struct device *dev, struct device_attribute *attr,
+ 	if (err < 0)
+ 		return err;
+ 
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -128, 127);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, -128000, 127000), 1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->temp_offset[nr] = val;
+diff --git a/drivers/hwmon/w83627ehf.c b/drivers/hwmon/w83627ehf.c
+index fe960c0a624f7..7d7d70afde655 100644
+--- a/drivers/hwmon/w83627ehf.c
++++ b/drivers/hwmon/w83627ehf.c
+@@ -895,7 +895,7 @@ store_target_temp(struct device *dev, struct device_attribute *attr,
+ 	if (err < 0)
+ 		return err;
+ 
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 127);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 127000), 1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->target_temp[nr] = val;
+@@ -920,7 +920,7 @@ store_tolerance(struct device *dev, struct device_attribute *attr,
+ 		return err;
+ 
+ 	/* Limit the temp to 0C - 15C */
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 15);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 15000), 1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	reg = w83627ehf_read_value(data, W83627EHF_REG_TOLERANCE[nr]);
+diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
+index 4e01a95cc4d0a..1a96bf5a0bf87 100644
+--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
++++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
+@@ -294,7 +294,10 @@ static int hci_dma_init(struct i3c_hci *hci)
+ 
+ 		rh->ibi_chunk_sz = dma_get_cache_alignment();
+ 		rh->ibi_chunk_sz *= IBI_CHUNK_CACHELINES;
+-		BUG_ON(rh->ibi_chunk_sz > 256);
++		if (rh->ibi_chunk_sz > 256) {
++			ret = -EINVAL;
++			goto err_out;
++		}
+ 
+ 		ibi_status_ring_sz = rh->ibi_status_sz * rh->ibi_status_entries;
+ 		ibi_data_ring_sz = rh->ibi_chunk_sz * rh->ibi_chunks_total;
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index bb299ce02cccb..f0362509319e0 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -1052,29 +1052,59 @@ static int svc_i3c_master_xfer(struct svc_i3c_master *master,
+ 			       u8 *in, const u8 *out, unsigned int xfer_len,
+ 			       unsigned int *actual_len, bool continued)
+ {
++	int retry = 2;
+ 	u32 reg;
+ 	int ret;
+ 
+ 	/* clean SVC_I3C_MINT_IBIWON w1c bits */
+ 	writel(SVC_I3C_MINT_IBIWON, master->regs + SVC_I3C_MSTATUS);
+ 
+-	writel(SVC_I3C_MCTRL_REQUEST_START_ADDR |
+-	       xfer_type |
+-	       SVC_I3C_MCTRL_IBIRESP_NACK |
+-	       SVC_I3C_MCTRL_DIR(rnw) |
+-	       SVC_I3C_MCTRL_ADDR(addr) |
+-	       SVC_I3C_MCTRL_RDTERM(*actual_len),
+-	       master->regs + SVC_I3C_MCTRL);
+ 
+-	ret = readl_poll_timeout(master->regs + SVC_I3C_MSTATUS, reg,
++	while (retry--) {
++		writel(SVC_I3C_MCTRL_REQUEST_START_ADDR |
++		       xfer_type |
++		       SVC_I3C_MCTRL_IBIRESP_NACK |
++		       SVC_I3C_MCTRL_DIR(rnw) |
++		       SVC_I3C_MCTRL_ADDR(addr) |
++		       SVC_I3C_MCTRL_RDTERM(*actual_len),
++		       master->regs + SVC_I3C_MCTRL);
++
++		ret = readl_poll_timeout(master->regs + SVC_I3C_MSTATUS, reg,
+ 				 SVC_I3C_MSTATUS_MCTRLDONE(reg), 0, 1000);
+-	if (ret)
+-		goto emit_stop;
++		if (ret)
++			goto emit_stop;
+ 
+-	if (readl(master->regs + SVC_I3C_MERRWARN) & SVC_I3C_MERRWARN_NACK) {
+-		ret = -ENXIO;
+-		*actual_len = 0;
+-		goto emit_stop;
++		if (readl(master->regs + SVC_I3C_MERRWARN) & SVC_I3C_MERRWARN_NACK) {
++			/*
++			 * According to I3C Spec 1.1.1, 11-Jun-2021, section: 5.1.2.2.3.
++			 * If the Controller chooses to start an I3C Message with an I3C Dynamic
++			 * Address, then special provisions shall be made because that same I3C
++			 * Target may be initiating an IBI or a Controller Role Request. So, one of
++			 * three things may happen: (skip 1, 2)
++			 *
++			 * 3. The Addresses match and the RnW bits also match, and so neither
++			 * Controller nor Target will ACK since both are expecting the other side to
++			 * provide ACK. As a result, each side might think it had "won" arbitration,
++			 * but neither side would continue, as each would subsequently see that the
++			 * other did not provide ACK.
++			 * ...
++			 * For either value of RnW: Due to the NACK, the Controller shall defer the
++			 * Private Write or Private Read, and should typically transmit the Target
++			 * Address again after a Repeated START (i.e., the next one or any one prior
++			 * to a STOP in the Frame). Since the Address Header following a Repeated
++			 * START is not arbitrated, the Controller will always win (see Section
++			 * 5.1.2.2.4).
++			 */
++			if (retry && addr != 0x7e) {
++				writel(SVC_I3C_MERRWARN_NACK, master->regs + SVC_I3C_MERRWARN);
++			} else {
++				ret = -ENXIO;
++				*actual_len = 0;
++				goto emit_stop;
++			}
++		} else {
++			break;
++		}
+ 	}
+ 
+ 	/*
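
The svc-i3c-master.c hunk above retries the START once when the NACK is plausibly the result of address arbitration against an in-band interrupt, per the I3C spec passage quoted in the comment, and only then reports -ENXIO. The sketch below keeps just the retry shape: the bus access is faked with a counter, the 0x7e broadcast special case is retained, and everything else is invented.

#include <errno.h>
#include <stdio.h>

static int attempts_until_ack = 1;	/* first START gets NACKed */

static int emit_start(unsigned int addr)
{
	(void)addr;
	if (attempts_until_ack-- > 0)
		return -EAGAIN;		/* pretend the target NACKed */
	return 0;
}

static int do_xfer(unsigned int addr)
{
	int retry = 2;
	int ret = -EIO;

	while (retry--) {
		ret = emit_start(addr);
		if (ret != -EAGAIN)
			break;		/* acked, or a hard failure */
		if (!retry || addr == 0x7e)
			return -ENXIO;	/* broadcast or out of retries */
		printf("NACK on 0x%02x, clearing the error and retrying\n",
		       addr);
	}
	return ret;
}

int main(void)
{
	printf("transfer result: %d\n", do_xfer(0x09));
	return 0;
}
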
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index e7b1d517d3def..d2fe0269b6d3a 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -147,15 +147,18 @@ struct ad7124_chip_info {
+ struct ad7124_channel_config {
+ 	bool live;
+ 	unsigned int cfg_slot;
+-	enum ad7124_ref_sel refsel;
+-	bool bipolar;
+-	bool buf_positive;
+-	bool buf_negative;
+-	unsigned int vref_mv;
+-	unsigned int pga_bits;
+-	unsigned int odr;
+-	unsigned int odr_sel_bits;
+-	unsigned int filter_type;
++	/* Following fields are used to compare equality. */
++	struct_group(config_props,
++		enum ad7124_ref_sel refsel;
++		bool bipolar;
++		bool buf_positive;
++		bool buf_negative;
++		unsigned int vref_mv;
++		unsigned int pga_bits;
++		unsigned int odr;
++		unsigned int odr_sel_bits;
++		unsigned int filter_type;
++	);
+ };
+ 
+ struct ad7124_channel {
+@@ -334,11 +337,12 @@ static struct ad7124_channel_config *ad7124_find_similar_live_cfg(struct ad7124_
+ 	ptrdiff_t cmp_size;
+ 	int i;
+ 
+-	cmp_size = (u8 *)&cfg->live - (u8 *)cfg;
++	cmp_size = sizeof_field(struct ad7124_channel_config, config_props);
+ 	for (i = 0; i < st->num_channels; i++) {
+ 		cfg_aux = &st->channels[i].cfg;
+ 
+-		if (cfg_aux->live && !memcmp(cfg, cfg_aux, cmp_size))
++		if (cfg_aux->live &&
++		    !memcmp(&cfg->config_props, &cfg_aux->config_props, cmp_size))
+ 			return cfg_aux;
+ 	}
+ 
+@@ -762,6 +766,7 @@ static int ad7124_soft_reset(struct ad7124_state *st)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	fsleep(200);
+ 	timeout = 100;
+ 	do {
+ 		ret = ad_sd_read_reg(&st->sd, AD7124_STATUS, 1, &readval);
+@@ -837,8 +842,6 @@ static int ad7124_parse_channel_config(struct iio_dev *indio_dev,
+ 	st->channels = channels;
+ 
+ 	device_for_each_child_node_scoped(dev, child) {
+-		cfg = &st->channels[channel].cfg;
+-
+ 		ret = fwnode_property_read_u32(child, "reg", &channel);
+ 		if (ret)
+ 			return ret;
+@@ -856,6 +859,7 @@ static int ad7124_parse_channel_config(struct iio_dev *indio_dev,
+ 		st->channels[channel].ain = AD7124_CHANNEL_AINP(ain[0]) |
+ 						  AD7124_CHANNEL_AINM(ain[1]);
+ 
++		cfg = &st->channels[channel].cfg;
+ 		cfg->bipolar = fwnode_property_read_bool(child, "bipolar");
+ 
+ 		ret = fwnode_property_read_u32(child, "adi,reference-select", &tmp);
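
The ad7124.c hunk above wraps the comparable fields of the channel configuration in struct_group() so the similar-config lookup can memcmp() exactly those fields instead of relying on their position relative to "live". Below is a plain-C sketch of the same idea using a named nested struct (the kernel macro additionally keeps the fields addressable without the extra level); the structs are zeroed first so padding bytes cannot defeat the memcmp, much as the driver's zeroed channel allocation does.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct channel_config {
	bool live;
	unsigned int cfg_slot;
	struct {			/* the comparable properties only */
		bool bipolar;
		unsigned int vref_mv;
		unsigned int odr;
	} props;
};

static bool same_props(const struct channel_config *a,
		       const struct channel_config *b)
{
	/* compares exactly the grouped fields, never live/cfg_slot */
	return !memcmp(&a->props, &b->props, sizeof(a->props));
}

int main(void)
{
	struct channel_config a, b;

	/* zero everything first so padding bytes cannot break memcmp */
	memset(&a, 0, sizeof(a));
	memset(&b, 0, sizeof(b));

	a.live = true;
	a.props.bipolar = true;
	a.props.vref_mv = 2500;
	a.props.odr = 128;

	b.live = false;
	b.cfg_slot = 3;
	b.props = a.props;

	printf("configs share properties: %s\n",
	       same_props(&a, &b) ? "yes" : "no");
	return 0;
}
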
+diff --git a/drivers/iio/adc/ad7606.c b/drivers/iio/adc/ad7606.c
+index 1928d9ae5bcff..1c08c0921ee71 100644
+--- a/drivers/iio/adc/ad7606.c
++++ b/drivers/iio/adc/ad7606.c
+@@ -49,7 +49,7 @@ static const unsigned int ad7616_oversampling_avail[8] = {
+ 	1, 2, 4, 8, 16, 32, 64, 128,
+ };
+ 
+-static int ad7606_reset(struct ad7606_state *st)
++int ad7606_reset(struct ad7606_state *st)
+ {
+ 	if (st->gpio_reset) {
+ 		gpiod_set_value(st->gpio_reset, 1);
+@@ -60,6 +60,7 @@ static int ad7606_reset(struct ad7606_state *st)
+ 
+ 	return -ENODEV;
+ }
++EXPORT_SYMBOL_NS_GPL(ad7606_reset, IIO_AD7606);
+ 
+ static int ad7606_reg_access(struct iio_dev *indio_dev,
+ 			     unsigned int reg,
+@@ -88,31 +89,6 @@ static int ad7606_read_samples(struct ad7606_state *st)
+ {
+ 	unsigned int num = st->chip_info->num_channels - 1;
+ 	u16 *data = st->data;
+-	int ret;
+-
+-	/*
+-	 * The frstdata signal is set to high while and after reading the sample
+-	 * of the first channel and low for all other channels. This can be used
+-	 * to check that the incoming data is correctly aligned. During normal
+-	 * operation the data should never become unaligned, but some glitch or
+-	 * electrostatic discharge might cause an extra read or clock cycle.
+-	 * Monitoring the frstdata signal allows to recover from such failure
+-	 * situations.
+-	 */
+-
+-	if (st->gpio_frstdata) {
+-		ret = st->bops->read_block(st->dev, 1, data);
+-		if (ret)
+-			return ret;
+-
+-		if (!gpiod_get_value(st->gpio_frstdata)) {
+-			ad7606_reset(st);
+-			return -EIO;
+-		}
+-
+-		data++;
+-		num--;
+-	}
+ 
+ 	return st->bops->read_block(st->dev, num, data);
+ }
+diff --git a/drivers/iio/adc/ad7606.h b/drivers/iio/adc/ad7606.h
+index 0c6a88cc46958..6649e84d25de6 100644
+--- a/drivers/iio/adc/ad7606.h
++++ b/drivers/iio/adc/ad7606.h
+@@ -151,6 +151,8 @@ int ad7606_probe(struct device *dev, int irq, void __iomem *base_address,
+ 		 const char *name, unsigned int id,
+ 		 const struct ad7606_bus_ops *bops);
+ 
++int ad7606_reset(struct ad7606_state *st);
++
+ enum ad7606_supported_device_ids {
+ 	ID_AD7605_4,
+ 	ID_AD7606_8,
+diff --git a/drivers/iio/adc/ad7606_par.c b/drivers/iio/adc/ad7606_par.c
+index d8408052262e4..6bc587b20f05d 100644
+--- a/drivers/iio/adc/ad7606_par.c
++++ b/drivers/iio/adc/ad7606_par.c
+@@ -7,6 +7,7 @@
+ 
+ #include <linux/mod_devicetable.h>
+ #include <linux/module.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/platform_device.h>
+ #include <linux/types.h>
+ #include <linux/err.h>
+@@ -21,8 +22,29 @@ static int ad7606_par16_read_block(struct device *dev,
+ 	struct iio_dev *indio_dev = dev_get_drvdata(dev);
+ 	struct ad7606_state *st = iio_priv(indio_dev);
+ 
+-	insw((unsigned long)st->base_address, buf, count);
+ 
++	/*
++	 * On the parallel interface, the frstdata signal is set to high while
++	 * and after reading the sample of the first channel and low for all
++	 * other channels.  This can be used to check that the incoming data is
++	 * correctly aligned.  During normal operation the data should never
++	 * become unaligned, but some glitch or electrostatic discharge might
++	 * cause an extra read or clock cycle.  Monitoring the frstdata signal
++	 * allows to recover from such failure situations.
++	 */
++	int num = count;
++	u16 *_buf = buf;
++
++	if (st->gpio_frstdata) {
++		insw((unsigned long)st->base_address, _buf, 1);
++		if (!gpiod_get_value(st->gpio_frstdata)) {
++			ad7606_reset(st);
++			return -EIO;
++		}
++		_buf++;
++		num--;
++	}
++	insw((unsigned long)st->base_address, _buf, num);
+ 	return 0;
+ }
+ 
+@@ -35,8 +57,28 @@ static int ad7606_par8_read_block(struct device *dev,
+ {
+ 	struct iio_dev *indio_dev = dev_get_drvdata(dev);
+ 	struct ad7606_state *st = iio_priv(indio_dev);
+-
+-	insb((unsigned long)st->base_address, buf, count * 2);
++	/*
++	 * On the parallel interface, the frstdata signal is set to high while
++	 * and after reading the sample of the first channel and low for all
++	 * other channels.  This can be used to check that the incoming data is
++	 * correctly aligned.  During normal operation the data should never
++	 * become unaligned, but some glitch or electrostatic discharge might
++	 * cause an extra read or clock cycle.  Monitoring the frstdata signal
++	 * allows to recover from such failure situations.
++	 */
++	int num = count;
++	u16 *_buf = buf;
++
++	if (st->gpio_frstdata) {
++		insb((unsigned long)st->base_address, _buf, 2);
++		if (!gpiod_get_value(st->gpio_frstdata)) {
++			ad7606_reset(st);
++			return -EIO;
++		}
++		_buf++;
++		num--;
++	}
++	insb((unsigned long)st->base_address, _buf, num * 2);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index a2b87f6b7a071..6231c635803d1 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -568,7 +568,7 @@ EXPORT_SYMBOL_NS_GPL(ad_sd_validate_trigger, IIO_AD_SIGMA_DELTA);
+ static int devm_ad_sd_probe_trigger(struct device *dev, struct iio_dev *indio_dev)
+ {
+ 	struct ad_sigma_delta *sigma_delta = iio_device_get_drvdata(indio_dev);
+-	unsigned long irq_flags = irq_get_trigger_type(sigma_delta->spi->irq);
++	unsigned long irq_flags = irq_get_trigger_type(sigma_delta->irq_line);
+ 	int ret;
+ 
+ 	if (dev != &sigma_delta->spi->dev) {
+diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+index 918f6f8d65b6c..0f50a99e89676 100644
+--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
++++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+@@ -193,7 +193,7 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
+ 
+ 	ret = dma_get_slave_caps(chan, &caps);
+ 	if (ret < 0)
+-		goto err_free;
++		goto err_release;
+ 
+ 	/* Needs to be aligned to the maximum of the minimums */
+ 	if (caps.src_addr_widths)
+@@ -219,6 +219,8 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
+ 
+ 	return &dmaengine_buffer->queue.buffer;
+ 
++err_release:
++	dma_release_channel(chan);
+ err_free:
+ 	kfree(dmaengine_buffer);
+ 	return ERR_PTR(ret);
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
+index 84273660ca2eb..3bfeabab0ec4f 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
+@@ -248,12 +248,20 @@ static irqreturn_t inv_mpu6050_interrupt_handle(int irq, void *p)
+ 	int result;
+ 
+ 	switch (st->chip_type) {
++	case INV_MPU6000:
+ 	case INV_MPU6050:
++	case INV_MPU9150:
++		/*
++		 * WoM is not supported and interrupt status read seems to be broken for
++		 * some chips. Since data ready is the only interrupt, bypass interrupt
++		 * status read and always assert data ready bit.
++		 */
++		wom_bits = 0;
++		int_status = INV_MPU6050_BIT_RAW_DATA_RDY_INT;
++		goto data_ready_interrupt;
+ 	case INV_MPU6500:
+ 	case INV_MPU6515:
+ 	case INV_MPU6880:
+-	case INV_MPU6000:
+-	case INV_MPU9150:
+ 	case INV_MPU9250:
+ 	case INV_MPU9255:
+ 		wom_bits = INV_MPU6500_BIT_WOM_INT;
+@@ -279,6 +287,7 @@ static irqreturn_t inv_mpu6050_interrupt_handle(int irq, void *p)
+ 		}
+ 	}
+ 
++data_ready_interrupt:
+ 	/* handle raw data interrupt */
+ 	if (int_status & INV_MPU6050_BIT_RAW_DATA_RDY_INT) {
+ 		indio_dev->pollfunc->timestamp = st->it_timestamp;
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index 39cf26d69d17a..e1e1b31bf8b4b 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -647,17 +647,17 @@ static int iio_convert_raw_to_processed_unlocked(struct iio_channel *chan,
+ 		break;
+ 	case IIO_VAL_INT_PLUS_MICRO:
+ 		if (scale_val2 < 0)
+-			*processed = -raw64 * scale_val;
++			*processed = -raw64 * scale_val * scale;
+ 		else
+-			*processed = raw64 * scale_val;
++			*processed = raw64 * scale_val * scale;
+ 		*processed += div_s64(raw64 * (s64)scale_val2 * scale,
+ 				      1000000LL);
+ 		break;
+ 	case IIO_VAL_INT_PLUS_NANO:
+ 		if (scale_val2 < 0)
+-			*processed = -raw64 * scale_val;
++			*processed = -raw64 * scale_val * scale;
+ 		else
+-			*processed = raw64 * scale_val;
++			*processed = raw64 * scale_val * scale;
+ 		*processed += div_s64(raw64 * (s64)scale_val2 * scale,
+ 				      1000000000LL);
+ 		break;
+diff --git a/drivers/input/misc/uinput.c b/drivers/input/misc/uinput.c
+index d98212d55108c..2c973f15cab7d 100644
+--- a/drivers/input/misc/uinput.c
++++ b/drivers/input/misc/uinput.c
+@@ -417,6 +417,20 @@ static int uinput_validate_absinfo(struct input_dev *dev, unsigned int code,
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * Limit number of contacts to a reasonable value (100). This
++	 * ensures that we need less than 2 pages for struct input_mt
++	 * (we are not using in-kernel slot assignment so not going to
++	 * allocate memory for the "red" table), and we should have no
++	 * trouble getting this much memory.
++	 */
++	if (code == ABS_MT_SLOT && max > 99) {
++		printk(KERN_DEBUG
++		       "%s: unreasonably large number of slots requested: %d\n",
++		       UINPUT_NAME, max);
++		return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/input/touchscreen/ili210x.c b/drivers/input/touchscreen/ili210x.c
+index 79bdb2b109496..f3c3ad70244f1 100644
+--- a/drivers/input/touchscreen/ili210x.c
++++ b/drivers/input/touchscreen/ili210x.c
+@@ -597,7 +597,7 @@ static int ili251x_firmware_to_buffer(const struct firmware *fw,
+ 	 * once, copy them all into this buffer at the right locations, and then
+ 	 * do all operations on this linear buffer.
+ 	 */
+-	fw_buf = kzalloc(SZ_64K, GFP_KERNEL);
++	fw_buf = kvmalloc(SZ_64K, GFP_KERNEL);
+ 	if (!fw_buf)
+ 		return -ENOMEM;
+ 
+@@ -627,7 +627,7 @@ static int ili251x_firmware_to_buffer(const struct firmware *fw,
+ 	return 0;
+ 
+ err_big:
+-	kfree(fw_buf);
++	kvfree(fw_buf);
+ 	return error;
+ }
+ 
+@@ -870,7 +870,7 @@ static ssize_t ili210x_firmware_update_store(struct device *dev,
+ 	ili210x_hardware_reset(priv->reset_gpio);
+ 	dev_dbg(dev, "Firmware update ended, error=%i\n", error);
+ 	enable_irq(client->irq);
+-	kfree(fwbuf);
++	kvfree(fwbuf);
+ 	return error;
+ }
+ 
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index 304e84949ca72..1c8d3141cb55c 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -1446,7 +1446,7 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
+ 	 */
+ 	writel(qi->free_head << shift, iommu->reg + DMAR_IQT_REG);
+ 
+-	while (qi->desc_status[wait_index] != QI_DONE) {
++	while (READ_ONCE(qi->desc_status[wait_index]) != QI_DONE) {
+ 		/*
+ 		 * We will leave the interrupts disabled, to prevent interrupt
+ 		 * context to queue another cmd while a cmd is already submitted
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index f55ec1fd7942a..e9bea0305c268 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -854,7 +854,7 @@ static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
+ 			domain_flush_cache(domain, tmp_page, VTD_PAGE_SIZE);
+ 			pteval = ((uint64_t)virt_to_dma_pfn(tmp_page) << VTD_PAGE_SHIFT) | DMA_PTE_READ | DMA_PTE_WRITE;
+ 			if (domain->use_first_level)
+-				pteval |= DMA_FL_PTE_XD | DMA_FL_PTE_US | DMA_FL_PTE_ACCESS;
++				pteval |= DMA_FL_PTE_US | DMA_FL_PTE_ACCESS;
+ 
+ 			tmp = 0ULL;
+ 			if (!try_cmpxchg64(&pte->val, &tmp, pteval))
+@@ -1872,7 +1872,7 @@ __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
+ 	attr = prot & (DMA_PTE_READ | DMA_PTE_WRITE | DMA_PTE_SNP);
+ 	attr |= DMA_FL_PTE_PRESENT;
+ 	if (domain->use_first_level) {
+-		attr |= DMA_FL_PTE_XD | DMA_FL_PTE_US | DMA_FL_PTE_ACCESS;
++		attr |= DMA_FL_PTE_US | DMA_FL_PTE_ACCESS;
+ 		if (prot & DMA_PTE_WRITE)
+ 			attr |= DMA_FL_PTE_DIRTY;
+ 	}
+diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
+index eaf015b4353b1..9a3b064126de9 100644
+--- a/drivers/iommu/intel/iommu.h
++++ b/drivers/iommu/intel/iommu.h
+@@ -49,7 +49,6 @@
+ #define DMA_FL_PTE_US		BIT_ULL(2)
+ #define DMA_FL_PTE_ACCESS	BIT_ULL(5)
+ #define DMA_FL_PTE_DIRTY	BIT_ULL(6)
+-#define DMA_FL_PTE_XD		BIT_ULL(63)
+ 
+ #define DMA_SL_PTE_DIRTY_BIT	9
+ #define DMA_SL_PTE_DIRTY	BIT_ULL(DMA_SL_PTE_DIRTY_BIT)
+@@ -831,11 +830,10 @@ static inline void dma_clear_pte(struct dma_pte *pte)
+ static inline u64 dma_pte_addr(struct dma_pte *pte)
+ {
+ #ifdef CONFIG_64BIT
+-	return pte->val & VTD_PAGE_MASK & (~DMA_FL_PTE_XD);
++	return pte->val & VTD_PAGE_MASK;
+ #else
+ 	/* Must have a full atomic 64-bit read */
+-	return  __cmpxchg64(&pte->val, 0ULL, 0ULL) &
+-			VTD_PAGE_MASK & (~DMA_FL_PTE_XD);
++	return  __cmpxchg64(&pte->val, 0ULL, 0ULL) & VTD_PAGE_MASK;
+ #endif
+ }
+ 
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index abce19e2ad6f4..aabcdf7565817 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -333,7 +333,6 @@ int intel_pasid_setup_first_level(struct intel_iommu *iommu,
+ 	pasid_set_domain_id(pte, did);
+ 	pasid_set_address_width(pte, iommu->agaw);
+ 	pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
+-	pasid_set_nxe(pte);
+ 
+ 	/* Setup Present and PASID Granular Transfer Type: */
+ 	pasid_set_translation_type(pte, PASID_ENTRY_PGTT_FL_ONLY);
+diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
+index da9978fef7ac5..dde6d3ba5ae0f 100644
+--- a/drivers/iommu/intel/pasid.h
++++ b/drivers/iommu/intel/pasid.h
+@@ -247,16 +247,6 @@ static inline void pasid_set_page_snoop(struct pasid_entry *pe, bool value)
+ 	pasid_set_bits(&pe->val[1], 1 << 23, value << 23);
+ }
+ 
+-/*
+- * Setup No Execute Enable bit (Bit 133) of a scalable mode PASID
+- * entry. It is required when XD bit of the first level page table
+- * entry is about to be set.
+- */
+-static inline void pasid_set_nxe(struct pasid_entry *pe)
+-{
+-	pasid_set_bits(&pe->val[2], 1 << 5, 1 << 5);
+-}
+-
+ /*
+  * Setup the Page Snoop (PGSNP) field (Bit 88) of a scalable mode
+  * PASID entry.
+diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
+index 33d142f8057d7..a9f1fe44c4c0b 100644
+--- a/drivers/iommu/iommufd/hw_pagetable.c
++++ b/drivers/iommu/iommufd/hw_pagetable.c
+@@ -236,7 +236,8 @@ iommufd_hwpt_nested_alloc(struct iommufd_ctx *ictx,
+ 	}
+ 	hwpt->domain->owner = ops;
+ 
+-	if (WARN_ON_ONCE(hwpt->domain->type != IOMMU_DOMAIN_NESTED)) {
++	if (WARN_ON_ONCE(hwpt->domain->type != IOMMU_DOMAIN_NESTED ||
++			 !hwpt->domain->ops->cache_invalidate_user)) {
+ 		rc = -EINVAL;
+ 		goto out_abort;
+ 	}
+diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-iommu.c
+index c519b991749d7..dd3f07384624c 100644
+--- a/drivers/iommu/sun50i-iommu.c
++++ b/drivers/iommu/sun50i-iommu.c
+@@ -452,6 +452,7 @@ static int sun50i_iommu_enable(struct sun50i_iommu *iommu)
+ 		    IOMMU_TLB_PREFETCH_MASTER_ENABLE(3) |
+ 		    IOMMU_TLB_PREFETCH_MASTER_ENABLE(4) |
+ 		    IOMMU_TLB_PREFETCH_MASTER_ENABLE(5));
++	iommu_write(iommu, IOMMU_BYPASS_REG, 0);
+ 	iommu_write(iommu, IOMMU_INT_ENABLE_REG, IOMMU_INT_MASK);
+ 	iommu_write(iommu, IOMMU_DM_AUT_CTRL_REG(SUN50I_IOMMU_ACI_NONE),
+ 		    IOMMU_DM_AUT_CTRL_RD_UNAVAIL(SUN50I_IOMMU_ACI_NONE, 0) |
+diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
+index 4b021a67bdfe4..f488c35d91308 100644
+--- a/drivers/irqchip/irq-armada-370-xp.c
++++ b/drivers/irqchip/irq-armada-370-xp.c
+@@ -566,6 +566,10 @@ static struct irq_chip armada_370_xp_irq_chip = {
+ static int armada_370_xp_mpic_irq_map(struct irq_domain *h,
+ 				      unsigned int virq, irq_hw_number_t hw)
+ {
++	/* IRQs 0 and 1 cannot be mapped, they are handled internally */
++	if (hw <= 1)
++		return -EINVAL;
++
+ 	armada_370_xp_irq_mask(irq_get_irq_data(virq));
+ 	if (!is_percpu_irq(hw))
+ 		writel(hw, per_cpu_int_base +
+diff --git a/drivers/irqchip/irq-gic-v2m.c b/drivers/irqchip/irq-gic-v2m.c
+index f2ff4387870d6..d83c2c85962c3 100644
+--- a/drivers/irqchip/irq-gic-v2m.c
++++ b/drivers/irqchip/irq-gic-v2m.c
+@@ -438,12 +438,12 @@ static int __init gicv2m_of_init(struct fwnode_handle *parent_handle,
+ 
+ 		ret = gicv2m_init_one(&child->fwnode, spi_start, nr_spis,
+ 				      &res, 0);
+-		if (ret) {
+-			of_node_put(child);
++		if (ret)
+ 			break;
+-		}
+ 	}
+ 
++	if (ret && child)
++		of_node_put(child);
+ 	if (!ret)
+ 		ret = gicv2m_allocate_domains(parent);
+ 	if (ret)
+diff --git a/drivers/irqchip/irq-renesas-rzg2l.c b/drivers/irqchip/irq-renesas-rzg2l.c
+index f6484bf15e0b8..5a4521cf3ec6e 100644
+--- a/drivers/irqchip/irq-renesas-rzg2l.c
++++ b/drivers/irqchip/irq-renesas-rzg2l.c
+@@ -162,8 +162,8 @@ static void rzg2l_tint_irq_endisable(struct irq_data *d, bool enable)
+ 
+ static void rzg2l_irqc_irq_disable(struct irq_data *d)
+ {
+-	rzg2l_tint_irq_endisable(d, false);
+ 	irq_chip_disable_parent(d);
++	rzg2l_tint_irq_endisable(d, false);
+ }
+ 
+ static void rzg2l_irqc_irq_enable(struct irq_data *d)
+diff --git a/drivers/irqchip/irq-riscv-aplic-main.c b/drivers/irqchip/irq-riscv-aplic-main.c
+index 774a0c97fdab6..362db30ed7755 100644
+--- a/drivers/irqchip/irq-riscv-aplic-main.c
++++ b/drivers/irqchip/irq-riscv-aplic-main.c
+@@ -175,9 +175,9 @@ static int aplic_probe(struct platform_device *pdev)
+ 
+ 	/* Map the MMIO registers */
+ 	regs = devm_platform_ioremap_resource(pdev, 0);
+-	if (!regs) {
++	if (IS_ERR(regs)) {
+ 		dev_err(dev, "failed map MMIO registers\n");
+-		return -ENOMEM;
++		return PTR_ERR(regs);
+ 	}
+ 
+ 	/*
+diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
+index 9e22f7e378f50..4d9ea718086d3 100644
+--- a/drivers/irqchip/irq-sifive-plic.c
++++ b/drivers/irqchip/irq-sifive-plic.c
+@@ -3,6 +3,7 @@
+  * Copyright (C) 2017 SiFive
+  * Copyright (C) 2018 Christoph Hellwig
+  */
++#define pr_fmt(fmt) "riscv-plic: " fmt
+ #include <linux/cpu.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
+@@ -63,7 +64,7 @@
+ #define PLIC_QUIRK_EDGE_INTERRUPT	0
+ 
+ struct plic_priv {
+-	struct device *dev;
++	struct fwnode_handle *fwnode;
+ 	struct cpumask lmask;
+ 	struct irq_domain *irqdomain;
+ 	void __iomem *regs;
+@@ -378,8 +379,8 @@ static void plic_handle_irq(struct irq_desc *desc)
+ 		int err = generic_handle_domain_irq(handler->priv->irqdomain,
+ 						    hwirq);
+ 		if (unlikely(err)) {
+-			dev_warn_ratelimited(handler->priv->dev,
+-					     "can't find mapping for hwirq %lu\n", hwirq);
++			pr_warn_ratelimited("%pfwP: can't find mapping for hwirq %lu\n",
++					    handler->priv->fwnode, hwirq);
+ 		}
+ 	}
+ 
+@@ -408,7 +409,8 @@ static int plic_starting_cpu(unsigned int cpu)
+ 		enable_percpu_irq(plic_parent_irq,
+ 				  irq_get_trigger_type(plic_parent_irq));
+ 	else
+-		dev_warn(handler->priv->dev, "cpu%d: parent irq not available\n", cpu);
++		pr_warn("%pfwP: cpu%d: parent irq not available\n",
++			handler->priv->fwnode, cpu);
+ 	plic_set_threshold(handler, PLIC_ENABLE_THRESHOLD);
+ 
+ 	return 0;
+@@ -424,38 +426,36 @@ static const struct of_device_id plic_match[] = {
+ 	{}
+ };
+ 
+-static int plic_parse_nr_irqs_and_contexts(struct platform_device *pdev,
++static int plic_parse_nr_irqs_and_contexts(struct fwnode_handle *fwnode,
+ 					   u32 *nr_irqs, u32 *nr_contexts)
+ {
+-	struct device *dev = &pdev->dev;
+ 	int rc;
+ 
+ 	/*
+ 	 * Currently, only OF fwnode is supported so extend this
+ 	 * function for ACPI support.
+ 	 */
+-	if (!is_of_node(dev->fwnode))
++	if (!is_of_node(fwnode))
+ 		return -EINVAL;
+ 
+-	rc = of_property_read_u32(to_of_node(dev->fwnode), "riscv,ndev", nr_irqs);
++	rc = of_property_read_u32(to_of_node(fwnode), "riscv,ndev", nr_irqs);
+ 	if (rc) {
+-		dev_err(dev, "riscv,ndev property not available\n");
++		pr_err("%pfwP: riscv,ndev property not available\n", fwnode);
+ 		return rc;
+ 	}
+ 
+-	*nr_contexts = of_irq_count(to_of_node(dev->fwnode));
++	*nr_contexts = of_irq_count(to_of_node(fwnode));
+ 	if (WARN_ON(!(*nr_contexts))) {
+-		dev_err(dev, "no PLIC context available\n");
++		pr_err("%pfwP: no PLIC context available\n", fwnode);
+ 		return -EINVAL;
+ 	}
+ 
+ 	return 0;
+ }
+ 
+-static int plic_parse_context_parent(struct platform_device *pdev, u32 context,
++static int plic_parse_context_parent(struct fwnode_handle *fwnode, u32 context,
+ 				     u32 *parent_hwirq, int *parent_cpu)
+ {
+-	struct device *dev = &pdev->dev;
+ 	struct of_phandle_args parent;
+ 	unsigned long hartid;
+ 	int rc;
+@@ -464,10 +464,10 @@ static int plic_parse_context_parent(struct platform_device *pdev, u32 context,
+ 	 * Currently, only OF fwnode is supported so extend this
+ 	 * function for ACPI support.
+ 	 */
+-	if (!is_of_node(dev->fwnode))
++	if (!is_of_node(fwnode))
+ 		return -EINVAL;
+ 
+-	rc = of_irq_parse_one(to_of_node(dev->fwnode), context, &parent);
++	rc = of_irq_parse_one(to_of_node(fwnode), context, &parent);
+ 	if (rc)
+ 		return rc;
+ 
+@@ -480,48 +480,55 @@ static int plic_parse_context_parent(struct platform_device *pdev, u32 context,
+ 	return 0;
+ }
+ 
+-static int plic_probe(struct platform_device *pdev)
++static int plic_probe(struct fwnode_handle *fwnode)
+ {
+ 	int error = 0, nr_contexts, nr_handlers = 0, cpu, i;
+-	struct device *dev = &pdev->dev;
+ 	unsigned long plic_quirks = 0;
+ 	struct plic_handler *handler;
+ 	u32 nr_irqs, parent_hwirq;
+ 	struct plic_priv *priv;
+ 	irq_hw_number_t hwirq;
++	void __iomem *regs;
+ 
+-	if (is_of_node(dev->fwnode)) {
++	if (is_of_node(fwnode)) {
+ 		const struct of_device_id *id;
+ 
+-		id = of_match_node(plic_match, to_of_node(dev->fwnode));
++		id = of_match_node(plic_match, to_of_node(fwnode));
+ 		if (id)
+ 			plic_quirks = (unsigned long)id->data;
++
++		regs = of_iomap(to_of_node(fwnode), 0);
++		if (!regs)
++			return -ENOMEM;
++	} else {
++		return -ENODEV;
+ 	}
+ 
+-	error = plic_parse_nr_irqs_and_contexts(pdev, &nr_irqs, &nr_contexts);
++	error = plic_parse_nr_irqs_and_contexts(fwnode, &nr_irqs, &nr_contexts);
+ 	if (error)
+-		return error;
++		goto fail_free_regs;
+ 
+-	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+-	if (!priv)
+-		return -ENOMEM;
++	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
++	if (!priv) {
++		error = -ENOMEM;
++		goto fail_free_regs;
++	}
+ 
+-	priv->dev = dev;
++	priv->fwnode = fwnode;
+ 	priv->plic_quirks = plic_quirks;
+ 	priv->nr_irqs = nr_irqs;
++	priv->regs = regs;
+ 
+-	priv->regs = devm_platform_ioremap_resource(pdev, 0);
+-	if (WARN_ON(!priv->regs))
+-		return -EIO;
+-
+-	priv->prio_save = devm_bitmap_zalloc(dev, nr_irqs, GFP_KERNEL);
+-	if (!priv->prio_save)
+-		return -ENOMEM;
++	priv->prio_save = bitmap_zalloc(nr_irqs, GFP_KERNEL);
++	if (!priv->prio_save) {
++		error = -ENOMEM;
++		goto fail_free_priv;
++	}
+ 
+ 	for (i = 0; i < nr_contexts; i++) {
+-		error = plic_parse_context_parent(pdev, i, &parent_hwirq, &cpu);
++		error = plic_parse_context_parent(fwnode, i, &parent_hwirq, &cpu);
+ 		if (error) {
+-			dev_warn(dev, "hwirq for context%d not found\n", i);
++			pr_warn("%pfwP: hwirq for context%d not found\n", fwnode, i);
+ 			continue;
+ 		}
+ 
+@@ -543,7 +550,7 @@ static int plic_probe(struct platform_device *pdev)
+ 		}
+ 
+ 		if (cpu < 0) {
+-			dev_warn(dev, "Invalid cpuid for context %d\n", i);
++			pr_warn("%pfwP: Invalid cpuid for context %d\n", fwnode, i);
+ 			continue;
+ 		}
+ 
+@@ -554,7 +561,7 @@ static int plic_probe(struct platform_device *pdev)
+ 		 */
+ 		handler = per_cpu_ptr(&plic_handlers, cpu);
+ 		if (handler->present) {
+-			dev_warn(dev, "handler already present for context %d.\n", i);
++			pr_warn("%pfwP: handler already present for context %d.\n", fwnode, i);
+ 			plic_set_threshold(handler, PLIC_DISABLE_THRESHOLD);
+ 			goto done;
+ 		}
+@@ -568,8 +575,8 @@ static int plic_probe(struct platform_device *pdev)
+ 			i * CONTEXT_ENABLE_SIZE;
+ 		handler->priv = priv;
+ 
+-		handler->enable_save = devm_kcalloc(dev, DIV_ROUND_UP(nr_irqs, 32),
+-						    sizeof(*handler->enable_save), GFP_KERNEL);
++		handler->enable_save = kcalloc(DIV_ROUND_UP(nr_irqs, 32),
++					       sizeof(*handler->enable_save), GFP_KERNEL);
+ 		if (!handler->enable_save)
+ 			goto fail_cleanup_contexts;
+ done:
+@@ -581,7 +588,7 @@ static int plic_probe(struct platform_device *pdev)
+ 		nr_handlers++;
+ 	}
+ 
+-	priv->irqdomain = irq_domain_add_linear(to_of_node(dev->fwnode), nr_irqs + 1,
++	priv->irqdomain = irq_domain_add_linear(to_of_node(fwnode), nr_irqs + 1,
+ 						&plic_irqdomain_ops, priv);
+ 	if (WARN_ON(!priv->irqdomain))
+ 		goto fail_cleanup_contexts;
+@@ -619,13 +626,13 @@ static int plic_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	dev_info(dev, "mapped %d interrupts with %d handlers for %d contexts.\n",
+-		 nr_irqs, nr_handlers, nr_contexts);
++	pr_info("%pfwP: mapped %d interrupts with %d handlers for %d contexts.\n",
++		fwnode, nr_irqs, nr_handlers, nr_contexts);
+ 	return 0;
+ 
+ fail_cleanup_contexts:
+ 	for (i = 0; i < nr_contexts; i++) {
+-		if (plic_parse_context_parent(pdev, i, &parent_hwirq, &cpu))
++		if (plic_parse_context_parent(fwnode, i, &parent_hwirq, &cpu))
+ 			continue;
+ 		if (parent_hwirq != RV_IRQ_EXT || cpu < 0)
+ 			continue;
+@@ -634,17 +641,37 @@ static int plic_probe(struct platform_device *pdev)
+ 		handler->present = false;
+ 		handler->hart_base = NULL;
+ 		handler->enable_base = NULL;
++		kfree(handler->enable_save);
+ 		handler->enable_save = NULL;
+ 		handler->priv = NULL;
+ 	}
+-	return -ENOMEM;
++	bitmap_free(priv->prio_save);
++fail_free_priv:
++	kfree(priv);
++fail_free_regs:
++	iounmap(regs);
++	return error;
++}
++
++static int plic_platform_probe(struct platform_device *pdev)
++{
++	return plic_probe(pdev->dev.fwnode);
+ }
+ 
+ static struct platform_driver plic_driver = {
+ 	.driver = {
+ 		.name		= "riscv-plic",
+ 		.of_match_table	= plic_match,
++		.suppress_bind_attrs = true,
+ 	},
+-	.probe = plic_probe,
++	.probe = plic_platform_probe,
+ };
+ builtin_platform_driver(plic_driver);
++
++static int __init plic_early_probe(struct device_node *node,
++				   struct device_node *parent)
++{
++	return plic_probe(&node->fwnode);
++}
++
++IRQCHIP_DECLARE(riscv, "allwinner,sun20i-d1-plic", plic_early_probe);
+diff --git a/drivers/leds/leds-spi-byte.c b/drivers/leds/leds-spi-byte.c
+index 96296db5f410d..b04cf502e6035 100644
+--- a/drivers/leds/leds-spi-byte.c
++++ b/drivers/leds/leds-spi-byte.c
+@@ -91,7 +91,6 @@ static int spi_byte_probe(struct spi_device *spi)
+ 		dev_err(dev, "Device must have exactly one LED sub-node.");
+ 		return -EINVAL;
+ 	}
+-	child = of_get_next_available_child(dev_of_node(dev), NULL);
+ 
+ 	led = devm_kzalloc(dev, sizeof(*led), GFP_KERNEL);
+ 	if (!led)
+@@ -104,11 +103,13 @@ static int spi_byte_probe(struct spi_device *spi)
+ 	led->ldev.max_brightness = led->cdef->max_value - led->cdef->off_value;
+ 	led->ldev.brightness_set_blocking = spi_byte_brightness_set_blocking;
+ 
++	child = of_get_next_available_child(dev_of_node(dev), NULL);
+ 	state = of_get_property(child, "default-state", NULL);
+ 	if (state) {
+ 		if (!strcmp(state, "on")) {
+ 			led->ldev.brightness = led->ldev.max_brightness;
+ 		} else if (strcmp(state, "off")) {
++			of_node_put(child);
+ 			/* all other cases except "off" */
+ 			dev_err(dev, "default-state can only be 'on' or 'off'");
+ 			return -EINVAL;
+@@ -123,9 +124,12 @@ static int spi_byte_probe(struct spi_device *spi)
+ 
+ 	ret = devm_led_classdev_register_ext(&spi->dev, &led->ldev, &init_data);
+ 	if (ret) {
++		of_node_put(child);
+ 		mutex_destroy(&led->mutex);
+ 		return ret;
+ 	}
++
++	of_node_put(child);
+ 	spi_set_drvdata(spi, led);
+ 
+ 	return 0;
+diff --git a/drivers/md/dm-init.c b/drivers/md/dm-init.c
+index 2a71bcdba92d1..b37bbe7625003 100644
+--- a/drivers/md/dm-init.c
++++ b/drivers/md/dm-init.c
+@@ -212,8 +212,10 @@ static char __init *dm_parse_device_entry(struct dm_device *dev, char *str)
+ 	strscpy(dev->dmi.uuid, field[1], sizeof(dev->dmi.uuid));
+ 	/* minor */
+ 	if (strlen(field[2])) {
+-		if (kstrtoull(field[2], 0, &dev->dmi.dev))
++		if (kstrtoull(field[2], 0, &dev->dmi.dev) ||
++		    dev->dmi.dev >= (1 << MINORBITS))
+ 			return ERR_PTR(-EINVAL);
++		dev->dmi.dev = huge_encode_dev((dev_t)dev->dmi.dev);
+ 		dev->dmi.flags |= DM_PERSISTENT_DEV_FLAG;
+ 	}
+ 	/* flags */
+diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
+index 1923615f0eea7..c90a28fa8891f 100644
+--- a/drivers/media/platform/qcom/camss/camss.c
++++ b/drivers/media/platform/qcom/camss/camss.c
+@@ -1406,8 +1406,11 @@ static int camss_of_parse_endpoint_node(struct device *dev,
+ 	struct v4l2_mbus_config_mipi_csi2 *mipi_csi2;
+ 	struct v4l2_fwnode_endpoint vep = { { 0 } };
+ 	unsigned int i;
++	int ret;
+ 
+-	v4l2_fwnode_endpoint_parse(of_fwnode_handle(node), &vep);
++	ret = v4l2_fwnode_endpoint_parse(of_fwnode_handle(node), &vep);
++	if (ret)
++		return ret;
+ 
+ 	csd->interface.csiphy_id = vep.base.port;
+ 
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+index 2804975fe2787..afa0dc5bcdae1 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+@@ -106,8 +106,9 @@ static int vid_cap_queue_setup(struct vb2_queue *vq,
+ 		if (*nplanes != buffers)
+ 			return -EINVAL;
+ 		for (p = 0; p < buffers; p++) {
+-			if (sizes[p] < tpg_g_line_width(&dev->tpg, p) * h +
+-						dev->fmt_cap->data_offset[p])
++			if (sizes[p] < tpg_g_line_width(&dev->tpg, p) * h /
++					dev->fmt_cap->vdownsampling[p] +
++					dev->fmt_cap->data_offset[p])
+ 				return -EINVAL;
+ 		}
+ 	} else {
+@@ -1553,8 +1554,10 @@ int vidioc_s_edid(struct file *file, void *_fh,
+ 		return -EINVAL;
+ 	if (edid->blocks == 0) {
+ 		dev->edid_blocks = 0;
+-		v4l2_ctrl_s_ctrl(dev->ctrl_tx_edid_present, 0);
+-		v4l2_ctrl_s_ctrl(dev->ctrl_tx_hotplug, 0);
++		if (dev->num_outputs) {
++			v4l2_ctrl_s_ctrl(dev->ctrl_tx_edid_present, 0);
++			v4l2_ctrl_s_ctrl(dev->ctrl_tx_hotplug, 0);
++		}
+ 		phys_addr = CEC_PHYS_ADDR_INVALID;
+ 		goto set_phys_addr;
+ 	}
+@@ -1578,8 +1581,10 @@ int vidioc_s_edid(struct file *file, void *_fh,
+ 			display_present |=
+ 				dev->display_present[i] << j++;
+ 
+-	v4l2_ctrl_s_ctrl(dev->ctrl_tx_edid_present, display_present);
+-	v4l2_ctrl_s_ctrl(dev->ctrl_tx_hotplug, display_present);
++	if (dev->num_outputs) {
++		v4l2_ctrl_s_ctrl(dev->ctrl_tx_edid_present, display_present);
++		v4l2_ctrl_s_ctrl(dev->ctrl_tx_hotplug, display_present);
++	}
+ 
+ set_phys_addr:
+ 	/* TODO: a proper hotplug detect cycle should be emulated here */
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-out.c b/drivers/media/test-drivers/vivid/vivid-vid-out.c
+index 1653b2988f7e3..7a0f4c61ac807 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-out.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-out.c
+@@ -63,14 +63,16 @@ static int vid_out_queue_setup(struct vb2_queue *vq,
+ 		if (sizes[0] < size)
+ 			return -EINVAL;
+ 		for (p = 1; p < planes; p++) {
+-			if (sizes[p] < dev->bytesperline_out[p] * h +
+-				       vfmt->data_offset[p])
++			if (sizes[p] < dev->bytesperline_out[p] * h /
++					vfmt->vdownsampling[p] +
++					vfmt->data_offset[p])
+ 				return -EINVAL;
+ 		}
+ 	} else {
+ 		for (p = 0; p < planes; p++)
+-			sizes[p] = p ? dev->bytesperline_out[p] * h +
+-				       vfmt->data_offset[p] : size;
++			sizes[p] = p ? dev->bytesperline_out[p] * h /
++					vfmt->vdownsampling[p] +
++					vfmt->data_offset[p] : size;
+ 	}
+ 
+ 	*nplanes = planes;
+@@ -124,7 +126,7 @@ static int vid_out_buf_prepare(struct vb2_buffer *vb)
+ 
+ 	for (p = 0; p < planes; p++) {
+ 		if (p)
+-			size = dev->bytesperline_out[p] * h;
++			size = dev->bytesperline_out[p] * h / vfmt->vdownsampling[p];
+ 		size += vb->planes[p].data_offset;
+ 
+ 		if (vb2_get_plane_payload(vb, p) < size) {
+@@ -331,8 +333,8 @@ int vivid_g_fmt_vid_out(struct file *file, void *priv,
+ 	for (p = 0; p < mp->num_planes; p++) {
+ 		mp->plane_fmt[p].bytesperline = dev->bytesperline_out[p];
+ 		mp->plane_fmt[p].sizeimage =
+-			mp->plane_fmt[p].bytesperline * mp->height +
+-			fmt->data_offset[p];
++			mp->plane_fmt[p].bytesperline * mp->height /
++			fmt->vdownsampling[p] + fmt->data_offset[p];
+ 	}
+ 	for (p = fmt->buffers; p < fmt->planes; p++) {
+ 		unsigned stride = dev->bytesperline_out[p];
+diff --git a/drivers/media/usb/b2c2/flexcop-usb.c b/drivers/media/usb/b2c2/flexcop-usb.c
+index 90f1aea99dac0..8033622543f28 100644
+--- a/drivers/media/usb/b2c2/flexcop-usb.c
++++ b/drivers/media/usb/b2c2/flexcop-usb.c
+@@ -179,7 +179,7 @@ static int flexcop_usb_memory_req(struct flexcop_usb *fc_usb,
+ 		flexcop_usb_request_t req, flexcop_usb_mem_page_t page_start,
+ 		u32 addr, int extended, u8 *buf, u32 len)
+ {
+-	int i, ret = 0;
++	int ret = 0;
+ 	u16 wMax;
+ 	u32 pagechunk = 0;
+ 
+@@ -196,7 +196,7 @@ static int flexcop_usb_memory_req(struct flexcop_usb *fc_usb,
+ 	default:
+ 		return -EINVAL;
+ 	}
+-	for (i = 0; i < len;) {
++	while (len) {
+ 		pagechunk = min(wMax, bytes_left_to_read_on_page(addr, len));
+ 		deb_info("%x\n",
+ 			(addr & V8_MEMORY_PAGE_MASK) |
+@@ -206,11 +206,12 @@ static int flexcop_usb_memory_req(struct flexcop_usb *fc_usb,
+ 			page_start + (addr / V8_MEMORY_PAGE_SIZE),
+ 			(addr & V8_MEMORY_PAGE_MASK) |
+ 				(V8_MEMORY_EXTENDED*extended),
+-			&buf[i], pagechunk);
++			buf, pagechunk);
+ 
+ 		if (ret < 0)
+ 			return ret;
+ 		addr += pagechunk;
++		buf += pagechunk;
+ 		len -= pagechunk;
+ 	}
+ 	return 0;
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 5680856c0fb82..f525971601e4d 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1912,7 +1912,8 @@ static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp)
+ 				      &args[0]);
+ 	if (err) {
+ 		dev_err(dev, "mmap error (len 0x%08llx)\n", buf->size);
+-		goto err_invoke;
++		fastrpc_buf_free(buf);
++		return err;
+ 	}
+ 
+ 	/* update the buffer to be able to deallocate the memory on the DSP */
+@@ -1950,8 +1951,6 @@ static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp)
+ 
+ err_assign:
+ 	fastrpc_req_munmap_impl(fl, buf);
+-err_invoke:
+-	fastrpc_buf_free(buf);
+ 
+ 	return err;
+ }
+diff --git a/drivers/misc/vmw_vmci/vmci_resource.c b/drivers/misc/vmw_vmci/vmci_resource.c
+index 692daa9eff341..19c9d2cdd277b 100644
+--- a/drivers/misc/vmw_vmci/vmci_resource.c
++++ b/drivers/misc/vmw_vmci/vmci_resource.c
+@@ -144,7 +144,8 @@ void vmci_resource_remove(struct vmci_resource *resource)
+ 	spin_lock(&vmci_resource_table.lock);
+ 
+ 	hlist_for_each_entry(r, &vmci_resource_table.entries[idx], node) {
+-		if (vmci_handle_is_equal(r->handle, resource->handle)) {
++		if (vmci_handle_is_equal(r->handle, resource->handle) &&
++		    resource->type == r->type) {
+ 			hlist_del_init_rcu(&r->node);
+ 			break;
+ 		}
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index cca71867bc4ad..92905fc46436d 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -15,6 +15,19 @@
+ 
+ #include "card.h"
+ 
++static const struct mmc_fixup __maybe_unused mmc_sd_fixups[] = {
++	/*
++	 * Kingston Canvas Go! Plus microSD cards never finish SD cache flush.
++	 * This has so far only been observed on cards from 11/2019, while new
++	 * cards from 2023/05 do not exhibit this behavior.
++	 */
++	_FIXUP_EXT("SD64G", CID_MANFID_KINGSTON_SD, 0x5449, 2019, 11,
++		   0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd,
++		   MMC_QUIRK_BROKEN_SD_CACHE, EXT_CSD_REV_ANY),
++
++	END_FIXUP
++};
++
+ static const struct mmc_fixup __maybe_unused mmc_blk_fixups[] = {
+ #define INAND_CMD38_ARG_EXT_CSD  113
+ #define INAND_CMD38_ARG_ERASE    0x00
+@@ -53,15 +66,6 @@ static const struct mmc_fixup __maybe_unused mmc_blk_fixups[] = {
+ 	MMC_FIXUP("MMC32G", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc,
+ 		  MMC_QUIRK_BLK_NO_CMD23),
+ 
+-	/*
+-	 * Kingston Canvas Go! Plus microSD cards never finish SD cache flush.
+-	 * This has so far only been observed on cards from 11/2019, while new
+-	 * cards from 2023/05 do not exhibit this behavior.
+-	 */
+-	_FIXUP_EXT("SD64G", CID_MANFID_KINGSTON_SD, 0x5449, 2019, 11,
+-		   0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd,
+-		   MMC_QUIRK_BROKEN_SD_CACHE, EXT_CSD_REV_ANY),
+-
+ 	/*
+ 	 * Some SD cards lockup while using CMD23 multiblock transfers.
+ 	 */
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 1c8148cdda505..ee37ad14e79ee 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -26,6 +26,7 @@
+ #include "host.h"
+ #include "bus.h"
+ #include "mmc_ops.h"
++#include "quirks.h"
+ #include "sd.h"
+ #include "sd_ops.h"
+ 
+@@ -1475,6 +1476,9 @@ static int mmc_sd_init_card(struct mmc_host *host, u32 ocr,
+ 			goto free_card;
+ 	}
+ 
++	/* Apply quirks prior to card setup */
++	mmc_fixup_device(card, mmc_sd_fixups);
++
+ 	err = mmc_sd_setup_card(host, card, oldcard != NULL);
+ 	if (err)
+ 		goto free_card;
+diff --git a/drivers/mmc/host/cqhci-core.c b/drivers/mmc/host/cqhci-core.c
+index c14d7251d0bbe..a02da26a1efd1 100644
+--- a/drivers/mmc/host/cqhci-core.c
++++ b/drivers/mmc/host/cqhci-core.c
+@@ -617,7 +617,7 @@ static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ 		cqhci_writel(cq_host, 0, CQHCI_CTL);
+ 		mmc->cqe_on = true;
+ 		pr_debug("%s: cqhci: CQE on\n", mmc_hostname(mmc));
+-		if (cqhci_readl(cq_host, CQHCI_CTL) && CQHCI_HALT) {
++		if (cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT) {
+ 			pr_err("%s: cqhci: CQE failed to exit halt state\n",
+ 			       mmc_hostname(mmc));
+ 		}
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index a4f813ea177a8..c27e9ab9ff821 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -2951,8 +2951,8 @@ static int dw_mci_init_slot(struct dw_mci *host)
+ 	if (host->use_dma == TRANS_MODE_IDMAC) {
+ 		mmc->max_segs = host->ring_size;
+ 		mmc->max_blk_size = 65535;
+-		mmc->max_seg_size = 0x1000;
+-		mmc->max_req_size = mmc->max_seg_size * host->ring_size;
++		mmc->max_req_size = DW_MCI_DESC_DATA_LENGTH * host->ring_size;
++		mmc->max_seg_size = mmc->max_req_size;
+ 		mmc->max_blk_count = mmc->max_req_size / 512;
+ 	} else if (host->use_dma == TRANS_MODE_EDMAC) {
+ 		mmc->max_segs = 64;
+diff --git a/drivers/mmc/host/sdhci-of-aspeed.c b/drivers/mmc/host/sdhci-of-aspeed.c
+index 430c1f90037b5..37240895ffaaf 100644
+--- a/drivers/mmc/host/sdhci-of-aspeed.c
++++ b/drivers/mmc/host/sdhci-of-aspeed.c
+@@ -510,6 +510,7 @@ static const struct of_device_id aspeed_sdhci_of_match[] = {
+ 	{ .compatible = "aspeed,ast2600-sdhci", .data = &ast2600_sdhci_pdata, },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, aspeed_sdhci_of_match);
+ 
+ static struct platform_driver aspeed_sdhci_driver = {
+ 	.driver		= {
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index d5c56ca91b771..7aca0544fb29c 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -83,7 +83,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 
+ 		if (skb_copy_bits(skb, BAREUDP_BASE_HLEN, &ipversion,
+ 				  sizeof(ipversion))) {
+-			bareudp->dev->stats.rx_dropped++;
++			DEV_STATS_INC(bareudp->dev, rx_dropped);
+ 			goto drop;
+ 		}
+ 		ipversion >>= 4;
+@@ -93,7 +93,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 		} else if (ipversion == 6 && bareudp->multi_proto_mode) {
+ 			proto = htons(ETH_P_IPV6);
+ 		} else {
+-			bareudp->dev->stats.rx_dropped++;
++			DEV_STATS_INC(bareudp->dev, rx_dropped);
+ 			goto drop;
+ 		}
+ 	} else if (bareudp->ethertype == htons(ETH_P_MPLS_UC)) {
+@@ -107,7 +107,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 				   ipv4_is_multicast(tunnel_hdr->daddr)) {
+ 				proto = htons(ETH_P_MPLS_MC);
+ 			} else {
+-				bareudp->dev->stats.rx_dropped++;
++				DEV_STATS_INC(bareudp->dev, rx_dropped);
+ 				goto drop;
+ 			}
+ 		} else {
+@@ -123,7 +123,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 				   (addr_type & IPV6_ADDR_MULTICAST)) {
+ 				proto = htons(ETH_P_MPLS_MC);
+ 			} else {
+-				bareudp->dev->stats.rx_dropped++;
++				DEV_STATS_INC(bareudp->dev, rx_dropped);
+ 				goto drop;
+ 			}
+ 		}
+@@ -135,7 +135,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 				 proto,
+ 				 !net_eq(bareudp->net,
+ 				 dev_net(bareudp->dev)))) {
+-		bareudp->dev->stats.rx_dropped++;
++		DEV_STATS_INC(bareudp->dev, rx_dropped);
+ 		goto drop;
+ 	}
+ 
+@@ -143,7 +143,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 
+ 	tun_dst = udp_tun_rx_dst(skb, family, key, 0, 0);
+ 	if (!tun_dst) {
+-		bareudp->dev->stats.rx_dropped++;
++		DEV_STATS_INC(bareudp->dev, rx_dropped);
+ 		goto drop;
+ 	}
+ 	skb_dst_set(skb, &tun_dst->dst);
+@@ -169,8 +169,8 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 						     &((struct ipv6hdr *)oiph)->saddr);
+ 		}
+ 		if (err > 1) {
+-			++bareudp->dev->stats.rx_frame_errors;
+-			++bareudp->dev->stats.rx_errors;
++			DEV_STATS_INC(bareudp->dev, rx_frame_errors);
++			DEV_STATS_INC(bareudp->dev, rx_errors);
+ 			goto drop;
+ 		}
+ 	}
+@@ -467,11 +467,11 @@ static netdev_tx_t bareudp_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	dev_kfree_skb(skb);
+ 
+ 	if (err == -ELOOP)
+-		dev->stats.collisions++;
++		DEV_STATS_INC(dev, collisions);
+ 	else if (err == -ENETUNREACH)
+-		dev->stats.tx_carrier_errors++;
++		DEV_STATS_INC(dev, tx_carrier_errors);
+ 
+-	dev->stats.tx_errors++;
++	DEV_STATS_INC(dev, tx_errors);
+ 	return NETDEV_TX_OK;
+ }
+ 
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index 7b5028b67cd5c..ab15a2ae8a202 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -1640,23 +1640,15 @@ static int kvaser_pciefd_read_buffer(struct kvaser_pciefd *pcie, int dma_buf)
+ 	return res;
+ }
+ 
+-static void kvaser_pciefd_receive_irq(struct kvaser_pciefd *pcie)
++static u32 kvaser_pciefd_receive_irq(struct kvaser_pciefd *pcie)
+ {
+ 	u32 irq = ioread32(KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_IRQ_REG);
+ 
+-	if (irq & KVASER_PCIEFD_SRB_IRQ_DPD0) {
++	if (irq & KVASER_PCIEFD_SRB_IRQ_DPD0)
+ 		kvaser_pciefd_read_buffer(pcie, 0);
+-		/* Reset DMA buffer 0 */
+-		iowrite32(KVASER_PCIEFD_SRB_CMD_RDB0,
+-			  KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG);
+-	}
+ 
+-	if (irq & KVASER_PCIEFD_SRB_IRQ_DPD1) {
++	if (irq & KVASER_PCIEFD_SRB_IRQ_DPD1)
+ 		kvaser_pciefd_read_buffer(pcie, 1);
+-		/* Reset DMA buffer 1 */
+-		iowrite32(KVASER_PCIEFD_SRB_CMD_RDB1,
+-			  KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG);
+-	}
+ 
+ 	if (irq & KVASER_PCIEFD_SRB_IRQ_DOF0 ||
+ 	    irq & KVASER_PCIEFD_SRB_IRQ_DOF1 ||
+@@ -1665,6 +1657,7 @@ static void kvaser_pciefd_receive_irq(struct kvaser_pciefd *pcie)
+ 		dev_err(&pcie->pci->dev, "DMA IRQ error 0x%08X\n", irq);
+ 
+ 	iowrite32(irq, KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_IRQ_REG);
++	return irq;
+ }
+ 
+ static void kvaser_pciefd_transmit_irq(struct kvaser_pciefd_can *can)
+@@ -1691,27 +1684,31 @@ static irqreturn_t kvaser_pciefd_irq_handler(int irq, void *dev)
+ {
+ 	struct kvaser_pciefd *pcie = (struct kvaser_pciefd *)dev;
+ 	const struct kvaser_pciefd_irq_mask *irq_mask = pcie->driver_data->irq_mask;
+-	u32 board_irq = ioread32(KVASER_PCIEFD_PCI_IRQ_ADDR(pcie));
++	u32 pci_irq = ioread32(KVASER_PCIEFD_PCI_IRQ_ADDR(pcie));
++	u32 srb_irq = 0;
++	u32 srb_release = 0;
+ 	int i;
+ 
+-	if (!(board_irq & irq_mask->all))
++	if (!(pci_irq & irq_mask->all))
+ 		return IRQ_NONE;
+ 
+-	if (board_irq & irq_mask->kcan_rx0)
+-		kvaser_pciefd_receive_irq(pcie);
++	if (pci_irq & irq_mask->kcan_rx0)
++		srb_irq = kvaser_pciefd_receive_irq(pcie);
+ 
+ 	for (i = 0; i < pcie->nr_channels; i++) {
+-		if (!pcie->can[i]) {
+-			dev_err(&pcie->pci->dev,
+-				"IRQ mask points to unallocated controller\n");
+-			break;
+-		}
+-
+-		/* Check that mask matches channel (i) IRQ mask */
+-		if (board_irq & irq_mask->kcan_tx[i])
++		if (pci_irq & irq_mask->kcan_tx[i])
+ 			kvaser_pciefd_transmit_irq(pcie->can[i]);
+ 	}
+ 
++	if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD0)
++		srb_release |= KVASER_PCIEFD_SRB_CMD_RDB0;
++
++	if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD1)
++		srb_release |= KVASER_PCIEFD_SRB_CMD_RDB1;
++
++	if (srb_release)
++		iowrite32(srb_release, KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 14b231c4d7ecf..e4f0a382c2165 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -449,11 +449,10 @@ static inline void m_can_disable_all_interrupts(struct m_can_classdev *cdev)
+ {
+ 	m_can_coalescing_disable(cdev);
+ 	m_can_write(cdev, M_CAN_ILE, 0x0);
+-	cdev->active_interrupts = 0x0;
+ 
+ 	if (!cdev->net->irq) {
+ 		dev_dbg(cdev->dev, "Stop hrtimer\n");
+-		hrtimer_cancel(&cdev->hrtimer);
++		hrtimer_try_to_cancel(&cdev->hrtimer);
+ 	}
+ }
+ 
+@@ -1003,22 +1002,6 @@ static int m_can_rx_handler(struct net_device *dev, int quota, u32 irqstatus)
+ 	return work_done;
+ }
+ 
+-static int m_can_rx_peripheral(struct net_device *dev, u32 irqstatus)
+-{
+-	struct m_can_classdev *cdev = netdev_priv(dev);
+-	int work_done;
+-
+-	work_done = m_can_rx_handler(dev, NAPI_POLL_WEIGHT, irqstatus);
+-
+-	/* Don't re-enable interrupts if the driver had a fatal error
+-	 * (e.g., FIFO read failure).
+-	 */
+-	if (work_done < 0)
+-		m_can_disable_all_interrupts(cdev);
+-
+-	return work_done;
+-}
+-
+ static int m_can_poll(struct napi_struct *napi, int quota)
+ {
+ 	struct net_device *dev = napi->dev;
+@@ -1183,16 +1166,18 @@ static void m_can_coalescing_update(struct m_can_classdev *cdev, u32 ir)
+ 			      HRTIMER_MODE_REL);
+ }
+ 
+-static irqreturn_t m_can_isr(int irq, void *dev_id)
++/* This interrupt handler is called either from the interrupt thread or a
++ * hrtimer. One implication is that cancelling a timer in a blocking way is
++ * not possible.
++ */
++static int m_can_interrupt_handler(struct m_can_classdev *cdev)
+ {
+-	struct net_device *dev = (struct net_device *)dev_id;
+-	struct m_can_classdev *cdev = netdev_priv(dev);
++	struct net_device *dev = cdev->net;
+ 	u32 ir;
++	int ret;
+ 
+-	if (pm_runtime_suspended(cdev->dev)) {
+-		m_can_coalescing_disable(cdev);
++	if (pm_runtime_suspended(cdev->dev))
+ 		return IRQ_NONE;
+-	}
+ 
+ 	ir = m_can_read(cdev, M_CAN_IR);
+ 	m_can_coalescing_update(cdev, ir);
+@@ -1216,11 +1201,9 @@ static irqreturn_t m_can_isr(int irq, void *dev_id)
+ 			m_can_disable_all_interrupts(cdev);
+ 			napi_schedule(&cdev->napi);
+ 		} else {
+-			int pkts;
+-
+-			pkts = m_can_rx_peripheral(dev, ir);
+-			if (pkts < 0)
+-				goto out_fail;
++			ret = m_can_rx_handler(dev, NAPI_POLL_WEIGHT, ir);
++			if (ret < 0)
++				return ret;
+ 		}
+ 	}
+ 
+@@ -1238,8 +1221,9 @@ static irqreturn_t m_can_isr(int irq, void *dev_id)
+ 	} else  {
+ 		if (ir & (IR_TEFN | IR_TEFW)) {
+ 			/* New TX FIFO Element arrived */
+-			if (m_can_echo_tx_event(dev) != 0)
+-				goto out_fail;
++			ret = m_can_echo_tx_event(dev);
++			if (ret != 0)
++				return ret;
+ 		}
+ 	}
+ 
+@@ -1247,16 +1231,31 @@ static irqreturn_t m_can_isr(int irq, void *dev_id)
+ 		can_rx_offload_threaded_irq_finish(&cdev->offload);
+ 
+ 	return IRQ_HANDLED;
++}
+ 
+-out_fail:
+-	m_can_disable_all_interrupts(cdev);
+-	return IRQ_HANDLED;
++static irqreturn_t m_can_isr(int irq, void *dev_id)
++{
++	struct net_device *dev = (struct net_device *)dev_id;
++	struct m_can_classdev *cdev = netdev_priv(dev);
++	int ret;
++
++	ret =  m_can_interrupt_handler(cdev);
++	if (ret < 0) {
++		m_can_disable_all_interrupts(cdev);
++		return IRQ_HANDLED;
++	}
++
++	return ret;
+ }
+ 
+ static enum hrtimer_restart m_can_coalescing_timer(struct hrtimer *timer)
+ {
+ 	struct m_can_classdev *cdev = container_of(timer, struct m_can_classdev, hrtimer);
+ 
++	if (cdev->can.state == CAN_STATE_BUS_OFF ||
++	    cdev->can.state == CAN_STATE_STOPPED)
++		return HRTIMER_NORESTART;
++
+ 	irq_wake_thread(cdev->net->irq, cdev->net);
+ 
+ 	return HRTIMER_NORESTART;
+@@ -1506,6 +1505,7 @@ static int m_can_chip_config(struct net_device *dev)
+ 		else
+ 			interrupts &= ~(IR_ERR_LEC_31X);
+ 	}
++	cdev->active_interrupts = 0;
+ 	m_can_interrupt_enable(cdev, interrupts);
+ 
+ 	/* route all interrupts to INT0 */
+@@ -1948,8 +1948,17 @@ static enum hrtimer_restart hrtimer_callback(struct hrtimer *timer)
+ {
+ 	struct m_can_classdev *cdev = container_of(timer, struct
+ 						   m_can_classdev, hrtimer);
++	int ret;
+ 
+-	m_can_isr(0, cdev->net);
++	if (cdev->can.state == CAN_STATE_BUS_OFF ||
++	    cdev->can.state == CAN_STATE_STOPPED)
++		return HRTIMER_NORESTART;
++
++	ret = m_can_interrupt_handler(cdev);
++
++	/* On error or if napi is scheduled to read, stop the timer */
++	if (ret < 0 || napi_is_scheduled(&cdev->napi))
++		return HRTIMER_NORESTART;
+ 
+ 	hrtimer_forward_now(timer, ms_to_ktime(HRTIMER_POLL_INTERVAL_MS));
+ 
+@@ -2009,7 +2018,7 @@ static int m_can_open(struct net_device *dev)
+ 	/* start the m_can controller */
+ 	err = m_can_start(dev);
+ 	if (err)
+-		goto exit_irq_fail;
++		goto exit_start_fail;
+ 
+ 	if (!cdev->is_peripheral)
+ 		napi_enable(&cdev->napi);
+@@ -2018,6 +2027,9 @@ static int m_can_open(struct net_device *dev)
+ 
+ 	return 0;
+ 
++exit_start_fail:
++	if (cdev->is_peripheral || dev->irq)
++		free_irq(dev->irq, dev);
+ exit_irq_fail:
+ 	if (cdev->is_peripheral)
+ 		destroy_workqueue(cdev->tx_wq);
+@@ -2384,12 +2396,15 @@ int m_can_class_suspend(struct device *dev)
+ 		netif_device_detach(ndev);
+ 
+ 		/* leave the chip running with rx interrupt enabled if it is
+-		 * used as a wake-up source.
++		 * used as a wake-up source. Coalescing needs to be reset then,
++		 * the timer is cancelled here, interrupts are done in resume.
+ 		 */
+-		if (cdev->pm_wake_source)
++		if (cdev->pm_wake_source) {
++			hrtimer_cancel(&cdev->hrtimer);
+ 			m_can_write(cdev, M_CAN_IE, IR_RF0N);
+-		else
++		} else {
+ 			m_can_stop(ndev);
++		}
+ 
+ 		m_can_clk_stop(cdev);
+ 	}
+@@ -2419,6 +2434,13 @@ int m_can_class_resume(struct device *dev)
+ 			return ret;
+ 
+ 		if (cdev->pm_wake_source) {
++			/* Restore active interrupts but disable coalescing as
++			 * we may have missed important waterlevel interrupts
++			 * between suspend and resume. Timers are already
++			 * stopped in suspend. Here we enable all interrupts
++			 * again.
++			 */
++			cdev->active_interrupts |= IR_RF0N | IR_TEFN;
+ 			m_can_write(cdev, M_CAN_IE, cdev->active_interrupts);
+ 		} else {
+ 			ret  = m_can_start(ndev);
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index 79c4bab5f7246..8c56f85e87c1a 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -753,7 +753,7 @@ static int mcp251x_hw_wake(struct spi_device *spi)
+ 	int ret;
+ 
+ 	/* Force wakeup interrupt to wake device, but don't execute IST */
+-	disable_irq(spi->irq);
++	disable_irq_nosync(spi->irq);
+ 	mcp251x_write_2regs(spi, CANINTE, CANINTE_WAKIE, CANINTF_WAKIF);
+ 
+ 	/* Wait for oscillator startup timer after wake up */
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index bf1589aef1fc6..f1e6007b74cec 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -2,7 +2,7 @@
+ //
+ // mcp251xfd - Microchip MCP251xFD Family CAN controller driver
+ //
+-// Copyright (c) 2019, 2020, 2021 Pengutronix,
++// Copyright (c) 2019, 2020, 2021, 2023 Pengutronix,
+ //               Marc Kleine-Budde <kernel@pengutronix.de>
+ //
+ // Based on:
+@@ -867,18 +867,18 @@ static int mcp251xfd_get_berr_counter(const struct net_device *ndev,
+ 
+ static struct sk_buff *
+ mcp251xfd_alloc_can_err_skb(struct mcp251xfd_priv *priv,
+-			    struct can_frame **cf, u32 *timestamp)
++			    struct can_frame **cf, u32 *ts_raw)
+ {
+ 	struct sk_buff *skb;
+ 	int err;
+ 
+-	err = mcp251xfd_get_timestamp(priv, timestamp);
++	err = mcp251xfd_get_timestamp_raw(priv, ts_raw);
+ 	if (err)
+ 		return NULL;
+ 
+ 	skb = alloc_can_err_skb(priv->ndev, cf);
+ 	if (skb)
+-		mcp251xfd_skb_set_timestamp(priv, skb, *timestamp);
++		mcp251xfd_skb_set_timestamp_raw(priv, skb, *ts_raw);
+ 
+ 	return skb;
+ }
+@@ -889,7 +889,7 @@ static int mcp251xfd_handle_rxovif(struct mcp251xfd_priv *priv)
+ 	struct mcp251xfd_rx_ring *ring;
+ 	struct sk_buff *skb;
+ 	struct can_frame *cf;
+-	u32 timestamp, rxovif;
++	u32 ts_raw, rxovif;
+ 	int err, i;
+ 
+ 	stats->rx_over_errors++;
+@@ -924,14 +924,14 @@ static int mcp251xfd_handle_rxovif(struct mcp251xfd_priv *priv)
+ 			return err;
+ 	}
+ 
+-	skb = mcp251xfd_alloc_can_err_skb(priv, &cf, &timestamp);
++	skb = mcp251xfd_alloc_can_err_skb(priv, &cf, &ts_raw);
+ 	if (!skb)
+ 		return 0;
+ 
+ 	cf->can_id |= CAN_ERR_CRTL;
+ 	cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
+ 
+-	err = can_rx_offload_queue_timestamp(&priv->offload, skb, timestamp);
++	err = can_rx_offload_queue_timestamp(&priv->offload, skb, ts_raw);
+ 	if (err)
+ 		stats->rx_fifo_errors++;
+ 
+@@ -948,12 +948,12 @@ static int mcp251xfd_handle_txatif(struct mcp251xfd_priv *priv)
+ static int mcp251xfd_handle_ivmif(struct mcp251xfd_priv *priv)
+ {
+ 	struct net_device_stats *stats = &priv->ndev->stats;
+-	u32 bdiag1, timestamp;
++	u32 bdiag1, ts_raw;
+ 	struct sk_buff *skb;
+ 	struct can_frame *cf = NULL;
+ 	int err;
+ 
+-	err = mcp251xfd_get_timestamp(priv, &timestamp);
++	err = mcp251xfd_get_timestamp_raw(priv, &ts_raw);
+ 	if (err)
+ 		return err;
+ 
+@@ -1035,8 +1035,8 @@ static int mcp251xfd_handle_ivmif(struct mcp251xfd_priv *priv)
+ 	if (!cf)
+ 		return 0;
+ 
+-	mcp251xfd_skb_set_timestamp(priv, skb, timestamp);
+-	err = can_rx_offload_queue_timestamp(&priv->offload, skb, timestamp);
++	mcp251xfd_skb_set_timestamp_raw(priv, skb, ts_raw);
++	err = can_rx_offload_queue_timestamp(&priv->offload, skb, ts_raw);
+ 	if (err)
+ 		stats->rx_fifo_errors++;
+ 
+@@ -1049,7 +1049,7 @@ static int mcp251xfd_handle_cerrif(struct mcp251xfd_priv *priv)
+ 	struct sk_buff *skb;
+ 	struct can_frame *cf = NULL;
+ 	enum can_state new_state, rx_state, tx_state;
+-	u32 trec, timestamp;
++	u32 trec, ts_raw;
+ 	int err;
+ 
+ 	err = regmap_read(priv->map_reg, MCP251XFD_REG_TREC, &trec);
+@@ -1079,7 +1079,7 @@ static int mcp251xfd_handle_cerrif(struct mcp251xfd_priv *priv)
+ 	/* The skb allocation might fail, but can_change_state()
+ 	 * handles cf == NULL.
+ 	 */
+-	skb = mcp251xfd_alloc_can_err_skb(priv, &cf, &timestamp);
++	skb = mcp251xfd_alloc_can_err_skb(priv, &cf, &ts_raw);
+ 	can_change_state(priv->ndev, cf, tx_state, rx_state);
+ 
+ 	if (new_state == CAN_STATE_BUS_OFF) {
+@@ -1110,7 +1110,7 @@ static int mcp251xfd_handle_cerrif(struct mcp251xfd_priv *priv)
+ 		cf->data[7] = bec.rxerr;
+ 	}
+ 
+-	err = can_rx_offload_queue_timestamp(&priv->offload, skb, timestamp);
++	err = can_rx_offload_queue_timestamp(&priv->offload, skb, ts_raw);
+ 	if (err)
+ 		stats->rx_fifo_errors++;
+ 
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ram.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ram.c
+index 9e8e82cdba461..61b0d6fa52dd8 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ram.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ram.c
+@@ -97,7 +97,16 @@ void can_ram_get_layout(struct can_ram_layout *layout,
+ 	if (ring) {
+ 		u8 num_rx_coalesce = 0, num_tx_coalesce = 0;
+ 
+-		num_rx = can_ram_rounddown_pow_of_two(config, &config->rx, 0, ring->rx_pending);
++		/* If the ring parameters have been configured in
++		 * CAN-CC mode, but we are in CAN-FD mode now,
++		 * they might be too big. Use the default CAN-FD values
++		 * in this case.
++		 */
++		num_rx = ring->rx_pending;
++		if (num_rx > layout->max_rx)
++			num_rx = layout->default_rx;
++
++		num_rx = can_ram_rounddown_pow_of_two(config, &config->rx, 0, num_rx);
+ 
+ 		/* The ethtool doc says:
+ 		 * To disable coalescing, set usecs = 0 and max_frames = 1.
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
+index 4cb79a4f24612..f72582d4d3e8e 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
+@@ -206,6 +206,7 @@ mcp251xfd_ring_init_rx(struct mcp251xfd_priv *priv, u16 *base, u8 *fifo_nr)
+ 	int i, j;
+ 
+ 	mcp251xfd_for_each_rx_ring(priv, rx_ring, i) {
++		rx_ring->last_valid = timecounter_read(&priv->tc);
+ 		rx_ring->head = 0;
+ 		rx_ring->tail = 0;
+ 		rx_ring->base = *base;
+@@ -468,11 +469,25 @@ int mcp251xfd_ring_alloc(struct mcp251xfd_priv *priv)
+ 
+ 	/* switching from CAN-2.0 to CAN-FD mode or vice versa */
+ 	if (fd_mode != test_bit(MCP251XFD_FLAGS_FD_MODE, priv->flags)) {
++		const struct ethtool_ringparam ring = {
++			.rx_pending = priv->rx_obj_num,
++			.tx_pending = priv->tx->obj_num,
++		};
++		const struct ethtool_coalesce ec = {
++			.rx_coalesce_usecs_irq = priv->rx_coalesce_usecs_irq,
++			.rx_max_coalesced_frames_irq = priv->rx_obj_num_coalesce_irq,
++			.tx_coalesce_usecs_irq = priv->tx_coalesce_usecs_irq,
++			.tx_max_coalesced_frames_irq = priv->tx_obj_num_coalesce_irq,
++		};
+ 		struct can_ram_layout layout;
+ 
+-		can_ram_get_layout(&layout, &mcp251xfd_ram_config, NULL, NULL, fd_mode);
+-		priv->rx_obj_num = layout.default_rx;
+-		tx_ring->obj_num = layout.default_tx;
++		can_ram_get_layout(&layout, &mcp251xfd_ram_config, &ring, &ec, fd_mode);
++
++		priv->rx_obj_num = layout.cur_rx;
++		priv->rx_obj_num_coalesce_irq = layout.rx_coalesce;
++
++		tx_ring->obj_num = layout.cur_tx;
++		priv->tx_obj_num_coalesce_irq = layout.tx_coalesce;
+ 	}
+ 
+ 	if (fd_mode) {
+@@ -509,6 +524,8 @@ int mcp251xfd_ring_alloc(struct mcp251xfd_priv *priv)
+ 		}
+ 
+ 		rx_ring->obj_num = rx_obj_num;
++		rx_ring->obj_num_shift_to_u8 = BITS_PER_TYPE(rx_ring->obj_num_shift_to_u8) -
++			ilog2(rx_obj_num);
+ 		rx_ring->obj_size = rx_obj_size;
+ 		priv->rx[i] = rx_ring;
+ 	}
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-rx.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-rx.c
+index ced8d9c81f8c6..fe897f3e4c12a 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-rx.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-rx.c
+@@ -2,7 +2,7 @@
+ //
+ // mcp251xfd - Microchip MCP251xFD Family CAN controller driver
+ //
+-// Copyright (c) 2019, 2020, 2021 Pengutronix,
++// Copyright (c) 2019, 2020, 2021, 2023 Pengutronix,
+ //               Marc Kleine-Budde <kernel@pengutronix.de>
+ //
+ // Based on:
+@@ -16,23 +16,14 @@
+ 
+ #include "mcp251xfd.h"
+ 
+-static inline int
+-mcp251xfd_rx_head_get_from_chip(const struct mcp251xfd_priv *priv,
+-				const struct mcp251xfd_rx_ring *ring,
+-				u8 *rx_head, bool *fifo_empty)
++static inline bool mcp251xfd_rx_fifo_sta_empty(const u32 fifo_sta)
+ {
+-	u32 fifo_sta;
+-	int err;
+-
+-	err = regmap_read(priv->map_reg, MCP251XFD_REG_FIFOSTA(ring->fifo_nr),
+-			  &fifo_sta);
+-	if (err)
+-		return err;
+-
+-	*rx_head = FIELD_GET(MCP251XFD_REG_FIFOSTA_FIFOCI_MASK, fifo_sta);
+-	*fifo_empty = !(fifo_sta & MCP251XFD_REG_FIFOSTA_TFNRFNIF);
++	return !(fifo_sta & MCP251XFD_REG_FIFOSTA_TFNRFNIF);
++}
+ 
+-	return 0;
++static inline bool mcp251xfd_rx_fifo_sta_full(const u32 fifo_sta)
++{
++	return fifo_sta & MCP251XFD_REG_FIFOSTA_TFERFFIF;
+ }
+ 
+ static inline int
+@@ -80,29 +71,49 @@ mcp251xfd_check_rx_tail(const struct mcp251xfd_priv *priv,
+ }
+ 
+ static int
+-mcp251xfd_rx_ring_update(const struct mcp251xfd_priv *priv,
+-			 struct mcp251xfd_rx_ring *ring)
++mcp251xfd_get_rx_len(const struct mcp251xfd_priv *priv,
++		     const struct mcp251xfd_rx_ring *ring,
++		     u8 *len_p)
+ {
+-	u32 new_head;
+-	u8 chip_rx_head;
+-	bool fifo_empty;
++	const u8 shift = ring->obj_num_shift_to_u8;
++	u8 chip_head, tail, len;
++	u32 fifo_sta;
+ 	int err;
+ 
+-	err = mcp251xfd_rx_head_get_from_chip(priv, ring, &chip_rx_head,
+-					      &fifo_empty);
+-	if (err || fifo_empty)
++	err = regmap_read(priv->map_reg, MCP251XFD_REG_FIFOSTA(ring->fifo_nr),
++			  &fifo_sta);
++	if (err)
++		return err;
++
++	if (mcp251xfd_rx_fifo_sta_empty(fifo_sta)) {
++		*len_p = 0;
++		return 0;
++	}
++
++	if (mcp251xfd_rx_fifo_sta_full(fifo_sta)) {
++		*len_p = ring->obj_num;
++		return 0;
++	}
++
++	chip_head = FIELD_GET(MCP251XFD_REG_FIFOSTA_FIFOCI_MASK, fifo_sta);
++
++	err =  mcp251xfd_check_rx_tail(priv, ring);
++	if (err)
+ 		return err;
++	tail = mcp251xfd_get_rx_tail(ring);
+ 
+-	/* chip_rx_head, is the next RX-Object filled by the HW.
+-	 * The new RX head must be >= the old head.
++	/* First shift to full u8. The subtraction works on signed
++	 * values, which keeps the difference steady around the u8
++	 * overflow. The right shift acts on len, which is a u8.
+ 	 */
+-	new_head = round_down(ring->head, ring->obj_num) + chip_rx_head;
+-	if (new_head <= ring->head)
+-		new_head += ring->obj_num;
++	BUILD_BUG_ON(sizeof(ring->obj_num) != sizeof(chip_head));
++	BUILD_BUG_ON(sizeof(ring->obj_num) != sizeof(tail));
++	BUILD_BUG_ON(sizeof(ring->obj_num) != sizeof(len));
+ 
+-	ring->head = new_head;
++	len = (chip_head << shift) - (tail << shift);
++	*len_p = len >> shift;
+ 
+-	return mcp251xfd_check_rx_tail(priv, ring);
++	return 0;
+ }
+ 
+ static void
+@@ -148,8 +159,6 @@ mcp251xfd_hw_rx_obj_to_skb(const struct mcp251xfd_priv *priv,
+ 
+ 	if (!(hw_rx_obj->flags & MCP251XFD_OBJ_FLAGS_RTR))
+ 		memcpy(cfd->data, hw_rx_obj->data, cfd->len);
+-
+-	mcp251xfd_skb_set_timestamp(priv, skb, hw_rx_obj->ts);
+ }
+ 
+ static int
+@@ -160,8 +169,26 @@ mcp251xfd_handle_rxif_one(struct mcp251xfd_priv *priv,
+ 	struct net_device_stats *stats = &priv->ndev->stats;
+ 	struct sk_buff *skb;
+ 	struct canfd_frame *cfd;
++	u64 timestamp;
+ 	int err;
+ 
++	/* According to mcp2518fd erratum DS80000789E 6. (the FIFOCI
++	 * bits of a FIFOSTA register), the RX FIFO head index
++	 * might be corrupted and we might process past the RX FIFO's
++	 * head into old CAN frames.
++	 *
++	 * Compare the timestamp of currently processed CAN frame with
++	 * last valid frame received. Abort with -EBADMSG if an old
++	 * CAN frame is detected.
++	 */
++	timestamp = timecounter_cyc2time(&priv->tc, hw_rx_obj->ts);
++	if (timestamp <= ring->last_valid) {
++		stats->rx_fifo_errors++;
++
++		return -EBADMSG;
++	}
++	ring->last_valid = timestamp;
++
+ 	if (hw_rx_obj->flags & MCP251XFD_OBJ_FLAGS_FDF)
+ 		skb = alloc_canfd_skb(priv->ndev, &cfd);
+ 	else
+@@ -172,6 +199,7 @@ mcp251xfd_handle_rxif_one(struct mcp251xfd_priv *priv,
+ 		return 0;
+ 	}
+ 
++	mcp251xfd_skb_set_timestamp(skb, timestamp);
+ 	mcp251xfd_hw_rx_obj_to_skb(priv, hw_rx_obj, skb);
+ 	err = can_rx_offload_queue_timestamp(&priv->offload, skb, hw_rx_obj->ts);
+ 	if (err)
+@@ -197,52 +225,81 @@ mcp251xfd_rx_obj_read(const struct mcp251xfd_priv *priv,
+ 	return err;
+ }
+ 
++static int
++mcp251xfd_handle_rxif_ring_uinc(const struct mcp251xfd_priv *priv,
++				struct mcp251xfd_rx_ring *ring,
++				u8 len)
++{
++	int offset;
++	int err;
++
++	if (!len)
++		return 0;
++
++	ring->head += len;
++
++	/* Increment the RX FIFO tail pointer 'len' times in a
++	 * single SPI message.
++	 *
++	 * Note:
++	 * Calculate offset, so that the SPI transfer ends on
++	 * the last message of the uinc_xfer array, which has
++	 * "cs_change == 0", to properly deactivate the chip
++	 * select.
++	 */
++	offset = ARRAY_SIZE(ring->uinc_xfer) - len;
++	err = spi_sync_transfer(priv->spi,
++				ring->uinc_xfer + offset, len);
++	if (err)
++		return err;
++
++	ring->tail += len;
++
++	return 0;
++}
++
+ static int
+ mcp251xfd_handle_rxif_ring(struct mcp251xfd_priv *priv,
+ 			   struct mcp251xfd_rx_ring *ring)
+ {
+ 	struct mcp251xfd_hw_rx_obj_canfd *hw_rx_obj = ring->obj;
+-	u8 rx_tail, len;
++	u8 rx_tail, len, l;
+ 	int err, i;
+ 
+-	err = mcp251xfd_rx_ring_update(priv, ring);
++	err = mcp251xfd_get_rx_len(priv, ring, &len);
+ 	if (err)
+ 		return err;
+ 
+-	while ((len = mcp251xfd_get_rx_linear_len(ring))) {
+-		int offset;
+-
++	while ((l = mcp251xfd_get_rx_linear_len(ring, len))) {
+ 		rx_tail = mcp251xfd_get_rx_tail(ring);
+ 
+ 		err = mcp251xfd_rx_obj_read(priv, ring, hw_rx_obj,
+-					    rx_tail, len);
++					    rx_tail, l);
+ 		if (err)
+ 			return err;
+ 
+-		for (i = 0; i < len; i++) {
++		for (i = 0; i < l; i++) {
+ 			err = mcp251xfd_handle_rxif_one(priv, ring,
+ 							(void *)hw_rx_obj +
+ 							i * ring->obj_size);
+-			if (err)
++
++			/* -EBADMSG means we're affected by mcp2518fd
++			 * erratum DS80000789E 6., i.e. the timestamp
++			 * in the RX object is older than the last
++			 * valid received CAN frame. Don't process any
++			 * further and mark processed frames as good.
++			 */
++			if (err == -EBADMSG)
++				return mcp251xfd_handle_rxif_ring_uinc(priv, ring, i);
++			else if (err)
+ 				return err;
+ 		}
+ 
+-		/* Increment the RX FIFO tail pointer 'len' times in a
+-		 * single SPI message.
+-		 *
+-		 * Note:
+-		 * Calculate offset, so that the SPI transfer ends on
+-		 * the last message of the uinc_xfer array, which has
+-		 * "cs_change == 0", to properly deactivate the chip
+-		 * select.
+-		 */
+-		offset = ARRAY_SIZE(ring->uinc_xfer) - len;
+-		err = spi_sync_transfer(priv->spi,
+-					ring->uinc_xfer + offset, len);
++		err = mcp251xfd_handle_rxif_ring_uinc(priv, ring, l);
+ 		if (err)
+ 			return err;
+ 
+-		ring->tail += len;
++		len -= l;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
+index 5b0c7890d4b44..3886476a8f8ef 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
+@@ -97,7 +97,7 @@ mcp251xfd_handle_tefif_one(struct mcp251xfd_priv *priv,
+ 	tef_tail = mcp251xfd_get_tef_tail(priv);
+ 	skb = priv->can.echo_skb[tef_tail];
+ 	if (skb)
+-		mcp251xfd_skb_set_timestamp(priv, skb, hw_tef_obj->ts);
++		mcp251xfd_skb_set_timestamp_raw(priv, skb, hw_tef_obj->ts);
+ 	stats->tx_bytes +=
+ 		can_rx_offload_get_echo_skb_queue_timestamp(&priv->offload,
+ 							    tef_tail, hw_tef_obj->ts,
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-timestamp.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-timestamp.c
+index 712e091869870..1db99aabe85c5 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-timestamp.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-timestamp.c
+@@ -2,7 +2,7 @@
+ //
+ // mcp251xfd - Microchip MCP251xFD Family CAN controller driver
+ //
+-// Copyright (c) 2021 Pengutronix,
++// Copyright (c) 2021, 2023 Pengutronix,
+ //               Marc Kleine-Budde <kernel@pengutronix.de>
+ //
+ 
+@@ -11,20 +11,20 @@
+ 
+ #include "mcp251xfd.h"
+ 
+-static u64 mcp251xfd_timestamp_read(const struct cyclecounter *cc)
++static u64 mcp251xfd_timestamp_raw_read(const struct cyclecounter *cc)
+ {
+ 	const struct mcp251xfd_priv *priv;
+-	u32 timestamp = 0;
++	u32 ts_raw = 0;
+ 	int err;
+ 
+ 	priv = container_of(cc, struct mcp251xfd_priv, cc);
+-	err = mcp251xfd_get_timestamp(priv, &timestamp);
++	err = mcp251xfd_get_timestamp_raw(priv, &ts_raw);
+ 	if (err)
+ 		netdev_err(priv->ndev,
+ 			   "Error %d while reading timestamp. HW timestamps may be inaccurate.",
+ 			   err);
+ 
+-	return timestamp;
++	return ts_raw;
+ }
+ 
+ static void mcp251xfd_timestamp_work(struct work_struct *work)
+@@ -39,21 +39,11 @@ static void mcp251xfd_timestamp_work(struct work_struct *work)
+ 			      MCP251XFD_TIMESTAMP_WORK_DELAY_SEC * HZ);
+ }
+ 
+-void mcp251xfd_skb_set_timestamp(const struct mcp251xfd_priv *priv,
+-				 struct sk_buff *skb, u32 timestamp)
+-{
+-	struct skb_shared_hwtstamps *hwtstamps = skb_hwtstamps(skb);
+-	u64 ns;
+-
+-	ns = timecounter_cyc2time(&priv->tc, timestamp);
+-	hwtstamps->hwtstamp = ns_to_ktime(ns);
+-}
+-
+ void mcp251xfd_timestamp_init(struct mcp251xfd_priv *priv)
+ {
+ 	struct cyclecounter *cc = &priv->cc;
+ 
+-	cc->read = mcp251xfd_timestamp_read;
++	cc->read = mcp251xfd_timestamp_raw_read;
+ 	cc->mask = CYCLECOUNTER_MASK(32);
+ 	cc->shift = 1;
+ 	cc->mult = clocksource_hz2mult(priv->can.clock.freq, cc->shift);
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+index 4628bf847bc9b..991662fbba42e 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+@@ -2,7 +2,7 @@
+  *
+  * mcp251xfd - Microchip MCP251xFD Family CAN controller driver
+  *
+- * Copyright (c) 2019, 2020, 2021 Pengutronix,
++ * Copyright (c) 2019, 2020, 2021, 2023 Pengutronix,
+  *               Marc Kleine-Budde <kernel@pengutronix.de>
+  * Copyright (c) 2019 Martin Sperl <kernel@martin.sperl.org>
+  */
+@@ -554,10 +554,14 @@ struct mcp251xfd_rx_ring {
+ 	unsigned int head;
+ 	unsigned int tail;
+ 
++	/* timestamp of the last valid received CAN frame */
++	u64 last_valid;
++
+ 	u16 base;
+ 	u8 nr;
+ 	u8 fifo_nr;
+ 	u8 obj_num;
++	u8 obj_num_shift_to_u8;
+ 	u8 obj_size;
+ 
+ 	union mcp251xfd_write_reg_buf irq_enable_buf;
+@@ -811,10 +815,27 @@ mcp251xfd_spi_cmd_write(const struct mcp251xfd_priv *priv,
+ 	return data;
+ }
+ 
+-static inline int mcp251xfd_get_timestamp(const struct mcp251xfd_priv *priv,
+-					  u32 *timestamp)
++static inline int mcp251xfd_get_timestamp_raw(const struct mcp251xfd_priv *priv,
++					      u32 *ts_raw)
++{
++	return regmap_read(priv->map_reg, MCP251XFD_REG_TBC, ts_raw);
++}
++
++static inline void mcp251xfd_skb_set_timestamp(struct sk_buff *skb, u64 ns)
++{
++	struct skb_shared_hwtstamps *hwtstamps = skb_hwtstamps(skb);
++
++	hwtstamps->hwtstamp = ns_to_ktime(ns);
++}
++
++static inline
++void mcp251xfd_skb_set_timestamp_raw(const struct mcp251xfd_priv *priv,
++				     struct sk_buff *skb, u32 ts_raw)
+ {
+-	return regmap_read(priv->map_reg, MCP251XFD_REG_TBC, timestamp);
++	u64 ns;
++
++	ns = timecounter_cyc2time(&priv->tc, ts_raw);
++	mcp251xfd_skb_set_timestamp(skb, ns);
+ }
+ 
+ static inline u16 mcp251xfd_get_tef_obj_addr(u8 n)
+@@ -907,18 +928,9 @@ static inline u8 mcp251xfd_get_rx_tail(const struct mcp251xfd_rx_ring *ring)
+ 	return ring->tail & (ring->obj_num - 1);
+ }
+ 
+-static inline u8 mcp251xfd_get_rx_len(const struct mcp251xfd_rx_ring *ring)
+-{
+-	return ring->head - ring->tail;
+-}
+-
+ static inline u8
+-mcp251xfd_get_rx_linear_len(const struct mcp251xfd_rx_ring *ring)
++mcp251xfd_get_rx_linear_len(const struct mcp251xfd_rx_ring *ring, u8 len)
+ {
+-	u8 len;
+-
+-	len = mcp251xfd_get_rx_len(ring);
+-
+ 	return min_t(u8, len, ring->obj_num - mcp251xfd_get_rx_tail(ring));
+ }
+ 
+@@ -944,8 +956,6 @@ void mcp251xfd_ring_free(struct mcp251xfd_priv *priv);
+ int mcp251xfd_ring_alloc(struct mcp251xfd_priv *priv);
+ int mcp251xfd_handle_rxif(struct mcp251xfd_priv *priv);
+ int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv);
+-void mcp251xfd_skb_set_timestamp(const struct mcp251xfd_priv *priv,
+-				 struct sk_buff *skb, u32 timestamp);
+ void mcp251xfd_timestamp_init(struct mcp251xfd_priv *priv);
+ void mcp251xfd_timestamp_stop(struct mcp251xfd_priv *priv);
+ 
+diff --git a/drivers/net/dsa/vitesse-vsc73xx-core.c b/drivers/net/dsa/vitesse-vsc73xx-core.c
+index 56bb77dbd28a2..cefddcf3cc6fe 100644
+--- a/drivers/net/dsa/vitesse-vsc73xx-core.c
++++ b/drivers/net/dsa/vitesse-vsc73xx-core.c
+@@ -34,7 +34,7 @@
+ #define VSC73XX_BLOCK_ANALYZER	0x2 /* Only subblock 0 */
+ #define VSC73XX_BLOCK_MII	0x3 /* Subblocks 0 and 1 */
+ #define VSC73XX_BLOCK_MEMINIT	0x3 /* Only subblock 2 */
+-#define VSC73XX_BLOCK_CAPTURE	0x4 /* Only subblock 2 */
++#define VSC73XX_BLOCK_CAPTURE	0x4 /* Subblocks 0-4, 6, 7 */
+ #define VSC73XX_BLOCK_ARBITER	0x5 /* Only subblock 0 */
+ #define VSC73XX_BLOCK_SYSTEM	0x7 /* Only subblock 0 */
+ 
+@@ -370,13 +370,19 @@ int vsc73xx_is_addr_valid(u8 block, u8 subblock)
+ 		break;
+ 
+ 	case VSC73XX_BLOCK_MII:
+-	case VSC73XX_BLOCK_CAPTURE:
+ 	case VSC73XX_BLOCK_ARBITER:
+ 		switch (subblock) {
+ 		case 0 ... 1:
+ 			return 1;
+ 		}
+ 		break;
++	case VSC73XX_BLOCK_CAPTURE:
++		switch (subblock) {
++		case 0 ... 4:
++		case 6 ... 7:
++			return 1;
++		}
++		break;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index baa0b3c2ce6ff..946c3d3b69d94 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -931,14 +931,18 @@ static inline void dpaa_setup_egress(const struct dpaa_priv *priv,
+ 	}
+ }
+ 
+-static void dpaa_fq_setup(struct dpaa_priv *priv,
+-			  const struct dpaa_fq_cbs *fq_cbs,
+-			  struct fman_port *tx_port)
++static int dpaa_fq_setup(struct dpaa_priv *priv,
++			 const struct dpaa_fq_cbs *fq_cbs,
++			 struct fman_port *tx_port)
+ {
+ 	int egress_cnt = 0, conf_cnt = 0, num_portals = 0, portal_cnt = 0, cpu;
+ 	const cpumask_t *affine_cpus = qman_affine_cpus();
+-	u16 channels[NR_CPUS];
+ 	struct dpaa_fq *fq;
++	u16 *channels;
++
++	channels = kcalloc(num_possible_cpus(), sizeof(u16), GFP_KERNEL);
++	if (!channels)
++		return -ENOMEM;
+ 
+ 	for_each_cpu_and(cpu, affine_cpus, cpu_online_mask)
+ 		channels[num_portals++] = qman_affine_channel(cpu);
+@@ -997,6 +1001,10 @@ static void dpaa_fq_setup(struct dpaa_priv *priv,
+ 				break;
+ 		}
+ 	}
++
++	kfree(channels);
++
++	return 0;
+ }
+ 
+ static inline int dpaa_tx_fq_to_id(const struct dpaa_priv *priv,
+@@ -3416,7 +3424,9 @@ static int dpaa_eth_probe(struct platform_device *pdev)
+ 	 */
+ 	dpaa_eth_add_channel(priv->channel, &pdev->dev);
+ 
+-	dpaa_fq_setup(priv, &dpaa_fq_cbs, priv->mac_dev->port[TX]);
++	err = dpaa_fq_setup(priv, &dpaa_fq_cbs, priv->mac_dev->port[TX]);
++	if (err)
++		goto free_dpaa_bps;
+ 
+ 	/* Create a congestion group for this netdev, with
+ 	 * dynamically-allocated CGR ID.
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+index 5bd0b36d1feb5..3f8cd4a7d8457 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+@@ -457,12 +457,16 @@ static int dpaa_set_coalesce(struct net_device *dev,
+ 			     struct netlink_ext_ack *extack)
+ {
+ 	const cpumask_t *cpus = qman_affine_cpus();
+-	bool needs_revert[NR_CPUS] = {false};
+ 	struct qman_portal *portal;
+ 	u32 period, prev_period;
+ 	u8 thresh, prev_thresh;
++	bool *needs_revert;
+ 	int cpu, res;
+ 
++	needs_revert = kcalloc(num_possible_cpus(), sizeof(bool), GFP_KERNEL);
++	if (!needs_revert)
++		return -ENOMEM;
++
+ 	period = c->rx_coalesce_usecs;
+ 	thresh = c->rx_max_coalesced_frames;
+ 
+@@ -485,6 +489,8 @@ static int dpaa_set_coalesce(struct net_device *dev,
+ 		needs_revert[cpu] = true;
+ 	}
+ 
++	kfree(needs_revert);
++
+ 	return 0;
+ 
+ revert_values:
+@@ -498,6 +504,8 @@ static int dpaa_set_coalesce(struct net_device *dev,
+ 		qman_dqrr_set_ithresh(portal, prev_thresh);
+ 	}
+ 
++	kfree(needs_revert);
++
+ 	return res;
+ }
+ 
+diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
+index ae1e21c9b0a54..ca7fce17f2c05 100644
+--- a/drivers/net/ethernet/google/gve/gve.h
++++ b/drivers/net/ethernet/google/gve/gve.h
+@@ -724,6 +724,7 @@ struct gve_priv {
+ 	union gve_adminq_command *adminq;
+ 	dma_addr_t adminq_bus_addr;
+ 	struct dma_pool *adminq_pool;
++	struct mutex adminq_lock; /* Protects adminq command execution */
+ 	u32 adminq_mask; /* masks prod_cnt to adminq size */
+ 	u32 adminq_prod_cnt; /* free-running count of AQ cmds executed */
+ 	u32 adminq_cmd_fail; /* free-running count of AQ cmds failed */
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
+index 8ca0def176ef4..2e0c1eb87b11c 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.c
++++ b/drivers/net/ethernet/google/gve/gve_adminq.c
+@@ -284,6 +284,7 @@ int gve_adminq_alloc(struct device *dev, struct gve_priv *priv)
+ 			    &priv->reg_bar0->adminq_base_address_lo);
+ 		iowrite32be(GVE_DRIVER_STATUS_RUN_MASK, &priv->reg_bar0->driver_status);
+ 	}
++	mutex_init(&priv->adminq_lock);
+ 	gve_set_admin_queue_ok(priv);
+ 	return 0;
+ }
+@@ -511,28 +512,29 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv,
+ 	return 0;
+ }
+ 
+-/* This function is not threadsafe - the caller is responsible for any
+- * necessary locks.
+- * The caller is also responsible for making sure there are no commands
+- * waiting to be executed.
+- */
+ static int gve_adminq_execute_cmd(struct gve_priv *priv,
+ 				  union gve_adminq_command *cmd_orig)
+ {
+ 	u32 tail, head;
+ 	int err;
+ 
++	mutex_lock(&priv->adminq_lock);
+ 	tail = ioread32be(&priv->reg_bar0->adminq_event_counter);
+ 	head = priv->adminq_prod_cnt;
+-	if (tail != head)
+-		// This is not a valid path
+-		return -EINVAL;
++	if (tail != head) {
++		err = -EINVAL;
++		goto out;
++	}
+ 
+ 	err = gve_adminq_issue_cmd(priv, cmd_orig);
+ 	if (err)
+-		return err;
++		goto out;
+ 
+-	return gve_adminq_kick_and_wait(priv);
++	err = gve_adminq_kick_and_wait(priv);
++
++out:
++	mutex_unlock(&priv->adminq_lock);
++	return err;
+ }
+ 
+ /* The device specifies that the management vector can either be the first irq
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+index e132c2f095609..cc7f46c0b35ff 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+@@ -1598,8 +1598,7 @@ static void hclge_query_reg_info_of_ssu(struct hclge_dev *hdev)
+ {
+ 	u32 loop_para[HCLGE_MOD_MSG_PARA_ARRAY_MAX_SIZE] = {0};
+ 	struct hclge_mod_reg_common_msg msg;
+-	u8 i, j, num;
+-	u32 loop_time;
++	u8 i, j, num, loop_time;
+ 
+ 	num = ARRAY_SIZE(hclge_ssu_reg_common_msg);
+ 	for (i = 0; i < num; i++) {
+@@ -1609,7 +1608,8 @@ static void hclge_query_reg_info_of_ssu(struct hclge_dev *hdev)
+ 		loop_time = 1;
+ 		loop_para[0] = 0;
+ 		if (msg.need_para) {
+-			loop_time = hdev->ae_dev->dev_specs.tnl_num;
++			loop_time = min(hdev->ae_dev->dev_specs.tnl_num,
++					HCLGE_MOD_MSG_PARA_ARRAY_MAX_SIZE);
+ 			for (j = 0; j < loop_time; j++)
+ 				loop_para[j] = j + 1;
+ 		}
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index caaa10157909e..ce8b5505b16da 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -318,6 +318,7 @@ enum ice_vsi_state {
+ 	ICE_VSI_UMAC_FLTR_CHANGED,
+ 	ICE_VSI_MMAC_FLTR_CHANGED,
+ 	ICE_VSI_PROMISC_CHANGED,
++	ICE_VSI_REBUILD_PENDING,
+ 	ICE_VSI_STATE_NBITS		/* must be last */
+ };
+ 
+@@ -411,6 +412,7 @@ struct ice_vsi {
+ 	struct ice_tx_ring **xdp_rings;	 /* XDP ring array */
+ 	u16 num_xdp_txq;		 /* Used XDP queues */
+ 	u8 xdp_mapping_mode;		 /* ICE_MAP_MODE_[CONTIG|SCATTER] */
++	struct mutex xdp_state_lock;
+ 
+ 	struct net_device **target_netdevs;
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
+index f448d3a845642..c158749a80e05 100644
+--- a/drivers/net/ethernet/intel/ice/ice_base.c
++++ b/drivers/net/ethernet/intel/ice/ice_base.c
+@@ -190,16 +190,11 @@ static void ice_free_q_vector(struct ice_vsi *vsi, int v_idx)
+ 	}
+ 	q_vector = vsi->q_vectors[v_idx];
+ 
+-	ice_for_each_tx_ring(tx_ring, q_vector->tx) {
+-		ice_queue_set_napi(vsi, tx_ring->q_index, NETDEV_QUEUE_TYPE_TX,
+-				   NULL);
++	ice_for_each_tx_ring(tx_ring, vsi->q_vectors[v_idx]->tx)
+ 		tx_ring->q_vector = NULL;
+-	}
+-	ice_for_each_rx_ring(rx_ring, q_vector->rx) {
+-		ice_queue_set_napi(vsi, rx_ring->q_index, NETDEV_QUEUE_TYPE_RX,
+-				   NULL);
++
++	ice_for_each_rx_ring(rx_ring, vsi->q_vectors[v_idx]->rx)
+ 		rx_ring->q_vector = NULL;
+-	}
+ 
+ 	/* only VSI with an associated netdev is set up with NAPI */
+ 	if (vsi->netdev)
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 7629b0190578b..7076a77388641 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -447,6 +447,7 @@ static void ice_vsi_free(struct ice_vsi *vsi)
+ 
+ 	ice_vsi_free_stats(vsi);
+ 	ice_vsi_free_arrays(vsi);
++	mutex_destroy(&vsi->xdp_state_lock);
+ 	mutex_unlock(&pf->sw_mutex);
+ 	devm_kfree(dev, vsi);
+ }
+@@ -626,6 +627,8 @@ static struct ice_vsi *ice_vsi_alloc(struct ice_pf *pf)
+ 	pf->next_vsi = ice_get_free_slot(pf->vsi, pf->num_alloc_vsi,
+ 					 pf->next_vsi);
+ 
++	mutex_init(&vsi->xdp_state_lock);
++
+ unlock_pf:
+ 	mutex_unlock(&pf->sw_mutex);
+ 	return vsi;
+@@ -2286,9 +2289,6 @@ static int ice_vsi_cfg_def(struct ice_vsi *vsi)
+ 
+ 		ice_vsi_map_rings_to_vectors(vsi);
+ 
+-		/* Associate q_vector rings to napi */
+-		ice_vsi_set_napi_queues(vsi);
+-
+ 		vsi->stat_offsets_loaded = false;
+ 
+ 		/* ICE_VSI_CTRL does not need RSS so skip RSS processing */
+@@ -2628,6 +2628,7 @@ void ice_vsi_close(struct ice_vsi *vsi)
+ 	if (!test_and_set_bit(ICE_VSI_DOWN, vsi->state))
+ 		ice_down(vsi);
+ 
++	ice_vsi_clear_napi_queues(vsi);
+ 	ice_vsi_free_irq(vsi);
+ 	ice_vsi_free_tx_rings(vsi);
+ 	ice_vsi_free_rx_rings(vsi);
+@@ -2671,8 +2672,7 @@ int ice_ena_vsi(struct ice_vsi *vsi, bool locked)
+  */
+ void ice_dis_vsi(struct ice_vsi *vsi, bool locked)
+ {
+-	if (test_bit(ICE_VSI_DOWN, vsi->state))
+-		return;
++	bool already_down = test_bit(ICE_VSI_DOWN, vsi->state);
+ 
+ 	set_bit(ICE_VSI_NEEDS_RESTART, vsi->state);
+ 
+@@ -2680,134 +2680,70 @@ void ice_dis_vsi(struct ice_vsi *vsi, bool locked)
+ 		if (netif_running(vsi->netdev)) {
+ 			if (!locked)
+ 				rtnl_lock();
+-
+-			ice_vsi_close(vsi);
++			already_down = test_bit(ICE_VSI_DOWN, vsi->state);
++			if (!already_down)
++				ice_vsi_close(vsi);
+ 
+ 			if (!locked)
+ 				rtnl_unlock();
+-		} else {
++		} else if (!already_down) {
+ 			ice_vsi_close(vsi);
+ 		}
+-	} else if (vsi->type == ICE_VSI_CTRL) {
++	} else if (vsi->type == ICE_VSI_CTRL && !already_down) {
+ 		ice_vsi_close(vsi);
+ 	}
+ }
+ 
+ /**
+- * __ice_queue_set_napi - Set the napi instance for the queue
+- * @dev: device to which NAPI and queue belong
+- * @queue_index: Index of queue
+- * @type: queue type as RX or TX
+- * @napi: NAPI context
+- * @locked: is the rtnl_lock already held
+- *
+- * Set the napi instance for the queue. Caller indicates the lock status.
+- */
+-static void
+-__ice_queue_set_napi(struct net_device *dev, unsigned int queue_index,
+-		     enum netdev_queue_type type, struct napi_struct *napi,
+-		     bool locked)
+-{
+-	if (!locked)
+-		rtnl_lock();
+-	netif_queue_set_napi(dev, queue_index, type, napi);
+-	if (!locked)
+-		rtnl_unlock();
+-}
+-
+-/**
+- * ice_queue_set_napi - Set the napi instance for the queue
+- * @vsi: VSI being configured
+- * @queue_index: Index of queue
+- * @type: queue type as RX or TX
+- * @napi: NAPI context
++ * ice_vsi_set_napi_queues - associate netdev queues with napi
++ * @vsi: VSI pointer
+  *
+- * Set the napi instance for the queue. The rtnl lock state is derived from the
+- * execution path.
++ * Associate queue[s] with napi for all vectors.
++ * The caller must hold rtnl_lock.
+  */
+-void
+-ice_queue_set_napi(struct ice_vsi *vsi, unsigned int queue_index,
+-		   enum netdev_queue_type type, struct napi_struct *napi)
++void ice_vsi_set_napi_queues(struct ice_vsi *vsi)
+ {
+-	struct ice_pf *pf = vsi->back;
++	struct net_device *netdev = vsi->netdev;
++	int q_idx, v_idx;
+ 
+-	if (!vsi->netdev)
++	if (!netdev)
+ 		return;
+ 
+-	if (current_work() == &pf->serv_task ||
+-	    test_bit(ICE_PREPARED_FOR_RESET, pf->state) ||
+-	    test_bit(ICE_DOWN, pf->state) ||
+-	    test_bit(ICE_SUSPENDED, pf->state))
+-		__ice_queue_set_napi(vsi->netdev, queue_index, type, napi,
+-				     false);
+-	else
+-		__ice_queue_set_napi(vsi->netdev, queue_index, type, napi,
+-				     true);
+-}
++	ice_for_each_rxq(vsi, q_idx)
++		netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX,
++				     &vsi->rx_rings[q_idx]->q_vector->napi);
+ 
+-/**
+- * __ice_q_vector_set_napi_queues - Map queue[s] associated with the napi
+- * @q_vector: q_vector pointer
+- * @locked: is the rtnl_lock already held
+- *
+- * Associate the q_vector napi with all the queue[s] on the vector.
+- * Caller indicates the lock status.
+- */
+-void __ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector, bool locked)
+-{
+-	struct ice_rx_ring *rx_ring;
+-	struct ice_tx_ring *tx_ring;
+-
+-	ice_for_each_rx_ring(rx_ring, q_vector->rx)
+-		__ice_queue_set_napi(q_vector->vsi->netdev, rx_ring->q_index,
+-				     NETDEV_QUEUE_TYPE_RX, &q_vector->napi,
+-				     locked);
+-
+-	ice_for_each_tx_ring(tx_ring, q_vector->tx)
+-		__ice_queue_set_napi(q_vector->vsi->netdev, tx_ring->q_index,
+-				     NETDEV_QUEUE_TYPE_TX, &q_vector->napi,
+-				     locked);
++	ice_for_each_txq(vsi, q_idx)
++		netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_TX,
++				     &vsi->tx_rings[q_idx]->q_vector->napi);
+ 	/* Also set the interrupt number for the NAPI */
+-	netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq);
+-}
+-
+-/**
+- * ice_q_vector_set_napi_queues - Map queue[s] associated with the napi
+- * @q_vector: q_vector pointer
+- *
+- * Associate the q_vector napi with all the queue[s] on the vector
+- */
+-void ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector)
+-{
+-	struct ice_rx_ring *rx_ring;
+-	struct ice_tx_ring *tx_ring;
+-
+-	ice_for_each_rx_ring(rx_ring, q_vector->rx)
+-		ice_queue_set_napi(q_vector->vsi, rx_ring->q_index,
+-				   NETDEV_QUEUE_TYPE_RX, &q_vector->napi);
++	ice_for_each_q_vector(vsi, v_idx) {
++		struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
+ 
+-	ice_for_each_tx_ring(tx_ring, q_vector->tx)
+-		ice_queue_set_napi(q_vector->vsi, tx_ring->q_index,
+-				   NETDEV_QUEUE_TYPE_TX, &q_vector->napi);
+-	/* Also set the interrupt number for the NAPI */
+-	netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq);
++		netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq);
++	}
+ }
+ 
+ /**
+- * ice_vsi_set_napi_queues
++ * ice_vsi_clear_napi_queues - dissociate netdev queues from napi
+  * @vsi: VSI pointer
+  *
+- * Associate queue[s] with napi for all vectors
++ * Clear the association between all VSI queues and napi.
++ * The caller must hold rtnl_lock.
+  */
+-void ice_vsi_set_napi_queues(struct ice_vsi *vsi)
++void ice_vsi_clear_napi_queues(struct ice_vsi *vsi)
+ {
+-	int i;
++	struct net_device *netdev = vsi->netdev;
++	int q_idx;
+ 
+-	if (!vsi->netdev)
++	if (!netdev)
+ 		return;
+ 
+-	ice_for_each_q_vector(vsi, i)
+-		ice_q_vector_set_napi_queues(vsi->q_vectors[i]);
++	ice_for_each_txq(vsi, q_idx)
++		netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_TX, NULL);
++
++	ice_for_each_rxq(vsi, q_idx)
++		netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX, NULL);
+ }
+ 
+ /**
+@@ -3039,19 +2975,23 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags)
+ 	if (WARN_ON(vsi->type == ICE_VSI_VF && !vsi->vf))
+ 		return -EINVAL;
+ 
++	mutex_lock(&vsi->xdp_state_lock);
++
+ 	ret = ice_vsi_realloc_stat_arrays(vsi);
+ 	if (ret)
+-		goto err_vsi_cfg;
++		goto unlock;
+ 
+ 	ice_vsi_decfg(vsi);
+ 	ret = ice_vsi_cfg_def(vsi);
+ 	if (ret)
+-		goto err_vsi_cfg;
++		goto unlock;
+ 
+ 	coalesce = kcalloc(vsi->num_q_vectors,
+ 			   sizeof(struct ice_coalesce_stored), GFP_KERNEL);
+-	if (!coalesce)
+-		return -ENOMEM;
++	if (!coalesce) {
++		ret = -ENOMEM;
++		goto decfg;
++	}
+ 
+ 	prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce);
+ 
+@@ -3059,22 +2999,23 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags)
+ 	if (ret) {
+ 		if (vsi_flags & ICE_VSI_FLAG_INIT) {
+ 			ret = -EIO;
+-			goto err_vsi_cfg_tc_lan;
++			goto free_coalesce;
+ 		}
+ 
+-		kfree(coalesce);
+-		return ice_schedule_reset(pf, ICE_RESET_PFR);
++		ret = ice_schedule_reset(pf, ICE_RESET_PFR);
++		goto free_coalesce;
+ 	}
+ 
+ 	ice_vsi_rebuild_set_coalesce(vsi, coalesce, prev_num_q_vectors);
+-	kfree(coalesce);
+-
+-	return 0;
++	clear_bit(ICE_VSI_REBUILD_PENDING, vsi->state);
+ 
+-err_vsi_cfg_tc_lan:
+-	ice_vsi_decfg(vsi);
++free_coalesce:
+ 	kfree(coalesce);
+-err_vsi_cfg:
++decfg:
++	if (ret)
++		ice_vsi_decfg(vsi);
++unlock:
++	mutex_unlock(&vsi->xdp_state_lock);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
+index 94ce8964dda66..36d86535695dd 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.h
++++ b/drivers/net/ethernet/intel/ice/ice_lib.h
+@@ -44,16 +44,10 @@ void ice_vsi_cfg_netdev_tc(struct ice_vsi *vsi, u8 ena_tc);
+ struct ice_vsi *
+ ice_vsi_setup(struct ice_pf *pf, struct ice_vsi_cfg_params *params);
+ 
+-void
+-ice_queue_set_napi(struct ice_vsi *vsi, unsigned int queue_index,
+-		   enum netdev_queue_type type, struct napi_struct *napi);
+-
+-void __ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector, bool locked);
+-
+-void ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector);
+-
+ void ice_vsi_set_napi_queues(struct ice_vsi *vsi);
+ 
++void ice_vsi_clear_napi_queues(struct ice_vsi *vsi);
++
+ int ice_vsi_release(struct ice_vsi *vsi);
+ 
+ void ice_vsi_close(struct ice_vsi *vsi);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index f16d13e9ff6e3..766f9a466bc35 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -609,11 +609,15 @@ ice_prepare_for_reset(struct ice_pf *pf, enum ice_reset_req reset_type)
+ 			memset(&vsi->mqprio_qopt, 0, sizeof(vsi->mqprio_qopt));
+ 		}
+ 	}
++
++	if (vsi->netdev)
++		netif_device_detach(vsi->netdev);
+ skip:
+ 
+ 	/* clear SW filtering DB */
+ 	ice_clear_hw_tbls(hw);
+ 	/* disable the VSIs and their queues that are not already DOWN */
++	set_bit(ICE_VSI_REBUILD_PENDING, ice_get_main_vsi(pf)->state);
+ 	ice_pf_dis_all_vsi(pf, false);
+ 
+ 	if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
+@@ -3002,8 +3006,8 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 		   struct netlink_ext_ack *extack)
+ {
+ 	unsigned int frame_size = vsi->netdev->mtu + ICE_ETH_PKT_HDR_PAD;
+-	bool if_running = netif_running(vsi->netdev);
+ 	int ret = 0, xdp_ring_err = 0;
++	bool if_running;
+ 
+ 	if (prog && !prog->aux->xdp_has_frags) {
+ 		if (frame_size > ice_max_xdp_frame_size(vsi)) {
+@@ -3014,13 +3018,17 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 	}
+ 
+ 	/* hot swap progs and avoid toggling link */
+-	if (ice_is_xdp_ena_vsi(vsi) == !!prog) {
++	if (ice_is_xdp_ena_vsi(vsi) == !!prog ||
++	    test_bit(ICE_VSI_REBUILD_PENDING, vsi->state)) {
+ 		ice_vsi_assign_bpf_prog(vsi, prog);
+ 		return 0;
+ 	}
+ 
++	if_running = netif_running(vsi->netdev) &&
++		     !test_and_set_bit(ICE_VSI_DOWN, vsi->state);
++
+ 	/* need to stop netdev while setting up the program for Rx rings */
+-	if (if_running && !test_and_set_bit(ICE_VSI_DOWN, vsi->state)) {
++	if (if_running) {
+ 		ret = ice_down(vsi);
+ 		if (ret) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Preparing device for XDP attach failed");
+@@ -3086,21 +3094,28 @@ static int ice_xdp(struct net_device *dev, struct netdev_bpf *xdp)
+ {
+ 	struct ice_netdev_priv *np = netdev_priv(dev);
+ 	struct ice_vsi *vsi = np->vsi;
++	int ret;
+ 
+ 	if (vsi->type != ICE_VSI_PF) {
+ 		NL_SET_ERR_MSG_MOD(xdp->extack, "XDP can be loaded only on PF VSI");
+ 		return -EINVAL;
+ 	}
+ 
++	mutex_lock(&vsi->xdp_state_lock);
++
+ 	switch (xdp->command) {
+ 	case XDP_SETUP_PROG:
+-		return ice_xdp_setup_prog(vsi, xdp->prog, xdp->extack);
++		ret = ice_xdp_setup_prog(vsi, xdp->prog, xdp->extack);
++		break;
+ 	case XDP_SETUP_XSK_POOL:
+-		return ice_xsk_pool_setup(vsi, xdp->xsk.pool,
+-					  xdp->xsk.queue_id);
++		ret = ice_xsk_pool_setup(vsi, xdp->xsk.pool, xdp->xsk.queue_id);
++		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
+ 	}
++
++	mutex_unlock(&vsi->xdp_state_lock);
++	return ret;
+ }
+ 
+ /**
+@@ -3556,11 +3571,9 @@ static void ice_napi_add(struct ice_vsi *vsi)
+ 	if (!vsi->netdev)
+ 		return;
+ 
+-	ice_for_each_q_vector(vsi, v_idx) {
++	ice_for_each_q_vector(vsi, v_idx)
+ 		netif_napi_add(vsi->netdev, &vsi->q_vectors[v_idx]->napi,
+ 			       ice_napi_poll);
+-		__ice_q_vector_set_napi_queues(vsi->q_vectors[v_idx], false);
+-	}
+ }
+ 
+ /**
+@@ -4160,13 +4173,17 @@ int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked)
+ 
+ 	/* set for the next time the netdev is started */
+ 	if (!netif_running(vsi->netdev)) {
+-		ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
++		err = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
++		if (err)
++			goto rebuild_err;
+ 		dev_dbg(ice_pf_to_dev(pf), "Link is down, queue count change happens when link is brought up\n");
+ 		goto done;
+ 	}
+ 
+ 	ice_vsi_close(vsi);
+-	ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
++	err = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
++	if (err)
++		goto rebuild_err;
+ 
+ 	ice_for_each_traffic_class(i) {
+ 		if (vsi->tc_cfg.ena_tc & BIT(i))
+@@ -4177,6 +4194,11 @@ int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked)
+ 	}
+ 	ice_pf_dcb_recfg(pf, locked);
+ 	ice_vsi_open(vsi);
++	goto done;
++
++rebuild_err:
++	dev_err(ice_pf_to_dev(pf), "Error during VSI rebuild: %d. Unload and reload the driver.\n",
++		err);
+ done:
+ 	clear_bit(ICE_CFG_BUSY, pf->state);
+ 	return err;
+@@ -5529,7 +5551,9 @@ static int ice_reinit_interrupt_scheme(struct ice_pf *pf)
+ 		if (ret)
+ 			goto err_reinit;
+ 		ice_vsi_map_rings_to_vectors(pf->vsi[v]);
++		rtnl_lock();
+ 		ice_vsi_set_napi_queues(pf->vsi[v]);
++		rtnl_unlock();
+ 	}
+ 
+ 	ret = ice_req_irq_msix_misc(pf);
+@@ -5543,8 +5567,12 @@ static int ice_reinit_interrupt_scheme(struct ice_pf *pf)
+ 
+ err_reinit:
+ 	while (v--)
+-		if (pf->vsi[v])
++		if (pf->vsi[v]) {
++			rtnl_lock();
++			ice_vsi_clear_napi_queues(pf->vsi[v]);
++			rtnl_unlock();
+ 			ice_vsi_free_q_vectors(pf->vsi[v]);
++		}
+ 
+ 	return ret;
+ }
+@@ -5609,6 +5637,9 @@ static int ice_suspend(struct device *dev)
+ 	ice_for_each_vsi(pf, v) {
+ 		if (!pf->vsi[v])
+ 			continue;
++		rtnl_lock();
++		ice_vsi_clear_napi_queues(pf->vsi[v]);
++		rtnl_unlock();
+ 		ice_vsi_free_q_vectors(pf->vsi[v]);
+ 	}
+ 	ice_clear_interrupt_scheme(pf);
+@@ -7444,6 +7475,8 @@ int ice_vsi_open(struct ice_vsi *vsi)
+ 		err = netif_set_real_num_rx_queues(vsi->netdev, vsi->num_rxq);
+ 		if (err)
+ 			goto err_set_qs;
++
++		ice_vsi_set_napi_queues(vsi);
+ 	}
+ 
+ 	err = ice_up_complete(vsi);
+@@ -7581,6 +7614,7 @@ static void ice_update_pf_netdev_link(struct ice_pf *pf)
+  */
+ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
+ {
++	struct ice_vsi *vsi = ice_get_main_vsi(pf);
+ 	struct device *dev = ice_pf_to_dev(pf);
+ 	struct ice_hw *hw = &pf->hw;
+ 	bool dvm;
+@@ -7725,6 +7759,9 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
+ 		ice_rebuild_arfs(pf);
+ 	}
+ 
++	if (vsi && vsi->netdev)
++		netif_device_attach(vsi->netdev);
++
+ 	ice_update_pf_netdev_link(pf);
+ 
+ 	/* tell the firmware we are up */
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index 240a7bec242be..87a5427570d76 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -165,7 +165,6 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+ 	struct ice_q_vector *q_vector;
+ 	struct ice_tx_ring *tx_ring;
+ 	struct ice_rx_ring *rx_ring;
+-	int timeout = 50;
+ 	int fail = 0;
+ 	int err;
+ 
+@@ -176,13 +175,6 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+ 	rx_ring = vsi->rx_rings[q_idx];
+ 	q_vector = rx_ring->q_vector;
+ 
+-	while (test_and_set_bit(ICE_CFG_BUSY, vsi->state)) {
+-		timeout--;
+-		if (!timeout)
+-			return -EBUSY;
+-		usleep_range(1000, 2000);
+-	}
+-
+ 	synchronize_net();
+ 	netif_carrier_off(vsi->netdev);
+ 	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+@@ -261,7 +253,6 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
+ 		netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+ 		netif_carrier_on(vsi->netdev);
+ 	}
+-	clear_bit(ICE_CFG_BUSY, vsi->state);
+ 
+ 	return fail;
+ }
+@@ -390,7 +381,8 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
+ 		goto failure;
+ 	}
+ 
+-	if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi);
++	if_running = !test_bit(ICE_VSI_DOWN, vsi->state) &&
++		     ice_is_xdp_ena_vsi(vsi);
+ 
+ 	if (if_running) {
+ 		struct ice_rx_ring *rx_ring = vsi->rx_rings[qid];
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index b6aa449aa56af..a27d0a4d3d9c4 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -6961,10 +6961,20 @@ static void igb_extts(struct igb_adapter *adapter, int tsintr_tt)
+ 
+ static void igb_tsync_interrupt(struct igb_adapter *adapter)
+ {
++	const u32 mask = (TSINTR_SYS_WRAP | E1000_TSICR_TXTS |
++			  TSINTR_TT0 | TSINTR_TT1 |
++			  TSINTR_AUTT0 | TSINTR_AUTT1);
+ 	struct e1000_hw *hw = &adapter->hw;
+ 	u32 tsicr = rd32(E1000_TSICR);
+ 	struct ptp_clock_event event;
+ 
++	if (hw->mac.type == e1000_82580) {
++		/* 82580 has a hardware bug that requires an explicit
++		 * write to clear the TimeSync interrupt cause.
++		 */
++		wr32(E1000_TSICR, tsicr & mask);
++	}
++
+ 	if (tsicr & TSINTR_SYS_WRAP) {
+ 		event.type = PTP_CLOCK_PPS;
+ 		if (adapter->ptp_caps.pps)
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 3041f8142324f..773136925fd07 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -7417,6 +7417,7 @@ static void igc_io_resume(struct pci_dev *pdev)
+ 	rtnl_lock();
+ 	if (netif_running(netdev)) {
+ 		if (igc_open(netdev)) {
++			rtnl_unlock();
+ 			netdev_err(netdev, "igc_open failed after reset\n");
+ 			return;
+ 		}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index e85fb71bf0b46..3cebc3a435db5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -80,6 +80,7 @@ struct page_pool;
+ 				 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+ 
+ #define MLX5E_RX_MAX_HEAD (256)
++#define MLX5E_SHAMPO_LOG_HEADER_ENTRY_SIZE (8)
+ #define MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE (9)
+ #define MLX5E_SHAMPO_WQ_HEADER_PER_PAGE (PAGE_SIZE >> MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE)
+ #define MLX5E_SHAMPO_WQ_BASE_HEAD_ENTRY_SIZE (64)
+@@ -146,25 +147,6 @@ struct page_pool;
+ #define MLX5E_TX_XSK_POLL_BUDGET       64
+ #define MLX5E_SQ_RECOVER_MIN_INTERVAL  500 /* msecs */
+ 
+-#define MLX5E_KLM_UMR_WQE_SZ(sgl_len)\
+-	(sizeof(struct mlx5e_umr_wqe) +\
+-	(sizeof(struct mlx5_klm) * (sgl_len)))
+-
+-#define MLX5E_KLM_UMR_WQEBBS(klm_entries) \
+-	(DIV_ROUND_UP(MLX5E_KLM_UMR_WQE_SZ(klm_entries), MLX5_SEND_WQE_BB))
+-
+-#define MLX5E_KLM_UMR_DS_CNT(klm_entries)\
+-	(DIV_ROUND_UP(MLX5E_KLM_UMR_WQE_SZ(klm_entries), MLX5_SEND_WQE_DS))
+-
+-#define MLX5E_KLM_MAX_ENTRIES_PER_WQE(wqe_size)\
+-	(((wqe_size) - sizeof(struct mlx5e_umr_wqe)) / sizeof(struct mlx5_klm))
+-
+-#define MLX5E_KLM_ENTRIES_PER_WQE(wqe_size)\
+-	ALIGN_DOWN(MLX5E_KLM_MAX_ENTRIES_PER_WQE(wqe_size), MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT)
+-
+-#define MLX5E_MAX_KLM_PER_WQE(mdev) \
+-	MLX5E_KLM_ENTRIES_PER_WQE(MLX5_SEND_WQE_BB * mlx5e_get_max_sq_aligned_wqebbs(mdev))
+-
+ #define mlx5e_state_dereference(priv, p) \
+ 	rcu_dereference_protected((p), lockdep_is_held(&(priv)->state_lock))
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+index ec819dfc98be2..6c9ccccca81e2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+@@ -1071,18 +1071,18 @@ static u32 mlx5e_shampo_icosq_sz(struct mlx5_core_dev *mdev,
+ 				 struct mlx5e_params *params,
+ 				 struct mlx5e_rq_param *rq_param)
+ {
+-	int max_num_of_umr_per_wqe, max_hd_per_wqe, max_klm_per_umr, rest;
++	int max_num_of_umr_per_wqe, max_hd_per_wqe, max_ksm_per_umr, rest;
+ 	void *wqc = MLX5_ADDR_OF(rqc, rq_param->rqc, wq);
+ 	int wq_size = BIT(MLX5_GET(wq, wqc, log_wq_sz));
+ 	u32 wqebbs;
+ 
+-	max_klm_per_umr = MLX5E_MAX_KLM_PER_WQE(mdev);
++	max_ksm_per_umr = MLX5E_MAX_KSM_PER_WQE(mdev);
+ 	max_hd_per_wqe = mlx5e_shampo_hd_per_wqe(mdev, params, rq_param);
+-	max_num_of_umr_per_wqe = max_hd_per_wqe / max_klm_per_umr;
+-	rest = max_hd_per_wqe % max_klm_per_umr;
+-	wqebbs = MLX5E_KLM_UMR_WQEBBS(max_klm_per_umr) * max_num_of_umr_per_wqe;
++	max_num_of_umr_per_wqe = max_hd_per_wqe / max_ksm_per_umr;
++	rest = max_hd_per_wqe % max_ksm_per_umr;
++	wqebbs = MLX5E_KSM_UMR_WQEBBS(max_ksm_per_umr) * max_num_of_umr_per_wqe;
+ 	if (rest)
+-		wqebbs += MLX5E_KLM_UMR_WQEBBS(rest);
++		wqebbs += MLX5E_KSM_UMR_WQEBBS(rest);
+ 	wqebbs *= wq_size;
+ 	return wqebbs;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+index 879d698b61193..d1f0f868d494e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+@@ -34,6 +34,25 @@
+ 
+ #define MLX5E_RX_ERR_CQE(cqe) (get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)
+ 
++#define MLX5E_KSM_UMR_WQE_SZ(sgl_len)\
++	(sizeof(struct mlx5e_umr_wqe) +\
++	(sizeof(struct mlx5_ksm) * (sgl_len)))
++
++#define MLX5E_KSM_UMR_WQEBBS(ksm_entries) \
++	(DIV_ROUND_UP(MLX5E_KSM_UMR_WQE_SZ(ksm_entries), MLX5_SEND_WQE_BB))
++
++#define MLX5E_KSM_UMR_DS_CNT(ksm_entries)\
++	(DIV_ROUND_UP(MLX5E_KSM_UMR_WQE_SZ(ksm_entries), MLX5_SEND_WQE_DS))
++
++#define MLX5E_KSM_MAX_ENTRIES_PER_WQE(wqe_size)\
++	(((wqe_size) - sizeof(struct mlx5e_umr_wqe)) / sizeof(struct mlx5_ksm))
++
++#define MLX5E_KSM_ENTRIES_PER_WQE(wqe_size)\
++	ALIGN_DOWN(MLX5E_KSM_MAX_ENTRIES_PER_WQE(wqe_size), MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT)
++
++#define MLX5E_MAX_KSM_PER_WQE(mdev) \
++	MLX5E_KSM_ENTRIES_PER_WQE(MLX5_SEND_WQE_BB * mlx5e_get_max_sq_aligned_wqebbs(mdev))
++
+ static inline
+ ktime_t mlx5e_cqe_ts_to_ns(cqe_ts_to_ns func, struct mlx5_clock *clock, u64 cqe_ts)
+ {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 409f525f1703c..632129de24ba1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -504,8 +504,8 @@ static int mlx5e_create_umr_mkey(struct mlx5_core_dev *mdev,
+ 	return err;
+ }
+ 
+-static int mlx5e_create_umr_klm_mkey(struct mlx5_core_dev *mdev,
+-				     u64 nentries,
++static int mlx5e_create_umr_ksm_mkey(struct mlx5_core_dev *mdev,
++				     u64 nentries, u8 log_entry_size,
+ 				     u32 *umr_mkey)
+ {
+ 	int inlen;
+@@ -525,12 +525,13 @@ static int mlx5e_create_umr_klm_mkey(struct mlx5_core_dev *mdev,
+ 	MLX5_SET(mkc, mkc, umr_en, 1);
+ 	MLX5_SET(mkc, mkc, lw, 1);
+ 	MLX5_SET(mkc, mkc, lr, 1);
+-	MLX5_SET(mkc, mkc, access_mode_1_0, MLX5_MKC_ACCESS_MODE_KLMS);
++	MLX5_SET(mkc, mkc, access_mode_1_0, MLX5_MKC_ACCESS_MODE_KSM);
+ 	mlx5e_mkey_set_relaxed_ordering(mdev, mkc);
+ 	MLX5_SET(mkc, mkc, qpn, 0xffffff);
+ 	MLX5_SET(mkc, mkc, pd, mdev->mlx5e_res.hw_objs.pdn);
+ 	MLX5_SET(mkc, mkc, translations_octword_size, nentries);
+-	MLX5_SET(mkc, mkc, length64, 1);
++	MLX5_SET(mkc, mkc, log_page_size, log_entry_size);
++	MLX5_SET64(mkc, mkc, len, nentries << log_entry_size);
+ 	err = mlx5_core_create_mkey(mdev, umr_mkey, in, inlen);
+ 
+ 	kvfree(in);
+@@ -565,14 +566,16 @@ static int mlx5e_create_rq_umr_mkey(struct mlx5_core_dev *mdev, struct mlx5e_rq
+ static int mlx5e_create_rq_hd_umr_mkey(struct mlx5_core_dev *mdev,
+ 				       struct mlx5e_rq *rq)
+ {
+-	u32 max_klm_size = BIT(MLX5_CAP_GEN(mdev, log_max_klm_list_size));
++	u32 max_ksm_size = BIT(MLX5_CAP_GEN(mdev, log_max_klm_list_size));
+ 
+-	if (max_klm_size < rq->mpwqe.shampo->hd_per_wq) {
+-		mlx5_core_err(mdev, "max klm list size 0x%x is smaller than shampo header buffer list size 0x%x\n",
+-			      max_klm_size, rq->mpwqe.shampo->hd_per_wq);
++	if (max_ksm_size < rq->mpwqe.shampo->hd_per_wq) {
++		mlx5_core_err(mdev, "max ksm list size 0x%x is smaller than shampo header buffer list size 0x%x\n",
++			      max_ksm_size, rq->mpwqe.shampo->hd_per_wq);
+ 		return -EINVAL;
+ 	}
+-	return mlx5e_create_umr_klm_mkey(mdev, rq->mpwqe.shampo->hd_per_wq,
++
++	return mlx5e_create_umr_ksm_mkey(mdev, rq->mpwqe.shampo->hd_per_wq,
++					 MLX5E_SHAMPO_LOG_HEADER_ENTRY_SIZE,
+ 					 &rq->mpwqe.shampo->mkey);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 0138f77eaeed0..cbc45dc34a60f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -619,25 +619,25 @@ static int bitmap_find_window(unsigned long *bitmap, int len,
+ 	return min(len, count);
+ }
+ 
+-static void build_klm_umr(struct mlx5e_icosq *sq, struct mlx5e_umr_wqe *umr_wqe,
+-			  __be32 key, u16 offset, u16 klm_len, u16 wqe_bbs)
++static void build_ksm_umr(struct mlx5e_icosq *sq, struct mlx5e_umr_wqe *umr_wqe,
++			  __be32 key, u16 offset, u16 ksm_len)
+ {
+-	memset(umr_wqe, 0, offsetof(struct mlx5e_umr_wqe, inline_klms));
++	memset(umr_wqe, 0, offsetof(struct mlx5e_umr_wqe, inline_ksms));
+ 	umr_wqe->ctrl.opmod_idx_opcode =
+ 		cpu_to_be32((sq->pc << MLX5_WQE_CTRL_WQE_INDEX_SHIFT) |
+ 			     MLX5_OPCODE_UMR);
+ 	umr_wqe->ctrl.umr_mkey = key;
+ 	umr_wqe->ctrl.qpn_ds = cpu_to_be32((sq->sqn << MLX5_WQE_CTRL_QPN_SHIFT)
+-					    | MLX5E_KLM_UMR_DS_CNT(klm_len));
++					    | MLX5E_KSM_UMR_DS_CNT(ksm_len));
+ 	umr_wqe->uctrl.flags = MLX5_UMR_TRANSLATION_OFFSET_EN | MLX5_UMR_INLINE;
+ 	umr_wqe->uctrl.xlt_offset = cpu_to_be16(offset);
+-	umr_wqe->uctrl.xlt_octowords = cpu_to_be16(klm_len);
++	umr_wqe->uctrl.xlt_octowords = cpu_to_be16(ksm_len);
+ 	umr_wqe->uctrl.mkey_mask     = cpu_to_be64(MLX5_MKEY_MASK_FREE);
+ }
+ 
+ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
+ 				     struct mlx5e_icosq *sq,
+-				     u16 klm_entries, u16 index)
++				     u16 ksm_entries, u16 index)
+ {
+ 	struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
+ 	u16 entries, pi, header_offset, err, wqe_bbs, new_entries;
+@@ -650,20 +650,20 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
+ 	int headroom, i;
+ 
+ 	headroom = rq->buff.headroom;
+-	new_entries = klm_entries - (shampo->pi & (MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT - 1));
+-	entries = ALIGN(klm_entries, MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT);
+-	wqe_bbs = MLX5E_KLM_UMR_WQEBBS(entries);
++	new_entries = ksm_entries - (shampo->pi & (MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT - 1));
++	entries = ALIGN(ksm_entries, MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT);
++	wqe_bbs = MLX5E_KSM_UMR_WQEBBS(entries);
+ 	pi = mlx5e_icosq_get_next_pi(sq, wqe_bbs);
+ 	umr_wqe = mlx5_wq_cyc_get_wqe(&sq->wq, pi);
+-	build_klm_umr(sq, umr_wqe, shampo->key, index, entries, wqe_bbs);
++	build_ksm_umr(sq, umr_wqe, shampo->key, index, entries);
+ 
+ 	frag_page = &shampo->pages[page_index];
+ 
+ 	for (i = 0; i < entries; i++, index++) {
+ 		dma_info = &shampo->info[index];
+-		if (i >= klm_entries || (index < shampo->pi && shampo->pi - index <
+-					 MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT))
+-			goto update_klm;
++		if (i >= ksm_entries || (index < shampo->pi && shampo->pi - index <
++					 MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT))
++			goto update_ksm;
+ 		header_offset = (index & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) <<
+ 			MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE;
+ 		if (!(header_offset & (PAGE_SIZE - 1))) {
+@@ -683,12 +683,11 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
+ 			dma_info->frag_page = frag_page;
+ 		}
+ 
+-update_klm:
+-		umr_wqe->inline_klms[i].bcount =
+-			cpu_to_be32(MLX5E_RX_MAX_HEAD);
+-		umr_wqe->inline_klms[i].key    = cpu_to_be32(lkey);
+-		umr_wqe->inline_klms[i].va     =
+-			cpu_to_be64(dma_info->addr + headroom);
++update_ksm:
++		umr_wqe->inline_ksms[i] = (struct mlx5_ksm) {
++			.key = cpu_to_be32(lkey),
++			.va  = cpu_to_be64(dma_info->addr + headroom),
++		};
+ 	}
+ 
+ 	sq->db.wqe_info[pi] = (struct mlx5e_icosq_wqe_info) {
+@@ -720,37 +719,38 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
+ static int mlx5e_alloc_rx_hd_mpwqe(struct mlx5e_rq *rq)
+ {
+ 	struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
+-	u16 klm_entries, num_wqe, index, entries_before;
++	u16 ksm_entries, num_wqe, index, entries_before;
+ 	struct mlx5e_icosq *sq = rq->icosq;
+-	int i, err, max_klm_entries, len;
++	int i, err, max_ksm_entries, len;
+ 
+-	max_klm_entries = MLX5E_MAX_KLM_PER_WQE(rq->mdev);
+-	klm_entries = bitmap_find_window(shampo->bitmap,
++	max_ksm_entries = MLX5E_MAX_KSM_PER_WQE(rq->mdev);
++	ksm_entries = bitmap_find_window(shampo->bitmap,
+ 					 shampo->hd_per_wqe,
+ 					 shampo->hd_per_wq, shampo->pi);
+-	if (!klm_entries)
++	ksm_entries = ALIGN_DOWN(ksm_entries, MLX5E_SHAMPO_WQ_HEADER_PER_PAGE);
++	if (!ksm_entries)
+ 		return 0;
+ 
+-	klm_entries += (shampo->pi & (MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT - 1));
+-	index = ALIGN_DOWN(shampo->pi, MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT);
++	ksm_entries += (shampo->pi & (MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT - 1));
++	index = ALIGN_DOWN(shampo->pi, MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT);
+ 	entries_before = shampo->hd_per_wq - index;
+ 
+-	if (unlikely(entries_before < klm_entries))
+-		num_wqe = DIV_ROUND_UP(entries_before, max_klm_entries) +
+-			  DIV_ROUND_UP(klm_entries - entries_before, max_klm_entries);
++	if (unlikely(entries_before < ksm_entries))
++		num_wqe = DIV_ROUND_UP(entries_before, max_ksm_entries) +
++			  DIV_ROUND_UP(ksm_entries - entries_before, max_ksm_entries);
+ 	else
+-		num_wqe = DIV_ROUND_UP(klm_entries, max_klm_entries);
++		num_wqe = DIV_ROUND_UP(ksm_entries, max_ksm_entries);
+ 
+ 	for (i = 0; i < num_wqe; i++) {
+-		len = (klm_entries > max_klm_entries) ? max_klm_entries :
+-							klm_entries;
++		len = (ksm_entries > max_ksm_entries) ? max_ksm_entries :
++							ksm_entries;
+ 		if (unlikely(index + len > shampo->hd_per_wq))
+ 			len = shampo->hd_per_wq - index;
+ 		err = mlx5e_build_shampo_hd_umr(rq, sq, len, index);
+ 		if (unlikely(err))
+ 			return err;
+ 		index = (index + len) & (rq->mpwqe.shampo->hd_per_wq - 1);
+-		klm_entries -= len;
++		ksm_entries -= len;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+index fe4e166de8a04..79276bc3d4951 100644
+--- a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
++++ b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+@@ -1442,18 +1442,8 @@ static void vcap_api_encode_rule_test(struct kunit *test)
+ 	vcap_enable_lookups(&test_vctrl, &test_netdev, 0, 0,
+ 			    rule->cookie, false);
+ 
+-	vcap_free_rule(rule);
+-
+-	/* Check that the rule has been freed: tricky to access since this
+-	 * memory should not be accessible anymore
+-	 */
+-	KUNIT_EXPECT_PTR_NE(test, NULL, rule);
+-	ret = list_empty(&rule->keyfields);
+-	KUNIT_EXPECT_EQ(test, true, ret);
+-	ret = list_empty(&rule->actionfields);
+-	KUNIT_EXPECT_EQ(test, true, ret);
+-
+-	vcap_del_rule(&test_vctrl, &test_netdev, id);
++	ret = vcap_del_rule(&test_vctrl, &test_netdev, id);
++	KUNIT_EXPECT_EQ(test, 0, ret);
+ }
+ 
+ static void vcap_api_set_rule_counter_test(struct kunit *test)
+diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
+index 482b9cd369508..bb77327bfa815 100644
+--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
++++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
+@@ -1857,10 +1857,12 @@ static void mana_destroy_txq(struct mana_port_context *apc)
+ 
+ 	for (i = 0; i < apc->num_queues; i++) {
+ 		napi = &apc->tx_qp[i].tx_cq.napi;
+-		napi_synchronize(napi);
+-		napi_disable(napi);
+-		netif_napi_del(napi);
+-
++		if (apc->tx_qp[i].txq.napi_initialized) {
++			napi_synchronize(napi);
++			napi_disable(napi);
++			netif_napi_del(napi);
++			apc->tx_qp[i].txq.napi_initialized = false;
++		}
+ 		mana_destroy_wq_obj(apc, GDMA_SQ, apc->tx_qp[i].tx_object);
+ 
+ 		mana_deinit_cq(apc, &apc->tx_qp[i].tx_cq);
+@@ -1916,6 +1918,7 @@ static int mana_create_txq(struct mana_port_context *apc,
+ 		txq->ndev = net;
+ 		txq->net_txq = netdev_get_tx_queue(net, i);
+ 		txq->vp_offset = apc->tx_vp_offset;
++		txq->napi_initialized = false;
+ 		skb_queue_head_init(&txq->pending_skbs);
+ 
+ 		memset(&spec, 0, sizeof(spec));
+@@ -1982,6 +1985,7 @@ static int mana_create_txq(struct mana_port_context *apc,
+ 
+ 		netif_napi_add_tx(net, &cq->napi, mana_poll);
+ 		napi_enable(&cq->napi);
++		txq->napi_initialized = true;
+ 
+ 		mana_gd_ring_cq(cq->gdma_cq, SET_ARM_BIT);
+ 	}
+@@ -1993,7 +1997,7 @@ static int mana_create_txq(struct mana_port_context *apc,
+ }
+ 
+ static void mana_destroy_rxq(struct mana_port_context *apc,
+-			     struct mana_rxq *rxq, bool validate_state)
++			     struct mana_rxq *rxq, bool napi_initialized)
+ 
+ {
+ 	struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
+@@ -2008,15 +2012,15 @@ static void mana_destroy_rxq(struct mana_port_context *apc,
+ 
+ 	napi = &rxq->rx_cq.napi;
+ 
+-	if (validate_state)
++	if (napi_initialized) {
+ 		napi_synchronize(napi);
+ 
+-	napi_disable(napi);
++		napi_disable(napi);
+ 
++		netif_napi_del(napi);
++	}
+ 	xdp_rxq_info_unreg(&rxq->xdp_rxq);
+ 
+-	netif_napi_del(napi);
+-
+ 	mana_destroy_wq_obj(apc, GDMA_RQ, rxq->rxobj);
+ 
+ 	mana_deinit_cq(apc, &rxq->rx_cq);
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 4e50b37928885..330eea349caae 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -156,12 +156,13 @@
+ #define AM65_CPSW_CPPI_TX_PKT_TYPE 0x7
+ 
+ /* XDP */
+-#define AM65_CPSW_XDP_CONSUMED 2
+-#define AM65_CPSW_XDP_REDIRECT 1
++#define AM65_CPSW_XDP_CONSUMED BIT(1)
++#define AM65_CPSW_XDP_REDIRECT BIT(0)
+ #define AM65_CPSW_XDP_PASS     0
+ 
+ /* Include headroom compatible with both skb and xdpf */
+-#define AM65_CPSW_HEADROOM (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN)
++#define AM65_CPSW_HEADROOM_NA (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN)
++#define AM65_CPSW_HEADROOM ALIGN(AM65_CPSW_HEADROOM_NA, sizeof(long))
+ 
+ static void am65_cpsw_port_set_sl_mac(struct am65_cpsw_port *slave,
+ 				      const u8 *dev_addr)
+@@ -933,7 +934,7 @@ static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
+ 	host_desc = k3_cppi_desc_pool_alloc(tx_chn->desc_pool);
+ 	if (unlikely(!host_desc)) {
+ 		ndev->stats.tx_dropped++;
+-		return -ENOMEM;
++		return AM65_CPSW_XDP_CONSUMED;	/* drop */
+ 	}
+ 
+ 	am65_cpsw_nuss_set_buf_type(tx_chn, host_desc, buf_type);
+@@ -942,7 +943,7 @@ static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
+ 				 pkt_len, DMA_TO_DEVICE);
+ 	if (unlikely(dma_mapping_error(tx_chn->dma_dev, dma_buf))) {
+ 		ndev->stats.tx_dropped++;
+-		ret = -ENOMEM;
++		ret = AM65_CPSW_XDP_CONSUMED;	/* drop */
+ 		goto pool_free;
+ 	}
+ 
+@@ -977,6 +978,7 @@ static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
+ 		/* Inform BQL */
+ 		netdev_tx_completed_queue(netif_txq, 1, pkt_len);
+ 		ndev->stats.tx_errors++;
++		ret = AM65_CPSW_XDP_CONSUMED; /* drop */
+ 		goto dma_unmap;
+ 	}
+ 
+@@ -996,7 +998,9 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
+ 			     int desc_idx, int cpu, int *len)
+ {
+ 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
++	struct am65_cpsw_ndev_priv *ndev_priv;
+ 	struct net_device *ndev = port->ndev;
++	struct am65_cpsw_ndev_stats *stats;
+ 	int ret = AM65_CPSW_XDP_CONSUMED;
+ 	struct am65_cpsw_tx_chn *tx_chn;
+ 	struct netdev_queue *netif_txq;
+@@ -1004,6 +1008,7 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
+ 	struct bpf_prog *prog;
+ 	struct page *page;
+ 	u32 act;
++	int err;
+ 
+ 	prog = READ_ONCE(port->xdp_prog);
+ 	if (!prog)
+@@ -1013,6 +1018,9 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
+ 	/* XDP prog might have changed packet data and boundaries */
+ 	*len = xdp->data_end - xdp->data;
+ 
++	ndev_priv = netdev_priv(ndev);
++	stats = this_cpu_ptr(ndev_priv->stats);
++
+ 	switch (act) {
+ 	case XDP_PASS:
+ 		ret = AM65_CPSW_XDP_PASS;
+@@ -1023,31 +1031,36 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
+ 
+ 		xdpf = xdp_convert_buff_to_frame(xdp);
+ 		if (unlikely(!xdpf))
+-			break;
++			goto drop;
+ 
+ 		__netif_tx_lock(netif_txq, cpu);
+-		ret = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf,
++		err = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf,
+ 					     AM65_CPSW_TX_BUF_TYPE_XDP_TX);
+ 		__netif_tx_unlock(netif_txq);
+-		if (ret)
+-			break;
++		if (err)
++			goto drop;
+ 
+-		ndev->stats.rx_bytes += *len;
+-		ndev->stats.rx_packets++;
++		u64_stats_update_begin(&stats->syncp);
++		stats->rx_bytes += *len;
++		stats->rx_packets++;
++		u64_stats_update_end(&stats->syncp);
+ 		ret = AM65_CPSW_XDP_CONSUMED;
+ 		goto out;
+ 	case XDP_REDIRECT:
+ 		if (unlikely(xdp_do_redirect(ndev, xdp, prog)))
+-			break;
++			goto drop;
+ 
+-		ndev->stats.rx_bytes += *len;
+-		ndev->stats.rx_packets++;
++		u64_stats_update_begin(&stats->syncp);
++		stats->rx_bytes += *len;
++		stats->rx_packets++;
++		u64_stats_update_end(&stats->syncp);
+ 		ret = AM65_CPSW_XDP_REDIRECT;
+ 		goto out;
+ 	default:
+ 		bpf_warn_invalid_xdp_action(ndev, prog, act);
+ 		fallthrough;
+ 	case XDP_ABORTED:
++drop:
+ 		trace_xdp_exception(ndev, prog, act);
+ 		fallthrough;
+ 	case XDP_DROP:
+@@ -1056,7 +1069,6 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
+ 
+ 	page = virt_to_head_page(xdp->data);
+ 	am65_cpsw_put_page(rx_chn, page, true, desc_idx);
+-
+ out:
+ 	return ret;
+ }
+@@ -1095,7 +1107,7 @@ static void am65_cpsw_nuss_rx_csum(struct sk_buff *skb, u32 csum_info)
+ }
+ 
+ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
+-				     u32 flow_idx, int cpu)
++				     u32 flow_idx, int cpu, int *xdp_state)
+ {
+ 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
+ 	u32 buf_dma_len, pkt_len, port_id = 0, csum_info;
+@@ -1114,6 +1126,7 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
+ 	void **swdata;
+ 	u32 *psdata;
+ 
++	*xdp_state = AM65_CPSW_XDP_PASS;
+ 	ret = k3_udma_glue_pop_rx_chn(rx_chn->rx_chn, flow_idx, &desc_dma);
+ 	if (ret) {
+ 		if (ret != -ENODATA)
+@@ -1161,15 +1174,13 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
+ 	}
+ 
+ 	if (port->xdp_prog) {
+-		xdp_init_buff(&xdp, AM65_CPSW_MAX_PACKET_SIZE, &port->xdp_rxq);
+-
+-		xdp_prepare_buff(&xdp, page_addr, skb_headroom(skb),
++		xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq);
++		xdp_prepare_buff(&xdp, page_addr, AM65_CPSW_HEADROOM,
+ 				 pkt_len, false);
+-
+-		ret = am65_cpsw_run_xdp(common, port, &xdp, desc_idx,
+-					cpu, &pkt_len);
+-		if (ret != AM65_CPSW_XDP_PASS)
+-			return ret;
++		*xdp_state = am65_cpsw_run_xdp(common, port, &xdp, desc_idx,
++					       cpu, &pkt_len);
++		if (*xdp_state != AM65_CPSW_XDP_PASS)
++			goto allocate;
+ 
+ 		/* Compute additional headroom to be reserved */
+ 		headroom = (xdp.data - xdp.data_hard_start) - skb_headroom(skb);
+@@ -1193,9 +1204,13 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
+ 	stats->rx_bytes += pkt_len;
+ 	u64_stats_update_end(&stats->syncp);
+ 
++allocate:
+ 	new_page = page_pool_dev_alloc_pages(rx_chn->page_pool);
+-	if (unlikely(!new_page))
++	if (unlikely(!new_page)) {
++		dev_err(dev, "page alloc failed\n");
+ 		return -ENOMEM;
++	}
++
+ 	rx_chn->pages[desc_idx] = new_page;
+ 
+ 	if (netif_dormant(ndev)) {
+@@ -1229,8 +1244,9 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
+ 	struct am65_cpsw_common *common = am65_cpsw_napi_to_common(napi_rx);
+ 	int flow = AM65_CPSW_MAX_RX_FLOWS;
+ 	int cpu = smp_processor_id();
+-	bool xdp_redirect = false;
++	int xdp_state_or = 0;
+ 	int cur_budget, ret;
++	int xdp_state;
+ 	int num_rx = 0;
+ 
+ 	/* process every flow */
+@@ -1238,12 +1254,11 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
+ 		cur_budget = budget - num_rx;
+ 
+ 		while (cur_budget--) {
+-			ret = am65_cpsw_nuss_rx_packets(common, flow, cpu);
+-			if (ret) {
+-				if (ret == AM65_CPSW_XDP_REDIRECT)
+-					xdp_redirect = true;
++			ret = am65_cpsw_nuss_rx_packets(common, flow, cpu,
++							&xdp_state);
++			xdp_state_or |= xdp_state;
++			if (ret)
+ 				break;
+-			}
+ 			num_rx++;
+ 		}
+ 
+@@ -1251,7 +1266,7 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
+ 			break;
+ 	}
+ 
+-	if (xdp_redirect)
++	if (xdp_state_or & AM65_CPSW_XDP_REDIRECT)
+ 		xdp_do_flush();
+ 
+ 	dev_dbg(common->dev, "%s num_rx:%d %d\n", __func__, num_rx, budget);
+@@ -1918,12 +1933,13 @@ static int am65_cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *bpf)
+ static int am65_cpsw_ndo_xdp_xmit(struct net_device *ndev, int n,
+ 				  struct xdp_frame **frames, u32 flags)
+ {
++	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
+ 	struct am65_cpsw_tx_chn *tx_chn;
+ 	struct netdev_queue *netif_txq;
+ 	int cpu = smp_processor_id();
+ 	int i, nxmit = 0;
+ 
+-	tx_chn = &am65_ndev_to_common(ndev)->tx_chns[cpu % AM65_CPSW_MAX_TX_QUEUES];
++	tx_chn = &common->tx_chns[cpu % common->tx_ch_num];
+ 	netif_txq = netdev_get_tx_queue(ndev, tx_chn->id);
+ 
+ 	__netif_tx_lock(netif_txq, cpu);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h b/drivers/net/ethernet/xilinx/xilinx_axienet.h
+index 09c9f9787180b..1223fcc1a8dae 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
+@@ -436,6 +436,8 @@ struct skbuf_dma_descriptor {
+  * @tx_bytes:	TX byte count for statistics
+  * @tx_stat_sync: Synchronization object for TX stats
+  * @dma_err_task: Work structure to process Axi DMA errors
++ * @stopping:   Set when @dma_err_task shouldn't do anything because we are
++ *              about to stop the device.
+  * @tx_irq:	Axidma TX IRQ number
+  * @rx_irq:	Axidma RX IRQ number
+  * @eth_irq:	Ethernet core IRQ number
+@@ -507,6 +509,7 @@ struct axienet_local {
+ 	struct u64_stats_sync tx_stat_sync;
+ 
+ 	struct work_struct dma_err_task;
++	bool stopping;
+ 
+ 	int tx_irq;
+ 	int rx_irq;
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 559c0d60d9483..88d7bc2ea7132 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -1460,6 +1460,7 @@ static int axienet_init_legacy_dma(struct net_device *ndev)
+ 	struct axienet_local *lp = netdev_priv(ndev);
+ 
+ 	/* Enable worker thread for Axi DMA error handling */
++	lp->stopping = false;
+ 	INIT_WORK(&lp->dma_err_task, axienet_dma_err_handler);
+ 
+ 	napi_enable(&lp->napi_rx);
+@@ -1580,6 +1581,9 @@ static int axienet_stop(struct net_device *ndev)
+ 	dev_dbg(&ndev->dev, "axienet_close()\n");
+ 
+ 	if (!lp->use_dmaengine) {
++		WRITE_ONCE(lp->stopping, true);
++		flush_work(&lp->dma_err_task);
++
+ 		napi_disable(&lp->napi_tx);
+ 		napi_disable(&lp->napi_rx);
+ 	}
+@@ -2154,6 +2158,10 @@ static void axienet_dma_err_handler(struct work_struct *work)
+ 						dma_err_task);
+ 	struct net_device *ndev = lp->ndev;
+ 
++	/* Don't bother if we are going to stop anyway */
++	if (READ_ONCE(lp->stopping))
++		return;
++
+ 	napi_disable(&lp->napi_tx);
+ 	napi_disable(&lp->napi_rx);
+ 
+diff --git a/drivers/net/mctp/mctp-serial.c b/drivers/net/mctp/mctp-serial.c
+index 5bf6fdff701cd..346e6ad36054e 100644
+--- a/drivers/net/mctp/mctp-serial.c
++++ b/drivers/net/mctp/mctp-serial.c
+@@ -91,8 +91,8 @@ static int next_chunk_len(struct mctp_serial *dev)
+ 	 * will be those non-escaped bytes, and does not include the escaped
+ 	 * byte.
+ 	 */
+-	for (i = 1; i + dev->txpos + 1 < dev->txlen; i++) {
+-		if (needs_escape(dev->txbuf[dev->txpos + i + 1]))
++	for (i = 1; i + dev->txpos < dev->txlen; i++) {
++		if (needs_escape(dev->txbuf[dev->txpos + i]))
+ 			break;
+ 	}
+ 
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 6c6ec94757092..2c0ee5cf8b6e0 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -3346,11 +3346,13 @@ static int of_phy_leds(struct phy_device *phydev)
+ 		err = of_phy_led(phydev, led);
+ 		if (err) {
+ 			of_node_put(led);
++			of_node_put(leds);
+ 			phy_leds_unregister(phydev);
+ 			return err;
+ 		}
+ 	}
+ 
++	of_node_put(leds);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c
+index 687d70cfc5563..6eeef10edadad 100644
+--- a/drivers/net/usb/ipheth.c
++++ b/drivers/net/usb/ipheth.c
+@@ -475,8 +475,8 @@ static int ipheth_close(struct net_device *net)
+ {
+ 	struct ipheth_device *dev = netdev_priv(net);
+ 
+-	cancel_delayed_work_sync(&dev->carrier_work);
+ 	netif_stop_queue(net);
++	cancel_delayed_work_sync(&dev->carrier_work);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 19df1cd9f0724..51d5d4f0a8f9f 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -5177,14 +5177,23 @@ static void rtl8152_fw_mac_apply(struct r8152 *tp, struct fw_mac *mac)
+ 	data = (u8 *)mac;
+ 	data += __le16_to_cpu(mac->fw_offset);
+ 
+-	generic_ocp_write(tp, __le16_to_cpu(mac->fw_reg), 0xff, length, data,
+-			  type);
++	if (generic_ocp_write(tp, __le16_to_cpu(mac->fw_reg), 0xff, length,
++			      data, type) < 0) {
++		dev_err(&tp->intf->dev, "Write %s fw fail\n",
++			type ? "PLA" : "USB");
++		return;
++	}
+ 
+ 	ocp_write_word(tp, type, __le16_to_cpu(mac->bp_ba_addr),
+ 		       __le16_to_cpu(mac->bp_ba_value));
+ 
+-	generic_ocp_write(tp, __le16_to_cpu(mac->bp_start), BYTE_EN_DWORD,
+-			  __le16_to_cpu(mac->bp_num) << 1, mac->bp, type);
++	if (generic_ocp_write(tp, __le16_to_cpu(mac->bp_start), BYTE_EN_DWORD,
++			      ALIGN(__le16_to_cpu(mac->bp_num) << 1, 4),
++			      mac->bp, type) < 0) {
++		dev_err(&tp->intf->dev, "Write %s bp fail\n",
++			type ? "PLA" : "USB");
++		return;
++	}
+ 
+ 	bp_en_addr = __le16_to_cpu(mac->bp_en_addr);
+ 	if (bp_en_addr)
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 9fd516e8bb107..18eb5ba436df6 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -61,9 +61,6 @@
+ 
+ /*-------------------------------------------------------------------------*/
+ 
+-// randomly generated ethernet address
+-static u8	node_id [ETH_ALEN];
+-
+ /* use ethtool to change the level for any given device */
+ static int msg_level = -1;
+ module_param (msg_level, int, 0);
+@@ -1725,7 +1722,6 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ 
+ 	dev->net = net;
+ 	strscpy(net->name, "usb%d", sizeof(net->name));
+-	eth_hw_addr_set(net, node_id);
+ 
+ 	/* rx and tx sides can use different message sizes;
+ 	 * bind() should set rx_urb_size in that case.
+@@ -1801,9 +1797,9 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ 		goto out4;
+ 	}
+ 
+-	/* let userspace know we have a random address */
+-	if (ether_addr_equal(net->dev_addr, node_id))
+-		net->addr_assign_type = NET_ADDR_RANDOM;
++	/* this flags the device for user space */
++	if (!is_valid_ether_addr(net->dev_addr))
++		eth_hw_addr_random(net);
+ 
+ 	if ((dev->driver_info->flags & FLAG_WLAN) != 0)
+ 		SET_NETDEV_DEVTYPE(net, &wlan_type);
+@@ -2211,7 +2207,6 @@ static int __init usbnet_init(void)
+ 	BUILD_BUG_ON(
+ 		sizeof_field(struct sk_buff, cb) < sizeof(struct skb_data));
+ 
+-	eth_random_addr(node_id);
+ 	return 0;
+ }
+ module_init(usbnet_init);
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index ca0f17ddebbaa..0360ee5f0b65c 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -413,7 +413,7 @@ static int ath11k_ahb_power_up(struct ath11k_base *ab)
+ 	return ret;
+ }
+ 
+-static void ath11k_ahb_power_down(struct ath11k_base *ab, bool is_suspend)
++static void ath11k_ahb_power_down(struct ath11k_base *ab)
+ {
+ 	struct ath11k_ahb *ab_ahb = ath11k_ahb_priv(ab);
+ 
+@@ -1261,7 +1261,7 @@ static void ath11k_ahb_remove(struct platform_device *pdev)
+ 	struct ath11k_base *ab = platform_get_drvdata(pdev);
+ 
+ 	if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
+-		ath11k_ahb_power_down(ab, false);
++		ath11k_ahb_power_down(ab);
+ 		ath11k_debugfs_soc_destroy(ab);
+ 		ath11k_qmi_deinit_service(ab);
+ 		goto qmi_fail;
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index 47554c3619633..0edfea68afadf 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -906,6 +906,12 @@ int ath11k_core_suspend(struct ath11k_base *ab)
+ 		return ret;
+ 	}
+ 
++	ret = ath11k_wow_enable(ab);
++	if (ret) {
++		ath11k_warn(ab, "failed to enable wow during suspend: %d\n", ret);
++		return ret;
++	}
++
+ 	ret = ath11k_dp_rx_pktlog_stop(ab, false);
+ 	if (ret) {
+ 		ath11k_warn(ab, "failed to stop dp rx pktlog during suspend: %d\n",
+@@ -916,85 +922,29 @@ int ath11k_core_suspend(struct ath11k_base *ab)
+ 	ath11k_ce_stop_shadow_timers(ab);
+ 	ath11k_dp_stop_shadow_timers(ab);
+ 
+-	/* PM framework skips suspend_late/resume_early callbacks
+-	 * if other devices report errors in their suspend callbacks.
+-	 * However ath11k_core_resume() would still be called because
+-	 * here we return success thus kernel put us on dpm_suspended_list.
+-	 * Since we won't go through a power down/up cycle, there is
+-	 * no chance to call complete(&ab->restart_completed) in
+-	 * ath11k_core_restart(), making ath11k_core_resume() timeout.
+-	 * So call it here to avoid this issue. This also works in case
+-	 * no error happens thus suspend_late/resume_early get called,
+-	 * because it will be reinitialized in ath11k_core_resume_early().
+-	 */
+-	complete(&ab->restart_completed);
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL(ath11k_core_suspend);
+-
+-int ath11k_core_suspend_late(struct ath11k_base *ab)
+-{
+-	struct ath11k_pdev *pdev;
+-	struct ath11k *ar;
+-
+-	if (!ab->hw_params.supports_suspend)
+-		return -EOPNOTSUPP;
+-
+-	/* so far single_pdev_only chips have supports_suspend as true
+-	 * and only the first pdev is valid.
+-	 */
+-	pdev = ath11k_core_get_single_pdev(ab);
+-	ar = pdev->ar;
+-	if (!ar || ar->state != ATH11K_STATE_OFF)
+-		return 0;
+-
+ 	ath11k_hif_irq_disable(ab);
+ 	ath11k_hif_ce_irq_disable(ab);
+ 
+-	ath11k_hif_power_down(ab, true);
++	ret = ath11k_hif_suspend(ab);
++	if (ret) {
++		ath11k_warn(ab, "failed to suspend hif: %d\n", ret);
++		return ret;
++	}
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL(ath11k_core_suspend_late);
+-
+-int ath11k_core_resume_early(struct ath11k_base *ab)
+-{
+-	int ret;
+-	struct ath11k_pdev *pdev;
+-	struct ath11k *ar;
+-
+-	if (!ab->hw_params.supports_suspend)
+-		return -EOPNOTSUPP;
+-
+-	/* so far single_pdev_only chips have supports_suspend as true
+-	 * and only the first pdev is valid.
+-	 */
+-	pdev = ath11k_core_get_single_pdev(ab);
+-	ar = pdev->ar;
+-	if (!ar || ar->state != ATH11K_STATE_OFF)
+-		return 0;
+-
+-	reinit_completion(&ab->restart_completed);
+-	ret = ath11k_hif_power_up(ab);
+-	if (ret)
+-		ath11k_warn(ab, "failed to power up hif during resume: %d\n", ret);
+-
+-	return ret;
+-}
+-EXPORT_SYMBOL(ath11k_core_resume_early);
++EXPORT_SYMBOL(ath11k_core_suspend);
+ 
+ int ath11k_core_resume(struct ath11k_base *ab)
+ {
+ 	int ret;
+ 	struct ath11k_pdev *pdev;
+ 	struct ath11k *ar;
+-	long time_left;
+ 
+ 	if (!ab->hw_params.supports_suspend)
+ 		return -EOPNOTSUPP;
+ 
+-	/* so far single_pdev_only chips have supports_suspend as true
++	/* so far signle_pdev_only chips have supports_suspend as true
+ 	 * and only the first pdev is valid.
+ 	 */
+ 	pdev = ath11k_core_get_single_pdev(ab);
+@@ -1002,29 +952,29 @@ int ath11k_core_resume(struct ath11k_base *ab)
+ 	if (!ar || ar->state != ATH11K_STATE_OFF)
+ 		return 0;
+ 
+-	time_left = wait_for_completion_timeout(&ab->restart_completed,
+-						ATH11K_RESET_TIMEOUT_HZ);
+-	if (time_left == 0) {
+-		ath11k_warn(ab, "timeout while waiting for restart complete");
+-		return -ETIMEDOUT;
++	ret = ath11k_hif_resume(ab);
++	if (ret) {
++		ath11k_warn(ab, "failed to resume hif during resume: %d\n", ret);
++		return ret;
+ 	}
+ 
+-	if (ab->hw_params.current_cc_support &&
+-	    ar->alpha2[0] != 0 && ar->alpha2[1] != 0) {
+-		ret = ath11k_reg_set_cc(ar);
+-		if (ret) {
+-			ath11k_warn(ab, "failed to set country code during resume: %d\n",
+-				    ret);
+-			return ret;
+-		}
+-	}
++	ath11k_hif_ce_irq_enable(ab);
++	ath11k_hif_irq_enable(ab);
+ 
+ 	ret = ath11k_dp_rx_pktlog_start(ab);
+-	if (ret)
++	if (ret) {
+ 		ath11k_warn(ab, "failed to start rx pktlog during resume: %d\n",
+ 			    ret);
++		return ret;
++	}
+ 
+-	return ret;
++	ret = ath11k_wow_wakeup(ab);
++	if (ret) {
++		ath11k_warn(ab, "failed to wakeup wow during resume: %d\n", ret);
++		return ret;
++	}
++
++	return 0;
+ }
+ EXPORT_SYMBOL(ath11k_core_resume);
+ 
+@@ -2119,8 +2069,6 @@ static void ath11k_core_restart(struct work_struct *work)
+ 
+ 	if (!ab->is_reset)
+ 		ath11k_core_post_reconfigure_recovery(ab);
+-
+-	complete(&ab->restart_completed);
+ }
+ 
+ static void ath11k_core_reset(struct work_struct *work)
+@@ -2190,7 +2138,7 @@ static void ath11k_core_reset(struct work_struct *work)
+ 	ath11k_hif_irq_disable(ab);
+ 	ath11k_hif_ce_irq_disable(ab);
+ 
+-	ath11k_hif_power_down(ab, false);
++	ath11k_hif_power_down(ab);
+ 	ath11k_hif_power_up(ab);
+ 
+ 	ath11k_dbg(ab, ATH11K_DBG_BOOT, "reset started\n");
+@@ -2263,7 +2211,7 @@ void ath11k_core_deinit(struct ath11k_base *ab)
+ 
+ 	mutex_unlock(&ab->core_lock);
+ 
+-	ath11k_hif_power_down(ab, false);
++	ath11k_hif_power_down(ab);
+ 	ath11k_mac_destroy(ab);
+ 	ath11k_core_soc_destroy(ab);
+ 	ath11k_fw_destroy(ab);
+@@ -2316,7 +2264,6 @@ struct ath11k_base *ath11k_core_alloc(struct device *dev, size_t priv_size,
+ 	timer_setup(&ab->rx_replenish_retry, ath11k_ce_rx_replenish_retry, 0);
+ 	init_completion(&ab->htc_suspend);
+ 	init_completion(&ab->wow.wakeup_completed);
+-	init_completion(&ab->restart_completed);
+ 
+ 	ab->dev = dev;
+ 	ab->hif.bus = bus;
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index 205f40ee6b666..141ba4487cb42 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -1033,8 +1033,6 @@ struct ath11k_base {
+ 		DECLARE_BITMAP(fw_features, ATH11K_FW_FEATURE_COUNT);
+ 	} fw;
+ 
+-	struct completion restart_completed;
+-
+ #ifdef CONFIG_NL80211_TESTMODE
+ 	struct {
+ 		u32 data_pos;
+@@ -1234,10 +1232,8 @@ void ath11k_core_free_bdf(struct ath11k_base *ab, struct ath11k_board_data *bd);
+ int ath11k_core_check_dt(struct ath11k_base *ath11k);
+ int ath11k_core_check_smbios(struct ath11k_base *ab);
+ void ath11k_core_halt(struct ath11k *ar);
+-int ath11k_core_resume_early(struct ath11k_base *ab);
+ int ath11k_core_resume(struct ath11k_base *ab);
+ int ath11k_core_suspend(struct ath11k_base *ab);
+-int ath11k_core_suspend_late(struct ath11k_base *ab);
+ void ath11k_core_pre_reconfigure_recovery(struct ath11k_base *ab);
+ bool ath11k_core_coldboot_cal_support(struct ath11k_base *ab);
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/hif.h b/drivers/net/wireless/ath/ath11k/hif.h
+index c4c6cc09c7c16..674ff772b181b 100644
+--- a/drivers/net/wireless/ath/ath11k/hif.h
++++ b/drivers/net/wireless/ath/ath11k/hif.h
+@@ -18,7 +18,7 @@ struct ath11k_hif_ops {
+ 	int (*start)(struct ath11k_base *ab);
+ 	void (*stop)(struct ath11k_base *ab);
+ 	int (*power_up)(struct ath11k_base *ab);
+-	void (*power_down)(struct ath11k_base *ab, bool is_suspend);
++	void (*power_down)(struct ath11k_base *ab);
+ 	int (*suspend)(struct ath11k_base *ab);
+ 	int (*resume)(struct ath11k_base *ab);
+ 	int (*map_service_to_pipe)(struct ath11k_base *ab, u16 service_id,
+@@ -67,18 +67,12 @@ static inline void ath11k_hif_irq_disable(struct ath11k_base *ab)
+ 
+ static inline int ath11k_hif_power_up(struct ath11k_base *ab)
+ {
+-	if (!ab->hif.ops->power_up)
+-		return -EOPNOTSUPP;
+-
+ 	return ab->hif.ops->power_up(ab);
+ }
+ 
+-static inline void ath11k_hif_power_down(struct ath11k_base *ab, bool is_suspend)
++static inline void ath11k_hif_power_down(struct ath11k_base *ab)
+ {
+-	if (!ab->hif.ops->power_down)
+-		return;
+-
+-	ab->hif.ops->power_down(ab, is_suspend);
++	ab->hif.ops->power_down(ab);
+ }
+ 
+ static inline int ath11k_hif_suspend(struct ath11k_base *ab)
+diff --git a/drivers/net/wireless/ath/ath11k/mhi.c b/drivers/net/wireless/ath/ath11k/mhi.c
+index ab182690aed32..6974a551883fc 100644
+--- a/drivers/net/wireless/ath/ath11k/mhi.c
++++ b/drivers/net/wireless/ath/ath11k/mhi.c
+@@ -453,17 +453,9 @@ int ath11k_mhi_start(struct ath11k_pci *ab_pci)
+ 	return 0;
+ }
+ 
+-void ath11k_mhi_stop(struct ath11k_pci *ab_pci, bool is_suspend)
++void ath11k_mhi_stop(struct ath11k_pci *ab_pci)
+ {
+-	/* During suspend we need to use mhi_power_down_keep_dev()
+-	 * workaround, otherwise ath11k_core_resume() will timeout
+-	 * during resume.
+-	 */
+-	if (is_suspend)
+-		mhi_power_down_keep_dev(ab_pci->mhi_ctrl, true);
+-	else
+-		mhi_power_down(ab_pci->mhi_ctrl, true);
+-
++	mhi_power_down(ab_pci->mhi_ctrl, true);
+ 	mhi_unprepare_after_power_down(ab_pci->mhi_ctrl);
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/mhi.h b/drivers/net/wireless/ath/ath11k/mhi.h
+index 2d567705e7323..a682aad52fc51 100644
+--- a/drivers/net/wireless/ath/ath11k/mhi.h
++++ b/drivers/net/wireless/ath/ath11k/mhi.h
+@@ -18,7 +18,7 @@
+ #define MHICTRL_RESET_MASK			0x2
+ 
+ int ath11k_mhi_start(struct ath11k_pci *ar_pci);
+-void ath11k_mhi_stop(struct ath11k_pci *ar_pci, bool is_suspend);
++void ath11k_mhi_stop(struct ath11k_pci *ar_pci);
+ int ath11k_mhi_register(struct ath11k_pci *ar_pci);
+ void ath11k_mhi_unregister(struct ath11k_pci *ar_pci);
+ void ath11k_mhi_set_mhictrl_reset(struct ath11k_base *ab);
+@@ -26,4 +26,5 @@ void ath11k_mhi_clear_vector(struct ath11k_base *ab);
+ 
+ int ath11k_mhi_suspend(struct ath11k_pci *ar_pci);
+ int ath11k_mhi_resume(struct ath11k_pci *ar_pci);
++
+ #endif
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index 8d63b84d12614..be9d2c69cc413 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -638,7 +638,7 @@ static int ath11k_pci_power_up(struct ath11k_base *ab)
+ 	return 0;
+ }
+ 
+-static void ath11k_pci_power_down(struct ath11k_base *ab, bool is_suspend)
++static void ath11k_pci_power_down(struct ath11k_base *ab)
+ {
+ 	struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
+ 
+@@ -649,7 +649,7 @@ static void ath11k_pci_power_down(struct ath11k_base *ab, bool is_suspend)
+ 
+ 	ath11k_pci_msi_disable(ab_pci);
+ 
+-	ath11k_mhi_stop(ab_pci, is_suspend);
++	ath11k_mhi_stop(ab_pci);
+ 	clear_bit(ATH11K_FLAG_DEVICE_INIT_DONE, &ab->dev_flags);
+ 	ath11k_pci_sw_reset(ab_pci->ab, false);
+ }
+@@ -970,7 +970,7 @@ static void ath11k_pci_remove(struct pci_dev *pdev)
+ 	ath11k_pci_set_irq_affinity_hint(ab_pci, NULL);
+ 
+ 	if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
+-		ath11k_pci_power_down(ab, false);
++		ath11k_pci_power_down(ab);
+ 		ath11k_debugfs_soc_destroy(ab);
+ 		ath11k_qmi_deinit_service(ab);
+ 		goto qmi_fail;
+@@ -998,7 +998,7 @@ static void ath11k_pci_shutdown(struct pci_dev *pdev)
+ 	struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
+ 
+ 	ath11k_pci_set_irq_affinity_hint(ab_pci, NULL);
+-	ath11k_pci_power_down(ab, false);
++	ath11k_pci_power_down(ab);
+ }
+ 
+ static __maybe_unused int ath11k_pci_pm_suspend(struct device *dev)
+@@ -1035,39 +1035,9 @@ static __maybe_unused int ath11k_pci_pm_resume(struct device *dev)
+ 	return ret;
+ }
+ 
+-static __maybe_unused int ath11k_pci_pm_suspend_late(struct device *dev)
+-{
+-	struct ath11k_base *ab = dev_get_drvdata(dev);
+-	int ret;
+-
+-	ret = ath11k_core_suspend_late(ab);
+-	if (ret)
+-		ath11k_warn(ab, "failed to late suspend core: %d\n", ret);
+-
+-	/* Similar to ath11k_pci_pm_suspend(), we return success here
+-	 * even error happens, to allow system suspend/hibernation survive.
+-	 */
+-	return 0;
+-}
+-
+-static __maybe_unused int ath11k_pci_pm_resume_early(struct device *dev)
+-{
+-	struct ath11k_base *ab = dev_get_drvdata(dev);
+-	int ret;
+-
+-	ret = ath11k_core_resume_early(ab);
+-	if (ret)
+-		ath11k_warn(ab, "failed to early resume core: %d\n", ret);
+-
+-	return ret;
+-}
+-
+-static const struct dev_pm_ops __maybe_unused ath11k_pci_pm_ops = {
+-	SET_SYSTEM_SLEEP_PM_OPS(ath11k_pci_pm_suspend,
+-				ath11k_pci_pm_resume)
+-	SET_LATE_SYSTEM_SLEEP_PM_OPS(ath11k_pci_pm_suspend_late,
+-				     ath11k_pci_pm_resume_early)
+-};
++static SIMPLE_DEV_PM_OPS(ath11k_pci_pm_ops,
++			 ath11k_pci_pm_suspend,
++			 ath11k_pci_pm_resume);
+ 
+ static struct pci_driver ath11k_pci_driver = {
+ 	.name = "ath11k_pci",
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index aa160e6fe24f1..e0d052f186b2f 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -2877,7 +2877,7 @@ int ath11k_qmi_fwreset_from_cold_boot(struct ath11k_base *ab)
+ 	}
+ 
+ 	/* reset the firmware */
+-	ath11k_hif_power_down(ab, false);
++	ath11k_hif_power_down(ab);
+ 	ath11k_hif_power_up(ab);
+ 	ath11k_dbg(ab, ATH11K_DBG_QMI, "exit wait for cold boot done\n");
+ 	return 0;
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 8474e25d2ac64..7037004ce9771 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -1881,7 +1881,9 @@ static void ath12k_peer_assoc_h_he(struct ath12k *ar,
+ {
+ 	const struct ieee80211_sta_he_cap *he_cap = &sta->deflink.he_cap;
+ 	int i;
+-	u8 ampdu_factor, rx_mcs_80, rx_mcs_160, max_nss;
++	u8 ampdu_factor, max_nss;
++	u8 rx_mcs_80 = IEEE80211_HE_MCS_NOT_SUPPORTED;
++	u8 rx_mcs_160 = IEEE80211_HE_MCS_NOT_SUPPORTED;
+ 	u16 mcs_160_map, mcs_80_map;
+ 	bool support_160;
+ 	u16 v;
+@@ -3845,6 +3847,11 @@ static int ath12k_station_assoc(struct ath12k *ar,
+ 
+ 	ath12k_peer_assoc_prepare(ar, vif, sta, &peer_arg, reassoc);
+ 
++	if (peer_arg.peer_nss < 1) {
++		ath12k_warn(ar->ab,
++			    "invalid peer NSS %d\n", peer_arg.peer_nss);
++		return -EINVAL;
++	}
+ 	ret = ath12k_wmi_send_peer_assoc_cmd(ar, &peer_arg);
+ 	if (ret) {
+ 		ath12k_warn(ar->ab, "failed to run peer assoc for STA %pM vdev %i: %d\n",
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
+index 92860dc0a92eb..676604cb5a227 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
+@@ -1090,6 +1090,7 @@ static int ieee_hw_init(struct ieee80211_hw *hw)
+ 	ieee80211_hw_set(hw, AMPDU_AGGREGATION);
+ 	ieee80211_hw_set(hw, SIGNAL_DBM);
+ 	ieee80211_hw_set(hw, REPORTS_TX_ACK_STATUS);
++	ieee80211_hw_set(hw, MFP_CAPABLE);
+ 
+ 	hw->extra_tx_headroom = brcms_c_get_header_len();
+ 	hw->queues = N_TX_QUEUES;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+index ded094b6b63df..bc40242aaadd3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+@@ -1442,7 +1442,8 @@ iwl_mvm_rcu_dereference_vif_id(struct iwl_mvm *mvm, u8 vif_id, bool rcu)
+ static inline struct ieee80211_bss_conf *
+ iwl_mvm_rcu_fw_link_id_to_link_conf(struct iwl_mvm *mvm, u8 link_id, bool rcu)
+ {
+-	if (WARN_ON(link_id >= ARRAY_SIZE(mvm->link_id_to_link_conf)))
++	if (IWL_FW_CHECK(mvm, link_id >= ARRAY_SIZE(mvm->link_id_to_link_conf),
++			 "erroneous FW link ID: %d\n", link_id))
+ 		return NULL;
+ 
+ 	if (rcu)
+diff --git a/drivers/net/wireless/marvell/mwifiex/main.h b/drivers/net/wireless/marvell/mwifiex/main.h
+index 175882485a195..c5164ae41b547 100644
+--- a/drivers/net/wireless/marvell/mwifiex/main.h
++++ b/drivers/net/wireless/marvell/mwifiex/main.h
+@@ -1287,6 +1287,9 @@ mwifiex_get_priv_by_id(struct mwifiex_adapter *adapter,
+ 
+ 	for (i = 0; i < adapter->priv_num; i++) {
+ 		if (adapter->priv[i]) {
++			if (adapter->priv[i]->bss_mode == NL80211_IFTYPE_UNSPECIFIED)
++				continue;
++
+ 			if ((adapter->priv[i]->bss_num == bss_num) &&
+ 			    (adapter->priv[i]->bss_type == bss_type))
+ 				break;
+diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
+index 0001a1ab6f38b..edc1507514f6d 100644
+--- a/drivers/net/wireless/realtek/rtw88/usb.c
++++ b/drivers/net/wireless/realtek/rtw88/usb.c
+@@ -744,7 +744,6 @@ static struct rtw_hci_ops rtw_usb_ops = {
+ static int rtw_usb_init_rx(struct rtw_dev *rtwdev)
+ {
+ 	struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev);
+-	int i;
+ 
+ 	rtwusb->rxwq = create_singlethread_workqueue("rtw88_usb: rx wq");
+ 	if (!rtwusb->rxwq) {
+@@ -756,13 +755,19 @@ static int rtw_usb_init_rx(struct rtw_dev *rtwdev)
+ 
+ 	INIT_WORK(&rtwusb->rx_work, rtw_usb_rx_handler);
+ 
++	return 0;
++}
++
++static void rtw_usb_setup_rx(struct rtw_dev *rtwdev)
++{
++	struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev);
++	int i;
++
+ 	for (i = 0; i < RTW_USB_RXCB_NUM; i++) {
+ 		struct rx_usb_ctrl_block *rxcb = &rtwusb->rx_cb[i];
+ 
+ 		rtw_usb_rx_resubmit(rtwusb, rxcb);
+ 	}
+-
+-	return 0;
+ }
+ 
+ static void rtw_usb_deinit_rx(struct rtw_dev *rtwdev)
+@@ -899,6 +904,8 @@ int rtw_usb_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 		goto err_destroy_rxwq;
+ 	}
+ 
++	rtw_usb_setup_rx(rtwdev);
++
+ 	return 0;
+ 
+ err_destroy_rxwq:
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index ddc390d24ec1a..ddf45828086d2 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -1917,7 +1917,8 @@ static void rtw89_vif_rx_stats_iter(void *data, u8 *mac,
+ 		return;
+ 
+ 	if (ieee80211_is_beacon(hdr->frame_control)) {
+-		if (vif->type == NL80211_IFTYPE_STATION) {
++		if (vif->type == NL80211_IFTYPE_STATION &&
++		    !test_bit(RTW89_FLAG_WOWLAN, rtwdev->flags)) {
+ 			rtw89_vif_sync_bcn_tsf(rtwvif, hdr, skb->len);
+ 			rtw89_fw_h2c_rssi_offload(rtwdev, phy_ppdu);
+ 		}
+diff --git a/drivers/nvme/host/constants.c b/drivers/nvme/host/constants.c
+index 6f2ebb5fcdb05..2b9e6cfaf2a80 100644
+--- a/drivers/nvme/host/constants.c
++++ b/drivers/nvme/host/constants.c
+@@ -173,7 +173,7 @@ static const char * const nvme_statuses[] = {
+ 
+ const char *nvme_get_error_status_str(u16 status)
+ {
+-	status &= 0x7ff;
++	status &= NVME_SCT_SC_MASK;
+ 	if (status < ARRAY_SIZE(nvme_statuses) && nvme_statuses[status])
+ 		return nvme_statuses[status];
+ 	return "Unknown";
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index d973d063bbf50..5569cf4183b2a 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -261,7 +261,7 @@ void nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl)
+ 
+ static blk_status_t nvme_error_status(u16 status)
+ {
+-	switch (status & 0x7ff) {
++	switch (status & NVME_SCT_SC_MASK) {
+ 	case NVME_SC_SUCCESS:
+ 		return BLK_STS_OK;
+ 	case NVME_SC_CAP_EXCEEDED:
+@@ -307,7 +307,7 @@ static void nvme_retry_req(struct request *req)
+ 	u16 crd;
+ 
+ 	/* The mask and shift result must be <= 3 */
+-	crd = (nvme_req(req)->status & NVME_SC_CRD) >> 11;
++	crd = (nvme_req(req)->status & NVME_STATUS_CRD) >> 11;
+ 	if (crd)
+ 		delay = nvme_req(req)->ctrl->crdt[crd - 1] * 100;
+ 
+@@ -329,10 +329,10 @@ static void nvme_log_error(struct request *req)
+ 		       nvme_sect_to_lba(ns->head, blk_rq_pos(req)),
+ 		       blk_rq_bytes(req) >> ns->head->lba_shift,
+ 		       nvme_get_error_status_str(nr->status),
+-		       nr->status >> 8 & 7,	/* Status Code Type */
+-		       nr->status & 0xff,	/* Status Code */
+-		       nr->status & NVME_SC_MORE ? "MORE " : "",
+-		       nr->status & NVME_SC_DNR  ? "DNR "  : "");
++		       NVME_SCT(nr->status),		/* Status Code Type */
++		       nr->status & NVME_SC_MASK,	/* Status Code */
++		       nr->status & NVME_STATUS_MORE ? "MORE " : "",
++		       nr->status & NVME_STATUS_DNR  ? "DNR "  : "");
+ 		return;
+ 	}
+ 
+@@ -341,10 +341,10 @@ static void nvme_log_error(struct request *req)
+ 			   nvme_get_admin_opcode_str(nr->cmd->common.opcode),
+ 			   nr->cmd->common.opcode,
+ 			   nvme_get_error_status_str(nr->status),
+-			   nr->status >> 8 & 7,	/* Status Code Type */
+-			   nr->status & 0xff,	/* Status Code */
+-			   nr->status & NVME_SC_MORE ? "MORE " : "",
+-			   nr->status & NVME_SC_DNR  ? "DNR "  : "");
++			   NVME_SCT(nr->status),	/* Status Code Type */
++			   nr->status & NVME_SC_MASK,	/* Status Code */
++			   nr->status & NVME_STATUS_MORE ? "MORE " : "",
++			   nr->status & NVME_STATUS_DNR  ? "DNR "  : "");
+ }
+ 
+ static void nvme_log_err_passthru(struct request *req)
+@@ -359,10 +359,10 @@ static void nvme_log_err_passthru(struct request *req)
+ 		     nvme_get_admin_opcode_str(nr->cmd->common.opcode),
+ 		nr->cmd->common.opcode,
+ 		nvme_get_error_status_str(nr->status),
+-		nr->status >> 8 & 7,	/* Status Code Type */
+-		nr->status & 0xff,	/* Status Code */
+-		nr->status & NVME_SC_MORE ? "MORE " : "",
+-		nr->status & NVME_SC_DNR  ? "DNR "  : "",
++		NVME_SCT(nr->status),		/* Status Code Type */
++		nr->status & NVME_SC_MASK,	/* Status Code */
++		nr->status & NVME_STATUS_MORE ? "MORE " : "",
++		nr->status & NVME_STATUS_DNR  ? "DNR "  : "",
+ 		nr->cmd->common.cdw10,
+ 		nr->cmd->common.cdw11,
+ 		nr->cmd->common.cdw12,
+@@ -384,11 +384,11 @@ static inline enum nvme_disposition nvme_decide_disposition(struct request *req)
+ 		return COMPLETE;
+ 
+ 	if (blk_noretry_request(req) ||
+-	    (nvme_req(req)->status & NVME_SC_DNR) ||
++	    (nvme_req(req)->status & NVME_STATUS_DNR) ||
+ 	    nvme_req(req)->retries >= nvme_max_retries)
+ 		return COMPLETE;
+ 
+-	if ((nvme_req(req)->status & 0x7ff) == NVME_SC_AUTH_REQUIRED)
++	if ((nvme_req(req)->status & NVME_SCT_SC_MASK) == NVME_SC_AUTH_REQUIRED)
+ 		return AUTHENTICATE;
+ 
+ 	if (req->cmd_flags & REQ_NVME_MPATH) {
+@@ -1224,7 +1224,7 @@ EXPORT_SYMBOL_NS_GPL(nvme_passthru_end, NVME_TARGET_PASSTHRU);
+ 
+ /*
+  * Recommended frequency for KATO commands per NVMe 1.4 section 7.12.1:
+- * 
++ *
+  *   The host should send Keep Alive commands at half of the Keep Alive Timeout
+  *   accounting for transport roundtrip times [..].
+  */
+@@ -3887,7 +3887,7 @@ static void nvme_ns_remove_by_nsid(struct nvme_ctrl *ctrl, u32 nsid)
+ 
+ static void nvme_validate_ns(struct nvme_ns *ns, struct nvme_ns_info *info)
+ {
+-	int ret = NVME_SC_INVALID_NS | NVME_SC_DNR;
++	int ret = NVME_SC_INVALID_NS | NVME_STATUS_DNR;
+ 
+ 	if (!nvme_ns_ids_equal(&ns->head->ids, &info->ids)) {
+ 		dev_err(ns->ctrl->device,
+@@ -3903,7 +3903,7 @@ static void nvme_validate_ns(struct nvme_ns *ns, struct nvme_ns_info *info)
+ 	 *
+ 	 * TODO: we should probably schedule a delayed retry here.
+ 	 */
+-	if (ret > 0 && (ret & NVME_SC_DNR))
++	if (ret > 0 && (ret & NVME_STATUS_DNR))
+ 		nvme_ns_remove(ns);
+ }
+ 
+@@ -4095,7 +4095,7 @@ static void nvme_scan_work(struct work_struct *work)
+ 		 * they report) but don't actually support it.
+ 		 */
+ 		ret = nvme_scan_ns_list(ctrl);
+-		if (ret > 0 && ret & NVME_SC_DNR)
++		if (ret > 0 && ret & NVME_STATUS_DNR)
+ 			nvme_scan_ns_sequential(ctrl);
+ 	}
+ 	mutex_unlock(&ctrl->scan_lock);
+diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
+index ceb9c0ed3120b..b5a4b5fd573e0 100644
+--- a/drivers/nvme/host/fabrics.c
++++ b/drivers/nvme/host/fabrics.c
+@@ -187,7 +187,7 @@ int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
+ 	if (unlikely(ret != 0))
+ 		dev_err(ctrl->device,
+ 			"Property Get error: %d, offset %#x\n",
+-			ret > 0 ? ret & ~NVME_SC_DNR : ret, off);
++			ret > 0 ? ret & ~NVME_STATUS_DNR : ret, off);
+ 
+ 	return ret;
+ }
+@@ -233,7 +233,7 @@ int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
+ 	if (unlikely(ret != 0))
+ 		dev_err(ctrl->device,
+ 			"Property Get error: %d, offset %#x\n",
+-			ret > 0 ? ret & ~NVME_SC_DNR : ret, off);
++			ret > 0 ? ret & ~NVME_STATUS_DNR : ret, off);
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(nvmf_reg_read64);
+@@ -275,7 +275,7 @@ int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val)
+ 	if (unlikely(ret))
+ 		dev_err(ctrl->device,
+ 			"Property Set error: %d, offset %#x\n",
+-			ret > 0 ? ret & ~NVME_SC_DNR : ret, off);
++			ret > 0 ? ret & ~NVME_STATUS_DNR : ret, off);
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(nvmf_reg_write32);
+@@ -295,7 +295,7 @@ static void nvmf_log_connect_error(struct nvme_ctrl *ctrl,
+ 		int errval, int offset, struct nvme_command *cmd,
+ 		struct nvmf_connect_data *data)
+ {
+-	int err_sctype = errval & ~NVME_SC_DNR;
++	int err_sctype = errval & ~NVME_STATUS_DNR;
+ 
+ 	if (errval < 0) {
+ 		dev_err(ctrl->device,
+@@ -573,7 +573,7 @@ EXPORT_SYMBOL_GPL(nvmf_connect_io_queue);
+  */
+ bool nvmf_should_reconnect(struct nvme_ctrl *ctrl, int status)
+ {
+-	if (status > 0 && (status & NVME_SC_DNR))
++	if (status > 0 && (status & NVME_STATUS_DNR))
+ 		return false;
+ 
+ 	if (status == -EKEYREJECTED)
+diff --git a/drivers/nvme/host/fault_inject.c b/drivers/nvme/host/fault_inject.c
+index 1ba10a5c656d1..1d1b6441a3398 100644
+--- a/drivers/nvme/host/fault_inject.c
++++ b/drivers/nvme/host/fault_inject.c
+@@ -75,7 +75,7 @@ void nvme_should_fail(struct request *req)
+ 		/* inject status code and DNR bit */
+ 		status = fault_inject->status;
+ 		if (fault_inject->dont_retry)
+-			status |= NVME_SC_DNR;
++			status |= NVME_STATUS_DNR;
+ 		nvme_req(req)->status =	status;
+ 	}
+ }
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index f0b0813327491..beaad6576a670 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -3132,7 +3132,7 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
+ 	if (ctrl->ctrl.icdoff) {
+ 		dev_err(ctrl->ctrl.device, "icdoff %d is not supported!\n",
+ 				ctrl->ctrl.icdoff);
+-		ret = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		ret = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		goto out_stop_keep_alive;
+ 	}
+ 
+@@ -3140,7 +3140,7 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
+ 	if (!nvme_ctrl_sgl_supported(&ctrl->ctrl)) {
+ 		dev_err(ctrl->ctrl.device,
+ 			"Mandatory sgls are not supported!\n");
+-		ret = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		ret = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		goto out_stop_keep_alive;
+ 	}
+ 
+@@ -3325,7 +3325,7 @@ nvme_fc_reconnect_or_delete(struct nvme_fc_ctrl *ctrl, int status)
+ 		queue_delayed_work(nvme_wq, &ctrl->connect_work, recon_delay);
+ 	} else {
+ 		if (portptr->port_state == FC_OBJSTATE_ONLINE) {
+-			if (status > 0 && (status & NVME_SC_DNR))
++			if (status > 0 && (status & NVME_STATUS_DNR))
+ 				dev_warn(ctrl->ctrl.device,
+ 					 "NVME-FC{%d}: reconnect failure\n",
+ 					 ctrl->cnum);
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index d8b6b4648eaff..03a6868f4dbc1 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -83,7 +83,7 @@ void nvme_mpath_start_freeze(struct nvme_subsystem *subsys)
+ void nvme_failover_req(struct request *req)
+ {
+ 	struct nvme_ns *ns = req->q->queuedata;
+-	u16 status = nvme_req(req)->status & 0x7ff;
++	u16 status = nvme_req(req)->status & NVME_SCT_SC_MASK;
+ 	unsigned long flags;
+ 	struct bio *bio;
+ 
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 68b400f9c42d5..5e66bcb34d530 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -689,7 +689,7 @@ static inline u32 nvme_bytes_to_numd(size_t len)
+ 
+ static inline bool nvme_is_ana_error(u16 status)
+ {
+-	switch (status & 0x7ff) {
++	switch (status & NVME_SCT_SC_MASK) {
+ 	case NVME_SC_ANA_TRANSITION:
+ 	case NVME_SC_ANA_INACCESSIBLE:
+ 	case NVME_SC_ANA_PERSISTENT_LOSS:
+@@ -702,7 +702,7 @@ static inline bool nvme_is_ana_error(u16 status)
+ static inline bool nvme_is_path_error(u16 status)
+ {
+ 	/* check for a status code type of 'path related status' */
+-	return (status & 0x700) == 0x300;
++	return (status & NVME_SCT_MASK) == NVME_SCT_PATH;
+ }
+ 
+ /*
+@@ -877,7 +877,7 @@ enum {
+ 	NVME_SUBMIT_NOWAIT = (__force nvme_submit_flags_t)(1 << 1),
+ 	/* Set BLK_MQ_REQ_RESERVED when allocating request */
+ 	NVME_SUBMIT_RESERVED = (__force nvme_submit_flags_t)(1 << 2),
+-	/* Retry command when NVME_SC_DNR is not set in the result */
++	/* Retry command when NVME_STATUS_DNR is not set in the result */
+ 	NVME_SUBMIT_RETRY = (__force nvme_submit_flags_t)(1 << 3),
+ };
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index a823330567ff8..18d85575cdb48 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2473,6 +2473,12 @@ static unsigned int nvme_pci_nr_maps(struct nvme_dev *dev)
+ 
+ static void nvme_pci_update_nr_queues(struct nvme_dev *dev)
+ {
++	if (!dev->ctrl.tagset) {
++		nvme_alloc_io_tag_set(&dev->ctrl, &dev->tagset, &nvme_mq_ops,
++				nvme_pci_nr_maps(dev), sizeof(struct nvme_iod));
++		return;
++	}
++
+ 	blk_mq_update_nr_hw_queues(&dev->tagset, dev->online_queues - 1);
+ 	/* free previously allocated queues that are no longer usable */
+ 	nvme_free_queues(dev, dev->online_queues);
+@@ -2931,6 +2937,17 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev)
+ 		    dmi_match(DMI_BOARD_NAME, "NS5x_7xPU") ||
+ 		    dmi_match(DMI_BOARD_NAME, "PH4PRX1_PH6PRX1"))
+ 			return NVME_QUIRK_FORCE_NO_SIMPLE_SUSPEND;
++	} else if (pdev->vendor == 0x144d && pdev->device == 0xa80d) {
++		/*
++		 * Exclude Samsung 990 Evo from NVME_QUIRK_SIMPLE_SUSPEND
++		 * because of high power consumption (> 2 Watt) in s2idle
++		 * sleep. Only some boards with Intel CPU are affected.
++		 */
++		if (dmi_match(DMI_BOARD_NAME, "GMxPXxx") ||
++		    dmi_match(DMI_BOARD_NAME, "PH4PG31") ||
++		    dmi_match(DMI_BOARD_NAME, "PH4PRX1_PH6PRX1") ||
++		    dmi_match(DMI_BOARD_NAME, "PH6PG01_PH6PG71"))
++			return NVME_QUIRK_FORCE_NO_SIMPLE_SUSPEND;
+ 	}
+ 
+ 	/*
+diff --git a/drivers/nvme/host/pr.c b/drivers/nvme/host/pr.c
+index 8fa1ffcdaed48..7347ddf85f00b 100644
+--- a/drivers/nvme/host/pr.c
++++ b/drivers/nvme/host/pr.c
+@@ -72,12 +72,12 @@ static int nvme_send_ns_pr_command(struct nvme_ns *ns, struct nvme_command *c,
+ 	return nvme_submit_sync_cmd(ns->queue, c, data, data_len);
+ }
+ 
+-static int nvme_sc_to_pr_err(int nvme_sc)
++static int nvme_status_to_pr_err(int status)
+ {
+-	if (nvme_is_path_error(nvme_sc))
++	if (nvme_is_path_error(status))
+ 		return PR_STS_PATH_FAILED;
+ 
+-	switch (nvme_sc & 0x7ff) {
++	switch (status & NVME_SCT_SC_MASK) {
+ 	case NVME_SC_SUCCESS:
+ 		return PR_STS_SUCCESS;
+ 	case NVME_SC_RESERVATION_CONFLICT:
+@@ -121,7 +121,7 @@ static int nvme_pr_command(struct block_device *bdev, u32 cdw10,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return nvme_sc_to_pr_err(ret);
++	return nvme_status_to_pr_err(ret);
+ }
+ 
+ static int nvme_pr_register(struct block_device *bdev, u64 old,
+@@ -196,7 +196,7 @@ static int nvme_pr_resv_report(struct block_device *bdev, void *data,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return nvme_sc_to_pr_err(ret);
++	return nvme_status_to_pr_err(ret);
+ }
+ 
+ static int nvme_pr_read_keys(struct block_device *bdev,
+diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
+index f5b7054a4a05e..85006b2df8ae0 100644
+--- a/drivers/nvme/target/admin-cmd.c
++++ b/drivers/nvme/target/admin-cmd.c
+@@ -344,7 +344,7 @@ static void nvmet_execute_get_log_page(struct nvmet_req *req)
+ 	pr_debug("unhandled lid %d on qid %d\n",
+ 	       req->cmd->get_log_page.lid, req->sq->qid);
+ 	req->error_loc = offsetof(struct nvme_get_log_page_command, lid);
+-	nvmet_req_complete(req, NVME_SC_INVALID_FIELD | NVME_SC_DNR);
++	nvmet_req_complete(req, NVME_SC_INVALID_FIELD | NVME_STATUS_DNR);
+ }
+ 
+ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
+@@ -496,7 +496,7 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
+ 
+ 	if (le32_to_cpu(req->cmd->identify.nsid) == NVME_NSID_ALL) {
+ 		req->error_loc = offsetof(struct nvme_identify, nsid);
+-		status = NVME_SC_INVALID_NS | NVME_SC_DNR;
++		status = NVME_SC_INVALID_NS | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+@@ -587,6 +587,16 @@ static void nvmet_execute_identify_nslist(struct nvmet_req *req)
+ 	u16 status = 0;
+ 	int i = 0;
+ 
++	/*
++	 * NSID values 0xFFFFFFFE and NVME_NSID_ALL are invalid
++	 * See NVMe Base Specification, Active Namespace ID list (CNS 02h).
++	 */
++	if (min_nsid == 0xFFFFFFFE || min_nsid == NVME_NSID_ALL) {
++		req->error_loc = offsetof(struct nvme_identify, nsid);
++		status = NVME_SC_INVALID_NS | NVME_STATUS_DNR;
++		goto out;
++	}
++
+ 	list = kzalloc(buf_size, GFP_KERNEL);
+ 	if (!list) {
+ 		status = NVME_SC_INTERNAL;
+@@ -662,7 +672,7 @@ static void nvmet_execute_identify_desclist(struct nvmet_req *req)
+ 
+ 	if (sg_zero_buffer(req->sg, req->sg_cnt, NVME_IDENTIFY_DATA_SIZE - off,
+ 			off) != NVME_IDENTIFY_DATA_SIZE - off)
+-		status = NVME_SC_INTERNAL | NVME_SC_DNR;
++		status = NVME_SC_INTERNAL | NVME_STATUS_DNR;
+ 
+ out:
+ 	nvmet_req_complete(req, status);
+@@ -724,7 +734,7 @@ static void nvmet_execute_identify(struct nvmet_req *req)
+ 	pr_debug("unhandled identify cns %d on qid %d\n",
+ 	       req->cmd->identify.cns, req->sq->qid);
+ 	req->error_loc = offsetof(struct nvme_identify, cns);
+-	nvmet_req_complete(req, NVME_SC_INVALID_FIELD | NVME_SC_DNR);
++	nvmet_req_complete(req, NVME_SC_INVALID_FIELD | NVME_STATUS_DNR);
+ }
+ 
+ /*
+@@ -807,7 +817,7 @@ u16 nvmet_set_feat_async_event(struct nvmet_req *req, u32 mask)
+ 
+ 	if (val32 & ~mask) {
+ 		req->error_loc = offsetof(struct nvme_common_command, cdw11);
+-		return NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 	}
+ 
+ 	WRITE_ONCE(req->sq->ctrl->aen_enabled, val32);
+@@ -833,7 +843,7 @@ void nvmet_execute_set_features(struct nvmet_req *req)
+ 		ncqr = (cdw11 >> 16) & 0xffff;
+ 		nsqr = cdw11 & 0xffff;
+ 		if (ncqr == 0xffff || nsqr == 0xffff) {
+-			status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++			status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 			break;
+ 		}
+ 		nvmet_set_result(req,
+@@ -846,14 +856,14 @@ void nvmet_execute_set_features(struct nvmet_req *req)
+ 		status = nvmet_set_feat_async_event(req, NVMET_AEN_CFG_ALL);
+ 		break;
+ 	case NVME_FEAT_HOST_ID:
+-		status = NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR;
++		status = NVME_SC_CMD_SEQ_ERROR | NVME_STATUS_DNR;
+ 		break;
+ 	case NVME_FEAT_WRITE_PROTECT:
+ 		status = nvmet_set_feat_write_protect(req);
+ 		break;
+ 	default:
+ 		req->error_loc = offsetof(struct nvme_common_command, cdw10);
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		break;
+ 	}
+ 
+@@ -939,7 +949,7 @@ void nvmet_execute_get_features(struct nvmet_req *req)
+ 		if (!(req->cmd->common.cdw11 & cpu_to_le32(1 << 0))) {
+ 			req->error_loc =
+ 				offsetof(struct nvme_common_command, cdw11);
+-			status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++			status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 			break;
+ 		}
+ 
+@@ -952,7 +962,7 @@ void nvmet_execute_get_features(struct nvmet_req *req)
+ 	default:
+ 		req->error_loc =
+ 			offsetof(struct nvme_common_command, cdw10);
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		break;
+ 	}
+ 
+@@ -969,7 +979,7 @@ void nvmet_execute_async_event(struct nvmet_req *req)
+ 	mutex_lock(&ctrl->lock);
+ 	if (ctrl->nr_async_event_cmds >= NVMET_ASYNC_EVENTS) {
+ 		mutex_unlock(&ctrl->lock);
+-		nvmet_req_complete(req, NVME_SC_ASYNC_LIMIT | NVME_SC_DNR);
++		nvmet_req_complete(req, NVME_SC_ASYNC_LIMIT | NVME_STATUS_DNR);
+ 		return;
+ 	}
+ 	ctrl->async_event_cmds[ctrl->nr_async_event_cmds++] = req;
+@@ -1006,7 +1016,7 @@ u16 nvmet_parse_admin_cmd(struct nvmet_req *req)
+ 	if (nvme_is_fabrics(cmd))
+ 		return nvmet_parse_fabrics_admin_cmd(req);
+ 	if (unlikely(!nvmet_check_auth_status(req)))
+-		return NVME_SC_AUTH_REQUIRED | NVME_SC_DNR;
++		return NVME_SC_AUTH_REQUIRED | NVME_STATUS_DNR;
+ 	if (nvmet_is_disc_subsys(nvmet_req_subsys(req)))
+ 		return nvmet_parse_discovery_cmd(req);
+ 
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 4ff460ba28263..c0973810e6af5 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -55,18 +55,18 @@ inline u16 errno_to_nvme_status(struct nvmet_req *req, int errno)
+ 		return NVME_SC_SUCCESS;
+ 	case -ENOSPC:
+ 		req->error_loc = offsetof(struct nvme_rw_command, length);
+-		return NVME_SC_CAP_EXCEEDED | NVME_SC_DNR;
++		return NVME_SC_CAP_EXCEEDED | NVME_STATUS_DNR;
+ 	case -EREMOTEIO:
+ 		req->error_loc = offsetof(struct nvme_rw_command, slba);
+-		return  NVME_SC_LBA_RANGE | NVME_SC_DNR;
++		return  NVME_SC_LBA_RANGE | NVME_STATUS_DNR;
+ 	case -EOPNOTSUPP:
+ 		req->error_loc = offsetof(struct nvme_common_command, opcode);
+ 		switch (req->cmd->common.opcode) {
+ 		case nvme_cmd_dsm:
+ 		case nvme_cmd_write_zeroes:
+-			return NVME_SC_ONCS_NOT_SUPPORTED | NVME_SC_DNR;
++			return NVME_SC_ONCS_NOT_SUPPORTED | NVME_STATUS_DNR;
+ 		default:
+-			return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++			return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 		}
+ 		break;
+ 	case -ENODATA:
+@@ -76,7 +76,7 @@ inline u16 errno_to_nvme_status(struct nvmet_req *req, int errno)
+ 		fallthrough;
+ 	default:
+ 		req->error_loc = offsetof(struct nvme_common_command, opcode);
+-		return NVME_SC_INTERNAL | NVME_SC_DNR;
++		return NVME_SC_INTERNAL | NVME_STATUS_DNR;
+ 	}
+ }
+ 
+@@ -86,7 +86,7 @@ u16 nvmet_report_invalid_opcode(struct nvmet_req *req)
+ 		 req->sq->qid);
+ 
+ 	req->error_loc = offsetof(struct nvme_common_command, opcode);
+-	return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++	return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ }
+ 
+ static struct nvmet_subsys *nvmet_find_get_subsys(struct nvmet_port *port,
+@@ -97,7 +97,7 @@ u16 nvmet_copy_to_sgl(struct nvmet_req *req, off_t off, const void *buf,
+ {
+ 	if (sg_pcopy_from_buffer(req->sg, req->sg_cnt, buf, len, off) != len) {
+ 		req->error_loc = offsetof(struct nvme_common_command, dptr);
+-		return NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR;
++		return NVME_SC_SGL_INVALID_DATA | NVME_STATUS_DNR;
+ 	}
+ 	return 0;
+ }
+@@ -106,7 +106,7 @@ u16 nvmet_copy_from_sgl(struct nvmet_req *req, off_t off, void *buf, size_t len)
+ {
+ 	if (sg_pcopy_to_buffer(req->sg, req->sg_cnt, buf, len, off) != len) {
+ 		req->error_loc = offsetof(struct nvme_common_command, dptr);
+-		return NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR;
++		return NVME_SC_SGL_INVALID_DATA | NVME_STATUS_DNR;
+ 	}
+ 	return 0;
+ }
+@@ -115,7 +115,7 @@ u16 nvmet_zero_sgl(struct nvmet_req *req, off_t off, size_t len)
+ {
+ 	if (sg_zero_buffer(req->sg, req->sg_cnt, len, off) != len) {
+ 		req->error_loc = offsetof(struct nvme_common_command, dptr);
+-		return NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR;
++		return NVME_SC_SGL_INVALID_DATA | NVME_STATUS_DNR;
+ 	}
+ 	return 0;
+ }
+@@ -145,7 +145,7 @@ static void nvmet_async_events_failall(struct nvmet_ctrl *ctrl)
+ 	while (ctrl->nr_async_event_cmds) {
+ 		req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
+ 		mutex_unlock(&ctrl->lock);
+-		nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
++		nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_STATUS_DNR);
+ 		mutex_lock(&ctrl->lock);
+ 	}
+ 	mutex_unlock(&ctrl->lock);
+@@ -444,7 +444,7 @@ u16 nvmet_req_find_ns(struct nvmet_req *req)
+ 		req->error_loc = offsetof(struct nvme_common_command, nsid);
+ 		if (nvmet_subsys_nsid_exists(subsys, nsid))
+ 			return NVME_SC_INTERNAL_PATH_ERROR;
+-		return NVME_SC_INVALID_NS | NVME_SC_DNR;
++		return NVME_SC_INVALID_NS | NVME_STATUS_DNR;
+ 	}
+ 
+ 	percpu_ref_get(&req->ns->ref);
+@@ -904,7 +904,7 @@ static u16 nvmet_parse_io_cmd(struct nvmet_req *req)
+ 		return nvmet_parse_fabrics_io_cmd(req);
+ 
+ 	if (unlikely(!nvmet_check_auth_status(req)))
+-		return NVME_SC_AUTH_REQUIRED | NVME_SC_DNR;
++		return NVME_SC_AUTH_REQUIRED | NVME_STATUS_DNR;
+ 
+ 	ret = nvmet_check_ctrl_status(req);
+ 	if (unlikely(ret))
+@@ -967,7 +967,7 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
+ 	/* no support for fused commands yet */
+ 	if (unlikely(flags & (NVME_CMD_FUSE_FIRST | NVME_CMD_FUSE_SECOND))) {
+ 		req->error_loc = offsetof(struct nvme_common_command, flags);
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		goto fail;
+ 	}
+ 
+@@ -978,7 +978,7 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
+ 	 */
+ 	if (unlikely((flags & NVME_CMD_SGL_ALL) != NVME_CMD_SGL_METABUF)) {
+ 		req->error_loc = offsetof(struct nvme_common_command, flags);
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		goto fail;
+ 	}
+ 
+@@ -996,7 +996,7 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
+ 	trace_nvmet_req_init(req, req->cmd);
+ 
+ 	if (unlikely(!percpu_ref_tryget_live(&sq->ref))) {
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		goto fail;
+ 	}
+ 
+@@ -1023,7 +1023,7 @@ bool nvmet_check_transfer_len(struct nvmet_req *req, size_t len)
+ {
+ 	if (unlikely(len != req->transfer_len)) {
+ 		req->error_loc = offsetof(struct nvme_common_command, dptr);
+-		nvmet_req_complete(req, NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR);
++		nvmet_req_complete(req, NVME_SC_SGL_INVALID_DATA | NVME_STATUS_DNR);
+ 		return false;
+ 	}
+ 
+@@ -1035,7 +1035,7 @@ bool nvmet_check_data_len_lte(struct nvmet_req *req, size_t data_len)
+ {
+ 	if (unlikely(data_len > req->transfer_len)) {
+ 		req->error_loc = offsetof(struct nvme_common_command, dptr);
+-		nvmet_req_complete(req, NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR);
++		nvmet_req_complete(req, NVME_SC_SGL_INVALID_DATA | NVME_STATUS_DNR);
+ 		return false;
+ 	}
+ 
+@@ -1304,18 +1304,18 @@ u16 nvmet_check_ctrl_status(struct nvmet_req *req)
+ 	if (unlikely(!(req->sq->ctrl->cc & NVME_CC_ENABLE))) {
+ 		pr_err("got cmd %d while CC.EN == 0 on qid = %d\n",
+ 		       req->cmd->common.opcode, req->sq->qid);
+-		return NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR;
++		return NVME_SC_CMD_SEQ_ERROR | NVME_STATUS_DNR;
+ 	}
+ 
+ 	if (unlikely(!(req->sq->ctrl->csts & NVME_CSTS_RDY))) {
+ 		pr_err("got cmd %d while CSTS.RDY == 0 on qid = %d\n",
+ 		       req->cmd->common.opcode, req->sq->qid);
+-		return NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR;
++		return NVME_SC_CMD_SEQ_ERROR | NVME_STATUS_DNR;
+ 	}
+ 
+ 	if (unlikely(!nvmet_check_auth_status(req))) {
+ 		pr_warn("qid %d not authenticated\n", req->sq->qid);
+-		return NVME_SC_AUTH_REQUIRED | NVME_SC_DNR;
++		return NVME_SC_AUTH_REQUIRED | NVME_STATUS_DNR;
+ 	}
+ 	return 0;
+ }
+@@ -1389,7 +1389,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
+ 	int ret;
+ 	u16 status;
+ 
+-	status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
++	status = NVME_SC_CONNECT_INVALID_PARAM | NVME_STATUS_DNR;
+ 	subsys = nvmet_find_get_subsys(req->port, subsysnqn);
+ 	if (!subsys) {
+ 		pr_warn("connect request for invalid subsystem %s!\n",
+@@ -1405,7 +1405,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
+ 			hostnqn, subsysnqn);
+ 		req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(hostnqn);
+ 		up_read(&nvmet_config_sem);
+-		status = NVME_SC_CONNECT_INVALID_HOST | NVME_SC_DNR;
++		status = NVME_SC_CONNECT_INVALID_HOST | NVME_STATUS_DNR;
+ 		req->error_loc = offsetof(struct nvme_common_command, dptr);
+ 		goto out_put_subsystem;
+ 	}
+@@ -1456,7 +1456,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
+ 			     subsys->cntlid_min, subsys->cntlid_max,
+ 			     GFP_KERNEL);
+ 	if (ret < 0) {
+-		status = NVME_SC_CONNECT_CTRL_BUSY | NVME_SC_DNR;
++		status = NVME_SC_CONNECT_CTRL_BUSY | NVME_STATUS_DNR;
+ 		goto out_free_sqs;
+ 	}
+ 	ctrl->cntlid = ret;
+diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c
+index ce54da8c6b366..28843df5fa7c7 100644
+--- a/drivers/nvme/target/discovery.c
++++ b/drivers/nvme/target/discovery.c
+@@ -179,7 +179,7 @@ static void nvmet_execute_disc_get_log_page(struct nvmet_req *req)
+ 	if (req->cmd->get_log_page.lid != NVME_LOG_DISC) {
+ 		req->error_loc =
+ 			offsetof(struct nvme_get_log_page_command, lid);
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+@@ -187,7 +187,7 @@ static void nvmet_execute_disc_get_log_page(struct nvmet_req *req)
+ 	if (offset & 0x3) {
+ 		req->error_loc =
+ 			offsetof(struct nvme_get_log_page_command, lpo);
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+@@ -256,7 +256,7 @@ static void nvmet_execute_disc_identify(struct nvmet_req *req)
+ 
+ 	if (req->cmd->identify.cns != NVME_ID_CNS_CTRL) {
+ 		req->error_loc = offsetof(struct nvme_identify, cns);
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+@@ -320,7 +320,7 @@ static void nvmet_execute_disc_set_features(struct nvmet_req *req)
+ 	default:
+ 		req->error_loc =
+ 			offsetof(struct nvme_common_command, cdw10);
+-		stat = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		stat = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		break;
+ 	}
+ 
+@@ -345,7 +345,7 @@ static void nvmet_execute_disc_get_features(struct nvmet_req *req)
+ 	default:
+ 		req->error_loc =
+ 			offsetof(struct nvme_common_command, cdw10);
+-		stat = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		stat = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		break;
+ 	}
+ 
+@@ -361,7 +361,7 @@ u16 nvmet_parse_discovery_cmd(struct nvmet_req *req)
+ 		       cmd->common.opcode);
+ 		req->error_loc =
+ 			offsetof(struct nvme_common_command, opcode);
+-		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++		return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 	}
+ 
+ 	switch (cmd->common.opcode) {
+@@ -386,7 +386,7 @@ u16 nvmet_parse_discovery_cmd(struct nvmet_req *req)
+ 	default:
+ 		pr_debug("unhandled cmd %d\n", cmd->common.opcode);
+ 		req->error_loc = offsetof(struct nvme_common_command, opcode);
+-		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++		return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 	}
+ 
+ }
+diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c
+index cb34d644ed086..3f2857c17d956 100644
+--- a/drivers/nvme/target/fabrics-cmd-auth.c
++++ b/drivers/nvme/target/fabrics-cmd-auth.c
+@@ -189,26 +189,26 @@ void nvmet_execute_auth_send(struct nvmet_req *req)
+ 	u8 dhchap_status;
+ 
+ 	if (req->cmd->auth_send.secp != NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER) {
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		req->error_loc =
+ 			offsetof(struct nvmf_auth_send_command, secp);
+ 		goto done;
+ 	}
+ 	if (req->cmd->auth_send.spsp0 != 0x01) {
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		req->error_loc =
+ 			offsetof(struct nvmf_auth_send_command, spsp0);
+ 		goto done;
+ 	}
+ 	if (req->cmd->auth_send.spsp1 != 0x01) {
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		req->error_loc =
+ 			offsetof(struct nvmf_auth_send_command, spsp1);
+ 		goto done;
+ 	}
+ 	tl = le32_to_cpu(req->cmd->auth_send.tl);
+ 	if (!tl) {
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		req->error_loc =
+ 			offsetof(struct nvmf_auth_send_command, tl);
+ 		goto done;
+@@ -437,26 +437,26 @@ void nvmet_execute_auth_receive(struct nvmet_req *req)
+ 	u16 status = 0;
+ 
+ 	if (req->cmd->auth_receive.secp != NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER) {
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		req->error_loc =
+ 			offsetof(struct nvmf_auth_receive_command, secp);
+ 		goto done;
+ 	}
+ 	if (req->cmd->auth_receive.spsp0 != 0x01) {
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		req->error_loc =
+ 			offsetof(struct nvmf_auth_receive_command, spsp0);
+ 		goto done;
+ 	}
+ 	if (req->cmd->auth_receive.spsp1 != 0x01) {
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		req->error_loc =
+ 			offsetof(struct nvmf_auth_receive_command, spsp1);
+ 		goto done;
+ 	}
+ 	al = le32_to_cpu(req->cmd->auth_receive.al);
+ 	if (!al) {
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		req->error_loc =
+ 			offsetof(struct nvmf_auth_receive_command, al);
+ 		goto done;
+diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
+index 69d77d34bec11..c4b2eddd5666a 100644
+--- a/drivers/nvme/target/fabrics-cmd.c
++++ b/drivers/nvme/target/fabrics-cmd.c
+@@ -18,7 +18,7 @@ static void nvmet_execute_prop_set(struct nvmet_req *req)
+ 	if (req->cmd->prop_set.attrib & 1) {
+ 		req->error_loc =
+ 			offsetof(struct nvmf_property_set_command, attrib);
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+@@ -29,7 +29,7 @@ static void nvmet_execute_prop_set(struct nvmet_req *req)
+ 	default:
+ 		req->error_loc =
+ 			offsetof(struct nvmf_property_set_command, offset);
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 	}
+ out:
+ 	nvmet_req_complete(req, status);
+@@ -50,7 +50,7 @@ static void nvmet_execute_prop_get(struct nvmet_req *req)
+ 			val = ctrl->cap;
+ 			break;
+ 		default:
+-			status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++			status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 			break;
+ 		}
+ 	} else {
+@@ -65,7 +65,7 @@ static void nvmet_execute_prop_get(struct nvmet_req *req)
+ 			val = ctrl->csts;
+ 			break;
+ 		default:
+-			status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++			status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 			break;
+ 		}
+ 	}
+@@ -105,7 +105,7 @@ u16 nvmet_parse_fabrics_admin_cmd(struct nvmet_req *req)
+ 		pr_debug("received unknown capsule type 0x%x\n",
+ 			cmd->fabrics.fctype);
+ 		req->error_loc = offsetof(struct nvmf_common_command, fctype);
+-		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++		return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 	}
+ 
+ 	return 0;
+@@ -128,7 +128,7 @@ u16 nvmet_parse_fabrics_io_cmd(struct nvmet_req *req)
+ 		pr_debug("received unknown capsule type 0x%x\n",
+ 			cmd->fabrics.fctype);
+ 		req->error_loc = offsetof(struct nvmf_common_command, fctype);
+-		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++		return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 	}
+ 
+ 	return 0;
+@@ -147,14 +147,14 @@ static u16 nvmet_install_queue(struct nvmet_ctrl *ctrl, struct nvmet_req *req)
+ 		pr_warn("queue size zero!\n");
+ 		req->error_loc = offsetof(struct nvmf_connect_command, sqsize);
+ 		req->cqe->result.u32 = IPO_IATTR_CONNECT_SQE(sqsize);
+-		ret = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
++		ret = NVME_SC_CONNECT_INVALID_PARAM | NVME_STATUS_DNR;
+ 		goto err;
+ 	}
+ 
+ 	if (ctrl->sqs[qid] != NULL) {
+ 		pr_warn("qid %u has already been created\n", qid);
+ 		req->error_loc = offsetof(struct nvmf_connect_command, qid);
+-		return NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR;
++		return NVME_SC_CMD_SEQ_ERROR | NVME_STATUS_DNR;
+ 	}
+ 
+ 	/* for fabrics, this value applies to only the I/O Submission Queues */
+@@ -163,14 +163,14 @@ static u16 nvmet_install_queue(struct nvmet_ctrl *ctrl, struct nvmet_req *req)
+ 				sqsize, mqes, ctrl->cntlid);
+ 		req->error_loc = offsetof(struct nvmf_connect_command, sqsize);
+ 		req->cqe->result.u32 = IPO_IATTR_CONNECT_SQE(sqsize);
+-		return NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
++		return NVME_SC_CONNECT_INVALID_PARAM | NVME_STATUS_DNR;
+ 	}
+ 
+ 	old = cmpxchg(&req->sq->ctrl, NULL, ctrl);
+ 	if (old) {
+ 		pr_warn("queue already connected!\n");
+ 		req->error_loc = offsetof(struct nvmf_connect_command, opcode);
+-		return NVME_SC_CONNECT_CTRL_BUSY | NVME_SC_DNR;
++		return NVME_SC_CONNECT_CTRL_BUSY | NVME_STATUS_DNR;
+ 	}
+ 
+ 	/* note: convert queue size from 0's-based value to 1's-based value */
+@@ -230,14 +230,14 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
+ 		pr_warn("invalid connect version (%d).\n",
+ 			le16_to_cpu(c->recfmt));
+ 		req->error_loc = offsetof(struct nvmf_connect_command, recfmt);
+-		status = NVME_SC_CONNECT_FORMAT | NVME_SC_DNR;
++		status = NVME_SC_CONNECT_FORMAT | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+ 	if (unlikely(d->cntlid != cpu_to_le16(0xffff))) {
+ 		pr_warn("connect attempt for invalid controller ID %#x\n",
+ 			d->cntlid);
+-		status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
++		status = NVME_SC_CONNECT_INVALID_PARAM | NVME_STATUS_DNR;
+ 		req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(cntlid);
+ 		goto out;
+ 	}
+@@ -257,7 +257,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
+ 		       dhchap_status);
+ 		nvmet_ctrl_put(ctrl);
+ 		if (dhchap_status == NVME_AUTH_DHCHAP_FAILURE_FAILED)
+-			status = (NVME_SC_CONNECT_INVALID_HOST | NVME_SC_DNR);
++			status = (NVME_SC_CONNECT_INVALID_HOST | NVME_STATUS_DNR);
+ 		else
+ 			status = NVME_SC_INTERNAL;
+ 		goto out;
+@@ -305,7 +305,7 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
+ 	if (c->recfmt != 0) {
+ 		pr_warn("invalid connect version (%d).\n",
+ 			le16_to_cpu(c->recfmt));
+-		status = NVME_SC_CONNECT_FORMAT | NVME_SC_DNR;
++		status = NVME_SC_CONNECT_FORMAT | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+@@ -314,13 +314,13 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
+ 	ctrl = nvmet_ctrl_find_get(d->subsysnqn, d->hostnqn,
+ 				   le16_to_cpu(d->cntlid), req);
+ 	if (!ctrl) {
+-		status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
++		status = NVME_SC_CONNECT_INVALID_PARAM | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+ 	if (unlikely(qid > ctrl->subsys->max_qid)) {
+ 		pr_warn("invalid queue id (%d)\n", qid);
+-		status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
++		status = NVME_SC_CONNECT_INVALID_PARAM | NVME_STATUS_DNR;
+ 		req->cqe->result.u32 = IPO_IATTR_CONNECT_SQE(qid);
+ 		goto out_ctrl_put;
+ 	}
+@@ -350,13 +350,13 @@ u16 nvmet_parse_connect_cmd(struct nvmet_req *req)
+ 		pr_debug("invalid command 0x%x on unconnected queue.\n",
+ 			cmd->fabrics.opcode);
+ 		req->error_loc = offsetof(struct nvme_common_command, opcode);
+-		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++		return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 	}
+ 	if (cmd->fabrics.fctype != nvme_fabrics_type_connect) {
+ 		pr_debug("invalid capsule type 0x%x on unconnected queue.\n",
+ 			cmd->fabrics.fctype);
+ 		req->error_loc = offsetof(struct nvmf_common_command, fctype);
+-		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++		return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 	}
+ 
+ 	if (cmd->connect.qid == 0)
+diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
+index 6426aac2634ae..e511b055ece71 100644
+--- a/drivers/nvme/target/io-cmd-bdev.c
++++ b/drivers/nvme/target/io-cmd-bdev.c
+@@ -135,11 +135,11 @@ u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts)
+ 	 */
+ 	switch (blk_sts) {
+ 	case BLK_STS_NOSPC:
+-		status = NVME_SC_CAP_EXCEEDED | NVME_SC_DNR;
++		status = NVME_SC_CAP_EXCEEDED | NVME_STATUS_DNR;
+ 		req->error_loc = offsetof(struct nvme_rw_command, length);
+ 		break;
+ 	case BLK_STS_TARGET:
+-		status = NVME_SC_LBA_RANGE | NVME_SC_DNR;
++		status = NVME_SC_LBA_RANGE | NVME_STATUS_DNR;
+ 		req->error_loc = offsetof(struct nvme_rw_command, slba);
+ 		break;
+ 	case BLK_STS_NOTSUPP:
+@@ -147,10 +147,10 @@ u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts)
+ 		switch (req->cmd->common.opcode) {
+ 		case nvme_cmd_dsm:
+ 		case nvme_cmd_write_zeroes:
+-			status = NVME_SC_ONCS_NOT_SUPPORTED | NVME_SC_DNR;
++			status = NVME_SC_ONCS_NOT_SUPPORTED | NVME_STATUS_DNR;
+ 			break;
+ 		default:
+-			status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++			status = NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 		}
+ 		break;
+ 	case BLK_STS_MEDIUM:
+@@ -159,7 +159,7 @@ u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts)
+ 		break;
+ 	case BLK_STS_IOERR:
+ 	default:
+-		status = NVME_SC_INTERNAL | NVME_SC_DNR;
++		status = NVME_SC_INTERNAL | NVME_STATUS_DNR;
+ 		req->error_loc = offsetof(struct nvme_common_command, opcode);
+ 	}
+ 
+@@ -356,7 +356,7 @@ u16 nvmet_bdev_flush(struct nvmet_req *req)
+ 		return 0;
+ 
+ 	if (blkdev_issue_flush(req->ns->bdev))
+-		return NVME_SC_INTERNAL | NVME_SC_DNR;
++		return NVME_SC_INTERNAL | NVME_STATUS_DNR;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
+index f003782d4ecff..24d0e2418d2e6 100644
+--- a/drivers/nvme/target/passthru.c
++++ b/drivers/nvme/target/passthru.c
+@@ -306,7 +306,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
+ 		ns = nvme_find_get_ns(ctrl, nsid);
+ 		if (unlikely(!ns)) {
+ 			pr_err("failed to get passthru ns nsid:%u\n", nsid);
+-			status = NVME_SC_INVALID_NS | NVME_SC_DNR;
++			status = NVME_SC_INVALID_NS | NVME_STATUS_DNR;
+ 			goto out;
+ 		}
+ 
+@@ -426,7 +426,7 @@ u16 nvmet_parse_passthru_io_cmd(struct nvmet_req *req)
+ 		 * emulated in the future if regular targets grow support for
+ 		 * this feature.
+ 		 */
+-		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++		return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 	}
+ 
+ 	return nvmet_setup_passthru_command(req);
+@@ -478,7 +478,7 @@ static u16 nvmet_passthru_get_set_features(struct nvmet_req *req)
+ 	case NVME_FEAT_RESV_PERSIST:
+ 		/* No reservations, see nvmet_parse_passthru_io_cmd() */
+ 	default:
+-		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++		return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 	}
+ }
+ 
+@@ -546,7 +546,7 @@ u16 nvmet_parse_passthru_admin_cmd(struct nvmet_req *req)
+ 				req->p.use_workqueue = true;
+ 				return NVME_SC_SUCCESS;
+ 			}
+-			return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++			return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 		case NVME_ID_CNS_NS:
+ 			req->execute = nvmet_passthru_execute_cmd;
+ 			req->p.use_workqueue = true;
+@@ -558,7 +558,7 @@ u16 nvmet_parse_passthru_admin_cmd(struct nvmet_req *req)
+ 				req->p.use_workqueue = true;
+ 				return NVME_SC_SUCCESS;
+ 			}
+-			return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
++			return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 		default:
+ 			return nvmet_setup_passthru_command(req);
+ 		}
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 689bb5d3cfdcf..498b3ca596510 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -852,12 +852,12 @@ static u16 nvmet_rdma_map_sgl_inline(struct nvmet_rdma_rsp *rsp)
+ 	if (!nvme_is_write(rsp->req.cmd)) {
+ 		rsp->req.error_loc =
+ 			offsetof(struct nvme_common_command, opcode);
+-		return NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 	}
+ 
+ 	if (off + len > rsp->queue->dev->inline_data_size) {
+ 		pr_err("invalid inline data offset!\n");
+-		return NVME_SC_SGL_INVALID_OFFSET | NVME_SC_DNR;
++		return NVME_SC_SGL_INVALID_OFFSET | NVME_STATUS_DNR;
+ 	}
+ 
+ 	/* no data command? */
+@@ -919,7 +919,7 @@ static u16 nvmet_rdma_map_sgl(struct nvmet_rdma_rsp *rsp)
+ 			pr_err("invalid SGL subtype: %#x\n", sgl->type);
+ 			rsp->req.error_loc =
+ 				offsetof(struct nvme_common_command, dptr);
+-			return NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++			return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		}
+ 	case NVME_KEY_SGL_FMT_DATA_DESC:
+ 		switch (sgl->type & 0xf) {
+@@ -931,12 +931,12 @@ static u16 nvmet_rdma_map_sgl(struct nvmet_rdma_rsp *rsp)
+ 			pr_err("invalid SGL subtype: %#x\n", sgl->type);
+ 			rsp->req.error_loc =
+ 				offsetof(struct nvme_common_command, dptr);
+-			return NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++			return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		}
+ 	default:
+ 		pr_err("invalid SGL type: %#x\n", sgl->type);
+ 		rsp->req.error_loc = offsetof(struct nvme_common_command, dptr);
+-		return NVME_SC_SGL_INVALID_TYPE | NVME_SC_DNR;
++		return NVME_SC_SGL_INVALID_TYPE | NVME_STATUS_DNR;
+ 	}
+ }
+ 
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 380f22ee3ebba..45b46c55681f7 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -416,10 +416,10 @@ static int nvmet_tcp_map_data(struct nvmet_tcp_cmd *cmd)
+ 	if (sgl->type == ((NVME_SGL_FMT_DATA_DESC << 4) |
+ 			  NVME_SGL_FMT_OFFSET)) {
+ 		if (!nvme_is_write(cmd->req.cmd))
+-			return NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++			return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 
+ 		if (len > cmd->req.port->inline_data_size)
+-			return NVME_SC_SGL_INVALID_OFFSET | NVME_SC_DNR;
++			return NVME_SC_SGL_INVALID_OFFSET | NVME_STATUS_DNR;
+ 		cmd->pdu_len = len;
+ 	}
+ 	cmd->req.transfer_len += len;
+@@ -2146,8 +2146,10 @@ static u16 nvmet_tcp_install_queue(struct nvmet_sq *sq)
+ 	}
+ 
+ 	queue->nr_cmds = sq->size * 2;
+-	if (nvmet_tcp_alloc_cmds(queue))
++	if (nvmet_tcp_alloc_cmds(queue)) {
++		queue->nr_cmds = 0;
+ 		return NVME_SC_INTERNAL;
++	}
+ 	return 0;
+ }
+ 
+diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
+index 0021d06041c1e..af9e13be76786 100644
+--- a/drivers/nvme/target/zns.c
++++ b/drivers/nvme/target/zns.c
+@@ -100,7 +100,7 @@ void nvmet_execute_identify_ns_zns(struct nvmet_req *req)
+ 
+ 	if (le32_to_cpu(req->cmd->identify.nsid) == NVME_NSID_ALL) {
+ 		req->error_loc = offsetof(struct nvme_identify, nsid);
+-		status = NVME_SC_INVALID_NS | NVME_SC_DNR;
++		status = NVME_SC_INVALID_NS | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+@@ -121,7 +121,7 @@ void nvmet_execute_identify_ns_zns(struct nvmet_req *req)
+ 	}
+ 
+ 	if (!bdev_is_zoned(req->ns->bdev)) {
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		req->error_loc = offsetof(struct nvme_identify, nsid);
+ 		goto out;
+ 	}
+@@ -158,17 +158,17 @@ static u16 nvmet_bdev_validate_zone_mgmt_recv(struct nvmet_req *req)
+ 
+ 	if (sect >= get_capacity(req->ns->bdev->bd_disk)) {
+ 		req->error_loc = offsetof(struct nvme_zone_mgmt_recv_cmd, slba);
+-		return NVME_SC_LBA_RANGE | NVME_SC_DNR;
++		return NVME_SC_LBA_RANGE | NVME_STATUS_DNR;
+ 	}
+ 
+ 	if (out_bufsize < sizeof(struct nvme_zone_report)) {
+ 		req->error_loc = offsetof(struct nvme_zone_mgmt_recv_cmd, numd);
+-		return NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 	}
+ 
+ 	if (req->cmd->zmr.zra != NVME_ZRA_ZONE_REPORT) {
+ 		req->error_loc = offsetof(struct nvme_zone_mgmt_recv_cmd, zra);
+-		return NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 	}
+ 
+ 	switch (req->cmd->zmr.pr) {
+@@ -177,7 +177,7 @@ static u16 nvmet_bdev_validate_zone_mgmt_recv(struct nvmet_req *req)
+ 		break;
+ 	default:
+ 		req->error_loc = offsetof(struct nvme_zone_mgmt_recv_cmd, pr);
+-		return NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 	}
+ 
+ 	switch (req->cmd->zmr.zrasf) {
+@@ -193,7 +193,7 @@ static u16 nvmet_bdev_validate_zone_mgmt_recv(struct nvmet_req *req)
+ 	default:
+ 		req->error_loc =
+ 			offsetof(struct nvme_zone_mgmt_recv_cmd, zrasf);
+-		return NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 	}
+ 
+ 	return NVME_SC_SUCCESS;
+@@ -341,7 +341,7 @@ static u16 blkdev_zone_mgmt_errno_to_nvme_status(int ret)
+ 		return NVME_SC_SUCCESS;
+ 	case -EINVAL:
+ 	case -EIO:
+-		return NVME_SC_ZONE_INVALID_TRANSITION | NVME_SC_DNR;
++		return NVME_SC_ZONE_INVALID_TRANSITION | NVME_STATUS_DNR;
+ 	default:
+ 		return NVME_SC_INTERNAL;
+ 	}
+@@ -463,7 +463,7 @@ static u16 nvmet_bdev_execute_zmgmt_send_all(struct nvmet_req *req)
+ 	default:
+ 		/* this is needed to quiet compiler warning */
+ 		req->error_loc = offsetof(struct nvme_zone_mgmt_send_cmd, zsa);
+-		return NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 	}
+ 
+ 	return NVME_SC_SUCCESS;
+@@ -481,7 +481,7 @@ static void nvmet_bdev_zmgmt_send_work(struct work_struct *w)
+ 
+ 	if (op == REQ_OP_LAST) {
+ 		req->error_loc = offsetof(struct nvme_zone_mgmt_send_cmd, zsa);
+-		status = NVME_SC_ZONE_INVALID_TRANSITION | NVME_SC_DNR;
++		status = NVME_SC_ZONE_INVALID_TRANSITION | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+@@ -493,13 +493,13 @@ static void nvmet_bdev_zmgmt_send_work(struct work_struct *w)
+ 
+ 	if (sect >= get_capacity(bdev->bd_disk)) {
+ 		req->error_loc = offsetof(struct nvme_zone_mgmt_send_cmd, slba);
+-		status = NVME_SC_LBA_RANGE | NVME_SC_DNR;
++		status = NVME_SC_LBA_RANGE | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+ 	if (sect & (zone_sectors - 1)) {
+ 		req->error_loc = offsetof(struct nvme_zone_mgmt_send_cmd, slba);
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+@@ -551,13 +551,13 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
+ 
+ 	if (sect >= get_capacity(req->ns->bdev->bd_disk)) {
+ 		req->error_loc = offsetof(struct nvme_rw_command, slba);
+-		status = NVME_SC_LBA_RANGE | NVME_SC_DNR;
++		status = NVME_SC_LBA_RANGE | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+ 	if (sect & (bdev_zone_sectors(req->ns->bdev) - 1)) {
+ 		req->error_loc = offsetof(struct nvme_rw_command, slba);
+-		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ 		goto out;
+ 	}
+ 
+@@ -590,7 +590,7 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
+ 	}
+ 
+ 	if (total_len != nvmet_rw_data_len(req)) {
+-		status = NVME_SC_INTERNAL | NVME_SC_DNR;
++		status = NVME_SC_INTERNAL | NVME_STATUS_DNR;
+ 		goto out_put_bio;
+ 	}
+ 
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index f8dd7eb40fbe1..cccfa9fe81192 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -1258,13 +1258,13 @@ void nvmem_device_put(struct nvmem_device *nvmem)
+ EXPORT_SYMBOL_GPL(nvmem_device_put);
+ 
+ /**
+- * devm_nvmem_device_get() - Get nvmem cell of device form a given id
++ * devm_nvmem_device_get() - Get nvmem device of device form a given id
+  *
+  * @dev: Device that requests the nvmem device.
+  * @id: name id for the requested nvmem device.
+  *
+- * Return: ERR_PTR() on error or a valid pointer to a struct nvmem_cell
+- * on success.  The nvmem_cell will be freed by the automatically once the
++ * Return: ERR_PTR() on error or a valid pointer to a struct nvmem_device
++ * on success.  The nvmem_device will be freed by the automatically once the
+  * device is freed.
+  */
+ struct nvmem_device *devm_nvmem_device_get(struct device *dev, const char *id)
+diff --git a/drivers/nvmem/u-boot-env.c b/drivers/nvmem/u-boot-env.c
+index befbab156cda1..adabbfdad6fb6 100644
+--- a/drivers/nvmem/u-boot-env.c
++++ b/drivers/nvmem/u-boot-env.c
+@@ -176,6 +176,13 @@ static int u_boot_env_parse(struct u_boot_env *priv)
+ 		data_offset = offsetof(struct u_boot_env_image_broadcom, data);
+ 		break;
+ 	}
++
++	if (dev_size < data_offset) {
++		dev_err(dev, "Device too small for u-boot-env\n");
++		err = -EIO;
++		goto err_kfree;
++	}
++
+ 	crc32_addr = (__le32 *)(buf + crc32_offset);
+ 	crc32 = le32_to_cpu(*crc32_addr);
+ 	crc32_data_len = dev_size - crc32_data_offset;
+diff --git a/drivers/of/irq.c b/drivers/of/irq.c
+index c94203ce65bb3..8fd63100ba8f0 100644
+--- a/drivers/of/irq.c
++++ b/drivers/of/irq.c
+@@ -344,7 +344,8 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar
+ 	struct device_node *p;
+ 	const __be32 *addr;
+ 	u32 intsize;
+-	int i, res;
++	int i, res, addr_len;
++	__be32 addr_buf[3] = { 0 };
+ 
+ 	pr_debug("of_irq_parse_one: dev=%pOF, index=%d\n", device, index);
+ 
+@@ -353,13 +354,19 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar
+ 		return of_irq_parse_oldworld(device, index, out_irq);
+ 
+ 	/* Get the reg property (if any) */
+-	addr = of_get_property(device, "reg", NULL);
++	addr = of_get_property(device, "reg", &addr_len);
++
++	/* Prevent out-of-bounds read in case of longer interrupt parent address size */
++	if (addr_len > (3 * sizeof(__be32)))
++		addr_len = 3 * sizeof(__be32);
++	if (addr)
++		memcpy(addr_buf, addr, addr_len);
+ 
+ 	/* Try the new-style interrupts-extended first */
+ 	res = of_parse_phandle_with_args(device, "interrupts-extended",
+ 					"#interrupt-cells", index, out_irq);
+ 	if (!res)
+-		return of_irq_parse_raw(addr, out_irq);
++		return of_irq_parse_raw(addr_buf, out_irq);
+ 
+ 	/* Look for the interrupt parent. */
+ 	p = of_irq_find_parent(device);
+@@ -389,7 +396,7 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar
+ 
+ 
+ 	/* Check if there are any interrupt-map translations to process */
+-	res = of_irq_parse_raw(addr, out_irq);
++	res = of_irq_parse_raw(addr_buf, out_irq);
+  out:
+ 	of_node_put(p);
+ 	return res;
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index cd0e0022f91d6..483c954065135 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -34,6 +34,11 @@
+ #define PCIE_DEVICEID_SHIFT	16
+ 
+ /* Application registers */
++#define PID				0x000
++#define RTL				GENMASK(15, 11)
++#define RTL_SHIFT			11
++#define AM6_PCI_PG1_RTL_VER		0x15
++
+ #define CMD_STATUS			0x004
+ #define LTSSM_EN_VAL		        BIT(0)
+ #define OB_XLAT_EN_VAL		        BIT(1)
+@@ -104,6 +109,8 @@
+ 
+ #define to_keystone_pcie(x)		dev_get_drvdata((x)->dev)
+ 
++#define PCI_DEVICE_ID_TI_AM654X		0xb00c
++
+ struct ks_pcie_of_data {
+ 	enum dw_pcie_device_mode mode;
+ 	const struct dw_pcie_host_ops *host_ops;
+@@ -516,7 +523,11 @@ static int ks_pcie_start_link(struct dw_pcie *pci)
+ static void ks_pcie_quirk(struct pci_dev *dev)
+ {
+ 	struct pci_bus *bus = dev->bus;
++	struct keystone_pcie *ks_pcie;
++	struct device *bridge_dev;
+ 	struct pci_dev *bridge;
++	u32 val;
++
+ 	static const struct pci_device_id rc_pci_devids[] = {
+ 		{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK),
+ 		 .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, },
+@@ -528,6 +539,11 @@ static void ks_pcie_quirk(struct pci_dev *dev)
+ 		 .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, },
+ 		{ 0, },
+ 	};
++	static const struct pci_device_id am6_pci_devids[] = {
++		{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM654X),
++		 .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
++		{ 0, },
++	};
+ 
+ 	if (pci_is_root_bus(bus))
+ 		bridge = dev;
+@@ -549,10 +565,36 @@ static void ks_pcie_quirk(struct pci_dev *dev)
+ 	 */
+ 	if (pci_match_id(rc_pci_devids, bridge)) {
+ 		if (pcie_get_readrq(dev) > 256) {
+-			dev_info(&dev->dev, "limiting MRRS to 256\n");
++			dev_info(&dev->dev, "limiting MRRS to 256 bytes\n");
+ 			pcie_set_readrq(dev, 256);
+ 		}
+ 	}
++
++	/*
++	 * Memory transactions fail with PCI controller in AM654 PG1.0
++	 * when MRRS is set to more than 128 bytes. Force the MRRS to
++	 * 128 bytes in all downstream devices.
++	 */
++	if (pci_match_id(am6_pci_devids, bridge)) {
++		bridge_dev = pci_get_host_bridge_device(dev);
++		if (!bridge_dev && !bridge_dev->parent)
++			return;
++
++		ks_pcie = dev_get_drvdata(bridge_dev->parent);
++		if (!ks_pcie)
++			return;
++
++		val = ks_pcie_app_readl(ks_pcie, PID);
++		val &= RTL;
++		val >>= RTL_SHIFT;
++		if (val != AM6_PCI_PG1_RTL_VER)
++			return;
++
++		if (pcie_get_readrq(dev) > 128) {
++			dev_info(&dev->dev, "limiting MRRS to 128 bytes\n");
++			pcie_set_readrq(dev, 128);
++		}
++	}
+ }
+ DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, ks_pcie_quirk);
+ 
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 14772edcf0d34..7fa1fe5a29e3d 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -51,6 +51,7 @@
+ #define PARF_SID_OFFSET				0x234
+ #define PARF_BDF_TRANSLATE_CFG			0x24c
+ #define PARF_SLV_ADDR_SPACE_SIZE		0x358
++#define PARF_NO_SNOOP_OVERIDE			0x3d4
+ #define PARF_DEVICE_TYPE			0x1000
+ #define PARF_BDF_TO_SID_TABLE_N			0x2000
+ #define PARF_BDF_TO_SID_CFG			0x2c00
+@@ -118,6 +119,10 @@
+ /* PARF_LTSSM register fields */
+ #define LTSSM_EN				BIT(8)
+ 
++/* PARF_NO_SNOOP_OVERIDE register fields */
++#define WR_NO_SNOOP_OVERIDE_EN			BIT(1)
++#define RD_NO_SNOOP_OVERIDE_EN			BIT(3)
++
+ /* PARF_DEVICE_TYPE register fields */
+ #define DEVICE_TYPE_RC				0x4
+ 
+@@ -231,8 +236,15 @@ struct qcom_pcie_ops {
+ 	int (*config_sid)(struct qcom_pcie *pcie);
+ };
+ 
++ /**
++  * struct qcom_pcie_cfg - Per SoC config struct
++  * @ops: qcom PCIe ops structure
++  * @override_no_snoop: Override NO_SNOOP attribute in TLP to enable cache
++  * snooping
++  */
+ struct qcom_pcie_cfg {
+ 	const struct qcom_pcie_ops *ops;
++	bool override_no_snoop;
+ 	bool no_l0s;
+ };
+ 
+@@ -986,6 +998,12 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
+ 
+ static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie)
+ {
++	const struct qcom_pcie_cfg *pcie_cfg = pcie->cfg;
++
++	if (pcie_cfg->override_no_snoop)
++		writel(WR_NO_SNOOP_OVERIDE_EN | RD_NO_SNOOP_OVERIDE_EN,
++				pcie->parf + PARF_NO_SNOOP_OVERIDE);
++
+ 	qcom_pcie_clear_aspm_l0s(pcie->pci);
+ 	qcom_pcie_clear_hpc(pcie->pci);
+ 
+@@ -1366,6 +1384,11 @@ static const struct qcom_pcie_cfg cfg_1_9_0 = {
+ 	.ops = &ops_1_9_0,
+ };
+ 
++static const struct qcom_pcie_cfg cfg_1_34_0 = {
++	.ops = &ops_1_9_0,
++	.override_no_snoop = true,
++};
++
+ static const struct qcom_pcie_cfg cfg_2_1_0 = {
+ 	.ops = &ops_2_1_0,
+ };
+@@ -1667,7 +1690,7 @@ static const struct of_device_id qcom_pcie_match[] = {
+ 	{ .compatible = "qcom,pcie-msm8996", .data = &cfg_2_3_2 },
+ 	{ .compatible = "qcom,pcie-qcs404", .data = &cfg_2_4_0 },
+ 	{ .compatible = "qcom,pcie-sa8540p", .data = &cfg_sc8280xp },
+-	{ .compatible = "qcom,pcie-sa8775p", .data = &cfg_1_9_0},
++	{ .compatible = "qcom,pcie-sa8775p", .data = &cfg_1_34_0},
+ 	{ .compatible = "qcom,pcie-sc7280", .data = &cfg_1_9_0 },
+ 	{ .compatible = "qcom,pcie-sc8180x", .data = &cfg_1_9_0 },
+ 	{ .compatible = "qcom,pcie-sc8280xp", .data = &cfg_sc8280xp },
+diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c
+index 694349be9d0aa..573a41869c153 100644
+--- a/drivers/pci/hotplug/pnv_php.c
++++ b/drivers/pci/hotplug/pnv_php.c
+@@ -40,7 +40,6 @@ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
+ 				bool disable_device)
+ {
+ 	struct pci_dev *pdev = php_slot->pdev;
+-	int irq = php_slot->irq;
+ 	u16 ctrl;
+ 
+ 	if (php_slot->irq > 0) {
+@@ -59,7 +58,7 @@ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
+ 		php_slot->wq = NULL;
+ 	}
+ 
+-	if (disable_device || irq > 0) {
++	if (disable_device) {
+ 		if (pdev->msix_enabled)
+ 			pci_disable_msix(pdev);
+ 		else if (pdev->msi_enabled)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index dff09e4892d39..8db214d4b1d46 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -5441,10 +5441,12 @@ static void pci_bus_lock(struct pci_bus *bus)
+ {
+ 	struct pci_dev *dev;
+ 
++	pci_dev_lock(bus->self);
+ 	list_for_each_entry(dev, &bus->devices, bus_list) {
+-		pci_dev_lock(dev);
+ 		if (dev->subordinate)
+ 			pci_bus_lock(dev->subordinate);
++		else
++			pci_dev_lock(dev);
+ 	}
+ }
+ 
+@@ -5456,8 +5458,10 @@ static void pci_bus_unlock(struct pci_bus *bus)
+ 	list_for_each_entry(dev, &bus->devices, bus_list) {
+ 		if (dev->subordinate)
+ 			pci_bus_unlock(dev->subordinate);
+-		pci_dev_unlock(dev);
++		else
++			pci_dev_unlock(dev);
+ 	}
++	pci_dev_unlock(bus->self);
+ }
+ 
+ /* Return 1 on successful lock, 0 on contention */
+@@ -5465,15 +5469,15 @@ static int pci_bus_trylock(struct pci_bus *bus)
+ {
+ 	struct pci_dev *dev;
+ 
++	if (!pci_dev_trylock(bus->self))
++		return 0;
++
+ 	list_for_each_entry(dev, &bus->devices, bus_list) {
+-		if (!pci_dev_trylock(dev))
+-			goto unlock;
+ 		if (dev->subordinate) {
+-			if (!pci_bus_trylock(dev->subordinate)) {
+-				pci_dev_unlock(dev);
++			if (!pci_bus_trylock(dev->subordinate))
+ 				goto unlock;
+-			}
+-		}
++		} else if (!pci_dev_trylock(dev))
++			goto unlock;
+ 	}
+ 	return 1;
+ 
+@@ -5481,8 +5485,10 @@ static int pci_bus_trylock(struct pci_bus *bus)
+ 	list_for_each_entry_continue_reverse(dev, &bus->devices, bus_list) {
+ 		if (dev->subordinate)
+ 			pci_bus_unlock(dev->subordinate);
+-		pci_dev_unlock(dev);
++		else
++			pci_dev_unlock(dev);
+ 	}
++	pci_dev_unlock(bus->self);
+ 	return 0;
+ }
+ 
+@@ -5514,9 +5520,10 @@ static void pci_slot_lock(struct pci_slot *slot)
+ 	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
+ 		if (!dev->slot || dev->slot != slot)
+ 			continue;
+-		pci_dev_lock(dev);
+ 		if (dev->subordinate)
+ 			pci_bus_lock(dev->subordinate);
++		else
++			pci_dev_lock(dev);
+ 	}
+ }
+ 
+@@ -5542,14 +5549,13 @@ static int pci_slot_trylock(struct pci_slot *slot)
+ 	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
+ 		if (!dev->slot || dev->slot != slot)
+ 			continue;
+-		if (!pci_dev_trylock(dev))
+-			goto unlock;
+ 		if (dev->subordinate) {
+ 			if (!pci_bus_trylock(dev->subordinate)) {
+ 				pci_dev_unlock(dev);
+ 				goto unlock;
+ 			}
+-		}
++		} else if (!pci_dev_trylock(dev))
++			goto unlock;
+ 	}
+ 	return 1;
+ 
+@@ -5560,7 +5566,8 @@ static int pci_slot_trylock(struct pci_slot *slot)
+ 			continue;
+ 		if (dev->subordinate)
+ 			pci_bus_unlock(dev->subordinate);
+-		pci_dev_unlock(dev);
++		else
++			pci_dev_unlock(dev);
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/pcmcia/yenta_socket.c b/drivers/pcmcia/yenta_socket.c
+index 1365eaa20ff49..ff169124929cc 100644
+--- a/drivers/pcmcia/yenta_socket.c
++++ b/drivers/pcmcia/yenta_socket.c
+@@ -638,11 +638,11 @@ static int yenta_search_one_res(struct resource *root, struct resource *res,
+ 		start = PCIBIOS_MIN_CARDBUS_IO;
+ 		end = ~0U;
+ 	} else {
+-		unsigned long avail = root->end - root->start;
++		unsigned long avail = resource_size(root);
+ 		int i;
+ 		size = BRIDGE_MEM_MAX;
+-		if (size > avail/8) {
+-			size = (avail+1)/8;
++		if (size > (avail - 1) / 8) {
++			size = avail / 8;
+ 			/* round size down to next power of 2 */
+ 			i = 0;
+ 			while ((size /= 2) != 0)
+diff --git a/drivers/phy/xilinx/phy-zynqmp.c b/drivers/phy/xilinx/phy-zynqmp.c
+index d7d12cf3011a2..9cf0007cfd647 100644
+--- a/drivers/phy/xilinx/phy-zynqmp.c
++++ b/drivers/phy/xilinx/phy-zynqmp.c
+@@ -846,6 +846,7 @@ static struct phy *xpsgtr_xlate(struct device *dev,
+ 	phy_type = args->args[1];
+ 	phy_instance = args->args[2];
+ 
++	guard(mutex)(&gtr_phy->phy->mutex);
+ 	ret = xpsgtr_set_lane_type(gtr_phy, phy_type, phy_instance);
+ 	if (ret < 0) {
+ 		dev_err(gtr_dev->dev, "Invalid PHY type and/or instance\n");
+diff --git a/drivers/pinctrl/qcom/pinctrl-x1e80100.c b/drivers/pinctrl/qcom/pinctrl-x1e80100.c
+index 65ed933f05ce1..abfcdd3da9e82 100644
+--- a/drivers/pinctrl/qcom/pinctrl-x1e80100.c
++++ b/drivers/pinctrl/qcom/pinctrl-x1e80100.c
+@@ -1839,7 +1839,9 @@ static const struct msm_pinctrl_soc_data x1e80100_pinctrl = {
+ 	.ngroups = ARRAY_SIZE(x1e80100_groups),
+ 	.ngpios = 239,
+ 	.wakeirq_map = x1e80100_pdc_map,
+-	.nwakeirq_map = ARRAY_SIZE(x1e80100_pdc_map),
++	/* TODO: Enabling PDC currently breaks GPIO interrupts */
++	.nwakeirq_map = 0,
++	/* .nwakeirq_map = ARRAY_SIZE(x1e80100_pdc_map), */
+ 	.egpio_func = 9,
+ };
+ 
+diff --git a/drivers/platform/x86/dell/dell-smbios-base.c b/drivers/platform/x86/dell/dell-smbios-base.c
+index b562ed99ec4e7..4702669dbb605 100644
+--- a/drivers/platform/x86/dell/dell-smbios-base.c
++++ b/drivers/platform/x86/dell/dell-smbios-base.c
+@@ -587,7 +587,10 @@ static int __init dell_smbios_init(void)
+ 	return 0;
+ 
+ fail_sysfs:
+-	free_group(platform_device);
++	if (!wmi)
++		exit_dell_smbios_wmi();
++	if (!smm)
++		exit_dell_smbios_smm();
+ 
+ fail_create_group:
+ 	platform_device_del(platform_device);
+diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
+index ee2ced88ab34f..e7479b9b90cb1 100644
+--- a/drivers/ptp/ptp_ocp.c
++++ b/drivers/ptp/ptp_ocp.c
+@@ -316,6 +316,15 @@ struct ptp_ocp_serial_port {
+ #define OCP_SERIAL_LEN			6
+ #define OCP_SMA_NUM			4
+ 
++enum {
++	PORT_GNSS,
++	PORT_GNSS2,
++	PORT_MAC, /* miniature atomic clock */
++	PORT_NMEA,
++
++	__PORT_COUNT,
++};
++
+ struct ptp_ocp {
+ 	struct pci_dev		*pdev;
+ 	struct device		dev;
+@@ -357,10 +366,7 @@ struct ptp_ocp {
+ 	struct delayed_work	sync_work;
+ 	int			id;
+ 	int			n_irqs;
+-	struct ptp_ocp_serial_port	gnss_port;
+-	struct ptp_ocp_serial_port	gnss2_port;
+-	struct ptp_ocp_serial_port	mac_port;   /* miniature atomic clock */
+-	struct ptp_ocp_serial_port	nmea_port;
++	struct ptp_ocp_serial_port	port[__PORT_COUNT];
+ 	bool			fw_loader;
+ 	u8			fw_tag;
+ 	u16			fw_version;
+@@ -655,28 +661,28 @@ static struct ocp_resource ocp_fb_resource[] = {
+ 		},
+ 	},
+ 	{
+-		OCP_SERIAL_RESOURCE(gnss_port),
++		OCP_SERIAL_RESOURCE(port[PORT_GNSS]),
+ 		.offset = 0x00160000 + 0x1000, .irq_vec = 3,
+ 		.extra = &(struct ptp_ocp_serial_port) {
+ 			.baud = 115200,
+ 		},
+ 	},
+ 	{
+-		OCP_SERIAL_RESOURCE(gnss2_port),
++		OCP_SERIAL_RESOURCE(port[PORT_GNSS2]),
+ 		.offset = 0x00170000 + 0x1000, .irq_vec = 4,
+ 		.extra = &(struct ptp_ocp_serial_port) {
+ 			.baud = 115200,
+ 		},
+ 	},
+ 	{
+-		OCP_SERIAL_RESOURCE(mac_port),
++		OCP_SERIAL_RESOURCE(port[PORT_MAC]),
+ 		.offset = 0x00180000 + 0x1000, .irq_vec = 5,
+ 		.extra = &(struct ptp_ocp_serial_port) {
+ 			.baud = 57600,
+ 		},
+ 	},
+ 	{
+-		OCP_SERIAL_RESOURCE(nmea_port),
++		OCP_SERIAL_RESOURCE(port[PORT_NMEA]),
+ 		.offset = 0x00190000 + 0x1000, .irq_vec = 10,
+ 	},
+ 	{
+@@ -740,7 +746,7 @@ static struct ocp_resource ocp_art_resource[] = {
+ 		.offset = 0x01000000, .size = 0x10000,
+ 	},
+ 	{
+-		OCP_SERIAL_RESOURCE(gnss_port),
++		OCP_SERIAL_RESOURCE(port[PORT_GNSS]),
+ 		.offset = 0x00160000 + 0x1000, .irq_vec = 3,
+ 		.extra = &(struct ptp_ocp_serial_port) {
+ 			.baud = 115200,
+@@ -839,7 +845,7 @@ static struct ocp_resource ocp_art_resource[] = {
+ 		},
+ 	},
+ 	{
+-		OCP_SERIAL_RESOURCE(mac_port),
++		OCP_SERIAL_RESOURCE(port[PORT_MAC]),
+ 		.offset = 0x00190000, .irq_vec = 7,
+ 		.extra = &(struct ptp_ocp_serial_port) {
+ 			.baud = 9600,
+@@ -950,14 +956,14 @@ static struct ocp_resource ocp_adva_resource[] = {
+ 		.offset = 0x00220000, .size = 0x1000,
+ 	},
+ 	{
+-		OCP_SERIAL_RESOURCE(gnss_port),
++		OCP_SERIAL_RESOURCE(port[PORT_GNSS]),
+ 		.offset = 0x00160000 + 0x1000, .irq_vec = 3,
+ 		.extra = &(struct ptp_ocp_serial_port) {
+ 			.baud = 9600,
+ 		},
+ 	},
+ 	{
+-		OCP_SERIAL_RESOURCE(mac_port),
++		OCP_SERIAL_RESOURCE(port[PORT_MAC]),
+ 		.offset = 0x00180000 + 0x1000, .irq_vec = 5,
+ 		.extra = &(struct ptp_ocp_serial_port) {
+ 			.baud = 115200,
+@@ -1649,6 +1655,15 @@ ptp_ocp_tod_gnss_name(int idx)
+ 	return gnss_name[idx];
+ }
+ 
++static const char *
++ptp_ocp_tty_port_name(int idx)
++{
++	static const char * const tty_name[] = {
++		"GNSS", "GNSS2", "MAC", "NMEA"
++	};
++	return tty_name[idx];
++}
++
+ struct ptp_ocp_nvmem_match_info {
+ 	struct ptp_ocp *bp;
+ 	const void * const tag;
+@@ -3346,6 +3361,54 @@ static EXT_ATTR_RO(freq, frequency, 1);
+ static EXT_ATTR_RO(freq, frequency, 2);
+ static EXT_ATTR_RO(freq, frequency, 3);
+ 
++static ssize_t
++ptp_ocp_tty_show(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	struct dev_ext_attribute *ea = to_ext_attr(attr);
++	struct ptp_ocp *bp = dev_get_drvdata(dev);
++
++	return sysfs_emit(buf, "ttyS%d", bp->port[(uintptr_t)ea->var].line);
++}
++
++static umode_t
++ptp_ocp_timecard_tty_is_visible(struct kobject *kobj, struct attribute *attr, int n)
++{
++	struct ptp_ocp *bp = dev_get_drvdata(kobj_to_dev(kobj));
++	struct ptp_ocp_serial_port *port;
++	struct device_attribute *dattr;
++	struct dev_ext_attribute *ea;
++
++	if (strncmp(attr->name, "tty", 3))
++		return attr->mode;
++
++	dattr = container_of(attr, struct device_attribute, attr);
++	ea = container_of(dattr, struct dev_ext_attribute, attr);
++	port = &bp->port[(uintptr_t)ea->var];
++	return port->line == -1 ? 0 : 0444;
++}
++
++#define EXT_TTY_ATTR_RO(_name, _val)			\
++	struct dev_ext_attribute dev_attr_tty##_name =	\
++		{ __ATTR(tty##_name, 0444, ptp_ocp_tty_show, NULL), (void *)_val }
++
++static EXT_TTY_ATTR_RO(GNSS, PORT_GNSS);
++static EXT_TTY_ATTR_RO(GNSS2, PORT_GNSS2);
++static EXT_TTY_ATTR_RO(MAC, PORT_MAC);
++static EXT_TTY_ATTR_RO(NMEA, PORT_NMEA);
++static struct attribute *ptp_ocp_timecard_tty_attrs[] = {
++	&dev_attr_ttyGNSS.attr.attr,
++	&dev_attr_ttyGNSS2.attr.attr,
++	&dev_attr_ttyMAC.attr.attr,
++	&dev_attr_ttyNMEA.attr.attr,
++	NULL,
++};
++
++static const struct attribute_group ptp_ocp_timecard_tty_group = {
++	.name = "tty",
++	.attrs = ptp_ocp_timecard_tty_attrs,
++	.is_visible = ptp_ocp_timecard_tty_is_visible,
++};
++
+ static ssize_t
+ serialnum_show(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+@@ -3775,6 +3838,7 @@ static const struct attribute_group fb_timecard_group = {
+ 
+ static const struct ocp_attr_group fb_timecard_groups[] = {
+ 	{ .cap = OCP_CAP_BASIC,	    .group = &fb_timecard_group },
++	{ .cap = OCP_CAP_BASIC,	    .group = &ptp_ocp_timecard_tty_group },
+ 	{ .cap = OCP_CAP_SIGNAL,    .group = &fb_timecard_signal0_group },
+ 	{ .cap = OCP_CAP_SIGNAL,    .group = &fb_timecard_signal1_group },
+ 	{ .cap = OCP_CAP_SIGNAL,    .group = &fb_timecard_signal2_group },
+@@ -3814,6 +3878,7 @@ static const struct attribute_group art_timecard_group = {
+ 
+ static const struct ocp_attr_group art_timecard_groups[] = {
+ 	{ .cap = OCP_CAP_BASIC,	    .group = &art_timecard_group },
++	{ .cap = OCP_CAP_BASIC,	    .group = &ptp_ocp_timecard_tty_group },
+ 	{ },
+ };
+ 
+@@ -3841,6 +3906,7 @@ static const struct attribute_group adva_timecard_group = {
+ 
+ static const struct ocp_attr_group adva_timecard_groups[] = {
+ 	{ .cap = OCP_CAP_BASIC,	    .group = &adva_timecard_group },
++	{ .cap = OCP_CAP_BASIC,	    .group = &ptp_ocp_timecard_tty_group },
+ 	{ .cap = OCP_CAP_SIGNAL,    .group = &fb_timecard_signal0_group },
+ 	{ .cap = OCP_CAP_SIGNAL,    .group = &fb_timecard_signal1_group },
+ 	{ .cap = OCP_CAP_FREQ,	    .group = &fb_timecard_freq0_group },
+@@ -3960,16 +4026,11 @@ ptp_ocp_summary_show(struct seq_file *s, void *data)
+ 	bp = dev_get_drvdata(dev);
+ 
+ 	seq_printf(s, "%7s: /dev/ptp%d\n", "PTP", ptp_clock_index(bp->ptp));
+-	if (bp->gnss_port.line != -1)
+-		seq_printf(s, "%7s: /dev/ttyS%d\n", "GNSS1",
+-			   bp->gnss_port.line);
+-	if (bp->gnss2_port.line != -1)
+-		seq_printf(s, "%7s: /dev/ttyS%d\n", "GNSS2",
+-			   bp->gnss2_port.line);
+-	if (bp->mac_port.line != -1)
+-		seq_printf(s, "%7s: /dev/ttyS%d\n", "MAC", bp->mac_port.line);
+-	if (bp->nmea_port.line != -1)
+-		seq_printf(s, "%7s: /dev/ttyS%d\n", "NMEA", bp->nmea_port.line);
++	for (i = 0; i < __PORT_COUNT; i++) {
++		if (bp->port[i].line != -1)
++			seq_printf(s, "%7s: /dev/ttyS%d\n", ptp_ocp_tty_port_name(i),
++				   bp->port[i].line);
++	}
+ 
+ 	memset(sma_val, 0xff, sizeof(sma_val));
+ 	if (bp->sma_map1) {
+@@ -4279,7 +4340,7 @@ ptp_ocp_dev_release(struct device *dev)
+ static int
+ ptp_ocp_device_init(struct ptp_ocp *bp, struct pci_dev *pdev)
+ {
+-	int err;
++	int i, err;
+ 
+ 	mutex_lock(&ptp_ocp_lock);
+ 	err = idr_alloc(&ptp_ocp_idr, bp, 0, 0, GFP_KERNEL);
+@@ -4292,10 +4353,10 @@ ptp_ocp_device_init(struct ptp_ocp *bp, struct pci_dev *pdev)
+ 
+ 	bp->ptp_info = ptp_ocp_clock_info;
+ 	spin_lock_init(&bp->lock);
+-	bp->gnss_port.line = -1;
+-	bp->gnss2_port.line = -1;
+-	bp->mac_port.line = -1;
+-	bp->nmea_port.line = -1;
++
++	for (i = 0; i < __PORT_COUNT; i++)
++		bp->port[i].line = -1;
++
+ 	bp->pdev = pdev;
+ 
+ 	device_initialize(&bp->dev);
+@@ -4352,22 +4413,6 @@ ptp_ocp_complete(struct ptp_ocp *bp)
+ 	struct pps_device *pps;
+ 	char buf[32];
+ 
+-	if (bp->gnss_port.line != -1) {
+-		sprintf(buf, "ttyS%d", bp->gnss_port.line);
+-		ptp_ocp_link_child(bp, buf, "ttyGNSS");
+-	}
+-	if (bp->gnss2_port.line != -1) {
+-		sprintf(buf, "ttyS%d", bp->gnss2_port.line);
+-		ptp_ocp_link_child(bp, buf, "ttyGNSS2");
+-	}
+-	if (bp->mac_port.line != -1) {
+-		sprintf(buf, "ttyS%d", bp->mac_port.line);
+-		ptp_ocp_link_child(bp, buf, "ttyMAC");
+-	}
+-	if (bp->nmea_port.line != -1) {
+-		sprintf(buf, "ttyS%d", bp->nmea_port.line);
+-		ptp_ocp_link_child(bp, buf, "ttyNMEA");
+-	}
+ 	sprintf(buf, "ptp%d", ptp_clock_index(bp->ptp));
+ 	ptp_ocp_link_child(bp, buf, "ptp");
+ 
+@@ -4416,23 +4461,20 @@ ptp_ocp_info(struct ptp_ocp *bp)
+ 	};
+ 	struct device *dev = &bp->pdev->dev;
+ 	u32 reg;
++	int i;
+ 
+ 	ptp_ocp_phc_info(bp);
+ 
+-	ptp_ocp_serial_info(dev, "GNSS", bp->gnss_port.line,
+-			    bp->gnss_port.baud);
+-	ptp_ocp_serial_info(dev, "GNSS2", bp->gnss2_port.line,
+-			    bp->gnss2_port.baud);
+-	ptp_ocp_serial_info(dev, "MAC", bp->mac_port.line, bp->mac_port.baud);
+-	if (bp->nmea_out && bp->nmea_port.line != -1) {
+-		bp->nmea_port.baud = -1;
++	for (i = 0; i < __PORT_COUNT; i++) {
++		if (i == PORT_NMEA && bp->nmea_out && bp->port[PORT_NMEA].line != -1) {
++			bp->port[PORT_NMEA].baud = -1;
+ 
+-		reg = ioread32(&bp->nmea_out->uart_baud);
+-		if (reg < ARRAY_SIZE(nmea_baud))
+-			bp->nmea_port.baud = nmea_baud[reg];
+-
+-		ptp_ocp_serial_info(dev, "NMEA", bp->nmea_port.line,
+-				    bp->nmea_port.baud);
++			reg = ioread32(&bp->nmea_out->uart_baud);
++			if (reg < ARRAY_SIZE(nmea_baud))
++				bp->port[PORT_NMEA].baud = nmea_baud[reg];
++		}
++		ptp_ocp_serial_info(dev, ptp_ocp_tty_port_name(i), bp->port[i].line,
++				    bp->port[i].baud);
+ 	}
+ }
+ 
+@@ -4441,9 +4483,6 @@ ptp_ocp_detach_sysfs(struct ptp_ocp *bp)
+ {
+ 	struct device *dev = &bp->dev;
+ 
+-	sysfs_remove_link(&dev->kobj, "ttyGNSS");
+-	sysfs_remove_link(&dev->kobj, "ttyGNSS2");
+-	sysfs_remove_link(&dev->kobj, "ttyMAC");
+ 	sysfs_remove_link(&dev->kobj, "ptp");
+ 	sysfs_remove_link(&dev->kobj, "pps");
+ }
+@@ -4473,14 +4512,9 @@ ptp_ocp_detach(struct ptp_ocp *bp)
+ 	for (i = 0; i < 4; i++)
+ 		if (bp->signal_out[i])
+ 			ptp_ocp_unregister_ext(bp->signal_out[i]);
+-	if (bp->gnss_port.line != -1)
+-		serial8250_unregister_port(bp->gnss_port.line);
+-	if (bp->gnss2_port.line != -1)
+-		serial8250_unregister_port(bp->gnss2_port.line);
+-	if (bp->mac_port.line != -1)
+-		serial8250_unregister_port(bp->mac_port.line);
+-	if (bp->nmea_port.line != -1)
+-		serial8250_unregister_port(bp->nmea_port.line);
++	for (i = 0; i < __PORT_COUNT; i++)
++		if (bp->port[i].line != -1)
++			serial8250_unregister_port(bp->port[i].line);
+ 	platform_device_unregister(bp->spi_flash);
+ 	platform_device_unregister(bp->i2c_ctrl);
+ 	if (bp->i2c_clk)
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index c32bc773ab29d..445cb6c2e80f5 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -7302,13 +7302,13 @@ int lpfc_get_sfp_info_wait(struct lpfc_hba *phba,
+ 		mbox->u.mqe.un.mem_dump_type3.addr_hi = putPaddrHigh(mp->phys);
+ 	}
+ 	mbox->vport = phba->pport;
+-
+-	rc = lpfc_sli_issue_mbox_wait(phba, mbox, 30);
++	rc = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_SLI4_CONFIG_TMO);
+ 	if (rc == MBX_NOT_FINISHED) {
+ 		rc = 1;
+ 		goto error;
+ 	}
+-
++	if (rc == MBX_TIMEOUT)
++		goto error;
+ 	if (phba->sli_rev == LPFC_SLI_REV4)
+ 		mp = mbox->ctx_buf;
+ 	else
+@@ -7361,7 +7361,10 @@ int lpfc_get_sfp_info_wait(struct lpfc_hba *phba,
+ 		mbox->u.mqe.un.mem_dump_type3.addr_hi = putPaddrHigh(mp->phys);
+ 	}
+ 
+-	rc = lpfc_sli_issue_mbox_wait(phba, mbox, 30);
++	rc = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_SLI4_CONFIG_TMO);
++
++	if (rc == MBX_TIMEOUT)
++		goto error;
+ 	if (bf_get(lpfc_mqe_status, &mbox->u.mqe)) {
+ 		rc = 1;
+ 		goto error;
+@@ -7372,8 +7375,10 @@ int lpfc_get_sfp_info_wait(struct lpfc_hba *phba,
+ 			     DMP_SFF_PAGE_A2_SIZE);
+ 
+ error:
+-	mbox->ctx_buf = mpsave;
+-	lpfc_mbox_rsrc_cleanup(phba, mbox, MBOX_THD_UNLOCKED);
++	if (mbox->mbox_flag & LPFC_MBX_WAKE) {
++		mbox->ctx_buf = mpsave;
++		lpfc_mbox_rsrc_cleanup(phba, mbox, MBOX_THD_UNLOCKED);
++	}
+ 
+ 	return rc;
+ 
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index a5a31dfa45122..ee2da8e49d4cf 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -166,7 +166,6 @@ int pm8001_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func,
+ 	unsigned long flags;
+ 	pm8001_ha = sas_phy->ha->lldd_ha;
+ 	phy = &pm8001_ha->phy[phy_id];
+-	pm8001_ha->phy[phy_id].enable_completion = &completion;
+ 
+ 	if (PM8001_CHIP_DISP->fatal_errors(pm8001_ha)) {
+ 		/*
+@@ -190,6 +189,7 @@ int pm8001_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func,
+ 				rates->maximum_linkrate;
+ 		}
+ 		if (pm8001_ha->phy[phy_id].phy_state ==  PHY_LINK_DISABLE) {
++			pm8001_ha->phy[phy_id].enable_completion = &completion;
+ 			PM8001_CHIP_DISP->phy_start_req(pm8001_ha, phy_id);
+ 			wait_for_completion(&completion);
+ 		}
+@@ -198,6 +198,7 @@ int pm8001_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func,
+ 		break;
+ 	case PHY_FUNC_HARD_RESET:
+ 		if (pm8001_ha->phy[phy_id].phy_state == PHY_LINK_DISABLE) {
++			pm8001_ha->phy[phy_id].enable_completion = &completion;
+ 			PM8001_CHIP_DISP->phy_start_req(pm8001_ha, phy_id);
+ 			wait_for_completion(&completion);
+ 		}
+@@ -206,6 +207,7 @@ int pm8001_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func,
+ 		break;
+ 	case PHY_FUNC_LINK_RESET:
+ 		if (pm8001_ha->phy[phy_id].phy_state == PHY_LINK_DISABLE) {
++			pm8001_ha->phy[phy_id].enable_completion = &completion;
+ 			PM8001_CHIP_DISP->phy_start_req(pm8001_ha, phy_id);
+ 			wait_for_completion(&completion);
+ 		}
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index f2d7eedd324b7..d2f603e1014cf 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -82,6 +82,10 @@
+ #define TCR_RXMSK	BIT(19)
+ #define TCR_TXMSK	BIT(18)
+ 
++struct fsl_lpspi_devtype_data {
++	u8 prescale_max;
++};
++
+ struct lpspi_config {
+ 	u8 bpw;
+ 	u8 chip_select;
+@@ -119,10 +123,25 @@ struct fsl_lpspi_data {
+ 	bool usedma;
+ 	struct completion dma_rx_completion;
+ 	struct completion dma_tx_completion;
++
++	const struct fsl_lpspi_devtype_data *devtype_data;
++};
++
++/*
++ * ERR051608 fixed or not:
++ * https://www.nxp.com/docs/en/errata/i.MX93_1P87f.pdf
++ */
++static struct fsl_lpspi_devtype_data imx93_lpspi_devtype_data = {
++	.prescale_max = 1,
++};
++
++static struct fsl_lpspi_devtype_data imx7ulp_lpspi_devtype_data = {
++	.prescale_max = 7,
+ };
+ 
+ static const struct of_device_id fsl_lpspi_dt_ids[] = {
+-	{ .compatible = "fsl,imx7ulp-spi", },
++	{ .compatible = "fsl,imx7ulp-spi", .data = &imx7ulp_lpspi_devtype_data,},
++	{ .compatible = "fsl,imx93-spi", .data = &imx93_lpspi_devtype_data,},
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, fsl_lpspi_dt_ids);
+@@ -297,9 +316,11 @@ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi)
+ {
+ 	struct lpspi_config config = fsl_lpspi->config;
+ 	unsigned int perclk_rate, scldiv, div;
++	u8 prescale_max;
+ 	u8 prescale;
+ 
+ 	perclk_rate = clk_get_rate(fsl_lpspi->clk_per);
++	prescale_max = fsl_lpspi->devtype_data->prescale_max;
+ 
+ 	if (!config.speed_hz) {
+ 		dev_err(fsl_lpspi->dev,
+@@ -315,7 +336,7 @@ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi)
+ 
+ 	div = DIV_ROUND_UP(perclk_rate, config.speed_hz);
+ 
+-	for (prescale = 0; prescale < 8; prescale++) {
++	for (prescale = 0; prescale <= prescale_max; prescale++) {
+ 		scldiv = div / (1 << prescale) - 2;
+ 		if (scldiv < 256) {
+ 			fsl_lpspi->config.prescale = prescale;
+@@ -822,6 +843,7 @@ static int fsl_lpspi_init_rpm(struct fsl_lpspi_data *fsl_lpspi)
+ 
+ static int fsl_lpspi_probe(struct platform_device *pdev)
+ {
++	const struct fsl_lpspi_devtype_data *devtype_data;
+ 	struct fsl_lpspi_data *fsl_lpspi;
+ 	struct spi_controller *controller;
+ 	struct resource *res;
+@@ -830,6 +852,10 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
+ 	u32 temp;
+ 	bool is_target;
+ 
++	devtype_data = of_device_get_match_data(&pdev->dev);
++	if (!devtype_data)
++		return -ENODEV;
++
+ 	is_target = of_property_read_bool((&pdev->dev)->of_node, "spi-slave");
+ 	if (is_target)
+ 		controller = devm_spi_alloc_target(&pdev->dev,
+@@ -848,6 +874,7 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
+ 	fsl_lpspi->is_target = is_target;
+ 	fsl_lpspi->is_only_cs1 = of_property_read_bool((&pdev->dev)->of_node,
+ 						"fsl,spi-only-use-cs1-sel");
++	fsl_lpspi->devtype_data = devtype_data;
+ 
+ 	init_completion(&fsl_lpspi->xfer_done);
+ 
+diff --git a/drivers/spi/spi-hisi-kunpeng.c b/drivers/spi/spi-hisi-kunpeng.c
+index 6910b4d4c427b..16054695bdb04 100644
+--- a/drivers/spi/spi-hisi-kunpeng.c
++++ b/drivers/spi/spi-hisi-kunpeng.c
+@@ -481,6 +481,9 @@ static int hisi_spi_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
++	if (host->max_speed_hz == 0)
++		return dev_err_probe(dev, -EINVAL, "spi-max-frequency can't be 0\n");
++
+ 	ret = device_property_read_u16(dev, "num-cs",
+ 					&host->num_chipselect);
+ 	if (ret)
+diff --git a/drivers/spi/spi-intel.c b/drivers/spi/spi-intel.c
+index 3e5dcf2b3c8a1..795b7e72baead 100644
+--- a/drivers/spi/spi-intel.c
++++ b/drivers/spi/spi-intel.c
+@@ -1390,6 +1390,9 @@ static int intel_spi_populate_chip(struct intel_spi *ispi)
+ 
+ 	pdata->name = devm_kasprintf(ispi->dev, GFP_KERNEL, "%s-chip1",
+ 				     dev_name(ispi->dev));
++	if (!pdata->name)
++		return -ENOMEM;
++
+ 	pdata->nr_parts = 1;
+ 	parts = devm_kcalloc(ispi->dev, pdata->nr_parts, sizeof(*parts),
+ 			     GFP_KERNEL);
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index e1ecd96c78581..0bb33c43b1b46 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -945,14 +945,16 @@ static int rockchip_spi_suspend(struct device *dev)
+ {
+ 	int ret;
+ 	struct spi_controller *ctlr = dev_get_drvdata(dev);
+-	struct rockchip_spi *rs = spi_controller_get_devdata(ctlr);
+ 
+ 	ret = spi_controller_suspend(ctlr);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	clk_disable_unprepare(rs->spiclk);
+-	clk_disable_unprepare(rs->apb_pclk);
++	ret = pm_runtime_force_suspend(dev);
++	if (ret < 0) {
++		spi_controller_resume(ctlr);
++		return ret;
++	}
+ 
+ 	pinctrl_pm_select_sleep_state(dev);
+ 
+@@ -963,25 +965,14 @@ static int rockchip_spi_resume(struct device *dev)
+ {
+ 	int ret;
+ 	struct spi_controller *ctlr = dev_get_drvdata(dev);
+-	struct rockchip_spi *rs = spi_controller_get_devdata(ctlr);
+ 
+ 	pinctrl_pm_select_default_state(dev);
+ 
+-	ret = clk_prepare_enable(rs->apb_pclk);
++	ret = pm_runtime_force_resume(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = clk_prepare_enable(rs->spiclk);
+-	if (ret < 0)
+-		clk_disable_unprepare(rs->apb_pclk);
+-
+-	ret = spi_controller_resume(ctlr);
+-	if (ret < 0) {
+-		clk_disable_unprepare(rs->spiclk);
+-		clk_disable_unprepare(rs->apb_pclk);
+-	}
+-
+-	return 0;
++	return spi_controller_resume(ctlr);
+ }
+ #endif /* CONFIG_PM_SLEEP */
+ 
+diff --git a/drivers/staging/iio/frequency/ad9834.c b/drivers/staging/iio/frequency/ad9834.c
+index a7a5cdcc65903..47e7d7e6d9208 100644
+--- a/drivers/staging/iio/frequency/ad9834.c
++++ b/drivers/staging/iio/frequency/ad9834.c
+@@ -114,7 +114,7 @@ static int ad9834_write_frequency(struct ad9834_state *st,
+ 
+ 	clk_freq = clk_get_rate(st->mclk);
+ 
+-	if (fout > (clk_freq / 2))
++	if (!clk_freq || fout > (clk_freq / 2))
+ 		return -EINVAL;
+ 
+ 	regval = ad9834_calc_freqreg(clk_freq, fout);
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index df3af821f2187..fb1907414cc14 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -501,16 +501,21 @@ remote_event_create(wait_queue_head_t *wq, struct remote_event *event)
+  * routines where switched to the "interruptible" family of functions, as the
+  * former was deemed unjustified and the use "killable" set all VCHIQ's
+  * threads in D state.
++ *
++ * Returns: 0 on success, a negative error code on failure
+  */
+ static inline int
+ remote_event_wait(wait_queue_head_t *wq, struct remote_event *event)
+ {
++	int ret = 0;
++
+ 	if (!event->fired) {
+ 		event->armed = 1;
+ 		dsb(sy);
+-		if (wait_event_interruptible(*wq, event->fired)) {
++		ret = wait_event_interruptible(*wq, event->fired);
++		if (ret) {
+ 			event->armed = 0;
+-			return 0;
++			return ret;
+ 		}
+ 		event->armed = 0;
+ 		/* Ensure that the peer sees that we are not waiting (armed == 0). */
+@@ -518,7 +523,7 @@ remote_event_wait(wait_queue_head_t *wq, struct remote_event *event)
+ 	}
+ 
+ 	event->fired = 0;
+-	return 1;
++	return ret;
+ }
+ 
+ /*
+@@ -1140,6 +1145,7 @@ queue_message_sync(struct vchiq_state *state, struct vchiq_service *service,
+ 	struct vchiq_header *header;
+ 	ssize_t callback_result;
+ 	int svc_fourcc;
++	int ret;
+ 
+ 	local = state->local;
+ 
+@@ -1147,7 +1153,9 @@ queue_message_sync(struct vchiq_state *state, struct vchiq_service *service,
+ 	    mutex_lock_killable(&state->sync_mutex))
+ 		return -EAGAIN;
+ 
+-	remote_event_wait(&state->sync_release_event, &local->sync_release);
++	ret = remote_event_wait(&state->sync_release_event, &local->sync_release);
++	if (ret)
++		return ret;
+ 
+ 	/* Ensure that reads don't overtake the remote_event_wait. */
+ 	rmb();
+@@ -1929,13 +1937,16 @@ slot_handler_func(void *v)
+ {
+ 	struct vchiq_state *state = v;
+ 	struct vchiq_shared_state *local = state->local;
++	int ret;
+ 
+ 	DEBUG_INITIALISE(local);
+ 
+ 	while (1) {
+ 		DEBUG_COUNT(SLOT_HANDLER_COUNT);
+ 		DEBUG_TRACE(SLOT_HANDLER_LINE);
+-		remote_event_wait(&state->trigger_event, &local->trigger);
++		ret = remote_event_wait(&state->trigger_event, &local->trigger);
++		if (ret)
++			return ret;
+ 
+ 		/* Ensure that reads don't overtake the remote_event_wait. */
+ 		rmb();
+@@ -1966,6 +1977,7 @@ recycle_func(void *v)
+ 	struct vchiq_shared_state *local = state->local;
+ 	u32 *found;
+ 	size_t length;
++	int ret;
+ 
+ 	length = sizeof(*found) * BITSET_SIZE(VCHIQ_MAX_SERVICES);
+ 
+@@ -1975,7 +1987,9 @@ recycle_func(void *v)
+ 		return -ENOMEM;
+ 
+ 	while (1) {
+-		remote_event_wait(&state->recycle_event, &local->recycle);
++		ret = remote_event_wait(&state->recycle_event, &local->recycle);
++		if (ret)
++			return ret;
+ 
+ 		process_free_queue(state, found, length);
+ 	}
+@@ -1992,6 +2006,7 @@ sync_func(void *v)
+ 		(struct vchiq_header *)SLOT_DATA_FROM_INDEX(state,
+ 			state->remote->slot_sync);
+ 	int svc_fourcc;
++	int ret;
+ 
+ 	while (1) {
+ 		struct vchiq_service *service;
+@@ -1999,7 +2014,9 @@ sync_func(void *v)
+ 		int type;
+ 		unsigned int localport, remoteport;
+ 
+-		remote_event_wait(&state->sync_trigger_event, &local->sync_trigger);
++		ret = remote_event_wait(&state->sync_trigger_event, &local->sync_trigger);
++		if (ret)
++			return ret;
+ 
+ 		/* Ensure that reads don't overtake the remote_event_wait. */
+ 		rmb();
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 91bfdc17eedb3..b9c436a002a1b 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -10196,7 +10196,8 @@ void ufshcd_remove(struct ufs_hba *hba)
+ 	blk_mq_destroy_queue(hba->tmf_queue);
+ 	blk_put_queue(hba->tmf_queue);
+ 	blk_mq_free_tag_set(&hba->tmf_tag_set);
+-	scsi_remove_host(hba->host);
++	if (hba->scsi_host_added)
++		scsi_remove_host(hba->host);
+ 	/* disable interrupts */
+ 	ufshcd_disable_intr(hba, hba->intr_mask);
+ 	ufshcd_hba_stop(hba);
+@@ -10478,6 +10479,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ 			dev_err(hba->dev, "scsi_add_host failed\n");
+ 			goto out_disable;
+ 		}
++		hba->scsi_host_added = true;
+ 	}
+ 
+ 	hba->tmf_tag_set = (struct blk_mq_tag_set) {
+@@ -10560,7 +10562,8 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ free_tmf_tag_set:
+ 	blk_mq_free_tag_set(&hba->tmf_tag_set);
+ out_remove_scsi_host:
+-	scsi_remove_host(hba->host);
++	if (hba->scsi_host_added)
++		scsi_remove_host(hba->host);
+ out_disable:
+ 	hba->is_irq_enabled = false;
+ 	ufshcd_hba_exit(hba);
+diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c
+index b45653752301d..8704095994118 100644
+--- a/drivers/uio/uio_hv_generic.c
++++ b/drivers/uio/uio_hv_generic.c
+@@ -106,10 +106,11 @@ static void hv_uio_channel_cb(void *context)
+ 
+ /*
+  * Callback from vmbus_event when channel is rescinded.
++ * It is meant for rescind of primary channels only.
+  */
+ static void hv_uio_rescind(struct vmbus_channel *channel)
+ {
+-	struct hv_device *hv_dev = channel->primary_channel->device_obj;
++	struct hv_device *hv_dev = channel->device_obj;
+ 	struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev);
+ 
+ 	/*
+@@ -120,6 +121,14 @@ static void hv_uio_rescind(struct vmbus_channel *channel)
+ 
+ 	/* Wake up reader */
+ 	uio_event_notify(&pdata->info);
++
++	/*
++	 * With rescind callback registered, rescind path will not unregister the device
++	 * from vmbus when the primary channel is rescinded.
++	 * Without it, rescind handling is incomplete and next onoffer msg does not come.
++	 * Unregister the device from vmbus here.
++	 */
++	vmbus_device_unregister(channel->device_obj);
+ }
+ 
+ /* Sysfs API to allow mmap of the ring buffers
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 31df6fdc233ef..ee95d39094302 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1367,6 +1367,21 @@ static int dwc3_core_init(struct dwc3 *dwc)
+ 		dwc3_writel(dwc->regs, DWC3_GUCTL2, reg);
+ 	}
+ 
++	/*
++	 * STAR 9001285599: This issue affects DWC_usb3 version 3.20a
++	 * only. If the PM TIMER ECM is enabled through GUCTL2[19], the
++	 * link compliance test (TD7.21) may fail. If the ECN is not
++	 * enabled (GUCTL2[19] = 0), the controller will use the old timer
++	 * value (5us), which is still acceptable for the link compliance
++	 * test. Therefore, do not enable PM TIMER ECM in 3.20a by
++	 * setting GUCTL2[19] by default; instead, use GUCTL2[19] = 0.
++	 */
++	if (DWC3_VER_IS(DWC3, 320A)) {
++		reg = dwc3_readl(dwc->regs, DWC3_GUCTL2);
++		reg &= ~DWC3_GUCTL2_LC_TIMER;
++		dwc3_writel(dwc->regs, DWC3_GUCTL2, reg);
++	}
++
+ 	/*
+ 	 * When configured in HOST mode, after issuing U3/L2 exit controller
+ 	 * fails to send proper CRC checksum in CRC5 feild. Because of this
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 3781c736c1a17..ed7f999e05bb5 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -417,6 +417,7 @@
+ 
+ /* Global User Control Register 2 */
+ #define DWC3_GUCTL2_RST_ACTBITLATER		BIT(14)
++#define DWC3_GUCTL2_LC_TIMER			BIT(19)
+ 
+ /* Global User Control Register 3 */
+ #define DWC3_GUCTL3_SPLITDISABLE		BIT(14)
+@@ -1262,6 +1263,7 @@ struct dwc3 {
+ #define DWC3_REVISION_290A	0x5533290a
+ #define DWC3_REVISION_300A	0x5533300a
+ #define DWC3_REVISION_310A	0x5533310a
++#define DWC3_REVISION_320A	0x5533320a
+ #define DWC3_REVISION_330A	0x5533330a
+ 
+ #define DWC31_REVISION_ANY	0x0
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 89fc690fdf34a..291bc549935bb 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -287,6 +287,23 @@ static int __dwc3_gadget_wakeup(struct dwc3 *dwc, bool async);
+  *
+  * Caller should handle locking. This function will issue @cmd with given
+  * @params to @dep and wait for its completion.
++ *
++ * According to the programming guide, if the link state is in L1/L2/U3,
++ * then sending the Start Transfer command may not complete. The
++ * programming guide suggested to bring the link state back to ON/U0 by
++ * performing remote wakeup prior to sending the command. However, don't
++ * initiate remote wakeup when the user/function does not send wakeup
++ * request via wakeup ops. Send the command when it's allowed.
++ *
++ * Notes:
++ * For L1 link state, issuing a command requires the clearing of
++ * GUSB2PHYCFG.SUSPENDUSB2, which turns on the signal required to complete
++ * the given command (usually within 50us). This should happen within the
++ * command timeout set by driver. No additional step is needed.
++ *
++ * For L2 or U3 link state, the gadget is in USB suspend. Care should be
++ * taken when sending Start Transfer command to ensure that it's done after
++ * USB resume.
+  */
+ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
+ 		struct dwc3_gadget_ep_cmd_params *params)
+@@ -327,30 +344,6 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
+ 			dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
+ 	}
+ 
+-	if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_STARTTRANSFER) {
+-		int link_state;
+-
+-		/*
+-		 * Initiate remote wakeup if the link state is in U3 when
+-		 * operating in SS/SSP or L1/L2 when operating in HS/FS. If the
+-		 * link state is in U1/U2, no remote wakeup is needed. The Start
+-		 * Transfer command will initiate the link recovery.
+-		 */
+-		link_state = dwc3_gadget_get_link_state(dwc);
+-		switch (link_state) {
+-		case DWC3_LINK_STATE_U2:
+-			if (dwc->gadget->speed >= USB_SPEED_SUPER)
+-				break;
+-
+-			fallthrough;
+-		case DWC3_LINK_STATE_U3:
+-			ret = __dwc3_gadget_wakeup(dwc, false);
+-			dev_WARN_ONCE(dwc->dev, ret, "wakeup failed --> %d\n",
+-					ret);
+-			break;
+-		}
+-	}
+-
+ 	/*
+ 	 * For some commands such as Update Transfer command, DEPCMDPARn
+ 	 * registers are reserved. Since the driver often sends Update Transfer
+diff --git a/drivers/usb/gadget/udc/aspeed_udc.c b/drivers/usb/gadget/udc/aspeed_udc.c
+index 821a6ab5da56f..f4781e611aaa2 100644
+--- a/drivers/usb/gadget/udc/aspeed_udc.c
++++ b/drivers/usb/gadget/udc/aspeed_udc.c
+@@ -1009,6 +1009,8 @@ static void ast_udc_getstatus(struct ast_udc_dev *udc)
+ 		break;
+ 	case USB_RECIP_ENDPOINT:
+ 		epnum = crq.wIndex & USB_ENDPOINT_NUMBER_MASK;
++		if (epnum >= AST_UDC_NUM_ENDPOINTS)
++			goto stall;
+ 		status = udc->ep[epnum].stopped;
+ 		break;
+ 	default:
+diff --git a/drivers/usb/gadget/udc/cdns2/cdns2-gadget.c b/drivers/usb/gadget/udc/cdns2/cdns2-gadget.c
+index 0eed0e03842cf..d394affb70723 100644
+--- a/drivers/usb/gadget/udc/cdns2/cdns2-gadget.c
++++ b/drivers/usb/gadget/udc/cdns2/cdns2-gadget.c
+@@ -2251,7 +2251,6 @@ static int cdns2_gadget_start(struct cdns2_device *pdev)
+ {
+ 	u32 max_speed;
+ 	void *buf;
+-	int val;
+ 	int ret;
+ 
+ 	pdev->usb_regs = pdev->regs;
+@@ -2261,14 +2260,9 @@ static int cdns2_gadget_start(struct cdns2_device *pdev)
+ 	pdev->adma_regs = pdev->regs + CDNS2_ADMA_REGS_OFFSET;
+ 
+ 	/* Reset controller. */
+-	set_reg_bit_8(&pdev->usb_regs->cpuctrl, CPUCTRL_SW_RST);
+-
+-	ret = readl_poll_timeout_atomic(&pdev->usb_regs->cpuctrl, val,
+-					!(val & CPUCTRL_SW_RST), 1, 10000);
+-	if (ret) {
+-		dev_err(pdev->dev, "Error: reset controller timeout\n");
+-		return -EINVAL;
+-	}
++	writeb(CPUCTRL_SW_RST | CPUCTRL_UPCLK | CPUCTRL_WUEN,
++	       &pdev->usb_regs->cpuctrl);
++	usleep_range(5, 10);
+ 
+ 	usb_initialize_gadget(pdev->dev, &pdev->gadget, NULL);
+ 
+diff --git a/drivers/usb/gadget/udc/cdns2/cdns2-gadget.h b/drivers/usb/gadget/udc/cdns2/cdns2-gadget.h
+index 71e2f62d653a5..b5d5ec12e986e 100644
+--- a/drivers/usb/gadget/udc/cdns2/cdns2-gadget.h
++++ b/drivers/usb/gadget/udc/cdns2/cdns2-gadget.h
+@@ -292,8 +292,17 @@ struct cdns2_usb_regs {
+ #define SPEEDCTRL_HSDISABLE	BIT(7)
+ 
+ /* CPUCTRL- bitmasks. */
++/* UP clock enable */
++#define CPUCTRL_UPCLK		BIT(0)
+ /* Controller reset bit. */
+ #define CPUCTRL_SW_RST		BIT(1)
++/**
++ * If the wuen bit is ‘1’, the upclken is automatically set to ‘1’ after
++ * detecting rising edge of wuintereq interrupt. If the wuen bit is ‘0’,
++ * the wuintereq interrupt is ignored.
++ */
++#define CPUCTRL_WUEN		BIT(7)
++
+ 
+ /**
+  * struct cdns2_adma_regs - ADMA controller registers.
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index b610a2de4ae5d..a04b4cb1382d5 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -423,6 +423,7 @@ static void uas_data_cmplt(struct urb *urb)
+ 			uas_log_cmd_state(cmnd, "data cmplt err", status);
+ 		/* error: no data transfered */
+ 		scsi_set_resid(cmnd, sdb->length);
++		set_host_byte(cmnd, DID_ERROR);
+ 	} else {
+ 		scsi_set_resid(cmnd, sdb->length - urb->actual_length);
+ 	}
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 2cc7aedd490f1..45e91d065b3bd 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -961,6 +961,27 @@ static void ucsi_unregister_cable(struct ucsi_connector *con)
+ 	con->cable = NULL;
+ }
+ 
++static int ucsi_check_connector_capability(struct ucsi_connector *con)
++{
++	u64 command;
++	int ret;
++
++	if (!con->partner || con->ucsi->version < UCSI_VERSION_2_1)
++		return 0;
++
++	command = UCSI_GET_CONNECTOR_CAPABILITY | UCSI_CONNECTOR_NUMBER(con->num);
++	ret = ucsi_send_command(con->ucsi, command, &con->cap, sizeof(con->cap));
++	if (ret < 0) {
++		dev_err(con->ucsi->dev, "GET_CONNECTOR_CAPABILITY failed (%d)\n", ret);
++		return ret;
++	}
++
++	typec_partner_set_pd_revision(con->partner,
++		UCSI_CONCAP_FLAG_PARTNER_PD_MAJOR_REV_AS_BCD(con->cap.flags));
++
++	return ret;
++}
++
+ static void ucsi_pwr_opmode_change(struct ucsi_connector *con)
+ {
+ 	switch (UCSI_CONSTAT_PWR_OPMODE(con->status.flags)) {
+@@ -970,6 +991,7 @@ static void ucsi_pwr_opmode_change(struct ucsi_connector *con)
+ 		ucsi_partner_task(con, ucsi_get_src_pdos, 30, 0);
+ 		ucsi_partner_task(con, ucsi_check_altmodes, 30, 0);
+ 		ucsi_partner_task(con, ucsi_register_partner_pdos, 1, HZ);
++		ucsi_partner_task(con, ucsi_check_connector_capability, 1, HZ);
+ 		break;
+ 	case UCSI_CONSTAT_PWR_OPMODE_TYPEC1_5:
+ 		con->rdo = 0;
+@@ -1013,7 +1035,6 @@ static int ucsi_register_partner(struct ucsi_connector *con)
+ 
+ 	desc.identity = &con->partner_identity;
+ 	desc.usb_pd = pwr_opmode == UCSI_CONSTAT_PWR_OPMODE_PD;
+-	desc.pd_revision = UCSI_CONCAP_FLAG_PARTNER_PD_MAJOR_REV_AS_BCD(con->cap.flags);
+ 
+ 	partner = typec_register_partner(con->port, &desc);
+ 	if (IS_ERR(partner)) {
+@@ -1090,27 +1111,6 @@ static void ucsi_partner_change(struct ucsi_connector *con)
+ 			con->num, u_role);
+ }
+ 
+-static int ucsi_check_connector_capability(struct ucsi_connector *con)
+-{
+-	u64 command;
+-	int ret;
+-
+-	if (!con->partner || con->ucsi->version < UCSI_VERSION_2_0)
+-		return 0;
+-
+-	command = UCSI_GET_CONNECTOR_CAPABILITY | UCSI_CONNECTOR_NUMBER(con->num);
+-	ret = ucsi_send_command(con->ucsi, command, &con->cap, sizeof(con->cap));
+-	if (ret < 0) {
+-		dev_err(con->ucsi->dev, "GET_CONNECTOR_CAPABILITY failed (%d)\n", ret);
+-		return ret;
+-	}
+-
+-	typec_partner_set_pd_revision(con->partner,
+-		UCSI_CONCAP_FLAG_PARTNER_PD_MAJOR_REV_AS_BCD(con->cap.flags));
+-
+-	return ret;
+-}
+-
+ static int ucsi_check_connection(struct ucsi_connector *con)
+ {
+ 	u8 prev_flags = con->status.flags;
+@@ -1225,15 +1225,16 @@ static void ucsi_handle_connector_change(struct work_struct *work)
+ 		if (con->status.flags & UCSI_CONSTAT_CONNECTED) {
+ 			ucsi_register_partner(con);
+ 			ucsi_partner_task(con, ucsi_check_connection, 1, HZ);
+-			ucsi_partner_task(con, ucsi_check_connector_capability, 1, HZ);
+ 			if (con->ucsi->cap.features & UCSI_CAP_GET_PD_MESSAGE)
+ 				ucsi_partner_task(con, ucsi_get_partner_identity, 1, HZ);
+ 			if (con->ucsi->cap.features & UCSI_CAP_CABLE_DETAILS)
+ 				ucsi_partner_task(con, ucsi_check_cable, 1, HZ);
+ 
+ 			if (UCSI_CONSTAT_PWR_OPMODE(con->status.flags) ==
+-			    UCSI_CONSTAT_PWR_OPMODE_PD)
++			    UCSI_CONSTAT_PWR_OPMODE_PD) {
+ 				ucsi_partner_task(con, ucsi_register_partner_pdos, 1, HZ);
++				ucsi_partner_task(con, ucsi_check_connector_capability, 1, HZ);
++			}
+ 		} else {
+ 			ucsi_unregister_partner(con);
+ 		}
+@@ -1650,6 +1651,7 @@ static int ucsi_register_port(struct ucsi *ucsi, struct ucsi_connector *con)
+ 		ucsi_register_device_pdos(con);
+ 		ucsi_get_src_pdos(con);
+ 		ucsi_check_altmodes(con);
++		ucsi_check_connector_capability(con);
+ 	}
+ 
+ 	trace_ucsi_register_port(con->num, &con->status);
+diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
+index a94ec6225d31a..5f9e7e4770783 100644
+--- a/drivers/vfio/vfio_iommu_spapr_tce.c
++++ b/drivers/vfio/vfio_iommu_spapr_tce.c
+@@ -364,7 +364,6 @@ static void tce_iommu_release(void *iommu_data)
+ 		if (!tbl)
+ 			continue;
+ 
+-		tce_iommu_clear(container, tbl, tbl->it_offset, tbl->it_size);
+ 		tce_iommu_free_table(container, tbl);
+ 	}
+ 
+@@ -720,6 +719,8 @@ static long tce_iommu_remove_window(struct tce_container *container,
+ 
+ 	BUG_ON(!tbl->it_size);
+ 
++	tce_iommu_clear(container, tbl, tbl->it_offset, tbl->it_size);
++
+ 	/* Detach groups from IOMMUs */
+ 	list_for_each_entry(tcegrp, &container->group_list, next) {
+ 		table_group = iommu_group_get_iommudata(tcegrp->grp);
+@@ -738,7 +739,6 @@ static long tce_iommu_remove_window(struct tce_container *container,
+ 	}
+ 
+ 	/* Free table */
+-	tce_iommu_clear(container, tbl, tbl->it_offset, tbl->it_size);
+ 	tce_iommu_free_table(container, tbl);
+ 	container->tables[num] = NULL;
+ 
+@@ -1197,9 +1197,14 @@ static void tce_iommu_release_ownership(struct tce_container *container,
+ 		return;
+ 	}
+ 
+-	for (i = 0; i < IOMMU_TABLE_GROUP_MAX_TABLES; ++i)
+-		if (container->tables[i])
++	for (i = 0; i < IOMMU_TABLE_GROUP_MAX_TABLES; ++i) {
++		if (container->tables[i]) {
++			tce_iommu_clear(container, container->tables[i],
++					container->tables[i]->it_offset,
++					container->tables[i]->it_size);
+ 			table_group->ops->unset_window(table_group, i);
++		}
++	}
+ }
+ 
+ static long tce_iommu_take_ownership(struct tce_container *container,
+diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c
+index 654290a8e1ba3..a100d6241992c 100644
+--- a/drivers/virt/coco/sev-guest/sev-guest.c
++++ b/drivers/virt/coco/sev-guest/sev-guest.c
+@@ -1009,8 +1009,13 @@ static void __exit sev_guest_remove(struct platform_device *pdev)
+  * This driver is meant to be a common SEV guest interface driver and to
+  * support any SEV guest API. As such, even though it has been introduced
+  * with the SEV-SNP support, it is named "sev-guest".
++ *
++ * sev_guest_remove() lives in .exit.text. For drivers registered via
++ * module_platform_driver_probe() this is ok because they cannot get unbound
++ * at runtime. So mark the driver struct with __refdata to prevent modpost
++ * triggering a section mismatch warning.
+  */
+-static struct platform_driver sev_guest_driver = {
++static struct platform_driver sev_guest_driver __refdata = {
+ 	.remove_new	= __exit_p(sev_guest_remove),
+ 	.driver		= {
+ 		.name = "sev-guest",
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 2a972752ff1bc..9d3a9942c8c82 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -3121,8 +3121,10 @@ dma_addr_t virtqueue_dma_map_single_attrs(struct virtqueue *_vq, void *ptr,
+ {
+ 	struct vring_virtqueue *vq = to_vvq(_vq);
+ 
+-	if (!vq->use_dma_api)
++	if (!vq->use_dma_api) {
++		kmsan_handle_dma(virt_to_page(ptr), offset_in_page(ptr), size, dir);
+ 		return (dma_addr_t)virt_to_phys(ptr);
++	}
+ 
+ 	return dma_map_single_attrs(vring_dma_dev(vq), ptr, size, dir, attrs);
+ }
+diff --git a/drivers/watchdog/imx7ulp_wdt.c b/drivers/watchdog/imx7ulp_wdt.c
+index b21d7a74a42df..94914a22daff7 100644
+--- a/drivers/watchdog/imx7ulp_wdt.c
++++ b/drivers/watchdog/imx7ulp_wdt.c
+@@ -290,6 +290,11 @@ static int imx7ulp_wdt_init(struct imx7ulp_wdt_device *wdt, unsigned int timeout
+ 	if (wdt->ext_reset)
+ 		val |= WDOG_CS_INT_EN;
+ 
++	if (readl(wdt->base + WDOG_CS) & WDOG_CS_EN) {
++		set_bit(WDOG_HW_RUNNING, &wdt->wdd.status);
++		val |= WDOG_CS_EN;
++	}
++
+ 	do {
+ 		ret = _imx7ulp_wdt_init(wdt, timeout, val);
+ 		toval = readl(wdt->base + WDOG_TOVAL);
+diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
+index c9c620e32fa8b..39e726d7280ef 100644
+--- a/drivers/xen/privcmd.c
++++ b/drivers/xen/privcmd.c
+@@ -17,6 +17,7 @@
+ #include <linux/poll.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
++#include <linux/srcu.h>
+ #include <linux/string.h>
+ #include <linux/workqueue.h>
+ #include <linux/errno.h>
+@@ -846,6 +847,7 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
+ /* Irqfd support */
+ static struct workqueue_struct *irqfd_cleanup_wq;
+ static DEFINE_SPINLOCK(irqfds_lock);
++DEFINE_STATIC_SRCU(irqfds_srcu);
+ static LIST_HEAD(irqfds_list);
+ 
+ struct privcmd_kernel_irqfd {
+@@ -873,6 +875,9 @@ static void irqfd_shutdown(struct work_struct *work)
+ 		container_of(work, struct privcmd_kernel_irqfd, shutdown);
+ 	u64 cnt;
+ 
++	/* Make sure irqfd has been initialized in assign path */
++	synchronize_srcu(&irqfds_srcu);
++
+ 	eventfd_ctx_remove_wait_queue(kirqfd->eventfd, &kirqfd->wait, &cnt);
+ 	eventfd_ctx_put(kirqfd->eventfd);
+ 	kfree(kirqfd);
+@@ -935,7 +940,7 @@ static int privcmd_irqfd_assign(struct privcmd_irqfd *irqfd)
+ 	__poll_t events;
+ 	struct fd f;
+ 	void *dm_op;
+-	int ret;
++	int ret, idx;
+ 
+ 	kirqfd = kzalloc(sizeof(*kirqfd) + irqfd->size, GFP_KERNEL);
+ 	if (!kirqfd)
+@@ -981,6 +986,7 @@ static int privcmd_irqfd_assign(struct privcmd_irqfd *irqfd)
+ 		}
+ 	}
+ 
++	idx = srcu_read_lock(&irqfds_srcu);
+ 	list_add_tail(&kirqfd->list, &irqfds_list);
+ 	spin_unlock_irqrestore(&irqfds_lock, flags);
+ 
+@@ -992,6 +998,8 @@ static int privcmd_irqfd_assign(struct privcmd_irqfd *irqfd)
+ 	if (events & EPOLLIN)
+ 		irqfd_inject(kirqfd);
+ 
++	srcu_read_unlock(&irqfds_srcu, idx);
++
+ 	/*
+ 	 * Do not drop the file until the kirqfd is fully initialized, otherwise
+ 	 * we might race against the EPOLLHUP.
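
The privcmd irqfd hunks use SRCU purely for ordering: the assign path holds an SRCU read lock while it publishes the irqfd and finishes initializing it, and the shutdown work calls synchronize_srcu() first, so it cannot tear the object down before that initialization has completed. A stripped-down sketch of the handshake, with hypothetical publish/init/destroy helpers standing in for the real code:

#include <linux/srcu.h>

DEFINE_STATIC_SRCU(obj_srcu);

struct my_obj;
void publish_object(struct my_obj *obj);	/* hypothetical */
void finish_object_init(struct my_obj *obj);	/* hypothetical */
void destroy_object(struct my_obj *obj);	/* hypothetical */

static void obj_assign(struct my_obj *obj)
{
	int idx;

	idx = srcu_read_lock(&obj_srcu);
	publish_object(obj);		/* object becomes findable here */
	finish_object_init(obj);	/* late init, still inside the read section */
	srcu_read_unlock(&obj_srcu, idx);
}

static void obj_shutdown(struct my_obj *obj)
{
	/* Wait for any assign path still inside its read-side section. */
	synchronize_srcu(&obj_srcu);
	destroy_object(obj);
}
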
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index a43897b03ce94..777405719de85 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1003,7 +1003,8 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ 	if (elf_read_implies_exec(*elf_ex, executable_stack))
+ 		current->personality |= READ_IMPLIES_EXEC;
+ 
+-	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
++	const int snapshot_randomize_va_space = READ_ONCE(randomize_va_space);
++	if (!(current->personality & ADDR_NO_RANDOMIZE) && snapshot_randomize_va_space)
+ 		current->flags |= PF_RANDOMIZE;
+ 
+ 	setup_new_exec(bprm);
+@@ -1251,7 +1252,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ 	mm->end_data = end_data;
+ 	mm->start_stack = bprm->p;
+ 
+-	if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 1)) {
++	if ((current->flags & PF_RANDOMIZE) && (snapshot_randomize_va_space > 1)) {
+ 		/*
+ 		 * For architectures with ELF randomization, when executing
+ 		 * a loader directly (i.e. no interpreter listed in ELF
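
The binfmt_elf change reads the sysctl-backed randomize_va_space exactly once with READ_ONCE() and reuses that snapshot for both decisions, so a concurrent sysctl write cannot make the PF_RANDOMIZE setting and the later randomization check disagree. The general shape of the pattern, as a small sketch with hypothetical helpers:

#include <linux/compiler.h>

extern int my_tunable;		/* may be changed concurrently via sysctl */

void enable_feature(void);	/* hypothetical */
void enable_extra(void);	/* hypothetical */

static void configure(void)
{
	/* One snapshot; every later decision uses the same value. */
	const int snapshot = READ_ONCE(my_tunable);

	if (snapshot)
		enable_feature();
	if (snapshot > 1)
		enable_extra();
}
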
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 8a791b648ac53..f56914507fceb 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -462,8 +462,16 @@ static noinline int update_ref_for_cow(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	owner = btrfs_header_owner(buf);
+-	BUG_ON(owner == BTRFS_TREE_RELOC_OBJECTID &&
+-	       !(flags & BTRFS_BLOCK_FLAG_FULL_BACKREF));
++	if (unlikely(owner == BTRFS_TREE_RELOC_OBJECTID &&
++		     !(flags & BTRFS_BLOCK_FLAG_FULL_BACKREF))) {
++		btrfs_crit(fs_info,
++"found tree block at bytenr %llu level %d root %llu refs %llu flags %llx without full backref flag set",
++			   buf->start, btrfs_header_level(buf),
++			   btrfs_root_id(root), refs, flags);
++		ret = -EUCLEAN;
++		btrfs_abort_transaction(trans, ret);
++		return ret;
++	}
+ 
+ 	if (refs > 1) {
+ 		if ((owner == btrfs_root_id(root) ||
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index a56209d275c15..b2e4b30b8fae9 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -457,7 +457,6 @@ struct btrfs_file_private {
+ 	void *filldir_buf;
+ 	u64 last_index;
+ 	struct extent_state *llseek_cached_state;
+-	bool fsync_skip_inode_lock;
+ };
+ 
+ static inline u32 BTRFS_LEAF_DATA_SIZE(const struct btrfs_fs_info *info)
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 8bf980123c5c5..55be8a7f0bb18 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -173,9 +173,16 @@ int btrfs_lookup_extent_info(struct btrfs_trans_handle *trans,
+ 
+ 		ei = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_extent_item);
+ 		num_refs = btrfs_extent_refs(leaf, ei);
++		if (unlikely(num_refs == 0)) {
++			ret = -EUCLEAN;
++			btrfs_err(fs_info,
++		"unexpected zero reference count for extent item (%llu %u %llu)",
++				  key.objectid, key.type, key.offset);
++			btrfs_abort_transaction(trans, ret);
++			goto out_free;
++		}
+ 		extent_flags = btrfs_extent_flags(leaf, ei);
+ 		owner = btrfs_get_extent_owner_root(fs_info, leaf, path->slots[0]);
+-		BUG_ON(num_refs == 0);
+ 	} else {
+ 		num_refs = 0;
+ 		extent_flags = 0;
+@@ -205,10 +212,19 @@ int btrfs_lookup_extent_info(struct btrfs_trans_handle *trans,
+ 			goto search_again;
+ 		}
+ 		spin_lock(&head->lock);
+-		if (head->extent_op && head->extent_op->update_flags)
++		if (head->extent_op && head->extent_op->update_flags) {
+ 			extent_flags |= head->extent_op->flags_to_set;
+-		else
+-			BUG_ON(num_refs == 0);
++		} else if (unlikely(num_refs == 0)) {
++			spin_unlock(&head->lock);
++			mutex_unlock(&head->mutex);
++			spin_unlock(&delayed_refs->lock);
++			ret = -EUCLEAN;
++			btrfs_err(fs_info,
++			  "unexpected zero reference count for extent %llu (%s)",
++				  bytenr, metadata ? "metadata" : "data");
++			btrfs_abort_transaction(trans, ret);
++			goto out_free;
++		}
+ 
+ 		num_refs += head->ref_mod;
+ 		spin_unlock(&head->lock);
+@@ -5275,7 +5291,15 @@ static noinline void reada_walk_down(struct btrfs_trans_handle *trans,
+ 		/* We don't care about errors in readahead. */
+ 		if (ret < 0)
+ 			continue;
+-		BUG_ON(refs == 0);
++
++		/*
++		 * This could be racy; it's conceivable that we raced and ended
++		 * up with a bogus refs count. If that's the case just skip it;
++		 * if we are actually corrupt we will notice when we look up
++		 * everything again with our locks.
++		 */
++		if (refs == 0)
++			continue;
+ 
+ 		if (wc->stage == DROP_REFERENCE) {
+ 			if (refs == 1)
+@@ -5333,16 +5357,19 @@ static noinline int walk_down_proc(struct btrfs_trans_handle *trans,
+ 	if (lookup_info &&
+ 	    ((wc->stage == DROP_REFERENCE && wc->refs[level] != 1) ||
+ 	     (wc->stage == UPDATE_BACKREF && !(wc->flags[level] & flag)))) {
+-		BUG_ON(!path->locks[level]);
++		ASSERT(path->locks[level]);
+ 		ret = btrfs_lookup_extent_info(trans, fs_info,
+ 					       eb->start, level, 1,
+ 					       &wc->refs[level],
+ 					       &wc->flags[level],
+ 					       NULL);
+-		BUG_ON(ret == -ENOMEM);
+ 		if (ret)
+ 			return ret;
+-		BUG_ON(wc->refs[level] == 0);
++		if (unlikely(wc->refs[level] == 0)) {
++			btrfs_err(fs_info, "bytenr %llu has 0 references, expect > 0",
++				  eb->start);
++			return -EUCLEAN;
++		}
+ 	}
+ 
+ 	if (wc->stage == DROP_REFERENCE) {
+@@ -5358,7 +5385,7 @@ static noinline int walk_down_proc(struct btrfs_trans_handle *trans,
+ 
+ 	/* wc->stage == UPDATE_BACKREF */
+ 	if (!(wc->flags[level] & flag)) {
+-		BUG_ON(!path->locks[level]);
++		ASSERT(path->locks[level]);
+ 		ret = btrfs_inc_ref(trans, root, eb, 1);
+ 		BUG_ON(ret); /* -ENOMEM */
+ 		ret = btrfs_dec_ref(trans, root, eb, 0);
+@@ -5515,8 +5542,9 @@ static noinline int do_walk_down(struct btrfs_trans_handle *trans,
+ 		goto out_unlock;
+ 
+ 	if (unlikely(wc->refs[level - 1] == 0)) {
+-		btrfs_err(fs_info, "Missing references.");
+-		ret = -EIO;
++		btrfs_err(fs_info, "bytenr %llu has 0 references, expect > 0",
++			  bytenr);
++		ret = -EUCLEAN;
+ 		goto out_unlock;
+ 	}
+ 	*lookup_info = 0;
+@@ -5719,7 +5747,12 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans,
+ 				path->locks[level] = 0;
+ 				return ret;
+ 			}
+-			BUG_ON(wc->refs[level] == 0);
++			if (unlikely(wc->refs[level] == 0)) {
++				btrfs_tree_unlock_rw(eb, path->locks[level]);
++				btrfs_err(fs_info, "bytenr %llu has 0 references, expect > 0",
++					  eb->start);
++				return -EUCLEAN;
++			}
+ 			if (wc->refs[level] == 1) {
+ 				btrfs_tree_unlock_rw(eb, path->locks[level]);
+ 				path->locks[level] = 0;
+@@ -5737,7 +5770,10 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans,
+ 				ret = btrfs_dec_ref(trans, root, eb, 1);
+ 			else
+ 				ret = btrfs_dec_ref(trans, root, eb, 0);
+-			BUG_ON(ret); /* -ENOMEM */
++			if (ret) {
++				btrfs_abort_transaction(trans, ret);
++				return ret;
++			}
+ 			if (is_fstree(btrfs_root_id(root))) {
+ 				ret = btrfs_qgroup_trace_leaf_items(trans, eb);
+ 				if (ret) {
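
A recurring theme in the extent-tree hunks above is turning BUG_ON() on corrupted metadata into a reported error: log the offending values, abort the transaction, and return -EUCLEAN so the filesystem goes read-only instead of crashing. The outline below is only a sketch of that pattern in btrfs context (it assumes the usual btrfs headers and uses the same helpers the hunks do), not a copy of any one hunk.

/* Sketch: mirrors the error handling style used in the hunks above. */
static int check_refs(struct btrfs_trans_handle *trans,
		      struct btrfs_fs_info *fs_info, u64 bytenr, u64 refs)
{
	if (unlikely(refs == 0)) {
		int ret = -EUCLEAN;

		btrfs_err(fs_info,
			  "unexpected zero reference count for extent %llu",
			  bytenr);
		btrfs_abort_transaction(trans, ret);
		return ret;
	}
	return 0;
}
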
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index ca434f0cd27f0..66dfee8739067 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1558,13 +1558,6 @@ static ssize_t btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
+ 	if (IS_ERR_OR_NULL(dio)) {
+ 		ret = PTR_ERR_OR_ZERO(dio);
+ 	} else {
+-		struct btrfs_file_private stack_private = { 0 };
+-		struct btrfs_file_private *private;
+-		const bool have_private = (file->private_data != NULL);
+-
+-		if (!have_private)
+-			file->private_data = &stack_private;
+-
+ 		/*
+ 		 * If we have a synchoronous write, we must make sure the fsync
+ 		 * triggered by the iomap_dio_complete() call below doesn't
+@@ -1573,13 +1566,10 @@ static ssize_t btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
+ 		 * partial writes due to the input buffer (or parts of it) not
+ 		 * being already faulted in.
+ 		 */
+-		private = file->private_data;
+-		private->fsync_skip_inode_lock = true;
++		ASSERT(current->journal_info == NULL);
++		current->journal_info = BTRFS_TRANS_DIO_WRITE_STUB;
+ 		ret = iomap_dio_complete(dio);
+-		private->fsync_skip_inode_lock = false;
+-
+-		if (!have_private)
+-			file->private_data = NULL;
++		current->journal_info = NULL;
+ 	}
+ 
+ 	/* No increment (+=) because iomap returns a cumulative value. */
+@@ -1811,7 +1801,6 @@ static inline bool skip_inode_logging(const struct btrfs_log_ctx *ctx)
+  */
+ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ {
+-	struct btrfs_file_private *private = file->private_data;
+ 	struct dentry *dentry = file_dentry(file);
+ 	struct inode *inode = d_inode(dentry);
+ 	struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
+@@ -1821,7 +1810,13 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	int ret = 0, err;
+ 	u64 len;
+ 	bool full_sync;
+-	const bool skip_ilock = (private ? private->fsync_skip_inode_lock : false);
++	bool skip_ilock = false;
++
++	if (current->journal_info == BTRFS_TRANS_DIO_WRITE_STUB) {
++		skip_ilock = true;
++		current->journal_info = NULL;
++		lockdep_assert_held(&inode->i_rwsem);
++	}
+ 
+ 	trace_btrfs_sync_file(file, datasync);
+ 
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index c2f48fc159e5a..2951aa0039fc6 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -5699,7 +5699,7 @@ struct inode *btrfs_lookup_dentry(struct inode *dir, struct dentry *dentry)
+ 	struct inode *inode;
+ 	struct btrfs_root *root = BTRFS_I(dir)->root;
+ 	struct btrfs_root *sub_root = root;
+-	struct btrfs_key location;
++	struct btrfs_key location = { 0 };
+ 	u8 di_type = 0;
+ 	int ret = 0;
+ 
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 3faf2181d1ee8..24df831770075 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1750,13 +1750,55 @@ int btrfs_create_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ 	return ret;
+ }
+ 
+-static bool qgroup_has_usage(struct btrfs_qgroup *qgroup)
++/*
++ * Return 0 if we can not delete the qgroup (not empty or has children etc).
++ * Return >0 if we can delete the qgroup.
++ * Return <0 for other errors during tree search.
++ */
++static int can_delete_qgroup(struct btrfs_fs_info *fs_info, struct btrfs_qgroup *qgroup)
+ {
+-	return (qgroup->rfer > 0 || qgroup->rfer_cmpr > 0 ||
+-		qgroup->excl > 0 || qgroup->excl_cmpr > 0 ||
+-		qgroup->rsv.values[BTRFS_QGROUP_RSV_DATA] > 0 ||
+-		qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC] > 0 ||
+-		qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS] > 0);
++	struct btrfs_key key;
++	struct btrfs_path *path;
++	int ret;
++
++	/*
++	 * Squota would never be inconsistent, but there can still be case
++	 * where a dropped subvolume still has qgroup numbers, and squota
++	 * relies on such qgroup for future accounting.
++	 *
++	 * So for squota, do not allow dropping any non-zero qgroup.
++	 */
++	if (btrfs_qgroup_mode(fs_info) == BTRFS_QGROUP_MODE_SIMPLE &&
++	    (qgroup->rfer || qgroup->excl || qgroup->excl_cmpr || qgroup->rfer_cmpr))
++		return 0;
++
++	/* For higher level qgroup, we can only delete it if it has no child. */
++	if (btrfs_qgroup_level(qgroup->qgroupid)) {
++		if (!list_empty(&qgroup->members))
++			return 0;
++		return 1;
++	}
++
++	/*
++	 * For level-0 qgroups, we can only delete one if it has no subvolume
++	 * associated with it.
++	 * This means that even if a subvolume is unlinked but not yet fully
++	 * dropped, we can not delete the qgroup.
++	 */
++	key.objectid = qgroup->qgroupid;
++	key.type = BTRFS_ROOT_ITEM_KEY;
++	key.offset = -1ULL;
++	path = btrfs_alloc_path();
++	if (!path)
++		return -ENOMEM;
++
++	ret = btrfs_find_root(fs_info->tree_root, &key, path, NULL, NULL);
++	btrfs_free_path(path);
++	/*
++	 * The @ret from btrfs_find_root() exactly matches our definition for
++	 * the return value, thus can be returned directly.
++	 */
++	return ret;
+ }
+ 
+ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+@@ -1778,7 +1820,10 @@ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ 		goto out;
+ 	}
+ 
+-	if (is_fstree(qgroupid) && qgroup_has_usage(qgroup)) {
++	ret = can_delete_qgroup(fs_info, qgroup);
++	if (ret < 0)
++		goto out;
++	if (ret == 0) {
+ 		ret = -EBUSY;
+ 		goto out;
+ 	}
+@@ -1803,6 +1848,34 @@ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ 	}
+ 
+ 	spin_lock(&fs_info->qgroup_lock);
++	/*
++	 * Warn on reserved space. The subvolume should have no child nor
++	 * corresponding subvolume.
++	 * Thus its reserved space should all be zero, no matter if qgroup
++	 * is consistent or the mode.
++	 */
++	WARN_ON(qgroup->rsv.values[BTRFS_QGROUP_RSV_DATA] ||
++		qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC] ||
++		qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS]);
++	/*
++	 * The same for rfer/excl numbers, but that's only if our qgroup is
++	 * consistent and if it's in regular qgroup mode.
++	 * For simple mode it's not as accurate thus we can hit non-zero values
++	 * very frequently.
++	 */
++	if (btrfs_qgroup_mode(fs_info) == BTRFS_QGROUP_MODE_FULL &&
++	    !(fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT)) {
++		if (WARN_ON(qgroup->rfer || qgroup->excl ||
++			    qgroup->rfer_cmpr || qgroup->excl_cmpr)) {
++			btrfs_warn_rl(fs_info,
++"to be deleted qgroup %u/%llu has non-zero numbers, rfer %llu rfer_cmpr %llu excl %llu excl_cmpr %llu",
++				      btrfs_qgroup_level(qgroup->qgroupid),
++				      btrfs_qgroup_subvolid(qgroup->qgroupid),
++				      qgroup->rfer, qgroup->rfer_cmpr,
++				      qgroup->excl, qgroup->excl_cmpr);
++			qgroup_mark_inconsistent(fs_info);
++		}
++	}
+ 	del_qgroup_rb(fs_info, qgroupid);
+ 	spin_unlock(&fs_info->qgroup_lock);
+ 
+@@ -4269,10 +4342,9 @@ static int __btrfs_qgroup_release_data(struct btrfs_inode *inode,
+ 	int ret;
+ 
+ 	if (btrfs_qgroup_mode(inode->root->fs_info) == BTRFS_QGROUP_MODE_DISABLED) {
+-		extent_changeset_init(&changeset);
+ 		return clear_record_extent_bits(&inode->io_tree, start,
+ 						start + len - 1,
+-						EXTENT_QGROUP_RESERVED, &changeset);
++						EXTENT_QGROUP_RESERVED, NULL);
+ 	}
+ 
+ 	/* In release case, we shouldn't have @reserved */
+diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h
+index 4e451ab173b10..62ec85f4b7774 100644
+--- a/fs/btrfs/transaction.h
++++ b/fs/btrfs/transaction.h
+@@ -27,6 +27,12 @@ struct btrfs_root_item;
+ struct btrfs_root;
+ struct btrfs_path;
+ 
++/*
++ * Signal that a direct IO write is in progress, to avoid deadlock for sync
++ * direct IO writes when fsync is called during the direct IO write path.
++ */
++#define BTRFS_TRANS_DIO_WRITE_STUB	((void *) 1)
++
+ /* Radix-tree tag for roots that are part of the trasaction. */
+ #define BTRFS_ROOT_TRANS_TAG			0
+ 
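
Together, the file.c and transaction.h hunks replace the old per-file fsync_skip_inode_lock flag with a per-task signal: the direct IO write path stores a sentinel in current->journal_info before completing the DIO, and btrfs_sync_file() interprets that sentinel as "the inode lock is already held, do not take it again". A rough sketch of the handshake; complete_dio() and do_fsync() are hypothetical stand-ins.

#include <linux/sched.h>

#define MY_DIO_WRITE_STUB	((void *)1)	/* sentinel value, as in the patch */

void complete_dio(void);	/* hypothetical; may end up calling the fsync path */
void do_fsync(bool skip_ilock);	/* hypothetical */

static void dio_write_complete(void)
{
	/* Caller side: mark this task as being inside a DIO write. */
	current->journal_info = MY_DIO_WRITE_STUB;
	complete_dio();
	current->journal_info = NULL;
}

static void fsync_path(void)
{
	bool skip_ilock = false;

	/* Callee side: detect the sentinel and clear it immediately. */
	if (current->journal_info == MY_DIO_WRITE_STUB) {
		skip_ilock = true;
		current->journal_info = NULL;
	}
	do_fsync(skip_ilock);
}
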
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 947a87576f6c6..35829a1a2238e 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -1408,6 +1408,8 @@ static int btrfs_load_block_group_dup(struct btrfs_block_group *bg,
+ 		return -EINVAL;
+ 	}
+ 
++	bg->zone_capacity = min_not_zero(zone_info[0].capacity, zone_info[1].capacity);
++
+ 	if (zone_info[0].alloc_offset == WP_MISSING_DEV) {
+ 		btrfs_err(bg->fs_info,
+ 			  "zoned: cannot recover write pointer for zone %llu",
+@@ -1434,7 +1436,6 @@ static int btrfs_load_block_group_dup(struct btrfs_block_group *bg,
+ 	}
+ 
+ 	bg->alloc_offset = zone_info[0].alloc_offset;
+-	bg->zone_capacity = min(zone_info[0].capacity, zone_info[1].capacity);
+ 	return 0;
+ }
+ 
+@@ -1452,6 +1453,9 @@ static int btrfs_load_block_group_raid1(struct btrfs_block_group *bg,
+ 		return -EINVAL;
+ 	}
+ 
++	/* In case a device is missing we have a cap of 0, so don't use it. */
++	bg->zone_capacity = min_not_zero(zone_info[0].capacity, zone_info[1].capacity);
++
+ 	for (i = 0; i < map->num_stripes; i++) {
+ 		if (zone_info[i].alloc_offset == WP_MISSING_DEV ||
+ 		    zone_info[i].alloc_offset == WP_CONVENTIONAL)
+@@ -1473,9 +1477,6 @@ static int btrfs_load_block_group_raid1(struct btrfs_block_group *bg,
+ 			if (test_bit(0, active))
+ 				set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &bg->runtime_flags);
+ 		}
+-		/* In case a device is missing we have a cap of 0, so don't use it. */
+-		bg->zone_capacity = min_not_zero(zone_info[0].capacity,
+-						 zone_info[1].capacity);
+ 	}
+ 
+ 	if (zone_info[0].alloc_offset != WP_MISSING_DEV)
+@@ -1565,6 +1566,7 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
+ 	unsigned long *active = NULL;
+ 	u64 last_alloc = 0;
+ 	u32 num_sequential = 0, num_conventional = 0;
++	u64 profile;
+ 
+ 	if (!btrfs_is_zoned(fs_info))
+ 		return 0;
+@@ -1625,7 +1627,8 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
+ 		}
+ 	}
+ 
+-	switch (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
++	profile = map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK;
++	switch (profile) {
+ 	case 0: /* single */
+ 		ret = btrfs_load_block_group_single(cache, &zone_info[0], active);
+ 		break;
+@@ -1652,6 +1655,23 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
+ 		goto out;
+ 	}
+ 
++	if (ret == -EIO && profile != 0 && profile != BTRFS_BLOCK_GROUP_RAID0 &&
++	    profile != BTRFS_BLOCK_GROUP_RAID10) {
++		/*
++		 * Detected broken write pointer.  Make this block group
++		 * unallocatable by setting the allocation pointer at the end of
++		 * allocatable region. Relocating this block group will fix the
++		 * mismatch.
++		 *
++		 * Currently, we cannot handle RAID0 or RAID10 case like this
++		 * because we don't have a proper zone_capacity value. But,
++		 * reading from this block group won't work anyway by a missing
++		 * stripe.
++		 */
++		cache->alloc_offset = cache->zone_capacity;
++		ret = 0;
++	}
++
+ out:
+ 	/* Reject non SINGLE data profiles without RST */
+ 	if ((map->type & BTRFS_BLOCK_GROUP_DATA) &&
+diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
+index e667dbcd20e8c..a91acd03ee129 100644
+--- a/fs/cachefiles/io.c
++++ b/fs/cachefiles/io.c
+@@ -630,7 +630,7 @@ static void cachefiles_prepare_write_subreq(struct netfs_io_subrequest *subreq)
+ 
+ 	_enter("W=%x[%x] %llx", wreq->debug_id, subreq->debug_index, subreq->start);
+ 
+-	subreq->max_len = ULONG_MAX;
++	subreq->max_len = MAX_RW_COUNT;
+ 	subreq->max_nr_segs = BIO_MAX_VECS;
+ 
+ 	if (!cachefiles_cres_file(cres)) {
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index d3a67bc06d109..3926a05eceeed 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -353,7 +353,7 @@ void ext4_fc_mark_ineligible(struct super_block *sb, int reason, handle_t *handl
+ 		read_unlock(&sbi->s_journal->j_state_lock);
+ 	}
+ 	spin_lock(&sbi->s_fc_lock);
+-	if (sbi->s_fc_ineligible_tid < tid)
++	if (tid_gt(tid, sbi->s_fc_ineligible_tid))
+ 		sbi->s_fc_ineligible_tid = tid;
+ 	spin_unlock(&sbi->s_fc_lock);
+ 	WARN_ON(reason >= EXT4_FC_REASON_MAX);
+@@ -1213,7 +1213,7 @@ int ext4_fc_commit(journal_t *journal, tid_t commit_tid)
+ 	if (ret == -EALREADY) {
+ 		/* There was an ongoing commit, check if we need to restart */
+ 		if (atomic_read(&sbi->s_fc_subtid) <= subtid &&
+-			commit_tid > journal->j_commit_sequence)
++		    tid_gt(commit_tid, journal->j_commit_sequence))
+ 			goto restart_fc;
+ 		ext4_fc_update_stats(sb, EXT4_FC_STATUS_SKIPPED, 0, 0,
+ 				commit_tid);
+@@ -1288,7 +1288,7 @@ static void ext4_fc_cleanup(journal_t *journal, int full, tid_t tid)
+ 		list_del_init(&iter->i_fc_list);
+ 		ext4_clear_inode_state(&iter->vfs_inode,
+ 				       EXT4_STATE_FC_COMMITTING);
+-		if (iter->i_sync_tid <= tid)
++		if (tid_geq(tid, iter->i_sync_tid))
+ 			ext4_fc_reset_inode(&iter->vfs_inode);
+ 		/* Make sure EXT4_STATE_FC_COMMITTING bit is clear */
+ 		smp_mb();
+@@ -1319,7 +1319,7 @@ static void ext4_fc_cleanup(journal_t *journal, int full, tid_t tid)
+ 	list_splice_init(&sbi->s_fc_q[FC_Q_STAGING],
+ 				&sbi->s_fc_q[FC_Q_MAIN]);
+ 
+-	if (tid >= sbi->s_fc_ineligible_tid) {
++	if (tid_geq(tid, sbi->s_fc_ineligible_tid)) {
+ 		sbi->s_fc_ineligible_tid = 0;
+ 		ext4_clear_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
+ 	}
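
The ext4 fast-commit hunks replace plain </>= comparisons of journal tids with tid_gt()/tid_geq(), which are safe across wraparound: a tid is a 32-bit counter, so ordering must be judged by the sign of the signed difference, not by raw magnitude. A self-contained userspace sketch of that comparison (my own helper names, mirroring the jbd2 ones):

#include <stdint.h>
#include <stdio.h>

/* Wraparound-safe "a is after b" for a 32-bit sequence counter. */
static int seq_gt(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) > 0;
}

static int seq_geq(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) >= 0;
}

int main(void)
{
	/* 0x2 comes "after" 0xfffffffe even though it is numerically smaller. */
	printf("%d\n", seq_gt(0x00000002u, 0xfffffffeu));	/* 1 */
	printf("%d\n", seq_gt(0xfffffffeu, 0x00000002u));	/* 0 */
	printf("%d\n", seq_geq(5u, 5u));			/* 1 */
	return 0;
}
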
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 7146038b2fe7d..f0c9cd1a0b397 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -31,6 +31,8 @@ MODULE_ALIAS("devname:fuse");
+ 
+ static struct kmem_cache *fuse_req_cachep;
+ 
++static void end_requests(struct list_head *head);
++
+ static struct fuse_dev *fuse_get_dev(struct file *file)
+ {
+ 	/*
+@@ -773,7 +775,6 @@ static int fuse_check_folio(struct folio *folio)
+ 	    (folio->flags & PAGE_FLAGS_CHECK_AT_PREP &
+ 	     ~(1 << PG_locked |
+ 	       1 << PG_referenced |
+-	       1 << PG_uptodate |
+ 	       1 << PG_lru |
+ 	       1 << PG_active |
+ 	       1 << PG_workingset |
+@@ -818,9 +819,7 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
+ 
+ 	newfolio = page_folio(buf->page);
+ 
+-	if (!folio_test_uptodate(newfolio))
+-		folio_mark_uptodate(newfolio);
+-
++	folio_clear_uptodate(newfolio);
+ 	folio_clear_mappedtodisk(newfolio);
+ 
+ 	if (fuse_check_folio(newfolio) != 0)
+@@ -1822,6 +1821,13 @@ static void fuse_resend(struct fuse_conn *fc)
+ 	}
+ 
+ 	spin_lock(&fiq->lock);
++	if (!fiq->connected) {
++		spin_unlock(&fiq->lock);
++		list_for_each_entry(req, &to_queue, list)
++			clear_bit(FR_PENDING, &req->flags);
++		end_requests(&to_queue);
++		return;
++	}
+ 	/* iq and pq requests are both oldest to newest */
+ 	list_splice(&to_queue, &fiq->pending);
+ 	fiq->ops->wake_pending_and_unlock(fiq);
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 2b0d4781f3948..8e96df9fd76c9 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -670,7 +670,7 @@ static int fuse_create_open(struct inode *dir, struct dentry *entry,
+ 
+ 	err = get_create_ext(&args, dir, entry, mode);
+ 	if (err)
+-		goto out_put_forget_req;
++		goto out_free_ff;
+ 
+ 	err = fuse_simple_request(fm, &args);
+ 	free_ext_value(&args);
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index f39456c65ed7f..ed76121f73f2e 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1832,10 +1832,16 @@ __acquires(fi->lock)
+ 	fuse_writepage_finish(fm, wpa);
+ 	spin_unlock(&fi->lock);
+ 
+-	/* After fuse_writepage_finish() aux request list is private */
++	/* After rb_erase() aux request list is private */
+ 	for (aux = wpa->next; aux; aux = next) {
++		struct backing_dev_info *bdi = inode_to_bdi(aux->inode);
++
+ 		next = aux->next;
+ 		aux->next = NULL;
++
++		dec_wb_stat(&bdi->wb, WB_WRITEBACK);
++		dec_node_page_state(aux->ia.ap.pages[0], NR_WRITEBACK_TEMP);
++		wb_writeout_inc(&bdi->wb);
+ 		fuse_writepage_free(aux);
+ 	}
+ 
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 32fe6fa72f460..57863e7739821 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -1336,11 +1336,16 @@ static void process_init_reply(struct fuse_mount *fm, struct fuse_args *args,
+ 			 * on a stacked fs (e.g. overlayfs) themselves and with
+ 			 * max_stack_depth == 1, FUSE fs can be stacked as the
+ 			 * underlying fs of a stacked fs (e.g. overlayfs).
++			 *
++			 * Also don't allow the combination of FUSE_PASSTHROUGH
++			 * and FUSE_WRITEBACK_CACHE; the current design doesn't
++			 * handle them together.
+ 			 */
+ 			if (IS_ENABLED(CONFIG_FUSE_PASSTHROUGH) &&
+ 			    (flags & FUSE_PASSTHROUGH) &&
+ 			    arg->max_stack_depth > 0 &&
+-			    arg->max_stack_depth <= FILESYSTEM_MAX_STACK_DEPTH) {
++			    arg->max_stack_depth <= FILESYSTEM_MAX_STACK_DEPTH &&
++			    !(flags & FUSE_WRITEBACK_CACHE))  {
+ 				fc->passthrough = 1;
+ 				fc->max_stack_depth = arg->max_stack_depth;
+ 				fm->sb->s_stack_depth = arg->max_stack_depth;
+diff --git a/fs/fuse/xattr.c b/fs/fuse/xattr.c
+index 5b423fdbb13f8..9f568d345c512 100644
+--- a/fs/fuse/xattr.c
++++ b/fs/fuse/xattr.c
+@@ -81,7 +81,7 @@ ssize_t fuse_getxattr(struct inode *inode, const char *name, void *value,
+ 	}
+ 	ret = fuse_simple_request(fm, &args);
+ 	if (!ret && !size)
+-		ret = min_t(ssize_t, outarg.size, XATTR_SIZE_MAX);
++		ret = min_t(size_t, outarg.size, XATTR_SIZE_MAX);
+ 	if (ret == -ENOSYS) {
+ 		fm->fc->no_getxattr = 1;
+ 		ret = -EOPNOTSUPP;
+@@ -143,7 +143,7 @@ ssize_t fuse_listxattr(struct dentry *entry, char *list, size_t size)
+ 	}
+ 	ret = fuse_simple_request(fm, &args);
+ 	if (!ret && !size)
+-		ret = min_t(ssize_t, outarg.size, XATTR_LIST_MAX);
++		ret = min_t(size_t, outarg.size, XATTR_LIST_MAX);
+ 	if (ret > 0 && size)
+ 		ret = fuse_verify_xattr_list(list, ret);
+ 	if (ret == -ENOSYS) {
+diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
+index 1f7664984d6e4..0d14b5f39be6a 100644
+--- a/fs/jbd2/recovery.c
++++ b/fs/jbd2/recovery.c
+@@ -443,6 +443,27 @@ static int jbd2_commit_block_csum_verify(journal_t *j, void *buf)
+ 	return provided == cpu_to_be32(calculated);
+ }
+ 
++static bool jbd2_commit_block_csum_verify_partial(journal_t *j, void *buf)
++{
++	struct commit_header *h;
++	__be32 provided;
++	__u32 calculated;
++	void *tmpbuf;
++
++	tmpbuf = kzalloc(j->j_blocksize, GFP_KERNEL);
++	if (!tmpbuf)
++		return false;
++
++	memcpy(tmpbuf, buf, sizeof(struct commit_header));
++	h = tmpbuf;
++	provided = h->h_chksum[0];
++	h->h_chksum[0] = 0;
++	calculated = jbd2_chksum(j, j->j_csum_seed, tmpbuf, j->j_blocksize);
++	kfree(tmpbuf);
++
++	return provided == cpu_to_be32(calculated);
++}
++
+ static int jbd2_block_tag_csum_verify(journal_t *j, journal_block_tag_t *tag,
+ 				      journal_block_tag3_t *tag3,
+ 				      void *buf, __u32 sequence)
+@@ -810,6 +831,13 @@ static int do_one_pass(journal_t *journal,
+ 			if (pass == PASS_SCAN &&
+ 			    !jbd2_commit_block_csum_verify(journal,
+ 							   bh->b_data)) {
++				if (jbd2_commit_block_csum_verify_partial(
++								  journal,
++								  bh->b_data)) {
++					pr_notice("JBD2: Find incomplete commit block in transaction %u block %lu\n",
++						  next_commit_ID, next_log_block);
++					goto chksum_ok;
++				}
+ 			chksum_error:
+ 				if (commit_time < last_trans_commit_time)
+ 					goto ignore_crc_mismatch;
+@@ -824,6 +852,7 @@ static int do_one_pass(journal_t *journal,
+ 				}
+ 			}
+ 			if (pass == PASS_SCAN) {
++			chksum_ok:
+ 				last_trans_commit_time = commit_time;
+ 				head_block = next_log_block;
+ 			}
+@@ -843,6 +872,7 @@ static int do_one_pass(journal_t *journal,
+ 					  next_log_block);
+ 				need_check_commit_time = true;
+ 			}
++
+ 			/* If we aren't in the REVOKE pass, then we can
+ 			 * just skip over this block. */
+ 			if (pass != PASS_REVOKE) {
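
The recovery hunk accepts a commit block that was only partially written (a torn write at the end of the log): it copies just the commit header into a zeroed block-sized buffer, recomputes the checksum over that buffer with the checksum field excluded, and treats a match as a valid but incomplete commit block. A hedged sketch of the idea; checksum() is a hypothetical stand-in for jbd2_chksum() and the sizes/offsets are plain parameters rather than the real struct layout.

#include <linux/types.h>
#include <linux/slab.h>
#include <linux/string.h>

u32 checksum(const void *buf, size_t len);	/* hypothetical */

static bool commit_block_partial_ok(const void *buf, size_t blocksize,
				    size_t hdr_size, size_t csum_off,
				    u32 stored_csum)
{
	u32 calc;
	u8 *tmp;

	tmp = kzalloc(blocksize, GFP_KERNEL);
	if (!tmp)
		return false;

	memcpy(tmp, buf, hdr_size);		/* keep the header, rest stays zero */
	memset(tmp + csum_off, 0, sizeof(u32));	/* checksum field is not covered */
	calc = checksum(tmp, blocksize);
	kfree(tmp);

	return calc == stored_csum;
}
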
+diff --git a/fs/libfs.c b/fs/libfs.c
+index 65279e53fbf27..bb7d6e523ee34 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -2043,12 +2043,12 @@ struct timespec64 simple_inode_init_ts(struct inode *inode)
+ }
+ EXPORT_SYMBOL(simple_inode_init_ts);
+ 
+-static inline struct dentry *get_stashed_dentry(struct dentry *stashed)
++static inline struct dentry *get_stashed_dentry(struct dentry **stashed)
+ {
+ 	struct dentry *dentry;
+ 
+ 	guard(rcu)();
+-	dentry = READ_ONCE(stashed);
++	dentry = rcu_dereference(*stashed);
+ 	if (!dentry)
+ 		return NULL;
+ 	if (!lockref_get_not_dead(&dentry->d_lockref))
+@@ -2145,7 +2145,7 @@ int path_from_stashed(struct dentry **stashed, struct vfsmount *mnt, void *data,
+ 	const struct stashed_operations *sops = mnt->mnt_sb->s_fs_info;
+ 
+ 	/* See if dentry can be reused. */
+-	path->dentry = get_stashed_dentry(*stashed);
++	path->dentry = get_stashed_dentry(stashed);
+ 	if (path->dentry) {
+ 		sops->put_data(data);
+ 		goto out_path;
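
The libfs fix passes the address of the stash slot and loads it with rcu_dereference() inside the RCU read-side section, rather than reading the pointer once outside and handing in a possibly stale copy. A condensed sketch of the corrected shape, with a hypothetical item type and refcount helper:

#include <linux/types.h>
#include <linux/rcupdate.h>

struct item;
bool try_get_ref(struct item *it);	/* hypothetical refcount helper */

static struct item *get_stashed(struct item **slot)
{
	struct item *it;

	rcu_read_lock();
	it = rcu_dereference(*slot);	/* re-read the slot under RCU */
	if (it && !try_get_ref(it))
		it = NULL;
	rcu_read_unlock();

	return it;
}

/* Callers pass &stash (the slot), never a previously loaded value. */
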
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 5a51315c66781..e1ced589d8357 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -4906,6 +4906,7 @@ static int copy_statmount_to_user(struct kstatmount *s)
+ static int do_statmount(struct kstatmount *s)
+ {
+ 	struct mount *m = real_mount(s->mnt);
++	struct mnt_namespace *ns = m->mnt_ns;
+ 	int err;
+ 
+ 	/*
+@@ -4913,7 +4914,7 @@ static int do_statmount(struct kstatmount *s)
+ 	 * mounts to show users.
+ 	 */
+ 	if (!is_path_reachable(m, m->mnt.mnt_root, &s->root) &&
+-	    !ns_capable_noaudit(&init_user_ns, CAP_SYS_ADMIN))
++	    !ns_capable_noaudit(ns->user_ns, CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
+ 	err = security_sb_statfs(s->mnt->mnt_root);
+@@ -5047,32 +5048,50 @@ static struct mount *listmnt_next(struct mount *curr)
+ 	return node_to_mount(rb_next(&curr->mnt_node));
+ }
+ 
+-static ssize_t do_listmount(struct mount *first, struct path *orig,
+-			    u64 mnt_parent_id, u64 __user *mnt_ids,
+-			    size_t nr_mnt_ids, const struct path *root)
++static ssize_t do_listmount(u64 mnt_parent_id, u64 last_mnt_id, u64 *mnt_ids,
++			    size_t nr_mnt_ids)
+ {
+-	struct mount *r;
++	struct path root __free(path_put) = {};
++	struct mnt_namespace *ns = current->nsproxy->mnt_ns;
++	struct path orig;
++	struct mount *r, *first;
+ 	ssize_t ret;
+ 
++	rwsem_assert_held(&namespace_sem);
++
++	get_fs_root(current->fs, &root);
++	if (mnt_parent_id == LSMT_ROOT) {
++		orig = root;
++	} else {
++		orig.mnt = lookup_mnt_in_ns(mnt_parent_id, ns);
++		if (!orig.mnt)
++			return -ENOENT;
++		orig.dentry = orig.mnt->mnt_root;
++	}
++
+ 	/*
+ 	 * Don't trigger audit denials. We just want to determine what
+ 	 * mounts to show users.
+ 	 */
+-	if (!is_path_reachable(real_mount(orig->mnt), orig->dentry, root) &&
+-	    !ns_capable_noaudit(&init_user_ns, CAP_SYS_ADMIN))
++	if (!is_path_reachable(real_mount(orig.mnt), orig.dentry, &root) &&
++	    !ns_capable_noaudit(ns->user_ns, CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
+-	ret = security_sb_statfs(orig->dentry);
++	ret = security_sb_statfs(orig.dentry);
+ 	if (ret)
+ 		return ret;
+ 
++	if (!last_mnt_id)
++		first = node_to_mount(rb_first(&ns->mounts));
++	else
++		first = mnt_find_id_at(ns, last_mnt_id + 1);
++
+ 	for (ret = 0, r = first; r && nr_mnt_ids; r = listmnt_next(r)) {
+ 		if (r->mnt_id_unique == mnt_parent_id)
+ 			continue;
+-		if (!is_path_reachable(r, r->mnt.mnt_root, orig))
++		if (!is_path_reachable(r, r->mnt.mnt_root, &orig))
+ 			continue;
+-		if (put_user(r->mnt_id_unique, mnt_ids))
+-			return -EFAULT;
++		*mnt_ids = r->mnt_id_unique;
+ 		mnt_ids++;
+ 		nr_mnt_ids--;
+ 		ret++;
+@@ -5080,22 +5099,24 @@ static ssize_t do_listmount(struct mount *first, struct path *orig,
+ 	return ret;
+ }
+ 
+-SYSCALL_DEFINE4(listmount, const struct mnt_id_req __user *, req, u64 __user *,
+-		mnt_ids, size_t, nr_mnt_ids, unsigned int, flags)
++SYSCALL_DEFINE4(listmount, const struct mnt_id_req __user *, req,
++		u64 __user *, mnt_ids, size_t, nr_mnt_ids, unsigned int, flags)
+ {
+-	struct mnt_namespace *ns = current->nsproxy->mnt_ns;
++	u64 *kmnt_ids __free(kvfree) = NULL;
++	const size_t maxcount = 1000000;
+ 	struct mnt_id_req kreq;
+-	struct mount *first;
+-	struct path root, orig;
+-	u64 mnt_parent_id, last_mnt_id;
+-	const size_t maxcount = (size_t)-1 >> 3;
+ 	ssize_t ret;
+ 
+ 	if (flags)
+ 		return -EINVAL;
+ 
++	/*
++	 * If the mount namespace really has more than 1 million mounts the
++	 * caller must iterate over the mount namespace (and reconsider their
++	 * system design...).
++	 */
+ 	if (unlikely(nr_mnt_ids > maxcount))
+-		return -EFAULT;
++		return -EOVERFLOW;
+ 
+ 	if (!access_ok(mnt_ids, nr_mnt_ids * sizeof(*mnt_ids)))
+ 		return -EFAULT;
+@@ -5103,33 +5124,23 @@ SYSCALL_DEFINE4(listmount, const struct mnt_id_req __user *, req, u64 __user *,
+ 	ret = copy_mnt_id_req(req, &kreq);
+ 	if (ret)
+ 		return ret;
+-	mnt_parent_id = kreq.mnt_id;
+-	last_mnt_id = kreq.param;
+ 
+-	down_read(&namespace_sem);
+-	get_fs_root(current->fs, &root);
+-	if (mnt_parent_id == LSMT_ROOT) {
+-		orig = root;
+-	} else {
+-		ret = -ENOENT;
+-		orig.mnt = lookup_mnt_in_ns(mnt_parent_id, ns);
+-		if (!orig.mnt)
+-			goto err;
+-		orig.dentry = orig.mnt->mnt_root;
+-	}
+-	if (!last_mnt_id)
+-		first = node_to_mount(rb_first(&ns->mounts));
+-	else
+-		first = mnt_find_id_at(ns, last_mnt_id + 1);
++	kmnt_ids = kvmalloc_array(nr_mnt_ids, sizeof(*kmnt_ids),
++				  GFP_KERNEL_ACCOUNT);
++	if (!kmnt_ids)
++		return -ENOMEM;
++
++	scoped_guard(rwsem_read, &namespace_sem)
++		ret = do_listmount(kreq.mnt_id, kreq.param, kmnt_ids, nr_mnt_ids);
++	if (ret <= 0)
++		return ret;
++
++	if (copy_to_user(mnt_ids, kmnt_ids, ret * sizeof(*mnt_ids)))
++		return -EFAULT;
+ 
+-	ret = do_listmount(first, &orig, mnt_parent_id, mnt_ids, nr_mnt_ids, &root);
+-err:
+-	path_put(&root);
+-	up_read(&namespace_sem);
+ 	return ret;
+ }
+ 
+-
+ static void __init init_mount_tree(void)
+ {
+ 	struct vfsmount *mnt;
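
The listmount() rework stops calling put_user() per mount ID while holding namespace_sem; it caps the requested count, fills a kernel buffer allocated with kvmalloc_array() under the lock, and performs a single copy_to_user() after the lock is dropped, so no user pages are faulted in under the rwsem. A reduced sketch of that structure; collect_ids() is a hypothetical stand-in for the locked walk over the namespace.

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

ssize_t collect_ids(u64 *buf, size_t nr);	/* hypothetical; takes and drops the lock */

static ssize_t list_ids(u64 __user *uids, size_t nr)
{
	const size_t maxcount = 1000000;	/* sanity cap, as in the patch */
	u64 *kids;
	ssize_t ret;

	if (nr > maxcount)
		return -EOVERFLOW;

	kids = kvmalloc_array(nr, sizeof(*kids), GFP_KERNEL_ACCOUNT);
	if (!kids)
		return -ENOMEM;

	ret = collect_ids(kids, nr);
	if (ret > 0 && copy_to_user(uids, kids, ret * sizeof(*kids)))
		ret = -EFAULT;

	kvfree(kids);
	return ret;
}
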
+diff --git a/fs/netfs/fscache_main.c b/fs/netfs/fscache_main.c
+index bf9b33d26e312..8e0aacbcfc8e6 100644
+--- a/fs/netfs/fscache_main.c
++++ b/fs/netfs/fscache_main.c
+@@ -103,6 +103,7 @@ void __exit fscache_exit(void)
+ 
+ 	kmem_cache_destroy(fscache_cookie_jar);
+ 	fscache_proc_cleanup();
++	timer_shutdown_sync(&fscache_cookie_lru_timer);
+ 	destroy_workqueue(fscache_wq);
+ 	pr_notice("FS-Cache unloaded\n");
+ }
+diff --git a/fs/netfs/io.c b/fs/netfs/io.c
+index c96431d3da6d8..c91e7b12bbf18 100644
+--- a/fs/netfs/io.c
++++ b/fs/netfs/io.c
+@@ -306,6 +306,7 @@ static bool netfs_rreq_perform_resubmissions(struct netfs_io_request *rreq)
+ 				break;
+ 			subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
+ 			subreq->error = 0;
++			__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
+ 			netfs_stat(&netfs_n_rh_download_instead);
+ 			trace_netfs_sreq(subreq, netfs_sreq_trace_download_instead);
+ 			netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+@@ -313,6 +314,7 @@ static bool netfs_rreq_perform_resubmissions(struct netfs_io_request *rreq)
+ 			netfs_reset_subreq_iter(rreq, subreq);
+ 			netfs_read_from_server(rreq, subreq);
+ 		} else if (test_bit(NETFS_SREQ_SHORT_IO, &subreq->flags)) {
++			__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
+ 			netfs_reset_subreq_iter(rreq, subreq);
+ 			netfs_rreq_short_read(rreq, subreq);
+ 		}
+@@ -366,7 +368,8 @@ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
+ 		if (subreq->error || subreq->transferred == 0)
+ 			break;
+ 		transferred += subreq->transferred;
+-		if (subreq->transferred < subreq->len)
++		if (subreq->transferred < subreq->len ||
++		    test_bit(NETFS_SREQ_HIT_EOF, &subreq->flags))
+ 			break;
+ 	}
+ 
+@@ -501,7 +504,8 @@ void netfs_subreq_terminated(struct netfs_io_subrequest *subreq,
+ 
+ 	subreq->error = 0;
+ 	subreq->transferred += transferred_or_error;
+-	if (subreq->transferred < subreq->len)
++	if (subreq->transferred < subreq->len &&
++	    !test_bit(NETFS_SREQ_HIT_EOF, &subreq->flags))
+ 		goto incomplete;
+ 
+ complete:
+@@ -775,10 +779,13 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
+ 			    TASK_UNINTERRUPTIBLE);
+ 
+ 		ret = rreq->error;
+-		if (ret == 0 && rreq->submitted < rreq->len &&
+-		    rreq->origin != NETFS_DIO_READ) {
+-			trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read);
+-			ret = -EIO;
++		if (ret == 0) {
++			if (rreq->origin == NETFS_DIO_READ) {
++				ret = rreq->transferred;
++			} else if (rreq->submitted < rreq->len) {
++				trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read);
++				ret = -EIO;
++			}
+ 		}
+ 	} else {
+ 		/* If we decrement nr_outstanding to 0, the ref belongs to us. */
+diff --git a/fs/nfs/super.c b/fs/nfs/super.c
+index cbbd4866b0b7a..97b386032b717 100644
+--- a/fs/nfs/super.c
++++ b/fs/nfs/super.c
+@@ -47,6 +47,7 @@
+ #include <linux/vfs.h>
+ #include <linux/inet.h>
+ #include <linux/in6.h>
++#include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <net/ipv6.h>
+ #include <linux/netdevice.h>
+@@ -228,6 +229,7 @@ static int __nfs_list_for_each_server(struct list_head *head,
+ 		ret = fn(server, data);
+ 		if (ret)
+ 			goto out;
++		cond_resched();
+ 		rcu_read_lock();
+ 	}
+ 	rcu_read_unlock();
+diff --git a/fs/nilfs2/recovery.c b/fs/nilfs2/recovery.c
+index b638dc06df2f7..61e25a980f73e 100644
+--- a/fs/nilfs2/recovery.c
++++ b/fs/nilfs2/recovery.c
+@@ -715,6 +715,33 @@ static void nilfs_finish_roll_forward(struct the_nilfs *nilfs,
+ 	brelse(bh);
+ }
+ 
++/**
++ * nilfs_abort_roll_forward - cleaning up after a failed rollforward recovery
++ * @nilfs: nilfs object
++ */
++static void nilfs_abort_roll_forward(struct the_nilfs *nilfs)
++{
++	struct nilfs_inode_info *ii, *n;
++	LIST_HEAD(head);
++
++	/* Abandon inodes that have read recovery data */
++	spin_lock(&nilfs->ns_inode_lock);
++	list_splice_init(&nilfs->ns_dirty_files, &head);
++	spin_unlock(&nilfs->ns_inode_lock);
++	if (list_empty(&head))
++		return;
++
++	set_nilfs_purging(nilfs);
++	list_for_each_entry_safe(ii, n, &head, i_dirty) {
++		spin_lock(&nilfs->ns_inode_lock);
++		list_del_init(&ii->i_dirty);
++		spin_unlock(&nilfs->ns_inode_lock);
++
++		iput(&ii->vfs_inode);
++	}
++	clear_nilfs_purging(nilfs);
++}
++
+ /**
+  * nilfs_salvage_orphan_logs - salvage logs written after the latest checkpoint
+  * @nilfs: nilfs object
+@@ -773,15 +800,19 @@ int nilfs_salvage_orphan_logs(struct the_nilfs *nilfs,
+ 		if (unlikely(err)) {
+ 			nilfs_err(sb, "error %d writing segment for recovery",
+ 				  err);
+-			goto failed;
++			goto put_root;
+ 		}
+ 
+ 		nilfs_finish_roll_forward(nilfs, ri);
+ 	}
+ 
+- failed:
++put_root:
+ 	nilfs_put_root(root);
+ 	return err;
++
++failed:
++	nilfs_abort_roll_forward(nilfs);
++	goto put_root;
+ }
+ 
+ /**
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index d02fd92cdb432..f090f19df3e3b 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -1788,6 +1788,9 @@ static void nilfs_segctor_abort_construction(struct nilfs_sc_info *sci,
+ 	nilfs_abort_logs(&logs, ret ? : err);
+ 
+ 	list_splice_tail_init(&sci->sc_segbufs, &logs);
++	if (list_empty(&logs))
++		return; /* if the first segment buffer preparation failed */
++
+ 	nilfs_cancel_segusage(&logs, nilfs->ns_sufile);
+ 	nilfs_free_incomplete_logs(&logs, nilfs);
+ 
+@@ -2032,7 +2035,7 @@ static int nilfs_segctor_do_construct(struct nilfs_sc_info *sci, int mode)
+ 
+ 		err = nilfs_segctor_begin_construction(sci, nilfs);
+ 		if (unlikely(err))
+-			goto out;
++			goto failed;
+ 
+ 		/* Update time stamp */
+ 		sci->sc_seg_ctime = ktime_get_real_seconds();
+@@ -2099,10 +2102,9 @@ static int nilfs_segctor_do_construct(struct nilfs_sc_info *sci, int mode)
+ 	return err;
+ 
+  failed_to_write:
+-	if (sci->sc_stage.flags & NILFS_CF_IFILE_STARTED)
+-		nilfs_redirty_inodes(&sci->sc_dirty_files);
+-
+  failed:
++	if (mode == SC_LSEG_SR && nilfs_sc_cstage_get(sci) >= NILFS_ST_IFILE)
++		nilfs_redirty_inodes(&sci->sc_dirty_files);
+ 	if (nilfs_doing_gc())
+ 		nilfs_redirty_inodes(&sci->sc_gc_inodes);
+ 	nilfs_segctor_abort_construction(sci, nilfs, err);
+diff --git a/fs/nilfs2/sysfs.c b/fs/nilfs2/sysfs.c
+index 379d22e28ed62..905c7eadf9676 100644
+--- a/fs/nilfs2/sysfs.c
++++ b/fs/nilfs2/sysfs.c
+@@ -836,9 +836,15 @@ ssize_t nilfs_dev_revision_show(struct nilfs_dev_attr *attr,
+ 				struct the_nilfs *nilfs,
+ 				char *buf)
+ {
+-	struct nilfs_super_block **sbp = nilfs->ns_sbp;
+-	u32 major = le32_to_cpu(sbp[0]->s_rev_level);
+-	u16 minor = le16_to_cpu(sbp[0]->s_minor_rev_level);
++	struct nilfs_super_block *raw_sb;
++	u32 major;
++	u16 minor;
++
++	down_read(&nilfs->ns_sem);
++	raw_sb = nilfs->ns_sbp[0];
++	major = le32_to_cpu(raw_sb->s_rev_level);
++	minor = le16_to_cpu(raw_sb->s_minor_rev_level);
++	up_read(&nilfs->ns_sem);
+ 
+ 	return sysfs_emit(buf, "%d.%d\n", major, minor);
+ }
+@@ -856,8 +862,13 @@ ssize_t nilfs_dev_device_size_show(struct nilfs_dev_attr *attr,
+ 				    struct the_nilfs *nilfs,
+ 				    char *buf)
+ {
+-	struct nilfs_super_block **sbp = nilfs->ns_sbp;
+-	u64 dev_size = le64_to_cpu(sbp[0]->s_dev_size);
++	struct nilfs_super_block *raw_sb;
++	u64 dev_size;
++
++	down_read(&nilfs->ns_sem);
++	raw_sb = nilfs->ns_sbp[0];
++	dev_size = le64_to_cpu(raw_sb->s_dev_size);
++	up_read(&nilfs->ns_sem);
+ 
+ 	return sysfs_emit(buf, "%llu\n", dev_size);
+ }
+@@ -879,9 +890,15 @@ ssize_t nilfs_dev_uuid_show(struct nilfs_dev_attr *attr,
+ 			    struct the_nilfs *nilfs,
+ 			    char *buf)
+ {
+-	struct nilfs_super_block **sbp = nilfs->ns_sbp;
++	struct nilfs_super_block *raw_sb;
++	ssize_t len;
+ 
+-	return sysfs_emit(buf, "%pUb\n", sbp[0]->s_uuid);
++	down_read(&nilfs->ns_sem);
++	raw_sb = nilfs->ns_sbp[0];
++	len = sysfs_emit(buf, "%pUb\n", raw_sb->s_uuid);
++	up_read(&nilfs->ns_sem);
++
++	return len;
+ }
+ 
+ static
+@@ -889,10 +906,16 @@ ssize_t nilfs_dev_volume_name_show(struct nilfs_dev_attr *attr,
+ 				    struct the_nilfs *nilfs,
+ 				    char *buf)
+ {
+-	struct nilfs_super_block **sbp = nilfs->ns_sbp;
++	struct nilfs_super_block *raw_sb;
++	ssize_t len;
++
++	down_read(&nilfs->ns_sem);
++	raw_sb = nilfs->ns_sbp[0];
++	len = scnprintf(buf, sizeof(raw_sb->s_volume_name), "%s\n",
++			raw_sb->s_volume_name);
++	up_read(&nilfs->ns_sem);
+ 
+-	return scnprintf(buf, sizeof(sbp[0]->s_volume_name), "%s\n",
+-			 sbp[0]->s_volume_name);
++	return len;
+ }
+ 
+ static const char dev_readme_str[] =
+diff --git a/fs/ntfs3/dir.c b/fs/ntfs3/dir.c
+index 858efe255f6f3..1ec09f2fca64a 100644
+--- a/fs/ntfs3/dir.c
++++ b/fs/ntfs3/dir.c
+@@ -272,9 +272,12 @@ struct inode *dir_search_u(struct inode *dir, const struct cpu_str *uni,
+ 	return err == -ENOENT ? NULL : err ? ERR_PTR(err) : inode;
+ }
+ 
+-static inline int ntfs_filldir(struct ntfs_sb_info *sbi, struct ntfs_inode *ni,
+-			       const struct NTFS_DE *e, u8 *name,
+-			       struct dir_context *ctx)
++/*
++ * returns false if 'ctx' is full
++ */
++static inline bool ntfs_dir_emit(struct ntfs_sb_info *sbi,
++				 struct ntfs_inode *ni, const struct NTFS_DE *e,
++				 u8 *name, struct dir_context *ctx)
+ {
+ 	const struct ATTR_FILE_NAME *fname;
+ 	unsigned long ino;
+@@ -284,29 +287,29 @@ static inline int ntfs_filldir(struct ntfs_sb_info *sbi, struct ntfs_inode *ni,
+ 	fname = Add2Ptr(e, sizeof(struct NTFS_DE));
+ 
+ 	if (fname->type == FILE_NAME_DOS)
+-		return 0;
++		return true;
+ 
+ 	if (!mi_is_ref(&ni->mi, &fname->home))
+-		return 0;
++		return true;
+ 
+ 	ino = ino_get(&e->ref);
+ 
+ 	if (ino == MFT_REC_ROOT)
+-		return 0;
++		return true;
+ 
+ 	/* Skip meta files. Unless option to show metafiles is set. */
+ 	if (!sbi->options->showmeta && ntfs_is_meta_file(sbi, ino))
+-		return 0;
++		return true;
+ 
+ 	if (sbi->options->nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN))
+-		return 0;
++		return true;
+ 
+ 	name_len = ntfs_utf16_to_nls(sbi, fname->name, fname->name_len, name,
+ 				     PATH_MAX);
+ 	if (name_len <= 0) {
+ 		ntfs_warn(sbi->sb, "failed to convert name for inode %lx.",
+ 			  ino);
+-		return 0;
++		return true;
+ 	}
+ 
+ 	/*
+@@ -336,17 +339,20 @@ static inline int ntfs_filldir(struct ntfs_sb_info *sbi, struct ntfs_inode *ni,
+ 		}
+ 	}
+ 
+-	return !dir_emit(ctx, (s8 *)name, name_len, ino, dt_type);
++	return dir_emit(ctx, (s8 *)name, name_len, ino, dt_type);
+ }
+ 
+ /*
+  * ntfs_read_hdr - Helper function for ntfs_readdir().
++ *
++ * returns 0 if ok.
++ * returns -EINVAL if directory is corrupted.
++ * returns +1 if 'ctx' is full.
+  */
+ static int ntfs_read_hdr(struct ntfs_sb_info *sbi, struct ntfs_inode *ni,
+ 			 const struct INDEX_HDR *hdr, u64 vbo, u64 pos,
+ 			 u8 *name, struct dir_context *ctx)
+ {
+-	int err;
+ 	const struct NTFS_DE *e;
+ 	u32 e_size;
+ 	u32 end = le32_to_cpu(hdr->used);
+@@ -354,12 +360,12 @@ static int ntfs_read_hdr(struct ntfs_sb_info *sbi, struct ntfs_inode *ni,
+ 
+ 	for (;; off += e_size) {
+ 		if (off + sizeof(struct NTFS_DE) > end)
+-			return -1;
++			return -EINVAL;
+ 
+ 		e = Add2Ptr(hdr, off);
+ 		e_size = le16_to_cpu(e->size);
+ 		if (e_size < sizeof(struct NTFS_DE) || off + e_size > end)
+-			return -1;
++			return -EINVAL;
+ 
+ 		if (de_is_last(e))
+ 			return 0;
+@@ -369,14 +375,15 @@ static int ntfs_read_hdr(struct ntfs_sb_info *sbi, struct ntfs_inode *ni,
+ 			continue;
+ 
+ 		if (le16_to_cpu(e->key_size) < SIZEOF_ATTRIBUTE_FILENAME)
+-			return -1;
++			return -EINVAL;
+ 
+ 		ctx->pos = vbo + off;
+ 
+ 		/* Submit the name to the filldir callback. */
+-		err = ntfs_filldir(sbi, ni, e, name, ctx);
+-		if (err)
+-			return err;
++		if (!ntfs_dir_emit(sbi, ni, e, name, ctx)) {
++			/* ctx is full. */
++			return +1;
++		}
+ 	}
+ }
+ 
+@@ -475,8 +482,6 @@ static int ntfs_readdir(struct file *file, struct dir_context *ctx)
+ 
+ 		vbo = (u64)bit << index_bits;
+ 		if (vbo >= i_size) {
+-			ntfs_inode_err(dir, "Looks like your dir is corrupt");
+-			ctx->pos = eod;
+ 			err = -EINVAL;
+ 			goto out;
+ 		}
+@@ -499,9 +504,16 @@ static int ntfs_readdir(struct file *file, struct dir_context *ctx)
+ 	__putname(name);
+ 	put_indx_node(node);
+ 
+-	if (err == -ENOENT) {
++	if (err == 1) {
++		/* 'ctx' is full. */
++		err = 0;
++	} else if (err == -ENOENT) {
+ 		err = 0;
+ 		ctx->pos = pos;
++	} else if (err < 0) {
++		if (err == -EINVAL)
++			ntfs_inode_err(dir, "directory corrupted");
++		ctx->pos = eod;
+ 	}
+ 
+ 	return err;
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index ded451a84b773..7a73df871037a 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -1601,8 +1601,10 @@ int ni_delete_all(struct ntfs_inode *ni)
+ 		asize = le32_to_cpu(attr->size);
+ 		roff = le16_to_cpu(attr->nres.run_off);
+ 
+-		if (roff > asize)
++		if (roff > asize) {
++			_ntfs_bad_inode(&ni->vfs_inode);
+ 			return -EINVAL;
++		}
+ 
+ 		/* run==1 means unpack and deallocate. */
+ 		run_unpack_ex(RUN_DEALLOCATE, sbi, ni->mi.rno, svcn, evcn, svcn,
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index 2c4b357d85e22..a1acf5bd1e3a4 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -1341,7 +1341,6 @@ ssize_t cifs_file_copychunk_range(unsigned int xid,
+ 	struct cifsFileInfo *smb_file_target;
+ 	struct cifs_tcon *src_tcon;
+ 	struct cifs_tcon *target_tcon;
+-	unsigned long long destend, fstart, fend;
+ 	ssize_t rc;
+ 
+ 	cifs_dbg(FYI, "copychunk range\n");
+@@ -1391,25 +1390,13 @@ ssize_t cifs_file_copychunk_range(unsigned int xid,
+ 			goto unlock;
+ 	}
+ 
+-	destend = destoff + len - 1;
+-
+-	/* Flush the folios at either end of the destination range to prevent
+-	 * accidental loss of dirty data outside of the range.
++	/* Flush and invalidate all the folios in the destination region.  If
++	 * the copy was successful, then some of the flush is extra overhead,
++	 * but we need to allow for the copy failing in some way (e.g. ENOSPC).
+ 	 */
+-	fstart = destoff;
+-	fend = destend;
+-
+-	rc = cifs_flush_folio(target_inode, destoff, &fstart, &fend, true);
++	rc = filemap_invalidate_inode(target_inode, true, destoff, destoff + len - 1);
+ 	if (rc)
+ 		goto unlock;
+-	rc = cifs_flush_folio(target_inode, destend, &fstart, &fend, false);
+-	if (rc)
+-		goto unlock;
+-	if (fend > target_cifsi->netfs.zero_point)
+-		target_cifsi->netfs.zero_point = fend + 1;
+-
+-	/* Discard all the folios that overlap the destination region. */
+-	truncate_inode_pages_range(&target_inode->i_data, fstart, fend);
+ 
+ 	fscache_invalidate(cifs_inode_cookie(target_inode), NULL,
+ 			   i_size_read(target_inode), 0);
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 1e4da268de3b4..552792f28122c 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -1508,6 +1508,7 @@ struct cifs_io_subrequest {
+ 		struct cifs_io_request *req;
+ 	};
+ 	ssize_t				got_bytes;
++	size_t				actual_len;
+ 	unsigned int			xid;
+ 	int				result;
+ 	bool				have_xid;
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index 6dce70f172082..cfae2e9182099 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -1261,16 +1261,32 @@ CIFS_open(const unsigned int xid, struct cifs_open_parms *oparms, int *oplock,
+ 	return rc;
+ }
+ 
++static void cifs_readv_worker(struct work_struct *work)
++{
++	struct cifs_io_subrequest *rdata =
++		container_of(work, struct cifs_io_subrequest, subreq.work);
++
++	netfs_subreq_terminated(&rdata->subreq,
++				(rdata->result == 0 || rdata->result == -EAGAIN) ?
++				rdata->got_bytes : rdata->result, true);
++}
++
+ static void
+ cifs_readv_callback(struct mid_q_entry *mid)
+ {
+ 	struct cifs_io_subrequest *rdata = mid->callback_data;
++	struct netfs_inode *ictx = netfs_inode(rdata->rreq->inode);
+ 	struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink);
+ 	struct TCP_Server_Info *server = tcon->ses->server;
+ 	struct smb_rqst rqst = { .rq_iov = rdata->iov,
+ 				 .rq_nvec = 2,
+ 				 .rq_iter = rdata->subreq.io_iter };
+-	struct cifs_credits credits = { .value = 1, .instance = 0 };
++	struct cifs_credits credits = {
++		.value = 1,
++		.instance = 0,
++		.rreq_debug_id = rdata->rreq->debug_id,
++		.rreq_debug_index = rdata->subreq.debug_index,
++	};
+ 
+ 	cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu\n",
+ 		 __func__, mid->mid, mid->mid_state, rdata->result,
+@@ -1282,6 +1298,7 @@ cifs_readv_callback(struct mid_q_entry *mid)
+ 		if (server->sign) {
+ 			int rc = 0;
+ 
++			iov_iter_truncate(&rqst.rq_iter, rdata->got_bytes);
+ 			rc = cifs_verify_signature(&rqst, server,
+ 						  mid->sequence_number);
+ 			if (rc)
+@@ -1306,13 +1323,21 @@ cifs_readv_callback(struct mid_q_entry *mid)
+ 		rdata->result = -EIO;
+ 	}
+ 
+-	if (rdata->result == 0 || rdata->result == -EAGAIN)
+-		iov_iter_advance(&rdata->subreq.io_iter, rdata->got_bytes);
++	if (rdata->result == -ENODATA) {
++		__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
++		rdata->result = 0;
++	} else {
++		if (rdata->got_bytes < rdata->actual_len &&
++		    rdata->subreq.start + rdata->subreq.transferred + rdata->got_bytes ==
++		    ictx->remote_i_size) {
++			__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
++			rdata->result = 0;
++		}
++	}
++
+ 	rdata->credits.value = 0;
+-	netfs_subreq_terminated(&rdata->subreq,
+-				(rdata->result == 0 || rdata->result == -EAGAIN) ?
+-				rdata->got_bytes : rdata->result,
+-				false);
++	INIT_WORK(&rdata->subreq.work, cifs_readv_worker);
++	queue_work(cifsiod_wq, &rdata->subreq.work);
+ 	release_mid(mid);
+ 	add_credits(server, &credits, 0);
+ }
+@@ -1619,9 +1644,15 @@ static void
+ cifs_writev_callback(struct mid_q_entry *mid)
+ {
+ 	struct cifs_io_subrequest *wdata = mid->callback_data;
++	struct TCP_Server_Info *server = wdata->server;
+ 	struct cifs_tcon *tcon = tlink_tcon(wdata->req->cfile->tlink);
+ 	WRITE_RSP *smb = (WRITE_RSP *)mid->resp_buf;
+-	struct cifs_credits credits = { .value = 1, .instance = 0 };
++	struct cifs_credits credits = {
++		.value = 1,
++		.instance = 0,
++		.rreq_debug_id = wdata->rreq->debug_id,
++		.rreq_debug_index = wdata->subreq.debug_index,
++	};
+ 	ssize_t result;
+ 	size_t written;
+ 
+@@ -1657,9 +1688,16 @@ cifs_writev_callback(struct mid_q_entry *mid)
+ 		break;
+ 	}
+ 
++	trace_smb3_rw_credits(credits.rreq_debug_id, credits.rreq_debug_index,
++			      wdata->credits.value,
++			      server->credits, server->in_flight,
++			      0, cifs_trace_rw_credits_write_response_clear);
+ 	wdata->credits.value = 0;
+ 	cifs_write_subrequest_terminated(wdata, result, true);
+ 	release_mid(mid);
++	trace_smb3_rw_credits(credits.rreq_debug_id, credits.rreq_debug_index, 0,
++			      server->credits, server->in_flight,
++			      credits.value, cifs_trace_rw_credits_write_response_add);
+ 	add_credits(tcon->ses->server, &credits, 0);
+ }
+ 
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index b202eac6584e1..533f761183160 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -111,6 +111,7 @@ static void cifs_issue_write(struct netfs_io_subrequest *subreq)
+ 		goto fail;
+ 	}
+ 
++	wdata->actual_len = wdata->subreq.len;
+ 	rc = adjust_credits(wdata->server, wdata, cifs_trace_rw_credits_issue_write_adjust);
+ 	if (rc)
+ 		goto fail;
+@@ -153,7 +154,7 @@ static bool cifs_clamp_length(struct netfs_io_subrequest *subreq)
+ 	struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq);
+ 	struct TCP_Server_Info *server = req->server;
+ 	struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb);
+-	size_t rsize = 0;
++	size_t rsize;
+ 	int rc;
+ 
+ 	rdata->xid = get_xid();
+@@ -166,8 +167,8 @@ static bool cifs_clamp_length(struct netfs_io_subrequest *subreq)
+ 						     cifs_sb->ctx);
+ 
+ 
+-	rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, &rsize,
+-					   &rdata->credits);
++	rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize,
++					   &rsize, &rdata->credits);
+ 	if (rc) {
+ 		subreq->error = rc;
+ 		return false;
+@@ -183,7 +184,8 @@ static bool cifs_clamp_length(struct netfs_io_subrequest *subreq)
+ 			      server->credits, server->in_flight, 0,
+ 			      cifs_trace_rw_credits_read_submit);
+ 
+-	subreq->len = min_t(size_t, subreq->len, rsize);
++	subreq->len = umin(subreq->len, rsize);
++	rdata->actual_len = subreq->len;
+ 
+ #ifdef CONFIG_CIFS_SMB_DIRECT
+ 	if (server->smbd_conn)
+@@ -203,12 +205,39 @@ static void cifs_req_issue_read(struct netfs_io_subrequest *subreq)
+ 	struct netfs_io_request *rreq = subreq->rreq;
+ 	struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq);
+ 	struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq);
++	struct TCP_Server_Info *server = req->server;
++	struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb);
+ 	int rc = 0;
+ 
+ 	cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n",
+ 		 __func__, rreq->debug_id, subreq->debug_index, rreq->mapping,
+ 		 subreq->transferred, subreq->len);
+ 
++	if (test_bit(NETFS_SREQ_RETRYING, &subreq->flags)) {
++		/*
++		 * As we're issuing a retry, we need to negotiate some new
++		 * credits otherwise the server may reject the op with
++		 * INVALID_PARAMETER.  Note, however, we may get back less
++		 * credit than we need to complete the op, in which case, we
++		 * shorten the op and rely on additional rounds of retry.
++		 */
++		size_t rsize = umin(subreq->len - subreq->transferred,
++				    cifs_sb->ctx->rsize);
++
++		rc = server->ops->wait_mtu_credits(server, rsize, &rdata->actual_len,
++						   &rdata->credits);
++		if (rc)
++			goto out;
++
++		rdata->credits.in_flight_check = 1;
++
++		trace_smb3_rw_credits(rdata->rreq->debug_id,
++				      rdata->subreq.debug_index,
++				      rdata->credits.value,
++				      server->credits, server->in_flight, 0,
++				      cifs_trace_rw_credits_read_resubmit);
++	}
++
+ 	if (req->cfile->invalidHandle) {
+ 		do {
+ 			rc = cifs_reopen_file(req->cfile, true);
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index dd0afa23734c8..73e2e6c230b73 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -172,6 +172,8 @@ cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr,
+ 		CIFS_I(inode)->time = 0; /* force reval */
+ 		return -ESTALE;
+ 	}
++	if (inode->i_state & I_NEW)
++		CIFS_I(inode)->netfs.zero_point = fattr->cf_eof;
+ 
+ 	cifs_revalidate_cache(inode, fattr);
+ 
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 9f5bc41433c15..11a1c53c64e0b 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -1106,6 +1106,8 @@ int smb2_rename_path(const unsigned int xid,
+ 				  co, DELETE, SMB2_OP_RENAME, cfile, source_dentry);
+ 	if (rc == -EINVAL) {
+ 		cifs_dbg(FYI, "invalid lease key, resending request without lease");
++		cifs_get_writable_path(tcon, from_name,
++				       FIND_WR_WITH_DELETE, &cfile);
+ 		rc = smb2_set_path_attr(xid, tcon, from_name, to_name, cifs_sb,
+ 				  co, DELETE, SMB2_OP_RENAME, cfile, NULL);
+ 	}
+@@ -1149,6 +1151,7 @@ smb2_set_path_size(const unsigned int xid, struct cifs_tcon *tcon,
+ 			      cfile, NULL, NULL, dentry);
+ 	if (rc == -EINVAL) {
+ 		cifs_dbg(FYI, "invalid lease key, resending request without lease");
++		cifs_get_writable_path(tcon, full_path, FIND_WR_ANY, &cfile);
+ 		rc = smb2_compound_op(xid, tcon, cifs_sb,
+ 				      full_path, &oparms, &in_iov,
+ 				      &(int){SMB2_OP_SET_EOF}, 1,
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index f44f5f2494006..1d6e8eacdd742 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -301,7 +301,7 @@ smb2_adjust_credits(struct TCP_Server_Info *server,
+ 		    unsigned int /*enum smb3_rw_credits_trace*/ trace)
+ {
+ 	struct cifs_credits *credits = &subreq->credits;
+-	int new_val = DIV_ROUND_UP(subreq->subreq.len, SMB2_MAX_BUFFER_SIZE);
++	int new_val = DIV_ROUND_UP(subreq->actual_len, SMB2_MAX_BUFFER_SIZE);
+ 	int scredits, in_flight;
+ 
+ 	if (!credits->value || credits->value == new_val)
+@@ -3219,13 +3219,15 @@ static long smb3_zero_data(struct file *file, struct cifs_tcon *tcon,
+ }
+ 
+ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
+-			    loff_t offset, loff_t len, bool keep_size)
++			    unsigned long long offset, unsigned long long len,
++			    bool keep_size)
+ {
+ 	struct cifs_ses *ses = tcon->ses;
+ 	struct inode *inode = file_inode(file);
+ 	struct cifsInodeInfo *cifsi = CIFS_I(inode);
+ 	struct cifsFileInfo *cfile = file->private_data;
+-	unsigned long long new_size;
++	struct netfs_inode *ictx = netfs_inode(inode);
++	unsigned long long i_size, new_size, remote_size;
+ 	long rc;
+ 	unsigned int xid;
+ 
+@@ -3237,6 +3239,16 @@ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
+ 	inode_lock(inode);
+ 	filemap_invalidate_lock(inode->i_mapping);
+ 
++	i_size = i_size_read(inode);
++	remote_size = ictx->remote_i_size;
++	if (offset + len >= remote_size && offset < i_size) {
++		unsigned long long top = umin(offset + len, i_size);
++
++		rc = filemap_write_and_wait_range(inode->i_mapping, offset, top - 1);
++		if (rc < 0)
++			goto zero_range_exit;
++	}
++
+ 	/*
+ 	 * We zero the range through ioctl, so we need remove the page caches
+ 	 * first, otherwise the data may be inconsistent with the server.
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index d262e70100c9c..8e02e9f45e0e1 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -4501,6 +4501,7 @@ static void
+ smb2_readv_callback(struct mid_q_entry *mid)
+ {
+ 	struct cifs_io_subrequest *rdata = mid->callback_data;
++	struct netfs_inode *ictx = netfs_inode(rdata->rreq->inode);
+ 	struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink);
+ 	struct TCP_Server_Info *server = rdata->server;
+ 	struct smb2_hdr *shdr =
+@@ -4523,9 +4524,9 @@ smb2_readv_callback(struct mid_q_entry *mid)
+ 		  "rdata server %p != mid server %p",
+ 		  rdata->server, mid->server);
+ 
+-	cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu\n",
++	cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu/%zu\n",
+ 		 __func__, mid->mid, mid->mid_state, rdata->result,
+-		 rdata->subreq.len);
++		 rdata->actual_len, rdata->subreq.len - rdata->subreq.transferred);
+ 
+ 	switch (mid->mid_state) {
+ 	case MID_RESPONSE_RECEIVED:
+@@ -4579,22 +4580,29 @@ smb2_readv_callback(struct mid_q_entry *mid)
+ 				    rdata->subreq.debug_index,
+ 				    rdata->xid,
+ 				    rdata->req->cfile->fid.persistent_fid,
+-				    tcon->tid, tcon->ses->Suid, rdata->subreq.start,
+-				    rdata->subreq.len, rdata->result);
++				    tcon->tid, tcon->ses->Suid,
++				    rdata->subreq.start + rdata->subreq.transferred,
++				    rdata->actual_len,
++				    rdata->result);
+ 	} else
+ 		trace_smb3_read_done(rdata->rreq->debug_id,
+ 				     rdata->subreq.debug_index,
+ 				     rdata->xid,
+ 				     rdata->req->cfile->fid.persistent_fid,
+ 				     tcon->tid, tcon->ses->Suid,
+-				     rdata->subreq.start, rdata->got_bytes);
++				     rdata->subreq.start + rdata->subreq.transferred,
++				     rdata->got_bytes);
+ 
+ 	if (rdata->result == -ENODATA) {
+-		/* We may have got an EOF error because fallocate
+-		 * failed to enlarge the file.
+-		 */
+-		if (rdata->subreq.start < rdata->subreq.rreq->i_size)
++		__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
++		rdata->result = 0;
++	} else {
++		if (rdata->got_bytes < rdata->actual_len &&
++		    rdata->subreq.start + rdata->subreq.transferred + rdata->got_bytes ==
++		    ictx->remote_i_size) {
++			__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
+ 			rdata->result = 0;
++		}
+ 	}
+ 	trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, rdata->credits.value,
+ 			      server->credits, server->in_flight,
+@@ -4615,6 +4623,7 @@ smb2_async_readv(struct cifs_io_subrequest *rdata)
+ {
+ 	int rc, flags = 0;
+ 	char *buf;
++	struct netfs_io_subrequest *subreq = &rdata->subreq;
+ 	struct smb2_hdr *shdr;
+ 	struct cifs_io_parms io_parms;
+ 	struct smb_rqst rqst = { .rq_iov = rdata->iov,
+@@ -4625,15 +4634,15 @@ smb2_async_readv(struct cifs_io_subrequest *rdata)
+ 	int credit_request;
+ 
+ 	cifs_dbg(FYI, "%s: offset=%llu bytes=%zu\n",
+-		 __func__, rdata->subreq.start, rdata->subreq.len);
++		 __func__, subreq->start, subreq->len);
+ 
+ 	if (!rdata->server)
+ 		rdata->server = cifs_pick_channel(tcon->ses);
+ 
+ 	io_parms.tcon = tlink_tcon(rdata->req->cfile->tlink);
+ 	io_parms.server = server = rdata->server;
+-	io_parms.offset = rdata->subreq.start;
+-	io_parms.length = rdata->subreq.len;
++	io_parms.offset = subreq->start + subreq->transferred;
++	io_parms.length = rdata->actual_len;
+ 	io_parms.persistent_fid = rdata->req->cfile->fid.persistent_fid;
+ 	io_parms.volatile_fid = rdata->req->cfile->fid.volatile_fid;
+ 	io_parms.pid = rdata->req->pid;
+@@ -4648,11 +4657,13 @@ smb2_async_readv(struct cifs_io_subrequest *rdata)
+ 
+ 	rdata->iov[0].iov_base = buf;
+ 	rdata->iov[0].iov_len = total_len;
++	rdata->got_bytes = 0;
++	rdata->result = 0;
+ 
+ 	shdr = (struct smb2_hdr *)buf;
+ 
+ 	if (rdata->credits.value > 0) {
+-		shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->subreq.len,
++		shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->actual_len,
+ 						SMB2_MAX_BUFFER_SIZE));
+ 		credit_request = le16_to_cpu(shdr->CreditCharge) + 8;
+ 		if (server->credits >= server->max_credits)
+@@ -4676,11 +4687,11 @@ smb2_async_readv(struct cifs_io_subrequest *rdata)
+ 	if (rc) {
+ 		cifs_stats_fail_inc(io_parms.tcon, SMB2_READ_HE);
+ 		trace_smb3_read_err(rdata->rreq->debug_id,
+-				    rdata->subreq.debug_index,
++				    subreq->debug_index,
+ 				    rdata->xid, io_parms.persistent_fid,
+ 				    io_parms.tcon->tid,
+ 				    io_parms.tcon->ses->Suid,
+-				    io_parms.offset, io_parms.length, rc);
++				    io_parms.offset, rdata->actual_len, rc);
+ 	}
+ 
+ async_readv_out:
+diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h
+index 36d5295c2a6f9..13adfe550b992 100644
+--- a/fs/smb/client/trace.h
++++ b/fs/smb/client/trace.h
+@@ -30,6 +30,7 @@
+ 	EM(cifs_trace_rw_credits_old_session,		"old-session") \
+ 	EM(cifs_trace_rw_credits_read_response_add,	"rd-resp-add") \
+ 	EM(cifs_trace_rw_credits_read_response_clear,	"rd-resp-clr") \
++	EM(cifs_trace_rw_credits_read_resubmit,		"rd-resubmit") \
+ 	EM(cifs_trace_rw_credits_read_submit,		"rd-submit  ") \
+ 	EM(cifs_trace_rw_credits_write_prepare,		"wr-prepare ") \
+ 	EM(cifs_trace_rw_credits_write_response_add,	"wr-resp-add") \
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index a8f52c4ebbdad..e546ffa57b55a 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -1510,7 +1510,7 @@ void create_lease_buf(u8 *rbuf, struct lease *lease)
+  * parse_lease_state() - parse lease context containted in file open request
+  * @open_req:	buffer containing smb2 file open(create) request
+  *
+- * Return:  oplock state, -ENOENT if create lease context not found
++ * Return: allocated lease context object on success, otherwise NULL
+  */
+ struct lease_ctx_info *parse_lease_state(void *open_req)
+ {
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 8cfdf0d1a186e..39dfecf082ba0 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1687,6 +1687,8 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 		rc = ksmbd_session_register(conn, sess);
+ 		if (rc)
+ 			goto out_err;
++
++		conn->binding = false;
+ 	} else if (conn->dialect >= SMB30_PROT_ID &&
+ 		   (server_conf.flags & KSMBD_GLOBAL_FLAG_SMB3_MULTICHANNEL) &&
+ 		   req->Flags & SMB2_SESSION_REQ_FLAG_BINDING) {
+@@ -1765,6 +1767,8 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 			sess = NULL;
+ 			goto out_err;
+ 		}
++
++		conn->binding = false;
+ 	}
+ 	work->sess = sess;
+ 
+@@ -2767,8 +2771,8 @@ static int parse_durable_handle_context(struct ksmbd_work *work,
+ 				}
+ 			}
+ 
+-			if (((lc && (lc->req_state & SMB2_LEASE_HANDLE_CACHING_LE)) ||
+-			     req_op_level == SMB2_OPLOCK_LEVEL_BATCH)) {
++			if ((lc && (lc->req_state & SMB2_LEASE_HANDLE_CACHING_LE)) ||
++			    req_op_level == SMB2_OPLOCK_LEVEL_BATCH) {
+ 				dh_info->CreateGuid =
+ 					durable_v2_blob->CreateGuid;
+ 				dh_info->persistent =
+@@ -2788,8 +2792,8 @@ static int parse_durable_handle_context(struct ksmbd_work *work,
+ 				goto out;
+ 			}
+ 
+-			if (((lc && (lc->req_state & SMB2_LEASE_HANDLE_CACHING_LE)) ||
+-			     req_op_level == SMB2_OPLOCK_LEVEL_BATCH)) {
++			if ((lc && (lc->req_state & SMB2_LEASE_HANDLE_CACHING_LE)) ||
++			    req_op_level == SMB2_OPLOCK_LEVEL_BATCH) {
+ 				ksmbd_debug(SMB, "Request for durable open\n");
+ 				dh_info->type = dh_idx;
+ 			}
+@@ -3411,7 +3415,7 @@ int smb2_open(struct ksmbd_work *work)
+ 			goto err_out1;
+ 		}
+ 	} else {
+-		if (req_op_level == SMB2_OPLOCK_LEVEL_LEASE) {
++		if (req_op_level == SMB2_OPLOCK_LEVEL_LEASE && lc) {
+ 			if (S_ISDIR(file_inode(filp)->i_mode)) {
+ 				lc->req_state &= ~SMB2_LEASE_WRITE_CACHING_LE;
+ 				lc->is_dir = true;
+diff --git a/fs/smb/server/transport_tcp.c b/fs/smb/server/transport_tcp.c
+index 6633fa78e9b96..2ce7f75059cb3 100644
+--- a/fs/smb/server/transport_tcp.c
++++ b/fs/smb/server/transport_tcp.c
+@@ -624,8 +624,10 @@ int ksmbd_tcp_set_interfaces(char *ifc_list, int ifc_list_sz)
+ 		for_each_netdev(&init_net, netdev) {
+ 			if (netif_is_bridge_port(netdev))
+ 				continue;
+-			if (!alloc_iface(kstrdup(netdev->name, GFP_KERNEL)))
++			if (!alloc_iface(kstrdup(netdev->name, GFP_KERNEL))) {
++				rtnl_unlock();
+ 				return -ENOMEM;
++			}
+ 		}
+ 		rtnl_unlock();
+ 		bind_additional_ifaces = 1;
+diff --git a/fs/squashfs/inode.c b/fs/squashfs/inode.c
+index 16bd693d0b3aa..d5918eba27e37 100644
+--- a/fs/squashfs/inode.c
++++ b/fs/squashfs/inode.c
+@@ -279,8 +279,13 @@ int squashfs_read_inode(struct inode *inode, long long ino)
+ 		if (err < 0)
+ 			goto failed_read;
+ 
+-		set_nlink(inode, le32_to_cpu(sqsh_ino->nlink));
+ 		inode->i_size = le32_to_cpu(sqsh_ino->symlink_size);
++		if (inode->i_size > PAGE_SIZE) {
++			ERROR("Corrupted symlink\n");
++			return -EINVAL;
++		}
++
++		set_nlink(inode, le32_to_cpu(sqsh_ino->nlink));
+ 		inode->i_op = &squashfs_symlink_inode_ops;
+ 		inode_nohighmem(inode);
+ 		inode->i_data.a_ops = &squashfs_symlink_aops;
+diff --git a/fs/tracefs/event_inode.c b/fs/tracefs/event_inode.c
+index 01e99e98457dd..8705c77a9e75a 100644
+--- a/fs/tracefs/event_inode.c
++++ b/fs/tracefs/event_inode.c
+@@ -862,7 +862,7 @@ static void eventfs_remove_rec(struct eventfs_inode *ei, int level)
+ 	list_for_each_entry(ei_child, &ei->children, list)
+ 		eventfs_remove_rec(ei_child, level + 1);
+ 
+-	list_del(&ei->list);
++	list_del_rcu(&ei->list);
+ 	free_ei(ei);
+ }
+ 
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 92d4770539056..3460ecc826d16 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -1111,12 +1111,19 @@ static int udf_fill_partdesc_info(struct super_block *sb,
+ 	struct udf_part_map *map;
+ 	struct udf_sb_info *sbi = UDF_SB(sb);
+ 	struct partitionHeaderDesc *phd;
++	u32 sum;
+ 	int err;
+ 
+ 	map = &sbi->s_partmaps[p_index];
+ 
+ 	map->s_partition_len = le32_to_cpu(p->partitionLength); /* blocks */
+ 	map->s_partition_root = le32_to_cpu(p->partitionStartingLocation);
++	if (check_add_overflow(map->s_partition_root, map->s_partition_len,
++			       &sum)) {
++		udf_err(sb, "Partition %d has invalid location %u + %u\n",
++			p_index, map->s_partition_root, map->s_partition_len);
++		return -EFSCORRUPTED;
++	}
+ 
+ 	if (p->accessType == cpu_to_le32(PD_ACCESS_TYPE_READ_ONLY))
+ 		map->s_partition_flags |= UDF_PART_FLAG_READ_ONLY;
+@@ -1172,6 +1179,14 @@ static int udf_fill_partdesc_info(struct super_block *sb,
+ 		bitmap->s_extPosition = le32_to_cpu(
+ 				phd->unallocSpaceBitmap.extPosition);
+ 		map->s_partition_flags |= UDF_PART_FLAG_UNALLOC_BITMAP;
++		/* Check that math over the bitmap won't overflow. */
++		if (check_add_overflow(map->s_partition_len,
++				       sizeof(struct spaceBitmapDesc) << 3,
++				       &sum)) {
++			udf_err(sb, "Partition %d is too long (%u)\n", p_index,
++				map->s_partition_len);
++			return -EFSCORRUPTED;
++		}
+ 		udf_debug("unallocSpaceBitmap (part %d) @ %u\n",
+ 			  p_index, bitmap->s_extPosition);
+ 	}
+diff --git a/fs/xattr.c b/fs/xattr.c
+index f8b643f91a981..7672ce5486c53 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -630,10 +630,9 @@ int do_setxattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 			ctx->kvalue, ctx->size, ctx->flags);
+ }
+ 
+-static long
+-setxattr(struct mnt_idmap *idmap, struct dentry *d,
+-	const char __user *name, const void __user *value, size_t size,
+-	int flags)
++static int path_setxattr(const char __user *pathname,
++			 const char __user *name, const void __user *value,
++			 size_t size, int flags, unsigned int lookup_flags)
+ {
+ 	struct xattr_name kname;
+ 	struct xattr_ctx ctx = {
+@@ -643,33 +642,20 @@ setxattr(struct mnt_idmap *idmap, struct dentry *d,
+ 		.kname    = &kname,
+ 		.flags    = flags,
+ 	};
++	struct path path;
+ 	int error;
+ 
+ 	error = setxattr_copy(name, &ctx);
+ 	if (error)
+ 		return error;
+ 
+-	error = do_setxattr(idmap, d, &ctx);
+-
+-	kvfree(ctx.kvalue);
+-	return error;
+-}
+-
+-static int path_setxattr(const char __user *pathname,
+-			 const char __user *name, const void __user *value,
+-			 size_t size, int flags, unsigned int lookup_flags)
+-{
+-	struct path path;
+-	int error;
+-
+ retry:
+ 	error = user_path_at(AT_FDCWD, pathname, lookup_flags, &path);
+ 	if (error)
+-		return error;
++		goto out;
+ 	error = mnt_want_write(path.mnt);
+ 	if (!error) {
+-		error = setxattr(mnt_idmap(path.mnt), path.dentry, name,
+-				 value, size, flags);
++		error = do_setxattr(mnt_idmap(path.mnt), path.dentry, &ctx);
+ 		mnt_drop_write(path.mnt);
+ 	}
+ 	path_put(&path);
+@@ -677,6 +663,9 @@ static int path_setxattr(const char __user *pathname,
+ 		lookup_flags |= LOOKUP_REVAL;
+ 		goto retry;
+ 	}
++
++out:
++	kvfree(ctx.kvalue);
+ 	return error;
+ }
+ 
+@@ -697,20 +686,32 @@ SYSCALL_DEFINE5(lsetxattr, const char __user *, pathname,
+ SYSCALL_DEFINE5(fsetxattr, int, fd, const char __user *, name,
+ 		const void __user *,value, size_t, size, int, flags)
+ {
+-	struct fd f = fdget(fd);
+-	int error = -EBADF;
++	struct xattr_name kname;
++	struct xattr_ctx ctx = {
++		.cvalue   = value,
++		.kvalue   = NULL,
++		.size     = size,
++		.kname    = &kname,
++		.flags    = flags,
++	};
++	int error;
+ 
++	CLASS(fd, f)(fd);
+ 	if (!f.file)
+-		return error;
++		return -EBADF;
++
+ 	audit_file(f.file);
++	error = setxattr_copy(name, &ctx);
++	if (error)
++		return error;
++
+ 	error = mnt_want_write_file(f.file);
+ 	if (!error) {
+-		error = setxattr(file_mnt_idmap(f.file),
+-				 f.file->f_path.dentry, name,
+-				 value, size, flags);
++		error = do_setxattr(file_mnt_idmap(f.file),
++				    f.file->f_path.dentry, &ctx);
+ 		mnt_drop_write_file(f.file);
+ 	}
+-	fdput(f);
++	kvfree(ctx.kvalue);
+ 	return error;
+ }
+ 
+@@ -899,9 +900,17 @@ SYSCALL_DEFINE3(flistxattr, int, fd, char __user *, list, size_t, size)
+  * Extended attribute REMOVE operations
+  */
+ static long
+-removexattr(struct mnt_idmap *idmap, struct dentry *d,
+-	    const char __user *name)
++removexattr(struct mnt_idmap *idmap, struct dentry *d, const char *name)
+ {
++	if (is_posix_acl_xattr(name))
++		return vfs_remove_acl(idmap, d, name);
++	return vfs_removexattr(idmap, d, name);
++}
++
++static int path_removexattr(const char __user *pathname,
++			    const char __user *name, unsigned int lookup_flags)
++{
++	struct path path;
+ 	int error;
+ 	char kname[XATTR_NAME_MAX + 1];
+ 
+@@ -910,25 +919,13 @@ removexattr(struct mnt_idmap *idmap, struct dentry *d,
+ 		error = -ERANGE;
+ 	if (error < 0)
+ 		return error;
+-
+-	if (is_posix_acl_xattr(kname))
+-		return vfs_remove_acl(idmap, d, kname);
+-
+-	return vfs_removexattr(idmap, d, kname);
+-}
+-
+-static int path_removexattr(const char __user *pathname,
+-			    const char __user *name, unsigned int lookup_flags)
+-{
+-	struct path path;
+-	int error;
+ retry:
+ 	error = user_path_at(AT_FDCWD, pathname, lookup_flags, &path);
+ 	if (error)
+ 		return error;
+ 	error = mnt_want_write(path.mnt);
+ 	if (!error) {
+-		error = removexattr(mnt_idmap(path.mnt), path.dentry, name);
++		error = removexattr(mnt_idmap(path.mnt), path.dentry, kname);
+ 		mnt_drop_write(path.mnt);
+ 	}
+ 	path_put(&path);
+@@ -954,15 +951,23 @@ SYSCALL_DEFINE2(lremovexattr, const char __user *, pathname,
+ SYSCALL_DEFINE2(fremovexattr, int, fd, const char __user *, name)
+ {
+ 	struct fd f = fdget(fd);
++	char kname[XATTR_NAME_MAX + 1];
+ 	int error = -EBADF;
+ 
+ 	if (!f.file)
+ 		return error;
+ 	audit_file(f.file);
++
++	error = strncpy_from_user(kname, name, sizeof(kname));
++	if (error == 0 || error == sizeof(kname))
++		error = -ERANGE;
++	if (error < 0)
++		return error;
++
+ 	error = mnt_want_write_file(f.file);
+ 	if (!error) {
+ 		error = removexattr(file_mnt_idmap(f.file),
+-				    f.file->f_path.dentry, name);
++				    f.file->f_path.dentry, kname);
+ 		mnt_drop_write_file(f.file);
+ 	}
+ 	fdput(f);
+diff --git a/fs/xfs/libxfs/xfs_ialloc_btree.c b/fs/xfs/libxfs/xfs_ialloc_btree.c
+index 42e9fd47f6c73..ffef1c75dd57c 100644
+--- a/fs/xfs/libxfs/xfs_ialloc_btree.c
++++ b/fs/xfs/libxfs/xfs_ialloc_btree.c
+@@ -749,7 +749,7 @@ xfs_finobt_count_blocks(
+ 	if (error)
+ 		return error;
+ 
+-	cur = xfs_inobt_init_cursor(pag, tp, agbp);
++	cur = xfs_finobt_init_cursor(pag, tp, agbp);
+ 	error = xfs_btree_count_blocks(cur, tree_blocks);
+ 	xfs_btree_del_cursor(cur, error);
+ 	xfs_trans_brelse(tp, agbp);
+diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
+index fb3c3e7181e6d..ce91d9b2acb9f 100644
+--- a/include/linux/bpf-cgroup.h
++++ b/include/linux/bpf-cgroup.h
+@@ -390,14 +390,6 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
+ 	__ret;								       \
+ })
+ 
+-#define BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen)			       \
+-({									       \
+-	int __ret = 0;							       \
+-	if (cgroup_bpf_enabled(CGROUP_GETSOCKOPT))			       \
+-		copy_from_sockptr(&__ret, optlen, sizeof(int));		       \
+-	__ret;								       \
+-})
+-
+ #define BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock, level, optname, optval, optlen,   \
+ 				       max_optlen, retval)		       \
+ ({									       \
+@@ -518,7 +510,6 @@ static inline int bpf_percpu_cgroup_storage_update(struct bpf_map *map,
+ #define BPF_CGROUP_RUN_PROG_SOCK_OPS(sock_ops) ({ 0; })
+ #define BPF_CGROUP_RUN_PROG_DEVICE_CGROUP(atype, major, minor, access) ({ 0; })
+ #define BPF_CGROUP_RUN_PROG_SYSCTL(head,table,write,buf,count,pos) ({ 0; })
+-#define BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen) ({ 0; })
+ #define BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock, level, optname, optval, \
+ 				       optlen, max_optlen, retval) ({ retval; })
+ #define BPF_CGROUP_RUN_PROG_GETSOCKOPT_KERN(sock, level, optname, optval, \
+diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h
+index d7bb31d9a4463..da09bfaa7b813 100644
+--- a/include/linux/mlx5/device.h
++++ b/include/linux/mlx5/device.h
+@@ -294,6 +294,7 @@ enum {
+ #define MLX5_UMR_FLEX_ALIGNMENT 0x40
+ #define MLX5_UMR_MTT_NUM_ENTRIES_ALIGNMENT (MLX5_UMR_FLEX_ALIGNMENT / sizeof(struct mlx5_mtt))
+ #define MLX5_UMR_KLM_NUM_ENTRIES_ALIGNMENT (MLX5_UMR_FLEX_ALIGNMENT / sizeof(struct mlx5_klm))
++#define MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT (MLX5_UMR_FLEX_ALIGNMENT / sizeof(struct mlx5_ksm))
+ 
+ #define MLX5_USER_INDEX_LEN (MLX5_FLD_SZ_BYTES(qpc, user_index) * 8)
+ 
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index f32177152921e..81562397e8347 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -97,6 +97,10 @@ extern const int mmap_rnd_compat_bits_max;
+ extern int mmap_rnd_compat_bits __read_mostly;
+ #endif
+ 
++#ifndef PHYSMEM_END
++# define PHYSMEM_END	((1ULL << MAX_PHYSMEM_BITS) - 1)
++#endif
++
+ #include <asm/page.h>
+ #include <asm/processor.h>
+ 
+diff --git a/include/linux/netfs.h b/include/linux/netfs.h
+index 5d0288938cc2d..d8892b1a2dd77 100644
+--- a/include/linux/netfs.h
++++ b/include/linux/netfs.h
+@@ -200,6 +200,7 @@ struct netfs_io_subrequest {
+ #define NETFS_SREQ_NEED_RETRY		9	/* Set if the filesystem requests a retry */
+ #define NETFS_SREQ_RETRYING		10	/* Set if we're retrying */
+ #define NETFS_SREQ_FAILED		11	/* Set if the subreq failed unretryably */
++#define NETFS_SREQ_HIT_EOF		12	/* Set if we hit the EOF */
+ };
+ 
+ enum netfs_io_origin {
+diff --git a/include/linux/nvme.h b/include/linux/nvme.h
+index c693ac344ec05..efda407622c16 100644
+--- a/include/linux/nvme.h
++++ b/include/linux/nvme.h
+@@ -1848,6 +1848,7 @@ enum {
+ 	/*
+ 	 * Generic Command Status:
+ 	 */
++	NVME_SCT_GENERIC		= 0x0,
+ 	NVME_SC_SUCCESS			= 0x0,
+ 	NVME_SC_INVALID_OPCODE		= 0x1,
+ 	NVME_SC_INVALID_FIELD		= 0x2,
+@@ -1895,6 +1896,7 @@ enum {
+ 	/*
+ 	 * Command Specific Status:
+ 	 */
++	NVME_SCT_COMMAND_SPECIFIC	= 0x100,
+ 	NVME_SC_CQ_INVALID		= 0x100,
+ 	NVME_SC_QID_INVALID		= 0x101,
+ 	NVME_SC_QUEUE_SIZE		= 0x102,
+@@ -1968,6 +1970,7 @@ enum {
+ 	/*
+ 	 * Media and Data Integrity Errors:
+ 	 */
++	NVME_SCT_MEDIA_ERROR		= 0x200,
+ 	NVME_SC_WRITE_FAULT		= 0x280,
+ 	NVME_SC_READ_ERROR		= 0x281,
+ 	NVME_SC_GUARD_CHECK		= 0x282,
+@@ -1980,6 +1983,7 @@ enum {
+ 	/*
+ 	 * Path-related Errors:
+ 	 */
++	NVME_SCT_PATH			= 0x300,
+ 	NVME_SC_INTERNAL_PATH_ERROR	= 0x300,
+ 	NVME_SC_ANA_PERSISTENT_LOSS	= 0x301,
+ 	NVME_SC_ANA_INACCESSIBLE	= 0x302,
+@@ -1988,11 +1992,17 @@ enum {
+ 	NVME_SC_HOST_PATH_ERROR		= 0x370,
+ 	NVME_SC_HOST_ABORTED_CMD	= 0x371,
+ 
+-	NVME_SC_CRD			= 0x1800,
+-	NVME_SC_MORE			= 0x2000,
+-	NVME_SC_DNR			= 0x4000,
++	NVME_SC_MASK			= 0x00ff, /* Status Code */
++	NVME_SCT_MASK			= 0x0700, /* Status Code Type */
++	NVME_SCT_SC_MASK		= NVME_SCT_MASK | NVME_SC_MASK,
++
++	NVME_STATUS_CRD			= 0x1800, /* Command Retry Delayed */
++	NVME_STATUS_MORE		= 0x2000,
++	NVME_STATUS_DNR			= 0x4000, /* Do Not Retry */
+ };
+ 
++#define NVME_SCT(status) ((status) >> 8 & 7)
++
+ struct nvme_completion {
+ 	/*
+ 	 * Used by Admin and Fabrics commands to return data:
+diff --git a/include/linux/path.h b/include/linux/path.h
+index 475225a03d0dc..ca073e70decd5 100644
+--- a/include/linux/path.h
++++ b/include/linux/path.h
+@@ -24,4 +24,13 @@ static inline void path_put_init(struct path *path)
+ 	*path = (struct path) { };
+ }
+ 
++/*
++ * Cleanup macro for use with __free(path_put). Avoids dereference and
++ * copying @path unlike DEFINE_FREE(). path_put() will handle the empty
++ * path correctly; just ensure @path is initialized:
++ *
++ * struct path path __free(path_put) = {};
++ */
++#define __free_path_put path_put
++
+ #endif  /* _LINUX_PATH_H */
+diff --git a/include/linux/regulator/consumer.h b/include/linux/regulator/consumer.h
+index 59d0b9a79e6e0..e6ad927bb4a8a 100644
+--- a/include/linux/regulator/consumer.h
++++ b/include/linux/regulator/consumer.h
+@@ -451,6 +451,14 @@ static inline int of_regulator_bulk_get_all(struct device *dev, struct device_no
+ 	return 0;
+ }
+ 
++static inline int devm_regulator_bulk_get_const(
++	struct device *dev, int num_consumers,
++	const struct regulator_bulk_data *in_consumers,
++	struct regulator_bulk_data **out_consumers)
++{
++	return 0;
++}
++
+ static inline int regulator_bulk_enable(int num_consumers,
+ 					struct regulator_bulk_data *consumers)
+ {
+diff --git a/include/linux/zswap.h b/include/linux/zswap.h
+index 2a85b941db975..ce5e7bfe8f1ec 100644
+--- a/include/linux/zswap.h
++++ b/include/linux/zswap.h
+@@ -35,7 +35,7 @@ void zswap_swapoff(int type);
+ void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
+ void zswap_lruvec_state_init(struct lruvec *lruvec);
+ void zswap_folio_swapin(struct folio *folio);
+-bool is_zswap_enabled(void);
++bool zswap_is_enabled(void);
+ #else
+ 
+ struct zswap_lruvec_state {};
+@@ -60,7 +60,7 @@ static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
+ static inline void zswap_lruvec_state_init(struct lruvec *lruvec) {}
+ static inline void zswap_folio_swapin(struct folio *folio) {}
+ 
+-static inline bool is_zswap_enabled(void)
++static inline bool zswap_is_enabled(void)
+ {
+ 	return false;
+ }
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index c97ff64c9189f..ecb6824e9add8 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -186,7 +186,6 @@ struct blocked_key {
+ struct smp_csrk {
+ 	bdaddr_t bdaddr;
+ 	u8 bdaddr_type;
+-	u8 link_type;
+ 	u8 type;
+ 	u8 val[16];
+ };
+@@ -196,7 +195,6 @@ struct smp_ltk {
+ 	struct rcu_head rcu;
+ 	bdaddr_t bdaddr;
+ 	u8 bdaddr_type;
+-	u8 link_type;
+ 	u8 authenticated;
+ 	u8 type;
+ 	u8 enc_size;
+@@ -211,7 +209,6 @@ struct smp_irk {
+ 	bdaddr_t rpa;
+ 	bdaddr_t bdaddr;
+ 	u8 addr_type;
+-	u8 link_type;
+ 	u8 val[16];
+ };
+ 
+@@ -219,8 +216,6 @@ struct link_key {
+ 	struct list_head list;
+ 	struct rcu_head rcu;
+ 	bdaddr_t bdaddr;
+-	u8 bdaddr_type;
+-	u8 link_type;
+ 	u8 type;
+ 	u8 val[HCI_LINK_KEY_SIZE];
+ 	u8 pin_len;
+diff --git a/include/net/bluetooth/hci_sync.h b/include/net/bluetooth/hci_sync.h
+index 534c3386e714f..3cb2d10cac930 100644
+--- a/include/net/bluetooth/hci_sync.h
++++ b/include/net/bluetooth/hci_sync.h
+@@ -52,6 +52,10 @@ int hci_cmd_sync_queue(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
+ 		       void *data, hci_cmd_sync_work_destroy_t destroy);
+ int hci_cmd_sync_queue_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
+ 			    void *data, hci_cmd_sync_work_destroy_t destroy);
++int hci_cmd_sync_run(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
++		     void *data, hci_cmd_sync_work_destroy_t destroy);
++int hci_cmd_sync_run_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
++			  void *data, hci_cmd_sync_work_destroy_t destroy);
+ struct hci_cmd_sync_work_entry *
+ hci_cmd_sync_lookup_entry(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
+ 			  void *data, hci_cmd_sync_work_destroy_t destroy);
+diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
+index 9bdb1fdc7c51b..5927bd9d46bef 100644
+--- a/include/net/mana/mana.h
++++ b/include/net/mana/mana.h
+@@ -97,6 +97,8 @@ struct mana_txq {
+ 
+ 	atomic_t pending_sends;
+ 
++	bool napi_initialized;
++
+ 	struct mana_stats_tx stats;
+ };
+ 
+diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
+index 84d502e429614..2d84a8052b157 100644
+--- a/include/uapi/drm/drm_fourcc.h
++++ b/include/uapi/drm/drm_fourcc.h
+@@ -1476,6 +1476,7 @@ drm_fourcc_canonicalize_nvidia_format_mod(__u64 modifier)
+ #define AMD_FMT_MOD_TILE_VER_GFX10 2
+ #define AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS 3
+ #define AMD_FMT_MOD_TILE_VER_GFX11 4
++#define AMD_FMT_MOD_TILE_VER_GFX12 5
+ 
+ /*
+  * 64K_S is the same for GFX9/GFX10/GFX10_RBPLUS and hence has GFX9 as canonical
+@@ -1486,6 +1487,8 @@ drm_fourcc_canonicalize_nvidia_format_mod(__u64 modifier)
+ /*
+  * 64K_D for non-32 bpp is the same for GFX9/GFX10/GFX10_RBPLUS and hence has
+  * GFX9 as canonical version.
++ *
++ * 64K_D_2D on GFX12 is identical to 64K_D on GFX11.
+  */
+ #define AMD_FMT_MOD_TILE_GFX9_64K_D 10
+ #define AMD_FMT_MOD_TILE_GFX9_64K_S_X 25
+@@ -1493,6 +1496,21 @@ drm_fourcc_canonicalize_nvidia_format_mod(__u64 modifier)
+ #define AMD_FMT_MOD_TILE_GFX9_64K_R_X 27
+ #define AMD_FMT_MOD_TILE_GFX11_256K_R_X 31
+ 
++/* Gfx12 swizzle modes:
++ *    0 - LINEAR
++ *    1 - 256B_2D  - 2D block dimensions
++ *    2 - 4KB_2D
++ *    3 - 64KB_2D
++ *    4 - 256KB_2D
++ *    5 - 4KB_3D   - 3D block dimensions
++ *    6 - 64KB_3D
++ *    7 - 256KB_3D
++ */
++#define AMD_FMT_MOD_TILE_GFX12_256B_2D 1
++#define AMD_FMT_MOD_TILE_GFX12_4K_2D 2
++#define AMD_FMT_MOD_TILE_GFX12_64K_2D 3
++#define AMD_FMT_MOD_TILE_GFX12_256K_2D 4
++
+ #define AMD_FMT_MOD_DCC_BLOCK_64B 0
+ #define AMD_FMT_MOD_DCC_BLOCK_128B 1
+ #define AMD_FMT_MOD_DCC_BLOCK_256B 2
+diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
+index 926b1deb11166..e23a7f9b0eacd 100644
+--- a/include/uapi/drm/panthor_drm.h
++++ b/include/uapi/drm/panthor_drm.h
+@@ -692,7 +692,11 @@ enum drm_panthor_group_priority {
+ 	/** @PANTHOR_GROUP_PRIORITY_MEDIUM: Medium priority group. */
+ 	PANTHOR_GROUP_PRIORITY_MEDIUM,
+ 
+-	/** @PANTHOR_GROUP_PRIORITY_HIGH: High priority group. */
++	/**
++	 * @PANTHOR_GROUP_PRIORITY_HIGH: High priority group.
++	 *
++	 * Requires CAP_SYS_NICE or DRM_MASTER.
++	 */
+ 	PANTHOR_GROUP_PRIORITY_HIGH,
+ };
+ 
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index fe360b5b211d1..2f157ffbc67ce 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -817,9 +817,11 @@ static bool btf_name_valid_section(const struct btf *btf, u32 offset)
+ 	const char *src = btf_str_by_offset(btf, offset);
+ 	const char *src_limit;
+ 
++	if (!*src)
++		return false;
++
+ 	/* set a limit on identifier length */
+ 	src_limit = src + KSYM_NAME_LEN;
+-	src++;
+ 	while (*src && src < src_limit) {
+ 		if (!isprint(*src))
+ 			return false;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 521bd7efae038..73f55f4b945ee 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2982,8 +2982,10 @@ static int check_subprogs(struct bpf_verifier_env *env)
+ 
+ 		if (code == (BPF_JMP | BPF_CALL) &&
+ 		    insn[i].src_reg == 0 &&
+-		    insn[i].imm == BPF_FUNC_tail_call)
++		    insn[i].imm == BPF_FUNC_tail_call) {
+ 			subprog[cur_subprog].has_tail_call = true;
++			subprog[cur_subprog].tail_call_reachable = true;
++		}
+ 		if (BPF_CLASS(code) == BPF_LD &&
+ 		    (BPF_MODE(code) == BPF_ABS || BPF_MODE(code) == BPF_IND))
+ 			subprog[cur_subprog].has_ld_abs = true;
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index e32b6972c4784..278889170f941 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -1839,9 +1839,9 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ 		RCU_INIT_POINTER(scgrp->subsys[ssid], NULL);
+ 		rcu_assign_pointer(dcgrp->subsys[ssid], css);
+ 		ss->root = dst_root;
+-		css->cgroup = dcgrp;
+ 
+ 		spin_lock_irq(&css_set_lock);
++		css->cgroup = dcgrp;
+ 		WARN_ON(!list_empty(&dcgrp->e_csets[ss->id]));
+ 		list_for_each_entry_safe(cset, cset_pos, &scgrp->e_csets[ss->id],
+ 					 e_cset_node[ss->id]) {
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index fc1c6236460d2..e8f24483e05f0 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -826,17 +826,41 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ 
+ 	/*
+ 	 * If either I or some sibling (!= me) is exclusive, we can't
+-	 * overlap
++	 * overlap. Sibling exclusive_cpus masks cannot overlap when set.
+ 	 */
+ 	ret = -EINVAL;
+ 	cpuset_for_each_child(c, css, par) {
+-		if ((is_cpu_exclusive(trial) || is_cpu_exclusive(c)) &&
+-		    c != cur) {
++		bool txset, cxset;	/* Are exclusive_cpus set? */
++
++		if (c == cur)
++			continue;
++
++		txset = !cpumask_empty(trial->exclusive_cpus);
++		cxset = !cpumask_empty(c->exclusive_cpus);
++		if (is_cpu_exclusive(trial) || is_cpu_exclusive(c) ||
++		    (txset && cxset)) {
+ 			if (!cpusets_are_exclusive(trial, c))
+ 				goto out;
++		} else if (txset || cxset) {
++			struct cpumask *xcpus, *acpus;
++
++			/*
++			 * When only one of the two exclusive_cpus masks is set,
++			 * cpus_allowed of the other cpuset, if set, must not be
++			 * a subset of it, or none of those CPUs will be
++			 * available once the exclusive CPUs are activated.
++			 */
++			if (txset) {
++				xcpus = trial->exclusive_cpus;
++				acpus = c->cpus_allowed;
++			} else {
++				xcpus = c->exclusive_cpus;
++				acpus = trial->cpus_allowed;
++			}
++			if (!cpumask_empty(acpus) && cpumask_subset(acpus, xcpus))
++				goto out;
+ 		}
+ 		if ((is_mem_exclusive(trial) || is_mem_exclusive(c)) &&
+-		    c != cur &&
+ 		    nodes_intersects(trial->mems_allowed, c->mems_allowed))
+ 			goto out;
+ 	}
+@@ -1376,7 +1400,7 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
+  */
+ static int update_partition_exclusive(struct cpuset *cs, int new_prs)
+ {
+-	bool exclusive = (new_prs > 0);
++	bool exclusive = (new_prs > PRS_MEMBER);
+ 
+ 	if (exclusive && !is_cpu_exclusive(cs)) {
+ 		if (update_flag(CS_CPU_EXCLUSIVE, cs, 1))
+@@ -2624,8 +2648,6 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ 		retval = cpulist_parse(buf, trialcs->exclusive_cpus);
+ 		if (retval < 0)
+ 			return retval;
+-		if (!is_cpu_exclusive(cs))
+-			set_bit(CS_CPU_EXCLUSIVE, &trialcs->flags);
+ 	}
+ 
+ 	/* Nothing to do if the CPUs didn't change */
+diff --git a/kernel/dma/map_benchmark.c b/kernel/dma/map_benchmark.c
+index 4950e0b622b1f..cc19a3efea896 100644
+--- a/kernel/dma/map_benchmark.c
++++ b/kernel/dma/map_benchmark.c
+@@ -89,6 +89,22 @@ static int map_benchmark_thread(void *data)
+ 		atomic64_add(map_sq, &map->sum_sq_map);
+ 		atomic64_add(unmap_sq, &map->sum_sq_unmap);
+ 		atomic64_inc(&map->loops);
++
++		/*
++		 * We may test for a long time so periodically check whether
++		 * we need to schedule to avoid starving the others. Otherwise
++		 * we may hang the kernel in a non-preemptible kernel when the
++		 * number of test kthreads is >= the CPU count: the test kthreads
++		 * will run endlessly on every CPU since the thread responsible
++		 * for signalling the kthreads to stop (in do_map_benchmark())
++		 * could not be scheduled.
++		 *
++		 * Note this may degrade the test concurrency since the test
++		 * threads may need to share the CPU time with other load
++		 * in the system. So it's recommended to run this benchmark
++		 * on an idle system.
++		 */
++		cond_resched();
+ 	}
+ 
+ out:
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 7891d5a75526a..36191add55c37 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -1255,8 +1255,9 @@ static void put_ctx(struct perf_event_context *ctx)
+  *	  perf_event_context::mutex
+  *	    perf_event::child_mutex;
+  *	      perf_event_context::lock
+- *	    perf_event::mmap_mutex
+  *	    mmap_lock
++ *	      perf_event::mmap_mutex
++ *	        perf_buffer::aux_mutex
+  *	      perf_addr_filters_head::lock
+  *
+  *    cpu_hotplug_lock
+@@ -6383,12 +6384,11 @@ static void perf_mmap_close(struct vm_area_struct *vma)
+ 		event->pmu->event_unmapped(event, vma->vm_mm);
+ 
+ 	/*
+-	 * rb->aux_mmap_count will always drop before rb->mmap_count and
+-	 * event->mmap_count, so it is ok to use event->mmap_mutex to
+-	 * serialize with perf_mmap here.
++	 * The AUX buffer is strictly a sub-buffer; serialize using aux_mutex
++	 * to avoid complications.
+ 	 */
+ 	if (rb_has_aux(rb) && vma->vm_pgoff == rb->aux_pgoff &&
+-	    atomic_dec_and_mutex_lock(&rb->aux_mmap_count, &event->mmap_mutex)) {
++	    atomic_dec_and_mutex_lock(&rb->aux_mmap_count, &rb->aux_mutex)) {
+ 		/*
+ 		 * Stop all AUX events that are writing to this buffer,
+ 		 * so that we can free its AUX pages and corresponding PMU
+@@ -6405,7 +6405,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
+ 		rb_free_aux(rb);
+ 		WARN_ON_ONCE(refcount_read(&rb->aux_refcount));
+ 
+-		mutex_unlock(&event->mmap_mutex);
++		mutex_unlock(&rb->aux_mutex);
+ 	}
+ 
+ 	if (atomic_dec_and_test(&rb->mmap_count))
+@@ -6493,6 +6493,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 	struct perf_event *event = file->private_data;
+ 	unsigned long user_locked, user_lock_limit;
+ 	struct user_struct *user = current_user();
++	struct mutex *aux_mutex = NULL;
+ 	struct perf_buffer *rb = NULL;
+ 	unsigned long locked, lock_limit;
+ 	unsigned long vma_size;
+@@ -6541,6 +6542,9 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 		if (!rb)
+ 			goto aux_unlock;
+ 
++		aux_mutex = &rb->aux_mutex;
++		mutex_lock(aux_mutex);
++
+ 		aux_offset = READ_ONCE(rb->user_page->aux_offset);
+ 		aux_size = READ_ONCE(rb->user_page->aux_size);
+ 
+@@ -6691,6 +6695,8 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 		atomic_dec(&rb->mmap_count);
+ 	}
+ aux_unlock:
++	if (aux_mutex)
++		mutex_unlock(aux_mutex);
+ 	mutex_unlock(&event->mmap_mutex);
+ 
+ 	/*
+diff --git a/kernel/events/internal.h b/kernel/events/internal.h
+index 386d21c7edfa0..f376b057320ce 100644
+--- a/kernel/events/internal.h
++++ b/kernel/events/internal.h
+@@ -40,6 +40,7 @@ struct perf_buffer {
+ 	struct user_struct		*mmap_user;
+ 
+ 	/* AUX area */
++	struct mutex			aux_mutex;
+ 	long				aux_head;
+ 	unsigned int			aux_nest;
+ 	long				aux_wakeup;	/* last aux_watermark boundary crossed by aux_head */
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 485cf0a66631b..e4f208df31029 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -337,6 +337,8 @@ ring_buffer_init(struct perf_buffer *rb, long watermark, int flags)
+ 	 */
+ 	if (!rb->nr_pages)
+ 		rb->paused = 1;
++
++	mutex_init(&rb->aux_mutex);
+ }
+ 
+ void perf_aux_output_flag(struct perf_output_handle *handle, u64 flags)
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index 2c83ba776fc7b..47cdec3e1df11 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -1480,7 +1480,7 @@ static struct xol_area *__create_xol_area(unsigned long vaddr)
+ 	uprobe_opcode_t insn = UPROBE_SWBP_INSN;
+ 	struct xol_area *area;
+ 
+-	area = kmalloc(sizeof(*area), GFP_KERNEL);
++	area = kzalloc(sizeof(*area), GFP_KERNEL);
+ 	if (unlikely(!area))
+ 		goto out;
+ 
+@@ -1490,7 +1490,6 @@ static struct xol_area *__create_xol_area(unsigned long vaddr)
+ 		goto free_area;
+ 
+ 	area->xol_mapping.name = "[uprobes]";
+-	area->xol_mapping.fault = NULL;
+ 	area->xol_mapping.pages = area->pages;
+ 	area->pages[0] = alloc_page(GFP_HIGHUSER);
+ 	if (!area->pages[0])
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 81fcee45d6302..be81342caf1bb 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -277,7 +277,6 @@ void release_task(struct task_struct *p)
+ 	}
+ 
+ 	write_unlock_irq(&tasklist_lock);
+-	seccomp_filter_release(p);
+ 	proc_flush_pid(thread_pid);
+ 	put_pid(thread_pid);
+ 	release_thread(p);
+@@ -834,6 +833,8 @@ void __noreturn do_exit(long code)
+ 	io_uring_files_cancel();
+ 	exit_signals(tsk);  /* sets PF_EXITING */
+ 
++	seccomp_filter_release(tsk);
++
+ 	acct_update_integrals(tsk);
+ 	group_dead = atomic_dec_and_test(&tsk->signal->live);
+ 	if (group_dead) {
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index 3d64290d24c9a..3eedb8c226ad8 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -752,7 +752,7 @@ static int kexec_calculate_store_digests(struct kimage *image)
+ 
+ #ifdef CONFIG_CRASH_HOTPLUG
+ 		/* Exclude elfcorehdr segment to allow future changes via hotplug */
+-		if (j == image->elfcorehdr_index)
++		if (i == image->elfcorehdr_index)
+ 			continue;
+ #endif
+ 
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 88d08eeb8bc03..fba1229f1de66 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -1644,6 +1644,7 @@ static int __sched rt_mutex_slowlock_block(struct rt_mutex_base *lock,
+ }
+ 
+ static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
++					     struct rt_mutex_base *lock,
+ 					     struct rt_mutex_waiter *w)
+ {
+ 	/*
+@@ -1656,10 +1657,10 @@ static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
+ 	if (build_ww_mutex() && w->ww_ctx)
+ 		return;
+ 
+-	/*
+-	 * Yell loudly and stop the task right here.
+-	 */
++	raw_spin_unlock_irq(&lock->wait_lock);
++
+ 	WARN(1, "rtmutex deadlock detected\n");
++
+ 	while (1) {
+ 		set_current_state(TASK_INTERRUPTIBLE);
+ 		rt_mutex_schedule();
+@@ -1713,7 +1714,7 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
+ 	} else {
+ 		__set_current_state(TASK_RUNNING);
+ 		remove_waiter(lock, waiter);
+-		rt_mutex_handle_deadlock(ret, chwalk, waiter);
++		rt_mutex_handle_deadlock(ret, chwalk, lock, waiter);
+ 	}
+ 
+ 	/*
+diff --git a/kernel/resource.c b/kernel/resource.c
+index fcbca39dbc450..b0e2b15ecb409 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -1832,8 +1832,7 @@ static resource_size_t gfr_start(struct resource *base, resource_size_t size,
+ 	if (flags & GFR_DESCENDING) {
+ 		resource_size_t end;
+ 
+-		end = min_t(resource_size_t, base->end,
+-			    (1ULL << MAX_PHYSMEM_BITS) - 1);
++		end = min_t(resource_size_t, base->end, PHYSMEM_END);
+ 		return end - size + 1;
+ 	}
+ 
+@@ -1850,8 +1849,7 @@ static bool gfr_continue(struct resource *base, resource_size_t addr,
+ 	 * @size did not wrap 0.
+ 	 */
+ 	return addr > addr - size &&
+-	       addr <= min_t(resource_size_t, base->end,
+-			     (1ULL << MAX_PHYSMEM_BITS) - 1);
++	       addr <= min_t(resource_size_t, base->end, PHYSMEM_END);
+ }
+ 
+ static resource_size_t gfr_next(resource_size_t addr, resource_size_t size,
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index e30b60b57614e..b02337e956644 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -502,6 +502,9 @@ static inline pid_t seccomp_can_sync_threads(void)
+ 		/* Skip current, since it is initiating the sync. */
+ 		if (thread == caller)
+ 			continue;
++		/* Skip exited threads. */
++		if (thread->flags & PF_EXITING)
++			continue;
+ 
+ 		if (thread->seccomp.mode == SECCOMP_MODE_DISABLED ||
+ 		    (thread->seccomp.mode == SECCOMP_MODE_FILTER &&
+@@ -563,18 +566,21 @@ static void __seccomp_filter_release(struct seccomp_filter *orig)
+  * @tsk: task the filter should be released from.
+  *
+  * This function should only be called when the task is exiting as
+- * it detaches it from its filter tree. As such, READ_ONCE() and
+- * barriers are not needed here, as would normally be needed.
++ * it detaches it from its filter tree. PF_EXITING has to be set
++ * for the task.
+  */
+ void seccomp_filter_release(struct task_struct *tsk)
+ {
+-	struct seccomp_filter *orig = tsk->seccomp.filter;
++	struct seccomp_filter *orig;
+ 
+-	/* We are effectively holding the siglock by not having any sighand. */
+-	WARN_ON(tsk->sighand != NULL);
++	if (WARN_ON((tsk->flags & PF_EXITING) == 0))
++		return;
+ 
++	spin_lock_irq(&tsk->sighand->siglock);
++	orig = tsk->seccomp.filter;
+ 	/* Detach task from its filter tree. */
+ 	tsk->seccomp.filter = NULL;
++	spin_unlock_irq(&tsk->sighand->siglock);
+ 	__seccomp_filter_release(orig);
+ }
+ 
+@@ -602,6 +608,13 @@ static inline void seccomp_sync_threads(unsigned long flags)
+ 		if (thread == caller)
+ 			continue;
+ 
++		/*
++		 * Skip exited threads. seccomp_filter_release could have
++		 * been already called for this task.
++		 */
++		if (thread->flags & PF_EXITING)
++			continue;
++
+ 		/* Get a task reference for the new leaf node. */
+ 		get_seccomp_filter(caller);
+ 
+diff --git a/kernel/smp.c b/kernel/smp.c
+index f085ebcdf9e70..af9b2d0736c86 100644
+--- a/kernel/smp.c
++++ b/kernel/smp.c
+@@ -1119,6 +1119,7 @@ int smp_call_on_cpu(unsigned int cpu, int (*func)(void *), void *par, bool phys)
+ 
+ 	queue_work_on(cpu, system_wq, &sscs.work);
+ 	wait_for_completion(&sscs.done);
++	destroy_work_on_stack(&sscs.work);
+ 
+ 	return sscs.ret;
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index cb507860163d0..c566ad068b40d 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3958,6 +3958,8 @@ void tracing_iter_reset(struct trace_iterator *iter, int cpu)
+ 			break;
+ 		entries++;
+ 		ring_buffer_iter_advance(buf_iter);
++		/* This could be a big loop */
++		cond_resched();
+ 	}
+ 
+ 	per_cpu_ptr(iter->array_buffer->data, cpu)->skipped_entries = entries;
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 16383247bdbf1..0d88922f8763c 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -678,6 +678,21 @@ static int register_trace_kprobe(struct trace_kprobe *tk)
+ }
+ 
+ #ifdef CONFIG_MODULES
++static int validate_module_probe_symbol(const char *modname, const char *symbol);
++
++static int register_module_trace_kprobe(struct module *mod, struct trace_kprobe *tk)
++{
++	const char *p;
++	int ret = 0;
++
++	p = strchr(trace_kprobe_symbol(tk), ':');
++	if (p)
++		ret = validate_module_probe_symbol(module_name(mod), p + 1);
++	if (!ret)
++		ret = __register_trace_kprobe(tk);
++	return ret;
++}
++
+ /* Module notifier call back, checking event on the module */
+ static int trace_kprobe_module_callback(struct notifier_block *nb,
+ 				       unsigned long val, void *data)
+@@ -696,7 +711,7 @@ static int trace_kprobe_module_callback(struct notifier_block *nb,
+ 		if (trace_kprobe_within_module(tk, mod)) {
+ 			/* Don't need to check busy - this should have gone. */
+ 			__unregister_trace_kprobe(tk);
+-			ret = __register_trace_kprobe(tk);
++			ret = register_module_trace_kprobe(mod, tk);
+ 			if (ret)
+ 				pr_warn("Failed to re-register probe %s on %s: %d\n",
+ 					trace_probe_name(&tk->tp),
+@@ -747,17 +762,68 @@ static int count_mod_symbols(void *data, const char *name, unsigned long unused)
+ 	return 0;
+ }
+ 
+-static unsigned int number_of_same_symbols(char *func_name)
++static unsigned int number_of_same_symbols(const char *mod, const char *func_name)
+ {
+ 	struct sym_count_ctx ctx = { .count = 0, .name = func_name };
+ 
+-	kallsyms_on_each_match_symbol(count_symbols, func_name, &ctx.count);
++	if (!mod)
++		kallsyms_on_each_match_symbol(count_symbols, func_name, &ctx.count);
+ 
+-	module_kallsyms_on_each_symbol(NULL, count_mod_symbols, &ctx);
++	module_kallsyms_on_each_symbol(mod, count_mod_symbols, &ctx);
+ 
+ 	return ctx.count;
+ }
+ 
++static int validate_module_probe_symbol(const char *modname, const char *symbol)
++{
++	unsigned int count = number_of_same_symbols(modname, symbol);
++
++	if (count > 1) {
++		/*
++		 * Users should use ADDR to remove the ambiguity of
++		 * using KSYM only.
++		 */
++		return -EADDRNOTAVAIL;
++	} else if (count == 0) {
++		/*
++		 * We can return ENOENT earlier than when register the
++		 * kprobe.
++		 */
++		return -ENOENT;
++	}
++	return 0;
++}
++
++static int validate_probe_symbol(char *symbol)
++{
++	struct module *mod = NULL;
++	char *modname = NULL, *p;
++	int ret = 0;
++
++	p = strchr(symbol, ':');
++	if (p) {
++		modname = symbol;
++		symbol = p + 1;
++		*p = '\0';
++		/* Return 0 (defer) if the module does not exist yet. */
++		rcu_read_lock_sched();
++		mod = find_module(modname);
++		if (mod && !try_module_get(mod))
++			mod = NULL;
++		rcu_read_unlock_sched();
++		if (!mod)
++			goto out;
++	}
++
++	ret = validate_module_probe_symbol(modname, symbol);
++out:
++	if (p)
++		*p = ':';
++	if (mod)
++		module_put(mod);
++	return ret;
++}
++
+ static int trace_kprobe_entry_handler(struct kretprobe_instance *ri,
+ 				      struct pt_regs *regs);
+ 
+@@ -881,6 +947,14 @@ static int __trace_kprobe_create(int argc, const char *argv[])
+ 			trace_probe_log_err(0, BAD_PROBE_ADDR);
+ 			goto parse_error;
+ 		}
++		ret = validate_probe_symbol(symbol);
++		if (ret) {
++			if (ret == -EADDRNOTAVAIL)
++				trace_probe_log_err(0, NON_UNIQ_SYMBOL);
++			else
++				trace_probe_log_err(0, BAD_PROBE_ADDR);
++			goto parse_error;
++		}
+ 		if (is_return)
+ 			ctx.flags |= TPARG_FL_RETURN;
+ 		ret = kprobe_on_func_entry(NULL, symbol, offset);
+@@ -893,31 +967,6 @@ static int __trace_kprobe_create(int argc, const char *argv[])
+ 		}
+ 	}
+ 
+-	if (symbol && !strchr(symbol, ':')) {
+-		unsigned int count;
+-
+-		count = number_of_same_symbols(symbol);
+-		if (count > 1) {
+-			/*
+-			 * Users should use ADDR to remove the ambiguity of
+-			 * using KSYM only.
+-			 */
+-			trace_probe_log_err(0, NON_UNIQ_SYMBOL);
+-			ret = -EADDRNOTAVAIL;
+-
+-			goto error;
+-		} else if (count == 0) {
+-			/*
+-			 * We can return ENOENT earlier than when register the
+-			 * kprobe.
+-			 */
+-			trace_probe_log_err(0, BAD_PROBE_ADDR);
+-			ret = -ENOENT;
+-
+-			goto error;
+-		}
+-	}
+-
+ 	trace_probe_log_set_index(0);
+ 	if (event) {
+ 		ret = traceprobe_parse_event_name(&event, &group, gbuf,
+@@ -1835,21 +1884,9 @@ create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
+ 	char *event;
+ 
+ 	if (func) {
+-		unsigned int count;
+-
+-		count = number_of_same_symbols(func);
+-		if (count > 1)
+-			/*
+-			 * Users should use addr to remove the ambiguity of
+-			 * using func only.
+-			 */
+-			return ERR_PTR(-EADDRNOTAVAIL);
+-		else if (count == 0)
+-			/*
+-			 * We can return ENOENT earlier than when register the
+-			 * kprobe.
+-			 */
+-			return ERR_PTR(-ENOENT);
++		ret = validate_probe_symbol(func);
++		if (ret)
++			return ERR_PTR(ret);
+ 	}
+ 
+ 	/*
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index a8e28f9b9271c..5b06f67879f5f 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -252,6 +252,11 @@ static inline struct timerlat_variables *this_cpu_tmr_var(void)
+ 	return this_cpu_ptr(&per_cpu_timerlat_var);
+ }
+ 
++/*
++ * Protect the interface.
++ */
++static struct mutex interface_lock;
++
+ /*
+  * tlat_var_reset - Reset the values of the given timerlat_variables
+  */
+@@ -259,14 +264,20 @@ static inline void tlat_var_reset(void)
+ {
+ 	struct timerlat_variables *tlat_var;
+ 	int cpu;
++
++	/* Synchronize with the timerlat interfaces */
++	mutex_lock(&interface_lock);
+ 	/*
+ 	 * So far, all the values are initialized as 0, so
+ 	 * zeroing the structure is perfect.
+ 	 */
+ 	for_each_cpu(cpu, cpu_online_mask) {
+ 		tlat_var = per_cpu_ptr(&per_cpu_timerlat_var, cpu);
++		if (tlat_var->kthread)
++			hrtimer_cancel(&tlat_var->timer);
+ 		memset(tlat_var, 0, sizeof(*tlat_var));
+ 	}
++	mutex_unlock(&interface_lock);
+ }
+ #else /* CONFIG_TIMERLAT_TRACER */
+ #define tlat_var_reset()	do {} while (0)
+@@ -331,11 +342,6 @@ struct timerlat_sample {
+ };
+ #endif
+ 
+-/*
+- * Protect the interface.
+- */
+-static struct mutex interface_lock;
+-
+ /*
+  * Tracer data.
+  */
+@@ -1612,6 +1618,7 @@ static int run_osnoise(void)
+ 
+ static struct cpumask osnoise_cpumask;
+ static struct cpumask save_cpumask;
++static struct cpumask kthread_cpumask;
+ 
+ /*
+  * osnoise_sleep - sleep until the next period
+@@ -1675,6 +1682,7 @@ static inline int osnoise_migration_pending(void)
+ 	 */
+ 	mutex_lock(&interface_lock);
+ 	this_cpu_osn_var()->kthread = NULL;
++	cpumask_clear_cpu(smp_processor_id(), &kthread_cpumask);
+ 	mutex_unlock(&interface_lock);
+ 
+ 	return 1;
+@@ -1945,11 +1953,16 @@ static void stop_kthread(unsigned int cpu)
+ {
+ 	struct task_struct *kthread;
+ 
++	mutex_lock(&interface_lock);
+ 	kthread = per_cpu(per_cpu_osnoise_var, cpu).kthread;
+ 	if (kthread) {
+-		if (test_bit(OSN_WORKLOAD, &osnoise_options)) {
++		per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL;
++		mutex_unlock(&interface_lock);
++
++		if (cpumask_test_and_clear_cpu(cpu, &kthread_cpumask) &&
++		    !WARN_ON(!test_bit(OSN_WORKLOAD, &osnoise_options))) {
+ 			kthread_stop(kthread);
+-		} else {
++		} else if (!WARN_ON(test_bit(OSN_WORKLOAD, &osnoise_options))) {
+ 			/*
+ 			 * This is a user thread waiting on the timerlat_fd. We need
+ 			 * to close all users, and the best way to guarantee this is
+@@ -1958,8 +1971,8 @@ static void stop_kthread(unsigned int cpu)
+ 			kill_pid(kthread->thread_pid, SIGKILL, 1);
+ 			put_task_struct(kthread);
+ 		}
+-		per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL;
+ 	} else {
++		mutex_unlock(&interface_lock);
+ 		/* if no workload, just return */
+ 		if (!test_bit(OSN_WORKLOAD, &osnoise_options)) {
+ 			/*
+@@ -1967,7 +1980,6 @@ static void stop_kthread(unsigned int cpu)
+ 			 */
+ 			per_cpu(per_cpu_osnoise_var, cpu).sampling = false;
+ 			barrier();
+-			return;
+ 		}
+ 	}
+ }
+@@ -1982,12 +1994,8 @@ static void stop_per_cpu_kthreads(void)
+ {
+ 	int cpu;
+ 
+-	cpus_read_lock();
+-
+-	for_each_online_cpu(cpu)
++	for_each_possible_cpu(cpu)
+ 		stop_kthread(cpu);
+-
+-	cpus_read_unlock();
+ }
+ 
+ /*
+@@ -2021,6 +2029,7 @@ static int start_kthread(unsigned int cpu)
+ 	}
+ 
+ 	per_cpu(per_cpu_osnoise_var, cpu).kthread = kthread;
++	cpumask_set_cpu(cpu, &kthread_cpumask);
+ 
+ 	return 0;
+ }
+@@ -2048,8 +2057,16 @@ static int start_per_cpu_kthreads(void)
+ 	 */
+ 	cpumask_and(current_mask, cpu_online_mask, &osnoise_cpumask);
+ 
+-	for_each_possible_cpu(cpu)
++	for_each_possible_cpu(cpu) {
++		if (cpumask_test_and_clear_cpu(cpu, &kthread_cpumask)) {
++			struct task_struct *kthread;
++
++			kthread = per_cpu(per_cpu_osnoise_var, cpu).kthread;
++			if (!WARN_ON(!kthread))
++				kthread_stop(kthread);
++		}
+ 		per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL;
++	}
+ 
+ 	for_each_cpu(cpu, current_mask) {
+ 		retval = start_kthread(cpu);
+@@ -2579,7 +2596,8 @@ static int timerlat_fd_release(struct inode *inode, struct file *file)
+ 	osn_var = per_cpu_ptr(&per_cpu_osnoise_var, cpu);
+ 	tlat_var = per_cpu_ptr(&per_cpu_timerlat_var, cpu);
+ 
+-	hrtimer_cancel(&tlat_var->timer);
++	if (tlat_var->kthread)
++		hrtimer_cancel(&tlat_var->timer);
+ 	memset(tlat_var, 0, sizeof(*tlat_var));
+ 
+ 	osn_var->sampling = 0;
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index c970eec25c5a0..ffbf99fb53bfb 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -7586,10 +7586,18 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
+ 
+ notrace void wq_watchdog_touch(int cpu)
+ {
++	unsigned long thresh = READ_ONCE(wq_watchdog_thresh) * HZ;
++	unsigned long touch_ts = READ_ONCE(wq_watchdog_touched);
++	unsigned long now = jiffies;
++
+ 	if (cpu >= 0)
+-		per_cpu(wq_watchdog_touched_cpu, cpu) = jiffies;
++		per_cpu(wq_watchdog_touched_cpu, cpu) = now;
++	else
++		WARN_ONCE(1, "%s should be called with valid CPU", __func__);
+ 
+-	wq_watchdog_touched = jiffies;
++	/* Don't unnecessarily store to global cacheline */
++	if (time_after(now, touch_ts + thresh / 4))
++		WRITE_ONCE(wq_watchdog_touched, jiffies);
+ }
+ 
+ static void wq_watchdog_set_thresh(unsigned long thresh)
+diff --git a/lib/codetag.c b/lib/codetag.c
+index 5ace625f2328f..afa8a2d4f3173 100644
+--- a/lib/codetag.c
++++ b/lib/codetag.c
+@@ -125,7 +125,6 @@ static inline size_t range_size(const struct codetag_type *cttype,
+ 			cttype->desc.tag_size;
+ }
+ 
+-#ifdef CONFIG_MODULES
+ static void *get_symbol(struct module *mod, const char *prefix, const char *name)
+ {
+ 	DECLARE_SEQ_BUF(sb, KSYM_NAME_LEN);
+@@ -155,6 +154,15 @@ static struct codetag_range get_section_range(struct module *mod,
+ 	};
+ }
+ 
++static const char *get_mod_name(__maybe_unused struct module *mod)
++{
++#ifdef CONFIG_MODULES
++	if (mod)
++		return mod->name;
++#endif
++	return "(built-in)";
++}
++
+ static int codetag_module_init(struct codetag_type *cttype, struct module *mod)
+ {
+ 	struct codetag_range range;
+@@ -164,8 +172,7 @@ static int codetag_module_init(struct codetag_type *cttype, struct module *mod)
+ 	range = get_section_range(mod, cttype->desc.section);
+ 	if (!range.start || !range.stop) {
+ 		pr_warn("Failed to load code tags of type %s from the module %s\n",
+-			cttype->desc.section,
+-			mod ? mod->name : "(built-in)");
++			cttype->desc.section, get_mod_name(mod));
+ 		return -EINVAL;
+ 	}
+ 
+@@ -199,6 +206,7 @@ static int codetag_module_init(struct codetag_type *cttype, struct module *mod)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_MODULES
+ void codetag_load_module(struct module *mod)
+ {
+ 	struct codetag_type *cttype;
+@@ -248,9 +256,6 @@ bool codetag_unload_module(struct module *mod)
+ 
+ 	return unload_ok;
+ }
+-
+-#else /* CONFIG_MODULES */
+-static int codetag_module_init(struct codetag_type *cttype, struct module *mod) { return 0; }
+ #endif /* CONFIG_MODULES */
+ 
+ struct codetag_type *
+diff --git a/lib/generic-radix-tree.c b/lib/generic-radix-tree.c
+index aaefb9b678c8e..fa692c86f0696 100644
+--- a/lib/generic-radix-tree.c
++++ b/lib/generic-radix-tree.c
+@@ -121,6 +121,8 @@ void *__genradix_ptr_alloc(struct __genradix *radix, size_t offset,
+ 		if ((v = cmpxchg_release(&radix->root, r, new_root)) == r) {
+ 			v = new_root;
+ 			new_node = NULL;
++		} else {
++			new_node->children[0] = NULL;
+ 		}
+ 	}
+ 
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index 2d7d27e6ae3c6..f5ae38fd66969 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -7569,14 +7569,14 @@ static void mt_validate_nulls(struct maple_tree *mt)
+  * 2. The gap is correctly set in the parents
+  */
+ void mt_validate(struct maple_tree *mt)
++	__must_hold(mas->tree->ma_lock)
+ {
+ 	unsigned char end;
+ 
+ 	MA_STATE(mas, mt, 0, 0);
+-	rcu_read_lock();
+ 	mas_start(&mas);
+ 	if (!mas_is_active(&mas))
+-		goto done;
++		return;
+ 
+ 	while (!mte_is_leaf(mas.node))
+ 		mas_descend(&mas);
+@@ -7597,9 +7597,6 @@ void mt_validate(struct maple_tree *mt)
+ 		mas_dfs_postorder(&mas, ULONG_MAX);
+ 	}
+ 	mt_validate_nulls(mt);
+-done:
+-	rcu_read_unlock();
+-
+ }
+ EXPORT_SYMBOL_GPL(mt_validate);
+ 
+diff --git a/lib/overflow_kunit.c b/lib/overflow_kunit.c
+index d305b0c054bb7..9249181fff37a 100644
+--- a/lib/overflow_kunit.c
++++ b/lib/overflow_kunit.c
+@@ -668,7 +668,6 @@ DEFINE_TEST_ALLOC(devm_kzalloc,  devm_kfree, 1, 1, 0);
+ 
+ static void overflow_allocation_test(struct kunit *test)
+ {
+-	const char device_name[] = "overflow-test";
+ 	struct device *dev;
+ 	int count = 0;
+ 
+@@ -678,7 +677,7 @@ static void overflow_allocation_test(struct kunit *test)
+ } while (0)
+ 
+ 	/* Create dummy device for devm_kmalloc()-family tests. */
+-	dev = kunit_device_register(test, device_name);
++	dev = kunit_device_register(test, "overflow-test");
+ 	KUNIT_ASSERT_FALSE_MSG(test, IS_ERR(dev),
+ 			       "Cannot register test device\n");
+ 
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 332f190bf3d6b..5c44d3d304dac 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -5804,8 +5804,7 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
+ 	WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
+ #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
+ 	memcg->zswap_max = PAGE_COUNTER_MAX;
+-	WRITE_ONCE(memcg->zswap_writeback,
+-		!parent || READ_ONCE(parent->zswap_writeback));
++	WRITE_ONCE(memcg->zswap_writeback, true);
+ #endif
+ 	page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
+ 	if (parent) {
+@@ -8444,7 +8443,14 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
+ bool mem_cgroup_zswap_writeback_enabled(struct mem_cgroup *memcg)
+ {
+ 	/* if zswap is disabled, do not block pages going to the swapping device */
+-	return !is_zswap_enabled() || !memcg || READ_ONCE(memcg->zswap_writeback);
++	if (!zswap_is_enabled())
++		return true;
++
++	for (; memcg; memcg = parent_mem_cgroup(memcg))
++		if (!READ_ONCE(memcg->zswap_writeback))
++			return false;
++
++	return true;
+ }
+ 
+ static u64 zswap_current_read(struct cgroup_subsys_state *css,
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 431b1f6753c0b..52553b13e2770 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1682,7 +1682,7 @@ struct range __weak arch_get_mappable_range(void)
+ 
+ struct range mhp_get_pluggable_range(bool need_mapping)
+ {
+-	const u64 max_phys = (1ULL << MAX_PHYSMEM_BITS) - 1;
++	const u64 max_phys = PHYSMEM_END;
+ 	struct range mhp_range;
+ 
+ 	if (need_mapping) {
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index b50060405d947..21016573f1870 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1053,6 +1053,13 @@ __always_inline bool free_pages_prepare(struct page *page,
+ 		reset_page_owner(page, order);
+ 		page_table_check_free(page, order);
+ 		pgalloc_tag_sub(page, 1 << order);
++
++		/*
++		 * The page is isolated and accounted for.
++		 * Mark the codetag as empty to avoid accounting error
++		 * when the page is freed by unpoison_memory().
++		 */
++		clear_page_tag_ref(page);
+ 		return false;
+ 	}
+ 
+diff --git a/mm/slub.c b/mm/slub.c
+index 849e8972e2ffc..be0ef60984ac4 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -2044,6 +2044,10 @@ alloc_tagging_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
+ 	if (!mem_alloc_profiling_enabled())
+ 		return;
+ 
++	/* slab->obj_exts might not be NULL if it was created for MEMCG accounting. */
++	if (s->flags & (SLAB_NO_OBJ_EXT | SLAB_NOLEAKTRACE))
++		return;
++
+ 	obj_exts = slab_obj_exts(slab);
+ 	if (!obj_exts)
+ 		return;
+diff --git a/mm/sparse.c b/mm/sparse.c
+index de40b2c734067..168278d61bdef 100644
+--- a/mm/sparse.c
++++ b/mm/sparse.c
+@@ -129,7 +129,7 @@ static inline int sparse_early_nid(struct mem_section *section)
+ static void __meminit mminit_validate_memmodel_limits(unsigned long *start_pfn,
+ 						unsigned long *end_pfn)
+ {
+-	unsigned long max_sparsemem_pfn = 1UL << (MAX_PHYSMEM_BITS-PAGE_SHIFT);
++	unsigned long max_sparsemem_pfn = (PHYSMEM_END + 1) >> PAGE_SHIFT;
+ 
+ 	/*
+ 	 * Sanity checks - do not allow an architecture to pass
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index defa5109cc62a..75f47706882fb 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -787,27 +787,30 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
+ 		}
+ 
+ 		dst_pmdval = pmdp_get_lockless(dst_pmd);
+-		/*
+-		 * If the dst_pmd is mapped as THP don't
+-		 * override it and just be strict.
+-		 */
+-		if (unlikely(pmd_trans_huge(dst_pmdval))) {
+-			err = -EEXIST;
+-			break;
+-		}
+ 		if (unlikely(pmd_none(dst_pmdval)) &&
+ 		    unlikely(__pte_alloc(dst_mm, dst_pmd))) {
+ 			err = -ENOMEM;
+ 			break;
+ 		}
+-		/* If an huge pmd materialized from under us fail */
+-		if (unlikely(pmd_trans_huge(*dst_pmd))) {
++		dst_pmdval = pmdp_get_lockless(dst_pmd);
++		/*
++		 * If the dst_pmd is THP don't override it and just be strict.
++		 * (This includes the case where the PMD used to be THP and
++		 * changed back to none after __pte_alloc().)
++		 */
++		if (unlikely(!pmd_present(dst_pmdval) || pmd_trans_huge(dst_pmdval) ||
++			     pmd_devmap(dst_pmdval))) {
++			err = -EEXIST;
++			break;
++		}
++		if (unlikely(pmd_bad(dst_pmdval))) {
+ 			err = -EFAULT;
+ 			break;
+ 		}
+-
+-		BUG_ON(pmd_none(*dst_pmd));
+-		BUG_ON(pmd_trans_huge(*dst_pmd));
++		/*
++		 * For shmem mappings, khugepaged is allowed to remove page
++		 * tables under us; pte_offset_map_lock() will deal with that.
++		 */
+ 
+ 		err = mfill_atomic_pte(dst_pmd, dst_vma, dst_addr,
+ 				       src_addr, flags, &folio);
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 881e497137e5d..bc6be460457d8 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -2190,6 +2190,7 @@ static void purge_vmap_node(struct work_struct *work)
+ {
+ 	struct vmap_node *vn = container_of(work,
+ 		struct vmap_node, purge_work);
++	unsigned long nr_purged_pages = 0;
+ 	struct vmap_area *va, *n_va;
+ 	LIST_HEAD(local_list);
+ 
+@@ -2207,7 +2208,7 @@ static void purge_vmap_node(struct work_struct *work)
+ 			kasan_release_vmalloc(orig_start, orig_end,
+ 					      va->va_start, va->va_end);
+ 
+-		atomic_long_sub(nr, &vmap_lazy_nr);
++		nr_purged_pages += nr;
+ 		vn->nr_purged++;
+ 
+ 		if (is_vn_id_valid(vn_id) && !vn->skip_populate)
+@@ -2218,6 +2219,8 @@ static void purge_vmap_node(struct work_struct *work)
+ 		list_add(&va->list, &local_list);
+ 	}
+ 
++	atomic_long_sub(nr_purged_pages, &vmap_lazy_nr);
++
+ 	reclaim_list_global(&local_list);
+ }
+ 
+@@ -2625,6 +2628,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
+ 	vb->dirty_max = 0;
+ 	bitmap_set(vb->used_map, 0, (1UL << order));
+ 	INIT_LIST_HEAD(&vb->free_list);
++	vb->cpu = raw_smp_processor_id();
+ 
+ 	xa = addr_to_vb_xa(va->va_start);
+ 	vb_idx = addr_to_vb_idx(va->va_start);
+@@ -2641,7 +2645,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
+ 	 * integrity together with list_for_each_rcu from read
+ 	 * side.
+ 	 */
+-	vb->cpu = raw_smp_processor_id();
+ 	vbq = per_cpu_ptr(&vmap_block_queue, vb->cpu);
+ 	spin_lock(&vbq->lock);
+ 	list_add_tail_rcu(&vb->free_list, &vbq->free);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 68ac33bea3a3c..99a0c152a85e2 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1587,25 +1587,6 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
+ 
+ }
+ 
+-#ifdef CONFIG_CMA
+-/*
+- * It is waste of effort to scan and reclaim CMA pages if it is not available
+- * for current allocation context. Kswapd can not be enrolled as it can not
+- * distinguish this scenario by using sc->gfp_mask = GFP_KERNEL
+- */
+-static bool skip_cma(struct folio *folio, struct scan_control *sc)
+-{
+-	return !current_is_kswapd() &&
+-			gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
+-			folio_migratetype(folio) == MIGRATE_CMA;
+-}
+-#else
+-static bool skip_cma(struct folio *folio, struct scan_control *sc)
+-{
+-	return false;
+-}
+-#endif
+-
+ /*
+  * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
+  *
+@@ -1652,8 +1633,7 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
+ 		nr_pages = folio_nr_pages(folio);
+ 		total_scan += nr_pages;
+ 
+-		if (folio_zonenum(folio) > sc->reclaim_idx ||
+-				skip_cma(folio, sc)) {
++		if (folio_zonenum(folio) > sc->reclaim_idx) {
+ 			nr_skipped[folio_zonenum(folio)] += nr_pages;
+ 			move_to = &folios_skipped;
+ 			goto move;
+@@ -4314,7 +4294,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
+ 	}
+ 
+ 	/* ineligible */
+-	if (zone > sc->reclaim_idx || skip_cma(folio, sc)) {
++	if (zone > sc->reclaim_idx) {
+ 		gen = folio_inc_gen(lruvec, folio, false);
+ 		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
+ 		return true;
+diff --git a/mm/zswap.c b/mm/zswap.c
+index a50e2986cd2fa..ac65758dd2af6 100644
+--- a/mm/zswap.c
++++ b/mm/zswap.c
+@@ -131,7 +131,7 @@ static bool zswap_shrinker_enabled = IS_ENABLED(
+ 		CONFIG_ZSWAP_SHRINKER_DEFAULT_ON);
+ module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
+ 
+-bool is_zswap_enabled(void)
++bool zswap_is_enabled(void)
+ {
+ 	return zswap_enabled;
+ }
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 080053a85b4d6..3c74d171085de 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -2953,5 +2953,9 @@ int hci_abort_conn(struct hci_conn *conn, u8 reason)
+ 		return 0;
+ 	}
+ 
+-	return hci_cmd_sync_queue_once(hdev, abort_conn_sync, conn, NULL);
++	/* Run immediately if on cmd_sync_work since this may be called
++	 * as a result to MGMT_OP_DISCONNECT/MGMT_OP_UNPAIR which does
++	 * already queue its callback on cmd_sync_work.
++	 */
++	return hci_cmd_sync_run_once(hdev, abort_conn_sync, conn, NULL);
+ }
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 4e90bd722e7b5..f4a54dbc07f19 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -114,7 +114,7 @@ static void hci_cmd_sync_add(struct hci_request *req, u16 opcode, u32 plen,
+ 	skb_queue_tail(&req->cmd_q, skb);
+ }
+ 
+-static int hci_cmd_sync_run(struct hci_request *req)
++static int hci_req_sync_run(struct hci_request *req)
+ {
+ 	struct hci_dev *hdev = req->hdev;
+ 	struct sk_buff *skb;
+@@ -164,7 +164,7 @@ struct sk_buff *__hci_cmd_sync_sk(struct hci_dev *hdev, u16 opcode, u32 plen,
+ 
+ 	hdev->req_status = HCI_REQ_PEND;
+ 
+-	err = hci_cmd_sync_run(&req);
++	err = hci_req_sync_run(&req);
+ 	if (err < 0)
+ 		return ERR_PTR(err);
+ 
+@@ -730,6 +730,44 @@ int hci_cmd_sync_queue_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
+ }
+ EXPORT_SYMBOL(hci_cmd_sync_queue_once);
+ 
++/* Run HCI command:
++ *
++ * - hdev must be running
++ * - if on cmd_sync_work then run immediately otherwise queue
++ */
++int hci_cmd_sync_run(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
++		     void *data, hci_cmd_sync_work_destroy_t destroy)
++{
++	/* Only queue command if hdev is running which means it had been opened
++	 * and is either on init phase or is already up.
++	 */
++	if (!test_bit(HCI_RUNNING, &hdev->flags))
++		return -ENETDOWN;
++
++	/* If on cmd_sync_work then run immediately otherwise queue */
++	if (current_work() == &hdev->cmd_sync_work)
++		return func(hdev, data);
++
++	return hci_cmd_sync_submit(hdev, func, data, destroy);
++}
++EXPORT_SYMBOL(hci_cmd_sync_run);
++
++/* Run HCI command entry once:
++ *
++ * - Lookup if an entry already exist and only if it doesn't creates a new entry
++ *   and run it.
++ * - if on cmd_sync_work then run immediately otherwise queue
++ */
++int hci_cmd_sync_run_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
++			  void *data, hci_cmd_sync_work_destroy_t destroy)
++{
++	if (hci_cmd_sync_lookup_entry(hdev, func, data, destroy))
++		return 0;
++
++	return hci_cmd_sync_run(hdev, func, data, destroy);
++}
++EXPORT_SYMBOL(hci_cmd_sync_run_once);
++
+ /* Lookup HCI command entry:
+  *
+  * - Return first entry that matches by function callback or data or
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index ad4793ea052df..ba28907afb3fa 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -2831,16 +2831,6 @@ static int load_link_keys(struct sock *sk, struct hci_dev *hdev, void *data,
+ 	bt_dev_dbg(hdev, "debug_keys %u key_count %u", cp->debug_keys,
+ 		   key_count);
+ 
+-	for (i = 0; i < key_count; i++) {
+-		struct mgmt_link_key_info *key = &cp->keys[i];
+-
+-		/* Considering SMP over BREDR/LE, there is no need to check addr_type */
+-		if (key->type > 0x08)
+-			return mgmt_cmd_status(sk, hdev->id,
+-					       MGMT_OP_LOAD_LINK_KEYS,
+-					       MGMT_STATUS_INVALID_PARAMS);
+-	}
+-
+ 	hci_dev_lock(hdev);
+ 
+ 	hci_link_keys_clear(hdev);
+@@ -2865,6 +2855,19 @@ static int load_link_keys(struct sock *sk, struct hci_dev *hdev, void *data,
+ 			continue;
+ 		}
+ 
++		if (key->addr.type != BDADDR_BREDR) {
++			bt_dev_warn(hdev,
++				    "Invalid link address type %u for %pMR",
++				    key->addr.type, &key->addr.bdaddr);
++			continue;
++		}
++
++		if (key->type > 0x08) {
++			bt_dev_warn(hdev, "Invalid link key type %u for %pMR",
++				    key->type, &key->addr.bdaddr);
++			continue;
++		}
++
+ 		/* Always ignore debug keys and require a new pairing if
+ 		 * the user wants to use them.
+ 		 */
+@@ -2922,7 +2925,12 @@ static int unpair_device_sync(struct hci_dev *hdev, void *data)
+ 	if (!conn)
+ 		return 0;
+ 
+-	return hci_abort_conn_sync(hdev, conn, HCI_ERROR_REMOTE_USER_TERM);
++	/* Disregard any possible error since the likes of hci_abort_conn_sync
++	 * will clean up the connection no matter the error.
++	 */
++	hci_abort_conn(conn, HCI_ERROR_REMOTE_USER_TERM);
++
++	return 0;
+ }
+ 
+ static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+@@ -3054,13 +3062,44 @@ static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 	return err;
+ }
+ 
++static void disconnect_complete(struct hci_dev *hdev, void *data, int err)
++{
++	struct mgmt_pending_cmd *cmd = data;
++
++	cmd->cmd_complete(cmd, mgmt_status(err));
++	mgmt_pending_free(cmd);
++}
++
++static int disconnect_sync(struct hci_dev *hdev, void *data)
++{
++	struct mgmt_pending_cmd *cmd = data;
++	struct mgmt_cp_disconnect *cp = cmd->param;
++	struct hci_conn *conn;
++
++	if (cp->addr.type == BDADDR_BREDR)
++		conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK,
++					       &cp->addr.bdaddr);
++	else
++		conn = hci_conn_hash_lookup_le(hdev, &cp->addr.bdaddr,
++					       le_addr_type(cp->addr.type));
++
++	if (!conn)
++		return -ENOTCONN;
++
++	/* Disregard any possible error since the likes of hci_abort_conn_sync
++	 * will clean up the connection no matter the error.
++	 */
++	hci_abort_conn(conn, HCI_ERROR_REMOTE_USER_TERM);
++
++	return 0;
++}
++
+ static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data,
+ 		      u16 len)
+ {
+ 	struct mgmt_cp_disconnect *cp = data;
+ 	struct mgmt_rp_disconnect rp;
+ 	struct mgmt_pending_cmd *cmd;
+-	struct hci_conn *conn;
+ 	int err;
+ 
+ 	bt_dev_dbg(hdev, "sock %p", sk);
+@@ -3083,27 +3122,7 @@ static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data,
+ 		goto failed;
+ 	}
+ 
+-	if (pending_find(MGMT_OP_DISCONNECT, hdev)) {
+-		err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_DISCONNECT,
+-					MGMT_STATUS_BUSY, &rp, sizeof(rp));
+-		goto failed;
+-	}
+-
+-	if (cp->addr.type == BDADDR_BREDR)
+-		conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK,
+-					       &cp->addr.bdaddr);
+-	else
+-		conn = hci_conn_hash_lookup_le(hdev, &cp->addr.bdaddr,
+-					       le_addr_type(cp->addr.type));
+-
+-	if (!conn || conn->state == BT_OPEN || conn->state == BT_CLOSED) {
+-		err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_DISCONNECT,
+-					MGMT_STATUS_NOT_CONNECTED, &rp,
+-					sizeof(rp));
+-		goto failed;
+-	}
+-
+-	cmd = mgmt_pending_add(sk, MGMT_OP_DISCONNECT, hdev, data, len);
++	cmd = mgmt_pending_new(sk, MGMT_OP_DISCONNECT, hdev, data, len);
+ 	if (!cmd) {
+ 		err = -ENOMEM;
+ 		goto failed;
+@@ -3111,9 +3130,10 @@ static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data,
+ 
+ 	cmd->cmd_complete = generic_cmd_complete;
+ 
+-	err = hci_disconnect(conn, HCI_ERROR_REMOTE_USER_TERM);
++	err = hci_cmd_sync_queue(hdev, disconnect_sync, cmd,
++				 disconnect_complete);
+ 	if (err < 0)
+-		mgmt_pending_remove(cmd);
++		mgmt_pending_free(cmd);
+ 
+ failed:
+ 	hci_dev_unlock(hdev);
+@@ -7073,7 +7093,6 @@ static int load_irks(struct sock *sk, struct hci_dev *hdev, void *cp_data,
+ 
+ 	for (i = 0; i < irk_count; i++) {
+ 		struct mgmt_irk_info *irk = &cp->irks[i];
+-		u8 addr_type = le_addr_type(irk->addr.type);
+ 
+ 		if (hci_is_blocked_key(hdev,
+ 				       HCI_BLOCKED_KEY_TYPE_IRK,
+@@ -7083,12 +7102,8 @@ static int load_irks(struct sock *sk, struct hci_dev *hdev, void *cp_data,
+ 			continue;
+ 		}
+ 
+-		/* When using SMP over BR/EDR, the addr type should be set to BREDR */
+-		if (irk->addr.type == BDADDR_BREDR)
+-			addr_type = BDADDR_BREDR;
+-
+ 		hci_add_irk(hdev, &irk->addr.bdaddr,
+-			    addr_type, irk->val,
++			    le_addr_type(irk->addr.type), irk->val,
+ 			    BDADDR_ANY);
+ 	}
+ 
+@@ -7153,15 +7168,6 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
+ 
+ 	bt_dev_dbg(hdev, "key_count %u", key_count);
+ 
+-	for (i = 0; i < key_count; i++) {
+-		struct mgmt_ltk_info *key = &cp->keys[i];
+-
+-		if (!ltk_is_valid(key))
+-			return mgmt_cmd_status(sk, hdev->id,
+-					       MGMT_OP_LOAD_LONG_TERM_KEYS,
+-					       MGMT_STATUS_INVALID_PARAMS);
+-	}
+-
+ 	hci_dev_lock(hdev);
+ 
+ 	hci_smp_ltks_clear(hdev);
+@@ -7169,7 +7175,6 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
+ 	for (i = 0; i < key_count; i++) {
+ 		struct mgmt_ltk_info *key = &cp->keys[i];
+ 		u8 type, authenticated;
+-		u8 addr_type = le_addr_type(key->addr.type);
+ 
+ 		if (hci_is_blocked_key(hdev,
+ 				       HCI_BLOCKED_KEY_TYPE_LTK,
+@@ -7179,6 +7184,12 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
+ 			continue;
+ 		}
+ 
++		if (!ltk_is_valid(key)) {
++			bt_dev_warn(hdev, "Invalid LTK for %pMR",
++				    &key->addr.bdaddr);
++			continue;
++		}
++
+ 		switch (key->type) {
+ 		case MGMT_LTK_UNAUTHENTICATED:
+ 			authenticated = 0x00;
+@@ -7204,12 +7215,8 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
+ 			continue;
+ 		}
+ 
+-		/* When using SMP over BR/EDR, the addr type should be set to BREDR */
+-		if (key->addr.type == BDADDR_BREDR)
+-			addr_type = BDADDR_BREDR;
+-
+ 		hci_add_ltk(hdev, &key->addr.bdaddr,
+-			    addr_type, type, authenticated,
++			    le_addr_type(key->addr.type), type, authenticated,
+ 			    key->val, key->enc_size, key->ediv, key->rand);
+ 	}
+ 
+@@ -9457,7 +9464,7 @@ void mgmt_new_link_key(struct hci_dev *hdev, struct link_key *key,
+ 
+ 	ev.store_hint = persistent;
+ 	bacpy(&ev.key.addr.bdaddr, &key->bdaddr);
+-	ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type);
++	ev.key.addr.type = BDADDR_BREDR;
+ 	ev.key.type = key->type;
+ 	memcpy(ev.key.val, key->val, HCI_LINK_KEY_SIZE);
+ 	ev.key.pin_len = key->pin_len;
+@@ -9508,7 +9515,7 @@ void mgmt_new_ltk(struct hci_dev *hdev, struct smp_ltk *key, bool persistent)
+ 		ev.store_hint = persistent;
+ 
+ 	bacpy(&ev.key.addr.bdaddr, &key->bdaddr);
+-	ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type);
++	ev.key.addr.type = link_to_bdaddr(LE_LINK, key->bdaddr_type);
+ 	ev.key.type = mgmt_ltk_type(key);
+ 	ev.key.enc_size = key->enc_size;
+ 	ev.key.ediv = key->ediv;
+@@ -9537,7 +9544,7 @@ void mgmt_new_irk(struct hci_dev *hdev, struct smp_irk *irk, bool persistent)
+ 
+ 	bacpy(&ev.rpa, &irk->rpa);
+ 	bacpy(&ev.irk.addr.bdaddr, &irk->bdaddr);
+-	ev.irk.addr.type = link_to_bdaddr(irk->link_type, irk->addr_type);
++	ev.irk.addr.type = link_to_bdaddr(LE_LINK, irk->addr_type);
+ 	memcpy(ev.irk.val, irk->val, sizeof(irk->val));
+ 
+ 	mgmt_event(MGMT_EV_NEW_IRK, hdev, &ev, sizeof(ev), NULL);
+@@ -9566,7 +9573,7 @@ void mgmt_new_csrk(struct hci_dev *hdev, struct smp_csrk *csrk,
+ 		ev.store_hint = persistent;
+ 
+ 	bacpy(&ev.key.addr.bdaddr, &csrk->bdaddr);
+-	ev.key.addr.type = link_to_bdaddr(csrk->link_type, csrk->bdaddr_type);
++	ev.key.addr.type = link_to_bdaddr(LE_LINK, csrk->bdaddr_type);
+ 	ev.key.type = csrk->type;
+ 	memcpy(ev.key.val, csrk->val, sizeof(csrk->val));
+ 
+@@ -9644,18 +9651,6 @@ void mgmt_device_connected(struct hci_dev *hdev, struct hci_conn *conn,
+ 	mgmt_event_skb(skb, NULL);
+ }
+ 
+-static void disconnect_rsp(struct mgmt_pending_cmd *cmd, void *data)
+-{
+-	struct sock **sk = data;
+-
+-	cmd->cmd_complete(cmd, 0);
+-
+-	*sk = cmd->sk;
+-	sock_hold(*sk);
+-
+-	mgmt_pending_remove(cmd);
+-}
+-
+ static void unpair_device_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ {
+ 	struct hci_dev *hdev = data;
+@@ -9699,8 +9694,6 @@ void mgmt_device_disconnected(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 	if (link_type != ACL_LINK && link_type != LE_LINK)
+ 		return;
+ 
+-	mgmt_pending_foreach(MGMT_OP_DISCONNECT, hdev, disconnect_rsp, &sk);
+-
+ 	bacpy(&ev.addr.bdaddr, bdaddr);
+ 	ev.addr.type = link_to_bdaddr(link_type, addr_type);
+ 	ev.reason = reason;
+@@ -9713,9 +9706,6 @@ void mgmt_device_disconnected(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 
+ 	if (sk)
+ 		sock_put(sk);
+-
+-	mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, unpair_device_rsp,
+-			     hdev);
+ }
+ 
+ void mgmt_disconnect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr,
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 4f9fdf400584e..8b9724fd752a1 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -1060,7 +1060,6 @@ static void smp_notify_keys(struct l2cap_conn *conn)
+ 	}
+ 
+ 	if (smp->remote_irk) {
+-		smp->remote_irk->link_type = hcon->type;
+ 		mgmt_new_irk(hdev, smp->remote_irk, persistent);
+ 
+ 		/* Now that user space can be considered to know the
+@@ -1080,28 +1079,24 @@ static void smp_notify_keys(struct l2cap_conn *conn)
+ 	}
+ 
+ 	if (smp->csrk) {
+-		smp->csrk->link_type = hcon->type;
+ 		smp->csrk->bdaddr_type = hcon->dst_type;
+ 		bacpy(&smp->csrk->bdaddr, &hcon->dst);
+ 		mgmt_new_csrk(hdev, smp->csrk, persistent);
+ 	}
+ 
+ 	if (smp->responder_csrk) {
+-		smp->responder_csrk->link_type = hcon->type;
+ 		smp->responder_csrk->bdaddr_type = hcon->dst_type;
+ 		bacpy(&smp->responder_csrk->bdaddr, &hcon->dst);
+ 		mgmt_new_csrk(hdev, smp->responder_csrk, persistent);
+ 	}
+ 
+ 	if (smp->ltk) {
+-		smp->ltk->link_type = hcon->type;
+ 		smp->ltk->bdaddr_type = hcon->dst_type;
+ 		bacpy(&smp->ltk->bdaddr, &hcon->dst);
+ 		mgmt_new_ltk(hdev, smp->ltk, persistent);
+ 	}
+ 
+ 	if (smp->responder_ltk) {
+-		smp->responder_ltk->link_type = hcon->type;
+ 		smp->responder_ltk->bdaddr_type = hcon->dst_type;
+ 		bacpy(&smp->responder_ltk->bdaddr, &hcon->dst);
+ 		mgmt_new_ltk(hdev, smp->responder_ltk, persistent);
+@@ -1121,8 +1116,6 @@ static void smp_notify_keys(struct l2cap_conn *conn)
+ 		key = hci_add_link_key(hdev, smp->conn->hcon, &hcon->dst,
+ 				       smp->link_key, type, 0, &persistent);
+ 		if (key) {
+-			key->link_type = hcon->type;
+-			key->bdaddr_type = hcon->dst_type;
+ 			mgmt_new_link_key(hdev, key, persistent);
+ 
+ 			/* Don't keep debug keys around if the relevant
+diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
+index c77591e638417..ad7a42b505ef9 100644
+--- a/net/bridge/br_fdb.c
++++ b/net/bridge/br_fdb.c
+@@ -1469,12 +1469,10 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+ 			modified = true;
+ 		}
+ 
+-		if (test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) {
++		if (test_and_set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) {
+ 			/* Refresh entry */
+ 			fdb->used = jiffies;
+-		} else if (!test_bit(BR_FDB_ADDED_BY_USER, &fdb->flags)) {
+-			/* Take over SW learned entry */
+-			set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags);
++		} else {
+ 			modified = true;
+ 		}
+ 
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index 27d5fcf0eac9d..46d3ec3aa44b4 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -1470,6 +1470,10 @@ static void bcm_notify(struct bcm_sock *bo, unsigned long msg,
+ 
+ 		/* remove device reference, if this is our bound device */
+ 		if (bo->bound && bo->ifindex == dev->ifindex) {
++#if IS_ENABLED(CONFIG_PROC_FS)
++			if (sock_net(sk)->can.bcmproc_dir && bo->bcm_proc_read)
++				remove_proc_entry(bo->procname, sock_net(sk)->can.bcmproc_dir);
++#endif
+ 			bo->bound   = 0;
+ 			bo->ifindex = 0;
+ 			notify_enodev = 1;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index ab0455c64e49a..55b1d9de2334d 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -11047,7 +11047,6 @@ const struct bpf_verifier_ops lwt_seg6local_verifier_ops = {
+ };
+ 
+ const struct bpf_prog_ops lwt_seg6local_prog_ops = {
+-	.test_run		= bpf_prog_test_run_skb,
+ };
+ 
+ const struct bpf_verifier_ops cg_sock_verifier_ops = {
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index dc91921da4ea0..15ad775ddd3c1 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -1524,7 +1524,7 @@ static const struct attribute_group dql_group = {
+ };
+ #else
+ /* Fake declaration, all the code using it should be dead */
+-extern const struct attribute_group dql_group;
++static const struct attribute_group dql_group = {};
+ #endif /* CONFIG_BQL */
+ 
+ #ifdef CONFIG_XPS
+diff --git a/net/ethtool/channels.c b/net/ethtool/channels.c
+index 7b4bbd674bae7..cee188da54f85 100644
+--- a/net/ethtool/channels.c
++++ b/net/ethtool/channels.c
+@@ -171,11 +171,9 @@ ethnl_set_channels(struct ethnl_req_info *req_info, struct genl_info *info)
+ 	 */
+ 	if (ethtool_get_max_rxnfc_channel(dev, &max_rxnfc_in_use))
+ 		max_rxnfc_in_use = 0;
+-	if (!netif_is_rxfh_configured(dev) ||
+-	    ethtool_get_max_rxfh_channel(dev, &max_rxfh_in_use))
+-		max_rxfh_in_use = 0;
++	max_rxfh_in_use = ethtool_get_max_rxfh_channel(dev);
+ 	if (channels.combined_count + channels.rx_count <= max_rxfh_in_use) {
+-		GENL_SET_ERR_MSG(info, "requested channel counts are too low for existing indirection table settings");
++		GENL_SET_ERR_MSG_FMT(info, "requested channel counts are too low for existing indirection table (%d)", max_rxfh_in_use);
+ 		return -EINVAL;
+ 	}
+ 	if (channels.combined_count + channels.rx_count <= max_rxnfc_in_use) {
+diff --git a/net/ethtool/common.c b/net/ethtool/common.c
+index 6b2a360dcdf06..8a62375ebd1f0 100644
+--- a/net/ethtool/common.c
++++ b/net/ethtool/common.c
+@@ -587,35 +587,39 @@ int ethtool_get_max_rxnfc_channel(struct net_device *dev, u64 *max)
+ 	return err;
+ }
+ 
+-int ethtool_get_max_rxfh_channel(struct net_device *dev, u32 *max)
++u32 ethtool_get_max_rxfh_channel(struct net_device *dev)
+ {
+ 	struct ethtool_rxfh_param rxfh = {};
+-	u32 dev_size, current_max = 0;
++	u32 dev_size, current_max;
+ 	int ret;
+ 
++	if (!netif_is_rxfh_configured(dev))
++		return 0;
++
+ 	if (!dev->ethtool_ops->get_rxfh_indir_size ||
+ 	    !dev->ethtool_ops->get_rxfh)
+-		return -EOPNOTSUPP;
++		return 0;
+ 	dev_size = dev->ethtool_ops->get_rxfh_indir_size(dev);
+ 	if (dev_size == 0)
+-		return -EOPNOTSUPP;
++		return 0;
+ 
+ 	rxfh.indir = kcalloc(dev_size, sizeof(rxfh.indir[0]), GFP_USER);
+ 	if (!rxfh.indir)
+-		return -ENOMEM;
++		return U32_MAX;
+ 
+ 	ret = dev->ethtool_ops->get_rxfh(dev, &rxfh);
+-	if (ret)
+-		goto out;
++	if (ret) {
++		current_max = U32_MAX;
++		goto out_free;
++	}
+ 
++	current_max = 0;
+ 	while (dev_size--)
+ 		current_max = max(current_max, rxfh.indir[dev_size]);
+ 
+-	*max = current_max;
+-
+-out:
++out_free:
+ 	kfree(rxfh.indir);
+-	return ret;
++	return current_max;
+ }
+ 
+ int ethtool_check_ops(const struct ethtool_ops *ops)
+diff --git a/net/ethtool/common.h b/net/ethtool/common.h
+index 28b8aaaf9bcb3..b55705a9ad5aa 100644
+--- a/net/ethtool/common.h
++++ b/net/ethtool/common.h
+@@ -42,7 +42,7 @@ int __ethtool_get_link(struct net_device *dev);
+ bool convert_legacy_settings_to_link_ksettings(
+ 	struct ethtool_link_ksettings *link_ksettings,
+ 	const struct ethtool_cmd *legacy_settings);
+-int ethtool_get_max_rxfh_channel(struct net_device *dev, u32 *max);
++u32 ethtool_get_max_rxfh_channel(struct net_device *dev);
+ int ethtool_get_max_rxnfc_channel(struct net_device *dev, u64 *max);
+ int __ethtool_get_ts_info(struct net_device *dev, struct ethtool_ts_info *info);
+ 
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index f99fd564d0ee5..2f5b69d5d4b00 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -1928,9 +1928,7 @@ static noinline_for_stack int ethtool_set_channels(struct net_device *dev,
+ 	 * indirection table/rxnfc settings */
+ 	if (ethtool_get_max_rxnfc_channel(dev, &max_rxnfc_in_use))
+ 		max_rxnfc_in_use = 0;
+-	if (!netif_is_rxfh_configured(dev) ||
+-	    ethtool_get_max_rxfh_channel(dev, &max_rxfh_in_use))
+-		max_rxfh_in_use = 0;
++	max_rxfh_in_use = ethtool_get_max_rxfh_channel(dev);
+ 	if (channels.combined_count + channels.rx_count <=
+ 	    max_t(u64, max_rxnfc_in_use, max_rxfh_in_use))
+ 		return -EINVAL;
+diff --git a/net/ipv4/fou_core.c b/net/ipv4/fou_core.c
+index 0abbc413e0fe5..78b869b314921 100644
+--- a/net/ipv4/fou_core.c
++++ b/net/ipv4/fou_core.c
+@@ -50,7 +50,7 @@ struct fou_net {
+ 
+ static inline struct fou *fou_from_sock(struct sock *sk)
+ {
+-	return sk->sk_user_data;
++	return rcu_dereference_sk_user_data(sk);
+ }
+ 
+ static int fou_recv_pull(struct sk_buff *skb, struct fou *fou, size_t len)
+@@ -233,9 +233,15 @@ static struct sk_buff *fou_gro_receive(struct sock *sk,
+ 				       struct sk_buff *skb)
+ {
+ 	const struct net_offload __rcu **offloads;
+-	u8 proto = fou_from_sock(sk)->protocol;
++	struct fou *fou = fou_from_sock(sk);
+ 	const struct net_offload *ops;
+ 	struct sk_buff *pp = NULL;
++	u8 proto;
++
++	if (!fou)
++		goto out;
++
++	proto = fou->protocol;
+ 
+ 	/* We can clear the encap_mark for FOU as we are essentially doing
+ 	 * one of two possible things.  We are either adding an L4 tunnel
+@@ -263,14 +269,24 @@ static int fou_gro_complete(struct sock *sk, struct sk_buff *skb,
+ 			    int nhoff)
+ {
+ 	const struct net_offload __rcu **offloads;
+-	u8 proto = fou_from_sock(sk)->protocol;
++	struct fou *fou = fou_from_sock(sk);
+ 	const struct net_offload *ops;
+-	int err = -ENOSYS;
++	u8 proto;
++	int err;
++
++	if (!fou) {
++		err = -ENOENT;
++		goto out;
++	}
++
++	proto = fou->protocol;
+ 
+ 	offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
+ 	ops = rcu_dereference(offloads[proto]);
+-	if (WARN_ON(!ops || !ops->callbacks.gro_complete))
++	if (WARN_ON(!ops || !ops->callbacks.gro_complete)) {
++		err = -ENOSYS;
+ 		goto out;
++	}
+ 
+ 	err = ops->callbacks.gro_complete(skb, nhoff);
+ 
+@@ -320,6 +336,9 @@ static struct sk_buff *gue_gro_receive(struct sock *sk,
+ 	struct gro_remcsum grc;
+ 	u8 proto;
+ 
++	if (!fou)
++		goto out;
++
+ 	skb_gro_remcsum_init(&grc);
+ 
+ 	off = skb_gro_offset(skb);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 53b0d62fd2c2d..fe6178715ba05 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -577,7 +577,7 @@ static int tcp_bpf_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ 		err = sk_stream_error(sk, msg->msg_flags, err);
+ 	release_sock(sk);
+ 	sk_psock_put(sk, psock);
+-	return copied ? copied : err;
++	return copied > 0 ? copied : err;
+ }
+ 
+ enum {
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 2c52f6dcbd290..f130eccade393 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -6004,6 +6004,11 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb,
+ 	 * RFC 5961 4.2 : Send a challenge ack
+ 	 */
+ 	if (th->syn) {
++		if (sk->sk_state == TCP_SYN_RECV && sk->sk_socket && th->ack &&
++		    TCP_SKB_CB(skb)->seq + 1 == TCP_SKB_CB(skb)->end_seq &&
++		    TCP_SKB_CB(skb)->seq + 1 == tp->rcv_nxt &&
++		    TCP_SKB_CB(skb)->ack_seq == tp->snd_nxt)
++			goto pass;
+ syn_challenge:
+ 		if (syn_inerr)
+ 			TCP_INC_STATS(sock_net(sk), TCP_MIB_INERRS);
+@@ -6013,6 +6018,7 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb,
+ 		goto discard;
+ 	}
+ 
++pass:
+ 	bpf_skops_parse_hdr(sk, skb);
+ 
+ 	return true;
+diff --git a/net/ipv6/ila/ila.h b/net/ipv6/ila/ila.h
+index ad5f6f6ba3330..85b92917849bf 100644
+--- a/net/ipv6/ila/ila.h
++++ b/net/ipv6/ila/ila.h
+@@ -108,6 +108,7 @@ int ila_lwt_init(void);
+ void ila_lwt_fini(void);
+ 
+ int ila_xlat_init_net(struct net *net);
++void ila_xlat_pre_exit_net(struct net *net);
+ void ila_xlat_exit_net(struct net *net);
+ 
+ int ila_xlat_nl_cmd_add_mapping(struct sk_buff *skb, struct genl_info *info);
+diff --git a/net/ipv6/ila/ila_main.c b/net/ipv6/ila/ila_main.c
+index 69caed07315f0..976c78efbae17 100644
+--- a/net/ipv6/ila/ila_main.c
++++ b/net/ipv6/ila/ila_main.c
+@@ -71,6 +71,11 @@ static __net_init int ila_init_net(struct net *net)
+ 	return err;
+ }
+ 
++static __net_exit void ila_pre_exit_net(struct net *net)
++{
++	ila_xlat_pre_exit_net(net);
++}
++
+ static __net_exit void ila_exit_net(struct net *net)
+ {
+ 	ila_xlat_exit_net(net);
+@@ -78,6 +83,7 @@ static __net_exit void ila_exit_net(struct net *net)
+ 
+ static struct pernet_operations ila_net_ops = {
+ 	.init = ila_init_net,
++	.pre_exit = ila_pre_exit_net,
+ 	.exit = ila_exit_net,
+ 	.id   = &ila_net_id,
+ 	.size = sizeof(struct ila_net),
+diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
+index 67e8c9440977a..534a4498e280d 100644
+--- a/net/ipv6/ila/ila_xlat.c
++++ b/net/ipv6/ila/ila_xlat.c
+@@ -619,6 +619,15 @@ int ila_xlat_init_net(struct net *net)
+ 	return 0;
+ }
+ 
++void ila_xlat_pre_exit_net(struct net *net)
++{
++	struct ila_net *ilan = net_generic(net, ila_net_id);
++
++	if (ilan->xlat.hooks_registered)
++		nf_unregister_net_hooks(net, ila_nf_hook_ops,
++					ARRAY_SIZE(ila_nf_hook_ops));
++}
++
+ void ila_xlat_exit_net(struct net *net)
+ {
+ 	struct ila_net *ilan = net_generic(net, ila_net_id);
+@@ -626,10 +635,6 @@ void ila_xlat_exit_net(struct net *net)
+ 	rhashtable_free_and_destroy(&ilan->xlat.rhash_table, ila_free_cb, NULL);
+ 
+ 	free_bucket_spinlocks(ilan->xlat.locks);
+-
+-	if (ilan->xlat.hooks_registered)
+-		nf_unregister_net_hooks(net, ila_nf_hook_ops,
+-					ARRAY_SIZE(ila_nf_hook_ops));
+ }
+ 
+ static int ila_xlat_addr(struct sk_buff *skb, bool sir2ila)
+diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
+index 8715617b02fe6..34ba14e59e95a 100644
+--- a/net/netfilter/nf_conncount.c
++++ b/net/netfilter/nf_conncount.c
+@@ -321,7 +321,6 @@ insert_tree(struct net *net,
+ 	struct nf_conncount_rb *rbconn;
+ 	struct nf_conncount_tuple *conn;
+ 	unsigned int count = 0, gc_count = 0;
+-	u8 keylen = data->keylen;
+ 	bool do_gc = true;
+ 
+ 	spin_lock_bh(&nf_conncount_locks[hash]);
+@@ -333,7 +332,7 @@ insert_tree(struct net *net,
+ 		rbconn = rb_entry(*rbnode, struct nf_conncount_rb, node);
+ 
+ 		parent = *rbnode;
+-		diff = key_diff(key, rbconn->key, keylen);
++		diff = key_diff(key, rbconn->key, data->keylen);
+ 		if (diff < 0) {
+ 			rbnode = &((*rbnode)->rb_left);
+ 		} else if (diff > 0) {
+@@ -378,7 +377,7 @@ insert_tree(struct net *net,
+ 
+ 	conn->tuple = *tuple;
+ 	conn->zone = *zone;
+-	memcpy(rbconn->key, key, sizeof(u32) * keylen);
++	memcpy(rbconn->key, key, sizeof(u32) * data->keylen);
+ 
+ 	nf_conncount_list_init(&rbconn->list);
+ 	list_add(&conn->node, &rbconn->list.head);
+@@ -403,7 +402,6 @@ count_tree(struct net *net,
+ 	struct rb_node *parent;
+ 	struct nf_conncount_rb *rbconn;
+ 	unsigned int hash;
+-	u8 keylen = data->keylen;
+ 
+ 	hash = jhash2(key, data->keylen, conncount_rnd) % CONNCOUNT_SLOTS;
+ 	root = &data->root[hash];
+@@ -414,7 +412,7 @@ count_tree(struct net *net,
+ 
+ 		rbconn = rb_entry(parent, struct nf_conncount_rb, node);
+ 
+-		diff = key_diff(key, rbconn->key, keylen);
++		diff = key_diff(key, rbconn->key, data->keylen);
+ 		if (diff < 0) {
+ 			parent = rcu_dereference_raw(parent->rb_left);
+ 		} else if (diff > 0) {
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index 9602dafe32e61..d2f49db705232 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -786,12 +786,15 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ 		 * queue, accept the collision, update the host tags.
+ 		 */
+ 		q->way_collisions++;
+-		if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
+-			q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
+-			q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
+-		}
+ 		allocate_src = cake_dsrc(flow_mode);
+ 		allocate_dst = cake_ddst(flow_mode);
++
++		if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
++			if (allocate_src)
++				q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
++			if (allocate_dst)
++				q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
++		}
+ found:
+ 		/* reserve queue for future packets in same flow */
+ 		reduced_hash = outer_hash + k;
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 0f8d581438c39..39382ee1e3310 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -742,11 +742,10 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+ 
+ 				err = qdisc_enqueue(skb, q->qdisc, &to_free);
+ 				kfree_skb_list(to_free);
+-				if (err != NET_XMIT_SUCCESS &&
+-				    net_xmit_drop_count(err)) {
+-					qdisc_qstats_drop(sch);
+-					qdisc_tree_reduce_backlog(sch, 1,
+-								  pkt_len);
++				if (err != NET_XMIT_SUCCESS) {
++					if (net_xmit_drop_count(err))
++						qdisc_qstats_drop(sch);
++					qdisc_tree_reduce_backlog(sch, 1, pkt_len);
+ 				}
+ 				goto tfifo_dequeue;
+ 			}
+diff --git a/net/socket.c b/net/socket.c
+index e416920e9399e..b5a003974058f 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2350,7 +2350,7 @@ INDIRECT_CALLABLE_DECLARE(bool tcp_bpf_bypass_getsockopt(int level,
+ int do_sock_getsockopt(struct socket *sock, bool compat, int level,
+ 		       int optname, sockptr_t optval, sockptr_t optlen)
+ {
+-	int max_optlen __maybe_unused;
++	int max_optlen __maybe_unused = 0;
+ 	const struct proto_ops *ops;
+ 	int err;
+ 
+@@ -2359,7 +2359,7 @@ int do_sock_getsockopt(struct socket *sock, bool compat, int level,
+ 		return err;
+ 
+ 	if (!compat)
+-		max_optlen = BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen);
++		copy_from_sockptr(&max_optlen, optlen, sizeof(int));
+ 
+ 	ops = READ_ONCE(sock->ops);
+ 	if (level == SOL_SOCKET) {
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index be5266007b489..84a332f95aa85 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -692,9 +692,6 @@ static void init_peercred(struct sock *sk)
+ 
+ static void copy_peercred(struct sock *sk, struct sock *peersk)
+ {
+-	const struct cred *old_cred;
+-	struct pid *old_pid;
+-
+ 	if (sk < peersk) {
+ 		spin_lock(&sk->sk_peer_lock);
+ 		spin_lock_nested(&peersk->sk_peer_lock, SINGLE_DEPTH_NESTING);
+@@ -702,16 +699,12 @@ static void copy_peercred(struct sock *sk, struct sock *peersk)
+ 		spin_lock(&peersk->sk_peer_lock);
+ 		spin_lock_nested(&sk->sk_peer_lock, SINGLE_DEPTH_NESTING);
+ 	}
+-	old_pid = sk->sk_peer_pid;
+-	old_cred = sk->sk_peer_cred;
++
+ 	sk->sk_peer_pid  = get_pid(peersk->sk_peer_pid);
+ 	sk->sk_peer_cred = get_cred(peersk->sk_peer_cred);
+ 
+ 	spin_unlock(&sk->sk_peer_lock);
+ 	spin_unlock(&peersk->sk_peer_lock);
+-
+-	put_pid(old_pid);
+-	put_cred(old_cred);
+ }
+ 
+ static int unix_listen(struct socket *sock, int backlog)
+diff --git a/rust/Makefile b/rust/Makefile
+index f70d5e244fee5..47f9a9f1bdb39 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -359,7 +359,7 @@ $(obj)/bindings/bindings_helpers_generated.rs: $(src)/helpers.c FORCE
+ quiet_cmd_exports = EXPORTS $@
+       cmd_exports = \
+ 	$(NM) -p --defined-only $< \
+-		| awk '/ (T|R|D) / {printf "EXPORT_SYMBOL_RUST_GPL(%s);\n",$$3}' > $@
++		| awk '/ (T|R|D|B) / {printf "EXPORT_SYMBOL_RUST_GPL(%s);\n",$$3}' > $@
+ 
+ $(obj)/exports_core_generated.h: $(obj)/core.o FORCE
+ 	$(call if_changed,exports)
+diff --git a/rust/macros/module.rs b/rust/macros/module.rs
+index acd0393b50957..7dee348ef0cc8 100644
+--- a/rust/macros/module.rs
++++ b/rust/macros/module.rs
+@@ -203,7 +203,11 @@ pub(crate) fn module(ts: TokenStream) -> TokenStream {
+             // freed until the module is unloaded.
+             #[cfg(MODULE)]
+             static THIS_MODULE: kernel::ThisModule = unsafe {{
+-                kernel::ThisModule::from_ptr(&kernel::bindings::__this_module as *const _ as *mut _)
++                extern \"C\" {{
++                    static __this_module: kernel::types::Opaque<kernel::bindings::module>;
++                }}
++
++                kernel::ThisModule::from_ptr(__this_module.get())
+             }};
+             #[cfg(not(MODULE))]
+             static THIS_MODULE: kernel::ThisModule = unsafe {{
+diff --git a/scripts/gfp-translate b/scripts/gfp-translate
+index 6c9aed17cf563..8385ae0d5af93 100755
+--- a/scripts/gfp-translate
++++ b/scripts/gfp-translate
+@@ -62,25 +62,57 @@ if [ "$GFPMASK" = "none" ]; then
+ fi
+ 
+ # Extract GFP flags from the kernel source
+-TMPFILE=`mktemp -t gfptranslate-XXXXXX` || exit 1
+-grep -q ___GFP $SOURCE/include/linux/gfp_types.h
+-if [ $? -eq 0 ]; then
+-	grep "^#define ___GFP" $SOURCE/include/linux/gfp_types.h | sed -e 's/u$//' | grep -v GFP_BITS > $TMPFILE
+-else
+-	grep "^#define __GFP" $SOURCE/include/linux/gfp_types.h | sed -e 's/(__force gfp_t)//' | sed -e 's/u)/)/' | grep -v GFP_BITS | sed -e 's/)\//) \//' > $TMPFILE
+-fi
++TMPFILE=`mktemp -t gfptranslate-XXXXXX.c` || exit 1
+ 
+-# Parse the flags
+-IFS="
+-"
+ echo Source: $SOURCE
+ echo Parsing: $GFPMASK
+-for LINE in `cat $TMPFILE`; do
+-	MASK=`echo $LINE | awk '{print $3}'`
+-	if [ $(($GFPMASK&$MASK)) -ne 0 ]; then
+-		echo $LINE
+-	fi
+-done
+ 
+-rm -f $TMPFILE
++(
++    cat <<EOF
++#include <stdint.h>
++#include <stdio.h>
++
++// Try to fool compiler.h into not including extra stuff
++#define __ASSEMBLY__	1
++
++#include <generated/autoconf.h>
++#include <linux/gfp_types.h>
++
++static const char *masks[] = {
++EOF
++
++    sed -nEe 's/^[[:space:]]+(___GFP_.*)_BIT,.*$/\1/p' $SOURCE/include/linux/gfp_types.h |
++	while read b; do
++	    cat <<EOF
++#if defined($b) && ($b > 0)
++	[${b}_BIT]	= "$b",
++#endif
++EOF
++	done
++
++    cat <<EOF
++};
++
++int main(int argc, char *argv[])
++{
++	unsigned long long mask = $GFPMASK;
++
++	for (int i = 0; i < sizeof(mask) * 8; i++) {
++		unsigned long long bit = 1ULL << i;
++		if (mask & bit)
++			printf("\t%-25s0x%llx\n",
++			       (i < ___GFP_LAST_BIT && masks[i]) ?
++					masks[i] : "*** INVALID ***",
++			       bit);
++	}
++
++	return 0;
++}
++EOF
++) > $TMPFILE
++
++${CC:-gcc} -Wall -o ${TMPFILE}.bin -I $SOURCE/include $TMPFILE && ${TMPFILE}.bin
++
++rm -f $TMPFILE ${TMPFILE}.bin
++
+ exit 0
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index ab939e6449e41..002a1b9ed83a5 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -3871,12 +3871,18 @@ static int smack_unix_stream_connect(struct sock *sock,
+ 		}
+ 	}
+ 
+-	/*
+-	 * Cross reference the peer labels for SO_PEERSEC.
+-	 */
+ 	if (rc == 0) {
++		/*
++		 * Cross reference the peer labels for SO_PEERSEC.
++		 */
+ 		nsp->smk_packet = ssp->smk_out;
+ 		ssp->smk_packet = osp->smk_out;
++
++		/*
++		 * new/child/established socket must inherit listening socket labels
++		 */
++		nsp->smk_out = osp->smk_out;
++		nsp->smk_in  = osp->smk_in;
+ 	}
+ 
+ 	return rc;
+diff --git a/sound/core/control.c b/sound/core/control.c
+index fb0c60044f7b3..1dd2337e29300 100644
+--- a/sound/core/control.c
++++ b/sound/core/control.c
+@@ -1480,12 +1480,16 @@ static int snd_ctl_elem_user_get(struct snd_kcontrol *kcontrol,
+ static int snd_ctl_elem_user_put(struct snd_kcontrol *kcontrol,
+ 				 struct snd_ctl_elem_value *ucontrol)
+ {
+-	int change;
++	int err, change;
+ 	struct user_element *ue = kcontrol->private_data;
+ 	unsigned int size = ue->elem_data_size;
+ 	char *dst = ue->elem_data +
+ 			snd_ctl_get_ioff(kcontrol, &ucontrol->id) * size;
+ 
++	err = sanity_check_input_values(ue->card, ucontrol, &ue->info, false);
++	if (err < 0)
++		return err;
++
+ 	change = memcmp(&ucontrol->value, dst, size) != 0;
+ 	if (change)
+ 		memcpy(dst, &ucontrol->value, size);
+diff --git a/sound/hda/hdmi_chmap.c b/sound/hda/hdmi_chmap.c
+index 5d8e1d944b0af..7b276047f85a7 100644
+--- a/sound/hda/hdmi_chmap.c
++++ b/sound/hda/hdmi_chmap.c
+@@ -753,6 +753,20 @@ static int hdmi_chmap_ctl_get(struct snd_kcontrol *kcontrol,
+ 	return 0;
+ }
+ 
++/* a simple sanity check for input values to chmap kcontrol */
++static int chmap_value_check(struct hdac_chmap *hchmap,
++			     const struct snd_ctl_elem_value *ucontrol)
++{
++	int i;
++
++	for (i = 0; i < hchmap->channels_max; i++) {
++		if (ucontrol->value.integer.value[i] < 0 ||
++		    ucontrol->value.integer.value[i] > SNDRV_CHMAP_LAST)
++			return -EINVAL;
++	}
++	return 0;
++}
++
+ static int hdmi_chmap_ctl_put(struct snd_kcontrol *kcontrol,
+ 			      struct snd_ctl_elem_value *ucontrol)
+ {
+@@ -764,6 +778,10 @@ static int hdmi_chmap_ctl_put(struct snd_kcontrol *kcontrol,
+ 	unsigned char chmap[8], per_pin_chmap[8];
+ 	int i, err, ca, prepared = 0;
+ 
++	err = chmap_value_check(hchmap, ucontrol);
++	if (err < 0)
++		return err;
++
+ 	/* No monitor is connected in dyn_pcm_assign.
+ 	 * It's invalid to setup the chmap
+ 	 */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index f030669243f9a..e851785ff0581 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -307,6 +307,7 @@ enum {
+ 	CXT_FIXUP_HEADSET_MIC,
+ 	CXT_FIXUP_HP_MIC_NO_PRESENCE,
+ 	CXT_PINCFG_SWS_JS201D,
++	CXT_PINCFG_TOP_SPEAKER,
+ };
+ 
+ /* for hda_fixup_thinkpad_acpi() */
+@@ -974,6 +975,13 @@ static const struct hda_fixup cxt_fixups[] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = cxt_pincfg_sws_js201d,
+ 	},
++	[CXT_PINCFG_TOP_SPEAKER] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x1d, 0x82170111 },
++			{ }
++		},
++	},
+ };
+ 
+ static const struct snd_pci_quirk cxt5045_fixups[] = {
+@@ -1070,6 +1078,8 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x1c06, 0x2011, "Lemote A1004", CXT_PINCFG_LEMOTE_A1004),
+ 	SND_PCI_QUIRK(0x1c06, 0x2012, "Lemote A1205", CXT_PINCFG_LEMOTE_A1205),
++	SND_PCI_QUIRK(0x2782, 0x12c3, "Sirius Gen1", CXT_PINCFG_TOP_SPEAKER),
++	SND_PCI_QUIRK(0x2782, 0x12c5, "Sirius Gen2", CXT_PINCFG_TOP_SPEAKER),
+ 	{}
+ };
+ 
+@@ -1089,6 +1099,7 @@ static const struct hda_model_fixup cxt5066_fixup_models[] = {
+ 	{ .id = CXT_FIXUP_HP_MIC_NO_PRESENCE, .name = "hp-mic-fix" },
+ 	{ .id = CXT_PINCFG_LENOVO_NOTEBOOK, .name = "lenovo-20149" },
+ 	{ .id = CXT_PINCFG_SWS_JS201D, .name = "sws-js201d" },
++	{ .id = CXT_PINCFG_TOP_SPEAKER, .name = "sirius-top-speaker" },
+ 	{}
+ };
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 1a7b7e790fca9..0cde024d1d33c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7428,6 +7428,7 @@ enum {
+ 	ALC236_FIXUP_HP_GPIO_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED,
+ 	ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
++	ALC236_FIXUP_LENOVO_INV_DMIC,
+ 	ALC298_FIXUP_SAMSUNG_AMP,
+ 	ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ 	ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+@@ -7525,6 +7526,7 @@ enum {
+ 	ALC256_FIXUP_CHROME_BOOK,
+ 	ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7,
+ 	ALC287_FIXUP_LENOVO_SSID_17AA3820,
++	ALC245_FIXUP_CLEVO_NOISY_MIC,
+ };
+ 
+ /* A special fixup for Lenovo C940 and Yoga Duet 7;
+@@ -9049,6 +9051,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc236_fixup_hp_mute_led_micmute_vref,
+ 	},
++	[ALC236_FIXUP_LENOVO_INV_DMIC] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc_fixup_inv_dmic,
++		.chained = true,
++		.chain_id = ALC283_FIXUP_INT_MIC,
++	},
+ 	[ALC298_FIXUP_SAMSUNG_AMP] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc298_fixup_samsung_amp,
+@@ -9850,6 +9858,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc287_fixup_lenovo_ssid_17aa3820,
+ 	},
++	[ALC245_FIXUP_CLEVO_NOISY_MIC] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269_fixup_limit_int_mic_boost,
++		.chained = true,
++		.chain_id = ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -10098,6 +10112,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f6, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
++	SND_PCI_QUIRK(0x103c, 0x87fd, "HP Laptop 14-dq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x87fe, "HP Laptop 15s-fq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+@@ -10222,6 +10237,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8c16, "HP Spectre 16", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8c17, "HP Spectre 16", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8c21, "HP Pavilion Plus Laptop 14-ey0XXX", ALC245_FIXUP_HP_X360_MUTE_LEDS),
++	SND_PCI_QUIRK(0x103c, 0x8c30, "HP Victus 15-fb1xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8c46, "HP EliteBook 830 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c47, "HP EliteBook 840 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c48, "HP EliteBook 860 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+@@ -10348,6 +10364,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402ZA", ALC245_FIXUP_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
+ 	SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x1043, 0x1e1f, "ASUS Vivobook 15 X1504VAP", ALC2XX_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x1e5e, "ASUS ROG Strix G513", ALC294_FIXUP_ASUS_G513_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x1e63, "ASUS H7606W", ALC285_FIXUP_CS35L56_I2C_2),
+@@ -10486,7 +10503,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0xa600, "Clevo NL50NU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xa650, "Clevo NP[567]0SN[CD]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xa671, "Clevo NP70SN[CDE]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x1558, 0xa763, "Clevo V54x_6x_TU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0xa741, "Clevo V54x_6x_TNE", ALC245_FIXUP_CLEVO_NOISY_MIC),
++	SND_PCI_QUIRK(0x1558, 0xa763, "Clevo V54x_6x_TU", ALC245_FIXUP_CLEVO_NOISY_MIC),
+ 	SND_PCI_QUIRK(0x1558, 0xb018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xb019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xb022, "Clevo NH77D[DC][QW]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -10609,6 +10627,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
++	SND_PCI_QUIRK(0x17aa, 0x3913, "Lenovo 145", ALC236_FIXUP_LENOVO_INV_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+@@ -10860,6 +10879,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"},
+ 	{.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"},
+ 	{.id = ALC285_FIXUP_HP_GPIO_AMP_INIT, .name = "alc285-hp-amp-init"},
++	{.id = ALC236_FIXUP_LENOVO_INV_DMIC, .name = "alc236-fixup-lenovo-inv-mic"},
+ 	{}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/soc/codecs/tas2781-fmwlib.c b/sound/soc/codecs/tas2781-fmwlib.c
+index 08082806d5892..8f9a3ae7153e9 100644
+--- a/sound/soc/codecs/tas2781-fmwlib.c
++++ b/sound/soc/codecs/tas2781-fmwlib.c
+@@ -21,7 +21,7 @@
+ #include <sound/soc.h>
+ #include <sound/tlv.h>
+ #include <sound/tas2781.h>
+-
++#include <asm/unaligned.h>
+ 
+ #define ERROR_PRAM_CRCCHK			0x0000000
+ #define ERROR_YRAM_CRCCHK			0x0000001
+@@ -187,8 +187,7 @@ static struct tasdevice_config_info *tasdevice_add_config(
+ 	/* convert data[offset], data[offset + 1], data[offset + 2] and
+ 	 * data[offset + 3] into host
+ 	 */
+-	cfg_info->nblocks =
+-		be32_to_cpup((__be32 *)&config_data[config_offset]);
++	cfg_info->nblocks = get_unaligned_be32(&config_data[config_offset]);
+ 	config_offset += 4;
+ 
+ 	/* Several kinds of dsp/algorithm firmwares can run on tas2781,
+@@ -232,14 +231,14 @@ static struct tasdevice_config_info *tasdevice_add_config(
+ 
+ 		}
+ 		bk_da[i]->yram_checksum =
+-			be16_to_cpup((__be16 *)&config_data[config_offset]);
++			get_unaligned_be16(&config_data[config_offset]);
+ 		config_offset += 2;
+ 		bk_da[i]->block_size =
+-			be32_to_cpup((__be32 *)&config_data[config_offset]);
++			get_unaligned_be32(&config_data[config_offset]);
+ 		config_offset += 4;
+ 
+ 		bk_da[i]->n_subblks =
+-			be32_to_cpup((__be32 *)&config_data[config_offset]);
++			get_unaligned_be32(&config_data[config_offset]);
+ 
+ 		config_offset += 4;
+ 
+@@ -289,7 +288,7 @@ int tasdevice_rca_parser(void *context, const struct firmware *fmw)
+ 	}
+ 	buf = (unsigned char *)fmw->data;
+ 
+-	fw_hdr->img_sz = be32_to_cpup((__be32 *)&buf[offset]);
++	fw_hdr->img_sz = get_unaligned_be32(&buf[offset]);
+ 	offset += 4;
+ 	if (fw_hdr->img_sz != fmw->size) {
+ 		dev_err(tas_priv->dev,
+@@ -300,9 +299,9 @@ int tasdevice_rca_parser(void *context, const struct firmware *fmw)
+ 		goto out;
+ 	}
+ 
+-	fw_hdr->checksum = be32_to_cpup((__be32 *)&buf[offset]);
++	fw_hdr->checksum = get_unaligned_be32(&buf[offset]);
+ 	offset += 4;
+-	fw_hdr->binary_version_num = be32_to_cpup((__be32 *)&buf[offset]);
++	fw_hdr->binary_version_num = get_unaligned_be32(&buf[offset]);
+ 	if (fw_hdr->binary_version_num < 0x103) {
+ 		dev_err(tas_priv->dev, "File version 0x%04x is too low",
+ 			fw_hdr->binary_version_num);
+@@ -311,7 +310,7 @@ int tasdevice_rca_parser(void *context, const struct firmware *fmw)
+ 		goto out;
+ 	}
+ 	offset += 4;
+-	fw_hdr->drv_fw_version = be32_to_cpup((__be32 *)&buf[offset]);
++	fw_hdr->drv_fw_version = get_unaligned_be32(&buf[offset]);
+ 	offset += 8;
+ 	fw_hdr->plat_type = buf[offset];
+ 	offset += 1;
+@@ -339,11 +338,11 @@ int tasdevice_rca_parser(void *context, const struct firmware *fmw)
+ 	for (i = 0; i < TASDEVICE_DEVICE_SUM; i++, offset++)
+ 		fw_hdr->devs[i] = buf[offset];
+ 
+-	fw_hdr->nconfig = be32_to_cpup((__be32 *)&buf[offset]);
++	fw_hdr->nconfig = get_unaligned_be32(&buf[offset]);
+ 	offset += 4;
+ 
+ 	for (i = 0; i < TASDEVICE_CONFIG_SUM; i++) {
+-		fw_hdr->config_size[i] = be32_to_cpup((__be32 *)&buf[offset]);
++		fw_hdr->config_size[i] = get_unaligned_be32(&buf[offset]);
+ 		offset += 4;
+ 		total_config_sz += fw_hdr->config_size[i];
+ 	}
+@@ -423,7 +422,7 @@ static int fw_parse_block_data_kernel(struct tasdevice_fw *tas_fmw,
+ 	/* convert data[offset], data[offset + 1], data[offset + 2] and
+ 	 * data[offset + 3] into host
+ 	 */
+-	block->type = be32_to_cpup((__be32 *)&data[offset]);
++	block->type = get_unaligned_be32(&data[offset]);
+ 	offset += 4;
+ 
+ 	block->is_pchksum_present = data[offset];
+@@ -438,10 +437,10 @@ static int fw_parse_block_data_kernel(struct tasdevice_fw *tas_fmw,
+ 	block->ychksum = data[offset];
+ 	offset++;
+ 
+-	block->blk_size = be32_to_cpup((__be32 *)&data[offset]);
++	block->blk_size = get_unaligned_be32(&data[offset]);
+ 	offset += 4;
+ 
+-	block->nr_subblocks = be32_to_cpup((__be32 *)&data[offset]);
++	block->nr_subblocks = get_unaligned_be32(&data[offset]);
+ 	offset += 4;
+ 
+ 	/* fixed m68k compiling issue:
+@@ -482,7 +481,7 @@ static int fw_parse_data_kernel(struct tasdevice_fw *tas_fmw,
+ 		offset = -EINVAL;
+ 		goto out;
+ 	}
+-	img_data->nr_blk = be32_to_cpup((__be32 *)&data[offset]);
++	img_data->nr_blk = get_unaligned_be32(&data[offset]);
+ 	offset += 4;
+ 
+ 	img_data->dev_blks = kcalloc(img_data->nr_blk,
+@@ -578,14 +577,14 @@ static int fw_parse_variable_header_kernel(
+ 		offset = -EINVAL;
+ 		goto out;
+ 	}
+-	fw_hdr->device_family = be16_to_cpup((__be16 *)&buf[offset]);
++	fw_hdr->device_family = get_unaligned_be16(&buf[offset]);
+ 	if (fw_hdr->device_family != 0) {
+ 		dev_err(tas_priv->dev, "%s:not TAS device\n", __func__);
+ 		offset = -EINVAL;
+ 		goto out;
+ 	}
+ 	offset += 2;
+-	fw_hdr->device = be16_to_cpup((__be16 *)&buf[offset]);
++	fw_hdr->device = get_unaligned_be16(&buf[offset]);
+ 	if (fw_hdr->device >= TASDEVICE_DSP_TAS_MAX_DEVICE ||
+ 		fw_hdr->device == 6) {
+ 		dev_err(tas_priv->dev, "Unsupported dev %d\n", fw_hdr->device);
+@@ -603,7 +602,7 @@ static int fw_parse_variable_header_kernel(
+ 		goto out;
+ 	}
+ 
+-	tas_fmw->nr_programs = be32_to_cpup((__be32 *)&buf[offset]);
++	tas_fmw->nr_programs = get_unaligned_be32(&buf[offset]);
+ 	offset += 4;
+ 
+ 	if (tas_fmw->nr_programs == 0 || tas_fmw->nr_programs >
+@@ -622,14 +621,14 @@ static int fw_parse_variable_header_kernel(
+ 
+ 	for (i = 0; i < tas_fmw->nr_programs; i++) {
+ 		program = &(tas_fmw->programs[i]);
+-		program->prog_size = be32_to_cpup((__be32 *)&buf[offset]);
++		program->prog_size = get_unaligned_be32(&buf[offset]);
+ 		offset += 4;
+ 	}
+ 
+ 	/* Skip the unused prog_size */
+ 	offset += 4 * (TASDEVICE_MAXPROGRAM_NUM_KERNEL - tas_fmw->nr_programs);
+ 
+-	tas_fmw->nr_configurations = be32_to_cpup((__be32 *)&buf[offset]);
++	tas_fmw->nr_configurations = get_unaligned_be32(&buf[offset]);
+ 	offset += 4;
+ 
+ 	/* The max number of config in firmware greater than 4 pieces of
+@@ -661,7 +660,7 @@ static int fw_parse_variable_header_kernel(
+ 
+ 	for (i = 0; i < tas_fmw->nr_programs; i++) {
+ 		config = &(tas_fmw->configs[i]);
+-		config->cfg_size = be32_to_cpup((__be32 *)&buf[offset]);
++		config->cfg_size = get_unaligned_be32(&buf[offset]);
+ 		offset += 4;
+ 	}
+ 
+@@ -699,7 +698,7 @@ static int tasdevice_process_block(void *context, unsigned char *data,
+ 		switch (subblk_typ) {
+ 		case TASDEVICE_CMD_SING_W: {
+ 			int i;
+-			unsigned short len = be16_to_cpup((__be16 *)&data[2]);
++			unsigned short len = get_unaligned_be16(&data[2]);
+ 
+ 			subblk_offset += 2;
+ 			if (subblk_offset + 4 * len > sublocksize) {
+@@ -725,7 +724,7 @@ static int tasdevice_process_block(void *context, unsigned char *data,
+ 		}
+ 			break;
+ 		case TASDEVICE_CMD_BURST: {
+-			unsigned short len = be16_to_cpup((__be16 *)&data[2]);
++			unsigned short len = get_unaligned_be16(&data[2]);
+ 
+ 			subblk_offset += 2;
+ 			if (subblk_offset + 4 + len > sublocksize) {
+@@ -766,7 +765,7 @@ static int tasdevice_process_block(void *context, unsigned char *data,
+ 				is_err = true;
+ 				break;
+ 			}
+-			sleep_time = be16_to_cpup((__be16 *)&data[2]) * 1000;
++			sleep_time = get_unaligned_be16(&data[2]) * 1000;
+ 			usleep_range(sleep_time, sleep_time + 50);
+ 			subblk_offset += 2;
+ 		}
+@@ -910,7 +909,7 @@ static int fw_parse_variable_hdr(struct tasdevice_priv
+ 
+ 	offset += len;
+ 
+-	fw_hdr->device_family = be32_to_cpup((__be32 *)&buf[offset]);
++	fw_hdr->device_family = get_unaligned_be32(&buf[offset]);
+ 	if (fw_hdr->device_family != 0) {
+ 		dev_err(tas_priv->dev, "%s: not TAS device\n", __func__);
+ 		offset = -EINVAL;
+@@ -918,7 +917,7 @@ static int fw_parse_variable_hdr(struct tasdevice_priv
+ 	}
+ 	offset += 4;
+ 
+-	fw_hdr->device = be32_to_cpup((__be32 *)&buf[offset]);
++	fw_hdr->device = get_unaligned_be32(&buf[offset]);
+ 	if (fw_hdr->device >= TASDEVICE_DSP_TAS_MAX_DEVICE ||
+ 		fw_hdr->device == 6) {
+ 		dev_err(tas_priv->dev, "Unsupported dev %d\n", fw_hdr->device);
+@@ -963,7 +962,7 @@ static int fw_parse_block_data(struct tasdevice_fw *tas_fmw,
+ 		offset = -EINVAL;
+ 		goto out;
+ 	}
+-	block->type = be32_to_cpup((__be32 *)&data[offset]);
++	block->type = get_unaligned_be32(&data[offset]);
+ 	offset += 4;
+ 
+ 	if (tas_fmw->fw_hdr.fixed_hdr.drv_ver >= PPC_DRIVER_CRCCHK) {
+@@ -988,7 +987,7 @@ static int fw_parse_block_data(struct tasdevice_fw *tas_fmw,
+ 		block->is_ychksum_present = 0;
+ 	}
+ 
+-	block->nr_cmds = be32_to_cpup((__be32 *)&data[offset]);
++	block->nr_cmds = get_unaligned_be32(&data[offset]);
+ 	offset += 4;
+ 
+ 	n = block->nr_cmds * 4;
+@@ -1039,7 +1038,7 @@ static int fw_parse_data(struct tasdevice_fw *tas_fmw,
+ 		goto out;
+ 	}
+ 	offset += n;
+-	img_data->nr_blk = be16_to_cpup((__be16 *)&data[offset]);
++	img_data->nr_blk = get_unaligned_be16(&data[offset]);
+ 	offset += 2;
+ 
+ 	img_data->dev_blks = kcalloc(img_data->nr_blk,
+@@ -1076,7 +1075,7 @@ static int fw_parse_program_data(struct tasdevice_priv *tas_priv,
+ 		offset = -EINVAL;
+ 		goto out;
+ 	}
+-	tas_fmw->nr_programs = be16_to_cpup((__be16 *)&buf[offset]);
++	tas_fmw->nr_programs = get_unaligned_be16(&buf[offset]);
+ 	offset += 2;
+ 
+ 	if (tas_fmw->nr_programs == 0) {
+@@ -1143,7 +1142,7 @@ static int fw_parse_configuration_data(
+ 		offset = -EINVAL;
+ 		goto out;
+ 	}
+-	tas_fmw->nr_configurations = be16_to_cpup((__be16 *)&data[offset]);
++	tas_fmw->nr_configurations = get_unaligned_be16(&data[offset]);
+ 	offset += 2;
+ 
+ 	if (tas_fmw->nr_configurations == 0) {
+@@ -1775,7 +1774,7 @@ static int fw_parse_header(struct tasdevice_priv *tas_priv,
+ 	/* Convert data[offset], data[offset + 1], data[offset + 2] and
+ 	 * data[offset + 3] into host
+ 	 */
+-	fw_fixed_hdr->fwsize = be32_to_cpup((__be32 *)&buf[offset]);
++	fw_fixed_hdr->fwsize = get_unaligned_be32(&buf[offset]);
+ 	offset += 4;
+ 	if (fw_fixed_hdr->fwsize != fmw->size) {
+ 		dev_err(tas_priv->dev, "File size not match, %lu %u",
+@@ -1784,9 +1783,9 @@ static int fw_parse_header(struct tasdevice_priv *tas_priv,
+ 		goto out;
+ 	}
+ 	offset += 4;
+-	fw_fixed_hdr->ppcver = be32_to_cpup((__be32 *)&buf[offset]);
++	fw_fixed_hdr->ppcver = get_unaligned_be32(&buf[offset]);
+ 	offset += 8;
+-	fw_fixed_hdr->drv_ver = be32_to_cpup((__be32 *)&buf[offset]);
++	fw_fixed_hdr->drv_ver = get_unaligned_be32(&buf[offset]);
+ 	offset += 72;
+ 
+  out:
+@@ -1828,7 +1827,7 @@ static int fw_parse_calibration_data(struct tasdevice_priv *tas_priv,
+ 		offset = -EINVAL;
+ 		goto out;
+ 	}
+-	tas_fmw->nr_calibrations = be16_to_cpup((__be16 *)&data[offset]);
++	tas_fmw->nr_calibrations = get_unaligned_be16(&data[offset]);
+ 	offset += 2;
+ 
+ 	if (tas_fmw->nr_calibrations != 1) {
+diff --git a/sound/soc/intel/boards/bxt_rt298.c b/sound/soc/intel/boards/bxt_rt298.c
+index dce6a2086f2a4..6da1517c53c6e 100644
+--- a/sound/soc/intel/boards/bxt_rt298.c
++++ b/sound/soc/intel/boards/bxt_rt298.c
+@@ -605,7 +605,7 @@ static int broxton_audio_probe(struct platform_device *pdev)
+ 	int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(broxton_rt298_dais); i++) {
+-		if (card->dai_link[i].codecs->name &&
++		if (card->dai_link[i].num_codecs &&
+ 		    !strncmp(card->dai_link[i].codecs->name, "i2c-INT343A:00",
+ 			     I2C_NAME_SIZE)) {
+ 			if (!strncmp(card->name, "broxton-rt298",
+diff --git a/sound/soc/intel/boards/bytcht_cx2072x.c b/sound/soc/intel/boards/bytcht_cx2072x.c
+index c014d85a08b24..df3c2a7b64d23 100644
+--- a/sound/soc/intel/boards/bytcht_cx2072x.c
++++ b/sound/soc/intel/boards/bytcht_cx2072x.c
+@@ -241,7 +241,7 @@ static int snd_byt_cht_cx2072x_probe(struct platform_device *pdev)
+ 
+ 	/* fix index of codec dai */
+ 	for (i = 0; i < ARRAY_SIZE(byt_cht_cx2072x_dais); i++) {
+-		if (byt_cht_cx2072x_dais[i].codecs->name &&
++		if (byt_cht_cx2072x_dais[i].num_codecs &&
+ 		    !strcmp(byt_cht_cx2072x_dais[i].codecs->name,
+ 			    "i2c-14F10720:00")) {
+ 			dai_index = i;
+diff --git a/sound/soc/intel/boards/bytcht_da7213.c b/sound/soc/intel/boards/bytcht_da7213.c
+index f4ac3ddd148b8..08c598b7e1eee 100644
+--- a/sound/soc/intel/boards/bytcht_da7213.c
++++ b/sound/soc/intel/boards/bytcht_da7213.c
+@@ -245,7 +245,7 @@ static int bytcht_da7213_probe(struct platform_device *pdev)
+ 
+ 	/* fix index of codec dai */
+ 	for (i = 0; i < ARRAY_SIZE(dailink); i++) {
+-		if (dailink[i].codecs->name &&
++		if (dailink[i].num_codecs &&
+ 		    !strcmp(dailink[i].codecs->name, "i2c-DLGS7213:00")) {
+ 			dai_index = i;
+ 			break;
+diff --git a/sound/soc/intel/boards/bytcht_es8316.c b/sound/soc/intel/boards/bytcht_es8316.c
+index 2fcec2e02bb53..77b91ea4dc32c 100644
+--- a/sound/soc/intel/boards/bytcht_es8316.c
++++ b/sound/soc/intel/boards/bytcht_es8316.c
+@@ -546,7 +546,7 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
+ 
+ 	/* fix index of codec dai */
+ 	for (i = 0; i < ARRAY_SIZE(byt_cht_es8316_dais); i++) {
+-		if (byt_cht_es8316_dais[i].codecs->name &&
++		if (byt_cht_es8316_dais[i].num_codecs &&
+ 		    !strcmp(byt_cht_es8316_dais[i].codecs->name,
+ 			    "i2c-ESSX8316:00")) {
+ 			dai_index = i;
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index a64d1989e28a5..db4a33680d948 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -1677,7 +1677,7 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
+ 
+ 	/* fix index of codec dai */
+ 	for (i = 0; i < ARRAY_SIZE(byt_rt5640_dais); i++) {
+-		if (byt_rt5640_dais[i].codecs->name &&
++		if (byt_rt5640_dais[i].num_codecs &&
+ 		    !strcmp(byt_rt5640_dais[i].codecs->name,
+ 			    "i2c-10EC5640:00")) {
+ 			dai_index = i;
+diff --git a/sound/soc/intel/boards/bytcr_rt5651.c b/sound/soc/intel/boards/bytcr_rt5651.c
+index 80c841b000a31..8514b79f389bb 100644
+--- a/sound/soc/intel/boards/bytcr_rt5651.c
++++ b/sound/soc/intel/boards/bytcr_rt5651.c
+@@ -910,7 +910,7 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
+ 
+ 	/* fix index of codec dai */
+ 	for (i = 0; i < ARRAY_SIZE(byt_rt5651_dais); i++) {
+-		if (byt_rt5651_dais[i].codecs->name &&
++		if (byt_rt5651_dais[i].num_codecs &&
+ 		    !strcmp(byt_rt5651_dais[i].codecs->name,
+ 			    "i2c-10EC5651:00")) {
+ 			dai_index = i;
+diff --git a/sound/soc/intel/boards/bytcr_wm5102.c b/sound/soc/intel/boards/bytcr_wm5102.c
+index cccb5e90c0fef..e5a7cc606aa90 100644
+--- a/sound/soc/intel/boards/bytcr_wm5102.c
++++ b/sound/soc/intel/boards/bytcr_wm5102.c
+@@ -605,7 +605,7 @@ static int snd_byt_wm5102_mc_probe(struct platform_device *pdev)
+ 
+ 	/* find index of codec dai */
+ 	for (i = 0; i < ARRAY_SIZE(byt_wm5102_dais); i++) {
+-		if (byt_wm5102_dais[i].codecs->name &&
++		if (byt_wm5102_dais[i].num_codecs &&
+ 		    !strcmp(byt_wm5102_dais[i].codecs->name,
+ 			    "wm5102-codec")) {
+ 			dai_index = i;
+diff --git a/sound/soc/intel/boards/cht_bsw_rt5645.c b/sound/soc/intel/boards/cht_bsw_rt5645.c
+index eb41b7115d01d..1da9ceee4d593 100644
+--- a/sound/soc/intel/boards/cht_bsw_rt5645.c
++++ b/sound/soc/intel/boards/cht_bsw_rt5645.c
+@@ -569,7 +569,7 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ 
+ 	/* set correct codec name */
+ 	for (i = 0; i < ARRAY_SIZE(cht_dailink); i++)
+-		if (cht_dailink[i].codecs->name &&
++		if (cht_dailink[i].num_codecs &&
+ 		    !strcmp(cht_dailink[i].codecs->name,
+ 			    "i2c-10EC5645:00")) {
+ 			dai_index = i;
+diff --git a/sound/soc/intel/boards/cht_bsw_rt5672.c b/sound/soc/intel/boards/cht_bsw_rt5672.c
+index be2d1a8dbca80..d68e5bc755dee 100644
+--- a/sound/soc/intel/boards/cht_bsw_rt5672.c
++++ b/sound/soc/intel/boards/cht_bsw_rt5672.c
+@@ -466,7 +466,7 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ 
+ 	/* find index of codec dai */
+ 	for (i = 0; i < ARRAY_SIZE(cht_dailink); i++) {
+-		if (cht_dailink[i].codecs->name &&
++		if (cht_dailink[i].num_codecs &&
+ 		    !strcmp(cht_dailink[i].codecs->name, RT5672_I2C_DEFAULT)) {
+ 			dai_index = i;
+ 			break;
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 16dad4a454434..1680c2498fe94 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -4066,6 +4066,7 @@ static int snd_soc_dai_link_event(struct snd_soc_dapm_widget *w,
+ 
+ 	case SND_SOC_DAPM_POST_PMD:
+ 		kfree(substream->runtime);
++		substream->runtime = NULL;
+ 		break;
+ 
+ 	default:
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index 6951ff7bc61e7..73d44dff45d6e 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -851,6 +851,8 @@ static int soc_tplg_denum_create_values(struct soc_tplg *tplg, struct soc_enum *
+ 		se->dobj.control.dvalues[i] = le32_to_cpu(ec->values[i]);
+ 	}
+ 
++	se->items = le32_to_cpu(ec->items);
++	se->values = (const unsigned int *)se->dobj.control.dvalues;
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index da182314aa874..ebbd99e341437 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -2050,6 +2050,8 @@ static int sof_link_unload(struct snd_soc_component *scomp, struct snd_soc_dobj
+ 	if (!slink)
+ 		return 0;
+ 
++	slink->link->platforms->name = NULL;
++
+ 	kfree(slink->tuples);
+ 	list_del(&slink->list);
+ 	kfree(slink->hw_configs);
+diff --git a/sound/soc/sunxi/sun4i-i2s.c b/sound/soc/sunxi/sun4i-i2s.c
+index 5f8d979585b69..3af0b2aab2914 100644
+--- a/sound/soc/sunxi/sun4i-i2s.c
++++ b/sound/soc/sunxi/sun4i-i2s.c
+@@ -100,8 +100,8 @@
+ #define SUN8I_I2S_CTRL_MODE_PCM			(0 << 4)
+ 
+ #define SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK	BIT(19)
+-#define SUN8I_I2S_FMT0_LRCLK_POLARITY_INVERTED		(1 << 19)
+-#define SUN8I_I2S_FMT0_LRCLK_POLARITY_NORMAL		(0 << 19)
++#define SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH	(1 << 19)
++#define SUN8I_I2S_FMT0_LRCLK_POLARITY_START_LOW		(0 << 19)
+ #define SUN8I_I2S_FMT0_LRCK_PERIOD_MASK		GENMASK(17, 8)
+ #define SUN8I_I2S_FMT0_LRCK_PERIOD(period)	((period - 1) << 8)
+ #define SUN8I_I2S_FMT0_BCLK_POLARITY_MASK	BIT(7)
+@@ -729,65 +729,37 @@ static int sun4i_i2s_set_soc_fmt(const struct sun4i_i2s *i2s,
+ static int sun8i_i2s_set_soc_fmt(const struct sun4i_i2s *i2s,
+ 				 unsigned int fmt)
+ {
+-	u32 mode, val;
++	u32 mode, lrclk_pol, bclk_pol, val;
+ 	u8 offset;
+ 
+-	/*
+-	 * DAI clock polarity
+-	 *
+-	 * The setup for LRCK contradicts the datasheet, but under a
+-	 * scope it's clear that the LRCK polarity is reversed
+-	 * compared to the expected polarity on the bus.
+-	 */
+-	switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
+-	case SND_SOC_DAIFMT_IB_IF:
+-		/* Invert both clocks */
+-		val = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED;
+-		break;
+-	case SND_SOC_DAIFMT_IB_NF:
+-		/* Invert bit clock */
+-		val = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED |
+-		      SUN8I_I2S_FMT0_LRCLK_POLARITY_INVERTED;
+-		break;
+-	case SND_SOC_DAIFMT_NB_IF:
+-		/* Invert frame clock */
+-		val = 0;
+-		break;
+-	case SND_SOC_DAIFMT_NB_NF:
+-		val = SUN8I_I2S_FMT0_LRCLK_POLARITY_INVERTED;
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	regmap_update_bits(i2s->regmap, SUN4I_I2S_FMT0_REG,
+-			   SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK |
+-			   SUN8I_I2S_FMT0_BCLK_POLARITY_MASK,
+-			   val);
+-
+ 	/* DAI Mode */
+ 	switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ 	case SND_SOC_DAIFMT_DSP_A:
++		lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH;
+ 		mode = SUN8I_I2S_CTRL_MODE_PCM;
+ 		offset = 1;
+ 		break;
+ 
+ 	case SND_SOC_DAIFMT_DSP_B:
++		lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH;
+ 		mode = SUN8I_I2S_CTRL_MODE_PCM;
+ 		offset = 0;
+ 		break;
+ 
+ 	case SND_SOC_DAIFMT_I2S:
++		lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_LOW;
+ 		mode = SUN8I_I2S_CTRL_MODE_LEFT;
+ 		offset = 1;
+ 		break;
+ 
+ 	case SND_SOC_DAIFMT_LEFT_J:
++		lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH;
+ 		mode = SUN8I_I2S_CTRL_MODE_LEFT;
+ 		offset = 0;
+ 		break;
+ 
+ 	case SND_SOC_DAIFMT_RIGHT_J:
++		lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH;
+ 		mode = SUN8I_I2S_CTRL_MODE_RIGHT;
+ 		offset = 0;
+ 		break;
+@@ -805,6 +777,35 @@ static int sun8i_i2s_set_soc_fmt(const struct sun4i_i2s *i2s,
+ 			   SUN8I_I2S_TX_CHAN_OFFSET_MASK,
+ 			   SUN8I_I2S_TX_CHAN_OFFSET(offset));
+ 
++	/* DAI clock polarity */
++	bclk_pol = SUN8I_I2S_FMT0_BCLK_POLARITY_NORMAL;
++
++	switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
++	case SND_SOC_DAIFMT_IB_IF:
++		/* Invert both clocks */
++		lrclk_pol ^= SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK;
++		bclk_pol = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED;
++		break;
++	case SND_SOC_DAIFMT_IB_NF:
++		/* Invert bit clock */
++		bclk_pol = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED;
++		break;
++	case SND_SOC_DAIFMT_NB_IF:
++		/* Invert frame clock */
++		lrclk_pol ^= SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK;
++		break;
++	case SND_SOC_DAIFMT_NB_NF:
++		/* No inversion */
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	regmap_update_bits(i2s->regmap, SUN4I_I2S_FMT0_REG,
++			   SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK |
++			   SUN8I_I2S_FMT0_BCLK_POLARITY_MASK,
++			   lrclk_pol | bclk_pol);
++
+ 	/* DAI clock master masks */
+ 	switch (fmt & SND_SOC_DAIFMT_CLOCK_PROVIDER_MASK) {
+ 	case SND_SOC_DAIFMT_BP_FP:
+@@ -836,65 +837,37 @@ static int sun8i_i2s_set_soc_fmt(const struct sun4i_i2s *i2s,
+ static int sun50i_h6_i2s_set_soc_fmt(const struct sun4i_i2s *i2s,
+ 				     unsigned int fmt)
+ {
+-	u32 mode, val;
++	u32 mode, lrclk_pol, bclk_pol, val;
+ 	u8 offset;
+ 
+-	/*
+-	 * DAI clock polarity
+-	 *
+-	 * The setup for LRCK contradicts the datasheet, but under a
+-	 * scope it's clear that the LRCK polarity is reversed
+-	 * compared to the expected polarity on the bus.
+-	 */
+-	switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
+-	case SND_SOC_DAIFMT_IB_IF:
+-		/* Invert both clocks */
+-		val = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED;
+-		break;
+-	case SND_SOC_DAIFMT_IB_NF:
+-		/* Invert bit clock */
+-		val = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED |
+-		      SUN8I_I2S_FMT0_LRCLK_POLARITY_INVERTED;
+-		break;
+-	case SND_SOC_DAIFMT_NB_IF:
+-		/* Invert frame clock */
+-		val = 0;
+-		break;
+-	case SND_SOC_DAIFMT_NB_NF:
+-		val = SUN8I_I2S_FMT0_LRCLK_POLARITY_INVERTED;
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	regmap_update_bits(i2s->regmap, SUN4I_I2S_FMT0_REG,
+-			   SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK |
+-			   SUN8I_I2S_FMT0_BCLK_POLARITY_MASK,
+-			   val);
+-
+ 	/* DAI Mode */
+ 	switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ 	case SND_SOC_DAIFMT_DSP_A:
++		lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH;
+ 		mode = SUN8I_I2S_CTRL_MODE_PCM;
+ 		offset = 1;
+ 		break;
+ 
+ 	case SND_SOC_DAIFMT_DSP_B:
++		lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH;
+ 		mode = SUN8I_I2S_CTRL_MODE_PCM;
+ 		offset = 0;
+ 		break;
+ 
+ 	case SND_SOC_DAIFMT_I2S:
++		lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_LOW;
+ 		mode = SUN8I_I2S_CTRL_MODE_LEFT;
+ 		offset = 1;
+ 		break;
+ 
+ 	case SND_SOC_DAIFMT_LEFT_J:
++		lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH;
+ 		mode = SUN8I_I2S_CTRL_MODE_LEFT;
+ 		offset = 0;
+ 		break;
+ 
+ 	case SND_SOC_DAIFMT_RIGHT_J:
++		lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH;
+ 		mode = SUN8I_I2S_CTRL_MODE_RIGHT;
+ 		offset = 0;
+ 		break;
+@@ -912,6 +885,36 @@ static int sun50i_h6_i2s_set_soc_fmt(const struct sun4i_i2s *i2s,
+ 			   SUN50I_H6_I2S_TX_CHAN_SEL_OFFSET_MASK,
+ 			   SUN50I_H6_I2S_TX_CHAN_SEL_OFFSET(offset));
+ 
++	/* DAI clock polarity */
++	bclk_pol = SUN8I_I2S_FMT0_BCLK_POLARITY_NORMAL;
++
++	switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
++	case SND_SOC_DAIFMT_IB_IF:
++		/* Invert both clocks */
++		lrclk_pol ^= SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK;
++		bclk_pol = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED;
++		break;
++	case SND_SOC_DAIFMT_IB_NF:
++		/* Invert bit clock */
++		bclk_pol = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED;
++		break;
++	case SND_SOC_DAIFMT_NB_IF:
++		/* Invert frame clock */
++		lrclk_pol ^= SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK;
++		break;
++	case SND_SOC_DAIFMT_NB_NF:
++		/* No inversion */
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	regmap_update_bits(i2s->regmap, SUN4I_I2S_FMT0_REG,
++			   SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK |
++			   SUN8I_I2S_FMT0_BCLK_POLARITY_MASK,
++			   lrclk_pol | bclk_pol);
++
++
+ 	/* DAI clock master masks */
+ 	switch (fmt & SND_SOC_DAIFMT_CLOCK_PROVIDER_MASK) {
+ 	case SND_SOC_DAIFMT_BP_FP:
+diff --git a/sound/soc/tegra/tegra210_ahub.c b/sound/soc/tegra/tegra210_ahub.c
+index 3f114a2adfced..ab3c6b2544d20 100644
+--- a/sound/soc/tegra/tegra210_ahub.c
++++ b/sound/soc/tegra/tegra210_ahub.c
+@@ -2,7 +2,7 @@
+ //
+ // tegra210_ahub.c - Tegra210 AHUB driver
+ //
+-// Copyright (c) 2020-2022, NVIDIA CORPORATION.  All rights reserved.
++// Copyright (c) 2020-2024, NVIDIA CORPORATION.  All rights reserved.
+ 
+ #include <linux/clk.h>
+ #include <linux/device.h>
+@@ -1391,11 +1391,13 @@ static int tegra_ahub_probe(struct platform_device *pdev)
+ 		return err;
+ 	}
+ 
++	pm_runtime_enable(&pdev->dev);
++
+ 	err = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
+-	if (err)
++	if (err) {
++		pm_runtime_disable(&pdev->dev);
+ 		return err;
+-
+-	pm_runtime_enable(&pdev->dev);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 5401f2df463d2..5edb717647847 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -10336,7 +10336,7 @@ __bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
+ struct bpf_map *
+ bpf_object__next_map(const struct bpf_object *obj, const struct bpf_map *prev)
+ {
+-	if (prev == NULL)
++	if (prev == NULL && obj != NULL)
+ 		return obj->maps;
+ 
+ 	return __bpf_map__iter(prev, obj, 1);
+@@ -10345,7 +10345,7 @@ bpf_object__next_map(const struct bpf_object *obj, const struct bpf_map *prev)
+ struct bpf_map *
+ bpf_object__prev_map(const struct bpf_object *obj, const struct bpf_map *next)
+ {
+-	if (next == NULL) {
++	if (next == NULL && obj != NULL) {
+ 		if (!obj->nr_maps)
+ 			return NULL;
+ 		return obj->maps + obj->nr_maps - 1;
+diff --git a/tools/net/ynl/lib/ynl.py b/tools/net/ynl/lib/ynl.py
+index 35e666928119b..ed7b6fff69997 100644
+--- a/tools/net/ynl/lib/ynl.py
++++ b/tools/net/ynl/lib/ynl.py
+@@ -388,6 +388,8 @@ class NetlinkProtocol:
+ 
+     def decode(self, ynl, nl_msg, op):
+         msg = self._decode(nl_msg)
++        if op is None:
++            op = ynl.rsp_by_value[msg.cmd()]
+         fixed_header_size = ynl._struct_size(op.fixed_header)
+         msg.raw_attrs = NlAttrs(msg.raw, fixed_header_size)
+         return msg
+@@ -919,8 +921,7 @@ class YnlFamily(SpecFamily):
+                     print("Netlink done while checking for ntf!?")
+                     continue
+ 
+-                op = self.rsp_by_value[nl_msg.cmd()]
+-                decoded = self.nlproto.decode(self, nl_msg, op)
++                decoded = self.nlproto.decode(self, nl_msg, None)
+                 if decoded.cmd() not in self.async_msg_ids:
+                     print("Unexpected msg id done while checking for ntf", decoded)
+                     continue
+@@ -978,7 +979,7 @@ class YnlFamily(SpecFamily):
+                     if nl_msg.extack:
+                         self._decode_extack(req_msg, op, nl_msg.extack)
+                 else:
+-                    op = self.rsp_by_value[nl_msg.cmd()]
++                    op = None
+                     req_flags = []
+ 
+                 if nl_msg.error:
+diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
+index b4cb3fe5cc254..bc4e92c0c08b8 100644
+--- a/tools/perf/util/bpf_lock_contention.c
++++ b/tools/perf/util/bpf_lock_contention.c
+@@ -286,6 +286,9 @@ static void account_end_timestamp(struct lock_contention *con)
+ 			goto next;
+ 
+ 		for (int i = 0; i < total_cpus; i++) {
++			if (cpu_data[i].lock == 0)
++				continue;
++
+ 			update_lock_stat(stat_fd, -1, end_ts, aggr_mode,
+ 					 &cpu_data[i]);
+ 		}
+diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
+index 5f541522364fb..5d0a809dc2df9 100644
+--- a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
++++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
+@@ -29,9 +29,11 @@ static int check_vgem(int fd)
+ 	version.name = name;
+ 
+ 	ret = ioctl(fd, DRM_IOCTL_VERSION, &version);
+-	if (ret)
++	if (ret || version.name_len != 4)
+ 		return 0;
+ 
++	name[4] = '\0';
++
+ 	return !strcmp(name, "vgem");
+ }
+ 
+diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
+index 41998cf1dcf57..09faffbc3d87c 100644
+--- a/tools/testing/selftests/mm/mseal_test.c
++++ b/tools/testing/selftests/mm/mseal_test.c
+@@ -128,17 +128,6 @@ static int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
+ 	return sret;
+ }
+ 
+-static void *sys_mmap(void *addr, unsigned long len, unsigned long prot,
+-	unsigned long flags, unsigned long fd, unsigned long offset)
+-{
+-	void *sret;
+-
+-	errno = 0;
+-	sret = (void *) syscall(__NR_mmap, addr, len, prot,
+-		flags, fd, offset);
+-	return sret;
+-}
+-
+ static int sys_munmap(void *ptr, size_t size)
+ {
+ 	int sret;
+@@ -219,7 +208,7 @@ static void setup_single_address(int size, void **ptrOut)
+ {
+ 	void *ptr;
+ 
+-	ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
++	ptr = mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ 	*ptrOut = ptr;
+ }
+ 
+@@ -228,7 +217,7 @@ static void setup_single_address_rw(int size, void **ptrOut)
+ 	void *ptr;
+ 	unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE;
+ 
+-	ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE, mapflags, -1, 0);
++	ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, mapflags, -1, 0);
+ 	*ptrOut = ptr;
+ }
+ 
+@@ -252,7 +241,7 @@ bool seal_support(void)
+ 	void *ptr;
+ 	unsigned long page_size = getpagesize();
+ 
+-	ptr = sys_mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
++	ptr = mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ 	if (ptr == (void *) -1)
+ 		return false;
+ 
+@@ -528,8 +517,8 @@ static void test_seal_zero_address(void)
+ 	int prot;
+ 
+ 	/* use mmap to change protection. */
+-	ptr = sys_mmap(0, size, PROT_NONE,
+-			MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
++	ptr = mmap(0, size, PROT_NONE,
++		   MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ 	FAIL_TEST_IF_FALSE(ptr == 0);
+ 
+ 	size = get_vma_size(ptr, &prot);
+@@ -1256,8 +1245,8 @@ static void test_seal_mmap_overwrite_prot(bool seal)
+ 	}
+ 
+ 	/* use mmap to change protection. */
+-	ret2 = sys_mmap(ptr, size, PROT_NONE,
+-			MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
++	ret2 = mmap(ptr, size, PROT_NONE,
++		    MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ 	if (seal) {
+ 		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ 		FAIL_TEST_IF_FALSE(errno == EPERM);
+@@ -1287,8 +1276,8 @@ static void test_seal_mmap_expand(bool seal)
+ 	}
+ 
+ 	/* use mmap to expand. */
+-	ret2 = sys_mmap(ptr, size, PROT_READ,
+-			MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
++	ret2 = mmap(ptr, size, PROT_READ,
++		    MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ 	if (seal) {
+ 		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ 		FAIL_TEST_IF_FALSE(errno == EPERM);
+@@ -1315,8 +1304,8 @@ static void test_seal_mmap_shrink(bool seal)
+ 	}
+ 
+ 	/* use mmap to shrink. */
+-	ret2 = sys_mmap(ptr, 8 * page_size, PROT_READ,
+-			MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
++	ret2 = mmap(ptr, 8 * page_size, PROT_READ,
++		    MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ 	if (seal) {
+ 		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ 		FAIL_TEST_IF_FALSE(errno == EPERM);
+@@ -1697,7 +1686,7 @@ static void test_seal_discard_ro_anon_on_filebacked(bool seal)
+ 	ret = fallocate(fd, 0, 0, size);
+ 	FAIL_TEST_IF_FALSE(!ret);
+ 
+-	ptr = sys_mmap(NULL, size, PROT_READ, mapflags, fd, 0);
++	ptr = mmap(NULL, size, PROT_READ, mapflags, fd, 0);
+ 	FAIL_TEST_IF_FALSE(ptr != MAP_FAILED);
+ 
+ 	if (seal) {
+@@ -1727,7 +1716,7 @@ static void test_seal_discard_ro_anon_on_shared(bool seal)
+ 	int ret;
+ 	unsigned long mapflags = MAP_ANONYMOUS | MAP_SHARED;
+ 
+-	ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0);
++	ptr = mmap(NULL, size, PROT_READ, mapflags, -1, 0);
+ 	FAIL_TEST_IF_FALSE(ptr != (void *)-1);
+ 
+ 	if (seal) {
+diff --git a/tools/testing/selftests/mm/seal_elf.c b/tools/testing/selftests/mm/seal_elf.c
+index f2babec79bb63..07727d4892ac9 100644
+--- a/tools/testing/selftests/mm/seal_elf.c
++++ b/tools/testing/selftests/mm/seal_elf.c
+@@ -61,17 +61,6 @@ static int sys_mseal(void *start, size_t len)
+ 	return sret;
+ }
+ 
+-static void *sys_mmap(void *addr, unsigned long len, unsigned long prot,
+-	unsigned long flags, unsigned long fd, unsigned long offset)
+-{
+-	void *sret;
+-
+-	errno = 0;
+-	sret = (void *) syscall(__NR_mmap, addr, len, prot,
+-		flags, fd, offset);
+-	return sret;
+-}
+-
+ static inline int sys_mprotect(void *ptr, size_t size, unsigned long prot)
+ {
+ 	int sret;
+@@ -87,7 +76,7 @@ static bool seal_support(void)
+ 	void *ptr;
+ 	unsigned long page_size = getpagesize();
+ 
+-	ptr = sys_mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
++	ptr = mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ 	if (ptr == (void *) -1)
+ 		return false;
+ 
+diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
+index d9393569d03a4..ec5377ffda315 100644
+--- a/tools/testing/selftests/net/Makefile
++++ b/tools/testing/selftests/net/Makefile
+@@ -84,7 +84,8 @@ TEST_GEN_PROGS += so_incoming_cpu
+ TEST_PROGS += sctp_vrf.sh
+ TEST_GEN_FILES += sctp_hello
+ TEST_GEN_FILES += ip_local_port_range
+-TEST_GEN_FILES += bind_wildcard
++TEST_GEN_PROGS += bind_wildcard
++TEST_GEN_PROGS += bind_timewait
+ TEST_PROGS += test_vxlan_mdb.sh
+ TEST_PROGS += test_bridge_neigh_suppress.sh
+ TEST_PROGS += test_vxlan_nolocalbypass.sh
+diff --git a/tools/testing/selftests/riscv/mm/mmap_bottomup.c b/tools/testing/selftests/riscv/mm/mmap_bottomup.c
+index 7f7d3eb8b9c92..f9ccae50349bc 100644
+--- a/tools/testing/selftests/riscv/mm/mmap_bottomup.c
++++ b/tools/testing/selftests/riscv/mm/mmap_bottomup.c
+@@ -7,8 +7,6 @@
+ TEST(infinite_rlimit)
+ {
+ 	EXPECT_EQ(BOTTOM_UP, memory_layout());
+-
+-	TEST_MMAPS;
+ }
+ 
+ TEST_HARNESS_MAIN
+diff --git a/tools/testing/selftests/riscv/mm/mmap_default.c b/tools/testing/selftests/riscv/mm/mmap_default.c
+index 2ba3ec9900064..3f53b6ecc3261 100644
+--- a/tools/testing/selftests/riscv/mm/mmap_default.c
++++ b/tools/testing/selftests/riscv/mm/mmap_default.c
+@@ -7,8 +7,6 @@
+ TEST(default_rlimit)
+ {
+ 	EXPECT_EQ(TOP_DOWN, memory_layout());
+-
+-	TEST_MMAPS;
+ }
+ 
+ TEST_HARNESS_MAIN
+diff --git a/tools/testing/selftests/riscv/mm/mmap_test.h b/tools/testing/selftests/riscv/mm/mmap_test.h
+index 3b29ca3bb3d40..75918d15919f2 100644
+--- a/tools/testing/selftests/riscv/mm/mmap_test.h
++++ b/tools/testing/selftests/riscv/mm/mmap_test.h
+@@ -10,76 +10,9 @@
+ #define TOP_DOWN 0
+ #define BOTTOM_UP 1
+ 
+-#if __riscv_xlen == 64
+-uint64_t random_addresses[] = {
+-	0x19764f0d73b3a9f0, 0x016049584cecef59, 0x3580bdd3562f4acd,
+-	0x1164219f20b17da0, 0x07d97fcb40ff2373, 0x76ec528921272ee7,
+-	0x4dd48c38a3de3f70, 0x2e11415055f6997d, 0x14b43334ac476c02,
+-	0x375a60795aff19f6, 0x47f3051725b8ee1a, 0x4e697cf240494a9f,
+-	0x456b59b5c2f9e9d1, 0x101724379d63cb96, 0x7fe9ad31619528c1,
+-	0x2f417247c495c2ea, 0x329a5a5b82943a5e, 0x06d7a9d6adcd3827,
+-	0x327b0b9ee37f62d5, 0x17c7b1851dfd9b76, 0x006ebb6456ec2cd9,
+-	0x00836cd14146a134, 0x00e5c4dcde7126db, 0x004c29feadf75753,
+-	0x00d8b20149ed930c, 0x00d71574c269387a, 0x0006ebe4a82acb7a,
+-	0x0016135df51f471b, 0x00758bdb55455160, 0x00d0bdd949b13b32,
+-	0x00ecea01e7c5f54b, 0x00e37b071b9948b1, 0x0011fdd00ff57ab3,
+-	0x00e407294b52f5ea, 0x00567748c200ed20, 0x000d073084651046,
+-	0x00ac896f4365463c, 0x00eb0d49a0b26216, 0x0066a2564a982a31,
+-	0x002e0d20237784ae, 0x0000554ff8a77a76, 0x00006ce07a54c012,
+-	0x000009570516d799, 0x00000954ca15b84d, 0x0000684f0d453379,
+-	0x00002ae5816302b5, 0x0000042403fb54bf, 0x00004bad7392bf30,
+-	0x00003e73bfa4b5e3, 0x00005442c29978e0, 0x00002803f11286b6,
+-	0x000073875d745fc6, 0x00007cede9cb8240, 0x000027df84cc6a4f,
+-	0x00006d7e0e74242a, 0x00004afd0b836e02, 0x000047d0e837cd82,
+-	0x00003b42405efeda, 0x00001531bafa4c95, 0x00007172cae34ac4,
+-};
+-#else
+-uint32_t random_addresses[] = {
+-	0x8dc302e0, 0x929ab1e0, 0xb47683ba, 0xea519c73, 0xa19f1c90, 0xc49ba213,
+-	0x8f57c625, 0xadfe5137, 0x874d4d95, 0xaa20f09d, 0xcf21ebfc, 0xda7737f1,
+-	0xcedf392a, 0x83026c14, 0xccedca52, 0xc6ccf826, 0xe0cd9415, 0x997472ca,
+-	0xa21a44c1, 0xe82196f5, 0xa23fd66b, 0xc28d5590, 0xd009cdce, 0xcf0be646,
+-	0x8fc8c7ff, 0xe2a85984, 0xa3d3236b, 0x89a0619d, 0xc03db924, 0xb5d4cc1b,
+-	0xb96ee04c, 0xd191da48, 0xb432a000, 0xaa2bebbc, 0xa2fcb289, 0xb0cca89b,
+-	0xb0c18d6a, 0x88f58deb, 0xa4d42d1c, 0xe4d74e86, 0x99902b09, 0x8f786d31,
+-	0xbec5e381, 0x9a727e65, 0xa9a65040, 0xa880d789, 0x8f1b335e, 0xfc821c1e,
+-	0x97e34be4, 0xbbef84ed, 0xf447d197, 0xfd7ceee2, 0xe632348d, 0xee4590f4,
+-	0x958992a5, 0xd57e05d6, 0xfd240970, 0xc5b0dcff, 0xd96da2c2, 0xa7ae041d,
+-};
+-#endif
+-
+-// Only works on 64 bit
+-#if __riscv_xlen == 64
+ #define PROT (PROT_READ | PROT_WRITE)
+ #define FLAGS (MAP_PRIVATE | MAP_ANONYMOUS)
+ 
+-/* mmap must return a value that doesn't use more bits than the hint address. */
+-static inline unsigned long get_max_value(unsigned long input)
+-{
+-	unsigned long max_bit = (1UL << (((sizeof(unsigned long) * 8) - 1 -
+-					  __builtin_clzl(input))));
+-
+-	return max_bit + (max_bit - 1);
+-}
+-
+-#define TEST_MMAPS                                                            \
+-	({                                                                    \
+-		void *mmap_addr;                                              \
+-		for (int i = 0; i < ARRAY_SIZE(random_addresses); i++) {      \
+-			mmap_addr = mmap((void *)random_addresses[i],         \
+-					 5 * sizeof(int), PROT, FLAGS, 0, 0); \
+-			EXPECT_NE(MAP_FAILED, mmap_addr);                     \
+-			EXPECT_GE((void *)get_max_value(random_addresses[i]), \
+-				  mmap_addr);                                 \
+-			mmap_addr = mmap((void *)random_addresses[i],         \
+-					 5 * sizeof(int), PROT, FLAGS, 0, 0); \
+-			EXPECT_NE(MAP_FAILED, mmap_addr);                     \
+-			EXPECT_GE((void *)get_max_value(random_addresses[i]), \
+-				  mmap_addr);                                 \
+-		}                                                             \
+-	})
+-#endif /* __riscv_xlen == 64 */
+-
+ static inline int memory_layout(void)
+ {
+ 	void *value1 = mmap(NULL, sizeof(int), PROT, FLAGS, 0, 0);



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-12 13:08 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-12 13:08 UTC (permalink / raw
  To: gentoo-commits

commit:     9ca4d20e2535253eb3da4c5d4b5d84ed351d1a11
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 12 13:07:11 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 12 13:07:11 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9ca4d20e

Remove redundant patches

Removed:
1710_parisc-Delay-write-protection.patch
1900_xfs-finobt-count-blocks-fix.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                              |  8 -----
 1710_parisc-Delay-write-protection.patch | 61 --------------------------------
 1900_xfs-finobt-count-blocks-fix.patch   | 55 ----------------------------
 3 files changed, 124 deletions(-)

diff --git a/0000_README b/0000_README
index 3b3f17cf..fbbb6719 100644
--- a/0000_README
+++ b/0000_README
@@ -91,18 +91,10 @@ Patch:  1700_sparc-address-warray-bound-warnings.patch
 From:		https://github.com/KSPP/linux/issues/109
 Desc:		Address -Warray-bounds warnings 
 
-Patch:  1710_parisc-Delay-write-protection.patch
-From:		https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
-Desc:		parisc: Delay write-protection until mark_rodata_ro() call
-
 Patch:  1730_parisc-Disable-prctl.patch
 From:	  https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:	  prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
-Patch:  1900_xfs-finobt-count-blocks-fix.patch
-From:   https://lore.kernel.org/linux-xfs/20240813152530.GF6051@frogsfrogsfrogs/T/#mdc718f38912ccc1b9b53b46d9adfaeff0828b55f
-Desc:   xfs: xfs_finobt_count_blocks() walks the wrong btree
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1710_parisc-Delay-write-protection.patch b/1710_parisc-Delay-write-protection.patch
deleted file mode 100644
index efdd4c27..00000000
--- a/1710_parisc-Delay-write-protection.patch
+++ /dev/null
@@ -1,61 +0,0 @@
-From 213aa670153ed675a007c1f35c5db544b0fefc94 Mon Sep 17 00:00:00 2001
-From: Helge Deller <deller@gmx.de>
-Date: Sat, 31 Aug 2024 14:02:06 +0200
-Subject: parisc: Delay write-protection until mark_rodata_ro() call
-
-Do not write-protect the kernel read-only and __ro_after_init sections
-any earlier than the mark_rodata_ro() call.  This fixes a boot issue on
-parisc which is triggered by commit 91a1d97ef482 ("jump_label,module: Don't
-alloc static_key_mod for __ro_after_init keys"). That commit may modify
-static key contents in the __ro_after_init section at bootup, so this
-section needs to be writable at least until mark_rodata_ro() is called.
-
-Signed-off-by: Helge Deller <deller@gmx.de>
-Reported-by: matoro <matoro_mailinglist_kernel@matoro.tk>
-Reported-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
-Tested-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
-Link: https://lore.kernel.org/linux-parisc/096cad5aada514255cd7b0b9dbafc768@matoro.tk/#r
-Fixes: 91a1d97ef482 ("jump_label,module: Don't alloc static_key_mod for __ro_after_init keys")
-Cc: stable@vger.kernel.org # v6.10+
----
- arch/parisc/mm/init.c | 16 +++++++++++-----
- 1 file changed, 11 insertions(+), 5 deletions(-)
-
-diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
-index 34d91cb8b25905..96970fa75e4ac9 100644
---- a/arch/parisc/mm/init.c
-+++ b/arch/parisc/mm/init.c
-@@ -459,7 +459,6 @@ void free_initmem(void)
- 	unsigned long kernel_end  = (unsigned long)&_end;
- 
- 	/* Remap kernel text and data, but do not touch init section yet. */
--	kernel_set_to_readonly = true;
- 	map_pages(init_end, __pa(init_end), kernel_end - init_end,
- 		  PAGE_KERNEL, 0);
- 
-@@ -493,11 +492,18 @@ void free_initmem(void)
- #ifdef CONFIG_STRICT_KERNEL_RWX
- void mark_rodata_ro(void)
- {
--	/* rodata memory was already mapped with KERNEL_RO access rights by
--           pagetable_init() and map_pages(). No need to do additional stuff here */
--	unsigned long roai_size = __end_ro_after_init - __start_ro_after_init;
-+	unsigned long start = (unsigned long) &__start_rodata;
-+	unsigned long end = (unsigned long) &__end_rodata;
-+
-+	pr_info("Write protecting the kernel read-only data: %luk\n",
-+	       (end - start) >> 10);
-+
-+	kernel_set_to_readonly = true;
-+	map_pages(start, __pa(start), end - start, PAGE_KERNEL, 0);
- 
--	pr_info("Write protected read-only-after-init data: %luk\n", roai_size >> 10);
-+	/* force the kernel to see the new page table entries */
-+	flush_cache_all();
-+	flush_tlb_all();
- }
- #endif
- 
--- 
-cgit 1.2.3-korg
-

diff --git a/1900_xfs-finobt-count-blocks-fix.patch b/1900_xfs-finobt-count-blocks-fix.patch
deleted file mode 100644
index 02f60712..00000000
--- a/1900_xfs-finobt-count-blocks-fix.patch
+++ /dev/null
@@ -1,55 +0,0 @@
-xfs: xfs_finobt_count_blocks() walks the wrong btree
-
-From: Dave Chinner <dchinner@redhat.com>
-
-As a result of the factoring in commit 14dd46cf31f4 ("xfs: split
-xfs_inobt_init_cursor"), mount started taking a long time on a
-user's filesystem.  For Anders, this made mount times regress from
-under a second to over 15 minutes for a filesystem with only 30
-million inodes in it.
-
-Anders bisected it down to the above commit, but even then the bug
-was not obvious. In this commit, over 20 calls to
-xfs_inobt_init_cursor() were modified, and some we modified to call
-a new function named xfs_finobt_init_cursor().
-
-If that takes you a moment to reread those function names to see
-what the rename was, then you have realised why this bug wasn't
-spotted during review. And it wasn't spotted on inspection even
-after the bisect pointed at this commit - a single missing "f" isn't
-the easiest thing for a human eye to notice....
-
-The result is that xfs_finobt_count_blocks() now incorrectly calls
-xfs_inobt_init_cursor() so it is now walking the inobt instead of
-the finobt. Hence when there are lots of allocated inodes in a
-filesystem, mount takes a -long- time run because it now walks a
-massive allocated inode btrees instead of the small, nearly empty
-free inode btrees. It also means all the finobt space reservations
-are wrong, so mount could potentially given ENOSPC on kernel
-upgrade.
-
-In hindsight, commit 14dd46cf31f4 should have been two commits - the
-first to convert the finobt callers to the new API, the second to
-modify the xfs_inobt_init_cursor() API for the inobt callers. That
-would have made the bug very obvious during review.
-
-Fixes: 14dd46cf31f4 ("xfs: split xfs_inobt_init_cursor")
-Reported-by: Anders Blomdell <anders.blomdell@gmail.com>
-Signed-off-by: Dave Chinner <dchinner@redhat.com>
----
- fs/xfs/libxfs/xfs_ialloc_btree.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/fs/xfs/libxfs/xfs_ialloc_btree.c b/fs/xfs/libxfs/xfs_ialloc_btree.c
-index 496e2f72a85b..797d5b5f7b72 100644
---- a/fs/xfs/libxfs/xfs_ialloc_btree.c
-+++ b/fs/xfs/libxfs/xfs_ialloc_btree.c
-@@ -749,7 +749,7 @@ xfs_finobt_count_blocks(
- 	if (error)
- 		return error;
- 
--	cur = xfs_inobt_init_cursor(pag, tp, agbp);
-+	cur = xfs_finobt_init_cursor(pag, tp, agbp);
- 	error = xfs_btree_count_blocks(cur, tree_blocks);
- 	xfs_btree_del_cursor(cur, error);
- 	xfs_trans_brelse(tp, agbp);



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-18 18:01 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-18 18:01 UTC (permalink / raw
  To: gentoo-commits

commit:     a8b69318d24bf26247b03424e6663c02a7d0b5c5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 18 18:01:30 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 18 18:01:30 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a8b69318

Linux patch 6.10.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1010_linux-6.10.11.patch | 4668 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4672 insertions(+)

diff --git a/0000_README b/0000_README
index fbbb6719..f48df58b 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-6.10.10.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.10
 
+Patch:  1010_linux-6.10.11.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.11
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1010_linux-6.10.11.patch b/1010_linux-6.10.11.patch
new file mode 100644
index 00000000..76b166b4
--- /dev/null
+++ b/1010_linux-6.10.11.patch
@@ -0,0 +1,4668 @@
+diff --git a/Documentation/netlink/specs/mptcp_pm.yaml b/Documentation/netlink/specs/mptcp_pm.yaml
+index af525ed2979234..30d8342cacc870 100644
+--- a/Documentation/netlink/specs/mptcp_pm.yaml
++++ b/Documentation/netlink/specs/mptcp_pm.yaml
+@@ -109,7 +109,6 @@ attribute-sets:
+       -
+         name: port
+         type: u16
+-        byte-order: big-endian
+       -
+         name: flags
+         type: u32
+diff --git a/Makefile b/Makefile
+index 9b4614c0fcbb68..447856c43b3275 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-rock-pi-e.dts b/arch/arm64/boot/dts/rockchip/rk3328-rock-pi-e.dts
+index a608a219543e59..3e08e2fd0a7828 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-rock-pi-e.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-rock-pi-e.dts
+@@ -387,7 +387,7 @@ led_pin: led-pin {
+ 
+ 	pmic {
+ 		pmic_int_l: pmic-int-l {
+-			rockchip,pins = <2 RK_PA6 RK_FUNC_GPIO &pcfg_pull_up>;
++			rockchip,pins = <0 RK_PA2 RK_FUNC_GPIO &pcfg_pull_up>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index ccbe3a7a1d2c2f..d24444cdf54afa 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -154,6 +154,22 @@ bios-disable-hog {
+ 	};
+ };
+ 
++&gpio3 {
++	/*
++	 * The Qseven BIOS_DISABLE signal on the RK3399-Q7 keeps the on-module
++	 * eMMC and SPI flash powered-down initially (in fact it keeps the
++	 * reset signal asserted). BIOS_DISABLE_OVERRIDE pin allows to override
++	 * that signal so that eMMC and SPI can be used regardless of the state
++	 * of the signal.
++	 */
++	bios-disable-override-hog {
++		gpios = <RK_PD5 GPIO_ACTIVE_LOW>;
++		gpio-hog;
++		line-name = "bios_disable_override";
++		output-high;
++	};
++};
++
+ &gmac {
+ 	assigned-clocks = <&cru SCLK_RMII_SRC>;
+ 	assigned-clock-parents = <&clkin_gmac>;
+@@ -409,6 +425,7 @@ vdd_cpu_b: regulator@60 {
+ 
+ &i2s0 {
+ 	pinctrl-0 = <&i2s0_2ch_bus>;
++	pinctrl-1 = <&i2s0_2ch_bus_bclk_off>;
+ 	rockchip,playback-channels = <2>;
+ 	rockchip,capture-channels = <2>;
+ 	status = "okay";
+@@ -417,8 +434,8 @@ &i2s0 {
+ /*
+  * As Q7 does not specify neither a global nor a RX clock for I2S these
+  * signals are not used. Furthermore I2S0_LRCK_RX is used as GPIO.
+- * Therefore we have to redefine the i2s0_2ch_bus definition to prevent
+- * conflicts.
++ * Therefore we have to redefine the i2s0_2ch_bus and i2s0_2ch_bus_bclk_off
++ * definitions to prevent conflicts.
+  */
+ &i2s0_2ch_bus {
+ 	rockchip,pins =
+@@ -428,6 +445,14 @@ &i2s0_2ch_bus {
+ 		<3 RK_PD7 1 &pcfg_pull_none>;
+ };
+ 
++&i2s0_2ch_bus_bclk_off {
++	rockchip,pins =
++		<3 RK_PD0 RK_FUNC_GPIO &pcfg_pull_none>,
++		<3 RK_PD2 1 &pcfg_pull_none>,
++		<3 RK_PD3 1 &pcfg_pull_none>,
++		<3 RK_PD7 1 &pcfg_pull_none>;
++};
++
+ &io_domains {
+ 	status = "okay";
+ 	bt656-supply = <&vcc_1v8>;
+@@ -449,9 +474,14 @@ &pcie_clkreqn_cpm {
+ 
+ &pinctrl {
+ 	pinctrl-names = "default";
+-	pinctrl-0 = <&q7_thermal_pin>;
++	pinctrl-0 = <&q7_thermal_pin &bios_disable_override_hog_pin>;
+ 
+ 	gpios {
++		bios_disable_override_hog_pin: bios-disable-override-hog-pin {
++			rockchip,pins =
++				<3 RK_PD5 RK_FUNC_GPIO &pcfg_pull_down>;
++		};
++
+ 		q7_thermal_pin: q7-thermal-pin {
+ 			rockchip,pins =
+ 				<0 RK_PA3 RK_FUNC_GPIO &pcfg_pull_up>;
+diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
+index 4bd2f87616baa9..943430077375a4 100644
+--- a/arch/powerpc/kernel/setup-common.c
++++ b/arch/powerpc/kernel/setup-common.c
+@@ -959,6 +959,7 @@ void __init setup_arch(char **cmdline_p)
+ 	mem_topology_setup();
+ 	/* Set max_mapnr before paging_init() */
+ 	set_max_mapnr(max_pfn);
++	high_memory = (void *)__va(max_low_pfn * PAGE_SIZE);
+ 
+ 	/*
+ 	 * Release secondary cpus out of their spinloops at 0x60 now that
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index d325217ab20121..da21cb018984eb 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -290,8 +290,6 @@ void __init mem_init(void)
+ 	swiotlb_init(ppc_swiotlb_enable, ppc_swiotlb_flags);
+ #endif
+ 
+-	high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
+-
+ 	kasan_late_init();
+ 
+ 	memblock_free_all();
+diff --git a/arch/riscv/boot/dts/starfive/jh7110-common.dtsi b/arch/riscv/boot/dts/starfive/jh7110-common.dtsi
+index 68d16717db8cdb..51d85f44762669 100644
+--- a/arch/riscv/boot/dts/starfive/jh7110-common.dtsi
++++ b/arch/riscv/boot/dts/starfive/jh7110-common.dtsi
+@@ -354,6 +354,12 @@ spi_dev0: spi@0 {
+ 	};
+ };
+ 
++&syscrg {
++	assigned-clocks = <&syscrg JH7110_SYSCLK_CPU_CORE>,
++			  <&pllclk JH7110_PLLCLK_PLL0_OUT>;
++	assigned-clock-rates = <500000000>, <1500000000>;
++};
++
+ &sysgpio {
+ 	i2c0_pins: i2c0-0 {
+ 		i2c-pins {
+diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
+index a03c994eed3b6b..b8167272988723 100644
+--- a/arch/riscv/mm/cacheflush.c
++++ b/arch/riscv/mm/cacheflush.c
+@@ -158,6 +158,7 @@ void __init riscv_init_cbo_blocksizes(void)
+ #ifdef CONFIG_SMP
+ static void set_icache_stale_mask(void)
+ {
++	int cpu = get_cpu();
+ 	cpumask_t *mask;
+ 	bool stale_cpu;
+ 
+@@ -168,10 +169,11 @@ static void set_icache_stale_mask(void)
+ 	 * concurrently on different harts.
+ 	 */
+ 	mask = &current->mm->context.icache_stale_mask;
+-	stale_cpu = cpumask_test_cpu(smp_processor_id(), mask);
++	stale_cpu = cpumask_test_cpu(cpu, mask);
+ 
+ 	cpumask_setall(mask);
+-	cpumask_assign_cpu(smp_processor_id(), mask, stale_cpu);
++	cpumask_assign_cpu(cpu, mask, stale_cpu);
++	put_cpu();
+ }
+ #endif
+ 
+@@ -239,14 +241,12 @@ int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long scope)
+ 	case PR_RISCV_CTX_SW_FENCEI_OFF:
+ 		switch (scope) {
+ 		case PR_RISCV_SCOPE_PER_PROCESS:
+-			current->mm->context.force_icache_flush = false;
+-
+ 			set_icache_stale_mask();
++			current->mm->context.force_icache_flush = false;
+ 			break;
+ 		case PR_RISCV_SCOPE_PER_THREAD:
+-			current->thread.force_icache_flush = false;
+-
+ 			set_icache_stale_mask();
++			current->thread.force_icache_flush = false;
+ 			break;
+ 		default:
+ 			return -EINVAL;
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index c59d2b54df4940..4f7ed0cd12ccb2 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -602,6 +602,19 @@ config RANDOMIZE_BASE
+ 	  as a security feature that deters exploit attempts relying on
+ 	  knowledge of the location of kernel internals.
+ 
++config RANDOMIZE_IDENTITY_BASE
++	bool "Randomize the address of the identity mapping base"
++	depends on RANDOMIZE_BASE
++	default DEBUG_VM
++	help
++	  The identity mapping base address is pinned to zero by default.
++	  Allow randomization of that base to expose otherwise missed
++	  notion of physical and virtual addresses of data structures.
++	  That does not have any impact on the base address at which the
++	  kernel image is loaded.
++
++	  If unsure, say N
++
+ config KERNEL_IMAGE_BASE
+ 	hex "Kernel image base address"
+ 	range 0x100000 0x1FFFFFE0000000 if !KASAN
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index 66ee97ac803de3..90c51368f93341 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -333,7 +333,8 @@ static unsigned long setup_kernel_memory_layout(unsigned long kernel_size)
+ 	BUILD_BUG_ON(MAX_DCSS_ADDR > (1UL << MAX_PHYSMEM_BITS));
+ 	max_mappable = max(ident_map_size, MAX_DCSS_ADDR);
+ 	max_mappable = min(max_mappable, vmemmap_start);
+-	__identity_base = round_down(vmemmap_start - max_mappable, rte_size);
++	if (IS_ENABLED(CONFIG_RANDOMIZE_IDENTITY_BASE))
++		__identity_base = round_down(vmemmap_start - max_mappable, rte_size);
+ 
+ 	return asce_limit;
+ }
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index 17a71e92a343e1..95eada2994e150 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -35,7 +35,6 @@
+ #include <clocksource/hyperv_timer.h>
+ #include <linux/highmem.h>
+ 
+-int hyperv_init_cpuhp;
+ u64 hv_current_partition_id = ~0ull;
+ EXPORT_SYMBOL_GPL(hv_current_partition_id);
+ 
+@@ -607,8 +606,6 @@ void __init hyperv_init(void)
+ 
+ 	register_syscore_ops(&hv_syscore_ops);
+ 
+-	hyperv_init_cpuhp = cpuhp;
+-
+ 	if (cpuid_ebx(HYPERV_CPUID_FEATURES) & HV_ACCESS_PARTITION_ID)
+ 		hv_get_partition_id();
+ 
+@@ -637,7 +634,7 @@ void __init hyperv_init(void)
+ clean_guest_os_id:
+ 	wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
+ 	hv_ivm_msr_write(HV_X64_MSR_GUEST_OS_ID, 0);
+-	cpuhp_remove_state(cpuhp);
++	cpuhp_remove_state(CPUHP_AP_HYPERV_ONLINE);
+ free_ghcb_page:
+ 	free_percpu(hv_ghcb_pg);
+ free_vp_assist_page:
+diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
+index 390c4d13956d0c..5f0bc6a6d02556 100644
+--- a/arch/x86/include/asm/mshyperv.h
++++ b/arch/x86/include/asm/mshyperv.h
+@@ -40,7 +40,6 @@ static inline unsigned char hv_get_nmi_reason(void)
+ }
+ 
+ #if IS_ENABLED(CONFIG_HYPERV)
+-extern int hyperv_init_cpuhp;
+ extern bool hyperv_paravisor_present;
+ 
+ extern void *hv_hypercall_pg;
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index e0fd57a8ba8404..41632fb57796de 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -199,8 +199,8 @@ static void hv_machine_shutdown(void)
+ 	 * Call hv_cpu_die() on all the CPUs, otherwise later the hypervisor
+ 	 * corrupts the old VP Assist Pages and can crash the kexec kernel.
+ 	 */
+-	if (kexec_in_progress && hyperv_init_cpuhp > 0)
+-		cpuhp_remove_state(hyperv_init_cpuhp);
++	if (kexec_in_progress)
++		cpuhp_remove_state(CPUHP_AP_HYPERV_ONLINE);
+ 
+ 	/* The function calls stop_other_cpus(). */
+ 	native_machine_shutdown();
+@@ -449,9 +449,23 @@ static void __init ms_hyperv_init_platform(void)
+ 			ms_hyperv.hints &= ~HV_X64_APIC_ACCESS_RECOMMENDED;
+ 
+ 			if (!ms_hyperv.paravisor_present) {
+-				/* To be supported: more work is required.  */
++				/*
++				 * Mark the Hyper-V TSC page feature as disabled
++				 * in a TDX VM without paravisor so that the
++				 * Invariant TSC, which is a better clocksource
++				 * anyway, is used instead.
++				 */
+ 				ms_hyperv.features &= ~HV_MSR_REFERENCE_TSC_AVAILABLE;
+ 
++				/*
++				 * The Invariant TSC is expected to be available
++				 * in a TDX VM without paravisor, but if not,
++				 * print a warning message. The slower Hyper-V MSR-based
++				 * Ref Counter should end up being the clocksource.
++				 */
++				if (!(ms_hyperv.features & HV_ACCESS_TSC_INVARIANT))
++					pr_warn("Hyper-V: Invariant TSC is unavailable\n");
++
+ 				/* HV_MSR_CRASH_CTL is unsupported. */
+ 				ms_hyperv.misc_features &= ~HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE;
+ 
+diff --git a/drivers/clk/sophgo/clk-cv18xx-ip.c b/drivers/clk/sophgo/clk-cv18xx-ip.c
+index 805f561725ae15..b186e64d4813e2 100644
+--- a/drivers/clk/sophgo/clk-cv18xx-ip.c
++++ b/drivers/clk/sophgo/clk-cv18xx-ip.c
+@@ -613,7 +613,7 @@ static u8 mmux_get_parent_id(struct cv1800_clk_mmux *mmux)
+ 			return i;
+ 	}
+ 
+-	unreachable();
++	BUG();
+ }
+ 
+ static int mmux_enable(struct clk_hw *hw)
+diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c
+index b2a080647e4132..99177835cadec4 100644
+--- a/drivers/clocksource/hyperv_timer.c
++++ b/drivers/clocksource/hyperv_timer.c
+@@ -137,7 +137,21 @@ static int hv_stimer_init(unsigned int cpu)
+ 	ce->name = "Hyper-V clockevent";
+ 	ce->features = CLOCK_EVT_FEAT_ONESHOT;
+ 	ce->cpumask = cpumask_of(cpu);
+-	ce->rating = 1000;
++
++	/*
++	 * Lower the rating of the Hyper-V timer in a TDX VM without paravisor,
++	 * so the local APIC timer (lapic_clockevent) is the default timer in
++	 * such a VM. The Hyper-V timer is not preferred in such a VM because
++	 * it depends on the slow VM Reference Counter MSR (the Hyper-V TSC
++	 * page is not enbled in such a VM because the VM uses Invariant TSC
++	 * as a better clocksource and it's challenging to mark the Hyper-V
++	 * TSC page shared in very early boot).
++	 */
++	if (!ms_hyperv.paravisor_present && hv_isolation_type_tdx())
++		ce->rating = 90;
++	else
++		ce->rating = 1000;
++
+ 	ce->set_state_shutdown = hv_ce_shutdown;
+ 	ce->set_state_oneshot = hv_ce_set_oneshot;
+ 	ce->set_next_event = hv_ce_set_next_event;
+diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
+index 571069863c6294..6b6ae9c81368ea 100644
+--- a/drivers/cxl/acpi.c
++++ b/drivers/cxl/acpi.c
+@@ -74,6 +74,43 @@ static struct cxl_dport *cxl_hb_xor(struct cxl_root_decoder *cxlrd, int pos)
+ 	return cxlrd->cxlsd.target[n];
+ }
+ 
++static u64 cxl_xor_hpa_to_spa(struct cxl_root_decoder *cxlrd, u64 hpa)
++{
++	struct cxl_cxims_data *cximsd = cxlrd->platform_data;
++	int hbiw = cxlrd->cxlsd.nr_targets;
++	u64 val;
++	int pos;
++
++	/* No xormaps for host bridge interleave ways of 1 or 3 */
++	if (hbiw == 1 || hbiw == 3)
++		return hpa;
++
++	/*
++	 * For root decoders using xormaps (hbiw: 2,4,6,8,12,16) restore
++	 * the position bit to its value before the xormap was applied at
++	 * HPA->DPA translation.
++	 *
++	 * pos is the lowest set bit in an XORMAP
++	 * val is the XORALLBITS(HPA & XORMAP)
++	 *
++	 * XORALLBITS: The CXL spec (3.1 Table 9-22) defines XORALLBITS
++	 * as an operation that outputs a single bit by XORing all the
++	 * bits in the input (hpa & xormap). Implement XORALLBITS using
++	 * hweight64(). If the hamming weight is even the XOR of those
++	 * bits results in val==0, if odd the XOR result is val==1.
++	 */
++
++	for (int i = 0; i < cximsd->nr_maps; i++) {
++		if (!cximsd->xormaps[i])
++			continue;
++		pos = __ffs(cximsd->xormaps[i]);
++		val = (hweight64(hpa & cximsd->xormaps[i]) & 1);
++		hpa = (hpa & ~(1ULL << pos)) | (val << pos);
++	}
++
++	return hpa;
++}
++
+ struct cxl_cxims_context {
+ 	struct device *dev;
+ 	struct cxl_root_decoder *cxlrd;
+@@ -434,6 +471,9 @@ static int __cxl_parse_cfmws(struct acpi_cedt_cfmws *cfmws,
+ 
+ 	cxlrd->qos_class = cfmws->qtg_id;
+ 
++	if (cfmws->interleave_arithmetic == ACPI_CEDT_CFMWS_ARITHMETIC_XOR)
++		cxlrd->hpa_to_spa = cxl_xor_hpa_to_spa;
++
+ 	rc = cxl_decoder_add(cxld, target_map);
+ 	if (rc)
+ 		return rc;
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index 0e30e0a29d400b..3345ccbade0bdf 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -2830,20 +2830,13 @@ struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa)
+ 	return ctx.cxlr;
+ }
+ 
+-static bool cxl_is_hpa_in_range(u64 hpa, struct cxl_region *cxlr, int pos)
++static bool cxl_is_hpa_in_chunk(u64 hpa, struct cxl_region *cxlr, int pos)
+ {
+ 	struct cxl_region_params *p = &cxlr->params;
+ 	int gran = p->interleave_granularity;
+ 	int ways = p->interleave_ways;
+ 	u64 offset;
+ 
+-	/* Is the hpa within this region at all */
+-	if (hpa < p->res->start || hpa > p->res->end) {
+-		dev_dbg(&cxlr->dev,
+-			"Addr trans fail: hpa 0x%llx not in region\n", hpa);
+-		return false;
+-	}
+-
+ 	/* Is the hpa in an expected chunk for its pos(-ition) */
+ 	offset = hpa - p->res->start;
+ 	offset = do_div(offset, gran * ways);
+@@ -2859,6 +2852,7 @@ static bool cxl_is_hpa_in_range(u64 hpa, struct cxl_region *cxlr, int pos)
+ static u64 cxl_dpa_to_hpa(u64 dpa,  struct cxl_region *cxlr,
+ 			  struct cxl_endpoint_decoder *cxled)
+ {
++	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent);
+ 	u64 dpa_offset, hpa_offset, bits_upper, mask_upper, hpa;
+ 	struct cxl_region_params *p = &cxlr->params;
+ 	int pos = cxled->pos;
+@@ -2898,7 +2892,18 @@ static u64 cxl_dpa_to_hpa(u64 dpa,  struct cxl_region *cxlr,
+ 	/* Apply the hpa_offset to the region base address */
+ 	hpa = hpa_offset + p->res->start;
+ 
+-	if (!cxl_is_hpa_in_range(hpa, cxlr, cxled->pos))
++	/* Root decoder translation overrides typical modulo decode */
++	if (cxlrd->hpa_to_spa)
++		hpa = cxlrd->hpa_to_spa(cxlrd, hpa);
++
++	if (hpa < p->res->start || hpa > p->res->end) {
++		dev_dbg(&cxlr->dev,
++			"Addr trans fail: hpa 0x%llx not in region\n", hpa);
++		return ULLONG_MAX;
++	}
++
++	/* Simple chunk check, by pos & gran, only applies to modulo decodes */
++	if (!cxlrd->hpa_to_spa && (!cxl_is_hpa_in_chunk(hpa, cxlr, pos)))
+ 		return ULLONG_MAX;
+ 
+ 	return hpa;
+diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
+index a6613a6f892370..b8e16e8697e22e 100644
+--- a/drivers/cxl/cxl.h
++++ b/drivers/cxl/cxl.h
+@@ -436,12 +436,14 @@ struct cxl_switch_decoder {
+ struct cxl_root_decoder;
+ typedef struct cxl_dport *(*cxl_calc_hb_fn)(struct cxl_root_decoder *cxlrd,
+ 					    int pos);
++typedef u64 (*cxl_hpa_to_spa_fn)(struct cxl_root_decoder *cxlrd, u64 hpa);
+ 
+ /**
+  * struct cxl_root_decoder - Static platform CXL address decoder
+  * @res: host / parent resource for region allocations
+  * @region_id: region id for next region provisioning event
+  * @calc_hb: which host bridge covers the n'th position by granularity
++ * @hpa_to_spa: translate CXL host-physical-address to Platform system-physical-address
+  * @platform_data: platform specific configuration data
+  * @range_lock: sync region autodiscovery by address range
+  * @qos_class: QoS performance class cookie
+@@ -451,6 +453,7 @@ struct cxl_root_decoder {
+ 	struct resource *res;
+ 	atomic_t region_id;
+ 	cxl_calc_hb_fn calc_hb;
++	cxl_hpa_to_spa_fn hpa_to_spa;
+ 	void *platform_data;
+ 	struct mutex range_lock;
+ 	int qos_class;
+diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
+index af8169ccdbc055..feb1106559d21f 100644
+--- a/drivers/cxl/cxlmem.h
++++ b/drivers/cxl/cxlmem.h
+@@ -563,7 +563,7 @@ enum cxl_opcode {
+ 		  0x3b, 0x3f, 0x17)
+ 
+ #define DEFINE_CXL_VENDOR_DEBUG_UUID                                           \
+-	UUID_INIT(0xe1819d9, 0x11a9, 0x400c, 0x81, 0x1f, 0xd6, 0x07, 0x19,     \
++	UUID_INIT(0x5e1819d9, 0x11a9, 0x400c, 0x81, 0x1f, 0xd6, 0x07, 0x19,     \
+ 		  0x40, 0x3d, 0x86)
+ 
+ struct cxl_mbox_get_supported_logs {
+diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
+index 4a63567e93bae3..20dbe9265ad7ca 100644
+--- a/drivers/dma-buf/heaps/cma_heap.c
++++ b/drivers/dma-buf/heaps/cma_heap.c
+@@ -165,7 +165,7 @@ static vm_fault_t cma_heap_vm_fault(struct vm_fault *vmf)
+ 	struct vm_area_struct *vma = vmf->vma;
+ 	struct cma_heap_buffer *buffer = vma->vm_private_data;
+ 
+-	if (vmf->pgoff > buffer->pagecount)
++	if (vmf->pgoff >= buffer->pagecount)
+ 		return VM_FAULT_SIGBUS;
+ 
+ 	return vmf_insert_pfn(vma, vmf->address, page_to_pfn(buffer->pages[vmf->pgoff]));
+diff --git a/drivers/firmware/qcom/qcom_qseecom_uefisecapp.c b/drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
+index bc550ad0dbe0c7..68b2c09ed22cd3 100644
+--- a/drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
++++ b/drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
+@@ -786,6 +786,10 @@ static int qcuefi_set_reference(struct qcuefi_client *qcuefi)
+ static struct qcuefi_client *qcuefi_acquire(void)
+ {
+ 	mutex_lock(&__qcuefi_lock);
++	if (!__qcuefi) {
++		mutex_unlock(&__qcuefi_lock);
++		return NULL;
++	}
+ 	return __qcuefi;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+index 1a5439abd1a043..c87d68d4be5365 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+@@ -461,8 +461,11 @@ struct amdgpu_vcn5_fw_shared {
+ 	struct amdgpu_fw_shared_unified_queue_struct sq;
+ 	uint8_t pad1[8];
+ 	struct amdgpu_fw_shared_fw_logging fw_log;
++	uint8_t pad2[20];
+ 	struct amdgpu_fw_shared_rb_setup rb_setup;
+-	uint8_t pad2[4];
++	struct amdgpu_fw_shared_smu_interface_info smu_dpm_interface;
++	struct amdgpu_fw_shared_drm_key_wa drm_key_wa;
++	uint8_t pad3[9];
+ };
+ 
+ #define VCN_BLOCK_ENCODE_DISABLE_MASK 0x80
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c
+index 77595e9622da34..7ac0228fe532ee 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c
+@@ -23,6 +23,7 @@
+ 
+ #include "amdgpu.h"
+ #include "amdgpu_jpeg.h"
++#include "amdgpu_cs.h"
+ #include "soc15.h"
+ #include "soc15d.h"
+ #include "vcn_v1_0.h"
+@@ -34,6 +35,9 @@
+ static void jpeg_v1_0_set_dec_ring_funcs(struct amdgpu_device *adev);
+ static void jpeg_v1_0_set_irq_funcs(struct amdgpu_device *adev);
+ static void jpeg_v1_0_ring_begin_use(struct amdgpu_ring *ring);
++static int jpeg_v1_dec_ring_parse_cs(struct amdgpu_cs_parser *parser,
++				     struct amdgpu_job *job,
++				     struct amdgpu_ib *ib);
+ 
+ static void jpeg_v1_0_decode_ring_patch_wreg(struct amdgpu_ring *ring, uint32_t *ptr, uint32_t reg_offset, uint32_t val)
+ {
+@@ -300,7 +304,10 @@ static void jpeg_v1_0_decode_ring_emit_ib(struct amdgpu_ring *ring,
+ 
+ 	amdgpu_ring_write(ring,
+ 		PACKETJ(SOC15_REG_OFFSET(JPEG, 0, mmUVD_LMI_JRBC_IB_VMID), 0, 0, PACKETJ_TYPE0));
+-	amdgpu_ring_write(ring, (vmid | (vmid << 4)));
++	if (ring->funcs->parse_cs)
++		amdgpu_ring_write(ring, 0);
++	else
++		amdgpu_ring_write(ring, (vmid | (vmid << 4)));
+ 
+ 	amdgpu_ring_write(ring,
+ 		PACKETJ(SOC15_REG_OFFSET(JPEG, 0, mmUVD_LMI_JPEG_VMID), 0, 0, PACKETJ_TYPE0));
+@@ -554,6 +561,7 @@ static const struct amdgpu_ring_funcs jpeg_v1_0_decode_ring_vm_funcs = {
+ 	.get_rptr = jpeg_v1_0_decode_ring_get_rptr,
+ 	.get_wptr = jpeg_v1_0_decode_ring_get_wptr,
+ 	.set_wptr = jpeg_v1_0_decode_ring_set_wptr,
++	.parse_cs = jpeg_v1_dec_ring_parse_cs,
+ 	.emit_frame_size =
+ 		6 + 6 + /* hdp invalidate / flush */
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+@@ -612,3 +620,69 @@ static void jpeg_v1_0_ring_begin_use(struct amdgpu_ring *ring)
+ 
+ 	vcn_v1_0_set_pg_for_begin_use(ring, set_clocks);
+ }
++
++/**
++ * jpeg_v1_dec_ring_parse_cs - command submission parser
++ *
++ * @parser: Command submission parser context
++ * @job: the job to parse
++ * @ib: the IB to parse
++ *
++ * Parse the command stream, return -EINVAL for invalid packet,
++ * 0 otherwise
++ */
++static int jpeg_v1_dec_ring_parse_cs(struct amdgpu_cs_parser *parser,
++				     struct amdgpu_job *job,
++				     struct amdgpu_ib *ib)
++{
++	u32 i, reg, res, cond, type;
++	int ret = 0;
++	struct amdgpu_device *adev = parser->adev;
++
++	for (i = 0; i < ib->length_dw ; i += 2) {
++		reg  = CP_PACKETJ_GET_REG(ib->ptr[i]);
++		res  = CP_PACKETJ_GET_RES(ib->ptr[i]);
++		cond = CP_PACKETJ_GET_COND(ib->ptr[i]);
++		type = CP_PACKETJ_GET_TYPE(ib->ptr[i]);
++
++		if (res || cond != PACKETJ_CONDITION_CHECK0) /* only allow 0 for now */
++			return -EINVAL;
++
++		if (reg >= JPEG_V1_REG_RANGE_START && reg <= JPEG_V1_REG_RANGE_END)
++			continue;
++
++		switch (type) {
++		case PACKETJ_TYPE0:
++			if (reg != JPEG_V1_LMI_JPEG_WRITE_64BIT_BAR_HIGH &&
++			    reg != JPEG_V1_LMI_JPEG_WRITE_64BIT_BAR_LOW &&
++			    reg != JPEG_V1_LMI_JPEG_READ_64BIT_BAR_HIGH &&
++			    reg != JPEG_V1_LMI_JPEG_READ_64BIT_BAR_LOW &&
++			    reg != JPEG_V1_REG_CTX_INDEX &&
++			    reg != JPEG_V1_REG_CTX_DATA) {
++				ret = -EINVAL;
++			}
++			break;
++		case PACKETJ_TYPE1:
++			if (reg != JPEG_V1_REG_CTX_DATA)
++				ret = -EINVAL;
++			break;
++		case PACKETJ_TYPE3:
++			if (reg != JPEG_V1_REG_SOFT_RESET)
++				ret = -EINVAL;
++			break;
++		case PACKETJ_TYPE6:
++			if (ib->ptr[i] != CP_PACKETJ_NOP)
++				ret = -EINVAL;
++			break;
++		default:
++			ret = -EINVAL;
++		}
++
++		if (ret) {
++			dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]);
++			break;
++		}
++	}
++
++	return ret;
++}
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.h b/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.h
+index bbf33a6a397298..9654d22e03763c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.h
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.h
+@@ -29,4 +29,15 @@ int jpeg_v1_0_sw_init(void *handle);
+ void jpeg_v1_0_sw_fini(void *handle);
+ void jpeg_v1_0_start(struct amdgpu_device *adev, int mode);
+ 
++#define JPEG_V1_REG_RANGE_START	0x8000
++#define JPEG_V1_REG_RANGE_END	0x803f
++
++#define JPEG_V1_LMI_JPEG_WRITE_64BIT_BAR_HIGH	0x8238
++#define JPEG_V1_LMI_JPEG_WRITE_64BIT_BAR_LOW	0x8239
++#define JPEG_V1_LMI_JPEG_READ_64BIT_BAR_HIGH	0x825a
++#define JPEG_V1_LMI_JPEG_READ_64BIT_BAR_LOW	0x825b
++#define JPEG_V1_REG_CTX_INDEX			0x8328
++#define JPEG_V1_REG_CTX_DATA			0x8329
++#define JPEG_V1_REG_SOFT_RESET			0x83a0
++
+ #endif /*__JPEG_V1_0_H__*/
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+index 63f84ef6dfcf27..9e9bcf184df2b4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+@@ -23,6 +23,7 @@
+ 
+ #include "amdgpu.h"
+ #include "amdgpu_jpeg.h"
++#include "amdgpu_cs.h"
+ #include "amdgpu_pm.h"
+ #include "soc15.h"
+ #include "soc15d.h"
+@@ -543,7 +544,11 @@ void jpeg_v2_0_dec_ring_emit_ib(struct amdgpu_ring *ring,
+ 
+ 	amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JRBC_IB_VMID_INTERNAL_OFFSET,
+ 		0, 0, PACKETJ_TYPE0));
+-	amdgpu_ring_write(ring, (vmid | (vmid << 4) | (vmid << 8)));
++
++	if (ring->funcs->parse_cs)
++		amdgpu_ring_write(ring, 0);
++	else
++		amdgpu_ring_write(ring, (vmid | (vmid << 4) | (vmid << 8)));
+ 
+ 	amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JPEG_VMID_INTERNAL_OFFSET,
+ 		0, 0, PACKETJ_TYPE0));
+@@ -769,6 +774,7 @@ static const struct amdgpu_ring_funcs jpeg_v2_0_dec_ring_vm_funcs = {
+ 	.get_rptr = jpeg_v2_0_dec_ring_get_rptr,
+ 	.get_wptr = jpeg_v2_0_dec_ring_get_wptr,
+ 	.set_wptr = jpeg_v2_0_dec_ring_set_wptr,
++	.parse_cs = jpeg_v2_dec_ring_parse_cs,
+ 	.emit_frame_size =
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+ 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+@@ -816,3 +822,58 @@ const struct amdgpu_ip_block_version jpeg_v2_0_ip_block = {
+ 		.rev = 0,
+ 		.funcs = &jpeg_v2_0_ip_funcs,
+ };
++
++/**
++ * jpeg_v2_dec_ring_parse_cs - command submission parser
++ *
++ * @parser: Command submission parser context
++ * @job: the job to parse
++ * @ib: the IB to parse
++ *
++ * Parse the command stream, return -EINVAL for invalid packet,
++ * 0 otherwise
++ */
++int jpeg_v2_dec_ring_parse_cs(struct amdgpu_cs_parser *parser,
++			      struct amdgpu_job *job,
++			      struct amdgpu_ib *ib)
++{
++	u32 i, reg, res, cond, type;
++	struct amdgpu_device *adev = parser->adev;
++
++	for (i = 0; i < ib->length_dw ; i += 2) {
++		reg  = CP_PACKETJ_GET_REG(ib->ptr[i]);
++		res  = CP_PACKETJ_GET_RES(ib->ptr[i]);
++		cond = CP_PACKETJ_GET_COND(ib->ptr[i]);
++		type = CP_PACKETJ_GET_TYPE(ib->ptr[i]);
++
++		if (res) /* only support 0 at the moment */
++			return -EINVAL;
++
++		switch (type) {
++		case PACKETJ_TYPE0:
++			if (cond != PACKETJ_CONDITION_CHECK0 || reg < JPEG_REG_RANGE_START ||
++			    reg > JPEG_REG_RANGE_END) {
++				dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]);
++				return -EINVAL;
++			}
++			break;
++		case PACKETJ_TYPE3:
++			if (cond != PACKETJ_CONDITION_CHECK3 || reg < JPEG_REG_RANGE_START ||
++			    reg > JPEG_REG_RANGE_END) {
++				dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]);
++				return -EINVAL;
++			}
++			break;
++		case PACKETJ_TYPE6:
++			if (ib->ptr[i] == CP_PACKETJ_NOP)
++				continue;
++			dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]);
++			return -EINVAL;
++		default:
++			dev_err(adev->dev, "Unknown packet type %d !\n", type);
++			return -EINVAL;
++		}
++	}
++
++	return 0;
++}
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.h b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.h
+index 654e43e83e2c43..63fadda7a67332 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.h
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.h
+@@ -45,6 +45,9 @@
+ 
+ #define JRBC_DEC_EXTERNAL_REG_WRITE_ADDR				0x18000
+ 
++#define JPEG_REG_RANGE_START						0x4000
++#define JPEG_REG_RANGE_END						0x41c2
++
+ void jpeg_v2_0_dec_ring_insert_start(struct amdgpu_ring *ring);
+ void jpeg_v2_0_dec_ring_insert_end(struct amdgpu_ring *ring);
+ void jpeg_v2_0_dec_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq,
+@@ -57,6 +60,9 @@ void jpeg_v2_0_dec_ring_emit_vm_flush(struct amdgpu_ring *ring,
+ 				unsigned vmid, uint64_t pd_addr);
+ void jpeg_v2_0_dec_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg, uint32_t val);
+ void jpeg_v2_0_dec_ring_nop(struct amdgpu_ring *ring, uint32_t count);
++int jpeg_v2_dec_ring_parse_cs(struct amdgpu_cs_parser *parser,
++			      struct amdgpu_job *job,
++			      struct amdgpu_ib *ib);
+ 
+ extern const struct amdgpu_ip_block_version jpeg_v2_0_ip_block;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+index afeaf3c64e2780..c27f2d30ef0c10 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+@@ -664,6 +664,7 @@ static const struct amdgpu_ring_funcs jpeg_v2_5_dec_ring_vm_funcs = {
+ 	.get_rptr = jpeg_v2_5_dec_ring_get_rptr,
+ 	.get_wptr = jpeg_v2_5_dec_ring_get_wptr,
+ 	.set_wptr = jpeg_v2_5_dec_ring_set_wptr,
++	.parse_cs = jpeg_v2_dec_ring_parse_cs,
+ 	.emit_frame_size =
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+ 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+@@ -693,6 +694,7 @@ static const struct amdgpu_ring_funcs jpeg_v2_6_dec_ring_vm_funcs = {
+ 	.get_rptr = jpeg_v2_5_dec_ring_get_rptr,
+ 	.get_wptr = jpeg_v2_5_dec_ring_get_wptr,
+ 	.set_wptr = jpeg_v2_5_dec_ring_set_wptr,
++	.parse_cs = jpeg_v2_dec_ring_parse_cs,
+ 	.emit_frame_size =
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+ 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+index 1c7cf4800bf7bb..5cdd3897358eda 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+@@ -567,6 +567,7 @@ static const struct amdgpu_ring_funcs jpeg_v3_0_dec_ring_vm_funcs = {
+ 	.get_rptr = jpeg_v3_0_dec_ring_get_rptr,
+ 	.get_wptr = jpeg_v3_0_dec_ring_get_wptr,
+ 	.set_wptr = jpeg_v3_0_dec_ring_set_wptr,
++	.parse_cs = jpeg_v2_dec_ring_parse_cs,
+ 	.emit_frame_size =
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+ 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
+index 237fe5df5a8fb5..0115f83bbde680 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
+@@ -729,6 +729,7 @@ static const struct amdgpu_ring_funcs jpeg_v4_0_dec_ring_vm_funcs = {
+ 	.get_rptr = jpeg_v4_0_dec_ring_get_rptr,
+ 	.get_wptr = jpeg_v4_0_dec_ring_get_wptr,
+ 	.set_wptr = jpeg_v4_0_dec_ring_set_wptr,
++	.parse_cs = jpeg_v2_dec_ring_parse_cs,
+ 	.emit_frame_size =
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+ 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.h b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.h
+index 07d36c2abd6bb9..47638fd4d4e212 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.h
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.h
+@@ -32,5 +32,4 @@ enum amdgpu_jpeg_v4_0_sub_block {
+ };
+ 
+ extern const struct amdgpu_ip_block_version jpeg_v4_0_ip_block;
+-
+ #endif /* __JPEG_V4_0_H__ */
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+index d24d06f6d682aa..dfce56d672ff25 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+@@ -23,9 +23,9 @@
+ 
+ #include "amdgpu.h"
+ #include "amdgpu_jpeg.h"
+-#include "amdgpu_cs.h"
+ #include "soc15.h"
+ #include "soc15d.h"
++#include "jpeg_v2_0.h"
+ #include "jpeg_v4_0_3.h"
+ #include "mmsch_v4_0_3.h"
+ 
+@@ -1068,7 +1068,7 @@ static const struct amdgpu_ring_funcs jpeg_v4_0_3_dec_ring_vm_funcs = {
+ 	.get_rptr = jpeg_v4_0_3_dec_ring_get_rptr,
+ 	.get_wptr = jpeg_v4_0_3_dec_ring_get_wptr,
+ 	.set_wptr = jpeg_v4_0_3_dec_ring_set_wptr,
+-	.parse_cs = jpeg_v4_0_3_dec_ring_parse_cs,
++	.parse_cs = jpeg_v2_dec_ring_parse_cs,
+ 	.emit_frame_size =
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+ 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+@@ -1233,56 +1233,3 @@ static void jpeg_v4_0_3_set_ras_funcs(struct amdgpu_device *adev)
+ {
+ 	adev->jpeg.ras = &jpeg_v4_0_3_ras;
+ }
+-
+-/**
+- * jpeg_v4_0_3_dec_ring_parse_cs - command submission parser
+- *
+- * @parser: Command submission parser context
+- * @job: the job to parse
+- * @ib: the IB to parse
+- *
+- * Parse the command stream, return -EINVAL for invalid packet,
+- * 0 otherwise
+- */
+-int jpeg_v4_0_3_dec_ring_parse_cs(struct amdgpu_cs_parser *parser,
+-			     struct amdgpu_job *job,
+-			     struct amdgpu_ib *ib)
+-{
+-	uint32_t i, reg, res, cond, type;
+-	struct amdgpu_device *adev = parser->adev;
+-
+-	for (i = 0; i < ib->length_dw ; i += 2) {
+-		reg  = CP_PACKETJ_GET_REG(ib->ptr[i]);
+-		res  = CP_PACKETJ_GET_RES(ib->ptr[i]);
+-		cond = CP_PACKETJ_GET_COND(ib->ptr[i]);
+-		type = CP_PACKETJ_GET_TYPE(ib->ptr[i]);
+-
+-		if (res) /* only support 0 at the moment */
+-			return -EINVAL;
+-
+-		switch (type) {
+-		case PACKETJ_TYPE0:
+-			if (cond != PACKETJ_CONDITION_CHECK0 || reg < JPEG_REG_RANGE_START || reg > JPEG_REG_RANGE_END) {
+-				dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]);
+-				return -EINVAL;
+-			}
+-			break;
+-		case PACKETJ_TYPE3:
+-			if (cond != PACKETJ_CONDITION_CHECK3 || reg < JPEG_REG_RANGE_START || reg > JPEG_REG_RANGE_END) {
+-				dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]);
+-				return -EINVAL;
+-			}
+-			break;
+-		case PACKETJ_TYPE6:
+-			if (ib->ptr[i] == CP_PACKETJ_NOP)
+-				continue;
+-			dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]);
+-			return -EINVAL;
+-		default:
+-			dev_err(adev->dev, "Unknown packet type %d !\n", type);
+-			return -EINVAL;
+-		}
+-	}
+-
+-	return 0;
+-}
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h
+index 71c54b294e157e..747a3e5f68564c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h
+@@ -46,9 +46,6 @@
+ 
+ #define JRBC_DEC_EXTERNAL_REG_WRITE_ADDR				0x18000
+ 
+-#define JPEG_REG_RANGE_START						0x4000
+-#define JPEG_REG_RANGE_END						0x41c2
+-
+ extern const struct amdgpu_ip_block_version jpeg_v4_0_3_ip_block;
+ 
+ void jpeg_v4_0_3_dec_ring_emit_ib(struct amdgpu_ring *ring,
+@@ -65,7 +62,5 @@ void jpeg_v4_0_3_dec_ring_insert_end(struct amdgpu_ring *ring);
+ void jpeg_v4_0_3_dec_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg, uint32_t val);
+ void jpeg_v4_0_3_dec_ring_emit_reg_wait(struct amdgpu_ring *ring, uint32_t reg,
+ 					uint32_t val, uint32_t mask);
+-int jpeg_v4_0_3_dec_ring_parse_cs(struct amdgpu_cs_parser *parser,
+-				  struct amdgpu_job *job,
+-				  struct amdgpu_ib *ib);
++
+ #endif /* __JPEG_V4_0_3_H__ */
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c
+index 4c8f9772437b52..713cedd57c3483 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c
+@@ -772,6 +772,7 @@ static const struct amdgpu_ring_funcs jpeg_v4_0_5_dec_ring_vm_funcs = {
+ 	.get_rptr = jpeg_v4_0_5_dec_ring_get_rptr,
+ 	.get_wptr = jpeg_v4_0_5_dec_ring_get_wptr,
+ 	.set_wptr = jpeg_v4_0_5_dec_ring_set_wptr,
++	.parse_cs = jpeg_v2_dec_ring_parse_cs,
+ 	.emit_frame_size =
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+ 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
+index 90299f66a44456..1036867d35c88e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
+@@ -26,6 +26,7 @@
+ #include "amdgpu_pm.h"
+ #include "soc15.h"
+ #include "soc15d.h"
++#include "jpeg_v2_0.h"
+ #include "jpeg_v4_0_3.h"
+ 
+ #include "vcn/vcn_5_0_0_offset.h"
+@@ -523,7 +524,7 @@ static const struct amdgpu_ring_funcs jpeg_v5_0_0_dec_ring_vm_funcs = {
+ 	.get_rptr = jpeg_v5_0_0_dec_ring_get_rptr,
+ 	.get_wptr = jpeg_v5_0_0_dec_ring_get_wptr,
+ 	.set_wptr = jpeg_v5_0_0_dec_ring_set_wptr,
+-	.parse_cs = jpeg_v4_0_3_dec_ring_parse_cs,
++	.parse_cs = jpeg_v2_dec_ring_parse_cs,
+ 	.emit_frame_size =
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+ 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+index 872f994dd356e1..422eb1d2c5d385 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+@@ -3223,15 +3223,19 @@ void dcn10_set_drr(struct pipe_ctx **pipe_ctx,
+ 	 * as well.
+ 	 */
+ 	for (i = 0; i < num_pipes; i++) {
+-		if ((pipe_ctx[i]->stream_res.tg != NULL) && pipe_ctx[i]->stream_res.tg->funcs) {
+-			if (pipe_ctx[i]->stream_res.tg->funcs->set_drr)
+-				pipe_ctx[i]->stream_res.tg->funcs->set_drr(
+-					pipe_ctx[i]->stream_res.tg, &params);
++		/* dc_state_destruct() might null the stream resources, so fetch tg
++		 * here first to avoid a race condition. The lifetime of the pointee
++		 * itself (the timing_generator object) is not a problem here.
++		 */
++		struct timing_generator *tg = pipe_ctx[i]->stream_res.tg;
++
++		if ((tg != NULL) && tg->funcs) {
++			if (tg->funcs->set_drr)
++				tg->funcs->set_drr(tg, &params);
+ 			if (adjust.v_total_max != 0 && adjust.v_total_min != 0)
+-				if (pipe_ctx[i]->stream_res.tg->funcs->set_static_screen_control)
+-					pipe_ctx[i]->stream_res.tg->funcs->set_static_screen_control(
+-						pipe_ctx[i]->stream_res.tg,
+-						event_triggers, num_frames);
++				if (tg->funcs->set_static_screen_control)
++					tg->funcs->set_static_screen_control(
++						tg, event_triggers, num_frames);
+ 		}
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index f829ff82797e72..4f0e9e0f701dd2 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -1371,7 +1371,13 @@ void dcn35_set_drr(struct pipe_ctx **pipe_ctx,
+ 	params.vertical_total_mid_frame_num = adjust.v_total_mid_frame_num;
+ 
+ 	for (i = 0; i < num_pipes; i++) {
+-		if ((pipe_ctx[i]->stream_res.tg != NULL) && pipe_ctx[i]->stream_res.tg->funcs) {
++		/* dc_state_destruct() might null the stream resources, so fetch tg
++		 * here first to avoid a race condition. The lifetime of the pointee
++		 * itself (the timing_generator object) is not a problem here.
++		 */
++		struct timing_generator *tg = pipe_ctx[i]->stream_res.tg;
++
++		if ((tg != NULL) && tg->funcs) {
+ 			struct dc_crtc_timing *timing = &pipe_ctx[i]->stream->timing;
+ 			struct dc *dc = pipe_ctx[i]->stream->ctx->dc;
+ 
+@@ -1384,14 +1390,12 @@ void dcn35_set_drr(struct pipe_ctx **pipe_ctx,
+ 					num_frames = 2 * (frame_rate % 60);
+ 				}
+ 			}
+-			if (pipe_ctx[i]->stream_res.tg->funcs->set_drr)
+-				pipe_ctx[i]->stream_res.tg->funcs->set_drr(
+-					pipe_ctx[i]->stream_res.tg, &params);
++			if (tg->funcs->set_drr)
++				tg->funcs->set_drr(tg, &params);
+ 			if (adjust.v_total_max != 0 && adjust.v_total_min != 0)
+-				if (pipe_ctx[i]->stream_res.tg->funcs->set_static_screen_control)
+-					pipe_ctx[i]->stream_res.tg->funcs->set_static_screen_control(
+-						pipe_ctx[i]->stream_res.tg,
+-						event_triggers, num_frames);
++				if (tg->funcs->set_static_screen_control)
++					tg->funcs->set_static_screen_control(
++						tg, event_triggers, num_frames);
+ 		}
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_phy.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_phy.c
+index 2fa4e64e24306b..bafa52a0165a08 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_phy.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_phy.c
+@@ -147,32 +147,25 @@ enum dc_status dp_set_fec_ready(struct dc_link *link, const struct link_resource
+ 
+ 	link_enc = link_enc_cfg_get_link_enc(link);
+ 	ASSERT(link_enc);
++	if (link_enc->funcs->fec_set_ready == NULL)
++		return DC_NOT_SUPPORTED;
+ 
+-	if (!dp_should_enable_fec(link))
+-		return status;
+-
+-	if (link_enc->funcs->fec_set_ready &&
+-			link->dpcd_caps.fec_cap.bits.FEC_CAPABLE) {
+-		if (ready) {
+-			fec_config = 1;
+-			status = core_link_write_dpcd(link,
+-					DP_FEC_CONFIGURATION,
+-					&fec_config,
+-					sizeof(fec_config));
+-			if (status == DC_OK) {
+-				link_enc->funcs->fec_set_ready(link_enc, true);
+-				link->fec_state = dc_link_fec_ready;
+-			} else {
+-				link_enc->funcs->fec_set_ready(link_enc, false);
+-				link->fec_state = dc_link_fec_not_ready;
+-				dm_error("dpcd write failed to set fec_ready");
+-			}
+-		} else if (link->fec_state == dc_link_fec_ready) {
++	if (ready && dp_should_enable_fec(link)) {
++		fec_config = 1;
++
++		status = core_link_write_dpcd(link, DP_FEC_CONFIGURATION,
++				&fec_config, sizeof(fec_config));
++
++		if (status == DC_OK) {
++			link_enc->funcs->fec_set_ready(link_enc, true);
++			link->fec_state = dc_link_fec_ready;
++		}
++	} else {
++		if (link->fec_state == dc_link_fec_ready) {
+ 			fec_config = 0;
+-			status = core_link_write_dpcd(link,
+-					DP_FEC_CONFIGURATION,
+-					&fec_config,
+-					sizeof(fec_config));
++			core_link_write_dpcd(link, DP_FEC_CONFIGURATION,
++				&fec_config, sizeof(fec_config));
++
+ 			link_enc->funcs->fec_set_ready(link_enc, false);
+ 			link->fec_state = dc_link_fec_not_ready;
+ 		}
+@@ -187,14 +180,12 @@ void dp_set_fec_enable(struct dc_link *link, bool enable)
+ 
+ 	link_enc = link_enc_cfg_get_link_enc(link);
+ 	ASSERT(link_enc);
+-
+-	if (!dp_should_enable_fec(link))
++	if (link_enc->funcs->fec_set_enable == NULL)
+ 		return;
+ 
+-	if (link_enc->funcs->fec_set_enable &&
+-			link->dpcd_caps.fec_cap.bits.FEC_CAPABLE) {
+-		if (link->fec_state == dc_link_fec_ready && enable) {
+-			/* Accord to DP spec, FEC enable sequence can first
++	if (enable && dp_should_enable_fec(link)) {
++		if (link->fec_state == dc_link_fec_ready) {
++			/* According to DP spec, FEC enable sequence can first
+ 			 * be transmitted anytime after 1000 LL codes have
+ 			 * been transmitted on the link after link training
+ 			 * completion. Using 1 lane RBR should have the maximum
+@@ -204,7 +195,9 @@ void dp_set_fec_enable(struct dc_link *link, bool enable)
+ 			udelay(7);
+ 			link_enc->funcs->fec_set_enable(link_enc, true);
+ 			link->fec_state = dc_link_fec_enabled;
+-		} else if (link->fec_state == dc_link_fec_enabled && !enable) {
++		}
++	} else {
++		if (link->fec_state == dc_link_fec_enabled) {
+ 			link_enc->funcs->fec_set_enable(link_enc, false);
+ 			link->fec_state = dc_link_fec_ready;
+ 		}
+diff --git a/drivers/gpu/drm/amd/include/atomfirmware.h b/drivers/gpu/drm/amd/include/atomfirmware.h
+index 09cbc3afd6d89d..b0fc22383e2874 100644
+--- a/drivers/gpu/drm/amd/include/atomfirmware.h
++++ b/drivers/gpu/drm/amd/include/atomfirmware.h
+@@ -1038,7 +1038,7 @@ struct display_object_info_table_v1_4
+   uint16_t  supporteddevices;
+   uint8_t   number_of_path;
+   uint8_t   reserved;
+-  struct    atom_display_object_path_v2 display_path[8];   //the real number of this included in the structure is calculated by using the (whole structure size - the header size- number_of_path)/size of atom_display_object_path
++  struct    atom_display_object_path_v2 display_path[];   //the real number of this included in the structure is calculated by using the (whole structure size - the header size- number_of_path)/size of atom_display_object_path
+ };
+ 
+ struct display_object_info_table_v1_5 {
+@@ -1048,7 +1048,7 @@ struct display_object_info_table_v1_5 {
+ 	uint8_t reserved;
+ 	// the real number of this included in the structure is calculated by using the
+ 	// (whole structure size - the header size- number_of_path)/size of atom_display_object_path
+-	struct atom_display_object_path_v3 display_path[8];
++	struct atom_display_object_path_v3 display_path[];
+ };
+ 
+ /* 
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 903f4bfea7e837..70c9bd25f78d01 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -208,6 +208,18 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_MATCH(DMI_BOARD_NAME, "KUN"),
+ 		},
+ 		.driver_data = (void *)&lcd1600x2560_rightside_up,
++	}, {    /* AYN Loki Max */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ayn"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Loki Max"),
++		},
++		.driver_data = (void *)&lcd1080x1920_leftside_up,
++	}, {	/* AYN Loki Zero */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ayn"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Loki Zero"),
++		},
++		.driver_data = (void *)&lcd1080x1920_leftside_up,
+ 	}, {	/* Chuwi HiBook (CWI514) */
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
+diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
+index a0e94217b511a1..4fcfc0b9b386cf 100644
+--- a/drivers/gpu/drm/drm_syncobj.c
++++ b/drivers/gpu/drm/drm_syncobj.c
+@@ -1464,6 +1464,7 @@ drm_syncobj_eventfd_ioctl(struct drm_device *dev, void *data,
+ 	struct drm_syncobj *syncobj;
+ 	struct eventfd_ctx *ev_fd_ctx;
+ 	struct syncobj_eventfd_entry *entry;
++	int ret;
+ 
+ 	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
+ 		return -EOPNOTSUPP;
+@@ -1479,13 +1480,15 @@ drm_syncobj_eventfd_ioctl(struct drm_device *dev, void *data,
+ 		return -ENOENT;
+ 
+ 	ev_fd_ctx = eventfd_ctx_fdget(args->fd);
+-	if (IS_ERR(ev_fd_ctx))
+-		return PTR_ERR(ev_fd_ctx);
++	if (IS_ERR(ev_fd_ctx)) {
++		ret = PTR_ERR(ev_fd_ctx);
++		goto err_fdget;
++	}
+ 
+ 	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
+ 	if (!entry) {
+-		eventfd_ctx_put(ev_fd_ctx);
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto err_kzalloc;
+ 	}
+ 	entry->syncobj = syncobj;
+ 	entry->ev_fd_ctx = ev_fd_ctx;
+@@ -1496,6 +1499,12 @@ drm_syncobj_eventfd_ioctl(struct drm_device *dev, void *data,
+ 	drm_syncobj_put(syncobj);
+ 
+ 	return 0;
++
++err_kzalloc:
++	eventfd_ctx_put(ev_fd_ctx);
++err_fdget:
++	drm_syncobj_put(syncobj);
++	return ret;
+ }
+ 
+ int
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index 0eaa1064242c3e..f8e189a73a7905 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -2842,9 +2842,9 @@ static void prepare_context_registration_info_v70(struct intel_context *ce,
+ 		ce->parallel.guc.wqi_tail = 0;
+ 		ce->parallel.guc.wqi_head = 0;
+ 
+-		wq_desc_offset = i915_ggtt_offset(ce->state) +
++		wq_desc_offset = (u64)i915_ggtt_offset(ce->state) +
+ 				 __get_parent_scratch_offset(ce);
+-		wq_base_offset = i915_ggtt_offset(ce->state) +
++		wq_base_offset = (u64)i915_ggtt_offset(ce->state) +
+ 				 __get_wq_offset(ce);
+ 		info->wq_desc_lo = lower_32_bits(wq_desc_offset);
+ 		info->wq_desc_hi = upper_32_bits(wq_desc_offset);
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 56f409ad7f390f..ab2bace792e46a 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -539,8 +539,8 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+ 	}
+ 
+ 	/* IGT will check if the cursor size is configured */
+-	drm->mode_config.cursor_width = drm->mode_config.max_width;
+-	drm->mode_config.cursor_height = drm->mode_config.max_height;
++	drm->mode_config.cursor_width = 512;
++	drm->mode_config.cursor_height = 512;
+ 
+ 	/* Use OVL device for all DMA memory allocations */
+ 	crtc = drm_crtc_from_index(drm, 0);
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index 074fb498706f26..b93ed15f04a30e 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -99,7 +99,7 @@ static int zap_shader_load_mdt(struct msm_gpu *gpu, const char *fwname,
+ 		 * was a bad idea, and is only provided for backwards
+ 		 * compatibility for older targets.
+ 		 */
+-		return -ENODEV;
++		return -ENOENT;
+ 	}
+ 
+ 	if (IS_ERR(fw)) {
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ram.h b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ram.h
+index 50f0c1914f58e8..4c3f7439657987 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ram.h
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ram.h
+@@ -46,6 +46,8 @@ u32 gm107_ram_probe_fbp(const struct nvkm_ram_func *,
+ u32 gm200_ram_probe_fbp_amount(const struct nvkm_ram_func *, u32,
+ 			       struct nvkm_device *, int, int *);
+ 
++int gp100_ram_init(struct nvkm_ram *);
++
+ /* RAM type-specific MR calculation routines */
+ int nvkm_sddr2_calc(struct nvkm_ram *);
+ int nvkm_sddr3_calc(struct nvkm_ram *);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgp100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgp100.c
+index 378f6fb7099077..8987a21e81d174 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgp100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgp100.c
+@@ -27,7 +27,7 @@
+ #include <subdev/bios/init.h>
+ #include <subdev/bios/rammap.h>
+ 
+-static int
++int
+ gp100_ram_init(struct nvkm_ram *ram)
+ {
+ 	struct nvkm_subdev *subdev = &ram->fb->subdev;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgp102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgp102.c
+index 8550f5e473474b..b6b6ee59019d70 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgp102.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgp102.c
+@@ -5,6 +5,7 @@
+ 
+ static const struct nvkm_ram_func
+ gp102_ram = {
++	.init = gp100_ram_init,
+ };
+ 
+ int
+diff --git a/drivers/gpu/drm/xe/compat-i915-headers/i915_drv.h b/drivers/gpu/drm/xe/compat-i915-headers/i915_drv.h
+index cd4632276141b0..68ade1a05ca915 100644
+--- a/drivers/gpu/drm/xe/compat-i915-headers/i915_drv.h
++++ b/drivers/gpu/drm/xe/compat-i915-headers/i915_drv.h
+@@ -96,7 +96,7 @@ static inline struct drm_i915_private *kdev_to_i915(struct device *kdev)
+ #define HAS_GMD_ID(xe) GRAPHICS_VERx100(xe) >= 1270
+ 
+ /* Workarounds not handled yet */
+-#define IS_DISPLAY_STEP(xe, first, last) ({u8 __step = (xe)->info.step.display; first <= __step && __step <= last; })
++#define IS_DISPLAY_STEP(xe, first, last) ({u8 __step = (xe)->info.step.display; first <= __step && __step < last; })
+ 
+ #define IS_LP(xe) (0)
+ #define IS_GEN9_LP(xe) (0)
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index b6f3a43d637f70..f5e3012eff20d8 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -1539,7 +1539,7 @@ struct xe_bo *xe_bo_create_from_data(struct xe_device *xe, struct xe_tile *tile,
+ 	return bo;
+ }
+ 
+-static void __xe_bo_unpin_map_no_vm(struct drm_device *drm, void *arg)
++static void __xe_bo_unpin_map_no_vm(void *arg)
+ {
+ 	xe_bo_unpin_map_no_vm(arg);
+ }
+@@ -1554,7 +1554,7 @@ struct xe_bo *xe_managed_bo_create_pin_map(struct xe_device *xe, struct xe_tile
+ 	if (IS_ERR(bo))
+ 		return bo;
+ 
+-	ret = drmm_add_action_or_reset(&xe->drm, __xe_bo_unpin_map_no_vm, bo);
++	ret = devm_add_action_or_reset(xe->drm.dev, __xe_bo_unpin_map_no_vm, bo);
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
+@@ -1602,7 +1602,7 @@ int xe_managed_bo_reinit_in_vram(struct xe_device *xe, struct xe_tile *tile, str
+ 	if (IS_ERR(bo))
+ 		return PTR_ERR(bo);
+ 
+-	drmm_release_action(&xe->drm, __xe_bo_unpin_map_no_vm, *src);
++	devm_release_action(xe->drm.dev, __xe_bo_unpin_map_no_vm, *src);
+ 	*src = bo;
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
+index 08f0b7c95901f7..ee15daa2150796 100644
+--- a/drivers/gpu/drm/xe/xe_drm_client.c
++++ b/drivers/gpu/drm/xe/xe_drm_client.c
+@@ -9,6 +9,7 @@
+ #include <linux/slab.h>
+ #include <linux/types.h>
+ 
++#include "xe_assert.h"
+ #include "xe_bo.h"
+ #include "xe_bo_types.h"
+ #include "xe_device_types.h"
+@@ -93,10 +94,13 @@ void xe_drm_client_add_bo(struct xe_drm_client *client,
+  */
+ void xe_drm_client_remove_bo(struct xe_bo *bo)
+ {
++	struct xe_device *xe = ttm_to_xe_device(bo->ttm.bdev);
+ 	struct xe_drm_client *client = bo->client;
+ 
++	xe_assert(xe, !kref_read(&bo->ttm.base.refcount));
++
+ 	spin_lock(&client->bos_lock);
+-	list_del(&bo->client_link);
++	list_del_init(&bo->client_link);
+ 	spin_unlock(&client->bos_lock);
+ 
+ 	xe_drm_client_put(client);
+@@ -108,6 +112,8 @@ static void bo_meminfo(struct xe_bo *bo,
+ 	u64 sz = bo->size;
+ 	u32 mem_type;
+ 
++	xe_bo_assert_held(bo);
++
+ 	if (bo->placement.placement)
+ 		mem_type = bo->placement.placement->mem_type;
+ 	else
+@@ -138,6 +144,7 @@ static void show_meminfo(struct drm_printer *p, struct drm_file *file)
+ 	struct xe_drm_client *client;
+ 	struct drm_gem_object *obj;
+ 	struct xe_bo *bo;
++	LLIST_HEAD(deferred);
+ 	unsigned int id;
+ 	u32 mem_type;
+ 
+@@ -148,7 +155,20 @@ static void show_meminfo(struct drm_printer *p, struct drm_file *file)
+ 	idr_for_each_entry(&file->object_idr, obj, id) {
+ 		struct xe_bo *bo = gem_to_xe_bo(obj);
+ 
+-		bo_meminfo(bo, stats);
++		if (dma_resv_trylock(bo->ttm.base.resv)) {
++			bo_meminfo(bo, stats);
++			xe_bo_unlock(bo);
++		} else {
++			xe_bo_get(bo);
++			spin_unlock(&file->table_lock);
++
++			xe_bo_lock(bo, false);
++			bo_meminfo(bo, stats);
++			xe_bo_unlock(bo);
++
++			xe_bo_put(bo);
++			spin_lock(&file->table_lock);
++		}
+ 	}
+ 	spin_unlock(&file->table_lock);
+ 
+@@ -157,11 +177,28 @@ static void show_meminfo(struct drm_printer *p, struct drm_file *file)
+ 	list_for_each_entry(bo, &client->bos_list, client_link) {
+ 		if (!kref_get_unless_zero(&bo->ttm.base.refcount))
+ 			continue;
+-		bo_meminfo(bo, stats);
+-		xe_bo_put(bo);
++
++		if (dma_resv_trylock(bo->ttm.base.resv)) {
++			bo_meminfo(bo, stats);
++			xe_bo_unlock(bo);
++		} else {
++			spin_unlock(&client->bos_lock);
++
++			xe_bo_lock(bo, false);
++			bo_meminfo(bo, stats);
++			xe_bo_unlock(bo);
++
++			spin_lock(&client->bos_lock);
++			/* The bo ref will prevent this bo from being removed from the list */
++			xe_assert(xef->xe, !list_empty(&bo->client_link));
++		}
++
++		xe_bo_put_deferred(bo, &deferred);
+ 	}
+ 	spin_unlock(&client->bos_lock);
+ 
++	xe_bo_put_commit(&deferred);
++
+ 	for (mem_type = XE_PL_SYSTEM; mem_type < TTM_NUM_MEM_TYPES; ++mem_type) {
+ 		if (!xe_mem_type_to_name[mem_type])
+ 			continue;
+diff --git a/drivers/gpu/drm/xe/xe_gsc.c b/drivers/gpu/drm/xe/xe_gsc.c
+index 95c17b72fa5746..c9a4ffcfdcca99 100644
+--- a/drivers/gpu/drm/xe/xe_gsc.c
++++ b/drivers/gpu/drm/xe/xe_gsc.c
+@@ -256,7 +256,7 @@ static int gsc_upload_and_init(struct xe_gsc *gsc)
+ 	struct xe_tile *tile = gt_to_tile(gt);
+ 	int ret;
+ 
+-	if (XE_WA(gt, 14018094691)) {
++	if (XE_WA(tile->primary_gt, 14018094691)) {
+ 		ret = xe_force_wake_get(gt_to_fw(tile->primary_gt), XE_FORCEWAKE_ALL);
+ 
+ 		/*
+@@ -274,7 +274,7 @@ static int gsc_upload_and_init(struct xe_gsc *gsc)
+ 
+ 	ret = gsc_upload(gsc);
+ 
+-	if (XE_WA(gt, 14018094691))
++	if (XE_WA(tile->primary_gt, 14018094691))
+ 		xe_force_wake_put(gt_to_fw(tile->primary_gt), XE_FORCEWAKE_ALL);
+ 
+ 	if (ret)
+diff --git a/drivers/gpu/drm/xe/xe_wa.c b/drivers/gpu/drm/xe/xe_wa.c
+index 66dafe980b9c2a..1f7699d7fffba1 100644
+--- a/drivers/gpu/drm/xe/xe_wa.c
++++ b/drivers/gpu/drm/xe/xe_wa.c
+@@ -542,6 +542,16 @@ static const struct xe_rtp_entry_sr engine_was[] = {
+ 	  XE_RTP_ACTIONS(SET(HALF_SLICE_CHICKEN7, CLEAR_OPTIMIZATION_DISABLE))
+ 	},
+ 
++	/* Xe2_LPM */
++
++	{ XE_RTP_NAME("16021639441"),
++	  XE_RTP_RULES(MEDIA_VERSION(2000)),
++	  XE_RTP_ACTIONS(SET(CSFE_CHICKEN1(0),
++			     GHWSP_CSB_REPORT_DIS |
++			     PPHWSP_CSB_AND_TIMESTAMP_REPORT_DIS,
++			     XE_RTP_ACTION_FLAG(ENGINE_BASE)))
++	},
++
+ 	/* Xe2_HPM */
+ 
+ 	{ XE_RTP_NAME("16021639441"),
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index 37e6d25593c211..a282388b7aa5c1 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -1248,6 +1248,9 @@ static const struct hid_device_id asus_devices[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK,
+ 	    USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY),
+ 	  QUIRK_USE_KBD_BACKLIGHT | QUIRK_ROG_NKEY_KEYBOARD },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK,
++	    USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY_X),
++	  QUIRK_USE_KBD_BACKLIGHT | QUIRK_ROG_NKEY_KEYBOARD },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK,
+ 	    USB_DEVICE_ID_ASUSTEK_ROG_CLAYMORE_II_KEYBOARD),
+ 	  QUIRK_ROG_CLAYMORE_II_KEYBOARD },
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 72d56ee7ce1b98..781c5aa298598a 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -210,6 +210,7 @@
+ #define USB_DEVICE_ID_ASUSTEK_ROG_NKEY_KEYBOARD3	0x1a30
+ #define USB_DEVICE_ID_ASUSTEK_ROG_Z13_LIGHTBAR		0x18c6
+ #define USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY		0x1abe
++#define USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY_X		0x1b4c
+ #define USB_DEVICE_ID_ASUSTEK_ROG_CLAYMORE_II_KEYBOARD	0x196b
+ #define USB_DEVICE_ID_ASUSTEK_FX503VD_KEYBOARD	0x1869
+ 
+@@ -520,6 +521,8 @@
+ #define USB_DEVICE_ID_GENERAL_TOUCH_WIN8_PIT_E100 0xe100
+ 
+ #define I2C_VENDOR_ID_GOODIX		0x27c6
++#define I2C_DEVICE_ID_GOODIX_01E8	0x01e8
++#define I2C_DEVICE_ID_GOODIX_01E9	0x01e9
+ #define I2C_DEVICE_ID_GOODIX_01F0	0x01f0
+ 
+ #define USB_VENDOR_ID_GOODTOUCH		0x1aad
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 56fc78841f245a..99812c0f830b5e 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1441,6 +1441,30 @@ static int mt_event(struct hid_device *hid, struct hid_field *field,
+ 	return 0;
+ }
+ 
++static __u8 *mt_report_fixup(struct hid_device *hdev, __u8 *rdesc,
++			     unsigned int *size)
++{
++	if (hdev->vendor == I2C_VENDOR_ID_GOODIX &&
++	    (hdev->product == I2C_DEVICE_ID_GOODIX_01E8 ||
++	     hdev->product == I2C_DEVICE_ID_GOODIX_01E9)) {
++		if (rdesc[607] == 0x15) {
++			rdesc[607] = 0x25;
++			dev_info(
++				&hdev->dev,
++				"GT7868Q report descriptor fixup is applied.\n");
++		} else {
++			dev_info(
++				&hdev->dev,
++				"The byte is not expected for fixing the report descriptor. \
++It's possible that the touchpad firmware is not suitable for applying the fix. \
++got: %x\n",
++				rdesc[607]);
++		}
++	}
++
++	return rdesc;
++}
++
+ static void mt_report(struct hid_device *hid, struct hid_report *report)
+ {
+ 	struct mt_device *td = hid_get_drvdata(hid);
+@@ -2035,6 +2059,14 @@ static const struct hid_device_id mt_devices[] = {
+ 		MT_BT_DEVICE(USB_VENDOR_ID_FRUCTEL,
+ 			USB_DEVICE_ID_GAMETEL_MT_MODE) },
+ 
++	/* Goodix GT7868Q devices */
++	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
++	  HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
++		     I2C_DEVICE_ID_GOODIX_01E8) },
++	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
++	  HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
++		     I2C_DEVICE_ID_GOODIX_01E9) },
++
+ 	/* GoodTouch panels */
+ 	{ .driver_data = MT_CLS_NSMU,
+ 		MT_USB_DEVICE(USB_VENDOR_ID_GOODTOUCH,
+@@ -2270,6 +2302,7 @@ static struct hid_driver mt_driver = {
+ 	.feature_mapping = mt_feature_mapping,
+ 	.usage_table = mt_grabbed_usages,
+ 	.event = mt_event,
++	.report_fixup = mt_report_fixup,
+ 	.report = mt_report,
+ 	.suspend = pm_ptr(mt_suspend),
+ 	.reset_resume = pm_ptr(mt_reset_resume),
+diff --git a/drivers/hwmon/pmbus/pmbus.h b/drivers/hwmon/pmbus/pmbus.h
+index fb442fae7b3e35..0bea603994e7b2 100644
+--- a/drivers/hwmon/pmbus/pmbus.h
++++ b/drivers/hwmon/pmbus/pmbus.h
+@@ -418,6 +418,12 @@ enum pmbus_sensor_classes {
+ enum pmbus_data_format { linear = 0, ieee754, direct, vid };
+ enum vrm_version { vr11 = 0, vr12, vr13, imvp9, amd625mv };
+ 
++/* PMBus revision identifiers */
++#define PMBUS_REV_10 0x00	/* PMBus revision 1.0 */
++#define PMBUS_REV_11 0x11	/* PMBus revision 1.1 */
++#define PMBUS_REV_12 0x22	/* PMBus revision 1.2 */
++#define PMBUS_REV_13 0x33	/* PMBus revision 1.3 */
++
+ struct pmbus_driver_info {
+ 	int pages;		/* Total number of pages */
+ 	u8 phases[PMBUS_PAGES];	/* Number of phases per page */
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index cb4c65a7f288c1..e592446b26653e 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -85,6 +85,8 @@ struct pmbus_data {
+ 
+ 	u32 flags;		/* from platform data */
+ 
++	u8 revision;	/* The PMBus revision the device is compliant with */
++
+ 	int exponent[PMBUS_PAGES];
+ 				/* linear mode: exponent for output voltages */
+ 
+@@ -1095,9 +1097,14 @@ static int pmbus_get_boolean(struct i2c_client *client, struct pmbus_boolean *b,
+ 
+ 	regval = status & mask;
+ 	if (regval) {
+-		ret = _pmbus_write_byte_data(client, page, reg, regval);
+-		if (ret)
+-			goto unlock;
++		if (data->revision >= PMBUS_REV_12) {
++			ret = _pmbus_write_byte_data(client, page, reg, regval);
++			if (ret)
++				goto unlock;
++		} else {
++			pmbus_clear_fault_page(client, page);
++		}
++
+ 	}
+ 	if (s1 && s2) {
+ 		s64 v1, v2;
+@@ -2640,6 +2647,10 @@ static int pmbus_init_common(struct i2c_client *client, struct pmbus_data *data,
+ 			data->flags |= PMBUS_WRITE_PROTECTED | PMBUS_SKIP_STATUS_CHECK;
+ 	}
+ 
++	ret = i2c_smbus_read_byte_data(client, PMBUS_REVISION);
++	if (ret >= 0)
++		data->revision = ret;
++
+ 	if (data->info->pages)
+ 		pmbus_clear_faults(client);
+ 	else
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 7a303a9d6bf72b..cff3393f0dd000 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -189,6 +189,7 @@ static const char * const smbus_pnp_ids[] = {
+ 	"LEN2054", /* E480 */
+ 	"LEN2055", /* E580 */
+ 	"LEN2068", /* T14 Gen 1 */
++	"SYN3015", /* HP EliteBook 840 G2 */
+ 	"SYN3052", /* HP EliteBook 840 G4 */
+ 	"SYN3221", /* HP 15-ay000 */
+ 	"SYN323d", /* HP Spectre X360 13-w013dx */
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index e9eb9554dd7bdc..bad238f69a7afd 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -627,6 +627,15 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 		},
+ 		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
+ 	},
++	{
++		/* Fujitsu Lifebook E756 */
++		/* https://bugzilla.suse.com/show_bug.cgi?id=1229056 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E756"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX)
++	},
+ 	{
+ 		/* Fujitsu Lifebook E5411 */
+ 		.matches = {
+diff --git a/drivers/input/touchscreen/ads7846.c b/drivers/input/touchscreen/ads7846.c
+index 4d13db13b9e578..b42096c797bf84 100644
+--- a/drivers/input/touchscreen/ads7846.c
++++ b/drivers/input/touchscreen/ads7846.c
+@@ -805,7 +805,7 @@ static void ads7846_read_state(struct ads7846 *ts)
+ 		m = &ts->msg[msg_idx];
+ 		error = spi_sync(ts->spi, m);
+ 		if (error) {
+-			dev_err(&ts->spi->dev, "spi_sync --> %d\n", error);
++			dev_err_ratelimited(&ts->spi->dev, "spi_sync --> %d\n", error);
+ 			packet->ignore = true;
+ 			return;
+ 		}
+diff --git a/drivers/input/touchscreen/edt-ft5x06.c b/drivers/input/touchscreen/edt-ft5x06.c
+index 06ec0f2e18ae3b..b0b5b6241b4434 100644
+--- a/drivers/input/touchscreen/edt-ft5x06.c
++++ b/drivers/input/touchscreen/edt-ft5x06.c
+@@ -1474,6 +1474,10 @@ static const struct edt_i2c_chip_data edt_ft6236_data = {
+ 	.max_support_points = 2,
+ };
+ 
++static const struct edt_i2c_chip_data edt_ft8201_data = {
++	.max_support_points = 10,
++};
++
+ static const struct edt_i2c_chip_data edt_ft8719_data = {
+ 	.max_support_points = 10,
+ };
+@@ -1485,6 +1489,7 @@ static const struct i2c_device_id edt_ft5x06_ts_id[] = {
+ 	{ .name = "ft5452", .driver_data = (long)&edt_ft5452_data },
+ 	/* Note no edt- prefix for compatibility with the ft6236.c driver */
+ 	{ .name = "ft6236", .driver_data = (long)&edt_ft6236_data },
++	{ .name = "ft8201", .driver_data = (long)&edt_ft8201_data },
+ 	{ .name = "ft8719", .driver_data = (long)&edt_ft8719_data },
+ 	{ /* sentinel */ }
+ };
+@@ -1499,6 +1504,7 @@ static const struct of_device_id edt_ft5x06_of_match[] = {
+ 	{ .compatible = "focaltech,ft5452", .data = &edt_ft5452_data },
+ 	/* Note focaltech vendor prefix for compatibility with ft6236.c */
+ 	{ .compatible = "focaltech,ft6236", .data = &edt_ft6236_data },
++	{ .compatible = "focaltech,ft8201", .data = &edt_ft8201_data },
+ 	{ .compatible = "focaltech,ft8719", .data = &edt_ft8719_data },
+ 	{ /* sentinel */ }
+ };
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 417fddebe367a2..a74c22cb0f5319 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -2173,6 +2173,7 @@ static void dm_integrity_map_continue(struct dm_integrity_io *dio, bool from_map
+ 	struct bio *bio = dm_bio_from_per_bio_data(dio, sizeof(struct dm_integrity_io));
+ 	unsigned int journal_section, journal_entry;
+ 	unsigned int journal_read_pos;
++	sector_t recalc_sector;
+ 	struct completion read_comp;
+ 	bool discard_retried = false;
+ 	bool need_sync_io = ic->internal_hash && dio->op == REQ_OP_READ;
+@@ -2313,6 +2314,7 @@ static void dm_integrity_map_continue(struct dm_integrity_io *dio, bool from_map
+ 			goto lock_retry;
+ 		}
+ 	}
++	recalc_sector = le64_to_cpu(ic->sb->recalc_sector);
+ 	spin_unlock_irq(&ic->endio_wait.lock);
+ 
+ 	if (unlikely(journal_read_pos != NOT_FOUND)) {
+@@ -2367,7 +2369,7 @@ static void dm_integrity_map_continue(struct dm_integrity_io *dio, bool from_map
+ 	if (need_sync_io) {
+ 		wait_for_completion_io(&read_comp);
+ 		if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING) &&
+-		    dio->range.logical_sector + dio->range.n_sectors > le64_to_cpu(ic->sb->recalc_sector))
++		    dio->range.logical_sector + dio->range.n_sectors > recalc_sector)
+ 			goto skip_check;
+ 		if (ic->mode == 'B') {
+ 			if (!block_bitmap_op(ic, ic->recalc_bitmap, dio->range.logical_sector,
+diff --git a/drivers/misc/eeprom/digsy_mtc_eeprom.c b/drivers/misc/eeprom/digsy_mtc_eeprom.c
+index f1f766b709657b..4eddc5ba1af9c8 100644
+--- a/drivers/misc/eeprom/digsy_mtc_eeprom.c
++++ b/drivers/misc/eeprom/digsy_mtc_eeprom.c
+@@ -42,7 +42,7 @@ static void digsy_mtc_op_finish(void *p)
+ }
+ 
+ struct eeprom_93xx46_platform_data digsy_mtc_eeprom_data = {
+-	.flags		= EE_ADDR8,
++	.flags		= EE_ADDR8 | EE_SIZE1K,
+ 	.prepare	= digsy_mtc_op_prepare,
+ 	.finish		= digsy_mtc_op_finish,
+ };
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 85952d841f2856..bd061997618d95 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -1474,10 +1474,13 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
+ 	/* Hardware errata -  Admin config could not be overwritten if
+ 	 * config is pending, need reset the TAS module
+ 	 */
+-	val = ocelot_read(ocelot, QSYS_PARAM_STATUS_REG_8);
+-	if (val & QSYS_PARAM_STATUS_REG_8_CONFIG_PENDING) {
+-		ret = -EBUSY;
+-		goto err_reset_tc;
++	val = ocelot_read_rix(ocelot, QSYS_TAG_CONFIG, port);
++	if (val & QSYS_TAG_CONFIG_ENABLE) {
++		val = ocelot_read(ocelot, QSYS_PARAM_STATUS_REG_8);
++		if (val & QSYS_PARAM_STATUS_REG_8_CONFIG_PENDING) {
++			ret = -EBUSY;
++			goto err_reset_tc;
++		}
+ 	}
+ 
+ 	ocelot_rmw_rix(ocelot,
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.h b/drivers/net/ethernet/faraday/ftgmac100.h
+index 63b3e02fab162e..4968f6f0bdbc25 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.h
++++ b/drivers/net/ethernet/faraday/ftgmac100.h
+@@ -84,7 +84,7 @@
+ 			    FTGMAC100_INT_RPKT_BUF)
+ 
+ /* All the interrupts we care about */
+-#define FTGMAC100_INT_ALL (FTGMAC100_INT_RPKT_BUF  |  \
++#define FTGMAC100_INT_ALL (FTGMAC100_INT_RXTX  |  \
+ 			   FTGMAC100_INT_BAD)
+ 
+ /*
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index 946c3d3b69d946..669fb5804d3ba0 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -2285,12 +2285,12 @@ static netdev_tx_t
+ dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
+ {
+ 	const int queue_mapping = skb_get_queue_mapping(skb);
+-	bool nonlinear = skb_is_nonlinear(skb);
+ 	struct rtnl_link_stats64 *percpu_stats;
+ 	struct dpaa_percpu_priv *percpu_priv;
+ 	struct netdev_queue *txq;
+ 	struct dpaa_priv *priv;
+ 	struct qm_fd fd;
++	bool nonlinear;
+ 	int offset = 0;
+ 	int err = 0;
+ 
+@@ -2300,6 +2300,13 @@ dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
+ 
+ 	qm_fd_clear_fd(&fd);
+ 
++	/* Packet data is always read as 32-bit words, so zero out any part of
++	 * the skb which might be sent if we have to pad the packet
++	 */
++	if (__skb_put_padto(skb, ETH_ZLEN, false))
++		goto enomem;
++
++	nonlinear = skb_is_nonlinear(skb);
+ 	if (!nonlinear) {
+ 		/* We're going to store the skb backpointer at the beginning
+ 		 * of the data buffer, so we need a privately owned skb
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 465f0d58228374..6c33195a1168f8 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -11456,7 +11456,7 @@ static void hclge_pci_uninit(struct hclge_dev *hdev)
+ 
+ 	pcim_iounmap(pdev, hdev->hw.hw.io_base);
+ 	pci_free_irq_vectors(pdev);
+-	pci_release_mem_regions(pdev);
++	pci_release_regions(pdev);
+ 	pci_disable_device(pdev);
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 7076a77388641c..c2ba586593475c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -2413,13 +2413,6 @@ void ice_vsi_decfg(struct ice_vsi *vsi)
+ 	struct ice_pf *pf = vsi->back;
+ 	int err;
+ 
+-	/* The Rx rule will only exist to remove if the LLDP FW
+-	 * engine is currently stopped
+-	 */
+-	if (!ice_is_safe_mode(pf) && vsi->type == ICE_VSI_PF &&
+-	    !test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags))
+-		ice_cfg_sw_lldp(vsi, false, false);
+-
+ 	ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
+ 	err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
+ 	if (err)
+@@ -2764,6 +2757,14 @@ int ice_vsi_release(struct ice_vsi *vsi)
+ 		ice_rss_clean(vsi);
+ 
+ 	ice_vsi_close(vsi);
++
++	/* The Rx rule will only exist to remove if the LLDP FW
++	 * engine is currently stopped
++	 */
++	if (!ice_is_safe_mode(pf) && vsi->type == ICE_VSI_PF &&
++	    !test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags))
++		ice_cfg_sw_lldp(vsi, false, false);
++
+ 	ice_vsi_decfg(vsi);
+ 
+ 	/* retain SW VSI data structure since it is needed to unregister and
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index ffd6c42bda1ed4..0b85b3653a686d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -3219,7 +3219,7 @@ ice_add_update_vsi_list(struct ice_hw *hw,
+ 
+ 		/* A rule already exists with the new VSI being added */
+ 		if (test_bit(vsi_handle, m_entry->vsi_list_info->vsi_map))
+-			return 0;
++			return -EEXIST;
+ 
+ 		/* Update the previously created VSI list set with
+ 		 * the new VSI ID passed in
+@@ -3289,7 +3289,7 @@ ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle,
+ 
+ 	list_head = &sw->recp_list[recp_id].filt_rules;
+ 	list_for_each_entry(list_itr, list_head, list_entry) {
+-		if (list_itr->vsi_list_info) {
++		if (list_itr->vsi_count == 1 && list_itr->vsi_list_info) {
+ 			map_info = list_itr->vsi_list_info;
+ 			if (test_bit(vsi_handle, map_info->vsi_map)) {
+ 				*vsi_list_id = map_info->vsi_list_id;
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index a27d0a4d3d9c4b..6dc5c11aebbd3a 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -33,6 +33,7 @@
+ #include <linux/bpf_trace.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/etherdevice.h>
++#include <linux/lockdep.h>
+ #ifdef CONFIG_IGB_DCA
+ #include <linux/dca.h>
+ #endif
+@@ -2915,8 +2916,11 @@ static int igb_xdp(struct net_device *dev, struct netdev_bpf *xdp)
+ 	}
+ }
+ 
++/* This function assumes __netif_tx_lock is held by the caller. */
+ static void igb_xdp_ring_update_tail(struct igb_ring *ring)
+ {
++	lockdep_assert_held(&txring_txq(ring)->_xmit_lock);
++
+ 	/* Force memory writes to complete before letting h/w know there
+ 	 * are new descriptors to fetch.
+ 	 */
+@@ -3001,11 +3005,11 @@ static int igb_xdp_xmit(struct net_device *dev, int n,
+ 		nxmit++;
+ 	}
+ 
+-	__netif_tx_unlock(nq);
+-
+ 	if (unlikely(flags & XDP_XMIT_FLUSH))
+ 		igb_xdp_ring_update_tail(tx_ring);
+ 
++	__netif_tx_unlock(nq);
++
+ 	return nxmit;
+ }
+ 
+@@ -8865,12 +8869,14 @@ static void igb_put_rx_buffer(struct igb_ring *rx_ring,
+ 
+ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget)
+ {
++	unsigned int total_bytes = 0, total_packets = 0;
+ 	struct igb_adapter *adapter = q_vector->adapter;
+ 	struct igb_ring *rx_ring = q_vector->rx.ring;
+-	struct sk_buff *skb = rx_ring->skb;
+-	unsigned int total_bytes = 0, total_packets = 0;
+ 	u16 cleaned_count = igb_desc_unused(rx_ring);
++	struct sk_buff *skb = rx_ring->skb;
++	int cpu = smp_processor_id();
+ 	unsigned int xdp_xmit = 0;
++	struct netdev_queue *nq;
+ 	struct xdp_buff xdp;
+ 	u32 frame_sz = 0;
+ 	int rx_buf_pgcnt;
+@@ -8998,7 +9004,10 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget)
+ 	if (xdp_xmit & IGB_XDP_TX) {
+ 		struct igb_ring *tx_ring = igb_xdp_tx_queue_mapping(adapter);
+ 
++		nq = txring_txq(tx_ring);
++		__netif_tx_lock(nq, cpu);
+ 		igb_xdp_ring_update_tail(tx_ring);
++		__netif_tx_unlock(nq);
+ 	}
+ 
+ 	u64_stats_update_begin(&rx_ring->rx_syncp);
+diff --git a/drivers/net/ethernet/jme.c b/drivers/net/ethernet/jme.c
+index b06e245629739f..d8be0e4dcb072b 100644
+--- a/drivers/net/ethernet/jme.c
++++ b/drivers/net/ethernet/jme.c
+@@ -946,15 +946,13 @@ jme_udpsum(struct sk_buff *skb)
+ 	if (skb->protocol != htons(ETH_P_IP))
+ 		return csum;
+ 	skb_set_network_header(skb, ETH_HLEN);
+-	if ((ip_hdr(skb)->protocol != IPPROTO_UDP) ||
+-	    (skb->len < (ETH_HLEN +
+-			(ip_hdr(skb)->ihl << 2) +
+-			sizeof(struct udphdr)))) {
++
++	if (ip_hdr(skb)->protocol != IPPROTO_UDP ||
++	    skb->len < (ETH_HLEN + ip_hdrlen(skb) + sizeof(struct udphdr))) {
+ 		skb_reset_network_header(skb);
+ 		return csum;
+ 	}
+-	skb_set_transport_header(skb,
+-			ETH_HLEN + (ip_hdr(skb)->ihl << 2));
++	skb_set_transport_header(skb, ETH_HLEN + ip_hdrlen(skb));
+ 	csum = udp_hdr(skb)->check;
+ 	skb_reset_transport_header(skb);
+ 	skb_reset_network_header(skb);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index 35834687e40fe9..96a7b23428be24 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -318,6 +318,7 @@ struct nix_mark_format {
+ 
+ /* smq(flush) to tl1 cir/pir info */
+ struct nix_smq_tree_ctx {
++	u16 schq;
+ 	u64 cir_off;
+ 	u64 cir_val;
+ 	u64 pir_off;
+@@ -327,8 +328,6 @@ struct nix_smq_tree_ctx {
+ /* smq flush context */
+ struct nix_smq_flush_ctx {
+ 	int smq;
+-	u16 tl1_schq;
+-	u16 tl2_schq;
+ 	struct nix_smq_tree_ctx smq_tree_ctx[NIX_TXSCH_LVL_CNT];
+ };
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index 3dc828cf6c5a6f..10f8efff7843de 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -2259,14 +2259,13 @@ static void nix_smq_flush_fill_ctx(struct rvu *rvu, int blkaddr, int smq,
+ 	schq = smq;
+ 	for (lvl = NIX_TXSCH_LVL_SMQ; lvl <= NIX_TXSCH_LVL_TL1; lvl++) {
+ 		smq_tree_ctx = &smq_flush_ctx->smq_tree_ctx[lvl];
++		smq_tree_ctx->schq = schq;
+ 		if (lvl == NIX_TXSCH_LVL_TL1) {
+-			smq_flush_ctx->tl1_schq = schq;
+ 			smq_tree_ctx->cir_off = NIX_AF_TL1X_CIR(schq);
+ 			smq_tree_ctx->pir_off = 0;
+ 			smq_tree_ctx->pir_val = 0;
+ 			parent_off = 0;
+ 		} else if (lvl == NIX_TXSCH_LVL_TL2) {
+-			smq_flush_ctx->tl2_schq = schq;
+ 			smq_tree_ctx->cir_off = NIX_AF_TL2X_CIR(schq);
+ 			smq_tree_ctx->pir_off = NIX_AF_TL2X_PIR(schq);
+ 			parent_off = NIX_AF_TL2X_PARENT(schq);
+@@ -2301,8 +2300,8 @@ static void nix_smq_flush_enadis_xoff(struct rvu *rvu, int blkaddr,
+ {
+ 	struct nix_txsch *txsch;
+ 	struct nix_hw *nix_hw;
++	int tl2, tl2_schq;
+ 	u64 regoff;
+-	int tl2;
+ 
+ 	nix_hw = get_nix_hw(rvu->hw, blkaddr);
+ 	if (!nix_hw)
+@@ -2310,16 +2309,17 @@ static void nix_smq_flush_enadis_xoff(struct rvu *rvu, int blkaddr,
+ 
+ 	/* loop through all TL2s with matching PF_FUNC */
+ 	txsch = &nix_hw->txsch[NIX_TXSCH_LVL_TL2];
++	tl2_schq = smq_flush_ctx->smq_tree_ctx[NIX_TXSCH_LVL_TL2].schq;
+ 	for (tl2 = 0; tl2 < txsch->schq.max; tl2++) {
+ 		/* skip the smq(flush) TL2 */
+-		if (tl2 == smq_flush_ctx->tl2_schq)
++		if (tl2 == tl2_schq)
+ 			continue;
+ 		/* skip unused TL2s */
+ 		if (TXSCH_MAP_FLAGS(txsch->pfvf_map[tl2]) & NIX_TXSCHQ_FREE)
+ 			continue;
+ 		/* skip if PF_FUNC doesn't match */
+ 		if ((TXSCH_MAP_FUNC(txsch->pfvf_map[tl2]) & ~RVU_PFVF_FUNC_MASK) !=
+-		    (TXSCH_MAP_FUNC(txsch->pfvf_map[smq_flush_ctx->tl2_schq] &
++		    (TXSCH_MAP_FUNC(txsch->pfvf_map[tl2_schq] &
+ 				    ~RVU_PFVF_FUNC_MASK)))
+ 			continue;
+ 		/* enable/disable XOFF */
+@@ -2361,10 +2361,12 @@ static int nix_smq_flush(struct rvu *rvu, int blkaddr,
+ 			 int smq, u16 pcifunc, int nixlf)
+ {
+ 	struct nix_smq_flush_ctx *smq_flush_ctx;
++	int err, restore_tx_en = 0, i;
+ 	int pf = rvu_get_pf(pcifunc);
+ 	u8 cgx_id = 0, lmac_id = 0;
+-	int err, restore_tx_en = 0;
+-	u64 cfg;
++	u16 tl2_tl3_link_schq;
++	u8 link, link_level;
++	u64 cfg, bmap = 0;
+ 
+ 	if (!is_rvu_otx2(rvu)) {
+ 		/* Skip SMQ flush if pkt count is zero */
+@@ -2388,16 +2390,38 @@ static int nix_smq_flush(struct rvu *rvu, int blkaddr,
+ 	nix_smq_flush_enadis_xoff(rvu, blkaddr, smq_flush_ctx, true);
+ 	nix_smq_flush_enadis_rate(rvu, blkaddr, smq_flush_ctx, false);
+ 
+-	cfg = rvu_read64(rvu, blkaddr, NIX_AF_SMQX_CFG(smq));
+-	/* Do SMQ flush and set enqueue xoff */
+-	cfg |= BIT_ULL(50) | BIT_ULL(49);
+-	rvu_write64(rvu, blkaddr, NIX_AF_SMQX_CFG(smq), cfg);
+-
+ 	/* Disable backpressure from physical link,
+ 	 * otherwise SMQ flush may stall.
+ 	 */
+ 	rvu_cgx_enadis_rx_bp(rvu, pf, false);
+ 
++	link_level = rvu_read64(rvu, blkaddr, NIX_AF_PSE_CHANNEL_LEVEL) & 0x01 ?
++			NIX_TXSCH_LVL_TL3 : NIX_TXSCH_LVL_TL2;
++	tl2_tl3_link_schq = smq_flush_ctx->smq_tree_ctx[link_level].schq;
++	link = smq_flush_ctx->smq_tree_ctx[NIX_TXSCH_LVL_TL1].schq;
++
++	/* SMQ set enqueue xoff */
++	cfg = rvu_read64(rvu, blkaddr, NIX_AF_SMQX_CFG(smq));
++	cfg |= BIT_ULL(50);
++	rvu_write64(rvu, blkaddr, NIX_AF_SMQX_CFG(smq), cfg);
++
++	/* Clear all NIX_AF_TL3_TL2_LINK_CFG[ENA] for the TL3/TL2 queue */
++	for (i = 0; i < (rvu->hw->cgx_links + rvu->hw->lbk_links); i++) {
++		cfg = rvu_read64(rvu, blkaddr,
++				 NIX_AF_TL3_TL2X_LINKX_CFG(tl2_tl3_link_schq, link));
++		if (!(cfg & BIT_ULL(12)))
++			continue;
++		bmap |= (1 << i);
++		cfg &= ~BIT_ULL(12);
++		rvu_write64(rvu, blkaddr,
++			    NIX_AF_TL3_TL2X_LINKX_CFG(tl2_tl3_link_schq, link), cfg);
++	}
++
++	/* Do SMQ flush and set enqueue xoff */
++	cfg = rvu_read64(rvu, blkaddr, NIX_AF_SMQX_CFG(smq));
++	cfg |= BIT_ULL(50) | BIT_ULL(49);
++	rvu_write64(rvu, blkaddr, NIX_AF_SMQX_CFG(smq), cfg);
++
+ 	/* Wait for flush to complete */
+ 	err = rvu_poll_reg(rvu, blkaddr,
+ 			   NIX_AF_SMQX_CFG(smq), BIT_ULL(49), true);
+@@ -2406,6 +2430,17 @@ static int nix_smq_flush(struct rvu *rvu, int blkaddr,
+ 			 "NIXLF%d: SMQ%d flush failed, txlink might be busy\n",
+ 			 nixlf, smq);
+ 
++	/* Set NIX_AF_TL3_TL2_LINKX_CFG[ENA] for the TL3/TL2 queue */
++	for (i = 0; i < (rvu->hw->cgx_links + rvu->hw->lbk_links); i++) {
++		if (!(bmap & (1 << i)))
++			continue;
++		cfg = rvu_read64(rvu, blkaddr,
++				 NIX_AF_TL3_TL2X_LINKX_CFG(tl2_tl3_link_schq, link));
++		cfg |= BIT_ULL(12);
++		rvu_write64(rvu, blkaddr,
++			    NIX_AF_TL3_TL2X_LINKX_CFG(tl2_tl3_link_schq, link), cfg);
++	}
++
+ 	/* clear XOFF on TL2s */
+ 	nix_smq_flush_enadis_rate(rvu, blkaddr, smq_flush_ctx, true);
+ 	nix_smq_flush_enadis_xoff(rvu, blkaddr, smq_flush_ctx, false);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index 58eb96a688533c..9d2d67e24205ec 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -139,6 +139,10 @@ void mlx5e_build_ptys2ethtool_map(void)
+ 				       ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT);
+ 	MLX5_BUILD_PTYS2ETHTOOL_CONFIG(MLX5E_100GBASE_LR4, legacy,
+ 				       ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT);
++	MLX5_BUILD_PTYS2ETHTOOL_CONFIG(MLX5E_100BASE_TX, legacy,
++				       ETHTOOL_LINK_MODE_100baseT_Full_BIT);
++	MLX5_BUILD_PTYS2ETHTOOL_CONFIG(MLX5E_1000BASE_T, legacy,
++				       ETHTOOL_LINK_MODE_1000baseT_Full_BIT);
+ 	MLX5_BUILD_PTYS2ETHTOOL_CONFIG(MLX5E_10GBASE_T, legacy,
+ 				       ETHTOOL_LINK_MODE_10000baseT_Full_BIT);
+ 	MLX5_BUILD_PTYS2ETHTOOL_CONFIG(MLX5E_25GBASE_CR, legacy,
+@@ -204,6 +208,12 @@ void mlx5e_build_ptys2ethtool_map(void)
+ 				       ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT,
+ 				       ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT,
+ 				       ETHTOOL_LINK_MODE_200000baseCR4_Full_BIT);
++	MLX5_BUILD_PTYS2ETHTOOL_CONFIG(MLX5E_400GAUI_8_400GBASE_CR8, ext,
++				       ETHTOOL_LINK_MODE_400000baseKR8_Full_BIT,
++				       ETHTOOL_LINK_MODE_400000baseSR8_Full_BIT,
++				       ETHTOOL_LINK_MODE_400000baseLR8_ER8_FR8_Full_BIT,
++				       ETHTOOL_LINK_MODE_400000baseDR8_Full_BIT,
++				       ETHTOOL_LINK_MODE_400000baseCR8_Full_BIT);
+ 	MLX5_BUILD_PTYS2ETHTOOL_CONFIG(MLX5E_100GAUI_1_100GBASE_CR_KR, ext,
+ 				       ETHTOOL_LINK_MODE_100000baseKR_Full_BIT,
+ 				       ETHTOOL_LINK_MODE_100000baseSR_Full_BIT,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
+index 255bc8b749f9a5..8587cd572da536 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
+@@ -319,7 +319,7 @@ int mlx5_eswitch_set_vepa(struct mlx5_eswitch *esw, u8 setting)
+ 		return -EPERM;
+ 
+ 	mutex_lock(&esw->state_lock);
+-	if (esw->mode != MLX5_ESWITCH_LEGACY) {
++	if (esw->mode != MLX5_ESWITCH_LEGACY || !mlx5_esw_is_fdb_created(esw)) {
+ 		err = -EOPNOTSUPP;
+ 		goto out;
+ 	}
+@@ -339,7 +339,7 @@ int mlx5_eswitch_get_vepa(struct mlx5_eswitch *esw, u8 *setting)
+ 	if (!mlx5_esw_allowed(esw))
+ 		return -EPERM;
+ 
+-	if (esw->mode != MLX5_ESWITCH_LEGACY)
++	if (esw->mode != MLX5_ESWITCH_LEGACY || !mlx5_esw_is_fdb_created(esw))
+ 		return -EOPNOTSUPP;
+ 
+ 	*setting = esw->fdb_table.legacy.vepa_uplink_rule ? 1 : 0;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+index d2ebe56c3977cc..02a3563f51ad26 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+@@ -312,6 +312,25 @@ static int esw_qos_set_group_max_rate(struct mlx5_eswitch *esw,
+ 	return err;
+ }
+ 
++static bool esw_qos_element_type_supported(struct mlx5_core_dev *dev, int type)
++{
++	switch (type) {
++	case SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR:
++		return MLX5_CAP_QOS(dev, esw_element_type) &
++		       ELEMENT_TYPE_CAP_MASK_TSAR;
++	case SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT:
++		return MLX5_CAP_QOS(dev, esw_element_type) &
++		       ELEMENT_TYPE_CAP_MASK_VPORT;
++	case SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT_TC:
++		return MLX5_CAP_QOS(dev, esw_element_type) &
++		       ELEMENT_TYPE_CAP_MASK_VPORT_TC;
++	case SCHEDULING_CONTEXT_ELEMENT_TYPE_PARA_VPORT_TC:
++		return MLX5_CAP_QOS(dev, esw_element_type) &
++		       ELEMENT_TYPE_CAP_MASK_PARA_VPORT_TC;
++	}
++	return false;
++}
++
+ static int esw_qos_vport_create_sched_element(struct mlx5_eswitch *esw,
+ 					      struct mlx5_vport *vport,
+ 					      u32 max_rate, u32 bw_share)
+@@ -323,6 +342,9 @@ static int esw_qos_vport_create_sched_element(struct mlx5_eswitch *esw,
+ 	void *vport_elem;
+ 	int err;
+ 
++	if (!esw_qos_element_type_supported(dev, SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT))
++		return -EOPNOTSUPP;
++
+ 	parent_tsar_ix = group ? group->tsar_ix : esw->qos.root_tsar_ix;
+ 	MLX5_SET(scheduling_context, sched_ctx, element_type,
+ 		 SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT);
+@@ -421,6 +443,7 @@ __esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *ex
+ {
+ 	u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
+ 	struct mlx5_esw_rate_group *group;
++	__be32 *attr;
+ 	u32 divider;
+ 	int err;
+ 
+@@ -428,6 +451,12 @@ __esw_qos_create_rate_group(struct mlx5_eswitch *esw, struct netlink_ext_ack *ex
+ 	if (!group)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	MLX5_SET(scheduling_context, tsar_ctx, element_type,
++		 SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR);
++
++	attr = MLX5_ADDR_OF(scheduling_context, tsar_ctx, element_attributes);
++	*attr = cpu_to_be32(TSAR_ELEMENT_TSAR_TYPE_DWRR << 16);
++
+ 	MLX5_SET(scheduling_context, tsar_ctx, parent_element_id,
+ 		 esw->qos.root_tsar_ix);
+ 	err = mlx5_create_scheduling_element_cmd(esw->dev,
+@@ -526,25 +555,6 @@ static int esw_qos_destroy_rate_group(struct mlx5_eswitch *esw,
+ 	return err;
+ }
+ 
+-static bool esw_qos_element_type_supported(struct mlx5_core_dev *dev, int type)
+-{
+-	switch (type) {
+-	case SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR:
+-		return MLX5_CAP_QOS(dev, esw_element_type) &
+-		       ELEMENT_TYPE_CAP_MASK_TASR;
+-	case SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT:
+-		return MLX5_CAP_QOS(dev, esw_element_type) &
+-		       ELEMENT_TYPE_CAP_MASK_VPORT;
+-	case SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT_TC:
+-		return MLX5_CAP_QOS(dev, esw_element_type) &
+-		       ELEMENT_TYPE_CAP_MASK_VPORT_TC;
+-	case SCHEDULING_CONTEXT_ELEMENT_TYPE_PARA_VPORT_TC:
+-		return MLX5_CAP_QOS(dev, esw_element_type) &
+-		       ELEMENT_TYPE_CAP_MASK_PARA_VPORT_TC;
+-	}
+-	return false;
+-}
+-
+ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *extack)
+ {
+ 	u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
+@@ -555,7 +565,8 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta
+ 	if (!MLX5_CAP_GEN(dev, qos) || !MLX5_CAP_QOS(dev, esw_scheduling))
+ 		return -EOPNOTSUPP;
+ 
+-	if (!esw_qos_element_type_supported(dev, SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR))
++	if (!esw_qos_element_type_supported(dev, SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR) ||
++	    !(MLX5_CAP_QOS(dev, esw_tsar_type) & TSAR_TYPE_CAP_MASK_DWRR))
+ 		return -EOPNOTSUPP;
+ 
+ 	MLX5_SET(scheduling_context, tsar_ctx, element_type,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 3e55a6c6a7c9bf..211194df9619c4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -2215,6 +2215,7 @@ static const struct pci_device_id mlx5_core_pci_table[] = {
+ 	{ PCI_VDEVICE(MELLANOX, 0x101f) },			/* ConnectX-6 LX */
+ 	{ PCI_VDEVICE(MELLANOX, 0x1021) },			/* ConnectX-7 */
+ 	{ PCI_VDEVICE(MELLANOX, 0x1023) },			/* ConnectX-8 */
++	{ PCI_VDEVICE(MELLANOX, 0x1025) },			/* ConnectX-9 */
+ 	{ PCI_VDEVICE(MELLANOX, 0xa2d2) },			/* BlueField integrated ConnectX-5 network controller */
+ 	{ PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF},	/* BlueField integrated ConnectX-5 network controller VF */
+ 	{ PCI_VDEVICE(MELLANOX, 0xa2d6) },			/* BlueField-2 integrated ConnectX-6 Dx network controller */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/qos.c
+index 8bce730b5c5bef..db2bd3ad63ba36 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/qos.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/qos.c
+@@ -28,6 +28,9 @@ int mlx5_qos_create_leaf_node(struct mlx5_core_dev *mdev, u32 parent_id,
+ {
+ 	u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {0};
+ 
++	if (!(MLX5_CAP_QOS(mdev, nic_element_type) & ELEMENT_TYPE_CAP_MASK_QUEUE_GROUP))
++		return -EOPNOTSUPP;
++
+ 	MLX5_SET(scheduling_context, sched_ctx, parent_element_id, parent_id);
+ 	MLX5_SET(scheduling_context, sched_ctx, element_type,
+ 		 SCHEDULING_CONTEXT_ELEMENT_TYPE_QUEUE_GROUP);
+@@ -44,6 +47,10 @@ int mlx5_qos_create_inner_node(struct mlx5_core_dev *mdev, u32 parent_id,
+ 	u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {0};
+ 	void *attr;
+ 
++	if (!(MLX5_CAP_QOS(mdev, nic_element_type) & ELEMENT_TYPE_CAP_MASK_TSAR) ||
++	    !(MLX5_CAP_QOS(mdev, nic_tsar_type) & TSAR_TYPE_CAP_MASK_DWRR))
++		return -EOPNOTSUPP;
++
+ 	MLX5_SET(scheduling_context, sched_ctx, parent_element_id, parent_id);
+ 	MLX5_SET(scheduling_context, sched_ctx, element_type,
+ 		 SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR);
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_type.h b/drivers/net/ethernet/wangxun/libwx/wx_type.h
+index 0df7f5712b6f71..9f98f5749064b6 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_type.h
++++ b/drivers/net/ethernet/wangxun/libwx/wx_type.h
+@@ -424,9 +424,9 @@ enum WX_MSCA_CMD_value {
+ #define WX_MIN_RXD                   128
+ #define WX_MIN_TXD                   128
+ 
+-/* Number of Transmit and Receive Descriptors must be a multiple of 8 */
+-#define WX_REQ_RX_DESCRIPTOR_MULTIPLE   8
+-#define WX_REQ_TX_DESCRIPTOR_MULTIPLE   8
++/* Number of Transmit and Receive Descriptors must be a multiple of 128 */
++#define WX_REQ_RX_DESCRIPTOR_MULTIPLE   128
++#define WX_REQ_TX_DESCRIPTOR_MULTIPLE   128
+ 
+ #define WX_MAX_JUMBO_FRAME_SIZE      9432 /* max payload 9414 */
+ #define VMDQ_P(p)                    p
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index efeb643c13733c..fc247f479257ae 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -271,8 +271,7 @@ static int dp83822_config_intr(struct phy_device *phydev)
+ 				DP83822_ENERGY_DET_INT_EN |
+ 				DP83822_LINK_QUAL_INT_EN);
+ 
+-		/* Private data pointer is NULL on DP83825 */
+-		if (!dp83822 || !dp83822->fx_enabled)
++		if (!dp83822->fx_enabled)
+ 			misr_status |= DP83822_ANEG_COMPLETE_INT_EN |
+ 				       DP83822_DUP_MODE_CHANGE_INT_EN |
+ 				       DP83822_SPEED_CHANGED_INT_EN;
+@@ -292,8 +291,7 @@ static int dp83822_config_intr(struct phy_device *phydev)
+ 				DP83822_PAGE_RX_INT_EN |
+ 				DP83822_EEE_ERROR_CHANGE_INT_EN);
+ 
+-		/* Private data pointer is NULL on DP83825 */
+-		if (!dp83822 || !dp83822->fx_enabled)
++		if (!dp83822->fx_enabled)
+ 			misr_status |= DP83822_ANEG_ERR_INT_EN |
+ 				       DP83822_WOL_PKT_INT_EN;
+ 
+@@ -691,10 +689,9 @@ static int dp83822_read_straps(struct phy_device *phydev)
+ 	return 0;
+ }
+ 
+-static int dp83822_probe(struct phy_device *phydev)
++static int dp8382x_probe(struct phy_device *phydev)
+ {
+ 	struct dp83822_private *dp83822;
+-	int ret;
+ 
+ 	dp83822 = devm_kzalloc(&phydev->mdio.dev, sizeof(*dp83822),
+ 			       GFP_KERNEL);
+@@ -703,6 +700,20 @@ static int dp83822_probe(struct phy_device *phydev)
+ 
+ 	phydev->priv = dp83822;
+ 
++	return 0;
++}
++
++static int dp83822_probe(struct phy_device *phydev)
++{
++	struct dp83822_private *dp83822;
++	int ret;
++
++	ret = dp8382x_probe(phydev);
++	if (ret)
++		return ret;
++
++	dp83822 = phydev->priv;
++
+ 	ret = dp83822_read_straps(phydev);
+ 	if (ret)
+ 		return ret;
+@@ -717,14 +728,11 @@ static int dp83822_probe(struct phy_device *phydev)
+ 
+ static int dp83826_probe(struct phy_device *phydev)
+ {
+-	struct dp83822_private *dp83822;
+-
+-	dp83822 = devm_kzalloc(&phydev->mdio.dev, sizeof(*dp83822),
+-			       GFP_KERNEL);
+-	if (!dp83822)
+-		return -ENOMEM;
++	int ret;
+ 
+-	phydev->priv = dp83822;
++	ret = dp8382x_probe(phydev);
++	if (ret)
++		return ret;
+ 
+ 	dp83826_of_init(phydev);
+ 
+@@ -795,6 +803,7 @@ static int dp83822_resume(struct phy_device *phydev)
+ 		PHY_ID_MATCH_MODEL(_id),			\
+ 		.name		= (_name),			\
+ 		/* PHY_BASIC_FEATURES */			\
++		.probe          = dp8382x_probe,		\
+ 		.soft_reset	= dp83822_phy_reset,		\
+ 		.config_init	= dp8382x_config_init,		\
+ 		.get_wol = dp83822_get_wol,			\
+diff --git a/drivers/net/phy/vitesse.c b/drivers/net/phy/vitesse.c
+index 897b979ec03c81..3b5fcaf0dd36db 100644
+--- a/drivers/net/phy/vitesse.c
++++ b/drivers/net/phy/vitesse.c
+@@ -237,16 +237,6 @@ static int vsc739x_config_init(struct phy_device *phydev)
+ 	return 0;
+ }
+ 
+-static int vsc73xx_config_aneg(struct phy_device *phydev)
+-{
+-	/* The VSC73xx switches does not like to be instructed to
+-	 * do autonegotiation in any way, it prefers that you just go
+-	 * with the power-on/reset defaults. Writing some registers will
+-	 * just make autonegotiation permanently fail.
+-	 */
+-	return 0;
+-}
+-
+ /* This adds a skew for both TX and RX clocks, so the skew should only be
+  * applied to "rgmii-id" interfaces. It may not work as expected
+  * on "rgmii-txid", "rgmii-rxid" or "rgmii" interfaces.
+@@ -444,7 +434,6 @@ static struct phy_driver vsc82xx_driver[] = {
+ 	.phy_id_mask    = 0x000ffff0,
+ 	/* PHY_GBIT_FEATURES */
+ 	.config_init    = vsc738x_config_init,
+-	.config_aneg    = vsc73xx_config_aneg,
+ 	.read_page      = vsc73xx_read_page,
+ 	.write_page     = vsc73xx_write_page,
+ }, {
+@@ -453,7 +442,6 @@ static struct phy_driver vsc82xx_driver[] = {
+ 	.phy_id_mask    = 0x000ffff0,
+ 	/* PHY_GBIT_FEATURES */
+ 	.config_init    = vsc738x_config_init,
+-	.config_aneg    = vsc73xx_config_aneg,
+ 	.read_page      = vsc73xx_read_page,
+ 	.write_page     = vsc73xx_write_page,
+ }, {
+@@ -462,7 +450,6 @@ static struct phy_driver vsc82xx_driver[] = {
+ 	.phy_id_mask    = 0x000ffff0,
+ 	/* PHY_GBIT_FEATURES */
+ 	.config_init    = vsc739x_config_init,
+-	.config_aneg    = vsc73xx_config_aneg,
+ 	.read_page      = vsc73xx_read_page,
+ 	.write_page     = vsc73xx_write_page,
+ }, {
+@@ -471,7 +458,6 @@ static struct phy_driver vsc82xx_driver[] = {
+ 	.phy_id_mask    = 0x000ffff0,
+ 	/* PHY_GBIT_FEATURES */
+ 	.config_init    = vsc739x_config_init,
+-	.config_aneg    = vsc73xx_config_aneg,
+ 	.read_page      = vsc73xx_read_page,
+ 	.write_page     = vsc73xx_write_page,
+ }, {
+diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c
+index 6eeef10edadad1..46afb95ffabe3b 100644
+--- a/drivers/net/usb/ipheth.c
++++ b/drivers/net/usb/ipheth.c
+@@ -286,10 +286,11 @@ static void ipheth_rcvbulk_callback(struct urb *urb)
+ 		return;
+ 	}
+ 
+-	if (urb->actual_length <= IPHETH_IP_ALIGN) {
+-		dev->net->stats.rx_length_errors++;
+-		return;
+-	}
++	/* iPhone may periodically send URBs with no payload
++	 * on the "bulk in" endpoint. It is safe to ignore them.
++	 */
++	if (urb->actual_length == 0)
++		goto rx_submit;
+ 
+ 	/* RX URBs starting with 0x00 0x01 do not encapsulate Ethernet frames,
+ 	 * but rather are control frames. Their purpose is not documented, and
+@@ -298,7 +299,8 @@ static void ipheth_rcvbulk_callback(struct urb *urb)
+ 	 * URB received from the bulk IN endpoint.
+ 	 */
+ 	if (unlikely
+-		(((char *)urb->transfer_buffer)[0] == 0 &&
++		(urb->actual_length == 4 &&
++		 ((char *)urb->transfer_buffer)[0] == 0 &&
+ 		 ((char *)urb->transfer_buffer)[1] == 1))
+ 		goto rx_submit;
+ 
+@@ -306,7 +308,6 @@ static void ipheth_rcvbulk_callback(struct urb *urb)
+ 	if (retval != 0) {
+ 		dev_err(&dev->intf->dev, "%s: callback retval: %d\n",
+ 			__func__, retval);
+-		return;
+ 	}
+ 
+ rx_submit:
+@@ -354,13 +355,14 @@ static int ipheth_carrier_set(struct ipheth_device *dev)
+ 			0x02, /* index */
+ 			dev->ctrl_buf, IPHETH_CTRL_BUF_SIZE,
+ 			IPHETH_CTRL_TIMEOUT);
+-	if (retval < 0) {
++	if (retval <= 0) {
+ 		dev_err(&dev->intf->dev, "%s: usb_control_msg: %d\n",
+ 			__func__, retval);
+ 		return retval;
+ 	}
+ 
+-	if (dev->ctrl_buf[0] == IPHETH_CARRIER_ON) {
++	if ((retval == 1 && dev->ctrl_buf[0] == IPHETH_CARRIER_ON) ||
++	    (retval >= 2 && dev->ctrl_buf[1] == IPHETH_CARRIER_ON)) {
+ 		netif_carrier_on(dev->net);
+ 		if (dev->tx_urb->status != -EINPROGRESS)
+ 			netif_wake_queue(dev->net);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 3e3ad3518d85f9..cca7132ed6ab19 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -1182,7 +1182,7 @@ static void mt7921_ipv6_addr_change(struct ieee80211_hw *hw,
+ 				    struct inet6_dev *idev)
+ {
+ 	struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+-	struct mt792x_dev *dev = mvif->phy->dev;
++	struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ 	struct inet6_ifaddr *ifa;
+ 	struct in6_addr ns_addrs[IEEE80211_BSS_ARP_ADDR_LIST_LEN];
+ 	struct sk_buff *skb;
+diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
+index 11c7c85047ed42..765bda7924f7bf 100644
+--- a/drivers/perf/riscv_pmu_sbi.c
++++ b/drivers/perf/riscv_pmu_sbi.c
+@@ -1368,11 +1368,15 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
+ 
+ 	/* SBI PMU Snapsphot is only available in SBI v2.0 */
+ 	if (sbi_v2_available) {
++		int cpu;
++
+ 		ret = pmu_sbi_snapshot_alloc(pmu);
+ 		if (ret)
+ 			goto out_unregister;
+ 
+-		ret = pmu_sbi_snapshot_setup(pmu, smp_processor_id());
++		cpu = get_cpu();
++
++		ret = pmu_sbi_snapshot_setup(pmu, cpu);
+ 		if (ret) {
+ 			/* Snapshot is an optional feature. Continue if not available */
+ 			pmu_sbi_snapshot_free(pmu);
+@@ -1386,6 +1390,7 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
+ 			 */
+ 			static_branch_enable(&sbi_pmu_snapshot_available);
+ 		}
++		put_cpu();
+ 	}
+ 
+ 	register_sysctl("kernel", sbi_pmu_sysctl_table);
+diff --git a/drivers/pinctrl/intel/pinctrl-meteorlake.c b/drivers/pinctrl/intel/pinctrl-meteorlake.c
+index cc44890c6699dc..885fa3b0d6d95f 100644
+--- a/drivers/pinctrl/intel/pinctrl-meteorlake.c
++++ b/drivers/pinctrl/intel/pinctrl-meteorlake.c
+@@ -584,6 +584,7 @@ static const struct intel_pinctrl_soc_data mtls_soc_data = {
+ };
+ 
+ static const struct acpi_device_id mtl_pinctrl_acpi_match[] = {
++	{ "INTC105E", (kernel_ulong_t)&mtlp_soc_data },
+ 	{ "INTC1083", (kernel_ulong_t)&mtlp_soc_data },
+ 	{ "INTC1082", (kernel_ulong_t)&mtls_soc_data },
+ 	{ }
+diff --git a/drivers/platform/surface/surface_aggregator_registry.c b/drivers/platform/surface/surface_aggregator_registry.c
+index 1c4d74db08c954..a23dff35f8ca23 100644
+--- a/drivers/platform/surface/surface_aggregator_registry.c
++++ b/drivers/platform/surface/surface_aggregator_registry.c
+@@ -265,16 +265,34 @@ static const struct software_node *ssam_node_group_sl5[] = {
+ 	&ssam_node_root,
+ 	&ssam_node_bat_ac,
+ 	&ssam_node_bat_main,
+-	&ssam_node_tmp_perf_profile,
++	&ssam_node_tmp_perf_profile_with_fan,
++	&ssam_node_tmp_sensors,
++	&ssam_node_fan_speed,
++	&ssam_node_hid_main_keyboard,
++	&ssam_node_hid_main_touchpad,
++	&ssam_node_hid_main_iid5,
++	&ssam_node_hid_sam_ucm_ucsi,
++	NULL,
++};
++
++/* Devices for Surface Laptop 6. */
++static const struct software_node *ssam_node_group_sl6[] = {
++	&ssam_node_root,
++	&ssam_node_bat_ac,
++	&ssam_node_bat_main,
++	&ssam_node_tmp_perf_profile_with_fan,
++	&ssam_node_tmp_sensors,
++	&ssam_node_fan_speed,
+ 	&ssam_node_hid_main_keyboard,
+ 	&ssam_node_hid_main_touchpad,
+ 	&ssam_node_hid_main_iid5,
++	&ssam_node_hid_sam_sensors,
+ 	&ssam_node_hid_sam_ucm_ucsi,
+ 	NULL,
+ };
+ 
+-/* Devices for Surface Laptop Studio. */
+-static const struct software_node *ssam_node_group_sls[] = {
++/* Devices for Surface Laptop Studio 1. */
++static const struct software_node *ssam_node_group_sls1[] = {
+ 	&ssam_node_root,
+ 	&ssam_node_bat_ac,
+ 	&ssam_node_bat_main,
+@@ -289,6 +307,22 @@ static const struct software_node *ssam_node_group_sls[] = {
+ 	NULL,
+ };
+ 
++/* Devices for Surface Laptop Studio 2. */
++static const struct software_node *ssam_node_group_sls2[] = {
++	&ssam_node_root,
++	&ssam_node_bat_ac,
++	&ssam_node_bat_main,
++	&ssam_node_tmp_perf_profile_with_fan,
++	&ssam_node_tmp_sensors,
++	&ssam_node_fan_speed,
++	&ssam_node_pos_tablet_switch,
++	&ssam_node_hid_sam_keyboard,
++	&ssam_node_hid_sam_penstash,
++	&ssam_node_hid_sam_sensors,
++	&ssam_node_hid_sam_ucm_ucsi,
++	NULL,
++};
++
+ /* Devices for Surface Laptop Go. */
+ static const struct software_node *ssam_node_group_slg1[] = {
+ 	&ssam_node_root,
+@@ -324,7 +358,7 @@ static const struct software_node *ssam_node_group_sp8[] = {
+ 	NULL,
+ };
+ 
+-/* Devices for Surface Pro 9 */
++/* Devices for Surface Pro 9 and 10 */
+ static const struct software_node *ssam_node_group_sp9[] = {
+ 	&ssam_node_root,
+ 	&ssam_node_hub_kip,
+@@ -365,6 +399,9 @@ static const struct acpi_device_id ssam_platform_hub_match[] = {
+ 	/* Surface Pro 9 */
+ 	{ "MSHW0343", (unsigned long)ssam_node_group_sp9 },
+ 
++	/* Surface Pro 10 */
++	{ "MSHW0510", (unsigned long)ssam_node_group_sp9 },
++
+ 	/* Surface Book 2 */
+ 	{ "MSHW0107", (unsigned long)ssam_node_group_gen5 },
+ 
+@@ -389,14 +426,23 @@ static const struct acpi_device_id ssam_platform_hub_match[] = {
+ 	/* Surface Laptop 5 */
+ 	{ "MSHW0350", (unsigned long)ssam_node_group_sl5 },
+ 
++	/* Surface Laptop 6 */
++	{ "MSHW0530", (unsigned long)ssam_node_group_sl6 },
++
+ 	/* Surface Laptop Go 1 */
+ 	{ "MSHW0118", (unsigned long)ssam_node_group_slg1 },
+ 
+ 	/* Surface Laptop Go 2 */
+ 	{ "MSHW0290", (unsigned long)ssam_node_group_slg1 },
+ 
+-	/* Surface Laptop Studio */
+-	{ "MSHW0123", (unsigned long)ssam_node_group_sls },
++	/* Surface Laptop Go 3 */
++	{ "MSHW0440", (unsigned long)ssam_node_group_slg1 },
++
++	/* Surface Laptop Studio 1 */
++	{ "MSHW0123", (unsigned long)ssam_node_group_sls1 },
++
++	/* Surface Laptop Studio 2 */
++	{ "MSHW0360", (unsigned long)ssam_node_group_sls2 },
+ 
+ 	{ },
+ };
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index bc9c5db3832445..5fe5149549cc9e 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -146,6 +146,20 @@ static const char * const ashs_ids[] = { "ATK4001", "ATK4002", NULL };
+ 
+ static int throttle_thermal_policy_write(struct asus_wmi *);
+ 
++static const struct dmi_system_id asus_ally_mcu_quirk[] = {
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "RC71L"),
++		},
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "RC72L"),
++		},
++	},
++	{ },
++};
++
+ static bool ashs_present(void)
+ {
+ 	int i = 0;
+@@ -4650,7 +4664,7 @@ static int asus_wmi_add(struct platform_device *pdev)
+ 	asus->dgpu_disable_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_DGPU);
+ 	asus->kbd_rgb_state_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_TUF_RGB_STATE);
+ 	asus->ally_mcu_usb_switch = acpi_has_method(NULL, ASUS_USB0_PWR_EC0_CSEE)
+-						&& dmi_match(DMI_BOARD_NAME, "RC71L");
++						&& dmi_check_system(asus_ally_mcu_quirk);
+ 
+ 	if (asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_MINI_LED_MODE))
+ 		asus->mini_led_dev_id = ASUS_WMI_DEVID_MINI_LED_MODE;
+diff --git a/drivers/platform/x86/panasonic-laptop.c b/drivers/platform/x86/panasonic-laptop.c
+index cf845ee1c7b1f0..ebd81846e2d564 100644
+--- a/drivers/platform/x86/panasonic-laptop.c
++++ b/drivers/platform/x86/panasonic-laptop.c
+@@ -337,7 +337,8 @@ static int acpi_pcc_retrieve_biosdata(struct pcc_acpi *pcc)
+ 	}
+ 
+ 	if (pcc->num_sifr < hkey->package.count) {
+-		pr_err("SQTY reports bad SINF length\n");
++		pr_err("SQTY reports bad SINF length SQTY: %lu SINF-pkg-count: %u\n",
++		       pcc->num_sifr, hkey->package.count);
+ 		status = AE_ERROR;
+ 		goto end;
+ 	}
+@@ -773,6 +774,24 @@ static DEVICE_ATTR_RW(dc_brightness);
+ static DEVICE_ATTR_RW(current_brightness);
+ static DEVICE_ATTR_RW(cdpower);
+ 
++static umode_t pcc_sysfs_is_visible(struct kobject *kobj, struct attribute *attr, int idx)
++{
++	struct device *dev = kobj_to_dev(kobj);
++	struct acpi_device *acpi = to_acpi_device(dev);
++	struct pcc_acpi *pcc = acpi_driver_data(acpi);
++
++	if (attr == &dev_attr_mute.attr)
++		return (pcc->num_sifr > SINF_MUTE) ? attr->mode : 0;
++
++	if (attr == &dev_attr_eco_mode.attr)
++		return (pcc->num_sifr > SINF_ECO_MODE) ? attr->mode : 0;
++
++	if (attr == &dev_attr_current_brightness.attr)
++		return (pcc->num_sifr > SINF_CUR_BRIGHT) ? attr->mode : 0;
++
++	return attr->mode;
++}
++
+ static struct attribute *pcc_sysfs_entries[] = {
+ 	&dev_attr_numbatt.attr,
+ 	&dev_attr_lcdtype.attr,
+@@ -787,8 +806,9 @@ static struct attribute *pcc_sysfs_entries[] = {
+ };
+ 
+ static const struct attribute_group pcc_attr_group = {
+-	.name	= NULL,		/* put in device directory */
+-	.attrs	= pcc_sysfs_entries,
++	.name		= NULL,		/* put in device directory */
++	.attrs		= pcc_sysfs_entries,
++	.is_visible	= pcc_sysfs_is_visible,
+ };
+ 
+ 
+@@ -941,12 +961,15 @@ static int acpi_pcc_hotkey_resume(struct device *dev)
+ 	if (!pcc)
+ 		return -EINVAL;
+ 
+-	acpi_pcc_write_sset(pcc, SINF_MUTE, pcc->mute);
+-	acpi_pcc_write_sset(pcc, SINF_ECO_MODE, pcc->eco_mode);
++	if (pcc->num_sifr > SINF_MUTE)
++		acpi_pcc_write_sset(pcc, SINF_MUTE, pcc->mute);
++	if (pcc->num_sifr > SINF_ECO_MODE)
++		acpi_pcc_write_sset(pcc, SINF_ECO_MODE, pcc->eco_mode);
+ 	acpi_pcc_write_sset(pcc, SINF_STICKY_KEY, pcc->sticky_key);
+ 	acpi_pcc_write_sset(pcc, SINF_AC_CUR_BRIGHT, pcc->ac_brightness);
+ 	acpi_pcc_write_sset(pcc, SINF_DC_CUR_BRIGHT, pcc->dc_brightness);
+-	acpi_pcc_write_sset(pcc, SINF_CUR_BRIGHT, pcc->current_brightness);
++	if (pcc->num_sifr > SINF_CUR_BRIGHT)
++		acpi_pcc_write_sset(pcc, SINF_CUR_BRIGHT, pcc->current_brightness);
+ 
+ 	return 0;
+ }
+@@ -963,11 +986,21 @@ static int acpi_pcc_hotkey_add(struct acpi_device *device)
+ 
+ 	num_sifr = acpi_pcc_get_sqty(device);
+ 
+-	if (num_sifr < 0 || num_sifr > 255) {
+-		pr_err("num_sifr out of range");
++	/*
++	 * pcc->sinf is expected to at least have the AC+DC brightness entries.
++	 * Accesses to higher SINF entries are checked against num_sifr.
++	 */
++	if (num_sifr <= SINF_DC_CUR_BRIGHT || num_sifr > 255) {
++		pr_err("num_sifr %d out of range %d - 255\n", num_sifr, SINF_DC_CUR_BRIGHT + 1);
+ 		return -ENODEV;
+ 	}
+ 
++	/*
++	 * Some DSDT-s have an off-by-one bug where the SINF package count is
++	 * one higher than the SQTY reported value, allocate 1 entry extra.
++	 */
++	num_sifr++;
++
+ 	pcc = kzalloc(sizeof(struct pcc_acpi), GFP_KERNEL);
+ 	if (!pcc) {
+ 		pr_err("Couldn't allocate mem for pcc");
+@@ -1020,11 +1053,14 @@ static int acpi_pcc_hotkey_add(struct acpi_device *device)
+ 	acpi_pcc_write_sset(pcc, SINF_STICKY_KEY, 0);
+ 	pcc->sticky_key = 0;
+ 
+-	pcc->eco_mode = pcc->sinf[SINF_ECO_MODE];
+-	pcc->mute = pcc->sinf[SINF_MUTE];
+ 	pcc->ac_brightness = pcc->sinf[SINF_AC_CUR_BRIGHT];
+ 	pcc->dc_brightness = pcc->sinf[SINF_DC_CUR_BRIGHT];
+-	pcc->current_brightness = pcc->sinf[SINF_CUR_BRIGHT];
++	if (pcc->num_sifr > SINF_MUTE)
++		pcc->mute = pcc->sinf[SINF_MUTE];
++	if (pcc->num_sifr > SINF_ECO_MODE)
++		pcc->eco_mode = pcc->sinf[SINF_ECO_MODE];
++	if (pcc->num_sifr > SINF_CUR_BRIGHT)
++		pcc->current_brightness = pcc->sinf[SINF_CUR_BRIGHT];
+ 
+ 	/* add sysfs attributes */
+ 	result = sysfs_create_group(&device->dev.kobj, &pcc_attr_group);
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index 00191b1d226014..4e9e7d2a942d8a 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -1286,18 +1286,18 @@ struct sdw_dpn_prop *sdw_get_slave_dpn_prop(struct sdw_slave *slave,
+ 					    unsigned int port_num)
+ {
+ 	struct sdw_dpn_prop *dpn_prop;
+-	unsigned long mask;
++	u8 num_ports;
+ 	int i;
+ 
+ 	if (direction == SDW_DATA_DIR_TX) {
+-		mask = slave->prop.source_ports;
++		num_ports = hweight32(slave->prop.source_ports);
+ 		dpn_prop = slave->prop.src_dpn_prop;
+ 	} else {
+-		mask = slave->prop.sink_ports;
++		num_ports = hweight32(slave->prop.sink_ports);
+ 		dpn_prop = slave->prop.sink_dpn_prop;
+ 	}
+ 
+-	for_each_set_bit(i, &mask, 32) {
++	for (i = 0; i < num_ports; i++) {
+ 		if (dpn_prop[i].num == port_num)
+ 			return &dpn_prop[i];
+ 	}
+diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
+index 37ef8c40b2762e..6f4057330444d5 100644
+--- a/drivers/spi/spi-geni-qcom.c
++++ b/drivers/spi/spi-geni-qcom.c
+@@ -1110,25 +1110,27 @@ static int spi_geni_probe(struct platform_device *pdev)
+ 	spin_lock_init(&mas->lock);
+ 	pm_runtime_use_autosuspend(&pdev->dev);
+ 	pm_runtime_set_autosuspend_delay(&pdev->dev, 250);
+-	pm_runtime_enable(dev);
++	ret = devm_pm_runtime_enable(dev);
++	if (ret)
++		return ret;
+ 
+ 	if (device_property_read_bool(&pdev->dev, "spi-slave"))
+ 		spi->target = true;
+ 
+ 	ret = geni_icc_get(&mas->se, NULL);
+ 	if (ret)
+-		goto spi_geni_probe_runtime_disable;
++		return ret;
+ 	/* Set the bus quota to a reasonable value for register access */
+ 	mas->se.icc_paths[GENI_TO_CORE].avg_bw = Bps_to_icc(CORE_2X_50_MHZ);
+ 	mas->se.icc_paths[CPU_TO_GENI].avg_bw = GENI_DEFAULT_BW;
+ 
+ 	ret = geni_icc_set_bw(&mas->se);
+ 	if (ret)
+-		goto spi_geni_probe_runtime_disable;
++		return ret;
+ 
+ 	ret = spi_geni_init(mas);
+ 	if (ret)
+-		goto spi_geni_probe_runtime_disable;
++		return ret;
+ 
+ 	/*
+ 	 * check the mode supported and set_cs for fifo mode only
+@@ -1157,8 +1159,6 @@ static int spi_geni_probe(struct platform_device *pdev)
+ 	free_irq(mas->irq, spi);
+ spi_geni_release_dma:
+ 	spi_geni_release_dma_chan(mas);
+-spi_geni_probe_runtime_disable:
+-	pm_runtime_disable(dev);
+ 	return ret;
+ }
+ 
+@@ -1170,10 +1170,9 @@ static void spi_geni_remove(struct platform_device *pdev)
+ 	/* Unregister _before_ disabling pm_runtime() so we stop transfers */
+ 	spi_unregister_controller(spi);
+ 
+-	spi_geni_release_dma_chan(mas);
+-
+ 	free_irq(mas->irq, spi);
+-	pm_runtime_disable(&pdev->dev);
++
++	spi_geni_release_dma_chan(mas);
+ }
+ 
+ static int __maybe_unused spi_geni_runtime_suspend(struct device *dev)
+diff --git a/drivers/spi/spi-nxp-fspi.c b/drivers/spi/spi-nxp-fspi.c
+index 88397f712a3b5e..6585b19a48662d 100644
+--- a/drivers/spi/spi-nxp-fspi.c
++++ b/drivers/spi/spi-nxp-fspi.c
+@@ -805,14 +805,15 @@ static void nxp_fspi_fill_txfifo(struct nxp_fspi *f,
+ 	if (i < op->data.nbytes) {
+ 		u32 data = 0;
+ 		int j;
++		int remaining = op->data.nbytes - i;
+ 		/* Wait for TXFIFO empty */
+ 		ret = fspi_readl_poll_tout(f, f->iobase + FSPI_INTR,
+ 					   FSPI_INTR_IPTXWE, 0,
+ 					   POLL_TOUT, true);
+ 		WARN_ON(ret);
+ 
+-		for (j = 0; j < ALIGN(op->data.nbytes - i, 4); j += 4) {
+-			memcpy(&data, buf + i + j, 4);
++		for (j = 0; j < ALIGN(remaining, 4); j += 4) {
++			memcpy(&data, buf + i + j, min_t(int, 4, remaining - j));
+ 			fspi_writel(f, data, base + FSPI_TFDR + j);
+ 		}
+ 		fspi_writel(f, FSPI_INTR_IPTXWE, base + FSPI_INTR);
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index 99524a3c9f382e..558c466135a51b 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -1033,6 +1033,18 @@ static int __maybe_unused zynqmp_runtime_resume(struct device *dev)
+ 	return 0;
+ }
+ 
++static unsigned long zynqmp_qspi_timeout(struct zynqmp_qspi *xqspi, u8 bits,
++					 unsigned long bytes)
++{
++	unsigned long timeout;
++
++	/* Assume we are at most 2x slower than the nominal bus speed */
++	timeout = mult_frac(bytes, 2 * 8 * MSEC_PER_SEC,
++			    bits * xqspi->speed_hz);
++	/* And add 100 ms for scheduling delays */
++	return msecs_to_jiffies(timeout + 100);
++}
++
+ /**
+  * zynqmp_qspi_exec_op() - Initiates the QSPI transfer
+  * @mem: The SPI memory
+@@ -1049,6 +1061,7 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ {
+ 	struct zynqmp_qspi *xqspi = spi_controller_get_devdata
+ 				    (mem->spi->controller);
++	unsigned long timeout;
+ 	int err = 0, i;
+ 	u32 genfifoentry = 0;
+ 	u16 opcode = op->cmd.opcode;
+@@ -1077,8 +1090,10 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 		zynqmp_gqspi_write(xqspi, GQSPI_IER_OFST,
+ 				   GQSPI_IER_GENFIFOEMPTY_MASK |
+ 				   GQSPI_IER_TXNOT_FULL_MASK);
+-		if (!wait_for_completion_timeout
+-		    (&xqspi->data_completion, msecs_to_jiffies(1000))) {
++		timeout = zynqmp_qspi_timeout(xqspi, op->cmd.buswidth,
++					      op->cmd.nbytes);
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
++						 timeout)) {
+ 			err = -ETIMEDOUT;
+ 			goto return_err;
+ 		}
+@@ -1104,8 +1119,10 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 				   GQSPI_IER_TXEMPTY_MASK |
+ 				   GQSPI_IER_GENFIFOEMPTY_MASK |
+ 				   GQSPI_IER_TXNOT_FULL_MASK);
+-		if (!wait_for_completion_timeout
+-		    (&xqspi->data_completion, msecs_to_jiffies(1000))) {
++		timeout = zynqmp_qspi_timeout(xqspi, op->addr.buswidth,
++					      op->addr.nbytes);
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
++						 timeout)) {
+ 			err = -ETIMEDOUT;
+ 			goto return_err;
+ 		}
+@@ -1173,8 +1190,9 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 						   GQSPI_IER_RXEMPTY_MASK);
+ 			}
+ 		}
+-		if (!wait_for_completion_timeout
+-		    (&xqspi->data_completion, msecs_to_jiffies(1000)))
++		timeout = zynqmp_qspi_timeout(xqspi, op->data.buswidth,
++					      op->data.nbytes);
++		if (!wait_for_completion_timeout(&xqspi->data_completion, timeout))
+ 			err = -ETIMEDOUT;
+ 	}
+ 
+diff --git a/drivers/staging/media/atomisp/pci/sh_css_frac.h b/drivers/staging/media/atomisp/pci/sh_css_frac.h
+index 8f08df5c88cc36..569a2f59e5519f 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css_frac.h
++++ b/drivers/staging/media/atomisp/pci/sh_css_frac.h
+@@ -30,12 +30,24 @@
+ #define uISP_VAL_MAX		      ((unsigned int)((1 << uISP_REG_BIT) - 1))
+ 
+ /* a:fraction bits for 16bit precision, b:fraction bits for ISP precision */
+-#define sDIGIT_FITTING(v, a, b) \
+-	min_t(int, max_t(int, (((v) >> sSHIFT) >> max(sFRACTION_BITS_FITTING(a) - (b), 0)), \
+-	  sISP_VAL_MIN), sISP_VAL_MAX)
+-#define uDIGIT_FITTING(v, a, b) \
+-	min((unsigned int)max((unsigned)(((v) >> uSHIFT) \
+-	>> max((int)(uFRACTION_BITS_FITTING(a) - (b)), 0)), \
+-	  uISP_VAL_MIN), uISP_VAL_MAX)
++static inline int sDIGIT_FITTING(int v, int a, int b)
++{
++	int fit_shift = sFRACTION_BITS_FITTING(a) - b;
++
++	v >>= sSHIFT;
++	v >>= fit_shift > 0 ? fit_shift : 0;
++
++	return clamp_t(int, v, sISP_VAL_MIN, sISP_VAL_MAX);
++}
++
++static inline unsigned int uDIGIT_FITTING(unsigned int v, int a, int b)
++{
++	int fit_shift = uFRACTION_BITS_FITTING(a) - b;
++
++	v >>= uSHIFT;
++	v >>= fit_shift > 0 ? fit_shift : 0;
++
++	return clamp_t(unsigned int, v, uISP_VAL_MIN, uISP_VAL_MAX);
++}
+ 
+ #endif /* __SH_CSS_FRAC_H */
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 45e91d065b3bdb..8e00f21c7d139d 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -817,10 +817,11 @@ static int ucsi_check_altmodes(struct ucsi_connector *con)
+ 	/* Ignoring the errors in this case. */
+ 	if (con->partner_altmode[0]) {
+ 		num_partner_am = ucsi_get_num_altmode(con->partner_altmode);
+-		if (num_partner_am > 0)
+-			typec_partner_set_num_altmodes(con->partner, num_partner_am);
++		typec_partner_set_num_altmodes(con->partner, num_partner_am);
+ 		ucsi_altmode_update_active(con);
+ 		return 0;
++	} else {
++		typec_partner_set_num_altmodes(con->partner, 0);
+ 	}
+ 
+ 	return ret;
+@@ -914,10 +915,20 @@ static void ucsi_unregister_plug(struct ucsi_connector *con)
+ 
+ static int ucsi_register_cable(struct ucsi_connector *con)
+ {
++	struct ucsi_cable_property cable_prop;
+ 	struct typec_cable *cable;
+ 	struct typec_cable_desc desc = {};
++	u64 command;
++	int ret;
++
++	command = UCSI_GET_CABLE_PROPERTY | UCSI_CONNECTOR_NUMBER(con->num);
++	ret = ucsi_send_command(con->ucsi, command, &cable_prop, sizeof(cable_prop));
++	if (ret < 0) {
++		dev_err(con->ucsi->dev, "GET_CABLE_PROPERTY failed (%d)\n", ret);
++		return ret;
++	}
+ 
+-	switch (UCSI_CABLE_PROP_FLAG_PLUG_TYPE(con->cable_prop.flags)) {
++	switch (UCSI_CABLE_PROP_FLAG_PLUG_TYPE(cable_prop.flags)) {
+ 	case UCSI_CABLE_PROPERTY_PLUG_TYPE_A:
+ 		desc.type = USB_PLUG_TYPE_A;
+ 		break;
+@@ -933,10 +944,10 @@ static int ucsi_register_cable(struct ucsi_connector *con)
+ 	}
+ 
+ 	desc.identity = &con->cable_identity;
+-	desc.active = !!(UCSI_CABLE_PROP_FLAG_ACTIVE_CABLE &
+-			 con->cable_prop.flags);
+-	desc.pd_revision = UCSI_CABLE_PROP_FLAG_PD_MAJOR_REV_AS_BCD(
+-	    con->cable_prop.flags);
++	desc.active = !!(UCSI_CABLE_PROP_FLAG_ACTIVE_CABLE & cable_prop.flags);
++
++	if (con->ucsi->version >= UCSI_VERSION_2_1)
++		desc.pd_revision = UCSI_CABLE_PROP_FLAG_PD_MAJOR_REV_AS_BCD(cable_prop.flags);
+ 
+ 	cable = typec_register_cable(con->port, &desc);
+ 	if (IS_ERR(cable)) {
+@@ -1142,21 +1153,11 @@ static int ucsi_check_connection(struct ucsi_connector *con)
+ 
+ static int ucsi_check_cable(struct ucsi_connector *con)
+ {
+-	u64 command;
+-	int ret;
++	int ret, num_plug_am;
+ 
+ 	if (con->cable)
+ 		return 0;
+ 
+-	command = UCSI_GET_CABLE_PROPERTY | UCSI_CONNECTOR_NUMBER(con->num);
+-	ret = ucsi_send_command(con->ucsi, command, &con->cable_prop,
+-				sizeof(con->cable_prop));
+-	if (ret < 0) {
+-		dev_err(con->ucsi->dev, "GET_CABLE_PROPERTY failed (%d)\n",
+-			ret);
+-		return ret;
+-	}
+-
+ 	ret = ucsi_register_cable(con);
+ 	if (ret < 0)
+ 		return ret;
+@@ -1175,6 +1176,13 @@ static int ucsi_check_cable(struct ucsi_connector *con)
+ 		ret = ucsi_register_altmodes(con, UCSI_RECIPIENT_SOP_P);
+ 		if (ret < 0)
+ 			return ret;
++
++		if (con->plug_altmode[0]) {
++			num_plug_am = ucsi_get_num_altmode(con->plug_altmode);
++			typec_plug_set_num_altmodes(con->plug, num_plug_am);
++		} else {
++			typec_plug_set_num_altmodes(con->plug, 0);
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index f66224a270bc6a..46c37643b59cad 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -444,7 +444,6 @@ struct ucsi_connector {
+ 
+ 	struct ucsi_connector_status status;
+ 	struct ucsi_connector_capability cap;
+-	struct ucsi_cable_property cable_prop;
+ 	struct power_supply *psy;
+ 	struct power_supply_desc psy_desc;
+ 	u32 rdo;
+diff --git a/fs/bcachefs/extents.c b/fs/bcachefs/extents.c
+index 410b8bd81b5a6e..7582e8ee6c21c7 100644
+--- a/fs/bcachefs/extents.c
++++ b/fs/bcachefs/extents.c
+@@ -932,8 +932,29 @@ bool bch2_extents_match(struct bkey_s_c k1, struct bkey_s_c k2)
+ 			bkey_for_each_ptr_decode(k2.k, ptrs2, p2, entry2)
+ 				if (p1.ptr.dev		== p2.ptr.dev &&
+ 				    p1.ptr.gen		== p2.ptr.gen &&
++
++				    /*
++				     * This checks that the two pointers point
++				     * to the same region on disk - adjusting
++				     * for the difference in where the extents
++				     * start, since one may have been trimmed:
++				     */
+ 				    (s64) p1.ptr.offset + p1.crc.offset - bkey_start_offset(k1.k) ==
+-				    (s64) p2.ptr.offset + p2.crc.offset - bkey_start_offset(k2.k))
++				    (s64) p2.ptr.offset + p2.crc.offset - bkey_start_offset(k2.k) &&
++
++				    /*
++				     * This additionally checks that the
++				     * extents overlap on disk, since the
++				     * previous check may trigger spuriously
++				     * when one extent is immediately partially
++				     * overwritten with another extent (so that
++				     * on disk they are adjacent) and
++				     * compression is in use:
++				     */
++				    ((p1.ptr.offset >= p2.ptr.offset &&
++				      p1.ptr.offset  < p2.ptr.offset + p2.crc.compressed_size) ||
++				     (p2.ptr.offset >= p1.ptr.offset &&
++				      p2.ptr.offset  < p1.ptr.offset + p1.crc.compressed_size)))
+ 					return true;
+ 
+ 		return false;
+diff --git a/fs/bcachefs/fs-io-buffered.c b/fs/bcachefs/fs-io-buffered.c
+index 54873ecc635cb0..98c1e26a313a67 100644
+--- a/fs/bcachefs/fs-io-buffered.c
++++ b/fs/bcachefs/fs-io-buffered.c
+@@ -802,8 +802,7 @@ static noinline void folios_trunc(folios *fs, struct folio **fi)
+ static int __bch2_buffered_write(struct bch_inode_info *inode,
+ 				 struct address_space *mapping,
+ 				 struct iov_iter *iter,
+-				 loff_t pos, unsigned len,
+-				 bool inode_locked)
++				 loff_t pos, unsigned len)
+ {
+ 	struct bch_fs *c = inode->v.i_sb->s_fs_info;
+ 	struct bch2_folio_reservation res;
+@@ -828,15 +827,6 @@ static int __bch2_buffered_write(struct bch_inode_info *inode,
+ 
+ 	BUG_ON(!fs.nr);
+ 
+-	/*
+-	 * If we're not using the inode lock, we need to lock all the folios for
+-	 * atomiticity of writes vs. other writes:
+-	 */
+-	if (!inode_locked && folio_end_pos(darray_last(fs)) < end) {
+-		ret = -BCH_ERR_need_inode_lock;
+-		goto out;
+-	}
+-
+ 	f = darray_first(fs);
+ 	if (pos != folio_pos(f) && !folio_test_uptodate(f)) {
+ 		ret = bch2_read_single_folio(f, mapping);
+@@ -931,10 +921,8 @@ static int __bch2_buffered_write(struct bch_inode_info *inode,
+ 	end = pos + copied;
+ 
+ 	spin_lock(&inode->v.i_lock);
+-	if (end > inode->v.i_size) {
+-		BUG_ON(!inode_locked);
++	if (end > inode->v.i_size)
+ 		i_size_write(&inode->v, end);
+-	}
+ 	spin_unlock(&inode->v.i_lock);
+ 
+ 	f_pos = pos;
+@@ -978,68 +966,12 @@ static ssize_t bch2_buffered_write(struct kiocb *iocb, struct iov_iter *iter)
+ 	struct file *file = iocb->ki_filp;
+ 	struct address_space *mapping = file->f_mapping;
+ 	struct bch_inode_info *inode = file_bch_inode(file);
+-	loff_t pos;
+-	bool inode_locked = false;
+-	ssize_t written = 0, written2 = 0, ret = 0;
+-
+-	/*
+-	 * We don't take the inode lock unless i_size will be changing. Folio
+-	 * locks provide exclusion with other writes, and the pagecache add lock
+-	 * provides exclusion with truncate and hole punching.
+-	 *
+-	 * There is one nasty corner case where atomicity would be broken
+-	 * without great care: when copying data from userspace to the page
+-	 * cache, we do that with faults disable - a page fault would recurse
+-	 * back into the filesystem, taking filesystem locks again, and
+-	 * deadlock; so it's done with faults disabled, and we fault in the user
+-	 * buffer when we aren't holding locks.
+-	 *
+-	 * If we do part of the write, but we then race and in the userspace
+-	 * buffer have been evicted and are no longer resident, then we have to
+-	 * drop our folio locks to re-fault them in, breaking write atomicity.
+-	 *
+-	 * To fix this, we restart the write from the start, if we weren't
+-	 * holding the inode lock.
+-	 *
+-	 * There is another wrinkle after that; if we restart the write from the
+-	 * start, and then get an unrecoverable error, we _cannot_ claim to
+-	 * userspace that we did not write data we actually did - so we must
+-	 * track (written2) the most we ever wrote.
+-	 */
+-
+-	if ((iocb->ki_flags & IOCB_APPEND) ||
+-	    (iocb->ki_pos + iov_iter_count(iter) > i_size_read(&inode->v))) {
+-		inode_lock(&inode->v);
+-		inode_locked = true;
+-	}
+-
+-	ret = generic_write_checks(iocb, iter);
+-	if (ret <= 0)
+-		goto unlock;
+-
+-	ret = file_remove_privs_flags(file, !inode_locked ? IOCB_NOWAIT : 0);
+-	if (ret) {
+-		if (!inode_locked) {
+-			inode_lock(&inode->v);
+-			inode_locked = true;
+-			ret = file_remove_privs_flags(file, 0);
+-		}
+-		if (ret)
+-			goto unlock;
+-	}
+-
+-	ret = file_update_time(file);
+-	if (ret)
+-		goto unlock;
+-
+-	pos = iocb->ki_pos;
++	loff_t pos = iocb->ki_pos;
++	ssize_t written = 0;
++	int ret = 0;
+ 
+ 	bch2_pagecache_add_get(inode);
+ 
+-	if (!inode_locked &&
+-	    (iocb->ki_pos + iov_iter_count(iter) > i_size_read(&inode->v)))
+-		goto get_inode_lock;
+-
+ 	do {
+ 		unsigned offset = pos & (PAGE_SIZE - 1);
+ 		unsigned bytes = iov_iter_count(iter);
+@@ -1064,17 +996,12 @@ static ssize_t bch2_buffered_write(struct kiocb *iocb, struct iov_iter *iter)
+ 			}
+ 		}
+ 
+-		if (unlikely(bytes != iov_iter_count(iter) && !inode_locked))
+-			goto get_inode_lock;
+-
+ 		if (unlikely(fatal_signal_pending(current))) {
+ 			ret = -EINTR;
+ 			break;
+ 		}
+ 
+-		ret = __bch2_buffered_write(inode, mapping, iter, pos, bytes, inode_locked);
+-		if (ret == -BCH_ERR_need_inode_lock)
+-			goto get_inode_lock;
++		ret = __bch2_buffered_write(inode, mapping, iter, pos, bytes);
+ 		if (unlikely(ret < 0))
+ 			break;
+ 
+@@ -1095,46 +1022,50 @@ static ssize_t bch2_buffered_write(struct kiocb *iocb, struct iov_iter *iter)
+ 		}
+ 		pos += ret;
+ 		written += ret;
+-		written2 = max(written, written2);
+-
+-		if (ret != bytes && !inode_locked)
+-			goto get_inode_lock;
+ 		ret = 0;
+ 
+ 		balance_dirty_pages_ratelimited(mapping);
+-
+-		if (0) {
+-get_inode_lock:
+-			bch2_pagecache_add_put(inode);
+-			inode_lock(&inode->v);
+-			inode_locked = true;
+-			bch2_pagecache_add_get(inode);
+-
+-			iov_iter_revert(iter, written);
+-			pos -= written;
+-			written = 0;
+-			ret = 0;
+-		}
+ 	} while (iov_iter_count(iter));
+-	bch2_pagecache_add_put(inode);
+-unlock:
+-	if (inode_locked)
+-		inode_unlock(&inode->v);
+ 
+-	iocb->ki_pos += written;
++	bch2_pagecache_add_put(inode);
+ 
+-	ret = max(written, written2) ?: ret;
+-	if (ret > 0)
+-		ret = generic_write_sync(iocb, ret);
+-	return ret;
++	return written ? written : ret;
+ }
+ 
+-ssize_t bch2_write_iter(struct kiocb *iocb, struct iov_iter *iter)
++ssize_t bch2_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ {
+-	ssize_t ret = iocb->ki_flags & IOCB_DIRECT
+-		? bch2_direct_write(iocb, iter)
+-		: bch2_buffered_write(iocb, iter);
++	struct file *file = iocb->ki_filp;
++	struct bch_inode_info *inode = file_bch_inode(file);
++	ssize_t ret;
++
++	if (iocb->ki_flags & IOCB_DIRECT) {
++		ret = bch2_direct_write(iocb, from);
++		goto out;
++	}
++
++	inode_lock(&inode->v);
++
++	ret = generic_write_checks(iocb, from);
++	if (ret <= 0)
++		goto unlock;
++
++	ret = file_remove_privs(file);
++	if (ret)
++		goto unlock;
++
++	ret = file_update_time(file);
++	if (ret)
++		goto unlock;
++
++	ret = bch2_buffered_write(iocb, from);
++	if (likely(ret > 0))
++		iocb->ki_pos += ret;
++unlock:
++	inode_unlock(&inode->v);
+ 
++	if (ret > 0)
++		ret = generic_write_sync(iocb, ret);
++out:
+ 	return bch2_err_class(ret);
+ }
+ 
+diff --git a/fs/bcachefs/fs.c b/fs/bcachefs/fs.c
+index fa1fee05cf8f57..162f7836ca795c 100644
+--- a/fs/bcachefs/fs.c
++++ b/fs/bcachefs/fs.c
+@@ -177,6 +177,14 @@ static unsigned bch2_inode_hash(subvol_inum inum)
+ 	return jhash_3words(inum.subvol, inum.inum >> 32, inum.inum, JHASH_INITVAL);
+ }
+ 
++struct bch_inode_info *__bch2_inode_hash_find(struct bch_fs *c, subvol_inum inum)
++{
++	return to_bch_ei(ilookup5_nowait(c->vfs_sb,
++					 bch2_inode_hash(inum),
++					 bch2_iget5_test,
++					 &inum));
++}
++
+ static struct bch_inode_info *bch2_inode_insert(struct bch_fs *c, struct bch_inode_info *inode)
+ {
+ 	subvol_inum inum = inode_inum(inode);
+diff --git a/fs/bcachefs/fs.h b/fs/bcachefs/fs.h
+index c3af7225ff693e..990ec43e0365d3 100644
+--- a/fs/bcachefs/fs.h
++++ b/fs/bcachefs/fs.h
+@@ -56,6 +56,8 @@ static inline subvol_inum inode_inum(struct bch_inode_info *inode)
+ 	};
+ }
+ 
++struct bch_inode_info *__bch2_inode_hash_find(struct bch_fs *, subvol_inum);
++
+ /*
+  * Set if we've gotten a btree error for this inode, and thus the vfs inode and
+  * btree inode may be inconsistent:
+@@ -194,6 +196,11 @@ int bch2_vfs_init(void);
+ 
+ #define bch2_inode_update_after_write(_trans, _inode, _inode_u, _fields)	({ do {} while (0); })
+ 
++static inline struct bch_inode_info *__bch2_inode_hash_find(struct bch_fs *c, subvol_inum inum)
++{
++	return NULL;
++}
++
+ static inline void bch2_evict_subvolume_inodes(struct bch_fs *c,
+ 					       snapshot_id_list *s) {}
+ static inline void bch2_vfs_exit(void) {}
+diff --git a/fs/bcachefs/fsck.c b/fs/bcachefs/fsck.c
+index 921bcdb3e5e4ed..08d0eb39e7d65d 100644
+--- a/fs/bcachefs/fsck.c
++++ b/fs/bcachefs/fsck.c
+@@ -8,6 +8,7 @@
+ #include "darray.h"
+ #include "dirent.h"
+ #include "error.h"
++#include "fs.h"
+ #include "fs-common.h"
+ #include "fsck.h"
+ #include "inode.h"
+@@ -948,6 +949,22 @@ static int check_inode_dirent_inode(struct btree_trans *trans, struct bkey_s_c i
+ 	return ret;
+ }
+ 
++static bool bch2_inode_open(struct bch_fs *c, struct bpos p)
++{
++	subvol_inum inum = {
++		.subvol = snapshot_t(c, p.snapshot)->subvol,
++		.inum	= p.offset,
++	};
++
++	/* snapshot tree corruption, can't safely delete */
++	if (!inum.subvol) {
++		bch_err_ratelimited(c, "%s(): snapshot %u has no subvol", __func__, p.snapshot);
++		return true;
++	}
++
++	return __bch2_inode_hash_find(c, inum) != NULL;
++}
++
+ static int check_inode(struct btree_trans *trans,
+ 		       struct btree_iter *iter,
+ 		       struct bkey_s_c k,
+@@ -1025,6 +1042,7 @@ static int check_inode(struct btree_trans *trans,
+ 	}
+ 
+ 	if (u.bi_flags & BCH_INODE_unlinked &&
++	    !bch2_inode_open(c, k.k->p) &&
+ 	    (!c->sb.clean ||
+ 	     fsck_err(c, inode_unlinked_but_clean,
+ 		      "filesystem marked clean, but inode %llu unlinked",
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 2951aa0039fc67..5d23421b62437e 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4213,6 +4213,7 @@ static int __btrfs_unlink_inode(struct btrfs_trans_handle *trans,
+ 
+ 	btrfs_i_size_write(dir, dir->vfs_inode.i_size - name->len * 2);
+ 	inode_inc_iversion(&inode->vfs_inode);
++	inode_set_ctime_current(&inode->vfs_inode);
+ 	inode_inc_iversion(&dir->vfs_inode);
+  	inode_set_mtime_to_ts(&dir->vfs_inode, inode_set_ctime_current(&dir->vfs_inode));
+ 	ret = btrfs_update_inode(trans, dir);
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 6bace5fece04e2..ed362d291b902f 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -624,6 +624,9 @@ static int nfs_server_return_marked_delegations(struct nfs_server *server,
+ 				prev = delegation;
+ 			continue;
+ 		}
++		inode = nfs_delegation_grab_inode(delegation);
++		if (inode == NULL)
++			continue;
+ 
+ 		if (prev) {
+ 			struct inode *tmp = nfs_delegation_grab_inode(prev);
+@@ -634,12 +637,6 @@ static int nfs_server_return_marked_delegations(struct nfs_server *server,
+ 			}
+ 		}
+ 
+-		inode = nfs_delegation_grab_inode(delegation);
+-		if (inode == NULL) {
+-			rcu_read_unlock();
+-			iput(to_put);
+-			goto restart;
+-		}
+ 		delegation = nfs_start_delegation_return_locked(NFS_I(inode));
+ 		rcu_read_unlock();
+ 
+@@ -1161,7 +1158,6 @@ static int nfs_server_reap_unclaimed_delegations(struct nfs_server *server,
+ 	struct inode *inode;
+ restart:
+ 	rcu_read_lock();
+-restart_locked:
+ 	list_for_each_entry_rcu(delegation, &server->delegations, super_list) {
+ 		if (test_bit(NFS_DELEGATION_INODE_FREEING,
+ 					&delegation->flags) ||
+@@ -1172,7 +1168,7 @@ static int nfs_server_reap_unclaimed_delegations(struct nfs_server *server,
+ 			continue;
+ 		inode = nfs_delegation_grab_inode(delegation);
+ 		if (inode == NULL)
+-			goto restart_locked;
++			continue;
+ 		delegation = nfs_start_delegation_return_locked(NFS_I(inode));
+ 		rcu_read_unlock();
+ 		if (delegation != NULL) {
+@@ -1295,7 +1291,6 @@ static int nfs_server_reap_expired_delegations(struct nfs_server *server,
+ 
+ restart:
+ 	rcu_read_lock();
+-restart_locked:
+ 	list_for_each_entry_rcu(delegation, &server->delegations, super_list) {
+ 		if (test_bit(NFS_DELEGATION_INODE_FREEING,
+ 					&delegation->flags) ||
+@@ -1307,7 +1302,7 @@ static int nfs_server_reap_expired_delegations(struct nfs_server *server,
+ 			continue;
+ 		inode = nfs_delegation_grab_inode(delegation);
+ 		if (inode == NULL)
+-			goto restart_locked;
++			continue;
+ 		spin_lock(&delegation->lock);
+ 		cred = get_cred_rcu(delegation->cred);
+ 		nfs4_stateid_copy(&stateid, &delegation->stateid);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index bff9d6600741e2..3b76e89b6d028f 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -9873,13 +9873,16 @@ static void nfs4_layoutreturn_done(struct rpc_task *task, void *calldata)
+ 		fallthrough;
+ 	default:
+ 		task->tk_status = 0;
++		lrp->res.lrs_present = 0;
+ 		fallthrough;
+ 	case 0:
+ 		break;
+ 	case -NFS4ERR_DELAY:
+-		if (nfs4_async_handle_error(task, server, NULL, NULL) != -EAGAIN)
+-			break;
+-		goto out_restart;
++		if (nfs4_async_handle_error(task, server, NULL, NULL) ==
++		    -EAGAIN)
++			goto out_restart;
++		lrp->res.lrs_present = 0;
++		break;
+ 	}
+ 	return;
+ out_restart:
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index b5834728f31b32..d1e3c17dcceb0d 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1172,10 +1172,9 @@ void pnfs_layoutreturn_free_lsegs(struct pnfs_layout_hdr *lo,
+ 	LIST_HEAD(freeme);
+ 
+ 	spin_lock(&inode->i_lock);
+-	if (!pnfs_layout_is_valid(lo) ||
+-	    !nfs4_stateid_match_other(&lo->plh_stateid, arg_stateid))
++	if (!nfs4_stateid_match_other(&lo->plh_stateid, arg_stateid))
+ 		goto out_unlock;
+-	if (stateid) {
++	if (stateid && pnfs_layout_is_valid(lo)) {
+ 		u32 seq = be32_to_cpu(arg_stateid->seqid);
+ 
+ 		pnfs_mark_matching_lsegs_invalid(lo, &freeme, range, seq);
+diff --git a/fs/smb/client/cifsencrypt.c b/fs/smb/client/cifsencrypt.c
+index 6322f0f68a176b..b0473c2567fe68 100644
+--- a/fs/smb/client/cifsencrypt.c
++++ b/fs/smb/client/cifsencrypt.c
+@@ -129,7 +129,7 @@ static ssize_t cifs_shash_xarray(const struct iov_iter *iter, ssize_t maxsize,
+ 			for (j = foffset / PAGE_SIZE; j < npages; j++) {
+ 				len = min_t(size_t, maxsize, PAGE_SIZE - offset);
+ 				p = kmap_local_page(folio_page(folio, j));
+-				ret = crypto_shash_update(shash, p, len);
++				ret = crypto_shash_update(shash, p + offset, len);
+ 				kunmap_local(p);
+ 				if (ret < 0)
+ 					return ret;
+diff --git a/fs/smb/server/mgmt/share_config.c b/fs/smb/server/mgmt/share_config.c
+index e0a6b758094fc5..d8d03070ae44b4 100644
+--- a/fs/smb/server/mgmt/share_config.c
++++ b/fs/smb/server/mgmt/share_config.c
+@@ -15,6 +15,7 @@
+ #include "share_config.h"
+ #include "user_config.h"
+ #include "user_session.h"
++#include "../connection.h"
+ #include "../transport_ipc.h"
+ #include "../misc.h"
+ 
+@@ -120,12 +121,13 @@ static int parse_veto_list(struct ksmbd_share_config *share,
+ 	return 0;
+ }
+ 
+-static struct ksmbd_share_config *share_config_request(struct unicode_map *um,
++static struct ksmbd_share_config *share_config_request(struct ksmbd_work *work,
+ 						       const char *name)
+ {
+ 	struct ksmbd_share_config_response *resp;
+ 	struct ksmbd_share_config *share = NULL;
+ 	struct ksmbd_share_config *lookup;
++	struct unicode_map *um = work->conn->um;
+ 	int ret;
+ 
+ 	resp = ksmbd_ipc_share_config_request(name);
+@@ -181,7 +183,14 @@ static struct ksmbd_share_config *share_config_request(struct unicode_map *um,
+ 				      KSMBD_SHARE_CONFIG_VETO_LIST(resp),
+ 				      resp->veto_list_sz);
+ 		if (!ret && share->path) {
++			if (__ksmbd_override_fsids(work, share)) {
++				kill_share(share);
++				share = NULL;
++				goto out;
++			}
++
+ 			ret = kern_path(share->path, 0, &share->vfs_path);
++			ksmbd_revert_fsids(work);
+ 			if (ret) {
+ 				ksmbd_debug(SMB, "failed to access '%s'\n",
+ 					    share->path);
+@@ -214,7 +223,7 @@ static struct ksmbd_share_config *share_config_request(struct unicode_map *um,
+ 	return share;
+ }
+ 
+-struct ksmbd_share_config *ksmbd_share_config_get(struct unicode_map *um,
++struct ksmbd_share_config *ksmbd_share_config_get(struct ksmbd_work *work,
+ 						  const char *name)
+ {
+ 	struct ksmbd_share_config *share;
+@@ -227,7 +236,7 @@ struct ksmbd_share_config *ksmbd_share_config_get(struct unicode_map *um,
+ 
+ 	if (share)
+ 		return share;
+-	return share_config_request(um, name);
++	return share_config_request(work, name);
+ }
+ 
+ bool ksmbd_share_veto_filename(struct ksmbd_share_config *share,
+diff --git a/fs/smb/server/mgmt/share_config.h b/fs/smb/server/mgmt/share_config.h
+index 5f591751b92365..d4ac2dd4de2040 100644
+--- a/fs/smb/server/mgmt/share_config.h
++++ b/fs/smb/server/mgmt/share_config.h
+@@ -11,6 +11,8 @@
+ #include <linux/path.h>
+ #include <linux/unicode.h>
+ 
++struct ksmbd_work;
++
+ struct ksmbd_share_config {
+ 	char			*name;
+ 	char			*path;
+@@ -68,7 +70,7 @@ static inline void ksmbd_share_config_put(struct ksmbd_share_config *share)
+ 	__ksmbd_share_config_put(share);
+ }
+ 
+-struct ksmbd_share_config *ksmbd_share_config_get(struct unicode_map *um,
++struct ksmbd_share_config *ksmbd_share_config_get(struct ksmbd_work *work,
+ 						  const char *name);
+ bool ksmbd_share_veto_filename(struct ksmbd_share_config *share,
+ 			       const char *filename);
+diff --git a/fs/smb/server/mgmt/tree_connect.c b/fs/smb/server/mgmt/tree_connect.c
+index d2c81a8a11dda1..94a52a75014a43 100644
+--- a/fs/smb/server/mgmt/tree_connect.c
++++ b/fs/smb/server/mgmt/tree_connect.c
+@@ -16,17 +16,18 @@
+ #include "user_session.h"
+ 
+ struct ksmbd_tree_conn_status
+-ksmbd_tree_conn_connect(struct ksmbd_conn *conn, struct ksmbd_session *sess,
+-			const char *share_name)
++ksmbd_tree_conn_connect(struct ksmbd_work *work, const char *share_name)
+ {
+ 	struct ksmbd_tree_conn_status status = {-ENOENT, NULL};
+ 	struct ksmbd_tree_connect_response *resp = NULL;
+ 	struct ksmbd_share_config *sc;
+ 	struct ksmbd_tree_connect *tree_conn = NULL;
+ 	struct sockaddr *peer_addr;
++	struct ksmbd_conn *conn = work->conn;
++	struct ksmbd_session *sess = work->sess;
+ 	int ret;
+ 
+-	sc = ksmbd_share_config_get(conn->um, share_name);
++	sc = ksmbd_share_config_get(work, share_name);
+ 	if (!sc)
+ 		return status;
+ 
+@@ -61,7 +62,7 @@ ksmbd_tree_conn_connect(struct ksmbd_conn *conn, struct ksmbd_session *sess,
+ 		struct ksmbd_share_config *new_sc;
+ 
+ 		ksmbd_share_config_del(sc);
+-		new_sc = ksmbd_share_config_get(conn->um, share_name);
++		new_sc = ksmbd_share_config_get(work, share_name);
+ 		if (!new_sc) {
+ 			pr_err("Failed to update stale share config\n");
+ 			status.ret = -ESTALE;
+diff --git a/fs/smb/server/mgmt/tree_connect.h b/fs/smb/server/mgmt/tree_connect.h
+index 6377a70b811c89..a42cdd05104114 100644
+--- a/fs/smb/server/mgmt/tree_connect.h
++++ b/fs/smb/server/mgmt/tree_connect.h
+@@ -13,6 +13,7 @@
+ struct ksmbd_share_config;
+ struct ksmbd_user;
+ struct ksmbd_conn;
++struct ksmbd_work;
+ 
+ enum {
+ 	TREE_NEW = 0,
+@@ -50,8 +51,7 @@ static inline int test_tree_conn_flag(struct ksmbd_tree_connect *tree_conn,
+ struct ksmbd_session;
+ 
+ struct ksmbd_tree_conn_status
+-ksmbd_tree_conn_connect(struct ksmbd_conn *conn, struct ksmbd_session *sess,
+-			const char *share_name);
++ksmbd_tree_conn_connect(struct ksmbd_work *work, const char *share_name);
+ void ksmbd_tree_connect_put(struct ksmbd_tree_connect *tcon);
+ 
+ int ksmbd_tree_conn_disconnect(struct ksmbd_session *sess,
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 39dfecf082ba08..adfd6046275a56 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1959,7 +1959,7 @@ int smb2_tree_connect(struct ksmbd_work *work)
+ 	ksmbd_debug(SMB, "tree connect request for tree %s treename %s\n",
+ 		    name, treename);
+ 
+-	status = ksmbd_tree_conn_connect(conn, sess, name);
++	status = ksmbd_tree_conn_connect(work, name);
+ 	if (status.ret == KSMBD_TREE_CONN_STATUS_OK)
+ 		rsp->hdr.Id.SyncId.TreeId = cpu_to_le32(status.tree_conn->id);
+ 	else
+@@ -3714,7 +3714,7 @@ int smb2_open(struct ksmbd_work *work)
+ 	kfree(name);
+ 	kfree(lc);
+ 
+-	return 0;
++	return rc;
+ }
+ 
+ static int readdir_info_level_struct_sz(int info_level)
+@@ -5601,6 +5601,11 @@ int smb2_query_info(struct ksmbd_work *work)
+ 
+ 	ksmbd_debug(SMB, "GOT query info request\n");
+ 
++	if (ksmbd_override_fsids(work)) {
++		rc = -ENOMEM;
++		goto err_out;
++	}
++
+ 	switch (req->InfoType) {
+ 	case SMB2_O_INFO_FILE:
+ 		ksmbd_debug(SMB, "GOT SMB2_O_INFO_FILE\n");
+@@ -5619,6 +5624,7 @@ int smb2_query_info(struct ksmbd_work *work)
+ 			    req->InfoType);
+ 		rc = -EOPNOTSUPP;
+ 	}
++	ksmbd_revert_fsids(work);
+ 
+ 	if (!rc) {
+ 		rsp->StructureSize = cpu_to_le16(9);
+@@ -5628,6 +5634,7 @@ int smb2_query_info(struct ksmbd_work *work)
+ 					le32_to_cpu(rsp->OutputBufferLength));
+ 	}
+ 
++err_out:
+ 	if (rc < 0) {
+ 		if (rc == -EACCES)
+ 			rsp->hdr.Status = STATUS_ACCESS_DENIED;
+diff --git a/fs/smb/server/smb_common.c b/fs/smb/server/smb_common.c
+index 474dadf6b7b8bc..13818ecb6e1b2f 100644
+--- a/fs/smb/server/smb_common.c
++++ b/fs/smb/server/smb_common.c
+@@ -732,10 +732,10 @@ bool is_asterisk(char *p)
+ 	return p && p[0] == '*';
+ }
+ 
+-int ksmbd_override_fsids(struct ksmbd_work *work)
++int __ksmbd_override_fsids(struct ksmbd_work *work,
++		struct ksmbd_share_config *share)
+ {
+ 	struct ksmbd_session *sess = work->sess;
+-	struct ksmbd_share_config *share = work->tcon->share_conf;
+ 	struct cred *cred;
+ 	struct group_info *gi;
+ 	unsigned int uid;
+@@ -775,6 +775,11 @@ int ksmbd_override_fsids(struct ksmbd_work *work)
+ 	return 0;
+ }
+ 
++int ksmbd_override_fsids(struct ksmbd_work *work)
++{
++	return __ksmbd_override_fsids(work, work->tcon->share_conf);
++}
++
+ void ksmbd_revert_fsids(struct ksmbd_work *work)
+ {
+ 	const struct cred *cred;
+diff --git a/fs/smb/server/smb_common.h b/fs/smb/server/smb_common.h
+index f1092519c0c288..4a3148b0167f54 100644
+--- a/fs/smb/server/smb_common.h
++++ b/fs/smb/server/smb_common.h
+@@ -447,6 +447,8 @@ int ksmbd_extract_shortname(struct ksmbd_conn *conn,
+ int ksmbd_smb_negotiate_common(struct ksmbd_work *work, unsigned int command);
+ 
+ int ksmbd_smb_check_shared_mode(struct file *filp, struct ksmbd_file *curr_fp);
++int __ksmbd_override_fsids(struct ksmbd_work *work,
++			   struct ksmbd_share_config *share);
+ int ksmbd_override_fsids(struct ksmbd_work *work);
+ void ksmbd_revert_fsids(struct ksmbd_work *work);
+ 
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index d45bfb7cf81d0f..6ffafd596d39d3 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -1027,7 +1027,8 @@ struct mlx5_ifc_qos_cap_bits {
+ 
+ 	u8         max_tsar_bw_share[0x20];
+ 
+-	u8         reserved_at_100[0x20];
++	u8         nic_element_type[0x10];
++	u8         nic_tsar_type[0x10];
+ 
+ 	u8         reserved_at_120[0x3];
+ 	u8         log_meter_aso_granularity[0x5];
+@@ -3912,10 +3913,11 @@ enum {
+ };
+ 
+ enum {
+-	ELEMENT_TYPE_CAP_MASK_TASR		= 1 << 0,
++	ELEMENT_TYPE_CAP_MASK_TSAR		= 1 << 0,
+ 	ELEMENT_TYPE_CAP_MASK_VPORT		= 1 << 1,
+ 	ELEMENT_TYPE_CAP_MASK_VPORT_TC		= 1 << 2,
+ 	ELEMENT_TYPE_CAP_MASK_PARA_VPORT_TC	= 1 << 3,
++	ELEMENT_TYPE_CAP_MASK_QUEUE_GROUP	= 1 << 4,
+ };
+ 
+ struct mlx5_ifc_scheduling_context_bits {
+@@ -4623,6 +4625,12 @@ enum {
+ 	TSAR_ELEMENT_TSAR_TYPE_ETS = 0x2,
+ };
+ 
++enum {
++	TSAR_TYPE_CAP_MASK_DWRR		= 1 << 0,
++	TSAR_TYPE_CAP_MASK_ROUND_ROBIN	= 1 << 1,
++	TSAR_TYPE_CAP_MASK_ETS		= 1 << 2,
++};
++
+ struct mlx5_ifc_tsar_element_bits {
+ 	u8         reserved_at_0[0x8];
+ 	u8         tsar_type[0x8];
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 6c395a2600e8d1..276ca543ef44d8 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -173,7 +173,8 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 			break;
+ 		case SKB_GSO_TCPV4:
+ 		case SKB_GSO_TCPV6:
+-			if (skb->csum_offset != offsetof(struct tcphdr, check))
++			if (skb->ip_summed == CHECKSUM_PARTIAL &&
++			    skb->csum_offset != offsetof(struct tcphdr, check))
+ 				return -EINVAL;
+ 			break;
+ 		}
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index e8f24483e05f04..c03a2bc106d4bc 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -223,6 +223,13 @@ static cpumask_var_t	isolated_cpus;
+ /* List of remote partition root children */
+ static struct list_head remote_children;
+ 
++/*
++ * A flag to force sched domain rebuild at the end of an operation while
++ * inhibiting it in the intermediate stages when set. Currently it is only
++ * set in hotplug code.
++ */
++static bool force_sd_rebuild;
++
+ /*
+  * Partition root states:
+  *
+@@ -1442,7 +1449,7 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs)
+ 			clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
+ 	}
+ 
+-	if (rebuild_domains)
++	if (rebuild_domains && !force_sd_rebuild)
+ 		rebuild_sched_domains_locked();
+ }
+ 
+@@ -1805,7 +1812,7 @@ static void remote_partition_check(struct cpuset *cs, struct cpumask *newmask,
+ 			remote_partition_disable(child, tmp);
+ 			disable_cnt++;
+ 		}
+-	if (disable_cnt)
++	if (disable_cnt && !force_sd_rebuild)
+ 		rebuild_sched_domains_locked();
+ }
+ 
+@@ -2415,7 +2422,8 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
+ 	}
+ 	rcu_read_unlock();
+ 
+-	if (need_rebuild_sched_domains && !(flags & HIER_NO_SD_REBUILD))
++	if (need_rebuild_sched_domains && !(flags & HIER_NO_SD_REBUILD) &&
++	    !force_sd_rebuild)
+ 		rebuild_sched_domains_locked();
+ }
+ 
+@@ -3077,7 +3085,8 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
+ 	cs->flags = trialcs->flags;
+ 	spin_unlock_irq(&callback_lock);
+ 
+-	if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed)
++	if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed &&
++	    !force_sd_rebuild)
+ 		rebuild_sched_domains_locked();
+ 
+ 	if (spread_flag_changed)
+@@ -4478,11 +4487,9 @@ hotplug_update_tasks(struct cpuset *cs,
+ 		update_tasks_nodemask(cs);
+ }
+ 
+-static bool force_rebuild;
+-
+ void cpuset_force_rebuild(void)
+ {
+-	force_rebuild = true;
++	force_sd_rebuild = true;
+ }
+ 
+ /**
+@@ -4630,15 +4637,9 @@ static void cpuset_handle_hotplug(void)
+ 		       !cpumask_empty(subpartitions_cpus);
+ 	mems_updated = !nodes_equal(top_cpuset.effective_mems, new_mems);
+ 
+-	/*
+-	 * In the rare case that hotplug removes all the cpus in
+-	 * subpartitions_cpus, we assumed that cpus are updated.
+-	 */
+-	if (!cpus_updated && !cpumask_empty(subpartitions_cpus))
+-		cpus_updated = true;
+-
+ 	/* For v1, synchronize cpus_allowed to cpu_active_mask */
+ 	if (cpus_updated) {
++		cpuset_force_rebuild();
+ 		spin_lock_irq(&callback_lock);
+ 		if (!on_dfl)
+ 			cpumask_copy(top_cpuset.cpus_allowed, &new_cpus);
+@@ -4694,8 +4695,8 @@ static void cpuset_handle_hotplug(void)
+ 	}
+ 
+ 	/* rebuild sched domains if cpus_allowed has changed */
+-	if (cpus_updated || force_rebuild) {
+-		force_rebuild = false;
++	if (force_sd_rebuild) {
++		force_sd_rebuild = false;
+ 		rebuild_sched_domains_cpuslocked();
+ 	}
+ 
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 0d88922f8763c9..4481f8913dcc35 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -794,6 +794,24 @@ static int validate_module_probe_symbol(const char *modname, const char *symbol)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_MODULES
++/* Return NULL if the module is not loaded or under unloading. */
++static struct module *try_module_get_by_name(const char *name)
++{
++	struct module *mod;
++
++	rcu_read_lock_sched();
++	mod = find_module(name);
++	if (mod && !try_module_get(mod))
++		mod = NULL;
++	rcu_read_unlock_sched();
++
++	return mod;
++}
++#else
++#define try_module_get_by_name(name)	(NULL)
++#endif
++
+ static int validate_probe_symbol(char *symbol)
+ {
+ 	struct module *mod = NULL;
+@@ -805,12 +823,7 @@ static int validate_probe_symbol(char *symbol)
+ 		modname = symbol;
+ 		symbol = p + 1;
+ 		*p = '\0';
+-		/* Return 0 (defer) if the module does not exist yet. */
+-		rcu_read_lock_sched();
+-		mod = find_module(modname);
+-		if (mod && !try_module_get(mod))
+-			mod = NULL;
+-		rcu_read_unlock_sched();
++		mod = try_module_get_by_name(modname);
+ 		if (!mod)
+ 			goto out;
+ 	}
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index 5b06f67879f5fa..461b4ab60b501a 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -228,6 +228,11 @@ static inline struct osnoise_variables *this_cpu_osn_var(void)
+ 	return this_cpu_ptr(&per_cpu_osnoise_var);
+ }
+ 
++/*
++ * Protect the interface.
++ */
++static struct mutex interface_lock;
++
+ #ifdef CONFIG_TIMERLAT_TRACER
+ /*
+  * Runtime information for the timer mode.
+@@ -252,11 +257,6 @@ static inline struct timerlat_variables *this_cpu_tmr_var(void)
+ 	return this_cpu_ptr(&per_cpu_timerlat_var);
+ }
+ 
+-/*
+- * Protect the interface.
+- */
+-static struct mutex interface_lock;
+-
+ /*
+  * tlat_var_reset - Reset the values of the given timerlat_variables
+  */
+diff --git a/mm/memory.c b/mm/memory.c
+index 72d00a38585d02..7a898b85788dd9 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -2581,11 +2581,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
+ 	return 0;
+ }
+ 
+-/*
+- * Variant of remap_pfn_range that does not call track_pfn_remap.  The caller
+- * must have pre-validated the caching bits of the pgprot_t.
+- */
+-int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
++static int remap_pfn_range_internal(struct vm_area_struct *vma, unsigned long addr,
+ 		unsigned long pfn, unsigned long size, pgprot_t prot)
+ {
+ 	pgd_t *pgd;
+@@ -2638,6 +2634,27 @@ int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
+ 	return 0;
+ }
+ 
++/*
++ * Variant of remap_pfn_range that does not call track_pfn_remap.  The caller
++ * must have pre-validated the caching bits of the pgprot_t.
++ */
++int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
++		unsigned long pfn, unsigned long size, pgprot_t prot)
++{
++	int error = remap_pfn_range_internal(vma, addr, pfn, size, prot);
++
++	if (!error)
++		return 0;
++
++	/*
++	 * A partial pfn range mapping is dangerous: it does not
++	 * maintain page reference counts, and callers may free
++	 * pages due to the error. So zap it early.
++	 */
++	zap_page_range_single(vma, addr, size, NULL);
++	return error;
++}
++
+ /**
+  * remap_pfn_range - remap kernel memory to userspace
+  * @vma: user vma to map to
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index e6904288d40dca..b3191968e53af3 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -73,9 +73,15 @@ static void hsr_check_announce(struct net_device *hsr_dev)
+ 			mod_timer(&hsr->announce_timer, jiffies +
+ 				  msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL));
+ 		}
++
++		if (hsr->redbox && !timer_pending(&hsr->announce_proxy_timer))
++			mod_timer(&hsr->announce_proxy_timer, jiffies +
++				  msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL) / 2);
+ 	} else {
+ 		/* Deactivate the announce timer  */
+ 		timer_delete(&hsr->announce_timer);
++		if (hsr->redbox)
++			timer_delete(&hsr->announce_proxy_timer);
+ 	}
+ }
+ 
+@@ -279,10 +285,11 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master)
+ 	return NULL;
+ }
+ 
+-static void send_hsr_supervision_frame(struct hsr_port *master,
+-				       unsigned long *interval)
++static void send_hsr_supervision_frame(struct hsr_port *port,
++				       unsigned long *interval,
++				       const unsigned char *addr)
+ {
+-	struct hsr_priv *hsr = master->hsr;
++	struct hsr_priv *hsr = port->hsr;
+ 	__u8 type = HSR_TLV_LIFE_CHECK;
+ 	struct hsr_sup_payload *hsr_sp;
+ 	struct hsr_sup_tlv *hsr_stlv;
+@@ -296,9 +303,9 @@ static void send_hsr_supervision_frame(struct hsr_port *master,
+ 		hsr->announce_count++;
+ 	}
+ 
+-	skb = hsr_init_skb(master);
++	skb = hsr_init_skb(port);
+ 	if (!skb) {
+-		netdev_warn_once(master->dev, "HSR: Could not send supervision frame\n");
++		netdev_warn_once(port->dev, "HSR: Could not send supervision frame\n");
+ 		return;
+ 	}
+ 
+@@ -321,11 +328,12 @@ static void send_hsr_supervision_frame(struct hsr_port *master,
+ 	hsr_stag->tlv.HSR_TLV_length = hsr->prot_version ?
+ 				sizeof(struct hsr_sup_payload) : 12;
+ 
+-	/* Payload: MacAddressA */
++	/* Payload: MacAddressA / SAN MAC from ProxyNodeTable */
+ 	hsr_sp = skb_put(skb, sizeof(struct hsr_sup_payload));
+-	ether_addr_copy(hsr_sp->macaddress_A, master->dev->dev_addr);
++	ether_addr_copy(hsr_sp->macaddress_A, addr);
+ 
+-	if (hsr->redbox) {
++	if (hsr->redbox &&
++	    hsr_is_node_in_db(&hsr->proxy_node_db, addr)) {
+ 		hsr_stlv = skb_put(skb, sizeof(struct hsr_sup_tlv));
+ 		hsr_stlv->HSR_TLV_type = PRP_TLV_REDBOX_MAC;
+ 		hsr_stlv->HSR_TLV_length = sizeof(struct hsr_sup_payload);
+@@ -340,13 +348,14 @@ static void send_hsr_supervision_frame(struct hsr_port *master,
+ 		return;
+ 	}
+ 
+-	hsr_forward_skb(skb, master);
++	hsr_forward_skb(skb, port);
+ 	spin_unlock_bh(&hsr->seqnr_lock);
+ 	return;
+ }
+ 
+ static void send_prp_supervision_frame(struct hsr_port *master,
+-				       unsigned long *interval)
++				       unsigned long *interval,
++				       const unsigned char *addr)
+ {
+ 	struct hsr_priv *hsr = master->hsr;
+ 	struct hsr_sup_payload *hsr_sp;
+@@ -396,7 +405,7 @@ static void hsr_announce(struct timer_list *t)
+ 
+ 	rcu_read_lock();
+ 	master = hsr_port_get_hsr(hsr, HSR_PT_MASTER);
+-	hsr->proto_ops->send_sv_frame(master, &interval);
++	hsr->proto_ops->send_sv_frame(master, &interval, master->dev->dev_addr);
+ 
+ 	if (is_admin_up(master->dev))
+ 		mod_timer(&hsr->announce_timer, jiffies + interval);
+@@ -404,6 +413,41 @@ static void hsr_announce(struct timer_list *t)
+ 	rcu_read_unlock();
+ }
+ 
++/* Announce (supervision frame) timer function for RedBox
++ */
++static void hsr_proxy_announce(struct timer_list *t)
++{
++	struct hsr_priv *hsr = from_timer(hsr, t, announce_proxy_timer);
++	struct hsr_port *interlink;
++	unsigned long interval = 0;
++	struct hsr_node *node;
++
++	rcu_read_lock();
++	/* RedBOX sends supervisory frames to HSR network with MAC addresses
++	 * of SAN nodes stored in ProxyNodeTable.
++	 */
++	interlink = hsr_port_get_hsr(hsr, HSR_PT_INTERLINK);
++	if (!interlink)
++		goto done;
++
++	list_for_each_entry_rcu(node, &hsr->proxy_node_db, mac_list) {
++		if (hsr_addr_is_redbox(hsr, node->macaddress_A))
++			continue;
++		hsr->proto_ops->send_sv_frame(interlink, &interval,
++					      node->macaddress_A);
++	}
++
++	if (is_admin_up(interlink->dev)) {
++		if (!interval)
++			interval = msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
++
++		mod_timer(&hsr->announce_proxy_timer, jiffies + interval);
++	}
++
++done:
++	rcu_read_unlock();
++}
++
+ void hsr_del_ports(struct hsr_priv *hsr)
+ {
+ 	struct hsr_port *port;
+@@ -590,6 +634,7 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
+ 	timer_setup(&hsr->announce_timer, hsr_announce, 0);
+ 	timer_setup(&hsr->prune_timer, hsr_prune_nodes, 0);
+ 	timer_setup(&hsr->prune_proxy_timer, hsr_prune_proxy_nodes, 0);
++	timer_setup(&hsr->announce_proxy_timer, hsr_proxy_announce, 0);
+ 
+ 	ether_addr_copy(hsr->sup_multicast_addr, def_multicast_addr);
+ 	hsr->sup_multicast_addr[ETH_ALEN - 1] = multicast_spec;
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index 05a61b8286ec13..960ef386bc3a14 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -117,6 +117,35 @@ static bool is_supervision_frame(struct hsr_priv *hsr, struct sk_buff *skb)
+ 	return true;
+ }
+ 
++static bool is_proxy_supervision_frame(struct hsr_priv *hsr,
++				       struct sk_buff *skb)
++{
++	struct hsr_sup_payload *payload;
++	struct ethhdr *eth_hdr;
++	u16 total_length = 0;
++
++	eth_hdr = (struct ethhdr *)skb_mac_header(skb);
++
++	/* Get the HSR protocol revision. */
++	if (eth_hdr->h_proto == htons(ETH_P_HSR))
++		total_length = sizeof(struct hsrv1_ethhdr_sp);
++	else
++		total_length = sizeof(struct hsrv0_ethhdr_sp);
++
++	if (!pskb_may_pull(skb, total_length + sizeof(struct hsr_sup_payload)))
++		return false;
++
++	skb_pull(skb, total_length);
++	payload = (struct hsr_sup_payload *)skb->data;
++	skb_push(skb, total_length);
++
++	/* For RedBox (HSR-SAN) check if we have received the supervision
++	 * frame with MAC addresses from own ProxyNodeTable.
++	 */
++	return hsr_is_node_in_db(&hsr->proxy_node_db,
++				 payload->macaddress_A);
++}
++
+ static struct sk_buff *create_stripped_skb_hsr(struct sk_buff *skb_in,
+ 					       struct hsr_frame_info *frame)
+ {
+@@ -499,7 +528,8 @@ static void hsr_forward_do(struct hsr_frame_info *frame)
+ 					   frame->sequence_nr))
+ 			continue;
+ 
+-		if (frame->is_supervision && port->type == HSR_PT_MASTER) {
++		if (frame->is_supervision && port->type == HSR_PT_MASTER &&
++		    !frame->is_proxy_supervision) {
+ 			hsr_handle_sup_frame(frame);
+ 			continue;
+ 		}
+@@ -637,6 +667,9 @@ static int fill_frame_info(struct hsr_frame_info *frame,
+ 
+ 	memset(frame, 0, sizeof(*frame));
+ 	frame->is_supervision = is_supervision_frame(port->hsr, skb);
++	if (frame->is_supervision && hsr->redbox)
++		frame->is_proxy_supervision =
++			is_proxy_supervision_frame(port->hsr, skb);
+ 
+ 	n_db = &hsr->node_db;
+ 	if (port->type == HSR_PT_INTERLINK)
+@@ -688,7 +721,7 @@ void hsr_forward_skb(struct sk_buff *skb, struct hsr_port *port)
+ 	/* Gets called for ingress frames as well as egress from master port.
+ 	 * So check and increment stats for master port only here.
+ 	 */
+-	if (port->type == HSR_PT_MASTER) {
++	if (port->type == HSR_PT_MASTER || port->type == HSR_PT_INTERLINK) {
+ 		port->dev->stats.tx_packets++;
+ 		port->dev->stats.tx_bytes += skb->len;
+ 	}
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index 614df964979407..73bc6f659812f6 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -36,6 +36,14 @@ static bool seq_nr_after(u16 a, u16 b)
+ #define seq_nr_before(a, b)		seq_nr_after((b), (a))
+ #define seq_nr_before_or_eq(a, b)	(!seq_nr_after((a), (b)))
+ 
++bool hsr_addr_is_redbox(struct hsr_priv *hsr, unsigned char *addr)
++{
++	if (!hsr->redbox || !is_valid_ether_addr(hsr->macaddress_redbox))
++		return false;
++
++	return ether_addr_equal(addr, hsr->macaddress_redbox);
++}
++
+ bool hsr_addr_is_self(struct hsr_priv *hsr, unsigned char *addr)
+ {
+ 	struct hsr_self_node *sn;
+@@ -591,6 +599,10 @@ void hsr_prune_proxy_nodes(struct timer_list *t)
+ 
+ 	spin_lock_bh(&hsr->list_lock);
+ 	list_for_each_entry_safe(node, tmp, &hsr->proxy_node_db, mac_list) {
++		/* Don't prune RedBox node. */
++		if (hsr_addr_is_redbox(hsr, node->macaddress_A))
++			continue;
++
+ 		timestamp = node->time_in[HSR_PT_INTERLINK];
+ 
+ 		/* Prune old entries */
+diff --git a/net/hsr/hsr_framereg.h b/net/hsr/hsr_framereg.h
+index 7619e31c1d2de2..993fa950d81449 100644
+--- a/net/hsr/hsr_framereg.h
++++ b/net/hsr/hsr_framereg.h
+@@ -22,6 +22,7 @@ struct hsr_frame_info {
+ 	struct hsr_node *node_src;
+ 	u16 sequence_nr;
+ 	bool is_supervision;
++	bool is_proxy_supervision;
+ 	bool is_vlan;
+ 	bool is_local_dest;
+ 	bool is_local_exclusive;
+@@ -35,6 +36,7 @@ struct hsr_node *hsr_get_node(struct hsr_port *port, struct list_head *node_db,
+ 			      enum hsr_port_type rx_port);
+ void hsr_handle_sup_frame(struct hsr_frame_info *frame);
+ bool hsr_addr_is_self(struct hsr_priv *hsr, unsigned char *addr);
++bool hsr_addr_is_redbox(struct hsr_priv *hsr, unsigned char *addr);
+ 
+ void hsr_addr_subst_source(struct hsr_node *node, struct sk_buff *skb);
+ void hsr_addr_subst_dest(struct hsr_node *node_src, struct sk_buff *skb,
+diff --git a/net/hsr/hsr_main.h b/net/hsr/hsr_main.h
+index 23850b16d1eacd..ab1f8d35d9dcf5 100644
+--- a/net/hsr/hsr_main.h
++++ b/net/hsr/hsr_main.h
+@@ -170,7 +170,8 @@ struct hsr_node;
+ 
+ struct hsr_proto_ops {
+ 	/* format and send supervision frame */
+-	void (*send_sv_frame)(struct hsr_port *port, unsigned long *interval);
++	void (*send_sv_frame)(struct hsr_port *port, unsigned long *interval,
++			      const unsigned char addr[ETH_ALEN]);
+ 	void (*handle_san_frame)(bool san, enum hsr_port_type port,
+ 				 struct hsr_node *node);
+ 	bool (*drop_frame)(struct hsr_frame_info *frame, struct hsr_port *port);
+@@ -197,6 +198,7 @@ struct hsr_priv {
+ 	struct list_head	proxy_node_db;	/* RedBox HSR proxy nodes */
+ 	struct hsr_self_node	__rcu *self_node;	/* MACs of slaves */
+ 	struct timer_list	announce_timer;	/* Supervision frame dispatch */
++	struct timer_list	announce_proxy_timer;
+ 	struct timer_list	prune_timer;
+ 	struct timer_list	prune_proxy_timer;
+ 	int announce_count;
+diff --git a/net/hsr/hsr_netlink.c b/net/hsr/hsr_netlink.c
+index 898f18c6da53eb..f6ff0b61e08a96 100644
+--- a/net/hsr/hsr_netlink.c
++++ b/net/hsr/hsr_netlink.c
+@@ -131,6 +131,7 @@ static void hsr_dellink(struct net_device *dev, struct list_head *head)
+ 	del_timer_sync(&hsr->prune_timer);
+ 	del_timer_sync(&hsr->prune_proxy_timer);
+ 	del_timer_sync(&hsr->announce_timer);
++	timer_delete_sync(&hsr->announce_proxy_timer);
+ 
+ 	hsr_debugfs_term(hsr);
+ 	hsr_del_ports(hsr);
+diff --git a/net/ipv4/fou_core.c b/net/ipv4/fou_core.c
+index 78b869b314921b..3e30745e2c09ac 100644
+--- a/net/ipv4/fou_core.c
++++ b/net/ipv4/fou_core.c
+@@ -336,11 +336,11 @@ static struct sk_buff *gue_gro_receive(struct sock *sk,
+ 	struct gro_remcsum grc;
+ 	u8 proto;
+ 
++	skb_gro_remcsum_init(&grc);
++
+ 	if (!fou)
+ 		goto out;
+ 
+-	skb_gro_remcsum_init(&grc);
+-
+ 	off = skb_gro_offset(skb);
+ 	len = off + sizeof(*guehdr);
+ 
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index f891bc714668c5..ad935d34c973a9 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -334,15 +334,21 @@ mptcp_pm_del_add_timer(struct mptcp_sock *msk,
+ {
+ 	struct mptcp_pm_add_entry *entry;
+ 	struct sock *sk = (struct sock *)msk;
++	struct timer_list *add_timer = NULL;
+ 
+ 	spin_lock_bh(&msk->pm.lock);
+ 	entry = mptcp_lookup_anno_list_by_saddr(msk, addr);
+-	if (entry && (!check_id || entry->addr.id == addr->id))
++	if (entry && (!check_id || entry->addr.id == addr->id)) {
+ 		entry->retrans_times = ADD_ADDR_RETRANS_MAX;
++		add_timer = &entry->add_timer;
++	}
++	if (!check_id && entry)
++		list_del(&entry->list);
+ 	spin_unlock_bh(&msk->pm.lock);
+ 
+-	if (entry && (!check_id || entry->addr.id == addr->id))
+-		sk_stop_timer_sync(sk, &entry->add_timer);
++	/* no lock, because sk_stop_timer_sync() is calling del_timer_sync() */
++	if (add_timer)
++		sk_stop_timer_sync(sk, add_timer);
+ 
+ 	return entry;
+ }
+@@ -1462,7 +1468,6 @@ static bool remove_anno_list_by_saddr(struct mptcp_sock *msk,
+ 
+ 	entry = mptcp_pm_del_add_timer(msk, addr, false);
+ 	if (entry) {
+-		list_del(&entry->list);
+ 		kfree(entry);
+ 		return true;
+ 	}
+diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c
+index f30163e2ca6207..765ffd6e06bc41 100644
+--- a/net/netfilter/nft_socket.c
++++ b/net/netfilter/nft_socket.c
+@@ -110,13 +110,13 @@ static void nft_socket_eval(const struct nft_expr *expr,
+ 			*dest = READ_ONCE(sk->sk_mark);
+ 		} else {
+ 			regs->verdict.code = NFT_BREAK;
+-			return;
++			goto out_put_sk;
+ 		}
+ 		break;
+ 	case NFT_SOCKET_WILDCARD:
+ 		if (!sk_fullsock(sk)) {
+ 			regs->verdict.code = NFT_BREAK;
+-			return;
++			goto out_put_sk;
+ 		}
+ 		nft_socket_wildcard(pkt, regs, sk, dest);
+ 		break;
+@@ -124,7 +124,7 @@ static void nft_socket_eval(const struct nft_expr *expr,
+ 	case NFT_SOCKET_CGROUPV2:
+ 		if (!nft_sock_get_eval_cgroupv2(dest, sk, pkt, priv->level)) {
+ 			regs->verdict.code = NFT_BREAK;
+-			return;
++			goto out_put_sk;
+ 		}
+ 		break;
+ #endif
+@@ -133,6 +133,7 @@ static void nft_socket_eval(const struct nft_expr *expr,
+ 		regs->verdict.code = NFT_BREAK;
+ 	}
+ 
++out_put_sk:
+ 	if (sk != skb->sk)
+ 		sock_gen_put(sk);
+ }
+diff --git a/scripts/kconfig/merge_config.sh b/scripts/kconfig/merge_config.sh
+index 902eb429b9dbd9..0b7952471c18f6 100755
+--- a/scripts/kconfig/merge_config.sh
++++ b/scripts/kconfig/merge_config.sh
+@@ -167,6 +167,8 @@ for ORIG_MERGE_FILE in $MERGE_LIST ; do
+ 			sed -i "/$CFG[ =]/d" $MERGE_FILE
+ 		fi
+ 	done
++	# In case the previous file lacks a new line at the end
++	echo >> $TMP_FILE
+ 	cat $MERGE_FILE >> $TMP_FILE
+ done
+ 
+diff --git a/sound/soc/codecs/peb2466.c b/sound/soc/codecs/peb2466.c
+index 5dec69be0acb2e..06c83d2042f3e5 100644
+--- a/sound/soc/codecs/peb2466.c
++++ b/sound/soc/codecs/peb2466.c
+@@ -229,7 +229,8 @@ static int peb2466_reg_read(void *context, unsigned int reg, unsigned int *val)
+ 	case PEB2466_CMD_XOP:
+ 	case PEB2466_CMD_SOP:
+ 		ret = peb2466_read_byte(peb2466, reg, &tmp);
+-		*val = tmp;
++		if (!ret)
++			*val = tmp;
+ 		break;
+ 	default:
+ 		dev_err(&peb2466->spi->dev, "Not a XOP or SOP command\n");
+diff --git a/sound/soc/intel/common/soc-acpi-intel-lnl-match.c b/sound/soc/intel/common/soc-acpi-intel-lnl-match.c
+index e6ffcd5be6c5af..edfb668d0580d3 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-lnl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-lnl-match.c
+@@ -208,6 +208,7 @@ static const struct snd_soc_acpi_link_adr lnl_cs42l43_l0[] = {
+ 		.num_adr = ARRAY_SIZE(cs42l43_0_adr),
+ 		.adr_d = cs42l43_0_adr,
+ 	},
++	{}
+ };
+ 
+ static const struct snd_soc_acpi_link_adr lnl_rvp[] = {
+diff --git a/sound/soc/intel/common/soc-acpi-intel-mtl-match.c b/sound/soc/intel/common/soc-acpi-intel-mtl-match.c
+index 8e0ae3635a35d7..d4435a34a3a3f4 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-mtl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-mtl-match.c
+@@ -674,6 +674,7 @@ static const struct snd_soc_acpi_link_adr mtl_cs42l43_l0[] = {
+ 		.num_adr = ARRAY_SIZE(cs42l43_0_adr),
+ 		.adr_d = cs42l43_0_adr,
+ 	},
++	{}
+ };
+ 
+ static const struct snd_soc_acpi_link_adr mtl_cs42l43_cs35l56[] = {
+diff --git a/sound/soc/meson/axg-card.c b/sound/soc/meson/axg-card.c
+index 8c5605c1e34e8a..eb0302f2074070 100644
+--- a/sound/soc/meson/axg-card.c
++++ b/sound/soc/meson/axg-card.c
+@@ -104,7 +104,7 @@ static int axg_card_add_tdm_loopback(struct snd_soc_card *card,
+ 				     int *index)
+ {
+ 	struct meson_card *priv = snd_soc_card_get_drvdata(card);
+-	struct snd_soc_dai_link *pad = &card->dai_link[*index];
++	struct snd_soc_dai_link *pad;
+ 	struct snd_soc_dai_link *lb;
+ 	struct snd_soc_dai_link_component *dlc;
+ 	int ret;
+@@ -114,6 +114,7 @@ static int axg_card_add_tdm_loopback(struct snd_soc_card *card,
+ 	if (ret)
+ 		return ret;
+ 
++	pad = &card->dai_link[*index];
+ 	lb = &card->dai_link[*index + 1];
+ 
+ 	lb->name = devm_kasprintf(card->dev, GFP_KERNEL, "%s-lb", pad->name);
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+index e91b5936603018..c075d376fcabfd 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+@@ -1828,7 +1828,7 @@ static void unix_inet_redir_to_connected(int family, int type,
+ 	if (err)
+ 		return;
+ 
+-	if (socketpair(AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0, sfd))
++	if (socketpair(AF_UNIX, type | SOCK_NONBLOCK, 0, sfd))
+ 		goto close_cli0;
+ 	c1 = sfd[0], p1 = sfd[1];
+ 
+@@ -1840,7 +1840,6 @@ static void unix_inet_redir_to_connected(int family, int type,
+ close_cli0:
+ 	xclose(c0);
+ 	xclose(p0);
+-
+ }
+ 
+ static void unix_inet_skb_redir_to_connected(struct test_sockmap_listen *skel,
+diff --git a/tools/testing/selftests/net/lib/csum.c b/tools/testing/selftests/net/lib/csum.c
+index b9f3fc3c34263d..e0a34e5e8dd5c6 100644
+--- a/tools/testing/selftests/net/lib/csum.c
++++ b/tools/testing/selftests/net/lib/csum.c
+@@ -654,10 +654,16 @@ static int recv_verify_packet_ipv4(void *nh, int len)
+ {
+ 	struct iphdr *iph = nh;
+ 	uint16_t proto = cfg_encap ? IPPROTO_UDP : cfg_proto;
++	uint16_t ip_len;
+ 
+ 	if (len < sizeof(*iph) || iph->protocol != proto)
+ 		return -1;
+ 
++	ip_len = ntohs(iph->tot_len);
++	if (ip_len > len || ip_len < sizeof(*iph))
++		return -1;
++
++	len = ip_len;
+ 	iph_addr_p = &iph->saddr;
+ 	if (proto == IPPROTO_TCP)
+ 		return recv_verify_packet_tcp(iph + 1, len - sizeof(*iph));
+@@ -669,16 +675,22 @@ static int recv_verify_packet_ipv6(void *nh, int len)
+ {
+ 	struct ipv6hdr *ip6h = nh;
+ 	uint16_t proto = cfg_encap ? IPPROTO_UDP : cfg_proto;
++	uint16_t ip_len;
+ 
+ 	if (len < sizeof(*ip6h) || ip6h->nexthdr != proto)
+ 		return -1;
+ 
++	ip_len = ntohs(ip6h->payload_len);
++	if (ip_len > len - sizeof(*ip6h))
++		return -1;
++
++	len = ip_len;
+ 	iph_addr_p = &ip6h->saddr;
+ 
+ 	if (proto == IPPROTO_TCP)
+-		return recv_verify_packet_tcp(ip6h + 1, len - sizeof(*ip6h));
++		return recv_verify_packet_tcp(ip6h + 1, len);
+ 	else
+-		return recv_verify_packet_udp(ip6h + 1, len - sizeof(*ip6h));
++		return recv_verify_packet_udp(ip6h + 1, len);
+ }
+ 
+ /* return whether auxdata includes TP_STATUS_CSUM_VALID */
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index a4762c49a87861..cde041c93906df 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -3064,7 +3064,9 @@ fullmesh_tests()
+ 		pm_nl_set_limits $ns1 1 3
+ 		pm_nl_set_limits $ns2 1 3
+ 		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+-		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,fullmesh
++		if mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then
++			pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,fullmesh
++		fi
+ 		fullmesh=1 speed=slow \
+ 			run_tests $ns1 $ns2 10.0.1.1
+ 		chk_join_nr 3 3 3



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-24 18:52 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-24 18:52 UTC (permalink / raw
  To: gentoo-commits

commit:     bd01035c68fd9cf21dcfa87e5de209e0aa0f2950
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 24 18:51:40 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Sep 24 18:51:40 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bd01035c

dtrace patch for 6.10.X (CTF, modules.builtin.objs) p4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    2 +-
 2995_dtrace-6.10_p4.patch | 2373 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2374 insertions(+), 1 deletion(-)

diff --git a/0000_README b/0000_README
index f48df58b..23b2116a 100644
--- a/0000_README
+++ b/0000_README
@@ -127,7 +127,7 @@ Patch:  2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
 From:   https://lore.kernel.org/bpf/
 Desc:   libbpf: workaround -Wmaybe-uninitialized false positive
 
-Patch:  2995_dtrace-6.10_p3.patch
+Patch:  2995_dtrace-6.10_p4.patch
 From:   https://github.com/thesamesam/linux/tree/dtrace-sam/v2/6.10
 Desc:   dtrace patch for 6.10.X (CTF, modules.builtin.objs)
 

diff --git a/2995_dtrace-6.10_p4.patch b/2995_dtrace-6.10_p4.patch
new file mode 100644
index 00000000..be81fda1
--- /dev/null
+++ b/2995_dtrace-6.10_p4.patch
@@ -0,0 +1,2373 @@
+diff --git a/Documentation/dontdiff b/Documentation/dontdiff
+index 3c399f132e2db..75b9655e57914 100644
+--- a/Documentation/dontdiff
++++ b/Documentation/dontdiff
+@@ -179,7 +179,7 @@ mkutf8data
+ modpost
+ modules-only.symvers
+ modules.builtin
+-modules.builtin.modinfo
++modules.builtin.*
+ modules.nsdeps
+ modules.order
+ modversions.h*
+diff --git a/Documentation/kbuild/kbuild.rst b/Documentation/kbuild/kbuild.rst
+index 9c8d1d046ea56..4e2d666f167aa 100644
+--- a/Documentation/kbuild/kbuild.rst
++++ b/Documentation/kbuild/kbuild.rst
+@@ -17,11 +17,21 @@ modules.builtin
+ This file lists all modules that are built into the kernel. This is used
+ by modprobe to not fail when trying to load something builtin.
+ 
++modules.builtin.objs
++-----------------------
++This file maps each module that is built into the kernel to the object
++files that were used to build that module.
++
+ modules.builtin.modinfo
+ -----------------------
+ This file contains modinfo from all modules that are built into the kernel.
+ Unlike modinfo of a separate module, all fields are prefixed with module name.
+ 
++modules.builtin.ranges
++----------------------
++This file contains address offset ranges (per ELF section) for all modules
++that are built into the kernel. Together with System.map, it can be used
++to associate module names with symbols.
+ 
+ Environment variables
+ =====================
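
As a rough illustration of the format described above (module and object names
here are hypothetical; the layout follows the modules.builtin.objs generation
rule added to scripts/Makefile.vmlinux_o later in this patch), each entry maps
a built-in module's composite object to the objects it was linked from:

  crypto/lzo-rle.o: crypto/lzo-rle.o
  drivers/block/zram/zram.o: drivers/block/zram/zcomp.o drivers/block/zram/zram_drv.o
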
+diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
+index 5685d7bfe4d0f..68fd086ac6538 100644
+--- a/Documentation/process/changes.rst
++++ b/Documentation/process/changes.rst
+@@ -63,9 +63,14 @@ cpio                   any              cpio --version
+ GNU tar                1.28             tar --version
+ gtags (optional)       6.6.5            gtags --version
+ mkimage (optional)     2017.01          mkimage --version
++Python (optional)      3.5.x            python3 --version
++GNU AWK (optional)     5.1.0            gawk --version
++GNU C\ [#f2]_          12.0             gcc --version
++binutils\ [#f2]_       2.36             ld -v
+ ====================== ===============  ========================================
+ 
+ .. [#f1] Sphinx is needed only to build the Kernel documentation
++.. [#f2] These are needed at build-time when CONFIG_CTF is enabled
+ 
+ Kernel compilation
+ ******************
+@@ -198,6 +203,12 @@ platforms. The tool is available via the ``u-boot-tools`` package or can be
+ built from the U-Boot source code. See the instructions at
+ https://docs.u-boot.org/en/latest/build/tools.html#building-tools-for-linux
+ 
++GNU AWK
++-------
++
++GNU AWK is needed if you want kernel builds to generate address range data for
++builtin modules (CONFIG_BUILTIN_MODULE_RANGES).
++
+ System utilities
+ ****************
+ 
+diff --git a/Makefile b/Makefile
+index 447856c43b327..9977b007ad898 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1024,6 +1024,7 @@ include-$(CONFIG_UBSAN)		+= scripts/Makefile.ubsan
+ include-$(CONFIG_KCOV)		+= scripts/Makefile.kcov
+ include-$(CONFIG_RANDSTRUCT)	+= scripts/Makefile.randstruct
+ include-$(CONFIG_GCC_PLUGINS)	+= scripts/Makefile.gcc-plugins
++include-$(CONFIG_CTF)		+= scripts/Makefile.ctfa-toplevel
+ 
+ include $(addprefix $(srctree)/, $(include-y))
+ 
+@@ -1151,7 +1152,11 @@ PHONY += vmlinux_o
+ vmlinux_o: vmlinux.a $(KBUILD_VMLINUX_LIBS)
+ 	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.vmlinux_o
+ 
+-vmlinux.o modules.builtin.modinfo modules.builtin: vmlinux_o
++MODULES_BUILTIN := modules.builtin.modinfo
++MODULES_BUILTIN += modules.builtin
++MODULES_BUILTIN += modules.builtin.objs
++
++vmlinux.o $(MODULES_BUILTIN): vmlinux_o
+ 	@:
+ 
+ PHONY += vmlinux
+@@ -1490,9 +1495,10 @@ endif # CONFIG_MODULES
+ 
+ # Directories & files removed with 'make clean'
+ CLEAN_FILES += vmlinux.symvers modules-only.symvers \
+-	       modules.builtin modules.builtin.modinfo modules.nsdeps \
++	       modules.builtin modules.builtin.* modules.nsdeps vmlinux.o.map \
+ 	       compile_commands.json rust/test \
+-	       rust-project.json .vmlinux.objs .vmlinux.export.c
++	       rust-project.json .vmlinux.objs .vmlinux.export.c \
++	       vmlinux.ctfa
+ 
+ # Directories & files removed with 'make mrproper'
+ MRPROPER_FILES += include/config include/generated          \
+@@ -1586,6 +1592,8 @@ help:
+ 	@echo  '                    (requires a recent binutils and recent build (System.map))'
+ 	@echo  '  dir/file.ko     - Build module including final link'
+ 	@echo  '  modules_prepare - Set up for building external modules'
++	@echo  '  ctf             - Generate CTF type information, installed by make ctf_install'
++	@echo  '  ctf_install     - Install CTF to INSTALL_MOD_PATH (default: /)'
+ 	@echo  '  tags/TAGS	  - Generate tags file for editors'
+ 	@echo  '  cscope	  - Generate cscope index'
+ 	@echo  '  gtags           - Generate GNU GLOBAL index'
+@@ -1942,7 +1950,7 @@ clean: $(clean-dirs)
+ 	$(call cmd,rmfiles)
+ 	@find $(or $(KBUILD_EXTMOD), .) $(RCS_FIND_IGNORE) \
+ 		\( -name '*.[aios]' -o -name '*.rsi' -o -name '*.ko' -o -name '.*.cmd' \
+-		-o -name '*.ko.*' \
++		-o -name '*.ko.*' -o -name '*.ctf' \
+ 		-o -name '*.dtb' -o -name '*.dtbo' \
+ 		-o -name '*.dtb.S' -o -name '*.dtbo.S' \
+ 		-o -name '*.dt.yaml' -o -name 'dtbs-list' \
+diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
+index 01067a2bc43b7..d2193b8dfad83 100644
+--- a/arch/arm/vdso/Makefile
++++ b/arch/arm/vdso/Makefile
+@@ -14,6 +14,10 @@ obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+ ccflags-y := -fPIC -fno-common -fno-builtin -fno-stack-protector
+ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO32
+ 
++# CTF in the vDSO would introduce a new section, which would
++# expand the vDSO to more than a page.
++ccflags-y += $(call cc-option,-gctf0)
++
+ ldflags-$(CONFIG_CPU_ENDIAN_BE8) := --be8
+ ldflags-y := -Bsymbolic --no-undefined -soname=linux-vdso.so.1 \
+ 	    -z max-page-size=4096 -shared $(ldflags-y) \
+diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
+index d63930c828397..6e84d3822cfe3 100644
+--- a/arch/arm64/kernel/vdso/Makefile
++++ b/arch/arm64/kernel/vdso/Makefile
+@@ -33,6 +33,10 @@ ldflags-y += -T
+ ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
+ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+ 
++# CTF in the vDSO would introduce a new section, which would
++# expand the vDSO to more than a page.
++ccflags-y += $(call cc-option,-gctf0)
++
+ # -Wmissing-prototypes and -Wmissing-declarations are removed from
+ # the CFLAGS of vgettimeofday.c to make possible to build the
+ # kernel with CONFIG_WERROR enabled.
+diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
+index d724d46b07c84..fbedb95223ae1 100644
+--- a/arch/loongarch/vdso/Makefile
++++ b/arch/loongarch/vdso/Makefile
+@@ -21,7 +21,8 @@ cflags-vdso := $(ccflags-vdso) \
+ 	-O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
+ 	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+ 	$(call cc-option, -fno-asynchronous-unwind-tables) \
+-	$(call cc-option, -fno-stack-protector)
++	$(call cc-option, -fno-stack-protector) \
++	$(call cc-option,-gctf0)
+ aflags-vdso := $(ccflags-vdso) \
+ 	-D__ASSEMBLY__ -Wa,-gdwarf-2
+ 
+diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
+index b289b2c1b2946..6c8d777525f9b 100644
+--- a/arch/mips/vdso/Makefile
++++ b/arch/mips/vdso/Makefile
+@@ -30,7 +30,8 @@ cflags-vdso := $(ccflags-vdso) \
+ 	-O3 -g -fPIC -fno-strict-aliasing -fno-common -fno-builtin -G 0 \
+ 	-mrelax-pic-calls $(call cc-option, -mexplicit-relocs) \
+ 	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+-	$(call cc-option, -fno-asynchronous-unwind-tables)
++	$(call cc-option, -fno-asynchronous-unwind-tables) \
++	$(call cc-option,-gctf0)
+ aflags-vdso := $(ccflags-vdso) \
+ 	-D__ASSEMBLY__ -Wa,-gdwarf-2
+ 
+diff --git a/arch/sparc/vdso/Makefile b/arch/sparc/vdso/Makefile
+index 243dbfc4609d8..e4f3e47074e9d 100644
+--- a/arch/sparc/vdso/Makefile
++++ b/arch/sparc/vdso/Makefile
+@@ -44,7 +44,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=medlow -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
+        -fno-omit-frame-pointer -foptimize-sibling-calls \
+-       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
++       $(call cc-option,-gctf0) -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+ 
+ SPARC_REG_CFLAGS = -ffixed-g4 -ffixed-g5 -fcall-used-g5 -fcall-used-g7
+ 
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index 215a1b202a918..2fa1613a06275 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -54,6 +54,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
+        -fno-omit-frame-pointer -foptimize-sibling-calls \
++       $(call cc-option,-gctf0) \
+        -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
+ 
+ ifdef CONFIG_MITIGATION_RETPOLINE
+@@ -131,6 +132,7 @@ KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
+ KBUILD_CFLAGS_32 += -fno-stack-protector
+ KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
+ KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
++KBUILD_CFLAGS_32 += $(call cc-option,-gctf0)
+ KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
+ 
+ ifdef CONFIG_MITIGATION_RETPOLINE
+diff --git a/arch/x86/um/vdso/Makefile b/arch/x86/um/vdso/Makefile
+index 6a77ea6434ffd..6db233b5edd75 100644
+--- a/arch/x86/um/vdso/Makefile
++++ b/arch/x86/um/vdso/Makefile
+@@ -40,7 +40,7 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
+ #
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
+-       -fno-omit-frame-pointer -foptimize-sibling-calls
++       -fno-omit-frame-pointer -foptimize-sibling-calls $(call cc-option,-gctf0)
+ 
+ $(vobjs): KBUILD_CFLAGS += $(CFL)
+ 
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index f00a8e18f389f..2e307c0824574 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -1014,6 +1014,7 @@
+ 	*(.discard.*)							\
+ 	*(.export_symbol)						\
+ 	*(.modinfo)							\
++	*(.ctf)								\
+ 	/* ld.bfd warns about .gnu.version* even when not emitted */	\
+ 	*(.gnu.version*)						\
+ 
+diff --git a/include/linux/module.h b/include/linux/module.h
+index 330ffb59efe51..2d9fcca542d13 100644
+--- a/include/linux/module.h
++++ b/include/linux/module.h
+@@ -180,7 +180,13 @@ extern void cleanup_module(void);
+ #ifdef MODULE
+ #define MODULE_FILE
+ #else
+-#define MODULE_FILE	MODULE_INFO(file, KBUILD_MODFILE);
++#ifdef CONFIG_CTF
++#define MODULE_FILE					                      \
++			MODULE_INFO(file, KBUILD_MODFILE);                    \
++			MODULE_INFO(objs, KBUILD_MODOBJS);
++#else
++#define MODULE_FILE MODULE_INFO(file, KBUILD_MODFILE);
++#endif
+ #endif
+ 
+ /*
+diff --git a/init/Kconfig b/init/Kconfig
+index 9684e5d2b81c6..c1b00b2e4a43d 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -111,6 +111,12 @@ config PAHOLE_VERSION
+ 	int
+ 	default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
+ 
++config HAVE_CTF_TOOLCHAIN
++	def_bool $(cc-option,-gctf) && $(ld-option,-lbfd -liberty -lctf -lbfd -liberty -lz -ldl -lc -o /dev/null)
++	depends on CC_IS_GCC
++	help
++	  GCC and binutils support CTF generation.
++
+ config CONSTRUCTORS
+ 	bool
+ 
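
As a quick sanity check (illustrative only, not part of the patch), the
compiler half of the HAVE_CTF_TOOLCHAIN test above can be reproduced by hand,
since -gctf is the GCC flag that the cc-option probe exercises:

  echo 'int x;' | gcc -x c -gctf -c -o /dev/null -
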
+diff --git a/lib/Kconfig b/lib/Kconfig
+index b0a76dff5c182..61d0be30b3562 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -633,6 +633,16 @@ config DIMLIB
+ #
+ config LIBFDT
+ 	bool
++#
++# CTF support is select'ed if needed
++#
++config CTF
++        bool "Compact Type Format generation"
++        depends on HAVE_CTF_TOOLCHAIN
++        help
++          Emit a compact, compressed description of the kernel's datatypes and
++          global variables into the vmlinux.ctfa archive (for in-tree modules)
++          or into .ctf sections in kernel modules (for out-of-tree modules).
+ 
+ config OID_REGISTRY
+ 	tristate
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 59b6765d86b8f..dab7e6983eace 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -571,6 +571,21 @@ config VMLINUX_MAP
+ 	  pieces of code get eliminated with
+ 	  CONFIG_LD_DEAD_CODE_DATA_ELIMINATION.
+ 
++config BUILTIN_MODULE_RANGES
++	bool "Generate address range information for builtin modules"
++	depends on !LTO
++	depends on VMLINUX_MAP
++	help
++	 When modules are built into the kernel, there will be no module name
++	 associated with its symbols in /proc/kallsyms.  Tracers may want to
++	 identify symbols by module name and symbol name regardless of whether
++	 the module is configured as loadable or not.
++
++	 This option generates modules.builtin.ranges in the build tree with
++	 offset ranges (per ELF section) for the module(s) they belong to.
++	 It also records an anchor symbol to determine the load address of the
++	 section.
++
+ config DEBUG_FORCE_WEAK_PER_CPU
+ 	bool "Force weak per-cpu definitions"
+ 	depends on DEBUG_KERNEL
+diff --git a/scripts/Makefile b/scripts/Makefile
+index fe56eeef09dd4..8e7eb174d3154 100644
+--- a/scripts/Makefile
++++ b/scripts/Makefile
+@@ -54,6 +54,7 @@ targets += module.lds
+ 
+ subdir-$(CONFIG_GCC_PLUGINS) += gcc-plugins
+ subdir-$(CONFIG_MODVERSIONS) += genksyms
++subdir-$(CONFIG_CTF)         += ctf
+ subdir-$(CONFIG_SECURITY_SELINUX) += selinux
+ 
+ # Let clean descend into subdirs
+diff --git a/scripts/Makefile.ctfa b/scripts/Makefile.ctfa
+new file mode 100644
+index 0000000000000..b65d9d391c29c
+--- /dev/null
++++ b/scripts/Makefile.ctfa
+@@ -0,0 +1,92 @@
++# SPDX-License-Identifier: GPL-2.0-only
++# ===========================================================================
++# Module CTF/CTFA generation
++# ===========================================================================
++
++include include/config/auto.conf
++include $(srctree)/scripts/Kbuild.include
++
++# CTF is already present in every object file if CONFIG_CTF is enabled.
++# vmlinux.lds.h strips it out of the finished kernel, but if nothing is done
++# it will be deduplicated into module .ko's.  For out-of-tree module builds,
++# this is what we want, but for in-tree modules we can save substantial
++# space by deduplicating it against all the core kernel types as well.  So
++# split the CTF out of in-tree module .ko's into separate .ctf files so that
++# it doesn't take up space in the modules on disk, and let the specialized
++# ctfarchive tool consume it and all the CTF in the vmlinux.o files when
++# 'make ctf' is invoked, and use the same machinery that the linker uses to
++# do CTF deduplication to emit vmlinux.ctfa containing the deduplicated CTF.
++
++# Nothing special needs to be done if CTF is turned off or if a standalone
++# module is being built.
++module-ctf-postlink = mv $(1).tmp $(1)
++
++ifdef CONFIG_CTF
++
++# This is quite tricky.  The CTF machinery needs to be told about all the
++# built-in objects as well as all the external modules -- but Makefile.modfinal
++# only knows about the latter.  So the toplevel makefile emits the names of the
++# built-in objects into a temporary file, which is then catted and its contents
++# used as prerequisites by this rule.
++#
++# We write the names of the object files to be scanned for CTF content into a
++# file, then use that, to avoid hitting command-line length limits.
++
++ifeq ($(KBUILD_EXTMOD),)
++ctf-modules := $(shell find . -name '*.ko.ctf' -print)
++quiet_cmd_ctfa_raw = CTFARAW
++      cmd_ctfa_raw = scripts/ctf/ctfarchive $@ .tmp_objects.builtin modules.builtin.objs $(ctf-filelist)
++ctf-builtins := .tmp_objects.builtin
++ctf-filelist := .tmp_ctf.filelist
++ctf-filelist-raw := .tmp_ctf.filelist.raw
++
++define module-ctf-postlink =
++	$(OBJCOPY) --only-section=.ctf $(1).tmp $(1).ctf && \
++	$(OBJCOPY) --remove-section=.ctf $(1).tmp $(1) && rm -f $(1).tmp
++endef
++
++# Split a list up like shell xargs does.
++define xargs =
++$(1) $(wordlist 1,1024,$(2))
++$(if $(word 1025,$(2)),$(call xargs,$(1),$(wordlist 1025,$(words $(2)),$(2))))
++endef
++
++$(ctf-filelist-raw): $(ctf-builtins) $(ctf-modules)
++	@rm -f $(ctf-filelist-raw);
++	$(call xargs,@printf "%s\n" >> $(ctf-filelist-raw),$^)
++	@touch $(ctf-filelist-raw)
++
++$(ctf-filelist): $(ctf-filelist-raw)
++	@rm -f $(ctf-filelist);
++	@cat $(ctf-filelist-raw) | while read -r obj; do \
++		case $$obj in \
++		$(ctf-builtins)) cat $$obj >> $(ctf-filelist);; \
++		*.a) $(AR) t $$obj > $(ctf-filelist);; \
++		*.builtin) cat $$obj >> $(ctf-filelist);; \
++		*) echo "$$obj" >> $(ctf-filelist);; \
++		esac; \
++	done
++	@touch $(ctf-filelist)
++
++# The raw CTF depends on the output CTF file list, and that depends
++# on the .ko files for the modules.
++.tmp_vmlinux.ctfa.raw: $(ctf-filelist) FORCE
++	$(call if_changed,ctfa_raw)
++
++quiet_cmd_ctfa = CTFA
++      cmd_ctfa = { echo 'int main () { return 0; } ' | \
++		$(CC) -x c -c -o $<.stub -; \
++	$(OBJCOPY) '--remove-section=.*' --add-section=.ctf=$< \
++		 $<.stub $@; }
++
++# The CTF itself is an ELF executable with one section: the CTF.  This lets
++# objdump work on it, at minimal size cost.
++vmlinux.ctfa: .tmp_vmlinux.ctfa.raw FORCE
++	$(call if_changed,ctfa)
++
++targets += vmlinux.ctfa
++
++endif		# KBUILD_EXTMOD
++
++endif		# !CONFIG_CTF
++
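
Once built, vmlinux.ctfa is a minimal ELF container whose only payload is the
.ctf section (see the comment above cmd_ctfa), so it can be inspected with
ordinary binutils.  Illustrative commands, assuming a CTF-capable binutils as
required elsewhere in this patch:

  objdump -h vmlinux.ctfa      # lists the single .ctf section
  objdump --ctf vmlinux.ctfa   # dumps the deduplicated type information
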
+diff --git a/scripts/Makefile.ctfa-toplevel b/scripts/Makefile.ctfa-toplevel
+new file mode 100644
+index 0000000000000..210bef3854e9b
+--- /dev/null
++++ b/scripts/Makefile.ctfa-toplevel
+@@ -0,0 +1,54 @@
++# SPDX-License-Identifier: GPL-2.0-only
++# ===========================================================================
++# CTF rules for the top-level makefile only
++# ===========================================================================
++
++KBUILD_CFLAGS	+= $(call cc-option,-gctf)
++KBUILD_LDFLAGS	+= $(call ld-option, --ctf-variables)
++
++ifeq ($(KBUILD_EXTMOD),)
++
++# CTF generation for in-tree code (modules, built-in and not, and core kernel)
++
++# This contains all the object files that are built directly into the
++# kernel (including built-in modules), for consumption by ctfarchive in
++# Makefile.modfinal.
++# This is made doubly annoying by the presence of '.o' files which are actually
++# thin ar archives, and the need to support file(1) versions too old to
++# recognize them as archives at all.  (So we assume that everything that is not
++# an ELF object is an archive.)
++ifeq ($(SRCARCH),x86)
++.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),bzImage) FORCE
++else
++ifeq ($(SRCARCH),arm64)
++.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),Image) FORCE
++else
++.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),vmlinux) FORCE
++endif
++endif
++	@echo $(KBUILD_VMLINUX_OBJS) | \
++		tr " " "\n" | grep "\.o$$" | xargs -r file | \
++		grep ELF | cut -d: -f1 > .tmp_objects.builtin
++	@for archive in $$(echo $(KBUILD_VMLINUX_OBJS) |\
++		tr " " "\n" | xargs -r file | grep -v ELF | cut -d: -f1); do \
++		$(AR) t "$$archive" >> .tmp_objects.builtin; \
++	done
++
++ctf: vmlinux.ctfa
++PHONY += ctf ctf_install
++
++# Making CTF needs the builtin files.  We need to force everything to be
++# built if not already done, since we need the .o files for the machinery
++# above to work.
++vmlinux.ctfa: KBUILD_BUILTIN := 1
++vmlinux.ctfa: modules.builtin.objs .tmp_objects.builtin
++	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modfinal vmlinux.ctfa
++
++ctf_install:
++	$(Q)mkdir -p $(MODLIB)/kernel
++	@ln -sf $(abspath $(srctree)) $(MODLIB)/source
++	$(Q)cp -f $(objtree)/vmlinux.ctfa $(MODLIB)/kernel
++
++CLEAN_FILES += vmlinux.ctfa
++
++endif
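
Tying the new targets together, a typical sequence after enabling CONFIG_CTF
would be the following (sketch only; the staging path is an example, and
ctf_install defaults to INSTALL_MOD_PATH=/ per the top-level help text):

  make ctf
  make ctf_install INSTALL_MOD_PATH=/tmp/staging
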
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index 7f8ec77bf35c9..8e67961ba2ec9 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -118,6 +118,8 @@ modname-multi = $(sort $(foreach m,$(multi-obj-ym),\
+ __modname = $(or $(modname-multi),$(basetarget))
+ 
+ modname = $(subst $(space),:,$(__modname))
++modname-objs = $($(modname)-objs) $($(modname)-y) $($(modname)-Y)
++modname-objs-prefixed = $(sort $(strip $(addprefix $(obj)/, $(modname-objs))))
+ modfile = $(addprefix $(obj)/,$(__modname))
+ 
+ # target with $(obj)/ and its suffix stripped
+@@ -133,6 +135,10 @@ modname_flags  = -DKBUILD_MODNAME=$(call name-fix,$(modname)) \
+ 		 -D__KBUILD_MODNAME=kmod_$(call name-fix-token,$(modname))
+ modfile_flags  = -DKBUILD_MODFILE=$(call stringify,$(modfile))
+ 
++ifdef CONFIG_CTF
++modfile_flags  += -DKBUILD_MODOBJS=$(call stringify,$(modfile).o:$(subst $(space),|,$(modname-objs-prefixed)))
++endif
++
+ _c_flags       = $(filter-out $(CFLAGS_REMOVE_$(target-stem).o), \
+                      $(filter-out $(ccflags-remove-y), \
+                          $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(ccflags-y)) \
+@@ -238,7 +244,7 @@ modkern_rustflags =                                              \
+ 
+ modkern_aflags = $(if $(part-of-module),				\
+ 			$(KBUILD_AFLAGS_MODULE) $(AFLAGS_MODULE),	\
+-			$(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL))
++			$(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL) $(modfile_flags))
+ 
+ c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+ 		 -include $(srctree)/include/linux/compiler_types.h       \
+@@ -248,7 +254,7 @@ c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+ rust_flags     = $(_rust_flags) $(modkern_rustflags) @$(objtree)/include/generated/rustc_cfg
+ 
+ a_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+-		 $(_a_flags) $(modkern_aflags)
++		 $(_a_flags) $(modkern_aflags) $(modname_flags)
+ 
+ cpp_flags      = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+ 		 $(_cpp_flags)
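
To make the new modfile_flags addition concrete: for a hypothetical module
built from two objects (foo-objs := a.o b.o under drivers/foo/), each compile
of those objects would roughly carry

  -DKBUILD_MODFILE='"drivers/foo/foo"'
  -DKBUILD_MODOBJS='"drivers/foo/foo.o:drivers/foo/a.o|drivers/foo/b.o"'

i.e. the module's composite object followed by its constituent objects joined
with '|', which the modules.builtin.objs rule in Makefile.vmlinux_o later
splits back apart.
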
+diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
+index 3bec9043e4f38..06807e403d162 100644
+--- a/scripts/Makefile.modfinal
++++ b/scripts/Makefile.modfinal
+@@ -30,11 +30,16 @@ quiet_cmd_cc_o_c = CC [M]  $@
+ %.mod.o: %.mod.c FORCE
+ 	$(call if_changed_dep,cc_o_c)
+ 
++# for module-ctf-postlink
++include $(srctree)/scripts/Makefile.ctfa
++
+ quiet_cmd_ld_ko_o = LD [M]  $@
+       cmd_ld_ko_o +=							\
+ 	$(LD) -r $(KBUILD_LDFLAGS)					\
+ 		$(KBUILD_LDFLAGS_MODULE) $(LDFLAGS_MODULE)		\
+-		-T scripts/module.lds -o $@ $(filter %.o, $^)
++		-T scripts/module.lds $(LDFLAGS_$(modname)) -o $@.tmp	\
++		$(filter %.o, $^) &&					\
++	$(call module-ctf-postlink,$@)					\
+ 
+ quiet_cmd_btf_ko = BTF [M] $@
+       cmd_btf_ko = 							\
+diff --git a/scripts/Makefile.modinst b/scripts/Makefile.modinst
+index 0afd75472679f..e668469ce098c 100644
+--- a/scripts/Makefile.modinst
++++ b/scripts/Makefile.modinst
+@@ -30,10 +30,12 @@ $(MODLIB)/modules.order: modules.order FORCE
+ quiet_cmd_install_modorder = INSTALL $@
+       cmd_install_modorder = sed 's:^\(.*\)\.o$$:kernel/\1.ko:' $< > $@
+ 
+-# Install modules.builtin(.modinfo) even when CONFIG_MODULES is disabled.
+-install-y += $(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo)
++# Install modules.builtin(.modinfo,.ranges,.objs) even when CONFIG_MODULES is disabled.
++install-y += $(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo modules.builtin.objs)
+ 
+-$(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo): $(MODLIB)/%: % FORCE
++install-$(CONFIG_BUILTIN_MODULE_RANGES) += $(MODLIB)/modules.builtin.ranges
++
++$(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo modules.builtin.ranges modules.builtin.objs): $(MODLIB)/%: % FORCE
+ 	$(call cmd,install)
+ 
+ endif
+diff --git a/scripts/Makefile.vmlinux b/scripts/Makefile.vmlinux
+index 49946cb968440..2524c8e2edbdb 100644
+--- a/scripts/Makefile.vmlinux
++++ b/scripts/Makefile.vmlinux
+@@ -33,7 +33,25 @@ targets += vmlinux
+ vmlinux: scripts/link-vmlinux.sh vmlinux.o $(KBUILD_LDS) FORCE
+ 	+$(call if_changed_dep,link_vmlinux)
+ 
+-# Add FORCE to the prequisites of a target to force it to be always rebuilt.
++# ---------------------------------------------------------------------------
++ifdef CONFIG_BUILTIN_MODULE_RANGES
++__default: modules.builtin.ranges
++
++quiet_cmd_modules_builtin_ranges = GEN     $@
++      cmd_modules_builtin_ranges = gawk -f $(real-prereqs) > $@
++
++targets += modules.builtin.ranges
++modules.builtin.ranges: $(srctree)/scripts/generate_builtin_ranges.awk \
++			modules.builtin vmlinux.map vmlinux.o.map FORCE
++	$(call if_changed,modules_builtin_ranges)
++
++vmlinux.map: vmlinux
++	@:
++
++endif
++
++# Add FORCE to the prerequisites of a target to force it to be always rebuilt.
++
+ # ---------------------------------------------------------------------------
+ 
+ PHONY += FORCE
+diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
+index 6de297916ce68..e8d5c98173d74 100644
+--- a/scripts/Makefile.vmlinux_o
++++ b/scripts/Makefile.vmlinux_o
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ 
+ PHONY := __default
+-__default: vmlinux.o modules.builtin.modinfo modules.builtin
++__default: vmlinux.o modules.builtin.modinfo modules.builtin modules.builtin.objs
+ 
+ include include/config/auto.conf
+ include $(srctree)/scripts/Kbuild.include
+@@ -27,6 +27,20 @@ ifdef CONFIG_LTO_CLANG
+ initcalls-lds := .tmp_initcalls.lds
+ endif
+ 
++# Generate a linker script to delete CTF sections
++# -----------------------------------------------
++
++quiet_cmd_gen_remove_ctf.lds = GEN     $@
++      cmd_gen_remove_ctf.lds = \
++	$(LD) $(KBUILD_LDFLAGS) -r --verbose | awk -f $(real-prereqs) > $@
++
++.tmp_remove-ctf.lds: $(srctree)/scripts/remove-ctf-lds.awk FORCE
++	$(call if_changed,gen_remove_ctf.lds)
++
++ifdef CONFIG_CTF
++targets := .tmp_remove-ctf.lds
++endif
++
+ # objtool for vmlinux.o
+ # ---------------------------------------------------------------------------
+ #
+@@ -42,13 +56,26 @@ vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION)	+= --noinstr \
+ 
+ objtool-args = $(vmlinux-objtool-args-y) --link
+ 
+-# Link of vmlinux.o used for section mismatch analysis
++# Link of vmlinux.o used for section mismatch analysis: we also strip the CTF
++# section out at this stage, since ctfarchive gets it from the underlying object
++# files and linking it further is a waste of time.
+ # ---------------------------------------------------------------------------
+ 
++vmlinux-o-ld-args-$(CONFIG_BUILTIN_MODULE_RANGES)	+= -Map=$@.map
++
++ifdef CONFIG_CTF
++ctf_strip_script_arg = -T .tmp_remove-ctf.lds
++ctf_target = .tmp_remove-ctf.lds
++else
++ctf_strip_script_arg =
++ctf_target =
++endif
++
+ quiet_cmd_ld_vmlinux.o = LD      $@
+       cmd_ld_vmlinux.o = \
+ 	$(LD) ${KBUILD_LDFLAGS} -r -o $@ \
+-	$(addprefix -T , $(initcalls-lds)) \
++	$(vmlinux-o-ld-args-y) \
++	$(addprefix -T , $(initcalls-lds)) $(ctf_strip_script_arg) \
+ 	--whole-archive vmlinux.a --no-whole-archive \
+ 	--start-group $(KBUILD_VMLINUX_LIBS) --end-group \
+ 	$(cmd_objtool)
+@@ -58,7 +85,7 @@ define rule_ld_vmlinux.o
+ 	$(call cmd,gen_objtooldep)
+ endef
+ 
+-vmlinux.o: $(initcalls-lds) vmlinux.a $(KBUILD_VMLINUX_LIBS) FORCE
++vmlinux.o: $(initcalls-lds) vmlinux.a $(KBUILD_VMLINUX_LIBS) $(ctf_target) FORCE
+ 	$(call if_changed_rule,ld_vmlinux.o)
+ 
+ targets += vmlinux.o
+@@ -87,6 +114,19 @@ targets += modules.builtin
+ modules.builtin: modules.builtin.modinfo FORCE
+ 	$(call if_changed,modules_builtin)
+ 
++# module.builtin.objs
++# ---------------------------------------------------------------------------
++quiet_cmd_modules_builtin_objs = GEN     $@
++      cmd_modules_builtin_objs = \
++	tr '\0' '\n' < $< | \
++	sed -n 's/^[[:alnum:]:_]*\.objs=//p' | \
++	tr ' ' '\n' | uniq | sed -e 's|:|: |' -e 's:|: :g' | \
++	tr -s ' ' > $@
++
++targets += modules.builtin.objs
++modules.builtin.objs: modules.builtin.modinfo FORCE
++	$(call if_changed,modules_builtin_objs)
++
+ # Add FORCE to the prequisites of a target to force it to be always rebuilt.
+ # ---------------------------------------------------------------------------
+ 
+diff --git a/scripts/ctf/Makefile b/scripts/ctf/Makefile
+new file mode 100644
+index 0000000000000..3b83f93bb9f9a
+--- /dev/null
++++ b/scripts/ctf/Makefile
+@@ -0,0 +1,5 @@
++ifdef CONFIG_CTF
++hostprogs-always-y	:= ctfarchive
++ctfarchive-objs		:= ctfarchive.o modules_builtin.o
++HOSTLDLIBS_ctfarchive := -lctf
++endif
+diff --git a/scripts/ctf/ctfarchive.c b/scripts/ctf/ctfarchive.c
+new file mode 100644
+index 0000000000000..92cc4912ed0ee
+--- /dev/null
++++ b/scripts/ctf/ctfarchive.c
+@@ -0,0 +1,413 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * ctfmerge.c: Read in CTF extracted from generated object files from a
++ * specified directory and generate a CTF archive whose members are the
++ * deduplicated CTF derived from those object files, split up by kernel
++ * module.
++ *
++ * Copyright (c) 2019, 2023, Oracle and/or its affiliates.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#define _GNU_SOURCE 1
++#include <errno.h>
++#include <stdio.h>
++#include <stdlib.h>
++#include <string.h>
++#include <ctf-api.h>
++#include "modules_builtin.h"
++
++static ctf_file_t *output;
++
++static int private_ctf_link_add_ctf(ctf_file_t *fp,
++				    const char *name)
++{
++#if !defined (CTF_LINK_FINAL)
++	return ctf_link_add_ctf(fp, NULL, name);
++#else
++	/* Non-upstreamed, erroneously-broken API.  */
++	return ctf_link_add_ctf(fp, NULL, name, NULL, 0);
++#endif
++}
++
++/*
++ * Add a file to the link.
++ */
++static void add_to_link(const char *fn)
++{
++	if (private_ctf_link_add_ctf(output, fn) < 0)
++	{
++		fprintf(stderr, "Cannot add CTF file %s: %s\n", fn,
++			ctf_errmsg(ctf_errno(output)));
++		exit(1);
++	}
++}
++
++struct from_to
++{
++	char *from;
++	char *to;
++};
++
++/*
++ * The world's stupidest hash table of FROM -> TO.
++ */
++static struct from_to **from_tos[256];
++static size_t alloc_from_tos[256];
++static size_t num_from_tos[256];
++
++static unsigned char from_to_hash(const char *from)
++{
++	unsigned char hval = 0;
++
++	const char *p;
++	for (p = from; *p; p++)
++		hval += *p;
++
++	return hval;
++}
++
++/*
++ * Note that we will need to add a CU mapping later on.
++ *
++ * Present purely to work around a binutils bug that stops
++ * ctf_link_add_cu_mapping() working right when called repeatedly
++ * with the same FROM.
++ */
++static int add_cu_mapping(const char *from, const char *to)
++{
++	ssize_t i, j;
++
++	i = from_to_hash(from);
++
++	for (j = 0; j < num_from_tos[i]; j++)
++		if (strcmp(from, from_tos[i][j]->from) == 0) {
++			char *tmp;
++
++			free(from_tos[i][j]->to);
++			tmp = strdup(to);
++			if (!tmp)
++				goto oom;
++			from_tos[i][j]->to = tmp;
++			return 0;
++		    }
++
++	if (num_from_tos[i] >= alloc_from_tos[i]) {
++		struct from_to **tmp;
++		if (alloc_from_tos[i] < 16)
++			alloc_from_tos[i] = 16;
++		else
++			alloc_from_tos[i] *= 2;
++
++		tmp = realloc(from_tos[i], alloc_from_tos[i] * sizeof(struct from_to *));
++		if (!tmp)
++			goto oom;
++
++		from_tos[i] = tmp;
++	}
++
++	j = num_from_tos[i];
++	from_tos[i][j] = malloc(sizeof(struct from_to));
++	if (from_tos[i][j] == NULL)
++		goto oom;
++	from_tos[i][j]->from = strdup(from);
++	from_tos[i][j]->to = strdup(to);
++	if (!from_tos[i][j]->from || !from_tos[i][j]->to)
++		goto oom;
++	num_from_tos[i]++;
++
++	return 0;
++  oom:
++	fprintf(stderr,
++		"out of memory in add_cu_mapping\n");
++	exit(1);
++}
++
++/*
++ * Finally tell binutils to add all the CU mappings, with duplicate FROMs
++ * replaced with the most recent one.
++ */
++static void commit_cu_mappings(void)
++{
++	ssize_t i, j;
++
++	for (i = 0; i < 256; i++)
++		for (j = 0; j < num_from_tos[i]; j++)
++			ctf_link_add_cu_mapping(output, from_tos[i][j]->from,
++						from_tos[i][j]->to);
++}
++
++/*
++ * Add a CU mapping to the link.
++ *
++ * CU mappings for built-in modules are added by suck_in_modules, below: here,
++ * we only want to add mappings for names ending in '.ko.ctf', i.e. external
++ * modules, which appear only in the filelist (since they are not built-in).
++ * The pathnames are stripped off because modules don't have any, and hyphens
++ * are translated into underscores.
++ */
++static void add_cu_mappings(const char *fn)
++{
++	const char *last_slash;
++	const char *modname = fn;
++	char *dynmodname = NULL;
++	char *dash;
++	size_t n;
++
++	last_slash = strrchr(modname, '/');
++	if (last_slash)
++		last_slash++;
++	else
++		last_slash = modname;
++	modname = last_slash;
++	if (strchr(modname, '-') != NULL)
++	{
++		dynmodname = strdup(last_slash);
++		dash = dynmodname;
++		while (dash != NULL) {
++			dash = strchr(dash, '-');
++			if (dash != NULL)
++				*dash = '_';
++		}
++		modname = dynmodname;
++	}
++
++	n = strlen(modname);
++	if (strcmp(modname + n - strlen(".ko.ctf"), ".ko.ctf") == 0) {
++		char *mod;
++
++		n -= strlen(".ko.ctf");
++		mod = strndup(modname, n);
++		add_cu_mapping(fn, mod);
++		free(mod);
++	}
++	free(dynmodname);
++}
++
++/*
++ * Add the passed names as mappings to "vmlinux".
++ */
++static void add_builtins(const char *fn)
++{
++	if (add_cu_mapping(fn, "vmlinux") < 0)
++	{
++		fprintf(stderr, "Cannot add CTF CU mapping from %s to \"vmlinux\"\n",
++			ctf_errmsg(ctf_errno(output)));
++		exit(1);
++	}
++}
++
++/*
++ * Do something with a file, line by line.
++ */
++static void suck_in_lines(const char *filename, void (*func)(const char *line))
++{
++	FILE *f;
++	char *line = NULL;
++	size_t line_size = 0;
++
++	f = fopen(filename, "r");
++	if (f == NULL) {
++		fprintf(stderr, "Cannot open %s: %s\n", filename,
++			strerror(errno));
++		exit(1);
++	}
++
++	while (getline(&line, &line_size, f) >= 0) {
++		size_t len = strlen(line);
++
++		if (len == 0)
++			continue;
++
++		if (line[len-1] == '\n')
++			line[len-1] = '\0';
++
++		func(line);
++	}
++	free(line);
++
++	if (ferror(f)) {
++		fprintf(stderr, "Error reading from %s: %s\n", filename,
++			strerror(errno));
++		exit(1);
++	}
++
++	fclose(f);
++}
++
++/*
++ * Pull in modules.builtin.objs and turn it into CU mappings.
++ */
++static void suck_in_modules(const char *modules_builtin_name)
++{
++	struct modules_builtin_iter *i;
++	char *module_name = NULL;
++	char **paths;
++
++	i = modules_builtin_iter_new(modules_builtin_name);
++	if (i == NULL) {
++		fprintf(stderr, "Cannot iterate over builtin module file.\n");
++		exit(1);
++	}
++
++	while ((paths = modules_builtin_iter_next(i, &module_name)) != NULL) {
++		size_t j;
++
++		for (j = 0; paths[j] != NULL; j++) {
++			char *alloc = NULL;
++			char *path = paths[j];
++			/*
++			 * If the name doesn't start in ./, add it, to match the names
++			 * passed to add_builtins.
++			 */
++			if (strncmp(paths[j], "./", 2) != 0) {
++				char *p;
++				if ((alloc = malloc(strlen(paths[j]) + 3)) == NULL) {
++					fprintf(stderr, "Cannot allocate memory for "
++						"builtin module object name %s.\n",
++						paths[j]);
++					exit(1);
++				}
++				p = alloc;
++				p = stpcpy(p, "./");
++				p = stpcpy(p, paths[j]);
++				path = alloc;
++			}
++			if (add_cu_mapping(path, module_name) < 0) {
++				fprintf(stderr, "Cannot add path -> module mapping for "
++					"%s -> %s: %s\n", path, module_name,
++					ctf_errmsg(ctf_errno(output)));
++				exit(1);
++			}
++			free (alloc);
++		}
++		free(paths);
++	}
++	free(module_name);
++	modules_builtin_iter_free(i);
++}
++
++/*
++ * Strip the leading .ctf. off all the module names: transform the default name
++ * from _CTF_SECTION into shared_ctf, and chop any trailing .ctf off (since that
++ * derives from the intermediate file used to keep the CTF out of the final
++ * module).
++ */
++static char *transform_module_names(ctf_file_t *fp __attribute__((__unused__)),
++				    const char *name,
++				    void *arg __attribute__((__unused__)))
++{
++	if (strcmp(name, ".ctf") == 0)
++		return strdup("shared_ctf");
++
++	if (strncmp(name, ".ctf", 4) == 0) {
++		size_t n = strlen (name);
++		if (strcmp(name + n - 4, ".ctf") == 0)
++			n -= 4;
++		return strndup(name + 4, n - 4);
++	}
++	return NULL;
++}
++
++int main(int argc, char *argv[])
++{
++	int err;
++	const char *output_file;
++	unsigned char *file_data = NULL;
++	size_t file_size;
++	FILE *fp;
++
++	if (argc != 5) {
++		fprintf(stderr, "Syntax: ctfarchive output-file objects.builtin modules.builtin\n");
++		fprintf(stderr, "                   filelist\n");
++		exit(1);
++	}
++
++	output_file = argv[1];
++
++	/*
++	 * First pull in the input files and add them to the link.
++	 */
++
++	output = ctf_create(&err);
++	if (!output) {
++		fprintf(stderr, "Cannot create output CTF archive: %s\n",
++			ctf_errmsg(err));
++		return 1;
++	}
++
++	suck_in_lines(argv[4], add_to_link);
++
++	/*
++	 * Make sure that, even if all their types are shared, all modules have
++	 * a ctf member that can be used as a child of the shared CTF.
++	 */
++	suck_in_lines(argv[4], add_cu_mappings);
++
++	/*
++	 * Then pull in the builtin objects list and add them as
++	 * mappings to "vmlinux".
++	 */
++
++	suck_in_lines(argv[2], add_builtins);
++
++	/*
++	 * Finally, pull in the object -> module mapping and add it
++	 * as appropriate mappings.
++	 */
++	suck_in_modules(argv[3]);
++
++	/*
++	 * Commit the added CU mappings.
++	 */
++	commit_cu_mappings();
++
++	/*
++	 * Arrange to fix up the module names.
++	 */
++	ctf_link_set_memb_name_changer(output, transform_module_names, NULL);
++
++	/*
++	 * Do the link.
++	 */
++	if (ctf_link(output, CTF_LINK_SHARE_DUPLICATED |
++                     CTF_LINK_EMPTY_CU_MAPPINGS) < 0)
++		goto ctf_err;
++
++	/*
++	 * Write the output.
++	 */
++
++	file_data = ctf_link_write(output, &file_size, 4096);
++	if (!file_data)
++		goto ctf_err;
++
++	fp = fopen(output_file, "w");
++	if (!fp)
++		goto err;
++
++	while ((err = fwrite(file_data, file_size, 1, fp)) == 0);
++	if (ferror(fp)) {
++		errno = ferror(fp);
++		goto err;
++	}
++	if (fclose(fp) < 0)
++		goto err;
++	free(file_data);
++	ctf_file_close(output);
++
++	return 0;
++err:
++	free(file_data);
++	fprintf(stderr, "Cannot create output CTF archive: %s\n",
++		strerror(errno));
++	return 1;
++ctf_err:
++	fprintf(stderr, "Cannot create output CTF archive: %s\n",
++		ctf_errmsg(ctf_errno(output)));
++	return 1;
++}
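
For reference, scripts/Makefile.ctfa drives this tool with an invocation that
amounts to:

  scripts/ctf/ctfarchive .tmp_vmlinux.ctfa.raw .tmp_objects.builtin modules.builtin.objs .tmp_ctf.filelist

matching the output-file / objects.builtin / modules.builtin / filelist
arguments parsed in main().
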
+diff --git a/scripts/ctf/modules_builtin.c b/scripts/ctf/modules_builtin.c
+new file mode 100644
+index 0000000000000..10af2bbc80e0c
+--- /dev/null
++++ b/scripts/ctf/modules_builtin.c
+@@ -0,0 +1,2 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#include "../modules_builtin.c"
+diff --git a/scripts/ctf/modules_builtin.h b/scripts/ctf/modules_builtin.h
+new file mode 100644
+index 0000000000000..5e0299e5600c2
+--- /dev/null
++++ b/scripts/ctf/modules_builtin.h
+@@ -0,0 +1,2 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#include "../modules_builtin.h"
+diff --git a/scripts/generate_builtin_ranges.awk b/scripts/generate_builtin_ranges.awk
+new file mode 100755
+index 0000000000000..b9ec761b3befc
+--- /dev/null
++++ b/scripts/generate_builtin_ranges.awk
+@@ -0,0 +1,508 @@
++#!/usr/bin/gawk -f
++# SPDX-License-Identifier: GPL-2.0
++# generate_builtin_ranges.awk: Generate address range data for builtin modules
++# Written by Kris Van Hees <kris.van.hees@oracle.com>
++#
++# Usage: generate_builtin_ranges.awk modules.builtin vmlinux.map \
++#		vmlinux.o.map > modules.builtin.ranges
++#
++
++# Return the module name(s) (if any) associated with the given object.
++#
++# If we have seen this object before, return information from the cache.
++# Otherwise, retrieve it from the corresponding .cmd file.
++#
++function get_module_info(fn, mod, obj, s) {
++	if (fn in omod)
++		return omod[fn];
++
++	if (match(fn, /\/[^/]+$/) == 0)
++		return "";
++
++	obj = fn;
++	mod = "";
++	fn = substr(fn, 1, RSTART) "." substr(fn, RSTART + 1) ".cmd";
++	if (getline s <fn == 1) {
++		if (match(s, /DKBUILD_MODFILE=['"]+[^'"]+/) > 0) {
++			mod = substr(s, RSTART + 16, RLENGTH - 16);
++			gsub(/['"]/, "", mod);
++		} else if (match(s, /RUST_MODFILE=[^ ]+/) > 0)
++			mod = substr(s, RSTART + 13, RLENGTH - 13);
++	}
++	close(fn);
++
++	# A single module (common case) also reflects objects that are not part
++	# of a module.  Some of those objects have names that are also a module
++	# name (e.g. core).  We check the associated module file name, and if
++	# they do not match, the object is not part of a module.
++	if (mod !~ / /) {
++		if (!(mod in mods))
++			mod = "";
++	}
++
++	gsub(/([^/ ]*\/)+/, "", mod);
++	gsub(/-/, "_", mod);
++
++	# At this point, mod is a single (valid) module name, or a list of
++	# module names (that do not need validation).
++	omod[obj] = mod;
++
++	return mod;
++}
++
++# Update the ranges entry for the given module 'mod' in section 'osect'.
++#
++# We use a modified absolute start address (soff + base) as index because we
++# may need to insert an anchor record later that must be at the start of the
++# section data, and the first module may very well start at the same address.
++# So, we use (addr << 1) + 1 to allow a possible anchor record to be placed at
++# (addr << 1).  This is safe because the index is only used to sort the entries
++# before writing them out.
++#
++function update_entry(osect, mod, soff, eoff, sect, idx) {
++	sect = sect_in[osect];
++	idx = sprintf("%016x", (soff + sect_base[osect]) * 2 + 1);
++	entries[idx] = sprintf("%s %08x-%08x %s", sect, soff, eoff, mod);
++	count[sect]++;
++}
++
++# (1) Build a lookup map of built-in module names.
++#
++# The first file argument is used as input (modules.builtin).
++#
++# Lines will be like:
++#	kernel/crypto/lzo-rle.ko
++# and we record the object name "crypto/lzo-rle".
++#
++ARGIND == 1 {
++	sub(/kernel\//, "");			# strip off "kernel/" prefix
++	sub(/\.ko$/, "");			# strip off .ko suffix
++
++	mods[$1] = 1;
++	next;
++}
++
++# (2) Collect address information for each section.
++#
++# The second file argument is used as input (vmlinux.map).
++#
++# We collect the base address of the section in order to convert all addresses
++# in the section into offset values.
++#
++# We collect the address of the anchor (or first symbol in the section if there
++# is no explicit anchor) to allow users of the range data to calculate address
++# ranges based on the actual load address of the section in the running kernel.
++#
++# We collect the start address of any sub-section (section included in the top
++# level section being processed).  This is needed when the final linking was
++# done using vmlinux.a because then the list of objects contained in each
++# section is to be obtained from vmlinux.o.map.  The offset of the sub-section
++# is recorded here, to be used as an addend when processing vmlinux.o.map
++# later.
++#
++
++# Both GNU ld and LLVM lld linker map format are supported by converting LLVM
++# lld linker map records into equivalent GNU ld linker map records.
++#
++# The first record of the vmlinux.map file provides enough information to know
++# which format we are dealing with.
++#
++ARGIND == 2 && FNR == 1 && NF == 7 && $1 == "VMA" && $7 == "Symbol" {
++	map_is_lld = 1;
++	if (dbg)
++		printf "NOTE: %s uses LLVM lld linker map format\n", FILENAME >"/dev/stderr";
++	next;
++}
++
++# (LLD) Convert a section record from lld format to ld format.
++#
++# lld: ffffffff82c00000          2c00000   2493c0  8192 .data
++#  ->
++# ld:  .data           0xffffffff82c00000   0x2493c0 load address 0x0000000002c00000
++#
++ARGIND == 2 && map_is_lld && NF == 5 && /[0-9] [^ ]+$/ {
++	$0 = $5 " 0x"$1 " 0x"$3 " load address 0x"$2;
++}
++
++# (LLD) Convert an anchor record from lld format to ld format.
++#
++# lld: ffffffff81000000          1000000        0     1         _text = .
++#  ->
++# ld:                  0xffffffff81000000                _text = .
++#
++ARGIND == 2 && map_is_lld && !anchor && NF == 7 && raw_addr == "0x"$1 && $6 == "=" && $7 == "." {
++	$0 = "  0x"$1 " " $5 " = .";
++}
++
++# (LLD) Convert an object record from lld format to ld format.
++#
++# lld:            11480            11480     1f07    16         vmlinux.a(arch/x86/events/amd/uncore.o):(.text)
++#  ->
++# ld:   .text          0x0000000000011480     0x1f07 arch/x86/events/amd/uncore.o
++#
++ARGIND == 2 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
++	gsub(/\)/, "");
++	sub(/ vmlinux\.a\(/, " ");
++	sub(/:\(/, " ");
++	$0 = " "$6 " 0x"$1 " 0x"$3 " " $5;
++}
++
++# (LLD) Convert a symbol record from lld format to ld format.
++#
++# We only care about these while processing a section for which no anchor has
++# been determined yet.
++#
++# lld: ffffffff82a859a4          2a859a4        0     1                 btf_ksym_iter_id
++#  ->
++# ld:                  0xffffffff82a859a4                btf_ksym_iter_id
++#
++ARGIND == 2 && map_is_lld && sect && !anchor && NF == 5 && $5 ~ /^[_A-Za-z][_A-Za-z0-9]*$/ {
++	$0 = "  0x"$1 " " $5;
++}
++
++# (LLD) We do not need any other lld linker map records.
++#
++ARGIND == 2 && map_is_lld && /^[0-9a-f]{16} / {
++	next;
++}
++
++# (LD) Section records with just the section name at the start of the line
++#      need to have the next line pulled in to determine whether it is a
++#      loadable section.  If it is, the next line will contains a hex value
++#      as first and second items.
++#
++ARGIND == 2 && !map_is_lld && NF == 1 && /^[^ ]/ {
++	s = $0;
++	getline;
++	if ($1 !~ /^0x/ || $2 !~ /^0x/)
++		next;
++
++	$0 = s " " $0;
++}
++
++# (LD) Object records with just the section name denote records with a long
++#      section name for which the remainder of the record can be found on the
++#      next line.
++#
++# (This is also needed for vmlinux.o.map, when used.)
++#
++ARGIND >= 2 && !map_is_lld && NF == 1 && /^ [^ \*]/ {
++	s = $0;
++	getline;
++	$0 = s " " $0;
++}
++
++# Beginning a new section - done with the previous one (if any).
++#
++ARGIND == 2 && /^[^ ]/ {
++	sect = 0;
++}
++
++# Process a loadable section (we only care about .-sections).
++#
++# Record the section name and its base address.
++# We also record the raw (non-stripped) address of the section because it can
++# be used to identify an anchor record.
++#
++# Note:
++# Since some AWK implementations cannot handle large integers, we strip off the
++# first 4 hex digits from the address.  This is safe because the kernel space
++# is not large enough for addresses to extend into those digits.  The portion
++# to strip off is stored in addr_prefix as a regexp, so further clauses can
++# perform a simple substitution to do the address stripping.
++#
++ARGIND == 2 && /^\./ {
++	# Explicitly ignore a few sections that are not relevant here.
++	if ($1 ~ /^\.orc_/ || $1 ~ /_sites$/ || $1 ~ /\.percpu/)
++		next;
++
++	# Sections with a 0-address can be ignored as well.
++	if ($2 ~ /^0x0+$/)
++		next;
++
++	raw_addr = $2;
++	addr_prefix = "^" substr($2, 1, 6);
++	base = $2;
++	sub(addr_prefix, "0x", base);
++	base = strtonum(base);
++	sect = $1;
++	anchor = 0;
++	sect_base[sect] = base;
++	sect_size[sect] = strtonum($3);
++
++	if (dbg)
++		printf "[%s] BASE   %016x\n", sect, base >"/dev/stderr";
++
++	next;
++}
++
++# If we are not in a section we care about, we ignore the record.
++#
++ARGIND == 2 && !sect {
++	next;
++}
++
++# Record the first anchor symbol for the current section.
++#
++# An anchor record for the section bears the same raw address as the section
++# record.
++#
++ARGIND == 2 && !anchor && NF == 4 && raw_addr == $1 && $3 == "=" && $4 == "." {
++	anchor = sprintf("%s %08x-%08x = %s", sect, 0, 0, $2);
++	sect_anchor[sect] = anchor;
++
++	if (dbg)
++		printf "[%s] ANCHOR %016x = %s (.)\n", sect, 0, $2 >"/dev/stderr";
++
++	next;
++}
++
++# If no anchor record was found for the current section, use the first symbol
++# in the section as anchor.
++#
++ARGIND == 2 && !anchor && NF == 2 && $1 ~ /^0x/ && $2 !~ /^0x/ {
++	addr = $1;
++	sub(addr_prefix, "0x", addr);
++	addr = strtonum(addr) - base;
++	anchor = sprintf("%s %08x-%08x = %s", sect, addr, addr, $2);
++	sect_anchor[sect] = anchor;
++
++	if (dbg)
++		printf "[%s] ANCHOR %016x = %s\n", sect, addr, $2 >"/dev/stderr";
++
++	next;
++}
++
++# The first occurrence of a section name in an object record establishes the
++# addend (often 0) for that section.  This information is needed to handle
++# sections that get combined in the final linking of vmlinux (e.g. .head.text
++# getting included at the start of .text).
++#
++# If the section does not have a base yet, use the base of the encapsulating
++# section.
++#
++ARGIND == 2 && sect && NF == 4 && /^ [^ \*]/ && !($1 in sect_addend) {
++	if (!($1 in sect_base)) {
++		sect_base[$1] = base;
++
++		if (dbg)
++			printf "[%s] BASE   %016x\n", $1, base >"/dev/stderr";
++	}
++
++	addr = $2;
++	sub(addr_prefix, "0x", addr);
++	addr = strtonum(addr);
++	sect_addend[$1] = addr - sect_base[$1];
++	sect_in[$1] = sect;
++
++	if (dbg)
++		printf "[%s] ADDEND %016x - %016x = %016x\n",  $1, addr, base, sect_addend[$1] >"/dev/stderr";
++
++	# If the object is vmlinux.o then we will need vmlinux.o.map to get the
++	# actual offsets of objects.
++	if ($4 == "vmlinux.o")
++		need_o_map = 1;
++}
++
++# (3) Collect offset ranges (relative to the section base address) for built-in
++# modules.
++#
++# If the final link was done using the actual objects, vmlinux.map contains all
++# the information we need (see section (3a)).
++# If linking was done using vmlinux.a as intermediary, we will need to process
++# vmlinux.o.map (see section (3b)).
++
++# (3a) Determine offset range info using vmlinux.map.
++#
++# Since we are already processing vmlinux.map, the top level section that is
++# being processed is already known.  If we do not have a base address for it,
++# we do not need to process records for it.
++#
++# Given the object name, we determine the module(s) (if any) that the current
++# object is associated with.
++#
++# If we were already processing objects for a (list of) module(s):
++#  - If the current object belongs to the same module(s), update the range data
++#    to include the current object.
++#  - Otherwise, ensure that the end offset of the range is valid.
++#
++# If the current object does not belong to a built-in module, ignore it.
++#
++# If it does, we add a new built-in module offset range record.
++#
++ARGIND == 2 && !need_o_map && /^ [^ ]/ && NF == 4 && $3 != "0x0" {
++	if (!(sect in sect_base))
++		next;
++
++	# Turn the address into an offset from the section base.
++	soff = $2;
++	sub(addr_prefix, "0x", soff);
++	soff = strtonum(soff) - sect_base[sect];
++	eoff = soff + strtonum($3);
++
++	# Determine which (if any) built-in modules the object belongs to.
++	mod = get_module_info($4);
++
++	# If we are processing a built-in module:
++	#   - If the current object is within the same module, we update its
++	#     entry by extending the range and move on
++	#   - Otherwise:
++	#       + If we are still processing within the same main section, we
++	#         validate the end offset against the start offset of the
++	#         current object (e.g. .rodata.str1.[18] objects are often
++	#         listed with an incorrect size in the linker map)
++	#       + Otherwise, we validate the end offset against the section
++	#         size
++	if (mod_name) {
++		if (mod == mod_name) {
++			mod_eoff = eoff;
++			update_entry(mod_sect, mod_name, mod_soff, eoff);
++
++			next;
++		} else if (sect == sect_in[mod_sect]) {
++			if (mod_eoff > soff)
++				update_entry(mod_sect, mod_name, mod_soff, soff);
++		} else {
++			v = sect_size[sect_in[mod_sect]];
++			if (mod_eoff > v)
++				update_entry(mod_sect, mod_name, mod_soff, v);
++		}
++	}
++
++	mod_name = mod;
++
++	# If we encountered an object that is not part of a built-in module, we
++	# do not need to record any data.
++	if (!mod)
++		next;
++
++	# At this point, we encountered the start of a new built-in module.
++	mod_name = mod;
++	mod_soff = soff;
++	mod_eoff = eoff;
++	mod_sect = $1;
++	update_entry($1, mod, soff, mod_eoff);
++
++	next;
++}
++
++# If we do not need to parse the vmlinux.o.map file, we are done.
++#
++ARGIND == 3 && !need_o_map {
++	if (dbg)
++		printf "Note: %s is not needed.\n", FILENAME >"/dev/stderr";
++	exit;
++}
++
++# (3) Collect offset ranges (relative to the section base address) for built-in
++# modules.
++#
++
++# (LLD) Convert an object record from lld format to ld format.
++#
++ARGIND == 3 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
++	gsub(/\)/, "");
++	sub(/:\(/, " ");
++
++	sect = $6;
++	if (!(sect in sect_addend))
++		next;
++
++	sub(/ vmlinux\.a\(/, " ");
++	$0 = " "sect " 0x"$1 " 0x"$3 " " $5;
++}
++
++# (3b) Determine offset range info using vmlinux.o.map.
++#
++# If we do not know an addend for the object's section, we are interested in
++# anything within that section.
++#
++# Determine the top-level section that the object's section was included in
++# during the final link.  This is the section name offset range data will be
++# associated with for this object.
++#
++# The remainder of the processing of the current object record follows the
++# procedure outlined in (3a).
++#
++ARGIND == 3 && /^ [^ ]/ && NF == 4 && $3 != "0x0" {
++	osect = $1;
++	if (!(osect in sect_addend))
++		next;
++
++	# We need to work with the main section.
++	sect = sect_in[osect];
++
++	# Turn the address into an offset from the section base.
++	soff = $2;
++	sub(addr_prefix, "0x", soff);
++	soff = strtonum(soff) + sect_addend[osect];
++	eoff = soff + strtonum($3);
++
++	# Determine which (if any) built-in modules the object belongs to.
++	mod = get_module_info($4);
++
++	# If we are processing a built-in module:
++	#   - If the current object is within the same module, we update its
++	#     entry by extending the range and move on
++	#   - Otherwise:
++	#       + If we are still processing within the same main section, we
++	#         validate the end offset against the start offset of the
++	#         current object (e.g. .rodata.str1.[18] objects are often
++	#         listed with an incorrect size in the linker map)
++	#       + Otherwise, we validate the end offset against the section
++	#         size
++	if (mod_name) {
++		if (mod == mod_name) {
++			mod_eoff = eoff;
++			update_entry(mod_sect, mod_name, mod_soff, eoff);
++
++			next;
++		} else if (sect == sect_in[mod_sect]) {
++			if (mod_eoff > soff)
++				update_entry(mod_sect, mod_name, mod_soff, soff);
++		} else {
++			v = sect_size[sect_in[mod_sect]];
++			if (mod_eoff > v)
++				update_entry(mod_sect, mod_name, mod_soff, v);
++		}
++	}
++
++	mod_name = mod;
++
++	# If we encountered an object that is not part of a built-in module, we
++	# do not need to record any data.
++	if (!mod)
++		next;
++
++	# At this point, we encountered the start of a new built-in module.
++	mod_name = mod;
++	mod_soff = soff;
++	mod_eoff = eoff;
++	mod_sect = osect;
++	update_entry(osect, mod, soff, mod_eoff);
++
++	next;
++}
++
++# (4) Generate the output.
++#
++# Anchor records are added for each section that contains offset range data
++# records.  They are added at an adjusted section base address (base << 1) to
++# ensure they come first in the sorted records (see update_entry() above for
++# more information).
++#
++# All entries are sorted by (adjusted) address to ensure that the output can be
++# parsed in strict ascending address order.
++#
++END {
++	for (sect in count) {
++		if (sect in sect_anchor) {
++			idx = sprintf("%016x", sect_base[sect] * 2);
++			entries[idx] = sect_anchor[sect];
++		}
++	}
++
++	n = asorti(entries, indices);
++	for (i = 1; i <= n; i++)
++		print entries[indices[i]];
++}
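
Putting the two sprintf formats above together, the generated
modules.builtin.ranges file is a sorted list of per-section anchor and range
records along these lines (addresses and module names are illustrative only):

  .text 00000000-00000000 = _text
  .text 000d8000-000dd000 zram
  .rodata 00000000-00000000 = __start_rodata
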
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index f48d72d22dc2a..d7e6cd7781256 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -733,6 +733,7 @@ static const char *const section_white_list[] =
+ 	".comment*",
+ 	".debug*",
+ 	".zdebug*",		/* Compressed debug sections. */
++        ".ctf",			/* Type info */
+ 	".GCC.command.line",	/* record-gcc-switches */
+ 	".mdebug*",        /* alpha, score, mips etc. */
+ 	".pdr",            /* alpha, score, mips etc. */
+diff --git a/scripts/modules_builtin.c b/scripts/modules_builtin.c
+new file mode 100644
+index 0000000000000..df52932a4417b
+--- /dev/null
++++ b/scripts/modules_builtin.c
+@@ -0,0 +1,200 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * A simple modules_builtin reader.
++ *
++ * (C) 2014, 2022 Oracle, Inc.  All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#include <errno.h>
++#include <stdio.h>
++#include <stdlib.h>
++#include <string.h>
++
++#include "modules_builtin.h"
++
++/*
++ * Read a modules.builtin.objs file and translate it into a stream of
++ * name / module-name pairs.
++ */
++
++/*
++ * Construct a modules.builtin.objs iterator.
++ */
++struct modules_builtin_iter *
++modules_builtin_iter_new(const char *modules_builtin_file)
++{
++	struct modules_builtin_iter *i;
++
++	i = calloc(1, sizeof(struct modules_builtin_iter));
++	if (i == NULL)
++		return NULL;
++
++	i->f = fopen(modules_builtin_file, "r");
++
++	if (i->f == NULL) {
++		fprintf(stderr, "Cannot open builtin module file %s: %s\n",
++			modules_builtin_file, strerror(errno));
++		return NULL;
++	}
++
++	return i;
++}
++
++/*
++ * Iterate, returning a new null-terminated array of object file names, and a
++ * new dynamically-allocated module name.  (The module name passed in is freed.)
++ *
++ * The array of object file names should be freed by the caller: the strings it
++ * points to are owned by the iterator, and should not be freed.
++ */
++
++char ** __attribute__((__nonnull__))
++modules_builtin_iter_next(struct modules_builtin_iter *i, char **module_name)
++{
++	size_t npaths = 1;
++	char **module_paths;
++	char *last_slash;
++	char *last_dot;
++	char *trailing_linefeed;
++	char *object_name = i->line;
++	char *dash;
++	int composite = 0;
++
++	/*
++	 * Read in all module entries, computing the suffixless, pathless name
++	 * of the module and building the next arrayful of object file names for
++	 * return.
++	 *
++	 * Modules can consist of multiple files: in this case, the portion
++	 * before the colon is the path to the module (as before): the portion
++	 * after the colon is a space-separated list of files that should be
++	 * considered part of this module.  In this case, the object file named
++	 * before the colon does not actually exist: it is merged
++	 * into built-in.a without ever being written out.
++	 *
++	 * All module names have - translated to _, to match what is done to the
++	 * names of the same things when built as modules.
++	 */
++
++	/*
++	 * Reinvocation of exhausted iterator. Return NULL, once.
++	 */
++retry:
++	if (getline(&i->line, &i->line_size, i->f) < 0) {
++		if (ferror(i->f)) {
++			fprintf(stderr, "Error reading from modules_builtin file:"
++				" %s\n", strerror(errno));
++			exit(1);
++		}
++		rewind(i->f);
++		return NULL;
++	}
++
++	if (i->line[0] == '\0')
++		goto retry;
++
++	trailing_linefeed = strchr(i->line, '\n');
++	if (trailing_linefeed != NULL)
++		*trailing_linefeed = '\0';
++
++	/*
++	 * Slice the line in two at the colon, if any.  If there is anything
++	 * past the ': ', this is a composite module.  (We allow for no colon
++	 * for robustness, even though one should always be present.)
++	 */
++	if (strchr(i->line, ':') != NULL) {
++		char *name_start;
++
++		object_name = strchr(i->line, ':');
++		*object_name = '\0';
++		object_name++;
++		name_start = object_name + strspn(object_name, " \n");
++		if (*name_start != '\0') {
++			composite = 1;
++			object_name = name_start;
++		}
++	}
++
++	/*
++	 * Figure out the module name.
++	 */
++	last_slash = strrchr(i->line, '/');
++	last_slash = (!last_slash) ? i->line :
++		last_slash + 1;
++	free(*module_name);
++	*module_name = strdup(last_slash);
++	dash = *module_name;
++
++	while (dash != NULL) {
++		dash = strchr(dash, '-');
++		if (dash != NULL)
++			*dash = '_';
++	}
++
++	last_dot = strrchr(*module_name, '.');
++	if (last_dot != NULL)
++		*last_dot = '\0';
++
++	/*
++	 * Multifile separator? Object file names explicitly stated:
++	 * slice them up and shuffle them in.
++	 *
++	 * The array size may be an overestimate if any object file
++	 * names start or end with spaces (very unlikely) but cannot be
++	 * an underestimate.  (Check for it anyway.)
++	 */
++	if (composite) {
++		char *one_object;
++
++		for (npaths = 0, one_object = object_name;
++		     one_object != NULL;
++		     npaths++, one_object = strchr(one_object + 1, ' '));
++	}
++
++	module_paths = malloc((npaths + 1) * sizeof(char *));
++	if (!module_paths) {
++		fprintf(stderr, "%s: out of memory on module %s\n", __func__,
++			*module_name);
++		exit(1);
++	}
++
++	if (composite) {
++		char *one_object;
++		size_t i = 0;
++
++		while ((one_object = strsep(&object_name, " ")) != NULL) {
++			if (i >= npaths) {
++				fprintf(stderr, "%s: num_objs overflow on module "
++					"%s: this is a bug.\n", __func__,
++					*module_name);
++				exit(1);
++			}
++
++			module_paths[i++] = one_object;
++		}
++	} else
++		module_paths[0] = i->line;	/* untransformed module name */
++
++	module_paths[npaths] = NULL;
++
++	return module_paths;
++}
++
++/*
++ * Free an iterator. Can be called while iteration is underway, so even
++ * state that is freed at the end of iteration must be freed here too.
++ */
++void
++modules_builtin_iter_free(struct modules_builtin_iter *i)
++{
++	if (i == NULL)
++		return;
++	fclose(i->f);
++	free(i->line);
++	free(i);
++}
+diff --git a/scripts/modules_builtin.h b/scripts/modules_builtin.h
+new file mode 100644
+index 0000000000000..5138792b42ef0
+--- /dev/null
++++ b/scripts/modules_builtin.h
+@@ -0,0 +1,48 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * A simple modules.builtin.objs reader.
++ *
++ * (C) 2014, 2022 Oracle, Inc.  All rights reserved.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ */
++
++#ifndef _LINUX_MODULES_BUILTIN_H
++#define _LINUX_MODULES_BUILTIN_H
++
++#include <stdio.h>
++#include <stddef.h>
++
++/*
++ * modules.builtin.objs iteration state.
++ */
++struct modules_builtin_iter {
++	FILE *f;
++	char *line;
++	size_t line_size;
++};
++
++/*
++ * Construct a modules_builtin.objs iterator.
++ */
++struct modules_builtin_iter *
++modules_builtin_iter_new(const char *modules_builtin_file);
++
++/*
++ * Iterate, returning a new null-terminated array of object file names, and a
++ * new dynamically-allocated module name.  (The module name passed in is freed.)
++ *
++ * The array of object file names should be freed by the caller: the strings it
++ * points to are owned by the iterator, and should not be freed.
++ */
++
++char ** __attribute__((__nonnull__))
++modules_builtin_iter_next(struct modules_builtin_iter *i, char **module_name);
++
++void
++modules_builtin_iter_free(struct modules_builtin_iter *i);
++
++#endif
+diff --git a/scripts/package/kernel.spec b/scripts/package/kernel.spec
+index c52d517b93647..8f75906a96314 100644
+--- a/scripts/package/kernel.spec
++++ b/scripts/package/kernel.spec
+@@ -53,12 +53,18 @@ patch -p1 < %{SOURCE2}
+ 
+ %build
+ %{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release}
++%if %{with_ctf}
++%{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release} ctf
++%endif
+ 
+ %install
+ mkdir -p %{buildroot}/lib/modules/%{KERNELRELEASE}
+ cp $(%{make} %{makeflags} -s image_name) %{buildroot}/lib/modules/%{KERNELRELEASE}/vmlinuz
+ # DEPMOD=true makes depmod no-op. We do not package depmod-generated files.
+ %{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} DEPMOD=true modules_install
++%if %{with_ctf}
++%{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} ctf_install
++%endif
+ %{make} %{makeflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install
+ cp System.map %{buildroot}/lib/modules/%{KERNELRELEASE}
+ cp .config %{buildroot}/lib/modules/%{KERNELRELEASE}/config
+diff --git a/scripts/package/mkspec b/scripts/package/mkspec
+index ce201bfa8377c..aeb43c7ab1229 100755
+--- a/scripts/package/mkspec
++++ b/scripts/package/mkspec
+@@ -21,10 +21,16 @@ else
+ echo '%define with_devel 0'
+ fi
+ 
++if grep -q CONFIG_CTF=y include/config/auto.conf; then
++echo '%define with_ctf %{?_without_ctf: 0} %{?!_without_ctf: 1}'
++else
++echo '%define with_ctf 0'
++fi
+ cat<<EOF
+ %define ARCH ${ARCH}
+ %define KERNELRELEASE ${KERNELRELEASE}
+ %define pkg_release $("${srctree}/init/build-version")
++
+ EOF
+ 
+ cat "${srctree}/scripts/package/kernel.spec"
+diff --git a/scripts/remove-ctf-lds.awk b/scripts/remove-ctf-lds.awk
+new file mode 100644
+index 0000000000000..5d94d6ee99227
+--- /dev/null
++++ b/scripts/remove-ctf-lds.awk
+@@ -0,0 +1,12 @@
++# SPDX-License-Identifier: GPL-2.0
++# See Makefile.vmlinux_o
++
++BEGIN {
++    discards = 0; p = 0
++}
++
++/^====/ { p = 1; next; }
++p && /\.ctf/ { next; }
++p && !discards && /DISCARD/ { sub(/\} *$/, " *(.ctf) }"); discards = 1 }
++p && /^\}/ && !discards { print "  /DISCARD/ : { *(.ctf) }"; }
++p { print $0; }
+diff --git a/scripts/verify_builtin_ranges.awk b/scripts/verify_builtin_ranges.awk
+new file mode 100755
+index 0000000000000..0de7ed5216011
+--- /dev/null
++++ b/scripts/verify_builtin_ranges.awk
+@@ -0,0 +1,370 @@
++#!/usr/bin/gawk -f
++# SPDX-License-Identifier: GPL-2.0
++# verify_builtin_ranges.awk: Verify address range data for builtin modules
++# Written by Kris Van Hees <kris.van.hees@oracle.com>
++#
++# Usage: verify_builtin_ranges.awk modules.builtin.ranges System.map \
++#				   modules.builtin vmlinux.map vmlinux.o.map
++#
++
++# Return the module name(s) (if any) associated with the given object.
++#
++# If we have seen this object before, return information from the cache.
++# Otherwise, retrieve it from the corresponding .cmd file.
++#
++function get_module_info(fn, mod, obj, s) {
++	if (fn in omod)
++		return omod[fn];
++
++	if (match(fn, /\/[^/]+$/) == 0)
++		return "";
++
++	obj = fn;
++	mod = "";
++	fn = substr(fn, 1, RSTART) "." substr(fn, RSTART + 1) ".cmd";
++	if (getline s <fn == 1) {
++		if (match(s, /DKBUILD_MODFILE=['"]+[^'"]+/) > 0) {
++			mod = substr(s, RSTART + 16, RLENGTH - 16);
++			gsub(/['"]/, "", mod);
++		} else if (match(s, /RUST_MODFILE=[^ ]+/) > 0)
++			mod = substr(s, RSTART + 13, RLENGTH - 13);
++	} else {
++		print "ERROR: Failed to read: " fn "\n\n" \
++		      "  For kernels built with O=<objdir>, cd to <objdir>\n" \
++		      "  and execute this script as ./source/scripts/..." \
++		      >"/dev/stderr";
++		close(fn);
++		total = 0;
++		exit(1);
++	}
++	close(fn);
++
++	# A single module (common case) also reflects objects that are not part
++	# of a module.  Some of those objects have names that are also a module
++	# name (e.g. core).  We check the associated module file name, and if
++	# they do not match, the object is not part of a module.
++	if (mod !~ / /) {
++		if (!(mod in mods))
++			mod = "";
++	}
++
++	gsub(/([^/ ]*\/)+/, "", mod);
++	gsub(/-/, "_", mod);
++
++	# At this point, mod is a single (valid) module name, or a list of
++	# module names (that do not need validation).
++	omod[obj] = mod;
++
++	return mod;
++}
++
++# Return a representative integer value for a given hexadecimal address.
++#
++# Since all kernel addresses fall within the same memory region, we can safely
++# strip off the first 6 hex digits before performing the hex-to-dec conversion,
++# thereby avoiding integer overflows.
++#
++function addr2val(val) {
++	sub(/^0x/, "", val);
++	if (length(val) == 16)
++		val = substr(val, 5);
++	return strtonum("0x" val);
++}
++
++# Determine the kernel build directory to use (default is .).
++#
++BEGIN {
++	if (ARGC < 6) {
++		print "Syntax: verify_builtin_ranges.awk <ranges-file> <system-map>\n" \
++		      "          <builtin-file> <vmlinux-map> <vmlinux-o-map>\n" \
++		      >"/dev/stderr";
++		total = 0;
++		exit(1);
++	}
++}
++
++# (1) Load the built-in module address range data.
++#
++ARGIND == 1 {
++	ranges[FNR] = $0;
++	rcnt++;
++	next;
++}
++
++# (2) Annotate System.map symbols with module names.
++#
++ARGIND == 2 {
++	addr = addr2val($1);
++	name = $3;
++
++	while (addr >= mod_eaddr) {
++		if (sect_symb) {
++			if (sect_symb != name)
++				next;
++
++			sect_base = addr - sect_off;
++			if (dbg)
++				printf "[%s] BASE (%s) %016x - %016x = %016x\n", sect_name, sect_symb, addr, sect_off, sect_base >"/dev/stderr";
++			sect_symb = 0;
++		}
++
++		if (++ridx > rcnt)
++			break;
++
++		$0 = ranges[ridx];
++		sub(/-/, " ");
++		if ($4 != "=") {
++			sub(/-/, " ");
++			mod_saddr = strtonum("0x" $2) + sect_base;
++			mod_eaddr = strtonum("0x" $3) + sect_base;
++			$1 = $2 = $3 = "";
++			sub(/^ +/, "");
++			mod_name = $0;
++
++			if (dbg)
++				printf "[%s] %s from %016x to %016x\n", sect_name, mod_name, mod_saddr, mod_eaddr >"/dev/stderr";
++		} else {
++			sect_name = $1;
++			sect_off = strtonum("0x" $2);
++			sect_symb = $5;
++		}
++	}
++
++	idx = addr"-"name;
++	if (addr >= mod_saddr && addr < mod_eaddr)
++		sym2mod[idx] = mod_name;
++
++	next;
++}
++
++# Once we are done annotating the System.map, we no longer need the ranges data.
++#
++FNR == 1 && ARGIND == 3 {
++	delete ranges;
++}
++
++# (3) Build a lookup map of built-in module names.
++#
++# Lines from modules.builtin will be like:
++#	kernel/crypto/lzo-rle.ko
++# and we record the object name "crypto/lzo-rle".
++#
++ARGIND == 3 {
++	sub(/kernel\//, "");			# strip off "kernel/" prefix
++	sub(/\.ko$/, "");			# strip off .ko suffix
++
++	mods[$1] = 1;
++	next;
++}
++
++# (4) Get a list of symbols (per object).
++#
++# Symbols by object are read from vmlinux.map, with fallback to vmlinux.o.map
++# if vmlinux is found to have linked in vmlinux.o.
++#
++
++# If we were able to get the data we need from vmlinux.map, there is no need to
++# process vmlinux.o.map.
++#
++FNR == 1 && ARGIND == 5 && total > 0 {
++	if (dbg)
++		printf "Note: %s is not needed.\n", FILENAME >"/dev/stderr";
++	exit;
++}
++
++# First determine whether we are dealing with a GNU ld or LLVM lld linker map.
++#
++ARGIND >= 4 && FNR == 1 && NF == 7 && $1 == "VMA" && $7 == "Symbol" {
++	map_is_lld = 1;
++	next;
++}
++
++# (LLD) Convert a section record from lld format to ld format.
++#
++ARGIND >= 4 && map_is_lld && NF == 5 && /[0-9] [^ ]+$/ {
++	$0 = $5 " 0x"$1 " 0x"$3 " load address 0x"$2;
++}
++
++# (LLD) Convert an object record from lld format to ld format.
++#
++ARGIND >= 4 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
++	if (/\.a\(/ && !/ vmlinux\.a\(/)
++		next;
++
++	gsub(/\)/, "");
++	sub(/:\(/, " ");
++	sub(/ vmlinux\.a\(/, " ");
++	$0 = " "$6 " 0x"$1 " 0x"$3 " " $5;
++}
++
++# (LLD) Convert a symbol record from lld format to ld format.
++#
++ARGIND >= 4 && map_is_lld && NF == 5 && $5 ~ /^[A-Za-z_][A-Za-z0-9_]*$/ {
++	$0 = "  0x" $1 " " $5;
++}
++
++# (LLD) We do not need any other lld linker map records.
++#
++ARGIND >= 4 && map_is_lld && /^[0-9a-f]{16} / {
++	next;
++}
++
++# Handle section records with long section names (spilling onto a 2nd line).
++#
++ARGIND >= 4 && !map_is_lld && NF == 1 && /^[^ ]/ {
++	s = $0;
++	getline;
++	$0 = s " " $0;
++}
++
++# Next section - previous one is done.
++#
++ARGIND >= 4 && /^[^ ]/ {
++	sect = 0;
++}
++
++# Get the (top level) section name.
++#
++ARGIND >= 4 && /^\./ {
++	# Explicitly ignore a few sections that are not relevant here.
++	if ($1 ~ /^\.orc_/ || $1 ~ /_sites$/ || $1 ~ /\.percpu/)
++		next;
++
++	# Sections with a 0-address can be ignored as well (in vmlinux.map).
++	if (ARGIND == 4 && $2 ~ /^0x0+$/)
++		next;
++
++	sect = $1;
++
++	next;
++}
++
++# If we are not currently in a section we care about, ignore records.
++#
++!sect {
++	next;
++}
++
++# Handle object records with long section names (spilling onto a 2nd line).
++#
++ARGIND >= 4 && /^ [^ \*]/ && NF == 1 {
++	# If the section name is long, the remainder of the entry is found on
++	# the next line.
++	s = $0;
++	getline;
++	$0 = s " " $0;
++}
++
++# Objects linked in from static libraries are ignored.
++# If the object is vmlinux.o, we need to consult vmlinux.o.map for per-object
++# symbol information
++#
++ARGIND == 4 && /^ [^ ]/ && NF == 4 {
++	if ($4 ~ /\.a\(/)
++		next;
++
++	idx = sect":"$1;
++	if (!(idx in sect_addend)) {
++		sect_addend[idx] = addr2val($2);
++		if (dbg)
++			printf "ADDEND %s = %016x\n", idx, sect_addend[idx] >"/dev/stderr";
++	}
++	if ($4 == "vmlinux.o") {
++		need_o_map = 1;
++		next;
++	}
++}
++
++# If data from vmlinux.o.map is needed, we only process section and object
++# records from vmlinux.map to determine which section we need to pay attention
++# to in vmlinux.o.map.  So skip everything else from vmlinux.map.
++#
++ARGIND == 4 && need_o_map {
++	next;
++}
++
++# Get module information for the current object.
++#
++ARGIND >= 4 && /^ [^ ]/ && NF == 4 {
++	msect = $1;
++	mod_name = get_module_info($4);
++	mod_eaddr = addr2val($2) + addr2val($3);
++
++	next;
++}
++
++# Process a symbol record.
++#
++# Evaluate the module information obtained from vmlinux.map (or vmlinux.o.map)
++# as follows:
++#  - For all symbols in a given object:
++#     - If the symbol is annotated with the same module name(s) that the object
++#       belongs to, count it as a match.
++#     - Otherwise:
++#        - If the symbol is known to have duplicates of which at least one is
++#          in a built-in module, disregard it.
++#        - If the symbol is not annotated with any module name(s) AND the
++#          object belongs to built-in modules, count it as missing.
++#        - Otherwise, count it as a mismatch.
++#
++ARGIND >= 4 && /^ / && NF == 2 && $1 ~ /^0x/ {
++	idx = sect":"msect;
++	if (!(idx in sect_addend))
++		next;
++
++	addr = addr2val($1);
++
++	# Handle the rare but annoying case where a 0-size symbol is placed at
++	# the byte *after* the module range.  Based on vmlinux.map it will be
++	# considered part of the current object, but it falls just beyond the
++	# module address range.  Unfortunately, its address could be at the
++	# start of another built-in module, so the only safe thing to do is to
++	# ignore it.
++	if (mod_name && addr == mod_eaddr)
++		next;
++
++	# If we are processing vmlinux.o.map, we need to apply the base address
++	# of the section to the relative address on the record.
++	#
++	if (ARGIND == 5)
++		addr += sect_addend[idx];
++
++	idx = addr"-"$2;
++	mod = "";
++	if (idx in sym2mod) {
++		mod = sym2mod[idx];
++		if (sym2mod[idx] == mod_name) {
++			mod_matches++;
++			matches++;
++		} else if (mod_name == "") {
++			print $2 " in " mod " (should NOT be)";
++			mismatches++;
++		} else {
++			print $2 " in " mod " (should be " mod_name ")";
++			mismatches++;
++		}
++	} else if (mod_name != "") {
++		print $2 " should be in " mod_name;
++		missing++;
++	} else
++		matches++;
++
++	total++;
++
++	next;
++}
++
++# Issue the comparison report.
++#
++END {
++	if (total) {
++		printf "Verification of %s:\n", ARGV[1];
++		printf "  Correct matches:  %6d (%d%% of total)\n", matches, 100 * matches / total;
++		printf "    Module matches: %6d (%d%% of matches)\n", mod_matches, 100 * mod_matches / matches;
++		printf "  Mismatches:       %6d (%d%% of total)\n", mismatches, 100 * mismatches / total;
++		printf "  Missing:          %6d (%d%% of total)\n", missing, 100 * missing / total;
++
++		if (mismatches || missing)
++			exit(1);
++	}
++}


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-30 16:03 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-30 16:03 UTC (permalink / raw
  To: gentoo-commits

commit:     214d41397bbe150fe75f123d504ba3b7f6d0f1f4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep 30 16:02:51 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep 30 16:02:51 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=214d4139

Linux patch 6.10.12

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1011_linux-6.10.12.patch | 2389 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2393 insertions(+)

diff --git a/0000_README b/0000_README
index 23b2116a..f735905c 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-6.10.11.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.11
 
+Patch:  1011_linux-6.10.12.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.12
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1011_linux-6.10.12.patch b/1011_linux-6.10.12.patch
new file mode 100644
index 00000000..d6d1ff01
--- /dev/null
+++ b/1011_linux-6.10.12.patch
@@ -0,0 +1,2389 @@
+diff --git a/Makefile b/Makefile
+index 447856c43b3275..175d7f27ea32a4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/loongarch/include/asm/hw_irq.h b/arch/loongarch/include/asm/hw_irq.h
+index af4f4e8fbd858f..8156ffb6741591 100644
+--- a/arch/loongarch/include/asm/hw_irq.h
++++ b/arch/loongarch/include/asm/hw_irq.h
+@@ -9,6 +9,8 @@
+ 
+ extern atomic_t irq_err_count;
+ 
++#define ARCH_IRQ_INIT_FLAGS	IRQ_NOPROBE
++
+ /*
+  * interrupt-retrigger: NOP for now. This may not be appropriate for all
+  * machines, we'll see ...
+diff --git a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h
+index 590a92cb54165c..d741c3e9933a51 100644
+--- a/arch/loongarch/include/asm/kvm_vcpu.h
++++ b/arch/loongarch/include/asm/kvm_vcpu.h
+@@ -76,7 +76,6 @@ static inline void kvm_restore_lasx(struct loongarch_fpu *fpu) { }
+ #endif
+ 
+ void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
+-void kvm_reset_timer(struct kvm_vcpu *vcpu);
+ void kvm_save_timer(struct kvm_vcpu *vcpu);
+ void kvm_restore_timer(struct kvm_vcpu *vcpu);
+ 
+diff --git a/arch/loongarch/kernel/irq.c b/arch/loongarch/kernel/irq.c
+index f4991c03514f48..adac8fcbb2aca4 100644
+--- a/arch/loongarch/kernel/irq.c
++++ b/arch/loongarch/kernel/irq.c
+@@ -102,9 +102,6 @@ void __init init_IRQ(void)
+ 	mp_ops.init_ipi();
+ #endif
+ 
+-	for (i = 0; i < NR_IRQS; i++)
+-		irq_set_noprobe(i);
+-
+ 	for_each_possible_cpu(i) {
+ 		page = alloc_pages_node(cpu_to_node(i), GFP_KERNEL, order);
+ 
+diff --git a/arch/loongarch/kvm/timer.c b/arch/loongarch/kvm/timer.c
+index bcc6b6d063d914..74a4b5c272d60e 100644
+--- a/arch/loongarch/kvm/timer.c
++++ b/arch/loongarch/kvm/timer.c
+@@ -188,10 +188,3 @@ void kvm_save_timer(struct kvm_vcpu *vcpu)
+ 	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ESTAT);
+ 	preempt_enable();
+ }
+-
+-void kvm_reset_timer(struct kvm_vcpu *vcpu)
+-{
+-	write_gcsr_timercfg(0);
+-	kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG, 0);
+-	hrtimer_cancel(&vcpu->arch.swtimer);
+-}
+diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
+index 9e8030d4512902..0b53f4d9fddf91 100644
+--- a/arch/loongarch/kvm/vcpu.c
++++ b/arch/loongarch/kvm/vcpu.c
+@@ -572,7 +572,7 @@ static int kvm_set_one_reg(struct kvm_vcpu *vcpu,
+ 				vcpu->kvm->arch.time_offset = (signed long)(v - drdtime());
+ 			break;
+ 		case KVM_REG_LOONGARCH_VCPU_RESET:
+-			kvm_reset_timer(vcpu);
++			vcpu->arch.st.guest_addr = 0;
+ 			memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending));
+ 			memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear));
+ 			break;
+diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
+index 3827dc76edd823..4520c57415797f 100644
+--- a/arch/microblaze/mm/init.c
++++ b/arch/microblaze/mm/init.c
+@@ -193,11 +193,6 @@ asmlinkage void __init mmu_init(void)
+ {
+ 	unsigned int kstart, ksize;
+ 
+-	if (!memblock.reserved.cnt) {
+-		pr_emerg("Error memory count\n");
+-		machine_restart(NULL);
+-	}
+-
+ 	if ((u32) memblock.memory.regions[0].size < 0x400000) {
+ 		pr_emerg("Memory must be greater than 4MB\n");
+ 		machine_restart(NULL);
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index 41632fb57796de..ead967479fa634 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -424,6 +424,7 @@ static void __init ms_hyperv_init_platform(void)
+ 	    ms_hyperv.misc_features & HV_FEATURE_FREQUENCY_MSRS_AVAILABLE) {
+ 		x86_platform.calibrate_tsc = hv_get_tsc_khz;
+ 		x86_platform.calibrate_cpu = hv_get_tsc_khz;
++		setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ);
+ 	}
+ 
+ 	if (ms_hyperv.priv_high & HV_ISOLATION) {
+diff --git a/drivers/accel/drm_accel.c b/drivers/accel/drm_accel.c
+index 16c3edb8c46ee1..aa826033b0ceb9 100644
+--- a/drivers/accel/drm_accel.c
++++ b/drivers/accel/drm_accel.c
+@@ -8,7 +8,7 @@
+ 
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+-#include <linux/idr.h>
++#include <linux/xarray.h>
+ 
+ #include <drm/drm_accel.h>
+ #include <drm/drm_auth.h>
+@@ -18,8 +18,7 @@
+ #include <drm/drm_ioctl.h>
+ #include <drm/drm_print.h>
+ 
+-static DEFINE_SPINLOCK(accel_minor_lock);
+-static struct idr accel_minors_idr;
++DEFINE_XARRAY_ALLOC(accel_minors_xa);
+ 
+ static struct dentry *accel_debugfs_root;
+ 
+@@ -117,99 +116,6 @@ void accel_set_device_instance_params(struct device *kdev, int index)
+ 	kdev->type = &accel_sysfs_device_minor;
+ }
+ 
+-/**
+- * accel_minor_alloc() - Allocates a new accel minor
+- *
+- * This function access the accel minors idr and allocates from it
+- * a new id to represent a new accel minor
+- *
+- * Return: A new id on success or error code in case idr_alloc failed
+- */
+-int accel_minor_alloc(void)
+-{
+-	unsigned long flags;
+-	int r;
+-
+-	spin_lock_irqsave(&accel_minor_lock, flags);
+-	r = idr_alloc(&accel_minors_idr, NULL, 0, ACCEL_MAX_MINORS, GFP_NOWAIT);
+-	spin_unlock_irqrestore(&accel_minor_lock, flags);
+-
+-	return r;
+-}
+-
+-/**
+- * accel_minor_remove() - Remove an accel minor
+- * @index: The minor id to remove.
+- *
+- * This function access the accel minors idr and removes from
+- * it the member with the id that is passed to this function.
+- */
+-void accel_minor_remove(int index)
+-{
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&accel_minor_lock, flags);
+-	idr_remove(&accel_minors_idr, index);
+-	spin_unlock_irqrestore(&accel_minor_lock, flags);
+-}
+-
+-/**
+- * accel_minor_replace() - Replace minor pointer in accel minors idr.
+- * @minor: Pointer to the new minor.
+- * @index: The minor id to replace.
+- *
+- * This function access the accel minors idr structure and replaces the pointer
+- * that is associated with an existing id. Because the minor pointer can be
+- * NULL, we need to explicitly pass the index.
+- *
+- * Return: 0 for success, negative value for error
+- */
+-void accel_minor_replace(struct drm_minor *minor, int index)
+-{
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&accel_minor_lock, flags);
+-	idr_replace(&accel_minors_idr, minor, index);
+-	spin_unlock_irqrestore(&accel_minor_lock, flags);
+-}
+-
+-/*
+- * Looks up the given minor-ID and returns the respective DRM-minor object. The
+- * refence-count of the underlying device is increased so you must release this
+- * object with accel_minor_release().
+- *
+- * The object can be only a drm_minor that represents an accel device.
+- *
+- * As long as you hold this minor, it is guaranteed that the object and the
+- * minor->dev pointer will stay valid! However, the device may get unplugged and
+- * unregistered while you hold the minor.
+- */
+-static struct drm_minor *accel_minor_acquire(unsigned int minor_id)
+-{
+-	struct drm_minor *minor;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&accel_minor_lock, flags);
+-	minor = idr_find(&accel_minors_idr, minor_id);
+-	if (minor)
+-		drm_dev_get(minor->dev);
+-	spin_unlock_irqrestore(&accel_minor_lock, flags);
+-
+-	if (!minor) {
+-		return ERR_PTR(-ENODEV);
+-	} else if (drm_dev_is_unplugged(minor->dev)) {
+-		drm_dev_put(minor->dev);
+-		return ERR_PTR(-ENODEV);
+-	}
+-
+-	return minor;
+-}
+-
+-static void accel_minor_release(struct drm_minor *minor)
+-{
+-	drm_dev_put(minor->dev);
+-}
+-
+ /**
+  * accel_open - open method for ACCEL file
+  * @inode: device inode
+@@ -227,7 +133,7 @@ int accel_open(struct inode *inode, struct file *filp)
+ 	struct drm_minor *minor;
+ 	int retcode;
+ 
+-	minor = accel_minor_acquire(iminor(inode));
++	minor = drm_minor_acquire(&accel_minors_xa, iminor(inode));
+ 	if (IS_ERR(minor))
+ 		return PTR_ERR(minor);
+ 
+@@ -246,7 +152,7 @@ int accel_open(struct inode *inode, struct file *filp)
+ 
+ err_undo:
+ 	atomic_dec(&dev->open_count);
+-	accel_minor_release(minor);
++	drm_minor_release(minor);
+ 	return retcode;
+ }
+ EXPORT_SYMBOL_GPL(accel_open);
+@@ -257,7 +163,7 @@ static int accel_stub_open(struct inode *inode, struct file *filp)
+ 	struct drm_minor *minor;
+ 	int err;
+ 
+-	minor = accel_minor_acquire(iminor(inode));
++	minor = drm_minor_acquire(&accel_minors_xa, iminor(inode));
+ 	if (IS_ERR(minor))
+ 		return PTR_ERR(minor);
+ 
+@@ -274,7 +180,7 @@ static int accel_stub_open(struct inode *inode, struct file *filp)
+ 		err = 0;
+ 
+ out:
+-	accel_minor_release(minor);
++	drm_minor_release(minor);
+ 
+ 	return err;
+ }
+@@ -290,15 +196,13 @@ void accel_core_exit(void)
+ 	unregister_chrdev(ACCEL_MAJOR, "accel");
+ 	debugfs_remove(accel_debugfs_root);
+ 	accel_sysfs_destroy();
+-	idr_destroy(&accel_minors_idr);
++	WARN_ON(!xa_empty(&accel_minors_xa));
+ }
+ 
+ int __init accel_core_init(void)
+ {
+ 	int ret;
+ 
+-	idr_init(&accel_minors_idr);
+-
+ 	ret = accel_sysfs_init();
+ 	if (ret < 0) {
+ 		DRM_ERROR("Cannot create ACCEL class: %d\n", ret);
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 1fd3b7073ab90d..b5bfe12a97e1bc 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -1208,7 +1208,7 @@ static int btintel_pcie_setup_hdev(struct btintel_pcie_data *data)
+ 	int err;
+ 	struct hci_dev *hdev;
+ 
+-	hdev = hci_alloc_dev();
++	hdev = hci_alloc_dev_priv(sizeof(struct btintel_data));
+ 	if (!hdev)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/clk/qcom/gcc-sm8650.c b/drivers/clk/qcom/gcc-sm8650.c
+index 9d1cbdf860fb3b..10834c3141d07f 100644
+--- a/drivers/clk/qcom/gcc-sm8650.c
++++ b/drivers/clk/qcom/gcc-sm8650.c
+@@ -713,7 +713,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s0_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -728,7 +728,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s1_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -743,7 +743,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s2_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -758,7 +758,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s3_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -773,7 +773,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s4_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -788,7 +788,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s5_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -803,7 +803,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s6_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -818,7 +818,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s7_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -833,7 +833,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s8_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -848,7 +848,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s9_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -863,7 +863,7 @@ static struct clk_init_data gcc_qupv3_wrap1_qspi_ref_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_qspi_ref_clk_src = {
+@@ -899,7 +899,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s0_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
+@@ -916,7 +916,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s1_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
+@@ -948,7 +948,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s3_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
+@@ -980,7 +980,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s4_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
+@@ -997,7 +997,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s5_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
+@@ -1014,7 +1014,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s6_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s6_clk_src = {
+@@ -1031,7 +1031,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s7_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s7_clk_src = {
+@@ -1059,7 +1059,7 @@ static struct clk_rcg2 gcc_qupv3_wrap2_ibi_ctrl_0_clk_src = {
+ 		.parent_data = gcc_parent_data_2,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_2),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_shared_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -1068,7 +1068,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s0_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = {
+@@ -1085,7 +1085,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s1_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = {
+@@ -1102,7 +1102,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s2_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = {
+@@ -1119,7 +1119,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s3_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = {
+@@ -1136,7 +1136,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s4_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = {
+@@ -1153,7 +1153,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s5_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = {
+@@ -1186,7 +1186,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s6_clk_src_init = {
+ 	.parent_data = gcc_parent_data_10,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_10),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s6_clk_src = {
+@@ -1203,7 +1203,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s7_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s7_clk_src = {
+@@ -1226,7 +1226,7 @@ static struct clk_init_data gcc_qupv3_wrap3_qspi_ref_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_shared_ops,
++	.ops = &clk_rcg2_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap3_qspi_ref_clk_src = {
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index 06b65159f7b4a8..33c7740dd50a70 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -672,6 +672,9 @@ static int smu_v14_0_2_set_default_dpm_table(struct smu_context *smu)
+ 		pcie_table->clk_freq[pcie_table->num_of_link_levels] =
+ 					skutable->LclkFreq[link_level];
+ 		pcie_table->num_of_link_levels++;
++
++		if (link_level == 0)
++			link_level++;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_kms.c b/drivers/gpu/drm/arm/display/komeda/komeda_kms.c
+index fe46b0ebefea37..e5eb5d672bcd7d 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_kms.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_kms.c
+@@ -160,6 +160,7 @@ static int komeda_crtc_normalize_zpos(struct drm_crtc *crtc,
+ 	struct drm_plane *plane;
+ 	struct list_head zorder_list;
+ 	int order = 0, err;
++	u32 slave_zpos = 0;
+ 
+ 	DRM_DEBUG_ATOMIC("[CRTC:%d:%s] calculating normalized zpos values\n",
+ 			 crtc->base.id, crtc->name);
+@@ -199,10 +200,13 @@ static int komeda_crtc_normalize_zpos(struct drm_crtc *crtc,
+ 				 plane_st->zpos, plane_st->normalized_zpos);
+ 
+ 		/* calculate max slave zorder */
+-		if (has_bit(drm_plane_index(plane), kcrtc->slave_planes))
++		if (has_bit(drm_plane_index(plane), kcrtc->slave_planes)) {
++			slave_zpos = plane_st->normalized_zpos;
++			if (to_kplane_st(plane_st)->layer_split)
++				slave_zpos++;
+ 			kcrtc_st->max_slave_zorder =
+-				max(plane_st->normalized_zpos,
+-				    kcrtc_st->max_slave_zorder);
++				max(slave_zpos, kcrtc_st->max_slave_zorder);
++		}
+ 	}
+ 
+ 	crtc_st->zpos_changed = true;
+diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
+index 535b624d4c9da9..928824b9194568 100644
+--- a/drivers/gpu/drm/drm_drv.c
++++ b/drivers/gpu/drm/drm_drv.c
+@@ -34,6 +34,7 @@
+ #include <linux/pseudo_fs.h>
+ #include <linux/slab.h>
+ #include <linux/srcu.h>
++#include <linux/xarray.h>
+ 
+ #include <drm/drm_accel.h>
+ #include <drm/drm_cache.h>
+@@ -54,8 +55,7 @@ MODULE_AUTHOR("Gareth Hughes, Leif Delgass, José Fonseca, Jon Smirl");
+ MODULE_DESCRIPTION("DRM shared core routines");
+ MODULE_LICENSE("GPL and additional rights");
+ 
+-static DEFINE_SPINLOCK(drm_minor_lock);
+-static struct idr drm_minors_idr;
++DEFINE_XARRAY_ALLOC(drm_minors_xa);
+ 
+ /*
+  * If the drm core fails to init for whatever reason,
+@@ -83,6 +83,18 @@ DEFINE_STATIC_SRCU(drm_unplug_srcu);
+  * registered and unregistered dynamically according to device-state.
+  */
+ 
++static struct xarray *drm_minor_get_xa(enum drm_minor_type type)
++{
++	if (type == DRM_MINOR_PRIMARY || type == DRM_MINOR_RENDER)
++		return &drm_minors_xa;
++#if IS_ENABLED(CONFIG_DRM_ACCEL)
++	else if (type == DRM_MINOR_ACCEL)
++		return &accel_minors_xa;
++#endif
++	else
++		return ERR_PTR(-EOPNOTSUPP);
++}
++
+ static struct drm_minor **drm_minor_get_slot(struct drm_device *dev,
+ 					     enum drm_minor_type type)
+ {
+@@ -101,25 +113,31 @@ static struct drm_minor **drm_minor_get_slot(struct drm_device *dev,
+ static void drm_minor_alloc_release(struct drm_device *dev, void *data)
+ {
+ 	struct drm_minor *minor = data;
+-	unsigned long flags;
+ 
+ 	WARN_ON(dev != minor->dev);
+ 
+ 	put_device(minor->kdev);
+ 
+-	if (minor->type == DRM_MINOR_ACCEL) {
+-		accel_minor_remove(minor->index);
+-	} else {
+-		spin_lock_irqsave(&drm_minor_lock, flags);
+-		idr_remove(&drm_minors_idr, minor->index);
+-		spin_unlock_irqrestore(&drm_minor_lock, flags);
+-	}
++	xa_erase(drm_minor_get_xa(minor->type), minor->index);
+ }
+ 
++/*
++ * DRM used to support 64 devices, for backwards compatibility we need to maintain the
++ * minor allocation scheme where minors 0-63 are primary nodes, 64-127 are control nodes,
++ * and 128-191 are render nodes.
++ * After reaching the limit, we're allocating minors dynamically - first-come, first-serve.
++ * Accel nodes are using a distinct major, so the minors are allocated in continuous 0-MAX
++ * range.
++ */
++#define DRM_MINOR_LIMIT(t) ({ \
++	typeof(t) _t = (t); \
++	_t == DRM_MINOR_ACCEL ? XA_LIMIT(0, ACCEL_MAX_MINORS) : XA_LIMIT(64 * _t, 64 * _t + 63); \
++})
++#define DRM_EXTENDED_MINOR_LIMIT XA_LIMIT(192, (1 << MINORBITS) - 1)
++
+ static int drm_minor_alloc(struct drm_device *dev, enum drm_minor_type type)
+ {
+ 	struct drm_minor *minor;
+-	unsigned long flags;
+ 	int r;
+ 
+ 	minor = drmm_kzalloc(dev, sizeof(*minor), GFP_KERNEL);
+@@ -129,25 +147,14 @@ static int drm_minor_alloc(struct drm_device *dev, enum drm_minor_type type)
+ 	minor->type = type;
+ 	minor->dev = dev;
+ 
+-	idr_preload(GFP_KERNEL);
+-	if (type == DRM_MINOR_ACCEL) {
+-		r = accel_minor_alloc();
+-	} else {
+-		spin_lock_irqsave(&drm_minor_lock, flags);
+-		r = idr_alloc(&drm_minors_idr,
+-			NULL,
+-			64 * type,
+-			64 * (type + 1),
+-			GFP_NOWAIT);
+-		spin_unlock_irqrestore(&drm_minor_lock, flags);
+-	}
+-	idr_preload_end();
+-
++	r = xa_alloc(drm_minor_get_xa(type), &minor->index,
++		     NULL, DRM_MINOR_LIMIT(type), GFP_KERNEL);
++	if (r == -EBUSY && (type == DRM_MINOR_PRIMARY || type == DRM_MINOR_RENDER))
++		r = xa_alloc(&drm_minors_xa, &minor->index,
++			     NULL, DRM_EXTENDED_MINOR_LIMIT, GFP_KERNEL);
+ 	if (r < 0)
+ 		return r;
+ 
+-	minor->index = r;
+-
+ 	r = drmm_add_action_or_reset(dev, drm_minor_alloc_release, minor);
+ 	if (r)
+ 		return r;
+@@ -163,7 +170,7 @@ static int drm_minor_alloc(struct drm_device *dev, enum drm_minor_type type)
+ static int drm_minor_register(struct drm_device *dev, enum drm_minor_type type)
+ {
+ 	struct drm_minor *minor;
+-	unsigned long flags;
++	void *entry;
+ 	int ret;
+ 
+ 	DRM_DEBUG("\n");
+@@ -186,13 +193,12 @@ static int drm_minor_register(struct drm_device *dev, enum drm_minor_type type)
+ 		goto err_debugfs;
+ 
+ 	/* replace NULL with @minor so lookups will succeed from now on */
+-	if (minor->type == DRM_MINOR_ACCEL) {
+-		accel_minor_replace(minor, minor->index);
+-	} else {
+-		spin_lock_irqsave(&drm_minor_lock, flags);
+-		idr_replace(&drm_minors_idr, minor, minor->index);
+-		spin_unlock_irqrestore(&drm_minor_lock, flags);
++	entry = xa_store(drm_minor_get_xa(type), minor->index, minor, GFP_KERNEL);
++	if (xa_is_err(entry)) {
++		ret = xa_err(entry);
++		goto err_debugfs;
+ 	}
++	WARN_ON(entry);
+ 
+ 	DRM_DEBUG("new minor registered %d\n", minor->index);
+ 	return 0;
+@@ -205,20 +211,13 @@ static int drm_minor_register(struct drm_device *dev, enum drm_minor_type type)
+ static void drm_minor_unregister(struct drm_device *dev, enum drm_minor_type type)
+ {
+ 	struct drm_minor *minor;
+-	unsigned long flags;
+ 
+ 	minor = *drm_minor_get_slot(dev, type);
+ 	if (!minor || !device_is_registered(minor->kdev))
+ 		return;
+ 
+ 	/* replace @minor with NULL so lookups will fail from now on */
+-	if (minor->type == DRM_MINOR_ACCEL) {
+-		accel_minor_replace(NULL, minor->index);
+-	} else {
+-		spin_lock_irqsave(&drm_minor_lock, flags);
+-		idr_replace(&drm_minors_idr, NULL, minor->index);
+-		spin_unlock_irqrestore(&drm_minor_lock, flags);
+-	}
++	xa_store(drm_minor_get_xa(type), minor->index, NULL, GFP_KERNEL);
+ 
+ 	device_del(minor->kdev);
+ 	dev_set_drvdata(minor->kdev, NULL); /* safety belt */
+@@ -234,16 +233,15 @@ static void drm_minor_unregister(struct drm_device *dev, enum drm_minor_type typ
+  * minor->dev pointer will stay valid! However, the device may get unplugged and
+  * unregistered while you hold the minor.
+  */
+-struct drm_minor *drm_minor_acquire(unsigned int minor_id)
++struct drm_minor *drm_minor_acquire(struct xarray *minor_xa, unsigned int minor_id)
+ {
+ 	struct drm_minor *minor;
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&drm_minor_lock, flags);
+-	minor = idr_find(&drm_minors_idr, minor_id);
++	xa_lock(minor_xa);
++	minor = xa_load(minor_xa, minor_id);
+ 	if (minor)
+ 		drm_dev_get(minor->dev);
+-	spin_unlock_irqrestore(&drm_minor_lock, flags);
++	xa_unlock(minor_xa);
+ 
+ 	if (!minor) {
+ 		return ERR_PTR(-ENODEV);
+@@ -1036,7 +1034,7 @@ static int drm_stub_open(struct inode *inode, struct file *filp)
+ 
+ 	DRM_DEBUG("\n");
+ 
+-	minor = drm_minor_acquire(iminor(inode));
++	minor = drm_minor_acquire(&drm_minors_xa, iminor(inode));
+ 	if (IS_ERR(minor))
+ 		return PTR_ERR(minor);
+ 
+@@ -1071,7 +1069,7 @@ static void drm_core_exit(void)
+ 	unregister_chrdev(DRM_MAJOR, "drm");
+ 	debugfs_remove(drm_debugfs_root);
+ 	drm_sysfs_destroy();
+-	idr_destroy(&drm_minors_idr);
++	WARN_ON(!xa_empty(&drm_minors_xa));
+ 	drm_connector_ida_destroy();
+ }
+ 
+@@ -1080,7 +1078,6 @@ static int __init drm_core_init(void)
+ 	int ret;
+ 
+ 	drm_connector_ida_init();
+-	idr_init(&drm_minors_idr);
+ 	drm_memcpy_init_early();
+ 
+ 	ret = drm_sysfs_init();
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index 714e42b0510800..f917b259b33421 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -364,7 +364,7 @@ int drm_open(struct inode *inode, struct file *filp)
+ 	struct drm_minor *minor;
+ 	int retcode;
+ 
+-	minor = drm_minor_acquire(iminor(inode));
++	minor = drm_minor_acquire(&drm_minors_xa, iminor(inode));
+ 	if (IS_ERR(minor))
+ 		return PTR_ERR(minor);
+ 
+diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
+index 690505a1f7a5db..12acf44c4e2400 100644
+--- a/drivers/gpu/drm/drm_internal.h
++++ b/drivers/gpu/drm/drm_internal.h
+@@ -81,10 +81,6 @@ void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv);
+ void drm_prime_remove_buf_handle(struct drm_prime_file_private *prime_fpriv,
+ 				 uint32_t handle);
+ 
+-/* drm_drv.c */
+-struct drm_minor *drm_minor_acquire(unsigned int minor_id);
+-void drm_minor_release(struct drm_minor *minor);
+-
+ /* drm_managed.c */
+ void drm_managed_release(struct drm_device *dev);
+ void drmm_add_final_kfree(struct drm_device *dev, void *container);
+diff --git a/drivers/hwmon/asus-ec-sensors.c b/drivers/hwmon/asus-ec-sensors.c
+index 36f9e38000d5ef..3b3b8beed83a51 100644
+--- a/drivers/hwmon/asus-ec-sensors.c
++++ b/drivers/hwmon/asus-ec-sensors.c
+@@ -412,7 +412,7 @@ static const struct ec_board_info board_info_strix_b550_i_gaming = {
+ 
+ static const struct ec_board_info board_info_strix_x570_e_gaming = {
+ 	.sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB |
+-		SENSOR_TEMP_T_SENSOR | SENSOR_TEMP_VRM |
++		SENSOR_TEMP_T_SENSOR |
+ 		SENSOR_FAN_CHIPSET | SENSOR_CURR_CPU |
+ 		SENSOR_IN_CPU_CORE,
+ 	.mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index e4f0a382c21652..ddde067f593fcf 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -2141,7 +2141,7 @@ static int m_can_set_coalesce(struct net_device *dev,
+ 	return 0;
+ }
+ 
+-static const struct ethtool_ops m_can_ethtool_ops = {
++static const struct ethtool_ops m_can_ethtool_ops_coalescing = {
+ 	.supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS_IRQ |
+ 		ETHTOOL_COALESCE_RX_MAX_FRAMES_IRQ |
+ 		ETHTOOL_COALESCE_TX_USECS_IRQ |
+@@ -2152,18 +2152,20 @@ static const struct ethtool_ops m_can_ethtool_ops = {
+ 	.set_coalesce = m_can_set_coalesce,
+ };
+ 
+-static const struct ethtool_ops m_can_ethtool_ops_polling = {
++static const struct ethtool_ops m_can_ethtool_ops = {
+ 	.get_ts_info = ethtool_op_get_ts_info,
+ };
+ 
+-static int register_m_can_dev(struct net_device *dev)
++static int register_m_can_dev(struct m_can_classdev *cdev)
+ {
++	struct net_device *dev = cdev->net;
++
+ 	dev->flags |= IFF_ECHO;	/* we support local echo */
+ 	dev->netdev_ops = &m_can_netdev_ops;
+-	if (dev->irq)
+-		dev->ethtool_ops = &m_can_ethtool_ops;
++	if (dev->irq && cdev->is_peripheral)
++		dev->ethtool_ops = &m_can_ethtool_ops_coalescing;
+ 	else
+-		dev->ethtool_ops = &m_can_ethtool_ops_polling;
++		dev->ethtool_ops = &m_can_ethtool_ops;
+ 
+ 	return register_candev(dev);
+ }
+@@ -2349,7 +2351,7 @@ int m_can_class_register(struct m_can_classdev *cdev)
+ 	if (ret)
+ 		goto rx_offload_del;
+ 
+-	ret = register_m_can_dev(cdev->net);
++	ret = register_m_can_dev(cdev);
+ 	if (ret) {
+ 		dev_err(cdev->dev, "registering %s failed (err=%d)\n",
+ 			cdev->net->name, ret);
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index f1e6007b74cec7..262c6c7002860c 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -744,6 +744,7 @@ static void mcp251xfd_chip_stop(struct mcp251xfd_priv *priv,
+ 
+ 	mcp251xfd_chip_interrupts_disable(priv);
+ 	mcp251xfd_chip_rx_int_disable(priv);
++	mcp251xfd_timestamp_stop(priv);
+ 	mcp251xfd_chip_sleep(priv);
+ }
+ 
+@@ -763,6 +764,8 @@ static int mcp251xfd_chip_start(struct mcp251xfd_priv *priv)
+ 	if (err)
+ 		goto out_chip_stop;
+ 
++	mcp251xfd_timestamp_start(priv);
++
+ 	err = mcp251xfd_set_bittiming(priv);
+ 	if (err)
+ 		goto out_chip_stop;
+@@ -791,7 +794,7 @@ static int mcp251xfd_chip_start(struct mcp251xfd_priv *priv)
+ 
+ 	return 0;
+ 
+- out_chip_stop:
++out_chip_stop:
+ 	mcp251xfd_dump(priv);
+ 	mcp251xfd_chip_stop(priv, CAN_STATE_STOPPED);
+ 
+@@ -1576,7 +1579,7 @@ static irqreturn_t mcp251xfd_irq(int irq, void *dev_id)
+ 		handled = IRQ_HANDLED;
+ 	} while (1);
+ 
+- out_fail:
++out_fail:
+ 	can_rx_offload_threaded_irq_finish(&priv->offload);
+ 
+ 	netdev_err(priv->ndev, "IRQ handler returned %d (intf=0x%08x).\n",
+@@ -1610,11 +1613,12 @@ static int mcp251xfd_open(struct net_device *ndev)
+ 	if (err)
+ 		goto out_mcp251xfd_ring_free;
+ 
++	mcp251xfd_timestamp_init(priv);
++
+ 	err = mcp251xfd_chip_start(priv);
+ 	if (err)
+ 		goto out_transceiver_disable;
+ 
+-	mcp251xfd_timestamp_init(priv);
+ 	clear_bit(MCP251XFD_FLAGS_DOWN, priv->flags);
+ 	can_rx_offload_enable(&priv->offload);
+ 
+@@ -1641,22 +1645,21 @@ static int mcp251xfd_open(struct net_device *ndev)
+ 
+ 	return 0;
+ 
+- out_free_irq:
++out_free_irq:
+ 	free_irq(spi->irq, priv);
+- out_destroy_workqueue:
++out_destroy_workqueue:
+ 	destroy_workqueue(priv->wq);
+- out_can_rx_offload_disable:
++out_can_rx_offload_disable:
+ 	can_rx_offload_disable(&priv->offload);
+ 	set_bit(MCP251XFD_FLAGS_DOWN, priv->flags);
+-	mcp251xfd_timestamp_stop(priv);
+- out_transceiver_disable:
++out_transceiver_disable:
+ 	mcp251xfd_transceiver_disable(priv);
+- out_mcp251xfd_ring_free:
++out_mcp251xfd_ring_free:
+ 	mcp251xfd_ring_free(priv);
+- out_pm_runtime_put:
++out_pm_runtime_put:
+ 	mcp251xfd_chip_stop(priv, CAN_STATE_STOPPED);
+ 	pm_runtime_put(ndev->dev.parent);
+- out_close_candev:
++out_close_candev:
+ 	close_candev(ndev);
+ 
+ 	return err;
+@@ -1674,7 +1677,6 @@ static int mcp251xfd_stop(struct net_device *ndev)
+ 	free_irq(ndev->irq, priv);
+ 	destroy_workqueue(priv->wq);
+ 	can_rx_offload_disable(&priv->offload);
+-	mcp251xfd_timestamp_stop(priv);
+ 	mcp251xfd_chip_stop(priv, CAN_STATE_STOPPED);
+ 	mcp251xfd_transceiver_disable(priv);
+ 	mcp251xfd_ring_free(priv);
+@@ -1820,9 +1822,9 @@ mcp251xfd_register_get_dev_id(const struct mcp251xfd_priv *priv, u32 *dev_id,
+ 	*effective_speed_hz_slow = xfer[0].effective_speed_hz;
+ 	*effective_speed_hz_fast = xfer[1].effective_speed_hz;
+ 
+- out_kfree_buf_tx:
++out_kfree_buf_tx:
+ 	kfree(buf_tx);
+- out_kfree_buf_rx:
++out_kfree_buf_rx:
+ 	kfree(buf_rx);
+ 
+ 	return err;
+@@ -1936,13 +1938,13 @@ static int mcp251xfd_register(struct mcp251xfd_priv *priv)
+ 
+ 	return 0;
+ 
+- out_unregister_candev:
++out_unregister_candev:
+ 	unregister_candev(ndev);
+- out_chip_sleep:
++out_chip_sleep:
+ 	mcp251xfd_chip_sleep(priv);
+- out_runtime_disable:
++out_runtime_disable:
+ 	pm_runtime_disable(ndev->dev.parent);
+- out_runtime_put_noidle:
++out_runtime_put_noidle:
+ 	pm_runtime_put_noidle(ndev->dev.parent);
+ 	mcp251xfd_clks_and_vdd_disable(priv);
+ 
+@@ -2162,9 +2164,9 @@ static int mcp251xfd_probe(struct spi_device *spi)
+ 
+ 	return 0;
+ 
+- out_can_rx_offload_del:
++out_can_rx_offload_del:
+ 	can_rx_offload_del(&priv->offload);
+- out_free_candev:
++out_free_candev:
+ 	spi->max_speed_hz = priv->spi_max_speed_hz_orig;
+ 
+ 	free_candev(ndev);
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-dump.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-dump.c
+index 004eaf96262bfd..050321345304be 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-dump.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-dump.c
+@@ -94,7 +94,7 @@ static void mcp251xfd_dump_registers(const struct mcp251xfd_priv *priv,
+ 		kfree(buf);
+ 	}
+ 
+- out:
++out:
+ 	mcp251xfd_dump_header(iter, MCP251XFD_DUMP_OBJECT_TYPE_REG, reg);
+ }
+ 
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-regmap.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-regmap.c
+index 92b7bc7f14b9eb..65150e76200720 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-regmap.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-regmap.c
+@@ -397,7 +397,7 @@ mcp251xfd_regmap_crc_read(void *context,
+ 
+ 		return err;
+ 	}
+- out:
++out:
+ 	memcpy(val_buf, buf_rx->data, val_len);
+ 
+ 	return 0;
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
+index f72582d4d3e8e2..83c18035b2a24d 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
+@@ -290,7 +290,7 @@ int mcp251xfd_ring_init(struct mcp251xfd_priv *priv)
+ 	const struct mcp251xfd_rx_ring *rx_ring;
+ 	u16 base = 0, ram_used;
+ 	u8 fifo_nr = 1;
+-	int i;
++	int err = 0, i;
+ 
+ 	netdev_reset_queue(priv->ndev);
+ 
+@@ -386,10 +386,18 @@ int mcp251xfd_ring_init(struct mcp251xfd_priv *priv)
+ 		netdev_err(priv->ndev,
+ 			   "Error during ring configuration, using more RAM (%u bytes) than available (%u bytes).\n",
+ 			   ram_used, MCP251XFD_RAM_SIZE);
+-		return -ENOMEM;
++		err = -ENOMEM;
+ 	}
+ 
+-	return 0;
++	if (priv->tx_obj_num_coalesce_irq &&
++	    priv->tx_obj_num_coalesce_irq * 2 != priv->tx->obj_num) {
++		netdev_err(priv->ndev,
++			   "Error during ring configuration, number of TEF coalescing buffers (%u) must be half of TEF buffers (%u).\n",
++			   priv->tx_obj_num_coalesce_irq, priv->tx->obj_num);
++		err = -EINVAL;
++	}
++
++	return err;
+ }
+ 
+ void mcp251xfd_ring_free(struct mcp251xfd_priv *priv)
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
+index 3886476a8f8efb..f732556d233a7b 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
+@@ -219,7 +219,7 @@ int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
+ 		total_frame_len += frame_len;
+ 	}
+ 
+- out_netif_wake_queue:
++out_netif_wake_queue:
+ 	len = i;	/* number of handled goods TEFs */
+ 	if (len) {
+ 		struct mcp251xfd_tef_ring *ring = priv->tef;
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-timestamp.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-timestamp.c
+index 1db99aabe85c56..202ca0d24d03b9 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-timestamp.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-timestamp.c
+@@ -48,9 +48,12 @@ void mcp251xfd_timestamp_init(struct mcp251xfd_priv *priv)
+ 	cc->shift = 1;
+ 	cc->mult = clocksource_hz2mult(priv->can.clock.freq, cc->shift);
+ 
+-	timecounter_init(&priv->tc, &priv->cc, ktime_get_real_ns());
+-
+ 	INIT_DELAYED_WORK(&priv->timestamp, mcp251xfd_timestamp_work);
++}
++
++void mcp251xfd_timestamp_start(struct mcp251xfd_priv *priv)
++{
++	timecounter_init(&priv->tc, &priv->cc, ktime_get_real_ns());
+ 	schedule_delayed_work(&priv->timestamp,
+ 			      MCP251XFD_TIMESTAMP_WORK_DELAY_SEC * HZ);
+ }
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+index 991662fbba42e8..dcbbd2b2fae827 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+@@ -957,6 +957,7 @@ int mcp251xfd_ring_alloc(struct mcp251xfd_priv *priv);
+ int mcp251xfd_handle_rxif(struct mcp251xfd_priv *priv);
+ int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv);
+ void mcp251xfd_timestamp_init(struct mcp251xfd_priv *priv);
++void mcp251xfd_timestamp_start(struct mcp251xfd_priv *priv);
+ void mcp251xfd_timestamp_stop(struct mcp251xfd_priv *priv);
+ 
+ void mcp251xfd_tx_obj_write_sync(struct work_struct *work);
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index fddfd1dd50709f..4c546c3aef0fec 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -572,7 +572,7 @@ static bool ftgmac100_rx_packet(struct ftgmac100 *priv, int *processed)
+ 	(*processed)++;
+ 	return true;
+ 
+- drop:
++drop:
+ 	/* Clean rxdes0 (which resets own bit) */
+ 	rxdes->rxdes0 = cpu_to_le32(status & priv->rxdes0_edorr_mask);
+ 	priv->rx_pointer = ftgmac100_next_rx_pointer(priv, pointer);
+@@ -656,6 +656,11 @@ static bool ftgmac100_tx_complete_packet(struct ftgmac100 *priv)
+ 	ftgmac100_free_tx_packet(priv, pointer, skb, txdes, ctl_stat);
+ 	txdes->txdes0 = cpu_to_le32(ctl_stat & priv->txdes0_edotr_mask);
+ 
++	/* Ensure the descriptor config is visible before setting the tx
++	 * pointer.
++	 */
++	smp_wmb();
++
+ 	priv->tx_clean_pointer = ftgmac100_next_tx_pointer(priv, pointer);
+ 
+ 	return true;
+@@ -809,6 +814,11 @@ static netdev_tx_t ftgmac100_hard_start_xmit(struct sk_buff *skb,
+ 	dma_wmb();
+ 	first->txdes0 = cpu_to_le32(f_ctl_stat);
+ 
++	/* Ensure the descriptor config is visible before setting the tx
++	 * pointer.
++	 */
++	smp_wmb();
++
+ 	/* Update next TX pointer */
+ 	priv->tx_pointer = pointer;
+ 
+@@ -829,7 +839,7 @@ static netdev_tx_t ftgmac100_hard_start_xmit(struct sk_buff *skb,
+ 
+ 	return NETDEV_TX_OK;
+ 
+- dma_err:
++dma_err:
+ 	if (net_ratelimit())
+ 		netdev_err(netdev, "map tx fragment failed\n");
+ 
+@@ -851,7 +861,7 @@ static netdev_tx_t ftgmac100_hard_start_xmit(struct sk_buff *skb,
+ 	 * last fragment, so we know ftgmac100_free_tx_packet()
+ 	 * hasn't freed the skb yet.
+ 	 */
+- drop:
++drop:
+ 	/* Drop the packet */
+ 	dev_kfree_skb_any(skb);
+ 	netdev->stats.tx_dropped++;
+@@ -1344,7 +1354,7 @@ static void ftgmac100_reset(struct ftgmac100 *priv)
+ 	ftgmac100_init_all(priv, true);
+ 
+ 	netdev_dbg(netdev, "Reset done !\n");
+- bail:
++bail:
+ 	if (priv->mii_bus)
+ 		mutex_unlock(&priv->mii_bus->mdio_lock);
+ 	if (netdev->phydev)
+@@ -1543,15 +1553,15 @@ static int ftgmac100_open(struct net_device *netdev)
+ 
+ 	return 0;
+ 
+- err_ncsi:
++err_ncsi:
+ 	napi_disable(&priv->napi);
+ 	netif_stop_queue(netdev);
+- err_alloc:
++err_alloc:
+ 	ftgmac100_free_buffers(priv);
+ 	free_irq(netdev->irq, netdev);
+- err_irq:
++err_irq:
+ 	netif_napi_del(&priv->napi);
+- err_hw:
++err_hw:
+ 	iowrite32(0, priv->base + FTGMAC100_OFFSET_IER);
+ 	ftgmac100_free_rings(priv);
+ 	return err;
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index c2ba586593475c..a3ee7697edb562 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -2419,7 +2419,7 @@ void ice_vsi_decfg(struct ice_vsi *vsi)
+ 		dev_err(ice_pf_to_dev(pf), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
+ 			vsi->vsi_num, err);
+ 
+-	if (ice_is_xdp_ena_vsi(vsi))
++	if (vsi->xdp_rings)
+ 		/* return value check can be skipped here, it always returns
+ 		 * 0 if reset is in progress
+ 		 */
+@@ -2521,7 +2521,7 @@ static void ice_vsi_release_msix(struct ice_vsi *vsi)
+ 		for (q = 0; q < q_vector->num_ring_tx; q++) {
+ 			ice_write_itr(&q_vector->tx, 0);
+ 			wr32(hw, QINT_TQCTL(vsi->txq_map[txq]), 0);
+-			if (ice_is_xdp_ena_vsi(vsi)) {
++			if (vsi->xdp_rings) {
+ 				u32 xdp_txq = txq + vsi->num_xdp_txq;
+ 
+ 				wr32(hw, QINT_TQCTL(vsi->txq_map[xdp_txq]), 0);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 766f9a466bc350..c82715eb5b93cf 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -7253,7 +7253,7 @@ int ice_down(struct ice_vsi *vsi)
+ 	if (tx_err)
+ 		netdev_err(vsi->netdev, "Failed stop Tx rings, VSI %d error %d\n",
+ 			   vsi->vsi_num, tx_err);
+-	if (!tx_err && ice_is_xdp_ena_vsi(vsi)) {
++	if (!tx_err && vsi->xdp_rings) {
+ 		tx_err = ice_vsi_stop_xdp_tx_rings(vsi);
+ 		if (tx_err)
+ 			netdev_err(vsi->netdev, "Failed stop XDP rings, VSI %d error %d\n",
+@@ -7270,7 +7270,7 @@ int ice_down(struct ice_vsi *vsi)
+ 	ice_for_each_txq(vsi, i)
+ 		ice_clean_tx_ring(vsi->tx_rings[i]);
+ 
+-	if (ice_is_xdp_ena_vsi(vsi))
++	if (vsi->xdp_rings)
+ 		ice_for_each_xdp_txq(vsi, i)
+ 			ice_clean_tx_ring(vsi->xdp_rings[i]);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index 87a5427570d762..5dee829bfc47c5 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -39,7 +39,7 @@ static void ice_qp_reset_stats(struct ice_vsi *vsi, u16 q_idx)
+ 	       sizeof(vsi_stat->rx_ring_stats[q_idx]->rx_stats));
+ 	memset(&vsi_stat->tx_ring_stats[q_idx]->stats, 0,
+ 	       sizeof(vsi_stat->tx_ring_stats[q_idx]->stats));
+-	if (ice_is_xdp_ena_vsi(vsi))
++	if (vsi->xdp_rings)
+ 		memset(&vsi->xdp_rings[q_idx]->ring_stats->stats, 0,
+ 		       sizeof(vsi->xdp_rings[q_idx]->ring_stats->stats));
+ }
+@@ -52,7 +52,7 @@ static void ice_qp_reset_stats(struct ice_vsi *vsi, u16 q_idx)
+ static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
+ {
+ 	ice_clean_tx_ring(vsi->tx_rings[q_idx]);
+-	if (ice_is_xdp_ena_vsi(vsi))
++	if (vsi->xdp_rings)
+ 		ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
+ 	ice_clean_rx_ring(vsi->rx_rings[q_idx]);
+ }
+@@ -186,7 +186,7 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+ 	err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
+ 	if (!fail)
+ 		fail = err;
+-	if (ice_is_xdp_ena_vsi(vsi)) {
++	if (vsi->xdp_rings) {
+ 		struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
+ 
+ 		memset(&txq_meta, 0, sizeof(txq_meta));
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index 945ffc083d25c4..79fffa82cfd928 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -3352,7 +3352,7 @@ void iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt,
+ {
+ 	int ret __maybe_unused = 0;
+ 
+-	if (test_bit(STATUS_FW_ERROR, &fwrt->trans->status))
++	if (!iwl_trans_fw_running(fwrt->trans))
+ 		return;
+ 
+ 	if (fw_has_capa(&fwrt->fw->ucode_capa,
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+index b93cef7b233018..4c56dccb18a9b3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+@@ -1579,8 +1579,8 @@ static inline void iwl_trans_fw_error(struct iwl_trans *trans, bool sync)
+ 
+ 	/* prevent double restarts due to the same erroneous FW */
+ 	if (!test_and_set_bit(STATUS_FW_ERROR, &trans->status)) {
+-		iwl_op_mode_nic_error(trans->op_mode, sync);
+ 		trans->state = IWL_TRANS_NO_FW;
++		iwl_op_mode_nic_error(trans->op_mode, sync);
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 259afecd1a98da..83551d962a46c4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -5819,6 +5819,10 @@ static void iwl_mvm_flush_no_vif(struct iwl_mvm *mvm, u32 queues, bool drop)
+ 	int i;
+ 
+ 	if (!iwl_mvm_has_new_tx_api(mvm)) {
++		/* we can't ask the firmware anything if it is dead */
++		if (test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
++			     &mvm->status))
++			return;
+ 		if (drop) {
+ 			mutex_lock(&mvm->mutex);
+ 			iwl_mvm_flush_tx_path(mvm,
+@@ -5914,8 +5918,11 @@ void iwl_mvm_mac_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 
+ 	/* this can take a while, and we may need/want other operations
+ 	 * to succeed while doing this, so do it without the mutex held
++	 * If the firmware is dead, this can't work...
+ 	 */
+-	if (!drop && !iwl_mvm_has_new_tx_api(mvm))
++	if (!drop && !iwl_mvm_has_new_tx_api(mvm) &&
++	    !test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
++		      &mvm->status))
+ 		iwl_trans_wait_tx_queues_empty(mvm->trans, msk);
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 498afbe4ee6beb..6375cc3c48f3c9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -1521,6 +1521,8 @@ void iwl_mvm_stop_device(struct iwl_mvm *mvm)
+ 
+ 	clear_bit(IWL_MVM_STATUS_FIRMWARE_RUNNING, &mvm->status);
+ 
++	iwl_mvm_pause_tcm(mvm, false);
++
+ 	iwl_fw_dbg_stop_sync(&mvm->fwrt);
+ 	iwl_trans_stop_device(mvm->trans);
+ 	iwl_free_fw_paging(&mvm->fwrt);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index 7615c91a55c623..d7c276237c74ea 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -48,6 +48,8 @@
+ /* Number of iterations on the channel for mei filtered scan */
+ #define IWL_MEI_SCAN_NUM_ITER	5U
+ 
++#define WFA_TPC_IE_LEN	9
++
+ struct iwl_mvm_scan_timing_params {
+ 	u32 suspend_time;
+ 	u32 max_out_time;
+@@ -303,8 +305,8 @@ static int iwl_mvm_max_scan_ie_fw_cmd_room(struct iwl_mvm *mvm)
+ 
+ 	max_probe_len = SCAN_OFFLOAD_PROBE_REQ_SIZE;
+ 
+-	/* we create the 802.11 header and SSID element */
+-	max_probe_len -= 24 + 2;
++	/* we create the 802.11 header SSID element and WFA TPC element */
++	max_probe_len -= 24 + 2 + WFA_TPC_IE_LEN;
+ 
+ 	/* DS parameter set element is added on 2.4GHZ band if required */
+ 	if (iwl_mvm_rrm_scan_needed(mvm))
+@@ -731,8 +733,6 @@ static u8 *iwl_mvm_copy_and_insert_ds_elem(struct iwl_mvm *mvm, const u8 *ies,
+ 	return newpos;
+ }
+ 
+-#define WFA_TPC_IE_LEN	9
+-
+ static void iwl_mvm_add_tpc_report_ie(u8 *pos)
+ {
+ 	pos[0] = WLAN_EID_VENDOR_SPECIFIC;
+@@ -837,8 +837,8 @@ static inline bool iwl_mvm_scan_fits(struct iwl_mvm *mvm, int n_ssids,
+ 	return ((n_ssids <= PROBE_OPTION_MAX) &&
+ 		(n_channels <= mvm->fw->ucode_capa.n_scan_channels) &
+ 		(ies->common_ie_len +
+-		 ies->len[NL80211_BAND_2GHZ] +
+-		 ies->len[NL80211_BAND_5GHZ] <=
++		 ies->len[NL80211_BAND_2GHZ] + ies->len[NL80211_BAND_5GHZ] +
++		 ies->len[NL80211_BAND_6GHZ] <=
+ 		 iwl_mvm_max_scan_ie_fw_cmd_room(mvm)));
+ }
+ 
+@@ -3179,18 +3179,16 @@ int iwl_mvm_sched_scan_start(struct iwl_mvm *mvm,
+ 		params.n_channels = j;
+ 	}
+ 
+-	if (non_psc_included &&
+-	    !iwl_mvm_scan_fits(mvm, req->n_ssids, ies, params.n_channels)) {
+-		kfree(params.channels);
+-		return -ENOBUFS;
++	if (!iwl_mvm_scan_fits(mvm, req->n_ssids, ies, params.n_channels)) {
++		ret = -ENOBUFS;
++		goto out;
+ 	}
+ 
+ 	uid = iwl_mvm_build_scan_cmd(mvm, vif, &hcmd, &params, type);
+-
+-	if (non_psc_included)
+-		kfree(params.channels);
+-	if (uid < 0)
+-		return uid;
++	if (uid < 0) {
++		ret = uid;
++		goto out;
++	}
+ 
+ 	ret = iwl_mvm_send_cmd(mvm, &hcmd);
+ 	if (!ret) {
+@@ -3208,6 +3206,9 @@ int iwl_mvm_sched_scan_start(struct iwl_mvm *mvm,
+ 		mvm->sched_scan_pass_all = SCHED_SCAN_PASS_ALL_DISABLED;
+ 	}
+ 
++out:
++	if (non_psc_included)
++		kfree(params.channels);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+index ebf11f276b20a1..37f0bc9e0d9639 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+@@ -89,7 +89,8 @@ iwl_pcie_ctxt_info_dbg_enable(struct iwl_trans *trans,
+ 		}
+ 		break;
+ 	default:
+-		IWL_ERR(trans, "WRT: Invalid buffer destination\n");
++		IWL_DEBUG_FW(trans, "WRT: Invalid buffer destination (%d)\n",
++			     le32_to_cpu(fw_mon_cfg->buf_location));
+ 	}
+ out:
+ 	if (dbg_flags)
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 5e66bcb34d530c..ff1769172778be 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -89,6 +89,11 @@ enum nvme_quirks {
+ 	 */
+ 	NVME_QUIRK_NO_DEEPEST_PS		= (1 << 5),
+ 
++	/*
++	 *  Problems seen with concurrent commands
++	 */
++	NVME_QUIRK_QDEPTH_ONE			= (1 << 6),
++
+ 	/*
+ 	 * Set MEDIUM priority on SQ creation
+ 	 */
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 18d85575cdb482..fde4073fa418ed 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2528,15 +2528,8 @@ static int nvme_pci_enable(struct nvme_dev *dev)
+ 	else
+ 		dev->io_sqes = NVME_NVM_IOSQES;
+ 
+-	/*
+-	 * Temporary fix for the Apple controller found in the MacBook8,1 and
+-	 * some MacBook7,1 to avoid controller resets and data loss.
+-	 */
+-	if (pdev->vendor == PCI_VENDOR_ID_APPLE && pdev->device == 0x2001) {
++	if (dev->ctrl.quirks & NVME_QUIRK_QDEPTH_ONE) {
+ 		dev->q_depth = 2;
+-		dev_warn(dev->ctrl.device, "detected Apple NVMe controller, "
+-			"set queue depth=%u to work around controller resets\n",
+-			dev->q_depth);
+ 	} else if (pdev->vendor == PCI_VENDOR_ID_SAMSUNG &&
+ 		   (pdev->device == 0xa821 || pdev->device == 0xa822) &&
+ 		   NVME_CAP_MQES(dev->ctrl.cap) == 0) {
+@@ -3401,6 +3394,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 				NVME_QUIRK_BOGUS_NID, },
+ 	{ PCI_VDEVICE(REDHAT, 0x0010),	/* Qemu emulated controller */
+ 		.driver_data = NVME_QUIRK_BOGUS_NID, },
++	{ PCI_DEVICE(0x1217, 0x8760), /* O2 Micro 64GB Steam Deck */
++		.driver_data = NVME_QUIRK_QDEPTH_ONE },
+ 	{ PCI_DEVICE(0x126f, 0x2262),	/* Silicon Motion generic */
+ 		.driver_data = NVME_QUIRK_NO_DEEPEST_PS |
+ 				NVME_QUIRK_BOGUS_NID, },
+@@ -3535,7 +3530,12 @@ static const struct pci_device_id nvme_id_table[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0xcd02),
+ 		.driver_data = NVME_QUIRK_DMA_ADDRESS_BITS_48, },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2001),
+-		.driver_data = NVME_QUIRK_SINGLE_VECTOR },
++		/*
++		 * Fix for the Apple controller found in the MacBook8,1 and
++		 * some MacBook7,1 to avoid controller resets and data loss.
++		 */
++		.driver_data = NVME_QUIRK_SINGLE_VECTOR |
++				NVME_QUIRK_QDEPTH_ONE },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2003) },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2005),
+ 		.driver_data = NVME_QUIRK_SINGLE_VECTOR |
+diff --git a/drivers/pinctrl/pinctrl-at91.c b/drivers/pinctrl/pinctrl-at91.c
+index 5aa9d5c533c6a9..d7b66928a4e50d 100644
+--- a/drivers/pinctrl/pinctrl-at91.c
++++ b/drivers/pinctrl/pinctrl-at91.c
+@@ -1409,8 +1409,11 @@ static int at91_pinctrl_probe(struct platform_device *pdev)
+ 
+ 	/* We will handle a range of GPIO pins */
+ 	for (i = 0; i < gpio_banks; i++)
+-		if (gpio_chips[i])
++		if (gpio_chips[i]) {
+ 			pinctrl_add_gpio_range(info->pctl, &gpio_chips[i]->range);
++			gpiochip_add_pin_range(&gpio_chips[i]->chip, dev_name(info->pctl->dev), 0,
++				gpio_chips[i]->range.pin_base, gpio_chips[i]->range.npins);
++		}
+ 
+ 	dev_info(dev, "initialized AT91 pinctrl driver\n");
+ 
+diff --git a/drivers/platform/x86/amd/pmf/pmf-quirks.c b/drivers/platform/x86/amd/pmf/pmf-quirks.c
+index 460444cda1b295..48870ca52b4139 100644
+--- a/drivers/platform/x86/amd/pmf/pmf-quirks.c
++++ b/drivers/platform/x86/amd/pmf/pmf-quirks.c
+@@ -25,7 +25,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 		.ident = "ROG Zephyrus G14",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA403UV"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "GA403U"),
+ 		},
+ 		.driver_data = &quirk_no_sps_bug,
+ 	},
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index fceffe2082ec58..ed3633c5955d9f 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -145,6 +145,10 @@ static struct quirk_entry quirk_asus_ignore_fan = {
+ 	.wmi_ignore_fan = true,
+ };
+ 
++static struct quirk_entry quirk_asus_zenbook_duo_kbd = {
++	.ignore_key_wlan = true,
++};
++
+ static int dmi_matched(const struct dmi_system_id *dmi)
+ {
+ 	pr_info("Identified laptop model '%s'\n", dmi->ident);
+@@ -516,6 +520,15 @@ static const struct dmi_system_id asus_quirks[] = {
+ 		},
+ 		.driver_data = &quirk_asus_ignore_fan,
+ 	},
++	{
++		.callback = dmi_matched,
++		.ident = "ASUS Zenbook Duo UX8406MA",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "UX8406MA"),
++		},
++		.driver_data = &quirk_asus_zenbook_duo_kbd,
++	},
+ 	{},
+ };
+ 
+@@ -630,7 +643,12 @@ static void asus_nb_wmi_key_filter(struct asus_wmi_driver *asus_wmi, int *code,
+ 	case 0x32: /* Volume Mute */
+ 		if (atkbd_reports_vol_keys)
+ 			*code = ASUS_WMI_KEY_IGNORE;
+-
++		break;
++	case 0x5D: /* Wireless console Toggle */
++	case 0x5E: /* Wireless console Enable */
++	case 0x5F: /* Wireless console Disable */
++		if (quirks->ignore_key_wlan)
++			*code = ASUS_WMI_KEY_IGNORE;
+ 		break;
+ 	}
+ }
+diff --git a/drivers/platform/x86/asus-wmi.h b/drivers/platform/x86/asus-wmi.h
+index cc30f185384723..d02f15fd3482fa 100644
+--- a/drivers/platform/x86/asus-wmi.h
++++ b/drivers/platform/x86/asus-wmi.h
+@@ -40,6 +40,7 @@ struct quirk_entry {
+ 	bool wmi_force_als_set;
+ 	bool wmi_ignore_fan;
+ 	bool filter_i8042_e1_extended_codes;
++	bool ignore_key_wlan;
+ 	enum asus_wmi_tablet_switch_mode tablet_switch_mode;
+ 	int wapf;
+ 	/*
+diff --git a/drivers/platform/x86/x86-android-tablets/dmi.c b/drivers/platform/x86/x86-android-tablets/dmi.c
+index 141a2d25e83be6..387dd092c4dd01 100644
+--- a/drivers/platform/x86/x86-android-tablets/dmi.c
++++ b/drivers/platform/x86/x86-android-tablets/dmi.c
+@@ -140,7 +140,6 @@ const struct dmi_system_id x86_android_tablet_ids[] __initconst = {
+ 		/* Lenovo Yoga Tab 3 Pro YT3-X90F */
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
+ 			DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"),
+ 		},
+ 		.driver_data = (void *)&lenovo_yt3_info,
+diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
+index aac0744011a3aa..8e7f4c0473ab94 100644
+--- a/drivers/powercap/intel_rapl_common.c
++++ b/drivers/powercap/intel_rapl_common.c
+@@ -1285,6 +1285,7 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
+ 
+ 	X86_MATCH_VENDOR_FAM(AMD, 0x17, &rapl_defaults_amd),
+ 	X86_MATCH_VENDOR_FAM(AMD, 0x19, &rapl_defaults_amd),
++	X86_MATCH_VENDOR_FAM(AMD, 0x1A, &rapl_defaults_amd),
+ 	X86_MATCH_VENDOR_FAM(HYGON, 0x18, &rapl_defaults_amd),
+ 	{}
+ };
+@@ -2128,6 +2129,21 @@ void rapl_remove_package(struct rapl_package *rp)
+ }
+ EXPORT_SYMBOL_GPL(rapl_remove_package);
+ 
++/*
++ * RAPL Package energy counter scope:
++ * 1. AMD/HYGON platforms use per-PKG package energy counter
++ * 2. For Intel platforms
++ *	2.1 CLX-AP platform has per-DIE package energy counter
++ *	2.2 Other platforms that uses MSR RAPL are single die systems so the
++ *          package energy counter can be considered as per-PKG/per-DIE,
++ *          here it is considered as per-DIE.
++ *	2.3 New platforms that use TPMI RAPL doesn't care about the
++ *	    scope because they are not MSR/CPU based.
++ */
++#define rapl_msrs_are_pkg_scope()				\
++	(boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||	\
++	 boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
++
+ /* caller to ensure CPU hotplug lock is held */
+ struct rapl_package *rapl_find_package_domain_cpuslocked(int id, struct rapl_if_priv *priv,
+ 							 bool id_is_cpu)
+@@ -2135,8 +2151,14 @@ struct rapl_package *rapl_find_package_domain_cpuslocked(int id, struct rapl_if_
+ 	struct rapl_package *rp;
+ 	int uid;
+ 
+-	if (id_is_cpu)
+-		uid = topology_logical_die_id(id);
++	if (id_is_cpu) {
++		uid = rapl_msrs_are_pkg_scope() ?
++		      topology_physical_package_id(id) : topology_logical_die_id(id);
++		if (uid < 0) {
++			pr_err("topology_logical_(package/die)_id() returned a negative value");
++			return NULL;
++		}
++	}
+ 	else
+ 		uid = id;
+ 
+@@ -2168,9 +2190,14 @@ struct rapl_package *rapl_add_package_cpuslocked(int id, struct rapl_if_priv *pr
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	if (id_is_cpu) {
+-		rp->id = topology_logical_die_id(id);
++		rp->id = rapl_msrs_are_pkg_scope() ?
++			 topology_physical_package_id(id) : topology_logical_die_id(id);
++		if ((int)(rp->id) < 0) {
++			pr_err("topology_logical_(package/die)_id() returned a negative value");
++			return ERR_PTR(-EINVAL);
++		}
+ 		rp->lead_cpu = id;
+-		if (topology_max_dies_per_package() > 1)
++		if (!rapl_msrs_are_pkg_scope() && topology_max_dies_per_package() > 1)
+ 			snprintf(rp->name, PACKAGE_DOMAIN_NAME_LENGTH, "package-%d-die-%d",
+ 				 topology_physical_package_id(id), topology_die_id(id));
+ 		else
+diff --git a/drivers/scsi/lpfc/lpfc_bsg.c b/drivers/scsi/lpfc/lpfc_bsg.c
+index 4156419c52c78a..4756a3f825310c 100644
+--- a/drivers/scsi/lpfc/lpfc_bsg.c
++++ b/drivers/scsi/lpfc/lpfc_bsg.c
+@@ -5410,7 +5410,7 @@ lpfc_get_cgnbuf_info(struct bsg_job *job)
+ 	struct get_cgnbuf_info_req *cgnbuf_req;
+ 	struct lpfc_cgn_info *cp;
+ 	uint8_t *cgn_buff;
+-	int size, cinfosz;
++	size_t size, cinfosz;
+ 	int  rc = 0;
+ 
+ 	if (job->request_len < sizeof(struct fc_bsg_request) +
+diff --git a/drivers/spi/spi-bcm63xx.c b/drivers/spi/spi-bcm63xx.c
+index aac41bd05f98f8..2fb8d4e55c7773 100644
+--- a/drivers/spi/spi-bcm63xx.c
++++ b/drivers/spi/spi-bcm63xx.c
+@@ -472,6 +472,7 @@ static const struct of_device_id bcm63xx_spi_of_match[] = {
+ 	{ .compatible = "brcm,bcm6358-spi", .data = &bcm6358_spi_reg_offsets },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, bcm63xx_spi_of_match);
+ 
+ static int bcm63xx_spi_probe(struct platform_device *pdev)
+ {
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index 5304728c68c20d..face93a9cf2035 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -702,6 +702,7 @@ static const struct class spidev_class = {
+ static const struct spi_device_id spidev_spi_ids[] = {
+ 	{ .name = "bh2228fv" },
+ 	{ .name = "dh2228fv" },
++	{ .name = "jg10309-01" },
+ 	{ .name = "ltc2488" },
+ 	{ .name = "sx1301" },
+ 	{ .name = "bk4" },
+@@ -731,6 +732,7 @@ static int spidev_of_check(struct device *dev)
+ static const struct of_device_id spidev_dt_ids[] = {
+ 	{ .compatible = "cisco,spi-petra", .data = &spidev_of_check },
+ 	{ .compatible = "dh,dhcom-board", .data = &spidev_of_check },
++	{ .compatible = "elgin,jg10309-01", .data = &spidev_of_check },
+ 	{ .compatible = "lineartechnology,ltc2488", .data = &spidev_of_check },
+ 	{ .compatible = "lwn,bk4", .data = &spidev_of_check },
+ 	{ .compatible = "menlo,m53cpld", .data = &spidev_of_check },
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 311007b1d90465..c2e666e82857c1 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -754,7 +754,7 @@ static struct urb *usbtmc_create_urb(void)
+ 	if (!urb)
+ 		return NULL;
+ 
+-	dmabuf = kmalloc(bufsize, GFP_KERNEL);
++	dmabuf = kzalloc(bufsize, GFP_KERNEL);
+ 	if (!dmabuf) {
+ 		usb_free_urb(urb);
+ 		return NULL;
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index d93f5d58455782..8e327fcb222f73 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -118,6 +118,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) },
+ 	{ USB_DEVICE(AT_VENDOR_ID, AT_VTKIT3_PRODUCT_ID) },
+ 	{ USB_DEVICE(IBM_VENDOR_ID, IBM_PRODUCT_ID) },
++	{ USB_DEVICE(MACROSILICON_VENDOR_ID, MACROSILICON_MS3020_PRODUCT_ID) },
+ 	{ }					/* Terminating entry */
+ };
+ 
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index 732f9b13ad5d59..d60eda7f6edaf8 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -171,3 +171,7 @@
+ /* Allied Telesis VT-Kit3 */
+ #define AT_VENDOR_ID		0x0caa
+ #define AT_VTKIT3_PRODUCT_ID	0x3001
++
++/* Macrosilicon MS3020 */
++#define MACROSILICON_VENDOR_ID		0x345f
++#define MACROSILICON_MS3020_PRODUCT_ID	0x3020
+diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c
+index 3b81213ed7b85b..35c0cc2a51af82 100644
+--- a/fs/ocfs2/xattr.c
++++ b/fs/ocfs2/xattr.c
+@@ -1062,13 +1062,13 @@ ssize_t ocfs2_listxattr(struct dentry *dentry,
+ 	return i_ret + b_ret;
+ }
+ 
+-static int ocfs2_xattr_find_entry(int name_index,
++static int ocfs2_xattr_find_entry(struct inode *inode, int name_index,
+ 				  const char *name,
+ 				  struct ocfs2_xattr_search *xs)
+ {
+ 	struct ocfs2_xattr_entry *entry;
+ 	size_t name_len;
+-	int i, cmp = 1;
++	int i, name_offset, cmp = 1;
+ 
+ 	if (name == NULL)
+ 		return -EINVAL;
+@@ -1076,13 +1076,22 @@ static int ocfs2_xattr_find_entry(int name_index,
+ 	name_len = strlen(name);
+ 	entry = xs->here;
+ 	for (i = 0; i < le16_to_cpu(xs->header->xh_count); i++) {
++		if ((void *)entry >= xs->end) {
++			ocfs2_error(inode->i_sb, "corrupted xattr entries");
++			return -EFSCORRUPTED;
++		}
+ 		cmp = name_index - ocfs2_xattr_get_type(entry);
+ 		if (!cmp)
+ 			cmp = name_len - entry->xe_name_len;
+-		if (!cmp)
+-			cmp = memcmp(name, (xs->base +
+-				     le16_to_cpu(entry->xe_name_offset)),
+-				     name_len);
++		if (!cmp) {
++			name_offset = le16_to_cpu(entry->xe_name_offset);
++			if ((xs->base + name_offset + name_len) > xs->end) {
++				ocfs2_error(inode->i_sb,
++					    "corrupted xattr entries");
++				return -EFSCORRUPTED;
++			}
++			cmp = memcmp(name, (xs->base + name_offset), name_len);
++		}
+ 		if (cmp == 0)
+ 			break;
+ 		entry += 1;
+@@ -1166,7 +1175,7 @@ static int ocfs2_xattr_ibody_get(struct inode *inode,
+ 	xs->base = (void *)xs->header;
+ 	xs->here = xs->header->xh_entries;
+ 
+-	ret = ocfs2_xattr_find_entry(name_index, name, xs);
++	ret = ocfs2_xattr_find_entry(inode, name_index, name, xs);
+ 	if (ret)
+ 		return ret;
+ 	size = le64_to_cpu(xs->here->xe_value_size);
+@@ -2698,7 +2707,7 @@ static int ocfs2_xattr_ibody_find(struct inode *inode,
+ 
+ 	/* Find the named attribute. */
+ 	if (oi->ip_dyn_features & OCFS2_INLINE_XATTR_FL) {
+-		ret = ocfs2_xattr_find_entry(name_index, name, xs);
++		ret = ocfs2_xattr_find_entry(inode, name_index, name, xs);
+ 		if (ret && ret != -ENODATA)
+ 			return ret;
+ 		xs->not_found = ret;
+@@ -2833,7 +2842,7 @@ static int ocfs2_xattr_block_find(struct inode *inode,
+ 		xs->end = (void *)(blk_bh->b_data) + blk_bh->b_size;
+ 		xs->here = xs->header->xh_entries;
+ 
+-		ret = ocfs2_xattr_find_entry(name_index, name, xs);
++		ret = ocfs2_xattr_find_entry(inode, name_index, name, xs);
+ 	} else
+ 		ret = ocfs2_xattr_index_block_find(inode, blk_bh,
+ 						   name_index,
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index d2307162a2de15..e325e06357ffb7 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -656,6 +656,19 @@ allocate_buffers(struct TCP_Server_Info *server)
+ static bool
+ server_unresponsive(struct TCP_Server_Info *server)
+ {
++	/*
++	 * If we're in the process of mounting a share or reconnecting a session
++	 * and the server abruptly shut down (e.g. socket wasn't closed, packet
++	 * had been ACK'ed but no SMB response), don't wait longer than 20s to
++	 * negotiate protocol.
++	 */
++	spin_lock(&server->srv_lock);
++	if (server->tcpStatus == CifsInNegotiate &&
++	    time_after(jiffies, server->lstrp + 20 * HZ)) {
++		spin_unlock(&server->srv_lock);
++		cifs_reconnect(server, false);
++		return true;
++	}
+ 	/*
+ 	 * We need to wait 3 echo intervals to make sure we handle such
+ 	 * situations right:
+@@ -667,7 +680,6 @@ server_unresponsive(struct TCP_Server_Info *server)
+ 	 * 65s kernel_recvmsg times out, and we see that we haven't gotten
+ 	 *     a response in >60s.
+ 	 */
+-	spin_lock(&server->srv_lock);
+ 	if ((server->tcpStatus == CifsGood ||
+ 	    server->tcpStatus == CifsNeedNegotiate) &&
+ 	    (!server->ops->can_echo || server->ops->can_echo(server)) &&
+diff --git a/include/drm/drm_accel.h b/include/drm/drm_accel.h
+index f4d3784b1dce05..8867ce0be94cdd 100644
+--- a/include/drm/drm_accel.h
++++ b/include/drm/drm_accel.h
+@@ -51,11 +51,10 @@
+ 
+ #if IS_ENABLED(CONFIG_DRM_ACCEL)
+ 
++extern struct xarray accel_minors_xa;
++
+ void accel_core_exit(void);
+ int accel_core_init(void);
+-void accel_minor_remove(int index);
+-int accel_minor_alloc(void);
+-void accel_minor_replace(struct drm_minor *minor, int index);
+ void accel_set_device_instance_params(struct device *kdev, int index);
+ int accel_open(struct inode *inode, struct file *filp);
+ void accel_debugfs_init(struct drm_device *dev);
+@@ -73,19 +72,6 @@ static inline int __init accel_core_init(void)
+ 	return 0;
+ }
+ 
+-static inline void accel_minor_remove(int index)
+-{
+-}
+-
+-static inline int accel_minor_alloc(void)
+-{
+-	return -EOPNOTSUPP;
+-}
+-
+-static inline void accel_minor_replace(struct drm_minor *minor, int index)
+-{
+-}
+-
+ static inline void accel_set_device_instance_params(struct device *kdev, int index)
+ {
+ }
+diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
+index ab230d3af138db..8c0030c7730816 100644
+--- a/include/drm/drm_file.h
++++ b/include/drm/drm_file.h
+@@ -45,6 +45,8 @@ struct drm_printer;
+ struct device;
+ struct file;
+ 
++extern struct xarray drm_minors_xa;
++
+ /*
+  * FIXME: Not sure we want to have drm_minor here in the end, but to avoid
+  * header include loops we need it here for now.
+@@ -434,6 +436,9 @@ static inline bool drm_is_accel_client(const struct drm_file *file_priv)
+ 
+ void drm_file_update_pid(struct drm_file *);
+ 
++struct drm_minor *drm_minor_acquire(struct xarray *minors_xa, unsigned int minor_id);
++void drm_minor_release(struct drm_minor *minor);
++
+ int drm_open(struct inode *inode, struct file *filp);
+ int drm_open_helper(struct file *filp, struct drm_minor *minor);
+ ssize_t drm_read(struct file *filp, char __user *buffer,
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index edba4a31844fb6..bca7b341dd772d 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -5348,8 +5348,10 @@ ieee80211_beacon_get_ap(struct ieee80211_hw *hw,
+ 	if (beacon->tail)
+ 		skb_put_data(skb, beacon->tail, beacon->tail_len);
+ 
+-	if (ieee80211_beacon_protect(skb, local, sdata, link) < 0)
++	if (ieee80211_beacon_protect(skb, local, sdata, link) < 0) {
++		dev_kfree_skb(skb);
+ 		return NULL;
++	}
+ 
+ 	ieee80211_beacon_get_finish(hw, vif, link, offs, beacon, skb,
+ 				    chanctx_conf, csa_off_base);
+diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c
+index 765ffd6e06bc41..0a8883a93e8369 100644
+--- a/net/netfilter/nft_socket.c
++++ b/net/netfilter/nft_socket.c
+@@ -9,7 +9,8 @@
+ 
+ struct nft_socket {
+ 	enum nft_socket_keys		key:8;
+-	u8				level;
++	u8				level;		/* cgroupv2 level to extract */
++	u8				level_user;	/* cgroupv2 level provided by userspace */
+ 	u8				len;
+ 	union {
+ 		u8			dreg;
+@@ -53,6 +54,28 @@ nft_sock_get_eval_cgroupv2(u32 *dest, struct sock *sk, const struct nft_pktinfo
+ 	memcpy(dest, &cgid, sizeof(u64));
+ 	return true;
+ }
++
++/* process context only, uses current->nsproxy. */
++static noinline int nft_socket_cgroup_subtree_level(void)
++{
++	struct cgroup *cgrp = cgroup_get_from_path("/");
++	int level;
++
++	if (IS_ERR(cgrp))
++		return PTR_ERR(cgrp);
++
++	level = cgrp->level;
++
++	cgroup_put(cgrp);
++
++	if (WARN_ON_ONCE(level > 255))
++		return -ERANGE;
++
++	if (WARN_ON_ONCE(level < 0))
++		return -EINVAL;
++
++	return level;
++}
+ #endif
+ 
+ static struct sock *nft_socket_do_lookup(const struct nft_pktinfo *pkt)
+@@ -174,9 +197,10 @@ static int nft_socket_init(const struct nft_ctx *ctx,
+ 	case NFT_SOCKET_MARK:
+ 		len = sizeof(u32);
+ 		break;
+-#ifdef CONFIG_CGROUPS
++#ifdef CONFIG_SOCK_CGROUP_DATA
+ 	case NFT_SOCKET_CGROUPV2: {
+ 		unsigned int level;
++		int err;
+ 
+ 		if (!tb[NFTA_SOCKET_LEVEL])
+ 			return -EINVAL;
+@@ -185,6 +209,17 @@ static int nft_socket_init(const struct nft_ctx *ctx,
+ 		if (level > 255)
+ 			return -EOPNOTSUPP;
+ 
++		err = nft_socket_cgroup_subtree_level();
++		if (err < 0)
++			return err;
++
++		priv->level_user = level;
++
++		level += err;
++		/* Implies a giant cgroup tree */
++		if (WARN_ON_ONCE(level > 255))
++			return -EOPNOTSUPP;
++
+ 		priv->level = level;
+ 		len = sizeof(u64);
+ 		break;
+@@ -209,7 +244,7 @@ static int nft_socket_dump(struct sk_buff *skb,
+ 	if (nft_dump_register(skb, NFTA_SOCKET_DREG, priv->dreg))
+ 		return -1;
+ 	if (priv->key == NFT_SOCKET_CGROUPV2 &&
+-	    nla_put_be32(skb, NFTA_SOCKET_LEVEL, htonl(priv->level)))
++	    nla_put_be32(skb, NFTA_SOCKET_LEVEL, htonl(priv->level_user)))
+ 		return -1;
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 78042ac2b71f21..643e0496b09362 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -4639,6 +4639,7 @@ HDA_CODEC_ENTRY(0x8086281d, "Meteor Lake HDMI",	patch_i915_adlp_hdmi),
+ HDA_CODEC_ENTRY(0x8086281e, "Battlemage HDMI",	patch_i915_adlp_hdmi),
+ HDA_CODEC_ENTRY(0x8086281f, "Raptor Lake P HDMI",	patch_i915_adlp_hdmi),
+ HDA_CODEC_ENTRY(0x80862820, "Lunar Lake HDMI",	patch_i915_adlp_hdmi),
++HDA_CODEC_ENTRY(0x80862822, "Panther Lake HDMI",	patch_i915_adlp_hdmi),
+ HDA_CODEC_ENTRY(0x80862880, "CedarTrail HDMI",	patch_generic_hdmi),
+ HDA_CODEC_ENTRY(0x80862882, "Valleyview2 HDMI",	patch_i915_byt_hdmi),
+ HDA_CODEC_ENTRY(0x80862883, "Braswell HDMI",	patch_i915_byt_hdmi),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 0cde024d1d33cd..2b674691ce4b64 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4925,6 +4925,30 @@ static void alc269_fixup_hp_line1_mic1_led(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc_hp_mute_disable(struct hda_codec *codec, unsigned int delay)
++{
++	if (delay <= 0)
++		delay = 75;
++	snd_hda_codec_write(codec, 0x21, 0,
++		    AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
++	msleep(delay);
++	snd_hda_codec_write(codec, 0x21, 0,
++		    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++	msleep(delay);
++}
++
++static void alc_hp_enable_unmute(struct hda_codec *codec, unsigned int delay)
++{
++	if (delay <= 0)
++		delay = 75;
++	snd_hda_codec_write(codec, 0x21, 0,
++		    AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++	msleep(delay);
++	snd_hda_codec_write(codec, 0x21, 0,
++		    AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
++	msleep(delay);
++}
++
+ static const struct coef_fw alc225_pre_hsmode[] = {
+ 	UPDATE_COEF(0x4a, 1<<8, 0),
+ 	UPDATE_COEFEX(0x57, 0x05, 1<<14, 0),
+@@ -5026,6 +5050,7 @@ static void alc_headset_mode_unplugged(struct hda_codec *codec)
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 	case 0x19e58326:
++		alc_hp_mute_disable(codec, 75);
+ 		alc_process_coef_fw(codec, coef0256);
+ 		break;
+ 	case 0x10ec0234:
+@@ -5060,6 +5085,7 @@ static void alc_headset_mode_unplugged(struct hda_codec *codec)
+ 	case 0x10ec0295:
+ 	case 0x10ec0289:
+ 	case 0x10ec0299:
++		alc_hp_mute_disable(codec, 75);
+ 		alc_process_coef_fw(codec, alc225_pre_hsmode);
+ 		alc_process_coef_fw(codec, coef0225);
+ 		break;
+@@ -5285,6 +5311,7 @@ static void alc_headset_mode_default(struct hda_codec *codec)
+ 	case 0x10ec0299:
+ 		alc_process_coef_fw(codec, alc225_pre_hsmode);
+ 		alc_process_coef_fw(codec, coef0225);
++		alc_hp_enable_unmute(codec, 75);
+ 		break;
+ 	case 0x10ec0255:
+ 		alc_process_coef_fw(codec, coef0255);
+@@ -5297,6 +5324,7 @@ static void alc_headset_mode_default(struct hda_codec *codec)
+ 		alc_write_coef_idx(codec, 0x45, 0xc089);
+ 		msleep(50);
+ 		alc_process_coef_fw(codec, coef0256);
++		alc_hp_enable_unmute(codec, 75);
+ 		break;
+ 	case 0x10ec0234:
+ 	case 0x10ec0274:
+@@ -5394,6 +5422,7 @@ static void alc_headset_mode_ctia(struct hda_codec *codec)
+ 	case 0x10ec0256:
+ 	case 0x19e58326:
+ 		alc_process_coef_fw(codec, coef0256);
++		alc_hp_enable_unmute(codec, 75);
+ 		break;
+ 	case 0x10ec0234:
+ 	case 0x10ec0274:
+@@ -5442,6 +5471,7 @@ static void alc_headset_mode_ctia(struct hda_codec *codec)
+ 			alc_process_coef_fw(codec, coef0225_2);
+ 		else
+ 			alc_process_coef_fw(codec, coef0225_1);
++		alc_hp_enable_unmute(codec, 75);
+ 		break;
+ 	case 0x10ec0867:
+ 		alc_update_coefex_idx(codec, 0x57, 0x5, 1<<14, 0);
+@@ -5509,6 +5539,7 @@ static void alc_headset_mode_omtp(struct hda_codec *codec)
+ 	case 0x10ec0256:
+ 	case 0x19e58326:
+ 		alc_process_coef_fw(codec, coef0256);
++		alc_hp_enable_unmute(codec, 75);
+ 		break;
+ 	case 0x10ec0234:
+ 	case 0x10ec0274:
+@@ -5546,6 +5577,7 @@ static void alc_headset_mode_omtp(struct hda_codec *codec)
+ 	case 0x10ec0289:
+ 	case 0x10ec0299:
+ 		alc_process_coef_fw(codec, coef0225);
++		alc_hp_enable_unmute(codec, 75);
+ 		break;
+ 	}
+ 	codec_dbg(codec, "Headset jack set to Nokia-style headset mode.\n");
+@@ -5614,25 +5646,21 @@ static void alc_determine_headset_type(struct hda_codec *codec)
+ 		alc_write_coef_idx(codec, 0x06, 0x6104);
+ 		alc_write_coefex_idx(codec, 0x57, 0x3, 0x09a3);
+ 
+-		snd_hda_codec_write(codec, 0x21, 0,
+-			    AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-		msleep(80);
+-		snd_hda_codec_write(codec, 0x21, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+-
+ 		alc_process_coef_fw(codec, coef0255);
+ 		msleep(300);
+ 		val = alc_read_coef_idx(codec, 0x46);
+ 		is_ctia = (val & 0x0070) == 0x0070;
+-
++		if (!is_ctia) {
++			alc_write_coef_idx(codec, 0x45, 0xe089);
++			msleep(100);
++			val = alc_read_coef_idx(codec, 0x46);
++			if ((val & 0x0070) == 0x0070)
++				is_ctia = false;
++			else
++				is_ctia = true;
++		}
+ 		alc_write_coefex_idx(codec, 0x57, 0x3, 0x0da3);
+ 		alc_update_coefex_idx(codec, 0x57, 0x5, 1<<14, 0);
+-
+-		snd_hda_codec_write(codec, 0x21, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
+-		msleep(80);
+-		snd_hda_codec_write(codec, 0x21, 0,
+-			    AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+ 		break;
+ 	case 0x10ec0234:
+ 	case 0x10ec0274:
+@@ -5709,12 +5737,6 @@ static void alc_determine_headset_type(struct hda_codec *codec)
+ 	case 0x10ec0295:
+ 	case 0x10ec0289:
+ 	case 0x10ec0299:
+-		snd_hda_codec_write(codec, 0x21, 0,
+-			    AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-		msleep(80);
+-		snd_hda_codec_write(codec, 0x21, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+-
+ 		alc_process_coef_fw(codec, alc225_pre_hsmode);
+ 		alc_update_coef_idx(codec, 0x67, 0xf000, 0x1000);
+ 		val = alc_read_coef_idx(codec, 0x45);
+@@ -5731,15 +5753,19 @@ static void alc_determine_headset_type(struct hda_codec *codec)
+ 			val = alc_read_coef_idx(codec, 0x46);
+ 			is_ctia = (val & 0x00f0) == 0x00f0;
+ 		}
++		if (!is_ctia) {
++			alc_update_coef_idx(codec, 0x45, 0x3f<<10, 0x38<<10);
++			alc_update_coef_idx(codec, 0x49, 3<<8, 1<<8);
++			msleep(100);
++			val = alc_read_coef_idx(codec, 0x46);
++			if ((val & 0x00f0) == 0x00f0)
++				is_ctia = false;
++			else
++				is_ctia = true;
++		}
+ 		alc_update_coef_idx(codec, 0x4a, 7<<6, 7<<6);
+ 		alc_update_coef_idx(codec, 0x4a, 3<<4, 3<<4);
+ 		alc_update_coef_idx(codec, 0x67, 0xf000, 0x3000);
+-
+-		snd_hda_codec_write(codec, 0x21, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
+-		msleep(80);
+-		snd_hda_codec_write(codec, 0x21, 0,
+-			    AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+ 		break;
+ 	case 0x10ec0867:
+ 		is_ctia = true;
+diff --git a/sound/soc/amd/acp/acp-sof-mach.c b/sound/soc/amd/acp/acp-sof-mach.c
+index fc59ea34e687ad..b3a702dcd9911a 100644
+--- a/sound/soc/amd/acp/acp-sof-mach.c
++++ b/sound/soc/amd/acp/acp-sof-mach.c
+@@ -158,6 +158,8 @@ static const struct platform_device_id board_ids[] = {
+ 	},
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(platform, board_ids);
++
+ static struct platform_driver acp_asoc_audio = {
+ 	.driver = {
+ 		.name = "sof_mach",
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index f6c1dbd0ebcf57..248e3bcbf386b0 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -353,6 +353,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 15 C7VF"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 17 D7VEK"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/au1x/db1200.c b/sound/soc/au1x/db1200.c
+index 83a75a38705b4e..81abe2e184024e 100644
+--- a/sound/soc/au1x/db1200.c
++++ b/sound/soc/au1x/db1200.c
+@@ -44,6 +44,7 @@ static const struct platform_device_id db1200_pids[] = {
+ 	},
+ 	{},
+ };
++MODULE_DEVICE_TABLE(platform, db1200_pids);
+ 
+ /*-------------------------  AC97 PART  ---------------------------*/
+ 
+diff --git a/sound/soc/codecs/chv3-codec.c b/sound/soc/codecs/chv3-codec.c
+index ab99effa68748d..40020500b1fe89 100644
+--- a/sound/soc/codecs/chv3-codec.c
++++ b/sound/soc/codecs/chv3-codec.c
+@@ -26,6 +26,7 @@ static const struct of_device_id chv3_codec_of_match[] = {
+ 	{ .compatible = "google,chv3-codec", },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, chv3_codec_of_match);
+ 
+ static struct platform_driver chv3_codec_platform_driver = {
+ 	.driver = {
+diff --git a/sound/soc/codecs/tda7419.c b/sound/soc/codecs/tda7419.c
+index 386b99c8023bdc..7d6fcba9986ea4 100644
+--- a/sound/soc/codecs/tda7419.c
++++ b/sound/soc/codecs/tda7419.c
+@@ -623,6 +623,7 @@ static const struct of_device_id tda7419_of_match[] = {
+ 	{ .compatible = "st,tda7419" },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, tda7419_of_match);
+ 
+ static struct i2c_driver tda7419_driver = {
+ 	.driver = {
+diff --git a/sound/soc/google/chv3-i2s.c b/sound/soc/google/chv3-i2s.c
+index 08e558f24af865..0ff24653d49f47 100644
+--- a/sound/soc/google/chv3-i2s.c
++++ b/sound/soc/google/chv3-i2s.c
+@@ -322,6 +322,7 @@ static const struct of_device_id chv3_i2s_of_match[] = {
+ 	{ .compatible = "google,chv3-i2s" },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, chv3_i2s_of_match);
+ 
+ static struct platform_driver chv3_i2s_driver = {
+ 	.probe = chv3_i2s_probe,
+diff --git a/sound/soc/intel/common/soc-acpi-intel-cht-match.c b/sound/soc/intel/common/soc-acpi-intel-cht-match.c
+index 5e2ec60e2954b2..e4c3492a0c2824 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-cht-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-cht-match.c
+@@ -84,7 +84,6 @@ static const struct dmi_system_id lenovo_yoga_tab3_x90[] = {
+ 		/* Lenovo Yoga Tab 3 Pro YT3-X90, codec missing from DSDT */
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
+ 			DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"),
+ 		},
+ 	},
+diff --git a/sound/soc/intel/keembay/kmb_platform.c b/sound/soc/intel/keembay/kmb_platform.c
+index 37ea2e1d2e9220..aa5de167e7909d 100644
+--- a/sound/soc/intel/keembay/kmb_platform.c
++++ b/sound/soc/intel/keembay/kmb_platform.c
+@@ -814,6 +814,7 @@ static const struct of_device_id kmb_plat_of_match[] = {
+ 	{ .compatible = "intel,keembay-tdm", .data = &intel_kmb_tdm_dai},
+ 	{}
+ };
++MODULE_DEVICE_TABLE(of, kmb_plat_of_match);
+ 
+ static int kmb_plat_dai_probe(struct platform_device *pdev)
+ {
+diff --git a/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c b/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
+index ccb6c1f3adc7d9..73e5c63aeec878 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
++++ b/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
+@@ -2748,6 +2748,7 @@ static bool mt8188_is_volatile_reg(struct device *dev, unsigned int reg)
+ 	case AFE_ASRC12_NEW_CON9:
+ 	case AFE_LRCK_CNT:
+ 	case AFE_DAC_MON0:
++	case AFE_DAC_CON0:
+ 	case AFE_DL2_CUR:
+ 	case AFE_DL3_CUR:
+ 	case AFE_DL6_CUR:
+diff --git a/sound/soc/mediatek/mt8188/mt8188-mt6359.c b/sound/soc/mediatek/mt8188/mt8188-mt6359.c
+index eba6f4c445ffbf..08ae962afeb929 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-mt6359.c
++++ b/sound/soc/mediatek/mt8188/mt8188-mt6359.c
+@@ -734,6 +734,7 @@ static int mt8188_headset_codec_init(struct snd_soc_pcm_runtime *rtd)
+ 	struct mtk_soc_card_data *soc_card_data = snd_soc_card_get_drvdata(rtd->card);
+ 	struct snd_soc_jack *jack = &soc_card_data->card_data->jacks[MT8188_JACK_HEADSET];
+ 	struct snd_soc_component *component = snd_soc_rtd_to_codec(rtd, 0)->component;
++	struct mtk_platform_card_data *card_data = soc_card_data->card_data;
+ 	int ret;
+ 
+ 	ret = snd_soc_dapm_new_controls(&card->dapm, mt8188_nau8825_widgets,
+@@ -762,10 +763,18 @@ static int mt8188_headset_codec_init(struct snd_soc_pcm_runtime *rtd)
+ 		return ret;
+ 	}
+ 
+-	snd_jack_set_key(jack->jack, SND_JACK_BTN_0, KEY_PLAYPAUSE);
+-	snd_jack_set_key(jack->jack, SND_JACK_BTN_1, KEY_VOICECOMMAND);
+-	snd_jack_set_key(jack->jack, SND_JACK_BTN_2, KEY_VOLUMEUP);
+-	snd_jack_set_key(jack->jack, SND_JACK_BTN_3, KEY_VOLUMEDOWN);
++	if (card_data->flags & ES8326_HS_PRESENT) {
++		snd_jack_set_key(jack->jack, SND_JACK_BTN_0, KEY_PLAYPAUSE);
++		snd_jack_set_key(jack->jack, SND_JACK_BTN_1, KEY_VOLUMEUP);
++		snd_jack_set_key(jack->jack, SND_JACK_BTN_2, KEY_VOLUMEDOWN);
++		snd_jack_set_key(jack->jack, SND_JACK_BTN_3, KEY_VOICECOMMAND);			
++	} else {
++		snd_jack_set_key(jack->jack, SND_JACK_BTN_0, KEY_PLAYPAUSE);
++		snd_jack_set_key(jack->jack, SND_JACK_BTN_1, KEY_VOICECOMMAND);
++		snd_jack_set_key(jack->jack, SND_JACK_BTN_2, KEY_VOLUMEUP);
++		snd_jack_set_key(jack->jack, SND_JACK_BTN_3, KEY_VOLUMEDOWN);	
++	}
++	
+ 	ret = snd_soc_component_set_jack(component, jack, NULL);
+ 
+ 	if (ret) {
+diff --git a/sound/soc/sof/mediatek/mt8195/mt8195.c b/sound/soc/sof/mediatek/mt8195/mt8195.c
+index 8d3fc167cd8103..b9d4508bbb85dd 100644
+--- a/sound/soc/sof/mediatek/mt8195/mt8195.c
++++ b/sound/soc/sof/mediatek/mt8195/mt8195.c
+@@ -574,6 +574,9 @@ static struct snd_sof_of_mach sof_mt8195_machs[] = {
+ 	{
+ 		.compatible = "google,tomato",
+ 		.sof_tplg_filename = "sof-mt8195-mt6359-rt1019-rt5682.tplg"
++	}, {
++		.compatible = "google,dojo",
++		.sof_tplg_filename = "sof-mt8195-mt6359-max98390-rt5682.tplg"
+ 	}, {
+ 		.compatible = "mediatek,mt8195",
+ 		.sof_tplg_filename = "sof-mt8195.tplg"
+diff --git a/tools/hv/Makefile b/tools/hv/Makefile
+index 2e60e2c212cd9e..34ffcec264ab0f 100644
+--- a/tools/hv/Makefile
++++ b/tools/hv/Makefile
+@@ -52,7 +52,7 @@ $(OUTPUT)hv_fcopy_uio_daemon: $(HV_FCOPY_UIO_DAEMON_IN)
+ 
+ clean:
+ 	rm -f $(ALL_PROGRAMS)
+-	find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete
++	find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete -o -name '\.*.cmd' -delete
+ 
+ install: $(ALL_PROGRAMS)
+ 	install -d -m 755 $(DESTDIR)$(sbindir); \



* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-09-30 16:11 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-09-30 16:11 UTC (permalink / raw
  To: gentoo-commits

commit:     13d4521ec8233c1a6e250857c77d1c62a7fcb1e0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep 30 16:11:22 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep 30 16:11:22 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=13d4521e

Remove older dtrace patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 2995_dtrace-6.10_p3.patch | 2352 ---------------------------------------------
 1 file changed, 2352 deletions(-)

diff --git a/2995_dtrace-6.10_p3.patch b/2995_dtrace-6.10_p3.patch
deleted file mode 100644
index 775a7868..00000000
--- a/2995_dtrace-6.10_p3.patch
+++ /dev/null
@@ -1,2352 +0,0 @@
-diff --git a/Documentation/dontdiff b/Documentation/dontdiff
-index 3c399f132e2db..75b9655e57914 100644
---- a/Documentation/dontdiff
-+++ b/Documentation/dontdiff
-@@ -179,7 +179,7 @@ mkutf8data
- modpost
- modules-only.symvers
- modules.builtin
--modules.builtin.modinfo
-+modules.builtin.*
- modules.nsdeps
- modules.order
- modversions.h*
-diff --git a/Documentation/kbuild/kbuild.rst b/Documentation/kbuild/kbuild.rst
-index 9c8d1d046ea56..79e104ffee715 100644
---- a/Documentation/kbuild/kbuild.rst
-+++ b/Documentation/kbuild/kbuild.rst
-@@ -17,6 +17,11 @@ modules.builtin
- This file lists all modules that are built into the kernel. This is used
- by modprobe to not fail when trying to load something builtin.
- 
-+modules.builtin.objs
-+-----------------------
-+This file contains object mapping of modules that are built into the kernel
-+to their corresponding object files used to build the module.
-+
- modules.builtin.modinfo
- -----------------------
- This file contains modinfo from all modules that are built into the kernel.
-diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
-index 5685d7bfe4d0f..8db62fe4dadff 100644
---- a/Documentation/process/changes.rst
-+++ b/Documentation/process/changes.rst
-@@ -63,9 +63,13 @@ cpio                   any              cpio --version
- GNU tar                1.28             tar --version
- gtags (optional)       6.6.5            gtags --version
- mkimage (optional)     2017.01          mkimage --version
-+GNU AWK (optional)     5.1.0            gawk --version
-+GNU C\ [#f2]_          12.0             gcc --version
-+binutils\ [#f2]_       2.36             ld -v
- ====================== ===============  ========================================
- 
- .. [#f1] Sphinx is needed only to build the Kernel documentation
-+.. [#f2] These are needed at build-time when CONFIG_CTF is enabled
- 
- Kernel compilation
- ******************
-@@ -198,6 +202,12 @@ platforms. The tool is available via the ``u-boot-tools`` package or can be
- built from the U-Boot source code. See the instructions at
- https://docs.u-boot.org/en/latest/build/tools.html#building-tools-for-linux
- 
-+GNU AWK
-+-------
-+
-+GNU AWK is needed if you want kernel builds to generate address range data for
-+builtin modules (CONFIG_BUILTIN_MODULE_RANGES).
-+
- System utilities
- ****************
- 
-diff --git a/Makefile b/Makefile
-index 2e5ac6ab3d476..635896f269f1f 100644
---- a/Makefile
-+++ b/Makefile
-@@ -1024,6 +1024,7 @@ include-$(CONFIG_UBSAN)		+= scripts/Makefile.ubsan
- include-$(CONFIG_KCOV)		+= scripts/Makefile.kcov
- include-$(CONFIG_RANDSTRUCT)	+= scripts/Makefile.randstruct
- include-$(CONFIG_GCC_PLUGINS)	+= scripts/Makefile.gcc-plugins
-+include-$(CONFIG_CTF)		+= scripts/Makefile.ctfa-toplevel
- 
- include $(addprefix $(srctree)/, $(include-y))
- 
-@@ -1151,7 +1152,11 @@ PHONY += vmlinux_o
- vmlinux_o: vmlinux.a $(KBUILD_VMLINUX_LIBS)
- 	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.vmlinux_o
- 
--vmlinux.o modules.builtin.modinfo modules.builtin: vmlinux_o
-+MODULES_BUILTIN := modules.builtin.modinfo
-+MODULES_BUILTIN += modules.builtin
-+MODULES_BUILTIN += modules.builtin.objs
-+
-+vmlinux.o $(MODULES_BUILTIN): vmlinux_o
- 	@:
- 
- PHONY += vmlinux
-@@ -1490,9 +1495,10 @@ endif # CONFIG_MODULES
- 
- # Directories & files removed with 'make clean'
- CLEAN_FILES += vmlinux.symvers modules-only.symvers \
--	       modules.builtin modules.builtin.modinfo modules.nsdeps \
-+	       modules.builtin modules.builtin.* modules.nsdeps \
- 	       compile_commands.json rust/test \
--	       rust-project.json .vmlinux.objs .vmlinux.export.c
-+	       rust-project.json .vmlinux.objs .vmlinux.export.c \
-+	       vmlinux.ctfa
- 
- # Directories & files removed with 'make mrproper'
- MRPROPER_FILES += include/config include/generated          \
-@@ -1586,6 +1592,8 @@ help:
- 	@echo  '                    (requires a recent binutils and recent build (System.map))'
- 	@echo  '  dir/file.ko     - Build module including final link'
- 	@echo  '  modules_prepare - Set up for building external modules'
-+	@echo  '  ctf             - Generate CTF type information, installed by make ctf_install'
-+	@echo  '  ctf_install     - Install CTF to INSTALL_MOD_PATH (default: /)'
- 	@echo  '  tags/TAGS	  - Generate tags file for editors'
- 	@echo  '  cscope	  - Generate cscope index'
- 	@echo  '  gtags           - Generate GNU GLOBAL index'
-@@ -1942,7 +1950,7 @@ clean: $(clean-dirs)
- 	$(call cmd,rmfiles)
- 	@find $(or $(KBUILD_EXTMOD), .) $(RCS_FIND_IGNORE) \
- 		\( -name '*.[aios]' -o -name '*.rsi' -o -name '*.ko' -o -name '.*.cmd' \
--		-o -name '*.ko.*' \
-+		-o -name '*.ko.*' -o -name '*.ctf' \
- 		-o -name '*.dtb' -o -name '*.dtbo' \
- 		-o -name '*.dtb.S' -o -name '*.dtbo.S' \
- 		-o -name '*.dt.yaml' -o -name 'dtbs-list' \
-diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
-index 01067a2bc43b7..d2193b8dfad83 100644
---- a/arch/arm/vdso/Makefile
-+++ b/arch/arm/vdso/Makefile
-@@ -14,6 +14,10 @@ obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
- ccflags-y := -fPIC -fno-common -fno-builtin -fno-stack-protector
- ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO32
- 
-+# CTF in the vDSO would introduce a new section, which would
-+# expand the vDSO to more than a page.
-+ccflags-y += $(call cc-option,-gctf0)
-+
- ldflags-$(CONFIG_CPU_ENDIAN_BE8) := --be8
- ldflags-y := -Bsymbolic --no-undefined -soname=linux-vdso.so.1 \
- 	    -z max-page-size=4096 -shared $(ldflags-y) \
-diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
-index d63930c828397..6e84d3822cfe3 100644
---- a/arch/arm64/kernel/vdso/Makefile
-+++ b/arch/arm64/kernel/vdso/Makefile
-@@ -33,6 +33,10 @@ ldflags-y += -T
- ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
- ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
- 
-+# CTF in the vDSO would introduce a new section, which would
-+# expand the vDSO to more than a page.
-+ccflags-y += $(call cc-option,-gctf0)
-+
- # -Wmissing-prototypes and -Wmissing-declarations are removed from
- # the CFLAGS of vgettimeofday.c to make possible to build the
- # kernel with CONFIG_WERROR enabled.
-diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
-index d724d46b07c84..fbedb95223ae1 100644
---- a/arch/loongarch/vdso/Makefile
-+++ b/arch/loongarch/vdso/Makefile
-@@ -21,7 +21,8 @@ cflags-vdso := $(ccflags-vdso) \
- 	-O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
- 	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
- 	$(call cc-option, -fno-asynchronous-unwind-tables) \
--	$(call cc-option, -fno-stack-protector)
-+	$(call cc-option, -fno-stack-protector) \
-+	$(call cc-option,-gctf0)
- aflags-vdso := $(ccflags-vdso) \
- 	-D__ASSEMBLY__ -Wa,-gdwarf-2
- 
-diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
-index b289b2c1b2946..6c8d777525f9b 100644
---- a/arch/mips/vdso/Makefile
-+++ b/arch/mips/vdso/Makefile
-@@ -30,7 +30,8 @@ cflags-vdso := $(ccflags-vdso) \
- 	-O3 -g -fPIC -fno-strict-aliasing -fno-common -fno-builtin -G 0 \
- 	-mrelax-pic-calls $(call cc-option, -mexplicit-relocs) \
- 	-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
--	$(call cc-option, -fno-asynchronous-unwind-tables)
-+	$(call cc-option, -fno-asynchronous-unwind-tables) \
-+	$(call cc-option,-gctf0)
- aflags-vdso := $(ccflags-vdso) \
- 	-D__ASSEMBLY__ -Wa,-gdwarf-2
- 
-diff --git a/arch/sparc/vdso/Makefile b/arch/sparc/vdso/Makefile
-index 243dbfc4609d8..e4f3e47074e9d 100644
---- a/arch/sparc/vdso/Makefile
-+++ b/arch/sparc/vdso/Makefile
-@@ -44,7 +44,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
- CFL := $(PROFILING) -mcmodel=medlow -fPIC -O2 -fasynchronous-unwind-tables -m64 \
-        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
-        -fno-omit-frame-pointer -foptimize-sibling-calls \
--       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
-+       $(call cc-option,-gctf0) -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
- 
- SPARC_REG_CFLAGS = -ffixed-g4 -ffixed-g5 -fcall-used-g5 -fcall-used-g7
- 
-diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
-index 215a1b202a918..2fa1613a06275 100644
---- a/arch/x86/entry/vdso/Makefile
-+++ b/arch/x86/entry/vdso/Makefile
-@@ -54,6 +54,7 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
- CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
-        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
-        -fno-omit-frame-pointer -foptimize-sibling-calls \
-+       $(call cc-option,-gctf0) \
-        -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
- 
- ifdef CONFIG_MITIGATION_RETPOLINE
-@@ -131,6 +132,7 @@ KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
- KBUILD_CFLAGS_32 += -fno-stack-protector
- KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
- KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
-+KBUILD_CFLAGS_32 += $(call cc-option,-gctf0)
- KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
- 
- ifdef CONFIG_MITIGATION_RETPOLINE
-diff --git a/arch/x86/um/vdso/Makefile b/arch/x86/um/vdso/Makefile
-index 6a77ea6434ffd..6db233b5edd75 100644
---- a/arch/x86/um/vdso/Makefile
-+++ b/arch/x86/um/vdso/Makefile
-@@ -40,7 +40,7 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
- #
- CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
-        $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
--       -fno-omit-frame-pointer -foptimize-sibling-calls
-+       -fno-omit-frame-pointer -foptimize-sibling-calls $(call cc-option,-gctf0)
- 
- $(vobjs): KBUILD_CFLAGS += $(CFL)
- 
-diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
-index f00a8e18f389f..2e307c0824574 100644
---- a/include/asm-generic/vmlinux.lds.h
-+++ b/include/asm-generic/vmlinux.lds.h
-@@ -1014,6 +1014,7 @@
- 	*(.discard.*)							\
- 	*(.export_symbol)						\
- 	*(.modinfo)							\
-+	*(.ctf)								\
- 	/* ld.bfd warns about .gnu.version* even when not emitted */	\
- 	*(.gnu.version*)						\
- 
-diff --git a/include/linux/module.h b/include/linux/module.h
-index 330ffb59efe51..2d9fcca542d13 100644
---- a/include/linux/module.h
-+++ b/include/linux/module.h
-@@ -180,7 +180,13 @@ extern void cleanup_module(void);
- #ifdef MODULE
- #define MODULE_FILE
- #else
--#define MODULE_FILE	MODULE_INFO(file, KBUILD_MODFILE);
-+#ifdef CONFIG_CTF
-+#define MODULE_FILE					                      \
-+			MODULE_INFO(file, KBUILD_MODFILE);                    \
-+			MODULE_INFO(objs, KBUILD_MODOBJS);
-+#else
-+#define MODULE_FILE MODULE_INFO(file, KBUILD_MODFILE);
-+#endif
- #endif
- 
- /*
-diff --git a/init/Kconfig b/init/Kconfig
-index 9684e5d2b81c6..c1b00b2e4a43d 100644
---- a/init/Kconfig
-+++ b/init/Kconfig
-@@ -111,6 +111,12 @@ config PAHOLE_VERSION
- 	int
- 	default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
- 
-+config HAVE_CTF_TOOLCHAIN
-+	def_bool $(cc-option,-gctf) && $(ld-option,-lbfd -liberty -lctf -lbfd -liberty -lz -ldl -lc -o /dev/null)
-+	depends on CC_IS_GCC
-+	help
-+	  GCC and binutils support CTF generation.
-+
- config CONSTRUCTORS
- 	bool
- 
-diff --git a/lib/Kconfig b/lib/Kconfig
-index b0a76dff5c182..61d0be30b3562 100644
---- a/lib/Kconfig
-+++ b/lib/Kconfig
-@@ -633,6 +633,16 @@ config DIMLIB
- #
- config LIBFDT
- 	bool
-+#
-+# CTF support is select'ed if needed
-+#
-+config CTF
-+        bool "Compact Type Format generation"
-+        depends on HAVE_CTF_TOOLCHAIN
-+        help
-+          Emit a compact, compressed description of the kernel's datatypes and
-+          global variables into the vmlinux.ctfa archive (for in-tree modules)
-+          or into .ctf sections in kernel modules (for out-of-tree modules).
- 
- config OID_REGISTRY
- 	tristate
-diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
-index 59b6765d86b8f..dab7e6983eace 100644
---- a/lib/Kconfig.debug
-+++ b/lib/Kconfig.debug
-@@ -571,6 +571,21 @@ config VMLINUX_MAP
- 	  pieces of code get eliminated with
- 	  CONFIG_LD_DEAD_CODE_DATA_ELIMINATION.
- 
-+config BUILTIN_MODULE_RANGES
-+	bool "Generate address range information for builtin modules"
-+	depends on !LTO
-+	depends on VMLINUX_MAP
-+	help
-+	 When modules are built into the kernel, there will be no module name
-+	 associated with its symbols in /proc/kallsyms.  Tracers may want to
-+	 identify symbols by module name and symbol name regardless of whether
-+	 the module is configured as loadable or not.
-+
-+	 This option generates modules.builtin.ranges in the build tree with
-+	 offset ranges (per ELF section) for the module(s) they belong to.
-+	 It also records an anchor symbol to determine the load address of the
-+	 section.
-+
- config DEBUG_FORCE_WEAK_PER_CPU
- 	bool "Force weak per-cpu definitions"
- 	depends on DEBUG_KERNEL
-diff --git a/scripts/Makefile b/scripts/Makefile
-index fe56eeef09dd4..8e7eb174d3154 100644
---- a/scripts/Makefile
-+++ b/scripts/Makefile
-@@ -54,6 +54,7 @@ targets += module.lds
- 
- subdir-$(CONFIG_GCC_PLUGINS) += gcc-plugins
- subdir-$(CONFIG_MODVERSIONS) += genksyms
-+subdir-$(CONFIG_CTF)         += ctf
- subdir-$(CONFIG_SECURITY_SELINUX) += selinux
- 
- # Let clean descend into subdirs
-diff --git a/scripts/Makefile.ctfa b/scripts/Makefile.ctfa
-new file mode 100644
-index 0000000000000..b65d9d391c29c
---- /dev/null
-+++ b/scripts/Makefile.ctfa
-@@ -0,0 +1,92 @@
-+# SPDX-License-Identifier: GPL-2.0-only
-+# ===========================================================================
-+# Module CTF/CTFA generation
-+# ===========================================================================
-+
-+include include/config/auto.conf
-+include $(srctree)/scripts/Kbuild.include
-+
-+# CTF is already present in every object file if CONFIG_CTF is enabled.
-+# vmlinux.lds.h strips it out of the finished kernel, but if nothing is done
-+# it will be deduplicated into module .ko's.  For out-of-tree module builds,
-+# this is what we want, but for in-tree modules we can save substantial
-+# space by deduplicating it against all the core kernel types as well.  So
-+# split the CTF out of in-tree module .ko's into separate .ctf files so that
-+# it doesn't take up space in the modules on disk, and let the specialized
-+# ctfarchive tool consume it and all the CTF in the vmlinux.o files when
-+# 'make ctf' is invoked, and use the same machinery that the linker uses to
-+# do CTF deduplication to emit vmlinux.ctfa containing the deduplicated CTF.
-+
-+# Nothing special needs to be done if CTF is turned off or if a standalone
-+# module is being built.
-+module-ctf-postlink = mv $(1).tmp $(1)
-+
-+ifdef CONFIG_CTF
-+
-+# This is quite tricky.  The CTF machinery needs to be told about all the
-+# built-in objects as well as all the external modules -- but Makefile.modfinal
-+# only knows about the latter.  So the toplevel makefile emits the names of the
-+# built-in objects into a temporary file, which is then catted and its contents
-+# used as prerequisites by this rule.
-+#
-+# We write the names of the object files to be scanned for CTF content into a
-+# file, then use that, to avoid hitting command-line length limits.
-+
-+ifeq ($(KBUILD_EXTMOD),)
-+ctf-modules := $(shell find . -name '*.ko.ctf' -print)
-+quiet_cmd_ctfa_raw = CTFARAW
-+      cmd_ctfa_raw = scripts/ctf/ctfarchive $@ .tmp_objects.builtin modules.builtin.objs $(ctf-filelist)
-+ctf-builtins := .tmp_objects.builtin
-+ctf-filelist := .tmp_ctf.filelist
-+ctf-filelist-raw := .tmp_ctf.filelist.raw
-+
-+define module-ctf-postlink =
-+	$(OBJCOPY) --only-section=.ctf $(1).tmp $(1).ctf && \
-+	$(OBJCOPY) --remove-section=.ctf $(1).tmp $(1) && rm -f $(1).tmp
-+endef
-+
-+# Split a list up like shell xargs does.
-+define xargs =
-+$(1) $(wordlist 1,1024,$(2))
-+$(if $(word 1025,$(2)),$(call xargs,$(1),$(wordlist 1025,$(words $(2)),$(2))))
-+endef
-+
-+$(ctf-filelist-raw): $(ctf-builtins) $(ctf-modules)
-+	@rm -f $(ctf-filelist-raw);
-+	$(call xargs,@printf "%s\n" >> $(ctf-filelist-raw),$^)
-+	@touch $(ctf-filelist-raw)
-+
-+$(ctf-filelist): $(ctf-filelist-raw)
-+	@rm -f $(ctf-filelist);
-+	@cat $(ctf-filelist-raw) | while read -r obj; do \
-+		case $$obj in \
-+		$(ctf-builtins)) cat $$obj >> $(ctf-filelist);; \
-+		*.a) $(AR) t $$obj >> $(ctf-filelist);; \
-+		*.builtin) cat $$obj >> $(ctf-filelist);; \
-+		*) echo "$$obj" >> $(ctf-filelist);; \
-+		esac; \
-+	done
-+	@touch $(ctf-filelist)
-+
-+# The raw CTF depends on the output CTF file list, and that depends
-+# on the .ko files for the modules.
-+.tmp_vmlinux.ctfa.raw: $(ctf-filelist) FORCE
-+	$(call if_changed,ctfa_raw)
-+
-+quiet_cmd_ctfa = CTFA
-+      cmd_ctfa = { echo 'int main () { return 0; } ' | \
-+		$(CC) -x c -c -o $<.stub -; \
-+	$(OBJCOPY) '--remove-section=.*' --add-section=.ctf=$< \
-+		 $<.stub $@; }
-+
-+# The CTF itself is an ELF executable with one section: the CTF.  This lets
-+# objdump work on it, at minimal size cost.
-+vmlinux.ctfa: .tmp_vmlinux.ctfa.raw FORCE
-+	$(call if_changed,ctfa)
-+
-+targets += vmlinux.ctfa
-+
-+endif		# KBUILD_EXTMOD
-+
-+endif		# !CONFIG_CTF
-+
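
The cmd_ctfa recipe above wraps the deduplicated CTF in a tiny ELF container: a
compiled stub has all of its sections removed and a single .ctf section added,
so ordinary ELF tools can inspect vmlinux.ctfa. A minimal sketch of pulling
that payload back out with nothing but <elf.h>; it assumes a 64-bit,
native-endian vmlinux.ctfa and keeps error handling to the basics.

/* Sketch: dump the raw CTF payload (.ctf section) of vmlinux.ctfa to stdout.
 * Assumes a 64-bit, native-endian ELF file, as produced by cmd_ctfa above. */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "vmlinux.ctfa";
	FILE *f = fopen(path, "rb");
	Elf64_Ehdr eh;
	Elf64_Shdr *sh;
	char *names, *buf;

	if (!f || fread(&eh, sizeof(eh), 1, f) != 1 ||
	    memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
		fprintf(stderr, "%s: not an ELF file\n", path);
		return 1;
	}

	/* Read the section header table and its name string table. */
	sh = calloc(eh.e_shnum, sizeof(*sh));
	fseek(f, eh.e_shoff, SEEK_SET);
	fread(sh, sizeof(*sh), eh.e_shnum, f);

	names = malloc(sh[eh.e_shstrndx].sh_size);
	if (!sh || !names)
		return 1;
	fseek(f, sh[eh.e_shstrndx].sh_offset, SEEK_SET);
	fread(names, 1, sh[eh.e_shstrndx].sh_size, f);

	for (size_t i = 0; i < eh.e_shnum; i++) {
		if (strcmp(names + sh[i].sh_name, ".ctf") != 0)
			continue;
		buf = malloc(sh[i].sh_size);
		fseek(f, sh[i].sh_offset, SEEK_SET);
		fread(buf, 1, sh[i].sh_size, f);
		fwrite(buf, 1, sh[i].sh_size, stdout);
		break;
	}
	return 0;
}

The same extraction can be done with 'objcopy -O binary --only-section=.ctf
vmlinux.ctfa ctf.bin'; the sketch only spells out what the stub-plus-objcopy
trick leaves behind.
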
-diff --git a/scripts/Makefile.ctfa-toplevel b/scripts/Makefile.ctfa-toplevel
-new file mode 100644
-index 0000000000000..210bef3854e9b
---- /dev/null
-+++ b/scripts/Makefile.ctfa-toplevel
-@@ -0,0 +1,54 @@
-+# SPDX-License-Identifier: GPL-2.0-only
-+# ===========================================================================
-+# CTF rules for the top-level makefile only
-+# ===========================================================================
-+
-+KBUILD_CFLAGS	+= $(call cc-option,-gctf)
-+KBUILD_LDFLAGS	+= $(call ld-option, --ctf-variables)
-+
-+ifeq ($(KBUILD_EXTMOD),)
-+
-+# CTF generation for in-tree code (modules, built-in and not, and core kernel)
-+
-+# This contains all the object files that are built directly into the
-+# kernel (including built-in modules), for consumption by ctfarchive in
-+# Makefile.modfinal.
-+# This is made doubly annoying by the presence of '.o' files which are actually
-+# thin ar archives, and the need to support file(1) versions too old to
-+# recognize them as archives at all.  (So we assume that everything that is notr
-+# an ELF object is an archive.)
-+ifeq ($(SRCARCH),x86)
-+.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),bzImage) FORCE
-+else
-+ifeq ($(SRCARCH),arm64)
-+.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),Image) FORCE
-+else
-+.tmp_objects.builtin: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),vmlinux) FORCE
-+endif
-+endif
-+	@echo $(KBUILD_VMLINUX_OBJS) | \
-+		tr " " "\n" | grep "\.o$$" | xargs -r file | \
-+		grep ELF | cut -d: -f1 > .tmp_objects.builtin
-+	@for archive in $$(echo $(KBUILD_VMLINUX_OBJS) |\
-+		tr " " "\n" | xargs -r file | grep -v ELF | cut -d: -f1); do \
-+		$(AR) t "$$archive" >> .tmp_objects.builtin; \
-+	done
-+
-+ctf: vmlinux.ctfa
-+PHONY += ctf ctf_install
-+
-+# Making CTF needs the builtin files.  We need to force everything to be
-+# built if not already done, since we need the .o files for the machinery
-+# above to work.
-+vmlinux.ctfa: KBUILD_BUILTIN := 1
-+vmlinux.ctfa: modules.builtin.objs .tmp_objects.builtin
-+	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modfinal vmlinux.ctfa
-+
-+ctf_install:
-+	$(Q)mkdir -p $(MODLIB)/kernel
-+	@ln -sf $(abspath $(srctree)) $(MODLIB)/source
-+	$(Q)cp -f $(objtree)/vmlinux.ctfa $(MODLIB)/kernel
-+
-+CLEAN_FILES += vmlinux.ctfa
-+
-+endif
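
The .tmp_objects.builtin rule above separates the entries of
KBUILD_VMLINUX_OBJS into ELF objects and (possibly thin) archives by piping
them through file(1) and grepping for "ELF". A minimal sketch of the same
classification done by magic bytes alone, which also sidesteps the old-file(1)
caveat mentioned in the comment; the file names in main() are hypothetical.

/* Sketch: classify a build artifact as ELF object, ar archive, or other,
 * by its magic bytes ("\177ELF" vs. "!<arch>\n" / "!<thin>\n"), mirroring
 * the intent of the `file | grep ELF` pipeline in .tmp_objects.builtin. */
#include <stdio.h>
#include <string.h>

static const char *classify(const char *path)
{
	char magic[8] = { 0 };
	FILE *f = fopen(path, "rb");

	if (!f)
		return "unreadable";
	fread(magic, 1, sizeof(magic), f);
	fclose(f);

	if (memcmp(magic, "\177ELF", 4) == 0)
		return "ELF object";
	if (memcmp(magic, "!<thin>\n", 8) == 0)
		return "thin ar archive";	/* kbuild's built-in.a is usually thin */
	if (memcmp(magic, "!<arch>\n", 8) == 0)
		return "ar archive";
	return "other";
}

int main(void)
{
	/* Hypothetical build outputs. */
	const char *files[] = { "init/built-in.a", "lib/lib.a", "fs/ext4/ext4.o" };

	for (size_t i = 0; i < sizeof(files) / sizeof(files[0]); i++)
		printf("%-20s %s\n", files[i], classify(files[i]));
	return 0;
}
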
-diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
-index 7f8ec77bf35c9..8e67961ba2ec9 100644
---- a/scripts/Makefile.lib
-+++ b/scripts/Makefile.lib
-@@ -118,6 +118,8 @@ modname-multi = $(sort $(foreach m,$(multi-obj-ym),\
- __modname = $(or $(modname-multi),$(basetarget))
- 
- modname = $(subst $(space),:,$(__modname))
-+modname-objs = $($(modname)-objs) $($(modname)-y) $($(modname)-Y)
-+modname-objs-prefixed = $(sort $(strip $(addprefix $(obj)/, $(modname-objs))))
- modfile = $(addprefix $(obj)/,$(__modname))
- 
- # target with $(obj)/ and its suffix stripped
-@@ -133,6 +135,10 @@ modname_flags  = -DKBUILD_MODNAME=$(call name-fix,$(modname)) \
- 		 -D__KBUILD_MODNAME=kmod_$(call name-fix-token,$(modname))
- modfile_flags  = -DKBUILD_MODFILE=$(call stringify,$(modfile))
- 
-+ifdef CONFIG_CTF
-+modfile_flags  += -DKBUILD_MODOBJS=$(call stringify,$(modfile).o:$(subst $(space),|,$(modname-objs-prefixed)))
-+endif
-+
- _c_flags       = $(filter-out $(CFLAGS_REMOVE_$(target-stem).o), \
-                      $(filter-out $(ccflags-remove-y), \
-                          $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(ccflags-y)) \
-@@ -238,7 +244,7 @@ modkern_rustflags =                                              \
- 
- modkern_aflags = $(if $(part-of-module),				\
- 			$(KBUILD_AFLAGS_MODULE) $(AFLAGS_MODULE),	\
--			$(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL))
-+			$(KBUILD_AFLAGS_KERNEL) $(AFLAGS_KERNEL) $(modfile_flags))
- 
- c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
- 		 -include $(srctree)/include/linux/compiler_types.h       \
-@@ -248,7 +254,7 @@ c_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
- rust_flags     = $(_rust_flags) $(modkern_rustflags) @$(objtree)/include/generated/rustc_cfg
- 
- a_flags        = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
--		 $(_a_flags) $(modkern_aflags)
-+		 $(_a_flags) $(modkern_aflags) $(modname_flags)
- 
- cpp_flags      = -Wp,-MMD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
- 		 $(_cpp_flags)
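
The KBUILD_MODOBJS define added above records, for every in-tree module, a
string of the form "<modfile>.o:<obj1>|<obj2>|..." which MODULE_INFO(objs, ...)
then embeds in .modinfo and which Makefile.vmlinux_o later rewrites into
modules.builtin.objs. A minimal sketch of decoding one such value with plain
libc; the xfs paths are hypothetical, only the ':' and '|' separators come
from the rule above.

/* Sketch: split a KBUILD_MODOBJS-style value, "<modfile>.o:<obj>|<obj>|...",
 * into the virtual module object and its constituent object files. */
#define _GNU_SOURCE 1
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Hypothetical record, as it would appear in an objs= modinfo entry. */
	char value[] = "fs/xfs/xfs.o:fs/xfs/xfs_super.o|fs/xfs/xfs_buf.o";
	char *objs = strchr(value, ':');
	char *obj;

	if (!objs) {
		fprintf(stderr, "malformed record\n");
		return 1;
	}
	*objs++ = '\0';

	printf("module object: %s\n", value);
	while ((obj = strsep(&objs, "|")) != NULL)
		printf("  built from:  %s\n", obj);
	return 0;
}
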
-diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
-index 3bec9043e4f38..06807e403d162 100644
---- a/scripts/Makefile.modfinal
-+++ b/scripts/Makefile.modfinal
-@@ -30,11 +30,16 @@ quiet_cmd_cc_o_c = CC [M]  $@
- %.mod.o: %.mod.c FORCE
- 	$(call if_changed_dep,cc_o_c)
- 
-+# for module-ctf-postlink
-+include $(srctree)/scripts/Makefile.ctfa
-+
- quiet_cmd_ld_ko_o = LD [M]  $@
-       cmd_ld_ko_o +=							\
- 	$(LD) -r $(KBUILD_LDFLAGS)					\
- 		$(KBUILD_LDFLAGS_MODULE) $(LDFLAGS_MODULE)		\
--		-T scripts/module.lds -o $@ $(filter %.o, $^)
-+		-T scripts/module.lds $(LDFLAGS_$(modname)) -o $@.tmp	\
-+		$(filter %.o, $^) &&					\
-+	$(call module-ctf-postlink,$@)					\
- 
- quiet_cmd_btf_ko = BTF [M] $@
-       cmd_btf_ko = 							\
-diff --git a/scripts/Makefile.modinst b/scripts/Makefile.modinst
-index 0afd75472679f..e668469ce098c 100644
---- a/scripts/Makefile.modinst
-+++ b/scripts/Makefile.modinst
-@@ -30,10 +30,12 @@ $(MODLIB)/modules.order: modules.order FORCE
- quiet_cmd_install_modorder = INSTALL $@
-       cmd_install_modorder = sed 's:^\(.*\)\.o$$:kernel/\1.ko:' $< > $@
- 
--# Install modules.builtin(.modinfo) even when CONFIG_MODULES is disabled.
--install-y += $(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo)
-+# Install modules.builtin(.modinfo,.ranges,.objs) even when CONFIG_MODULES is disabled.
-+install-y += $(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo modules.builtin.objs)
- 
--$(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo): $(MODLIB)/%: % FORCE
-+install-$(CONFIG_BUILTIN_MODULE_RANGES) += $(MODLIB)/modules.builtin.ranges
-+
-+$(addprefix $(MODLIB)/, modules.builtin modules.builtin.modinfo modules.builtin.ranges modules.builtin.objs): $(MODLIB)/%: % FORCE
- 	$(call cmd,install)
- 
- endif
-diff --git a/scripts/Makefile.vmlinux b/scripts/Makefile.vmlinux
-index 49946cb968440..7e8b703799c85 100644
---- a/scripts/Makefile.vmlinux
-+++ b/scripts/Makefile.vmlinux
-@@ -33,6 +33,24 @@ targets += vmlinux
- vmlinux: scripts/link-vmlinux.sh vmlinux.o $(KBUILD_LDS) FORCE
- 	+$(call if_changed_dep,link_vmlinux)
- 
-+# module.builtin.ranges
-+# ---------------------------------------------------------------------------
-+ifdef CONFIG_BUILTIN_MODULE_RANGES
-+__default: modules.builtin.ranges
-+
-+quiet_cmd_modules_builtin_ranges = GEN     $@
-+      cmd_modules_builtin_ranges = $(real-prereqs) > $@
-+
-+targets += modules.builtin.ranges
-+modules.builtin.ranges: $(srctree)/scripts/generate_builtin_ranges.awk \
-+			modules.builtin vmlinux.map vmlinux.o.map FORCE
-+	$(call if_changed,modules_builtin_ranges)
-+
-+vmlinux.map: vmlinux
-+	@:
-+
-+endif
-+
- # Add FORCE to the prequisites of a target to force it to be always rebuilt.
- # ---------------------------------------------------------------------------
- 
-diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
-index 6de297916ce68..86d6f8887313f 100644
---- a/scripts/Makefile.vmlinux_o
-+++ b/scripts/Makefile.vmlinux_o
-@@ -1,7 +1,7 @@
- # SPDX-License-Identifier: GPL-2.0-only
- 
- PHONY := __default
--__default: vmlinux.o modules.builtin.modinfo modules.builtin
-+__default: vmlinux.o modules.builtin.modinfo modules.builtin modules.builtin.objs
- 
- include include/config/auto.conf
- include $(srctree)/scripts/Kbuild.include
-@@ -27,6 +27,18 @@ ifdef CONFIG_LTO_CLANG
- initcalls-lds := .tmp_initcalls.lds
- endif
- 
-+# Generate a linker script to delete CTF sections
-+# -----------------------------------------------
-+
-+quiet_cmd_gen_remove_ctf.lds = GEN     $@
-+      cmd_gen_remove_ctf.lds = \
-+	$(LD) $(KBUILD_LDFLAGS) -r --verbose | awk -f $(real-prereqs) > $@
-+
-+.tmp_remove-ctf.lds: $(srctree)/scripts/remove-ctf-lds.awk FORCE
-+	$(call if_changed,gen_remove_ctf.lds)
-+
-+targets := .tmp_remove-ctf.lds
-+
- # objtool for vmlinux.o
- # ---------------------------------------------------------------------------
- #
-@@ -42,13 +54,18 @@ vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION)	+= --noinstr \
- 
- objtool-args = $(vmlinux-objtool-args-y) --link
- 
--# Link of vmlinux.o used for section mismatch analysis
-+# Link of vmlinux.o used for section mismatch analysis: we also strip the CTF
-+# section out at this stage, since ctfarchive gets it from the underlying object
-+# files and linking it further is a waste of time.
- # ---------------------------------------------------------------------------
- 
-+vmlinux-o-ld-args-$(CONFIG_BUILTIN_MODULE_RANGES)	+= -Map=$@.map
-+
- quiet_cmd_ld_vmlinux.o = LD      $@
-       cmd_ld_vmlinux.o = \
- 	$(LD) ${KBUILD_LDFLAGS} -r -o $@ \
--	$(addprefix -T , $(initcalls-lds)) \
-+	$(vmlinux-o-ld-args-y) \
-+	$(addprefix -T , $(initcalls-lds)) -T .tmp_remove-ctf.lds \
- 	--whole-archive vmlinux.a --no-whole-archive \
- 	--start-group $(KBUILD_VMLINUX_LIBS) --end-group \
- 	$(cmd_objtool)
-@@ -58,7 +75,7 @@ define rule_ld_vmlinux.o
- 	$(call cmd,gen_objtooldep)
- endef
- 
--vmlinux.o: $(initcalls-lds) vmlinux.a $(KBUILD_VMLINUX_LIBS) FORCE
-+vmlinux.o: $(initcalls-lds) vmlinux.a $(KBUILD_VMLINUX_LIBS) .tmp_remove-ctf.lds FORCE
- 	$(call if_changed_rule,ld_vmlinux.o)
- 
- targets += vmlinux.o
-@@ -87,6 +104,19 @@ targets += modules.builtin
- modules.builtin: modules.builtin.modinfo FORCE
- 	$(call if_changed,modules_builtin)
- 
-+# module.builtin.objs
-+# ---------------------------------------------------------------------------
-+quiet_cmd_modules_builtin_objs = GEN     $@
-+      cmd_modules_builtin_objs = \
-+	tr '\0' '\n' < $< | \
-+	sed -n 's/^[[:alnum:]:_]*\.objs=//p' | \
-+	tr ' ' '\n' | uniq | sed -e 's|:|: |' -e 's:|: :g' | \
-+	tr -s ' ' > $@
-+
-+targets += modules.builtin.objs
-+modules.builtin.objs: modules.builtin.modinfo FORCE
-+	$(call if_changed,modules_builtin_objs)
-+
- # Add FORCE to the prequisites of a target to force it to be always rebuilt.
- # ---------------------------------------------------------------------------
- 
-diff --git a/scripts/ctf/.gitignore b/scripts/ctf/.gitignore
-new file mode 100644
-index 0000000000000..6a0eb1c3ceeab
---- /dev/null
-+++ b/scripts/ctf/.gitignore
-@@ -0,0 +1 @@
-+ctfarchive
-diff --git a/scripts/ctf/Makefile b/scripts/ctf/Makefile
-new file mode 100644
-index 0000000000000..3b83f93bb9f9a
---- /dev/null
-+++ b/scripts/ctf/Makefile
-@@ -0,0 +1,5 @@
-+ifdef CONFIG_CTF
-+hostprogs-always-y	:= ctfarchive
-+ctfarchive-objs		:= ctfarchive.o modules_builtin.o
-+HOSTLDLIBS_ctfarchive := -lctf
-+endif
-diff --git a/scripts/ctf/ctfarchive.c b/scripts/ctf/ctfarchive.c
-new file mode 100644
-index 0000000000000..92cc4912ed0ee
---- /dev/null
-+++ b/scripts/ctf/ctfarchive.c
-@@ -0,0 +1,413 @@
-+/* SPDX-License-Identifier: GPL-2.0 */
-+/*
-+ * ctfmerge.c: Read in CTF extracted from generated object files from a
-+ * specified directory and generate a CTF archive whose members are the
-+ * deduplicated CTF derived from those object files, split up by kernel
-+ * module.
-+ *
-+ * Copyright (c) 2019, 2023, Oracle and/or its affiliates.
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
-+ */
-+
-+#define _GNU_SOURCE 1
-+#include <errno.h>
-+#include <stdio.h>
-+#include <stdlib.h>
-+#include <string.h>
-+#include <ctf-api.h>
-+#include "modules_builtin.h"
-+
-+static ctf_file_t *output;
-+
-+static int private_ctf_link_add_ctf(ctf_file_t *fp,
-+				    const char *name)
-+{
-+#if !defined (CTF_LINK_FINAL)
-+	return ctf_link_add_ctf(fp, NULL, name);
-+#else
-+	/* Non-upstreamed, erroneously-broken API.  */
-+	return ctf_link_add_ctf(fp, NULL, name, NULL, 0);
-+#endif
-+}
-+
-+/*
-+ * Add a file to the link.
-+ */
-+static void add_to_link(const char *fn)
-+{
-+	if (private_ctf_link_add_ctf(output, fn) < 0)
-+	{
-+		fprintf(stderr, "Cannot add CTF file %s: %s\n", fn,
-+			ctf_errmsg(ctf_errno(output)));
-+		exit(1);
-+	}
-+}
-+
-+struct from_to
-+{
-+	char *from;
-+	char *to;
-+};
-+
-+/*
-+ * The world's stupidest hash table of FROM -> TO.
-+ */
-+static struct from_to **from_tos[256];
-+static size_t alloc_from_tos[256];
-+static size_t num_from_tos[256];
-+
-+static unsigned char from_to_hash(const char *from)
-+{
-+	unsigned char hval = 0;
-+
-+	const char *p;
-+	for (p = from; *p; p++)
-+		hval += *p;
-+
-+	return hval;
-+}
-+
-+/*
-+ * Note that we will need to add a CU mapping later on.
-+ *
-+ * Present purely to work around a binutils bug that stops
-+ * ctf_link_add_cu_mapping() working right when called repeatedly
-+ * with the same FROM.
-+ */
-+static int add_cu_mapping(const char *from, const char *to)
-+{
-+	ssize_t i, j;
-+
-+	i = from_to_hash(from);
-+
-+	for (j = 0; j < num_from_tos[i]; j++)
-+		if (strcmp(from, from_tos[i][j]->from) == 0) {
-+			char *tmp;
-+
-+			free(from_tos[i][j]->to);
-+			tmp = strdup(to);
-+			if (!tmp)
-+				goto oom;
-+			from_tos[i][j]->to = tmp;
-+			return 0;
-+		    }
-+
-+	if (num_from_tos[i] >= alloc_from_tos[i]) {
-+		struct from_to **tmp;
-+		if (alloc_from_tos[i] < 16)
-+			alloc_from_tos[i] = 16;
-+		else
-+			alloc_from_tos[i] *= 2;
-+
-+		tmp = realloc(from_tos[i], alloc_from_tos[i] * sizeof(struct from_to *));
-+		if (!tmp)
-+			goto oom;
-+
-+		from_tos[i] = tmp;
-+	}
-+
-+	j = num_from_tos[i];
-+	from_tos[i][j] = malloc(sizeof(struct from_to));
-+	if (from_tos[i][j] == NULL)
-+		goto oom;
-+	from_tos[i][j]->from = strdup(from);
-+	from_tos[i][j]->to = strdup(to);
-+	if (!from_tos[i][j]->from || !from_tos[i][j]->to)
-+		goto oom;
-+	num_from_tos[i]++;
-+
-+	return 0;
-+  oom:
-+	fprintf(stderr,
-+		"out of memory in add_cu_mapping\n");
-+	exit(1);
-+}
-+
-+/*
-+ * Finally tell binutils to add all the CU mappings, with duplicate FROMs
-+ * replaced with the most recent one.
-+ */
-+static void commit_cu_mappings(void)
-+{
-+	ssize_t i, j;
-+
-+	for (i = 0; i < 256; i++)
-+		for (j = 0; j < num_from_tos[i]; j++)
-+			ctf_link_add_cu_mapping(output, from_tos[i][j]->from,
-+						from_tos[i][j]->to);
-+}
-+
-+/*
-+ * Add a CU mapping to the link.
-+ *
-+ * CU mappings for built-in modules are added by suck_in_modules, below: here,
-+ * we only want to add mappings for names ending in '.ko.ctf', i.e. external
-+ * modules, which appear only in the filelist (since they are not built-in).
-+ * The pathnames are stripped off because modules don't have any, and hyphens
-+ * are translated into underscores.
-+ */
-+static void add_cu_mappings(const char *fn)
-+{
-+	const char *last_slash;
-+	const char *modname = fn;
-+	char *dynmodname = NULL;
-+	char *dash;
-+	size_t n;
-+
-+	last_slash = strrchr(modname, '/');
-+	if (last_slash)
-+		last_slash++;
-+	else
-+		last_slash = modname;
-+	modname = last_slash;
-+	if (strchr(modname, '-') != NULL)
-+	{
-+		dynmodname = strdup(last_slash);
-+		dash = dynmodname;
-+		while (dash != NULL) {
-+			dash = strchr(dash, '-');
-+			if (dash != NULL)
-+				*dash = '_';
-+		}
-+		modname = dynmodname;
-+	}
-+
-+	n = strlen(modname);
-+	if (strcmp(modname + n - strlen(".ko.ctf"), ".ko.ctf") == 0) {
-+		char *mod;
-+
-+		n -= strlen(".ko.ctf");
-+		mod = strndup(modname, n);
-+		add_cu_mapping(fn, mod);
-+		free(mod);
-+	}
-+	free(dynmodname);
-+}
-+
-+/*
-+ * Add the passed names as mappings to "vmlinux".
-+ */
-+static void add_builtins(const char *fn)
-+{
-+	if (add_cu_mapping(fn, "vmlinux") < 0)
-+	{
-+		fprintf(stderr, "Cannot add CTF CU mapping from %s to \"vmlinux\": %s\n",
-+			fn, ctf_errmsg(ctf_errno(output)));
-+		exit(1);
-+	}
-+}
-+
-+/*
-+ * Do something with a file, line by line.
-+ */
-+static void suck_in_lines(const char *filename, void (*func)(const char *line))
-+{
-+	FILE *f;
-+	char *line = NULL;
-+	size_t line_size = 0;
-+
-+	f = fopen(filename, "r");
-+	if (f == NULL) {
-+		fprintf(stderr, "Cannot open %s: %s\n", filename,
-+			strerror(errno));
-+		exit(1);
-+	}
-+
-+	while (getline(&line, &line_size, f) >= 0) {
-+		size_t len = strlen(line);
-+
-+		if (len == 0)
-+			continue;
-+
-+		if (line[len-1] == '\n')
-+			line[len-1] = '\0';
-+
-+		func(line);
-+	}
-+	free(line);
-+
-+	if (ferror(f)) {
-+		fprintf(stderr, "Error reading from %s: %s\n", filename,
-+			strerror(errno));
-+		exit(1);
-+	}
-+
-+	fclose(f);
-+}
-+
-+/*
-+ * Pull in modules.builtin.objs and turn it into CU mappings.
-+ */
-+static void suck_in_modules(const char *modules_builtin_name)
-+{
-+	struct modules_builtin_iter *i;
-+	char *module_name = NULL;
-+	char **paths;
-+
-+	i = modules_builtin_iter_new(modules_builtin_name);
-+	if (i == NULL) {
-+		fprintf(stderr, "Cannot iterate over builtin module file.\n");
-+		exit(1);
-+	}
-+
-+	while ((paths = modules_builtin_iter_next(i, &module_name)) != NULL) {
-+		size_t j;
-+
-+		for (j = 0; paths[j] != NULL; j++) {
-+			char *alloc = NULL;
-+			char *path = paths[j];
-+			/*
-+			 * If the name doesn't start in ./, add it, to match the names
-+			 * passed to add_builtins.
-+			 */
-+			if (strncmp(paths[j], "./", 2) != 0) {
-+				char *p;
-+				if ((alloc = malloc(strlen(paths[j]) + 3)) == NULL) {
-+					fprintf(stderr, "Cannot allocate memory for "
-+						"builtin module object name %s.\n",
-+						paths[j]);
-+					exit(1);
-+				}
-+				p = alloc;
-+				p = stpcpy(p, "./");
-+				p = stpcpy(p, paths[j]);
-+				path = alloc;
-+			}
-+			if (add_cu_mapping(path, module_name) < 0) {
-+				fprintf(stderr, "Cannot add path -> module mapping for "
-+					"%s -> %s: %s\n", path, module_name,
-+					ctf_errmsg(ctf_errno(output)));
-+				exit(1);
-+			}
-+			free (alloc);
-+		}
-+		free(paths);
-+	}
-+	free(module_name);
-+	modules_builtin_iter_free(i);
-+}
-+
-+/*
-+ * Strip the leading .ctf. off all the module names: transform the default name
-+ * from _CTF_SECTION into shared_ctf, and chop any trailing .ctf off (since that
-+ * derives from the intermediate file used to keep the CTF out of the final
-+ * module).
-+ */
-+static char *transform_module_names(ctf_file_t *fp __attribute__((__unused__)),
-+				    const char *name,
-+				    void *arg __attribute__((__unused__)))
-+{
-+	if (strcmp(name, ".ctf") == 0)
-+		return strdup("shared_ctf");
-+
-+	if (strncmp(name, ".ctf", 4) == 0) {
-+		size_t n = strlen (name);
-+		if (strcmp(name + n - 4, ".ctf") == 0)
-+			n -= 4;
-+		return strndup(name + 4, n - 4);
-+	}
-+	return NULL;
-+}
-+
-+int main(int argc, char *argv[])
-+{
-+	int err;
-+	const char *output_file;
-+	unsigned char *file_data = NULL;
-+	size_t file_size;
-+	FILE *fp;
-+
-+	if (argc != 5) {
-+		fprintf(stderr, "Syntax: ctfarchive output-file objects.builtin modules.builtin\n");
-+		fprintf(stderr, "                   filelist\n");
-+		exit(1);
-+	}
-+
-+	output_file = argv[1];
-+
-+	/*
-+	 * First pull in the input files and add them to the link.
-+	 */
-+
-+	output = ctf_create(&err);
-+	if (!output) {
-+		fprintf(stderr, "Cannot create output CTF archive: %s\n",
-+			ctf_errmsg(err));
-+		return 1;
-+	}
-+
-+	suck_in_lines(argv[4], add_to_link);
-+
-+	/*
-+	 * Make sure that, even if all their types are shared, all modules have
-+	 * a ctf member that can be used as a child of the shared CTF.
-+	 */
-+	suck_in_lines(argv[4], add_cu_mappings);
-+
-+	/*
-+	 * Then pull in the builtin objects list and add them as
-+	 * mappings to "vmlinux".
-+	 */
-+
-+	suck_in_lines(argv[2], add_builtins);
-+
-+	/*
-+	 * Finally, pull in the object -> module mapping and add it
-+	 * as appropriate mappings.
-+	 */
-+	suck_in_modules(argv[3]);
-+
-+	/*
-+	 * Commit the added CU mappings.
-+	 */
-+	commit_cu_mappings();
-+
-+	/*
-+	 * Arrange to fix up the module names.
-+	 */
-+	ctf_link_set_memb_name_changer(output, transform_module_names, NULL);
-+
-+	/*
-+	 * Do the link.
-+	 */
-+	if (ctf_link(output, CTF_LINK_SHARE_DUPLICATED |
-+                     CTF_LINK_EMPTY_CU_MAPPINGS) < 0)
-+		goto ctf_err;
-+
-+	/*
-+	 * Write the output.
-+	 */
-+
-+	file_data = ctf_link_write(output, &file_size, 4096);
-+	if (!file_data)
-+		goto ctf_err;
-+
-+	fp = fopen(output_file, "w");
-+	if (!fp)
-+		goto err;
-+
-+	while ((err = fwrite(file_data, file_size, 1, fp)) == 0);
-+	if (ferror(fp)) {
-+		errno = ferror(fp);
-+		goto err;
-+	}
-+	if (fclose(fp) < 0)
-+		goto err;
-+	free(file_data);
-+	ctf_file_close(output);
-+
-+	return 0;
-+err:
-+	free(file_data);
-+	fprintf(stderr, "Cannot create output CTF archive: %s\n",
-+		strerror(errno));
-+	return 1;
-+ctf_err:
-+	fprintf(stderr, "Cannot create output CTF archive: %s\n",
-+		ctf_errmsg(ctf_errno(output)));
-+	return 1;
-+}
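
ctfarchive is driven entirely from Makefile.ctfa, which invokes it as
'ctfarchive <output> .tmp_objects.builtin modules.builtin.objs <filelist>'.
One detail worth calling out is the naming convention shared by
add_cu_mappings() and transform_module_names() above: directory parts are
dropped, '-' becomes '_', and the '.ko.ctf' suffix is stripped, so an on-disk
intermediate maps onto the module name users see at runtime. A standalone
sketch of just that normalization; the sample path is hypothetical.

/* Sketch: normalize an intermediate CTF file name the way ctfarchive does
 * when building CU mappings: strip directories, map '-' to '_', and drop
 * a trailing ".ko.ctf". */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *ctf_member_name(const char *path)
{
	const char *base = strrchr(path, '/');
	char *name, *dash;
	size_t len;

	base = base ? base + 1 : path;
	name = strdup(base);
	if (!name)
		return NULL;

	for (dash = name; (dash = strchr(dash, '-')) != NULL; )
		*dash = '_';

	len = strlen(name);
	if (len > strlen(".ko.ctf") &&
	    strcmp(name + len - strlen(".ko.ctf"), ".ko.ctf") == 0)
		name[len - strlen(".ko.ctf")] = '\0';

	return name;
}

int main(void)
{
	/* Hypothetical intermediate produced by module-ctf-postlink. */
	char *m = ctf_member_name("./fs/nls/nls_iso8859-1.ko.ctf");

	printf("%s\n", m ? m : "(oom)");	/* prints: nls_iso8859_1 */
	free(m);
	return 0;
}
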
-diff --git a/scripts/ctf/modules_builtin.c b/scripts/ctf/modules_builtin.c
-new file mode 100644
-index 0000000000000..10af2bbc80e0c
---- /dev/null
-+++ b/scripts/ctf/modules_builtin.c
-@@ -0,0 +1,2 @@
-+/* SPDX-License-Identifier: GPL-2.0 */
-+#include "../modules_builtin.c"
-diff --git a/scripts/ctf/modules_builtin.h b/scripts/ctf/modules_builtin.h
-new file mode 100644
-index 0000000000000..5e0299e5600c2
---- /dev/null
-+++ b/scripts/ctf/modules_builtin.h
-@@ -0,0 +1,2 @@
-+/* SPDX-License-Identifier: GPL-2.0 */
-+#include "../modules_builtin.h"
-diff --git a/scripts/generate_builtin_ranges.awk b/scripts/generate_builtin_ranges.awk
-new file mode 100755
-index 0000000000000..51ae0458ffbdd
---- /dev/null
-+++ b/scripts/generate_builtin_ranges.awk
-@@ -0,0 +1,516 @@
-+#!/usr/bin/gawk -f
-+# SPDX-License-Identifier: GPL-2.0
-+# generate_builtin_ranges.awk: Generate address range data for builtin modules
-+# Written by Kris Van Hees <kris.van.hees@oracle.com>
-+#
-+# Usage: generate_builtin_ranges.awk modules.builtin vmlinux.map \
-+#		vmlinux.o.map [ <build-dir> ] > modules.builtin.ranges
-+#
-+
-+# Return the module name(s) (if any) associated with the given object.
-+#
-+# If we have seen this object before, return information from the cache.
-+# Otherwise, retrieve it from the corresponding .cmd file.
-+#
-+function get_module_info(fn, mod, obj, s) {
-+	if (fn in omod)
-+		return omod[fn];
-+
-+	if (match(fn, /\/[^/]+$/) == 0)
-+		return "";
-+
-+	obj = fn;
-+	mod = "";
-+	fn = kdir "/" substr(fn, 1, RSTART) "." substr(fn, RSTART + 1) ".cmd";
-+	if (getline s <fn == 1) {
-+		if (match(s, /DKBUILD_MODFILE=['"]+[^'"]+/) > 0) {
-+			mod = substr(s, RSTART + 16, RLENGTH - 16);
-+			gsub(/['"]/, "", mod);
-+		} else if (match(s, /RUST_MODFILE=[^ ]+/) > 0)
-+			mod = substr(s, RSTART + 13, RLENGTH - 13);
-+	}
-+	close(fn);
-+
-+	# A single module (common case) also reflects objects that are not part
-+	# of a module.  Some of those objects have names that are also a module
-+	# name (e.g. core).  We check the associated module file name, and if
-+	# they do not match, the object is not part of a module.
-+	if (mod !~ / /) {
-+		if (!(mod in mods))
-+			mod = "";
-+	}
-+
-+	gsub(/([^/ ]*\/)+/, "", mod);
-+	gsub(/-/, "_", mod);
-+
-+	# At this point, mod is a single (valid) module name, or a list of
-+	# module names (that do not need validation).
-+	omod[obj] = mod;
-+
-+	return mod;
-+}
-+
-+# Update the ranges entry for the given module 'mod' in section 'osect'.
-+#
-+# We use a modified absolute start address (soff + base) as index because we
-+# may need to insert an anchor record later that must be at the start of the
-+# section data, and the first module may very well start at the same address.
-+# So, we use (addr << 1) + 1 to allow a possible anchor record to be placed at
-+# (addr << 1).  This is safe because the index is only used to sort the entries
-+# before writing them out.
-+#
-+function update_entry(osect, mod, soff, eoff, sect, idx) {
-+	sect = sect_in[osect];
-+	idx = (soff + sect_base[osect]) * 2 + 1;
-+	entries[idx] = sprintf("%s %08x-%08x %s", sect, soff, eoff, mod);
-+	count[sect]++;
-+}
-+
-+# Determine the kernel build directory to use (default is .).
-+#
-+BEGIN {
-+	if (ARGC > 4) {
-+		kdir = ARGV[ARGC - 1];
-+		ARGV[ARGC - 1] = "";
-+	} else
-+		kdir = ".";
-+}
-+
-+# (1) Build a lookup map of built-in module names.
-+#
-+# The first file argument is used as input (modules.builtin).
-+#
-+# Lines will be like:
-+#	kernel/crypto/lzo-rle.ko
-+# and we record the object name "crypto/lzo-rle".
-+#
-+ARGIND == 1 {
-+	sub(/kernel\//, "");			# strip off "kernel/" prefix
-+	sub(/\.ko$/, "");			# strip off .ko suffix
-+
-+	mods[$1] = 1;
-+	next;
-+}
-+
-+# (2) Collect address information for each section.
-+#
-+# The second file argument is used as input (vmlinux.map).
-+#
-+# We collect the base address of the section in order to convert all addresses
-+# in the section into offset values.
-+#
-+# We collect the address of the anchor (or first symbol in the section if there
-+# is no explicit anchor) to allow users of the range data to calculate address
-+# ranges based on the actual load address of the section in the running kernel.
-+#
-+# We collect the start address of any sub-section (section included in the top
-+# level section being processed).  This is needed when the final linking was
-+# done using vmlinux.a because then the list of objects contained in each
-+# section is to be obtained from vmlinux.o.map.  The offset of the sub-section
-+# is recorded here, to be used as an addend when processing vmlinux.o.map
-+# later.
-+#
-+
-+# Both GNU ld and LLVM lld linker map format are supported by converting LLVM
-+# lld linker map records into equivalent GNU ld linker map records.
-+#
-+# The first record of the vmlinux.map file provides enough information to know
-+# which format we are dealing with.
-+#
-+ARGIND == 2 && FNR == 1 && NF == 7 && $1 == "VMA" && $7 == "Symbol" {
-+	map_is_lld = 1;
-+	if (dbg)
-+		printf "NOTE: %s uses LLVM lld linker map format\n", FILENAME >"/dev/stderr";
-+	next;
-+}
-+
-+# (LLD) Convert a section record from lld format to ld format.
-+#
-+# lld: ffffffff82c00000          2c00000   2493c0  8192 .data
-+#  ->
-+# ld:  .data           0xffffffff82c00000   0x2493c0 load address 0x0000000002c00000
-+#
-+ARGIND == 2 && map_is_lld && NF == 5 && /[0-9] [^ ]+$/ {
-+	$0 = $5 " 0x"$1 " 0x"$3 " load address 0x"$2;
-+}
-+
-+# (LLD) Convert an anchor record from lld format to ld format.
-+#
-+# lld: ffffffff81000000          1000000        0     1         _text = .
-+#  ->
-+# ld:                  0xffffffff81000000                _text = .
-+#
-+ARGIND == 2 && map_is_lld && !anchor && NF == 7 && raw_addr == "0x"$1 && $6 == "=" && $7 == "." {
-+	$0 = "  0x"$1 " " $5 " = .";
-+}
-+
-+# (LLD) Convert an object record from lld format to ld format.
-+#
-+# lld:            11480            11480     1f07    16         vmlinux.a(arch/x86/events/amd/uncore.o):(.text)
-+#  ->
-+# ld:   .text          0x0000000000011480     0x1f07 arch/x86/events/amd/uncore.o
-+#
-+ARGIND == 2 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
-+	gsub(/\)/, "");
-+	sub(/ vmlinux\.a\(/, " ");
-+	sub(/:\(/, " ");
-+	$0 = " "$6 " 0x"$1 " 0x"$3 " " $5;
-+}
-+
-+# (LLD) Convert a symbol record from lld format to ld format.
-+#
-+# We only care about these while processing a section for which no anchor has
-+# been determined yet.
-+#
-+# lld: ffffffff82a859a4          2a859a4        0     1                 btf_ksym_iter_id
-+#  ->
-+# ld:                  0xffffffff82a859a4                btf_ksym_iter_id
-+#
-+ARGIND == 2 && map_is_lld && sect && !anchor && NF == 5 && $5 ~ /^[_A-Za-z][_A-Za-z0-9]*$/ {
-+	$0 = "  0x"$1 " " $5;
-+}
-+
-+# (LLD) We do not need any other lld linker map records.
-+#
-+ARGIND == 2 && map_is_lld && /^[0-9a-f]{16} / {
-+	next;
-+}
-+
-+# (LD) Section records with just the section name at the start of the line
-+#      need to have the next line pulled in to determine whether it is a
-+#      loadable section.  If it is, the next line will contain a hex value
-+#      as first and second items.
-+#
-+ARGIND == 2 && !map_is_lld && NF == 1 && /^[^ ]/ {
-+	s = $0;
-+	getline;
-+	if ($1 !~ /^0x/ || $2 !~ /^0x/)
-+		next;
-+
-+	$0 = s " " $0;
-+}
-+
-+# (LD) Object records with just the section name denote records with a long
-+#      section name for which the remainder of the record can be found on the
-+#      next line.
-+#
-+# (This is also needed for vmlinux.o.map, when used.)
-+#
-+ARGIND >= 2 && !map_is_lld && NF == 1 && /^ [^ \*]/ {
-+	s = $0;
-+	getline;
-+	$0 = s " " $0;
-+}
-+
-+# Beginning a new section - done with the previous one (if any).
-+#
-+ARGIND == 2 && /^[^ ]/ {
-+	sect = 0;
-+}
-+
-+# Process a loadable section (we only care about .-sections).
-+#
-+# Record the section name and its base address.
-+# We also record the raw (non-stripped) address of the section because it can
-+# be used to identify an anchor record.
-+#
-+# Note:
-+# Since some AWK implementations cannot handle large integers, we strip off the
-+# first 4 hex digits from the address.  This is safe because the kernel space
-+# is not large enough for addresses to extend into those digits.  The portion
-+# to strip off is stored in addr_prefix as a regexp, so further clauses can
-+# perform a simple substitution to do the address stripping.
-+#
-+ARGIND == 2 && /^\./ {
-+	# Explicitly ignore a few sections that are not relevant here.
-+	if ($1 ~ /^\.orc_/ || $1 ~ /_sites$/ || $1 ~ /\.percpu/)
-+		next;
-+
-+	# Sections with a 0-address can be ignored as well.
-+	if ($2 ~ /^0x0+$/)
-+		next;
-+
-+	raw_addr = $2;
-+	addr_prefix = "^" substr($2, 1, 6);
-+	base = $2;
-+	sub(addr_prefix, "0x", base);
-+	base = strtonum(base);
-+	sect = $1;
-+	anchor = 0;
-+	sect_base[sect] = base;
-+	sect_size[sect] = strtonum($3);
-+
-+	if (dbg)
-+		printf "[%s] BASE   %016x\n", sect, base >"/dev/stderr";
-+
-+	next;
-+}
-+
-+# If we are not in a section we care about, we ignore the record.
-+#
-+ARGIND == 2 && !sect {
-+	next;
-+}
-+
-+# Record the first anchor symbol for the current section.
-+#
-+# An anchor record for the section bears the same raw address as the section
-+# record.
-+#
-+ARGIND == 2 && !anchor && NF == 4 && raw_addr == $1 && $3 == "=" && $4 == "." {
-+	anchor = sprintf("%s %08x-%08x = %s", sect, 0, 0, $2);
-+	sect_anchor[sect] = anchor;
-+
-+	if (dbg)
-+		printf "[%s] ANCHOR %016x = %s (.)\n", sect, 0, $2 >"/dev/stderr";
-+
-+	next;
-+}
-+
-+# If no anchor record was found for the current section, use the first symbol
-+# in the section as anchor.
-+#
-+ARGIND == 2 && !anchor && NF == 2 && $1 ~ /^0x/ && $2 !~ /^0x/ {
-+	addr = $1;
-+	sub(addr_prefix, "0x", addr);
-+	addr = strtonum(addr) - base;
-+	anchor = sprintf("%s %08x-%08x = %s", sect, addr, addr, $2);
-+	sect_anchor[sect] = anchor;
-+
-+	if (dbg)
-+		printf "[%s] ANCHOR %016x = %s\n", sect, addr, $2 >"/dev/stderr";
-+
-+	next;
-+}
-+
-+# The first occurrence of a section name in an object record establishes the
-+# addend (often 0) for that section.  This information is needed to handle
-+# sections that get combined in the final linking of vmlinux (e.g. .head.text
-+# getting included at the start of .text).
-+#
-+# If the section does not have a base yet, use the base of the encapsulating
-+# section.
-+#
-+ARGIND == 2 && sect && NF == 4 && /^ [^ \*]/ && !($1 in sect_addend) {
-+	if (!($1 in sect_base)) {
-+		sect_base[$1] = base;
-+
-+		if (dbg)
-+			printf "[%s] BASE   %016x\n", $1, base >"/dev/stderr";
-+	}
-+
-+	addr = $2;
-+	sub(addr_prefix, "0x", addr);
-+	addr = strtonum(addr);
-+	sect_addend[$1] = addr - sect_base[$1];
-+	sect_in[$1] = sect;
-+
-+	if (dbg)
-+		printf "[%s] ADDEND %016x - %016x = %016x\n",  $1, addr, base, sect_addend[$1] >"/dev/stderr";
-+
-+	# If the object is vmlinux.o then we will need vmlinux.o.map to get the
-+	# actual offsets of objects.
-+	if ($4 == "vmlinux.o")
-+		need_o_map = 1;
-+}
-+
-+# (3) Collect offset ranges (relative to the section base address) for built-in
-+# modules.
-+#
-+# If the final link was done using the actual objects, vmlinux.map contains all
-+# the information we need (see section (3a)).
-+# If linking was done using vmlinux.a as intermediary, we will need to process
-+# vmlinux.o.map (see section (3b)).
-+
-+# (3a) Determine offset range info using vmlinux.map.
-+#
-+# Since we are already processing vmlinux.map, the top level section that is
-+# being processed is already known.  If we do not have a base address for it,
-+# we do not need to process records for it.
-+#
-+# Given the object name, we determine the module(s) (if any) that the current
-+# object is associated with.
-+#
-+# If we were already processing objects for a (list of) module(s):
-+#  - If the current object belongs to the same module(s), update the range data
-+#    to include the current object.
-+#  - Otherwise, ensure that the end offset of the range is valid.
-+#
-+# If the current object does not belong to a built-in module, ignore it.
-+#
-+# If it does, we add a new built-in module offset range record.
-+#
-+ARGIND == 2 && !need_o_map && /^ [^ ]/ && NF == 4 && $3 != "0x0" {
-+	if (!(sect in sect_base))
-+		next;
-+
-+	# Turn the address into an offset from the section base.
-+	soff = $2;
-+	sub(addr_prefix, "0x", soff);
-+	soff = strtonum(soff) - sect_base[sect];
-+	eoff = soff + strtonum($3);
-+
-+	# Determine which (if any) built-in modules the object belongs to.
-+	mod = get_module_info($4);
-+
-+	# If we are processing a built-in module:
-+	#   - If the current object is within the same module, we update its
-+	#     entry by extending the range and move on
-+	#   - Otherwise:
-+	#       + If we are still processing within the same main section, we
-+	#         validate the end offset against the start offset of the
-+	#         current object (e.g. .rodata.str1.[18] objects are often
-+	#         listed with an incorrect size in the linker map)
-+	#       + Otherwise, we validate the end offset against the section
-+	#         size
-+	if (mod_name) {
-+		if (mod == mod_name) {
-+			mod_eoff = eoff;
-+			update_entry(mod_sect, mod_name, mod_soff, eoff);
-+
-+			next;
-+		} else if (sect == sect_in[mod_sect]) {
-+			if (mod_eoff > soff)
-+				update_entry(mod_sect, mod_name, mod_soff, soff);
-+		} else {
-+			v = sect_size[sect_in[mod_sect]];
-+			if (mod_eoff > v)
-+				update_entry(mod_sect, mod_name, mod_soff, v);
-+		}
-+	}
-+
-+	mod_name = mod;
-+
-+	# If we encountered an object that is not part of a built-in module, we
-+	# do not need to record any data.
-+	if (!mod)
-+		next;
-+
-+	# At this point, we encountered the start of a new built-in module.
-+	mod_name = mod;
-+	mod_soff = soff;
-+	mod_eoff = eoff;
-+	mod_sect = $1;
-+	update_entry($1, mod, soff, mod_eoff);
-+
-+	next;
-+}
-+
-+# If we do not need to parse the vmlinux.o.map file, we are done.
-+#
-+ARGIND == 3 && !need_o_map {
-+	if (dbg)
-+		printf "Note: %s is not needed.\n", FILENAME >"/dev/stderr";
-+	exit;
-+}
-+
-+# (3) Collect offset ranges (relative to the section base address) for built-in
-+# modules.
-+#
-+
-+# (LLD) Convert an object record from lld format to ld format.
-+#
-+ARGIND == 3 && map_is_lld && NF == 5 && $5 ~ /:\(/ {
-+	gsub(/\)/, "");
-+	sub(/:\(/, " ");
-+
-+	sect = $6;
-+	if (!(sect in sect_addend))
-+		next;
-+
-+	sub(/ vmlinux\.a\(/, " ");
-+	$0 = " "sect " 0x"$1 " 0x"$3 " " $5;
-+}
-+
-+# (3b) Determine offset range info using vmlinux.o.map.
-+#
-+# If we do not know an addend for the object's section, we are interested in
-+# anything within that section.
-+#
-+# Determine the top-level section that the object's section was included in
-+# during the final link.  This is the section name offset range data will be
-+# associated with for this object.
-+#
-+# The remainder of the processing of the current object record follows the
-+# procedure outlined in (3a).
-+#
-+ARGIND == 3 && /^ [^ ]/ && NF == 4 && $3 != "0x0" {
-+	osect = $1;
-+	if (!(osect in sect_addend))
-+		next;
-+
-+	# We need to work with the main section.
-+	sect = sect_in[osect];
-+
-+	# Turn the address into an offset from the section base.
-+	soff = $2;
-+	sub(addr_prefix, "0x", soff);
-+	soff = strtonum(soff) + sect_addend[osect];
-+	eoff = soff + strtonum($3);
-+
-+	# Determine which (if any) built-in modules the object belongs to.
-+	mod = get_module_info($4);
-+
-+	# If we are processing a built-in module:
-+	#   - If the current object is within the same module, we update its
-+	#     entry by extending the range and move on
-+	#   - Otherwise:
-+	#       + If we are still processing within the same main section, we
-+	#         validate the end offset against the start offset of the
-+	#         current object (e.g. .rodata.str1.[18] objects are often
-+	#         listed with an incorrect size in the linker map)
-+	#       + Otherwise, we validate the end offset against the section
-+	#         size
-+	if (mod_name) {
-+		if (mod == mod_name) {
-+			mod_eoff = eoff;
-+			update_entry(mod_sect, mod_name, mod_soff, eoff);
-+
-+			next;
-+		} else if (sect == sect_in[mod_sect]) {
-+			if (mod_eoff > soff)
-+				update_entry(mod_sect, mod_name, mod_soff, soff);
-+		} else {
-+			v = sect_size[sect_in[mod_sect]];
-+			if (mod_eoff > v)
-+				update_entry(mod_sect, mod_name, mod_soff, v);
-+		}
-+	}
-+
-+	mod_name = mod;
-+
-+	# If we encountered an object that is not part of a built-in module, we
-+	# do not need to record any data.
-+	if (!mod)
-+		next;
-+
-+	# At this point, we encountered the start of a new built-in module.
-+	mod_name = mod;
-+	mod_soff = soff;
-+	mod_eoff = eoff;
-+	mod_sect = osect;
-+	update_entry(osect, mod, soff, mod_eoff);
-+
-+	next;
-+}
-+
-+# (4) Generate the output.
-+#
-+# Anchor records are added for each section that contains offset range data
-+# records.  They are added at an adjusted section base address (base << 1) to
-+# ensure they come first in the second records (see update_entry() above for
-+# more information).
-+#
-+# All entries are sorted by (adjusted) address to ensure that the output can be
-+# parsed in strict ascending address order.
-+#
-+END {
-+	for (sect in count) {
-+		if (sect in sect_anchor)
-+			entries[sect_base[sect] * 2] = sect_anchor[sect];
-+	}
-+
-+	n = asorti(entries, indices);
-+	for (i = 1; i <= n; i++)
-+		print entries[indices[i]];
-+}
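
The script above emits modules.builtin.ranges with one anchor record per
section ("<sect> <off>-<off> = <symbol>") followed by per-module offset
ranges ("<sect> <start>-<end> <module>"), all offsets relative to the section
base. A consumer resolves them by looking up the anchor symbol's load address
(for example in /proc/kallsyms) and adding the offsets. A minimal sketch of
that arithmetic; the two records and the anchor address are hypothetical.

/* Sketch: turn a modules.builtin.ranges entry into absolute addresses,
 * given the runtime address of the section's anchor symbol. */
#include <stdio.h>

int main(void)
{
	/* Hypothetical anchor record: ".text 00000000-00000000 = _text",
	 * with _text loaded at this address per /proc/kallsyms. */
	unsigned long long anchor_addr = 0xffffffff81000000ULL;
	unsigned long long anchor_off = 0x0;

	/* Hypothetical range record: ".text 001c2340-001c5f10 dm_mod" */
	unsigned long long start_off = 0x001c2340, end_off = 0x001c5f10;
	const char *module = "dm_mod";

	unsigned long long base = anchor_addr - anchor_off;

	printf("%s occupies [%#llx, %#llx) in .text\n",
	       module, base + start_off, base + end_off);
	return 0;
}
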
-diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
-index f48d72d22dc2a..d7e6cd7781256 100644
---- a/scripts/mod/modpost.c
-+++ b/scripts/mod/modpost.c
-@@ -733,6 +733,7 @@ static const char *const section_white_list[] =
- 	".comment*",
- 	".debug*",
- 	".zdebug*",		/* Compressed debug sections. */
-+	".ctf",			/* Type info */
- 	".GCC.command.line",	/* record-gcc-switches */
- 	".mdebug*",        /* alpha, score, mips etc. */
- 	".pdr",            /* alpha, score, mips etc. */
-diff --git a/scripts/modules_builtin.c b/scripts/modules_builtin.c
-new file mode 100644
-index 0000000000000..df52932a4417b
---- /dev/null
-+++ b/scripts/modules_builtin.c
-@@ -0,0 +1,200 @@
-+/* SPDX-License-Identifier: GPL-2.0 */
-+/*
-+ * A simple modules_builtin reader.
-+ *
-+ * (C) 2014, 2022 Oracle, Inc.  All rights reserved.
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
-+ */
-+
-+#include <errno.h>
-+#include <stdio.h>
-+#include <stdlib.h>
-+#include <string.h>
-+
-+#include "modules_builtin.h"
-+
-+/*
-+ * Read a modules.builtin.objs file and translate it into a stream of
-+ * name / module-name pairs.
-+ */
-+
-+/*
-+ * Construct a modules.builtin.objs iterator.
-+ */
-+struct modules_builtin_iter *
-+modules_builtin_iter_new(const char *modules_builtin_file)
-+{
-+	struct modules_builtin_iter *i;
-+
-+	i = calloc(1, sizeof(struct modules_builtin_iter));
-+	if (i == NULL)
-+		return NULL;
-+
-+	i->f = fopen(modules_builtin_file, "r");
-+
-+	if (i->f == NULL) {
-+		fprintf(stderr, "Cannot open builtin module file %s: %s\n",
-+			modules_builtin_file, strerror(errno));
-+		return NULL;
-+	}
-+
-+	return i;
-+}
-+
-+/*
-+ * Iterate, returning a new null-terminated array of object file names, and a
-+ * new dynamically-allocated module name.  (The module name passed in is freed.)
-+ *
-+ * The array of object file names should be freed by the caller: the strings it
-+ * points to are owned by the iterator, and should not be freed.
-+ */
-+
-+char ** __attribute__((__nonnull__))
-+modules_builtin_iter_next(struct modules_builtin_iter *i, char **module_name)
-+{
-+	size_t npaths = 1;
-+	char **module_paths;
-+	char *last_slash;
-+	char *last_dot;
-+	char *trailing_linefeed;
-+	char *object_name = i->line;
-+	char *dash;
-+	int composite = 0;
-+
-+	/*
-+	 * Read in all module entries, computing the suffixless, pathless name
-+	 * of the module and building the next arrayful of object file names for
-+	 * return.
-+	 *
-+	 * Modules can consist of multiple files: in this case, the portion
-+	 * before the colon is the path to the module (as before): the portion
-+	 * after the colon is a space-separated list of files that should be
-+	 * considered part of this module.  In this case, the portion before the
-+	 * colon is an "object file" that does not actually exist: it is merged
-+	 * into built-in.a without ever being written out.
-+	 *
-+	 * All module names have - translated to _, to match what is done to the
-+	 * names of the same things when built as modules.
-+	 */
-+
-+	/*
-+	 * Reinvocation of exhausted iterator. Return NULL, once.
-+	 */
-+retry:
-+	if (getline(&i->line, &i->line_size, i->f) < 0) {
-+		if (ferror(i->f)) {
-+			fprintf(stderr, "Error reading from modules_builtin file:"
-+				" %s\n", strerror(errno));
-+			exit(1);
-+		}
-+		rewind(i->f);
-+		return NULL;
-+	}
-+
-+	if (i->line[0] == '\0')
-+		goto retry;
-+
-+	trailing_linefeed = strchr(i->line, '\n');
-+	if (trailing_linefeed != NULL)
-+		*trailing_linefeed = '\0';
-+
-+	/*
-+	 * Slice the line in two at the colon, if any.  If there is anything
-+	 * past the ': ', this is a composite module.  (We allow for no colon
-+	 * for robustness, even though one should always be present.)
-+	 */
-+	if (strchr(i->line, ':') != NULL) {
-+		char *name_start;
-+
-+		object_name = strchr(i->line, ':');
-+		*object_name = '\0';
-+		object_name++;
-+		name_start = object_name + strspn(object_name, " \n");
-+		if (*name_start != '\0') {
-+			composite = 1;
-+			object_name = name_start;
-+		}
-+	}
-+
-+	/*
-+	 * Figure out the module name.
-+	 */
-+	last_slash = strrchr(i->line, '/');
-+	last_slash = (!last_slash) ? i->line :
-+		last_slash + 1;
-+	free(*module_name);
-+	*module_name = strdup(last_slash);
-+	dash = *module_name;
-+
-+	while (dash != NULL) {
-+		dash = strchr(dash, '-');
-+		if (dash != NULL)
-+			*dash = '_';
-+	}
-+
-+	last_dot = strrchr(*module_name, '.');
-+	if (last_dot != NULL)
-+		*last_dot = '\0';
-+
-+	/*
-+	 * Multifile separator? Object file names explicitly stated:
-+	 * slice them up and shuffle them in.
-+	 *
-+	 * The array size may be an overestimate if any object file
-+	 * names start or end with spaces (very unlikely) but cannot be
-+	 * an underestimate.  (Check for it anyway.)
-+	 */
-+	if (composite) {
-+		char *one_object;
-+
-+		for (npaths = 0, one_object = object_name;
-+		     one_object != NULL;
-+		     npaths++, one_object = strchr(one_object + 1, ' '));
-+	}
-+
-+	module_paths = malloc((npaths + 1) * sizeof(char *));
-+	if (!module_paths) {
-+		fprintf(stderr, "%s: out of memory on module %s\n", __func__,
-+			*module_name);
-+		exit(1);
-+	}
-+
-+	if (composite) {
-+		char *one_object;
-+		size_t i = 0;
-+
-+		while ((one_object = strsep(&object_name, " ")) != NULL) {
-+			if (i >= npaths) {
-+				fprintf(stderr, "%s: num_objs overflow on module "
-+					"%s: this is a bug.\n", __func__,
-+					*module_name);
-+				exit(1);
-+			}
-+
-+			module_paths[i++] = one_object;
-+		}
-+	} else
-+		module_paths[0] = i->line;	/* untransformed module name */
-+
-+	module_paths[npaths] = NULL;
-+
-+	return module_paths;
-+}
-+
-+/*
-+ * Free an iterator. Can be called while iteration is underway, so even
-+ * state that is freed at the end of iteration must be freed here too.
-+ */
-+void
-+modules_builtin_iter_free(struct modules_builtin_iter *i)
-+{
-+	if (i == NULL)
-+		return;
-+	fclose(i->f);
-+	free(i->line);
-+	free(i);
-+}
-diff --git a/scripts/modules_builtin.h b/scripts/modules_builtin.h
-new file mode 100644
-index 0000000000000..5138792b42ef0
---- /dev/null
-+++ b/scripts/modules_builtin.h
-@@ -0,0 +1,48 @@
-+/* SPDX-License-Identifier: GPL-2.0 */
-+/*
-+ * A simple modules.builtin.objs reader.
-+ *
-+ * (C) 2014, 2022 Oracle, Inc.  All rights reserved.
-+ *
-+ * This program is free software; you can redistribute it and/or modify
-+ * it under the terms of the GNU General Public License as published by
-+ * the Free Software Foundation; either version 2 of the License, or
-+ * (at your option) any later version.
-+ */
-+
-+#ifndef _LINUX_MODULES_BUILTIN_H
-+#define _LINUX_MODULES_BUILTIN_H
-+
-+#include <stdio.h>
-+#include <stddef.h>
-+
-+/*
-+ * modules.builtin.objs iteration state.
-+ */
-+struct modules_builtin_iter {
-+	FILE *f;
-+	char *line;
-+	size_t line_size;
-+};
-+
-+/*
-+ * Construct a modules_builtin.objs iterator.
-+ */
-+struct modules_builtin_iter *
-+modules_builtin_iter_new(const char *modules_builtin_file);
-+
-+/*
-+ * Iterate, returning a new null-terminated array of object file names, and a
-+ * new dynamically-allocated module name.  (The module name passed in is freed.)
-+ *
-+ * The array of object file names should be freed by the caller: the strings it
-+ * points to are owned by the iterator, and should not be freed.
-+ */
-+
-+char ** __attribute__((__nonnull__))
-+modules_builtin_iter_next(struct modules_builtin_iter *i, char **module_name);
-+
-+void
-+modules_builtin_iter_free(struct modules_builtin_iter *i);
-+
-+#endif
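
The iterator declared above (and implemented in scripts/modules_builtin.c) is
how ctfarchive walks modules.builtin.objs. A minimal sketch of using it
standalone, assuming it is compiled together with scripts/modules_builtin.c;
it prints each built-in module followed by its constituent objects.

/* Sketch: list built-in modules and their objects via the iterator above.
 * Build together with scripts/modules_builtin.c. */
#include <stdio.h>
#include <stdlib.h>
#include "modules_builtin.h"

int main(int argc, char **argv)
{
	const char *file = argc > 1 ? argv[1] : "modules.builtin.objs";
	struct modules_builtin_iter *it;
	char *module_name = NULL;
	char **objs;

	it = modules_builtin_iter_new(file);
	if (!it)
		return 1;

	while ((objs = modules_builtin_iter_next(it, &module_name)) != NULL) {
		printf("%s:\n", module_name);
		for (size_t i = 0; objs[i] != NULL; i++)
			printf("  %s\n", objs[i]);
		free(objs);	/* strings remain owned by the iterator */
	}

	free(module_name);
	modules_builtin_iter_free(it);
	return 0;
}
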
-diff --git a/scripts/package/kernel.spec b/scripts/package/kernel.spec
-index c52d517b93647..8f75906a96314 100644
---- a/scripts/package/kernel.spec
-+++ b/scripts/package/kernel.spec
-@@ -53,12 +53,18 @@ patch -p1 < %{SOURCE2}
- 
- %build
- %{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release}
-+%if %{with_ctf}
-+%{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release} ctf
-+%endif
- 
- %install
- mkdir -p %{buildroot}/lib/modules/%{KERNELRELEASE}
- cp $(%{make} %{makeflags} -s image_name) %{buildroot}/lib/modules/%{KERNELRELEASE}/vmlinuz
- # DEPMOD=true makes depmod no-op. We do not package depmod-generated files.
- %{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} DEPMOD=true modules_install
-+%if %{with_ctf}
-+%{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} ctf_install
-+%endif
- %{make} %{makeflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install
- cp System.map %{buildroot}/lib/modules/%{KERNELRELEASE}
- cp .config %{buildroot}/lib/modules/%{KERNELRELEASE}/config
-diff --git a/scripts/package/mkspec b/scripts/package/mkspec
-index ce201bfa8377c..aeb43c7ab1229 100755
---- a/scripts/package/mkspec
-+++ b/scripts/package/mkspec
-@@ -21,10 +21,16 @@ else
- echo '%define with_devel 0'
- fi
- 
-+if grep -q CONFIG_CTF=y include/config/auto.conf; then
-+echo '%define with_ctf %{?_without_ctf: 0} %{?!_without_ctf: 1}'
-+else
-+echo '%define with_ctf 0'
-+fi
- cat<<EOF
- %define ARCH ${ARCH}
- %define KERNELRELEASE ${KERNELRELEASE}
- %define pkg_release $("${srctree}/init/build-version")
-+
- EOF
- 
- cat "${srctree}/scripts/package/kernel.spec"
-diff --git a/scripts/remove-ctf-lds.awk b/scripts/remove-ctf-lds.awk
-new file mode 100644
-index 0000000000000..5d94d6ee99227
---- /dev/null
-+++ b/scripts/remove-ctf-lds.awk
-@@ -0,0 +1,12 @@
-+# SPDX-License-Identifier: GPL-2.0
-+# See Makefile.vmlinux_o
-+
-+BEGIN {
-+    discards = 0; p = 0
-+}
-+
-+/^====/ { p = 1; next; }
-+p && /\.ctf/ { next; }
-+p && !discards && /DISCARD/ { sub(/\} *$/, " *(.ctf) }"); discards = 1 }
-+p && /^\}/ && !discards { print "  /DISCARD/ : { *(.ctf) }"; }
-+p { print $0; }
-diff --git a/scripts/verify_builtin_ranges.awk b/scripts/verify_builtin_ranges.awk
-new file mode 100755
-index 0000000000000..f513841da83e1
---- /dev/null
-+++ b/scripts/verify_builtin_ranges.awk
-@@ -0,0 +1,356 @@
-+#!/usr/bin/gawk -f
-+# SPDX-License-Identifier: GPL-2.0
-+# verify_builtin_ranges.awk: Verify address range data for builtin modules
-+# Written by Kris Van Hees <kris.van.hees@oracle.com>
-+#
-+# Usage: verify_builtin_ranges.awk modules.builtin.ranges System.map \
-+#				   modules.builtin vmlinux.map vmlinux.o.map \
-+#				   [ <build-dir> ]
-+#
-+
-+# Return the module name(s) (if any) associated with the given object.
-+#
-+# If we have seen this object before, return information from the cache.
-+# Otherwise, retrieve it from the corresponding .cmd file.
-+#
-+function get_module_info(fn, mod, obj, s) {
-+	if (fn in omod)
-+		return omod[fn];
-+
-+	if (match(fn, /\/[^/]+$/) == 0)
-+		return "";
-+
-+	obj = fn;
-+	mod = "";
-+	fn = kdir "/" substr(fn, 1, RSTART) "." substr(fn, RSTART + 1) ".cmd";
-+	if (getline s <fn == 1) {
-+		if (match(s, /DKBUILD_MODFILE=['"]+[^'"]+/) > 0) {
-+			mod = substr(s, RSTART + 16, RLENGTH - 16);
-+			gsub(/['"]/, "", mod);
-+		} else if (match(s, /RUST_MODFILE=[^ ]+/) > 0)
-+			mod = substr(s, RSTART + 13, RLENGTH - 13);
-+	} else {
-+		print "ERROR: Failed to read: " fn "\n\n" \
-+		      "  Invalid kernel build directory (" kdir ")\n" \
-+		      "  or its content does not match " ARGV[1] >"/dev/stderr";
-+		close(fn);
-+		total = 0;
-+		exit(1);
-+	}
-+	close(fn);
-+
-+	# A single module (common case) also reflects objects that are not part
-+	# of a module.  Some of those objects have names that are also a module
-+	# name (e.g. core).  We check the associated module file name, and if
-+	# they do not match, the object is not part of a module.
-+	if (mod !~ / /) {
-+		if (!(mod in mods))
-+			mod = "";
-+	}
-+
-+	gsub(/([^/ ]*\/)+/, "", mod);
-+	gsub(/-/, "_", mod);
-+
-+	# At this point, mod is a single (valid) module name, or a list of
-+	# module names (that do not need validation).
-+	omod[obj] = mod;
-+
-+	return mod;
-+}
-+
-+# Return a representative integer value for a given hexadecimal address.
-+#
-+# Since all kernel addresses fall within the same memory region, we can safely
-+# strip off the first 6 hex digits before performing the hex-to-dec conversion,
-+# thereby avoiding integer overflows.
-+#
-+function addr2val(val) {
-+	sub(/^0x/, "", val);
-+	if (length(val) == 16)
-+		val = substr(val, 5);
-+	return strtonum("0x" val);
-+}
-+
-+# Determine the kernel build directory to use (default is .).
-+#
-+BEGIN {
-+	if (ARGC > 6) {
-+		kdir = ARGV[ARGC - 1];
-+		ARGV[ARGC - 1] = "";
-+	} else
-+		kdir = ".";
-+}
-+
-+# (1) Load the built-in module address range data.
-+#
-+ARGIND == 1 {
-+	ranges[FNR] = $0;
-+	rcnt++;
-+	next;
-+}
-+
-+# (2) Annotate System.map symbols with module names.
-+#
-+ARGIND == 2 {
-+	addr = addr2val($1);
-+	name = $3;
-+
-+	while (addr >= mod_eaddr) {
-+		if (sect_symb) {
-+			if (sect_symb != name)
-+				next;
-+
-+			sect_base = addr - sect_off;
-+			if (dbg)
-+				printf "[%s] BASE (%s) %016x - %016x = %016x\n", sect_name, sect_symb, addr, sect_off, sect_base >"/dev/stderr";
-+			sect_symb = 0;
-+		}
-+
-+		if (++ridx > rcnt)
-+			break;
-+
-+		$0 = ranges[ridx];
-+		sub(/-/, " ");
-+		if ($4 != "=") {
-+			sub(/-/, " ");
-+			mod_saddr = strtonum("0x" $2) + sect_base;
-+			mod_eaddr = strtonum("0x" $3) + sect_base;
-+			$1 = $2 = $3 = "";
-+			sub(/^ +/, "");
-+			mod_name = $0;
-+
-+			if (dbg)
-+				printf "[%s] %s from %016x to %016x\n", sect_name, mod_name, mod_saddr, mod_eaddr >"/dev/stderr";
-+		} else {
-+			sect_name = $1;
-+			sect_off = strtonum("0x" $2);
-+			sect_symb = $5;
-+		}
-+	}
-+
-+	idx = addr"-"name;
-+	if (addr >= mod_saddr && addr < mod_eaddr)
-+		sym2mod[idx] = mod_name;
-+
-+	next;
-+}
-+
-+# Once we are done annotating the System.map, we no longer need the ranges data.
-+#
-+FNR == 1 && ARGIND == 3 {
-+	delete ranges;
-+}
-+
-+# (3) Build a lookup map of built-in module names.
-+#
-+# Lines from modules.builtin will be like:
-+#	kernel/crypto/lzo-rle.ko
-+# and we record the object name "crypto/lzo-rle".
-+#
-+ARGIND == 3 {
-+	sub(/kernel\//, "");			# strip off "kernel/" prefix
-+	sub(/\.ko$/, "");			# strip off .ko suffix
-+
-+	mods[$1] = 1;
-+	next;
-+}
-+
-+# (4) Get a list of symbols (per object).
-+#
-+# Symbols by object are read from vmlinux.map, with fallback to vmlinux.o.map
-+# if vmlinux is found to have linked in vmlinux.o.
-+#
-+
-+# If we were able to get the data we need from vmlinux.map, there is no need to
-+# process vmlinux.o.map.
-+#
-+FNR == 1 && ARGIND == 5 && total > 0 {
-+	if (dbg)
-+		printf "Note: %s is not needed.\n", FILENAME >"/dev/stderr";
-+	exit;
-+}
-+
-+# First determine whether we are dealing with a GNU ld or LLVM lld linker map.
-+#
-+ARGIND >= 4 && FNR == 1 && NF == 7 && $1 == "VMA" && $7 == "Symbol" {
-+	map_is_lld = 1;
-+	next;
-+}
-+
-+# (LLD) Convert a section record from lld format to ld format.
-+#
-+ARGIND >= 4 && map_is_lld && NF == 5 && /[0-9] [^ ]/ {
-+	$0 = $5 " 0x"$1 " 0x"$3 " load address 0x"$2;
-+}
-+
-+# (LLD) Convert an object record from lld format to ld format.
-+#
-+ARGIND >= 4 && map_is_lld && NF == 5 && $5 ~ /:\(\./ {
-+	gsub(/\)/, "");
-+	sub(/:\(/, " ");
-+	sub(/ vmlinux\.a\(/, " ");
-+	$0 = " "$6 " 0x"$1 " 0x"$3 " " $5;
-+}
-+
-+# (LLD) Convert a symbol record from lld format to ld format.
-+#
-+ARGIND >= 4 && map_is_lld && NF == 5 && $5 ~ /^[A-Za-z_][A-Za-z0-9_]*$/ {
-+	$0 = "  0x" $1 " " $5;
-+}
-+
-+# (LLD) We do not need any other lld linker map records.
-+#
-+ARGIND >= 4 && map_is_lld && /^[0-9a-f]{16} / {
-+	next;
-+}
-+
-+# Handle section records with long section names (spilling onto a 2nd line).
-+#
-+ARGIND >= 4 && !map_is_lld && NF == 1 && /^[^ ]/ {
-+	s = $0;
-+	getline;
-+	$0 = s " " $0;
-+}
-+
-+# Next section - previous one is done.
-+#
-+ARGIND >= 4 && /^[^ ]/ {
-+	sect = 0;
-+}
-+
-+# Get the (top level) section name.
-+#
-+ARGIND >= 4 && /^[^ ]/ && $2 ~ /^0x/ && $3 ~ /^0x/ {
-+	# Empty section or per-CPU section - ignore.
-+	if (NF < 3 || $1 ~ /\.percpu/) {
-+		sect = 0;
-+		next;
-+	}
-+
-+	sect = $1;
-+
-+	next;
-+}
-+
-+# If we are not currently in a section we care about, ignore records.
-+#
-+!sect {
-+	next;
-+}
-+
-+# Handle object records with long section names (spilling onto a 2nd line).
-+#
-+ARGIND >= 4 && /^ [^ \*]/ && NF == 1 {
-+	# If the section name is long, the remainder of the entry is found on
-+	# the next line.
-+	s = $0;
-+	getline;
-+	$0 = s " " $0;
-+}
-+
-+# If the object is vmlinux.o, we need to consult vmlinux.o.map for per-object
-+# symbol information.
-+#
-+ARGIND == 4 && /^ [^ ]/ && NF == 4 {
-+	idx = sect":"$1;
-+	if (!(idx in sect_addend)) {
-+		sect_addend[idx] = addr2val($2);
-+		if (dbg)
-+			printf "ADDEND %s = %016x\n", idx, sect_addend[idx] >"/dev/stderr";
-+	}
-+	if ($4 == "vmlinux.o") {
-+		need_o_map = 1;
-+		next;
-+	}
-+}
-+
-+# If data from vmlinux.o.map is needed, we only process section and object
-+# records from vmlinux.map to determine which section we need to pay attention
-+# to in vmlinux.o.map.  So skip everything else from vmlinux.map.
-+#
-+ARGIND == 4 && need_o_map {
-+	next;
-+}
-+
-+# Get module information for the current object.
-+#
-+ARGIND >= 4 && /^ [^ ]/ && NF == 4 {
-+	msect = $1;
-+	mod_name = get_module_info($4);
-+	mod_eaddr = addr2val($2) + addr2val($3);
-+
-+	next;
-+}
-+
-+# Process a symbol record.
-+#
-+# Evaluate the module information obtained from vmlinux.map (or vmlinux.o.map)
-+# as follows:
-+#  - For all symbols in a given object:
-+#     - If the symbol is annotated with the same module name(s) that the object
-+#       belongs to, count it as a match.
-+#     - Otherwise:
-+#        - If the symbol is known to have duplicates of which at least one is
-+#          in a built-in module, disregard it.
-+#        - If the symbol is not annotated with any module name(s) AND the
-+#          object belongs to built-in modules, count it as missing.
-+#        - Otherwise, count it as a mismatch.
-+#
-+ARGIND >= 4 && /^ / && NF == 2 && $1 ~ /^0x/ {
-+	idx = sect":"msect;
-+	if (!(idx in sect_addend))
-+		next;
-+
-+	addr = addr2val($1);
-+
-+	# Handle the rare but annoying case where a 0-size symbol is placed at
-+	# the byte *after* the module range.  Based on vmlinux.map it will be
-+	# considered part of the current object, but it falls just beyond the
-+	# module address range.  Unfortunately, its address could be at the
-+	# start of another built-in module, so the only safe thing to do is to
-+	# ignore it.
-+	if (mod_name && addr == mod_eaddr)
-+		next;
-+
-+	# If we are processing vmlinux.o.map, we need to apply the base address
-+	# of the section to the relative address on the record.
-+	#
-+	if (ARGIND == 5)
-+		addr += sect_addend[idx];
-+
-+	idx = addr"-"$2;
-+	mod = "";
-+	if (idx in sym2mod) {
-+		mod = sym2mod[idx];
-+		if (sym2mod[idx] == mod_name) {
-+			mod_matches++;
-+			matches++;
-+		} else if (mod_name == "") {
-+			print $2 " in " sym2mod[idx] " (should NOT be)";
-+			mismatches++;
-+		} else {
-+			print $2 " in " sym2mod[idx] " (should be " mod_name ")";
-+			mismatches++;
-+		}
-+	} else if (mod_name != "") {
-+		print $2 " should be in " mod_name;
-+		missing++;
-+	} else
-+		matches++;
-+
-+	total++;
-+
-+	next;
-+}
-+
-+# Issue the comparison report.
-+#
-+END {
-+	if (total) {
-+		printf "Verification of %s:\n", ARGV[1];
-+		printf "  Correct matches:  %6d (%d%% of total)\n", matches, 100 * matches / total;
-+		printf "    Module matches: %6d (%d%% of matches)\n", mod_matches, 100 * mod_matches / matches;
-+		printf "  Mismatches:       %6d (%d%% of total)\n", mismatches, 100 * mismatches / total;
-+		printf "  Missing:          %6d (%d%% of total)\n", missing, 100 * missing / total;
-+	}
-+}


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-10-04 15:22 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-10-04 15:22 UTC (permalink / raw
  To: gentoo-commits

commit:     78a0c5de0cf3d9086eb4023a7425f4802f64d3a6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Oct  4 15:22:11 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Oct  4 15:22:11 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=78a0c5de

Linux patch 6.10.13

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1012_linux-6.10.13.patch | 28977 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 28981 insertions(+)

diff --git a/0000_README b/0000_README
index f735905c..fdea4bd1 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1011_linux-6.10.12.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.12
 
+Patch:  1012_linux-6.10.13.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.13
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1012_linux-6.10.13.patch b/1012_linux-6.10.13.patch
new file mode 100644
index 00000000..0fe55b0f
--- /dev/null
+++ b/1012_linux-6.10.13.patch
@@ -0,0 +1,28977 @@
+diff --git a/.gitignore b/.gitignore
+index c59dc60ba62ef0..7acc74c54ded5b 100644
+--- a/.gitignore
++++ b/.gitignore
+@@ -136,7 +136,6 @@ GTAGS
+ # id-utils files
+ ID
+ 
+-*.orig
+ *~
+ \#*#
+ 
+diff --git a/Documentation/ABI/testing/sysfs-bus-iio-filter-admv8818 b/Documentation/ABI/testing/sysfs-bus-iio-filter-admv8818
+index 31dbb390573ff2..c431f0a13cf502 100644
+--- a/Documentation/ABI/testing/sysfs-bus-iio-filter-admv8818
++++ b/Documentation/ABI/testing/sysfs-bus-iio-filter-admv8818
+@@ -3,7 +3,7 @@ KernelVersion:
+ Contact:	linux-iio@vger.kernel.org
+ Description:
+ 		Reading this returns the valid values that can be written to the
+-		on_altvoltage0_mode attribute:
++		filter_mode attribute:
+ 
+ 		- auto -> Adjust bandpass filter to track changes in input clock rate.
+ 		- manual -> disable/unregister the clock rate notifier / input clock tracking.
+diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
+index 50327c05be8d1b..39c52385f11fb3 100644
+--- a/Documentation/arch/arm64/silicon-errata.rst
++++ b/Documentation/arch/arm64/silicon-errata.rst
+@@ -55,6 +55,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Ampere         | AmpereOne       | AC03_CPU_38     | AMPERE_ERRATUM_AC03_CPU_38  |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Ampere         | AmpereOne AC04  | AC04_CPU_10     | AMPERE_ERRATUM_AC03_CPU_38  |
+++----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A510     | #2457168        | ARM64_ERRATUM_2457168       |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/devicetree/bindings/iio/magnetometer/asahi-kasei,ak8975.yaml b/Documentation/devicetree/bindings/iio/magnetometer/asahi-kasei,ak8975.yaml
+index 9790f75fc669ef..fe5145d3b73cf2 100644
+--- a/Documentation/devicetree/bindings/iio/magnetometer/asahi-kasei,ak8975.yaml
++++ b/Documentation/devicetree/bindings/iio/magnetometer/asahi-kasei,ak8975.yaml
+@@ -23,7 +23,6 @@ properties:
+           - ak8963
+           - ak09911
+           - ak09912
+-          - ak09916
+         deprecated: true
+ 
+   reg:
+diff --git a/Documentation/devicetree/bindings/pci/fsl,layerscape-pcie.yaml b/Documentation/devicetree/bindings/pci/fsl,layerscape-pcie.yaml
+index 793986c5af7ff3..daeab5c0758d14 100644
+--- a/Documentation/devicetree/bindings/pci/fsl,layerscape-pcie.yaml
++++ b/Documentation/devicetree/bindings/pci/fsl,layerscape-pcie.yaml
+@@ -22,18 +22,20 @@ description:
+ 
+ properties:
+   compatible:
+-    enum:
+-      - fsl,ls1021a-pcie
+-      - fsl,ls2080a-pcie
+-      - fsl,ls2085a-pcie
+-      - fsl,ls2088a-pcie
+-      - fsl,ls1088a-pcie
+-      - fsl,ls1046a-pcie
+-      - fsl,ls1043a-pcie
+-      - fsl,ls1012a-pcie
+-      - fsl,ls1028a-pcie
+-      - fsl,lx2160a-pcie
+-
++    oneOf:
++      - enum:
++          - fsl,ls1012a-pcie
++          - fsl,ls1021a-pcie
++          - fsl,ls1028a-pcie
++          - fsl,ls1043a-pcie
++          - fsl,ls1046a-pcie
++          - fsl,ls1088a-pcie
++          - fsl,ls2080a-pcie
++          - fsl,ls2085a-pcie
++          - fsl,ls2088a-pcie
++      - items:
++          - const: fsl,lx2160ar2-pcie
++          - const: fsl,ls2088a-pcie
+   reg:
+     maxItems: 2
+ 
+diff --git a/Documentation/devicetree/bindings/spi/spi-nxp-fspi.yaml b/Documentation/devicetree/bindings/spi/spi-nxp-fspi.yaml
+index 4a5f41bde00f3c..902db92da83207 100644
+--- a/Documentation/devicetree/bindings/spi/spi-nxp-fspi.yaml
++++ b/Documentation/devicetree/bindings/spi/spi-nxp-fspi.yaml
+@@ -21,6 +21,7 @@ properties:
+           - nxp,imx8mm-fspi
+           - nxp,imx8mp-fspi
+           - nxp,imx8qxp-fspi
++          - nxp,imx8ulp-fspi
+           - nxp,lx2160a-fspi
+       - items:
+           - enum:
+diff --git a/Documentation/driver-api/ipmi.rst b/Documentation/driver-api/ipmi.rst
+index e224e47b6b0944..dfa021eacd63c4 100644
+--- a/Documentation/driver-api/ipmi.rst
++++ b/Documentation/driver-api/ipmi.rst
+@@ -540,7 +540,7 @@ at module load time (for a module) with::
+ 	alerts_broken
+ 
+ The addresses are normal I2C addresses.  The adapter is the string
+-name of the adapter, as shown in /sys/class/i2c-adapter/i2c-<n>/name.
++name of the adapter, as shown in /sys/bus/i2c/devices/i2c-<n>/name.
+ It is *NOT* i2c-<n> itself.  Also, the comparison is done ignoring
+ spaces, so if the name is "This is an I2C chip" you can say
+ adapter_name=ThisisanI2cchip.  This is because it's hard to pass in
+diff --git a/Documentation/filesystems/mount_api.rst b/Documentation/filesystems/mount_api.rst
+index 9aaf6ef75eb53b..317934c9e8fcac 100644
+--- a/Documentation/filesystems/mount_api.rst
++++ b/Documentation/filesystems/mount_api.rst
+@@ -645,6 +645,8 @@ The members are as follows:
+ 	fs_param_is_blockdev	Blockdev path		* Needs lookup
+ 	fs_param_is_path	Path			* Needs lookup
+ 	fs_param_is_fd		File descriptor		result->int_32
++	fs_param_is_uid		User ID (u32)           result->uid
++	fs_param_is_gid		Group ID (u32)          result->gid
+ 	=======================	=======================	=====================
+ 
+      Note that if the value is of fs_param_is_bool type, fs_parse() will try
+@@ -678,6 +680,8 @@ The members are as follows:
+ 	fsparam_bdev()		fs_param_is_blockdev
+ 	fsparam_path()		fs_param_is_path
+ 	fsparam_fd()		fs_param_is_fd
++	fsparam_uid()		fs_param_is_uid
++	fsparam_gid()		fs_param_is_gid
+ 	=======================	===============================================
+ 
+      all of which take two arguments, name string and option number - for
+@@ -784,8 +788,9 @@ process the parameters it is given.
+      option number (which it returns).
+ 
+      If successful, and if the parameter type indicates the result is a
+-     boolean, integer or enum type, the value is converted by this function and
+-     the result stored in result->{boolean,int_32,uint_32,uint_64}.
++     boolean, integer, enum, uid, or gid type, the value is converted by this
++     function and the result stored in
++     result->{boolean,int_32,uint_32,uint_64,uid,gid}.
+ 
+      If a match isn't initially made, the key is prefixed with "no" and no
+      value is present then an attempt will be made to look up the key with the
+diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
+index 02880d5552d5fa..198a5a8f26c2da 100644
+--- a/Documentation/virt/kvm/locking.rst
++++ b/Documentation/virt/kvm/locking.rst
+@@ -9,7 +9,7 @@ KVM Lock Overview
+ 
+ The acquisition orders for mutexes are as follows:
+ 
+-- cpus_read_lock() is taken outside kvm_lock
++- cpus_read_lock() is taken outside kvm_lock and kvm_usage_lock
+ 
+ - kvm->lock is taken outside vcpu->mutex
+ 
+@@ -24,6 +24,13 @@ The acquisition orders for mutexes are as follows:
+   are taken on the waiting side when modifying memslots, so MMU notifiers
+   must not take either kvm->slots_lock or kvm->slots_arch_lock.
+ 
++cpus_read_lock() vs kvm_lock:
++
++- Taking cpus_read_lock() outside of kvm_lock is problematic, despite that
++  being the official ordering, as it is quite easy to unknowingly trigger
++  cpus_read_lock() while holding kvm_lock.  Use caution when walking vm_list,
++  e.g. avoid complex operations when possible.
++
+ For SRCU:
+ 
+ - ``synchronize_srcu(&kvm->srcu)`` is called inside critical sections
+@@ -227,10 +234,17 @@ time it will be set using the Dirty tracking mechanism described above.
+ :Type:		mutex
+ :Arch:		any
+ :Protects:	- vm_list
+-		- kvm_usage_count
++
++``kvm_usage_lock``
++^^^^^^^^^^^^^^^^^^
++
++:Type:		mutex
++:Arch:		any
++:Protects:	- kvm_usage_count
+ 		- hardware virtualization enable/disable
+-:Comment:	KVM also disables CPU hotplug via cpus_read_lock() during
+-		enable/disable.
++:Comment:	Exists because using kvm_lock leads to deadlock (see earlier comment
++		on cpus_read_lock() vs kvm_lock).  Note, KVM also disables CPU hotplug via
++		cpus_read_lock() when enabling/disabling virtualization.
+ 
+ ``kvm->mn_invalidate_lock``
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+@@ -290,11 +304,12 @@ time it will be set using the Dirty tracking mechanism described above.
+ 		wakeup.
+ 
+ ``vendor_module_lock``
+-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++^^^^^^^^^^^^^^^^^^^^^^
+ :Type:		mutex
+ :Arch:		x86
+ :Protects:	loading a vendor module (kvm_amd or kvm_intel)
+-:Comment:	Exists because using kvm_lock leads to deadlock.  cpu_hotplug_lock is
+-    taken outside of kvm_lock, e.g. in KVM's CPU online/offline callbacks, and
+-    many operations need to take cpu_hotplug_lock when loading a vendor module,
+-    e.g. updating static calls.
++:Comment:	Exists because using kvm_lock leads to deadlock.  kvm_lock is taken
++    in notifiers, e.g. __kvmclock_cpufreq_notifier(), that may be invoked while
++    cpu_hotplug_lock is held, e.g. from cpufreq_boost_trigger_state(), and many
++    operations need to take cpu_hotplug_lock when loading a vendor module, e.g.
++    updating static calls.
+diff --git a/Makefile b/Makefile
+index 175d7f27ea32a4..93731d0b1a04ac 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/boot/dts/microchip/sam9x60.dtsi b/arch/arm/boot/dts/microchip/sam9x60.dtsi
+index 291540e5d81e76..d077afd5024db1 100644
+--- a/arch/arm/boot/dts/microchip/sam9x60.dtsi
++++ b/arch/arm/boot/dts/microchip/sam9x60.dtsi
+@@ -1312,7 +1312,7 @@ rtt: rtc@fffffe20 {
+ 				compatible = "microchip,sam9x60-rtt", "atmel,at91sam9260-rtt";
+ 				reg = <0xfffffe20 0x20>;
+ 				interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>;
+-				clocks = <&clk32k 0>;
++				clocks = <&clk32k 1>;
+ 			};
+ 
+ 			pit: timer@fffffe40 {
+@@ -1338,7 +1338,7 @@ rtc: rtc@fffffea8 {
+ 				compatible = "microchip,sam9x60-rtc", "atmel,at91sam9x5-rtc";
+ 				reg = <0xfffffea8 0x100>;
+ 				interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>;
+-				clocks = <&clk32k 0>;
++				clocks = <&clk32k 1>;
+ 			};
+ 
+ 			watchdog: watchdog@ffffff80 {
+diff --git a/arch/arm/boot/dts/microchip/sama7g5.dtsi b/arch/arm/boot/dts/microchip/sama7g5.dtsi
+index 75778be126a3d9..17bcdcf0cf4a05 100644
+--- a/arch/arm/boot/dts/microchip/sama7g5.dtsi
++++ b/arch/arm/boot/dts/microchip/sama7g5.dtsi
+@@ -272,7 +272,7 @@ rtt: rtc@e001d020 {
+ 			compatible = "microchip,sama7g5-rtt", "microchip,sam9x60-rtt", "atmel,at91sam9260-rtt";
+ 			reg = <0xe001d020 0x30>;
+ 			interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>;
+-			clocks = <&clk32k 0>;
++			clocks = <&clk32k 1>;
+ 		};
+ 
+ 		clk32k: clock-controller@e001d050 {
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6ul-geam.dts b/arch/arm/boot/dts/nxp/imx/imx6ul-geam.dts
+index cdbb8c435cd6aa..601d89b904cdfb 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6ul-geam.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx6ul-geam.dts
+@@ -365,7 +365,7 @@ MX6UL_PAD_ENET1_RX_ER__PWM8_OUT   0x110b0
+ 	};
+ 
+ 	pinctrl_tsc: tscgrp {
+-		fsl,pin = <
++		fsl,pins = <
+ 			MX6UL_PAD_GPIO1_IO01__GPIO1_IO01	0xb0
+ 			MX6UL_PAD_GPIO1_IO02__GPIO1_IO02	0xb0
+ 			MX6UL_PAD_GPIO1_IO03__GPIO1_IO03	0xb0
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6ull-seeed-npi-dev-board.dtsi b/arch/arm/boot/dts/nxp/imx/imx6ull-seeed-npi-dev-board.dtsi
+index 6bb12e0bbc7ec6..50654dbf62e02c 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6ull-seeed-npi-dev-board.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6ull-seeed-npi-dev-board.dtsi
+@@ -339,14 +339,14 @@ MX6UL_PAD_JTAG_TRST_B__SAI2_TX_DATA	0x120b0
+ 	};
+ 
+ 	pinctrl_uart1: uart1grp {
+-		fsl,pin = <
++		fsl,pins = <
+ 			MX6UL_PAD_UART1_TX_DATA__UART1_DCE_TX	0x1b0b1
+ 			MX6UL_PAD_UART1_RX_DATA__UART1_DCE_RX	0x1b0b1
+ 		>;
+ 	};
+ 
+ 	pinctrl_uart2: uart2grp {
+-		fsl,pin = <
++		fsl,pins = <
+ 			MX6UL_PAD_UART2_TX_DATA__UART2_DCE_TX	0x1b0b1
+ 			MX6UL_PAD_UART2_RX_DATA__UART2_DCE_RX	0x1b0b1
+ 			MX6UL_PAD_UART2_CTS_B__UART2_DCE_CTS	0x1b0b1
+@@ -355,7 +355,7 @@ MX6UL_PAD_UART2_RTS_B__UART2_DCE_RTS	0x1b0b1
+ 	};
+ 
+ 	pinctrl_uart3: uart3grp {
+-		fsl,pin = <
++		fsl,pins = <
+ 			MX6UL_PAD_UART3_TX_DATA__UART3_DCE_TX	0x1b0b1
+ 			MX6UL_PAD_UART3_RX_DATA__UART3_DCE_RX	0x1b0b1
+ 			MX6UL_PAD_UART3_CTS_B__UART3_DCE_CTS	0x1b0b1
+@@ -364,21 +364,21 @@ MX6UL_PAD_UART3_RTS_B__UART3_DCE_RTS	0x1b0b1
+ 	};
+ 
+ 	pinctrl_uart4: uart4grp {
+-		fsl,pin = <
++		fsl,pins = <
+ 			MX6UL_PAD_UART4_TX_DATA__UART4_DCE_TX	0x1b0b1
+ 			MX6UL_PAD_UART4_RX_DATA__UART4_DCE_RX	0x1b0b1
+ 		>;
+ 	};
+ 
+ 	pinctrl_uart5: uart5grp {
+-		fsl,pin = <
++		fsl,pins = <
+ 			MX6UL_PAD_UART5_TX_DATA__UART5_DCE_TX	0x1b0b1
+ 			MX6UL_PAD_UART5_RX_DATA__UART5_DCE_RX	0x1b0b1
+ 		>;
+ 	};
+ 
+ 	pinctrl_usb_otg1_id: usbotg1idgrp {
+-		fsl,pin = <
++		fsl,pins = <
+ 			MX6UL_PAD_GPIO1_IO00__ANATOP_OTG1_ID	0x17059
+ 		>;
+ 	};
+diff --git a/arch/arm/boot/dts/nxp/imx/imx7d-zii-rmu2.dts b/arch/arm/boot/dts/nxp/imx/imx7d-zii-rmu2.dts
+index 521493342fe972..8f5566027c25a2 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx7d-zii-rmu2.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx7d-zii-rmu2.dts
+@@ -350,7 +350,7 @@ MX7D_PAD_SD3_RESET_B__SD3_RESET_B	0x59
+ 
+ &iomuxc_lpsr {
+ 	pinctrl_enet1_phy_interrupt: enet1phyinterruptgrp {
+-		fsl,phy = <
++		fsl,pins = <
+ 			MX7D_PAD_LPSR_GPIO1_IO02__GPIO1_IO2	0x08
+ 		>;
+ 	};
+diff --git a/arch/arm/mach-ep93xx/clock.c b/arch/arm/mach-ep93xx/clock.c
+index 85a496ddc6197e..e9f72a529b5089 100644
+--- a/arch/arm/mach-ep93xx/clock.c
++++ b/arch/arm/mach-ep93xx/clock.c
+@@ -359,7 +359,7 @@ static unsigned long ep93xx_div_recalc_rate(struct clk_hw *hw,
+ 	u32 val = __raw_readl(psc->reg);
+ 	u8 index = (val & psc->mask) >> psc->shift;
+ 
+-	if (index > psc->num_div)
++	if (index >= psc->num_div)
+ 		return 0;
+ 
+ 	return DIV_ROUND_UP_ULL(parent_rate, psc->div[index]);
+diff --git a/arch/arm/mach-versatile/platsmp-realview.c b/arch/arm/mach-versatile/platsmp-realview.c
+index 6965a1de727b07..d38b2e174257e8 100644
+--- a/arch/arm/mach-versatile/platsmp-realview.c
++++ b/arch/arm/mach-versatile/platsmp-realview.c
+@@ -70,6 +70,7 @@ static void __init realview_smp_prepare_cpus(unsigned int max_cpus)
+ 		return;
+ 	}
+ 	map = syscon_node_to_regmap(np);
++	of_node_put(np);
+ 	if (IS_ERR(map)) {
+ 		pr_err("PLATSMP: No syscon regmap\n");
+ 		return;
+diff --git a/arch/arm/vfp/vfpinstr.h b/arch/arm/vfp/vfpinstr.h
+index 3c7938fd40aad6..32090b0fb250b8 100644
+--- a/arch/arm/vfp/vfpinstr.h
++++ b/arch/arm/vfp/vfpinstr.h
+@@ -64,33 +64,37 @@
+ 
+ #ifdef CONFIG_AS_VFP_VMRS_FPINST
+ 
+-#define fmrx(_vfp_) ({			\
+-	u32 __v;			\
+-	asm(".fpu	vfpv2\n"	\
+-	    "vmrs	%0, " #_vfp_	\
+-	    : "=r" (__v) : : "cc");	\
+-	__v;				\
+- })
+-
+-#define fmxr(_vfp_,_var_)		\
+-	asm(".fpu	vfpv2\n"	\
+-	    "vmsr	" #_vfp_ ", %0"	\
+-	   : : "r" (_var_) : "cc")
++#define fmrx(_vfp_) ({				\
++	u32 __v;				\
++	asm volatile (".fpu	vfpv2\n"	\
++		      "vmrs	%0, " #_vfp_	\
++		     : "=r" (__v) : : "cc");	\
++	__v;					\
++})
++
++#define fmxr(_vfp_, _var_) ({			\
++	asm volatile (".fpu	vfpv2\n"	\
++		      "vmsr	" #_vfp_ ", %0"	\
++		     : : "r" (_var_) : "cc");	\
++})
+ 
+ #else
+ 
+ #define vfpreg(_vfp_) #_vfp_
+ 
+-#define fmrx(_vfp_) ({			\
+-	u32 __v;			\
+-	asm("mrc p10, 7, %0, " vfpreg(_vfp_) ", cr0, 0 @ fmrx	%0, " #_vfp_	\
+-	    : "=r" (__v) : : "cc");	\
+-	__v;				\
+- })
+-
+-#define fmxr(_vfp_,_var_)		\
+-	asm("mcr p10, 7, %0, " vfpreg(_vfp_) ", cr0, 0 @ fmxr	" #_vfp_ ", %0"	\
+-	   : : "r" (_var_) : "cc")
++#define fmrx(_vfp_) ({						\
++	u32 __v;						\
++	asm volatile ("mrc p10, 7, %0, " vfpreg(_vfp_) ","	\
++		      "cr0, 0 @ fmrx	%0, " #_vfp_		\
++		     : "=r" (__v) : : "cc");			\
++	__v;							\
++})
++
++#define fmxr(_vfp_, _var_) ({					\
++	asm volatile ("mcr p10, 7, %0, " vfpreg(_vfp_) ","	\
++		      "cr0, 0 @ fmxr	" #_vfp_ ", %0"		\
++		     : : "r" (_var_) : "cc");			\
++})
+ 
+ #endif
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 11bbdc15c6e5e2..cd9772b1fd95ee 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -422,7 +422,7 @@ config AMPERE_ERRATUM_AC03_CPU_38
+ 	default y
+ 	help
+ 	  This option adds an alternative code sequence to work around Ampere
+-	  erratum AC03_CPU_38 on AmpereOne.
++	  errata AC03_CPU_38 and AC04_CPU_10 on AmpereOne.
+ 
+ 	  The affected design reports FEAT_HAFDBS as not implemented in
+ 	  ID_AA64MMFR1_EL1.HAFDBS, but (V)TCR_ELx.{HA,HD} are not RES0
+diff --git a/arch/arm64/boot/dts/exynos/exynos7885-jackpotlte.dts b/arch/arm64/boot/dts/exynos/exynos7885-jackpotlte.dts
+index 47a389d9ff7d71..9d74fa6bfed9fb 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7885-jackpotlte.dts
++++ b/arch/arm64/boot/dts/exynos/exynos7885-jackpotlte.dts
+@@ -32,7 +32,7 @@ memory@80000000 {
+ 		device_type = "memory";
+ 		reg = <0x0 0x80000000 0x3da00000>,
+ 		      <0x0 0xc0000000 0x40000000>,
+-		      <0x8 0x80000000 0x40000000>;
++		      <0x8 0x80000000 0x80000000>;
+ 	};
+ 
+ 	gpio-keys {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+index 1807e9d6cb0e41..0a4838b35eab6e 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+@@ -321,7 +321,8 @@ &dpi {
+ 	pinctrl-names = "default", "sleep";
+ 	pinctrl-0 = <&dpi_pins_default>;
+ 	pinctrl-1 = <&dpi_pins_sleep>;
+-	status = "okay";
++	/* TODO Re-enable after DP to Type-C port muxing can be described */
++	status = "disabled";
+ };
+ 
+ &dpi_out {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186.dtsi b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+index 4763ed5dc86cfb..d63a9defe73e17 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+@@ -731,7 +731,7 @@ opp-850000000 {
+ 		opp-900000000-3 {
+ 			opp-hz = /bits/ 64 <900000000>;
+ 			opp-microvolt = <850000>;
+-			opp-supported-hw = <0x8>;
++			opp-supported-hw = <0xcf>;
+ 		};
+ 
+ 		opp-900000000-4 {
+@@ -743,13 +743,13 @@ opp-900000000-4 {
+ 		opp-900000000-5 {
+ 			opp-hz = /bits/ 64 <900000000>;
+ 			opp-microvolt = <825000>;
+-			opp-supported-hw = <0x30>;
++			opp-supported-hw = <0x20>;
+ 		};
+ 
+ 		opp-950000000-3 {
+ 			opp-hz = /bits/ 64 <950000000>;
+ 			opp-microvolt = <900000>;
+-			opp-supported-hw = <0x8>;
++			opp-supported-hw = <0xcf>;
+ 		};
+ 
+ 		opp-950000000-4 {
+@@ -761,13 +761,13 @@ opp-950000000-4 {
+ 		opp-950000000-5 {
+ 			opp-hz = /bits/ 64 <950000000>;
+ 			opp-microvolt = <850000>;
+-			opp-supported-hw = <0x30>;
++			opp-supported-hw = <0x20>;
+ 		};
+ 
+ 		opp-1000000000-3 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+ 			opp-microvolt = <950000>;
+-			opp-supported-hw = <0x8>;
++			opp-supported-hw = <0xcf>;
+ 		};
+ 
+ 		opp-1000000000-4 {
+@@ -779,7 +779,7 @@ opp-1000000000-4 {
+ 		opp-1000000000-5 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+ 			opp-microvolt = <875000>;
+-			opp-supported-hw = <0x30>;
++			opp-supported-hw = <0x20>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+index 4a11918da37048..5a2d65edf4740f 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+@@ -1359,6 +1359,7 @@ &xhci1 {
+ 	rx-fifo-depth = <3072>;
+ 	vusb33-supply = <&mt6359_vusb_ldo_reg>;
+ 	vbus-supply = <&usb_vbus>;
++	mediatek,u3p-dis-msk = <1>;
+ };
+ 
+ &xhci2 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+index 2ee45752583c00..98c15eb68589a5 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+@@ -3251,10 +3251,10 @@ dp_intf0: dp-intf@1c015000 {
+ 			compatible = "mediatek,mt8195-dp-intf";
+ 			reg = <0 0x1c015000 0 0x1000>;
+ 			interrupts = <GIC_SPI 657 IRQ_TYPE_LEVEL_HIGH 0>;
+-			clocks = <&vdosys0  CLK_VDO0_DP_INTF0>,
+-				 <&vdosys0 CLK_VDO0_DP_INTF0_DP_INTF>,
++			clocks = <&vdosys0 CLK_VDO0_DP_INTF0_DP_INTF>,
++				 <&vdosys0  CLK_VDO0_DP_INTF0>,
+ 				 <&apmixedsys CLK_APMIXED_TVDPLL1>;
+-			clock-names = "engine", "pixel", "pll";
++			clock-names = "pixel", "engine", "pll";
+ 			status = "disabled";
+ 		};
+ 
+@@ -3521,10 +3521,10 @@ dp_intf1: dp-intf@1c113000 {
+ 			reg = <0 0x1c113000 0 0x1000>;
+ 			interrupts = <GIC_SPI 513 IRQ_TYPE_LEVEL_HIGH 0>;
+ 			power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+-			clocks = <&vdosys1 CLK_VDO1_DP_INTF0_MM>,
+-				 <&vdosys1 CLK_VDO1_DPINTF>,
++			clocks = <&vdosys1 CLK_VDO1_DPINTF>,
++				 <&vdosys1 CLK_VDO1_DP_INTF0_MM>,
+ 				 <&apmixedsys CLK_APMIXED_TVDPLL2>;
+-			clock-names = "engine", "pixel", "pll";
++			clock-names = "pixel", "engine", "pll";
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts b/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
+index 97634cc04e659f..e2634590ca81f7 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
+@@ -816,6 +816,7 @@ &xhci1 {
+ 	usb2-lpm-disable;
+ 	vusb33-supply = <&mt6359_vusb_ldo_reg>;
+ 	vbus-supply = <&vsys>;
++	mediatek,u3p-dis-msk = <1>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234-p3701-0008.dtsi b/arch/arm64/boot/dts/nvidia/tegra234-p3701-0008.dtsi
+index 553fa4ba1cd48a..62c4fdad0b600b 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234-p3701-0008.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra234-p3701-0008.dtsi
+@@ -44,39 +44,6 @@ i2c@c240000 {
+ 			status = "okay";
+ 		};
+ 
+-		i2c@c250000 {
+-			power-sensor@41 {
+-				compatible = "ti,ina3221";
+-				reg = <0x41>;
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+-
+-				input@0 {
+-					reg = <0x0>;
+-					label = "CVB_ATX_12V";
+-					shunt-resistor-micro-ohms = <2000>;
+-				};
+-
+-				input@1 {
+-					reg = <0x1>;
+-					label = "CVB_ATX_3V3";
+-					shunt-resistor-micro-ohms = <2000>;
+-				};
+-
+-				input@2 {
+-					reg = <0x2>;
+-					label = "CVB_ATX_5V";
+-					shunt-resistor-micro-ohms = <2000>;
+-				};
+-			};
+-
+-			power-sensor@44 {
+-				compatible = "ti,ina219";
+-				reg = <0x44>;
+-				shunt-resistor = <2000>;
+-			};
+-		};
+-
+ 		rtc@c2a0000 {
+ 			status = "okay";
+ 		};
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002.dtsi b/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002.dtsi
+index 527f2f3aee3ad4..377f518bd3e57b 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002.dtsi
+@@ -183,6 +183,39 @@ usb@3610000 {
+ 			phy-names = "usb2-0", "usb2-1", "usb2-2", "usb2-3",
+ 				"usb3-0", "usb3-1", "usb3-2";
+ 		};
++
++		i2c@c250000 {
++			power-sensor@41 {
++				compatible = "ti,ina3221";
++				reg = <0x41>;
++				#address-cells = <1>;
++				#size-cells = <0>;
++
++				input@0 {
++					reg = <0x0>;
++					label = "CVB_ATX_12V";
++					shunt-resistor-micro-ohms = <2000>;
++				};
++
++				input@1 {
++					reg = <0x1>;
++					label = "CVB_ATX_3V3";
++					shunt-resistor-micro-ohms = <2000>;
++				};
++
++				input@2 {
++					reg = <0x2>;
++					label = "CVB_ATX_5V";
++					shunt-resistor-micro-ohms = <2000>;
++				};
++			};
++
++			power-sensor@44 {
++				compatible = "ti,ina219";
++				reg = <0x44>;
++				shunt-resistor = <2000>;
++			};
++		};
+ 	};
+ 
+ 	vdd_3v3_dp: regulator-vdd-3v3-dp {
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p.dtsi b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+index 490e0369f52993..3a51e79af1bb6c 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+@@ -2104,6 +2104,7 @@ apps_smmu: iommu@15000000 {
+ 			reg = <0x0 0x15000000 0x0 0x100000>;
+ 			#iommu-cells = <2>;
+ 			#global-interrupts = <2>;
++			dma-coherent;
+ 
+ 			interrupts = <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>,
+@@ -2242,6 +2243,7 @@ pcie_smmu: iommu@15200000 {
+ 			reg = <0x0 0x15200000 0x0 0x80000>;
+ 			#iommu-cells = <2>;
+ 			#global-interrupts = <2>;
++			dma-coherent;
+ 
+ 			interrupts = <GIC_SPI 920 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 921 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index 36c398e5fe5016..64d5f6e4c0b018 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -3998,14 +3998,14 @@ mdss_dp2: displayport-controller@ae9a000 {
+ 
+ 				assigned-clocks = <&dispcc DISP_CC_MDSS_DPTX2_LINK_CLK_SRC>,
+ 						  <&dispcc DISP_CC_MDSS_DPTX2_PIXEL0_CLK_SRC>;
+-				assigned-clock-parents = <&mdss_dp2_phy 0>,
+-							 <&mdss_dp2_phy 1>;
++				assigned-clock-parents = <&usb_1_ss2_qmpphy QMP_USB43DP_DP_LINK_CLK>,
++							 <&usb_1_ss2_qmpphy QMP_USB43DP_DP_VCO_DIV_CLK>;
+ 
+ 				operating-points-v2 = <&mdss_dp2_opp_table>;
+ 
+ 				power-domains = <&rpmhpd RPMHPD_MMCX>;
+ 
+-				phys = <&mdss_dp2_phy>;
++				phys = <&usb_1_ss2_qmpphy QMP_USB43DP_DP_PHY>;
+ 				phy-names = "dp";
+ 
+ 				#sound-dai-cells = <0>;
+@@ -4189,8 +4189,8 @@ dispcc: clock-controller@af00000 {
+ 				 <&usb_1_ss0_qmpphy QMP_USB43DP_DP_VCO_DIV_CLK>,
+ 				 <&usb_1_ss1_qmpphy QMP_USB43DP_DP_LINK_CLK>, /* dp1 */
+ 				 <&usb_1_ss1_qmpphy QMP_USB43DP_DP_VCO_DIV_CLK>,
+-				 <&mdss_dp2_phy 0>, /* dp2 */
+-				 <&mdss_dp2_phy 1>,
++				 <&usb_1_ss2_qmpphy QMP_USB43DP_DP_LINK_CLK>, /* dp2 */
++				 <&usb_1_ss2_qmpphy QMP_USB43DP_DP_VCO_DIV_CLK>,
+ 				 <&mdss_dp3_phy 0>, /* dp3 */
+ 				 <&mdss_dp3_phy 1>;
+ 			power-domains = <&rpmhpd RPMHPD_MMCX>;
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi b/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi
+index 18ef297db93363..20fb5e41c5988c 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi
+@@ -210,8 +210,8 @@ gic: interrupt-controller@11900000 {
+ 		#interrupt-cells = <3>;
+ 		#address-cells = <0>;
+ 		interrupt-controller;
+-		reg = <0x0 0x11900000 0 0x40000>,
+-		      <0x0 0x11940000 0 0x60000>;
++		reg = <0x0 0x11900000 0 0x20000>,
++		      <0x0 0x11940000 0 0x40000>;
+ 		interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_LOW>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g044.dtsi b/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
+index 1a9891ba6c02c4..960537e401f4cf 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
+@@ -1043,8 +1043,8 @@ gic: interrupt-controller@11900000 {
+ 			#interrupt-cells = <3>;
+ 			#address-cells = <0>;
+ 			interrupt-controller;
+-			reg = <0x0 0x11900000 0 0x40000>,
+-			      <0x0 0x11940000 0 0x60000>;
++			reg = <0x0 0x11900000 0 0x20000>,
++			      <0x0 0x11940000 0 0x40000>;
+ 			interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_LOW>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g054.dtsi b/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
+index a2318478a66ba7..66894b676c01ec 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
+@@ -1051,8 +1051,8 @@ gic: interrupt-controller@11900000 {
+ 			#interrupt-cells = <3>;
+ 			#address-cells = <0>;
+ 			interrupt-controller;
+-			reg = <0x0 0x11900000 0 0x40000>,
+-			      <0x0 0x11940000 0 0x60000>;
++			reg = <0x0 0x11900000 0 0x20000>,
++			      <0x0 0x11940000 0 0x40000>;
+ 			interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_LOW>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/renesas/r9a08g045.dtsi b/arch/arm64/boot/dts/renesas/r9a08g045.dtsi
+index a2adc4e27ce979..17609d81af294f 100644
+--- a/arch/arm64/boot/dts/renesas/r9a08g045.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a08g045.dtsi
+@@ -269,8 +269,8 @@ gic: interrupt-controller@12400000 {
+ 			#interrupt-cells = <3>;
+ 			#address-cells = <0>;
+ 			interrupt-controller;
+-			reg = <0x0 0x12400000 0 0x40000>,
+-			      <0x0 0x12440000 0 0x60000>;
++			reg = <0x0 0x12400000 0 0x20000>,
++			      <0x0 0x12440000 0 0x40000>;
+ 			interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_LOW>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+index 294eb2de263deb..f5e124b235c83c 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+@@ -32,12 +32,12 @@ chosen {
+ 	backlight: edp-backlight {
+ 		compatible = "pwm-backlight";
+ 		power-supply = <&vcc_12v>;
+-		pwms = <&pwm0 0 740740 0>;
++		pwms = <&pwm0 0 125000 0>;
+ 	};
+ 
+ 	bat: battery {
+ 		compatible = "simple-battery";
+-		charge-full-design-microamp-hours = <9800000>;
++		charge-full-design-microamp-hours = <10000000>;
+ 		voltage-max-design-microvolt = <4350000>;
+ 		voltage-min-design-microvolt = <3000000>;
+ 	};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-odroid-m1.dts b/arch/arm64/boot/dts/rockchip/rk3568-odroid-m1.dts
+index a337f547caf538..6a02db4f073f29 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-odroid-m1.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-odroid-m1.dts
+@@ -13,7 +13,7 @@
+ 
+ / {
+ 	model = "Hardkernel ODROID-M1";
+-	compatible = "rockchip,rk3568-odroid-m1", "rockchip,rk3568";
++	compatible = "hardkernel,odroid-m1", "rockchip,rk3568";
+ 
+ 	aliases {
+ 		ethernet0 = &gmac0;
+diff --git a/arch/arm64/boot/dts/ti/k3-am654-idk.dtso b/arch/arm64/boot/dts/ti/k3-am654-idk.dtso
+index 8bdb87fcbde007..1674ad564be1fd 100644
+--- a/arch/arm64/boot/dts/ti/k3-am654-idk.dtso
++++ b/arch/arm64/boot/dts/ti/k3-am654-idk.dtso
+@@ -58,9 +58,7 @@ icssg0_eth: icssg0-eth {
+ 		       <&main_udmap 0xc107>, /* egress slice 1 */
+ 
+ 		       <&main_udmap 0x4100>, /* ingress slice 0 */
+-		       <&main_udmap 0x4101>, /* ingress slice 1 */
+-		       <&main_udmap 0x4102>, /* mgmnt rsp slice 0 */
+-		       <&main_udmap 0x4103>; /* mgmnt rsp slice 1 */
++		       <&main_udmap 0x4101>; /* ingress slice 1 */
+ 		dma-names = "tx0-0", "tx0-1", "tx0-2", "tx0-3",
+ 			    "tx1-0", "tx1-1", "tx1-2", "tx1-3",
+ 			    "rx0", "rx1";
+@@ -126,9 +124,7 @@ icssg1_eth: icssg1-eth {
+ 		       <&main_udmap 0xc207>, /* egress slice 1 */
+ 
+ 		       <&main_udmap 0x4200>, /* ingress slice 0 */
+-		       <&main_udmap 0x4201>, /* ingress slice 1 */
+-		       <&main_udmap 0x4202>, /* mgmnt rsp slice 0 */
+-		       <&main_udmap 0x4203>; /* mgmnt rsp slice 1 */
++		       <&main_udmap 0x4201>; /* ingress slice 1 */
+ 		dma-names = "tx0-0", "tx0-1", "tx0-2", "tx0-3",
+ 			    "tx1-0", "tx1-1", "tx1-2", "tx1-3",
+ 			    "rx0", "rx1";
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts b/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts
+index a2925555fe8180..fb899c99753ecd 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts
+@@ -123,7 +123,7 @@ main_r5fss1_core1_memory_region: r5f-memory@a5100000 {
+ 			no-map;
+ 		};
+ 
+-		c66_1_dma_memory_region: c66-dma-memory@a6000000 {
++		c66_0_dma_memory_region: c66-dma-memory@a6000000 {
+ 			compatible = "shared-dma-pool";
+ 			reg = <0x00 0xa6000000 0x00 0x100000>;
+ 			no-map;
+@@ -135,7 +135,7 @@ c66_0_memory_region: c66-memory@a6100000 {
+ 			no-map;
+ 		};
+ 
+-		c66_0_dma_memory_region: c66-dma-memory@a7000000 {
++		c66_1_dma_memory_region: c66-dma-memory@a7000000 {
+ 			compatible = "shared-dma-pool";
+ 			reg = <0x00 0xa7000000 0x00 0x100000>;
+ 			no-map;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-sk.dts b/arch/arm64/boot/dts/ti/k3-j721e-sk.dts
+index 0c4575ad8d7cb0..53156b71e47964 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-sk.dts
+@@ -119,7 +119,7 @@ main_r5fss1_core1_memory_region: r5f-memory@a5100000 {
+ 			no-map;
+ 		};
+ 
+-		c66_1_dma_memory_region: c66-dma-memory@a6000000 {
++		c66_0_dma_memory_region: c66-dma-memory@a6000000 {
+ 			compatible = "shared-dma-pool";
+ 			reg = <0x00 0xa6000000 0x00 0x100000>;
+ 			no-map;
+@@ -131,7 +131,7 @@ c66_0_memory_region: c66-memory@a6100000 {
+ 			no-map;
+ 		};
+ 
+-		c66_0_dma_memory_region: c66-dma-memory@a7000000 {
++		c66_1_dma_memory_region: c66-dma-memory@a7000000 {
+ 			compatible = "shared-dma-pool";
+ 			reg = <0x00 0xa7000000 0x00 0x100000>;
+ 			no-map;
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 5fd7caea441936..5a7dfeb8e8eb55 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -143,6 +143,7 @@
+ #define APPLE_CPU_PART_M2_AVALANCHE_MAX	0x039
+ 
+ #define AMPERE_CPU_PART_AMPERE1		0xAC3
++#define AMPERE_CPU_PART_AMPERE1A	0xAC4
+ 
+ #define MICROSOFT_CPU_PART_AZURE_COBALT_100	0xD49 /* Based on r0p0 of ARM Neoverse N2 */
+ 
+@@ -212,6 +213,7 @@
+ #define MIDR_APPLE_M2_BLIZZARD_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_MAX)
+ #define MIDR_APPLE_M2_AVALANCHE_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_MAX)
+ #define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1)
++#define MIDR_AMPERE1A MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1A)
+ #define MIDR_MICROSOFT_AZURE_COBALT_100 MIDR_CPU_MODEL(ARM_CPU_IMP_MICROSOFT, MICROSOFT_CPU_PART_AZURE_COBALT_100)
+ 
+ /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
+diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
+index 7abf09df70331c..60f4320e44e5d3 100644
+--- a/arch/arm64/include/asm/esr.h
++++ b/arch/arm64/include/asm/esr.h
+@@ -10,63 +10,63 @@
+ #include <asm/memory.h>
+ #include <asm/sysreg.h>
+ 
+-#define ESR_ELx_EC_UNKNOWN	(0x00)
+-#define ESR_ELx_EC_WFx		(0x01)
++#define ESR_ELx_EC_UNKNOWN	UL(0x00)
++#define ESR_ELx_EC_WFx		UL(0x01)
+ /* Unallocated EC: 0x02 */
+-#define ESR_ELx_EC_CP15_32	(0x03)
+-#define ESR_ELx_EC_CP15_64	(0x04)
+-#define ESR_ELx_EC_CP14_MR	(0x05)
+-#define ESR_ELx_EC_CP14_LS	(0x06)
+-#define ESR_ELx_EC_FP_ASIMD	(0x07)
+-#define ESR_ELx_EC_CP10_ID	(0x08)	/* EL2 only */
+-#define ESR_ELx_EC_PAC		(0x09)	/* EL2 and above */
++#define ESR_ELx_EC_CP15_32	UL(0x03)
++#define ESR_ELx_EC_CP15_64	UL(0x04)
++#define ESR_ELx_EC_CP14_MR	UL(0x05)
++#define ESR_ELx_EC_CP14_LS	UL(0x06)
++#define ESR_ELx_EC_FP_ASIMD	UL(0x07)
++#define ESR_ELx_EC_CP10_ID	UL(0x08)	/* EL2 only */
++#define ESR_ELx_EC_PAC		UL(0x09)	/* EL2 and above */
+ /* Unallocated EC: 0x0A - 0x0B */
+-#define ESR_ELx_EC_CP14_64	(0x0C)
+-#define ESR_ELx_EC_BTI		(0x0D)
+-#define ESR_ELx_EC_ILL		(0x0E)
++#define ESR_ELx_EC_CP14_64	UL(0x0C)
++#define ESR_ELx_EC_BTI		UL(0x0D)
++#define ESR_ELx_EC_ILL		UL(0x0E)
+ /* Unallocated EC: 0x0F - 0x10 */
+-#define ESR_ELx_EC_SVC32	(0x11)
+-#define ESR_ELx_EC_HVC32	(0x12)	/* EL2 only */
+-#define ESR_ELx_EC_SMC32	(0x13)	/* EL2 and above */
++#define ESR_ELx_EC_SVC32	UL(0x11)
++#define ESR_ELx_EC_HVC32	UL(0x12)	/* EL2 only */
++#define ESR_ELx_EC_SMC32	UL(0x13)	/* EL2 and above */
+ /* Unallocated EC: 0x14 */
+-#define ESR_ELx_EC_SVC64	(0x15)
+-#define ESR_ELx_EC_HVC64	(0x16)	/* EL2 and above */
+-#define ESR_ELx_EC_SMC64	(0x17)	/* EL2 and above */
+-#define ESR_ELx_EC_SYS64	(0x18)
+-#define ESR_ELx_EC_SVE		(0x19)
+-#define ESR_ELx_EC_ERET		(0x1a)	/* EL2 only */
++#define ESR_ELx_EC_SVC64	UL(0x15)
++#define ESR_ELx_EC_HVC64	UL(0x16)	/* EL2 and above */
++#define ESR_ELx_EC_SMC64	UL(0x17)	/* EL2 and above */
++#define ESR_ELx_EC_SYS64	UL(0x18)
++#define ESR_ELx_EC_SVE		UL(0x19)
++#define ESR_ELx_EC_ERET		UL(0x1a)	/* EL2 only */
+ /* Unallocated EC: 0x1B */
+-#define ESR_ELx_EC_FPAC		(0x1C)	/* EL1 and above */
+-#define ESR_ELx_EC_SME		(0x1D)
++#define ESR_ELx_EC_FPAC		UL(0x1C)	/* EL1 and above */
++#define ESR_ELx_EC_SME		UL(0x1D)
+ /* Unallocated EC: 0x1E */
+-#define ESR_ELx_EC_IMP_DEF	(0x1f)	/* EL3 only */
+-#define ESR_ELx_EC_IABT_LOW	(0x20)
+-#define ESR_ELx_EC_IABT_CUR	(0x21)
+-#define ESR_ELx_EC_PC_ALIGN	(0x22)
++#define ESR_ELx_EC_IMP_DEF	UL(0x1f)	/* EL3 only */
++#define ESR_ELx_EC_IABT_LOW	UL(0x20)
++#define ESR_ELx_EC_IABT_CUR	UL(0x21)
++#define ESR_ELx_EC_PC_ALIGN	UL(0x22)
+ /* Unallocated EC: 0x23 */
+-#define ESR_ELx_EC_DABT_LOW	(0x24)
+-#define ESR_ELx_EC_DABT_CUR	(0x25)
+-#define ESR_ELx_EC_SP_ALIGN	(0x26)
+-#define ESR_ELx_EC_MOPS		(0x27)
+-#define ESR_ELx_EC_FP_EXC32	(0x28)
++#define ESR_ELx_EC_DABT_LOW	UL(0x24)
++#define ESR_ELx_EC_DABT_CUR	UL(0x25)
++#define ESR_ELx_EC_SP_ALIGN	UL(0x26)
++#define ESR_ELx_EC_MOPS		UL(0x27)
++#define ESR_ELx_EC_FP_EXC32	UL(0x28)
+ /* Unallocated EC: 0x29 - 0x2B */
+-#define ESR_ELx_EC_FP_EXC64	(0x2C)
++#define ESR_ELx_EC_FP_EXC64	UL(0x2C)
+ /* Unallocated EC: 0x2D - 0x2E */
+-#define ESR_ELx_EC_SERROR	(0x2F)
+-#define ESR_ELx_EC_BREAKPT_LOW	(0x30)
+-#define ESR_ELx_EC_BREAKPT_CUR	(0x31)
+-#define ESR_ELx_EC_SOFTSTP_LOW	(0x32)
+-#define ESR_ELx_EC_SOFTSTP_CUR	(0x33)
+-#define ESR_ELx_EC_WATCHPT_LOW	(0x34)
+-#define ESR_ELx_EC_WATCHPT_CUR	(0x35)
++#define ESR_ELx_EC_SERROR	UL(0x2F)
++#define ESR_ELx_EC_BREAKPT_LOW	UL(0x30)
++#define ESR_ELx_EC_BREAKPT_CUR	UL(0x31)
++#define ESR_ELx_EC_SOFTSTP_LOW	UL(0x32)
++#define ESR_ELx_EC_SOFTSTP_CUR	UL(0x33)
++#define ESR_ELx_EC_WATCHPT_LOW	UL(0x34)
++#define ESR_ELx_EC_WATCHPT_CUR	UL(0x35)
+ /* Unallocated EC: 0x36 - 0x37 */
+-#define ESR_ELx_EC_BKPT32	(0x38)
++#define ESR_ELx_EC_BKPT32	UL(0x38)
+ /* Unallocated EC: 0x39 */
+-#define ESR_ELx_EC_VECTOR32	(0x3A)	/* EL2 only */
++#define ESR_ELx_EC_VECTOR32	UL(0x3A)	/* EL2 only */
+ /* Unallocated EC: 0x3B */
+-#define ESR_ELx_EC_BRK64	(0x3C)
++#define ESR_ELx_EC_BRK64	UL(0x3C)
+ /* Unallocated EC: 0x3D - 0x3F */
+-#define ESR_ELx_EC_MAX		(0x3F)
++#define ESR_ELx_EC_MAX		UL(0x3F)
+ 
+ #define ESR_ELx_EC_SHIFT	(26)
+ #define ESR_ELx_EC_WIDTH	(6)
+diff --git a/arch/arm64/include/uapi/asm/sigcontext.h b/arch/arm64/include/uapi/asm/sigcontext.h
+index 8a45b7a411e045..57f76d82077ea5 100644
+--- a/arch/arm64/include/uapi/asm/sigcontext.h
++++ b/arch/arm64/include/uapi/asm/sigcontext.h
+@@ -320,10 +320,10 @@ struct zt_context {
+ 	((sizeof(struct za_context) + (__SVE_VQ_BYTES - 1))	\
+ 		/ __SVE_VQ_BYTES * __SVE_VQ_BYTES)
+ 
+-#define ZA_SIG_REGS_SIZE(vq) ((vq * __SVE_VQ_BYTES) * (vq * __SVE_VQ_BYTES))
++#define ZA_SIG_REGS_SIZE(vq) (((vq) * __SVE_VQ_BYTES) * ((vq) * __SVE_VQ_BYTES))
+ 
+ #define ZA_SIG_ZAV_OFFSET(vq, n) (ZA_SIG_REGS_OFFSET + \
+-				  (SVE_SIG_ZREG_SIZE(vq) * n))
++				  (SVE_SIG_ZREG_SIZE(vq) * (n)))
+ 
+ #define ZA_SIG_CONTEXT_SIZE(vq) \
+ 		(ZA_SIG_REGS_OFFSET + ZA_SIG_REGS_SIZE(vq))
+@@ -334,7 +334,7 @@ struct zt_context {
+ 
+ #define ZT_SIG_REGS_OFFSET sizeof(struct zt_context)
+ 
+-#define ZT_SIG_REGS_SIZE(n) (ZT_SIG_REG_BYTES * n)
++#define ZT_SIG_REGS_SIZE(n) (ZT_SIG_REG_BYTES * (n))
+ 
+ #define ZT_SIG_CONTEXT_SIZE(n) \
+ 	(sizeof(struct zt_context) + ZT_SIG_REGS_SIZE(n))
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index f6b6b450735715..dfefbdf4073a6a 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -456,6 +456,14 @@ static const struct midr_range erratum_spec_ssbs_list[] = {
+ };
+ #endif
+ 
++#ifdef CONFIG_AMPERE_ERRATUM_AC03_CPU_38
++static const struct midr_range erratum_ac03_cpu_38_list[] = {
++	MIDR_ALL_VERSIONS(MIDR_AMPERE1),
++	MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
++	{},
++};
++#endif
++
+ const struct arm64_cpu_capabilities arm64_errata[] = {
+ #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
+ 	{
+@@ -772,7 +780,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 	{
+ 		.desc = "AmpereOne erratum AC03_CPU_38",
+ 		.capability = ARM64_WORKAROUND_AMPERE_AC03_CPU_38,
+-		ERRATA_MIDR_ALL_VERSIONS(MIDR_AMPERE1),
++		ERRATA_MIDR_RANGE_LIST(erratum_ac03_cpu_38_list),
+ 	},
+ #endif
+ 	{
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index 05688f6a275f10..d36b9160e9346b 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -71,7 +71,7 @@ enum ipi_msg_type {
+ 	IPI_RESCHEDULE,
+ 	IPI_CALL_FUNC,
+ 	IPI_CPU_STOP,
+-	IPI_CPU_CRASH_STOP,
++	IPI_CPU_STOP_NMI,
+ 	IPI_TIMER,
+ 	IPI_IRQ_WORK,
+ 	NR_IPI,
+@@ -88,6 +88,8 @@ static int ipi_irq_base __ro_after_init;
+ static int nr_ipi __ro_after_init = NR_IPI;
+ static struct irq_desc *ipi_desc[MAX_IPI] __ro_after_init;
+ 
++static bool crash_stop;
++
+ static void ipi_setup(int cpu);
+ 
+ #ifdef CONFIG_HOTPLUG_CPU
+@@ -773,7 +775,7 @@ static const char *ipi_types[MAX_IPI] __tracepoint_string = {
+ 	[IPI_RESCHEDULE]	= "Rescheduling interrupts",
+ 	[IPI_CALL_FUNC]		= "Function call interrupts",
+ 	[IPI_CPU_STOP]		= "CPU stop interrupts",
+-	[IPI_CPU_CRASH_STOP]	= "CPU stop (for crash dump) interrupts",
++	[IPI_CPU_STOP_NMI]	= "CPU stop NMIs",
+ 	[IPI_TIMER]		= "Timer broadcast interrupts",
+ 	[IPI_IRQ_WORK]		= "IRQ work interrupts",
+ 	[IPI_CPU_BACKTRACE]	= "CPU backtrace interrupts",
+@@ -817,9 +819,9 @@ void arch_irq_work_raise(void)
+ }
+ #endif
+ 
+-static void __noreturn local_cpu_stop(void)
++static void __noreturn local_cpu_stop(unsigned int cpu)
+ {
+-	set_cpu_online(smp_processor_id(), false);
++	set_cpu_online(cpu, false);
+ 
+ 	local_daif_mask();
+ 	sdei_mask_local_cpu();
+@@ -833,21 +835,26 @@ static void __noreturn local_cpu_stop(void)
+  */
+ void __noreturn panic_smp_self_stop(void)
+ {
+-	local_cpu_stop();
++	local_cpu_stop(smp_processor_id());
+ }
+ 
+-#ifdef CONFIG_KEXEC_CORE
+-static atomic_t waiting_for_crash_ipi = ATOMIC_INIT(0);
+-#endif
+-
+ static void __noreturn ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs)
+ {
+ #ifdef CONFIG_KEXEC_CORE
++	/*
++	 * Use local_daif_mask() instead of local_irq_disable() to make sure
++	 * that pseudo-NMIs are disabled. The "crash stop" code starts with
++	 * an IRQ and falls back to NMI (which might be pseudo). If the IRQ
++	 * finally goes through right as we're timing out then the NMI could
++	 * interrupt us. It's better to prevent the NMI and let the IRQ
++	 * finish since the pt_regs will be better.
++	 */
++	local_daif_mask();
++
+ 	crash_save_cpu(regs, cpu);
+ 
+-	atomic_dec(&waiting_for_crash_ipi);
++	set_cpu_online(cpu, false);
+ 
+-	local_irq_disable();
+ 	sdei_mask_local_cpu();
+ 
+ 	if (IS_ENABLED(CONFIG_HOTPLUG_CPU))
+@@ -912,14 +919,12 @@ static void do_handle_IPI(int ipinr)
+ 		break;
+ 
+ 	case IPI_CPU_STOP:
+-		local_cpu_stop();
+-		break;
+-
+-	case IPI_CPU_CRASH_STOP:
+-		if (IS_ENABLED(CONFIG_KEXEC_CORE)) {
++	case IPI_CPU_STOP_NMI:
++		if (IS_ENABLED(CONFIG_KEXEC_CORE) && crash_stop) {
+ 			ipi_cpu_crash_stop(cpu, get_irq_regs());
+-
+ 			unreachable();
++		} else {
++			local_cpu_stop(cpu);
+ 		}
+ 		break;
+ 
+@@ -974,8 +979,7 @@ static bool ipi_should_be_nmi(enum ipi_msg_type ipi)
+ 		return false;
+ 
+ 	switch (ipi) {
+-	case IPI_CPU_STOP:
+-	case IPI_CPU_CRASH_STOP:
++	case IPI_CPU_STOP_NMI:
+ 	case IPI_CPU_BACKTRACE:
+ 	case IPI_KGDB_ROUNDUP:
+ 		return true;
+@@ -1088,79 +1092,109 @@ static inline unsigned int num_other_online_cpus(void)
+ 
+ void smp_send_stop(void)
+ {
++	static unsigned long stop_in_progress;
++	cpumask_t mask;
+ 	unsigned long timeout;
+ 
+-	if (num_other_online_cpus()) {
+-		cpumask_t mask;
++	/*
++	 * If this cpu is the only one alive at this point in time, online or
++	 * not, there are no stop messages to be sent around, so just back out.
++	 */
++	if (num_other_online_cpus() == 0)
++		goto skip_ipi;
+ 
+-		cpumask_copy(&mask, cpu_online_mask);
+-		cpumask_clear_cpu(smp_processor_id(), &mask);
++	/* Only proceed if this is the first CPU to reach this code */
++	if (test_and_set_bit(0, &stop_in_progress))
++		return;
+ 
+-		if (system_state <= SYSTEM_RUNNING)
+-			pr_crit("SMP: stopping secondary CPUs\n");
+-		smp_cross_call(&mask, IPI_CPU_STOP);
+-	}
++	/*
++	 * Send an IPI to all currently online CPUs except the CPU running
++	 * this code.
++	 *
++	 * NOTE: we don't do anything here to prevent other CPUs from coming
++	 * online after we snapshot `cpu_online_mask`. Ideally, the calling code
++	 * should do something to prevent other CPUs from coming up. This code
++	 * can be called in the panic path and thus it doesn't seem wise to
++	 * grab the CPU hotplug mutex ourselves. Worst case:
++	 * - If a CPU comes online as we're running, we'll likely notice it
++	 *   during the 1 second wait below and then we'll catch it when we try
++	 *   with an NMI (assuming NMIs are enabled) since we re-snapshot the
++	 *   mask before sending an NMI.
++	 * - If we leave the function and see that CPUs are still online we'll
++	 *   at least print a warning. Especially without NMIs this function
++	 *   isn't foolproof anyway so calling code will just have to accept
++	 *   the fact that there could be cases where a CPU can't be stopped.
++	 */
++	cpumask_copy(&mask, cpu_online_mask);
++	cpumask_clear_cpu(smp_processor_id(), &mask);
+ 
+-	/* Wait up to one second for other CPUs to stop */
++	if (system_state <= SYSTEM_RUNNING)
++		pr_crit("SMP: stopping secondary CPUs\n");
++
++	/*
++	 * Start with a normal IPI and wait up to one second for other CPUs to
++	 * stop. We do this first because it gives other processors a chance
++	 * to exit critical sections / drop locks and makes the rest of the
++	 * stop process (especially console flush) more robust.
++	 */
++	smp_cross_call(&mask, IPI_CPU_STOP);
+ 	timeout = USEC_PER_SEC;
+ 	while (num_other_online_cpus() && timeout--)
+ 		udelay(1);
+ 
+-	if (num_other_online_cpus())
++	/*
++	 * If CPUs are still online, try an NMI. There's no excuse for this to
++	 * be slow, so we only give them an extra 10 ms to respond.
++	 */
++	if (num_other_online_cpus() && ipi_should_be_nmi(IPI_CPU_STOP_NMI)) {
++		smp_rmb();
++		cpumask_copy(&mask, cpu_online_mask);
++		cpumask_clear_cpu(smp_processor_id(), &mask);
++
++		pr_info("SMP: retry stop with NMI for CPUs %*pbl\n",
++			cpumask_pr_args(&mask));
++
++		smp_cross_call(&mask, IPI_CPU_STOP_NMI);
++		timeout = USEC_PER_MSEC * 10;
++		while (num_other_online_cpus() && timeout--)
++			udelay(1);
++	}
++
++	if (num_other_online_cpus()) {
++		smp_rmb();
++		cpumask_copy(&mask, cpu_online_mask);
++		cpumask_clear_cpu(smp_processor_id(), &mask);
++
+ 		pr_warn("SMP: failed to stop secondary CPUs %*pbl\n",
+-			cpumask_pr_args(cpu_online_mask));
++			cpumask_pr_args(&mask));
++	}
+ 
++skip_ipi:
+ 	sdei_mask_local_cpu();
+ }
+ 
+ #ifdef CONFIG_KEXEC_CORE
+ void crash_smp_send_stop(void)
+ {
+-	static int cpus_stopped;
+-	cpumask_t mask;
+-	unsigned long timeout;
+-
+ 	/*
+ 	 * This function can be called twice in panic path, but obviously
+ 	 * we execute this only once.
++	 *
++	 * We use this same boolean to tell whether the IPI we send was a
++	 * stop or a "crash stop".
+ 	 */
+-	if (cpus_stopped)
++	if (crash_stop)
+ 		return;
++	crash_stop = 1;
+ 
+-	cpus_stopped = 1;
++	smp_send_stop();
+ 
+-	/*
+-	 * If this cpu is the only one alive at this point in time, online or
+-	 * not, there are no stop messages to be sent around, so just back out.
+-	 */
+-	if (num_other_online_cpus() == 0)
+-		goto skip_ipi;
+-
+-	cpumask_copy(&mask, cpu_online_mask);
+-	cpumask_clear_cpu(smp_processor_id(), &mask);
+-
+-	atomic_set(&waiting_for_crash_ipi, num_other_online_cpus());
+-
+-	pr_crit("SMP: stopping secondary CPUs\n");
+-	smp_cross_call(&mask, IPI_CPU_CRASH_STOP);
+-
+-	/* Wait up to one second for other CPUs to stop */
+-	timeout = USEC_PER_SEC;
+-	while ((atomic_read(&waiting_for_crash_ipi) > 0) && timeout--)
+-		udelay(1);
+-
+-	if (atomic_read(&waiting_for_crash_ipi) > 0)
+-		pr_warn("SMP: failed to stop secondary CPUs %*pbl\n",
+-			cpumask_pr_args(&mask));
+-
+-skip_ipi:
+-	sdei_mask_local_cpu();
+ 	sdei_handler_abort();
+ }
+ 
+ bool smp_crash_stop_failed(void)
+ {
+-	return (atomic_read(&waiting_for_crash_ipi) > 0);
++	return num_other_online_cpus() != 0;
+ }
+ #endif
+ 
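For readers skimming the arm64 change above: the reworked smp_send_stop() escalates in two phases, a cooperative IPI with a one-second grace period, then an NMI retry that only gets 10 ms, re-snapshotting the online mask before the retry. Below is a minimal user-space sketch of that escalation pattern; the fake IPI, the CPU counter and the busy-wait are illustrative stand-ins, not kernel API.

    #include <stdio.h>

    static int other_cpus_online = 3;       /* pretend three secondaries are up */

    static void send_stop(int nmi)
    {
        /* Pretend one CPU sits with interrupts masked and only reacts
         * to the NMI; the others honour the plain IPI. */
        other_cpus_online = nmi ? 0 : 1;
    }

    static int wait_for_stop(unsigned long usecs)
    {
        while (other_cpus_online && usecs--)
            ;                               /* stand-in for udelay(1) */
        return other_cpus_online;
    }

    int main(void)
    {
        send_stop(0);                       /* IPI_CPU_STOP */
        if (wait_for_stop(1000000)) {       /* up to one second */
            send_stop(1);                   /* IPI_CPU_STOP_NMI */
            wait_for_stop(10 * 1000);       /* only 10 ms more */
        }

        if (other_cpus_online)
            printf("failed to stop %d CPUs\n", other_cpus_online);
        else
            printf("all secondary CPUs stopped\n");
        return 0;
    }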
+diff --git a/arch/arm64/kvm/hyp/nvhe/ffa.c b/arch/arm64/kvm/hyp/nvhe/ffa.c
+index efb053af331cc9..f26ae1819d1b4c 100644
+--- a/arch/arm64/kvm/hyp/nvhe/ffa.c
++++ b/arch/arm64/kvm/hyp/nvhe/ffa.c
+@@ -423,9 +423,9 @@ static void do_ffa_mem_frag_tx(struct arm_smccc_res *res,
+ 	return;
+ }
+ 
+-static __always_inline void do_ffa_mem_xfer(const u64 func_id,
+-					    struct arm_smccc_res *res,
+-					    struct kvm_cpu_context *ctxt)
++static void __do_ffa_mem_xfer(const u64 func_id,
++			      struct arm_smccc_res *res,
++			      struct kvm_cpu_context *ctxt)
+ {
+ 	DECLARE_REG(u32, len, ctxt, 1);
+ 	DECLARE_REG(u32, fraglen, ctxt, 2);
+@@ -437,9 +437,6 @@ static __always_inline void do_ffa_mem_xfer(const u64 func_id,
+ 	u32 offset, nr_ranges;
+ 	int ret = 0;
+ 
+-	BUILD_BUG_ON(func_id != FFA_FN64_MEM_SHARE &&
+-		     func_id != FFA_FN64_MEM_LEND);
+-
+ 	if (addr_mbz || npages_mbz || fraglen > len ||
+ 	    fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) {
+ 		ret = FFA_RET_INVALID_PARAMETERS;
+@@ -458,6 +455,11 @@ static __always_inline void do_ffa_mem_xfer(const u64 func_id,
+ 		goto out_unlock;
+ 	}
+ 
++	if (len > ffa_desc_buf.len) {
++		ret = FFA_RET_NO_MEMORY;
++		goto out_unlock;
++	}
++
+ 	buf = hyp_buffers.tx;
+ 	memcpy(buf, host_buffers.tx, fraglen);
+ 
+@@ -509,6 +511,13 @@ static __always_inline void do_ffa_mem_xfer(const u64 func_id,
+ 	goto out_unlock;
+ }
+ 
++#define do_ffa_mem_xfer(fid, res, ctxt)				\
++	do {							\
++		BUILD_BUG_ON((fid) != FFA_FN64_MEM_SHARE &&	\
++			     (fid) != FFA_FN64_MEM_LEND);	\
++		__do_ffa_mem_xfer((fid), (res), (ctxt));	\
++	} while (0);
++
+ static void do_ffa_mem_reclaim(struct arm_smccc_res *res,
+ 			       struct kvm_cpu_context *ctxt)
+ {
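The FF-A hunk above turns an __always_inline function into an out-of-line helper plus a wrapper macro, so the BUILD_BUG_ON() still sees func_id as a compile-time constant at every call site. A stand-alone sketch of that pattern follows; the enum values and function names are illustrative, not the real FF-A identifiers.

    #include <stdio.h>

    /* Minimal stand-in for the kernel's BUILD_BUG_ON(): the array size
     * goes negative, i.e. the build fails, when cond is true. */
    #define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

    enum { MEM_SHARE = 1, MEM_LEND = 2, MEM_RECLAIM = 3 };

    static void __do_mem_xfer(int func_id)
    {
        printf("transfer type %d\n", func_id);
    }

    /* The macro keeps the argument constant-foldable at the call site,
     * so the check survives even though __do_mem_xfer() is out of line. */
    #define do_mem_xfer(fid)                            \
        do {                                            \
            BUILD_BUG_ON((fid) != MEM_SHARE &&          \
                         (fid) != MEM_LEND);            \
            __do_mem_xfer(fid);                         \
        } while (0)

    int main(void)
    {
        do_mem_xfer(MEM_SHARE);         /* builds and runs */
        /* do_mem_xfer(MEM_RECLAIM);       would fail to compile */
        return 0;
    }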
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 1bf483ec971d92..c4190865f24335 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -26,7 +26,7 @@
+ 
+ #define TMP_REG_1 (MAX_BPF_JIT_REG + 0)
+ #define TMP_REG_2 (MAX_BPF_JIT_REG + 1)
+-#define TCALL_CNT (MAX_BPF_JIT_REG + 2)
++#define TCCNT_PTR (MAX_BPF_JIT_REG + 2)
+ #define TMP_REG_3 (MAX_BPF_JIT_REG + 3)
+ #define FP_BOTTOM (MAX_BPF_JIT_REG + 4)
+ #define ARENA_VM_START (MAX_BPF_JIT_REG + 5)
+@@ -63,8 +63,8 @@ static const int bpf2a64[] = {
+ 	[TMP_REG_1] = A64_R(10),
+ 	[TMP_REG_2] = A64_R(11),
+ 	[TMP_REG_3] = A64_R(12),
+-	/* tail_call_cnt */
+-	[TCALL_CNT] = A64_R(26),
++	/* tail_call_cnt_ptr */
++	[TCCNT_PTR] = A64_R(26),
+ 	/* temporary register for blinding constants */
+ 	[BPF_REG_AX] = A64_R(9),
+ 	[FP_BOTTOM] = A64_R(27),
+@@ -282,13 +282,35 @@ static bool is_lsi_offset(int offset, int scale)
+  *      mov x29, sp
+  *      stp x19, x20, [sp, #-16]!
+  *      stp x21, x22, [sp, #-16]!
+- *      stp x25, x26, [sp, #-16]!
++ *      stp x26, x25, [sp, #-16]!
++ *      stp x26, x25, [sp, #-16]!
+  *      stp x27, x28, [sp, #-16]!
+  *      mov x25, sp
+  *      mov tcc, #0
+  *      // PROLOGUE_OFFSET
+  */
+ 
++static void prepare_bpf_tail_call_cnt(struct jit_ctx *ctx)
++{
++	const struct bpf_prog *prog = ctx->prog;
++	const bool is_main_prog = !bpf_is_subprog(prog);
++	const u8 ptr = bpf2a64[TCCNT_PTR];
++	const u8 fp = bpf2a64[BPF_REG_FP];
++	const u8 tcc = ptr;
++
++	emit(A64_PUSH(ptr, fp, A64_SP), ctx);
++	if (is_main_prog) {
++		/* Initialize tail_call_cnt. */
++		emit(A64_MOVZ(1, tcc, 0, 0), ctx);
++		emit(A64_PUSH(tcc, fp, A64_SP), ctx);
++		emit(A64_MOV(1, ptr, A64_SP), ctx);
++	} else {
++		emit(A64_PUSH(ptr, fp, A64_SP), ctx);
++		emit(A64_NOP, ctx);
++		emit(A64_NOP, ctx);
++	}
++}
++
+ #define BTI_INSNS (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) ? 1 : 0)
+ #define PAC_INSNS (IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL) ? 1 : 0)
+ 
+@@ -296,7 +318,7 @@ static bool is_lsi_offset(int offset, int scale)
+ #define POKE_OFFSET (BTI_INSNS + 1)
+ 
+ /* Tail call offset to jump into */
+-#define PROLOGUE_OFFSET (BTI_INSNS + 2 + PAC_INSNS + 8)
++#define PROLOGUE_OFFSET (BTI_INSNS + 2 + PAC_INSNS + 10)
+ 
+ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
+ 			  bool is_exception_cb, u64 arena_vm_start)
+@@ -308,7 +330,6 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
+ 	const u8 r8 = bpf2a64[BPF_REG_8];
+ 	const u8 r9 = bpf2a64[BPF_REG_9];
+ 	const u8 fp = bpf2a64[BPF_REG_FP];
+-	const u8 tcc = bpf2a64[TCALL_CNT];
+ 	const u8 fpb = bpf2a64[FP_BOTTOM];
+ 	const u8 arena_vm_base = bpf2a64[ARENA_VM_START];
+ 	const int idx0 = ctx->idx;
+@@ -359,7 +380,7 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
+ 		/* Save callee-saved registers */
+ 		emit(A64_PUSH(r6, r7, A64_SP), ctx);
+ 		emit(A64_PUSH(r8, r9, A64_SP), ctx);
+-		emit(A64_PUSH(fp, tcc, A64_SP), ctx);
++		prepare_bpf_tail_call_cnt(ctx);
+ 		emit(A64_PUSH(fpb, A64_R(28), A64_SP), ctx);
+ 	} else {
+ 		/*
+@@ -372,18 +393,15 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
+ 		 * callee-saved registers. The exception callback will not push
+ 		 * anything and re-use the main program's stack.
+ 		 *
+-		 * 10 registers are on the stack
++		 * 12 registers are on the stack
+ 		 */
+-		emit(A64_SUB_I(1, A64_SP, A64_FP, 80), ctx);
++		emit(A64_SUB_I(1, A64_SP, A64_FP, 96), ctx);
+ 	}
+ 
+ 	/* Set up BPF prog stack base register */
+ 	emit(A64_MOV(1, fp, A64_SP), ctx);
+ 
+ 	if (!ebpf_from_cbpf && is_main_prog) {
+-		/* Initialize tail_call_cnt */
+-		emit(A64_MOVZ(1, tcc, 0, 0), ctx);
+-
+ 		cur_offset = ctx->idx - idx0;
+ 		if (cur_offset != PROLOGUE_OFFSET) {
+ 			pr_err_once("PROLOGUE_OFFSET = %d, expected %d!\n",
+@@ -432,7 +450,8 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ 
+ 	const u8 tmp = bpf2a64[TMP_REG_1];
+ 	const u8 prg = bpf2a64[TMP_REG_2];
+-	const u8 tcc = bpf2a64[TCALL_CNT];
++	const u8 tcc = bpf2a64[TMP_REG_3];
++	const u8 ptr = bpf2a64[TCCNT_PTR];
+ 	const int idx0 = ctx->idx;
+ #define cur_offset (ctx->idx - idx0)
+ #define jmp_offset (out_offset - (cur_offset))
+@@ -449,11 +468,12 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ 	emit(A64_B_(A64_COND_CS, jmp_offset), ctx);
+ 
+ 	/*
+-	 * if (tail_call_cnt >= MAX_TAIL_CALL_CNT)
++	 * if ((*tail_call_cnt_ptr) >= MAX_TAIL_CALL_CNT)
+ 	 *     goto out;
+-	 * tail_call_cnt++;
++	 * (*tail_call_cnt_ptr)++;
+ 	 */
+ 	emit_a64_mov_i64(tmp, MAX_TAIL_CALL_CNT, ctx);
++	emit(A64_LDR64I(tcc, ptr, 0), ctx);
+ 	emit(A64_CMP(1, tcc, tmp), ctx);
+ 	emit(A64_B_(A64_COND_CS, jmp_offset), ctx);
+ 	emit(A64_ADD_I(1, tcc, tcc, 1), ctx);
+@@ -469,6 +489,9 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ 	emit(A64_LDR64(prg, tmp, prg), ctx);
+ 	emit(A64_CBZ(1, prg, jmp_offset), ctx);
+ 
++	/* Update tail_call_cnt if the slot is populated. */
++	emit(A64_STR64I(tcc, ptr, 0), ctx);
++
+ 	/* goto *(prog->bpf_func + prologue_offset); */
+ 	off = offsetof(struct bpf_prog, bpf_func);
+ 	emit_a64_mov_i64(tmp, off, ctx);
+@@ -721,6 +744,7 @@ static void build_epilogue(struct jit_ctx *ctx, bool is_exception_cb)
+ 	const u8 r8 = bpf2a64[BPF_REG_8];
+ 	const u8 r9 = bpf2a64[BPF_REG_9];
+ 	const u8 fp = bpf2a64[BPF_REG_FP];
++	const u8 ptr = bpf2a64[TCCNT_PTR];
+ 	const u8 fpb = bpf2a64[FP_BOTTOM];
+ 
+ 	/* We're done with BPF stack */
+@@ -738,7 +762,8 @@ static void build_epilogue(struct jit_ctx *ctx, bool is_exception_cb)
+ 	/* Restore x27 and x28 */
+ 	emit(A64_POP(fpb, A64_R(28), A64_SP), ctx);
+ 	/* Restore fs (x25) and x26 */
+-	emit(A64_POP(fp, A64_R(26), A64_SP), ctx);
++	emit(A64_POP(ptr, fp, A64_SP), ctx);
++	emit(A64_POP(ptr, fp, A64_SP), ctx);
+ 
+ 	/* Restore callee-saved register */
+ 	emit(A64_POP(r8, r9, A64_SP), ctx);
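The BPF JIT change above replaces the per-frame tail_call_cnt register with a pointer (TCCNT_PTR) into a slot owned by the main program's frame, so the limit is enforced across the whole subprogram chain rather than per frame. A small C model of the counter-through-a-pointer idea; MAX_TAIL_CALL_CNT mirrors the kernel's limit (33 in current kernels), everything else is invented for the example.

    #include <stdio.h>

    #define MAX_TAIL_CALL_CNT 33

    /* Every "program" receives a pointer to the one counter owned by the
     * main program's frame, mirroring what the new prologue sets up. */
    static int do_tail_call(int *tail_call_cnt_ptr)
    {
        if (*tail_call_cnt_ptr >= MAX_TAIL_CALL_CNT)
            return -1;                  /* fall through, limit reached */
        (*tail_call_cnt_ptr)++;
        return 0;                       /* jump to the target program */
    }

    int main(void)
    {
        int tail_call_cnt = 0;          /* lives in the main frame */
        int i, taken = 0;

        for (i = 0; i < 100; i++)
            if (do_tail_call(&tail_call_cnt) == 0)
                taken++;

        printf("%d of 100 tail calls taken\n", taken);  /* prints 33 */
        return 0;
    }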
+diff --git a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h
+index d741c3e9933a51..590a92cb54165c 100644
+--- a/arch/loongarch/include/asm/kvm_vcpu.h
++++ b/arch/loongarch/include/asm/kvm_vcpu.h
+@@ -76,6 +76,7 @@ static inline void kvm_restore_lasx(struct loongarch_fpu *fpu) { }
+ #endif
+ 
+ void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
++void kvm_reset_timer(struct kvm_vcpu *vcpu);
+ void kvm_save_timer(struct kvm_vcpu *vcpu);
+ void kvm_restore_timer(struct kvm_vcpu *vcpu);
+ 
+diff --git a/arch/loongarch/kvm/timer.c b/arch/loongarch/kvm/timer.c
+index 74a4b5c272d60e..bcc6b6d063d914 100644
+--- a/arch/loongarch/kvm/timer.c
++++ b/arch/loongarch/kvm/timer.c
+@@ -188,3 +188,10 @@ void kvm_save_timer(struct kvm_vcpu *vcpu)
+ 	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ESTAT);
+ 	preempt_enable();
+ }
++
++void kvm_reset_timer(struct kvm_vcpu *vcpu)
++{
++	write_gcsr_timercfg(0);
++	kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG, 0);
++	hrtimer_cancel(&vcpu->arch.swtimer);
++}
+diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
+index 0b53f4d9fddf91..9e8030d4512902 100644
+--- a/arch/loongarch/kvm/vcpu.c
++++ b/arch/loongarch/kvm/vcpu.c
+@@ -572,7 +572,7 @@ static int kvm_set_one_reg(struct kvm_vcpu *vcpu,
+ 				vcpu->kvm->arch.time_offset = (signed long)(v - drdtime());
+ 			break;
+ 		case KVM_REG_LOONGARCH_VCPU_RESET:
+-			vcpu->arch.st.guest_addr = 0;
++			kvm_reset_timer(vcpu);
+ 			memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending));
+ 			memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear));
+ 			break;
+diff --git a/arch/m68k/kernel/process.c b/arch/m68k/kernel/process.c
+index 2584e94e213468..fda7eac23f872d 100644
+--- a/arch/m68k/kernel/process.c
++++ b/arch/m68k/kernel/process.c
+@@ -117,7 +117,7 @@ asmlinkage int m68k_clone(struct pt_regs *regs)
+ {
+ 	/* regs will be equal to current_pt_regs() */
+ 	struct kernel_clone_args args = {
+-		.flags		= regs->d1 & ~CSIGNAL,
++		.flags		= (u32)(regs->d1) & ~CSIGNAL,
+ 		.pidfd		= (int __user *)regs->d3,
+ 		.child_tid	= (int __user *)regs->d4,
+ 		.parent_tid	= (int __user *)regs->d3,
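The one-cast m68k fix above guards against integer promotion: d1 holds a 32-bit value, and without the u32 cast a flags word with bit 31 set (CLONE_IO, for instance) would sign-extend into the upper half of the 64-bit kernel_clone_args.flags. A tiny demonstration of that promotion rule; the variable names and the exact flag value are only for illustration.

    #include <stdio.h>
    #include <stdint.h>

    #define CSIGNAL  0xffULL
    #define CLONE_IO 0x80000000UL       /* bit 31 set: wraps to a negative int32_t */

    int main(void)
    {
        int32_t d1 = (int32_t)(CLONE_IO | 0x11);

        uint64_t bad  = d1 & ~CSIGNAL;              /* promoted with sign extension */
        uint64_t good = (uint32_t)d1 & ~CSIGNAL;    /* cast first, as the patch does */

        printf("without cast: %#llx\n", (unsigned long long)bad);
        printf("with cast:    %#llx\n", (unsigned long long)good);
        return 0;
    }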
+diff --git a/arch/powerpc/crypto/Kconfig b/arch/powerpc/crypto/Kconfig
+index 1e201b7ae2fc60..bd9d77b0c92ec1 100644
+--- a/arch/powerpc/crypto/Kconfig
++++ b/arch/powerpc/crypto/Kconfig
+@@ -96,6 +96,7 @@ config CRYPTO_AES_PPC_SPE
+ 
+ config CRYPTO_AES_GCM_P10
+ 	tristate "Stitched AES/GCM acceleration support on P10 or later CPU (PPC)"
++	depends on BROKEN
+ 	depends on PPC64 && CPU_LITTLE_ENDIAN && VSX
+ 	select CRYPTO_LIB_AES
+ 	select CRYPTO_ALGAPI
+diff --git a/arch/powerpc/include/asm/asm-compat.h b/arch/powerpc/include/asm/asm-compat.h
+index 2bc53c646ccd7d..83848b534cb171 100644
+--- a/arch/powerpc/include/asm/asm-compat.h
++++ b/arch/powerpc/include/asm/asm-compat.h
+@@ -39,6 +39,12 @@
+ #define STDX_BE	stringify_in_c(stdbrx)
+ #endif
+ 
++#ifdef CONFIG_CC_IS_CLANG
++#define DS_FORM_CONSTRAINT "Z<>"
++#else
++#define DS_FORM_CONSTRAINT "YZ<>"
++#endif
++
+ #else /* 32-bit */
+ 
+ /* operations for longs and pointers */
+diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
+index 5bf6a4d49268c7..d1ea554c33ed7e 100644
+--- a/arch/powerpc/include/asm/atomic.h
++++ b/arch/powerpc/include/asm/atomic.h
+@@ -11,6 +11,7 @@
+ #include <asm/cmpxchg.h>
+ #include <asm/barrier.h>
+ #include <asm/asm-const.h>
++#include <asm/asm-compat.h>
+ 
+ /*
+  * Since *_return_relaxed and {cmp}xchg_relaxed are implemented with
+@@ -197,7 +198,7 @@ static __inline__ s64 arch_atomic64_read(const atomic64_t *v)
+ 	if (IS_ENABLED(CONFIG_PPC_KERNEL_PREFIXED))
+ 		__asm__ __volatile__("ld %0,0(%1)" : "=r"(t) : "b"(&v->counter));
+ 	else
+-		__asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m<>"(v->counter));
++		__asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : DS_FORM_CONSTRAINT (v->counter));
+ 
+ 	return t;
+ }
+@@ -208,7 +209,7 @@ static __inline__ void arch_atomic64_set(atomic64_t *v, s64 i)
+ 	if (IS_ENABLED(CONFIG_PPC_KERNEL_PREFIXED))
+ 		__asm__ __volatile__("std %1,0(%2)" : "=m"(v->counter) : "r"(i), "b"(&v->counter));
+ 	else
+-		__asm__ __volatile__("std%U0%X0 %1,%0" : "=m<>"(v->counter) : "r"(i));
++		__asm__ __volatile__("std%U0%X0 %1,%0" : "=" DS_FORM_CONSTRAINT (v->counter) : "r"(i));
+ }
+ 
+ #define ATOMIC64_OP(op, asm_op)						\
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index fd594bf6c6a9c5..4f5a46a77fa2b6 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -6,6 +6,7 @@
+ #include <asm/page.h>
+ #include <asm/extable.h>
+ #include <asm/kup.h>
++#include <asm/asm-compat.h>
+ 
+ #ifdef __powerpc64__
+ /* We use TASK_SIZE_USER64 as TASK_SIZE is not constant */
+@@ -92,12 +93,6 @@ __pu_failed:							\
+ 		: label)
+ #endif
+ 
+-#ifdef CONFIG_CC_IS_CLANG
+-#define DS_FORM_CONSTRAINT "Z<>"
+-#else
+-#define DS_FORM_CONSTRAINT "YZ<>"
+-#endif
+-
+ #ifdef __powerpc64__
+ #ifdef CONFIG_PPC_KERNEL_PREFIXED
+ #define __put_user_asm2_goto(x, ptr, label)			\
+diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
+index edc479a7c2bce3..0bd317b5647526 100644
+--- a/arch/powerpc/kernel/head_8xx.S
++++ b/arch/powerpc/kernel/head_8xx.S
+@@ -41,12 +41,12 @@
+ #include "head_32.h"
+ 
+ .macro compare_to_kernel_boundary scratch, addr
+-#if CONFIG_TASK_SIZE <= 0x80000000 && CONFIG_PAGE_OFFSET >= 0x80000000
++#if CONFIG_TASK_SIZE <= 0x80000000 && MODULES_VADDR >= 0x80000000
+ /* By simply checking Address >= 0x80000000, we know if its a kernel address */
+ 	not.	\scratch, \addr
+ #else
+ 	rlwinm	\scratch, \addr, 16, 0xfff8
+-	cmpli	cr0, \scratch, PAGE_OFFSET@h
++	cmpli	cr0, \scratch, TASK_SIZE@h
+ #endif
+ .endm
+ 
+@@ -404,7 +404,7 @@ FixupDAR:/* Entry point for dcbx workaround. */
+ 	mfspr	r10, SPRN_SRR0
+ 	mtspr	SPRN_MD_EPN, r10
+ 	rlwinm	r11, r10, 16, 0xfff8
+-	cmpli	cr1, r11, PAGE_OFFSET@h
++	cmpli	cr1, r11, TASK_SIZE@h
+ 	mfspr	r11, SPRN_M_TWB	/* Get level 1 table */
+ 	blt+	cr1, 3f
+ 
+diff --git a/arch/powerpc/kernel/vdso/gettimeofday.S b/arch/powerpc/kernel/vdso/gettimeofday.S
+index 48fc6658053aa4..894cb939cd2b31 100644
+--- a/arch/powerpc/kernel/vdso/gettimeofday.S
++++ b/arch/powerpc/kernel/vdso/gettimeofday.S
+@@ -38,11 +38,7 @@
+ 	.else
+ 	addi		r4, r5, VDSO_DATA_OFFSET
+ 	.endif
+-#ifdef __powerpc64__
+ 	bl		CFUNC(DOTSYM(\funct))
+-#else
+-	bl		\funct
+-#endif
+ 	PPC_LL		r0, PPC_MIN_STKFRM + PPC_LR_STKOFF(r1)
+ #ifdef __powerpc64__
+ 	PPC_LL		r2, PPC_MIN_STKFRM + STK_GOT(r1)
+diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
+index d93433e26dedb8..e5cc3b3a259f9a 100644
+--- a/arch/powerpc/mm/nohash/8xx.c
++++ b/arch/powerpc/mm/nohash/8xx.c
+@@ -154,11 +154,11 @@ unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
+ 
+ 	mmu_mapin_immr();
+ 
+-	mmu_mapin_ram_chunk(0, boundary, PAGE_KERNEL_TEXT, true);
++	mmu_mapin_ram_chunk(0, boundary, PAGE_KERNEL_X, true);
+ 	if (debug_pagealloc_enabled_or_kfence()) {
+ 		top = boundary;
+ 	} else {
+-		mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL_TEXT, true);
++		mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL_X, true);
+ 		mmu_mapin_ram_chunk(einittext8, top, PAGE_KERNEL, true);
+ 	}
+ 
+diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
+index fa0f535bbbf024..1d85b661750884 100644
+--- a/arch/riscv/include/asm/kvm_vcpu_pmu.h
++++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
+@@ -10,6 +10,7 @@
+ #define __KVM_VCPU_RISCV_PMU_H
+ 
+ #include <linux/perf/riscv_pmu.h>
++#include <asm/kvm_vcpu_insn.h>
+ #include <asm/sbi.h>
+ 
+ #ifdef CONFIG_RISCV_PMU_SBI
+@@ -64,11 +65,11 @@ struct kvm_pmu {
+ 
+ #if defined(CONFIG_32BIT)
+ #define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
+-{.base = CSR_CYCLEH,	.count = 31,	.func = kvm_riscv_vcpu_pmu_read_hpm }, \
+-{.base = CSR_CYCLE,	.count = 31,	.func = kvm_riscv_vcpu_pmu_read_hpm },
++{.base = CSR_CYCLEH,	.count = 32,	.func = kvm_riscv_vcpu_pmu_read_hpm }, \
++{.base = CSR_CYCLE,	.count = 32,	.func = kvm_riscv_vcpu_pmu_read_hpm },
+ #else
+ #define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
+-{.base = CSR_CYCLE,	.count = 31,	.func = kvm_riscv_vcpu_pmu_read_hpm },
++{.base = CSR_CYCLE,	.count = 32,	.func = kvm_riscv_vcpu_pmu_read_hpm },
+ #endif
+ 
+ int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid);
+@@ -104,8 +105,20 @@ void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu);
+ struct kvm_pmu {
+ };
+ 
++static inline int kvm_riscv_vcpu_pmu_read_legacy(struct kvm_vcpu *vcpu, unsigned int csr_num,
++						 unsigned long *val, unsigned long new_val,
++						 unsigned long wr_mask)
++{
++	if (csr_num == CSR_CYCLE || csr_num == CSR_INSTRET) {
++		*val = 0;
++		return KVM_INSN_CONTINUE_NEXT_SEPC;
++	} else {
++		return KVM_INSN_ILLEGAL_TRAP;
++	}
++}
++
+ #define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
+-{.base = 0,	.count = 0,	.func = NULL },
++{.base = CSR_CYCLE,	.count = 3,	.func = kvm_riscv_vcpu_pmu_read_legacy },
+ 
+ static inline void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) {}
+ static inline int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid)
+diff --git a/arch/riscv/kernel/perf_callchain.c b/arch/riscv/kernel/perf_callchain.c
+index 3348a61de7d998..2932791e938821 100644
+--- a/arch/riscv/kernel/perf_callchain.c
++++ b/arch/riscv/kernel/perf_callchain.c
+@@ -62,7 +62,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+ 	perf_callchain_store(entry, regs->epc);
+ 
+ 	fp = user_backtrace(entry, fp, regs->ra);
+-	while (fp && !(fp & 0x3) && entry->nr < entry->max_stack)
++	while (fp && !(fp & 0x7) && entry->nr < entry->max_stack)
+ 		fp = user_backtrace(entry, fp, 0);
+ }
+ 
+diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
+index bcf41d6e0df0e3..2707a51b082ca7 100644
+--- a/arch/riscv/kvm/vcpu_pmu.c
++++ b/arch/riscv/kvm/vcpu_pmu.c
+@@ -391,19 +391,9 @@ int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
+ static void kvm_pmu_clear_snapshot_area(struct kvm_vcpu *vcpu)
+ {
+ 	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+-	int snapshot_area_size = sizeof(struct riscv_pmu_snapshot_data);
+ 
+-	if (kvpmu->sdata) {
+-		if (kvpmu->snapshot_addr != INVALID_GPA) {
+-			memset(kvpmu->sdata, 0, snapshot_area_size);
+-			kvm_vcpu_write_guest(vcpu, kvpmu->snapshot_addr,
+-					     kvpmu->sdata, snapshot_area_size);
+-		} else {
+-			pr_warn("snapshot address invalid\n");
+-		}
+-		kfree(kvpmu->sdata);
+-		kvpmu->sdata = NULL;
+-	}
++	kfree(kvpmu->sdata);
++	kvpmu->sdata = NULL;
+ 	kvpmu->snapshot_addr = INVALID_GPA;
+ }
+ 
+diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
+index 62f409d4176e41..7de128be8db9bc 100644
+--- a/arch/riscv/kvm/vcpu_sbi.c
++++ b/arch/riscv/kvm/vcpu_sbi.c
+@@ -127,8 +127,8 @@ void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run)
+ 	run->riscv_sbi.args[3] = cp->a3;
+ 	run->riscv_sbi.args[4] = cp->a4;
+ 	run->riscv_sbi.args[5] = cp->a5;
+-	run->riscv_sbi.ret[0] = cp->a0;
+-	run->riscv_sbi.ret[1] = cp->a1;
++	run->riscv_sbi.ret[0] = SBI_ERR_NOT_SUPPORTED;
++	run->riscv_sbi.ret[1] = 0;
+ }
+ 
+ void kvm_riscv_vcpu_sbi_system_reset(struct kvm_vcpu *vcpu,
+diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h
+index 77e479d44f1e37..d0cb3ecd63908a 100644
+--- a/arch/s390/include/asm/ftrace.h
++++ b/arch/s390/include/asm/ftrace.h
+@@ -7,8 +7,23 @@
+ #define MCOUNT_INSN_SIZE	6
+ 
+ #ifndef __ASSEMBLY__
++#include <asm/stacktrace.h>
+ 
+-unsigned long return_address(unsigned int n);
++static __always_inline unsigned long return_address(unsigned int n)
++{
++	struct stack_frame *sf;
++
++	if (!n)
++		return (unsigned long)__builtin_return_address(0);
++
++	sf = (struct stack_frame *)current_frame_address();
++	do {
++		sf = (struct stack_frame *)sf->back_chain;
++		if (!sf)
++			return 0;
++	} while (--n);
++	return sf->gprs[8];
++}
+ #define ftrace_return_address(n) return_address(n)
+ 
+ void ftrace_caller(void);
+diff --git a/arch/s390/kernel/stacktrace.c b/arch/s390/kernel/stacktrace.c
+index 640363b2a1059e..9f59837d159e0c 100644
+--- a/arch/s390/kernel/stacktrace.c
++++ b/arch/s390/kernel/stacktrace.c
+@@ -162,22 +162,3 @@ void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
+ {
+ 	arch_stack_walk_user_common(consume_entry, cookie, NULL, regs, false);
+ }
+-
+-unsigned long return_address(unsigned int n)
+-{
+-	struct unwind_state state;
+-	unsigned long addr;
+-
+-	/* Increment to skip current stack entry */
+-	n++;
+-
+-	unwind_for_each_frame(&state, NULL, NULL, 0) {
+-		addr = unwind_get_return_address(&state);
+-		if (!addr)
+-			break;
+-		if (!n--)
+-			return addr;
+-	}
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(return_address);
+diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
+index 8fe4c2b07128e7..327c45c5013fea 100644
+--- a/arch/x86/coco/tdx/tdx.c
++++ b/arch/x86/coco/tdx/tdx.c
+@@ -7,6 +7,7 @@
+ #include <linux/cpufeature.h>
+ #include <linux/export.h>
+ #include <linux/io.h>
++#include <linux/kexec.h>
+ #include <asm/coco.h>
+ #include <asm/tdx.h>
+ #include <asm/vmx.h>
+@@ -14,6 +15,8 @@
+ #include <asm/insn.h>
+ #include <asm/insn-eval.h>
+ #include <asm/pgtable.h>
++#include <asm/set_memory.h>
++#include <asm/traps.h>
+ 
+ /* MMIO direction */
+ #define EPT_READ	0
+@@ -38,6 +41,8 @@
+ 
+ #define TDREPORT_SUBTYPE_0	0
+ 
++static atomic_long_t nr_shared;
++
+ /* Called from __tdx_hypercall() for unrecoverable failure */
+ noinstr void __noreturn __tdx_hypercall_failed(void)
+ {
+@@ -429,6 +434,11 @@ static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
+ 			return -EINVAL;
+ 	}
+ 
++	if (!fault_in_kernel_space(ve->gla)) {
++		WARN_ONCE(1, "Access to userspace address is not supported");
++		return -EINVAL;
++	}
++
+ 	/*
+ 	 * Reject EPT violation #VEs that split pages.
+ 	 *
+@@ -797,28 +807,124 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
+ 	return true;
+ }
+ 
+-static bool tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
+-					  bool enc)
++static int tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
++					 bool enc)
+ {
+ 	/*
+ 	 * Only handle shared->private conversion here.
+ 	 * See the comment in tdx_early_init().
+ 	 */
+-	if (enc)
+-		return tdx_enc_status_changed(vaddr, numpages, enc);
+-	return true;
++	if (enc && !tdx_enc_status_changed(vaddr, numpages, enc))
++		return -EIO;
++
++	return 0;
+ }
+ 
+-static bool tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
++static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
+ 					 bool enc)
+ {
+ 	/*
+ 	 * Only handle private->shared conversion here.
+ 	 * See the comment in tdx_early_init().
+ 	 */
+-	if (!enc)
+-		return tdx_enc_status_changed(vaddr, numpages, enc);
+-	return true;
++	if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc))
++		return -EIO;
++
++	if (enc)
++		atomic_long_sub(numpages, &nr_shared);
++	else
++		atomic_long_add(numpages, &nr_shared);
++
++	return 0;
++}
++
++/* Stop new private<->shared conversions */
++static void tdx_kexec_begin(void)
++{
++	if (!IS_ENABLED(CONFIG_KEXEC_CORE))
++		return;
++
++	/*
++	 * Crash kernel reaches here with interrupts disabled: can't wait for
++	 * conversions to finish.
++	 *
++	 * If race happened, just report and proceed.
++	 */
++	if (!set_memory_enc_stop_conversion())
++		pr_warn("Failed to stop shared<->private conversions\n");
++}
++
++/* Walk direct mapping and convert all shared memory back to private */
++static void tdx_kexec_finish(void)
++{
++	unsigned long addr, end;
++	long found = 0, shared;
++
++	if (!IS_ENABLED(CONFIG_KEXEC_CORE))
++		return;
++
++	lockdep_assert_irqs_disabled();
++
++	addr = PAGE_OFFSET;
++	end  = PAGE_OFFSET + get_max_mapped();
++
++	while (addr < end) {
++		unsigned long size;
++		unsigned int level;
++		pte_t *pte;
++
++		pte = lookup_address(addr, &level);
++		size = page_level_size(level);
++
++		if (pte && pte_decrypted(*pte)) {
++			int pages = size / PAGE_SIZE;
++
++			/*
++			 * Touching memory with shared bit set triggers implicit
++			 * conversion to shared.
++			 *
++			 * Make sure nobody touches the shared range from
++			 * now on.
++			 */
++			set_pte(pte, __pte(0));
++
++			/*
++			 * Memory encryption state persists across kexec.
++			 * If tdx_enc_status_changed() fails in the first
++			 * kernel, it leaves memory in an unknown state.
++			 *
++			 * If that memory remains shared, accessing it in the
++			 * *next* kernel through a private mapping will result
++			 * in an unrecoverable guest shutdown.
++			 *
++			 * The kdump kernel boot is not impacted as it uses
++			 * a pre-reserved memory range that is always private.
++			 * However, gathering crash information could lead to
++			 * a crash if it accesses unconverted memory through
++			 * a private mapping which is possible when accessing
++			 * that memory through /proc/vmcore, for example.
++			 *
++			 * In all cases, print error info in order to leave
++			 * enough bread crumbs for debugging.
++			 */
++			if (!tdx_enc_status_changed(addr, pages, true)) {
++				pr_err("Failed to unshare range %#lx-%#lx\n",
++				       addr, addr + size);
++			}
++
++			found += pages;
++		}
++
++		addr += size;
++	}
++
++	__flush_tlb_all();
++
++	shared = atomic_long_read(&nr_shared);
++	if (shared != found) {
++		pr_err("shared page accounting is off\n");
++		pr_err("nr_shared = %ld, nr_found = %ld\n", shared, found);
++	}
+ }
+ 
+ void __init tdx_early_init(void)
+@@ -880,6 +986,9 @@ void __init tdx_early_init(void)
+ 	x86_platform.guest.enc_cache_flush_required  = tdx_cache_flush_required;
+ 	x86_platform.guest.enc_tlb_flush_required    = tdx_tlb_flush_required;
+ 
++	x86_platform.guest.enc_kexec_begin	     = tdx_kexec_begin;
++	x86_platform.guest.enc_kexec_finish	     = tdx_kexec_finish;
++
+ 	/*
+ 	 * TDX intercepts the RDMSR to read the X2APIC ID in the parallel
+ 	 * bringup low level code. That raises #VE which cannot be handled
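tdx_kexec_finish() above walks the direct map, flips every still-shared page back to private, and cross-checks the number it found against the nr_shared counter maintained by the conversion paths. Here is the accounting idea in miniature; the page array, states and numbers are all invented for the example.

    #include <stdio.h>

    #define NR_PAGES 16
    enum { PRIVATE, SHARED };

    static int page_state[NR_PAGES];
    static long nr_shared;              /* bumped on every conversion */

    static void convert(int page, int to_shared)
    {
        page_state[page] = to_shared ? SHARED : PRIVATE;
        nr_shared += to_shared ? 1 : -1;
    }

    int main(void)
    {
        long found = 0;
        int i;

        convert(2, 1);
        convert(7, 1);
        convert(7, 0);
        convert(9, 1);

        /* "kexec finish": walk everything, unshare, count what we saw. */
        for (i = 0; i < NR_PAGES; i++) {
            if (page_state[i] == SHARED) {
                page_state[i] = PRIVATE;
                found++;
            }
        }

        if (found != nr_shared)
            printf("shared page accounting is off: %ld vs %ld\n",
                   nr_shared, found);
        else
            printf("converted %ld shared pages back to private\n", found);
        return 0;
    }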
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index dcac96133cb6a9..28757629d0d830 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3912,8 +3912,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
+ 			x86_pmu.pebs_aliases(event);
+ 	}
+ 
+-	if (needs_branch_stack(event) && is_sampling_event(event))
+-		event->hw.flags  |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
++	if (needs_branch_stack(event)) {
++		/* Avoid branch stack setup for counting events in SAMPLE READ */
++		if (is_sampling_event(event) ||
++		    !(event->attr.sample_type & PERF_SAMPLE_READ))
++			event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
++	}
+ 
+ 	if (branch_sample_counters(event)) {
+ 		struct perf_event *leader, *sibling;
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index b4aa8daa47738f..2959970dd10eb1 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -1606,6 +1606,7 @@ static void pt_event_stop(struct perf_event *event, int mode)
+ 	 * see comment in intel_pt_interrupt().
+ 	 */
+ 	WRITE_ONCE(pt->handle_nmi, 0);
++	barrier();
+ 
+ 	pt_config_stop(event);
+ 
+@@ -1657,11 +1658,10 @@ static long pt_event_snapshot_aux(struct perf_event *event,
+ 		return 0;
+ 
+ 	/*
+-	 * Here, handle_nmi tells us if the tracing is on
++	 * There is no PT interrupt in this mode, so stop the trace and it will
++	 * remain stopped while the buffer is copied.
+ 	 */
+-	if (READ_ONCE(pt->handle_nmi))
+-		pt_config_stop(event);
+-
++	pt_config_stop(event);
+ 	pt_read_offset(buf);
+ 	pt_update_head(pt);
+ 
+@@ -1673,11 +1673,10 @@ static long pt_event_snapshot_aux(struct perf_event *event,
+ 	ret = perf_output_copy_aux(&pt->handle, handle, from, to);
+ 
+ 	/*
+-	 * If the tracing was on when we turned up, restart it.
+-	 * Compiler barrier not needed as we couldn't have been
+-	 * preempted by anything that touches pt->handle_nmi.
++	 * Here, handle_nmi tells us if the tracing was on.
++	 * If the tracing was on, restart it.
+ 	 */
+-	if (pt->handle_nmi)
++	if (READ_ONCE(pt->handle_nmi))
+ 		pt_config_start(event);
+ 
+ 	return ret;
+diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
+index 768d73de0d098a..b4a851d27c7cb8 100644
+--- a/arch/x86/hyperv/ivm.c
++++ b/arch/x86/hyperv/ivm.c
+@@ -523,9 +523,9 @@ static int hv_mark_gpa_visibility(u16 count, const u64 pfn[],
+  * transition is complete, hv_vtom_set_host_visibility() marks the pages
+  * as "present" again.
+  */
+-static bool hv_vtom_clear_present(unsigned long kbuffer, int pagecount, bool enc)
++static int hv_vtom_clear_present(unsigned long kbuffer, int pagecount, bool enc)
+ {
+-	return !set_memory_np(kbuffer, pagecount);
++	return set_memory_np(kbuffer, pagecount);
+ }
+ 
+ /*
+@@ -536,20 +536,19 @@ static bool hv_vtom_clear_present(unsigned long kbuffer, int pagecount, bool enc
+  * with host. This function works as wrap of hv_mark_gpa_visibility()
+  * with memory base and size.
+  */
+-static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bool enc)
++static int hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bool enc)
+ {
+ 	enum hv_mem_host_visibility visibility = enc ?
+ 			VMBUS_PAGE_NOT_VISIBLE : VMBUS_PAGE_VISIBLE_READ_WRITE;
+ 	u64 *pfn_array;
+ 	phys_addr_t paddr;
++	int i, pfn, err;
+ 	void *vaddr;
+ 	int ret = 0;
+-	bool result = true;
+-	int i, pfn;
+ 
+ 	pfn_array = kmalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL);
+ 	if (!pfn_array) {
+-		result = false;
++		ret = -ENOMEM;
+ 		goto err_set_memory_p;
+ 	}
+ 
+@@ -568,10 +567,8 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
+ 		if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) {
+ 			ret = hv_mark_gpa_visibility(pfn, pfn_array,
+ 						     visibility);
+-			if (ret) {
+-				result = false;
++			if (ret)
+ 				goto err_free_pfn_array;
+-			}
+ 			pfn = 0;
+ 		}
+ 	}
+@@ -586,10 +583,11 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
+ 	 * order to avoid leaving the memory range in a "broken" state. Setting
+ 	 * the PRESENT bits shouldn't fail, but return an error if it does.
+ 	 */
+-	if (set_memory_p(kbuffer, pagecount))
+-		result = false;
++	err = set_memory_p(kbuffer, pagecount);
++	if (err && !ret)
++		ret = err;
+ 
+-	return result;
++	return ret;
+ }
+ 
+ static bool hv_vtom_tlb_flush_required(bool private)
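The Hyper-V hunks above convert the visibility callbacks from a bool "success" to an errno return, while still running the set_memory_p() cleanup and keeping whichever error happened first. The error-precedence part of that pattern, reduced to a stand-alone sketch with made-up helper names:

    #include <stdio.h>
    #include <errno.h>

    static int mark_visibility(int fail) { return fail ? -EIO : 0; }
    static int restore_present(int fail) { return fail ? -ENOMEM : 0; }

    /* Always attempt the cleanup step, but never let its error mask the
     * one that caused the failure in the first place. */
    static int set_host_visibility(int mark_fails, int restore_fails)
    {
        int ret = mark_visibility(mark_fails);
        int err = restore_present(restore_fails);

        if (err && !ret)
            ret = err;
        return ret;
    }

    int main(void)
    {
        printf("ok:            %d\n", set_host_visibility(0, 0));   /* 0 */
        printf("mark fails:    %d\n", set_host_visibility(1, 0));   /* -EIO */
        printf("both fail:     %d\n", set_host_visibility(1, 1));   /* still -EIO */
        printf("restore fails: %d\n", set_host_visibility(0, 1));   /* -ENOMEM */
        return 0;
    }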
+diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
+index 5af926c050f094..041efa274eebd6 100644
+--- a/arch/x86/include/asm/acpi.h
++++ b/arch/x86/include/asm/acpi.h
+@@ -167,6 +167,14 @@ void acpi_generic_reduced_hw_init(void);
+ void x86_default_set_root_pointer(u64 addr);
+ u64 x86_default_get_root_pointer(void);
+ 
++#ifdef CONFIG_XEN_PV
++/* A Xen PV domain needs a special acpi_os_ioremap() handling. */
++extern void __iomem * (*acpi_os_ioremap)(acpi_physical_address phys,
++					 acpi_size size);
++void __iomem *x86_acpi_os_ioremap(acpi_physical_address phys, acpi_size size);
++#define acpi_os_ioremap acpi_os_ioremap
++#endif
++
+ #else /* !CONFIG_ACPI */
+ 
+ #define acpi_lapic 0
+diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
+index c67fa6ad098aae..6ffa8b75f4cd33 100644
+--- a/arch/x86/include/asm/hardirq.h
++++ b/arch/x86/include/asm/hardirq.h
+@@ -69,7 +69,11 @@ extern u64 arch_irq_stat(void);
+ #define local_softirq_pending_ref       pcpu_hot.softirq_pending
+ 
+ #if IS_ENABLED(CONFIG_KVM_INTEL)
+-static inline void kvm_set_cpu_l1tf_flush_l1d(void)
++/*
++ * This function is called from noinstr interrupt contexts
++ * and must be inlined to not get instrumentation.
++ */
++static __always_inline void kvm_set_cpu_l1tf_flush_l1d(void)
+ {
+ 	__this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 1);
+ }
+@@ -84,7 +88,7 @@ static __always_inline bool kvm_get_cpu_l1tf_flush_l1d(void)
+ 	return __this_cpu_read(irq_stat.kvm_cpu_l1tf_flush_l1d);
+ }
+ #else /* !IS_ENABLED(CONFIG_KVM_INTEL) */
+-static inline void kvm_set_cpu_l1tf_flush_l1d(void) { }
++static __always_inline void kvm_set_cpu_l1tf_flush_l1d(void) { }
+ #endif /* IS_ENABLED(CONFIG_KVM_INTEL) */
+ 
+ #endif /* _ASM_X86_HARDIRQ_H */
+diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
+index d4f24499b256c8..ad5c68f0509d4d 100644
+--- a/arch/x86/include/asm/idtentry.h
++++ b/arch/x86/include/asm/idtentry.h
+@@ -212,8 +212,8 @@ __visible noinstr void func(struct pt_regs *regs,			\
+ 	irqentry_state_t state = irqentry_enter(regs);			\
+ 	u32 vector = (u32)(u8)error_code;				\
+ 									\
++	kvm_set_cpu_l1tf_flush_l1d();                                   \
+ 	instrumentation_begin();					\
+-	kvm_set_cpu_l1tf_flush_l1d();					\
+ 	run_irq_on_irqstack_cond(__##func, regs, vector);		\
+ 	instrumentation_end();						\
+ 	irqentry_exit(regs, state);					\
+@@ -250,7 +250,6 @@ static void __##func(struct pt_regs *regs);				\
+ 									\
+ static __always_inline void instr_##func(struct pt_regs *regs)		\
+ {									\
+-	kvm_set_cpu_l1tf_flush_l1d();					\
+ 	run_sysvec_on_irqstack_cond(__##func, regs);			\
+ }									\
+ 									\
+@@ -258,6 +257,7 @@ __visible noinstr void func(struct pt_regs *regs)			\
+ {									\
+ 	irqentry_state_t state = irqentry_enter(regs);			\
+ 									\
++	kvm_set_cpu_l1tf_flush_l1d();                                   \
+ 	instrumentation_begin();					\
+ 	instr_##func (regs);						\
+ 	instrumentation_end();						\
+@@ -288,7 +288,6 @@ static __always_inline void __##func(struct pt_regs *regs);		\
+ static __always_inline void instr_##func(struct pt_regs *regs)		\
+ {									\
+ 	__irq_enter_raw();						\
+-	kvm_set_cpu_l1tf_flush_l1d();					\
+ 	__##func (regs);						\
+ 	__irq_exit_raw();						\
+ }									\
+@@ -297,6 +296,7 @@ __visible noinstr void func(struct pt_regs *regs)			\
+ {									\
+ 	irqentry_state_t state = irqentry_enter(regs);			\
+ 									\
++	kvm_set_cpu_l1tf_flush_l1d();                                   \
+ 	instrumentation_begin();					\
+ 	instr_##func (regs);						\
+ 	instrumentation_end();						\
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index d0274b3be2c400..e18399d08fb178 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1708,7 +1708,8 @@ struct kvm_x86_ops {
+ 	void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
+ 	void (*enable_irq_window)(struct kvm_vcpu *vcpu);
+ 	void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);
+-	bool (*check_apicv_inhibit_reasons)(enum kvm_apicv_inhibit reason);
++
++	const bool x2apic_icr_is_split;
+ 	const unsigned long required_apicv_inhibits;
+ 	bool allow_apicv_in_x2apic_without_x2apic_virtualization;
+ 	void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index 65b8e5bb902cc3..e39311a89bf478 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -140,6 +140,11 @@ static inline int pte_young(pte_t pte)
+ 	return pte_flags(pte) & _PAGE_ACCESSED;
+ }
+ 
++static inline bool pte_decrypted(pte_t pte)
++{
++	return cc_mkdec(pte_val(pte)) == pte_val(pte);
++}
++
+ #define pmd_dirty pmd_dirty
+ static inline bool pmd_dirty(pmd_t pmd)
+ {
+diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
+index 9aee31862b4a8b..4b2abce2e3e7d6 100644
+--- a/arch/x86/include/asm/set_memory.h
++++ b/arch/x86/include/asm/set_memory.h
+@@ -49,8 +49,11 @@ int set_memory_wb(unsigned long addr, int numpages);
+ int set_memory_np(unsigned long addr, int numpages);
+ int set_memory_p(unsigned long addr, int numpages);
+ int set_memory_4k(unsigned long addr, int numpages);
++
++bool set_memory_enc_stop_conversion(void);
+ int set_memory_encrypted(unsigned long addr, int numpages);
+ int set_memory_decrypted(unsigned long addr, int numpages);
++
+ int set_memory_np_noalias(unsigned long addr, int numpages);
+ int set_memory_nonglobal(unsigned long addr, int numpages);
+ int set_memory_global(unsigned long addr, int numpages);
+diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
+index 6149eabe200f5b..213cf5379a5a6d 100644
+--- a/arch/x86/include/asm/x86_init.h
++++ b/arch/x86/include/asm/x86_init.h
+@@ -149,12 +149,22 @@ struct x86_init_acpi {
+  * @enc_status_change_finish	Notify HV after the encryption status of a range is changed
+  * @enc_tlb_flush_required	Returns true if a TLB flush is needed before changing page encryption status
+  * @enc_cache_flush_required	Returns true if a cache flush is needed before changing page encryption status
++ * @enc_kexec_begin		Begin the two-step process of converting shared memory back
++ *				to private. It stops the new conversions from being started
++ *				and waits in-flight conversions to finish, if possible.
++ * @enc_kexec_finish		Finish the two-step process of converting shared memory to
++ *				private. All memory is private after the call when
++ *				the function returns.
++ *				It is called on only one CPU while the others are shut down
++ *				and with interrupts disabled.
+  */
+ struct x86_guest {
+-	bool (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
+-	bool (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
++	int (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
++	int (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
+ 	bool (*enc_tlb_flush_required)(bool enc);
+ 	bool (*enc_cache_flush_required)(void);
++	void (*enc_kexec_begin)(void);
++	void (*enc_kexec_finish)(void);
+ };
+ 
+ /**
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 4bf82dbd2a6b5a..01e929a0f6ccc1 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -1862,3 +1862,14 @@ u64 x86_default_get_root_pointer(void)
+ {
+ 	return boot_params.acpi_rsdp_addr;
+ }
++
++#ifdef CONFIG_XEN_PV
++void __iomem *x86_acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
++{
++	return ioremap_cache(phys, size);
++}
++
++void __iomem * (*acpi_os_ioremap)(acpi_physical_address phys, acpi_size size) =
++	x86_acpi_os_ioremap;
++EXPORT_SYMBOL_GPL(acpi_os_ioremap);
++#endif
+diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
+index 27892e57c4ef93..6aeeb43dd3f6a8 100644
+--- a/arch/x86/kernel/cpu/sgx/main.c
++++ b/arch/x86/kernel/cpu/sgx/main.c
+@@ -475,24 +475,25 @@ struct sgx_epc_page *__sgx_alloc_epc_page(void)
+ {
+ 	struct sgx_epc_page *page;
+ 	int nid_of_current = numa_node_id();
+-	int nid = nid_of_current;
++	int nid_start, nid;
+ 
+-	if (node_isset(nid_of_current, sgx_numa_mask)) {
+-		page = __sgx_alloc_epc_page_from_node(nid_of_current);
+-		if (page)
+-			return page;
+-	}
+-
+-	/* Fall back to the non-local NUMA nodes: */
+-	while (true) {
+-		nid = next_node_in(nid, sgx_numa_mask);
+-		if (nid == nid_of_current)
+-			break;
++	/*
++	 * Try local node first. If it doesn't have an EPC section,
++	 * fall back to the non-local NUMA nodes.
++	 */
++	if (node_isset(nid_of_current, sgx_numa_mask))
++		nid_start = nid_of_current;
++	else
++		nid_start = next_node_in(nid_of_current, sgx_numa_mask);
+ 
++	nid = nid_start;
++	do {
+ 		page = __sgx_alloc_epc_page_from_node(nid);
+ 		if (page)
+ 			return page;
+-	}
++
++		nid = next_node_in(nid, sgx_numa_mask);
++	} while (nid != nid_start);
+ 
+ 	return ERR_PTR(-ENOMEM);
+ }
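The SGX hunk above rewrites the fallback loop so the local node is simply the first stop of a round-robin walk that visits every EPC node exactly once, even when the local node has no EPC section at all. A compact model of that do/while structure; next_node_in(), the node mask and the fake allocator are illustrative stand-ins.

    #include <stdio.h>

    #define NR_NODES 4

    /* Toy next_node_in(): next node set in the mask, wrapping around. */
    static int next_node_in(int node, unsigned int mask)
    {
        int i;

        for (i = 1; i <= NR_NODES; i++) {
            int n = (node + i) % NR_NODES;

            if (mask & (1u << n))
                return n;
        }
        return node;
    }

    /* Pretend only node 3 still has free EPC pages. */
    static int alloc_from_node(int nid)
    {
        return nid == 3;
    }

    int main(void)
    {
        unsigned int epc_nodes = 0x0a;      /* nodes 1 and 3 have EPC sections */
        int current_node = 0;               /* running on a node without EPC */
        int nid_start, nid;

        /* Start at the local node if it has EPC, otherwise at the next
         * one that does, then try every candidate exactly once. */
        if (epc_nodes & (1u << current_node))
            nid_start = current_node;
        else
            nid_start = next_node_in(current_node, epc_nodes);

        nid = nid_start;
        do {
            if (alloc_from_node(nid)) {
                printf("allocated EPC page from node %d\n", nid);
                return 0;
            }
            nid = next_node_in(nid, epc_nodes);
        } while (nid != nid_start);

        printf("no EPC pages available\n");
        return 1;
    }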
+diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
+index f06501445cd981..340af815565848 100644
+--- a/arch/x86/kernel/crash.c
++++ b/arch/x86/kernel/crash.c
+@@ -128,6 +128,18 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
+ #ifdef CONFIG_HPET_TIMER
+ 	hpet_disable();
+ #endif
++
++	/*
++	 * Non-crash kexec calls enc_kexec_begin() while scheduling is still
++	 * active. This allows the callback to wait until all in-flight
++	 * shared<->private conversions are complete. In a crash scenario,
++	 * enc_kexec_begin() gets called after all but one CPU have been shut
++	 * down and interrupts have been disabled. This allows the callback to
++	 * detect a race with the conversion and report it.
++	 */
++	x86_platform.guest.enc_kexec_begin();
++	x86_platform.guest.enc_kexec_finish();
++
+ 	crash_save_cpu(regs, safe_smp_processor_id());
+ }
+ 
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index a817ed0724d1e7..4b9d4557fc94a4 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -559,10 +559,11 @@ void early_setup_idt(void)
+  */
+ void __head startup_64_setup_gdt_idt(void)
+ {
++	struct desc_struct *gdt = (void *)(__force unsigned long)init_per_cpu_var(gdt_page.gdt);
+ 	void *handler = NULL;
+ 
+ 	struct desc_ptr startup_gdt_descr = {
+-		.address = (unsigned long)&RIP_REL_REF(init_per_cpu_var(gdt_page.gdt)),
++		.address = (unsigned long)&RIP_REL_REF(*gdt),
+ 		.size    = GDT_SIZE - 1,
+ 	};
+ 
+diff --git a/arch/x86/kernel/jailhouse.c b/arch/x86/kernel/jailhouse.c
+index df337860612d84..cd8ed1edbf9ee7 100644
+--- a/arch/x86/kernel/jailhouse.c
++++ b/arch/x86/kernel/jailhouse.c
+@@ -12,6 +12,7 @@
+ #include <linux/kernel.h>
+ #include <linux/reboot.h>
+ #include <linux/serial_8250.h>
++#include <linux/acpi.h>
+ #include <asm/apic.h>
+ #include <asm/io_apic.h>
+ #include <asm/acpi.h>
+diff --git a/arch/x86/kernel/mmconf-fam10h_64.c b/arch/x86/kernel/mmconf-fam10h_64.c
+index c94dec6a18345a..1f54eedc3015e9 100644
+--- a/arch/x86/kernel/mmconf-fam10h_64.c
++++ b/arch/x86/kernel/mmconf-fam10h_64.c
+@@ -9,6 +9,7 @@
+ #include <linux/pci.h>
+ #include <linux/dmi.h>
+ #include <linux/range.h>
++#include <linux/acpi.h>
+ 
+ #include <asm/pci-direct.h>
+ #include <linux/sort.h>
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 6d3d20e3e43a9b..d8d582b750d4f6 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -798,6 +798,27 @@ static long prctl_map_vdso(const struct vdso_image *image, unsigned long addr)
+ 
+ #define LAM_U57_BITS 6
+ 
++static void enable_lam_func(void *__mm)
++{
++	struct mm_struct *mm = __mm;
++
++	if (this_cpu_read(cpu_tlbstate.loaded_mm) == mm) {
++		write_cr3(__read_cr3() | mm->context.lam_cr3_mask);
++		set_tlbstate_lam_mode(mm);
++	}
++}
++
++static void mm_enable_lam(struct mm_struct *mm)
++{
++	/*
++	 * Even though the process must still be single-threaded at this
++	 * point, kernel threads may be using the mm.  IPI those kernel
++	 * threads if they exist.
++	 */
++	on_each_cpu_mask(mm_cpumask(mm), enable_lam_func, mm, true);
++	set_bit(MM_CONTEXT_LOCK_LAM, &mm->context.flags);
++}
++
+ static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits)
+ {
+ 	if (!cpu_feature_enabled(X86_FEATURE_LAM))
+@@ -814,6 +835,10 @@ static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits)
+ 	if (mmap_write_lock_killable(mm))
+ 		return -EINTR;
+ 
++	/*
++	 * MM_CONTEXT_LOCK_LAM is set on clone.  Prevent LAM from
++	 * being enabled unless the process is single threaded:
++	 */
+ 	if (test_bit(MM_CONTEXT_LOCK_LAM, &mm->context.flags)) {
+ 		mmap_write_unlock(mm);
+ 		return -EBUSY;
+@@ -830,9 +855,7 @@ static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits)
+ 		return -EINVAL;
+ 	}
+ 
+-	write_cr3(__read_cr3() | mm->context.lam_cr3_mask);
+-	set_tlbstate_lam_mode(mm);
+-	set_bit(MM_CONTEXT_LOCK_LAM, &mm->context.flags);
++	mm_enable_lam(mm);
+ 
+ 	mmap_write_unlock(mm);
+ 
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index f3130f762784a1..bb7a44af7efd19 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -12,6 +12,7 @@
+ #include <linux/delay.h>
+ #include <linux/objtool.h>
+ #include <linux/pgtable.h>
++#include <linux/kexec.h>
+ #include <acpi/reboot.h>
+ #include <asm/io.h>
+ #include <asm/apic.h>
+@@ -716,6 +717,14 @@ static void native_machine_emergency_restart(void)
+ 
+ void native_machine_shutdown(void)
+ {
++	/*
++	 * Call enc_kexec_begin() while all CPUs are still active and
++	 * interrupts are enabled. This will allow all in-flight memory
++	 * conversions to finish cleanly.
++	 */
++	if (kexec_in_progress)
++		x86_platform.guest.enc_kexec_begin();
++
+ 	/* Stop the cpus and apics */
+ #ifdef CONFIG_X86_IO_APIC
+ 	/*
+@@ -752,6 +761,9 @@ void native_machine_shutdown(void)
+ #ifdef CONFIG_X86_64
+ 	x86_platform.iommu_shutdown();
+ #endif
++
++	if (kexec_in_progress)
++		x86_platform.guest.enc_kexec_finish();
+ }
+ 
+ static void __machine_emergency_restart(int emergency)
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 0c35207320cb4f..390e4fe7433ea3 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -60,6 +60,7 @@
+ #include <linux/stackprotector.h>
+ #include <linux/cpuhotplug.h>
+ #include <linux/mc146818rtc.h>
++#include <linux/acpi.h>
+ 
+ #include <asm/acpi.h>
+ #include <asm/cacheinfo.h>
+diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
+index d5dc5a92635a82..0a2bbd674a6d99 100644
+--- a/arch/x86/kernel/x86_init.c
++++ b/arch/x86/kernel/x86_init.c
+@@ -8,6 +8,7 @@
+ #include <linux/ioport.h>
+ #include <linux/export.h>
+ #include <linux/pci.h>
++#include <linux/acpi.h>
+ 
+ #include <asm/acpi.h>
+ #include <asm/bios_ebda.h>
+@@ -134,10 +135,12 @@ struct x86_cpuinit_ops x86_cpuinit = {
+ 
+ static void default_nmi_init(void) { };
+ 
+-static bool enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool enc) { return true; }
+-static bool enc_status_change_finish_noop(unsigned long vaddr, int npages, bool enc) { return true; }
++static int enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool enc) { return 0; }
++static int enc_status_change_finish_noop(unsigned long vaddr, int npages, bool enc) { return 0; }
+ static bool enc_tlb_flush_required_noop(bool enc) { return false; }
+ static bool enc_cache_flush_required_noop(void) { return false; }
++static void enc_kexec_begin_noop(void) {}
++static void enc_kexec_finish_noop(void) {}
+ static bool is_private_mmio_noop(u64 addr) {return false; }
+ 
+ struct x86_platform_ops x86_platform __ro_after_init = {
+@@ -161,6 +164,8 @@ struct x86_platform_ops x86_platform __ro_after_init = {
+ 		.enc_status_change_finish  = enc_status_change_finish_noop,
+ 		.enc_tlb_flush_required	   = enc_tlb_flush_required_noop,
+ 		.enc_cache_flush_required  = enc_cache_flush_required_noop,
++		.enc_kexec_begin	   = enc_kexec_begin_noop,
++		.enc_kexec_finish	   = enc_kexec_finish_noop,
+ 	},
+ };
+ 
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index acd7d48100a1d3..523d02c50562f4 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -351,10 +351,8 @@ static void kvm_recalculate_logical_map(struct kvm_apic_map *new,
+ 	 * reversing the LDR calculation to get cluster of APICs, i.e. no
+ 	 * additional work is required.
+ 	 */
+-	if (apic_x2apic_mode(apic)) {
+-		WARN_ON_ONCE(ldr != kvm_apic_calc_x2apic_ldr(kvm_x2apic_id(apic)));
++	if (apic_x2apic_mode(apic))
+ 		return;
+-	}
+ 
+ 	if (WARN_ON_ONCE(!kvm_apic_map_get_logical_dest(new, ldr,
+ 							&cluster, &mask))) {
+@@ -2453,6 +2451,43 @@ void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu)
+ }
+ EXPORT_SYMBOL_GPL(kvm_lapic_set_eoi);
+ 
++#define X2APIC_ICR_RESERVED_BITS (GENMASK_ULL(31, 20) | GENMASK_ULL(17, 16) | BIT(13))
++
++int kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data)
++{
++	if (data & X2APIC_ICR_RESERVED_BITS)
++		return 1;
++
++	/*
++	 * The BUSY bit is reserved on both Intel and AMD in x2APIC mode, but
++	 * only AMD requires it to be zero, Intel essentially just ignores the
++	 * bit.  And if IPI virtualization (Intel) or x2AVIC (AMD) is enabled,
++	 * the CPU performs the reserved bits checks, i.e. the underlying CPU
++	 * behavior will "win".  Arbitrarily clear the BUSY bit, as there is no
++	 * sane way to provide consistent behavior with respect to hardware.
++	 */
++	data &= ~APIC_ICR_BUSY;
++
++	kvm_apic_send_ipi(apic, (u32)data, (u32)(data >> 32));
++	if (kvm_x86_ops.x2apic_icr_is_split) {
++		kvm_lapic_set_reg(apic, APIC_ICR, data);
++		kvm_lapic_set_reg(apic, APIC_ICR2, data >> 32);
++	} else {
++		kvm_lapic_set_reg64(apic, APIC_ICR, data);
++	}
++	trace_kvm_apic_write(APIC_ICR, data);
++	return 0;
++}
++
++static u64 kvm_x2apic_icr_read(struct kvm_lapic *apic)
++{
++	if (kvm_x86_ops.x2apic_icr_is_split)
++		return (u64)kvm_lapic_get_reg(apic, APIC_ICR) |
++		       (u64)kvm_lapic_get_reg(apic, APIC_ICR2) << 32;
++
++	return kvm_lapic_get_reg64(apic, APIC_ICR);
++}
++
+ /* emulate APIC access in a trap manner */
+ void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
+ {
+@@ -2470,7 +2505,7 @@ void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
+ 	 * maybe-unecessary write, and both are in the noise anyways.
+ 	 */
+ 	if (apic_x2apic_mode(apic) && offset == APIC_ICR)
+-		kvm_x2apic_icr_write(apic, kvm_lapic_get_reg64(apic, APIC_ICR));
++		WARN_ON_ONCE(kvm_x2apic_icr_write(apic, kvm_x2apic_icr_read(apic)));
+ 	else
+ 		kvm_lapic_reg_write(apic, offset, kvm_lapic_get_reg(apic, offset));
+ }
+@@ -2964,34 +2999,48 @@ static int kvm_apic_state_fixup(struct kvm_vcpu *vcpu,
+ 		struct kvm_lapic_state *s, bool set)
+ {
+ 	if (apic_x2apic_mode(vcpu->arch.apic)) {
++		u32 x2apic_id = kvm_x2apic_id(vcpu->arch.apic);
+ 		u32 *id = (u32 *)(s->regs + APIC_ID);
+ 		u32 *ldr = (u32 *)(s->regs + APIC_LDR);
+ 		u64 icr;
+ 
+ 		if (vcpu->kvm->arch.x2apic_format) {
+-			if (*id != vcpu->vcpu_id)
++			if (*id != x2apic_id)
+ 				return -EINVAL;
+ 		} else {
++			/*
++			 * Ignore the userspace value when setting APIC state.
++			 * KVM's model is that the x2APIC ID is readonly, e.g.
++			 * KVM only supports delivering interrupts to KVM's
++			 * version of the x2APIC ID.  However, for backwards
++			 * compatibility, don't reject attempts to set a
++			 * mismatched ID for userspace that hasn't opted into
++			 * x2apic_format.
++			 */
+ 			if (set)
+-				*id >>= 24;
++				*id = x2apic_id;
+ 			else
+-				*id <<= 24;
++				*id = x2apic_id << 24;
+ 		}
+ 
+ 		/*
+ 		 * In x2APIC mode, the LDR is fixed and based on the id.  And
+-		 * ICR is internally a single 64-bit register, but needs to be
+-		 * split to ICR+ICR2 in userspace for backwards compatibility.
++		 * if the ICR is _not_ split, ICR is internally a single 64-bit
++		 * register, but needs to be split to ICR+ICR2 in userspace for
++		 * backwards compatibility.
+ 		 */
+-		if (set) {
+-			*ldr = kvm_apic_calc_x2apic_ldr(*id);
+-
+-			icr = __kvm_lapic_get_reg(s->regs, APIC_ICR) |
+-			      (u64)__kvm_lapic_get_reg(s->regs, APIC_ICR2) << 32;
+-			__kvm_lapic_set_reg64(s->regs, APIC_ICR, icr);
+-		} else {
+-			icr = __kvm_lapic_get_reg64(s->regs, APIC_ICR);
+-			__kvm_lapic_set_reg(s->regs, APIC_ICR2, icr >> 32);
++		if (set)
++			*ldr = kvm_apic_calc_x2apic_ldr(x2apic_id);
++
++		if (!kvm_x86_ops.x2apic_icr_is_split) {
++			if (set) {
++				icr = __kvm_lapic_get_reg(s->regs, APIC_ICR) |
++				      (u64)__kvm_lapic_get_reg(s->regs, APIC_ICR2) << 32;
++				__kvm_lapic_set_reg64(s->regs, APIC_ICR, icr);
++			} else {
++				icr = __kvm_lapic_get_reg64(s->regs, APIC_ICR);
++				__kvm_lapic_set_reg(s->regs, APIC_ICR2, icr >> 32);
++			}
+ 		}
+ 	}
+ 
+@@ -3183,22 +3232,12 @@ int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr)
+ 	return 0;
+ }
+ 
+-int kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data)
+-{
+-	data &= ~APIC_ICR_BUSY;
+-
+-	kvm_apic_send_ipi(apic, (u32)data, (u32)(data >> 32));
+-	kvm_lapic_set_reg64(apic, APIC_ICR, data);
+-	trace_kvm_apic_write(APIC_ICR, data);
+-	return 0;
+-}
+-
+ static int kvm_lapic_msr_read(struct kvm_lapic *apic, u32 reg, u64 *data)
+ {
+ 	u32 low;
+ 
+ 	if (reg == APIC_ICR) {
+-		*data = kvm_lapic_get_reg64(apic, APIC_ICR);
++		*data = kvm_x2apic_icr_read(apic);
+ 		return 0;
+ 	}
+ 
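The lapic.c rework above keys the register layout off the new x2apic_icr_is_split flag: the SVM side keeps the ICR as an ICR/ICR2 pair while the VMX side treats it as a single 64-bit register, and the read/write helpers have to reassemble or split the value accordingly. A small stand-alone model of the two layouts and the combining logic; the struct and the sample value are invented for the example.

    #include <stdio.h>
    #include <stdint.h>

    /* Toy model of the two ICR layouts the patch has to reconcile:
     * either one 64-bit register, or a 32-bit ICR + ICR2 pair. */
    struct apic_regs {
        uint32_t icr;
        uint32_t icr2;
        uint64_t icr64;
    };

    static uint64_t icr_read(const struct apic_regs *r, int icr_is_split)
    {
        if (icr_is_split)
            return (uint64_t)r->icr | ((uint64_t)r->icr2 << 32);
        return r->icr64;
    }

    static void icr_write(struct apic_regs *r, int icr_is_split, uint64_t data)
    {
        if (icr_is_split) {
            r->icr  = (uint32_t)data;
            r->icr2 = (uint32_t)(data >> 32);
        } else {
            r->icr64 = data;
        }
    }

    int main(void)
    {
        struct apic_regs r = { 0 };
        uint64_t val = 0x000000ab000040fdULL;   /* arbitrary destination/vector bits */

        icr_write(&r, 1, val);
        printf("split read back:  %#llx\n", (unsigned long long)icr_read(&r, 1));

        icr_write(&r, 0, val);
        printf("single read back: %#llx\n", (unsigned long long)icr_read(&r, 0));
        return 0;
    }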
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 0357f7af559665..6d5da700268a5b 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -5051,6 +5051,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
+ 	.enable_nmi_window = svm_enable_nmi_window,
+ 	.enable_irq_window = svm_enable_irq_window,
+ 	.update_cr8_intercept = svm_update_cr8_intercept,
++
++	.x2apic_icr_is_split = true,
+ 	.set_virtual_apic_mode = avic_refresh_virtual_apic_mode,
+ 	.refresh_apicv_exec_ctrl = avic_refresh_apicv_exec_ctrl,
+ 	.apicv_post_state_restore = avic_apicv_post_state_restore,
+diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
+index 547fca3709febd..35c2c004dacd2b 100644
+--- a/arch/x86/kvm/vmx/main.c
++++ b/arch/x86/kvm/vmx/main.c
+@@ -89,6 +89,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
+ 	.enable_nmi_window = vmx_enable_nmi_window,
+ 	.enable_irq_window = vmx_enable_irq_window,
+ 	.update_cr8_intercept = vmx_update_cr8_intercept,
++
++	.x2apic_icr_is_split = false,
+ 	.set_virtual_apic_mode = vmx_set_virtual_apic_mode,
+ 	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
+ 	.refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
+diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
+index d404227c164d60..e46aba18600e7a 100644
+--- a/arch/x86/kvm/vmx/x86_ops.h
++++ b/arch/x86/kvm/vmx/x86_ops.h
+@@ -46,7 +46,6 @@ bool vmx_apic_init_signal_blocked(struct kvm_vcpu *vcpu);
+ void vmx_migrate_timers(struct kvm_vcpu *vcpu);
+ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
+ void vmx_apicv_pre_state_restore(struct kvm_vcpu *vcpu);
+-bool vmx_check_apicv_inhibit_reasons(enum kvm_apicv_inhibit reason);
+ void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
+ void vmx_hwapic_isr_update(int max_isr);
+ int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
+index 422602f6039b82..e7b67519ddb5d3 100644
+--- a/arch/x86/mm/mem_encrypt_amd.c
++++ b/arch/x86/mm/mem_encrypt_amd.c
+@@ -283,7 +283,7 @@ static void enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc)
+ #endif
+ }
+ 
+-static bool amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
++static int amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
+ {
+ 	/*
+ 	 * To maintain the security guarantees of SEV-SNP guests, make sure
+@@ -292,11 +292,11 @@ static bool amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool
+ 	if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && !enc)
+ 		snp_set_memory_shared(vaddr, npages);
+ 
+-	return true;
++	return 0;
+ }
+ 
+ /* Return true unconditionally: return value doesn't matter for the SEV side */
+-static bool amd_enc_status_change_finish(unsigned long vaddr, int npages, bool enc)
++static int amd_enc_status_change_finish(unsigned long vaddr, int npages, bool enc)
+ {
+ 	/*
+ 	 * After memory is mapped encrypted in the page table, validate it
+@@ -308,7 +308,7 @@ static bool amd_enc_status_change_finish(unsigned long vaddr, int npages, bool e
+ 	if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
+ 		enc_dec_hypercall(vaddr, npages << PAGE_SHIFT, enc);
+ 
+-	return true;
++	return 0;
+ }
+ 
+ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
+diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
+index 19fdfbb171ed6e..1356e25e6d1254 100644
+--- a/arch/x86/mm/pat/set_memory.c
++++ b/arch/x86/mm/pat/set_memory.c
+@@ -2196,7 +2196,8 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
+ 		cpa_flush(&cpa, x86_platform.guest.enc_cache_flush_required());
+ 
+ 	/* Notify hypervisor that we are about to set/clr encryption attribute. */
+-	if (!x86_platform.guest.enc_status_change_prepare(addr, numpages, enc))
++	ret = x86_platform.guest.enc_status_change_prepare(addr, numpages, enc);
++	if (ret)
+ 		goto vmm_fail;
+ 
+ 	ret = __change_page_attr_set_clr(&cpa, 1);
+@@ -2214,24 +2215,61 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
+ 		return ret;
+ 
+ 	/* Notify hypervisor that we have successfully set/clr encryption attribute. */
+-	if (!x86_platform.guest.enc_status_change_finish(addr, numpages, enc))
++	ret = x86_platform.guest.enc_status_change_finish(addr, numpages, enc);
++	if (ret)
+ 		goto vmm_fail;
+ 
+ 	return 0;
+ 
+ vmm_fail:
+-	WARN_ONCE(1, "CPA VMM failure to convert memory (addr=%p, numpages=%d) to %s.\n",
+-		  (void *)addr, numpages, enc ? "private" : "shared");
++	WARN_ONCE(1, "CPA VMM failure to convert memory (addr=%p, numpages=%d) to %s: %d\n",
++		  (void *)addr, numpages, enc ? "private" : "shared", ret);
+ 
+-	return -EIO;
++	return ret;
++}
++
++/*
++ * The lock serializes conversions between private and shared memory.
++ *
++ * It is taken for read on conversion. A write lock guarantees that no
++ * concurrent conversions are in progress.
++ */
++static DECLARE_RWSEM(mem_enc_lock);
++
++/*
++ * Stop new private<->shared conversions.
++ *
++ * Taking the exclusive mem_enc_lock waits for in-flight conversions to complete.
++ * The lock is not released to prevent new conversions from being started.
++ */
++bool set_memory_enc_stop_conversion(void)
++{
++	/*
++	 * In a crash scenario, sleep is not allowed. Try to take the lock.
++	 * Failure indicates that there is a race with the conversion.
++	 */
++	if (oops_in_progress)
++		return down_write_trylock(&mem_enc_lock);
++
++	down_write(&mem_enc_lock);
++
++	return true;
+ }
+ 
+ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
+ {
+-	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
+-		return __set_memory_enc_pgtable(addr, numpages, enc);
++	int ret = 0;
+ 
+-	return 0;
++	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
++		if (!down_read_trylock(&mem_enc_lock))
++			return -EBUSY;
++
++		ret = __set_memory_enc_pgtable(addr, numpages, enc);
++
++		up_read(&mem_enc_lock);
++	}
++
++	return ret;
+ }
+ 
+ int set_memory_encrypted(unsigned long addr, int numpages)
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 44ac64f3a047ca..a041d2ecd83804 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -503,9 +503,9 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
+ {
+ 	struct mm_struct *prev = this_cpu_read(cpu_tlbstate.loaded_mm);
+ 	u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
+-	unsigned long new_lam = mm_lam_cr3_mask(next);
+ 	bool was_lazy = this_cpu_read(cpu_tlbstate_shared.is_lazy);
+ 	unsigned cpu = smp_processor_id();
++	unsigned long new_lam;
+ 	u64 next_tlb_gen;
+ 	bool need_flush;
+ 	u16 new_asid;
+@@ -619,9 +619,7 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
+ 			cpumask_clear_cpu(cpu, mm_cpumask(prev));
+ 		}
+ 
+-		/*
+-		 * Start remote flushes and then read tlb_gen.
+-		 */
++		/* Start receiving IPIs and then read tlb_gen (and LAM below) */
+ 		if (next != &init_mm)
+ 			cpumask_set_cpu(cpu, mm_cpumask(next));
+ 		next_tlb_gen = atomic64_read(&next->context.tlb_gen);
+@@ -633,6 +631,7 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
+ 		barrier();
+ 	}
+ 
++	new_lam = mm_lam_cr3_mask(next);
+ 	set_tlbstate_lam_mode(next);
+ 	if (need_flush) {
+ 		this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 5159c7a2292294..2f28a9b34b91e4 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -273,7 +273,7 @@ struct jit_context {
+ /* Number of bytes emit_patch() needs to generate instructions */
+ #define X86_PATCH_SIZE		5
+ /* Number of bytes that will be skipped on tailcall */
+-#define X86_TAIL_CALL_OFFSET	(11 + ENDBR_INSN_SIZE)
++#define X86_TAIL_CALL_OFFSET	(12 + ENDBR_INSN_SIZE)
+ 
+ static void push_r12(u8 **pprog)
+ {
+@@ -403,6 +403,37 @@ static void emit_cfi(u8 **pprog, u32 hash)
+ 	*pprog = prog;
+ }
+ 
++static void emit_prologue_tail_call(u8 **pprog, bool is_subprog)
++{
++	u8 *prog = *pprog;
++
++	if (!is_subprog) {
++		/* cmp rax, MAX_TAIL_CALL_CNT */
++		EMIT4(0x48, 0x83, 0xF8, MAX_TAIL_CALL_CNT);
++		EMIT2(X86_JA, 6);        /* ja 6 */
++		/* rax is tail_call_cnt if <= MAX_TAIL_CALL_CNT.
++		 * case1: entry of main prog.
++		 * case2: tail callee of main prog.
++		 */
++		EMIT1(0x50);             /* push rax */
++		/* Make rax as tail_call_cnt_ptr. */
++		EMIT3(0x48, 0x89, 0xE0); /* mov rax, rsp */
++		EMIT2(0xEB, 1);          /* jmp 1 */
++		/* rax is tail_call_cnt_ptr if > MAX_TAIL_CALL_CNT.
++		 * case: tail callee of subprog.
++		 */
++		EMIT1(0x50);             /* push rax */
++		/* push tail_call_cnt_ptr */
++		EMIT1(0x50);             /* push rax */
++	} else { /* is_subprog */
++		/* rax is tail_call_cnt_ptr. */
++		EMIT1(0x50);             /* push rax */
++		EMIT1(0x50);             /* push rax */
++	}
++
++	*pprog = prog;
++}
++
+ /*
+  * Emit x86-64 prologue code for BPF program.
+  * bpf_tail_call helper will skip the first X86_TAIL_CALL_OFFSET bytes
+@@ -424,10 +455,10 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
+ 			/* When it's the entry of the whole tailcall context,
+ 			 * zeroing rax means initialising tail_call_cnt.
+ 			 */
+-			EMIT2(0x31, 0xC0); /* xor eax, eax */
++			EMIT3(0x48, 0x31, 0xC0); /* xor rax, rax */
+ 		else
+ 			/* Keep the same instruction layout. */
+-			EMIT2(0x66, 0x90); /* nop2 */
++			emit_nops(&prog, 3);     /* nop3 */
+ 	}
+ 	/* Exception callback receives FP as third parameter */
+ 	if (is_exception_cb) {
+@@ -453,7 +484,7 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
+ 	if (stack_depth)
+ 		EMIT3_off32(0x48, 0x81, 0xEC, round_up(stack_depth, 8));
+ 	if (tail_call_reachable)
+-		EMIT1(0x50);         /* push rax */
++		emit_prologue_tail_call(&prog, is_subprog);
+ 	*pprog = prog;
+ }
+ 
+@@ -589,13 +620,15 @@ static void emit_return(u8 **pprog, u8 *ip)
+ 	*pprog = prog;
+ }
+ 
++#define BPF_TAIL_CALL_CNT_PTR_STACK_OFF(stack)	(-16 - round_up(stack, 8))
++
+ /*
+  * Generate the following code:
+  *
+  * ... bpf_tail_call(void *ctx, struct bpf_array *array, u64 index) ...
+  *   if (index >= array->map.max_entries)
+  *     goto out;
+- *   if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
++ *   if ((*tcc_ptr)++ >= MAX_TAIL_CALL_CNT)
+  *     goto out;
+  *   prog = array->ptrs[index];
+  *   if (prog == NULL)
+@@ -608,7 +641,7 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
+ 					u32 stack_depth, u8 *ip,
+ 					struct jit_context *ctx)
+ {
+-	int tcc_off = -4 - round_up(stack_depth, 8);
++	int tcc_ptr_off = BPF_TAIL_CALL_CNT_PTR_STACK_OFF(stack_depth);
+ 	u8 *prog = *pprog, *start = *pprog;
+ 	int offset;
+ 
+@@ -630,16 +663,14 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
+ 	EMIT2(X86_JBE, offset);                   /* jbe out */
+ 
+ 	/*
+-	 * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
++	 * if ((*tcc_ptr)++ >= MAX_TAIL_CALL_CNT)
+ 	 *	goto out;
+ 	 */
+-	EMIT2_off32(0x8B, 0x85, tcc_off);         /* mov eax, dword ptr [rbp - tcc_off] */
+-	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);     /* cmp eax, MAX_TAIL_CALL_CNT */
++	EMIT3_off32(0x48, 0x8B, 0x85, tcc_ptr_off); /* mov rax, qword ptr [rbp - tcc_ptr_off] */
++	EMIT4(0x48, 0x83, 0x38, MAX_TAIL_CALL_CNT); /* cmp qword ptr [rax], MAX_TAIL_CALL_CNT */
+ 
+ 	offset = ctx->tail_call_indirect_label - (prog + 2 - start);
+ 	EMIT2(X86_JAE, offset);                   /* jae out */
+-	EMIT3(0x83, 0xC0, 0x01);                  /* add eax, 1 */
+-	EMIT2_off32(0x89, 0x85, tcc_off);         /* mov dword ptr [rbp - tcc_off], eax */
+ 
+ 	/* prog = array->ptrs[index]; */
+ 	EMIT4_off32(0x48, 0x8B, 0x8C, 0xD6,       /* mov rcx, [rsi + rdx * 8 + offsetof(...)] */
+@@ -654,6 +685,9 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
+ 	offset = ctx->tail_call_indirect_label - (prog + 2 - start);
+ 	EMIT2(X86_JE, offset);                    /* je out */
+ 
++	/* Inc tail_call_cnt if the slot is populated. */
++	EMIT4(0x48, 0x83, 0x00, 0x01);            /* add qword ptr [rax], 1 */
++
+ 	if (bpf_prog->aux->exception_boundary) {
+ 		pop_callee_regs(&prog, all_callee_regs_used);
+ 		pop_r12(&prog);
+@@ -663,6 +697,11 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
+ 			pop_r12(&prog);
+ 	}
+ 
++	/* Pop tail_call_cnt_ptr. */
++	EMIT1(0x58);                              /* pop rax */
++	/* Pop tail_call_cnt, if it's main prog.
++	 * Pop tail_call_cnt_ptr, if it's subprog.
++	 */
+ 	EMIT1(0x58);                              /* pop rax */
+ 	if (stack_depth)
+ 		EMIT3_off32(0x48, 0x81, 0xC4,     /* add rsp, sd */
+@@ -691,21 +730,19 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
+ 				      bool *callee_regs_used, u32 stack_depth,
+ 				      struct jit_context *ctx)
+ {
+-	int tcc_off = -4 - round_up(stack_depth, 8);
++	int tcc_ptr_off = BPF_TAIL_CALL_CNT_PTR_STACK_OFF(stack_depth);
+ 	u8 *prog = *pprog, *start = *pprog;
+ 	int offset;
+ 
+ 	/*
+-	 * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
++	 * if ((*tcc_ptr)++ >= MAX_TAIL_CALL_CNT)
+ 	 *	goto out;
+ 	 */
+-	EMIT2_off32(0x8B, 0x85, tcc_off);             /* mov eax, dword ptr [rbp - tcc_off] */
+-	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);         /* cmp eax, MAX_TAIL_CALL_CNT */
++	EMIT3_off32(0x48, 0x8B, 0x85, tcc_ptr_off);   /* mov rax, qword ptr [rbp - tcc_ptr_off] */
++	EMIT4(0x48, 0x83, 0x38, MAX_TAIL_CALL_CNT);   /* cmp qword ptr [rax], MAX_TAIL_CALL_CNT */
+ 
+ 	offset = ctx->tail_call_direct_label - (prog + 2 - start);
+ 	EMIT2(X86_JAE, offset);                       /* jae out */
+-	EMIT3(0x83, 0xC0, 0x01);                      /* add eax, 1 */
+-	EMIT2_off32(0x89, 0x85, tcc_off);             /* mov dword ptr [rbp - tcc_off], eax */
+ 
+ 	poke->tailcall_bypass = ip + (prog - start);
+ 	poke->adj_off = X86_TAIL_CALL_OFFSET;
+@@ -715,6 +752,9 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
+ 	emit_jump(&prog, (u8 *)poke->tailcall_target + X86_PATCH_SIZE,
+ 		  poke->tailcall_bypass);
+ 
++	/* Inc tail_call_cnt if the slot is populated. */
++	EMIT4(0x48, 0x83, 0x00, 0x01);                /* add qword ptr [rax], 1 */
++
+ 	if (bpf_prog->aux->exception_boundary) {
+ 		pop_callee_regs(&prog, all_callee_regs_used);
+ 		pop_r12(&prog);
+@@ -724,6 +764,11 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
+ 			pop_r12(&prog);
+ 	}
+ 
++	/* Pop tail_call_cnt_ptr. */
++	EMIT1(0x58);                                  /* pop rax */
++	/* Pop tail_call_cnt, if it's main prog.
++	 * Pop tail_call_cnt_ptr, if it's subprog.
++	 */
+ 	EMIT1(0x58);                                  /* pop rax */
+ 	if (stack_depth)
+ 		EMIT3_off32(0x48, 0x81, 0xC4, round_up(stack_depth, 8));
+@@ -1313,9 +1358,11 @@ static void emit_shiftx(u8 **pprog, u32 dst_reg, u8 src_reg, bool is64, u8 op)
+ 
+ #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
+ 
+-/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
+-#define RESTORE_TAIL_CALL_CNT(stack)				\
+-	EMIT3_off32(0x48, 0x8B, 0x85, -round_up(stack, 8) - 8)
++#define __LOAD_TCC_PTR(off)			\
++	EMIT3_off32(0x48, 0x8B, 0x85, off)
++/* mov rax, qword ptr [rbp - rounded_stack_depth - 16] */
++#define LOAD_TAIL_CALL_CNT_PTR(stack)				\
++	__LOAD_TCC_PTR(BPF_TAIL_CALL_CNT_PTR_STACK_OFF(stack))
+ 
+ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
+ 		  int oldproglen, struct jit_context *ctx, bool jmp_padding)
+@@ -2038,7 +2085,7 @@ st:			if (is_imm8(insn->off))
+ 
+ 			func = (u8 *) __bpf_call_base + imm32;
+ 			if (tail_call_reachable) {
+-				RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
++				LOAD_TAIL_CALL_CNT_PTR(bpf_prog->aux->stack_depth);
+ 				ip += 7;
+ 			}
+ 			if (!imm32)
+@@ -2713,6 +2760,10 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
+ 	return 0;
+ }
+ 
++/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
++#define LOAD_TRAMP_TAIL_CALL_CNT_PTR(stack)	\
++	__LOAD_TCC_PTR(-round_up(stack, 8) - 8)
++
+ /* Example:
+  * __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev);
+  * its 'struct btf_func_model' will be nr_args=2
+@@ -2833,7 +2884,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
+ 	 *                     [ ...        ]
+ 	 *                     [ stack_arg2 ]
+ 	 * RBP - arg_stack_off [ stack_arg1 ]
+-	 * RSP                 [ tail_call_cnt ] BPF_TRAMP_F_TAIL_CALL_CTX
++	 * RSP                 [ tail_call_cnt_ptr ] BPF_TRAMP_F_TAIL_CALL_CTX
+ 	 */
+ 
+ 	/* room for return value of orig_call or fentry prog */
+@@ -2962,10 +3013,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
+ 		save_args(m, &prog, arg_stack_off, true);
+ 
+ 		if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
+-			/* Before calling the original function, restore the
+-			 * tail_call_cnt from stack to rax.
++			/* Before calling the original function, load the
++			 * tail_call_cnt_ptr from stack to rax.
+ 			 */
+-			RESTORE_TAIL_CALL_CNT(stack_size);
++			LOAD_TRAMP_TAIL_CALL_CNT_PTR(stack_size);
+ 		}
+ 
+ 		if (flags & BPF_TRAMP_F_ORIG_STACK) {
+@@ -3024,10 +3075,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
+ 			goto cleanup;
+ 		}
+ 	} else if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
+-		/* Before running the original function, restore the
+-		 * tail_call_cnt from stack to rax.
++		/* Before running the original function, load the
++		 * tail_call_cnt_ptr from stack to rax.
+ 		 */
+-		RESTORE_TAIL_CALL_CNT(stack_size);
++		LOAD_TRAMP_TAIL_CALL_CNT_PTR(stack_size);
+ 	}
+ 
+ 	/* restore return value of orig_call or fentry prog back into RAX */
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index b33afb240601b0..98a9bb92d75c88 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -980,7 +980,7 @@ static void amd_rp_pme_suspend(struct pci_dev *dev)
+ 		return;
+ 
+ 	rp = pcie_find_root_port(dev);
+-	if (!rp->pm_cap)
++	if (!rp || !rp->pm_cap)
+ 		return;
+ 
+ 	rp->pme_support &= ~((PCI_PM_CAP_PME_D3hot|PCI_PM_CAP_PME_D3cold) >>
+@@ -994,7 +994,7 @@ static void amd_rp_pme_resume(struct pci_dev *dev)
+ 	u16 pmc;
+ 
+ 	rp = pcie_find_root_port(dev);
+-	if (!rp->pm_cap)
++	if (!rp || !rp->pm_cap)
+ 		return;
+ 
+ 	pci_read_config_word(rp, rp->pm_cap + PCI_PM_PMC, &pmc);
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 54e0d311dcc94a..9e9a122c2bcfac 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -2019,10 +2019,7 @@ void __init xen_reserve_special_pages(void)
+ 
+ void __init xen_pt_check_e820(void)
+ {
+-	if (xen_is_e820_reserved(xen_pt_base, xen_pt_size)) {
+-		xen_raw_console_write("Xen hypervisor allocated page table memory conflicts with E820 map\n");
+-		BUG();
+-	}
++	xen_chk_is_e820_usable(xen_pt_base, xen_pt_size, "page table");
+ }
+ 
+ static unsigned char dummy_mapping[PAGE_SIZE] __page_aligned_bss;
+diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
+index 6bcbdf3b7999fd..d8f175ee9248f6 100644
+--- a/arch/x86/xen/p2m.c
++++ b/arch/x86/xen/p2m.c
+@@ -70,6 +70,7 @@
+ #include <linux/memblock.h>
+ #include <linux/slab.h>
+ #include <linux/vmalloc.h>
++#include <linux/acpi.h>
+ 
+ #include <asm/cache.h>
+ #include <asm/setup.h>
+@@ -80,6 +81,7 @@
+ #include <asm/xen/hypervisor.h>
+ #include <xen/balloon.h>
+ #include <xen/grant_table.h>
++#include <xen/hvc-console.h>
+ 
+ #include "multicalls.h"
+ #include "xen-ops.h"
+@@ -793,6 +795,102 @@ int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+ 	return ret;
+ }
+ 
++/* Remapped non-RAM areas */
++#define NR_NONRAM_REMAP 4
++static struct nonram_remap {
++	phys_addr_t maddr;
++	phys_addr_t paddr;
++	size_t size;
++} xen_nonram_remap[NR_NONRAM_REMAP] __ro_after_init;
++static unsigned int nr_nonram_remap __ro_after_init;
++
++/*
++ * Do the real remapping of non-RAM regions as specified in the
++ * xen_nonram_remap[] array.
++ * In case of an error just crash the system.
++ */
++void __init xen_do_remap_nonram(void)
++{
++	unsigned int i;
++	unsigned int remapped = 0;
++	const struct nonram_remap *remap = xen_nonram_remap;
++	unsigned long pfn, mfn, end_pfn;
++
++	for (i = 0; i < nr_nonram_remap; i++) {
++		end_pfn = PFN_UP(remap->paddr + remap->size);
++		pfn = PFN_DOWN(remap->paddr);
++		mfn = PFN_DOWN(remap->maddr);
++		while (pfn < end_pfn) {
++			if (!set_phys_to_machine(pfn, mfn))
++				panic("Failed to set p2m mapping for pfn=%lx mfn=%lx\n",
++				       pfn, mfn);
++
++			pfn++;
++			mfn++;
++			remapped++;
++		}
++
++		remap++;
++	}
++
++	pr_info("Remapped %u non-RAM page(s)\n", remapped);
++}
++
++#ifdef CONFIG_ACPI
++/*
++ * Xen variant of acpi_os_ioremap() taking potentially remapped non-RAM
++ * regions into account.
++ * Any attempt to map an area crossing a remap boundary will produce a
++ * WARN() splat.
++ * phys is related to remap->maddr on input and will be rebased to remap->paddr.
++ */
++static void __iomem *xen_acpi_os_ioremap(acpi_physical_address phys,
++					 acpi_size size)
++{
++	unsigned int i;
++	const struct nonram_remap *remap = xen_nonram_remap;
++
++	for (i = 0; i < nr_nonram_remap; i++) {
++		if (phys + size > remap->maddr &&
++		    phys < remap->maddr + remap->size) {
++			WARN_ON(phys < remap->maddr ||
++				phys + size > remap->maddr + remap->size);
++			phys += remap->paddr - remap->maddr;
++			break;
++		}
++	}
++
++	return x86_acpi_os_ioremap(phys, size);
++}
++#endif /* CONFIG_ACPI */
++
++/*
++ * Add a new non-RAM remap entry.
++ * If no free entry is found, just crash the system.
++ */
++void __init xen_add_remap_nonram(phys_addr_t maddr, phys_addr_t paddr,
++				 unsigned long size)
++{
++	BUG_ON((maddr & ~PAGE_MASK) != (paddr & ~PAGE_MASK));
++
++	if (nr_nonram_remap == NR_NONRAM_REMAP) {
++		xen_raw_console_write("Number of required E820 entry remapping actions exceed maximum value\n");
++		BUG();
++	}
++
++#ifdef CONFIG_ACPI
++	/* Switch to the Xen acpi_os_ioremap() variant. */
++	if (nr_nonram_remap == 0)
++		acpi_os_ioremap = xen_acpi_os_ioremap;
++#endif
++
++	xen_nonram_remap[nr_nonram_remap].maddr = maddr;
++	xen_nonram_remap[nr_nonram_remap].paddr = paddr;
++	xen_nonram_remap[nr_nonram_remap].size = size;
++
++	nr_nonram_remap++;
++}
++
+ #ifdef CONFIG_XEN_DEBUG_FS
+ #include <linux/debugfs.h>
+ #include "debugfs.h"
+diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
+index 380591028cb8f4..dc822124cacb9c 100644
+--- a/arch/x86/xen/setup.c
++++ b/arch/x86/xen/setup.c
+@@ -15,12 +15,12 @@
+ #include <linux/cpuidle.h>
+ #include <linux/cpufreq.h>
+ #include <linux/memory_hotplug.h>
++#include <linux/acpi.h>
+ 
+ #include <asm/elf.h>
+ #include <asm/vdso.h>
+ #include <asm/e820/api.h>
+ #include <asm/setup.h>
+-#include <asm/acpi.h>
+ #include <asm/numa.h>
+ #include <asm/idtentry.h>
+ #include <asm/xen/hypervisor.h>
+@@ -47,6 +47,9 @@ bool xen_pv_pci_possible;
+ /* E820 map used during setting up memory. */
+ static struct e820_table xen_e820_table __initdata;
+ 
++/* Number of initially usable memory pages. */
++static unsigned long ini_nr_pages __initdata;
++
+ /*
+  * Buffer used to remap identity mapped pages. We only need the virtual space.
+  * The physical page behind this address is remapped as needed to different
+@@ -213,7 +216,7 @@ static int __init xen_free_mfn(unsigned long mfn)
+  * as a fallback if the remapping fails.
+  */
+ static void __init xen_set_identity_and_release_chunk(unsigned long start_pfn,
+-			unsigned long end_pfn, unsigned long nr_pages)
++						      unsigned long end_pfn)
+ {
+ 	unsigned long pfn, end;
+ 	int ret;
+@@ -221,7 +224,7 @@ static void __init xen_set_identity_and_release_chunk(unsigned long start_pfn,
+ 	WARN_ON(start_pfn > end_pfn);
+ 
+ 	/* Release pages first. */
+-	end = min(end_pfn, nr_pages);
++	end = min(end_pfn, ini_nr_pages);
+ 	for (pfn = start_pfn; pfn < end; pfn++) {
+ 		unsigned long mfn = pfn_to_mfn(pfn);
+ 
+@@ -342,15 +345,14 @@ static void __init xen_do_set_identity_and_remap_chunk(
+  * to Xen and not remapped.
+  */
+ static unsigned long __init xen_set_identity_and_remap_chunk(
+-	unsigned long start_pfn, unsigned long end_pfn, unsigned long nr_pages,
+-	unsigned long remap_pfn)
++	unsigned long start_pfn, unsigned long end_pfn, unsigned long remap_pfn)
+ {
+ 	unsigned long pfn;
+ 	unsigned long i = 0;
+ 	unsigned long n = end_pfn - start_pfn;
+ 
+ 	if (remap_pfn == 0)
+-		remap_pfn = nr_pages;
++		remap_pfn = ini_nr_pages;
+ 
+ 	while (i < n) {
+ 		unsigned long cur_pfn = start_pfn + i;
+@@ -359,19 +361,19 @@ static unsigned long __init xen_set_identity_and_remap_chunk(
+ 		unsigned long remap_range_size;
+ 
+ 		/* Do not remap pages beyond the current allocation */
+-		if (cur_pfn >= nr_pages) {
++		if (cur_pfn >= ini_nr_pages) {
+ 			/* Identity map remaining pages */
+ 			set_phys_range_identity(cur_pfn, cur_pfn + size);
+ 			break;
+ 		}
+-		if (cur_pfn + size > nr_pages)
+-			size = nr_pages - cur_pfn;
++		if (cur_pfn + size > ini_nr_pages)
++			size = ini_nr_pages - cur_pfn;
+ 
+ 		remap_range_size = xen_find_pfn_range(&remap_pfn);
+ 		if (!remap_range_size) {
+ 			pr_warn("Unable to find available pfn range, not remapping identity pages\n");
+ 			xen_set_identity_and_release_chunk(cur_pfn,
+-						cur_pfn + left, nr_pages);
++							   cur_pfn + left);
+ 			break;
+ 		}
+ 		/* Adjust size to fit in current e820 RAM region */
+@@ -398,18 +400,18 @@ static unsigned long __init xen_set_identity_and_remap_chunk(
+ }
+ 
+ static unsigned long __init xen_count_remap_pages(
+-	unsigned long start_pfn, unsigned long end_pfn, unsigned long nr_pages,
++	unsigned long start_pfn, unsigned long end_pfn,
+ 	unsigned long remap_pages)
+ {
+-	if (start_pfn >= nr_pages)
++	if (start_pfn >= ini_nr_pages)
+ 		return remap_pages;
+ 
+-	return remap_pages + min(end_pfn, nr_pages) - start_pfn;
++	return remap_pages + min(end_pfn, ini_nr_pages) - start_pfn;
+ }
+ 
+-static unsigned long __init xen_foreach_remap_area(unsigned long nr_pages,
++static unsigned long __init xen_foreach_remap_area(
+ 	unsigned long (*func)(unsigned long start_pfn, unsigned long end_pfn,
+-			      unsigned long nr_pages, unsigned long last_val))
++			      unsigned long last_val))
+ {
+ 	phys_addr_t start = 0;
+ 	unsigned long ret_val = 0;
+@@ -437,8 +439,7 @@ static unsigned long __init xen_foreach_remap_area(unsigned long nr_pages,
+ 				end_pfn = PFN_UP(entry->addr);
+ 
+ 			if (start_pfn < end_pfn)
+-				ret_val = func(start_pfn, end_pfn, nr_pages,
+-					       ret_val);
++				ret_val = func(start_pfn, end_pfn, ret_val);
+ 			start = end;
+ 		}
+ 	}
+@@ -495,6 +496,8 @@ void __init xen_remap_memory(void)
+ 	set_pte_mfn(buf, mfn_save, PAGE_KERNEL);
+ 
+ 	pr_info("Remapped %ld page(s)\n", remapped);
++
++	xen_do_remap_nonram();
+ }
+ 
+ static unsigned long __init xen_get_pages_limit(void)
+@@ -568,7 +571,7 @@ static void __init xen_ignore_unusable(void)
+ 	}
+ }
+ 
+-bool __init xen_is_e820_reserved(phys_addr_t start, phys_addr_t size)
++static bool __init xen_is_e820_reserved(phys_addr_t start, phys_addr_t size)
+ {
+ 	struct e820_entry *entry;
+ 	unsigned mapcnt;
+@@ -625,6 +628,111 @@ phys_addr_t __init xen_find_free_area(phys_addr_t size)
+ 	return 0;
+ }
+ 
++/*
++ * Swap a non-RAM E820 map entry with RAM above ini_nr_pages.
++ * Note that the E820 map is modified accordingly, but the P2M map isn't yet.
++ * The adaptation of the P2M must be deferred until page allocation is possible.
++ */
++static void __init xen_e820_swap_entry_with_ram(struct e820_entry *swap_entry)
++{
++	struct e820_entry *entry;
++	unsigned int mapcnt;
++	phys_addr_t mem_end = PFN_PHYS(ini_nr_pages);
++	phys_addr_t swap_addr, swap_size, entry_end;
++
++	swap_addr = PAGE_ALIGN_DOWN(swap_entry->addr);
++	swap_size = PAGE_ALIGN(swap_entry->addr - swap_addr + swap_entry->size);
++	entry = xen_e820_table.entries;
++
++	for (mapcnt = 0; mapcnt < xen_e820_table.nr_entries; mapcnt++) {
++		entry_end = entry->addr + entry->size;
++		if (entry->type == E820_TYPE_RAM && entry->size >= swap_size &&
++		    entry_end - swap_size >= mem_end) {
++			/* Reduce RAM entry by needed space (whole pages). */
++			entry->size -= swap_size;
++
++			/* Add new entry at the end of E820 map. */
++			entry = xen_e820_table.entries +
++				xen_e820_table.nr_entries;
++			xen_e820_table.nr_entries++;
++
++			/* Fill new entry (keep size and page offset). */
++			entry->type = swap_entry->type;
++			entry->addr = entry_end - swap_size +
++				      swap_addr - swap_entry->addr;
++			entry->size = swap_entry->size;
++
++			/* Convert old entry to RAM, align to pages. */
++			swap_entry->type = E820_TYPE_RAM;
++			swap_entry->addr = swap_addr;
++			swap_entry->size = swap_size;
++
++			/* Remember PFN<->MFN relation for P2M update. */
++			xen_add_remap_nonram(swap_addr, entry_end - swap_size,
++					     swap_size);
++
++			/* Order E820 table and merge entries. */
++			e820__update_table(&xen_e820_table);
++
++			return;
++		}
++
++		entry++;
++	}
++
++	xen_raw_console_write("No suitable area found for required E820 entry remapping action\n");
++	BUG();
++}
++
++/*
++ * Look for non-RAM memory types in a specific guest physical area and move
++ * those away if possible (ACPI NVS only for now).
++ */
++static void __init xen_e820_resolve_conflicts(phys_addr_t start,
++					      phys_addr_t size)
++{
++	struct e820_entry *entry;
++	unsigned int mapcnt;
++	phys_addr_t end;
++
++	if (!size)
++		return;
++
++	end = start + size;
++	entry = xen_e820_table.entries;
++
++	for (mapcnt = 0; mapcnt < xen_e820_table.nr_entries; mapcnt++) {
++		if (entry->addr >= end)
++			return;
++
++		if (entry->addr + entry->size > start &&
++		    entry->type == E820_TYPE_NVS)
++			xen_e820_swap_entry_with_ram(entry);
++
++		entry++;
++	}
++}
++
++/*
++ * Check for an area in physical memory to be usable for non-movable purposes.
++ * An area is considered usable if the used E820 map lists it as RAM or
++ * some other type which can be moved to higher PFNs while keeping the MFNs.
++ * In case the area is not usable, crash the system with an error message.
++ */
++void __init xen_chk_is_e820_usable(phys_addr_t start, phys_addr_t size,
++				   const char *component)
++{
++	xen_e820_resolve_conflicts(start, size);
++
++	if (!xen_is_e820_reserved(start, size))
++		return;
++
++	xen_raw_console_write("Xen hypervisor allocated ");
++	xen_raw_console_write(component);
++	xen_raw_console_write(" memory conflicts with E820 map\n");
++	BUG();
++}
++
+ /*
+  * Like memcpy, but with physical addresses for dest and src.
+  */
+@@ -684,20 +792,20 @@ static void __init xen_reserve_xen_mfnlist(void)
+  **/
+ char * __init xen_memory_setup(void)
+ {
+-	unsigned long max_pfn, pfn_s, n_pfns;
++	unsigned long pfn_s, n_pfns;
+ 	phys_addr_t mem_end, addr, size, chunk_size;
+ 	u32 type;
+ 	int rc;
+ 	struct xen_memory_map memmap;
+ 	unsigned long max_pages;
+ 	unsigned long extra_pages = 0;
++	unsigned long maxmem_pages;
+ 	int i;
+ 	int op;
+ 
+ 	xen_parse_512gb();
+-	max_pfn = xen_get_pages_limit();
+-	max_pfn = min(max_pfn, xen_start_info->nr_pages);
+-	mem_end = PFN_PHYS(max_pfn);
++	ini_nr_pages = min(xen_get_pages_limit(), xen_start_info->nr_pages);
++	mem_end = PFN_PHYS(ini_nr_pages);
+ 
+ 	memmap.nr_entries = ARRAY_SIZE(xen_e820_table.entries);
+ 	set_xen_guest_handle(memmap.buffer, xen_e820_table.entries);
+@@ -747,13 +855,35 @@ char * __init xen_memory_setup(void)
+ 	/* Make sure the Xen-supplied memory map is well-ordered. */
+ 	e820__update_table(&xen_e820_table);
+ 
++	/*
++	 * Check whether the kernel itself conflicts with the target E820 map.
++	 * Failing now is better than running into weird problems later due
++	 * to relocating (and even reusing) pages with kernel text or data.
++	 */
++	xen_chk_is_e820_usable(__pa_symbol(_text),
++			       __pa_symbol(_end) - __pa_symbol(_text),
++			       "kernel");
++
++	/*
++	 * Check for a conflict of the xen_start_info memory with the target
++	 * E820 map.
++	 */
++	xen_chk_is_e820_usable(__pa(xen_start_info), sizeof(*xen_start_info),
++			       "xen_start_info");
++
++	/*
++	 * Check for a conflict of the hypervisor supplied page tables with
++	 * the target E820 map.
++	 */
++	xen_pt_check_e820();
++
+ 	max_pages = xen_get_max_pages();
+ 
+ 	/* How many extra pages do we need due to remapping? */
+-	max_pages += xen_foreach_remap_area(max_pfn, xen_count_remap_pages);
++	max_pages += xen_foreach_remap_area(xen_count_remap_pages);
+ 
+-	if (max_pages > max_pfn)
+-		extra_pages += max_pages - max_pfn;
++	if (max_pages > ini_nr_pages)
++		extra_pages += max_pages - ini_nr_pages;
+ 
+ 	/*
+ 	 * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
+@@ -762,8 +892,8 @@ char * __init xen_memory_setup(void)
+ 	 * Make sure we have no memory above max_pages, as this area
+ 	 * isn't handled by the p2m management.
+ 	 */
+-	extra_pages = min3(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
+-			   extra_pages, max_pages - max_pfn);
++	maxmem_pages = EXTRA_MEM_RATIO * min(ini_nr_pages, PFN_DOWN(MAXMEM));
++	extra_pages = min3(maxmem_pages, extra_pages, max_pages - ini_nr_pages);
+ 	i = 0;
+ 	addr = xen_e820_table.entries[0].addr;
+ 	size = xen_e820_table.entries[0].size;
+@@ -819,23 +949,6 @@ char * __init xen_memory_setup(void)
+ 
+ 	e820__update_table(e820_table);
+ 
+-	/*
+-	 * Check whether the kernel itself conflicts with the target E820 map.
+-	 * Failing now is better than running into weird problems later due
+-	 * to relocating (and even reusing) pages with kernel text or data.
+-	 */
+-	if (xen_is_e820_reserved(__pa_symbol(_text),
+-			__pa_symbol(__bss_stop) - __pa_symbol(_text))) {
+-		xen_raw_console_write("Xen hypervisor allocated kernel memory conflicts with E820 map\n");
+-		BUG();
+-	}
+-
+-	/*
+-	 * Check for a conflict of the hypervisor supplied page tables with
+-	 * the target E820 map.
+-	 */
+-	xen_pt_check_e820();
+-
+ 	xen_reserve_xen_mfnlist();
+ 
+ 	/* Check for a conflict of the initrd with the target E820 map. */
+@@ -863,7 +976,7 @@ char * __init xen_memory_setup(void)
+ 	 * Set identity map on non-RAM pages and prepare remapping the
+ 	 * underlying RAM.
+ 	 */
+-	xen_foreach_remap_area(max_pfn, xen_set_identity_and_remap_chunk);
++	xen_foreach_remap_area(xen_set_identity_and_remap_chunk);
+ 
+ 	pr_info("Released %ld page(s)\n", xen_released_pages);
+ 
+diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
+index 79cf93f2c92f1d..a6a21dd0552700 100644
+--- a/arch/x86/xen/xen-ops.h
++++ b/arch/x86/xen/xen-ops.h
+@@ -43,8 +43,12 @@ void xen_mm_unpin_all(void);
+ #ifdef CONFIG_X86_64
+ void __init xen_relocate_p2m(void);
+ #endif
++void __init xen_do_remap_nonram(void);
++void __init xen_add_remap_nonram(phys_addr_t maddr, phys_addr_t paddr,
++				 unsigned long size);
+ 
+-bool __init xen_is_e820_reserved(phys_addr_t start, phys_addr_t size);
++void __init xen_chk_is_e820_usable(phys_addr_t start, phys_addr_t size,
++				   const char *component);
+ unsigned long __ref xen_chk_extra_mem(unsigned long pfn);
+ void __init xen_inv_extra_mem(void);
+ void __init xen_remap_memory(void);
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 4b88a54a9b76cb..05372a78cd5161 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2911,8 +2911,12 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[a_idx];
+ 
+ 	/* if a merge has already been setup, then proceed with that first */
+-	if (bfqq->new_bfqq)
+-		return bfqq->new_bfqq;
++	new_bfqq = bfqq->new_bfqq;
++	if (new_bfqq) {
++		while (new_bfqq->new_bfqq)
++			new_bfqq = new_bfqq->new_bfqq;
++		return new_bfqq;
++	}
+ 
+ 	/*
+ 	 * Check delayed stable merge for rotational or non-queueing
+@@ -3125,10 +3129,12 @@ void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 	bfq_put_queue(bfqq);
+ }
+ 
+-static void
+-bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+-		struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++static struct bfq_queue *bfq_merge_bfqqs(struct bfq_data *bfqd,
++					 struct bfq_io_cq *bic,
++					 struct bfq_queue *bfqq)
+ {
++	struct bfq_queue *new_bfqq = bfqq->new_bfqq;
++
+ 	bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
+ 		(unsigned long)new_bfqq->pid);
+ 	/* Save weight raising and idle window of the merged queues */
+@@ -3222,6 +3228,8 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+ 	bfq_reassign_last_bfqq(bfqq, new_bfqq);
+ 
+ 	bfq_release_process_ref(bfqd, bfqq);
++
++	return new_bfqq;
+ }
+ 
+ static bool bfq_allow_bio_merge(struct request_queue *q, struct request *rq,
+@@ -3257,14 +3265,8 @@ static bool bfq_allow_bio_merge(struct request_queue *q, struct request *rq,
+ 		 * fulfilled, i.e., bic can be redirected to new_bfqq
+ 		 * and bfqq can be put.
+ 		 */
+-		bfq_merge_bfqqs(bfqd, bfqd->bio_bic, bfqq,
+-				new_bfqq);
+-		/*
+-		 * If we get here, bio will be queued into new_queue,
+-		 * so use new_bfqq to decide whether bio and rq can be
+-		 * merged.
+-		 */
+-		bfqq = new_bfqq;
++		while (bfqq != new_bfqq)
++			bfqq = bfq_merge_bfqqs(bfqd, bfqd->bio_bic, bfqq);
+ 
+ 		/*
+ 		 * Change also bqfd->bio_bfqq, as
+@@ -5699,9 +5701,7 @@ bfq_do_early_stable_merge(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	 * state before killing it.
+ 	 */
+ 	bfqq->bic = bic;
+-	bfq_merge_bfqqs(bfqd, bic, bfqq, new_bfqq);
+-
+-	return new_bfqq;
++	return bfq_merge_bfqqs(bfqd, bic, bfqq);
+ }
+ 
+ /*
+@@ -6156,6 +6156,7 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
+ 	bool waiting, idle_timer_disabled = false;
+ 
+ 	if (new_bfqq) {
++		struct bfq_queue *old_bfqq = bfqq;
+ 		/*
+ 		 * Release the request's reference to the old bfqq
+ 		 * and make sure one is taken to the shared queue.
+@@ -6172,18 +6173,18 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
+ 		 * new_bfqq.
+ 		 */
+ 		if (bic_to_bfqq(RQ_BIC(rq), true,
+-				bfq_actuator_index(bfqd, rq->bio)) == bfqq)
+-			bfq_merge_bfqqs(bfqd, RQ_BIC(rq),
+-					bfqq, new_bfqq);
++				bfq_actuator_index(bfqd, rq->bio)) == bfqq) {
++			while (bfqq != new_bfqq)
++				bfqq = bfq_merge_bfqqs(bfqd, RQ_BIC(rq), bfqq);
++		}
+ 
+-		bfq_clear_bfqq_just_created(bfqq);
++		bfq_clear_bfqq_just_created(old_bfqq);
+ 		/*
+ 		 * rq is about to be enqueued into new_bfqq,
+ 		 * release rq reference on bfqq
+ 		 */
+-		bfq_put_queue(bfqq);
++		bfq_put_queue(old_bfqq);
+ 		rq->elv.priv[1] = new_bfqq;
+-		bfqq = new_bfqq;
+ 	}
+ 
+ 	bfq_update_io_thinktime(bfqd, bfqq);
+@@ -6721,7 +6722,7 @@ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
+ {
+ 	bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
+ 
+-	if (bfqq_process_refs(bfqq) == 1) {
++	if (bfqq_process_refs(bfqq) == 1 && !bfqq->new_bfqq) {
+ 		bfqq->pid = current->pid;
+ 		bfq_clear_bfqq_coop(bfqq);
+ 		bfq_clear_bfqq_split_coop(bfqq);
+@@ -6819,6 +6820,31 @@ static void bfq_prepare_request(struct request *rq)
+ 	rq->elv.priv[0] = rq->elv.priv[1] = NULL;
+ }
+ 
++static struct bfq_queue *bfq_waker_bfqq(struct bfq_queue *bfqq)
++{
++	struct bfq_queue *new_bfqq = bfqq->new_bfqq;
++	struct bfq_queue *waker_bfqq = bfqq->waker_bfqq;
++
++	if (!waker_bfqq)
++		return NULL;
++
++	while (new_bfqq) {
++		if (new_bfqq == waker_bfqq) {
++			/*
++			 * If waker_bfqq is in the merge chain, and current
++			 * is the only procress.
++			 * is the only process.
++			if (bfqq_process_refs(waker_bfqq) == 1)
++				return NULL;
++			break;
++		}
++
++		new_bfqq = new_bfqq->new_bfqq;
++	}
++
++	return waker_bfqq;
++}
++
+ /*
+  * If needed, init rq, allocate bfq data structures associated with
+  * rq, and increment reference counters in the destination bfq_queue
+@@ -6880,7 +6906,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
+ 		/* If the queue was seeky for too long, break it apart. */
+ 		if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq) &&
+ 			!bic->bfqq_data[a_idx].stably_merged) {
+-			struct bfq_queue *old_bfqq = bfqq;
++			struct bfq_queue *waker_bfqq = bfq_waker_bfqq(bfqq);
+ 
+ 			/* Update bic before losing reference to bfqq */
+ 			if (bfq_bfqq_in_large_burst(bfqq))
+@@ -6900,7 +6926,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
+ 				bfqq_already_existing = true;
+ 
+ 			if (!bfqq_already_existing) {
+-				bfqq->waker_bfqq = old_bfqq->waker_bfqq;
++				bfqq->waker_bfqq = waker_bfqq;
+ 				bfqq->tentative_waker_bfqq = NULL;
+ 
+ 				/*
+@@ -6910,7 +6936,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
+ 				 * woken_list of the waker. See
+ 				 * bfq_check_waker for details.
+ 				 */
+-				if (bfqq->waker_bfqq)
++				if (waker_bfqq)
+ 					hlist_add_head(&bfqq->woken_list_node,
+ 						       &bfqq->waker_bfqq->woken_list);
+ 			}
+@@ -6932,7 +6958,8 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
+ 	 * addition, if the queue has also just been split, we have to
+ 	 * resume its state.
+ 	 */
+-	if (likely(bfqq != &bfqd->oom_bfqq) && bfqq_process_refs(bfqq) == 1) {
++	if (likely(bfqq != &bfqd->oom_bfqq) && !bfqq->new_bfqq &&
++	    bfqq_process_refs(bfqq) == 1) {
+ 		bfqq->bic = bic;
+ 		if (split) {
+ 			/*
+diff --git a/block/partitions/core.c b/block/partitions/core.c
+index ab76e64f0f6c3b..5bd7a603092ea9 100644
+--- a/block/partitions/core.c
++++ b/block/partitions/core.c
+@@ -555,9 +555,11 @@ static bool blk_add_partition(struct gendisk *disk,
+ 
+ 	part = add_partition(disk, p, from, size, state->parts[p].flags,
+ 			     &state->parts[p].info);
+-	if (IS_ERR(part) && PTR_ERR(part) != -ENXIO) {
+-		printk(KERN_ERR " %s: p%d could not be added: %pe\n",
+-		       disk->disk_name, p, part);
++	if (IS_ERR(part)) {
++		if (PTR_ERR(part) != -ENXIO) {
++			printk(KERN_ERR " %s: p%d could not be added: %pe\n",
++			       disk->disk_name, p, part);
++		}
+ 		return true;
+ 	}
+ 
+diff --git a/crypto/asymmetric_keys/asymmetric_type.c b/crypto/asymmetric_keys/asymmetric_type.c
+index a5da8ccd353ef7..43af5fa510c09f 100644
+--- a/crypto/asymmetric_keys/asymmetric_type.c
++++ b/crypto/asymmetric_keys/asymmetric_type.c
+@@ -60,17 +60,18 @@ struct key *find_asymmetric_key(struct key *keyring,
+ 	char *req, *p;
+ 	int len;
+ 
+-	WARN_ON(!id_0 && !id_1 && !id_2);
+-
+ 	if (id_0) {
+ 		lookup = id_0->data;
+ 		len = id_0->len;
+ 	} else if (id_1) {
+ 		lookup = id_1->data;
+ 		len = id_1->len;
+-	} else {
++	} else if (id_2) {
+ 		lookup = id_2->data;
+ 		len = id_2->len;
++	} else {
++		WARN_ON(1);
++		return ERR_PTR(-EINVAL);
+ 	}
+ 
+ 	/* Construct an identifier "id:<keyid>". */
+diff --git a/crypto/xor.c b/crypto/xor.c
+index 8e72e5d5db0ded..56aa3169e87171 100644
+--- a/crypto/xor.c
++++ b/crypto/xor.c
+@@ -83,33 +83,30 @@ static void __init
+ do_xor_speed(struct xor_block_template *tmpl, void *b1, void *b2)
+ {
+ 	int speed;
+-	int i, j;
+-	ktime_t min, start, diff;
++	unsigned long reps;
++	ktime_t min, start, t0;
+ 
+ 	tmpl->next = template_list;
+ 	template_list = tmpl;
+ 
+ 	preempt_disable();
+ 
+-	min = (ktime_t)S64_MAX;
+-	for (i = 0; i < 3; i++) {
+-		start = ktime_get();
+-		for (j = 0; j < REPS; j++) {
+-			mb(); /* prevent loop optimization */
+-			tmpl->do_2(BENCH_SIZE, b1, b2);
+-			mb();
+-		}
+-		diff = ktime_sub(ktime_get(), start);
+-		if (diff < min)
+-			min = diff;
+-	}
++	reps = 0;
++	t0 = ktime_get();
++	/* delay start until time has advanced */
++	while ((start = ktime_get()) == t0)
++		cpu_relax();
++	do {
++		mb(); /* prevent loop optimization */
++		tmpl->do_2(BENCH_SIZE, b1, b2);
++		mb();
++	} while (reps++ < REPS || (t0 = ktime_get()) == start);
++	min = ktime_sub(t0, start);
+ 
+ 	preempt_enable();
+ 
+ 	// bytes/ns == GB/s, multiply by 1000 to get MB/s [not MiB/s]
+-	if (!min)
+-		min = 1;
+-	speed = (1000 * REPS * BENCH_SIZE) / (unsigned int)ktime_to_ns(min);
++	speed = (1000 * reps * BENCH_SIZE) / (unsigned int)ktime_to_ns(min);
+ 	tmpl->speed = speed;
+ 
+ 	pr_info("   %-16s: %5d MB/sec\n", tmpl->name, speed);
+diff --git a/drivers/acpi/acpica/exsystem.c b/drivers/acpi/acpica/exsystem.c
+index f665ffd9a396cf..2c384bd52b9c4b 100644
+--- a/drivers/acpi/acpica/exsystem.c
++++ b/drivers/acpi/acpica/exsystem.c
+@@ -133,14 +133,15 @@ acpi_status acpi_ex_system_do_stall(u32 how_long_us)
+ 		 * (ACPI specifies 100 usec as max, but this gives some slack in
+ 		 * order to support existing BIOSs)
+ 		 */
+-		ACPI_ERROR((AE_INFO,
+-			    "Time parameter is too large (%u)", how_long_us));
++		ACPI_ERROR_ONCE((AE_INFO,
++				 "Time parameter is too large (%u)",
++				 how_long_us));
+ 		status = AE_AML_OPERAND_VALUE;
+ 	} else {
+ 		if (how_long_us > 100) {
+-			ACPI_WARNING((AE_INFO,
+-				      "Time parameter %u us > 100 us violating ACPI spec, please fix the firmware.",
+-				      how_long_us));
++			ACPI_WARNING_ONCE((AE_INFO,
++					   "Time parameter %u us > 100 us violating ACPI spec, please fix the firmware.",
++					   how_long_us));
+ 		}
+ 		acpi_os_stall(how_long_us);
+ 	}
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 1d857978f5f407..2a588e4ed4af44 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -170,8 +170,11 @@ show_cppc_data(cppc_get_perf_ctrs, cppc_perf_fb_ctrs, wraparound_time);
+ #define GET_BIT_WIDTH(reg) ((reg)->access_width ? (8 << ((reg)->access_width - 1)) : (reg)->bit_width)
+ 
+ /* Shift and apply the mask for CPC reads/writes */
+-#define MASK_VAL(reg, val) (((val) >> (reg)->bit_offset) & 			\
++#define MASK_VAL_READ(reg, val) (((val) >> (reg)->bit_offset) &				\
+ 					GENMASK(((reg)->bit_width) - 1, 0))
++#define MASK_VAL_WRITE(reg, prev_val, val)						\
++	((((val) & GENMASK(((reg)->bit_width) - 1, 0)) << (reg)->bit_offset) |		\
++	((prev_val) & ~(GENMASK(((reg)->bit_width) - 1, 0) << (reg)->bit_offset)))	\
+ 
+ static ssize_t show_feedback_ctrs(struct kobject *kobj,
+ 		struct kobj_attribute *attr, char *buf)
+@@ -857,6 +860,7 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
+ 
+ 	/* Store CPU Logical ID */
+ 	cpc_ptr->cpu_id = pr->id;
++	spin_lock_init(&cpc_ptr->rmw_lock);
+ 
+ 	/* Parse PSD data for this CPU */
+ 	ret = acpi_get_psd(cpc_ptr, handle);
+@@ -1062,7 +1066,7 @@ static int cpc_read(int cpu, struct cpc_register_resource *reg_res, u64 *val)
+ 	}
+ 
+ 	if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
+-		*val = MASK_VAL(reg, *val);
++		*val = MASK_VAL_READ(reg, *val);
+ 
+ 	return 0;
+ }
+@@ -1071,9 +1075,11 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ {
+ 	int ret_val = 0;
+ 	int size;
++	u64 prev_val;
+ 	void __iomem *vaddr = NULL;
+ 	int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
+ 	struct cpc_reg *reg = &reg_res->cpc_entry.reg;
++	struct cpc_desc *cpc_desc;
+ 
+ 	size = GET_BIT_WIDTH(reg);
+ 
+@@ -1106,8 +1112,34 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ 		return acpi_os_write_memory((acpi_physical_address)reg->address,
+ 				val, size);
+ 
+-	if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
+-		val = MASK_VAL(reg, val);
++	if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
++		cpc_desc = per_cpu(cpc_desc_ptr, cpu);
++		if (!cpc_desc) {
++			pr_debug("No CPC descriptor for CPU:%d\n", cpu);
++			return -ENODEV;
++		}
++
++		spin_lock(&cpc_desc->rmw_lock);
++		switch (size) {
++		case 8:
++			prev_val = readb_relaxed(vaddr);
++			break;
++		case 16:
++			prev_val = readw_relaxed(vaddr);
++			break;
++		case 32:
++			prev_val = readl_relaxed(vaddr);
++			break;
++		case 64:
++			prev_val = readq_relaxed(vaddr);
++			break;
++		default:
++			spin_unlock(&cpc_desc->rmw_lock);
++			return -EFAULT;
++		}
++		val = MASK_VAL_WRITE(reg, prev_val, val);
++		val |= prev_val;
++	}
+ 
+ 	switch (size) {
+ 	case 8:
+@@ -1134,6 +1166,9 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ 		break;
+ 	}
+ 
++	if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
++		spin_unlock(&cpc_desc->rmw_lock);
++
+ 	return ret_val;
+ }
+ 
+diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c
+index 23373faa35ecd6..95a19e3569c83f 100644
+--- a/drivers/acpi/device_sysfs.c
++++ b/drivers/acpi/device_sysfs.c
+@@ -540,8 +540,9 @@ int acpi_device_setup_files(struct acpi_device *dev)
+ 	 * If device has _STR, 'description' file is created
+ 	 */
+ 	if (acpi_has_method(dev->handle, "_STR")) {
+-		status = acpi_evaluate_object(dev->handle, "_STR",
+-					NULL, &buffer);
++		status = acpi_evaluate_object_typed(dev->handle, "_STR",
++						    NULL, &buffer,
++						    ACPI_TYPE_BUFFER);
+ 		if (ACPI_FAILURE(status))
+ 			buffer.pointer = NULL;
+ 		dev->pnp.str_obj = buffer.pointer;
+diff --git a/drivers/acpi/pmic/tps68470_pmic.c b/drivers/acpi/pmic/tps68470_pmic.c
+index ebd03e4729555a..0d1a82eeb4b0b6 100644
+--- a/drivers/acpi/pmic/tps68470_pmic.c
++++ b/drivers/acpi/pmic/tps68470_pmic.c
+@@ -376,10 +376,8 @@ static int tps68470_pmic_opregion_probe(struct platform_device *pdev)
+ 	struct tps68470_pmic_opregion *opregion;
+ 	acpi_status status;
+ 
+-	if (!dev || !tps68470_regmap) {
+-		dev_warn(dev, "dev or regmap is NULL\n");
+-		return -EINVAL;
+-	}
++	if (!tps68470_regmap)
++		return dev_err_probe(dev, -EINVAL, "regmap is missing\n");
+ 
+ 	if (!handle) {
+ 		dev_warn(dev, "acpi handle is NULL\n");
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index df5d5a554b388b..cb2aacbb93357e 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -554,6 +554,12 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+  * to have a working keyboard.
+  */
+ static const struct dmi_system_id irq1_edge_low_force_override[] = {
++	{
++		/* MECHREV Jiaolong17KS Series GM7XG0M */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "GM7XG0M"),
++		},
++	},
+ 	{
+ 		/* XMG APEX 17 (M23) */
+ 		.matches = {
+@@ -572,6 +578,12 @@ static const struct dmi_system_id irq1_edge_low_force_override[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "GMxXGxx"),
+ 		},
+ 	},
++	{
++		/* TongFang GMxXGxX/TUXEDO Polaris 15 Gen5 AMD */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "GMxXGxX"),
++		},
++	},
+ 	{
+ 		/* TongFang GMxXGxx sold as Eluktronics Inc. RP-15 */
+ 		.matches = {
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index ff6f260433a113..75a5f559402f87 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -541,6 +541,22 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "iMac12,2"),
+ 		},
+ 	},
++	{
++	 .callback = video_detect_force_native,
++	 /* Apple MacBook Air 9,1 */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "MacBookAir9,1"),
++		},
++	},
++	{
++	 .callback = video_detect_force_native,
++	 /* Apple MacBook Pro 9,2 */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro9,2"),
++		},
++	},
+ 	{
+ 	 /* https://bugzilla.redhat.com/show_bug.cgi?id=1217249 */
+ 	 .callback = video_detect_force_native,
+@@ -550,6 +566,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro12,1"),
+ 		},
+ 	},
++	{
++	 .callback = video_detect_force_native,
++	 /* Apple MacBook Pro 16,2 */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro16,2"),
++		},
++	},
+ 	{
+ 	 .callback = video_detect_force_native,
+ 	 /* Dell Inspiron N4010 */
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 214b935c2ced79..7df9ec9f924c44 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -630,6 +630,14 @@ void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap,
+ 	list_for_each_entry_safe(scmd, tmp, eh_work_q, eh_entry) {
+ 		struct ata_queued_cmd *qc;
+ 
++		/*
++		 * If the scmd was added to EH, via ata_qc_schedule_eh() ->
++		 * scsi_timeout() -> scsi_eh_scmd_add(), scsi_timeout() will
++		 * have set DID_TIME_OUT (since libata does not have an abort
++		 * handler). Thus, to clear DID_TIME_OUT, clear the host byte.
++		 */
++		set_host_byte(scmd, DID_OK);
++
+ 		ata_qc_for_each_raw(ap, qc, i) {
+ 			if (qc->flags & ATA_QCFLAG_ACTIVE &&
+ 			    qc->scsicmd == scmd)
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 4116ae08871910..0cbe331124c255 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -1693,9 +1693,6 @@ static void ata_scsi_qc_complete(struct ata_queued_cmd *qc)
+ 			set_status_byte(qc->scsicmd, SAM_STAT_CHECK_CONDITION);
+ 	} else if (is_error && !have_sense) {
+ 		ata_gen_ata_sense(qc);
+-	} else {
+-		/* Keep the SCSI ML and status byte, clear host byte. */
+-		cmd->result &= 0x0000ffff;
+ 	}
+ 
+ 	ata_qc_done(qc);
+@@ -2361,7 +2358,7 @@ static unsigned int ata_msense_control(struct ata_device *dev, u8 *buf,
+ 	case ALL_SUB_MPAGES:
+ 		n = ata_msense_control_spg0(dev, buf, changeable);
+ 		n += ata_msense_control_spgt2(dev, buf + n, CDL_T2A_SUB_MPAGE);
+-		n += ata_msense_control_spgt2(dev, buf + n, CDL_T2A_SUB_MPAGE);
++		n += ata_msense_control_spgt2(dev, buf + n, CDL_T2B_SUB_MPAGE);
+ 		n += ata_msense_control_ata_feature(dev, buf + n);
+ 		return n;
+ 	default:
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index b5399262198a68..6c1a944c00d9b3 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -4515,9 +4515,11 @@ EXPORT_SYMBOL_GPL(device_destroy);
+  */
+ int device_rename(struct device *dev, const char *new_name)
+ {
++	struct subsys_private *sp = NULL;
+ 	struct kobject *kobj = &dev->kobj;
+ 	char *old_device_name = NULL;
+ 	int error;
++	bool is_link_renamed = false;
+ 
+ 	dev = get_device(dev);
+ 	if (!dev)
+@@ -4532,7 +4534,7 @@ int device_rename(struct device *dev, const char *new_name)
+ 	}
+ 
+ 	if (dev->class) {
+-		struct subsys_private *sp = class_to_subsys(dev->class);
++		sp = class_to_subsys(dev->class);
+ 
+ 		if (!sp) {
+ 			error = -EINVAL;
+@@ -4541,16 +4543,19 @@ int device_rename(struct device *dev, const char *new_name)
+ 
+ 		error = sysfs_rename_link_ns(&sp->subsys.kobj, kobj, old_device_name,
+ 					     new_name, kobject_namespace(kobj));
+-		subsys_put(sp);
+ 		if (error)
+ 			goto out;
++
++		is_link_renamed = true;
+ 	}
+ 
+ 	error = kobject_rename(kobj, new_name);
+-	if (error)
+-		goto out;
+-
+ out:
++	if (error && is_link_renamed)
++		sysfs_rename_link_ns(&sp->subsys.kobj, kobj, new_name,
++				     old_device_name, kobject_namespace(kobj));
++	subsys_put(sp);
++
+ 	put_device(dev);
+ 
+ 	kfree(old_device_name);
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index da8ca01d011c3f..0989a62962d33c 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -849,6 +849,26 @@ static void fw_log_firmware_info(const struct firmware *fw, const char *name,
+ {}
+ #endif
+ 
++/*
++ * Reject firmware file names with ".." path components.
++ * There are drivers that construct firmware file names from device-supplied
++ * strings, and we don't want some device to be able to tell us "I would like to
++ * be sent my firmware from ../../../etc/shadow, please".
++ *
++ * Search for ".." surrounded by either '/' or start/end of string.
++ *
++ * This intentionally only looks at the firmware name, not at the firmware base
++ * directory or at symlink contents.
++ */
++static bool name_contains_dotdot(const char *name)
++{
++	size_t name_len = strlen(name);
++
++	return strcmp(name, "..") == 0 || strncmp(name, "../", 3) == 0 ||
++	       strstr(name, "/../") != NULL ||
++	       (name_len >= 3 && strcmp(name+name_len-3, "/..") == 0);
++}
++
+ /* called from request_firmware() and request_firmware_work_func() */
+ static int
+ _request_firmware(const struct firmware **firmware_p, const char *name,
+@@ -869,6 +889,14 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+ 		goto out;
+ 	}
+ 
++	if (name_contains_dotdot(name)) {
++		dev_warn(device,
++			 "Firmware load for '%s' refused, path contains '..' component\n",
++			 name);
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	ret = _request_firmware_prepare(&fw, name, device, buf, size,
+ 					offset, opt_flags);
+ 	if (ret <= 0) /* error or already assigned */
+@@ -946,6 +974,8 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+  *      @name will be used as $FIRMWARE in the uevent environment and
+  *      should be distinctive enough not to be confused with any other
+  *      firmware image for this or any other device.
++ *	It must not contain any ".." path components - "foo/bar..bin" is
++ *	allowed, but "foo/../bar.bin" is not.
+  *
+  *	Caller must hold the reference count of @device.
+  *
+diff --git a/drivers/base/module.c b/drivers/base/module.c
+index b0b79b9c189d4f..0d5c5da367f720 100644
+--- a/drivers/base/module.c
++++ b/drivers/base/module.c
+@@ -66,27 +66,31 @@ int module_add_driver(struct module *mod, struct device_driver *drv)
+ 	driver_name = make_driver_name(drv);
+ 	if (!driver_name) {
+ 		ret = -ENOMEM;
+-		goto out;
++		goto out_remove_kobj;
+ 	}
+ 
+ 	module_create_drivers_dir(mk);
+ 	if (!mk->drivers_dir) {
+ 		ret = -EINVAL;
+-		goto out;
++		goto out_free_driver_name;
+ 	}
+ 
+ 	ret = sysfs_create_link(mk->drivers_dir, &drv->p->kobj, driver_name);
+ 	if (ret)
+-		goto out;
++		goto out_remove_drivers_dir;
+ 
+ 	kfree(driver_name);
+ 
+ 	return 0;
+-out:
+-	sysfs_remove_link(&drv->p->kobj, "module");
++
++out_remove_drivers_dir:
+ 	sysfs_remove_link(mk->drivers_dir, driver_name);
++
++out_free_driver_name:
+ 	kfree(driver_name);
+ 
++out_remove_kobj:
++	sysfs_remove_link(&drv->p->kobj, "module");
+ 	return ret;
+ }
+ 
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 113b441d4d3670..301843297fb788 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -3399,10 +3399,12 @@ void drbd_uuid_new_current(struct drbd_device *device) __must_hold(local)
+ void drbd_uuid_set_bm(struct drbd_device *device, u64 val) __must_hold(local)
+ {
+ 	unsigned long flags;
+-	if (device->ldev->md.uuid[UI_BITMAP] == 0 && val == 0)
++	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
++	if (device->ldev->md.uuid[UI_BITMAP] == 0 && val == 0) {
++		spin_unlock_irqrestore(&device->ldev->md.uuid_lock, flags);
+ 		return;
++	}
+ 
+-	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
+ 	if (val == 0) {
+ 		drbd_uuid_move_history(device);
+ 		device->ldev->md.uuid[UI_HISTORY_START] = device->ldev->md.uuid[UI_BITMAP];
+diff --git a/drivers/block/drbd/drbd_state.c b/drivers/block/drbd/drbd_state.c
+index e858e7e0383f26..c2b6c4d9729db9 100644
+--- a/drivers/block/drbd/drbd_state.c
++++ b/drivers/block/drbd/drbd_state.c
+@@ -876,7 +876,7 @@ is_valid_state(struct drbd_device *device, union drbd_state ns)
+ 		  ns.disk == D_OUTDATED)
+ 		rv = SS_CONNECTED_OUTDATES;
+ 
+-	else if ((ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T) &&
++	else if (nc && (ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T) &&
+ 		 (nc->verify_alg[0] == 0))
+ 		rv = SS_NO_VERIFY_ALG;
+ 
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index b87aa80a46ddaa..d4e260d3218063 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -181,6 +181,17 @@ static void nbd_requeue_cmd(struct nbd_cmd *cmd)
+ {
+ 	struct request *req = blk_mq_rq_from_pdu(cmd);
+ 
++	lockdep_assert_held(&cmd->lock);
++
++	/*
++	 * Clear INFLIGHT flag so that this cmd won't be completed in
++	 * normal completion path
++	 *
++	 * INFLIGHT flag will be set when the cmd is queued to nbd next
++	 * time.
++	 */
++	__clear_bit(NBD_CMD_INFLIGHT, &cmd->flags);
++
+ 	if (!test_and_set_bit(NBD_CMD_REQUEUED, &cmd->flags))
+ 		blk_mq_requeue_request(req, true);
+ }
+@@ -339,7 +350,7 @@ static int __nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
+ 
+ 	lim = queue_limits_start_update(nbd->disk->queue);
+ 	if (nbd->config->flags & NBD_FLAG_SEND_TRIM)
+-		lim.max_hw_discard_sectors = UINT_MAX;
++		lim.max_hw_discard_sectors = UINT_MAX >> SECTOR_SHIFT;
+ 	else
+ 		lim.max_hw_discard_sectors = 0;
+ 	lim.logical_block_size = blksize;
+@@ -480,8 +491,8 @@ static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req)
+ 					nbd_mark_nsock_dead(nbd, nsock, 1);
+ 				mutex_unlock(&nsock->tx_lock);
+ 			}
+-			mutex_unlock(&cmd->lock);
+ 			nbd_requeue_cmd(cmd);
++			mutex_unlock(&cmd->lock);
+ 			nbd_config_put(nbd);
+ 			return BLK_EH_DONE;
+ 		}
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index fc001e9f95f61a..d06c8ed29620b9 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -71,9 +71,6 @@ struct ublk_rq_data {
+ 	struct llist_node node;
+ 
+ 	struct kref ref;
+-	__u64 sector;
+-	__u32 operation;
+-	__u32 nr_zones;
+ };
+ 
+ struct ublk_uring_cmd_pdu {
+@@ -214,6 +211,33 @@ static inline bool ublk_queue_is_zoned(struct ublk_queue *ubq)
+ 
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 
++struct ublk_zoned_report_desc {
++	__u64 sector;
++	__u32 operation;
++	__u32 nr_zones;
++};
++
++static DEFINE_XARRAY(ublk_zoned_report_descs);
++
++static int ublk_zoned_insert_report_desc(const struct request *req,
++		struct ublk_zoned_report_desc *desc)
++{
++	return xa_insert(&ublk_zoned_report_descs, (unsigned long)req,
++			    desc, GFP_KERNEL);
++}
++
++static struct ublk_zoned_report_desc *ublk_zoned_erase_report_desc(
++		const struct request *req)
++{
++	return xa_erase(&ublk_zoned_report_descs, (unsigned long)req);
++}
++
++static struct ublk_zoned_report_desc *ublk_zoned_get_report_desc(
++		const struct request *req)
++{
++	return xa_load(&ublk_zoned_report_descs, (unsigned long)req);
++}
++
+ static int ublk_get_nr_zones(const struct ublk_device *ub)
+ {
+ 	const struct ublk_param_basic *p = &ub->params.basic;
+@@ -310,7 +334,7 @@ static int ublk_report_zones(struct gendisk *disk, sector_t sector,
+ 		unsigned int zones_in_request =
+ 			min_t(unsigned int, remaining_zones, max_zones_per_request);
+ 		struct request *req;
+-		struct ublk_rq_data *pdu;
++		struct ublk_zoned_report_desc desc;
+ 		blk_status_t status;
+ 
+ 		memset(buffer, 0, buffer_length);
+@@ -321,20 +345,23 @@ static int ublk_report_zones(struct gendisk *disk, sector_t sector,
+ 			goto out;
+ 		}
+ 
+-		pdu = blk_mq_rq_to_pdu(req);
+-		pdu->operation = UBLK_IO_OP_REPORT_ZONES;
+-		pdu->sector = sector;
+-		pdu->nr_zones = zones_in_request;
++		desc.operation = UBLK_IO_OP_REPORT_ZONES;
++		desc.sector = sector;
++		desc.nr_zones = zones_in_request;
++		ret = ublk_zoned_insert_report_desc(req, &desc);
++		if (ret)
++			goto free_req;
+ 
+ 		ret = blk_rq_map_kern(disk->queue, req, buffer, buffer_length,
+ 					GFP_KERNEL);
+-		if (ret) {
+-			blk_mq_free_request(req);
+-			goto out;
+-		}
++		if (ret)
++			goto erase_desc;
+ 
+ 		status = blk_execute_rq(req, 0);
+ 		ret = blk_status_to_errno(status);
++erase_desc:
++		ublk_zoned_erase_report_desc(req);
++free_req:
+ 		blk_mq_free_request(req);
+ 		if (ret)
+ 			goto out;
+@@ -368,7 +395,7 @@ static blk_status_t ublk_setup_iod_zoned(struct ublk_queue *ubq,
+ {
+ 	struct ublksrv_io_desc *iod = ublk_get_iod(ubq, req->tag);
+ 	struct ublk_io *io = &ubq->ios[req->tag];
+-	struct ublk_rq_data *pdu = blk_mq_rq_to_pdu(req);
++	struct ublk_zoned_report_desc *desc;
+ 	u32 ublk_op;
+ 
+ 	switch (req_op(req)) {
+@@ -391,12 +418,15 @@ static blk_status_t ublk_setup_iod_zoned(struct ublk_queue *ubq,
+ 		ublk_op = UBLK_IO_OP_ZONE_RESET_ALL;
+ 		break;
+ 	case REQ_OP_DRV_IN:
+-		ublk_op = pdu->operation;
++		desc = ublk_zoned_get_report_desc(req);
++		if (!desc)
++			return BLK_STS_IOERR;
++		ublk_op = desc->operation;
+ 		switch (ublk_op) {
+ 		case UBLK_IO_OP_REPORT_ZONES:
+ 			iod->op_flags = ublk_op | ublk_req_build_flags(req);
+-			iod->nr_zones = pdu->nr_zones;
+-			iod->start_sector = pdu->sector;
++			iod->nr_zones = desc->nr_zones;
++			iod->start_sector = desc->sector;
+ 			return BLK_STS_OK;
+ 		default:
+ 			return BLK_STS_IOERR;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 0927f51867c26a..c41b86608ba866 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -1393,7 +1393,10 @@ static int btusb_submit_intr_urb(struct hci_dev *hdev, gfp_t mem_flags)
+ 	if (!urb)
+ 		return -ENOMEM;
+ 
+-	size = le16_to_cpu(data->intr_ep->wMaxPacketSize);
++	/* Use maximum HCI Event size so the USB stack handles
++	 * ZPL/short-transfer automatically.
++	 */
++	size = HCI_MAX_EVENT_SIZE;
+ 
+ 	buf = kmalloc(size, mem_flags);
+ 	if (!buf) {
+diff --git a/drivers/bus/arm-integrator-lm.c b/drivers/bus/arm-integrator-lm.c
+index b715c8ab36e8bd..a65c79b08804f4 100644
+--- a/drivers/bus/arm-integrator-lm.c
++++ b/drivers/bus/arm-integrator-lm.c
+@@ -85,6 +85,7 @@ static int integrator_ap_lm_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 	map = syscon_node_to_regmap(syscon);
++	of_node_put(syscon);
+ 	if (IS_ERR(map)) {
+ 		dev_err(dev,
+ 			"could not find Integrator/AP system controller\n");
+diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
+index 08844ee79654ae..edb54c97992877 100644
+--- a/drivers/bus/mhi/host/pci_generic.c
++++ b/drivers/bus/mhi/host/pci_generic.c
+@@ -606,6 +606,15 @@ static const struct mhi_pci_dev_info mhi_telit_fn990_info = {
+ 	.mru_default = 32768,
+ };
+ 
++static const struct mhi_pci_dev_info mhi_telit_fe990a_info = {
++	.name = "telit-fe990a",
++	.config = &modem_telit_fn990_config,
++	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
++	.dma_data_width = 32,
++	.sideband_wake = false,
++	.mru_default = 32768,
++};
++
+ /* Keep the list sorted based on the PID. New VID should be added as the last entry */
+ static const struct pci_device_id mhi_pci_id_table[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0304),
+@@ -623,9 +632,9 @@ static const struct pci_device_id mhi_pci_id_table[] = {
+ 	/* Telit FN990 */
+ 	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0308, 0x1c5d, 0x2010),
+ 		.driver_data = (kernel_ulong_t) &mhi_telit_fn990_info },
+-	/* Telit FE990 */
++	/* Telit FE990A */
+ 	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0308, 0x1c5d, 0x2015),
+-		.driver_data = (kernel_ulong_t) &mhi_telit_fn990_info },
++		.driver_data = (kernel_ulong_t) &mhi_telit_fe990a_info },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308),
+ 		.driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0309),
+diff --git a/drivers/char/hw_random/bcm2835-rng.c b/drivers/char/hw_random/bcm2835-rng.c
+index b03e8030062758..aa2b135e3ee230 100644
+--- a/drivers/char/hw_random/bcm2835-rng.c
++++ b/drivers/char/hw_random/bcm2835-rng.c
+@@ -94,8 +94,10 @@ static int bcm2835_rng_init(struct hwrng *rng)
+ 		return ret;
+ 
+ 	ret = reset_control_reset(priv->reset);
+-	if (ret)
++	if (ret) {
++		clk_disable_unprepare(priv->clk);
+ 		return ret;
++	}
+ 
+ 	if (priv->mask_interrupts) {
+ 		/* mask the interrupt */
+diff --git a/drivers/char/hw_random/cctrng.c b/drivers/char/hw_random/cctrng.c
+index c0d2f824769f88..4c50efc464835b 100644
+--- a/drivers/char/hw_random/cctrng.c
++++ b/drivers/char/hw_random/cctrng.c
+@@ -622,6 +622,7 @@ static int __maybe_unused cctrng_resume(struct device *dev)
+ 	/* wait for Cryptocell reset completion */
+ 	if (!cctrng_wait_for_reset_completion(drvdata)) {
+ 		dev_err(dev, "Cryptocell reset not completed");
++		clk_disable_unprepare(drvdata->clk);
+ 		return -EBUSY;
+ 	}
+ 
+diff --git a/drivers/char/hw_random/mtk-rng.c b/drivers/char/hw_random/mtk-rng.c
+index aa993753ab120b..1e3048f2bb38f0 100644
+--- a/drivers/char/hw_random/mtk-rng.c
++++ b/drivers/char/hw_random/mtk-rng.c
+@@ -142,7 +142,7 @@ static int mtk_rng_probe(struct platform_device *pdev)
+ 	dev_set_drvdata(&pdev->dev, priv);
+ 	pm_runtime_set_autosuspend_delay(&pdev->dev, RNG_AUTOSUSPEND_TIMEOUT);
+ 	pm_runtime_use_autosuspend(&pdev->dev);
+-	pm_runtime_enable(&pdev->dev);
++	devm_pm_runtime_enable(&pdev->dev);
+ 
+ 	dev_info(&pdev->dev, "registered RNG driver\n");
+ 
+diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
+index 30b4c288c1bbc3..c3fbbf4d3db79a 100644
+--- a/drivers/char/tpm/tpm-dev-common.c
++++ b/drivers/char/tpm/tpm-dev-common.c
+@@ -47,6 +47,8 @@ static ssize_t tpm_dev_transmit(struct tpm_chip *chip, struct tpm_space *space,
+ 
+ 	if (!ret)
+ 		ret = tpm2_commit_space(chip, space, buf, &len);
++	else
++		tpm2_flush_space(chip);
+ 
+ out_rc:
+ 	return ret ? ret : len;
+diff --git a/drivers/char/tpm/tpm2-sessions.c b/drivers/char/tpm/tpm2-sessions.c
+index d3521aadd43eef..44f60730cff441 100644
+--- a/drivers/char/tpm/tpm2-sessions.c
++++ b/drivers/char/tpm/tpm2-sessions.c
+@@ -1362,4 +1362,5 @@ int tpm2_sessions_init(struct tpm_chip *chip)
+ 
+ 	return rc;
+ }
++EXPORT_SYMBOL(tpm2_sessions_init);
+ #endif /* CONFIG_TCG_TPM2_HMAC */
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 4892d491da8dae..25a66870c165c5 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -169,6 +169,9 @@ void tpm2_flush_space(struct tpm_chip *chip)
+ 	struct tpm_space *space = &chip->work_space;
+ 	int i;
+ 
++	if (!space)
++		return;
++
+ 	for (i = 0; i < ARRAY_SIZE(space->context_tbl); i++)
+ 		if (space->context_tbl[i] && ~space->context_tbl[i])
+ 			tpm2_flush_context(chip, space->context_tbl[i]);
+diff --git a/drivers/clk/at91/sama7g5.c b/drivers/clk/at91/sama7g5.c
+index 91b5c6f1481964..4e9594714b1428 100644
+--- a/drivers/clk/at91/sama7g5.c
++++ b/drivers/clk/at91/sama7g5.c
+@@ -66,6 +66,7 @@ enum pll_component_id {
+ 	PLL_COMPID_FRAC,
+ 	PLL_COMPID_DIV0,
+ 	PLL_COMPID_DIV1,
++	PLL_COMPID_MAX,
+ };
+ 
+ /*
+@@ -165,7 +166,7 @@ static struct sama7g5_pll {
+ 	u8 t;
+ 	u8 eid;
+ 	u8 safe_div;
+-} sama7g5_plls[][PLL_ID_MAX] = {
++} sama7g5_plls[][PLL_COMPID_MAX] = {
+ 	[PLL_ID_CPU] = {
+ 		[PLL_COMPID_FRAC] = {
+ 			.n = "cpupll_fracck",
+@@ -1038,7 +1039,7 @@ static void __init sama7g5_pmc_setup(struct device_node *np)
+ 	sama7g5_pmc->chws[PMC_MAIN] = hw;
+ 
+ 	for (i = 0; i < PLL_ID_MAX; i++) {
+-		for (j = 0; j < 3; j++) {
++		for (j = 0; j < PLL_COMPID_MAX; j++) {
+ 			struct clk_hw *parent_hw;
+ 
+ 			if (!sama7g5_plls[i][j].n)
+diff --git a/drivers/clk/imx/clk-composite-7ulp.c b/drivers/clk/imx/clk-composite-7ulp.c
+index e208ddc511339e..db7f40b07d1abf 100644
+--- a/drivers/clk/imx/clk-composite-7ulp.c
++++ b/drivers/clk/imx/clk-composite-7ulp.c
+@@ -14,6 +14,7 @@
+ #include "../clk-fractional-divider.h"
+ #include "clk.h"
+ 
++#define PCG_PR_MASK		BIT(31)
+ #define PCG_PCS_SHIFT	24
+ #define PCG_PCS_MASK	0x7
+ #define PCG_CGC_SHIFT	30
+@@ -78,6 +79,12 @@ static struct clk_hw *imx_ulp_clk_hw_composite(const char *name,
+ 	struct clk_hw *hw;
+ 	u32 val;
+ 
++	val = readl(reg);
++	if (!(val & PCG_PR_MASK)) {
++		pr_info("PCC PR is 0 for clk:%s, bypass\n", name);
++		return 0;
++	}
++
+ 	if (mux_present) {
+ 		mux = kzalloc(sizeof(*mux), GFP_KERNEL);
+ 		if (!mux)
+diff --git a/drivers/clk/imx/clk-composite-8m.c b/drivers/clk/imx/clk-composite-8m.c
+index 8cc07d056a8384..f187582ba49196 100644
+--- a/drivers/clk/imx/clk-composite-8m.c
++++ b/drivers/clk/imx/clk-composite-8m.c
+@@ -204,6 +204,34 @@ static const struct clk_ops imx8m_clk_composite_mux_ops = {
+ 	.determine_rate = imx8m_clk_composite_mux_determine_rate,
+ };
+ 
++static int imx8m_clk_composite_gate_enable(struct clk_hw *hw)
++{
++	struct clk_gate *gate = to_clk_gate(hw);
++	unsigned long flags;
++	u32 val;
++
++	spin_lock_irqsave(gate->lock, flags);
++
++	val = readl(gate->reg);
++	val |= BIT(gate->bit_idx);
++	writel(val, gate->reg);
++
++	spin_unlock_irqrestore(gate->lock, flags);
++
++	return 0;
++}
++
++static void imx8m_clk_composite_gate_disable(struct clk_hw *hw)
++{
++	/* composite clk requires the disable hook */
++}
++
++static const struct clk_ops imx8m_clk_composite_gate_ops = {
++	.enable = imx8m_clk_composite_gate_enable,
++	.disable = imx8m_clk_composite_gate_disable,
++	.is_enabled = clk_gate_is_enabled,
++};
++
+ struct clk_hw *__imx8m_clk_hw_composite(const char *name,
+ 					const char * const *parent_names,
+ 					int num_parents, void __iomem *reg,
+@@ -217,6 +245,7 @@ struct clk_hw *__imx8m_clk_hw_composite(const char *name,
+ 	struct clk_mux *mux;
+ 	const struct clk_ops *divider_ops;
+ 	const struct clk_ops *mux_ops;
++	const struct clk_ops *gate_ops;
+ 
+ 	mux = kzalloc(sizeof(*mux), GFP_KERNEL);
+ 	if (!mux)
+@@ -257,20 +286,22 @@ struct clk_hw *__imx8m_clk_hw_composite(const char *name,
+ 	div->flags = CLK_DIVIDER_ROUND_CLOSEST;
+ 
+ 	/* skip registering the gate ops if M4 is enabled */
+-	if (!mcore_booted) {
+-		gate = kzalloc(sizeof(*gate), GFP_KERNEL);
+-		if (!gate)
+-			goto free_div;
+-
+-		gate_hw = &gate->hw;
+-		gate->reg = reg;
+-		gate->bit_idx = PCG_CGC_SHIFT;
+-		gate->lock = &imx_ccm_lock;
+-	}
++	gate = kzalloc(sizeof(*gate), GFP_KERNEL);
++	if (!gate)
++		goto free_div;
++
++	gate_hw = &gate->hw;
++	gate->reg = reg;
++	gate->bit_idx = PCG_CGC_SHIFT;
++	gate->lock = &imx_ccm_lock;
++	if (!mcore_booted)
++		gate_ops = &clk_gate_ops;
++	else
++		gate_ops = &imx8m_clk_composite_gate_ops;
+ 
+ 	hw = clk_hw_register_composite(NULL, name, parent_names, num_parents,
+ 			mux_hw, mux_ops, div_hw,
+-			divider_ops, gate_hw, &clk_gate_ops, flags);
++			divider_ops, gate_hw, gate_ops, flags);
+ 	if (IS_ERR(hw))
+ 		goto free_gate;
+ 
+diff --git a/drivers/clk/imx/clk-composite-93.c b/drivers/clk/imx/clk-composite-93.c
+index 81164bdcd6cc9a..6c6c5a30f3282d 100644
+--- a/drivers/clk/imx/clk-composite-93.c
++++ b/drivers/clk/imx/clk-composite-93.c
+@@ -76,6 +76,13 @@ static int imx93_clk_composite_gate_enable(struct clk_hw *hw)
+ 
+ static void imx93_clk_composite_gate_disable(struct clk_hw *hw)
+ {
++	/*
++	 * Skip disable the root clock gate if mcore enabled.
++	 * The root clock may be used by the mcore.
++	 */
++	if (mcore_booted)
++		return;
++
+ 	imx93_clk_composite_gate_endisable(hw, 0);
+ }
+ 
+@@ -222,7 +229,7 @@ struct clk_hw *imx93_clk_composite_flags(const char *name, const char * const *p
+ 		hw = clk_hw_register_composite(NULL, name, parent_names, num_parents,
+ 					       mux_hw, &clk_mux_ro_ops, div_hw,
+ 					       &clk_divider_ro_ops, NULL, NULL, flags);
+-	} else if (!mcore_booted) {
++	} else {
+ 		gate = kzalloc(sizeof(*gate), GFP_KERNEL);
+ 		if (!gate)
+ 			goto fail;
+@@ -238,12 +245,6 @@ struct clk_hw *imx93_clk_composite_flags(const char *name, const char * const *p
+ 					       &imx93_clk_composite_divider_ops, gate_hw,
+ 					       &imx93_clk_composite_gate_ops,
+ 					       flags | CLK_SET_RATE_NO_REPARENT);
+-	} else {
+-		hw = clk_hw_register_composite(NULL, name, parent_names, num_parents,
+-					       mux_hw, &imx93_clk_composite_mux_ops, div_hw,
+-					       &imx93_clk_composite_divider_ops, NULL,
+-					       &imx93_clk_composite_gate_ops,
+-					       flags | CLK_SET_RATE_NO_REPARENT);
+ 	}
+ 
+ 	if (IS_ERR(hw))
+diff --git a/drivers/clk/imx/clk-fracn-gppll.c b/drivers/clk/imx/clk-fracn-gppll.c
+index 44462ab50e513c..1becba2b62d0be 100644
+--- a/drivers/clk/imx/clk-fracn-gppll.c
++++ b/drivers/clk/imx/clk-fracn-gppll.c
+@@ -291,6 +291,10 @@ static int clk_fracn_gppll_prepare(struct clk_hw *hw)
+ 	if (val & POWERUP_MASK)
+ 		return 0;
+ 
++	if (pll->flags & CLK_FRACN_GPPLL_FRACN)
++		writel_relaxed(readl_relaxed(pll->base + PLL_NUMERATOR),
++			       pll->base + PLL_NUMERATOR);
++
+ 	val |= CLKMUX_BYPASS;
+ 	writel_relaxed(val, pll->base + PLL_CTRL);
+ 
+diff --git a/drivers/clk/imx/clk-imx6ul.c b/drivers/clk/imx/clk-imx6ul.c
+index f9394e94f69d73..05c7a82b751f3c 100644
+--- a/drivers/clk/imx/clk-imx6ul.c
++++ b/drivers/clk/imx/clk-imx6ul.c
+@@ -542,8 +542,8 @@ static void __init imx6ul_clocks_init(struct device_node *ccm_node)
+ 
+ 	clk_set_parent(hws[IMX6UL_CLK_ENFC_SEL]->clk, hws[IMX6UL_CLK_PLL2_PFD2]->clk);
+ 
+-	clk_set_parent(hws[IMX6UL_CLK_ENET1_REF_SEL]->clk, hws[IMX6UL_CLK_ENET_REF]->clk);
+-	clk_set_parent(hws[IMX6UL_CLK_ENET2_REF_SEL]->clk, hws[IMX6UL_CLK_ENET2_REF]->clk);
++	clk_set_parent(hws[IMX6UL_CLK_ENET1_REF_SEL]->clk, hws[IMX6UL_CLK_ENET1_REF_125M]->clk);
++	clk_set_parent(hws[IMX6UL_CLK_ENET2_REF_SEL]->clk, hws[IMX6UL_CLK_ENET2_REF_125M]->clk);
+ 
+ 	imx_register_uart_clocks();
+ }
+diff --git a/drivers/clk/imx/clk-imx8mp-audiomix.c b/drivers/clk/imx/clk-imx8mp-audiomix.c
+index b381d6f784c890..0767d613b68b00 100644
+--- a/drivers/clk/imx/clk-imx8mp-audiomix.c
++++ b/drivers/clk/imx/clk-imx8mp-audiomix.c
+@@ -154,6 +154,15 @@ static const struct clk_parent_data clk_imx8mp_audiomix_pll_bypass_sels[] = {
+ 		PDM_SEL, 2, 0						\
+ 	}
+ 
++#define CLK_GATE_PARENT(gname, cname, pname)						\
++	{								\
++		gname"_cg",						\
++		IMX8MP_CLK_AUDIOMIX_##cname,				\
++		{ .fw_name = pname, .name = pname }, NULL, 1,		\
++		CLKEN0 + 4 * !!(IMX8MP_CLK_AUDIOMIX_##cname / 32),	\
++		1, IMX8MP_CLK_AUDIOMIX_##cname % 32			\
++	}
++
+ struct clk_imx8mp_audiomix_sel {
+ 	const char			*name;
+ 	int				clkid;
+@@ -171,14 +180,14 @@ static struct clk_imx8mp_audiomix_sel sels[] = {
+ 	CLK_GATE("earc", EARC_IPG),
+ 	CLK_GATE("ocrama", OCRAMA_IPG),
+ 	CLK_GATE("aud2htx", AUD2HTX_IPG),
+-	CLK_GATE("earc_phy", EARC_PHY),
++	CLK_GATE_PARENT("earc_phy", EARC_PHY, "sai_pll_out_div2"),
+ 	CLK_GATE("sdma2", SDMA2_ROOT),
+ 	CLK_GATE("sdma3", SDMA3_ROOT),
+ 	CLK_GATE("spba2", SPBA2_ROOT),
+ 	CLK_GATE("dsp", DSP_ROOT),
+ 	CLK_GATE("dspdbg", DSPDBG_ROOT),
+ 	CLK_GATE("edma", EDMA_ROOT),
+-	CLK_GATE("audpll", AUDPLL_ROOT),
++	CLK_GATE_PARENT("audpll", AUDPLL_ROOT, "osc_24m"),
+ 	CLK_GATE("mu2", MU2_ROOT),
+ 	CLK_GATE("mu3", MU3_ROOT),
+ 	CLK_PDM,
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index 670aa2bab3017e..e561ff7b135fb5 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -551,8 +551,8 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 
+ 	hws[IMX8MP_CLK_IPG_ROOT] = imx_clk_hw_divider2("ipg_root", "ahb_root", ccm_base + 0x9080, 0, 1);
+ 
+-	hws[IMX8MP_CLK_DRAM_ALT] = imx8m_clk_hw_composite("dram_alt", imx8mp_dram_alt_sels, ccm_base + 0xa000);
+-	hws[IMX8MP_CLK_DRAM_APB] = imx8m_clk_hw_composite_critical("dram_apb", imx8mp_dram_apb_sels, ccm_base + 0xa080);
++	hws[IMX8MP_CLK_DRAM_ALT] = imx8m_clk_hw_fw_managed_composite("dram_alt", imx8mp_dram_alt_sels, ccm_base + 0xa000);
++	hws[IMX8MP_CLK_DRAM_APB] = imx8m_clk_hw_fw_managed_composite_critical("dram_apb", imx8mp_dram_apb_sels, ccm_base + 0xa080);
+ 	hws[IMX8MP_CLK_VPU_G1] = imx8m_clk_hw_composite("vpu_g1", imx8mp_vpu_g1_sels, ccm_base + 0xa100);
+ 	hws[IMX8MP_CLK_VPU_G2] = imx8m_clk_hw_composite("vpu_g2", imx8mp_vpu_g2_sels, ccm_base + 0xa180);
+ 	hws[IMX8MP_CLK_CAN1] = imx8m_clk_hw_composite("can1", imx8mp_can1_sels, ccm_base + 0xa200);
+diff --git a/drivers/clk/imx/clk-imx8qxp.c b/drivers/clk/imx/clk-imx8qxp.c
+index 7d8883916cacdd..83f2e8bd6d5062 100644
+--- a/drivers/clk/imx/clk-imx8qxp.c
++++ b/drivers/clk/imx/clk-imx8qxp.c
+@@ -170,8 +170,8 @@ static int imx8qxp_clk_probe(struct platform_device *pdev)
+ 	imx_clk_scu("pwm_clk",   IMX_SC_R_LCD_0_PWM_0, IMX_SC_PM_CLK_PER);
+ 	imx_clk_scu("elcdif_pll", IMX_SC_R_ELCDIF_PLL, IMX_SC_PM_CLK_PLL);
+ 	imx_clk_scu2("lcd_clk", lcd_sels, ARRAY_SIZE(lcd_sels), IMX_SC_R_LCD_0, IMX_SC_PM_CLK_PER);
+-	imx_clk_scu2("lcd_pxl_clk", lcd_pxl_sels, ARRAY_SIZE(lcd_pxl_sels), IMX_SC_R_LCD_0, IMX_SC_PM_CLK_MISC0);
+ 	imx_clk_scu("lcd_pxl_bypass_div_clk", IMX_SC_R_LCD_0, IMX_SC_PM_CLK_BYPASS);
++	imx_clk_scu2("lcd_pxl_clk", lcd_pxl_sels, ARRAY_SIZE(lcd_pxl_sels), IMX_SC_R_LCD_0, IMX_SC_PM_CLK_MISC0);
+ 
+ 	/* Audio SS */
+ 	imx_clk_scu("audio_pll0_clk", IMX_SC_R_AUDIO_PLL_0, IMX_SC_PM_CLK_PLL);
+@@ -206,18 +206,18 @@ static int imx8qxp_clk_probe(struct platform_device *pdev)
+ 	imx_clk_scu("usb3_lpm_div", IMX_SC_R_USB_2, IMX_SC_PM_CLK_MISC);
+ 
+ 	/* Display controller SS */
+-	imx_clk_scu2("dc0_disp0_clk", dc0_sels, ARRAY_SIZE(dc0_sels), IMX_SC_R_DC_0, IMX_SC_PM_CLK_MISC0);
+-	imx_clk_scu2("dc0_disp1_clk", dc0_sels, ARRAY_SIZE(dc0_sels), IMX_SC_R_DC_0, IMX_SC_PM_CLK_MISC1);
+ 	imx_clk_scu("dc0_pll0_clk", IMX_SC_R_DC_0_PLL_0, IMX_SC_PM_CLK_PLL);
+ 	imx_clk_scu("dc0_pll1_clk", IMX_SC_R_DC_0_PLL_1, IMX_SC_PM_CLK_PLL);
+ 	imx_clk_scu("dc0_bypass0_clk", IMX_SC_R_DC_0_VIDEO0, IMX_SC_PM_CLK_BYPASS);
++	imx_clk_scu2("dc0_disp0_clk", dc0_sels, ARRAY_SIZE(dc0_sels), IMX_SC_R_DC_0, IMX_SC_PM_CLK_MISC0);
++	imx_clk_scu2("dc0_disp1_clk", dc0_sels, ARRAY_SIZE(dc0_sels), IMX_SC_R_DC_0, IMX_SC_PM_CLK_MISC1);
+ 	imx_clk_scu("dc0_bypass1_clk", IMX_SC_R_DC_0_VIDEO1, IMX_SC_PM_CLK_BYPASS);
+ 
+-	imx_clk_scu2("dc1_disp0_clk", dc1_sels, ARRAY_SIZE(dc1_sels), IMX_SC_R_DC_1, IMX_SC_PM_CLK_MISC0);
+-	imx_clk_scu2("dc1_disp1_clk", dc1_sels, ARRAY_SIZE(dc1_sels), IMX_SC_R_DC_1, IMX_SC_PM_CLK_MISC1);
+ 	imx_clk_scu("dc1_pll0_clk", IMX_SC_R_DC_1_PLL_0, IMX_SC_PM_CLK_PLL);
+ 	imx_clk_scu("dc1_pll1_clk", IMX_SC_R_DC_1_PLL_1, IMX_SC_PM_CLK_PLL);
+ 	imx_clk_scu("dc1_bypass0_clk", IMX_SC_R_DC_1_VIDEO0, IMX_SC_PM_CLK_BYPASS);
++	imx_clk_scu2("dc1_disp0_clk", dc1_sels, ARRAY_SIZE(dc1_sels), IMX_SC_R_DC_1, IMX_SC_PM_CLK_MISC0);
++	imx_clk_scu2("dc1_disp1_clk", dc1_sels, ARRAY_SIZE(dc1_sels), IMX_SC_R_DC_1, IMX_SC_PM_CLK_MISC1);
+ 	imx_clk_scu("dc1_bypass1_clk", IMX_SC_R_DC_1_VIDEO1, IMX_SC_PM_CLK_BYPASS);
+ 
+ 	/* MIPI-LVDS SS */
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 25a7b4b15c56ab..2720cbc40e0acc 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -1784,6 +1784,58 @@ const struct clk_ops clk_alpha_pll_agera_ops = {
+ };
+ EXPORT_SYMBOL_GPL(clk_alpha_pll_agera_ops);
+ 
++/**
++ * clk_lucid_5lpe_pll_configure - configure the lucid 5lpe pll
++ *
++ * @pll: clk alpha pll
++ * @regmap: register map
++ * @config: configuration to apply for pll
++ */
++void clk_lucid_5lpe_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
++				  const struct alpha_pll_config *config)
++{
++	/*
++	 * If the bootloader left the PLL enabled it's likely that there are
++	 * RCGs that will lock up if we disable the PLL below.
++	 */
++	if (trion_pll_is_enabled(pll, regmap)) {
++		pr_debug("Lucid 5LPE PLL is already enabled, skipping configuration\n");
++		return;
++	}
++
++	clk_alpha_pll_write_config(regmap, PLL_L_VAL(pll), config->l);
++	regmap_write(regmap, PLL_CAL_L_VAL(pll), TRION_PLL_CAL_VAL);
++	clk_alpha_pll_write_config(regmap, PLL_ALPHA_VAL(pll), config->alpha);
++	clk_alpha_pll_write_config(regmap, PLL_CONFIG_CTL(pll),
++				     config->config_ctl_val);
++	clk_alpha_pll_write_config(regmap, PLL_CONFIG_CTL_U(pll),
++				     config->config_ctl_hi_val);
++	clk_alpha_pll_write_config(regmap, PLL_CONFIG_CTL_U1(pll),
++				     config->config_ctl_hi1_val);
++	clk_alpha_pll_write_config(regmap, PLL_USER_CTL(pll),
++					config->user_ctl_val);
++	clk_alpha_pll_write_config(regmap, PLL_USER_CTL_U(pll),
++					config->user_ctl_hi_val);
++	clk_alpha_pll_write_config(regmap, PLL_USER_CTL_U1(pll),
++					config->user_ctl_hi1_val);
++	clk_alpha_pll_write_config(regmap, PLL_TEST_CTL(pll),
++					config->test_ctl_val);
++	clk_alpha_pll_write_config(regmap, PLL_TEST_CTL_U(pll),
++					config->test_ctl_hi_val);
++	clk_alpha_pll_write_config(regmap, PLL_TEST_CTL_U1(pll),
++					config->test_ctl_hi1_val);
++
++	/* Disable PLL output */
++	regmap_update_bits(regmap, PLL_MODE(pll),  PLL_OUTCTRL, 0);
++
++	/* Set operation mode to OFF */
++	regmap_write(regmap, PLL_OPMODE(pll), PLL_STANDBY);
++
++	/* Place the PLL in STANDBY mode */
++	regmap_update_bits(regmap, PLL_MODE(pll), PLL_RESET_N, PLL_RESET_N);
++}
++EXPORT_SYMBOL_GPL(clk_lucid_5lpe_pll_configure);
++
+ static int alpha_pll_lucid_5lpe_enable(struct clk_hw *hw)
+ {
+ 	struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+diff --git a/drivers/clk/qcom/clk-alpha-pll.h b/drivers/clk/qcom/clk-alpha-pll.h
+index c7055b6c42f1d5..d89ec4723e2d9c 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.h
++++ b/drivers/clk/qcom/clk-alpha-pll.h
+@@ -205,6 +205,8 @@ void clk_agera_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
+ 
+ void clk_zonda_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
+ 			     const struct alpha_pll_config *config);
++void clk_lucid_5lpe_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
++				  const struct alpha_pll_config *config);
+ void clk_lucid_evo_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
+ 				 const struct alpha_pll_config *config);
+ void clk_lucid_ole_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
+diff --git a/drivers/clk/qcom/dispcc-sm8250.c b/drivers/clk/qcom/dispcc-sm8250.c
+index 43307c8a342cae..2103e22ca3ddee 100644
+--- a/drivers/clk/qcom/dispcc-sm8250.c
++++ b/drivers/clk/qcom/dispcc-sm8250.c
+@@ -1357,8 +1357,13 @@ static int disp_cc_sm8250_probe(struct platform_device *pdev)
+ 		disp_cc_sm8250_clocks[DISP_CC_MDSS_EDP_GTC_CLK_SRC] = NULL;
+ 	}
+ 
+-	clk_lucid_pll_configure(&disp_cc_pll0, regmap, &disp_cc_pll0_config);
+-	clk_lucid_pll_configure(&disp_cc_pll1, regmap, &disp_cc_pll1_config);
++	if (of_device_is_compatible(pdev->dev.of_node, "qcom,sm8350-dispcc")) {
++		clk_lucid_5lpe_pll_configure(&disp_cc_pll0, regmap, &disp_cc_pll0_config);
++		clk_lucid_5lpe_pll_configure(&disp_cc_pll1, regmap, &disp_cc_pll1_config);
++	} else {
++		clk_lucid_pll_configure(&disp_cc_pll0, regmap, &disp_cc_pll0_config);
++		clk_lucid_pll_configure(&disp_cc_pll1, regmap, &disp_cc_pll1_config);
++	}
+ 
+ 	/* Enable clock gating for MDP clocks */
+ 	regmap_update_bits(regmap, 0x8000, 0x10, 0x10);
+diff --git a/drivers/clk/qcom/dispcc-sm8550.c b/drivers/clk/qcom/dispcc-sm8550.c
+index 38ecea805503d3..1ba01bdb763b7f 100644
+--- a/drivers/clk/qcom/dispcc-sm8550.c
++++ b/drivers/clk/qcom/dispcc-sm8550.c
+@@ -196,7 +196,7 @@ static const struct clk_parent_data disp_cc_parent_data_3[] = {
+ static const struct parent_map disp_cc_parent_map_4[] = {
+ 	{ P_BI_TCXO, 0 },
+ 	{ P_DP0_PHY_PLL_LINK_CLK, 1 },
+-	{ P_DP1_PHY_PLL_VCO_DIV_CLK, 2 },
++	{ P_DP0_PHY_PLL_VCO_DIV_CLK, 2 },
+ 	{ P_DP3_PHY_PLL_VCO_DIV_CLK, 3 },
+ 	{ P_DP1_PHY_PLL_VCO_DIV_CLK, 4 },
+ 	{ P_DP2_PHY_PLL_VCO_DIV_CLK, 6 },
+@@ -213,7 +213,7 @@ static const struct clk_parent_data disp_cc_parent_data_4[] = {
+ 
+ static const struct parent_map disp_cc_parent_map_5[] = {
+ 	{ P_BI_TCXO, 0 },
+-	{ P_DSI0_PHY_PLL_OUT_BYTECLK, 4 },
++	{ P_DSI0_PHY_PLL_OUT_BYTECLK, 2 },
+ 	{ P_DSI1_PHY_PLL_OUT_BYTECLK, 4 },
+ };
+ 
+@@ -400,7 +400,7 @@ static struct clk_rcg2 disp_cc_mdss_dptx1_aux_clk_src = {
+ 		.parent_data = disp_cc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_dp_ops,
++		.ops = &clk_rcg2_ops,
+ 	},
+ };
+ 
+@@ -562,7 +562,7 @@ static struct clk_rcg2 disp_cc_mdss_esc0_clk_src = {
+ 		.parent_data = disp_cc_parent_data_5,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_5),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -577,7 +577,7 @@ static struct clk_rcg2 disp_cc_mdss_esc1_clk_src = {
+ 		.parent_data = disp_cc_parent_data_5,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_5),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1611,7 +1611,7 @@ static struct gdsc mdss_gdsc = {
+ 		.name = "mdss_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = HW_CTRL | RETAIN_FF_ENABLE,
++	.flags = POLL_CFG_GDSCR | HW_CTRL | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc mdss_int2_gdsc = {
+@@ -1620,7 +1620,7 @@ static struct gdsc mdss_int2_gdsc = {
+ 		.name = "mdss_int2_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = HW_CTRL | RETAIN_FF_ENABLE,
++	.flags = POLL_CFG_GDSCR | HW_CTRL | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct clk_regmap *disp_cc_sm8550_clocks[] = {
+diff --git a/drivers/clk/qcom/gcc-ipq5332.c b/drivers/clk/qcom/gcc-ipq5332.c
+index f98591148a9767..6a4877d8882946 100644
+--- a/drivers/clk/qcom/gcc-ipq5332.c
++++ b/drivers/clk/qcom/gcc-ipq5332.c
+@@ -3388,6 +3388,7 @@ static struct clk_regmap *gcc_ipq5332_clocks[] = {
+ 	[GCC_QDSS_DAP_DIV_CLK_SRC] = &gcc_qdss_dap_div_clk_src.clkr,
+ 	[GCC_QDSS_ETR_USB_CLK] = &gcc_qdss_etr_usb_clk.clkr,
+ 	[GCC_QDSS_EUD_AT_CLK] = &gcc_qdss_eud_at_clk.clkr,
++	[GCC_QDSS_TSCTR_CLK_SRC] = &gcc_qdss_tsctr_clk_src.clkr,
+ 	[GCC_QPIC_AHB_CLK] = &gcc_qpic_ahb_clk.clkr,
+ 	[GCC_QPIC_CLK] = &gcc_qpic_clk.clkr,
+ 	[GCC_QPIC_IO_MACRO_CLK] = &gcc_qpic_io_macro_clk.clkr,
+diff --git a/drivers/clk/rockchip/clk-rk3228.c b/drivers/clk/rockchip/clk-rk3228.c
+index a24a35553e1349..7343d2d7676bca 100644
+--- a/drivers/clk/rockchip/clk-rk3228.c
++++ b/drivers/clk/rockchip/clk-rk3228.c
+@@ -409,7 +409,7 @@ static struct rockchip_clk_branch rk3228_clk_branches[] __initdata = {
+ 			RK2928_CLKSEL_CON(29), 0, 3, DFLAGS),
+ 	DIV(0, "sclk_vop_pre", "sclk_vop_src", 0,
+ 			RK2928_CLKSEL_CON(27), 8, 8, DFLAGS),
+-	MUX(DCLK_VOP, "dclk_vop", mux_dclk_vop_p, 0,
++	MUX(DCLK_VOP, "dclk_vop", mux_dclk_vop_p, CLK_SET_RATE_PARENT | CLK_SET_RATE_NO_REPARENT,
+ 			RK2928_CLKSEL_CON(27), 1, 1, MFLAGS),
+ 
+ 	FACTOR(0, "xin12m", "xin24m", 0, 1, 2),
+diff --git a/drivers/clk/rockchip/clk-rk3588.c b/drivers/clk/rockchip/clk-rk3588.c
+index b30279a96dc8af..3027379f2fdd11 100644
+--- a/drivers/clk/rockchip/clk-rk3588.c
++++ b/drivers/clk/rockchip/clk-rk3588.c
+@@ -526,7 +526,7 @@ PNAME(pmu_200m_100m_p)			= { "clk_pmu1_200m_src", "clk_pmu1_100m_src" };
+ PNAME(pmu_300m_24m_p)			= { "clk_300m_src", "xin24m" };
+ PNAME(pmu_400m_24m_p)			= { "clk_400m_src", "xin24m" };
+ PNAME(pmu_100m_50m_24m_src_p)		= { "clk_pmu1_100m_src", "clk_pmu1_50m_src", "xin24m" };
+-PNAME(pmu_24m_32k_100m_src_p)		= { "xin24m", "32k", "clk_pmu1_100m_src" };
++PNAME(pmu_24m_32k_100m_src_p)		= { "xin24m", "xin32k", "clk_pmu1_100m_src" };
+ PNAME(hclk_pmu1_root_p)			= { "clk_pmu1_200m_src", "clk_pmu1_100m_src", "clk_pmu1_50m_src", "xin24m" };
+ PNAME(hclk_pmu_cm0_root_p)		= { "clk_pmu1_400m_src", "clk_pmu1_200m_src", "clk_pmu1_100m_src", "xin24m" };
+ PNAME(mclk_pdm0_p)			= { "clk_pmu1_300m_src", "clk_pmu1_200m_src" };
+diff --git a/drivers/clk/starfive/clk-starfive-jh7110-vout.c b/drivers/clk/starfive/clk-starfive-jh7110-vout.c
+index 53f7af234cc23e..aabd0484ac23f6 100644
+--- a/drivers/clk/starfive/clk-starfive-jh7110-vout.c
++++ b/drivers/clk/starfive/clk-starfive-jh7110-vout.c
+@@ -145,7 +145,7 @@ static int jh7110_voutcrg_probe(struct platform_device *pdev)
+ 
+ 	/* enable power domain and clocks */
+ 	pm_runtime_enable(priv->dev);
+-	ret = pm_runtime_get_sync(priv->dev);
++	ret = pm_runtime_resume_and_get(priv->dev);
+ 	if (ret < 0)
+ 		return dev_err_probe(priv->dev, ret, "failed to turn on power\n");
+ 
+diff --git a/drivers/clk/ti/clk-dra7-atl.c b/drivers/clk/ti/clk-dra7-atl.c
+index d964e3affd42ce..0eab7f3e2eab9e 100644
+--- a/drivers/clk/ti/clk-dra7-atl.c
++++ b/drivers/clk/ti/clk-dra7-atl.c
+@@ -240,6 +240,7 @@ static int of_dra7_atl_clk_probe(struct platform_device *pdev)
+ 		}
+ 
+ 		clk = of_clk_get_from_provider(&clkspec);
++		of_node_put(clkspec.np);
+ 		if (IS_ERR(clk)) {
+ 			pr_err("%s: failed to get atl clock %d from provider\n",
+ 			       __func__, i);
+diff --git a/drivers/clocksource/timer-qcom.c b/drivers/clocksource/timer-qcom.c
+index b4afe3a6758351..eac4c95c6127f2 100644
+--- a/drivers/clocksource/timer-qcom.c
++++ b/drivers/clocksource/timer-qcom.c
+@@ -233,6 +233,7 @@ static int __init msm_dt_timer_init(struct device_node *np)
+ 	}
+ 
+ 	if (of_property_read_u32(np, "clock-frequency", &freq)) {
++		iounmap(cpu0_base);
+ 		pr_err("Unknown frequency\n");
+ 		return -EINVAL;
+ 	}
+@@ -243,7 +244,11 @@ static int __init msm_dt_timer_init(struct device_node *np)
+ 	freq /= 4;
+ 	writel_relaxed(DGT_CLK_CTL_DIV_4, source_base + DGT_CLK_CTL);
+ 
+-	return msm_timer_init(freq, 32, irq, !!percpu_offset);
++	ret = msm_timer_init(freq, 32, irq, !!percpu_offset);
++	if (ret)
++		iounmap(cpu0_base);
++
++	return ret;
+ }
+ TIMER_OF_DECLARE(kpss_timer, "qcom,kpss-timer", msm_dt_timer_init);
+ TIMER_OF_DECLARE(scss_timer, "qcom,scss-timer", msm_dt_timer_init);
+diff --git a/drivers/cpufreq/ti-cpufreq.c b/drivers/cpufreq/ti-cpufreq.c
+index 5af85c4cbad0cc..f8e6dc3c14d359 100644
+--- a/drivers/cpufreq/ti-cpufreq.c
++++ b/drivers/cpufreq/ti-cpufreq.c
+@@ -61,6 +61,9 @@ struct ti_cpufreq_soc_data {
+ 	unsigned long efuse_shift;
+ 	unsigned long rev_offset;
+ 	bool multi_regulator;
++/* Backward compatibility hack: Might have missing syscon */
++#define TI_QUIRK_SYSCON_MAY_BE_MISSING	0x1
++	u8 quirks;
+ };
+ 
+ struct ti_cpufreq_data {
+@@ -182,6 +185,7 @@ static struct ti_cpufreq_soc_data omap34xx_soc_data = {
+ 	.efuse_mask = BIT(3),
+ 	.rev_offset = OMAP3_CONTROL_IDCODE - OMAP3_SYSCON_BASE,
+ 	.multi_regulator = false,
++	.quirks = TI_QUIRK_SYSCON_MAY_BE_MISSING,
+ };
+ 
+ /*
+@@ -209,6 +213,7 @@ static struct ti_cpufreq_soc_data omap36xx_soc_data = {
+ 	.efuse_mask = BIT(9),
+ 	.rev_offset = OMAP3_CONTROL_IDCODE - OMAP3_SYSCON_BASE,
+ 	.multi_regulator = true,
++	.quirks = TI_QUIRK_SYSCON_MAY_BE_MISSING,
+ };
+ 
+ /*
+@@ -223,6 +228,7 @@ static struct ti_cpufreq_soc_data am3517_soc_data = {
+ 	.efuse_mask = 0,
+ 	.rev_offset = OMAP3_CONTROL_IDCODE - OMAP3_SYSCON_BASE,
+ 	.multi_regulator = false,
++	.quirks = TI_QUIRK_SYSCON_MAY_BE_MISSING,
+ };
+ 
+ static struct ti_cpufreq_soc_data am625_soc_data = {
+@@ -250,7 +256,7 @@ static int ti_cpufreq_get_efuse(struct ti_cpufreq_data *opp_data,
+ 
+ 	ret = regmap_read(opp_data->syscon, opp_data->soc_data->efuse_offset,
+ 			  &efuse);
+-	if (ret == -EIO) {
++	if (opp_data->soc_data->quirks & TI_QUIRK_SYSCON_MAY_BE_MISSING && ret == -EIO) {
+ 		/* not a syscon register! */
+ 		void __iomem *regs = ioremap(OMAP3_SYSCON_BASE +
+ 				opp_data->soc_data->efuse_offset, 4);
+@@ -291,7 +297,7 @@ static int ti_cpufreq_get_rev(struct ti_cpufreq_data *opp_data,
+ 
+ 	ret = regmap_read(opp_data->syscon, opp_data->soc_data->rev_offset,
+ 			  &revision);
+-	if (ret == -EIO) {
++	if (opp_data->soc_data->quirks & TI_QUIRK_SYSCON_MAY_BE_MISSING && ret == -EIO) {
+ 		/* not a syscon register! */
+ 		void __iomem *regs = ioremap(OMAP3_SYSCON_BASE +
+ 				opp_data->soc_data->rev_offset, 4);
+diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
+index a6e123dfe394d8..5bb3401220d296 100644
+--- a/drivers/cpuidle/cpuidle-riscv-sbi.c
++++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
+@@ -8,6 +8,7 @@
+ 
+ #define pr_fmt(fmt) "cpuidle-riscv-sbi: " fmt
+ 
++#include <linux/cleanup.h>
+ #include <linux/cpuhotplug.h>
+ #include <linux/cpuidle.h>
+ #include <linux/cpumask.h>
+@@ -236,19 +237,16 @@ static int sbi_cpuidle_dt_init_states(struct device *dev,
+ {
+ 	struct sbi_cpuidle_data *data = per_cpu_ptr(&sbi_cpuidle_data, cpu);
+ 	struct device_node *state_node;
+-	struct device_node *cpu_node;
+ 	u32 *states;
+ 	int i, ret;
+ 
+-	cpu_node = of_cpu_device_node_get(cpu);
++	struct device_node *cpu_node __free(device_node) = of_cpu_device_node_get(cpu);
+ 	if (!cpu_node)
+ 		return -ENODEV;
+ 
+ 	states = devm_kcalloc(dev, state_count, sizeof(*states), GFP_KERNEL);
+-	if (!states) {
+-		ret = -ENOMEM;
+-		goto fail;
+-	}
++	if (!states)
++		return -ENOMEM;
+ 
+ 	/* Parse SBI specific details from state DT nodes */
+ 	for (i = 1; i < state_count; i++) {
+@@ -264,10 +262,8 @@ static int sbi_cpuidle_dt_init_states(struct device *dev,
+ 
+ 		pr_debug("sbi-state %#x index %d\n", states[i], i);
+ 	}
+-	if (i != state_count) {
+-		ret = -ENODEV;
+-		goto fail;
+-	}
++	if (i != state_count)
++		return -ENODEV;
+ 
+ 	/* Initialize optional data, used for the hierarchical topology. */
+ 	ret = sbi_dt_cpu_init_topology(drv, data, state_count, cpu);
+@@ -277,10 +273,7 @@ static int sbi_cpuidle_dt_init_states(struct device *dev,
+ 	/* Store states in the per-cpu struct. */
+ 	data->states = states;
+ 
+-fail:
+-	of_node_put(cpu_node);
+-
+-	return ret;
++	return 0;
+ }
+ 
+ static void sbi_cpuidle_deinit_cpu(int cpu)
+diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
+index fdd724228c2fa8..25c02e26725858 100644
+--- a/drivers/crypto/caam/caamhash.c
++++ b/drivers/crypto/caam/caamhash.c
+@@ -708,6 +708,7 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
+ 		       GFP_KERNEL : GFP_ATOMIC;
+ 	struct ahash_edesc *edesc;
+ 
++	sg_num = pad_sg_nents(sg_num);
+ 	edesc = kzalloc(struct_size(edesc, sec4_sg, sg_num), flags);
+ 	if (!edesc)
+ 		return NULL;
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 1912bee22dd4a6..7407c9a1e844ed 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -910,7 +910,18 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 
+ 	sev->int_rcvd = 0;
+ 
+-	reg = FIELD_PREP(SEV_CMDRESP_CMD, cmd) | SEV_CMDRESP_IOC;
++	reg = FIELD_PREP(SEV_CMDRESP_CMD, cmd);
++
++	/*
++	 * If invoked during panic handling, local interrupts are disabled so
++	 * the PSP command completion interrupt can't be used.
++	 * sev_wait_cmd_ioc() already checks for interrupts disabled and
++	 * polls for PSP command completion.  Ensure we do not request an
++	 * interrupt from the PSP if irqs disabled.
++	 */
++	if (!irqs_disabled())
++		reg |= SEV_CMDRESP_IOC;
++
+ 	iowrite32(reg, sev->io_regs + sev->vdata->cmdresp_reg);
+ 
+ 	/* wait for command completion */
+@@ -2374,6 +2385,8 @@ void sev_pci_init(void)
+ 	return;
+ 
+ err:
++	sev_dev_destroy(psp_master);
++
+ 	psp_master->sev_data = NULL;
+ }
+ 
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
+index 10aa4da93323f1..6b536ad2ada52a 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
+@@ -13,9 +13,7 @@
+ #include <linux/uacce.h>
+ #include "hpre.h"
+ 
+-#define HPRE_QM_ABNML_INT_MASK		0x100004
+ #define HPRE_CTRL_CNT_CLR_CE_BIT	BIT(0)
+-#define HPRE_COMM_CNT_CLR_CE		0x0
+ #define HPRE_CTRL_CNT_CLR_CE		0x301000
+ #define HPRE_FSM_MAX_CNT		0x301008
+ #define HPRE_VFG_AXQOS			0x30100c
+@@ -42,7 +40,6 @@
+ #define HPRE_HAC_INT_SET		0x301500
+ #define HPRE_RNG_TIMEOUT_NUM		0x301A34
+ #define HPRE_CORE_INT_ENABLE		0
+-#define HPRE_CORE_INT_DISABLE		GENMASK(21, 0)
+ #define HPRE_RDCHN_INI_ST		0x301a00
+ #define HPRE_CLSTR_BASE			0x302000
+ #define HPRE_CORE_EN_OFFSET		0x04
+@@ -66,7 +63,6 @@
+ #define HPRE_CLSTR_ADDR_INTRVL		0x1000
+ #define HPRE_CLUSTER_INQURY		0x100
+ #define HPRE_CLSTR_ADDR_INQRY_RSLT	0x104
+-#define HPRE_TIMEOUT_ABNML_BIT		6
+ #define HPRE_PASID_EN_BIT		9
+ #define HPRE_REG_RD_INTVRL_US		10
+ #define HPRE_REG_RD_TMOUT_US		1000
+@@ -203,9 +199,9 @@ static const struct hisi_qm_cap_info hpre_basic_info[] = {
+ 	{HPRE_QM_RESET_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0xC37, 0x6C37},
+ 	{HPRE_QM_OOO_SHUTDOWN_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0x4, 0x6C37},
+ 	{HPRE_QM_CE_MASK_CAP, 0x312C, 0, GENMASK(31, 0), 0x0, 0x8, 0x8},
+-	{HPRE_NFE_MASK_CAP, 0x3130, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0x1FFFFFE},
+-	{HPRE_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0xBFFFFE},
+-	{HPRE_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x22, 0xBFFFFE},
++	{HPRE_NFE_MASK_CAP, 0x3130, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0x1FFFC3E},
++	{HPRE_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0xBFFC3E},
++	{HPRE_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x22, 0xBFFC3E},
+ 	{HPRE_CE_MASK_CAP, 0x3138, 0, GENMASK(31, 0), 0x0, 0x1, 0x1},
+ 	{HPRE_CLUSTER_NUM_CAP, 0x313c, 20, GENMASK(3, 0), 0x0,  0x4, 0x1},
+ 	{HPRE_CORE_TYPE_NUM_CAP, 0x313c, 16, GENMASK(3, 0), 0x0, 0x2, 0x2},
+@@ -358,6 +354,8 @@ static struct dfx_diff_registers hpre_diff_regs[] = {
+ 	},
+ };
+ 
++static const struct hisi_qm_err_ini hpre_err_ini;
++
+ bool hpre_check_alg_support(struct hisi_qm *qm, u32 alg)
+ {
+ 	u32 cap_val;
+@@ -654,11 +652,6 @@ static int hpre_set_user_domain_and_cache(struct hisi_qm *qm)
+ 	writel(HPRE_QM_USR_CFG_MASK, qm->io_base + QM_AWUSER_M_CFG_ENABLE);
+ 	writel_relaxed(HPRE_QM_AXI_CFG_MASK, qm->io_base + QM_AXI_M_CFG);
+ 
+-	/* HPRE need more time, we close this interrupt */
+-	val = readl_relaxed(qm->io_base + HPRE_QM_ABNML_INT_MASK);
+-	val |= BIT(HPRE_TIMEOUT_ABNML_BIT);
+-	writel_relaxed(val, qm->io_base + HPRE_QM_ABNML_INT_MASK);
+-
+ 	if (qm->ver >= QM_HW_V3)
+ 		writel(HPRE_RSA_ENB | HPRE_ECC_ENB,
+ 			qm->io_base + HPRE_TYPES_ENB);
+@@ -667,9 +660,7 @@ static int hpre_set_user_domain_and_cache(struct hisi_qm *qm)
+ 
+ 	writel(HPRE_QM_VFG_AX_MASK, qm->io_base + HPRE_VFG_AXCACHE);
+ 	writel(0x0, qm->io_base + HPRE_BD_ENDIAN);
+-	writel(0x0, qm->io_base + HPRE_INT_MASK);
+ 	writel(0x0, qm->io_base + HPRE_POISON_BYPASS);
+-	writel(0x0, qm->io_base + HPRE_COMM_CNT_CLR_CE);
+ 	writel(0x0, qm->io_base + HPRE_ECC_BYPASS);
+ 
+ 	writel(HPRE_BD_USR_MASK, qm->io_base + HPRE_BD_ARUSR_CFG);
+@@ -759,7 +750,7 @@ static void hpre_hw_error_disable(struct hisi_qm *qm)
+ 
+ static void hpre_hw_error_enable(struct hisi_qm *qm)
+ {
+-	u32 ce, nfe;
++	u32 ce, nfe, err_en;
+ 
+ 	ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CE_MASK_CAP, qm->cap_ver);
+ 	nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
+@@ -776,7 +767,8 @@ static void hpre_hw_error_enable(struct hisi_qm *qm)
+ 	hpre_master_ooo_ctrl(qm, true);
+ 
+ 	/* enable hpre hw error interrupts */
+-	writel(HPRE_CORE_INT_ENABLE, qm->io_base + HPRE_INT_MASK);
++	err_en = ce | nfe | HPRE_HAC_RAS_FE_ENABLE;
++	writel(~err_en, qm->io_base + HPRE_INT_MASK);
+ }
+ 
+ static inline struct hisi_qm *hpre_file_to_qm(struct hpre_debugfs_file *file)
+@@ -1161,6 +1153,7 @@ static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+ 		qm->qp_num = pf_q_num;
+ 		qm->debug.curr_qm_qp_num = pf_q_num;
+ 		qm->qm_list = &hpre_devices;
++		qm->err_ini = &hpre_err_ini;
+ 		if (pf_q_num_flag)
+ 			set_bit(QM_MODULE_PARAM, &qm->misc_ctl);
+ 	}
+@@ -1350,8 +1343,6 @@ static int hpre_pf_probe_init(struct hpre *hpre)
+ 
+ 	hpre_open_sva_prefetch(qm);
+ 
+-	qm->err_ini = &hpre_err_ini;
+-	qm->err_ini->err_info_init(qm);
+ 	hisi_qm_dev_err_init(qm);
+ 	ret = hpre_show_last_regs_init(qm);
+ 	if (ret)
+@@ -1380,6 +1371,18 @@ static int hpre_probe_init(struct hpre *hpre)
+ 	return 0;
+ }
+ 
++static void hpre_probe_uninit(struct hisi_qm *qm)
++{
++	if (qm->fun_type == QM_HW_VF)
++		return;
++
++	hpre_cnt_regs_clear(qm);
++	qm->debug.curr_qm_qp_num = 0;
++	hpre_show_last_regs_uninit(qm);
++	hpre_close_sva_prefetch(qm);
++	hisi_qm_dev_err_uninit(qm);
++}
++
+ static int hpre_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ 	struct hisi_qm *qm;
+@@ -1405,7 +1408,7 @@ static int hpre_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	ret = hisi_qm_start(qm);
+ 	if (ret)
+-		goto err_with_err_init;
++		goto err_with_probe_init;
+ 
+ 	ret = hpre_debugfs_init(qm);
+ 	if (ret)
+@@ -1444,9 +1447,8 @@ static int hpre_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	hpre_debugfs_exit(qm);
+ 	hisi_qm_stop(qm, QM_NORMAL);
+ 
+-err_with_err_init:
+-	hpre_show_last_regs_uninit(qm);
+-	hisi_qm_dev_err_uninit(qm);
++err_with_probe_init:
++	hpre_probe_uninit(qm);
+ 
+ err_with_qm_init:
+ 	hisi_qm_uninit(qm);
+@@ -1468,13 +1470,7 @@ static void hpre_remove(struct pci_dev *pdev)
+ 	hpre_debugfs_exit(qm);
+ 	hisi_qm_stop(qm, QM_NORMAL);
+ 
+-	if (qm->fun_type == QM_HW_PF) {
+-		hpre_cnt_regs_clear(qm);
+-		qm->debug.curr_qm_qp_num = 0;
+-		hpre_show_last_regs_uninit(qm);
+-		hisi_qm_dev_err_uninit(qm);
+-	}
+-
++	hpre_probe_uninit(qm);
+ 	hisi_qm_uninit(qm);
+ }
+ 
+diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
+index 3dac8d8e856866..da562ceaaf277f 100644
+--- a/drivers/crypto/hisilicon/qm.c
++++ b/drivers/crypto/hisilicon/qm.c
+@@ -450,6 +450,7 @@ static struct qm_typical_qos_table shaper_cbs_s[] = {
+ };
+ 
+ static void qm_irqs_unregister(struct hisi_qm *qm);
++static int qm_reset_device(struct hisi_qm *qm);
+ 
+ static u32 qm_get_hw_error_status(struct hisi_qm *qm)
+ {
+@@ -4019,6 +4020,28 @@ static int qm_set_vf_mse(struct hisi_qm *qm, bool set)
+ 	return -ETIMEDOUT;
+ }
+ 
++static void qm_dev_ecc_mbit_handle(struct hisi_qm *qm)
++{
++	u32 nfe_enb = 0;
++
++	/* Kunpeng930 hardware automatically close master ooo when NFE occurs */
++	if (qm->ver >= QM_HW_V3)
++		return;
++
++	if (!qm->err_status.is_dev_ecc_mbit &&
++	    qm->err_status.is_qm_ecc_mbit &&
++	    qm->err_ini->close_axi_master_ooo) {
++		qm->err_ini->close_axi_master_ooo(qm);
++	} else if (qm->err_status.is_dev_ecc_mbit &&
++		   !qm->err_status.is_qm_ecc_mbit &&
++		   !qm->err_ini->close_axi_master_ooo) {
++		nfe_enb = readl(qm->io_base + QM_RAS_NFE_ENABLE);
++		writel(nfe_enb & QM_RAS_NFE_MBIT_DISABLE,
++		       qm->io_base + QM_RAS_NFE_ENABLE);
++		writel(QM_ECC_MBIT, qm->io_base + QM_ABNORMAL_INT_SET);
++	}
++}
++
+ static int qm_vf_reset_prepare(struct hisi_qm *qm,
+ 			       enum qm_stop_reason stop_reason)
+ {
+@@ -4083,6 +4106,8 @@ static int qm_controller_reset_prepare(struct hisi_qm *qm)
+ 		return ret;
+ 	}
+ 
++	qm_dev_ecc_mbit_handle(qm);
++
+ 	/* PF obtains the information of VF by querying the register. */
+ 	qm_cmd_uninit(qm);
+ 
+@@ -4113,33 +4138,26 @@ static int qm_controller_reset_prepare(struct hisi_qm *qm)
+ 	return 0;
+ }
+ 
+-static void qm_dev_ecc_mbit_handle(struct hisi_qm *qm)
++static int qm_master_ooo_check(struct hisi_qm *qm)
+ {
+-	u32 nfe_enb = 0;
++	u32 val;
++	int ret;
+ 
+-	/* Kunpeng930 hardware automatically close master ooo when NFE occurs */
+-	if (qm->ver >= QM_HW_V3)
+-		return;
++	/* Check the ooo register of the device before resetting the device. */
++	writel(ACC_MASTER_GLOBAL_CTRL_SHUTDOWN, qm->io_base + ACC_MASTER_GLOBAL_CTRL);
++	ret = readl_relaxed_poll_timeout(qm->io_base + ACC_MASTER_TRANS_RETURN,
++					 val, (val == ACC_MASTER_TRANS_RETURN_RW),
++					 POLL_PERIOD, POLL_TIMEOUT);
++	if (ret)
++		pci_warn(qm->pdev, "Bus lock! Please reset system.\n");
+ 
+-	if (!qm->err_status.is_dev_ecc_mbit &&
+-	    qm->err_status.is_qm_ecc_mbit &&
+-	    qm->err_ini->close_axi_master_ooo) {
+-		qm->err_ini->close_axi_master_ooo(qm);
+-	} else if (qm->err_status.is_dev_ecc_mbit &&
+-		   !qm->err_status.is_qm_ecc_mbit &&
+-		   !qm->err_ini->close_axi_master_ooo) {
+-		nfe_enb = readl(qm->io_base + QM_RAS_NFE_ENABLE);
+-		writel(nfe_enb & QM_RAS_NFE_MBIT_DISABLE,
+-		       qm->io_base + QM_RAS_NFE_ENABLE);
+-		writel(QM_ECC_MBIT, qm->io_base + QM_ABNORMAL_INT_SET);
+-	}
++	return ret;
+ }
+ 
+-static int qm_soft_reset(struct hisi_qm *qm)
++static int qm_soft_reset_prepare(struct hisi_qm *qm)
+ {
+ 	struct pci_dev *pdev = qm->pdev;
+ 	int ret;
+-	u32 val;
+ 
+ 	/* Ensure all doorbells and mailboxes received by QM */
+ 	ret = qm_check_req_recv(qm);
+@@ -4160,30 +4178,23 @@ static int qm_soft_reset(struct hisi_qm *qm)
+ 		return ret;
+ 	}
+ 
+-	qm_dev_ecc_mbit_handle(qm);
+-
+-	/* OOO register set and check */
+-	writel(ACC_MASTER_GLOBAL_CTRL_SHUTDOWN,
+-	       qm->io_base + ACC_MASTER_GLOBAL_CTRL);
+-
+-	/* If bus lock, reset chip */
+-	ret = readl_relaxed_poll_timeout(qm->io_base + ACC_MASTER_TRANS_RETURN,
+-					 val,
+-					 (val == ACC_MASTER_TRANS_RETURN_RW),
+-					 POLL_PERIOD, POLL_TIMEOUT);
+-	if (ret) {
+-		pci_emerg(pdev, "Bus lock! Please reset system.\n");
++	ret = qm_master_ooo_check(qm);
++	if (ret)
+ 		return ret;
+-	}
+ 
+ 	if (qm->err_ini->close_sva_prefetch)
+ 		qm->err_ini->close_sva_prefetch(qm);
+ 
+ 	ret = qm_set_pf_mse(qm, false);
+-	if (ret) {
++	if (ret)
+ 		pci_err(pdev, "Fails to disable pf MSE bit.\n");
+-		return ret;
+-	}
++
++	return ret;
++}
++
++static int qm_reset_device(struct hisi_qm *qm)
++{
++	struct pci_dev *pdev = qm->pdev;
+ 
+ 	/* The reset related sub-control registers are not in PCI BAR */
+ 	if (ACPI_HANDLE(&pdev->dev)) {
+@@ -4202,12 +4213,23 @@ static int qm_soft_reset(struct hisi_qm *qm)
+ 			pci_err(pdev, "Reset step %llu failed!\n", value);
+ 			return -EIO;
+ 		}
+-	} else {
+-		pci_err(pdev, "No reset method!\n");
+-		return -EINVAL;
++
++		return 0;
+ 	}
+ 
+-	return 0;
++	pci_err(pdev, "No reset method!\n");
++	return -EINVAL;
++}
++
++static int qm_soft_reset(struct hisi_qm *qm)
++{
++	int ret;
++
++	ret = qm_soft_reset_prepare(qm);
++	if (ret)
++		return ret;
++
++	return qm_reset_device(qm);
+ }
+ 
+ static int qm_vf_reset_done(struct hisi_qm *qm)
+@@ -5160,6 +5182,35 @@ static int qm_get_pci_res(struct hisi_qm *qm)
+ 	return ret;
+ }
+ 
++static int qm_clear_device(struct hisi_qm *qm)
++{
++	acpi_handle handle = ACPI_HANDLE(&qm->pdev->dev);
++	int ret;
++
++	if (qm->fun_type == QM_HW_VF)
++		return 0;
++
++	/* Device does not support reset, return */
++	if (!qm->err_ini->err_info_init)
++		return 0;
++	qm->err_ini->err_info_init(qm);
++
++	if (!handle)
++		return 0;
++
++	/* No reset method, return */
++	if (!acpi_has_method(handle, qm->err_info.acpi_rst))
++		return 0;
++
++	ret = qm_master_ooo_check(qm);
++	if (ret) {
++		writel(0x0, qm->io_base + ACC_MASTER_GLOBAL_CTRL);
++		return ret;
++	}
++
++	return qm_reset_device(qm);
++}
++
+ static int hisi_qm_pci_init(struct hisi_qm *qm)
+ {
+ 	struct pci_dev *pdev = qm->pdev;
+@@ -5189,8 +5240,14 @@ static int hisi_qm_pci_init(struct hisi_qm *qm)
+ 		goto err_get_pci_res;
+ 	}
+ 
++	ret = qm_clear_device(qm);
++	if (ret)
++		goto err_free_vectors;
++
+ 	return 0;
+ 
++err_free_vectors:
++	pci_free_irq_vectors(pdev);
+ err_get_pci_res:
+ 	qm_put_pci_res(qm);
+ err_disable_pcidev:
+@@ -5491,7 +5548,6 @@ static int qm_prepare_for_suspend(struct hisi_qm *qm)
+ {
+ 	struct pci_dev *pdev = qm->pdev;
+ 	int ret;
+-	u32 val;
+ 
+ 	ret = qm->ops->set_msi(qm, false);
+ 	if (ret) {
+@@ -5499,18 +5555,9 @@ static int qm_prepare_for_suspend(struct hisi_qm *qm)
+ 		return ret;
+ 	}
+ 
+-	/* shutdown OOO register */
+-	writel(ACC_MASTER_GLOBAL_CTRL_SHUTDOWN,
+-	       qm->io_base + ACC_MASTER_GLOBAL_CTRL);
+-
+-	ret = readl_relaxed_poll_timeout(qm->io_base + ACC_MASTER_TRANS_RETURN,
+-					 val,
+-					 (val == ACC_MASTER_TRANS_RETURN_RW),
+-					 POLL_PERIOD, POLL_TIMEOUT);
+-	if (ret) {
+-		pci_emerg(pdev, "Bus lock! Please reset system.\n");
++	ret = qm_master_ooo_check(qm);
++	if (ret)
+ 		return ret;
+-	}
+ 
+ 	ret = qm_set_pf_mse(qm, false);
+ 	if (ret)
+diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
+index 75aad04ffe5eb9..c35533d8930b21 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_main.c
++++ b/drivers/crypto/hisilicon/sec2/sec_main.c
+@@ -1065,9 +1065,6 @@ static int sec_pf_probe_init(struct sec_dev *sec)
+ 	struct hisi_qm *qm = &sec->qm;
+ 	int ret;
+ 
+-	qm->err_ini = &sec_err_ini;
+-	qm->err_ini->err_info_init(qm);
+-
+ 	ret = sec_set_user_domain_and_cache(qm);
+ 	if (ret)
+ 		return ret;
+@@ -1122,6 +1119,7 @@ static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+ 		qm->qp_num = pf_q_num;
+ 		qm->debug.curr_qm_qp_num = pf_q_num;
+ 		qm->qm_list = &sec_devices;
++		qm->err_ini = &sec_err_ini;
+ 		if (pf_q_num_flag)
+ 			set_bit(QM_MODULE_PARAM, &qm->misc_ctl);
+ 	} else if (qm->fun_type == QM_HW_VF && qm->ver == QM_HW_V1) {
+@@ -1186,6 +1184,12 @@ static int sec_probe_init(struct sec_dev *sec)
+ 
+ static void sec_probe_uninit(struct hisi_qm *qm)
+ {
++	if (qm->fun_type == QM_HW_VF)
++		return;
++
++	sec_debug_regs_clear(qm);
++	sec_show_last_regs_uninit(qm);
++	sec_close_sva_prefetch(qm);
+ 	hisi_qm_dev_err_uninit(qm);
+ }
+ 
+@@ -1274,7 +1278,6 @@ static int sec_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	sec_debugfs_exit(qm);
+ 	hisi_qm_stop(qm, QM_NORMAL);
+ err_probe_uninit:
+-	sec_show_last_regs_uninit(qm);
+ 	sec_probe_uninit(qm);
+ err_qm_uninit:
+ 	sec_qm_uninit(qm);
+@@ -1296,11 +1299,6 @@ static void sec_remove(struct pci_dev *pdev)
+ 	sec_debugfs_exit(qm);
+ 
+ 	(void)hisi_qm_stop(qm, QM_NORMAL);
+-
+-	if (qm->fun_type == QM_HW_PF)
+-		sec_debug_regs_clear(qm);
+-	sec_show_last_regs_uninit(qm);
+-
+ 	sec_probe_uninit(qm);
+ 
+ 	sec_qm_uninit(qm);
+diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c
+index c94a7b20d07e6c..befef0b0e6bbe4 100644
+--- a/drivers/crypto/hisilicon/zip/zip_main.c
++++ b/drivers/crypto/hisilicon/zip/zip_main.c
+@@ -1149,8 +1149,6 @@ static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
+ 
+ 	hisi_zip->ctrl = ctrl;
+ 	ctrl->hisi_zip = hisi_zip;
+-	qm->err_ini = &hisi_zip_err_ini;
+-	qm->err_ini->err_info_init(qm);
+ 
+ 	ret = hisi_zip_set_user_domain_and_cache(qm);
+ 	if (ret)
+@@ -1211,6 +1209,7 @@ static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+ 		qm->qp_num = pf_q_num;
+ 		qm->debug.curr_qm_qp_num = pf_q_num;
+ 		qm->qm_list = &zip_devices;
++		qm->err_ini = &hisi_zip_err_ini;
+ 		if (pf_q_num_flag)
+ 			set_bit(QM_MODULE_PARAM, &qm->misc_ctl);
+ 	} else if (qm->fun_type == QM_HW_VF && qm->ver == QM_HW_V1) {
+@@ -1277,6 +1276,16 @@ static int hisi_zip_probe_init(struct hisi_zip *hisi_zip)
+ 	return 0;
+ }
+ 
++static void hisi_zip_probe_uninit(struct hisi_qm *qm)
++{
++	if (qm->fun_type == QM_HW_VF)
++		return;
++
++	hisi_zip_show_last_regs_uninit(qm);
++	hisi_zip_close_sva_prefetch(qm);
++	hisi_qm_dev_err_uninit(qm);
++}
++
+ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ 	struct hisi_zip *hisi_zip;
+@@ -1303,7 +1312,7 @@ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	ret = hisi_qm_start(qm);
+ 	if (ret)
+-		goto err_dev_err_uninit;
++		goto err_probe_uninit;
+ 
+ 	ret = hisi_zip_debugfs_init(qm);
+ 	if (ret)
+@@ -1342,9 +1351,8 @@ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	hisi_zip_debugfs_exit(qm);
+ 	hisi_qm_stop(qm, QM_NORMAL);
+ 
+-err_dev_err_uninit:
+-	hisi_zip_show_last_regs_uninit(qm);
+-	hisi_qm_dev_err_uninit(qm);
++err_probe_uninit:
++	hisi_zip_probe_uninit(qm);
+ 
+ err_qm_uninit:
+ 	hisi_zip_qm_uninit(qm);
+@@ -1366,8 +1374,7 @@ static void hisi_zip_remove(struct pci_dev *pdev)
+ 
+ 	hisi_zip_debugfs_exit(qm);
+ 	hisi_qm_stop(qm, QM_NORMAL);
+-	hisi_zip_show_last_regs_uninit(qm);
+-	hisi_qm_dev_err_uninit(qm);
++	hisi_zip_probe_uninit(qm);
+ 	hisi_zip_qm_uninit(qm);
+ }
+ 
+diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+index e810d286ee8c42..237f8700007021 100644
+--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
++++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+@@ -495,10 +495,10 @@ static void remove_device_compression_modes(struct iaa_device *iaa_device)
+ 		if (!device_mode)
+ 			continue;
+ 
+-		free_device_compression_mode(iaa_device, device_mode);
+-		iaa_device->compression_modes[i] = NULL;
+ 		if (iaa_compression_modes[i]->free)
+ 			iaa_compression_modes[i]->free(device_mode);
++		free_device_compression_mode(iaa_device, device_mode);
++		iaa_device->compression_modes[i] = NULL;
+ 	}
+ }
+ 
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
+index 8b10926cedbac2..e8c53bd76f1bd2 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
++++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
+@@ -83,7 +83,7 @@
+ #define ADF_WQM_CSR_RPRESETSTS(bank)	(ADF_WQM_CSR_RPRESETCTL(bank) + 4)
+ 
+ /* Ring interrupt */
+-#define ADF_RP_INT_SRC_SEL_F_RISE_MASK	BIT(2)
++#define ADF_RP_INT_SRC_SEL_F_RISE_MASK	GENMASK(1, 0)
+ #define ADF_RP_INT_SRC_SEL_F_FALL_MASK	GENMASK(2, 0)
+ #define ADF_RP_INT_SRC_SEL_RANGE_WIDTH	4
+ #define ADF_COALESCED_POLL_TIMEOUT_US	(1 * USEC_PER_SEC)
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_init.c b/drivers/crypto/intel/qat/qat_common/adf_init.c
+index 74f0818c070348..55f1ff1e0b3225 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_init.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_init.c
+@@ -323,6 +323,8 @@ static void adf_dev_stop(struct adf_accel_dev *accel_dev)
+ 	if (hw_data->stop_timer)
+ 		hw_data->stop_timer(accel_dev);
+ 
++	hw_data->disable_iov(accel_dev);
++
+ 	if (wait)
+ 		msleep(100);
+ 
+@@ -386,8 +388,6 @@ static void adf_dev_shutdown(struct adf_accel_dev *accel_dev)
+ 
+ 	adf_tl_shutdown(accel_dev);
+ 
+-	hw_data->disable_iov(accel_dev);
+-
+ 	if (test_bit(ADF_STATUS_IRQ_ALLOCATED, &accel_dev->status)) {
+ 		hw_data->free_irq(accel_dev);
+ 		clear_bit(ADF_STATUS_IRQ_ALLOCATED, &accel_dev->status);
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_msg.c b/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_msg.c
+index 0e31f4b41844e0..0cee3b23dee90b 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_msg.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_msg.c
+@@ -18,14 +18,17 @@ void adf_pf2vf_notify_restarting(struct adf_accel_dev *accel_dev)
+ 
+ 	dev_dbg(&GET_DEV(accel_dev), "pf2vf notify restarting\n");
+ 	for (i = 0, vf = accel_dev->pf.vf_info; i < num_vfs; i++, vf++) {
+-		vf->restarting = false;
++		if (vf->init && vf->vf_compat_ver >= ADF_PFVF_COMPAT_FALLBACK)
++			vf->restarting = true;
++		else
++			vf->restarting = false;
++
+ 		if (!vf->init)
+ 			continue;
++
+ 		if (adf_send_pf2vf_msg(accel_dev, i, msg))
+ 			dev_err(&GET_DEV(accel_dev),
+ 				"Failed to send restarting msg to VF%d\n", i);
+-		else if (vf->vf_compat_ver >= ADF_PFVF_COMPAT_FALLBACK)
+-			vf->restarting = true;
+ 	}
+ }
+ 
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.c b/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.c
+index 1141258db4b65a..10c91e56d6be3b 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.c
+@@ -48,6 +48,20 @@ void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev)
+ }
+ EXPORT_SYMBOL_GPL(adf_vf2pf_notify_shutdown);
+ 
++void adf_vf2pf_notify_restart_complete(struct adf_accel_dev *accel_dev)
++{
++	struct pfvf_message msg = { .type = ADF_VF2PF_MSGTYPE_RESTARTING_COMPLETE };
++
++	/* Check compatibility version */
++	if (accel_dev->vf.pf_compat_ver < ADF_PFVF_COMPAT_FALLBACK)
++		return;
++
++	if (adf_send_vf2pf_msg(accel_dev, msg))
++		dev_err(&GET_DEV(accel_dev),
++			"Failed to send Restarting complete event to PF\n");
++}
++EXPORT_SYMBOL_GPL(adf_vf2pf_notify_restart_complete);
++
+ int adf_vf2pf_request_version(struct adf_accel_dev *accel_dev)
+ {
+ 	u8 pf_version;
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.h b/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.h
+index 71bc0e3f1d9335..d79340ab3134ff 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.h
++++ b/drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_msg.h
+@@ -6,6 +6,7 @@
+ #if defined(CONFIG_PCI_IOV)
+ int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev);
+ void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev);
++void adf_vf2pf_notify_restart_complete(struct adf_accel_dev *accel_dev);
+ int adf_vf2pf_request_version(struct adf_accel_dev *accel_dev);
+ int adf_vf2pf_get_capabilities(struct adf_accel_dev *accel_dev);
+ int adf_vf2pf_get_ring_to_svc(struct adf_accel_dev *accel_dev);
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c b/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
+index cdbb2d687b1b0d..4ab9ac33151957 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
+@@ -13,6 +13,7 @@
+ #include "adf_cfg.h"
+ #include "adf_cfg_strings.h"
+ #include "adf_cfg_common.h"
++#include "adf_pfvf_vf_msg.h"
+ #include "adf_transport_access_macros.h"
+ #include "adf_transport_internal.h"
+ 
+@@ -75,6 +76,7 @@ static void adf_dev_stop_async(struct work_struct *work)
+ 
+ 	/* Re-enable PF2VF interrupts */
+ 	adf_enable_pf2vf_interrupts(accel_dev);
++	adf_vf2pf_notify_restart_complete(accel_dev);
+ 	kfree(stop_data);
+ }
+ 
+diff --git a/drivers/crypto/n2_core.c b/drivers/crypto/n2_core.c
+index 59d472cb11e750..509aeffdedc69c 100644
+--- a/drivers/crypto/n2_core.c
++++ b/drivers/crypto/n2_core.c
+@@ -1357,6 +1357,7 @@ static int __n2_register_one_hmac(struct n2_ahash_alg *n2ahash)
+ 	ahash->setkey = n2_hmac_async_setkey;
+ 
+ 	base = &ahash->halg.base;
++	err = -EINVAL;
+ 	if (snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "hmac(%s)",
+ 		     p->child_alg) >= CRYPTO_MAX_ALG_NAME)
+ 		goto out_free_p;
+diff --git a/drivers/crypto/qcom-rng.c b/drivers/crypto/qcom-rng.c
+index c670d7d0c11ea8..6496b075a48d38 100644
+--- a/drivers/crypto/qcom-rng.c
++++ b/drivers/crypto/qcom-rng.c
+@@ -196,7 +196,7 @@ static int qcom_rng_probe(struct platform_device *pdev)
+ 	if (IS_ERR(rng->clk))
+ 		return PTR_ERR(rng->clk);
+ 
+-	rng->of_data = (struct qcom_rng_of_data *)of_device_get_match_data(&pdev->dev);
++	rng->of_data = (struct qcom_rng_of_data *)device_get_match_data(&pdev->dev);
+ 
+ 	qcom_rng_dev = rng;
+ 	ret = crypto_register_rng(&qcom_rng_alg);
+@@ -247,7 +247,7 @@ static struct qcom_rng_of_data qcom_trng_of_data = {
+ };
+ 
+ static const struct acpi_device_id __maybe_unused qcom_rng_acpi_match[] = {
+-	{ .id = "QCOM8160", .driver_data = 1 },
++	{ .id = "QCOM8160", .driver_data = (kernel_ulong_t)&qcom_prng_ee_of_data },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(acpi, qcom_rng_acpi_match);
+diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
+index 8567dd11eaac74..205085ccf12c30 100644
+--- a/drivers/cxl/core/pci.c
++++ b/drivers/cxl/core/pci.c
+@@ -390,10 +390,6 @@ int cxl_dvsec_rr_decode(struct device *dev, int d,
+ 
+ 		size |= temp & CXL_DVSEC_MEM_SIZE_LOW_MASK;
+ 		if (!size) {
+-			info->dvsec_range[i] = (struct range) {
+-				.start = 0,
+-				.end = CXL_RESOURCE_NONE,
+-			};
+ 			continue;
+ 		}
+ 
+@@ -411,12 +407,10 @@ int cxl_dvsec_rr_decode(struct device *dev, int d,
+ 
+ 		base |= temp & CXL_DVSEC_MEM_BASE_LOW_MASK;
+ 
+-		info->dvsec_range[i] = (struct range) {
++		info->dvsec_range[ranges++] = (struct range) {
+ 			.start = base,
+ 			.end = base + size - 1
+ 		};
+-
+-		ranges++;
+ 	}
+ 
+ 	info->ranges = ranges;
+diff --git a/drivers/edac/igen6_edac.c b/drivers/edac/igen6_edac.c
+index dbe9fe5f2ca6c3..594408aa934bf5 100644
+--- a/drivers/edac/igen6_edac.c
++++ b/drivers/edac/igen6_edac.c
+@@ -311,7 +311,7 @@ static u64 ehl_err_addr_to_imc_addr(u64 eaddr, int mc)
+ 	if (igen6_tom <= _4GB)
+ 		return eaddr + igen6_tolud - _4GB;
+ 
+-	if (eaddr < _4GB)
++	if (eaddr >= igen6_tom)
+ 		return eaddr + igen6_tolud - igen6_tom;
+ 
+ 	return eaddr;
+diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
+index ea7a9a342dd30b..d7416166fd8a42 100644
+--- a/drivers/edac/synopsys_edac.c
++++ b/drivers/edac/synopsys_edac.c
+@@ -10,6 +10,7 @@
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
+ #include <linux/spinlock.h>
++#include <linux/sizes.h>
+ #include <linux/interrupt.h>
+ #include <linux/of.h>
+ 
+@@ -337,6 +338,7 @@ struct synps_edac_priv {
+  * @get_mtype:		Get mtype.
+  * @get_dtype:		Get dtype.
+  * @get_ecc_state:	Get ECC state.
++ * @get_mem_info:	Get EDAC memory info
+  * @quirks:		To differentiate IPs.
+  */
+ struct synps_platform_data {
+@@ -344,6 +346,9 @@ struct synps_platform_data {
+ 	enum mem_type (*get_mtype)(const void __iomem *base);
+ 	enum dev_type (*get_dtype)(const void __iomem *base);
+ 	bool (*get_ecc_state)(void __iomem *base);
++#ifdef CONFIG_EDAC_DEBUG
++	u64 (*get_mem_info)(struct synps_edac_priv *priv);
++#endif
+ 	int quirks;
+ };
+ 
+@@ -402,6 +407,25 @@ static int zynq_get_error_info(struct synps_edac_priv *priv)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_EDAC_DEBUG
++/**
++ * zynqmp_get_mem_info - Get the current memory info.
++ * @priv:	DDR memory controller private instance data.
++ *
++ * Return: host interface address.
++ */
++static u64 zynqmp_get_mem_info(struct synps_edac_priv *priv)
++{
++	u64 hif_addr = 0, linear_addr;
++
++	linear_addr = priv->poison_addr;
++	if (linear_addr >= SZ_32G)
++		linear_addr = linear_addr - SZ_32G + SZ_2G;
++	hif_addr = linear_addr >> 3;
++	return hif_addr;
++}
++#endif
++
+ /**
+  * zynqmp_get_error_info - Get the current ECC error info.
+  * @priv:	DDR memory controller private instance data.
+@@ -922,6 +946,9 @@ static const struct synps_platform_data zynqmp_edac_def = {
+ 	.get_mtype	= zynqmp_get_mtype,
+ 	.get_dtype	= zynqmp_get_dtype,
+ 	.get_ecc_state	= zynqmp_get_ecc_state,
++#ifdef CONFIG_EDAC_DEBUG
++	.get_mem_info	= zynqmp_get_mem_info,
++#endif
+ 	.quirks         = (DDR_ECC_INTR_SUPPORT
+ #ifdef CONFIG_EDAC_DEBUG
+ 			  | DDR_ECC_DATA_POISON_SUPPORT
+@@ -975,10 +1002,16 @@ MODULE_DEVICE_TABLE(of, synps_edac_match);
+ static void ddr_poison_setup(struct synps_edac_priv *priv)
+ {
+ 	int col = 0, row = 0, bank = 0, bankgrp = 0, rank = 0, regval;
++	const struct synps_platform_data *p_data;
+ 	int index;
+ 	ulong hif_addr = 0;
+ 
+-	hif_addr = priv->poison_addr >> 3;
++	p_data = priv->p_data;
++
++	if (p_data->get_mem_info)
++		hif_addr = p_data->get_mem_info(priv);
++	else
++		hif_addr = priv->poison_addr >> 3;
+ 
+ 	for (index = 0; index < DDR_MAX_ROW_SHIFT; index++) {
+ 		if (priv->row_shift[index])
+diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
+index 9a7dc90330a351..a888a001bedb15 100644
+--- a/drivers/firewire/core-cdev.c
++++ b/drivers/firewire/core-cdev.c
+@@ -599,11 +599,11 @@ static void complete_transaction(struct fw_card *card, int rcode, u32 request_ts
+ 		queue_event(client, &e->event, rsp, sizeof(*rsp) + rsp->length, NULL, 0);
+ 
+ 		break;
++	}
+ 	default:
+ 		WARN_ON(1);
+ 		break;
+ 	}
+-	}
+ 
+ 	/* Drop the idr's reference */
+ 	client_put(client);
+diff --git a/drivers/firmware/arm_scmi/optee.c b/drivers/firmware/arm_scmi/optee.c
+index 4e7944b91e3857..0c8908d3b1d678 100644
+--- a/drivers/firmware/arm_scmi/optee.c
++++ b/drivers/firmware/arm_scmi/optee.c
+@@ -473,6 +473,13 @@ static int scmi_optee_chan_free(int id, void *p, void *data)
+ 	struct scmi_chan_info *cinfo = p;
+ 	struct scmi_optee_channel *channel = cinfo->transport_info;
+ 
++	/*
++	 * Different protocols might share the same chan info, so a previous
++	 * call might have already freed the structure.
++	 */
++	if (!channel)
++		return 0;
++
+ 	mutex_lock(&scmi_optee_private->mu);
+ 	list_del(&channel->link);
+ 	mutex_unlock(&scmi_optee_private->mu);
+diff --git a/drivers/firmware/efi/libstub/tpm.c b/drivers/firmware/efi/libstub/tpm.c
+index df3182f2e63a56..1fd6823248ab6e 100644
+--- a/drivers/firmware/efi/libstub/tpm.c
++++ b/drivers/firmware/efi/libstub/tpm.c
+@@ -96,7 +96,7 @@ static void efi_retrieve_tcg2_eventlog(int version, efi_physical_addr_t log_loca
+ 	}
+ 
+ 	/* Allocate space for the logs and copy them. */
+-	status = efi_bs_call(allocate_pool, EFI_LOADER_DATA,
++	status = efi_bs_call(allocate_pool, EFI_ACPI_RECLAIM_MEMORY,
+ 			     sizeof(*log_tbl) + log_size, (void **)&log_tbl);
+ 
+ 	if (status != EFI_SUCCESS) {
+diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
+index 68f4df7e6c3c7f..77dd831febf5bc 100644
+--- a/drivers/firmware/qcom/qcom_scm.c
++++ b/drivers/firmware/qcom/qcom_scm.c
+@@ -1875,14 +1875,12 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ 	 * will cause the boot stages to enter download mode, unless
+ 	 * disabled below by a clean shutdown/reboot.
+ 	 */
+-	if (download_mode)
+-		qcom_scm_set_download_mode(true);
+-
++	qcom_scm_set_download_mode(download_mode);
+ 
+ 	/*
+ 	 * Disable SDI if indicated by DT that it is enabled by default.
+ 	 */
+-	if (of_property_read_bool(pdev->dev.of_node, "qcom,sdi-enabled"))
++	if (of_property_read_bool(pdev->dev.of_node, "qcom,sdi-enabled") || !download_mode)
+ 		qcom_scm_disable_sdi();
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 0f7106066480ea..3fdbc10aebce1d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -902,10 +902,12 @@ amdgpu_vm_tlb_flush(struct amdgpu_vm_update_params *params,
+ {
+ 	struct amdgpu_vm *vm = params->vm;
+ 
+-	if (!fence || !*fence)
++	tlb_cb->vm = vm;
++	if (!fence || !*fence) {
++		amdgpu_vm_tlb_seq_cb(NULL, &tlb_cb->cb);
+ 		return;
++	}
+ 
+-	tlb_cb->vm = vm;
+ 	if (!dma_fence_add_callback(*fence, &tlb_cb->cb,
+ 				    amdgpu_vm_tlb_seq_cb)) {
+ 		dma_fence_put(vm->last_tlb_flush);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h b/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
+index fb2b394bb9c555..6e9eeaeb3de1dd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
+@@ -213,7 +213,7 @@ struct amd_sriov_msg_pf2vf_info {
+ 	uint32_t gpu_capacity;
+ 	/* reserved */
+ 	uint32_t reserved[256 - AMD_SRIOV_MSG_PF2VF_INFO_FILLED_SIZE];
+-};
++} __packed;
+ 
+ struct amd_sriov_msg_vf2pf_info_header {
+ 	/* the total structure size in byte */
+@@ -273,7 +273,7 @@ struct amd_sriov_msg_vf2pf_info {
+ 	uint32_t mes_info_size;
+ 	/* reserved */
+ 	uint32_t reserved[256 - AMD_SRIOV_MSG_VF2PF_INFO_FILLED_SIZE];
+-};
++} __packed;
+ 
+ /* mailbox message send from guest to host  */
+ enum amd_sriov_mailbox_request_message {
+diff --git a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
+index 25feab188dfe69..ebf83fee43bb99 100644
+--- a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
++++ b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
+@@ -2065,26 +2065,29 @@ amdgpu_atombios_encoder_get_lcd_info(struct amdgpu_encoder *encoder)
+ 					fake_edid_record = (ATOM_FAKE_EDID_PATCH_RECORD *)record;
+ 					if (fake_edid_record->ucFakeEDIDLength) {
+ 						struct edid *edid;
+-						int edid_size =
+-							max((int)EDID_LENGTH, (int)fake_edid_record->ucFakeEDIDLength);
+-						edid = kmalloc(edid_size, GFP_KERNEL);
++						int edid_size;
++
++						if (fake_edid_record->ucFakeEDIDLength == 128)
++							edid_size = fake_edid_record->ucFakeEDIDLength;
++						else
++							edid_size = fake_edid_record->ucFakeEDIDLength * 128;
++						edid = kmemdup(&fake_edid_record->ucFakeEDIDString[0],
++							       edid_size, GFP_KERNEL);
+ 						if (edid) {
+-							memcpy((u8 *)edid, (u8 *)&fake_edid_record->ucFakeEDIDString[0],
+-							       fake_edid_record->ucFakeEDIDLength);
+-
+ 							if (drm_edid_is_valid(edid)) {
+ 								adev->mode_info.bios_hardcoded_edid = edid;
+ 								adev->mode_info.bios_hardcoded_edid_size = edid_size;
+-							} else
++							} else {
+ 								kfree(edid);
++							}
+ 						}
++						record += struct_size(fake_edid_record,
++								      ucFakeEDIDString,
++								      edid_size);
++					} else {
++						/* empty fake edid record must be 3 bytes long */
++						record += sizeof(ATOM_FAKE_EDID_PATCH_RECORD) + 1;
+ 					}
+-					record += fake_edid_record->ucFakeEDIDLength ?
+-						  struct_size(fake_edid_record,
+-							      ucFakeEDIDString,
+-							      fake_edid_record->ucFakeEDIDLength) :
+-						  /* empty fake edid record must be 3 bytes long */
+-						  sizeof(ATOM_FAKE_EDID_PATCH_RECORD) + 1;
+ 					break;
+ 				case LCD_PANEL_RESOLUTION_RECORD_TYPE:
+ 					panel_res_record = (ATOM_PANEL_RESOLUTION_PATCH_RECORD *)record;
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+index e1a66d585f5e99..44b277c55de4a5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+@@ -155,7 +155,7 @@ static int mes_v11_0_submit_pkt_and_poll_completion(struct amdgpu_mes *mes,
+ 						    int api_status_off)
+ {
+ 	union MESAPI__QUERY_MES_STATUS mes_status_pkt;
+-	signed long timeout = 3000000; /* 3000 ms */
++	signed long timeout = 2100000; /* 2100 ms */
+ 	struct amdgpu_device *adev = mes->adev;
+ 	struct amdgpu_ring *ring = &mes->ring;
+ 	struct MES_API_STATUS *api_status;
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+index 30e80c6f11ed68..68ac91d28ded0f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+@@ -1353,170 +1353,6 @@ static void vcn_v4_0_5_unified_ring_set_wptr(struct amdgpu_ring *ring)
+ 	}
+ }
+ 
+-static int vcn_v4_0_5_limit_sched(struct amdgpu_cs_parser *p,
+-				struct amdgpu_job *job)
+-{
+-	struct drm_gpu_scheduler **scheds;
+-
+-	/* The create msg must be in the first IB submitted */
+-	if (atomic_read(&job->base.entity->fence_seq))
+-		return -EINVAL;
+-
+-	/* if VCN0 is harvested, we can't support AV1 */
+-	if (p->adev->vcn.harvest_config & AMDGPU_VCN_HARVEST_VCN0)
+-		return -EINVAL;
+-
+-	scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_ENC]
+-		[AMDGPU_RING_PRIO_0].sched;
+-	drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
+-	return 0;
+-}
+-
+-static int vcn_v4_0_5_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
+-			    uint64_t addr)
+-{
+-	struct ttm_operation_ctx ctx = { false, false };
+-	struct amdgpu_bo_va_mapping *map;
+-	uint32_t *msg, num_buffers;
+-	struct amdgpu_bo *bo;
+-	uint64_t start, end;
+-	unsigned int i;
+-	void *ptr;
+-	int r;
+-
+-	addr &= AMDGPU_GMC_HOLE_MASK;
+-	r = amdgpu_cs_find_mapping(p, addr, &bo, &map);
+-	if (r) {
+-		DRM_ERROR("Can't find BO for addr 0x%08llx\n", addr);
+-		return r;
+-	}
+-
+-	start = map->start * AMDGPU_GPU_PAGE_SIZE;
+-	end = (map->last + 1) * AMDGPU_GPU_PAGE_SIZE;
+-	if (addr & 0x7) {
+-		DRM_ERROR("VCN messages must be 8 byte aligned!\n");
+-		return -EINVAL;
+-	}
+-
+-	bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+-	amdgpu_bo_placement_from_domain(bo, bo->allowed_domains);
+-	r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
+-	if (r) {
+-		DRM_ERROR("Failed validating the VCN message BO (%d)!\n", r);
+-		return r;
+-	}
+-
+-	r = amdgpu_bo_kmap(bo, &ptr);
+-	if (r) {
+-		DRM_ERROR("Failed mapping the VCN message (%d)!\n", r);
+-		return r;
+-	}
+-
+-	msg = ptr + addr - start;
+-
+-	/* Check length */
+-	if (msg[1] > end - addr) {
+-		r = -EINVAL;
+-		goto out;
+-	}
+-
+-	if (msg[3] != RDECODE_MSG_CREATE)
+-		goto out;
+-
+-	num_buffers = msg[2];
+-	for (i = 0, msg = &msg[6]; i < num_buffers; ++i, msg += 4) {
+-		uint32_t offset, size, *create;
+-
+-		if (msg[0] != RDECODE_MESSAGE_CREATE)
+-			continue;
+-
+-		offset = msg[1];
+-		size = msg[2];
+-
+-		if (offset + size > end) {
+-			r = -EINVAL;
+-			goto out;
+-		}
+-
+-		create = ptr + addr + offset - start;
+-
+-		/* H264, HEVC and VP9 can run on any instance */
+-		if (create[0] == 0x7 || create[0] == 0x10 || create[0] == 0x11)
+-			continue;
+-
+-		r = vcn_v4_0_5_limit_sched(p, job);
+-		if (r)
+-			goto out;
+-	}
+-
+-out:
+-	amdgpu_bo_kunmap(bo);
+-	return r;
+-}
+-
+-#define RADEON_VCN_ENGINE_TYPE_ENCODE			(0x00000002)
+-#define RADEON_VCN_ENGINE_TYPE_DECODE			(0x00000003)
+-
+-#define RADEON_VCN_ENGINE_INFO				(0x30000001)
+-#define RADEON_VCN_ENGINE_INFO_MAX_OFFSET		16
+-
+-#define RENCODE_ENCODE_STANDARD_AV1			2
+-#define RENCODE_IB_PARAM_SESSION_INIT			0x00000003
+-#define RENCODE_IB_PARAM_SESSION_INIT_MAX_OFFSET	64
+-
+-/* return the offset in ib if id is found, -1 otherwise
+- * to speed up the searching we only search upto max_offset
+- */
+-static int vcn_v4_0_5_enc_find_ib_param(struct amdgpu_ib *ib, uint32_t id, int max_offset)
+-{
+-	int i;
+-
+-	for (i = 0; i < ib->length_dw && i < max_offset && ib->ptr[i] >= 8; i += ib->ptr[i]/4) {
+-		if (ib->ptr[i + 1] == id)
+-			return i;
+-	}
+-	return -1;
+-}
+-
+-static int vcn_v4_0_5_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
+-					   struct amdgpu_job *job,
+-					   struct amdgpu_ib *ib)
+-{
+-	struct amdgpu_ring *ring = amdgpu_job_ring(job);
+-	struct amdgpu_vcn_decode_buffer *decode_buffer;
+-	uint64_t addr;
+-	uint32_t val;
+-	int idx;
+-
+-	/* The first instance can decode anything */
+-	if (!ring->me)
+-		return 0;
+-
+-	/* RADEON_VCN_ENGINE_INFO is at the top of ib block */
+-	idx = vcn_v4_0_5_enc_find_ib_param(ib, RADEON_VCN_ENGINE_INFO,
+-			RADEON_VCN_ENGINE_INFO_MAX_OFFSET);
+-	if (idx < 0) /* engine info is missing */
+-		return 0;
+-
+-	val = amdgpu_ib_get_value(ib, idx + 2); /* RADEON_VCN_ENGINE_TYPE */
+-	if (val == RADEON_VCN_ENGINE_TYPE_DECODE) {
+-		decode_buffer = (struct amdgpu_vcn_decode_buffer *)&ib->ptr[idx + 6];
+-
+-		if (!(decode_buffer->valid_buf_flag  & 0x1))
+-			return 0;
+-
+-		addr = ((u64)decode_buffer->msg_buffer_address_hi) << 32 |
+-			decode_buffer->msg_buffer_address_lo;
+-		return vcn_v4_0_5_dec_msg(p, job, addr);
+-	} else if (val == RADEON_VCN_ENGINE_TYPE_ENCODE) {
+-		idx = vcn_v4_0_5_enc_find_ib_param(ib, RENCODE_IB_PARAM_SESSION_INIT,
+-			RENCODE_IB_PARAM_SESSION_INIT_MAX_OFFSET);
+-		if (idx >= 0 && ib->ptr[idx + 2] == RENCODE_ENCODE_STANDARD_AV1)
+-			return vcn_v4_0_5_limit_sched(p, job);
+-	}
+-	return 0;
+-}
+-
+ static const struct amdgpu_ring_funcs vcn_v4_0_5_unified_ring_vm_funcs = {
+ 	.type = AMDGPU_RING_TYPE_VCN_ENC,
+ 	.align_mask = 0x3f,
+@@ -1524,7 +1360,6 @@ static const struct amdgpu_ring_funcs vcn_v4_0_5_unified_ring_vm_funcs = {
+ 	.get_rptr = vcn_v4_0_5_unified_ring_get_rptr,
+ 	.get_wptr = vcn_v4_0_5_unified_ring_get_wptr,
+ 	.set_wptr = vcn_v4_0_5_unified_ring_set_wptr,
+-	.patch_cs_in_place = vcn_v4_0_5_ring_patch_cs_in_place,
+ 	.emit_frame_size =
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 3 +
+ 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 4 +
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 27e641f176289a..3541d154cc8d06 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4149,6 +4149,7 @@ static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
+ 
+ #define AMDGPU_DM_DEFAULT_MIN_BACKLIGHT 12
+ #define AMDGPU_DM_DEFAULT_MAX_BACKLIGHT 255
++#define AMDGPU_DM_MIN_SPREAD ((AMDGPU_DM_DEFAULT_MAX_BACKLIGHT - AMDGPU_DM_DEFAULT_MIN_BACKLIGHT) / 2)
+ #define AUX_BL_DEFAULT_TRANSITION_TIME_MS 50
+ 
+ static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm,
+@@ -4163,6 +4164,21 @@ static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm,
+ 		return;
+ 
+ 	amdgpu_acpi_get_backlight_caps(&caps);
++
++	/* validate the firmware value is sane */
++	if (caps.caps_valid) {
++		int spread = caps.max_input_signal - caps.min_input_signal;
++
++		if (caps.max_input_signal > AMDGPU_DM_DEFAULT_MAX_BACKLIGHT ||
++		    caps.min_input_signal < AMDGPU_DM_DEFAULT_MIN_BACKLIGHT ||
++		    spread > AMDGPU_DM_DEFAULT_MAX_BACKLIGHT ||
++		    spread < AMDGPU_DM_MIN_SPREAD) {
++			DRM_DEBUG_KMS("DM: Invalid backlight caps: min=%d, max=%d\n",
++				      caps.min_input_signal, caps.max_input_signal);
++			caps.caps_valid = false;
++		}
++	}
++
+ 	if (caps.caps_valid) {
+ 		dm->backlight_caps[bl_idx].caps_valid = true;
+ 		if (caps.aux_support)
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index b50010ed763327..9a620773141682 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -251,7 +251,7 @@ static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnecto
+ 		aconnector->dsc_aux = &aconnector->mst_root->dm_dp_aux.aux;
+ 
+ 	/* synaptics cascaded MST hub case */
+-	if (!aconnector->dsc_aux && is_synaptics_cascaded_panamera(aconnector->dc_link, port))
++	if (is_synaptics_cascaded_panamera(aconnector->dc_link, port))
+ 		aconnector->dsc_aux = port->mgr->aux;
+ 
+ 	if (!aconnector->dsc_aux)
+@@ -1111,7 +1111,7 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ 		params[count].num_slices_v = aconnector->dsc_settings.dsc_num_slices_v;
+ 		params[count].bpp_overwrite = aconnector->dsc_settings.dsc_bits_per_pixel;
+ 		params[count].compression_possible = stream->sink->dsc_caps.dsc_dec_caps.is_dsc_supported;
+-		dc_dsc_get_policy_for_timing(params[count].timing, 0, &dsc_policy);
++		dc_dsc_get_policy_for_timing(params[count].timing, 0, &dsc_policy, dc_link_get_highest_encoding_format(stream->link));
+ 		if (!dc_dsc_compute_bandwidth_range(
+ 				stream->sink->ctx->dc->res_pool->dscs[0],
+ 				stream->sink->ctx->dc->debug.dsc_min_slice_height_override,
+@@ -1264,6 +1264,9 @@ static bool is_dsc_need_re_compute(
+ 		}
+ 	}
+ 
++	if (new_stream_on_link_num == 0)
++		return false;
++
+ 	if (new_stream_on_link_num == 0)
+ 		return false;
+ 
+@@ -1583,7 +1586,7 @@ static bool is_dsc_common_config_possible(struct dc_stream_state *stream,
+ {
+ 	struct dc_dsc_policy dsc_policy = {0};
+ 
+-	dc_dsc_get_policy_for_timing(&stream->timing, 0, &dsc_policy);
++	dc_dsc_get_policy_for_timing(&stream->timing, 0, &dsc_policy, dc_link_get_highest_encoding_format(stream->link));
+ 	dc_dsc_compute_bandwidth_range(stream->sink->ctx->dc->res_pool->dscs[0],
+ 				       stream->sink->ctx->dc->debug.dsc_min_slice_height_override,
+ 				       dsc_policy.min_target_bpp * 16,
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+index 6c9b4e6491a5e4..985e847f958032 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+@@ -1149,6 +1149,12 @@ void dcn35_clk_mgr_construct(
+ 			ctx->dc->debug.disable_dpp_power_gate = false;
+ 			ctx->dc->debug.disable_hubp_power_gate = false;
+ 			ctx->dc->debug.disable_dsc_power_gate = false;
++
++			/* Disable dynamic IPS2 in older PMFW (93.12) for Z8 interop. */
++			if (ctx->dc->config.disable_ips == DMUB_IPS_ENABLE &&
++			    ctx->dce_version == DCN_VERSION_3_5 &&
++			    ((clk_mgr->base.smu_ver & 0x00FFFFFF) <= 0x005d0c00))
++				ctx->dc->config.disable_ips = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
+ 		} else {
+ 			/*let's reset the config control flag*/
+ 			ctx->dc->config.disable_ips = DMUB_IPS_DISABLE_ALL; /*pmfw not support it, disable it all*/
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dsc.h b/drivers/gpu/drm/amd/display/dc/dc_dsc.h
+index fe3078b8789ef1..01c07545ef6b47 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dsc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_dsc.h
+@@ -100,7 +100,8 @@ uint32_t dc_dsc_stream_bandwidth_overhead_in_kbps(
+  */
+ void dc_dsc_get_policy_for_timing(const struct dc_crtc_timing *timing,
+ 		uint32_t max_target_bpp_limit_override_x16,
+-		struct dc_dsc_policy *policy);
++		struct dc_dsc_policy *policy,
++		const enum dc_link_encoding_format link_encoding);
+ 
+ void dc_dsc_policy_set_max_target_bpp_limit(uint32_t limit);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c b/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
+index 150ef23440a2c1..f8c1e1ca678bf1 100644
+--- a/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
++++ b/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
+@@ -883,7 +883,7 @@ static bool setup_dsc_config(
+ 
+ 	memset(dsc_cfg, 0, sizeof(struct dc_dsc_config));
+ 
+-	dc_dsc_get_policy_for_timing(timing, options->max_target_bpp_limit_override_x16, &policy);
++	dc_dsc_get_policy_for_timing(timing, options->max_target_bpp_limit_override_x16, &policy, link_encoding);
+ 	pic_width = timing->h_addressable + timing->h_border_left + timing->h_border_right;
+ 	pic_height = timing->v_addressable + timing->v_border_top + timing->v_border_bottom;
+ 
+@@ -1156,7 +1156,8 @@ uint32_t dc_dsc_stream_bandwidth_overhead_in_kbps(
+ 
+ void dc_dsc_get_policy_for_timing(const struct dc_crtc_timing *timing,
+ 		uint32_t max_target_bpp_limit_override_x16,
+-		struct dc_dsc_policy *policy)
++		struct dc_dsc_policy *policy,
++		const enum dc_link_encoding_format link_encoding)
+ {
+ 	uint32_t bpc = 0;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+index 0d3ea291eeee18..e9e9f80a02a775 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+@@ -57,6 +57,7 @@
+ #include "panel_cntl.h"
+ #include "dc_state_priv.h"
+ #include "dpcd_defs.h"
++#include "dsc.h"
+ /* include DCE11 register header files */
+ #include "dce/dce_11_0_d.h"
+ #include "dce/dce_11_0_sh_mask.h"
+@@ -1768,6 +1769,48 @@ static void get_edp_links_with_sink(
+ 	}
+ }
+ 
++static void clean_up_dsc_blocks(struct dc *dc)
++{
++	struct display_stream_compressor *dsc = NULL;
++	struct timing_generator *tg = NULL;
++	struct stream_encoder *se = NULL;
++	struct dccg *dccg = dc->res_pool->dccg;
++	struct pg_cntl *pg_cntl = dc->res_pool->pg_cntl;
++	int i;
++
++	if (dc->ctx->dce_version != DCN_VERSION_3_5 &&
++		dc->ctx->dce_version != DCN_VERSION_3_51)
++		return;
++
++	for (i = 0; i < dc->res_pool->res_cap->num_dsc; i++) {
++		struct dcn_dsc_state s  = {0};
++
++		dsc = dc->res_pool->dscs[i];
++		dsc->funcs->dsc_read_state(dsc, &s);
++		if (s.dsc_fw_en) {
++			/* disable DSC in OPTC */
++			if (i < dc->res_pool->timing_generator_count) {
++				tg = dc->res_pool->timing_generators[i];
++				tg->funcs->set_dsc_config(tg, OPTC_DSC_DISABLED, 0, 0);
++			}
++			/* disable DSC in stream encoder */
++			if (i < dc->res_pool->stream_enc_count) {
++				se = dc->res_pool->stream_enc[i];
++				se->funcs->dp_set_dsc_config(se, OPTC_DSC_DISABLED, 0, 0);
++				se->funcs->dp_set_dsc_pps_info_packet(se, false, NULL, true);
++			}
++			/* disable DSC block */
++			if (dccg->funcs->set_ref_dscclk)
++				dccg->funcs->set_ref_dscclk(dccg, dsc->inst);
++			dsc->funcs->dsc_disable(dsc);
++
++			/* power down DSC */
++			if (pg_cntl != NULL)
++				pg_cntl->funcs->dsc_pg_control(pg_cntl, dsc->inst, false);
++		}
++	}
++}
++
+ /*
+  * When ASIC goes from VBIOS/VGA mode to driver/accelerated mode we need:
+  *  1. Power down all DC HW blocks
+@@ -1852,6 +1895,13 @@ void dce110_enable_accelerated_mode(struct dc *dc, struct dc_state *context)
+ 		clk_mgr_exit_optimized_pwr_state(dc, dc->clk_mgr);
+ 
+ 		power_down_all_hw_blocks(dc);
++
++		/* DSC could be enabled on eDP during VBIOS post.
++		 * To clean up dsc blocks if eDP is in link but not active.
++		 */
++		if (edp_link_with_sink && (edp_stream_num == 0))
++			clean_up_dsc_blocks(dc);
++
+ 		disable_vga_and_power_gate_all_controllers(dc);
+ 		if (edp_link_with_sink && !keep_edp_vdd_on)
+ 			dc->hwss.edp_power_control(edp_link_with_sink, false);
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+index 4c470615330509..05c5d4f04e1bd8 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+@@ -395,7 +395,11 @@ bool dcn30_set_output_transfer_func(struct dc *dc,
+ 		}
+ 	}
+ 
+-	mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
++	if (mpc->funcs->set_output_gamma)
++		mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
++	else
++		DC_LOG_ERROR("%s: set_output_gamma function pointer is NULL.\n", __func__);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+index b8e884368dc6e5..5fc377f51f5621 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+@@ -991,6 +991,20 @@ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ 		struct dsc_config dsc_cfg;
+ 		struct dsc_optc_config dsc_optc_cfg = {0};
+ 		enum optc_dsc_mode optc_dsc_mode;
++		struct dcn_dsc_state dsc_state = {0};
++
++		if (!dsc) {
++			DC_LOG_DSC("DSC is NULL for tg instance %d:", pipe_ctx->stream_res.tg->inst);
++			return;
++		}
++
++		if (dsc->funcs->dsc_read_state) {
++			dsc->funcs->dsc_read_state(dsc, &dsc_state);
++			if (!dsc_state.dsc_fw_en) {
++				DC_LOG_DSC("DSC has been disabled for tg instance %d:", pipe_ctx->stream_res.tg->inst);
++				return;
++			}
++		}
+ 
+ 		/* Enable DSC hw block */
+ 		dsc_cfg.pic_width = (stream->timing.h_addressable + stream->timing.h_border_left + stream->timing.h_border_right) / opp_cnt;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index 4f0e9e0f701dd2..900dfc0f275373 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -375,7 +375,20 @@ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ 		struct dsc_config dsc_cfg;
+ 		struct dsc_optc_config dsc_optc_cfg = {0};
+ 		enum optc_dsc_mode optc_dsc_mode;
++		struct dcn_dsc_state dsc_state = {0};
+ 
++		if (!dsc) {
++			DC_LOG_DSC("DSC is NULL for tg instance %d:", pipe_ctx->stream_res.tg->inst);
++			return;
++		}
++
++		if (dsc->funcs->dsc_read_state) {
++			dsc->funcs->dsc_read_state(dsc, &dsc_state);
++			if (!dsc_state.dsc_fw_en) {
++				DC_LOG_DSC("DSC has been disabled for tg instance %d:", pipe_ctx->stream_res.tg->inst);
++				return;
++			}
++		}
+ 		/* Enable DSC hw block */
+ 		dsc_cfg.pic_width = (stream->timing.h_addressable + stream->timing.h_border_left + stream->timing.h_border_right) / opp_cnt;
+ 		dsc_cfg.pic_height = stream->timing.v_addressable + stream->timing.v_border_top + stream->timing.v_border_bottom;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+index 915d68cc04e9c2..66b73edda7b6dd 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+@@ -2150,6 +2150,7 @@ static bool dcn35_resource_construct(
+ 	dc->dml2_options.max_segments_per_hubp = 24;
+ 
+ 	dc->dml2_options.det_segment_size = DCN3_2_DET_SEG_SIZE;/*todo*/
++	dc->dml2_options.override_det_buffer_size_kbytes = true;
+ 
+ 	if (dc->config.sdpif_request_limit_words_per_umc == 0)
+ 		dc->config.sdpif_request_limit_words_per_umc = 16;/*todo*/
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+index b7bd0f36125a4d..987c3927b11851 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+@@ -736,7 +736,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ 			.hdmichar = true,
+ 			.dpstream = true,
+ 			.symclk32_se = true,
+-			.symclk32_le = true,
++			.symclk32_le = false,
+ 			.symclk_fe = true,
+ 			.physymclk = true,
+ 			.dpiasymclk = true,
+@@ -2132,6 +2132,7 @@ static bool dcn351_resource_construct(
+ 
+ 	dc->dml2_options.max_segments_per_hubp = 24;
+ 	dc->dml2_options.det_segment_size = DCN3_2_DET_SEG_SIZE;/*todo*/
++	dc->dml2_options.override_det_buffer_size_kbytes = true;
+ 
+ 	if (dc->config.sdpif_request_limit_words_per_umc == 0)
+ 		dc->config.sdpif_request_limit_words_per_umc = 16;/*todo*/
+diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+index d09627c15b9c22..2b8d09bb0cc158 100644
+--- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
++++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+@@ -134,7 +134,7 @@ unsigned int mod_freesync_calc_v_total_from_refresh(
+ 
+ 	v_total = div64_u64(div64_u64(((unsigned long long)(
+ 			frame_duration_in_ns) * (stream->timing.pix_clk_100hz / 10)),
+-			stream->timing.h_total), 1000000);
++			stream->timing.h_total) + 500000, 1000000);
+ 
+ 	/* v_total cannot be less than nominal */
+ 	if (v_total < stream->timing.v_total) {
+diff --git a/drivers/gpu/drm/bridge/lontium-lt8912b.c b/drivers/gpu/drm/bridge/lontium-lt8912b.c
+index 1a9defa15663cf..e265ab3c8c9293 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt8912b.c
++++ b/drivers/gpu/drm/bridge/lontium-lt8912b.c
+@@ -422,22 +422,6 @@ static const struct drm_connector_funcs lt8912_connector_funcs = {
+ 	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ };
+ 
+-static enum drm_mode_status
+-lt8912_connector_mode_valid(struct drm_connector *connector,
+-			    struct drm_display_mode *mode)
+-{
+-	if (mode->clock > 150000)
+-		return MODE_CLOCK_HIGH;
+-
+-	if (mode->hdisplay > 1920)
+-		return MODE_BAD_HVALUE;
+-
+-	if (mode->vdisplay > 1080)
+-		return MODE_BAD_VVALUE;
+-
+-	return MODE_OK;
+-}
+-
+ static int lt8912_connector_get_modes(struct drm_connector *connector)
+ {
+ 	const struct drm_edid *drm_edid;
+@@ -463,7 +447,6 @@ static int lt8912_connector_get_modes(struct drm_connector *connector)
+ 
+ static const struct drm_connector_helper_funcs lt8912_connector_helper_funcs = {
+ 	.get_modes = lt8912_connector_get_modes,
+-	.mode_valid = lt8912_connector_mode_valid,
+ };
+ 
+ static void lt8912_bridge_mode_set(struct drm_bridge *bridge,
+@@ -605,6 +588,23 @@ static void lt8912_bridge_detach(struct drm_bridge *bridge)
+ 		drm_bridge_hpd_disable(lt->hdmi_port);
+ }
+ 
++static enum drm_mode_status
++lt8912_bridge_mode_valid(struct drm_bridge *bridge,
++			 const struct drm_display_info *info,
++			 const struct drm_display_mode *mode)
++{
++	if (mode->clock > 150000)
++		return MODE_CLOCK_HIGH;
++
++	if (mode->hdisplay > 1920)
++		return MODE_BAD_HVALUE;
++
++	if (mode->vdisplay > 1080)
++		return MODE_BAD_VVALUE;
++
++	return MODE_OK;
++}
++
+ static enum drm_connector_status
+ lt8912_bridge_detect(struct drm_bridge *bridge)
+ {
+@@ -635,6 +635,7 @@ static const struct drm_edid *lt8912_bridge_edid_read(struct drm_bridge *bridge,
+ static const struct drm_bridge_funcs lt8912_bridge_funcs = {
+ 	.attach = lt8912_bridge_attach,
+ 	.detach = lt8912_bridge_detach,
++	.mode_valid = lt8912_bridge_mode_valid,
+ 	.mode_set = lt8912_bridge_mode_set,
+ 	.enable = lt8912_bridge_enable,
+ 	.detect = lt8912_bridge_detect,
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_gsc.c b/drivers/gpu/drm/exynos/exynos_drm_gsc.c
+index 1b111e2c33472b..752339d33f39a5 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_gsc.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_gsc.c
+@@ -1174,7 +1174,7 @@ static int gsc_bind(struct device *dev, struct device *master, void *data)
+ 	struct exynos_drm_ipp *ipp = &ctx->ipp;
+ 
+ 	ctx->drm_dev = drm_dev;
+-	ctx->drm_dev = drm_dev;
++	ipp->drm_dev = drm_dev;
+ 	exynos_drm_register_dma(drm_dev, dev, &ctx->dma_priv);
+ 
+ 	exynos_drm_ipp_register(dev, ipp, &ipp_funcs,
+diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.c b/drivers/gpu/drm/mediatek/mtk_crtc.c
+index 6f34f573e127ec..a90504359e8d27 100644
+--- a/drivers/gpu/drm/mediatek/mtk_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_crtc.c
+@@ -69,6 +69,8 @@ struct mtk_crtc {
+ 	/* lock for display hardware access */
+ 	struct mutex			hw_lock;
+ 	bool				config_updating;
++	/* lock for config_updating to cmd buffer */
++	spinlock_t			config_lock;
+ };
+ 
+ struct mtk_crtc_state {
+@@ -106,11 +108,16 @@ static void mtk_crtc_finish_page_flip(struct mtk_crtc *mtk_crtc)
+ 
+ static void mtk_drm_finish_page_flip(struct mtk_crtc *mtk_crtc)
+ {
++	unsigned long flags;
++
+ 	drm_crtc_handle_vblank(&mtk_crtc->base);
++
++	spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+ 	if (!mtk_crtc->config_updating && mtk_crtc->pending_needs_vblank) {
+ 		mtk_crtc_finish_page_flip(mtk_crtc);
+ 		mtk_crtc->pending_needs_vblank = false;
+ 	}
++	spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
+ }
+ 
+ #if IS_REACHABLE(CONFIG_MTK_CMDQ)
+@@ -308,12 +315,19 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
+ 	struct mtk_crtc *mtk_crtc = container_of(cmdq_cl, struct mtk_crtc, cmdq_client);
+ 	struct mtk_crtc_state *state;
+ 	unsigned int i;
++	unsigned long flags;
+ 
+ 	if (data->sta < 0)
+ 		return;
+ 
+ 	state = to_mtk_crtc_state(mtk_crtc->base.state);
+ 
++	spin_lock_irqsave(&mtk_crtc->config_lock, flags);
++	if (mtk_crtc->config_updating) {
++		spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++		goto ddp_cmdq_cb_out;
++	}
++
+ 	state->pending_config = false;
+ 
+ 	if (mtk_crtc->pending_planes) {
+@@ -340,6 +354,10 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
+ 		mtk_crtc->pending_async_planes = false;
+ 	}
+ 
++	spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++
++ddp_cmdq_cb_out:
++
+ 	mtk_crtc->cmdq_vblank_cnt = 0;
+ 	wake_up(&mtk_crtc->cb_blocking_queue);
+ }
+@@ -449,6 +467,7 @@ static void mtk_crtc_ddp_hw_fini(struct mtk_crtc *mtk_crtc)
+ {
+ 	struct drm_device *drm = mtk_crtc->base.dev;
+ 	struct drm_crtc *crtc = &mtk_crtc->base;
++	unsigned long flags;
+ 	int i;
+ 
+ 	for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
+@@ -480,10 +499,10 @@ static void mtk_crtc_ddp_hw_fini(struct mtk_crtc *mtk_crtc)
+ 	pm_runtime_put(drm->dev);
+ 
+ 	if (crtc->state->event && !crtc->state->active) {
+-		spin_lock_irq(&crtc->dev->event_lock);
++		spin_lock_irqsave(&crtc->dev->event_lock, flags);
+ 		drm_crtc_send_vblank_event(crtc, crtc->state->event);
+ 		crtc->state->event = NULL;
+-		spin_unlock_irq(&crtc->dev->event_lock);
++		spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
+ 	}
+ }
+ 
+@@ -569,9 +588,14 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
+ 	struct mtk_drm_private *priv = crtc->dev->dev_private;
+ 	unsigned int pending_planes = 0, pending_async_planes = 0;
+ 	int i;
++	unsigned long flags;
+ 
+ 	mutex_lock(&mtk_crtc->hw_lock);
++
++	spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+ 	mtk_crtc->config_updating = true;
++	spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++
+ 	if (needs_vblank)
+ 		mtk_crtc->pending_needs_vblank = true;
+ 
+@@ -625,7 +649,10 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
+ 		mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0);
+ 	}
+ #endif
++	spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+ 	mtk_crtc->config_updating = false;
++	spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++
+ 	mutex_unlock(&mtk_crtc->hw_lock);
+ }
+ 
+@@ -1068,6 +1095,7 @@ int mtk_crtc_create(struct drm_device *drm_dev, const unsigned int *path,
+ 		drm_mode_crtc_set_gamma_size(&mtk_crtc->base, gamma_lut_size);
+ 	drm_crtc_enable_color_mgmt(&mtk_crtc->base, 0, has_ctm, gamma_lut_size);
+ 	mutex_init(&mtk_crtc->hw_lock);
++	spin_lock_init(&mtk_crtc->config_lock);
+ 
+ #if IS_REACHABLE(CONFIG_MTK_CMDQ)
+ 	i = priv->mbox_index++;
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index c003f970189b06..0eb3db9c3d9e61 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -65,6 +65,8 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
+ 
+ static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ {
++	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
++	struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
+ 	struct msm_ringbuffer *ring = submit->ring;
+ 	struct drm_gem_object *obj;
+ 	uint32_t *ptr, dwords;
+@@ -109,6 +111,7 @@ static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit
+ 		}
+ 	}
+ 
++	a5xx_gpu->last_seqno[ring->id] = submit->seqno;
+ 	a5xx_flush(gpu, ring, true);
+ 	a5xx_preempt_trigger(gpu);
+ 
+@@ -150,9 +153,13 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ 	OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1);
+ 	OUT_RING(ring, 1);
+ 
+-	/* Enable local preemption for finegrain preemption */
++	/*
++	 * Disable local preemption by default because it requires
++	 * user-space to be aware of it and provide additional handling
++	 * to restore rendering state or do various flushes on switch.
++	 */
+ 	OUT_PKT7(ring, CP_PREEMPT_ENABLE_LOCAL, 1);
+-	OUT_RING(ring, 0x1);
++	OUT_RING(ring, 0x0);
+ 
+ 	/* Allow CP_CONTEXT_SWITCH_YIELD packets in the IB2 */
+ 	OUT_PKT7(ring, CP_YIELD_ENABLE, 1);
+@@ -206,6 +213,7 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ 	/* Write the fence to the scratch register */
+ 	OUT_PKT4(ring, REG_A5XX_CP_SCRATCH_REG(2), 1);
+ 	OUT_RING(ring, submit->seqno);
++	a5xx_gpu->last_seqno[ring->id] = submit->seqno;
+ 
+ 	/*
+ 	 * Execute a CACHE_FLUSH_TS event. This will ensure that the
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.h b/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
+index c7187bcc5e9082..9c0d701fe4b85b 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
+@@ -34,8 +34,10 @@ struct a5xx_gpu {
+ 	struct drm_gem_object *preempt_counters_bo[MSM_GPU_MAX_RINGS];
+ 	struct a5xx_preempt_record *preempt[MSM_GPU_MAX_RINGS];
+ 	uint64_t preempt_iova[MSM_GPU_MAX_RINGS];
++	uint32_t last_seqno[MSM_GPU_MAX_RINGS];
+ 
+ 	atomic_t preempt_state;
++	spinlock_t preempt_start_lock;
+ 	struct timer_list preempt_timer;
+ 
+ 	struct drm_gem_object *shadow_bo;
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+index f58dd564d122ba..0469fea5501083 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+@@ -55,6 +55,8 @@ static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+ /* Return the highest priority ringbuffer with something in it */
+ static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
+ {
++	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
++	struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
+ 	unsigned long flags;
+ 	int i;
+ 
+@@ -64,6 +66,8 @@ static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
+ 
+ 		spin_lock_irqsave(&ring->preempt_lock, flags);
+ 		empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring));
++		if (!empty && ring == a5xx_gpu->cur_ring)
++			empty = ring->memptrs->fence == a5xx_gpu->last_seqno[i];
+ 		spin_unlock_irqrestore(&ring->preempt_lock, flags);
+ 
+ 		if (!empty)
+@@ -97,12 +101,19 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu)
+ 	if (gpu->nr_rings == 1)
+ 		return;
+ 
++	/*
++	 * Serialize preemption start to ensure that we always make
++	 * decision on latest state. Otherwise we can get stuck in
++	 * lower priority or empty ring.
++	 */
++	spin_lock_irqsave(&a5xx_gpu->preempt_start_lock, flags);
++
+ 	/*
+ 	 * Try to start preemption by moving from NONE to START. If
+ 	 * unsuccessful, a preemption is already in flight
+ 	 */
+ 	if (!try_preempt_state(a5xx_gpu, PREEMPT_NONE, PREEMPT_START))
+-		return;
++		goto out;
+ 
+ 	/* Get the next ring to preempt to */
+ 	ring = get_next_ring(gpu);
+@@ -127,9 +138,11 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu)
+ 		set_preempt_state(a5xx_gpu, PREEMPT_ABORT);
+ 		update_wptr(gpu, a5xx_gpu->cur_ring);
+ 		set_preempt_state(a5xx_gpu, PREEMPT_NONE);
+-		return;
++		goto out;
+ 	}
+ 
++	spin_unlock_irqrestore(&a5xx_gpu->preempt_start_lock, flags);
++
+ 	/* Make sure the wptr doesn't update while we're in motion */
+ 	spin_lock_irqsave(&ring->preempt_lock, flags);
+ 	a5xx_gpu->preempt[ring->id]->wptr = get_wptr(ring);
+@@ -152,6 +165,10 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu)
+ 
+ 	/* And actually start the preemption */
+ 	gpu_write(gpu, REG_A5XX_CP_CONTEXT_SWITCH_CNTL, 1);
++	return;
++
++out:
++	spin_unlock_irqrestore(&a5xx_gpu->preempt_start_lock, flags);
+ }
+ 
+ void a5xx_preempt_irq(struct msm_gpu *gpu)
+@@ -188,6 +205,12 @@ void a5xx_preempt_irq(struct msm_gpu *gpu)
+ 	update_wptr(gpu, a5xx_gpu->cur_ring);
+ 
+ 	set_preempt_state(a5xx_gpu, PREEMPT_NONE);
++
++	/*
++	 * Try to trigger preemption again in case there was a submit or
++	 * retire during ring switch
++	 */
++	a5xx_preempt_trigger(gpu);
+ }
+ 
+ void a5xx_preempt_hw_init(struct msm_gpu *gpu)
+@@ -204,6 +227,8 @@ void a5xx_preempt_hw_init(struct msm_gpu *gpu)
+ 		return;
+ 
+ 	for (i = 0; i < gpu->nr_rings; i++) {
++		a5xx_gpu->preempt[i]->data = 0;
++		a5xx_gpu->preempt[i]->info = 0;
+ 		a5xx_gpu->preempt[i]->wptr = 0;
+ 		a5xx_gpu->preempt[i]->rptr = 0;
+ 		a5xx_gpu->preempt[i]->rbase = gpu->rb[i]->iova;
+@@ -298,5 +323,6 @@ void a5xx_preempt_init(struct msm_gpu *gpu)
+ 		}
+ 	}
+ 
++	spin_lock_init(&a5xx_gpu->preempt_start_lock);
+ 	timer_setup(&a5xx_gpu->preempt_timer, a5xx_preempt_timer, 0);
+ }
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+index 789a11416f7a45..0fcae53c0b140b 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+@@ -388,18 +388,18 @@ static void a7xx_get_debugbus_blocks(struct msm_gpu *gpu,
+ 	const u32 *debugbus_blocks, *gbif_debugbus_blocks;
+ 	int i;
+ 
+-	if (adreno_is_a730(adreno_gpu)) {
++	if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ 		debugbus_blocks = gen7_0_0_debugbus_blocks;
+ 		debugbus_blocks_count = ARRAY_SIZE(gen7_0_0_debugbus_blocks);
+ 		gbif_debugbus_blocks = a7xx_gbif_debugbus_blocks;
+ 		gbif_debugbus_blocks_count = ARRAY_SIZE(a7xx_gbif_debugbus_blocks);
+-	} else if (adreno_is_a740_family(adreno_gpu)) {
++	} else if (adreno_gpu->info->family == ADRENO_7XX_GEN2) {
+ 		debugbus_blocks = gen7_2_0_debugbus_blocks;
+ 		debugbus_blocks_count = ARRAY_SIZE(gen7_2_0_debugbus_blocks);
+ 		gbif_debugbus_blocks = a7xx_gbif_debugbus_blocks;
+ 		gbif_debugbus_blocks_count = ARRAY_SIZE(a7xx_gbif_debugbus_blocks);
+ 	} else {
+-		BUG_ON(!adreno_is_a750(adreno_gpu));
++		BUG_ON(adreno_gpu->info->family != ADRENO_7XX_GEN3);
+ 		debugbus_blocks = gen7_9_0_debugbus_blocks;
+ 		debugbus_blocks_count = ARRAY_SIZE(gen7_9_0_debugbus_blocks);
+ 		gbif_debugbus_blocks = gen7_9_0_gbif_debugbus_blocks;
+@@ -509,7 +509,7 @@ static void a6xx_get_debugbus(struct msm_gpu *gpu,
+ 		const struct a6xx_debugbus_block *cx_debugbus_blocks;
+ 
+ 		if (adreno_is_a7xx(adreno_gpu)) {
+-			BUG_ON(!(adreno_is_a730(adreno_gpu) || adreno_is_a740_family(adreno_gpu)));
++			BUG_ON(adreno_gpu->info->family > ADRENO_7XX_GEN3);
+ 			cx_debugbus_blocks = a7xx_cx_debugbus_blocks;
+ 			nr_cx_debugbus_blocks = ARRAY_SIZE(a7xx_cx_debugbus_blocks);
+ 		} else {
+@@ -660,13 +660,16 @@ static void a7xx_get_dbgahb_clusters(struct msm_gpu *gpu,
+ 	const struct gen7_sptp_cluster_registers *dbgahb_clusters;
+ 	unsigned dbgahb_clusters_size;
+ 
+-	if (adreno_is_a730(adreno_gpu)) {
++	if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ 		dbgahb_clusters = gen7_0_0_sptp_clusters;
+ 		dbgahb_clusters_size = ARRAY_SIZE(gen7_0_0_sptp_clusters);
+-	} else {
+-		BUG_ON(!adreno_is_a740_family(adreno_gpu));
++	} else if (adreno_gpu->info->family == ADRENO_7XX_GEN2) {
+ 		dbgahb_clusters = gen7_2_0_sptp_clusters;
+ 		dbgahb_clusters_size = ARRAY_SIZE(gen7_2_0_sptp_clusters);
++	} else {
++		BUG_ON(adreno_gpu->info->family != ADRENO_7XX_GEN3);
++		dbgahb_clusters = gen7_9_0_sptp_clusters;
++		dbgahb_clusters_size = ARRAY_SIZE(gen7_9_0_sptp_clusters);
+ 	}
+ 
+ 	a6xx_state->dbgahb_clusters = state_kcalloc(a6xx_state,
+@@ -818,14 +821,14 @@ static void a7xx_get_clusters(struct msm_gpu *gpu,
+ 	const struct gen7_cluster_registers *clusters;
+ 	unsigned clusters_size;
+ 
+-	if (adreno_is_a730(adreno_gpu)) {
++	if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ 		clusters = gen7_0_0_clusters;
+ 		clusters_size = ARRAY_SIZE(gen7_0_0_clusters);
+-	} else if (adreno_is_a740_family(adreno_gpu)) {
++	} else if (adreno_gpu->info->family == ADRENO_7XX_GEN2) {
+ 		clusters = gen7_2_0_clusters;
+ 		clusters_size = ARRAY_SIZE(gen7_2_0_clusters);
+ 	} else {
+-		BUG_ON(!adreno_is_a750(adreno_gpu));
++		BUG_ON(adreno_gpu->info->family != ADRENO_7XX_GEN3);
+ 		clusters = gen7_9_0_clusters;
+ 		clusters_size = ARRAY_SIZE(gen7_9_0_clusters);
+ 	}
+@@ -893,7 +896,7 @@ static void a7xx_get_shader_block(struct msm_gpu *gpu,
+ 	if (WARN_ON(datasize > A6XX_CD_DATA_SIZE))
+ 		return;
+ 
+-	if (adreno_is_a730(adreno_gpu)) {
++	if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ 		gpu_rmw(gpu, REG_A7XX_SP_DBG_CNTL, GENMASK(1, 0), 3);
+ 	}
+ 
+@@ -923,7 +926,7 @@ static void a7xx_get_shader_block(struct msm_gpu *gpu,
+ 		datasize);
+ 
+ out:
+-	if (adreno_is_a730(adreno_gpu)) {
++	if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ 		gpu_rmw(gpu, REG_A7XX_SP_DBG_CNTL, GENMASK(1, 0), 0);
+ 	}
+ }
+@@ -956,14 +959,14 @@ static void a7xx_get_shaders(struct msm_gpu *gpu,
+ 	unsigned num_shader_blocks;
+ 	int i;
+ 
+-	if (adreno_is_a730(adreno_gpu)) {
++	if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ 		shader_blocks = gen7_0_0_shader_blocks;
+ 		num_shader_blocks = ARRAY_SIZE(gen7_0_0_shader_blocks);
+-	} else if (adreno_is_a740_family(adreno_gpu)) {
++	} else if (adreno_gpu->info->family == ADRENO_7XX_GEN2) {
+ 		shader_blocks = gen7_2_0_shader_blocks;
+ 		num_shader_blocks = ARRAY_SIZE(gen7_2_0_shader_blocks);
+ 	} else {
+-		BUG_ON(!adreno_is_a750(adreno_gpu));
++		BUG_ON(adreno_gpu->info->family != ADRENO_7XX_GEN3);
+ 		shader_blocks = gen7_9_0_shader_blocks;
+ 		num_shader_blocks = ARRAY_SIZE(gen7_9_0_shader_blocks);
+ 	}
+@@ -1348,14 +1351,14 @@ static void a7xx_get_registers(struct msm_gpu *gpu,
+ 	const u32 *pre_crashdumper_regs;
+ 	const struct gen7_reg_list *reglist;
+ 
+-	if (adreno_is_a730(adreno_gpu)) {
++	if (adreno_gpu->info->family == ADRENO_7XX_GEN1) {
+ 		reglist = gen7_0_0_reg_list;
+ 		pre_crashdumper_regs = gen7_0_0_pre_crashdumper_gpu_registers;
+-	} else if (adreno_is_a740_family(adreno_gpu)) {
++	} else if (adreno_gpu->info->family == ADRENO_7XX_GEN2) {
+ 		reglist = gen7_2_0_reg_list;
+ 		pre_crashdumper_regs = gen7_0_0_pre_crashdumper_gpu_registers;
+ 	} else {
+-		BUG_ON(!adreno_is_a750(adreno_gpu));
++		BUG_ON(adreno_gpu->info->family != ADRENO_7XX_GEN3);
+ 		reglist = gen7_9_0_reg_list;
+ 		pre_crashdumper_regs = gen7_9_0_pre_crashdumper_gpu_registers;
+ 	}
+@@ -1405,8 +1408,7 @@ static void a7xx_get_post_crashdumper_registers(struct msm_gpu *gpu,
+ 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ 	const u32 *regs;
+ 
+-	BUG_ON(!(adreno_is_a730(adreno_gpu) || adreno_is_a740_family(adreno_gpu) ||
+-		 adreno_is_a750(adreno_gpu)));
++	BUG_ON(adreno_gpu->info->family > ADRENO_7XX_GEN3);
+ 	regs = gen7_0_0_post_crashdumper_registers;
+ 
+ 	a7xx_get_ahb_gpu_registers(gpu,
+@@ -1514,11 +1516,11 @@ static void a7xx_get_indexed_registers(struct msm_gpu *gpu,
+ 	const struct a6xx_indexed_registers *indexed_regs;
+ 	int i, indexed_count, mempool_count;
+ 
+-	if (adreno_is_a730(adreno_gpu) || adreno_is_a740_family(adreno_gpu)) {
++	if (adreno_gpu->info->family <= ADRENO_7XX_GEN2) {
+ 		indexed_regs = a7xx_indexed_reglist;
+ 		indexed_count = ARRAY_SIZE(a7xx_indexed_reglist);
+ 	} else {
+-		BUG_ON(!adreno_is_a750(adreno_gpu));
++		BUG_ON(adreno_gpu->info->family != ADRENO_7XX_GEN3);
+ 		indexed_regs = gen7_9_0_cp_indexed_reg_list;
+ 		indexed_count = ARRAY_SIZE(gen7_9_0_cp_indexed_reg_list);
+ 	}
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h b/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
+index 260d66eccfecbf..9a327d543f27de 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
++++ b/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
+@@ -1303,7 +1303,7 @@ static struct a6xx_indexed_registers gen7_9_0_cp_indexed_reg_list[] = {
+ 		REG_A6XX_CP_ROQ_DBG_DATA, 0x00800},
+ 	{ "CP_UCODE_DBG_DATA", REG_A6XX_CP_SQE_UCODE_DBG_ADDR,
+ 		REG_A6XX_CP_SQE_UCODE_DBG_DATA, 0x08000},
+-	{ "CP_BV_SQE_STAT_ADDR", REG_A7XX_CP_BV_DRAW_STATE_ADDR,
++	{ "CP_BV_DRAW_STATE_ADDR", REG_A7XX_CP_BV_DRAW_STATE_ADDR,
+ 		REG_A7XX_CP_BV_DRAW_STATE_DATA, 0x00200},
+ 	{ "CP_BV_ROQ_DBG_ADDR", REG_A7XX_CP_BV_ROQ_DBG_ADDR,
+ 		REG_A7XX_CP_BV_ROQ_DBG_DATA, 0x00800},
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index b93ed15f04a30e..d5d9361e11aa53 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -475,7 +475,7 @@ adreno_request_fw(struct adreno_gpu *adreno_gpu, const char *fwname)
+ 		ret = request_firmware_direct(&fw, fwname, drm->dev);
+ 		if (!ret) {
+ 			DRM_DEV_INFO(drm->dev, "loaded %s from legacy location\n",
+-				newname);
++				fwname);
+ 			adreno_gpu->fwloc = FW_LOCATION_LEGACY;
+ 			goto out;
+ 		} else if (adreno_gpu->fwloc != FW_LOCATION_UNKNOWN) {
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_smp.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_smp.c
+index 3a7f7edda96b27..500b7dc895d055 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_smp.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_smp.c
+@@ -351,7 +351,7 @@ void mdp5_smp_dump(struct mdp5_smp *smp, struct drm_printer *p,
+ 
+ 			drm_printf(p, "%s:%d\t%d\t%s\n",
+ 				pipe2name(pipe), j, inuse,
+-				plane ? plane->name : NULL);
++				plane ? plane->name : "(null)");
+ 
+ 			total += inuse;
+ 		}
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 672a7ba52edadd..9dc44ea85b7c62 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -119,7 +119,7 @@ struct msm_dp_desc {
+ };
+ 
+ static const struct msm_dp_desc sc7180_dp_descs[] = {
+-	{ .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0 },
++	{ .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0, .wide_bus_supported = true },
+ 	{}
+ };
+ 
+@@ -130,9 +130,9 @@ static const struct msm_dp_desc sc7280_dp_descs[] = {
+ };
+ 
+ static const struct msm_dp_desc sc8180x_dp_descs[] = {
+-	{ .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0 },
+-	{ .io_start = 0x0ae98000, .id = MSM_DP_CONTROLLER_1 },
+-	{ .io_start = 0x0ae9a000, .id = MSM_DP_CONTROLLER_2 },
++	{ .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0, .wide_bus_supported = true },
++	{ .io_start = 0x0ae98000, .id = MSM_DP_CONTROLLER_1, .wide_bus_supported = true },
++	{ .io_start = 0x0ae9a000, .id = MSM_DP_CONTROLLER_2, .wide_bus_supported = true },
+ 	{}
+ };
+ 
+@@ -149,7 +149,7 @@ static const struct msm_dp_desc sc8280xp_dp_descs[] = {
+ };
+ 
+ static const struct msm_dp_desc sm8650_dp_descs[] = {
+-	{ .io_start = 0x0af54000, .id = MSM_DP_CONTROLLER_0 },
++	{ .io_start = 0x0af54000, .id = MSM_DP_CONTROLLER_0, .wide_bus_supported = true },
+ 	{}
+ };
+ 
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+index 82d015aa2d634c..29aa91238bc47e 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+@@ -135,7 +135,7 @@ static void dsi_pll_calc_dec_frac(struct dsi_pll_7nm *pll, struct dsi_pll_config
+ 			config->pll_clock_inverters = 0x00;
+ 		else
+ 			config->pll_clock_inverters = 0x40;
+-	} else {
++	} else if (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_1) {
+ 		if (pll_freq <= 1000000000ULL)
+ 			config->pll_clock_inverters = 0xa0;
+ 		else if (pll_freq <= 2500000000ULL)
+@@ -144,6 +144,16 @@ static void dsi_pll_calc_dec_frac(struct dsi_pll_7nm *pll, struct dsi_pll_config
+ 			config->pll_clock_inverters = 0x00;
+ 		else
+ 			config->pll_clock_inverters = 0x40;
++	} else {
++		/* 4.2, 4.3 */
++		if (pll_freq <= 1000000000ULL)
++			config->pll_clock_inverters = 0xa0;
++		else if (pll_freq <= 2500000000ULL)
++			config->pll_clock_inverters = 0x20;
++		else if (pll_freq <= 3500000000ULL)
++			config->pll_clock_inverters = 0x00;
++		else
++			config->pll_clock_inverters = 0x40;
+ 	}
+ 
+ 	config->decimal_div_start = dec;
+diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c b/drivers/gpu/drm/radeon/evergreen_cs.c
+index 1fe6e0d883c79b..675a649fa7ab5d 100644
+--- a/drivers/gpu/drm/radeon/evergreen_cs.c
++++ b/drivers/gpu/drm/radeon/evergreen_cs.c
+@@ -395,7 +395,7 @@ static int evergreen_cs_track_validate_cb(struct radeon_cs_parser *p, unsigned i
+ 	struct evergreen_cs_track *track = p->track;
+ 	struct eg_surface surf;
+ 	unsigned pitch, slice, mslice;
+-	unsigned long offset;
++	u64 offset;
+ 	int r;
+ 
+ 	mslice = G_028C6C_SLICE_MAX(track->cb_color_view[id]) + 1;
+@@ -433,14 +433,14 @@ static int evergreen_cs_track_validate_cb(struct radeon_cs_parser *p, unsigned i
+ 		return r;
+ 	}
+ 
+-	offset = track->cb_color_bo_offset[id] << 8;
++	offset = (u64)track->cb_color_bo_offset[id] << 8;
+ 	if (offset & (surf.base_align - 1)) {
+-		dev_warn(p->dev, "%s:%d cb[%d] bo base %ld not aligned with %ld\n",
++		dev_warn(p->dev, "%s:%d cb[%d] bo base %llu not aligned with %ld\n",
+ 			 __func__, __LINE__, id, offset, surf.base_align);
+ 		return -EINVAL;
+ 	}
+ 
+-	offset += surf.layer_size * mslice;
++	offset += (u64)surf.layer_size * mslice;
+ 	if (offset > radeon_bo_size(track->cb_color_bo[id])) {
+ 		/* old ddx are broken they allocate bo with w*h*bpp but
+ 		 * program slice with ALIGN(h, 8), catch this and patch
+@@ -448,14 +448,14 @@ static int evergreen_cs_track_validate_cb(struct radeon_cs_parser *p, unsigned i
+ 		 */
+ 		if (!surf.mode) {
+ 			uint32_t *ib = p->ib.ptr;
+-			unsigned long tmp, nby, bsize, size, min = 0;
++			u64 tmp, nby, bsize, size, min = 0;
+ 
+ 			/* find the height the ddx wants */
+ 			if (surf.nby > 8) {
+ 				min = surf.nby - 8;
+ 			}
+ 			bsize = radeon_bo_size(track->cb_color_bo[id]);
+-			tmp = track->cb_color_bo_offset[id] << 8;
++			tmp = (u64)track->cb_color_bo_offset[id] << 8;
+ 			for (nby = surf.nby; nby > min; nby--) {
+ 				size = nby * surf.nbx * surf.bpe * surf.nsamples;
+ 				if ((tmp + size * mslice) <= bsize) {
+@@ -467,7 +467,7 @@ static int evergreen_cs_track_validate_cb(struct radeon_cs_parser *p, unsigned i
+ 				slice = ((nby * surf.nbx) / 64) - 1;
+ 				if (!evergreen_surface_check(p, &surf, "cb")) {
+ 					/* check if this one works */
+-					tmp += surf.layer_size * mslice;
++					tmp += (u64)surf.layer_size * mslice;
+ 					if (tmp <= bsize) {
+ 						ib[track->cb_color_slice_idx[id]] = slice;
+ 						goto old_ddx_ok;
+@@ -476,9 +476,9 @@ static int evergreen_cs_track_validate_cb(struct radeon_cs_parser *p, unsigned i
+ 			}
+ 		}
+ 		dev_warn(p->dev, "%s:%d cb[%d] bo too small (layer size %d, "
+-			 "offset %d, max layer %d, bo size %ld, slice %d)\n",
++			 "offset %llu, max layer %d, bo size %ld, slice %d)\n",
+ 			 __func__, __LINE__, id, surf.layer_size,
+-			track->cb_color_bo_offset[id] << 8, mslice,
++			(u64)track->cb_color_bo_offset[id] << 8, mslice,
+ 			radeon_bo_size(track->cb_color_bo[id]), slice);
+ 		dev_warn(p->dev, "%s:%d problematic surf: (%d %d) (%d %d %d %d %d %d %d)\n",
+ 			 __func__, __LINE__, surf.nbx, surf.nby,
+@@ -562,7 +562,7 @@ static int evergreen_cs_track_validate_stencil(struct radeon_cs_parser *p)
+ 	struct evergreen_cs_track *track = p->track;
+ 	struct eg_surface surf;
+ 	unsigned pitch, slice, mslice;
+-	unsigned long offset;
++	u64 offset;
+ 	int r;
+ 
+ 	mslice = G_028008_SLICE_MAX(track->db_depth_view) + 1;
+@@ -608,18 +608,18 @@ static int evergreen_cs_track_validate_stencil(struct radeon_cs_parser *p)
+ 		return r;
+ 	}
+ 
+-	offset = track->db_s_read_offset << 8;
++	offset = (u64)track->db_s_read_offset << 8;
+ 	if (offset & (surf.base_align - 1)) {
+-		dev_warn(p->dev, "%s:%d stencil read bo base %ld not aligned with %ld\n",
++		dev_warn(p->dev, "%s:%d stencil read bo base %llu not aligned with %ld\n",
+ 			 __func__, __LINE__, offset, surf.base_align);
+ 		return -EINVAL;
+ 	}
+-	offset += surf.layer_size * mslice;
++	offset += (u64)surf.layer_size * mslice;
+ 	if (offset > radeon_bo_size(track->db_s_read_bo)) {
+ 		dev_warn(p->dev, "%s:%d stencil read bo too small (layer size %d, "
+-			 "offset %ld, max layer %d, bo size %ld)\n",
++			 "offset %llu, max layer %d, bo size %ld)\n",
+ 			 __func__, __LINE__, surf.layer_size,
+-			(unsigned long)track->db_s_read_offset << 8, mslice,
++			(u64)track->db_s_read_offset << 8, mslice,
+ 			radeon_bo_size(track->db_s_read_bo));
+ 		dev_warn(p->dev, "%s:%d stencil invalid (0x%08x 0x%08x 0x%08x 0x%08x)\n",
+ 			 __func__, __LINE__, track->db_depth_size,
+@@ -627,18 +627,18 @@ static int evergreen_cs_track_validate_stencil(struct radeon_cs_parser *p)
+ 		return -EINVAL;
+ 	}
+ 
+-	offset = track->db_s_write_offset << 8;
++	offset = (u64)track->db_s_write_offset << 8;
+ 	if (offset & (surf.base_align - 1)) {
+-		dev_warn(p->dev, "%s:%d stencil write bo base %ld not aligned with %ld\n",
++		dev_warn(p->dev, "%s:%d stencil write bo base %llu not aligned with %ld\n",
+ 			 __func__, __LINE__, offset, surf.base_align);
+ 		return -EINVAL;
+ 	}
+-	offset += surf.layer_size * mslice;
++	offset += (u64)surf.layer_size * mslice;
+ 	if (offset > radeon_bo_size(track->db_s_write_bo)) {
+ 		dev_warn(p->dev, "%s:%d stencil write bo too small (layer size %d, "
+-			 "offset %ld, max layer %d, bo size %ld)\n",
++			 "offset %llu, max layer %d, bo size %ld)\n",
+ 			 __func__, __LINE__, surf.layer_size,
+-			(unsigned long)track->db_s_write_offset << 8, mslice,
++			(u64)track->db_s_write_offset << 8, mslice,
+ 			radeon_bo_size(track->db_s_write_bo));
+ 		return -EINVAL;
+ 	}
+@@ -659,7 +659,7 @@ static int evergreen_cs_track_validate_depth(struct radeon_cs_parser *p)
+ 	struct evergreen_cs_track *track = p->track;
+ 	struct eg_surface surf;
+ 	unsigned pitch, slice, mslice;
+-	unsigned long offset;
++	u64 offset;
+ 	int r;
+ 
+ 	mslice = G_028008_SLICE_MAX(track->db_depth_view) + 1;
+@@ -706,34 +706,34 @@ static int evergreen_cs_track_validate_depth(struct radeon_cs_parser *p)
+ 		return r;
+ 	}
+ 
+-	offset = track->db_z_read_offset << 8;
++	offset = (u64)track->db_z_read_offset << 8;
+ 	if (offset & (surf.base_align - 1)) {
+-		dev_warn(p->dev, "%s:%d stencil read bo base %ld not aligned with %ld\n",
++		dev_warn(p->dev, "%s:%d stencil read bo base %llu not aligned with %ld\n",
+ 			 __func__, __LINE__, offset, surf.base_align);
+ 		return -EINVAL;
+ 	}
+-	offset += surf.layer_size * mslice;
++	offset += (u64)surf.layer_size * mslice;
+ 	if (offset > radeon_bo_size(track->db_z_read_bo)) {
+ 		dev_warn(p->dev, "%s:%d depth read bo too small (layer size %d, "
+-			 "offset %ld, max layer %d, bo size %ld)\n",
++			 "offset %llu, max layer %d, bo size %ld)\n",
+ 			 __func__, __LINE__, surf.layer_size,
+-			(unsigned long)track->db_z_read_offset << 8, mslice,
++			(u64)track->db_z_read_offset << 8, mslice,
+ 			radeon_bo_size(track->db_z_read_bo));
+ 		return -EINVAL;
+ 	}
+ 
+-	offset = track->db_z_write_offset << 8;
++	offset = (u64)track->db_z_write_offset << 8;
+ 	if (offset & (surf.base_align - 1)) {
+-		dev_warn(p->dev, "%s:%d stencil write bo base %ld not aligned with %ld\n",
++		dev_warn(p->dev, "%s:%d stencil write bo base %llu not aligned with %ld\n",
+ 			 __func__, __LINE__, offset, surf.base_align);
+ 		return -EINVAL;
+ 	}
+-	offset += surf.layer_size * mslice;
++	offset += (u64)surf.layer_size * mslice;
+ 	if (offset > radeon_bo_size(track->db_z_write_bo)) {
+ 		dev_warn(p->dev, "%s:%d depth write bo too small (layer size %d, "
+-			 "offset %ld, max layer %d, bo size %ld)\n",
++			 "offset %llu, max layer %d, bo size %ld)\n",
+ 			 __func__, __LINE__, surf.layer_size,
+-			(unsigned long)track->db_z_write_offset << 8, mslice,
++			(u64)track->db_z_write_offset << 8, mslice,
+ 			radeon_bo_size(track->db_z_write_bo));
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c
+index 10793a433bf586..d698a899eaf4cf 100644
+--- a/drivers/gpu/drm/radeon/radeon_atombios.c
++++ b/drivers/gpu/drm/radeon/radeon_atombios.c
+@@ -1717,26 +1717,29 @@ struct radeon_encoder_atom_dig *radeon_atombios_get_lvds_info(struct
+ 					fake_edid_record = (ATOM_FAKE_EDID_PATCH_RECORD *)record;
+ 					if (fake_edid_record->ucFakeEDIDLength) {
+ 						struct edid *edid;
+-						int edid_size =
+-							max((int)EDID_LENGTH, (int)fake_edid_record->ucFakeEDIDLength);
+-						edid = kmalloc(edid_size, GFP_KERNEL);
++						int edid_size;
++
++						if (fake_edid_record->ucFakeEDIDLength == 128)
++							edid_size = fake_edid_record->ucFakeEDIDLength;
++						else
++							edid_size = fake_edid_record->ucFakeEDIDLength * 128;
++						edid = kmemdup(&fake_edid_record->ucFakeEDIDString[0],
++							       edid_size, GFP_KERNEL);
+ 						if (edid) {
+-							memcpy((u8 *)edid, (u8 *)&fake_edid_record->ucFakeEDIDString[0],
+-							       fake_edid_record->ucFakeEDIDLength);
+-
+ 							if (drm_edid_is_valid(edid)) {
+ 								rdev->mode_info.bios_hardcoded_edid = edid;
+ 								rdev->mode_info.bios_hardcoded_edid_size = edid_size;
+-							} else
++							} else {
+ 								kfree(edid);
++							}
+ 						}
++						record += struct_size(fake_edid_record,
++								      ucFakeEDIDString,
++								      edid_size);
++					} else {
++						/* empty fake edid record must be 3 bytes long */
++						record += sizeof(ATOM_FAKE_EDID_PATCH_RECORD) + 1;
+ 					}
+-					record += fake_edid_record->ucFakeEDIDLength ?
+-						  struct_size(fake_edid_record,
+-							      ucFakeEDIDString,
+-							      fake_edid_record->ucFakeEDIDLength) :
+-						  /* empty fake edid record must be 3 bytes long */
+-						  sizeof(ATOM_FAKE_EDID_PATCH_RECORD) + 1;
+ 					break;
+ 				case LCD_PANEL_RESOLUTION_RECORD_TYPE:
+ 					panel_res_record = (ATOM_PANEL_RESOLUTION_PATCH_RECORD *)record;
+diff --git a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+index fe33092abbe7d7..aae48e906af11b 100644
+--- a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+@@ -434,6 +434,8 @@ static void dw_hdmi_rk3328_setup_hpd(struct dw_hdmi *dw_hdmi, void *data)
+ 		HIWORD_UPDATE(RK3328_HDMI_SDAIN_MSK | RK3328_HDMI_SCLIN_MSK,
+ 			      RK3328_HDMI_SDAIN_MSK | RK3328_HDMI_SCLIN_MSK |
+ 			      RK3328_HDMI_HPD_IOE));
++
++	dw_hdmi_rk3328_read_hpd(dw_hdmi, data);
+ }
+ 
+ static const struct dw_hdmi_phy_ops rk3228_hdmi_phy_ops = {
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index a13473b2d54c40..4a9c6ea7f15dc3 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -396,8 +396,8 @@ static void scl_vop_cal_scl_fac(struct vop *vop, const struct vop_win_data *win,
+ 	if (info->is_yuv)
+ 		is_yuv = true;
+ 
+-	if (dst_w > 3840) {
+-		DRM_DEV_ERROR(vop->dev, "Maximum dst width (3840) exceeded\n");
++	if (dst_w > 4096) {
++		DRM_DEV_ERROR(vop->dev, "Maximum dst width (4096) exceeded\n");
+ 		return;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/stm/drv.c b/drivers/gpu/drm/stm/drv.c
+index e8523abef27a50..4d2db079ad4ff3 100644
+--- a/drivers/gpu/drm/stm/drv.c
++++ b/drivers/gpu/drm/stm/drv.c
+@@ -203,12 +203,14 @@ static int stm_drm_platform_probe(struct platform_device *pdev)
+ 
+ 	ret = drm_dev_register(ddev, 0);
+ 	if (ret)
+-		goto err_put;
++		goto err_unload;
+ 
+ 	drm_fbdev_dma_setup(ddev, 16);
+ 
+ 	return 0;
+ 
++err_unload:
++	drv_unload(ddev);
+ err_put:
+ 	drm_dev_put(ddev);
+ 
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index 5576fdae496233..5aec1e58c968c2 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -1580,6 +1580,8 @@ static struct drm_plane *ltdc_plane_create(struct drm_device *ddev,
+ 			       ARRAY_SIZE(ltdc_drm_fmt_ycbcr_sp) +
+ 			       ARRAY_SIZE(ltdc_drm_fmt_ycbcr_fp)) *
+ 			       sizeof(*formats), GFP_KERNEL);
++	if (!formats)
++		return NULL;
+ 
+ 	for (i = 0; i < ldev->caps.pix_fmt_nb; i++) {
+ 		drm_fmt = ldev->caps.pix_fmt_drm[i];
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index d30f8e8e896717..c75bd5af2cefd5 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -462,6 +462,7 @@ static int vc4_hdmi_connector_detect_ctx(struct drm_connector *connector,
+ {
+ 	struct vc4_hdmi *vc4_hdmi = connector_to_vc4_hdmi(connector);
+ 	enum drm_connector_status status = connector_status_disconnected;
++	int ret;
+ 
+ 	/*
+ 	 * NOTE: This function should really take vc4_hdmi->mutex, but
+@@ -474,7 +475,12 @@ static int vc4_hdmi_connector_detect_ctx(struct drm_connector *connector,
+ 	 * the lock for now.
+ 	 */
+ 
+-	WARN_ON(pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev));
++	ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
++	if (ret) {
++		drm_err_once(connector->dev, "Failed to retain HDMI power domain: %d\n",
++			     ret);
++		return connector_status_unknown;
++	}
+ 
+ 	if (vc4_hdmi->hpd_gpio) {
+ 		if (gpiod_get_value_cansleep(vc4_hdmi->hpd_gpio))
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index d84740be96426a..7660f62e6c1f23 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2368,6 +2368,9 @@ static void wacom_wac_pen_usage_mapping(struct hid_device *hdev,
+ 		wacom_map_usage(input, usage, field, EV_KEY, BTN_STYLUS3, 0);
+ 		features->quirks &= ~WACOM_QUIRK_PEN_BUTTON3;
+ 		break;
++	case WACOM_HID_WD_SEQUENCENUMBER:
++		wacom_wac->hid_data.sequence_number = -1;
++		break;
+ 	}
+ }
+ 
+@@ -2492,9 +2495,15 @@ static void wacom_wac_pen_event(struct hid_device *hdev, struct hid_field *field
+ 		wacom_wac->hid_data.barrelswitch3 = value;
+ 		return;
+ 	case WACOM_HID_WD_SEQUENCENUMBER:
+-		if (wacom_wac->hid_data.sequence_number != value)
+-			hid_warn(hdev, "Dropped %hu packets", (unsigned short)(value - wacom_wac->hid_data.sequence_number));
++		if (wacom_wac->hid_data.sequence_number != value &&
++		    wacom_wac->hid_data.sequence_number >= 0) {
++			int sequence_size = field->logical_maximum - field->logical_minimum + 1;
++			int drop_count = (value - wacom_wac->hid_data.sequence_number) % sequence_size;
++			hid_warn(hdev, "Dropped %d packets", drop_count);
++		}
+ 		wacom_wac->hid_data.sequence_number = value + 1;
++		if (wacom_wac->hid_data.sequence_number > field->logical_maximum)
++			wacom_wac->hid_data.sequence_number = field->logical_minimum;
+ 		return;
+ 	}
+ 
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index 6ec499841f7095..e6443740b462fd 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -324,7 +324,7 @@ struct hid_data {
+ 	int bat_connected;
+ 	int ps_connected;
+ 	bool pad_input_event_flag;
+-	unsigned short sequence_number;
++	int sequence_number;
+ 	ktime_t time_delayed;
+ };
+ 
+diff --git a/drivers/hwmon/max16065.c b/drivers/hwmon/max16065.c
+index aa38c45adc09e2..0ccb5eb596fc40 100644
+--- a/drivers/hwmon/max16065.c
++++ b/drivers/hwmon/max16065.c
+@@ -79,7 +79,7 @@ static const bool max16065_have_current[] = {
+ };
+ 
+ struct max16065_data {
+-	enum chips type;
++	enum chips chip;
+ 	struct i2c_client *client;
+ 	const struct attribute_group *groups[4];
+ 	struct mutex update_lock;
+@@ -114,9 +114,10 @@ static inline int LIMIT_TO_MV(int limit, int range)
+ 	return limit * range / 256;
+ }
+ 
+-static inline int MV_TO_LIMIT(int mv, int range)
++static inline int MV_TO_LIMIT(unsigned long mv, int range)
+ {
+-	return clamp_val(DIV_ROUND_CLOSEST(mv * 256, range), 0, 255);
++	mv = clamp_val(mv, 0, ULONG_MAX / 256);
++	return DIV_ROUND_CLOSEST(clamp_val(mv * 256, 0, range * 255), range);
+ }
+ 
+ static inline int ADC_TO_CURR(int adc, int gain)
+@@ -161,10 +162,17 @@ static struct max16065_data *max16065_update_device(struct device *dev)
+ 						     MAX16065_CURR_SENSE);
+ 		}
+ 
+-		for (i = 0; i < DIV_ROUND_UP(data->num_adc, 8); i++)
++		for (i = 0; i < 2; i++)
+ 			data->fault[i]
+ 			  = i2c_smbus_read_byte_data(client, MAX16065_FAULT(i));
+ 
++		/*
++		 * MAX16067 and MAX16068 have separate undervoltage and
++		 * overvoltage alarm bits. Squash them together.
++		 */
++		if (data->chip == max16067 || data->chip == max16068)
++			data->fault[0] |= data->fault[1];
++
+ 		data->last_updated = jiffies;
+ 		data->valid = true;
+ 	}
+@@ -493,8 +501,6 @@ static const struct attribute_group max16065_max_group = {
+ 	.is_visible = max16065_secondary_is_visible,
+ };
+ 
+-static const struct i2c_device_id max16065_id[];
+-
+ static int max16065_probe(struct i2c_client *client)
+ {
+ 	struct i2c_adapter *adapter = client->adapter;
+@@ -505,7 +511,7 @@ static int max16065_probe(struct i2c_client *client)
+ 	bool have_secondary;		/* true if chip has secondary limits */
+ 	bool secondary_is_max = false;	/* secondary limits reflect max */
+ 	int groups = 0;
+-	const struct i2c_device_id *id = i2c_match_id(max16065_id, client);
++	enum chips chip = (uintptr_t)i2c_get_match_data(client);
+ 
+ 	if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA
+ 				     | I2C_FUNC_SMBUS_READ_WORD_DATA))
+@@ -515,12 +521,13 @@ static int max16065_probe(struct i2c_client *client)
+ 	if (unlikely(!data))
+ 		return -ENOMEM;
+ 
++	data->chip = chip;
+ 	data->client = client;
+ 	mutex_init(&data->update_lock);
+ 
+-	data->num_adc = max16065_num_adc[id->driver_data];
+-	data->have_current = max16065_have_current[id->driver_data];
+-	have_secondary = max16065_have_secondary[id->driver_data];
++	data->num_adc = max16065_num_adc[chip];
++	data->have_current = max16065_have_current[chip];
++	have_secondary = max16065_have_secondary[chip];
+ 
+ 	if (have_secondary) {
+ 		val = i2c_smbus_read_byte_data(client, MAX16065_SW_ENABLE);
+diff --git a/drivers/hwmon/ntc_thermistor.c b/drivers/hwmon/ntc_thermistor.c
+index ef75b63f5894e5..b5352900463fb9 100644
+--- a/drivers/hwmon/ntc_thermistor.c
++++ b/drivers/hwmon/ntc_thermistor.c
+@@ -62,6 +62,7 @@ static const struct platform_device_id ntc_thermistor_id[] = {
+ 	[NTC_SSG1404001221]   = { "ssg1404_001221",  TYPE_NCPXXWB473 },
+ 	[NTC_LAST]            = { },
+ };
++MODULE_DEVICE_TABLE(platform, ntc_thermistor_id);
+ 
+ /*
+  * A compensation table should be sorted by the values of .ohm
+diff --git a/drivers/hwtracing/coresight/coresight-dummy.c b/drivers/hwtracing/coresight/coresight-dummy.c
+index ac70c0b491bebd..dab389a5507c11 100644
+--- a/drivers/hwtracing/coresight/coresight-dummy.c
++++ b/drivers/hwtracing/coresight/coresight-dummy.c
+@@ -23,6 +23,9 @@ DEFINE_CORESIGHT_DEVLIST(sink_devs, "dummy_sink");
+ static int dummy_source_enable(struct coresight_device *csdev,
+ 			       struct perf_event *event, enum cs_mode mode)
+ {
++	if (!coresight_take_mode(csdev, mode))
++		return -EBUSY;
++
+ 	dev_dbg(csdev->dev.parent, "Dummy source enabled\n");
+ 
+ 	return 0;
+@@ -31,6 +34,7 @@ static int dummy_source_enable(struct coresight_device *csdev,
+ static void dummy_source_disable(struct coresight_device *csdev,
+ 				 struct perf_event *event)
+ {
++	coresight_set_mode(csdev, CS_MODE_DISABLED);
+ 	dev_dbg(csdev->dev.parent, "Dummy source disabled\n");
+ }
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+index e75428fa1592a7..610ad51cda656b 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etr.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+@@ -261,6 +261,7 @@ void tmc_free_sg_table(struct tmc_sg_table *sg_table)
+ {
+ 	tmc_free_table_pages(sg_table);
+ 	tmc_free_data_pages(sg_table);
++	kfree(sg_table);
+ }
+ EXPORT_SYMBOL_GPL(tmc_free_sg_table);
+ 
+@@ -342,7 +343,6 @@ struct tmc_sg_table *tmc_alloc_sg_table(struct device *dev,
+ 		rc = tmc_alloc_table_pages(sg_table);
+ 	if (rc) {
+ 		tmc_free_sg_table(sg_table);
+-		kfree(sg_table);
+ 		return ERR_PTR(rc);
+ 	}
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-tpdm.c b/drivers/hwtracing/coresight/coresight-tpdm.c
+index 0726f8842552c6..5c5a4b3fe6871c 100644
+--- a/drivers/hwtracing/coresight/coresight-tpdm.c
++++ b/drivers/hwtracing/coresight/coresight-tpdm.c
+@@ -449,6 +449,11 @@ static int tpdm_enable(struct coresight_device *csdev, struct perf_event *event,
+ 		return -EBUSY;
+ 	}
+ 
++	if (!coresight_take_mode(csdev, mode)) {
++		spin_unlock(&drvdata->spinlock);
++		return -EBUSY;
++	}
++
+ 	__tpdm_enable(drvdata);
+ 	drvdata->enable = true;
+ 	spin_unlock(&drvdata->spinlock);
+@@ -506,6 +511,7 @@ static void tpdm_disable(struct coresight_device *csdev,
+ 	}
+ 
+ 	__tpdm_disable(drvdata);
++	coresight_set_mode(csdev, CS_MODE_DISABLED);
+ 	drvdata->enable = false;
+ 	spin_unlock(&drvdata->spinlock);
+ 
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index ce8c4846b7fae4..2a03a221e2dd57 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -170,6 +170,13 @@ struct aspeed_i2c_bus {
+ 
+ static int aspeed_i2c_reset(struct aspeed_i2c_bus *bus);
+ 
++/* precondition: bus.lock has been acquired. */
++static void aspeed_i2c_do_stop(struct aspeed_i2c_bus *bus)
++{
++	bus->master_state = ASPEED_I2C_MASTER_STOP;
++	writel(ASPEED_I2CD_M_STOP_CMD, bus->base + ASPEED_I2C_CMD_REG);
++}
++
+ static int aspeed_i2c_recover_bus(struct aspeed_i2c_bus *bus)
+ {
+ 	unsigned long time_left, flags;
+@@ -187,7 +194,7 @@ static int aspeed_i2c_recover_bus(struct aspeed_i2c_bus *bus)
+ 			command);
+ 
+ 		reinit_completion(&bus->cmd_complete);
+-		writel(ASPEED_I2CD_M_STOP_CMD, bus->base + ASPEED_I2C_CMD_REG);
++		aspeed_i2c_do_stop(bus);
+ 		spin_unlock_irqrestore(&bus->lock, flags);
+ 
+ 		time_left = wait_for_completion_timeout(
+@@ -390,13 +397,6 @@ static void aspeed_i2c_do_start(struct aspeed_i2c_bus *bus)
+ 	writel(command, bus->base + ASPEED_I2C_CMD_REG);
+ }
+ 
+-/* precondition: bus.lock has been acquired. */
+-static void aspeed_i2c_do_stop(struct aspeed_i2c_bus *bus)
+-{
+-	bus->master_state = ASPEED_I2C_MASTER_STOP;
+-	writel(ASPEED_I2CD_M_STOP_CMD, bus->base + ASPEED_I2C_CMD_REG);
+-}
+-
+ /* precondition: bus.lock has been acquired. */
+ static void aspeed_i2c_next_msg_or_stop(struct aspeed_i2c_bus *bus)
+ {
+diff --git a/drivers/i2c/busses/i2c-isch.c b/drivers/i2c/busses/i2c-isch.c
+index 416a9968ed2870..8b225412be5ba4 100644
+--- a/drivers/i2c/busses/i2c-isch.c
++++ b/drivers/i2c/busses/i2c-isch.c
+@@ -99,8 +99,7 @@ static int sch_transaction(void)
+ 	if (retries > MAX_RETRIES) {
+ 		dev_err(&sch_adapter.dev, "SMBus Timeout!\n");
+ 		result = -ETIMEDOUT;
+-	}
+-	if (temp & 0x04) {
++	} else if (temp & 0x04) {
+ 		result = -EIO;
+ 		dev_dbg(&sch_adapter.dev, "Bus collision! SMBus may be "
+ 			"locked until next hard reset. (sorry!)\n");
+diff --git a/drivers/iio/adc/ad7606.c b/drivers/iio/adc/ad7606.c
+index 1c08c0921ee712..4d755ffc3f4148 100644
+--- a/drivers/iio/adc/ad7606.c
++++ b/drivers/iio/adc/ad7606.c
+@@ -215,9 +215,9 @@ static int ad7606_write_os_hw(struct iio_dev *indio_dev, int val)
+ 	struct ad7606_state *st = iio_priv(indio_dev);
+ 	DECLARE_BITMAP(values, 3);
+ 
+-	values[0] = val;
++	values[0] = val & GENMASK(2, 0);
+ 
+-	gpiod_set_array_value(ARRAY_SIZE(values), st->gpio_os->desc,
++	gpiod_set_array_value(st->gpio_os->ndescs, st->gpio_os->desc,
+ 			      st->gpio_os->info, values);
+ 
+ 	/* AD7616 requires a reset to update value */
+@@ -422,7 +422,7 @@ static int ad7606_request_gpios(struct ad7606_state *st)
+ 		return PTR_ERR(st->gpio_range);
+ 
+ 	st->gpio_standby = devm_gpiod_get_optional(dev, "standby",
+-						   GPIOD_OUT_HIGH);
++						   GPIOD_OUT_LOW);
+ 	if (IS_ERR(st->gpio_standby))
+ 		return PTR_ERR(st->gpio_standby);
+ 
+@@ -665,7 +665,7 @@ static int ad7606_suspend(struct device *dev)
+ 
+ 	if (st->gpio_standby) {
+ 		gpiod_set_value(st->gpio_range, 1);
+-		gpiod_set_value(st->gpio_standby, 0);
++		gpiod_set_value(st->gpio_standby, 1);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/iio/adc/ad7606_spi.c b/drivers/iio/adc/ad7606_spi.c
+index 263a778bcf2539..287a0591533b6a 100644
+--- a/drivers/iio/adc/ad7606_spi.c
++++ b/drivers/iio/adc/ad7606_spi.c
+@@ -249,8 +249,9 @@ static int ad7616_sw_mode_config(struct iio_dev *indio_dev)
+ static int ad7606B_sw_mode_config(struct iio_dev *indio_dev)
+ {
+ 	struct ad7606_state *st = iio_priv(indio_dev);
+-	unsigned long os[3] = {1};
++	DECLARE_BITMAP(os, 3);
+ 
++	bitmap_fill(os, 3);
+ 	/*
+ 	 * Software mode is enabled when all three oversampling
+ 	 * pins are set to high. If oversampling gpios are defined
+@@ -258,7 +259,7 @@ static int ad7606B_sw_mode_config(struct iio_dev *indio_dev)
+ 	 * otherwise, they must be hardwired to VDD
+ 	 */
+ 	if (st->gpio_os) {
+-		gpiod_set_array_value(ARRAY_SIZE(os),
++		gpiod_set_array_value(st->gpio_os->ndescs,
+ 				      st->gpio_os->desc, st->gpio_os->info, os);
+ 	}
+ 	/* OS of 128 and 256 are available only in software mode */
+diff --git a/drivers/iio/chemical/bme680_core.c b/drivers/iio/chemical/bme680_core.c
+index 500f56834b01f6..a6bf689833dad7 100644
+--- a/drivers/iio/chemical/bme680_core.c
++++ b/drivers/iio/chemical/bme680_core.c
+@@ -10,6 +10,7 @@
+  */
+ #include <linux/acpi.h>
+ #include <linux/bitfield.h>
++#include <linux/cleanup.h>
+ #include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/module.h>
+@@ -52,6 +53,7 @@ struct bme680_calib {
+ struct bme680_data {
+ 	struct regmap *regmap;
+ 	struct bme680_calib bme680;
++	struct mutex lock; /* Protect multiple serial R/W ops to device. */
+ 	u8 oversampling_temp;
+ 	u8 oversampling_press;
+ 	u8 oversampling_humid;
+@@ -827,6 +829,8 @@ static int bme680_read_raw(struct iio_dev *indio_dev,
+ {
+ 	struct bme680_data *data = iio_priv(indio_dev);
+ 
++	guard(mutex)(&data->lock);
++
+ 	switch (mask) {
+ 	case IIO_CHAN_INFO_PROCESSED:
+ 		switch (chan->type) {
+@@ -871,6 +875,8 @@ static int bme680_write_raw(struct iio_dev *indio_dev,
+ {
+ 	struct bme680_data *data = iio_priv(indio_dev);
+ 
++	guard(mutex)(&data->lock);
++
+ 	if (val2 != 0)
+ 		return -EINVAL;
+ 
+@@ -967,6 +973,7 @@ int bme680_core_probe(struct device *dev, struct regmap *regmap,
+ 		name = bme680_match_acpi_device(dev);
+ 
+ 	data = iio_priv(indio_dev);
++	mutex_init(&data->lock);
+ 	dev_set_drvdata(dev, indio_dev);
+ 	data->regmap = regmap;
+ 	indio_dev->name = name;
+diff --git a/drivers/iio/magnetometer/ak8975.c b/drivers/iio/magnetometer/ak8975.c
+index dd466c5fa6214f..ccbebe5b66cde2 100644
+--- a/drivers/iio/magnetometer/ak8975.c
++++ b/drivers/iio/magnetometer/ak8975.c
+@@ -1081,7 +1081,6 @@ static const struct of_device_id ak8975_of_match[] = {
+ 	{ .compatible = "asahi-kasei,ak09912", .data = &ak_def_array[AK09912] },
+ 	{ .compatible = "ak09912", .data = &ak_def_array[AK09912] },
+ 	{ .compatible = "asahi-kasei,ak09916", .data = &ak_def_array[AK09916] },
+-	{ .compatible = "ak09916", .data = &ak_def_array[AK09916] },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(of, ak8975_of_match);
+diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
+index 6791df64a5fe05..b7c078b7f7cfd4 100644
+--- a/drivers/infiniband/core/cache.c
++++ b/drivers/infiniband/core/cache.c
+@@ -1640,8 +1640,10 @@ int ib_cache_setup_one(struct ib_device *device)
+ 
+ 	rdma_for_each_port (device, p) {
+ 		err = ib_cache_update(device, p, true, true, true);
+-		if (err)
++		if (err) {
++			gid_table_cleanup_one(device);
+ 			return err;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c
+index bf3265e6786517..7d85fe41004240 100644
+--- a/drivers/infiniband/core/iwcm.c
++++ b/drivers/infiniband/core/iwcm.c
+@@ -1190,7 +1190,7 @@ static int __init iw_cm_init(void)
+ 	if (ret)
+ 		return ret;
+ 
+-	iwcm_wq = alloc_ordered_workqueue("iw_cm_wq", 0);
++	iwcm_wq = alloc_ordered_workqueue("iw_cm_wq", WQ_MEM_RECLAIM);
+ 	if (!iwcm_wq)
+ 		goto err_alloc;
+ 
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index 040ba2224f9ff6..b3757c6a0457a1 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -1222,6 +1222,8 @@ static int act_establish(struct c4iw_dev *dev, struct sk_buff *skb)
+ 	int ret;
+ 
+ 	ep = lookup_atid(t, atid);
++	if (!ep)
++		return -EINVAL;
+ 
+ 	pr_debug("ep %p tid %u snd_isn %u rcv_isn %u\n", ep, tid,
+ 		 be32_to_cpu(req->snd_isn), be32_to_cpu(req->rcv_isn));
+@@ -2279,6 +2281,9 @@ static int act_open_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
+ 	int ret = 0;
+ 
+ 	ep = lookup_atid(t, atid);
++	if (!ep)
++		return -EINVAL;
++
+ 	la = (struct sockaddr_in *)&ep->com.local_addr;
+ 	ra = (struct sockaddr_in *)&ep->com.remote_addr;
+ 	la6 = (struct sockaddr_in6 *)&ep->com.local_addr;
+diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
+index 40c9b6e46b82b3..3e3a3e1c241e79 100644
+--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
++++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
+@@ -1544,11 +1544,31 @@ int erdma_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask,
+ 	return ret;
+ }
+ 
++static enum ib_qp_state query_qp_state(struct erdma_qp *qp)
++{
++	switch (qp->attrs.state) {
++	case ERDMA_QP_STATE_IDLE:
++		return IB_QPS_INIT;
++	case ERDMA_QP_STATE_RTR:
++		return IB_QPS_RTR;
++	case ERDMA_QP_STATE_RTS:
++		return IB_QPS_RTS;
++	case ERDMA_QP_STATE_CLOSING:
++		return IB_QPS_ERR;
++	case ERDMA_QP_STATE_TERMINATE:
++		return IB_QPS_ERR;
++	case ERDMA_QP_STATE_ERROR:
++		return IB_QPS_ERR;
++	default:
++		return IB_QPS_ERR;
++	}
++}
++
+ int erdma_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ 		   int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
+ {
+-	struct erdma_qp *qp;
+ 	struct erdma_dev *dev;
++	struct erdma_qp *qp;
+ 
+ 	if (ibqp && qp_attr && qp_init_attr) {
+ 		qp = to_eqp(ibqp);
+@@ -1575,6 +1595,9 @@ int erdma_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ 
+ 	qp_init_attr->cap = qp_attr->cap;
+ 
++	qp_attr->qp_state = query_qp_state(qp);
++	qp_attr->cur_qp_state = query_qp_state(qp);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c
+index 3e02c474f59fec..4fc5b9d5fea87e 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_ah.c
++++ b/drivers/infiniband/hw/hns/hns_roce_ah.c
+@@ -64,8 +64,10 @@ int hns_roce_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
+ 	u8 tc_mode = 0;
+ 	int ret;
+ 
+-	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08 && udata)
+-		return -EOPNOTSUPP;
++	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08 && udata) {
++		ret = -EOPNOTSUPP;
++		goto err_out;
++	}
+ 
+ 	ah->av.port = rdma_ah_get_port_num(ah_attr);
+ 	ah->av.gid_index = grh->sgid_index;
+@@ -83,7 +85,7 @@ int hns_roce_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
+ 		ret = 0;
+ 
+ 	if (ret && grh->sgid_attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP)
+-		return ret;
++		goto err_out;
+ 
+ 	if (tc_mode == HNAE3_TC_MAP_MODE_DSCP &&
+ 	    grh->sgid_attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP)
+@@ -91,8 +93,10 @@ int hns_roce_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
+ 	else
+ 		ah->av.sl = rdma_ah_get_sl(ah_attr);
+ 
+-	if (!check_sl_valid(hr_dev, ah->av.sl))
+-		return -EINVAL;
++	if (!check_sl_valid(hr_dev, ah->av.sl)) {
++		ret = -EINVAL;
++		goto err_out;
++	}
+ 
+ 	memcpy(ah->av.dgid, grh->dgid.raw, HNS_ROCE_GID_SIZE);
+ 	memcpy(ah->av.mac, ah_attr->roce.dmac, ETH_ALEN);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index 02baa853a76c9b..c7c167e2a04513 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -1041,9 +1041,9 @@ static bool hem_list_is_bottom_bt(int hopnum, int bt_level)
+  * @bt_level: base address table level
+  * @unit: ba entries per bt page
+  */
+-static u32 hem_list_calc_ba_range(int hopnum, int bt_level, int unit)
++static u64 hem_list_calc_ba_range(int hopnum, int bt_level, int unit)
+ {
+-	u32 step;
++	u64 step;
+ 	int max;
+ 	int i;
+ 
+@@ -1079,7 +1079,7 @@ int hns_roce_hem_list_calc_root_ba(const struct hns_roce_buf_region *regions,
+ {
+ 	struct hns_roce_buf_region *r;
+ 	int total = 0;
+-	int step;
++	u64 step;
+ 	int i;
+ 
+ 	for (i = 0; i < region_cnt; i++) {
+@@ -1110,7 +1110,7 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
+ 	int ret = 0;
+ 	int max_ofs;
+ 	int level;
+-	u32 step;
++	u64 step;
+ 	int end;
+ 
+ 	if (hopnum <= 1)
+@@ -1134,10 +1134,12 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
+ 
+ 	/* config L1 bt to last bt and link them to corresponding parent */
+ 	for (level = 1; level < hopnum; level++) {
+-		cur = hem_list_search_item(&mid_bt[level], offset);
+-		if (cur) {
+-			hem_ptrs[level] = cur;
+-			continue;
++		if (!hem_list_is_bottom_bt(hopnum, level)) {
++			cur = hem_list_search_item(&mid_bt[level], offset);
++			if (cur) {
++				hem_ptrs[level] = cur;
++				continue;
++			}
+ 		}
+ 
+ 		step = hem_list_calc_ba_range(hopnum, level, unit);
+@@ -1147,7 +1149,7 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
+ 		}
+ 
+ 		start_aligned = (distance / step) * step + r->offset;
+-		end = min_t(int, start_aligned + step - 1, max_ofs);
++		end = min_t(u64, start_aligned + step - 1, max_ofs);
+ 		cur = hem_list_alloc_item(hr_dev, start_aligned, end, unit,
+ 					  true);
+ 		if (!cur) {
+@@ -1235,7 +1237,7 @@ static int setup_middle_bt(struct hns_roce_dev *hr_dev, void *cpu_base,
+ 	struct hns_roce_hem_item *hem, *temp_hem;
+ 	int total = 0;
+ 	int offset;
+-	int step;
++	u64 step;
+ 
+ 	step = hem_list_calc_ba_range(r->hopnum, 1, unit);
+ 	if (step < 1)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 621b057fb9daa6..24e906b9d3ae13 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -1681,8 +1681,8 @@ static int hns_roce_hw_v2_query_counter(struct hns_roce_dev *hr_dev,
+ 
+ 	for (i = 0; i < HNS_ROCE_HW_CNT_TOTAL && i < *num_counters; i++) {
+ 		bd_idx = i / CNT_PER_DESC;
+-		if (!(desc[bd_idx].flag & HNS_ROCE_CMD_FLAG_NEXT) &&
+-		    bd_idx != HNS_ROCE_HW_CNT_TOTAL / CNT_PER_DESC)
++		if (bd_idx != HNS_ROCE_HW_CNT_TOTAL / CNT_PER_DESC &&
++		    !(desc[bd_idx].flag & cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT)))
+ 			break;
+ 
+ 		cnt_data = (__le64 *)&desc[bd_idx].data[0];
+@@ -2972,6 +2972,9 @@ static int hns_roce_v2_init(struct hns_roce_dev *hr_dev)
+ 
+ static void hns_roce_v2_exit(struct hns_roce_dev *hr_dev)
+ {
++	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08)
++		free_mr_exit(hr_dev);
++
+ 	hns_roce_function_clear(hr_dev);
+ 
+ 	if (!hr_dev->is_vf)
+@@ -4423,12 +4426,14 @@ static int config_qp_rq_buf(struct hns_roce_dev *hr_dev,
+ 		     upper_32_bits(to_hr_hw_page_addr(mtts[0])));
+ 	hr_reg_clear(qpc_mask, QPC_RQ_CUR_BLK_ADDR_H);
+ 
+-	context->rq_nxt_blk_addr = cpu_to_le32(to_hr_hw_page_addr(mtts[1]));
+-	qpc_mask->rq_nxt_blk_addr = 0;
+-
+-	hr_reg_write(context, QPC_RQ_NXT_BLK_ADDR_H,
+-		     upper_32_bits(to_hr_hw_page_addr(mtts[1])));
+-	hr_reg_clear(qpc_mask, QPC_RQ_NXT_BLK_ADDR_H);
++	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) {
++		context->rq_nxt_blk_addr =
++				cpu_to_le32(to_hr_hw_page_addr(mtts[1]));
++		qpc_mask->rq_nxt_blk_addr = 0;
++		hr_reg_write(context, QPC_RQ_NXT_BLK_ADDR_H,
++			     upper_32_bits(to_hr_hw_page_addr(mtts[1])));
++		hr_reg_clear(qpc_mask, QPC_RQ_NXT_BLK_ADDR_H);
++	}
+ 
+ 	return 0;
+ }
+@@ -6193,6 +6198,7 @@ static irqreturn_t abnormal_interrupt_basic(struct hns_roce_dev *hr_dev,
+ 	struct pci_dev *pdev = hr_dev->pci_dev;
+ 	struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
+ 	const struct hnae3_ae_ops *ops = ae_dev->ops;
++	enum hnae3_reset_type reset_type;
+ 	irqreturn_t int_work = IRQ_NONE;
+ 	u32 int_en;
+ 
+@@ -6204,10 +6210,12 @@ static irqreturn_t abnormal_interrupt_basic(struct hns_roce_dev *hr_dev,
+ 		roce_write(hr_dev, ROCEE_VF_ABN_INT_ST_REG,
+ 			   1 << HNS_ROCE_V2_VF_INT_ST_AEQ_OVERFLOW_S);
+ 
++		reset_type = hr_dev->is_vf ?
++			     HNAE3_VF_FUNC_RESET : HNAE3_FUNC_RESET;
++
+ 		/* Set reset level for reset_event() */
+ 		if (ops->set_default_reset_request)
+-			ops->set_default_reset_request(ae_dev,
+-						       HNAE3_FUNC_RESET);
++			ops->set_default_reset_request(ae_dev, reset_type);
+ 		if (ops->reset_event)
+ 			ops->reset_event(pdev, NULL);
+ 
+@@ -6277,7 +6285,7 @@ static u64 fmea_get_ram_res_addr(u32 res_type, __le64 *data)
+ 	    res_type == ECC_RESOURCE_SCCC)
+ 		return le64_to_cpu(*data);
+ 
+-	return le64_to_cpu(*data) << PAGE_SHIFT;
++	return le64_to_cpu(*data) << HNS_HW_PAGE_SHIFT;
+ }
+ 
+ static int fmea_recover_others(struct hns_roce_dev *hr_dev, u32 res_type,
+@@ -6949,9 +6957,6 @@ static void __hns_roce_hw_v2_uninit_instance(struct hnae3_handle *handle,
+ 	hr_dev->state = HNS_ROCE_DEVICE_STATE_UNINIT;
+ 	hns_roce_handle_device_err(hr_dev);
+ 
+-	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08)
+-		free_mr_exit(hr_dev);
+-
+ 	hns_roce_exit(hr_dev);
+ 	kfree(hr_dev->priv);
+ 	ib_dealloc_device(&hr_dev->ib_dev);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 1de384ce4d0e15..6b03ba671ff8f3 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -1460,19 +1460,19 @@ void hns_roce_lock_cqs(struct hns_roce_cq *send_cq, struct hns_roce_cq *recv_cq)
+ 		__acquire(&send_cq->lock);
+ 		__acquire(&recv_cq->lock);
+ 	} else if (unlikely(send_cq != NULL && recv_cq == NULL)) {
+-		spin_lock_irq(&send_cq->lock);
++		spin_lock(&send_cq->lock);
+ 		__acquire(&recv_cq->lock);
+ 	} else if (unlikely(send_cq == NULL && recv_cq != NULL)) {
+-		spin_lock_irq(&recv_cq->lock);
++		spin_lock(&recv_cq->lock);
+ 		__acquire(&send_cq->lock);
+ 	} else if (send_cq == recv_cq) {
+-		spin_lock_irq(&send_cq->lock);
++		spin_lock(&send_cq->lock);
+ 		__acquire(&recv_cq->lock);
+ 	} else if (send_cq->cqn < recv_cq->cqn) {
+-		spin_lock_irq(&send_cq->lock);
++		spin_lock(&send_cq->lock);
+ 		spin_lock_nested(&recv_cq->lock, SINGLE_DEPTH_NESTING);
+ 	} else {
+-		spin_lock_irq(&recv_cq->lock);
++		spin_lock(&recv_cq->lock);
+ 		spin_lock_nested(&send_cq->lock, SINGLE_DEPTH_NESTING);
+ 	}
+ }
+@@ -1492,13 +1492,13 @@ void hns_roce_unlock_cqs(struct hns_roce_cq *send_cq,
+ 		spin_unlock(&recv_cq->lock);
+ 	} else if (send_cq == recv_cq) {
+ 		__release(&recv_cq->lock);
+-		spin_unlock_irq(&send_cq->lock);
++		spin_unlock(&send_cq->lock);
+ 	} else if (send_cq->cqn < recv_cq->cqn) {
+ 		spin_unlock(&recv_cq->lock);
+-		spin_unlock_irq(&send_cq->lock);
++		spin_unlock(&send_cq->lock);
+ 	} else {
+ 		spin_unlock(&send_cq->lock);
+-		spin_unlock_irq(&recv_cq->lock);
++		spin_unlock(&recv_cq->lock);
+ 	}
+ }
+ 
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index 12704efb7b19a8..954450195824c3 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -1347,7 +1347,7 @@ int irdma_modify_qp_roce(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 		if (attr->max_dest_rd_atomic > dev->hw_attrs.max_hw_ird) {
+ 			ibdev_err(&iwdev->ibdev,
+ 				  "rd_atomic = %d, above max_hw_ird=%d\n",
+-				   attr->max_rd_atomic,
++				   attr->max_dest_rd_atomic,
+ 				   dev->hw_attrs.max_hw_ird);
+ 			return -EINVAL;
+ 		}
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 43660c831b22cf..fdb0e62d805b9a 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -542,7 +542,7 @@ static int mlx5_query_port_roce(struct ib_device *device, u32 port_num,
+ 	if (!ndev)
+ 		goto out;
+ 
+-	if (dev->lag_active) {
++	if (mlx5_lag_is_roce(mdev) || mlx5_lag_is_sriov(mdev)) {
+ 		rcu_read_lock();
+ 		upper = netdev_master_upper_dev_get_rcu(ndev);
+ 		if (upper) {
+diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+index f9abdca3493aa9..0b2f18c34ee5e6 100644
+--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
++++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+@@ -795,6 +795,7 @@ struct mlx5_cache_ent {
+ 	u8 is_tmp:1;
+ 	u8 disabled:1;
+ 	u8 fill_to_high_water:1;
++	u8 tmp_cleanup_scheduled:1;
+ 
+ 	/*
+ 	 * - limit is the low water mark for stored mkeys, 2* limit is the
+@@ -826,7 +827,6 @@ struct mlx5_mkey_cache {
+ 	struct mutex		rb_lock;
+ 	struct dentry		*fs_root;
+ 	unsigned long		last_add;
+-	struct delayed_work	remove_ent_dwork;
+ };
+ 
+ struct mlx5_ib_port_resources {
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index d3c1f63791a2b6..59bde614f061f5 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -48,6 +48,7 @@ enum {
+ 	MAX_PENDING_REG_MR = 8,
+ };
+ 
++#define MLX5_MR_CACHE_PERSISTENT_ENTRY_MIN_DESCS 4
+ #define MLX5_UMR_ALIGN 2048
+ 
+ static void
+@@ -211,9 +212,9 @@ static void create_mkey_callback(int status, struct mlx5_async_work *context)
+ 
+ 	spin_lock_irqsave(&ent->mkeys_queue.lock, flags);
+ 	push_mkey_locked(ent, mkey_out->mkey);
++	ent->pending--;
+ 	/* If we are doing fill_to_high_water then keep going. */
+ 	queue_adjust_cache_locked(ent);
+-	ent->pending--;
+ 	spin_unlock_irqrestore(&ent->mkeys_queue.lock, flags);
+ 	kfree(mkey_out);
+ }
+@@ -527,6 +528,21 @@ static void queue_adjust_cache_locked(struct mlx5_cache_ent *ent)
+ 	}
+ }
+ 
++static void clean_keys(struct mlx5_ib_dev *dev, struct mlx5_cache_ent *ent)
++{
++	u32 mkey;
++
++	spin_lock_irq(&ent->mkeys_queue.lock);
++	while (ent->mkeys_queue.ci) {
++		mkey = pop_mkey_locked(ent);
++		spin_unlock_irq(&ent->mkeys_queue.lock);
++		mlx5_core_destroy_mkey(dev->mdev, mkey);
++		spin_lock_irq(&ent->mkeys_queue.lock);
++	}
++	ent->tmp_cleanup_scheduled = false;
++	spin_unlock_irq(&ent->mkeys_queue.lock);
++}
++
+ static void __cache_work_func(struct mlx5_cache_ent *ent)
+ {
+ 	struct mlx5_ib_dev *dev = ent->dev;
+@@ -598,7 +614,11 @@ static void delayed_cache_work_func(struct work_struct *work)
+ 	struct mlx5_cache_ent *ent;
+ 
+ 	ent = container_of(work, struct mlx5_cache_ent, dwork.work);
+-	__cache_work_func(ent);
++	/* temp entries are never filled, only cleaned */
++	if (ent->is_tmp)
++		clean_keys(ent->dev, ent);
++	else
++		__cache_work_func(ent);
+ }
+ 
+ static int cache_ent_key_cmp(struct mlx5r_cache_rb_key key1,
+@@ -659,6 +679,7 @@ mkey_cache_ent_from_rb_key(struct mlx5_ib_dev *dev,
+ {
+ 	struct rb_node *node = dev->cache.rb_root.rb_node;
+ 	struct mlx5_cache_ent *cur, *smallest = NULL;
++	u64 ndescs_limit;
+ 	int cmp;
+ 
+ 	/*
+@@ -677,10 +698,18 @@ mkey_cache_ent_from_rb_key(struct mlx5_ib_dev *dev,
+ 			return cur;
+ 	}
+ 
++	/*
++	 * Limit the usage of mkeys larger than twice the required size while
++	 * also allowing the usage of smallest cache entry for small MRs.
++	 */
++	ndescs_limit = max_t(u64, rb_key.ndescs * 2,
++			     MLX5_MR_CACHE_PERSISTENT_ENTRY_MIN_DESCS);
++
+ 	return (smallest &&
+ 		smallest->rb_key.access_mode == rb_key.access_mode &&
+ 		smallest->rb_key.access_flags == rb_key.access_flags &&
+-		smallest->rb_key.ats == rb_key.ats) ?
++		smallest->rb_key.ats == rb_key.ats &&
++		smallest->rb_key.ndescs <= ndescs_limit) ?
+ 		       smallest :
+ 		       NULL;
+ }
+@@ -765,21 +794,6 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev,
+ 	return _mlx5_mr_cache_alloc(dev, ent, access_flags);
+ }
+ 
+-static void clean_keys(struct mlx5_ib_dev *dev, struct mlx5_cache_ent *ent)
+-{
+-	u32 mkey;
+-
+-	cancel_delayed_work(&ent->dwork);
+-	spin_lock_irq(&ent->mkeys_queue.lock);
+-	while (ent->mkeys_queue.ci) {
+-		mkey = pop_mkey_locked(ent);
+-		spin_unlock_irq(&ent->mkeys_queue.lock);
+-		mlx5_core_destroy_mkey(dev->mdev, mkey);
+-		spin_lock_irq(&ent->mkeys_queue.lock);
+-	}
+-	spin_unlock_irq(&ent->mkeys_queue.lock);
+-}
+-
+ static void mlx5_mkey_cache_debugfs_cleanup(struct mlx5_ib_dev *dev)
+ {
+ 	if (!mlx5_debugfs_root || dev->is_rep)
+@@ -892,10 +906,6 @@ mlx5r_cache_create_ent_locked(struct mlx5_ib_dev *dev,
+ 			ent->limit = 0;
+ 
+ 		mlx5_mkey_cache_debugfs_add_ent(dev, ent);
+-	} else {
+-		mod_delayed_work(ent->dev->cache.wq,
+-				 &ent->dev->cache.remove_ent_dwork,
+-				 msecs_to_jiffies(30 * 1000));
+ 	}
+ 
+ 	return ent;
+@@ -906,35 +916,6 @@ mlx5r_cache_create_ent_locked(struct mlx5_ib_dev *dev,
+ 	return ERR_PTR(ret);
+ }
+ 
+-static void remove_ent_work_func(struct work_struct *work)
+-{
+-	struct mlx5_mkey_cache *cache;
+-	struct mlx5_cache_ent *ent;
+-	struct rb_node *cur;
+-
+-	cache = container_of(work, struct mlx5_mkey_cache,
+-			     remove_ent_dwork.work);
+-	mutex_lock(&cache->rb_lock);
+-	cur = rb_last(&cache->rb_root);
+-	while (cur) {
+-		ent = rb_entry(cur, struct mlx5_cache_ent, node);
+-		cur = rb_prev(cur);
+-		mutex_unlock(&cache->rb_lock);
+-
+-		spin_lock_irq(&ent->mkeys_queue.lock);
+-		if (!ent->is_tmp) {
+-			spin_unlock_irq(&ent->mkeys_queue.lock);
+-			mutex_lock(&cache->rb_lock);
+-			continue;
+-		}
+-		spin_unlock_irq(&ent->mkeys_queue.lock);
+-
+-		clean_keys(ent->dev, ent);
+-		mutex_lock(&cache->rb_lock);
+-	}
+-	mutex_unlock(&cache->rb_lock);
+-}
+-
+ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
+ {
+ 	struct mlx5_mkey_cache *cache = &dev->cache;
+@@ -950,7 +931,6 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
+ 	mutex_init(&dev->slow_path_mutex);
+ 	mutex_init(&dev->cache.rb_lock);
+ 	dev->cache.rb_root = RB_ROOT;
+-	INIT_DELAYED_WORK(&dev->cache.remove_ent_dwork, remove_ent_work_func);
+ 	cache->wq = alloc_ordered_workqueue("mkey_cache", WQ_MEM_RECLAIM);
+ 	if (!cache->wq) {
+ 		mlx5_ib_warn(dev, "failed to create work queue\n");
+@@ -962,7 +942,7 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
+ 	mlx5_mkey_cache_debugfs_init(dev);
+ 	mutex_lock(&cache->rb_lock);
+ 	for (i = 0; i <= mkey_cache_max_order(dev); i++) {
+-		rb_key.ndescs = 1 << (i + 2);
++		rb_key.ndescs = MLX5_MR_CACHE_PERSISTENT_ENTRY_MIN_DESCS << i;
+ 		ent = mlx5r_cache_create_ent_locked(dev, rb_key, true);
+ 		if (IS_ERR(ent)) {
+ 			ret = PTR_ERR(ent);
+@@ -1001,7 +981,6 @@ void mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev)
+ 		return;
+ 
+ 	mutex_lock(&dev->cache.rb_lock);
+-	cancel_delayed_work(&dev->cache.remove_ent_dwork);
+ 	for (node = rb_first(root); node; node = rb_next(node)) {
+ 		ent = rb_entry(node, struct mlx5_cache_ent, node);
+ 		spin_lock_irq(&ent->mkeys_queue.lock);
+@@ -1843,8 +1822,18 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ 	struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
+ 	struct mlx5_cache_ent *ent = mr->mmkey.cache_ent;
+ 
+-	if (mr->mmkey.cacheable && !mlx5r_umr_revoke_mr(mr) && !cache_ent_find_and_store(dev, mr))
++	if (mr->mmkey.cacheable && !mlx5r_umr_revoke_mr(mr) && !cache_ent_find_and_store(dev, mr)) {
++		ent = mr->mmkey.cache_ent;
++		/* upon storing to a clean temp entry - schedule its cleanup */
++		spin_lock_irq(&ent->mkeys_queue.lock);
++		if (ent->is_tmp && !ent->tmp_cleanup_scheduled) {
++			mod_delayed_work(ent->dev->cache.wq, &ent->dwork,
++					 msecs_to_jiffies(30 * 1000));
++			ent->tmp_cleanup_scheduled = true;
++		}
++		spin_unlock_irq(&ent->mkeys_queue.lock);
+ 		return 0;
++	}
+ 
+ 	if (ent) {
+ 		spin_lock_irq(&ent->mkeys_queue.lock);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 88106cf5ce550c..84d2dfcd20af6f 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -626,6 +626,7 @@ static void rtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		 */
+ 		if (WARN_ON(wc->wr_cqe->done != rtrs_clt_rdma_done))
+ 			return;
++		clt_path->s.hb_missed_cnt = 0;
+ 		rtrs_from_imm(be32_to_cpu(wc->ex.imm_data),
+ 			       &imm_type, &imm_payload);
+ 		if (imm_type == RTRS_IO_RSP_IMM ||
+@@ -643,7 +644,6 @@ static void rtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ 				return  rtrs_clt_recv_done(con, wc);
+ 		} else if (imm_type == RTRS_HB_ACK_IMM) {
+ 			WARN_ON(con->c.cid);
+-			clt_path->s.hb_missed_cnt = 0;
+ 			clt_path->s.hb_cur_latency =
+ 				ktime_sub(ktime_get(), clt_path->s.hb_last_sent);
+ 			if (clt_path->flags & RTRS_MSG_NEW_RKEY_F)
+@@ -670,6 +670,7 @@ static void rtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		/*
+ 		 * Key invalidations from server side
+ 		 */
++		clt_path->s.hb_missed_cnt = 0;
+ 		WARN_ON(!(wc->wc_flags & IB_WC_WITH_INVALIDATE ||
+ 			  wc->wc_flags & IB_WC_WITH_IMM));
+ 		WARN_ON(wc->wr_cqe->done != rtrs_clt_rdma_done);
+@@ -2346,6 +2347,12 @@ static int init_conns(struct rtrs_clt_path *clt_path)
+ 		if (err)
+ 			goto destroy;
+ 	}
++
++	/*
++	 * Set the cid to con_num - 1, since if we fail later, we want to stay in bounds.
++	 */
++	cid = clt_path->s.con_num - 1;
++
+ 	err = alloc_path_reqs(clt_path);
+ 	if (err)
+ 		goto destroy;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index 1d33efb8fb03be..94ac99a4f696e7 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -1229,6 +1229,7 @@ static void rtrs_srv_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		 */
+ 		if (WARN_ON(wc->wr_cqe != &io_comp_cqe))
+ 			return;
++		srv_path->s.hb_missed_cnt = 0;
+ 		err = rtrs_post_recv_empty(&con->c, &io_comp_cqe);
+ 		if (err) {
+ 			rtrs_err(s, "rtrs_post_recv(), err: %d\n", err);
+diff --git a/drivers/input/keyboard/adp5588-keys.c b/drivers/input/keyboard/adp5588-keys.c
+index 1b0279393df4bb..5acaffb7f6e11d 100644
+--- a/drivers/input/keyboard/adp5588-keys.c
++++ b/drivers/input/keyboard/adp5588-keys.c
+@@ -627,7 +627,7 @@ static int adp5588_setup(struct adp5588_kpad *kpad)
+ 
+ 	for (i = 0; i < KEYP_MAX_EVENT; i++) {
+ 		ret = adp5588_read(client, KEY_EVENTA);
+-		if (ret)
++		if (ret < 0)
+ 			return ret;
+ 	}
+ 
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index bad238f69a7afd..34d1f07ea4c304 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -1120,6 +1120,43 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 		},
+ 		.driver_data = (void *)(SERIO_QUIRK_NOLOOP)
+ 	},
++	/*
++	 * Some TongFang barebones have touchpad and/or keyboard issues after
++	 * suspend fixable with nomux + reset + noloop + nopnp. Luckily, none of
++	 * them have an external PS/2 port so this can safely be set for all of
++	 * them.
++	 * TongFang barebones come with board_vendor and/or system_vendor set to
++	 * a different value for each individual reseller. The only somewhat
++	 * universal way to identify them is by board_name.
++	 */
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "GM6XGxX"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "GMxXGxx"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "GMxXGxX"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "GMxHGxx"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
+ 	/*
+ 	 * A lot of modern Clevo barebones have touchpad and/or keyboard issues
+ 	 * after suspend fixable with nomux + reset + noloop + nopnp. Luckily,
+diff --git a/drivers/input/touchscreen/ilitek_ts_i2c.c b/drivers/input/touchscreen/ilitek_ts_i2c.c
+index 3eb762896345b7..5a807ad723190d 100644
+--- a/drivers/input/touchscreen/ilitek_ts_i2c.c
++++ b/drivers/input/touchscreen/ilitek_ts_i2c.c
+@@ -37,6 +37,8 @@
+ #define ILITEK_TP_CMD_GET_MCU_VER			0x61
+ #define ILITEK_TP_CMD_GET_IC_MODE			0xC0
+ 
++#define ILITEK_TP_I2C_REPORT_ID				0x48
++
+ #define REPORT_COUNT_ADDRESS				61
+ #define ILITEK_SUPPORT_MAX_POINT			40
+ 
+@@ -160,15 +162,19 @@ static int ilitek_process_and_report_v6(struct ilitek_ts_data *ts)
+ 	error = ilitek_i2c_write_and_read(ts, NULL, 0, 0, buf, 64);
+ 	if (error) {
+ 		dev_err(dev, "get touch info failed, err:%d\n", error);
+-		goto err_sync_frame;
++		return error;
++	}
++
++	if (buf[0] != ILITEK_TP_I2C_REPORT_ID) {
++		dev_err(dev, "get touch info failed. Wrong id: 0x%02X\n", buf[0]);
++		return -EINVAL;
+ 	}
+ 
+ 	report_max_point = buf[REPORT_COUNT_ADDRESS];
+ 	if (report_max_point > ts->max_tp) {
+ 		dev_err(dev, "FW report max point:%d > panel info. max:%d\n",
+ 			report_max_point, ts->max_tp);
+-		error = -EINVAL;
+-		goto err_sync_frame;
++		return -EINVAL;
+ 	}
+ 
+ 	count = DIV_ROUND_UP(report_max_point, packet_max_point);
+@@ -178,7 +184,7 @@ static int ilitek_process_and_report_v6(struct ilitek_ts_data *ts)
+ 		if (error) {
+ 			dev_err(dev, "get touch info. failed, cnt:%d, err:%d\n",
+ 				count, error);
+-			goto err_sync_frame;
++			return error;
+ 		}
+ 	}
+ 
+@@ -203,10 +209,10 @@ static int ilitek_process_and_report_v6(struct ilitek_ts_data *ts)
+ 		ilitek_touch_down(ts, id, x, y);
+ 	}
+ 
+-err_sync_frame:
+ 	input_mt_sync_frame(input);
+ 	input_sync(input);
+-	return error;
++
++	return 0;
+ }
+ 
+ /* APIs of cmds for ILITEK Touch IC */
+diff --git a/drivers/interconnect/icc-clk.c b/drivers/interconnect/icc-clk.c
+index d787f2ea36d97b..a91df709cfb2f3 100644
+--- a/drivers/interconnect/icc-clk.c
++++ b/drivers/interconnect/icc-clk.c
+@@ -87,6 +87,7 @@ struct icc_provider *icc_clk_register(struct device *dev,
+ 	onecell = devm_kzalloc(dev, struct_size(onecell, nodes, 2 * num_clocks), GFP_KERNEL);
+ 	if (!onecell)
+ 		return ERR_PTR(-ENOMEM);
++	onecell->num_nodes = 2 * num_clocks;
+ 
+ 	qp = devm_kzalloc(dev, struct_size(qp, clocks, num_clocks), GFP_KERNEL);
+ 	if (!qp)
+@@ -133,8 +134,6 @@ struct icc_provider *icc_clk_register(struct device *dev,
+ 		onecell->nodes[j++] = node;
+ 	}
+ 
+-	onecell->num_nodes = j;
+-
+ 	ret = icc_provider_register(provider);
+ 	if (ret)
+ 		goto err;
+diff --git a/drivers/interconnect/qcom/sm8350.c b/drivers/interconnect/qcom/sm8350.c
+index b321c3009acbac..885a9d3f92e4d1 100644
+--- a/drivers/interconnect/qcom/sm8350.c
++++ b/drivers/interconnect/qcom/sm8350.c
+@@ -1965,6 +1965,7 @@ static struct platform_driver qnoc_driver = {
+ 	.driver = {
+ 		.name = "qnoc-sm8350",
+ 		.of_match_table = qnoc_of_match,
++		.sync_state = icc_sync_state,
+ 	},
+ };
+ module_platform_driver(qnoc_driver);
+diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
+index 9d9a7fde59e75d..05aed3cb46f1bf 100644
+--- a/drivers/iommu/amd/io_pgtable.c
++++ b/drivers/iommu/amd/io_pgtable.c
+@@ -574,23 +574,27 @@ static void v1_free_pgtable(struct io_pgtable *iop)
+ 	       pgtable->mode > PAGE_MODE_6_LEVEL);
+ 
+ 	free_sub_pt(pgtable->root, pgtable->mode, &freelist);
++	iommu_put_pages_list(&freelist);
+ 
+ 	/* Update data structure */
+ 	amd_iommu_domain_clr_pt_root(dom);
+ 
+ 	/* Make changes visible to IOMMUs */
+ 	amd_iommu_domain_update(dom);
+-
+-	iommu_put_pages_list(&freelist);
+ }
+ 
+ static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+ {
+ 	struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
+ 
+-	cfg->pgsize_bitmap  = AMD_IOMMU_PGSIZES,
+-	cfg->ias            = IOMMU_IN_ADDR_BIT_SIZE,
+-	cfg->oas            = IOMMU_OUT_ADDR_BIT_SIZE,
++	pgtable->root = iommu_alloc_page(GFP_KERNEL);
++	if (!pgtable->root)
++		return NULL;
++	pgtable->mode = PAGE_MODE_3_LEVEL;
++
++	cfg->pgsize_bitmap  = AMD_IOMMU_PGSIZES;
++	cfg->ias            = IOMMU_IN_ADDR_BIT_SIZE;
++	cfg->oas            = IOMMU_OUT_ADDR_BIT_SIZE;
+ 	cfg->tlb            = &v1_flush_ops;
+ 
+ 	pgtable->iop.ops.map_pages    = iommu_v1_map_pages;
+diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
+index 78ac37c5ccc1e0..743f417b281d4b 100644
+--- a/drivers/iommu/amd/io_pgtable_v2.c
++++ b/drivers/iommu/amd/io_pgtable_v2.c
+@@ -51,7 +51,7 @@ static inline u64 set_pgtable_attr(u64 *page)
+ 	u64 prot;
+ 
+ 	prot = IOMMU_PAGE_PRESENT | IOMMU_PAGE_RW | IOMMU_PAGE_USER;
+-	prot |= IOMMU_PAGE_ACCESS | IOMMU_PAGE_DIRTY;
++	prot |= IOMMU_PAGE_ACCESS;
+ 
+ 	return (iommu_virt_to_phys(page) | prot);
+ }
+@@ -362,7 +362,7 @@ static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
+ 	struct protection_domain *pdom = (struct protection_domain *)cookie;
+ 	int ias = IOMMU_IN_ADDR_BIT_SIZE;
+ 
+-	pgtable->pgd = iommu_alloc_page_node(pdom->nid, GFP_ATOMIC);
++	pgtable->pgd = iommu_alloc_page_node(pdom->nid, GFP_KERNEL);
+ 	if (!pgtable->pgd)
+ 		return NULL;
+ 
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index b19e8c0f48fa25..1a61f14459e4fe 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -52,8 +52,6 @@
+ #define HT_RANGE_START		(0xfd00000000ULL)
+ #define HT_RANGE_END		(0xffffffffffULL)
+ 
+-#define DEFAULT_PGTABLE_LEVEL	PAGE_MODE_3_LEVEL
+-
+ static DEFINE_SPINLOCK(pd_bitmap_lock);
+ 
+ LIST_HEAD(ioapic_map);
+@@ -1552,8 +1550,8 @@ void amd_iommu_dev_flush_pasid_pages(struct iommu_dev_data *dev_data,
+ void amd_iommu_dev_flush_pasid_all(struct iommu_dev_data *dev_data,
+ 				   ioasid_t pasid)
+ {
+-	amd_iommu_dev_flush_pasid_pages(dev_data, 0,
+-					CMD_INV_IOMMU_ALL_PAGES_ADDRESS, pasid);
++	amd_iommu_dev_flush_pasid_pages(dev_data, pasid, 0,
++					CMD_INV_IOMMU_ALL_PAGES_ADDRESS);
+ }
+ 
+ void amd_iommu_domain_flush_complete(struct protection_domain *domain)
+@@ -2185,11 +2183,12 @@ static struct iommu_device *amd_iommu_probe_device(struct device *dev)
+ 		dev_err(dev, "Failed to initialize - trying to proceed anyway\n");
+ 		iommu_dev = ERR_PTR(ret);
+ 		iommu_ignore_device(iommu, dev);
+-	} else {
+-		amd_iommu_set_pci_msi_domain(dev, iommu);
+-		iommu_dev = &iommu->iommu;
++		goto out_err;
+ 	}
+ 
++	amd_iommu_set_pci_msi_domain(dev, iommu);
++	iommu_dev = &iommu->iommu;
++
+ 	/*
+ 	 * If IOMMU and device supports PASID then it will contain max
+ 	 * supported PASIDs, else it will be zero.
+@@ -2201,6 +2200,7 @@ static struct iommu_device *amd_iommu_probe_device(struct device *dev)
+ 					     pci_max_pasids(to_pci_dev(dev)));
+ 	}
+ 
++out_err:
+ 	iommu_completion_wait(iommu);
+ 
+ 	return iommu_dev;
+@@ -2265,47 +2265,17 @@ void protection_domain_free(struct protection_domain *domain)
+ 	if (domain->iop.pgtbl_cfg.tlb)
+ 		free_io_pgtable_ops(&domain->iop.iop.ops);
+ 
+-	if (domain->iop.root)
+-		iommu_free_page(domain->iop.root);
+-
+ 	if (domain->id)
+ 		domain_id_free(domain->id);
+ 
+ 	kfree(domain);
+ }
+ 
+-static int protection_domain_init_v1(struct protection_domain *domain, int mode)
+-{
+-	u64 *pt_root = NULL;
+-
+-	BUG_ON(mode < PAGE_MODE_NONE || mode > PAGE_MODE_6_LEVEL);
+-
+-	if (mode != PAGE_MODE_NONE) {
+-		pt_root = iommu_alloc_page(GFP_KERNEL);
+-		if (!pt_root)
+-			return -ENOMEM;
+-	}
+-
+-	domain->pd_mode = PD_MODE_V1;
+-	amd_iommu_domain_set_pgtable(domain, pt_root, mode);
+-
+-	return 0;
+-}
+-
+-static int protection_domain_init_v2(struct protection_domain *pdom)
+-{
+-	pdom->pd_mode = PD_MODE_V2;
+-	pdom->domain.pgsize_bitmap = AMD_IOMMU_PGSIZES_V2;
+-
+-	return 0;
+-}
+-
+ struct protection_domain *protection_domain_alloc(unsigned int type)
+ {
+ 	struct io_pgtable_ops *pgtbl_ops;
+ 	struct protection_domain *domain;
+ 	int pgtable;
+-	int ret;
+ 
+ 	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
+ 	if (!domain)
+@@ -2341,18 +2311,14 @@ struct protection_domain *protection_domain_alloc(unsigned int type)
+ 
+ 	switch (pgtable) {
+ 	case AMD_IOMMU_V1:
+-		ret = protection_domain_init_v1(domain, DEFAULT_PGTABLE_LEVEL);
++		domain->pd_mode = PD_MODE_V1;
+ 		break;
+ 	case AMD_IOMMU_V2:
+-		ret = protection_domain_init_v2(domain);
++		domain->pd_mode = PD_MODE_V2;
+ 		break;
+ 	default:
+-		ret = -EINVAL;
+-		break;
+-	}
+-
+-	if (ret)
+ 		goto out_err;
++	}
+ 
+ 	pgtbl_ops = alloc_io_pgtable_ops(pgtable, &domain->iop.pgtbl_cfg, domain);
+ 	if (!pgtbl_ops)
+@@ -2405,10 +2371,10 @@ static struct iommu_domain *do_iommu_domain_alloc(unsigned int type,
+ 	domain->domain.geometry.aperture_start = 0;
+ 	domain->domain.geometry.aperture_end   = dma_max_address();
+ 	domain->domain.geometry.force_aperture = true;
++	domain->domain.pgsize_bitmap = domain->iop.iop.cfg.pgsize_bitmap;
+ 
+ 	if (iommu) {
+ 		domain->domain.type = type;
+-		domain->domain.pgsize_bitmap = iommu->iommu.ops->pgsize_bitmap;
+ 		domain->domain.ops = iommu->iommu.ops->default_domain_ops;
+ 
+ 		if (dirty_tracking)
+@@ -2867,7 +2833,6 @@ const struct iommu_ops amd_iommu_ops = {
+ 	.device_group = amd_iommu_device_group,
+ 	.get_resv_regions = amd_iommu_get_resv_regions,
+ 	.is_attach_deferred = amd_iommu_is_attach_deferred,
+-	.pgsize_bitmap	= AMD_IOMMU_PGSIZES,
+ 	.def_domain_type = amd_iommu_def_domain_type,
+ 	.dev_enable_feat = amd_iommu_dev_enable_feature,
+ 	.dev_disable_feat = amd_iommu_dev_disable_feature,
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 13f3e2efb2ccbf..ccca410c948164 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -282,6 +282,20 @@ static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
+ 	u32 smr;
+ 	int i;
+ 
++	/*
++	 * MSM8998 LPASS SMMU reports 13 context banks, but accessing
++	 * the last context bank crashes the system.
++	 */
++	if (of_device_is_compatible(smmu->dev->of_node, "qcom,msm8998-smmu-v2") &&
++	    smmu->num_context_banks == 13) {
++		smmu->num_context_banks = 12;
++	} else if (of_device_is_compatible(smmu->dev->of_node, "qcom,sdm630-smmu-v2")) {
++		if (smmu->num_context_banks == 21) /* SDM630 / SDM660 A2NOC SMMU */
++			smmu->num_context_banks = 7;
++		else if (smmu->num_context_banks == 14) /* SDM630 / SDM660 LPASS SMMU */
++			smmu->num_context_banks = 13;
++	}
++
+ 	/*
+ 	 * Some platforms support more than the Arm SMMU architected maximum of
+ 	 * 128 stream matching groups. For unknown reasons, the additional
+@@ -338,6 +352,19 @@ static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
+ 	return 0;
+ }
+ 
++static int qcom_adreno_smmuv2_cfg_probe(struct arm_smmu_device *smmu)
++{
++	/* Support for 16K pages is advertised on some SoCs, but it doesn't seem to work */
++	smmu->features &= ~ARM_SMMU_FEAT_FMT_AARCH64_16K;
++
++	/* TZ protects several last context banks, hide them from Linux */
++	if (of_device_is_compatible(smmu->dev->of_node, "qcom,sdm630-smmu-v2") &&
++	    smmu->num_context_banks == 5)
++		smmu->num_context_banks = 2;
++
++	return 0;
++}
++
+ static void qcom_smmu_write_s2cr(struct arm_smmu_device *smmu, int idx)
+ {
+ 	struct arm_smmu_s2cr *s2cr = smmu->s2crs + idx;
+@@ -436,6 +463,7 @@ static const struct arm_smmu_impl sdm845_smmu_500_impl = {
+ 
+ static const struct arm_smmu_impl qcom_adreno_smmu_v2_impl = {
+ 	.init_context = qcom_adreno_smmu_init_context,
++	.cfg_probe = qcom_adreno_smmuv2_cfg_probe,
+ 	.def_domain_type = qcom_smmu_def_domain_type,
+ 	.alloc_context_bank = qcom_adreno_smmu_alloc_context_bank,
+ 	.write_sctlr = qcom_adreno_smmu_write_sctlr,
+diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
+index a9f1fe44c4c0bd..21f0d8cbd7aade 100644
+--- a/drivers/iommu/iommufd/hw_pagetable.c
++++ b/drivers/iommu/iommufd/hw_pagetable.c
+@@ -215,7 +215,8 @@ iommufd_hwpt_nested_alloc(struct iommufd_ctx *ictx,
+ 
+ 	if (flags || !user_data->len || !ops->domain_alloc_user)
+ 		return ERR_PTR(-EOPNOTSUPP);
+-	if (parent->auto_domain || !parent->nest_parent)
++	if (parent->auto_domain || !parent->nest_parent ||
++	    parent->common.domain->owner != ops)
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	hwpt_nested = __iommufd_object_alloc(
+diff --git a/drivers/iommu/iommufd/io_pagetable.c b/drivers/iommu/iommufd/io_pagetable.c
+index 05fd9d3abf1b80..9f193c933de6ef 100644
+--- a/drivers/iommu/iommufd/io_pagetable.c
++++ b/drivers/iommu/iommufd/io_pagetable.c
+@@ -112,6 +112,7 @@ static int iopt_alloc_iova(struct io_pagetable *iopt, unsigned long *iova,
+ 	unsigned long page_offset = uptr % PAGE_SIZE;
+ 	struct interval_tree_double_span_iter used_span;
+ 	struct interval_tree_span_iter allowed_span;
++	unsigned long max_alignment = PAGE_SIZE;
+ 	unsigned long iova_alignment;
+ 
+ 	lockdep_assert_held(&iopt->iova_rwsem);
+@@ -131,6 +132,13 @@ static int iopt_alloc_iova(struct io_pagetable *iopt, unsigned long *iova,
+ 				       roundup_pow_of_two(length),
+ 				       1UL << __ffs64(uptr));
+ 
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
++	max_alignment = HPAGE_SIZE;
++#endif
++	/* Protect against ALIGN() overflow */
++	if (iova_alignment >= max_alignment)
++		iova_alignment = max_alignment;
++
+ 	if (iova_alignment < iopt->iova_alignment)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c
+index 654ed333909579..62bfeb7a35d850 100644
+--- a/drivers/iommu/iommufd/selftest.c
++++ b/drivers/iommu/iommufd/selftest.c
+@@ -1313,7 +1313,7 @@ static int iommufd_test_dirty(struct iommufd_ucmd *ucmd, unsigned int mockpt_id,
+ 			      unsigned long page_size, void __user *uptr,
+ 			      u32 flags)
+ {
+-	unsigned long bitmap_size, i, max;
++	unsigned long i, max;
+ 	struct iommu_test_cmd *cmd = ucmd->cmd;
+ 	struct iommufd_hw_pagetable *hwpt;
+ 	struct mock_iommu_domain *mock;
+@@ -1334,15 +1334,14 @@ static int iommufd_test_dirty(struct iommufd_ucmd *ucmd, unsigned int mockpt_id,
+ 	}
+ 
+ 	max = length / page_size;
+-	bitmap_size = DIV_ROUND_UP(max, BITS_PER_BYTE);
+-
+-	tmp = kvzalloc(bitmap_size, GFP_KERNEL_ACCOUNT);
++	tmp = kvzalloc(DIV_ROUND_UP(max, BITS_PER_LONG) * sizeof(unsigned long),
++		       GFP_KERNEL_ACCOUNT);
+ 	if (!tmp) {
+ 		rc = -ENOMEM;
+ 		goto out_put;
+ 	}
+ 
+-	if (copy_from_user(tmp, uptr, bitmap_size)) {
++	if (copy_from_user(tmp, uptr, DIV_ROUND_UP(max, BITS_PER_BYTE))) {
+ 		rc = -EFAULT;
+ 		goto out_free;
+ 	}
+diff --git a/drivers/leds/leds-bd2606mvv.c b/drivers/leds/leds-bd2606mvv.c
+index 3fda712d2f8095..c1181a35d0f762 100644
+--- a/drivers/leds/leds-bd2606mvv.c
++++ b/drivers/leds/leds-bd2606mvv.c
+@@ -69,16 +69,14 @@ static const struct regmap_config bd2606mvv_regmap = {
+ 
+ static int bd2606mvv_probe(struct i2c_client *client)
+ {
+-	struct fwnode_handle *np, *child;
+ 	struct device *dev = &client->dev;
+ 	struct bd2606mvv_priv *priv;
+ 	struct fwnode_handle *led_fwnodes[BD2606_MAX_LEDS] = { 0 };
+ 	int active_pairs[BD2606_MAX_LEDS / 2] = { 0 };
+ 	int err, reg;
+-	int i;
++	int i, j;
+ 
+-	np = dev_fwnode(dev);
+-	if (!np)
++	if (!dev_fwnode(dev))
+ 		return -ENODEV;
+ 
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+@@ -94,20 +92,18 @@ static int bd2606mvv_probe(struct i2c_client *client)
+ 
+ 	i2c_set_clientdata(client, priv);
+ 
+-	fwnode_for_each_available_child_node(np, child) {
++	device_for_each_child_node_scoped(dev, child) {
+ 		struct bd2606mvv_led *led;
+ 
+ 		err = fwnode_property_read_u32(child, "reg", &reg);
+-		if (err) {
+-			fwnode_handle_put(child);
++		if (err)
+ 			return err;
+-		}
+-		if (reg < 0 || reg >= BD2606_MAX_LEDS || led_fwnodes[reg]) {
+-			fwnode_handle_put(child);
++
++		if (reg < 0 || reg >= BD2606_MAX_LEDS || led_fwnodes[reg])
+ 			return -EINVAL;
+-		}
++
+ 		led = &priv->leds[reg];
+-		led_fwnodes[reg] = child;
++		led_fwnodes[reg] = fwnode_handle_get(child);
+ 		active_pairs[reg / 2]++;
+ 		led->priv = priv;
+ 		led->led_no = reg;
+@@ -130,7 +126,8 @@ static int bd2606mvv_probe(struct i2c_client *client)
+ 						     &priv->leds[i].ldev,
+ 						     &init_data);
+ 		if (err < 0) {
+-			fwnode_handle_put(child);
++			for (j = i; j < BD2606_MAX_LEDS; j++)
++				fwnode_handle_put(led_fwnodes[j]);
+ 			return dev_err_probe(dev, err,
+ 					     "couldn't register LED %s\n",
+ 					     priv->leds[i].ldev.name);
+diff --git a/drivers/leds/leds-gpio.c b/drivers/leds/leds-gpio.c
+index 83fcd7b6afff76..4d1612d557c841 100644
+--- a/drivers/leds/leds-gpio.c
++++ b/drivers/leds/leds-gpio.c
+@@ -150,7 +150,7 @@ static struct gpio_leds_priv *gpio_leds_create(struct device *dev)
+ {
+ 	struct fwnode_handle *child;
+ 	struct gpio_leds_priv *priv;
+-	int count, ret;
++	int count, used, ret;
+ 
+ 	count = device_get_child_node_count(dev);
+ 	if (!count)
+@@ -159,9 +159,11 @@ static struct gpio_leds_priv *gpio_leds_create(struct device *dev)
+ 	priv = devm_kzalloc(dev, struct_size(priv, leds, count), GFP_KERNEL);
+ 	if (!priv)
+ 		return ERR_PTR(-ENOMEM);
++	priv->num_leds = count;
++	used = 0;
+ 
+ 	device_for_each_child_node(dev, child) {
+-		struct gpio_led_data *led_dat = &priv->leds[priv->num_leds];
++		struct gpio_led_data *led_dat = &priv->leds[used];
+ 		struct gpio_led led = {};
+ 
+ 		/*
+@@ -197,8 +199,9 @@ static struct gpio_leds_priv *gpio_leds_create(struct device *dev)
+ 		/* Set gpiod label to match the corresponding LED name. */
+ 		gpiod_set_consumer_name(led_dat->gpiod,
+ 					led_dat->cdev.dev->kobj.name);
+-		priv->num_leds++;
++		used++;
+ 	}
++	priv->num_leds = used;
+ 
+ 	return priv;
+ }
+diff --git a/drivers/leds/leds-pca995x.c b/drivers/leds/leds-pca995x.c
+index 78215dff14997c..11c7bb69573e8c 100644
+--- a/drivers/leds/leds-pca995x.c
++++ b/drivers/leds/leds-pca995x.c
+@@ -19,10 +19,6 @@
+ #define PCA995X_MODE1			0x00
+ #define PCA995X_MODE2			0x01
+ #define PCA995X_LEDOUT0			0x02
+-#define PCA9955B_PWM0			0x08
+-#define PCA9952_PWM0			0x0A
+-#define PCA9952_IREFALL			0x43
+-#define PCA9955B_IREFALL		0x45
+ 
+ /* Auto-increment disabled. Normal mode */
+ #define PCA995X_MODE1_CFG		0x00
+@@ -34,17 +30,38 @@
+ #define PCA995X_LDRX_MASK		0x3
+ #define PCA995X_LDRX_BITS		2
+ 
+-#define PCA995X_MAX_OUTPUTS		16
++#define PCA995X_MAX_OUTPUTS		24
+ #define PCA995X_OUTPUTS_PER_REG		4
+ 
+ #define PCA995X_IREFALL_FULL_CFG	0xFF
+ #define PCA995X_IREFALL_HALF_CFG	(PCA995X_IREFALL_FULL_CFG / 2)
+ 
+-#define PCA995X_TYPE_NON_B		0
+-#define PCA995X_TYPE_B			1
+-
+ #define ldev_to_led(c)	container_of(c, struct pca995x_led, ldev)
+ 
++struct pca995x_chipdef {
++	unsigned int num_leds;
++	u8 pwm_base;
++	u8 irefall;
++};
++
++static const struct pca995x_chipdef pca9952_chipdef = {
++	.num_leds	= 16,
++	.pwm_base	= 0x0a,
++	.irefall	= 0x43,
++};
++
++static const struct pca995x_chipdef pca9955b_chipdef = {
++	.num_leds	= 16,
++	.pwm_base	= 0x08,
++	.irefall	= 0x45,
++};
++
++static const struct pca995x_chipdef pca9956b_chipdef = {
++	.num_leds	= 24,
++	.pwm_base	= 0x0a,
++	.irefall	= 0x40,
++};
++
+ struct pca995x_led {
+ 	unsigned int led_no;
+ 	struct led_classdev ldev;
+@@ -54,7 +71,7 @@ struct pca995x_led {
+ struct pca995x_chip {
+ 	struct regmap *regmap;
+ 	struct pca995x_led leds[PCA995X_MAX_OUTPUTS];
+-	int btype;
++	const struct pca995x_chipdef *chipdef;
+ };
+ 
+ static int pca995x_brightness_set(struct led_classdev *led_cdev,
+@@ -62,10 +79,11 @@ static int pca995x_brightness_set(struct led_classdev *led_cdev,
+ {
+ 	struct pca995x_led *led = ldev_to_led(led_cdev);
+ 	struct pca995x_chip *chip = led->chip;
++	const struct pca995x_chipdef *chipdef = chip->chipdef;
+ 	u8 ledout_addr, pwmout_addr;
+ 	int shift, ret;
+ 
+-	pwmout_addr = (chip->btype ? PCA9955B_PWM0 : PCA9952_PWM0) + led->led_no;
++	pwmout_addr = chipdef->pwm_base + led->led_no;
+ 	ledout_addr = PCA995X_LEDOUT0 + (led->led_no / PCA995X_OUTPUTS_PER_REG);
+ 	shift = PCA995X_LDRX_BITS * (led->led_no % PCA995X_OUTPUTS_PER_REG);
+ 
+@@ -102,43 +120,38 @@ static const struct regmap_config pca995x_regmap = {
+ static int pca995x_probe(struct i2c_client *client)
+ {
+ 	struct fwnode_handle *led_fwnodes[PCA995X_MAX_OUTPUTS] = { 0 };
+-	struct fwnode_handle *np, *child;
+ 	struct device *dev = &client->dev;
++	const struct pca995x_chipdef *chipdef;
+ 	struct pca995x_chip *chip;
+ 	struct pca995x_led *led;
+-	int i, btype, reg, ret;
++	int i, j, reg, ret;
+ 
+-	btype = (unsigned long)device_get_match_data(&client->dev);
++	chipdef = device_get_match_data(&client->dev);
+ 
+-	np = dev_fwnode(dev);
+-	if (!np)
++	if (!dev_fwnode(dev))
+ 		return -ENODEV;
+ 
+ 	chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL);
+ 	if (!chip)
+ 		return -ENOMEM;
+ 
+-	chip->btype = btype;
++	chip->chipdef = chipdef;
+ 	chip->regmap = devm_regmap_init_i2c(client, &pca995x_regmap);
+ 	if (IS_ERR(chip->regmap))
+ 		return PTR_ERR(chip->regmap);
+ 
+ 	i2c_set_clientdata(client, chip);
+ 
+-	fwnode_for_each_available_child_node(np, child) {
++	device_for_each_child_node_scoped(dev, child) {
+ 		ret = fwnode_property_read_u32(child, "reg", &reg);
+-		if (ret) {
+-			fwnode_handle_put(child);
++		if (ret)
+ 			return ret;
+-		}
+ 
+-		if (reg < 0 || reg >= PCA995X_MAX_OUTPUTS || led_fwnodes[reg]) {
+-			fwnode_handle_put(child);
++		if (reg < 0 || reg >= PCA995X_MAX_OUTPUTS || led_fwnodes[reg])
+ 			return -EINVAL;
+-		}
+ 
+ 		led = &chip->leds[reg];
+-		led_fwnodes[reg] = child;
++		led_fwnodes[reg] = fwnode_handle_get(child);
+ 		led->chip = chip;
+ 		led->led_no = reg;
+ 		led->ldev.brightness_set_blocking = pca995x_brightness_set;
+@@ -157,7 +170,8 @@ static int pca995x_probe(struct i2c_client *client)
+ 						     &chip->leds[i].ldev,
+ 						     &init_data);
+ 		if (ret < 0) {
+-			fwnode_handle_put(child);
++			for (j = i; j < PCA995X_MAX_OUTPUTS; j++)
++				fwnode_handle_put(led_fwnodes[j]);
+ 			return dev_err_probe(dev, ret,
+ 					     "Could not register LED %s\n",
+ 					     chip->leds[i].ldev.name);
+@@ -170,21 +184,21 @@ static int pca995x_probe(struct i2c_client *client)
+ 		return ret;
+ 
+ 	/* IREF Output current value for all LEDn outputs */
+-	return regmap_write(chip->regmap,
+-			    btype ? PCA9955B_IREFALL : PCA9952_IREFALL,
+-			    PCA995X_IREFALL_HALF_CFG);
++	return regmap_write(chip->regmap, chipdef->irefall, PCA995X_IREFALL_HALF_CFG);
+ }
+ 
+ static const struct i2c_device_id pca995x_id[] = {
+-	{ "pca9952", .driver_data = (kernel_ulong_t)PCA995X_TYPE_NON_B },
+-	{ "pca9955b", .driver_data = (kernel_ulong_t)PCA995X_TYPE_B },
++	{ "pca9952", .driver_data = (kernel_ulong_t)&pca9952_chipdef },
++	{ "pca9955b", .driver_data = (kernel_ulong_t)&pca9955b_chipdef },
++	{ "pca9956b", .driver_data = (kernel_ulong_t)&pca9956b_chipdef },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(i2c, pca995x_id);
+ 
+ static const struct of_device_id pca995x_of_match[] = {
+-	{ .compatible = "nxp,pca9952",  .data = (void *)PCA995X_TYPE_NON_B },
+-	{ .compatible = "nxp,pca9955b", .data = (void *)PCA995X_TYPE_B },
++	{ .compatible = "nxp,pca9952", .data = &pca9952_chipdef },
++	{ .compatible = "nxp,pca9955b", . data = &pca9955b_chipdef },
++	{ .compatible = "nxp,pca9956b", .data = &pca9956b_chipdef },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, pca995x_of_match);
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index f7e9a3632eb3d9..499f8cc8a39fbf 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -496,8 +496,10 @@ static blk_status_t dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 
+ 		map = dm_get_live_table(md, &srcu_idx);
+ 		if (unlikely(!map)) {
++			DMERR_LIMIT("%s: mapping table unavailable, erroring io",
++				    dm_device_name(md));
+ 			dm_put_live_table(md, srcu_idx);
+-			return BLK_STS_RESOURCE;
++			return BLK_STS_IOERR;
+ 		}
+ 		ti = dm_table_find_target(map, 0);
+ 		dm_put_live_table(md, srcu_idx);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 6e15ac4e0845cd..a0b4afba8c608e 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1880,10 +1880,15 @@ static void dm_submit_bio(struct bio *bio)
+ 	struct dm_table *map;
+ 
+ 	map = dm_get_live_table(md, &srcu_idx);
++	if (unlikely(!map)) {
++		DMERR_LIMIT("%s: mapping table unavailable, erroring io",
++			    dm_device_name(md));
++		bio_io_error(bio);
++		goto out;
++	}
+ 
+-	/* If suspended, or map not yet available, queue this IO for later */
+-	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) ||
+-	    unlikely(!map)) {
++	/* If suspended, queue this IO for later */
++	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags))) {
+ 		if (bio->bi_opf & REQ_NOWAIT)
+ 			bio_wouldblock_error(bio);
+ 		else if (bio->bi_opf & REQ_RAHEAD)
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index a5b5801baa9e83..e832b8b5e631f2 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -8648,7 +8648,6 @@ void md_write_start(struct mddev *mddev, struct bio *bi)
+ 	BUG_ON(mddev->ro == MD_RDONLY);
+ 	if (mddev->ro == MD_AUTO_READ) {
+ 		/* need to switch to read/write */
+-		flush_work(&mddev->sync_work);
+ 		mddev->ro = MD_RDWR;
+ 		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 		md_wakeup_thread(mddev->thread);
+diff --git a/drivers/media/dvb-frontends/rtl2830.c b/drivers/media/dvb-frontends/rtl2830.c
+index 30d10fe4b33e34..320aa2bf99d423 100644
+--- a/drivers/media/dvb-frontends/rtl2830.c
++++ b/drivers/media/dvb-frontends/rtl2830.c
+@@ -609,7 +609,7 @@ static int rtl2830_pid_filter(struct dvb_frontend *fe, u8 index, u16 pid, int on
+ 		index, pid, onoff);
+ 
+ 	/* skip invalid PIDs (0x2000) */
+-	if (pid > 0x1fff || index > 32)
++	if (pid > 0x1fff || index >= 32)
+ 		return 0;
+ 
+ 	if (onoff)
+diff --git a/drivers/media/dvb-frontends/rtl2832.c b/drivers/media/dvb-frontends/rtl2832.c
+index 5142820b1b3d97..76c3f40443b2c9 100644
+--- a/drivers/media/dvb-frontends/rtl2832.c
++++ b/drivers/media/dvb-frontends/rtl2832.c
+@@ -983,7 +983,7 @@ static int rtl2832_pid_filter(struct dvb_frontend *fe, u8 index, u16 pid,
+ 		index, pid, onoff, dev->slave_ts);
+ 
+ 	/* skip invalid PIDs (0x2000) */
+-	if (pid > 0x1fff || index > 32)
++	if (pid > 0x1fff || index >= 32)
+ 		return 0;
+ 
+ 	if (onoff)
+diff --git a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_if.c b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_if.c
+index 73d5cef33b2abf..1e1b32faac77bc 100644
+--- a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_if.c
++++ b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_if.c
+@@ -347,11 +347,16 @@ static int vdec_h264_slice_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
+ 		return vpu_dec_reset(vpu);
+ 
+ 	fb = inst->ctx->dev->vdec_pdata->get_cap_buffer(inst->ctx);
++	if (!fb) {
++		mtk_vdec_err(inst->ctx, "fb buffer is NULL");
++		return -ENOMEM;
++	}
++
+ 	src_buf_info = container_of(bs, struct mtk_video_dec_buf, bs_buffer);
+ 	dst_buf_info = container_of(fb, struct mtk_video_dec_buf, frame_buffer);
+ 
+-	y_fb_dma = fb ? (u64)fb->base_y.dma_addr : 0;
+-	c_fb_dma = fb ? (u64)fb->base_c.dma_addr : 0;
++	y_fb_dma = fb->base_y.dma_addr;
++	c_fb_dma = fb->base_c.dma_addr;
+ 
+ 	mtk_vdec_debug(inst->ctx, "+ [%d] FB y_dma=%llx c_dma=%llx va=%p",
+ 		       inst->num_nalu, y_fb_dma, c_fb_dma, fb);
+diff --git a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_multi_if.c b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_multi_if.c
+index 2d4611e7fa0b2f..1ed0ccec56655e 100644
+--- a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_multi_if.c
++++ b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_h264_req_multi_if.c
+@@ -724,11 +724,16 @@ static int vdec_h264_slice_single_decode(void *h_vdec, struct mtk_vcodec_mem *bs
+ 		return vpu_dec_reset(vpu);
+ 
+ 	fb = inst->ctx->dev->vdec_pdata->get_cap_buffer(inst->ctx);
++	if (!fb) {
++		mtk_vdec_err(inst->ctx, "fb buffer is NULL");
++		return -ENOMEM;
++	}
++
+ 	src_buf_info = container_of(bs, struct mtk_video_dec_buf, bs_buffer);
+ 	dst_buf_info = container_of(fb, struct mtk_video_dec_buf, frame_buffer);
+ 
+-	y_fb_dma = fb ? (u64)fb->base_y.dma_addr : 0;
+-	c_fb_dma = fb ? (u64)fb->base_c.dma_addr : 0;
++	y_fb_dma = fb->base_y.dma_addr;
++	c_fb_dma = fb->base_c.dma_addr;
+ 	mtk_vdec_debug(inst->ctx, "[h264-dec] [%d] y_dma=%llx c_dma=%llx",
+ 		       inst->ctx->decoded_frame_cnt, y_fb_dma, c_fb_dma);
+ 
+diff --git a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp8_req_if.c b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp8_req_if.c
+index e27e728f392e6c..232ef3bd246a3d 100644
+--- a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp8_req_if.c
++++ b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp8_req_if.c
+@@ -334,14 +334,18 @@ static int vdec_vp8_slice_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
+ 	src_buf_info = container_of(bs, struct mtk_video_dec_buf, bs_buffer);
+ 
+ 	fb = inst->ctx->dev->vdec_pdata->get_cap_buffer(inst->ctx);
+-	dst_buf_info = container_of(fb, struct mtk_video_dec_buf, frame_buffer);
++	if (!fb) {
++		mtk_vdec_err(inst->ctx, "fb buffer is NULL");
++		return -ENOMEM;
++	}
+ 
+-	y_fb_dma = fb ? (u64)fb->base_y.dma_addr : 0;
++	dst_buf_info = container_of(fb, struct mtk_video_dec_buf, frame_buffer);
++	y_fb_dma = fb->base_y.dma_addr;
+ 	if (inst->ctx->q_data[MTK_Q_DATA_DST].fmt->num_planes == 1)
+ 		c_fb_dma = y_fb_dma +
+ 			inst->ctx->picinfo.buf_w * inst->ctx->picinfo.buf_h;
+ 	else
+-		c_fb_dma = fb ? (u64)fb->base_c.dma_addr : 0;
++		c_fb_dma = fb->base_c.dma_addr;
+ 
+ 	inst->vsi->dec.bs_dma = (u64)bs->dma_addr;
+ 	inst->vsi->dec.bs_sz = bs->size;
+diff --git a/drivers/media/platform/renesas/rzg2l-cru/rzg2l-csi2.c b/drivers/media/platform/renesas/rzg2l-cru/rzg2l-csi2.c
+index e68fcdaea207aa..c7fdee347ac8ae 100644
+--- a/drivers/media/platform/renesas/rzg2l-cru/rzg2l-csi2.c
++++ b/drivers/media/platform/renesas/rzg2l-cru/rzg2l-csi2.c
+@@ -865,6 +865,7 @@ static const struct of_device_id rzg2l_csi2_of_table[] = {
+ 	{ .compatible = "renesas,rzg2l-csi2", },
+ 	{ /* sentinel */ }
+ };
++MODULE_DEVICE_TABLE(of, rzg2l_csi2_of_table);
+ 
+ static struct platform_driver rzg2l_csi2_pdrv = {
+ 	.remove_new = rzg2l_csi2_remove,
+diff --git a/drivers/media/tuners/tuner-i2c.h b/drivers/media/tuners/tuner-i2c.h
+index 07aeead0644a31..724952e001cd13 100644
+--- a/drivers/media/tuners/tuner-i2c.h
++++ b/drivers/media/tuners/tuner-i2c.h
+@@ -133,10 +133,8 @@ static inline int tuner_i2c_xfer_send_recv(struct tuner_i2c_props *props,
+ 	}								\
+ 	if (0 == __ret) {						\
+ 		state = kzalloc(sizeof(type), GFP_KERNEL);		\
+-		if (!state) {						\
+-			__ret = -ENOMEM;				\
++		if (NULL == state)					\
+ 			goto __fail;					\
+-		}							\
+ 		state->i2c_props.addr = i2caddr;			\
+ 		state->i2c_props.adap = i2cadap;			\
+ 		state->i2c_props.name = devname;			\
+diff --git a/drivers/mtd/devices/powernv_flash.c b/drivers/mtd/devices/powernv_flash.c
+index 66044f4f5bade8..10cd1d9b48859d 100644
+--- a/drivers/mtd/devices/powernv_flash.c
++++ b/drivers/mtd/devices/powernv_flash.c
+@@ -207,6 +207,9 @@ static int powernv_flash_set_driver_info(struct device *dev,
+ 	 * get them
+ 	 */
+ 	mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);
++	if (!mtd->name)
++		return -ENOMEM;
++
+ 	mtd->type = MTD_NORFLASH;
+ 	mtd->flags = MTD_WRITEABLE;
+ 	mtd->size = size;
+diff --git a/drivers/mtd/devices/slram.c b/drivers/mtd/devices/slram.c
+index 28131a127d065e..8297b366a06699 100644
+--- a/drivers/mtd/devices/slram.c
++++ b/drivers/mtd/devices/slram.c
+@@ -296,10 +296,12 @@ static int __init init_slram(void)
+ 		T("slram: devname = %s\n", devname);
+ 		if ((!map) || (!(devstart = strsep(&map, ",")))) {
+ 			E("slram: No devicestart specified.\n");
++			break;
+ 		}
+ 		T("slram: devstart = %s\n", devstart);
+ 		if ((!map) || (!(devlength = strsep(&map, ",")))) {
+ 			E("slram: No devicelength / -end specified.\n");
++			break;
+ 		}
+ 		T("slram: devlength = %s\n", devlength);
+ 		if (parse_cmdline(devname, devstart, devlength) != 0) {
+diff --git a/drivers/mtd/nand/raw/mtk_nand.c b/drivers/mtd/nand/raw/mtk_nand.c
+index 17477bb2d48ff0..586868b4139f51 100644
+--- a/drivers/mtd/nand/raw/mtk_nand.c
++++ b/drivers/mtd/nand/raw/mtk_nand.c
+@@ -1429,16 +1429,32 @@ static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
+ 	return 0;
+ }
+ 
++static void mtk_nfc_nand_chips_cleanup(struct mtk_nfc *nfc)
++{
++	struct mtk_nfc_nand_chip *mtk_chip;
++	struct nand_chip *chip;
++	int ret;
++
++	while (!list_empty(&nfc->chips)) {
++		mtk_chip = list_first_entry(&nfc->chips,
++					    struct mtk_nfc_nand_chip, node);
++		chip = &mtk_chip->nand;
++		ret = mtd_device_unregister(nand_to_mtd(chip));
++		WARN_ON(ret);
++		nand_cleanup(chip);
++		list_del(&mtk_chip->node);
++	}
++}
++
+ static int mtk_nfc_nand_chips_init(struct device *dev, struct mtk_nfc *nfc)
+ {
+ 	struct device_node *np = dev->of_node;
+-	struct device_node *nand_np;
+ 	int ret;
+ 
+-	for_each_child_of_node(np, nand_np) {
++	for_each_child_of_node_scoped(np, nand_np) {
+ 		ret = mtk_nfc_nand_chip_init(dev, nfc, nand_np);
+ 		if (ret) {
+-			of_node_put(nand_np);
++			mtk_nfc_nand_chips_cleanup(nfc);
+ 			return ret;
+ 		}
+ 	}
+@@ -1570,20 +1586,8 @@ static int mtk_nfc_probe(struct platform_device *pdev)
+ static void mtk_nfc_remove(struct platform_device *pdev)
+ {
+ 	struct mtk_nfc *nfc = platform_get_drvdata(pdev);
+-	struct mtk_nfc_nand_chip *mtk_chip;
+-	struct nand_chip *chip;
+-	int ret;
+-
+-	while (!list_empty(&nfc->chips)) {
+-		mtk_chip = list_first_entry(&nfc->chips,
+-					    struct mtk_nfc_nand_chip, node);
+-		chip = &mtk_chip->nand;
+-		ret = mtd_device_unregister(nand_to_mtd(chip));
+-		WARN_ON(ret);
+-		nand_cleanup(chip);
+-		list_del(&mtk_chip->node);
+-	}
+ 
++	mtk_nfc_nand_chips_cleanup(nfc);
+ 	mtk_ecc_release(nfc->ecc);
+ }
+ 
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index 7aca0544fb29c9..e80992b4f9de9a 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -68,6 +68,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 	__be16 proto;
+ 	void *oiph;
+ 	int err;
++	int nh;
+ 
+ 	bareudp = rcu_dereference_sk_user_data(sk);
+ 	if (!bareudp)
+@@ -148,10 +149,25 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 	}
+ 	skb_dst_set(skb, &tun_dst->dst);
+ 	skb->dev = bareudp->dev;
+-	oiph = skb_network_header(skb);
+-	skb_reset_network_header(skb);
+ 	skb_reset_mac_header(skb);
+ 
++	/* Save offset of outer header relative to skb->head,
++	 * because we are going to reset the network header to the inner header
++	 * and might change skb->head.
++	 */
++	nh = skb_network_header(skb) - skb->head;
++
++	skb_reset_network_header(skb);
++
++	if (!pskb_inet_may_pull(skb)) {
++		DEV_STATS_INC(bareudp->dev, rx_length_errors);
++		DEV_STATS_INC(bareudp->dev, rx_errors);
++		goto drop;
++	}
++
++	/* Get the outer header. */
++	oiph = skb->head + nh;
++
+ 	if (!ipv6_mod_enabled() || family == AF_INET)
+ 		err = IP_ECN_decapsulate(oiph, skb);
+ 	else
+@@ -301,6 +317,9 @@ static int bareudp_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	__be32 saddr;
+ 	int err;
+ 
++	if (!skb_vlan_inet_prepare(skb, skb->protocol != htons(ETH_P_TEB)))
++		return -EINVAL;
++
+ 	if (!sock)
+ 		return -ESHUTDOWN;
+ 
+@@ -368,6 +387,9 @@ static int bareudp6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	__be16 sport;
+ 	int err;
+ 
++	if (!skb_vlan_inet_prepare(skb, skb->protocol != htons(ETH_P_TEB)))
++		return -EINVAL;
++
+ 	if (!sock)
+ 		return -ESHUTDOWN;
+ 
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 60db34095a2557..3c61f5fbe6b7fe 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -5536,9 +5536,9 @@ bond_xdp_get_xmit_slave(struct net_device *bond_dev, struct xdp_buff *xdp)
+ 		break;
+ 
+ 	default:
+-		/* Should never happen. Mode guarded by bond_xdp_check() */
+-		netdev_err(bond_dev, "Unknown bonding mode %d for xdp xmit\n", BOND_MODE(bond));
+-		WARN_ON_ONCE(1);
++		if (net_ratelimit())
++			netdev_err(bond_dev, "Unknown bonding mode %d for xdp xmit\n",
++				   BOND_MODE(bond));
+ 		return NULL;
+ 	}
+ 
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index ddde067f593fcf..0f9f17fdb3bb89 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -1720,11 +1720,7 @@ static int m_can_close(struct net_device *dev)
+ 
+ 	netif_stop_queue(dev);
+ 
+-	if (!cdev->is_peripheral)
+-		napi_disable(&cdev->napi);
+-
+ 	m_can_stop(dev);
+-	m_can_clk_stop(cdev);
+ 	free_irq(dev->irq, dev);
+ 
+ 	m_can_clean(dev);
+@@ -1733,10 +1729,13 @@ static int m_can_close(struct net_device *dev)
+ 		destroy_workqueue(cdev->tx_wq);
+ 		cdev->tx_wq = NULL;
+ 		can_rx_offload_disable(&cdev->offload);
++	} else {
++		napi_disable(&cdev->napi);
+ 	}
+ 
+ 	close_candev(dev);
+ 
++	m_can_clk_stop(cdev);
+ 	phy_power_off(cdev->transceiver);
+ 
+ 	return 0;
+@@ -1987,6 +1986,8 @@ static int m_can_open(struct net_device *dev)
+ 
+ 	if (cdev->is_peripheral)
+ 		can_rx_offload_enable(&cdev->offload);
++	else
++		napi_enable(&cdev->napi);
+ 
+ 	/* register interrupt handler */
+ 	if (cdev->is_peripheral) {
+@@ -2020,9 +2021,6 @@ static int m_can_open(struct net_device *dev)
+ 	if (err)
+ 		goto exit_start_fail;
+ 
+-	if (!cdev->is_peripheral)
+-		napi_enable(&cdev->napi);
+-
+ 	netif_start_queue(dev);
+ 
+ 	return 0;
+@@ -2036,6 +2034,8 @@ static int m_can_open(struct net_device *dev)
+ out_wq_fail:
+ 	if (cdev->is_peripheral)
+ 		can_rx_offload_disable(&cdev->offload);
++	else
++		napi_disable(&cdev->napi);
+ 	close_candev(dev);
+ exit_disable_clks:
+ 	m_can_clk_stop(cdev);
+diff --git a/drivers/net/can/usb/esd_usb.c b/drivers/net/can/usb/esd_usb.c
+index 41a0e4261d15e9..03ad10b01867d8 100644
+--- a/drivers/net/can/usb/esd_usb.c
++++ b/drivers/net/can/usb/esd_usb.c
+@@ -3,7 +3,7 @@
+  * CAN driver for esd electronics gmbh CAN-USB/2, CAN-USB/3 and CAN-USB/Micro
+  *
+  * Copyright (C) 2010-2012 esd electronic system design gmbh, Matthias Fuchs <socketcan@esd.eu>
+- * Copyright (C) 2022-2023 esd electronics gmbh, Frank Jungclaus <frank.jungclaus@esd.eu>
++ * Copyright (C) 2022-2024 esd electronics gmbh, Frank Jungclaus <frank.jungclaus@esd.eu>
+  */
+ 
+ #include <linux/can.h>
+@@ -1116,9 +1116,6 @@ static int esd_usb_3_set_bittiming(struct net_device *netdev)
+ 	if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
+ 		flags |= ESD_USB_3_BAUDRATE_FLAG_LOM;
+ 
+-	if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
+-		flags |= ESD_USB_3_BAUDRATE_FLAG_TRS;
+-
+ 	baud_x->nom.brp = cpu_to_le16(nom_bt->brp & (nom_btc->brp_max - 1));
+ 	baud_x->nom.sjw = cpu_to_le16(nom_bt->sjw & (nom_btc->sjw_max - 1));
+ 	baud_x->nom.tseg1 = cpu_to_le16((nom_bt->prop_seg + nom_bt->phase_seg1)
+@@ -1219,7 +1216,6 @@ static int esd_usb_probe_one_net(struct usb_interface *intf, int index)
+ 	switch (le16_to_cpu(dev->udev->descriptor.idProduct)) {
+ 	case ESD_USB_CANUSB3_PRODUCT_ID:
+ 		priv->can.clock.freq = ESD_USB_3_CAN_CLOCK;
+-		priv->can.ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;
+ 		priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD;
+ 		priv->can.bittiming_const = &esd_usb_3_nom_bittiming_const;
+ 		priv->can.data_bittiming_const = &esd_usb_3_data_bittiming_const;
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 5c45f42232d326..f04f42ea60c0f7 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -2305,12 +2305,11 @@ static int enetc_setup_irqs(struct enetc_ndev_priv *priv)
+ 
+ 		snprintf(v->name, sizeof(v->name), "%s-rxtx%d",
+ 			 priv->ndev->name, i);
+-		err = request_irq(irq, enetc_msix, 0, v->name, v);
++		err = request_irq(irq, enetc_msix, IRQF_NO_AUTOEN, v->name, v);
+ 		if (err) {
+ 			dev_err(priv->dev, "request_irq() failed!\n");
+ 			goto irq_err;
+ 		}
+-		disable_irq(irq);
+ 
+ 		v->tbier_base = hw->reg + ENETC_BDR(TX, 0, ENETC_TBIER);
+ 		v->rbier = hw->reg + ENETC_BDR(RX, i, ENETC_RBIER);
+diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
+index e7a03653824659..f9e43d171f1712 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf.h
++++ b/drivers/net/ethernet/intel/idpf/idpf.h
+@@ -17,10 +17,8 @@ struct idpf_vport_max_q;
+ #include <linux/sctp.h>
+ #include <linux/ethtool_netlink.h>
+ #include <net/gro.h>
+-#include <linux/dim.h>
+ 
+ #include "virtchnl2.h"
+-#include "idpf_lan_txrx.h"
+ #include "idpf_txrx.h"
+ #include "idpf_controlq.h"
+ 
+@@ -302,7 +300,7 @@ struct idpf_vport {
+ 	u16 num_txq_grp;
+ 	struct idpf_txq_group *txq_grps;
+ 	u32 txq_model;
+-	struct idpf_queue **txqs;
++	struct idpf_tx_queue **txqs;
+ 	bool crc_enable;
+ 
+ 	u16 num_rxq;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+index 1885ba618981df..e933fed16c7ea5 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+@@ -437,22 +437,24 @@ struct idpf_stats {
+ 	.stat_offset = offsetof(_type, _stat) \
+ }
+ 
+-/* Helper macro for defining some statistics related to queues */
+-#define IDPF_QUEUE_STAT(_name, _stat) \
+-	IDPF_STAT(struct idpf_queue, _name, _stat)
++/* Helper macros for defining some statistics related to queues */
++#define IDPF_RX_QUEUE_STAT(_name, _stat) \
++	IDPF_STAT(struct idpf_rx_queue, _name, _stat)
++#define IDPF_TX_QUEUE_STAT(_name, _stat) \
++	IDPF_STAT(struct idpf_tx_queue, _name, _stat)
+ 
+ /* Stats associated with a Tx queue */
+ static const struct idpf_stats idpf_gstrings_tx_queue_stats[] = {
+-	IDPF_QUEUE_STAT("pkts", q_stats.tx.packets),
+-	IDPF_QUEUE_STAT("bytes", q_stats.tx.bytes),
+-	IDPF_QUEUE_STAT("lso_pkts", q_stats.tx.lso_pkts),
++	IDPF_TX_QUEUE_STAT("pkts", q_stats.packets),
++	IDPF_TX_QUEUE_STAT("bytes", q_stats.bytes),
++	IDPF_TX_QUEUE_STAT("lso_pkts", q_stats.lso_pkts),
+ };
+ 
+ /* Stats associated with an Rx queue */
+ static const struct idpf_stats idpf_gstrings_rx_queue_stats[] = {
+-	IDPF_QUEUE_STAT("pkts", q_stats.rx.packets),
+-	IDPF_QUEUE_STAT("bytes", q_stats.rx.bytes),
+-	IDPF_QUEUE_STAT("rx_gro_hw_pkts", q_stats.rx.rsc_pkts),
++	IDPF_RX_QUEUE_STAT("pkts", q_stats.packets),
++	IDPF_RX_QUEUE_STAT("bytes", q_stats.bytes),
++	IDPF_RX_QUEUE_STAT("rx_gro_hw_pkts", q_stats.rsc_pkts),
+ };
+ 
+ #define IDPF_TX_QUEUE_STATS_LEN		ARRAY_SIZE(idpf_gstrings_tx_queue_stats)
+@@ -633,7 +635,7 @@ static int idpf_get_sset_count(struct net_device *netdev, int sset)
+  * Copies the stat data defined by the pointer and stat structure pair into
+  * the memory supplied as data. If the pointer is null, data will be zero'd.
+  */
+-static void idpf_add_one_ethtool_stat(u64 *data, void *pstat,
++static void idpf_add_one_ethtool_stat(u64 *data, const void *pstat,
+ 				      const struct idpf_stats *stat)
+ {
+ 	char *p;
+@@ -671,6 +673,7 @@ static void idpf_add_one_ethtool_stat(u64 *data, void *pstat,
+  * idpf_add_queue_stats - copy queue statistics into supplied buffer
+  * @data: ethtool stats buffer
+  * @q: the queue to copy
++ * @type: type of the queue
+  *
+  * Queue statistics must be copied while protected by u64_stats_fetch_begin,
+  * so we can't directly use idpf_add_ethtool_stats. Assumes that queue stats
+@@ -681,19 +684,23 @@ static void idpf_add_one_ethtool_stat(u64 *data, void *pstat,
+  *
+  * This function expects to be called while under rcu_read_lock().
+  */
+-static void idpf_add_queue_stats(u64 **data, struct idpf_queue *q)
++static void idpf_add_queue_stats(u64 **data, const void *q,
++				 enum virtchnl2_queue_type type)
+ {
++	const struct u64_stats_sync *stats_sync;
+ 	const struct idpf_stats *stats;
+ 	unsigned int start;
+ 	unsigned int size;
+ 	unsigned int i;
+ 
+-	if (q->q_type == VIRTCHNL2_QUEUE_TYPE_RX) {
++	if (type == VIRTCHNL2_QUEUE_TYPE_RX) {
+ 		size = IDPF_RX_QUEUE_STATS_LEN;
+ 		stats = idpf_gstrings_rx_queue_stats;
++		stats_sync = &((const struct idpf_rx_queue *)q)->stats_sync;
+ 	} else {
+ 		size = IDPF_TX_QUEUE_STATS_LEN;
+ 		stats = idpf_gstrings_tx_queue_stats;
++		stats_sync = &((const struct idpf_tx_queue *)q)->stats_sync;
+ 	}
+ 
+ 	/* To avoid invalid statistics values, ensure that we keep retrying
+@@ -701,10 +708,10 @@ static void idpf_add_queue_stats(u64 **data, struct idpf_queue *q)
+ 	 * u64_stats_fetch_retry.
+ 	 */
+ 	do {
+-		start = u64_stats_fetch_begin(&q->stats_sync);
++		start = u64_stats_fetch_begin(stats_sync);
+ 		for (i = 0; i < size; i++)
+ 			idpf_add_one_ethtool_stat(&(*data)[i], q, &stats[i]);
+-	} while (u64_stats_fetch_retry(&q->stats_sync, start));
++	} while (u64_stats_fetch_retry(stats_sync, start));
+ 
+ 	/* Once we successfully copy the stats in, update the data pointer */
+ 	*data += size;
+@@ -793,7 +800,7 @@ static void idpf_collect_queue_stats(struct idpf_vport *vport)
+ 		for (j = 0; j < num_rxq; j++) {
+ 			u64 hw_csum_err, hsplit, hsplit_hbo, bad_descs;
+ 			struct idpf_rx_queue_stats *stats;
+-			struct idpf_queue *rxq;
++			struct idpf_rx_queue *rxq;
+ 			unsigned int start;
+ 
+ 			if (idpf_is_queue_model_split(vport->rxq_model))
+@@ -807,7 +814,7 @@ static void idpf_collect_queue_stats(struct idpf_vport *vport)
+ 			do {
+ 				start = u64_stats_fetch_begin(&rxq->stats_sync);
+ 
+-				stats = &rxq->q_stats.rx;
++				stats = &rxq->q_stats;
+ 				hw_csum_err = u64_stats_read(&stats->hw_csum_err);
+ 				hsplit = u64_stats_read(&stats->hsplit_pkts);
+ 				hsplit_hbo = u64_stats_read(&stats->hsplit_buf_ovf);
+@@ -828,7 +835,7 @@ static void idpf_collect_queue_stats(struct idpf_vport *vport)
+ 
+ 		for (j = 0; j < txq_grp->num_txq; j++) {
+ 			u64 linearize, qbusy, skb_drops, dma_map_errs;
+-			struct idpf_queue *txq = txq_grp->txqs[j];
++			struct idpf_tx_queue *txq = txq_grp->txqs[j];
+ 			struct idpf_tx_queue_stats *stats;
+ 			unsigned int start;
+ 
+@@ -838,7 +845,7 @@ static void idpf_collect_queue_stats(struct idpf_vport *vport)
+ 			do {
+ 				start = u64_stats_fetch_begin(&txq->stats_sync);
+ 
+-				stats = &txq->q_stats.tx;
++				stats = &txq->q_stats;
+ 				linearize = u64_stats_read(&stats->linearize);
+ 				qbusy = u64_stats_read(&stats->q_busy);
+ 				skb_drops = u64_stats_read(&stats->skb_drops);
+@@ -896,12 +903,12 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
+ 		qtype = VIRTCHNL2_QUEUE_TYPE_TX;
+ 
+ 		for (j = 0; j < txq_grp->num_txq; j++, total++) {
+-			struct idpf_queue *txq = txq_grp->txqs[j];
++			struct idpf_tx_queue *txq = txq_grp->txqs[j];
+ 
+ 			if (!txq)
+ 				idpf_add_empty_queue_stats(&data, qtype);
+ 			else
+-				idpf_add_queue_stats(&data, txq);
++				idpf_add_queue_stats(&data, txq, qtype);
+ 		}
+ 	}
+ 
+@@ -929,7 +936,7 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
+ 			num_rxq = rxq_grp->singleq.num_rxq;
+ 
+ 		for (j = 0; j < num_rxq; j++, total++) {
+-			struct idpf_queue *rxq;
++			struct idpf_rx_queue *rxq;
+ 
+ 			if (is_splitq)
+ 				rxq = &rxq_grp->splitq.rxq_sets[j]->rxq;
+@@ -938,7 +945,7 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
+ 			if (!rxq)
+ 				idpf_add_empty_queue_stats(&data, qtype);
+ 			else
+-				idpf_add_queue_stats(&data, rxq);
++				idpf_add_queue_stats(&data, rxq, qtype);
+ 
+ 			/* In splitq mode, don't get page pool stats here since
+ 			 * the pools are attached to the buffer queues
+@@ -953,7 +960,7 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
+ 
+ 	for (i = 0; i < vport->num_rxq_grp; i++) {
+ 		for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+-			struct idpf_queue *rxbufq =
++			struct idpf_buf_queue *rxbufq =
+ 				&vport->rxq_grps[i].splitq.bufq_sets[j].bufq;
+ 
+ 			page_pool_get_stats(rxbufq->pp, &pp_stats);
+@@ -971,60 +978,64 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
+ }
+ 
+ /**
+- * idpf_find_rxq - find rxq from q index
++ * idpf_find_rxq_vec - find rxq vector from q index
+  * @vport: virtual port associated to queue
+  * @q_num: q index used to find queue
+  *
+- * returns pointer to rx queue
++ * returns pointer to rx vector
+  */
+-static struct idpf_queue *idpf_find_rxq(struct idpf_vport *vport, int q_num)
++static struct idpf_q_vector *idpf_find_rxq_vec(const struct idpf_vport *vport,
++					       int q_num)
+ {
+ 	int q_grp, q_idx;
+ 
+ 	if (!idpf_is_queue_model_split(vport->rxq_model))
+-		return vport->rxq_grps->singleq.rxqs[q_num];
++		return vport->rxq_grps->singleq.rxqs[q_num]->q_vector;
+ 
+ 	q_grp = q_num / IDPF_DFLT_SPLITQ_RXQ_PER_GROUP;
+ 	q_idx = q_num % IDPF_DFLT_SPLITQ_RXQ_PER_GROUP;
+ 
+-	return &vport->rxq_grps[q_grp].splitq.rxq_sets[q_idx]->rxq;
++	return vport->rxq_grps[q_grp].splitq.rxq_sets[q_idx]->rxq.q_vector;
+ }
+ 
+ /**
+- * idpf_find_txq - find txq from q index
++ * idpf_find_txq_vec - find txq vector from q index
+  * @vport: virtual port associated to queue
+  * @q_num: q index used to find queue
+  *
+- * returns pointer to tx queue
++ * returns pointer to tx vector
+  */
+-static struct idpf_queue *idpf_find_txq(struct idpf_vport *vport, int q_num)
++static struct idpf_q_vector *idpf_find_txq_vec(const struct idpf_vport *vport,
++					       int q_num)
+ {
+ 	int q_grp;
+ 
+ 	if (!idpf_is_queue_model_split(vport->txq_model))
+-		return vport->txqs[q_num];
++		return vport->txqs[q_num]->q_vector;
+ 
+ 	q_grp = q_num / IDPF_DFLT_SPLITQ_TXQ_PER_GROUP;
+ 
+-	return vport->txq_grps[q_grp].complq;
++	return vport->txq_grps[q_grp].complq->q_vector;
+ }
+ 
+ /**
+  * __idpf_get_q_coalesce - get ITR values for specific queue
+  * @ec: ethtool structure to fill with driver's coalesce settings
+- * @q: quuee of Rx or Tx
++ * @q_vector: queue vector corresponding to this queue
++ * @type: queue type
+  */
+ static void __idpf_get_q_coalesce(struct ethtool_coalesce *ec,
+-				  struct idpf_queue *q)
++				  const struct idpf_q_vector *q_vector,
++				  enum virtchnl2_queue_type type)
+ {
+-	if (q->q_type == VIRTCHNL2_QUEUE_TYPE_RX) {
++	if (type == VIRTCHNL2_QUEUE_TYPE_RX) {
+ 		ec->use_adaptive_rx_coalesce =
+-				IDPF_ITR_IS_DYNAMIC(q->q_vector->rx_intr_mode);
+-		ec->rx_coalesce_usecs = q->q_vector->rx_itr_value;
++				IDPF_ITR_IS_DYNAMIC(q_vector->rx_intr_mode);
++		ec->rx_coalesce_usecs = q_vector->rx_itr_value;
+ 	} else {
+ 		ec->use_adaptive_tx_coalesce =
+-				IDPF_ITR_IS_DYNAMIC(q->q_vector->tx_intr_mode);
+-		ec->tx_coalesce_usecs = q->q_vector->tx_itr_value;
++				IDPF_ITR_IS_DYNAMIC(q_vector->tx_intr_mode);
++		ec->tx_coalesce_usecs = q_vector->tx_itr_value;
+ 	}
+ }
+ 
+@@ -1040,8 +1051,8 @@ static int idpf_get_q_coalesce(struct net_device *netdev,
+ 			       struct ethtool_coalesce *ec,
+ 			       u32 q_num)
+ {
+-	struct idpf_netdev_priv *np = netdev_priv(netdev);
+-	struct idpf_vport *vport;
++	const struct idpf_netdev_priv *np = netdev_priv(netdev);
++	const struct idpf_vport *vport;
+ 	int err = 0;
+ 
+ 	idpf_vport_ctrl_lock(netdev);
+@@ -1056,10 +1067,12 @@ static int idpf_get_q_coalesce(struct net_device *netdev,
+ 	}
+ 
+ 	if (q_num < vport->num_rxq)
+-		__idpf_get_q_coalesce(ec, idpf_find_rxq(vport, q_num));
++		__idpf_get_q_coalesce(ec, idpf_find_rxq_vec(vport, q_num),
++				      VIRTCHNL2_QUEUE_TYPE_RX);
+ 
+ 	if (q_num < vport->num_txq)
+-		__idpf_get_q_coalesce(ec, idpf_find_txq(vport, q_num));
++		__idpf_get_q_coalesce(ec, idpf_find_txq_vec(vport, q_num),
++				      VIRTCHNL2_QUEUE_TYPE_TX);
+ 
+ unlock_mutex:
+ 	idpf_vport_ctrl_unlock(netdev);
+@@ -1103,16 +1116,15 @@ static int idpf_get_per_q_coalesce(struct net_device *netdev, u32 q_num,
+ /**
+  * __idpf_set_q_coalesce - set ITR values for specific queue
+  * @ec: ethtool structure from user to update ITR settings
+- * @q: queue for which itr values has to be set
++ * @qv: queue vector for which itr values has to be set
+  * @is_rxq: is queue type rx
+  *
+  * Returns 0 on success, negative otherwise.
+  */
+-static int __idpf_set_q_coalesce(struct ethtool_coalesce *ec,
+-				 struct idpf_queue *q, bool is_rxq)
++static int __idpf_set_q_coalesce(const struct ethtool_coalesce *ec,
++				 struct idpf_q_vector *qv, bool is_rxq)
+ {
+ 	u32 use_adaptive_coalesce, coalesce_usecs;
+-	struct idpf_q_vector *qv = q->q_vector;
+ 	bool is_dim_ena = false;
+ 	u16 itr_val;
+ 
+@@ -1128,7 +1140,7 @@ static int __idpf_set_q_coalesce(struct ethtool_coalesce *ec,
+ 		itr_val = qv->tx_itr_value;
+ 	}
+ 	if (coalesce_usecs != itr_val && use_adaptive_coalesce) {
+-		netdev_err(q->vport->netdev, "Cannot set coalesce usecs if adaptive enabled\n");
++		netdev_err(qv->vport->netdev, "Cannot set coalesce usecs if adaptive enabled\n");
+ 
+ 		return -EINVAL;
+ 	}
+@@ -1137,7 +1149,7 @@ static int __idpf_set_q_coalesce(struct ethtool_coalesce *ec,
+ 		return 0;
+ 
+ 	if (coalesce_usecs > IDPF_ITR_MAX) {
+-		netdev_err(q->vport->netdev,
++		netdev_err(qv->vport->netdev,
+ 			   "Invalid value, %d-usecs range is 0-%d\n",
+ 			   coalesce_usecs, IDPF_ITR_MAX);
+ 
+@@ -1146,7 +1158,7 @@ static int __idpf_set_q_coalesce(struct ethtool_coalesce *ec,
+ 
+ 	if (coalesce_usecs % 2) {
+ 		coalesce_usecs--;
+-		netdev_info(q->vport->netdev,
++		netdev_info(qv->vport->netdev,
+ 			    "HW only supports even ITR values, ITR rounded to %d\n",
+ 			    coalesce_usecs);
+ 	}
+@@ -1185,15 +1197,16 @@ static int __idpf_set_q_coalesce(struct ethtool_coalesce *ec,
+  *
+  * Return 0 on success, and negative on failure
+  */
+-static int idpf_set_q_coalesce(struct idpf_vport *vport,
+-			       struct ethtool_coalesce *ec,
++static int idpf_set_q_coalesce(const struct idpf_vport *vport,
++			       const struct ethtool_coalesce *ec,
+ 			       int q_num, bool is_rxq)
+ {
+-	struct idpf_queue *q;
++	struct idpf_q_vector *qv;
+ 
+-	q = is_rxq ? idpf_find_rxq(vport, q_num) : idpf_find_txq(vport, q_num);
++	qv = is_rxq ? idpf_find_rxq_vec(vport, q_num) :
++		      idpf_find_txq_vec(vport, q_num);
+ 
+-	if (q && __idpf_set_q_coalesce(ec, q, is_rxq))
++	if (qv && __idpf_set_q_coalesce(ec, qv, is_rxq))
+ 		return -EINVAL;
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h
+index a5752dcab8887c..8c7f8ef8f1a153 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h
++++ b/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h
+@@ -4,6 +4,8 @@
+ #ifndef _IDPF_LAN_TXRX_H_
+ #define _IDPF_LAN_TXRX_H_
+ 
++#include <linux/bits.h>
++
+ enum idpf_rss_hash {
+ 	IDPF_HASH_INVALID			= 0,
+ 	/* Values 1 - 28 are reserved for future use */
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index 3ac9d7ab83f20f..5e336f64bc25e3 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -4,8 +4,7 @@
+ #include "idpf.h"
+ #include "idpf_virtchnl.h"
+ 
+-static const struct net_device_ops idpf_netdev_ops_splitq;
+-static const struct net_device_ops idpf_netdev_ops_singleq;
++static const struct net_device_ops idpf_netdev_ops;
+ 
+ /**
+  * idpf_init_vector_stack - Fill the MSIX vector stack with vector index
+@@ -765,10 +764,7 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
+ 	}
+ 
+ 	/* assign netdev_ops */
+-	if (idpf_is_queue_model_split(vport->txq_model))
+-		netdev->netdev_ops = &idpf_netdev_ops_splitq;
+-	else
+-		netdev->netdev_ops = &idpf_netdev_ops_singleq;
++	netdev->netdev_ops = &idpf_netdev_ops;
+ 
+ 	/* setup watchdog timeout value to be 5 second */
+ 	netdev->watchdog_timeo = 5 * HZ;
+@@ -1318,14 +1314,14 @@ static void idpf_rx_init_buf_tail(struct idpf_vport *vport)
+ 
+ 		if (idpf_is_queue_model_split(vport->rxq_model)) {
+ 			for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+-				struct idpf_queue *q =
++				const struct idpf_buf_queue *q =
+ 					&grp->splitq.bufq_sets[j].bufq;
+ 
+ 				writel(q->next_to_alloc, q->tail);
+ 			}
+ 		} else {
+ 			for (j = 0; j < grp->singleq.num_rxq; j++) {
+-				struct idpf_queue *q =
++				const struct idpf_rx_queue *q =
+ 					grp->singleq.rxqs[j];
+ 
+ 				writel(q->next_to_alloc, q->tail);
+@@ -1852,7 +1848,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
+ 	enum idpf_vport_state current_state = np->state;
+ 	struct idpf_adapter *adapter = vport->adapter;
+ 	struct idpf_vport *new_vport;
+-	int err, i;
++	int err;
+ 
+ 	/* If the system is low on memory, we can end up in bad state if we
+ 	 * free all the memory for queue resources and try to allocate them
+@@ -1923,46 +1919,6 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
+ 	 */
+ 	memcpy(vport, new_vport, offsetof(struct idpf_vport, link_speed_mbps));
+ 
+-	/* Since idpf_vport_queues_alloc was called with new_port, the queue
+-	 * back pointers are currently pointing to the local new_vport. Reset
+-	 * the backpointers to the original vport here
+-	 */
+-	for (i = 0; i < vport->num_txq_grp; i++) {
+-		struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+-		int j;
+-
+-		tx_qgrp->vport = vport;
+-		for (j = 0; j < tx_qgrp->num_txq; j++)
+-			tx_qgrp->txqs[j]->vport = vport;
+-
+-		if (idpf_is_queue_model_split(vport->txq_model))
+-			tx_qgrp->complq->vport = vport;
+-	}
+-
+-	for (i = 0; i < vport->num_rxq_grp; i++) {
+-		struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+-		struct idpf_queue *q;
+-		u16 num_rxq;
+-		int j;
+-
+-		rx_qgrp->vport = vport;
+-		for (j = 0; j < vport->num_bufqs_per_qgrp; j++)
+-			rx_qgrp->splitq.bufq_sets[j].bufq.vport = vport;
+-
+-		if (idpf_is_queue_model_split(vport->rxq_model))
+-			num_rxq = rx_qgrp->splitq.num_rxq_sets;
+-		else
+-			num_rxq = rx_qgrp->singleq.num_rxq;
+-
+-		for (j = 0; j < num_rxq; j++) {
+-			if (idpf_is_queue_model_split(vport->rxq_model))
+-				q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+-			else
+-				q = rx_qgrp->singleq.rxqs[j];
+-			q->vport = vport;
+-		}
+-	}
+-
+ 	if (reset_cause == IDPF_SR_Q_CHANGE)
+ 		idpf_vport_alloc_vec_indexes(vport);
+ 
+@@ -2393,24 +2349,10 @@ void idpf_free_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem)
+ 	mem->pa = 0;
+ }
+ 
+-static const struct net_device_ops idpf_netdev_ops_splitq = {
+-	.ndo_open = idpf_open,
+-	.ndo_stop = idpf_stop,
+-	.ndo_start_xmit = idpf_tx_splitq_start,
+-	.ndo_features_check = idpf_features_check,
+-	.ndo_set_rx_mode = idpf_set_rx_mode,
+-	.ndo_validate_addr = eth_validate_addr,
+-	.ndo_set_mac_address = idpf_set_mac,
+-	.ndo_change_mtu = idpf_change_mtu,
+-	.ndo_get_stats64 = idpf_get_stats64,
+-	.ndo_set_features = idpf_set_features,
+-	.ndo_tx_timeout = idpf_tx_timeout,
+-};
+-
+-static const struct net_device_ops idpf_netdev_ops_singleq = {
++static const struct net_device_ops idpf_netdev_ops = {
+ 	.ndo_open = idpf_open,
+ 	.ndo_stop = idpf_stop,
+-	.ndo_start_xmit = idpf_tx_singleq_start,
++	.ndo_start_xmit = idpf_tx_start,
+ 	.ndo_features_check = idpf_features_check,
+ 	.ndo_set_rx_mode = idpf_set_rx_mode,
+ 	.ndo_validate_addr = eth_validate_addr,
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+index 27b93592c4babb..5e5fa2d0aa4d18 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+@@ -186,7 +186,7 @@ static int idpf_tx_singleq_csum(struct sk_buff *skb,
+  * and gets a physical address for each memory location and programs
+  * it and the length into the transmit base mode descriptor.
+  */
+-static void idpf_tx_singleq_map(struct idpf_queue *tx_q,
++static void idpf_tx_singleq_map(struct idpf_tx_queue *tx_q,
+ 				struct idpf_tx_buf *first,
+ 				struct idpf_tx_offload_params *offloads)
+ {
+@@ -205,12 +205,12 @@ static void idpf_tx_singleq_map(struct idpf_queue *tx_q,
+ 	data_len = skb->data_len;
+ 	size = skb_headlen(skb);
+ 
+-	tx_desc = IDPF_BASE_TX_DESC(tx_q, i);
++	tx_desc = &tx_q->base_tx[i];
+ 
+ 	dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE);
+ 
+ 	/* write each descriptor with CRC bit */
+-	if (tx_q->vport->crc_enable)
++	if (idpf_queue_has(CRC_EN, tx_q))
+ 		td_cmd |= IDPF_TX_DESC_CMD_ICRC;
+ 
+ 	for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+@@ -239,7 +239,7 @@ static void idpf_tx_singleq_map(struct idpf_queue *tx_q,
+ 			i++;
+ 
+ 			if (i == tx_q->desc_count) {
+-				tx_desc = IDPF_BASE_TX_DESC(tx_q, 0);
++				tx_desc = &tx_q->base_tx[0];
+ 				i = 0;
+ 			}
+ 
+@@ -259,7 +259,7 @@ static void idpf_tx_singleq_map(struct idpf_queue *tx_q,
+ 		i++;
+ 
+ 		if (i == tx_q->desc_count) {
+-			tx_desc = IDPF_BASE_TX_DESC(tx_q, 0);
++			tx_desc = &tx_q->base_tx[0];
+ 			i = 0;
+ 		}
+ 
+@@ -285,7 +285,7 @@ static void idpf_tx_singleq_map(struct idpf_queue *tx_q,
+ 	/* set next_to_watch value indicating a packet is present */
+ 	first->next_to_watch = tx_desc;
+ 
+-	nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
++	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
+ 	netdev_tx_sent_queue(nq, first->bytecount);
+ 
+ 	idpf_tx_buf_hw_update(tx_q, i, netdev_xmit_more());
+@@ -299,7 +299,7 @@ static void idpf_tx_singleq_map(struct idpf_queue *tx_q,
+  * ring entry to reflect that this index is a context descriptor
+  */
+ static struct idpf_base_tx_ctx_desc *
+-idpf_tx_singleq_get_ctx_desc(struct idpf_queue *txq)
++idpf_tx_singleq_get_ctx_desc(struct idpf_tx_queue *txq)
+ {
+ 	struct idpf_base_tx_ctx_desc *ctx_desc;
+ 	int ntu = txq->next_to_use;
+@@ -307,7 +307,7 @@ idpf_tx_singleq_get_ctx_desc(struct idpf_queue *txq)
+ 	memset(&txq->tx_buf[ntu], 0, sizeof(struct idpf_tx_buf));
+ 	txq->tx_buf[ntu].ctx_entry = true;
+ 
+-	ctx_desc = IDPF_BASE_TX_CTX_DESC(txq, ntu);
++	ctx_desc = &txq->base_ctx[ntu];
+ 
+ 	IDPF_SINGLEQ_BUMP_RING_IDX(txq, ntu);
+ 	txq->next_to_use = ntu;
+@@ -320,7 +320,7 @@ idpf_tx_singleq_get_ctx_desc(struct idpf_queue *txq)
+  * @txq: queue to send buffer on
+  * @offload: offload parameter structure
+  **/
+-static void idpf_tx_singleq_build_ctx_desc(struct idpf_queue *txq,
++static void idpf_tx_singleq_build_ctx_desc(struct idpf_tx_queue *txq,
+ 					   struct idpf_tx_offload_params *offload)
+ {
+ 	struct idpf_base_tx_ctx_desc *desc = idpf_tx_singleq_get_ctx_desc(txq);
+@@ -333,7 +333,7 @@ static void idpf_tx_singleq_build_ctx_desc(struct idpf_queue *txq,
+ 		qw1 |= FIELD_PREP(IDPF_TXD_CTX_QW1_MSS_M, offload->mss);
+ 
+ 		u64_stats_update_begin(&txq->stats_sync);
+-		u64_stats_inc(&txq->q_stats.tx.lso_pkts);
++		u64_stats_inc(&txq->q_stats.lso_pkts);
+ 		u64_stats_update_end(&txq->stats_sync);
+ 	}
+ 
+@@ -351,8 +351,8 @@ static void idpf_tx_singleq_build_ctx_desc(struct idpf_queue *txq,
+  *
+  * Returns NETDEV_TX_OK if sent, else an error code
+  */
+-static netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
+-					 struct idpf_queue *tx_q)
++netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
++				  struct idpf_tx_queue *tx_q)
+ {
+ 	struct idpf_tx_offload_params offload = { };
+ 	struct idpf_tx_buf *first;
+@@ -369,6 +369,10 @@ static netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
+ 				      IDPF_TX_DESCS_FOR_CTX)) {
+ 		idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+ 
++		u64_stats_update_begin(&tx_q->stats_sync);
++		u64_stats_inc(&tx_q->q_stats.q_busy);
++		u64_stats_update_end(&tx_q->stats_sync);
++
+ 		return NETDEV_TX_BUSY;
+ 	}
+ 
+@@ -408,33 +412,6 @@ static netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
+ 	return idpf_tx_drop_skb(tx_q, skb);
+ }
+ 
+-/**
+- * idpf_tx_singleq_start - Selects the right Tx queue to send buffer
+- * @skb: send buffer
+- * @netdev: network interface device structure
+- *
+- * Returns NETDEV_TX_OK if sent, else an error code
+- */
+-netdev_tx_t idpf_tx_singleq_start(struct sk_buff *skb,
+-				  struct net_device *netdev)
+-{
+-	struct idpf_vport *vport = idpf_netdev_to_vport(netdev);
+-	struct idpf_queue *tx_q;
+-
+-	tx_q = vport->txqs[skb_get_queue_mapping(skb)];
+-
+-	/* hardware can't handle really short frames, hardware padding works
+-	 * beyond this point
+-	 */
+-	if (skb_put_padto(skb, IDPF_TX_MIN_PKT_LEN)) {
+-		idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+-
+-		return NETDEV_TX_OK;
+-	}
+-
+-	return idpf_tx_singleq_frame(skb, tx_q);
+-}
+-
+ /**
+  * idpf_tx_singleq_clean - Reclaim resources from queue
+  * @tx_q: Tx queue to clean
+@@ -442,20 +419,19 @@ netdev_tx_t idpf_tx_singleq_start(struct sk_buff *skb,
+  * @cleaned: returns number of packets cleaned
+  *
+  */
+-static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget,
++static bool idpf_tx_singleq_clean(struct idpf_tx_queue *tx_q, int napi_budget,
+ 				  int *cleaned)
+ {
+-	unsigned int budget = tx_q->vport->compln_clean_budget;
+ 	unsigned int total_bytes = 0, total_pkts = 0;
+ 	struct idpf_base_tx_desc *tx_desc;
++	u32 budget = tx_q->clean_budget;
+ 	s16 ntc = tx_q->next_to_clean;
+ 	struct idpf_netdev_priv *np;
+ 	struct idpf_tx_buf *tx_buf;
+-	struct idpf_vport *vport;
+ 	struct netdev_queue *nq;
+ 	bool dont_wake;
+ 
+-	tx_desc = IDPF_BASE_TX_DESC(tx_q, ntc);
++	tx_desc = &tx_q->base_tx[ntc];
+ 	tx_buf = &tx_q->tx_buf[ntc];
+ 	ntc -= tx_q->desc_count;
+ 
+@@ -517,7 +493,7 @@ static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget,
+ 			if (unlikely(!ntc)) {
+ 				ntc -= tx_q->desc_count;
+ 				tx_buf = tx_q->tx_buf;
+-				tx_desc = IDPF_BASE_TX_DESC(tx_q, 0);
++				tx_desc = &tx_q->base_tx[0];
+ 			}
+ 
+ 			/* unmap any remaining paged data */
+@@ -540,7 +516,7 @@ static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget,
+ 		if (unlikely(!ntc)) {
+ 			ntc -= tx_q->desc_count;
+ 			tx_buf = tx_q->tx_buf;
+-			tx_desc = IDPF_BASE_TX_DESC(tx_q, 0);
++			tx_desc = &tx_q->base_tx[0];
+ 		}
+ 	} while (likely(budget));
+ 
+@@ -550,16 +526,15 @@ static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget,
+ 	*cleaned += total_pkts;
+ 
+ 	u64_stats_update_begin(&tx_q->stats_sync);
+-	u64_stats_add(&tx_q->q_stats.tx.packets, total_pkts);
+-	u64_stats_add(&tx_q->q_stats.tx.bytes, total_bytes);
++	u64_stats_add(&tx_q->q_stats.packets, total_pkts);
++	u64_stats_add(&tx_q->q_stats.bytes, total_bytes);
+ 	u64_stats_update_end(&tx_q->stats_sync);
+ 
+-	vport = tx_q->vport;
+-	np = netdev_priv(vport->netdev);
+-	nq = netdev_get_tx_queue(vport->netdev, tx_q->idx);
++	np = netdev_priv(tx_q->netdev);
++	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
+ 
+ 	dont_wake = np->state != __IDPF_VPORT_UP ||
+-		    !netif_carrier_ok(vport->netdev);
++		    !netif_carrier_ok(tx_q->netdev);
+ 	__netif_txq_completed_wake(nq, total_pkts, total_bytes,
+ 				   IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH,
+ 				   dont_wake);
+@@ -584,7 +559,7 @@ static bool idpf_tx_singleq_clean_all(struct idpf_q_vector *q_vec, int budget,
+ 
+ 	budget_per_q = num_txq ? max(budget / num_txq, 1) : 0;
+ 	for (i = 0; i < num_txq; i++) {
+-		struct idpf_queue *q;
++		struct idpf_tx_queue *q;
+ 
+ 		q = q_vec->tx[i];
+ 		clean_complete &= idpf_tx_singleq_clean(q, budget_per_q,
+@@ -614,14 +589,9 @@ static bool idpf_rx_singleq_test_staterr(const union virtchnl2_rx_desc *rx_desc,
+ 
+ /**
+  * idpf_rx_singleq_is_non_eop - process handling of non-EOP buffers
+- * @rxq: Rx ring being processed
+  * @rx_desc: Rx descriptor for current buffer
+- * @skb: Current socket buffer containing buffer in progress
+- * @ntc: next to clean
+  */
+-static bool idpf_rx_singleq_is_non_eop(struct idpf_queue *rxq,
+-				       union virtchnl2_rx_desc *rx_desc,
+-				       struct sk_buff *skb, u16 ntc)
++static bool idpf_rx_singleq_is_non_eop(const union virtchnl2_rx_desc *rx_desc)
+ {
+ 	/* if we are the last buffer then there is nothing else to do */
+ 	if (likely(idpf_rx_singleq_test_staterr(rx_desc, IDPF_RXD_EOF_SINGLEQ)))
+@@ -639,7 +609,7 @@ static bool idpf_rx_singleq_is_non_eop(struct idpf_queue *rxq,
+  *
+  * skb->protocol must be set before this function is called
+  */
+-static void idpf_rx_singleq_csum(struct idpf_queue *rxq, struct sk_buff *skb,
++static void idpf_rx_singleq_csum(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+ 				 struct idpf_rx_csum_decoded *csum_bits,
+ 				 u16 ptype)
+ {
+@@ -647,14 +617,14 @@ static void idpf_rx_singleq_csum(struct idpf_queue *rxq, struct sk_buff *skb,
+ 	bool ipv4, ipv6;
+ 
+ 	/* check if Rx checksum is enabled */
+-	if (unlikely(!(rxq->vport->netdev->features & NETIF_F_RXCSUM)))
++	if (unlikely(!(rxq->netdev->features & NETIF_F_RXCSUM)))
+ 		return;
+ 
+ 	/* check if HW has decoded the packet and checksum */
+ 	if (unlikely(!(csum_bits->l3l4p)))
+ 		return;
+ 
+-	decoded = rxq->vport->rx_ptype_lkup[ptype];
++	decoded = rxq->rx_ptype_lkup[ptype];
+ 	if (unlikely(!(decoded.known && decoded.outer_ip)))
+ 		return;
+ 
+@@ -707,7 +677,7 @@ static void idpf_rx_singleq_csum(struct idpf_queue *rxq, struct sk_buff *skb,
+ 
+ checksum_fail:
+ 	u64_stats_update_begin(&rxq->stats_sync);
+-	u64_stats_inc(&rxq->q_stats.rx.hw_csum_err);
++	u64_stats_inc(&rxq->q_stats.hw_csum_err);
+ 	u64_stats_update_end(&rxq->stats_sync);
+ }
+ 
+@@ -721,9 +691,9 @@ static void idpf_rx_singleq_csum(struct idpf_queue *rxq, struct sk_buff *skb,
+  * This function only operates on the VIRTCHNL2_RXDID_1_32B_BASE_M base 32byte
+  * descriptor writeback format.
+  **/
+-static void idpf_rx_singleq_base_csum(struct idpf_queue *rx_q,
++static void idpf_rx_singleq_base_csum(struct idpf_rx_queue *rx_q,
+ 				      struct sk_buff *skb,
+-				      union virtchnl2_rx_desc *rx_desc,
++				      const union virtchnl2_rx_desc *rx_desc,
+ 				      u16 ptype)
+ {
+ 	struct idpf_rx_csum_decoded csum_bits;
+@@ -761,9 +731,9 @@ static void idpf_rx_singleq_base_csum(struct idpf_queue *rx_q,
+  * This function only operates on the VIRTCHNL2_RXDID_2_FLEX_SQ_NIC flexible
+  * descriptor writeback format.
+  **/
+-static void idpf_rx_singleq_flex_csum(struct idpf_queue *rx_q,
++static void idpf_rx_singleq_flex_csum(struct idpf_rx_queue *rx_q,
+ 				      struct sk_buff *skb,
+-				      union virtchnl2_rx_desc *rx_desc,
++				      const union virtchnl2_rx_desc *rx_desc,
+ 				      u16 ptype)
+ {
+ 	struct idpf_rx_csum_decoded csum_bits;
+@@ -801,14 +771,14 @@ static void idpf_rx_singleq_flex_csum(struct idpf_queue *rx_q,
+  * This function only operates on the VIRTCHNL2_RXDID_1_32B_BASE_M base 32byte
+  * descriptor writeback format.
+  **/
+-static void idpf_rx_singleq_base_hash(struct idpf_queue *rx_q,
++static void idpf_rx_singleq_base_hash(struct idpf_rx_queue *rx_q,
+ 				      struct sk_buff *skb,
+-				      union virtchnl2_rx_desc *rx_desc,
++				      const union virtchnl2_rx_desc *rx_desc,
+ 				      struct idpf_rx_ptype_decoded *decoded)
+ {
+ 	u64 mask, qw1;
+ 
+-	if (unlikely(!(rx_q->vport->netdev->features & NETIF_F_RXHASH)))
++	if (unlikely(!(rx_q->netdev->features & NETIF_F_RXHASH)))
+ 		return;
+ 
+ 	mask = VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH_M;
+@@ -831,12 +801,12 @@ static void idpf_rx_singleq_base_hash(struct idpf_queue *rx_q,
+  * This function only operates on the VIRTCHNL2_RXDID_2_FLEX_SQ_NIC flexible
+  * descriptor writeback format.
+  **/
+-static void idpf_rx_singleq_flex_hash(struct idpf_queue *rx_q,
++static void idpf_rx_singleq_flex_hash(struct idpf_rx_queue *rx_q,
+ 				      struct sk_buff *skb,
+-				      union virtchnl2_rx_desc *rx_desc,
++				      const union virtchnl2_rx_desc *rx_desc,
+ 				      struct idpf_rx_ptype_decoded *decoded)
+ {
+-	if (unlikely(!(rx_q->vport->netdev->features & NETIF_F_RXHASH)))
++	if (unlikely(!(rx_q->netdev->features & NETIF_F_RXHASH)))
+ 		return;
+ 
+ 	if (FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_M,
+@@ -857,16 +827,16 @@ static void idpf_rx_singleq_flex_hash(struct idpf_queue *rx_q,
+  * order to populate the hash, checksum, VLAN, protocol, and
+  * other fields within the skb.
+  */
+-static void idpf_rx_singleq_process_skb_fields(struct idpf_queue *rx_q,
+-					       struct sk_buff *skb,
+-					       union virtchnl2_rx_desc *rx_desc,
+-					       u16 ptype)
++static void
++idpf_rx_singleq_process_skb_fields(struct idpf_rx_queue *rx_q,
++				   struct sk_buff *skb,
++				   const union virtchnl2_rx_desc *rx_desc,
++				   u16 ptype)
+ {
+-	struct idpf_rx_ptype_decoded decoded =
+-					rx_q->vport->rx_ptype_lkup[ptype];
++	struct idpf_rx_ptype_decoded decoded = rx_q->rx_ptype_lkup[ptype];
+ 
+ 	/* modifies the skb - consumes the enet header */
+-	skb->protocol = eth_type_trans(skb, rx_q->vport->netdev);
++	skb->protocol = eth_type_trans(skb, rx_q->netdev);
+ 
+ 	/* Check if we're using base mode descriptor IDs */
+ 	if (rx_q->rxdids == VIRTCHNL2_RXDID_1_32B_BASE_M) {
+@@ -878,6 +848,22 @@ static void idpf_rx_singleq_process_skb_fields(struct idpf_queue *rx_q,
+ 	}
+ }
+ 
++/**
++ * idpf_rx_buf_hw_update - Store the new tail and head values
++ * @rxq: queue to bump
++ * @val: new head index
++ */
++static void idpf_rx_buf_hw_update(struct idpf_rx_queue *rxq, u32 val)
++{
++	rxq->next_to_use = val;
++
++	if (unlikely(!rxq->tail))
++		return;
++
++	/* writel has an implicit memory barrier */
++	writel(val, rxq->tail);
++}
++
+ /**
+  * idpf_rx_singleq_buf_hw_alloc_all - Replace used receive buffers
+  * @rx_q: queue for which the hw buffers are allocated
+@@ -885,7 +871,7 @@ static void idpf_rx_singleq_process_skb_fields(struct idpf_queue *rx_q,
+  *
+  * Returns false if all allocations were successful, true if any fail
+  */
+-bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rx_q,
++bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rx_q,
+ 				      u16 cleaned_count)
+ {
+ 	struct virtchnl2_singleq_rx_buf_desc *desc;
+@@ -895,8 +881,8 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rx_q,
+ 	if (!cleaned_count)
+ 		return false;
+ 
+-	desc = IDPF_SINGLEQ_RX_BUF_DESC(rx_q, nta);
+-	buf = &rx_q->rx_buf.buf[nta];
++	desc = &rx_q->single_buf[nta];
++	buf = &rx_q->rx_buf[nta];
+ 
+ 	do {
+ 		dma_addr_t addr;
+@@ -915,8 +901,8 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rx_q,
+ 		buf++;
+ 		nta++;
+ 		if (unlikely(nta == rx_q->desc_count)) {
+-			desc = IDPF_SINGLEQ_RX_BUF_DESC(rx_q, 0);
+-			buf = rx_q->rx_buf.buf;
++			desc = &rx_q->single_buf[0];
++			buf = rx_q->rx_buf;
+ 			nta = 0;
+ 		}
+ 
+@@ -933,7 +919,6 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rx_q,
+ 
+ /**
+  * idpf_rx_singleq_extract_base_fields - Extract fields from the Rx descriptor
+- * @rx_q: Rx descriptor queue
+  * @rx_desc: the descriptor to process
+  * @fields: storage for extracted values
+  *
+@@ -943,9 +928,9 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rx_q,
+  * This function only operates on the VIRTCHNL2_RXDID_1_32B_BASE_M base 32byte
+  * descriptor writeback format.
+  */
+-static void idpf_rx_singleq_extract_base_fields(struct idpf_queue *rx_q,
+-						union virtchnl2_rx_desc *rx_desc,
+-						struct idpf_rx_extracted *fields)
++static void
++idpf_rx_singleq_extract_base_fields(const union virtchnl2_rx_desc *rx_desc,
++				    struct idpf_rx_extracted *fields)
+ {
+ 	u64 qword;
+ 
+@@ -957,7 +942,6 @@ static void idpf_rx_singleq_extract_base_fields(struct idpf_queue *rx_q,
+ 
+ /**
+  * idpf_rx_singleq_extract_flex_fields - Extract fields from the Rx descriptor
+- * @rx_q: Rx descriptor queue
+  * @rx_desc: the descriptor to process
+  * @fields: storage for extracted values
+  *
+@@ -967,9 +951,9 @@ static void idpf_rx_singleq_extract_base_fields(struct idpf_queue *rx_q,
+  * This function only operates on the VIRTCHNL2_RXDID_2_FLEX_SQ_NIC flexible
+  * descriptor writeback format.
+  */
+-static void idpf_rx_singleq_extract_flex_fields(struct idpf_queue *rx_q,
+-						union virtchnl2_rx_desc *rx_desc,
+-						struct idpf_rx_extracted *fields)
++static void
++idpf_rx_singleq_extract_flex_fields(const union virtchnl2_rx_desc *rx_desc,
++				    struct idpf_rx_extracted *fields)
+ {
+ 	fields->size = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M,
+ 				 le16_to_cpu(rx_desc->flex_nic_wb.pkt_len));
+@@ -984,14 +968,15 @@ static void idpf_rx_singleq_extract_flex_fields(struct idpf_queue *rx_q,
+  * @fields: storage for extracted values
+  *
+  */
+-static void idpf_rx_singleq_extract_fields(struct idpf_queue *rx_q,
+-					   union virtchnl2_rx_desc *rx_desc,
+-					   struct idpf_rx_extracted *fields)
++static void
++idpf_rx_singleq_extract_fields(const struct idpf_rx_queue *rx_q,
++			       const union virtchnl2_rx_desc *rx_desc,
++			       struct idpf_rx_extracted *fields)
+ {
+ 	if (rx_q->rxdids == VIRTCHNL2_RXDID_1_32B_BASE_M)
+-		idpf_rx_singleq_extract_base_fields(rx_q, rx_desc, fields);
++		idpf_rx_singleq_extract_base_fields(rx_desc, fields);
+ 	else
+-		idpf_rx_singleq_extract_flex_fields(rx_q, rx_desc, fields);
++		idpf_rx_singleq_extract_flex_fields(rx_desc, fields);
+ }
+ 
+ /**
+@@ -1001,7 +986,7 @@ static void idpf_rx_singleq_extract_fields(struct idpf_queue *rx_q,
+  *
+  * Returns true if there's any budget left (e.g. the clean is finished)
+  */
+-static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget)
++static int idpf_rx_singleq_clean(struct idpf_rx_queue *rx_q, int budget)
+ {
+ 	unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
+ 	struct sk_buff *skb = rx_q->skb;
+@@ -1016,7 +1001,7 @@ static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget)
+ 		struct idpf_rx_buf *rx_buf;
+ 
+ 		/* get the Rx desc from Rx queue based on 'next_to_clean' */
+-		rx_desc = IDPF_RX_DESC(rx_q, ntc);
++		rx_desc = &rx_q->rx[ntc];
+ 
+ 		/* status_error_ptype_len will always be zero for unused
+ 		 * descriptors because it's cleared in cleanup, and overlaps
+@@ -1036,7 +1021,7 @@ static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget)
+ 
+ 		idpf_rx_singleq_extract_fields(rx_q, rx_desc, &fields);
+ 
+-		rx_buf = &rx_q->rx_buf.buf[ntc];
++		rx_buf = &rx_q->rx_buf[ntc];
+ 		if (!fields.size) {
+ 			idpf_rx_put_page(rx_buf);
+ 			goto skip_data;
+@@ -1058,7 +1043,7 @@ static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget)
+ 		cleaned_count++;
+ 
+ 		/* skip if it is non EOP desc */
+-		if (idpf_rx_singleq_is_non_eop(rx_q, rx_desc, skb, ntc))
++		if (idpf_rx_singleq_is_non_eop(rx_desc))
+ 			continue;
+ 
+ #define IDPF_RXD_ERR_S FIELD_PREP(VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M, \
+@@ -1084,7 +1069,7 @@ static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget)
+ 						   rx_desc, fields.rx_ptype);
+ 
+ 		/* send completed skb up the stack */
+-		napi_gro_receive(&rx_q->q_vector->napi, skb);
++		napi_gro_receive(rx_q->pp->p.napi, skb);
+ 		skb = NULL;
+ 
+ 		/* update budget accounting */
+@@ -1099,8 +1084,8 @@ static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget)
+ 		failure = idpf_rx_singleq_buf_hw_alloc_all(rx_q, cleaned_count);
+ 
+ 	u64_stats_update_begin(&rx_q->stats_sync);
+-	u64_stats_add(&rx_q->q_stats.rx.packets, total_rx_pkts);
+-	u64_stats_add(&rx_q->q_stats.rx.bytes, total_rx_bytes);
++	u64_stats_add(&rx_q->q_stats.packets, total_rx_pkts);
++	u64_stats_add(&rx_q->q_stats.bytes, total_rx_bytes);
+ 	u64_stats_update_end(&rx_q->stats_sync);
+ 
+ 	/* guarantee a trip back through this routine if there was a failure */
+@@ -1127,7 +1112,7 @@ static bool idpf_rx_singleq_clean_all(struct idpf_q_vector *q_vec, int budget,
+ 	 */
+ 	budget_per_q = num_rxq ? max(budget / num_rxq, 1) : 0;
+ 	for (i = 0; i < num_rxq; i++) {
+-		struct idpf_queue *rxq = q_vec->rx[i];
++		struct idpf_rx_queue *rxq = q_vec->rx[i];
+ 		int pkts_cleaned_per_q;
+ 
+ 		pkts_cleaned_per_q = idpf_rx_singleq_clean(rxq, budget_per_q);
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index 20ca04320d4bde..9b7e67d0f38be9 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -4,6 +4,9 @@
+ #include "idpf.h"
+ #include "idpf_virtchnl.h"
+ 
++static bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
++			       unsigned int count);
++
+ /**
+  * idpf_buf_lifo_push - push a buffer pointer onto stack
+  * @stack: pointer to stack struct
+@@ -60,7 +63,8 @@ void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+  * @tx_q: the queue that owns the buffer
+  * @tx_buf: the buffer to free
+  */
+-static void idpf_tx_buf_rel(struct idpf_queue *tx_q, struct idpf_tx_buf *tx_buf)
++static void idpf_tx_buf_rel(struct idpf_tx_queue *tx_q,
++			    struct idpf_tx_buf *tx_buf)
+ {
+ 	if (tx_buf->skb) {
+ 		if (dma_unmap_len(tx_buf, len))
+@@ -86,8 +90,9 @@ static void idpf_tx_buf_rel(struct idpf_queue *tx_q, struct idpf_tx_buf *tx_buf)
+  * idpf_tx_buf_rel_all - Free any empty Tx buffers
+  * @txq: queue to be cleaned
+  */
+-static void idpf_tx_buf_rel_all(struct idpf_queue *txq)
++static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq)
+ {
++	struct idpf_buf_lifo *buf_stack;
+ 	u16 i;
+ 
+ 	/* Buffers already cleared, nothing to do */
+@@ -101,38 +106,57 @@ static void idpf_tx_buf_rel_all(struct idpf_queue *txq)
+ 	kfree(txq->tx_buf);
+ 	txq->tx_buf = NULL;
+ 
+-	if (!txq->buf_stack.bufs)
++	if (!idpf_queue_has(FLOW_SCH_EN, txq))
++		return;
++
++	buf_stack = &txq->stash->buf_stack;
++	if (!buf_stack->bufs)
+ 		return;
+ 
+-	for (i = 0; i < txq->buf_stack.size; i++)
+-		kfree(txq->buf_stack.bufs[i]);
++	for (i = 0; i < buf_stack->size; i++)
++		kfree(buf_stack->bufs[i]);
+ 
+-	kfree(txq->buf_stack.bufs);
+-	txq->buf_stack.bufs = NULL;
++	kfree(buf_stack->bufs);
++	buf_stack->bufs = NULL;
+ }
+ 
+ /**
+  * idpf_tx_desc_rel - Free Tx resources per queue
+  * @txq: Tx descriptor ring for a specific queue
+- * @bufq: buffer q or completion q
+  *
+  * Free all transmit software resources
+  */
+-static void idpf_tx_desc_rel(struct idpf_queue *txq, bool bufq)
++static void idpf_tx_desc_rel(struct idpf_tx_queue *txq)
+ {
+-	if (bufq)
+-		idpf_tx_buf_rel_all(txq);
++	idpf_tx_buf_rel_all(txq);
+ 
+ 	if (!txq->desc_ring)
+ 		return;
+ 
+ 	dmam_free_coherent(txq->dev, txq->size, txq->desc_ring, txq->dma);
+ 	txq->desc_ring = NULL;
+-	txq->next_to_alloc = 0;
+ 	txq->next_to_use = 0;
+ 	txq->next_to_clean = 0;
+ }
+ 
++/**
++ * idpf_compl_desc_rel - Free completion resources per queue
++ * @complq: completion queue
++ *
++ * Free all completion software resources.
++ */
++static void idpf_compl_desc_rel(struct idpf_compl_queue *complq)
++{
++	if (!complq->comp)
++		return;
++
++	dma_free_coherent(complq->netdev->dev.parent, complq->size,
++			  complq->comp, complq->dma);
++	complq->comp = NULL;
++	complq->next_to_use = 0;
++	complq->next_to_clean = 0;
++}
++
+ /**
+  * idpf_tx_desc_rel_all - Free Tx Resources for All Queues
+  * @vport: virtual port structure
+@@ -150,10 +174,10 @@ static void idpf_tx_desc_rel_all(struct idpf_vport *vport)
+ 		struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
+ 
+ 		for (j = 0; j < txq_grp->num_txq; j++)
+-			idpf_tx_desc_rel(txq_grp->txqs[j], true);
++			idpf_tx_desc_rel(txq_grp->txqs[j]);
+ 
+ 		if (idpf_is_queue_model_split(vport->txq_model))
+-			idpf_tx_desc_rel(txq_grp->complq, false);
++			idpf_compl_desc_rel(txq_grp->complq);
+ 	}
+ }
+ 
+@@ -163,8 +187,9 @@ static void idpf_tx_desc_rel_all(struct idpf_vport *vport)
+  *
+  * Returns 0 on success, negative on failure
+  */
+-static int idpf_tx_buf_alloc_all(struct idpf_queue *tx_q)
++static int idpf_tx_buf_alloc_all(struct idpf_tx_queue *tx_q)
+ {
++	struct idpf_buf_lifo *buf_stack;
+ 	int buf_size;
+ 	int i;
+ 
+@@ -180,22 +205,26 @@ static int idpf_tx_buf_alloc_all(struct idpf_queue *tx_q)
+ 	for (i = 0; i < tx_q->desc_count; i++)
+ 		tx_q->tx_buf[i].compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG;
+ 
++	if (!idpf_queue_has(FLOW_SCH_EN, tx_q))
++		return 0;
++
++	buf_stack = &tx_q->stash->buf_stack;
++
+ 	/* Initialize tx buf stack for out-of-order completions if
+ 	 * flow scheduling offload is enabled
+ 	 */
+-	tx_q->buf_stack.bufs =
+-		kcalloc(tx_q->desc_count, sizeof(struct idpf_tx_stash *),
+-			GFP_KERNEL);
+-	if (!tx_q->buf_stack.bufs)
++	buf_stack->bufs = kcalloc(tx_q->desc_count, sizeof(*buf_stack->bufs),
++				  GFP_KERNEL);
++	if (!buf_stack->bufs)
+ 		return -ENOMEM;
+ 
+-	tx_q->buf_stack.size = tx_q->desc_count;
+-	tx_q->buf_stack.top = tx_q->desc_count;
++	buf_stack->size = tx_q->desc_count;
++	buf_stack->top = tx_q->desc_count;
+ 
+ 	for (i = 0; i < tx_q->desc_count; i++) {
+-		tx_q->buf_stack.bufs[i] = kzalloc(sizeof(*tx_q->buf_stack.bufs[i]),
+-						  GFP_KERNEL);
+-		if (!tx_q->buf_stack.bufs[i])
++		buf_stack->bufs[i] = kzalloc(sizeof(*buf_stack->bufs[i]),
++					     GFP_KERNEL);
++		if (!buf_stack->bufs[i])
+ 			return -ENOMEM;
+ 	}
+ 
+@@ -204,28 +233,22 @@ static int idpf_tx_buf_alloc_all(struct idpf_queue *tx_q)
+ 
+ /**
+  * idpf_tx_desc_alloc - Allocate the Tx descriptors
++ * @vport: vport to allocate resources for
+  * @tx_q: the tx ring to set up
+- * @bufq: buffer or completion queue
+  *
+  * Returns 0 on success, negative on failure
+  */
+-static int idpf_tx_desc_alloc(struct idpf_queue *tx_q, bool bufq)
++static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
++			      struct idpf_tx_queue *tx_q)
+ {
+ 	struct device *dev = tx_q->dev;
+-	u32 desc_sz;
+ 	int err;
+ 
+-	if (bufq) {
+-		err = idpf_tx_buf_alloc_all(tx_q);
+-		if (err)
+-			goto err_alloc;
+-
+-		desc_sz = sizeof(struct idpf_base_tx_desc);
+-	} else {
+-		desc_sz = sizeof(struct idpf_splitq_tx_compl_desc);
+-	}
++	err = idpf_tx_buf_alloc_all(tx_q);
++	if (err)
++		goto err_alloc;
+ 
+-	tx_q->size = tx_q->desc_count * desc_sz;
++	tx_q->size = tx_q->desc_count * sizeof(*tx_q->base_tx);
+ 
+ 	/* Allocate descriptors also round up to nearest 4K */
+ 	tx_q->size = ALIGN(tx_q->size, 4096);
+@@ -238,19 +261,43 @@ static int idpf_tx_desc_alloc(struct idpf_queue *tx_q, bool bufq)
+ 		goto err_alloc;
+ 	}
+ 
+-	tx_q->next_to_alloc = 0;
+ 	tx_q->next_to_use = 0;
+ 	tx_q->next_to_clean = 0;
+-	set_bit(__IDPF_Q_GEN_CHK, tx_q->flags);
++	idpf_queue_set(GEN_CHK, tx_q);
+ 
+ 	return 0;
+ 
+ err_alloc:
+-	idpf_tx_desc_rel(tx_q, bufq);
++	idpf_tx_desc_rel(tx_q);
+ 
+ 	return err;
+ }
+ 
++/**
++ * idpf_compl_desc_alloc - allocate completion descriptors
++ * @vport: vport to allocate resources for
++ * @complq: completion queue to set up
++ *
++ * Return: 0 on success, -errno on failure.
++ */
++static int idpf_compl_desc_alloc(const struct idpf_vport *vport,
++				 struct idpf_compl_queue *complq)
++{
++	complq->size = array_size(complq->desc_count, sizeof(*complq->comp));
++
++	complq->comp = dma_alloc_coherent(complq->netdev->dev.parent,
++					  complq->size, &complq->dma,
++					  GFP_KERNEL);
++	if (!complq->comp)
++		return -ENOMEM;
++
++	complq->next_to_use = 0;
++	complq->next_to_clean = 0;
++	idpf_queue_set(GEN_CHK, complq);
++
++	return 0;
++}
++
+ /**
+  * idpf_tx_desc_alloc_all - allocate all queues Tx resources
+  * @vport: virtual port private structure
+@@ -259,7 +306,6 @@ static int idpf_tx_desc_alloc(struct idpf_queue *tx_q, bool bufq)
+  */
+ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport)
+ {
+-	struct device *dev = &vport->adapter->pdev->dev;
+ 	int err = 0;
+ 	int i, j;
+ 
+@@ -268,13 +314,14 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport)
+ 	 */
+ 	for (i = 0; i < vport->num_txq_grp; i++) {
+ 		for (j = 0; j < vport->txq_grps[i].num_txq; j++) {
+-			struct idpf_queue *txq = vport->txq_grps[i].txqs[j];
++			struct idpf_tx_queue *txq = vport->txq_grps[i].txqs[j];
+ 			u8 gen_bits = 0;
+ 			u16 bufidx_mask;
+ 
+-			err = idpf_tx_desc_alloc(txq, true);
++			err = idpf_tx_desc_alloc(vport, txq);
+ 			if (err) {
+-				dev_err(dev, "Allocation for Tx Queue %u failed\n",
++				pci_err(vport->adapter->pdev,
++					"Allocation for Tx Queue %u failed\n",
+ 					i);
+ 				goto err_out;
+ 			}
+@@ -312,9 +359,10 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport)
+ 			continue;
+ 
+ 		/* Setup completion queues */
+-		err = idpf_tx_desc_alloc(vport->txq_grps[i].complq, false);
++		err = idpf_compl_desc_alloc(vport, vport->txq_grps[i].complq);
+ 		if (err) {
+-			dev_err(dev, "Allocation for Tx Completion Queue %u failed\n",
++			pci_err(vport->adapter->pdev,
++				"Allocation for Tx Completion Queue %u failed\n",
+ 				i);
+ 			goto err_out;
+ 		}
+@@ -329,15 +377,14 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport)
+ 
+ /**
+  * idpf_rx_page_rel - Release an rx buffer page
+- * @rxq: the queue that owns the buffer
+  * @rx_buf: the buffer to free
+  */
+-static void idpf_rx_page_rel(struct idpf_queue *rxq, struct idpf_rx_buf *rx_buf)
++static void idpf_rx_page_rel(struct idpf_rx_buf *rx_buf)
+ {
+ 	if (unlikely(!rx_buf->page))
+ 		return;
+ 
+-	page_pool_put_full_page(rxq->pp, rx_buf->page, false);
++	page_pool_put_full_page(rx_buf->page->pp, rx_buf->page, false);
+ 
+ 	rx_buf->page = NULL;
+ 	rx_buf->page_offset = 0;
+@@ -345,54 +392,72 @@ static void idpf_rx_page_rel(struct idpf_queue *rxq, struct idpf_rx_buf *rx_buf)
+ 
+ /**
+  * idpf_rx_hdr_buf_rel_all - Release header buffer memory
+- * @rxq: queue to use
++ * @bufq: queue to use
++ * @dev: device to free DMA memory
+  */
+-static void idpf_rx_hdr_buf_rel_all(struct idpf_queue *rxq)
++static void idpf_rx_hdr_buf_rel_all(struct idpf_buf_queue *bufq,
++				    struct device *dev)
+ {
+-	struct idpf_adapter *adapter = rxq->vport->adapter;
+-
+-	dma_free_coherent(&adapter->pdev->dev,
+-			  rxq->desc_count * IDPF_HDR_BUF_SIZE,
+-			  rxq->rx_buf.hdr_buf_va,
+-			  rxq->rx_buf.hdr_buf_pa);
+-	rxq->rx_buf.hdr_buf_va = NULL;
++	dma_free_coherent(dev, bufq->desc_count * IDPF_HDR_BUF_SIZE,
++			  bufq->rx_buf.hdr_buf_va, bufq->rx_buf.hdr_buf_pa);
++	bufq->rx_buf.hdr_buf_va = NULL;
+ }
+ 
+ /**
+- * idpf_rx_buf_rel_all - Free all Rx buffer resources for a queue
+- * @rxq: queue to be cleaned
++ * idpf_rx_buf_rel_bufq - Free all Rx buffer resources for a buffer queue
++ * @bufq: queue to be cleaned
++ * @dev: device to free DMA memory
+  */
+-static void idpf_rx_buf_rel_all(struct idpf_queue *rxq)
++static void idpf_rx_buf_rel_bufq(struct idpf_buf_queue *bufq,
++				 struct device *dev)
+ {
+-	u16 i;
+-
+ 	/* queue already cleared, nothing to do */
+-	if (!rxq->rx_buf.buf)
++	if (!bufq->rx_buf.buf)
+ 		return;
+ 
+ 	/* Free all the bufs allocated and given to hw on Rx queue */
+-	for (i = 0; i < rxq->desc_count; i++)
+-		idpf_rx_page_rel(rxq, &rxq->rx_buf.buf[i]);
++	for (u32 i = 0; i < bufq->desc_count; i++)
++		idpf_rx_page_rel(&bufq->rx_buf.buf[i]);
++
++	if (idpf_queue_has(HSPLIT_EN, bufq))
++		idpf_rx_hdr_buf_rel_all(bufq, dev);
+ 
+-	if (rxq->rx_hsplit_en)
+-		idpf_rx_hdr_buf_rel_all(rxq);
++	page_pool_destroy(bufq->pp);
++	bufq->pp = NULL;
++
++	kfree(bufq->rx_buf.buf);
++	bufq->rx_buf.buf = NULL;
++}
++
++/**
++ * idpf_rx_buf_rel_all - Free all Rx buffer resources for a receive queue
++ * @rxq: queue to be cleaned
++ */
++static void idpf_rx_buf_rel_all(struct idpf_rx_queue *rxq)
++{
++	if (!rxq->rx_buf)
++		return;
++
++	for (u32 i = 0; i < rxq->desc_count; i++)
++		idpf_rx_page_rel(&rxq->rx_buf[i]);
+ 
+ 	page_pool_destroy(rxq->pp);
+ 	rxq->pp = NULL;
+ 
+-	kfree(rxq->rx_buf.buf);
+-	rxq->rx_buf.buf = NULL;
++	kfree(rxq->rx_buf);
++	rxq->rx_buf = NULL;
+ }
+ 
+ /**
+  * idpf_rx_desc_rel - Free a specific Rx q resources
+  * @rxq: queue to clean the resources from
+- * @bufq: buffer q or completion q
+- * @q_model: single or split q model
++ * @dev: device to free DMA memory
++ * @model: single or split queue model
+  *
+  * Free a specific rx queue resources
+  */
+-static void idpf_rx_desc_rel(struct idpf_queue *rxq, bool bufq, s32 q_model)
++static void idpf_rx_desc_rel(struct idpf_rx_queue *rxq, struct device *dev,
++			     u32 model)
+ {
+ 	if (!rxq)
+ 		return;
+@@ -402,7 +467,7 @@ static void idpf_rx_desc_rel(struct idpf_queue *rxq, bool bufq, s32 q_model)
+ 		rxq->skb = NULL;
+ 	}
+ 
+-	if (bufq || !idpf_is_queue_model_split(q_model))
++	if (!idpf_is_queue_model_split(model))
+ 		idpf_rx_buf_rel_all(rxq);
+ 
+ 	rxq->next_to_alloc = 0;
+@@ -411,10 +476,34 @@ static void idpf_rx_desc_rel(struct idpf_queue *rxq, bool bufq, s32 q_model)
+ 	if (!rxq->desc_ring)
+ 		return;
+ 
+-	dmam_free_coherent(rxq->dev, rxq->size, rxq->desc_ring, rxq->dma);
++	dmam_free_coherent(dev, rxq->size, rxq->desc_ring, rxq->dma);
+ 	rxq->desc_ring = NULL;
+ }
+ 
++/**
++ * idpf_rx_desc_rel_bufq - free buffer queue resources
++ * @bufq: buffer queue to clean the resources from
++ * @dev: device to free DMA memory
++ */
++static void idpf_rx_desc_rel_bufq(struct idpf_buf_queue *bufq,
++				  struct device *dev)
++{
++	if (!bufq)
++		return;
++
++	idpf_rx_buf_rel_bufq(bufq, dev);
++
++	bufq->next_to_alloc = 0;
++	bufq->next_to_clean = 0;
++	bufq->next_to_use = 0;
++
++	if (!bufq->split_buf)
++		return;
++
++	dma_free_coherent(dev, bufq->size, bufq->split_buf, bufq->dma);
++	bufq->split_buf = NULL;
++}
++
+ /**
+  * idpf_rx_desc_rel_all - Free Rx Resources for All Queues
+  * @vport: virtual port structure
+@@ -423,6 +512,7 @@ static void idpf_rx_desc_rel(struct idpf_queue *rxq, bool bufq, s32 q_model)
+  */
+ static void idpf_rx_desc_rel_all(struct idpf_vport *vport)
+ {
++	struct device *dev = &vport->adapter->pdev->dev;
+ 	struct idpf_rxq_group *rx_qgrp;
+ 	u16 num_rxq;
+ 	int i, j;
+@@ -435,15 +525,15 @@ static void idpf_rx_desc_rel_all(struct idpf_vport *vport)
+ 
+ 		if (!idpf_is_queue_model_split(vport->rxq_model)) {
+ 			for (j = 0; j < rx_qgrp->singleq.num_rxq; j++)
+-				idpf_rx_desc_rel(rx_qgrp->singleq.rxqs[j],
+-						 false, vport->rxq_model);
++				idpf_rx_desc_rel(rx_qgrp->singleq.rxqs[j], dev,
++						 VIRTCHNL2_QUEUE_MODEL_SINGLE);
+ 			continue;
+ 		}
+ 
+ 		num_rxq = rx_qgrp->splitq.num_rxq_sets;
+ 		for (j = 0; j < num_rxq; j++)
+ 			idpf_rx_desc_rel(&rx_qgrp->splitq.rxq_sets[j]->rxq,
+-					 false, vport->rxq_model);
++					 dev, VIRTCHNL2_QUEUE_MODEL_SPLIT);
+ 
+ 		if (!rx_qgrp->splitq.bufq_sets)
+ 			continue;
+@@ -452,44 +542,40 @@ static void idpf_rx_desc_rel_all(struct idpf_vport *vport)
+ 			struct idpf_bufq_set *bufq_set =
+ 				&rx_qgrp->splitq.bufq_sets[j];
+ 
+-			idpf_rx_desc_rel(&bufq_set->bufq, true,
+-					 vport->rxq_model);
++			idpf_rx_desc_rel_bufq(&bufq_set->bufq, dev);
+ 		}
+ 	}
+ }
+ 
+ /**
+  * idpf_rx_buf_hw_update - Store the new tail and head values
+- * @rxq: queue to bump
++ * @bufq: queue to bump
+  * @val: new head index
+  */
+-void idpf_rx_buf_hw_update(struct idpf_queue *rxq, u32 val)
++static void idpf_rx_buf_hw_update(struct idpf_buf_queue *bufq, u32 val)
+ {
+-	rxq->next_to_use = val;
++	bufq->next_to_use = val;
+ 
+-	if (unlikely(!rxq->tail))
++	if (unlikely(!bufq->tail))
+ 		return;
+ 
+ 	/* writel has an implicit memory barrier */
+-	writel(val, rxq->tail);
++	writel(val, bufq->tail);
+ }
+ 
+ /**
+  * idpf_rx_hdr_buf_alloc_all - Allocate memory for header buffers
+- * @rxq: ring to use
++ * @bufq: ring to use
+  *
+  * Returns 0 on success, negative on failure.
+  */
+-static int idpf_rx_hdr_buf_alloc_all(struct idpf_queue *rxq)
++static int idpf_rx_hdr_buf_alloc_all(struct idpf_buf_queue *bufq)
+ {
+-	struct idpf_adapter *adapter = rxq->vport->adapter;
+-
+-	rxq->rx_buf.hdr_buf_va =
+-		dma_alloc_coherent(&adapter->pdev->dev,
+-				   IDPF_HDR_BUF_SIZE * rxq->desc_count,
+-				   &rxq->rx_buf.hdr_buf_pa,
+-				   GFP_KERNEL);
+-	if (!rxq->rx_buf.hdr_buf_va)
++	bufq->rx_buf.hdr_buf_va =
++		dma_alloc_coherent(bufq->q_vector->vport->netdev->dev.parent,
++				   IDPF_HDR_BUF_SIZE * bufq->desc_count,
++				   &bufq->rx_buf.hdr_buf_pa, GFP_KERNEL);
++	if (!bufq->rx_buf.hdr_buf_va)
+ 		return -ENOMEM;
+ 
+ 	return 0;
+@@ -502,19 +588,20 @@ static int idpf_rx_hdr_buf_alloc_all(struct idpf_queue *rxq)
+  */
+ static void idpf_rx_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id)
+ {
+-	u16 nta = refillq->next_to_alloc;
++	u32 nta = refillq->next_to_use;
+ 
+ 	/* store the buffer ID and the SW maintained GEN bit to the refillq */
+ 	refillq->ring[nta] =
+ 		FIELD_PREP(IDPF_RX_BI_BUFID_M, buf_id) |
+ 		FIELD_PREP(IDPF_RX_BI_GEN_M,
+-			   test_bit(__IDPF_Q_GEN_CHK, refillq->flags));
++			   idpf_queue_has(GEN_CHK, refillq));
+ 
+ 	if (unlikely(++nta == refillq->desc_count)) {
+ 		nta = 0;
+-		change_bit(__IDPF_Q_GEN_CHK, refillq->flags);
++		idpf_queue_change(GEN_CHK, refillq);
+ 	}
+-	refillq->next_to_alloc = nta;
++
++	refillq->next_to_use = nta;
+ }
+ 
+ /**
+@@ -524,21 +611,20 @@ static void idpf_rx_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id)
+  *
+  * Returns false if buffer could not be allocated, true otherwise.
+  */
+-static bool idpf_rx_post_buf_desc(struct idpf_queue *bufq, u16 buf_id)
++static bool idpf_rx_post_buf_desc(struct idpf_buf_queue *bufq, u16 buf_id)
+ {
+ 	struct virtchnl2_splitq_rx_buf_desc *splitq_rx_desc = NULL;
+ 	u16 nta = bufq->next_to_alloc;
+ 	struct idpf_rx_buf *buf;
+ 	dma_addr_t addr;
+ 
+-	splitq_rx_desc = IDPF_SPLITQ_RX_BUF_DESC(bufq, nta);
++	splitq_rx_desc = &bufq->split_buf[nta];
+ 	buf = &bufq->rx_buf.buf[buf_id];
+ 
+-	if (bufq->rx_hsplit_en) {
++	if (idpf_queue_has(HSPLIT_EN, bufq))
+ 		splitq_rx_desc->hdr_addr =
+ 			cpu_to_le64(bufq->rx_buf.hdr_buf_pa +
+ 				    (u32)buf_id * IDPF_HDR_BUF_SIZE);
+-	}
+ 
+ 	addr = idpf_alloc_page(bufq->pp, buf, bufq->rx_buf_size);
+ 	if (unlikely(addr == DMA_MAPPING_ERROR))
+@@ -562,7 +648,8 @@ static bool idpf_rx_post_buf_desc(struct idpf_queue *bufq, u16 buf_id)
+  *
+  * Returns true if @working_set bufs were posted successfully, false otherwise.
+  */
+-static bool idpf_rx_post_init_bufs(struct idpf_queue *bufq, u16 working_set)
++static bool idpf_rx_post_init_bufs(struct idpf_buf_queue *bufq,
++				   u16 working_set)
+ {
+ 	int i;
+ 
+@@ -571,26 +658,28 @@ static bool idpf_rx_post_init_bufs(struct idpf_queue *bufq, u16 working_set)
+ 			return false;
+ 	}
+ 
+-	idpf_rx_buf_hw_update(bufq,
+-			      bufq->next_to_alloc & ~(bufq->rx_buf_stride - 1));
++	idpf_rx_buf_hw_update(bufq, ALIGN_DOWN(bufq->next_to_alloc,
++					       IDPF_RX_BUF_STRIDE));
+ 
+ 	return true;
+ }
+ 
+ /**
+  * idpf_rx_create_page_pool - Create a page pool
+- * @rxbufq: RX queue to create page pool for
++ * @napi: NAPI of the associated queue vector
++ * @count: queue descriptor count
+  *
+  * Returns &page_pool on success, casted -errno on failure
+  */
+-static struct page_pool *idpf_rx_create_page_pool(struct idpf_queue *rxbufq)
++static struct page_pool *idpf_rx_create_page_pool(struct napi_struct *napi,
++						  u32 count)
+ {
+ 	struct page_pool_params pp = {
+ 		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+ 		.order		= 0,
+-		.pool_size	= rxbufq->desc_count,
++		.pool_size	= count,
+ 		.nid		= NUMA_NO_NODE,
+-		.dev		= rxbufq->vport->netdev->dev.parent,
++		.dev		= napi->dev->dev.parent,
+ 		.max_len	= PAGE_SIZE,
+ 		.dma_dir	= DMA_FROM_DEVICE,
+ 		.offset		= 0,
+@@ -599,15 +688,58 @@ static struct page_pool *idpf_rx_create_page_pool(struct idpf_queue *rxbufq)
+ 	return page_pool_create(&pp);
+ }
+ 
++/**
++ * idpf_rx_buf_alloc_singleq - Allocate memory for all buffer resources
++ * @rxq: queue for which the buffers are allocated
++ *
++ * Return: 0 on success, -ENOMEM on failure.
++ */
++static int idpf_rx_buf_alloc_singleq(struct idpf_rx_queue *rxq)
++{
++	rxq->rx_buf = kcalloc(rxq->desc_count, sizeof(*rxq->rx_buf),
++			      GFP_KERNEL);
++	if (!rxq->rx_buf)
++		return -ENOMEM;
++
++	if (idpf_rx_singleq_buf_hw_alloc_all(rxq, rxq->desc_count - 1))
++		goto err;
++
++	return 0;
++
++err:
++	idpf_rx_buf_rel_all(rxq);
++
++	return -ENOMEM;
++}
++
++/**
++ * idpf_rx_bufs_init_singleq - Initialize page pool and allocate Rx bufs
++ * @rxq: buffer queue to create page pool for
++ *
++ * Return: 0 on success, -errno on failure.
++ */
++static int idpf_rx_bufs_init_singleq(struct idpf_rx_queue *rxq)
++{
++	struct page_pool *pool;
++
++	pool = idpf_rx_create_page_pool(&rxq->q_vector->napi, rxq->desc_count);
++	if (IS_ERR(pool))
++		return PTR_ERR(pool);
++
++	rxq->pp = pool;
++
++	return idpf_rx_buf_alloc_singleq(rxq);
++}
++
+ /**
+  * idpf_rx_buf_alloc_all - Allocate memory for all buffer resources
+- * @rxbufq: queue for which the buffers are allocated; equivalent to
+- * rxq when operating in singleq mode
++ * @rxbufq: queue for which the buffers are allocated
+  *
+  * Returns 0 on success, negative on failure
+  */
+-static int idpf_rx_buf_alloc_all(struct idpf_queue *rxbufq)
++static int idpf_rx_buf_alloc_all(struct idpf_buf_queue *rxbufq)
+ {
++	struct device *dev = rxbufq->q_vector->vport->netdev->dev.parent;
+ 	int err = 0;
+ 
+ 	/* Allocate book keeping buffers */
+@@ -618,48 +750,41 @@ static int idpf_rx_buf_alloc_all(struct idpf_queue *rxbufq)
+ 		goto rx_buf_alloc_all_out;
+ 	}
+ 
+-	if (rxbufq->rx_hsplit_en) {
++	if (idpf_queue_has(HSPLIT_EN, rxbufq)) {
+ 		err = idpf_rx_hdr_buf_alloc_all(rxbufq);
+ 		if (err)
+ 			goto rx_buf_alloc_all_out;
+ 	}
+ 
+ 	/* Allocate buffers to be given to HW.	 */
+-	if (idpf_is_queue_model_split(rxbufq->vport->rxq_model)) {
+-		int working_set = IDPF_RX_BUFQ_WORKING_SET(rxbufq);
+-
+-		if (!idpf_rx_post_init_bufs(rxbufq, working_set))
+-			err = -ENOMEM;
+-	} else {
+-		if (idpf_rx_singleq_buf_hw_alloc_all(rxbufq,
+-						     rxbufq->desc_count - 1))
+-			err = -ENOMEM;
+-	}
++	if (!idpf_rx_post_init_bufs(rxbufq, IDPF_RX_BUFQ_WORKING_SET(rxbufq)))
++		err = -ENOMEM;
+ 
+ rx_buf_alloc_all_out:
+ 	if (err)
+-		idpf_rx_buf_rel_all(rxbufq);
++		idpf_rx_buf_rel_bufq(rxbufq, dev);
+ 
+ 	return err;
+ }
+ 
+ /**
+  * idpf_rx_bufs_init - Initialize page pool, allocate rx bufs, and post to HW
+- * @rxbufq: RX queue to create page pool for
++ * @bufq: buffer queue to create page pool for
+  *
+  * Returns 0 on success, negative on failure
+  */
+-static int idpf_rx_bufs_init(struct idpf_queue *rxbufq)
++static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq)
+ {
+ 	struct page_pool *pool;
+ 
+-	pool = idpf_rx_create_page_pool(rxbufq);
++	pool = idpf_rx_create_page_pool(&bufq->q_vector->napi,
++					bufq->desc_count);
+ 	if (IS_ERR(pool))
+ 		return PTR_ERR(pool);
+ 
+-	rxbufq->pp = pool;
++	bufq->pp = pool;
+ 
+-	return idpf_rx_buf_alloc_all(rxbufq);
++	return idpf_rx_buf_alloc_all(bufq);
+ }
+ 
+ /**
+@@ -671,7 +796,6 @@ static int idpf_rx_bufs_init(struct idpf_queue *rxbufq)
+ int idpf_rx_bufs_init_all(struct idpf_vport *vport)
+ {
+ 	struct idpf_rxq_group *rx_qgrp;
+-	struct idpf_queue *q;
+ 	int i, j, err;
+ 
+ 	for (i = 0; i < vport->num_rxq_grp; i++) {
+@@ -682,8 +806,10 @@ int idpf_rx_bufs_init_all(struct idpf_vport *vport)
+ 			int num_rxq = rx_qgrp->singleq.num_rxq;
+ 
+ 			for (j = 0; j < num_rxq; j++) {
++				struct idpf_rx_queue *q;
++
+ 				q = rx_qgrp->singleq.rxqs[j];
+-				err = idpf_rx_bufs_init(q);
++				err = idpf_rx_bufs_init_singleq(q);
+ 				if (err)
+ 					return err;
+ 			}
+@@ -693,6 +819,8 @@ int idpf_rx_bufs_init_all(struct idpf_vport *vport)
+ 
+ 		/* Otherwise, allocate bufs for the buffer queues */
+ 		for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
++			struct idpf_buf_queue *q;
++
+ 			q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+ 			err = idpf_rx_bufs_init(q);
+ 			if (err)
+@@ -705,22 +833,17 @@ int idpf_rx_bufs_init_all(struct idpf_vport *vport)
+ 
+ /**
+  * idpf_rx_desc_alloc - Allocate queue Rx resources
++ * @vport: vport to allocate resources for
+  * @rxq: Rx queue for which the resources are setup
+- * @bufq: buffer or completion queue
+- * @q_model: single or split queue model
+  *
+  * Returns 0 on success, negative on failure
+  */
+-static int idpf_rx_desc_alloc(struct idpf_queue *rxq, bool bufq, s32 q_model)
++static int idpf_rx_desc_alloc(const struct idpf_vport *vport,
++			      struct idpf_rx_queue *rxq)
+ {
+-	struct device *dev = rxq->dev;
++	struct device *dev = &vport->adapter->pdev->dev;
+ 
+-	if (bufq)
+-		rxq->size = rxq->desc_count *
+-			sizeof(struct virtchnl2_splitq_rx_buf_desc);
+-	else
+-		rxq->size = rxq->desc_count *
+-			sizeof(union virtchnl2_rx_desc);
++	rxq->size = rxq->desc_count * sizeof(union virtchnl2_rx_desc);
+ 
+ 	/* Allocate descriptors and also round up to nearest 4K */
+ 	rxq->size = ALIGN(rxq->size, 4096);
+@@ -735,7 +858,35 @@ static int idpf_rx_desc_alloc(struct idpf_queue *rxq, bool bufq, s32 q_model)
+ 	rxq->next_to_alloc = 0;
+ 	rxq->next_to_clean = 0;
+ 	rxq->next_to_use = 0;
+-	set_bit(__IDPF_Q_GEN_CHK, rxq->flags);
++	idpf_queue_set(GEN_CHK, rxq);
++
++	return 0;
++}
++
++/**
++ * idpf_bufq_desc_alloc - Allocate buffer queue descriptor ring
++ * @vport: vport to allocate resources for
++ * @bufq: buffer queue for which the resources are set up
++ *
++ * Return: 0 on success, -ENOMEM on failure.
++ */
++static int idpf_bufq_desc_alloc(const struct idpf_vport *vport,
++				struct idpf_buf_queue *bufq)
++{
++	struct device *dev = &vport->adapter->pdev->dev;
++
++	bufq->size = array_size(bufq->desc_count, sizeof(*bufq->split_buf));
++
++	bufq->split_buf = dma_alloc_coherent(dev, bufq->size, &bufq->dma,
++					     GFP_KERNEL);
++	if (!bufq->split_buf)
++		return -ENOMEM;
++
++	bufq->next_to_alloc = 0;
++	bufq->next_to_clean = 0;
++	bufq->next_to_use = 0;
++
++	idpf_queue_set(GEN_CHK, bufq);
+ 
+ 	return 0;
+ }
+@@ -748,9 +899,7 @@ static int idpf_rx_desc_alloc(struct idpf_queue *rxq, bool bufq, s32 q_model)
+  */
+ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport)
+ {
+-	struct device *dev = &vport->adapter->pdev->dev;
+ 	struct idpf_rxq_group *rx_qgrp;
+-	struct idpf_queue *q;
+ 	int i, j, err;
+ 	u16 num_rxq;
+ 
+@@ -762,13 +911,17 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport)
+ 			num_rxq = rx_qgrp->singleq.num_rxq;
+ 
+ 		for (j = 0; j < num_rxq; j++) {
++			struct idpf_rx_queue *q;
++
+ 			if (idpf_is_queue_model_split(vport->rxq_model))
+ 				q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+ 			else
+ 				q = rx_qgrp->singleq.rxqs[j];
+-			err = idpf_rx_desc_alloc(q, false, vport->rxq_model);
++
++			err = idpf_rx_desc_alloc(vport, q);
+ 			if (err) {
+-				dev_err(dev, "Memory allocation for Rx Queue %u failed\n",
++				pci_err(vport->adapter->pdev,
++					"Memory allocation for Rx Queue %u failed\n",
+ 					i);
+ 				goto err_out;
+ 			}
+@@ -778,10 +931,14 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport)
+ 			continue;
+ 
+ 		for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
++			struct idpf_buf_queue *q;
++
+ 			q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+-			err = idpf_rx_desc_alloc(q, true, vport->rxq_model);
++
++			err = idpf_bufq_desc_alloc(vport, q);
+ 			if (err) {
+-				dev_err(dev, "Memory allocation for Rx Buffer Queue %u failed\n",
++				pci_err(vport->adapter->pdev,
++					"Memory allocation for Rx Buffer Queue %u failed\n",
+ 					i);
+ 				goto err_out;
+ 			}
+@@ -802,11 +959,16 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport)
+  */
+ static void idpf_txq_group_rel(struct idpf_vport *vport)
+ {
++	bool split, flow_sch_en;
+ 	int i, j;
+ 
+ 	if (!vport->txq_grps)
+ 		return;
+ 
++	split = idpf_is_queue_model_split(vport->txq_model);
++	flow_sch_en = !idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS,
++				       VIRTCHNL2_CAP_SPLITQ_QSCHED);
++
+ 	for (i = 0; i < vport->num_txq_grp; i++) {
+ 		struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
+ 
+@@ -814,8 +976,15 @@ static void idpf_txq_group_rel(struct idpf_vport *vport)
+ 			kfree(txq_grp->txqs[j]);
+ 			txq_grp->txqs[j] = NULL;
+ 		}
++
++		if (!split)
++			continue;
++
+ 		kfree(txq_grp->complq);
+ 		txq_grp->complq = NULL;
++
++		if (flow_sch_en)
++			kfree(txq_grp->stashes);
+ 	}
+ 	kfree(vport->txq_grps);
+ 	vport->txq_grps = NULL;
+@@ -919,7 +1088,7 @@ static int idpf_vport_init_fast_path_txqs(struct idpf_vport *vport)
+ {
+ 	int i, j, k = 0;
+ 
+-	vport->txqs = kcalloc(vport->num_txq, sizeof(struct idpf_queue *),
++	vport->txqs = kcalloc(vport->num_txq, sizeof(*vport->txqs),
+ 			      GFP_KERNEL);
+ 
+ 	if (!vport->txqs)
+@@ -1137,7 +1306,8 @@ static void idpf_vport_calc_numq_per_grp(struct idpf_vport *vport,
+  * @q: rx queue for which descids are set
+  *
+  */
+-static void idpf_rxq_set_descids(struct idpf_vport *vport, struct idpf_queue *q)
++static void idpf_rxq_set_descids(const struct idpf_vport *vport,
++				 struct idpf_rx_queue *q)
+ {
+ 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ 		q->rxdids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+@@ -1158,20 +1328,22 @@ static void idpf_rxq_set_descids(struct idpf_vport *vport, struct idpf_queue *q)
+  */
+ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
+ {
+-	bool flow_sch_en;
+-	int err, i;
++	bool split, flow_sch_en;
++	int i;
+ 
+ 	vport->txq_grps = kcalloc(vport->num_txq_grp,
+ 				  sizeof(*vport->txq_grps), GFP_KERNEL);
+ 	if (!vport->txq_grps)
+ 		return -ENOMEM;
+ 
++	split = idpf_is_queue_model_split(vport->txq_model);
+ 	flow_sch_en = !idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS,
+ 				       VIRTCHNL2_CAP_SPLITQ_QSCHED);
+ 
+ 	for (i = 0; i < vport->num_txq_grp; i++) {
+ 		struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ 		struct idpf_adapter *adapter = vport->adapter;
++		struct idpf_txq_stash *stashes;
+ 		int j;
+ 
+ 		tx_qgrp->vport = vport;
+@@ -1180,45 +1352,62 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
+ 		for (j = 0; j < tx_qgrp->num_txq; j++) {
+ 			tx_qgrp->txqs[j] = kzalloc(sizeof(*tx_qgrp->txqs[j]),
+ 						   GFP_KERNEL);
+-			if (!tx_qgrp->txqs[j]) {
+-				err = -ENOMEM;
++			if (!tx_qgrp->txqs[j])
+ 				goto err_alloc;
+-			}
++		}
++
++		if (split && flow_sch_en) {
++			stashes = kcalloc(num_txq, sizeof(*stashes),
++					  GFP_KERNEL);
++			if (!stashes)
++				goto err_alloc;
++
++			tx_qgrp->stashes = stashes;
+ 		}
+ 
+ 		for (j = 0; j < tx_qgrp->num_txq; j++) {
+-			struct idpf_queue *q = tx_qgrp->txqs[j];
++			struct idpf_tx_queue *q = tx_qgrp->txqs[j];
+ 
+ 			q->dev = &adapter->pdev->dev;
+ 			q->desc_count = vport->txq_desc_count;
+ 			q->tx_max_bufs = idpf_get_max_tx_bufs(adapter);
+ 			q->tx_min_pkt_len = idpf_get_min_tx_pkt_len(adapter);
+-			q->vport = vport;
++			q->netdev = vport->netdev;
+ 			q->txq_grp = tx_qgrp;
+-			hash_init(q->sched_buf_hash);
+ 
+-			if (flow_sch_en)
+-				set_bit(__IDPF_Q_FLOW_SCH_EN, q->flags);
++			if (!split) {
++				q->clean_budget = vport->compln_clean_budget;
++				idpf_queue_assign(CRC_EN, q,
++						  vport->crc_enable);
++			}
++
++			if (!flow_sch_en)
++				continue;
++
++			if (split) {
++				q->stash = &stashes[j];
++				hash_init(q->stash->sched_buf_hash);
++			}
++
++			idpf_queue_set(FLOW_SCH_EN, q);
+ 		}
+ 
+-		if (!idpf_is_queue_model_split(vport->txq_model))
++		if (!split)
+ 			continue;
+ 
+ 		tx_qgrp->complq = kcalloc(IDPF_COMPLQ_PER_GROUP,
+ 					  sizeof(*tx_qgrp->complq),
+ 					  GFP_KERNEL);
+-		if (!tx_qgrp->complq) {
+-			err = -ENOMEM;
++		if (!tx_qgrp->complq)
+ 			goto err_alloc;
+-		}
+ 
+-		tx_qgrp->complq->dev = &adapter->pdev->dev;
+ 		tx_qgrp->complq->desc_count = vport->complq_desc_count;
+-		tx_qgrp->complq->vport = vport;
+ 		tx_qgrp->complq->txq_grp = tx_qgrp;
++		tx_qgrp->complq->netdev = vport->netdev;
++		tx_qgrp->complq->clean_budget = vport->compln_clean_budget;
+ 
+ 		if (flow_sch_en)
+-			__set_bit(__IDPF_Q_FLOW_SCH_EN, tx_qgrp->complq->flags);
++			idpf_queue_set(FLOW_SCH_EN, tx_qgrp->complq);
+ 	}
+ 
+ 	return 0;
+@@ -1226,7 +1415,7 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
+ err_alloc:
+ 	idpf_txq_group_rel(vport);
+ 
+-	return err;
++	return -ENOMEM;
+ }
+ 
+ /**
+@@ -1238,8 +1427,6 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
+  */
+ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
+ {
+-	struct idpf_adapter *adapter = vport->adapter;
+-	struct idpf_queue *q;
+ 	int i, k, err = 0;
+ 	bool hs;
+ 
+@@ -1292,21 +1479,15 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
+ 			struct idpf_bufq_set *bufq_set =
+ 				&rx_qgrp->splitq.bufq_sets[j];
+ 			int swq_size = sizeof(struct idpf_sw_queue);
++			struct idpf_buf_queue *q;
+ 
+ 			q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+-			q->dev = &adapter->pdev->dev;
+ 			q->desc_count = vport->bufq_desc_count[j];
+-			q->vport = vport;
+-			q->rxq_grp = rx_qgrp;
+-			q->idx = j;
+ 			q->rx_buf_size = vport->bufq_size[j];
+ 			q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK;
+-			q->rx_buf_stride = IDPF_RX_BUF_STRIDE;
+ 
+-			if (hs) {
+-				q->rx_hsplit_en = true;
+-				q->rx_hbuf_size = IDPF_HDR_BUF_SIZE;
+-			}
++			idpf_queue_assign(HSPLIT_EN, q, hs);
++			q->rx_hbuf_size = hs ? IDPF_HDR_BUF_SIZE : 0;
+ 
+ 			bufq_set->num_refillqs = num_rxq;
+ 			bufq_set->refillqs = kcalloc(num_rxq, swq_size,
+@@ -1319,13 +1500,12 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
+ 				struct idpf_sw_queue *refillq =
+ 					&bufq_set->refillqs[k];
+ 
+-				refillq->dev = &vport->adapter->pdev->dev;
+ 				refillq->desc_count =
+ 					vport->bufq_desc_count[j];
+-				set_bit(__IDPF_Q_GEN_CHK, refillq->flags);
+-				set_bit(__IDPF_RFLQ_GEN_CHK, refillq->flags);
++				idpf_queue_set(GEN_CHK, refillq);
++				idpf_queue_set(RFL_GEN_CHK, refillq);
+ 				refillq->ring = kcalloc(refillq->desc_count,
+-							sizeof(u16),
++							sizeof(*refillq->ring),
+ 							GFP_KERNEL);
+ 				if (!refillq->ring) {
+ 					err = -ENOMEM;
+@@ -1336,27 +1516,27 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
+ 
+ skip_splitq_rx_init:
+ 		for (j = 0; j < num_rxq; j++) {
++			struct idpf_rx_queue *q;
++
+ 			if (!idpf_is_queue_model_split(vport->rxq_model)) {
+ 				q = rx_qgrp->singleq.rxqs[j];
+ 				goto setup_rxq;
+ 			}
+ 			q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+-			rx_qgrp->splitq.rxq_sets[j]->refillq0 =
++			rx_qgrp->splitq.rxq_sets[j]->refillq[0] =
+ 			      &rx_qgrp->splitq.bufq_sets[0].refillqs[j];
+ 			if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP)
+-				rx_qgrp->splitq.rxq_sets[j]->refillq1 =
++				rx_qgrp->splitq.rxq_sets[j]->refillq[1] =
+ 				      &rx_qgrp->splitq.bufq_sets[1].refillqs[j];
+ 
+-			if (hs) {
+-				q->rx_hsplit_en = true;
+-				q->rx_hbuf_size = IDPF_HDR_BUF_SIZE;
+-			}
++			idpf_queue_assign(HSPLIT_EN, q, hs);
++			q->rx_hbuf_size = hs ? IDPF_HDR_BUF_SIZE : 0;
+ 
+ setup_rxq:
+-			q->dev = &adapter->pdev->dev;
+ 			q->desc_count = vport->rxq_desc_count;
+-			q->vport = vport;
+-			q->rxq_grp = rx_qgrp;
++			q->rx_ptype_lkup = vport->rx_ptype_lkup;
++			q->netdev = vport->netdev;
++			q->bufq_sets = rx_qgrp->splitq.bufq_sets;
+ 			q->idx = (i * num_rxq) + j;
+ 			/* In splitq mode, RXQ buffer size should be
+ 			 * set to that of the first buffer queue
+@@ -1445,12 +1625,13 @@ int idpf_vport_queues_alloc(struct idpf_vport *vport)
+  * idpf_tx_handle_sw_marker - Handle queue marker packet
+  * @tx_q: tx queue to handle software marker
+  */
+-static void idpf_tx_handle_sw_marker(struct idpf_queue *tx_q)
++static void idpf_tx_handle_sw_marker(struct idpf_tx_queue *tx_q)
+ {
+-	struct idpf_vport *vport = tx_q->vport;
++	struct idpf_netdev_priv *priv = netdev_priv(tx_q->netdev);
++	struct idpf_vport *vport = priv->vport;
+ 	int i;
+ 
+-	clear_bit(__IDPF_Q_SW_MARKER, tx_q->flags);
++	idpf_queue_clear(SW_MARKER, tx_q);
+ 	/* Hardware must write marker packets to all queues associated with
+ 	 * completion queues. So check if all queues received marker packets
+ 	 */
+@@ -1458,7 +1639,7 @@ static void idpf_tx_handle_sw_marker(struct idpf_queue *tx_q)
+ 		/* If we're still waiting on any other TXQ marker completions,
+ 		 * just return now since we cannot wake up the marker_wq yet.
+ 		 */
+-		if (test_bit(__IDPF_Q_SW_MARKER, vport->txqs[i]->flags))
++		if (idpf_queue_has(SW_MARKER, vport->txqs[i]))
+ 			return;
+ 
+ 	/* Drain complete */
+@@ -1474,7 +1655,7 @@ static void idpf_tx_handle_sw_marker(struct idpf_queue *tx_q)
+  * @cleaned: pointer to stats struct to track cleaned packets/bytes
+  * @napi_budget: Used to determine if we are in netpoll
+  */
+-static void idpf_tx_splitq_clean_hdr(struct idpf_queue *tx_q,
++static void idpf_tx_splitq_clean_hdr(struct idpf_tx_queue *tx_q,
+ 				     struct idpf_tx_buf *tx_buf,
+ 				     struct idpf_cleaned_stats *cleaned,
+ 				     int napi_budget)
+@@ -1505,7 +1686,8 @@ static void idpf_tx_splitq_clean_hdr(struct idpf_queue *tx_q,
+  * @cleaned: pointer to stats struct to track cleaned packets/bytes
+  * @budget: Used to determine if we are in netpoll
+  */
+-static void idpf_tx_clean_stashed_bufs(struct idpf_queue *txq, u16 compl_tag,
++static void idpf_tx_clean_stashed_bufs(struct idpf_tx_queue *txq,
++				       u16 compl_tag,
+ 				       struct idpf_cleaned_stats *cleaned,
+ 				       int budget)
+ {
+@@ -1513,7 +1695,7 @@ static void idpf_tx_clean_stashed_bufs(struct idpf_queue *txq, u16 compl_tag,
+ 	struct hlist_node *tmp_buf;
+ 
+ 	/* Buffer completion */
+-	hash_for_each_possible_safe(txq->sched_buf_hash, stash, tmp_buf,
++	hash_for_each_possible_safe(txq->stash->sched_buf_hash, stash, tmp_buf,
+ 				    hlist, compl_tag) {
+ 		if (unlikely(stash->buf.compl_tag != (int)compl_tag))
+ 			continue;
+@@ -1530,7 +1712,7 @@ static void idpf_tx_clean_stashed_bufs(struct idpf_queue *txq, u16 compl_tag,
+ 		}
+ 
+ 		/* Push shadow buf back onto stack */
+-		idpf_buf_lifo_push(&txq->buf_stack, stash);
++		idpf_buf_lifo_push(&txq->stash->buf_stack, stash);
+ 
+ 		hash_del(&stash->hlist);
+ 	}
+@@ -1542,7 +1724,7 @@ static void idpf_tx_clean_stashed_bufs(struct idpf_queue *txq, u16 compl_tag,
+  * @txq: Tx queue to clean
+  * @tx_buf: buffer to store
+  */
+-static int idpf_stash_flow_sch_buffers(struct idpf_queue *txq,
++static int idpf_stash_flow_sch_buffers(struct idpf_tx_queue *txq,
+ 				       struct idpf_tx_buf *tx_buf)
+ {
+ 	struct idpf_tx_stash *stash;
+@@ -1551,10 +1733,10 @@ static int idpf_stash_flow_sch_buffers(struct idpf_queue *txq,
+ 		     !dma_unmap_len(tx_buf, len)))
+ 		return 0;
+ 
+-	stash = idpf_buf_lifo_pop(&txq->buf_stack);
++	stash = idpf_buf_lifo_pop(&txq->stash->buf_stack);
+ 	if (unlikely(!stash)) {
+ 		net_err_ratelimited("%s: No out-of-order TX buffers left!\n",
+-				    txq->vport->netdev->name);
++				    netdev_name(txq->netdev));
+ 
+ 		return -ENOMEM;
+ 	}
+@@ -1568,7 +1750,8 @@ static int idpf_stash_flow_sch_buffers(struct idpf_queue *txq,
+ 	stash->buf.compl_tag = tx_buf->compl_tag;
+ 
+ 	/* Add buffer to buf_hash table to be freed later */
+-	hash_add(txq->sched_buf_hash, &stash->hlist, stash->buf.compl_tag);
++	hash_add(txq->stash->sched_buf_hash, &stash->hlist,
++		 stash->buf.compl_tag);
+ 
+ 	memset(tx_buf, 0, sizeof(struct idpf_tx_buf));
+ 
+@@ -1584,7 +1767,7 @@ do {								\
+ 	if (unlikely(!(ntc))) {					\
+ 		ntc -= (txq)->desc_count;			\
+ 		buf = (txq)->tx_buf;				\
+-		desc = IDPF_FLEX_TX_DESC(txq, 0);		\
++		desc = &(txq)->flex_tx[0];			\
+ 	} else {						\
+ 		(buf)++;					\
+ 		(desc)++;					\
+@@ -1607,7 +1790,7 @@ do {								\
+  * and the buffers will be cleaned separately. The stats are not updated from
+  * this function when using flow-based scheduling.
+  */
+-static void idpf_tx_splitq_clean(struct idpf_queue *tx_q, u16 end,
++static void idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end,
+ 				 int napi_budget,
+ 				 struct idpf_cleaned_stats *cleaned,
+ 				 bool descs_only)
+@@ -1617,8 +1800,8 @@ static void idpf_tx_splitq_clean(struct idpf_queue *tx_q, u16 end,
+ 	s16 ntc = tx_q->next_to_clean;
+ 	struct idpf_tx_buf *tx_buf;
+ 
+-	tx_desc = IDPF_FLEX_TX_DESC(tx_q, ntc);
+-	next_pending_desc = IDPF_FLEX_TX_DESC(tx_q, end);
++	tx_desc = &tx_q->flex_tx[ntc];
++	next_pending_desc = &tx_q->flex_tx[end];
+ 	tx_buf = &tx_q->tx_buf[ntc];
+ 	ntc -= tx_q->desc_count;
+ 
+@@ -1703,7 +1886,7 @@ do {							\
+  * stashed. Returns the byte/segment count for the cleaned packet associated
+  * this completion tag.
+  */
+-static bool idpf_tx_clean_buf_ring(struct idpf_queue *txq, u16 compl_tag,
++static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16 compl_tag,
+ 				   struct idpf_cleaned_stats *cleaned,
+ 				   int budget)
+ {
+@@ -1772,14 +1955,14 @@ static bool idpf_tx_clean_buf_ring(struct idpf_queue *txq, u16 compl_tag,
+  *
+  * Returns bytes/packets cleaned
+  */
+-static void idpf_tx_handle_rs_completion(struct idpf_queue *txq,
++static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq,
+ 					 struct idpf_splitq_tx_compl_desc *desc,
+ 					 struct idpf_cleaned_stats *cleaned,
+ 					 int budget)
+ {
+ 	u16 compl_tag;
+ 
+-	if (!test_bit(__IDPF_Q_FLOW_SCH_EN, txq->flags)) {
++	if (!idpf_queue_has(FLOW_SCH_EN, txq)) {
+ 		u16 head = le16_to_cpu(desc->q_head_compl_tag.q_head);
+ 
+ 		return idpf_tx_splitq_clean(txq, head, budget, cleaned, false);
+@@ -1802,24 +1985,23 @@ static void idpf_tx_handle_rs_completion(struct idpf_queue *txq,
+  *
+  * Returns true if there's any budget left (e.g. the clean is finished)
+  */
+-static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget,
++static bool idpf_tx_clean_complq(struct idpf_compl_queue *complq, int budget,
+ 				 int *cleaned)
+ {
+ 	struct idpf_splitq_tx_compl_desc *tx_desc;
+-	struct idpf_vport *vport = complq->vport;
+ 	s16 ntc = complq->next_to_clean;
+ 	struct idpf_netdev_priv *np;
+ 	unsigned int complq_budget;
+ 	bool complq_ok = true;
+ 	int i;
+ 
+-	complq_budget = vport->compln_clean_budget;
+-	tx_desc = IDPF_SPLITQ_TX_COMPLQ_DESC(complq, ntc);
++	complq_budget = complq->clean_budget;
++	tx_desc = &complq->comp[ntc];
+ 	ntc -= complq->desc_count;
+ 
+ 	do {
+ 		struct idpf_cleaned_stats cleaned_stats = { };
+-		struct idpf_queue *tx_q;
++		struct idpf_tx_queue *tx_q;
+ 		int rel_tx_qid;
+ 		u16 hw_head;
+ 		u8 ctype;	/* completion type */
+@@ -1828,7 +2010,7 @@ static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget,
+ 		/* if the descriptor isn't done, no work yet to do */
+ 		gen = le16_get_bits(tx_desc->qid_comptype_gen,
+ 				    IDPF_TXD_COMPLQ_GEN_M);
+-		if (test_bit(__IDPF_Q_GEN_CHK, complq->flags) != gen)
++		if (idpf_queue_has(GEN_CHK, complq) != gen)
+ 			break;
+ 
+ 		/* Find necessary info of TX queue to clean buffers */
+@@ -1836,8 +2018,7 @@ static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget,
+ 					   IDPF_TXD_COMPLQ_QID_M);
+ 		if (rel_tx_qid >= complq->txq_grp->num_txq ||
+ 		    !complq->txq_grp->txqs[rel_tx_qid]) {
+-			dev_err(&complq->vport->adapter->pdev->dev,
+-				"TxQ not found\n");
++			netdev_err(complq->netdev, "TxQ not found\n");
+ 			goto fetch_next_desc;
+ 		}
+ 		tx_q = complq->txq_grp->txqs[rel_tx_qid];
+@@ -1860,15 +2041,14 @@ static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget,
+ 			idpf_tx_handle_sw_marker(tx_q);
+ 			break;
+ 		default:
+-			dev_err(&tx_q->vport->adapter->pdev->dev,
+-				"Unknown TX completion type: %d\n",
+-				ctype);
++			netdev_err(tx_q->netdev,
++				   "Unknown TX completion type: %d\n", ctype);
+ 			goto fetch_next_desc;
+ 		}
+ 
+ 		u64_stats_update_begin(&tx_q->stats_sync);
+-		u64_stats_add(&tx_q->q_stats.tx.packets, cleaned_stats.packets);
+-		u64_stats_add(&tx_q->q_stats.tx.bytes, cleaned_stats.bytes);
++		u64_stats_add(&tx_q->q_stats.packets, cleaned_stats.packets);
++		u64_stats_add(&tx_q->q_stats.bytes, cleaned_stats.bytes);
+ 		tx_q->cleaned_pkts += cleaned_stats.packets;
+ 		tx_q->cleaned_bytes += cleaned_stats.bytes;
+ 		complq->num_completions++;
+@@ -1879,8 +2059,8 @@ static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget,
+ 		ntc++;
+ 		if (unlikely(!ntc)) {
+ 			ntc -= complq->desc_count;
+-			tx_desc = IDPF_SPLITQ_TX_COMPLQ_DESC(complq, 0);
+-			change_bit(__IDPF_Q_GEN_CHK, complq->flags);
++			tx_desc = &complq->comp[0];
++			idpf_queue_change(GEN_CHK, complq);
+ 		}
+ 
+ 		prefetch(tx_desc);
+@@ -1896,9 +2076,9 @@ static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget,
+ 		     IDPF_TX_COMPLQ_OVERFLOW_THRESH(complq)))
+ 		complq_ok = false;
+ 
+-	np = netdev_priv(complq->vport->netdev);
++	np = netdev_priv(complq->netdev);
+ 	for (i = 0; i < complq->txq_grp->num_txq; ++i) {
+-		struct idpf_queue *tx_q = complq->txq_grp->txqs[i];
++		struct idpf_tx_queue *tx_q = complq->txq_grp->txqs[i];
+ 		struct netdev_queue *nq;
+ 		bool dont_wake;
+ 
+@@ -1909,11 +2089,11 @@ static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget,
+ 		*cleaned += tx_q->cleaned_pkts;
+ 
+ 		/* Update BQL */
+-		nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
++		nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
+ 
+ 		dont_wake = !complq_ok || IDPF_TX_BUF_RSV_LOW(tx_q) ||
+ 			    np->state != __IDPF_VPORT_UP ||
+-			    !netif_carrier_ok(tx_q->vport->netdev);
++			    !netif_carrier_ok(tx_q->netdev);
+ 		/* Check if the TXQ needs to and can be restarted */
+ 		__netif_txq_completed_wake(nq, tx_q->cleaned_pkts, tx_q->cleaned_bytes,
+ 					   IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH,
+@@ -1969,29 +2149,6 @@ void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
+ 	desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag);
+ }
+ 
+-/**
+- * idpf_tx_maybe_stop_common - 1st level check for common Tx stop conditions
+- * @tx_q: the queue to be checked
+- * @size: number of descriptors we want to assure is available
+- *
+- * Returns 0 if stop is not needed
+- */
+-int idpf_tx_maybe_stop_common(struct idpf_queue *tx_q, unsigned int size)
+-{
+-	struct netdev_queue *nq;
+-
+-	if (likely(IDPF_DESC_UNUSED(tx_q) >= size))
+-		return 0;
+-
+-	u64_stats_update_begin(&tx_q->stats_sync);
+-	u64_stats_inc(&tx_q->q_stats.tx.q_busy);
+-	u64_stats_update_end(&tx_q->stats_sync);
+-
+-	nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+-
+-	return netif_txq_maybe_stop(nq, IDPF_DESC_UNUSED(tx_q), size, size);
+-}
+-
+ /**
+  * idpf_tx_maybe_stop_splitq - 1st level check for Tx splitq stop conditions
+  * @tx_q: the queue to be checked
+@@ -1999,11 +2156,11 @@ int idpf_tx_maybe_stop_common(struct idpf_queue *tx_q, unsigned int size)
+  *
+  * Returns 0 if stop is not needed
+  */
+-static int idpf_tx_maybe_stop_splitq(struct idpf_queue *tx_q,
++static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q,
+ 				     unsigned int descs_needed)
+ {
+ 	if (idpf_tx_maybe_stop_common(tx_q, descs_needed))
+-		goto splitq_stop;
++		goto out;
+ 
+ 	/* If there are too many outstanding completions expected on the
+ 	 * completion queue, stop the TX queue to give the device some time to
+@@ -2022,10 +2179,12 @@ static int idpf_tx_maybe_stop_splitq(struct idpf_queue *tx_q,
+ 	return 0;
+ 
+ splitq_stop:
++	netif_stop_subqueue(tx_q->netdev, tx_q->idx);
++
++out:
+ 	u64_stats_update_begin(&tx_q->stats_sync);
+-	u64_stats_inc(&tx_q->q_stats.tx.q_busy);
++	u64_stats_inc(&tx_q->q_stats.q_busy);
+ 	u64_stats_update_end(&tx_q->stats_sync);
+-	netif_stop_subqueue(tx_q->vport->netdev, tx_q->idx);
+ 
+ 	return -EBUSY;
+ }
+@@ -2040,15 +2199,19 @@ static int idpf_tx_maybe_stop_splitq(struct idpf_queue *tx_q,
+  * to do a register write to update our queue status. We know this can only
+  * mean tail here as HW should be owning head for TX.
+  */
+-void idpf_tx_buf_hw_update(struct idpf_queue *tx_q, u32 val,
++void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
+ 			   bool xmit_more)
+ {
+ 	struct netdev_queue *nq;
+ 
+-	nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
++	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
+ 	tx_q->next_to_use = val;
+ 
+-	idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED);
++	if (idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED)) {
++		u64_stats_update_begin(&tx_q->stats_sync);
++		u64_stats_inc(&tx_q->q_stats.q_busy);
++		u64_stats_update_end(&tx_q->stats_sync);
++	}
+ 
+ 	/* Force memory writes to complete before letting h/w
+ 	 * know there are new descriptors to fetch.  (Only
+@@ -2069,7 +2232,7 @@ void idpf_tx_buf_hw_update(struct idpf_queue *tx_q, u32 val,
+  *
+  * Returns number of data descriptors needed for this skb.
+  */
+-unsigned int idpf_tx_desc_count_required(struct idpf_queue *txq,
++unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
+ 					 struct sk_buff *skb)
+ {
+ 	const struct skb_shared_info *shinfo;
+@@ -2102,7 +2265,7 @@ unsigned int idpf_tx_desc_count_required(struct idpf_queue *txq,
+ 
+ 		count = idpf_size_to_txd_count(skb->len);
+ 		u64_stats_update_begin(&txq->stats_sync);
+-		u64_stats_inc(&txq->q_stats.tx.linearize);
++		u64_stats_inc(&txq->q_stats.linearize);
+ 		u64_stats_update_end(&txq->stats_sync);
+ 	}
+ 
+@@ -2116,11 +2279,11 @@ unsigned int idpf_tx_desc_count_required(struct idpf_queue *txq,
+  * @first: original first buffer info buffer for packet
+  * @idx: starting point on ring to unwind
+  */
+-void idpf_tx_dma_map_error(struct idpf_queue *txq, struct sk_buff *skb,
++void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb,
+ 			   struct idpf_tx_buf *first, u16 idx)
+ {
+ 	u64_stats_update_begin(&txq->stats_sync);
+-	u64_stats_inc(&txq->q_stats.tx.dma_map_errs);
++	u64_stats_inc(&txq->q_stats.dma_map_errs);
+ 	u64_stats_update_end(&txq->stats_sync);
+ 
+ 	/* clear dma mappings for failed tx_buf map */
+@@ -2143,7 +2306,7 @@ void idpf_tx_dma_map_error(struct idpf_queue *txq, struct sk_buff *skb,
+ 		 * used one additional descriptor for a context
+ 		 * descriptor. Reset that here.
+ 		 */
+-		tx_desc = IDPF_FLEX_TX_DESC(txq, idx);
++		tx_desc = &txq->flex_tx[idx];
+ 		memset(tx_desc, 0, sizeof(struct idpf_flex_tx_ctx_desc));
+ 		if (idx == 0)
+ 			idx = txq->desc_count;
+@@ -2159,7 +2322,7 @@ void idpf_tx_dma_map_error(struct idpf_queue *txq, struct sk_buff *skb,
+  * @txq: the tx ring to wrap
+  * @ntu: ring index to bump
+  */
+-static unsigned int idpf_tx_splitq_bump_ntu(struct idpf_queue *txq, u16 ntu)
++static unsigned int idpf_tx_splitq_bump_ntu(struct idpf_tx_queue *txq, u16 ntu)
+ {
+ 	ntu++;
+ 
+@@ -2181,7 +2344,7 @@ static unsigned int idpf_tx_splitq_bump_ntu(struct idpf_queue *txq, u16 ntu)
+  * and gets a physical address for each memory location and programs
+  * it and the length into the transmit flex descriptor.
+  */
+-static void idpf_tx_splitq_map(struct idpf_queue *tx_q,
++static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
+ 			       struct idpf_tx_splitq_params *params,
+ 			       struct idpf_tx_buf *first)
+ {
+@@ -2202,7 +2365,7 @@ static void idpf_tx_splitq_map(struct idpf_queue *tx_q,
+ 	data_len = skb->data_len;
+ 	size = skb_headlen(skb);
+ 
+-	tx_desc = IDPF_FLEX_TX_DESC(tx_q, i);
++	tx_desc = &tx_q->flex_tx[i];
+ 
+ 	dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE);
+ 
+@@ -2275,7 +2438,7 @@ static void idpf_tx_splitq_map(struct idpf_queue *tx_q,
+ 			i++;
+ 
+ 			if (i == tx_q->desc_count) {
+-				tx_desc = IDPF_FLEX_TX_DESC(tx_q, 0);
++				tx_desc = &tx_q->flex_tx[0];
+ 				i = 0;
+ 				tx_q->compl_tag_cur_gen =
+ 					IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q);
+@@ -2320,7 +2483,7 @@ static void idpf_tx_splitq_map(struct idpf_queue *tx_q,
+ 		i++;
+ 
+ 		if (i == tx_q->desc_count) {
+-			tx_desc = IDPF_FLEX_TX_DESC(tx_q, 0);
++			tx_desc = &tx_q->flex_tx[0];
+ 			i = 0;
+ 			tx_q->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q);
+ 		}
+@@ -2348,7 +2511,7 @@ static void idpf_tx_splitq_map(struct idpf_queue *tx_q,
+ 	tx_q->txq_grp->num_completions_pending++;
+ 
+ 	/* record bytecount for BQL */
+-	nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
++	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
+ 	netdev_tx_sent_queue(nq, first->bytecount);
+ 
+ 	idpf_tx_buf_hw_update(tx_q, i, netdev_xmit_more());
+@@ -2525,8 +2688,8 @@ static bool __idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs)
+  * E.g.: a packet with 7 fragments can require 9 DMA transactions; 1 for TSO
+  * header, 1 for segment payload, and then 7 for the fragments.
+  */
+-bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
+-			unsigned int count)
++static bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
++			       unsigned int count)
+ {
+ 	if (likely(count < max_bufs))
+ 		return false;
+@@ -2544,7 +2707,7 @@ bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
+  * ring entry to reflect that this index is a context descriptor
+  */
+ static struct idpf_flex_tx_ctx_desc *
+-idpf_tx_splitq_get_ctx_desc(struct idpf_queue *txq)
++idpf_tx_splitq_get_ctx_desc(struct idpf_tx_queue *txq)
+ {
+ 	struct idpf_flex_tx_ctx_desc *desc;
+ 	int i = txq->next_to_use;
+@@ -2553,7 +2716,7 @@ idpf_tx_splitq_get_ctx_desc(struct idpf_queue *txq)
+ 	txq->tx_buf[i].compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG;
+ 
+ 	/* grab the next descriptor */
+-	desc = IDPF_FLEX_TX_CTX_DESC(txq, i);
++	desc = &txq->flex_ctx[i];
+ 	txq->next_to_use = idpf_tx_splitq_bump_ntu(txq, i);
+ 
+ 	return desc;
+@@ -2564,10 +2727,10 @@ idpf_tx_splitq_get_ctx_desc(struct idpf_queue *txq)
+  * @tx_q: queue to send buffer on
+  * @skb: pointer to skb
+  */
+-netdev_tx_t idpf_tx_drop_skb(struct idpf_queue *tx_q, struct sk_buff *skb)
++netdev_tx_t idpf_tx_drop_skb(struct idpf_tx_queue *tx_q, struct sk_buff *skb)
+ {
+ 	u64_stats_update_begin(&tx_q->stats_sync);
+-	u64_stats_inc(&tx_q->q_stats.tx.skb_drops);
++	u64_stats_inc(&tx_q->q_stats.skb_drops);
+ 	u64_stats_update_end(&tx_q->stats_sync);
+ 
+ 	idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+@@ -2585,7 +2748,7 @@ netdev_tx_t idpf_tx_drop_skb(struct idpf_queue *tx_q, struct sk_buff *skb)
+  * Returns NETDEV_TX_OK if sent, else an error code
+  */
+ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
+-					struct idpf_queue *tx_q)
++					struct idpf_tx_queue *tx_q)
+ {
+ 	struct idpf_tx_splitq_params tx_params = { };
+ 	struct idpf_tx_buf *first;
+@@ -2625,7 +2788,7 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
+ 		ctx_desc->tso.qw0.hdr_len = tx_params.offload.tso_hdr_len;
+ 
+ 		u64_stats_update_begin(&tx_q->stats_sync);
+-		u64_stats_inc(&tx_q->q_stats.tx.lso_pkts);
++		u64_stats_inc(&tx_q->q_stats.lso_pkts);
+ 		u64_stats_update_end(&tx_q->stats_sync);
+ 	}
+ 
+@@ -2642,7 +2805,7 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
+ 		first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN);
+ 	}
+ 
+-	if (test_bit(__IDPF_Q_FLOW_SCH_EN, tx_q->flags)) {
++	if (idpf_queue_has(FLOW_SCH_EN, tx_q)) {
+ 		tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE;
+ 		tx_params.eop_cmd = IDPF_TXD_FLEX_FLOW_CMD_EOP;
+ 		/* Set the RE bit to catch any packets that may have not been
+@@ -2672,17 +2835,16 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
+ }
+ 
+ /**
+- * idpf_tx_splitq_start - Selects the right Tx queue to send buffer
++ * idpf_tx_start - Selects the right Tx queue to send buffer
+  * @skb: send buffer
+  * @netdev: network interface device structure
+  *
+  * Returns NETDEV_TX_OK if sent, else an error code
+  */
+-netdev_tx_t idpf_tx_splitq_start(struct sk_buff *skb,
+-				 struct net_device *netdev)
++netdev_tx_t idpf_tx_start(struct sk_buff *skb, struct net_device *netdev)
+ {
+ 	struct idpf_vport *vport = idpf_netdev_to_vport(netdev);
+-	struct idpf_queue *tx_q;
++	struct idpf_tx_queue *tx_q;
+ 
+ 	if (unlikely(skb_get_queue_mapping(skb) >= vport->num_txq)) {
+ 		dev_kfree_skb_any(skb);
+@@ -2701,7 +2863,10 @@ netdev_tx_t idpf_tx_splitq_start(struct sk_buff *skb,
+ 		return NETDEV_TX_OK;
+ 	}
+ 
+-	return idpf_tx_splitq_frame(skb, tx_q);
++	if (idpf_is_queue_model_split(vport->txq_model))
++		return idpf_tx_splitq_frame(skb, tx_q);
++	else
++		return idpf_tx_singleq_frame(skb, tx_q);
+ }
+ 
+ /**
+@@ -2735,13 +2900,14 @@ enum pkt_hash_types idpf_ptype_to_htype(const struct idpf_rx_ptype_decoded *deco
+  * @rx_desc: Receive descriptor
+  * @decoded: Decoded Rx packet type related fields
+  */
+-static void idpf_rx_hash(struct idpf_queue *rxq, struct sk_buff *skb,
+-			 struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
+-			 struct idpf_rx_ptype_decoded *decoded)
++static void
++idpf_rx_hash(const struct idpf_rx_queue *rxq, struct sk_buff *skb,
++	     const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
++	     struct idpf_rx_ptype_decoded *decoded)
+ {
+ 	u32 hash;
+ 
+-	if (unlikely(!idpf_is_feature_ena(rxq->vport, NETIF_F_RXHASH)))
++	if (unlikely(!(rxq->netdev->features & NETIF_F_RXHASH)))
+ 		return;
+ 
+ 	hash = le16_to_cpu(rx_desc->hash1) |
+@@ -2760,14 +2926,14 @@ static void idpf_rx_hash(struct idpf_queue *rxq, struct sk_buff *skb,
+  *
+  * skb->protocol must be set before this function is called
+  */
+-static void idpf_rx_csum(struct idpf_queue *rxq, struct sk_buff *skb,
++static void idpf_rx_csum(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+ 			 struct idpf_rx_csum_decoded *csum_bits,
+ 			 struct idpf_rx_ptype_decoded *decoded)
+ {
+ 	bool ipv4, ipv6;
+ 
+ 	/* check if Rx checksum is enabled */
+-	if (unlikely(!idpf_is_feature_ena(rxq->vport, NETIF_F_RXCSUM)))
++	if (unlikely(!(rxq->netdev->features & NETIF_F_RXCSUM)))
+ 		return;
+ 
+ 	/* check if HW has decoded the packet and checksum */
+@@ -2814,7 +2980,7 @@ static void idpf_rx_csum(struct idpf_queue *rxq, struct sk_buff *skb,
+ 
+ checksum_fail:
+ 	u64_stats_update_begin(&rxq->stats_sync);
+-	u64_stats_inc(&rxq->q_stats.rx.hw_csum_err);
++	u64_stats_inc(&rxq->q_stats.hw_csum_err);
+ 	u64_stats_update_end(&rxq->stats_sync);
+ }
+ 
+@@ -2824,8 +2990,9 @@ static void idpf_rx_csum(struct idpf_queue *rxq, struct sk_buff *skb,
+  * @csum: structure to extract checksum fields
+  *
+  **/
+-static void idpf_rx_splitq_extract_csum_bits(struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
+-					     struct idpf_rx_csum_decoded *csum)
++static void
++idpf_rx_splitq_extract_csum_bits(const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
++				 struct idpf_rx_csum_decoded *csum)
+ {
+ 	u8 qword0, qword1;
+ 
+@@ -2860,8 +3027,8 @@ static void idpf_rx_splitq_extract_csum_bits(struct virtchnl2_rx_flex_desc_adv_n
+  * Populate the skb fields with the total number of RSC segments, RSC payload
+  * length and packet type.
+  */
+-static int idpf_rx_rsc(struct idpf_queue *rxq, struct sk_buff *skb,
+-		       struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
++static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
++		       const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
+ 		       struct idpf_rx_ptype_decoded *decoded)
+ {
+ 	u16 rsc_segments, rsc_seg_len;
+@@ -2914,7 +3081,7 @@ static int idpf_rx_rsc(struct idpf_queue *rxq, struct sk_buff *skb,
+ 	tcp_gro_complete(skb);
+ 
+ 	u64_stats_update_begin(&rxq->stats_sync);
+-	u64_stats_inc(&rxq->q_stats.rx.rsc_pkts);
++	u64_stats_inc(&rxq->q_stats.rsc_pkts);
+ 	u64_stats_update_end(&rxq->stats_sync);
+ 
+ 	return 0;
+@@ -2930,9 +3097,9 @@ static int idpf_rx_rsc(struct idpf_queue *rxq, struct sk_buff *skb,
+  * order to populate the hash, checksum, protocol, and
+  * other fields within the skb.
+  */
+-static int idpf_rx_process_skb_fields(struct idpf_queue *rxq,
+-				      struct sk_buff *skb,
+-				      struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
++static int
++idpf_rx_process_skb_fields(struct idpf_rx_queue *rxq, struct sk_buff *skb,
++			   const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+ {
+ 	struct idpf_rx_csum_decoded csum_bits = { };
+ 	struct idpf_rx_ptype_decoded decoded;
+@@ -2940,19 +3107,13 @@ static int idpf_rx_process_skb_fields(struct idpf_queue *rxq,
+ 
+ 	rx_ptype = le16_get_bits(rx_desc->ptype_err_fflags0,
+ 				 VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M);
+-
+-	skb->protocol = eth_type_trans(skb, rxq->vport->netdev);
+-
+-	decoded = rxq->vport->rx_ptype_lkup[rx_ptype];
+-	/* If we don't know the ptype we can't do anything else with it. Just
+-	 * pass it up the stack as-is.
+-	 */
+-	if (!decoded.known)
+-		return 0;
++	decoded = rxq->rx_ptype_lkup[rx_ptype];
+ 
+ 	/* process RSS/hash */
+ 	idpf_rx_hash(rxq, skb, rx_desc, &decoded);
+ 
++	skb->protocol = eth_type_trans(skb, rxq->netdev);
++
+ 	if (le16_get_bits(rx_desc->hdrlen_flags,
+ 			  VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M))
+ 		return idpf_rx_rsc(rxq, skb, rx_desc, &decoded);
+@@ -2992,7 +3153,7 @@ void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb,
+  * data from the current receive descriptor, taking care to set up the
+  * skb correctly.
+  */
+-struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq,
++struct sk_buff *idpf_rx_construct_skb(const struct idpf_rx_queue *rxq,
+ 				      struct idpf_rx_buf *rx_buf,
+ 				      unsigned int size)
+ {
+@@ -3005,7 +3166,7 @@ struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq,
+ 	/* prefetch first cache line of first page */
+ 	net_prefetch(va);
+ 	/* allocate a skb to store the frags */
+-	skb = napi_alloc_skb(&rxq->q_vector->napi, IDPF_RX_HDR_SIZE);
++	skb = napi_alloc_skb(rxq->napi, IDPF_RX_HDR_SIZE);
+ 	if (unlikely(!skb)) {
+ 		idpf_rx_put_page(rx_buf);
+ 
+@@ -3052,14 +3213,14 @@ struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq,
+  * the current receive descriptor, taking care to set up the skb correctly.
+  * This specifically uses a header buffer to start building the skb.
+  */
+-static struct sk_buff *idpf_rx_hdr_construct_skb(struct idpf_queue *rxq,
+-						 const void *va,
+-						 unsigned int size)
++static struct sk_buff *
++idpf_rx_hdr_construct_skb(const struct idpf_rx_queue *rxq, const void *va,
++			  unsigned int size)
+ {
+ 	struct sk_buff *skb;
+ 
+ 	/* allocate a skb to store the frags */
+-	skb = napi_alloc_skb(&rxq->q_vector->napi, size);
++	skb = napi_alloc_skb(rxq->napi, size);
+ 	if (unlikely(!skb))
+ 		return NULL;
+ 
+@@ -3115,10 +3276,10 @@ static bool idpf_rx_splitq_is_eop(struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_de
+  *
+  * Returns amount of work completed
+  */
+-static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
++static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
+ {
+ 	int total_rx_bytes = 0, total_rx_pkts = 0;
+-	struct idpf_queue *rx_bufq = NULL;
++	struct idpf_buf_queue *rx_bufq = NULL;
+ 	struct sk_buff *skb = rxq->skb;
+ 	u16 ntc = rxq->next_to_clean;
+ 
+@@ -3128,7 +3289,6 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
+ 		struct idpf_sw_queue *refillq = NULL;
+ 		struct idpf_rxq_set *rxq_set = NULL;
+ 		struct idpf_rx_buf *rx_buf = NULL;
+-		union virtchnl2_rx_desc *desc;
+ 		unsigned int pkt_len = 0;
+ 		unsigned int hdr_len = 0;
+ 		u16 gen_id, buf_id = 0;
+@@ -3138,8 +3298,7 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
+ 		u8 rxdid;
+ 
+ 		/* get the Rx desc from Rx queue based on 'next_to_clean' */
+-		desc = IDPF_RX_DESC(rxq, ntc);
+-		rx_desc = (struct virtchnl2_rx_flex_desc_adv_nic_3 *)desc;
++		rx_desc = &rxq->rx[ntc].flex_adv_nic_3_wb;
+ 
+ 		/* This memory barrier is needed to keep us from reading
+ 		 * any other fields out of the rx_desc
+@@ -3150,7 +3309,7 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
+ 		gen_id = le16_get_bits(rx_desc->pktlen_gen_bufq_id,
+ 				       VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M);
+ 
+-		if (test_bit(__IDPF_Q_GEN_CHK, rxq->flags) != gen_id)
++		if (idpf_queue_has(GEN_CHK, rxq) != gen_id)
+ 			break;
+ 
+ 		rxdid = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M,
+@@ -3158,7 +3317,7 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
+ 		if (rxdid != VIRTCHNL2_RXDID_2_FLEX_SPLITQ) {
+ 			IDPF_RX_BUMP_NTC(rxq, ntc);
+ 			u64_stats_update_begin(&rxq->stats_sync);
+-			u64_stats_inc(&rxq->q_stats.rx.bad_descs);
++			u64_stats_inc(&rxq->q_stats.bad_descs);
+ 			u64_stats_update_end(&rxq->stats_sync);
+ 			continue;
+ 		}
+@@ -3176,7 +3335,7 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
+ 			 * data/payload buffer.
+ 			 */
+ 			u64_stats_update_begin(&rxq->stats_sync);
+-			u64_stats_inc(&rxq->q_stats.rx.hsplit_buf_ovf);
++			u64_stats_inc(&rxq->q_stats.hsplit_buf_ovf);
+ 			u64_stats_update_end(&rxq->stats_sync);
+ 			goto bypass_hsplit;
+ 		}
+@@ -3189,13 +3348,10 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
+ 					VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M);
+ 
+ 		rxq_set = container_of(rxq, struct idpf_rxq_set, rxq);
+-		if (!bufq_id)
+-			refillq = rxq_set->refillq0;
+-		else
+-			refillq = rxq_set->refillq1;
++		refillq = rxq_set->refillq[bufq_id];
+ 
+ 		/* retrieve buffer from the rxq */
+-		rx_bufq = &rxq->rxq_grp->splitq.bufq_sets[bufq_id].bufq;
++		rx_bufq = &rxq->bufq_sets[bufq_id].bufq;
+ 
+ 		buf_id = le16_to_cpu(rx_desc->buf_id);
+ 
+@@ -3207,7 +3363,7 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
+ 
+ 			skb = idpf_rx_hdr_construct_skb(rxq, va, hdr_len);
+ 			u64_stats_update_begin(&rxq->stats_sync);
+-			u64_stats_inc(&rxq->q_stats.rx.hsplit_pkts);
++			u64_stats_inc(&rxq->q_stats.hsplit_pkts);
+ 			u64_stats_update_end(&rxq->stats_sync);
+ 		}
+ 
+@@ -3250,7 +3406,7 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
+ 		}
+ 
+ 		/* send completed skb up the stack */
+-		napi_gro_receive(&rxq->q_vector->napi, skb);
++		napi_gro_receive(rxq->napi, skb);
+ 		skb = NULL;
+ 
+ 		/* update budget accounting */
+@@ -3261,8 +3417,8 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
+ 
+ 	rxq->skb = skb;
+ 	u64_stats_update_begin(&rxq->stats_sync);
+-	u64_stats_add(&rxq->q_stats.rx.packets, total_rx_pkts);
+-	u64_stats_add(&rxq->q_stats.rx.bytes, total_rx_bytes);
++	u64_stats_add(&rxq->q_stats.packets, total_rx_pkts);
++	u64_stats_add(&rxq->q_stats.bytes, total_rx_bytes);
+ 	u64_stats_update_end(&rxq->stats_sync);
+ 
+ 	/* guarantee a trip back through this routine if there was a failure */
+@@ -3272,19 +3428,16 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
+ /**
+  * idpf_rx_update_bufq_desc - Update buffer queue descriptor
+  * @bufq: Pointer to the buffer queue
+- * @refill_desc: SW Refill queue descriptor containing buffer ID
++ * @buf_id: buffer ID
+  * @buf_desc: Buffer queue descriptor
+  *
+  * Return 0 on success and negative on failure.
+  */
+-static int idpf_rx_update_bufq_desc(struct idpf_queue *bufq, u16 refill_desc,
++static int idpf_rx_update_bufq_desc(struct idpf_buf_queue *bufq, u32 buf_id,
+ 				    struct virtchnl2_splitq_rx_buf_desc *buf_desc)
+ {
+ 	struct idpf_rx_buf *buf;
+ 	dma_addr_t addr;
+-	u16 buf_id;
+-
+-	buf_id = FIELD_GET(IDPF_RX_BI_BUFID_M, refill_desc);
+ 
+ 	buf = &bufq->rx_buf.buf[buf_id];
+ 
+@@ -3295,7 +3448,7 @@ static int idpf_rx_update_bufq_desc(struct idpf_queue *bufq, u16 refill_desc,
+ 	buf_desc->pkt_addr = cpu_to_le64(addr);
+ 	buf_desc->qword0.buf_id = cpu_to_le16(buf_id);
+ 
+-	if (!bufq->rx_hsplit_en)
++	if (!idpf_queue_has(HSPLIT_EN, bufq))
+ 		return 0;
+ 
+ 	buf_desc->hdr_addr = cpu_to_le64(bufq->rx_buf.hdr_buf_pa +
+@@ -3311,38 +3464,37 @@ static int idpf_rx_update_bufq_desc(struct idpf_queue *bufq, u16 refill_desc,
+  *
+  * This function takes care of the buffer refill management
+  */
+-static void idpf_rx_clean_refillq(struct idpf_queue *bufq,
++static void idpf_rx_clean_refillq(struct idpf_buf_queue *bufq,
+ 				  struct idpf_sw_queue *refillq)
+ {
+ 	struct virtchnl2_splitq_rx_buf_desc *buf_desc;
+ 	u16 bufq_nta = bufq->next_to_alloc;
+ 	u16 ntc = refillq->next_to_clean;
+ 	int cleaned = 0;
+-	u16 gen;
+ 
+-	buf_desc = IDPF_SPLITQ_RX_BUF_DESC(bufq, bufq_nta);
++	buf_desc = &bufq->split_buf[bufq_nta];
+ 
+ 	/* make sure we stop at ring wrap in the unlikely case ring is full */
+ 	while (likely(cleaned < refillq->desc_count)) {
+-		u16 refill_desc = IDPF_SPLITQ_RX_BI_DESC(refillq, ntc);
++		u32 buf_id, refill_desc = refillq->ring[ntc];
+ 		bool failure;
+ 
+-		gen = FIELD_GET(IDPF_RX_BI_GEN_M, refill_desc);
+-		if (test_bit(__IDPF_RFLQ_GEN_CHK, refillq->flags) != gen)
++		if (idpf_queue_has(RFL_GEN_CHK, refillq) !=
++		    !!(refill_desc & IDPF_RX_BI_GEN_M))
+ 			break;
+ 
+-		failure = idpf_rx_update_bufq_desc(bufq, refill_desc,
+-						   buf_desc);
++		buf_id = FIELD_GET(IDPF_RX_BI_BUFID_M, refill_desc);
++		failure = idpf_rx_update_bufq_desc(bufq, buf_id, buf_desc);
+ 		if (failure)
+ 			break;
+ 
+ 		if (unlikely(++ntc == refillq->desc_count)) {
+-			change_bit(__IDPF_RFLQ_GEN_CHK, refillq->flags);
++			idpf_queue_change(RFL_GEN_CHK, refillq);
+ 			ntc = 0;
+ 		}
+ 
+ 		if (unlikely(++bufq_nta == bufq->desc_count)) {
+-			buf_desc = IDPF_SPLITQ_RX_BUF_DESC(bufq, 0);
++			buf_desc = &bufq->split_buf[0];
+ 			bufq_nta = 0;
+ 		} else {
+ 			buf_desc++;
+@@ -3376,7 +3528,7 @@ static void idpf_rx_clean_refillq(struct idpf_queue *bufq,
+  * this vector.  Returns true if clean is complete within budget, false
+  * otherwise.
+  */
+-static void idpf_rx_clean_refillq_all(struct idpf_queue *bufq)
++static void idpf_rx_clean_refillq_all(struct idpf_buf_queue *bufq)
+ {
+ 	struct idpf_bufq_set *bufq_set;
+ 	int i;
+@@ -3439,6 +3591,8 @@ void idpf_vport_intr_rel(struct idpf_vport *vport)
+ 	for (u32 v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
+ 		struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx];
+ 
++		kfree(q_vector->complq);
++		q_vector->complq = NULL;
+ 		kfree(q_vector->bufq);
+ 		q_vector->bufq = NULL;
+ 		kfree(q_vector->tx);
+@@ -3557,13 +3711,13 @@ static void idpf_net_dim(struct idpf_q_vector *q_vector)
+ 		goto check_rx_itr;
+ 
+ 	for (i = 0, packets = 0, bytes = 0; i < q_vector->num_txq; i++) {
+-		struct idpf_queue *txq = q_vector->tx[i];
++		struct idpf_tx_queue *txq = q_vector->tx[i];
+ 		unsigned int start;
+ 
+ 		do {
+ 			start = u64_stats_fetch_begin(&txq->stats_sync);
+-			packets += u64_stats_read(&txq->q_stats.tx.packets);
+-			bytes += u64_stats_read(&txq->q_stats.tx.bytes);
++			packets += u64_stats_read(&txq->q_stats.packets);
++			bytes += u64_stats_read(&txq->q_stats.bytes);
+ 		} while (u64_stats_fetch_retry(&txq->stats_sync, start));
+ 	}
+ 
+@@ -3576,13 +3730,13 @@ static void idpf_net_dim(struct idpf_q_vector *q_vector)
+ 		return;
+ 
+ 	for (i = 0, packets = 0, bytes = 0; i < q_vector->num_rxq; i++) {
+-		struct idpf_queue *rxq = q_vector->rx[i];
++		struct idpf_rx_queue *rxq = q_vector->rx[i];
+ 		unsigned int start;
+ 
+ 		do {
+ 			start = u64_stats_fetch_begin(&rxq->stats_sync);
+-			packets += u64_stats_read(&rxq->q_stats.rx.packets);
+-			bytes += u64_stats_read(&rxq->q_stats.rx.bytes);
++			packets += u64_stats_read(&rxq->q_stats.packets);
++			bytes += u64_stats_read(&rxq->q_stats.bytes);
+ 		} while (u64_stats_fetch_retry(&rxq->stats_sync, start));
+ 	}
+ 
+@@ -3826,16 +3980,17 @@ static void idpf_vport_intr_napi_ena_all(struct idpf_vport *vport)
+ static bool idpf_tx_splitq_clean_all(struct idpf_q_vector *q_vec,
+ 				     int budget, int *cleaned)
+ {
+-	u16 num_txq = q_vec->num_txq;
++	u16 num_complq = q_vec->num_complq;
+ 	bool clean_complete = true;
+ 	int i, budget_per_q;
+ 
+-	if (unlikely(!num_txq))
++	if (unlikely(!num_complq))
+ 		return true;
+ 
+-	budget_per_q = DIV_ROUND_UP(budget, num_txq);
+-	for (i = 0; i < num_txq; i++)
+-		clean_complete &= idpf_tx_clean_complq(q_vec->tx[i],
++	budget_per_q = DIV_ROUND_UP(budget, num_complq);
++
++	for (i = 0; i < num_complq; i++)
++		clean_complete &= idpf_tx_clean_complq(q_vec->complq[i],
+ 						       budget_per_q, cleaned);
+ 
+ 	return clean_complete;
+@@ -3862,7 +4017,7 @@ static bool idpf_rx_splitq_clean_all(struct idpf_q_vector *q_vec, int budget,
+ 	 */
+ 	budget_per_q = num_rxq ? max(budget / num_rxq, 1) : 0;
+ 	for (i = 0; i < num_rxq; i++) {
+-		struct idpf_queue *rxq = q_vec->rx[i];
++		struct idpf_rx_queue *rxq = q_vec->rx[i];
+ 		int pkts_cleaned_per_q;
+ 
+ 		pkts_cleaned_per_q = idpf_rx_splitq_clean(rxq, budget_per_q);
+@@ -3917,8 +4072,8 @@ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget)
+ 	 * queues virtchnl message, as the interrupts will be disabled after
+ 	 * that
+ 	 */
+-	if (unlikely(q_vector->num_txq && test_bit(__IDPF_Q_POLL_MODE,
+-						   q_vector->tx[0]->flags)))
++	if (unlikely(q_vector->num_txq && idpf_queue_has(POLL_MODE,
++							 q_vector->tx[0])))
+ 		return budget;
+ 	else
+ 		return work_done;
+@@ -3932,27 +4087,28 @@ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget)
+  */
+ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport)
+ {
++	bool split = idpf_is_queue_model_split(vport->rxq_model);
+ 	u16 num_txq_grp = vport->num_txq_grp;
+-	int i, j, qv_idx, bufq_vidx = 0;
+ 	struct idpf_rxq_group *rx_qgrp;
+ 	struct idpf_txq_group *tx_qgrp;
+-	struct idpf_queue *q, *bufq;
+-	u16 q_index;
++	u32 i, qv_idx, q_index;
+ 
+ 	for (i = 0, qv_idx = 0; i < vport->num_rxq_grp; i++) {
+ 		u16 num_rxq;
+ 
++		if (qv_idx >= vport->num_q_vectors)
++			qv_idx = 0;
++
+ 		rx_qgrp = &vport->rxq_grps[i];
+-		if (idpf_is_queue_model_split(vport->rxq_model))
++		if (split)
+ 			num_rxq = rx_qgrp->splitq.num_rxq_sets;
+ 		else
+ 			num_rxq = rx_qgrp->singleq.num_rxq;
+ 
+-		for (j = 0; j < num_rxq; j++) {
+-			if (qv_idx >= vport->num_q_vectors)
+-				qv_idx = 0;
++		for (u32 j = 0; j < num_rxq; j++) {
++			struct idpf_rx_queue *q;
+ 
+-			if (idpf_is_queue_model_split(vport->rxq_model))
++			if (split)
+ 				q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+ 			else
+ 				q = rx_qgrp->singleq.rxqs[j];
+@@ -3960,52 +4116,53 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport)
+ 			q_index = q->q_vector->num_rxq;
+ 			q->q_vector->rx[q_index] = q;
+ 			q->q_vector->num_rxq++;
+-			qv_idx++;
++
++			if (split)
++				q->napi = &q->q_vector->napi;
+ 		}
+ 
+-		if (idpf_is_queue_model_split(vport->rxq_model)) {
+-			for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
++		if (split) {
++			for (u32 j = 0; j < vport->num_bufqs_per_qgrp; j++) {
++				struct idpf_buf_queue *bufq;
++
+ 				bufq = &rx_qgrp->splitq.bufq_sets[j].bufq;
+-				bufq->q_vector = &vport->q_vectors[bufq_vidx];
++				bufq->q_vector = &vport->q_vectors[qv_idx];
+ 				q_index = bufq->q_vector->num_bufq;
+ 				bufq->q_vector->bufq[q_index] = bufq;
+ 				bufq->q_vector->num_bufq++;
+ 			}
+-			if (++bufq_vidx >= vport->num_q_vectors)
+-				bufq_vidx = 0;
+ 		}
++
++		qv_idx++;
+ 	}
+ 
++	split = idpf_is_queue_model_split(vport->txq_model);
++
+ 	for (i = 0, qv_idx = 0; i < num_txq_grp; i++) {
+ 		u16 num_txq;
+ 
++		if (qv_idx >= vport->num_q_vectors)
++			qv_idx = 0;
++
+ 		tx_qgrp = &vport->txq_grps[i];
+ 		num_txq = tx_qgrp->num_txq;
+ 
+-		if (idpf_is_queue_model_split(vport->txq_model)) {
+-			if (qv_idx >= vport->num_q_vectors)
+-				qv_idx = 0;
++		for (u32 j = 0; j < num_txq; j++) {
++			struct idpf_tx_queue *q;
+ 
+-			q = tx_qgrp->complq;
++			q = tx_qgrp->txqs[j];
+ 			q->q_vector = &vport->q_vectors[qv_idx];
+-			q_index = q->q_vector->num_txq;
+-			q->q_vector->tx[q_index] = q;
+-			q->q_vector->num_txq++;
+-			qv_idx++;
+-		} else {
+-			for (j = 0; j < num_txq; j++) {
+-				if (qv_idx >= vport->num_q_vectors)
+-					qv_idx = 0;
++			q->q_vector->tx[q->q_vector->num_txq++] = q;
++		}
+ 
+-				q = tx_qgrp->txqs[j];
+-				q->q_vector = &vport->q_vectors[qv_idx];
+-				q_index = q->q_vector->num_txq;
+-				q->q_vector->tx[q_index] = q;
+-				q->q_vector->num_txq++;
++		if (split) {
++			struct idpf_compl_queue *q = tx_qgrp->complq;
+ 
+-				qv_idx++;
+-			}
++			q->q_vector = &vport->q_vectors[qv_idx];
++			q->q_vector->complq[q->q_vector->num_complq++] = q;
+ 		}
++
++		qv_idx++;
+ 	}
+ }
+ 
+@@ -4081,18 +4238,22 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport)
+ {
+ 	u16 txqs_per_vector, rxqs_per_vector, bufqs_per_vector;
+ 	struct idpf_q_vector *q_vector;
+-	int v_idx, err;
++	u32 complqs_per_vector, v_idx;
+ 
+ 	vport->q_vectors = kcalloc(vport->num_q_vectors,
+ 				   sizeof(struct idpf_q_vector), GFP_KERNEL);
+ 	if (!vport->q_vectors)
+ 		return -ENOMEM;
+ 
+-	txqs_per_vector = DIV_ROUND_UP(vport->num_txq, vport->num_q_vectors);
+-	rxqs_per_vector = DIV_ROUND_UP(vport->num_rxq, vport->num_q_vectors);
++	txqs_per_vector = DIV_ROUND_UP(vport->num_txq_grp,
++				       vport->num_q_vectors);
++	rxqs_per_vector = DIV_ROUND_UP(vport->num_rxq_grp,
++				       vport->num_q_vectors);
+ 	bufqs_per_vector = vport->num_bufqs_per_qgrp *
+ 			   DIV_ROUND_UP(vport->num_rxq_grp,
+ 					vport->num_q_vectors);
++	complqs_per_vector = DIV_ROUND_UP(vport->num_txq_grp,
++					  vport->num_q_vectors);
+ 
+ 	for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
+ 		q_vector = &vport->q_vectors[v_idx];
+@@ -4106,32 +4267,30 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport)
+ 		q_vector->rx_intr_mode = IDPF_ITR_DYNAMIC;
+ 		q_vector->rx_itr_idx = VIRTCHNL2_ITR_IDX_0;
+ 
+-		q_vector->tx = kcalloc(txqs_per_vector,
+-				       sizeof(struct idpf_queue *),
++		q_vector->tx = kcalloc(txqs_per_vector, sizeof(*q_vector->tx),
+ 				       GFP_KERNEL);
+-		if (!q_vector->tx) {
+-			err = -ENOMEM;
++		if (!q_vector->tx)
+ 			goto error;
+-		}
+ 
+-		q_vector->rx = kcalloc(rxqs_per_vector,
+-				       sizeof(struct idpf_queue *),
++		q_vector->rx = kcalloc(rxqs_per_vector, sizeof(*q_vector->rx),
+ 				       GFP_KERNEL);
+-		if (!q_vector->rx) {
+-			err = -ENOMEM;
++		if (!q_vector->rx)
+ 			goto error;
+-		}
+ 
+ 		if (!idpf_is_queue_model_split(vport->rxq_model))
+ 			continue;
+ 
+ 		q_vector->bufq = kcalloc(bufqs_per_vector,
+-					 sizeof(struct idpf_queue *),
++					 sizeof(*q_vector->bufq),
+ 					 GFP_KERNEL);
+-		if (!q_vector->bufq) {
+-			err = -ENOMEM;
++		if (!q_vector->bufq)
++			goto error;
++
++		q_vector->complq = kcalloc(complqs_per_vector,
++					   sizeof(*q_vector->complq),
++					   GFP_KERNEL);
++		if (!q_vector->complq)
+ 			goto error;
+-		}
+ 	}
+ 
+ 	return 0;
+@@ -4139,7 +4298,7 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport)
+ error:
+ 	idpf_vport_intr_rel(vport);
+ 
+-	return err;
++	return -ENOMEM;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+index 551391e2046470..214a24e684634a 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+@@ -4,10 +4,13 @@
+ #ifndef _IDPF_TXRX_H_
+ #define _IDPF_TXRX_H_
+ 
++#include <linux/dim.h>
++
+ #include <net/page_pool/helpers.h>
+ #include <net/tcp.h>
+ #include <net/netdev_queues.h>
+ 
++#include "idpf_lan_txrx.h"
+ #include "virtchnl2_lan_desc.h"
+ 
+ #define IDPF_LARGE_MAX_Q			256
+@@ -83,7 +86,7 @@
+ do {								\
+ 	if (unlikely(++(ntc) == (rxq)->desc_count)) {		\
+ 		ntc = 0;					\
+-		change_bit(__IDPF_Q_GEN_CHK, (rxq)->flags);	\
++		idpf_queue_change(GEN_CHK, rxq);		\
+ 	}							\
+ } while (0)
+ 
+@@ -110,36 +113,17 @@ do {								\
+  */
+ #define IDPF_TX_SPLITQ_RE_MIN_GAP	64
+ 
+-#define IDPF_RX_BI_BUFID_S		0
+-#define IDPF_RX_BI_BUFID_M		GENMASK(14, 0)
+-#define IDPF_RX_BI_GEN_S		15
+-#define IDPF_RX_BI_GEN_M		BIT(IDPF_RX_BI_GEN_S)
++#define IDPF_RX_BI_GEN_M		BIT(16)
++#define IDPF_RX_BI_BUFID_M		GENMASK(15, 0)
++
+ #define IDPF_RXD_EOF_SPLITQ		VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M
+ #define IDPF_RXD_EOF_SINGLEQ		VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M
+ 
+-#define IDPF_SINGLEQ_RX_BUF_DESC(rxq, i)	\
+-	(&(((struct virtchnl2_singleq_rx_buf_desc *)((rxq)->desc_ring))[i]))
+-#define IDPF_SPLITQ_RX_BUF_DESC(rxq, i)	\
+-	(&(((struct virtchnl2_splitq_rx_buf_desc *)((rxq)->desc_ring))[i]))
+-#define IDPF_SPLITQ_RX_BI_DESC(rxq, i) ((((rxq)->ring))[i])
+-
+-#define IDPF_BASE_TX_DESC(txq, i)	\
+-	(&(((struct idpf_base_tx_desc *)((txq)->desc_ring))[i]))
+-#define IDPF_BASE_TX_CTX_DESC(txq, i) \
+-	(&(((struct idpf_base_tx_ctx_desc *)((txq)->desc_ring))[i]))
+-#define IDPF_SPLITQ_TX_COMPLQ_DESC(txcq, i)	\
+-	(&(((struct idpf_splitq_tx_compl_desc *)((txcq)->desc_ring))[i]))
+-
+-#define IDPF_FLEX_TX_DESC(txq, i) \
+-	(&(((union idpf_tx_flex_desc *)((txq)->desc_ring))[i]))
+-#define IDPF_FLEX_TX_CTX_DESC(txq, i)	\
+-	(&(((struct idpf_flex_tx_ctx_desc *)((txq)->desc_ring))[i]))
+-
+ #define IDPF_DESC_UNUSED(txq)     \
+ 	((((txq)->next_to_clean > (txq)->next_to_use) ? 0 : (txq)->desc_count) + \
+ 	(txq)->next_to_clean - (txq)->next_to_use - 1)
+ 
+-#define IDPF_TX_BUF_RSV_UNUSED(txq)	((txq)->buf_stack.top)
++#define IDPF_TX_BUF_RSV_UNUSED(txq)	((txq)->stash->buf_stack.top)
+ #define IDPF_TX_BUF_RSV_LOW(txq)	(IDPF_TX_BUF_RSV_UNUSED(txq) < \
+ 					 (txq)->desc_count >> 2)
+ 
+@@ -317,8 +301,6 @@ struct idpf_rx_extracted {
+ 
+ #define IDPF_RX_DMA_ATTR \
+ 	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
+-#define IDPF_RX_DESC(rxq, i)	\
+-	(&(((union virtchnl2_rx_desc *)((rxq)->desc_ring))[i]))
+ 
+ struct idpf_rx_buf {
+ 	struct page *page;
+@@ -452,23 +434,37 @@ struct idpf_rx_ptype_decoded {
+  *		      to 1 and knows that reading a gen bit of 1 in any
+  *		      descriptor on the initial pass of the ring indicates a
+  *		      writeback. It also flips on every ring wrap.
+- * @__IDPF_RFLQ_GEN_CHK: Refill queues are SW only, so Q_GEN acts as the HW bit
+- *			 and RFLGQ_GEN is the SW bit.
++ * @__IDPF_Q_RFL_GEN_CHK: Refill queues are SW only, so Q_GEN acts as the HW
++ *			  bit and Q_RFL_GEN is the SW bit.
+  * @__IDPF_Q_FLOW_SCH_EN: Enable flow scheduling
+  * @__IDPF_Q_SW_MARKER: Used to indicate TX queue marker completions
+  * @__IDPF_Q_POLL_MODE: Enable poll mode
++ * @__IDPF_Q_CRC_EN: enable CRC offload in singleq mode
++ * @__IDPF_Q_HSPLIT_EN: enable header split on Rx (splitq)
+  * @__IDPF_Q_FLAGS_NBITS: Must be last
+  */
+ enum idpf_queue_flags_t {
+ 	__IDPF_Q_GEN_CHK,
+-	__IDPF_RFLQ_GEN_CHK,
++	__IDPF_Q_RFL_GEN_CHK,
+ 	__IDPF_Q_FLOW_SCH_EN,
+ 	__IDPF_Q_SW_MARKER,
+ 	__IDPF_Q_POLL_MODE,
++	__IDPF_Q_CRC_EN,
++	__IDPF_Q_HSPLIT_EN,
+ 
+ 	__IDPF_Q_FLAGS_NBITS,
+ };
+ 
++#define idpf_queue_set(f, q)		__set_bit(__IDPF_Q_##f, (q)->flags)
++#define idpf_queue_clear(f, q)		__clear_bit(__IDPF_Q_##f, (q)->flags)
++#define idpf_queue_change(f, q)		__change_bit(__IDPF_Q_##f, (q)->flags)
++#define idpf_queue_has(f, q)		test_bit(__IDPF_Q_##f, (q)->flags)
++
++#define idpf_queue_has_clear(f, q)			\
++	__test_and_clear_bit(__IDPF_Q_##f, (q)->flags)
++#define idpf_queue_assign(f, q, v)			\
++	__assign_bit(__IDPF_Q_##f, (q)->flags, v)
++
+ /**
+  * struct idpf_vec_regs
+  * @dyn_ctl_reg: Dynamic control interrupt register offset
+@@ -514,7 +510,9 @@ struct idpf_intr_reg {
+  * @v_idx: Vector index
+  * @intr_reg: See struct idpf_intr_reg
+  * @num_txq: Number of TX queues
++ * @num_complq: number of completion queues
+  * @tx: Array of TX queues to service
++ * @complq: array of completion queues
+  * @tx_dim: Data for TX net_dim algorithm
+  * @tx_itr_value: TX interrupt throttling rate
+  * @tx_intr_mode: Dynamic ITR or not
+@@ -538,21 +536,24 @@ struct idpf_q_vector {
+ 	struct idpf_intr_reg intr_reg;
+ 
+ 	u16 num_txq;
+-	struct idpf_queue **tx;
++	u16 num_complq;
++	struct idpf_tx_queue **tx;
++	struct idpf_compl_queue **complq;
++
+ 	struct dim tx_dim;
+ 	u16 tx_itr_value;
+ 	bool tx_intr_mode;
+ 	u32 tx_itr_idx;
+ 
+ 	u16 num_rxq;
+-	struct idpf_queue **rx;
++	struct idpf_rx_queue **rx;
+ 	struct dim rx_dim;
+ 	u16 rx_itr_value;
+ 	bool rx_intr_mode;
+ 	u32 rx_itr_idx;
+ 
+ 	u16 num_bufq;
+-	struct idpf_queue **bufq;
++	struct idpf_buf_queue **bufq;
+ 
+ 	u16 total_events;
+ 	char *name;
+@@ -583,11 +584,6 @@ struct idpf_cleaned_stats {
+ 	u32 bytes;
+ };
+ 
+-union idpf_queue_stats {
+-	struct idpf_rx_queue_stats rx;
+-	struct idpf_tx_queue_stats tx;
+-};
+-
+ #define IDPF_ITR_DYNAMIC	1
+ #define IDPF_ITR_MAX		0x1FE0
+ #define IDPF_ITR_20K		0x0032
+@@ -603,39 +599,114 @@ union idpf_queue_stats {
+ #define IDPF_DIM_DEFAULT_PROFILE_IX		1
+ 
+ /**
+- * struct idpf_queue
+- * @dev: Device back pointer for DMA mapping
+- * @vport: Back pointer to associated vport
+- * @txq_grp: See struct idpf_txq_group
+- * @rxq_grp: See struct idpf_rxq_group
+- * @idx: For buffer queue, it is used as group id, either 0 or 1. On clean,
+- *	 buffer queue uses this index to determine which group of refill queues
+- *	 to clean.
+- *	 For TX queue, it is used as index to map between TX queue group and
+- *	 hot path TX pointers stored in vport. Used in both singleq/splitq.
+- *	 For RX queue, it is used to index to total RX queue across groups and
++ * struct idpf_txq_stash - Tx buffer stash for Flow-based scheduling mode
++ * @buf_stack: Stack of empty buffers to store buffer info for out of order
++ *	       buffer completions. See struct idpf_buf_lifo
++ * @sched_buf_hash: Hash table to store buffers
++ */
++struct idpf_txq_stash {
++	struct idpf_buf_lifo buf_stack;
++	DECLARE_HASHTABLE(sched_buf_hash, 12);
++} ____cacheline_aligned;
++
++/**
++ * struct idpf_rx_queue - software structure representing a receive queue
++ * @rx: universal receive descriptor array
++ * @single_buf: buffer descriptor array in singleq
++ * @desc_ring: virtual descriptor ring address
++ * @bufq_sets: Pointer to the array of buffer queues in splitq mode
++ * @napi: NAPI instance corresponding to this queue (splitq)
++ * @rx_buf: See struct idpf_rx_buf
++ * @pp: Page pool pointer in singleq mode
++ * @netdev: &net_device corresponding to this queue
++ * @tail: Tail offset. Used for both queue models single and split.
++ * @flags: See enum idpf_queue_flags_t
++ * @idx: For RX queue, it is used to index to total RX queue across groups and
+  *	 used for skb reporting.
+- * @tail: Tail offset. Used for both queue models single and split. In splitq
+- *	  model relevant only for TX queue and RX queue.
+- * @tx_buf: See struct idpf_tx_buf
+- * @rx_buf: Struct with RX buffer related members
+- * @rx_buf.buf: See struct idpf_rx_buf
+- * @rx_buf.hdr_buf_pa: DMA handle
+- * @rx_buf.hdr_buf_va: Virtual address
+- * @pp: Page pool pointer
++ * @desc_count: Number of descriptors
++ * @next_to_use: Next descriptor to use
++ * @next_to_clean: Next descriptor to clean
++ * @next_to_alloc: RX buffer to allocate at
++ * @rxdids: Supported RX descriptor ids
++ * @rx_ptype_lkup: LUT of Rx ptypes
+  * @skb: Pointer to the skb
+- * @q_type: Queue type (TX, RX, TX completion, RX buffer)
++ * @stats_sync: See struct u64_stats_sync
++ * @q_stats: See union idpf_rx_queue_stats
+  * @q_id: Queue id
+- * @desc_count: Number of descriptors
+- * @next_to_use: Next descriptor to use. Relevant in both split & single txq
+- *		 and bufq.
+- * @next_to_clean: Next descriptor to clean. In split queue model, only
+- *		   relevant to TX completion queue and RX queue.
+- * @next_to_alloc: RX buffer to allocate at. Used only for RX. In splitq model
+- *		   only relevant to RX queue.
++ * @size: Length of descriptor ring in bytes
++ * @dma: Physical address of ring
++ * @q_vector: Backreference to associated vector
++ * @rx_buffer_low_watermark: RX buffer low watermark
++ * @rx_hbuf_size: Header buffer size
++ * @rx_buf_size: Buffer size
++ * @rx_max_pkt_size: RX max packet size
++ */
++struct idpf_rx_queue {
++	union {
++		union virtchnl2_rx_desc *rx;
++		struct virtchnl2_singleq_rx_buf_desc *single_buf;
++
++		void *desc_ring;
++	};
++	union {
++		struct {
++			struct idpf_bufq_set *bufq_sets;
++			struct napi_struct *napi;
++		};
++		struct {
++			struct idpf_rx_buf *rx_buf;
++			struct page_pool *pp;
++		};
++	};
++	struct net_device *netdev;
++	void __iomem *tail;
++
++	DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
++	u16 idx;
++	u16 desc_count;
++	u16 next_to_use;
++	u16 next_to_clean;
++	u16 next_to_alloc;
++
++	u32 rxdids;
++
++	const struct idpf_rx_ptype_decoded *rx_ptype_lkup;
++	struct sk_buff *skb;
++
++	struct u64_stats_sync stats_sync;
++	struct idpf_rx_queue_stats q_stats;
++
++	/* Slowpath */
++	u32 q_id;
++	u32 size;
++	dma_addr_t dma;
++
++	struct idpf_q_vector *q_vector;
++
++	u16 rx_buffer_low_watermark;
++	u16 rx_hbuf_size;
++	u16 rx_buf_size;
++	u16 rx_max_pkt_size;
++} ____cacheline_aligned;
++
++/**
++ * struct idpf_tx_queue - software structure representing a transmit queue
++ * @base_tx: base Tx descriptor array
++ * @base_ctx: base Tx context descriptor array
++ * @flex_tx: flex Tx descriptor array
++ * @flex_ctx: flex Tx context descriptor array
++ * @desc_ring: virtual descriptor ring address
++ * @tx_buf: See struct idpf_tx_buf
++ * @txq_grp: See struct idpf_txq_group
++ * @dev: Device back pointer for DMA mapping
++ * @tail: Tail offset. Used for both queue models single and split
+  * @flags: See enum idpf_queue_flags_t
+- * @q_stats: See union idpf_queue_stats
+- * @stats_sync: See struct u64_stats_sync
++ * @idx: For TX queue, it is used as index to map between TX queue group and
++ *	 hot path TX pointers stored in vport. Used in both singleq/splitq.
++ * @desc_count: Number of descriptors
++ * @next_to_use: Next descriptor to use
++ * @next_to_clean: Next descriptor to clean
++ * @netdev: &net_device corresponding to this queue
+  * @cleaned_bytes: Splitq only, TXQ only: When a TX completion is received on
+  *		   the TX completion queue, it can be for any TXQ associated
+  *		   with that completion queue. This means we can clean up to
+@@ -644,26 +715,10 @@ union idpf_queue_stats {
+  *		   that single call to clean the completion queue. By doing so,
+  *		   we can update BQL with aggregate cleaned stats for each TXQ
+  *		   only once at the end of the cleaning routine.
++ * @clean_budget: singleq only, queue cleaning budget
+  * @cleaned_pkts: Number of packets cleaned for the above said case
+- * @rx_hsplit_en: RX headsplit enable
+- * @rx_hbuf_size: Header buffer size
+- * @rx_buf_size: Buffer size
+- * @rx_max_pkt_size: RX max packet size
+- * @rx_buf_stride: RX buffer stride
+- * @rx_buffer_low_watermark: RX buffer low watermark
+- * @rxdids: Supported RX descriptor ids
+- * @q_vector: Backreference to associated vector
+- * @size: Length of descriptor ring in bytes
+- * @dma: Physical address of ring
+- * @desc_ring: Descriptor ring memory
+  * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather
+  * @tx_min_pkt_len: Min supported packet length
+- * @num_completions: Only relevant for TX completion queue. It tracks the
+- *		     number of completions received to compare against the
+- *		     number of completions pending, as accumulated by the
+- *		     TX queues.
+- * @buf_stack: Stack of empty buffers to store buffer info for out of order
+- *	       buffer completions. See struct idpf_buf_lifo.
+  * @compl_tag_bufid_m: Completion tag buffer id mask
+  * @compl_tag_gen_s: Completion tag generation bit
+  *	The format of the completion tag will change based on the TXQ
+@@ -687,106 +742,188 @@ union idpf_queue_stats {
+  *	This gives us 8*8160 = 65280 possible unique values.
+  * @compl_tag_cur_gen: Used to keep track of current completion tag generation
+  * @compl_tag_gen_max: To determine when compl_tag_cur_gen should be reset
+- * @sched_buf_hash: Hash table to stores buffers
++ * @stash: Tx buffer stash for Flow-based scheduling mode
++ * @stats_sync: See struct u64_stats_sync
++ * @q_stats: See union idpf_tx_queue_stats
++ * @q_id: Queue id
++ * @size: Length of descriptor ring in bytes
++ * @dma: Physical address of ring
++ * @q_vector: Backreference to associated vector
+  */
+-struct idpf_queue {
+-	struct device *dev;
+-	struct idpf_vport *vport;
++struct idpf_tx_queue {
+ 	union {
+-		struct idpf_txq_group *txq_grp;
+-		struct idpf_rxq_group *rxq_grp;
++		struct idpf_base_tx_desc *base_tx;
++		struct idpf_base_tx_ctx_desc *base_ctx;
++		union idpf_tx_flex_desc *flex_tx;
++		struct idpf_flex_tx_ctx_desc *flex_ctx;
++
++		void *desc_ring;
+ 	};
+-	u16 idx;
++	struct idpf_tx_buf *tx_buf;
++	struct idpf_txq_group *txq_grp;
++	struct device *dev;
+ 	void __iomem *tail;
++
++	DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
++	u16 idx;
++	u16 desc_count;
++	u16 next_to_use;
++	u16 next_to_clean;
++
++	struct net_device *netdev;
++
+ 	union {
+-		struct idpf_tx_buf *tx_buf;
+-		struct {
+-			struct idpf_rx_buf *buf;
+-			dma_addr_t hdr_buf_pa;
+-			void *hdr_buf_va;
+-		} rx_buf;
++		u32 cleaned_bytes;
++		u32 clean_budget;
+ 	};
+-	struct page_pool *pp;
+-	struct sk_buff *skb;
+-	u16 q_type;
++	u16 cleaned_pkts;
++
++	u16 tx_max_bufs;
++	u16 tx_min_pkt_len;
++
++	u16 compl_tag_bufid_m;
++	u16 compl_tag_gen_s;
++
++	u16 compl_tag_cur_gen;
++	u16 compl_tag_gen_max;
++
++	struct idpf_txq_stash *stash;
++
++	struct u64_stats_sync stats_sync;
++	struct idpf_tx_queue_stats q_stats;
++
++	/* Slowpath */
+ 	u32 q_id;
+-	u16 desc_count;
++	u32 size;
++	dma_addr_t dma;
+ 
++	struct idpf_q_vector *q_vector;
++} ____cacheline_aligned;
++
++/**
++ * struct idpf_buf_queue - software structure representing a buffer queue
++ * @split_buf: buffer descriptor array
++ * @rx_buf: Struct with RX buffer related members
++ * @rx_buf.buf: See struct idpf_rx_buf
++ * @rx_buf.hdr_buf_pa: DMA handle
++ * @rx_buf.hdr_buf_va: Virtual address
++ * @pp: Page pool pointer
++ * @tail: Tail offset
++ * @flags: See enum idpf_queue_flags_t
++ * @desc_count: Number of descriptors
++ * @next_to_use: Next descriptor to use
++ * @next_to_clean: Next descriptor to clean
++ * @next_to_alloc: RX buffer to allocate at
++ * @q_id: Queue id
++ * @size: Length of descriptor ring in bytes
++ * @dma: Physical address of ring
++ * @q_vector: Backreference to associated vector
++ * @rx_buffer_low_watermark: RX buffer low watermark
++ * @rx_hbuf_size: Header buffer size
++ * @rx_buf_size: Buffer size
++ */
++struct idpf_buf_queue {
++	struct virtchnl2_splitq_rx_buf_desc *split_buf;
++	struct {
++		struct idpf_rx_buf *buf;
++		dma_addr_t hdr_buf_pa;
++		void *hdr_buf_va;
++	} rx_buf;
++	struct page_pool *pp;
++	void __iomem *tail;
++
++	DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
++	u16 desc_count;
+ 	u16 next_to_use;
+ 	u16 next_to_clean;
+ 	u16 next_to_alloc;
+-	DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
+ 
+-	union idpf_queue_stats q_stats;
+-	struct u64_stats_sync stats_sync;
++	/* Slowpath */
++	u32 q_id;
++	u32 size;
++	dma_addr_t dma;
+ 
+-	u32 cleaned_bytes;
+-	u16 cleaned_pkts;
++	struct idpf_q_vector *q_vector;
+ 
+-	bool rx_hsplit_en;
++	u16 rx_buffer_low_watermark;
+ 	u16 rx_hbuf_size;
+ 	u16 rx_buf_size;
+-	u16 rx_max_pkt_size;
+-	u16 rx_buf_stride;
+-	u8 rx_buffer_low_watermark;
+-	u64 rxdids;
+-	struct idpf_q_vector *q_vector;
+-	unsigned int size;
+-	dma_addr_t dma;
+-	void *desc_ring;
+-
+-	u16 tx_max_bufs;
+-	u8 tx_min_pkt_len;
++} ____cacheline_aligned;
+ 
+-	u32 num_completions;
++/**
++ * struct idpf_compl_queue - software structure representing a completion queue
++ * @comp: completion descriptor array
++ * @txq_grp: See struct idpf_txq_group
++ * @flags: See enum idpf_queue_flags_t
++ * @desc_count: Number of descriptors
++ * @next_to_use: Next descriptor to use. Relevant in both split & single txq
++ *		 and bufq.
++ * @next_to_clean: Next descriptor to clean
++ * @netdev: &net_device corresponding to this queue
++ * @clean_budget: queue cleaning budget
++ * @num_completions: Only relevant for TX completion queue. It tracks the
++ *		     number of completions received to compare against the
++ *		     number of completions pending, as accumulated by the
++ *		     TX queues.
++ * @q_id: Queue id
++ * @size: Length of descriptor ring in bytes
++ * @dma: Physical address of ring
++ * @q_vector: Backreference to associated vector
++ */
++struct idpf_compl_queue {
++	struct idpf_splitq_tx_compl_desc *comp;
++	struct idpf_txq_group *txq_grp;
+ 
+-	struct idpf_buf_lifo buf_stack;
++	DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
++	u16 desc_count;
++	u16 next_to_use;
++	u16 next_to_clean;
+ 
+-	u16 compl_tag_bufid_m;
+-	u16 compl_tag_gen_s;
++	struct net_device *netdev;
++	u32 clean_budget;
++	u32 num_completions;
+ 
+-	u16 compl_tag_cur_gen;
+-	u16 compl_tag_gen_max;
++	/* Slowpath */
++	u32 q_id;
++	u32 size;
++	dma_addr_t dma;
+ 
+-	DECLARE_HASHTABLE(sched_buf_hash, 12);
+-} ____cacheline_internodealigned_in_smp;
++	struct idpf_q_vector *q_vector;
++} ____cacheline_aligned;
+ 
+ /**
+  * struct idpf_sw_queue
+- * @next_to_clean: Next descriptor to clean
+- * @next_to_alloc: Buffer to allocate at
+- * @flags: See enum idpf_queue_flags_t
+  * @ring: Pointer to the ring
++ * @flags: See enum idpf_queue_flags_t
+  * @desc_count: Descriptor count
+- * @dev: Device back pointer for DMA mapping
++ * @next_to_use: Buffer to allocate at
++ * @next_to_clean: Next descriptor to clean
+  *
+  * Software queues are used in splitq mode to manage buffers between rxq
+  * producer and the bufq consumer.  These are required in order to maintain a
+  * lockless buffer management system and are strictly software only constructs.
+  */
+ struct idpf_sw_queue {
+-	u16 next_to_clean;
+-	u16 next_to_alloc;
++	u32 *ring;
++
+ 	DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
+-	u16 *ring;
+ 	u16 desc_count;
+-	struct device *dev;
+-} ____cacheline_internodealigned_in_smp;
++	u16 next_to_use;
++	u16 next_to_clean;
++} ____cacheline_aligned;
+ 
+ /**
+  * struct idpf_rxq_set
+  * @rxq: RX queue
+- * @refillq0: Pointer to refill queue 0
+- * @refillq1: Pointer to refill queue 1
++ * @refillq: pointers to refill queues
+  *
+  * Splitq only.  idpf_rxq_set associates an rxq with at an array of refillqs.
+  * Each rxq needs a refillq to return used buffers back to the respective bufq.
+  * Bufqs then clean these refillqs for buffers to give to hardware.
+  */
+ struct idpf_rxq_set {
+-	struct idpf_queue rxq;
+-	struct idpf_sw_queue *refillq0;
+-	struct idpf_sw_queue *refillq1;
++	struct idpf_rx_queue rxq;
++	struct idpf_sw_queue *refillq[IDPF_MAX_BUFQS_PER_RXQ_GRP];
+ };
+ 
+ /**
+@@ -805,7 +942,7 @@ struct idpf_rxq_set {
+  * managed by at most two bufqs (depending on performance configuration).
+  */
+ struct idpf_bufq_set {
+-	struct idpf_queue bufq;
++	struct idpf_buf_queue bufq;
+ 	int num_refillqs;
+ 	struct idpf_sw_queue *refillqs;
+ };
+@@ -831,7 +968,7 @@ struct idpf_rxq_group {
+ 	union {
+ 		struct {
+ 			u16 num_rxq;
+-			struct idpf_queue *rxqs[IDPF_LARGE_MAX_Q];
++			struct idpf_rx_queue *rxqs[IDPF_LARGE_MAX_Q];
+ 		} singleq;
+ 		struct {
+ 			u16 num_rxq_sets;
+@@ -846,6 +983,7 @@ struct idpf_rxq_group {
+  * @vport: Vport back pointer
+  * @num_txq: Number of TX queues associated
+  * @txqs: Array of TX queue pointers
++ * @stashes: array of OOO stashes for the queues
+  * @complq: Associated completion queue pointer, split queue only
+  * @num_completions_pending: Total number of completions pending for the
+  *			     completion queue, acculumated for all TX queues
+@@ -859,9 +997,10 @@ struct idpf_txq_group {
+ 	struct idpf_vport *vport;
+ 
+ 	u16 num_txq;
+-	struct idpf_queue *txqs[IDPF_LARGE_MAX_Q];
++	struct idpf_tx_queue *txqs[IDPF_LARGE_MAX_Q];
++	struct idpf_txq_stash *stashes;
+ 
+-	struct idpf_queue *complq;
++	struct idpf_compl_queue *complq;
+ 
+ 	u32 num_completions_pending;
+ };
+@@ -998,29 +1137,31 @@ void idpf_deinit_rss(struct idpf_vport *vport);
+ int idpf_rx_bufs_init_all(struct idpf_vport *vport);
+ void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb,
+ 		      unsigned int size);
+-struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq,
++struct sk_buff *idpf_rx_construct_skb(const struct idpf_rx_queue *rxq,
+ 				      struct idpf_rx_buf *rx_buf,
+ 				      unsigned int size);
+-bool idpf_init_rx_buf_hw_alloc(struct idpf_queue *rxq, struct idpf_rx_buf *buf);
+-void idpf_rx_buf_hw_update(struct idpf_queue *rxq, u32 val);
+-void idpf_tx_buf_hw_update(struct idpf_queue *tx_q, u32 val,
++void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
+ 			   bool xmit_more);
+ unsigned int idpf_size_to_txd_count(unsigned int size);
+-netdev_tx_t idpf_tx_drop_skb(struct idpf_queue *tx_q, struct sk_buff *skb);
+-void idpf_tx_dma_map_error(struct idpf_queue *txq, struct sk_buff *skb,
++netdev_tx_t idpf_tx_drop_skb(struct idpf_tx_queue *tx_q, struct sk_buff *skb);
++void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb,
+ 			   struct idpf_tx_buf *first, u16 ring_idx);
+-unsigned int idpf_tx_desc_count_required(struct idpf_queue *txq,
++unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
+ 					 struct sk_buff *skb);
+-bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
+-			unsigned int count);
+-int idpf_tx_maybe_stop_common(struct idpf_queue *tx_q, unsigned int size);
+ void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue);
+-netdev_tx_t idpf_tx_splitq_start(struct sk_buff *skb,
+-				 struct net_device *netdev);
+-netdev_tx_t idpf_tx_singleq_start(struct sk_buff *skb,
+-				  struct net_device *netdev);
+-bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rxq,
++netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
++				  struct idpf_tx_queue *tx_q);
++netdev_tx_t idpf_tx_start(struct sk_buff *skb, struct net_device *netdev);
++bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rxq,
+ 				      u16 cleaned_count);
+ int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off);
+ 
++static inline bool idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q,
++					     u32 needed)
++{
++	return !netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx,
++					  IDPF_DESC_UNUSED(tx_q),
++					  needed, needed);
++}
++
+ #endif /* !_IDPF_TXRX_H_ */
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+index a5f9b7a5effe7b..44602b87cd4114 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+@@ -750,7 +750,7 @@ static int idpf_wait_for_marker_event(struct idpf_vport *vport)
+ 	int i;
+ 
+ 	for (i = 0; i < vport->num_txq; i++)
+-		set_bit(__IDPF_Q_SW_MARKER, vport->txqs[i]->flags);
++		idpf_queue_set(SW_MARKER, vport->txqs[i]);
+ 
+ 	event = wait_event_timeout(vport->sw_marker_wq,
+ 				   test_and_clear_bit(IDPF_VPORT_SW_MARKER,
+@@ -758,7 +758,7 @@ static int idpf_wait_for_marker_event(struct idpf_vport *vport)
+ 				   msecs_to_jiffies(500));
+ 
+ 	for (i = 0; i < vport->num_txq; i++)
+-		clear_bit(__IDPF_Q_POLL_MODE, vport->txqs[i]->flags);
++		idpf_queue_clear(POLL_MODE, vport->txqs[i]);
+ 
+ 	if (event)
+ 		return 0;
+@@ -1092,7 +1092,6 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals,
+ 				 int num_regs, u32 q_type)
+ {
+ 	struct idpf_adapter *adapter = vport->adapter;
+-	struct idpf_queue *q;
+ 	int i, j, k = 0;
+ 
+ 	switch (q_type) {
+@@ -1111,6 +1110,8 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals,
+ 			u16 num_rxq = rx_qgrp->singleq.num_rxq;
+ 
+ 			for (j = 0; j < num_rxq && k < num_regs; j++, k++) {
++				struct idpf_rx_queue *q;
++
+ 				q = rx_qgrp->singleq.rxqs[j];
+ 				q->tail = idpf_get_reg_addr(adapter,
+ 							    reg_vals[k]);
+@@ -1123,6 +1124,8 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals,
+ 			u8 num_bufqs = vport->num_bufqs_per_qgrp;
+ 
+ 			for (j = 0; j < num_bufqs && k < num_regs; j++, k++) {
++				struct idpf_buf_queue *q;
++
+ 				q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+ 				q->tail = idpf_get_reg_addr(adapter,
+ 							    reg_vals[k]);
+@@ -1449,19 +1452,19 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport)
+ 			qi[k].model =
+ 				cpu_to_le16(vport->txq_model);
+ 			qi[k].type =
+-				cpu_to_le32(tx_qgrp->txqs[j]->q_type);
++				cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX);
+ 			qi[k].ring_len =
+ 				cpu_to_le16(tx_qgrp->txqs[j]->desc_count);
+ 			qi[k].dma_ring_addr =
+ 				cpu_to_le64(tx_qgrp->txqs[j]->dma);
+ 			if (idpf_is_queue_model_split(vport->txq_model)) {
+-				struct idpf_queue *q = tx_qgrp->txqs[j];
++				struct idpf_tx_queue *q = tx_qgrp->txqs[j];
+ 
+ 				qi[k].tx_compl_queue_id =
+ 					cpu_to_le16(tx_qgrp->complq->q_id);
+ 				qi[k].relative_queue_id = cpu_to_le16(j);
+ 
+-				if (test_bit(__IDPF_Q_FLOW_SCH_EN, q->flags))
++				if (idpf_queue_has(FLOW_SCH_EN, q))
+ 					qi[k].sched_mode =
+ 					cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_FLOW);
+ 				else
+@@ -1478,11 +1481,11 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport)
+ 
+ 		qi[k].queue_id = cpu_to_le32(tx_qgrp->complq->q_id);
+ 		qi[k].model = cpu_to_le16(vport->txq_model);
+-		qi[k].type = cpu_to_le32(tx_qgrp->complq->q_type);
++		qi[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION);
+ 		qi[k].ring_len = cpu_to_le16(tx_qgrp->complq->desc_count);
+ 		qi[k].dma_ring_addr = cpu_to_le64(tx_qgrp->complq->dma);
+ 
+-		if (test_bit(__IDPF_Q_FLOW_SCH_EN, tx_qgrp->complq->flags))
++		if (idpf_queue_has(FLOW_SCH_EN, tx_qgrp->complq))
+ 			sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+ 		else
+ 			sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
+@@ -1567,17 +1570,18 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
+ 			goto setup_rxqs;
+ 
+ 		for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) {
+-			struct idpf_queue *bufq =
++			struct idpf_buf_queue *bufq =
+ 				&rx_qgrp->splitq.bufq_sets[j].bufq;
+ 
+ 			qi[k].queue_id = cpu_to_le32(bufq->q_id);
+ 			qi[k].model = cpu_to_le16(vport->rxq_model);
+-			qi[k].type = cpu_to_le32(bufq->q_type);
++			qi[k].type =
++				cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX_BUFFER);
+ 			qi[k].desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M);
+ 			qi[k].ring_len = cpu_to_le16(bufq->desc_count);
+ 			qi[k].dma_ring_addr = cpu_to_le64(bufq->dma);
+ 			qi[k].data_buffer_size = cpu_to_le32(bufq->rx_buf_size);
+-			qi[k].buffer_notif_stride = bufq->rx_buf_stride;
++			qi[k].buffer_notif_stride = IDPF_RX_BUF_STRIDE;
+ 			qi[k].rx_buffer_low_watermark =
+ 				cpu_to_le16(bufq->rx_buffer_low_watermark);
+ 			if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW))
+@@ -1591,7 +1595,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
+ 			num_rxq = rx_qgrp->singleq.num_rxq;
+ 
+ 		for (j = 0; j < num_rxq; j++, k++) {
+-			struct idpf_queue *rxq;
++			struct idpf_rx_queue *rxq;
+ 
+ 			if (!idpf_is_queue_model_split(vport->rxq_model)) {
+ 				rxq = rx_qgrp->singleq.rxqs[j];
+@@ -1599,11 +1603,11 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
+ 			}
+ 			rxq = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+ 			qi[k].rx_bufq1_id =
+-			  cpu_to_le16(rxq->rxq_grp->splitq.bufq_sets[0].bufq.q_id);
++			  cpu_to_le16(rxq->bufq_sets[0].bufq.q_id);
+ 			if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) {
+ 				qi[k].bufq2_ena = IDPF_BUFQ2_ENA;
+ 				qi[k].rx_bufq2_id =
+-				  cpu_to_le16(rxq->rxq_grp->splitq.bufq_sets[1].bufq.q_id);
++				  cpu_to_le16(rxq->bufq_sets[1].bufq.q_id);
+ 			}
+ 			qi[k].rx_buffer_low_watermark =
+ 				cpu_to_le16(rxq->rx_buffer_low_watermark);
+@@ -1611,7 +1615,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
+ 				qi[k].qflags |= cpu_to_le16(VIRTCHNL2_RXQ_RSC);
+ 
+ common_qi_fields:
+-			if (rxq->rx_hsplit_en) {
++			if (idpf_queue_has(HSPLIT_EN, rxq)) {
+ 				qi[k].qflags |=
+ 					cpu_to_le16(VIRTCHNL2_RXQ_HDR_SPLIT);
+ 				qi[k].hdr_buffer_size =
+@@ -1619,7 +1623,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
+ 			}
+ 			qi[k].queue_id = cpu_to_le32(rxq->q_id);
+ 			qi[k].model = cpu_to_le16(vport->rxq_model);
+-			qi[k].type = cpu_to_le32(rxq->q_type);
++			qi[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX);
+ 			qi[k].ring_len = cpu_to_le16(rxq->desc_count);
+ 			qi[k].dma_ring_addr = cpu_to_le64(rxq->dma);
+ 			qi[k].max_pkt_size = cpu_to_le32(rxq->rx_max_pkt_size);
+@@ -1706,7 +1710,7 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool ena)
+ 		struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ 
+ 		for (j = 0; j < tx_qgrp->num_txq; j++, k++) {
+-			qc[k].type = cpu_to_le32(tx_qgrp->txqs[j]->q_type);
++			qc[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX);
+ 			qc[k].start_queue_id = cpu_to_le32(tx_qgrp->txqs[j]->q_id);
+ 			qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK);
+ 		}
+@@ -1720,7 +1724,7 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool ena)
+ 	for (i = 0; i < vport->num_txq_grp; i++, k++) {
+ 		struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ 
+-		qc[k].type = cpu_to_le32(tx_qgrp->complq->q_type);
++		qc[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION);
+ 		qc[k].start_queue_id = cpu_to_le32(tx_qgrp->complq->q_id);
+ 		qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK);
+ 	}
+@@ -1741,12 +1745,12 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool ena)
+ 				qc[k].start_queue_id =
+ 				cpu_to_le32(rx_qgrp->splitq.rxq_sets[j]->rxq.q_id);
+ 				qc[k].type =
+-				cpu_to_le32(rx_qgrp->splitq.rxq_sets[j]->rxq.q_type);
++				cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX);
+ 			} else {
+ 				qc[k].start_queue_id =
+ 				cpu_to_le32(rx_qgrp->singleq.rxqs[j]->q_id);
+ 				qc[k].type =
+-				cpu_to_le32(rx_qgrp->singleq.rxqs[j]->q_type);
++				cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX);
+ 			}
+ 			qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK);
+ 		}
+@@ -1761,10 +1765,11 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool ena)
+ 		struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ 
+ 		for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) {
+-			struct idpf_queue *q;
++			const struct idpf_buf_queue *q;
+ 
+ 			q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+-			qc[k].type = cpu_to_le32(q->q_type);
++			qc[k].type =
++				cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX_BUFFER);
+ 			qc[k].start_queue_id = cpu_to_le32(q->q_id);
+ 			qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK);
+ 		}
+@@ -1849,7 +1854,8 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map)
+ 		struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ 
+ 		for (j = 0; j < tx_qgrp->num_txq; j++, k++) {
+-			vqv[k].queue_type = cpu_to_le32(tx_qgrp->txqs[j]->q_type);
++			vqv[k].queue_type =
++				cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX);
+ 			vqv[k].queue_id = cpu_to_le32(tx_qgrp->txqs[j]->q_id);
+ 
+ 			if (idpf_is_queue_model_split(vport->txq_model)) {
+@@ -1879,14 +1885,15 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map)
+ 			num_rxq = rx_qgrp->singleq.num_rxq;
+ 
+ 		for (j = 0; j < num_rxq; j++, k++) {
+-			struct idpf_queue *rxq;
++			struct idpf_rx_queue *rxq;
+ 
+ 			if (idpf_is_queue_model_split(vport->rxq_model))
+ 				rxq = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+ 			else
+ 				rxq = rx_qgrp->singleq.rxqs[j];
+ 
+-			vqv[k].queue_type = cpu_to_le32(rxq->q_type);
++			vqv[k].queue_type =
++				cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX);
+ 			vqv[k].queue_id = cpu_to_le32(rxq->q_id);
+ 			vqv[k].vector_id = cpu_to_le16(rxq->q_vector->v_idx);
+ 			vqv[k].itr_idx = cpu_to_le32(rxq->q_vector->rx_itr_idx);
+@@ -1975,7 +1982,7 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport)
+ 	 * queues virtchnl message is sent
+ 	 */
+ 	for (i = 0; i < vport->num_txq; i++)
+-		set_bit(__IDPF_Q_POLL_MODE, vport->txqs[i]->flags);
++		idpf_queue_set(POLL_MODE, vport->txqs[i]);
+ 
+ 	/* schedule the napi to receive all the marker packets */
+ 	local_bh_disable();
+@@ -3242,7 +3249,6 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport,
+ 				       int num_qids,
+ 				       u32 q_type)
+ {
+-	struct idpf_queue *q;
+ 	int i, j, k = 0;
+ 
+ 	switch (q_type) {
+@@ -3250,11 +3256,8 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport,
+ 		for (i = 0; i < vport->num_txq_grp; i++) {
+ 			struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ 
+-			for (j = 0; j < tx_qgrp->num_txq && k < num_qids; j++, k++) {
++			for (j = 0; j < tx_qgrp->num_txq && k < num_qids; j++, k++)
+ 				tx_qgrp->txqs[j]->q_id = qids[k];
+-				tx_qgrp->txqs[j]->q_type =
+-					VIRTCHNL2_QUEUE_TYPE_TX;
+-			}
+ 		}
+ 		break;
+ 	case VIRTCHNL2_QUEUE_TYPE_RX:
+@@ -3268,12 +3271,13 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport,
+ 				num_rxq = rx_qgrp->singleq.num_rxq;
+ 
+ 			for (j = 0; j < num_rxq && k < num_qids; j++, k++) {
++				struct idpf_rx_queue *q;
++
+ 				if (idpf_is_queue_model_split(vport->rxq_model))
+ 					q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+ 				else
+ 					q = rx_qgrp->singleq.rxqs[j];
+ 				q->q_id = qids[k];
+-				q->q_type = VIRTCHNL2_QUEUE_TYPE_RX;
+ 			}
+ 		}
+ 		break;
+@@ -3282,8 +3286,6 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport,
+ 			struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ 
+ 			tx_qgrp->complq->q_id = qids[k];
+-			tx_qgrp->complq->q_type =
+-				VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+ 		}
+ 		break;
+ 	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+@@ -3292,9 +3294,10 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport,
+ 			u8 num_bufqs = vport->num_bufqs_per_qgrp;
+ 
+ 			for (j = 0; j < num_bufqs && k < num_qids; j++, k++) {
++				struct idpf_buf_queue *q;
++
+ 				q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+ 				q->q_id = qids[k];
+-				q->q_type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+ 			}
+ 		}
+ 		break;
+diff --git a/drivers/net/ethernet/realtek/r8169_phy_config.c b/drivers/net/ethernet/realtek/r8169_phy_config.c
+index 1f74317beb8878..e1e5d9672ae44b 100644
+--- a/drivers/net/ethernet/realtek/r8169_phy_config.c
++++ b/drivers/net/ethernet/realtek/r8169_phy_config.c
+@@ -1060,6 +1060,7 @@ static void rtl8125a_2_hw_phy_config(struct rtl8169_private *tp,
+ 	phy_modify_paged(phydev, 0xa86, 0x15, 0x0001, 0x0000);
+ 	rtl8168g_enable_gphy_10m(phydev);
+ 
++	rtl8168g_disable_aldps(phydev);
+ 	rtl8125a_config_eee_phy(phydev);
+ }
+ 
+@@ -1099,6 +1100,7 @@ static void rtl8125b_hw_phy_config(struct rtl8169_private *tp,
+ 	phy_modify_paged(phydev, 0xbf8, 0x12, 0xe000, 0xa000);
+ 
+ 	rtl8125_legacy_force_mode(phydev);
++	rtl8168g_disable_aldps(phydev);
+ 	rtl8125b_config_eee_phy(phydev);
+ }
+ 
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 4d100283c30fb4..067357b2495cdc 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -530,8 +530,16 @@ static void ravb_emac_init_gbeth(struct net_device *ndev)
+ 
+ static void ravb_emac_init_rcar(struct net_device *ndev)
+ {
+-	/* Receive frame limit set register */
+-	ravb_write(ndev, ndev->mtu + ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN, RFLR);
++	struct ravb_private *priv = netdev_priv(ndev);
++
++	/* Set receive frame length
++	 *
++	 * The length set here describes the frame from the destination address
++	 * up to and including the CRC data. However only the frame data,
++	 * excluding the CRC, are transferred to memory. To allow for the
++	 * largest frames add the CRC length to the maximum Rx descriptor size.
++	 */
++	ravb_write(ndev, priv->info->rx_max_frame_size + ETH_FCS_LEN, RFLR);
+ 
+ 	/* EMAC Mode: PAUSE prohibition; Duplex; RX Checksum; TX; RX */
+ 	ravb_write(ndev, ECMR_ZPF | ECMR_DM |
+diff --git a/drivers/net/ethernet/seeq/ether3.c b/drivers/net/ethernet/seeq/ether3.c
+index c672f92d65e976..9319a2675e7b65 100644
+--- a/drivers/net/ethernet/seeq/ether3.c
++++ b/drivers/net/ethernet/seeq/ether3.c
+@@ -847,9 +847,11 @@ static void ether3_remove(struct expansion_card *ec)
+ {
+ 	struct net_device *dev = ecard_get_drvdata(ec);
+ 
++	ether3_outw(priv(dev)->regs.config2 |= CFG2_CTRLO, REG_CONFIG2);
+ 	ecard_set_drvdata(ec, NULL);
+ 
+ 	unregister_netdev(dev);
++	del_timer_sync(&priv(dev)->timer);
+ 	free_netdev(dev);
+ 	ecard_release_resources(ec);
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+index 9e40c28d453ab1..ee3604f58def52 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+@@ -35,6 +35,9 @@ static int loongson_default_data(struct plat_stmmacenet_data *plat)
+ 	/* Disable RX queues routing by default */
+ 	plat->rx_queues_cfg[0].pkt_route = 0x0;
+ 
++	plat->clk_ref_rate = 125000000;
++	plat->clk_ptp_rate = 125000000;
++
+ 	/* Default to phy auto-detection */
+ 	plat->phy_addr = -1;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 33e2bd5a351cad..3b1bb6aa5b8c8c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2025,7 +2025,7 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv,
+ 	rx_q->queue_index = queue;
+ 	rx_q->priv_data = priv;
+ 
+-	pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
++	pp_params.flags = PP_FLAG_DMA_MAP | (xdp_prog ? PP_FLAG_DMA_SYNC_DEV : 0);
+ 	pp_params.pool_size = dma_conf->dma_rx_size;
+ 	num_pages = DIV_ROUND_UP(dma_conf->dma_buf_sz, PAGE_SIZE);
+ 	pp_params.order = ilog2(num_pages);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 88d7bc2ea71326..f1d928644b8242 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -674,15 +674,15 @@ static int axienet_device_reset(struct net_device *ndev)
+  *
+  * Would either be called after a successful transmit operation, or after
+  * there was an error when setting up the chain.
+- * Returns the number of descriptors handled.
++ * Returns the number of packets handled.
+  */
+ static int axienet_free_tx_chain(struct axienet_local *lp, u32 first_bd,
+ 				 int nr_bds, bool force, u32 *sizep, int budget)
+ {
+ 	struct axidma_bd *cur_p;
+ 	unsigned int status;
++	int i, packets = 0;
+ 	dma_addr_t phys;
+-	int i;
+ 
+ 	for (i = 0; i < nr_bds; i++) {
+ 		cur_p = &lp->tx_bd_v[(first_bd + i) % lp->tx_bd_num];
+@@ -701,8 +701,10 @@ static int axienet_free_tx_chain(struct axienet_local *lp, u32 first_bd,
+ 				 (cur_p->cntrl & XAXIDMA_BD_CTRL_LENGTH_MASK),
+ 				 DMA_TO_DEVICE);
+ 
+-		if (cur_p->skb && (status & XAXIDMA_BD_STS_COMPLETE_MASK))
++		if (cur_p->skb && (status & XAXIDMA_BD_STS_COMPLETE_MASK)) {
+ 			napi_consume_skb(cur_p->skb, budget);
++			packets++;
++		}
+ 
+ 		cur_p->app0 = 0;
+ 		cur_p->app1 = 0;
+@@ -718,7 +720,13 @@ static int axienet_free_tx_chain(struct axienet_local *lp, u32 first_bd,
+ 			*sizep += status & XAXIDMA_BD_STS_ACTUAL_LEN_MASK;
+ 	}
+ 
+-	return i;
++	if (!force) {
++		lp->tx_bd_ci += i;
++		if (lp->tx_bd_ci >= lp->tx_bd_num)
++			lp->tx_bd_ci %= lp->tx_bd_num;
++	}
++
++	return packets;
+ }
+ 
+ /**
+@@ -891,13 +899,10 @@ static int axienet_tx_poll(struct napi_struct *napi, int budget)
+ 	u32 size = 0;
+ 	int packets;
+ 
+-	packets = axienet_free_tx_chain(lp, lp->tx_bd_ci, budget, false, &size, budget);
++	packets = axienet_free_tx_chain(lp, lp->tx_bd_ci, lp->tx_bd_num, false,
++					&size, budget);
+ 
+ 	if (packets) {
+-		lp->tx_bd_ci += packets;
+-		if (lp->tx_bd_ci >= lp->tx_bd_num)
+-			lp->tx_bd_ci %= lp->tx_bd_num;
+-
+ 		u64_stats_update_begin(&lp->tx_stat_sync);
+ 		u64_stats_add(&lp->tx_packets, packets);
+ 		u64_stats_add(&lp->tx_bytes, size);
+@@ -1222,9 +1227,10 @@ static irqreturn_t axienet_tx_irq(int irq, void *_ndev)
+ 		u32 cr = lp->tx_dma_cr;
+ 
+ 		cr &= ~(XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK);
+-		axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, cr);
+-
+-		napi_schedule(&lp->napi_tx);
++		if (napi_schedule_prep(&lp->napi_tx)) {
++			axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, cr);
++			__napi_schedule(&lp->napi_tx);
++		}
+ 	}
+ 
+ 	return IRQ_HANDLED;
+@@ -1266,9 +1272,10 @@ static irqreturn_t axienet_rx_irq(int irq, void *_ndev)
+ 		u32 cr = lp->rx_dma_cr;
+ 
+ 		cr &= ~(XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK);
+-		axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET, cr);
+-
+-		napi_schedule(&lp->napi_rx);
++		if (napi_schedule_prep(&lp->napi_rx)) {
++			axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET, cr);
++			__napi_schedule(&lp->napi_rx);
++		}
+ 	}
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 18eb5ba436df69..2506aa8c603ec0 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -464,10 +464,15 @@ static enum skb_state defer_bh(struct usbnet *dev, struct sk_buff *skb,
+ void usbnet_defer_kevent (struct usbnet *dev, int work)
+ {
+ 	set_bit (work, &dev->flags);
+-	if (!schedule_work (&dev->kevent))
+-		netdev_dbg(dev->net, "kevent %s may have been dropped\n", usbnet_event_names[work]);
+-	else
+-		netdev_dbg(dev->net, "kevent %s scheduled\n", usbnet_event_names[work]);
++	if (!usbnet_going_away(dev)) {
++		if (!schedule_work(&dev->kevent))
++			netdev_dbg(dev->net,
++				   "kevent %s may have been dropped\n",
++				   usbnet_event_names[work]);
++		else
++			netdev_dbg(dev->net,
++				   "kevent %s scheduled\n", usbnet_event_names[work]);
++	}
+ }
+ EXPORT_SYMBOL_GPL(usbnet_defer_kevent);
+ 
+@@ -535,7 +540,8 @@ static int rx_submit (struct usbnet *dev, struct urb *urb, gfp_t flags)
+ 			tasklet_schedule (&dev->bh);
+ 			break;
+ 		case 0:
+-			__usbnet_queue_skb(&dev->rxq, skb, rx_start);
++			if (!usbnet_going_away(dev))
++				__usbnet_queue_skb(&dev->rxq, skb, rx_start);
+ 		}
+ 	} else {
+ 		netif_dbg(dev, ifdown, dev->net, "rx: stopped\n");
+@@ -843,9 +849,18 @@ int usbnet_stop (struct net_device *net)
+ 
+ 	/* deferred work (timer, softirq, task) must also stop */
+ 	dev->flags = 0;
+-	del_timer_sync (&dev->delay);
+-	tasklet_kill (&dev->bh);
++	del_timer_sync(&dev->delay);
++	tasklet_kill(&dev->bh);
+ 	cancel_work_sync(&dev->kevent);
++
++	/* We have cyclic dependencies. Those calls are needed
++	 * to break a cycle. We cannot fall into the gaps because
++	 * we have a flag
++	 */
++	tasklet_kill(&dev->bh);
++	del_timer_sync(&dev->delay);
++	cancel_work_sync(&dev->kevent);
++
+ 	if (!pm)
+ 		usb_autopm_put_interface(dev->intf);
+ 
+@@ -1171,7 +1186,8 @@ usbnet_deferred_kevent (struct work_struct *work)
+ 					   status);
+ 		} else {
+ 			clear_bit (EVENT_RX_HALT, &dev->flags);
+-			tasklet_schedule (&dev->bh);
++			if (!usbnet_going_away(dev))
++				tasklet_schedule(&dev->bh);
+ 		}
+ 	}
+ 
+@@ -1196,7 +1212,8 @@ usbnet_deferred_kevent (struct work_struct *work)
+ 			usb_autopm_put_interface(dev->intf);
+ fail_lowmem:
+ 			if (resched)
+-				tasklet_schedule (&dev->bh);
++				if (!usbnet_going_away(dev))
++					tasklet_schedule(&dev->bh);
+ 		}
+ 	}
+ 
+@@ -1559,6 +1576,7 @@ static void usbnet_bh (struct timer_list *t)
+ 	} else if (netif_running (dev->net) &&
+ 		   netif_device_present (dev->net) &&
+ 		   netif_carrier_ok(dev->net) &&
++		   !usbnet_going_away(dev) &&
+ 		   !timer_pending(&dev->delay) &&
+ 		   !test_bit(EVENT_RX_PAUSED, &dev->flags) &&
+ 		   !test_bit(EVENT_RX_HALT, &dev->flags)) {
+@@ -1606,6 +1624,7 @@ void usbnet_disconnect (struct usb_interface *intf)
+ 	usb_set_intfdata(intf, NULL);
+ 	if (!dev)
+ 		return;
++	usbnet_mark_going_away(dev);
+ 
+ 	xdev = interface_to_usbdev (intf);
+ 
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 21bd0c127b05ab..0b1630bb173a5e 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -1439,6 +1439,11 @@ static struct sk_buff *receive_small(struct net_device *dev,
+ 	struct page *page = virt_to_head_page(buf);
+ 	struct sk_buff *skb;
+ 
++	/* We passed the address of virtnet header to virtio-core,
++	 * so truncate the padding.
++	 */
++	buf -= VIRTNET_RX_PAD + xdp_headroom;
++
+ 	len -= vi->hdr_len;
+ 	u64_stats_add(&stats->bytes, len);
+ 
+@@ -2029,8 +2034,9 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
+ 	if (unlikely(!buf))
+ 		return -ENOMEM;
+ 
+-	virtnet_rq_init_one_sg(rq, buf + VIRTNET_RX_PAD + xdp_headroom,
+-			       vi->hdr_len + GOOD_PACKET_LEN);
++	buf += VIRTNET_RX_PAD + xdp_headroom;
++
++	virtnet_rq_init_one_sg(rq, buf, vi->hdr_len + GOOD_PACKET_LEN);
+ 
+ 	err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
+ 	if (err < 0) {
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index 141ba4487cb421..9d2d3a86abf172 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -396,6 +396,7 @@ struct ath11k_vif {
+ 	u8 bssid[ETH_ALEN];
+ 	struct cfg80211_bitrate_mask bitrate_mask;
+ 	struct delayed_work connection_loss_work;
++	struct work_struct bcn_tx_work;
+ 	int num_legacy_stations;
+ 	int rtscts_prot_mode;
+ 	int txpower;
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index eaa53bc39ab2c0..74719cb78888b1 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -6599,6 +6599,16 @@ static int ath11k_mac_vdev_delete(struct ath11k *ar, struct ath11k_vif *arvif)
+ 	return ret;
+ }
+ 
++static void ath11k_mac_bcn_tx_work(struct work_struct *work)
++{
++	struct ath11k_vif *arvif = container_of(work, struct ath11k_vif,
++						bcn_tx_work);
++
++	mutex_lock(&arvif->ar->conf_mutex);
++	ath11k_mac_bcn_tx_event(arvif);
++	mutex_unlock(&arvif->ar->conf_mutex);
++}
++
+ static int ath11k_mac_op_add_interface(struct ieee80211_hw *hw,
+ 				       struct ieee80211_vif *vif)
+ {
+@@ -6637,6 +6647,7 @@ static int ath11k_mac_op_add_interface(struct ieee80211_hw *hw,
+ 	arvif->vif = vif;
+ 
+ 	INIT_LIST_HEAD(&arvif->list);
++	INIT_WORK(&arvif->bcn_tx_work, ath11k_mac_bcn_tx_work);
+ 	INIT_DELAYED_WORK(&arvif->connection_loss_work,
+ 			  ath11k_mac_vif_sta_connection_loss_work);
+ 
+@@ -6879,6 +6890,7 @@ static void ath11k_mac_op_remove_interface(struct ieee80211_hw *hw,
+ 	int i;
+ 
+ 	cancel_delayed_work_sync(&arvif->connection_loss_work);
++	cancel_work_sync(&arvif->bcn_tx_work);
+ 
+ 	mutex_lock(&ar->conf_mutex);
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 6ff01c45f16591..f50a4c0d4b2bac 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -7404,7 +7404,9 @@ static void ath11k_bcn_tx_status_event(struct ath11k_base *ab, struct sk_buff *s
+ 		rcu_read_unlock();
+ 		return;
+ 	}
+-	ath11k_mac_bcn_tx_event(arvif);
++
++	queue_work(ab->workqueue, &arvif->bcn_tx_work);
++
+ 	rcu_read_unlock();
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 7037004ce97713..818ee74cf7a055 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -1948,9 +1948,8 @@ static void ath12k_peer_assoc_h_he(struct ath12k *ar,
+ 	 * request, then use MAX_AMPDU_LEN_FACTOR as 16 to calculate max_ampdu
+ 	 * length.
+ 	 */
+-	ampdu_factor = (he_cap->he_cap_elem.mac_cap_info[3] &
+-			IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK) >>
+-			IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK;
++	ampdu_factor = u8_get_bits(he_cap->he_cap_elem.mac_cap_info[3],
++				   IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK);
+ 
+ 	if (ampdu_factor) {
+ 		if (sta->deflink.vht_cap.vht_supported)
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index ef775af25093c4..55230dd00d0c7b 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -1528,6 +1528,7 @@ int ath12k_wmi_pdev_bss_chan_info_request(struct ath12k *ar,
+ 	cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_BSS_CHAN_INFO_REQUEST,
+ 						 sizeof(*cmd));
+ 	cmd->req_type = cpu_to_le32(type);
++	cmd->pdev_id = cpu_to_le32(ar->pdev->pdev_id);
+ 
+ 	ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ 		   "WMI bss chan info req type %d\n", type);
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index 742fe0b36cf20a..e947d646353c38 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -3104,6 +3104,7 @@ struct wmi_pdev_bss_chan_info_req_cmd {
+ 	__le32 tlv_header;
+ 	/* ref wmi_bss_chan_info_req_type */
+ 	__le32 req_type;
++	__le32 pdev_id;
+ } __packed;
+ 
+ struct wmi_ap_ps_peer_cmd {
+@@ -4053,7 +4054,6 @@ struct wmi_vdev_stopped_event {
+ } __packed;
+ 
+ struct wmi_pdev_bss_chan_info_event {
+-	__le32 pdev_id;
+ 	__le32 freq;	/* Units in MHz */
+ 	__le32 noise_floor;	/* units are dBm */
+ 	/* rx clear - how often the channel was unused */
+@@ -4071,6 +4071,7 @@ struct wmi_pdev_bss_chan_info_event {
+ 	/*rx_cycle cnt for my bss in 64bits format */
+ 	__le32 rx_bss_cycle_count_low;
+ 	__le32 rx_bss_cycle_count_high;
++	__le32 pdev_id;
+ } __packed;
+ 
+ #define WMI_VDEV_INSTALL_KEY_COMPL_STATUS_SUCCESS 0
+diff --git a/drivers/net/wireless/ath/ath9k/debug.c b/drivers/net/wireless/ath/ath9k/debug.c
+index d84e3ee7b5d902..bf3da631c69fda 100644
+--- a/drivers/net/wireless/ath/ath9k/debug.c
++++ b/drivers/net/wireless/ath/ath9k/debug.c
+@@ -1380,8 +1380,6 @@ int ath9k_init_debug(struct ath_hw *ah)
+ 
+ 	sc->debug.debugfs_phy = debugfs_create_dir("ath9k",
+ 						   sc->hw->wiphy->debugfsdir);
+-	if (IS_ERR(sc->debug.debugfs_phy))
+-		return -ENOMEM;
+ 
+ #ifdef CONFIG_ATH_DEBUG
+ 	debugfs_create_file("debug", 0600, sc->debug.debugfs_phy,
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
+index f7c6d9bc931196..9437d69877cc56 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
+@@ -486,8 +486,6 @@ int ath9k_htc_init_debug(struct ath_hw *ah)
+ 
+ 	priv->debug.debugfs_phy = debugfs_create_dir(KBUILD_MODNAME,
+ 					     priv->hw->wiphy->debugfsdir);
+-	if (IS_ERR(priv->debug.debugfs_phy))
+-		return -ENOMEM;
+ 
+ 	ath9k_cmn_spectral_init_debug(&priv->spec_priv, priv->debug.debugfs_phy);
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
+index 7ea2631b80692d..00794086cc7c97 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
+@@ -123,7 +123,7 @@ static s32 brcmf_btcoex_params_read(struct brcmf_if *ifp, u32 addr, u32 *data)
+ {
+ 	*data = addr;
+ 
+-	return brcmf_fil_iovar_int_get(ifp, "btc_params", data);
++	return brcmf_fil_iovar_int_query(ifp, "btc_params", data);
+ }
+ 
+ /**
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 826b768196e289..ccc069ae5e9d8b 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -663,8 +663,8 @@ static int brcmf_cfg80211_request_sta_if(struct brcmf_if *ifp, u8 *macaddr)
+ 	/* interface_create version 3+ */
+ 	/* get supported version from firmware side */
+ 	iface_create_ver = 0;
+-	err = brcmf_fil_bsscfg_int_get(ifp, "interface_create",
+-				       &iface_create_ver);
++	err = brcmf_fil_bsscfg_int_query(ifp, "interface_create",
++					 &iface_create_ver);
+ 	if (err) {
+ 		brcmf_err("fail to get supported version, err=%d\n", err);
+ 		return -EOPNOTSUPP;
+@@ -756,8 +756,8 @@ static int brcmf_cfg80211_request_ap_if(struct brcmf_if *ifp)
+ 	/* interface_create version 3+ */
+ 	/* get supported version from firmware side */
+ 	iface_create_ver = 0;
+-	err = brcmf_fil_bsscfg_int_get(ifp, "interface_create",
+-				       &iface_create_ver);
++	err = brcmf_fil_bsscfg_int_query(ifp, "interface_create",
++					 &iface_create_ver);
+ 	if (err) {
+ 		brcmf_err("fail to get supported version, err=%d\n", err);
+ 		return -EOPNOTSUPP;
+@@ -2101,7 +2101,8 @@ brcmf_set_key_mgmt(struct net_device *ndev, struct cfg80211_connect_params *sme)
+ 	if (!sme->crypto.n_akm_suites)
+ 		return 0;
+ 
+-	err = brcmf_fil_bsscfg_int_get(netdev_priv(ndev), "wpa_auth", &val);
++	err = brcmf_fil_bsscfg_int_get(netdev_priv(ndev),
++				       "wpa_auth", &val);
+ 	if (err) {
+ 		bphy_err(drvr, "could not get wpa_auth (%d)\n", err);
+ 		return err;
+@@ -2680,7 +2681,7 @@ brcmf_cfg80211_get_tx_power(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 	struct brcmf_cfg80211_info *cfg = wiphy_to_cfg(wiphy);
+ 	struct brcmf_cfg80211_vif *vif = wdev_to_vif(wdev);
+ 	struct brcmf_pub *drvr = cfg->pub;
+-	s32 qdbm = 0;
++	s32 qdbm;
+ 	s32 err;
+ 
+ 	brcmf_dbg(TRACE, "Enter\n");
+@@ -3067,7 +3068,7 @@ brcmf_cfg80211_get_station_ibss(struct brcmf_if *ifp,
+ 	struct brcmf_scb_val_le scbval;
+ 	struct brcmf_pktcnt_le pktcnt;
+ 	s32 err;
+-	u32 rate = 0;
++	u32 rate;
+ 	u32 rssi;
+ 
+ 	/* Get the current tx rate */
+@@ -7046,8 +7047,8 @@ static int brcmf_construct_chaninfo(struct brcmf_cfg80211_info *cfg,
+ 			ch.bw = BRCMU_CHAN_BW_20;
+ 			cfg->d11inf.encchspec(&ch);
+ 			chaninfo = ch.chspec;
+-			err = brcmf_fil_bsscfg_int_get(ifp, "per_chan_info",
+-						       &chaninfo);
++			err = brcmf_fil_bsscfg_int_query(ifp, "per_chan_info",
++							 &chaninfo);
+ 			if (!err) {
+ 				if (chaninfo & WL_CHAN_RADAR)
+ 					channel->flags |=
+@@ -7081,7 +7082,7 @@ static int brcmf_enable_bw40_2g(struct brcmf_cfg80211_info *cfg)
+ 
+ 	/* verify support for bw_cap command */
+ 	val = WLC_BAND_5G;
+-	err = brcmf_fil_iovar_int_get(ifp, "bw_cap", &val);
++	err = brcmf_fil_iovar_int_query(ifp, "bw_cap", &val);
+ 
+ 	if (!err) {
+ 		/* only set 2G bandwidth using bw_cap command */
+@@ -7157,11 +7158,11 @@ static void brcmf_get_bwcap(struct brcmf_if *ifp, u32 bw_cap[])
+ 	int err;
+ 
+ 	band = WLC_BAND_2G;
+-	err = brcmf_fil_iovar_int_get(ifp, "bw_cap", &band);
++	err = brcmf_fil_iovar_int_query(ifp, "bw_cap", &band);
+ 	if (!err) {
+ 		bw_cap[NL80211_BAND_2GHZ] = band;
+ 		band = WLC_BAND_5G;
+-		err = brcmf_fil_iovar_int_get(ifp, "bw_cap", &band);
++		err = brcmf_fil_iovar_int_query(ifp, "bw_cap", &band);
+ 		if (!err) {
+ 			bw_cap[NL80211_BAND_5GHZ] = band;
+ 			return;
+@@ -7170,7 +7171,6 @@ static void brcmf_get_bwcap(struct brcmf_if *ifp, u32 bw_cap[])
+ 		return;
+ 	}
+ 	brcmf_dbg(INFO, "fallback to mimo_bw_cap info\n");
+-	mimo_bwcap = 0;
+ 	err = brcmf_fil_iovar_int_get(ifp, "mimo_bw_cap", &mimo_bwcap);
+ 	if (err)
+ 		/* assume 20MHz if firmware does not give a clue */
+@@ -7266,10 +7266,10 @@ static int brcmf_setup_wiphybands(struct brcmf_cfg80211_info *cfg)
+ 	struct brcmf_pub *drvr = cfg->pub;
+ 	struct brcmf_if *ifp = brcmf_get_ifp(drvr, 0);
+ 	struct wiphy *wiphy = cfg_to_wiphy(cfg);
+-	u32 nmode = 0;
++	u32 nmode;
+ 	u32 vhtmode = 0;
+ 	u32 bw_cap[2] = { WLC_BW_20MHZ_BIT, WLC_BW_20MHZ_BIT };
+-	u32 rxchain = 0;
++	u32 rxchain;
+ 	u32 nchain;
+ 	int err;
+ 	s32 i;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index bf91b1e1368f03..df53dd1d7e748a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -691,7 +691,7 @@ static int brcmf_net_mon_open(struct net_device *ndev)
+ {
+ 	struct brcmf_if *ifp = netdev_priv(ndev);
+ 	struct brcmf_pub *drvr = ifp->drvr;
+-	u32 monitor = 0;
++	u32 monitor;
+ 	int err;
+ 
+ 	brcmf_dbg(TRACE, "Enter\n");
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
+index f23310a77a5d11..0d9ae197fa1ec3 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
+@@ -184,7 +184,7 @@ static void brcmf_feat_wlc_version_overrides(struct brcmf_pub *drv)
+ static void brcmf_feat_iovar_int_get(struct brcmf_if *ifp,
+ 				     enum brcmf_feat_id id, char *name)
+ {
+-	u32 data = 0;
++	u32 data;
+ 	int err;
+ 
+ 	/* we need to know firmware error */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
+index a315a7fac6a06f..31e080e4da6697 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
+@@ -96,15 +96,22 @@ static inline
+ s32 brcmf_fil_cmd_int_get(struct brcmf_if *ifp, u32 cmd, u32 *data)
+ {
+ 	s32 err;
+-	__le32 data_le = cpu_to_le32(*data);
+ 
+-	err = brcmf_fil_cmd_data_get(ifp, cmd, &data_le, sizeof(data_le));
++	err = brcmf_fil_cmd_data_get(ifp, cmd, data, sizeof(*data));
+ 	if (err == 0)
+-		*data = le32_to_cpu(data_le);
++		*data = le32_to_cpu(*(__le32 *)data);
+ 	brcmf_dbg(FIL, "ifidx=%d, cmd=%d, value=%d\n", ifp->ifidx, cmd, *data);
+ 
+ 	return err;
+ }
++static inline
++s32 brcmf_fil_cmd_int_query(struct brcmf_if *ifp, u32 cmd, u32 *data)
++{
++	__le32 *data_le = (__le32 *)data;
++
++	*data_le = cpu_to_le32(*data);
++	return brcmf_fil_cmd_int_get(ifp, cmd, data);
++}
+ 
+ s32 brcmf_fil_iovar_data_set(struct brcmf_if *ifp, const char *name,
+ 			     const void *data, u32 len);
+@@ -120,14 +127,21 @@ s32 brcmf_fil_iovar_int_set(struct brcmf_if *ifp, const char *name, u32 data)
+ static inline
+ s32 brcmf_fil_iovar_int_get(struct brcmf_if *ifp, const char *name, u32 *data)
+ {
+-	__le32 data_le = cpu_to_le32(*data);
+ 	s32 err;
+ 
+-	err = brcmf_fil_iovar_data_get(ifp, name, &data_le, sizeof(data_le));
++	err = brcmf_fil_iovar_data_get(ifp, name, data, sizeof(*data));
+ 	if (err == 0)
+-		*data = le32_to_cpu(data_le);
++		*data = le32_to_cpu(*(__le32 *)data);
+ 	return err;
+ }
++static inline
++s32 brcmf_fil_iovar_int_query(struct brcmf_if *ifp, const char *name, u32 *data)
++{
++	__le32 *data_le = (__le32 *)data;
++
++	*data_le = cpu_to_le32(*data);
++	return brcmf_fil_iovar_int_get(ifp, name, data);
++}
+ 
+ 
+ s32 brcmf_fil_bsscfg_data_set(struct brcmf_if *ifp, const char *name,
+@@ -145,15 +159,21 @@ s32 brcmf_fil_bsscfg_int_set(struct brcmf_if *ifp, const char *name, u32 data)
+ static inline
+ s32 brcmf_fil_bsscfg_int_get(struct brcmf_if *ifp, const char *name, u32 *data)
+ {
+-	__le32 data_le = cpu_to_le32(*data);
+ 	s32 err;
+ 
+-	err = brcmf_fil_bsscfg_data_get(ifp, name, &data_le,
+-					sizeof(data_le));
++	err = brcmf_fil_bsscfg_data_get(ifp, name, data, sizeof(*data));
+ 	if (err == 0)
+-		*data = le32_to_cpu(data_le);
++		*data = le32_to_cpu(*(__le32 *)data);
+ 	return err;
+ }
++static inline
++s32 brcmf_fil_bsscfg_int_query(struct brcmf_if *ifp, const char *name, u32 *data)
++{
++	__le32 *data_le = (__le32 *)data;
++
++	*data_le = cpu_to_le32(*data);
++	return brcmf_fil_bsscfg_int_get(ifp, name, data);
++}
+ 
+ s32 brcmf_fil_xtlv_data_set(struct brcmf_if *ifp, const char *name, u16 id,
+ 			    void *data, u32 len);
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/bz.c b/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
+index bc98b87cf2a131..02a95bf72740b1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
+@@ -148,6 +148,17 @@ const struct iwl_cfg_trans_params iwl_bz_trans_cfg = {
+ 	.ltr_delay = IWL_CFG_TRANS_LTR_DELAY_2500US,
+ };
+ 
++const struct iwl_cfg_trans_params iwl_gl_trans_cfg = {
++	.device_family = IWL_DEVICE_FAMILY_BZ,
++	.base_params = &iwl_bz_base_params,
++	.mq_rx_supported = true,
++	.rf_id = true,
++	.gen2 = true,
++	.umac_prph_offset = 0x300000,
++	.xtal_latency = 12000,
++	.low_latency_xtal = true,
++};
++
+ const char iwl_bz_name[] = "Intel(R) TBD Bz device";
+ const char iwl_fm_name[] = "Intel(R) Wi-Fi 7 BE201 320MHz";
+ const char iwl_gl_name[] = "Intel(R) Wi-Fi 7 BE200 320MHz";
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index 732889f96ca27d..29a28b5c281142 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -503,6 +503,7 @@ extern const struct iwl_cfg_trans_params iwl_so_long_latency_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_so_long_latency_imr_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_ma_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_bz_trans_cfg;
++extern const struct iwl_cfg_trans_params iwl_gl_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_sc_trans_cfg;
+ extern const char iwl9162_name[];
+ extern const char iwl9260_name[];
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/constants.h b/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
+index 3cbeaddf435865..4ff643e9db5e36 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
+@@ -109,7 +109,7 @@
+ #define IWL_MVM_FTM_INITIATOR_SECURE_LTF	false
+ #define IWL_MVM_FTM_RESP_NDP_SUPPORT		true
+ #define IWL_MVM_FTM_RESP_LMR_FEEDBACK_SUPPORT	true
+-#define IWL_MVM_FTM_NON_TB_MIN_TIME_BETWEEN_MSR	5
++#define IWL_MVM_FTM_NON_TB_MIN_TIME_BETWEEN_MSR	7
+ #define IWL_MVM_FTM_NON_TB_MAX_TIME_BETWEEN_MSR	1000
+ #define IWL_MVM_D3_DEBUG			false
+ #define IWL_MVM_USE_TWT				true
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index 9d681377cbab3f..f40d3e59d694a1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -107,16 +107,14 @@ static void iwl_mvm_cleanup_roc(struct iwl_mvm *mvm)
+ 		iwl_mvm_flush_sta(mvm, mvm->aux_sta.sta_id,
+ 				  mvm->aux_sta.tfd_queue_msk);
+ 
+-		if (mvm->mld_api_is_used) {
+-			iwl_mvm_mld_rm_aux_sta(mvm);
+-			mutex_unlock(&mvm->mutex);
+-			return;
+-		}
+-
+ 		/* In newer version of this command an aux station is added only
+ 		 * in cases of dedicated tx queue and need to be removed in end
+-		 * of use */
+-		if (iwl_mvm_has_new_station_api(mvm->fw))
++		 * of use. For the even newer mld api, use the appropriate
++		 * function.
++		 */
++		if (mvm->mld_api_is_used)
++			iwl_mvm_mld_rm_aux_sta(mvm);
++		else if (iwl_mvm_has_new_station_api(mvm->fw))
+ 			iwl_mvm_rm_aux_sta(mvm);
+ 	}
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index fed2754be68029..d93eec242204f2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -500,10 +500,38 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct pci_device_id iwl_hw_card_ids[] = {
+ 	{IWL_PCI_DEVICE(0x7E40, PCI_ANY_ID, iwl_ma_trans_cfg)},
+ 
+ /* Bz devices */
+-	{IWL_PCI_DEVICE(0x2727, PCI_ANY_ID, iwl_bz_trans_cfg)},
+-	{IWL_PCI_DEVICE(0x272D, PCI_ANY_ID, iwl_bz_trans_cfg)},
+-	{IWL_PCI_DEVICE(0x272b, PCI_ANY_ID, iwl_bz_trans_cfg)},
+-	{IWL_PCI_DEVICE(0xA840, PCI_ANY_ID, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0x272b, PCI_ANY_ID, iwl_gl_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x0000, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x0090, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x0094, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x0098, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x009C, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x00C0, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x00C4, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x00E0, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x00E4, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x00E8, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x00EC, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x0100, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x0110, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x0114, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x0118, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x011C, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x0310, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x0314, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x0510, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x0A10, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x1671, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x1672, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x1771, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x1772, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x1791, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x1792, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x4090, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x40C4, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x40E0, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x4110, iwl_bz_trans_cfg)},
++	{IWL_PCI_DEVICE(0xA840, 0x4314, iwl_bz_trans_cfg)},
+ 	{IWL_PCI_DEVICE(0x7740, PCI_ANY_ID, iwl_bz_trans_cfg)},
+ 	{IWL_PCI_DEVICE(0x4D40, PCI_ANY_ID, iwl_bz_trans_cfg)},
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index e8ba2e4e8484a8..b5dbcf925f9227 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -1524,7 +1524,7 @@ EXPORT_SYMBOL_GPL(mt76_wcid_init);
+ 
+ void mt76_wcid_cleanup(struct mt76_dev *dev, struct mt76_wcid *wcid)
+ {
+-	struct mt76_phy *phy = dev->phys[wcid->phy_idx];
++	struct mt76_phy *phy = mt76_dev_phy(dev, wcid->phy_idx);
+ 	struct ieee80211_hw *hw;
+ 	struct sk_buff_head list;
+ 	struct sk_buff *skb;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/dma.c b/drivers/net/wireless/mediatek/mt76/mt7603/dma.c
+index 14304b06371588..e238161dfaa971 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/dma.c
+@@ -29,7 +29,7 @@ mt7603_rx_loopback_skb(struct mt7603_dev *dev, struct sk_buff *skb)
+ 	struct ieee80211_sta *sta;
+ 	struct mt7603_sta *msta;
+ 	struct mt76_wcid *wcid;
+-	u8 tid = 0, hwq = 0;
++	u8 qid, tid = 0, hwq = 0;
+ 	void *priv;
+ 	int idx;
+ 	u32 val;
+@@ -57,7 +57,7 @@ mt7603_rx_loopback_skb(struct mt7603_dev *dev, struct sk_buff *skb)
+ 	if (ieee80211_is_data_qos(hdr->frame_control)) {
+ 		tid = *ieee80211_get_qos_ctl(hdr) &
+ 			 IEEE80211_QOS_CTL_TAG1D_MASK;
+-		u8 qid = tid_to_ac[tid];
++		qid = tid_to_ac[tid];
+ 		hwq = wmm_queue_map[qid];
+ 		skb_set_queue_mapping(skb, qid);
+ 	} else if (ieee80211_is_data(hdr->frame_control)) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/init.c b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
+index f7722f67db576f..0b9ebdcda221e4 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
+@@ -56,6 +56,9 @@ int mt7615_thermal_init(struct mt7615_dev *dev)
+ 
+ 	name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "mt7615_%s",
+ 			      wiphy_name(wiphy));
++	if (!name)
++		return -ENOMEM;
++
+ 	hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev, name, dev,
+ 						       mt7615_hwmon_groups);
+ 	return PTR_ERR_OR_ZERO(hwmon);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h b/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h
+index 353e6606984093..ef003d1620a5b7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h
+@@ -28,8 +28,6 @@ enum {
+ #define MT_RXD0_MESH			BIT(18)
+ #define MT_RXD0_MHCP			BIT(19)
+ #define MT_RXD0_NORMAL_ETH_TYPE_OFS	GENMASK(22, 16)
+-#define MT_RXD0_NORMAL_IP_SUM		BIT(23)
+-#define MT_RXD0_NORMAL_UDP_TCP_SUM	BIT(24)
+ 
+ #define MT_RXD0_SW_PKT_TYPE_MASK	GENMASK(31, 16)
+ #define MT_RXD0_SW_PKT_TYPE_MAP		0x380F
+@@ -80,6 +78,8 @@ enum {
+ #define MT_RXD3_NORMAL_BEACON_UC	BIT(21)
+ #define MT_RXD3_NORMAL_CO_ANT		BIT(22)
+ #define MT_RXD3_NORMAL_FCS_ERR		BIT(24)
++#define MT_RXD3_NORMAL_IP_SUM		BIT(26)
++#define MT_RXD3_NORMAL_UDP_TCP_SUM	BIT(27)
+ #define MT_RXD3_NORMAL_VLAN2ETH		BIT(31)
+ 
+ /* RXD DW4 */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index a978f434dc5e64..7bc3b4cd359255 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -194,6 +194,8 @@ static int mt7915_thermal_init(struct mt7915_phy *phy)
+ 
+ 	name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "mt7915_%s",
+ 			      wiphy_name(wiphy));
++	if (!name)
++		return -ENOMEM;
+ 
+ 	cdev = thermal_cooling_device_register(name, phy, &mt7915_thermal_ops);
+ 	if (!IS_ERR(cdev)) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index 2624edbb59a1a5..eea41b29f09675 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -564,8 +564,7 @@ static void mt7915_configure_filter(struct ieee80211_hw *hw,
+ 
+ 	MT76_FILTER(CONTROL, MT_WF_RFCR_DROP_CTS |
+ 			     MT_WF_RFCR_DROP_RTS |
+-			     MT_WF_RFCR_DROP_CTL_RSV |
+-			     MT_WF_RFCR_DROP_NDPA);
++			     MT_WF_RFCR_DROP_CTL_RSV);
+ 
+ 	*total_flags = flags;
+ 	rxfilter = phy->rxfilter;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+index ef0c721d26e332..d1d64fa7d35d02 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+@@ -52,6 +52,8 @@ static int mt7921_thermal_init(struct mt792x_phy *phy)
+ 
+ 	name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "mt7921_%s",
+ 			      wiphy_name(wiphy));
++	if (!name)
++		return -ENOMEM;
+ 
+ 	hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev, name, phy,
+ 						       mt7921_hwmon_groups);
+@@ -83,7 +85,7 @@ mt7921_regd_channel_update(struct wiphy *wiphy, struct mt792x_dev *dev)
+ 		}
+ 
+ 		/* UNII-4 */
+-		if (IS_UNII_INVALID(0, 5850, 5925))
++		if (IS_UNII_INVALID(0, 5845, 5925))
+ 			ch->flags |= IEEE80211_CHAN_DISABLED;
+ 	}
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mac.c b/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
+index c2460ef4993db6..e9d2d0f22a5863 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
+@@ -350,7 +350,7 @@ mt7925_mac_fill_rx_rate(struct mt792x_dev *dev,
+ static int
+ mt7925_mac_fill_rx(struct mt792x_dev *dev, struct sk_buff *skb)
+ {
+-	u32 csum_mask = MT_RXD0_NORMAL_IP_SUM | MT_RXD0_NORMAL_UDP_TCP_SUM;
++	u32 csum_mask = MT_RXD3_NORMAL_IP_SUM | MT_RXD3_NORMAL_UDP_TCP_SUM;
+ 	struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
+ 	bool hdr_trans, unicast, insert_ccmp_hdr = false;
+ 	u8 chfreq, qos_ctl = 0, remove_pad, amsdu_info;
+@@ -360,7 +360,6 @@ mt7925_mac_fill_rx(struct mt792x_dev *dev, struct sk_buff *skb)
+ 	struct mt792x_phy *phy = &dev->phy;
+ 	struct ieee80211_supported_band *sband;
+ 	u32 csum_status = *(u32 *)skb->cb;
+-	u32 rxd0 = le32_to_cpu(rxd[0]);
+ 	u32 rxd1 = le32_to_cpu(rxd[1]);
+ 	u32 rxd2 = le32_to_cpu(rxd[2]);
+ 	u32 rxd3 = le32_to_cpu(rxd[3]);
+@@ -418,7 +417,7 @@ mt7925_mac_fill_rx(struct mt792x_dev *dev, struct sk_buff *skb)
+ 	if (!sband->channels)
+ 		return -EINVAL;
+ 
+-	if (mt76_is_mmio(&dev->mt76) && (rxd0 & csum_mask) == csum_mask &&
++	if (mt76_is_mmio(&dev->mt76) && (rxd3 & csum_mask) == csum_mask &&
+ 	    !(csum_status & (BIT(0) | BIT(2) | BIT(3))))
+ 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index 652a9accc43cc3..7ec6bb5bc2767f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -613,6 +613,9 @@ static int mt7925_load_clc(struct mt792x_dev *dev, const char *fw_name)
+ 	for (offset = 0; offset < len; offset += le32_to_cpu(clc->len)) {
+ 		clc = (const struct mt7925_clc *)(clc_base + offset);
+ 
++		if (clc->idx > ARRAY_SIZE(phy->clc))
++			break;
++
+ 		/* do not init buf again if chip reset triggered */
+ 		if (phy->clc[clc->idx])
+ 			continue;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/init.c b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+index 283df84f1b4335..a98dcb40490bf8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+@@ -1011,8 +1011,6 @@ mt7996_set_stream_he_txbf_caps(struct mt7996_phy *phy,
+ 		return;
+ 
+ 	elem->phy_cap_info[3] |= IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER;
+-	if (vif == NL80211_IFTYPE_AP)
+-		elem->phy_cap_info[4] |= IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER;
+ 
+ 	c = FIELD_PREP(IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_UNDER_80MHZ_MASK,
+ 		       sts - 1) |
+@@ -1020,6 +1018,11 @@ mt7996_set_stream_he_txbf_caps(struct mt7996_phy *phy,
+ 		       sts - 1);
+ 	elem->phy_cap_info[5] |= c;
+ 
++	if (vif != NL80211_IFTYPE_AP)
++		return;
++
++	elem->phy_cap_info[4] |= IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER;
++
+ 	c = IEEE80211_HE_PHY_CAP6_TRIG_SU_BEAMFORMING_FB |
+ 	    IEEE80211_HE_PHY_CAP6_TRIG_MU_BEAMFORMING_PARTIAL_BW_FB;
+ 	elem->phy_cap_info[6] |= c;
+@@ -1179,7 +1182,6 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ 		IEEE80211_EHT_MAC_CAP0_OM_CONTROL;
+ 
+ 	eht_cap_elem->phy_cap_info[0] =
+-		IEEE80211_EHT_PHY_CAP0_320MHZ_IN_6GHZ |
+ 		IEEE80211_EHT_PHY_CAP0_NDP_4_EHT_LFT_32_GI |
+ 		IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMER |
+ 		IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMEE;
+@@ -1193,30 +1195,36 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ 		u8_encode_bits(u8_get_bits(val, GENMASK(2, 1)),
+ 			       IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_80MHZ_MASK) |
+ 		u8_encode_bits(val,
+-			       IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_160MHZ_MASK) |
+-		u8_encode_bits(val,
+-			       IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_320MHZ_MASK);
++			       IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_160MHZ_MASK);
+ 
+ 	eht_cap_elem->phy_cap_info[2] =
+ 		u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_80MHZ_MASK) |
+-		u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_160MHZ_MASK) |
+-		u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_320MHZ_MASK);
++		u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_160MHZ_MASK);
++
++	if (band == NL80211_BAND_6GHZ) {
++		eht_cap_elem->phy_cap_info[0] |=
++			IEEE80211_EHT_PHY_CAP0_320MHZ_IN_6GHZ;
++
++		eht_cap_elem->phy_cap_info[1] |=
++			u8_encode_bits(val,
++				       IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_320MHZ_MASK);
++
++		eht_cap_elem->phy_cap_info[2] |=
++			u8_encode_bits(sts - 1,
++				       IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_320MHZ_MASK);
++	}
+ 
+ 	eht_cap_elem->phy_cap_info[3] =
+ 		IEEE80211_EHT_PHY_CAP3_NG_16_SU_FEEDBACK |
+ 		IEEE80211_EHT_PHY_CAP3_NG_16_MU_FEEDBACK |
+ 		IEEE80211_EHT_PHY_CAP3_CODEBOOK_4_2_SU_FDBK |
+-		IEEE80211_EHT_PHY_CAP3_CODEBOOK_7_5_MU_FDBK |
+-		IEEE80211_EHT_PHY_CAP3_TRIG_SU_BF_FDBK |
+-		IEEE80211_EHT_PHY_CAP3_TRIG_MU_BF_PART_BW_FDBK |
+-		IEEE80211_EHT_PHY_CAP3_TRIG_CQI_FDBK;
++		IEEE80211_EHT_PHY_CAP3_CODEBOOK_7_5_MU_FDBK;
+ 
+ 	eht_cap_elem->phy_cap_info[4] =
+ 		u8_encode_bits(min_t(int, sts - 1, 2),
+ 			       IEEE80211_EHT_PHY_CAP4_MAX_NC_MASK);
+ 
+ 	eht_cap_elem->phy_cap_info[5] =
+-		IEEE80211_EHT_PHY_CAP5_NON_TRIG_CQI_FEEDBACK |
+ 		u8_encode_bits(IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_16US,
+ 			       IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_MASK) |
+ 		u8_encode_bits(u8_get_bits(0x11, GENMASK(1, 0)),
+@@ -1230,14 +1238,6 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ 			       IEEE80211_EHT_PHY_CAP6_MAX_NUM_SUPP_EHT_LTF_MASK) |
+ 		u8_encode_bits(val, IEEE80211_EHT_PHY_CAP6_MCS15_SUPP_MASK);
+ 
+-	eht_cap_elem->phy_cap_info[7] =
+-		IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_80MHZ |
+-		IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_160MHZ |
+-		IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_320MHZ |
+-		IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_80MHZ |
+-		IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_160MHZ |
+-		IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_320MHZ;
+-
+ 	val = u8_encode_bits(nss, IEEE80211_EHT_MCS_NSS_RX) |
+ 	      u8_encode_bits(nss, IEEE80211_EHT_MCS_NSS_TX);
+ #define SET_EHT_MAX_NSS(_bw, _val) do {				\
+@@ -1248,8 +1248,29 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ 
+ 	SET_EHT_MAX_NSS(80, val);
+ 	SET_EHT_MAX_NSS(160, val);
+-	SET_EHT_MAX_NSS(320, val);
++	if (band == NL80211_BAND_6GHZ)
++		SET_EHT_MAX_NSS(320, val);
+ #undef SET_EHT_MAX_NSS
++
++	if (iftype != NL80211_IFTYPE_AP)
++		return;
++
++	eht_cap_elem->phy_cap_info[3] |=
++		IEEE80211_EHT_PHY_CAP3_TRIG_SU_BF_FDBK |
++		IEEE80211_EHT_PHY_CAP3_TRIG_MU_BF_PART_BW_FDBK;
++
++	eht_cap_elem->phy_cap_info[7] =
++		IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_80MHZ |
++		IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_160MHZ |
++		IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_80MHZ |
++		IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_160MHZ;
++
++	if (band != NL80211_BAND_6GHZ)
++		return;
++
++	eht_cap_elem->phy_cap_info[7] |=
++		IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_320MHZ |
++		IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_320MHZ;
+ }
+ 
+ static void
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index bc7111a71f98ca..fd5fe96c32e3d1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -435,7 +435,7 @@ mt7996_mac_fill_rx(struct mt7996_dev *dev, enum mt76_rxq_id q,
+ 	u32 rxd2 = le32_to_cpu(rxd[2]);
+ 	u32 rxd3 = le32_to_cpu(rxd[3]);
+ 	u32 rxd4 = le32_to_cpu(rxd[4]);
+-	u32 csum_mask = MT_RXD0_NORMAL_IP_SUM | MT_RXD0_NORMAL_UDP_TCP_SUM;
++	u32 csum_mask = MT_RXD3_NORMAL_IP_SUM | MT_RXD3_NORMAL_UDP_TCP_SUM;
+ 	u32 csum_status = *(u32 *)skb->cb;
+ 	u32 mesh_mask = MT_RXD0_MESH | MT_RXD0_MHCP;
+ 	bool is_mesh = (rxd0 & mesh_mask) == mesh_mask;
+@@ -497,7 +497,7 @@ mt7996_mac_fill_rx(struct mt7996_dev *dev, enum mt76_rxq_id q,
+ 	if (!sband->channels)
+ 		return -EINVAL;
+ 
+-	if ((rxd0 & csum_mask) == csum_mask &&
++	if ((rxd3 & csum_mask) == csum_mask &&
+ 	    !(csum_status & (BIT(0) | BIT(2) | BIT(3))))
+ 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index 7c97140d8255a7..2b094b33ba31ac 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -206,7 +206,7 @@ static int mt7996_add_interface(struct ieee80211_hw *hw,
+ 	mvif->mt76.omac_idx = idx;
+ 	mvif->phy = phy;
+ 	mvif->mt76.band_idx = band_idx;
+-	mvif->mt76.wmm_idx = vif->type != NL80211_IFTYPE_AP;
++	mvif->mt76.wmm_idx = vif->type == NL80211_IFTYPE_AP ? 0 : 3;
+ 
+ 	ret = mt7996_mcu_add_dev_info(phy, vif, true);
+ 	if (ret)
+@@ -307,6 +307,10 @@ int mt7996_set_channel(struct mt7996_phy *phy)
+ 	if (ret)
+ 		goto out;
+ 
++	ret = mt7996_mcu_set_chan_info(phy, UNI_CHANNEL_RX_PATH);
++	if (ret)
++		goto out;
++
+ 	ret = mt7996_dfs_init_radar_detector(phy);
+ 	mt7996_mac_cca_stats_reset(phy);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+index 2c857867780095..36ab59885cf2b6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+@@ -735,7 +735,7 @@ void mt7996_mcu_rx_event(struct mt7996_dev *dev, struct sk_buff *skb)
+ static struct tlv *
+ mt7996_mcu_add_uni_tlv(struct sk_buff *skb, u16 tag, u16 len)
+ {
+-	struct tlv *ptlv = skb_put(skb, len);
++	struct tlv *ptlv = skb_put_zero(skb, len);
+ 
+ 	ptlv->tag = cpu_to_le16(tag);
+ 	ptlv->len = cpu_to_le16(len);
+@@ -822,7 +822,7 @@ mt7996_mcu_bss_mbssid_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ 	struct bss_info_uni_mbssid *mbssid;
+ 	struct tlv *tlv;
+ 
+-	if (!vif->bss_conf.bssid_indicator)
++	if (!vif->bss_conf.bssid_indicator && enable)
+ 		return;
+ 
+ 	tlv = mt7996_mcu_add_uni_tlv(skb, UNI_BSS_INFO_11V_MBSSID, sizeof(*mbssid));
+@@ -1429,10 +1429,10 @@ mt7996_is_ebf_supported(struct mt7996_phy *phy, struct ieee80211_vif *vif,
+ 
+ 		if (bfee)
+ 			return vif->bss_conf.eht_su_beamformee &&
+-			       EHT_PHY(CAP0_SU_BEAMFORMEE, pe->phy_cap_info[0]);
++			       EHT_PHY(CAP0_SU_BEAMFORMER, pe->phy_cap_info[0]);
+ 		else
+ 			return vif->bss_conf.eht_su_beamformer &&
+-			       EHT_PHY(CAP0_SU_BEAMFORMER, pe->phy_cap_info[0]);
++			       EHT_PHY(CAP0_SU_BEAMFORMEE, pe->phy_cap_info[0]);
+ 	}
+ 
+ 	if (sta->deflink.he_cap.has_he) {
+@@ -1544,6 +1544,9 @@ mt7996_mcu_sta_bfer_he(struct ieee80211_sta *sta, struct ieee80211_vif *vif,
+ 	u8 nss_mcs = mt7996_mcu_get_sta_nss(mcs_map);
+ 	u8 snd_dim, sts;
+ 
++	if (!vc)
++		return;
++
+ 	bf->tx_mode = MT_PHY_TYPE_HE_SU;
+ 
+ 	mt7996_mcu_sta_sounding_rate(bf);
+@@ -1653,7 +1656,7 @@ mt7996_mcu_sta_bfer_tlv(struct mt7996_dev *dev, struct sk_buff *skb,
+ {
+ 	struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
+ 	struct mt7996_phy *phy = mvif->phy;
+-	int tx_ant = hweight8(phy->mt76->chainmask) - 1;
++	int tx_ant = hweight16(phy->mt76->chainmask) - 1;
+ 	struct sta_rec_bf *bf;
+ 	struct tlv *tlv;
+ 	static const u8 matrix[4][4] = {
+diff --git a/drivers/net/wireless/microchip/wilc1000/hif.c b/drivers/net/wireless/microchip/wilc1000/hif.c
+index 7719e4f3e2a23a..2ee8c8e365f89f 100644
+--- a/drivers/net/wireless/microchip/wilc1000/hif.c
++++ b/drivers/net/wireless/microchip/wilc1000/hif.c
+@@ -384,6 +384,7 @@ wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 	struct wilc_join_bss_param *param;
+ 	u8 rates_len = 0;
+ 	int ies_len;
++	u64 ies_tsf;
+ 	int ret;
+ 
+ 	param = kzalloc(sizeof(*param), GFP_KERNEL);
+@@ -399,6 +400,7 @@ wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 		return NULL;
+ 	}
+ 	ies_len = ies->len;
++	ies_tsf = ies->tsf;
+ 	rcu_read_unlock();
+ 
+ 	param->beacon_period = cpu_to_le16(bss->beacon_interval);
+@@ -454,7 +456,7 @@ wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 				    IEEE80211_P2P_ATTR_ABSENCE_NOTICE,
+ 				    (u8 *)&noa_attr, sizeof(noa_attr));
+ 	if (ret > 0) {
+-		param->tsf_lo = cpu_to_le32(ies->tsf);
++		param->tsf_lo = cpu_to_le32(ies_tsf);
+ 		param->noa_enabled = 1;
+ 		param->idx = noa_attr.index;
+ 		if (noa_attr.oppps_ctwindow & IEEE80211_P2P_OPPPS_ENABLE_BIT) {
+diff --git a/drivers/net/wireless/realtek/rtw88/coex.c b/drivers/net/wireless/realtek/rtw88/coex.c
+index de3332eb7a227a..a99776af56c27f 100644
+--- a/drivers/net/wireless/realtek/rtw88/coex.c
++++ b/drivers/net/wireless/realtek/rtw88/coex.c
+@@ -2194,7 +2194,6 @@ static void rtw_coex_action_bt_a2dp_pan(struct rtw_dev *rtwdev)
+ 	struct rtw_coex_stat *coex_stat = &coex->stat;
+ 	struct rtw_efuse *efuse = &rtwdev->efuse;
+ 	u8 table_case, tdma_case;
+-	bool wl_cpt_test = false, bt_cpt_test = false;
+ 
+ 	rtw_dbg(rtwdev, RTW_DBG_COEX, "[BTCoex], %s()\n", __func__);
+ 
+@@ -2202,29 +2201,16 @@ static void rtw_coex_action_bt_a2dp_pan(struct rtw_dev *rtwdev)
+ 	rtw_coex_set_rf_para(rtwdev, chip->wl_rf_para_rx[0]);
+ 	if (efuse->share_ant) {
+ 		/* Shared-Ant */
+-		if (wl_cpt_test) {
+-			if (coex_stat->wl_gl_busy) {
+-				table_case = 20;
+-				tdma_case = 17;
+-			} else {
+-				table_case = 10;
+-				tdma_case = 15;
+-			}
+-		} else if (bt_cpt_test) {
+-			table_case = 26;
+-			tdma_case = 26;
+-		} else {
+-			if (coex_stat->wl_gl_busy &&
+-			    coex_stat->wl_noisy_level == 0)
+-				table_case = 14;
+-			else
+-				table_case = 10;
++		if (coex_stat->wl_gl_busy &&
++		    coex_stat->wl_noisy_level == 0)
++			table_case = 14;
++		else
++			table_case = 10;
+ 
+-			if (coex_stat->wl_gl_busy)
+-				tdma_case = 15;
+-			else
+-				tdma_case = 20;
+-		}
++		if (coex_stat->wl_gl_busy)
++			tdma_case = 15;
++		else
++			tdma_case = 20;
+ 	} else {
+ 		/* Non-Shared-Ant */
+ 		table_case = 112;
+@@ -2235,11 +2221,7 @@ static void rtw_coex_action_bt_a2dp_pan(struct rtw_dev *rtwdev)
+ 			tdma_case = 120;
+ 	}
+ 
+-	if (wl_cpt_test)
+-		rtw_coex_set_rf_para(rtwdev, chip->wl_rf_para_rx[1]);
+-	else
+-		rtw_coex_set_rf_para(rtwdev, chip->wl_rf_para_rx[0]);
+-
++	rtw_coex_set_rf_para(rtwdev, chip->wl_rf_para_rx[0]);
+ 	rtw_coex_table(rtwdev, false, table_case);
+ 	rtw_coex_tdma(rtwdev, false, tdma_case);
+ }
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
+index ab7d414d0ba679..b9b0114e253b43 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.c
++++ b/drivers/net/wireless/realtek/rtw88/fw.c
+@@ -1468,10 +1468,12 @@ int rtw_fw_write_data_rsvd_page(struct rtw_dev *rtwdev, u16 pg_addr,
+ 	val |= BIT_ENSWBCN >> 8;
+ 	rtw_write8(rtwdev, REG_CR + 1, val);
+ 
+-	val = rtw_read8(rtwdev, REG_FWHW_TXQ_CTRL + 2);
+-	bckp[1] = val;
+-	val &= ~(BIT_EN_BCNQ_DL >> 16);
+-	rtw_write8(rtwdev, REG_FWHW_TXQ_CTRL + 2, val);
++	if (rtw_hci_type(rtwdev) == RTW_HCI_TYPE_PCIE) {
++		val = rtw_read8(rtwdev, REG_FWHW_TXQ_CTRL + 2);
++		bckp[1] = val;
++		val &= ~(BIT_EN_BCNQ_DL >> 16);
++		rtw_write8(rtwdev, REG_FWHW_TXQ_CTRL + 2, val);
++	}
+ 
+ 	ret = rtw_hci_write_data_rsvd_page(rtwdev, buf, size);
+ 	if (ret) {
+@@ -1496,7 +1498,8 @@ int rtw_fw_write_data_rsvd_page(struct rtw_dev *rtwdev, u16 pg_addr,
+ 	rsvd_pg_head = rtwdev->fifo.rsvd_boundary;
+ 	rtw_write16(rtwdev, REG_FIFOPAGE_CTRL_2,
+ 		    rsvd_pg_head | BIT_BCN_VALID_V1);
+-	rtw_write8(rtwdev, REG_FWHW_TXQ_CTRL + 2, bckp[1]);
++	if (rtw_hci_type(rtwdev) == RTW_HCI_TYPE_PCIE)
++		rtw_write8(rtwdev, REG_FWHW_TXQ_CTRL + 2, bckp[1]);
+ 	rtw_write8(rtwdev, REG_CR + 1, bckp[0]);
+ 
+ 	return ret;
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 7ab7a988b123f1..33a7577557a568 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -1313,20 +1313,21 @@ static int rtw_wait_firmware_completion(struct rtw_dev *rtwdev)
+ {
+ 	const struct rtw_chip_info *chip = rtwdev->chip;
+ 	struct rtw_fw_state *fw;
++	int ret = 0;
+ 
+ 	fw = &rtwdev->fw;
+ 	wait_for_completion(&fw->completion);
+ 	if (!fw->firmware)
+-		return -EINVAL;
++		ret = -EINVAL;
+ 
+ 	if (chip->wow_fw_name) {
+ 		fw = &rtwdev->wow_fw;
+ 		wait_for_completion(&fw->completion);
+ 		if (!fw->firmware)
+-			return -EINVAL;
++			ret = -EINVAL;
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static enum rtw_lps_deep_mode rtw_update_lps_deep_mode(struct rtw_dev *rtwdev,
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821cu.c b/drivers/net/wireless/realtek/rtw88/rtw8821cu.c
+index e2c7d9f876836a..a019f4085e7389 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821cu.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821cu.c
+@@ -31,8 +31,6 @@ static const struct usb_device_id rtw_8821cu_id_table[] = {
+ 	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc82b, 0xff, 0xff, 0xff),
+ 	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
+-	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc82c, 0xff, 0xff, 0xff),
+-	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x331d, 0xff, 0xff, 0xff),
+ 	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* D-Link */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x7392, 0xc811, 0xff, 0xff, 0xff),
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index 62376d1cca22fc..732d787652208b 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -2611,12 +2611,14 @@ static void query_phy_status_page1(struct rtw_dev *rtwdev, u8 *phy_status,
+ 	else
+ 		rxsc = GET_PHY_STAT_P1_HT_RXSC(phy_status);
+ 
+-	if (rxsc >= 9 && rxsc <= 12)
++	if (rxsc == 0)
++		bw = rtwdev->hal.current_band_width;
++	else if (rxsc >= 1 && rxsc <= 8)
++		bw = RTW_CHANNEL_WIDTH_20;
++	else if (rxsc >= 9 && rxsc <= 12)
+ 		bw = RTW_CHANNEL_WIDTH_40;
+-	else if (rxsc >= 13)
+-		bw = RTW_CHANNEL_WIDTH_80;
+ 	else
+-		bw = RTW_CHANNEL_WIDTH_20;
++		bw = RTW_CHANNEL_WIDTH_80;
+ 
+ 	channel = GET_PHY_STAT_P1_CHANNEL(phy_status);
+ 	rtw_set_rx_freq_band(pkt_stat, channel);
+diff --git a/drivers/net/wireless/realtek/rtw88/rx.h b/drivers/net/wireless/realtek/rtw88/rx.h
+index d3668c4efc24d5..8a072dd3d73ce4 100644
+--- a/drivers/net/wireless/realtek/rtw88/rx.h
++++ b/drivers/net/wireless/realtek/rtw88/rx.h
+@@ -41,7 +41,7 @@ enum rtw_rx_desc_enc {
+ #define GET_RX_DESC_TSFL(rxdesc)                                               \
+ 	le32_get_bits(*((__le32 *)(rxdesc) + 0x05), GENMASK(31, 0))
+ #define GET_RX_DESC_BW(rxdesc)                                                 \
+-	(le32_get_bits(*((__le32 *)(rxdesc) + 0x04), GENMASK(31, 24)))
++	(le32_get_bits(*((__le32 *)(rxdesc) + 0x04), GENMASK(5, 4)))
+ 
+ void rtw_rx_stats(struct rtw_dev *rtwdev, struct ieee80211_vif *vif,
+ 		  struct sk_buff *skb);
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.h b/drivers/net/wireless/realtek/rtw89/mac.h
+index a580cb71923379..755a55c8bc20bf 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.h
++++ b/drivers/net/wireless/realtek/rtw89/mac.h
+@@ -421,7 +421,6 @@ enum rtw89_mac_c2h_mrc_func {
+ 
+ enum rtw89_mac_c2h_wow_func {
+ 	RTW89_MAC_C2H_FUNC_AOAC_REPORT,
+-	RTW89_MAC_C2H_FUNC_READ_WOW_CAM,
+ 
+ 	NUM_OF_RTW89_MAC_C2H_FUNC_WOW,
+ };
+diff --git a/drivers/ntb/hw/intel/ntb_hw_gen1.c b/drivers/ntb/hw/intel/ntb_hw_gen1.c
+index 9ab836d0d4f12d..079b8cd7978573 100644
+--- a/drivers/ntb/hw/intel/ntb_hw_gen1.c
++++ b/drivers/ntb/hw/intel/ntb_hw_gen1.c
+@@ -778,7 +778,7 @@ static void ndev_init_debugfs(struct intel_ntb_dev *ndev)
+ 		ndev->debugfs_dir =
+ 			debugfs_create_dir(pci_name(ndev->ntb.pdev),
+ 					   debugfs_dir);
+-		if (!ndev->debugfs_dir)
++		if (IS_ERR(ndev->debugfs_dir))
+ 			ndev->debugfs_info = NULL;
+ 		else
+ 			ndev->debugfs_info =
+diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
+index f9e7847a378e77..c84fadfc63c52c 100644
+--- a/drivers/ntb/ntb_transport.c
++++ b/drivers/ntb/ntb_transport.c
+@@ -807,16 +807,29 @@ static void ntb_free_mw(struct ntb_transport_ctx *nt, int num_mw)
+ }
+ 
+ static int ntb_alloc_mw_buffer(struct ntb_transport_mw *mw,
+-			       struct device *dma_dev, size_t align)
++			       struct device *ntb_dev, size_t align)
+ {
+ 	dma_addr_t dma_addr;
+ 	void *alloc_addr, *virt_addr;
+ 	int rc;
+ 
+-	alloc_addr = dma_alloc_coherent(dma_dev, mw->alloc_size,
+-					&dma_addr, GFP_KERNEL);
++	/*
++	 * The buffer here is allocated against the NTB device. The reason to
++	 * use dma_alloc_*() call is to allocate a large IOVA contiguous buffer
++	 * backing the NTB BAR for the remote host to write to. During receive
++	 * processing, the data is being copied out of the receive buffer to
++	 * the kernel skbuff. When a DMA device is being used, dma_map_page()
++	 * is called on the kvaddr of the receive buffer (from dma_alloc_*())
++	 * and remapped against the DMA device. It appears to be a double
++	 * DMA mapping of buffers, but first is mapped to the NTB device and
++	 * second is to the DMA device. DMA_ATTR_FORCE_CONTIGUOUS is necessary
++	 * in order for the later dma_map_page() to not fail.
++	 */
++	alloc_addr = dma_alloc_attrs(ntb_dev, mw->alloc_size,
++				     &dma_addr, GFP_KERNEL,
++				     DMA_ATTR_FORCE_CONTIGUOUS);
+ 	if (!alloc_addr) {
+-		dev_err(dma_dev, "Unable to alloc MW buff of size %zu\n",
++		dev_err(ntb_dev, "Unable to alloc MW buff of size %zu\n",
+ 			mw->alloc_size);
+ 		return -ENOMEM;
+ 	}
+@@ -845,7 +858,7 @@ static int ntb_alloc_mw_buffer(struct ntb_transport_mw *mw,
+ 	return 0;
+ 
+ err:
+-	dma_free_coherent(dma_dev, mw->alloc_size, alloc_addr, dma_addr);
++	dma_free_coherent(ntb_dev, mw->alloc_size, alloc_addr, dma_addr);
+ 
+ 	return rc;
+ }
+diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
+index 553f1f46bc664f..72bc1d017a46ee 100644
+--- a/drivers/ntb/test/ntb_perf.c
++++ b/drivers/ntb/test/ntb_perf.c
+@@ -1227,7 +1227,7 @@ static ssize_t perf_dbgfs_read_info(struct file *filep, char __user *ubuf,
+ 			"\tOut buffer addr 0x%pK\n", peer->outbuf);
+ 
+ 		pos += scnprintf(buf + pos, buf_size - pos,
+-			"\tOut buff phys addr %pa[p]\n", &peer->out_phys_addr);
++			"\tOut buff phys addr %pap\n", &peer->out_phys_addr);
+ 
+ 		pos += scnprintf(buf + pos, buf_size - pos,
+ 			"\tOut buffer size %pa\n", &peer->outbuf_size);
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index d6d558f94d6bb2..35d9f3cc2efabf 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -1937,12 +1937,16 @@ static int cmp_dpa(const void *a, const void *b)
+ static struct device **scan_labels(struct nd_region *nd_region)
+ {
+ 	int i, count = 0;
+-	struct device *dev, **devs = NULL;
++	struct device *dev, **devs;
+ 	struct nd_label_ent *label_ent, *e;
+ 	struct nd_mapping *nd_mapping = &nd_region->mapping[0];
+ 	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
+ 	resource_size_t map_end = nd_mapping->start + nd_mapping->size - 1;
+ 
++	devs = kcalloc(2, sizeof(dev), GFP_KERNEL);
++	if (!devs)
++		return NULL;
++
+ 	/* "safe" because create_namespace_pmem() might list_move() label_ent */
+ 	list_for_each_entry_safe(label_ent, e, &nd_mapping->labels, list) {
+ 		struct nd_namespace_label *nd_label = label_ent->label;
+@@ -1961,12 +1965,14 @@ static struct device **scan_labels(struct nd_region *nd_region)
+ 			goto err;
+ 		if (i < count)
+ 			continue;
+-		__devs = kcalloc(count + 2, sizeof(dev), GFP_KERNEL);
+-		if (!__devs)
+-			goto err;
+-		memcpy(__devs, devs, sizeof(dev) * count);
+-		kfree(devs);
+-		devs = __devs;
++		if (count) {
++			__devs = kcalloc(count + 2, sizeof(dev), GFP_KERNEL);
++			if (!__devs)
++				goto err;
++			memcpy(__devs, devs, sizeof(dev) * count);
++			kfree(devs);
++			devs = __devs;
++		}
+ 
+ 		dev = create_namespace_pmem(nd_region, nd_mapping, nd_label);
+ 		if (IS_ERR(dev)) {
+@@ -1993,11 +1999,6 @@ static struct device **scan_labels(struct nd_region *nd_region)
+ 
+ 		/* Publish a zero-sized namespace for userspace to configure. */
+ 		nd_mapping_free_labels(nd_mapping);
+-
+-		devs = kcalloc(2, sizeof(dev), GFP_KERNEL);
+-		if (!devs)
+-			goto err;
+-
+ 		nspm = kzalloc(sizeof(*nspm), GFP_KERNEL);
+ 		if (!nspm)
+ 			goto err;
+@@ -2036,11 +2037,10 @@ static struct device **scan_labels(struct nd_region *nd_region)
+ 	return devs;
+ 
+  err:
+-	if (devs) {
+-		for (i = 0; devs[i]; i++)
+-			namespace_pmem_release(devs[i]);
+-		kfree(devs);
+-	}
++	for (i = 0; devs[i]; i++)
++		namespace_pmem_release(devs[i]);
++	kfree(devs);
++
+ 	return NULL;
+ }
+ 
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 03a6868f4dbc18..a47d35102b7ce7 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -587,7 +587,7 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
+ 		rc = device_add_disk(&head->subsys->dev, head->disk,
+ 				     nvme_ns_attr_groups);
+ 		if (rc) {
+-			clear_bit(NVME_NSHEAD_DISK_LIVE, &ns->flags);
++			clear_bit(NVME_NSHEAD_DISK_LIVE, &head->flags);
+ 			return;
+ 		}
+ 		nvme_add_ns_head_cdev(head);
+diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
+index d2d17d37d3e0b5..4c67fa78db2677 100644
+--- a/drivers/pci/controller/dwc/pci-dra7xx.c
++++ b/drivers/pci/controller/dwc/pci-dra7xx.c
+@@ -850,14 +850,21 @@ static int dra7xx_pcie_probe(struct platform_device *pdev)
+ 	dra7xx->mode = mode;
+ 
+ 	ret = devm_request_threaded_irq(dev, irq, NULL, dra7xx_pcie_irq_handler,
+-			       IRQF_SHARED, "dra7xx-pcie-main", dra7xx);
++					IRQF_SHARED | IRQF_ONESHOT,
++					"dra7xx-pcie-main", dra7xx);
+ 	if (ret) {
+ 		dev_err(dev, "failed to request irq\n");
+-		goto err_gpio;
++		goto err_deinit;
+ 	}
+ 
+ 	return 0;
+ 
++err_deinit:
++	if (dra7xx->mode == DW_PCIE_RC_TYPE)
++		dw_pcie_host_deinit(&dra7xx->pci->pp);
++	else
++		dw_pcie_ep_deinit(&dra7xx->pci->ep);
++
+ err_gpio:
+ err_get_sync:
+ 	pm_runtime_put(dev);
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 917c69edee1d54..dbd6d7a2748930 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -958,7 +958,7 @@ static int imx6_pcie_host_init(struct dw_pcie_rp *pp)
+ 		ret = phy_power_on(imx6_pcie->phy);
+ 		if (ret) {
+ 			dev_err(dev, "waiting for PHY ready timeout!\n");
+-			goto err_phy_off;
++			goto err_phy_exit;
+ 		}
+ 	}
+ 
+@@ -973,8 +973,9 @@ static int imx6_pcie_host_init(struct dw_pcie_rp *pp)
+ 	return 0;
+ 
+ err_phy_off:
+-	if (imx6_pcie->phy)
+-		phy_exit(imx6_pcie->phy);
++	phy_power_off(imx6_pcie->phy);
++err_phy_exit:
++	phy_exit(imx6_pcie->phy);
+ err_clk_disable:
+ 	imx6_pcie_clk_disable(imx6_pcie);
+ err_reg_disable:
+@@ -1118,6 +1119,8 @@ static int imx6_add_pcie_ep(struct imx6_pcie *imx6_pcie,
+ 	if (imx6_check_flag(imx6_pcie, IMX6_PCIE_FLAG_SUPPORT_64BIT))
+ 		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ 
++	ep->page_size = imx6_pcie->drvdata->epc_features->align;
++
+ 	ret = dw_pcie_ep_init(ep);
+ 	if (ret) {
+ 		dev_err(dev, "failed to initialize endpoint\n");
+@@ -1578,7 +1581,8 @@ static const struct imx6_pcie_drvdata drvdata[] = {
+ 	},
+ 	[IMX8MM_EP] = {
+ 		.variant = IMX8MM_EP,
+-		.flags = IMX6_PCIE_FLAG_HAS_PHYDRV,
++		.flags = IMX6_PCIE_FLAG_HAS_APP_RESET |
++			 IMX6_PCIE_FLAG_HAS_PHYDRV,
+ 		.mode = DW_PCIE_EP_TYPE,
+ 		.gpr = "fsl,imx8mm-iomuxc-gpr",
+ 		.clk_names = imx8mm_clks,
+@@ -1589,7 +1593,8 @@ static const struct imx6_pcie_drvdata drvdata[] = {
+ 	},
+ 	[IMX8MP_EP] = {
+ 		.variant = IMX8MP_EP,
+-		.flags = IMX6_PCIE_FLAG_HAS_PHYDRV,
++		.flags = IMX6_PCIE_FLAG_HAS_APP_RESET |
++			 IMX6_PCIE_FLAG_HAS_PHYDRV,
+ 		.mode = DW_PCIE_EP_TYPE,
+ 		.gpr = "fsl,imx8mp-iomuxc-gpr",
+ 		.clk_names = imx8mm_clks,
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index 483c9540651353..d911f0e521da04 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -577,7 +577,7 @@ static void ks_pcie_quirk(struct pci_dev *dev)
+ 	 */
+ 	if (pci_match_id(am6_pci_devids, bridge)) {
+ 		bridge_dev = pci_get_host_bridge_device(dev);
+-		if (!bridge_dev && !bridge_dev->parent)
++		if (!bridge_dev || !bridge_dev->parent)
+ 			return;
+ 
+ 		ks_pcie = dev_get_drvdata(bridge_dev->parent);
+diff --git a/drivers/pci/controller/dwc/pcie-kirin.c b/drivers/pci/controller/dwc/pcie-kirin.c
+index d5523f3021024c..deab1e653b9a39 100644
+--- a/drivers/pci/controller/dwc/pcie-kirin.c
++++ b/drivers/pci/controller/dwc/pcie-kirin.c
+@@ -412,12 +412,12 @@ static int kirin_pcie_parse_port(struct kirin_pcie *pcie,
+ 			if (pcie->gpio_id_reset[i] < 0)
+ 				continue;
+ 
+-			pcie->num_slots++;
+-			if (pcie->num_slots > MAX_PCI_SLOTS) {
++			if (pcie->num_slots + 1 >= MAX_PCI_SLOTS) {
+ 				dev_err(dev, "Too many PCI slots!\n");
+ 				ret = -EINVAL;
+ 				goto put_node;
+ 			}
++			pcie->num_slots++;
+ 
+ 			ret = of_pci_get_devfn(child);
+ 			if (ret < 0) {
+diff --git a/drivers/pci/controller/dwc/pcie-qcom-ep.c b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+index 50b1635e3cbb1d..26cab226bccf4e 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom-ep.c
++++ b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+@@ -816,21 +816,15 @@ static int qcom_pcie_ep_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = qcom_pcie_enable_resources(pcie_ep);
+-	if (ret) {
+-		dev_err(dev, "Failed to enable resources: %d\n", ret);
+-		return ret;
+-	}
+-
+ 	ret = dw_pcie_ep_init(&pcie_ep->pci.ep);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to initialize endpoint: %d\n", ret);
+-		goto err_disable_resources;
++		return ret;
+ 	}
+ 
+ 	ret = qcom_pcie_ep_enable_irq_resources(pdev, pcie_ep);
+ 	if (ret)
+-		goto err_disable_resources;
++		goto err_ep_deinit;
+ 
+ 	name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);
+ 	if (!name) {
+@@ -847,8 +841,8 @@ static int qcom_pcie_ep_probe(struct platform_device *pdev)
+ 	disable_irq(pcie_ep->global_irq);
+ 	disable_irq(pcie_ep->perst_irq);
+ 
+-err_disable_resources:
+-	qcom_pcie_disable_resources(pcie_ep);
++err_ep_deinit:
++	dw_pcie_ep_deinit(&pcie_ep->pci.ep);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/pci/controller/pcie-xilinx-nwl.c b/drivers/pci/controller/pcie-xilinx-nwl.c
+index 0408f4d612b5af..7417993f8cff76 100644
+--- a/drivers/pci/controller/pcie-xilinx-nwl.c
++++ b/drivers/pci/controller/pcie-xilinx-nwl.c
+@@ -80,8 +80,8 @@
+ #define MSGF_MISC_SR_NON_FATAL_DEV	BIT(22)
+ #define MSGF_MISC_SR_FATAL_DEV		BIT(23)
+ #define MSGF_MISC_SR_LINK_DOWN		BIT(24)
+-#define MSGF_MSIC_SR_LINK_AUTO_BWIDTH	BIT(25)
+-#define MSGF_MSIC_SR_LINK_BWIDTH	BIT(26)
++#define MSGF_MISC_SR_LINK_AUTO_BWIDTH	BIT(25)
++#define MSGF_MISC_SR_LINK_BWIDTH	BIT(26)
+ 
+ #define MSGF_MISC_SR_MASKALL		(MSGF_MISC_SR_RXMSG_AVAIL | \
+ 					MSGF_MISC_SR_RXMSG_OVER | \
+@@ -96,8 +96,8 @@
+ 					MSGF_MISC_SR_NON_FATAL_DEV | \
+ 					MSGF_MISC_SR_FATAL_DEV | \
+ 					MSGF_MISC_SR_LINK_DOWN | \
+-					MSGF_MSIC_SR_LINK_AUTO_BWIDTH | \
+-					MSGF_MSIC_SR_LINK_BWIDTH)
++					MSGF_MISC_SR_LINK_AUTO_BWIDTH | \
++					MSGF_MISC_SR_LINK_BWIDTH)
+ 
+ /* Legacy interrupt status mask bits */
+ #define MSGF_LEG_SR_INTA		BIT(0)
+@@ -299,10 +299,10 @@ static irqreturn_t nwl_pcie_misc_handler(int irq, void *data)
+ 	if (misc_stat & MSGF_MISC_SR_FATAL_DEV)
+ 		dev_err(dev, "Fatal Error Detected\n");
+ 
+-	if (misc_stat & MSGF_MSIC_SR_LINK_AUTO_BWIDTH)
++	if (misc_stat & MSGF_MISC_SR_LINK_AUTO_BWIDTH)
+ 		dev_info(dev, "Link Autonomous Bandwidth Management Status bit set\n");
+ 
+-	if (misc_stat & MSGF_MSIC_SR_LINK_BWIDTH)
++	if (misc_stat & MSGF_MISC_SR_LINK_BWIDTH)
+ 		dev_info(dev, "Link Bandwidth Management Status bit set\n");
+ 
+ 	/* Clear misc interrupt status */
+@@ -371,7 +371,7 @@ static void nwl_mask_intx_irq(struct irq_data *data)
+ 	u32 mask;
+ 	u32 val;
+ 
+-	mask = 1 << (data->hwirq - 1);
++	mask = 1 << data->hwirq;
+ 	raw_spin_lock_irqsave(&pcie->leg_mask_lock, flags);
+ 	val = nwl_bridge_readl(pcie, MSGF_LEG_MASK);
+ 	nwl_bridge_writel(pcie, (val & (~mask)), MSGF_LEG_MASK);
+@@ -385,7 +385,7 @@ static void nwl_unmask_intx_irq(struct irq_data *data)
+ 	u32 mask;
+ 	u32 val;
+ 
+-	mask = 1 << (data->hwirq - 1);
++	mask = 1 << data->hwirq;
+ 	raw_spin_lock_irqsave(&pcie->leg_mask_lock, flags);
+ 	val = nwl_bridge_readl(pcie, MSGF_LEG_MASK);
+ 	nwl_bridge_writel(pcie, (val | mask), MSGF_LEG_MASK);
+@@ -779,6 +779,7 @@ static int nwl_pcie_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 
+ 	pcie = pci_host_bridge_priv(bridge);
++	platform_set_drvdata(pdev, pcie);
+ 
+ 	pcie->dev = dev;
+ 
+@@ -801,13 +802,13 @@ static int nwl_pcie_probe(struct platform_device *pdev)
+ 	err = nwl_pcie_bridge_init(pcie);
+ 	if (err) {
+ 		dev_err(dev, "HW Initialization failed\n");
+-		return err;
++		goto err_clk;
+ 	}
+ 
+ 	err = nwl_pcie_init_irq_domain(pcie);
+ 	if (err) {
+ 		dev_err(dev, "Failed creating IRQ Domain\n");
+-		return err;
++		goto err_clk;
+ 	}
+ 
+ 	bridge->sysdata = pcie;
+@@ -817,11 +818,24 @@ static int nwl_pcie_probe(struct platform_device *pdev)
+ 		err = nwl_pcie_enable_msi(pcie);
+ 		if (err < 0) {
+ 			dev_err(dev, "failed to enable MSI support: %d\n", err);
+-			return err;
++			goto err_clk;
+ 		}
+ 	}
+ 
+-	return pci_host_probe(bridge);
++	err = pci_host_probe(bridge);
++	if (!err)
++		return 0;
++
++err_clk:
++	clk_disable_unprepare(pcie->clk);
++	return err;
++}
++
++static void nwl_pcie_remove(struct platform_device *pdev)
++{
++	struct nwl_pcie *pcie = platform_get_drvdata(pdev);
++
++	clk_disable_unprepare(pcie->clk);
+ }
+ 
+ static struct platform_driver nwl_pcie_driver = {
+@@ -831,5 +845,6 @@ static struct platform_driver nwl_pcie_driver = {
+ 		.of_match_table = nwl_pcie_of_match,
+ 	},
+ 	.probe = nwl_pcie_probe,
++	.remove_new = nwl_pcie_remove,
+ };
+ builtin_platform_driver(nwl_pcie_driver);
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 8db214d4b1d464..4f77fe122e7d08 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1295,7 +1295,7 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
+ 		if (delay > PCI_RESET_WAIT) {
+ 			if (retrain) {
+ 				retrain = false;
+-				if (pcie_failed_link_retrain(bridge)) {
++				if (pcie_failed_link_retrain(bridge) == 0) {
+ 					delay = 1;
+ 					continue;
+ 				}
+@@ -4647,7 +4647,15 @@ int pcie_retrain_link(struct pci_dev *pdev, bool use_lt)
+ 		pcie_capability_clear_word(pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_RL);
+ 	}
+ 
+-	return pcie_wait_for_link_status(pdev, use_lt, !use_lt);
++	rc = pcie_wait_for_link_status(pdev, use_lt, !use_lt);
++
++	/*
++	 * Clear LBMS after a manual retrain so that the bit can be used
++	 * to track link speed or width changes made by hardware itself
++	 * in attempt to correct unreliable link operation.
++	 */
++	pcie_capability_write_word(pdev, PCI_EXP_LNKSTA, PCI_EXP_LNKSTA_LBMS);
++	return rc;
+ }
+ 
+ /**
+@@ -5598,8 +5606,10 @@ static void pci_bus_restore_locked(struct pci_bus *bus)
+ 
+ 	list_for_each_entry(dev, &bus->devices, bus_list) {
+ 		pci_dev_restore(dev);
+-		if (dev->subordinate)
++		if (dev->subordinate) {
++			pci_bridge_wait_for_secondary_bus(dev, "bus reset");
+ 			pci_bus_restore_locked(dev->subordinate);
++		}
+ 	}
+ }
+ 
+@@ -5633,8 +5643,10 @@ static void pci_slot_restore_locked(struct pci_slot *slot)
+ 		if (!dev->slot || dev->slot != slot)
+ 			continue;
+ 		pci_dev_restore(dev);
+-		if (dev->subordinate)
++		if (dev->subordinate) {
++			pci_bridge_wait_for_secondary_bus(dev, "slot reset");
+ 			pci_bus_restore_locked(dev->subordinate);
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index fd44565c475628..55d76d6802ec88 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -541,7 +541,7 @@ void pci_acs_init(struct pci_dev *dev);
+ int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags);
+ int pci_dev_specific_enable_acs(struct pci_dev *dev);
+ int pci_dev_specific_disable_acs_redir(struct pci_dev *dev);
+-bool pcie_failed_link_retrain(struct pci_dev *dev);
++int pcie_failed_link_retrain(struct pci_dev *dev);
+ #else
+ static inline int pci_dev_specific_acs_enabled(struct pci_dev *dev,
+ 					       u16 acs_flags)
+@@ -556,9 +556,9 @@ static inline int pci_dev_specific_disable_acs_redir(struct pci_dev *dev)
+ {
+ 	return -ENOTTY;
+ }
+-static inline bool pcie_failed_link_retrain(struct pci_dev *dev)
++static inline int pcie_failed_link_retrain(struct pci_dev *dev)
+ {
+-	return false;
++	return -ENOTTY;
+ }
+ #endif
+ 
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 568410e64ce64e..206b76156c051a 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -66,7 +66,7 @@
+  * apply this erratum workaround to any downstream ports as long as they
+  * support Link Active reporting and have the Link Control 2 register.
+  * Restrict the speed to 2.5GT/s then with the Target Link Speed field,
+- * request a retrain and wait 200ms for the data link to go up.
++ * request a retrain and check the result.
+  *
+  * If this turns out successful and we know by the Vendor:Device ID it is
+  * safe to do so, then lift the restriction, letting the devices negotiate
+@@ -74,33 +74,45 @@
+  * firmware may have already arranged and lift it with ports that already
+  * report their data link being up.
+  *
+- * Return TRUE if the link has been successfully retrained, otherwise FALSE.
++ * Otherwise revert the speed to the original setting and request a retrain
++ * again to remove any residual state, ignoring the result as it's supposed
++ * to fail anyway.
++ *
++ * Return 0 if the link has been successfully retrained.  Return an error
++ * if retraining was not needed or we attempted a retrain and it failed.
+  */
+-bool pcie_failed_link_retrain(struct pci_dev *dev)
++int pcie_failed_link_retrain(struct pci_dev *dev)
+ {
+ 	static const struct pci_device_id ids[] = {
+ 		{ PCI_VDEVICE(ASMEDIA, 0x2824) }, /* ASMedia ASM2824 */
+ 		{}
+ 	};
+ 	u16 lnksta, lnkctl2;
++	int ret = -ENOTTY;
+ 
+ 	if (!pci_is_pcie(dev) || !pcie_downstream_port(dev) ||
+ 	    !pcie_cap_has_lnkctl2(dev) || !dev->link_active_reporting)
+-		return false;
++		return ret;
+ 
+ 	pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);
+ 	pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
+ 	if ((lnksta & (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_DLLLA)) ==
+ 	    PCI_EXP_LNKSTA_LBMS) {
++		u16 oldlnkctl2 = lnkctl2;
++
+ 		pci_info(dev, "broken device, retraining non-functional downstream link at 2.5GT/s\n");
+ 
+ 		lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS;
+ 		lnkctl2 |= PCI_EXP_LNKCTL2_TLS_2_5GT;
+ 		pcie_capability_write_word(dev, PCI_EXP_LNKCTL2, lnkctl2);
+ 
+-		if (pcie_retrain_link(dev, false)) {
++		ret = pcie_retrain_link(dev, false);
++		if (ret) {
+ 			pci_info(dev, "retraining failed\n");
+-			return false;
++			pcie_capability_write_word(dev, PCI_EXP_LNKCTL2,
++						   oldlnkctl2);
++			pcie_retrain_link(dev, true);
++			return ret;
+ 		}
+ 
+ 		pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
+@@ -117,13 +129,14 @@ bool pcie_failed_link_retrain(struct pci_dev *dev)
+ 		lnkctl2 |= lnkcap & PCI_EXP_LNKCAP_SLS;
+ 		pcie_capability_write_word(dev, PCI_EXP_LNKCTL2, lnkctl2);
+ 
+-		if (pcie_retrain_link(dev, false)) {
++		ret = pcie_retrain_link(dev, false);
++		if (ret) {
+ 			pci_info(dev, "retraining failed\n");
+-			return false;
++			return ret;
+ 		}
+ 	}
+ 
+-	return true;
++	return ret;
+ }
+ 
+ static ktime_t fixup_debug_start(struct pci_dev *dev,
+diff --git a/drivers/perf/alibaba_uncore_drw_pmu.c b/drivers/perf/alibaba_uncore_drw_pmu.c
+index 38a2947ae8130c..c6ff1bc7d336b8 100644
+--- a/drivers/perf/alibaba_uncore_drw_pmu.c
++++ b/drivers/perf/alibaba_uncore_drw_pmu.c
+@@ -400,7 +400,7 @@ static irqreturn_t ali_drw_pmu_isr(int irq_num, void *data)
+ 			}
+ 
+ 			/* clear common counter intr status */
+-			clr_status = FIELD_PREP(ALI_DRW_PMCOM_CNT_OV_INTR_MASK, 1);
++			clr_status = FIELD_PREP(ALI_DRW_PMCOM_CNT_OV_INTR_MASK, status);
+ 			writel(clr_status,
+ 			       drw_pmu->cfg_base + ALI_DRW_PMU_OV_INTR_CLR);
+ 		}
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index e26ad1d3ed0bb7..058ea798b669b3 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -24,14 +24,6 @@
+ #define CMN_NI_NODE_ID			GENMASK_ULL(31, 16)
+ #define CMN_NI_LOGICAL_ID		GENMASK_ULL(47, 32)
+ 
+-#define CMN_NODEID_DEVID(reg)		((reg) & 3)
+-#define CMN_NODEID_EXT_DEVID(reg)	((reg) & 1)
+-#define CMN_NODEID_PID(reg)		(((reg) >> 2) & 1)
+-#define CMN_NODEID_EXT_PID(reg)		(((reg) >> 1) & 3)
+-#define CMN_NODEID_1x1_PID(reg)		(((reg) >> 2) & 7)
+-#define CMN_NODEID_X(reg, bits)		((reg) >> (3 + (bits)))
+-#define CMN_NODEID_Y(reg, bits)		(((reg) >> 3) & ((1U << (bits)) - 1))
+-
+ #define CMN_CHILD_INFO			0x0080
+ #define CMN_CI_CHILD_COUNT		GENMASK_ULL(15, 0)
+ #define CMN_CI_CHILD_PTR_OFFSET		GENMASK_ULL(31, 16)
+@@ -43,6 +35,9 @@
+ #define CMN_MAX_XPS			(CMN_MAX_DIMENSION * CMN_MAX_DIMENSION)
+ #define CMN_MAX_DTMS			(CMN_MAX_XPS + (CMN_MAX_DIMENSION - 1) * 4)
+ 
++/* Currently XPs are the node type we can have most of; others top out at 128 */
++#define CMN_MAX_NODES_PER_EVENT		CMN_MAX_XPS
++
+ /* The CFG node has various info besides the discovery tree */
+ #define CMN_CFGM_PERIPH_ID_01		0x0008
+ #define CMN_CFGM_PID0_PART_0		GENMASK_ULL(7, 0)
+@@ -78,7 +73,8 @@
+ /* Technically this is 4 bits wide on DNs, but we only use 2 there anyway */
+ #define CMN__PMU_OCCUP1_ID		GENMASK_ULL(34, 32)
+ 
+-/* HN-Ps are weird... */
++/* Some types are designed to coexist with another device in the same node */
++#define CMN_CCLA_PMU_EVENT_SEL		0x008
+ #define CMN_HNP_PMU_EVENT_SEL		0x008
+ 
+ /* DTMs live in the PMU space of XP registers */
+@@ -281,8 +277,11 @@ struct arm_cmn_node {
+ 	u16 id, logid;
+ 	enum cmn_node_type type;
+ 
++	/* XP properties really, but replicated to children for convenience */
+ 	u8 dtm;
+ 	s8 dtc;
++	u8 portid_bits:4;
++	u8 deviceid_bits:4;
+ 	/* DN/HN-F/CXHA */
+ 	struct {
+ 		u8 val : 4;
+@@ -358,49 +357,33 @@ struct arm_cmn {
+ static int arm_cmn_hp_state;
+ 
+ struct arm_cmn_nodeid {
+-	u8 x;
+-	u8 y;
+ 	u8 port;
+ 	u8 dev;
+ };
+ 
+ static int arm_cmn_xyidbits(const struct arm_cmn *cmn)
+ {
+-	return fls((cmn->mesh_x - 1) | (cmn->mesh_y - 1) | 2);
++	return fls((cmn->mesh_x - 1) | (cmn->mesh_y - 1));
+ }
+ 
+-static struct arm_cmn_nodeid arm_cmn_nid(const struct arm_cmn *cmn, u16 id)
++static struct arm_cmn_nodeid arm_cmn_nid(const struct arm_cmn_node *dn)
+ {
+ 	struct arm_cmn_nodeid nid;
+ 
+-	if (cmn->num_xps == 1) {
+-		nid.x = 0;
+-		nid.y = 0;
+-		nid.port = CMN_NODEID_1x1_PID(id);
+-		nid.dev = CMN_NODEID_DEVID(id);
+-	} else {
+-		int bits = arm_cmn_xyidbits(cmn);
+-
+-		nid.x = CMN_NODEID_X(id, bits);
+-		nid.y = CMN_NODEID_Y(id, bits);
+-		if (cmn->ports_used & 0xc) {
+-			nid.port = CMN_NODEID_EXT_PID(id);
+-			nid.dev = CMN_NODEID_EXT_DEVID(id);
+-		} else {
+-			nid.port = CMN_NODEID_PID(id);
+-			nid.dev = CMN_NODEID_DEVID(id);
+-		}
+-	}
++	nid.dev = dn->id & ((1U << dn->deviceid_bits) - 1);
++	nid.port = (dn->id >> dn->deviceid_bits) & ((1U << dn->portid_bits) - 1);
+ 	return nid;
+ }
+ 
+ static struct arm_cmn_node *arm_cmn_node_to_xp(const struct arm_cmn *cmn,
+ 					       const struct arm_cmn_node *dn)
+ {
+-	struct arm_cmn_nodeid nid = arm_cmn_nid(cmn, dn->id);
+-	int xp_idx = cmn->mesh_x * nid.y + nid.x;
++	int id = dn->id >> (dn->portid_bits + dn->deviceid_bits);
++	int bits = arm_cmn_xyidbits(cmn);
++	int x = id >> bits;
++	int y = id & ((1U << bits) - 1);
+ 
+-	return cmn->xps + xp_idx;
++	return cmn->xps + cmn->mesh_x * y + x;
+ }
+ static struct arm_cmn_node *arm_cmn_node(const struct arm_cmn *cmn,
+ 					 enum cmn_node_type type)
+@@ -486,13 +469,13 @@ static const char *arm_cmn_device_type(u8 type)
+ 	}
+ }
+ 
+-static void arm_cmn_show_logid(struct seq_file *s, int x, int y, int p, int d)
++static void arm_cmn_show_logid(struct seq_file *s, const struct arm_cmn_node *xp, int p, int d)
+ {
+ 	struct arm_cmn *cmn = s->private;
+ 	struct arm_cmn_node *dn;
++	u16 id = xp->id | d | (p << xp->deviceid_bits);
+ 
+ 	for (dn = cmn->dns; dn->type; dn++) {
+-		struct arm_cmn_nodeid nid = arm_cmn_nid(cmn, dn->id);
+ 		int pad = dn->logid < 10;
+ 
+ 		if (dn->type == CMN_TYPE_XP)
+@@ -501,7 +484,7 @@ static void arm_cmn_show_logid(struct seq_file *s, int x, int y, int p, int d)
+ 		if (dn->type < CMN_TYPE_HNI)
+ 			continue;
+ 
+-		if (nid.x != x || nid.y != y || nid.port != p || nid.dev != d)
++		if (dn->id != id)
+ 			continue;
+ 
+ 		seq_printf(s, " %*c#%-*d  |", pad + 1, ' ', 3 - pad, dn->logid);
+@@ -522,6 +505,7 @@ static int arm_cmn_map_show(struct seq_file *s, void *data)
+ 	y = cmn->mesh_y;
+ 	while (y--) {
+ 		int xp_base = cmn->mesh_x * y;
++		struct arm_cmn_node *xp = cmn->xps + xp_base;
+ 		u8 port[CMN_MAX_PORTS][CMN_MAX_DIMENSION];
+ 
+ 		for (x = 0; x < cmn->mesh_x; x++)
+@@ -529,16 +513,14 @@ static int arm_cmn_map_show(struct seq_file *s, void *data)
+ 
+ 		seq_printf(s, "\n%-2d   |", y);
+ 		for (x = 0; x < cmn->mesh_x; x++) {
+-			struct arm_cmn_node *xp = cmn->xps + xp_base + x;
+-
+ 			for (p = 0; p < CMN_MAX_PORTS; p++)
+-				port[p][x] = arm_cmn_device_connect_info(cmn, xp, p);
++				port[p][x] = arm_cmn_device_connect_info(cmn, xp + x, p);
+ 			seq_printf(s, " XP #%-3d|", xp_base + x);
+ 		}
+ 
+ 		seq_puts(s, "\n     |");
+ 		for (x = 0; x < cmn->mesh_x; x++) {
+-			s8 dtc = cmn->xps[xp_base + x].dtc;
++			s8 dtc = xp[x].dtc;
+ 
+ 			if (dtc < 0)
+ 				seq_puts(s, " DTC ?? |");
+@@ -555,10 +537,10 @@ static int arm_cmn_map_show(struct seq_file *s, void *data)
+ 				seq_puts(s, arm_cmn_device_type(port[p][x]));
+ 			seq_puts(s, "\n    0|");
+ 			for (x = 0; x < cmn->mesh_x; x++)
+-				arm_cmn_show_logid(s, x, y, p, 0);
++				arm_cmn_show_logid(s, xp + x, p, 0);
+ 			seq_puts(s, "\n    1|");
+ 			for (x = 0; x < cmn->mesh_x; x++)
+-				arm_cmn_show_logid(s, x, y, p, 1);
++				arm_cmn_show_logid(s, xp + x, p, 1);
+ 		}
+ 		seq_puts(s, "\n-----+");
+ 	}
+@@ -586,7 +568,7 @@ static void arm_cmn_debugfs_init(struct arm_cmn *cmn, int id) {}
+ 
+ struct arm_cmn_hw_event {
+ 	struct arm_cmn_node *dn;
+-	u64 dtm_idx[4];
++	u64 dtm_idx[DIV_ROUND_UP(CMN_MAX_NODES_PER_EVENT * 2, 64)];
+ 	s8 dtc_idx[CMN_MAX_DTCS];
+ 	u8 num_dns;
+ 	u8 dtm_offset;
+@@ -1753,10 +1735,7 @@ static int arm_cmn_event_init(struct perf_event *event)
+ 	}
+ 
+ 	if (!hw->num_dns) {
+-		struct arm_cmn_nodeid nid = arm_cmn_nid(cmn, nodeid);
+-
+-		dev_dbg(cmn->dev, "invalid node 0x%x (%d,%d,%d,%d) type 0x%x\n",
+-			nodeid, nid.x, nid.y, nid.port, nid.dev, type);
++		dev_dbg(cmn->dev, "invalid node 0x%x type 0x%x\n", nodeid, type);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1851,7 +1830,7 @@ static int arm_cmn_event_add(struct perf_event *event, int flags)
+ 			dtm->wp_event[wp_idx] = hw->dtc_idx[d];
+ 			writel_relaxed(cfg, dtm->base + CMN_DTM_WPn_CONFIG(wp_idx));
+ 		} else {
+-			struct arm_cmn_nodeid nid = arm_cmn_nid(cmn, dn->id);
++			struct arm_cmn_nodeid nid = arm_cmn_nid(dn);
+ 
+ 			if (cmn->multi_dtm)
+ 				nid.port %= 2;
+@@ -2098,10 +2077,12 @@ static int arm_cmn_init_dtcs(struct arm_cmn *cmn)
+ 			continue;
+ 
+ 		xp = arm_cmn_node_to_xp(cmn, dn);
++		dn->portid_bits = xp->portid_bits;
++		dn->deviceid_bits = xp->deviceid_bits;
+ 		dn->dtc = xp->dtc;
+ 		dn->dtm = xp->dtm;
+ 		if (cmn->multi_dtm)
+-			dn->dtm += arm_cmn_nid(cmn, dn->id).port / 2;
++			dn->dtm += arm_cmn_nid(dn).port / 2;
+ 
+ 		if (dn->type == CMN_TYPE_DTC) {
+ 			int err = arm_cmn_init_dtc(cmn, dn, dtc_idx++);
+@@ -2271,18 +2252,27 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
+ 		arm_cmn_init_dtm(dtm++, xp, 0);
+ 		/*
+ 		 * Keeping track of connected ports will let us filter out
+-		 * unnecessary XP events easily. We can also reliably infer the
+-		 * "extra device ports" configuration for the node ID format
+-		 * from this, since in that case we will see at least one XP
+-		 * with port 2 connected, for the HN-D.
++		 * unnecessary XP events easily, and also infer the per-XP
++		 * part of the node ID format.
+ 		 */
+ 		for (int p = 0; p < CMN_MAX_PORTS; p++)
+ 			if (arm_cmn_device_connect_info(cmn, xp, p))
+ 				xp_ports |= BIT(p);
+ 
+-		if (cmn->multi_dtm && (xp_ports & 0xc))
++		if (cmn->num_xps == 1) {
++			xp->portid_bits = 3;
++			xp->deviceid_bits = 2;
++		} else if (xp_ports > 0x3) {
++			xp->portid_bits = 2;
++			xp->deviceid_bits = 1;
++		} else {
++			xp->portid_bits = 1;
++			xp->deviceid_bits = 2;
++		}
++
++		if (cmn->multi_dtm && (xp_ports > 0x3))
+ 			arm_cmn_init_dtm(dtm++, xp, 1);
+-		if (cmn->multi_dtm && (xp_ports & 0x30))
++		if (cmn->multi_dtm && (xp_ports > 0xf))
+ 			arm_cmn_init_dtm(dtm++, xp, 2);
+ 
+ 		cmn->ports_used |= xp_ports;
+@@ -2337,10 +2327,13 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
+ 			case CMN_TYPE_CXHA:
+ 			case CMN_TYPE_CCRA:
+ 			case CMN_TYPE_CCHA:
+-			case CMN_TYPE_CCLA:
+ 			case CMN_TYPE_HNS:
+ 				dn++;
+ 				break;
++			case CMN_TYPE_CCLA:
++				dn->pmu_base += CMN_CCLA_PMU_EVENT_SEL;
++				dn++;
++				break;
+ 			/* Nothing to see here */
+ 			case CMN_TYPE_MPAM_S:
+ 			case CMN_TYPE_MPAM_NS:
+@@ -2358,7 +2351,7 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
+ 			case CMN_TYPE_HNP:
+ 			case CMN_TYPE_CCLA_RNI:
+ 				dn[1] = dn[0];
+-				dn[0].pmu_base += CMN_HNP_PMU_EVENT_SEL;
++				dn[0].pmu_base += CMN_CCLA_PMU_EVENT_SEL;
+ 				dn[1].type = arm_cmn_subtype(dn->type);
+ 				dn += 2;
+ 				break;
+diff --git a/drivers/perf/dwc_pcie_pmu.c b/drivers/perf/dwc_pcie_pmu.c
+index c5e328f2384194..f205ecad2e4c06 100644
+--- a/drivers/perf/dwc_pcie_pmu.c
++++ b/drivers/perf/dwc_pcie_pmu.c
+@@ -556,10 +556,10 @@ static int dwc_pcie_register_dev(struct pci_dev *pdev)
+ {
+ 	struct platform_device *plat_dev;
+ 	struct dwc_pcie_dev_info *dev_info;
+-	u32 bdf;
++	u32 sbdf;
+ 
+-	bdf = PCI_DEVID(pdev->bus->number, pdev->devfn);
+-	plat_dev = platform_device_register_data(NULL, "dwc_pcie_pmu", bdf,
++	sbdf = (pci_domain_nr(pdev->bus) << 16) | PCI_DEVID(pdev->bus->number, pdev->devfn);
++	plat_dev = platform_device_register_data(NULL, "dwc_pcie_pmu", sbdf,
+ 						 pdev, sizeof(*pdev));
+ 
+ 	if (IS_ERR(plat_dev))
+@@ -611,15 +611,15 @@ static int dwc_pcie_pmu_probe(struct platform_device *plat_dev)
+ 	struct pci_dev *pdev = plat_dev->dev.platform_data;
+ 	struct dwc_pcie_pmu *pcie_pmu;
+ 	char *name;
+-	u32 bdf, val;
++	u32 sbdf, val;
+ 	u16 vsec;
+ 	int ret;
+ 
+ 	vsec = pci_find_vsec_capability(pdev, pdev->vendor,
+ 					DWC_PCIE_VSEC_RAS_DES_ID);
+ 	pci_read_config_dword(pdev, vsec + PCI_VNDR_HEADER, &val);
+-	bdf = PCI_DEVID(pdev->bus->number, pdev->devfn);
+-	name = devm_kasprintf(&plat_dev->dev, GFP_KERNEL, "dwc_rootport_%x", bdf);
++	sbdf = plat_dev->id;
++	name = devm_kasprintf(&plat_dev->dev, GFP_KERNEL, "dwc_rootport_%x", sbdf);
+ 	if (!name)
+ 		return -ENOMEM;
+ 
+@@ -650,7 +650,7 @@ static int dwc_pcie_pmu_probe(struct platform_device *plat_dev)
+ 	ret = cpuhp_state_add_instance(dwc_pcie_pmu_hp_state,
+ 				       &pcie_pmu->cpuhp_node);
+ 	if (ret) {
+-		pci_err(pdev, "Error %d registering hotplug @%x\n", ret, bdf);
++		pci_err(pdev, "Error %d registering hotplug @%x\n", ret, sbdf);
+ 		return ret;
+ 	}
+ 
+@@ -663,7 +663,7 @@ static int dwc_pcie_pmu_probe(struct platform_device *plat_dev)
+ 
+ 	ret = perf_pmu_register(&pcie_pmu->pmu, name, -1);
+ 	if (ret) {
+-		pci_err(pdev, "Error %d registering PMU @%x\n", ret, bdf);
++		pci_err(pdev, "Error %d registering PMU @%x\n", ret, sbdf);
+ 		return ret;
+ 	}
+ 	ret = devm_add_action_or_reset(&plat_dev->dev, dwc_pcie_unregister_pmu,
+@@ -726,7 +726,6 @@ static struct platform_driver dwc_pcie_pmu_driver = {
+ static int __init dwc_pcie_pmu_init(void)
+ {
+ 	struct pci_dev *pdev = NULL;
+-	bool found = false;
+ 	int ret;
+ 
+ 	for_each_pci_dev(pdev) {
+@@ -738,11 +737,7 @@ static int __init dwc_pcie_pmu_init(void)
+ 			pci_dev_put(pdev);
+ 			return ret;
+ 		}
+-
+-		found = true;
+ 	}
+-	if (!found)
+-		return -ENODEV;
+ 
+ 	ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
+ 				      "perf/dwc_pcie_pmu:online",
+diff --git a/drivers/perf/hisilicon/hisi_pcie_pmu.c b/drivers/perf/hisilicon/hisi_pcie_pmu.c
+index f06027574a241a..f7d6c59d993016 100644
+--- a/drivers/perf/hisilicon/hisi_pcie_pmu.c
++++ b/drivers/perf/hisilicon/hisi_pcie_pmu.c
+@@ -208,7 +208,7 @@ static void hisi_pcie_pmu_writeq(struct hisi_pcie_pmu *pcie_pmu, u32 reg_offset,
+ static u64 hisi_pcie_pmu_get_event_ctrl_val(struct perf_event *event)
+ {
+ 	u64 port, trig_len, thr_len, len_mode;
+-	u64 reg = HISI_PCIE_INIT_SET;
++	u64 reg = 0;
+ 
+ 	/* Config HISI_PCIE_EVENT_CTRL according to event. */
+ 	reg |= FIELD_PREP(HISI_PCIE_EVENT_M, hisi_pcie_get_real_event(event));
+@@ -452,10 +452,24 @@ static void hisi_pcie_pmu_set_period(struct perf_event *event)
+ 	struct hisi_pcie_pmu *pcie_pmu = to_pcie_pmu(event->pmu);
+ 	struct hw_perf_event *hwc = &event->hw;
+ 	int idx = hwc->idx;
++	u64 orig_cnt, cnt;
++
++	orig_cnt = hisi_pcie_pmu_read_counter(event);
+ 
+ 	local64_set(&hwc->prev_count, HISI_PCIE_INIT_VAL);
+ 	hisi_pcie_pmu_writeq(pcie_pmu, HISI_PCIE_CNT, idx, HISI_PCIE_INIT_VAL);
+ 	hisi_pcie_pmu_writeq(pcie_pmu, HISI_PCIE_EXT_CNT, idx, HISI_PCIE_INIT_VAL);
++
++	/*
++	 * The counter maybe unwritable if the target event is unsupported.
++	 * Check this by comparing the counts after setting the period. If
++	 * the counts stay unchanged after setting the period then update
++	 * the hwc->prev_count correctly. Otherwise the final counts user
++	 * get maybe totally wrong.
++	 */
++	cnt = hisi_pcie_pmu_read_counter(event);
++	if (orig_cnt == cnt)
++		local64_set(&hwc->prev_count, cnt);
+ }
+ 
+ static void hisi_pcie_pmu_enable_counter(struct hisi_pcie_pmu *pcie_pmu, struct hw_perf_event *hwc)
+diff --git a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+index 946c01210ac8c0..3bd9b62b23dcc2 100644
+--- a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
++++ b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+@@ -15,6 +15,7 @@
+ #include <linux/of_platform.h>
+ #include <linux/phy/phy.h>
+ #include <linux/platform_device.h>
++#include <linux/pm_runtime.h>
+ #include <linux/rational.h>
+ #include <linux/regmap.h>
+ #include <linux/reset.h>
+diff --git a/drivers/pinctrl/mvebu/pinctrl-dove.c b/drivers/pinctrl/mvebu/pinctrl-dove.c
+index 1947da73e51210..dce601d993728c 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-dove.c
++++ b/drivers/pinctrl/mvebu/pinctrl-dove.c
+@@ -767,7 +767,7 @@ static int dove_pinctrl_probe(struct platform_device *pdev)
+ 	struct resource fb_res;
+ 	struct mvebu_mpp_ctrl_data *mpp_data;
+ 	void __iomem *base;
+-	int i;
++	int i, ret;
+ 
+ 	pdev->dev.platform_data = (void *)device_get_match_data(&pdev->dev);
+ 
+@@ -783,13 +783,17 @@ static int dove_pinctrl_probe(struct platform_device *pdev)
+ 	clk_prepare_enable(clk);
+ 
+ 	base = devm_platform_get_and_ioremap_resource(pdev, 0, &mpp_res);
+-	if (IS_ERR(base))
+-		return PTR_ERR(base);
++	if (IS_ERR(base)) {
++		ret = PTR_ERR(base);
++		goto err_probe;
++	}
+ 
+ 	mpp_data = devm_kcalloc(&pdev->dev, dove_pinctrl_info.ncontrols,
+ 				sizeof(*mpp_data), GFP_KERNEL);
+-	if (!mpp_data)
+-		return -ENOMEM;
++	if (!mpp_data) {
++		ret = -ENOMEM;
++		goto err_probe;
++	}
+ 
+ 	dove_pinctrl_info.control_data = mpp_data;
+ 	for (i = 0; i < ARRAY_SIZE(dove_mpp_controls); i++)
+@@ -808,8 +812,10 @@ static int dove_pinctrl_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	mpp4_base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(mpp4_base))
+-		return PTR_ERR(mpp4_base);
++	if (IS_ERR(mpp4_base)) {
++		ret = PTR_ERR(mpp4_base);
++		goto err_probe;
++	}
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
+ 	if (!res) {
+@@ -820,8 +826,10 @@ static int dove_pinctrl_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	pmu_base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(pmu_base))
+-		return PTR_ERR(pmu_base);
++	if (IS_ERR(pmu_base)) {
++		ret = PTR_ERR(pmu_base);
++		goto err_probe;
++	}
+ 
+ 	gconfmap = syscon_regmap_lookup_by_compatible("marvell,dove-global-config");
+ 	if (IS_ERR(gconfmap)) {
+@@ -831,12 +839,17 @@ static int dove_pinctrl_probe(struct platform_device *pdev)
+ 		adjust_resource(&fb_res,
+ 			(mpp_res->start & INT_REGS_MASK) + GC_REGS_OFFS, 0x14);
+ 		gc_base = devm_ioremap_resource(&pdev->dev, &fb_res);
+-		if (IS_ERR(gc_base))
+-			return PTR_ERR(gc_base);
++		if (IS_ERR(gc_base)) {
++			ret = PTR_ERR(gc_base);
++			goto err_probe;
++		}
++
+ 		gconfmap = devm_regmap_init_mmio(&pdev->dev,
+ 						 gc_base, &gc_regmap_config);
+-		if (IS_ERR(gconfmap))
+-			return PTR_ERR(gconfmap);
++		if (IS_ERR(gconfmap)) {
++			ret = PTR_ERR(gconfmap);
++			goto err_probe;
++		}
+ 	}
+ 
+ 	/* Warn on any missing DT resource */
+@@ -844,6 +857,9 @@ static int dove_pinctrl_probe(struct platform_device *pdev)
+ 		dev_warn(&pdev->dev, FW_BUG "Missing pinctrl regs in DTB. Please update your firmware.\n");
+ 
+ 	return mvebu_pinctrl_probe(pdev);
++err_probe:
++	clk_disable_unprepare(clk);
++	return ret;
+ }
+ 
+ static struct platform_driver dove_pinctrl_driver = {
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index 4da3c3f422b691..2ec599e383e4b2 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -1913,7 +1913,8 @@ static int pcs_probe(struct platform_device *pdev)
+ 
+ 	dev_info(pcs->dev, "%i pins, size %u\n", pcs->desc.npins, pcs->size);
+ 
+-	if (pinctrl_enable(pcs->pctl))
++	ret = pinctrl_enable(pcs->pctl);
++	if (ret)
+ 		goto free;
+ 
+ 	return 0;
+diff --git a/drivers/pinctrl/ti/pinctrl-ti-iodelay.c b/drivers/pinctrl/ti/pinctrl-ti-iodelay.c
+index ef97586385019a..451801acdc4038 100644
+--- a/drivers/pinctrl/ti/pinctrl-ti-iodelay.c
++++ b/drivers/pinctrl/ti/pinctrl-ti-iodelay.c
+@@ -273,6 +273,22 @@ static int ti_iodelay_pinconf_set(struct ti_iodelay_device *iod,
+ 	return r;
+ }
+ 
++/**
++ * ti_iodelay_pinconf_deinit_dev() - deinit the iodelay device
++ * @data:	IODelay device
++ *
++ * Deinitialize the IODelay device (basically just lock the region back up.
++ */
++static void ti_iodelay_pinconf_deinit_dev(void *data)
++{
++	struct ti_iodelay_device *iod = data;
++	const struct ti_iodelay_reg_data *reg = iod->reg_data;
++
++	/* lock the iodelay region back again */
++	regmap_update_bits(iod->regmap, reg->reg_global_lock_offset,
++			   reg->global_lock_mask, reg->global_lock_val);
++}
++
+ /**
+  * ti_iodelay_pinconf_init_dev() - Initialize IODelay device
+  * @iod: iodelay device
+@@ -295,6 +311,11 @@ static int ti_iodelay_pinconf_init_dev(struct ti_iodelay_device *iod)
+ 	if (r)
+ 		return r;
+ 
++	r = devm_add_action_or_reset(iod->dev, ti_iodelay_pinconf_deinit_dev,
++				     iod);
++	if (r)
++		return r;
++
+ 	/* Read up Recalibration sequence done by bootloader */
+ 	r = regmap_read(iod->regmap, reg->reg_refclk_offset, &val);
+ 	if (r)
+@@ -353,21 +374,6 @@ static int ti_iodelay_pinconf_init_dev(struct ti_iodelay_device *iod)
+ 	return 0;
+ }
+ 
+-/**
+- * ti_iodelay_pinconf_deinit_dev() - deinit the iodelay device
+- * @iod:	IODelay device
+- *
+- * Deinitialize the IODelay device (basically just lock the region back up.
+- */
+-static void ti_iodelay_pinconf_deinit_dev(struct ti_iodelay_device *iod)
+-{
+-	const struct ti_iodelay_reg_data *reg = iod->reg_data;
+-
+-	/* lock the iodelay region back again */
+-	regmap_update_bits(iod->regmap, reg->reg_global_lock_offset,
+-			   reg->global_lock_mask, reg->global_lock_val);
+-}
+-
+ /**
+  * ti_iodelay_get_pingroup() - Find the group mapped by a group selector
+  * @iod: iodelay device
+@@ -822,53 +828,48 @@ MODULE_DEVICE_TABLE(of, ti_iodelay_of_match);
+ static int ti_iodelay_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+-	struct device_node *np = of_node_get(dev->of_node);
++	struct device_node *np __free(device_node) = of_node_get(dev->of_node);
+ 	struct resource *res;
+ 	struct ti_iodelay_device *iod;
+-	int ret = 0;
++	int ret;
+ 
+ 	if (!np) {
+-		ret = -EINVAL;
+ 		dev_err(dev, "No OF node\n");
+-		goto exit_out;
++		return -EINVAL;
+ 	}
+ 
+ 	iod = devm_kzalloc(dev, sizeof(*iod), GFP_KERNEL);
+-	if (!iod) {
+-		ret = -ENOMEM;
+-		goto exit_out;
+-	}
++	if (!iod)
++		return -ENOMEM;
++
+ 	iod->dev = dev;
+ 	iod->reg_data = device_get_match_data(dev);
+ 	if (!iod->reg_data) {
+-		ret = -EINVAL;
+ 		dev_err(dev, "No DATA match\n");
+-		goto exit_out;
++		return -EINVAL;
+ 	}
+ 
+ 	/* So far We can assume there is only 1 bank of registers */
+ 	iod->reg_base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+-	if (IS_ERR(iod->reg_base)) {
+-		ret = PTR_ERR(iod->reg_base);
+-		goto exit_out;
+-	}
++	if (IS_ERR(iod->reg_base))
++		return PTR_ERR(iod->reg_base);
++
+ 	iod->phys_base = res->start;
+ 
+ 	iod->regmap = devm_regmap_init_mmio(dev, iod->reg_base,
+ 					    iod->reg_data->regmap_config);
+ 	if (IS_ERR(iod->regmap)) {
+ 		dev_err(dev, "Regmap MMIO init failed.\n");
+-		ret = PTR_ERR(iod->regmap);
+-		goto exit_out;
++		return PTR_ERR(iod->regmap);
+ 	}
+ 
+ 	ret = ti_iodelay_pinconf_init_dev(iod);
+ 	if (ret)
+-		goto exit_out;
++		return ret;
+ 
+ 	ret = ti_iodelay_alloc_pins(dev, iod, res->start);
+ 	if (ret)
+-		goto exit_out;
++		return ret;
+ 
+ 	iod->desc.pctlops = &ti_iodelay_pinctrl_ops;
+ 	/* no pinmux ops - we are pinconf */
+@@ -879,38 +880,14 @@ static int ti_iodelay_probe(struct platform_device *pdev)
+ 	ret = devm_pinctrl_register_and_init(dev, &iod->desc, iod, &iod->pctl);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register pinctrl\n");
+-		goto exit_out;
++		return ret;
+ 	}
+ 
+-	platform_set_drvdata(pdev, iod);
+-
+-	ret = pinctrl_enable(iod->pctl);
+-	if (ret)
+-		goto exit_out;
+-
+-	return 0;
+-
+-exit_out:
+-	of_node_put(np);
+-	return ret;
+-}
+-
+-/**
+- * ti_iodelay_remove() - standard remove
+- * @pdev: platform device
+- */
+-static void ti_iodelay_remove(struct platform_device *pdev)
+-{
+-	struct ti_iodelay_device *iod = platform_get_drvdata(pdev);
+-
+-	ti_iodelay_pinconf_deinit_dev(iod);
+-
+-	/* Expect other allocations to be freed by devm */
++	return pinctrl_enable(iod->pctl);
+ }
+ 
+ static struct platform_driver ti_iodelay_driver = {
+ 	.probe = ti_iodelay_probe,
+-	.remove_new = ti_iodelay_remove,
+ 	.driver = {
+ 		   .name = DRIVER_NAME,
+ 		   .of_match_table = ti_iodelay_of_match,
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index 490815917adec8..32293df50bb1cc 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -422,13 +422,14 @@ static ssize_t camera_power_show(struct device *dev,
+ 				 char *buf)
+ {
+ 	struct ideapad_private *priv = dev_get_drvdata(dev);
+-	unsigned long result;
++	unsigned long result = 0;
+ 	int err;
+ 
+-	scoped_guard(mutex, &priv->vpc_mutex)
++	scoped_guard(mutex, &priv->vpc_mutex) {
+ 		err = read_ec_data(priv->adev->handle, VPCCMD_R_CAMERA, &result);
+-	if (err)
+-		return err;
++		if (err)
++			return err;
++	}
+ 
+ 	return sysfs_emit(buf, "%d\n", !!result);
+ }
+@@ -445,10 +446,11 @@ static ssize_t camera_power_store(struct device *dev,
+ 	if (err)
+ 		return err;
+ 
+-	scoped_guard(mutex, &priv->vpc_mutex)
++	scoped_guard(mutex, &priv->vpc_mutex) {
+ 		err = write_ec_cmd(priv->adev->handle, VPCCMD_W_CAMERA, state);
+-	if (err)
+-		return err;
++		if (err)
++			return err;
++	}
+ 
+ 	return count;
+ }
+@@ -496,13 +498,14 @@ static ssize_t fan_mode_show(struct device *dev,
+ 			     char *buf)
+ {
+ 	struct ideapad_private *priv = dev_get_drvdata(dev);
+-	unsigned long result;
++	unsigned long result = 0;
+ 	int err;
+ 
+-	scoped_guard(mutex, &priv->vpc_mutex)
++	scoped_guard(mutex, &priv->vpc_mutex) {
+ 		err = read_ec_data(priv->adev->handle, VPCCMD_R_FAN, &result);
+-	if (err)
+-		return err;
++		if (err)
++			return err;
++	}
+ 
+ 	return sysfs_emit(buf, "%lu\n", result);
+ }
+@@ -522,10 +525,11 @@ static ssize_t fan_mode_store(struct device *dev,
+ 	if (state > 4 || state == 3)
+ 		return -EINVAL;
+ 
+-	scoped_guard(mutex, &priv->vpc_mutex)
++	scoped_guard(mutex, &priv->vpc_mutex) {
+ 		err = write_ec_cmd(priv->adev->handle, VPCCMD_W_FAN, state);
+-	if (err)
+-		return err;
++		if (err)
++			return err;
++	}
+ 
+ 	return count;
+ }
+@@ -605,13 +609,14 @@ static ssize_t touchpad_show(struct device *dev,
+ 			     char *buf)
+ {
+ 	struct ideapad_private *priv = dev_get_drvdata(dev);
+-	unsigned long result;
++	unsigned long result = 0;
+ 	int err;
+ 
+-	scoped_guard(mutex, &priv->vpc_mutex)
++	scoped_guard(mutex, &priv->vpc_mutex) {
+ 		err = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &result);
+-	if (err)
+-		return err;
++		if (err)
++			return err;
++	}
+ 
+ 	priv->r_touchpad_val = result;
+ 
+@@ -630,10 +635,11 @@ static ssize_t touchpad_store(struct device *dev,
+ 	if (err)
+ 		return err;
+ 
+-	scoped_guard(mutex, &priv->vpc_mutex)
++	scoped_guard(mutex, &priv->vpc_mutex) {
+ 		err = write_ec_cmd(priv->adev->handle, VPCCMD_W_TOUCHPAD, state);
+-	if (err)
+-		return err;
++		if (err)
++			return err;
++	}
+ 
+ 	priv->r_touchpad_val = state;
+ 
+diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
+index 623d15b68707ec..0ab6008e863e82 100644
+--- a/drivers/pmdomain/core.c
++++ b/drivers/pmdomain/core.c
+@@ -3153,7 +3153,7 @@ static int genpd_summary_one(struct seq_file *s,
+ 	else
+ 		snprintf(state, sizeof(state), "%s",
+ 			 status_lookup[genpd->status]);
+-	seq_printf(s, "%-30s  %-50s %u", genpd->name, state, genpd->performance_state);
++	seq_printf(s, "%-30s  %-49s  %u", genpd->name, state, genpd->performance_state);
+ 
+ 	/*
+ 	 * Modifications on the list require holding locks on both
+diff --git a/drivers/power/supply/axp20x_battery.c b/drivers/power/supply/axp20x_battery.c
+index 6ac5c80cfda214..7520b599eb3d17 100644
+--- a/drivers/power/supply/axp20x_battery.c
++++ b/drivers/power/supply/axp20x_battery.c
+@@ -303,11 +303,11 @@ static int axp20x_battery_get_prop(struct power_supply *psy,
+ 		val->intval = reg & AXP209_FG_PERCENT;
+ 		break;
+ 
+-	case POWER_SUPPLY_PROP_VOLTAGE_MAX_DESIGN:
++	case POWER_SUPPLY_PROP_VOLTAGE_MAX:
+ 		return axp20x_batt->data->get_max_voltage(axp20x_batt,
+ 							  &val->intval);
+ 
+-	case POWER_SUPPLY_PROP_VOLTAGE_MIN_DESIGN:
++	case POWER_SUPPLY_PROP_VOLTAGE_MIN:
+ 		ret = regmap_read(axp20x_batt->regmap, AXP20X_V_OFF, &reg);
+ 		if (ret)
+ 			return ret;
+@@ -455,10 +455,10 @@ static int axp20x_battery_set_prop(struct power_supply *psy,
+ 	struct axp20x_batt_ps *axp20x_batt = power_supply_get_drvdata(psy);
+ 
+ 	switch (psp) {
+-	case POWER_SUPPLY_PROP_VOLTAGE_MIN_DESIGN:
++	case POWER_SUPPLY_PROP_VOLTAGE_MIN:
+ 		return axp20x_set_voltage_min_design(axp20x_batt, val->intval);
+ 
+-	case POWER_SUPPLY_PROP_VOLTAGE_MAX_DESIGN:
++	case POWER_SUPPLY_PROP_VOLTAGE_MAX:
+ 		return axp20x_batt->data->set_max_voltage(axp20x_batt, val->intval);
+ 
+ 	case POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT:
+@@ -493,8 +493,8 @@ static enum power_supply_property axp20x_battery_props[] = {
+ 	POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT,
+ 	POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT_MAX,
+ 	POWER_SUPPLY_PROP_HEALTH,
+-	POWER_SUPPLY_PROP_VOLTAGE_MAX_DESIGN,
+-	POWER_SUPPLY_PROP_VOLTAGE_MIN_DESIGN,
++	POWER_SUPPLY_PROP_VOLTAGE_MAX,
++	POWER_SUPPLY_PROP_VOLTAGE_MIN,
+ 	POWER_SUPPLY_PROP_CAPACITY,
+ };
+ 
+@@ -502,8 +502,8 @@ static int axp20x_battery_prop_writeable(struct power_supply *psy,
+ 					 enum power_supply_property psp)
+ {
+ 	return psp == POWER_SUPPLY_PROP_STATUS ||
+-	       psp == POWER_SUPPLY_PROP_VOLTAGE_MIN_DESIGN ||
+-	       psp == POWER_SUPPLY_PROP_VOLTAGE_MAX_DESIGN ||
++	       psp == POWER_SUPPLY_PROP_VOLTAGE_MIN ||
++	       psp == POWER_SUPPLY_PROP_VOLTAGE_MAX ||
+ 	       psp == POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT ||
+ 	       psp == POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT_MAX;
+ }
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index e7d37e422c3f6e..496c3e1f2ee6d6 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -853,7 +853,10 @@ static void max17042_set_soc_threshold(struct max17042_chip *chip, u16 off)
+ 	/* program interrupt thresholds such that we should
+ 	 * get interrupt for every 'off' perc change in the soc
+ 	 */
+-	regmap_read(map, MAX17042_RepSOC, &soc);
++	if (chip->pdata->enable_current_sense)
++		regmap_read(map, MAX17042_RepSOC, &soc);
++	else
++		regmap_read(map, MAX17042_VFSOC, &soc);
+ 	soc >>= 8;
+ 	soc_tr = (soc + off) << 8;
+ 	if (off < soc)
+diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
+index 8e7f4c0473ab94..9bf9ed9a6a54f7 100644
+--- a/drivers/powercap/intel_rapl_common.c
++++ b/drivers/powercap/intel_rapl_common.c
+@@ -740,7 +740,7 @@ static struct rapl_primitive_info *get_rpi(struct rapl_package *rp, int prim)
+ {
+ 	struct rapl_primitive_info *rpi = rp->priv->rpi;
+ 
+-	if (prim < 0 || prim > NR_RAPL_PRIMITIVES || !rpi)
++	if (prim < 0 || prim >= NR_RAPL_PRIMITIVES || !rpi)
+ 		return NULL;
+ 
+ 	return &rpi[prim];
+diff --git a/drivers/pps/clients/pps_parport.c b/drivers/pps/clients/pps_parport.c
+index af972cdc04b53c..53e9c304ae0a7a 100644
+--- a/drivers/pps/clients/pps_parport.c
++++ b/drivers/pps/clients/pps_parport.c
+@@ -149,6 +149,9 @@ static void parport_attach(struct parport *port)
+ 	}
+ 
+ 	index = ida_alloc(&pps_client_index, GFP_KERNEL);
++	if (index < 0)
++		goto err_free_device;
++
+ 	memset(&pps_client_cb, 0, sizeof(pps_client_cb));
+ 	pps_client_cb.private = device;
+ 	pps_client_cb.irq_func = parport_irq;
+@@ -159,7 +162,7 @@ static void parport_attach(struct parport *port)
+ 						    index);
+ 	if (!device->pardev) {
+ 		pr_err("couldn't register with %s\n", port->name);
+-		goto err_free;
++		goto err_free_ida;
+ 	}
+ 
+ 	if (parport_claim_or_block(device->pardev) < 0) {
+@@ -187,8 +190,9 @@ static void parport_attach(struct parport *port)
+ 	parport_release(device->pardev);
+ err_unregister_dev:
+ 	parport_unregister_device(device->pardev);
+-err_free:
++err_free_ida:
+ 	ida_free(&pps_client_index, index);
++err_free_device:
+ 	kfree(device);
+ }
+ 
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index 03afc160fc72ce..86b680adbf01c7 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -777,7 +777,7 @@ int of_regulator_bulk_get_all(struct device *dev, struct device_node *np,
+ 			name[i] = '\0';
+ 			tmp = regulator_get(dev, name);
+ 			if (IS_ERR(tmp)) {
+-				ret = -EINVAL;
++				ret = PTR_ERR(tmp);
+ 				goto error;
+ 			}
+ 			(*consumers)[n].consumer = tmp;
+diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
+index 144c8e9a642e8b..448b9a5438e0b8 100644
+--- a/drivers/remoteproc/imx_rproc.c
++++ b/drivers/remoteproc/imx_rproc.c
+@@ -210,7 +210,7 @@ static const struct imx_rproc_att imx_rproc_att_imx8mq[] = {
+ 	/* QSPI Code - alias */
+ 	{ 0x08000000, 0x08000000, 0x08000000, 0 },
+ 	/* DDR (Code) - alias */
+-	{ 0x10000000, 0x80000000, 0x0FFE0000, 0 },
++	{ 0x10000000, 0x40000000, 0x0FFE0000, 0 },
+ 	/* TCML */
+ 	{ 0x1FFE0000, 0x007E0000, 0x00020000, ATT_OWN  | ATT_IOMEM},
+ 	/* TCMU */
+@@ -1076,6 +1076,8 @@ static int imx_rproc_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 	}
+ 
++	INIT_WORK(&priv->rproc_work, imx_rproc_vq_work);
++
+ 	ret = imx_rproc_xtr_mbox_init(rproc);
+ 	if (ret)
+ 		goto err_put_wkq;
+@@ -1094,8 +1096,6 @@ static int imx_rproc_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_put_scu;
+ 
+-	INIT_WORK(&priv->rproc_work, imx_rproc_vq_work);
+-
+ 	if (rproc->state != RPROC_DETACHED)
+ 		rproc->auto_boot = of_property_read_bool(np, "fsl,auto-boot");
+ 
+diff --git a/drivers/reset/reset-berlin.c b/drivers/reset/reset-berlin.c
+index 2537ec05eceefd..578fe867080ce0 100644
+--- a/drivers/reset/reset-berlin.c
++++ b/drivers/reset/reset-berlin.c
+@@ -68,13 +68,14 @@ static int berlin_reset_xlate(struct reset_controller_dev *rcdev,
+ 
+ static int berlin2_reset_probe(struct platform_device *pdev)
+ {
+-	struct device_node *parent_np = of_get_parent(pdev->dev.of_node);
++	struct device_node *parent_np;
+ 	struct berlin_reset_priv *priv;
+ 
+ 	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
+ 
++	parent_np = of_get_parent(pdev->dev.of_node);
+ 	priv->regmap = syscon_node_to_regmap(parent_np);
+ 	of_node_put(parent_np);
+ 	if (IS_ERR(priv->regmap))
+diff --git a/drivers/reset/reset-k210.c b/drivers/reset/reset-k210.c
+index b62a2fd44e4e42..e77e4cca377dca 100644
+--- a/drivers/reset/reset-k210.c
++++ b/drivers/reset/reset-k210.c
+@@ -90,7 +90,7 @@ static const struct reset_control_ops k210_rst_ops = {
+ static int k210_rst_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+-	struct device_node *parent_np = of_get_parent(dev->of_node);
++	struct device_node *parent_np;
+ 	struct k210_rst *ksr;
+ 
+ 	dev_info(dev, "K210 reset controller\n");
+@@ -99,6 +99,7 @@ static int k210_rst_probe(struct platform_device *pdev)
+ 	if (!ksr)
+ 		return -ENOMEM;
+ 
++	parent_np = of_get_parent(dev->of_node);
+ 	ksr->map = syscon_node_to_regmap(parent_np);
+ 	of_node_put(parent_np);
+ 	if (IS_ERR(ksr->map))
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index 99fadfb4cd9f26..09acc321d0133f 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -107,6 +107,7 @@ debug_info_t *ap_dbf_info;
+ static bool ap_scan_bus(void);
+ static bool ap_scan_bus_result; /* result of last ap_scan_bus() */
+ static DEFINE_MUTEX(ap_scan_bus_mutex); /* mutex ap_scan_bus() invocations */
++static struct task_struct *ap_scan_bus_task; /* thread holding the scan mutex */
+ static atomic64_t ap_scan_bus_count; /* counter ap_scan_bus() invocations */
+ static int ap_scan_bus_time = AP_CONFIG_TIME;
+ static struct timer_list ap_scan_bus_timer;
+@@ -1006,11 +1007,25 @@ bool ap_bus_force_rescan(void)
+ 	if (scan_counter <= 0)
+ 		goto out;
+ 
++	/*
++	 * There is one unlikely but nevertheless valid scenario where the
++	 * thread holding the mutex may try to send some crypto load but
++	 * all cards are offline so a rescan is triggered which causes
++	 * a recursive call of ap_bus_force_rescan(). A simple return if
++	 * the mutex is already locked by this thread solves this.
++	 */
++	if (mutex_is_locked(&ap_scan_bus_mutex)) {
++		if (ap_scan_bus_task == current)
++			goto out;
++	}
++
+ 	/* Try to acquire the AP scan bus mutex */
+ 	if (mutex_trylock(&ap_scan_bus_mutex)) {
+ 		/* mutex acquired, run the AP bus scan */
++		ap_scan_bus_task = current;
+ 		ap_scan_bus_result = ap_scan_bus();
+ 		rc = ap_scan_bus_result;
++		ap_scan_bus_task = NULL;
+ 		mutex_unlock(&ap_scan_bus_mutex);
+ 		goto out;
+ 	}
+@@ -2284,7 +2299,9 @@ static void ap_scan_bus_wq_callback(struct work_struct *unused)
+ 	 * system_long_wq which invokes this function here again.
+ 	 */
+ 	if (mutex_trylock(&ap_scan_bus_mutex)) {
++		ap_scan_bus_task = current;
+ 		ap_scan_bus_result = ap_scan_bus();
++		ap_scan_bus_task = NULL;
+ 		mutex_unlock(&ap_scan_bus_mutex);
+ 	}
+ }
+diff --git a/drivers/scsi/NCR5380.c b/drivers/scsi/NCR5380.c
+index cea3a79d538e4b..00e245173320c3 100644
+--- a/drivers/scsi/NCR5380.c
++++ b/drivers/scsi/NCR5380.c
+@@ -1485,6 +1485,7 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
+ 				unsigned char **data)
+ {
+ 	struct NCR5380_hostdata *hostdata = shost_priv(instance);
++	struct NCR5380_cmd *ncmd = NCR5380_to_ncmd(hostdata->connected);
+ 	int c = *count;
+ 	unsigned char p = *phase;
+ 	unsigned char *d = *data;
+@@ -1496,7 +1497,7 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
+ 		return -1;
+ 	}
+ 
+-	NCR5380_to_ncmd(hostdata->connected)->phase = p;
++	ncmd->phase = p;
+ 
+ 	if (p & SR_IO) {
+ 		if (hostdata->read_overruns)
+@@ -1608,45 +1609,44 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
+  * request.
+  */
+ 
+-	if (hostdata->flags & FLAG_DMA_FIXUP) {
+-		if (p & SR_IO) {
+-			/*
+-			 * The workaround was to transfer fewer bytes than we
+-			 * intended to with the pseudo-DMA read function, wait for
+-			 * the chip to latch the last byte, read it, and then disable
+-			 * pseudo-DMA mode.
+-			 *
+-			 * After REQ is asserted, the NCR5380 asserts DRQ and ACK.
+-			 * REQ is deasserted when ACK is asserted, and not reasserted
+-			 * until ACK goes false.  Since the NCR5380 won't lower ACK
+-			 * until DACK is asserted, which won't happen unless we twiddle
+-			 * the DMA port or we take the NCR5380 out of DMA mode, we
+-			 * can guarantee that we won't handshake another extra
+-			 * byte.
+-			 */
+-
+-			if (NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
+-			                          BASR_DRQ, BASR_DRQ, 0) < 0) {
+-				result = -1;
+-				shost_printk(KERN_ERR, instance, "PDMA read: DRQ timeout\n");
+-			}
+-			if (NCR5380_poll_politely(hostdata, STATUS_REG,
+-			                          SR_REQ, 0, 0) < 0) {
+-				result = -1;
+-				shost_printk(KERN_ERR, instance, "PDMA read: !REQ timeout\n");
+-			}
+-			d[*count - 1] = NCR5380_read(INPUT_DATA_REG);
+-		} else {
+-			/*
+-			 * Wait for the last byte to be sent.  If REQ is being asserted for
+-			 * the byte we're interested, we'll ACK it and it will go false.
+-			 */
+-			if (NCR5380_poll_politely2(hostdata,
+-			     BUS_AND_STATUS_REG, BASR_DRQ, BASR_DRQ,
+-			     BUS_AND_STATUS_REG, BASR_PHASE_MATCH, 0, 0) < 0) {
+-				result = -1;
+-				shost_printk(KERN_ERR, instance, "PDMA write: DRQ and phase timeout\n");
++	if ((hostdata->flags & FLAG_DMA_FIXUP) &&
++	    (NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH)) {
++		/*
++		 * The workaround was to transfer fewer bytes than we
++		 * intended to with the pseudo-DMA receive function, wait for
++		 * the chip to latch the last byte, read it, and then disable
++		 * DMA mode.
++		 *
++		 * After REQ is asserted, the NCR5380 asserts DRQ and ACK.
++		 * REQ is deasserted when ACK is asserted, and not reasserted
++		 * until ACK goes false. Since the NCR5380 won't lower ACK
++		 * until DACK is asserted, which won't happen unless we twiddle
++		 * the DMA port or we take the NCR5380 out of DMA mode, we
++		 * can guarantee that we won't handshake another extra
++		 * byte.
++		 *
++		 * If sending, wait for the last byte to be sent. If REQ is
++		 * being asserted for the byte we're interested, we'll ACK it
++		 * and it will go false.
++		 */
++		if (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
++					   BASR_DRQ, BASR_DRQ, 0)) {
++			if ((p & SR_IO) &&
++			    (NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH)) {
++				if (!NCR5380_poll_politely(hostdata, STATUS_REG,
++							   SR_REQ, 0, 0)) {
++					d[c] = NCR5380_read(INPUT_DATA_REG);
++					--ncmd->this_residual;
++				} else {
++					result = -1;
++					scmd_printk(KERN_ERR, hostdata->connected,
++						    "PDMA fixup: !REQ timeout\n");
++				}
+ 			}
++		} else if (NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH) {
++			result = -1;
++			scmd_printk(KERN_ERR, hostdata->connected,
++				    "PDMA fixup: DRQ timeout\n");
+ 		}
+ 	}
+ 
+diff --git a/drivers/scsi/elx/libefc/efc_nport.c b/drivers/scsi/elx/libefc/efc_nport.c
+index 2e83a667901fec..1a7437f4328e87 100644
+--- a/drivers/scsi/elx/libefc/efc_nport.c
++++ b/drivers/scsi/elx/libefc/efc_nport.c
+@@ -705,9 +705,9 @@ efc_nport_vport_del(struct efc *efc, struct efc_domain *domain,
+ 	spin_lock_irqsave(&efc->lock, flags);
+ 	list_for_each_entry(nport, &domain->nport_list, list_entry) {
+ 		if (nport->wwpn == wwpn && nport->wwnn == wwnn) {
+-			kref_put(&nport->ref, nport->release);
+ 			/* Shutdown this NPORT */
+ 			efc_sm_post_event(&nport->sm, EFC_EVT_SHUTDOWN, NULL);
++			kref_put(&nport->ref, nport->release);
+ 			break;
+ 		}
+ 	}
+diff --git a/drivers/scsi/lpfc/lpfc_hw4.h b/drivers/scsi/lpfc/lpfc_hw4.h
+index 500253007b1dc8..26e1313ebb21fc 100644
+--- a/drivers/scsi/lpfc/lpfc_hw4.h
++++ b/drivers/scsi/lpfc/lpfc_hw4.h
+@@ -4847,6 +4847,7 @@ struct fcp_iwrite64_wqe {
+ #define	cmd_buff_len_SHIFT  16
+ #define	cmd_buff_len_MASK  0x00000ffff
+ #define	cmd_buff_len_WORD  word3
++/* Note: payload_offset_len field depends on ASIC support */
+ #define payload_offset_len_SHIFT 0
+ #define payload_offset_len_MASK 0x0000ffff
+ #define payload_offset_len_WORD word3
+@@ -4863,6 +4864,7 @@ struct fcp_iread64_wqe {
+ #define	cmd_buff_len_SHIFT  16
+ #define	cmd_buff_len_MASK  0x00000ffff
+ #define	cmd_buff_len_WORD  word3
++/* Note: payload_offset_len field depends on ASIC support */
+ #define payload_offset_len_SHIFT 0
+ #define payload_offset_len_MASK 0x0000ffff
+ #define payload_offset_len_WORD word3
+@@ -4879,6 +4881,7 @@ struct fcp_icmnd64_wqe {
+ #define	cmd_buff_len_SHIFT  16
+ #define	cmd_buff_len_MASK  0x00000ffff
+ #define	cmd_buff_len_WORD  word3
++/* Note: payload_offset_len field depends on ASIC support */
+ #define payload_offset_len_SHIFT 0
+ #define payload_offset_len_MASK 0x0000ffff
+ #define payload_offset_len_WORD word3
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index e1dfa96c2a553a..0c1404dc5f3bdb 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -4699,6 +4699,7 @@ lpfc_create_port(struct lpfc_hba *phba, int instance, struct device *dev)
+ 	uint64_t wwn;
+ 	bool use_no_reset_hba = false;
+ 	int rc;
++	u8 if_type;
+ 
+ 	if (lpfc_no_hba_reset_cnt) {
+ 		if (phba->sli_rev < LPFC_SLI_REV4 &&
+@@ -4773,10 +4774,24 @@ lpfc_create_port(struct lpfc_hba *phba, int instance, struct device *dev)
+ 	shost->max_id = LPFC_MAX_TARGET;
+ 	shost->max_lun = vport->cfg_max_luns;
+ 	shost->this_id = -1;
+-	if (phba->sli_rev == LPFC_SLI_REV4)
+-		shost->max_cmd_len = LPFC_FCP_CDB_LEN_32;
+-	else
++
++	/* Set max_cmd_len applicable to ASIC support */
++	if (phba->sli_rev == LPFC_SLI_REV4) {
++		if_type = bf_get(lpfc_sli_intf_if_type,
++				 &phba->sli4_hba.sli_intf);
++		switch (if_type) {
++		case LPFC_SLI_INTF_IF_TYPE_2:
++			fallthrough;
++		case LPFC_SLI_INTF_IF_TYPE_6:
++			shost->max_cmd_len = LPFC_FCP_CDB_LEN_32;
++			break;
++		default:
++			shost->max_cmd_len = LPFC_FCP_CDB_LEN;
++			break;
++		}
++	} else {
+ 		shost->max_cmd_len = LPFC_FCP_CDB_LEN;
++	}
+ 
+ 	if (phba->sli_rev == LPFC_SLI_REV4) {
+ 		if (!phba->cfg_fcp_mq_threshold ||
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 98ce9d97a22570..9f0b59672e1915 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -4760,7 +4760,7 @@ static int lpfc_scsi_prep_cmnd_buf_s4(struct lpfc_vport *vport,
+ 
+ 	 /* Word 3 */
+ 	bf_set(payload_offset_len, &wqe->fcp_icmd,
+-	       sizeof(struct fcp_cmnd32) + sizeof(struct fcp_rsp));
++	       sizeof(struct fcp_cmnd) + sizeof(struct fcp_rsp));
+ 
+ 	/* Word 6 */
+ 	bf_set(wqe_ctxt_tag, &wqe->generic.wqe_com,
+diff --git a/drivers/scsi/mac_scsi.c b/drivers/scsi/mac_scsi.c
+index a402c4dc4645d3..2e9fad1e3069ec 100644
+--- a/drivers/scsi/mac_scsi.c
++++ b/drivers/scsi/mac_scsi.c
+@@ -102,11 +102,15 @@ __setup("mac5380=", mac_scsi_setup);
+  * Linux SCSI drivers lack knowledge of the timing behaviour of SCSI targets
+  * so bus errors are unavoidable.
+  *
+- * If a MOVE.B instruction faults, we assume that zero bytes were transferred
+- * and simply retry. That assumption probably depends on target behaviour but
+- * seems to hold up okay. The NOP provides synchronization: without it the
+- * fault can sometimes occur after the program counter has moved past the
+- * offending instruction. Post-increment addressing can't be used.
++ * If a MOVE.B instruction faults during a receive operation, we assume the
++ * target sent nothing and try again. That assumption probably depends on
++ * target firmware but it seems to hold up okay. If a fault happens during a
++ * send operation, the target may or may not have seen /ACK and got the byte.
++ * It's uncertain so the whole SCSI command gets retried.
++ *
++ * The NOP is needed for synchronization because the fault address in the
++ * exception stack frame may or may not be the instruction that actually
++ * caused the bus error. Post-increment addressing can't be used.
+  */
+ 
+ #define MOVE_BYTE(operands) \
+@@ -208,8 +212,6 @@ __setup("mac5380=", mac_scsi_setup);
+ 		".previous                     \n" \
+ 		: "+a" (addr), "+r" (n), "+r" (result) : "a" (io))
+ 
+-#define MAC_PDMA_DELAY		32
+-
+ static inline int mac_pdma_recv(void __iomem *io, unsigned char *start, int n)
+ {
+ 	unsigned char *addr = start;
+@@ -245,22 +247,21 @@ static inline int mac_pdma_send(unsigned char *start, void __iomem *io, int n)
+ 	if (n >= 1) {
+ 		MOVE_BYTE("%0@,%3@");
+ 		if (result)
+-			goto out;
++			return -1;
+ 	}
+ 	if (n >= 1 && ((unsigned long)addr & 1)) {
+ 		MOVE_BYTE("%0@,%3@");
+ 		if (result)
+-			goto out;
++			return -2;
+ 	}
+ 	while (n >= 32)
+ 		MOVE_16_WORDS("%0@+,%3@");
+ 	while (n >= 2)
+ 		MOVE_WORD("%0@+,%3@");
+ 	if (result)
+-		return start - addr; /* Negated to indicate uncertain length */
++		return start - addr - 1; /* Negated to indicate uncertain length */
+ 	if (n == 1)
+ 		MOVE_BYTE("%0@,%3@");
+-out:
+ 	return addr - start;
+ }
+ 
+@@ -274,25 +275,56 @@ static inline void write_ctrl_reg(struct NCR5380_hostdata *hostdata, u32 value)
+ 	out_be32(hostdata->io + (CTRL_REG << 4), value);
+ }
+ 
++static inline int macscsi_wait_for_drq(struct NCR5380_hostdata *hostdata)
++{
++	unsigned int n = 1; /* effectively multiplies NCR5380_REG_POLL_TIME */
++	unsigned char basr;
++
++again:
++	basr = NCR5380_read(BUS_AND_STATUS_REG);
++
++	if (!(basr & BASR_PHASE_MATCH))
++		return 1;
++
++	if (basr & BASR_IRQ)
++		return -1;
++
++	if (basr & BASR_DRQ)
++		return 0;
++
++	if (n-- == 0) {
++		NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
++		dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
++			 "%s: DRQ timeout\n", __func__);
++		return -1;
++	}
++
++	NCR5380_poll_politely2(hostdata,
++			       BUS_AND_STATUS_REG, BASR_DRQ, BASR_DRQ,
++			       BUS_AND_STATUS_REG, BASR_PHASE_MATCH, 0, 0);
++	goto again;
++}
++
+ static inline int macscsi_pread(struct NCR5380_hostdata *hostdata,
+                                 unsigned char *dst, int len)
+ {
+ 	u8 __iomem *s = hostdata->pdma_io + (INPUT_DATA_REG << 4);
+ 	unsigned char *d = dst;
+-	int result = 0;
+ 
+ 	hostdata->pdma_residual = len;
+ 
+-	while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
+-	                              BASR_DRQ | BASR_PHASE_MATCH,
+-	                              BASR_DRQ | BASR_PHASE_MATCH, 0)) {
+-		int bytes;
++	while (macscsi_wait_for_drq(hostdata) == 0) {
++		int bytes, chunk_bytes;
+ 
+ 		if (macintosh_config->ident == MAC_MODEL_IIFX)
+ 			write_ctrl_reg(hostdata, CTRL_HANDSHAKE_MODE |
+ 			                         CTRL_INTERRUPTS_ENABLE);
+ 
+-		bytes = mac_pdma_recv(s, d, min(hostdata->pdma_residual, 512));
++		chunk_bytes = min(hostdata->pdma_residual, 512);
++		bytes = mac_pdma_recv(s, d, chunk_bytes);
++
++		if (macintosh_config->ident == MAC_MODEL_IIFX)
++			write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE);
+ 
+ 		if (bytes > 0) {
+ 			d += bytes;
+@@ -300,37 +332,25 @@ static inline int macscsi_pread(struct NCR5380_hostdata *hostdata,
+ 		}
+ 
+ 		if (hostdata->pdma_residual == 0)
+-			goto out;
++			break;
+ 
+-		if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
+-		                           BUS_AND_STATUS_REG, BASR_ACK,
+-		                           BASR_ACK, 0) < 0)
+-			scmd_printk(KERN_DEBUG, hostdata->connected,
+-			            "%s: !REQ and !ACK\n", __func__);
+-		if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
+-			goto out;
++		if (bytes > 0)
++			continue;
+ 
+-		if (bytes == 0)
+-			udelay(MAC_PDMA_DELAY);
++		NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
++		dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
++			 "%s: bus error [%d/%d] (%d/%d)\n",
++			 __func__, d - dst, len, bytes, chunk_bytes);
+ 
+-		if (bytes >= 0)
++		if (bytes == 0)
+ 			continue;
+ 
+-		dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
+-		         "%s: bus error (%d/%d)\n", __func__, d - dst, len);
+-		NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
+-		result = -1;
+-		goto out;
++		if (macscsi_wait_for_drq(hostdata) <= 0)
++			set_host_byte(hostdata->connected, DID_ERROR);
++		break;
+ 	}
+ 
+-	scmd_printk(KERN_ERR, hostdata->connected,
+-	            "%s: phase mismatch or !DRQ\n", __func__);
+-	NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
+-	result = -1;
+-out:
+-	if (macintosh_config->ident == MAC_MODEL_IIFX)
+-		write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE);
+-	return result;
++	return 0;
+ }
+ 
+ static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata,
+@@ -338,67 +358,47 @@ static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata,
+ {
+ 	unsigned char *s = src;
+ 	u8 __iomem *d = hostdata->pdma_io + (OUTPUT_DATA_REG << 4);
+-	int result = 0;
+ 
+ 	hostdata->pdma_residual = len;
+ 
+-	while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
+-	                              BASR_DRQ | BASR_PHASE_MATCH,
+-	                              BASR_DRQ | BASR_PHASE_MATCH, 0)) {
+-		int bytes;
++	while (macscsi_wait_for_drq(hostdata) == 0) {
++		int bytes, chunk_bytes;
+ 
+ 		if (macintosh_config->ident == MAC_MODEL_IIFX)
+ 			write_ctrl_reg(hostdata, CTRL_HANDSHAKE_MODE |
+ 			                         CTRL_INTERRUPTS_ENABLE);
+ 
+-		bytes = mac_pdma_send(s, d, min(hostdata->pdma_residual, 512));
++		chunk_bytes = min(hostdata->pdma_residual, 512);
++		bytes = mac_pdma_send(s, d, chunk_bytes);
++
++		if (macintosh_config->ident == MAC_MODEL_IIFX)
++			write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE);
+ 
+ 		if (bytes > 0) {
+ 			s += bytes;
+ 			hostdata->pdma_residual -= bytes;
+ 		}
+ 
+-		if (hostdata->pdma_residual == 0) {
+-			if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG,
+-			                          TCR_LAST_BYTE_SENT,
+-			                          TCR_LAST_BYTE_SENT,
+-			                          0) < 0) {
+-				scmd_printk(KERN_ERR, hostdata->connected,
+-				            "%s: Last Byte Sent timeout\n", __func__);
+-				result = -1;
+-			}
+-			goto out;
+-		}
++		if (hostdata->pdma_residual == 0)
++			break;
+ 
+-		if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
+-		                           BUS_AND_STATUS_REG, BASR_ACK,
+-		                           BASR_ACK, 0) < 0)
+-			scmd_printk(KERN_DEBUG, hostdata->connected,
+-			            "%s: !REQ and !ACK\n", __func__);
+-		if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
+-			goto out;
++		if (bytes > 0)
++			continue;
+ 
+-		if (bytes == 0)
+-			udelay(MAC_PDMA_DELAY);
++		NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
++		dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
++			 "%s: bus error [%d/%d] (%d/%d)\n",
++			 __func__, s - src, len, bytes, chunk_bytes);
+ 
+-		if (bytes >= 0)
++		if (bytes == 0)
+ 			continue;
+ 
+-		dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
+-		         "%s: bus error (%d/%d)\n", __func__, s - src, len);
+-		NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
+-		result = -1;
+-		goto out;
++		if (macscsi_wait_for_drq(hostdata) <= 0)
++			set_host_byte(hostdata->connected, DID_ERROR);
++		break;
+ 	}
+ 
+-	scmd_printk(KERN_ERR, hostdata->connected,
+-	            "%s: phase mismatch or !DRQ\n", __func__);
+-	NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
+-	result = -1;
+-out:
+-	if (macintosh_config->ident == MAC_MODEL_IIFX)
+-		write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE);
+-	return result;
++	return 0;
+ }
+ 
+ static int macscsi_dma_xfer_len(struct NCR5380_hostdata *hostdata,
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 7d2a294ebc3d79..5dd100175ec616 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3283,7 +3283,7 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp)
+ 	rcu_read_lock();
+ 	vpd = rcu_dereference(sdkp->device->vpd_pgb1);
+ 
+-	if (!vpd || vpd->len < 8) {
++	if (!vpd || vpd->len <= 8) {
+ 		rcu_read_unlock();
+ 	        return;
+ 	}
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 24c7cb285dca0f..c1524fb334eb5c 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -2354,14 +2354,6 @@ static inline void pqi_mask_device(u8 *scsi3addr)
+ 	scsi3addr[3] |= 0xc0;
+ }
+ 
+-static inline bool pqi_is_multipath_device(struct pqi_scsi_dev *device)
+-{
+-	if (pqi_is_logical_device(device))
+-		return false;
+-
+-	return (device->path_map & (device->path_map - 1)) != 0;
+-}
+-
+ static inline bool pqi_expose_device(struct pqi_scsi_dev *device)
+ {
+ 	return !device->is_physical_device || !pqi_skip_device(device->scsi3addr);
+@@ -3258,14 +3250,12 @@ static void pqi_process_aio_io_error(struct pqi_io_request *io_request)
+ 	int residual_count;
+ 	int xfer_count;
+ 	bool device_offline;
+-	struct pqi_scsi_dev *device;
+ 
+ 	scmd = io_request->scmd;
+ 	error_info = io_request->error_info;
+ 	host_byte = DID_OK;
+ 	sense_data_length = 0;
+ 	device_offline = false;
+-	device = scmd->device->hostdata;
+ 
+ 	switch (error_info->service_response) {
+ 	case PQI_AIO_SERV_RESPONSE_COMPLETE:
+@@ -3290,14 +3280,8 @@ static void pqi_process_aio_io_error(struct pqi_io_request *io_request)
+ 			break;
+ 		case PQI_AIO_STATUS_AIO_PATH_DISABLED:
+ 			pqi_aio_path_disabled(io_request);
+-			if (pqi_is_multipath_device(device)) {
+-				pqi_device_remove_start(device);
+-				host_byte = DID_NO_CONNECT;
+-				scsi_status = SAM_STAT_CHECK_CONDITION;
+-			} else {
+-				scsi_status = SAM_STAT_GOOD;
+-				io_request->status = -EAGAIN;
+-			}
++			scsi_status = SAM_STAT_GOOD;
++			io_request->status = -EAGAIN;
+ 			break;
+ 		case PQI_AIO_STATUS_NO_PATH_TO_DEVICE:
+ 		case PQI_AIO_STATUS_INVALID_DEVICE:
+diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
+index f498db9abe35f0..04035706b96dc8 100644
+--- a/drivers/soc/fsl/qe/qmc.c
++++ b/drivers/soc/fsl/qe/qmc.c
+@@ -940,11 +940,13 @@ static int qmc_chan_start_rx(struct qmc_chan *chan)
+ 		goto end;
+ 	}
+ 
+-	ret = qmc_setup_chan_trnsync(chan->qmc, chan);
+-	if (ret) {
+-		dev_err(chan->qmc->dev, "chan %u: setup TRNSYNC failed (%d)\n",
+-			chan->id, ret);
+-		goto end;
++	if (chan->mode == QMC_TRANSPARENT) {
++		ret = qmc_setup_chan_trnsync(chan->qmc, chan);
++		if (ret) {
++			dev_err(chan->qmc->dev, "chan %u: setup TRNSYNC failed (%d)\n",
++				chan->id, ret);
++			goto end;
++		}
+ 	}
+ 
+ 	/* Restart the receiver */
+@@ -982,11 +984,13 @@ static int qmc_chan_start_tx(struct qmc_chan *chan)
+ 		goto end;
+ 	}
+ 
+-	ret = qmc_setup_chan_trnsync(chan->qmc, chan);
+-	if (ret) {
+-		dev_err(chan->qmc->dev, "chan %u: setup TRNSYNC failed (%d)\n",
+-			chan->id, ret);
+-		goto end;
++	if (chan->mode == QMC_TRANSPARENT) {
++		ret = qmc_setup_chan_trnsync(chan->qmc, chan);
++		if (ret) {
++			dev_err(chan->qmc->dev, "chan %u: setup TRNSYNC failed (%d)\n",
++				chan->id, ret);
++			goto end;
++		}
+ 	}
+ 
+ 	/*
+diff --git a/drivers/soc/fsl/qe/tsa.c b/drivers/soc/fsl/qe/tsa.c
+index 6c5741cf5e9d2a..53968ea84c8872 100644
+--- a/drivers/soc/fsl/qe/tsa.c
++++ b/drivers/soc/fsl/qe/tsa.c
+@@ -140,7 +140,7 @@ static inline void tsa_write32(void __iomem *addr, u32 val)
+ 	iowrite32be(val, addr);
+ }
+ 
+-static inline void tsa_write8(void __iomem *addr, u32 val)
++static inline void tsa_write8(void __iomem *addr, u8 val)
+ {
+ 	iowrite8(val, addr);
+ }
+diff --git a/drivers/soc/qcom/smd-rpm.c b/drivers/soc/qcom/smd-rpm.c
+index b7056aed4c7d10..9d64283d212549 100644
+--- a/drivers/soc/qcom/smd-rpm.c
++++ b/drivers/soc/qcom/smd-rpm.c
+@@ -196,9 +196,6 @@ static int qcom_smd_rpm_probe(struct rpmsg_device *rpdev)
+ {
+ 	struct qcom_smd_rpm *rpm;
+ 
+-	if (!rpdev->dev.of_node)
+-		return -EINVAL;
+-
+ 	rpm = devm_kzalloc(&rpdev->dev, sizeof(*rpm), GFP_KERNEL);
+ 	if (!rpm)
+ 		return -ENOMEM;
+@@ -218,18 +215,38 @@ static void qcom_smd_rpm_remove(struct rpmsg_device *rpdev)
+ 	of_platform_depopulate(&rpdev->dev);
+ }
+ 
+-static const struct rpmsg_device_id qcom_smd_rpm_id_table[] = {
+-	{ .name = "rpm_requests", },
+-	{ /* sentinel */ }
++static const struct of_device_id qcom_smd_rpm_of_match[] = {
++	{ .compatible = "qcom,rpm-apq8084" },
++	{ .compatible = "qcom,rpm-ipq6018" },
++	{ .compatible = "qcom,rpm-ipq9574" },
++	{ .compatible = "qcom,rpm-msm8226" },
++	{ .compatible = "qcom,rpm-msm8909" },
++	{ .compatible = "qcom,rpm-msm8916" },
++	{ .compatible = "qcom,rpm-msm8936" },
++	{ .compatible = "qcom,rpm-msm8953" },
++	{ .compatible = "qcom,rpm-msm8974" },
++	{ .compatible = "qcom,rpm-msm8976" },
++	{ .compatible = "qcom,rpm-msm8994" },
++	{ .compatible = "qcom,rpm-msm8996" },
++	{ .compatible = "qcom,rpm-msm8998" },
++	{ .compatible = "qcom,rpm-sdm660" },
++	{ .compatible = "qcom,rpm-sm6115" },
++	{ .compatible = "qcom,rpm-sm6125" },
++	{ .compatible = "qcom,rpm-sm6375" },
++	{ .compatible = "qcom,rpm-qcm2290" },
++	{ .compatible = "qcom,rpm-qcs404" },
++	{}
+ };
+-MODULE_DEVICE_TABLE(rpmsg, qcom_smd_rpm_id_table);
++MODULE_DEVICE_TABLE(of, qcom_smd_rpm_of_match);
+ 
+ static struct rpmsg_driver qcom_smd_rpm_driver = {
+ 	.probe = qcom_smd_rpm_probe,
+ 	.remove = qcom_smd_rpm_remove,
+ 	.callback = qcom_smd_rpm_callback,
+-	.id_table = qcom_smd_rpm_id_table,
+-	.drv.name = "qcom_smd_rpm",
++	.drv  = {
++		.name  = "qcom_smd_rpm",
++		.of_match_table = qcom_smd_rpm_of_match,
++	},
+ };
+ 
+ static int __init qcom_smd_rpm_init(void)
+diff --git a/drivers/soc/versatile/soc-integrator.c b/drivers/soc/versatile/soc-integrator.c
+index bab4ad87aa7500..d5099a3386b4fc 100644
+--- a/drivers/soc/versatile/soc-integrator.c
++++ b/drivers/soc/versatile/soc-integrator.c
+@@ -113,6 +113,7 @@ static int __init integrator_soc_init(void)
+ 		return -ENODEV;
+ 
+ 	syscon_regmap = syscon_node_to_regmap(np);
++	of_node_put(np);
+ 	if (IS_ERR(syscon_regmap))
+ 		return PTR_ERR(syscon_regmap);
+ 
+diff --git a/drivers/soc/versatile/soc-realview.c b/drivers/soc/versatile/soc-realview.c
+index c6876d232d8fd6..cf91abe07d38d0 100644
+--- a/drivers/soc/versatile/soc-realview.c
++++ b/drivers/soc/versatile/soc-realview.c
+@@ -4,6 +4,7 @@
+  *
+  * Author: Linus Walleij <linus.walleij@linaro.org>
+  */
++#include <linux/device.h>
+ #include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/slab.h>
+@@ -81,6 +82,13 @@ static struct attribute *realview_attrs[] = {
+ 
+ ATTRIBUTE_GROUPS(realview);
+ 
++static void realview_soc_socdev_release(void *data)
++{
++	struct soc_device *soc_dev = data;
++
++	soc_device_unregister(soc_dev);
++}
++
+ static int realview_soc_probe(struct platform_device *pdev)
+ {
+ 	struct regmap *syscon_regmap;
+@@ -93,7 +101,7 @@ static int realview_soc_probe(struct platform_device *pdev)
+ 	if (IS_ERR(syscon_regmap))
+ 		return PTR_ERR(syscon_regmap);
+ 
+-	soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
++	soc_dev_attr = devm_kzalloc(&pdev->dev, sizeof(*soc_dev_attr), GFP_KERNEL);
+ 	if (!soc_dev_attr)
+ 		return -ENOMEM;
+ 
+@@ -106,10 +114,14 @@ static int realview_soc_probe(struct platform_device *pdev)
+ 	soc_dev_attr->family = "Versatile";
+ 	soc_dev_attr->custom_attr_group = realview_groups[0];
+ 	soc_dev = soc_device_register(soc_dev_attr);
+-	if (IS_ERR(soc_dev)) {
+-		kfree(soc_dev_attr);
++	if (IS_ERR(soc_dev))
+ 		return -ENODEV;
+-	}
++
++	ret = devm_add_action_or_reset(&pdev->dev, realview_soc_socdev_release,
++				       soc_dev);
++	if (ret)
++		return ret;
++
+ 	ret = regmap_read(syscon_regmap, REALVIEW_SYS_ID_OFFSET,
+ 			  &realview_coreid);
+ 	if (ret)
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index 5aaff3bee1b78f..9ea91432c11d81 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -375,9 +375,9 @@ static int atmel_qspi_set_cfg(struct atmel_qspi *aq,
+ 	 * If the QSPI controller is set in regular SPI mode, set it in
+ 	 * Serial Memory Mode (SMM).
+ 	 */
+-	if (aq->mr != QSPI_MR_SMM) {
+-		atmel_qspi_write(QSPI_MR_SMM, aq, QSPI_MR);
+-		aq->mr = QSPI_MR_SMM;
++	if (!(aq->mr & QSPI_MR_SMM)) {
++		aq->mr |= QSPI_MR_SMM;
++		atmel_qspi_write(aq->mr, aq, QSPI_MR);
+ 	}
+ 
+ 	/* Clear pending interrupts */
+@@ -501,7 +501,8 @@ static int atmel_qspi_setup(struct spi_device *spi)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	aq->scr = QSPI_SCR_SCBR(scbr);
++	aq->scr &= ~QSPI_SCR_SCBR_MASK;
++	aq->scr |= QSPI_SCR_SCBR(scbr);
+ 	atmel_qspi_write(aq->scr, aq, QSPI_SCR);
+ 
+ 	pm_runtime_mark_last_busy(ctrl->dev.parent);
+@@ -534,6 +535,7 @@ static int atmel_qspi_set_cs_timing(struct spi_device *spi)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	aq->scr &= ~QSPI_SCR_DLYBS_MASK;
+ 	aq->scr |= QSPI_SCR_DLYBS(cs_setup);
+ 	atmel_qspi_write(aq->scr, aq, QSPI_SCR);
+ 
+@@ -549,8 +551,8 @@ static void atmel_qspi_init(struct atmel_qspi *aq)
+ 	atmel_qspi_write(QSPI_CR_SWRST, aq, QSPI_CR);
+ 
+ 	/* Set the QSPI controller by default in Serial Memory Mode */
+-	atmel_qspi_write(QSPI_MR_SMM, aq, QSPI_MR);
+-	aq->mr = QSPI_MR_SMM;
++	aq->mr |= QSPI_MR_SMM;
++	atmel_qspi_write(aq->mr, aq, QSPI_MR);
+ 
+ 	/* Enable the QSPI controller */
+ 	atmel_qspi_write(QSPI_CR_QSPIEN, aq, QSPI_CR);
+@@ -726,6 +728,7 @@ static void atmel_qspi_remove(struct platform_device *pdev)
+ 	clk_unprepare(aq->pclk);
+ 
+ 	pm_runtime_disable(&pdev->dev);
++	pm_runtime_dont_use_autosuspend(&pdev->dev);
+ 	pm_runtime_put_noidle(&pdev->dev);
+ }
+ 
+diff --git a/drivers/spi/spi-airoha-snfi.c b/drivers/spi/spi-airoha-snfi.c
+index 9d97ec98881cc9..94458df53eae23 100644
+--- a/drivers/spi/spi-airoha-snfi.c
++++ b/drivers/spi/spi-airoha-snfi.c
+@@ -211,9 +211,6 @@ struct airoha_snand_dev {
+ 
+ 	u8 *txrx_buf;
+ 	dma_addr_t dma_addr;
+-
+-	u64 cur_page_num;
+-	bool data_need_update;
+ };
+ 
+ struct airoha_snand_ctrl {
+@@ -405,7 +402,7 @@ static int airoha_snand_write_data(struct airoha_snand_ctrl *as_ctrl, u8 cmd,
+ 	for (i = 0; i < len; i += data_len) {
+ 		int err;
+ 
+-		data_len = min(len, SPI_MAX_TRANSFER_SIZE);
++		data_len = min(len - i, SPI_MAX_TRANSFER_SIZE);
+ 		err = airoha_snand_set_fifo_op(as_ctrl, cmd, data_len);
+ 		if (err)
+ 			return err;
+@@ -427,7 +424,7 @@ static int airoha_snand_read_data(struct airoha_snand_ctrl *as_ctrl, u8 *data,
+ 	for (i = 0; i < len; i += data_len) {
+ 		int err;
+ 
+-		data_len = min(len, SPI_MAX_TRANSFER_SIZE);
++		data_len = min(len - i, SPI_MAX_TRANSFER_SIZE);
+ 		err = airoha_snand_set_fifo_op(as_ctrl, 0xc, data_len);
+ 		if (err)
+ 			return err;
+@@ -644,11 +641,6 @@ static ssize_t airoha_snand_dirmap_read(struct spi_mem_dirmap_desc *desc,
+ 	u32 val, rd_mode;
+ 	int err;
+ 
+-	if (!as_dev->data_need_update)
+-		return len;
+-
+-	as_dev->data_need_update = false;
+-
+ 	switch (op->cmd.opcode) {
+ 	case SPI_NAND_OP_READ_FROM_CACHE_DUAL:
+ 		rd_mode = 1;
+@@ -739,8 +731,13 @@ static ssize_t airoha_snand_dirmap_read(struct spi_mem_dirmap_desc *desc,
+ 	if (err)
+ 		return err;
+ 
+-	err = regmap_set_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_SNF_STA_CTL1,
+-			      SPI_NFI_READ_FROM_CACHE_DONE);
++	/*
++	 * SPI_NFI_READ_FROM_CACHE_DONE bit must be written at the end
++	 * of dirmap_read operation even if it is already set.
++	 */
++	err = regmap_write_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_SNF_STA_CTL1,
++				SPI_NFI_READ_FROM_CACHE_DONE,
++				SPI_NFI_READ_FROM_CACHE_DONE);
+ 	if (err)
+ 		return err;
+ 
+@@ -870,8 +867,13 @@ static ssize_t airoha_snand_dirmap_write(struct spi_mem_dirmap_desc *desc,
+ 	if (err)
+ 		return err;
+ 
+-	err = regmap_set_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_SNF_STA_CTL1,
+-			      SPI_NFI_LOAD_TO_CACHE_DONE);
++	/*
++	 * SPI_NFI_LOAD_TO_CACHE_DONE bit must be written at the end
++	 * of dirmap_write operation even if it is already set.
++	 */
++	err = regmap_write_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_SNF_STA_CTL1,
++				SPI_NFI_LOAD_TO_CACHE_DONE,
++				SPI_NFI_LOAD_TO_CACHE_DONE);
+ 	if (err)
+ 		return err;
+ 
+@@ -885,23 +887,11 @@ static ssize_t airoha_snand_dirmap_write(struct spi_mem_dirmap_desc *desc,
+ static int airoha_snand_exec_op(struct spi_mem *mem,
+ 				const struct spi_mem_op *op)
+ {
+-	struct airoha_snand_dev *as_dev = spi_get_ctldata(mem->spi);
+ 	u8 data[8], cmd, opcode = op->cmd.opcode;
+ 	struct airoha_snand_ctrl *as_ctrl;
+ 	int i, err;
+ 
+ 	as_ctrl = spi_controller_get_devdata(mem->spi->controller);
+-	if (opcode == SPI_NAND_OP_PROGRAM_EXECUTE &&
+-	    op->addr.val == as_dev->cur_page_num) {
+-		as_dev->data_need_update = true;
+-	} else if (opcode == SPI_NAND_OP_PAGE_READ) {
+-		if (!as_dev->data_need_update &&
+-		    op->addr.val == as_dev->cur_page_num)
+-			return 0;
+-
+-		as_dev->data_need_update = true;
+-		as_dev->cur_page_num = op->addr.val;
+-	}
+ 
+ 	/* switch to manual mode */
+ 	err = airoha_snand_set_mode(as_ctrl, SPI_MODE_MANUAL);
+@@ -986,7 +976,6 @@ static int airoha_snand_setup(struct spi_device *spi)
+ 	if (dma_mapping_error(as_ctrl->dev, as_dev->dma_addr))
+ 		return -ENOMEM;
+ 
+-	as_dev->data_need_update = true;
+ 	spi_set_ctldata(spi, as_dev);
+ 
+ 	return 0;
+diff --git a/drivers/spi/spi-bcmbca-hsspi.c b/drivers/spi/spi-bcmbca-hsspi.c
+index 9f64afd8164ea9..4965bc86d7f52a 100644
+--- a/drivers/spi/spi-bcmbca-hsspi.c
++++ b/drivers/spi/spi-bcmbca-hsspi.c
+@@ -546,12 +546,14 @@ static int bcmbca_hsspi_probe(struct platform_device *pdev)
+ 			goto out_put_host;
+ 	}
+ 
+-	pm_runtime_enable(&pdev->dev);
++	ret = devm_pm_runtime_enable(&pdev->dev);
++	if (ret)
++		goto out_put_host;
+ 
+ 	ret = sysfs_create_group(&pdev->dev.kobj, &bcmbca_hsspi_group);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "couldn't register sysfs group\n");
+-		goto out_pm_disable;
++		goto out_put_host;
+ 	}
+ 
+ 	/* register and we are done */
+@@ -565,8 +567,6 @@ static int bcmbca_hsspi_probe(struct platform_device *pdev)
+ 
+ out_sysgroup_disable:
+ 	sysfs_remove_group(&pdev->dev.kobj, &bcmbca_hsspi_group);
+-out_pm_disable:
+-	pm_runtime_disable(&pdev->dev);
+ out_put_host:
+ 	spi_controller_put(host);
+ out_disable_pll_clk:
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index d2f603e1014cf5..e235df4177f965 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -986,6 +986,7 @@ static void fsl_lpspi_remove(struct platform_device *pdev)
+ 
+ 	fsl_lpspi_dma_exit(controller);
+ 
++	pm_runtime_dont_use_autosuspend(fsl_lpspi->dev);
+ 	pm_runtime_disable(fsl_lpspi->dev);
+ }
+ 
+diff --git a/drivers/spi/spi-nxp-fspi.c b/drivers/spi/spi-nxp-fspi.c
+index 6585b19a48662d..a961785724cdf2 100644
+--- a/drivers/spi/spi-nxp-fspi.c
++++ b/drivers/spi/spi-nxp-fspi.c
+@@ -57,13 +57,6 @@
+ #include <linux/spi/spi.h>
+ #include <linux/spi/spi-mem.h>
+ 
+-/*
+- * The driver only uses one single LUT entry, that is updated on
+- * each call of exec_op(). Index 0 is preset at boot with a basic
+- * read operation, so let's use the last entry (31).
+- */
+-#define	SEQID_LUT			31
+-
+ /* Registers used by the driver */
+ #define FSPI_MCR0			0x00
+ #define FSPI_MCR0_AHB_TIMEOUT(x)	((x) << 24)
+@@ -263,9 +256,6 @@
+ #define FSPI_TFDR			0x180
+ 
+ #define FSPI_LUT_BASE			0x200
+-#define FSPI_LUT_OFFSET			(SEQID_LUT * 4 * 4)
+-#define FSPI_LUT_REG(idx) \
+-	(FSPI_LUT_BASE + FSPI_LUT_OFFSET + (idx) * 4)
+ 
+ /* register map end */
+ 
+@@ -341,6 +331,7 @@ struct nxp_fspi_devtype_data {
+ 	unsigned int txfifo;
+ 	unsigned int ahb_buf_size;
+ 	unsigned int quirks;
++	unsigned int lut_num;
+ 	bool little_endian;
+ };
+ 
+@@ -349,6 +340,7 @@ static struct nxp_fspi_devtype_data lx2160a_data = {
+ 	.txfifo = SZ_1K,        /* (128 * 64 bits)  */
+ 	.ahb_buf_size = SZ_2K,  /* (256 * 64 bits)  */
+ 	.quirks = 0,
++	.lut_num = 32,
+ 	.little_endian = true,  /* little-endian    */
+ };
+ 
+@@ -357,6 +349,7 @@ static struct nxp_fspi_devtype_data imx8mm_data = {
+ 	.txfifo = SZ_1K,        /* (128 * 64 bits)  */
+ 	.ahb_buf_size = SZ_2K,  /* (256 * 64 bits)  */
+ 	.quirks = 0,
++	.lut_num = 32,
+ 	.little_endian = true,  /* little-endian    */
+ };
+ 
+@@ -365,6 +358,7 @@ static struct nxp_fspi_devtype_data imx8qxp_data = {
+ 	.txfifo = SZ_1K,        /* (128 * 64 bits)  */
+ 	.ahb_buf_size = SZ_2K,  /* (256 * 64 bits)  */
+ 	.quirks = 0,
++	.lut_num = 32,
+ 	.little_endian = true,  /* little-endian    */
+ };
+ 
+@@ -373,6 +367,16 @@ static struct nxp_fspi_devtype_data imx8dxl_data = {
+ 	.txfifo = SZ_1K,        /* (128 * 64 bits)  */
+ 	.ahb_buf_size = SZ_2K,  /* (256 * 64 bits)  */
+ 	.quirks = FSPI_QUIRK_USE_IP_ONLY,
++	.lut_num = 32,
++	.little_endian = true,  /* little-endian    */
++};
++
++static struct nxp_fspi_devtype_data imx8ulp_data = {
++	.rxfifo = SZ_512,       /* (64  * 64 bits)  */
++	.txfifo = SZ_1K,        /* (128 * 64 bits)  */
++	.ahb_buf_size = SZ_2K,  /* (256 * 64 bits)  */
++	.quirks = 0,
++	.lut_num = 16,
+ 	.little_endian = true,  /* little-endian    */
+ };
+ 
+@@ -544,6 +548,8 @@ static void nxp_fspi_prepare_lut(struct nxp_fspi *f,
+ 	void __iomem *base = f->iobase;
+ 	u32 lutval[4] = {};
+ 	int lutidx = 1, i;
++	u32 lut_offset = (f->devtype_data->lut_num - 1) * 4 * 4;
++	u32 target_lut_reg;
+ 
+ 	/* cmd */
+ 	lutval[0] |= LUT_DEF(0, LUT_CMD, LUT_PAD(op->cmd.buswidth),
+@@ -588,8 +594,10 @@ static void nxp_fspi_prepare_lut(struct nxp_fspi *f,
+ 	fspi_writel(f, FSPI_LCKER_UNLOCK, f->iobase + FSPI_LCKCR);
+ 
+ 	/* fill LUT */
+-	for (i = 0; i < ARRAY_SIZE(lutval); i++)
+-		fspi_writel(f, lutval[i], base + FSPI_LUT_REG(i));
++	for (i = 0; i < ARRAY_SIZE(lutval); i++) {
++		target_lut_reg = FSPI_LUT_BASE + lut_offset + i * 4;
++		fspi_writel(f, lutval[i], base + target_lut_reg);
++	}
+ 
+ 	dev_dbg(f->dev, "CMD[%02x] lutval[0:%08x 1:%08x 2:%08x 3:%08x], size: 0x%08x\n",
+ 		op->cmd.opcode, lutval[0], lutval[1], lutval[2], lutval[3], op->data.nbytes);
+@@ -876,7 +884,7 @@ static int nxp_fspi_do_op(struct nxp_fspi *f, const struct spi_mem_op *op)
+ 	void __iomem *base = f->iobase;
+ 	int seqnum = 0;
+ 	int err = 0;
+-	u32 reg;
++	u32 reg, seqid_lut;
+ 
+ 	reg = fspi_readl(f, base + FSPI_IPRXFCR);
+ 	/* invalid RXFIFO first */
+@@ -892,8 +900,9 @@ static int nxp_fspi_do_op(struct nxp_fspi *f, const struct spi_mem_op *op)
+ 	 * the LUT at each exec_op() call. And also specify the DATA
+ 	 * length, since it's has not been specified in the LUT.
+ 	 */
++	seqid_lut = f->devtype_data->lut_num - 1;
+ 	fspi_writel(f, op->data.nbytes |
+-		 (SEQID_LUT << FSPI_IPCR1_SEQID_SHIFT) |
++		 (seqid_lut << FSPI_IPCR1_SEQID_SHIFT) |
+ 		 (seqnum << FSPI_IPCR1_SEQNUM_SHIFT),
+ 		 base + FSPI_IPCR1);
+ 
+@@ -1017,7 +1026,7 @@ static int nxp_fspi_default_setup(struct nxp_fspi *f)
+ {
+ 	void __iomem *base = f->iobase;
+ 	int ret, i;
+-	u32 reg;
++	u32 reg, seqid_lut;
+ 
+ 	/* disable and unprepare clock to avoid glitch pass to controller */
+ 	nxp_fspi_clk_disable_unprep(f);
+@@ -1092,11 +1101,17 @@ static int nxp_fspi_default_setup(struct nxp_fspi *f)
+ 	fspi_writel(f, reg, base + FSPI_FLSHB1CR1);
+ 	fspi_writel(f, reg, base + FSPI_FLSHB2CR1);
+ 
++	/*
++	 * The driver only uses one single LUT entry, that is updated on
++	 * each call of exec_op(). Index 0 is preset at boot with a basic
++	 * read operation, so let's use the last entry.
++	 */
++	seqid_lut = f->devtype_data->lut_num - 1;
+ 	/* AHB Read - Set lut sequence ID for all CS. */
+-	fspi_writel(f, SEQID_LUT, base + FSPI_FLSHA1CR2);
+-	fspi_writel(f, SEQID_LUT, base + FSPI_FLSHA2CR2);
+-	fspi_writel(f, SEQID_LUT, base + FSPI_FLSHB1CR2);
+-	fspi_writel(f, SEQID_LUT, base + FSPI_FLSHB2CR2);
++	fspi_writel(f, seqid_lut, base + FSPI_FLSHA1CR2);
++	fspi_writel(f, seqid_lut, base + FSPI_FLSHA2CR2);
++	fspi_writel(f, seqid_lut, base + FSPI_FLSHB1CR2);
++	fspi_writel(f, seqid_lut, base + FSPI_FLSHB2CR2);
+ 
+ 	f->selected = -1;
+ 
+@@ -1291,6 +1306,7 @@ static const struct of_device_id nxp_fspi_dt_ids[] = {
+ 	{ .compatible = "nxp,imx8mp-fspi", .data = (void *)&imx8mm_data, },
+ 	{ .compatible = "nxp,imx8qxp-fspi", .data = (void *)&imx8qxp_data, },
+ 	{ .compatible = "nxp,imx8dxl-fspi", .data = (void *)&imx8dxl_data, },
++	{ .compatible = "nxp,imx8ulp-fspi", .data = (void *)&imx8ulp_data, },
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, nxp_fspi_dt_ids);
+diff --git a/drivers/spi/spi-ppc4xx.c b/drivers/spi/spi-ppc4xx.c
+index 942c3117ab3a90..8f6309f32de0b5 100644
+--- a/drivers/spi/spi-ppc4xx.c
++++ b/drivers/spi/spi-ppc4xx.c
+@@ -27,7 +27,6 @@
+ #include <linux/wait.h>
+ #include <linux/platform_device.h>
+ #include <linux/of_address.h>
+-#include <linux/of_irq.h>
+ #include <linux/of_platform.h>
+ #include <linux/interrupt.h>
+ #include <linux/delay.h>
+@@ -412,7 +411,11 @@ static int spi_ppc4xx_of_probe(struct platform_device *op)
+ 	}
+ 
+ 	/* Request IRQ */
+-	hw->irqnum = irq_of_parse_and_map(np, 0);
++	ret = platform_get_irq(op, 0);
++	if (ret < 0)
++		goto free_host;
++	hw->irqnum = ret;
++
+ 	ret = request_irq(hw->irqnum, spi_ppc4xx_int,
+ 			  0, "spi_ppc4xx_of", (void *)hw);
+ 	if (ret) {
+diff --git a/drivers/staging/media/starfive/camss/stf-camss.c b/drivers/staging/media/starfive/camss/stf-camss.c
+index fecd3e67c7a1d5..b6d34145bc191e 100644
+--- a/drivers/staging/media/starfive/camss/stf-camss.c
++++ b/drivers/staging/media/starfive/camss/stf-camss.c
+@@ -358,8 +358,6 @@ static int stfcamss_probe(struct platform_device *pdev)
+ /*
+  * stfcamss_remove - Remove STFCAMSS platform device
+  * @pdev: Pointer to STFCAMSS platform device
+- *
+- * Always returns 0.
+  */
+ static void stfcamss_remove(struct platform_device *pdev)
+ {
+diff --git a/drivers/thermal/gov_bang_bang.c b/drivers/thermal/gov_bang_bang.c
+index daed67d19efb81..863e7a4272e66f 100644
+--- a/drivers/thermal/gov_bang_bang.c
++++ b/drivers/thermal/gov_bang_bang.c
+@@ -92,23 +92,21 @@ static void bang_bang_manage(struct thermal_zone_device *tz)
+ 
+ 	for_each_trip_desc(tz, td) {
+ 		const struct thermal_trip *trip = &td->trip;
++		bool turn_on;
+ 
+-		if (tz->temperature >= td->threshold ||
+-		    trip->temperature == THERMAL_TEMP_INVALID ||
++		if (trip->temperature == THERMAL_TEMP_INVALID ||
+ 		    trip->type == THERMAL_TRIP_CRITICAL ||
+ 		    trip->type == THERMAL_TRIP_HOT)
+ 			continue;
+ 
+ 		/*
+-		 * If the initial cooling device state is "on", but the zone
+-		 * temperature is not above the trip point, the core will not
+-		 * call bang_bang_control() until the zone temperature reaches
+-		 * the trip point temperature which may be never.  In those
+-		 * cases, set the initial state of the cooling device to 0.
++		 * Adjust the target states for uninitialized thermal instances
++		 * to the thermal zone temperature and the trip point threshold.
+ 		 */
++		turn_on = tz->temperature >= td->threshold;
+ 		list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+ 			if (!instance->initialized && instance->trip == trip)
+-				bang_bang_set_instance_target(instance, 0);
++				bang_bang_set_instance_target(instance, turn_on);
+ 		}
+ 	}
+ 
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index b8d889ef4fa5e6..d0e71698b35627 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -323,11 +323,15 @@ static void thermal_zone_broken_disable(struct thermal_zone_device *tz)
+ static void thermal_zone_device_set_polling(struct thermal_zone_device *tz,
+ 					    unsigned long delay)
+ {
+-	if (delay)
+-		mod_delayed_work(system_freezable_power_efficient_wq,
+-				 &tz->poll_queue, delay);
+-	else
++	if (!delay) {
+ 		cancel_delayed_work(&tz->poll_queue);
++		return;
++	}
++
++	if (delay > HZ)
++		delay = round_jiffies_relative(delay);
++
++	mod_delayed_work(system_freezable_power_efficient_wq, &tz->poll_queue, delay);
+ }
+ 
+ static void thermal_zone_recheck(struct thermal_zone_device *tz, int error)
+@@ -988,20 +992,6 @@ void print_bind_err_msg(struct thermal_zone_device *tz,
+ 		tz->type, cdev->type, ret);
+ }
+ 
+-static void bind_cdev(struct thermal_cooling_device *cdev)
+-{
+-	int ret;
+-	struct thermal_zone_device *pos = NULL;
+-
+-	list_for_each_entry(pos, &thermal_tz_list, node) {
+-		if (pos->ops.bind) {
+-			ret = pos->ops.bind(pos, cdev);
+-			if (ret)
+-				print_bind_err_msg(pos, cdev, ret);
+-		}
+-	}
+-}
+-
+ /**
+  * __thermal_cooling_device_register() - register a new thermal cooling device
+  * @np:		a pointer to a device tree node.
+@@ -1097,7 +1087,13 @@ __thermal_cooling_device_register(struct device_node *np,
+ 	list_add(&cdev->node, &thermal_cdev_list);
+ 
+ 	/* Update binding information for 'this' new cdev */
+-	bind_cdev(cdev);
++	list_for_each_entry(pos, &thermal_tz_list, node) {
++		if (pos->ops.bind) {
++			ret = pos->ops.bind(pos, cdev);
++			if (ret)
++				print_bind_err_msg(pos, cdev, ret);
++		}
++	}
+ 
+ 	list_for_each_entry(pos, &thermal_tz_list, node)
+ 		if (atomic_cmpxchg(&pos->need_update, 1, 0))
+@@ -1335,32 +1331,6 @@ void thermal_cooling_device_unregister(struct thermal_cooling_device *cdev)
+ }
+ EXPORT_SYMBOL_GPL(thermal_cooling_device_unregister);
+ 
+-static void bind_tz(struct thermal_zone_device *tz)
+-{
+-	int ret;
+-	struct thermal_cooling_device *pos = NULL;
+-
+-	if (!tz->ops.bind)
+-		return;
+-
+-	mutex_lock(&thermal_list_lock);
+-
+-	list_for_each_entry(pos, &thermal_cdev_list, node) {
+-		ret = tz->ops.bind(tz, pos);
+-		if (ret)
+-			print_bind_err_msg(tz, pos, ret);
+-	}
+-
+-	mutex_unlock(&thermal_list_lock);
+-}
+-
+-static void thermal_set_delay_jiffies(unsigned long *delay_jiffies, int delay_ms)
+-{
+-	*delay_jiffies = msecs_to_jiffies(delay_ms);
+-	if (delay_ms > 1000)
+-		*delay_jiffies = round_jiffies(*delay_jiffies);
+-}
+-
+ int thermal_zone_get_crit_temp(struct thermal_zone_device *tz, int *temp)
+ {
+ 	const struct thermal_trip_desc *td;
+@@ -1497,8 +1467,8 @@ thermal_zone_device_register_with_trips(const char *type,
+ 		td->threshold = INT_MAX;
+ 	}
+ 
+-	thermal_set_delay_jiffies(&tz->passive_delay_jiffies, passive_delay);
+-	thermal_set_delay_jiffies(&tz->polling_delay_jiffies, polling_delay);
++	tz->polling_delay_jiffies = msecs_to_jiffies(polling_delay);
++	tz->passive_delay_jiffies = msecs_to_jiffies(passive_delay);
+ 	tz->recheck_delay_jiffies = THERMAL_RECHECK_DELAY;
+ 
+ 	/* sys I/F */
+@@ -1542,13 +1512,23 @@ thermal_zone_device_register_with_trips(const char *type,
+ 	}
+ 
+ 	mutex_lock(&thermal_list_lock);
++
+ 	mutex_lock(&tz->lock);
+ 	list_add_tail(&tz->node, &thermal_tz_list);
+ 	mutex_unlock(&tz->lock);
+-	mutex_unlock(&thermal_list_lock);
+ 
+ 	/* Bind cooling devices for this zone */
+-	bind_tz(tz);
++	if (tz->ops.bind) {
++		struct thermal_cooling_device *cdev;
++
++		list_for_each_entry(cdev, &thermal_cdev_list, node) {
++			result = tz->ops.bind(tz, cdev);
++			if (result)
++				print_bind_err_msg(tz, cdev, result);
++		}
++	}
++
++	mutex_unlock(&thermal_list_lock);
+ 
+ 	thermal_zone_device_init(tz);
+ 	/* Update the new thermal zone and mark it as already updated. */
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index afef1dd4ddf49c..fca5f25d693a72 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -1581,7 +1581,7 @@ static int omap8250_probe(struct platform_device *pdev)
+ 	ret = devm_request_irq(&pdev->dev, up.port.irq, omap8250_irq, 0,
+ 			       dev_name(&pdev->dev), priv);
+ 	if (ret < 0)
+-		return ret;
++		goto err;
+ 
+ 	priv->wakeirq = irq_of_parse_and_map(np, 1);
+ 
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 69a632fefc41f9..f8f6e9466b400d 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -124,13 +124,14 @@ struct qcom_geni_serial_port {
+ 	dma_addr_t tx_dma_addr;
+ 	dma_addr_t rx_dma_addr;
+ 	bool setup;
+-	unsigned int baud;
++	unsigned long poll_timeout_us;
+ 	unsigned long clk_rate;
+ 	void *rx_buf;
+ 	u32 loopback;
+ 	bool brk;
+ 
+ 	unsigned int tx_remaining;
++	unsigned int tx_queued;
+ 	int wakeup_irq;
+ 	bool rx_tx_swap;
+ 	bool cts_rts_swap;
+@@ -144,6 +145,8 @@ static const struct uart_ops qcom_geni_uart_pops;
+ static struct uart_driver qcom_geni_console_driver;
+ static struct uart_driver qcom_geni_uart_driver;
+ 
++static void qcom_geni_serial_cancel_tx_cmd(struct uart_port *uport);
++
+ static inline struct qcom_geni_serial_port *to_dev_port(struct uart_port *uport)
+ {
+ 	return container_of(uport, struct qcom_geni_serial_port, uport);
+@@ -265,27 +268,18 @@ static bool qcom_geni_serial_secondary_active(struct uart_port *uport)
+ 	return readl(uport->membase + SE_GENI_STATUS) & S_GENI_CMD_ACTIVE;
+ }
+ 
+-static bool qcom_geni_serial_poll_bit(struct uart_port *uport,
+-				int offset, int field, bool set)
++static bool qcom_geni_serial_poll_bitfield(struct uart_port *uport,
++					   unsigned int offset, u32 field, u32 val)
+ {
+ 	u32 reg;
+ 	struct qcom_geni_serial_port *port;
+-	unsigned int baud;
+-	unsigned int fifo_bits;
+ 	unsigned long timeout_us = 20000;
+ 	struct qcom_geni_private_data *private_data = uport->private_data;
+ 
+ 	if (private_data->drv) {
+ 		port = to_dev_port(uport);
+-		baud = port->baud;
+-		if (!baud)
+-			baud = 115200;
+-		fifo_bits = port->tx_fifo_depth * port->tx_fifo_width;
+-		/*
+-		 * Total polling iterations based on FIFO worth of bytes to be
+-		 * sent at current baud. Add a little fluff to the wait.
+-		 */
+-		timeout_us = ((fifo_bits * USEC_PER_SEC) / baud) + 500;
++		if (port->poll_timeout_us)
++			timeout_us = port->poll_timeout_us;
+ 	}
+ 
+ 	/*
+@@ -295,7 +289,7 @@ static bool qcom_geni_serial_poll_bit(struct uart_port *uport,
+ 	timeout_us = DIV_ROUND_UP(timeout_us, 10) * 10;
+ 	while (timeout_us) {
+ 		reg = readl(uport->membase + offset);
+-		if ((bool)(reg & field) == set)
++		if ((reg & field) == val)
+ 			return true;
+ 		udelay(10);
+ 		timeout_us -= 10;
+@@ -303,6 +297,12 @@ static bool qcom_geni_serial_poll_bit(struct uart_port *uport,
+ 	return false;
+ }
+ 
++static bool qcom_geni_serial_poll_bit(struct uart_port *uport,
++				      unsigned int offset, u32 field, bool set)
++{
++	return qcom_geni_serial_poll_bitfield(uport, offset, field, set ? field : 0);
++}
++
+ static void qcom_geni_serial_setup_tx(struct uart_port *uport, u32 xmit_size)
+ {
+ 	u32 m_cmd;
+@@ -315,18 +315,16 @@ static void qcom_geni_serial_setup_tx(struct uart_port *uport, u32 xmit_size)
+ static void qcom_geni_serial_poll_tx_done(struct uart_port *uport)
+ {
+ 	int done;
+-	u32 irq_clear = M_CMD_DONE_EN;
+ 
+ 	done = qcom_geni_serial_poll_bit(uport, SE_GENI_M_IRQ_STATUS,
+ 						M_CMD_DONE_EN, true);
+ 	if (!done) {
+ 		writel(M_GENI_CMD_ABORT, uport->membase +
+ 						SE_GENI_M_CMD_CTRL_REG);
+-		irq_clear |= M_CMD_ABORT_EN;
+ 		qcom_geni_serial_poll_bit(uport, SE_GENI_M_IRQ_STATUS,
+ 							M_CMD_ABORT_EN, true);
++		writel(M_CMD_ABORT_EN, uport->membase + SE_GENI_M_IRQ_CLEAR);
+ 	}
+-	writel(irq_clear, uport->membase + SE_GENI_M_IRQ_CLEAR);
+ }
+ 
+ static void qcom_geni_serial_abort_rx(struct uart_port *uport)
+@@ -387,6 +385,7 @@ static void qcom_geni_serial_poll_put_char(struct uart_port *uport,
+ 							unsigned char c)
+ {
+ 	writel(DEF_TX_WM, uport->membase + SE_GENI_TX_WATERMARK_REG);
++	writel(M_CMD_DONE_EN, uport->membase + SE_GENI_M_IRQ_CLEAR);
+ 	qcom_geni_serial_setup_tx(uport, 1);
+ 	WARN_ON(!qcom_geni_serial_poll_bit(uport, SE_GENI_M_IRQ_STATUS,
+ 						M_TX_FIFO_WATERMARK_EN, true));
+@@ -397,6 +396,14 @@ static void qcom_geni_serial_poll_put_char(struct uart_port *uport,
+ #endif
+ 
+ #ifdef CONFIG_SERIAL_QCOM_GENI_CONSOLE
++static void qcom_geni_serial_drain_fifo(struct uart_port *uport)
++{
++	struct qcom_geni_serial_port *port = to_dev_port(uport);
++
++	qcom_geni_serial_poll_bitfield(uport, SE_GENI_M_GP_LENGTH, GP_LENGTH,
++			port->tx_queued);
++}
++
+ static void qcom_geni_serial_wr_char(struct uart_port *uport, unsigned char ch)
+ {
+ 	struct qcom_geni_private_data *private_data = uport->private_data;
+@@ -431,6 +438,7 @@ __qcom_geni_serial_console_write(struct uart_port *uport, const char *s,
+ 	}
+ 
+ 	writel(DEF_TX_WM, uport->membase + SE_GENI_TX_WATERMARK_REG);
++	writel(M_CMD_DONE_EN, uport->membase + SE_GENI_M_IRQ_CLEAR);
+ 	qcom_geni_serial_setup_tx(uport, bytes_to_send);
+ 	for (i = 0; i < count; ) {
+ 		size_t chars_to_write = 0;
+@@ -471,8 +479,6 @@ static void qcom_geni_serial_console_write(struct console *co, const char *s,
+ 	struct qcom_geni_serial_port *port;
+ 	bool locked = true;
+ 	unsigned long flags;
+-	u32 geni_status;
+-	u32 irq_en;
+ 
+ 	WARN_ON(co->index < 0 || co->index >= GENI_UART_CONS_PORTS);
+ 
+@@ -486,40 +492,20 @@ static void qcom_geni_serial_console_write(struct console *co, const char *s,
+ 	else
+ 		uart_port_lock_irqsave(uport, &flags);
+ 
+-	geni_status = readl(uport->membase + SE_GENI_STATUS);
++	if (qcom_geni_serial_main_active(uport)) {
++		/* Wait for completion or drain FIFO */
++		if (!locked || port->tx_remaining == 0)
++			qcom_geni_serial_poll_tx_done(uport);
++		else
++			qcom_geni_serial_drain_fifo(uport);
+ 
+-	if (!locked) {
+-		/*
+-		 * We can only get here if an oops is in progress then we were
+-		 * unable to get the lock. This means we can't safely access
+-		 * our state variables like tx_remaining. About the best we
+-		 * can do is wait for the FIFO to be empty before we start our
+-		 * transfer, so we'll do that.
+-		 */
+-		qcom_geni_serial_poll_bit(uport, SE_GENI_M_IRQ_STATUS,
+-					  M_TX_FIFO_NOT_EMPTY_EN, false);
+-	} else if ((geni_status & M_GENI_CMD_ACTIVE) && !port->tx_remaining) {
+-		/*
+-		 * It seems we can't interrupt existing transfers if all data
+-		 * has been sent, in which case we need to look for done first.
+-		 */
+-		qcom_geni_serial_poll_tx_done(uport);
+-
+-		if (!kfifo_is_empty(&uport->state->port.xmit_fifo)) {
+-			irq_en = readl(uport->membase + SE_GENI_M_IRQ_EN);
+-			writel(irq_en | M_TX_FIFO_WATERMARK_EN,
+-					uport->membase + SE_GENI_M_IRQ_EN);
+-		}
++		qcom_geni_serial_cancel_tx_cmd(uport);
+ 	}
+ 
+ 	__qcom_geni_serial_console_write(uport, s, count);
+ 
+-
+-	if (locked) {
+-		if (port->tx_remaining)
+-			qcom_geni_serial_setup_tx(uport, port->tx_remaining);
++	if (locked)
+ 		uart_port_unlock_irqrestore(uport, flags);
+-	}
+ }
+ 
+ static void handle_rx_console(struct uart_port *uport, u32 bytes, bool drop)
+@@ -700,6 +686,7 @@ static void qcom_geni_serial_cancel_tx_cmd(struct uart_port *uport)
+ 	writel(M_CMD_CANCEL_EN, uport->membase + SE_GENI_M_IRQ_CLEAR);
+ 
+ 	port->tx_remaining = 0;
++	port->tx_queued = 0;
+ }
+ 
+ static void qcom_geni_serial_handle_rx_fifo(struct uart_port *uport, bool drop)
+@@ -926,6 +913,7 @@ static void qcom_geni_serial_handle_tx_fifo(struct uart_port *uport,
+ 	if (!port->tx_remaining) {
+ 		qcom_geni_serial_setup_tx(uport, pending);
+ 		port->tx_remaining = pending;
++		port->tx_queued = 0;
+ 
+ 		irq_en = readl(uport->membase + SE_GENI_M_IRQ_EN);
+ 		if (!(irq_en & M_TX_FIFO_WATERMARK_EN))
+@@ -934,6 +922,7 @@ static void qcom_geni_serial_handle_tx_fifo(struct uart_port *uport,
+ 	}
+ 
+ 	qcom_geni_serial_send_chunk_fifo(uport, chunk);
++	port->tx_queued += chunk;
+ 
+ 	/*
+ 	 * The tx fifo watermark is level triggered and latched. Though we had
+@@ -1244,11 +1233,11 @@ static void qcom_geni_serial_set_termios(struct uart_port *uport,
+ 	unsigned long clk_rate;
+ 	u32 ver, sampling_rate;
+ 	unsigned int avg_bw_core;
++	unsigned long timeout;
+ 
+ 	qcom_geni_serial_stop_rx(uport);
+ 	/* baud rate */
+ 	baud = uart_get_baud_rate(uport, termios, old, 300, 4000000);
+-	port->baud = baud;
+ 
+ 	sampling_rate = UART_OVERSAMPLING;
+ 	/* Sampling rate is halved for IP versions >= 2.5 */
+@@ -1326,9 +1315,21 @@ static void qcom_geni_serial_set_termios(struct uart_port *uport,
+ 	else
+ 		tx_trans_cfg |= UART_CTS_MASK;
+ 
+-	if (baud)
++	if (baud) {
+ 		uart_update_timeout(uport, termios->c_cflag, baud);
+ 
++		/*
++		 * Make sure that qcom_geni_serial_poll_bitfield() waits for
++		 * the FIFO, two-word intermediate transfer register and shift
++		 * register to clear.
++		 *
++		 * Note that uart_fifo_timeout() also adds a 20 ms margin.
++		 */
++		timeout = jiffies_to_usecs(uart_fifo_timeout(uport));
++		timeout += 3 * timeout / port->tx_fifo_depth;
++		WRITE_ONCE(port->poll_timeout_us, timeout);
++	}
++
+ 	if (!uart_console(uport))
+ 		writel(port->loopback,
+ 				uport->membase + SE_UART_LOOPBACK_CFG);
+diff --git a/drivers/tty/serial/rp2.c b/drivers/tty/serial/rp2.c
+index 4132fcff7d4e29..8bab2aedc4991f 100644
+--- a/drivers/tty/serial/rp2.c
++++ b/drivers/tty/serial/rp2.c
+@@ -577,8 +577,8 @@ static void rp2_reset_asic(struct rp2_card *card, unsigned int asic_id)
+ 	u32 clk_cfg;
+ 
+ 	writew(1, base + RP2_GLOBAL_CMD);
+-	readw(base + RP2_GLOBAL_CMD);
+ 	msleep(100);
++	readw(base + RP2_GLOBAL_CMD);
+ 	writel(0, base + RP2_CLK_PRESCALER);
+ 
+ 	/* TDM clock configuration */
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 9967444eae10cb..1d0e7b8514ccc1 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -2696,14 +2696,13 @@ static int uart_poll_init(struct tty_driver *driver, int line, char *options)
+ 	int ret = 0;
+ 
+ 	tport = &state->port;
+-	mutex_lock(&tport->mutex);
++
++	guard(mutex)(&tport->mutex);
+ 
+ 	port = uart_port_check(state);
+ 	if (!port || port->type == PORT_UNKNOWN ||
+-	    !(port->ops->poll_get_char && port->ops->poll_put_char)) {
+-		ret = -1;
+-		goto out;
+-	}
++	    !(port->ops->poll_get_char && port->ops->poll_put_char))
++		return -1;
+ 
+ 	pm_state = state->pm_state;
+ 	uart_change_pm(state, UART_PM_STATE_ON);
+@@ -2723,10 +2722,10 @@ static int uart_poll_init(struct tty_driver *driver, int line, char *options)
+ 		ret = uart_set_options(port, NULL, baud, parity, bits, flow);
+ 		console_list_unlock();
+ 	}
+-out:
++
+ 	if (ret)
+ 		uart_change_pm(state, pm_state);
+-	mutex_unlock(&tport->mutex);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index cca190d1c577e9..92ff9706860ceb 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -93,7 +93,7 @@ static const struct __ufs_qcom_bw_table {
+ 	[MODE_HS_RB][UFS_HS_G3][UFS_LANE_2] = { 1492582,	204800 },
+ 	[MODE_HS_RB][UFS_HS_G4][UFS_LANE_2] = { 2915200,	409600 },
+ 	[MODE_HS_RB][UFS_HS_G5][UFS_LANE_2] = { 5836800,	819200 },
+-	[MODE_MAX][0][0]		    = { 7643136,	307200 },
++	[MODE_MAX][0][0]		    = { 7643136,	819200 },
+ };
+ 
+ static void ufs_qcom_get_default_testbus_cfg(struct ufs_qcom_host *host);
+diff --git a/drivers/usb/cdns3/cdnsp-ring.c b/drivers/usb/cdns3/cdnsp-ring.c
+index dbd83d321bca01..46852529499d16 100644
+--- a/drivers/usb/cdns3/cdnsp-ring.c
++++ b/drivers/usb/cdns3/cdnsp-ring.c
+@@ -718,7 +718,8 @@ int cdnsp_remove_request(struct cdnsp_device *pdev,
+ 	seg = cdnsp_trb_in_td(pdev, cur_td->start_seg, cur_td->first_trb,
+ 			      cur_td->last_trb, hw_deq);
+ 
+-	if (seg && (pep->ep_state & EP_ENABLED))
++	if (seg && (pep->ep_state & EP_ENABLED) &&
++	    !(pep->ep_state & EP_DIS_IN_RROGRESS))
+ 		cdnsp_find_new_dequeue_state(pdev, pep, preq->request.stream_id,
+ 					     cur_td, &deq_state);
+ 	else
+@@ -736,7 +737,8 @@ int cdnsp_remove_request(struct cdnsp_device *pdev,
+ 	 * During disconnecting all endpoint will be disabled so we don't
+ 	 * have to worry about updating dequeue pointer.
+ 	 */
+-	if (pdev->cdnsp_state & CDNSP_STATE_DISCONNECT_PENDING) {
++	if (pdev->cdnsp_state & CDNSP_STATE_DISCONNECT_PENDING ||
++	    pep->ep_state & EP_DIS_IN_RROGRESS) {
+ 		status = -ESHUTDOWN;
+ 		ret = cdnsp_cmd_set_deq(pdev, pep, &deq_state);
+ 	}
+diff --git a/drivers/usb/cdns3/host.c b/drivers/usb/cdns3/host.c
+index ceca4d839dfd42..7ba760ee62e331 100644
+--- a/drivers/usb/cdns3/host.c
++++ b/drivers/usb/cdns3/host.c
+@@ -62,7 +62,9 @@ static const struct xhci_plat_priv xhci_plat_cdns3_xhci = {
+ 	.resume_quirk = xhci_cdns3_resume_quirk,
+ };
+ 
+-static const struct xhci_plat_priv xhci_plat_cdnsp_xhci;
++static const struct xhci_plat_priv xhci_plat_cdnsp_xhci = {
++	.quirks = XHCI_CDNS_SCTX_QUIRK,
++};
+ 
+ static int __cdns_host_init(struct cdns *cdns)
+ {
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 0c1b69d944ca45..605fea4611029b 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -962,10 +962,12 @@ static int get_serial_info(struct tty_struct *tty, struct serial_struct *ss)
+ 	struct acm *acm = tty->driver_data;
+ 
+ 	ss->line = acm->minor;
++	mutex_lock(&acm->port.mutex);
+ 	ss->close_delay	= jiffies_to_msecs(acm->port.close_delay) / 10;
+ 	ss->closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+ 				ASYNC_CLOSING_WAIT_NONE :
+ 				jiffies_to_msecs(acm->port.closing_wait) / 10;
++	mutex_unlock(&acm->port.mutex);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/dwc2/drd.c b/drivers/usb/dwc2/drd.c
+index a8605b02115b1c..1ad8fa3f862a15 100644
+--- a/drivers/usb/dwc2/drd.c
++++ b/drivers/usb/dwc2/drd.c
+@@ -127,6 +127,15 @@ static int dwc2_drd_role_sw_set(struct usb_role_switch *sw, enum usb_role role)
+ 			role = USB_ROLE_DEVICE;
+ 	}
+ 
++	if ((IS_ENABLED(CONFIG_USB_DWC2_PERIPHERAL) ||
++	     IS_ENABLED(CONFIG_USB_DWC2_DUAL_ROLE)) &&
++	     dwc2_is_device_mode(hsotg) &&
++	     hsotg->lx_state == DWC2_L2 &&
++	     hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_NONE &&
++	     hsotg->bus_suspended &&
++	     !hsotg->params.no_clock_gating)
++		dwc2_gadget_exit_clock_gating(hsotg, 0);
++
+ 	if (role == USB_ROLE_HOST) {
+ 		already = dwc2_ovr_avalid(hsotg, true);
+ 	} else if (role == USB_ROLE_DEVICE) {
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index f37b0d8386c1a9..ff7bee78bcc492 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -1304,7 +1304,8 @@ static int dummy_urb_enqueue(
+ 
+ 	/* kick the scheduler, it'll do the rest */
+ 	if (!hrtimer_active(&dum_hcd->timer))
+-		hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), HRTIMER_MODE_REL);
++		hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS),
++				HRTIMER_MODE_REL_SOFT);
+ 
+  done:
+ 	spin_unlock_irqrestore(&dum_hcd->dum->lock, flags);
+@@ -1325,7 +1326,7 @@ static int dummy_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+ 	rc = usb_hcd_check_unlink_urb(hcd, urb, status);
+ 	if (!rc && dum_hcd->rh_state != DUMMY_RH_RUNNING &&
+ 			!list_empty(&dum_hcd->urbp_list))
+-		hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL);
++		hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT);
+ 
+ 	spin_unlock_irqrestore(&dum_hcd->dum->lock, flags);
+ 	return rc;
+@@ -1995,7 +1996,8 @@ static enum hrtimer_restart dummy_timer(struct hrtimer *t)
+ 		dum_hcd->udev = NULL;
+ 	} else if (dum_hcd->rh_state == DUMMY_RH_RUNNING) {
+ 		/* want a 1 msec delay here */
+-		hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), HRTIMER_MODE_REL);
++		hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS),
++				HRTIMER_MODE_REL_SOFT);
+ 	}
+ 
+ 	spin_unlock_irqrestore(&dum->lock, flags);
+@@ -2389,7 +2391,7 @@ static int dummy_bus_resume(struct usb_hcd *hcd)
+ 		dum_hcd->rh_state = DUMMY_RH_RUNNING;
+ 		set_link_state(dum_hcd);
+ 		if (!list_empty(&dum_hcd->urbp_list))
+-			hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL);
++			hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT);
+ 		hcd->state = HC_STATE_RUNNING;
+ 	}
+ 	spin_unlock_irq(&dum_hcd->dum->lock);
+@@ -2467,7 +2469,7 @@ static DEVICE_ATTR_RO(urbs);
+ 
+ static int dummy_start_ss(struct dummy_hcd *dum_hcd)
+ {
+-	hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++	hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
+ 	dum_hcd->timer.function = dummy_timer;
+ 	dum_hcd->rh_state = DUMMY_RH_RUNNING;
+ 	dum_hcd->stream_en_ep = 0;
+@@ -2497,7 +2499,7 @@ static int dummy_start(struct usb_hcd *hcd)
+ 		return dummy_start_ss(dum_hcd);
+ 
+ 	spin_lock_init(&dum_hcd->dum->lock);
+-	hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++	hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
+ 	dum_hcd->timer.function = dummy_timer;
+ 	dum_hcd->rh_state = DUMMY_RH_RUNNING;
+ 
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index f591ddd086627a..fa3ee53df0ecc5 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -2325,7 +2325,10 @@ xhci_add_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir,
+ 	erst_base = xhci_read_64(xhci, &ir->ir_set->erst_base);
+ 	erst_base &= ERST_BASE_RSVDP;
+ 	erst_base |= ir->erst.erst_dma_addr & ~ERST_BASE_RSVDP;
+-	xhci_write_64(xhci, erst_base, &ir->ir_set->erst_base);
++	if (xhci->quirks & XHCI_WRITE_64_HI_LO)
++		hi_lo_writeq(erst_base, &ir->ir_set->erst_base);
++	else
++		xhci_write_64(xhci, erst_base, &ir->ir_set->erst_base);
+ 
+ 	/* Set the event ring dequeue address of this interrupter */
+ 	xhci_set_hc_event_deq(xhci, ir);
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index dc1e345ab67ea8..994fd8b38bd015 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -55,6 +55,9 @@
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI		0x51ed
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_PCH_XHCI	0x54ed
+ 
++#define PCI_VENDOR_ID_PHYTIUM		0x1db7
++#define PCI_DEVICE_ID_PHYTIUM_XHCI			0xdc27
++
+ /* Thunderbolt */
+ #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI		0x1138
+ #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_XHCI	0x15b5
+@@ -78,6 +81,9 @@
+ #define PCI_DEVICE_ID_ASMEDIA_2142_XHCI			0x2142
+ #define PCI_DEVICE_ID_ASMEDIA_3242_XHCI			0x3242
+ 
++#define PCI_DEVICE_ID_CADENCE				0x17CD
++#define PCI_DEVICE_ID_CADENCE_SSP			0x0200
++
+ static const char hcd_name[] = "xhci_hcd";
+ 
+ static struct hc_driver __read_mostly xhci_pci_hc_driver;
+@@ -416,6 +422,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	if (pdev->vendor == PCI_VENDOR_ID_VIA)
+ 		xhci->quirks |= XHCI_RESET_ON_RESUME;
+ 
++	if (pdev->vendor == PCI_VENDOR_ID_PHYTIUM &&
++	    pdev->device == PCI_DEVICE_ID_PHYTIUM_XHCI)
++		xhci->quirks |= XHCI_RESET_ON_RESUME;
++
+ 	/* See https://bugzilla.kernel.org/show_bug.cgi?id=79511 */
+ 	if (pdev->vendor == PCI_VENDOR_ID_VIA &&
+ 			pdev->device == 0x3432)
+@@ -473,6 +483,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 			xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH;
+ 	}
+ 
++	if (pdev->vendor == PCI_DEVICE_ID_CADENCE &&
++	    pdev->device == PCI_DEVICE_ID_CADENCE_SSP)
++		xhci->quirks |= XHCI_CDNS_SCTX_QUIRK;
++
+ 	/* xHC spec requires PCI devices to support D3hot and D3cold */
+ 	if (xhci->hci_version >= 0x120)
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+@@ -655,8 +669,10 @@ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ static void xhci_pci_remove(struct pci_dev *dev)
+ {
+ 	struct xhci_hcd *xhci;
++	bool set_power_d3;
+ 
+ 	xhci = hcd_to_xhci(pci_get_drvdata(dev));
++	set_power_d3 = xhci->quirks & XHCI_SPURIOUS_WAKEUP;
+ 
+ 	xhci->xhc_state |= XHCI_STATE_REMOVING;
+ 
+@@ -669,11 +685,11 @@ static void xhci_pci_remove(struct pci_dev *dev)
+ 		xhci->shared_hcd = NULL;
+ 	}
+ 
++	usb_hcd_pci_remove(dev);
++
+ 	/* Workaround for spurious wakeups at shutdown with HSW */
+-	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
++	if (set_power_d3)
+ 		pci_set_power_state(dev, PCI_D3hot);
+-
+-	usb_hcd_pci_remove(dev);
+ }
+ 
+ /*
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index fd0cde3d1569c7..0fe6bef6c39800 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1426,6 +1426,20 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ 			struct xhci_stream_ctx *ctx =
+ 				&ep->stream_info->stream_ctx_array[stream_id];
+ 			deq = le64_to_cpu(ctx->stream_ring) & SCTX_DEQ_MASK;
++
++			/*
++			 * Cadence xHCI controllers store some endpoint state
++			 * information within Rsvd0 fields of Stream Endpoint
++			 * context. This field is not cleared during Set TR
++			 * Dequeue Pointer command which causes XDMA to skip
++			 * over transfer ring and leads to data loss on stream
++			 * pipe.
++			 * To fix this issue, the driver must clear the Rsvd0 field.
++			 */
++			if (xhci->quirks & XHCI_CDNS_SCTX_QUIRK) {
++				ctx->reserved[0] = 0;
++				ctx->reserved[1] = 0;
++			}
+ 		} else {
+ 			deq = le64_to_cpu(ep_ctx->deq) & ~EP_CTX_CYCLE_MASK;
+ 		}
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 78d014c4d884a2..ac8da8a7df86b7 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -17,6 +17,7 @@
+ #include <linux/kernel.h>
+ #include <linux/usb/hcd.h>
+ #include <linux/io-64-nonatomic-lo-hi.h>
++#include <linux/io-64-nonatomic-hi-lo.h>
+ 
+ /* Code sharing between pci-quirks and xhci hcd */
+ #include	"xhci-ext-caps.h"
+@@ -1628,6 +1629,8 @@ struct xhci_hcd {
+ #define XHCI_RESET_TO_DEFAULT	BIT_ULL(44)
+ #define XHCI_ZHAOXIN_TRB_FETCH	BIT_ULL(45)
+ #define XHCI_ZHAOXIN_HOST	BIT_ULL(46)
++#define XHCI_WRITE_64_HI_LO	BIT_ULL(47)
++#define XHCI_CDNS_SCTX_QUIRK	BIT_ULL(48)
+ 
+ 	unsigned int		num_active_eps;
+ 	unsigned int		limit_active_eps;
+diff --git a/drivers/usb/misc/appledisplay.c b/drivers/usb/misc/appledisplay.c
+index c8098e9b432e13..62b5a30edc4267 100644
+--- a/drivers/usb/misc/appledisplay.c
++++ b/drivers/usb/misc/appledisplay.c
+@@ -107,7 +107,12 @@ static void appledisplay_complete(struct urb *urb)
+ 	case ACD_BTN_BRIGHT_UP:
+ 	case ACD_BTN_BRIGHT_DOWN:
+ 		pdata->button_pressed = 1;
+-		schedule_delayed_work(&pdata->work, 0);
++		/*
++		 * There is a window during which no device
++		 * is registered.
++		 */
++		if (pdata->bd)
++			schedule_delayed_work(&pdata->work, 0);
+ 		break;
+ 	case ACD_BTN_NONE:
+ 	default:
+@@ -202,6 +207,7 @@ static int appledisplay_probe(struct usb_interface *iface,
+ 	const struct usb_device_id *id)
+ {
+ 	struct backlight_properties props;
++	struct backlight_device *backlight;
+ 	struct appledisplay *pdata;
+ 	struct usb_device *udev = interface_to_usbdev(iface);
+ 	struct usb_endpoint_descriptor *endpoint;
+@@ -272,13 +278,14 @@ static int appledisplay_probe(struct usb_interface *iface,
+ 	memset(&props, 0, sizeof(struct backlight_properties));
+ 	props.type = BACKLIGHT_RAW;
+ 	props.max_brightness = 0xff;
+-	pdata->bd = backlight_device_register(bl_name, NULL, pdata,
++	backlight = backlight_device_register(bl_name, NULL, pdata,
+ 					      &appledisplay_bl_data, &props);
+-	if (IS_ERR(pdata->bd)) {
++	if (IS_ERR(backlight)) {
+ 		dev_err(&iface->dev, "Backlight registration failed\n");
+-		retval = PTR_ERR(pdata->bd);
++		retval = PTR_ERR(backlight);
+ 		goto error;
+ 	}
++	pdata->bd = backlight;
+ 
+ 	/* Try to get brightness */
+ 	brightness = appledisplay_bl_get_brightness(pdata->bd);
+diff --git a/drivers/usb/misc/cypress_cy7c63.c b/drivers/usb/misc/cypress_cy7c63.c
+index cecd7693b7413c..75f5a740cba397 100644
+--- a/drivers/usb/misc/cypress_cy7c63.c
++++ b/drivers/usb/misc/cypress_cy7c63.c
+@@ -88,6 +88,9 @@ static int vendor_command(struct cypress *dev, unsigned char request,
+ 				 USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+ 				 address, data, iobuf, CYPRESS_MAX_REQSIZE,
+ 				 USB_CTRL_GET_TIMEOUT);
++	/* we must not process garbage */
++	if (retval < 2)
++		goto err_buf;
+ 
+ 	/* store returned data (more READs to be added) */
+ 	switch (request) {
+@@ -107,6 +110,7 @@ static int vendor_command(struct cypress *dev, unsigned char request,
+ 			break;
+ 	}
+ 
++err_buf:
+ 	kfree(iobuf);
+ error:
+ 	return retval;
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 9a0649d2369354..6adaaf66c14d7b 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -404,7 +404,6 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count,
+ 	struct usb_yurex *dev;
+ 	int len = 0;
+ 	char in_buffer[MAX_S64_STRLEN];
+-	unsigned long flags;
+ 
+ 	dev = file->private_data;
+ 
+@@ -419,9 +418,9 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count,
+ 		return -EIO;
+ 	}
+ 
+-	spin_lock_irqsave(&dev->lock, flags);
++	spin_lock_irq(&dev->lock);
+ 	scnprintf(in_buffer, MAX_S64_STRLEN, "%lld\n", dev->bbu);
+-	spin_unlock_irqrestore(&dev->lock, flags);
++	spin_unlock_irq(&dev->lock);
+ 	mutex_unlock(&dev->io_mutex);
+ 
+ 	return simple_read_from_buffer(buffer, count, ppos, in_buffer, len);
+@@ -511,8 +510,11 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ 			__func__, retval);
+ 		goto error;
+ 	}
+-	if (set && timeout)
++	if (set && timeout) {
++		spin_lock_irq(&dev->lock);
+ 		dev->bbu = c2;
++		spin_unlock_irq(&dev->lock);
++	}
+ 	return timeout ? count : -EIO;
+ 
+ error:
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index 4758914ccf8608..bf56f3d6962533 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -581,6 +581,9 @@ static void mlx5_vdpa_show_mr_leaks(struct mlx5_vdpa_dev *mvdev)
+ 
+ void mlx5_vdpa_destroy_mr_resources(struct mlx5_vdpa_dev *mvdev)
+ {
++	if (!mvdev->res.valid)
++		return;
++
+ 	for (int i = 0; i < MLX5_VDPA_NUM_AS; i++)
+ 		mlx5_vdpa_update_mr(mvdev, NULL, i);
+ 
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index 6b9c12acf4381a..b3d2a53f9bb77f 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -209,11 +209,9 @@ static void vhost_vdpa_setup_vq_irq(struct vhost_vdpa *v, u16 qid)
+ 	if (irq < 0)
+ 		return;
+ 
+-	irq_bypass_unregister_producer(&vq->call_ctx.producer);
+ 	if (!vq->call_ctx.ctx)
+ 		return;
+ 
+-	vq->call_ctx.producer.token = vq->call_ctx.ctx;
+ 	vq->call_ctx.producer.irq = irq;
+ 	ret = irq_bypass_register_producer(&vq->call_ctx.producer);
+ 	if (unlikely(ret))
+@@ -709,6 +707,14 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
+ 			vq->last_avail_idx = vq_state.split.avail_index;
+ 		}
+ 		break;
++	case VHOST_SET_VRING_CALL:
++		if (vq->call_ctx.ctx) {
++			if (ops->get_status(vdpa) &
++			    VIRTIO_CONFIG_S_DRIVER_OK)
++				vhost_vdpa_unsetup_vq_irq(v, idx);
++			vq->call_ctx.producer.token = NULL;
++		}
++		break;
+ 	}
+ 
+ 	r = vhost_vring_ioctl(&v->vdev, cmd, argp);
+@@ -747,13 +753,16 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
+ 			cb.callback = vhost_vdpa_virtqueue_cb;
+ 			cb.private = vq;
+ 			cb.trigger = vq->call_ctx.ctx;
++			vq->call_ctx.producer.token = vq->call_ctx.ctx;
++			if (ops->get_status(vdpa) &
++			    VIRTIO_CONFIG_S_DRIVER_OK)
++				vhost_vdpa_setup_vq_irq(v, idx);
+ 		} else {
+ 			cb.callback = NULL;
+ 			cb.private = NULL;
+ 			cb.trigger = NULL;
+ 		}
+ 		ops->set_vq_cb(vdpa, idx, &cb);
+-		vhost_vdpa_setup_vq_irq(v, idx);
+ 		break;
+ 
+ 	case VHOST_SET_VRING_NUM:
+@@ -1421,6 +1430,7 @@ static int vhost_vdpa_open(struct inode *inode, struct file *filep)
+ 	for (i = 0; i < nvqs; i++) {
+ 		vqs[i] = &v->vqs[i];
+ 		vqs[i]->handle_kick = handle_vq_kick;
++		vqs[i]->call_ctx.ctx = NULL;
+ 	}
+ 	vhost_dev_init(dev, vqs, nvqs, 0, 0, 0, false,
+ 		       vhost_vdpa_process_iotlb_msg);
+diff --git a/drivers/video/fbdev/hpfb.c b/drivers/video/fbdev/hpfb.c
+index 66fac8e5393e08..a1144b15098266 100644
+--- a/drivers/video/fbdev/hpfb.c
++++ b/drivers/video/fbdev/hpfb.c
+@@ -345,6 +345,7 @@ static int hpfb_dio_probe(struct dio_dev *d, const struct dio_device_id *ent)
+ 	if (hpfb_init_one(paddr, vaddr)) {
+ 		if (d->scode >= DIOII_SCBASE)
+ 			iounmap((void *)vaddr);
++		release_mem_region(d->resource.start, resource_size(&d->resource));
+ 		return -ENOMEM;
+ 	}
+ 	return 0;
+diff --git a/drivers/video/fbdev/xen-fbfront.c b/drivers/video/fbdev/xen-fbfront.c
+index 66d4628a96ae04..c90f48ebb15e3f 100644
+--- a/drivers/video/fbdev/xen-fbfront.c
++++ b/drivers/video/fbdev/xen-fbfront.c
+@@ -407,6 +407,7 @@ static int xenfb_probe(struct xenbus_device *dev,
+ 	/* complete the abuse: */
+ 	fb_info->pseudo_palette = fb_info->par;
+ 	fb_info->par = info;
++	fb_info->device = &dev->dev;
+ 
+ 	fb_info->screen_buffer = info->fb;
+ 
+diff --git a/drivers/watchdog/imx_sc_wdt.c b/drivers/watchdog/imx_sc_wdt.c
+index e51fe1b78518f4..d73076b686d8ca 100644
+--- a/drivers/watchdog/imx_sc_wdt.c
++++ b/drivers/watchdog/imx_sc_wdt.c
+@@ -216,29 +216,6 @@ static int imx_sc_wdt_probe(struct platform_device *pdev)
+ 	return devm_watchdog_register_device(dev, wdog);
+ }
+ 
+-static int __maybe_unused imx_sc_wdt_suspend(struct device *dev)
+-{
+-	struct imx_sc_wdt_device *imx_sc_wdd = dev_get_drvdata(dev);
+-
+-	if (watchdog_active(&imx_sc_wdd->wdd))
+-		imx_sc_wdt_stop(&imx_sc_wdd->wdd);
+-
+-	return 0;
+-}
+-
+-static int __maybe_unused imx_sc_wdt_resume(struct device *dev)
+-{
+-	struct imx_sc_wdt_device *imx_sc_wdd = dev_get_drvdata(dev);
+-
+-	if (watchdog_active(&imx_sc_wdd->wdd))
+-		imx_sc_wdt_start(&imx_sc_wdd->wdd);
+-
+-	return 0;
+-}
+-
+-static SIMPLE_DEV_PM_OPS(imx_sc_wdt_pm_ops,
+-			 imx_sc_wdt_suspend, imx_sc_wdt_resume);
+-
+ static const struct of_device_id imx_sc_wdt_dt_ids[] = {
+ 	{ .compatible = "fsl,imx-sc-wdt", },
+ 	{ /* sentinel */ }
+@@ -250,7 +227,6 @@ static struct platform_driver imx_sc_wdt_driver = {
+ 	.driver		= {
+ 		.name	= "imx-sc-wdt",
+ 		.of_match_table = imx_sc_wdt_dt_ids,
+-		.pm	= &imx_sc_wdt_pm_ops,
+ 	},
+ };
+ module_platform_driver(imx_sc_wdt_driver);
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index 6579ae3f6dac22..5e83d1e0bd184c 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
+ {
+ 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
+ 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
++	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
+ 
+ 	next_bfn = pfn_to_bfn(xen_pfn);
+ 
++	/* If buffer is physically aligned, ensure DMA alignment. */
++	if (IS_ALIGNED(p, algn) &&
++	    !IS_ALIGNED((phys_addr_t)next_bfn << XEN_PAGE_SHIFT, algn))
++		return 1;
++
+ 	for (i = 1; i < nr_pages; i++)
+ 		if (pfn_to_bfn(++xen_pfn) != ++next_bfn)
+ 			return 1;
+@@ -140,7 +146,7 @@ xen_swiotlb_alloc_coherent(struct device *dev, size_t size,
+ 	void *ret;
+ 
+ 	/* Align the allocation to the Xen page size */
+-	size = 1UL << (order + XEN_PAGE_SHIFT);
++	size = ALIGN(size, XEN_PAGE_SIZE);
+ 
+ 	ret = (void *)__get_free_pages(flags, get_order(size));
+ 	if (!ret)
+@@ -172,7 +178,7 @@ xen_swiotlb_free_coherent(struct device *dev, size_t size, void *vaddr,
+ 	int order = get_order(size);
+ 
+ 	/* Convert the size to actually allocated. */
+-	size = 1UL << (order + XEN_PAGE_SHIFT);
++	size = ALIGN(size, XEN_PAGE_SIZE);
+ 
+ 	if (WARN_ON_ONCE(dma_handle + size - 1 > dev->coherent_dma_mask) ||
+ 	    WARN_ON_ONCE(range_straddles_page_boundary(phys, size)))
+diff --git a/fs/autofs/inode.c b/fs/autofs/inode.c
+index 1f5db686366316..bb404bfce036b4 100644
+--- a/fs/autofs/inode.c
++++ b/fs/autofs/inode.c
+@@ -172,8 +172,7 @@ static int autofs_parse_fd(struct fs_context *fc, struct autofs_sb_info *sbi,
+ 	ret = autofs_check_pipe(pipe);
+ 	if (ret < 0) {
+ 		errorf(fc, "Invalid/unusable pipe");
+-		if (param->type != fs_value_is_file)
+-			fput(pipe);
++		fput(pipe);
+ 		return -EBADF;
+ 	}
+ 
+diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
+index 6ed495ca7a311d..bc69bf0dbe93d2 100644
+--- a/fs/btrfs/btrfs_inode.h
++++ b/fs/btrfs/btrfs_inode.h
+@@ -126,6 +126,7 @@ struct btrfs_inode {
+ 	 * logged_trans), to access/update delalloc_bytes, new_delalloc_bytes,
+ 	 * defrag_bytes, disk_i_size, outstanding_extents, csum_bytes and to
+ 	 * update the VFS' inode number of bytes used.
++	 * Also protects setting struct file::private_data.
+ 	 */
+ 	spinlock_t lock;
+ 
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index b2e4b30b8fae91..a4296ce1de7178 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -457,6 +457,8 @@ struct btrfs_file_private {
+ 	void *filldir_buf;
+ 	u64 last_index;
+ 	struct extent_state *llseek_cached_state;
++	/* Task that allocated this structure. */
++	struct task_struct *owner_task;
+ };
+ 
+ static inline u32 BTRFS_LEAF_DATA_SIZE(const struct btrfs_fs_info *info)
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 55be8a7f0bb188..79dd2d9a62d9df 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -6426,13 +6426,13 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 			continue;
+ 
+ 		ret = btrfs_trim_free_extents(device, &group_trimmed);
++
++		trimmed += group_trimmed;
+ 		if (ret) {
+ 			dev_failed++;
+ 			dev_ret = ret;
+ 			break;
+ 		}
+-
+-		trimmed += group_trimmed;
+ 	}
+ 	mutex_unlock(&fs_devices->device_list_mutex);
+ 
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 66dfee87390672..0a14379a2d517d 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -3689,7 +3689,7 @@ static bool find_desired_extent_in_hole(struct btrfs_inode *inode, int whence,
+ static loff_t find_desired_extent(struct file *file, loff_t offset, int whence)
+ {
+ 	struct btrfs_inode *inode = BTRFS_I(file->f_mapping->host);
+-	struct btrfs_file_private *private = file->private_data;
++	struct btrfs_file_private *private;
+ 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ 	struct extent_state *cached_state = NULL;
+ 	struct extent_state **delalloc_cached_state;
+@@ -3717,7 +3717,19 @@ static loff_t find_desired_extent(struct file *file, loff_t offset, int whence)
+ 	    inode_get_bytes(&inode->vfs_inode) == i_size)
+ 		return i_size;
+ 
+-	if (!private) {
++	spin_lock(&inode->lock);
++	private = file->private_data;
++	spin_unlock(&inode->lock);
++
++	if (private && private->owner_task != current) {
++		/*
++		 * Not allocated by us, don't use it as its cached state is used
++		 * by the task that allocated it and we don't want neither to
++		 * by the task that allocated it, and we want neither to
++		 * mess with it nor to get incorrect results because it reflects an
++		 */
++		private = NULL;
++	} else if (!private) {
+ 		private = kzalloc(sizeof(*private), GFP_KERNEL);
+ 		/*
+ 		 * No worries if memory allocation failed.
+@@ -3725,7 +3737,23 @@ static loff_t find_desired_extent(struct file *file, loff_t offset, int whence)
+ 		 * lseek SEEK_HOLE/DATA calls to a file when there's delalloc,
+ 		 * so everything will still be correct.
+ 		 */
+-		file->private_data = private;
++		if (private) {
++			bool free = false;
++
++			private->owner_task = current;
++
++			spin_lock(&inode->lock);
++			if (file->private_data)
++				free = true;
++			else
++				file->private_data = private;
++			spin_unlock(&inode->lock);
++
++			if (free) {
++				kfree(private);
++				private = NULL;
++			}
++		}
+ 	}
+ 
+ 	if (private)
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index c1b0556e403683..4a30f34bc8300a 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -543,13 +543,11 @@ static noinline int btrfs_ioctl_fitrim(struct btrfs_fs_info *fs_info,
+ 
+ 	range.minlen = max(range.minlen, minlen);
+ 	ret = btrfs_trim_fs(fs_info, &range);
+-	if (ret < 0)
+-		return ret;
+ 
+ 	if (copy_to_user(arg, &range, sizeof(range)))
+ 		return -EFAULT;
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ int __pure btrfs_is_empty_uuid(u8 *uuid)
+diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
+index 54736f6238e659..831e96b607f505 100644
+--- a/fs/btrfs/subpage.c
++++ b/fs/btrfs/subpage.c
+@@ -766,8 +766,14 @@ void btrfs_folio_unlock_writer(struct btrfs_fs_info *fs_info,
+ }
+ 
+ #define GET_SUBPAGE_BITMAP(subpage, subpage_info, name, dst)		\
+-	bitmap_cut(dst, subpage->bitmaps, 0,				\
+-		   subpage_info->name##_offset, subpage_info->bitmap_nr_bits)
++{									\
++	const int bitmap_nr_bits = subpage_info->bitmap_nr_bits;	\
++									\
++	ASSERT(bitmap_nr_bits < BITS_PER_LONG);				\
++	*dst = bitmap_read(subpage->bitmaps,				\
++			   subpage_info->name##_offset,			\
++			   bitmap_nr_bits);				\
++}
+ 
+ void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info,
+ 				      struct folio *folio, u64 start, u32 len)
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index de1c063bc39dbe..42e2ec3cd1c11e 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -1499,7 +1499,7 @@ static int check_extent_item(struct extent_buffer *leaf,
+ 				     dref_objectid > BTRFS_LAST_FREE_OBJECTID)) {
+ 				extent_err(leaf, slot,
+ 					   "invalid data ref objectid value %llu",
+-					   dref_root);
++					   dref_objectid);
+ 				return -EUCLEAN;
+ 			}
+ 			if (unlikely(!IS_ALIGNED(dref_offset,
+diff --git a/fs/cachefiles/xattr.c b/fs/cachefiles/xattr.c
+index 4dd8a993c60a8b..7c6f260a3be567 100644
+--- a/fs/cachefiles/xattr.c
++++ b/fs/cachefiles/xattr.c
+@@ -64,9 +64,15 @@ int cachefiles_set_object_xattr(struct cachefiles_object *object)
+ 		memcpy(buf->data, fscache_get_aux(object->cookie), len);
+ 
+ 	ret = cachefiles_inject_write_error();
+-	if (ret == 0)
+-		ret = vfs_setxattr(&nop_mnt_idmap, dentry, cachefiles_xattr_cache,
+-				   buf, sizeof(struct cachefiles_xattr) + len, 0);
++	if (ret == 0) {
++		ret = mnt_want_write_file(file);
++		if (ret == 0) {
++			ret = vfs_setxattr(&nop_mnt_idmap, dentry,
++					   cachefiles_xattr_cache, buf,
++					   sizeof(struct cachefiles_xattr) + len, 0);
++			mnt_drop_write_file(file);
++		}
++	}
+ 	if (ret < 0) {
+ 		trace_cachefiles_vfs_error(object, file_inode(file), ret,
+ 					   cachefiles_trace_setxattr_error);
+@@ -151,8 +157,14 @@ int cachefiles_remove_object_xattr(struct cachefiles_cache *cache,
+ 	int ret;
+ 
+ 	ret = cachefiles_inject_remove_error();
+-	if (ret == 0)
+-		ret = vfs_removexattr(&nop_mnt_idmap, dentry, cachefiles_xattr_cache);
++	if (ret == 0) {
++		ret = mnt_want_write(cache->mnt);
++		if (ret == 0) {
++			ret = vfs_removexattr(&nop_mnt_idmap, dentry,
++					      cachefiles_xattr_cache);
++			mnt_drop_write(cache->mnt);
++		}
++	}
+ 	if (ret < 0) {
+ 		trace_cachefiles_vfs_error(object, d_inode(dentry), ret,
+ 					   cachefiles_trace_remxattr_error);
+@@ -208,9 +220,15 @@ bool cachefiles_set_volume_xattr(struct cachefiles_volume *volume)
+ 	memcpy(buf->data, p, volume->vcookie->coherency_len);
+ 
+ 	ret = cachefiles_inject_write_error();
+-	if (ret == 0)
+-		ret = vfs_setxattr(&nop_mnt_idmap, dentry, cachefiles_xattr_cache,
+-				   buf, len, 0);
++	if (ret == 0) {
++		ret = mnt_want_write(volume->cache->mnt);
++		if (ret == 0) {
++			ret = vfs_setxattr(&nop_mnt_idmap, dentry,
++					   cachefiles_xattr_cache,
++					   buf, len, 0);
++			mnt_drop_write(volume->cache->mnt);
++		}
++	}
+ 	if (ret < 0) {
+ 		trace_cachefiles_vfs_error(NULL, d_inode(dentry), ret,
+ 					   cachefiles_trace_setxattr_error);
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 8fd928899a59e4..66d9b3b4c5881d 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -89,12 +89,14 @@ enum {
+ 	Opt_uid,
+ 	Opt_gid,
+ 	Opt_mode,
++	Opt_source,
+ };
+ 
+ static const struct fs_parameter_spec debugfs_param_specs[] = {
+-	fsparam_u32	("gid",		Opt_gid),
++	fsparam_gid	("gid",		Opt_gid),
+ 	fsparam_u32oct	("mode",	Opt_mode),
+-	fsparam_u32	("uid",		Opt_uid),
++	fsparam_uid	("uid",		Opt_uid),
++	fsparam_string	("source",	Opt_source),
+ 	{}
+ };
+ 
+@@ -102,8 +104,6 @@ static int debugfs_parse_param(struct fs_context *fc, struct fs_parameter *param
+ {
+ 	struct debugfs_fs_info *opts = fc->s_fs_info;
+ 	struct fs_parse_result result;
+-	kuid_t uid;
+-	kgid_t gid;
+ 	int opt;
+ 
+ 	opt = fs_parse(fc, debugfs_param_specs, param, &result);
+@@ -120,20 +120,20 @@ static int debugfs_parse_param(struct fs_context *fc, struct fs_parameter *param
+ 
+ 	switch (opt) {
+ 	case Opt_uid:
+-		uid = make_kuid(current_user_ns(), result.uint_32);
+-		if (!uid_valid(uid))
+-			return invalf(fc, "Unknown uid");
+-		opts->uid = uid;
++		opts->uid = result.uid;
+ 		break;
+ 	case Opt_gid:
+-		gid = make_kgid(current_user_ns(), result.uint_32);
+-		if (!gid_valid(gid))
+-			return invalf(fc, "Unknown gid");
+-		opts->gid = gid;
++		opts->gid = result.gid;
+ 		break;
+ 	case Opt_mode:
+ 		opts->mode = result.uint_32 & S_IALLUGO;
+ 		break;
++	case Opt_source:
++		if (fc->source)
++			return invalfc(fc, "Multiple sources specified");
++		fc->source = param->string;
++		param->string = NULL;
++		break;
+ 	/*
+ 	 * We might like to report bad mount options here;
+ 	 * but traditionally debugfs has ignored all mount options
+diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
+index 5f6439a63af798..f2cab9e4f3bcd8 100644
+--- a/fs/erofs/inode.c
++++ b/fs/erofs/inode.c
+@@ -178,12 +178,14 @@ static int erofs_fill_symlink(struct inode *inode, void *kaddr,
+ 			      unsigned int m_pofs)
+ {
+ 	struct erofs_inode *vi = EROFS_I(inode);
+-	unsigned int bsz = i_blocksize(inode);
++	loff_t off;
+ 	char *lnk;
+ 
+-	/* if it cannot be handled with fast symlink scheme */
+-	if (vi->datalayout != EROFS_INODE_FLAT_INLINE ||
+-	    inode->i_size >= bsz || inode->i_size < 0) {
++	m_pofs += vi->xattr_isize;
++	/* check if it cannot be handled with fast symlink scheme */
++	if (vi->datalayout != EROFS_INODE_FLAT_INLINE || inode->i_size < 0 ||
++	    check_add_overflow(m_pofs, inode->i_size, &off) ||
++	    off > i_blocksize(inode)) {
+ 		inode->i_op = &erofs_symlink_iops;
+ 		return 0;
+ 	}
+@@ -192,16 +194,6 @@ static int erofs_fill_symlink(struct inode *inode, void *kaddr,
+ 	if (!lnk)
+ 		return -ENOMEM;
+ 
+-	m_pofs += vi->xattr_isize;
+-	/* inline symlink data shouldn't cross block boundary */
+-	if (m_pofs + inode->i_size > bsz) {
+-		kfree(lnk);
+-		erofs_err(inode->i_sb,
+-			  "inline data cross block boundary @ nid %llu",
+-			  vi->nid);
+-		DBG_BUGON(1);
+-		return -EFSCORRUPTED;
+-	}
+ 	memcpy(lnk, kaddr + m_pofs, inode->i_size);
+ 	lnk[inode->i_size] = '\0';
+ 
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index d6fe002a4a7194..2535479fb82dbe 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -19,10 +19,7 @@
+ typedef void *z_erofs_next_pcluster_t;
+ 
+ struct z_erofs_bvec {
+-	union {
+-		struct page *page;
+-		struct folio *folio;
+-	};
++	struct page *page;
+ 	int offset;
+ 	unsigned int end;
+ };
+@@ -617,32 +614,31 @@ static void z_erofs_bind_cache(struct z_erofs_decompress_frontend *fe)
+ 		fe->mode = Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE;
+ }
+ 
+-/* called by erofs_shrinker to get rid of all cached compressed bvecs */
++/* (erofs_shrinker) disconnect cached encoded data with pclusters */
+ int erofs_try_to_free_all_cached_folios(struct erofs_sb_info *sbi,
+ 					struct erofs_workgroup *grp)
+ {
+ 	struct z_erofs_pcluster *const pcl =
+ 		container_of(grp, struct z_erofs_pcluster, obj);
+ 	unsigned int pclusterpages = z_erofs_pclusterpages(pcl);
++	struct folio *folio;
+ 	int i;
+ 
+ 	DBG_BUGON(z_erofs_is_inline_pcluster(pcl));
+-	/* There is no actice user since the pcluster is now freezed */
++	/* Each cached folio contains one page unless bs > ps is supported */
+ 	for (i = 0; i < pclusterpages; ++i) {
+-		struct folio *folio = pcl->compressed_bvecs[i].folio;
++		if (pcl->compressed_bvecs[i].page) {
++			folio = page_folio(pcl->compressed_bvecs[i].page);
++			/* Avoid reclaiming or migrating this folio */
++			if (!folio_trylock(folio))
++				return -EBUSY;
+ 
+-		if (!folio)
+-			continue;
+-
+-		/* Avoid reclaiming or migrating this folio */
+-		if (!folio_trylock(folio))
+-			return -EBUSY;
+-
+-		if (!erofs_folio_is_managed(sbi, folio))
+-			continue;
+-		pcl->compressed_bvecs[i].folio = NULL;
+-		folio_detach_private(folio);
+-		folio_unlock(folio);
++			if (!erofs_folio_is_managed(sbi, folio))
++				continue;
++			pcl->compressed_bvecs[i].page = NULL;
++			folio_detach_private(folio);
++			folio_unlock(folio);
++		}
+ 	}
+ 	return 0;
+ }
+@@ -650,9 +646,9 @@ int erofs_try_to_free_all_cached_folios(struct erofs_sb_info *sbi,
+ static bool z_erofs_cache_release_folio(struct folio *folio, gfp_t gfp)
+ {
+ 	struct z_erofs_pcluster *pcl = folio_get_private(folio);
+-	unsigned int pclusterpages = z_erofs_pclusterpages(pcl);
++	struct z_erofs_bvec *bvec = pcl->compressed_bvecs;
++	struct z_erofs_bvec *end = bvec + z_erofs_pclusterpages(pcl);
+ 	bool ret;
+-	int i;
+ 
+ 	if (!folio_test_private(folio))
+ 		return true;
+@@ -661,9 +657,9 @@ static bool z_erofs_cache_release_folio(struct folio *folio, gfp_t gfp)
+ 	spin_lock(&pcl->obj.lockref.lock);
+ 	if (pcl->obj.lockref.count <= 0) {
+ 		DBG_BUGON(z_erofs_is_inline_pcluster(pcl));
+-		for (i = 0; i < pclusterpages; ++i) {
+-			if (pcl->compressed_bvecs[i].folio == folio) {
+-				pcl->compressed_bvecs[i].folio = NULL;
++		for (; bvec < end; ++bvec) {
++			if (bvec->page && page_folio(bvec->page) == folio) {
++				bvec->page = NULL;
+ 				folio_detach_private(folio);
+ 				ret = true;
+ 				break;
+@@ -1066,7 +1062,7 @@ static bool z_erofs_is_sync_decompress(struct erofs_sb_info *sbi,
+ 
+ static bool z_erofs_page_is_invalidated(struct page *page)
+ {
+-	return !page->mapping && !z_erofs_is_shortlived_page(page);
++	return !page_folio(page)->mapping && !z_erofs_is_shortlived_page(page);
+ }
+ 
+ struct z_erofs_decompress_backend {
+@@ -1419,6 +1415,7 @@ static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+ 	bool tocache = false;
+ 	struct z_erofs_bvec zbv;
+ 	struct address_space *mapping;
++	struct folio *folio;
+ 	struct page *page;
+ 	int bs = i_blocksize(f->inode);
+ 
+@@ -1429,23 +1426,24 @@ static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+ 	spin_lock(&pcl->obj.lockref.lock);
+ 	zbv = pcl->compressed_bvecs[nr];
+ 	spin_unlock(&pcl->obj.lockref.lock);
+-	if (!zbv.folio)
++	if (!zbv.page)
+ 		goto out_allocfolio;
+ 
+-	bvec->bv_page = &zbv.folio->page;
++	bvec->bv_page = zbv.page;
+ 	DBG_BUGON(z_erofs_is_shortlived_page(bvec->bv_page));
++
++	folio = page_folio(zbv.page);
+ 	/*
+ 	 * Handle preallocated cached folios.  We tried to allocate such folios
+ 	 * without triggering direct reclaim.  If allocation failed, inplace
+ 	 * file-backed folios will be used instead.
+ 	 */
+-	if (zbv.folio->private == (void *)Z_EROFS_PREALLOCATED_PAGE) {
+-		zbv.folio->private = 0;
++	if (folio->private == (void *)Z_EROFS_PREALLOCATED_PAGE) {
+ 		tocache = true;
+ 		goto out_tocache;
+ 	}
+ 
+-	mapping = READ_ONCE(zbv.folio->mapping);
++	mapping = READ_ONCE(folio->mapping);
+ 	/*
+ 	 * File-backed folios for inplace I/Os are all locked steady,
+ 	 * therefore it is impossible for `mapping` to be NULL.
+@@ -1457,56 +1455,58 @@ static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+ 		return;
+ 	}
+ 
+-	folio_lock(zbv.folio);
+-	if (zbv.folio->mapping == mc) {
++	folio_lock(folio);
++	if (likely(folio->mapping == mc)) {
+ 		/*
+ 		 * The cached folio is still in managed cache but without
+ 		 * a valid `->private` pcluster hint.  Let's reconnect them.
+ 		 */
+-		if (!folio_test_private(zbv.folio)) {
+-			folio_attach_private(zbv.folio, pcl);
++		if (!folio_test_private(folio)) {
++			folio_attach_private(folio, pcl);
+ 			/* compressed_bvecs[] already takes a ref before */
+-			folio_put(zbv.folio);
++			folio_put(folio);
+ 		}
+-
+-		/* no need to submit if it is already up-to-date */
+-		if (folio_test_uptodate(zbv.folio)) {
+-			folio_unlock(zbv.folio);
+-			bvec->bv_page = NULL;
++		if (likely(folio->private == pcl))  {
++			/* don't submit cache I/Os again if already uptodate */
++			if (folio_test_uptodate(folio)) {
++				folio_unlock(folio);
++				bvec->bv_page = NULL;
++			}
++			return;
+ 		}
+-		return;
++		/*
++		 * Already linked with another pcluster, which only appears in
++		 * crafted images by fuzzers for now.  But handle this anyway.
++		 */
++		tocache = false;	/* use temporary short-lived pages */
++	} else {
++		DBG_BUGON(1); /* referenced managed folios can't be truncated */
++		tocache = true;
+ 	}
+-
+-	/*
+-	 * It has been truncated, so it's unsafe to reuse this one. Let's
+-	 * allocate a new page for compressed data.
+-	 */
+-	DBG_BUGON(zbv.folio->mapping);
+-	tocache = true;
+-	folio_unlock(zbv.folio);
+-	folio_put(zbv.folio);
++	folio_unlock(folio);
++	folio_put(folio);
+ out_allocfolio:
+ 	page = erofs_allocpage(&f->pagepool, gfp | __GFP_NOFAIL);
+ 	spin_lock(&pcl->obj.lockref.lock);
+-	if (pcl->compressed_bvecs[nr].folio) {
++	if (unlikely(pcl->compressed_bvecs[nr].page != zbv.page)) {
+ 		erofs_pagepool_add(&f->pagepool, page);
+ 		spin_unlock(&pcl->obj.lockref.lock);
+ 		cond_resched();
+ 		goto repeat;
+ 	}
+-	pcl->compressed_bvecs[nr].folio = zbv.folio = page_folio(page);
++	bvec->bv_page = pcl->compressed_bvecs[nr].page = page;
++	folio = page_folio(page);
+ 	spin_unlock(&pcl->obj.lockref.lock);
+-	bvec->bv_page = page;
+ out_tocache:
+ 	if (!tocache || bs != PAGE_SIZE ||
+-	    filemap_add_folio(mc, zbv.folio, pcl->obj.index + nr, gfp)) {
++	    filemap_add_folio(mc, folio, pcl->obj.index + nr, gfp)) {
+ 		/* turn into a temporary shortlived folio (1 ref) */
+-		zbv.folio->private = (void *)Z_EROFS_SHORTLIVED_PAGE;
++		folio->private = (void *)Z_EROFS_SHORTLIVED_PAGE;
+ 		return;
+ 	}
+-	folio_attach_private(zbv.folio, pcl);
++	folio_attach_private(folio, pcl);
+ 	/* drop a refcount added by allocpage (then 2 refs in total here) */
+-	folio_put(zbv.folio);
++	folio_put(folio);
+ }
+ 
+ static struct z_erofs_decompressqueue *jobqueue_init(struct super_block *sb,
+@@ -1638,13 +1638,10 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
+ 		cur = mdev.m_pa;
+ 		end = cur + pcl->pclustersize;
+ 		do {
+-			z_erofs_fill_bio_vec(&bvec, f, pcl, i++, mc);
+-			if (!bvec.bv_page)
+-				continue;
+-
++			bvec.bv_page = NULL;
+ 			if (bio && (cur != last_pa ||
+ 				    bio->bi_bdev != mdev.m_bdev)) {
+-io_retry:
++drain_io:
+ 				if (!erofs_is_fscache_mode(sb))
+ 					submit_bio(bio);
+ 				else
+@@ -1657,6 +1654,15 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
+ 				bio = NULL;
+ 			}
+ 
++			if (!bvec.bv_page) {
++				z_erofs_fill_bio_vec(&bvec, f, pcl, i++, mc);
++				if (!bvec.bv_page)
++					continue;
++				if (cur + bvec.bv_len > end)
++					bvec.bv_len = end - cur;
++				DBG_BUGON(bvec.bv_len < sb->s_blocksize);
++			}
++
+ 			if (unlikely(PageWorkingset(bvec.bv_page)) &&
+ 			    !memstall) {
+ 				psi_memstall_enter(&pflags);
+@@ -1676,13 +1682,9 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
+ 				++nr_bios;
+ 			}
+ 
+-			if (cur + bvec.bv_len > end)
+-				bvec.bv_len = end - cur;
+-			DBG_BUGON(bvec.bv_len < sb->s_blocksize);
+ 			if (!bio_add_page(bio, bvec.bv_page, bvec.bv_len,
+ 					  bvec.bv_offset))
+-				goto io_retry;
+-
++				goto drain_io;
+ 			last_pa = cur + bvec.bv_len;
+ 			bypass = false;
+ 		} while ((cur += bvec.bv_len) < end);
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index f53ca4f7fceddd..6d0e2f547ae7d1 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -420,7 +420,7 @@ static bool busy_loop_ep_timeout(unsigned long start_time,
+ 
+ static bool ep_busy_loop_on(struct eventpoll *ep)
+ {
+-	return !!ep->busy_poll_usecs || net_busy_loop_on();
++	return !!READ_ONCE(ep->busy_poll_usecs) || net_busy_loop_on();
+ }
+ 
+ static bool ep_busy_loop_end(void *p, unsigned long start_time)
+diff --git a/fs/exfat/nls.c b/fs/exfat/nls.c
+index afdf13c34ff526..1ac011088ce76e 100644
+--- a/fs/exfat/nls.c
++++ b/fs/exfat/nls.c
+@@ -779,8 +779,11 @@ int exfat_create_upcase_table(struct super_block *sb)
+ 				le32_to_cpu(ep->dentry.upcase.checksum));
+ 
+ 			brelse(bh);
+-			if (ret && ret != -EIO)
++			if (ret && ret != -EIO) {
++				/* free memory from exfat_load_upcase_table call */
++				exfat_free_upcase_table(sbi);
+ 				goto load_default;
++			}
+ 
+ 			/* load successfully */
+ 			return ret;
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index e9bbb1da2d0a24..5a3b4bc1241499 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -514,6 +514,8 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
+ 	if (min_inodes < 1)
+ 		min_inodes = 1;
+ 	min_clusters = avefreec - EXT4_CLUSTERS_PER_GROUP(sb)*flex_size / 4;
++	if (min_clusters < 0)
++		min_clusters = 0;
+ 
+ 	/*
+ 	 * Start looking in the flex group where we last allocated an
+@@ -755,10 +757,10 @@ int ext4_mark_inode_used(struct super_block *sb, int ino)
+ 	struct ext4_group_desc *gdp;
+ 	ext4_group_t group;
+ 	int bit;
+-	int err = -EFSCORRUPTED;
++	int err;
+ 
+ 	if (ino < EXT4_FIRST_INO(sb) || ino > max_ino)
+-		goto out;
++		return -EFSCORRUPTED;
+ 
+ 	group = (ino - 1) / EXT4_INODES_PER_GROUP(sb);
+ 	bit = (ino - 1) % EXT4_INODES_PER_GROUP(sb);
+@@ -860,6 +862,7 @@ int ext4_mark_inode_used(struct super_block *sb, int ino)
+ 	err = ext4_handle_dirty_metadata(NULL, NULL, group_desc_bh);
+ 	sync_dirty_buffer(group_desc_bh);
+ out:
++	brelse(inode_bitmap_bh);
+ 	return err;
+ }
+ 
+@@ -1053,12 +1056,13 @@ struct inode *__ext4_new_inode(struct mnt_idmap *idmap,
+ 		brelse(inode_bitmap_bh);
+ 		inode_bitmap_bh = ext4_read_inode_bitmap(sb, group);
+ 		/* Skip groups with suspicious inode tables */
+-		if (((!(sbi->s_mount_state & EXT4_FC_REPLAY))
+-		     && EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) ||
+-		    IS_ERR(inode_bitmap_bh)) {
++		if (IS_ERR(inode_bitmap_bh)) {
+ 			inode_bitmap_bh = NULL;
+ 			goto next_group;
+ 		}
++		if (!(sbi->s_mount_state & EXT4_FC_REPLAY) &&
++		    EXT4_MB_GRP_IBITMAP_CORRUPT(grp))
++			goto next_group;
+ 
+ repeat_in_this_group:
+ 		ret2 = find_inode_bit(sb, group, inode_bitmap_bh, &ino);
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index e7a09a99837b96..44a5f6df59ecda 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1664,24 +1664,36 @@ struct buffer_head *ext4_find_inline_entry(struct inode *dir,
+ 					struct ext4_dir_entry_2 **res_dir,
+ 					int *has_inline_data)
+ {
++	struct ext4_xattr_ibody_find is = {
++		.s = { .not_found = -ENODATA, },
++	};
++	struct ext4_xattr_info i = {
++		.name_index = EXT4_XATTR_INDEX_SYSTEM,
++		.name = EXT4_XATTR_SYSTEM_DATA,
++	};
+ 	int ret;
+-	struct ext4_iloc iloc;
+ 	void *inline_start;
+ 	int inline_size;
+ 
+-	if (ext4_get_inode_loc(dir, &iloc))
+-		return NULL;
++	ret = ext4_get_inode_loc(dir, &is.iloc);
++	if (ret)
++		return ERR_PTR(ret);
+ 
+ 	down_read(&EXT4_I(dir)->xattr_sem);
++
++	ret = ext4_xattr_ibody_find(dir, &i, &is);
++	if (ret)
++		goto out;
++
+ 	if (!ext4_has_inline_data(dir)) {
+ 		*has_inline_data = 0;
+ 		goto out;
+ 	}
+ 
+-	inline_start = (void *)ext4_raw_inode(&iloc)->i_block +
++	inline_start = (void *)ext4_raw_inode(&is.iloc)->i_block +
+ 						EXT4_INLINE_DOTDOT_SIZE;
+ 	inline_size = EXT4_MIN_INLINE_DATA_SIZE - EXT4_INLINE_DOTDOT_SIZE;
+-	ret = ext4_search_dir(iloc.bh, inline_start, inline_size,
++	ret = ext4_search_dir(is.iloc.bh, inline_start, inline_size,
+ 			      dir, fname, 0, res_dir);
+ 	if (ret == 1)
+ 		goto out_find;
+@@ -1691,20 +1703,23 @@ struct buffer_head *ext4_find_inline_entry(struct inode *dir,
+ 	if (ext4_get_inline_size(dir) == EXT4_MIN_INLINE_DATA_SIZE)
+ 		goto out;
+ 
+-	inline_start = ext4_get_inline_xattr_pos(dir, &iloc);
++	inline_start = ext4_get_inline_xattr_pos(dir, &is.iloc);
+ 	inline_size = ext4_get_inline_size(dir) - EXT4_MIN_INLINE_DATA_SIZE;
+ 
+-	ret = ext4_search_dir(iloc.bh, inline_start, inline_size,
++	ret = ext4_search_dir(is.iloc.bh, inline_start, inline_size,
+ 			      dir, fname, 0, res_dir);
+ 	if (ret == 1)
+ 		goto out_find;
+ 
+ out:
+-	brelse(iloc.bh);
+-	iloc.bh = NULL;
++	brelse(is.iloc.bh);
++	if (ret < 0)
++		is.iloc.bh = ERR_PTR(ret);
++	else
++		is.iloc.bh = NULL;
+ out_find:
+ 	up_read(&EXT4_I(dir)->xattr_sem);
+-	return iloc.bh;
++	return is.iloc.bh;
+ }
+ 
+ int ext4_delete_inline_entry(handle_t *handle,
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 9dda9cd68ab2f5..dfecd25cee4eae 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -3887,11 +3887,8 @@ static void ext4_free_data_in_buddy(struct super_block *sb,
+ 	/*
+ 	 * Clear the trimmed flag for the group so that the next
+ 	 * ext4_trim_fs can trim it.
+-	 * If the volume is mounted with -o discard, online discard
+-	 * is supported and the free blocks will be trimmed online.
+ 	 */
+-	if (!test_opt(sb, DISCARD))
+-		EXT4_MB_GRP_CLEAR_TRIMMED(db);
++	EXT4_MB_GRP_CLEAR_TRIMMED(db);
+ 
+ 	if (!db->bb_free_root.rb_node) {
+ 		/* No more items in the per group rb tree
+@@ -6515,8 +6512,9 @@ static void ext4_mb_clear_bb(handle_t *handle, struct inode *inode,
+ 					 " group:%u block:%d count:%lu failed"
+ 					 " with %d", block_group, bit, count,
+ 					 err);
+-		} else
+-			EXT4_MB_GRP_CLEAR_TRIMMED(e4b.bd_info);
++		}
++
++		EXT4_MB_GRP_CLEAR_TRIMMED(e4b.bd_info);
+ 
+ 		ext4_lock_group(sb, block_group);
+ 		mb_free_blocks(inode, &e4b, bit, count_clusters);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index c682fb927b64b8..edc692984404d5 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -5177,6 +5177,18 @@ static int ext4_block_group_meta_init(struct super_block *sb, int silent)
+ 	return 0;
+ }
+ 
++/*
++ * It's hard to get stripe aligned blocks if stripe is not aligned with
++ * cluster, just disable stripe and alert user to simplify code and avoid
++ * stripe aligned allocation which will rarely succeed.
++ */
++static bool ext4_is_stripe_incompatible(struct super_block *sb, unsigned long stripe)
++{
++	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	return (stripe > 0 && sbi->s_cluster_ratio > 1 &&
++		stripe % sbi->s_cluster_ratio != 0);
++}
++
+ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ {
+ 	struct ext4_super_block *es = NULL;
+@@ -5284,13 +5296,7 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ 		goto failed_mount3;
+ 
+ 	sbi->s_stripe = ext4_get_stripe_size(sbi);
+-	/*
+-	 * It's hard to get stripe aligned blocks if stripe is not aligned with
+-	 * cluster, just disable stripe and alert user to simpfy code and avoid
+-	 * stripe aligned allocation which will rarely successes.
+-	 */
+-	if (sbi->s_stripe > 0 && sbi->s_cluster_ratio > 1 &&
+-	    sbi->s_stripe % sbi->s_cluster_ratio != 0) {
++	if (ext4_is_stripe_incompatible(sb, sbi->s_stripe)) {
+ 		ext4_msg(sb, KERN_WARNING,
+ 			 "stripe (%lu) is not aligned with cluster size (%u), "
+ 			 "stripe is disabled",
+@@ -6453,6 +6459,15 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 
+ 	}
+ 
++	if ((ctx->spec & EXT4_SPEC_s_stripe) &&
++	    ext4_is_stripe_incompatible(sb, ctx->s_stripe)) {
++		ext4_msg(sb, KERN_WARNING,
++			 "stripe (%lu) is not aligned with cluster size (%u), "
++			 "stripe is disabled",
++			 ctx->s_stripe, sbi->s_cluster_ratio);
++		ctx->s_stripe = 0;
++	}
++
+ 	/*
+ 	 * Changing the DIOREAD_NOLOCK or DELALLOC mount options may cause
+ 	 * two calls to ext4_should_dioread_nolock() to return inconsistent
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 1ef82a5463910e..b0506745adab7a 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -945,7 +945,7 @@ static int __f2fs_get_cluster_blocks(struct inode *inode,
+ 	unsigned int cluster_size = F2FS_I(inode)->i_cluster_size;
+ 	int count, i;
+ 
+-	for (i = 1, count = 1; i < cluster_size; i++) {
++	for (i = 0, count = 0; i < cluster_size; i++) {
+ 		block_t blkaddr = data_blkaddr(dn->inode, dn->node_page,
+ 							dn->ofs_in_node + i);
+ 
+@@ -956,8 +956,8 @@ static int __f2fs_get_cluster_blocks(struct inode *inode,
+ 	return count;
+ }
+ 
+-static int __f2fs_cluster_blocks(struct inode *inode,
+-				unsigned int cluster_idx, bool compr_blks)
++static int __f2fs_cluster_blocks(struct inode *inode, unsigned int cluster_idx,
++				enum cluster_check_type type)
+ {
+ 	struct dnode_of_data dn;
+ 	unsigned int start_idx = cluster_idx <<
+@@ -978,10 +978,12 @@ static int __f2fs_cluster_blocks(struct inode *inode,
+ 	}
+ 
+ 	if (dn.data_blkaddr == COMPRESS_ADDR) {
+-		if (compr_blks)
+-			ret = __f2fs_get_cluster_blocks(inode, &dn);
+-		else
++		if (type == CLUSTER_COMPR_BLKS)
++			ret = 1 + __f2fs_get_cluster_blocks(inode, &dn);
++		else if (type == CLUSTER_IS_COMPR)
+ 			ret = 1;
++	} else if (type == CLUSTER_RAW_BLKS) {
++		ret = __f2fs_get_cluster_blocks(inode, &dn);
+ 	}
+ fail:
+ 	f2fs_put_dnode(&dn);
+@@ -991,7 +993,16 @@ static int __f2fs_cluster_blocks(struct inode *inode,
+ /* return # of compressed blocks in compressed cluster */
+ static int f2fs_compressed_blocks(struct compress_ctx *cc)
+ {
+-	return __f2fs_cluster_blocks(cc->inode, cc->cluster_idx, true);
++	return __f2fs_cluster_blocks(cc->inode, cc->cluster_idx,
++		CLUSTER_COMPR_BLKS);
++}
++
++/* return # of raw blocks in non-compressed cluster */
++static int f2fs_decompressed_blocks(struct inode *inode,
++				unsigned int cluster_idx)
++{
++	return __f2fs_cluster_blocks(inode, cluster_idx,
++		CLUSTER_RAW_BLKS);
+ }
+ 
+ /* return whether cluster is compressed one or not */
+@@ -999,7 +1010,16 @@ int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index)
+ {
+ 	return __f2fs_cluster_blocks(inode,
+ 		index >> F2FS_I(inode)->i_log_cluster_size,
+-		false);
++		CLUSTER_IS_COMPR);
++}
++
++/* return whether cluster contains non raw blocks or not */
++bool f2fs_is_sparse_cluster(struct inode *inode, pgoff_t index)
++{
++	unsigned int cluster_idx = index >> F2FS_I(inode)->i_log_cluster_size;
++
++	return f2fs_decompressed_blocks(inode, cluster_idx) !=
++		F2FS_I(inode)->i_cluster_size;
+ }
+ 
+ static bool cluster_may_compress(struct compress_ctx *cc)
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 467f67cf2b3800..825f6bcb7fc2ee 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2645,10 +2645,13 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
+ 	struct dnode_of_data dn;
+ 	struct node_info ni;
+ 	bool ipu_force = false;
++	bool atomic_commit;
+ 	int err = 0;
+ 
+ 	/* Use COW inode to make dnode_of_data for atomic write */
+-	if (f2fs_is_atomic_file(inode))
++	atomic_commit = f2fs_is_atomic_file(inode) &&
++				page_private_atomic(fio->page);
++	if (atomic_commit)
+ 		set_new_dnode(&dn, F2FS_I(inode)->cow_inode, NULL, NULL, 0);
+ 	else
+ 		set_new_dnode(&dn, inode, NULL, NULL, 0);
+@@ -2747,6 +2750,8 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
+ 	f2fs_outplace_write_data(&dn, fio);
+ 	trace_f2fs_do_write_data_page(page_folio(page), OPU);
+ 	set_inode_flag(inode, FI_APPEND_WRITE);
++	if (atomic_commit)
++		clear_page_private_atomic(page);
+ out_writepage:
+ 	f2fs_put_dnode(&dn);
+ out:
+@@ -3716,6 +3721,9 @@ static int f2fs_write_end(struct file *file,
+ 
+ 	set_page_dirty(page);
+ 
++	if (f2fs_is_atomic_file(inode))
++		set_page_private_atomic(page);
++
+ 	if (pos + copied > i_size_read(inode) &&
+ 	    !f2fs_verity_in_progress(inode)) {
+ 		f2fs_i_size_write(inode, pos + copied);
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 02c9355176d3b5..5a00e010f8c112 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -157,7 +157,8 @@ static unsigned long dir_block_index(unsigned int level,
+ 	unsigned long bidx = 0;
+ 
+ 	for (i = 0; i < level; i++)
+-		bidx += dir_buckets(i, dir_level) * bucket_blocks(i);
++		bidx += mul_u32_u32(dir_buckets(i, dir_level),
++				    bucket_blocks(i));
+ 	bidx += idx * bucket_blocks(level);
+ 	return bidx;
+ }
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index fd1fc06359eea3..62ac440d94168a 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -366,7 +366,7 @@ static unsigned int __free_extent_tree(struct f2fs_sb_info *sbi,
+ static void __drop_largest_extent(struct extent_tree *et,
+ 					pgoff_t fofs, unsigned int len)
+ {
+-	if (fofs < et->largest.fofs + et->largest.len &&
++	if (fofs < (pgoff_t)et->largest.fofs + et->largest.len &&
+ 			fofs + len > et->largest.fofs) {
+ 		et->largest.len = 0;
+ 		et->largest_updated = true;
+@@ -456,7 +456,7 @@ static bool __lookup_extent_tree(struct inode *inode, pgoff_t pgofs,
+ 
+ 	if (type == EX_READ &&
+ 			et->largest.fofs <= pgofs &&
+-			et->largest.fofs + et->largest.len > pgofs) {
++			(pgoff_t)et->largest.fofs + et->largest.len > pgofs) {
+ 		*ei = et->largest;
+ 		ret = true;
+ 		stat_inc_largest_node_hit(sbi);
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 92fda31c68cdc3..9c8acb98f4dbf6 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -285,6 +285,7 @@ enum {
+ 	APPEND_INO,		/* for append ino list */
+ 	UPDATE_INO,		/* for update ino list */
+ 	TRANS_DIR_INO,		/* for transactions dir ino list */
++	XATTR_DIR_INO,		/* for xattr updated dir ino list */
+ 	FLUSH_INO,		/* for multiple device flushing */
+ 	MAX_INO_ENTRY,		/* max. list */
+ };
+@@ -784,7 +785,6 @@ enum {
+ 	FI_NEED_IPU,		/* used for ipu per file */
+ 	FI_ATOMIC_FILE,		/* indicate atomic file */
+ 	FI_DATA_EXIST,		/* indicate data exists */
+-	FI_INLINE_DOTS,		/* indicate inline dot dentries */
+ 	FI_SKIP_WRITES,		/* should skip data page writeback */
+ 	FI_OPU_WRITE,		/* used for opu per file */
+ 	FI_DIRTY_FILE,		/* indicate regular/symlink has dirty pages */
+@@ -802,6 +802,7 @@ enum {
+ 	FI_ALIGNED_WRITE,	/* enable aligned write */
+ 	FI_COW_FILE,		/* indicate COW file */
+ 	FI_ATOMIC_COMMITTED,	/* indicate atomic commit completed except disk sync */
++	FI_ATOMIC_DIRTIED,	/* indicate atomic file is dirtied */
+ 	FI_ATOMIC_REPLACE,	/* indicate atomic replace */
+ 	FI_OPENED_FILE,		/* indicate file has been opened */
+ 	FI_MAX,			/* max flag, never be used */
+@@ -1155,6 +1156,7 @@ enum cp_reason_type {
+ 	CP_FASTBOOT_MODE,
+ 	CP_SPEC_LOG_NUM,
+ 	CP_RECOVER_DIR,
++	CP_XATTR_DIR,
+ };
+ 
+ enum iostat_type {
+@@ -1418,7 +1420,8 @@ static inline void f2fs_clear_bit(unsigned int nr, char *addr);
+  * bit 1	PAGE_PRIVATE_ONGOING_MIGRATION
+  * bit 2	PAGE_PRIVATE_INLINE_INODE
+  * bit 3	PAGE_PRIVATE_REF_RESOURCE
+- * bit 4-	f2fs private data
++ * bit 4	PAGE_PRIVATE_ATOMIC_WRITE
++ * bit 5-	f2fs private data
+  *
+  * Layout B: lowest bit should be 0
+  * page.private is a wrapped pointer.
+@@ -1428,6 +1431,7 @@ enum {
+ 	PAGE_PRIVATE_ONGOING_MIGRATION,		/* data page which is on-going migrating */
+ 	PAGE_PRIVATE_INLINE_INODE,		/* inode page contains inline data */
+ 	PAGE_PRIVATE_REF_RESOURCE,		/* dirty page has referenced resources */
++	PAGE_PRIVATE_ATOMIC_WRITE,		/* data page from atomic write path */
+ 	PAGE_PRIVATE_MAX
+ };
+ 
+@@ -2396,14 +2400,17 @@ static inline void clear_page_private_##name(struct page *page) \
+ PAGE_PRIVATE_GET_FUNC(nonpointer, NOT_POINTER);
+ PAGE_PRIVATE_GET_FUNC(inline, INLINE_INODE);
+ PAGE_PRIVATE_GET_FUNC(gcing, ONGOING_MIGRATION);
++PAGE_PRIVATE_GET_FUNC(atomic, ATOMIC_WRITE);
+ 
+ PAGE_PRIVATE_SET_FUNC(reference, REF_RESOURCE);
+ PAGE_PRIVATE_SET_FUNC(inline, INLINE_INODE);
+ PAGE_PRIVATE_SET_FUNC(gcing, ONGOING_MIGRATION);
++PAGE_PRIVATE_SET_FUNC(atomic, ATOMIC_WRITE);
+ 
+ PAGE_PRIVATE_CLEAR_FUNC(reference, REF_RESOURCE);
+ PAGE_PRIVATE_CLEAR_FUNC(inline, INLINE_INODE);
+ PAGE_PRIVATE_CLEAR_FUNC(gcing, ONGOING_MIGRATION);
++PAGE_PRIVATE_CLEAR_FUNC(atomic, ATOMIC_WRITE);
+ 
+ static inline unsigned long get_page_private_data(struct page *page)
+ {
+@@ -2435,6 +2442,7 @@ static inline void clear_page_private_all(struct page *page)
+ 	clear_page_private_reference(page);
+ 	clear_page_private_gcing(page);
+ 	clear_page_private_inline(page);
++	clear_page_private_atomic(page);
+ 
+ 	f2fs_bug_on(F2FS_P_SB(page), page_private(page));
+ }
+@@ -3038,10 +3046,8 @@ static inline void __mark_inode_dirty_flag(struct inode *inode,
+ 			return;
+ 		fallthrough;
+ 	case FI_DATA_EXIST:
+-	case FI_INLINE_DOTS:
+ 	case FI_PIN_FILE:
+ 	case FI_COMPRESS_RELEASED:
+-	case FI_ATOMIC_COMMITTED:
+ 		f2fs_mark_inode_dirty_sync(inode, true);
+ 	}
+ }
+@@ -3163,8 +3169,6 @@ static inline void get_inline_info(struct inode *inode, struct f2fs_inode *ri)
+ 		set_bit(FI_INLINE_DENTRY, fi->flags);
+ 	if (ri->i_inline & F2FS_DATA_EXIST)
+ 		set_bit(FI_DATA_EXIST, fi->flags);
+-	if (ri->i_inline & F2FS_INLINE_DOTS)
+-		set_bit(FI_INLINE_DOTS, fi->flags);
+ 	if (ri->i_inline & F2FS_EXTRA_ATTR)
+ 		set_bit(FI_EXTRA_ATTR, fi->flags);
+ 	if (ri->i_inline & F2FS_PIN_FILE)
+@@ -3185,8 +3189,6 @@ static inline void set_raw_inline(struct inode *inode, struct f2fs_inode *ri)
+ 		ri->i_inline |= F2FS_INLINE_DENTRY;
+ 	if (is_inode_flag_set(inode, FI_DATA_EXIST))
+ 		ri->i_inline |= F2FS_DATA_EXIST;
+-	if (is_inode_flag_set(inode, FI_INLINE_DOTS))
+-		ri->i_inline |= F2FS_INLINE_DOTS;
+ 	if (is_inode_flag_set(inode, FI_EXTRA_ATTR))
+ 		ri->i_inline |= F2FS_EXTRA_ATTR;
+ 	if (is_inode_flag_set(inode, FI_PIN_FILE))
+@@ -3273,11 +3275,6 @@ static inline int f2fs_exist_data(struct inode *inode)
+ 	return is_inode_flag_set(inode, FI_DATA_EXIST);
+ }
+ 
+-static inline int f2fs_has_inline_dots(struct inode *inode)
+-{
+-	return is_inode_flag_set(inode, FI_INLINE_DOTS);
+-}
+-
+ static inline int f2fs_is_mmap_file(struct inode *inode)
+ {
+ 	return is_inode_flag_set(inode, FI_MMAP_FILE);
+@@ -3501,7 +3498,7 @@ int f2fs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ int f2fs_truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end);
+ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count);
+ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
+-							bool readonly);
++						bool readonly, bool need_lock);
+ int f2fs_precache_extents(struct inode *inode);
+ int f2fs_fileattr_get(struct dentry *dentry, struct fileattr *fa);
+ int f2fs_fileattr_set(struct mnt_idmap *idmap,
+@@ -4280,6 +4277,11 @@ static inline bool f2fs_meta_inode_gc_required(struct inode *inode)
+  * compress.c
+  */
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
++enum cluster_check_type {
++	CLUSTER_IS_COMPR,   /* check only if compressed cluster */
++	CLUSTER_COMPR_BLKS, /* return # of compressed blocks in a cluster */
++	CLUSTER_RAW_BLKS    /* return # of raw blocks in a cluster */
++};
+ bool f2fs_is_compressed_page(struct page *page);
+ struct page *f2fs_compress_control_page(struct page *page);
+ int f2fs_prepare_compress_overwrite(struct inode *inode,
+@@ -4306,6 +4308,7 @@ int f2fs_write_multi_pages(struct compress_ctx *cc,
+ 						struct writeback_control *wbc,
+ 						enum iostat_type io_type);
+ int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index);
++bool f2fs_is_sparse_cluster(struct inode *inode, pgoff_t index);
+ void f2fs_update_read_extent_tree_range_compressed(struct inode *inode,
+ 				pgoff_t fofs, block_t blkaddr,
+ 				unsigned int llen, unsigned int c_len);
+@@ -4392,6 +4395,12 @@ static inline bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi,
+ static inline void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi,
+ 							nid_t ino) { }
+ #define inc_compr_inode_stat(inode)		do { } while (0)
++static inline int f2fs_is_compressed_cluster(
++				struct inode *inode,
++				pgoff_t index) { return 0; }
++static inline bool f2fs_is_sparse_cluster(
++				struct inode *inode,
++				pgoff_t index) { return true; }
+ static inline void f2fs_update_read_extent_tree_range_compressed(
+ 				struct inode *inode,
+ 				pgoff_t fofs, block_t blkaddr,
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 387ce167dda1b4..4bee980c6d1862 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -218,6 +218,9 @@ static inline enum cp_reason_type need_do_checkpoint(struct inode *inode)
+ 		f2fs_exist_written_data(sbi, F2FS_I(inode)->i_pino,
+ 							TRANS_DIR_INO))
+ 		cp_reason = CP_RECOVER_DIR;
++	else if (f2fs_exist_written_data(sbi, F2FS_I(inode)->i_pino,
++							XATTR_DIR_INO))
++		cp_reason = CP_XATTR_DIR;
+ 
+ 	return cp_reason;
+ }
+@@ -2117,10 +2120,12 @@ static int f2fs_ioc_start_atomic_write(struct file *filp, bool truncate)
+ 	struct mnt_idmap *idmap = file_mnt_idmap(filp);
+ 	struct f2fs_inode_info *fi = F2FS_I(inode);
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+-	struct inode *pinode;
+ 	loff_t isize;
+ 	int ret;
+ 
++	if (!(filp->f_mode & FMODE_WRITE))
++		return -EBADF;
++
+ 	if (!inode_owner_or_capable(idmap, inode))
+ 		return -EACCES;
+ 
+@@ -2167,15 +2172,10 @@ static int f2fs_ioc_start_atomic_write(struct file *filp, bool truncate)
+ 	/* Check if the inode already has a COW inode */
+ 	if (fi->cow_inode == NULL) {
+ 		/* Create a COW inode for atomic write */
+-		pinode = f2fs_iget(inode->i_sb, fi->i_pino);
+-		if (IS_ERR(pinode)) {
+-			f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
+-			ret = PTR_ERR(pinode);
+-			goto out;
+-		}
++		struct dentry *dentry = file_dentry(filp);
++		struct inode *dir = d_inode(dentry->d_parent);
+ 
+-		ret = f2fs_get_tmpfile(idmap, pinode, &fi->cow_inode);
+-		iput(pinode);
++		ret = f2fs_get_tmpfile(idmap, dir, &fi->cow_inode);
+ 		if (ret) {
+ 			f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
+ 			goto out;
+@@ -2188,6 +2188,10 @@ static int f2fs_ioc_start_atomic_write(struct file *filp, bool truncate)
+ 		F2FS_I(fi->cow_inode)->atomic_inode = inode;
+ 	} else {
+ 		/* Reuse the already created COW inode */
++		f2fs_bug_on(sbi, get_dirty_pages(fi->cow_inode));
++
++		invalidate_mapping_pages(fi->cow_inode->i_mapping, 0, -1);
++
+ 		ret = f2fs_do_truncate_blocks(fi->cow_inode, 0, true);
+ 		if (ret) {
+ 			f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
+@@ -2229,6 +2233,9 @@ static int f2fs_ioc_commit_atomic_write(struct file *filp)
+ 	struct mnt_idmap *idmap = file_mnt_idmap(filp);
+ 	int ret;
+ 
++	if (!(filp->f_mode & FMODE_WRITE))
++		return -EBADF;
++
+ 	if (!inode_owner_or_capable(idmap, inode))
+ 		return -EACCES;
+ 
+@@ -2261,6 +2268,9 @@ static int f2fs_ioc_abort_atomic_write(struct file *filp)
+ 	struct mnt_idmap *idmap = file_mnt_idmap(filp);
+ 	int ret;
+ 
++	if (!(filp->f_mode & FMODE_WRITE))
++		return -EBADF;
++
+ 	if (!inode_owner_or_capable(idmap, inode))
+ 		return -EACCES;
+ 
+@@ -2280,7 +2290,7 @@ static int f2fs_ioc_abort_atomic_write(struct file *filp)
+ }
+ 
+ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
+-							bool readonly)
++						bool readonly, bool need_lock)
+ {
+ 	struct super_block *sb = sbi->sb;
+ 	int ret = 0;
+@@ -2327,12 +2337,19 @@ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
+ 	if (readonly)
+ 		goto out;
+ 
++	/* grab sb->s_umount to avoid racing w/ remount() */
++	if (need_lock)
++		down_read(&sbi->sb->s_umount);
++
+ 	f2fs_stop_gc_thread(sbi);
+ 	f2fs_stop_discard_thread(sbi);
+ 
+ 	f2fs_drop_discard_cmd(sbi);
+ 	clear_opt(sbi, DISCARD);
+ 
++	if (need_lock)
++		up_read(&sbi->sb->s_umount);
++
+ 	f2fs_update_time(sbi, REQ_TIME);
+ out:
+ 
+@@ -2369,7 +2386,7 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ 		}
+ 	}
+ 
+-	ret = f2fs_do_shutdown(sbi, in, readonly);
++	ret = f2fs_do_shutdown(sbi, in, readonly, true);
+ 
+ 	if (need_drop)
+ 		mnt_drop_write_file(filp);
+@@ -2687,7 +2704,8 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
+ 				(range->start + range->len) >> PAGE_SHIFT,
+ 				DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE));
+ 
+-	if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED)) {
++	if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED) ||
++		f2fs_is_atomic_file(inode)) {
+ 		err = -EINVAL;
+ 		goto unlock_out;
+ 	}
+@@ -2711,7 +2729,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
+ 	 * block addresses are continuous.
+ 	 */
+ 	if (f2fs_lookup_read_extent_cache(inode, pg_start, &ei)) {
+-		if (ei.fofs + ei.len >= pg_end)
++		if ((pgoff_t)ei.fofs + ei.len >= pg_end)
+ 			goto out;
+ 	}
+ 
+@@ -2794,6 +2812,8 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
+ 				goto clear_out;
+ 			}
+ 
++			f2fs_wait_on_page_writeback(page, DATA, true, true);
++
+ 			set_page_dirty(page);
+ 			set_page_private_gcing(page);
+ 			f2fs_put_page(page, 1);
+@@ -2918,6 +2938,11 @@ static int f2fs_move_file_range(struct file *file_in, loff_t pos_in,
+ 		goto out_unlock;
+ 	}
+ 
++	if (f2fs_is_atomic_file(src) || f2fs_is_atomic_file(dst)) {
++		ret = -EINVAL;
++		goto out_unlock;
++	}
++
+ 	ret = -EINVAL;
+ 	if (pos_in + len > src->i_size || pos_in + len < pos_in)
+ 		goto out_unlock;
+@@ -3301,6 +3326,11 @@ static int f2fs_ioc_set_pin_file(struct file *filp, unsigned long arg)
+ 
+ 	inode_lock(inode);
+ 
++	if (f2fs_is_atomic_file(inode)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	if (!pin) {
+ 		clear_inode_flag(inode, FI_PIN_FILE);
+ 		f2fs_i_gc_failures_write(inode, 0);
+@@ -4187,6 +4217,8 @@ static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
+ 		/* It will never fail, when page has pinned above */
+ 		f2fs_bug_on(F2FS_I_SB(inode), !page);
+ 
++		f2fs_wait_on_page_writeback(page, DATA, true, true);
++
+ 		set_page_dirty(page);
+ 		set_page_private_gcing(page);
+ 		f2fs_put_page(page, 1);
+@@ -4201,9 +4233,8 @@ static int f2fs_ioc_decompress_file(struct file *filp)
+ 	struct inode *inode = file_inode(filp);
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct f2fs_inode_info *fi = F2FS_I(inode);
+-	pgoff_t page_idx = 0, last_idx;
+-	int cluster_size = fi->i_cluster_size;
+-	int count, ret;
++	pgoff_t page_idx = 0, last_idx, cluster_idx;
++	int ret;
+ 
+ 	if (!f2fs_sb_has_compression(sbi) ||
+ 			F2FS_OPTION(sbi).compress_mode != COMPR_MODE_USER)
+@@ -4236,10 +4267,15 @@ static int f2fs_ioc_decompress_file(struct file *filp)
+ 		goto out;
+ 
+ 	last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
++	last_idx >>= fi->i_log_cluster_size;
++
++	for (cluster_idx = 0; cluster_idx < last_idx; cluster_idx++) {
++		page_idx = cluster_idx << fi->i_log_cluster_size;
++
++		if (!f2fs_is_compressed_cluster(inode, page_idx))
++			continue;
+ 
+-	count = last_idx - page_idx;
+-	while (count && count >= cluster_size) {
+-		ret = redirty_blocks(inode, page_idx, cluster_size);
++		ret = redirty_blocks(inode, page_idx, fi->i_cluster_size);
+ 		if (ret < 0)
+ 			break;
+ 
+@@ -4249,9 +4285,6 @@ static int f2fs_ioc_decompress_file(struct file *filp)
+ 				break;
+ 		}
+ 
+-		count -= cluster_size;
+-		page_idx += cluster_size;
+-
+ 		cond_resched();
+ 		if (fatal_signal_pending(current)) {
+ 			ret = -EINTR;
+@@ -4278,9 +4311,9 @@ static int f2fs_ioc_compress_file(struct file *filp)
+ {
+ 	struct inode *inode = file_inode(filp);
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+-	pgoff_t page_idx = 0, last_idx;
+-	int cluster_size = F2FS_I(inode)->i_cluster_size;
+-	int count, ret;
++	struct f2fs_inode_info *fi = F2FS_I(inode);
++	pgoff_t page_idx = 0, last_idx, cluster_idx;
++	int ret;
+ 
+ 	if (!f2fs_sb_has_compression(sbi) ||
+ 			F2FS_OPTION(sbi).compress_mode != COMPR_MODE_USER)
+@@ -4312,10 +4345,15 @@ static int f2fs_ioc_compress_file(struct file *filp)
+ 	set_inode_flag(inode, FI_ENABLE_COMPRESS);
+ 
+ 	last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
++	last_idx >>= fi->i_log_cluster_size;
++
++	for (cluster_idx = 0; cluster_idx < last_idx; cluster_idx++) {
++		page_idx = cluster_idx << fi->i_log_cluster_size;
+ 
+-	count = last_idx - page_idx;
+-	while (count && count >= cluster_size) {
+-		ret = redirty_blocks(inode, page_idx, cluster_size);
++		if (f2fs_is_sparse_cluster(inode, page_idx))
++			continue;
++
++		ret = redirty_blocks(inode, page_idx, fi->i_cluster_size);
+ 		if (ret < 0)
+ 			break;
+ 
+@@ -4325,9 +4363,6 @@ static int f2fs_ioc_compress_file(struct file *filp)
+ 				break;
+ 		}
+ 
+-		count -= cluster_size;
+-		page_idx += cluster_size;
+-
+ 		cond_resched();
+ 		if (fatal_signal_pending(current)) {
+ 			ret = -EINTR;
+@@ -4587,6 +4622,10 @@ static ssize_t f2fs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ 		f2fs_trace_rw_file_path(iocb->ki_filp, iocb->ki_pos,
+ 					iov_iter_count(to), READ);
+ 
++	/* In LFS mode, if there is inflight dio, wait for its completion */
++	if (f2fs_lfs_mode(F2FS_I_SB(inode)))
++		inode_dio_wait(inode);
++
+ 	if (f2fs_should_use_dio(inode, iocb, to)) {
+ 		ret = f2fs_dio_read_iter(iocb, to);
+ 	} else {
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 57da02bfa823ef..0b23a0cb438fde 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -35,6 +35,11 @@ void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync)
+ 	if (f2fs_inode_dirtied(inode, sync))
+ 		return;
+ 
++	if (f2fs_is_atomic_file(inode)) {
++		set_inode_flag(inode, FI_ATOMIC_DIRTIED);
++		return;
++	}
++
+ 	mark_inode_dirty_sync(inode);
+ }
+ 
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index e54f8c08bda832..a2c4091fca6fac 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -455,62 +455,6 @@ struct dentry *f2fs_get_parent(struct dentry *child)
+ 	return d_obtain_alias(f2fs_iget(child->d_sb, ino));
+ }
+ 
+-static int __recover_dot_dentries(struct inode *dir, nid_t pino)
+-{
+-	struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
+-	struct qstr dot = QSTR_INIT(".", 1);
+-	struct f2fs_dir_entry *de;
+-	struct page *page;
+-	int err = 0;
+-
+-	if (f2fs_readonly(sbi->sb)) {
+-		f2fs_info(sbi, "skip recovering inline_dots inode (ino:%lu, pino:%u) in readonly mountpoint",
+-			  dir->i_ino, pino);
+-		return 0;
+-	}
+-
+-	if (!S_ISDIR(dir->i_mode)) {
+-		f2fs_err(sbi, "inconsistent inode status, skip recovering inline_dots inode (ino:%lu, i_mode:%u, pino:%u)",
+-			  dir->i_ino, dir->i_mode, pino);
+-		set_sbi_flag(sbi, SBI_NEED_FSCK);
+-		return -ENOTDIR;
+-	}
+-
+-	err = f2fs_dquot_initialize(dir);
+-	if (err)
+-		return err;
+-
+-	f2fs_balance_fs(sbi, true);
+-
+-	f2fs_lock_op(sbi);
+-
+-	de = f2fs_find_entry(dir, &dot, &page);
+-	if (de) {
+-		f2fs_put_page(page, 0);
+-	} else if (IS_ERR(page)) {
+-		err = PTR_ERR(page);
+-		goto out;
+-	} else {
+-		err = f2fs_do_add_link(dir, &dot, NULL, dir->i_ino, S_IFDIR);
+-		if (err)
+-			goto out;
+-	}
+-
+-	de = f2fs_find_entry(dir, &dotdot_name, &page);
+-	if (de)
+-		f2fs_put_page(page, 0);
+-	else if (IS_ERR(page))
+-		err = PTR_ERR(page);
+-	else
+-		err = f2fs_do_add_link(dir, &dotdot_name, NULL, pino, S_IFDIR);
+-out:
+-	if (!err)
+-		clear_inode_flag(dir, FI_INLINE_DOTS);
+-
+-	f2fs_unlock_op(sbi);
+-	return err;
+-}
+-
+ static struct dentry *f2fs_lookup(struct inode *dir, struct dentry *dentry,
+ 		unsigned int flags)
+ {
+@@ -520,7 +464,6 @@ static struct dentry *f2fs_lookup(struct inode *dir, struct dentry *dentry,
+ 	struct dentry *new;
+ 	nid_t ino = -1;
+ 	int err = 0;
+-	unsigned int root_ino = F2FS_ROOT_INO(F2FS_I_SB(dir));
+ 	struct f2fs_filename fname;
+ 
+ 	trace_f2fs_lookup_start(dir, dentry, flags);
+@@ -556,17 +499,6 @@ static struct dentry *f2fs_lookup(struct inode *dir, struct dentry *dentry,
+ 		goto out;
+ 	}
+ 
+-	if ((dir->i_ino == root_ino) && f2fs_has_inline_dots(dir)) {
+-		err = __recover_dot_dentries(dir, root_ino);
+-		if (err)
+-			goto out_iput;
+-	}
+-
+-	if (f2fs_has_inline_dots(inode)) {
+-		err = __recover_dot_dentries(inode, dir->i_ino);
+-		if (err)
+-			goto out_iput;
+-	}
+ 	if (IS_ENCRYPTED(dir) &&
+ 	    (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode)) &&
+ 	    !fscrypt_has_permitted_context(dir, inode)) {
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 601825785226d2..425479d7692167 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -199,6 +199,10 @@ void f2fs_abort_atomic_write(struct inode *inode, bool clean)
+ 	clear_inode_flag(inode, FI_ATOMIC_COMMITTED);
+ 	clear_inode_flag(inode, FI_ATOMIC_REPLACE);
+ 	clear_inode_flag(inode, FI_ATOMIC_FILE);
++	if (is_inode_flag_set(inode, FI_ATOMIC_DIRTIED)) {
++		clear_inode_flag(inode, FI_ATOMIC_DIRTIED);
++		f2fs_mark_inode_dirty_sync(inode, true);
++	}
+ 	stat_dec_atomic_inode(inode);
+ 
+ 	F2FS_I(inode)->atomic_write_task = NULL;
+@@ -366,6 +370,10 @@ static int __f2fs_commit_atomic_write(struct inode *inode)
+ 	} else {
+ 		sbi->committed_atomic_block += fi->atomic_write_cnt;
+ 		set_inode_flag(inode, FI_ATOMIC_COMMITTED);
++		if (is_inode_flag_set(inode, FI_ATOMIC_DIRTIED)) {
++			clear_inode_flag(inode, FI_ATOMIC_DIRTIED);
++			f2fs_mark_inode_dirty_sync(inode, true);
++		}
+ 	}
+ 
+ 	__complete_revoke_list(inode, &revoke_list, ret ? true : false);
+@@ -1282,6 +1290,13 @@ static int __submit_discard_cmd(struct f2fs_sb_info *sbi,
+ 						wait_list, issued);
+ 			return 0;
+ 		}
++
++		/*
++		 * Issue discard for conventional zones only if the device
++		 * supports discard.
++		 */
++		if (!bdev_max_discard_sectors(bdev))
++			return -EOPNOTSUPP;
+ 	}
+ #endif
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 1f1b3647a998c0..b4c8ac6c085989 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2571,7 +2571,7 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ 
+ static void f2fs_shutdown(struct super_block *sb)
+ {
+-	f2fs_do_shutdown(F2FS_SB(sb), F2FS_GOING_DOWN_NOSYNC, false);
++	f2fs_do_shutdown(F2FS_SB(sb), F2FS_GOING_DOWN_NOSYNC, false, false);
+ }
+ 
+ #ifdef CONFIG_QUOTA
+@@ -3366,9 +3366,9 @@ static inline bool sanity_check_area_boundary(struct f2fs_sb_info *sbi,
+ 	u32 segment_count = le32_to_cpu(raw_super->segment_count);
+ 	u32 log_blocks_per_seg = le32_to_cpu(raw_super->log_blocks_per_seg);
+ 	u64 main_end_blkaddr = main_blkaddr +
+-				(segment_count_main << log_blocks_per_seg);
++				((u64)segment_count_main << log_blocks_per_seg);
+ 	u64 seg_end_blkaddr = segment0_blkaddr +
+-				(segment_count << log_blocks_per_seg);
++				((u64)segment_count << log_blocks_per_seg);
+ 
+ 	if (segment0_blkaddr != cp_blkaddr) {
+ 		f2fs_info(sbi, "Mismatch start address, segment0(%u) cp_blkaddr(%u)",
+@@ -4183,12 +4183,14 @@ void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
+ 	}
+ 
+ 	f2fs_warn(sbi, "Remounting filesystem read-only");
++
+ 	/*
+-	 * Make sure updated value of ->s_mount_flags will be visible before
+-	 * ->s_flags update
++	 * We have already set CP_ERROR_FLAG flag to stop all updates
++	 * to filesystem, so it doesn't need to set SB_RDONLY flag here
++	 * because the flag should be set covered w/ sb->s_umount semaphore
++	 * via remount procedure, otherwise, it will confuse code like
++	 * freeze_super() which will lead to deadlocks and other problems.
+ 	 */
+-	smp_wmb();
+-	sb->s_flags |= SB_RDONLY;
+ }
+ 
+ static void f2fs_record_error_work(struct work_struct *work)
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index f290fe9327c498..3f387494367963 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -629,6 +629,7 @@ static int __f2fs_setxattr(struct inode *inode, int index,
+ 			const char *name, const void *value, size_t size,
+ 			struct page *ipage, int flags)
+ {
++	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct f2fs_xattr_entry *here, *last;
+ 	void *base_addr, *last_base_addr;
+ 	int found, newsize;
+@@ -772,9 +773,18 @@ static int __f2fs_setxattr(struct inode *inode, int index,
+ 	if (index == F2FS_XATTR_INDEX_ENCRYPTION &&
+ 			!strcmp(name, F2FS_XATTR_NAME_ENCRYPTION_CONTEXT))
+ 		f2fs_set_encrypted_inode(inode);
+-	if (S_ISDIR(inode->i_mode))
+-		set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_CP);
+ 
++	if (!S_ISDIR(inode->i_mode))
++		goto same;
++	/*
++	 * In restrict mode, fsync() always try to trigger checkpoint for all
++	 * metadata consistency, in other mode, it triggers checkpoint when
++	 * parent's xattr metadata was updated.
++	 */
++	if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT)
++		set_sbi_flag(sbi, SBI_NEED_CP);
++	else
++		f2fs_add_ino_entry(sbi, inode->i_ino, XATTR_DIR_INO);
+ same:
+ 	if (is_inode_flag_set(inode, FI_ACL_MODE)) {
+ 		inode->i_mode = F2FS_I(inode)->i_acl_mode;
+diff --git a/fs/fcntl.c b/fs/fcntl.c
+index 300e5d9ad913b5..c28dc6c005f174 100644
+--- a/fs/fcntl.c
++++ b/fs/fcntl.c
+@@ -87,8 +87,8 @@ static int setfl(int fd, struct file * filp, unsigned int arg)
+ 	return error;
+ }
+ 
+-static void f_modown(struct file *filp, struct pid *pid, enum pid_type type,
+-                     int force)
++void __f_setown(struct file *filp, struct pid *pid, enum pid_type type,
++		int force)
+ {
+ 	write_lock_irq(&filp->f_owner.lock);
+ 	if (force || !filp->f_owner.pid) {
+@@ -98,19 +98,13 @@ static void f_modown(struct file *filp, struct pid *pid, enum pid_type type,
+ 
+ 		if (pid) {
+ 			const struct cred *cred = current_cred();
++			security_file_set_fowner(filp);
+ 			filp->f_owner.uid = cred->uid;
+ 			filp->f_owner.euid = cred->euid;
+ 		}
+ 	}
+ 	write_unlock_irq(&filp->f_owner.lock);
+ }
+-
+-void __f_setown(struct file *filp, struct pid *pid, enum pid_type type,
+-		int force)
+-{
+-	security_file_set_fowner(filp);
+-	f_modown(filp, pid, type, force);
+-}
+ EXPORT_SYMBOL(__f_setown);
+ 
+ int f_setown(struct file *filp, int who, int force)
+@@ -146,7 +140,7 @@ EXPORT_SYMBOL(f_setown);
+ 
+ void f_delown(struct file *filp)
+ {
+-	f_modown(filp, NULL, PIDTYPE_TGID, 1);
++	__f_setown(filp, NULL, PIDTYPE_TGID, 1);
+ }
+ 
+ pid_t f_getown(struct file *filp)
+diff --git a/fs/fs_parser.c b/fs/fs_parser.c
+index a4d6ca0b8971e6..24727ec34e5aa4 100644
+--- a/fs/fs_parser.c
++++ b/fs/fs_parser.c
+@@ -308,6 +308,40 @@ int fs_param_is_fd(struct p_log *log, const struct fs_parameter_spec *p,
+ }
+ EXPORT_SYMBOL(fs_param_is_fd);
+ 
++int fs_param_is_uid(struct p_log *log, const struct fs_parameter_spec *p,
++		    struct fs_parameter *param, struct fs_parse_result *result)
++{
++	kuid_t uid;
++
++	if (fs_param_is_u32(log, p, param, result) != 0)
++		return fs_param_bad_value(log, param);
++
++	uid = make_kuid(current_user_ns(), result->uint_32);
++	if (!uid_valid(uid))
++		return inval_plog(log, "Invalid uid '%s'", param->string);
++
++	result->uid = uid;
++	return 0;
++}
++EXPORT_SYMBOL(fs_param_is_uid);
++
++int fs_param_is_gid(struct p_log *log, const struct fs_parameter_spec *p,
++		    struct fs_parameter *param, struct fs_parse_result *result)
++{
++	kgid_t gid;
++
++	if (fs_param_is_u32(log, p, param, result) != 0)
++		return fs_param_bad_value(log, param);
++
++	gid = make_kgid(current_user_ns(), result->uint_32);
++	if (!gid_valid(gid))
++		return inval_plog(log, "Invalid gid '%s'", param->string);
++
++	result->gid = gid;
++	return 0;
++}
++EXPORT_SYMBOL(fs_param_is_gid);
++
+ int fs_param_is_blockdev(struct p_log *log, const struct fs_parameter_spec *p,
+ 		  struct fs_parameter *param, struct fs_parse_result *result)
+ {
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index ed76121f73f2e0..08f7d538ca98f4 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1345,7 +1345,7 @@ static bool fuse_dio_wr_exclusive_lock(struct kiocb *iocb, struct iov_iter *from
+ 
+ 	/* shared locks are not allowed with parallel page cache IO */
+ 	if (test_bit(FUSE_I_CACHE_IO_MODE, &fi->state))
+-		return false;
++		return true;
+ 
+ 	/* Parallel dio beyond EOF is not supported, at least for now. */
+ 	if (fuse_io_past_eof(iocb, from))
+diff --git a/fs/inode.c b/fs/inode.c
+index f5add7222c98ec..3df67672986aa9 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -758,6 +758,10 @@ void evict_inodes(struct super_block *sb)
+ 			continue;
+ 
+ 		spin_lock(&inode->i_lock);
++		if (atomic_read(&inode->i_count)) {
++			spin_unlock(&inode->i_lock);
++			continue;
++		}
+ 		if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
+ 			spin_unlock(&inode->i_lock);
+ 			continue;
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 5713994328cbcb..0625d1c0d0649a 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -187,7 +187,7 @@ int dbMount(struct inode *ipbmap)
+ 	}
+ 
+ 	bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
+-	if (!bmp->db_numag) {
++	if (!bmp->db_numag || bmp->db_numag >= MAXAG) {
+ 		err = -EINVAL;
+ 		goto err_release_metapage;
+ 	}
+@@ -652,7 +652,7 @@ int dbNextAG(struct inode *ipbmap)
+ 	 * average free space.
+ 	 */
+ 	for (i = 0 ; i < bmp->db_numag; i++, agpref++) {
+-		if (agpref == bmp->db_numag)
++		if (agpref >= bmp->db_numag)
+ 			agpref = 0;
+ 
+ 		if (atomic_read(&bmp->db_active[agpref]))
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index 1407feccbc2d05..a360b24ed320c0 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -1360,7 +1360,7 @@ int diAlloc(struct inode *pip, bool dir, struct inode *ip)
+ 	/* get the ag number of this iag */
+ 	agno = BLKTOAG(JFS_IP(pip)->agstart, JFS_SBI(pip->i_sb));
+ 	dn_numag = JFS_SBI(pip->i_sb)->bmap->db_numag;
+-	if (agno < 0 || agno > dn_numag)
++	if (agno < 0 || agno > dn_numag || agno >= MAXAG)
+ 		return -EIO;
+ 
+ 	if (atomic_read(&JFS_SBI(pip->i_sb)->bmap->db_active[agno])) {
+diff --git a/fs/namespace.c b/fs/namespace.c
+index e1ced589d8357f..a6675c2a238394 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2801,8 +2801,15 @@ static void mnt_warn_timestamp_expiry(struct path *mountpoint, struct vfsmount *
+ 	if (!__mnt_is_readonly(mnt) &&
+ 	   (!(sb->s_iflags & SB_I_TS_EXPIRY_WARNED)) &&
+ 	   (ktime_get_real_seconds() + TIME_UPTIME_SEC_MAX > sb->s_time_max)) {
+-		char *buf = (char *)__get_free_page(GFP_KERNEL);
+-		char *mntpath = buf ? d_path(mountpoint, buf, PAGE_SIZE) : ERR_PTR(-ENOMEM);
++		char *buf, *mntpath;
++
++		buf = (char *)__get_free_page(GFP_KERNEL);
++		if (buf)
++			mntpath = d_path(mountpoint, buf, PAGE_SIZE);
++		else
++			mntpath = ERR_PTR(-ENOMEM);
++		if (IS_ERR(mntpath))
++			mntpath = "(unknown)";
+ 
+ 		pr_warn("%s filesystem being %s at %s supports timestamps until %ptTd (0x%llx)\n",
+ 			sb->s_type->name,
+@@ -2810,8 +2817,9 @@ static void mnt_warn_timestamp_expiry(struct path *mountpoint, struct vfsmount *
+ 			mntpath, &sb->s_time_max,
+ 			(unsigned long long)sb->s_time_max);
+ 
+-		free_page((unsigned long)buf);
+ 		sb->s_iflags |= SB_I_TS_EXPIRY_WARNED;
++		if (buf)
++			free_page((unsigned long)buf);
+ 	}
+ }
+ 
+diff --git a/fs/netfs/main.c b/fs/netfs/main.c
+index db824c372842ad..b1d97335d7bb83 100644
+--- a/fs/netfs/main.c
++++ b/fs/netfs/main.c
+@@ -138,7 +138,7 @@ static int __init netfs_init(void)
+ 
+ error_fscache:
+ error_procfile:
+-	remove_proc_entry("fs/netfs", NULL);
++	remove_proc_subtree("fs/netfs", NULL);
+ error_proc:
+ 	mempool_exit(&netfs_subrequest_pool);
+ error_subreqpool:
+@@ -155,7 +155,7 @@ fs_initcall(netfs_init);
+ static void __exit netfs_exit(void)
+ {
+ 	fscache_exit();
+-	remove_proc_entry("fs/netfs", NULL);
++	remove_proc_subtree("fs/netfs", NULL);
+ 	mempool_exit(&netfs_subrequest_pool);
+ 	kmem_cache_destroy(netfs_subrequest_slab);
+ 	mempool_exit(&netfs_request_pool);
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 5b452411e8fdf2..1153e166652bed 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1956,6 +1956,7 @@ static int nfs4_do_reclaim(struct nfs_client *clp, const struct nfs4_state_recov
+ 				set_bit(ops->owner_flag_bit, &sp->so_flags);
+ 				nfs4_put_state_owner(sp);
+ 				status = nfs4_recovery_handle_error(clp, status);
++				nfs4_free_state_owners(&freeme);
+ 				return (status != 0) ? status : -EAGAIN;
+ 			}
+ 
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index f4704f5d408675..e2e248032bfd04 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -1035,8 +1035,6 @@ nfsd_file_do_acquire(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 	if (likely(ret == 0))
+ 		goto open_file;
+ 
+-	if (ret == -EEXIST)
+-		goto retry;
+ 	trace_nfsd_file_insert_err(rqstp, inode, may_flags, ret);
+ 	status = nfserr_jukebox;
+ 	goto construction_err;
+@@ -1051,6 +1049,7 @@ nfsd_file_do_acquire(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 			status = nfserr_jukebox;
+ 			goto construction_err;
+ 		}
++		nfsd_file_put(nf);
+ 		open_retry = false;
+ 		fh_put(fhp);
+ 		goto retry;
+diff --git a/fs/nfsd/nfs4idmap.c b/fs/nfsd/nfs4idmap.c
+index 7a806ac13e317e..8cca1329f3485c 100644
+--- a/fs/nfsd/nfs4idmap.c
++++ b/fs/nfsd/nfs4idmap.c
+@@ -581,6 +581,7 @@ static __be32 idmap_id_to_name(struct xdr_stream *xdr,
+ 		.id = id,
+ 		.type = type,
+ 	};
++	__be32 status = nfs_ok;
+ 	__be32 *p;
+ 	int ret;
+ 	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+@@ -593,12 +594,16 @@ static __be32 idmap_id_to_name(struct xdr_stream *xdr,
+ 		return nfserrno(ret);
+ 	ret = strlen(item->name);
+ 	WARN_ON_ONCE(ret > IDMAP_NAMESZ);
++
+ 	p = xdr_reserve_space(xdr, ret + 4);
+-	if (!p)
+-		return nfserr_resource;
+-	p = xdr_encode_opaque(p, item->name, ret);
++	if (unlikely(!p)) {
++		status = nfserr_resource;
++		goto out_put;
++	}
++	xdr_encode_opaque(p, item->name, ret);
++out_put:
+ 	cache_put(&item->h, nn->idtoname_cache);
+-	return 0;
++	return status;
+ }
+ 
+ static bool
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index 67d8673a9391c7..69a3a84e159e62 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -809,6 +809,10 @@ __cld_pipe_inprogress_downcall(const struct cld_msg_v2 __user *cmsg,
+ 			ci = &cmsg->cm_u.cm_clntinfo;
+ 			if (get_user(namelen, &ci->cc_name.cn_len))
+ 				return -EFAULT;
++			if (!namelen) {
++				dprintk("%s: namelen should not be zero", __func__);
++				return -EINVAL;
++			}
+ 			name.data = memdup_user(&ci->cc_name.cn_id, namelen);
+ 			if (IS_ERR(name.data))
+ 				return PTR_ERR(name.data);
+@@ -831,6 +835,10 @@ __cld_pipe_inprogress_downcall(const struct cld_msg_v2 __user *cmsg,
+ 			cnm = &cmsg->cm_u.cm_name;
+ 			if (get_user(namelen, &cnm->cn_len))
+ 				return -EFAULT;
++			if (!namelen) {
++				dprintk("%s: namelen should not be zero", __func__);
++				return -EINVAL;
++			}
+ 			name.data = memdup_user(&cnm->cn_id, namelen);
+ 			if (IS_ERR(name.data))
+ 				return PTR_ERR(name.data);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index a366fb1c1b9b4f..fe06779ea527a1 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -5912,6 +5912,28 @@ static void nfsd4_open_deleg_none_ext(struct nfsd4_open *open, int status)
+ 	}
+ }
+ 
++static bool
++nfs4_delegation_stat(struct nfs4_delegation *dp, struct svc_fh *currentfh,
++		     struct kstat *stat)
++{
++	struct nfsd_file *nf = find_rw_file(dp->dl_stid.sc_file);
++	struct path path;
++	int rc;
++
++	if (!nf)
++		return false;
++
++	path.mnt = currentfh->fh_export->ex_path.mnt;
++	path.dentry = file_dentry(nf->nf_file);
++
++	rc = vfs_getattr(&path, stat,
++			 (STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE),
++			 AT_STATX_SYNC_AS_STAT);
++
++	nfsd_file_put(nf);
++	return rc == 0;
++}
++
+ /*
+  * The Linux NFS server does not offer write delegations to NFSv4.0
+  * clients in order to avoid conflicts between write delegations and
+@@ -5947,7 +5969,6 @@ nfs4_open_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ 	int cb_up;
+ 	int status = 0;
+ 	struct kstat stat;
+-	struct path path;
+ 
+ 	cb_up = nfsd4_cb_channel_good(oo->oo_owner.so_client);
+ 	open->op_recall = false;
+@@ -5983,20 +6004,16 @@ nfs4_open_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ 	memcpy(&open->op_delegate_stateid, &dp->dl_stid.sc_stateid, sizeof(dp->dl_stid.sc_stateid));
+ 
+ 	if (open->op_share_access & NFS4_SHARE_ACCESS_WRITE) {
+-		open->op_delegate_type = NFS4_OPEN_DELEGATE_WRITE;
+-		trace_nfsd_deleg_write(&dp->dl_stid.sc_stateid);
+-		path.mnt = currentfh->fh_export->ex_path.mnt;
+-		path.dentry = currentfh->fh_dentry;
+-		if (vfs_getattr(&path, &stat,
+-				(STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE),
+-				AT_STATX_SYNC_AS_STAT)) {
++		if (!nfs4_delegation_stat(dp, currentfh, &stat)) {
+ 			nfs4_put_stid(&dp->dl_stid);
+ 			destroy_delegation(dp);
+ 			goto out_no_deleg;
+ 		}
++		open->op_delegate_type = NFS4_OPEN_DELEGATE_WRITE;
+ 		dp->dl_cb_fattr.ncf_cur_fsize = stat.size;
+ 		dp->dl_cb_fattr.ncf_initial_cinfo =
+ 			nfsd4_change_attribute(&stat, d_inode(currentfh->fh_dentry));
++		trace_nfsd_deleg_write(&dp->dl_stid.sc_stateid);
+ 	} else {
+ 		open->op_delegate_type = NFS4_OPEN_DELEGATE_READ;
+ 		trace_nfsd_deleg_read(&dp->dl_stid.sc_stateid);
+@@ -8836,6 +8853,7 @@ nfsd4_deleg_getattr_conflict(struct svc_rqst *rqstp, struct dentry *dentry,
+ 	__be32 status;
+ 	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+ 	struct file_lock_context *ctx;
++	struct nfs4_delegation *dp = NULL;
+ 	struct file_lease *fl;
+ 	struct iattr attrs;
+ 	struct nfs4_cb_fattr *ncf;
+@@ -8845,84 +8863,76 @@ nfsd4_deleg_getattr_conflict(struct svc_rqst *rqstp, struct dentry *dentry,
+ 	ctx = locks_inode_context(inode);
+ 	if (!ctx)
+ 		return 0;
++
++#define NON_NFSD_LEASE ((void *)1)
++
+ 	spin_lock(&ctx->flc_lock);
+ 	for_each_file_lock(fl, &ctx->flc_lease) {
+-		unsigned char type = fl->c.flc_type;
+-
+ 		if (fl->c.flc_flags == FL_LAYOUT)
+ 			continue;
+-		if (fl->fl_lmops != &nfsd_lease_mng_ops) {
+-			/*
+-			 * non-nfs lease, if it's a lease with F_RDLCK then
+-			 * we are done; there isn't any write delegation
+-			 * on this inode
+-			 */
+-			if (type == F_RDLCK)
+-				break;
+-
+-			nfsd_stats_wdeleg_getattr_inc(nn);
+-			spin_unlock(&ctx->flc_lock);
+-
+-			status = nfserrno(nfsd_open_break_lease(inode, NFSD_MAY_READ));
++		if (fl->c.flc_type == F_WRLCK) {
++			if (fl->fl_lmops == &nfsd_lease_mng_ops)
++				dp = fl->c.flc_owner;
++			else
++				dp = NON_NFSD_LEASE;
++		}
++		break;
++	}
++	if (dp == NULL || dp == NON_NFSD_LEASE ||
++	    dp->dl_recall.cb_clp == *(rqstp->rq_lease_breaker)) {
++		spin_unlock(&ctx->flc_lock);
++		if (dp == NON_NFSD_LEASE) {
++			status = nfserrno(nfsd_open_break_lease(inode,
++								NFSD_MAY_READ));
+ 			if (status != nfserr_jukebox ||
+ 			    !nfsd_wait_for_delegreturn(rqstp, inode))
+ 				return status;
+-			return 0;
+ 		}
+-		if (type == F_WRLCK) {
+-			struct nfs4_delegation *dp = fl->c.flc_owner;
++		return 0;
++	}
+ 
+-			if (dp->dl_recall.cb_clp == *(rqstp->rq_lease_breaker)) {
+-				spin_unlock(&ctx->flc_lock);
+-				return 0;
+-			}
+-			nfsd_stats_wdeleg_getattr_inc(nn);
+-			dp = fl->c.flc_owner;
+-			refcount_inc(&dp->dl_stid.sc_count);
+-			ncf = &dp->dl_cb_fattr;
+-			nfs4_cb_getattr(&dp->dl_cb_fattr);
+-			spin_unlock(&ctx->flc_lock);
+-			wait_on_bit_timeout(&ncf->ncf_cb_flags, CB_GETATTR_BUSY,
+-					TASK_INTERRUPTIBLE, NFSD_CB_GETATTR_TIMEOUT);
+-			if (ncf->ncf_cb_status) {
+-				/* Recall delegation only if client didn't respond */
+-				status = nfserrno(nfsd_open_break_lease(inode, NFSD_MAY_READ));
+-				if (status != nfserr_jukebox ||
+-						!nfsd_wait_for_delegreturn(rqstp, inode)) {
+-					nfs4_put_stid(&dp->dl_stid);
+-					return status;
+-				}
+-			}
+-			if (!ncf->ncf_file_modified &&
+-					(ncf->ncf_initial_cinfo != ncf->ncf_cb_change ||
+-					ncf->ncf_cur_fsize != ncf->ncf_cb_fsize))
+-				ncf->ncf_file_modified = true;
+-			if (ncf->ncf_file_modified) {
+-				int err;
+-
+-				/*
+-				 * Per section 10.4.3 of RFC 8881, the server would
+-				 * not update the file's metadata with the client's
+-				 * modified size
+-				 */
+-				attrs.ia_mtime = attrs.ia_ctime = current_time(inode);
+-				attrs.ia_valid = ATTR_MTIME | ATTR_CTIME | ATTR_DELEG;
+-				inode_lock(inode);
+-				err = notify_change(&nop_mnt_idmap, dentry, &attrs, NULL);
+-				inode_unlock(inode);
+-				if (err) {
+-					nfs4_put_stid(&dp->dl_stid);
+-					return nfserrno(err);
+-				}
+-				ncf->ncf_cur_fsize = ncf->ncf_cb_fsize;
+-				*size = ncf->ncf_cur_fsize;
+-				*modified = true;
+-			}
+-			nfs4_put_stid(&dp->dl_stid);
+-			return 0;
++	nfsd_stats_wdeleg_getattr_inc(nn);
++	refcount_inc(&dp->dl_stid.sc_count);
++	ncf = &dp->dl_cb_fattr;
++	nfs4_cb_getattr(&dp->dl_cb_fattr);
++	spin_unlock(&ctx->flc_lock);
++
++	wait_on_bit_timeout(&ncf->ncf_cb_flags, CB_GETATTR_BUSY,
++			    TASK_INTERRUPTIBLE, NFSD_CB_GETATTR_TIMEOUT);
++	if (ncf->ncf_cb_status) {
++		/* Recall delegation only if client didn't respond */
++		status = nfserrno(nfsd_open_break_lease(inode, NFSD_MAY_READ));
++		if (status != nfserr_jukebox ||
++		    !nfsd_wait_for_delegreturn(rqstp, inode))
++			goto out_status;
++	}
++	if (!ncf->ncf_file_modified &&
++	    (ncf->ncf_initial_cinfo != ncf->ncf_cb_change ||
++	     ncf->ncf_cur_fsize != ncf->ncf_cb_fsize))
++		ncf->ncf_file_modified = true;
++	if (ncf->ncf_file_modified) {
++		int err;
++
++		/*
++		 * Per section 10.4.3 of RFC 8881, the server would
++		 * not update the file's metadata with the client's
++		 * modified size
++		 */
++		attrs.ia_mtime = attrs.ia_ctime = current_time(inode);
++		attrs.ia_valid = ATTR_MTIME | ATTR_CTIME | ATTR_DELEG;
++		inode_lock(inode);
++		err = notify_change(&nop_mnt_idmap, dentry, &attrs, NULL);
++		inode_unlock(inode);
++		if (err) {
++			status = nfserrno(err);
++			goto out_status;
+ 		}
+-		break;
++		ncf->ncf_cur_fsize = ncf->ncf_cb_fsize;
++		*size = ncf->ncf_cur_fsize;
++		*modified = true;
+ 	}
+-	spin_unlock(&ctx->flc_lock);
+-	return 0;
++	status = 0;
++out_status:
++	nfs4_put_stid(&dp->dl_stid);
++	return status;
+ }
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index 862bdf23120e8a..ef5061bb56da1e 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -350,7 +350,7 @@ static int nilfs_btree_node_broken(const struct nilfs_btree_node *node,
+ 	if (unlikely(level < NILFS_BTREE_LEVEL_NODE_MIN ||
+ 		     level >= NILFS_BTREE_LEVEL_MAX ||
+ 		     (flags & NILFS_BTREE_NODE_ROOT) ||
+-		     nchildren < 0 ||
++		     nchildren <= 0 ||
+ 		     nchildren > NILFS_BTREE_NODE_NCHILDREN_MAX(size))) {
+ 		nilfs_crit(inode->i_sb,
+ 			   "bad btree node (ino=%lu, blocknr=%llu): level = %d, flags = 0x%x, nchildren = %d",
+@@ -381,7 +381,8 @@ static int nilfs_btree_root_broken(const struct nilfs_btree_node *node,
+ 	if (unlikely(level < NILFS_BTREE_LEVEL_NODE_MIN ||
+ 		     level >= NILFS_BTREE_LEVEL_MAX ||
+ 		     nchildren < 0 ||
+-		     nchildren > NILFS_BTREE_ROOT_NCHILDREN_MAX)) {
++		     nchildren > NILFS_BTREE_ROOT_NCHILDREN_MAX ||
++		     (nchildren == 0 && level > NILFS_BTREE_LEVEL_NODE_MIN))) {
+ 		nilfs_crit(inode->i_sb,
+ 			   "bad btree root (ino=%lu): level = %d, flags = 0x%x, nchildren = %d",
+ 			   inode->i_ino, level, flags, nchildren);
+@@ -1658,13 +1659,16 @@ static int nilfs_btree_check_delete(struct nilfs_bmap *btree, __u64 key)
+ 	int nchildren, ret;
+ 
+ 	root = nilfs_btree_get_root(btree);
++	nchildren = nilfs_btree_node_get_nchildren(root);
++	if (unlikely(nchildren == 0))
++		return 0;
++
+ 	switch (nilfs_btree_height(btree)) {
+ 	case 2:
+ 		bh = NULL;
+ 		node = root;
+ 		break;
+ 	case 3:
+-		nchildren = nilfs_btree_node_get_nchildren(root);
+ 		if (nchildren > 1)
+ 			return 0;
+ 		ptr = nilfs_btree_node_get_ptr(root, nchildren - 1,
+@@ -1673,12 +1677,12 @@ static int nilfs_btree_check_delete(struct nilfs_bmap *btree, __u64 key)
+ 		if (ret < 0)
+ 			return ret;
+ 		node = (struct nilfs_btree_node *)bh->b_data;
++		nchildren = nilfs_btree_node_get_nchildren(node);
+ 		break;
+ 	default:
+ 		return 0;
+ 	}
+ 
+-	nchildren = nilfs_btree_node_get_nchildren(node);
+ 	maxkey = nilfs_btree_node_get_key(node, nchildren - 1);
+ 	nextmaxkey = (nchildren > 1) ?
+ 		nilfs_btree_node_get_key(node, nchildren - 2) : 0;
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index 627eb2f72ef376..23fcf9e9d6c555 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -2408,7 +2408,7 @@ static int vfs_setup_quota_inode(struct inode *inode, int type)
+ int dquot_load_quota_sb(struct super_block *sb, int type, int format_id,
+ 	unsigned int flags)
+ {
+-	struct quota_format_type *fmt = find_quota_format(format_id);
++	struct quota_format_type *fmt;
+ 	struct quota_info *dqopt = sb_dqopt(sb);
+ 	int error;
+ 
+@@ -2418,6 +2418,7 @@ int dquot_load_quota_sb(struct super_block *sb, int type, int format_id,
+ 	if (WARN_ON_ONCE(flags & DQUOT_SUSPENDED))
+ 		return -EINVAL;
+ 
++	fmt = find_quota_format(format_id);
+ 	if (!fmt)
+ 		return -ESRCH;
+ 	if (!sb->dq_op || !sb->s_qcop ||
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index 9e859ba010cf12..7cbd580120d129 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -496,7 +496,7 @@ int ksmbd_vfs_write(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 	int err = 0;
+ 
+ 	if (work->conn->connection_type) {
+-		if (!(fp->daccess & FILE_WRITE_DATA_LE)) {
++		if (!(fp->daccess & (FILE_WRITE_DATA_LE | FILE_APPEND_DATA_LE))) {
+ 			pr_err("no right to write(%pD)\n", fp->filp);
+ 			err = -EACCES;
+ 			goto out;
+@@ -1115,9 +1115,10 @@ static bool __dir_empty(struct dir_context *ctx, const char *name, int namlen,
+ 	struct ksmbd_readdir_data *buf;
+ 
+ 	buf = container_of(ctx, struct ksmbd_readdir_data, ctx);
+-	buf->dirent_count++;
++	if (!is_dot_dotdot(name, namlen))
++		buf->dirent_count++;
+ 
+-	return buf->dirent_count <= 2;
++	return !buf->dirent_count;
+ }
+ 
+ /**
+@@ -1137,7 +1138,7 @@ int ksmbd_vfs_empty_dir(struct ksmbd_file *fp)
+ 	readdir_data.dirent_count = 0;
+ 
+ 	err = iterate_dir(fp->filp, &readdir_data.ctx);
+-	if (readdir_data.dirent_count > 2)
++	if (readdir_data.dirent_count)
+ 		err = -ENOTEMPTY;
+ 	else
+ 		err = 0;
+@@ -1166,7 +1167,7 @@ static bool __caseless_lookup(struct dir_context *ctx, const char *name,
+ 	if (cmp < 0)
+ 		cmp = strncasecmp((char *)buf->private, name, namlen);
+ 	if (!cmp) {
+-		memcpy((char *)buf->private, name, namlen);
++		memcpy((char *)buf->private, name, buf->used);
+ 		buf->dirent_count = 1;
+ 		return false;
+ 	}
+@@ -1234,10 +1235,7 @@ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name,
+ 		char *filepath;
+ 		size_t path_len, remain_len;
+ 
+-		filepath = kstrdup(name, GFP_KERNEL);
+-		if (!filepath)
+-			return -ENOMEM;
+-
++		filepath = name;
+ 		path_len = strlen(filepath);
+ 		remain_len = path_len;
+ 
+@@ -1280,10 +1278,9 @@ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name,
+ 		err = -EINVAL;
+ out2:
+ 		path_put(parent_path);
+-out1:
+-		kfree(filepath);
+ 	}
+ 
++out1:
+ 	if (!err) {
+ 		err = mnt_want_write(parent_path->mnt);
+ 		if (err) {
+diff --git a/include/acpi/acoutput.h b/include/acpi/acoutput.h
+index b1571dd96310af..5e0346142f983d 100644
+--- a/include/acpi/acoutput.h
++++ b/include/acpi/acoutput.h
+@@ -193,6 +193,7 @@
+  */
+ #ifndef ACPI_NO_ERROR_MESSAGES
+ #define AE_INFO                         _acpi_module_name, __LINE__
++#define ACPI_ONCE(_fn, _plist)                  { static char _done; if (!_done) { _done = 1; _fn _plist; } }
+ 
+ /*
+  * Error reporting. Callers module and line number are inserted by AE_INFO,
+@@ -201,8 +202,10 @@
+  */
+ #define ACPI_INFO(plist)                acpi_info plist
+ #define ACPI_WARNING(plist)             acpi_warning plist
++#define ACPI_WARNING_ONCE(plist)        ACPI_ONCE(acpi_warning, plist)
+ #define ACPI_EXCEPTION(plist)           acpi_exception plist
+ #define ACPI_ERROR(plist)               acpi_error plist
++#define ACPI_ERROR_ONCE(plist)          ACPI_ONCE(acpi_error, plist)
+ #define ACPI_BIOS_WARNING(plist)        acpi_bios_warning plist
+ #define ACPI_BIOS_EXCEPTION(plist)      acpi_bios_exception plist
+ #define ACPI_BIOS_ERROR(plist)          acpi_bios_error plist
+@@ -214,8 +217,10 @@
+ 
+ #define ACPI_INFO(plist)
+ #define ACPI_WARNING(plist)
++#define ACPI_WARNING_ONCE(plist)
+ #define ACPI_EXCEPTION(plist)
+ #define ACPI_ERROR(plist)
++#define ACPI_ERROR_ONCE(plist)
+ #define ACPI_BIOS_WARNING(plist)
+ #define ACPI_BIOS_EXCEPTION(plist)
+ #define ACPI_BIOS_ERROR(plist)
+diff --git a/include/acpi/cppc_acpi.h b/include/acpi/cppc_acpi.h
+index 930b6afba6f4d4..e1720d93066695 100644
+--- a/include/acpi/cppc_acpi.h
++++ b/include/acpi/cppc_acpi.h
+@@ -64,6 +64,8 @@ struct cpc_desc {
+ 	int cpu_id;
+ 	int write_cmd_status;
+ 	int write_cmd_id;
++	/* Lock used for RMW operations in cpc_write() */
++	spinlock_t rmw_lock;
+ 	struct cpc_register_resource cpc_regs[MAX_CPC_REG_ENT];
+ 	struct acpi_psd_package domain_info;
+ 	struct kobject kobj;
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 5e694a308081aa..5e880f0b566203 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -694,6 +694,11 @@ enum bpf_type_flag {
+ 	/* DYNPTR points to xdp_buff */
+ 	DYNPTR_TYPE_XDP		= BIT(16 + BPF_BASE_TYPE_BITS),
+ 
++	/* Memory must be aligned on some architectures, used in combination with
++	 * MEM_FIXED_SIZE.
++	 */
++	MEM_ALIGNED		= BIT(17 + BPF_BASE_TYPE_BITS),
++
+ 	__BPF_TYPE_FLAG_MAX,
+ 	__BPF_TYPE_LAST_FLAG	= __BPF_TYPE_FLAG_MAX - 1,
+ };
+@@ -731,8 +736,6 @@ enum bpf_arg_type {
+ 	ARG_ANYTHING,		/* any (initialized) argument is ok */
+ 	ARG_PTR_TO_SPIN_LOCK,	/* pointer to bpf_spin_lock */
+ 	ARG_PTR_TO_SOCK_COMMON,	/* pointer to sock_common */
+-	ARG_PTR_TO_INT,		/* pointer to int */
+-	ARG_PTR_TO_LONG,	/* pointer to long */
+ 	ARG_PTR_TO_SOCKET,	/* pointer to bpf_sock (fullsock) */
+ 	ARG_PTR_TO_BTF_ID,	/* pointer to in-kernel struct */
+ 	ARG_PTR_TO_RINGBUF_MEM,	/* pointer to dynamically reserved ringbuf memory */
+@@ -919,6 +922,7 @@ static_assert(__BPF_REG_TYPE_MAX <= BPF_BASE_TYPE_LIMIT);
+  */
+ struct bpf_insn_access_aux {
+ 	enum bpf_reg_type reg_type;
++	bool is_ldsx;
+ 	union {
+ 		int ctx_field_size;
+ 		struct {
+@@ -927,6 +931,7 @@ struct bpf_insn_access_aux {
+ 		};
+ 	};
+ 	struct bpf_verifier_log *log; /* for verbose logs */
++	bool is_retval; /* is accessing function return value ? */
+ };
+ 
+ static inline void
+diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
+index 1de7ece5d36d43..aefcd656425126 100644
+--- a/include/linux/bpf_lsm.h
++++ b/include/linux/bpf_lsm.h
+@@ -9,6 +9,7 @@
+ 
+ #include <linux/sched.h>
+ #include <linux/bpf.h>
++#include <linux/bpf_verifier.h>
+ #include <linux/lsm_hooks.h>
+ 
+ #ifdef CONFIG_BPF_LSM
+@@ -45,6 +46,8 @@ void bpf_inode_storage_free(struct inode *inode);
+ 
+ void bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog, bpf_func_t *bpf_func);
+ 
++int bpf_lsm_get_retval_range(const struct bpf_prog *prog,
++			     struct bpf_retval_range *range);
+ #else /* !CONFIG_BPF_LSM */
+ 
+ static inline bool bpf_lsm_is_sleepable_hook(u32 btf_id)
+@@ -78,6 +81,11 @@ static inline void bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog,
+ {
+ }
+ 
++static inline int bpf_lsm_get_retval_range(const struct bpf_prog *prog,
++					   struct bpf_retval_range *range)
++{
++	return -EOPNOTSUPP;
++}
+ #endif /* CONFIG_BPF_LSM */
+ 
+ #endif /* _LINUX_BPF_LSM_H */
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 8c252e073bd810..7325f88736b1cd 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -133,7 +133,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+ #define annotate_unreachable() __annotate_unreachable(__COUNTER__)
+ 
+ /* Annotate a C jump table to allow objtool to follow the code flow */
+-#define __annotate_jump_table __section(".rodata..c_jump_table")
++#define __annotate_jump_table __section(".rodata..c_jump_table,\"a\",@progbits #")
+ 
+ #else /* !CONFIG_OBJTOOL */
+ #define annotate_reachable()
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index 41d1d71c36ff5d..eee8577bcc543c 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -279,7 +279,7 @@ struct node_footer {
+ #define F2FS_INLINE_DATA	0x02	/* file inline data flag */
+ #define F2FS_INLINE_DENTRY	0x04	/* file inline dentry flag */
+ #define F2FS_DATA_EXIST		0x08	/* file inline data exist flag */
+-#define F2FS_INLINE_DOTS	0x10	/* file having implicit dot dentries */
++#define F2FS_INLINE_DOTS	0x10	/* file having implicit dot dentries (obsolete) */
+ #define F2FS_EXTRA_ATTR		0x20	/* file having extra attribute */
+ #define F2FS_PIN_FILE		0x40	/* file should not be gced */
+ #define F2FS_COMPRESS_RELEASED	0x80	/* file released compressed blocks */
+diff --git a/include/linux/fs_parser.h b/include/linux/fs_parser.h
+index d3350979115f0a..6cf713a7e6c6fc 100644
+--- a/include/linux/fs_parser.h
++++ b/include/linux/fs_parser.h
+@@ -28,7 +28,7 @@ typedef int fs_param_type(struct p_log *,
+  */
+ fs_param_type fs_param_is_bool, fs_param_is_u32, fs_param_is_s32, fs_param_is_u64,
+ 	fs_param_is_enum, fs_param_is_string, fs_param_is_blob, fs_param_is_blockdev,
+-	fs_param_is_path, fs_param_is_fd;
++	fs_param_is_path, fs_param_is_fd, fs_param_is_uid, fs_param_is_gid;
+ 
+ /*
+  * Specification of the type of value a parameter wants.
+@@ -57,6 +57,8 @@ struct fs_parse_result {
+ 		int		int_32;		/* For spec_s32/spec_enum */
+ 		unsigned int	uint_32;	/* For spec_u32{,_octal,_hex}/spec_enum */
+ 		u64		uint_64;	/* For spec_u64 */
++		kuid_t		uid;
++		kgid_t		gid;
+ 	};
+ };
+ 
+@@ -131,6 +133,8 @@ static inline bool fs_validate_description(const char *name,
+ #define fsparam_bdev(NAME, OPT)	__fsparam(fs_param_is_blockdev, NAME, OPT, 0, NULL)
+ #define fsparam_path(NAME, OPT)	__fsparam(fs_param_is_path, NAME, OPT, 0, NULL)
+ #define fsparam_fd(NAME, OPT)	__fsparam(fs_param_is_fd, NAME, OPT, 0, NULL)
++#define fsparam_uid(NAME, OPT) __fsparam(fs_param_is_uid, NAME, OPT, 0, NULL)
++#define fsparam_gid(NAME, OPT) __fsparam(fs_param_is_gid, NAME, OPT, 0, NULL)
+ 
+ /* String parameter that allows empty argument */
+ #define fsparam_string_empty(NAME, OPT) \
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index 855db460e08bc6..19c333fafe1138 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -114,6 +114,7 @@ LSM_HOOK(int, 0, path_notify, const struct path *path, u64 mask,
+ 	 unsigned int obj_type)
+ LSM_HOOK(int, 0, inode_alloc_security, struct inode *inode)
+ LSM_HOOK(void, LSM_RET_VOID, inode_free_security, struct inode *inode)
++LSM_HOOK(void, LSM_RET_VOID, inode_free_security_rcu, void *inode_security)
+ LSM_HOOK(int, -EOPNOTSUPP, inode_init_security, struct inode *inode,
+ 	 struct inode *dir, const struct qstr *qstr, struct xattr *xattrs,
+ 	 int *xattr_count)
+diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
+index a2ade0ffe9e7d2..efd4a0655159cc 100644
+--- a/include/linux/lsm_hooks.h
++++ b/include/linux/lsm_hooks.h
+@@ -73,6 +73,7 @@ struct lsm_blob_sizes {
+ 	int	lbs_cred;
+ 	int	lbs_file;
+ 	int	lbs_inode;
++	int	lbs_sock;
+ 	int	lbs_superblock;
+ 	int	lbs_ipc;
+ 	int	lbs_msg_msg;
+diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
+index c09cdcc99471e1..189140bf11fc40 100644
+--- a/include/linux/sbitmap.h
++++ b/include/linux/sbitmap.h
+@@ -40,7 +40,7 @@ struct sbitmap_word {
+ 	/**
+ 	 * @swap_lock: serializes simultaneous updates of ->word and ->cleared
+ 	 */
+-	spinlock_t swap_lock;
++	raw_spinlock_t swap_lock;
+ } ____cacheline_aligned_in_smp;
+ 
+ /**
+diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
+index 0f038a1a033099..c3bca9c0bf2cf5 100644
+--- a/include/linux/soc/qcom/geni-se.h
++++ b/include/linux/soc/qcom/geni-se.h
+@@ -88,11 +88,15 @@ struct geni_se {
+ #define SE_GENI_M_IRQ_STATUS		0x610
+ #define SE_GENI_M_IRQ_EN		0x614
+ #define SE_GENI_M_IRQ_CLEAR		0x618
++#define SE_GENI_M_IRQ_EN_SET		0x61c
++#define SE_GENI_M_IRQ_EN_CLEAR		0x620
+ #define SE_GENI_S_CMD0			0x630
+ #define SE_GENI_S_CMD_CTRL_REG		0x634
+ #define SE_GENI_S_IRQ_STATUS		0x640
+ #define SE_GENI_S_IRQ_EN		0x644
+ #define SE_GENI_S_IRQ_CLEAR		0x648
++#define SE_GENI_S_IRQ_EN_SET		0x64c
++#define SE_GENI_S_IRQ_EN_CLEAR		0x650
+ #define SE_GENI_TX_FIFOn		0x700
+ #define SE_GENI_RX_FIFOn		0x780
+ #define SE_GENI_TX_FIFO_STATUS		0x800
+@@ -101,6 +105,8 @@ struct geni_se {
+ #define SE_GENI_RX_WATERMARK_REG	0x810
+ #define SE_GENI_RX_RFR_WATERMARK_REG	0x814
+ #define SE_GENI_IOS			0x908
++#define SE_GENI_M_GP_LENGTH		0x910
++#define SE_GENI_S_GP_LENGTH		0x914
+ #define SE_DMA_TX_IRQ_STAT		0xc40
+ #define SE_DMA_TX_IRQ_CLR		0xc44
+ #define SE_DMA_TX_FSM_RST		0xc58
+@@ -234,6 +240,9 @@ struct geni_se {
+ #define IO2_DATA_IN			BIT(1)
+ #define RX_DATA_IN			BIT(0)
+ 
++/* SE_GENI_M_GP_LENGTH and SE_GENI_S_GP_LENGTH fields */
++#define GP_LENGTH			GENMASK(31, 0)
++
+ /* SE_DMA_TX_IRQ_STAT Register fields */
+ #define TX_DMA_DONE			BIT(0)
+ #define TX_EOT				BIT(1)
+diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h
+index 9f08a584d70785..0b9f1e598e3a6b 100644
+--- a/include/linux/usb/usbnet.h
++++ b/include/linux/usb/usbnet.h
+@@ -76,8 +76,23 @@ struct usbnet {
+ #		define EVENT_LINK_CHANGE	11
+ #		define EVENT_SET_RX_MODE	12
+ #		define EVENT_NO_IP_ALIGN	13
++/* This one is special, as it indicates that the device is going away;
++ * there are cyclic dependencies between tasklet, timer and bh
++ * that must be broken.
++ */
++#		define EVENT_UNPLUG		31
+ };
+ 
++static inline bool usbnet_going_away(struct usbnet *ubn)
++{
++	return test_bit(EVENT_UNPLUG, &ubn->flags);
++}
++
++static inline void usbnet_mark_going_away(struct usbnet *ubn)
++{
++	set_bit(EVENT_UNPLUG, &ubn->flags);
++}
++
+ static inline struct usb_driver *driver_of(struct usb_interface *intf)
+ {
+ 	return to_usb_driver(intf->dev.driver);
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index ecb6824e9add83..9cfd1ce0fd36c4 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -2258,8 +2258,8 @@ void mgmt_device_disconnected(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 			      bool mgmt_connected);
+ void mgmt_disconnect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 			    u8 link_type, u8 addr_type, u8 status);
+-void mgmt_connect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type,
+-			 u8 addr_type, u8 status);
++void mgmt_connect_failed(struct hci_dev *hdev, struct hci_conn *conn,
++			 u8 status);
+ void mgmt_pin_code_request(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 secure);
+ void mgmt_pin_code_reply_complete(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 				  u8 status);
+diff --git a/include/net/ip.h b/include/net/ip.h
+index c5606cadb1a552..82248813619e3f 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -795,6 +795,8 @@ static inline void ip_cmsg_recv(struct msghdr *msg, struct sk_buff *skb)
+ }
+ 
+ bool icmp_global_allow(void);
++void icmp_global_consume(void);
++
+ extern int sysctl_icmp_msgs_per_sec;
+ extern int sysctl_icmp_msgs_burst;
+ 
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index 45ad37adbe3287..7b3f56d862de40 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -953,8 +953,9 @@ enum mac80211_tx_info_flags {
+  *	of their QoS TID or other priority field values.
+  * @IEEE80211_TX_CTRL_MCAST_MLO_FIRST_TX: first MLO TX, used mostly internally
+  *	for sequence number assignment
+- * @IEEE80211_TX_CTRL_SCAN_TX: Indicates that this frame is transmitted
+- *	due to scanning, not in normal operation on the interface.
++ * @IEEE80211_TX_CTRL_DONT_USE_RATE_MASK: Don't use rate mask for this frame
++ *	which is transmitted due to scanning or offchannel TX, not in normal
++ *	operation on the interface.
+  * @IEEE80211_TX_CTRL_MLO_LINK: If not @IEEE80211_LINK_UNSPECIFIED, this
+  *	frame should be transmitted on the specific link. This really is
+  *	only relevant for frames that do not have data present, and is
+@@ -975,7 +976,7 @@ enum mac80211_tx_control_flags {
+ 	IEEE80211_TX_CTRL_NO_SEQNO		= BIT(7),
+ 	IEEE80211_TX_CTRL_DONT_REORDER		= BIT(8),
+ 	IEEE80211_TX_CTRL_MCAST_MLO_FIRST_TX	= BIT(9),
+-	IEEE80211_TX_CTRL_SCAN_TX		= BIT(10),
++	IEEE80211_TX_CTRL_DONT_USE_RATE_MASK	= BIT(10),
+ 	IEEE80211_TX_CTRL_MLO_LINK		= 0xf0000000,
+ };
+ 
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 45bbb54e42e85e..ac647c952cf0c5 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -2437,9 +2437,26 @@ static inline s64 tcp_rto_delta_us(const struct sock *sk)
+ {
+ 	const struct sk_buff *skb = tcp_rtx_queue_head(sk);
+ 	u32 rto = inet_csk(sk)->icsk_rto;
+-	u64 rto_time_stamp_us = tcp_skb_timestamp_us(skb) + jiffies_to_usecs(rto);
+ 
+-	return rto_time_stamp_us - tcp_sk(sk)->tcp_mstamp;
++	if (likely(skb)) {
++		u64 rto_time_stamp_us = tcp_skb_timestamp_us(skb) + jiffies_to_usecs(rto);
++
++		return rto_time_stamp_us - tcp_sk(sk)->tcp_mstamp;
++	} else {
++		WARN_ONCE(1,
++			"rtx queue empty: "
++			"out:%u sacked:%u lost:%u retrans:%u "
++			"tlp_high_seq:%u sk_state:%u ca_state:%u "
++			"advmss:%u mss_cache:%u pmtu:%u\n",
++			tcp_sk(sk)->packets_out, tcp_sk(sk)->sacked_out,
++			tcp_sk(sk)->lost_out, tcp_sk(sk)->retrans_out,
++			tcp_sk(sk)->tlp_high_seq, sk->sk_state,
++			inet_csk(sk)->icsk_ca_state,
++			tcp_sk(sk)->advmss, tcp_sk(sk)->mss_cache,
++			inet_csk(sk)->icsk_pmtu_cookie);
++		return jiffies_to_usecs(rto);
++	}
++
+ }
+ 
+ /*
+diff --git a/include/sound/tas2781.h b/include/sound/tas2781.h
+index 99ca3e401fd1fe..6f6e3e2f652c96 100644
+--- a/include/sound/tas2781.h
++++ b/include/sound/tas2781.h
+@@ -80,11 +80,6 @@ struct tasdevice {
+ 	bool is_loaderr;
+ };
+ 
+-struct tasdevice_irqinfo {
+-	int irq_gpio;
+-	int irq;
+-};
+-
+ struct calidata {
+ 	unsigned char *data;
+ 	unsigned long total_sz;
+@@ -92,7 +87,6 @@ struct calidata {
+ 
+ struct tasdevice_priv {
+ 	struct tasdevice tasdevice[TASDEVICE_MAX_CHANNELS];
+-	struct tasdevice_irqinfo irq_info;
+ 	struct tasdevice_rca rcabin;
+ 	struct calidata cali_data;
+ 	struct tasdevice_fw *fmw;
+@@ -113,6 +107,7 @@ struct tasdevice_priv {
+ 	unsigned int chip_id;
+ 	unsigned int sysclk;
+ 
++	int irq;
+ 	int cur_prog;
+ 	int cur_conf;
+ 	int fw_state;
+diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
+index ed794b5fefbe3a..2851c823095bc1 100644
+--- a/include/trace/events/f2fs.h
++++ b/include/trace/events/f2fs.h
+@@ -139,7 +139,8 @@ TRACE_DEFINE_ENUM(EX_BLOCK_AGE);
+ 		{ CP_NODE_NEED_CP,	"node needs cp" },		\
+ 		{ CP_FASTBOOT_MODE,	"fastboot mode" },		\
+ 		{ CP_SPEC_LOG_NUM,	"log type is 2" },		\
+-		{ CP_RECOVER_DIR,	"dir needs recovery" })
++		{ CP_RECOVER_DIR,	"dir needs recovery" },		\
++		{ CP_XATTR_DIR,		"dir's xattr updated" })
+ 
+ #define show_shutdown_mode(type)					\
+ 	__print_symbolic(type,						\
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index 22dac5850327fc..b59a1bf844cf9d 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -13,6 +13,7 @@
+ #include <linux/slab.h>
+ #include <linux/rculist_nulls.h>
+ #include <linux/cpu.h>
++#include <linux/cpuset.h>
+ #include <linux/task_work.h>
+ #include <linux/audit.h>
+ #include <linux/mmu_context.h>
+@@ -1166,7 +1167,7 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
+ 
+ 	if (!alloc_cpumask_var(&wq->cpu_mask, GFP_KERNEL))
+ 		goto err;
+-	cpumask_copy(wq->cpu_mask, cpu_possible_mask);
++	cpuset_cpus_allowed(data->task, wq->cpu_mask);
+ 	wq->acct[IO_WQ_ACCT_BOUND].max_workers = bounded;
+ 	wq->acct[IO_WQ_ACCT_UNBOUND].max_workers =
+ 				task_rlimit(current, RLIMIT_NPROC);
+@@ -1321,17 +1322,29 @@ static int io_wq_cpu_offline(unsigned int cpu, struct hlist_node *node)
+ 
+ int io_wq_cpu_affinity(struct io_uring_task *tctx, cpumask_var_t mask)
+ {
++	cpumask_var_t allowed_mask;
++	int ret = 0;
++
+ 	if (!tctx || !tctx->io_wq)
+ 		return -EINVAL;
+ 
++	if (!alloc_cpumask_var(&allowed_mask, GFP_KERNEL))
++		return -ENOMEM;
++
+ 	rcu_read_lock();
+-	if (mask)
+-		cpumask_copy(tctx->io_wq->cpu_mask, mask);
+-	else
+-		cpumask_copy(tctx->io_wq->cpu_mask, cpu_possible_mask);
++	cpuset_cpus_allowed(tctx->io_wq->task, allowed_mask);
++	if (mask) {
++		if (cpumask_subset(mask, allowed_mask))
++			cpumask_copy(tctx->io_wq->cpu_mask, mask);
++		else
++			ret = -EINVAL;
++	} else {
++		cpumask_copy(tctx->io_wq->cpu_mask, allowed_mask);
++	}
+ 	rcu_read_unlock();
+ 
+-	return 0;
++	free_cpumask_var(allowed_mask);
++	return ret;
+ }
+ 
+ /*
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 896e707e06187f..c0d8ee0c9786df 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -2401,7 +2401,7 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 		return 1;
+ 	if (unlikely(!llist_empty(&ctx->work_llist)))
+ 		return 1;
+-	if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL)))
++	if (unlikely(task_work_pending(current)))
+ 		return 1;
+ 	if (unlikely(task_sigpending(current)))
+ 		return -EINTR;
+@@ -2502,9 +2502,9 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 		 * If we got woken because of task_work being processed, run it
+ 		 * now rather than let the caller do another wait loop.
+ 		 */
+-		io_run_task_work();
+ 		if (!llist_empty(&ctx->work_llist))
+ 			io_run_local_work(ctx, nr_wait);
++		io_run_task_work();
+ 
+ 		/*
+ 		 * Non-local task_work will be run on exit to userspace, but
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 1a2128459cb4c8..e6b44367df67b0 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -856,6 +856,14 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ 	ret = io_iter_do_read(rw, &io->iter);
+ 
++	/*
++	 * Some file systems like to return -EOPNOTSUPP for an IOCB_NOWAIT
++	 * issue, even though they should be returning -EAGAIN. To be safe,
++	 * retry from blocking context for either.
++	 */
++	if (ret == -EOPNOTSUPP && force_nonblock)
++		ret = -EAGAIN;
++
+ 	if (ret == -EAGAIN || (req->flags & REQ_F_REISSUE)) {
+ 		req->flags &= ~REQ_F_REISSUE;
+ 		/* If we can poll, just do that. */
+diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
+index b3722e5275e77e..4dcb1c586536d7 100644
+--- a/io_uring/sqpoll.c
++++ b/io_uring/sqpoll.c
+@@ -10,6 +10,7 @@
+ #include <linux/slab.h>
+ #include <linux/audit.h>
+ #include <linux/security.h>
++#include <linux/cpuset.h>
+ #include <linux/io_uring.h>
+ 
+ #include <uapi/linux/io_uring.h>
+@@ -460,11 +461,22 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ 			return 0;
+ 
+ 		if (p->flags & IORING_SETUP_SQ_AFF) {
++			cpumask_var_t allowed_mask;
+ 			int cpu = p->sq_thread_cpu;
+ 
+ 			ret = -EINVAL;
+ 			if (cpu >= nr_cpu_ids || !cpu_online(cpu))
+ 				goto err_sqpoll;
++			ret = -ENOMEM;
++			if (!alloc_cpumask_var(&allowed_mask, GFP_KERNEL))
++				goto err_sqpoll;
++			ret = -EINVAL;
++			cpuset_cpus_allowed(current, allowed_mask);
++			if (!cpumask_test_cpu(cpu, allowed_mask)) {
++				free_cpumask_var(allowed_mask);
++				goto err_sqpoll;
++			}
++			free_cpumask_var(allowed_mask);
+ 			sqd->sq_cpu = cpu;
+ 		} else {
+ 			sqd->sq_cpu = -1;
+diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
+index 68240c3c6e7dea..7f8a66a62661b9 100644
+--- a/kernel/bpf/bpf_lsm.c
++++ b/kernel/bpf/bpf_lsm.c
+@@ -11,7 +11,6 @@
+ #include <linux/lsm_hooks.h>
+ #include <linux/bpf_lsm.h>
+ #include <linux/kallsyms.h>
+-#include <linux/bpf_verifier.h>
+ #include <net/bpf_sk_storage.h>
+ #include <linux/bpf_local_storage.h>
+ #include <linux/btf_ids.h>
+@@ -389,3 +388,36 @@ const struct bpf_verifier_ops lsm_verifier_ops = {
+ 	.get_func_proto = bpf_lsm_func_proto,
+ 	.is_valid_access = btf_ctx_access,
+ };
++
++/* hooks return 0 or 1 */
++BTF_SET_START(bool_lsm_hooks)
++#ifdef CONFIG_SECURITY_NETWORK_XFRM
++BTF_ID(func, bpf_lsm_xfrm_state_pol_flow_match)
++#endif
++#ifdef CONFIG_AUDIT
++BTF_ID(func, bpf_lsm_audit_rule_known)
++#endif
++BTF_ID(func, bpf_lsm_inode_xattr_skipcap)
++BTF_SET_END(bool_lsm_hooks)
++
++int bpf_lsm_get_retval_range(const struct bpf_prog *prog,
++			     struct bpf_retval_range *retval_range)
++{
++	/* no return value range for void hooks */
++	if (!prog->aux->attach_func_proto->type)
++		return -EINVAL;
++
++	if (btf_id_set_contains(&bool_lsm_hooks, prog->aux->attach_btf_id)) {
++		retval_range->minval = 0;
++		retval_range->maxval = 1;
++	} else {
++		/* All other available LSM hooks, except task_prctl, return 0
++		 * on success and negative error code on failure.
++		 * To keep things simple, we only allow bpf progs to return 0
++		 * or negative errno for task_prctl too.
++		 */
++		retval_range->minval = -MAX_ERRNO;
++		retval_range->maxval = 0;
++	}
++	return 0;
++}
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 2f157ffbc67cea..96e8596a76ceab 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -6250,8 +6250,11 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ 
+ 	if (arg == nr_args) {
+ 		switch (prog->expected_attach_type) {
+-		case BPF_LSM_CGROUP:
+ 		case BPF_LSM_MAC:
++			/* mark we are accessing the return value */
++			info->is_retval = true;
++			fallthrough;
++		case BPF_LSM_CGROUP:
+ 		case BPF_TRACE_FEXIT:
+ 			/* When LSM programs are attached to void LSM hooks
+ 			 * they use FEXIT trampolines and when attached to
+@@ -8715,6 +8718,7 @@ int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
+ 	struct bpf_core_cand_list cands = {};
+ 	struct bpf_core_relo_res targ_res;
+ 	struct bpf_core_spec *specs;
++	const struct btf_type *type;
+ 	int err;
+ 
+ 	/* ~4k of temp memory necessary to convert LLVM spec like "0:1:0:5"
+@@ -8724,6 +8728,13 @@ int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
+ 	if (!specs)
+ 		return -ENOMEM;
+ 
++	type = btf_type_by_id(ctx->btf, relo->type_id);
++	if (!type) {
++		bpf_log(ctx->log, "relo #%u: bad type id %u\n",
++			relo_idx, relo->type_id);
++		return -EINVAL;
++	}
++
+ 	if (need_cands) {
+ 		struct bpf_cand_cache *cc;
+ 		int i;
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index 7268370600f6e6..a9ae8f8e618075 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -517,11 +517,12 @@ static int __bpf_strtoll(const char *buf, size_t buf_len, u64 flags,
+ }
+ 
+ BPF_CALL_4(bpf_strtol, const char *, buf, size_t, buf_len, u64, flags,
+-	   long *, res)
++	   s64 *, res)
+ {
+ 	long long _res;
+ 	int err;
+ 
++	*res = 0;
+ 	err = __bpf_strtoll(buf, buf_len, flags, &_res);
+ 	if (err < 0)
+ 		return err;
+@@ -538,16 +539,18 @@ const struct bpf_func_proto bpf_strtol_proto = {
+ 	.arg1_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg2_type	= ARG_CONST_SIZE,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_LONG,
++	.arg4_type	= ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++	.arg4_size	= sizeof(s64),
+ };
+ 
+ BPF_CALL_4(bpf_strtoul, const char *, buf, size_t, buf_len, u64, flags,
+-	   unsigned long *, res)
++	   u64 *, res)
+ {
+ 	unsigned long long _res;
+ 	bool is_negative;
+ 	int err;
+ 
++	*res = 0;
+ 	err = __bpf_strtoull(buf, buf_len, flags, &_res, &is_negative);
+ 	if (err < 0)
+ 		return err;
+@@ -566,7 +569,8 @@ const struct bpf_func_proto bpf_strtoul_proto = {
+ 	.arg1_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+ 	.arg2_type	= ARG_CONST_SIZE,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_LONG,
++	.arg4_type	= ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++	.arg4_size	= sizeof(u64),
+ };
+ 
+ BPF_CALL_3(bpf_strncmp, const char *, s1, u32, s1_sz, const char *, s2)
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index f45ed6adc092af..f6b6d297fad6ed 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -5910,6 +5910,7 @@ static const struct bpf_func_proto bpf_sys_close_proto = {
+ 
+ BPF_CALL_4(bpf_kallsyms_lookup_name, const char *, name, int, name_sz, int, flags, u64 *, res)
+ {
++	*res = 0;
+ 	if (flags)
+ 		return -EINVAL;
+ 
+@@ -5930,7 +5931,8 @@ static const struct bpf_func_proto bpf_kallsyms_lookup_name_proto = {
+ 	.arg1_type	= ARG_PTR_TO_MEM,
+ 	.arg2_type	= ARG_CONST_SIZE_OR_ZERO,
+ 	.arg3_type	= ARG_ANYTHING,
+-	.arg4_type	= ARG_PTR_TO_LONG,
++	.arg4_type	= ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++	.arg4_size	= sizeof(u64),
+ };
+ 
+ static const struct bpf_func_proto *
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 73f55f4b945eed..c821713249c81d 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2334,6 +2334,25 @@ static void mark_reg_unknown(struct bpf_verifier_env *env,
+ 	__mark_reg_unknown(env, regs + regno);
+ }
+ 
++static int __mark_reg_s32_range(struct bpf_verifier_env *env,
++				struct bpf_reg_state *regs,
++				u32 regno,
++				s32 s32_min,
++				s32 s32_max)
++{
++	struct bpf_reg_state *reg = regs + regno;
++
++	reg->s32_min_value = max_t(s32, reg->s32_min_value, s32_min);
++	reg->s32_max_value = min_t(s32, reg->s32_max_value, s32_max);
++
++	reg->smin_value = max_t(s64, reg->smin_value, s32_min);
++	reg->smax_value = min_t(s64, reg->smax_value, s32_max);
++
++	reg_bounds_sync(reg);
++
++	return reg_bounds_sanity_check(env, reg, "s32_range");
++}
++
+ static void __mark_reg_not_init(const struct bpf_verifier_env *env,
+ 				struct bpf_reg_state *reg)
+ {
+@@ -5575,11 +5594,13 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
+ /* check access to 'struct bpf_context' fields.  Supports fixed offsets only */
+ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off, int size,
+ 			    enum bpf_access_type t, enum bpf_reg_type *reg_type,
+-			    struct btf **btf, u32 *btf_id)
++			    struct btf **btf, u32 *btf_id, bool *is_retval, bool is_ldsx)
+ {
+ 	struct bpf_insn_access_aux info = {
+ 		.reg_type = *reg_type,
+ 		.log = &env->log,
++		.is_retval = false,
++		.is_ldsx = is_ldsx,
+ 	};
+ 
+ 	if (env->ops->is_valid_access &&
+@@ -5592,6 +5613,7 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off,
+ 		 * type of narrower access.
+ 		 */
+ 		*reg_type = info.reg_type;
++		*is_retval = info.is_retval;
+ 
+ 		if (base_type(*reg_type) == PTR_TO_BTF_ID) {
+ 			*btf = info.btf;
+@@ -6760,6 +6782,17 @@ static int check_stack_access_within_bounds(
+ 	return grow_stack_state(env, state, -min_off /* size */);
+ }
+ 
++static bool get_func_retval_range(struct bpf_prog *prog,
++				  struct bpf_retval_range *range)
++{
++	if (prog->type == BPF_PROG_TYPE_LSM &&
++		prog->expected_attach_type == BPF_LSM_MAC &&
++		!bpf_lsm_get_retval_range(prog, range)) {
++		return true;
++	}
++	return false;
++}
++
+ /* check whether memory at (regno + off) is accessible for t = (read | write)
+  * if t==write, value_regno is a register which value is stored into memory
+  * if t==read, value_regno is a register which will receive the value from memory
+@@ -6864,6 +6897,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 		if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem))
+ 			mark_reg_unknown(env, regs, value_regno);
+ 	} else if (reg->type == PTR_TO_CTX) {
++		bool is_retval = false;
++		struct bpf_retval_range range;
+ 		enum bpf_reg_type reg_type = SCALAR_VALUE;
+ 		struct btf *btf = NULL;
+ 		u32 btf_id = 0;
+@@ -6879,7 +6914,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 			return err;
+ 
+ 		err = check_ctx_access(env, insn_idx, off, size, t, &reg_type, &btf,
+-				       &btf_id);
++				       &btf_id, &is_retval, is_ldsx);
+ 		if (err)
+ 			verbose_linfo(env, insn_idx, "; ");
+ 		if (!err && t == BPF_READ && value_regno >= 0) {
+@@ -6888,7 +6923,14 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 			 * case, we know the offset is zero.
+ 			 */
+ 			if (reg_type == SCALAR_VALUE) {
+-				mark_reg_unknown(env, regs, value_regno);
++				if (is_retval && get_func_retval_range(env->prog, &range)) {
++					err = __mark_reg_s32_range(env, regs, value_regno,
++								   range.minval, range.maxval);
++					if (err)
++						return err;
++				} else {
++					mark_reg_unknown(env, regs, value_regno);
++				}
+ 			} else {
+ 				mark_reg_known_zero(env, regs,
+ 						    value_regno);
+@@ -8116,6 +8158,12 @@ static bool arg_type_is_mem_size(enum bpf_arg_type type)
+ 	       type == ARG_CONST_SIZE_OR_ZERO;
+ }
+ 
++static bool arg_type_is_raw_mem(enum bpf_arg_type type)
++{
++	return base_type(type) == ARG_PTR_TO_MEM &&
++	       type & MEM_UNINIT;
++}
++
+ static bool arg_type_is_release(enum bpf_arg_type type)
+ {
+ 	return type & OBJ_RELEASE;
+@@ -8126,16 +8174,6 @@ static bool arg_type_is_dynptr(enum bpf_arg_type type)
+ 	return base_type(type) == ARG_PTR_TO_DYNPTR;
+ }
+ 
+-static int int_ptr_type_to_size(enum bpf_arg_type type)
+-{
+-	if (type == ARG_PTR_TO_INT)
+-		return sizeof(u32);
+-	else if (type == ARG_PTR_TO_LONG)
+-		return sizeof(u64);
+-
+-	return -EINVAL;
+-}
+-
+ static int resolve_map_arg_type(struct bpf_verifier_env *env,
+ 				 const struct bpf_call_arg_meta *meta,
+ 				 enum bpf_arg_type *arg_type)
+@@ -8208,16 +8246,6 @@ static const struct bpf_reg_types mem_types = {
+ 	},
+ };
+ 
+-static const struct bpf_reg_types int_ptr_types = {
+-	.types = {
+-		PTR_TO_STACK,
+-		PTR_TO_PACKET,
+-		PTR_TO_PACKET_META,
+-		PTR_TO_MAP_KEY,
+-		PTR_TO_MAP_VALUE,
+-	},
+-};
+-
+ static const struct bpf_reg_types spin_lock_types = {
+ 	.types = {
+ 		PTR_TO_MAP_VALUE,
+@@ -8273,8 +8301,6 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
+ 	[ARG_PTR_TO_SPIN_LOCK]		= &spin_lock_types,
+ 	[ARG_PTR_TO_MEM]		= &mem_types,
+ 	[ARG_PTR_TO_RINGBUF_MEM]	= &ringbuf_mem_types,
+-	[ARG_PTR_TO_INT]		= &int_ptr_types,
+-	[ARG_PTR_TO_LONG]		= &int_ptr_types,
+ 	[ARG_PTR_TO_PERCPU_BTF_ID]	= &percpu_btf_ptr_types,
+ 	[ARG_PTR_TO_FUNC]		= &func_ptr_types,
+ 	[ARG_PTR_TO_STACK]		= &stack_ptr_types,
+@@ -8835,9 +8861,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ 		 */
+ 		meta->raw_mode = arg_type & MEM_UNINIT;
+ 		if (arg_type & MEM_FIXED_SIZE) {
+-			err = check_helper_mem_access(env, regno,
+-						      fn->arg_size[arg], false,
+-						      meta);
++			err = check_helper_mem_access(env, regno, fn->arg_size[arg], false, meta);
++			if (err)
++				return err;
++			if (arg_type & MEM_ALIGNED)
++				err = check_ptr_alignment(env, reg, 0, fn->arg_size[arg], true);
+ 		}
+ 		break;
+ 	case ARG_CONST_SIZE:
+@@ -8862,17 +8890,6 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ 		if (err)
+ 			return err;
+ 		break;
+-	case ARG_PTR_TO_INT:
+-	case ARG_PTR_TO_LONG:
+-	{
+-		int size = int_ptr_type_to_size(arg_type);
+-
+-		err = check_helper_mem_access(env, regno, size, false, meta);
+-		if (err)
+-			return err;
+-		err = check_ptr_alignment(env, reg, 0, size, true);
+-		break;
+-	}
+ 	case ARG_PTR_TO_CONST_STR:
+ 	{
+ 		err = check_reg_const_str(env, reg, regno);
+@@ -9189,15 +9206,15 @@ static bool check_raw_mode_ok(const struct bpf_func_proto *fn)
+ {
+ 	int count = 0;
+ 
+-	if (fn->arg1_type == ARG_PTR_TO_UNINIT_MEM)
++	if (arg_type_is_raw_mem(fn->arg1_type))
+ 		count++;
+-	if (fn->arg2_type == ARG_PTR_TO_UNINIT_MEM)
++	if (arg_type_is_raw_mem(fn->arg2_type))
+ 		count++;
+-	if (fn->arg3_type == ARG_PTR_TO_UNINIT_MEM)
++	if (arg_type_is_raw_mem(fn->arg3_type))
+ 		count++;
+-	if (fn->arg4_type == ARG_PTR_TO_UNINIT_MEM)
++	if (arg_type_is_raw_mem(fn->arg4_type))
+ 		count++;
+-	if (fn->arg5_type == ARG_PTR_TO_UNINIT_MEM)
++	if (arg_type_is_raw_mem(fn->arg5_type))
+ 		count++;
+ 
+ 	/* We only support one arg being in raw mode at the moment,
+@@ -9911,9 +9928,13 @@ static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env)
+ 	return is_rbtree_lock_required_kfunc(kfunc_btf_id);
+ }
+ 
+-static bool retval_range_within(struct bpf_retval_range range, const struct bpf_reg_state *reg)
++static bool retval_range_within(struct bpf_retval_range range, const struct bpf_reg_state *reg,
++				bool return_32bit)
+ {
+-	return range.minval <= reg->smin_value && reg->smax_value <= range.maxval;
++	if (return_32bit)
++		return range.minval <= reg->s32_min_value && reg->s32_max_value <= range.maxval;
++	else
++		return range.minval <= reg->smin_value && reg->smax_value <= range.maxval;
+ }
+ 
+ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
+@@ -9950,8 +9971,8 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
+ 		if (err)
+ 			return err;
+ 
+-		/* enforce R0 return value range */
+-		if (!retval_range_within(callee->callback_ret_range, r0)) {
++		/* enforce R0 return value range, and bpf_callback_t returns 64bit */
++		if (!retval_range_within(callee->callback_ret_range, r0, false)) {
+ 			verbose_invalid_scalar(env, r0, callee->callback_ret_range,
+ 					       "At callback return", "R0");
+ 			return -EINVAL;
+@@ -15572,6 +15593,7 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
+ 	int err;
+ 	struct bpf_func_state *frame = env->cur_state->frame[0];
+ 	const bool is_subprog = frame->subprogno;
++	bool return_32bit = false;
+ 
+ 	/* LSM and struct_ops func-ptr's return type could be "void" */
+ 	if (!is_subprog || frame->in_exception_callback_fn) {
+@@ -15677,12 +15699,14 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
+ 
+ 	case BPF_PROG_TYPE_LSM:
+ 		if (env->prog->expected_attach_type != BPF_LSM_CGROUP) {
+-			/* Regular BPF_PROG_TYPE_LSM programs can return
+-			 * any value.
+-			 */
+-			return 0;
+-		}
+-		if (!env->prog->aux->attach_func_proto->type) {
++			/* no range found, any return value is allowed */
++			if (!get_func_retval_range(env->prog, &range))
++				return 0;
++			/* no restricted range, any return value is allowed */
++			if (range.minval == S32_MIN && range.maxval == S32_MAX)
++				return 0;
++			return_32bit = true;
++		} else if (!env->prog->aux->attach_func_proto->type) {
+ 			/* Make sure programs that attach to void
+ 			 * hooks don't try to modify return value.
+ 			 */
+@@ -15712,7 +15736,7 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
+ 	if (err)
+ 		return err;
+ 
+-	if (!retval_range_within(range, reg)) {
++	if (!retval_range_within(range, reg, return_32bit)) {
+ 		verbose_invalid_scalar(env, reg, range, exit_ctx, reg_name);
+ 		if (!is_subprog &&
+ 		    prog->expected_attach_type == BPF_LSM_CGROUP &&
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index f7be976ff88af7..db4ceb0f503cca 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -845,8 +845,16 @@ int kthread_worker_fn(void *worker_ptr)
+ 		 * event only cares about the address.
+ 		 */
+ 		trace_sched_kthread_work_execute_end(work, func);
+-	} else if (!freezing(current))
++	} else if (!freezing(current)) {
+ 		schedule();
++	} else {
++		/*
++		 * Handle the case where the current remains
++		 * TASK_INTERRUPTIBLE. try_to_freeze() expects
++		 * the current to be TASK_RUNNING.
++		 */
++		__set_current_state(TASK_RUNNING);
++	}
+ 
+ 	try_to_freeze();
+ 	cond_resched();
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 151bd3de59363a..3468d8230e5f75 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -6184,25 +6184,27 @@ static struct pending_free *get_pending_free(void)
+ static void free_zapped_rcu(struct rcu_head *cb);
+ 
+ /*
+- * Schedule an RCU callback if no RCU callback is pending. Must be called with
+- * the graph lock held.
+- */
+-static void call_rcu_zapped(struct pending_free *pf)
++* See if we need to queue an RCU callback; must be called with
++* the lockdep lock held. Returns false if either there is no
++* pending free or the callback is already scheduled.
++* Otherwise, a call_rcu() must follow this function call.
++*/
++static bool prepare_call_rcu_zapped(struct pending_free *pf)
+ {
+ 	WARN_ON_ONCE(inside_selftest());
+ 
+ 	if (list_empty(&pf->zapped))
+-		return;
++		return false;
+ 
+ 	if (delayed_free.scheduled)
+-		return;
++		return false;
+ 
+ 	delayed_free.scheduled = true;
+ 
+ 	WARN_ON_ONCE(delayed_free.pf + delayed_free.index != pf);
+ 	delayed_free.index ^= 1;
+ 
+-	call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
++	return true;
+ }
+ 
+ /* The caller must hold the graph lock. May be called from RCU context. */
+@@ -6228,6 +6230,7 @@ static void free_zapped_rcu(struct rcu_head *ch)
+ {
+ 	struct pending_free *pf;
+ 	unsigned long flags;
++	bool need_callback;
+ 
+ 	if (WARN_ON_ONCE(ch != &delayed_free.rcu_head))
+ 		return;
+@@ -6239,14 +6242,18 @@ static void free_zapped_rcu(struct rcu_head *ch)
+ 	pf = delayed_free.pf + (delayed_free.index ^ 1);
+ 	__free_zapped_classes(pf);
+ 	delayed_free.scheduled = false;
++	need_callback =
++		prepare_call_rcu_zapped(delayed_free.pf + delayed_free.index);
++	lockdep_unlock();
++	raw_local_irq_restore(flags);
+ 
+ 	/*
+-	 * If there's anything on the open list, close and start a new callback.
+-	 */
+-	call_rcu_zapped(delayed_free.pf + delayed_free.index);
++	* If there's a pending free and its callback has not been scheduled,
++	* queue an RCU callback.
++	*/
++	if (need_callback)
++		call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
+ 
+-	lockdep_unlock();
+-	raw_local_irq_restore(flags);
+ }
+ 
+ /*
+@@ -6286,6 +6293,7 @@ static void lockdep_free_key_range_reg(void *start, unsigned long size)
+ {
+ 	struct pending_free *pf;
+ 	unsigned long flags;
++	bool need_callback;
+ 
+ 	init_data_structures_once();
+ 
+@@ -6293,10 +6301,11 @@ static void lockdep_free_key_range_reg(void *start, unsigned long size)
+ 	lockdep_lock();
+ 	pf = get_pending_free();
+ 	__lockdep_free_key_range(pf, start, size);
+-	call_rcu_zapped(pf);
++	need_callback = prepare_call_rcu_zapped(pf);
+ 	lockdep_unlock();
+ 	raw_local_irq_restore(flags);
+-
++	if (need_callback)
++		call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
+ 	/*
+ 	 * Wait for any possible iterators from look_up_lock_class() to pass
+ 	 * before continuing to free the memory they refer to.
+@@ -6390,6 +6399,7 @@ static void lockdep_reset_lock_reg(struct lockdep_map *lock)
+ 	struct pending_free *pf;
+ 	unsigned long flags;
+ 	int locked;
++	bool need_callback = false;
+ 
+ 	raw_local_irq_save(flags);
+ 	locked = graph_lock();
+@@ -6398,11 +6408,13 @@ static void lockdep_reset_lock_reg(struct lockdep_map *lock)
+ 
+ 	pf = get_pending_free();
+ 	__lockdep_reset_lock(pf, lock);
+-	call_rcu_zapped(pf);
++	need_callback = prepare_call_rcu_zapped(pf);
+ 
+ 	graph_unlock();
+ out_irq:
+ 	raw_local_irq_restore(flags);
++	if (need_callback)
++		call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
+ }
+ 
+ /*
+@@ -6446,6 +6458,7 @@ void lockdep_unregister_key(struct lock_class_key *key)
+ 	struct pending_free *pf;
+ 	unsigned long flags;
+ 	bool found = false;
++	bool need_callback = false;
+ 
+ 	might_sleep();
+ 
+@@ -6466,11 +6479,14 @@ void lockdep_unregister_key(struct lock_class_key *key)
+ 	if (found) {
+ 		pf = get_pending_free();
+ 		__lockdep_free_key_range(pf, key, 1);
+-		call_rcu_zapped(pf);
++		need_callback = prepare_call_rcu_zapped(pf);
+ 	}
+ 	lockdep_unlock();
+ 	raw_local_irq_restore(flags);
+ 
++	if (need_callback)
++		call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
++
+ 	/* Wait until is_dynamic_key() has finished accessing k->hash_entry. */
+ 	synchronize_rcu();
+ }
+diff --git a/kernel/module/Makefile b/kernel/module/Makefile
+index a10b2b9a6fdfc6..50ffcc413b5450 100644
+--- a/kernel/module/Makefile
++++ b/kernel/module/Makefile
+@@ -5,7 +5,7 @@
+ 
+ # These are called from save_stack_trace() on slub debug path,
+ # and produce insane amounts of uninteresting coverage.
+-KCOV_INSTRUMENT_module.o := n
++KCOV_INSTRUMENT_main.o := n
+ 
+ obj-y += main.o
+ obj-y += strict_rwx.o
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 0fa6c289546032..d899f34558afcc 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -404,7 +404,8 @@ void padata_do_serial(struct padata_priv *padata)
+ 	/* Sort in ascending order of sequence number. */
+ 	list_for_each_prev(pos, &reorder->list) {
+ 		cur = list_entry(pos, struct padata_priv, list);
+-		if (cur->seq_nr < padata->seq_nr)
++		/* Compare by difference to consider integer wrap around */
++		if ((signed int)(cur->seq_nr - padata->seq_nr) < 0)
+ 			break;
+ 	}
+ 	list_add(&padata->list, pos);
+@@ -512,9 +513,12 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
+ 	 * thread function.  Load balance large jobs between threads by
+ 	 * increasing the number of chunks, guarantee at least the minimum
+ 	 * chunk size from the caller, and honor the caller's alignment.
++	 * Ensure chunk_size is at least 1 to prevent divide-by-0
++	 * panic in padata_mt_helper().
+ 	 */
+ 	ps.chunk_size = job->size / (ps.nworks * load_balance_factor);
+ 	ps.chunk_size = max(ps.chunk_size, job->min_chunk);
++	ps.chunk_size = max(ps.chunk_size, 1ul);
+ 	ps.chunk_size = roundup(ps.chunk_size, job->align);
+ 
+ 	/*
+diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
+index 2d9eed2bf75093..9b485f1060c017 100644
+--- a/kernel/rcu/tree_nocb.h
++++ b/kernel/rcu/tree_nocb.h
+@@ -220,7 +220,10 @@ static bool __wake_nocb_gp(struct rcu_data *rdp_gp,
+ 	raw_spin_unlock_irqrestore(&rdp_gp->nocb_gp_lock, flags);
+ 	if (needwake) {
+ 		trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("DoWake"));
+-		wake_up_process(rdp_gp->nocb_gp_kthread);
++		if (cpu_is_offline(raw_smp_processor_id()))
++			swake_up_one_online(&rdp_gp->nocb_gp_wq);
++		else
++			wake_up_process(rdp_gp->nocb_gp_kthread);
+ 	}
+ 
+ 	return needwake;
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 9bedd148f0075c..09faca47e90fb8 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1599,46 +1599,40 @@ static inline bool __dl_less(struct rb_node *a, const struct rb_node *b)
+ 	return dl_time_before(__node_2_dle(a)->deadline, __node_2_dle(b)->deadline);
+ }
+ 
+-static inline struct sched_statistics *
++static __always_inline struct sched_statistics *
+ __schedstats_from_dl_se(struct sched_dl_entity *dl_se)
+ {
++	if (!schedstat_enabled())
++		return NULL;
++
++	if (dl_server(dl_se))
++		return NULL;
++
+ 	return &dl_task_of(dl_se)->stats;
+ }
+ 
+ static inline void
+ update_stats_wait_start_dl(struct dl_rq *dl_rq, struct sched_dl_entity *dl_se)
+ {
+-	struct sched_statistics *stats;
+-
+-	if (!schedstat_enabled())
+-		return;
+-
+-	stats = __schedstats_from_dl_se(dl_se);
+-	__update_stats_wait_start(rq_of_dl_rq(dl_rq), dl_task_of(dl_se), stats);
++	struct sched_statistics *stats = __schedstats_from_dl_se(dl_se);
++	if (stats)
++		__update_stats_wait_start(rq_of_dl_rq(dl_rq), dl_task_of(dl_se), stats);
+ }
+ 
+ static inline void
+ update_stats_wait_end_dl(struct dl_rq *dl_rq, struct sched_dl_entity *dl_se)
+ {
+-	struct sched_statistics *stats;
+-
+-	if (!schedstat_enabled())
+-		return;
+-
+-	stats = __schedstats_from_dl_se(dl_se);
+-	__update_stats_wait_end(rq_of_dl_rq(dl_rq), dl_task_of(dl_se), stats);
++	struct sched_statistics *stats = __schedstats_from_dl_se(dl_se);
++	if (stats)
++		__update_stats_wait_end(rq_of_dl_rq(dl_rq), dl_task_of(dl_se), stats);
+ }
+ 
+ static inline void
+ update_stats_enqueue_sleeper_dl(struct dl_rq *dl_rq, struct sched_dl_entity *dl_se)
+ {
+-	struct sched_statistics *stats;
+-
+-	if (!schedstat_enabled())
+-		return;
+-
+-	stats = __schedstats_from_dl_se(dl_se);
+-	__update_stats_enqueue_sleeper(rq_of_dl_rq(dl_rq), dl_task_of(dl_se), stats);
++	struct sched_statistics *stats = __schedstats_from_dl_se(dl_se);
++	if (stats)
++		__update_stats_enqueue_sleeper(rq_of_dl_rq(dl_rq), dl_task_of(dl_se), stats);
+ }
+ 
+ static inline void
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 483c137b9d3d7e..5e4162d02afc15 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -511,7 +511,7 @@ static int cfs_rq_is_idle(struct cfs_rq *cfs_rq)
+ 
+ static int se_is_idle(struct sched_entity *se)
+ {
+-	return 0;
++	return task_has_idle_policy(task_of(se));
+ }
+ 
+ #endif	/* CONFIG_FAIR_GROUP_SCHED */
+@@ -3188,6 +3188,15 @@ static bool vma_is_accessed(struct mm_struct *mm, struct vm_area_struct *vma)
+ 		return true;
+ 	}
+ 
++	/*
++	 * This vma has not been accessed for a while. If the number
++	 * of threads in the same process is low, meaning no other
++	 * threads can help scan this vma, force a vma scan.
++	 */
++	if (READ_ONCE(mm->numa_scan_seq) >
++	   (vma->numab_state->prev_scan_seq + get_nr_threads(current)))
++		return true;
++
+ 	return false;
+ }
+ 
+@@ -8381,16 +8390,7 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
+ 	if (test_tsk_need_resched(curr))
+ 		return;
+ 
+-	/* Idle tasks are by definition preempted by non-idle tasks. */
+-	if (unlikely(task_has_idle_policy(curr)) &&
+-	    likely(!task_has_idle_policy(p)))
+-		goto preempt;
+-
+-	/*
+-	 * Batch and idle tasks do not preempt non-idle tasks (their preemption
+-	 * is driven by the tick):
+-	 */
+-	if (unlikely(p->policy != SCHED_NORMAL) || !sched_feat(WAKEUP_PREEMPTION))
++	if (!sched_feat(WAKEUP_PREEMPTION))
+ 		return;
+ 
+ 	find_matching_se(&se, &pse);
+@@ -8400,7 +8400,7 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
+ 	pse_is_idle = se_is_idle(pse);
+ 
+ 	/*
+-	 * Preempt an idle group in favor of a non-idle group (and don't preempt
++	 * Preempt an idle entity in favor of a non-idle entity (and don't preempt
+ 	 * in the inverse case).
+ 	 */
+ 	if (cse_is_idle && !pse_is_idle)
+@@ -8408,9 +8408,14 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
+ 	if (cse_is_idle != pse_is_idle)
+ 		return;
+ 
++	/*
++	 * BATCH and IDLE tasks do not preempt others.
++	 */
++	if (unlikely(p->policy != SCHED_NORMAL))
++		return;
++
+ 	cfs_rq = cfs_rq_of(se);
+ 	update_curr(cfs_rq);
+-
+ 	/*
+ 	 * XXX pick_eevdf(cfs_rq) != se ?
+ 	 */
+@@ -9360,9 +9365,10 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
+ 
+ 	hw_pressure = arch_scale_hw_pressure(cpu_of(rq));
+ 
++	/* hw_pressure doesn't care about invariance */
+ 	decayed = update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
+ 		  update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
+-		  update_hw_load_avg(now, rq, hw_pressure) |
++		  update_hw_load_avg(rq_clock_task(rq), rq, hw_pressure) |
+ 		  update_irq_load_avg(rq, 0);
+ 
+ 	if (others_have_blocked(rq))
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index d1daeab1bbc141..433be6bc8b7786 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1226,7 +1226,8 @@ static const struct bpf_func_proto bpf_get_func_arg_proto = {
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+ 	.arg2_type	= ARG_ANYTHING,
+-	.arg3_type	= ARG_PTR_TO_LONG,
++	.arg3_type	= ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++	.arg3_size	= sizeof(u64),
+ };
+ 
+ BPF_CALL_2(get_func_ret, void *, ctx, u64 *, value)
+@@ -1242,7 +1243,8 @@ static const struct bpf_func_proto bpf_get_func_ret_proto = {
+ 	.func		= get_func_ret,
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+-	.arg2_type	= ARG_PTR_TO_LONG,
++	.arg2_type	= ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++	.arg2_size	= sizeof(u64),
+ };
+ 
+ BPF_CALL_1(get_func_arg_cnt, void *, ctx)
+@@ -3482,17 +3484,20 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
+ 					     uprobes[i].ref_ctr_offset,
+ 					     &uprobes[i].consumer);
+ 		if (err) {
+-			bpf_uprobe_unregister(&path, uprobes, i);
+-			goto error_free;
++			link->cnt = i;
++			goto error_unregister;
+ 		}
+ 	}
+ 
+ 	err = bpf_link_prime(&link->link, &link_primer);
+ 	if (err)
+-		goto error_free;
++		goto error_unregister;
+ 
+ 	return bpf_link_settle(&link_primer);
+ 
++error_unregister:
++	bpf_uprobe_unregister(&path, uprobes, link->cnt);
++
+ error_free:
+ 	kvfree(uprobes);
+ 	kfree(link);
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index 7cea91e193a8f0..1ea8af72849cdb 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -142,13 +142,14 @@ static void fill_pool(void)
+ 	 * READ_ONCE()s pair with the WRITE_ONCE()s in pool_lock critical
+ 	 * sections.
+ 	 */
+-	while (READ_ONCE(obj_nr_tofree) && (READ_ONCE(obj_pool_free) < obj_pool_min_free)) {
++	while (READ_ONCE(obj_nr_tofree) &&
++	       READ_ONCE(obj_pool_free) < debug_objects_pool_min_level) {
+ 		raw_spin_lock_irqsave(&pool_lock, flags);
+ 		/*
+ 		 * Recheck with the lock held as the worker thread might have
+ 		 * won the race and freed the global free list already.
+ 		 */
+-		while (obj_nr_tofree && (obj_pool_free < obj_pool_min_free)) {
++		while (obj_nr_tofree && (obj_pool_free < debug_objects_pool_min_level)) {
+ 			obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
+ 			hlist_del(&obj->node);
+ 			WRITE_ONCE(obj_nr_tofree, obj_nr_tofree - 1);
+diff --git a/lib/sbitmap.c b/lib/sbitmap.c
+index 5e2e93307f0d05..d3412984170c03 100644
+--- a/lib/sbitmap.c
++++ b/lib/sbitmap.c
+@@ -65,7 +65,7 @@ static inline bool sbitmap_deferred_clear(struct sbitmap_word *map,
+ {
+ 	unsigned long mask, word_mask;
+ 
+-	guard(spinlock_irqsave)(&map->swap_lock);
++	guard(raw_spinlock_irqsave)(&map->swap_lock);
+ 
+ 	if (!map->cleared) {
+ 		if (depth == 0)
+@@ -136,7 +136,7 @@ int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
+ 	}
+ 
+ 	for (i = 0; i < sb->map_nr; i++)
+-		spin_lock_init(&sb->map[i].swap_lock);
++		raw_spin_lock_init(&sb->map[i].swap_lock);
+ 
+ 	return 0;
+ }
+diff --git a/lib/xz/xz_crc32.c b/lib/xz/xz_crc32.c
+index 88a2c35e1b5971..5627b00fca296e 100644
+--- a/lib/xz/xz_crc32.c
++++ b/lib/xz/xz_crc32.c
+@@ -29,7 +29,7 @@ STATIC_RW_DATA uint32_t xz_crc32_table[256];
+ 
+ XZ_EXTERN void xz_crc32_init(void)
+ {
+-	const uint32_t poly = CRC32_POLY_LE;
++	const uint32_t poly = 0xEDB88320;
+ 
+ 	uint32_t i;
+ 	uint32_t j;
+diff --git a/lib/xz/xz_private.h b/lib/xz/xz_private.h
+index bf1e94ec7873cf..d9fd49b45fd758 100644
+--- a/lib/xz/xz_private.h
++++ b/lib/xz/xz_private.h
+@@ -105,10 +105,6 @@
+ #	endif
+ #endif
+ 
+-#ifndef CRC32_POLY_LE
+-#define CRC32_POLY_LE 0xedb88320
+-#endif
+-
+ /*
+  * Allocate memory for LZMA2 decoder. xz_dec_lzma2_reset() must be used
+  * before calling xz_dec_lzma2_run().
+diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
+index 381559e4a1faba..26877b290de63a 100644
+--- a/mm/damon/vaddr.c
++++ b/mm/damon/vaddr.c
+@@ -126,6 +126,7 @@ static int __damon_va_three_regions(struct mm_struct *mm,
+ 	 * If this is too slow, it can be optimised to examine the maple
+ 	 * tree gaps.
+ 	 */
++	rcu_read_lock();
+ 	for_each_vma(vmi, vma) {
+ 		unsigned long gap;
+ 
+@@ -146,6 +147,7 @@ static int __damon_va_three_regions(struct mm_struct *mm,
+ next:
+ 		prev = vma;
+ 	}
++	rcu_read_unlock();
+ 
+ 	if (!sz_range(&second_gap) || !sz_range(&first_gap))
+ 		return -EINVAL;
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 4d9c1277e5e4d1..cedc93028894c6 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -214,6 +214,8 @@ static bool get_huge_zero_page(void)
+ 		count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
+ 		return false;
+ 	}
++	/* Ensure zero folio won't have large_rmappable flag set. */
++	folio_clear_large_rmappable(zero_folio);
+ 	preempt_disable();
+ 	if (cmpxchg(&huge_zero_folio, NULL, zero_folio)) {
+ 		preempt_enable();
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 92a2e8dcb79655..423d20453b90c9 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3919,100 +3919,124 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
+ 	return 0;
+ }
+ 
+-static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
++static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst,
++				       struct list_head *src_list)
+ {
+-	int i, nid = folio_nid(folio);
+-	struct hstate *target_hstate;
+-	struct page *subpage;
+-	struct folio *inner_folio;
+-	int rc = 0;
+-
+-	target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
++	long rc;
++	struct folio *folio, *next;
++	LIST_HEAD(dst_list);
++	LIST_HEAD(ret_list);
+ 
+-	remove_hugetlb_folio(h, folio, false);
+-	spin_unlock_irq(&hugetlb_lock);
+-
+-	/*
+-	 * If vmemmap already existed for folio, the remove routine above would
+-	 * have cleared the hugetlb folio flag.  Hence the folio is technically
+-	 * no longer a hugetlb folio.  hugetlb_vmemmap_restore_folio can only be
+-	 * passed hugetlb folios and will BUG otherwise.
+-	 */
+-	if (folio_test_hugetlb(folio)) {
+-		rc = hugetlb_vmemmap_restore_folio(h, folio);
+-		if (rc) {
+-			/* Allocation of vmemmmap failed, we can not demote folio */
+-			spin_lock_irq(&hugetlb_lock);
+-			add_hugetlb_folio(h, folio, false);
+-			return rc;
+-		}
+-	}
+-
+-	/*
+-	 * Use destroy_compound_hugetlb_folio_for_demote for all huge page
+-	 * sizes as it will not ref count folios.
+-	 */
+-	destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(h));
++	rc = hugetlb_vmemmap_restore_folios(src, src_list, &ret_list);
++	list_splice_init(&ret_list, src_list);
+ 
+ 	/*
+ 	 * Taking target hstate mutex synchronizes with set_max_huge_pages.
+ 	 * Without the mutex, pages added to target hstate could be marked
+ 	 * as surplus.
+ 	 *
+-	 * Note that we already hold h->resize_lock.  To prevent deadlock,
++	 * Note that we already hold src->resize_lock.  To prevent deadlock,
+ 	 * use the convention of always taking larger size hstate mutex first.
+ 	 */
+-	mutex_lock(&target_hstate->resize_lock);
+-	for (i = 0; i < pages_per_huge_page(h);
+-				i += pages_per_huge_page(target_hstate)) {
+-		subpage = folio_page(folio, i);
+-		inner_folio = page_folio(subpage);
+-		if (hstate_is_gigantic(target_hstate))
+-			prep_compound_gigantic_folio_for_demote(inner_folio,
+-							target_hstate->order);
+-		else
+-			prep_compound_page(subpage, target_hstate->order);
+-		folio_change_private(inner_folio, NULL);
+-		prep_new_hugetlb_folio(target_hstate, inner_folio, nid);
+-		free_huge_folio(inner_folio);
++	mutex_lock(&dst->resize_lock);
++
++	list_for_each_entry_safe(folio, next, src_list, lru) {
++		int i;
++
++		if (folio_test_hugetlb_vmemmap_optimized(folio))
++			continue;
++
++		list_del(&folio->lru);
++		/*
++		 * Use destroy_compound_hugetlb_folio_for_demote for all huge page
++		 * sizes as it will not ref count folios.
++		 */
++		destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(src));
++
++		for (i = 0; i < pages_per_huge_page(src); i += pages_per_huge_page(dst)) {
++			struct page *page = folio_page(folio, i);
++
++			if (hstate_is_gigantic(dst))
++				prep_compound_gigantic_folio_for_demote(page_folio(page),
++									dst->order);
++			else
++				prep_compound_page(page, dst->order);
++			set_page_private(page, 0);
++
++			init_new_hugetlb_folio(dst, page_folio(page));
++			list_add(&page->lru, &dst_list);
++		}
+ 	}
+-	mutex_unlock(&target_hstate->resize_lock);
+ 
+-	spin_lock_irq(&hugetlb_lock);
++	prep_and_add_allocated_folios(dst, &dst_list);
+ 
+-	/*
+-	 * Not absolutely necessary, but for consistency update max_huge_pages
+-	 * based on pool changes for the demoted page.
+-	 */
+-	h->max_huge_pages--;
+-	target_hstate->max_huge_pages +=
+-		pages_per_huge_page(h) / pages_per_huge_page(target_hstate);
++	mutex_unlock(&dst->resize_lock);
+ 
+ 	return rc;
+ }
+ 
+-static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
++static long demote_pool_huge_page(struct hstate *src, nodemask_t *nodes_allowed,
++				  unsigned long nr_to_demote)
+ 	__must_hold(&hugetlb_lock)
+ {
+ 	int nr_nodes, node;
+-	struct folio *folio;
++	struct hstate *dst;
++	long rc = 0;
++	long nr_demoted = 0;
+ 
+ 	lockdep_assert_held(&hugetlb_lock);
+ 
+ 	/* We should never get here if no demote order */
+-	if (!h->demote_order) {
++	if (!src->demote_order) {
+ 		pr_warn("HugeTLB: NULL demote order passed to demote_pool_huge_page.\n");
+ 		return -EINVAL;		/* internal error */
+ 	}
++	dst = size_to_hstate(PAGE_SIZE << src->demote_order);
+ 
+-	for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
+-		list_for_each_entry(folio, &h->hugepage_freelists[node], lru) {
++	for_each_node_mask_to_free(src, nr_nodes, node, nodes_allowed) {
++		LIST_HEAD(list);
++		struct folio *folio, *next;
++
++		list_for_each_entry_safe(folio, next, &src->hugepage_freelists[node], lru) {
+ 			if (folio_test_hwpoison(folio))
+ 				continue;
+-			return demote_free_hugetlb_folio(h, folio);
++
++			remove_hugetlb_folio(src, folio, false);
++			list_add(&folio->lru, &list);
++
++			if (++nr_demoted == nr_to_demote)
++				break;
+ 		}
++
++		spin_unlock_irq(&hugetlb_lock);
++
++		rc = demote_free_hugetlb_folios(src, dst, &list);
++
++		spin_lock_irq(&hugetlb_lock);
++
++		list_for_each_entry_safe(folio, next, &list, lru) {
++			list_del(&folio->lru);
++			add_hugetlb_folio(src, folio, false);
++
++			nr_demoted--;
++		}
++
++		if (rc < 0 || nr_demoted == nr_to_demote)
++			break;
+ 	}
+ 
++	/*
++	 * Not absolutely necessary, but for consistency update max_huge_pages
++	 * based on pool changes for the demoted page.
++	 */
++	src->max_huge_pages -= nr_demoted;
++	dst->max_huge_pages += nr_demoted << (huge_page_order(src) - huge_page_order(dst));
++
++	if (rc < 0)
++		return rc;
++
++	if (nr_demoted)
++		return nr_demoted;
+ 	/*
+ 	 * Only way to get here is if all pages on free lists are poisoned.
+ 	 * Return -EBUSY so that caller will not retry.
+@@ -4247,6 +4271,8 @@ static ssize_t demote_store(struct kobject *kobj,
+ 	spin_lock_irq(&hugetlb_lock);
+ 
+ 	while (nr_demote) {
++		long rc;
++
+ 		/*
+ 		 * Check for available pages to demote each time thorough the
+ 		 * loop as demote_pool_huge_page will drop hugetlb_lock.
+@@ -4259,11 +4285,13 @@ static ssize_t demote_store(struct kobject *kobj,
+ 		if (!nr_available)
+ 			break;
+ 
+-		err = demote_pool_huge_page(h, n_mask);
+-		if (err)
++		rc = demote_pool_huge_page(h, n_mask, nr_demote);
++		if (rc < 0) {
++			err = rc;
+ 			break;
++		}
+ 
+-		nr_demote--;
++		nr_demote -= rc;
+ 	}
+ 
+ 	spin_unlock_irq(&hugetlb_lock);
+@@ -6047,7 +6075,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
+ 	 * When the original hugepage is shared one, it does not have
+ 	 * anon_vma prepared.
+ 	 */
+-	ret = vmf_anon_prepare(vmf);
++	ret = __vmf_anon_prepare(vmf);
+ 	if (unlikely(ret))
+ 		goto out_release_all;
+ 
+@@ -6246,7 +6274,7 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
+ 		}
+ 
+ 		if (!(vma->vm_flags & VM_MAYSHARE)) {
+-			ret = vmf_anon_prepare(vmf);
++			ret = __vmf_anon_prepare(vmf);
+ 			if (unlikely(ret))
+ 				goto out;
+ 		}
+@@ -6378,6 +6406,14 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
+ 	folio_unlock(folio);
+ out:
+ 	hugetlb_vma_unlock_read(vma);
++
++	/*
++	 * We must check to release the per-VMA lock. __vmf_anon_prepare() is
++	 * the only way ret can be set to VM_FAULT_RETRY.
++	 */
++	if (unlikely(ret & VM_FAULT_RETRY))
++		vma_end_read(vma);
++
+ 	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+ 	return ret;
+ 
+@@ -6599,6 +6635,14 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ 	}
+ out_mutex:
+ 	hugetlb_vma_unlock_read(vma);
++
++	/*
++	 * We must check to release the per-VMA lock. __vmf_anon_prepare() in
++	 * hugetlb_wp() is the only way ret can be set to VM_FAULT_RETRY.
++	 */
++	if (unlikely(ret & VM_FAULT_RETRY))
++		vma_end_read(vma);
++
+ 	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+ 	/*
+ 	 * Generally it's safe to hold refcount during waiting page lock. But
+diff --git a/mm/internal.h b/mm/internal.h
+index cc2c5e07fad3ba..6ef5eecabfc4f9 100644
+--- a/mm/internal.h
++++ b/mm/internal.h
+@@ -293,7 +293,16 @@ static inline void wake_throttle_isolated(pg_data_t *pgdat)
+ 		wake_up(wqh);
+ }
+ 
+-vm_fault_t vmf_anon_prepare(struct vm_fault *vmf);
++vm_fault_t __vmf_anon_prepare(struct vm_fault *vmf);
++static inline vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
++{
++	vm_fault_t ret = __vmf_anon_prepare(vmf);
++
++	if (unlikely(ret & VM_FAULT_RETRY))
++		vma_end_read(vmf->vma);
++	return ret;
++}
++
+ vm_fault_t do_swap_page(struct vm_fault *vmf);
+ void folio_rotate_reclaimable(struct folio *folio);
+ bool __folio_end_writeback(struct folio *folio);
+diff --git a/mm/memory.c b/mm/memory.c
+index 7a898b85788dd9..cfc4df9fe99544 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3226,7 +3226,7 @@ static inline vm_fault_t vmf_can_call_fault(const struct vm_fault *vmf)
+ }
+ 
+ /**
+- * vmf_anon_prepare - Prepare to handle an anonymous fault.
++ * __vmf_anon_prepare - Prepare to handle an anonymous fault.
+  * @vmf: The vm_fault descriptor passed from the fault handler.
+  *
+  * When preparing to insert an anonymous page into a VMA from a
+@@ -3240,7 +3240,7 @@ static inline vm_fault_t vmf_can_call_fault(const struct vm_fault *vmf)
+  * Return: 0 if fault handling can proceed.  Any other value should be
+  * returned to the caller.
+  */
+-vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
++vm_fault_t __vmf_anon_prepare(struct vm_fault *vmf)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+ 	vm_fault_t ret = 0;
+@@ -3248,10 +3248,8 @@ vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
+ 	if (likely(vma->anon_vma))
+ 		return 0;
+ 	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+-		if (!mmap_read_trylock(vma->vm_mm)) {
+-			vma_end_read(vma);
++		if (!mmap_read_trylock(vma->vm_mm))
+ 			return VM_FAULT_RETRY;
+-		}
+ 	}
+ 	if (__anon_vma_prepare(vma))
+ 		ret = VM_FAULT_OOM;
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 9dabeb90f772d8..817019be893653 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -1129,7 +1129,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
+ 	int rc = -EAGAIN;
+ 	int old_page_state = 0;
+ 	struct anon_vma *anon_vma = NULL;
+-	bool is_lru = !__folio_test_movable(src);
++	bool is_lru = data_race(!__folio_test_movable(src));
+ 	bool locked = false;
+ 	bool dst_locked = false;
+ 
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 83b4682ec85cfa..47bd6f65f44139 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -3127,8 +3127,12 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
+ 		flags |= MAP_LOCKED;
+ 
+ 	file = get_file(vma->vm_file);
++	ret = security_mmap_file(vma->vm_file, prot, flags);
++	if (ret)
++		goto out_fput;
+ 	ret = do_mmap(vma->vm_file, start, size,
+ 			prot, flags, 0, pgoff, &populate, NULL);
++out_fput:
+ 	fput(file);
+ out:
+ 	mmap_write_unlock(mm);
+diff --git a/mm/util.c b/mm/util.c
+index fe723241b66f73..37a5d75f8adf05 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -451,7 +451,7 @@ static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
+ 	if (gap + pad > gap)
+ 		gap += pad;
+ 
+-	if (gap < MIN_GAP)
++	if (gap < MIN_GAP && MIN_GAP < MAX_GAP)
+ 		gap = MIN_GAP;
+ 	else if (gap > MAX_GAP)
+ 		gap = MAX_GAP;
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 3c74d171085deb..bfa773730f3bd7 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -107,8 +107,7 @@ void hci_connect_le_scan_cleanup(struct hci_conn *conn, u8 status)
+ 	 * where a timeout + cancel does indicate an actual failure.
+ 	 */
+ 	if (status && status != HCI_ERROR_UNKNOWN_CONN_ID)
+-		mgmt_connect_failed(hdev, &conn->dst, conn->type,
+-				    conn->dst_type, status);
++		mgmt_connect_failed(hdev, conn, status);
+ 
+ 	/* The connection attempt was doing scan for new RPA, and is
+ 	 * in scan phase. If params are not associated with any other
+@@ -1251,8 +1250,7 @@ void hci_conn_failed(struct hci_conn *conn, u8 status)
+ 		hci_le_conn_failed(conn, status);
+ 		break;
+ 	case ACL_LINK:
+-		mgmt_connect_failed(hdev, &conn->dst, conn->type,
+-				    conn->dst_type, status);
++		mgmt_connect_failed(hdev, conn, status);
+ 		break;
+ 	}
+ 
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index f4a54dbc07f19a..86fee9d6c14248 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -5331,7 +5331,10 @@ int hci_stop_discovery_sync(struct hci_dev *hdev)
+ 		if (!e)
+ 			return 0;
+ 
+-		return hci_remote_name_cancel_sync(hdev, &e->data.bdaddr);
++		/* Ignore cancel errors since they should not interfere with
++		 * stopping the discovery.
++		 */
++		hci_remote_name_cancel_sync(hdev, &e->data.bdaddr);
+ 	}
+ 
+ 	return 0;
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index ba28907afb3fa6..c383eb44d516bc 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -9734,13 +9734,18 @@ void mgmt_disconnect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 	mgmt_pending_remove(cmd);
+ }
+ 
+-void mgmt_connect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type,
+-			 u8 addr_type, u8 status)
++void mgmt_connect_failed(struct hci_dev *hdev, struct hci_conn *conn, u8 status)
+ {
+ 	struct mgmt_ev_connect_failed ev;
+ 
+-	bacpy(&ev.addr.bdaddr, bdaddr);
+-	ev.addr.type = link_to_bdaddr(link_type, addr_type);
++	if (test_and_clear_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags)) {
++		mgmt_device_disconnected(hdev, &conn->dst, conn->type,
++					 conn->dst_type, status, true);
++		return;
++	}
++
++	bacpy(&ev.addr.bdaddr, &conn->dst);
++	ev.addr.type = link_to_bdaddr(conn->type, conn->dst_type);
+ 	ev.status = mgmt_status(status);
+ 
+ 	mgmt_event(MGMT_EV_CONNECT_FAILED, hdev, &ev, sizeof(ev), NULL);
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index 46d3ec3aa44b4a..217049fa496e9d 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -1471,8 +1471,10 @@ static void bcm_notify(struct bcm_sock *bo, unsigned long msg,
+ 		/* remove device reference, if this is our bound device */
+ 		if (bo->bound && bo->ifindex == dev->ifindex) {
+ #if IS_ENABLED(CONFIG_PROC_FS)
+-			if (sock_net(sk)->can.bcmproc_dir && bo->bcm_proc_read)
++			if (sock_net(sk)->can.bcmproc_dir && bo->bcm_proc_read) {
+ 				remove_proc_entry(bo->procname, sock_net(sk)->can.bcmproc_dir);
++				bo->bcm_proc_read = NULL;
++			}
+ #endif
+ 			bo->bound   = 0;
+ 			bo->ifindex = 0;
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 4be73de5033cb7..319f47df33300c 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -1179,10 +1179,10 @@ static enum hrtimer_restart j1939_tp_txtimer(struct hrtimer *hrtimer)
+ 		break;
+ 	case -ENETDOWN:
+ 		/* In this case we should get a netdev_event(), all active
+-		 * sessions will be cleared by
+-		 * j1939_cancel_all_active_sessions(). So handle this as an
+-		 * error, but let j1939_cancel_all_active_sessions() do the
+-		 * cleanup including propagation of the error to user space.
++		 * sessions will be cleared by j1939_cancel_active_session().
++		 * So handle this as an error, but let
++		 * j1939_cancel_active_session() do the cleanup including
++		 * propagation of the error to user space.
+ 		 */
+ 		break;
+ 	case -EOVERFLOW:
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 55b1d9de2334db..61da5512cd4d2a 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -6258,20 +6258,25 @@ BPF_CALL_5(bpf_skb_check_mtu, struct sk_buff *, skb,
+ 	int ret = BPF_MTU_CHK_RET_FRAG_NEEDED;
+ 	struct net_device *dev = skb->dev;
+ 	int skb_len, dev_len;
+-	int mtu;
++	int mtu = 0;
+ 
+-	if (unlikely(flags & ~(BPF_MTU_CHK_SEGS)))
+-		return -EINVAL;
++	if (unlikely(flags & ~(BPF_MTU_CHK_SEGS))) {
++		ret = -EINVAL;
++		goto out;
++	}
+ 
+-	if (unlikely(flags & BPF_MTU_CHK_SEGS && (len_diff || *mtu_len)))
+-		return -EINVAL;
++	if (unlikely(flags & BPF_MTU_CHK_SEGS && (len_diff || *mtu_len))) {
++		ret = -EINVAL;
++		goto out;
++	}
+ 
+ 	dev = __dev_via_ifindex(dev, ifindex);
+-	if (unlikely(!dev))
+-		return -ENODEV;
++	if (unlikely(!dev)) {
++		ret = -ENODEV;
++		goto out;
++	}
+ 
+ 	mtu = READ_ONCE(dev->mtu);
+-
+ 	dev_len = mtu + dev->hard_header_len;
+ 
+ 	/* If set use *mtu_len as input, L3 as iph->tot_len (like fib_lookup) */
+@@ -6289,15 +6294,12 @@ BPF_CALL_5(bpf_skb_check_mtu, struct sk_buff *, skb,
+ 	 */
+ 	if (skb_is_gso(skb)) {
+ 		ret = BPF_MTU_CHK_RET_SUCCESS;
+-
+ 		if (flags & BPF_MTU_CHK_SEGS &&
+ 		    !skb_gso_validate_network_len(skb, mtu))
+ 			ret = BPF_MTU_CHK_RET_SEGS_TOOBIG;
+ 	}
+ out:
+-	/* BPF verifier guarantees valid pointer */
+ 	*mtu_len = mtu;
+-
+ 	return ret;
+ }
+ 
+@@ -6307,19 +6309,21 @@ BPF_CALL_5(bpf_xdp_check_mtu, struct xdp_buff *, xdp,
+ 	struct net_device *dev = xdp->rxq->dev;
+ 	int xdp_len = xdp->data_end - xdp->data;
+ 	int ret = BPF_MTU_CHK_RET_SUCCESS;
+-	int mtu, dev_len;
++	int mtu = 0, dev_len;
+ 
+ 	/* XDP variant doesn't support multi-buffer segment check (yet) */
+-	if (unlikely(flags))
+-		return -EINVAL;
++	if (unlikely(flags)) {
++		ret = -EINVAL;
++		goto out;
++	}
+ 
+ 	dev = __dev_via_ifindex(dev, ifindex);
+-	if (unlikely(!dev))
+-		return -ENODEV;
++	if (unlikely(!dev)) {
++		ret = -ENODEV;
++		goto out;
++	}
+ 
+ 	mtu = READ_ONCE(dev->mtu);
+-
+-	/* Add L2-header as dev MTU is L3 size */
+ 	dev_len = mtu + dev->hard_header_len;
+ 
+ 	/* Use *mtu_len as input, L3 as iph->tot_len (like fib_lookup) */
+@@ -6329,10 +6333,8 @@ BPF_CALL_5(bpf_xdp_check_mtu, struct xdp_buff *, xdp,
+ 	xdp_len += len_diff; /* minus result pass check */
+ 	if (xdp_len > dev_len)
+ 		ret = BPF_MTU_CHK_RET_FRAG_NEEDED;
+-
+-	/* BPF verifier guarantees valid pointer */
++out:
+ 	*mtu_len = mtu;
+-
+ 	return ret;
+ }
+ 
+@@ -6342,7 +6344,8 @@ static const struct bpf_func_proto bpf_skb_check_mtu_proto = {
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type      = ARG_PTR_TO_CTX,
+ 	.arg2_type      = ARG_ANYTHING,
+-	.arg3_type      = ARG_PTR_TO_INT,
++	.arg3_type      = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++	.arg3_size	= sizeof(u32),
+ 	.arg4_type      = ARG_ANYTHING,
+ 	.arg5_type      = ARG_ANYTHING,
+ };
+@@ -6353,7 +6356,8 @@ static const struct bpf_func_proto bpf_xdp_check_mtu_proto = {
+ 	.ret_type	= RET_INTEGER,
+ 	.arg1_type      = ARG_PTR_TO_CTX,
+ 	.arg2_type      = ARG_ANYTHING,
+-	.arg3_type      = ARG_PTR_TO_INT,
++	.arg3_type      = ARG_PTR_TO_FIXED_SIZE_MEM | MEM_UNINIT | MEM_ALIGNED,
++	.arg3_size	= sizeof(u32),
+ 	.arg4_type      = ARG_ANYTHING,
+ 	.arg5_type      = ARG_ANYTHING,
+ };
+@@ -8568,13 +8572,16 @@ static bool bpf_skb_is_valid_access(int off, int size, enum bpf_access_type type
+ 		if (off + size > offsetofend(struct __sk_buff, cb[4]))
+ 			return false;
+ 		break;
++	case bpf_ctx_range(struct __sk_buff, data):
++	case bpf_ctx_range(struct __sk_buff, data_meta):
++	case bpf_ctx_range(struct __sk_buff, data_end):
++		if (info->is_ldsx || size != size_default)
++			return false;
++		break;
+ 	case bpf_ctx_range_till(struct __sk_buff, remote_ip6[0], remote_ip6[3]):
+ 	case bpf_ctx_range_till(struct __sk_buff, local_ip6[0], local_ip6[3]):
+ 	case bpf_ctx_range_till(struct __sk_buff, remote_ip4, remote_ip4):
+ 	case bpf_ctx_range_till(struct __sk_buff, local_ip4, local_ip4):
+-	case bpf_ctx_range(struct __sk_buff, data):
+-	case bpf_ctx_range(struct __sk_buff, data_meta):
+-	case bpf_ctx_range(struct __sk_buff, data_end):
+ 		if (size != size_default)
+ 			return false;
+ 		break;
+@@ -9018,6 +9025,14 @@ static bool xdp_is_valid_access(int off, int size,
+ 			}
+ 		}
+ 		return false;
++	} else {
++		switch (off) {
++		case offsetof(struct xdp_md, data_meta):
++		case offsetof(struct xdp_md, data):
++		case offsetof(struct xdp_md, data_end):
++			if (info->is_ldsx)
++				return false;
++		}
+ 	}
+ 
+ 	switch (off) {
+@@ -9343,12 +9358,12 @@ static bool flow_dissector_is_valid_access(int off, int size,
+ 
+ 	switch (off) {
+ 	case bpf_ctx_range(struct __sk_buff, data):
+-		if (size != size_default)
++		if (info->is_ldsx || size != size_default)
+ 			return false;
+ 		info->reg_type = PTR_TO_PACKET;
+ 		return true;
+ 	case bpf_ctx_range(struct __sk_buff, data_end):
+-		if (size != size_default)
++		if (info->is_ldsx || size != size_default)
+ 			return false;
+ 		info->reg_type = PTR_TO_PACKET_END;
+ 		return true;
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index d3dbb92153f2fe..724b6856fcc3e9 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -1183,6 +1183,7 @@ static void sock_hash_free(struct bpf_map *map)
+ 			sock_put(elem->sk);
+ 			sock_hash_free_elem(htab, elem);
+ 		}
++		cond_resched();
+ 	}
+ 
+ 	/* wait for psock readers accessing its map link */
+diff --git a/net/hsr/hsr_slave.c b/net/hsr/hsr_slave.c
+index af6cf64a00e081..464f683e016dbb 100644
+--- a/net/hsr/hsr_slave.c
++++ b/net/hsr/hsr_slave.c
+@@ -67,7 +67,16 @@ static rx_handler_result_t hsr_handle_frame(struct sk_buff **pskb)
+ 		skb_set_network_header(skb, ETH_HLEN + HSR_HLEN);
+ 	skb_reset_mac_len(skb);
+ 
+-	hsr_forward_skb(skb, port);
++	/* Only frames received over the interlink port are assigned a
++	 * sequence number and require synchronisation with other senders.
++	 */
++	if (port->type == HSR_PT_INTERLINK) {
++		spin_lock_bh(&hsr->seqnr_lock);
++		hsr_forward_skb(skb, port);
++		spin_unlock_bh(&hsr->seqnr_lock);
++	} else {
++		hsr_forward_skb(skb, port);
++	}
+ 
+ finish_consume:
+ 	return RX_HANDLER_CONSUMED;
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index ab6d0d98dbc34c..336518e623b280 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -224,57 +224,59 @@ int sysctl_icmp_msgs_per_sec __read_mostly = 1000;
+ int sysctl_icmp_msgs_burst __read_mostly = 50;
+ 
+ static struct {
+-	spinlock_t	lock;
+-	u32		credit;
++	atomic_t	credit;
+ 	u32		stamp;
+-} icmp_global = {
+-	.lock		= __SPIN_LOCK_UNLOCKED(icmp_global.lock),
+-};
++} icmp_global;
+ 
+ /**
+  * icmp_global_allow - Are we allowed to send one more ICMP message ?
+  *
+  * Uses a token bucket to limit our ICMP messages to ~sysctl_icmp_msgs_per_sec.
+  * Returns false if we reached the limit and can not send another packet.
+- * Note: called with BH disabled
++ * Works in tandem with icmp_global_consume().
+  */
+ bool icmp_global_allow(void)
+ {
+-	u32 credit, delta, incr = 0, now = (u32)jiffies;
+-	bool rc = false;
++	u32 delta, now, oldstamp;
++	int incr, new, old;
+ 
+-	/* Check if token bucket is empty and cannot be refilled
+-	 * without taking the spinlock. The READ_ONCE() are paired
+-	 * with the following WRITE_ONCE() in this same function.
++	/* Note: many CPUs could find this condition true.
++	 * Then later icmp_global_consume() could consume more credits;
++	 * this is an acceptable race.
+ 	 */
+-	if (!READ_ONCE(icmp_global.credit)) {
+-		delta = min_t(u32, now - READ_ONCE(icmp_global.stamp), HZ);
+-		if (delta < HZ / 50)
+-			return false;
+-	}
++	if (atomic_read(&icmp_global.credit) > 0)
++		return true;
+ 
+-	spin_lock(&icmp_global.lock);
+-	delta = min_t(u32, now - icmp_global.stamp, HZ);
+-	if (delta >= HZ / 50) {
+-		incr = READ_ONCE(sysctl_icmp_msgs_per_sec) * delta / HZ;
+-		if (incr)
+-			WRITE_ONCE(icmp_global.stamp, now);
+-	}
+-	credit = min_t(u32, icmp_global.credit + incr,
+-		       READ_ONCE(sysctl_icmp_msgs_burst));
+-	if (credit) {
+-		/* We want to use a credit of one in average, but need to randomize
+-		 * it for security reasons.
+-		 */
+-		credit = max_t(int, credit - get_random_u32_below(3), 0);
+-		rc = true;
++	now = jiffies;
++	oldstamp = READ_ONCE(icmp_global.stamp);
++	delta = min_t(u32, now - oldstamp, HZ);
++	if (delta < HZ / 50)
++		return false;
++
++	incr = READ_ONCE(sysctl_icmp_msgs_per_sec) * delta / HZ;
++	if (!incr)
++		return false;
++
++	if (cmpxchg(&icmp_global.stamp, oldstamp, now) == oldstamp) {
++		old = atomic_read(&icmp_global.credit);
++		do {
++			new = min(old + incr, READ_ONCE(sysctl_icmp_msgs_burst));
++		} while (!atomic_try_cmpxchg(&icmp_global.credit, &old, new));
+ 	}
+-	WRITE_ONCE(icmp_global.credit, credit);
+-	spin_unlock(&icmp_global.lock);
+-	return rc;
++	return true;
+ }
+ EXPORT_SYMBOL(icmp_global_allow);
+ 
++void icmp_global_consume(void)
++{
++	int credits = get_random_u32_below(3);
++
++	/* Note: this might make icmp_global.credit negative. */
++	if (credits)
++		atomic_sub(credits, &icmp_global.credit);
++}
++EXPORT_SYMBOL(icmp_global_consume);
++
+ static bool icmpv4_mask_allow(struct net *net, int type, int code)
+ {
+ 	if (type > NR_ICMP_TYPES)
+@@ -291,14 +293,16 @@ static bool icmpv4_mask_allow(struct net *net, int type, int code)
+ 	return false;
+ }
+ 
+-static bool icmpv4_global_allow(struct net *net, int type, int code)
++static bool icmpv4_global_allow(struct net *net, int type, int code,
++				bool *apply_ratelimit)
+ {
+ 	if (icmpv4_mask_allow(net, type, code))
+ 		return true;
+ 
+-	if (icmp_global_allow())
++	if (icmp_global_allow()) {
++		*apply_ratelimit = true;
+ 		return true;
+-
++	}
+ 	__ICMP_INC_STATS(net, ICMP_MIB_RATELIMITGLOBAL);
+ 	return false;
+ }
+@@ -308,15 +312,16 @@ static bool icmpv4_global_allow(struct net *net, int type, int code)
+  */
+ 
+ static bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt,
+-			       struct flowi4 *fl4, int type, int code)
++			       struct flowi4 *fl4, int type, int code,
++			       bool apply_ratelimit)
+ {
+ 	struct dst_entry *dst = &rt->dst;
+ 	struct inet_peer *peer;
+ 	bool rc = true;
+ 	int vif;
+ 
+-	if (icmpv4_mask_allow(net, type, code))
+-		goto out;
++	if (!apply_ratelimit)
++		return true;
+ 
+ 	/* No rate limit on loopback */
+ 	if (dst->dev && (dst->dev->flags&IFF_LOOPBACK))
+@@ -331,6 +336,8 @@ static bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt,
+ out:
+ 	if (!rc)
+ 		__ICMP_INC_STATS(net, ICMP_MIB_RATELIMITHOST);
++	else
++		icmp_global_consume();
+ 	return rc;
+ }
+ 
+@@ -402,6 +409,7 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
+ 	struct ipcm_cookie ipc;
+ 	struct rtable *rt = skb_rtable(skb);
+ 	struct net *net = dev_net(rt->dst.dev);
++	bool apply_ratelimit = false;
+ 	struct flowi4 fl4;
+ 	struct sock *sk;
+ 	struct inet_sock *inet;
+@@ -413,11 +421,11 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
+ 	if (ip_options_echo(net, &icmp_param->replyopts.opt.opt, skb))
+ 		return;
+ 
+-	/* Needed by both icmp_global_allow and icmp_xmit_lock */
++	/* Needed by both icmpv4_global_allow and icmp_xmit_lock */
+ 	local_bh_disable();
+ 
+-	/* global icmp_msgs_per_sec */
+-	if (!icmpv4_global_allow(net, type, code))
++	/* is global icmp_msgs_per_sec exhausted ? */
++	if (!icmpv4_global_allow(net, type, code, &apply_ratelimit))
+ 		goto out_bh_enable;
+ 
+ 	sk = icmp_xmit_lock(net);
+@@ -450,7 +458,7 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
+ 	rt = ip_route_output_key(net, &fl4);
+ 	if (IS_ERR(rt))
+ 		goto out_unlock;
+-	if (icmpv4_xrlim_allow(net, rt, &fl4, type, code))
++	if (icmpv4_xrlim_allow(net, rt, &fl4, type, code, apply_ratelimit))
+ 		icmp_push_reply(sk, icmp_param, &fl4, &ipc, &rt);
+ 	ip_rt_put(rt);
+ out_unlock:
+@@ -596,6 +604,7 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ 	int room;
+ 	struct icmp_bxm icmp_param;
+ 	struct rtable *rt = skb_rtable(skb_in);
++	bool apply_ratelimit = false;
+ 	struct ipcm_cookie ipc;
+ 	struct flowi4 fl4;
+ 	__be32 saddr;
+@@ -677,7 +686,7 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ 		}
+ 	}
+ 
+-	/* Needed by both icmp_global_allow and icmp_xmit_lock */
++	/* Needed by both icmpv4_global_allow and icmp_xmit_lock */
+ 	local_bh_disable();
+ 
+ 	/* Check global sysctl_icmp_msgs_per_sec ratelimit, unless
+@@ -685,7 +694,7 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ 	 * loopback, then peer ratelimit still work (in icmpv4_xrlim_allow)
+ 	 */
+ 	if (!(skb_in->dev && (skb_in->dev->flags&IFF_LOOPBACK)) &&
+-	      !icmpv4_global_allow(net, type, code))
++	      !icmpv4_global_allow(net, type, code, &apply_ratelimit))
+ 		goto out_bh_enable;
+ 
+ 	sk = icmp_xmit_lock(net);
+@@ -744,7 +753,7 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ 		goto out_unlock;
+ 
+ 	/* peer icmp_ratelimit */
+-	if (!icmpv4_xrlim_allow(net, rt, &fl4, type, code))
++	if (!icmpv4_xrlim_allow(net, rt, &fl4, type, code, apply_ratelimit))
+ 		goto ende;
+ 
+ 	/* RFC says return as much as we can without exceeding 576 bytes. */
+diff --git a/net/ipv6/Kconfig b/net/ipv6/Kconfig
+index 08d4b7132d4c45..1c9c686d9522f7 100644
+--- a/net/ipv6/Kconfig
++++ b/net/ipv6/Kconfig
+@@ -323,6 +323,7 @@ config IPV6_RPL_LWTUNNEL
+ 	bool "IPv6: RPL Source Routing Header support"
+ 	depends on IPV6
+ 	select LWTUNNEL
++	select DST_CACHE
+ 	help
+ 	  Support for RFC6554 RPL Source Routing Header using the lightweight
+ 	  tunnels mechanism.
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index 7b31674644efc3..46f70e4a835139 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -175,14 +175,16 @@ static bool icmpv6_mask_allow(struct net *net, int type)
+ 	return false;
+ }
+ 
+-static bool icmpv6_global_allow(struct net *net, int type)
++static bool icmpv6_global_allow(struct net *net, int type,
++				bool *apply_ratelimit)
+ {
+ 	if (icmpv6_mask_allow(net, type))
+ 		return true;
+ 
+-	if (icmp_global_allow())
++	if (icmp_global_allow()) {
++		*apply_ratelimit = true;
+ 		return true;
+-
++	}
+ 	__ICMP_INC_STATS(net, ICMP_MIB_RATELIMITGLOBAL);
+ 	return false;
+ }
+@@ -191,13 +193,13 @@ static bool icmpv6_global_allow(struct net *net, int type)
+  * Check the ICMP output rate limit
+  */
+ static bool icmpv6_xrlim_allow(struct sock *sk, u8 type,
+-			       struct flowi6 *fl6)
++			       struct flowi6 *fl6, bool apply_ratelimit)
+ {
+ 	struct net *net = sock_net(sk);
+ 	struct dst_entry *dst;
+ 	bool res = false;
+ 
+-	if (icmpv6_mask_allow(net, type))
++	if (!apply_ratelimit)
+ 		return true;
+ 
+ 	/*
+@@ -228,6 +230,8 @@ static bool icmpv6_xrlim_allow(struct sock *sk, u8 type,
+ 	if (!res)
+ 		__ICMP6_INC_STATS(net, ip6_dst_idev(dst),
+ 				  ICMP6_MIB_RATELIMITHOST);
++	else
++		icmp_global_consume();
+ 	dst_release(dst);
+ 	return res;
+ }
+@@ -452,6 +456,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ 	struct net *net;
+ 	struct ipv6_pinfo *np;
+ 	const struct in6_addr *saddr = NULL;
++	bool apply_ratelimit = false;
+ 	struct dst_entry *dst;
+ 	struct icmp6hdr tmp_hdr;
+ 	struct flowi6 fl6;
+@@ -533,11 +538,12 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ 		return;
+ 	}
+ 
+-	/* Needed by both icmp_global_allow and icmpv6_xmit_lock */
++	/* Needed by both icmpv6_global_allow and icmpv6_xmit_lock */
+ 	local_bh_disable();
+ 
+ 	/* Check global sysctl_icmp_msgs_per_sec ratelimit */
+-	if (!(skb->dev->flags & IFF_LOOPBACK) && !icmpv6_global_allow(net, type))
++	if (!(skb->dev->flags & IFF_LOOPBACK) &&
++	    !icmpv6_global_allow(net, type, &apply_ratelimit))
+ 		goto out_bh_enable;
+ 
+ 	mip6_addr_swap(skb, parm);
+@@ -575,7 +581,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ 
+ 	np = inet6_sk(sk);
+ 
+-	if (!icmpv6_xrlim_allow(sk, type, &fl6))
++	if (!icmpv6_xrlim_allow(sk, type, &fl6, apply_ratelimit))
+ 		goto out;
+ 
+ 	tmp_hdr.icmp6_type = type;
+@@ -717,6 +723,7 @@ static enum skb_drop_reason icmpv6_echo_reply(struct sk_buff *skb)
+ 	struct ipv6_pinfo *np;
+ 	const struct in6_addr *saddr = NULL;
+ 	struct icmp6hdr *icmph = icmp6_hdr(skb);
++	bool apply_ratelimit = false;
+ 	struct icmp6hdr tmp_hdr;
+ 	struct flowi6 fl6;
+ 	struct icmpv6_msg msg;
+@@ -781,8 +788,9 @@ static enum skb_drop_reason icmpv6_echo_reply(struct sk_buff *skb)
+ 		goto out;
+ 
+ 	/* Check the ratelimit */
+-	if ((!(skb->dev->flags & IFF_LOOPBACK) && !icmpv6_global_allow(net, ICMPV6_ECHO_REPLY)) ||
+-	    !icmpv6_xrlim_allow(sk, ICMPV6_ECHO_REPLY, &fl6))
++	if ((!(skb->dev->flags & IFF_LOOPBACK) &&
++	    !icmpv6_global_allow(net, ICMPV6_ECHO_REPLY, &apply_ratelimit)) ||
++	    !icmpv6_xrlim_allow(sk, ICMPV6_ECHO_REPLY, &fl6, apply_ratelimit))
+ 		goto out_dst_release;
+ 
+ 	idev = __in6_dev_get(skb->dev);
+diff --git a/net/ipv6/netfilter/nf_reject_ipv6.c b/net/ipv6/netfilter/nf_reject_ipv6.c
+index dedee264b8f6c8..b9457473c176df 100644
+--- a/net/ipv6/netfilter/nf_reject_ipv6.c
++++ b/net/ipv6/netfilter/nf_reject_ipv6.c
+@@ -223,33 +223,23 @@ void nf_reject_ip6_tcphdr_put(struct sk_buff *nskb,
+ 			      const struct tcphdr *oth, unsigned int otcplen)
+ {
+ 	struct tcphdr *tcph;
+-	int needs_ack;
+ 
+ 	skb_reset_transport_header(nskb);
+-	tcph = skb_put(nskb, sizeof(struct tcphdr));
++	tcph = skb_put_zero(nskb, sizeof(struct tcphdr));
+ 	/* Truncate to length (no data) */
+ 	tcph->doff = sizeof(struct tcphdr)/4;
+ 	tcph->source = oth->dest;
+ 	tcph->dest = oth->source;
+ 
+ 	if (oth->ack) {
+-		needs_ack = 0;
+ 		tcph->seq = oth->ack_seq;
+-		tcph->ack_seq = 0;
+ 	} else {
+-		needs_ack = 1;
+ 		tcph->ack_seq = htonl(ntohl(oth->seq) + oth->syn + oth->fin +
+ 				      otcplen - (oth->doff<<2));
+-		tcph->seq = 0;
++		tcph->ack = 1;
+ 	}
+ 
+-	/* Reset flags */
+-	((u_int8_t *)tcph)[13] = 0;
+ 	tcph->rst = 1;
+-	tcph->ack = needs_ack;
+-	tcph->window = 0;
+-	tcph->urg_ptr = 0;
+-	tcph->check = 0;
+ 
+ 	/* Adjust TCP checksum */
+ 	tcph->check = csum_ipv6_magic(&ipv6_hdr(nskb)->saddr,
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index a9644a8edb9609..1febd95822c9a7 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -175,7 +175,7 @@ static void rt6_uncached_list_flush_dev(struct net_device *dev)
+ 			struct net_device *rt_dev = rt->dst.dev;
+ 			bool handled = false;
+ 
+-			if (rt_idev->dev == dev) {
++			if (rt_idev && rt_idev->dev == dev) {
+ 				rt->rt6i_idev = in6_dev_get(blackhole_netdev);
+ 				in6_dev_put(rt_idev);
+ 				handled = true;
+diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c
+index 2c83b7586422dd..db3c19a42e1ca7 100644
+--- a/net/ipv6/rpl_iptunnel.c
++++ b/net/ipv6/rpl_iptunnel.c
+@@ -263,10 +263,8 @@ static int rpl_input(struct sk_buff *skb)
+ 	rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
+ 
+ 	err = rpl_do_srh(skb, rlwt);
+-	if (unlikely(err)) {
+-		kfree_skb(skb);
+-		return err;
+-	}
++	if (unlikely(err))
++		goto drop;
+ 
+ 	local_bh_disable();
+ 	dst = dst_cache_get(&rlwt->cache);
+@@ -286,9 +284,13 @@ static int rpl_input(struct sk_buff *skb)
+ 
+ 	err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+ 	if (unlikely(err))
+-		return err;
++		goto drop;
+ 
+ 	return dst_input(skb);
++
++drop:
++	kfree_skb(skb);
++	return err;
+ }
+ 
+ static int nla_put_rpl_srh(struct sk_buff *skb, int attrtype,
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index b935bb5d8ed1f7..3e3814076006e6 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -462,6 +462,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ {
+ 	struct ieee80211_local *local = sdata->local;
+ 	unsigned long flags;
++	struct sk_buff_head freeq;
+ 	struct sk_buff *skb, *tmp;
+ 	u32 hw_reconf_flags = 0;
+ 	int i, flushed;
+@@ -641,18 +642,32 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ 		skb_queue_purge(&sdata->status_queue);
+ 	}
+ 
++	/*
++	 * Since ieee80211_free_txskb() may issue __dev_queue_xmit()
++	 * which should be called with interrupts enabled, reclamation
++	 * is done in two phases:
++	 */
++	__skb_queue_head_init(&freeq);
++
++	/* unlink from local queues... */
+ 	spin_lock_irqsave(&local->queue_stop_reason_lock, flags);
+ 	for (i = 0; i < IEEE80211_MAX_QUEUES; i++) {
+ 		skb_queue_walk_safe(&local->pending[i], skb, tmp) {
+ 			struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ 			if (info->control.vif == &sdata->vif) {
+ 				__skb_unlink(skb, &local->pending[i]);
+-				ieee80211_free_txskb(&local->hw, skb);
++				__skb_queue_tail(&freeq, skb);
+ 			}
+ 		}
+ 	}
+ 	spin_unlock_irqrestore(&local->queue_stop_reason_lock, flags);
+ 
++	/* ... and perform actual reclamation with interrupts enabled. */
++	skb_queue_walk_safe(&freeq, skb, tmp) {
++		__skb_unlink(skb, &freeq);
++		ieee80211_free_txskb(&local->hw, skb);
++	}
++
+ 	if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
+ 		ieee80211_txq_remove_vlan(local, sdata);
+ 
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index ad2ce9c92ba8a2..1faf4d7c115f08 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -4317,7 +4317,7 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link,
+ 	    ((assoc_data->wmm && !elems->wmm_param) ||
+ 	     (link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_HT &&
+ 	      (!elems->ht_cap_elem || !elems->ht_operation)) ||
+-	     (link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_VHT &&
++	     (is_5ghz && link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_VHT &&
+ 	      (!elems->vht_cap_elem || !elems->vht_operation)))) {
+ 		const struct cfg80211_bss_ies *ies;
+ 		struct ieee802_11_elems *bss_elems;
+@@ -4365,19 +4365,22 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link,
+ 			sdata_info(sdata,
+ 				   "AP bug: HT operation missing from AssocResp\n");
+ 		}
+-		if (!elems->vht_cap_elem && bss_elems->vht_cap_elem &&
+-		    link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_VHT) {
+-			elems->vht_cap_elem = bss_elems->vht_cap_elem;
+-			sdata_info(sdata,
+-				   "AP bug: VHT capa missing from AssocResp\n");
+-		}
+-		if (!elems->vht_operation && bss_elems->vht_operation &&
+-		    link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_VHT) {
+-			elems->vht_operation = bss_elems->vht_operation;
+-			sdata_info(sdata,
+-				   "AP bug: VHT operation missing from AssocResp\n");
+-		}
+ 
++		if (is_5ghz) {
++			if (!elems->vht_cap_elem && bss_elems->vht_cap_elem &&
++			    link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_VHT) {
++				elems->vht_cap_elem = bss_elems->vht_cap_elem;
++				sdata_info(sdata,
++					   "AP bug: VHT capa missing from AssocResp\n");
++			}
++
++			if (!elems->vht_operation && bss_elems->vht_operation &&
++			    link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_VHT) {
++				elems->vht_operation = bss_elems->vht_operation;
++				sdata_info(sdata,
++					   "AP bug: VHT operation missing from AssocResp\n");
++			}
++		}
+ 		kfree(bss_elems);
+ 	}
+ 
+@@ -7121,6 +7124,7 @@ static int ieee80211_do_assoc(struct ieee80211_sub_if_data *sdata)
+ 	lockdep_assert_wiphy(sdata->local->hw.wiphy);
+ 
+ 	assoc_data->tries++;
++	assoc_data->comeback = false;
+ 	if (assoc_data->tries > IEEE80211_ASSOC_MAX_TRIES) {
+ 		sdata_info(sdata, "association with %pM timed out\n",
+ 			   assoc_data->ap_addr);
+diff --git a/net/mac80211/offchannel.c b/net/mac80211/offchannel.c
+index 65e1e9e971fd69..5810d938edc44c 100644
+--- a/net/mac80211/offchannel.c
++++ b/net/mac80211/offchannel.c
+@@ -964,6 +964,7 @@ int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 	}
+ 
+ 	IEEE80211_SKB_CB(skb)->flags = flags;
++	IEEE80211_SKB_CB(skb)->control.flags |= IEEE80211_TX_CTRL_DONT_USE_RATE_MASK;
+ 
+ 	skb->dev = sdata->dev;
+ 
+diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c
+index 4dc1def6954865..3dc9752188d58f 100644
+--- a/net/mac80211/rate.c
++++ b/net/mac80211/rate.c
+@@ -890,7 +890,7 @@ void ieee80211_get_tx_rates(struct ieee80211_vif *vif,
+ 	if (ieee80211_is_tx_data(skb))
+ 		rate_control_apply_mask(sdata, sta, sband, dest, max_rates);
+ 
+-	if (!(info->control.flags & IEEE80211_TX_CTRL_SCAN_TX))
++	if (!(info->control.flags & IEEE80211_TX_CTRL_DONT_USE_RATE_MASK))
+ 		mask = sdata->rc_rateidx_mask[info->band];
+ 
+ 	if (dest[0].idx < 0)
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index b5f2df61c7f671..1c5d99975ad04d 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -649,7 +649,7 @@ static void ieee80211_send_scan_probe_req(struct ieee80211_sub_if_data *sdata,
+ 				cpu_to_le16(IEEE80211_SN_TO_SEQ(sn));
+ 		}
+ 		IEEE80211_SKB_CB(skb)->flags |= tx_flags;
+-		IEEE80211_SKB_CB(skb)->control.flags |= IEEE80211_TX_CTRL_SCAN_TX;
++		IEEE80211_SKB_CB(skb)->control.flags |= IEEE80211_TX_CTRL_DONT_USE_RATE_MASK;
+ 		ieee80211_tx_skb_tid_band(sdata, skb, 7, channel->band);
+ 	}
+ }
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index bca7b341dd772d..a9ee8698225929 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -699,7 +699,7 @@ ieee80211_tx_h_rate_ctrl(struct ieee80211_tx_data *tx)
+ 	txrc.skb = tx->skb;
+ 	txrc.reported_rate.idx = -1;
+ 
+-	if (unlikely(info->control.flags & IEEE80211_TX_CTRL_SCAN_TX)) {
++	if (unlikely(info->control.flags & IEEE80211_TX_CTRL_DONT_USE_RATE_MASK)) {
+ 		txrc.rate_idx_mask = ~0;
+ 	} else {
+ 		txrc.rate_idx_mask = tx->sdata->rc_rateidx_mask[info->band];
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 4cbf71d0786b0d..c55cf5bc36b2f2 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -382,7 +382,7 @@ static int ctnetlink_dump_secctx(struct sk_buff *skb, const struct nf_conn *ct)
+ #define ctnetlink_dump_secctx(a, b) (0)
+ #endif
+ 
+-#ifdef CONFIG_NF_CONNTRACK_LABELS
++#ifdef CONFIG_NF_CONNTRACK_EVENTS
+ static inline int ctnetlink_label_size(const struct nf_conn *ct)
+ {
+ 	struct nf_conn_labels *labels = nf_ct_labels_find(ct);
+@@ -391,6 +391,7 @@ static inline int ctnetlink_label_size(const struct nf_conn *ct)
+ 		return 0;
+ 	return nla_total_size(sizeof(labels->bits));
+ }
++#endif
+ 
+ static int
+ ctnetlink_dump_labels(struct sk_buff *skb, const struct nf_conn *ct)
+@@ -411,10 +412,6 @@ ctnetlink_dump_labels(struct sk_buff *skb, const struct nf_conn *ct)
+ 
+ 	return 0;
+ }
+-#else
+-#define ctnetlink_dump_labels(a, b) (0)
+-#define ctnetlink_label_size(a)	(0)
+-#endif
+ 
+ #define master_tuple(ct) &(ct->master->tuplehash[IP_CT_DIR_ORIGINAL].tuple)
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 41d7faeb101cfc..465cc43c75e30e 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1795,7 +1795,7 @@ static int nft_dump_basechain_hook(struct sk_buff *skb, int family,
+ 		if (!hook_list)
+ 			hook_list = &basechain->hook_list;
+ 
+-		list_for_each_entry(hook, hook_list, list) {
++		list_for_each_entry_rcu(hook, hook_list, list) {
+ 			if (!first)
+ 				first = hook;
+ 
+@@ -4544,7 +4544,7 @@ int nf_msecs_to_jiffies64(const struct nlattr *nla, u64 *result)
+ 		return -ERANGE;
+ 
+ 	ms *= NSEC_PER_MSEC;
+-	*result = nsecs_to_jiffies64(ms);
++	*result = nsecs_to_jiffies64(ms) ? : !!ms;
+ 	return 0;
+ }
+ 
+@@ -6631,7 +6631,7 @@ static int nft_setelem_catchall_insert(const struct net *net,
+ 		}
+ 	}
+ 
+-	catchall = kmalloc(sizeof(*catchall), GFP_KERNEL);
++	catchall = kmalloc(sizeof(*catchall), GFP_KERNEL_ACCOUNT);
+ 	if (!catchall)
+ 		return -ENOMEM;
+ 
+@@ -6867,17 +6867,23 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ 			return err;
+ 	} else if (set->flags & NFT_SET_TIMEOUT &&
+ 		   !(flags & NFT_SET_ELEM_INTERVAL_END)) {
+-		timeout = READ_ONCE(set->timeout);
++		timeout = set->timeout;
+ 	}
+ 
+ 	expiration = 0;
+ 	if (nla[NFTA_SET_ELEM_EXPIRATION] != NULL) {
+ 		if (!(set->flags & NFT_SET_TIMEOUT))
+ 			return -EINVAL;
++		if (timeout == 0)
++			return -EOPNOTSUPP;
++
+ 		err = nf_msecs_to_jiffies64(nla[NFTA_SET_ELEM_EXPIRATION],
+ 					    &expiration);
+ 		if (err)
+ 			return err;
++
++		if (expiration > timeout)
++			return -ERANGE;
+ 	}
+ 
+ 	if (nla[NFTA_SET_ELEM_EXPR]) {
+@@ -6968,7 +6974,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ 		if (err < 0)
+ 			goto err_parse_key_end;
+ 
+-		if (timeout != READ_ONCE(set->timeout)) {
++		if (timeout != set->timeout) {
+ 			err = nft_set_ext_add(&tmpl, NFT_SET_EXT_TIMEOUT);
+ 			if (err < 0)
+ 				goto err_parse_key_end;
+@@ -9131,7 +9137,7 @@ static void nf_tables_flowtable_destroy(struct nft_flowtable *flowtable)
+ 		flowtable->data.type->setup(&flowtable->data, hook->ops.dev,
+ 					    FLOW_BLOCK_UNBIND);
+ 		list_del_rcu(&hook->list);
+-		kfree(hook);
++		kfree_rcu(hook, rcu);
+ 	}
+ 	kfree(flowtable->name);
+ 	module_put(flowtable->data.type->owner);
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index d3d11dede54507..85450f60114263 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -536,7 +536,7 @@ nft_match_large_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 	struct xt_match *m = expr->ops->data;
+ 	int ret;
+ 
+-	priv->info = kmalloc(XT_ALIGN(m->matchsize), GFP_KERNEL);
++	priv->info = kmalloc(XT_ALIGN(m->matchsize), GFP_KERNEL_ACCOUNT);
+ 	if (!priv->info)
+ 		return -ENOMEM;
+ 
+@@ -810,7 +810,7 @@ nft_match_select_ops(const struct nft_ctx *ctx,
+ 		goto err;
+ 	}
+ 
+-	ops = kzalloc(sizeof(struct nft_expr_ops), GFP_KERNEL);
++	ops = kzalloc(sizeof(struct nft_expr_ops), GFP_KERNEL_ACCOUNT);
+ 	if (!ops) {
+ 		err = -ENOMEM;
+ 		goto err;
+@@ -900,7 +900,7 @@ nft_target_select_ops(const struct nft_ctx *ctx,
+ 		goto err;
+ 	}
+ 
+-	ops = kzalloc(sizeof(struct nft_expr_ops), GFP_KERNEL);
++	ops = kzalloc(sizeof(struct nft_expr_ops), GFP_KERNEL_ACCOUNT);
+ 	if (!ops) {
+ 		err = -ENOMEM;
+ 		goto err;
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index b4ada3ab21679b..489a9b34f1ecc1 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -56,7 +56,7 @@ static struct nft_elem_priv *nft_dynset_new(struct nft_set *set,
+ 	if (!atomic_add_unless(&set->nelems, 1, set->size))
+ 		return NULL;
+ 
+-	timeout = priv->timeout ? : set->timeout;
++	timeout = priv->timeout ? : READ_ONCE(set->timeout);
+ 	elem_priv = nft_set_elem_init(set, &priv->tmpl,
+ 				      &regs->data[priv->sreg_key], NULL,
+ 				      &regs->data[priv->sreg_data],
+@@ -95,7 +95,7 @@ void nft_dynset_eval(const struct nft_expr *expr,
+ 			     expr, regs, &ext)) {
+ 		if (priv->op == NFT_DYNSET_OP_UPDATE &&
+ 		    nft_set_ext_exists(ext, NFT_SET_EXT_EXPIRATION)) {
+-			timeout = priv->timeout ? : set->timeout;
++			timeout = priv->timeout ? : READ_ONCE(set->timeout);
+ 			*nft_set_ext_expiration(ext) = get_jiffies_64() + timeout;
+ 		}
+ 
+@@ -313,7 +313,7 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ 		nft_dynset_ext_add_expr(priv);
+ 
+ 	if (set->flags & NFT_SET_TIMEOUT) {
+-		if (timeout || set->timeout) {
++		if (timeout || READ_ONCE(set->timeout)) {
+ 			nft_set_ext_add(&priv->tmpl, NFT_SET_EXT_TIMEOUT);
+ 			nft_set_ext_add(&priv->tmpl, NFT_SET_EXT_EXPIRATION);
+ 		}
+diff --git a/net/netfilter/nft_log.c b/net/netfilter/nft_log.c
+index 5defe6e4fd9820..e3558813799579 100644
+--- a/net/netfilter/nft_log.c
++++ b/net/netfilter/nft_log.c
+@@ -163,7 +163,7 @@ static int nft_log_init(const struct nft_ctx *ctx,
+ 
+ 	nla = tb[NFTA_LOG_PREFIX];
+ 	if (nla != NULL) {
+-		priv->prefix = kmalloc(nla_len(nla) + 1, GFP_KERNEL);
++		priv->prefix = kmalloc(nla_len(nla) + 1, GFP_KERNEL_ACCOUNT);
+ 		if (priv->prefix == NULL)
+ 			return -ENOMEM;
+ 		nla_strscpy(priv->prefix, nla, nla_len(nla) + 1);
+diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c
+index 9139ce38ea7b9a..f23faf565b6873 100644
+--- a/net/netfilter/nft_meta.c
++++ b/net/netfilter/nft_meta.c
+@@ -954,7 +954,7 @@ static int nft_secmark_obj_init(const struct nft_ctx *ctx,
+ 	if (tb[NFTA_SECMARK_CTX] == NULL)
+ 		return -EINVAL;
+ 
+-	priv->ctx = nla_strdup(tb[NFTA_SECMARK_CTX], GFP_KERNEL);
++	priv->ctx = nla_strdup(tb[NFTA_SECMARK_CTX], GFP_KERNEL_ACCOUNT);
+ 	if (!priv->ctx)
+ 		return -ENOMEM;
+ 
+diff --git a/net/netfilter/nft_numgen.c b/net/netfilter/nft_numgen.c
+index 7d29db7c2ac0f0..bd058babfc820c 100644
+--- a/net/netfilter/nft_numgen.c
++++ b/net/netfilter/nft_numgen.c
+@@ -66,7 +66,7 @@ static int nft_ng_inc_init(const struct nft_ctx *ctx,
+ 	if (priv->offset + priv->modulus - 1 < priv->offset)
+ 		return -EOVERFLOW;
+ 
+-	priv->counter = kmalloc(sizeof(*priv->counter), GFP_KERNEL);
++	priv->counter = kmalloc(sizeof(*priv->counter), GFP_KERNEL_ACCOUNT);
+ 	if (!priv->counter)
+ 		return -ENOMEM;
+ 
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index eb4c4a4ac7acea..7be342b495f5f7 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -663,7 +663,7 @@ static int pipapo_realloc_mt(struct nft_pipapo_field *f,
+ 	    check_add_overflow(rules, extra, &rules_alloc))
+ 		return -EOVERFLOW;
+ 
+-	new_mt = kvmalloc_array(rules_alloc, sizeof(*new_mt), GFP_KERNEL);
++	new_mt = kvmalloc_array(rules_alloc, sizeof(*new_mt), GFP_KERNEL_ACCOUNT);
+ 	if (!new_mt)
+ 		return -ENOMEM;
+ 
+@@ -936,7 +936,7 @@ static void pipapo_lt_bits_adjust(struct nft_pipapo_field *f)
+ 		return;
+ 	}
+ 
+-	new_lt = kvzalloc(lt_size + NFT_PIPAPO_ALIGN_HEADROOM, GFP_KERNEL);
++	new_lt = kvzalloc(lt_size + NFT_PIPAPO_ALIGN_HEADROOM, GFP_KERNEL_ACCOUNT);
+ 	if (!new_lt)
+ 		return;
+ 
+@@ -1212,7 +1212,7 @@ static int pipapo_realloc_scratch(struct nft_pipapo_match *clone,
+ 		scratch = kzalloc_node(struct_size(scratch, map,
+ 						   bsize_max * 2) +
+ 				       NFT_PIPAPO_ALIGN_HEADROOM,
+-				       GFP_KERNEL, cpu_to_node(i));
++				       GFP_KERNEL_ACCOUNT, cpu_to_node(i));
+ 		if (!scratch) {
+ 			/* On failure, there's no need to undo previous
+ 			 * allocations: this means that some scratch maps have
+@@ -1427,7 +1427,7 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ 	struct nft_pipapo_match *new;
+ 	int i;
+ 
+-	new = kmalloc(struct_size(new, f, old->field_count), GFP_KERNEL);
++	new = kmalloc(struct_size(new, f, old->field_count), GFP_KERNEL_ACCOUNT);
+ 	if (!new)
+ 		return NULL;
+ 
+@@ -1457,7 +1457,7 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ 		new_lt = kvzalloc(src->groups * NFT_PIPAPO_BUCKETS(src->bb) *
+ 				  src->bsize * sizeof(*dst->lt) +
+ 				  NFT_PIPAPO_ALIGN_HEADROOM,
+-				  GFP_KERNEL);
++				  GFP_KERNEL_ACCOUNT);
+ 		if (!new_lt)
+ 			goto out_lt;
+ 
+@@ -1470,7 +1470,8 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ 
+ 		if (src->rules > 0) {
+ 			dst->mt = kvmalloc_array(src->rules_alloc,
+-						 sizeof(*src->mt), GFP_KERNEL);
++						 sizeof(*src->mt),
++						 GFP_KERNEL_ACCOUNT);
+ 			if (!dst->mt)
+ 				goto out_mt;
+ 
+diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c
+index 60a76e6e348e7b..5c6ed68cc6e058 100644
+--- a/net/netfilter/nft_tunnel.c
++++ b/net/netfilter/nft_tunnel.c
+@@ -509,13 +509,14 @@ static int nft_tunnel_obj_init(const struct nft_ctx *ctx,
+ 			return err;
+ 	}
+ 
+-	md = metadata_dst_alloc(priv->opts.len, METADATA_IP_TUNNEL, GFP_KERNEL);
++	md = metadata_dst_alloc(priv->opts.len, METADATA_IP_TUNNEL,
++				GFP_KERNEL_ACCOUNT);
+ 	if (!md)
+ 		return -ENOMEM;
+ 
+ 	memcpy(&md->u.tun_info, &info, sizeof(info));
+ #ifdef CONFIG_DST_CACHE
+-	err = dst_cache_init(&md->u.tun_info.dst_cache, GFP_KERNEL);
++	err = dst_cache_init(&md->u.tun_info.dst_cache, GFP_KERNEL_ACCOUNT);
+ 	if (err < 0) {
+ 		metadata_dst_free(md);
+ 		return err;
+diff --git a/net/qrtr/af_qrtr.c b/net/qrtr/af_qrtr.c
+index 41ece61eb57ab7..00c51cf693f3d0 100644
+--- a/net/qrtr/af_qrtr.c
++++ b/net/qrtr/af_qrtr.c
+@@ -884,7 +884,7 @@ static int qrtr_bcast_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+ 
+ 	mutex_lock(&qrtr_node_lock);
+ 	list_for_each_entry(node, &qrtr_all_nodes, item) {
+-		skbn = skb_clone(skb, GFP_KERNEL);
++		skbn = pskb_copy(skb, GFP_KERNEL);
+ 		if (!skbn)
+ 			break;
+ 		skb_set_owner_w(skbn, skb->sk);
+diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
+index 593846d252143c..114fef65f92eab 100644
+--- a/net/tipc/bcast.c
++++ b/net/tipc/bcast.c
+@@ -320,8 +320,8 @@ static int tipc_mcast_send_sync(struct net *net, struct sk_buff *skb,
+ {
+ 	struct tipc_msg *hdr, *_hdr;
+ 	struct sk_buff_head tmpq;
++	u16 cong_link_cnt = 0;
+ 	struct sk_buff *_skb;
+-	u16 cong_link_cnt;
+ 	int rc = 0;
+ 
+ 	/* Is a cluster supporting with new capabilities ? */
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 967bc4935b4ed9..e3bf14e489c5de 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -9658,7 +9658,8 @@ nl80211_parse_sched_scan(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	if (n_ssids)
+-		request->ssids = (void *)&request->channels[n_channels];
++		request->ssids = (void *)request +
++			struct_size(request, channels, n_channels);
+ 	request->n_ssids = n_ssids;
+ 	if (ie_len) {
+ 		if (n_ssids)
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 64c779788a6466..44c93b9a9751c1 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -3452,8 +3452,8 @@ int cfg80211_wext_siwscan(struct net_device *dev,
+ 		n_channels = ieee80211_get_num_supported_channels(wiphy);
+ 	}
+ 
+-	creq = kzalloc(sizeof(*creq) + sizeof(struct cfg80211_ssid) +
+-		       n_channels * sizeof(void *),
++	creq = kzalloc(struct_size(creq, channels, n_channels) +
++		       sizeof(struct cfg80211_ssid),
+ 		       GFP_ATOMIC);
+ 	if (!creq)
+ 		return -ENOMEM;
+@@ -3461,7 +3461,7 @@ int cfg80211_wext_siwscan(struct net_device *dev,
+ 	creq->wiphy = wiphy;
+ 	creq->wdev = dev->ieee80211_ptr;
+ 	/* SSIDs come after channels */
+-	creq->ssids = (void *)&creq->channels[n_channels];
++	creq->ssids = (void *)creq + struct_size(creq, channels, n_channels);
+ 	creq->n_channels = n_channels;
+ 	creq->n_ssids = 1;
+ 	creq->scan_start = jiffies;
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index 1cfe673bc52f37..4b80af0edbe976 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -115,7 +115,8 @@ static int cfg80211_conn_scan(struct wireless_dev *wdev)
+ 		n_channels = i;
+ 	}
+ 	request->n_channels = n_channels;
+-	request->ssids = (void *)&request->channels[n_channels];
++	request->ssids = (void *)request +
++		struct_size(request, channels, n_channels);
+ 	request->n_ssids = 1;
+ 
+ 	memcpy(request->ssids[0].ssid, wdev->conn->params.ssid,
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index af6ec719567fc1..f1193aca79bae8 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -998,10 +998,10 @@ unsigned int cfg80211_classify8021d(struct sk_buff *skb,
+ 	 * Diffserv Service Classes no update is needed:
+ 	 * - Standard: DF
+ 	 * - Low Priority Data: CS1
+-	 * - Multimedia Streaming: AF31, AF32, AF33
+ 	 * - Multimedia Conferencing: AF41, AF42, AF43
+ 	 * - Network Control Traffic: CS7
+ 	 * - Real-Time Interactive: CS4
++	 * - Signaling: CS5
+ 	 */
+ 	switch (dscp >> 2) {
+ 	case 10:
+@@ -1026,9 +1026,11 @@ unsigned int cfg80211_classify8021d(struct sk_buff *skb,
+ 		/* Broadcasting video: CS3 */
+ 		ret = 4;
+ 		break;
+-	case 40:
+-		/* Signaling: CS5 */
+-		ret = 5;
++	case 26:
++	case 28:
++	case 30:
++		/* Multimedia Streaming: AF31, AF32, AF33 */
++		ret = 4;
+ 		break;
+ 	case 44:
+ 		/* Voice Admit: VA */
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index c0e0204b963045..b0f24ebd05f0ba 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -623,20 +623,31 @@ static u32 xp_alloc_reused(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u3
+ 	return nb_entries;
+ }
+ 
+-u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
++static u32 xp_alloc_slow(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
++			 u32 max)
+ {
+-	u32 nb_entries1 = 0, nb_entries2;
++	int i;
+ 
+-	if (unlikely(pool->dev && dma_dev_need_sync(pool->dev))) {
++	for (i = 0; i < max; i++) {
+ 		struct xdp_buff *buff;
+ 
+-		/* Slow path */
+ 		buff = xp_alloc(pool);
+-		if (buff)
+-			*xdp = buff;
+-		return !!buff;
++		if (unlikely(!buff))
++			return i;
++		*xdp = buff;
++		xdp++;
+ 	}
+ 
++	return max;
++}
++
++u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
++{
++	u32 nb_entries1 = 0, nb_entries2;
++
++	if (unlikely(pool->dev && dma_dev_need_sync(pool->dev)))
++		return xp_alloc_slow(pool, xdp, max);
++
+ 	if (unlikely(pool->free_list_cnt)) {
+ 		nb_entries1 = xp_alloc_reused(pool, xdp, max);
+ 		if (nb_entries1 == max)
+diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
+index 3e003dd6bea09c..dca56aa360ff32 100644
+--- a/samples/bpf/Makefile
++++ b/samples/bpf/Makefile
+@@ -169,6 +169,10 @@ BPF_EXTRA_CFLAGS += -I$(srctree)/arch/mips/include/asm/mach-generic
+ endif
+ endif
+ 
++ifeq ($(ARCH), x86)
++BPF_EXTRA_CFLAGS += -fcf-protection
++endif
++
+ TPROGS_CFLAGS += -Wall -O2
+ TPROGS_CFLAGS += -Wmissing-prototypes
+ TPROGS_CFLAGS += -Wstrict-prototypes
+@@ -405,7 +409,7 @@ $(obj)/%.o: $(src)/%.c
+ 		-Wno-gnu-variable-sized-type-not-at-end \
+ 		-Wno-address-of-packed-member -Wno-tautological-compare \
+ 		-Wno-unknown-warning-option $(CLANG_ARCH_ARGS) \
+-		-fno-asynchronous-unwind-tables -fcf-protection \
++		-fno-asynchronous-unwind-tables \
+ 		-I$(srctree)/samples/bpf/ -include asm_goto_workaround.h \
+ 		-O2 -emit-llvm -Xclang -disable-llvm-passes -c $< -o - | \
+ 		$(OPT) -O2 -mtriple=bpf-pc-linux | $(LLVM_DIS) | \
+diff --git a/security/apparmor/include/net.h b/security/apparmor/include/net.h
+index 67bf888c3bd6b9..c42ed8a73f1cef 100644
+--- a/security/apparmor/include/net.h
++++ b/security/apparmor/include/net.h
+@@ -51,10 +51,9 @@ struct aa_sk_ctx {
+ 	struct aa_label *peer;
+ };
+ 
+-#define SK_CTX(X) ((X)->sk_security)
+ static inline struct aa_sk_ctx *aa_sock(const struct sock *sk)
+ {
+-	return sk->sk_security;
++	return sk->sk_security + apparmor_blob_sizes.lbs_sock;
+ }
+ 
+ #define DEFINE_AUDIT_NET(NAME, OP, SK, F, T, P)				  \
+diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
+index 4373b914acf204..b8366fca98d23e 100644
+--- a/security/apparmor/lsm.c
++++ b/security/apparmor/lsm.c
+@@ -1057,27 +1057,12 @@ static int apparmor_userns_create(const struct cred *cred)
+ 	return error;
+ }
+ 
+-static int apparmor_sk_alloc_security(struct sock *sk, int family, gfp_t flags)
+-{
+-	struct aa_sk_ctx *ctx;
+-
+-	ctx = kzalloc(sizeof(*ctx), flags);
+-	if (!ctx)
+-		return -ENOMEM;
+-
+-	sk->sk_security = ctx;
+-
+-	return 0;
+-}
+-
+ static void apparmor_sk_free_security(struct sock *sk)
+ {
+ 	struct aa_sk_ctx *ctx = aa_sock(sk);
+ 
+-	sk->sk_security = NULL;
+ 	aa_put_label(ctx->label);
+ 	aa_put_label(ctx->peer);
+-	kfree(ctx);
+ }
+ 
+ /**
+@@ -1432,6 +1417,7 @@ struct lsm_blob_sizes apparmor_blob_sizes __ro_after_init = {
+ 	.lbs_cred = sizeof(struct aa_label *),
+ 	.lbs_file = sizeof(struct aa_file_ctx),
+ 	.lbs_task = sizeof(struct aa_task_ctx),
++	.lbs_sock = sizeof(struct aa_sk_ctx),
+ };
+ 
+ static const struct lsm_id apparmor_lsmid = {
+@@ -1477,7 +1463,6 @@ static struct security_hook_list apparmor_hooks[] __ro_after_init = {
+ 	LSM_HOOK_INIT(getprocattr, apparmor_getprocattr),
+ 	LSM_HOOK_INIT(setprocattr, apparmor_setprocattr),
+ 
+-	LSM_HOOK_INIT(sk_alloc_security, apparmor_sk_alloc_security),
+ 	LSM_HOOK_INIT(sk_free_security, apparmor_sk_free_security),
+ 	LSM_HOOK_INIT(sk_clone_security, apparmor_sk_clone_security),
+ 
+diff --git a/security/apparmor/net.c b/security/apparmor/net.c
+index 87e934b2b54887..77413a5191179a 100644
+--- a/security/apparmor/net.c
++++ b/security/apparmor/net.c
+@@ -151,7 +151,7 @@ static int aa_label_sk_perm(const struct cred *subj_cred,
+ 			    const char *op, u32 request,
+ 			    struct sock *sk)
+ {
+-	struct aa_sk_ctx *ctx = SK_CTX(sk);
++	struct aa_sk_ctx *ctx = aa_sock(sk);
+ 	int error = 0;
+ 
+ 	AA_BUG(!label);
+diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
+index 57b9ffd53c98ae..3663aec7bcbd21 100644
+--- a/security/bpf/hooks.c
++++ b/security/bpf/hooks.c
+@@ -31,7 +31,6 @@ static int __init bpf_lsm_init(void)
+ 
+ struct lsm_blob_sizes bpf_lsm_blob_sizes __ro_after_init = {
+ 	.lbs_inode = sizeof(struct bpf_storage_blob),
+-	.lbs_task = sizeof(struct bpf_storage_blob),
+ };
+ 
+ DEFINE_LSM(bpf) = {
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index c51e24d24d1e9e..3c323ca213d42c 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -223,7 +223,7 @@ static inline void ima_inode_set_iint(const struct inode *inode,
+ 
+ struct ima_iint_cache *ima_iint_find(struct inode *inode);
+ struct ima_iint_cache *ima_inode_get(struct inode *inode);
+-void ima_inode_free(struct inode *inode);
++void ima_inode_free_rcu(void *inode_security);
+ void __init ima_iintcache_init(void);
+ 
+ extern const int read_idmap[];
+diff --git a/security/integrity/ima/ima_iint.c b/security/integrity/ima/ima_iint.c
+index e23412a2c56b09..00b249101f98d3 100644
+--- a/security/integrity/ima/ima_iint.c
++++ b/security/integrity/ima/ima_iint.c
+@@ -109,22 +109,18 @@ struct ima_iint_cache *ima_inode_get(struct inode *inode)
+ }
+ 
+ /**
+- * ima_inode_free - Called on inode free
+- * @inode: Pointer to the inode
++ * ima_inode_free_rcu - Called to free an inode via a RCU callback
++ * @inode_security: The inode->i_security pointer
+  *
+- * Free the iint associated with an inode.
++ * Free the IMA data associated with an inode.
+  */
+-void ima_inode_free(struct inode *inode)
++void ima_inode_free_rcu(void *inode_security)
+ {
+-	struct ima_iint_cache *iint;
+-
+-	if (!IS_IMA(inode))
+-		return;
+-
+-	iint = ima_iint_find(inode);
+-	ima_inode_set_iint(inode, NULL);
++	struct ima_iint_cache **iint_p = inode_security + ima_blob_sizes.lbs_inode;
+ 
+-	ima_iint_free(iint);
++	/* *iint_p should be NULL if !IS_IMA(inode) */
++	if (*iint_p)
++		ima_iint_free(*iint_p);
+ }
+ 
+ static void ima_iint_init_once(void *foo)
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index f04f43af651c8e..5b3394864b218e 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -1193,7 +1193,7 @@ static struct security_hook_list ima_hooks[] __ro_after_init = {
+ #ifdef CONFIG_INTEGRITY_ASYMMETRIC_KEYS
+ 	LSM_HOOK_INIT(kernel_module_request, ima_kernel_module_request),
+ #endif
+-	LSM_HOOK_INIT(inode_free_security, ima_inode_free),
++	LSM_HOOK_INIT(inode_free_security_rcu, ima_inode_free_rcu),
+ };
+ 
+ static const struct lsm_id ima_lsmid = {
+diff --git a/security/landlock/fs.c b/security/landlock/fs.c
+index 7877a64cc6b87c..0804f76a67be2e 100644
+--- a/security/landlock/fs.c
++++ b/security/landlock/fs.c
+@@ -1207,13 +1207,16 @@ static int current_check_refer_path(struct dentry *const old_dentry,
+ 
+ /* Inode hooks */
+ 
+-static void hook_inode_free_security(struct inode *const inode)
++static void hook_inode_free_security_rcu(void *inode_security)
+ {
++	struct landlock_inode_security *inode_sec;
++
+ 	/*
+ 	 * All inodes must already have been untied from their object by
+ 	 * release_inode() or hook_sb_delete().
+ 	 */
+-	WARN_ON_ONCE(landlock_inode(inode)->object);
++	inode_sec = inode_security + landlock_blob_sizes.lbs_inode;
++	WARN_ON_ONCE(inode_sec->object);
+ }
+ 
+ /* Super-block hooks */
+@@ -1637,7 +1640,7 @@ static int hook_file_ioctl_compat(struct file *file, unsigned int cmd,
+ }
+ 
+ static struct security_hook_list landlock_hooks[] __ro_after_init = {
+-	LSM_HOOK_INIT(inode_free_security, hook_inode_free_security),
++	LSM_HOOK_INIT(inode_free_security_rcu, hook_inode_free_security_rcu),
+ 
+ 	LSM_HOOK_INIT(sb_delete, hook_sb_delete),
+ 	LSM_HOOK_INIT(sb_mount, hook_sb_mount),
+diff --git a/security/security.c b/security/security.c
+index 8cee5b6c6e6d53..43166e341526c0 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -29,6 +29,7 @@
+ #include <linux/msg.h>
+ #include <linux/overflow.h>
+ #include <net/flow.h>
++#include <net/sock.h>
+ 
+ /* How many LSMs were built into the kernel? */
+ #define LSM_COUNT (__end_lsm_info - __start_lsm_info)
+@@ -227,6 +228,7 @@ static void __init lsm_set_blob_sizes(struct lsm_blob_sizes *needed)
+ 	lsm_set_blob_size(&needed->lbs_inode, &blob_sizes.lbs_inode);
+ 	lsm_set_blob_size(&needed->lbs_ipc, &blob_sizes.lbs_ipc);
+ 	lsm_set_blob_size(&needed->lbs_msg_msg, &blob_sizes.lbs_msg_msg);
++	lsm_set_blob_size(&needed->lbs_sock, &blob_sizes.lbs_sock);
+ 	lsm_set_blob_size(&needed->lbs_superblock, &blob_sizes.lbs_superblock);
+ 	lsm_set_blob_size(&needed->lbs_task, &blob_sizes.lbs_task);
+ 	lsm_set_blob_size(&needed->lbs_xattr_count,
+@@ -401,6 +403,7 @@ static void __init ordered_lsm_init(void)
+ 	init_debug("inode blob size      = %d\n", blob_sizes.lbs_inode);
+ 	init_debug("ipc blob size        = %d\n", blob_sizes.lbs_ipc);
+ 	init_debug("msg_msg blob size    = %d\n", blob_sizes.lbs_msg_msg);
++	init_debug("sock blob size       = %d\n", blob_sizes.lbs_sock);
+ 	init_debug("superblock blob size = %d\n", blob_sizes.lbs_superblock);
+ 	init_debug("task blob size       = %d\n", blob_sizes.lbs_task);
+ 	init_debug("xattr slots          = %d\n", blob_sizes.lbs_xattr_count);
+@@ -1596,9 +1599,8 @@ int security_inode_alloc(struct inode *inode)
+ 
+ static void inode_free_by_rcu(struct rcu_head *head)
+ {
+-	/*
+-	 * The rcu head is at the start of the inode blob
+-	 */
++	/* The rcu head is at the start of the inode blob */
++	call_void_hook(inode_free_security_rcu, head);
+ 	kmem_cache_free(lsm_inode_cache, head);
+ }
+ 
+@@ -1606,23 +1608,24 @@ static void inode_free_by_rcu(struct rcu_head *head)
+  * security_inode_free() - Free an inode's LSM blob
+  * @inode: the inode
+  *
+- * Deallocate the inode security structure and set @inode->i_security to NULL.
++ * Release any LSM resources associated with @inode, although due to the
++ * inode's RCU protections it is possible that the resources will not be
++ * fully released until after the current RCU grace period has elapsed.
++ *
++ * It is important for LSMs to note that despite being present in a call to
++ * security_inode_free(), @inode may still be referenced in a VFS path walk
++ * and calls to security_inode_permission() may be made during, or after,
++ * a call to security_inode_free().  For this reason the inode->i_security
++ * field is released via a call_rcu() callback and any LSMs which need to
++ * retain inode state for use in security_inode_permission() should only
++ * release that state in the inode_free_security_rcu() LSM hook callback.
+  */
+ void security_inode_free(struct inode *inode)
+ {
+ 	call_void_hook(inode_free_security, inode);
+-	/*
+-	 * The inode may still be referenced in a path walk and
+-	 * a call to security_inode_permission() can be made
+-	 * after inode_free_security() is called. Ideally, the VFS
+-	 * wouldn't do this, but fixing that is a much harder
+-	 * job. For now, simply free the i_security via RCU, and
+-	 * leave the current inode->i_security pointer intact.
+-	 * The inode will be freed after the RCU grace period too.
+-	 */
+-	if (inode->i_security)
+-		call_rcu((struct rcu_head *)inode->i_security,
+-			 inode_free_by_rcu);
++	if (!inode->i_security)
++		return;
++	call_rcu((struct rcu_head *)inode->i_security, inode_free_by_rcu);
+ }
+ 
+ /**
+@@ -4673,6 +4676,28 @@ int security_socket_getpeersec_dgram(struct socket *sock,
+ }
+ EXPORT_SYMBOL(security_socket_getpeersec_dgram);
+ 
++/**
++ * lsm_sock_alloc - allocate a composite sock blob
++ * @sock: the sock that needs a blob
++ * @priority: allocation mode
++ *
++ * Allocate the sock blob for all the modules
++ *
++ * Returns 0, or -ENOMEM if memory can't be allocated.
++ */
++static int lsm_sock_alloc(struct sock *sock, gfp_t priority)
++{
++	if (blob_sizes.lbs_sock == 0) {
++		sock->sk_security = NULL;
++		return 0;
++	}
++
++	sock->sk_security = kzalloc(blob_sizes.lbs_sock, priority);
++	if (sock->sk_security == NULL)
++		return -ENOMEM;
++	return 0;
++}
++
+ /**
+  * security_sk_alloc() - Allocate and initialize a sock's LSM blob
+  * @sk: sock
+@@ -4686,7 +4711,14 @@ EXPORT_SYMBOL(security_socket_getpeersec_dgram);
+  */
+ int security_sk_alloc(struct sock *sk, int family, gfp_t priority)
+ {
+-	return call_int_hook(sk_alloc_security, sk, family, priority);
++	int rc = lsm_sock_alloc(sk, priority);
++
++	if (unlikely(rc))
++		return rc;
++	rc = call_int_hook(sk_alloc_security, sk, family, priority);
++	if (unlikely(rc))
++		security_sk_free(sk);
++	return rc;
+ }
+ 
+ /**
+@@ -4698,6 +4730,8 @@ int security_sk_alloc(struct sock *sk, int family, gfp_t priority)
+ void security_sk_free(struct sock *sk)
+ {
+ 	call_void_hook(sk_free_security, sk);
++	kfree(sk->sk_security);
++	sk->sk_security = NULL;
+ }
+ 
+ /**
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 400eca4ad0fb6c..c11303d662d809 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -4594,7 +4594,7 @@ static int socket_sockcreate_sid(const struct task_security_struct *tsec,
+ 
+ static int sock_has_perm(struct sock *sk, u32 perms)
+ {
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	struct common_audit_data ad;
+ 	struct lsm_network_audit net;
+ 
+@@ -4662,7 +4662,7 @@ static int selinux_socket_post_create(struct socket *sock, int family,
+ 	isec->initialized = LABEL_INITIALIZED;
+ 
+ 	if (sock->sk) {
+-		sksec = sock->sk->sk_security;
++		sksec = selinux_sock(sock->sk);
+ 		sksec->sclass = sclass;
+ 		sksec->sid = sid;
+ 		/* Allows detection of the first association on this socket */
+@@ -4678,8 +4678,8 @@ static int selinux_socket_post_create(struct socket *sock, int family,
+ static int selinux_socket_socketpair(struct socket *socka,
+ 				     struct socket *sockb)
+ {
+-	struct sk_security_struct *sksec_a = socka->sk->sk_security;
+-	struct sk_security_struct *sksec_b = sockb->sk->sk_security;
++	struct sk_security_struct *sksec_a = selinux_sock(socka->sk);
++	struct sk_security_struct *sksec_b = selinux_sock(sockb->sk);
+ 
+ 	sksec_a->peer_sid = sksec_b->sid;
+ 	sksec_b->peer_sid = sksec_a->sid;
+@@ -4694,7 +4694,7 @@ static int selinux_socket_socketpair(struct socket *socka,
+ static int selinux_socket_bind(struct socket *sock, struct sockaddr *address, int addrlen)
+ {
+ 	struct sock *sk = sock->sk;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	u16 family;
+ 	int err;
+ 
+@@ -4834,7 +4834,7 @@ static int selinux_socket_connect_helper(struct socket *sock,
+ 					 struct sockaddr *address, int addrlen)
+ {
+ 	struct sock *sk = sock->sk;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	int err;
+ 
+ 	err = sock_has_perm(sk, SOCKET__CONNECT);
+@@ -5012,9 +5012,9 @@ static int selinux_socket_unix_stream_connect(struct sock *sock,
+ 					      struct sock *other,
+ 					      struct sock *newsk)
+ {
+-	struct sk_security_struct *sksec_sock = sock->sk_security;
+-	struct sk_security_struct *sksec_other = other->sk_security;
+-	struct sk_security_struct *sksec_new = newsk->sk_security;
++	struct sk_security_struct *sksec_sock = selinux_sock(sock);
++	struct sk_security_struct *sksec_other = selinux_sock(other);
++	struct sk_security_struct *sksec_new = selinux_sock(newsk);
+ 	struct common_audit_data ad;
+ 	struct lsm_network_audit net;
+ 	int err;
+@@ -5043,8 +5043,8 @@ static int selinux_socket_unix_stream_connect(struct sock *sock,
+ static int selinux_socket_unix_may_send(struct socket *sock,
+ 					struct socket *other)
+ {
+-	struct sk_security_struct *ssec = sock->sk->sk_security;
+-	struct sk_security_struct *osec = other->sk->sk_security;
++	struct sk_security_struct *ssec = selinux_sock(sock->sk);
++	struct sk_security_struct *osec = selinux_sock(other->sk);
+ 	struct common_audit_data ad;
+ 	struct lsm_network_audit net;
+ 
+@@ -5081,7 +5081,7 @@ static int selinux_sock_rcv_skb_compat(struct sock *sk, struct sk_buff *skb,
+ 				       u16 family)
+ {
+ 	int err = 0;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	u32 sk_sid = sksec->sid;
+ 	struct common_audit_data ad;
+ 	struct lsm_network_audit net;
+@@ -5110,7 +5110,7 @@ static int selinux_sock_rcv_skb_compat(struct sock *sk, struct sk_buff *skb,
+ static int selinux_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ {
+ 	int err, peerlbl_active, secmark_active;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	u16 family = sk->sk_family;
+ 	u32 sk_sid = sksec->sid;
+ 	struct common_audit_data ad;
+@@ -5178,7 +5178,7 @@ static int selinux_socket_getpeersec_stream(struct socket *sock,
+ 	int err = 0;
+ 	char *scontext = NULL;
+ 	u32 scontext_len;
+-	struct sk_security_struct *sksec = sock->sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sock->sk);
+ 	u32 peer_sid = SECSID_NULL;
+ 
+ 	if (sksec->sclass == SECCLASS_UNIX_STREAM_SOCKET ||
+@@ -5238,34 +5238,27 @@ static int selinux_socket_getpeersec_dgram(struct socket *sock,
+ 
+ static int selinux_sk_alloc_security(struct sock *sk, int family, gfp_t priority)
+ {
+-	struct sk_security_struct *sksec;
+-
+-	sksec = kzalloc(sizeof(*sksec), priority);
+-	if (!sksec)
+-		return -ENOMEM;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 
+ 	sksec->peer_sid = SECINITSID_UNLABELED;
+ 	sksec->sid = SECINITSID_UNLABELED;
+ 	sksec->sclass = SECCLASS_SOCKET;
+ 	selinux_netlbl_sk_security_reset(sksec);
+-	sk->sk_security = sksec;
+ 
+ 	return 0;
+ }
+ 
+ static void selinux_sk_free_security(struct sock *sk)
+ {
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 
+-	sk->sk_security = NULL;
+ 	selinux_netlbl_sk_security_free(sksec);
+-	kfree(sksec);
+ }
+ 
+ static void selinux_sk_clone_security(const struct sock *sk, struct sock *newsk)
+ {
+-	struct sk_security_struct *sksec = sk->sk_security;
+-	struct sk_security_struct *newsksec = newsk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
++	struct sk_security_struct *newsksec = selinux_sock(newsk);
+ 
+ 	newsksec->sid = sksec->sid;
+ 	newsksec->peer_sid = sksec->peer_sid;
+@@ -5279,7 +5272,7 @@ static void selinux_sk_getsecid(const struct sock *sk, u32 *secid)
+ 	if (!sk)
+ 		*secid = SECINITSID_ANY_SOCKET;
+ 	else {
+-		const struct sk_security_struct *sksec = sk->sk_security;
++		const struct sk_security_struct *sksec = selinux_sock(sk);
+ 
+ 		*secid = sksec->sid;
+ 	}
+@@ -5289,7 +5282,7 @@ static void selinux_sock_graft(struct sock *sk, struct socket *parent)
+ {
+ 	struct inode_security_struct *isec =
+ 		inode_security_novalidate(SOCK_INODE(parent));
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 
+ 	if (sk->sk_family == PF_INET || sk->sk_family == PF_INET6 ||
+ 	    sk->sk_family == PF_UNIX)
+@@ -5306,7 +5299,7 @@ static int selinux_sctp_process_new_assoc(struct sctp_association *asoc,
+ {
+ 	struct sock *sk = asoc->base.sk;
+ 	u16 family = sk->sk_family;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	struct common_audit_data ad;
+ 	struct lsm_network_audit net;
+ 	int err;
+@@ -5361,7 +5354,7 @@ static int selinux_sctp_process_new_assoc(struct sctp_association *asoc,
+ static int selinux_sctp_assoc_request(struct sctp_association *asoc,
+ 				      struct sk_buff *skb)
+ {
+-	struct sk_security_struct *sksec = asoc->base.sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(asoc->base.sk);
+ 	u32 conn_sid;
+ 	int err;
+ 
+@@ -5394,7 +5387,7 @@ static int selinux_sctp_assoc_request(struct sctp_association *asoc,
+ static int selinux_sctp_assoc_established(struct sctp_association *asoc,
+ 					  struct sk_buff *skb)
+ {
+-	struct sk_security_struct *sksec = asoc->base.sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(asoc->base.sk);
+ 
+ 	if (!selinux_policycap_extsockclass())
+ 		return 0;
+@@ -5493,8 +5486,8 @@ static int selinux_sctp_bind_connect(struct sock *sk, int optname,
+ static void selinux_sctp_sk_clone(struct sctp_association *asoc, struct sock *sk,
+ 				  struct sock *newsk)
+ {
+-	struct sk_security_struct *sksec = sk->sk_security;
+-	struct sk_security_struct *newsksec = newsk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
++	struct sk_security_struct *newsksec = selinux_sock(newsk);
+ 
+ 	/* If policy does not support SECCLASS_SCTP_SOCKET then call
+ 	 * the non-sctp clone version.
+@@ -5510,8 +5503,8 @@ static void selinux_sctp_sk_clone(struct sctp_association *asoc, struct sock *sk
+ 
+ static int selinux_mptcp_add_subflow(struct sock *sk, struct sock *ssk)
+ {
+-	struct sk_security_struct *ssksec = ssk->sk_security;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *ssksec = selinux_sock(ssk);
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 
+ 	ssksec->sclass = sksec->sclass;
+ 	ssksec->sid = sksec->sid;
+@@ -5526,7 +5519,7 @@ static int selinux_mptcp_add_subflow(struct sock *sk, struct sock *ssk)
+ static int selinux_inet_conn_request(const struct sock *sk, struct sk_buff *skb,
+ 				     struct request_sock *req)
+ {
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	int err;
+ 	u16 family = req->rsk_ops->family;
+ 	u32 connsid;
+@@ -5547,7 +5540,7 @@ static int selinux_inet_conn_request(const struct sock *sk, struct sk_buff *skb,
+ static void selinux_inet_csk_clone(struct sock *newsk,
+ 				   const struct request_sock *req)
+ {
+-	struct sk_security_struct *newsksec = newsk->sk_security;
++	struct sk_security_struct *newsksec = selinux_sock(newsk);
+ 
+ 	newsksec->sid = req->secid;
+ 	newsksec->peer_sid = req->peer_secid;
+@@ -5564,7 +5557,7 @@ static void selinux_inet_csk_clone(struct sock *newsk,
+ static void selinux_inet_conn_established(struct sock *sk, struct sk_buff *skb)
+ {
+ 	u16 family = sk->sk_family;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 
+ 	/* handle mapped IPv4 packets arriving via IPv6 sockets */
+ 	if (family == PF_INET6 && skb->protocol == htons(ETH_P_IP))
+@@ -5639,7 +5632,7 @@ static int selinux_tun_dev_attach_queue(void *security)
+ static int selinux_tun_dev_attach(struct sock *sk, void *security)
+ {
+ 	struct tun_security_struct *tunsec = security;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 
+ 	/* we don't currently perform any NetLabel based labeling here and it
+ 	 * isn't clear that we would want to do so anyway; while we could apply
+@@ -5762,7 +5755,7 @@ static unsigned int selinux_ip_output(void *priv, struct sk_buff *skb,
+ 			return NF_ACCEPT;
+ 
+ 		/* standard practice, label using the parent socket */
+-		sksec = sk->sk_security;
++		sksec = selinux_sock(sk);
+ 		sid = sksec->sid;
+ 	} else
+ 		sid = SECINITSID_KERNEL;
+@@ -5785,7 +5778,7 @@ static unsigned int selinux_ip_postroute_compat(struct sk_buff *skb,
+ 	sk = skb_to_full_sk(skb);
+ 	if (sk == NULL)
+ 		return NF_ACCEPT;
+-	sksec = sk->sk_security;
++	sksec = selinux_sock(sk);
+ 
+ 	ad_net_init_from_iif(&ad, &net, state->out->ifindex, state->pf);
+ 	if (selinux_parse_skb(skb, &ad, NULL, 0, &proto))
+@@ -5874,7 +5867,7 @@ static unsigned int selinux_ip_postroute(void *priv,
+ 		u32 skb_sid;
+ 		struct sk_security_struct *sksec;
+ 
+-		sksec = sk->sk_security;
++		sksec = selinux_sock(sk);
+ 		if (selinux_skb_peerlbl_sid(skb, family, &skb_sid))
+ 			return NF_DROP;
+ 		/* At this point, if the returned skb peerlbl is SECSID_NULL
+@@ -5903,7 +5896,7 @@ static unsigned int selinux_ip_postroute(void *priv,
+ 	} else {
+ 		/* Locally generated packet, fetch the security label from the
+ 		 * associated socket. */
+-		struct sk_security_struct *sksec = sk->sk_security;
++		struct sk_security_struct *sksec = selinux_sock(sk);
+ 		peer_sid = sksec->sid;
+ 		secmark_perm = PACKET__SEND;
+ 	}
+@@ -5946,7 +5939,7 @@ static int selinux_netlink_send(struct sock *sk, struct sk_buff *skb)
+ 	unsigned int data_len = skb->len;
+ 	unsigned char *data = skb->data;
+ 	struct nlmsghdr *nlh;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	u16 sclass = sksec->sclass;
+ 	u32 perm;
+ 
+@@ -7004,6 +6997,7 @@ struct lsm_blob_sizes selinux_blob_sizes __ro_after_init = {
+ 	.lbs_inode = sizeof(struct inode_security_struct),
+ 	.lbs_ipc = sizeof(struct ipc_security_struct),
+ 	.lbs_msg_msg = sizeof(struct msg_security_struct),
++	.lbs_sock = sizeof(struct sk_security_struct),
+ 	.lbs_superblock = sizeof(struct superblock_security_struct),
+ 	.lbs_xattr_count = SELINUX_INODE_INIT_XATTRS,
+ };
+diff --git a/security/selinux/include/objsec.h b/security/selinux/include/objsec.h
+index dea1d6f3ed2d3b..b074099acbaf79 100644
+--- a/security/selinux/include/objsec.h
++++ b/security/selinux/include/objsec.h
+@@ -195,4 +195,9 @@ selinux_superblock(const struct super_block *superblock)
+ 	return superblock->s_security + selinux_blob_sizes.lbs_superblock;
+ }
+ 
++static inline struct sk_security_struct *selinux_sock(const struct sock *sock)
++{
++	return sock->sk_security + selinux_blob_sizes.lbs_sock;
++}
++
+ #endif /* _SELINUX_OBJSEC_H_ */
+diff --git a/security/selinux/netlabel.c b/security/selinux/netlabel.c
+index 55885634e8804a..fbe5f8c29f8137 100644
+--- a/security/selinux/netlabel.c
++++ b/security/selinux/netlabel.c
+@@ -17,6 +17,7 @@
+ #include <linux/gfp.h>
+ #include <linux/ip.h>
+ #include <linux/ipv6.h>
++#include <linux/lsm_hooks.h>
+ #include <net/sock.h>
+ #include <net/netlabel.h>
+ #include <net/ip.h>
+@@ -68,7 +69,7 @@ static int selinux_netlbl_sidlookup_cached(struct sk_buff *skb,
+ static struct netlbl_lsm_secattr *selinux_netlbl_sock_genattr(struct sock *sk)
+ {
+ 	int rc;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	struct netlbl_lsm_secattr *secattr;
+ 
+ 	if (sksec->nlbl_secattr != NULL)
+@@ -100,7 +101,7 @@ static struct netlbl_lsm_secattr *selinux_netlbl_sock_getattr(
+ 							const struct sock *sk,
+ 							u32 sid)
+ {
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	struct netlbl_lsm_secattr *secattr = sksec->nlbl_secattr;
+ 
+ 	if (secattr == NULL)
+@@ -240,7 +241,7 @@ int selinux_netlbl_skbuff_setsid(struct sk_buff *skb,
+ 	 * being labeled by it's parent socket, if it is just exit */
+ 	sk = skb_to_full_sk(skb);
+ 	if (sk != NULL) {
+-		struct sk_security_struct *sksec = sk->sk_security;
++		struct sk_security_struct *sksec = selinux_sock(sk);
+ 
+ 		if (sksec->nlbl_state != NLBL_REQSKB)
+ 			return 0;
+@@ -277,7 +278,7 @@ int selinux_netlbl_sctp_assoc_request(struct sctp_association *asoc,
+ {
+ 	int rc;
+ 	struct netlbl_lsm_secattr secattr;
+-	struct sk_security_struct *sksec = asoc->base.sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(asoc->base.sk);
+ 	struct sockaddr_in addr4;
+ 	struct sockaddr_in6 addr6;
+ 
+@@ -356,7 +357,7 @@ int selinux_netlbl_inet_conn_request(struct request_sock *req, u16 family)
+  */
+ void selinux_netlbl_inet_csk_clone(struct sock *sk, u16 family)
+ {
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 
+ 	if (family == PF_INET)
+ 		sksec->nlbl_state = NLBL_LABELED;
+@@ -374,8 +375,8 @@ void selinux_netlbl_inet_csk_clone(struct sock *sk, u16 family)
+  */
+ void selinux_netlbl_sctp_sk_clone(struct sock *sk, struct sock *newsk)
+ {
+-	struct sk_security_struct *sksec = sk->sk_security;
+-	struct sk_security_struct *newsksec = newsk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
++	struct sk_security_struct *newsksec = selinux_sock(newsk);
+ 
+ 	newsksec->nlbl_state = sksec->nlbl_state;
+ }
+@@ -393,7 +394,7 @@ void selinux_netlbl_sctp_sk_clone(struct sock *sk, struct sock *newsk)
+ int selinux_netlbl_socket_post_create(struct sock *sk, u16 family)
+ {
+ 	int rc;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	struct netlbl_lsm_secattr *secattr;
+ 
+ 	if (family != PF_INET && family != PF_INET6)
+@@ -510,7 +511,7 @@ int selinux_netlbl_socket_setsockopt(struct socket *sock,
+ {
+ 	int rc = 0;
+ 	struct sock *sk = sock->sk;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	struct netlbl_lsm_secattr secattr;
+ 
+ 	if (selinux_netlbl_option(level, optname) &&
+@@ -548,7 +549,7 @@ static int selinux_netlbl_socket_connect_helper(struct sock *sk,
+ 						struct sockaddr *addr)
+ {
+ 	int rc;
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 	struct netlbl_lsm_secattr *secattr;
+ 
+ 	/* connected sockets are allowed to disconnect when the address family
+@@ -587,7 +588,7 @@ static int selinux_netlbl_socket_connect_helper(struct sock *sk,
+ int selinux_netlbl_socket_connect_locked(struct sock *sk,
+ 					 struct sockaddr *addr)
+ {
+-	struct sk_security_struct *sksec = sk->sk_security;
++	struct sk_security_struct *sksec = selinux_sock(sk);
+ 
+ 	if (sksec->nlbl_state != NLBL_REQSKB &&
+ 	    sksec->nlbl_state != NLBL_CONNLABELED)
+diff --git a/security/smack/smack.h b/security/smack/smack.h
+index 041688e5a77a3e..297f21446f4568 100644
+--- a/security/smack/smack.h
++++ b/security/smack/smack.h
+@@ -355,6 +355,11 @@ static inline struct superblock_smack *smack_superblock(
+ 	return superblock->s_security + smack_blob_sizes.lbs_superblock;
+ }
+ 
++static inline struct socket_smack *smack_sock(const struct sock *sock)
++{
++	return sock->sk_security + smack_blob_sizes.lbs_sock;
++}
++
+ /*
+  * Is the directory transmuting?
+  */
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 002a1b9ed83a56..6ec9a40f3ec592 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -1606,7 +1606,7 @@ static int smack_inode_getsecurity(struct mnt_idmap *idmap,
+ 		if (sock == NULL || sock->sk == NULL)
+ 			return -EOPNOTSUPP;
+ 
+-		ssp = sock->sk->sk_security;
++		ssp = smack_sock(sock->sk);
+ 
+ 		if (strcmp(name, XATTR_SMACK_IPIN) == 0)
+ 			isp = ssp->smk_in;
+@@ -1994,7 +1994,7 @@ static int smack_file_receive(struct file *file)
+ 
+ 	if (inode->i_sb->s_magic == SOCKFS_MAGIC) {
+ 		sock = SOCKET_I(inode);
+-		ssp = sock->sk->sk_security;
++		ssp = smack_sock(sock->sk);
+ 		tsp = smack_cred(current_cred());
+ 		/*
+ 		 * If the receiving process can't write to the
+@@ -2409,11 +2409,7 @@ static void smack_task_to_inode(struct task_struct *p, struct inode *inode)
+ static int smack_sk_alloc_security(struct sock *sk, int family, gfp_t gfp_flags)
+ {
+ 	struct smack_known *skp = smk_of_current();
+-	struct socket_smack *ssp;
+-
+-	ssp = kzalloc(sizeof(struct socket_smack), gfp_flags);
+-	if (ssp == NULL)
+-		return -ENOMEM;
++	struct socket_smack *ssp = smack_sock(sk);
+ 
+ 	/*
+ 	 * Sockets created by kernel threads receive web label.
+@@ -2427,11 +2423,10 @@ static int smack_sk_alloc_security(struct sock *sk, int family, gfp_t gfp_flags)
+ 	}
+ 	ssp->smk_packet = NULL;
+ 
+-	sk->sk_security = ssp;
+-
+ 	return 0;
+ }
+ 
++#ifdef SMACK_IPV6_PORT_LABELING
+ /**
+  * smack_sk_free_security - Free a socket blob
+  * @sk: the socket
+@@ -2440,7 +2435,6 @@ static int smack_sk_alloc_security(struct sock *sk, int family, gfp_t gfp_flags)
+  */
+ static void smack_sk_free_security(struct sock *sk)
+ {
+-#ifdef SMACK_IPV6_PORT_LABELING
+ 	struct smk_port_label *spp;
+ 
+ 	if (sk->sk_family == PF_INET6) {
+@@ -2453,9 +2447,8 @@ static void smack_sk_free_security(struct sock *sk)
+ 		}
+ 		rcu_read_unlock();
+ 	}
+-#endif
+-	kfree(sk->sk_security);
+ }
++#endif
+ 
+ /**
+  * smack_sk_clone_security - Copy security context
+@@ -2466,8 +2459,8 @@ static void smack_sk_free_security(struct sock *sk)
+  */
+ static void smack_sk_clone_security(const struct sock *sk, struct sock *newsk)
+ {
+-	struct socket_smack *ssp_old = sk->sk_security;
+-	struct socket_smack *ssp_new = newsk->sk_security;
++	struct socket_smack *ssp_old = smack_sock(sk);
++	struct socket_smack *ssp_new = smack_sock(newsk);
+ 
+ 	*ssp_new = *ssp_old;
+ }
+@@ -2583,7 +2576,7 @@ static struct smack_known *smack_ipv6host_label(struct sockaddr_in6 *sip)
+  */
+ static int smack_netlbl_add(struct sock *sk)
+ {
+-	struct socket_smack *ssp = sk->sk_security;
++	struct socket_smack *ssp = smack_sock(sk);
+ 	struct smack_known *skp = ssp->smk_out;
+ 	int rc;
+ 
+@@ -2616,7 +2609,7 @@ static int smack_netlbl_add(struct sock *sk)
+  */
+ static void smack_netlbl_delete(struct sock *sk)
+ {
+-	struct socket_smack *ssp = sk->sk_security;
++	struct socket_smack *ssp = smack_sock(sk);
+ 
+ 	/*
+ 	 * Take the label off the socket if one is set.
+@@ -2648,7 +2641,7 @@ static int smk_ipv4_check(struct sock *sk, struct sockaddr_in *sap)
+ 	struct smack_known *skp;
+ 	int rc = 0;
+ 	struct smack_known *hkp;
+-	struct socket_smack *ssp = sk->sk_security;
++	struct socket_smack *ssp = smack_sock(sk);
+ 	struct smk_audit_info ad;
+ 
+ 	rcu_read_lock();
+@@ -2721,7 +2714,7 @@ static void smk_ipv6_port_label(struct socket *sock, struct sockaddr *address)
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct sockaddr_in6 *addr6;
+-	struct socket_smack *ssp = sock->sk->sk_security;
++	struct socket_smack *ssp = smack_sock(sock->sk);
+ 	struct smk_port_label *spp;
+ 	unsigned short port = 0;
+ 
+@@ -2809,7 +2802,7 @@ static int smk_ipv6_port_check(struct sock *sk, struct sockaddr_in6 *address,
+ 				int act)
+ {
+ 	struct smk_port_label *spp;
+-	struct socket_smack *ssp = sk->sk_security;
++	struct socket_smack *ssp = smack_sock(sk);
+ 	struct smack_known *skp = NULL;
+ 	unsigned short port;
+ 	struct smack_known *object;
+@@ -2912,7 +2905,7 @@ static int smack_inode_setsecurity(struct inode *inode, const char *name,
+ 	if (sock == NULL || sock->sk == NULL)
+ 		return -EOPNOTSUPP;
+ 
+-	ssp = sock->sk->sk_security;
++	ssp = smack_sock(sock->sk);
+ 
+ 	if (strcmp(name, XATTR_SMACK_IPIN) == 0)
+ 		ssp->smk_in = skp;
+@@ -2960,7 +2953,7 @@ static int smack_socket_post_create(struct socket *sock, int family,
+ 	 * Sockets created by kernel threads receive web label.
+ 	 */
+ 	if (unlikely(current->flags & PF_KTHREAD)) {
+-		ssp = sock->sk->sk_security;
++		ssp = smack_sock(sock->sk);
+ 		ssp->smk_in = &smack_known_web;
+ 		ssp->smk_out = &smack_known_web;
+ 	}
+@@ -2985,8 +2978,8 @@ static int smack_socket_post_create(struct socket *sock, int family,
+ static int smack_socket_socketpair(struct socket *socka,
+ 		                   struct socket *sockb)
+ {
+-	struct socket_smack *asp = socka->sk->sk_security;
+-	struct socket_smack *bsp = sockb->sk->sk_security;
++	struct socket_smack *asp = smack_sock(socka->sk);
++	struct socket_smack *bsp = smack_sock(sockb->sk);
+ 
+ 	asp->smk_packet = bsp->smk_out;
+ 	bsp->smk_packet = asp->smk_out;
+@@ -3049,7 +3042,7 @@ static int smack_socket_connect(struct socket *sock, struct sockaddr *sap,
+ 		if (__is_defined(SMACK_IPV6_SECMARK_LABELING))
+ 			rsp = smack_ipv6host_label(sip);
+ 		if (rsp != NULL) {
+-			struct socket_smack *ssp = sock->sk->sk_security;
++			struct socket_smack *ssp = smack_sock(sock->sk);
+ 
+ 			rc = smk_ipv6_check(ssp->smk_out, rsp, sip,
+ 					    SMK_CONNECTING);
+@@ -3844,9 +3837,9 @@ static int smack_unix_stream_connect(struct sock *sock,
+ {
+ 	struct smack_known *skp;
+ 	struct smack_known *okp;
+-	struct socket_smack *ssp = sock->sk_security;
+-	struct socket_smack *osp = other->sk_security;
+-	struct socket_smack *nsp = newsk->sk_security;
++	struct socket_smack *ssp = smack_sock(sock);
++	struct socket_smack *osp = smack_sock(other);
++	struct socket_smack *nsp = smack_sock(newsk);
+ 	struct smk_audit_info ad;
+ 	int rc = 0;
+ #ifdef CONFIG_AUDIT
+@@ -3898,8 +3891,8 @@ static int smack_unix_stream_connect(struct sock *sock,
+  */
+ static int smack_unix_may_send(struct socket *sock, struct socket *other)
+ {
+-	struct socket_smack *ssp = sock->sk->sk_security;
+-	struct socket_smack *osp = other->sk->sk_security;
++	struct socket_smack *ssp = smack_sock(sock->sk);
++	struct socket_smack *osp = smack_sock(other->sk);
+ 	struct smk_audit_info ad;
+ 	int rc;
+ 
+@@ -3936,7 +3929,7 @@ static int smack_socket_sendmsg(struct socket *sock, struct msghdr *msg,
+ 	struct sockaddr_in6 *sap = (struct sockaddr_in6 *) msg->msg_name;
+ #endif
+ #ifdef SMACK_IPV6_SECMARK_LABELING
+-	struct socket_smack *ssp = sock->sk->sk_security;
++	struct socket_smack *ssp = smack_sock(sock->sk);
+ 	struct smack_known *rsp;
+ #endif
+ 	int rc = 0;
+@@ -4148,7 +4141,7 @@ static struct smack_known *smack_from_netlbl(const struct sock *sk, u16 family,
+ 	netlbl_secattr_init(&secattr);
+ 
+ 	if (sk)
+-		ssp = sk->sk_security;
++		ssp = smack_sock(sk);
+ 
+ 	if (netlbl_skbuff_getattr(skb, family, &secattr) == 0) {
+ 		skp = smack_from_secattr(&secattr, ssp);
+@@ -4170,7 +4163,7 @@ static struct smack_known *smack_from_netlbl(const struct sock *sk, u16 family,
+  */
+ static int smack_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ {
+-	struct socket_smack *ssp = sk->sk_security;
++	struct socket_smack *ssp = smack_sock(sk);
+ 	struct smack_known *skp = NULL;
+ 	int rc = 0;
+ 	struct smk_audit_info ad;
+@@ -4274,7 +4267,7 @@ static int smack_socket_getpeersec_stream(struct socket *sock,
+ 	u32 slen = 1;
+ 	int rc = 0;
+ 
+-	ssp = sock->sk->sk_security;
++	ssp = smack_sock(sock->sk);
+ 	if (ssp->smk_packet != NULL) {
+ 		rcp = ssp->smk_packet->smk_known;
+ 		slen = strlen(rcp) + 1;
+@@ -4324,7 +4317,7 @@ static int smack_socket_getpeersec_dgram(struct socket *sock,
+ 
+ 	switch (family) {
+ 	case PF_UNIX:
+-		ssp = sock->sk->sk_security;
++		ssp = smack_sock(sock->sk);
+ 		s = ssp->smk_out->smk_secid;
+ 		break;
+ 	case PF_INET:
+@@ -4373,7 +4366,7 @@ static void smack_sock_graft(struct sock *sk, struct socket *parent)
+ 	    (sk->sk_family != PF_INET && sk->sk_family != PF_INET6))
+ 		return;
+ 
+-	ssp = sk->sk_security;
++	ssp = smack_sock(sk);
+ 	ssp->smk_in = skp;
+ 	ssp->smk_out = skp;
+ 	/* cssp->smk_packet is already set in smack_inet_csk_clone() */
+@@ -4393,7 +4386,7 @@ static int smack_inet_conn_request(const struct sock *sk, struct sk_buff *skb,
+ {
+ 	u16 family = sk->sk_family;
+ 	struct smack_known *skp;
+-	struct socket_smack *ssp = sk->sk_security;
++	struct socket_smack *ssp = smack_sock(sk);
+ 	struct sockaddr_in addr;
+ 	struct iphdr *hdr;
+ 	struct smack_known *hskp;
+@@ -4479,7 +4472,7 @@ static int smack_inet_conn_request(const struct sock *sk, struct sk_buff *skb,
+ static void smack_inet_csk_clone(struct sock *sk,
+ 				 const struct request_sock *req)
+ {
+-	struct socket_smack *ssp = sk->sk_security;
++	struct socket_smack *ssp = smack_sock(sk);
+ 	struct smack_known *skp;
+ 
+ 	if (req->peer_secid != 0) {
+@@ -5049,6 +5042,7 @@ struct lsm_blob_sizes smack_blob_sizes __ro_after_init = {
+ 	.lbs_inode = sizeof(struct inode_smack),
+ 	.lbs_ipc = sizeof(struct smack_known *),
+ 	.lbs_msg_msg = sizeof(struct smack_known *),
++	.lbs_sock = sizeof(struct socket_smack),
+ 	.lbs_superblock = sizeof(struct superblock_smack),
+ 	.lbs_xattr_count = SMACK_INODE_INIT_XATTRS,
+ };
+@@ -5173,7 +5167,9 @@ static struct security_hook_list smack_hooks[] __ro_after_init = {
+ 	LSM_HOOK_INIT(socket_getpeersec_stream, smack_socket_getpeersec_stream),
+ 	LSM_HOOK_INIT(socket_getpeersec_dgram, smack_socket_getpeersec_dgram),
+ 	LSM_HOOK_INIT(sk_alloc_security, smack_sk_alloc_security),
++#ifdef SMACK_IPV6_PORT_LABELING
+ 	LSM_HOOK_INIT(sk_free_security, smack_sk_free_security),
++#endif
+ 	LSM_HOOK_INIT(sk_clone_security, smack_sk_clone_security),
+ 	LSM_HOOK_INIT(sock_graft, smack_sock_graft),
+ 	LSM_HOOK_INIT(inet_conn_request, smack_inet_conn_request),
+diff --git a/security/smack/smack_netfilter.c b/security/smack/smack_netfilter.c
+index b945c1d3a7431a..bad71b7e648da7 100644
+--- a/security/smack/smack_netfilter.c
++++ b/security/smack/smack_netfilter.c
+@@ -26,8 +26,8 @@ static unsigned int smack_ip_output(void *priv,
+ 	struct socket_smack *ssp;
+ 	struct smack_known *skp;
+ 
+-	if (sk && sk->sk_security) {
+-		ssp = sk->sk_security;
++	if (sk) {
++		ssp = smack_sock(sk);
+ 		skp = ssp->smk_out;
+ 		skb->secmark = skp->smk_secid;
+ 	}
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index e22aad7604e8ac..5dd1e164f9b13d 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -932,7 +932,7 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ 	}
+ 	if (rc >= 0) {
+ 		old_cat = skp->smk_netlabel.attr.mls.cat;
+-		skp->smk_netlabel.attr.mls.cat = ncats.attr.mls.cat;
++		rcu_assign_pointer(skp->smk_netlabel.attr.mls.cat, ncats.attr.mls.cat);
+ 		skp->smk_netlabel.attr.mls.lvl = ncats.attr.mls.lvl;
+ 		synchronize_rcu();
+ 		netlbl_catmap_free(old_cat);
+diff --git a/sound/pci/hda/cs35l41_hda_spi.c b/sound/pci/hda/cs35l41_hda_spi.c
+index b76c0dfd5fefc4..f8c356ad0d340f 100644
+--- a/sound/pci/hda/cs35l41_hda_spi.c
++++ b/sound/pci/hda/cs35l41_hda_spi.c
+@@ -38,6 +38,7 @@ static const struct spi_device_id cs35l41_hda_spi_id[] = {
+ 	{ "cs35l41-hda", 0 },
+ 	{}
+ };
++MODULE_DEVICE_TABLE(spi, cs35l41_hda_spi_id);
+ 
+ static const struct acpi_device_id cs35l41_acpi_hda_match[] = {
+ 	{ "CSC3551", 0 },
+diff --git a/sound/pci/hda/tas2781_hda_i2c.c b/sound/pci/hda/tas2781_hda_i2c.c
+index 9e88d39eac1e22..0676b416056607 100644
+--- a/sound/pci/hda/tas2781_hda_i2c.c
++++ b/sound/pci/hda/tas2781_hda_i2c.c
+@@ -822,7 +822,7 @@ static int tas2781_hda_i2c_probe(struct i2c_client *clt)
+ 	} else
+ 		return -ENODEV;
+ 
+-	tas_hda->priv->irq_info.irq = clt->irq;
++	tas_hda->priv->irq = clt->irq;
+ 	ret = tas2781_read_acpi(tas_hda->priv, device_name);
+ 	if (ret)
+ 		return dev_err_probe(tas_hda->dev, ret,
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index e3aca9c785a079..aa163ec4086223 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -2903,8 +2903,10 @@ int rt5682_register_dai_clks(struct rt5682_priv *rt5682)
+ 		}
+ 
+ 		if (dev->of_node) {
+-			devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get,
++			ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get,
+ 						    dai_clk_hw);
++			if (ret)
++				return ret;
+ 		} else {
+ 			ret = devm_clk_hw_register_clkdev(dev, dai_clk_hw,
+ 							  init.name,
+diff --git a/sound/soc/codecs/rt5682s.c b/sound/soc/codecs/rt5682s.c
+index f50f196d700d72..ce2e88e066f3e5 100644
+--- a/sound/soc/codecs/rt5682s.c
++++ b/sound/soc/codecs/rt5682s.c
+@@ -2828,7 +2828,9 @@ static int rt5682s_register_dai_clks(struct snd_soc_component *component)
+ 		}
+ 
+ 		if (dev->of_node) {
+-			devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, dai_clk_hw);
++			ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, dai_clk_hw);
++			if (ret)
++				return ret;
+ 		} else {
+ 			ret = devm_clk_hw_register_clkdev(dev, dai_clk_hw,
+ 							  init.name, dev_name(dev));
+diff --git a/sound/soc/codecs/tas2781-comlib.c b/sound/soc/codecs/tas2781-comlib.c
+index 3aa81514dad76f..0444cf90c5119f 100644
+--- a/sound/soc/codecs/tas2781-comlib.c
++++ b/sound/soc/codecs/tas2781-comlib.c
+@@ -14,7 +14,6 @@
+ #include <linux/interrupt.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+-#include <linux/of_gpio.h>
+ #include <linux/of_irq.h>
+ #include <linux/regmap.h>
+ #include <linux/slab.h>
+@@ -406,8 +405,6 @@ EXPORT_SYMBOL_GPL(tasdevice_dsp_remove);
+ 
+ void tasdevice_remove(struct tasdevice_priv *tas_priv)
+ {
+-	if (gpio_is_valid(tas_priv->irq_info.irq_gpio))
+-		gpio_free(tas_priv->irq_info.irq_gpio);
+ 	mutex_destroy(&tas_priv->codec_lock);
+ }
+ EXPORT_SYMBOL_GPL(tasdevice_remove);
+diff --git a/sound/soc/codecs/tas2781-fmwlib.c b/sound/soc/codecs/tas2781-fmwlib.c
+index 8f9a3ae7153e94..f3a7605f071043 100644
+--- a/sound/soc/codecs/tas2781-fmwlib.c
++++ b/sound/soc/codecs/tas2781-fmwlib.c
+@@ -13,7 +13,6 @@
+ #include <linux/interrupt.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+-#include <linux/of_gpio.h>
+ #include <linux/of_irq.h>
+ #include <linux/regmap.h>
+ #include <linux/slab.h>
+diff --git a/sound/soc/codecs/tas2781-i2c.c b/sound/soc/codecs/tas2781-i2c.c
+index c64d458e524e2c..edd1ad3062c886 100644
+--- a/sound/soc/codecs/tas2781-i2c.c
++++ b/sound/soc/codecs/tas2781-i2c.c
+@@ -21,7 +21,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+-#include <linux/of_gpio.h>
++#include <linux/of_address.h>
+ #include <linux/of_irq.h>
+ #include <linux/regmap.h>
+ #include <linux/slab.h>
+@@ -618,7 +618,7 @@ static void tasdevice_parse_dt(struct tasdevice_priv *tas_priv)
+ {
+ 	struct i2c_client *client = (struct i2c_client *)tas_priv->client;
+ 	unsigned int dev_addrs[TASDEVICE_MAX_CHANNELS];
+-	int rc, i, ndev = 0;
++	int i, ndev = 0;
+ 
+ 	if (tas_priv->isacpi) {
+ 		ndev = device_property_read_u32_array(&client->dev,
+@@ -633,64 +633,34 @@ static void tasdevice_parse_dt(struct tasdevice_priv *tas_priv)
+ 				"ti,audio-slots", dev_addrs, ndev);
+ 		}
+ 
+-		tas_priv->irq_info.irq_gpio =
++		tas_priv->irq =
+ 			acpi_dev_gpio_irq_get(ACPI_COMPANION(&client->dev), 0);
+-	} else {
++	} else if (IS_ENABLED(CONFIG_OF)) {
+ 		struct device_node *np = tas_priv->dev->of_node;
+-#ifdef CONFIG_OF
+-		const __be32 *reg, *reg_end;
+-		int len, sw, aw;
+-
+-		aw = of_n_addr_cells(np);
+-		sw = of_n_size_cells(np);
+-		if (sw == 0) {
+-			reg = (const __be32 *)of_get_property(np,
+-				"reg", &len);
+-			reg_end = reg + len/sizeof(*reg);
+-			ndev = 0;
+-			do {
+-				dev_addrs[ndev] = of_read_number(reg, aw);
+-				reg += aw;
+-				ndev++;
+-			} while (reg < reg_end);
+-		} else {
+-			ndev = 1;
+-			dev_addrs[0] = client->addr;
++		u64 addr;
++
++		for (i = 0; i < TASDEVICE_MAX_CHANNELS; i++) {
++			if (of_property_read_reg(np, i, &addr, NULL))
++				break;
++			dev_addrs[ndev++] = addr;
+ 		}
+-#else
++
++		tas_priv->irq = of_irq_get(np, 0);
++	} else {
+ 		ndev = 1;
+ 		dev_addrs[0] = client->addr;
+-#endif
+-		tas_priv->irq_info.irq_gpio = of_irq_get(np, 0);
+ 	}
+ 	tas_priv->ndev = ndev;
+ 	for (i = 0; i < ndev; i++)
+ 		tas_priv->tasdevice[i].dev_addr = dev_addrs[i];
+ 
+ 	tas_priv->reset = devm_gpiod_get_optional(&client->dev,
+-			"reset-gpios", GPIOD_OUT_HIGH);
++			"reset", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(tas_priv->reset))
+ 		dev_err(tas_priv->dev, "%s Can't get reset GPIO\n",
+ 			__func__);
+ 
+ 	strcpy(tas_priv->dev_name, tasdevice_id[tas_priv->chip_id].name);
+-
+-	if (gpio_is_valid(tas_priv->irq_info.irq_gpio)) {
+-		rc = gpio_request(tas_priv->irq_info.irq_gpio,
+-				"AUDEV-IRQ");
+-		if (!rc) {
+-			gpio_direction_input(
+-				tas_priv->irq_info.irq_gpio);
+-
+-			tas_priv->irq_info.irq =
+-				gpio_to_irq(tas_priv->irq_info.irq_gpio);
+-		} else
+-			dev_err(tas_priv->dev, "%s: GPIO %d request error\n",
+-				__func__, tas_priv->irq_info.irq_gpio);
+-	} else
+-		dev_err(tas_priv->dev,
+-			"Looking up irq-gpio property failed %d\n",
+-			tas_priv->irq_info.irq_gpio);
+ }
+ 
+ static int tasdevice_i2c_probe(struct i2c_client *i2c)
+diff --git a/sound/soc/loongson/loongson_card.c b/sound/soc/loongson/loongson_card.c
+index fae5e9312bf08c..2c8dbdba27c5f8 100644
+--- a/sound/soc/loongson/loongson_card.c
++++ b/sound/soc/loongson/loongson_card.c
+@@ -127,8 +127,8 @@ static int loongson_card_parse_of(struct loongson_card_data *data)
+ 	codec = of_get_child_by_name(dev->of_node, "codec");
+ 	if (!codec) {
+ 		dev_err(dev, "audio-codec property missing or invalid\n");
+-		ret = -EINVAL;
+-		goto err;
++		of_node_put(cpu);
++		return -EINVAL;
+ 	}
+ 
+ 	for (i = 0; i < card->num_links; i++) {
+diff --git a/tools/bpf/runqslower/Makefile b/tools/bpf/runqslower/Makefile
+index d8288936c9120f..c4f1f1735af659 100644
+--- a/tools/bpf/runqslower/Makefile
++++ b/tools/bpf/runqslower/Makefile
+@@ -15,6 +15,7 @@ INCLUDES := -I$(OUTPUT) -I$(BPF_INCLUDE) -I$(abspath ../../include/uapi)
+ CFLAGS := -g -Wall $(CLANG_CROSS_FLAGS)
+ CFLAGS += $(EXTRA_CFLAGS)
+ LDFLAGS += $(EXTRA_LDFLAGS)
++LDLIBS += -lelf -lz
+ 
+ # Try to detect best kernel BTF source
+ KERNEL_REL := $(shell uname -r)
+@@ -51,7 +52,7 @@ clean:
+ libbpf_hdrs: $(BPFOBJ)
+ 
+ $(OUTPUT)/runqslower: $(OUTPUT)/runqslower.o $(BPFOBJ)
+-	$(QUIET_LINK)$(CC) $(CFLAGS) $^ -lelf -lz -o $@
++	$(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $^ $(LDLIBS) -o $@
+ 
+ $(OUTPUT)/runqslower.o: runqslower.h $(OUTPUT)/runqslower.skel.h	      \
+ 			$(OUTPUT)/runqslower.bpf.o | libbpf_hdrs
+diff --git a/tools/build/feature/test-all.c b/tools/build/feature/test-all.c
+index dd0a18c2ef8fc0..6f4bf386a3b5c4 100644
+--- a/tools/build/feature/test-all.c
++++ b/tools/build/feature/test-all.c
+@@ -134,10 +134,6 @@
+ #undef main
+ #endif
+ 
+-#define main main_test_libcapstone
+-# include "test-libcapstone.c"
+-#undef main
+-
+ #define main main_test_lzma
+ # include "test-lzma.c"
+ #undef main
+diff --git a/tools/include/nolibc/string.h b/tools/include/nolibc/string.h
+index f9ab28421e6dcd..9ec9c24f38c092 100644
+--- a/tools/include/nolibc/string.h
++++ b/tools/include/nolibc/string.h
+@@ -7,6 +7,7 @@
+ #ifndef _NOLIBC_STRING_H
+ #define _NOLIBC_STRING_H
+ 
++#include "arch.h"
+ #include "std.h"
+ 
+ static void *malloc(size_t len);
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 5edb717647847b..3ecb33188336f6 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -473,8 +473,6 @@ struct bpf_program {
+ };
+ 
+ struct bpf_struct_ops {
+-	const char *tname;
+-	const struct btf_type *type;
+ 	struct bpf_program **progs;
+ 	__u32 *kern_func_off;
+ 	/* e.g. struct tcp_congestion_ops in bpf_prog's btf format */
+@@ -1059,11 +1057,14 @@ static int bpf_object_adjust_struct_ops_autoload(struct bpf_object *obj)
+ 			continue;
+ 
+ 		for (j = 0; j < obj->nr_maps; ++j) {
++			const struct btf_type *type;
++
+ 			map = &obj->maps[j];
+ 			if (!bpf_map__is_struct_ops(map))
+ 				continue;
+ 
+-			vlen = btf_vlen(map->st_ops->type);
++			type = btf__type_by_id(obj->btf, map->st_ops->type_id);
++			vlen = btf_vlen(type);
+ 			for (k = 0; k < vlen; ++k) {
+ 				slot_prog = map->st_ops->progs[k];
+ 				if (prog != slot_prog)
+@@ -1097,8 +1098,8 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map)
+ 	int err;
+ 
+ 	st_ops = map->st_ops;
+-	type = st_ops->type;
+-	tname = st_ops->tname;
++	type = btf__type_by_id(btf, st_ops->type_id);
++	tname = btf__name_by_offset(btf, type->name_off);
+ 	err = find_struct_ops_kern_types(obj, tname, &mod_btf,
+ 					 &kern_type, &kern_type_id,
+ 					 &kern_vtype, &kern_vtype_id,
+@@ -1398,8 +1399,6 @@ static int init_struct_ops_maps(struct bpf_object *obj, const char *sec_name,
+ 		memcpy(st_ops->data,
+ 		       data->d_buf + vsi->offset,
+ 		       type->size);
+-		st_ops->tname = tname;
+-		st_ops->type = type;
+ 		st_ops->type_id = type_id;
+ 
+ 		pr_debug("struct_ops init: struct %s(type_id=%u) %s found at offset %u\n",
+@@ -7867,16 +7866,19 @@ static int bpf_object_init_progs(struct bpf_object *obj, const struct bpf_object
+ }
+ 
+ static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf, size_t obj_buf_sz,
++					  const char *obj_name,
+ 					  const struct bpf_object_open_opts *opts)
+ {
+-	const char *obj_name, *kconfig, *btf_tmp_path, *token_path;
++	const char *kconfig, *btf_tmp_path, *token_path;
+ 	struct bpf_object *obj;
+-	char tmp_name[64];
+ 	int err;
+ 	char *log_buf;
+ 	size_t log_size;
+ 	__u32 log_level;
+ 
++	if (obj_buf && !obj_name)
++		return ERR_PTR(-EINVAL);
++
+ 	if (elf_version(EV_CURRENT) == EV_NONE) {
+ 		pr_warn("failed to init libelf for %s\n",
+ 			path ? : "(mem buf)");
+@@ -7886,16 +7888,12 @@ static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf,
+ 	if (!OPTS_VALID(opts, bpf_object_open_opts))
+ 		return ERR_PTR(-EINVAL);
+ 
+-	obj_name = OPTS_GET(opts, object_name, NULL);
++	obj_name = OPTS_GET(opts, object_name, NULL) ?: obj_name;
+ 	if (obj_buf) {
+-		if (!obj_name) {
+-			snprintf(tmp_name, sizeof(tmp_name), "%lx-%lx",
+-				 (unsigned long)obj_buf,
+-				 (unsigned long)obj_buf_sz);
+-			obj_name = tmp_name;
+-		}
+ 		path = obj_name;
+ 		pr_debug("loading object '%s' from buffer\n", obj_name);
++	} else {
++		pr_debug("loading object from %s\n", path);
+ 	}
+ 
+ 	log_buf = OPTS_GET(opts, kernel_log_buf, NULL);
+@@ -7979,9 +7977,7 @@ bpf_object__open_file(const char *path, const struct bpf_object_open_opts *opts)
+ 	if (!path)
+ 		return libbpf_err_ptr(-EINVAL);
+ 
+-	pr_debug("loading %s\n", path);
+-
+-	return libbpf_ptr(bpf_object_open(path, NULL, 0, opts));
++	return libbpf_ptr(bpf_object_open(path, NULL, 0, NULL, opts));
+ }
+ 
+ struct bpf_object *bpf_object__open(const char *path)
+@@ -7993,10 +7989,15 @@ struct bpf_object *
+ bpf_object__open_mem(const void *obj_buf, size_t obj_buf_sz,
+ 		     const struct bpf_object_open_opts *opts)
+ {
++	char tmp_name[64];
++
+ 	if (!obj_buf || obj_buf_sz == 0)
+ 		return libbpf_err_ptr(-EINVAL);
+ 
+-	return libbpf_ptr(bpf_object_open(NULL, obj_buf, obj_buf_sz, opts));
++	/* create a (quite useless) default "name" for this memory buffer object */
++	snprintf(tmp_name, sizeof(tmp_name), "%lx-%zx", (unsigned long)obj_buf, obj_buf_sz);
++
++	return libbpf_ptr(bpf_object_open(NULL, obj_buf, obj_buf_sz, tmp_name, opts));
+ }
+ 
+ static int bpf_object_unload(struct bpf_object *obj)
+@@ -8406,11 +8407,13 @@ static int bpf_object__resolve_externs(struct bpf_object *obj,
+ 
+ static void bpf_map_prepare_vdata(const struct bpf_map *map)
+ {
++	const struct btf_type *type;
+ 	struct bpf_struct_ops *st_ops;
+ 	__u32 i;
+ 
+ 	st_ops = map->st_ops;
+-	for (i = 0; i < btf_vlen(st_ops->type); i++) {
++	type = btf__type_by_id(map->obj->btf, st_ops->type_id);
++	for (i = 0; i < btf_vlen(type); i++) {
+ 		struct bpf_program *prog = st_ops->progs[i];
+ 		void *kern_data;
+ 		int prog_fd;
+@@ -9673,6 +9676,7 @@ static struct bpf_map *find_struct_ops_map_by_offset(struct bpf_object *obj,
+ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
+ 					    Elf64_Shdr *shdr, Elf_Data *data)
+ {
++	const struct btf_type *type;
+ 	const struct btf_member *member;
+ 	struct bpf_struct_ops *st_ops;
+ 	struct bpf_program *prog;
+@@ -9732,13 +9736,14 @@ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
+ 		}
+ 		insn_idx = sym->st_value / BPF_INSN_SZ;
+ 
+-		member = find_member_by_offset(st_ops->type, moff * 8);
++		type = btf__type_by_id(btf, st_ops->type_id);
++		member = find_member_by_offset(type, moff * 8);
+ 		if (!member) {
+ 			pr_warn("struct_ops reloc %s: cannot find member at moff %u\n",
+ 				map->name, moff);
+ 			return -EINVAL;
+ 		}
+-		member_idx = member - btf_members(st_ops->type);
++		member_idx = member - btf_members(type);
+ 		name = btf__name_by_offset(btf, member->name_off);
+ 
+ 		if (!resolve_func_ptr(btf, member->type, NULL)) {
+@@ -13715,29 +13720,13 @@ static int populate_skeleton_progs(const struct bpf_object *obj,
+ int bpf_object__open_skeleton(struct bpf_object_skeleton *s,
+ 			      const struct bpf_object_open_opts *opts)
+ {
+-	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, skel_opts,
+-		.object_name = s->name,
+-	);
+ 	struct bpf_object *obj;
+ 	int err;
+ 
+-	/* Attempt to preserve opts->object_name, unless overriden by user
+-	 * explicitly. Overwriting object name for skeletons is discouraged,
+-	 * as it breaks global data maps, because they contain object name
+-	 * prefix as their own map name prefix. When skeleton is generated,
+-	 * bpftool is making an assumption that this name will stay the same.
+-	 */
+-	if (opts) {
+-		memcpy(&skel_opts, opts, sizeof(*opts));
+-		if (!opts->object_name)
+-			skel_opts.object_name = s->name;
+-	}
+-
+-	obj = bpf_object__open_mem(s->data, s->data_sz, &skel_opts);
+-	err = libbpf_get_error(obj);
+-	if (err) {
+-		pr_warn("failed to initialize skeleton BPF object '%s': %d\n",
+-			s->name, err);
++	obj = bpf_object_open(NULL, s->data, s->data_sz, s->name, opts);
++	if (IS_ERR(obj)) {
++		err = PTR_ERR(obj);
++		pr_warn("failed to initialize skeleton BPF object '%s': %d\n", s->name, err);
+ 		return libbpf_err(err);
+ 	}
+ 
+diff --git a/tools/objtool/arch/loongarch/decode.c b/tools/objtool/arch/loongarch/decode.c
+index aee479d2191c21..69b66994f2a155 100644
+--- a/tools/objtool/arch/loongarch/decode.c
++++ b/tools/objtool/arch/loongarch/decode.c
+@@ -122,7 +122,7 @@ static bool decode_insn_reg2i12_fomat(union loongarch_instruction inst,
+ 	switch (inst.reg2i12_format.opcode) {
+ 	case addid_op:
+ 		if ((inst.reg2i12_format.rd == CFI_SP) || (inst.reg2i12_format.rj == CFI_SP)) {
+-			/* addi.d sp,sp,si12 or addi.d fp,sp,si12 */
++			/* addi.d sp,sp,si12 or addi.d fp,sp,si12 or addi.d sp,fp,si12 */
+ 			insn->immediate = sign_extend64(inst.reg2i12_format.immediate, 11);
+ 			ADD_OP(op) {
+ 				op->src.type = OP_SRC_ADD;
+@@ -132,6 +132,15 @@ static bool decode_insn_reg2i12_fomat(union loongarch_instruction inst,
+ 				op->dest.reg = inst.reg2i12_format.rd;
+ 			}
+ 		}
++		if ((inst.reg2i12_format.rd == CFI_SP) && (inst.reg2i12_format.rj == CFI_FP)) {
++			/* addi.d sp,fp,si12 */
++			struct symbol *func = find_func_containing(insn->sec, insn->offset);
++
++			if (!func)
++				return false;
++
++			func->frame_pointer = true;
++		}
+ 		break;
+ 	case ldd_op:
+ 		if (inst.reg2i12_format.rj == CFI_SP) {
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 0a33d9195b7a91..60680b2bec96c0 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -2991,10 +2991,27 @@ static int update_cfi_state(struct instruction *insn,
+ 				break;
+ 			}
+ 
+-			if (op->dest.reg == CFI_SP && op->src.reg == CFI_BP) {
++			if (op->dest.reg == CFI_BP && op->src.reg == CFI_SP &&
++			    insn->sym->frame_pointer) {
++				/* addi.d fp,sp,imm on LoongArch */
++				if (cfa->base == CFI_SP && cfa->offset == op->src.offset) {
++					cfa->base = CFI_BP;
++					cfa->offset = 0;
++				}
++				break;
++			}
+ 
+-				/* lea disp(%rbp), %rsp */
+-				cfi->stack_size = -(op->src.offset + regs[CFI_BP].offset);
++			if (op->dest.reg == CFI_SP && op->src.reg == CFI_BP) {
++				/* addi.d sp,fp,imm on LoongArch */
++				if (cfa->base == CFI_BP && cfa->offset == 0) {
++					if (insn->sym->frame_pointer) {
++						cfa->base = CFI_SP;
++						cfa->offset = -op->src.offset;
++					}
++				} else {
++					/* lea disp(%rbp), %rsp */
++					cfi->stack_size = -(op->src.offset + regs[CFI_BP].offset);
++				}
+ 				break;
+ 			}
+ 
+diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
+index 2b8a69de4db871..d7e815c2fd1567 100644
+--- a/tools/objtool/include/objtool/elf.h
++++ b/tools/objtool/include/objtool/elf.h
+@@ -68,6 +68,7 @@ struct symbol {
+ 	u8 warned	     : 1;
+ 	u8 embedded_insn     : 1;
+ 	u8 local_label       : 1;
++	u8 frame_pointer     : 1;
+ 	struct list_head pv_target;
+ 	struct reloc *relocs;
+ };
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index c157bd31f2e5a4..7298f360706220 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -3266,7 +3266,7 @@ static int perf_c2c__record(int argc, const char **argv)
+ 		return -1;
+ 	}
+ 
+-	if (perf_pmu__mem_events_init(pmu)) {
++	if (perf_pmu__mem_events_init()) {
+ 		pr_err("failed: memory events not supported\n");
+ 		return -1;
+ 	}
+@@ -3290,19 +3290,15 @@ static int perf_c2c__record(int argc, const char **argv)
+ 		 * PERF_MEM_EVENTS__LOAD_STORE if it is supported.
+ 		 */
+ 		if (e->tag) {
+-			e->record = true;
++			perf_mem_record[PERF_MEM_EVENTS__LOAD_STORE] = true;
+ 			rec_argv[i++] = "-W";
+ 		} else {
+-			e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__LOAD);
+-			e->record = true;
+-
+-			e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__STORE);
+-			e->record = true;
++			perf_mem_record[PERF_MEM_EVENTS__LOAD] = true;
++			perf_mem_record[PERF_MEM_EVENTS__STORE] = true;
+ 		}
+ 	}
+ 
+-	e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__LOAD);
+-	if (e->record)
++	if (perf_mem_record[PERF_MEM_EVENTS__LOAD])
+ 		rec_argv[i++] = "-W";
+ 
+ 	rec_argv[i++] = "-d";
+diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
+index a212678d47beb1..c80fb0f60e611f 100644
+--- a/tools/perf/builtin-inject.c
++++ b/tools/perf/builtin-inject.c
+@@ -2204,6 +2204,7 @@ int cmd_inject(int argc, const char **argv)
+ 			.finished_init	= perf_event__repipe_op2_synth,
+ 			.compressed	= perf_event__repipe_op4_synth,
+ 			.auxtrace	= perf_event__repipe_auxtrace,
++			.dont_split_sample_group = true,
+ 		},
+ 		.input_name  = "-",
+ 		.samples = LIST_HEAD_INIT(inject.samples),
+diff --git a/tools/perf/builtin-mem.c b/tools/perf/builtin-mem.c
+index 863fcd735daee9..08724fa508e14c 100644
+--- a/tools/perf/builtin-mem.c
++++ b/tools/perf/builtin-mem.c
+@@ -97,7 +97,7 @@ static int __cmd_record(int argc, const char **argv, struct perf_mem *mem)
+ 		return -1;
+ 	}
+ 
+-	if (perf_pmu__mem_events_init(pmu)) {
++	if (perf_pmu__mem_events_init()) {
+ 		pr_err("failed: memory events not supported\n");
+ 		return -1;
+ 	}
+@@ -126,22 +126,17 @@ static int __cmd_record(int argc, const char **argv, struct perf_mem *mem)
+ 	if (e->tag &&
+ 	    (mem->operation & MEM_OPERATION_LOAD) &&
+ 	    (mem->operation & MEM_OPERATION_STORE)) {
+-		e->record = true;
++		perf_mem_record[PERF_MEM_EVENTS__LOAD_STORE] = true;
+ 		rec_argv[i++] = "-W";
+ 	} else {
+-		if (mem->operation & MEM_OPERATION_LOAD) {
+-			e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__LOAD);
+-			e->record = true;
+-		}
++		if (mem->operation & MEM_OPERATION_LOAD)
++			perf_mem_record[PERF_MEM_EVENTS__LOAD] = true;
+ 
+-		if (mem->operation & MEM_OPERATION_STORE) {
+-			e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__STORE);
+-			e->record = true;
+-		}
++		if (mem->operation & MEM_OPERATION_STORE)
++			perf_mem_record[PERF_MEM_EVENTS__STORE] = true;
+ 	}
+ 
+-	e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__LOAD);
+-	if (e->record)
++	if (perf_mem_record[PERF_MEM_EVENTS__LOAD])
+ 		rec_argv[i++] = "-W";
+ 
+ 	rec_argv[i++] = "-d";
+@@ -372,6 +367,7 @@ static int report_events(int argc, const char **argv, struct perf_mem *mem)
+ 		rep_argv[i] = argv[j];
+ 
+ 	ret = cmd_report(i, rep_argv);
++	free(new_sort_order);
+ 	free(rep_argv);
+ 	return ret;
+ }
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 69618fb0110b64..8bebaba56bc3f5 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -565,6 +565,7 @@ static int evlist__tty_browse_hists(struct evlist *evlist, struct report *rep, c
+ 		struct hists *hists = evsel__hists(pos);
+ 		const char *evname = evsel__name(pos);
+ 
++		i++;
+ 		if (symbol_conf.event_group && !evsel__is_group_leader(pos))
+ 			continue;
+ 
+@@ -574,7 +575,7 @@ static int evlist__tty_browse_hists(struct evlist *evlist, struct report *rep, c
+ 		hists__fprintf_nr_sample_events(hists, rep, evname, stdout);
+ 
+ 		if (rep->total_cycles_mode) {
+-			report__browse_block_hists(&rep->block_reports[i++].hist,
++			report__browse_block_hists(&rep->block_reports[i - 1].hist,
+ 						   rep->min_percent, pos, NULL);
+ 			continue;
+ 		}
+diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
+index 5977c49ae2c76b..ed11a20bc14935 100644
+--- a/tools/perf/builtin-sched.c
++++ b/tools/perf/builtin-sched.c
+@@ -2596,9 +2596,12 @@ static int timehist_sched_change_event(struct perf_tool *tool,
+ 	 * - previous sched event is out of window - we are done
+ 	 * - sample time is beyond window user cares about - reset it
+ 	 *   to close out stats for time window interest
++	 * - If tprev is 0, that is, sched_in event for current task is
++	 *   not recorded, cannot determine whether sched_in event is
++	 *   within time window interest - ignore it
+ 	 */
+ 	if (ptime->end) {
+-		if (tprev > ptime->end)
++		if (!tprev || tprev > ptime->end)
+ 			goto out;
+ 
+ 		if (t > ptime->end)
+@@ -3031,7 +3034,8 @@ static int perf_sched__timehist(struct perf_sched *sched)
+ 
+ 	if (perf_time__parse_str(&sched->ptime, sched->time_str) != 0) {
+ 		pr_err("Invalid time string\n");
+-		return -EINVAL;
++		err = -EINVAL;
++		goto out;
+ 	}
+ 
+ 	if (timehist_check_attr(sched, evlist) != 0)
+diff --git a/tools/perf/scripts/python/arm-cs-trace-disasm.py b/tools/perf/scripts/python/arm-cs-trace-disasm.py
+index d973c2baed1c85..7aff02d84ffb3b 100755
+--- a/tools/perf/scripts/python/arm-cs-trace-disasm.py
++++ b/tools/perf/scripts/python/arm-cs-trace-disasm.py
+@@ -192,17 +192,16 @@ def process_event(param_dict):
+ 	ip = sample["ip"]
+ 	addr = sample["addr"]
+ 
++	if (options.verbose == True):
++		print("Event type: %s" % name)
++		print_sample(sample)
++
+ 	# Initialize CPU data if it's empty, and directly return back
+ 	# if this is the first tracing event for this CPU.
+ 	if (cpu_data.get(str(cpu) + 'addr') == None):
+ 		cpu_data[str(cpu) + 'addr'] = addr
+ 		return
+ 
+-
+-	if (options.verbose == True):
+-		print("Event type: %s" % name)
+-		print_sample(sample)
+-
+ 	# If cannot find dso so cannot dump assembler, bail out
+ 	if (dso == '[unknown]'):
+ 		return
+diff --git a/tools/perf/util/annotate-data.c b/tools/perf/util/annotate-data.c
+index 965da6c0b5427e..79c1f2ae7affdc 100644
+--- a/tools/perf/util/annotate-data.c
++++ b/tools/perf/util/annotate-data.c
+@@ -104,7 +104,7 @@ static void pr_debug_location(Dwarf_Die *die, u64 pc, int reg)
+ 		return;
+ 
+ 	while ((off = dwarf_getlocations(&attr, off, &base, &start, &end, &ops, &nops)) > 0) {
+-		if (reg != DWARF_REG_PC && end < pc)
++		if (reg != DWARF_REG_PC && end <= pc)
+ 			continue;
+ 		if (reg != DWARF_REG_PC && start > pc)
+ 			break;
+diff --git a/tools/perf/util/bpf_skel/lock_data.h b/tools/perf/util/bpf_skel/lock_data.h
+index 36af11faad03c1..de12892f992f8d 100644
+--- a/tools/perf/util/bpf_skel/lock_data.h
++++ b/tools/perf/util/bpf_skel/lock_data.h
+@@ -7,11 +7,11 @@ struct tstamp_data {
+ 	u64 timestamp;
+ 	u64 lock;
+ 	u32 flags;
+-	u32 stack_id;
++	s32 stack_id;
+ };
+ 
+ struct contention_key {
+-	u32 stack_id;
++	s32 stack_id;
+ 	u32 pid;
+ 	u64 lock_addr_or_cgroup;
+ };
+diff --git a/tools/perf/util/bpf_skel/vmlinux/vmlinux.h b/tools/perf/util/bpf_skel/vmlinux/vmlinux.h
+index e9028235d7717b..d818e30c545713 100644
+--- a/tools/perf/util/bpf_skel/vmlinux/vmlinux.h
++++ b/tools/perf/util/bpf_skel/vmlinux/vmlinux.h
+@@ -15,6 +15,7 @@
+ 
+ typedef __u8 u8;
+ typedef __u32 u32;
++typedef __s32 s32;
+ typedef __u64 u64;
+ typedef __s64 s64;
+ 
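
Changing stack_id from u32 to s32 (and adding the s32 typedef to the minimal vmlinux.h) is presumably because BPF stack-id lookups return a negative error code on failure, and a negative value stored in an unsigned field can no longer be caught with a < 0 check. A small standalone illustration; the -14 error value is just an example:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int err = -14;                     /* e.g. an -EFAULT style error code */
	uint32_t as_u32 = (uint32_t)err;   /* wraps to 4294967282 */
	int32_t as_s32 = (int32_t)err;     /* stays -14 */

	/* the unsigned comparison is always false (compilers may even warn) */
	printf("u32 negative? %d  s32 negative? %d\n",
	       (int)(as_u32 < 0), (int)(as_s32 < 0));
	return 0;
}
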
+diff --git a/tools/perf/util/dwarf-aux.c b/tools/perf/util/dwarf-aux.c
+index 44ef968a7ad331..1b0e59f4d8e936 100644
+--- a/tools/perf/util/dwarf-aux.c
++++ b/tools/perf/util/dwarf-aux.c
+@@ -1444,7 +1444,7 @@ static int __die_find_var_reg_cb(Dwarf_Die *die_mem, void *arg)
+ 
+ 	while ((off = dwarf_getlocations(&attr, off, &base, &start, &end, &ops, &nops)) > 0) {
+ 		/* Assuming the location list is sorted by address */
+-		if (end < data->pc)
++		if (end <= data->pc)
+ 			continue;
+ 		if (start > data->pc)
+ 			break;
+@@ -1598,6 +1598,9 @@ static int __die_collect_vars_cb(Dwarf_Die *die_mem, void *arg)
+ 	if (dwarf_getlocations(&attr, 0, &base, &start, &end, &ops, &nops) <= 0)
+ 		return DIE_FIND_CB_SIBLING;
+ 
++	if (!check_allowed_ops(ops, nops))
++		return DIE_FIND_CB_SIBLING;
++
+ 	if (die_get_real_type(die_mem, &type_die) == NULL)
+ 		return DIE_FIND_CB_SIBLING;
+ 
+@@ -1974,8 +1977,15 @@ static int __die_find_member_offset_cb(Dwarf_Die *die_mem, void *arg)
+ 		return DIE_FIND_CB_SIBLING;
+ 
+ 	/* Unions might not have location */
+-	if (die_get_data_member_location(die_mem, &loc) < 0)
+-		loc = 0;
++	if (die_get_data_member_location(die_mem, &loc) < 0) {
++		Dwarf_Attribute attr;
++
++		if (dwarf_attr_integrate(die_mem, DW_AT_data_bit_offset, &attr) &&
++		    dwarf_formudata(&attr, &loc) == 0)
++			loc /= 8;
++		else
++			loc = 0;
++	}
+ 
+ 	if (offset == loc)
+ 		return DIE_FIND_CB_END;
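
Both relaxed comparisons above (end < pc becoming end <= pc) treat the ranges returned by dwarf_getlocations() as half-open intervals [start, end), so an address equal to end is already outside the range; the union hunk additionally falls back to DW_AT_data_bit_offset, converted from bits to bytes, when no byte offset is recorded. A plain C sketch of the half-open containment test:

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long long u64;

static bool range_contains(u64 start, u64 end, u64 pc)
{
	return start <= pc && pc < end;   /* half-open: [start, end) */
}

int main(void)
{
	/* pc == end falls outside the range, so such a range must be skipped */
	printf("%d %d\n", range_contains(0x100, 0x140, 0x13f),
	       range_contains(0x100, 0x140, 0x140));   /* prints: 1 0 */
	return 0;
}
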
+diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
+index 6dda47bb774faa..1f1e1063efe378 100644
+--- a/tools/perf/util/mem-events.c
++++ b/tools/perf/util/mem-events.c
+@@ -28,6 +28,8 @@ struct perf_mem_event perf_mem_events[PERF_MEM_EVENTS__MAX] = {
+ };
+ #undef E
+ 
++bool perf_mem_record[PERF_MEM_EVENTS__MAX] = { 0 };
++
+ static char mem_loads_name[100];
+ static char mem_stores_name[100];
+ 
+@@ -162,7 +164,7 @@ int perf_pmu__mem_events_parse(struct perf_pmu *pmu, const char *str)
+ 				continue;
+ 
+ 			if (strstr(e->tag, tok))
+-				e->record = found = true;
++				perf_mem_record[j] = found = true;
+ 		}
+ 
+ 		tok = strtok_r(NULL, ",", &saveptr);
+@@ -191,7 +193,7 @@ static bool perf_pmu__mem_events_supported(const char *mnt, struct perf_pmu *pmu
+ 	return !stat(path, &st);
+ }
+ 
+-int perf_pmu__mem_events_init(struct perf_pmu *pmu)
++static int __perf_pmu__mem_events_init(struct perf_pmu *pmu)
+ {
+ 	const char *mnt = sysfs__mount();
+ 	bool found = false;
+@@ -218,6 +220,18 @@ int perf_pmu__mem_events_init(struct perf_pmu *pmu)
+ 	return found ? 0 : -ENOENT;
+ }
+ 
++int perf_pmu__mem_events_init(void)
++{
++	struct perf_pmu *pmu = NULL;
++
++	while ((pmu = perf_pmus__scan_mem(pmu)) != NULL) {
++		if (__perf_pmu__mem_events_init(pmu))
++			return -ENOENT;
++	}
++
++	return 0;
++}
++
+ void perf_pmu__mem_events_list(struct perf_pmu *pmu)
+ {
+ 	int j;
+@@ -247,7 +261,7 @@ int perf_mem_events__record_args(const char **rec_argv, int *argv_nr)
+ 		for (int j = 0; j < PERF_MEM_EVENTS__MAX; j++) {
+ 			e = perf_pmu__mem_events_ptr(pmu, j);
+ 
+-			if (!e->record)
++			if (!perf_mem_record[j])
+ 				continue;
+ 
+ 			if (!e->supported) {
+diff --git a/tools/perf/util/mem-events.h b/tools/perf/util/mem-events.h
+index ca31014d7934f4..8dc27db9fd52f4 100644
+--- a/tools/perf/util/mem-events.h
++++ b/tools/perf/util/mem-events.h
+@@ -6,7 +6,6 @@
+ #include <linux/types.h>
+ 
+ struct perf_mem_event {
+-	bool		record;
+ 	bool		supported;
+ 	bool		ldlat;
+ 	u32		aux_event;
+@@ -28,9 +27,10 @@ struct perf_pmu;
+ 
+ extern unsigned int perf_mem_events__loads_ldlat;
+ extern struct perf_mem_event perf_mem_events[PERF_MEM_EVENTS__MAX];
++extern bool perf_mem_record[PERF_MEM_EVENTS__MAX];
+ 
+ int perf_pmu__mem_events_parse(struct perf_pmu *pmu, const char *str);
+-int perf_pmu__mem_events_init(struct perf_pmu *pmu);
++int perf_pmu__mem_events_init(void);
+ 
+ struct perf_mem_event *perf_pmu__mem_events_ptr(struct perf_pmu *pmu, int i);
+ struct perf_pmu *perf_mem_events_find_pmu(void);
+diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
+index a10343b9dcd419..1457cd5288abc8 100644
+--- a/tools/perf/util/session.c
++++ b/tools/perf/util/session.c
+@@ -1511,6 +1511,9 @@ static int deliver_sample_group(struct evlist *evlist,
+ 	int ret = -EINVAL;
+ 	struct sample_read_value *v = sample->read.group.values;
+ 
++	if (tool->dont_split_sample_group)
++		return deliver_sample_value(evlist, tool, event, sample, v, machine);
++
+ 	sample_read_group__for_each(v, sample->read.group.nr, read_format) {
+ 		ret = deliver_sample_value(evlist, tool, event, sample, v,
+ 					   machine);
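
With the new dont_split_sample_group flag (added to struct perf_tool later in this patch), a tool can ask for a single callback carrying the group leader's value instead of one callback per member of a PERF_FORMAT_GROUP read. A minimal model with local stand-in types; in perf itself the flag would be set on the tool before session processing:

#include <stdbool.h>
#include <stdio.h>

struct tool { bool dont_split_sample_group; };

static int deliver_value(int v) { printf("deliver %d\n", v); return 0; }

static int deliver_group(const struct tool *tool, const int *values, int nr)
{
	int ret = -1;

	if (tool->dont_split_sample_group)
		return deliver_value(values[0]);   /* leader only */

	for (int i = 0; i < nr; i++)
		ret = deliver_value(values[i]);
	return ret;
}

int main(void)
{
	int vals[3] = { 10, 20, 30 };
	struct tool t = { .dont_split_sample_group = true };

	return deliver_group(&t, vals, 3);
}
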
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index 186305fd2d0efa..ac79ac4ccbe022 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -1226,7 +1226,8 @@ static void print_metric_headers(struct perf_stat_config *config,
+ 
+ 	/* Print metrics headers only */
+ 	evlist__for_each_entry(evlist, counter) {
+-		if (config->aggr_mode != AGGR_NONE && counter->metric_leader != counter)
++		if (!config->iostat_run &&
++		    config->aggr_mode != AGGR_NONE && counter->metric_leader != counter)
+ 			continue;
+ 
+ 		os.evsel = counter;
+diff --git a/tools/perf/util/time-utils.c b/tools/perf/util/time-utils.c
+index 30244392168163..1b91ccd4d52348 100644
+--- a/tools/perf/util/time-utils.c
++++ b/tools/perf/util/time-utils.c
+@@ -20,7 +20,7 @@ int parse_nsec_time(const char *str, u64 *ptime)
+ 	u64 time_sec, time_nsec;
+ 	char *end;
+ 
+-	time_sec = strtoul(str, &end, 10);
++	time_sec = strtoull(str, &end, 10);
+ 	if (*end != '.' && *end != '\0')
+ 		return -1;
+ 
+@@ -38,7 +38,7 @@ int parse_nsec_time(const char *str, u64 *ptime)
+ 		for (i = strlen(nsec_buf); i < 9; i++)
+ 			nsec_buf[i] = '0';
+ 
+-		time_nsec = strtoul(nsec_buf, &end, 10);
++		time_nsec = strtoull(nsec_buf, &end, 10);
+ 		if (*end != '\0')
+ 			return -1;
+ 	} else
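
strtoul() returns an unsigned long, which is only 32 bits on 32-bit targets, so large second or nanosecond strings would be truncated before landing in the u64 result; strtoull() keeps the full 64-bit range everywhere. A quick standalone check:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *str = "5000000000";    /* larger than UINT32_MAX */
	unsigned long long secs = strtoull(str, NULL, 10);

	printf("sizeof(long)=%zu sizeof(long long)=%zu secs=%llu\n",
	       sizeof(unsigned long), sizeof(unsigned long long), secs);
	return 0;
}
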
+diff --git a/tools/perf/util/tool.h b/tools/perf/util/tool.h
+index c957fb849ac633..62bbc9cec151bb 100644
+--- a/tools/perf/util/tool.h
++++ b/tools/perf/util/tool.h
+@@ -85,6 +85,7 @@ struct perf_tool {
+ 	bool		namespace_events;
+ 	bool		cgroup_events;
+ 	bool		no_warn;
++	bool		dont_split_sample_group;
+ 	enum show_feature_header show_feat_hdr;
+ };
+ 
+diff --git a/tools/power/cpupower/lib/powercap.c b/tools/power/cpupower/lib/powercap.c
+index a7a59c6bacda81..94a0c69e55ef5e 100644
+--- a/tools/power/cpupower/lib/powercap.c
++++ b/tools/power/cpupower/lib/powercap.c
+@@ -77,6 +77,14 @@ int powercap_get_enabled(int *mode)
+ 	return sysfs_get_enabled(path, mode);
+ }
+ 
++/*
++ * TODO: implement function. Returns dummy 0 for now.
++ */
++int powercap_set_enabled(int mode)
++{
++	return 0;
++}
++
+ /*
+  * Hardcoded, because rapl is the only powercap implementation
+ - * this needs to get more generic if more powercap implementations
+diff --git a/tools/testing/selftests/arm64/signal/Makefile b/tools/testing/selftests/arm64/signal/Makefile
+index 8f5febaf1a9a25..edb3613513b8a8 100644
+--- a/tools/testing/selftests/arm64/signal/Makefile
++++ b/tools/testing/selftests/arm64/signal/Makefile
+@@ -23,7 +23,7 @@ $(TEST_GEN_PROGS): $(PROGS)
+ # Common test-unit targets to build common-layout test-cases executables
+ # Needs secondary expansion to properly include the testcase c-file in pre-reqs
+ COMMON_SOURCES := test_signals.c test_signals_utils.c testcases/testcases.c \
+-	signals.S
++	signals.S sve_helpers.c
+ COMMON_HEADERS := test_signals.h test_signals_utils.h testcases/testcases.h
+ 
+ .SECONDEXPANSION:
+diff --git a/tools/testing/selftests/arm64/signal/sve_helpers.c b/tools/testing/selftests/arm64/signal/sve_helpers.c
+new file mode 100644
+index 00000000000000..0acc121af3062a
+--- /dev/null
++++ b/tools/testing/selftests/arm64/signal/sve_helpers.c
+@@ -0,0 +1,56 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright (C) 2024 ARM Limited
++ *
++ * Common helper functions for SVE and SME functionality.
++ */
++
++#include <stdbool.h>
++#include <kselftest.h>
++#include <asm/sigcontext.h>
++#include <sys/prctl.h>
++
++unsigned int vls[SVE_VQ_MAX];
++unsigned int nvls;
++
++int sve_fill_vls(bool use_sme, int min_vls)
++{
++	int vq, vl;
++	int pr_set_vl = use_sme ? PR_SME_SET_VL : PR_SVE_SET_VL;
++	int len_mask = use_sme ? PR_SME_VL_LEN_MASK : PR_SVE_VL_LEN_MASK;
++
++	/*
++	 * Enumerate up to SVE_VQ_MAX vector lengths
++	 */
++	for (vq = SVE_VQ_MAX; vq > 0; --vq) {
++		vl = prctl(pr_set_vl, vq * 16);
++		if (vl == -1)
++			return KSFT_FAIL;
++
++		vl &= len_mask;
++
++		/*
++		 * Unlike SVE, SME does not require the minimum vector length
++		 * to be implemented, or the VLs to be consecutive, so any call
++		 * to the prctl might return the single implemented VL, which
++		 * might be larger than 16. So to avoid this loop never
++	 * terminating, bail out here when we find a higher VL than
++		 * we asked for.
++		 * See the ARM ARM, DDI 0487K.a, B1.4.2: I_QQRNR and I_NWYBP.
++		 */
++		if (vq < sve_vq_from_vl(vl))
++			break;
++
++		/* Skip missing VLs */
++		vq = sve_vq_from_vl(vl);
++
++		vls[nvls++] = vl;
++	}
++
++	if (nvls < min_vls) {
++		fprintf(stderr, "Only %d VL supported\n", nvls);
++		return KSFT_SKIP;
++	}
++
++	return KSFT_PASS;
++}
+diff --git a/tools/testing/selftests/arm64/signal/sve_helpers.h b/tools/testing/selftests/arm64/signal/sve_helpers.h
+new file mode 100644
+index 00000000000000..50948ce471cc62
+--- /dev/null
++++ b/tools/testing/selftests/arm64/signal/sve_helpers.h
+@@ -0,0 +1,21 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Copyright (C) 2024 ARM Limited
++ *
++ * Common helper functions for SVE and SME functionality.
++ */
++
++#ifndef __SVE_HELPERS_H__
++#define __SVE_HELPERS_H__
++
++#include <stdbool.h>
++
++#define VLS_USE_SVE	false
++#define VLS_USE_SME	true
++
++extern unsigned int vls[];
++extern unsigned int nvls;
++
++int sve_fill_vls(bool use_sme, int min_vls);
++
++#endif
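
sve_fill_vls() reports its outcome with kselftest codes, where KSFT_PASS is 0, so callers treat a zero return as success, propagate KSFT_SKIP into the test descriptor, and fail otherwise - the same mapping every converted test case below uses. A sketch of that caller shape (struct tdescr comes from the existing signal-test harness, so this is not a standalone program):

static bool example_get_vls(struct tdescr *td)
{
	int res = sve_fill_vls(VLS_USE_SVE, 1);

	if (!res)                  /* KSFT_PASS */
		return true;

	if (res == KSFT_SKIP)      /* not enough VLs: skip rather than fail */
		td->result = KSFT_SKIP;

	return false;
}
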
+diff --git a/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c b/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c
+index ebd5815b54bbaa..dfd6a2badf9fb3 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c
++++ b/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c
+@@ -6,44 +6,28 @@
+  * handler, this is not supported and is expected to segfault.
+  */
+ 
++#include <kselftest.h>
+ #include <signal.h>
+ #include <ucontext.h>
+ #include <sys/prctl.h>
+ 
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+ 
+ struct fake_sigframe sf;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+ 
+ static bool sme_get_vls(struct tdescr *td)
+ {
+-	int vq, vl;
++	int res = sve_fill_vls(VLS_USE_SME, 2);
+ 
+-	/*
+-	 * Enumerate up to SVE_VQ_MAX vector lengths
+-	 */
+-	for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+-		vl = prctl(PR_SVE_SET_VL, vq * 16);
+-		if (vl == -1)
+-			return false;
++	if (!res)
++		return true;
+ 
+-		vl &= PR_SME_VL_LEN_MASK;
++	if (res == KSFT_SKIP)
++		td->result = KSFT_SKIP;
+ 
+-		/* Skip missing VLs */
+-		vq = sve_vq_from_vl(vl);
+-
+-		vls[nvls++] = vl;
+-	}
+-
+-	/* We need at least two VLs */
+-	if (nvls < 2) {
+-		fprintf(stderr, "Only %d VL supported\n", nvls);
+-		return false;
+-	}
+-
+-	return true;
++	return false;
+ }
+ 
+ static int fake_sigreturn_ssve_change_vl(struct tdescr *td,
+@@ -51,30 +35,30 @@ static int fake_sigreturn_ssve_change_vl(struct tdescr *td,
+ {
+ 	size_t resv_sz, offset;
+ 	struct _aarch64_ctx *head = GET_SF_RESV_HEAD(sf);
+-	struct sve_context *sve;
++	struct za_context *za;
+ 
+ 	/* Get a signal context with a SME ZA frame in it */
+ 	if (!get_current_context(td, &sf.uc, sizeof(sf.uc)))
+ 		return 1;
+ 
+ 	resv_sz = GET_SF_RESV_SIZE(sf);
+-	head = get_header(head, SVE_MAGIC, resv_sz, &offset);
++	head = get_header(head, ZA_MAGIC, resv_sz, &offset);
+ 	if (!head) {
+-		fprintf(stderr, "No SVE context\n");
++		fprintf(stderr, "No ZA context\n");
+ 		return 1;
+ 	}
+ 
+-	if (head->size != sizeof(struct sve_context)) {
++	if (head->size != sizeof(struct za_context)) {
+ 		fprintf(stderr, "Register data present, aborting\n");
+ 		return 1;
+ 	}
+ 
+-	sve = (struct sve_context *)head;
++	za = (struct za_context *)head;
+ 
+ 	/* No changes are supported; init left us at minimum VL so go to max */
+ 	fprintf(stderr, "Attempting to change VL from %d to %d\n",
+-		sve->vl, vls[0]);
+-	sve->vl = vls[0];
++		za->vl, vls[0]);
++	za->vl = vls[0];
+ 
+ 	fake_sigreturn(&sf, sizeof(sf), 0);
+ 
+diff --git a/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sve_change_vl.c b/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sve_change_vl.c
+index e2a452190511ff..e1ccf8f85a70c8 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sve_change_vl.c
++++ b/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sve_change_vl.c
+@@ -12,40 +12,22 @@
+ #include <sys/prctl.h>
+ 
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+ 
+ struct fake_sigframe sf;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+ 
+ static bool sve_get_vls(struct tdescr *td)
+ {
+-	int vq, vl;
++	int res = sve_fill_vls(VLS_USE_SVE, 2);
+ 
+-	/*
+-	 * Enumerate up to SVE_VQ_MAX vector lengths
+-	 */
+-	for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+-		vl = prctl(PR_SVE_SET_VL, vq * 16);
+-		if (vl == -1)
+-			return false;
++	if (!res)
++		return true;
+ 
+-		vl &= PR_SVE_VL_LEN_MASK;
+-
+-		/* Skip missing VLs */
+-		vq = sve_vq_from_vl(vl);
+-
+-		vls[nvls++] = vl;
+-	}
+-
+-	/* We need at least two VLs */
+-	if (nvls < 2) {
+-		fprintf(stderr, "Only %d VL supported\n", nvls);
++	if (res == KSFT_SKIP)
+ 		td->result = KSFT_SKIP;
+-		return false;
+-	}
+ 
+-	return true;
++	return false;
+ }
+ 
+ static int fake_sigreturn_sve_change_vl(struct tdescr *td,
+diff --git a/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c b/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c
+index 3d37daafcff513..6dbe48cf8b09ed 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c
++++ b/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c
+@@ -6,51 +6,31 @@
+  * set up as expected.
+  */
+ 
++#include <kselftest.h>
+ #include <signal.h>
+ #include <ucontext.h>
+ #include <sys/prctl.h>
+ 
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+ 
+ static union {
+ 	ucontext_t uc;
+ 	char buf[1024 * 64];
+ } context;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+ 
+ static bool sme_get_vls(struct tdescr *td)
+ {
+-	int vq, vl;
++	int res = sve_fill_vls(VLS_USE_SME, 1);
+ 
+-	/*
+-	 * Enumerate up to SVE_VQ_MAX vector lengths
+-	 */
+-	for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+-		vl = prctl(PR_SME_SET_VL, vq * 16);
+-		if (vl == -1)
+-			return false;
+-
+-		vl &= PR_SME_VL_LEN_MASK;
+-
+-		/* Did we find the lowest supported VL? */
+-		if (vq < sve_vq_from_vl(vl))
+-			break;
++	if (!res)
++		return true;
+ 
+-		/* Skip missing VLs */
+-		vq = sve_vq_from_vl(vl);
+-
+-		vls[nvls++] = vl;
+-	}
+-
+-	/* We need at least one VL */
+-	if (nvls < 1) {
+-		fprintf(stderr, "Only %d VL supported\n", nvls);
+-		return false;
+-	}
++	if (res == KSFT_SKIP)
++		td->result = KSFT_SKIP;
+ 
+-	return true;
++	return false;
+ }
+ 
+ static void setup_ssve_regs(void)
+diff --git a/tools/testing/selftests/arm64/signal/testcases/ssve_za_regs.c b/tools/testing/selftests/arm64/signal/testcases/ssve_za_regs.c
+index 9dc5f128bbc0d5..5557e116e97363 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/ssve_za_regs.c
++++ b/tools/testing/selftests/arm64/signal/testcases/ssve_za_regs.c
+@@ -6,51 +6,31 @@
+  * signal frames is set up as expected when enabled simultaneously.
+  */
+ 
++#include <kselftest.h>
+ #include <signal.h>
+ #include <ucontext.h>
+ #include <sys/prctl.h>
+ 
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+ 
+ static union {
+ 	ucontext_t uc;
+ 	char buf[1024 * 128];
+ } context;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+ 
+ static bool sme_get_vls(struct tdescr *td)
+ {
+-	int vq, vl;
++	int res = sve_fill_vls(VLS_USE_SME, 1);
+ 
+-	/*
+-	 * Enumerate up to SVE_VQ_MAX vector lengths
+-	 */
+-	for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+-		vl = prctl(PR_SME_SET_VL, vq * 16);
+-		if (vl == -1)
+-			return false;
+-
+-		vl &= PR_SME_VL_LEN_MASK;
+-
+-		/* Did we find the lowest supported VL? */
+-		if (vq < sve_vq_from_vl(vl))
+-			break;
++	if (!res)
++		return true;
+ 
+-		/* Skip missing VLs */
+-		vq = sve_vq_from_vl(vl);
+-
+-		vls[nvls++] = vl;
+-	}
+-
+-	/* We need at least one VL */
+-	if (nvls < 1) {
+-		fprintf(stderr, "Only %d VL supported\n", nvls);
+-		return false;
+-	}
++	if (res == KSFT_SKIP)
++		td->result = KSFT_SKIP;
+ 
+-	return true;
++	return false;
+ }
+ 
+ static void setup_regs(void)
+diff --git a/tools/testing/selftests/arm64/signal/testcases/sve_regs.c b/tools/testing/selftests/arm64/signal/testcases/sve_regs.c
+index 8b16eabbb7697e..8143eb1c58c187 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/sve_regs.c
++++ b/tools/testing/selftests/arm64/signal/testcases/sve_regs.c
+@@ -6,47 +6,31 @@
+  * expected.
+  */
+ 
++#include <kselftest.h>
+ #include <signal.h>
+ #include <ucontext.h>
+ #include <sys/prctl.h>
+ 
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+ 
+ static union {
+ 	ucontext_t uc;
+ 	char buf[1024 * 64];
+ } context;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+ 
+ static bool sve_get_vls(struct tdescr *td)
+ {
+-	int vq, vl;
++	int res = sve_fill_vls(VLS_USE_SVE, 1);
+ 
+-	/*
+-	 * Enumerate up to SVE_VQ_MAX vector lengths
+-	 */
+-	for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+-		vl = prctl(PR_SVE_SET_VL, vq * 16);
+-		if (vl == -1)
+-			return false;
+-
+-		vl &= PR_SVE_VL_LEN_MASK;
+-
+-		/* Skip missing VLs */
+-		vq = sve_vq_from_vl(vl);
++	if (!res)
++		return true;
+ 
+-		vls[nvls++] = vl;
+-	}
+-
+-	/* We need at least one VL */
+-	if (nvls < 1) {
+-		fprintf(stderr, "Only %d VL supported\n", nvls);
+-		return false;
+-	}
++	if (res == KSFT_SKIP)
++		td->result = KSFT_SKIP;
+ 
+-	return true;
++	return false;
+ }
+ 
+ static void setup_sve_regs(void)
+diff --git a/tools/testing/selftests/arm64/signal/testcases/za_no_regs.c b/tools/testing/selftests/arm64/signal/testcases/za_no_regs.c
+index 4d6f94b6178f36..ce26e9c2fa5e34 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/za_no_regs.c
++++ b/tools/testing/selftests/arm64/signal/testcases/za_no_regs.c
+@@ -6,47 +6,31 @@
+  * expected.
+  */
+ 
++#include <kselftest.h>
+ #include <signal.h>
+ #include <ucontext.h>
+ #include <sys/prctl.h>
+ 
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+ 
+ static union {
+ 	ucontext_t uc;
+ 	char buf[1024 * 128];
+ } context;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+ 
+ static bool sme_get_vls(struct tdescr *td)
+ {
+-	int vq, vl;
++	int res = sve_fill_vls(VLS_USE_SME, 1);
+ 
+-	/*
+-	 * Enumerate up to SME_VQ_MAX vector lengths
+-	 */
+-	for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+-		vl = prctl(PR_SME_SET_VL, vq * 16);
+-		if (vl == -1)
+-			return false;
+-
+-		vl &= PR_SME_VL_LEN_MASK;
+-
+-		/* Skip missing VLs */
+-		vq = sve_vq_from_vl(vl);
++	if (!res)
++		return true;
+ 
+-		vls[nvls++] = vl;
+-	}
+-
+-	/* We need at least one VL */
+-	if (nvls < 1) {
+-		fprintf(stderr, "Only %d VL supported\n", nvls);
+-		return false;
+-	}
++	if (res == KSFT_SKIP)
++		td->result = KSFT_SKIP;
+ 
+-	return true;
++	return false;
+ }
+ 
+ static int do_one_sme_vl(struct tdescr *td, siginfo_t *si, ucontext_t *uc,
+diff --git a/tools/testing/selftests/arm64/signal/testcases/za_regs.c b/tools/testing/selftests/arm64/signal/testcases/za_regs.c
+index 174ad665669647..b9e13f27f1f9aa 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/za_regs.c
++++ b/tools/testing/selftests/arm64/signal/testcases/za_regs.c
+@@ -6,51 +6,31 @@
+  * expected.
+  */
+ 
++#include <kselftest.h>
+ #include <signal.h>
+ #include <ucontext.h>
+ #include <sys/prctl.h>
+ 
+ #include "test_signals_utils.h"
++#include "sve_helpers.h"
+ #include "testcases.h"
+ 
+ static union {
+ 	ucontext_t uc;
+ 	char buf[1024 * 128];
+ } context;
+-static unsigned int vls[SVE_VQ_MAX];
+-unsigned int nvls = 0;
+ 
+ static bool sme_get_vls(struct tdescr *td)
+ {
+-	int vq, vl;
++	int res = sve_fill_vls(VLS_USE_SME, 1);
+ 
+-	/*
+-	 * Enumerate up to SME_VQ_MAX vector lengths
+-	 */
+-	for (vq = SVE_VQ_MAX; vq > 0; --vq) {
+-		vl = prctl(PR_SME_SET_VL, vq * 16);
+-		if (vl == -1)
+-			return false;
+-
+-		vl &= PR_SME_VL_LEN_MASK;
+-
+-		/* Did we find the lowest supported VL? */
+-		if (vq < sve_vq_from_vl(vl))
+-			break;
++	if (!res)
++		return true;
+ 
+-		/* Skip missing VLs */
+-		vq = sve_vq_from_vl(vl);
+-
+-		vls[nvls++] = vl;
+-	}
+-
+-	/* We need at least one VL */
+-	if (nvls < 1) {
+-		fprintf(stderr, "Only %d VL supported\n", nvls);
+-		return false;
+-	}
++	if (res == KSFT_SKIP)
++		td->result = KSFT_SKIP;
+ 
+-	return true;
++	return false;
+ }
+ 
+ static void setup_za_regs(void)
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index dd49c1d23a604c..88e8a316e7686a 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -427,23 +427,24 @@ $(OUTPUT)/cgroup_getset_retval_hooks.o: cgroup_getset_retval_hooks.h
+ # $1 - input .c file
+ # $2 - output .o file
+ # $3 - CFLAGS
++# $4 - binary name
+ define CLANG_BPF_BUILD_RULE
+-	$(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
++	$(call msg,CLNG-BPF,$4,$2)
+ 	$(Q)$(CLANG) $3 -O2 --target=bpf -c $1 -mcpu=v3 -o $2
+ endef
+ # Similar to CLANG_BPF_BUILD_RULE, but with disabled alu32
+ define CLANG_NOALU32_BPF_BUILD_RULE
+-	$(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
++	$(call msg,CLNG-BPF,$4,$2)
+ 	$(Q)$(CLANG) $3 -O2 --target=bpf -c $1 -mcpu=v2 -o $2
+ endef
+ # Similar to CLANG_BPF_BUILD_RULE, but with cpu-v4
+ define CLANG_CPUV4_BPF_BUILD_RULE
+-	$(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
++	$(call msg,CLNG-BPF,$4,$2)
+ 	$(Q)$(CLANG) $3 -O2 --target=bpf -c $1 -mcpu=v4 -o $2
+ endef
+ # Build BPF object using GCC
+ define GCC_BPF_BUILD_RULE
+-	$(call msg,GCC-BPF,$(TRUNNER_BINARY),$2)
++	$(call msg,GCC-BPF,$4,$2)
+ 	$(Q)$(BPF_GCC) $3 -DBPF_NO_PRESERVE_ACCESS_INDEX -Wno-attributes -O2 -c $1 -o $2
+ endef
+ 
+@@ -535,7 +536,7 @@ $(TRUNNER_BPF_OBJS): $(TRUNNER_OUTPUT)/%.bpf.o:				\
+ 	$$(call $(TRUNNER_BPF_BUILD_RULE),$$<,$$@,			\
+ 					  $(TRUNNER_BPF_CFLAGS)         \
+ 					  $$($$<-CFLAGS)		\
+-					  $$($$<-$2-CFLAGS))
++					  $$($$<-$2-CFLAGS),$(TRUNNER_BINARY))
+ 
+ $(TRUNNER_BPF_SKELS): %.skel.h: %.bpf.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
+ 	$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
+@@ -762,6 +763,8 @@ $(OUTPUT)/veristat: $(OUTPUT)/veristat.o
+ 	$(call msg,BINARY,,$@)
+ 	$(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@
+ 
++# Linking uprobe_multi can fail due to relocation overflows on mips.
++$(OUTPUT)/uprobe_multi: CFLAGS += $(if $(filter mips, $(ARCH)),-mxgot)
+ $(OUTPUT)/uprobe_multi: uprobe_multi.c
+ 	$(call msg,BINARY,,$@)
+ 	$(Q)$(CC) $(CFLAGS) -O0 $(LDFLAGS) $^ $(LDLIBS) -o $@
+diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c
+index 627b74ae041b52..90dc3aca32bd8f 100644
+--- a/tools/testing/selftests/bpf/bench.c
++++ b/tools/testing/selftests/bpf/bench.c
+@@ -10,6 +10,7 @@
+ #include <sys/sysinfo.h>
+ #include <signal.h>
+ #include "bench.h"
++#include "bpf_util.h"
+ #include "testing_helpers.h"
+ 
+ struct env env = {
+diff --git a/tools/testing/selftests/bpf/bench.h b/tools/testing/selftests/bpf/bench.h
+index 68180d8f8558ec..005c401b3e2275 100644
+--- a/tools/testing/selftests/bpf/bench.h
++++ b/tools/testing/selftests/bpf/bench.h
+@@ -10,6 +10,7 @@
+ #include <math.h>
+ #include <time.h>
+ #include <sys/syscall.h>
++#include <limits.h>
+ 
+ struct cpu_set {
+ 	bool *cpus;
+diff --git a/tools/testing/selftests/bpf/map_tests/sk_storage_map.c b/tools/testing/selftests/bpf/map_tests/sk_storage_map.c
+index 18405c3b7cee9a..af10c309359a77 100644
+--- a/tools/testing/selftests/bpf/map_tests/sk_storage_map.c
++++ b/tools/testing/selftests/bpf/map_tests/sk_storage_map.c
+@@ -412,7 +412,7 @@ static void test_sk_storage_map_stress_free(void)
+ 		rlim_new.rlim_max = rlim_new.rlim_cur + 128;
+ 		err = setrlimit(RLIMIT_NOFILE, &rlim_new);
+ 		CHECK(err, "setrlimit(RLIMIT_NOFILE)", "rlim_new:%lu errno:%d",
+-		      rlim_new.rlim_cur, errno);
++		      (unsigned long) rlim_new.rlim_cur, errno);
+ 	}
+ 
+ 	err = do_sk_storage_map_stress_free();
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt.c
+index b52ff8ce34db82..16bed9dd8e6a30 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt.c
+@@ -95,7 +95,7 @@ static unsigned short get_local_port(int fd)
+ 	struct sockaddr_in6 addr;
+ 	socklen_t addrlen = sizeof(addr);
+ 
+-	if (!getsockname(fd, &addr, &addrlen))
++	if (!getsockname(fd, (struct sockaddr *)&addr, &addrlen))
+ 		return ntohs(addr.sin6_port);
+ 
+ 	return 0;
+diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+index 47f42e6801056b..26019313e1fc20 100644
+--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
++++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+@@ -1,4 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0
++#define _GNU_SOURCE
+ #include <test_progs.h>
+ #include "progs/core_reloc_types.h"
+ #include "bpf_testmod/bpf_testmod.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c b/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
+index b1a3a49a822a7b..42bd07f7218dc3 100644
+--- a/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
++++ b/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
+@@ -4,7 +4,6 @@
+ #include <sys/types.h>
+ #include <sys/socket.h>
+ #include <net/if.h>
+-#include <linux/in6.h>
+ #include <linux/if_alg.h>
+ 
+ #include "test_progs.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/decap_sanity.c b/tools/testing/selftests/bpf/prog_tests/decap_sanity.c
+index dcb9e5070cc3d9..d79f398ec6b7c1 100644
+--- a/tools/testing/selftests/bpf/prog_tests/decap_sanity.c
++++ b/tools/testing/selftests/bpf/prog_tests/decap_sanity.c
+@@ -4,7 +4,6 @@
+ #include <sys/types.h>
+ #include <sys/socket.h>
+ #include <net/if.h>
+-#include <linux/in6.h>
+ 
+ #include "test_progs.h"
+ #include "network_helpers.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/flow_dissector.c b/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
+index 9e5f38739104bf..3171047414a7dc 100644
+--- a/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
++++ b/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
++#define _GNU_SOURCE
+ #include <test_progs.h>
+ #include <network_helpers.h>
+-#include <error.h>
+ #include <linux/if_tun.h>
+ #include <sys/uio.h>
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/kfree_skb.c b/tools/testing/selftests/bpf/prog_tests/kfree_skb.c
+index c07991544a789e..34f8822fd2219c 100644
+--- a/tools/testing/selftests/bpf/prog_tests/kfree_skb.c
++++ b/tools/testing/selftests/bpf/prog_tests/kfree_skb.c
+@@ -1,4 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0
++#define _GNU_SOURCE
+ #include <test_progs.h>
+ #include <network_helpers.h>
+ #include "kfree_skb.skel.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/lwt_redirect.c b/tools/testing/selftests/bpf/prog_tests/lwt_redirect.c
+index 835a1d756c1662..b6e8d822e8e95f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/lwt_redirect.c
++++ b/tools/testing/selftests/bpf/prog_tests/lwt_redirect.c
+@@ -47,7 +47,6 @@
+ #include <linux/if_ether.h>
+ #include <linux/if_packet.h>
+ #include <linux/if_tun.h>
+-#include <linux/icmp.h>
+ #include <arpa/inet.h>
+ #include <unistd.h>
+ #include <errno.h>
+diff --git a/tools/testing/selftests/bpf/prog_tests/lwt_reroute.c b/tools/testing/selftests/bpf/prog_tests/lwt_reroute.c
+index 03825d2b45a8b7..6c50c0f63f4365 100644
+--- a/tools/testing/selftests/bpf/prog_tests/lwt_reroute.c
++++ b/tools/testing/selftests/bpf/prog_tests/lwt_reroute.c
+@@ -49,6 +49,7 @@
+  *  is not crashed, it is considered successful.
+  */
+ #define NETNS "ns_lwt_reroute"
++#include <netinet/in.h>
+ #include "lwt_helpers.h"
+ #include "network_helpers.h"
+ #include <linux/net_tstamp.h>
+diff --git a/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c b/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
+index e72d75d6baa71e..c29787e092d66a 100644
+--- a/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
++++ b/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
+@@ -11,7 +11,7 @@
+ #include <sched.h>
+ #include <sys/wait.h>
+ #include <sys/mount.h>
+-#include <sys/fcntl.h>
++#include <fcntl.h>
+ #include "network_helpers.h"
+ 
+ #define STACK_SIZE (1024 * 1024)
+diff --git a/tools/testing/selftests/bpf/prog_tests/parse_tcp_hdr_opt.c b/tools/testing/selftests/bpf/prog_tests/parse_tcp_hdr_opt.c
+index daa952711d8fdf..e9c07d561ded6d 100644
+--- a/tools/testing/selftests/bpf/prog_tests/parse_tcp_hdr_opt.c
++++ b/tools/testing/selftests/bpf/prog_tests/parse_tcp_hdr_opt.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ 
++#define _GNU_SOURCE
+ #include <test_progs.h>
+ #include <network_helpers.h>
+ #include "test_parse_tcp_hdr_opt.skel.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+index de2466547efe0f..a1ab0af004549b 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
++++ b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+@@ -18,7 +18,6 @@
+ #include <arpa/inet.h>
+ #include <assert.h>
+ #include <errno.h>
+-#include <error.h>
+ #include <fcntl.h>
+ #include <sched.h>
+ #include <stdio.h>
+diff --git a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+index b1073d36d77ac2..a80a83e0440e3f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
++++ b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+@@ -471,7 +471,7 @@ static int set_forwarding(bool enable)
+ 
+ static int __rcv_tstamp(int fd, const char *expected, size_t s, __u64 *tstamp)
+ {
+-	struct __kernel_timespec pkt_ts = {};
++	struct timespec pkt_ts = {};
+ 	char ctl[CMSG_SPACE(sizeof(pkt_ts))];
+ 	struct timespec now_ts;
+ 	struct msghdr msg = {};
+@@ -495,7 +495,7 @@ static int __rcv_tstamp(int fd, const char *expected, size_t s, __u64 *tstamp)
+ 
+ 	cmsg = CMSG_FIRSTHDR(&msg);
+ 	if (cmsg && cmsg->cmsg_level == SOL_SOCKET &&
+-	    cmsg->cmsg_type == SO_TIMESTAMPNS_NEW)
++	    cmsg->cmsg_type == SO_TIMESTAMPNS)
+ 		memcpy(&pkt_ts, CMSG_DATA(cmsg), sizeof(pkt_ts));
+ 
+ 	pkt_ns = pkt_ts.tv_sec * NSEC_PER_SEC + pkt_ts.tv_nsec;
+@@ -537,9 +537,9 @@ static int wait_netstamp_needed_key(void)
+ 	if (!ASSERT_GE(srv_fd, 0, "start_server"))
+ 		goto done;
+ 
+-	err = setsockopt(srv_fd, SOL_SOCKET, SO_TIMESTAMPNS_NEW,
++	err = setsockopt(srv_fd, SOL_SOCKET, SO_TIMESTAMPNS,
+ 			 &opt, sizeof(opt));
+-	if (!ASSERT_OK(err, "setsockopt(SO_TIMESTAMPNS_NEW)"))
++	if (!ASSERT_OK(err, "setsockopt(SO_TIMESTAMPNS)"))
+ 		goto done;
+ 
+ 	cli_fd = connect_to_fd(srv_fd, TIMEOUT_MILLIS);
+@@ -621,9 +621,9 @@ static void test_inet_dtime(int family, int type, const char *addr, __u16 port)
+ 		return;
+ 
+ 	/* Ensure the kernel puts the (rcv) timestamp for all skb */
+-	err = setsockopt(listen_fd, SOL_SOCKET, SO_TIMESTAMPNS_NEW,
++	err = setsockopt(listen_fd, SOL_SOCKET, SO_TIMESTAMPNS,
+ 			 &opt, sizeof(opt));
+-	if (!ASSERT_OK(err, "setsockopt(SO_TIMESTAMPNS_NEW)"))
++	if (!ASSERT_OK(err, "setsockopt(SO_TIMESTAMPNS)"))
+ 		goto done;
+ 
+ 	if (type == SOCK_STREAM) {
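
The test drops the *_NEW socket option and __kernel_timespec in favour of the classic SO_TIMESTAMPNS / struct timespec pair, presumably so it keeps building against libc headers that do not expose the newer UAPI names. A sketch of reading the receive timestamp that way; the helper name is illustrative and error handling is trimmed:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <time.h>

/* Enable beforehand with:
 *   int one = 1;
 *   setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPNS, &one, sizeof(one));
 */
long long rcv_tstamp_ns(int fd, char *buf, size_t len)
{
	struct timespec ts = { 0 };
	char ctl[CMSG_SPACE(sizeof(ts))];
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctl, .msg_controllen = sizeof(ctl),
	};
	struct cmsghdr *cmsg;

	if (recvmsg(fd, &msg, 0) < 0)
		return -1;

	cmsg = CMSG_FIRSTHDR(&msg);
	if (cmsg && cmsg->cmsg_level == SOL_SOCKET &&
	    cmsg->cmsg_type == SO_TIMESTAMPNS)
		memcpy(&ts, CMSG_DATA(cmsg), sizeof(ts));

	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}
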
+diff --git a/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c b/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c
+index f2b99d95d91607..c38784c1c066e6 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c
++++ b/tools/testing/selftests/bpf/prog_tests/tcp_rtt.c
+@@ -1,4 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0
++#define _GNU_SOURCE
+ #include <test_progs.h>
+ #include "cgroup_helpers.h"
+ #include "network_helpers.h"
+diff --git a/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c b/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c
+index e51721df14fc19..dfff6feac12c3c 100644
+--- a/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c
++++ b/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c
+@@ -4,6 +4,7 @@
+ #define _GNU_SOURCE
+ #include <linux/compiler.h>
+ #include <linux/ring_buffer.h>
++#include <linux/build_bug.h>
+ #include <pthread.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
+index fb2f5513e29e1c..20541a7cd8070d 100644
+--- a/tools/testing/selftests/bpf/progs/bpf_misc.h
++++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
+@@ -2,14 +2,17 @@
+ #ifndef __BPF_MISC_H__
+ #define __BPF_MISC_H__
+ 
++#define XSTR(s) STR(s)
++#define STR(s) #s
++
+ /* This set of attributes controls behavior of the
+  * test_loader.c:test_loader__run_subtests().
+  *
+  * The test_loader sequentially loads each program in a skeleton.
+  * Programs could be loaded in privileged and unprivileged modes.
+- * - __success, __failure, __msg imply privileged mode;
+- * - __success_unpriv, __failure_unpriv, __msg_unpriv imply
+- *   unprivileged mode.
++ * - __success, __failure, __msg, __regex imply privileged mode;
++ * - __success_unpriv, __failure_unpriv, __msg_unpriv, __regex_unpriv
++ *   imply unprivileged mode.
+  * If combination of privileged and unprivileged attributes is present
+  * both modes are used. If none are present privileged mode is implied.
+  *
+@@ -24,6 +27,12 @@
+  *                   Multiple __msg attributes could be specified.
+  * __msg_unpriv      Same as __msg but for unprivileged mode.
+  *
++ * __regex           Same as __msg, but using a regular expression.
++ * __regex_unpriv    Same as __msg_unpriv but using a regular expression.
++ * __xlated          Expect a line in a disassembly log after verifier applies rewrites.
++ *                   Multiple __xlated attributes could be specified.
++ * __xlated_unpriv   Same as __xlated but for unprivileged mode.
++ *
+  * __success         Expect program load success in privileged mode.
+  * __success_unpriv  Expect program load success in unprivileged mode.
+  *
+@@ -57,12 +66,20 @@
+  * __auxiliary         Annotated program is not a separate test, but used as auxiliary
+  *                     for some other test cases and should always be loaded.
+  * __auxiliary_unpriv  Same, but load program in unprivileged mode.
++ *
++ * __arch_*          Specify on which architecture the test case should be tested.
++ *                   Several __arch_* annotations could be specified at once.
++ *                   When a test case is not run on the current arch it is marked as skipped.
+  */
+-#define __msg(msg)		__attribute__((btf_decl_tag("comment:test_expect_msg=" msg)))
++#define __msg(msg)		__attribute__((btf_decl_tag("comment:test_expect_msg=" XSTR(__COUNTER__) "=" msg)))
++#define __regex(regex)		__attribute__((btf_decl_tag("comment:test_expect_regex=" XSTR(__COUNTER__) "=" regex)))
++#define __xlated(msg)		__attribute__((btf_decl_tag("comment:test_expect_xlated=" XSTR(__COUNTER__) "=" msg)))
+ #define __failure		__attribute__((btf_decl_tag("comment:test_expect_failure")))
+ #define __success		__attribute__((btf_decl_tag("comment:test_expect_success")))
+ #define __description(desc)	__attribute__((btf_decl_tag("comment:test_description=" desc)))
+-#define __msg_unpriv(msg)	__attribute__((btf_decl_tag("comment:test_expect_msg_unpriv=" msg)))
++#define __msg_unpriv(msg)	__attribute__((btf_decl_tag("comment:test_expect_msg_unpriv=" XSTR(__COUNTER__) "=" msg)))
++#define __regex_unpriv(regex)	__attribute__((btf_decl_tag("comment:test_expect_regex_unpriv=" XSTR(__COUNTER__) "=" regex)))
++#define __xlated_unpriv(msg)	__attribute__((btf_decl_tag("comment:test_expect_xlated_unpriv=" XSTR(__COUNTER__) "=" msg)))
+ #define __failure_unpriv	__attribute__((btf_decl_tag("comment:test_expect_failure_unpriv")))
+ #define __success_unpriv	__attribute__((btf_decl_tag("comment:test_expect_success_unpriv")))
+ #define __log_level(lvl)	__attribute__((btf_decl_tag("comment:test_log_level="#lvl)))
+@@ -72,6 +89,10 @@
+ #define __auxiliary		__attribute__((btf_decl_tag("comment:test_auxiliary")))
+ #define __auxiliary_unpriv	__attribute__((btf_decl_tag("comment:test_auxiliary_unpriv")))
+ #define __btf_path(path)	__attribute__((btf_decl_tag("comment:test_btf_path=" path)))
++#define __arch(arch)		__attribute__((btf_decl_tag("comment:test_arch=" arch)))
++#define __arch_x86_64		__arch("X86_64")
++#define __arch_arm64		__arch("ARM64")
++#define __arch_riscv64		__arch("RISCV64")
+ 
+ /* Convenience macro for use with 'asm volatile' blocks */
+ #define __naked __attribute__((naked))
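
Taken together, the new tags let a verifier test pin its expectations to one architecture and match against the post-verification (xlated) disassembly as well as the verifier log. A hypothetical example; the program body and the expected lines are made up purely to show how the annotations compose:

SEC("socket")
__description("xlated matching example")
__success
__arch_x86_64
__xlated("0: r0 = 0")
__xlated("1: exit")
__naked void xlated_matching_example(void)
{
	asm volatile ("r0 = 0; exit;" ::: __clobber_all);
}
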
+diff --git a/tools/testing/selftests/bpf/progs/cg_storage_multi.h b/tools/testing/selftests/bpf/progs/cg_storage_multi.h
+index a0778fe7857a14..41d59f0ee606c7 100644
+--- a/tools/testing/selftests/bpf/progs/cg_storage_multi.h
++++ b/tools/testing/selftests/bpf/progs/cg_storage_multi.h
+@@ -3,8 +3,6 @@
+ #ifndef __PROGS_CG_STORAGE_MULTI_H
+ #define __PROGS_CG_STORAGE_MULTI_H
+ 
+-#include <asm/types.h>
+-
+ struct cgroup_value {
+ 	__u32 egress_pkts;
+ 	__u32 ingress_pkts;
+diff --git a/tools/testing/selftests/bpf/progs/test_libbpf_get_fd_by_id_opts.c b/tools/testing/selftests/bpf/progs/test_libbpf_get_fd_by_id_opts.c
+index f5ac5f3e89196f..568816307f7125 100644
+--- a/tools/testing/selftests/bpf/progs/test_libbpf_get_fd_by_id_opts.c
++++ b/tools/testing/selftests/bpf/progs/test_libbpf_get_fd_by_id_opts.c
+@@ -31,6 +31,7 @@ int BPF_PROG(check_access, struct bpf_map *map, fmode_t fmode)
+ 
+ 	if (fmode & FMODE_WRITE)
+ 		return -EACCES;
++	barrier();
+ 
+ 	return 0;
+ }
+diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+index 85e48069c9e616..d4b99c3b4719b2 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
++++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+@@ -1213,10 +1213,10 @@ __success  __log_level(2)
+  * - once for path entry - label 2;
+  * - once for path entry - label 1 - label 2.
+  */
+-__msg("r1 = *(u64 *)(r10 -8)")
+-__msg("exit")
+-__msg("r1 = *(u64 *)(r10 -8)")
+-__msg("exit")
++__msg("8: (79) r1 = *(u64 *)(r10 -8)")
++__msg("9: (95) exit")
++__msg("from 2 to 7")
++__msg("8: safe")
+ __msg("processed 11 insns")
+ __flag(BPF_F_TEST_STATE_FREQ)
+ __naked void old_stack_misc_vs_cur_ctx_ptr(void)
+diff --git a/tools/testing/selftests/bpf/test_cpp.cpp b/tools/testing/selftests/bpf/test_cpp.cpp
+index dde0bb16e782e9..abc2a56ab26164 100644
+--- a/tools/testing/selftests/bpf/test_cpp.cpp
++++ b/tools/testing/selftests/bpf/test_cpp.cpp
+@@ -6,6 +6,10 @@
+ #include <bpf/libbpf.h>
+ #include <bpf/bpf.h>
+ #include <bpf/btf.h>
++
++#ifndef _Bool
++#define _Bool bool
++#endif
+ #include "test_core_extern.skel.h"
+ #include "struct_ops_module.skel.h"
+ 
+diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
+index 524c38e9cde48f..2150e9c9b53fb1 100644
+--- a/tools/testing/selftests/bpf/test_loader.c
++++ b/tools/testing/selftests/bpf/test_loader.c
+@@ -2,10 +2,12 @@
+ /* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+ #include <linux/capability.h>
+ #include <stdlib.h>
++#include <regex.h>
+ #include <test_progs.h>
+ #include <bpf/btf.h>
+ 
+ #include "autoconf_helper.h"
++#include "disasm_helpers.h"
+ #include "unpriv_helpers.h"
+ #include "cap_helpers.h"
+ 
+@@ -17,9 +19,13 @@
+ #define TEST_TAG_EXPECT_FAILURE "comment:test_expect_failure"
+ #define TEST_TAG_EXPECT_SUCCESS "comment:test_expect_success"
+ #define TEST_TAG_EXPECT_MSG_PFX "comment:test_expect_msg="
++#define TEST_TAG_EXPECT_REGEX_PFX "comment:test_expect_regex="
++#define TEST_TAG_EXPECT_XLATED_PFX "comment:test_expect_xlated="
+ #define TEST_TAG_EXPECT_FAILURE_UNPRIV "comment:test_expect_failure_unpriv"
+ #define TEST_TAG_EXPECT_SUCCESS_UNPRIV "comment:test_expect_success_unpriv"
+ #define TEST_TAG_EXPECT_MSG_PFX_UNPRIV "comment:test_expect_msg_unpriv="
++#define TEST_TAG_EXPECT_REGEX_PFX_UNPRIV "comment:test_expect_regex_unpriv="
++#define TEST_TAG_EXPECT_XLATED_PFX_UNPRIV "comment:test_expect_xlated_unpriv="
+ #define TEST_TAG_LOG_LEVEL_PFX "comment:test_log_level="
+ #define TEST_TAG_PROG_FLAGS_PFX "comment:test_prog_flags="
+ #define TEST_TAG_DESCRIPTION_PFX "comment:test_description="
+@@ -28,6 +34,7 @@
+ #define TEST_TAG_AUXILIARY "comment:test_auxiliary"
+ #define TEST_TAG_AUXILIARY_UNPRIV "comment:test_auxiliary_unpriv"
+ #define TEST_BTF_PATH "comment:test_btf_path="
++#define TEST_TAG_ARCH "comment:test_arch="
+ 
+ /* Warning: duplicated in bpf_misc.h */
+ #define POINTER_VALUE	0xcafe4all
+@@ -46,11 +53,22 @@ enum mode {
+ 	UNPRIV = 2
+ };
+ 
++struct expect_msg {
++	const char *substr; /* substring match */
++	const char *regex_str; /* regex-based match */
++	regex_t regex;
++};
++
++struct expected_msgs {
++	struct expect_msg *patterns;
++	size_t cnt;
++};
++
+ struct test_subspec {
+ 	char *name;
+ 	bool expect_failure;
+-	const char **expect_msgs;
+-	size_t expect_msg_cnt;
++	struct expected_msgs expect_msgs;
++	struct expected_msgs expect_xlated;
+ 	int retval;
+ 	bool execute;
+ };
+@@ -63,6 +81,7 @@ struct test_spec {
+ 	int log_level;
+ 	int prog_flags;
+ 	int mode_mask;
++	int arch_mask;
+ 	bool auxiliary;
+ 	bool valid;
+ };
+@@ -87,31 +106,64 @@ void test_loader_fini(struct test_loader *tester)
+ 	free(tester->log_buf);
+ }
+ 
++static void free_msgs(struct expected_msgs *msgs)
++{
++	int i;
++
++	for (i = 0; i < msgs->cnt; i++)
++		if (msgs->patterns[i].regex_str)
++			regfree(&msgs->patterns[i].regex);
++	free(msgs->patterns);
++	msgs->patterns = NULL;
++	msgs->cnt = 0;
++}
++
+ static void free_test_spec(struct test_spec *spec)
+ {
++	/* Deallocate expect_msgs arrays. */
++	free_msgs(&spec->priv.expect_msgs);
++	free_msgs(&spec->unpriv.expect_msgs);
++	free_msgs(&spec->priv.expect_xlated);
++	free_msgs(&spec->unpriv.expect_xlated);
++
+ 	free(spec->priv.name);
+ 	free(spec->unpriv.name);
+-	free(spec->priv.expect_msgs);
+-	free(spec->unpriv.expect_msgs);
+-
+ 	spec->priv.name = NULL;
+ 	spec->unpriv.name = NULL;
+-	spec->priv.expect_msgs = NULL;
+-	spec->unpriv.expect_msgs = NULL;
+ }
+ 
+-static int push_msg(const char *msg, struct test_subspec *subspec)
++static int push_msg(const char *substr, const char *regex_str, struct expected_msgs *msgs)
+ {
+ 	void *tmp;
++	int regcomp_res;
++	char error_msg[100];
++	struct expect_msg *msg;
+ 
+-	tmp = realloc(subspec->expect_msgs, (1 + subspec->expect_msg_cnt) * sizeof(void *));
++	tmp = realloc(msgs->patterns,
++		      (1 + msgs->cnt) * sizeof(struct expect_msg));
+ 	if (!tmp) {
+ 		ASSERT_FAIL("failed to realloc memory for messages\n");
+ 		return -ENOMEM;
+ 	}
+-	subspec->expect_msgs = tmp;
+-	subspec->expect_msgs[subspec->expect_msg_cnt++] = msg;
++	msgs->patterns = tmp;
++	msg = &msgs->patterns[msgs->cnt];
++
++	if (substr) {
++		msg->substr = substr;
++		msg->regex_str = NULL;
++	} else {
++		msg->regex_str = regex_str;
++		msg->substr = NULL;
++		regcomp_res = regcomp(&msg->regex, regex_str, REG_EXTENDED|REG_NEWLINE);
++		if (regcomp_res != 0) {
++			regerror(regcomp_res, &msg->regex, error_msg, sizeof(error_msg));
++			PRINT_FAIL("Regexp compilation error in '%s': '%s'\n",
++				   regex_str, error_msg);
++			return -EINVAL;
++		}
++	}
+ 
++	msgs->cnt += 1;
+ 	return 0;
+ }
+ 
+@@ -163,6 +215,41 @@ static void update_flags(int *flags, int flag, bool clear)
+ 		*flags |= flag;
+ }
+ 
++/* Matches a string of the form '<pfx>[^=]=.*' and returns its suffix.
++ * Used to parse btf_decl_tag values.
++ * Such values require unique prefix because compiler does not add
++ * same __attribute__((btf_decl_tag(...))) twice.
++ * Test suite uses two-component tags for such cases:
++ *
++ *   <pfx> __COUNTER__ '='
++ *
++ * For example, two consecutive __msg tags '__msg("foo") __msg("foo")'
++ * would be encoded as:
++ *
++ *   [18] DECL_TAG 'comment:test_expect_msg=0=foo' type_id=15 component_idx=-1
++ *   [19] DECL_TAG 'comment:test_expect_msg=1=foo' type_id=15 component_idx=-1
++ *
++ * And the purpose of this function is to extract 'foo' from the above.
++ */
++static const char *skip_dynamic_pfx(const char *s, const char *pfx)
++{
++	const char *msg;
++
++	if (strncmp(s, pfx, strlen(pfx)) != 0)
++		return NULL;
++	msg = s + strlen(pfx);
++	msg = strchr(msg, '=');
++	if (!msg)
++		return NULL;
++	return msg + 1;
++}
++
++enum arch {
++	ARCH_X86_64	= 0x1,
++	ARCH_ARM64	= 0x2,
++	ARCH_RISCV64	= 0x4,
++};
++
+ /* Uses btf_decl_tag attributes to describe the expected test
+  * behavior, see bpf_misc.h for detailed description of each attribute
+  * and attribute combinations.
+@@ -176,6 +263,7 @@ static int parse_test_spec(struct test_loader *tester,
+ 	bool has_unpriv_result = false;
+ 	bool has_unpriv_retval = false;
+ 	int func_id, i, err = 0;
++	u32 arch_mask = 0;
+ 	struct btf *btf;
+ 
+ 	memset(spec, 0, sizeof(*spec));
+@@ -231,15 +319,33 @@ static int parse_test_spec(struct test_loader *tester,
+ 		} else if (strcmp(s, TEST_TAG_AUXILIARY_UNPRIV) == 0) {
+ 			spec->auxiliary = true;
+ 			spec->mode_mask |= UNPRIV;
+-		} else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX)) {
+-			msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1;
+-			err = push_msg(msg, &spec->priv);
++		} else if ((msg = skip_dynamic_pfx(s, TEST_TAG_EXPECT_MSG_PFX))) {
++			err = push_msg(msg, NULL, &spec->priv.expect_msgs);
++			if (err)
++				goto cleanup;
++			spec->mode_mask |= PRIV;
++		} else if ((msg = skip_dynamic_pfx(s, TEST_TAG_EXPECT_MSG_PFX_UNPRIV))) {
++			err = push_msg(msg, NULL, &spec->unpriv.expect_msgs);
++			if (err)
++				goto cleanup;
++			spec->mode_mask |= UNPRIV;
++		} else if ((msg = skip_dynamic_pfx(s, TEST_TAG_EXPECT_REGEX_PFX))) {
++			err = push_msg(NULL, msg, &spec->priv.expect_msgs);
++			if (err)
++				goto cleanup;
++			spec->mode_mask |= PRIV;
++		} else if ((msg = skip_dynamic_pfx(s, TEST_TAG_EXPECT_REGEX_PFX_UNPRIV))) {
++			err = push_msg(NULL, msg, &spec->unpriv.expect_msgs);
++			if (err)
++				goto cleanup;
++			spec->mode_mask |= UNPRIV;
++		} else if ((msg = skip_dynamic_pfx(s, TEST_TAG_EXPECT_XLATED_PFX))) {
++			err = push_msg(msg, NULL, &spec->priv.expect_xlated);
+ 			if (err)
+ 				goto cleanup;
+ 			spec->mode_mask |= PRIV;
+-		} else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX_UNPRIV)) {
+-			msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX_UNPRIV) - 1;
+-			err = push_msg(msg, &spec->unpriv);
++		} else if ((msg = skip_dynamic_pfx(s, TEST_TAG_EXPECT_XLATED_PFX_UNPRIV))) {
++			err = push_msg(msg, NULL, &spec->unpriv.expect_xlated);
+ 			if (err)
+ 				goto cleanup;
+ 			spec->mode_mask |= UNPRIV;
+@@ -290,11 +396,26 @@ static int parse_test_spec(struct test_loader *tester,
+ 					goto cleanup;
+ 				update_flags(&spec->prog_flags, flags, clear);
+ 			}
++		} else if (str_has_pfx(s, TEST_TAG_ARCH)) {
++			val = s + sizeof(TEST_TAG_ARCH) - 1;
++			if (strcmp(val, "X86_64") == 0) {
++				arch_mask |= ARCH_X86_64;
++			} else if (strcmp(val, "ARM64") == 0) {
++				arch_mask |= ARCH_ARM64;
++			} else if (strcmp(val, "RISCV64") == 0) {
++				arch_mask |= ARCH_RISCV64;
++			} else {
++				PRINT_FAIL("bad arch spec: '%s'", val);
++				err = -EINVAL;
++				goto cleanup;
++			}
+ 		} else if (str_has_pfx(s, TEST_BTF_PATH)) {
+ 			spec->btf_custom_path = s + sizeof(TEST_BTF_PATH) - 1;
+ 		}
+ 	}
+ 
++	spec->arch_mask = arch_mask;
++
+ 	if (spec->mode_mask == 0)
+ 		spec->mode_mask = PRIV;
+ 
+@@ -336,17 +457,25 @@ static int parse_test_spec(struct test_loader *tester,
+ 			spec->unpriv.execute = spec->priv.execute;
+ 		}
+ 
+-		if (!spec->unpriv.expect_msgs) {
+-			size_t sz = spec->priv.expect_msg_cnt * sizeof(void *);
++		if (spec->unpriv.expect_msgs.cnt == 0) {
++			for (i = 0; i < spec->priv.expect_msgs.cnt; i++) {
++				struct expect_msg *msg = &spec->priv.expect_msgs.patterns[i];
+ 
+-			spec->unpriv.expect_msgs = malloc(sz);
+-			if (!spec->unpriv.expect_msgs) {
+-				PRINT_FAIL("failed to allocate memory for unpriv.expect_msgs\n");
+-				err = -ENOMEM;
+-				goto cleanup;
++				err = push_msg(msg->substr, msg->regex_str,
++					       &spec->unpriv.expect_msgs);
++				if (err)
++					goto cleanup;
++			}
++		}
++		if (spec->unpriv.expect_xlated.cnt == 0) {
++			for (i = 0; i < spec->priv.expect_xlated.cnt; i++) {
++				struct expect_msg *msg = &spec->priv.expect_xlated.patterns[i];
++
++				err = push_msg(msg->substr, msg->regex_str,
++					       &spec->unpriv.expect_xlated);
++				if (err)
++					goto cleanup;
+ 			}
+-			memcpy(spec->unpriv.expect_msgs, spec->priv.expect_msgs, sz);
+-			spec->unpriv.expect_msg_cnt = spec->priv.expect_msg_cnt;
+ 		}
+ 	}
+ 
+@@ -386,7 +515,6 @@ static void prepare_case(struct test_loader *tester,
+ 	bpf_program__set_flags(prog, prog_flags | spec->prog_flags);
+ 
+ 	tester->log_buf[0] = '\0';
+-	tester->next_match_pos = 0;
+ }
+ 
+ static void emit_verifier_log(const char *log_buf, bool force)
+@@ -396,33 +524,48 @@ static void emit_verifier_log(const char *log_buf, bool force)
+ 	fprintf(stdout, "VERIFIER LOG:\n=============\n%s=============\n", log_buf);
+ }
+ 
+-static void validate_case(struct test_loader *tester,
+-			  struct test_subspec *subspec,
+-			  struct bpf_object *obj,
+-			  struct bpf_program *prog,
+-			  int load_err)
++static void emit_xlated(const char *xlated, bool force)
+ {
+-	int i, j;
+-
+-	for (i = 0; i < subspec->expect_msg_cnt; i++) {
+-		char *match;
+-		const char *expect_msg;
++	if (!force && env.verbosity == VERBOSE_NONE)
++		return;
++	fprintf(stdout, "XLATED:\n=============\n%s=============\n", xlated);
++}
+ 
+-		expect_msg = subspec->expect_msgs[i];
++static void validate_msgs(char *log_buf, struct expected_msgs *msgs,
++			  void (*emit_fn)(const char *buf, bool force))
++{
++	regmatch_t reg_match[1];
++	const char *log = log_buf;
++	int i, j, err;
++
++	for (i = 0; i < msgs->cnt; i++) {
++		struct expect_msg *msg = &msgs->patterns[i];
++		const char *match = NULL;
++
++		if (msg->substr) {
++			match = strstr(log, msg->substr);
++			if (match)
++				log = match + strlen(msg->substr);
++		} else {
++			err = regexec(&msg->regex, log, 1, reg_match, 0);
++			if (err == 0) {
++				match = log + reg_match[0].rm_so;
++				log += reg_match[0].rm_eo;
++			}
++		}
+ 
+-		match = strstr(tester->log_buf + tester->next_match_pos, expect_msg);
+ 		if (!ASSERT_OK_PTR(match, "expect_msg")) {
+-			/* if we are in verbose mode, we've already emitted log */
+ 			if (env.verbosity == VERBOSE_NONE)
+-				emit_verifier_log(tester->log_buf, true /*force*/);
+-			for (j = 0; j < i; j++)
+-				fprintf(stderr,
+-					"MATCHED  MSG: '%s'\n", subspec->expect_msgs[j]);
+-			fprintf(stderr, "EXPECTED MSG: '%s'\n", expect_msg);
++				emit_fn(log_buf, true /*force*/);
++			for (j = 0; j <= i; j++) {
++				msg = &msgs->patterns[j];
++				fprintf(stderr, "%s %s: '%s'\n",
++					j < i ? "MATCHED " : "EXPECTED",
++					msg->substr ? "SUBSTR" : " REGEX",
++					msg->substr ?: msg->regex_str);
++			}
+ 			return;
+ 		}
+-
+-		tester->next_match_pos = match - tester->log_buf + strlen(expect_msg);
+ 	}
+ }
+ 
+@@ -550,6 +693,51 @@ static bool should_do_test_run(struct test_spec *spec, struct test_subspec *subs
+ 	return true;
+ }
+ 
++/* Get a disassembly of BPF program after verifier applies all rewrites */
++static int get_xlated_program_text(int prog_fd, char *text, size_t text_sz)
++{
++	struct bpf_insn *insn_start = NULL, *insn, *insn_end;
++	__u32 insns_cnt = 0, i;
++	char buf[64];
++	FILE *out = NULL;
++	int err;
++
++	err = get_xlated_program(prog_fd, &insn_start, &insns_cnt);
++	if (!ASSERT_OK(err, "get_xlated_program"))
++		goto out;
++	out = fmemopen(text, text_sz, "w");
++	if (!ASSERT_OK_PTR(out, "open_memstream"))
++		goto out;
++	insn_end = insn_start + insns_cnt;
++	insn = insn_start;
++	while (insn < insn_end) {
++		i = insn - insn_start;
++		insn = disasm_insn(insn, buf, sizeof(buf));
++		fprintf(out, "%d: %s\n", i, buf);
++	}
++	fflush(out);
++
++out:
++	free(insn_start);
++	if (out)
++		fclose(out);
++	return err;
++}
++
++static bool run_on_current_arch(int arch_mask)
++{
++	if (arch_mask == 0)
++		return true;
++#if defined(__x86_64__)
++	return arch_mask & ARCH_X86_64;
++#elif defined(__aarch64__)
++	return arch_mask & ARCH_ARM64;
++#elif defined(__riscv) && __riscv_xlen == 64
++	return arch_mask & ARCH_RISCV64;
++#endif
++	return false;
++}
++
+ /* this function is forced noinline and has short generic name to look better
+  * in test_progs output (in case of a failure)
+  */
+@@ -574,6 +762,11 @@ void run_subtest(struct test_loader *tester,
+ 	if (!test__start_subtest(subspec->name))
+ 		return;
+ 
++	if (!run_on_current_arch(spec->arch_mask)) {
++		test__skip();
++		return;
++	}
++
+ 	if (unpriv) {
+ 		if (!can_execute_unpriv(tester, spec)) {
+ 			test__skip();
+@@ -634,9 +827,17 @@ void run_subtest(struct test_loader *tester,
+ 			goto tobj_cleanup;
+ 		}
+ 	}
+-
+ 	emit_verifier_log(tester->log_buf, false /*force*/);
+-	validate_case(tester, subspec, tobj, tprog, err);
++	validate_msgs(tester->log_buf, &subspec->expect_msgs, emit_verifier_log);
++
++	if (subspec->expect_xlated.cnt) {
++		err = get_xlated_program_text(bpf_program__fd(tprog),
++					      tester->log_buf, tester->log_buf_sz);
++		if (err)
++			goto tobj_cleanup;
++		emit_xlated(tester->log_buf, false /*force*/);
++		validate_msgs(tester->log_buf, &subspec->expect_xlated, emit_xlated);
++	}
+ 
+ 	if (should_do_test_run(spec, subspec)) {
+ 		/* For some reason test_verifier executes programs
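
The counter component matters because the compiler emits only one BTF DECL_TAG for repeated identical btf_decl_tag strings, so two identical __msg() annotations would otherwise collapse into a single expectation; skip_dynamic_pfx() strips the counter back off when the spec is parsed. Roughly what the encoding expands to:

#define XSTR(s) STR(s)
#define STR(s) #s
#define __msg(msg) \
	__attribute__((btf_decl_tag("comment:test_expect_msg=" XSTR(__COUNTER__) "=" msg)))

/* __msg("foo") __msg("foo") now yields two distinct tag strings, e.g.:
 *
 *   comment:test_expect_msg=0=foo
 *   comment:test_expect_msg=1=foo
 *
 * whereas without the counter both would expand to the same string and
 * the duplicate attribute could be dropped, leaving only one expectation.
 */
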
+diff --git a/tools/testing/selftests/bpf/test_lru_map.c b/tools/testing/selftests/bpf/test_lru_map.c
+index 4d0650cfb5cd8b..fda7589c50236c 100644
+--- a/tools/testing/selftests/bpf/test_lru_map.c
++++ b/tools/testing/selftests/bpf/test_lru_map.c
+@@ -126,7 +126,8 @@ static int sched_next_online(int pid, int *next_to_try)
+ 
+ 	while (next < nr_cpus) {
+ 		CPU_ZERO(&cpuset);
+-		CPU_SET(next++, &cpuset);
++		CPU_SET(next, &cpuset);
++		next++;
+ 		if (!sched_setaffinity(pid, sizeof(cpuset), &cpuset)) {
+ 			ret = 0;
+ 			break;
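
CPU_SET() is a function-like macro, and some libc implementations mention the cpu argument more than once in the expansion, so CPU_SET(next++, ...) could advance next more than once per call (or even invoke undefined behaviour); hoisting the increment out sidesteps that. A standalone illustration with a stand-in macro:

#include <stdio.h>

/* Mentions its argument twice, like some CPU_SET() implementations do */
#define MARK(i, arr) do { if ((i) < 8) (arr)[(i)] = 1; } while (0)

int main(void)
{
	int slots[8] = { 0 };
	int next = 0;

	MARK(next++, slots);   /* intends slots[0]; actually sets slots[1], next == 2 */
	printf("next=%d slots[0]=%d slots[1]=%d\n", next, slots[0], slots[1]);
	return 0;
}
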
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index 89ff704e9dad5e..d5d0cb4eb1975b 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -10,7 +10,6 @@
+ #include <sched.h>
+ #include <signal.h>
+ #include <string.h>
+-#include <execinfo.h> /* backtrace */
+ #include <sys/sysinfo.h> /* get_nprocs */
+ #include <netinet/in.h>
+ #include <sys/select.h>
+@@ -19,6 +18,21 @@
+ #include <bpf/btf.h>
+ #include "json_writer.h"
+ 
++#ifdef __GLIBC__
++#include <execinfo.h> /* backtrace */
++#endif
++
++/* Default backtrace funcs if missing at link */
++__weak int backtrace(void **buffer, int size)
++{
++	return 0;
++}
++
++__weak void backtrace_symbols_fd(void *const *buffer, int size, int fd)
++{
++	dprintf(fd, "<backtrace not supported>\n");
++}
++
+ static bool verbose(void)
+ {
+ 	return env.verbosity > VERBOSE_NONE;
+@@ -1731,7 +1745,7 @@ int main(int argc, char **argv)
+ 	/* launch workers if requested */
+ 	env.worker_id = -1; /* main process */
+ 	if (env.workers) {
+-		env.worker_pids = calloc(sizeof(__pid_t), env.workers);
++		env.worker_pids = calloc(sizeof(pid_t), env.workers);
+ 		env.worker_socks = calloc(sizeof(int), env.workers);
+ 		if (env.debug)
+ 			fprintf(stdout, "Launching %d workers.\n", env.workers);
+diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
+index 0ba5a20b19ba8e..8e997de596db05 100644
+--- a/tools/testing/selftests/bpf/test_progs.h
++++ b/tools/testing/selftests/bpf/test_progs.h
+@@ -438,7 +438,6 @@ typedef int (*pre_execution_cb)(struct bpf_object *obj);
+ struct test_loader {
+ 	char *log_buf;
+ 	size_t log_buf_sz;
+-	size_t next_match_pos;
+ 	pre_execution_cb pre_execution_cb;
+ 
+ 	struct bpf_object *obj;
+diff --git a/tools/testing/selftests/bpf/testing_helpers.c b/tools/testing/selftests/bpf/testing_helpers.c
+index d5379a0e6da804..680e452583a78b 100644
+--- a/tools/testing/selftests/bpf/testing_helpers.c
++++ b/tools/testing/selftests/bpf/testing_helpers.c
+@@ -220,13 +220,13 @@ int parse_test_list(const char *s,
+ 		    bool is_glob_pattern)
+ {
+ 	char *input, *state = NULL, *test_spec;
+-	int err = 0;
++	int err = 0, cnt = 0;
+ 
+ 	input = strdup(s);
+ 	if (!input)
+ 		return -ENOMEM;
+ 
+-	while ((test_spec = strtok_r(state ? NULL : input, ",", &state))) {
++	while ((test_spec = strtok_r(cnt++ ? NULL : input, ",", &state))) {
+ 		err = insert_test(set, test_spec, is_glob_pattern);
+ 		if (err)
+ 			break;
+@@ -451,7 +451,7 @@ int get_xlated_program(int fd_prog, struct bpf_insn **buf, __u32 *cnt)
+ 
+ 	*cnt = xlated_prog_len / buf_element_size;
+ 	*buf = calloc(*cnt, buf_element_size);
+-	if (!buf) {
++	if (!*buf) {
+ 		perror("can't allocate xlated program buffer");
+ 		return -ENOMEM;
+ 	}
+diff --git a/tools/testing/selftests/bpf/unpriv_helpers.c b/tools/testing/selftests/bpf/unpriv_helpers.c
+index b6d016461fb023..220f6a96381345 100644
+--- a/tools/testing/selftests/bpf/unpriv_helpers.c
++++ b/tools/testing/selftests/bpf/unpriv_helpers.c
+@@ -2,7 +2,6 @@
+ 
+ #include <stdbool.h>
+ #include <stdlib.h>
+-#include <error.h>
+ #include <stdio.h>
+ #include <string.h>
+ #include <unistd.h>
+diff --git a/tools/testing/selftests/bpf/veristat.c b/tools/testing/selftests/bpf/veristat.c
+index b2854238d4a0eb..fd9780082ff48a 100644
+--- a/tools/testing/selftests/bpf/veristat.c
++++ b/tools/testing/selftests/bpf/veristat.c
+@@ -784,13 +784,13 @@ static int parse_stat(const char *stat_name, struct stat_specs *specs)
+ static int parse_stats(const char *stats_str, struct stat_specs *specs)
+ {
+ 	char *input, *state = NULL, *next;
+-	int err;
++	int err, cnt = 0;
+ 
+ 	input = strdup(stats_str);
+ 	if (!input)
+ 		return -ENOMEM;
+ 
+-	while ((next = strtok_r(state ? NULL : input, ",", &state))) {
++	while ((next = strtok_r(cnt++ ? NULL : input, ",", &state))) {
+ 		err = parse_stat(next, specs);
+ 		if (err) {
+ 			free(input);
+@@ -1493,7 +1493,7 @@ static int parse_stats_csv(const char *filename, struct stat_specs *specs,
+ 	while (fgets(line, sizeof(line), f)) {
+ 		char *input = line, *state = NULL, *next;
+ 		struct verif_stats *st = NULL;
+-		int col = 0;
++		int col = 0, cnt = 0;
+ 
+ 		if (!header) {
+ 			void *tmp;
+@@ -1511,7 +1511,7 @@ static int parse_stats_csv(const char *filename, struct stat_specs *specs,
+ 			*stat_cntp += 1;
+ 		}
+ 
+-		while ((next = strtok_r(state ? NULL : input, ",\n", &state))) {
++		while ((next = strtok_r(cnt++ ? NULL : input, ",\n", &state))) {
+ 			if (header) {
+ 				/* for the first line, set up spec stats */
+ 				err = parse_stat(next, specs);
+diff --git a/tools/testing/selftests/dt/test_unprobed_devices.sh b/tools/testing/selftests/dt/test_unprobed_devices.sh
+index 2d7e70c5ad2d36..5e3f42ef249eec 100755
+--- a/tools/testing/selftests/dt/test_unprobed_devices.sh
++++ b/tools/testing/selftests/dt/test_unprobed_devices.sh
+@@ -34,8 +34,21 @@ nodes_compatible=$(
+ 		# Check if node is available
+ 		if [[ -e "${node}"/status ]]; then
+ 			status=$(tr -d '\000' < "${node}"/status)
+-			[[ "${status}" != "okay" && "${status}" != "ok" ]] && continue
++			if [[ "${status}" != "okay" && "${status}" != "ok" ]]; then
++				if [ -n "${disabled_nodes_regex}" ]; then
++					disabled_nodes_regex="${disabled_nodes_regex}|${node}"
++				else
++					disabled_nodes_regex="${node}"
++				fi
++				continue
++			fi
+ 		fi
++
++		# Ignore this node if one of its ancestors was disabled
++		if [ -n "${disabled_nodes_regex}" ]; then
++			echo "${node}" | grep -q -E "${disabled_nodes_regex}" && continue
++		fi
++
+ 		echo "${node}" | sed -e 's|\/proc\/device-tree||'
+ 	done | sort
+ 	)
+diff --git a/tools/testing/selftests/ftrace/test.d/00basic/test_ownership.tc b/tools/testing/selftests/ftrace/test.d/00basic/test_ownership.tc
+index c45094d1e1d2db..803efd7b56c77d 100644
+--- a/tools/testing/selftests/ftrace/test.d/00basic/test_ownership.tc
++++ b/tools/testing/selftests/ftrace/test.d/00basic/test_ownership.tc
+@@ -6,6 +6,18 @@ original_group=`stat -c "%g" .`
+ original_owner=`stat -c "%u" .`
+ 
+ mount_point=`stat -c '%m' .`
++
++# If stat -c '%m' does not work (e.g. busybox) or failed, try to use the
++# current working directory (which should be a tracefs) as the mount point.
++if [ ! -d "$mount_point" ]; then
++	if mount | grep -qw $PWD ; then
++		mount_point=$PWD
++	else
++		# If PWD doesn't work, that is an environmental problem.
++		exit_unresolved
++	fi
++fi
++
+ mount_options=`mount | grep "$mount_point" | sed -e 's/.*(\(.*\)).*/\1/'`
+ 
+ # find another owner and group that is not the original
+diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
+index 073a748b9380a1..263f6b798c853e 100644
+--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
++++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
+@@ -19,7 +19,14 @@ fail() { # mesg
+ 
+ FILTER=set_ftrace_filter
+ FUNC1="schedule"
+-FUNC2="sched_tick"
++if grep '^sched_tick\b' available_filter_functions; then
++    FUNC2="sched_tick"
++elif grep '^scheduler_tick\b' available_filter_functions; then
++    FUNC2="scheduler_tick"
++else
++    exit_unresolved
++fi
++
+ 
+ ALL_FUNCS="#### all functions enabled ####"
+ 
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_char.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_char.tc
+index e21c9c27ece476..77f4c07cdcb899 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_char.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_char.tc
+@@ -1,7 +1,7 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+ # description: Kprobe event char type argument
+-# requires: kprobe_events
++# requires: kprobe_events available_filter_functions
+ 
+ case `uname -m` in
+ x86_64)
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_string.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_string.tc
+index 93217d4595563f..39001073f7ed5d 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_string.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_string.tc
+@@ -1,7 +1,7 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+ # description: Kprobe event string type argument
+-# requires: kprobe_events
++# requires: kprobe_events available_filter_functions
+ 
+ case `uname -m` in
+ x86_64)
+diff --git a/tools/testing/selftests/kselftest.h b/tools/testing/selftests/kselftest.h
+index 76c2a6945d3e83..f9214e7cdd1345 100644
+--- a/tools/testing/selftests/kselftest.h
++++ b/tools/testing/selftests/kselftest.h
+@@ -61,6 +61,7 @@
+ #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+ #endif
+ 
++#if defined(__i386__) || defined(__x86_64__) /* arch */
+ /*
+  * gcc cpuid.h provides __cpuid_count() since v4.4.
+  * Clang/LLVM cpuid.h provides  __cpuid_count() since v3.4.0.
+@@ -75,6 +76,7 @@
+ 			      : "=a" (a), "=b" (b), "=c" (c), "=d" (d)	\
+ 			      : "0" (level), "2" (count))
+ #endif
++#endif /* end arch */
+ 
+ /* define kselftest exit codes */
+ #define KSFT_PASS  0
+diff --git a/tools/testing/selftests/net/netfilter/ipvs.sh b/tools/testing/selftests/net/netfilter/ipvs.sh
+index 4ceee9fb39495b..d3edb16cd4b3f6 100755
+--- a/tools/testing/selftests/net/netfilter/ipvs.sh
++++ b/tools/testing/selftests/net/netfilter/ipvs.sh
+@@ -97,7 +97,7 @@ cleanup() {
+ }
+ 
+ server_listen() {
+-	ip netns exec "$ns2" socat -u -4 TCP-LISTEN:8080,reuseaddr STDOUT > "${outfile}" &
++	ip netns exec "$ns2" timeout 5 socat -u -4 TCP-LISTEN:8080,reuseaddr STDOUT > "${outfile}" &
+ 	server_pid=$!
+ 	sleep 0.2
+ }
+diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
+index 55315ed695f47e..18565f02163e72 100644
+--- a/tools/testing/selftests/resctrl/cat_test.c
++++ b/tools/testing/selftests/resctrl/cat_test.c
+@@ -293,12 +293,12 @@ static int cat_run_test(const struct resctrl_test *test, const struct user_param
+ 
+ static bool arch_supports_noncont_cat(const struct resctrl_test *test)
+ {
+-	unsigned int eax, ebx, ecx, edx;
+-
+ 	/* AMD always supports non-contiguous CBM. */
+ 	if (get_vendor() == ARCH_AMD)
+ 		return true;
+ 
++#if defined(__i386__) || defined(__x86_64__) /* arch */
++	unsigned int eax, ebx, ecx, edx;
+ 	/* Intel support for non-contiguous CBM needs to be discovered. */
+ 	if (!strcmp(test->resource, "L3"))
+ 		__cpuid_count(0x10, 1, eax, ebx, ecx, edx);
+@@ -308,6 +308,9 @@ static bool arch_supports_noncont_cat(const struct resctrl_test *test)
+ 		return false;
+ 
+ 	return ((ecx >> 3) & 1);
++#endif /* end arch */
++
++	return false;
+ }
+ 
+ static int noncont_cat_run_test(const struct resctrl_test *test,
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 1192942aef911d..3163bcf8148a64 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -5500,6 +5500,7 @@ __visible bool kvm_rebooting;
+ EXPORT_SYMBOL_GPL(kvm_rebooting);
+ 
+ static DEFINE_PER_CPU(bool, hardware_enabled);
++static DEFINE_MUTEX(kvm_usage_lock);
+ static int kvm_usage_count;
+ 
+ static int __hardware_enable_nolock(void)
+@@ -5532,10 +5533,10 @@ static int kvm_online_cpu(unsigned int cpu)
+ 	 * be enabled. Otherwise running VMs would encounter unrecoverable
+ 	 * errors when scheduled to this CPU.
+ 	 */
+-	mutex_lock(&kvm_lock);
++	mutex_lock(&kvm_usage_lock);
+ 	if (kvm_usage_count)
+ 		ret = __hardware_enable_nolock();
+-	mutex_unlock(&kvm_lock);
++	mutex_unlock(&kvm_usage_lock);
+ 	return ret;
+ }
+ 
+@@ -5555,10 +5556,10 @@ static void hardware_disable_nolock(void *junk)
+ 
+ static int kvm_offline_cpu(unsigned int cpu)
+ {
+-	mutex_lock(&kvm_lock);
++	mutex_lock(&kvm_usage_lock);
+ 	if (kvm_usage_count)
+ 		hardware_disable_nolock(NULL);
+-	mutex_unlock(&kvm_lock);
++	mutex_unlock(&kvm_usage_lock);
+ 	return 0;
+ }
+ 
+@@ -5574,9 +5575,9 @@ static void hardware_disable_all_nolock(void)
+ static void hardware_disable_all(void)
+ {
+ 	cpus_read_lock();
+-	mutex_lock(&kvm_lock);
++	mutex_lock(&kvm_usage_lock);
+ 	hardware_disable_all_nolock();
+-	mutex_unlock(&kvm_lock);
++	mutex_unlock(&kvm_usage_lock);
+ 	cpus_read_unlock();
+ }
+ 
+@@ -5607,7 +5608,7 @@ static int hardware_enable_all(void)
+ 	 * enable hardware multiple times.
+ 	 */
+ 	cpus_read_lock();
+-	mutex_lock(&kvm_lock);
++	mutex_lock(&kvm_usage_lock);
+ 
+ 	r = 0;
+ 
+@@ -5621,7 +5622,7 @@ static int hardware_enable_all(void)
+ 		}
+ 	}
+ 
+-	mutex_unlock(&kvm_lock);
++	mutex_unlock(&kvm_usage_lock);
+ 	cpus_read_unlock();
+ 
+ 	return r;
+@@ -5649,13 +5650,13 @@ static int kvm_suspend(void)
+ {
+ 	/*
+ 	 * Secondary CPUs and CPU hotplug are disabled across the suspend/resume
+-	 * callbacks, i.e. no need to acquire kvm_lock to ensure the usage count
+-	 * is stable.  Assert that kvm_lock is not held to ensure the system
+-	 * isn't suspended while KVM is enabling hardware.  Hardware enabling
+-	 * can be preempted, but the task cannot be frozen until it has dropped
+-	 * all locks (userspace tasks are frozen via a fake signal).
++	 * callbacks, i.e. no need to acquire kvm_usage_lock to ensure the usage
++	 * count is stable.  Assert that kvm_usage_lock is not held to ensure
++	 * the system isn't suspended while KVM is enabling hardware.  Hardware
++	 * enabling can be preempted, but the task cannot be frozen until it has
++	 * dropped all locks (userspace tasks are frozen via a fake signal).
+ 	 */
+-	lockdep_assert_not_held(&kvm_lock);
++	lockdep_assert_not_held(&kvm_usage_lock);
+ 	lockdep_assert_irqs_disabled();
+ 
+ 	if (kvm_usage_count)
+@@ -5665,7 +5666,7 @@ static int kvm_suspend(void)
+ 
+ static void kvm_resume(void)
+ {
+-	lockdep_assert_not_held(&kvm_lock);
++	lockdep_assert_not_held(&kvm_usage_lock);
+ 	lockdep_assert_irqs_disabled();
+ 
+ 	if (kvm_usage_count)


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.10 commit in: /
@ 2024-10-10 11:35 Mike Pagano
  0 siblings, 0 replies; 38+ messages in thread
From: Mike Pagano @ 2024-10-10 11:35 UTC (permalink / raw
  To: gentoo-commits

commit:     151a87774b91fdecc8861f7848cd2976ccac56d6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 10 11:34:58 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct 10 11:34:58 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=151a8777

Linux patch 6.10.14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1013_linux-6.10.14.patch | 23711 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 23715 insertions(+)

diff --git a/0000_README b/0000_README
index fdea4bd1..e9229972 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  1012_linux-6.10.13.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.10.13
 
+Patch:  1013_linux-6.10.14.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.10.14
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1013_linux-6.10.14.patch b/1013_linux-6.10.14.patch
new file mode 100644
index 00000000..4c54c083
--- /dev/null
+++ b/1013_linux-6.10.14.patch
@@ -0,0 +1,23711 @@
+diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
+index cad6c3dc1f9c1f..d0c1acfcad405e 100644
+--- a/Documentation/ABI/testing/sysfs-fs-f2fs
++++ b/Documentation/ABI/testing/sysfs-fs-f2fs
+@@ -763,3 +763,25 @@ Date:		November 2023
+ Contact:	"Chao Yu" <chao@kernel.org>
+ Description:	It controls to enable/disable IO aware feature for background discard.
+ 		By default, the value is 1 which indicates IO aware is on.
++
++What:		/sys/fs/f2fs/<disk>/blkzone_alloc_policy
++Date:		July 2024
++Contact:	"Yuanhong Liao" <liaoyuanhong@vivo.com>
++Description:	The zone UFS we are currently using consists of two parts:
++		conventional zones and sequential zones. It can be used to control which part
++		to prioritize for writes, with a default value of 0.
++
++		========================  =========================================
++		value					  description
++		blkzone_alloc_policy = 0  Prioritize writing to sequential zones
++		blkzone_alloc_policy = 1  Only allow writing to sequential zones
++		blkzone_alloc_policy = 2  Prioritize writing to conventional zones
++		========================  =========================================
++
++What:		/sys/fs/f2fs/<disk>/migration_window_granularity
++Date:		September 2024
++Contact:	"Daeho Jeong" <daehojeong@google.com>
++Description:	Controls migration window granularity of garbage collection on large
++		section. it can control the scanning window granularity for GC migration
++		in a unit of segment, while migration_granularity controls the number
++		of segments which can be migrated at the same turn.
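These tunables are ordinary sysfs attributes, so they can be adjusted from a shell once the f2fs volume is mounted. A minimal sketch (the disk name sdb and the value 16 are only placeholders; use the actual /sys/fs/f2fs/<disk> directory):

    # Prefer conventional zones for new writes (2); 1 restricts writes to sequential zones.
    echo 2 > /sys/fs/f2fs/sdb/blkzone_alloc_policy
    # Example scanning window of 16 segments per GC migration pass.
    echo 16 > /sys/fs/f2fs/sdb/migration_window_granularity
    # Read both settings back.
    grep . /sys/fs/f2fs/sdb/blkzone_alloc_policy /sys/fs/f2fs/sdb/migration_window_granularity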
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index c82446cef8e211..2c8e062eb2ce55 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4791,6 +4791,16 @@
+ 	printk.time=	Show timing data prefixed to each printk message line
+ 			Format: <bool>  (1/Y/y=enable, 0/N/n=disable)
+ 
++	proc_mem.force_override= [KNL]
++			Format: {always | ptrace | never}
++			Traditionally /proc/pid/mem allows memory permissions to be
++			overridden without restrictions. This option may be set to
++			restrict that. Can be one of:
++			- 'always': traditional behavior always allows mem overrides.
++			- 'ptrace': only allow mem overrides for active ptracers.
++			- 'never':  never allow mem overrides.
++			If not specified, default is the CONFIG_PROC_MEM_* choice.
++
+ 	processor.max_cstate=	[HW,ACPI]
+ 			Limit processor to maximum C-state
+ 			max_cstate=9 overrides any DMI blacklist limit.
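Like the other [KNL] options here, this is passed on the kernel command line. A hedged GRUB-based sketch (file locations and the "quiet" option are assumptions about a typical Debian-style setup):

    # /etc/default/grub: append the option to the existing command line,
    # here restricting /proc/pid/mem overrides to active ptracers.
    GRUB_CMDLINE_LINUX_DEFAULT="quiet proc_mem.force_override=ptrace"
    sudo update-grub          # or: grub-mkconfig -o /boot/grub/grub.cfg
    # After rebooting, confirm the option is active.
    grep -o 'proc_mem.force_override=[a-z]*' /proc/cmdline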
+diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
+index 39c52385f11fb3..8cd4f365044b67 100644
+--- a/Documentation/arch/arm64/silicon-errata.rst
++++ b/Documentation/arch/arm64/silicon-errata.rst
+@@ -146,6 +146,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A715     | #2645198        | ARM64_ERRATUM_2645198       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A715     | #3456084        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A720     | #3456091        | ARM64_ERRATUM_3194386       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A725     | #3456106        | ARM64_ERRATUM_3194386       |
+@@ -186,6 +188,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N2     | #3324339        | ARM64_ERRATUM_3194386       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-N3     | #3456111        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-V1     | #1619801        | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-V1     | #3324341        | ARM64_ERRATUM_3194386       |
+@@ -289,3 +293,5 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Microsoft      | Azure Cobalt 100| #2253138        | ARM64_ERRATUM_2253138       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Microsoft      | Azure Cobalt 100| #3324339        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
+index bbe89ea9590ceb..e95c216282818e 100644
+--- a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
++++ b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
+@@ -34,6 +34,7 @@ properties:
+       and length of the AXI DMA controller IO space, unless
+       axistream-connected is specified, in which case the reg
+       attribute of the node referenced by it is used.
++    minItems: 1
+     maxItems: 2
+ 
+   interrupts:
+@@ -181,7 +182,7 @@ examples:
+         clock-names = "s_axi_lite_clk", "axis_clk", "ref_clk", "mgt_clk";
+         clocks = <&axi_clk>, <&axi_clk>, <&pl_enet_ref_clk>, <&mgt_clk>;
+         phy-mode = "mii";
+-        reg = <0x00 0x40000000 0x00 0x40000>;
++        reg = <0x40000000 0x40000>;
+         xlnx,rxcsum = <0x2>;
+         xlnx,rxmem = <0x800>;
+         xlnx,txcsum = <0x2>;
+diff --git a/Documentation/networking/net_cachelines/net_device.rst b/Documentation/networking/net_cachelines/net_device.rst
+index 70c4fb9d4e5ce0..d68f37f5b1f821 100644
+--- a/Documentation/networking/net_cachelines/net_device.rst
++++ b/Documentation/networking/net_cachelines/net_device.rst
+@@ -98,7 +98,7 @@ unsigned_int                        num_rx_queues
+ unsigned_int                        real_num_rx_queues      -                   read_mostly         get_rps_cpu
+ struct_bpf_prog*                    xdp_prog                -                   read_mostly         netif_elide_gro()
+ unsigned_long                       gro_flush_timeout       -                   read_mostly         napi_complete_done
+-int                                 napi_defer_hard_irqs    -                   read_mostly         napi_complete_done
++u32                                 napi_defer_hard_irqs    -                   read_mostly         napi_complete_done
+ unsigned_int                        gro_max_size            -                   read_mostly         skb_gro_receive
+ unsigned_int                        gro_ipv4_max_size       -                   read_mostly         skb_gro_receive
+ rx_handler_func_t*                  rx_handler              read_mostly         -                   __netif_receive_skb_core
+diff --git a/Makefile b/Makefile
+index 93731d0b1a04ac..0ba45bdf4a3b00 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 10
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c
+index b668c97663ec0c..f5b66f4cf45d96 100644
+--- a/arch/arm/crypto/aes-ce-glue.c
++++ b/arch/arm/crypto/aes-ce-glue.c
+@@ -711,7 +711,7 @@ static int __init aes_init(void)
+ 		algname = aes_algs[i].base.cra_name + 2;
+ 		drvname = aes_algs[i].base.cra_driver_name + 2;
+ 		basename = aes_algs[i].base.cra_driver_name;
+-		simd = simd_skcipher_create_compat(algname, drvname, basename);
++		simd = simd_skcipher_create_compat(aes_algs + i, algname, drvname, basename);
+ 		err = PTR_ERR(simd);
+ 		if (IS_ERR(simd))
+ 			goto unregister_simds;
+diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
+index f00f042ef3570e..0ca94b90bc4ec5 100644
+--- a/arch/arm/crypto/aes-neonbs-glue.c
++++ b/arch/arm/crypto/aes-neonbs-glue.c
+@@ -539,7 +539,7 @@ static int __init aes_init(void)
+ 		algname = aes_algs[i].base.cra_name + 2;
+ 		drvname = aes_algs[i].base.cra_driver_name + 2;
+ 		basename = aes_algs[i].base.cra_driver_name;
+-		simd = simd_skcipher_create_compat(algname, drvname, basename);
++		simd = simd_skcipher_create_compat(aes_algs + i, algname, drvname, basename);
+ 		err = PTR_ERR(simd);
+ 		if (IS_ERR(simd))
+ 			goto unregister_simds;
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index cd9772b1fd95ee..43d79f87fa1808 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -195,7 +195,8 @@ config ARM64
+ 	select HAVE_DMA_CONTIGUOUS
+ 	select HAVE_DYNAMIC_FTRACE
+ 	select HAVE_DYNAMIC_FTRACE_WITH_ARGS \
+-		if $(cc-option,-fpatchable-function-entry=2)
++		if (GCC_SUPPORTS_DYNAMIC_FTRACE_WITH_ARGS || \
++		    CLANG_SUPPORTS_DYNAMIC_FTRACE_WITH_ARGS)
+ 	select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS \
+ 		if DYNAMIC_FTRACE_WITH_ARGS && DYNAMIC_FTRACE_WITH_CALL_OPS
+ 	select HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS \
+@@ -268,12 +269,10 @@ config CLANG_SUPPORTS_DYNAMIC_FTRACE_WITH_ARGS
+ 	def_bool CC_IS_CLANG
+ 	# https://github.com/ClangBuiltLinux/linux/issues/1507
+ 	depends on AS_IS_GNU || (AS_IS_LLVM && (LD_IS_LLD || LD_VERSION >= 23600))
+-	select HAVE_DYNAMIC_FTRACE_WITH_ARGS
+ 
+ config GCC_SUPPORTS_DYNAMIC_FTRACE_WITH_ARGS
+ 	def_bool CC_IS_GCC
+ 	depends on $(cc-option,-fpatchable-function-entry=2)
+-	select HAVE_DYNAMIC_FTRACE_WITH_ARGS
+ 
+ config 64BIT
+ 	def_bool y
+@@ -1079,6 +1078,7 @@ config ARM64_ERRATUM_3194386
+ 	  * ARM Cortex-A78C erratum 3324346
+ 	  * ARM Cortex-A78C erratum 3324347
+ 	  * ARM Cortex-A710 erratum 3324338
++	  * ARM Cortex-A715 erratum 3456084
+ 	  * ARM Cortex-A720 erratum 3456091
+ 	  * ARM Cortex-A725 erratum 3456106
+ 	  * ARM Cortex-X1 erratum 3324344
+@@ -1089,6 +1089,7 @@ config ARM64_ERRATUM_3194386
+ 	  * ARM Cortex-X925 erratum 3324334
+ 	  * ARM Neoverse-N1 erratum 3324349
+ 	  * ARM Neoverse N2 erratum 3324339
++	  * ARM Neoverse-N3 erratum 3456111
+ 	  * ARM Neoverse-V1 erratum 3324341
+ 	  * ARM Neoverse V2 erratum 3324336
+ 	  * ARM Neoverse-V3 erratum 3312417
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 5a7dfeb8e8eb55..488f8e75134959 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -94,6 +94,7 @@
+ #define ARM_CPU_PART_NEOVERSE_V3	0xD84
+ #define ARM_CPU_PART_CORTEX_X925	0xD85
+ #define ARM_CPU_PART_CORTEX_A725	0xD87
++#define ARM_CPU_PART_NEOVERSE_N3	0xD8E
+ 
+ #define APM_CPU_PART_XGENE		0x000
+ #define APM_CPU_VAR_POTENZA		0x00
+@@ -176,6 +177,7 @@
+ #define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3)
+ #define MIDR_CORTEX_X925 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X925)
+ #define MIDR_CORTEX_A725 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A725)
++#define MIDR_NEOVERSE_N3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N3)
+ #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
+ #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+ #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 36b8e97bf49ec4..545ce446791a3a 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -1364,11 +1364,6 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
+ 		sign_extend64(__val, id##_##fld##_WIDTH - 1);		\
+ 	})
+ 
+-#define expand_field_sign(id, fld, val)					\
+-	(id##_##fld##_SIGNED ?						\
+-	 __expand_field_sign_signed(id, fld, val) :			\
+-	 __expand_field_sign_unsigned(id, fld, val))
+-
+ #define get_idreg_field_unsigned(kvm, id, fld)				\
+ 	({								\
+ 		u64 __val = IDREG((kvm), SYS_##id);			\
+@@ -1384,20 +1379,26 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
+ #define get_idreg_field_enum(kvm, id, fld)				\
+ 	get_idreg_field_unsigned(kvm, id, fld)
+ 
+-#define get_idreg_field(kvm, id, fld)					\
++#define kvm_cmp_feat_signed(kvm, id, fld, op, limit)			\
++	(get_idreg_field_signed((kvm), id, fld) op __expand_field_sign_signed(id, fld, limit))
++
++#define kvm_cmp_feat_unsigned(kvm, id, fld, op, limit)			\
++	(get_idreg_field_unsigned((kvm), id, fld) op __expand_field_sign_unsigned(id, fld, limit))
++
++#define kvm_cmp_feat(kvm, id, fld, op, limit)				\
+ 	(id##_##fld##_SIGNED ?						\
+-	 get_idreg_field_signed(kvm, id, fld) :				\
+-	 get_idreg_field_unsigned(kvm, id, fld))
++	 kvm_cmp_feat_signed(kvm, id, fld, op, limit) :			\
++	 kvm_cmp_feat_unsigned(kvm, id, fld, op, limit))
+ 
+ #define kvm_has_feat(kvm, id, fld, limit)				\
+-	(get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, limit))
++	kvm_cmp_feat(kvm, id, fld, >=, limit)
+ 
+ #define kvm_has_feat_enum(kvm, id, fld, val)				\
+-	(get_idreg_field_unsigned((kvm), id, fld) == __expand_field_sign_unsigned(id, fld, val))
++	kvm_cmp_feat_unsigned(kvm, id, fld, ==, val)
+ 
+ #define kvm_has_feat_range(kvm, id, fld, min, max)			\
+-	(get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, min) && \
+-	 get_idreg_field((kvm), id, fld) <= expand_field_sign(id, fld, max))
++	(kvm_cmp_feat(kvm, id, fld, >=, min) &&				\
++	kvm_cmp_feat(kvm, id, fld, <=, max))
+ 
+ /* Check for a given level of PAuth support */
+ #define kvm_has_pauth(k, l)						\
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index dfefbdf4073a6a..a78f247029aec3 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -439,6 +439,7 @@ static const struct midr_range erratum_spec_ssbs_list[] = {
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A715),
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A720),
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A725),
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
+@@ -447,8 +448,10 @@ static const struct midr_range erratum_spec_ssbs_list[] = {
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_X3),
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_X4),
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_X925),
++	MIDR_ALL_VERSIONS(MIDR_MICROSOFT_AZURE_COBALT_100),
+ 	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
+ 	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
++	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N3),
+ 	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
+ 	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V2),
+ 	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V3),
+diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
+index 5139a28130c088..0f7b484cb2ff20 100644
+--- a/arch/arm64/mm/trans_pgd.c
++++ b/arch/arm64/mm/trans_pgd.c
+@@ -42,14 +42,16 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
+ 		 * the temporary mappings we use during restore.
+ 		 */
+ 		__set_pte(dst_ptep, pte_mkwrite_novma(pte));
+-	} else if ((debug_pagealloc_enabled() ||
+-		   is_kfence_address((void *)addr)) && !pte_none(pte)) {
++	} else if (!pte_none(pte)) {
+ 		/*
+ 		 * debug_pagealloc will removed the PTE_VALID bit if
+ 		 * the page isn't in use by the resume kernel. It may have
+ 		 * been in use by the original kernel, in which case we need
+ 		 * to put it back in our copy to do the restore.
+ 		 *
++		 * Other cases include kfence / vmalloc / memfd_secret which
++		 * may call `set_direct_map_invalid_noflush()`.
++		 *
+ 		 * Before marking this entry valid, check the pfn should
+ 		 * be mapped.
+ 		 */
+diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig
+index b4252c357c8e23..75b366407a60a3 100644
+--- a/arch/loongarch/configs/loongson3_defconfig
++++ b/arch/loongarch/configs/loongson3_defconfig
+@@ -96,7 +96,6 @@ CONFIG_ZPOOL=y
+ CONFIG_ZSWAP=y
+ CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD=y
+ CONFIG_ZBUD=y
+-CONFIG_Z3FOLD=y
+ CONFIG_ZSMALLOC=m
+ # CONFIG_COMPAT_BRK is not set
+ CONFIG_MEMORY_HOTPLUG=y
+diff --git a/arch/parisc/include/asm/mman.h b/arch/parisc/include/asm/mman.h
+index 47c5a1991d1034..89b6beeda0b869 100644
+--- a/arch/parisc/include/asm/mman.h
++++ b/arch/parisc/include/asm/mman.h
+@@ -11,4 +11,18 @@ static inline bool arch_memory_deny_write_exec_supported(void)
+ }
+ #define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
+ 
++static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
++{
++	/*
++	 * The stack on parisc grows upwards, so if userspace requests memory
++	 * for a stack, mark it with VM_GROWSUP so that the stack expansion in
++	 * the fault handler will work.
++	 */
++	if (flags & MAP_STACK)
++		return VM_GROWSUP;
++
++	return 0;
++}
++#define arch_calc_vm_flag_bits(flags) arch_calc_vm_flag_bits(flags)
++
+ #endif /* __ASM_MMAN_H__ */
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index ab23e61a6f016a..ea57bcc21dc5fe 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -1051,8 +1051,7 @@ ENTRY_CFI(intr_save)		/* for os_hpmc */
+ 	STREG           %r16, PT_ISR(%r29)
+ 	STREG           %r17, PT_IOR(%r29)
+ 
+-#if 0 && defined(CONFIG_64BIT)
+-	/* Revisit when we have 64-bit code above 4Gb */
++#if defined(CONFIG_64BIT)
+ 	b,n		intr_save2
+ 
+ skip_save_ior:
+@@ -1060,8 +1059,7 @@ skip_save_ior:
+ 	 * need to adjust iasq/iaoq here in the same way we adjusted isr/ior
+ 	 * above.
+ 	 */
+-	extrd,u,*	%r8,PSW_W_BIT,1,%r1
+-	cmpib,COND(=),n	1,%r1,intr_save2
++	bb,COND(>=),n	%r8,PSW_W_BIT,intr_save2
+ 	LDREG		PT_IASQ0(%r29), %r16
+ 	LDREG		PT_IAOQ0(%r29), %r17
+ 	/* adjust iasq/iaoq */
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index 1f51aa9c8230cc..0fa81bf1466b15 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -243,10 +243,10 @@ linux_gateway_entry:
+ 
+ #ifdef CONFIG_64BIT
+ 	ldil	L%sys_call_table, %r1
+-	or,=	%r2,%r2,%r2
+-	addil	L%(sys_call_table64-sys_call_table), %r1
++	or,ev	%r2,%r2,%r2
++	ldil	L%sys_call_table64, %r1
+ 	ldo	R%sys_call_table(%r1), %r19
+-	or,=	%r2,%r2,%r2
++	or,ev	%r2,%r2,%r2
+ 	ldo	R%sys_call_table64(%r1), %r19
+ #else
+ 	load32	sys_call_table, %r19
+@@ -379,10 +379,10 @@ tracesys_next:
+ 	extrd,u	%r19,63,1,%r2			/* W hidden in bottom bit */
+ 
+ 	ldil	L%sys_call_table, %r1
+-	or,=	%r2,%r2,%r2
+-	addil	L%(sys_call_table64-sys_call_table), %r1
++	or,ev	%r2,%r2,%r2
++	ldil	L%sys_call_table64, %r1
+ 	ldo	R%sys_call_table(%r1), %r19
+-	or,=	%r2,%r2,%r2
++	or,ev	%r2,%r2,%r2
+ 	ldo	R%sys_call_table64(%r1), %r19
+ #else
+ 	load32	sys_call_table, %r19
+@@ -1327,6 +1327,8 @@ ENTRY(sys_call_table)
+ END(sys_call_table)
+ 
+ #ifdef CONFIG_64BIT
++#undef __SYSCALL_WITH_COMPAT
++#define __SYSCALL_WITH_COMPAT(nr, native, compat)	__SYSCALL(nr, native)
+ 	.align 8
+ ENTRY(sys_call_table64)
+ #include <asm/syscall_table_64.h>    /* 64-bit syscalls */
+diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig
+index 544a65fda77bcb..d39284489aa263 100644
+--- a/arch/powerpc/configs/ppc64_defconfig
++++ b/arch/powerpc/configs/ppc64_defconfig
+@@ -81,7 +81,6 @@ CONFIG_MODULE_SIG_SHA512=y
+ CONFIG_PARTITION_ADVANCED=y
+ CONFIG_BINFMT_MISC=m
+ CONFIG_ZSWAP=y
+-CONFIG_Z3FOLD=y
+ CONFIG_ZSMALLOC=y
+ # CONFIG_SLAB_MERGE_DEFAULT is not set
+ CONFIG_SLAB_FREELIST_RANDOM=y
+diff --git a/arch/powerpc/include/asm/vdso_datapage.h b/arch/powerpc/include/asm/vdso_datapage.h
+index a585c8e538ff0f..939daf6b695ef1 100644
+--- a/arch/powerpc/include/asm/vdso_datapage.h
++++ b/arch/powerpc/include/asm/vdso_datapage.h
+@@ -111,6 +111,21 @@ extern struct vdso_arch_data *vdso_data;
+ 	addi	\ptr, \ptr, (_vdso_datapage - 999b)@l
+ .endm
+ 
++#include <asm/asm-offsets.h>
++#include <asm/page.h>
++
++.macro get_realdatapage ptr scratch
++	get_datapage \ptr
++#ifdef CONFIG_TIME_NS
++	lwz	\scratch, VDSO_CLOCKMODE_OFFSET(\ptr)
++	xoris	\scratch, \scratch, VDSO_CLOCKMODE_TIMENS@h
++	xori	\scratch, \scratch, VDSO_CLOCKMODE_TIMENS@l
++	cntlzw	\scratch, \scratch
++	rlwinm	\scratch, \scratch, PAGE_SHIFT - 5, 1 << PAGE_SHIFT
++	add	\ptr, \ptr, \scratch
++#endif
++.endm
++
+ #endif /* __ASSEMBLY__ */
+ 
+ #endif /* __KERNEL__ */
+diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
+index f029755f9e69af..0c5c0fbf62417c 100644
+--- a/arch/powerpc/kernel/asm-offsets.c
++++ b/arch/powerpc/kernel/asm-offsets.c
+@@ -346,6 +346,8 @@ int main(void)
+ #else
+ 	OFFSET(CFG_SYSCALL_MAP32, vdso_arch_data, syscall_map);
+ #endif
++	OFFSET(VDSO_CLOCKMODE_OFFSET, vdso_arch_data, data[0].clock_mode);
++	DEFINE(VDSO_CLOCKMODE_TIMENS, VDSO_CLOCKMODE_TIMENS);
+ 
+ #ifdef CONFIG_BUG
+ 	DEFINE(BUG_ENTRY_SIZE, sizeof(struct bug_entry));
+diff --git a/arch/powerpc/kernel/vdso/cacheflush.S b/arch/powerpc/kernel/vdso/cacheflush.S
+index 0085ae464dac9c..3b2479bd2f9a1d 100644
+--- a/arch/powerpc/kernel/vdso/cacheflush.S
++++ b/arch/powerpc/kernel/vdso/cacheflush.S
+@@ -30,7 +30,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
+ #ifdef CONFIG_PPC64
+ 	mflr	r12
+   .cfi_register lr,r12
+-	get_datapage	r10
++	get_realdatapage	r10, r11
+ 	mtlr	r12
+   .cfi_restore	lr
+ #endif
+diff --git a/arch/powerpc/kernel/vdso/datapage.S b/arch/powerpc/kernel/vdso/datapage.S
+index db8e167f01667e..2b19b6201a33a8 100644
+--- a/arch/powerpc/kernel/vdso/datapage.S
++++ b/arch/powerpc/kernel/vdso/datapage.S
+@@ -28,7 +28,7 @@ V_FUNCTION_BEGIN(__kernel_get_syscall_map)
+ 	mflr	r12
+   .cfi_register lr,r12
+ 	mr.	r4,r3
+-	get_datapage	r3
++	get_realdatapage	r3, r11
+ 	mtlr	r12
+ #ifdef __powerpc64__
+ 	addi	r3,r3,CFG_SYSCALL_MAP64
+@@ -52,7 +52,7 @@ V_FUNCTION_BEGIN(__kernel_get_tbfreq)
+   .cfi_startproc
+ 	mflr	r12
+   .cfi_register lr,r12
+-	get_datapage	r3
++	get_realdatapage	r3, r11
+ #ifndef __powerpc64__
+ 	lwz	r4,(CFG_TB_TICKS_PER_SEC + 4)(r3)
+ #endif
+diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c
+index 47f8eabd1bee31..9873b916b23704 100644
+--- a/arch/powerpc/platforms/pseries/dlpar.c
++++ b/arch/powerpc/platforms/pseries/dlpar.c
+@@ -334,23 +334,6 @@ int handle_dlpar_errorlog(struct pseries_hp_errorlog *hp_elog)
+ {
+ 	int rc;
+ 
+-	/* pseries error logs are in BE format, convert to cpu type */
+-	switch (hp_elog->id_type) {
+-	case PSERIES_HP_ELOG_ID_DRC_COUNT:
+-		hp_elog->_drc_u.drc_count =
+-				be32_to_cpu(hp_elog->_drc_u.drc_count);
+-		break;
+-	case PSERIES_HP_ELOG_ID_DRC_INDEX:
+-		hp_elog->_drc_u.drc_index =
+-				be32_to_cpu(hp_elog->_drc_u.drc_index);
+-		break;
+-	case PSERIES_HP_ELOG_ID_DRC_IC:
+-		hp_elog->_drc_u.ic.count =
+-				be32_to_cpu(hp_elog->_drc_u.ic.count);
+-		hp_elog->_drc_u.ic.index =
+-				be32_to_cpu(hp_elog->_drc_u.ic.index);
+-	}
+-
+ 	switch (hp_elog->resource) {
+ 	case PSERIES_HP_ELOG_RESOURCE_MEM:
+ 		rc = dlpar_memory(hp_elog);
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index e62835a12d73fc..6838a0fcda296b 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -757,7 +757,7 @@ int dlpar_cpu(struct pseries_hp_errorlog *hp_elog)
+ 	u32 drc_index;
+ 	int rc;
+ 
+-	drc_index = hp_elog->_drc_u.drc_index;
++	drc_index = be32_to_cpu(hp_elog->_drc_u.drc_index);
+ 
+ 	lock_device_hotplug();
+ 
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index 3fe3ddb30c04b4..38dc4f7c9296b2 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -817,16 +817,16 @@ int dlpar_memory(struct pseries_hp_errorlog *hp_elog)
+ 	case PSERIES_HP_ELOG_ACTION_ADD:
+ 		switch (hp_elog->id_type) {
+ 		case PSERIES_HP_ELOG_ID_DRC_COUNT:
+-			count = hp_elog->_drc_u.drc_count;
++			count = be32_to_cpu(hp_elog->_drc_u.drc_count);
+ 			rc = dlpar_memory_add_by_count(count);
+ 			break;
+ 		case PSERIES_HP_ELOG_ID_DRC_INDEX:
+-			drc_index = hp_elog->_drc_u.drc_index;
++			drc_index = be32_to_cpu(hp_elog->_drc_u.drc_index);
+ 			rc = dlpar_memory_add_by_index(drc_index);
+ 			break;
+ 		case PSERIES_HP_ELOG_ID_DRC_IC:
+-			count = hp_elog->_drc_u.ic.count;
+-			drc_index = hp_elog->_drc_u.ic.index;
++			count = be32_to_cpu(hp_elog->_drc_u.ic.count);
++			drc_index = be32_to_cpu(hp_elog->_drc_u.ic.index);
+ 			rc = dlpar_memory_add_by_ic(count, drc_index);
+ 			break;
+ 		default:
+@@ -838,16 +838,16 @@ int dlpar_memory(struct pseries_hp_errorlog *hp_elog)
+ 	case PSERIES_HP_ELOG_ACTION_REMOVE:
+ 		switch (hp_elog->id_type) {
+ 		case PSERIES_HP_ELOG_ID_DRC_COUNT:
+-			count = hp_elog->_drc_u.drc_count;
++			count = be32_to_cpu(hp_elog->_drc_u.drc_count);
+ 			rc = dlpar_memory_remove_by_count(count);
+ 			break;
+ 		case PSERIES_HP_ELOG_ID_DRC_INDEX:
+-			drc_index = hp_elog->_drc_u.drc_index;
++			drc_index = be32_to_cpu(hp_elog->_drc_u.drc_index);
+ 			rc = dlpar_memory_remove_by_index(drc_index);
+ 			break;
+ 		case PSERIES_HP_ELOG_ID_DRC_IC:
+-			count = hp_elog->_drc_u.ic.count;
+-			drc_index = hp_elog->_drc_u.ic.index;
++			count = be32_to_cpu(hp_elog->_drc_u.ic.count);
++			drc_index = be32_to_cpu(hp_elog->_drc_u.ic.index);
+ 			rc = dlpar_memory_remove_by_ic(count, drc_index);
+ 			break;
+ 		default:
+diff --git a/arch/powerpc/platforms/pseries/pmem.c b/arch/powerpc/platforms/pseries/pmem.c
+index 3c290b9ed01b39..0f1d45f32e4a44 100644
+--- a/arch/powerpc/platforms/pseries/pmem.c
++++ b/arch/powerpc/platforms/pseries/pmem.c
+@@ -121,7 +121,7 @@ int dlpar_hp_pmem(struct pseries_hp_errorlog *hp_elog)
+ 		return -EINVAL;
+ 	}
+ 
+-	drc_index = hp_elog->_drc_u.drc_index;
++	drc_index = be32_to_cpu(hp_elog->_drc_u.drc_index);
+ 
+ 	lock_device_hotplug();
+ 
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 006232b67b467d..7d521a31884029 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -312,6 +312,11 @@ config GENERIC_HWEIGHT
+ config FIX_EARLYCON_MEM
+ 	def_bool MMU
+ 
++config ILLEGAL_POINTER_VALUE
++	hex
++	default 0 if 32BIT
++	default 0xdead000000000000 if 64BIT
++
+ config PGTABLE_LEVELS
+ 	int
+ 	default 5 if 64BIT
+@@ -710,8 +715,7 @@ config IRQ_STACKS
+ config THREAD_SIZE_ORDER
+ 	int "Kernel stack size (in power-of-two numbers of page size)" if VMAP_STACK && EXPERT
+ 	range 0 4
+-	default 1 if 32BIT && !KASAN
+-	default 3 if 64BIT && KASAN
++	default 1 if 32BIT
+ 	default 2
+ 	help
+ 	  Specify the Pages of thread stack size (from 4KB to 64KB), which also
+diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
+index 5d473343634b9d..eec9d4394f5ba3 100644
+--- a/arch/riscv/include/asm/thread_info.h
++++ b/arch/riscv/include/asm/thread_info.h
+@@ -12,7 +12,12 @@
+ #include <linux/const.h>
+ 
+ /* thread information allocation */
+-#define THREAD_SIZE_ORDER	CONFIG_THREAD_SIZE_ORDER
++#ifdef CONFIG_KASAN
++#define KASAN_STACK_ORDER	1
++#else
++#define KASAN_STACK_ORDER	0
++#endif
++#define THREAD_SIZE_ORDER	(CONFIG_THREAD_SIZE_ORDER + KASAN_STACK_ORDER)
+ #define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
+ 
+ /*
+diff --git a/arch/x86/crypto/sha256-avx2-asm.S b/arch/x86/crypto/sha256-avx2-asm.S
+index 0ffb072be95615..0bbec1c75cd0be 100644
+--- a/arch/x86/crypto/sha256-avx2-asm.S
++++ b/arch/x86/crypto/sha256-avx2-asm.S
+@@ -592,22 +592,22 @@ SYM_TYPED_FUNC_START(sha256_transform_rorx)
+ 	leaq	K256+0*32(%rip), INP		## reuse INP as scratch reg
+ 	vpaddd	(INP, SRND), X0, XFER
+ 	vmovdqa XFER, 0*32+_XFER(%rsp, SRND)
+-	FOUR_ROUNDS_AND_SCHED	_XFER + 0*32
++	FOUR_ROUNDS_AND_SCHED	(_XFER + 0*32)
+ 
+ 	leaq	K256+1*32(%rip), INP
+ 	vpaddd	(INP, SRND), X0, XFER
+ 	vmovdqa XFER, 1*32+_XFER(%rsp, SRND)
+-	FOUR_ROUNDS_AND_SCHED	_XFER + 1*32
++	FOUR_ROUNDS_AND_SCHED	(_XFER + 1*32)
+ 
+ 	leaq	K256+2*32(%rip), INP
+ 	vpaddd	(INP, SRND), X0, XFER
+ 	vmovdqa XFER, 2*32+_XFER(%rsp, SRND)
+-	FOUR_ROUNDS_AND_SCHED	_XFER + 2*32
++	FOUR_ROUNDS_AND_SCHED	(_XFER + 2*32)
+ 
+ 	leaq	K256+3*32(%rip), INP
+ 	vpaddd	(INP, SRND), X0, XFER
+ 	vmovdqa XFER, 3*32+_XFER(%rsp, SRND)
+-	FOUR_ROUNDS_AND_SCHED	_XFER + 3*32
++	FOUR_ROUNDS_AND_SCHED	(_XFER + 3*32)
+ 
+ 	add	$4*32, SRND
+ 	cmp	$3*4*32, SRND
+@@ -618,12 +618,12 @@ SYM_TYPED_FUNC_START(sha256_transform_rorx)
+ 	leaq	K256+0*32(%rip), INP
+ 	vpaddd	(INP, SRND), X0, XFER
+ 	vmovdqa XFER, 0*32+_XFER(%rsp, SRND)
+-	DO_4ROUNDS	_XFER + 0*32
++	DO_4ROUNDS	(_XFER + 0*32)
+ 
+ 	leaq	K256+1*32(%rip), INP
+ 	vpaddd	(INP, SRND), X1, XFER
+ 	vmovdqa XFER, 1*32+_XFER(%rsp, SRND)
+-	DO_4ROUNDS	_XFER + 1*32
++	DO_4ROUNDS	(_XFER + 1*32)
+ 	add	$2*32, SRND
+ 
+ 	vmovdqa	X2, X0
+@@ -651,8 +651,8 @@ SYM_TYPED_FUNC_START(sha256_transform_rorx)
+ 	xor	SRND, SRND
+ .align 16
+ .Lloop3:
+-	DO_4ROUNDS	 _XFER + 0*32 + 16
+-	DO_4ROUNDS	 _XFER + 1*32 + 16
++	DO_4ROUNDS	(_XFER + 0*32 + 16)
++	DO_4ROUNDS	(_XFER + 1*32 + 16)
+ 	add	$2*32, SRND
+ 	cmp	$4*4*32, SRND
+ 	jb	.Lloop3
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 83d12dd3f831a5..d77a97056844b0 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -41,6 +41,8 @@
+ #include <asm/desc.h>
+ #include <asm/ldt.h>
+ #include <asm/unwind.h>
++#include <asm/uprobes.h>
++#include <asm/ibt.h>
+ 
+ #include "perf_event.h"
+ 
+@@ -2814,6 +2816,46 @@ static unsigned long get_segment_base(unsigned int segment)
+ 	return get_desc_base(desc);
+ }
+ 
++#ifdef CONFIG_UPROBES
++/*
++ * Heuristic-based check if uprobe is installed at the function entry.
++ *
++ * Under assumption of user code being compiled with frame pointers,
++ * `push %rbp/%ebp` is a good indicator that we indeed are.
++ *
++ * Similarly, `endbr64` (assuming 64-bit mode) is also a common pattern.
++ * If we get this wrong, captured stack trace might have one extra bogus
++ * entry, but the rest of stack trace will still be meaningful.
++ */
++static bool is_uprobe_at_func_entry(struct pt_regs *regs)
++{
++	struct arch_uprobe *auprobe;
++
++	if (!current->utask)
++		return false;
++
++	auprobe = current->utask->auprobe;
++	if (!auprobe)
++		return false;
++
++	/* push %rbp/%ebp */
++	if (auprobe->insn[0] == 0x55)
++		return true;
++
++	/* endbr64 (64-bit only) */
++	if (user_64bit_mode(regs) && is_endbr(*(u32 *)auprobe->insn))
++		return true;
++
++	return false;
++}
++
++#else
++static bool is_uprobe_at_func_entry(struct pt_regs *regs)
++{
++	return false;
++}
++#endif /* CONFIG_UPROBES */
++
+ #ifdef CONFIG_IA32_EMULATION
+ 
+ #include <linux/compat.h>
+@@ -2825,6 +2867,7 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
+ 	unsigned long ss_base, cs_base;
+ 	struct stack_frame_ia32 frame;
+ 	const struct stack_frame_ia32 __user *fp;
++	u32 ret_addr;
+ 
+ 	if (user_64bit_mode(regs))
+ 		return 0;
+@@ -2834,6 +2877,12 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
+ 
+ 	fp = compat_ptr(ss_base + regs->bp);
+ 	pagefault_disable();
++
++	/* see perf_callchain_user() below for why we do this */
++	if (is_uprobe_at_func_entry(regs) &&
++	    !get_user(ret_addr, (const u32 __user *)regs->sp))
++		perf_callchain_store(entry, ret_addr);
++
+ 	while (entry->nr < entry->max_stack) {
+ 		if (!valid_user_frame(fp, sizeof(frame)))
+ 			break;
+@@ -2862,6 +2911,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
+ {
+ 	struct stack_frame frame;
+ 	const struct stack_frame __user *fp;
++	unsigned long ret_addr;
+ 
+ 	if (perf_guest_state()) {
+ 		/* TODO: We don't support guest os callchain now */
+@@ -2885,6 +2935,19 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
+ 		return;
+ 
+ 	pagefault_disable();
++
++	/*
++	 * If we are called from uprobe handler, and we are indeed at the very
++	 * entry to user function (which is normally a `push %rbp` instruction,
++	 * under assumption of application being compiled with frame pointers),
++	 * we should read return address from *regs->sp before proceeding
++	 * to follow frame pointers, otherwise we'll skip immediate caller
++	 * as %rbp is not yet setup.
++	 */
++	if (is_uprobe_at_func_entry(regs) &&
++	    !get_user(ret_addr, (const unsigned long __user *)regs->sp))
++		perf_callchain_store(entry, ret_addr);
++
+ 	while (entry->nr < entry->max_stack) {
+ 		if (!valid_user_frame(fp, sizeof(frame)))
+ 			break;
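The entry-point heuristic above only matters when a uprobe fires on the very first instruction of a function, before %rbp has been set up. One way to reproduce that situation and inspect the captured user stacks (the binary ./myprog and function my_func are placeholders; the target is assumed to be built with frame pointers, e.g. -fno-omit-frame-pointer):

    # Place a uprobe at the entry of my_func inside ./myprog.
    perf probe -x ./myprog my_func
    # Sample frame-pointer based user call graphs whenever the probe fires.
    perf record -e probe_myprog:my_func --call-graph fp -- ./myprog
    # With the fix, the immediate caller shows up instead of being skipped.
    perf script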
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 9327eb00e96d09..be2045a18e69b9 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -345,20 +345,12 @@ extern struct apic *apic;
+  * APIC drivers are probed based on how they are listed in the .apicdrivers
+  * section. So the order is important and enforced by the ordering
+  * of different apic driver files in the Makefile.
+- *
+- * For the files having two apic drivers, we use apic_drivers()
+- * to enforce the order with in them.
+  */
+ #define apic_driver(sym)					\
+ 	static const struct apic *__apicdrivers_##sym __used		\
+ 	__aligned(sizeof(struct apic *))			\
+ 	__section(".apicdrivers") = { &sym }
+ 
+-#define apic_drivers(sym1, sym2)					\
+-	static struct apic *__apicdrivers_##sym1##sym2[2] __used	\
+-	__aligned(sizeof(struct apic *))				\
+-	__section(".apicdrivers") = { &sym1, &sym2 }
+-
+ extern struct apic *__apicdrivers[], *__apicdrivers_end[];
+ 
+ /*
+diff --git a/arch/x86/include/asm/fpu/signal.h b/arch/x86/include/asm/fpu/signal.h
+index 611fa41711affd..eccc75bc9c4f3d 100644
+--- a/arch/x86/include/asm/fpu/signal.h
++++ b/arch/x86/include/asm/fpu/signal.h
+@@ -29,7 +29,7 @@ fpu__alloc_mathframe(unsigned long sp, int ia32_frame,
+ 
+ unsigned long fpu__get_fpstate_size(void);
+ 
+-extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size);
++extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size, u32 pkru);
+ extern void fpu__clear_user_states(struct fpu *fpu);
+ extern bool fpu__restore_sig(void __user *buf, int ia32_frame);
+ 
+diff --git a/arch/x86/include/asm/syscall.h b/arch/x86/include/asm/syscall.h
+index 2fc7bc3863ff6f..7c488ff0c7641b 100644
+--- a/arch/x86/include/asm/syscall.h
++++ b/arch/x86/include/asm/syscall.h
+@@ -82,7 +82,12 @@ static inline void syscall_get_arguments(struct task_struct *task,
+ 					 struct pt_regs *regs,
+ 					 unsigned long *args)
+ {
+-	memcpy(args, &regs->bx, 6 * sizeof(args[0]));
++	args[0] = regs->bx;
++	args[1] = regs->cx;
++	args[2] = regs->dx;
++	args[3] = regs->si;
++	args[4] = regs->di;
++	args[5] = regs->bp;
+ }
+ 
+ static inline int syscall_get_arch(struct task_struct *task)
+diff --git a/arch/x86/kernel/apic/apic_flat_64.c b/arch/x86/kernel/apic/apic_flat_64.c
+index f37ad3392fec91..e0308d8c4e6c27 100644
+--- a/arch/x86/kernel/apic/apic_flat_64.c
++++ b/arch/x86/kernel/apic/apic_flat_64.c
+@@ -8,129 +8,25 @@
+  * Martin Bligh, Andi Kleen, James Bottomley, John Stultz, and
+  * James Cleverdon.
+  */
+-#include <linux/cpumask.h>
+ #include <linux/export.h>
+-#include <linux/acpi.h>
+ 
+-#include <asm/jailhouse_para.h>
+ #include <asm/apic.h>
+ 
+ #include "local.h"
+ 
+-static struct apic apic_physflat;
+-static struct apic apic_flat;
+-
+-struct apic *apic __ro_after_init = &apic_flat;
+-EXPORT_SYMBOL_GPL(apic);
+-
+-static int flat_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+-{
+-	return 1;
+-}
+-
+-static void _flat_send_IPI_mask(unsigned long mask, int vector)
+-{
+-	unsigned long flags;
+-
+-	local_irq_save(flags);
+-	__default_send_IPI_dest_field(mask, vector, APIC_DEST_LOGICAL);
+-	local_irq_restore(flags);
+-}
+-
+-static void flat_send_IPI_mask(const struct cpumask *cpumask, int vector)
+-{
+-	unsigned long mask = cpumask_bits(cpumask)[0];
+-
+-	_flat_send_IPI_mask(mask, vector);
+-}
+-
+-static void
+-flat_send_IPI_mask_allbutself(const struct cpumask *cpumask, int vector)
+-{
+-	unsigned long mask = cpumask_bits(cpumask)[0];
+-	int cpu = smp_processor_id();
+-
+-	if (cpu < BITS_PER_LONG)
+-		__clear_bit(cpu, &mask);
+-
+-	_flat_send_IPI_mask(mask, vector);
+-}
+-
+-static u32 flat_get_apic_id(u32 x)
++static u32 physflat_get_apic_id(u32 x)
+ {
+ 	return (x >> 24) & 0xFF;
+ }
+ 
+-static int flat_probe(void)
++static int physflat_probe(void)
+ {
+ 	return 1;
+ }
+ 
+-static struct apic apic_flat __ro_after_init = {
+-	.name				= "flat",
+-	.probe				= flat_probe,
+-	.acpi_madt_oem_check		= flat_acpi_madt_oem_check,
+-
+-	.dest_mode_logical		= true,
+-
+-	.disable_esr			= 0,
+-
+-	.init_apic_ldr			= default_init_apic_ldr,
+-	.cpu_present_to_apicid		= default_cpu_present_to_apicid,
+-
+-	.max_apic_id			= 0xFE,
+-	.get_apic_id			= flat_get_apic_id,
+-
+-	.calc_dest_apicid		= apic_flat_calc_apicid,
+-
+-	.send_IPI			= default_send_IPI_single,
+-	.send_IPI_mask			= flat_send_IPI_mask,
+-	.send_IPI_mask_allbutself	= flat_send_IPI_mask_allbutself,
+-	.send_IPI_allbutself		= default_send_IPI_allbutself,
+-	.send_IPI_all			= default_send_IPI_all,
+-	.send_IPI_self			= default_send_IPI_self,
+-	.nmi_to_offline_cpu		= true,
+-
+-	.read				= native_apic_mem_read,
+-	.write				= native_apic_mem_write,
+-	.eoi				= native_apic_mem_eoi,
+-	.icr_read			= native_apic_icr_read,
+-	.icr_write			= native_apic_icr_write,
+-	.wait_icr_idle			= apic_mem_wait_icr_idle,
+-	.safe_wait_icr_idle		= apic_mem_wait_icr_idle_timeout,
+-};
+-
+-/*
+- * Physflat mode is used when there are more than 8 CPUs on a system.
+- * We cannot use logical delivery in this case because the mask
+- * overflows, so use physical mode.
+- */
+ static int physflat_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+ {
+-#ifdef CONFIG_ACPI
+-	/*
+-	 * Quirk: some x86_64 machines can only use physical APIC mode
+-	 * regardless of how many processors are present (x86_64 ES7000
+-	 * is an example).
+-	 */
+-	if (acpi_gbl_FADT.header.revision >= FADT2_REVISION_ID &&
+-		(acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL)) {
+-		printk(KERN_DEBUG "system APIC only can use physical flat");
+-		return 1;
+-	}
+-
+-	if (!strncmp(oem_id, "IBM", 3) && !strncmp(oem_table_id, "EXA", 3)) {
+-		printk(KERN_DEBUG "IBM Summit detected, will use apic physical");
+-		return 1;
+-	}
+-#endif
+-
+-	return 0;
+-}
+-
+-static int physflat_probe(void)
+-{
+-	return apic == &apic_physflat || num_possible_cpus() > 8 || jailhouse_paravirt();
++	return 1;
+ }
+ 
+ static struct apic apic_physflat __ro_after_init = {
+@@ -146,7 +42,7 @@ static struct apic apic_physflat __ro_after_init = {
+ 	.cpu_present_to_apicid		= default_cpu_present_to_apicid,
+ 
+ 	.max_apic_id			= 0xFE,
+-	.get_apic_id			= flat_get_apic_id,
++	.get_apic_id			= physflat_get_apic_id,
+ 
+ 	.calc_dest_apicid		= apic_default_calc_apicid,
+ 
+@@ -166,8 +62,7 @@ static struct apic apic_physflat __ro_after_init = {
+ 	.wait_icr_idle			= apic_mem_wait_icr_idle,
+ 	.safe_wait_icr_idle		= apic_mem_wait_icr_idle_timeout,
+ };
++apic_driver(apic_physflat);
+ 
+-/*
+- * We need to check for physflat first, so this order is important.
+- */
+-apic_drivers(apic_physflat, apic_flat);
++struct apic *apic __ro_after_init = &apic_physflat;
++EXPORT_SYMBOL_GPL(apic);
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 477b740b2f267b..d1ec1dcb637af0 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -352,27 +352,26 @@ static void ioapic_mask_entry(int apic, int pin)
+  * shared ISA-space IRQs, so we have to support them. We are super
+  * fast in the common case, and fast for shared ISA-space IRQs.
+  */
+-static int __add_pin_to_irq_node(struct mp_chip_data *data,
+-				 int node, int apic, int pin)
++static bool add_pin_to_irq_node(struct mp_chip_data *data, int node, int apic, int pin)
+ {
+ 	struct irq_pin_list *entry;
+ 
+-	/* don't allow duplicates */
+-	for_each_irq_pin(entry, data->irq_2_pin)
++	/* Don't allow duplicates */
++	for_each_irq_pin(entry, data->irq_2_pin) {
+ 		if (entry->apic == apic && entry->pin == pin)
+-			return 0;
++			return true;
++	}
+ 
+ 	entry = kzalloc_node(sizeof(struct irq_pin_list), GFP_ATOMIC, node);
+ 	if (!entry) {
+-		pr_err("can not alloc irq_pin_list (%d,%d,%d)\n",
+-		       node, apic, pin);
+-		return -ENOMEM;
++		pr_err("Cannot allocate irq_pin_list (%d,%d,%d)\n", node, apic, pin);
++		return false;
+ 	}
++
+ 	entry->apic = apic;
+ 	entry->pin = pin;
+ 	list_add_tail(&entry->list, &data->irq_2_pin);
+-
+-	return 0;
++	return true;
+ }
+ 
+ static void __remove_pin_from_irq(struct mp_chip_data *data, int apic, int pin)
+@@ -387,13 +386,6 @@ static void __remove_pin_from_irq(struct mp_chip_data *data, int apic, int pin)
+ 		}
+ }
+ 
+-static void add_pin_to_irq_node(struct mp_chip_data *data,
+-				int node, int apic, int pin)
+-{
+-	if (__add_pin_to_irq_node(data, node, apic, pin))
+-		panic("IO-APIC: failed to add irq-pin. Can not proceed\n");
+-}
+-
+ /*
+  * Reroute an IRQ to a different pin.
+  */
+@@ -1002,8 +994,7 @@ static int alloc_isa_irq_from_domain(struct irq_domain *domain,
+ 	if (irq_data && irq_data->parent_data) {
+ 		if (!mp_check_pin_attr(irq, info))
+ 			return -EBUSY;
+-		if (__add_pin_to_irq_node(irq_data->chip_data, node, ioapic,
+-					  info->ioapic.pin))
++		if (!add_pin_to_irq_node(irq_data->chip_data, node, ioapic, info->ioapic.pin))
+ 			return -ENOMEM;
+ 	} else {
+ 		info->flags |= X86_IRQ_ALLOC_LEGACY;
+@@ -3017,10 +3008,8 @@ int mp_irqdomain_alloc(struct irq_domain *domain, unsigned int virq,
+ 		return -ENOMEM;
+ 
+ 	ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, info);
+-	if (ret < 0) {
+-		kfree(data);
+-		return ret;
+-	}
++	if (ret < 0)
++		goto free_data;
+ 
+ 	INIT_LIST_HEAD(&data->irq_2_pin);
+ 	irq_data->hwirq = info->ioapic.pin;
+@@ -3029,7 +3018,10 @@ int mp_irqdomain_alloc(struct irq_domain *domain, unsigned int virq,
+ 	irq_data->chip_data = data;
+ 	mp_irqdomain_get_attr(mp_pin_to_gsi(ioapic, pin), data, info);
+ 
+-	add_pin_to_irq_node(data, ioapic_alloc_attr_node(info), ioapic, pin);
++	if (!add_pin_to_irq_node(data, ioapic_alloc_attr_node(info), ioapic, pin)) {
++		ret = -ENOMEM;
++		goto free_irqs;
++	}
+ 
+ 	mp_preconfigure_entry(data);
+ 	mp_register_handler(virq, data->is_level);
+@@ -3044,6 +3036,12 @@ int mp_irqdomain_alloc(struct irq_domain *domain, unsigned int virq,
+ 		    ioapic, mpc_ioapic_id(ioapic), pin, virq,
+ 		    data->is_level, data->active_low);
+ 	return 0;
++
++free_irqs:
++	irq_domain_free_irqs_parent(domain, virq, nr_irqs);
++free_data:
++	kfree(data);
++	return ret;
+ }
+ 
+ void mp_irqdomain_free(struct irq_domain *domain, unsigned int virq,
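The io_apic.c changes above convert add_pin_to_irq_node() to return bool and give mp_irqdomain_alloc() a goto-based unwind, so each failure releases only what was already set up. A stand-alone sketch of that unwind idiom, with purely illustrative names (not kernel APIs):

#include <stdlib.h>

struct ctx {
        int *a;
        int *b;
};

/* Acquire two resources; on failure, undo in reverse order via labels. */
static int ctx_setup(struct ctx *c)
{
        c->a = malloc(sizeof(*c->a));
        if (!c->a)
                return -1;

        c->b = malloc(sizeof(*c->b));
        if (!c->b)
                goto free_a;            /* second step failed: undo the first */

        return 0;

free_a:
        free(c->a);
        c->a = NULL;
        return -1;
}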
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index b6f927f6c567e1..47c84503ad9be7 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -2545,10 +2545,9 @@ static void __init srso_select_mitigation(void)
+ {
+ 	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
+ 
+-	if (cpu_mitigations_off())
+-		return;
+-
+-	if (!boot_cpu_has_bug(X86_BUG_SRSO)) {
++	if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
++	    cpu_mitigations_off() ||
++	    srso_cmd == SRSO_CMD_OFF) {
+ 		if (boot_cpu_has(X86_FEATURE_SBPB))
+ 			x86_pred_cmd = PRED_CMD_SBPB;
+ 		return;
+@@ -2579,11 +2578,6 @@ static void __init srso_select_mitigation(void)
+ 	}
+ 
+ 	switch (srso_cmd) {
+-	case SRSO_CMD_OFF:
+-		if (boot_cpu_has(X86_FEATURE_SBPB))
+-			x86_pred_cmd = PRED_CMD_SBPB;
+-		return;
+-
+ 	case SRSO_CMD_MICROCODE:
+ 		if (has_microcode) {
+ 			srso_mitigation = SRSO_MITIGATION_MICROCODE;
+@@ -2637,6 +2631,8 @@ static void __init srso_select_mitigation(void)
+ 			pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
+                 }
+ 		break;
++	default:
++		break;
+ 	}
+ 
+ out:
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index d4e539d4e158cc..be307c9ef263d8 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1165,8 +1165,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ 
+ 	VULNWL_INTEL(INTEL_CORE_YONAH,		NO_SSB),
+ 
+-	VULNWL_INTEL(INTEL_ATOM_AIRMONT_MID,	NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT),
+-	VULNWL_INTEL(INTEL_ATOM_AIRMONT_NP,	NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
++	VULNWL_INTEL(INTEL_ATOM_AIRMONT_MID,	NO_SSB | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | MSBDS_ONLY),
++	VULNWL_INTEL(INTEL_ATOM_AIRMONT_NP,	NO_SSB | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT),
+ 
+ 	VULNWL_INTEL(INTEL_ATOM_GOLDMONT,	NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+ 	VULNWL_INTEL(INTEL_ATOM_GOLDMONT_D,	NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 247f2225aa9f36..2b3b9e140dd41b 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -156,7 +156,7 @@ static inline bool save_xstate_epilog(void __user *buf, int ia32_frame,
+ 	return !err;
+ }
+ 
+-static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf)
++static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf, u32 pkru)
+ {
+ 	if (use_xsave())
+ 		return xsave_to_user_sigframe(buf);
+@@ -185,7 +185,7 @@ static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf)
+  * For [f]xsave state, update the SW reserved fields in the [f]xsave frame
+  * indicating the absence/presence of the extended state to the user.
+  */
+-bool copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size)
++bool copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size, u32 pkru)
+ {
+ 	struct task_struct *tsk = current;
+ 	struct fpstate *fpstate = tsk->thread.fpu.fpstate;
+@@ -228,7 +228,7 @@ bool copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size)
+ 		fpregs_restore_userregs();
+ 
+ 	pagefault_disable();
+-	ret = copy_fpregs_to_sigframe(buf_fx);
++	ret = copy_fpregs_to_sigframe(buf_fx, pkru);
+ 	pagefault_enable();
+ 	fpregs_unlock();
+ 
+diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
+index cc0f7f70b17ba3..9c9ac606893e99 100644
+--- a/arch/x86/kernel/machine_kexec_64.c
++++ b/arch/x86/kernel/machine_kexec_64.c
+@@ -28,6 +28,7 @@
+ #include <asm/setup.h>
+ #include <asm/set_memory.h>
+ #include <asm/cpu.h>
++#include <asm/efi.h>
+ 
+ #ifdef CONFIG_ACPI
+ /*
+@@ -87,6 +88,8 @@ map_efi_systab(struct x86_mapping_info *info, pgd_t *level4p)
+ {
+ #ifdef CONFIG_EFI
+ 	unsigned long mstart, mend;
++	void *kaddr;
++	int ret;
+ 
+ 	if (!efi_enabled(EFI_BOOT))
+ 		return 0;
+@@ -102,6 +105,30 @@ map_efi_systab(struct x86_mapping_info *info, pgd_t *level4p)
+ 	if (!mstart)
+ 		return 0;
+ 
++	ret = kernel_ident_mapping_init(info, level4p, mstart, mend);
++	if (ret)
++		return ret;
++
++	kaddr = memremap(mstart, mend - mstart, MEMREMAP_WB);
++	if (!kaddr) {
++		pr_err("Could not map UEFI system table\n");
++		return -ENOMEM;
++	}
++
++	mstart = efi_config_table;
++
++	if (efi_enabled(EFI_64BIT)) {
++		efi_system_table_64_t *stbl = (efi_system_table_64_t *)kaddr;
++
++		mend = mstart + sizeof(efi_config_table_64_t) * stbl->nr_tables;
++	} else {
++		efi_system_table_32_t *stbl = (efi_system_table_32_t *)kaddr;
++
++		mend = mstart + sizeof(efi_config_table_32_t) * stbl->nr_tables;
++	}
++
++	memunmap(kaddr);
++
+ 	return kernel_ident_mapping_init(info, level4p, mstart, mend);
+ #endif
+ 	return 0;
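The machine_kexec_64.c hunk maps the EFI system table with memremap() so it can read how many configuration-table entries need to be covered before building the identity mapping. A minimal sketch of that memremap()/memunmap() pattern, with a hypothetical table layout:

#include <linux/io.h>
#include <linux/errno.h>

/* Read the first 64-bit field of a firmware table at a known physical address. */
static int peek_table_entries(phys_addr_t phys, size_t len, u64 *nr_entries)
{
        void *kaddr = memremap(phys, len, MEMREMAP_WB);

        if (!kaddr)
                return -ENOMEM;

        *nr_entries = *(u64 *)kaddr;    /* assumed layout: count is the first field */
        memunmap(kaddr);
        return 0;
}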
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index 31b6f5dddfc274..1f1e8e0ac5a341 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -84,6 +84,7 @@ get_sigframe(struct ksignal *ksig, struct pt_regs *regs, size_t frame_size,
+ 	unsigned long math_size = 0;
+ 	unsigned long sp = regs->sp;
+ 	unsigned long buf_fx = 0;
++	u32 pkru = read_pkru();
+ 
+ 	/* redzone */
+ 	if (!ia32_frame)
+@@ -139,7 +140,7 @@ get_sigframe(struct ksignal *ksig, struct pt_regs *regs, size_t frame_size,
+ 	}
+ 
+ 	/* save i387 and extended state */
+-	if (!copy_fpstate_to_sigframe(*fpstate, (void __user *)buf_fx, math_size))
++	if (!copy_fpstate_to_sigframe(*fpstate, (void __user *)buf_fx, math_size, pkru))
+ 		return (void __user *)-1L;
+ 
+ 	return (void __user *)sp;
+diff --git a/arch/x86/kernel/signal_64.c b/arch/x86/kernel/signal_64.c
+index 8a94053c544465..ee9453891901b7 100644
+--- a/arch/x86/kernel/signal_64.c
++++ b/arch/x86/kernel/signal_64.c
+@@ -260,13 +260,13 @@ SYSCALL_DEFINE0(rt_sigreturn)
+ 
+ 	set_current_blocked(&set);
+ 
+-	if (!restore_sigcontext(regs, &frame->uc.uc_mcontext, uc_flags))
++	if (restore_altstack(&frame->uc.uc_stack))
+ 		goto badframe;
+ 
+-	if (restore_signal_shadow_stack())
++	if (!restore_sigcontext(regs, &frame->uc.uc_mcontext, uc_flags))
+ 		goto badframe;
+ 
+-	if (restore_altstack(&frame->uc.uc_stack))
++	if (restore_signal_shadow_stack())
+ 		goto badframe;
+ 
+ 	return regs->ax;
+diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
+index 968d7005f4a724..a204a332c71fc5 100644
+--- a/arch/x86/mm/ident_map.c
++++ b/arch/x86/mm/ident_map.c
+@@ -26,18 +26,31 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
+ 	for (; addr < end; addr = next) {
+ 		pud_t *pud = pud_page + pud_index(addr);
+ 		pmd_t *pmd;
++		bool use_gbpage;
+ 
+ 		next = (addr & PUD_MASK) + PUD_SIZE;
+ 		if (next > end)
+ 			next = end;
+ 
+-		if (info->direct_gbpages) {
+-			pud_t pudval;
++		/* if this is already a gbpage, this portion is already mapped */
++		if (pud_leaf(*pud))
++			continue;
++
++		/* Is using a gbpage allowed? */
++		use_gbpage = info->direct_gbpages;
+ 
+-			if (pud_present(*pud))
+-				continue;
++		/* Don't use gbpage if it maps more than the requested region. */
++		/* at the beginning: */
++		use_gbpage &= ((addr & ~PUD_MASK) == 0);
++		/* ... or at the end: */
++		use_gbpage &= ((next & ~PUD_MASK) == 0);
++
++		/* Never overwrite existing mappings */
++		use_gbpage &= !pud_present(*pud);
++
++		if (use_gbpage) {
++			pud_t pudval;
+ 
+-			addr &= PUD_MASK;
+ 			pudval = __pud((addr - info->offset) | info->page_flag);
+ 			set_pud(pud, pudval);
+ 			continue;
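The ident_map.c rework above only installs a 1 GiB (gbpage) mapping when the requested range covers the whole PUD-sized slot and nothing is mapped there yet. The same eligibility check as a stand-alone helper (PUD_SIZE spelled out for illustration):

#include <stdbool.h>
#include <stdint.h>

#define PUD_SIZE (1ULL << 30)
#define PUD_MASK (~(PUD_SIZE - 1))

static bool may_use_gbpage(uint64_t addr, uint64_t next, bool allowed,
                           bool already_mapped)
{
        bool use = allowed;

        use &= (addr & ~PUD_MASK) == 0;   /* range starts on a 1 GiB boundary */
        use &= (next & ~PUD_MASK) == 0;   /* ...and ends on one               */
        use &= !already_mapped;           /* never overwrite an existing PUD  */
        return use;
}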
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 690ca99dfaca67..5a6098a3db57e0 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2076,7 +2076,7 @@ static void ioc_forgive_debts(struct ioc *ioc, u64 usage_us_sum, int nr_debtors,
+ 			      struct ioc_now *now)
+ {
+ 	struct ioc_gq *iocg;
+-	u64 dur, usage_pct, nr_cycles;
++	u64 dur, usage_pct, nr_cycles, nr_cycles_shift;
+ 
+ 	/* if no debtor, reset the cycle */
+ 	if (!nr_debtors) {
+@@ -2138,10 +2138,12 @@ static void ioc_forgive_debts(struct ioc *ioc, u64 usage_us_sum, int nr_debtors,
+ 		old_debt = iocg->abs_vdebt;
+ 		old_delay = iocg->delay;
+ 
++		nr_cycles_shift = min_t(u64, nr_cycles, BITS_PER_LONG - 1);
+ 		if (iocg->abs_vdebt)
+-			iocg->abs_vdebt = iocg->abs_vdebt >> nr_cycles ?: 1;
++			iocg->abs_vdebt = iocg->abs_vdebt >> nr_cycles_shift ?: 1;
++
+ 		if (iocg->delay)
+-			iocg->delay = iocg->delay >> nr_cycles ?: 1;
++			iocg->delay = iocg->delay >> nr_cycles_shift ?: 1;
+ 
+ 		iocg_kick_waitq(iocg, true, now);
+ 
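The blk-iocost.c fix clamps the shift count because shifting a 64-bit value by 64 or more is undefined behaviour in C; without the clamp a long forgiveness period could produce garbage instead of a fully decayed debt. The pattern in isolation (the `?:` form is the GNU extension the driver itself uses):

#include <stdint.h>

static uint64_t decay_debt(uint64_t debt, uint64_t nr_cycles)
{
        uint64_t shift = nr_cycles < 63 ? nr_cycles : 63;  /* clamp below the word size */

        return (debt >> shift) ?: 1;    /* never decay all the way to zero */
}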
+diff --git a/block/ioctl.c b/block/ioctl.c
+index d570e16958961e..4515d4679eefd6 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -126,7 +126,7 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
+ 		return -EINVAL;
+ 
+ 	filemap_invalidate_lock(bdev->bd_mapping);
+-	err = truncate_bdev_range(bdev, mode, start, start + len - 1);
++	err = truncate_bdev_range(bdev, mode, start, end - 1);
+ 	if (err)
+ 		goto fail;
+ 
+@@ -163,7 +163,7 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
+ static int blk_ioctl_secure_erase(struct block_device *bdev, blk_mode_t mode,
+ 		void __user *argp)
+ {
+-	uint64_t start, len;
++	uint64_t start, len, end;
+ 	uint64_t range[2];
+ 	int err;
+ 
+@@ -178,11 +178,12 @@ static int blk_ioctl_secure_erase(struct block_device *bdev, blk_mode_t mode,
+ 	len = range[1];
+ 	if ((start & 511) || (len & 511))
+ 		return -EINVAL;
+-	if (start + len > bdev_nr_bytes(bdev))
++	if (check_add_overflow(start, len, &end) ||
++	    end > bdev_nr_bytes(bdev))
+ 		return -EINVAL;
+ 
+ 	filemap_invalidate_lock(bdev->bd_mapping);
+-	err = truncate_bdev_range(bdev, mode, start, start + len - 1);
++	err = truncate_bdev_range(bdev, mode, start, end - 1);
+ 	if (!err)
+ 		err = blkdev_issue_secure_erase(bdev, start >> 9, len >> 9,
+ 						GFP_KERNEL);
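The ioctl.c change validates secure-erase ranges with check_add_overflow() so a start + len that wraps past U64_MAX can no longer slip through the size check. Minimal sketch of that validation (limit stands in for bdev_nr_bytes()):

#include <linux/overflow.h>
#include <linux/types.h>
#include <linux/errno.h>

static int validate_range(u64 start, u64 len, u64 limit)
{
        u64 end;

        if (check_add_overflow(start, len, &end))
                return -EINVAL;         /* start + len wrapped around */
        if (end > limit)
                return -EINVAL;         /* range runs past the device */
        return 0;
}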
+diff --git a/crypto/simd.c b/crypto/simd.c
+index edaa479a1ec5e5..d109866641a265 100644
+--- a/crypto/simd.c
++++ b/crypto/simd.c
+@@ -136,27 +136,19 @@ static int simd_skcipher_init(struct crypto_skcipher *tfm)
+ 	return 0;
+ }
+ 
+-struct simd_skcipher_alg *simd_skcipher_create_compat(const char *algname,
++struct simd_skcipher_alg *simd_skcipher_create_compat(struct skcipher_alg *ialg,
++						      const char *algname,
+ 						      const char *drvname,
+ 						      const char *basename)
+ {
+ 	struct simd_skcipher_alg *salg;
+-	struct crypto_skcipher *tfm;
+-	struct skcipher_alg *ialg;
+ 	struct skcipher_alg *alg;
+ 	int err;
+ 
+-	tfm = crypto_alloc_skcipher(basename, CRYPTO_ALG_INTERNAL,
+-				    CRYPTO_ALG_INTERNAL | CRYPTO_ALG_ASYNC);
+-	if (IS_ERR(tfm))
+-		return ERR_CAST(tfm);
+-
+-	ialg = crypto_skcipher_alg(tfm);
+-
+ 	salg = kzalloc(sizeof(*salg), GFP_KERNEL);
+ 	if (!salg) {
+ 		salg = ERR_PTR(-ENOMEM);
+-		goto out_put_tfm;
++		goto out;
+ 	}
+ 
+ 	salg->ialg_name = basename;
+@@ -195,30 +187,16 @@ struct simd_skcipher_alg *simd_skcipher_create_compat(const char *algname,
+ 	if (err)
+ 		goto out_free_salg;
+ 
+-out_put_tfm:
+-	crypto_free_skcipher(tfm);
++out:
+ 	return salg;
+ 
+ out_free_salg:
+ 	kfree(salg);
+ 	salg = ERR_PTR(err);
+-	goto out_put_tfm;
++	goto out;
+ }
+ EXPORT_SYMBOL_GPL(simd_skcipher_create_compat);
+ 
+-struct simd_skcipher_alg *simd_skcipher_create(const char *algname,
+-					       const char *basename)
+-{
+-	char drvname[CRYPTO_MAX_ALG_NAME];
+-
+-	if (snprintf(drvname, CRYPTO_MAX_ALG_NAME, "simd-%s", basename) >=
+-	    CRYPTO_MAX_ALG_NAME)
+-		return ERR_PTR(-ENAMETOOLONG);
+-
+-	return simd_skcipher_create_compat(algname, drvname, basename);
+-}
+-EXPORT_SYMBOL_GPL(simd_skcipher_create);
+-
+ void simd_skcipher_free(struct simd_skcipher_alg *salg)
+ {
+ 	crypto_unregister_skcipher(&salg->alg);
+@@ -246,7 +224,7 @@ int simd_register_skciphers_compat(struct skcipher_alg *algs, int count,
+ 		algname = algs[i].base.cra_name + 2;
+ 		drvname = algs[i].base.cra_driver_name + 2;
+ 		basename = algs[i].base.cra_driver_name;
+-		simd = simd_skcipher_create_compat(algname, drvname, basename);
++		simd = simd_skcipher_create_compat(algs + i, algname, drvname, basename);
+ 		err = PTR_ERR(simd);
+ 		if (IS_ERR(simd))
+ 			goto err_unregister;
+@@ -383,27 +361,19 @@ static int simd_aead_init(struct crypto_aead *tfm)
+ 	return 0;
+ }
+ 
+-struct simd_aead_alg *simd_aead_create_compat(const char *algname,
+-					      const char *drvname,
+-					      const char *basename)
++static struct simd_aead_alg *simd_aead_create_compat(struct aead_alg *ialg,
++						     const char *algname,
++						     const char *drvname,
++						     const char *basename)
+ {
+ 	struct simd_aead_alg *salg;
+-	struct crypto_aead *tfm;
+-	struct aead_alg *ialg;
+ 	struct aead_alg *alg;
+ 	int err;
+ 
+-	tfm = crypto_alloc_aead(basename, CRYPTO_ALG_INTERNAL,
+-				CRYPTO_ALG_INTERNAL | CRYPTO_ALG_ASYNC);
+-	if (IS_ERR(tfm))
+-		return ERR_CAST(tfm);
+-
+-	ialg = crypto_aead_alg(tfm);
+-
+ 	salg = kzalloc(sizeof(*salg), GFP_KERNEL);
+ 	if (!salg) {
+ 		salg = ERR_PTR(-ENOMEM);
+-		goto out_put_tfm;
++		goto out;
+ 	}
+ 
+ 	salg->ialg_name = basename;
+@@ -442,36 +412,20 @@ struct simd_aead_alg *simd_aead_create_compat(const char *algname,
+ 	if (err)
+ 		goto out_free_salg;
+ 
+-out_put_tfm:
+-	crypto_free_aead(tfm);
++out:
+ 	return salg;
+ 
+ out_free_salg:
+ 	kfree(salg);
+ 	salg = ERR_PTR(err);
+-	goto out_put_tfm;
+-}
+-EXPORT_SYMBOL_GPL(simd_aead_create_compat);
+-
+-struct simd_aead_alg *simd_aead_create(const char *algname,
+-				       const char *basename)
+-{
+-	char drvname[CRYPTO_MAX_ALG_NAME];
+-
+-	if (snprintf(drvname, CRYPTO_MAX_ALG_NAME, "simd-%s", basename) >=
+-	    CRYPTO_MAX_ALG_NAME)
+-		return ERR_PTR(-ENAMETOOLONG);
+-
+-	return simd_aead_create_compat(algname, drvname, basename);
++	goto out;
+ }
+-EXPORT_SYMBOL_GPL(simd_aead_create);
+ 
+-void simd_aead_free(struct simd_aead_alg *salg)
++static void simd_aead_free(struct simd_aead_alg *salg)
+ {
+ 	crypto_unregister_aead(&salg->alg);
+ 	kfree(salg);
+ }
+-EXPORT_SYMBOL_GPL(simd_aead_free);
+ 
+ int simd_register_aeads_compat(struct aead_alg *algs, int count,
+ 			       struct simd_aead_alg **simd_algs)
+@@ -493,7 +447,7 @@ int simd_register_aeads_compat(struct aead_alg *algs, int count,
+ 		algname = algs[i].base.cra_name + 2;
+ 		drvname = algs[i].base.cra_driver_name + 2;
+ 		basename = algs[i].base.cra_driver_name;
+-		simd = simd_aead_create_compat(algname, drvname, basename);
++		simd = simd_aead_create_compat(algs + i, algname, drvname, basename);
+ 		err = PTR_ERR(simd);
+ 		if (IS_ERR(simd))
+ 			goto err_unregister;
+diff --git a/drivers/accel/ivpu/ivpu_fw.c b/drivers/accel/ivpu/ivpu_fw.c
+index 1457300828bf15..ef717802a3c8cf 100644
+--- a/drivers/accel/ivpu/ivpu_fw.c
++++ b/drivers/accel/ivpu/ivpu_fw.c
+@@ -58,6 +58,10 @@ static struct {
+ 	{ IVPU_HW_40XX, "intel/vpu/vpu_40xx_v0.0.bin" },
+ };
+ 
++/* Production fw_names from the table above */
++MODULE_FIRMWARE("intel/vpu/vpu_37xx_v0.0.bin");
++MODULE_FIRMWARE("intel/vpu/vpu_40xx_v0.0.bin");
++
+ static int ivpu_fw_request(struct ivpu_device *vdev)
+ {
+ 	int ret = -ENOENT;
+diff --git a/drivers/acpi/acpi_pad.c b/drivers/acpi/acpi_pad.c
+index bd1ad07f029073..e84509b19f94dc 100644
+--- a/drivers/acpi/acpi_pad.c
++++ b/drivers/acpi/acpi_pad.c
+@@ -132,8 +132,10 @@ static void exit_round_robin(unsigned int tsk_index)
+ {
+ 	struct cpumask *pad_busy_cpus = to_cpumask(pad_busy_cpus_bits);
+ 
+-	cpumask_clear_cpu(tsk_in_cpu[tsk_index], pad_busy_cpus);
+-	tsk_in_cpu[tsk_index] = -1;
++	if (tsk_in_cpu[tsk_index] != -1) {
++		cpumask_clear_cpu(tsk_in_cpu[tsk_index], pad_busy_cpus);
++		tsk_in_cpu[tsk_index] = -1;
++	}
+ }
+ 
+ static unsigned int idle_pct = 5; /* percentage */
+diff --git a/drivers/acpi/acpica/dbconvert.c b/drivers/acpi/acpica/dbconvert.c
+index 2b84ac093698a3..8dbab693204998 100644
+--- a/drivers/acpi/acpica/dbconvert.c
++++ b/drivers/acpi/acpica/dbconvert.c
+@@ -174,6 +174,8 @@ acpi_status acpi_db_convert_to_package(char *string, union acpi_object *object)
+ 	elements =
+ 	    ACPI_ALLOCATE_ZEROED(DB_DEFAULT_PKG_ELEMENTS *
+ 				 sizeof(union acpi_object));
++	if (!elements)
++		return (AE_NO_MEMORY);
+ 
+ 	this = string;
+ 	for (i = 0; i < (DB_DEFAULT_PKG_ELEMENTS - 1); i++) {
+diff --git a/drivers/acpi/acpica/exprep.c b/drivers/acpi/acpica/exprep.c
+index 08196fa17080e2..82b1fa2d201fed 100644
+--- a/drivers/acpi/acpica/exprep.c
++++ b/drivers/acpi/acpica/exprep.c
+@@ -437,6 +437,9 @@ acpi_status acpi_ex_prep_field_value(struct acpi_create_field_info *info)
+ 
+ 		if (info->connection_node) {
+ 			second_desc = info->connection_node->object;
++			if (second_desc == NULL) {
++				break;
++			}
+ 			if (!(second_desc->common.flags & AOPOBJ_DATA_VALID)) {
+ 				status =
+ 				    acpi_ds_get_buffer_arguments(second_desc);
+diff --git a/drivers/acpi/acpica/psargs.c b/drivers/acpi/acpica/psargs.c
+index 422c074ed2897b..28582adfc0acaf 100644
+--- a/drivers/acpi/acpica/psargs.c
++++ b/drivers/acpi/acpica/psargs.c
+@@ -25,6 +25,8 @@ acpi_ps_get_next_package_length(struct acpi_parse_state *parser_state);
+ static union acpi_parse_object *acpi_ps_get_next_field(struct acpi_parse_state
+ 						       *parser_state);
+ 
++static void acpi_ps_free_field_list(union acpi_parse_object *start);
++
+ /*******************************************************************************
+  *
+  * FUNCTION:    acpi_ps_get_next_package_length
+@@ -683,6 +685,39 @@ static union acpi_parse_object *acpi_ps_get_next_field(struct acpi_parse_state
+ 	return_PTR(field);
+ }
+ 
++/*******************************************************************************
++ *
++ * FUNCTION:    acpi_ps_free_field_list
++ *
++ * PARAMETERS:  start               - First Op in field list
++ *
++ * RETURN:      None.
++ *
++ * DESCRIPTION: Free all Op objects inside a field list.
++ *
++ ******************************************************************************/
++
++static void acpi_ps_free_field_list(union acpi_parse_object *start)
++{
++	union acpi_parse_object *cur = start;
++	union acpi_parse_object *next;
++	union acpi_parse_object *arg;
++
++	while (cur) {
++		next = cur->common.next;
++
++		/* AML_INT_CONNECTION_OP can have a single argument */
++
++		arg = acpi_ps_get_arg(cur, 0);
++		if (arg) {
++			acpi_ps_free_op(arg);
++		}
++
++		acpi_ps_free_op(cur);
++		cur = next;
++	}
++}
++
+ /*******************************************************************************
+  *
+  * FUNCTION:    acpi_ps_get_next_arg
+@@ -751,6 +786,10 @@ acpi_ps_get_next_arg(struct acpi_walk_state *walk_state,
+ 			while (parser_state->aml < parser_state->pkg_end) {
+ 				field = acpi_ps_get_next_field(parser_state);
+ 				if (!field) {
++					if (arg) {
++						acpi_ps_free_field_list(arg);
++					}
++
+ 					return_ACPI_STATUS(AE_NO_MEMORY);
+ 				}
+ 
+@@ -820,6 +859,10 @@ acpi_ps_get_next_arg(struct acpi_walk_state *walk_state,
+ 			    acpi_ps_get_next_namepath(walk_state, parser_state,
+ 						      arg,
+ 						      ACPI_NOT_METHOD_CALL);
++			if (ACPI_FAILURE(status)) {
++				acpi_ps_free_op(arg);
++				return_ACPI_STATUS(status);
++			}
+ 		} else {
+ 			/* Single complex argument, nothing returned */
+ 
+@@ -854,6 +897,10 @@ acpi_ps_get_next_arg(struct acpi_walk_state *walk_state,
+ 			    acpi_ps_get_next_namepath(walk_state, parser_state,
+ 						      arg,
+ 						      ACPI_POSSIBLE_METHOD_CALL);
++			if (ACPI_FAILURE(status)) {
++				acpi_ps_free_op(arg);
++				return_ACPI_STATUS(status);
++			}
+ 
+ 			if (arg->common.aml_opcode == AML_INT_METHODCALL_OP) {
+ 
+diff --git a/drivers/acpi/apei/einj-cxl.c b/drivers/acpi/apei/einj-cxl.c
+index 8b8be0c90709f9..d64e2713aae4bf 100644
+--- a/drivers/acpi/apei/einj-cxl.c
++++ b/drivers/acpi/apei/einj-cxl.c
+@@ -63,7 +63,7 @@ static int cxl_dport_get_sbdf(struct pci_dev *dport_dev, u64 *sbdf)
+ 		seg = bridge->domain_nr;
+ 
+ 	bus = pbus->number;
+-	*sbdf = (seg << 24) | (bus << 16) | dport_dev->devfn;
++	*sbdf = (seg << 24) | (bus << 16) | (dport_dev->devfn << 8);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index 44ca989f164661..916cdf44be8937 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -703,28 +703,35 @@ static LIST_HEAD(acpi_battery_list);
+ static LIST_HEAD(battery_hook_list);
+ static DEFINE_MUTEX(hook_mutex);
+ 
+-static void __battery_hook_unregister(struct acpi_battery_hook *hook, int lock)
++static void battery_hook_unregister_unlocked(struct acpi_battery_hook *hook)
+ {
+ 	struct acpi_battery *battery;
++
+ 	/*
+ 	 * In order to remove a hook, we first need to
+ 	 * de-register all the batteries that are registered.
+ 	 */
+-	if (lock)
+-		mutex_lock(&hook_mutex);
+ 	list_for_each_entry(battery, &acpi_battery_list, list) {
+ 		if (!hook->remove_battery(battery->bat, hook))
+ 			power_supply_changed(battery->bat);
+ 	}
+-	list_del(&hook->list);
+-	if (lock)
+-		mutex_unlock(&hook_mutex);
++	list_del_init(&hook->list);
++
+ 	pr_info("extension unregistered: %s\n", hook->name);
+ }
+ 
+ void battery_hook_unregister(struct acpi_battery_hook *hook)
+ {
+-	__battery_hook_unregister(hook, 1);
++	mutex_lock(&hook_mutex);
++	/*
++	 * Ignore already unregistered battery hooks. This might happen
++	 * if a battery hook was previously unloaded due to an error when
++	 * adding a new battery.
++	 */
++	if (!list_empty(&hook->list))
++		battery_hook_unregister_unlocked(hook);
++
++	mutex_unlock(&hook_mutex);
+ }
+ EXPORT_SYMBOL_GPL(battery_hook_unregister);
+ 
+@@ -733,7 +740,6 @@ void battery_hook_register(struct acpi_battery_hook *hook)
+ 	struct acpi_battery *battery;
+ 
+ 	mutex_lock(&hook_mutex);
+-	INIT_LIST_HEAD(&hook->list);
+ 	list_add(&hook->list, &battery_hook_list);
+ 	/*
+ 	 * Now that the driver is registered, we need
+@@ -750,7 +756,7 @@ void battery_hook_register(struct acpi_battery_hook *hook)
+ 			 * hooks.
+ 			 */
+ 			pr_err("extension failed to load: %s", hook->name);
+-			__battery_hook_unregister(hook, 0);
++			battery_hook_unregister_unlocked(hook);
+ 			goto end;
+ 		}
+ 
+@@ -789,7 +795,7 @@ static void battery_hook_add_battery(struct acpi_battery *battery)
+ 			 */
+ 			pr_err("error in extension, unloading: %s",
+ 					hook_node->name);
+-			__battery_hook_unregister(hook_node, 0);
++			battery_hook_unregister_unlocked(hook_node);
+ 		}
+ 	}
+ 	mutex_unlock(&hook_mutex);
+@@ -822,7 +828,7 @@ static void __exit battery_hook_exit(void)
+ 	 * need to remove the hooks.
+ 	 */
+ 	list_for_each_entry_safe(hook, ptr, &battery_hook_list, list) {
+-		__battery_hook_unregister(hook, 1);
++		battery_hook_unregister(hook);
+ 	}
+ 	mutex_destroy(&hook_mutex);
+ }
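The battery.c rework relies on list_del_init() leaving the node pointing at itself, so a later list_empty() distinguishes "already unregistered" from "still on the list" and a second unregistration becomes harmless. The idiom on its own, with a hypothetical hook type:

#include <linux/list.h>

struct hook {
        struct list_head list;
};

/* Caller holds the lock that protects the hook list. */
static void unregister_hook_locked(struct hook *h)
{
        if (!list_empty(&h->list))      /* skip hooks that were already dropped */
                list_del_init(&h->list);
}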
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 2a588e4ed4af44..6a048d44fbcf6b 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -103,6 +103,11 @@ static DEFINE_PER_CPU(struct cpc_desc *, cpc_desc_ptr);
+ 				(cpc)->cpc_entry.reg.space_id ==	\
+ 				ACPI_ADR_SPACE_PLATFORM_COMM)
+ 
++/* Check if a CPC register is in FFH */
++#define CPC_IN_FFH(cpc) ((cpc)->type == ACPI_TYPE_BUFFER &&		\
++				(cpc)->cpc_entry.reg.space_id ==	\
++				ACPI_ADR_SPACE_FIXED_HARDWARE)
++
+ /* Check if a CPC register is in SystemMemory */
+ #define CPC_IN_SYSTEM_MEMORY(cpc) ((cpc)->type == ACPI_TYPE_BUFFER &&	\
+ 				(cpc)->cpc_entry.reg.space_id ==	\
+@@ -1519,9 +1524,12 @@ int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable)
+ 		/* after writing CPC, transfer the ownership of PCC to platform */
+ 		ret = send_pcc_cmd(pcc_ss_id, CMD_WRITE);
+ 		up_write(&pcc_ss_data->pcc_lock);
++	} else if (osc_cpc_flexible_adr_space_confirmed &&
++		   CPC_SUPPORTED(epp_set_reg) && CPC_IN_FFH(epp_set_reg)) {
++		ret = cpc_write(cpu, epp_set_reg, perf_ctrls->energy_perf);
+ 	} else {
+ 		ret = -ENOTSUPP;
+-		pr_debug("_CPC in PCC is not supported\n");
++		pr_debug("_CPC in PCC and _CPC in FFH are not supported\n");
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 38d2f6e6b12b4f..25399f6dde7e27 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -783,6 +783,9 @@ static int acpi_ec_transaction_unlocked(struct acpi_ec *ec,
+ 	unsigned long tmp;
+ 	int ret = 0;
+ 
++	if (t->rdata)
++		memset(t->rdata, 0, t->rlen);
++
+ 	/* start transaction */
+ 	spin_lock_irqsave(&ec->lock, tmp);
+ 	/* Enable GPE for command processing (IBF=0/OBF=1) */
+@@ -819,8 +822,6 @@ static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t)
+ 
+ 	if (!ec || (!t) || (t->wlen && !t->wdata) || (t->rlen && !t->rdata))
+ 		return -EINVAL;
+-	if (t->rdata)
+-		memset(t->rdata, 0, t->rlen);
+ 
+ 	mutex_lock(&ec->mutex);
+ 	if (ec->global_lock) {
+@@ -847,7 +848,7 @@ static int acpi_ec_burst_enable(struct acpi_ec *ec)
+ 				.wdata = NULL, .rdata = &d,
+ 				.wlen = 0, .rlen = 1};
+ 
+-	return acpi_ec_transaction(ec, &t);
++	return acpi_ec_transaction_unlocked(ec, &t);
+ }
+ 
+ static int acpi_ec_burst_disable(struct acpi_ec *ec)
+@@ -857,7 +858,7 @@ static int acpi_ec_burst_disable(struct acpi_ec *ec)
+ 				.wlen = 0, .rlen = 0};
+ 
+ 	return (acpi_ec_read_status(ec) & ACPI_EC_FLAG_BURST) ?
+-				acpi_ec_transaction(ec, &t) : 0;
++				acpi_ec_transaction_unlocked(ec, &t) : 0;
+ }
+ 
+ static int acpi_ec_read(struct acpi_ec *ec, u8 address, u8 *data)
+@@ -873,6 +874,19 @@ static int acpi_ec_read(struct acpi_ec *ec, u8 address, u8 *data)
+ 	return result;
+ }
+ 
++static int acpi_ec_read_unlocked(struct acpi_ec *ec, u8 address, u8 *data)
++{
++	int result;
++	u8 d;
++	struct transaction t = {.command = ACPI_EC_COMMAND_READ,
++				.wdata = &address, .rdata = &d,
++				.wlen = 1, .rlen = 1};
++
++	result = acpi_ec_transaction_unlocked(ec, &t);
++	*data = d;
++	return result;
++}
++
+ static int acpi_ec_write(struct acpi_ec *ec, u8 address, u8 data)
+ {
+ 	u8 wdata[2] = { address, data };
+@@ -883,6 +897,16 @@ static int acpi_ec_write(struct acpi_ec *ec, u8 address, u8 data)
+ 	return acpi_ec_transaction(ec, &t);
+ }
+ 
++static int acpi_ec_write_unlocked(struct acpi_ec *ec, u8 address, u8 data)
++{
++	u8 wdata[2] = { address, data };
++	struct transaction t = {.command = ACPI_EC_COMMAND_WRITE,
++				.wdata = wdata, .rdata = NULL,
++				.wlen = 2, .rlen = 0};
++
++	return acpi_ec_transaction_unlocked(ec, &t);
++}
++
+ int ec_read(u8 addr, u8 *val)
+ {
+ 	int err;
+@@ -1323,6 +1347,7 @@ acpi_ec_space_handler(u32 function, acpi_physical_address address,
+ 	struct acpi_ec *ec = handler_context;
+ 	int result = 0, i, bytes = bits / 8;
+ 	u8 *value = (u8 *)value64;
++	u32 glk;
+ 
+ 	if ((address > 0xFF) || !value || !handler_context)
+ 		return AE_BAD_PARAMETER;
+@@ -1330,13 +1355,25 @@ acpi_ec_space_handler(u32 function, acpi_physical_address address,
+ 	if (function != ACPI_READ && function != ACPI_WRITE)
+ 		return AE_BAD_PARAMETER;
+ 
++	mutex_lock(&ec->mutex);
++
++	if (ec->global_lock) {
++		acpi_status status;
++
++		status = acpi_acquire_global_lock(ACPI_EC_UDELAY_GLK, &glk);
++		if (ACPI_FAILURE(status)) {
++			result = -ENODEV;
++			goto unlock;
++		}
++	}
++
+ 	if (ec->busy_polling || bits > 8)
+ 		acpi_ec_burst_enable(ec);
+ 
+ 	for (i = 0; i < bytes; ++i, ++address, ++value) {
+ 		result = (function == ACPI_READ) ?
+-			acpi_ec_read(ec, address, value) :
+-			acpi_ec_write(ec, address, *value);
++			acpi_ec_read_unlocked(ec, address, value) :
++			acpi_ec_write_unlocked(ec, address, *value);
+ 		if (result < 0)
+ 			break;
+ 	}
+@@ -1344,6 +1381,12 @@ acpi_ec_space_handler(u32 function, acpi_physical_address address,
+ 	if (ec->busy_polling || bits > 8)
+ 		acpi_ec_burst_disable(ec);
+ 
++	if (ec->global_lock)
++		acpi_release_global_lock(glk);
++
++unlock:
++	mutex_unlock(&ec->mutex);
++
+ 	switch (result) {
+ 	case -EINVAL:
+ 		return AE_BAD_PARAMETER;
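The EC rework follows the usual "locked wrapper calls an _unlocked worker" split, so the address-space handler can hold ec->mutex (and the global lock) across an entire multi-byte transfer instead of re-taking it per byte. Generic shape of that split, with illustrative names:

#include <linux/mutex.h>
#include <linux/types.h>

static DEFINE_MUTEX(dev_lock);

/* Caller must hold dev_lock. */
static int do_transfer_unlocked(u8 addr, u8 *val)
{
        /* touch the hardware here */
        return 0;
}

static int do_transfer(u8 addr, u8 *val)
{
        int ret;

        mutex_lock(&dev_lock);
        ret = do_transfer_unlocked(addr, val);
        mutex_unlock(&dev_lock);
        return ret;
}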
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index cb2aacbb93357e..3d74ebe9dbd804 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -440,6 +440,13 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"),
+ 		},
+ 	},
++	{
++		/* Asus Vivobook X1704VAP */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "X1704VAP"),
++		},
++	},
+ 	{
+ 		/* Asus ExpertBook B1402CBA */
+ 		.matches = {
+@@ -504,17 +511,24 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ 		},
+ 	},
+ 	{
+-		/* Asus Vivobook E1504GA */
++		/* Asus ExpertBook B2502CVA */
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_BOARD_NAME, "E1504GA"),
++			DMI_MATCH(DMI_BOARD_NAME, "B2502CVA"),
+ 		},
+ 	},
+ 	{
+-		/* Asus Vivobook E1504GAB */
++		/* Asus Vivobook Go E1404GA* */
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_BOARD_NAME, "E1504GAB"),
++			DMI_MATCH(DMI_BOARD_NAME, "E1404GA"),
++		},
++	},
++	{
++		/* Asus Vivobook E1504GA* */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "E1504GA"),
+ 		},
+ 	},
+ 	{
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 75a5f559402f87..b05064578293f4 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -254,6 +254,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "PCG-FRV35"),
+ 		},
+ 	},
++	{
++	 .callback = video_detect_force_vendor,
++	 /* Panasonic Toughbook CF-18 */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Matsushita Electric Industrial"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "CF-18"),
++		},
++	},
+ 
+ 	/*
+ 	 * Toshiba models with Transflective display, these need to use
+@@ -836,6 +844,15 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ 	 * controller board in their ACPI tables (and may even have one), but
+ 	 * which need native backlight control nevertheless.
+ 	 */
++	{
++	 /* https://github.com/zabbly/linux/issues/26 */
++	 .callback = video_detect_force_native,
++	 /* Dell OptiPlex 5480 AIO */
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 5480 AIO"),
++		},
++	},
+ 	{
+ 	 /* https://bugzilla.redhat.com/show_bug.cgi?id=2303936 */
+ 	 .callback = video_detect_force_native,
+diff --git a/drivers/ata/pata_serverworks.c b/drivers/ata/pata_serverworks.c
+index 549ff24a982311..4edddf6bcc1507 100644
+--- a/drivers/ata/pata_serverworks.c
++++ b/drivers/ata/pata_serverworks.c
+@@ -46,10 +46,11 @@
+ #define SVWKS_CSB5_REVISION_NEW	0x92 /* min PCI_REVISION_ID for UDMA5 (A2.0) */
+ #define SVWKS_CSB6_REVISION	0xa0 /* min PCI_REVISION_ID for UDMA4 (A1.0) */
+ 
+-/* Seagate Barracuda ATA IV Family drives in UDMA mode 5
+- * can overrun their FIFOs when used with the CSB5 */
+-
+-static const char *csb_bad_ata100[] = {
++/*
++ * Seagate Barracuda ATA IV Family drives in UDMA mode 5
++ * can overrun their FIFOs when used with the CSB5.
++ */
++static const char * const csb_bad_ata100[] = {
+ 	"ST320011A",
+ 	"ST340016A",
+ 	"ST360021A",
+@@ -163,10 +164,11 @@ static unsigned int serverworks_osb4_filter(struct ata_device *adev, unsigned in
+  *	@adev: ATA device
+  *	@mask: Mask of proposed modes
+  *
+- *	Check the blacklist and disable UDMA5 if matched
++ *	Check the list of devices with broken UDMA5 and
++ *	disable UDMA5 if matched.
+  */
+-
+-static unsigned int serverworks_csb_filter(struct ata_device *adev, unsigned int mask)
++static unsigned int serverworks_csb_filter(struct ata_device *adev,
++					   unsigned int mask)
+ {
+ 	const char *p;
+ 	char model_num[ATA_ID_PROD_LEN + 1];
+diff --git a/drivers/ata/sata_sil.c b/drivers/ata/sata_sil.c
+index cc77c024828431..df095659bae0f5 100644
+--- a/drivers/ata/sata_sil.c
++++ b/drivers/ata/sata_sil.c
+@@ -128,7 +128,7 @@ static const struct pci_device_id sil_pci_tbl[] = {
+ static const struct sil_drivelist {
+ 	const char *product;
+ 	unsigned int quirk;
+-} sil_blacklist [] = {
++} sil_quirks[] = {
+ 	{ "ST320012AS",		SIL_QUIRK_MOD15WRITE },
+ 	{ "ST330013AS",		SIL_QUIRK_MOD15WRITE },
+ 	{ "ST340017AS",		SIL_QUIRK_MOD15WRITE },
+@@ -600,8 +600,8 @@ static void sil_thaw(struct ata_port *ap)
+  *	list, and apply the fixups to only the specific
+  *	devices/hosts/firmwares that need it.
+  *
+- *	20040111 - Seagate drives affected by the Mod15Write bug are blacklisted
+- *	The Maxtor quirk is in the blacklist, but I'm keeping the original
++ *	20040111 - Seagate drives affected by the Mod15Write bug are quirked
++ *	The Maxtor quirk is in sil_quirks, but I'm keeping the original
+  *	pessimistic fix for the following reasons...
+  *	- There seems to be less info on it, only one device gleaned off the
+  *	Windows	driver, maybe only one is affected.  More info would be greatly
+@@ -620,9 +620,9 @@ static void sil_dev_config(struct ata_device *dev)
+ 
+ 	ata_id_c_string(dev->id, model_num, ATA_ID_PROD, sizeof(model_num));
+ 
+-	for (n = 0; sil_blacklist[n].product; n++)
+-		if (!strcmp(sil_blacklist[n].product, model_num)) {
+-			quirks = sil_blacklist[n].quirk;
++	for (n = 0; sil_quirks[n].product; n++)
++		if (!strcmp(sil_quirks[n].product, model_num)) {
++			quirks = sil_quirks[n].quirk;
+ 			break;
+ 		}
+ 
+diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
+index cc9077b588d7e7..d1f4ddc576451a 100644
+--- a/drivers/block/aoe/aoecmd.c
++++ b/drivers/block/aoe/aoecmd.c
+@@ -361,6 +361,7 @@ ata_rw_frameinit(struct frame *f)
+ 	}
+ 
+ 	ah->cmdstat = ATA_CMD_PIO_READ | writebit | extbit;
++	dev_hold(t->ifp->nd);
+ 	skb->dev = t->ifp->nd;
+ }
+ 
+@@ -401,6 +402,8 @@ aoecmd_ata_rw(struct aoedev *d)
+ 		__skb_queue_head_init(&queue);
+ 		__skb_queue_tail(&queue, skb);
+ 		aoenet_xmit(&queue);
++	} else {
++		dev_put(f->t->ifp->nd);
+ 	}
+ 	return 1;
+ }
+@@ -483,10 +486,13 @@ resend(struct aoedev *d, struct frame *f)
+ 	memcpy(h->dst, t->addr, sizeof h->dst);
+ 	memcpy(h->src, t->ifp->nd->dev_addr, sizeof h->src);
+ 
++	dev_hold(t->ifp->nd);
+ 	skb->dev = t->ifp->nd;
+ 	skb = skb_clone(skb, GFP_ATOMIC);
+-	if (skb == NULL)
++	if (skb == NULL) {
++		dev_put(t->ifp->nd);
+ 		return;
++	}
+ 	f->sent = ktime_get();
+ 	__skb_queue_head_init(&queue);
+ 	__skb_queue_tail(&queue, skb);
+@@ -617,6 +623,8 @@ probe(struct aoetgt *t)
+ 		__skb_queue_head_init(&queue);
+ 		__skb_queue_tail(&queue, skb);
+ 		aoenet_xmit(&queue);
++	} else {
++		dev_put(f->t->ifp->nd);
+ 	}
+ }
+ 
+@@ -1395,6 +1403,7 @@ aoecmd_ata_id(struct aoedev *d)
+ 	ah->cmdstat = ATA_CMD_ID_ATA;
+ 	ah->lba3 = 0xa0;
+ 
++	dev_hold(t->ifp->nd);
+ 	skb->dev = t->ifp->nd;
+ 
+ 	d->rttavg = RTTAVG_INIT;
+@@ -1404,6 +1413,8 @@ aoecmd_ata_id(struct aoedev *d)
+ 	skb = skb_clone(skb, GFP_ATOMIC);
+ 	if (skb)
+ 		f->sent = ktime_get();
++	else
++		dev_put(t->ifp->nd);
+ 
+ 	return skb;
+ }
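The aoecmd.c hunks pin the network device with dev_hold() before publishing it through skb->dev and drop the reference with dev_put() on every path that ends up not transmitting, keeping the refcount balanced. Rough shape of that pairing (error handling reduced to the relevant part):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static struct sk_buff *clone_for_xmit(struct sk_buff *skb, struct net_device *nd)
{
        struct sk_buff *clone;

        dev_hold(nd);                   /* take a reference before using the device */
        skb->dev = nd;

        clone = skb_clone(skb, GFP_ATOMIC);
        if (!clone)
                dev_put(nd);            /* nothing will be sent: drop the reference */

        return clone;
}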
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 1153721bc7c250..41cfcf9efcfc54 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -211,13 +211,10 @@ static void __loop_update_dio(struct loop_device *lo, bool dio)
+ 	if (lo->lo_state == Lo_bound)
+ 		blk_mq_freeze_queue(lo->lo_queue);
+ 	lo->use_dio = use_dio;
+-	if (use_dio) {
+-		blk_queue_flag_clear(QUEUE_FLAG_NOMERGES, lo->lo_queue);
++	if (use_dio)
+ 		lo->lo_flags |= LO_FLAGS_DIRECT_IO;
+-	} else {
+-		blk_queue_flag_set(QUEUE_FLAG_NOMERGES, lo->lo_queue);
++	else
+ 		lo->lo_flags &= ~LO_FLAGS_DIRECT_IO;
+-	}
+ 	if (lo->lo_state == Lo_bound)
+ 		blk_mq_unfreeze_queue(lo->lo_queue);
+ }
+@@ -2059,14 +2056,6 @@ static int loop_add(int i)
+ 	}
+ 	lo->lo_queue = lo->lo_disk->queue;
+ 
+-	/*
+-	 * By default, we do buffer IO, so it doesn't make sense to enable
+-	 * merge because the I/O submitted to backing file is handled page by
+-	 * page. For directio mode, merge does help to dispatch bigger request
+-	 * to underlayer disk. We will enable merge once directio is enabled.
+-	 */
+-	blk_queue_flag_set(QUEUE_FLAG_NOMERGES, lo->lo_queue);
+-
+ 	/*
+ 	 * Disable partition scanning by default. The in-kernel partition
+ 	 * scanning can be requested individually per-device during its
+diff --git a/drivers/bluetooth/btmrvl_sdio.c b/drivers/bluetooth/btmrvl_sdio.c
+index 85b7f2bb425982..07cd308f7abf6d 100644
+--- a/drivers/bluetooth/btmrvl_sdio.c
++++ b/drivers/bluetooth/btmrvl_sdio.c
+@@ -92,7 +92,7 @@ static int btmrvl_sdio_probe_of(struct device *dev,
+ 		} else {
+ 			ret = devm_request_irq(dev, cfg->irq_bt,
+ 					       btmrvl_wake_irq_bt,
+-					       0, "bt_wake", card);
++					       IRQF_NO_AUTOEN, "bt_wake", card);
+ 			if (ret) {
+ 				dev_err(dev,
+ 					"Failed to request irq_bt %d (%d)\n",
+@@ -101,7 +101,6 @@ static int btmrvl_sdio_probe_of(struct device *dev,
+ 
+ 			/* Configure wakeup (enabled by default) */
+ 			device_init_wakeup(dev, true);
+-			disable_irq(cfg->irq_bt);
+ 		}
+ 	}
+ 
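Requesting the wake IRQ with IRQF_NO_AUTOEN registers the handler with the line still disabled, closing the window in which the old request-then-disable_irq() sequence could take a spurious interrupt. Sketch of the flag in use (irq number, handler and name are placeholders):

#include <linux/interrupt.h>

static irqreturn_t wake_handler(int irq, void *data)
{
        return IRQ_HANDLED;
}

static int request_wake_irq(struct device *dev, int irq, void *data)
{
        /* line stays masked until enable_irq() is called explicitly */
        return devm_request_irq(dev, irq, wake_handler, IRQF_NO_AUTOEN,
                                "wake", data);
}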
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index bfcb41a57655f3..78b5d44558d732 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -1296,6 +1296,7 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
+ 			btrealtek_set_flag(hdev, REALTEK_ALT6_CONTINUOUS_TX_CHIP);
+ 
+ 		if (btrtl_dev->project_id == CHIP_ID_8852A ||
++		    btrtl_dev->project_id == CHIP_ID_8852B ||
+ 		    btrtl_dev->project_id == CHIP_ID_8852C)
+ 			set_bit(HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER, &hdev->quirks);
+ 
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index c41b86608ba866..dd7d9b7fd1c423 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -539,6 +539,8 @@ static const struct usb_device_id quirks_table[] = {
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x13d3, 0x3592), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0489, 0xe122), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
+ 
+ 	/* Realtek 8852BE Bluetooth devices */
+ 	{ USB_DEVICE(0x0cb8, 0xc559), .driver_info = BTUSB_REALTEK |
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 2720cbc40e0acc..b98d7226d0aebc 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -1665,7 +1665,7 @@ static int __alpha_pll_trion_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	regmap_write(pll->clkr.regmap, PLL_L_VAL(pll), l);
++	regmap_update_bits(pll->clkr.regmap, PLL_L_VAL(pll), LUCID_EVO_PLL_L_VAL_MASK,  l);
+ 	regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL(pll), a);
+ 
+ 	/* Latch the PLL input */
+diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c
+index bb82abeed88f3b..4acde937114af3 100644
+--- a/drivers/clk/qcom/clk-rpmh.c
++++ b/drivers/clk/qcom/clk-rpmh.c
+@@ -263,6 +263,8 @@ static int clk_rpmh_bcm_send_cmd(struct clk_rpmh *c, bool enable)
+ 		cmd_state = 0;
+ 	}
+ 
++	cmd_state = min(cmd_state, BCM_TCS_CMD_VOTE_MASK);
++
+ 	if (c->last_sent_aggr_state != cmd_state) {
+ 		cmd.addr = c->res_addr;
+ 		cmd.data = BCM_TCS_CMD(1, enable, 0, cmd_state);
+diff --git a/drivers/clk/qcom/dispcc-sm8250.c b/drivers/clk/qcom/dispcc-sm8250.c
+index 2103e22ca3ddee..425dbd62c2c103 100644
+--- a/drivers/clk/qcom/dispcc-sm8250.c
++++ b/drivers/clk/qcom/dispcc-sm8250.c
+@@ -849,6 +849,7 @@ static struct clk_branch disp_cc_mdss_dp_link1_intf_clk = {
+ 				&disp_cc_mdss_dp_link1_div_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
++			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -884,6 +885,7 @@ static struct clk_branch disp_cc_mdss_dp_link_intf_clk = {
+ 				&disp_cc_mdss_dp_link_div_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
++			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+@@ -1009,6 +1011,7 @@ static struct clk_branch disp_cc_mdss_mdp_lut_clk = {
+ 				&disp_cc_mdss_mdp_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
++			.flags = CLK_SET_RATE_PARENT,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+diff --git a/drivers/clk/qcom/gcc-sc8180x.c b/drivers/clk/qcom/gcc-sc8180x.c
+index 5261bfc92b3dc3..71ab56adab7cc7 100644
+--- a/drivers/clk/qcom/gcc-sc8180x.c
++++ b/drivers/clk/qcom/gcc-sc8180x.c
+@@ -142,6 +142,23 @@ static struct clk_alpha_pll gpll7 = {
+ 	},
+ };
+ 
++static struct clk_alpha_pll gpll9 = {
++	.offset = 0x1c000,
++	.regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_TRION],
++	.clkr = {
++		.enable_reg = 0x52000,
++		.enable_mask = BIT(9),
++		.hw.init = &(const struct clk_init_data) {
++			.name = "gpll9",
++			.parent_data = &(const struct clk_parent_data) {
++				.fw_name = "bi_tcxo",
++			},
++			.num_parents = 1,
++			.ops = &clk_alpha_pll_fixed_trion_ops,
++		},
++	},
++};
++
+ static const struct parent_map gcc_parent_map_0[] = {
+ 	{ P_BI_TCXO, 0 },
+ 	{ P_GPLL0_OUT_MAIN, 1 },
+@@ -241,7 +258,7 @@ static const struct parent_map gcc_parent_map_7[] = {
+ static const struct clk_parent_data gcc_parents_7[] = {
+ 	{ .fw_name = "bi_tcxo", },
+ 	{ .hw = &gpll0.clkr.hw },
+-	{ .name = "gppl9" },
++	{ .hw = &gpll9.clkr.hw },
+ 	{ .hw = &gpll4.clkr.hw },
+ 	{ .hw = &gpll0_out_even.clkr.hw },
+ };
+@@ -260,28 +277,6 @@ static const struct clk_parent_data gcc_parents_8[] = {
+ 	{ .hw = &gpll0_out_even.clkr.hw },
+ };
+ 
+-static const struct freq_tbl ftbl_gcc_cpuss_ahb_clk_src[] = {
+-	F(19200000, P_BI_TCXO, 1, 0, 0),
+-	F(50000000, P_GPLL0_OUT_MAIN, 12, 0, 0),
+-	F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+-	{ }
+-};
+-
+-static struct clk_rcg2 gcc_cpuss_ahb_clk_src = {
+-	.cmd_rcgr = 0x48014,
+-	.mnd_width = 0,
+-	.hid_width = 5,
+-	.parent_map = gcc_parent_map_0,
+-	.freq_tbl = ftbl_gcc_cpuss_ahb_clk_src,
+-	.clkr.hw.init = &(struct clk_init_data){
+-		.name = "gcc_cpuss_ahb_clk_src",
+-		.parent_data = gcc_parents_0,
+-		.num_parents = ARRAY_SIZE(gcc_parents_0),
+-		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
+-	},
+-};
+-
+ static const struct freq_tbl ftbl_gcc_emac_ptp_clk_src[] = {
+ 	F(19200000, P_BI_TCXO, 1, 0, 0),
+ 	F(50000000, P_GPLL0_OUT_EVEN, 6, 0, 0),
+@@ -916,7 +911,7 @@ static const struct freq_tbl ftbl_gcc_sdcc2_apps_clk_src[] = {
+ 	F(25000000, P_GPLL0_OUT_MAIN, 12, 1, 2),
+ 	F(50000000, P_GPLL0_OUT_MAIN, 12, 0, 0),
+ 	F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+-	F(200000000, P_GPLL0_OUT_MAIN, 3, 0, 0),
++	F(202000000, P_GPLL9_OUT_MAIN, 4, 0, 0),
+ 	{ }
+ };
+ 
+@@ -939,9 +934,8 @@ static const struct freq_tbl ftbl_gcc_sdcc4_apps_clk_src[] = {
+ 	F(400000, P_BI_TCXO, 12, 1, 4),
+ 	F(9600000, P_BI_TCXO, 2, 0, 0),
+ 	F(19200000, P_BI_TCXO, 1, 0, 0),
+-	F(37500000, P_GPLL0_OUT_MAIN, 16, 0, 0),
+ 	F(50000000, P_GPLL0_OUT_MAIN, 12, 0, 0),
+-	F(75000000, P_GPLL0_OUT_MAIN, 8, 0, 0),
++	F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+ 	{ }
+ };
+ 
+@@ -1599,25 +1593,6 @@ static struct clk_branch gcc_cfg_noc_usb3_sec_axi_clk = {
+ 	},
+ };
+ 
+-/* For CPUSS functionality the AHB clock needs to be left enabled */
+-static struct clk_branch gcc_cpuss_ahb_clk = {
+-	.halt_reg = 0x48000,
+-	.halt_check = BRANCH_HALT_VOTED,
+-	.clkr = {
+-		.enable_reg = 0x52004,
+-		.enable_mask = BIT(21),
+-		.hw.init = &(struct clk_init_data){
+-			.name = "gcc_cpuss_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){
+-				      &gcc_cpuss_ahb_clk_src.clkr.hw
+-			},
+-			.num_parents = 1,
+-			.flags = CLK_IS_CRITICAL | CLK_SET_RATE_PARENT,
+-			.ops = &clk_branch2_ops,
+-		},
+-	},
+-};
+-
+ static struct clk_branch gcc_cpuss_rbcpr_clk = {
+ 	.halt_reg = 0x48008,
+ 	.halt_check = BRANCH_HALT,
+@@ -3150,25 +3125,6 @@ static struct clk_branch gcc_sdcc4_apps_clk = {
+ 	},
+ };
+ 
+-/* For CPUSS functionality the SYS NOC clock needs to be left enabled */
+-static struct clk_branch gcc_sys_noc_cpuss_ahb_clk = {
+-	.halt_reg = 0x4819c,
+-	.halt_check = BRANCH_HALT_VOTED,
+-	.clkr = {
+-		.enable_reg = 0x52004,
+-		.enable_mask = BIT(0),
+-		.hw.init = &(struct clk_init_data){
+-			.name = "gcc_sys_noc_cpuss_ahb_clk",
+-			.parent_hws = (const struct clk_hw *[]){
+-				      &gcc_cpuss_ahb_clk_src.clkr.hw
+-			},
+-			.num_parents = 1,
+-			.flags = CLK_IS_CRITICAL | CLK_SET_RATE_PARENT,
+-			.ops = &clk_branch2_ops,
+-		},
+-	},
+-};
+-
+ static struct clk_branch gcc_tsif_ahb_clk = {
+ 	.halt_reg = 0x36004,
+ 	.halt_check = BRANCH_HALT,
+@@ -4284,8 +4240,6 @@ static struct clk_regmap *gcc_sc8180x_clocks[] = {
+ 	[GCC_CFG_NOC_USB3_MP_AXI_CLK] = &gcc_cfg_noc_usb3_mp_axi_clk.clkr,
+ 	[GCC_CFG_NOC_USB3_PRIM_AXI_CLK] = &gcc_cfg_noc_usb3_prim_axi_clk.clkr,
+ 	[GCC_CFG_NOC_USB3_SEC_AXI_CLK] = &gcc_cfg_noc_usb3_sec_axi_clk.clkr,
+-	[GCC_CPUSS_AHB_CLK] = &gcc_cpuss_ahb_clk.clkr,
+-	[GCC_CPUSS_AHB_CLK_SRC] = &gcc_cpuss_ahb_clk_src.clkr,
+ 	[GCC_CPUSS_RBCPR_CLK] = &gcc_cpuss_rbcpr_clk.clkr,
+ 	[GCC_DDRSS_GPU_AXI_CLK] = &gcc_ddrss_gpu_axi_clk.clkr,
+ 	[GCC_DISP_HF_AXI_CLK] = &gcc_disp_hf_axi_clk.clkr,
+@@ -4422,7 +4376,6 @@ static struct clk_regmap *gcc_sc8180x_clocks[] = {
+ 	[GCC_SDCC4_AHB_CLK] = &gcc_sdcc4_ahb_clk.clkr,
+ 	[GCC_SDCC4_APPS_CLK] = &gcc_sdcc4_apps_clk.clkr,
+ 	[GCC_SDCC4_APPS_CLK_SRC] = &gcc_sdcc4_apps_clk_src.clkr,
+-	[GCC_SYS_NOC_CPUSS_AHB_CLK] = &gcc_sys_noc_cpuss_ahb_clk.clkr,
+ 	[GCC_TSIF_AHB_CLK] = &gcc_tsif_ahb_clk.clkr,
+ 	[GCC_TSIF_INACTIVITY_TIMERS_CLK] = &gcc_tsif_inactivity_timers_clk.clkr,
+ 	[GCC_TSIF_REF_CLK] = &gcc_tsif_ref_clk.clkr,
+@@ -4511,6 +4464,7 @@ static struct clk_regmap *gcc_sc8180x_clocks[] = {
+ 	[GPLL1] = &gpll1.clkr,
+ 	[GPLL4] = &gpll4.clkr,
+ 	[GPLL7] = &gpll7.clkr,
++	[GPLL9] = &gpll9.clkr,
+ };
+ 
+ static const struct qcom_reset_map gcc_sc8180x_resets[] = {
+diff --git a/drivers/clk/qcom/gcc-sm8250.c b/drivers/clk/qcom/gcc-sm8250.c
+index e630bfa2d0c179..e71b7b7cb5147c 100644
+--- a/drivers/clk/qcom/gcc-sm8250.c
++++ b/drivers/clk/qcom/gcc-sm8250.c
+@@ -3226,7 +3226,7 @@ static struct gdsc pcie_0_gdsc = {
+ 	.pd = {
+ 		.name = "pcie_0_gdsc",
+ 	},
+-	.pwrsts = PWRSTS_OFF_ON,
++	.pwrsts = PWRSTS_RET_ON,
+ };
+ 
+ static struct gdsc pcie_1_gdsc = {
+@@ -3234,7 +3234,7 @@ static struct gdsc pcie_1_gdsc = {
+ 	.pd = {
+ 		.name = "pcie_1_gdsc",
+ 	},
+-	.pwrsts = PWRSTS_OFF_ON,
++	.pwrsts = PWRSTS_RET_ON,
+ };
+ 
+ static struct gdsc pcie_2_gdsc = {
+@@ -3242,7 +3242,7 @@ static struct gdsc pcie_2_gdsc = {
+ 	.pd = {
+ 		.name = "pcie_2_gdsc",
+ 	},
+-	.pwrsts = PWRSTS_OFF_ON,
++	.pwrsts = PWRSTS_RET_ON,
+ };
+ 
+ static struct gdsc ufs_card_gdsc = {
+diff --git a/drivers/clk/qcom/gcc-sm8450.c b/drivers/clk/qcom/gcc-sm8450.c
+index e86c58bc5e48bc..827f574e99b25b 100644
+--- a/drivers/clk/qcom/gcc-sm8450.c
++++ b/drivers/clk/qcom/gcc-sm8450.c
+@@ -2974,7 +2974,7 @@ static struct gdsc pcie_0_gdsc = {
+ 	.pd = {
+ 		.name = "pcie_0_gdsc",
+ 	},
+-	.pwrsts = PWRSTS_OFF_ON,
++	.pwrsts = PWRSTS_RET_ON,
+ };
+ 
+ static struct gdsc pcie_1_gdsc = {
+@@ -2982,7 +2982,7 @@ static struct gdsc pcie_1_gdsc = {
+ 	.pd = {
+ 		.name = "pcie_1_gdsc",
+ 	},
+-	.pwrsts = PWRSTS_OFF_ON,
++	.pwrsts = PWRSTS_RET_ON,
+ };
+ 
+ static struct gdsc ufs_phy_gdsc = {
+diff --git a/drivers/clk/rockchip/clk.c b/drivers/clk/rockchip/clk.c
+index 73d2cbdc716b45..2fa7253c73b2cd 100644
+--- a/drivers/clk/rockchip/clk.c
++++ b/drivers/clk/rockchip/clk.c
+@@ -450,12 +450,13 @@ void rockchip_clk_register_branches(struct rockchip_clk_provider *ctx,
+ 				    struct rockchip_clk_branch *list,
+ 				    unsigned int nr_clk)
+ {
+-	struct clk *clk = NULL;
++	struct clk *clk;
+ 	unsigned int idx;
+ 	unsigned long flags;
+ 
+ 	for (idx = 0; idx < nr_clk; idx++, list++) {
+ 		flags = list->flags;
++		clk = NULL;
+ 
+ 		/* catch simple muxes */
+ 		switch (list->branch_type) {
+diff --git a/drivers/clk/samsung/clk-exynos7885.c b/drivers/clk/samsung/clk-exynos7885.c
+index f7d7427a558ba0..87387d4cbf48a2 100644
+--- a/drivers/clk/samsung/clk-exynos7885.c
++++ b/drivers/clk/samsung/clk-exynos7885.c
+@@ -20,7 +20,7 @@
+ #define CLKS_NR_TOP			(CLK_GOUT_FSYS_USB30DRD + 1)
+ #define CLKS_NR_CORE			(CLK_GOUT_TREX_P_CORE_PCLK_P_CORE + 1)
+ #define CLKS_NR_PERI			(CLK_GOUT_WDT1_PCLK + 1)
+-#define CLKS_NR_FSYS			(CLK_GOUT_MMC_SDIO_SDCLKIN + 1)
++#define CLKS_NR_FSYS			(CLK_MOUT_FSYS_USB30DRD_USER + 1)
+ 
+ /* ---- CMU_TOP ------------------------------------------------------------- */
+ 
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index c31914a9876fa1..b694e474acece0 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -1622,7 +1622,7 @@ static void intel_pstate_notify_work(struct work_struct *work)
+ 	wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_STATUS, 0);
+ }
+ 
+-static DEFINE_SPINLOCK(hwp_notify_lock);
++static DEFINE_RAW_SPINLOCK(hwp_notify_lock);
+ static cpumask_t hwp_intr_enable_mask;
+ 
+ void notify_hwp_interrupt(void)
+@@ -1638,7 +1638,7 @@ void notify_hwp_interrupt(void)
+ 	if (!(value & 0x01))
+ 		return;
+ 
+-	spin_lock_irqsave(&hwp_notify_lock, flags);
++	raw_spin_lock_irqsave(&hwp_notify_lock, flags);
+ 
+ 	if (!cpumask_test_cpu(this_cpu, &hwp_intr_enable_mask))
+ 		goto ack_intr;
+@@ -1646,13 +1646,13 @@ void notify_hwp_interrupt(void)
+ 	schedule_delayed_work(&all_cpu_data[this_cpu]->hwp_notify_work,
+ 			      msecs_to_jiffies(10));
+ 
+-	spin_unlock_irqrestore(&hwp_notify_lock, flags);
++	raw_spin_unlock_irqrestore(&hwp_notify_lock, flags);
+ 
+ 	return;
+ 
+ ack_intr:
+ 	wrmsrl_safe(MSR_HWP_STATUS, 0);
+-	spin_unlock_irqrestore(&hwp_notify_lock, flags);
++	raw_spin_unlock_irqrestore(&hwp_notify_lock, flags);
+ }
+ 
+ static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
+@@ -1665,9 +1665,9 @@ static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
+ 	/* wrmsrl_on_cpu has to be outside spinlock as this can result in IPC */
+ 	wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00);
+ 
+-	spin_lock_irq(&hwp_notify_lock);
++	raw_spin_lock_irq(&hwp_notify_lock);
+ 	cancel_work = cpumask_test_and_clear_cpu(cpudata->cpu, &hwp_intr_enable_mask);
+-	spin_unlock_irq(&hwp_notify_lock);
++	raw_spin_unlock_irq(&hwp_notify_lock);
+ 
+ 	if (cancel_work)
+ 		cancel_delayed_work_sync(&cpudata->hwp_notify_work);
+@@ -1677,10 +1677,10 @@ static void intel_pstate_enable_hwp_interrupt(struct cpudata *cpudata)
+ {
+ 	/* Enable HWP notification interrupt for guaranteed performance change */
+ 	if (boot_cpu_has(X86_FEATURE_HWP_NOTIFY)) {
+-		spin_lock_irq(&hwp_notify_lock);
++		raw_spin_lock_irq(&hwp_notify_lock);
+ 		INIT_DELAYED_WORK(&cpudata->hwp_notify_work, intel_pstate_notify_work);
+ 		cpumask_set_cpu(cpudata->cpu, &hwp_intr_enable_mask);
+-		spin_unlock_irq(&hwp_notify_lock);
++		raw_spin_unlock_irq(&hwp_notify_lock);
+ 
+ 		/* wrmsrl_on_cpu has to be outside spinlock as this can result in IPC */
+ 		wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x01);
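intel_pstate's notification path runs from the thermal interrupt, where a sleeping lock is not allowed on PREEMPT_RT; converting hwp_notify_lock to raw_spinlock_t keeps it a real spinning lock there. Minimal usage sketch:

#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(notify_lock);
static unsigned long pending;

/* Safe to call from hard-interrupt context. */
static void mark_pending(void)
{
        unsigned long flags;

        raw_spin_lock_irqsave(&notify_lock, flags);
        pending++;
        raw_spin_unlock_irqrestore(&notify_lock, flags);
}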
+diff --git a/drivers/crypto/hisilicon/sgl.c b/drivers/crypto/hisilicon/sgl.c
+index 568acd0aee3fa5..c974f95cd126fd 100644
+--- a/drivers/crypto/hisilicon/sgl.c
++++ b/drivers/crypto/hisilicon/sgl.c
+@@ -225,7 +225,7 @@ hisi_acc_sg_buf_map_to_hw_sgl(struct device *dev,
+ 	dma_addr_t curr_sgl_dma = 0;
+ 	struct acc_hw_sge *curr_hw_sge;
+ 	struct scatterlist *sg;
+-	int sg_n;
++	int sg_n, ret;
+ 
+ 	if (!dev || !sgl || !pool || !hw_sgl_dma || index >= pool->count)
+ 		return ERR_PTR(-EINVAL);
+@@ -240,14 +240,15 @@ hisi_acc_sg_buf_map_to_hw_sgl(struct device *dev,
+ 
+ 	if (sg_n_mapped > pool->sge_nr) {
+ 		dev_err(dev, "the number of entries in input scatterlist is bigger than SGL pool setting.\n");
+-		return ERR_PTR(-EINVAL);
++		ret = -EINVAL;
++		goto err_unmap;
+ 	}
+ 
+ 	curr_hw_sgl = acc_get_sgl(pool, index, &curr_sgl_dma);
+ 	if (IS_ERR(curr_hw_sgl)) {
+ 		dev_err(dev, "Get SGL error!\n");
+-		dma_unmap_sg(dev, sgl, sg_n, DMA_BIDIRECTIONAL);
+-		return ERR_PTR(-ENOMEM);
++		ret = -ENOMEM;
++		goto err_unmap;
+ 	}
+ 	curr_hw_sgl->entry_length_in_sgl = cpu_to_le16(pool->sge_nr);
+ 	curr_hw_sge = curr_hw_sgl->sge_entries;
+@@ -262,6 +263,11 @@ hisi_acc_sg_buf_map_to_hw_sgl(struct device *dev,
+ 	*hw_sgl_dma = curr_sgl_dma;
+ 
+ 	return curr_hw_sgl;
++
++err_unmap:
++	dma_unmap_sg(dev, sgl, sg_n, DMA_BIDIRECTIONAL);
++
++	return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(hisi_acc_sg_buf_map_to_hw_sgl);
+ 
+diff --git a/drivers/crypto/marvell/Kconfig b/drivers/crypto/marvell/Kconfig
+index a48591af12d025..78217577aa5403 100644
+--- a/drivers/crypto/marvell/Kconfig
++++ b/drivers/crypto/marvell/Kconfig
+@@ -28,6 +28,7 @@ config CRYPTO_DEV_OCTEONTX_CPT
+ 	select CRYPTO_SKCIPHER
+ 	select CRYPTO_HASH
+ 	select CRYPTO_AEAD
++	select CRYPTO_AUTHENC
+ 	select CRYPTO_DEV_MARVELL
+ 	help
+ 		This driver allows you to utilize the Marvell Cryptographic
+@@ -47,6 +48,7 @@ config CRYPTO_DEV_OCTEONTX2_CPT
+ 	select CRYPTO_SKCIPHER
+ 	select CRYPTO_HASH
+ 	select CRYPTO_AEAD
++	select CRYPTO_AUTHENC
+ 	select NET_DEVLINK
+ 	help
+ 		This driver allows you to utilize the Marvell Cryptographic
+diff --git a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
+index 3c5d577d8f0d5e..0a1b85ad0057f1 100644
+--- a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
++++ b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
+@@ -17,7 +17,6 @@
+ #include <crypto/sha2.h>
+ #include <crypto/xts.h>
+ #include <crypto/scatterwalk.h>
+-#include <linux/rtnetlink.h>
+ #include <linux/sort.h>
+ #include <linux/module.h>
+ #include "otx_cptvf.h"
+@@ -66,6 +65,8 @@ static struct cpt_device_table ae_devices = {
+ 	.count = ATOMIC_INIT(0)
+ };
+ 
++static struct otx_cpt_sdesc *alloc_sdesc(struct crypto_shash *alg);
++
+ static inline int get_se_device(struct pci_dev **pdev, int *cpu_num)
+ {
+ 	int count, ret = 0;
+@@ -509,44 +510,61 @@ static int cpt_aead_init(struct crypto_aead *tfm, u8 cipher_type, u8 mac_type)
+ 	ctx->cipher_type = cipher_type;
+ 	ctx->mac_type = mac_type;
+ 
++	switch (ctx->mac_type) {
++	case OTX_CPT_SHA1:
++		ctx->hashalg = crypto_alloc_shash("sha1", 0, 0);
++		break;
++
++	case OTX_CPT_SHA256:
++		ctx->hashalg = crypto_alloc_shash("sha256", 0, 0);
++		break;
++
++	case OTX_CPT_SHA384:
++		ctx->hashalg = crypto_alloc_shash("sha384", 0, 0);
++		break;
++
++	case OTX_CPT_SHA512:
++		ctx->hashalg = crypto_alloc_shash("sha512", 0, 0);
++		break;
++	}
++
++	if (IS_ERR(ctx->hashalg))
++		return PTR_ERR(ctx->hashalg);
++
++	crypto_aead_set_reqsize_dma(tfm, sizeof(struct otx_cpt_req_ctx));
++
++	if (!ctx->hashalg)
++		return 0;
++
+ 	/*
+ 	 * When selected cipher is NULL we use HMAC opcode instead of
+ 	 * FLEXICRYPTO opcode therefore we don't need to use HASH algorithms
+ 	 * for calculating ipad and opad
+ 	 */
+ 	if (ctx->cipher_type != OTX_CPT_CIPHER_NULL) {
+-		switch (ctx->mac_type) {
+-		case OTX_CPT_SHA1:
+-			ctx->hashalg = crypto_alloc_shash("sha1", 0,
+-							  CRYPTO_ALG_ASYNC);
+-			if (IS_ERR(ctx->hashalg))
+-				return PTR_ERR(ctx->hashalg);
+-			break;
+-
+-		case OTX_CPT_SHA256:
+-			ctx->hashalg = crypto_alloc_shash("sha256", 0,
+-							  CRYPTO_ALG_ASYNC);
+-			if (IS_ERR(ctx->hashalg))
+-				return PTR_ERR(ctx->hashalg);
+-			break;
++		int ss = crypto_shash_statesize(ctx->hashalg);
+ 
+-		case OTX_CPT_SHA384:
+-			ctx->hashalg = crypto_alloc_shash("sha384", 0,
+-							  CRYPTO_ALG_ASYNC);
+-			if (IS_ERR(ctx->hashalg))
+-				return PTR_ERR(ctx->hashalg);
+-			break;
++		ctx->ipad = kzalloc(ss, GFP_KERNEL);
++		if (!ctx->ipad) {
++			crypto_free_shash(ctx->hashalg);
++			return -ENOMEM;
++		}
+ 
+-		case OTX_CPT_SHA512:
+-			ctx->hashalg = crypto_alloc_shash("sha512", 0,
+-							  CRYPTO_ALG_ASYNC);
+-			if (IS_ERR(ctx->hashalg))
+-				return PTR_ERR(ctx->hashalg);
+-			break;
++		ctx->opad = kzalloc(ss, GFP_KERNEL);
++		if (!ctx->opad) {
++			kfree(ctx->ipad);
++			crypto_free_shash(ctx->hashalg);
++			return -ENOMEM;
+ 		}
+ 	}
+ 
+-	crypto_aead_set_reqsize_dma(tfm, sizeof(struct otx_cpt_req_ctx));
++	ctx->sdesc = alloc_sdesc(ctx->hashalg);
++	if (!ctx->sdesc) {
++		kfree(ctx->opad);
++		kfree(ctx->ipad);
++		crypto_free_shash(ctx->hashalg);
++		return -ENOMEM;
++	}
+ 
+ 	return 0;
+ }
+@@ -602,8 +620,7 @@ static void otx_cpt_aead_exit(struct crypto_aead *tfm)
+ 
+ 	kfree(ctx->ipad);
+ 	kfree(ctx->opad);
+-	if (ctx->hashalg)
+-		crypto_free_shash(ctx->hashalg);
++	crypto_free_shash(ctx->hashalg);
+ 	kfree(ctx->sdesc);
+ }
+ 
+@@ -699,7 +716,7 @@ static inline void swap_data64(void *buf, u32 len)
+ 		*dst = cpu_to_be64p(src);
+ }
+ 
+-static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
++static int swap_pad(u8 mac_type, u8 *pad)
+ {
+ 	struct sha512_state *sha512;
+ 	struct sha256_state *sha256;
+@@ -707,22 +724,19 @@ static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
+ 
+ 	switch (mac_type) {
+ 	case OTX_CPT_SHA1:
+-		sha1 = (struct sha1_state *) in_pad;
++		sha1 = (struct sha1_state *)pad;
+ 		swap_data32(sha1->state, SHA1_DIGEST_SIZE);
+-		memcpy(out_pad, &sha1->state, SHA1_DIGEST_SIZE);
+ 		break;
+ 
+ 	case OTX_CPT_SHA256:
+-		sha256 = (struct sha256_state *) in_pad;
++		sha256 = (struct sha256_state *)pad;
+ 		swap_data32(sha256->state, SHA256_DIGEST_SIZE);
+-		memcpy(out_pad, &sha256->state, SHA256_DIGEST_SIZE);
+ 		break;
+ 
+ 	case OTX_CPT_SHA384:
+ 	case OTX_CPT_SHA512:
+-		sha512 = (struct sha512_state *) in_pad;
++		sha512 = (struct sha512_state *)pad;
+ 		swap_data64(sha512->state, SHA512_DIGEST_SIZE);
+-		memcpy(out_pad, &sha512->state, SHA512_DIGEST_SIZE);
+ 		break;
+ 
+ 	default:
+@@ -732,55 +746,53 @@ static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
+ 	return 0;
+ }
+ 
+-static int aead_hmac_init(struct crypto_aead *cipher)
++static int aead_hmac_init(struct crypto_aead *cipher,
++			  struct crypto_authenc_keys *keys)
+ {
+ 	struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
+-	int state_size = crypto_shash_statesize(ctx->hashalg);
+ 	int ds = crypto_shash_digestsize(ctx->hashalg);
+ 	int bs = crypto_shash_blocksize(ctx->hashalg);
+-	int authkeylen = ctx->auth_key_len;
++	int authkeylen = keys->authkeylen;
+ 	u8 *ipad = NULL, *opad = NULL;
+-	int ret = 0, icount = 0;
++	int icount = 0;
++	int ret;
+ 
+-	ctx->sdesc = alloc_sdesc(ctx->hashalg);
+-	if (!ctx->sdesc)
+-		return -ENOMEM;
++	if (authkeylen > bs) {
++		ret = crypto_shash_digest(&ctx->sdesc->shash, keys->authkey,
++					  authkeylen, ctx->key);
++		if (ret)
++			return ret;
++		authkeylen = ds;
++	} else
++		memcpy(ctx->key, keys->authkey, authkeylen);
+ 
+-	ctx->ipad = kzalloc(bs, GFP_KERNEL);
+-	if (!ctx->ipad) {
+-		ret = -ENOMEM;
+-		goto calc_fail;
+-	}
++	ctx->enc_key_len = keys->enckeylen;
++	ctx->auth_key_len = authkeylen;
+ 
+-	ctx->opad = kzalloc(bs, GFP_KERNEL);
+-	if (!ctx->opad) {
+-		ret = -ENOMEM;
+-		goto calc_fail;
+-	}
++	if (ctx->cipher_type == OTX_CPT_CIPHER_NULL)
++		return keys->enckeylen ? -EINVAL : 0;
+ 
+-	ipad = kzalloc(state_size, GFP_KERNEL);
+-	if (!ipad) {
+-		ret = -ENOMEM;
+-		goto calc_fail;
++	switch (keys->enckeylen) {
++	case AES_KEYSIZE_128:
++		ctx->key_type = OTX_CPT_AES_128_BIT;
++		break;
++	case AES_KEYSIZE_192:
++		ctx->key_type = OTX_CPT_AES_192_BIT;
++		break;
++	case AES_KEYSIZE_256:
++		ctx->key_type = OTX_CPT_AES_256_BIT;
++		break;
++	default:
++		/* Invalid key length */
++		return -EINVAL;
+ 	}
+ 
+-	opad = kzalloc(state_size, GFP_KERNEL);
+-	if (!opad) {
+-		ret = -ENOMEM;
+-		goto calc_fail;
+-	}
++	memcpy(ctx->key + authkeylen, keys->enckey, keys->enckeylen);
+ 
+-	if (authkeylen > bs) {
+-		ret = crypto_shash_digest(&ctx->sdesc->shash, ctx->key,
+-					  authkeylen, ipad);
+-		if (ret)
+-			goto calc_fail;
+-
+-		authkeylen = ds;
+-	} else {
+-		memcpy(ipad, ctx->key, authkeylen);
+-	}
++	ipad = ctx->ipad;
++	opad = ctx->opad;
+ 
++	memcpy(ipad, ctx->key, authkeylen);
+ 	memset(ipad + authkeylen, 0, bs - authkeylen);
+ 	memcpy(opad, ipad, bs);
+ 
+@@ -798,7 +810,7 @@ static int aead_hmac_init(struct crypto_aead *cipher)
+ 	crypto_shash_init(&ctx->sdesc->shash);
+ 	crypto_shash_update(&ctx->sdesc->shash, ipad, bs);
+ 	crypto_shash_export(&ctx->sdesc->shash, ipad);
+-	ret = copy_pad(ctx->mac_type, ctx->ipad, ipad);
++	ret = swap_pad(ctx->mac_type, ipad);
+ 	if (ret)
+ 		goto calc_fail;
+ 
+@@ -806,25 +818,9 @@ static int aead_hmac_init(struct crypto_aead *cipher)
+ 	crypto_shash_init(&ctx->sdesc->shash);
+ 	crypto_shash_update(&ctx->sdesc->shash, opad, bs);
+ 	crypto_shash_export(&ctx->sdesc->shash, opad);
+-	ret = copy_pad(ctx->mac_type, ctx->opad, opad);
+-	if (ret)
+-		goto calc_fail;
+-
+-	kfree(ipad);
+-	kfree(opad);
+-
+-	return 0;
++	ret = swap_pad(ctx->mac_type, opad);
+ 
+ calc_fail:
+-	kfree(ctx->ipad);
+-	ctx->ipad = NULL;
+-	kfree(ctx->opad);
+-	ctx->opad = NULL;
+-	kfree(ipad);
+-	kfree(opad);
+-	kfree(ctx->sdesc);
+-	ctx->sdesc = NULL;
+-
+ 	return ret;
+ }
+ 
+@@ -832,57 +828,15 @@ static int otx_cpt_aead_cbc_aes_sha_setkey(struct crypto_aead *cipher,
+ 					   const unsigned char *key,
+ 					   unsigned int keylen)
+ {
+-	struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
+-	struct crypto_authenc_key_param *param;
+-	int enckeylen = 0, authkeylen = 0;
+-	struct rtattr *rta = (void *)key;
+-	int status = -EINVAL;
+-
+-	if (!RTA_OK(rta, keylen))
+-		goto badkey;
+-
+-	if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
+-		goto badkey;
+-
+-	if (RTA_PAYLOAD(rta) < sizeof(*param))
+-		goto badkey;
+-
+-	param = RTA_DATA(rta);
+-	enckeylen = be32_to_cpu(param->enckeylen);
+-	key += RTA_ALIGN(rta->rta_len);
+-	keylen -= RTA_ALIGN(rta->rta_len);
+-	if (keylen < enckeylen)
+-		goto badkey;
++	struct crypto_authenc_keys authenc_keys;
++	int status;
+ 
+-	if (keylen > OTX_CPT_MAX_KEY_SIZE)
+-		goto badkey;
+-
+-	authkeylen = keylen - enckeylen;
+-	memcpy(ctx->key, key, keylen);
+-
+-	switch (enckeylen) {
+-	case AES_KEYSIZE_128:
+-		ctx->key_type = OTX_CPT_AES_128_BIT;
+-		break;
+-	case AES_KEYSIZE_192:
+-		ctx->key_type = OTX_CPT_AES_192_BIT;
+-		break;
+-	case AES_KEYSIZE_256:
+-		ctx->key_type = OTX_CPT_AES_256_BIT;
+-		break;
+-	default:
+-		/* Invalid key length */
+-		goto badkey;
+-	}
+-
+-	ctx->enc_key_len = enckeylen;
+-	ctx->auth_key_len = authkeylen;
+-
+-	status = aead_hmac_init(cipher);
++	status = crypto_authenc_extractkeys(&authenc_keys, key, keylen);
+ 	if (status)
+ 		goto badkey;
+ 
+-	return 0;
++	status = aead_hmac_init(cipher, &authenc_keys);
++
+ badkey:
+ 	return status;
+ }
+@@ -891,36 +845,7 @@ static int otx_cpt_aead_ecb_null_sha_setkey(struct crypto_aead *cipher,
+ 					    const unsigned char *key,
+ 					    unsigned int keylen)
+ {
+-	struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
+-	struct crypto_authenc_key_param *param;
+-	struct rtattr *rta = (void *)key;
+-	int enckeylen = 0;
+-
+-	if (!RTA_OK(rta, keylen))
+-		goto badkey;
+-
+-	if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
+-		goto badkey;
+-
+-	if (RTA_PAYLOAD(rta) < sizeof(*param))
+-		goto badkey;
+-
+-	param = RTA_DATA(rta);
+-	enckeylen = be32_to_cpu(param->enckeylen);
+-	key += RTA_ALIGN(rta->rta_len);
+-	keylen -= RTA_ALIGN(rta->rta_len);
+-	if (enckeylen != 0)
+-		goto badkey;
+-
+-	if (keylen > OTX_CPT_MAX_KEY_SIZE)
+-		goto badkey;
+-
+-	memcpy(ctx->key, key, keylen);
+-	ctx->enc_key_len = enckeylen;
+-	ctx->auth_key_len = keylen;
+-	return 0;
+-badkey:
+-	return -EINVAL;
++	return otx_cpt_aead_cbc_aes_sha_setkey(cipher, key, keylen);
+ }
+ 
+ static int otx_cpt_aead_gcm_aes_setkey(struct crypto_aead *cipher,
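The otx_cptvf_algs.c hunks above replace the driver's hand-rolled rtattr parsing of authenc() key blobs with the generic crypto_authenc_extractkeys() helper. A minimal sketch of how that helper is typically consumed follows; the demo_aead_ctx layout and the 128-byte key buffer are illustrative assumptions, not taken from this driver.

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <crypto/authenc.h>	/* crypto_authenc_extractkeys(), struct crypto_authenc_keys */

/* Illustrative context; the real driver keeps auth and enc keys in ctx->key. */
struct demo_aead_ctx {
	u8 key[128];
	unsigned int auth_key_len;
	unsigned int enc_key_len;
};

static int demo_authenc_setkey(struct demo_aead_ctx *ctx,
			       const u8 *key, unsigned int keylen)
{
	struct crypto_authenc_keys keys;
	int err;

	/* Splits the blob into authkey/authkeylen and enckey/enckeylen. */
	err = crypto_authenc_extractkeys(&keys, key, keylen);
	if (err)
		return err;

	if (keys.authkeylen + keys.enckeylen > sizeof(ctx->key))
		return -EINVAL;

	memcpy(ctx->key, keys.authkey, keys.authkeylen);
	memcpy(ctx->key + keys.authkeylen, keys.enckey, keys.enckeylen);
	ctx->auth_key_len = keys.authkeylen;
	ctx->enc_key_len = keys.enckeylen;
	return 0;
}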
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
+index 1604fc58dc13ec..5aa56f20f888ce 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
+@@ -11,7 +11,6 @@
+ #include <crypto/xts.h>
+ #include <crypto/gcm.h>
+ #include <crypto/scatterwalk.h>
+-#include <linux/rtnetlink.h>
+ #include <linux/sort.h>
+ #include <linux/module.h>
+ #include "otx2_cptvf.h"
+@@ -55,6 +54,8 @@ static struct cpt_device_table se_devices = {
+ 	.count = ATOMIC_INIT(0)
+ };
+ 
++static struct otx2_cpt_sdesc *alloc_sdesc(struct crypto_shash *alg);
++
+ static inline int get_se_device(struct pci_dev **pdev, int *cpu_num)
+ {
+ 	int count;
+@@ -598,40 +599,56 @@ static int cpt_aead_init(struct crypto_aead *atfm, u8 cipher_type, u8 mac_type)
+ 	ctx->cipher_type = cipher_type;
+ 	ctx->mac_type = mac_type;
+ 
++	switch (ctx->mac_type) {
++	case OTX2_CPT_SHA1:
++		ctx->hashalg = crypto_alloc_shash("sha1", 0, 0);
++		break;
++
++	case OTX2_CPT_SHA256:
++		ctx->hashalg = crypto_alloc_shash("sha256", 0, 0);
++		break;
++
++	case OTX2_CPT_SHA384:
++		ctx->hashalg = crypto_alloc_shash("sha384", 0, 0);
++		break;
++
++	case OTX2_CPT_SHA512:
++		ctx->hashalg = crypto_alloc_shash("sha512", 0, 0);
++		break;
++	}
++
++	if (IS_ERR(ctx->hashalg))
++		return PTR_ERR(ctx->hashalg);
++
++	if (ctx->hashalg) {
++		ctx->sdesc = alloc_sdesc(ctx->hashalg);
++		if (!ctx->sdesc) {
++			crypto_free_shash(ctx->hashalg);
++			return -ENOMEM;
++		}
++	}
++
+ 	/*
+ 	 * When selected cipher is NULL we use HMAC opcode instead of
+ 	 * FLEXICRYPTO opcode therefore we don't need to use HASH algorithms
+ 	 * for calculating ipad and opad
+ 	 */
+-	if (ctx->cipher_type != OTX2_CPT_CIPHER_NULL) {
+-		switch (ctx->mac_type) {
+-		case OTX2_CPT_SHA1:
+-			ctx->hashalg = crypto_alloc_shash("sha1", 0,
+-							  CRYPTO_ALG_ASYNC);
+-			if (IS_ERR(ctx->hashalg))
+-				return PTR_ERR(ctx->hashalg);
+-			break;
+-
+-		case OTX2_CPT_SHA256:
+-			ctx->hashalg = crypto_alloc_shash("sha256", 0,
+-							  CRYPTO_ALG_ASYNC);
+-			if (IS_ERR(ctx->hashalg))
+-				return PTR_ERR(ctx->hashalg);
+-			break;
++	if (ctx->cipher_type != OTX2_CPT_CIPHER_NULL && ctx->hashalg) {
++		int ss = crypto_shash_statesize(ctx->hashalg);
+ 
+-		case OTX2_CPT_SHA384:
+-			ctx->hashalg = crypto_alloc_shash("sha384", 0,
+-							  CRYPTO_ALG_ASYNC);
+-			if (IS_ERR(ctx->hashalg))
+-				return PTR_ERR(ctx->hashalg);
+-			break;
++		ctx->ipad = kzalloc(ss, GFP_KERNEL);
++		if (!ctx->ipad) {
++			kfree(ctx->sdesc);
++			crypto_free_shash(ctx->hashalg);
++			return -ENOMEM;
++		}
+ 
+-		case OTX2_CPT_SHA512:
+-			ctx->hashalg = crypto_alloc_shash("sha512", 0,
+-							  CRYPTO_ALG_ASYNC);
+-			if (IS_ERR(ctx->hashalg))
+-				return PTR_ERR(ctx->hashalg);
+-			break;
++		ctx->opad = kzalloc(ss, GFP_KERNEL);
++		if (!ctx->opad) {
++			kfree(ctx->ipad);
++			kfree(ctx->sdesc);
++			crypto_free_shash(ctx->hashalg);
++			return -ENOMEM;
+ 		}
+ 	}
+ 	switch (ctx->cipher_type) {
+@@ -713,8 +730,7 @@ static void otx2_cpt_aead_exit(struct crypto_aead *tfm)
+ 
+ 	kfree(ctx->ipad);
+ 	kfree(ctx->opad);
+-	if (ctx->hashalg)
+-		crypto_free_shash(ctx->hashalg);
++	crypto_free_shash(ctx->hashalg);
+ 	kfree(ctx->sdesc);
+ 
+ 	if (ctx->fbk_cipher) {
+@@ -788,7 +804,7 @@ static inline void swap_data64(void *buf, u32 len)
+ 		cpu_to_be64s(src);
+ }
+ 
+-static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
++static int swap_pad(u8 mac_type, u8 *pad)
+ {
+ 	struct sha512_state *sha512;
+ 	struct sha256_state *sha256;
+@@ -796,22 +812,19 @@ static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
+ 
+ 	switch (mac_type) {
+ 	case OTX2_CPT_SHA1:
+-		sha1 = (struct sha1_state *) in_pad;
++		sha1 = (struct sha1_state *)pad;
+ 		swap_data32(sha1->state, SHA1_DIGEST_SIZE);
+-		memcpy(out_pad, &sha1->state, SHA1_DIGEST_SIZE);
+ 		break;
+ 
+ 	case OTX2_CPT_SHA256:
+-		sha256 = (struct sha256_state *) in_pad;
++		sha256 = (struct sha256_state *)pad;
+ 		swap_data32(sha256->state, SHA256_DIGEST_SIZE);
+-		memcpy(out_pad, &sha256->state, SHA256_DIGEST_SIZE);
+ 		break;
+ 
+ 	case OTX2_CPT_SHA384:
+ 	case OTX2_CPT_SHA512:
+-		sha512 = (struct sha512_state *) in_pad;
++		sha512 = (struct sha512_state *)pad;
+ 		swap_data64(sha512->state, SHA512_DIGEST_SIZE);
+-		memcpy(out_pad, &sha512->state, SHA512_DIGEST_SIZE);
+ 		break;
+ 
+ 	default:
+@@ -821,55 +834,54 @@ static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
+ 	return 0;
+ }
+ 
+-static int aead_hmac_init(struct crypto_aead *cipher)
++static int aead_hmac_init(struct crypto_aead *cipher,
++			  struct crypto_authenc_keys *keys)
+ {
+ 	struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
+-	int state_size = crypto_shash_statesize(ctx->hashalg);
+ 	int ds = crypto_shash_digestsize(ctx->hashalg);
+ 	int bs = crypto_shash_blocksize(ctx->hashalg);
+-	int authkeylen = ctx->auth_key_len;
++	int authkeylen = keys->authkeylen;
+ 	u8 *ipad = NULL, *opad = NULL;
+-	int ret = 0, icount = 0;
++	int icount = 0;
++	int ret;
+ 
+-	ctx->sdesc = alloc_sdesc(ctx->hashalg);
+-	if (!ctx->sdesc)
+-		return -ENOMEM;
++	if (authkeylen > bs) {
++		ret = crypto_shash_digest(&ctx->sdesc->shash, keys->authkey,
++					  authkeylen, ctx->key);
++		if (ret)
++			goto calc_fail;
+ 
+-	ctx->ipad = kzalloc(bs, GFP_KERNEL);
+-	if (!ctx->ipad) {
+-		ret = -ENOMEM;
+-		goto calc_fail;
+-	}
++		authkeylen = ds;
++	} else
++		memcpy(ctx->key, keys->authkey, authkeylen);
+ 
+-	ctx->opad = kzalloc(bs, GFP_KERNEL);
+-	if (!ctx->opad) {
+-		ret = -ENOMEM;
+-		goto calc_fail;
+-	}
++	ctx->enc_key_len = keys->enckeylen;
++	ctx->auth_key_len = authkeylen;
+ 
+-	ipad = kzalloc(state_size, GFP_KERNEL);
+-	if (!ipad) {
+-		ret = -ENOMEM;
+-		goto calc_fail;
+-	}
++	if (ctx->cipher_type == OTX2_CPT_CIPHER_NULL)
++		return keys->enckeylen ? -EINVAL : 0;
+ 
+-	opad = kzalloc(state_size, GFP_KERNEL);
+-	if (!opad) {
+-		ret = -ENOMEM;
+-		goto calc_fail;
++	switch (keys->enckeylen) {
++	case AES_KEYSIZE_128:
++		ctx->key_type = OTX2_CPT_AES_128_BIT;
++		break;
++	case AES_KEYSIZE_192:
++		ctx->key_type = OTX2_CPT_AES_192_BIT;
++		break;
++	case AES_KEYSIZE_256:
++		ctx->key_type = OTX2_CPT_AES_256_BIT;
++		break;
++	default:
++		/* Invalid key length */
++		return -EINVAL;
+ 	}
+ 
+-	if (authkeylen > bs) {
+-		ret = crypto_shash_digest(&ctx->sdesc->shash, ctx->key,
+-					  authkeylen, ipad);
+-		if (ret)
+-			goto calc_fail;
++	memcpy(ctx->key + authkeylen, keys->enckey, keys->enckeylen);
+ 
+-		authkeylen = ds;
+-	} else {
+-		memcpy(ipad, ctx->key, authkeylen);
+-	}
++	ipad = ctx->ipad;
++	opad = ctx->opad;
+ 
++	memcpy(ipad, ctx->key, authkeylen);
+ 	memset(ipad + authkeylen, 0, bs - authkeylen);
+ 	memcpy(opad, ipad, bs);
+ 
+@@ -887,7 +899,7 @@ static int aead_hmac_init(struct crypto_aead *cipher)
+ 	crypto_shash_init(&ctx->sdesc->shash);
+ 	crypto_shash_update(&ctx->sdesc->shash, ipad, bs);
+ 	crypto_shash_export(&ctx->sdesc->shash, ipad);
+-	ret = copy_pad(ctx->mac_type, ctx->ipad, ipad);
++	ret = swap_pad(ctx->mac_type, ipad);
+ 	if (ret)
+ 		goto calc_fail;
+ 
+@@ -895,25 +907,9 @@ static int aead_hmac_init(struct crypto_aead *cipher)
+ 	crypto_shash_init(&ctx->sdesc->shash);
+ 	crypto_shash_update(&ctx->sdesc->shash, opad, bs);
+ 	crypto_shash_export(&ctx->sdesc->shash, opad);
+-	ret = copy_pad(ctx->mac_type, ctx->opad, opad);
+-	if (ret)
+-		goto calc_fail;
+-
+-	kfree(ipad);
+-	kfree(opad);
+-
+-	return 0;
++	ret = swap_pad(ctx->mac_type, opad);
+ 
+ calc_fail:
+-	kfree(ctx->ipad);
+-	ctx->ipad = NULL;
+-	kfree(ctx->opad);
+-	ctx->opad = NULL;
+-	kfree(ipad);
+-	kfree(opad);
+-	kfree(ctx->sdesc);
+-	ctx->sdesc = NULL;
+-
+ 	return ret;
+ }
+ 
+@@ -921,87 +917,17 @@ static int otx2_cpt_aead_cbc_aes_sha_setkey(struct crypto_aead *cipher,
+ 					    const unsigned char *key,
+ 					    unsigned int keylen)
+ {
+-	struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
+-	struct crypto_authenc_key_param *param;
+-	int enckeylen = 0, authkeylen = 0;
+-	struct rtattr *rta = (void *)key;
+-
+-	if (!RTA_OK(rta, keylen))
+-		return -EINVAL;
++	struct crypto_authenc_keys authenc_keys;
+ 
+-	if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
+-		return -EINVAL;
+-
+-	if (RTA_PAYLOAD(rta) < sizeof(*param))
+-		return -EINVAL;
+-
+-	param = RTA_DATA(rta);
+-	enckeylen = be32_to_cpu(param->enckeylen);
+-	key += RTA_ALIGN(rta->rta_len);
+-	keylen -= RTA_ALIGN(rta->rta_len);
+-	if (keylen < enckeylen)
+-		return -EINVAL;
+-
+-	if (keylen > OTX2_CPT_MAX_KEY_SIZE)
+-		return -EINVAL;
+-
+-	authkeylen = keylen - enckeylen;
+-	memcpy(ctx->key, key, keylen);
+-
+-	switch (enckeylen) {
+-	case AES_KEYSIZE_128:
+-		ctx->key_type = OTX2_CPT_AES_128_BIT;
+-		break;
+-	case AES_KEYSIZE_192:
+-		ctx->key_type = OTX2_CPT_AES_192_BIT;
+-		break;
+-	case AES_KEYSIZE_256:
+-		ctx->key_type = OTX2_CPT_AES_256_BIT;
+-		break;
+-	default:
+-		/* Invalid key length */
+-		return -EINVAL;
+-	}
+-
+-	ctx->enc_key_len = enckeylen;
+-	ctx->auth_key_len = authkeylen;
+-
+-	return aead_hmac_init(cipher);
++	return crypto_authenc_extractkeys(&authenc_keys, key, keylen) ?:
++	       aead_hmac_init(cipher, &authenc_keys);
+ }
+ 
+ static int otx2_cpt_aead_ecb_null_sha_setkey(struct crypto_aead *cipher,
+ 					     const unsigned char *key,
+ 					     unsigned int keylen)
+ {
+-	struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
+-	struct crypto_authenc_key_param *param;
+-	struct rtattr *rta = (void *)key;
+-	int enckeylen = 0;
+-
+-	if (!RTA_OK(rta, keylen))
+-		return -EINVAL;
+-
+-	if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
+-		return -EINVAL;
+-
+-	if (RTA_PAYLOAD(rta) < sizeof(*param))
+-		return -EINVAL;
+-
+-	param = RTA_DATA(rta);
+-	enckeylen = be32_to_cpu(param->enckeylen);
+-	key += RTA_ALIGN(rta->rta_len);
+-	keylen -= RTA_ALIGN(rta->rta_len);
+-	if (enckeylen != 0)
+-		return -EINVAL;
+-
+-	if (keylen > OTX2_CPT_MAX_KEY_SIZE)
+-		return -EINVAL;
+-
+-	memcpy(ctx->key, key, keylen);
+-	ctx->enc_key_len = enckeylen;
+-	ctx->auth_key_len = keylen;
+-
+-	return 0;
++	return otx2_cpt_aead_cbc_aes_sha_setkey(cipher, key, keylen);
+ }
+ 
+ static int otx2_cpt_aead_gcm_aes_setkey(struct crypto_aead *cipher,
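The aead_hmac_init() rework above precomputes the HMAC inner and outer pads directly into ctx->ipad/ctx->opad instead of staging them in temporary buffers. For reference, the standard RFC 2104 pad construction looks roughly like this; the block size and buffer names are illustrative:

#include <stdint.h>
#include <string.h>

#define DEMO_BLOCK_SIZE 64	/* SHA-1/SHA-256 block size; SHA-384/512 use 128 */

/* Build the HMAC inner/outer pad blocks from a key already <= block size. */
static void demo_hmac_pads(const uint8_t *key, size_t keylen,
			   uint8_t ipad[DEMO_BLOCK_SIZE],
			   uint8_t opad[DEMO_BLOCK_SIZE])
{
	size_t i;

	memcpy(ipad, key, keylen);
	memset(ipad + keylen, 0, DEMO_BLOCK_SIZE - keylen);
	memcpy(opad, ipad, DEMO_BLOCK_SIZE);

	for (i = 0; i < DEMO_BLOCK_SIZE; i++) {
		ipad[i] ^= 0x36;	/* inner pad constant */
		opad[i] ^= 0x5c;	/* outer pad constant */
	}
	/*
	 * The driver then hashes one block of ipad/opad, exports the partial
	 * state, and byte-swaps it via swap_pad() for the hardware engine.
	 */
}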
+diff --git a/drivers/firmware/sysfb.c b/drivers/firmware/sysfb.c
+index 02a07d3d0d40a9..a3df782fa687b0 100644
+--- a/drivers/firmware/sysfb.c
++++ b/drivers/firmware/sysfb.c
+@@ -67,9 +67,11 @@ static bool sysfb_unregister(void)
+ void sysfb_disable(struct device *dev)
+ {
+ 	struct screen_info *si = &screen_info;
++	struct device *parent;
+ 
+ 	mutex_lock(&disable_lock);
+-	if (!dev || dev == sysfb_parent_dev(si)) {
++	parent = sysfb_parent_dev(si);
++	if (!dev || !parent || dev == parent) {
+ 		sysfb_unregister();
+ 		disabled = true;
+ 	}
+diff --git a/drivers/firmware/tegra/bpmp.c b/drivers/firmware/tegra/bpmp.c
+index c1590d3aa9cb78..c3a1dc3449617f 100644
+--- a/drivers/firmware/tegra/bpmp.c
++++ b/drivers/firmware/tegra/bpmp.c
+@@ -24,12 +24,6 @@
+ #define MSG_RING	BIT(1)
+ #define TAG_SZ		32
+ 
+-static inline struct tegra_bpmp *
+-mbox_client_to_bpmp(struct mbox_client *client)
+-{
+-	return container_of(client, struct tegra_bpmp, mbox.client);
+-}
+-
+ static inline const struct tegra_bpmp_ops *
+ channel_to_ops(struct tegra_bpmp_channel *channel)
+ {
+diff --git a/drivers/gpio/gpio-davinci.c b/drivers/gpio/gpio-davinci.c
+index 1d0175d6350b78..0ecfa7de5ce26e 100644
+--- a/drivers/gpio/gpio-davinci.c
++++ b/drivers/gpio/gpio-davinci.c
+@@ -289,7 +289,7 @@ static int davinci_gpio_probe(struct platform_device *pdev)
+  * serve as EDMA event triggers.
+  */
+ 
+-static void gpio_irq_disable(struct irq_data *d)
++static void gpio_irq_mask(struct irq_data *d)
+ {
+ 	struct davinci_gpio_regs __iomem *g = irq2regs(d);
+ 	uintptr_t mask = (uintptr_t)irq_data_get_irq_handler_data(d);
+@@ -298,7 +298,7 @@ static void gpio_irq_disable(struct irq_data *d)
+ 	writel_relaxed(mask, &g->clr_rising);
+ }
+ 
+-static void gpio_irq_enable(struct irq_data *d)
++static void gpio_irq_unmask(struct irq_data *d)
+ {
+ 	struct davinci_gpio_regs __iomem *g = irq2regs(d);
+ 	uintptr_t mask = (uintptr_t)irq_data_get_irq_handler_data(d);
+@@ -324,8 +324,8 @@ static int gpio_irq_type(struct irq_data *d, unsigned trigger)
+ 
+ static struct irq_chip gpio_irqchip = {
+ 	.name		= "GPIO",
+-	.irq_enable	= gpio_irq_enable,
+-	.irq_disable	= gpio_irq_disable,
++	.irq_unmask	= gpio_irq_unmask,
++	.irq_mask	= gpio_irq_mask,
+ 	.irq_set_type	= gpio_irq_type,
+ 	.flags		= IRQCHIP_SET_TYPE_MASKED | IRQCHIP_SKIP_SET_WAKE,
+ };
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
+index 9baee7c246b6d3..a513819b723115 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
+@@ -80,6 +80,9 @@ static void aca_banks_release(struct aca_banks *banks)
+ {
+ 	struct aca_bank_node *node, *tmp;
+ 
++	if (list_empty(&banks->list))
++		return;
++
+ 	list_for_each_entry_safe(node, tmp, &banks->list, node) {
+ 		list_del(&node->node);
+ 		kvfree(node);
+@@ -562,9 +565,13 @@ static void aca_error_fini(struct aca_error *aerr)
+ 	struct aca_bank_error *bank_error, *tmp;
+ 
+ 	mutex_lock(&aerr->lock);
++	if (list_empty(&aerr->list))
++		goto out_unlock;
++
+ 	list_for_each_entry_safe(bank_error, tmp, &aerr->list, node)
+ 		aca_bank_error_remove(aerr, bank_error);
+ 
++out_unlock:
+ 	mutex_destroy(&aerr->lock);
+ }
+ 
+@@ -680,6 +687,9 @@ static void aca_manager_fini(struct aca_handle_manager *mgr)
+ {
+ 	struct aca_handle *handle, *tmp;
+ 
++	if (list_empty(&mgr->list))
++		return;
++
+ 	list_for_each_entry_safe(handle, tmp, &mgr->list, node)
+ 		amdgpu_aca_remove_handle(handle);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index e3738d4172458c..26ecca3e8e9003 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -360,15 +360,15 @@ int amdgpu_amdkfd_alloc_gtt_mem(struct amdgpu_device *adev, size_t size,
+ 	return r;
+ }
+ 
+-void amdgpu_amdkfd_free_gtt_mem(struct amdgpu_device *adev, void *mem_obj)
++void amdgpu_amdkfd_free_gtt_mem(struct amdgpu_device *adev, void **mem_obj)
+ {
+-	struct amdgpu_bo *bo = (struct amdgpu_bo *) mem_obj;
++	struct amdgpu_bo **bo = (struct amdgpu_bo **) mem_obj;
+ 
+-	amdgpu_bo_reserve(bo, true);
+-	amdgpu_bo_kunmap(bo);
+-	amdgpu_bo_unpin(bo);
+-	amdgpu_bo_unreserve(bo);
+-	amdgpu_bo_unref(&(bo));
++	amdgpu_bo_reserve(*bo, true);
++	amdgpu_bo_kunmap(*bo);
++	amdgpu_bo_unpin(*bo);
++	amdgpu_bo_unreserve(*bo);
++	amdgpu_bo_unref(bo);
+ }
+ 
+ int amdgpu_amdkfd_alloc_gws(struct amdgpu_device *adev, size_t size,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+index 1de021ebdd467b..ee16d8a9ba5596 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+@@ -233,7 +233,7 @@ int amdgpu_amdkfd_bo_validate_and_fence(struct amdgpu_bo *bo,
+ int amdgpu_amdkfd_alloc_gtt_mem(struct amdgpu_device *adev, size_t size,
+ 				void **mem_obj, uint64_t *gpu_addr,
+ 				void **cpu_ptr, bool mqd_gfx9);
+-void amdgpu_amdkfd_free_gtt_mem(struct amdgpu_device *adev, void *mem_obj);
++void amdgpu_amdkfd_free_gtt_mem(struct amdgpu_device *adev, void **mem_obj);
+ int amdgpu_amdkfd_alloc_gws(struct amdgpu_device *adev, size_t size,
+ 				void **mem_obj);
+ void amdgpu_amdkfd_free_gws(struct amdgpu_device *adev, void *mem_obj);
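The amdgpu_amdkfd_free_gtt_mem() prototype change above passes the BO handle by reference so the helper can clear the caller's pointer after freeing it, which is why every call site now takes the address of its local. A small, driver-agnostic sketch of that free-and-clear pattern (the demo names are placeholders):

#include <stdlib.h>

/* Free the object *objp refers to and NULL the caller's pointer. */
static void demo_free_and_clear(void **objp)
{
	free(*objp);
	*objp = NULL;	/* the caller's variable is cleared, not a local copy */
}

/*
 * Usage:
 *	void *mem = malloc(64);
 *	demo_free_and_clear(&mem);	// mem == NULL afterwards
 *	demo_free_and_clear(&mem);	// second call is a harmless free(NULL)
 */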
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+index 7dc102f0bc1d3c..0c8975ac5af9ed 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+@@ -1018,8 +1018,9 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
+ 		if (clock_type == COMPUTE_ENGINE_PLL_PARAM) {
+ 			args.v3.ulClockParams = cpu_to_le32((clock_type << 24) | clock);
+ 
+-			amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+-				sizeof(args));
++			if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++			    index, (uint32_t *)&args, sizeof(args)))
++				return -EINVAL;
+ 
+ 			dividers->post_div = args.v3.ucPostDiv;
+ 			dividers->enable_post_div = (args.v3.ucCntlFlag &
+@@ -1039,8 +1040,9 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
+ 			if (strobe_mode)
+ 				args.v5.ucInputFlag = ATOM_PLL_INPUT_FLAG_PLL_STROBE_MODE_EN;
+ 
+-			amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+-				sizeof(args));
++			if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++			    index, (uint32_t *)&args, sizeof(args)))
++				return -EINVAL;
+ 
+ 			dividers->post_div = args.v5.ucPostDiv;
+ 			dividers->enable_post_div = (args.v5.ucCntlFlag &
+@@ -1058,8 +1060,9 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
+ 		/* fusion */
+ 		args.v4.ulClock = cpu_to_le32(clock);	/* 10 khz */
+ 
+-		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+-			sizeof(args));
++		if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++		    index, (uint32_t *)&args, sizeof(args)))
++			return -EINVAL;
+ 
+ 		dividers->post_divider = dividers->post_div = args.v4.ucPostDiv;
+ 		dividers->real_clock = le32_to_cpu(args.v4.ulClock);
+@@ -1070,8 +1073,9 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
+ 		args.v6_in.ulClock.ulComputeClockFlag = clock_type;
+ 		args.v6_in.ulClock.ulClockFreq = cpu_to_le32(clock);	/* 10 khz */
+ 
+-		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+-			sizeof(args));
++		if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++		    index, (uint32_t *)&args, sizeof(args)))
++			return -EINVAL;
+ 
+ 		dividers->whole_fb_div = le16_to_cpu(args.v6_out.ulFbDiv.usFbDiv);
+ 		dividers->frac_fb_div = le16_to_cpu(args.v6_out.ulFbDiv.usFbDivFrac);
+@@ -1113,8 +1117,9 @@ int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
+ 			if (strobe_mode)
+ 				args.ucInputFlag |= MPLL_INPUT_FLAG_STROBE_MODE_EN;
+ 
+-			amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+-				sizeof(args));
++			if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++			    index, (uint32_t *)&args, sizeof(args)))
++				return -EINVAL;
+ 
+ 			mpll_param->clkfrac = le16_to_cpu(args.ulFbDiv.usFbDivFrac);
+ 			mpll_param->clkf = le16_to_cpu(args.ulFbDiv.usFbDiv);
+@@ -1211,8 +1216,9 @@ int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
+ 		args.v2.ucVoltageMode = 0;
+ 		args.v2.usVoltageLevel = 0;
+ 
+-		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+-			sizeof(args));
++		if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++		    index, (uint32_t *)&args, sizeof(args)))
++			return -EINVAL;
+ 
+ 		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
+ 		break;
+@@ -1221,8 +1227,9 @@ int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
+ 		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
+ 		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
+ 
+-		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args,
+-			sizeof(args));
++		if (amdgpu_atom_execute_table(adev->mode_info.atom_context,
++		    index, (uint32_t *)&args, sizeof(args)))
++			return -EINVAL;
+ 
+ 		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
+ 		break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 6dfdff58bffd1f..78b3c067fea7e2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -263,6 +263,10 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
+ 			if (size < sizeof(struct drm_amdgpu_bo_list_in))
+ 				goto free_partial_kdata;
+ 
++			/* Only a single BO list is allowed to simplify handling. */
++			if (p->bo_list)
++				ret = -EINVAL;
++
+ 			ret = amdgpu_cs_p1_bo_handles(p, p->chunks[i].kdata);
+ 			if (ret)
+ 				goto free_partial_kdata;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index e92bdc9a39d353..1935b211b527df 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -816,8 +816,11 @@ int amdgpu_gfx_ras_late_init(struct amdgpu_device *adev, struct ras_common_if *r
+ 	int r;
+ 
+ 	if (amdgpu_ras_is_supported(adev, ras_block->block)) {
+-		if (!amdgpu_persistent_edc_harvesting_supported(adev))
+-			amdgpu_ras_reset_error_status(adev, AMDGPU_RAS_BLOCK__GFX);
++		if (!amdgpu_persistent_edc_harvesting_supported(adev)) {
++			r = amdgpu_ras_reset_error_status(adev, AMDGPU_RAS_BLOCK__GFX);
++			if (r)
++				return r;
++		}
+ 
+ 		r = amdgpu_ras_block_late_init(adev, ras_block);
+ 		if (r)
+@@ -961,7 +964,10 @@ uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg, uint32_t xcc_
+ 		pr_err("critical bug! too many kiq readers\n");
+ 		goto failed_unlock;
+ 	}
+-	amdgpu_ring_alloc(ring, 32);
++	r = amdgpu_ring_alloc(ring, 32);
++	if (r)
++		goto failed_unlock;
++
+ 	amdgpu_ring_emit_rreg(ring, reg, reg_val_offs);
+ 	r = amdgpu_fence_emit_polling(ring, &seq, MAX_KIQ_REG_WAIT);
+ 	if (r)
+@@ -1027,7 +1033,10 @@ void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v, uint3
+ 	}
+ 
+ 	spin_lock_irqsave(&kiq->ring_lock, flags);
+-	amdgpu_ring_alloc(ring, 32);
++	r = amdgpu_ring_alloc(ring, 32);
++	if (r)
++		goto failed_unlock;
++
+ 	amdgpu_ring_emit_wreg(ring, reg, v);
+ 	r = amdgpu_fence_emit_polling(ring, &seq, MAX_KIQ_REG_WAIT);
+ 	if (r)
+@@ -1063,6 +1072,7 @@ void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v, uint3
+ 
+ failed_undo:
+ 	amdgpu_ring_undo(ring);
++failed_unlock:
+ 	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ failed_kiq_write:
+ 	dev_err(adev->dev, "failed to write reg:%x\n", reg);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 977cde6d13626e..6d4e774b6cedc4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -43,6 +43,7 @@
+ #include "amdgpu_gem.h"
+ #include "amdgpu_display.h"
+ #include "amdgpu_ras.h"
++#include "amdgpu_reset.h"
+ #include "amd_pcie.h"
+ 
+ void amdgpu_unregister_gpu_instance(struct amdgpu_device *adev)
+@@ -778,6 +779,7 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 				    ? -EFAULT : 0;
+ 	}
+ 	case AMDGPU_INFO_READ_MMR_REG: {
++		int ret = 0;
+ 		unsigned int n, alloc_size;
+ 		uint32_t *regs;
+ 		unsigned int se_num = (info->read_mmr_reg.instance >>
+@@ -787,24 +789,37 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 				   AMDGPU_INFO_MMR_SH_INDEX_SHIFT) &
+ 				  AMDGPU_INFO_MMR_SH_INDEX_MASK;
+ 
++		if (!down_read_trylock(&adev->reset_domain->sem))
++			return -ENOENT;
++
+ 		/* set full masks if the userspace set all bits
+ 		 * in the bitfields
+ 		 */
+-		if (se_num == AMDGPU_INFO_MMR_SE_INDEX_MASK)
++		if (se_num == AMDGPU_INFO_MMR_SE_INDEX_MASK) {
+ 			se_num = 0xffffffff;
+-		else if (se_num >= AMDGPU_GFX_MAX_SE)
+-			return -EINVAL;
+-		if (sh_num == AMDGPU_INFO_MMR_SH_INDEX_MASK)
++		} else if (se_num >= AMDGPU_GFX_MAX_SE) {
++			ret = -EINVAL;
++			goto out;
++		}
++
++		if (sh_num == AMDGPU_INFO_MMR_SH_INDEX_MASK) {
+ 			sh_num = 0xffffffff;
+-		else if (sh_num >= AMDGPU_GFX_MAX_SH_PER_SE)
+-			return -EINVAL;
++		} else if (sh_num >= AMDGPU_GFX_MAX_SH_PER_SE) {
++			ret = -EINVAL;
++			goto out;
++		}
+ 
+-		if (info->read_mmr_reg.count > 128)
+-			return -EINVAL;
++		if (info->read_mmr_reg.count > 128) {
++			ret = -EINVAL;
++			goto out;
++		}
+ 
+ 		regs = kmalloc_array(info->read_mmr_reg.count, sizeof(*regs), GFP_KERNEL);
+-		if (!regs)
+-			return -ENOMEM;
++		if (!regs) {
++			ret = -ENOMEM;
++			goto out;
++		}
++
+ 		alloc_size = info->read_mmr_reg.count * sizeof(*regs);
+ 
+ 		amdgpu_gfx_off_ctrl(adev, false);
+@@ -816,13 +831,17 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 					      info->read_mmr_reg.dword_offset + i);
+ 				kfree(regs);
+ 				amdgpu_gfx_off_ctrl(adev, true);
+-				return -EFAULT;
++				ret = -EFAULT;
++				goto out;
+ 			}
+ 		}
+ 		amdgpu_gfx_off_ctrl(adev, true);
+ 		n = copy_to_user(out, regs, min(size, alloc_size));
+ 		kfree(regs);
+-		return n ? -EFAULT : 0;
++		ret = (n ? -EFAULT : 0);
++out:
++		up_read(&adev->reset_domain->sem);
++		return ret;
+ 	}
+ 	case AMDGPU_INFO_DEV_INFO: {
+ 		struct drm_amdgpu_info_device *dev_info;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.h
+index 90138bc5f03d1c..32775260556f44 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.h
+@@ -180,6 +180,6 @@ amdgpu_get_next_xcp(struct amdgpu_xcp_mgr *xcp_mgr, int *from)
+ 
+ #define for_each_xcp(xcp_mgr, xcp, i)                            \
+ 	for (i = 0, xcp = amdgpu_get_next_xcp(xcp_mgr, &i); xcp; \
+-	     xcp = amdgpu_get_next_xcp(xcp_mgr, &i))
++	     ++i, xcp = amdgpu_get_next_xcp(xcp_mgr, &i))
+ 
+ #endif
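The for_each_xcp() change above adds the missing ++i to the loop's step expression. Assuming amdgpu_get_next_xcp() returns the first valid partition at or after index i (an inference from the hunk, not verified against the helper's body), the old form would fetch the same partition on every pass. A self-contained illustration of that cursor-style iterator:

#include <stdio.h>

#define NUM_ITEMS 4
static const int items[NUM_ITEMS] = { 10, 0, 30, 40 };	/* 0 == "invalid" slot */

/* Return a pointer to the first valid item at or after *from, or NULL. */
static const int *get_next_item(int *from)
{
	while (*from < NUM_ITEMS) {
		if (items[*from])
			return &items[*from];
		++(*from);
	}
	return NULL;
}

/* Without the "++(i)," step this loop would spin on the first valid item. */
#define for_each_item(it, i) \
	for ((i) = 0, (it) = get_next_item(&(i)); (it); ++(i), (it) = get_next_item(&(i)))

int main(void)
{
	const int *it;
	int i;

	for_each_item(it, i)
		printf("item[%d] = %d\n", i, *it);
	return 0;
}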
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 536287ddd2ec13..6204336750c6ab 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -8897,7 +8897,9 @@ static void gfx_v10_0_ring_soft_recovery(struct amdgpu_ring *ring,
+ 	value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+ 	value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+ 	value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
++	amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
+ 	WREG32_SOC15(GC, 0, mmSQ_CMD, value);
++	amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
+ }
+ 
+ static void
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index 4ba8eb45ac1748..6b5cd0dcd25f46 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -4497,6 +4497,8 @@ static int gfx_v11_0_soft_reset(void *handle)
+ 	int r, i, j, k;
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	gfx_v11_0_set_safe_mode(adev, 0);
++
+ 	tmp = RREG32_SOC15(GC, 0, regCP_INT_CNTL);
+ 	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL, CMP_BUSY_INT_ENABLE, 0);
+ 	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL, CNTX_BUSY_INT_ENABLE, 0);
+@@ -4504,8 +4506,6 @@ static int gfx_v11_0_soft_reset(void *handle)
+ 	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL, GFX_IDLE_INT_ENABLE, 0);
+ 	WREG32_SOC15(GC, 0, regCP_INT_CNTL, tmp);
+ 
+-	gfx_v11_0_set_safe_mode(adev, 0);
+-
+ 	mutex_lock(&adev->srbm_mutex);
+ 	for (i = 0; i < adev->gfx.mec.num_mec; ++i) {
+ 		for (j = 0; j < adev->gfx.mec.num_queue_per_pipe; j++) {
+@@ -5792,7 +5792,9 @@ static void gfx_v11_0_ring_soft_recovery(struct amdgpu_ring *ring,
+ 	value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+ 	value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+ 	value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
++	amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
+ 	WREG32_SOC15(GC, 0, regSQ_CMD, value);
++	amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
+ }
+ 
+ static void
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 3c8c5abf35abde..d8d3d2c93d8ee1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1172,6 +1172,10 @@ static const struct amdgpu_gfxoff_quirk amdgpu_gfxoff_quirk_list[] = {
+ 	{ 0x1002, 0x15dd, 0x1002, 0x15dd, 0xc6 },
+ 	/* Apple MacBook Pro (15-inch, 2019) Radeon Pro Vega 20 4 GB */
+ 	{ 0x1002, 0x69af, 0x106b, 0x019a, 0xc0 },
++	/* https://bbs.openkylin.top/t/topic/171497 */
++	{ 0x1002, 0x15d8, 0x19e5, 0x3e14, 0xc2 },
++	/* HP 705G4 DM with R5 2400G */
++	{ 0x1002, 0x15dd, 0x103c, 0x8464, 0xd6 },
+ 	{ 0, 0, 0, 0, 0 },
+ };
+ 
+@@ -2473,7 +2477,7 @@ static void gfx_v9_0_enable_gui_idle_interrupt(struct amdgpu_device *adev,
+ 	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_BUSY_INT_ENABLE, enable ? 1 : 0);
+ 	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_EMPTY_INT_ENABLE, enable ? 1 : 0);
+ 	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CMP_BUSY_INT_ENABLE, enable ? 1 : 0);
+-	if(adev->gfx.num_gfx_rings)
++	if (adev->gfx.num_gfx_rings)
+ 		tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, GFX_IDLE_INT_ENABLE, enable ? 1 : 0);
+ 
+ 	WREG32_SOC15(GC, 0, mmCP_INT_CNTL_RING0, tmp);
+@@ -5697,7 +5701,9 @@ static void gfx_v9_0_ring_soft_recovery(struct amdgpu_ring *ring, unsigned vmid)
+ 	value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+ 	value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+ 	value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
++	amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
+ 	WREG32_SOC15(GC, 0, mmSQ_CMD, value);
++	amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
+ }
+ 
+ static void gfx_v9_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
+@@ -5768,17 +5774,59 @@ static void gfx_v9_0_set_compute_eop_interrupt_state(struct amdgpu_device *adev,
+ 	}
+ }
+ 
++static u32 gfx_v9_0_get_cpc_int_cntl(struct amdgpu_device *adev,
++				     int me, int pipe)
++{
++	/*
++	 * amdgpu controls only the first MEC. That's why this function only
++	 * handles the setting of interrupts for this specific MEC. All other
++	 * pipes' interrupts are set by amdkfd.
++	 */
++	if (me != 1)
++		return 0;
++
++	switch (pipe) {
++	case 0:
++		return SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE0_INT_CNTL);
++	case 1:
++		return SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE1_INT_CNTL);
++	case 2:
++		return SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE2_INT_CNTL);
++	case 3:
++		return SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE3_INT_CNTL);
++	default:
++		return 0;
++	}
++}
++
+ static int gfx_v9_0_set_priv_reg_fault_state(struct amdgpu_device *adev,
+ 					     struct amdgpu_irq_src *source,
+ 					     unsigned type,
+ 					     enum amdgpu_interrupt_state state)
+ {
++	u32 cp_int_cntl_reg, cp_int_cntl;
++	int i, j;
++
+ 	switch (state) {
+ 	case AMDGPU_IRQ_STATE_DISABLE:
+ 	case AMDGPU_IRQ_STATE_ENABLE:
+ 		WREG32_FIELD15(GC, 0, CP_INT_CNTL_RING0,
+ 			       PRIV_REG_INT_ENABLE,
+ 			       state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++		for (i = 0; i < adev->gfx.mec.num_mec; i++) {
++			for (j = 0; j < adev->gfx.mec.num_pipe_per_mec; j++) {
++				/* MECs start at 1 */
++				cp_int_cntl_reg = gfx_v9_0_get_cpc_int_cntl(adev, i + 1, j);
++
++				if (cp_int_cntl_reg) {
++					cp_int_cntl = RREG32_SOC15_IP(GC, cp_int_cntl_reg);
++					cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_ME1_PIPE0_INT_CNTL,
++								    PRIV_REG_INT_ENABLE,
++								    state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++					WREG32_SOC15_IP(GC, cp_int_cntl_reg, cp_int_cntl);
++				}
++			}
++		}
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+index f5b9f443cfdd79..2564a003526ae9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+@@ -2824,21 +2824,63 @@ static void gfx_v9_4_3_xcc_set_compute_eop_interrupt_state(
+ 	}
+ }
+ 
++static u32 gfx_v9_4_3_get_cpc_int_cntl(struct amdgpu_device *adev,
++				     int xcc_id, int me, int pipe)
++{
++	/*
++	 * amdgpu controls only the first MEC. That's why this function only
++	 * handles the setting of interrupts for this specific MEC. All other
++	 * pipes' interrupts are set by amdkfd.
++	 */
++	if (me != 1)
++		return 0;
++
++	switch (pipe) {
++	case 0:
++		return SOC15_REG_OFFSET(GC, GET_INST(GC, xcc_id), regCP_ME1_PIPE0_INT_CNTL);
++	case 1:
++		return SOC15_REG_OFFSET(GC, GET_INST(GC, xcc_id), regCP_ME1_PIPE1_INT_CNTL);
++	case 2:
++		return SOC15_REG_OFFSET(GC, GET_INST(GC, xcc_id), regCP_ME1_PIPE2_INT_CNTL);
++	case 3:
++		return SOC15_REG_OFFSET(GC, GET_INST(GC, xcc_id), regCP_ME1_PIPE3_INT_CNTL);
++	default:
++		return 0;
++	}
++}
++
+ static int gfx_v9_4_3_set_priv_reg_fault_state(struct amdgpu_device *adev,
+ 					     struct amdgpu_irq_src *source,
+ 					     unsigned type,
+ 					     enum amdgpu_interrupt_state state)
+ {
+-	int i, num_xcc;
++	u32 mec_int_cntl_reg, mec_int_cntl;
++	int i, j, k, num_xcc;
+ 
+ 	num_xcc = NUM_XCC(adev->gfx.xcc_mask);
+ 	switch (state) {
+ 	case AMDGPU_IRQ_STATE_DISABLE:
+ 	case AMDGPU_IRQ_STATE_ENABLE:
+-		for (i = 0; i < num_xcc; i++)
++		for (i = 0; i < num_xcc; i++) {
+ 			WREG32_FIELD15_PREREG(GC, GET_INST(GC, i), CP_INT_CNTL_RING0,
+-				PRIV_REG_INT_ENABLE,
+-				state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++					      PRIV_REG_INT_ENABLE,
++					      state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
++			for (j = 0; j < adev->gfx.mec.num_mec; j++) {
++				for (k = 0; k < adev->gfx.mec.num_pipe_per_mec; k++) {
++					/* MECs start at 1 */
++					mec_int_cntl_reg = gfx_v9_4_3_get_cpc_int_cntl(adev, i, j + 1, k);
++
++					if (mec_int_cntl_reg) {
++						mec_int_cntl = RREG32_XCC(mec_int_cntl_reg, i);
++						mec_int_cntl = REG_SET_FIELD(mec_int_cntl, CP_ME1_PIPE0_INT_CNTL,
++									     PRIV_REG_INT_ENABLE,
++									     state == AMDGPU_IRQ_STATE_ENABLE ?
++									     1 : 0);
++						WREG32_XCC(mec_int_cntl_reg, mec_int_cntl, i);
++					}
++				}
++			}
++		}
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index fdf171ad4a3c6b..4f260adce8c463 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -423,7 +423,7 @@ static int kfd_ioctl_create_queue(struct file *filep, struct kfd_process *p,
+ 
+ err_create_queue:
+ 	if (wptr_bo)
+-		amdgpu_amdkfd_free_gtt_mem(dev->adev, wptr_bo);
++		amdgpu_amdkfd_free_gtt_mem(dev->adev, (void **)&wptr_bo);
+ err_wptr_map_gart:
+ err_bind_process:
+ err_pdd:
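The kfd_ioctl_create_queue() hunk above frees wptr_bo through the updated helper on its err_create_queue path; the surrounding label chain is the usual kernel unwind idiom, where each label releases only what was acquired before the failure. A generic sketch of that shape (resources and names are illustrative):

#include <errno.h>
#include <stdlib.h>

static int demo_create(void)
{
	void *a, *b;
	int ret;

	a = malloc(32);
	if (!a)
		return -ENOMEM;

	b = malloc(64);
	if (!b) {
		ret = -ENOMEM;
		goto err_free_a;	/* only 'a' exists at this point */
	}

	/* ... register the queue; on failure fall through the full chain ... */
	if (0 /* registration failed */) {
		ret = -EINVAL;
		goto err_free_b;
	}

	return 0;

err_free_b:
	free(b);
err_free_a:
	free(a);
	return ret;
}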
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index afc57df421cd9c..3343079f28c903 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -863,7 +863,7 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ kfd_doorbell_error:
+ 	kfd_gtt_sa_fini(kfd);
+ kfd_gtt_sa_init_error:
+-	amdgpu_amdkfd_free_gtt_mem(kfd->adev, kfd->gtt_mem);
++	amdgpu_amdkfd_free_gtt_mem(kfd->adev, &kfd->gtt_mem);
+ alloc_gtt_mem_failure:
+ 	dev_err(kfd_device,
+ 		"device %x:%x NOT added due to errors\n",
+@@ -881,7 +881,7 @@ void kgd2kfd_device_exit(struct kfd_dev *kfd)
+ 		kfd_doorbell_fini(kfd);
+ 		ida_destroy(&kfd->doorbell_ida);
+ 		kfd_gtt_sa_fini(kfd);
+-		amdgpu_amdkfd_free_gtt_mem(kfd->adev, kfd->gtt_mem);
++		amdgpu_amdkfd_free_gtt_mem(kfd->adev, &kfd->gtt_mem);
+ 	}
+ 
+ 	kfree(kfd);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index c08b6ee252898d..dbef9eac2694f9 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -2633,7 +2633,7 @@ static void deallocate_hiq_sdma_mqd(struct kfd_node *dev,
+ {
+ 	WARN(!mqd, "No hiq sdma mqd trunk to free");
+ 
+-	amdgpu_amdkfd_free_gtt_mem(dev->adev, mqd->gtt_mem);
++	amdgpu_amdkfd_free_gtt_mem(dev->adev, &mqd->gtt_mem);
+ }
+ 
+ void device_queue_manager_uninit(struct device_queue_manager *dqm)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
+index 78dde62fb04ad7..c282f5253c4458 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
+@@ -414,25 +414,9 @@ static void event_interrupt_wq_v9(struct kfd_node *dev,
+ 		   client_id == SOC15_IH_CLIENTID_UTCL2) {
+ 		struct kfd_vm_fault_info info = {0};
+ 		uint16_t ring_id = SOC15_RING_ID_FROM_IH_ENTRY(ih_ring_entry);
+-		uint32_t node_id = SOC15_NODEID_FROM_IH_ENTRY(ih_ring_entry);
+-		uint32_t vmid_type = SOC15_VMID_TYPE_FROM_IH_ENTRY(ih_ring_entry);
+-		int hub_inst = 0;
+ 		struct kfd_hsa_memory_exception_data exception_data;
+ 
+-		/* gfxhub */
+-		if (!vmid_type && dev->adev->gfx.funcs->ih_node_to_logical_xcc) {
+-			hub_inst = dev->adev->gfx.funcs->ih_node_to_logical_xcc(dev->adev,
+-				node_id);
+-			if (hub_inst < 0)
+-				hub_inst = 0;
+-		}
+-
+-		/* mmhub */
+-		if (vmid_type && client_id == SOC15_IH_CLIENTID_VMC)
+-			hub_inst = node_id / 4;
+-
+-		if (amdgpu_amdkfd_ras_query_utcl2_poison_status(dev->adev,
+-					hub_inst, vmid_type)) {
++		if (source_id == SOC15_INTSRC_VMC_UTCL2_POISON) {
+ 			event_interrupt_poison_consumption_v9(dev, pasid, client_id);
+ 			return;
+ 		}
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+index 8746a61a852dc2..d501fd2222dc39 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+@@ -223,7 +223,7 @@ void kfd_free_mqd_cp(struct mqd_manager *mm, void *mqd,
+ 	      struct kfd_mem_obj *mqd_mem_obj)
+ {
+ 	if (mqd_mem_obj->gtt_mem) {
+-		amdgpu_amdkfd_free_gtt_mem(mm->dev->adev, mqd_mem_obj->gtt_mem);
++		amdgpu_amdkfd_free_gtt_mem(mm->dev->adev, &mqd_mem_obj->gtt_mem);
+ 		kfree(mqd_mem_obj);
+ 	} else {
+ 		kfd_gtt_sa_free(mm->dev, mqd_mem_obj);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 451bb058cc6203..66150ea8e64d80 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -1048,7 +1048,7 @@ static void kfd_process_destroy_pdds(struct kfd_process *p)
+ 
+ 		if (pdd->dev->kfd->shared_resources.enable_mes)
+ 			amdgpu_amdkfd_free_gtt_mem(pdd->dev->adev,
+-						   pdd->proc_ctx_bo);
++						   &pdd->proc_ctx_bo);
+ 		/*
+ 		 * before destroying pdd, make sure to report availability
+ 		 * for auto suspend
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index a5bdc3258ae54a..db2b71f7226f43 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -201,9 +201,9 @@ static void pqm_clean_queue_resource(struct process_queue_manager *pqm,
+ 	}
+ 
+ 	if (dev->kfd->shared_resources.enable_mes) {
+-		amdgpu_amdkfd_free_gtt_mem(dev->adev, pqn->q->gang_ctx_bo);
++		amdgpu_amdkfd_free_gtt_mem(dev->adev, &pqn->q->gang_ctx_bo);
+ 		if (pqn->q->wptr_bo)
+-			amdgpu_amdkfd_free_gtt_mem(dev->adev, pqn->q->wptr_bo);
++			amdgpu_amdkfd_free_gtt_mem(dev->adev, (void **)&pqn->q->wptr_bo);
+ 	}
+ }
+ 
+@@ -984,6 +984,7 @@ int kfd_criu_restore_queue(struct kfd_process *p,
+ 		pr_debug("Queue id %d was restored successfully\n", queue_id);
+ 
+ 	kfree(q_data);
++	kfree(q_extra_data);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/soc15_int.h b/drivers/gpu/drm/amd/amdkfd/soc15_int.h
+index 10138676f27fd7..e5c0205f26181e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/soc15_int.h
++++ b/drivers/gpu/drm/amd/amdkfd/soc15_int.h
+@@ -29,6 +29,7 @@
+ #define SOC15_INTSRC_CP_BAD_OPCODE	183
+ #define SOC15_INTSRC_SQ_INTERRUPT_MSG	239
+ #define SOC15_INTSRC_VMC_FAULT		0
++#define SOC15_INTSRC_VMC_UTCL2_POISON	1
+ #define SOC15_INTSRC_SDMA_TRAP		224
+ #define SOC15_INTSRC_SDMA_ECC		220
+ #define SOC21_INTSRC_SDMA_TRAP		49
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 3541d154cc8d06..83f4ff9e848d78 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -758,6 +758,12 @@ static void dmub_hpd_callback(struct amdgpu_device *adev,
+ 		return;
+ 	}
+ 
++	/* Skip DMUB HPD IRQ in suspend/resume. We will probe them later. */
++	if (notify->type == DMUB_NOTIFICATION_HPD && adev->in_suspend) {
++		DRM_INFO("Skip DMUB HPD IRQ callback in suspend/resume\n");
++		return;
++	}
++
+ 	link_index = notify->link_index;
+ 	link = adev->dm.dc->links[link_index];
+ 	dev = adev->dm.ddev;
+@@ -4170,7 +4176,7 @@ static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm,
+ 		int spread = caps.max_input_signal - caps.min_input_signal;
+ 
+ 		if (caps.max_input_signal > AMDGPU_DM_DEFAULT_MAX_BACKLIGHT ||
+-		    caps.min_input_signal < AMDGPU_DM_DEFAULT_MIN_BACKLIGHT ||
++		    caps.min_input_signal < 0 ||
+ 		    spread > AMDGPU_DM_DEFAULT_MAX_BACKLIGHT ||
+ 		    spread < AMDGPU_DM_MIN_SPREAD) {
+ 			DRM_DEBUG_KMS("DM: Invalid backlight caps: min=%d, max=%d\n",
+@@ -6389,12 +6395,21 @@ create_stream_for_sink(struct drm_connector *connector,
+ 	if (stream->signal == SIGNAL_TYPE_DISPLAY_PORT ||
+ 	    stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST ||
+ 	    stream->signal == SIGNAL_TYPE_EDP) {
++		const struct dc_edid_caps *edid_caps;
++		unsigned int disable_colorimetry = 0;
++
++		if (aconnector->dc_sink) {
++			edid_caps = &aconnector->dc_sink->edid_caps;
++			disable_colorimetry = edid_caps->panel_patch.disable_colorimetry;
++		}
++
+ 		//
+ 		// should decide stream support vsc sdp colorimetry capability
+ 		// before building vsc info packet
+ 		//
+ 		stream->use_vsc_sdp_for_colorimetry = stream->link->dpcd_caps.dpcd_rev.raw >= 0x14 &&
+-						      stream->link->dpcd_caps.dprx_feature.bits.VSC_SDP_COLORIMETRY_SUPPORTED;
++						      stream->link->dpcd_caps.dprx_feature.bits.VSC_SDP_COLORIMETRY_SUPPORTED &&
++						      !disable_colorimetry;
+ 
+ 		if (stream->out_transfer_func.tf == TRANSFER_FUNCTION_GAMMA22)
+ 			tf = TRANSFER_FUNC_GAMMA_22;
+@@ -6951,6 +6966,9 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 	int requested_bpc = drm_state ? drm_state->max_requested_bpc : 8;
+ 	enum dc_status dc_result = DC_OK;
+ 
++	if (!dm_state)
++		return NULL;
++
+ 	do {
+ 		stream = create_stream_for_sink(connector, drm_mode,
+ 						dm_state, old_stream,
+@@ -8963,7 +8981,7 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ 		if (acrtc)
+ 			old_crtc_state = drm_atomic_get_old_crtc_state(state, &acrtc->base);
+ 
+-		if (!acrtc->wb_enabled)
++		if (!acrtc || !acrtc->wb_enabled)
+ 			continue;
+ 
+ 		dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
+@@ -9362,9 +9380,10 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ 
+ 			DRM_INFO("[HDCP_DM] hdcp_update_display enable_encryption = %x\n", enable_encryption);
+ 
+-			hdcp_update_display(
+-				adev->dm.hdcp_workqueue, aconnector->dc_link->link_index, aconnector,
+-				new_con_state->hdcp_content_type, enable_encryption);
++			if (aconnector->dc_link)
++				hdcp_update_display(
++					adev->dm.hdcp_workqueue, aconnector->dc_link->link_index, aconnector,
++					new_con_state->hdcp_content_type, enable_encryption);
+ 		}
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index 2c36f3d00ca256..3c074120456ed4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -72,6 +72,10 @@ static void apply_edid_quirks(struct edid *edid, struct dc_edid_caps *edid_caps)
+ 		DRM_DEBUG_DRIVER("Clearing DPCD 0x317 on monitor with panel id %X\n", panel_id);
+ 		edid_caps->panel_patch.remove_sink_ext_caps = true;
+ 		break;
++	case drm_edid_encode_panel_id('S', 'D', 'C', 0x4154):
++		DRM_DEBUG_DRIVER("Disabling VSC on monitor with panel id %X\n", panel_id);
++		edid_caps->panel_patch.disable_colorimetry = true;
++		break;
+ 	default:
+ 		return;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 9a620773141682..71695597b7e333 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1264,9 +1264,6 @@ static bool is_dsc_need_re_compute(
+ 		}
+ 	}
+ 
+-	if (new_stream_on_link_num == 0)
+-		return false;
+-
+ 	if (new_stream_on_link_num == 0)
+ 		return false;
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+index 7d47acdd11d55b..fe7a99aee47dd5 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+@@ -1285,7 +1285,8 @@ void amdgpu_dm_plane_handle_cursor_update(struct drm_plane *plane,
+ 	    adev->dm.dc->caps.color.dpp.gamma_corr)
+ 		attributes.attribute_flags.bits.ENABLE_CURSOR_DEGAMMA = 1;
+ 
+-	attributes.pitch = afb->base.pitches[0] / afb->base.format->cpp[0];
++	if (afb)
++		attributes.pitch = afb->base.pitches[0] / afb->base.format->cpp[0];
+ 
+ 	if (crtc_state->stream) {
+ 		mutex_lock(&adev->dm.dc_lock);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index da237f718dbdd1..daeb80abf435f2 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -3970,7 +3970,8 @@ static void commit_planes_for_stream(struct dc *dc,
+ 	}
+ 
+ 	if ((update_type != UPDATE_TYPE_FAST) && stream->update_flags.bits.dsc_changed)
+-		if (top_pipe_to_program->stream_res.tg->funcs->lock_doublebuffer_enable) {
++		if (top_pipe_to_program &&
++		    top_pipe_to_program->stream_res.tg->funcs->lock_doublebuffer_enable) {
+ 			top_pipe_to_program->stream_res.tg->funcs->wait_for_state(
+ 				top_pipe_to_program->stream_res.tg,
+ 				CRTC_STATE_VACTIVE);
+@@ -5210,7 +5211,8 @@ void dc_allow_idle_optimizations_internal(struct dc *dc, bool allow, char const
+ 	if (allow == dc->idle_optimizations_allowed)
+ 		return;
+ 
+-	if (dc->hwss.apply_idle_power_optimizations && dc->hwss.apply_idle_power_optimizations(dc, allow))
++	if (dc->hwss.apply_idle_power_optimizations && dc->clk_mgr != NULL &&
++	    dc->hwss.apply_idle_power_optimizations(dc, allow))
+ 		dc->idle_optimizations_allowed = allow;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 786b56e96a8162..58f6155fecc5aa 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -3134,6 +3134,8 @@ static bool are_stream_backends_same(
+ bool dc_is_stream_unchanged(
+ 	struct dc_stream_state *old_stream, struct dc_stream_state *stream)
+ {
++	if (!old_stream || !stream)
++		return false;
+ 
+ 	if (!are_stream_backends_same(old_stream, stream))
+ 		return false;
+@@ -3662,8 +3664,10 @@ static bool planes_changed_for_existing_stream(struct dc_state *context,
+ 		}
+ 	}
+ 
+-	if (!stream_status)
++	if (!stream_status) {
+ 		ASSERT(0);
++		return false;
++	}
+ 
+ 	for (i = 0; i < set_count; i++)
+ 		if (set[i].stream == stream)
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h b/drivers/gpu/drm/amd/display/dc/dc_types.h
+index 0f66d00ef80f51..1d17d6497fec1c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_types.h
+@@ -178,6 +178,7 @@ struct dc_panel_patch {
+ 	unsigned int skip_avmute;
+ 	unsigned int mst_start_top_delay;
+ 	unsigned int remove_sink_ext_caps;
++	unsigned int disable_colorimetry;
+ };
+ 
+ struct dc_edid_caps {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
+index 0b49362f71b06c..eaed5d1c398aa0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
+@@ -591,6 +591,8 @@ bool cm_helper_translate_curve_to_degamma_hw_format(
+ 				i += increment) {
+ 			if (j == hw_points - 1)
+ 				break;
++			if (i >= TRANSFER_FUNC_POINTS)
++				return false;
+ 			rgb_resulted[j].red = output_tf->tf_pts.red[i];
+ 			rgb_resulted[j].green = output_tf->tf_pts.green[i];
+ 			rgb_resulted[j].blue = output_tf->tf_pts.blue[i];
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c
+index b8327237ed4418..0433f6b5dac78a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_cm_common.c
+@@ -177,6 +177,8 @@ bool cm3_helper_translate_curve_to_hw_format(
+ 				i += increment) {
+ 			if (j == hw_points)
+ 				break;
++			if (i >= TRANSFER_FUNC_POINTS)
++				return false;
+ 			rgb_resulted[j].red = output_tf->tf_pts.red[i];
+ 			rgb_resulted[j].green = output_tf->tf_pts.green[i];
+ 			rgb_resulted[j].blue = output_tf->tf_pts.blue[i];
+@@ -335,6 +337,8 @@ bool cm3_helper_translate_curve_to_degamma_hw_format(
+ 				i += increment) {
+ 			if (j == hw_points - 1)
+ 				break;
++			if (i >= TRANSFER_FUNC_POINTS)
++				return false;
+ 			rgb_resulted[j].red = output_tf->tf_pts.red[i];
+ 			rgb_resulted[j].green = output_tf->tf_pts.green[i];
+ 			rgb_resulted[j].blue = output_tf->tf_pts.blue[i];
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
+index 0fc9f3e3ffaefd..f603486af6e306 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
+@@ -78,7 +78,7 @@ static void calculate_ttu_cursor(struct display_mode_lib *mode_lib,
+ 
+ static unsigned int get_bytes_per_element(enum source_format_class source_format, bool is_chroma)
+ {
+-	unsigned int ret_val = 0;
++	unsigned int ret_val = 1;
+ 
+ 	if (source_format == dm_444_16) {
+ 		if (!is_chroma)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
+index 618f4b682ab1b1..9f28e4d3c664c7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
+@@ -53,7 +53,7 @@ static void calculate_ttu_cursor(
+ 
+ static unsigned int get_bytes_per_element(enum source_format_class source_format, bool is_chroma)
+ {
+-	unsigned int ret_val = 0;
++	unsigned int ret_val = 1;
+ 
+ 	if (source_format == dm_444_16) {
+ 		if (!is_chroma)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+index ebcf5ece209a45..bf8c89fe95a7e5 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+@@ -871,8 +871,9 @@ static bool subvp_drr_schedulable(struct dc *dc, struct dc_state *context)
+ 	 * for VBLANK: (VACTIVE region of the SubVP pipe can fit the MALL prefetch, VBLANK frame time,
+ 	 * and the max of (VBLANK blanking time, MALL region)).
+ 	 */
+-	if (stretched_drr_us < (1 / (double)drr_timing->min_refresh_in_uhz) * 1000000 * 1000000 &&
+-			subvp_active_us - prefetch_us - stretched_drr_us - max_vblank_mallregion > 0)
++	if (drr_timing &&
++	    stretched_drr_us < (1 / (double)drr_timing->min_refresh_in_uhz) * 1000000 * 1000000 &&
++	    subvp_active_us - prefetch_us - stretched_drr_us - max_vblank_mallregion > 0)
+ 		schedulable = true;
+ 
+ 	return schedulable;
+@@ -937,7 +938,7 @@ static bool subvp_vblank_schedulable(struct dc *dc, struct dc_state *context)
+ 		if (!subvp_pipe && pipe_mall_type == SUBVP_MAIN)
+ 			subvp_pipe = pipe;
+ 	}
+-	if (found) {
++	if (found && subvp_pipe) {
+ 		phantom_stream = dc_state_get_paired_subvp_stream(context, subvp_pipe->stream);
+ 		main_timing = &subvp_pipe->stream->timing;
+ 		phantom_timing = &phantom_stream->timing;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_policy.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_policy.c
+index c4c52173ef2240..11c904ae29586d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_policy.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_policy.c
+@@ -303,7 +303,6 @@ void build_unoptimized_policy_settings(enum dml_project_id project, struct dml_m
+ 	if (project == dml_project_dcn35 ||
+ 		project == dml_project_dcn351) {
+ 		policy->DCCProgrammingAssumesScanDirectionUnknownFinal = false;
+-		policy->EnhancedPrefetchScheduleAccelerationFinal = 0;
+ 		policy->AllowForPStateChangeOrStutterInVBlankFinal = dml_prefetch_support_uclk_fclk_and_stutter_if_possible; /*new*/
+ 		policy->UseOnlyMaxPrefetchModes = 1;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+index edff6b447680c5..d5dbfb33f93dc1 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+@@ -828,7 +828,9 @@ static void get_scaler_data_for_plane(const struct dc_plane_state *in, struct dc
+ 	memcpy(out, &temp_pipe->plane_res.scl_data, sizeof(*out));
+ }
+ 
+-static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned int location, const struct dc_stream_state *in)
++static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned int location,
++					 const struct dc_stream_state *in,
++					 const struct soc_bounding_box_st *soc)
+ {
+ 	dml_uint_t width, height;
+ 
+@@ -845,7 +847,7 @@ static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned
+ 	out->CursorBPP[location] = dml_cur_32bit;
+ 	out->CursorWidth[location] = 256;
+ 
+-	out->GPUVMMinPageSizeKBytes[location] = 256;
++	out->GPUVMMinPageSizeKBytes[location] = soc->gpuvm_min_page_size_kbytes;
+ 
+ 	out->ViewportWidth[location] = width;
+ 	out->ViewportHeight[location] = height;
+@@ -882,7 +884,9 @@ static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned
+ 	out->ScalerEnabled[location] = false;
+ }
+ 
+-static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out, unsigned int location, const struct dc_plane_state *in, struct dc_state *context)
++static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out, unsigned int location,
++						    const struct dc_plane_state *in, struct dc_state *context,
++						    const struct soc_bounding_box_st *soc)
+ {
+ 	struct scaler_data *scaler_data = kzalloc(sizeof(*scaler_data), GFP_KERNEL);
+ 	if (!scaler_data)
+@@ -893,7 +897,7 @@ static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out
+ 	out->CursorBPP[location] = dml_cur_32bit;
+ 	out->CursorWidth[location] = 256;
+ 
+-	out->GPUVMMinPageSizeKBytes[location] = 256;
++	out->GPUVMMinPageSizeKBytes[location] = soc->gpuvm_min_page_size_kbytes;
+ 
+ 	out->ViewportWidth[location] = scaler_data->viewport.width;
+ 	out->ViewportHeight[location] = scaler_data->viewport.height;
+@@ -1174,7 +1178,8 @@ void map_dc_state_into_dml_display_cfg(struct dml2_context *dml2, struct dc_stat
+ 			disp_cfg_plane_location = dml_dispcfg->num_surfaces++;
+ 
+ 			populate_dummy_dml_surface_cfg(&dml_dispcfg->surface, disp_cfg_plane_location, context->streams[i]);
+-			populate_dummy_dml_plane_cfg(&dml_dispcfg->plane, disp_cfg_plane_location, context->streams[i]);
++			populate_dummy_dml_plane_cfg(&dml_dispcfg->plane, disp_cfg_plane_location,
++						     context->streams[i], &dml2->v20.dml_core_ctx.soc);
+ 
+ 			dml_dispcfg->plane.BlendingAndTiming[disp_cfg_plane_location] = disp_cfg_stream_location;
+ 
+@@ -1190,7 +1195,10 @@ void map_dc_state_into_dml_display_cfg(struct dml2_context *dml2, struct dc_stat
+ 				ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__);
+ 
+ 				populate_dml_surface_cfg_from_plane_state(dml2->v20.dml_core_ctx.project, &dml_dispcfg->surface, disp_cfg_plane_location, context->stream_status[i].plane_states[j]);
+-				populate_dml_plane_cfg_from_plane_state(&dml_dispcfg->plane, disp_cfg_plane_location, context->stream_status[i].plane_states[j], context);
++				populate_dml_plane_cfg_from_plane_state(
++					&dml_dispcfg->plane, disp_cfg_plane_location,
++					context->stream_status[i].plane_states[j], context,
++					&dml2->v20.dml_core_ctx.soc);
+ 
+ 				if (stream_mall_type == SUBVP_MAIN) {
+ 					dml_dispcfg->plane.UseMALLForPStateChange[disp_cfg_plane_location] = dml_use_mall_pstate_change_sub_viewport;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+index e9e9f80a02a775..542d669bf5e30d 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+@@ -2020,13 +2020,20 @@ static void set_drr(struct pipe_ctx **pipe_ctx,
+ 	 * as well.
+ 	 */
+ 	for (i = 0; i < num_pipes; i++) {
+-		pipe_ctx[i]->stream_res.tg->funcs->set_drr(
+-			pipe_ctx[i]->stream_res.tg, &params);
+-
+-		if (adjust.v_total_max != 0 && adjust.v_total_min != 0)
+-			pipe_ctx[i]->stream_res.tg->funcs->set_static_screen_control(
+-					pipe_ctx[i]->stream_res.tg,
+-					event_triggers, num_frames);
++		/* dc_state_destruct() might null the stream resources, so fetch tg
++		 * here first to avoid a race condition. The lifetime of the pointee
++		 * itself (the timing_generator object) is not a problem here.
++		 */
++		struct timing_generator *tg = pipe_ctx[i]->stream_res.tg;
++
++		if ((tg != NULL) && tg->funcs) {
++			if (tg->funcs->set_drr)
++				tg->funcs->set_drr(tg, &params);
++			if (adjust.v_total_max != 0 && adjust.v_total_min != 0)
++				if (tg->funcs->set_static_screen_control)
++					tg->funcs->set_static_screen_control(
++						tg, event_triggers, num_frames);
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index 7d833fa6dd77c3..58e8b7482f4f58 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -1040,7 +1040,8 @@ bool dcn20_set_output_transfer_func(struct dc *dc, struct pipe_ctx *pipe_ctx,
+ 	/*
+ 	 * if above if is not executed then 'params' equal to 0 and set in bypass
+ 	 */
+-	mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
++	if (mpc->funcs->set_output_gamma)
++		mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
+ 
+ 	return true;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+index 05c5d4f04e1bd8..0f72a54e92af62 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+@@ -626,7 +626,7 @@ void dcn30_init_hw(struct dc *dc)
+ 	uint32_t backlight = MAX_BACKLIGHT_LEVEL;
+ 	uint32_t user_level = MAX_BACKLIGHT_LEVEL;
+ 
+-	if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
++	if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->init_clocks)
+ 		dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
+ 
+ 	// Initialize the dccg
+@@ -787,11 +787,12 @@ void dcn30_init_hw(struct dc *dc)
+ 	if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
+ 		dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
+ 
+-	if (dc->clk_mgr->funcs->notify_wm_ranges)
++	if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->notify_wm_ranges)
+ 		dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+ 
+ 	//if softmax is enabled then hardmax will be set by a different call
+-	if (dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
++	if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->set_hard_max_memclk &&
++	    !dc->clk_mgr->dc_mode_softmax_enabled)
+ 		dc->clk_mgr->funcs->set_hard_max_memclk(dc->clk_mgr);
+ 
+ 	if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+index 5fc377f51f5621..c050acc4ff065f 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+@@ -581,7 +581,9 @@ bool dcn32_set_output_transfer_func(struct dc *dc,
+ 		}
+ 	}
+ 
+-	mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
++	if (mpc->funcs->set_output_gamma)
++		mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
++
+ 	return ret;
+ }
+ 
+@@ -752,7 +754,7 @@ void dcn32_init_hw(struct dc *dc)
+ 	uint32_t backlight = MAX_BACKLIGHT_LEVEL;
+ 	uint32_t user_level = MAX_BACKLIGHT_LEVEL;
+ 
+-	if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
++	if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->init_clocks)
+ 		dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
+ 
+ 	// Initialize the dccg
+@@ -931,10 +933,11 @@ void dcn32_init_hw(struct dc *dc)
+ 	if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
+ 		dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
+ 
+-	if (dc->clk_mgr->funcs->notify_wm_ranges)
++	if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->notify_wm_ranges)
+ 		dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+ 
+-	if (dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
++	if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->set_hard_max_memclk &&
++	    !dc->clk_mgr->dc_mode_softmax_enabled)
+ 		dc->clk_mgr->funcs->set_hard_max_memclk(dc->clk_mgr);
+ 
+ 	if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
+diff --git a/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_dp.c b/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_dp.c
+index e1257404357b11..cec68c5dba1322 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_dp.c
+@@ -28,6 +28,8 @@
+ #include "dccg.h"
+ #include "clk_mgr.h"
+ 
++#define DC_LOGGER link->ctx->logger
++
+ void set_hpo_dp_throttled_vcp_size(struct pipe_ctx *pipe_ctx,
+ 		struct fixed31_32 throttled_vcp_size)
+ {
+@@ -108,6 +110,11 @@ void enable_hpo_dp_link_output(struct dc_link *link,
+ 		enum clock_source_id clock_source,
+ 		const struct dc_link_settings *link_settings)
+ {
++	if (!link_res->hpo_dp_link_enc) {
++		DC_LOG_ERROR("%s: invalid hpo_dp_link_enc\n", __func__);
++		return;
++	}
++
+ 	if (link->dc->res_pool->dccg->funcs->set_symclk32_le_root_clock_gating)
+ 		link->dc->res_pool->dccg->funcs->set_symclk32_le_root_clock_gating(
+ 				link->dc->res_pool->dccg,
+@@ -124,6 +131,11 @@ void disable_hpo_dp_link_output(struct dc_link *link,
+ 		const struct link_resource *link_res,
+ 		enum signal_type signal)
+ {
++	if (!link_res->hpo_dp_link_enc) {
++		DC_LOG_ERROR("%s: invalid hpo_dp_link_enc\n", __func__);
++		return;
++	}
++
+ 		link_res->hpo_dp_link_enc->funcs->link_disable(link_res->hpo_dp_link_enc);
+ 		link_res->hpo_dp_link_enc->funcs->disable_link_phy(
+ 				link_res->hpo_dp_link_enc, signal);
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_factory.c b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+index 72df9bdfb23ffe..608491f860b295 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_factory.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+@@ -385,7 +385,7 @@ static void link_destruct(struct dc_link *link)
+ 	if (link->panel_cntl)
+ 		link->panel_cntl->funcs->destroy(&link->panel_cntl);
+ 
+-	if (link->link_enc) {
++	if (link->link_enc && !link->is_dig_mapping_flexible) {
+ 		/* Update link encoder resource tracking variables. These are used for
+ 		 * the dynamic assignment of link encoders to streams. Virtual links
+ 		 * are not assigned encoder resources on creation.
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
+index 6b380e037e3f8b..c0d1b41eb90042 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
+@@ -2033,6 +2033,7 @@ bool dcn20_fast_validate_bw(
+ {
+ 	bool out = false;
+ 	int split[MAX_PIPES] = { 0 };
++	bool merge[MAX_PIPES] = { false };
+ 	int pipe_cnt, i, pipe_idx, vlevel;
+ 
+ 	ASSERT(pipes);
+@@ -2057,7 +2058,7 @@ bool dcn20_fast_validate_bw(
+ 	if (vlevel > context->bw_ctx.dml.soc.num_states)
+ 		goto validate_fail;
+ 
+-	vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, vlevel, split, NULL);
++	vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, vlevel, split, merge);
+ 
+ 	/*initialize pipe_just_split_from to invalid idx*/
+ 	for (i = 0; i < MAX_PIPES; i++)
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn201/dcn201_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn201/dcn201_resource.c
+index 070a4efb308bdf..1aeede348bd39b 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn201/dcn201_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn201/dcn201_resource.c
+@@ -1005,8 +1005,10 @@ static struct pipe_ctx *dcn201_acquire_free_pipe_for_layer(
+ 	struct pipe_ctx *head_pipe = resource_get_otg_master_for_stream(res_ctx, opp_head_pipe->stream);
+ 	struct pipe_ctx *idle_pipe = resource_find_free_secondary_pipe_legacy(res_ctx, pool, head_pipe);
+ 
+-	if (!head_pipe)
++	if (!head_pipe) {
+ 		ASSERT(0);
++		return NULL;
++	}
+ 
+ 	if (!idle_pipe)
+ 		return NULL;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
+index 8663cbc3d1cf5e..347e6aaea582fb 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
+@@ -774,6 +774,7 @@ bool dcn21_fast_validate_bw(struct dc *dc,
+ {
+ 	bool out = false;
+ 	int split[MAX_PIPES] = { 0 };
++	bool merge[MAX_PIPES] = { false };
+ 	int pipe_cnt, i, pipe_idx, vlevel;
+ 
+ 	ASSERT(pipes);
+@@ -816,7 +817,7 @@ bool dcn21_fast_validate_bw(struct dc *dc,
+ 			goto validate_fail;
+ 	}
+ 
+-	vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, vlevel, split, NULL);
++	vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, vlevel, split, merge);
+ 
+ 	for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
+ 		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
+index d84c8e0e5c2f03..55fbe86383c04c 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
+@@ -1714,6 +1714,9 @@ void dcn32_add_phantom_pipes(struct dc *dc, struct dc_state *context,
+ 	// be a valid candidate for SubVP (i.e. has a plane, stream, doesn't
+ 	// already have phantom pipe assigned, etc.) by previous checks.
+ 	phantom_stream = dcn32_enable_phantom_stream(dc, context, pipes, pipe_cnt, index);
++	if (!phantom_stream)
++		return;
++
+ 	dcn32_enable_phantom_plane(dc, context, phantom_stream, index);
+ 
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+@@ -2664,8 +2667,10 @@ static struct pipe_ctx *dcn32_acquire_idle_pipe_for_head_pipe_in_layer(
+ 	struct resource_context *old_ctx = &stream->ctx->dc->current_state->res_ctx;
+ 	int head_index;
+ 
+-	if (!head_pipe)
++	if (!head_pipe) {
+ 		ASSERT(0);
++		return NULL;
++	}
+ 
+ 	/*
+ 	 * Modified from dcn20_acquire_idle_pipe_for_layer
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/processpptables.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/processpptables.c
+index 5794b64507bf94..56a22575258064 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/processpptables.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/processpptables.c
+@@ -1185,6 +1185,8 @@ static int init_overdrive_limits(struct pp_hwmgr *hwmgr,
+ 	fw_info = smu_atom_get_data_table(hwmgr->adev,
+ 			 GetIndexIntoMasterTable(DATA, FirmwareInfo),
+ 			 &size, &frev, &crev);
++	PP_ASSERT_WITH_CODE(fw_info != NULL,
++			    "Missing firmware info!", return -EINVAL);
+ 
+ 	if ((fw_info->ucTableFormatRevision == 1)
+ 	    && (le16_to_cpu(fw_info->usStructureSize) >= sizeof(ATOM_FIRMWARE_INFO_V1_4)))
+diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c
+index 106292d6ed2688..9e9b2f3f106cce 100644
+--- a/drivers/gpu/drm/drm_atomic_uapi.c
++++ b/drivers/gpu/drm/drm_atomic_uapi.c
+@@ -543,7 +543,7 @@ static int drm_atomic_plane_set_property(struct drm_plane *plane,
+ 					&state->fb_damage_clips,
+ 					val,
+ 					-1,
+-					sizeof(struct drm_rect),
++					sizeof(struct drm_mode_rect),
+ 					&replaced);
+ 		return ret;
+ 	} else if (property == plane->scaling_filter_property) {
+diff --git a/drivers/gpu/drm/drm_print.c b/drivers/gpu/drm/drm_print.c
+index cf2efb44722c92..1d122d4de70ec5 100644
+--- a/drivers/gpu/drm/drm_print.c
++++ b/drivers/gpu/drm/drm_print.c
+@@ -100,8 +100,9 @@ void __drm_puts_coredump(struct drm_printer *p, const char *str)
+ 			copy = iterator->remain;
+ 
+ 		/* Copy out the bit of the string that we need */
+-		memcpy(iterator->data,
+-			str + (iterator->start - iterator->offset), copy);
++		if (iterator->data)
++			memcpy(iterator->data,
++			       str + (iterator->start - iterator->offset), copy);
+ 
+ 		iterator->offset = iterator->start + copy;
+ 		iterator->remain -= copy;
+@@ -110,7 +111,8 @@ void __drm_puts_coredump(struct drm_printer *p, const char *str)
+ 
+ 		len = min_t(ssize_t, strlen(str), iterator->remain);
+ 
+-		memcpy(iterator->data + pos, str, len);
++		if (iterator->data)
++			memcpy(iterator->data + pos, str, len);
+ 
+ 		iterator->offset += len;
+ 		iterator->remain -= len;
+@@ -140,8 +142,9 @@ void __drm_printfn_coredump(struct drm_printer *p, struct va_format *vaf)
+ 	if ((iterator->offset >= iterator->start) && (len < iterator->remain)) {
+ 		ssize_t pos = iterator->offset - iterator->start;
+ 
+-		snprintf(((char *) iterator->data) + pos,
+-			iterator->remain, "%pV", vaf);
++		if (iterator->data)
++			snprintf(((char *) iterator->data) + pos,
++				 iterator->remain, "%pV", vaf);
+ 
+ 		iterator->offset += len;
+ 		iterator->remain -= len;
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 6bff169fa8d4c9..f92c46297ec4ba 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -908,7 +908,7 @@ intel_ddi_main_link_aux_domain(struct intel_digital_port *dig_port,
+ 	 * instead of a specific AUX_IO_<port> reference without powering up any
+ 	 * extra wells.
+ 	 */
+-	if (intel_encoder_can_psr(&dig_port->base))
++	if (intel_psr_needs_aux_io_power(&dig_port->base, crtc_state))
+ 		return intel_display_power_aux_io_domain(i915, dig_port->aux_ch);
+ 	else if (DISPLAY_VER(i915) < 14 &&
+ 		 (intel_crtc_has_dp_encoder(crtc_state) ||
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index a7d91ca1d8bafe..bd3ce34fb6a6c8 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -3944,6 +3944,9 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp, struct intel_connector *connector
+ 			 drm_dp_is_branch(intel_dp->dpcd));
+ 	intel_init_dpcd_quirks(intel_dp, &intel_dp->desc.ident);
+ 
++	intel_dp->colorimetry_support =
++		intel_dp_get_colorimetry_status(intel_dp);
++
+ 	/*
+ 	 * Read the eDP display control registers.
+ 	 *
+@@ -4057,6 +4060,9 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
+ 
+ 		intel_init_dpcd_quirks(intel_dp, &intel_dp->desc.ident);
+ 
++		intel_dp->colorimetry_support =
++			intel_dp_get_colorimetry_status(intel_dp);
++
+ 		intel_dp_update_sink_caps(intel_dp);
+ 	}
+ 
+@@ -6774,9 +6780,6 @@ intel_dp_init_connector(struct intel_digital_port *dig_port,
+ 				    "HDCP init failed, skipping.\n");
+ 	}
+ 
+-	intel_dp->colorimetry_support =
+-		intel_dp_get_colorimetry_status(intel_dp);
+-
+ 	intel_dp->frl.is_trained = false;
+ 	intel_dp->frl.trained_rate_gbps = 0;
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index 7173ffc7c66c13..857f776e55509d 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -201,6 +201,25 @@ bool intel_encoder_can_psr(struct intel_encoder *encoder)
+ 		return false;
+ }
+ 
++bool intel_psr_needs_aux_io_power(struct intel_encoder *encoder,
++				  const struct intel_crtc_state *crtc_state)
++{
++	/*
++	 * For PSR/PR modes only eDP requires the AUX IO power to be enabled whenever
++	 * the output is enabled. For non-eDP outputs the main link is always
++	 * on, hence it doesn't require the HW initiated AUX wake-up signaling used
++	 * for eDP.
++	 *
++	 * TODO:
++	 * - Consider leaving AUX IO disabled for eDP / PR as well, in case
++	 *   the ALPM with main-link off mode is not enabled.
++	 * - Leave AUX IO enabled for DP / PR, once support for ALPM with
++	 *   main-link off mode is added for it and this mode gets enabled.
++	 */
++	return intel_crtc_has_type(crtc_state, INTEL_OUTPUT_EDP) &&
++	       intel_encoder_can_psr(encoder);
++}
++
+ static bool psr_global_enabled(struct intel_dp *intel_dp)
+ {
+ 	struct intel_connector *connector = intel_dp->attached_connector;
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.h b/drivers/gpu/drm/i915/display/intel_psr.h
+index d483c85870e1db..e719f548e1606b 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.h
++++ b/drivers/gpu/drm/i915/display/intel_psr.h
+@@ -25,6 +25,8 @@ struct intel_plane_state;
+ 				    (intel_dp)->psr.source_panel_replay_support)
+ 
+ bool intel_encoder_can_psr(struct intel_encoder *encoder);
++bool intel_psr_needs_aux_io_power(struct intel_encoder *encoder,
++				  const struct intel_crtc_state *crtc_state);
+ void intel_psr_init_dpcd(struct intel_dp *intel_dp);
+ void intel_psr_enable_sink(struct intel_dp *intel_dp,
+ 			   const struct intel_crtc_state *crtc_state);
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+index 5c72462d1f57e3..b22e2019768f04 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+@@ -1131,7 +1131,7 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
+ 		GEM_WARN_ON(!i915_ttm_cpu_maps_iomem(bo->resource));
+ 	}
+ 
+-	if (wakeref & CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND)
++	if (wakeref && CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND != 0)
+ 		intel_wakeref_auto(&to_i915(obj->base.dev)->runtime_pm.userfault_wakeref,
+ 				   msecs_to_jiffies_timeout(CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND));
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
+index 2b62d64759181e..6068026f044d9c 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
+@@ -523,8 +523,10 @@ static int ovl_adaptor_comp_init(struct device *dev, struct component_match **ma
+ 		}
+ 
+ 		comp_pdev = of_find_device_by_node(node);
+-		if (!comp_pdev)
++		if (!comp_pdev) {
++			of_node_put(node);
+ 			return -EPROBE_DEFER;
++		}
+ 
+ 		priv->ovl_adaptor_comp[id] = &comp_pdev->dev;
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index d5d9361e11aa53..8e8f55225e1ead 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -1079,6 +1079,7 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
+ 	adreno_gpu->chip_id = config->chip_id;
+ 
+ 	gpu->allow_relocs = config->info->family < ADRENO_6XX_GEN1;
++	gpu->pdev = pdev;
+ 
+ 	/* Only handle the core clock when GMU is not in use (or is absent). */
+ 	if (adreno_has_gmu_wrapper(adreno_gpu) ||
+diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
+index cd185b9636d261..56b6de049bd7b8 100644
+--- a/drivers/gpu/drm/msm/msm_gpu.c
++++ b/drivers/gpu/drm/msm/msm_gpu.c
+@@ -929,7 +929,6 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
+ 	if (IS_ERR(gpu->gpu_cx))
+ 		gpu->gpu_cx = NULL;
+ 
+-	gpu->pdev = pdev;
+ 	platform_set_drvdata(pdev, &gpu->adreno_smmu);
+ 
+ 	msm_devfreq_init(gpu);
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
+index 6598c9c08ba11e..d3eac4817d7687 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.c
++++ b/drivers/gpu/drm/omapdrm/omap_drv.c
+@@ -695,6 +695,10 @@ static int omapdrm_init(struct omap_drm_private *priv, struct device *dev)
+ 	soc = soc_device_match(omapdrm_soc_devices);
+ 	priv->omaprev = soc ? (uintptr_t)soc->data : 0;
+ 	priv->wq = alloc_ordered_workqueue("omapdrm", 0);
++	if (!priv->wq) {
++		ret = -ENOMEM;
++		goto err_alloc_workqueue;
++	}
+ 
+ 	mutex_init(&priv->list_lock);
+ 	INIT_LIST_HEAD(&priv->obj_list);
+@@ -753,6 +757,7 @@ static int omapdrm_init(struct omap_drm_private *priv, struct device *dev)
+ 	drm_mode_config_cleanup(ddev);
+ 	omap_gem_deinit(ddev);
+ 	destroy_workqueue(priv->wq);
++err_alloc_workqueue:
+ 	omap_disconnect_pipelines(ddev);
+ 	drm_dev_put(ddev);
+ 	return ret;
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
+index cc6e13a9778358..ce8e8a93d70767 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.c
++++ b/drivers/gpu/drm/panthor/panthor_mmu.c
+@@ -1251,9 +1251,17 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
+ 		goto err_cleanup;
+ 	}
+ 
++	/* drm_gpuvm_bo_obtain_prealloc() will call drm_gpuvm_bo_put() on our
++	 * pre-allocated BO if the <BO,VM> association exists. Given we
++	 * only have one ref on preallocated_vm_bo, drm_gpuvm_bo_destroy() will
++	 * be called immediately, and we have to hold the VM resv lock when
++	 * calling this function.
++	 */
++	dma_resv_lock(panthor_vm_resv(vm), NULL);
+ 	mutex_lock(&bo->gpuva_list_lock);
+ 	op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
+ 	mutex_unlock(&bo->gpuva_list_lock);
++	dma_resv_unlock(panthor_vm_resv(vm));
+ 
+ 	/* If the a vm_bo for this <VM,BO> combination exists, it already
+ 	 * retains a pin ref, and we can release the one we took earlier.
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
+index 12b272a912f861..4d1d5a342a4a6e 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.c
++++ b/drivers/gpu/drm/panthor/panthor_sched.c
+@@ -1103,7 +1103,13 @@ cs_slot_sync_queue_state_locked(struct panthor_device *ptdev, u32 csg_id, u32 cs
+ 			list_move_tail(&group->wait_node,
+ 				       &group->ptdev->scheduler->groups.waiting);
+ 		}
+-		group->blocked_queues |= BIT(cs_id);
++
++		/* The queue is only blocked if there's no deferred operation
++		 * pending, which can be checked through the scoreboard status.
++		 */
++		if (!cs_iface->output->status_scoreboards)
++			group->blocked_queues |= BIT(cs_id);
++
+ 		queue->syncwait.gpu_va = cs_iface->output->status_wait_sync_ptr;
+ 		queue->syncwait.ref = cs_iface->output->status_wait_sync_value;
+ 		status_wait_cond = cs_iface->output->status_wait & CS_STATUS_WAIT_SYNC_COND_MASK;
+@@ -2046,6 +2052,7 @@ static void
+ tick_ctx_cleanup(struct panthor_scheduler *sched,
+ 		 struct panthor_sched_tick_ctx *ctx)
+ {
++	struct panthor_device *ptdev = sched->ptdev;
+ 	struct panthor_group *group, *tmp;
+ 	u32 i;
+ 
+@@ -2054,7 +2061,7 @@ tick_ctx_cleanup(struct panthor_scheduler *sched,
+ 			/* If everything went fine, we should only have groups
+ 			 * to be terminated in the old_groups lists.
+ 			 */
+-			drm_WARN_ON(&group->ptdev->base, !ctx->csg_upd_failed_mask &&
++			drm_WARN_ON(&ptdev->base, !ctx->csg_upd_failed_mask &&
+ 				    group_can_run(group));
+ 
+ 			if (!group_can_run(group)) {
+@@ -2077,7 +2084,7 @@ tick_ctx_cleanup(struct panthor_scheduler *sched,
+ 		/* If everything went fine, the groups to schedule lists should
+ 		 * be empty.
+ 		 */
+-		drm_WARN_ON(&group->ptdev->base,
++		drm_WARN_ON(&ptdev->base,
+ 			    !ctx->csg_upd_failed_mask && !list_empty(&ctx->groups[i]));
+ 
+ 		list_for_each_entry_safe(group, tmp, &ctx->groups[i], run_node) {
+@@ -3242,6 +3249,18 @@ int panthor_group_destroy(struct panthor_file *pfile, u32 group_handle)
+ 	return 0;
+ }
+ 
++static struct panthor_group *group_from_handle(struct panthor_group_pool *pool,
++					       u32 group_handle)
++{
++	struct panthor_group *group;
++
++	xa_lock(&pool->xa);
++	group = group_get(xa_load(&pool->xa, group_handle));
++	xa_unlock(&pool->xa);
++
++	return group;
++}
++
+ int panthor_group_get_state(struct panthor_file *pfile,
+ 			    struct drm_panthor_group_get_state *get_state)
+ {
+@@ -3253,7 +3272,7 @@ int panthor_group_get_state(struct panthor_file *pfile,
+ 	if (get_state->pad)
+ 		return -EINVAL;
+ 
+-	group = group_get(xa_load(&gpool->xa, get_state->group_handle));
++	group = group_from_handle(gpool, get_state->group_handle);
+ 	if (!group)
+ 		return -EINVAL;
+ 
+@@ -3384,7 +3403,7 @@ panthor_job_create(struct panthor_file *pfile,
+ 	job->call_info.latest_flush = qsubmit->latest_flush;
+ 	INIT_LIST_HEAD(&job->node);
+ 
+-	job->group = group_get(xa_load(&gpool->xa, group_handle));
++	job->group = group_from_handle(gpool, group_handle);
+ 	if (!job->group) {
+ 		ret = -EINVAL;
+ 		goto err_put_job;
+@@ -3424,13 +3443,8 @@ void panthor_job_update_resvs(struct drm_exec *exec, struct drm_sched_job *sched
+ {
+ 	struct panthor_job *job = container_of(sched_job, struct panthor_job, base);
+ 
+-	/* Still not sure why we want USAGE_WRITE for external objects, since I
+-	 * was assuming this would be handled through explicit syncs being imported
+-	 * to external BOs with DMA_BUF_IOCTL_IMPORT_SYNC_FILE, but other drivers
+-	 * seem to pass DMA_RESV_USAGE_WRITE, so there must be a good reason.
+-	 */
+ 	panthor_vm_update_resvs(job->group->vm, exec, &sched_job->s_fence->finished,
+-				DMA_RESV_USAGE_BOOKKEEP, DMA_RESV_USAGE_WRITE);
++				DMA_RESV_USAGE_BOOKKEEP, DMA_RESV_USAGE_BOOKKEEP);
+ }
+ 
+ void panthor_sched_unplug(struct panthor_device *ptdev)
+diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
+index 0b1e19345f43a7..bfd42e3e161e98 100644
+--- a/drivers/gpu/drm/radeon/r100.c
++++ b/drivers/gpu/drm/radeon/r100.c
+@@ -1016,45 +1016,65 @@ static int r100_cp_init_microcode(struct radeon_device *rdev)
+ 
+ 	DRM_DEBUG_KMS("\n");
+ 
+-	if ((rdev->family == CHIP_R100) || (rdev->family == CHIP_RV100) ||
+-	    (rdev->family == CHIP_RV200) || (rdev->family == CHIP_RS100) ||
+-	    (rdev->family == CHIP_RS200)) {
++	switch (rdev->family) {
++	case CHIP_R100:
++	case CHIP_RV100:
++	case CHIP_RV200:
++	case CHIP_RS100:
++	case CHIP_RS200:
+ 		DRM_INFO("Loading R100 Microcode\n");
+ 		fw_name = FIRMWARE_R100;
+-	} else if ((rdev->family == CHIP_R200) ||
+-		   (rdev->family == CHIP_RV250) ||
+-		   (rdev->family == CHIP_RV280) ||
+-		   (rdev->family == CHIP_RS300)) {
++		break;
++
++	case CHIP_R200:
++	case CHIP_RV250:
++	case CHIP_RV280:
++	case CHIP_RS300:
+ 		DRM_INFO("Loading R200 Microcode\n");
+ 		fw_name = FIRMWARE_R200;
+-	} else if ((rdev->family == CHIP_R300) ||
+-		   (rdev->family == CHIP_R350) ||
+-		   (rdev->family == CHIP_RV350) ||
+-		   (rdev->family == CHIP_RV380) ||
+-		   (rdev->family == CHIP_RS400) ||
+-		   (rdev->family == CHIP_RS480)) {
++		break;
++
++	case CHIP_R300:
++	case CHIP_R350:
++	case CHIP_RV350:
++	case CHIP_RV380:
++	case CHIP_RS400:
++	case CHIP_RS480:
+ 		DRM_INFO("Loading R300 Microcode\n");
+ 		fw_name = FIRMWARE_R300;
+-	} else if ((rdev->family == CHIP_R420) ||
+-		   (rdev->family == CHIP_R423) ||
+-		   (rdev->family == CHIP_RV410)) {
++		break;
++
++	case CHIP_R420:
++	case CHIP_R423:
++	case CHIP_RV410:
+ 		DRM_INFO("Loading R400 Microcode\n");
+ 		fw_name = FIRMWARE_R420;
+-	} else if ((rdev->family == CHIP_RS690) ||
+-		   (rdev->family == CHIP_RS740)) {
++		break;
++
++	case CHIP_RS690:
++	case CHIP_RS740:
+ 		DRM_INFO("Loading RS690/RS740 Microcode\n");
+ 		fw_name = FIRMWARE_RS690;
+-	} else if (rdev->family == CHIP_RS600) {
++		break;
++
++	case CHIP_RS600:
+ 		DRM_INFO("Loading RS600 Microcode\n");
+ 		fw_name = FIRMWARE_RS600;
+-	} else if ((rdev->family == CHIP_RV515) ||
+-		   (rdev->family == CHIP_R520) ||
+-		   (rdev->family == CHIP_RV530) ||
+-		   (rdev->family == CHIP_R580) ||
+-		   (rdev->family == CHIP_RV560) ||
+-		   (rdev->family == CHIP_RV570)) {
++		break;
++
++	case CHIP_RV515:
++	case CHIP_R520:
++	case CHIP_RV530:
++	case CHIP_R580:
++	case CHIP_RV560:
++	case CHIP_RV570:
+ 		DRM_INFO("Loading R500 Microcode\n");
+ 		fw_name = FIRMWARE_R520;
++		break;
++
++	default:
++		DRM_ERROR("Unsupported Radeon family %u\n", rdev->family);
++		return -EINVAL;
+ 	}
+ 
+ 	err = request_firmware(&rdev->me_fw, fw_name, rdev->dev);
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 4a9c6ea7f15dc3..f161f40d8ce4c8 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -1583,6 +1583,10 @@ static void vop_crtc_atomic_flush(struct drm_crtc *crtc,
+ 	VOP_AFBC_SET(vop, enable, s->enable_afbc);
+ 	vop_cfg_done(vop);
+ 
++	/* Ack the DMA transfer of the previous frame (RK3066). */
++	if (VOP_HAS_REG(vop, common, dma_stop))
++		VOP_REG_SET(vop, common, dma_stop, 0);
++
+ 	spin_unlock(&vop->reg_lock);
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.h b/drivers/gpu/drm/rockchip/rockchip_drm_vop.h
+index b33e5bdc26be16..0cf512cc16144b 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.h
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.h
+@@ -122,6 +122,7 @@ struct vop_common {
+ 	struct vop_reg lut_buffer_index;
+ 	struct vop_reg gate_en;
+ 	struct vop_reg mmu_en;
++	struct vop_reg dma_stop;
+ 	struct vop_reg out_mode;
+ 	struct vop_reg standby;
+ };
+diff --git a/drivers/gpu/drm/rockchip/rockchip_vop_reg.c b/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
+index b9ee02061d5bf3..e2c6ba26f4377d 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
++++ b/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
+@@ -466,6 +466,7 @@ static const struct vop_output rk3066_output = {
+ };
+ 
+ static const struct vop_common rk3066_common = {
++	.dma_stop = VOP_REG(RK3066_SYS_CTRL0, 0x1, 0),
+ 	.standby = VOP_REG(RK3066_SYS_CTRL0, 0x1, 1),
+ 	.out_mode = VOP_REG(RK3066_DSP_CTRL0, 0xf, 0),
+ 	.cfg_done = VOP_REG(RK3066_REG_CFG_DONE, 0x1, 0),
+@@ -514,6 +515,7 @@ static const struct vop_data rk3066_vop = {
+ 	.output = &rk3066_output,
+ 	.win = rk3066_vop_win_data,
+ 	.win_size = ARRAY_SIZE(rk3066_vop_win_data),
++	.feature = VOP_FEATURE_INTERNAL_RGB,
+ 	.max_output = { 1920, 1080 },
+ };
+ 
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index 58c8161289fea9..a75eede8bf8dab 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -133,8 +133,10 @@ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
+ {
+ 	WARN_ON(!num_sched_list || !sched_list);
+ 
++	spin_lock(&entity->rq_lock);
+ 	entity->sched_list = sched_list;
+ 	entity->num_sched_list = num_sched_list;
++	spin_unlock(&entity->rq_lock);
+ }
+ EXPORT_SYMBOL(drm_sched_entity_modify_sched);
+ 
+@@ -380,7 +382,7 @@ static void drm_sched_entity_wakeup(struct dma_fence *f,
+ 		container_of(cb, struct drm_sched_entity, cb);
+ 
+ 	drm_sched_entity_clear_dep(f, cb);
+-	drm_sched_wakeup(entity->rq->sched, entity);
++	drm_sched_wakeup(entity->rq->sched);
+ }
+ 
+ /**
+@@ -597,6 +599,9 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
+ 
+ 	/* first job wakes up scheduler */
+ 	if (first) {
++		struct drm_gpu_scheduler *sched;
++		struct drm_sched_rq *rq;
++
+ 		/* Add the entity to the run queue */
+ 		spin_lock(&entity->rq_lock);
+ 		if (entity->stopped) {
+@@ -606,13 +611,16 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
+ 			return;
+ 		}
+ 
+-		drm_sched_rq_add_entity(entity->rq, entity);
++		rq = entity->rq;
++		sched = rq->sched;
++
++		drm_sched_rq_add_entity(rq, entity);
+ 		spin_unlock(&entity->rq_lock);
+ 
+ 		if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
+ 			drm_sched_rq_update_fifo(entity, submit_ts);
+ 
+-		drm_sched_wakeup(entity->rq->sched, entity);
++		drm_sched_wakeup(sched);
+ 	}
+ }
+ EXPORT_SYMBOL(drm_sched_entity_push_job);
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index 7e90c9f95611a0..a124d5e77b5e86 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -1022,15 +1022,12 @@ EXPORT_SYMBOL(drm_sched_job_cleanup);
+ /**
+  * drm_sched_wakeup - Wake up the scheduler if it is ready to queue
+  * @sched: scheduler instance
+- * @entity: the scheduler entity
+  *
+  * Wake up the scheduler if we can queue jobs.
+  */
+-void drm_sched_wakeup(struct drm_gpu_scheduler *sched,
+-		      struct drm_sched_entity *entity)
++void drm_sched_wakeup(struct drm_gpu_scheduler *sched)
+ {
+-	if (drm_sched_can_queue(sched, entity))
+-		drm_sched_run_job_queue(sched);
++	drm_sched_run_job_queue(sched);
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/stm/drv.c b/drivers/gpu/drm/stm/drv.c
+index 4d2db079ad4ff3..e1232f74dfa537 100644
+--- a/drivers/gpu/drm/stm/drv.c
++++ b/drivers/gpu/drm/stm/drv.c
+@@ -25,6 +25,7 @@
+ #include <drm/drm_module.h>
+ #include <drm/drm_probe_helper.h>
+ #include <drm/drm_vblank.h>
++#include <drm/drm_managed.h>
+ 
+ #include "ltdc.h"
+ 
+@@ -75,7 +76,7 @@ static int drv_load(struct drm_device *ddev)
+ 
+ 	DRM_DEBUG("%s\n", __func__);
+ 
+-	ldev = devm_kzalloc(ddev->dev, sizeof(*ldev), GFP_KERNEL);
++	ldev = drmm_kzalloc(ddev, sizeof(*ldev), GFP_KERNEL);
+ 	if (!ldev)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index 5aec1e58c968c2..0832b749b66e7f 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -36,6 +36,7 @@
+ #include <drm/drm_probe_helper.h>
+ #include <drm/drm_simple_kms_helper.h>
+ #include <drm/drm_vblank.h>
++#include <drm/drm_managed.h>
+ 
+ #include <video/videomode.h>
+ 
+@@ -1199,7 +1200,6 @@ static void ltdc_crtc_atomic_print_state(struct drm_printer *p,
+ }
+ 
+ static const struct drm_crtc_funcs ltdc_crtc_funcs = {
+-	.destroy = drm_crtc_cleanup,
+ 	.set_config = drm_atomic_helper_set_config,
+ 	.page_flip = drm_atomic_helper_page_flip,
+ 	.reset = drm_atomic_helper_crtc_reset,
+@@ -1212,7 +1212,6 @@ static const struct drm_crtc_funcs ltdc_crtc_funcs = {
+ };
+ 
+ static const struct drm_crtc_funcs ltdc_crtc_with_crc_support_funcs = {
+-	.destroy = drm_crtc_cleanup,
+ 	.set_config = drm_atomic_helper_set_config,
+ 	.page_flip = drm_atomic_helper_page_flip,
+ 	.reset = drm_atomic_helper_crtc_reset,
+@@ -1514,6 +1513,9 @@ static void ltdc_plane_atomic_disable(struct drm_plane *plane,
+ 	/* Disable layer */
+ 	regmap_write_bits(ldev->regmap, LTDC_L1CR + lofs, LXCR_LEN | LXCR_CLUTEN |  LXCR_HMEN, 0);
+ 
++	/* Reset the layer transparency to hide any related background color */
++	regmap_write_bits(ldev->regmap, LTDC_L1CACR + lofs, LXCACR_CONSTA, 0x00);
++
+ 	/* Commit shadow registers = update plane at next vblank */
+ 	if (ldev->caps.plane_reg_shadow)
+ 		regmap_write_bits(ldev->regmap, LTDC_L1RCR + lofs,
+@@ -1545,7 +1547,6 @@ static void ltdc_plane_atomic_print_state(struct drm_printer *p,
+ static const struct drm_plane_funcs ltdc_plane_funcs = {
+ 	.update_plane = drm_atomic_helper_update_plane,
+ 	.disable_plane = drm_atomic_helper_disable_plane,
+-	.destroy = drm_plane_cleanup,
+ 	.reset = drm_atomic_helper_plane_reset,
+ 	.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
+ 	.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
+@@ -1572,7 +1573,6 @@ static struct drm_plane *ltdc_plane_create(struct drm_device *ddev,
+ 	const u64 *modifiers = ltdc_format_modifiers;
+ 	u32 lofs = index * LAY_OFS;
+ 	u32 val;
+-	int ret;
+ 
+ 	/* Allocate the biggest size according to supported color formats */
+ 	formats = devm_kzalloc(dev, (ldev->caps.pix_fmt_nb +
+@@ -1615,14 +1615,10 @@ static struct drm_plane *ltdc_plane_create(struct drm_device *ddev,
+ 		}
+ 	}
+ 
+-	plane = devm_kzalloc(dev, sizeof(*plane), GFP_KERNEL);
+-	if (!plane)
+-		return NULL;
+-
+-	ret = drm_universal_plane_init(ddev, plane, possible_crtcs,
+-				       &ltdc_plane_funcs, formats, nb_fmt,
+-				       modifiers, type, NULL);
+-	if (ret < 0)
++	plane = drmm_universal_plane_alloc(ddev, struct drm_plane, dev,
++					   possible_crtcs, &ltdc_plane_funcs, formats,
++					   nb_fmt, modifiers, type, NULL);
++	if (IS_ERR(plane))
+ 		return NULL;
+ 
+ 	if (ldev->caps.ycbcr_input) {
+@@ -1645,15 +1641,6 @@ static struct drm_plane *ltdc_plane_create(struct drm_device *ddev,
+ 	return plane;
+ }
+ 
+-static void ltdc_plane_destroy_all(struct drm_device *ddev)
+-{
+-	struct drm_plane *plane, *plane_temp;
+-
+-	list_for_each_entry_safe(plane, plane_temp,
+-				 &ddev->mode_config.plane_list, head)
+-		drm_plane_cleanup(plane);
+-}
+-
+ static int ltdc_crtc_init(struct drm_device *ddev, struct drm_crtc *crtc)
+ {
+ 	struct ltdc_device *ldev = ddev->dev_private;
+@@ -1679,14 +1666,14 @@ static int ltdc_crtc_init(struct drm_device *ddev, struct drm_crtc *crtc)
+ 
+ 	/* Init CRTC according to its hardware features */
+ 	if (ldev->caps.crc)
+-		ret = drm_crtc_init_with_planes(ddev, crtc, primary, NULL,
+-						&ltdc_crtc_with_crc_support_funcs, NULL);
++		ret = drmm_crtc_init_with_planes(ddev, crtc, primary, NULL,
++						 &ltdc_crtc_with_crc_support_funcs, NULL);
+ 	else
+-		ret = drm_crtc_init_with_planes(ddev, crtc, primary, NULL,
+-						&ltdc_crtc_funcs, NULL);
++		ret = drmm_crtc_init_with_planes(ddev, crtc, primary, NULL,
++						 &ltdc_crtc_funcs, NULL);
+ 	if (ret) {
+ 		DRM_ERROR("Can not initialize CRTC\n");
+-		goto cleanup;
++		return ret;
+ 	}
+ 
+ 	drm_crtc_helper_add(crtc, &ltdc_crtc_helper_funcs);
+@@ -1700,9 +1687,8 @@ static int ltdc_crtc_init(struct drm_device *ddev, struct drm_crtc *crtc)
+ 	for (i = 1; i < ldev->caps.nb_layers; i++) {
+ 		overlay = ltdc_plane_create(ddev, DRM_PLANE_TYPE_OVERLAY, i);
+ 		if (!overlay) {
+-			ret = -ENOMEM;
+ 			DRM_ERROR("Can not create overlay plane %d\n", i);
+-			goto cleanup;
++			return -ENOMEM;
+ 		}
+ 		if (ldev->caps.dynamic_zorder)
+ 			drm_plane_create_zpos_property(overlay, i, 0, ldev->caps.nb_layers - 1);
+@@ -1715,10 +1701,6 @@ static int ltdc_crtc_init(struct drm_device *ddev, struct drm_crtc *crtc)
+ 	}
+ 
+ 	return 0;
+-
+-cleanup:
+-	ltdc_plane_destroy_all(ddev);
+-	return ret;
+ }
+ 
+ static void ltdc_encoder_disable(struct drm_encoder *encoder)
+@@ -1778,23 +1760,19 @@ static int ltdc_encoder_init(struct drm_device *ddev, struct drm_bridge *bridge)
+ 	struct drm_encoder *encoder;
+ 	int ret;
+ 
+-	encoder = devm_kzalloc(ddev->dev, sizeof(*encoder), GFP_KERNEL);
+-	if (!encoder)
+-		return -ENOMEM;
++	encoder = drmm_simple_encoder_alloc(ddev, struct drm_encoder, dev,
++					    DRM_MODE_ENCODER_DPI);
++	if (IS_ERR(encoder))
++		return PTR_ERR(encoder);
+ 
+ 	encoder->possible_crtcs = CRTC_MASK;
+ 	encoder->possible_clones = 0;	/* No cloning support */
+ 
+-	drm_simple_encoder_init(ddev, encoder, DRM_MODE_ENCODER_DPI);
+-
+ 	drm_encoder_helper_add(encoder, &ltdc_encoder_helper_funcs);
+ 
+ 	ret = drm_bridge_attach(encoder, bridge, NULL, 0);
+-	if (ret) {
+-		if (ret != -EPROBE_DEFER)
+-			drm_encoder_cleanup(encoder);
++	if (ret)
+ 		return ret;
+-	}
+ 
+ 	DRM_DEBUG_DRIVER("Bridge encoder:%d created\n", encoder->base.id);
+ 
+@@ -1964,8 +1942,7 @@ int ltdc_load(struct drm_device *ddev)
+ 			goto err;
+ 
+ 		if (panel) {
+-			bridge = drm_panel_bridge_add_typed(panel,
+-							    DRM_MODE_CONNECTOR_DPI);
++			bridge = drmm_panel_bridge_add(ddev, panel);
+ 			if (IS_ERR(bridge)) {
+ 				DRM_ERROR("panel-bridge endpoint %d\n", i);
+ 				ret = PTR_ERR(bridge);
+@@ -2047,7 +2024,7 @@ int ltdc_load(struct drm_device *ddev)
+ 		}
+ 	}
+ 
+-	crtc = devm_kzalloc(dev, sizeof(*crtc), GFP_KERNEL);
++	crtc = drmm_kzalloc(ddev, sizeof(*crtc), GFP_KERNEL);
+ 	if (!crtc) {
+ 		DRM_ERROR("Failed to allocate crtc\n");
+ 		ret = -ENOMEM;
+@@ -2074,9 +2051,6 @@ int ltdc_load(struct drm_device *ddev)
+ 
+ 	return 0;
+ err:
+-	for (i = 0; i < nb_endpoints; i++)
+-		drm_of_panel_bridge_remove(ddev->dev->of_node, 0, i);
+-
+ 	clk_disable_unprepare(ldev->pixel_clk);
+ 
+ 	return ret;
+@@ -2084,16 +2058,8 @@ int ltdc_load(struct drm_device *ddev)
+ 
+ void ltdc_unload(struct drm_device *ddev)
+ {
+-	struct device *dev = ddev->dev;
+-	int nb_endpoints, i;
+-
+ 	DRM_DEBUG_DRIVER("\n");
+ 
+-	nb_endpoints = of_graph_get_endpoint_count(dev->of_node);
+-
+-	for (i = 0; i < nb_endpoints; i++)
+-		drm_of_panel_bridge_remove(ddev->dev->of_node, 0, i);
+-
+ 	pm_runtime_disable(ddev->dev);
+ }
+ 
+diff --git a/drivers/gpu/drm/v3d/v3d_submit.c b/drivers/gpu/drm/v3d/v3d_submit.c
+index 4cdfabbf4964f9..d310e95aa66293 100644
+--- a/drivers/gpu/drm/v3d/v3d_submit.c
++++ b/drivers/gpu/drm/v3d/v3d_submit.c
+@@ -671,6 +671,9 @@ v3d_get_cpu_reset_performance_params(struct drm_file *file_priv,
+ 	if (reset.nperfmons > V3D_MAX_PERFMONS)
+ 		return -EINVAL;
+ 
++	if (reset.nperfmons > V3D_MAX_PERFMONS)
++		return -EINVAL;
++
+ 	job->job_type = V3D_CPU_JOB_TYPE_RESET_PERFORMANCE_QUERY;
+ 
+ 	job->performance_query.queries = kvmalloc_array(reset.count,
+@@ -755,6 +758,9 @@ v3d_get_cpu_copy_performance_query_params(struct drm_file *file_priv,
+ 	if (copy.nperfmons > V3D_MAX_PERFMONS)
+ 		return -EINVAL;
+ 
++	if (copy.nperfmons > V3D_MAX_PERFMONS)
++		return -EINVAL;
++
+ 	job->job_type = V3D_CPU_JOB_TYPE_COPY_PERFORMANCE_QUERY;
+ 
+ 	job->performance_query.queries = kvmalloc_array(copy.count,
+diff --git a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+index b3d3c065dd9d8a..2f935771658e6b 100644
+--- a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
++++ b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+@@ -39,10 +39,14 @@ bool intel_hdcp_gsc_check_status(struct xe_device *xe)
+ {
+ 	struct xe_tile *tile = xe_device_get_root_tile(xe);
+ 	struct xe_gt *gt = tile->media_gt;
++	struct xe_gsc *gsc = &gt->uc.gsc;
+ 	bool ret = true;
+ 
+-	if (!xe_uc_fw_is_enabled(&gt->uc.gsc.fw))
++	if (!gsc && !xe_uc_fw_is_enabled(&gsc->fw)) {
++		drm_dbg_kms(&xe->drm,
++			    "GSC Components not ready for HDCP2.x\n");
+ 		return false;
++	}
+ 
+ 	xe_pm_runtime_get(xe);
+ 	if (xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC)) {
+@@ -52,7 +56,7 @@ bool intel_hdcp_gsc_check_status(struct xe_device *xe)
+ 		goto out;
+ 	}
+ 
+-	if (!xe_gsc_proxy_init_done(&gt->uc.gsc))
++	if (!xe_gsc_proxy_init_done(gsc))
+ 		ret = false;
+ 
+ 	xe_force_wake_put(gt_to_fw(gt), XE_FW_GSC);
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index f5e3012eff20d8..97506bf9f5e0c9 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -653,8 +653,8 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
+ 	tt_has_data = ttm && (ttm_tt_is_populated(ttm) ||
+ 			      (ttm->page_flags & TTM_TT_FLAG_SWAPPED));
+ 
+-	move_lacks_source = handle_system_ccs ? (!bo->ccs_cleared)  :
+-						(!mem_type_is_vram(old_mem_type) && !tt_has_data);
++	move_lacks_source = !old_mem || (handle_system_ccs ? (!bo->ccs_cleared) :
++					 (!mem_type_is_vram(old_mem_type) && !tt_has_data));
+ 
+ 	needs_clear = (ttm && ttm->page_flags & TTM_TT_FLAG_ZERO_ALLOC) ||
+ 		(!ttm && ttm_bo->type == ttm_bo_type_device);
+diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
+index a1cbdafbff75e2..599bf7f9e8c5c5 100644
+--- a/drivers/gpu/drm/xe/xe_device.c
++++ b/drivers/gpu/drm/xe/xe_device.c
+@@ -231,6 +231,9 @@ static void xe_device_destroy(struct drm_device *dev, void *dummy)
+ 	if (xe->unordered_wq)
+ 		destroy_workqueue(xe->unordered_wq);
+ 
++	if (xe->destroy_wq)
++		destroy_workqueue(xe->destroy_wq);
++
+ 	ttm_device_fini(&xe->ttm);
+ }
+ 
+@@ -293,8 +296,9 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
+ 	xe->preempt_fence_wq = alloc_ordered_workqueue("xe-preempt-fence-wq", 0);
+ 	xe->ordered_wq = alloc_ordered_workqueue("xe-ordered-wq", 0);
+ 	xe->unordered_wq = alloc_workqueue("xe-unordered-wq", 0, 0);
++	xe->destroy_wq = alloc_workqueue("xe-destroy-wq", 0, 0);
+ 	if (!xe->ordered_wq || !xe->unordered_wq ||
+-	    !xe->preempt_fence_wq) {
++	    !xe->preempt_fence_wq || !xe->destroy_wq) {
+ 		/*
+ 		 * Cleanup done in xe_device_destroy via
+ 		 * drmm_add_action_or_reset register above
+diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
+index 2e62450d86e185..f671300e0c9bdd 100644
+--- a/drivers/gpu/drm/xe/xe_device_types.h
++++ b/drivers/gpu/drm/xe/xe_device_types.h
+@@ -376,6 +376,9 @@ struct xe_device {
+ 	/** @unordered_wq: used to serialize unordered work, mostly display */
+ 	struct workqueue_struct *unordered_wq;
+ 
++	/** @destroy_wq: used to serialize user destroy work, like queue */
++	struct workqueue_struct *destroy_wq;
++
+ 	/** @tiles: device tiles */
+ 	struct xe_tile tiles[XE_MAX_TILES_PER_DEVICE];
+ 
+diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
+index e4ad1d6ce1d5ff..7f24e58cc992f2 100644
+--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
++++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
+@@ -90,6 +90,11 @@ void xe_sched_submission_stop(struct xe_gpu_scheduler *sched)
+ 	cancel_work_sync(&sched->work_process_msg);
+ }
+ 
++void xe_sched_submission_resume_tdr(struct xe_gpu_scheduler *sched)
++{
++	drm_sched_resume_timeout(&sched->base, sched->base.timeout);
++}
++
+ void xe_sched_add_msg(struct xe_gpu_scheduler *sched,
+ 		      struct xe_sched_msg *msg)
+ {
+diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+index 10c6bb9c938681..6aac7fe686735a 100644
+--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
++++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+@@ -22,6 +22,8 @@ void xe_sched_fini(struct xe_gpu_scheduler *sched);
+ void xe_sched_submission_start(struct xe_gpu_scheduler *sched);
+ void xe_sched_submission_stop(struct xe_gpu_scheduler *sched);
+ 
++void xe_sched_submission_resume_tdr(struct xe_gpu_scheduler *sched);
++
+ void xe_sched_add_msg(struct xe_gpu_scheduler *sched,
+ 		      struct xe_sched_msg *msg);
+ 
+diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
+index 67e8efcaa93f1c..d924fdd8f6f97c 100644
+--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
++++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
+@@ -307,7 +307,7 @@ static bool get_pagefault(struct pf_queue *pf_queue, struct pagefault *pf)
+ 			PFD_VIRTUAL_ADDR_LO_SHIFT;
+ 
+ 		pf_queue->tail = (pf_queue->tail + PF_MSG_LEN_DW) %
+-			PF_QUEUE_NUM_DW;
++			pf_queue->num_dw;
+ 		ret = true;
+ 	}
+ 	spin_unlock_irq(&pf_queue->lock);
+@@ -319,7 +319,8 @@ static bool pf_queue_full(struct pf_queue *pf_queue)
+ {
+ 	lockdep_assert_held(&pf_queue->lock);
+ 
+-	return CIRC_SPACE(pf_queue->head, pf_queue->tail, PF_QUEUE_NUM_DW) <=
++	return CIRC_SPACE(pf_queue->head, pf_queue->tail,
++			  pf_queue->num_dw) <=
+ 		PF_MSG_LEN_DW;
+ }
+ 
+@@ -332,22 +333,23 @@ int xe_guc_pagefault_handler(struct xe_guc *guc, u32 *msg, u32 len)
+ 	u32 asid;
+ 	bool full;
+ 
+-	/*
+-	 * The below logic doesn't work unless PF_QUEUE_NUM_DW % PF_MSG_LEN_DW == 0
+-	 */
+-	BUILD_BUG_ON(PF_QUEUE_NUM_DW % PF_MSG_LEN_DW);
+-
+ 	if (unlikely(len != PF_MSG_LEN_DW))
+ 		return -EPROTO;
+ 
+ 	asid = FIELD_GET(PFD_ASID, msg[1]);
+ 	pf_queue = gt->usm.pf_queue + (asid % NUM_PF_QUEUE);
+ 
++	/*
++	 * The below logic doesn't work unless PF_QUEUE_NUM_DW % PF_MSG_LEN_DW == 0
++	 */
++	xe_gt_assert(gt, !(pf_queue->num_dw % PF_MSG_LEN_DW));
++
+ 	spin_lock_irqsave(&pf_queue->lock, flags);
+ 	full = pf_queue_full(pf_queue);
+ 	if (!full) {
+ 		memcpy(pf_queue->data + pf_queue->head, msg, len * sizeof(u32));
+-		pf_queue->head = (pf_queue->head + len) % PF_QUEUE_NUM_DW;
++		pf_queue->head = (pf_queue->head + len) %
++			pf_queue->num_dw;
+ 		queue_work(gt->usm.pf_wq, &pf_queue->worker);
+ 	} else {
+ 		drm_warn(&xe->drm, "PF Queue full, shouldn't be possible");
+@@ -414,18 +416,47 @@ static void pagefault_fini(void *arg)
+ 	destroy_workqueue(gt->usm.pf_wq);
+ }
+ 
++static int xe_alloc_pf_queue(struct xe_gt *gt, struct pf_queue *pf_queue)
++{
++	struct xe_device *xe = gt_to_xe(gt);
++	xe_dss_mask_t all_dss;
++	int num_dss, num_eus;
++
++	bitmap_or(all_dss, gt->fuse_topo.g_dss_mask, gt->fuse_topo.c_dss_mask,
++		  XE_MAX_DSS_FUSE_BITS);
++
++	num_dss = bitmap_weight(all_dss, XE_MAX_DSS_FUSE_BITS);
++	num_eus = bitmap_weight(gt->fuse_topo.eu_mask_per_dss,
++				XE_MAX_EU_FUSE_BITS) * num_dss;
++
++	/* user can issue separate page faults per EU and per CS */
++	pf_queue->num_dw =
++		(num_eus + XE_NUM_HW_ENGINES) * PF_MSG_LEN_DW;
++
++	pf_queue->gt = gt;
++	pf_queue->data = devm_kcalloc(xe->drm.dev, pf_queue->num_dw,
++				      sizeof(u32), GFP_KERNEL);
++	if (!pf_queue->data)
++		return -ENOMEM;
++
++	spin_lock_init(&pf_queue->lock);
++	INIT_WORK(&pf_queue->worker, pf_queue_work_func);
++
++	return 0;
++}
++
+ int xe_gt_pagefault_init(struct xe_gt *gt)
+ {
+ 	struct xe_device *xe = gt_to_xe(gt);
+-	int i;
++	int i, ret = 0;
+ 
+ 	if (!xe->info.has_usm)
+ 		return 0;
+ 
+ 	for (i = 0; i < NUM_PF_QUEUE; ++i) {
+-		gt->usm.pf_queue[i].gt = gt;
+-		spin_lock_init(&gt->usm.pf_queue[i].lock);
+-		INIT_WORK(&gt->usm.pf_queue[i].worker, pf_queue_work_func);
++		ret = xe_alloc_pf_queue(gt, &gt->usm.pf_queue[i]);
++		if (ret)
++			return ret;
+ 	}
+ 	for (i = 0; i < NUM_ACC_QUEUE; ++i) {
+ 		gt->usm.acc_queue[i].gt = gt;
+diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
+index cfdc761ff7f46b..2dbea50cd8f98e 100644
+--- a/drivers/gpu/drm/xe/xe_gt_types.h
++++ b/drivers/gpu/drm/xe/xe_gt_types.h
+@@ -229,9 +229,14 @@ struct xe_gt {
+ 		struct pf_queue {
+ 			/** @usm.pf_queue.gt: back pointer to GT */
+ 			struct xe_gt *gt;
+-#define PF_QUEUE_NUM_DW	128
+ 			/** @usm.pf_queue.data: data in the page fault queue */
+-			u32 data[PF_QUEUE_NUM_DW];
++			u32 *data;
++			/**
++			 * @usm.pf_queue.num_dw: number of DWORDS in the page
++			 * fault queue. Dynamically calculated based on the number
++			 * of compute resources available.
++			 */
++			u32 num_dw;
+ 			/**
+ 			 * @usm.pf_queue.tail: tail pointer in DWs for page fault queue,
+ 			 * moved by worker which processes faults (consumer).
+diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
+index 23382ced4ea747..69f8b6fdaeaea5 100644
+--- a/drivers/gpu/drm/xe/xe_guc_pc.c
++++ b/drivers/gpu/drm/xe/xe_guc_pc.c
+@@ -897,7 +897,7 @@ static void xe_guc_pc_fini(struct drm_device *drm, void *arg)
+ 	struct xe_guc_pc *pc = arg;
+ 
+ 	XE_WARN_ON(xe_force_wake_get(gt_to_fw(pc_to_gt(pc)), XE_FORCEWAKE_ALL));
+-	XE_WARN_ON(xe_guc_pc_gucrc_disable(pc));
++	xe_guc_pc_gucrc_disable(pc);
+ 	XE_WARN_ON(xe_guc_pc_stop(pc));
+ 	xe_force_wake_put(gt_to_fw(pc_to_gt(pc)), XE_FORCEWAKE_ALL);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 0a496612c810fa..a0f82994880301 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -233,10 +233,26 @@ static struct workqueue_struct *get_submit_wq(struct xe_guc *guc)
+ }
+ #endif
+ 
++static void xe_guc_submit_fini(struct xe_guc *guc)
++{
++	struct xe_device *xe = guc_to_xe(guc);
++	struct xe_gt *gt = guc_to_gt(guc);
++	int ret;
++
++	ret = wait_event_timeout(guc->submission_state.fini_wq,
++				 xa_empty(&guc->submission_state.exec_queue_lookup),
++				 HZ * 5);
++
++	drain_workqueue(xe->destroy_wq);
++
++	xe_gt_assert(gt, ret);
++}
++
+ static void guc_submit_fini(struct drm_device *drm, void *arg)
+ {
+ 	struct xe_guc *guc = arg;
+ 
++	xe_guc_submit_fini(guc);
+ 	xa_destroy(&guc->submission_state.exec_queue_lookup);
+ 	free_submit_wq(guc);
+ }
+@@ -251,7 +267,6 @@ static void primelockdep(struct xe_guc *guc)
+ 	fs_reclaim_acquire(GFP_KERNEL);
+ 
+ 	mutex_lock(&guc->submission_state.lock);
+-	might_lock(&guc->submission_state.suspend.lock);
+ 	mutex_unlock(&guc->submission_state.lock);
+ 
+ 	fs_reclaim_release(GFP_KERNEL);
+@@ -279,8 +294,7 @@ int xe_guc_submit_init(struct xe_guc *guc)
+ 
+ 	xa_init(&guc->submission_state.exec_queue_lookup);
+ 
+-	spin_lock_init(&guc->submission_state.suspend.lock);
+-	guc->submission_state.suspend.context = dma_fence_context_alloc(1);
++	init_waitqueue_head(&guc->submission_state.fini_wq);
+ 
+ 	primelockdep(guc);
+ 
+@@ -298,6 +312,9 @@ static void __release_guc_id(struct xe_guc *guc, struct xe_exec_queue *q, u32 xa
+ 
+ 	xe_guc_id_mgr_release_locked(&guc->submission_state.idm,
+ 				     q->guc->id, q->width);
++
++	if (xa_empty(&guc->submission_state.exec_queue_lookup))
++		wake_up(&guc->submission_state.fini_wq);
+ }
+ 
+ static int alloc_guc_id(struct xe_guc *guc, struct xe_exec_queue *q)
+@@ -1029,13 +1046,16 @@ static void __guc_exec_queue_fini_async(struct work_struct *w)
+ 
+ static void guc_exec_queue_fini_async(struct xe_exec_queue *q)
+ {
++	struct xe_guc *guc = exec_queue_to_guc(q);
++	struct xe_device *xe = guc_to_xe(guc);
++
+ 	INIT_WORK(&q->guc->fini_async, __guc_exec_queue_fini_async);
+ 
+ 	/* We must block on kernel engines so slabs are empty on driver unload */
+ 	if (q->flags & EXEC_QUEUE_FLAG_PERMANENT)
+ 		__guc_exec_queue_fini_async(&q->guc->fini_async);
+ 	else
+-		queue_work(system_wq, &q->guc->fini_async);
++		queue_work(xe->destroy_wq, &q->guc->fini_async);
+ }
+ 
+ static void __guc_exec_queue_fini(struct xe_guc *guc, struct xe_exec_queue *q)
+@@ -1500,6 +1520,7 @@ static void guc_exec_queue_start(struct xe_exec_queue *q)
+ 	}
+ 
+ 	xe_sched_submission_start(sched);
++	xe_sched_submission_resume_tdr(sched);
+ }
+ 
+ int xe_guc_submit_start(struct xe_guc *guc)
+diff --git a/drivers/gpu/drm/xe/xe_guc_types.h b/drivers/gpu/drm/xe/xe_guc_types.h
+index 82bd93f7867d13..69046f69827174 100644
+--- a/drivers/gpu/drm/xe/xe_guc_types.h
++++ b/drivers/gpu/drm/xe/xe_guc_types.h
+@@ -72,15 +72,6 @@ struct xe_guc {
+ 		atomic_t stopped;
+ 		/** @submission_state.lock: protects submission state */
+ 		struct mutex lock;
+-		/** @submission_state.suspend: suspend fence state */
+-		struct {
+-			/** @submission_state.suspend.lock: suspend fences lock */
+-			spinlock_t lock;
+-			/** @submission_state.suspend.context: suspend fences context */
+-			u64 context;
+-			/** @submission_state.suspend.seqno: suspend fences seqno */
+-			u32 seqno;
+-		} suspend;
+ #ifdef CONFIG_PROVE_LOCKING
+ #define NUM_SUBMIT_WQ	256
+ 		/** @submission_state.submit_wq_pool: submission ordered workqueues pool */
+@@ -90,6 +81,8 @@ struct xe_guc {
+ #endif
+ 		/** @submission_state.enabled: submission is enabled */
+ 		bool enabled;
++		/** @submission_state.fini_wq: submit fini wait queue */
++		wait_queue_head_t fini_wq;
+ 	} submission_state;
+ 	/** @hwconfig: Hardware config state */
+ 	struct {
+diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
+index f326dbb1cecd9f..99824e19a376f2 100644
+--- a/drivers/gpu/drm/xe/xe_pci.c
++++ b/drivers/gpu/drm/xe/xe_pci.c
+@@ -868,6 +868,8 @@ static int xe_pci_resume(struct device *dev)
+ 	if (err)
+ 		return err;
+ 
++	pci_restore_state(pdev);
++
+ 	err = pci_enable_device(pdev);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 781c5aa298598a..06104a4e0fdc15 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -417,24 +417,8 @@
+ #define USB_DEVICE_ID_TOSHIBA_CLICK_L9W	0x0401
+ #define USB_DEVICE_ID_HP_X2		0x074d
+ #define USB_DEVICE_ID_HP_X2_10_COVER	0x0755
+-#define I2C_DEVICE_ID_HP_ENVY_X360_15	0x2d05
+-#define I2C_DEVICE_ID_HP_ENVY_X360_15T_DR100	0x29CF
+-#define I2C_DEVICE_ID_HP_ENVY_X360_EU0009NV	0x2CF9
+-#define I2C_DEVICE_ID_HP_SPECTRE_X360_15	0x2817
+-#define I2C_DEVICE_ID_HP_SPECTRE_X360_13_AW0020NG  0x29DF
+-#define I2C_DEVICE_ID_ASUS_TP420IA_TOUCHSCREEN 0x2BC8
+-#define I2C_DEVICE_ID_ASUS_GV301RA_TOUCHSCREEN 0x2C82
+-#define I2C_DEVICE_ID_ASUS_UX3402_TOUCHSCREEN 0x2F2C
+-#define I2C_DEVICE_ID_ASUS_UX6404_TOUCHSCREEN 0x4116
+ #define USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN	0x2544
+ #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN	0x2706
+-#define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN	0x261A
+-#define I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN	0x2A1C
+-#define I2C_DEVICE_ID_LENOVO_YOGA_C630_TOUCHSCREEN	0x279F
+-#define I2C_DEVICE_ID_HP_SPECTRE_X360_13T_AW100	0x29F5
+-#define I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V1	0x2BED
+-#define I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V2	0x2BEE
+-#define I2C_DEVICE_ID_HP_ENVY_X360_15_EU0556NG		0x2D02
+ #define I2C_DEVICE_ID_CHROMEBOOK_TROGDOR_POMPOM	0x2F81
+ 
+ #define USB_VENDOR_ID_ELECOM		0x056e
+@@ -810,6 +794,7 @@
+ #define USB_DEVICE_ID_LENOVO_X1_TAB	0x60a3
+ #define USB_DEVICE_ID_LENOVO_X1_TAB3	0x60b5
+ #define USB_DEVICE_ID_LENOVO_X12_TAB	0x60fe
++#define USB_DEVICE_ID_LENOVO_X12_TAB2	0x61ae
+ #define USB_DEVICE_ID_LENOVO_OPTICAL_USB_MOUSE_600E	0x600e
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D	0x608d
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019	0x6019
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index c9094a4f281e90..fda9dce3da9980 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -373,14 +373,6 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
+ 		USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD),
+ 	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_TP420IA_TOUCHSCREEN),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_GV301RA_TOUCHSCREEN),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_UX3402_TOUCHSCREEN),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_UX6404_TOUCHSCREEN),
+-	  HID_BATTERY_QUIRK_IGNORE },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN),
+ 	  HID_BATTERY_QUIRK_IGNORE },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN),
+@@ -391,32 +383,13 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ 	  HID_BATTERY_QUIRK_AVOID_QUERY },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_UGEE, USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_SW),
+ 	  HID_BATTERY_QUIRK_AVOID_QUERY },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15T_DR100),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_EU0009NV),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_15),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_13_AW0020NG),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_LENOVO_YOGA_C630_TOUCHSCREEN),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_13T_AW100),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V1),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V2),
+-	  HID_BATTERY_QUIRK_IGNORE },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15_EU0556NG),
+-	  HID_BATTERY_QUIRK_IGNORE },
+ 	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_CHROMEBOOK_TROGDOR_POMPOM),
+ 	  HID_BATTERY_QUIRK_AVOID_QUERY },
++	/*
++	 * Elan I2C-HID touchscreens all seem to report a non-present battery,
++	 * so set HID_BATTERY_QUIRK_IGNORE for all Elan I2C-HID devices.
++	 */
++	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_BATTERY_QUIRK_IGNORE },
+ 	{}
+ };
+ 
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 99812c0f830b5e..c4a6908bbe5404 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2113,6 +2113,12 @@ static const struct hid_device_id mt_devices[] = {
+ 			   USB_VENDOR_ID_LENOVO,
+ 			   USB_DEVICE_ID_LENOVO_X12_TAB) },
+ 
++	/* Lenovo X12 TAB Gen 2 */
++	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
++		HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8,
++			   USB_VENDOR_ID_LENOVO,
++			   USB_DEVICE_ID_LENOVO_X12_TAB2) },
++
+ 	/* Logitech devices */
+ 	{ .driver_data = MT_CLS_NSMU,
+ 		HID_DEVICE(BUS_BLUETOOTH, HID_GROUP_MULTITOUCH_WIN_8,
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 632eaf9e11a6b6..2f8a9d3f1e861e 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -105,6 +105,7 @@ struct i2c_hid {
+ 
+ 	wait_queue_head_t	wait;		/* For waiting the interrupt */
+ 
++	struct mutex		cmd_lock;	/* protects cmdbuf and rawbuf */
+ 	struct mutex		reset_lock;
+ 
+ 	struct i2chid_ops	*ops;
+@@ -220,6 +221,8 @@ static int i2c_hid_xfer(struct i2c_hid *ihid,
+ static int i2c_hid_read_register(struct i2c_hid *ihid, __le16 reg,
+ 				 void *buf, size_t len)
+ {
++	guard(mutex)(&ihid->cmd_lock);
++
+ 	*(__le16 *)ihid->cmdbuf = reg;
+ 
+ 	return i2c_hid_xfer(ihid, ihid->cmdbuf, sizeof(__le16), buf, len);
+@@ -252,6 +255,8 @@ static int i2c_hid_get_report(struct i2c_hid *ihid,
+ 
+ 	i2c_hid_dbg(ihid, "%s\n", __func__);
+ 
++	guard(mutex)(&ihid->cmd_lock);
++
+ 	/* Command register goes first */
+ 	*(__le16 *)ihid->cmdbuf = ihid->hdesc.wCommandRegister;
+ 	length += sizeof(__le16);
+@@ -342,6 +347,8 @@ static int i2c_hid_set_or_send_report(struct i2c_hid *ihid,
+ 	if (!do_set && le16_to_cpu(ihid->hdesc.wMaxOutputLength) == 0)
+ 		return -ENOSYS;
+ 
++	guard(mutex)(&ihid->cmd_lock);
++
+ 	if (do_set) {
+ 		/* Command register goes first */
+ 		*(__le16 *)ihid->cmdbuf = ihid->hdesc.wCommandRegister;
+@@ -384,6 +391,8 @@ static int i2c_hid_set_power_command(struct i2c_hid *ihid, int power_state)
+ {
+ 	size_t length;
+ 
++	guard(mutex)(&ihid->cmd_lock);
++
+ 	/* SET_POWER uses command register */
+ 	*(__le16 *)ihid->cmdbuf = ihid->hdesc.wCommandRegister;
+ 	length = sizeof(__le16);
+@@ -440,25 +449,27 @@ static int i2c_hid_start_hwreset(struct i2c_hid *ihid)
+ 	if (ret)
+ 		return ret;
+ 
+-	/* Prepare reset command. Command register goes first. */
+-	*(__le16 *)ihid->cmdbuf = ihid->hdesc.wCommandRegister;
+-	length += sizeof(__le16);
+-	/* Next is RESET command itself */
+-	length += i2c_hid_encode_command(ihid->cmdbuf + length,
+-					 I2C_HID_OPCODE_RESET, 0, 0);
++	scoped_guard(mutex, &ihid->cmd_lock) {
++		/* Prepare reset command. Command register goes first. */
++		*(__le16 *)ihid->cmdbuf = ihid->hdesc.wCommandRegister;
++		length += sizeof(__le16);
++		/* Next is RESET command itself */
++		length += i2c_hid_encode_command(ihid->cmdbuf + length,
++						 I2C_HID_OPCODE_RESET, 0, 0);
+ 
+-	set_bit(I2C_HID_RESET_PENDING, &ihid->flags);
++		set_bit(I2C_HID_RESET_PENDING, &ihid->flags);
+ 
+-	ret = i2c_hid_xfer(ihid, ihid->cmdbuf, length, NULL, 0);
+-	if (ret) {
+-		dev_err(&ihid->client->dev,
+-			"failed to reset device: %d\n", ret);
+-		goto err_clear_reset;
+-	}
++		ret = i2c_hid_xfer(ihid, ihid->cmdbuf, length, NULL, 0);
++		if (ret) {
++			dev_err(&ihid->client->dev,
++				"failed to reset device: %d\n", ret);
++			break;
++		}
+ 
+-	return 0;
++		return 0;
++	}
+ 
+-err_clear_reset:
++	/* Clean up if sending reset command failed */
+ 	clear_bit(I2C_HID_RESET_PENDING, &ihid->flags);
+ 	i2c_hid_set_power(ihid, I2C_HID_PWR_SLEEP);
+ 	return ret;
+@@ -1200,6 +1211,7 @@ int i2c_hid_core_probe(struct i2c_client *client, struct i2chid_ops *ops,
+ 	ihid->is_panel_follower = drm_is_panel_follower(&client->dev);
+ 
+ 	init_waitqueue_head(&ihid->wait);
++	mutex_init(&ihid->cmd_lock);
+ 	mutex_init(&ihid->reset_lock);
+ 	INIT_WORK(&ihid->panel_follower_prepare_work, ihid_core_panel_prepare_work);
+ 
+diff --git a/drivers/hwmon/nct6775-platform.c b/drivers/hwmon/nct6775-platform.c
+index 9aa4dcf4a6f336..096f1daa8f2bcf 100644
+--- a/drivers/hwmon/nct6775-platform.c
++++ b/drivers/hwmon/nct6775-platform.c
+@@ -1269,6 +1269,7 @@ static const char * const asus_msi_boards[] = {
+ 	"EX-B760M-V5 D4",
+ 	"EX-H510M-V3",
+ 	"EX-H610M-V3 D4",
++	"G15CF",
+ 	"PRIME A620M-A",
+ 	"PRIME B560-PLUS",
+ 	"PRIME B560-PLUS AC-HES",
+diff --git a/drivers/i2c/busses/i2c-designware-common.c b/drivers/i2c/busses/i2c-designware-common.c
+index e8a688d04aee0f..edda6a70907b43 100644
+--- a/drivers/i2c/busses/i2c-designware-common.c
++++ b/drivers/i2c/busses/i2c-designware-common.c
+@@ -441,6 +441,7 @@ int i2c_dw_set_sda_hold(struct dw_i2c_dev *dev)
+ 
+ void __i2c_dw_disable(struct dw_i2c_dev *dev)
+ {
++	struct i2c_timings *t = &dev->timings;
+ 	unsigned int raw_intr_stats;
+ 	unsigned int enable;
+ 	int timeout = 100;
+@@ -453,6 +454,19 @@ void __i2c_dw_disable(struct dw_i2c_dev *dev)
+ 
+ 	abort_needed = raw_intr_stats & DW_IC_INTR_MST_ON_HOLD;
+ 	if (abort_needed) {
++		if (!(enable & DW_IC_ENABLE_ENABLE)) {
++			regmap_write(dev->map, DW_IC_ENABLE, DW_IC_ENABLE_ENABLE);
++			/*
++			 * Wait 10 times the signaling period of the highest I2C
++			 * transfer supported by the driver (for 400KHz this is
++			 * 25us) to ensure the I2C ENABLE bit is already set
++			 * as described in the DesignWare I2C databook.
++			 */
++			fsleep(DIV_ROUND_CLOSEST_ULL(10 * MICRO, t->bus_freq_hz));
++			/* Set ENABLE bit before setting ABORT */
++			enable |= DW_IC_ENABLE_ENABLE;
++		}
++
+ 		regmap_write(dev->map, DW_IC_ENABLE, enable | DW_IC_ENABLE_ABORT);
+ 		ret = regmap_read_poll_timeout(dev->map, DW_IC_ENABLE, enable,
+ 					       !(enable & DW_IC_ENABLE_ABORT), 10,
+diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
+index e9606c00b8d103..e45daedad96724 100644
+--- a/drivers/i2c/busses/i2c-designware-core.h
++++ b/drivers/i2c/busses/i2c-designware-core.h
+@@ -109,6 +109,7 @@
+ 						 DW_IC_INTR_RX_UNDER | \
+ 						 DW_IC_INTR_RD_REQ)
+ 
++#define DW_IC_ENABLE_ENABLE			BIT(0)
+ #define DW_IC_ENABLE_ABORT			BIT(1)
+ 
+ #define DW_IC_STATUS_ACTIVITY			BIT(0)
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index c7e56002809ace..7b260a3617f69d 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -253,6 +253,34 @@ static void i2c_dw_xfer_init(struct dw_i2c_dev *dev)
+ 	__i2c_dw_write_intr_mask(dev, DW_IC_INTR_MASTER_MASK);
+ }
+ 
++/*
++ * This function waits for the controller to be idle before disabling I2C.
++ * When the controller is not in the IDLE state, the MST_ACTIVITY bit
++ * (IC_STATUS[5]) is set.
++ *
++ * Values:
++ * 0x1 (ACTIVE): Controller not idle
++ * 0x0 (IDLE): Controller is idle
++ *
++ * The function is called after completing the current transfer.
++ *
++ * Returns:
++ * False when the controller is in the IDLE state.
++ * True when the controller is in the ACTIVE state.
++ */
++static bool i2c_dw_is_controller_active(struct dw_i2c_dev *dev)
++{
++	u32 status;
++
++	regmap_read(dev->map, DW_IC_STATUS, &status);
++	if (!(status & DW_IC_STATUS_MASTER_ACTIVITY))
++		return false;
++
++	return regmap_read_poll_timeout(dev->map, DW_IC_STATUS, status,
++				       !(status & DW_IC_STATUS_MASTER_ACTIVITY),
++				       1100, 20000) != 0;
++}
++
+ static int i2c_dw_check_stopbit(struct dw_i2c_dev *dev)
+ {
+ 	u32 val;
+@@ -788,6 +816,16 @@ i2c_dw_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
+ 		goto done;
+ 	}
+ 
++	/*
++	 * This happens rarely (~1:500) and is hard to reproduce. Debug trace
++	 * showed that IC_STATUS had a value of 0x23 when STOP_DET occurred;
++	 * disabling IC_ENABLE.ENABLE immediately at that point can result in
++	 * IC_RAW_INTR_STAT.MASTER_ON_HOLD holding SCL low. Check if
++	 * controller is still ACTIVE before disabling I2C.
++	 */
++	if (i2c_dw_is_controller_active(dev))
++		dev_err(dev->dev, "controller active\n");
++
+ 	/*
+ 	 * We must disable the adapter before returning and signaling the end
+ 	 * of the current transfer. Otherwise the hardware might continue
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 06e836e3e87733..4c9050a4d58e7d 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -818,15 +818,13 @@ static int geni_i2c_probe(struct platform_device *pdev)
+ 	init_completion(&gi2c->done);
+ 	spin_lock_init(&gi2c->lock);
+ 	platform_set_drvdata(pdev, gi2c);
+-	ret = devm_request_irq(dev, gi2c->irq, geni_i2c_irq, 0,
++	ret = devm_request_irq(dev, gi2c->irq, geni_i2c_irq, IRQF_NO_AUTOEN,
+ 			       dev_name(dev), gi2c);
+ 	if (ret) {
+ 		dev_err(dev, "Request_irq failed:%d: err:%d\n",
+ 			gi2c->irq, ret);
+ 		return ret;
+ 	}
+-	/* Disable the interrupt so that the system can enter low-power mode */
+-	disable_irq(gi2c->irq);
+ 	i2c_set_adapdata(&gi2c->adap, gi2c);
+ 	gi2c->adap.dev.parent = dev;
+ 	gi2c->adap.dev.of_node = dev->of_node;
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index cfee2d9c09de36..0174ead99de6c1 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -2395,7 +2395,7 @@ static int __maybe_unused stm32f7_i2c_runtime_suspend(struct device *dev)
+ 	struct stm32f7_i2c_dev *i2c_dev = dev_get_drvdata(dev);
+ 
+ 	if (!stm32f7_i2c_is_slave_registered(i2c_dev))
+-		clk_disable_unprepare(i2c_dev->clk);
++		clk_disable(i2c_dev->clk);
+ 
+ 	return 0;
+ }
+@@ -2406,9 +2406,9 @@ static int __maybe_unused stm32f7_i2c_runtime_resume(struct device *dev)
+ 	int ret;
+ 
+ 	if (!stm32f7_i2c_is_slave_registered(i2c_dev)) {
+-		ret = clk_prepare_enable(i2c_dev->clk);
++		ret = clk_enable(i2c_dev->clk);
+ 		if (ret) {
+-			dev_err(dev, "failed to prepare_enable clock\n");
++			dev_err(dev, "failed to enable clock\n");
+ 			return ret;
+ 		}
+ 	}
+diff --git a/drivers/i2c/busses/i2c-synquacer.c b/drivers/i2c/busses/i2c-synquacer.c
+index 4eccbcd0fbfc00..bbb9062669e4b2 100644
+--- a/drivers/i2c/busses/i2c-synquacer.c
++++ b/drivers/i2c/busses/i2c-synquacer.c
+@@ -550,12 +550,13 @@ static int synquacer_i2c_probe(struct platform_device *pdev)
+ 	device_property_read_u32(&pdev->dev, "socionext,pclk-rate",
+ 				 &i2c->pclkrate);
+ 
+-	pclk = devm_clk_get_enabled(&pdev->dev, "pclk");
++	pclk = devm_clk_get_optional_enabled(&pdev->dev, "pclk");
+ 	if (IS_ERR(pclk))
+ 		return dev_err_probe(&pdev->dev, PTR_ERR(pclk),
+ 				     "failed to get and enable clock\n");
+ 
+-	i2c->pclkrate = clk_get_rate(pclk);
++	if (pclk)
++		i2c->pclkrate = clk_get_rate(pclk);
+ 
+ 	if (i2c->pclkrate < SYNQUACER_I2C_MIN_CLK_RATE ||
+ 	    i2c->pclkrate > SYNQUACER_I2C_MAX_CLK_RATE)
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index 71391b590adaeb..1d68177241a6b3 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -772,14 +772,17 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
+ 			goto out;
+ 		}
+ 
+-		xiic_fill_tx_fifo(i2c);
+-
+-		/* current message sent and there is space in the fifo */
+-		if (!xiic_tx_space(i2c) && xiic_tx_fifo_space(i2c) >= 2) {
++		if (xiic_tx_space(i2c)) {
++			xiic_fill_tx_fifo(i2c);
++		} else {
++			/* current message fully written */
+ 			dev_dbg(i2c->adap.dev.parent,
+ 				"%s end of message sent, nmsgs: %d\n",
+ 				__func__, i2c->nmsgs);
+-			if (i2c->nmsgs > 1) {
++			/* Don't move on to the next message until the TX FIFO empties,
++			 * to ensure that a NAK is not missed.
++			 */
++			if (i2c->nmsgs > 1 && (pend & XIIC_INTR_TX_EMPTY_MASK)) {
+ 				i2c->nmsgs--;
+ 				i2c->tx_msg++;
+ 				xfer_more = 1;
+@@ -790,11 +793,7 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
+ 					"%s Got TX IRQ but no more to do...\n",
+ 					__func__);
+ 			}
+-		} else if (!xiic_tx_space(i2c) && (i2c->nmsgs == 1))
+-			/* current frame is sent and is last,
+-			 * make sure to disable tx half
+-			 */
+-			xiic_irq_dis(i2c, XIIC_INTR_TX_HALF_MASK);
++		}
+ 	}
+ 
+ 	if (pend & XIIC_INTR_BNB_MASK) {
+@@ -844,23 +843,11 @@ static int xiic_bus_busy(struct xiic_i2c *i2c)
+ 	return (sr & XIIC_SR_BUS_BUSY_MASK) ? -EBUSY : 0;
+ }
+ 
+-static int xiic_busy(struct xiic_i2c *i2c)
++static int xiic_wait_not_busy(struct xiic_i2c *i2c)
+ {
+ 	int tries = 3;
+ 	int err;
+ 
+-	if (i2c->tx_msg || i2c->rx_msg)
+-		return -EBUSY;
+-
+-	/* In single master mode bus can only be busy, when in use by this
+-	 * driver. If the register indicates bus being busy for some reason we
+-	 * should ignore it, since bus will never be released and i2c will be
+-	 * stuck forever.
+-	 */
+-	if (i2c->singlemaster) {
+-		return 0;
+-	}
+-
+ 	/* for instance if previous transfer was terminated due to TX error
+ 	 * it might be that the bus is on it's way to become available
+ 	 * give it at most 3 ms to wake
+@@ -1104,9 +1091,35 @@ static int xiic_start_xfer(struct xiic_i2c *i2c, struct i2c_msg *msgs, int num)
+ 
+ 	mutex_lock(&i2c->lock);
+ 
+-	ret = xiic_busy(i2c);
+-	if (ret)
++	if (i2c->tx_msg || i2c->rx_msg) {
++		dev_err(i2c->adap.dev.parent,
++			"cannot start a transfer while busy\n");
++		ret = -EBUSY;
+ 		goto out;
++	}
++
++	/* In single master mode bus can only be busy, when in use by this
++	 * driver. If the register indicates bus being busy for some reason we
++	 * should ignore it, since bus will never be released and i2c will be
++	 * stuck forever.
++	 */
++	if (!i2c->singlemaster) {
++		ret = xiic_wait_not_busy(i2c);
++		if (ret) {
++			/* If the bus is stuck in a busy state, such as due to spurious low
++			 * pulses on the bus causing a false start condition to be detected,
++			 * then try to recover by re-initializing the controller and check
++			 * again if the bus is still busy.
++			 */
++			dev_warn(i2c->adap.dev.parent, "I2C bus busy timeout, reinitializing\n");
++			ret = xiic_reinit(i2c);
++			if (ret)
++				goto out;
++			ret = xiic_wait_not_busy(i2c);
++			if (ret)
++				goto out;
++		}
++	}
+ 
+ 	i2c->tx_msg = msgs;
+ 	i2c->rx_msg = NULL;
+@@ -1164,10 +1177,8 @@ static int xiic_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ 		return err;
+ 
+ 	err = xiic_start_xfer(i2c, msgs, num);
+-	if (err < 0) {
+-		dev_err(adap->dev.parent, "Error xiic_start_xfer\n");
++	if (err < 0)
+ 		goto out;
+-	}
+ 
+ 	err = wait_for_completion_timeout(&i2c->completion, XIIC_XFER_TIMEOUT);
+ 	mutex_lock(&i2c->lock);
+@@ -1326,8 +1337,8 @@ static int xiic_i2c_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_pm_disable:
+-	pm_runtime_set_suspended(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 7e7b15440832b3..43da2c21165537 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -915,6 +915,27 @@ int i2c_dev_irq_from_resources(const struct resource *resources,
+ 	return 0;
+ }
+ 
++/*
++ * Serialize device instantiation in case it can be instantiated explicitly
++ * and by auto-detection
++ */
++static int i2c_lock_addr(struct i2c_adapter *adap, unsigned short addr,
++			 unsigned short flags)
++{
++	if (!(flags & I2C_CLIENT_TEN) &&
++	    test_and_set_bit(addr, adap->addrs_in_instantiation))
++		return -EBUSY;
++
++	return 0;
++}
++
++static void i2c_unlock_addr(struct i2c_adapter *adap, unsigned short addr,
++			    unsigned short flags)
++{
++	if (!(flags & I2C_CLIENT_TEN))
++		clear_bit(addr, adap->addrs_in_instantiation);
++}
++
+ /**
+  * i2c_new_client_device - instantiate an i2c device
+  * @adap: the adapter managing the device
+@@ -962,6 +983,10 @@ i2c_new_client_device(struct i2c_adapter *adap, struct i2c_board_info const *inf
+ 		goto out_err_silent;
+ 	}
+ 
++	status = i2c_lock_addr(adap, client->addr, client->flags);
++	if (status)
++		goto out_err_silent;
++
+ 	/* Check for address business */
+ 	status = i2c_check_addr_busy(adap, i2c_encode_flags_to_addr(client));
+ 	if (status)
+@@ -993,6 +1018,8 @@ i2c_new_client_device(struct i2c_adapter *adap, struct i2c_board_info const *inf
+ 	dev_dbg(&adap->dev, "client [%s] registered with bus id %s\n",
+ 		client->name, dev_name(&client->dev));
+ 
++	i2c_unlock_addr(adap, client->addr, client->flags);
++
+ 	return client;
+ 
+ out_remove_swnode:
+@@ -1004,6 +1031,7 @@ i2c_new_client_device(struct i2c_adapter *adap, struct i2c_board_info const *inf
+ 	dev_err(&adap->dev,
+ 		"Failed to register i2c client %s at 0x%02x (%d)\n",
+ 		client->name, client->addr, status);
++	i2c_unlock_addr(adap, client->addr, client->flags);
+ out_err_silent:
+ 	if (need_put)
+ 		put_device(&client->dev);
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index f0362509319e0f..1e500d9e6d2ef1 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -1750,6 +1750,7 @@ static void svc_i3c_master_remove(struct platform_device *pdev)
+ {
+ 	struct svc_i3c_master *master = platform_get_drvdata(pdev);
+ 
++	cancel_work_sync(&master->hj_work);
+ 	i3c_master_unregister(&master->base);
+ 
+ 	pm_runtime_dont_use_autosuspend(&pdev->dev);
+diff --git a/drivers/iio/magnetometer/ak8975.c b/drivers/iio/magnetometer/ak8975.c
+index ccbebe5b66cde2..e78de8a971c7c4 100644
+--- a/drivers/iio/magnetometer/ak8975.c
++++ b/drivers/iio/magnetometer/ak8975.c
+@@ -692,22 +692,8 @@ static int ak8975_start_read_axis(struct ak8975_data *data,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	/* This will be executed only for non-interrupt based waiting case */
+-	if (ret & data->def->ctrl_masks[ST1_DRDY]) {
+-		ret = i2c_smbus_read_byte_data(client,
+-					       data->def->ctrl_regs[ST2]);
+-		if (ret < 0) {
+-			dev_err(&client->dev, "Error in reading ST2\n");
+-			return ret;
+-		}
+-		if (ret & (data->def->ctrl_masks[ST2_DERR] |
+-			   data->def->ctrl_masks[ST2_HOFL])) {
+-			dev_err(&client->dev, "ST2 status error 0x%x\n", ret);
+-			return -EINVAL;
+-		}
+-	}
+-
+-	return 0;
++	/* Return with zero if the data is ready. */
++	return !data->def->ctrl_regs[ST1_DRDY];
+ }
+ 
+ /* Retrieve raw flux value for one of the x, y, or z axis.  */
+@@ -734,6 +720,20 @@ static int ak8975_read_axis(struct iio_dev *indio_dev, int index, int *val)
+ 	if (ret < 0)
+ 		goto exit;
+ 
++	/* Read out ST2 to release the lock on measurement data. */
++	ret = i2c_smbus_read_byte_data(client, data->def->ctrl_regs[ST2]);
++	if (ret < 0) {
++		dev_err(&client->dev, "Error in reading ST2\n");
++		goto exit;
++	}
++
++	if (ret & (data->def->ctrl_masks[ST2_DERR] |
++		   data->def->ctrl_masks[ST2_HOFL])) {
++		dev_err(&client->dev, "ST2 status error 0x%x\n", ret);
++		ret = -EINVAL;
++		goto exit;
++	}
++
+ 	mutex_unlock(&data->lock);
+ 
+ 	pm_runtime_mark_last_busy(&data->client->dev);
+diff --git a/drivers/iio/pressure/bmp280-core.c b/drivers/iio/pressure/bmp280-core.c
+index 221fa2c552ae2c..1549f361a473fe 100644
+--- a/drivers/iio/pressure/bmp280-core.c
++++ b/drivers/iio/pressure/bmp280-core.c
+@@ -52,7 +52,6 @@
+  */
+ enum { AC1, AC2, AC3, AC4, AC5, AC6, B1, B2, MB, MC, MD };
+ 
+-
+ enum bmp380_odr {
+ 	BMP380_ODR_200HZ,
+ 	BMP380_ODR_100HZ,
+@@ -181,18 +180,19 @@ static int bmp280_read_calib(struct bmp280_data *data)
+ 	struct bmp280_calib *calib = &data->calib.bmp280;
+ 	int ret;
+ 
+-
+ 	/* Read temperature and pressure calibration values. */
+ 	ret = regmap_bulk_read(data->regmap, BMP280_REG_COMP_TEMP_START,
+-			       data->bmp280_cal_buf, sizeof(data->bmp280_cal_buf));
++			       data->bmp280_cal_buf,
++			       sizeof(data->bmp280_cal_buf));
+ 	if (ret < 0) {
+ 		dev_err(data->dev,
+-			"failed to read temperature and pressure calibration parameters\n");
++			"failed to read calibration parameters\n");
+ 		return ret;
+ 	}
+ 
+-	/* Toss the temperature and pressure calibration data into the entropy pool */
+-	add_device_randomness(data->bmp280_cal_buf, sizeof(data->bmp280_cal_buf));
++	/* Toss calibration data into the entropy pool */
++	add_device_randomness(data->bmp280_cal_buf,
++			      sizeof(data->bmp280_cal_buf));
+ 
+ 	/* Parse temperature calibration values. */
+ 	calib->T1 = le16_to_cpu(data->bmp280_cal_buf[T1]);
+@@ -223,7 +223,7 @@ static int bme280_read_calib(struct bmp280_data *data)
+ 	/* Load shared calibration params with bmp280 first */
+ 	ret = bmp280_read_calib(data);
+ 	if  (ret < 0) {
+-		dev_err(dev, "failed to read common bmp280 calibration parameters\n");
++		dev_err(dev, "failed to read calibration parameters\n");
+ 		return ret;
+ 	}
+ 
+@@ -235,14 +235,14 @@ static int bme280_read_calib(struct bmp280_data *data)
+ 	 * Humidity data is only available on BME280.
+ 	 */
+ 
+-	ret = regmap_read(data->regmap, BMP280_REG_COMP_H1, &tmp);
++	ret = regmap_read(data->regmap, BME280_REG_COMP_H1, &tmp);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to read H1 comp value\n");
+ 		return ret;
+ 	}
+ 	calib->H1 = tmp;
+ 
+-	ret = regmap_bulk_read(data->regmap, BMP280_REG_COMP_H2,
++	ret = regmap_bulk_read(data->regmap, BME280_REG_COMP_H2,
+ 			       &data->le16, sizeof(data->le16));
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to read H2 comp value\n");
+@@ -250,14 +250,14 @@ static int bme280_read_calib(struct bmp280_data *data)
+ 	}
+ 	calib->H2 = sign_extend32(le16_to_cpu(data->le16), 15);
+ 
+-	ret = regmap_read(data->regmap, BMP280_REG_COMP_H3, &tmp);
++	ret = regmap_read(data->regmap, BME280_REG_COMP_H3, &tmp);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to read H3 comp value\n");
+ 		return ret;
+ 	}
+ 	calib->H3 = tmp;
+ 
+-	ret = regmap_bulk_read(data->regmap, BMP280_REG_COMP_H4,
++	ret = regmap_bulk_read(data->regmap, BME280_REG_COMP_H4,
+ 			       &data->be16, sizeof(data->be16));
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to read H4 comp value\n");
+@@ -266,15 +266,15 @@ static int bme280_read_calib(struct bmp280_data *data)
+ 	calib->H4 = sign_extend32(((be16_to_cpu(data->be16) >> 4) & 0xff0) |
+ 				  (be16_to_cpu(data->be16) & 0xf), 11);
+ 
+-	ret = regmap_bulk_read(data->regmap, BMP280_REG_COMP_H5,
++	ret = regmap_bulk_read(data->regmap, BME280_REG_COMP_H5,
+ 			       &data->le16, sizeof(data->le16));
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to read H5 comp value\n");
+ 		return ret;
+ 	}
+-	calib->H5 = sign_extend32(FIELD_GET(BMP280_COMP_H5_MASK, le16_to_cpu(data->le16)), 11);
++	calib->H5 = sign_extend32(FIELD_GET(BME280_COMP_H5_MASK, le16_to_cpu(data->le16)), 11);
+ 
+-	ret = regmap_read(data->regmap, BMP280_REG_COMP_H6, &tmp);
++	ret = regmap_read(data->regmap, BME280_REG_COMP_H6, &tmp);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to read H6 comp value\n");
+ 		return ret;
+@@ -283,13 +283,14 @@ static int bme280_read_calib(struct bmp280_data *data)
+ 
+ 	return 0;
+ }
++
+ /*
+  * Returns humidity in percent, resolution is 0.01 percent. Output value of
+  * "47445" represents 47445/1024 = 46.333 %RH.
+  *
+  * Taken from BME280 datasheet, Section 4.2.3, "Compensation formula".
+  */
+-static u32 bmp280_compensate_humidity(struct bmp280_data *data,
++static u32 bme280_compensate_humidity(struct bmp280_data *data,
+ 				      s32 adc_humidity)
+ {
+ 	struct bmp280_calib *calib = &data->calib.bmp280;
+@@ -305,7 +306,7 @@ static u32 bmp280_compensate_humidity(struct bmp280_data *data,
+ 	var = clamp_val(var, 0, 419430400);
+ 
+ 	return var >> 12;
+-};
++}
+ 
+ /*
+  * Returns temperature in DegC, resolution is 0.01 DegC.  Output value of
+@@ -429,7 +430,7 @@ static int bmp280_read_press(struct bmp280_data *data,
+ 	return IIO_VAL_FRACTIONAL;
+ }
+ 
+-static int bmp280_read_humid(struct bmp280_data *data, int *val, int *val2)
++static int bme280_read_humid(struct bmp280_data *data, int *val, int *val2)
+ {
+ 	u32 comp_humidity;
+ 	s32 adc_humidity;
+@@ -440,7 +441,7 @@ static int bmp280_read_humid(struct bmp280_data *data, int *val, int *val2)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = regmap_bulk_read(data->regmap, BMP280_REG_HUMIDITY_MSB,
++	ret = regmap_bulk_read(data->regmap, BME280_REG_HUMIDITY_MSB,
+ 			       &data->be16, sizeof(data->be16));
+ 	if (ret < 0) {
+ 		dev_err(data->dev, "failed to read humidity\n");
+@@ -453,7 +454,7 @@ static int bmp280_read_humid(struct bmp280_data *data, int *val, int *val2)
+ 		dev_err(data->dev, "reading humidity skipped\n");
+ 		return -EIO;
+ 	}
+-	comp_humidity = bmp280_compensate_humidity(data, adc_humidity);
++	comp_humidity = bme280_compensate_humidity(data, adc_humidity);
+ 
+ 	*val = comp_humidity * 1000 / 1024;
+ 
+@@ -537,8 +538,8 @@ static int bmp280_read_raw(struct iio_dev *indio_dev,
+ 	return ret;
+ }
+ 
+-static int bmp280_write_oversampling_ratio_humid(struct bmp280_data *data,
+-					       int val)
++static int bme280_write_oversampling_ratio_humid(struct bmp280_data *data,
++						 int val)
+ {
+ 	const int *avail = data->chip_info->oversampling_humid_avail;
+ 	const int n = data->chip_info->num_oversampling_humid_avail;
+@@ -563,7 +564,7 @@ static int bmp280_write_oversampling_ratio_humid(struct bmp280_data *data,
+ }
+ 
+ static int bmp280_write_oversampling_ratio_temp(struct bmp280_data *data,
+-					       int val)
++						int val)
+ {
+ 	const int *avail = data->chip_info->oversampling_temp_avail;
+ 	const int n = data->chip_info->num_oversampling_temp_avail;
+@@ -588,7 +589,7 @@ static int bmp280_write_oversampling_ratio_temp(struct bmp280_data *data,
+ }
+ 
+ static int bmp280_write_oversampling_ratio_press(struct bmp280_data *data,
+-					       int val)
++						 int val)
+ {
+ 	const int *avail = data->chip_info->oversampling_press_avail;
+ 	const int n = data->chip_info->num_oversampling_press_avail;
+@@ -681,7 +682,7 @@ static int bmp280_write_raw(struct iio_dev *indio_dev,
+ 		mutex_lock(&data->lock);
+ 		switch (chan->type) {
+ 		case IIO_HUMIDITYRELATIVE:
+-			ret = bmp280_write_oversampling_ratio_humid(data, val);
++			ret = bme280_write_oversampling_ratio_humid(data, val);
+ 			break;
+ 		case IIO_PRESSURE:
+ 			ret = bmp280_write_oversampling_ratio_press(data, val);
+@@ -772,13 +773,12 @@ static int bmp280_chip_config(struct bmp280_data *data)
+ 	int ret;
+ 
+ 	ret = regmap_write_bits(data->regmap, BMP280_REG_CTRL_MEAS,
+-				 BMP280_OSRS_TEMP_MASK |
+-				 BMP280_OSRS_PRESS_MASK |
+-				 BMP280_MODE_MASK,
+-				 osrs | BMP280_MODE_NORMAL);
++				BMP280_OSRS_TEMP_MASK |
++				BMP280_OSRS_PRESS_MASK |
++				BMP280_MODE_MASK,
++				osrs | BMP280_MODE_NORMAL);
+ 	if (ret < 0) {
+-		dev_err(data->dev,
+-			"failed to write ctrl_meas register\n");
++		dev_err(data->dev, "failed to write ctrl_meas register\n");
+ 		return ret;
+ 	}
+ 
+@@ -786,8 +786,7 @@ static int bmp280_chip_config(struct bmp280_data *data)
+ 				 BMP280_FILTER_MASK,
+ 				 BMP280_FILTER_4X);
+ 	if (ret < 0) {
+-		dev_err(data->dev,
+-			"failed to write config register\n");
++		dev_err(data->dev, "failed to write config register\n");
+ 		return ret;
+ 	}
+ 
+@@ -833,16 +832,15 @@ EXPORT_SYMBOL_NS(bmp280_chip_info, IIO_BMP280);
+ 
+ static int bme280_chip_config(struct bmp280_data *data)
+ {
+-	u8 osrs = FIELD_PREP(BMP280_OSRS_HUMIDITY_MASK, data->oversampling_humid + 1);
++	u8 osrs = FIELD_PREP(BME280_OSRS_HUMIDITY_MASK, data->oversampling_humid + 1);
+ 	int ret;
+ 
+ 	/*
+ 	 * Oversampling of humidity must be set before oversampling of
+ 	 * temperature/pressure is set to become effective.
+ 	 */
+-	ret = regmap_update_bits(data->regmap, BMP280_REG_CTRL_HUMIDITY,
+-				  BMP280_OSRS_HUMIDITY_MASK, osrs);
+-
++	ret = regmap_update_bits(data->regmap, BME280_REG_CTRL_HUMIDITY,
++				 BME280_OSRS_HUMIDITY_MASK, osrs);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -855,7 +853,7 @@ const struct bmp280_chip_info bme280_chip_info = {
+ 	.id_reg = BMP280_REG_ID,
+ 	.chip_id = bme280_chip_ids,
+ 	.num_chip_id = ARRAY_SIZE(bme280_chip_ids),
+-	.regmap_config = &bmp280_regmap_config,
++	.regmap_config = &bme280_regmap_config,
+ 	.start_up_time = 2000,
+ 	.channels = bmp280_channels,
+ 	.num_channels = 3,
+@@ -870,12 +868,12 @@ const struct bmp280_chip_info bme280_chip_info = {
+ 
+ 	.oversampling_humid_avail = bmp280_oversampling_avail,
+ 	.num_oversampling_humid_avail = ARRAY_SIZE(bmp280_oversampling_avail),
+-	.oversampling_humid_default = BMP280_OSRS_HUMIDITY_16X - 1,
++	.oversampling_humid_default = BME280_OSRS_HUMIDITY_16X - 1,
+ 
+ 	.chip_config = bme280_chip_config,
+ 	.read_temp = bmp280_read_temp,
+ 	.read_press = bmp280_read_press,
+-	.read_humid = bmp280_read_humid,
++	.read_humid = bme280_read_humid,
+ 	.read_calib = bme280_read_calib,
+ };
+ EXPORT_SYMBOL_NS(bme280_chip_info, IIO_BMP280);
+@@ -926,8 +924,8 @@ static int bmp380_cmd(struct bmp280_data *data, u8 cmd)
+ }
+ 
+ /*
+- * Returns temperature in Celsius degrees, resolution is 0.01º C. Output value of
+- * "5123" equals 51.2º C. t_fine carries fine temperature as global value.
++ * Returns temperature in Celsius degrees, resolution is 0.01º C. Output value
++ * of "5123" equals 51.2º C. t_fine carries fine temperature as global value.
+  *
+  * Taken from datasheet, Section Appendix 9, "Compensation formula" and repo
+  * https://github.com/BoschSensortec/BMP3-Sensor-API.
+@@ -1069,7 +1067,8 @@ static int bmp380_read_calib(struct bmp280_data *data)
+ 
+ 	/* Read temperature and pressure calibration data */
+ 	ret = regmap_bulk_read(data->regmap, BMP380_REG_CALIB_TEMP_START,
+-			       data->bmp380_cal_buf, sizeof(data->bmp380_cal_buf));
++			       data->bmp380_cal_buf,
++			       sizeof(data->bmp380_cal_buf));
+ 	if (ret) {
+ 		dev_err(data->dev,
+ 			"failed to read temperature calibration parameters\n");
+@@ -1077,7 +1076,8 @@ static int bmp380_read_calib(struct bmp280_data *data)
+ 	}
+ 
+ 	/* Toss the temperature calibration data into the entropy pool */
+-	add_device_randomness(data->bmp380_cal_buf, sizeof(data->bmp380_cal_buf));
++	add_device_randomness(data->bmp380_cal_buf,
++			      sizeof(data->bmp380_cal_buf));
+ 
+ 	/* Parse calibration values */
+ 	calib->T1 = get_unaligned_le16(&data->bmp380_cal_buf[BMP380_T1]);
+@@ -1159,7 +1159,8 @@ static int bmp380_chip_config(struct bmp280_data *data)
+ 
+ 	/* Configure output data rate */
+ 	ret = regmap_update_bits_check(data->regmap, BMP380_REG_ODR,
+-				       BMP380_ODRS_MASK, data->sampling_freq, &aux);
++				       BMP380_ODRS_MASK, data->sampling_freq,
++				       &aux);
+ 	if (ret) {
+ 		dev_err(data->dev, "failed to write ODR selection register\n");
+ 		return ret;
+@@ -1178,12 +1179,13 @@ static int bmp380_chip_config(struct bmp280_data *data)
+ 
+ 	if (change) {
+ 		/*
+-		 * The configurations errors are detected on the fly during a measurement
+-		 * cycle. If the sampling frequency is too low, it's faster to reset
+-		 * the measurement loop than wait until the next measurement is due.
++		 * Configuration errors are detected on the fly during a
++		 * measurement cycle. If the sampling frequency is too low, it's
++		 * faster to reset the measurement loop than wait until the next
++		 * measurement is due.
+ 		 *
+-		 * Resets sensor measurement loop toggling between sleep and normal
+-		 * operating modes.
++		 * Resets sensor measurement loop toggling between sleep and
++		 * normal operating modes.
+ 		 */
+ 		ret = regmap_write_bits(data->regmap, BMP380_REG_POWER_CONTROL,
+ 					BMP380_MODE_MASK,
+@@ -1201,22 +1203,22 @@ static int bmp380_chip_config(struct bmp280_data *data)
+ 			return ret;
+ 		}
+ 		/*
+-		 * Waits for measurement before checking configuration error flag.
+-		 * Selected longest measure time indicated in section 3.9.1
+-		 * in the datasheet.
++		 * Waits for measurement before checking configuration error
++		 * flag. Selected longest measurement time, calculated from
++		 * the formula in datasheet section 3.9.2 with an offset of
++		 * ~+15%, as also seen in table 3.9.1.
+ 		 */
+-		msleep(80);
++		msleep(150);
+ 
+ 		/* Check config error flag */
+ 		ret = regmap_read(data->regmap, BMP380_REG_ERROR, &tmp);
+ 		if (ret) {
+-			dev_err(data->dev,
+-				"failed to read error register\n");
++			dev_err(data->dev, "failed to read error register\n");
+ 			return ret;
+ 		}
+ 		if (tmp & BMP380_ERR_CONF_MASK) {
+ 			dev_warn(data->dev,
+-				"sensor flagged configuration as incompatible\n");
++				 "sensor flagged configuration as incompatible\n");
+ 			return -EINVAL;
+ 		}
+ 	}
+@@ -1317,9 +1319,11 @@ static int bmp580_nvm_operation(struct bmp280_data *data, bool is_write)
+ 	}
+ 
+ 	/* Start NVM operation sequence */
+-	ret = regmap_write(data->regmap, BMP580_REG_CMD, BMP580_CMD_NVM_OP_SEQ_0);
++	ret = regmap_write(data->regmap, BMP580_REG_CMD,
++			   BMP580_CMD_NVM_OP_SEQ_0);
+ 	if (ret) {
+-		dev_err(data->dev, "failed to send nvm operation's first sequence\n");
++		dev_err(data->dev,
++			"failed to send nvm operation's first sequence\n");
+ 		return ret;
+ 	}
+ 	if (is_write) {
+@@ -1327,7 +1331,8 @@ static int bmp580_nvm_operation(struct bmp280_data *data, bool is_write)
+ 		ret = regmap_write(data->regmap, BMP580_REG_CMD,
+ 				   BMP580_CMD_NVM_WRITE_SEQ_1);
+ 		if (ret) {
+-			dev_err(data->dev, "failed to send nvm write sequence\n");
++			dev_err(data->dev,
++				"failed to send nvm write sequence\n");
+ 			return ret;
+ 		}
+ 		/* Datasheet says on 4.8.1.2 it takes approximately 10ms */
+@@ -1338,7 +1343,8 @@ static int bmp580_nvm_operation(struct bmp280_data *data, bool is_write)
+ 		ret = regmap_write(data->regmap, BMP580_REG_CMD,
+ 				   BMP580_CMD_NVM_READ_SEQ_1);
+ 		if (ret) {
+-			dev_err(data->dev, "failed to send nvm read sequence\n");
++			dev_err(data->dev,
++				"failed to send nvm read sequence\n");
+ 			return ret;
+ 		}
+ 		/* Datasheet says on 4.8.1.1 it takes approximately 200us */
+@@ -1501,8 +1507,8 @@ static int bmp580_nvmem_read(void *priv, unsigned int offset, void *val,
+ 		if (ret)
+ 			goto exit;
+ 
+-		ret = regmap_bulk_read(data->regmap, BMP580_REG_NVM_DATA_LSB, &data->le16,
+-				       sizeof(data->le16));
++		ret = regmap_bulk_read(data->regmap, BMP580_REG_NVM_DATA_LSB,
++				       &data->le16, sizeof(data->le16));
+ 		if (ret) {
+ 			dev_err(data->dev, "error reading nvm data regs\n");
+ 			goto exit;
+@@ -1546,7 +1552,8 @@ static int bmp580_nvmem_write(void *priv, unsigned int offset, void *val,
+ 	while (bytes >= sizeof(*buf)) {
+ 		addr = bmp580_nvmem_addrs[offset / sizeof(*buf)];
+ 
+-		ret = regmap_write(data->regmap, BMP580_REG_NVM_ADDR, BMP580_NVM_PROG_EN |
++		ret = regmap_write(data->regmap, BMP580_REG_NVM_ADDR,
++				   BMP580_NVM_PROG_EN |
+ 				   FIELD_PREP(BMP580_NVM_ROW_ADDR_MASK, addr));
+ 		if (ret) {
+ 			dev_err(data->dev, "error writing nvm address\n");
+@@ -1554,8 +1561,8 @@ static int bmp580_nvmem_write(void *priv, unsigned int offset, void *val,
+ 		}
+ 		data->le16 = cpu_to_le16(*buf++);
+ 
+-		ret = regmap_bulk_write(data->regmap, BMP580_REG_NVM_DATA_LSB, &data->le16,
+-					sizeof(data->le16));
++		ret = regmap_bulk_write(data->regmap, BMP580_REG_NVM_DATA_LSB,
++					&data->le16, sizeof(data->le16));
+ 		if (ret) {
+ 			dev_err(data->dev, "error writing LSB NVM data regs\n");
+ 			goto exit;
+@@ -1662,7 +1669,8 @@ static int bmp580_chip_config(struct bmp280_data *data)
+ 		  BMP580_OSR_PRESS_EN;
+ 
+ 	ret = regmap_update_bits_check(data->regmap, BMP580_REG_OSR_CONFIG,
+-				       BMP580_OSR_TEMP_MASK | BMP580_OSR_PRESS_MASK |
++				       BMP580_OSR_TEMP_MASK |
++				       BMP580_OSR_PRESS_MASK |
+ 				       BMP580_OSR_PRESS_EN,
+ 				       reg_val, &aux);
+ 	if (ret) {
+@@ -1713,7 +1721,8 @@ static int bmp580_chip_config(struct bmp280_data *data)
+ 		 */
+ 		ret = regmap_read(data->regmap, BMP580_REG_EFF_OSR, &tmp);
+ 		if (ret) {
+-			dev_err(data->dev, "error reading effective OSR register\n");
++			dev_err(data->dev,
++				"error reading effective OSR register\n");
+ 			return ret;
+ 		}
+ 		if (!(tmp & BMP580_EFF_OSR_VALID_ODR)) {
+@@ -1848,7 +1857,8 @@ static int bmp180_read_calib(struct bmp280_data *data)
+ 	}
+ 
+ 	/* Toss the calibration data into the entropy pool */
+-	add_device_randomness(data->bmp180_cal_buf, sizeof(data->bmp180_cal_buf));
++	add_device_randomness(data->bmp180_cal_buf,
++			      sizeof(data->bmp180_cal_buf));
+ 
+ 	calib->AC1 = be16_to_cpu(data->bmp180_cal_buf[AC1]);
+ 	calib->AC2 = be16_to_cpu(data->bmp180_cal_buf[AC2]);
+@@ -1963,8 +1973,7 @@ static u32 bmp180_compensate_press(struct bmp280_data *data, s32 adc_press)
+ 	return p + ((x1 + x2 + 3791) >> 4);
+ }
+ 
+-static int bmp180_read_press(struct bmp280_data *data,
+-			     int *val, int *val2)
++static int bmp180_read_press(struct bmp280_data *data, int *val, int *val2)
+ {
+ 	u32 comp_press;
+ 	s32 adc_press;
+@@ -2241,6 +2250,7 @@ static int bmp280_runtime_resume(struct device *dev)
+ 	ret = regulator_bulk_enable(BMP280_NUM_SUPPLIES, data->supplies);
+ 	if (ret)
+ 		return ret;
++
+ 	usleep_range(data->start_up_time, data->start_up_time + 100);
+ 	return data->chip_info->chip_config(data);
+ }
+diff --git a/drivers/iio/pressure/bmp280-regmap.c b/drivers/iio/pressure/bmp280-regmap.c
+index 3ee56720428c5d..d27d68edd90656 100644
+--- a/drivers/iio/pressure/bmp280-regmap.c
++++ b/drivers/iio/pressure/bmp280-regmap.c
+@@ -41,11 +41,23 @@ const struct regmap_config bmp180_regmap_config = {
+ };
+ EXPORT_SYMBOL_NS(bmp180_regmap_config, IIO_BMP280);
+ 
++static bool bme280_is_writeable_reg(struct device *dev, unsigned int reg)
++{
++	switch (reg) {
++	case BMP280_REG_CONFIG:
++	case BME280_REG_CTRL_HUMIDITY:
++	case BMP280_REG_CTRL_MEAS:
++	case BMP280_REG_RESET:
++		return true;
++	default:
++		return false;
++	}
++}
++
+ static bool bmp280_is_writeable_reg(struct device *dev, unsigned int reg)
+ {
+ 	switch (reg) {
+ 	case BMP280_REG_CONFIG:
+-	case BMP280_REG_CTRL_HUMIDITY:
+ 	case BMP280_REG_CTRL_MEAS:
+ 	case BMP280_REG_RESET:
+ 		return true;
+@@ -57,8 +69,6 @@ static bool bmp280_is_writeable_reg(struct device *dev, unsigned int reg)
+ static bool bmp280_is_volatile_reg(struct device *dev, unsigned int reg)
+ {
+ 	switch (reg) {
+-	case BMP280_REG_HUMIDITY_LSB:
+-	case BMP280_REG_HUMIDITY_MSB:
+ 	case BMP280_REG_TEMP_XLSB:
+ 	case BMP280_REG_TEMP_LSB:
+ 	case BMP280_REG_TEMP_MSB:
+@@ -72,6 +82,23 @@ static bool bmp280_is_volatile_reg(struct device *dev, unsigned int reg)
+ 	}
+ }
+ 
++static bool bme280_is_volatile_reg(struct device *dev, unsigned int reg)
++{
++	switch (reg) {
++	case BME280_REG_HUMIDITY_LSB:
++	case BME280_REG_HUMIDITY_MSB:
++	case BMP280_REG_TEMP_XLSB:
++	case BMP280_REG_TEMP_LSB:
++	case BMP280_REG_TEMP_MSB:
++	case BMP280_REG_PRESS_XLSB:
++	case BMP280_REG_PRESS_LSB:
++	case BMP280_REG_PRESS_MSB:
++	case BMP280_REG_STATUS:
++		return true;
++	default:
++		return false;
++	}
++}
+ static bool bmp380_is_writeable_reg(struct device *dev, unsigned int reg)
+ {
+ 	switch (reg) {
+@@ -167,7 +194,7 @@ const struct regmap_config bmp280_regmap_config = {
+ 	.reg_bits = 8,
+ 	.val_bits = 8,
+ 
+-	.max_register = BMP280_REG_HUMIDITY_LSB,
++	.max_register = BMP280_REG_TEMP_XLSB,
+ 	.cache_type = REGCACHE_RBTREE,
+ 
+ 	.writeable_reg = bmp280_is_writeable_reg,
+@@ -175,6 +202,18 @@ const struct regmap_config bmp280_regmap_config = {
+ };
+ EXPORT_SYMBOL_NS(bmp280_regmap_config, IIO_BMP280);
+ 
++const struct regmap_config bme280_regmap_config = {
++	.reg_bits = 8,
++	.val_bits = 8,
++
++	.max_register = BME280_REG_HUMIDITY_LSB,
++	.cache_type = REGCACHE_RBTREE,
++
++	.writeable_reg = bme280_is_writeable_reg,
++	.volatile_reg = bme280_is_volatile_reg,
++};
++EXPORT_SYMBOL_NS(bme280_regmap_config, IIO_BMP280);
++
+ const struct regmap_config bmp380_regmap_config = {
+ 	.reg_bits = 8,
+ 	.val_bits = 8,
+diff --git a/drivers/iio/pressure/bmp280-spi.c b/drivers/iio/pressure/bmp280-spi.c
+index 4e19ea0b4d3984..62b4e58104cf98 100644
+--- a/drivers/iio/pressure/bmp280-spi.c
++++ b/drivers/iio/pressure/bmp280-spi.c
+@@ -13,7 +13,7 @@
+ #include "bmp280.h"
+ 
+ static int bmp280_regmap_spi_write(void *context, const void *data,
+-                                   size_t count)
++				   size_t count)
+ {
+ 	struct spi_device *spi = to_spi_device(context);
+ 	u8 buf[2];
+@@ -29,7 +29,7 @@ static int bmp280_regmap_spi_write(void *context, const void *data,
+ }
+ 
+ static int bmp280_regmap_spi_read(void *context, const void *reg,
+-                                  size_t reg_size, void *val, size_t val_size)
++				  size_t reg_size, void *val, size_t val_size)
+ {
+ 	struct spi_device *spi = to_spi_device(context);
+ 
+diff --git a/drivers/iio/pressure/bmp280.h b/drivers/iio/pressure/bmp280.h
+index 5812a344ed8e88..a651cb80099318 100644
+--- a/drivers/iio/pressure/bmp280.h
++++ b/drivers/iio/pressure/bmp280.h
+@@ -192,8 +192,6 @@
+ #define BMP380_PRESS_SKIPPED		0x800000
+ 
+ /* BMP280 specific registers */
+-#define BMP280_REG_HUMIDITY_LSB		0xFE
+-#define BMP280_REG_HUMIDITY_MSB		0xFD
+ #define BMP280_REG_TEMP_XLSB		0xFC
+ #define BMP280_REG_TEMP_LSB		0xFB
+ #define BMP280_REG_TEMP_MSB		0xFA
+@@ -207,15 +205,6 @@
+ #define BMP280_REG_CONFIG		0xF5
+ #define BMP280_REG_CTRL_MEAS		0xF4
+ #define BMP280_REG_STATUS		0xF3
+-#define BMP280_REG_CTRL_HUMIDITY	0xF2
+-
+-/* Due to non linear mapping, and data sizes we can't do a bulk read */
+-#define BMP280_REG_COMP_H1		0xA1
+-#define BMP280_REG_COMP_H2		0xE1
+-#define BMP280_REG_COMP_H3		0xE3
+-#define BMP280_REG_COMP_H4		0xE4
+-#define BMP280_REG_COMP_H5		0xE5
+-#define BMP280_REG_COMP_H6		0xE7
+ 
+ #define BMP280_REG_COMP_TEMP_START	0x88
+ #define BMP280_COMP_TEMP_REG_COUNT	6
+@@ -223,8 +212,6 @@
+ #define BMP280_REG_COMP_PRESS_START	0x8E
+ #define BMP280_COMP_PRESS_REG_COUNT	18
+ 
+-#define BMP280_COMP_H5_MASK		GENMASK(15, 4)
+-
+ #define BMP280_CONTIGUOUS_CALIB_REGS	(BMP280_COMP_TEMP_REG_COUNT + \
+ 					 BMP280_COMP_PRESS_REG_COUNT)
+ 
+@@ -235,14 +222,6 @@
+ #define BMP280_FILTER_8X		3
+ #define BMP280_FILTER_16X		4
+ 
+-#define BMP280_OSRS_HUMIDITY_MASK	GENMASK(2, 0)
+-#define BMP280_OSRS_HUMIDITY_SKIP	0
+-#define BMP280_OSRS_HUMIDITY_1X		1
+-#define BMP280_OSRS_HUMIDITY_2X		2
+-#define BMP280_OSRS_HUMIDITY_4X		3
+-#define BMP280_OSRS_HUMIDITY_8X		4
+-#define BMP280_OSRS_HUMIDITY_16X	5
+-
+ #define BMP280_OSRS_TEMP_MASK		GENMASK(7, 5)
+ #define BMP280_OSRS_TEMP_SKIP		0
+ #define BMP280_OSRS_TEMP_1X		1
+@@ -264,6 +243,30 @@
+ #define BMP280_MODE_FORCED		1
+ #define BMP280_MODE_NORMAL		3
+ 
++/* BME280 specific registers */
++#define BME280_REG_HUMIDITY_LSB		0xFE
++#define BME280_REG_HUMIDITY_MSB		0xFD
++
++#define BME280_REG_CTRL_HUMIDITY	0xF2
++
++/* Due to non linear mapping, and data sizes we can't do a bulk read */
++#define BME280_REG_COMP_H1		0xA1
++#define BME280_REG_COMP_H2		0xE1
++#define BME280_REG_COMP_H3		0xE3
++#define BME280_REG_COMP_H4		0xE4
++#define BME280_REG_COMP_H5		0xE5
++#define BME280_REG_COMP_H6		0xE7
++
++#define BME280_COMP_H5_MASK		GENMASK(15, 4)
++
++#define BME280_OSRS_HUMIDITY_MASK	GENMASK(2, 0)
++#define BME280_OSRS_HUMIDITY_SKIP	0
++#define BME280_OSRS_HUMIDITY_1X		1
++#define BME280_OSRS_HUMIDITY_2X		2
++#define BME280_OSRS_HUMIDITY_4X		3
++#define BME280_OSRS_HUMIDITY_8X		4
++#define BME280_OSRS_HUMIDITY_16X	5
++
+ /* BMP180 specific registers */
+ #define BMP180_REG_OUT_XLSB		0xF8
+ #define BMP180_REG_OUT_LSB		0xF7
+@@ -467,6 +470,7 @@ extern const struct bmp280_chip_info bmp580_chip_info;
+ /* Regmap configurations */
+ extern const struct regmap_config bmp180_regmap_config;
+ extern const struct regmap_config bmp280_regmap_config;
++extern const struct regmap_config bme280_regmap_config;
+ extern const struct regmap_config bmp380_regmap_config;
+ extern const struct regmap_config bmp580_regmap_config;
+ 
+diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
+index 2a411357640e6c..f2a2ce800443a8 100644
+--- a/drivers/infiniband/hw/mana/main.c
++++ b/drivers/infiniband/hw/mana/main.c
+@@ -383,7 +383,7 @@ static int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem
+ 
+ 	create_req->length = umem->length;
+ 	create_req->offset_in_page = ib_umem_dma_offset(umem, page_sz);
+-	create_req->gdma_page_type = order_base_2(page_sz) - PAGE_SHIFT;
++	create_req->gdma_page_type = order_base_2(page_sz) - MANA_PAGE_SHIFT;
+ 	create_req->page_count = num_pages_total;
+ 
+ 	ibdev_dbg(&dev->ib_dev, "size_dma_region %lu num_pages_total %lu\n",
+@@ -511,13 +511,13 @@ int mana_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
+ 	      PAGE_SHIFT;
+ 	prot = pgprot_writecombine(vma->vm_page_prot);
+ 
+-	ret = rdma_user_mmap_io(ibcontext, vma, pfn, gc->db_page_size, prot,
++	ret = rdma_user_mmap_io(ibcontext, vma, pfn, PAGE_SIZE, prot,
+ 				NULL);
+ 	if (ret)
+ 		ibdev_dbg(ibdev, "can't rdma_user_mmap_io ret %d\n", ret);
+ 	else
+-		ibdev_dbg(ibdev, "mapped I/O pfn 0x%llx page_size %u, ret %d\n",
+-			  pfn, gc->db_page_size, ret);
++		ibdev_dbg(ibdev, "mapped I/O pfn 0x%llx page_size %lu, ret %d\n",
++			  pfn, PAGE_SIZE, ret);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/input/keyboard/adp5589-keys.c b/drivers/input/keyboard/adp5589-keys.c
+index 8996e00cd63a82..922d3ab998f3a5 100644
+--- a/drivers/input/keyboard/adp5589-keys.c
++++ b/drivers/input/keyboard/adp5589-keys.c
+@@ -391,10 +391,17 @@ static int adp5589_gpio_get_value(struct gpio_chip *chip, unsigned off)
+ 	struct adp5589_kpad *kpad = gpiochip_get_data(chip);
+ 	unsigned int bank = kpad->var->bank(kpad->gpiomap[off]);
+ 	unsigned int bit = kpad->var->bit(kpad->gpiomap[off]);
++	int val;
+ 
+-	return !!(adp5589_read(kpad->client,
+-			       kpad->var->reg(ADP5589_GPI_STATUS_A) + bank) &
+-			       bit);
++	mutex_lock(&kpad->gpio_lock);
++	if (kpad->dir[bank] & bit)
++		val = kpad->dat_out[bank];
++	else
++		val = adp5589_read(kpad->client,
++				   kpad->var->reg(ADP5589_GPI_STATUS_A) + bank);
++	mutex_unlock(&kpad->gpio_lock);
++
++	return !!(val & bit);
+ }
+ 
+ static void adp5589_gpio_set_value(struct gpio_chip *chip,
+@@ -936,10 +943,9 @@ static int adp5589_keypad_add(struct adp5589_kpad *kpad, unsigned int revid)
+ 
+ static void adp5589_clear_config(void *data)
+ {
+-	struct i2c_client *client = data;
+-	struct adp5589_kpad *kpad = i2c_get_clientdata(client);
++	struct adp5589_kpad *kpad = data;
+ 
+-	adp5589_write(client, kpad->var->reg(ADP5589_GENERAL_CFG), 0);
++	adp5589_write(kpad->client, kpad->var->reg(ADP5589_GENERAL_CFG), 0);
+ }
+ 
+ static int adp5589_probe(struct i2c_client *client)
+@@ -983,7 +989,7 @@ static int adp5589_probe(struct i2c_client *client)
+ 	}
+ 
+ 	error = devm_add_action_or_reset(&client->dev, adp5589_clear_config,
+-					 client);
++					 kpad);
+ 	if (error)
+ 		return error;
+ 
+@@ -1010,8 +1016,6 @@ static int adp5589_probe(struct i2c_client *client)
+ 	if (error)
+ 		return error;
+ 
+-	i2c_set_clientdata(client, kpad);
+-
+ 	dev_info(&client->dev, "Rev.%d keypad, irq %d\n", revid, client->irq);
+ 	return 0;
+ }
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index f456bcf1890baf..a5425519fecb85 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -1000,7 +1000,8 @@ void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
+ 		used_bits[2] |=
+ 			cpu_to_le64(STRTAB_STE_2_S2VMID | STRTAB_STE_2_VTCR |
+ 				    STRTAB_STE_2_S2AA64 | STRTAB_STE_2_S2ENDI |
+-				    STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2R);
++				    STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2S |
++				    STRTAB_STE_2_S2R);
+ 		used_bits[3] |= cpu_to_le64(STRTAB_STE_3_S2TTB_MASK);
+ 	}
+ 
+@@ -1172,8 +1173,8 @@ static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
+ {
+ 	size_t size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
+ 
+-	l1_desc->l2ptr = dmam_alloc_coherent(smmu->dev, size,
+-					     &l1_desc->l2ptr_dma, GFP_KERNEL);
++	l1_desc->l2ptr = dma_alloc_coherent(smmu->dev, size,
++					    &l1_desc->l2ptr_dma, GFP_KERNEL);
+ 	if (!l1_desc->l2ptr) {
+ 		dev_warn(smmu->dev,
+ 			 "failed to allocate context descriptor table\n");
+@@ -1372,17 +1373,17 @@ static int arm_smmu_alloc_cd_tables(struct arm_smmu_master *master)
+ 		cd_table->num_l1_ents = DIV_ROUND_UP(max_contexts,
+ 						  CTXDESC_L2_ENTRIES);
+ 
+-		cd_table->l1_desc = devm_kcalloc(smmu->dev, cd_table->num_l1_ents,
+-					      sizeof(*cd_table->l1_desc),
+-					      GFP_KERNEL);
++		cd_table->l1_desc = kcalloc(cd_table->num_l1_ents,
++					    sizeof(*cd_table->l1_desc),
++					    GFP_KERNEL);
+ 		if (!cd_table->l1_desc)
+ 			return -ENOMEM;
+ 
+ 		l1size = cd_table->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
+ 	}
+ 
+-	cd_table->cdtab = dmam_alloc_coherent(smmu->dev, l1size, &cd_table->cdtab_dma,
+-					   GFP_KERNEL);
++	cd_table->cdtab = dma_alloc_coherent(smmu->dev, l1size,
++					     &cd_table->cdtab_dma, GFP_KERNEL);
+ 	if (!cd_table->cdtab) {
+ 		dev_warn(smmu->dev, "failed to allocate context descriptor\n");
+ 		ret = -ENOMEM;
+@@ -1393,7 +1394,7 @@ static int arm_smmu_alloc_cd_tables(struct arm_smmu_master *master)
+ 
+ err_free_l1:
+ 	if (cd_table->l1_desc) {
+-		devm_kfree(smmu->dev, cd_table->l1_desc);
++		kfree(cd_table->l1_desc);
+ 		cd_table->l1_desc = NULL;
+ 	}
+ 	return ret;
+@@ -1413,21 +1414,18 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_master *master)
+ 			if (!cd_table->l1_desc[i].l2ptr)
+ 				continue;
+ 
+-			dmam_free_coherent(smmu->dev, size,
+-					   cd_table->l1_desc[i].l2ptr,
+-					   cd_table->l1_desc[i].l2ptr_dma);
++			dma_free_coherent(smmu->dev, size,
++					  cd_table->l1_desc[i].l2ptr,
++					  cd_table->l1_desc[i].l2ptr_dma);
+ 		}
+-		devm_kfree(smmu->dev, cd_table->l1_desc);
+-		cd_table->l1_desc = NULL;
++		kfree(cd_table->l1_desc);
+ 
+ 		l1size = cd_table->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
+ 	} else {
+ 		l1size = cd_table->num_l1_ents * (CTXDESC_CD_DWORDS << 3);
+ 	}
+ 
+-	dmam_free_coherent(smmu->dev, l1size, cd_table->cdtab, cd_table->cdtab_dma);
+-	cd_table->cdtab_dma = 0;
+-	cd_table->cdtab = NULL;
++	dma_free_coherent(smmu->dev, l1size, cd_table->cdtab, cd_table->cdtab_dma);
+ }
+ 
+ bool arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
+@@ -1629,6 +1627,7 @@ void arm_smmu_make_s2_domain_ste(struct arm_smmu_ste *target,
+ 		STRTAB_STE_2_S2ENDI |
+ #endif
+ 		STRTAB_STE_2_S2PTW |
++		(master->stall_enabled ? STRTAB_STE_2_S2S : 0) |
+ 		STRTAB_STE_2_S2R);
+ 
+ 	target->data[3] = cpu_to_le64(pgtbl_cfg->arm_lpae_s2_cfg.vttbr &
+@@ -1722,10 +1721,6 @@ static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt)
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	/* Stage-2 is always pinned at the moment */
+-	if (evt[1] & EVTQ_1_S2)
+-		return -EFAULT;
+-
+ 	if (!(evt[1] & EVTQ_1_STALL))
+ 		return -EOPNOTSUPP;
+ 
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+index 1242a086c9f948..d9c2f763eaba48 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+@@ -264,6 +264,7 @@ struct arm_smmu_ste {
+ #define STRTAB_STE_2_S2AA64		(1UL << 51)
+ #define STRTAB_STE_2_S2ENDI		(1UL << 52)
+ #define STRTAB_STE_2_S2PTW		(1UL << 54)
++#define STRTAB_STE_2_S2S		(1UL << 57)
+ #define STRTAB_STE_2_S2R		(1UL << 58)
+ 
+ #define STRTAB_STE_3_S2TTB_MASK		GENMASK_ULL(51, 4)
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index 1c8d3141cb55c0..01e157d89a1632 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -1204,9 +1204,7 @@ static void free_iommu(struct intel_iommu *iommu)
+  */
+ static inline void reclaim_free_desc(struct q_inval *qi)
+ {
+-	while (qi->desc_status[qi->free_tail] == QI_DONE ||
+-	       qi->desc_status[qi->free_tail] == QI_ABORT) {
+-		qi->desc_status[qi->free_tail] = QI_FREE;
++	while (qi->desc_status[qi->free_tail] == QI_FREE && qi->free_tail != qi->free_head) {
+ 		qi->free_tail = (qi->free_tail + 1) % QI_LENGTH;
+ 		qi->free_cnt++;
+ 	}
+@@ -1463,8 +1461,16 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
+ 		raw_spin_lock(&qi->q_lock);
+ 	}
+ 
+-	for (i = 0; i < count; i++)
+-		qi->desc_status[(index + i) % QI_LENGTH] = QI_DONE;
++	/*
++	 * The reclaim code can free descriptors from multiple submissions
++	 * starting from the tail of the queue. When count == 0, the
++	 * status of the standalone wait descriptor at the tail of the queue
++	 * must be set to QI_FREE to allow the reclaim code to proceed.
++	 * It is also possible that descriptors from one of the previous
++	 * submissions has to be reclaimed by a subsequent submission.
++	 */
++	for (i = 0; i <= count; i++)
++		qi->desc_status[(index + i) % QI_LENGTH] = QI_FREE;
+ 
+ 	reclaim_free_desc(qi);
+ 	raw_spin_unlock_irqrestore(&qi->q_lock, flags);
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index e9bea0305c2681..eed67326976d3d 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -1462,10 +1462,10 @@ static int iommu_init_domains(struct intel_iommu *iommu)
+ 	 * entry for first-level or pass-through translation modes should
+ 	 * be programmed with a domain id different from those used for
+ 	 * second-level or nested translation. We reserve a domain id for
+-	 * this purpose.
++	 * this purpose. This domain id is also used for identity domain
++	 * in legacy mode.
+ 	 */
+-	if (sm_supported(iommu))
+-		set_bit(FLPT_DEFAULT_DID, iommu->domain_ids);
++	set_bit(FLPT_DEFAULT_DID, iommu->domain_ids);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index aabcdf75658178..57dd3530f68d4a 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -261,9 +261,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
+ 	else
+ 		iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
+ 
+-	/* Device IOTLB doesn't need to be flushed in caching mode. */
+-	if (!cap_caching_mode(iommu->cap))
+-		devtlb_invalidation_with_pasid(iommu, dev, pasid);
++	devtlb_invalidation_with_pasid(iommu, dev, pasid);
+ }
+ 
+ /*
+@@ -490,9 +488,7 @@ int intel_pasid_setup_dirty_tracking(struct intel_iommu *iommu,
+ 
+ 	iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
+ 
+-	/* Device IOTLB doesn't need to be flushed in caching mode. */
+-	if (!cap_caching_mode(iommu->cap))
+-		devtlb_invalidation_with_pasid(iommu, dev, pasid);
++	devtlb_invalidation_with_pasid(iommu, dev, pasid);
+ 
+ 	return 0;
+ }
+@@ -569,9 +565,7 @@ void intel_pasid_setup_page_snoop_control(struct intel_iommu *iommu,
+ 	pasid_cache_invalidation_with_pasid(iommu, did, pasid);
+ 	qi_flush_piotlb(iommu, did, pasid, 0, -1, 0);
+ 
+-	/* Device IOTLB doesn't need to be flushed in caching mode. */
+-	if (!cap_caching_mode(iommu->cap))
+-		devtlb_invalidation_with_pasid(iommu, dev, pasid);
++	devtlb_invalidation_with_pasid(iommu, dev, pasid);
+ }
+ 
+ /**
+diff --git a/drivers/mailbox/Kconfig b/drivers/mailbox/Kconfig
+index 3b8842c4a34015..8d4d1cbb1d4ca6 100644
+--- a/drivers/mailbox/Kconfig
++++ b/drivers/mailbox/Kconfig
+@@ -25,6 +25,7 @@ config ARM_MHU_V2
+ 
+ config ARM_MHU_V3
+ 	tristate "ARM MHUv3 Mailbox"
++	depends on ARM64 || COMPILE_TEST
+ 	depends on HAS_IOMEM || COMPILE_TEST
+ 	depends on OF
+ 	help
+diff --git a/drivers/mailbox/bcm2835-mailbox.c b/drivers/mailbox/bcm2835-mailbox.c
+index fbfd0202047c37..ea12fb8d24015c 100644
+--- a/drivers/mailbox/bcm2835-mailbox.c
++++ b/drivers/mailbox/bcm2835-mailbox.c
+@@ -145,7 +145,8 @@ static int bcm2835_mbox_probe(struct platform_device *pdev)
+ 	spin_lock_init(&mbox->lock);
+ 
+ 	ret = devm_request_irq(dev, irq_of_parse_and_map(dev->of_node, 0),
+-			       bcm2835_mbox_irq, 0, dev_name(dev), mbox);
++			       bcm2835_mbox_irq, IRQF_NO_SUSPEND, dev_name(dev),
++			       mbox);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register a mailbox IRQ handler: %d\n",
+ 			ret);
+diff --git a/drivers/mailbox/rockchip-mailbox.c b/drivers/mailbox/rockchip-mailbox.c
+index 8ffad059e8984e..4d966cb2ed0367 100644
+--- a/drivers/mailbox/rockchip-mailbox.c
++++ b/drivers/mailbox/rockchip-mailbox.c
+@@ -159,7 +159,7 @@ static const struct of_device_id rockchip_mbox_of_match[] = {
+ 	{ .compatible = "rockchip,rk3368-mailbox", .data = &rk3368_drv_data},
+ 	{ },
+ };
+-MODULE_DEVICE_TABLE(of, rockchp_mbox_of_match);
++MODULE_DEVICE_TABLE(of, rockchip_mbox_of_match);
+ 
+ static int rockchip_mbox_probe(struct platform_device *pdev)
+ {
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index 358f1fe429751d..ceb713e4940207 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -2602,13 +2602,6 @@ int vb2_core_queue_init(struct vb2_queue *q)
+ 	if (WARN_ON(q->supports_requests && q->min_queued_buffers))
+ 		return -EINVAL;
+ 
+-	/*
+-	 * The minimum requirement is 2: one buffer is used
+-	 * by the hardware while the other is being processed by userspace.
+-	 */
+-	if (q->min_reqbufs_allocation < 2)
+-		q->min_reqbufs_allocation = 2;
+-
+ 	/*
+ 	 * If the driver needs 'min_queued_buffers' in the queue before
+ 	 * calling start_streaming() then the minimum requirement is
+diff --git a/drivers/media/i2c/ar0521.c b/drivers/media/i2c/ar0521.c
+index 09331cf95c62d0..d557f3b3de3d33 100644
+--- a/drivers/media/i2c/ar0521.c
++++ b/drivers/media/i2c/ar0521.c
+@@ -844,7 +844,8 @@ static int ar0521_power_off(struct device *dev)
+ 	clk_disable_unprepare(sensor->extclk);
+ 
+ 	if (sensor->reset_gpio)
+-		gpiod_set_value(sensor->reset_gpio, 1); /* assert RESET signal */
++		/* assert RESET signal */
++		gpiod_set_value_cansleep(sensor->reset_gpio, 1);
+ 
+ 	for (i = ARRAY_SIZE(ar0521_supply_names) - 1; i >= 0; i--) {
+ 		if (sensor->supplies[i])
+@@ -878,7 +879,7 @@ static int ar0521_power_on(struct device *dev)
+ 
+ 	if (sensor->reset_gpio)
+ 		/* deassert RESET signal */
+-		gpiod_set_value(sensor->reset_gpio, 0);
++		gpiod_set_value_cansleep(sensor->reset_gpio, 0);
+ 	usleep_range(4500, 5000); /* min 45000 clocks */
+ 
+ 	for (cnt = 0; cnt < ARRAY_SIZE(initial_regs); cnt++) {
+diff --git a/drivers/media/i2c/imx335.c b/drivers/media/i2c/imx335.c
+index 990d74214cc2e4..54a1de53d49730 100644
+--- a/drivers/media/i2c/imx335.c
++++ b/drivers/media/i2c/imx335.c
+@@ -997,7 +997,7 @@ static int imx335_parse_hw_config(struct imx335 *imx335)
+ 
+ 	/* Request optional reset pin */
+ 	imx335->reset_gpio = devm_gpiod_get_optional(imx335->dev, "reset",
+-						     GPIOD_OUT_LOW);
++						     GPIOD_OUT_HIGH);
+ 	if (IS_ERR(imx335->reset_gpio)) {
+ 		dev_err(imx335->dev, "failed to get reset gpio %ld\n",
+ 			PTR_ERR(imx335->reset_gpio));
+@@ -1110,8 +1110,7 @@ static int imx335_power_on(struct device *dev)
+ 
+ 	usleep_range(500, 550); /* Tlow */
+ 
+-	/* Set XCLR */
+-	gpiod_set_value_cansleep(imx335->reset_gpio, 1);
++	gpiod_set_value_cansleep(imx335->reset_gpio, 0);
+ 
+ 	ret = clk_prepare_enable(imx335->inclk);
+ 	if (ret) {
+@@ -1124,7 +1123,7 @@ static int imx335_power_on(struct device *dev)
+ 	return 0;
+ 
+ error_reset:
+-	gpiod_set_value_cansleep(imx335->reset_gpio, 0);
++	gpiod_set_value_cansleep(imx335->reset_gpio, 1);
+ 	regulator_bulk_disable(ARRAY_SIZE(imx335_supply_name), imx335->supplies);
+ 
+ 	return ret;
+@@ -1141,7 +1140,7 @@ static int imx335_power_off(struct device *dev)
+ 	struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ 	struct imx335 *imx335 = to_imx335(sd);
+ 
+-	gpiod_set_value_cansleep(imx335->reset_gpio, 0);
++	gpiod_set_value_cansleep(imx335->reset_gpio, 1);
+ 	clk_disable_unprepare(imx335->inclk);
+ 	regulator_bulk_disable(ARRAY_SIZE(imx335_supply_name), imx335->supplies);
+ 
+diff --git a/drivers/media/i2c/ov5675.c b/drivers/media/i2c/ov5675.c
+index 3641911bc73f68..5b5127f8953ff4 100644
+--- a/drivers/media/i2c/ov5675.c
++++ b/drivers/media/i2c/ov5675.c
+@@ -972,12 +972,10 @@ static int ov5675_set_stream(struct v4l2_subdev *sd, int enable)
+ 
+ static int ov5675_power_off(struct device *dev)
+ {
+-	/* 512 xvclk cycles after the last SCCB transation or MIPI frame end */
+-	u32 delay_us = DIV_ROUND_UP(512, OV5675_XVCLK_19_2 / 1000 / 1000);
+ 	struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ 	struct ov5675 *ov5675 = to_ov5675(sd);
+ 
+-	usleep_range(delay_us, delay_us * 2);
++	usleep_range(90, 100);
+ 
+ 	clk_disable_unprepare(ov5675->xvclk);
+ 	gpiod_set_value_cansleep(ov5675->reset_gpio, 1);
+@@ -988,7 +986,6 @@ static int ov5675_power_off(struct device *dev)
+ 
+ static int ov5675_power_on(struct device *dev)
+ {
+-	u32 delay_us = DIV_ROUND_UP(8192, OV5675_XVCLK_19_2 / 1000 / 1000);
+ 	struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ 	struct ov5675 *ov5675 = to_ov5675(sd);
+ 	int ret;
+@@ -1014,8 +1011,11 @@ static int ov5675_power_on(struct device *dev)
+ 
+ 	gpiod_set_value_cansleep(ov5675->reset_gpio, 0);
+ 
+-	/* 8192 xvclk cycles prior to the first SCCB transation */
+-	usleep_range(delay_us, delay_us * 2);
++	/* Worst case quiesence gap is 1.365 milliseconds @ 6MHz XVCLK
++	 * Add an additional threshold grace period to ensure reset
++	 * completion before initiating our first I2C transaction.
++	 */
++	usleep_range(1500, 1600);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/qcom/camss/camss-video.c b/drivers/media/platform/qcom/camss/camss-video.c
+index 54cd82f741154c..19202ce9870061 100644
+--- a/drivers/media/platform/qcom/camss/camss-video.c
++++ b/drivers/media/platform/qcom/camss/camss-video.c
+@@ -557,12 +557,6 @@ static void video_stop_streaming(struct vb2_queue *q)
+ 
+ 		ret = v4l2_subdev_call(subdev, video, s_stream, 0);
+ 
+-		if (entity->use_count > 1) {
+-			/* Don't stop if other instances of the pipeline are still running */
+-			dev_dbg(video->camss->dev, "Video pipeline still used, don't stop streaming.\n");
+-			return;
+-		}
+-
+ 		if (ret) {
+ 			dev_err(video->camss->dev, "Video pipeline stop failed: %d\n", ret);
+ 			return;
+diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
+index c90a28fa8891f1..f80c895b6b9572 100644
+--- a/drivers/media/platform/qcom/camss/camss.c
++++ b/drivers/media/platform/qcom/camss/camss.c
+@@ -1985,6 +1985,8 @@ static int camss_probe(struct platform_device *pdev)
+ 
+ 	v4l2_async_nf_init(&camss->notifier, &camss->v4l2_dev);
+ 
++	pm_runtime_enable(dev);
++
+ 	num_subdevs = camss_of_parse_ports(camss);
+ 	if (num_subdevs < 0) {
+ 		ret = num_subdevs;
+@@ -2021,8 +2023,6 @@ static int camss_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	pm_runtime_enable(dev);
+-
+ 	return 0;
+ 
+ err_register_subdevs:
+@@ -2030,6 +2030,7 @@ static int camss_probe(struct platform_device *pdev)
+ err_v4l2_device_unregister:
+ 	v4l2_device_unregister(&camss->v4l2_dev);
+ 	v4l2_async_nf_cleanup(&camss->notifier);
++	pm_runtime_disable(dev);
+ err_genpd_cleanup:
+ 	camss_genpd_cleanup(camss);
+ 
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index ce206b70975418..fd6cb85e1b1e5c 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -426,6 +426,7 @@ static void venus_remove(struct platform_device *pdev)
+ 	struct device *dev = core->dev;
+ 	int ret;
+ 
++	cancel_delayed_work_sync(&core->work);
+ 	ret = pm_runtime_get_sync(dev);
+ 	WARN_ON(ret < 0);
+ 
+diff --git a/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.c b/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.c
+index 097a3a08ef7d1f..dbb26c7b2f8d7a 100644
+--- a/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.c
++++ b/drivers/media/platform/sunxi/sun4i-csi/sun4i_csi.c
+@@ -39,6 +39,10 @@ static const struct media_entity_operations sun4i_csi_video_entity_ops = {
+ 	.link_validate = v4l2_subdev_link_validate,
+ };
+ 
++static const struct media_entity_operations sun4i_csi_subdev_entity_ops = {
++	.link_validate = v4l2_subdev_link_validate,
++};
++
+ static int sun4i_csi_notify_bound(struct v4l2_async_notifier *notifier,
+ 				  struct v4l2_subdev *subdev,
+ 				  struct v4l2_async_connection *asd)
+@@ -214,6 +218,7 @@ static int sun4i_csi_probe(struct platform_device *pdev)
+ 	subdev->internal_ops = &sun4i_csi_subdev_internal_ops;
+ 	subdev->flags = V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS;
+ 	subdev->entity.function = MEDIA_ENT_F_VID_IF_BRIDGE;
++	subdev->entity.ops = &sun4i_csi_subdev_entity_ops;
+ 	subdev->owner = THIS_MODULE;
+ 	snprintf(subdev->name, sizeof(subdev->name), "sun4i-csi-0");
+ 	v4l2_set_subdevdata(subdev, csi);
+diff --git a/drivers/memory/tegra/tegra186-emc.c b/drivers/memory/tegra/tegra186-emc.c
+index 57d9ae12fcfe1a..33d67d25171940 100644
+--- a/drivers/memory/tegra/tegra186-emc.c
++++ b/drivers/memory/tegra/tegra186-emc.c
+@@ -35,11 +35,6 @@ struct tegra186_emc {
+ 	struct icc_provider provider;
+ };
+ 
+-static inline struct tegra186_emc *to_tegra186_emc(struct icc_provider *provider)
+-{
+-	return container_of(provider, struct tegra186_emc, provider);
+-}
+-
+ /*
+  * debugfs interface
+  *
+diff --git a/drivers/net/can/dev/netlink.c b/drivers/net/can/dev/netlink.c
+index dfdc039d92a6c1..01aacdcda26066 100644
+--- a/drivers/net/can/dev/netlink.c
++++ b/drivers/net/can/dev/netlink.c
+@@ -65,15 +65,6 @@ static int can_validate(struct nlattr *tb[], struct nlattr *data[],
+ 	if (!data)
+ 		return 0;
+ 
+-	if (data[IFLA_CAN_BITTIMING]) {
+-		struct can_bittiming bt;
+-
+-		memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
+-		err = can_validate_bittiming(&bt, extack);
+-		if (err)
+-			return err;
+-	}
+-
+ 	if (data[IFLA_CAN_CTRLMODE]) {
+ 		struct can_ctrlmode *cm = nla_data(data[IFLA_CAN_CTRLMODE]);
+ 		u32 tdc_flags = cm->flags & CAN_CTRLMODE_TDC_MASK;
+@@ -114,6 +105,15 @@ static int can_validate(struct nlattr *tb[], struct nlattr *data[],
+ 		}
+ 	}
+ 
++	if (data[IFLA_CAN_BITTIMING]) {
++		struct can_bittiming bt;
++
++		memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
++		err = can_validate_bittiming(&bt, extack);
++		if (err)
++			return err;
++	}
++
+ 	if (is_can_fd) {
+ 		if (!data[IFLA_CAN_BITTIMING] || !data[IFLA_CAN_DATA_BITTIMING])
+ 			return -EOPNOTSUPP;
+@@ -195,48 +195,6 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
+ 	/* We need synchronization with dev->stop() */
+ 	ASSERT_RTNL();
+ 
+-	if (data[IFLA_CAN_BITTIMING]) {
+-		struct can_bittiming bt;
+-
+-		/* Do not allow changing bittiming while running */
+-		if (dev->flags & IFF_UP)
+-			return -EBUSY;
+-
+-		/* Calculate bittiming parameters based on
+-		 * bittiming_const if set, otherwise pass bitrate
+-		 * directly via do_set_bitrate(). Bail out if neither
+-		 * is given.
+-		 */
+-		if (!priv->bittiming_const && !priv->do_set_bittiming &&
+-		    !priv->bitrate_const)
+-			return -EOPNOTSUPP;
+-
+-		memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
+-		err = can_get_bittiming(dev, &bt,
+-					priv->bittiming_const,
+-					priv->bitrate_const,
+-					priv->bitrate_const_cnt,
+-					extack);
+-		if (err)
+-			return err;
+-
+-		if (priv->bitrate_max && bt.bitrate > priv->bitrate_max) {
+-			NL_SET_ERR_MSG_FMT(extack,
+-					   "arbitration bitrate %u bps surpasses transceiver capabilities of %u bps",
+-					   bt.bitrate, priv->bitrate_max);
+-			return -EINVAL;
+-		}
+-
+-		memcpy(&priv->bittiming, &bt, sizeof(bt));
+-
+-		if (priv->do_set_bittiming) {
+-			/* Finally, set the bit-timing registers */
+-			err = priv->do_set_bittiming(dev);
+-			if (err)
+-				return err;
+-		}
+-	}
+-
+ 	if (data[IFLA_CAN_CTRLMODE]) {
+ 		struct can_ctrlmode *cm;
+ 		u32 ctrlstatic;
+@@ -284,6 +242,48 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
+ 			priv->ctrlmode &= cm->flags | ~CAN_CTRLMODE_TDC_MASK;
+ 	}
+ 
++	if (data[IFLA_CAN_BITTIMING]) {
++		struct can_bittiming bt;
++
++		/* Do not allow changing bittiming while running */
++		if (dev->flags & IFF_UP)
++			return -EBUSY;
++
++		/* Calculate bittiming parameters based on
++		 * bittiming_const if set, otherwise pass bitrate
++		 * directly via do_set_bitrate(). Bail out if neither
++		 * is given.
++		 */
++		if (!priv->bittiming_const && !priv->do_set_bittiming &&
++		    !priv->bitrate_const)
++			return -EOPNOTSUPP;
++
++		memcpy(&bt, nla_data(data[IFLA_CAN_BITTIMING]), sizeof(bt));
++		err = can_get_bittiming(dev, &bt,
++					priv->bittiming_const,
++					priv->bitrate_const,
++					priv->bitrate_const_cnt,
++					extack);
++		if (err)
++			return err;
++
++		if (priv->bitrate_max && bt.bitrate > priv->bitrate_max) {
++			NL_SET_ERR_MSG_FMT(extack,
++					   "arbitration bitrate %u bps surpasses transceiver capabilities of %u bps",
++					   bt.bitrate, priv->bitrate_max);
++			return -EINVAL;
++		}
++
++		memcpy(&priv->bittiming, &bt, sizeof(bt));
++
++		if (priv->do_set_bittiming) {
++			/* Finally, set the bit-timing registers */
++			err = priv->do_set_bittiming(dev);
++			if (err)
++				return err;
++		}
++	}
++
+ 	if (data[IFLA_CAN_RESTART_MS]) {
+ 		/* Do not allow changing restart delay while running */
+ 		if (dev->flags & IFF_UP)
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
+index a2606ee3b0a569..cfc413caf93f4d 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
+@@ -266,7 +266,7 @@ static void aq_ethtool_get_strings(struct net_device *ndev,
+ 		const int rx_stat_cnt = ARRAY_SIZE(aq_ethtool_queue_rx_stat_names);
+ 		const int tx_stat_cnt = ARRAY_SIZE(aq_ethtool_queue_tx_stat_names);
+ 		char tc_string[8];
+-		int tc;
++		unsigned int tc;
+ 
+ 		memset(tc_string, 0, sizeof(tc_string));
+ 		memcpy(p, aq_ethtool_stat_names,
+@@ -275,7 +275,7 @@ static void aq_ethtool_get_strings(struct net_device *ndev,
+ 
+ 		for (tc = 0; tc < cfg->tcs; tc++) {
+ 			if (cfg->is_qos)
+-				snprintf(tc_string, 8, "TC%d ", tc);
++				snprintf(tc_string, 8, "TC%u ", tc);
+ 
+ 			for (i = 0; i < cfg->vecs; i++) {
+ 				for (si = 0; si < rx_stat_cnt; si++) {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 79c09c1cdf9363..0032c4ebd7e12e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -4146,7 +4146,7 @@ static void bnxt_get_pkgver(struct net_device *dev)
+ 
+ 	if (!bnxt_get_pkginfo(dev, buf, sizeof(buf))) {
+ 		len = strlen(bp->fw_ver_str);
+-		snprintf(bp->fw_ver_str + len, FW_VER_STR_LEN - len - 1,
++		snprintf(bp->fw_ver_str + len, FW_VER_STR_LEN - len,
+ 			 "/pkg %s", buf);
+ 	}
+ }
+diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
+index a19cb2a786fd28..1cca0425d49397 100644
+--- a/drivers/net/ethernet/freescale/fec.h
++++ b/drivers/net/ethernet/freescale/fec.h
+@@ -691,10 +691,19 @@ struct fec_enet_private {
+ 	/* XDP BPF Program */
+ 	struct bpf_prog *xdp_prog;
+ 
++	struct {
++		int pps_enable;
++		u64 ns_sys, ns_phc;
++		u32 at_corr;
++		u8 at_inc_corr;
++	} ptp_saved_state;
++
+ 	u64 ethtool_stats[];
+ };
+ 
+ void fec_ptp_init(struct platform_device *pdev, int irq_idx);
++void fec_ptp_restore_state(struct fec_enet_private *fep);
++void fec_ptp_save_state(struct fec_enet_private *fep);
+ void fec_ptp_stop(struct platform_device *pdev);
+ void fec_ptp_start_cyclecounter(struct net_device *ndev);
+ int fec_ptp_set(struct net_device *ndev, struct kernel_hwtstamp_config *config,
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index fb19295529a218..8004f12352b6b5 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1077,6 +1077,8 @@ fec_restart(struct net_device *ndev)
+ 	u32 rcntl = OPT_FRAME_SIZE | 0x04;
+ 	u32 ecntl = FEC_ECR_ETHEREN;
+ 
++	fec_ptp_save_state(fep);
++
+ 	/* Whack a reset.  We should wait for this.
+ 	 * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
+ 	 * instead of reset MAC itself.
+@@ -1244,8 +1246,10 @@ fec_restart(struct net_device *ndev)
+ 	writel(ecntl, fep->hwp + FEC_ECNTRL);
+ 	fec_enet_active_rxring(ndev);
+ 
+-	if (fep->bufdesc_ex)
++	if (fep->bufdesc_ex) {
+ 		fec_ptp_start_cyclecounter(ndev);
++		fec_ptp_restore_state(fep);
++	}
+ 
+ 	/* Enable interrupts we wish to service */
+ 	if (fep->link)
+@@ -1336,6 +1340,8 @@ fec_stop(struct net_device *ndev)
+ 			netdev_err(ndev, "Graceful transmit stop did not complete!\n");
+ 	}
+ 
++	fec_ptp_save_state(fep);
++
+ 	/* Whack a reset.  We should wait for this.
+ 	 * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
+ 	 * instead of reset MAC itself.
+@@ -1366,6 +1372,9 @@ fec_stop(struct net_device *ndev)
+ 		val = readl(fep->hwp + FEC_ECNTRL);
+ 		val |= FEC_ECR_EN1588;
+ 		writel(val, fep->hwp + FEC_ECNTRL);
++
++		fec_ptp_start_cyclecounter(ndev);
++		fec_ptp_restore_state(fep);
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index 2e4f3e1782a252..5e8fac50f945d4 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -770,6 +770,56 @@ void fec_ptp_init(struct platform_device *pdev, int irq_idx)
+ 	schedule_delayed_work(&fep->time_keep, HZ);
+ }
+ 
++void fec_ptp_save_state(struct fec_enet_private *fep)
++{
++	unsigned long flags;
++	u32 atime_inc_corr;
++
++	spin_lock_irqsave(&fep->tmreg_lock, flags);
++
++	fep->ptp_saved_state.pps_enable = fep->pps_enable;
++
++	fep->ptp_saved_state.ns_phc = timecounter_read(&fep->tc);
++	fep->ptp_saved_state.ns_sys = ktime_get_ns();
++
++	fep->ptp_saved_state.at_corr = readl(fep->hwp + FEC_ATIME_CORR);
++	atime_inc_corr = readl(fep->hwp + FEC_ATIME_INC) & FEC_T_INC_CORR_MASK;
++	fep->ptp_saved_state.at_inc_corr = (u8)(atime_inc_corr >> FEC_T_INC_CORR_OFFSET);
++
++	spin_unlock_irqrestore(&fep->tmreg_lock, flags);
++}
++
++/* Restore PTP functionality after a reset */
++void fec_ptp_restore_state(struct fec_enet_private *fep)
++{
++	u32 atime_inc = readl(fep->hwp + FEC_ATIME_INC) & FEC_T_INC_MASK;
++	unsigned long flags;
++	u32 counter;
++	u64 ns;
++
++	spin_lock_irqsave(&fep->tmreg_lock, flags);
++
++	/* Reset turned it off, so adjust our status flag */
++	fep->pps_enable = 0;
++
++	writel(fep->ptp_saved_state.at_corr, fep->hwp + FEC_ATIME_CORR);
++	atime_inc |= ((u32)fep->ptp_saved_state.at_inc_corr) << FEC_T_INC_CORR_OFFSET;
++	writel(atime_inc, fep->hwp + FEC_ATIME_INC);
++
++	ns = ktime_get_ns() - fep->ptp_saved_state.ns_sys + fep->ptp_saved_state.ns_phc;
++	counter = ns & fep->cc.mask;
++	writel(counter, fep->hwp + FEC_ATIME);
++	timecounter_init(&fep->tc, &fep->cc, ns);
++
++	spin_unlock_irqrestore(&fep->tmreg_lock, flags);
++
++	/* Restart PPS if needed */
++	if (fep->ptp_saved_state.pps_enable) {
++		/* Re-enable PPS */
++		fec_ptp_enable_pps(fep, 1);
++	}
++}
++
+ void fec_ptp_stop(struct platform_device *pdev)
+ {
+ 	struct net_device *ndev = platform_get_drvdata(pdev);
+diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c
+index b91e7a06b97f7d..beb815e5289b12 100644
+--- a/drivers/net/ethernet/hisilicon/hip04_eth.c
++++ b/drivers/net/ethernet/hisilicon/hip04_eth.c
+@@ -947,6 +947,7 @@ static int hip04_mac_probe(struct platform_device *pdev)
+ 	priv->tx_coalesce_timer.function = tx_done;
+ 
+ 	priv->map = syscon_node_to_regmap(arg.np);
++	of_node_put(arg.np);
+ 	if (IS_ERR(priv->map)) {
+ 		dev_warn(d, "no syscon hisilicon,hip04-ppe\n");
+ 		ret = PTR_ERR(priv->map);
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+index f75668c4793519..616a2768e5048a 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+@@ -933,6 +933,7 @@ static int hns_mac_get_info(struct hns_mac_cb *mac_cb)
+ 			mac_cb->cpld_ctrl = NULL;
+ 		} else {
+ 			syscon = syscon_node_to_regmap(cpld_args.np);
++			of_node_put(cpld_args.np);
+ 			if (IS_ERR_OR_NULL(syscon)) {
+ 				dev_dbg(mac_cb->dev, "no cpld-syscon found!\n");
+ 				mac_cb->cpld_ctrl = NULL;
+diff --git a/drivers/net/ethernet/hisilicon/hns_mdio.c b/drivers/net/ethernet/hisilicon/hns_mdio.c
+index ed73707176c1af..8a047145f0c50b 100644
+--- a/drivers/net/ethernet/hisilicon/hns_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns_mdio.c
+@@ -575,6 +575,7 @@ static int hns_mdio_probe(struct platform_device *pdev)
+ 						MDIO_SC_RESET_ST;
+ 				}
+ 			}
++			of_node_put(reg_args.np);
+ 		} else {
+ 			dev_warn(&pdev->dev, "find syscon ret = %#x\n", ret);
+ 			mdio_dev->subctrl_vbase = NULL;
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 3cd161c6672be4..e23eedc791d661 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -6671,8 +6671,10 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool runtime)
+ 		if (adapter->flags2 & FLAG2_HAS_PHY_WAKEUP) {
+ 			/* enable wakeup by the PHY */
+ 			retval = e1000_init_phy_wakeup(adapter, wufc);
+-			if (retval)
+-				return retval;
++			if (retval) {
++				e_err("Failed to enable wakeup\n");
++				goto skip_phy_configurations;
++			}
+ 		} else {
+ 			/* enable wakeup by the MAC */
+ 			ew32(WUFC, wufc);
+@@ -6693,8 +6695,10 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool runtime)
+ 			 * or broadcast.
+ 			 */
+ 			retval = e1000_enable_ulp_lpt_lp(hw, !runtime);
+-			if (retval)
+-				return retval;
++			if (retval) {
++				e_err("Failed to enable ULP\n");
++				goto skip_phy_configurations;
++			}
+ 		}
+ 	}
+ 
+@@ -6726,6 +6730,7 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool runtime)
+ 		hw->phy.ops.release(hw);
+ 	}
+ 
++skip_phy_configurations:
+ 	/* Release control of h/w to f/w.  If f/w is AMT enabled, this
+ 	 * would have already happened in close and is redundant.
+ 	 */
+@@ -6968,15 +6973,13 @@ static int e1000e_pm_suspend(struct device *dev)
+ 	e1000e_pm_freeze(dev);
+ 
+ 	rc = __e1000_shutdown(pdev, false);
+-	if (rc) {
+-		e1000e_pm_thaw(dev);
+-	} else {
++	if (!rc) {
+ 		/* Introduce S0ix implementation */
+ 		if (adapter->flags2 & FLAG2_ENABLE_S0IX_FLOWS)
+ 			e1000e_s0ix_entry_flow(adapter);
+ 	}
+ 
+-	return rc;
++	return 0;
+ }
+ 
+ static int e1000e_pm_resume(struct device *dev)
+diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
+index ecf8f5d6029219..6ca13c5dcb14e7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sched.c
++++ b/drivers/net/ethernet/intel/ice/ice_sched.c
+@@ -28,9 +28,8 @@ ice_sched_add_root_node(struct ice_port_info *pi,
+ 	if (!root)
+ 		return -ENOMEM;
+ 
+-	/* coverity[suspicious_sizeof] */
+ 	root->children = devm_kcalloc(ice_hw_to_dev(hw), hw->max_children[0],
+-				      sizeof(*root), GFP_KERNEL);
++				      sizeof(*root->children), GFP_KERNEL);
+ 	if (!root->children) {
+ 		devm_kfree(ice_hw_to_dev(hw), root);
+ 		return -ENOMEM;
+@@ -186,10 +185,9 @@ ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+ 	if (!node)
+ 		return -ENOMEM;
+ 	if (hw->max_children[layer]) {
+-		/* coverity[suspicious_sizeof] */
+ 		node->children = devm_kcalloc(ice_hw_to_dev(hw),
+ 					      hw->max_children[layer],
+-					      sizeof(*node), GFP_KERNEL);
++					      sizeof(*node->children), GFP_KERNEL);
+ 		if (!node->children) {
+ 			devm_kfree(ice_hw_to_dev(hw), node);
+ 			return -ENOMEM;
+diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c
+index 0b99828043708e..57e3dabf3a8099 100644
+--- a/drivers/net/ethernet/lantiq_etop.c
++++ b/drivers/net/ethernet/lantiq_etop.c
+@@ -482,7 +482,9 @@ ltq_etop_tx(struct sk_buff *skb, struct net_device *dev)
+ 	unsigned long flags;
+ 	u32 byte_offset;
+ 
+-	len = skb->len < ETH_ZLEN ? ETH_ZLEN : skb->len;
++	if (skb_put_padto(skb, ETH_ZLEN))
++		return NETDEV_TX_OK;
++	len = skb->len;
+ 
+ 	if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) || ch->skb[ch->dma.desc]) {
+ 		netdev_err(dev, "tx ring full\n");
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+index e809f91c08fb9d..9e02e4367bec81 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+@@ -1088,7 +1088,7 @@ struct mvpp2 {
+ 	unsigned int max_port_rxqs;
+ 
+ 	/* Workqueue to gather hardware statistics */
+-	char queue_name[30];
++	char queue_name[31];
+ 	struct workqueue_struct *stats_queue;
+ 
+ 	/* Debugfs root entry */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tir.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tir.c
+index d4239e3b3c88ef..11f724ad90dbfb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tir.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tir.c
+@@ -23,6 +23,9 @@ struct mlx5e_tir_builder *mlx5e_tir_builder_alloc(bool modify)
+ 	struct mlx5e_tir_builder *builder;
+ 
+ 	builder = kvzalloc(sizeof(*builder), GFP_KERNEL);
++	if (!builder)
++		return NULL;
++
+ 	builder->modify = modify;
+ 
+ 	return builder;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+index 3d274599015be1..ca92e518be7669 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+@@ -67,7 +67,6 @@ static void mlx5e_ipsec_handle_sw_limits(struct work_struct *_work)
+ 		return;
+ 
+ 	spin_lock_bh(&x->lock);
+-	xfrm_state_check_expire(x);
+ 	if (x->km.state == XFRM_STATE_EXPIRED) {
+ 		sa_entry->attrs.drop = true;
+ 		spin_unlock_bh(&x->lock);
+@@ -75,6 +74,13 @@ static void mlx5e_ipsec_handle_sw_limits(struct work_struct *_work)
+ 		mlx5e_accel_ipsec_fs_modify(sa_entry);
+ 		return;
+ 	}
++
++	if (x->km.state != XFRM_STATE_VALID) {
++		spin_unlock_bh(&x->lock);
++		return;
++	}
++
++	xfrm_state_check_expire(x);
+ 	spin_unlock_bh(&x->lock);
+ 
+ 	queue_delayed_work(sa_entry->ipsec->wq, &dwork->dwork,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index b09e9abd39f37f..f8c7912abe0e3f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -642,7 +642,6 @@ mlx5e_sq_xmit_mpwqe(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	return;
+ 
+ err_unmap:
+-	mlx5e_dma_unmap_wqe_err(sq, 1);
+ 	sq->stats->dropped++;
+ 	dev_kfree_skb_any(skb);
+ 	mlx5e_tx_flush(sq);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
+index d0b595ba611014..432c98f2626db9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
+@@ -24,6 +24,11 @@
+ 	pci_write_config_dword((dev)->pdev, (dev)->vsc_addr + (offset), (val))
+ #define VSC_MAX_RETRIES 2048
+ 
++/* Reading VSC registers can take relatively long time.
++ * Yield the cpu every 128 registers read.
++ */
++#define VSC_GW_READ_BLOCK_COUNT 128
++
+ enum {
+ 	VSC_CTRL_OFFSET = 0x4,
+ 	VSC_COUNTER_OFFSET = 0x8,
+@@ -273,6 +278,7 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
+ {
+ 	unsigned int next_read_addr = 0;
+ 	unsigned int read_addr = 0;
++	unsigned int count = 0;
+ 
+ 	while (read_addr < length) {
+ 		if (mlx5_vsc_gw_read_fast(dev, read_addr, &next_read_addr,
+@@ -280,6 +286,10 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
+ 			return read_addr;
+ 
+ 		read_addr = next_read_addr;
++		if (++count == VSC_GW_READ_BLOCK_COUNT) {
++			cond_resched();
++			count = 0;
++		}
+ 	}
+ 	return length;
+ }
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c b/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
+index f3f5fb4204689b..70427643f777c0 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
+@@ -45,8 +45,12 @@ void sparx5_ifh_parse(u32 *ifh, struct frame_info *info)
+ 	fwd = (fwd >> 5);
+ 	info->src_port = FIELD_GET(GENMASK(7, 1), fwd);
+ 
++	/*
++	 * Bit 270-271 are occasionally unexpectedly set by the hardware,
++	 * clear bits before extracting timestamp
++	 */
+ 	info->timestamp =
+-		((u64)xtr_hdr[2] << 24) |
++		((u64)(xtr_hdr[2] & GENMASK(5, 0)) << 24) |
+ 		((u64)xtr_hdr[3] << 16) |
+ 		((u64)xtr_hdr[4] <<  8) |
+ 		((u64)xtr_hdr[5] <<  0);
+diff --git a/drivers/net/ethernet/microsoft/Kconfig b/drivers/net/ethernet/microsoft/Kconfig
+index 286f0d5697a16c..901fbffbf718ee 100644
+--- a/drivers/net/ethernet/microsoft/Kconfig
++++ b/drivers/net/ethernet/microsoft/Kconfig
+@@ -18,7 +18,7 @@ if NET_VENDOR_MICROSOFT
+ config MICROSOFT_MANA
+ 	tristate "Microsoft Azure Network Adapter (MANA) support"
+ 	depends on PCI_MSI
+-	depends on X86_64 || (ARM64 && !CPU_BIG_ENDIAN && ARM64_4K_PAGES)
++	depends on X86_64 || (ARM64 && !CPU_BIG_ENDIAN)
+ 	depends on PCI_HYPERV
+ 	select AUXILIARY_BUS
+ 	select PAGE_POOL
+diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+index 1332db9a08eb9a..e1d70d21e207f9 100644
+--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
++++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+@@ -182,7 +182,7 @@ int mana_gd_alloc_memory(struct gdma_context *gc, unsigned int length,
+ 	dma_addr_t dma_handle;
+ 	void *buf;
+ 
+-	if (length < PAGE_SIZE || !is_power_of_2(length))
++	if (length < MANA_PAGE_SIZE || !is_power_of_2(length))
+ 		return -EINVAL;
+ 
+ 	gmi->dev = gc->dev;
+@@ -717,7 +717,7 @@ EXPORT_SYMBOL_NS(mana_gd_destroy_dma_region, NET_MANA);
+ static int mana_gd_create_dma_region(struct gdma_dev *gd,
+ 				     struct gdma_mem_info *gmi)
+ {
+-	unsigned int num_page = gmi->length / PAGE_SIZE;
++	unsigned int num_page = gmi->length / MANA_PAGE_SIZE;
+ 	struct gdma_create_dma_region_req *req = NULL;
+ 	struct gdma_create_dma_region_resp resp = {};
+ 	struct gdma_context *gc = gd->gdma_context;
+@@ -727,10 +727,10 @@ static int mana_gd_create_dma_region(struct gdma_dev *gd,
+ 	int err;
+ 	int i;
+ 
+-	if (length < PAGE_SIZE || !is_power_of_2(length))
++	if (length < MANA_PAGE_SIZE || !is_power_of_2(length))
+ 		return -EINVAL;
+ 
+-	if (offset_in_page(gmi->virt_addr) != 0)
++	if (!MANA_PAGE_ALIGNED(gmi->virt_addr))
+ 		return -EINVAL;
+ 
+ 	hwc = gc->hwc.driver_data;
+@@ -751,7 +751,7 @@ static int mana_gd_create_dma_region(struct gdma_dev *gd,
+ 	req->page_addr_list_len = num_page;
+ 
+ 	for (i = 0; i < num_page; i++)
+-		req->page_addr_list[i] = gmi->dma_handle +  i * PAGE_SIZE;
++		req->page_addr_list[i] = gmi->dma_handle +  i * MANA_PAGE_SIZE;
+ 
+ 	err = mana_gd_send_request(gc, req_msg_size, req, sizeof(resp), &resp);
+ 	if (err)
+diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.c b/drivers/net/ethernet/microsoft/mana/hw_channel.c
+index 0a868679d342e5..a00f915c51881f 100644
+--- a/drivers/net/ethernet/microsoft/mana/hw_channel.c
++++ b/drivers/net/ethernet/microsoft/mana/hw_channel.c
+@@ -368,12 +368,12 @@ static int mana_hwc_create_cq(struct hw_channel_context *hwc, u16 q_depth,
+ 	int err;
+ 
+ 	eq_size = roundup_pow_of_two(GDMA_EQE_SIZE * q_depth);
+-	if (eq_size < MINIMUM_SUPPORTED_PAGE_SIZE)
+-		eq_size = MINIMUM_SUPPORTED_PAGE_SIZE;
++	if (eq_size < MANA_MIN_QSIZE)
++		eq_size = MANA_MIN_QSIZE;
+ 
+ 	cq_size = roundup_pow_of_two(GDMA_CQE_SIZE * q_depth);
+-	if (cq_size < MINIMUM_SUPPORTED_PAGE_SIZE)
+-		cq_size = MINIMUM_SUPPORTED_PAGE_SIZE;
++	if (cq_size < MANA_MIN_QSIZE)
++		cq_size = MANA_MIN_QSIZE;
+ 
+ 	hwc_cq = kzalloc(sizeof(*hwc_cq), GFP_KERNEL);
+ 	if (!hwc_cq)
+@@ -435,7 +435,7 @@ static int mana_hwc_alloc_dma_buf(struct hw_channel_context *hwc, u16 q_depth,
+ 
+ 	dma_buf->num_reqs = q_depth;
+ 
+-	buf_size = PAGE_ALIGN(q_depth * max_msg_size);
++	buf_size = MANA_PAGE_ALIGN(q_depth * max_msg_size);
+ 
+ 	gmi = &dma_buf->mem_info;
+ 	err = mana_gd_alloc_memory(gc, buf_size, gmi);
+@@ -503,8 +503,8 @@ static int mana_hwc_create_wq(struct hw_channel_context *hwc,
+ 	else
+ 		queue_size = roundup_pow_of_two(GDMA_MAX_SQE_SIZE * q_depth);
+ 
+-	if (queue_size < MINIMUM_SUPPORTED_PAGE_SIZE)
+-		queue_size = MINIMUM_SUPPORTED_PAGE_SIZE;
++	if (queue_size < MANA_MIN_QSIZE)
++		queue_size = MANA_MIN_QSIZE;
+ 
+ 	hwc_wq = kzalloc(sizeof(*hwc_wq), GFP_KERNEL);
+ 	if (!hwc_wq)
+diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
+index bb77327bfa8155..a637556dcfae8e 100644
+--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
++++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
+@@ -1901,10 +1901,10 @@ static int mana_create_txq(struct mana_port_context *apc,
+ 	 *  to prevent overflow.
+ 	 */
+ 	txq_size = MAX_SEND_BUFFERS_PER_QUEUE * 32;
+-	BUILD_BUG_ON(!PAGE_ALIGNED(txq_size));
++	BUILD_BUG_ON(!MANA_PAGE_ALIGNED(txq_size));
+ 
+ 	cq_size = MAX_SEND_BUFFERS_PER_QUEUE * COMP_ENTRY_SIZE;
+-	cq_size = PAGE_ALIGN(cq_size);
++	cq_size = MANA_PAGE_ALIGN(cq_size);
+ 
+ 	gc = gd->gdma_context;
+ 
+@@ -2203,8 +2203,8 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
+ 	if (err)
+ 		goto out;
+ 
+-	rq_size = PAGE_ALIGN(rq_size);
+-	cq_size = PAGE_ALIGN(cq_size);
++	rq_size = MANA_PAGE_ALIGN(rq_size);
++	cq_size = MANA_PAGE_ALIGN(cq_size);
+ 
+ 	/* Create RQ */
+ 	memset(&spec, 0, sizeof(spec));
+diff --git a/drivers/net/ethernet/microsoft/mana/shm_channel.c b/drivers/net/ethernet/microsoft/mana/shm_channel.c
+index 5553af9c8085a1..0f1679ebad96ba 100644
+--- a/drivers/net/ethernet/microsoft/mana/shm_channel.c
++++ b/drivers/net/ethernet/microsoft/mana/shm_channel.c
+@@ -6,6 +6,7 @@
+ #include <linux/io.h>
+ #include <linux/mm.h>
+ 
++#include <net/mana/gdma.h>
+ #include <net/mana/shm_channel.h>
+ 
+ #define PAGE_FRAME_L48_WIDTH_BYTES 6
+@@ -155,8 +156,8 @@ int mana_smc_setup_hwc(struct shm_channel *sc, bool reset_vf, u64 eq_addr,
+ 		return err;
+ 	}
+ 
+-	if (!PAGE_ALIGNED(eq_addr) || !PAGE_ALIGNED(cq_addr) ||
+-	    !PAGE_ALIGNED(rq_addr) || !PAGE_ALIGNED(sq_addr))
++	if (!MANA_PAGE_ALIGNED(eq_addr) || !MANA_PAGE_ALIGNED(cq_addr) ||
++	    !MANA_PAGE_ALIGNED(rq_addr) || !MANA_PAGE_ALIGNED(sq_addr))
+ 		return -EINVAL;
+ 
+ 	if ((eq_msix_index & VECTOR_MASK) != eq_msix_index)
+@@ -183,7 +184,7 @@ int mana_smc_setup_hwc(struct shm_channel *sc, bool reset_vf, u64 eq_addr,
+ 
+ 	/* EQ addr: low 48 bits of frame address */
+ 	shmem = (u64 *)ptr;
+-	frame_addr = PHYS_PFN(eq_addr);
++	frame_addr = MANA_PFN(eq_addr);
+ 	*shmem = frame_addr & PAGE_FRAME_L48_MASK;
+ 	all_addr_h4bits |= (frame_addr >> PAGE_FRAME_L48_WIDTH_BITS) <<
+ 		(frame_addr_seq++ * PAGE_FRAME_H4_WIDTH_BITS);
+@@ -191,7 +192,7 @@ int mana_smc_setup_hwc(struct shm_channel *sc, bool reset_vf, u64 eq_addr,
+ 
+ 	/* CQ addr: low 48 bits of frame address */
+ 	shmem = (u64 *)ptr;
+-	frame_addr = PHYS_PFN(cq_addr);
++	frame_addr = MANA_PFN(cq_addr);
+ 	*shmem = frame_addr & PAGE_FRAME_L48_MASK;
+ 	all_addr_h4bits |= (frame_addr >> PAGE_FRAME_L48_WIDTH_BITS) <<
+ 		(frame_addr_seq++ * PAGE_FRAME_H4_WIDTH_BITS);
+@@ -199,7 +200,7 @@ int mana_smc_setup_hwc(struct shm_channel *sc, bool reset_vf, u64 eq_addr,
+ 
+ 	/* RQ addr: low 48 bits of frame address */
+ 	shmem = (u64 *)ptr;
+-	frame_addr = PHYS_PFN(rq_addr);
++	frame_addr = MANA_PFN(rq_addr);
+ 	*shmem = frame_addr & PAGE_FRAME_L48_MASK;
+ 	all_addr_h4bits |= (frame_addr >> PAGE_FRAME_L48_WIDTH_BITS) <<
+ 		(frame_addr_seq++ * PAGE_FRAME_H4_WIDTH_BITS);
+@@ -207,7 +208,7 @@ int mana_smc_setup_hwc(struct shm_channel *sc, bool reset_vf, u64 eq_addr,
+ 
+ 	/* SQ addr: low 48 bits of frame address */
+ 	shmem = (u64 *)ptr;
+-	frame_addr = PHYS_PFN(sq_addr);
++	frame_addr = MANA_PFN(sq_addr);
+ 	*shmem = frame_addr & PAGE_FRAME_L48_MASK;
+ 	all_addr_h4bits |= (frame_addr >> PAGE_FRAME_L48_WIDTH_BITS) <<
+ 		(frame_addr_seq++ * PAGE_FRAME_H4_WIDTH_BITS);
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index 182ba0a8b095bc..6e0929af0f725b 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -821,14 +821,13 @@ nfp_net_prepare_vector(struct nfp_net *nn, struct nfp_net_r_vector *r_vec,
+ 
+ 	snprintf(r_vec->name, sizeof(r_vec->name),
+ 		 "%s-rxtx-%d", nfp_net_name(nn), idx);
+-	err = request_irq(r_vec->irq_vector, r_vec->handler, 0, r_vec->name,
+-			  r_vec);
++	err = request_irq(r_vec->irq_vector, r_vec->handler, IRQF_NO_AUTOEN,
++			  r_vec->name, r_vec);
+ 	if (err) {
+ 		nfp_net_napi_del(&nn->dp, r_vec);
+ 		nn_err(nn, "Error requesting IRQ %d\n", r_vec->irq_vector);
+ 		return err;
+ 	}
+-	disable_irq(r_vec->irq_vector);
+ 
+ 	irq_set_affinity_hint(r_vec->irq_vector, &r_vec->affinity_mask);
+ 
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index b6e89fc5a4ae7f..f5396aafe9ab62 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -576,7 +576,34 @@ struct rtl8169_counters {
+ 	__le64	rx_broadcast;
+ 	__le32	rx_multicast;
+ 	__le16	tx_aborted;
+-	__le16	tx_underun;
++	__le16	tx_underrun;
++	/* new since RTL8125 */
++	__le64 tx_octets;
++	__le64 rx_octets;
++	__le64 rx_multicast64;
++	__le64 tx_unicast64;
++	__le64 tx_broadcast64;
++	__le64 tx_multicast64;
++	__le32 tx_pause_on;
++	__le32 tx_pause_off;
++	__le32 tx_pause_all;
++	__le32 tx_deferred;
++	__le32 tx_late_collision;
++	__le32 tx_all_collision;
++	__le32 tx_aborted32;
++	__le32 align_errors32;
++	__le32 rx_frame_too_long;
++	__le32 rx_runt;
++	__le32 rx_pause_on;
++	__le32 rx_pause_off;
++	__le32 rx_pause_all;
++	__le32 rx_unknown_opcode;
++	__le32 rx_mac_error;
++	__le32 tx_underrun32;
++	__le32 rx_mac_missed;
++	__le32 rx_tcam_dropped;
++	__le32 tdu;
++	__le32 rdu;
+ };
+ 
+ struct rtl8169_tc_offsets {
+@@ -1841,7 +1868,7 @@ static void rtl8169_get_ethtool_stats(struct net_device *dev,
+ 	data[9] = le64_to_cpu(counters->rx_broadcast);
+ 	data[10] = le32_to_cpu(counters->rx_multicast);
+ 	data[11] = le16_to_cpu(counters->tx_aborted);
+-	data[12] = le16_to_cpu(counters->tx_underun);
++	data[12] = le16_to_cpu(counters->tx_underrun);
+ }
+ 
+ static void rtl8169_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index 8e2049ed60159e..c79d70899493bb 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -14,6 +14,7 @@
+ #include <linux/slab.h>
+ #include <linux/ethtool.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include "stmmac.h"
+ #include "stmmac_pcs.h"
+ #include "dwmac4.h"
+@@ -475,7 +476,7 @@ static int dwmac4_write_vlan_filter(struct net_device *dev,
+ 				    u8 index, u32 data)
+ {
+ 	void __iomem *ioaddr = (void __iomem *)dev->base_addr;
+-	int i, timeout = 10;
++	int ret;
+ 	u32 val;
+ 
+ 	if (index >= hw->num_vlan)
+@@ -491,16 +492,15 @@ static int dwmac4_write_vlan_filter(struct net_device *dev,
+ 
+ 	writel(val, ioaddr + GMAC_VLAN_TAG);
+ 
+-	for (i = 0; i < timeout; i++) {
+-		val = readl(ioaddr + GMAC_VLAN_TAG);
+-		if (!(val & GMAC_VLAN_TAG_CTRL_OB))
+-			return 0;
+-		udelay(1);
++	ret = readl_poll_timeout(ioaddr + GMAC_VLAN_TAG, val,
++				 !(val & GMAC_VLAN_TAG_CTRL_OB),
++				 1000, 500000);
++	if (ret) {
++		netdev_err(dev, "Timeout accessing MAC_VLAN_Tag_Filter\n");
++		return -EBUSY;
+ 	}
+ 
+-	netdev_err(dev, "Timeout accessing MAC_VLAN_Tag_Filter\n");
+-
+-	return -EBUSY;
++	return 0;
+ }
+ 
+ static int dwmac4_add_hw_vlan_rx_fltr(struct net_device *dev,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+index 996f2bcd07a243..308ef42417684e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+@@ -396,6 +396,7 @@ static int tc_setup_cbs(struct stmmac_priv *priv,
+ 			return ret;
+ 
+ 		priv->plat->tx_queues_cfg[queue].mode_to_use = MTL_QUEUE_DCB;
++		return 0;
+ 	}
+ 
+ 	/* Final adjustments for HW */
+diff --git a/drivers/net/ieee802154/Kconfig b/drivers/net/ieee802154/Kconfig
+index 95da876c561384..1075e24b11defc 100644
+--- a/drivers/net/ieee802154/Kconfig
++++ b/drivers/net/ieee802154/Kconfig
+@@ -101,6 +101,7 @@ config IEEE802154_CA8210_DEBUGFS
+ 
+ config IEEE802154_MCR20A
+ 	tristate "MCR20A transceiver driver"
++	select REGMAP_SPI
+ 	depends on IEEE802154_DRIVERS && MAC802154
+ 	depends on SPI
+ 	help
+diff --git a/drivers/net/ieee802154/mcr20a.c b/drivers/net/ieee802154/mcr20a.c
+index 433fb583920310..020d392a98b69d 100644
+--- a/drivers/net/ieee802154/mcr20a.c
++++ b/drivers/net/ieee802154/mcr20a.c
+@@ -1302,16 +1302,13 @@ mcr20a_probe(struct spi_device *spi)
+ 		irq_type = IRQF_TRIGGER_FALLING;
+ 
+ 	ret = devm_request_irq(&spi->dev, spi->irq, mcr20a_irq_isr,
+-			       irq_type, dev_name(&spi->dev), lp);
++			       irq_type | IRQF_NO_AUTOEN, dev_name(&spi->dev), lp);
+ 	if (ret) {
+ 		dev_err(&spi->dev, "could not request_irq for mcr20a\n");
+ 		ret = -ENODEV;
+ 		goto free_dev;
+ 	}
+ 
+-	/* disable_irq by default and wait for starting hardware */
+-	disable_irq(spi->irq);
+-
+ 	ret = ieee802154_register_hw(hw);
+ 	if (ret) {
+ 		dev_crit(&spi->dev, "ieee802154_register_hw failed\n");
+diff --git a/drivers/net/pcs/pcs-xpcs-wx.c b/drivers/net/pcs/pcs-xpcs-wx.c
+index 19c75886f070ea..5f5cd3596cb846 100644
+--- a/drivers/net/pcs/pcs-xpcs-wx.c
++++ b/drivers/net/pcs/pcs-xpcs-wx.c
+@@ -109,7 +109,7 @@ static void txgbe_pma_config_1g(struct dw_xpcs *xpcs)
+ 	txgbe_write_pma(xpcs, TXGBE_DFE_TAP_CTL0, 0);
+ 	val = txgbe_read_pma(xpcs, TXGBE_RX_GEN_CTL3);
+ 	val = u16_replace_bits(val, 0x4, TXGBE_RX_GEN_CTL3_LOS_TRSHLD0);
+-	txgbe_write_pma(xpcs, TXGBE_RX_EQ_ATTN_CTL, val);
++	txgbe_write_pma(xpcs, TXGBE_RX_GEN_CTL3, val);
+ 
+ 	txgbe_write_pma(xpcs, TXGBE_MPLLA_CTL0, 0x20);
+ 	txgbe_write_pma(xpcs, TXGBE_MPLLA_CTL3, 0x46);
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index c4236564c1cd01..8495b111a524a4 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -342,14 +342,19 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd)
+ 		if (mdio_phy_id_is_c45(mii_data->phy_id)) {
+ 			prtad = mdio_phy_id_prtad(mii_data->phy_id);
+ 			devad = mdio_phy_id_devad(mii_data->phy_id);
+-			mii_data->val_out = mdiobus_c45_read(
+-				phydev->mdio.bus, prtad, devad,
+-				mii_data->reg_num);
++			ret = mdiobus_c45_read(phydev->mdio.bus, prtad, devad,
++					       mii_data->reg_num);
++
+ 		} else {
+-			mii_data->val_out = mdiobus_read(
+-				phydev->mdio.bus, mii_data->phy_id,
+-				mii_data->reg_num);
++			ret = mdiobus_read(phydev->mdio.bus, mii_data->phy_id,
++					   mii_data->reg_num);
+ 		}
++
++		if (ret < 0)
++			return ret;
++
++		mii_data->val_out = ret;
++
+ 		return 0;
+ 
+ 	case SIOCSMIIREG:
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index eb9acfcaeb0974..9d2656afba660a 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -2269,7 +2269,7 @@ static bool ppp_channel_bridge_input(struct channel *pch, struct sk_buff *skb)
+ 	if (!pchb)
+ 		goto out_rcu;
+ 
+-	spin_lock(&pchb->downl);
++	spin_lock_bh(&pchb->downl);
+ 	if (!pchb->chan) {
+ 		/* channel got unregistered */
+ 		kfree_skb(skb);
+@@ -2281,7 +2281,7 @@ static bool ppp_channel_bridge_input(struct channel *pch, struct sk_buff *skb)
+ 		kfree_skb(skb);
+ 
+ outl:
+-	spin_unlock(&pchb->downl);
++	spin_unlock_bh(&pchb->downl);
+ out_rcu:
+ 	rcu_read_unlock();
+ 
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 3a252ac5dd28a9..159841a5c3ab5c 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -628,7 +628,9 @@ static void vrf_finish_direct(struct sk_buff *skb)
+ 		eth_zero_addr(eth->h_dest);
+ 		eth->h_proto = skb->protocol;
+ 
++		rcu_read_lock_bh();
+ 		dev_queue_xmit_nit(skb, vrf_dev);
++		rcu_read_unlock_bh();
+ 
+ 		skb_pull(skb, ETH_HLEN);
+ 	}
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index aabde24d876327..88c7a7289d06e3 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -2697,7 +2697,7 @@ int ath11k_dp_process_rx(struct ath11k_base *ab, int ring_id,
+ 		if (unlikely(push_reason !=
+ 			     HAL_REO_DEST_RING_PUSH_REASON_ROUTING_INSTRUCTION)) {
+ 			dev_kfree_skb_any(msdu);
+-			ab->soc_stats.hal_reo_error[dp->reo_dst_ring[ring_id].ring_id]++;
++			ab->soc_stats.hal_reo_error[ring_id]++;
+ 			continue;
+ 		}
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index 3cdc4c51d6dfe7..f8767496fa543a 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -2702,7 +2702,7 @@ int ath12k_dp_rx_process(struct ath12k_base *ab, int ring_id,
+ 		if (push_reason !=
+ 		    HAL_REO_DEST_RING_PUSH_REASON_ROUTING_INSTRUCTION) {
+ 			dev_kfree_skb_any(msdu);
+-			ab->soc_stats.hal_reo_error[dp->reo_dst_ring[ring_id].ring_id]++;
++			ab->soc_stats.hal_reo_error[ring_id]++;
+ 			continue;
+ 		}
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/debug.c b/drivers/net/wireless/ath/ath9k/debug.c
+index bf3da631c69fda..51abc470125b3c 100644
+--- a/drivers/net/wireless/ath/ath9k/debug.c
++++ b/drivers/net/wireless/ath/ath9k/debug.c
+@@ -1325,11 +1325,11 @@ void ath9k_get_et_stats(struct ieee80211_hw *hw,
+ 	struct ath_softc *sc = hw->priv;
+ 	int i = 0;
+ 
+-	data[i++] = (sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_BE)].tx_pkts_all +
++	data[i++] = ((u64)sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_BE)].tx_pkts_all +
+ 		     sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_BK)].tx_pkts_all +
+ 		     sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_VI)].tx_pkts_all +
+ 		     sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_VO)].tx_pkts_all);
+-	data[i++] = (sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_BE)].tx_bytes_all +
++	data[i++] = ((u64)sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_BE)].tx_bytes_all +
+ 		     sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_BK)].tx_bytes_all +
+ 		     sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_VI)].tx_bytes_all +
+ 		     sc->debug.stats.txstats[PR_QNUM(IEEE80211_AC_VO)].tx_bytes_all);
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index 0c7841f952287f..a3733c9b484e46 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -716,8 +716,7 @@ static void ath9k_hif_usb_rx_cb(struct urb *urb)
+ 	}
+ 
+ resubmit:
+-	skb_reset_tail_pointer(skb);
+-	skb_trim(skb, 0);
++	__skb_set_length(skb, 0);
+ 
+ 	usb_anchor_urb(urb, &hif_dev->rx_submitted);
+ 	ret = usb_submit_urb(urb, GFP_ATOMIC);
+@@ -754,8 +753,7 @@ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ 	case -ESHUTDOWN:
+ 		goto free_skb;
+ 	default:
+-		skb_reset_tail_pointer(skb);
+-		skb_trim(skb, 0);
++		__skb_set_length(skb, 0);
+ 
+ 		goto resubmit;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index ba9e656037a206..9b3d3405fb83d8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -356,6 +356,11 @@ int iwl_acpi_get_mcc(struct iwl_fw_runtime *fwrt, char *mcc)
+ 	}
+ 
+ 	mcc_val = wifi_pkg->package.elements[1].integer.value;
++	if (mcc_val != BIOS_MCC_CHINA) {
++		ret = -EINVAL;
++		IWL_DEBUG_RADIO(fwrt, "ACPI WRDD is supported only for CN\n");
++		goto out_free;
++	}
+ 
+ 	mcc[0] = (mcc_val >> 8) & 0xff;
+ 	mcc[1] = mcc_val & 0xff;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
+index 6684506f4fc481..6cf237850ea0c3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
+@@ -1132,6 +1132,19 @@ struct iwl_umac_scan_abort {
+ 	__le32 flags;
+ } __packed; /* SCAN_ABORT_CMD_UMAC_API_S_VER_1 */
+ 
++/**
++ * enum iwl_umac_scan_abort_status
++ *
++ * @IWL_UMAC_SCAN_ABORT_STATUS_SUCCESS: scan was successfully aborted
++ * @IWL_UMAC_SCAN_ABORT_STATUS_IN_PROGRESS: scan abort is in progress
++ * @IWL_UMAC_SCAN_ABORT_STATUS_NOT_FOUND: nothing to abort
++ */
++enum iwl_umac_scan_abort_status {
++	IWL_UMAC_SCAN_ABORT_STATUS_SUCCESS = 0,
++	IWL_UMAC_SCAN_ABORT_STATUS_IN_PROGRESS,
++	IWL_UMAC_SCAN_ABORT_STATUS_NOT_FOUND,
++};
++
+ /**
+  * struct iwl_umac_scan_complete
+  * @uid: scan id, &enum iwl_umac_scan_uid_offsets
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/regulatory.h b/drivers/net/wireless/intel/iwlwifi/fw/regulatory.h
+index 633c9ad9af841e..ecf482647617cf 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/regulatory.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/regulatory.h
+@@ -45,6 +45,8 @@
+ #define IWL_WTAS_ENABLE_IEC_MSK	0x4
+ #define IWL_WTAS_USA_UHB_MSK		BIT(16)
+ 
++#define BIOS_MCC_CHINA 0x434e
++
+ /*
+  * The profile for revision 2 is a superset of revision 1, which is in
+  * turn a superset of revision 0.  So we can store all revisions
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+index fb982d4fe85100..2cf878f237ac68 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+@@ -638,7 +638,7 @@ int iwl_uefi_get_mcc(struct iwl_fw_runtime *fwrt, char *mcc)
+ 		goto out;
+ 	}
+ 
+-	if (data->mcc != UEFI_MCC_CHINA) {
++	if (data->mcc != BIOS_MCC_CHINA) {
+ 		ret = -EINVAL;
+ 		IWL_DEBUG_RADIO(fwrt, "UEFI WRDD is supported only for CN\n");
+ 		goto out;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.h b/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
+index 1f8884ca8997c4..e0ef981cd8f28d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
+@@ -149,8 +149,6 @@ struct uefi_cnv_var_splc {
+ 	u32 default_pwr_limit;
+ } __packed;
+ 
+-#define UEFI_MCC_CHINA 0x434e
+-
+ /* struct uefi_cnv_var_wrdd - WRDD table as defined in UEFI
+  * @revision: the revision of the table
+  * @mcc: country identifier as defined in ISO/IEC 3166-1 Alpha 2 code
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 83551d962a46c4..6673a4e467c0b5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -834,20 +834,10 @@ void iwl_mvm_mac_tx(struct ieee80211_hw *hw,
+ 	if (ieee80211_is_mgmt(hdr->frame_control))
+ 		sta = NULL;
+ 
+-	/* If there is no sta, and it's not offchannel - send through AP */
++	/* this shouldn't even happen: just drop */
+ 	if (!sta && info->control.vif->type == NL80211_IFTYPE_STATION &&
+-	    !offchannel) {
+-		struct iwl_mvm_vif *mvmvif =
+-			iwl_mvm_vif_from_mac80211(info->control.vif);
+-		u8 ap_sta_id = READ_ONCE(mvmvif->deflink.ap_sta_id);
+-
+-		if (ap_sta_id < mvm->fw->ucode_capa.num_stations) {
+-			/* mac80211 holds rcu read lock */
+-			sta = rcu_dereference(mvm->fw_id_to_mac_id[ap_sta_id]);
+-			if (IS_ERR_OR_NULL(sta))
+-				goto drop;
+-		}
+-	}
++	    !offchannel)
++		goto drop;
+ 
+ 	if (tmp_sta && !sta && link_id != IEEE80211_LINK_UNSPECIFIED &&
+ 	    !ieee80211_is_probe_resp(hdr->frame_control)) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
+index 8a38fc4b0b0f97..455f5f41750642 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
+@@ -144,7 +144,7 @@ static void iwl_mvm_mld_update_sta_key(struct ieee80211_hw *hw,
+ 	if (sta != data->sta || key->link_id >= 0)
+ 		return;
+ 
+-	err = iwl_mvm_send_cmd_pdu(mvm, cmd_id, CMD_ASYNC, sizeof(cmd), &cmd);
++	err = iwl_mvm_send_cmd_pdu(mvm, cmd_id, 0, sizeof(cmd), &cmd);
+ 
+ 	if (err)
+ 		data->err = err;
+@@ -162,8 +162,8 @@ int iwl_mvm_mld_update_sta_keys(struct iwl_mvm *mvm,
+ 		.new_sta_mask = new_sta_mask,
+ 	};
+ 
+-	ieee80211_iter_keys_rcu(mvm->hw, vif, iwl_mvm_mld_update_sta_key,
+-				&data);
++	ieee80211_iter_keys(mvm->hw, vif, iwl_mvm_mld_update_sta_key,
++			    &data);
+ 	return data.err;
+ }
+ 
+@@ -402,7 +402,7 @@ void iwl_mvm_sec_key_remove_ap(struct iwl_mvm *mvm,
+ 	if (!sec_key_ver)
+ 		return;
+ 
+-	ieee80211_iter_keys_rcu(mvm->hw, vif,
+-				iwl_mvm_sec_key_remove_ap_iter,
+-				(void *)(uintptr_t)link_id);
++	ieee80211_iter_keys(mvm->hw, vif,
++			    iwl_mvm_sec_key_remove_ap_iter,
++			    (void *)(uintptr_t)link_id);
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index d7c276237c74ea..d8a3d47f5c072c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -3313,13 +3313,23 @@ void iwl_mvm_rx_umac_scan_iter_complete_notif(struct iwl_mvm *mvm,
+ 		       mvm->scan_start);
+ }
+ 
+-static int iwl_mvm_umac_scan_abort(struct iwl_mvm *mvm, int type)
++static int iwl_mvm_umac_scan_abort(struct iwl_mvm *mvm, int type, bool *wait)
+ {
+-	struct iwl_umac_scan_abort cmd = {};
++	struct iwl_umac_scan_abort abort_cmd = {};
++	struct iwl_host_cmd cmd = {
++		.id = WIDE_ID(IWL_ALWAYS_LONG_GROUP, SCAN_ABORT_UMAC),
++		.len = { sizeof(abort_cmd), },
++		.data = { &abort_cmd, },
++		.flags = CMD_SEND_IN_RFKILL,
++	};
++
+ 	int uid, ret;
++	u32 status = IWL_UMAC_SCAN_ABORT_STATUS_NOT_FOUND;
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
++	*wait = true;
++
+ 	/* We should always get a valid index here, because we already
+ 	 * checked that this type of scan was running in the generic
+ 	 * code.
+@@ -3328,17 +3338,28 @@ static int iwl_mvm_umac_scan_abort(struct iwl_mvm *mvm, int type)
+ 	if (WARN_ON_ONCE(uid < 0))
+ 		return uid;
+ 
+-	cmd.uid = cpu_to_le32(uid);
++	abort_cmd.uid = cpu_to_le32(uid);
+ 
+ 	IWL_DEBUG_SCAN(mvm, "Sending scan abort, uid %u\n", uid);
+ 
+-	ret = iwl_mvm_send_cmd_pdu(mvm,
+-				   WIDE_ID(IWL_ALWAYS_LONG_GROUP, SCAN_ABORT_UMAC),
+-				   CMD_SEND_IN_RFKILL, sizeof(cmd), &cmd);
++	ret = iwl_mvm_send_cmd_status(mvm, &cmd, &status);
++
++	IWL_DEBUG_SCAN(mvm, "Scan abort: ret=%d, status=%u\n", ret, status);
+ 	if (!ret)
+ 		mvm->scan_uid_status[uid] = type << IWL_MVM_SCAN_STOPPING_SHIFT;
+ 
+-	IWL_DEBUG_SCAN(mvm, "Scan abort: ret=%d\n", ret);
++	/* Handle the case where the FW is no longer familiar with the scan
++	 * that is to be stopped. In such a case, the scan complete
++	 * notification is expected to have already been received but not
++	 * yet processed, so there is no need to wait for a scan complete
++	 * notification and the flow should continue as if the scan had
++	 * really been aborted.
++	 */
++	if (status == IWL_UMAC_SCAN_ABORT_STATUS_NOT_FOUND) {
++		mvm->scan_uid_status[uid] = type << IWL_MVM_SCAN_STOPPING_SHIFT;
++		*wait = false;
++	}
++
+ 	return ret;
+ }
+ 
+@@ -3348,6 +3369,7 @@ static int iwl_mvm_scan_stop_wait(struct iwl_mvm *mvm, int type)
+ 	static const u16 scan_done_notif[] = { SCAN_COMPLETE_UMAC,
+ 					      SCAN_OFFLOAD_COMPLETE, };
+ 	int ret;
++	bool wait = true;
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+@@ -3359,7 +3381,7 @@ static int iwl_mvm_scan_stop_wait(struct iwl_mvm *mvm, int type)
+ 	IWL_DEBUG_SCAN(mvm, "Preparing to stop scan, type %x\n", type);
+ 
+ 	if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_UMAC_SCAN))
+-		ret = iwl_mvm_umac_scan_abort(mvm, type);
++		ret = iwl_mvm_umac_scan_abort(mvm, type, &wait);
+ 	else
+ 		ret = iwl_mvm_lmac_scan_abort(mvm);
+ 
+@@ -3367,6 +3389,10 @@ static int iwl_mvm_scan_stop_wait(struct iwl_mvm *mvm, int type)
+ 		IWL_DEBUG_SCAN(mvm, "couldn't stop scan type %d\n", type);
+ 		iwl_remove_notification(&mvm->notif_wait, &wait_scan_done);
+ 		return ret;
++	} else if (!wait) {
++		IWL_DEBUG_SCAN(mvm, "no need to wait for scan type %d\n", type);
++		iwl_remove_notification(&mvm->notif_wait, &wait_scan_done);
++		return 0;
+ 	}
+ 
+ 	return iwl_wait_notification(&mvm->notif_wait, &wait_scan_done,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 1d695ece93e9e9..51c12d70e8c233 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1195,6 +1195,9 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 	bool is_ampdu = false;
+ 	int hdrlen;
+ 
++	if (WARN_ON_ONCE(!sta))
++		return -1;
++
+ 	mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ 	fc = hdr->frame_control;
+ 	hdrlen = ieee80211_hdrlen(fc);
+@@ -1202,9 +1205,6 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 	if (IWL_MVM_NON_TRANSMITTING_AP && ieee80211_is_probe_resp(fc))
+ 		return -1;
+ 
+-	if (WARN_ON_ONCE(!mvmsta))
+-		return -1;
+-
+ 	if (WARN_ON_ONCE(mvmsta->deflink.sta_id == IWL_MVM_INVALID_STA))
+ 		return -1;
+ 
+@@ -1335,7 +1335,7 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 		       struct ieee80211_sta *sta)
+ {
+-	struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
++	struct iwl_mvm_sta *mvmsta;
+ 	struct ieee80211_tx_info info;
+ 	struct sk_buff_head mpdus_skbs;
+ 	struct ieee80211_vif *vif;
+@@ -1344,9 +1344,11 @@ int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 	struct sk_buff *orig_skb = skb;
+ 	const u8 *addr3;
+ 
+-	if (WARN_ON_ONCE(!mvmsta))
++	if (WARN_ON_ONCE(!sta))
+ 		return -1;
+ 
++	mvmsta = iwl_mvm_sta_from_mac80211(sta);
++
+ 	if (WARN_ON_ONCE(mvmsta->deflink.sta_id == IWL_MVM_INVALID_STA))
+ 		return -1;
+ 
+diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
+index 3adc447b715f68..5b072120e3f213 100644
+--- a/drivers/net/wireless/marvell/mwifiex/fw.h
++++ b/drivers/net/wireless/marvell/mwifiex/fw.h
+@@ -1587,7 +1587,7 @@ struct host_cmd_ds_802_11_scan_rsp {
+ 
+ struct host_cmd_ds_802_11_scan_ext {
+ 	u32   reserved;
+-	u8    tlv_buffer[1];
++	u8    tlv_buffer[];
+ } __packed;
+ 
+ struct mwifiex_ie_types_bss_mode {
+diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c
+index 0326b121747cb2..17ce84f5207e3a 100644
+--- a/drivers/net/wireless/marvell/mwifiex/scan.c
++++ b/drivers/net/wireless/marvell/mwifiex/scan.c
+@@ -2530,8 +2530,7 @@ int mwifiex_ret_802_11_scan_ext(struct mwifiex_private *priv,
+ 	ext_scan_resp = &resp->params.ext_scan;
+ 
+ 	tlv = (void *)ext_scan_resp->tlv_buffer;
+-	buf_left = le16_to_cpu(resp->size) - (sizeof(*ext_scan_resp) + S_DS_GEN
+-					      - 1);
++	buf_left = le16_to_cpu(resp->size) - (sizeof(*ext_scan_resp) + S_DS_GEN);
+ 
+ 	while (buf_left >= sizeof(struct mwifiex_ie_types_header)) {
+ 		type = le16_to_cpu(tlv->type);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index 7bc3b4cd359255..6bef96e3d2a3d9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -400,6 +400,7 @@ mt7915_init_wiphy(struct mt7915_phy *phy)
+ 	ieee80211_hw_set(hw, SUPPORTS_RX_DECAP_OFFLOAD);
+ 	ieee80211_hw_set(hw, SUPPORTS_MULTI_BSSID);
+ 	ieee80211_hw_set(hw, WANT_MONITOR_VIF);
++	ieee80211_hw_set(hw, SUPPORTS_TX_FRAG);
+ 
+ 	hw->max_tx_fragments = 4;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index 8008ce3fa6c7ea..387d47e9fcd386 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -1537,12 +1537,14 @@ void mt7915_mac_reset_work(struct work_struct *work)
+ 		set_bit(MT76_RESET, &phy2->mt76->state);
+ 		cancel_delayed_work_sync(&phy2->mt76->mac_work);
+ 	}
++
++	mutex_lock(&dev->mt76.mutex);
++
+ 	mt76_worker_disable(&dev->mt76.tx_worker);
+ 	mt76_for_each_q_rx(&dev->mt76, i)
+ 		napi_disable(&dev->mt76.napi[i]);
+ 	napi_disable(&dev->mt76.tx_napi);
+ 
+-	mutex_lock(&dev->mt76.mutex);
+ 
+ 	if (mtk_wed_device_active(&dev->mt76.mmio.wed))
+ 		mtk_wed_device_stop(&dev->mt76.mmio.wed);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index eea41b29f09675..3b2dcb410e0f0d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -1577,6 +1577,12 @@ mt7915_twt_teardown_request(struct ieee80211_hw *hw,
+ 	mutex_unlock(&dev->mt76.mutex);
+ }
+ 
++static int
++mt7915_set_frag_threshold(struct ieee80211_hw *hw, u32 val)
++{
++	return 0;
++}
++
+ static int
+ mt7915_set_radar_background(struct ieee80211_hw *hw,
+ 			    struct cfg80211_chan_def *chandef)
+@@ -1707,6 +1713,7 @@ const struct ieee80211_ops mt7915_ops = {
+ 	.sta_set_decap_offload = mt7915_sta_set_decap_offload,
+ 	.add_twt_setup = mt7915_mac_add_twt_setup,
+ 	.twt_teardown_request = mt7915_twt_teardown_request,
++	.set_frag_threshold = mt7915_set_frag_threshold,
+ 	CFG80211_TESTMODE_CMD(mt76_testmode_cmd)
+ 	CFG80211_TESTMODE_DUMP(mt76_testmode_dump)
+ #ifdef CONFIG_MAC80211_DEBUGFS
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 9599adf104b160..758249b20c2229 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -690,13 +690,17 @@ int mt7915_mcu_add_tx_ba(struct mt7915_dev *dev,
+ {
+ 	struct mt7915_sta *msta = (struct mt7915_sta *)params->sta->drv_priv;
+ 	struct mt7915_vif *mvif = msta->vif;
++	int ret;
+ 
++	mt76_worker_disable(&dev->mt76.tx_worker);
+ 	if (enable && !params->amsdu)
+ 		msta->wcid.amsdu = false;
++	ret = mt76_connac_mcu_sta_ba(&dev->mt76, &mvif->mt76, params,
++				     MCU_EXT_CMD(STA_REC_UPDATE),
++				     enable, true);
++	mt76_worker_enable(&dev->mt76.tx_worker);
+ 
+-	return mt76_connac_mcu_sta_ba(&dev->mt76, &mvif->mt76, params,
+-				      MCU_EXT_CMD(STA_REC_UPDATE),
+-				      enable, true);
++	return ret;
+ }
+ 
+ int mt7915_mcu_add_rx_ba(struct mt7915_dev *dev,
+diff --git a/drivers/net/wireless/realtek/rtw88/Kconfig b/drivers/net/wireless/realtek/rtw88/Kconfig
+index 22838ede03cd80..02b0d698413bec 100644
+--- a/drivers/net/wireless/realtek/rtw88/Kconfig
++++ b/drivers/net/wireless/realtek/rtw88/Kconfig
+@@ -12,6 +12,7 @@ if RTW88
+ 
+ config RTW88_CORE
+ 	tristate
++	select WANT_DEV_COREDUMP
+ 
+ config RTW88_PCI
+ 	tristate
+diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
+index 112bdd95fc6ead..504660ee3cba32 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.h
++++ b/drivers/net/wireless/realtek/rtw89/core.h
+@@ -3888,16 +3888,22 @@ struct rtw89_txpwr_conf {
+ 	const void *data;
+ };
+ 
++static inline bool rtw89_txpwr_entcpy(void *entry, const void *cursor, u8 size,
++				      const struct rtw89_txpwr_conf *conf)
++{
++	u8 valid_size = min(size, conf->ent_sz);
++
++	memcpy(entry, cursor, valid_size);
++	return true;
++}
++
+ #define rtw89_txpwr_conf_valid(conf) (!!(conf)->data)
+ 
+ #define rtw89_for_each_in_txpwr_conf(entry, cursor, conf) \
+-	for (typecheck(const void *, cursor), (cursor) = (conf)->data, \
+-	     memcpy(&(entry), cursor, \
+-		    min_t(u8, sizeof(entry), (conf)->ent_sz)); \
++	for (typecheck(const void *, cursor), (cursor) = (conf)->data; \
+ 	     (cursor) < (conf)->data + (conf)->num_ents * (conf)->ent_sz; \
+-	     (cursor) += (conf)->ent_sz, \
+-	     memcpy(&(entry), cursor, \
+-		    min_t(u8, sizeof(entry), (conf)->ent_sz)))
++	     (cursor) += (conf)->ent_sz) \
++		if (rtw89_txpwr_entcpy(&(entry), cursor, sizeof(entry), conf))
+ 
+ struct rtw89_txpwr_byrate_data {
+ 	struct rtw89_txpwr_conf conf;
+diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c
+index 1ec97250e88e54..4fae0bd566f6a2 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw89/mac80211.c
+@@ -126,7 +126,9 @@ static int rtw89_ops_add_interface(struct ieee80211_hw *hw,
+ 	rtwvif->rtwdev = rtwdev;
+ 	rtwvif->roc.state = RTW89_ROC_IDLE;
+ 	rtwvif->offchan = false;
+-	list_add_tail(&rtwvif->list, &rtwdev->rtwvifs_list);
++	if (!rtw89_rtwvif_in_list(rtwdev, rtwvif))
++		list_add_tail(&rtwvif->list, &rtwdev->rtwvifs_list);
++
+ 	INIT_WORK(&rtwvif->update_beacon_work, rtw89_core_update_beacon_work);
+ 	INIT_DELAYED_WORK(&rtwvif->roc.roc_work, rtw89_roc_work);
+ 	rtw89_leave_ps_mode(rtwdev);
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
+index a82b4c56a6f45a..f7c6b019b5be4e 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.c
++++ b/drivers/net/wireless/realtek/rtw89/phy.c
+@@ -352,8 +352,8 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
+ 		csi_mode = RTW89_RA_RPT_MODE_HT;
+ 		ra_mask |= ((u64)sta->deflink.ht_cap.mcs.rx_mask[3] << 48) |
+ 			   ((u64)sta->deflink.ht_cap.mcs.rx_mask[2] << 36) |
+-			   (sta->deflink.ht_cap.mcs.rx_mask[1] << 24) |
+-			   (sta->deflink.ht_cap.mcs.rx_mask[0] << 12);
++			   ((u64)sta->deflink.ht_cap.mcs.rx_mask[1] << 24) |
++			   ((u64)sta->deflink.ht_cap.mcs.rx_mask[0] << 12);
+ 		high_rate_masks = rtw89_ra_mask_ht_rates;
+ 		if (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_RX_STBC)
+ 			stbc_en = 1;
+diff --git a/drivers/net/wireless/realtek/rtw89/util.h b/drivers/net/wireless/realtek/rtw89/util.h
+index e2ed4565025dda..d4ee9078a4f48c 100644
+--- a/drivers/net/wireless/realtek/rtw89/util.h
++++ b/drivers/net/wireless/realtek/rtw89/util.h
+@@ -14,6 +14,24 @@
+ #define rtw89_for_each_rtwvif(rtwdev, rtwvif)				       \
+ 	list_for_each_entry(rtwvif, &(rtwdev)->rtwvifs_list, list)
+ 
++/* Before adding rtwvif to the list, we need to check if it already exists,
++ * because in some cases, such as SER L2 happening during the WoWLAN flow,
++ * calling reconfig twice causes the entry to be added twice.
++ */
++static inline bool rtw89_rtwvif_in_list(struct rtw89_dev *rtwdev,
++					struct rtw89_vif *new)
++{
++	struct rtw89_vif *rtwvif;
++
++	lockdep_assert_held(&rtwdev->mutex);
++
++	rtw89_for_each_rtwvif(rtwdev, rtwvif)
++		if (rtwvif == new)
++			return true;
++
++	return false;
++}
++
+ /* The result of negative dividend and positive divisor is undefined, but it
+  * should be one case of round-down or round-up. So, make it round-down if the
+  * result is round-up.
+diff --git a/drivers/net/wwan/qcom_bam_dmux.c b/drivers/net/wwan/qcom_bam_dmux.c
+index 26ca719fa0de43..5dcb9a84a12e35 100644
+--- a/drivers/net/wwan/qcom_bam_dmux.c
++++ b/drivers/net/wwan/qcom_bam_dmux.c
+@@ -823,17 +823,17 @@ static int bam_dmux_probe(struct platform_device *pdev)
+ 	ret = devm_request_threaded_irq(dev, pc_ack_irq, NULL, bam_dmux_pc_ack_irq,
+ 					IRQF_ONESHOT, NULL, dmux);
+ 	if (ret)
+-		return ret;
++		goto err_disable_pm;
+ 
+ 	ret = devm_request_threaded_irq(dev, dmux->pc_irq, NULL, bam_dmux_pc_irq,
+ 					IRQF_ONESHOT, NULL, dmux);
+ 	if (ret)
+-		return ret;
++		goto err_disable_pm;
+ 
+ 	ret = irq_get_irqchip_state(dmux->pc_irq, IRQCHIP_STATE_LINE_LEVEL,
+ 				    &dmux->pc_state);
+ 	if (ret)
+-		return ret;
++		goto err_disable_pm;
+ 
+ 	/* Check if remote finished initialization before us */
+ 	if (dmux->pc_state) {
+@@ -844,6 +844,11 @@ static int bam_dmux_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	return 0;
++
++err_disable_pm:
++	pm_runtime_disable(dev);
++	pm_runtime_dont_use_autosuspend(dev);
++	return ret;
+ }
+ 
+ static void bam_dmux_remove(struct platform_device *pdev)
+diff --git a/drivers/net/xen-netback/hash.c b/drivers/net/xen-netback/hash.c
+index ff96f22648efde..45ddce35f6d2c6 100644
+--- a/drivers/net/xen-netback/hash.c
++++ b/drivers/net/xen-netback/hash.c
+@@ -95,7 +95,7 @@ static u32 xenvif_new_hash(struct xenvif *vif, const u8 *data,
+ 
+ static void xenvif_flush_hash(struct xenvif *vif)
+ {
+-	struct xenvif_hash_cache_entry *entry;
++	struct xenvif_hash_cache_entry *entry, *n;
+ 	unsigned long flags;
+ 
+ 	if (xenvif_hash_cache_size == 0)
+@@ -103,8 +103,7 @@ static void xenvif_flush_hash(struct xenvif *vif)
+ 
+ 	spin_lock_irqsave(&vif->hash.cache.lock, flags);
+ 
+-	list_for_each_entry_rcu(entry, &vif->hash.cache.list, link,
+-				lockdep_is_held(&vif->hash.cache.lock)) {
++	list_for_each_entry_safe(entry, n, &vif->hash.cache.list, link) {
+ 		list_del_rcu(&entry->link);
+ 		vif->hash.cache.count--;
+ 		kfree_rcu(entry, rcu);
+diff --git a/drivers/nvme/common/keyring.c b/drivers/nvme/common/keyring.c
+index 6f7e7a8fa5ae47..ed5167f942d89b 100644
+--- a/drivers/nvme/common/keyring.c
++++ b/drivers/nvme/common/keyring.c
+@@ -20,6 +20,28 @@ key_serial_t nvme_keyring_id(void)
+ }
+ EXPORT_SYMBOL_GPL(nvme_keyring_id);
+ 
++static bool nvme_tls_psk_revoked(struct key *psk)
++{
++	return test_bit(KEY_FLAG_REVOKED, &psk->flags) ||
++		test_bit(KEY_FLAG_INVALIDATED, &psk->flags);
++}
++
++struct key *nvme_tls_key_lookup(key_serial_t key_id)
++{
++	struct key *key = key_lookup(key_id);
++
++	if (IS_ERR(key)) {
++		pr_err("key id %08x not found\n", key_id);
++		return key;
++	}
++	if (nvme_tls_psk_revoked(key)) {
++		pr_err("key id %08x revoked\n", key_id);
++		return ERR_PTR(-EKEYREVOKED);
++	}
++	return key;
++}
++EXPORT_SYMBOL_GPL(nvme_tls_key_lookup);
++
+ static void nvme_tls_psk_describe(const struct key *key, struct seq_file *m)
+ {
+ 	seq_puts(m, key->description);
+@@ -36,14 +58,12 @@ static bool nvme_tls_psk_match(const struct key *key,
+ 		pr_debug("%s: no key description\n", __func__);
+ 		return false;
+ 	}
+-	match_len = strlen(key->description);
+-	pr_debug("%s: id %s len %zd\n", __func__, key->description, match_len);
+-
+ 	if (!match_data->raw_data) {
+ 		pr_debug("%s: no match data\n", __func__);
+ 		return false;
+ 	}
+ 	match_id = match_data->raw_data;
++	match_len = strlen(match_id);
+ 	pr_debug("%s: match '%s' '%s' len %zd\n",
+ 		 __func__, match_id, key->description, match_len);
+ 	return !memcmp(key->description, match_id, match_len);
+@@ -71,7 +91,7 @@ static struct key_type nvme_tls_psk_key_type = {
+ 
+ static struct key *nvme_tls_psk_lookup(struct key *keyring,
+ 		const char *hostnqn, const char *subnqn,
+-		int hmac, bool generated)
++		u8 hmac, u8 psk_ver, bool generated)
+ {
+ 	char *identity;
+ 	size_t identity_len = (NVMF_NQN_SIZE) * 2 + 11;
+@@ -82,8 +102,8 @@ static struct key *nvme_tls_psk_lookup(struct key *keyring,
+ 	if (!identity)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	snprintf(identity, identity_len, "NVMe0%c%02d %s %s",
+-		 generated ? 'G' : 'R', hmac, hostnqn, subnqn);
++	snprintf(identity, identity_len, "NVMe%u%c%02u %s %s",
++		 psk_ver, generated ? 'G' : 'R', hmac, hostnqn, subnqn);
+ 
+ 	if (!keyring)
+ 		keyring = nvme_keyring;
+@@ -107,21 +127,38 @@ static struct key *nvme_tls_psk_lookup(struct key *keyring,
+ /*
+  * NVMe PSK priority list
+  *
+- * 'Retained' PSKs (ie 'generated == false')
+- * should be preferred to 'generated' PSKs,
+- * and SHA-384 should be preferred to SHA-256.
++ * 'Retained' PSKs (ie 'generated == false') should be preferred to 'generated'
++ * PSKs, PSKs with hash (psk_ver 1) should be preferred to PSKs without hash
++ * (psk_ver 0), and SHA-384 should be preferred to SHA-256.
+  */
+ static struct nvme_tls_psk_priority_list {
+ 	bool generated;
++	u8 psk_ver;
+ 	enum nvme_tcp_tls_cipher cipher;
+ } nvme_tls_psk_prio[] = {
+ 	{ .generated = false,
++	  .psk_ver = 1,
++	  .cipher = NVME_TCP_TLS_CIPHER_SHA384, },
++	{ .generated = false,
++	  .psk_ver = 1,
++	  .cipher = NVME_TCP_TLS_CIPHER_SHA256, },
++	{ .generated = false,
++	  .psk_ver = 0,
+ 	  .cipher = NVME_TCP_TLS_CIPHER_SHA384, },
+ 	{ .generated = false,
++	  .psk_ver = 0,
++	  .cipher = NVME_TCP_TLS_CIPHER_SHA256, },
++	{ .generated = true,
++	  .psk_ver = 1,
++	  .cipher = NVME_TCP_TLS_CIPHER_SHA384, },
++	{ .generated = true,
++	  .psk_ver = 1,
+ 	  .cipher = NVME_TCP_TLS_CIPHER_SHA256, },
+ 	{ .generated = true,
++	  .psk_ver = 0,
+ 	  .cipher = NVME_TCP_TLS_CIPHER_SHA384, },
+ 	{ .generated = true,
++	  .psk_ver = 0,
+ 	  .cipher = NVME_TCP_TLS_CIPHER_SHA256, },
+ };
+ 
+@@ -137,10 +174,11 @@ key_serial_t nvme_tls_psk_default(struct key *keyring,
+ 
+ 	for (prio = 0; prio < ARRAY_SIZE(nvme_tls_psk_prio); prio++) {
+ 		bool generated = nvme_tls_psk_prio[prio].generated;
++		u8 ver = nvme_tls_psk_prio[prio].psk_ver;
+ 		enum nvme_tcp_tls_cipher cipher = nvme_tls_psk_prio[prio].cipher;
+ 
+ 		tls_key = nvme_tls_psk_lookup(keyring, hostnqn, subnqn,
+-					      cipher, generated);
++					      cipher, ver, generated);
+ 		if (!IS_ERR(tls_key)) {
+ 			tls_key_id = tls_key->serial;
+ 			key_put(tls_key);
+diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
+index b309c8be720f47..e6bb73c168872a 100644
+--- a/drivers/nvme/host/Kconfig
++++ b/drivers/nvme/host/Kconfig
+@@ -42,6 +42,7 @@ config NVME_HWMON
+ 
+ config NVME_FABRICS
+ 	select NVME_CORE
++	select NVME_KEYRING if NVME_TCP_TLS
+ 	tristate
+ 
+ config NVME_RDMA
+@@ -95,7 +96,6 @@ config NVME_TCP
+ config NVME_TCP_TLS
+ 	bool "NVMe over Fabrics TCP TLS encryption support"
+ 	depends on NVME_TCP
+-	select NVME_KEYRING
+ 	select NET_HANDSHAKE
+ 	select KEYS
+ 	help
+@@ -110,6 +110,7 @@ config NVME_HOST_AUTH
+ 	bool "NVMe over Fabrics In-Band Authentication in host side"
+ 	depends on NVME_CORE
+ 	select NVME_AUTH
++	select NVME_KEYRING if NVME_TCP_TLS
+ 	help
+ 	  This provides support for NVMe over Fabrics In-Band Authentication in
+ 	  host side.
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 5569cf4183b2a5..415ede9886c125 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4587,7 +4587,6 @@ static void nvme_free_ctrl(struct device *dev)
+ 
+ 	if (!subsys || ctrl->instance != subsys->instance)
+ 		ida_free(&nvme_instance_ida, ctrl->instance);
+-	key_put(ctrl->tls_key);
+ 	nvme_free_cels(ctrl);
+ 	nvme_mpath_uninit(ctrl);
+ 	cleanup_srcu_struct(&ctrl->srcu);
+diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
+index b5a4b5fd573e0d..3e3db6a6524e04 100644
+--- a/drivers/nvme/host/fabrics.c
++++ b/drivers/nvme/host/fabrics.c
+@@ -650,7 +650,7 @@ static struct key *nvmf_parse_key(int key_id)
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+-	key = key_lookup(key_id);
++	key = nvme_tls_key_lookup(key_id);
+ 	if (IS_ERR(key))
+ 		pr_err("key id %08x not found\n", key_id);
+ 	else
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index ff1769172778be..cde1cb906dbf23 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -375,7 +375,7 @@ struct nvme_ctrl {
+ 	struct nvme_dhchap_key *ctrl_key;
+ 	u16 transaction;
+ #endif
+-	struct key *tls_key;
++	key_serial_t tls_pskid;
+ 
+ 	/* Power saving configuration */
+ 	u64 ps_max_latency_us;
+diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
+index 3c55f7edd18193..5b1dee8a66ef80 100644
+--- a/drivers/nvme/host/sysfs.c
++++ b/drivers/nvme/host/sysfs.c
+@@ -671,9 +671,9 @@ static ssize_t tls_key_show(struct device *dev,
+ {
+ 	struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+ 
+-	if (!ctrl->tls_key)
++	if (!ctrl->tls_pskid)
+ 		return 0;
+-	return sysfs_emit(buf, "%08x", key_serial(ctrl->tls_key));
++	return sysfs_emit(buf, "%08x", ctrl->tls_pskid);
+ }
+ static DEVICE_ATTR_RO(tls_key);
+ #endif
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 8b5e4327fe83b8..8c79af3ed1f23e 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -165,6 +165,7 @@ struct nvme_tcp_queue {
+ 
+ 	bool			hdr_digest;
+ 	bool			data_digest;
++	bool			tls_enabled;
+ 	struct ahash_request	*rcv_hash;
+ 	struct ahash_request	*snd_hash;
+ 	__le32			exp_ddgst;
+@@ -213,7 +214,21 @@ static inline int nvme_tcp_queue_id(struct nvme_tcp_queue *queue)
+ 	return queue - queue->ctrl->queues;
+ }
+ 
+-static inline bool nvme_tcp_tls(struct nvme_ctrl *ctrl)
++/*
++ * Check if the queue is TLS encrypted
++ */
++static inline bool nvme_tcp_queue_tls(struct nvme_tcp_queue *queue)
++{
++	if (!IS_ENABLED(CONFIG_NVME_TCP_TLS))
++		return 0;
++
++	return queue->tls_enabled;
++}
++
++/*
++ * Check if TLS is configured for the controller.
++ */
++static inline bool nvme_tcp_tls_configured(struct nvme_ctrl *ctrl)
+ {
+ 	if (!IS_ENABLED(CONFIG_NVME_TCP_TLS))
+ 		return 0;
+@@ -368,7 +383,7 @@ static inline bool nvme_tcp_queue_has_pending(struct nvme_tcp_queue *queue)
+ 
+ static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
+ {
+-	return !nvme_tcp_tls(&queue->ctrl->ctrl) &&
++	return !nvme_tcp_queue_tls(queue) &&
+ 		nvme_tcp_queue_has_pending(queue);
+ }
+ 
+@@ -1427,7 +1442,7 @@ static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue)
+ 	memset(&msg, 0, sizeof(msg));
+ 	iov.iov_base = icresp;
+ 	iov.iov_len = sizeof(*icresp);
+-	if (nvme_tcp_tls(&queue->ctrl->ctrl)) {
++	if (nvme_tcp_queue_tls(queue)) {
+ 		msg.msg_control = cbuf;
+ 		msg.msg_controllen = sizeof(cbuf);
+ 	}
+@@ -1439,7 +1454,7 @@ static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue)
+ 		goto free_icresp;
+ 	}
+ 	ret = -ENOTCONN;
+-	if (nvme_tcp_tls(&queue->ctrl->ctrl)) {
++	if (nvme_tcp_queue_tls(queue)) {
+ 		ctype = tls_get_record_type(queue->sock->sk,
+ 					    (struct cmsghdr *)cbuf);
+ 		if (ctype != TLS_RECORD_TYPE_DATA) {
+@@ -1581,13 +1596,16 @@ static void nvme_tcp_tls_done(void *data, int status, key_serial_t pskid)
+ 		goto out_complete;
+ 	}
+ 
+-	tls_key = key_lookup(pskid);
++	tls_key = nvme_tls_key_lookup(pskid);
+ 	if (IS_ERR(tls_key)) {
+ 		dev_warn(ctrl->ctrl.device, "queue %d: Invalid key %x\n",
+ 			 qid, pskid);
+ 		queue->tls_err = -ENOKEY;
+ 	} else {
+-		ctrl->ctrl.tls_key = tls_key;
++		queue->tls_enabled = true;
++		if (qid == 0)
++			ctrl->ctrl.tls_pskid = key_serial(tls_key);
++		key_put(tls_key);
+ 		queue->tls_err = 0;
+ 	}
+ 
+@@ -1768,7 +1786,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid,
+ 	}
+ 
+ 	/* If PSKs are configured try to start TLS */
+-	if (IS_ENABLED(CONFIG_NVME_TCP_TLS) && pskid) {
++	if (nvme_tcp_tls_configured(nctrl) && pskid) {
+ 		ret = nvme_tcp_start_tls(nctrl, queue, pskid);
+ 		if (ret)
+ 			goto err_init_connect;
+@@ -1829,6 +1847,8 @@ static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
+ 	mutex_lock(&queue->queue_lock);
+ 	if (test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags))
+ 		__nvme_tcp_stop_queue(queue);
++	/* Stopping the queue will disable TLS */
++	queue->tls_enabled = false;
+ 	mutex_unlock(&queue->queue_lock);
+ }
+ 
+@@ -1925,16 +1945,17 @@ static int nvme_tcp_alloc_admin_queue(struct nvme_ctrl *ctrl)
+ 	int ret;
+ 	key_serial_t pskid = 0;
+ 
+-	if (nvme_tcp_tls(ctrl)) {
++	if (nvme_tcp_tls_configured(ctrl)) {
+ 		if (ctrl->opts->tls_key)
+ 			pskid = key_serial(ctrl->opts->tls_key);
+-		else
++		else {
+ 			pskid = nvme_tls_psk_default(ctrl->opts->keyring,
+ 						      ctrl->opts->host->nqn,
+ 						      ctrl->opts->subsysnqn);
+-		if (!pskid) {
+-			dev_err(ctrl->device, "no valid PSK found\n");
+-			return -ENOKEY;
++			if (!pskid) {
++				dev_err(ctrl->device, "no valid PSK found\n");
++				return -ENOKEY;
++			}
+ 		}
+ 	}
+ 
+@@ -1957,13 +1978,14 @@ static int __nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
+ {
+ 	int i, ret;
+ 
+-	if (nvme_tcp_tls(ctrl) && !ctrl->tls_key) {
++	if (nvme_tcp_tls_configured(ctrl) && !ctrl->tls_pskid) {
+ 		dev_err(ctrl->device, "no PSK negotiated\n");
+ 		return -ENOKEY;
+ 	}
++
+ 	for (i = 1; i < ctrl->queue_count; i++) {
+ 		ret = nvme_tcp_alloc_queue(ctrl, i,
+-				key_serial(ctrl->tls_key));
++				ctrl->tls_pskid);
+ 		if (ret)
+ 			goto out_free_queues;
+ 	}
+@@ -2144,6 +2166,11 @@ static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
+ 	if (remove)
+ 		nvme_unquiesce_admin_queue(ctrl);
+ 	nvme_tcp_destroy_admin_queue(ctrl, remove);
++	if (ctrl->tls_pskid) {
++		dev_dbg(ctrl->device, "Wipe negotiated TLS_PSK %08x\n",
++			ctrl->tls_pskid);
++		ctrl->tls_pskid = 0;
++	}
+ }
+ 
+ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index d669ce25b5f9c1..7e59283a44723c 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -8,6 +8,7 @@
+ #include <linux/logic_pio.h>
+ #include <linux/module.h>
+ #include <linux/of_address.h>
++#include <linux/overflow.h>
+ #include <linux/pci.h>
+ #include <linux/pci_regs.h>
+ #include <linux/sizes.h>
+@@ -1061,7 +1062,11 @@ static int __of_address_to_resource(struct device_node *dev, int index, int bar_
+ 	if (of_mmio_is_nonposted(dev))
+ 		flags |= IORESOURCE_MEM_NONPOSTED;
+ 
++	if (overflows_type(taddr, r->start))
++		return -EOVERFLOW;
+ 	r->start = taddr;
++	if (overflows_type(taddr + size - 1, r->end))
++		return -EOVERFLOW;
+ 	r->end = taddr + size - 1;
+ 	r->flags = flags;
+ 	r->name = name ? name : dev->full_name;
+diff --git a/drivers/of/irq.c b/drivers/of/irq.c
+index 8fd63100ba8f08..36351ad6115eb1 100644
+--- a/drivers/of/irq.c
++++ b/drivers/of/irq.c
+@@ -357,8 +357,8 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar
+ 	addr = of_get_property(device, "reg", &addr_len);
+ 
+ 	/* Prevent out-of-bounds read in case of longer interrupt parent address size */
+-	if (addr_len > (3 * sizeof(__be32)))
+-		addr_len = 3 * sizeof(__be32);
++	if (addr_len > sizeof(addr_buf))
++		addr_len = sizeof(addr_buf);
+ 	if (addr)
+ 		memcpy(addr_buf, addr, addr_len);
+ 
+@@ -716,8 +716,7 @@ struct irq_domain *of_msi_map_get_device_domain(struct device *dev, u32 id,
+  * @np: device node for @dev
+  * @token: bus type for this domain
+  *
+- * Parse the msi-parent property (both the simple and the complex
+- * versions), and returns the corresponding MSI domain.
++ * Parse the msi-parent property and returns the corresponding MSI domain.
+  *
+  * Returns: the MSI domain for this device (or NULL on failure).
+  */
+@@ -725,33 +724,14 @@ struct irq_domain *of_msi_get_domain(struct device *dev,
+ 				     struct device_node *np,
+ 				     enum irq_domain_bus_token token)
+ {
+-	struct device_node *msi_np;
++	struct of_phandle_iterator it;
+ 	struct irq_domain *d;
++	int err;
+ 
+-	/* Check for a single msi-parent property */
+-	msi_np = of_parse_phandle(np, "msi-parent", 0);
+-	if (msi_np && !of_property_read_bool(msi_np, "#msi-cells")) {
+-		d = irq_find_matching_host(msi_np, token);
+-		if (!d)
+-			of_node_put(msi_np);
+-		return d;
+-	}
+-
+-	if (token == DOMAIN_BUS_PLATFORM_MSI) {
+-		/* Check for the complex msi-parent version */
+-		struct of_phandle_args args;
+-		int index = 0;
+-
+-		while (!of_parse_phandle_with_args(np, "msi-parent",
+-						   "#msi-cells",
+-						   index, &args)) {
+-			d = irq_find_matching_host(args.np, token);
+-			if (d)
+-				return d;
+-
+-			of_node_put(args.np);
+-			index++;
+-		}
++	of_for_each_phandle(&it, err, np, "msi-parent", "#msi-cells", 0) {
++		d = irq_find_matching_host(it.node, token);
++		if (d)
++			return d;
+ 	}
+ 
+ 	return NULL;
+diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
+index 9100d82bfabc0d..3569050f9cf375 100644
+--- a/drivers/perf/arm_spe_pmu.c
++++ b/drivers/perf/arm_spe_pmu.c
+@@ -41,7 +41,7 @@
+ 
+ /*
+  * Cache if the event is allowed to trace Context information.
+- * This allows us to perform the check, i.e, perfmon_capable(),
++ * This allows us to perform the check, i.e, perf_allow_kernel(),
+  * in the context of the event owner, once, during the event_init().
+  */
+ #define SPE_PMU_HW_FLAGS_CX			0x00001
+@@ -50,7 +50,7 @@ static_assert((PERF_EVENT_FLAG_ARCH & SPE_PMU_HW_FLAGS_CX) == SPE_PMU_HW_FLAGS_C
+ 
+ static void set_spe_event_has_cx(struct perf_event *event)
+ {
+-	if (IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR) && perfmon_capable())
++	if (IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR) && !perf_allow_kernel(&event->attr))
+ 		event->hw.flags |= SPE_PMU_HW_FLAGS_CX;
+ }
+ 
+@@ -745,9 +745,8 @@ static int arm_spe_pmu_event_init(struct perf_event *event)
+ 
+ 	set_spe_event_has_cx(event);
+ 	reg = arm_spe_event_to_pmscr(event);
+-	if (!perfmon_capable() &&
+-	    (reg & (PMSCR_EL1_PA | PMSCR_EL1_PCT)))
+-		return -EACCES;
++	if (reg & (PMSCR_EL1_PA | PMSCR_EL1_PCT))
++		return perf_allow_kernel(&event->attr);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/perf/riscv_pmu_legacy.c b/drivers/perf/riscv_pmu_legacy.c
+index 04487ad7fba0b5..93c8e0fdb58985 100644
+--- a/drivers/perf/riscv_pmu_legacy.c
++++ b/drivers/perf/riscv_pmu_legacy.c
+@@ -22,13 +22,13 @@ static int pmu_legacy_ctr_get_idx(struct perf_event *event)
+ 	struct perf_event_attr *attr = &event->attr;
+ 
+ 	if (event->attr.type != PERF_TYPE_HARDWARE)
+-		return -EOPNOTSUPP;
++		return -ENOENT;
+ 	if (attr->config == PERF_COUNT_HW_CPU_CYCLES)
+ 		return RISCV_PMU_LEGACY_CYCLE;
+ 	else if (attr->config == PERF_COUNT_HW_INSTRUCTIONS)
+ 		return RISCV_PMU_LEGACY_INSTRET;
+ 	else
+-		return -EOPNOTSUPP;
++		return -ENOENT;
+ }
+ 
+ /* For legacy config & counter index are same */
+diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
+index 765bda7924f7bf..1a8fd76f14b704 100644
+--- a/drivers/perf/riscv_pmu_sbi.c
++++ b/drivers/perf/riscv_pmu_sbi.c
+@@ -305,7 +305,7 @@ static void pmu_sbi_check_event(struct sbi_pmu_event_data *edata)
+ 			  ret.value, 0x1, SBI_PMU_STOP_FLAG_RESET, 0, 0, 0);
+ 	} else if (ret.error == SBI_ERR_NOT_SUPPORTED) {
+ 		/* This event cannot be monitored by any counter */
+-		edata->event_idx = -EINVAL;
++		edata->event_idx = -ENOENT;
+ 	}
+ }
+ 
+@@ -539,7 +539,7 @@ static int pmu_sbi_event_map(struct perf_event *event, u64 *econfig)
+ 		}
+ 		break;
+ 	default:
+-		ret = -EINVAL;
++		ret = -ENOENT;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/platform/mellanox/mlxbf-pmc.c b/drivers/platform/mellanox/mlxbf-pmc.c
+index 4ed9c7fd2b62af..9d18dfca6a673b 100644
+--- a/drivers/platform/mellanox/mlxbf-pmc.c
++++ b/drivers/platform/mellanox/mlxbf-pmc.c
+@@ -1774,6 +1774,7 @@ static int mlxbf_pmc_init_perftype_counter(struct device *dev, unsigned int blk_
+ 
+ 	/* "event_list" sysfs to list events supported by the block */
+ 	attr = &pmc->block[blk_num].attr_event_list;
++	sysfs_attr_init(&attr->dev_attr.attr);
+ 	attr->dev_attr.attr.mode = 0444;
+ 	attr->dev_attr.show = mlxbf_pmc_event_list_show;
+ 	attr->nr = blk_num;
+@@ -1787,6 +1788,7 @@ static int mlxbf_pmc_init_perftype_counter(struct device *dev, unsigned int blk_
+ 	if (strstr(pmc->block_name[blk_num], "l3cache") ||
+ 	    ((pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE))) {
+ 		attr = &pmc->block[blk_num].attr_enable;
++		sysfs_attr_init(&attr->dev_attr.attr);
+ 		attr->dev_attr.attr.mode = 0644;
+ 		attr->dev_attr.show = mlxbf_pmc_enable_show;
+ 		attr->dev_attr.store = mlxbf_pmc_enable_store;
+@@ -1814,6 +1816,7 @@ static int mlxbf_pmc_init_perftype_counter(struct device *dev, unsigned int blk_
+ 	/* "eventX" and "counterX" sysfs to program and read counter values */
+ 	for (j = 0; j < pmc->block[blk_num].counters; ++j) {
+ 		attr = &pmc->block[blk_num].attr_counter[j];
++		sysfs_attr_init(&attr->dev_attr.attr);
+ 		attr->dev_attr.attr.mode = 0644;
+ 		attr->dev_attr.show = mlxbf_pmc_counter_show;
+ 		attr->dev_attr.store = mlxbf_pmc_counter_store;
+@@ -1826,6 +1829,7 @@ static int mlxbf_pmc_init_perftype_counter(struct device *dev, unsigned int blk_
+ 		attr = NULL;
+ 
+ 		attr = &pmc->block[blk_num].attr_event[j];
++		sysfs_attr_init(&attr->dev_attr.attr);
+ 		attr->dev_attr.attr.mode = 0644;
+ 		attr->dev_attr.show = mlxbf_pmc_event_show;
+ 		attr->dev_attr.store = mlxbf_pmc_event_store;
+@@ -1861,6 +1865,7 @@ static int mlxbf_pmc_init_perftype_reg(struct device *dev, unsigned int blk_num)
+ 	while (count > 0) {
+ 		--count;
+ 		attr = &pmc->block[blk_num].attr_event[count];
++		sysfs_attr_init(&attr->dev_attr.attr);
+ 		attr->dev_attr.attr.mode = 0644;
+ 		attr->dev_attr.show = mlxbf_pmc_counter_show;
+ 		attr->dev_attr.store = mlxbf_pmc_counter_store;
+diff --git a/drivers/platform/x86/amd/pmf/pmf-quirks.c b/drivers/platform/x86/amd/pmf/pmf-quirks.c
+index 48870ca52b4139..7cde5733b9cacf 100644
+--- a/drivers/platform/x86/amd/pmf/pmf-quirks.c
++++ b/drivers/platform/x86/amd/pmf/pmf-quirks.c
+@@ -37,6 +37,14 @@ static const struct dmi_system_id fwbug_list[] = {
+ 		},
+ 		.driver_data = &quirk_no_sps_bug,
+ 	},
++	{
++		.ident = "ASUS TUF Gaming A14",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "FA401W"),
++		},
++		.driver_data = &quirk_no_sps_bug,
++	},
+ 	{}
+ };
+ 
+diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+index 713c0d1fa85fb9..857cc8697942b5 100644
+--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
++++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+@@ -316,7 +316,9 @@ static struct pci_dev *_isst_if_get_pci_dev(int cpu, int bus_no, int dev, int fn
+ 	    cpu >= nr_cpu_ids || cpu >= num_possible_cpus())
+ 		return NULL;
+ 
+-	pkg_id = topology_physical_package_id(cpu);
++	pkg_id = topology_logical_package_id(cpu);
++	if (pkg_id >= topology_max_packages())
++		return NULL;
+ 
+ 	bus_number = isst_cpu_info[cpu].bus_info[bus_no];
+ 	if (bus_number < 0)
+diff --git a/drivers/platform/x86/lenovo-ymc.c b/drivers/platform/x86/lenovo-ymc.c
+index e0bbd6a14a89cb..bd9f95404c7cb0 100644
+--- a/drivers/platform/x86/lenovo-ymc.c
++++ b/drivers/platform/x86/lenovo-ymc.c
+@@ -43,6 +43,8 @@ struct lenovo_ymc_private {
+ };
+ 
+ static const struct key_entry lenovo_ymc_keymap[] = {
++	/* Ignore the uninitialized state */
++	{ KE_IGNORE, 0x00 },
+ 	/* Laptop */
+ 	{ KE_SW, 0x01, { .sw = { SW_TABLET_MODE, 0 } } },
+ 	/* Tablet */
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index f74af0a689f20c..0a39f68c641d15 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -840,6 +840,21 @@ static const struct ts_dmi_data rwc_nanote_p8_data = {
+ 	.properties = rwc_nanote_p8_props,
+ };
+ 
++static const struct property_entry rwc_nanote_next_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 5),
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 5),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1785),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1145),
++	PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-rwc-nanote-next.fw"),
++	{ }
++};
++
++static const struct ts_dmi_data rwc_nanote_next_data = {
++	.acpi_name = "MSSL1680:00",
++	.properties = rwc_nanote_next_props,
++};
++
+ static const struct property_entry schneider_sct101ctm_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-size-x", 1715),
+ 	PROPERTY_ENTRY_U32("touchscreen-size-y", 1140),
+@@ -1589,6 +1604,17 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_SKU, "0001")
+ 		},
+ 	},
++	{
++		/* RWC NANOTE NEXT */
++		.driver_data = (void *)&rwc_nanote_next_data,
++		.matches = {
++			DMI_MATCH(DMI_PRODUCT_NAME, "To be filled by O.E.M."),
++			DMI_MATCH(DMI_BOARD_NAME, "To be filled by O.E.M."),
++			DMI_MATCH(DMI_BOARD_VENDOR, "To be filled by O.E.M."),
++			/* Above matches are too generic, add bios-version match */
++			DMI_MATCH(DMI_BIOS_VERSION, "S8A70R100-V005"),
++		},
++	},
+ 	{
+ 		/* Schneider SCT101CTM */
+ 		.driver_data = (void *)&schneider_sct101ctm_data,
+diff --git a/drivers/platform/x86/x86-android-tablets/core.c b/drivers/platform/x86/x86-android-tablets/core.c
+index 919ef447122958..2d62715359d811 100644
+--- a/drivers/platform/x86/x86-android-tablets/core.c
++++ b/drivers/platform/x86/x86-android-tablets/core.c
+@@ -390,8 +390,9 @@ static __init int x86_android_tablet_probe(struct platform_device *pdev)
+ 	for (i = 0; i < pdev_count; i++) {
+ 		pdevs[i] = platform_device_register_full(&dev_info->pdev_info[i]);
+ 		if (IS_ERR(pdevs[i])) {
++			ret = PTR_ERR(pdevs[i]);
+ 			x86_android_tablet_remove(pdev);
+-			return PTR_ERR(pdevs[i]);
++			return ret;
+ 		}
+ 	}
+ 
+@@ -443,8 +444,9 @@ static __init int x86_android_tablet_probe(struct platform_device *pdev)
+ 								  PLATFORM_DEVID_AUTO,
+ 								  &pdata, sizeof(pdata));
+ 		if (IS_ERR(pdevs[pdev_count])) {
++			ret = PTR_ERR(pdevs[pdev_count]);
+ 			x86_android_tablet_remove(pdev);
+-			return PTR_ERR(pdevs[pdev_count]);
++			return ret;
+ 		}
+ 		pdev_count++;
+ 	}
+diff --git a/drivers/platform/x86/x86-android-tablets/other.c b/drivers/platform/x86/x86-android-tablets/other.c
+index eb0e55c69dfedb..2549c348c8825a 100644
+--- a/drivers/platform/x86/x86-android-tablets/other.c
++++ b/drivers/platform/x86/x86-android-tablets/other.c
+@@ -670,7 +670,7 @@ static const struct software_node *ktd2026_node_group[] = {
+  * is controlled by the "pwm_soc_lpss_2" PWM output.
+  */
+ #define XIAOMI_MIPAD2_LED_PERIOD_NS		19200
+-#define XIAOMI_MIPAD2_LED_DEFAULT_DUTY		 6000 /* From Android kernel */
++#define XIAOMI_MIPAD2_LED_MAX_DUTY_NS		 6000 /* From Android kernel */
+ 
+ static struct pwm_device *xiaomi_mipad2_led_pwm;
+ 
+@@ -679,7 +679,7 @@ static int xiaomi_mipad2_brightness_set(struct led_classdev *led_cdev,
+ {
+ 	struct pwm_state state = {
+ 		.period = XIAOMI_MIPAD2_LED_PERIOD_NS,
+-		.duty_cycle = val,
++		.duty_cycle = XIAOMI_MIPAD2_LED_MAX_DUTY_NS * val / LED_FULL,
+ 		/* Always set PWM enabled to avoid the pin floating */
+ 		.enabled = true,
+ 	};
+@@ -701,11 +701,11 @@ static int __init xiaomi_mipad2_init(struct device *dev)
+ 		return -ENOMEM;
+ 
+ 	led_cdev->name = "mipad2:white:touch-buttons-backlight";
+-	led_cdev->max_brightness = XIAOMI_MIPAD2_LED_PERIOD_NS;
+-	/* "input-events" trigger uses blink_brightness */
+-	led_cdev->blink_brightness = XIAOMI_MIPAD2_LED_DEFAULT_DUTY;
++	led_cdev->max_brightness = LED_FULL;
+ 	led_cdev->default_trigger = "input-events";
+ 	led_cdev->brightness_set_blocking = xiaomi_mipad2_brightness_set;
++	/* Turn LED off during suspend */
++	led_cdev->flags = LED_CORE_SUSPENDRESUME;
+ 
+ 	ret = devm_led_classdev_register(dev, led_cdev);
+ 	if (ret)
+diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
+index 0ab6008e863e82..4134a8344d2dab 100644
+--- a/drivers/pmdomain/core.c
++++ b/drivers/pmdomain/core.c
+@@ -1694,7 +1694,6 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
+ 	genpd_lock(genpd);
+ 
+ 	genpd_set_cpumask(genpd, gpd_data->cpu);
+-	dev_pm_domain_set(dev, &genpd->domain);
+ 
+ 	genpd->device_count++;
+ 	if (gd)
+@@ -1703,6 +1702,7 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
+ 	list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
+ 
+ 	genpd_unlock(genpd);
++	dev_pm_domain_set(dev, &genpd->domain);
+  out:
+ 	if (ret)
+ 		genpd_free_dev_data(dev, gpd_data);
+@@ -1759,12 +1759,13 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
+ 		genpd->gd->max_off_time_changed = true;
+ 
+ 	genpd_clear_cpumask(genpd, gpd_data->cpu);
+-	dev_pm_domain_set(dev, NULL);
+ 
+ 	list_del_init(&pdd->list_node);
+ 
+ 	genpd_unlock(genpd);
+ 
++	dev_pm_domain_set(dev, NULL);
++
+ 	if (genpd->detach_dev)
+ 		genpd->detach_dev(genpd, dev);
+ 
+diff --git a/drivers/power/reset/brcmstb-reboot.c b/drivers/power/reset/brcmstb-reboot.c
+index 0f2944dc935516..a04713f191a112 100644
+--- a/drivers/power/reset/brcmstb-reboot.c
++++ b/drivers/power/reset/brcmstb-reboot.c
+@@ -62,9 +62,6 @@ static int brcmstb_restart_handler(struct notifier_block *this,
+ 		return NOTIFY_DONE;
+ 	}
+ 
+-	while (1)
+-		;
+-
+ 	return NOTIFY_DONE;
+ }
+ 
+diff --git a/drivers/power/supply/power_supply_hwmon.c b/drivers/power/supply/power_supply_hwmon.c
+index c97893d4c25eb1..6465b5e4a3879c 100644
+--- a/drivers/power/supply/power_supply_hwmon.c
++++ b/drivers/power/supply/power_supply_hwmon.c
+@@ -299,7 +299,8 @@ static const struct hwmon_channel_info * const power_supply_hwmon_info[] = {
+ 			   HWMON_T_INPUT     |
+ 			   HWMON_T_MAX       |
+ 			   HWMON_T_MIN       |
+-			   HWMON_T_MIN_ALARM,
++			   HWMON_T_MIN_ALARM |
++			   HWMON_T_MAX_ALARM,
+ 
+ 			   HWMON_T_LABEL     |
+ 			   HWMON_T_INPUT     |
+diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+index 39a47540c59006..2992fd4eca6486 100644
+--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+@@ -194,6 +194,10 @@ static void k3_r5_rproc_mbox_callback(struct mbox_client *client, void *data)
+ 	const char *name = kproc->rproc->name;
+ 	u32 msg = omap_mbox_message(data);
+ 
++	/* Do not forward message from a detached core */
++	if (kproc->rproc->state == RPROC_DETACHED)
++		return;
++
+ 	dev_dbg(dev, "mbox msg: 0x%x\n", msg);
+ 
+ 	switch (msg) {
+@@ -229,6 +233,10 @@ static void k3_r5_rproc_kick(struct rproc *rproc, int vqid)
+ 	mbox_msg_t msg = (mbox_msg_t)vqid;
+ 	int ret;
+ 
++	/* Do not forward message to a detached core */
++	if (kproc->rproc->state == RPROC_DETACHED)
++		return;
++
+ 	/* send the index of the triggered virtqueue in the mailbox payload */
+ 	ret = mbox_send_message(kproc->mbox, (void *)msg);
+ 	if (ret < 0)
+@@ -399,12 +407,9 @@ static int k3_r5_rproc_request_mbox(struct rproc *rproc)
+ 	client->knows_txdone = false;
+ 
+ 	kproc->mbox = mbox_request_channel(client, 0);
+-	if (IS_ERR(kproc->mbox)) {
+-		ret = -EBUSY;
+-		dev_err(dev, "mbox_request_channel failed: %ld\n",
+-			PTR_ERR(kproc->mbox));
+-		return ret;
+-	}
++	if (IS_ERR(kproc->mbox))
++		return dev_err_probe(dev, PTR_ERR(kproc->mbox),
++				     "mbox_request_channel failed\n");
+ 
+ 	/*
+ 	 * Ping the remote processor, this is only for sanity-sake for now;
+@@ -464,8 +469,6 @@ static int k3_r5_rproc_prepare(struct rproc *rproc)
+ 			ret);
+ 		return ret;
+ 	}
+-	core->released_from_reset = true;
+-	wake_up_interruptible(&cluster->core_transition);
+ 
+ 	/*
+ 	 * Newer IP revisions like on J7200 SoCs support h/w auto-initialization
+@@ -552,10 +555,6 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ 	u32 boot_addr;
+ 	int ret;
+ 
+-	ret = k3_r5_rproc_request_mbox(rproc);
+-	if (ret)
+-		return ret;
+-
+ 	boot_addr = rproc->bootaddr;
+ 	/* TODO: add boot_addr sanity checking */
+ 	dev_dbg(dev, "booting R5F core using boot addr = 0x%x\n", boot_addr);
+@@ -564,7 +563,7 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ 	core = kproc->core;
+ 	ret = ti_sci_proc_set_config(core->tsp, boot_addr, 0, 0);
+ 	if (ret)
+-		goto put_mbox;
++		return ret;
+ 
+ 	/* unhalt/run all applicable cores */
+ 	if (cluster->mode == CLUSTER_MODE_LOCKSTEP) {
+@@ -580,13 +579,15 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ 		if (core != core0 && core0->rproc->state == RPROC_OFFLINE) {
+ 			dev_err(dev, "%s: can not start core 1 before core 0\n",
+ 				__func__);
+-			ret = -EPERM;
+-			goto put_mbox;
++			return -EPERM;
+ 		}
+ 
+ 		ret = k3_r5_core_run(core);
+ 		if (ret)
+-			goto put_mbox;
++			return ret;
++
++		core->released_from_reset = true;
++		wake_up_interruptible(&cluster->core_transition);
+ 	}
+ 
+ 	return 0;
+@@ -596,8 +597,6 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ 		if (k3_r5_core_halt(core))
+ 			dev_warn(core->dev, "core halt back failed\n");
+ 	}
+-put_mbox:
+-	mbox_free_channel(kproc->mbox);
+ 	return ret;
+ }
+ 
+@@ -658,8 +657,6 @@ static int k3_r5_rproc_stop(struct rproc *rproc)
+ 			goto out;
+ 	}
+ 
+-	mbox_free_channel(kproc->mbox);
+-
+ 	return 0;
+ 
+ unroll_core_halt:
+@@ -674,42 +671,22 @@ static int k3_r5_rproc_stop(struct rproc *rproc)
+ /*
+  * Attach to a running R5F remote processor (IPC-only mode)
+  *
+- * The R5F attach callback only needs to request the mailbox, the remote
+- * processor is already booted, so there is no need to issue any TI-SCI
+- * commands to boot the R5F cores in IPC-only mode. This callback is invoked
+- * only in IPC-only mode.
++ * The R5F attach callback is a NOP. The remote processor is already booted, and
++ * all required resources have been acquired during probe routine, so there is
++ * no need to issue any TI-SCI commands to boot the R5F cores in IPC-only mode.
++ * This callback is invoked only in IPC-only mode and exists because
++ * rproc_validate() checks for its existence.
+  */
+-static int k3_r5_rproc_attach(struct rproc *rproc)
+-{
+-	struct k3_r5_rproc *kproc = rproc->priv;
+-	struct device *dev = kproc->dev;
+-	int ret;
+-
+-	ret = k3_r5_rproc_request_mbox(rproc);
+-	if (ret)
+-		return ret;
+-
+-	dev_info(dev, "R5F core initialized in IPC-only mode\n");
+-	return 0;
+-}
++static int k3_r5_rproc_attach(struct rproc *rproc) { return 0; }
+ 
+ /*
+  * Detach from a running R5F remote processor (IPC-only mode)
+  *
+- * The R5F detach callback performs the opposite operation to attach callback
+- * and only needs to release the mailbox, the R5F cores are not stopped and
+- * will be left in booted state in IPC-only mode. This callback is invoked
+- * only in IPC-only mode.
++ * The R5F detach callback is a NOP. The R5F cores are not stopped and will be
++ * left in booted state in IPC-only mode. This callback is invoked only in
++ * IPC-only mode and exists for sanity's sake.
+  */
+-static int k3_r5_rproc_detach(struct rproc *rproc)
+-{
+-	struct k3_r5_rproc *kproc = rproc->priv;
+-	struct device *dev = kproc->dev;
+-
+-	mbox_free_channel(kproc->mbox);
+-	dev_info(dev, "R5F core deinitialized in IPC-only mode\n");
+-	return 0;
+-}
++static int k3_r5_rproc_detach(struct rproc *rproc) { return 0; }
+ 
+ /*
+  * This function implements the .get_loaded_rsc_table() callback and is used
+@@ -1278,6 +1255,10 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev)
+ 		kproc->rproc = rproc;
+ 		core->rproc = rproc;
+ 
++		ret = k3_r5_rproc_request_mbox(rproc);
++		if (ret)
++			return ret;
++
+ 		ret = k3_r5_rproc_configure_mode(kproc);
+ 		if (ret < 0)
+ 			goto err_config;
+@@ -1332,7 +1313,7 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev)
+ 			dev_err(dev,
+ 				"Timed out waiting for %s core to power up!\n",
+ 				rproc->name);
+-			return ret;
++			goto err_powerup;
+ 		}
+ 	}
+ 
+@@ -1348,6 +1329,7 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++err_powerup:
+ 	rproc_del(rproc);
+ err_add:
+ 	k3_r5_reserved_mem_exit(kproc);
+@@ -1395,6 +1377,8 @@ static void k3_r5_cluster_rproc_exit(void *data)
+ 			}
+ 		}
+ 
++		mbox_free_channel(kproc->mbox);
++
+ 		rproc_del(rproc);
+ 
+ 		k3_r5_reserved_mem_exit(kproc);
+diff --git a/drivers/rtc/rtc-at91sam9.c b/drivers/rtc/rtc-at91sam9.c
+index f93bee96e36233..993c0878fb6606 100644
+--- a/drivers/rtc/rtc-at91sam9.c
++++ b/drivers/rtc/rtc-at91sam9.c
+@@ -368,6 +368,7 @@ static int at91_rtc_probe(struct platform_device *pdev)
+ 		return ret;
+ 
+ 	rtc->gpbr = syscon_node_to_regmap(args.np);
++	of_node_put(args.np);
+ 	rtc->gpbr_offset = args.args[0];
+ 	if (IS_ERR(rtc->gpbr)) {
+ 		dev_err(&pdev->dev, "failed to retrieve gpbr regmap, aborting.\n");
+diff --git a/drivers/scsi/NCR5380.c b/drivers/scsi/NCR5380.c
+index 00e245173320c3..4fcb73b727aa5d 100644
+--- a/drivers/scsi/NCR5380.c
++++ b/drivers/scsi/NCR5380.c
+@@ -1807,8 +1807,11 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
+ 				return;
+ 			case PHASE_MSGIN:
+ 				len = 1;
++				tmp = 0xff;
+ 				data = &tmp;
+ 				NCR5380_transfer_pio(instance, &phase, &len, &data, 0);
++				if (tmp == 0xff)
++					break;
+ 				ncmd->message = tmp;
+ 
+ 				switch (tmp) {
+@@ -1996,6 +1999,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
+ 				break;
+ 			case PHASE_STATIN:
+ 				len = 1;
++				tmp = ncmd->status;
+ 				data = &tmp;
+ 				NCR5380_transfer_pio(instance, &phase, &len, &data, 0);
+ 				ncmd->status = tmp;
+diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h
+index 7d5a155073c627..9b66fa29fb05ca 100644
+--- a/drivers/scsi/aacraid/aacraid.h
++++ b/drivers/scsi/aacraid/aacraid.h
+@@ -2029,8 +2029,8 @@ struct aac_srb_reply
+ };
+ 
+ struct aac_srb_unit {
+-	struct aac_srb		srb;
+ 	struct aac_srb_reply	srb_reply;
++	struct aac_srb		srb;
+ };
+ 
+ /*
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 7c147d6ea8a8ff..e5a9c5a323f8be 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -306,6 +306,14 @@ struct lpfc_stats {
+ 
+ struct lpfc_hba;
+ 
++/* Data structure to keep withheld FLOGI_ACC information */
++struct lpfc_defer_flogi_acc {
++	bool flag;
++	u16 rx_id;
++	u16 ox_id;
++	struct lpfc_nodelist *ndlp;
++
++};
+ 
+ #define LPFC_VMID_TIMER   300	/* timer interval in seconds */
+ 
+@@ -1430,9 +1438,7 @@ struct lpfc_hba {
+ 	uint16_t vlan_id;
+ 	struct list_head fcf_conn_rec_list;
+ 
+-	bool defer_flogi_acc_flag;
+-	uint16_t defer_flogi_acc_rx_id;
+-	uint16_t defer_flogi_acc_ox_id;
++	struct lpfc_defer_flogi_acc defer_flogi_acc;
+ 
+ 	spinlock_t ct_ev_lock; /* synchronize access to ct_ev_waiters */
+ 	struct list_head ct_ev_waiters;
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 445cb6c2e80f54..9c8a6d2a290490 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -1390,7 +1390,7 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ 	phba->link_flag &= ~LS_EXTERNAL_LOOPBACK;
+ 
+ 	/* Check for a deferred FLOGI ACC condition */
+-	if (phba->defer_flogi_acc_flag) {
++	if (phba->defer_flogi_acc.flag) {
+ 		/* lookup ndlp for received FLOGI */
+ 		ndlp = lpfc_findnode_did(vport, 0);
+ 		if (!ndlp)
+@@ -1404,34 +1404,38 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ 		if (phba->sli_rev == LPFC_SLI_REV4) {
+ 			bf_set(wqe_ctxt_tag,
+ 			       &defer_flogi_acc.wqe.xmit_els_rsp.wqe_com,
+-			       phba->defer_flogi_acc_rx_id);
++			       phba->defer_flogi_acc.rx_id);
+ 			bf_set(wqe_rcvoxid,
+ 			       &defer_flogi_acc.wqe.xmit_els_rsp.wqe_com,
+-			       phba->defer_flogi_acc_ox_id);
++			       phba->defer_flogi_acc.ox_id);
+ 		} else {
+ 			icmd = &defer_flogi_acc.iocb;
+-			icmd->ulpContext = phba->defer_flogi_acc_rx_id;
++			icmd->ulpContext = phba->defer_flogi_acc.rx_id;
+ 			icmd->unsli3.rcvsli3.ox_id =
+-				phba->defer_flogi_acc_ox_id;
++				phba->defer_flogi_acc.ox_id;
+ 		}
+ 
+ 		lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ 				 "3354 Xmit deferred FLOGI ACC: rx_id: x%x,"
+ 				 " ox_id: x%x, hba_flag x%lx\n",
+-				 phba->defer_flogi_acc_rx_id,
+-				 phba->defer_flogi_acc_ox_id, phba->hba_flag);
++				 phba->defer_flogi_acc.rx_id,
++				 phba->defer_flogi_acc.ox_id, phba->hba_flag);
+ 
+ 		/* Send deferred FLOGI ACC */
+ 		lpfc_els_rsp_acc(vport, ELS_CMD_FLOGI, &defer_flogi_acc,
+ 				 ndlp, NULL);
+ 
+-		phba->defer_flogi_acc_flag = false;
+-		vport->fc_myDID = did;
++		phba->defer_flogi_acc.flag = false;
+ 
+-		/* Decrement ndlp reference count to indicate the node can be
+-		 * released when other references are removed.
++		/* Decrement the held ndlp that was incremented when the
++		 * deferred flogi acc flag was set.
+ 		 */
+-		lpfc_nlp_put(ndlp);
++		if (phba->defer_flogi_acc.ndlp) {
++			lpfc_nlp_put(phba->defer_flogi_acc.ndlp);
++			phba->defer_flogi_acc.ndlp = NULL;
++		}
++
++		vport->fc_myDID = did;
+ 	}
+ 
+ 	return 0;
+@@ -5240,9 +5244,10 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 	/* ACC to LOGO completes to NPort <nlp_DID> */
+ 	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ 			 "0109 ACC to LOGO completes to NPort x%x refcnt %d "
+-			 "Data: x%x x%x x%x\n",
+-			 ndlp->nlp_DID, kref_read(&ndlp->kref), ndlp->nlp_flag,
+-			 ndlp->nlp_state, ndlp->nlp_rpi);
++			 "last els x%x Data: x%x x%x x%x\n",
++			 ndlp->nlp_DID, kref_read(&ndlp->kref),
++			 ndlp->nlp_last_elscmd, ndlp->nlp_flag, ndlp->nlp_state,
++			 ndlp->nlp_rpi);
+ 
+ 	/* This clause allows the LOGO ACC to complete and free resources
+ 	 * for the Fabric Domain Controller.  It does deliberately skip
+@@ -5254,18 +5259,22 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 		goto out;
+ 
+ 	if (ndlp->nlp_state == NLP_STE_NPR_NODE) {
+-		/* If PLOGI is being retried, PLOGI completion will cleanup the
+-		 * node. The NLP_NPR_2B_DISC flag needs to be retained to make
+-		 * progress on nodes discovered from last RSCN.
+-		 */
+-		if ((ndlp->nlp_flag & NLP_DELAY_TMO) &&
+-		    (ndlp->nlp_last_elscmd == ELS_CMD_PLOGI))
+-			goto out;
+-
+ 		if (ndlp->nlp_flag & NLP_RPI_REGISTERED)
+ 			lpfc_unreg_rpi(vport, ndlp);
+ 
++		/* If came from PRLO, then PRLO_ACC is done.
++		 * Start rediscovery now.
++		 */
++		if (ndlp->nlp_last_elscmd == ELS_CMD_PRLO) {
++			spin_lock_irq(&ndlp->lock);
++			ndlp->nlp_flag |= NLP_NPR_2B_DISC;
++			spin_unlock_irq(&ndlp->lock);
++			ndlp->nlp_prev_state = ndlp->nlp_state;
++			lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE);
++			lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
++		}
+ 	}
++
+  out:
+ 	/*
+ 	 * The driver received a LOGO from the rport and has ACK'd it.
+@@ -8454,9 +8463,9 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ 
+ 	/* Defer ACC response until AFTER we issue a FLOGI */
+ 	if (!test_bit(HBA_FLOGI_ISSUED, &phba->hba_flag)) {
+-		phba->defer_flogi_acc_rx_id = bf_get(wqe_ctxt_tag,
++		phba->defer_flogi_acc.rx_id = bf_get(wqe_ctxt_tag,
+ 						     &wqe->xmit_els_rsp.wqe_com);
+-		phba->defer_flogi_acc_ox_id = bf_get(wqe_rcvoxid,
++		phba->defer_flogi_acc.ox_id = bf_get(wqe_rcvoxid,
+ 						     &wqe->xmit_els_rsp.wqe_com);
+ 
+ 		vport->fc_myDID = did;
+@@ -8464,11 +8473,17 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ 		lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ 				 "3344 Deferring FLOGI ACC: rx_id: x%x,"
+ 				 " ox_id: x%x, hba_flag x%lx\n",
+-				 phba->defer_flogi_acc_rx_id,
+-				 phba->defer_flogi_acc_ox_id, phba->hba_flag);
++				 phba->defer_flogi_acc.rx_id,
++				 phba->defer_flogi_acc.ox_id, phba->hba_flag);
+ 
+-		phba->defer_flogi_acc_flag = true;
++		phba->defer_flogi_acc.flag = true;
+ 
++		/* This nlp_get is paired with nlp_puts that reset the
++		 * defer_flogi_acc.flag back to false.  We need to retain
++		 * a kref on the ndlp until the deferred FLOGI ACC is
++		 * processed or cancelled.
++		 */
++		phba->defer_flogi_acc.ndlp = lpfc_nlp_get(ndlp);
+ 		return 0;
+ 	}
+ 
+@@ -10504,7 +10519,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ 
+ 		lpfc_els_rcv_flogi(vport, elsiocb, ndlp);
+ 		/* retain node if our response is deferred */
+-		if (phba->defer_flogi_acc_flag)
++		if (phba->defer_flogi_acc.flag)
+ 			break;
+ 		if (newnode)
+ 			lpfc_disc_state_machine(vport, ndlp, NULL,
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 13b08c85440fec..e553fab869de9b 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -175,7 +175,8 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 			 ndlp->nlp_state, ndlp->fc4_xpt_flags);
+ 
+ 	/* Don't schedule a worker thread event if the vport is going down. */
+-	if (test_bit(FC_UNLOADING, &vport->load_flag)) {
++	if (test_bit(FC_UNLOADING, &vport->load_flag) ||
++	    !test_bit(HBA_SETUP, &phba->hba_flag)) {
+ 		spin_lock_irqsave(&ndlp->lock, iflags);
+ 		ndlp->rport = NULL;
+ 
+@@ -1246,7 +1247,14 @@ lpfc_linkdown(struct lpfc_hba *phba)
+ 	lpfc_scsi_dev_block(phba);
+ 	offline = pci_channel_offline(phba->pcidev);
+ 
+-	phba->defer_flogi_acc_flag = false;
++	/* Decrement the held ndlp if there is a deferred flogi acc */
++	if (phba->defer_flogi_acc.flag) {
++		if (phba->defer_flogi_acc.ndlp) {
++			lpfc_nlp_put(phba->defer_flogi_acc.ndlp);
++			phba->defer_flogi_acc.ndlp = NULL;
++		}
++	}
++	phba->defer_flogi_acc.flag = false;
+ 
+ 	/* Clear external loopback plug detected flag */
+ 	phba->link_flag &= ~LS_EXTERNAL_LOOPBACK;
+@@ -1368,7 +1376,7 @@ lpfc_linkup_port(struct lpfc_vport *vport)
+ 		(vport != phba->pport))
+ 		return;
+ 
+-	if (phba->defer_flogi_acc_flag) {
++	if (phba->defer_flogi_acc.flag) {
+ 		clear_bit(FC_ABORT_DISCOVERY, &vport->fc_flag);
+ 		clear_bit(FC_RSCN_MODE, &vport->fc_flag);
+ 		clear_bit(FC_NLP_MORE, &vport->fc_flag);
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index f6a53446e57f9d..4574716c8764fb 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -2652,8 +2652,26 @@ lpfc_rcv_prlo_mapped_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ 	/* flush the target */
+ 	lpfc_sli_abort_iocb(vport, ndlp->nlp_sid, 0, LPFC_CTX_TGT);
+ 
+-	/* Treat like rcv logo */
+-	lpfc_rcv_logo(vport, ndlp, cmdiocb, ELS_CMD_PRLO);
++	/* Send PRLO_ACC */
++	spin_lock_irq(&ndlp->lock);
++	ndlp->nlp_flag |= NLP_LOGO_ACC;
++	spin_unlock_irq(&ndlp->lock);
++	lpfc_els_rsp_acc(vport, ELS_CMD_PRLO, cmdiocb, ndlp, NULL);
++
++	/* Save ELS_CMD_PRLO as the last elscmd and then set to NPR.
++	 * lpfc_cmpl_els_logo_acc is expected to restart discovery.
++	 */
++	ndlp->nlp_last_elscmd = ELS_CMD_PRLO;
++	ndlp->nlp_prev_state = ndlp->nlp_state;
++
++	lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE | LOG_ELS | LOG_DISCOVERY,
++			 "3422 DID x%06x nflag x%x lastels x%x ref cnt %u\n",
++			 ndlp->nlp_DID, ndlp->nlp_flag,
++			 ndlp->nlp_last_elscmd,
++			 kref_read(&ndlp->kref));
++
++	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
++
+ 	return ndlp->nlp_state;
+ }
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 9f0b59672e1915..0eaede8275dac4 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -5555,11 +5555,20 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
+ 
+ 	iocb = &lpfc_cmd->cur_iocbq;
+ 	if (phba->sli_rev == LPFC_SLI_REV4) {
+-		pring_s4 = phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq->pring;
+-		if (!pring_s4) {
++		/* if the io_wq & pring are gone, the port was reset. */
++		if (!phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq ||
++		    !phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq->pring) {
++			lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
++					 "2877 SCSI Layer I/O Abort Request "
++					 "IO CMPL Status x%x ID %d LUN %llu "
++					 "HBA_SETUP %d\n", FAILED,
++					 cmnd->device->id,
++					 (u64)cmnd->device->lun,
++					 test_bit(HBA_SETUP, &phba->hba_flag));
+ 			ret = FAILED;
+ 			goto out_unlock_hba;
+ 		}
++		pring_s4 = phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq->pring;
+ 		spin_lock(&pring_s4->ring_lock);
+ 	}
+ 	/* the command is in process of being cancelled */
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 3e55d5edd60abc..c6fcaeeb529454 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -4687,6 +4687,17 @@ lpfc_sli_flush_io_rings(struct lpfc_hba *phba)
+ 	/* Look on all the FCP Rings for the iotag */
+ 	if (phba->sli_rev >= LPFC_SLI_REV4) {
+ 		for (i = 0; i < phba->cfg_hdw_queue; i++) {
++			if (!phba->sli4_hba.hdwq ||
++			    !phba->sli4_hba.hdwq[i].io_wq) {
++				lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
++						"7777 hdwq's deleted %lx "
++						"%lx %x %x\n",
++						phba->pport->load_flag,
++						phba->hba_flag,
++						phba->link_state,
++						phba->sli.sli_flag);
++				return;
++			}
+ 			pring = phba->sli4_hba.hdwq[i].io_wq->pring;
+ 
+ 			spin_lock_irq(&pring->ring_lock);
+diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
+index 1e63cb6cd8e327..33e1eba62ca12c 100644
+--- a/drivers/scsi/pm8001/pm8001_init.c
++++ b/drivers/scsi/pm8001/pm8001_init.c
+@@ -100,10 +100,12 @@ static void pm8001_map_queues(struct Scsi_Host *shost)
+ 	struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;
+ 	struct blk_mq_queue_map *qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
+ 
+-	if (pm8001_ha->number_of_intr > 1)
++	if (pm8001_ha->number_of_intr > 1) {
+ 		blk_mq_pci_map_queues(qmap, pm8001_ha->pdev, 1);
++		return;
++	}
+ 
+-	return blk_mq_map_queues(qmap);
++	blk_mq_map_queues(qmap);
+ }
+ 
+ /*
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index c1524fb334eb5c..4230714e5f3a12 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -5917,7 +5917,7 @@ static bool pqi_is_parity_write_stream(struct pqi_ctrl_info *ctrl_info,
+ 	int rc;
+ 	struct pqi_scsi_dev *device;
+ 	struct pqi_stream_data *pqi_stream_data;
+-	struct pqi_scsi_dev_raid_map_data rmd;
++	struct pqi_scsi_dev_raid_map_data rmd = { 0 };
+ 
+ 	if (!ctrl_info->enable_stream_detection)
+ 		return false;
+@@ -9428,6 +9428,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x152d, 0x8a37)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x193d, 0x0462)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x193d, 0x1104)
+@@ -9456,6 +9460,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x193d, 0x110b)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x193d, 0x1110)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x193d, 0x8460)
+@@ -9464,6 +9472,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x193d, 0x8461)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x193d, 0x8462)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x193d, 0xc460)
+@@ -9572,6 +9584,14 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x1bd4, 0x0089)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x00a1)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1f3a, 0x0104)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x19e5, 0xd227)
+@@ -10164,6 +10184,110 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x1137, 0x02fa)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1137, 0x02fe)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1137, 0x02ff)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1137, 0x0300)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0045)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0046)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0047)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0048)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x004a)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x004b)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x004c)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x004f)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0051)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0052)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0053)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0054)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x006b)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x006c)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x006d)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x006f)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0070)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0071)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0072)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0086)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0087)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0088)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x0089)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 				0x1e93, 0x1000)
+@@ -10248,6 +10372,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       0x1f51, 0x1045)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1ff9, 0x00a3)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_ANY_ID, PCI_ANY_ID)
+diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
+index 0d8ce1a92168c4..d50bad3a2ce92b 100644
+--- a/drivers/scsi/st.c
++++ b/drivers/scsi/st.c
+@@ -834,6 +834,9 @@ static int flush_buffer(struct scsi_tape *STp, int seek_next)
+ 	int backspace, result;
+ 	struct st_partstat *STps;
+ 
++	if (STp->ready != ST_READY)
++		return 0;
++
+ 	/*
+ 	 * If there was a bus reset, block further access
+ 	 * to this device.
+@@ -841,8 +844,6 @@ static int flush_buffer(struct scsi_tape *STp, int seek_next)
+ 	if (STp->pos_unknown)
+ 		return (-EIO);
+ 
+-	if (STp->ready != ST_READY)
+-		return 0;
+ 	STps = &(STp->ps[STp->partition]);
+ 	if (STps->rw == ST_WRITING)	/* Writing */
+ 		return st_flush_write_buffer(STp);
+diff --git a/drivers/spi/spi-bcm63xx.c b/drivers/spi/spi-bcm63xx.c
+index 2fb8d4e55c7773..ef3a7226db125c 100644
+--- a/drivers/spi/spi-bcm63xx.c
++++ b/drivers/spi/spi-bcm63xx.c
+@@ -466,6 +466,7 @@ static const struct platform_device_id bcm63xx_spi_dev_match[] = {
+ 	{
+ 	},
+ };
++MODULE_DEVICE_TABLE(platform, bcm63xx_spi_dev_match);
+ 
+ static const struct of_device_id bcm63xx_spi_of_match[] = {
+ 	{ .compatible = "brcm,bcm6348-spi", .data = &bcm6348_spi_reg_offsets },
+@@ -583,13 +584,15 @@ static int bcm63xx_spi_probe(struct platform_device *pdev)
+ 
+ 	bcm_spi_writeb(bs, SPI_INTR_CLEAR_ALL, SPI_INT_STATUS);
+ 
+-	pm_runtime_enable(&pdev->dev);
++	ret = devm_pm_runtime_enable(&pdev->dev);
++	if (ret)
++		goto out_clk_disable;
+ 
+ 	/* register and we are done */
+ 	ret = devm_spi_register_controller(dev, host);
+ 	if (ret) {
+ 		dev_err(dev, "spi register failed\n");
+-		goto out_pm_disable;
++		goto out_clk_disable;
+ 	}
+ 
+ 	dev_info(dev, "at %pr (irq %d, FIFOs size %d)\n",
+@@ -597,8 +600,6 @@ static int bcm63xx_spi_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
+-out_pm_disable:
+-	pm_runtime_disable(&pdev->dev);
+ out_clk_disable:
+ 	clk_disable_unprepare(clk);
+ out_err:
+diff --git a/drivers/spi/spi-cadence.c b/drivers/spi/spi-cadence.c
+index e5140532071d2b..81edf0a3ddf843 100644
+--- a/drivers/spi/spi-cadence.c
++++ b/drivers/spi/spi-cadence.c
+@@ -665,8 +665,8 @@ static int cdns_spi_probe(struct platform_device *pdev)
+ 
+ clk_dis_all:
+ 	if (!spi_controller_is_target(ctlr)) {
+-		pm_runtime_set_suspended(&pdev->dev);
+ 		pm_runtime_disable(&pdev->dev);
++		pm_runtime_set_suspended(&pdev->dev);
+ 	}
+ remove_ctlr:
+ 	spi_controller_put(ctlr);
+@@ -688,8 +688,10 @@ static void cdns_spi_remove(struct platform_device *pdev)
+ 
+ 	cdns_spi_write(xspi, CDNS_SPI_ER, CDNS_SPI_ER_DISABLE);
+ 
+-	pm_runtime_set_suspended(&pdev->dev);
+-	pm_runtime_disable(&pdev->dev);
++	if (!spi_controller_is_target(ctlr)) {
++		pm_runtime_disable(&pdev->dev);
++		pm_runtime_set_suspended(&pdev->dev);
++	}
+ 
+ 	spi_unregister_controller(ctlr);
+ }
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 1439883326cfe1..4c041ad39dc5dd 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -1870,8 +1870,8 @@ static int spi_imx_probe(struct platform_device *pdev)
+ 		spi_imx_sdma_exit(spi_imx);
+ out_runtime_pm_put:
+ 	pm_runtime_dont_use_autosuspend(spi_imx->dev);
+-	pm_runtime_set_suspended(&pdev->dev);
+ 	pm_runtime_disable(spi_imx->dev);
++	pm_runtime_set_suspended(&pdev->dev);
+ 
+ 	clk_disable_unprepare(spi_imx->clk_ipg);
+ out_put_per:
+diff --git a/drivers/spi/spi-rpc-if.c b/drivers/spi/spi-rpc-if.c
+index e11146932828a2..7cce2d2ab9ca61 100644
+--- a/drivers/spi/spi-rpc-if.c
++++ b/drivers/spi/spi-rpc-if.c
+@@ -198,9 +198,16 @@ static int __maybe_unused rpcif_spi_resume(struct device *dev)
+ 
+ static SIMPLE_DEV_PM_OPS(rpcif_spi_pm_ops, rpcif_spi_suspend, rpcif_spi_resume);
+ 
++static const struct platform_device_id rpc_if_spi_id_table[] = {
++	{ .name = "rpc-if-spi" },
++	{ /* sentinel */ }
++};
++MODULE_DEVICE_TABLE(platform, rpc_if_spi_id_table);
++
+ static struct platform_driver rpcif_spi_driver = {
+ 	.probe	= rpcif_spi_probe,
+ 	.remove_new = rpcif_spi_remove,
++	.id_table = rpc_if_spi_id_table,
+ 	.driver = {
+ 		.name	= "rpc-if-spi",
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/spi/spi-s3c64xx.c b/drivers/spi/spi-s3c64xx.c
+index 833c58c88e4086..6ab416a3396690 100644
+--- a/drivers/spi/spi-s3c64xx.c
++++ b/drivers/spi/spi-s3c64xx.c
+@@ -245,7 +245,7 @@ static void s3c64xx_flush_fifo(struct s3c64xx_spi_driver_data *sdd)
+ 	loops = msecs_to_loops(1);
+ 	do {
+ 		val = readl(regs + S3C64XX_SPI_STATUS);
+-	} while (TX_FIFO_LVL(val, sdd) && loops--);
++	} while (TX_FIFO_LVL(val, sdd) && --loops);
+ 
+ 	if (loops == 0)
+ 		dev_warn(&sdd->pdev->dev, "Timed out flushing TX FIFO\n");
+@@ -258,7 +258,7 @@ static void s3c64xx_flush_fifo(struct s3c64xx_spi_driver_data *sdd)
+ 			readl(regs + S3C64XX_SPI_RX_DATA);
+ 		else
+ 			break;
+-	} while (loops--);
++	} while (--loops);
+ 
+ 	if (loops == 0)
+ 		dev_warn(&sdd->pdev->dev, "Timed out flushing RX FIFO\n");
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index 006ffacf1c56cb..c50f3fb49a6686 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -1029,20 +1029,23 @@ vhost_scsi_get_req(struct vhost_virtqueue *vq, struct vhost_scsi_ctx *vc,
+ 		/* virtio-scsi spec requires byte 0 of the lun to be 1 */
+ 		vq_err(vq, "Illegal virtio-scsi lun: %u\n", *vc->lunp);
+ 	} else {
+-		struct vhost_scsi_tpg **vs_tpg, *tpg;
+-
+-		vs_tpg = vhost_vq_get_backend(vq);	/* validated at handler entry */
+-
+-		tpg = READ_ONCE(vs_tpg[*vc->target]);
+-		if (unlikely(!tpg)) {
+-			vq_err(vq, "Target 0x%x does not exist\n", *vc->target);
+-		} else {
+-			if (tpgp)
+-				*tpgp = tpg;
+-			ret = 0;
++		struct vhost_scsi_tpg **vs_tpg, *tpg = NULL;
++
++		if (vc->target) {
++			/* validated at handler entry */
++			vs_tpg = vhost_vq_get_backend(vq);
++			tpg = READ_ONCE(vs_tpg[*vc->target]);
++			if (unlikely(!tpg)) {
++				vq_err(vq, "Target 0x%x does not exist\n", *vc->target);
++				goto out;
++			}
+ 		}
+-	}
+ 
++		if (tpgp)
++			*tpgp = tpg;
++		ret = 0;
++	}
++out:
+ 	return ret;
+ }
+ 
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index 8dd82afb3452ba..595b8e27bea66d 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -561,15 +561,10 @@ static int efifb_probe(struct platform_device *dev)
+ 		break;
+ 	}
+ 
+-	err = sysfs_create_groups(&dev->dev.kobj, efifb_groups);
+-	if (err) {
+-		pr_err("efifb: cannot add sysfs attrs\n");
+-		goto err_unmap;
+-	}
+ 	err = fb_alloc_cmap(&info->cmap, 256, 0);
+ 	if (err < 0) {
+ 		pr_err("efifb: cannot allocate colormap\n");
+-		goto err_groups;
++		goto err_unmap;
+ 	}
+ 
+ 	err = devm_aperture_acquire_for_platform_device(dev, par->base, par->size);
+@@ -587,8 +582,6 @@ static int efifb_probe(struct platform_device *dev)
+ 
+ err_fb_dealloc_cmap:
+ 	fb_dealloc_cmap(&info->cmap);
+-err_groups:
+-	sysfs_remove_groups(&dev->dev.kobj, efifb_groups);
+ err_unmap:
+ 	if (mem_flags & (EFI_MEMORY_UC | EFI_MEMORY_WC))
+ 		iounmap(info->screen_base);
+@@ -608,12 +601,12 @@ static void efifb_remove(struct platform_device *pdev)
+ 
+ 	/* efifb_destroy takes care of info cleanup */
+ 	unregister_framebuffer(info);
+-	sysfs_remove_groups(&pdev->dev.kobj, efifb_groups);
+ }
+ 
+ static struct platform_driver efifb_driver = {
+ 	.driver = {
+ 		.name = "efi-framebuffer",
++		.dev_groups = efifb_groups,
+ 	},
+ 	.probe = efifb_probe,
+ 	.remove_new = efifb_remove,
+diff --git a/drivers/video/fbdev/pxafb.c b/drivers/video/fbdev/pxafb.c
+index 2ef56fa28aff36..5ce02495cda638 100644
+--- a/drivers/video/fbdev/pxafb.c
++++ b/drivers/video/fbdev/pxafb.c
+@@ -2403,6 +2403,7 @@ static void pxafb_remove(struct platform_device *dev)
+ 	info = &fbi->fb;
+ 
+ 	pxafb_overlay_exit(fbi);
++	cancel_work_sync(&fbi->task);
+ 	unregister_framebuffer(info);
+ 
+ 	pxafb_disable_controller(fbi);
+diff --git a/fs/afs/file.c b/fs/afs/file.c
+index c3f0c45ae9a9b6..e0885cfeb72a72 100644
+--- a/fs/afs/file.c
++++ b/fs/afs/file.c
+@@ -403,6 +403,7 @@ const struct netfs_request_ops afs_req_ops = {
+ 	.begin_writeback	= afs_begin_writeback,
+ 	.prepare_write		= afs_prepare_write,
+ 	.issue_write		= afs_issue_write,
++	.retry_request		= afs_retry_request,
+ };
+ 
+ static void afs_add_open_mmap(struct afs_vnode *vnode)
+diff --git a/fs/afs/fs_operation.c b/fs/afs/fs_operation.c
+index 3546b087e791d4..428721bbe4f6e3 100644
+--- a/fs/afs/fs_operation.c
++++ b/fs/afs/fs_operation.c
+@@ -201,7 +201,7 @@ void afs_wait_for_operation(struct afs_operation *op)
+ 		}
+ 	}
+ 
+-	if (op->call_responded)
++	if (op->call_responded && op->server)
+ 		set_bit(AFS_SERVER_FL_RESPONDING, &op->server->flags);
+ 
+ 	if (!afs_op_error(op)) {
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index a2de5c05f97c89..57e327833ed1d6 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -3179,10 +3179,14 @@ void btrfs_backref_release_cache(struct btrfs_backref_cache *cache)
+ 		btrfs_backref_cleanup_node(cache, node);
+ 	}
+ 
+-	cache->last_trans = 0;
+-
+-	for (i = 0; i < BTRFS_MAX_LEVEL; i++)
+-		ASSERT(list_empty(&cache->pending[i]));
++	for (i = 0; i < BTRFS_MAX_LEVEL; i++) {
++		while (!list_empty(&cache->pending[i])) {
++			node = list_first_entry(&cache->pending[i],
++						struct btrfs_backref_node,
++						list);
++			btrfs_backref_cleanup_node(cache, node);
++		}
++	}
+ 	ASSERT(list_empty(&cache->pending_edge));
+ 	ASSERT(list_empty(&cache->useless_node));
+ 	ASSERT(list_empty(&cache->changed));
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 3791813dc7b621..da6dff405c7768 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4271,6 +4271,17 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ 	/* clear out the rbtree of defraggable inodes */
+ 	btrfs_cleanup_defrag_inodes(fs_info);
+ 
++	/*
++	 * Wait for any fixup workers to complete.
++	 * If we don't wait for them here and they are still running by the time
++	 * we call kthread_stop() against the cleaner kthread further below, we
++	 * get an use-after-free on the cleaner because the fixup worker adds an
++	 * inode to the list of delayed iputs and then attempts to wakeup the
++	 * cleaner kthread, which was already stopped and destroyed. We parked
++	 * already the cleaner, but below we run all pending delayed iputs.
++	 */
++	btrfs_flush_workqueue(fs_info->fixup_workers);
++
+ 	/*
+ 	 * After we parked the cleaner kthread, ordered extents may have
+ 	 * completed and created new delayed iputs. If one of the async reclaim
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index f2935252b981a5..cb58a8796ef232 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -231,70 +231,6 @@ static struct btrfs_backref_node *walk_down_backref(
+ 	return NULL;
+ }
+ 
+-static void update_backref_node(struct btrfs_backref_cache *cache,
+-				struct btrfs_backref_node *node, u64 bytenr)
+-{
+-	struct rb_node *rb_node;
+-	rb_erase(&node->rb_node, &cache->rb_root);
+-	node->bytenr = bytenr;
+-	rb_node = rb_simple_insert(&cache->rb_root, node->bytenr, &node->rb_node);
+-	if (rb_node)
+-		btrfs_backref_panic(cache->fs_info, bytenr, -EEXIST);
+-}
+-
+-/*
+- * update backref cache after a transaction commit
+- */
+-static int update_backref_cache(struct btrfs_trans_handle *trans,
+-				struct btrfs_backref_cache *cache)
+-{
+-	struct btrfs_backref_node *node;
+-	int level = 0;
+-
+-	if (cache->last_trans == 0) {
+-		cache->last_trans = trans->transid;
+-		return 0;
+-	}
+-
+-	if (cache->last_trans == trans->transid)
+-		return 0;
+-
+-	/*
+-	 * detached nodes are used to avoid unnecessary backref
+-	 * lookup. transaction commit changes the extent tree.
+-	 * so the detached nodes are no longer useful.
+-	 */
+-	while (!list_empty(&cache->detached)) {
+-		node = list_entry(cache->detached.next,
+-				  struct btrfs_backref_node, list);
+-		btrfs_backref_cleanup_node(cache, node);
+-	}
+-
+-	while (!list_empty(&cache->changed)) {
+-		node = list_entry(cache->changed.next,
+-				  struct btrfs_backref_node, list);
+-		list_del_init(&node->list);
+-		BUG_ON(node->pending);
+-		update_backref_node(cache, node, node->new_bytenr);
+-	}
+-
+-	/*
+-	 * some nodes can be left in the pending list if there were
+-	 * errors during processing the pending nodes.
+-	 */
+-	for (level = 0; level < BTRFS_MAX_LEVEL; level++) {
+-		list_for_each_entry(node, &cache->pending[level], list) {
+-			BUG_ON(!node->pending);
+-			if (node->bytenr == node->new_bytenr)
+-				continue;
+-			update_backref_node(cache, node, node->new_bytenr);
+-		}
+-	}
+-
+-	cache->last_trans = 0;
+-	return 1;
+-}
+-
+ static bool reloc_root_is_dead(const struct btrfs_root *root)
+ {
+ 	/*
+@@ -550,9 +486,6 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
+ 	struct btrfs_backref_edge *new_edge;
+ 	struct rb_node *rb_node;
+ 
+-	if (cache->last_trans > 0)
+-		update_backref_cache(trans, cache);
+-
+ 	rb_node = rb_simple_search(&cache->rb_root, src->commit_root->start);
+ 	if (rb_node) {
+ 		node = rb_entry(rb_node, struct btrfs_backref_node, rb_node);
+@@ -922,7 +855,7 @@ int btrfs_update_reloc_root(struct btrfs_trans_handle *trans,
+ 	btrfs_grab_root(reloc_root);
+ 
+ 	/* root->reloc_root will stay until current relocation finished */
+-	if (fs_info->reloc_ctl->merge_reloc_tree &&
++	if (fs_info->reloc_ctl && fs_info->reloc_ctl->merge_reloc_tree &&
+ 	    btrfs_root_refs(root_item) == 0) {
+ 		set_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state);
+ 		/*
+@@ -3678,11 +3611,9 @@ static noinline_for_stack int relocate_block_group(struct reloc_control *rc)
+ 			break;
+ 		}
+ restart:
+-		if (update_backref_cache(trans, &rc->backref_cache)) {
+-			btrfs_end_transaction(trans);
+-			trans = NULL;
+-			continue;
+-		}
++		if (rc->backref_cache.last_trans != trans->transid)
++			btrfs_backref_release_cache(&rc->backref_cache);
++		rc->backref_cache.last_trans = trans->transid;
+ 
+ 		ret = find_next_extent(rc, path, &key);
+ 		if (ret < 0)
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index d1a04c0c576edf..474ceebe67bd3e 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -6188,8 +6188,29 @@ static int send_write_or_clone(struct send_ctx *sctx,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (clone_root->offset + num_bytes == info.size)
++	if (clone_root->offset + num_bytes == info.size) {
++		/*
++		 * The final size of our file matches the end offset, but it may
++		 * be that its current size is larger, so we have to truncate it
++		 * to any value between the start offset of the range and the
++		 * final i_size, otherwise the clone operation is invalid
++		 * because it's unaligned and it ends before the current EOF.
++		 * We do this truncate to the final i_size when we finish
++		 * processing the inode, but it's too late by then. And here we
++		 * truncate to the start offset of the range because it's always
++		 * sector size aligned while if it were the final i_size it
++		 * would result in dirtying part of a page, filling part of a
++		 * page with zeroes and then having the clone operation at the
++		 * receiver trigger IO and wait for it due to the dirty page.
++		 */
++		if (sctx->parent_root != NULL) {
++			ret = send_truncate(sctx, sctx->cur_ino,
++					    sctx->cur_inode_gen, offset);
++			if (ret < 0)
++				return ret;
++		}
+ 		goto clone_data;
++	}
+ 
+ write_data:
+ 	ret = send_extent_data(sctx, path, offset, num_bytes);
+diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
+index f53977169db48c..2b3f9935dbb44d 100644
+--- a/fs/cachefiles/namei.c
++++ b/fs/cachefiles/namei.c
+@@ -595,14 +595,12 @@ static bool cachefiles_open_file(struct cachefiles_object *object,
+ 	 * write and readdir but not lookup or open).
+ 	 */
+ 	touch_atime(&file->f_path);
+-	dput(dentry);
+ 	return true;
+ 
+ check_failed:
+ 	fscache_cookie_lookup_negative(object->cookie);
+ 	cachefiles_unmark_inode_in_use(object, file);
+ 	fput(file);
+-	dput(dentry);
+ 	if (ret == -ESTALE)
+ 		return cachefiles_create_file(object);
+ 	return false;
+@@ -611,7 +609,6 @@ static bool cachefiles_open_file(struct cachefiles_object *object,
+ 	fput(file);
+ error:
+ 	cachefiles_do_unmark_inode_in_use(object, d_inode(dentry));
+-	dput(dentry);
+ 	return false;
+ }
+ 
+@@ -654,7 +651,9 @@ bool cachefiles_look_up_object(struct cachefiles_object *object)
+ 		goto new_file;
+ 	}
+ 
+-	if (!cachefiles_open_file(object, dentry))
++	ret = cachefiles_open_file(object, dentry);
++	dput(dentry);
++	if (!ret)
+ 		return false;
+ 
+ 	_leave(" = t [%lu]", file_inode(object->file)->i_ino);
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index 73b5a07bf94dee..f68dd674c14db0 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -95,7 +95,6 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
+ 
+ 	/* dirty the head */
+ 	spin_lock(&ci->i_ceph_lock);
+-	BUG_ON(ci->i_wr_ref == 0); // caller should hold Fw reference
+ 	if (__ceph_have_pending_cap_snap(ci)) {
+ 		struct ceph_cap_snap *capsnap =
+ 				list_last_entry(&ci->i_cap_snaps,
+@@ -469,8 +468,11 @@ static int ceph_init_request(struct netfs_io_request *rreq, struct file *file)
+ 	rreq->netfs_priv = priv;
+ 
+ out:
+-	if (ret < 0)
++	if (ret < 0) {
++		if (got)
++			ceph_put_cap_refs(ceph_inode(inode), got);
+ 		kfree(priv);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index c2157f6e0c6982..d37e9ea5711377 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -6011,6 +6011,18 @@ static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
+ 		ceph_mdsmap_destroy(mdsc->mdsmap);
+ 	kfree(mdsc->sessions);
+ 	ceph_caps_finalize(mdsc);
++
++	if (mdsc->s_cap_auths) {
++		int i;
++
++		for (i = 0; i < mdsc->s_cap_auths_num; i++) {
++			kfree(mdsc->s_cap_auths[i].match.gids);
++			kfree(mdsc->s_cap_auths[i].match.path);
++			kfree(mdsc->s_cap_auths[i].match.fs_name);
++		}
++		kfree(mdsc->s_cap_auths);
++	}
++
+ 	ceph_pool_perm_destroy(mdsc);
+ }
+ 
+diff --git a/fs/dax.c b/fs/dax.c
+index becb4a6920c6aa..c62acd2812f8d4 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -1305,11 +1305,15 @@ int dax_file_unshare(struct inode *inode, loff_t pos, loff_t len,
+ 	struct iomap_iter iter = {
+ 		.inode		= inode,
+ 		.pos		= pos,
+-		.len		= len,
+ 		.flags		= IOMAP_WRITE | IOMAP_UNSHARE | IOMAP_DAX,
+ 	};
++	loff_t size = i_size_read(inode);
+ 	int ret;
+ 
++	if (pos < 0 || pos >= size)
++		return 0;
++
++	iter.len = min(len, size - pos);
+ 	while ((ret = iomap_iter(&iter, ops)) > 0)
+ 		iter.processed = dax_unshare_iter(&iter);
+ 	return ret;
+diff --git a/fs/exec.c b/fs/exec.c
+index 0c17e59e3767b8..3274525d08d20c 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -782,7 +782,8 @@ int setup_arg_pages(struct linux_binprm *bprm,
+ 	stack_base = calc_max_stack_size(stack_base);
+ 
+ 	/* Add space for stack randomization. */
+-	stack_base += (STACK_RND_MASK << PAGE_SHIFT);
++	if (current->flags & PF_RANDOMIZE)
++		stack_base += (STACK_RND_MASK << PAGE_SHIFT);
+ 
+ 	/* Make sure we didn't let the argument array grow too large. */
+ 	if (vma->vm_end - vma->vm_start > stack_base)
+diff --git a/fs/exfat/balloc.c b/fs/exfat/balloc.c
+index 0356c88252bd34..ce9be95c9172f6 100644
+--- a/fs/exfat/balloc.c
++++ b/fs/exfat/balloc.c
+@@ -91,11 +91,8 @@ int exfat_load_bitmap(struct super_block *sb)
+ 				return -EIO;
+ 
+ 			type = exfat_get_entry_type(ep);
+-			if (type == TYPE_UNUSED)
+-				break;
+-			if (type != TYPE_BITMAP)
+-				continue;
+-			if (ep->dentry.bitmap.flags == 0x0) {
++			if (type == TYPE_BITMAP &&
++			    ep->dentry.bitmap.flags == 0x0) {
+ 				int err;
+ 
+ 				err = exfat_allocate_bitmap(sb, ep);
+@@ -103,6 +100,9 @@ int exfat_load_bitmap(struct super_block *sb)
+ 				return err;
+ 			}
+ 			brelse(bh);
++
++			if (type == TYPE_UNUSED)
++				return -EINVAL;
+ 		}
+ 
+ 		if (exfat_get_next_cluster(sb, &clu.dir))
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index ff4514e4626bdb..b8b6b06015cd3b 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -279,12 +279,20 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx)
+ 					struct fscrypt_str de_name =
+ 							FSTR_INIT(de->name,
+ 								de->name_len);
++					u32 hash;
++					u32 minor_hash;
++
++					if (IS_CASEFOLDED(inode)) {
++						hash = EXT4_DIRENT_HASH(de);
++						minor_hash = EXT4_DIRENT_MINOR_HASH(de);
++					} else {
++						hash = 0;
++						minor_hash = 0;
++					}
+ 
+ 					/* Directory is encrypted */
+ 					err = fscrypt_fname_disk_to_usr(inode,
+-						EXT4_DIRENT_HASH(de),
+-						EXT4_DIRENT_MINOR_HASH(de),
+-						&de_name, &fstr);
++						hash, minor_hash, &de_name, &fstr);
+ 					de_name = fstr;
+ 					fstr.len = save_len;
+ 					if (err)
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index e067f2dd0335cd..c64f7c1b1d9082 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -957,6 +957,8 @@ ext4_find_extent(struct inode *inode, ext4_lblk_t block,
+ 
+ 	ext4_ext_show_path(inode, path);
+ 
++	if (orig_path)
++		*orig_path = path;
+ 	return path;
+ 
+ err:
+@@ -1877,6 +1879,7 @@ static void ext4_ext_try_to_merge_up(handle_t *handle,
+ 	path[0].p_hdr->eh_max = cpu_to_le16(max_root);
+ 
+ 	brelse(path[1].p_bh);
++	path[1].p_bh = NULL;
+ 	ext4_free_blocks(handle, inode, NULL, blk, 1,
+ 			 EXT4_FREE_BLOCKS_METADATA | EXT4_FREE_BLOCKS_FORGET);
+ }
+@@ -2103,6 +2106,7 @@ int ext4_ext_insert_extent(handle_t *handle, struct inode *inode,
+ 				       ppath, newext);
+ 	if (err)
+ 		goto cleanup;
++	path = *ppath;
+ 	depth = ext_depth(inode);
+ 	eh = path[depth].p_hdr;
+ 
+@@ -3230,6 +3234,24 @@ static int ext4_split_extent_at(handle_t *handle,
+ 	if (err != -ENOSPC && err != -EDQUOT && err != -ENOMEM)
+ 		goto out;
+ 
++	/*
++	 * Update path is required because previous ext4_ext_insert_extent()
++	 * may have freed or reallocated the path. Using EXT4_EX_NOFAIL
++	 * guarantees that ext4_find_extent() will not return -ENOMEM,
++	 * otherwise -ENOMEM will cause a retry in do_writepages(), and a
++	 * WARN_ON may be triggered in ext4_da_update_reserve_space() due to
++	 * an incorrect ee_len causing the i_reserved_data_blocks exception.
++	 */
++	path = ext4_find_extent(inode, ee_block, ppath,
++				flags | EXT4_EX_NOFAIL);
++	if (IS_ERR(path)) {
++		EXT4_ERROR_INODE(inode, "Failed split extent on %u, err %ld",
++				 split, PTR_ERR(path));
++		return PTR_ERR(path);
++	}
++	depth = ext_depth(inode);
++	ex = path[depth].p_ext;
++
+ 	if (EXT4_EXT_MAY_ZEROOUT & split_flag) {
+ 		if (split_flag & (EXT4_EXT_DATA_VALID1|EXT4_EXT_DATA_VALID2)) {
+ 			if (split_flag & EXT4_EXT_DATA_VALID1) {
+@@ -3282,12 +3304,12 @@ static int ext4_split_extent_at(handle_t *handle,
+ 	ext4_ext_dirty(handle, inode, path + path->p_depth);
+ 	return err;
+ out:
+-	ext4_ext_show_leaf(inode, path);
++	ext4_ext_show_leaf(inode, *ppath);
+ 	return err;
+ }
+ 
+ /*
+- * ext4_split_extents() splits an extent and mark extent which is covered
++ * ext4_split_extent() splits an extent and mark extent which is covered
+  * by @map as split_flags indicates
+  *
+  * It may result in splitting the extent into multiple extents (up to three)
+@@ -3363,7 +3385,7 @@ static int ext4_split_extent(handle_t *handle,
+ 			goto out;
+ 	}
+ 
+-	ext4_ext_show_leaf(inode, path);
++	ext4_ext_show_leaf(inode, *ppath);
+ out:
+ 	return err ? err : allocated;
+ }
+@@ -3828,14 +3850,13 @@ ext4_ext_handle_unwritten_extents(handle_t *handle, struct inode *inode,
+ 			struct ext4_ext_path **ppath, int flags,
+ 			unsigned int allocated, ext4_fsblk_t newblock)
+ {
+-	struct ext4_ext_path __maybe_unused *path = *ppath;
+ 	int ret = 0;
+ 	int err = 0;
+ 
+ 	ext_debug(inode, "logical block %llu, max_blocks %u, flags 0x%x, allocated %u\n",
+ 		  (unsigned long long)map->m_lblk, map->m_len, flags,
+ 		  allocated);
+-	ext4_ext_show_leaf(inode, path);
++	ext4_ext_show_leaf(inode, *ppath);
+ 
+ 	/*
+ 	 * When writing into unwritten space, we should not fail to
+@@ -3932,7 +3953,7 @@ ext4_ext_handle_unwritten_extents(handle_t *handle, struct inode *inode,
+ 	if (allocated > map->m_len)
+ 		allocated = map->m_len;
+ 	map->m_len = allocated;
+-	ext4_ext_show_leaf(inode, path);
++	ext4_ext_show_leaf(inode, *ppath);
+ out2:
+ 	return err ? err : allocated;
+ }
+@@ -5535,6 +5556,7 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len)
+ 	path = ext4_find_extent(inode, offset_lblk, NULL, 0);
+ 	if (IS_ERR(path)) {
+ 		up_write(&EXT4_I(inode)->i_data_sem);
++		ret = PTR_ERR(path);
+ 		goto out_stop;
+ 	}
+ 
+@@ -5880,7 +5902,7 @@ int ext4_clu_mapped(struct inode *inode, ext4_lblk_t lclu)
+ int ext4_ext_replay_update_ex(struct inode *inode, ext4_lblk_t start,
+ 			      int len, int unwritten, ext4_fsblk_t pblk)
+ {
+-	struct ext4_ext_path *path = NULL, *ppath;
++	struct ext4_ext_path *path;
+ 	struct ext4_extent *ex;
+ 	int ret;
+ 
+@@ -5896,30 +5918,29 @@ int ext4_ext_replay_update_ex(struct inode *inode, ext4_lblk_t start,
+ 	if (le32_to_cpu(ex->ee_block) != start ||
+ 		ext4_ext_get_actual_len(ex) != len) {
+ 		/* We need to split this extent to match our extent first */
+-		ppath = path;
+ 		down_write(&EXT4_I(inode)->i_data_sem);
+-		ret = ext4_force_split_extent_at(NULL, inode, &ppath, start, 1);
++		ret = ext4_force_split_extent_at(NULL, inode, &path, start, 1);
+ 		up_write(&EXT4_I(inode)->i_data_sem);
+ 		if (ret)
+ 			goto out;
+-		kfree(path);
+-		path = ext4_find_extent(inode, start, NULL, 0);
++
++		path = ext4_find_extent(inode, start, &path, 0);
+ 		if (IS_ERR(path))
+-			return -1;
+-		ppath = path;
++			return PTR_ERR(path);
+ 		ex = path[path->p_depth].p_ext;
+ 		WARN_ON(le32_to_cpu(ex->ee_block) != start);
++
+ 		if (ext4_ext_get_actual_len(ex) != len) {
+ 			down_write(&EXT4_I(inode)->i_data_sem);
+-			ret = ext4_force_split_extent_at(NULL, inode, &ppath,
++			ret = ext4_force_split_extent_at(NULL, inode, &path,
+ 							 start + len, 1);
+ 			up_write(&EXT4_I(inode)->i_data_sem);
+ 			if (ret)
+ 				goto out;
+-			kfree(path);
+-			path = ext4_find_extent(inode, start, NULL, 0);
++
++			path = ext4_find_extent(inode, start, &path, 0);
+ 			if (IS_ERR(path))
+-				return -EINVAL;
++				return PTR_ERR(path);
+ 			ex = path[path->p_depth].p_ext;
+ 		}
+ 	}
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 3926a05eceeed1..95667512010e10 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -339,22 +339,29 @@ void ext4_fc_mark_ineligible(struct super_block *sb, int reason, handle_t *handl
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	tid_t tid;
++	bool has_transaction = true;
++	bool is_ineligible;
+ 
+ 	if (ext4_fc_disabled(sb))
+ 		return;
+ 
+-	ext4_set_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
+ 	if (handle && !IS_ERR(handle))
+ 		tid = handle->h_transaction->t_tid;
+ 	else {
+ 		read_lock(&sbi->s_journal->j_state_lock);
+-		tid = sbi->s_journal->j_running_transaction ?
+-				sbi->s_journal->j_running_transaction->t_tid : 0;
++		if (sbi->s_journal->j_running_transaction)
++			tid = sbi->s_journal->j_running_transaction->t_tid;
++		else
++			has_transaction = false;
+ 		read_unlock(&sbi->s_journal->j_state_lock);
+ 	}
+ 	spin_lock(&sbi->s_fc_lock);
+-	if (tid_gt(tid, sbi->s_fc_ineligible_tid))
++	is_ineligible = ext4_test_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
++	if (has_transaction &&
++	    (!is_ineligible ||
++	     (is_ineligible && tid_gt(tid, sbi->s_fc_ineligible_tid))))
+ 		sbi->s_fc_ineligible_tid = tid;
++	ext4_set_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
+ 	spin_unlock(&sbi->s_fc_lock);
+ 	WARN_ON(reason >= EXT4_FC_REASON_MAX);
+ 	sbi->s_fc_stats.fc_ineligible_reason_count[reason]++;
+@@ -372,7 +379,7 @@ void ext4_fc_mark_ineligible(struct super_block *sb, int reason, handle_t *handl
+  */
+ static int ext4_fc_track_template(
+ 	handle_t *handle, struct inode *inode,
+-	int (*__fc_track_fn)(struct inode *, void *, bool),
++	int (*__fc_track_fn)(handle_t *handle, struct inode *, void *, bool),
+ 	void *args, int enqueue)
+ {
+ 	bool update = false;
+@@ -389,7 +396,7 @@ static int ext4_fc_track_template(
+ 		ext4_fc_reset_inode(inode);
+ 		ei->i_sync_tid = tid;
+ 	}
+-	ret = __fc_track_fn(inode, args, update);
++	ret = __fc_track_fn(handle, inode, args, update);
+ 	mutex_unlock(&ei->i_fc_lock);
+ 
+ 	if (!enqueue)
+@@ -413,7 +420,8 @@ struct __track_dentry_update_args {
+ };
+ 
+ /* __track_fn for directory entry updates. Called with ei->i_fc_lock. */
+-static int __track_dentry_update(struct inode *inode, void *arg, bool update)
++static int __track_dentry_update(handle_t *handle, struct inode *inode,
++				 void *arg, bool update)
+ {
+ 	struct ext4_fc_dentry_update *node;
+ 	struct ext4_inode_info *ei = EXT4_I(inode);
+@@ -428,14 +436,14 @@ static int __track_dentry_update(struct inode *inode, void *arg, bool update)
+ 
+ 	if (IS_ENCRYPTED(dir)) {
+ 		ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_ENCRYPTED_FILENAME,
+-					NULL);
++					handle);
+ 		mutex_lock(&ei->i_fc_lock);
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+ 	node = kmem_cache_alloc(ext4_fc_dentry_cachep, GFP_NOFS);
+ 	if (!node) {
+-		ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_NOMEM, NULL);
++		ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_NOMEM, handle);
+ 		mutex_lock(&ei->i_fc_lock);
+ 		return -ENOMEM;
+ 	}
+@@ -447,7 +455,7 @@ static int __track_dentry_update(struct inode *inode, void *arg, bool update)
+ 		node->fcd_name.name = kmalloc(dentry->d_name.len, GFP_NOFS);
+ 		if (!node->fcd_name.name) {
+ 			kmem_cache_free(ext4_fc_dentry_cachep, node);
+-			ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_NOMEM, NULL);
++			ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_NOMEM, handle);
+ 			mutex_lock(&ei->i_fc_lock);
+ 			return -ENOMEM;
+ 		}
+@@ -569,7 +577,8 @@ void ext4_fc_track_create(handle_t *handle, struct dentry *dentry)
+ }
+ 
+ /* __track_fn for inode tracking */
+-static int __track_inode(struct inode *inode, void *arg, bool update)
++static int __track_inode(handle_t *handle, struct inode *inode, void *arg,
++			 bool update)
+ {
+ 	if (update)
+ 		return -EEXIST;
+@@ -607,7 +616,8 @@ struct __track_range_args {
+ };
+ 
+ /* __track_fn for tracking data updates */
+-static int __track_range(struct inode *inode, void *arg, bool update)
++static int __track_range(handle_t *handle, struct inode *inode, void *arg,
++			 bool update)
+ {
+ 	struct ext4_inode_info *ei = EXT4_I(inode);
+ 	ext4_lblk_t oldstart;
+@@ -1288,8 +1298,21 @@ static void ext4_fc_cleanup(journal_t *journal, int full, tid_t tid)
+ 		list_del_init(&iter->i_fc_list);
+ 		ext4_clear_inode_state(&iter->vfs_inode,
+ 				       EXT4_STATE_FC_COMMITTING);
+-		if (tid_geq(tid, iter->i_sync_tid))
++		if (tid_geq(tid, iter->i_sync_tid)) {
+ 			ext4_fc_reset_inode(&iter->vfs_inode);
++		} else if (full) {
++			/*
++			 * We are called after a full commit, inode has been
++			 * modified while the commit was running. Re-enqueue
++			 * the inode into STAGING, which will then be splice
++			 * back into MAIN. This cannot happen during
++			 * fastcommit because the journal is locked all the
++			 * time in that case (and tid doesn't increase so
++			 * tid check above isn't reliable).
++			 */
++			list_add_tail(&EXT4_I(&iter->vfs_inode)->i_fc_list,
++				      &sbi->s_fc_q[FC_Q_STAGING]);
++		}
+ 		/* Make sure EXT4_STATE_FC_COMMITTING bit is clear */
+ 		smp_mb();
+ #if (BITS_PER_LONG < 64)
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index c89e434db6b7ba..be061bb640672c 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -334,10 +334,10 @@ static ssize_t ext4_handle_inode_extension(struct inode *inode, loff_t offset,
+  * Clean up the inode after DIO or DAX extending write has completed and the
+  * inode size has been updated using ext4_handle_inode_extension().
+  */
+-static void ext4_inode_extension_cleanup(struct inode *inode, ssize_t count)
++static void ext4_inode_extension_cleanup(struct inode *inode, bool need_trunc)
+ {
+ 	lockdep_assert_held_write(&inode->i_rwsem);
+-	if (count < 0) {
++	if (need_trunc) {
+ 		ext4_truncate_failed_write(inode);
+ 		/*
+ 		 * If the truncate operation failed early, then the inode may
+@@ -586,7 +586,7 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 		 * writeback of delalloc blocks.
+ 		 */
+ 		WARN_ON_ONCE(ret == -EIOCBQUEUED);
+-		ext4_inode_extension_cleanup(inode, ret);
++		ext4_inode_extension_cleanup(inode, ret < 0);
+ 	}
+ 
+ out:
+@@ -670,7 +670,7 @@ ext4_dax_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 
+ 	if (extend) {
+ 		ret = ext4_handle_inode_extension(inode, offset, ret);
+-		ext4_inode_extension_cleanup(inode, ret);
++		ext4_inode_extension_cleanup(inode, ret < (ssize_t)count);
+ 	}
+ out:
+ 	inode_unlock(inode);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 238e1963382344..a99c5ae05dc68b 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -5233,8 +5233,9 @@ static void ext4_wait_for_tail_page_commit(struct inode *inode)
+ {
+ 	unsigned offset;
+ 	journal_t *journal = EXT4_SB(inode->i_sb)->s_journal;
+-	tid_t commit_tid = 0;
++	tid_t commit_tid;
+ 	int ret;
++	bool has_transaction;
+ 
+ 	offset = inode->i_size & (PAGE_SIZE - 1);
+ 	/*
+@@ -5259,12 +5260,14 @@ static void ext4_wait_for_tail_page_commit(struct inode *inode)
+ 		folio_put(folio);
+ 		if (ret != -EBUSY)
+ 			return;
+-		commit_tid = 0;
++		has_transaction = false;
+ 		read_lock(&journal->j_state_lock);
+-		if (journal->j_committing_transaction)
++		if (journal->j_committing_transaction) {
+ 			commit_tid = journal->j_committing_transaction->t_tid;
++			has_transaction = true;
++		}
+ 		read_unlock(&journal->j_state_lock);
+-		if (commit_tid)
++		if (has_transaction)
+ 			jbd2_log_wait_commit(journal, commit_tid);
+ 	}
+ }
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index d98ac2af8199f1..a5e1492bbaaa56 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -663,8 +663,8 @@ int ext4_ind_migrate(struct inode *inode)
+ 	if (unlikely(ret2 && !ret))
+ 		ret = ret2;
+ errout:
+-	ext4_journal_stop(handle);
+ 	up_write(&EXT4_I(inode)->i_data_sem);
++	ext4_journal_stop(handle);
+ out_unlock:
+ 	ext4_writepages_up_write(inode->i_sb, alloc_ctx);
+ 	return ret;
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index 204f53b236229f..c95e3e526390d7 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -36,7 +36,6 @@ get_ext_path(struct inode *inode, ext4_lblk_t lblock,
+ 		*ppath = NULL;
+ 		return -ENODATA;
+ 	}
+-	*ppath = path;
+ 	return 0;
+ }
+ 
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 1311ad0464b2a6..b1942a1aff9edf 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1526,7 +1526,7 @@ static bool ext4_match(struct inode *parent,
+ }
+ 
+ /*
+- * Returns 0 if not found, -1 on failure, and 1 on success
++ * Returns 0 if not found, -EFSCORRUPTED on failure, and 1 on success
+  */
+ int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size,
+ 		    struct inode *dir, struct ext4_filename *fname,
+@@ -1547,7 +1547,7 @@ int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size,
+ 			 * a full check */
+ 			if (ext4_check_dir_entry(dir, NULL, de, bh, search_buf,
+ 						 buf_size, offset))
+-				return -1;
++				return -EFSCORRUPTED;
+ 			*res_dir = de;
+ 			return 1;
+ 		}
+@@ -1555,7 +1555,7 @@ int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size,
+ 		de_len = ext4_rec_len_from_disk(de->rec_len,
+ 						dir->i_sb->s_blocksize);
+ 		if (de_len <= 0)
+-			return -1;
++			return -EFSCORRUPTED;
+ 		offset += de_len;
+ 		de = (struct ext4_dir_entry_2 *) ((char *) de + de_len);
+ 	}
+@@ -1707,8 +1707,10 @@ static struct buffer_head *__ext4_find_entry(struct inode *dir,
+ 			goto cleanup_and_exit;
+ 		} else {
+ 			brelse(bh);
+-			if (i < 0)
++			if (i < 0) {
++				ret = ERR_PTR(i);
+ 				goto cleanup_and_exit;
++			}
+ 		}
+ 	next:
+ 		if (++block >= nblocks)
+@@ -1802,7 +1804,7 @@ static struct buffer_head * ext4_dx_find_entry(struct inode *dir,
+ 		if (retval == 1)
+ 			goto success;
+ 		brelse(bh);
+-		if (retval == -1) {
++		if (retval < 0) {
+ 			bh = ERR_PTR(ERR_BAD_DX_DIR);
+ 			goto errout;
+ 		}
+@@ -2044,7 +2046,7 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ 		split = count/2;
+ 
+ 	hash2 = map[split].hash;
+-	continued = hash2 == map[split - 1].hash;
++	continued = split > 0 ? hash2 == map[split - 1].hash : 0;
+ 	dxtrace(printk(KERN_INFO "Split block %lu at %x, %i/%i\n",
+ 			(unsigned long)dx_get_block(frame->at),
+ 					hash2, split, count-split));
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 0ba9837d65cac9..f12ccaabf13d8b 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -230,8 +230,8 @@ struct ext4_new_flex_group_data {
+ #define MAX_RESIZE_BG				16384
+ 
+ /*
+- * alloc_flex_gd() allocates a ext4_new_flex_group_data with size of
+- * @flexbg_size.
++ * alloc_flex_gd() allocates an ext4_new_flex_group_data that satisfies the
++ * resizing from @o_group to @n_group, its size is typically @flexbg_size.
+  *
+  * Returns NULL on failure otherwise address of the allocated structure.
+  */
+@@ -239,25 +239,27 @@ static struct ext4_new_flex_group_data *alloc_flex_gd(unsigned int flexbg_size,
+ 				ext4_group_t o_group, ext4_group_t n_group)
+ {
+ 	ext4_group_t last_group;
++	unsigned int max_resize_bg;
+ 	struct ext4_new_flex_group_data *flex_gd;
+ 
+ 	flex_gd = kmalloc(sizeof(*flex_gd), GFP_NOFS);
+ 	if (flex_gd == NULL)
+ 		goto out3;
+ 
+-	if (unlikely(flexbg_size > MAX_RESIZE_BG))
+-		flex_gd->resize_bg = MAX_RESIZE_BG;
+-	else
+-		flex_gd->resize_bg = flexbg_size;
++	max_resize_bg = umin(flexbg_size, MAX_RESIZE_BG);
++	flex_gd->resize_bg = max_resize_bg;
+ 
+ 	/* Avoid allocating large 'groups' array if not needed */
+ 	last_group = o_group | (flex_gd->resize_bg - 1);
+ 	if (n_group <= last_group)
+-		flex_gd->resize_bg = 1 << fls(n_group - o_group + 1);
++		flex_gd->resize_bg = 1 << fls(n_group - o_group);
+ 	else if (n_group - last_group < flex_gd->resize_bg)
+-		flex_gd->resize_bg = 1 << max(fls(last_group - o_group + 1),
++		flex_gd->resize_bg = 1 << max(fls(last_group - o_group),
+ 					      fls(n_group - last_group));
+ 
++	if (WARN_ON_ONCE(flex_gd->resize_bg > max_resize_bg))
++		flex_gd->resize_bg = max_resize_bg;
++
+ 	flex_gd->groups = kmalloc_array(flex_gd->resize_bg,
+ 					sizeof(struct ext4_new_group_data),
+ 					GFP_NOFS);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index edc692984404d5..b75f84a1e500fa 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -5331,6 +5331,8 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ 	INIT_LIST_HEAD(&sbi->s_orphan); /* unlinked but open files */
+ 	mutex_init(&sbi->s_orphan_lock);
+ 
++	spin_lock_init(&sbi->s_bdev_wb_lock);
++
+ 	ext4_fast_commit_init(sb);
+ 
+ 	sb->s_root = NULL;
+@@ -5552,7 +5554,6 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ 	 * Save the original bdev mapping's wb_err value which could be
+ 	 * used to detect the metadata async write error.
+ 	 */
+-	spin_lock_init(&sbi->s_bdev_wb_lock);
+ 	errseq_check_and_advance(&sb->s_bdev->bd_mapping->wb_err,
+ 				 &sbi->s_bdev_wb_err);
+ 	EXT4_SB(sb)->s_mount_state |= EXT4_ORPHAN_FS;
+@@ -5632,8 +5633,8 @@ failed_mount8: __maybe_unused
+ failed_mount3:
+ 	/* flush s_sb_upd_work before sbi destroy */
+ 	flush_work(&sbi->s_sb_upd_work);
+-	del_timer_sync(&sbi->s_err_report);
+ 	ext4_stop_mmpd(sbi);
++	del_timer_sync(&sbi->s_err_report);
+ 	ext4_group_desc_free(sbi);
+ failed_mount:
+ 	if (sbi->s_chksum_driver)
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 46ce2f21fef9dc..aea9e3c405f1fb 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -2559,6 +2559,8 @@ ext4_xattr_set(struct inode *inode, int name_index, const char *name,
+ 
+ 		error = ext4_xattr_set_handle(handle, inode, name_index, name,
+ 					      value, value_len, flags);
++		ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_XATTR,
++					handle);
+ 		error2 = ext4_journal_stop(handle);
+ 		if (error == -ENOSPC &&
+ 		    ext4_should_retry_alloc(sb, &retries))
+@@ -2566,7 +2568,6 @@ ext4_xattr_set(struct inode *inode, int name_index, const char *name,
+ 		if (error == 0)
+ 			error = error2;
+ 	}
+-	ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_XATTR, NULL);
+ 
+ 	return error;
+ }
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 9c8acb98f4dbf6..549361ee48503f 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -134,6 +134,12 @@ typedef u32 nid_t;
+ 
+ #define COMPRESS_EXT_NUM		16
+ 
++enum blkzone_allocation_policy {
++	BLKZONE_ALLOC_PRIOR_SEQ,	/* Prioritize writing to sequential zones */
++	BLKZONE_ALLOC_ONLY_SEQ,		/* Only allow writing to sequential zones */
++	BLKZONE_ALLOC_PRIOR_CONV,	/* Prioritize writing to conventional zones */
++};
++
+ /*
+  * An implementation of an rwsem that is explicitly unfair to readers. This
+  * prevents priority inversion when a low-priority reader acquires the read lock
+@@ -1295,6 +1301,7 @@ struct f2fs_gc_control {
+ 	bool no_bg_gc;			/* check the space and stop bg_gc */
+ 	bool should_migrate_blocks;	/* should migrate blocks */
+ 	bool err_gc_skipped;		/* return EAGAIN if GC skipped */
++	bool one_time;			/* require one time GC in one migration unit */
+ 	unsigned int nr_free_secs;	/* # of free sections to do GC */
+ };
+ 
+@@ -1563,6 +1570,8 @@ struct f2fs_sb_info {
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 	unsigned int blocks_per_blkz;		/* F2FS blocks per zone */
+ 	unsigned int max_open_zones;		/* max open zone resources of the zoned device */
++	/* For adjust the priority writing position of data in zone UFS */
++	unsigned int blkzone_alloc_policy;
+ #endif
+ 
+ 	/* for node-related operations */
+@@ -1689,6 +1698,8 @@ struct f2fs_sb_info {
+ 	unsigned int max_victim_search;
+ 	/* migration granularity of garbage collection, unit: segment */
+ 	unsigned int migration_granularity;
++	/* migration window granularity of garbage collection, unit: segment */
++	unsigned int migration_window_granularity;
+ 
+ 	/*
+ 	 * for stat information.
+@@ -2862,13 +2873,26 @@ static inline bool is_inflight_io(struct f2fs_sb_info *sbi, int type)
+ 	return false;
+ }
+ 
++static inline bool is_inflight_read_io(struct f2fs_sb_info *sbi)
++{
++	return get_pages(sbi, F2FS_RD_DATA) || get_pages(sbi, F2FS_DIO_READ);
++}
++
+ static inline bool is_idle(struct f2fs_sb_info *sbi, int type)
+ {
++	bool zoned_gc = (type == GC_TIME &&
++			F2FS_HAS_FEATURE(sbi, F2FS_FEATURE_BLKZONED));
++
+ 	if (sbi->gc_mode == GC_URGENT_HIGH)
+ 		return true;
+ 
+-	if (is_inflight_io(sbi, type))
+-		return false;
++	if (zoned_gc) {
++		if (is_inflight_read_io(sbi))
++			return false;
++	} else {
++		if (is_inflight_io(sbi, type))
++			return false;
++	}
+ 
+ 	if (sbi->gc_mode == GC_URGENT_MID)
+ 		return true;
+@@ -2877,6 +2901,9 @@ static inline bool is_idle(struct f2fs_sb_info *sbi, int type)
+ 			(type == DISCARD_TIME || type == GC_TIME))
+ 		return true;
+ 
++	if (zoned_gc)
++		return true;
++
+ 	return f2fs_time_over(sbi, type);
+ }
+ 
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 448c75e80b89e6..e8bf72a88cac8c 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -81,6 +81,8 @@ static int gc_thread_func(void *data)
+ 			continue;
+ 		}
+ 
++		gc_control.one_time = false;
++
+ 		/*
+ 		 * [GC triggering condition]
+ 		 * 0. GC is not conducted currently.
+@@ -116,15 +118,29 @@ static int gc_thread_func(void *data)
+ 			goto next;
+ 		}
+ 
+-		if (has_enough_invalid_blocks(sbi))
++		if (f2fs_sb_has_blkzoned(sbi)) {
++			if (has_enough_free_blocks(sbi, LIMIT_NO_ZONED_GC)) {
++				wait_ms = gc_th->no_gc_sleep_time;
++				f2fs_up_write(&sbi->gc_lock);
++				goto next;
++			}
++			if (wait_ms == gc_th->no_gc_sleep_time)
++				wait_ms = gc_th->max_sleep_time;
++		}
++
++		if (need_to_boost_gc(sbi)) {
+ 			decrease_sleep_time(gc_th, &wait_ms);
+-		else
++			if (f2fs_sb_has_blkzoned(sbi))
++				gc_control.one_time = true;
++		} else {
+ 			increase_sleep_time(gc_th, &wait_ms);
++		}
+ do_gc:
+ 		stat_inc_gc_call_count(sbi, foreground ?
+ 					FOREGROUND : BACKGROUND);
+ 
+-		sync_mode = F2FS_OPTION(sbi).bggc_mode == BGGC_MODE_SYNC;
++		sync_mode = (F2FS_OPTION(sbi).bggc_mode == BGGC_MODE_SYNC) ||
++				gc_control.one_time;
+ 
+ 		/* foreground GC was been triggered via f2fs_balance_fs() */
+ 		if (foreground)
+@@ -179,9 +195,16 @@ int f2fs_start_gc_thread(struct f2fs_sb_info *sbi)
+ 		return -ENOMEM;
+ 
+ 	gc_th->urgent_sleep_time = DEF_GC_THREAD_URGENT_SLEEP_TIME;
+-	gc_th->min_sleep_time = DEF_GC_THREAD_MIN_SLEEP_TIME;
+-	gc_th->max_sleep_time = DEF_GC_THREAD_MAX_SLEEP_TIME;
+-	gc_th->no_gc_sleep_time = DEF_GC_THREAD_NOGC_SLEEP_TIME;
++
++	if (f2fs_sb_has_blkzoned(sbi)) {
++		gc_th->min_sleep_time = DEF_GC_THREAD_MIN_SLEEP_TIME_ZONED;
++		gc_th->max_sleep_time = DEF_GC_THREAD_MAX_SLEEP_TIME_ZONED;
++		gc_th->no_gc_sleep_time = DEF_GC_THREAD_NOGC_SLEEP_TIME_ZONED;
++	} else {
++		gc_th->min_sleep_time = DEF_GC_THREAD_MIN_SLEEP_TIME;
++		gc_th->max_sleep_time = DEF_GC_THREAD_MAX_SLEEP_TIME;
++		gc_th->no_gc_sleep_time = DEF_GC_THREAD_NOGC_SLEEP_TIME;
++	}
+ 
+ 	gc_th->gc_wake = false;
+ 
+@@ -1684,31 +1707,49 @@ static int __get_victim(struct f2fs_sb_info *sbi, unsigned int *victim,
+ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ 				unsigned int start_segno,
+ 				struct gc_inode_list *gc_list, int gc_type,
+-				bool force_migrate)
++				bool force_migrate, bool one_time)
+ {
+ 	struct page *sum_page;
+ 	struct f2fs_summary_block *sum;
+ 	struct blk_plug plug;
+ 	unsigned int segno = start_segno;
+ 	unsigned int end_segno = start_segno + SEGS_PER_SEC(sbi);
++	unsigned int sec_end_segno;
+ 	int seg_freed = 0, migrated = 0;
+ 	unsigned char type = IS_DATASEG(get_seg_entry(sbi, segno)->type) ?
+ 						SUM_TYPE_DATA : SUM_TYPE_NODE;
+ 	unsigned char data_type = (type == SUM_TYPE_DATA) ? DATA : NODE;
+ 	int submitted = 0;
+ 
+-	if (__is_large_section(sbi))
+-		end_segno = rounddown(end_segno, SEGS_PER_SEC(sbi));
++	if (__is_large_section(sbi)) {
++		sec_end_segno = rounddown(end_segno, SEGS_PER_SEC(sbi));
+ 
+-	/*
+-	 * zone-capacity can be less than zone-size in zoned devices,
+-	 * resulting in less than expected usable segments in the zone,
+-	 * calculate the end segno in the zone which can be garbage collected
+-	 */
+-	if (f2fs_sb_has_blkzoned(sbi))
+-		end_segno -= SEGS_PER_SEC(sbi) -
++		/*
++		 * zone-capacity can be less than zone-size in zoned devices,
++		 * resulting in less than expected usable segments in the zone,
++		 * calculate the end segno in the zone which can be garbage
++		 * collected
++		 */
++		if (f2fs_sb_has_blkzoned(sbi))
++			sec_end_segno -= SEGS_PER_SEC(sbi) -
+ 					f2fs_usable_segs_in_sec(sbi, segno);
+ 
++		if (gc_type == BG_GC || one_time) {
++			unsigned int window_granularity =
++				sbi->migration_window_granularity;
++
++			if (f2fs_sb_has_blkzoned(sbi) &&
++					!has_enough_free_blocks(sbi,
++					LIMIT_BOOST_ZONED_GC))
++				window_granularity *= BOOST_GC_MULTIPLE;
++
++			end_segno = start_segno + window_granularity;
++		}
++
++		if (end_segno > sec_end_segno)
++			end_segno = sec_end_segno;
++	}
++
+ 	sanity_check_seg_type(sbi, get_seg_entry(sbi, segno)->type);
+ 
+ 	/* readahead multi ssa blocks those have contiguous address */
+@@ -1787,7 +1828,8 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ 
+ 		if (__is_large_section(sbi))
+ 			sbi->next_victim_seg[gc_type] =
+-				(segno + 1 < end_segno) ? segno + 1 : NULL_SEGNO;
++				(segno + 1 < sec_end_segno) ?
++					segno + 1 : NULL_SEGNO;
+ skip:
+ 		f2fs_put_page(sum_page, 0);
+ 	}
+@@ -1876,7 +1918,8 @@ int f2fs_gc(struct f2fs_sb_info *sbi, struct f2fs_gc_control *gc_control)
+ 	}
+ 
+ 	seg_freed = do_garbage_collect(sbi, segno, &gc_list, gc_type,
+-				gc_control->should_migrate_blocks);
++				gc_control->should_migrate_blocks,
++				gc_control->one_time);
+ 	if (seg_freed < 0)
+ 		goto stop;
+ 
+@@ -1887,6 +1930,9 @@ int f2fs_gc(struct f2fs_sb_info *sbi, struct f2fs_gc_control *gc_control)
+ 		total_sec_freed++;
+ 	}
+ 
++	if (gc_control->one_time)
++		goto stop;
++
+ 	if (gc_type == FG_GC) {
+ 		sbi->cur_victim_sec = NULL_SEGNO;
+ 
+@@ -2011,8 +2057,7 @@ int f2fs_gc_range(struct f2fs_sb_info *sbi,
+ 			.iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
+ 		};
+ 
+-		do_garbage_collect(sbi, segno, &gc_list, FG_GC,
+-						dry_run_sections == 0);
++		do_garbage_collect(sbi, segno, &gc_list, FG_GC, true, false);
+ 		put_gc_inode(&gc_list);
+ 
+ 		if (!dry_run && get_valid_blocks(sbi, segno, true))
+diff --git a/fs/f2fs/gc.h b/fs/f2fs/gc.h
+index a8ea3301b815ad..78abeebd68b5ec 100644
+--- a/fs/f2fs/gc.h
++++ b/fs/f2fs/gc.h
+@@ -15,6 +15,11 @@
+ #define DEF_GC_THREAD_MAX_SLEEP_TIME	60000
+ #define DEF_GC_THREAD_NOGC_SLEEP_TIME	300000	/* wait 5 min */
+ 
++/* GC sleep parameters for zoned devices */
++#define DEF_GC_THREAD_MIN_SLEEP_TIME_ZONED	10
++#define DEF_GC_THREAD_MAX_SLEEP_TIME_ZONED	20
++#define DEF_GC_THREAD_NOGC_SLEEP_TIME_ZONED	60000
++
+ /* choose candidates from sections which has age of more than 7 days */
+ #define DEF_GC_THREAD_AGE_THRESHOLD		(60 * 60 * 24 * 7)
+ #define DEF_GC_THREAD_CANDIDATE_RATIO		20	/* select 20% oldest sections as candidates */
+@@ -25,6 +30,11 @@
+ #define LIMIT_INVALID_BLOCK	40 /* percentage over total user space */
+ #define LIMIT_FREE_BLOCK	40 /* percentage over invalid + free space */
+ 
++#define LIMIT_NO_ZONED_GC	60 /* percentage over total user space of no gc for zoned devices */
++#define LIMIT_BOOST_ZONED_GC	25 /* percentage over total user space of boosted gc for zoned devices */
++#define DEF_MIGRATION_WINDOW_GRANULARITY_ZONED	3
++#define BOOST_GC_MULTIPLE	5
++
+ #define DEF_GC_FAILED_PINNED_FILES	2048
+ #define MAX_GC_FAILED_PINNED_FILES	USHRT_MAX
+ 
+@@ -152,6 +162,12 @@ static inline void decrease_sleep_time(struct f2fs_gc_kthread *gc_th,
+ 		*wait -= min_time;
+ }
+ 
++static inline bool has_enough_free_blocks(struct f2fs_sb_info *sbi,
++						unsigned int limit_perc)
++{
++	return free_sections(sbi) > ((sbi->total_sections * limit_perc) / 100);
++}
++
+ static inline bool has_enough_invalid_blocks(struct f2fs_sb_info *sbi)
+ {
+ 	block_t user_block_count = sbi->user_block_count;
+@@ -167,3 +183,10 @@ static inline bool has_enough_invalid_blocks(struct f2fs_sb_info *sbi)
+ 		free_user_blocks(sbi) <
+ 			limit_free_user_blocks(invalid_user_blocks));
+ }
++
++static inline bool need_to_boost_gc(struct f2fs_sb_info *sbi)
++{
++	if (f2fs_sb_has_blkzoned(sbi))
++		return !has_enough_free_blocks(sbi, LIMIT_BOOST_ZONED_GC);
++	return has_enough_invalid_blocks(sbi);
++}
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 425479d7692167..297cc89cbcca0a 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -2701,22 +2701,47 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ 			goto got_it;
+ 	}
+ 
++#ifdef CONFIG_BLK_DEV_ZONED
+ 	/*
+ 	 * If we format f2fs on zoned storage, let's try to get pinned sections
+ 	 * from beginning of the storage, which should be a conventional one.
+ 	 */
+ 	if (f2fs_sb_has_blkzoned(sbi)) {
+-		segno = pinning ? 0 : max(first_zoned_segno(sbi), *newseg);
++		/* Prioritize writing to conventional zones */
++		if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_PRIOR_CONV || pinning)
++			segno = 0;
++		else
++			segno = max(first_zoned_segno(sbi), *newseg);
+ 		hint = GET_SEC_FROM_SEG(sbi, segno);
+ 	}
++#endif
+ 
+ find_other_zone:
+ 	secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint);
++
++#ifdef CONFIG_BLK_DEV_ZONED
++	if (secno >= MAIN_SECS(sbi) && f2fs_sb_has_blkzoned(sbi)) {
++		/* Write only to sequential zones */
++		if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_ONLY_SEQ) {
++			hint = GET_SEC_FROM_SEG(sbi, first_zoned_segno(sbi));
++			secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint);
++		} else
++			secno = find_first_zero_bit(free_i->free_secmap,
++								MAIN_SECS(sbi));
++		if (secno >= MAIN_SECS(sbi)) {
++			ret = -ENOSPC;
++			f2fs_bug_on(sbi, 1);
++			goto out_unlock;
++		}
++	}
++#endif
++
+ 	if (secno >= MAIN_SECS(sbi)) {
+ 		secno = find_first_zero_bit(free_i->free_secmap,
+ 							MAIN_SECS(sbi));
+ 		if (secno >= MAIN_SECS(sbi)) {
+ 			ret = -ENOSPC;
++			f2fs_bug_on(sbi, 1);
+ 			goto out_unlock;
+ 		}
+ 	}
+@@ -2758,10 +2783,8 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ out_unlock:
+ 	spin_unlock(&free_i->segmap_lock);
+ 
+-	if (ret == -ENOSPC) {
++	if (ret == -ENOSPC)
+ 		f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_NO_SEGMENT);
+-		f2fs_bug_on(sbi, 1);
+-	}
+ 	return ret;
+ }
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index b4c8ac6c085989..b0932a4a8e0473 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -711,6 +711,11 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
+ 			if (!strcmp(name, "on")) {
+ 				F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_ON;
+ 			} else if (!strcmp(name, "off")) {
++				if (f2fs_sb_has_blkzoned(sbi)) {
++					f2fs_warn(sbi, "zoned devices need bggc");
++					kfree(name);
++					return -EINVAL;
++				}
+ 				F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_OFF;
+ 			} else if (!strcmp(name, "sync")) {
+ 				F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_SYNC;
+@@ -3796,6 +3801,8 @@ static void init_sb_info(struct f2fs_sb_info *sbi)
+ 	sbi->next_victim_seg[FG_GC] = NULL_SEGNO;
+ 	sbi->max_victim_search = DEF_MAX_VICTIM_SEARCH;
+ 	sbi->migration_granularity = SEGS_PER_SEC(sbi);
++	sbi->migration_window_granularity = f2fs_sb_has_blkzoned(sbi) ?
++		DEF_MIGRATION_WINDOW_GRANULARITY_ZONED : SEGS_PER_SEC(sbi);
+ 	sbi->seq_file_ra_mul = MIN_RA_MUL;
+ 	sbi->max_fragment_chunk = DEF_FRAGMENT_SIZE;
+ 	sbi->max_fragment_hole = DEF_FRAGMENT_SIZE;
+@@ -4231,6 +4238,7 @@ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
+ 	sbi->aligned_blksize = true;
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 	sbi->max_open_zones = UINT_MAX;
++	sbi->blkzone_alloc_policy = BLKZONE_ALLOC_PRIOR_SEQ;
+ #endif
+ 
+ 	for (i = 0; i < max_devices; i++) {
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 09d3ecfaa4f1a8..bb0dbe7665f9c4 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -561,6 +561,11 @@ static ssize_t __sbi_store(struct f2fs_attr *a,
+ 			return -EINVAL;
+ 	}
+ 
++	if (!strcmp(a->attr.name, "migration_window_granularity")) {
++		if (t == 0 || t > SEGS_PER_SEC(sbi))
++			return -EINVAL;
++	}
++
+ 	if (!strcmp(a->attr.name, "gc_urgent")) {
+ 		if (t == 0) {
+ 			sbi->gc_mode = GC_NORMAL;
+@@ -627,6 +632,15 @@ static ssize_t __sbi_store(struct f2fs_attr *a,
+ 	}
+ #endif
+ 
++#ifdef CONFIG_BLK_DEV_ZONED
++	if (!strcmp(a->attr.name, "blkzone_alloc_policy")) {
++		if (t < BLKZONE_ALLOC_PRIOR_SEQ || t > BLKZONE_ALLOC_PRIOR_CONV)
++			return -EINVAL;
++		sbi->blkzone_alloc_policy = t;
++		return count;
++	}
++#endif
++
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+ 	if (!strcmp(a->attr.name, "compr_written_block") ||
+ 		!strcmp(a->attr.name, "compr_saved_block")) {
+@@ -1001,6 +1015,7 @@ F2FS_SBI_RW_ATTR(gc_pin_file_thresh, gc_pin_file_threshold);
+ F2FS_SBI_RW_ATTR(gc_reclaimed_segments, gc_reclaimed_segs);
+ F2FS_SBI_GENERAL_RW_ATTR(max_victim_search);
+ F2FS_SBI_GENERAL_RW_ATTR(migration_granularity);
++F2FS_SBI_GENERAL_RW_ATTR(migration_window_granularity);
+ F2FS_SBI_GENERAL_RW_ATTR(dir_level);
+ #ifdef CONFIG_F2FS_IOSTAT
+ F2FS_SBI_GENERAL_RW_ATTR(iostat_enable);
+@@ -1033,6 +1048,7 @@ F2FS_SBI_GENERAL_RW_ATTR(warm_data_age_threshold);
+ F2FS_SBI_GENERAL_RW_ATTR(last_age_weight);
+ #ifdef CONFIG_BLK_DEV_ZONED
+ F2FS_SBI_GENERAL_RO_ATTR(unusable_blocks_per_sec);
++F2FS_SBI_GENERAL_RW_ATTR(blkzone_alloc_policy);
+ #endif
+ 
+ /* STAT_INFO ATTR */
+@@ -1140,6 +1156,7 @@ static struct attribute *f2fs_attrs[] = {
+ 	ATTR_LIST(min_ssr_sections),
+ 	ATTR_LIST(max_victim_search),
+ 	ATTR_LIST(migration_granularity),
++	ATTR_LIST(migration_window_granularity),
+ 	ATTR_LIST(dir_level),
+ 	ATTR_LIST(ram_thresh),
+ 	ATTR_LIST(ra_nid_pages),
+@@ -1187,6 +1204,7 @@ static struct attribute *f2fs_attrs[] = {
+ #endif
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 	ATTR_LIST(unusable_blocks_per_sec),
++	ATTR_LIST(blkzone_alloc_policy),
+ #endif
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+ 	ATTR_LIST(compr_written_block),
+diff --git a/fs/file.c b/fs/file.c
+index 655338effe9c72..c2403cde40e4a4 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -272,59 +272,45 @@ static inline bool fd_is_open(unsigned int fd, const struct fdtable *fdt)
+ 	return test_bit(fd, fdt->open_fds);
+ }
+ 
+-static unsigned int count_open_files(struct fdtable *fdt)
+-{
+-	unsigned int size = fdt->max_fds;
+-	unsigned int i;
+-
+-	/* Find the last open fd */
+-	for (i = size / BITS_PER_LONG; i > 0; ) {
+-		if (fdt->open_fds[--i])
+-			break;
+-	}
+-	i = (i + 1) * BITS_PER_LONG;
+-	return i;
+-}
+-
+ /*
+  * Note that a sane fdtable size always has to be a multiple of
+  * BITS_PER_LONG, since we have bitmaps that are sized by this.
+  *
+- * 'max_fds' will normally already be properly aligned, but it
+- * turns out that in the close_range() -> __close_range() ->
+- * unshare_fd() -> dup_fd() -> sane_fdtable_size() we can end
+- * up having a 'max_fds' value that isn't already aligned.
+- *
+- * Rather than make close_range() have to worry about this,
+- * just make that BITS_PER_LONG alignment be part of a sane
+- * fdtable size. Becuase that's really what it is.
++ * punch_hole is optional - when close_range() is asked to unshare
++ * and close, we don't need to copy descriptors in that range, so
++ * a smaller cloned descriptor table might suffice if the last
++ * currently opened descriptor falls into that range.
+  */
+-static unsigned int sane_fdtable_size(struct fdtable *fdt, unsigned int max_fds)
++static unsigned int sane_fdtable_size(struct fdtable *fdt, struct fd_range *punch_hole)
+ {
+-	unsigned int count;
+-
+-	count = count_open_files(fdt);
+-	if (max_fds < NR_OPEN_DEFAULT)
+-		max_fds = NR_OPEN_DEFAULT;
+-	return ALIGN(min(count, max_fds), BITS_PER_LONG);
++	unsigned int last = find_last_bit(fdt->open_fds, fdt->max_fds);
++
++	if (last == fdt->max_fds)
++		return NR_OPEN_DEFAULT;
++	if (punch_hole && punch_hole->to >= last && punch_hole->from <= last) {
++		last = find_last_bit(fdt->open_fds, punch_hole->from);
++		if (last == punch_hole->from)
++			return NR_OPEN_DEFAULT;
++	}
++	return ALIGN(last + 1, BITS_PER_LONG);
+ }
+ 
+ /*
+- * Allocate a new files structure and copy contents from the
+- * passed in files structure.
+- * errorp will be valid only when the returned files_struct is NULL.
++ * Allocate a new descriptor table and copy contents from the passed in
++ * instance.  Returns a pointer to cloned table on success, ERR_PTR()
++ * on failure.  For 'punch_hole' see sane_fdtable_size().
+  */
+-struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int *errorp)
++struct files_struct *dup_fd(struct files_struct *oldf, struct fd_range *punch_hole)
+ {
+ 	struct files_struct *newf;
+ 	struct file **old_fds, **new_fds;
+ 	unsigned int open_files, i;
+ 	struct fdtable *old_fdt, *new_fdt;
++	int error;
+ 
+-	*errorp = -ENOMEM;
+ 	newf = kmem_cache_alloc(files_cachep, GFP_KERNEL);
+ 	if (!newf)
+-		goto out;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	atomic_set(&newf->count, 1);
+ 
+@@ -341,7 +327,7 @@ struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int
+ 
+ 	spin_lock(&oldf->file_lock);
+ 	old_fdt = files_fdtable(oldf);
+-	open_files = sane_fdtable_size(old_fdt, max_fds);
++	open_files = sane_fdtable_size(old_fdt, punch_hole);
+ 
+ 	/*
+ 	 * Check whether we need to allocate a larger fd array and fd set.
+@@ -354,14 +340,14 @@ struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int
+ 
+ 		new_fdt = alloc_fdtable(open_files - 1);
+ 		if (!new_fdt) {
+-			*errorp = -ENOMEM;
++			error = -ENOMEM;
+ 			goto out_release;
+ 		}
+ 
+ 		/* beyond sysctl_nr_open; nothing to do */
+ 		if (unlikely(new_fdt->max_fds < open_files)) {
+ 			__free_fdtable(new_fdt);
+-			*errorp = -EMFILE;
++			error = -EMFILE;
+ 			goto out_release;
+ 		}
+ 
+@@ -372,7 +358,7 @@ struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int
+ 		 */
+ 		spin_lock(&oldf->file_lock);
+ 		old_fdt = files_fdtable(oldf);
+-		open_files = sane_fdtable_size(old_fdt, max_fds);
++		open_files = sane_fdtable_size(old_fdt, punch_hole);
+ 	}
+ 
+ 	copy_fd_bitmaps(new_fdt, old_fdt, open_files / BITS_PER_LONG);
+@@ -406,8 +392,7 @@ struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int
+ 
+ out_release:
+ 	kmem_cache_free(files_cachep, newf);
+-out:
+-	return NULL;
++	return ERR_PTR(error);
+ }
+ 
+ static struct fdtable *close_files(struct files_struct * files)
+@@ -748,37 +733,25 @@ int __close_range(unsigned fd, unsigned max_fd, unsigned int flags)
+ 	if (fd > max_fd)
+ 		return -EINVAL;
+ 
+-	if (flags & CLOSE_RANGE_UNSHARE) {
+-		int ret;
+-		unsigned int max_unshare_fds = NR_OPEN_MAX;
++	if ((flags & CLOSE_RANGE_UNSHARE) && atomic_read(&cur_fds->count) > 1) {
++		struct fd_range range = {fd, max_fd}, *punch_hole = &range;
+ 
+ 		/*
+ 		 * If the caller requested all fds to be made cloexec we always
+ 		 * copy all of the file descriptors since they still want to
+ 		 * use them.
+ 		 */
+-		if (!(flags & CLOSE_RANGE_CLOEXEC)) {
+-			/*
+-			 * If the requested range is greater than the current
+-			 * maximum, we're closing everything so only copy all
+-			 * file descriptors beneath the lowest file descriptor.
+-			 */
+-			rcu_read_lock();
+-			if (max_fd >= last_fd(files_fdtable(cur_fds)))
+-				max_unshare_fds = fd;
+-			rcu_read_unlock();
+-		}
+-
+-		ret = unshare_fd(CLONE_FILES, max_unshare_fds, &fds);
+-		if (ret)
+-			return ret;
++		if (flags & CLOSE_RANGE_CLOEXEC)
++			punch_hole = NULL;
+ 
++		fds = dup_fd(cur_fds, punch_hole);
++		if (IS_ERR(fds))
++			return PTR_ERR(fds);
+ 		/*
+ 		 * We used to share our file descriptor table, and have now
+ 		 * created a private one, make sure we're using it below.
+ 		 */
+-		if (fds)
+-			swap(cur_fds, fds);
++		swap(cur_fds, fds);
+ 	}
+ 
+ 	if (flags & CLOSE_RANGE_CLOEXEC)
+diff --git a/fs/inode.c b/fs/inode.c
+index 3df67672986aa9..aeb07c3b8f24e0 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -593,6 +593,7 @@ void dump_mapping(const struct address_space *mapping)
+ 	struct hlist_node *dentry_first;
+ 	struct dentry *dentry_ptr;
+ 	struct dentry dentry;
++	char fname[64] = {};
+ 	unsigned long ino;
+ 
+ 	/*
+@@ -629,11 +630,14 @@ void dump_mapping(const struct address_space *mapping)
+ 		return;
+ 	}
+ 
++	if (strncpy_from_kernel_nofault(fname, dentry.d_name.name, 63) < 0)
++		strscpy(fname, "<invalid>");
+ 	/*
+-	 * if dentry is corrupted, the %pd handler may still crash,
+-	 * but it's unlikely that we reach here with a corrupt mapping
++	 * Even if strncpy_from_kernel_nofault() succeeded,
++	 * the fname could be unreliable
+ 	 */
+-	pr_warn("aops:%ps ino:%lx dentry name:\"%pd\"\n", a_ops, ino, &dentry);
++	pr_warn("aops:%ps ino:%lx dentry name(?):\"%s\"\n",
++		a_ops, ino, fname);
+ }
+ 
+ void clear_inode(struct inode *inode)
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index d4655899027905..b9b035a5e7793b 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1226,7 +1226,15 @@ static int iomap_write_delalloc_release(struct inode *inode,
+ 			error = data_end;
+ 			goto out_unlock;
+ 		}
+-		WARN_ON_ONCE(data_end <= start_byte);
++
++		/*
++		 * If we race with post-direct I/O invalidation of the page cache,
++		 * there might be no data left at start_byte.
++		 */
++		if (data_end == start_byte)
++			continue;
++
++		WARN_ON_ONCE(data_end < start_byte);
+ 		WARN_ON_ONCE(data_end > scan_end_byte);
+ 
+ 		error = iomap_write_delalloc_scan(inode, &punch_start_byte,
+@@ -1366,11 +1374,15 @@ iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len,
+ 	struct iomap_iter iter = {
+ 		.inode		= inode,
+ 		.pos		= pos,
+-		.len		= len,
+ 		.flags		= IOMAP_WRITE | IOMAP_UNSHARE,
+ 	};
++	loff_t size = i_size_read(inode);
+ 	int ret;
+ 
++	if (pos < 0 || pos >= size)
++		return 0;
++
++	iter.len = min(len, size - pos);
+ 	while ((ret = iomap_iter(&iter, ops)) > 0)
+ 		iter.processed = iomap_unshare_iter(&iter);
+ 	return ret;
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index 951f78634adfaa..b3971e91e8eb80 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -79,17 +79,23 @@ __releases(&journal->j_state_lock)
+ 		if (space_left < nblocks) {
+ 			int chkpt = journal->j_checkpoint_transactions != NULL;
+ 			tid_t tid = 0;
++			bool has_transaction = false;
+ 
+-			if (journal->j_committing_transaction)
++			if (journal->j_committing_transaction) {
+ 				tid = journal->j_committing_transaction->t_tid;
++				has_transaction = true;
++			}
+ 			spin_unlock(&journal->j_list_lock);
+ 			write_unlock(&journal->j_state_lock);
+ 			if (chkpt) {
+ 				jbd2_log_do_checkpoint(journal);
+-			} else if (jbd2_cleanup_journal_tail(journal) == 0) {
+-				/* We were able to recover space; yay! */
++			} else if (jbd2_cleanup_journal_tail(journal) <= 0) {
++				/*
++				 * We were able to recover space or the
++				 * journal was aborted due to an error.
++				 */
+ 				;
+-			} else if (tid) {
++			} else if (has_transaction) {
+ 				/*
+ 				 * jbd2_journal_commit_transaction() may want
+ 				 * to take the checkpoint_mutex if JBD2_FLUSHED
+@@ -407,6 +413,7 @@ unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal,
+ 	tid_t tid = 0;
+ 	unsigned long nr_freed = 0;
+ 	unsigned long freed;
++	bool first_set = false;
+ 
+ again:
+ 	spin_lock(&journal->j_list_lock);
+@@ -426,8 +433,10 @@ unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal,
+ 	else
+ 		transaction = journal->j_checkpoint_transactions;
+ 
+-	if (!first_tid)
++	if (!first_set) {
+ 		first_tid = transaction->t_tid;
++		first_set = true;
++	}
+ 	last_transaction = journal->j_checkpoint_transactions->t_cpprev;
+ 	next_transaction = transaction;
+ 	last_tid = last_transaction->t_tid;
+@@ -457,7 +466,7 @@ unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal,
+ 	spin_unlock(&journal->j_list_lock);
+ 	cond_resched();
+ 
+-	if (*nr_to_scan && next_tid)
++	if (*nr_to_scan && journal->j_shrink_transaction)
+ 		goto again;
+ out:
+ 	trace_jbd2_shrink_checkpoint_list(journal, first_tid, tid, last_tid,
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index c8d9d85e0e871c..cca73b8282d1e8 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -725,7 +725,7 @@ int jbd2_fc_begin_commit(journal_t *journal, tid_t tid)
+ 		return -EINVAL;
+ 
+ 	write_lock(&journal->j_state_lock);
+-	if (tid <= journal->j_commit_sequence) {
++	if (tid_geq(journal->j_commit_sequence, tid)) {
+ 		write_unlock(&journal->j_state_lock);
+ 		return -EALREADY;
+ 	}
+@@ -755,9 +755,9 @@ EXPORT_SYMBOL(jbd2_fc_begin_commit);
+  */
+ static int __jbd2_fc_end_commit(journal_t *journal, tid_t tid, bool fallback)
+ {
+-	jbd2_journal_unlock_updates(journal);
+ 	if (journal->j_fc_cleanup_callback)
+ 		journal->j_fc_cleanup_callback(journal, 0, tid);
++	jbd2_journal_unlock_updates(journal);
+ 	write_lock(&journal->j_state_lock);
+ 	journal->j_flags &= ~JBD2_FAST_COMMIT_ONGOING;
+ 	if (fallback)
+diff --git a/fs/jfs/jfs_discard.c b/fs/jfs/jfs_discard.c
+index 575cb2ba74fc86..5f4b305030ad5e 100644
+--- a/fs/jfs/jfs_discard.c
++++ b/fs/jfs/jfs_discard.c
+@@ -65,7 +65,7 @@ void jfs_issue_discard(struct inode *ip, u64 blkno, u64 nblocks)
+ int jfs_ioc_trim(struct inode *ip, struct fstrim_range *range)
+ {
+ 	struct inode *ipbmap = JFS_SBI(ip->i_sb)->ipbmap;
+-	struct bmap *bmp = JFS_SBI(ip->i_sb)->bmap;
++	struct bmap *bmp;
+ 	struct super_block *sb = ipbmap->i_sb;
+ 	int agno, agno_end;
+ 	u64 start, end, minlen;
+@@ -83,10 +83,15 @@ int jfs_ioc_trim(struct inode *ip, struct fstrim_range *range)
+ 	if (minlen == 0)
+ 		minlen = 1;
+ 
++	down_read(&sb->s_umount);
++	bmp = JFS_SBI(ip->i_sb)->bmap;
++
+ 	if (minlen > bmp->db_agsize ||
+ 	    start >= bmp->db_mapsize ||
+-	    range->len < sb->s_blocksize)
++	    range->len < sb->s_blocksize) {
++		up_read(&sb->s_umount);
+ 		return -EINVAL;
++	}
+ 
+ 	if (end >= bmp->db_mapsize)
+ 		end = bmp->db_mapsize - 1;
+@@ -100,6 +105,8 @@ int jfs_ioc_trim(struct inode *ip, struct fstrim_range *range)
+ 		trimmed += dbDiscardAG(ip, agno, minlen);
+ 		agno++;
+ 	}
++
++	up_read(&sb->s_umount);
+ 	range->len = trimmed << sb->s_blocksize_bits;
+ 
+ 	return 0;
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 0625d1c0d0649a..974ecf5e0d9522 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -2944,9 +2944,10 @@ static void dbAdjTree(dmtree_t *tp, int leafno, int newval, bool is_ctl)
+ static int dbFindLeaf(dmtree_t *tp, int l2nb, int *leafidx, bool is_ctl)
+ {
+ 	int ti, n = 0, k, x = 0;
+-	int max_size;
++	int max_size, max_idx;
+ 
+ 	max_size = is_ctl ? CTLTREESIZE : TREESIZE;
++	max_idx = is_ctl ? LPERCTL : LPERDMAP;
+ 
+ 	/* first check the root of the tree to see if there is
+ 	 * sufficient free space.
+@@ -2978,6 +2979,8 @@ static int dbFindLeaf(dmtree_t *tp, int l2nb, int *leafidx, bool is_ctl)
+ 		 */
+ 		assert(n < 4);
+ 	}
++	if (le32_to_cpu(tp->dmt_leafidx) >= max_idx)
++		return -ENOSPC;
+ 
+ 	/* set the return to the leftmost leaf describing sufficient
+ 	 * free space.
+@@ -3022,7 +3025,7 @@ static int dbFindBits(u32 word, int l2nb)
+ 
+ 	/* scan the word for nb free bits at nb alignments.
+ 	 */
+-	for (bitno = 0; mask != 0; bitno += nb, mask >>= nb) {
++	for (bitno = 0; mask != 0; bitno += nb, mask = (mask >> nb)) {
+ 		if ((mask & word) == mask)
+ 			break;
+ 	}
+diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c
+index 2999ed5d83f5e0..0fb05e314edf60 100644
+--- a/fs/jfs/xattr.c
++++ b/fs/jfs/xattr.c
+@@ -434,6 +434,8 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
+ 	int rc;
+ 	int quota_allocation = 0;
+ 
++	memset(&ea_buf->new_ea, 0, sizeof(ea_buf->new_ea));
++
+ 	/* When fsck.jfs clears a bad ea, it doesn't clear the size */
+ 	if (ji->ea.flag == 0)
+ 		ea_size = 0;
+diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
+index 32bc88bee5d18d..3c7eb43a2ec69b 100644
+--- a/fs/netfs/write_issue.c
++++ b/fs/netfs/write_issue.c
+@@ -408,13 +408,17 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
+ 	folio_unlock(folio);
+ 
+ 	if (fgroup == NETFS_FOLIO_COPY_TO_CACHE) {
+-		if (!fscache_resources_valid(&wreq->cache_resources)) {
++		if (!cache->avail) {
+ 			trace_netfs_folio(folio, netfs_folio_trace_cancel_copy);
+ 			netfs_issue_write(wreq, upload);
+ 			netfs_folio_written_back(folio);
+ 			return 0;
+ 		}
+ 		trace_netfs_folio(folio, netfs_folio_trace_store_copy);
++	} else if (!upload->avail && !cache->avail) {
++		trace_netfs_folio(folio, netfs_folio_trace_cancel_store);
++		netfs_folio_written_back(folio);
++		return 0;
+ 	} else if (!upload->construct) {
+ 		trace_netfs_folio(folio, netfs_folio_trace_store);
+ 	} else {
+diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
+index 14ec1565632090..5cae26917436c0 100644
+--- a/fs/nfsd/netns.h
++++ b/fs/nfsd/netns.h
+@@ -148,6 +148,7 @@ struct nfsd_net {
+ 	u32		s2s_cp_cl_id;
+ 	struct idr	s2s_cp_stateids;
+ 	spinlock_t	s2s_cp_lock;
++	atomic_t	pending_async_copies;
+ 
+ 	/*
+ 	 * Version information
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 2e39cf2e502a33..5768b2ff1d1d13 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -751,15 +751,6 @@ nfsd4_access(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 			   &access->ac_supported);
+ }
+ 
+-static void gen_boot_verifier(nfs4_verifier *verifier, struct net *net)
+-{
+-	__be32 *verf = (__be32 *)verifier->data;
+-
+-	BUILD_BUG_ON(2*sizeof(*verf) != sizeof(verifier->data));
+-
+-	nfsd_copy_write_verifier(verf, net_generic(net, nfsd_net_id));
+-}
+-
+ static __be32
+ nfsd4_commit(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	     union nfsd4_op_u *u)
+@@ -1288,6 +1279,7 @@ static void nfs4_put_copy(struct nfsd4_copy *copy)
+ {
+ 	if (!refcount_dec_and_test(&copy->refcount))
+ 		return;
++	atomic_dec(&copy->cp_nn->pending_async_copies);
+ 	kfree(copy->cp_src);
+ 	kfree(copy);
+ }
+@@ -1630,7 +1622,6 @@ static void nfsd4_init_copy_res(struct nfsd4_copy *copy, bool sync)
+ 		test_bit(NFSD4_COPY_F_COMMITTED, &copy->cp_flags) ?
+ 			NFS_FILE_SYNC : NFS_UNSTABLE;
+ 	nfsd4_copy_set_sync(copy, sync);
+-	gen_boot_verifier(&copy->cp_res.wr_verifier, copy->cp_clp->net);
+ }
+ 
+ static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy,
+@@ -1803,9 +1794,11 @@ static __be32
+ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		union nfsd4_op_u *u)
+ {
++	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
++	struct nfsd4_copy *async_copy = NULL;
+ 	struct nfsd4_copy *copy = &u->copy;
++	struct nfsd42_write_res *result;
+ 	__be32 status;
+-	struct nfsd4_copy *async_copy = NULL;
+ 
+ 	/*
+ 	 * Currently, async COPY is not reliable. Force all COPY
+@@ -1814,6 +1807,9 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	 */
+ 	nfsd4_copy_set_sync(copy, true);
+ 
++	result = &copy->cp_res;
++	nfsd_copy_write_verifier((__be32 *)&result->wr_verifier.data, nn);
++
+ 	copy->cp_clp = cstate->clp;
+ 	if (nfsd4_ssc_is_inter(copy)) {
+ 		trace_nfsd_copy_inter(copy);
+@@ -1838,12 +1834,16 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	memcpy(&copy->fh, &cstate->current_fh.fh_handle,
+ 		sizeof(struct knfsd_fh));
+ 	if (nfsd4_copy_is_async(copy)) {
+-		struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+-
+-		status = nfserrno(-ENOMEM);
+ 		async_copy = kzalloc(sizeof(struct nfsd4_copy), GFP_KERNEL);
+ 		if (!async_copy)
+ 			goto out_err;
++		async_copy->cp_nn = nn;
++		/* Arbitrary cap on number of pending async copy operations */
++		if (atomic_inc_return(&nn->pending_async_copies) >
++				(int)rqstp->rq_pool->sp_nrthreads) {
++			atomic_dec(&nn->pending_async_copies);
++			goto out_err;
++		}
+ 		INIT_LIST_HEAD(&async_copy->copies);
+ 		refcount_set(&async_copy->refcount, 1);
+ 		async_copy->cp_src = kmalloc(sizeof(*async_copy->cp_src), GFP_KERNEL);
+@@ -1851,8 +1851,8 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 			goto out_err;
+ 		if (!nfs4_init_copy_state(nn, copy))
+ 			goto out_err;
+-		memcpy(&copy->cp_res.cb_stateid, &copy->cp_stateid.cs_stid,
+-			sizeof(copy->cp_res.cb_stateid));
++		memcpy(&result->cb_stateid, &copy->cp_stateid.cs_stid,
++			sizeof(result->cb_stateid));
+ 		dup_copy_fields(copy, async_copy);
+ 		async_copy->copy_task = kthread_create(nfsd4_do_async_copy,
+ 				async_copy, "%s", "copy thread");
+@@ -1883,7 +1883,7 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	}
+ 	if (async_copy)
+ 		cleanup_async_copy(async_copy);
+-	status = nfserrno(-ENOMEM);
++	status = nfserr_jukebox;
+ 	goto out;
+ }
+ 
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index fe06779ea527a1..3837f4e417247e 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1077,7 +1077,8 @@ static void nfs4_free_deleg(struct nfs4_stid *stid)
+  * When a delegation is recalled, the filehandle is stored in the "new"
+  * filter.
+  * Every 30 seconds we swap the filters and clear the "new" one,
+- * unless both are empty of course.
++ * unless both are empty of course.  This results in delegations for a
++ * given filehandle being blocked for between 30 and 60 seconds.
+  *
+  * Each filter is 256 bits.  We hash the filehandle to 32bit and use the
+  * low 3 bytes as hash-table indices.
+@@ -1106,9 +1107,9 @@ static int delegation_blocked(struct knfsd_fh *fh)
+ 		if (ktime_get_seconds() - bd->swap_time > 30) {
+ 			bd->entries -= bd->old_entries;
+ 			bd->old_entries = bd->entries;
++			bd->new = 1-bd->new;
+ 			memset(bd->set[bd->new], 0,
+ 			       sizeof(bd->set[0]));
+-			bd->new = 1-bd->new;
+ 			bd->swap_time = ktime_get_seconds();
+ 		}
+ 		spin_unlock(&blocked_delegations_lock);
+@@ -8574,6 +8575,7 @@ static int nfs4_state_create_net(struct net *net)
+ 	spin_lock_init(&nn->client_lock);
+ 	spin_lock_init(&nn->s2s_cp_lock);
+ 	idr_init(&nn->s2s_cp_stateids);
++	atomic_set(&nn->pending_async_copies, 0);
+ 
+ 	spin_lock_init(&nn->blocked_locks_lock);
+ 	INIT_LIST_HEAD(&nn->blocked_locks_lru);
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 0869062280ccc9..2ad083d71a0370 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -1245,14 +1245,6 @@ nfsd4_decode_putfh(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ 	return nfs_ok;
+ }
+ 
+-static __be32
+-nfsd4_decode_putpubfh(struct nfsd4_compoundargs *argp, union nfsd4_op_u *p)
+-{
+-	if (argp->minorversion == 0)
+-		return nfs_ok;
+-	return nfserr_notsupp;
+-}
+-
+ static __be32
+ nfsd4_decode_read(struct nfsd4_compoundargs *argp, union nfsd4_op_u *u)
+ {
+@@ -2374,7 +2366,7 @@ static const nfsd4_dec nfsd4_dec_ops[] = {
+ 	[OP_OPEN_CONFIRM]	= nfsd4_decode_open_confirm,
+ 	[OP_OPEN_DOWNGRADE]	= nfsd4_decode_open_downgrade,
+ 	[OP_PUTFH]		= nfsd4_decode_putfh,
+-	[OP_PUTPUBFH]		= nfsd4_decode_putpubfh,
++	[OP_PUTPUBFH]		= nfsd4_decode_noop,
+ 	[OP_PUTROOTFH]		= nfsd4_decode_noop,
+ 	[OP_READ]		= nfsd4_decode_read,
+ 	[OP_READDIR]		= nfsd4_decode_readdir,
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 0f9b4f7b56cd88..37f619ccafce05 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1746,7 +1746,7 @@ int nfsd_nl_threads_get_doit(struct sk_buff *skb, struct genl_info *info)
+ 			struct svc_pool *sp = &nn->nfsd_serv->sv_pools[i];
+ 
+ 			err = nla_put_u32(skb, NFSD_A_SERVER_THREADS,
+-					  atomic_read(&sp->sp_nrthreads));
++					  sp->sp_nrthreads);
+ 			if (err)
+ 				goto err_unlock;
+ 		}
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index 89d7918de7b1a2..877f9263565492 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -705,7 +705,7 @@ int nfsd_get_nrthreads(int n, int *nthreads, struct net *net)
+ 
+ 	if (serv)
+ 		for (i = 0; i < serv->sv_nrpools && i < n; i++)
+-			nthreads[i] = atomic_read(&serv->sv_pools[i].sp_nrthreads);
++			nthreads[i] = serv->sv_pools[i].sp_nrthreads;
+ 	return 0;
+ }
+ 
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 29b1f3613800a3..911e5e0a17af7d 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -100,6 +100,7 @@ nfserrno (int errno)
+ 		{ nfserr_io, -EUCLEAN },
+ 		{ nfserr_perm, -ENOKEY },
+ 		{ nfserr_no_grace, -ENOGRACE},
++		{ nfserr_io, -EBADMSG },
+ 	};
+ 	int	i;
+ 
+diff --git a/fs/nfsd/xdr4.h b/fs/nfsd/xdr4.h
+index fbdd42cde1fa5b..2a21a7662e030c 100644
+--- a/fs/nfsd/xdr4.h
++++ b/fs/nfsd/xdr4.h
+@@ -713,6 +713,7 @@ struct nfsd4_copy {
+ 	struct nfsd4_ssc_umount_item *ss_nsui;
+ 	struct nfs_fh		c_fh;
+ 	nfs4_stateid		stateid;
++	struct nfsd_net		*cp_nn;
+ };
+ 
+ static inline void nfsd4_copy_set_sync(struct nfsd4_copy *copy, bool sync)
+diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
+index 6be175a1ab3ce1..75991db2343e2d 100644
+--- a/fs/ocfs2/aops.c
++++ b/fs/ocfs2/aops.c
+@@ -156,9 +156,8 @@ int ocfs2_get_block(struct inode *inode, sector_t iblock,
+ 	err = ocfs2_extent_map_get_blocks(inode, iblock, &p_blkno, &count,
+ 					  &ext_flags);
+ 	if (err) {
+-		mlog(ML_ERROR, "Error %d from get_blocks(0x%p, %llu, 1, "
+-		     "%llu, NULL)\n", err, inode, (unsigned long long)iblock,
+-		     (unsigned long long)p_blkno);
++		mlog(ML_ERROR, "get_blocks() failed, inode: 0x%p, "
++		     "block: %llu\n", inode, (unsigned long long)iblock);
+ 		goto bail;
+ 	}
+ 
+diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c
+index cdb9b9bdea1f6d..8f714406528d62 100644
+--- a/fs/ocfs2/buffer_head_io.c
++++ b/fs/ocfs2/buffer_head_io.c
+@@ -235,7 +235,6 @@ int ocfs2_read_blocks(struct ocfs2_caching_info *ci, u64 block, int nr,
+ 		if (bhs[i] == NULL) {
+ 			bhs[i] = sb_getblk(sb, block++);
+ 			if (bhs[i] == NULL) {
+-				ocfs2_metadata_cache_io_unlock(ci);
+ 				status = -ENOMEM;
+ 				mlog_errno(status);
+ 				/* Don't forget to put previous bh! */
+@@ -389,7 +388,8 @@ int ocfs2_read_blocks(struct ocfs2_caching_info *ci, u64 block, int nr,
+ 		/* Always set the buffer in the cache, even if it was
+ 		 * a forced read, or read-ahead which hasn't yet
+ 		 * completed. */
+-		ocfs2_set_buffer_uptodate(ci, bh);
++		if (bh)
++			ocfs2_set_buffer_uptodate(ci, bh);
+ 	}
+ 	ocfs2_metadata_cache_io_unlock(ci);
+ 
+diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
+index 530fba34f6d312..1bf188b6866a67 100644
+--- a/fs/ocfs2/journal.c
++++ b/fs/ocfs2/journal.c
+@@ -1055,7 +1055,7 @@ void ocfs2_journal_shutdown(struct ocfs2_super *osb)
+ 	if (!igrab(inode))
+ 		BUG();
+ 
+-	num_running_trans = atomic_read(&(osb->journal->j_num_trans));
++	num_running_trans = atomic_read(&(journal->j_num_trans));
+ 	trace_ocfs2_journal_shutdown(num_running_trans);
+ 
+ 	/* Do a commit_cache here. It will flush our journal, *and*
+@@ -1074,9 +1074,10 @@ void ocfs2_journal_shutdown(struct ocfs2_super *osb)
+ 		osb->commit_task = NULL;
+ 	}
+ 
+-	BUG_ON(atomic_read(&(osb->journal->j_num_trans)) != 0);
++	BUG_ON(atomic_read(&(journal->j_num_trans)) != 0);
+ 
+-	if (ocfs2_mount_local(osb)) {
++	if (ocfs2_mount_local(osb) &&
++	    (journal->j_journal->j_flags & JBD2_LOADED)) {
+ 		jbd2_journal_lock_updates(journal->j_journal);
+ 		status = jbd2_journal_flush(journal->j_journal, 0);
+ 		jbd2_journal_unlock_updates(journal->j_journal);
+diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c
+index 5df34561c551c6..8ac42ea81a17bd 100644
+--- a/fs/ocfs2/localalloc.c
++++ b/fs/ocfs2/localalloc.c
+@@ -1002,6 +1002,25 @@ static int ocfs2_sync_local_to_main(struct ocfs2_super *osb,
+ 		start = bit_off + 1;
+ 	}
+ 
++	/* clear the contiguous bits until the end boundary */
++	if (count) {
++		blkno = la_start_blk +
++			ocfs2_clusters_to_blocks(osb->sb,
++					start - count);
++
++		trace_ocfs2_sync_local_to_main_free(
++				count, start - count,
++				(unsigned long long)la_start_blk,
++				(unsigned long long)blkno);
++
++		status = ocfs2_release_clusters(handle,
++				main_bm_inode,
++				main_bm_bh, blkno,
++				count);
++		if (status < 0)
++			mlog_errno(status);
++	}
++
+ bail:
+ 	if (status)
+ 		mlog_errno(status);
+diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
+index 8ce462c64c51ba..73d3367c533b8a 100644
+--- a/fs/ocfs2/quota_local.c
++++ b/fs/ocfs2/quota_local.c
+@@ -692,7 +692,7 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
+ 	int status;
+ 	struct buffer_head *bh = NULL;
+ 	struct ocfs2_quota_recovery *rec;
+-	int locked = 0;
++	int locked = 0, global_read = 0;
+ 
+ 	info->dqi_max_spc_limit = 0x7fffffffffffffffLL;
+ 	info->dqi_max_ino_limit = 0x7fffffffffffffffLL;
+@@ -700,6 +700,7 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
+ 	if (!oinfo) {
+ 		mlog(ML_ERROR, "failed to allocate memory for ocfs2 quota"
+ 			       " info.");
++		status = -ENOMEM;
+ 		goto out_err;
+ 	}
+ 	info->dqi_priv = oinfo;
+@@ -712,6 +713,7 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
+ 	status = ocfs2_global_read_info(sb, type);
+ 	if (status < 0)
+ 		goto out_err;
++	global_read = 1;
+ 
+ 	status = ocfs2_inode_lock(lqinode, &oinfo->dqi_lqi_bh, 1);
+ 	if (status < 0) {
+@@ -782,10 +784,12 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
+ 		if (locked)
+ 			ocfs2_inode_unlock(lqinode, 1);
+ 		ocfs2_release_local_quota_bitmaps(&oinfo->dqi_chunk);
++		if (global_read)
++			cancel_delayed_work_sync(&oinfo->dqi_sync_work);
+ 		kfree(oinfo);
+ 	}
+ 	brelse(bh);
+-	return -1;
++	return status;
+ }
+ 
+ /* Write local info to quota file */
+diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
+index 1f303b1adf1ab9..53a0961f114d11 100644
+--- a/fs/ocfs2/refcounttree.c
++++ b/fs/ocfs2/refcounttree.c
+@@ -25,6 +25,7 @@
+ #include "namei.h"
+ #include "ocfs2_trace.h"
+ #include "file.h"
++#include "symlink.h"
+ 
+ #include <linux/bio.h>
+ #include <linux/blkdev.h>
+@@ -4155,8 +4156,9 @@ static int __ocfs2_reflink(struct dentry *old_dentry,
+ 	int ret;
+ 	struct inode *inode = d_inode(old_dentry);
+ 	struct buffer_head *new_bh = NULL;
++	struct ocfs2_inode_info *oi = OCFS2_I(inode);
+ 
+-	if (OCFS2_I(inode)->ip_flags & OCFS2_INODE_SYSTEM_FILE) {
++	if (oi->ip_flags & OCFS2_INODE_SYSTEM_FILE) {
+ 		ret = -EINVAL;
+ 		mlog_errno(ret);
+ 		goto out;
+@@ -4182,6 +4184,26 @@ static int __ocfs2_reflink(struct dentry *old_dentry,
+ 		goto out_unlock;
+ 	}
+ 
++	if ((oi->ip_dyn_features & OCFS2_HAS_XATTR_FL) &&
++	    (oi->ip_dyn_features & OCFS2_INLINE_XATTR_FL)) {
++		/*
++		 * Adjust extent record count to reserve space for extended attribute.
++		 * Inline data count had been adjusted in ocfs2_duplicate_inline_data().
++		 */
++		struct ocfs2_inode_info *new_oi = OCFS2_I(new_inode);
++
++		if (!(new_oi->ip_dyn_features & OCFS2_INLINE_DATA_FL) &&
++		    !(ocfs2_inode_is_fast_symlink(new_inode))) {
++			struct ocfs2_dinode *new_di = (struct ocfs2_dinode *)new_bh->b_data;
++			struct ocfs2_dinode *old_di = (struct ocfs2_dinode *)old_bh->b_data;
++			struct ocfs2_extent_list *el = &new_di->id2.i_list;
++			int inline_size = le16_to_cpu(old_di->i_xattr_inline_size);
++
++			le16_add_cpu(&el->l_count, -(inline_size /
++					sizeof(struct ocfs2_extent_rec)));
++		}
++	}
++
+ 	ret = ocfs2_create_reflink_node(inode, old_bh,
+ 					new_inode, new_bh, preserve);
+ 	if (ret) {
+@@ -4189,7 +4211,7 @@ static int __ocfs2_reflink(struct dentry *old_dentry,
+ 		goto inode_unlock;
+ 	}
+ 
+-	if (OCFS2_I(inode)->ip_dyn_features & OCFS2_HAS_XATTR_FL) {
++	if (oi->ip_dyn_features & OCFS2_HAS_XATTR_FL) {
+ 		ret = ocfs2_reflink_xattrs(inode, old_bh,
+ 					   new_inode, new_bh,
+ 					   preserve);
+diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c
+index 35c0cc2a51af82..fb1140c52f07cb 100644
+--- a/fs/ocfs2/xattr.c
++++ b/fs/ocfs2/xattr.c
+@@ -6520,16 +6520,7 @@ static int ocfs2_reflink_xattr_inline(struct ocfs2_xattr_reflink *args)
+ 	}
+ 
+ 	new_oi = OCFS2_I(args->new_inode);
+-	/*
+-	 * Adjust extent record count to reserve space for extended attribute.
+-	 * Inline data count had been adjusted in ocfs2_duplicate_inline_data().
+-	 */
+-	if (!(new_oi->ip_dyn_features & OCFS2_INLINE_DATA_FL) &&
+-	    !(ocfs2_inode_is_fast_symlink(args->new_inode))) {
+-		struct ocfs2_extent_list *el = &new_di->id2.i_list;
+-		le16_add_cpu(&el->l_count, -(inline_size /
+-					sizeof(struct ocfs2_extent_rec)));
+-	}
++
+ 	spin_lock(&new_oi->ip_lock);
+ 	new_oi->ip_dyn_features |= OCFS2_HAS_XATTR_FL | OCFS2_INLINE_XATTR_FL;
+ 	new_di->i_dyn_features = cpu_to_le16(new_oi->ip_dyn_features);
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index a5ef2005a2cc54..051a802893a184 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -243,8 +243,24 @@ static int ovl_verify_area(loff_t pos, loff_t pos2, loff_t len, loff_t totlen)
+ 	return 0;
+ }
+ 
++static int ovl_sync_file(struct path *path)
++{
++	struct file *new_file;
++	int err;
++
++	new_file = ovl_path_open(path, O_LARGEFILE | O_RDONLY);
++	if (IS_ERR(new_file))
++		return PTR_ERR(new_file);
++
++	err = vfs_fsync(new_file, 0);
++	fput(new_file);
++
++	return err;
++}
++
+ static int ovl_copy_up_file(struct ovl_fs *ofs, struct dentry *dentry,
+-			    struct file *new_file, loff_t len)
++			    struct file *new_file, loff_t len,
++			    bool datasync)
+ {
+ 	struct path datapath;
+ 	struct file *old_file;
+@@ -342,7 +358,8 @@ static int ovl_copy_up_file(struct ovl_fs *ofs, struct dentry *dentry,
+ 
+ 		len -= bytes;
+ 	}
+-	if (!error && ovl_should_sync(ofs))
++	/* call fsync once, either now or later along with metadata */
++	if (!error && ovl_should_sync(ofs) && datasync)
+ 		error = vfs_fsync(new_file, 0);
+ out_fput:
+ 	fput(old_file);
+@@ -574,6 +591,7 @@ struct ovl_copy_up_ctx {
+ 	bool indexed;
+ 	bool metacopy;
+ 	bool metacopy_digest;
++	bool metadata_fsync;
+ };
+ 
+ static int ovl_link_up(struct ovl_copy_up_ctx *c)
+@@ -634,7 +652,8 @@ static int ovl_copy_up_data(struct ovl_copy_up_ctx *c, const struct path *temp)
+ 	if (IS_ERR(new_file))
+ 		return PTR_ERR(new_file);
+ 
+-	err = ovl_copy_up_file(ofs, c->dentry, new_file, c->stat.size);
++	err = ovl_copy_up_file(ofs, c->dentry, new_file, c->stat.size,
++			       !c->metadata_fsync);
+ 	fput(new_file);
+ 
+ 	return err;
+@@ -701,6 +720,10 @@ static int ovl_copy_up_metadata(struct ovl_copy_up_ctx *c, struct dentry *temp)
+ 		err = ovl_set_attr(ofs, temp, &c->stat);
+ 	inode_unlock(temp->d_inode);
+ 
++	/* fsync metadata before moving it into upper dir */
++	if (!err && ovl_should_sync(ofs) && c->metadata_fsync)
++		err = ovl_sync_file(&upperpath);
++
+ 	return err;
+ }
+ 
+@@ -860,7 +883,8 @@ static int ovl_copy_up_tmpfile(struct ovl_copy_up_ctx *c)
+ 
+ 	temp = tmpfile->f_path.dentry;
+ 	if (!c->metacopy && c->stat.size) {
+-		err = ovl_copy_up_file(ofs, c->dentry, tmpfile, c->stat.size);
++		err = ovl_copy_up_file(ofs, c->dentry, tmpfile, c->stat.size,
++				       !c->metadata_fsync);
+ 		if (err)
+ 			goto out_fput;
+ 	}
+@@ -1135,6 +1159,17 @@ static int ovl_copy_up_one(struct dentry *parent, struct dentry *dentry,
+ 	    !kgid_has_mapping(current_user_ns(), ctx.stat.gid))
+ 		return -EOVERFLOW;
+ 
++	/*
++	 * With metacopy disabled, we fsync after final metadata copyup, for
++	 * both regular files and directories to get atomic copyup semantics
++	 * on filesystems that do not use strict metadata ordering (e.g. ubifs).
++	 *
++	 * With metacopy enabled we want to avoid fsync on all meta copyup
++	 * that will hurt performance of workloads such as chown -R, so we
++	 * only fsync on data copyup as legacy behavior.
++	 */
++	ctx.metadata_fsync = !OVL_FS(dentry->d_sb)->config.metacopy &&
++			     (S_ISREG(ctx.stat.mode) || S_ISDIR(ctx.stat.mode));
+ 	ctx.metacopy = ovl_need_meta_copy_up(dentry, ctx.stat.mode, flags);
+ 
+ 	if (parent) {
+diff --git a/fs/overlayfs/params.c b/fs/overlayfs/params.c
+index 4860fcc4611bb7..f9e3d139c07e8a 100644
+--- a/fs/overlayfs/params.c
++++ b/fs/overlayfs/params.c
+@@ -782,11 +782,6 @@ int ovl_fs_params_verify(const struct ovl_fs_context *ctx,
+ {
+ 	struct ovl_opt_set set = ctx->set;
+ 
+-	if (ctx->nr_data > 0 && !config->metacopy) {
+-		pr_err("lower data-only dirs require metacopy support.\n");
+-		return -EINVAL;
+-	}
+-
+ 	/* Workdir/index are useless in non-upper mount */
+ 	if (!config->upperdir) {
+ 		if (config->workdir) {
+@@ -938,6 +933,39 @@ int ovl_fs_params_verify(const struct ovl_fs_context *ctx,
+ 		config->metacopy = false;
+ 	}
+ 
++	/*
++	 * Fail if we don't have trusted xattr capability and a feature was
++	 * explicitly requested that requires them.
++	 */
++	if (!config->userxattr && !capable(CAP_SYS_ADMIN)) {
++		if (set.redirect &&
++		    config->redirect_mode != OVL_REDIRECT_NOFOLLOW) {
++			pr_err("redirect_dir requires permission to access trusted xattrs\n");
++			return -EPERM;
++		}
++		if (config->metacopy && set.metacopy) {
++			pr_err("metacopy requires permission to access trusted xattrs\n");
++			return -EPERM;
++		}
++		if (config->verity_mode) {
++			pr_err("verity requires permission to access trusted xattrs\n");
++			return -EPERM;
++		}
++		if (ctx->nr_data > 0) {
++			pr_err("lower data-only dirs require permission to access trusted xattrs\n");
++			return -EPERM;
++		}
++		/*
++		 * Other xattr-dependent features should be disabled without
++		 * great disturbance to the user in ovl_make_workdir().
++		 */
++	}
++
++	if (ctx->nr_data > 0 && !config->metacopy) {
++		pr_err("lower data-only dirs require metacopy support.\n");
++		return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 72a1acd03675cc..f389c69767fa52 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -85,6 +85,7 @@
+ #include <linux/elf.h>
+ #include <linux/pid_namespace.h>
+ #include <linux/user_namespace.h>
++#include <linux/fs_parser.h>
+ #include <linux/fs_struct.h>
+ #include <linux/slab.h>
+ #include <linux/sched/autogroup.h>
+@@ -117,6 +118,40 @@
+ static u8 nlink_tid __ro_after_init;
+ static u8 nlink_tgid __ro_after_init;
+ 
++enum proc_mem_force {
++	PROC_MEM_FORCE_ALWAYS,
++	PROC_MEM_FORCE_PTRACE,
++	PROC_MEM_FORCE_NEVER
++};
++
++static enum proc_mem_force proc_mem_force_override __ro_after_init =
++	IS_ENABLED(CONFIG_PROC_MEM_NO_FORCE) ? PROC_MEM_FORCE_NEVER :
++	IS_ENABLED(CONFIG_PROC_MEM_FORCE_PTRACE) ? PROC_MEM_FORCE_PTRACE :
++	PROC_MEM_FORCE_ALWAYS;
++
++static const struct constant_table proc_mem_force_table[] __initconst = {
++	{ "always", PROC_MEM_FORCE_ALWAYS },
++	{ "ptrace", PROC_MEM_FORCE_PTRACE },
++	{ "never", PROC_MEM_FORCE_NEVER },
++	{ }
++};
++
++static int __init early_proc_mem_force_override(char *buf)
++{
++	if (!buf)
++		return -EINVAL;
++
++	/*
++	 * lookup_constant() defaults to proc_mem_force_override to preserve
++	 * the initial Kconfig choice in case an invalid param gets passed.
++	 */
++	proc_mem_force_override = lookup_constant(proc_mem_force_table,
++						  buf, proc_mem_force_override);
++
++	return 0;
++}
++early_param("proc_mem.force_override", early_proc_mem_force_override);
++
+ struct pid_entry {
+ 	const char *name;
+ 	unsigned int len;
+@@ -835,6 +870,28 @@ static int mem_open(struct inode *inode, struct file *file)
+ 	return ret;
+ }
+ 
++static bool proc_mem_foll_force(struct file *file, struct mm_struct *mm)
++{
++	struct task_struct *task;
++	bool ptrace_active = false;
++
++	switch (proc_mem_force_override) {
++	case PROC_MEM_FORCE_NEVER:
++		return false;
++	case PROC_MEM_FORCE_PTRACE:
++		task = get_proc_task(file_inode(file));
++		if (task) {
++			ptrace_active =	READ_ONCE(task->ptrace) &&
++					READ_ONCE(task->mm) == mm &&
++					READ_ONCE(task->parent) == current;
++			put_task_struct(task);
++		}
++		return ptrace_active;
++	default:
++		return true;
++	}
++}
++
+ static ssize_t mem_rw(struct file *file, char __user *buf,
+ 			size_t count, loff_t *ppos, int write)
+ {
+@@ -855,7 +912,9 @@ static ssize_t mem_rw(struct file *file, char __user *buf,
+ 	if (!mmget_not_zero(mm))
+ 		goto free;
+ 
+-	flags = FOLL_FORCE | (write ? FOLL_WRITE : 0);
++	flags = write ? FOLL_WRITE : 0;
++	if (proc_mem_foll_force(file, mm))
++		flags |= FOLL_FORCE;
+ 
+ 	while (count > 0) {
+ 		size_t this_len = min_t(size_t, count, PAGE_SIZE);
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index dd7b462387a00d..0101f8bcb4f6b1 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -29,8 +29,13 @@ static const struct inode_operations proc_sys_inode_operations;
+ static const struct file_operations proc_sys_dir_file_operations;
+ static const struct inode_operations proc_sys_dir_operations;
+ 
+-/* Support for permanently empty directories */
+-static struct ctl_table sysctl_mount_point[] = { };
++/*
++ * Support for permanently empty directories.
++ * Must be non-empty to avoid sharing an address with other tables.
++ */
++static struct ctl_table sysctl_mount_point[] = {
++	{ }
++};
+ 
+ /**
+  * register_sysctl_mount_point() - registers a sysctl mount point
+@@ -42,7 +47,7 @@ static struct ctl_table sysctl_mount_point[] = { };
+  */
+ struct ctl_table_header *register_sysctl_mount_point(const char *path)
+ {
+-	return register_sysctl(path, sysctl_mount_point);
++	return register_sysctl_sz(path, sysctl_mount_point, 0);
+ }
+ EXPORT_SYMBOL(register_sysctl_mount_point);
+ 
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index a1acf5bd1e3a4a..5955e265b3958a 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -313,8 +313,17 @@ cifs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	struct TCP_Server_Info *server = tcon->ses->server;
+ 	unsigned int xid;
+ 	int rc = 0;
++	const char *full_path;
++	void *page;
+ 
+ 	xid = get_xid();
++	page = alloc_dentry_path();
++
++	full_path = build_path_from_dentry(dentry, page);
++	if (IS_ERR(full_path)) {
++		rc = PTR_ERR(full_path);
++		goto statfs_out;
++	}
+ 
+ 	if (le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength) > 0)
+ 		buf->f_namelen =
+@@ -330,8 +339,10 @@ cifs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	buf->f_ffree = 0;	/* unlimited */
+ 
+ 	if (server->ops->queryfs)
+-		rc = server->ops->queryfs(xid, tcon, cifs_sb, buf);
++		rc = server->ops->queryfs(xid, tcon, full_path, cifs_sb, buf);
+ 
++statfs_out:
++	free_dentry_path(page);
+ 	free_xid(xid);
+ 	return rc;
+ }
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 552792f28122c0..0ac026cec830ff 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -482,7 +482,7 @@ struct smb_version_operations {
+ 			__u16 net_fid, struct cifsInodeInfo *cifs_inode);
+ 	/* query remote filesystem */
+ 	int (*queryfs)(const unsigned int, struct cifs_tcon *,
+-		       struct cifs_sb_info *, struct kstatfs *);
++		       const char *, struct cifs_sb_info *, struct kstatfs *);
+ 	/* send mandatory brlock to the server */
+ 	int (*mand_lock)(const unsigned int, struct cifsFileInfo *, __u64,
+ 			 __u64, __u32, int, int, bool);
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index 73e2e6c230b735..54579a2003ac6a 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -800,10 +800,6 @@ static void cifs_open_info_to_fattr(struct cifs_fattr *fattr,
+ 		fattr->cf_mode = S_IFREG | cifs_sb->ctx->file_mode;
+ 		fattr->cf_dtype = DT_REG;
+ 
+-		/* clear write bits if ATTR_READONLY is set */
+-		if (fattr->cf_cifsattrs & ATTR_READONLY)
+-			fattr->cf_mode &= ~(S_IWUGO);
+-
+ 		/*
+ 		 * Don't accept zero nlink from non-unix servers unless
+ 		 * delete is pending.  Instead mark it as unknown.
+@@ -816,6 +812,10 @@ static void cifs_open_info_to_fattr(struct cifs_fattr *fattr,
+ 		}
+ 	}
+ 
++	/* clear write bits if ATTR_READONLY is set */
++	if (fattr->cf_cifsattrs & ATTR_READONLY)
++		fattr->cf_mode &= ~(S_IWUGO);
++
+ out_reparse:
+ 	if (S_ISLNK(fattr->cf_mode)) {
+ 		if (likely(data->symlink_target))
+@@ -1233,11 +1233,14 @@ static int cifs_get_fattr(struct cifs_open_info_data *data,
+ 				 __func__, rc);
+ 			goto out;
+ 		}
+-	}
+-
+-	/* fill in remaining high mode bits e.g. SUID, VTX */
+-	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL)
++	} else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL)
++		/* fill in remaining high mode bits e.g. SUID, VTX */
+ 		cifs_sfu_mode(fattr, full_path, cifs_sb, xid);
++	else if (!(tcon->posix_extensions))
++		/* clear write bits if ATTR_READONLY is set */
++		if (fattr->cf_cifsattrs & ATTR_READONLY)
++			fattr->cf_mode &= ~(S_IWUGO);
++
+ 
+ 	/* check for Minshall+French symlinks */
+ 	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MF_SYMLINKS) {
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index 48c27581ec511c..ad0e0de9a165d4 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -320,15 +320,21 @@ static int parse_reparse_posix(struct reparse_posix_data *buf,
+ 	unsigned int len;
+ 	u64 type;
+ 
++	len = le16_to_cpu(buf->ReparseDataLength);
++	if (len < sizeof(buf->InodeType)) {
++		cifs_dbg(VFS, "srv returned malformed nfs buffer\n");
++		return -EIO;
++	}
++
++	len -= sizeof(buf->InodeType);
++
+ 	switch ((type = le64_to_cpu(buf->InodeType))) {
+ 	case NFS_SPECFILE_LNK:
+-		len = le16_to_cpu(buf->ReparseDataLength);
+ 		data->symlink_target = cifs_strndup_from_utf16(buf->DataBuffer,
+ 							       len, true,
+ 							       cifs_sb->local_nls);
+ 		if (!data->symlink_target)
+ 			return -ENOMEM;
+-		convert_delimiter(data->symlink_target, '/');
+ 		cifs_dbg(FYI, "%s: target path: %s\n",
+ 			 __func__, data->symlink_target);
+ 		break;
+@@ -482,12 +488,18 @@ bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb,
+ 	u32 tag = data->reparse.tag;
+ 
+ 	if (tag == IO_REPARSE_TAG_NFS && buf) {
++		if (le16_to_cpu(buf->ReparseDataLength) < sizeof(buf->InodeType))
++			return false;
+ 		switch (le64_to_cpu(buf->InodeType)) {
+ 		case NFS_SPECFILE_CHR:
++			if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8)
++				return false;
+ 			fattr->cf_mode |= S_IFCHR;
+ 			fattr->cf_rdev = reparse_nfs_mkdev(buf);
+ 			break;
+ 		case NFS_SPECFILE_BLK:
++			if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8)
++				return false;
+ 			fattr->cf_mode |= S_IFBLK;
+ 			fattr->cf_rdev = reparse_nfs_mkdev(buf);
+ 			break;
+diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
+index e1f2feb56f45f6..8c03250d85ae0c 100644
+--- a/fs/smb/client/smb1ops.c
++++ b/fs/smb/client/smb1ops.c
+@@ -909,7 +909,7 @@ cifs_oplock_response(struct cifs_tcon *tcon, __u64 persistent_fid,
+ 
+ static int
+ cifs_queryfs(const unsigned int xid, struct cifs_tcon *tcon,
+-	     struct cifs_sb_info *cifs_sb, struct kstatfs *buf)
++	     const char *path, struct cifs_sb_info *cifs_sb, struct kstatfs *buf)
+ {
+ 	int rc = -EOPNOTSUPP;
+ 
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 11a1c53c64e0bc..a6dab60e2c01ef 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -1205,9 +1205,12 @@ struct inode *smb2_get_reparse_inode(struct cifs_open_info_data *data,
+ 	struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+ 	struct cifsFileInfo *cfile;
+ 	struct inode *new = NULL;
++	int out_buftype[4] = {};
++	struct kvec out_iov[4] = {};
+ 	struct kvec in_iov[2];
+ 	int cmds[2];
+ 	int rc;
++	int i;
+ 
+ 	oparms = CIFS_OPARMS(cifs_sb, tcon, full_path,
+ 			     SYNCHRONIZE | DELETE |
+@@ -1228,7 +1231,7 @@ struct inode *smb2_get_reparse_inode(struct cifs_open_info_data *data,
+ 		cmds[1] = SMB2_OP_POSIX_QUERY_INFO;
+ 		cifs_get_writable_path(tcon, full_path, FIND_WR_ANY, &cfile);
+ 		rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, &oparms,
+-				      in_iov, cmds, 2, cfile, NULL, NULL, NULL);
++				      in_iov, cmds, 2, cfile, out_iov, out_buftype, NULL);
+ 		if (!rc) {
+ 			rc = smb311_posix_get_inode_info(&new, full_path,
+ 							 data, sb, xid);
+@@ -1237,12 +1240,29 @@ struct inode *smb2_get_reparse_inode(struct cifs_open_info_data *data,
+ 		cmds[1] = SMB2_OP_QUERY_INFO;
+ 		cifs_get_writable_path(tcon, full_path, FIND_WR_ANY, &cfile);
+ 		rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, &oparms,
+-				      in_iov, cmds, 2, cfile, NULL, NULL, NULL);
++				      in_iov, cmds, 2, cfile, out_iov, out_buftype, NULL);
+ 		if (!rc) {
+ 			rc = cifs_get_inode_info(&new, full_path,
+ 						 data, sb, xid, NULL);
+ 		}
+ 	}
++
++
++	/*
++	 * If CREATE was successful but SMB2_OP_SET_REPARSE failed then
++	 * remove the intermediate object created by CREATE. Otherwise
++	 * empty object stay on the server when reparse call failed.
++	 */
++	if (rc &&
++	    out_iov[0].iov_base != NULL && out_buftype[0] != CIFS_NO_BUFFER &&
++	    ((struct smb2_hdr *)out_iov[0].iov_base)->Status == STATUS_SUCCESS &&
++	    (out_iov[1].iov_base == NULL || out_buftype[1] == CIFS_NO_BUFFER ||
++	     ((struct smb2_hdr *)out_iov[1].iov_base)->Status != STATUS_SUCCESS))
++		smb2_unlink(xid, tcon, full_path, cifs_sb, NULL);
++
++	for (i = 0; i < ARRAY_SIZE(out_buftype); i++)
++		free_rsp_buf(out_buftype[i], out_iov[i].iov_base);
++
+ 	return rc ? ERR_PTR(rc) : new;
+ }
+ 
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 1d6e8eacdd742b..7ea02619e67340 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -2816,7 +2816,7 @@ smb2_query_info_compound(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ static int
+ smb2_queryfs(const unsigned int xid, struct cifs_tcon *tcon,
+-	     struct cifs_sb_info *cifs_sb, struct kstatfs *buf)
++	     const char *path, struct cifs_sb_info *cifs_sb, struct kstatfs *buf)
+ {
+ 	struct smb2_query_info_rsp *rsp;
+ 	struct smb2_fs_full_size_info *info = NULL;
+@@ -2825,7 +2825,7 @@ smb2_queryfs(const unsigned int xid, struct cifs_tcon *tcon,
+ 	int rc;
+ 
+ 
+-	rc = smb2_query_info_compound(xid, tcon, "",
++	rc = smb2_query_info_compound(xid, tcon, path,
+ 				      FILE_READ_ATTRIBUTES,
+ 				      FS_FULL_SIZE_INFORMATION,
+ 				      SMB2_O_INFO_FILESYSTEM,
+@@ -2853,28 +2853,33 @@ smb2_queryfs(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ static int
+ smb311_queryfs(const unsigned int xid, struct cifs_tcon *tcon,
+-	       struct cifs_sb_info *cifs_sb, struct kstatfs *buf)
++	       const char *path, struct cifs_sb_info *cifs_sb, struct kstatfs *buf)
+ {
+ 	int rc;
+-	__le16 srch_path = 0; /* Null - open root of share */
++	__le16 *utf16_path = NULL;
+ 	u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
+ 	struct cifs_open_parms oparms;
+ 	struct cifs_fid fid;
+ 
+ 	if (!tcon->posix_extensions)
+-		return smb2_queryfs(xid, tcon, cifs_sb, buf);
++		return smb2_queryfs(xid, tcon, path, cifs_sb, buf);
+ 
+ 	oparms = (struct cifs_open_parms) {
+ 		.tcon = tcon,
+-		.path = "",
++		.path = path,
+ 		.desired_access = FILE_READ_ATTRIBUTES,
+ 		.disposition = FILE_OPEN,
+ 		.create_options = cifs_create_options(cifs_sb, 0),
+ 		.fid = &fid,
+ 	};
+ 
+-	rc = SMB2_open(xid, &oparms, &srch_path, &oplock, NULL, NULL,
++	utf16_path = cifs_convert_path_to_utf16(path, cifs_sb);
++	if (utf16_path == NULL)
++		return -ENOMEM;
++
++	rc = SMB2_open(xid, &oparms, utf16_path, &oplock, NULL, NULL,
+ 		       NULL, NULL);
++	kfree(utf16_path);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c
+index 7889df8112b4ee..cac80e7bfefc74 100644
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -39,7 +39,8 @@ void ksmbd_conn_free(struct ksmbd_conn *conn)
+ 	xa_destroy(&conn->sessions);
+ 	kvfree(conn->request_buf);
+ 	kfree(conn->preauth_info);
+-	kfree(conn);
++	if (atomic_dec_and_test(&conn->refcnt))
++		kfree(conn);
+ }
+ 
+ /**
+@@ -68,6 +69,7 @@ struct ksmbd_conn *ksmbd_conn_alloc(void)
+ 		conn->um = NULL;
+ 	atomic_set(&conn->req_running, 0);
+ 	atomic_set(&conn->r_count, 0);
++	atomic_set(&conn->refcnt, 1);
+ 	conn->total_credits = 1;
+ 	conn->outstanding_credits = 0;
+ 
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index b93e5437793e00..82343afc8d0499 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -106,6 +106,7 @@ struct ksmbd_conn {
+ 	bool				signing_negotiated;
+ 	__le16				signing_algorithm;
+ 	bool				binding;
++	atomic_t			refcnt;
+ };
+ 
+ struct ksmbd_conn_ops {
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index e546ffa57b55ab..8ee86478287f93 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -51,6 +51,7 @@ static struct oplock_info *alloc_opinfo(struct ksmbd_work *work,
+ 	init_waitqueue_head(&opinfo->oplock_brk);
+ 	atomic_set(&opinfo->refcount, 1);
+ 	atomic_set(&opinfo->breaking_cnt, 0);
++	atomic_inc(&opinfo->conn->refcnt);
+ 
+ 	return opinfo;
+ }
+@@ -124,6 +125,8 @@ static void free_opinfo(struct oplock_info *opinfo)
+ {
+ 	if (opinfo->is_lease)
+ 		free_lease(opinfo);
++	if (opinfo->conn && atomic_dec_and_test(&opinfo->conn->refcnt))
++		kfree(opinfo->conn);
+ 	kfree(opinfo);
+ }
+ 
+@@ -163,9 +166,7 @@ static struct oplock_info *opinfo_get_list(struct ksmbd_inode *ci)
+ 		    !atomic_inc_not_zero(&opinfo->refcount))
+ 			opinfo = NULL;
+ 		else {
+-			atomic_inc(&opinfo->conn->r_count);
+ 			if (ksmbd_conn_releasing(opinfo->conn)) {
+-				atomic_dec(&opinfo->conn->r_count);
+ 				atomic_dec(&opinfo->refcount);
+ 				opinfo = NULL;
+ 			}
+@@ -177,26 +178,11 @@ static struct oplock_info *opinfo_get_list(struct ksmbd_inode *ci)
+ 	return opinfo;
+ }
+ 
+-static void opinfo_conn_put(struct oplock_info *opinfo)
++void opinfo_put(struct oplock_info *opinfo)
+ {
+-	struct ksmbd_conn *conn;
+-
+ 	if (!opinfo)
+ 		return;
+ 
+-	conn = opinfo->conn;
+-	/*
+-	 * Checking waitqueue to dropping pending requests on
+-	 * disconnection. waitqueue_active is safe because it
+-	 * uses atomic operation for condition.
+-	 */
+-	if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q))
+-		wake_up(&conn->r_count_q);
+-	opinfo_put(opinfo);
+-}
+-
+-void opinfo_put(struct oplock_info *opinfo)
+-{
+ 	if (!atomic_dec_and_test(&opinfo->refcount))
+ 		return;
+ 
+@@ -1127,14 +1113,11 @@ void smb_send_parent_lease_break_noti(struct ksmbd_file *fp,
+ 			if (!atomic_inc_not_zero(&opinfo->refcount))
+ 				continue;
+ 
+-			atomic_inc(&opinfo->conn->r_count);
+-			if (ksmbd_conn_releasing(opinfo->conn)) {
+-				atomic_dec(&opinfo->conn->r_count);
++			if (ksmbd_conn_releasing(opinfo->conn))
+ 				continue;
+-			}
+ 
+ 			oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE);
+-			opinfo_conn_put(opinfo);
++			opinfo_put(opinfo);
+ 		}
+ 	}
+ 	up_read(&p_ci->m_lock);
+@@ -1167,13 +1150,10 @@ void smb_lazy_parent_lease_break_close(struct ksmbd_file *fp)
+ 			if (!atomic_inc_not_zero(&opinfo->refcount))
+ 				continue;
+ 
+-			atomic_inc(&opinfo->conn->r_count);
+-			if (ksmbd_conn_releasing(opinfo->conn)) {
+-				atomic_dec(&opinfo->conn->r_count);
++			if (ksmbd_conn_releasing(opinfo->conn))
+ 				continue;
+-			}
+ 			oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE);
+-			opinfo_conn_put(opinfo);
++			opinfo_put(opinfo);
+ 		}
+ 	}
+ 	up_read(&p_ci->m_lock);
+@@ -1252,7 +1232,7 @@ int smb_grant_oplock(struct ksmbd_work *work, int req_op_level, u64 pid,
+ 	prev_opinfo = opinfo_get_list(ci);
+ 	if (!prev_opinfo ||
+ 	    (prev_opinfo->level == SMB2_OPLOCK_LEVEL_NONE && lctx)) {
+-		opinfo_conn_put(prev_opinfo);
++		opinfo_put(prev_opinfo);
+ 		goto set_lev;
+ 	}
+ 	prev_op_has_lease = prev_opinfo->is_lease;
+@@ -1262,19 +1242,19 @@ int smb_grant_oplock(struct ksmbd_work *work, int req_op_level, u64 pid,
+ 	if (share_ret < 0 &&
+ 	    prev_opinfo->level == SMB2_OPLOCK_LEVEL_EXCLUSIVE) {
+ 		err = share_ret;
+-		opinfo_conn_put(prev_opinfo);
++		opinfo_put(prev_opinfo);
+ 		goto err_out;
+ 	}
+ 
+ 	if (prev_opinfo->level != SMB2_OPLOCK_LEVEL_BATCH &&
+ 	    prev_opinfo->level != SMB2_OPLOCK_LEVEL_EXCLUSIVE) {
+-		opinfo_conn_put(prev_opinfo);
++		opinfo_put(prev_opinfo);
+ 		goto op_break_not_needed;
+ 	}
+ 
+ 	list_add(&work->interim_entry, &prev_opinfo->interim_list);
+ 	err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II);
+-	opinfo_conn_put(prev_opinfo);
++	opinfo_put(prev_opinfo);
+ 	if (err == -ENOENT)
+ 		goto set_lev;
+ 	/* Check all oplock was freed by close */
+@@ -1337,14 +1317,14 @@ static void smb_break_all_write_oplock(struct ksmbd_work *work,
+ 		return;
+ 	if (brk_opinfo->level != SMB2_OPLOCK_LEVEL_BATCH &&
+ 	    brk_opinfo->level != SMB2_OPLOCK_LEVEL_EXCLUSIVE) {
+-		opinfo_conn_put(brk_opinfo);
++		opinfo_put(brk_opinfo);
+ 		return;
+ 	}
+ 
+ 	brk_opinfo->open_trunc = is_trunc;
+ 	list_add(&work->interim_entry, &brk_opinfo->interim_list);
+ 	oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II);
+-	opinfo_conn_put(brk_opinfo);
++	opinfo_put(brk_opinfo);
+ }
+ 
+ /**
+@@ -1376,11 +1356,8 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 		if (!atomic_inc_not_zero(&brk_op->refcount))
+ 			continue;
+ 
+-		atomic_inc(&brk_op->conn->r_count);
+-		if (ksmbd_conn_releasing(brk_op->conn)) {
+-			atomic_dec(&brk_op->conn->r_count);
++		if (ksmbd_conn_releasing(brk_op->conn))
+ 			continue;
+-		}
+ 
+ 		rcu_read_unlock();
+ 		if (brk_op->is_lease && (brk_op->o_lease->state &
+@@ -1411,7 +1388,7 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 		brk_op->open_trunc = is_trunc;
+ 		oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE);
+ next:
+-		opinfo_conn_put(brk_op);
++		opinfo_put(brk_op);
+ 		rcu_read_lock();
+ 	}
+ 	rcu_read_unlock();
+diff --git a/fs/smb/server/vfs_cache.c b/fs/smb/server/vfs_cache.c
+index 8b2e37c8716ed7..271a23abc82fdd 100644
+--- a/fs/smb/server/vfs_cache.c
++++ b/fs/smb/server/vfs_cache.c
+@@ -710,6 +710,8 @@ static bool session_fd_check(struct ksmbd_tree_connect *tcon,
+ 	list_for_each_entry_rcu(op, &ci->m_op_list, op_entry) {
+ 		if (op->conn != conn)
+ 			continue;
++		if (op->conn && atomic_dec_and_test(&op->conn->refcnt))
++			kfree(op->conn);
+ 		op->conn = NULL;
+ 	}
+ 	up_write(&ci->m_lock);
+@@ -807,6 +809,7 @@ int ksmbd_reopen_durable_fd(struct ksmbd_work *work, struct ksmbd_file *fp)
+ 		if (op->conn)
+ 			continue;
+ 		op->conn = fp->conn;
++		atomic_inc(&op->conn->refcnt);
+ 	}
+ 	up_write(&ci->m_lock);
+ 
+diff --git a/include/crypto/internal/simd.h b/include/crypto/internal/simd.h
+index d2316242a98843..be97b97a75dd2d 100644
+--- a/include/crypto/internal/simd.h
++++ b/include/crypto/internal/simd.h
+@@ -14,11 +14,10 @@
+ struct simd_skcipher_alg;
+ struct skcipher_alg;
+ 
+-struct simd_skcipher_alg *simd_skcipher_create_compat(const char *algname,
++struct simd_skcipher_alg *simd_skcipher_create_compat(struct skcipher_alg *ialg,
++						      const char *algname,
+ 						      const char *drvname,
+ 						      const char *basename);
+-struct simd_skcipher_alg *simd_skcipher_create(const char *algname,
+-					       const char *basename);
+ void simd_skcipher_free(struct simd_skcipher_alg *alg);
+ 
+ int simd_register_skciphers_compat(struct skcipher_alg *algs, int count,
+@@ -32,13 +31,6 @@ void simd_unregister_skciphers(struct skcipher_alg *algs, int count,
+ struct simd_aead_alg;
+ struct aead_alg;
+ 
+-struct simd_aead_alg *simd_aead_create_compat(const char *algname,
+-					      const char *drvname,
+-					      const char *basename);
+-struct simd_aead_alg *simd_aead_create(const char *algname,
+-				       const char *basename);
+-void simd_aead_free(struct simd_aead_alg *alg);
+-
+ int simd_register_aeads_compat(struct aead_alg *algs, int count,
+ 			       struct simd_aead_alg **simd_algs);
+ 
+diff --git a/include/drm/drm_print.h b/include/drm/drm_print.h
+index 089950ad8681a5..8fad7d09bedae1 100644
+--- a/include/drm/drm_print.h
++++ b/include/drm/drm_print.h
+@@ -220,7 +220,8 @@ drm_vprintf(struct drm_printer *p, const char *fmt, va_list *va)
+ 
+ /**
+  * struct drm_print_iterator - local struct used with drm_printer_coredump
+- * @data: Pointer to the devcoredump output buffer
++ * @data: Pointer to the devcoredump output buffer, can be NULL if using
++ * drm_printer_coredump to determine size of devcoredump
+  * @start: The offset within the buffer to start writing
+  * @remain: The number of bytes to write for this iteration
+  */
+@@ -265,6 +266,57 @@ struct drm_print_iterator {
+  *			coredump_read, ...)
+  *	}
+  *
++ * The above example has a time complexity of O(N^2), where N is the size of the
++ * devcoredump. This is acceptable for small devcoredumps but scales poorly for
++ * larger ones.
++ *
++ * Another use case for drm_coredump_printer is to capture the devcoredump into
++ * a saved buffer before the dev_coredump() callback. This involves two passes:
++ * one to determine the size of the devcoredump and another to print it to a
++ * buffer. Then, in dev_coredump(), copy from the saved buffer into the
++ * devcoredump read buffer.
++ *
++ * For example::
++ *
++ *	char *devcoredump_saved_buffer;
++ *
++ *	ssize_t __coredump_print(char *buffer, ssize_t count, ...)
++ *	{
++ *		struct drm_print_iterator iter;
++ *		struct drm_printer p;
++ *
++ *		iter.data = buffer;
++ *		iter.start = 0;
++ *		iter.remain = count;
++ *
++ *		p = drm_coredump_printer(&iter);
++ *
++ *		drm_printf(p, "foo=%d\n", foo);
++ *		...
++ *		return count - iter.remain;
++ *	}
++ *
++ *	void coredump_print(...)
++ *	{
++ *		ssize_t count;
++ *
++ *		count = __coredump_print(NULL, INT_MAX, ...);
++ *		devcoredump_saved_buffer = kvmalloc(count, GFP_KERNEL);
++ *		__coredump_print(devcoredump_saved_buffer, count, ...);
++ *	}
++ *
++ *	void coredump_read(char *buffer, loff_t offset, size_t count,
++ *			   void *data, size_t datalen)
++ *	{
++ *		...
++ *		memcpy(buffer, devcoredump_saved_buffer + offset, count);
++ *		...
++ *	}
++ *
++ * The above example has a time complexity of O(N*2), where N is the size of the
++ * devcoredump. This scales better than the previous example for larger
++ * devcoredumps.
++ *
+  * RETURNS:
+  * The &drm_printer object
+  */
+diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
+index 5acc64954a8830..e28bc649b5c9b7 100644
+--- a/include/drm/gpu_scheduler.h
++++ b/include/drm/gpu_scheduler.h
+@@ -574,7 +574,7 @@ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
+ 
+ void drm_sched_tdr_queue_imm(struct drm_gpu_scheduler *sched);
+ void drm_sched_job_cleanup(struct drm_sched_job *job);
+-void drm_sched_wakeup(struct drm_gpu_scheduler *sched, struct drm_sched_entity *entity);
++void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
+ bool drm_sched_wqueue_ready(struct drm_gpu_scheduler *sched);
+ void drm_sched_wqueue_stop(struct drm_gpu_scheduler *sched);
+ void drm_sched_wqueue_start(struct drm_gpu_scheduler *sched);
+diff --git a/include/dt-bindings/clock/exynos7885.h b/include/dt-bindings/clock/exynos7885.h
+index 255e3aa9432373..54cfccff85086a 100644
+--- a/include/dt-bindings/clock/exynos7885.h
++++ b/include/dt-bindings/clock/exynos7885.h
+@@ -136,12 +136,12 @@
+ #define CLK_MOUT_FSYS_MMC_CARD_USER	2
+ #define CLK_MOUT_FSYS_MMC_EMBD_USER	3
+ #define CLK_MOUT_FSYS_MMC_SDIO_USER	4
+-#define CLK_MOUT_FSYS_USB30DRD_USER	4
+ #define CLK_GOUT_MMC_CARD_ACLK		5
+ #define CLK_GOUT_MMC_CARD_SDCLKIN	6
+ #define CLK_GOUT_MMC_EMBD_ACLK		7
+ #define CLK_GOUT_MMC_EMBD_SDCLKIN	8
+ #define CLK_GOUT_MMC_SDIO_ACLK		9
+ #define CLK_GOUT_MMC_SDIO_SDCLKIN	10
++#define CLK_MOUT_FSYS_USB30DRD_USER	11
+ 
+ #endif /* _DT_BINDINGS_CLOCK_EXYNOS_7885_H */
+diff --git a/include/dt-bindings/clock/qcom,gcc-sc8180x.h b/include/dt-bindings/clock/qcom,gcc-sc8180x.h
+index 90c6e021a0356d..2569f874fe13c6 100644
+--- a/include/dt-bindings/clock/qcom,gcc-sc8180x.h
++++ b/include/dt-bindings/clock/qcom,gcc-sc8180x.h
+@@ -248,6 +248,7 @@
+ #define GCC_USB3_SEC_CLKREF_CLK					238
+ #define GCC_UFS_MEM_CLKREF_EN					239
+ #define GCC_UFS_CARD_CLKREF_EN					240
++#define GPLL9							241
+ 
+ #define GCC_EMAC_BCR						0
+ #define GCC_GPU_BCR						1
+diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
+index 20f7e98ee8af90..df4f7663953645 100644
+--- a/include/linux/cpufreq.h
++++ b/include/linux/cpufreq.h
+@@ -1113,10 +1113,9 @@ static inline int parse_perf_domain(int cpu, const char *list_name,
+ 				    const char *cell_name,
+ 				    struct of_phandle_args *args)
+ {
+-	struct device_node *cpu_np;
+ 	int ret;
+ 
+-	cpu_np = of_cpu_device_node_get(cpu);
++	struct device_node *cpu_np __free(device_node) = of_cpu_device_node_get(cpu);
+ 	if (!cpu_np)
+ 		return -ENODEV;
+ 
+@@ -1124,9 +1123,6 @@ static inline int parse_perf_domain(int cpu, const char *list_name,
+ 					 args);
+ 	if (ret < 0)
+ 		return ret;
+-
+-	of_node_put(cpu_np);
+-
+ 	return 0;
+ }
+ 
+diff --git a/include/linux/fdtable.h b/include/linux/fdtable.h
+index 2944d4aa413b75..b1c5722f2b3ce4 100644
+--- a/include/linux/fdtable.h
++++ b/include/linux/fdtable.h
+@@ -22,7 +22,6 @@
+  * as this is the granularity returned by copy_fdset().
+  */
+ #define NR_OPEN_DEFAULT BITS_PER_LONG
+-#define NR_OPEN_MAX ~0U
+ 
+ struct fdtable {
+ 	unsigned int max_fds;
+@@ -106,7 +105,10 @@ struct task_struct;
+ 
+ void put_files_struct(struct files_struct *fs);
+ int unshare_files(void);
+-struct files_struct *dup_fd(struct files_struct *, unsigned, int *) __latent_entropy;
++struct fd_range {
++	unsigned int from, to;
++};
++struct files_struct *dup_fd(struct files_struct *, struct fd_range *) __latent_entropy;
+ void do_close_on_exec(struct files_struct *);
+ int iterate_fd(struct files_struct *, unsigned,
+ 		int (*)(const void *, struct file *, unsigned),
+@@ -115,8 +117,6 @@ int iterate_fd(struct files_struct *, unsigned,
+ extern int close_fd(unsigned int fd);
+ extern int __close_range(unsigned int fd, unsigned int max_fd, unsigned int flags);
+ extern struct file *file_close_fd(unsigned int fd);
+-extern int unshare_fd(unsigned long unshare_flags, unsigned int max_fds,
+-		      struct files_struct **new_fdp);
+ 
+ extern struct kmem_cache *files_cachep;
+ 
+diff --git a/include/linux/i2c.h b/include/linux/i2c.h
+index c11624a3d9c04a..ab603b79f23fd3 100644
+--- a/include/linux/i2c.h
++++ b/include/linux/i2c.h
+@@ -748,6 +748,9 @@ struct i2c_adapter {
+ 	struct regulator *bus_regulator;
+ 
+ 	struct dentry *debugfs;
++
++	/* 7bit address space */
++	DECLARE_BITMAP(addrs_in_instantiation, 1 << 7);
+ };
+ #define to_i2c_adapter(d) container_of(d, struct i2c_adapter, dev)
+ 
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index d20c6c99eb887c..bf3eba0d9bdca2 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -354,7 +354,7 @@ struct napi_struct {
+ 
+ 	unsigned long		state;
+ 	int			weight;
+-	int			defer_hard_irqs_count;
++	u32			defer_hard_irqs_count;
+ 	unsigned long		gro_bitmask;
+ 	int			(*poll)(struct napi_struct *, int);
+ #ifdef CONFIG_NETPOLL
+@@ -2089,7 +2089,7 @@ struct net_device {
+ 	unsigned int		real_num_rx_queues;
+ 	struct netdev_rx_queue	*_rx;
+ 	unsigned long		gro_flush_timeout;
+-	int			napi_defer_hard_irqs;
++	u32			napi_defer_hard_irqs;
+ 	unsigned int		gro_max_size;
+ 	unsigned int		gro_ipv4_max_size;
+ 	rx_handler_func_t __rcu	*rx_handler;
+@@ -5000,6 +5000,24 @@ void netif_set_tso_max_segs(struct net_device *dev, unsigned int segs);
+ void netif_inherit_tso_max(struct net_device *to,
+ 			   const struct net_device *from);
+ 
++static inline unsigned int
++netif_get_gro_max_size(const struct net_device *dev, const struct sk_buff *skb)
++{
++	/* pairs with WRITE_ONCE() in netif_set_gro(_ipv4)_max_size() */
++	return skb->protocol == htons(ETH_P_IPV6) ?
++	       READ_ONCE(dev->gro_max_size) :
++	       READ_ONCE(dev->gro_ipv4_max_size);
++}
++
++static inline unsigned int
++netif_get_gso_max_size(const struct net_device *dev, const struct sk_buff *skb)
++{
++	/* pairs with WRITE_ONCE() in netif_set_gso(_ipv4)_max_size() */
++	return skb->protocol == htons(ETH_P_IPV6) ?
++	       READ_ONCE(dev->gso_max_size) :
++	       READ_ONCE(dev->gso_ipv4_max_size);
++}
++
+ static inline bool netif_is_macsec(const struct net_device *dev)
+ {
+ 	return dev->priv_flags & IFF_MACSEC;
+diff --git a/include/linux/nvme-keyring.h b/include/linux/nvme-keyring.h
+index e10333d78dbbe5..19d2b256180fd7 100644
+--- a/include/linux/nvme-keyring.h
++++ b/include/linux/nvme-keyring.h
+@@ -12,7 +12,7 @@ key_serial_t nvme_tls_psk_default(struct key *keyring,
+ 		const char *hostnqn, const char *subnqn);
+ 
+ key_serial_t nvme_keyring_id(void);
+-
++struct key *nvme_tls_key_lookup(key_serial_t key_id);
+ #else
+ 
+ static inline key_serial_t nvme_tls_psk_default(struct key *keyring,
+@@ -24,5 +24,9 @@ static inline key_serial_t nvme_keyring_id(void)
+ {
+ 	return 0;
+ }
++static inline struct key *nvme_tls_key_lookup(key_serial_t key_id)
++{
++	return ERR_PTR(-ENOTSUPP);
++}
+ #endif /* !CONFIG_NVME_KEYRING */
+ #endif /* _NVME_KEYRING_H */
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 393fb13733b024..a7f1a3a4d1dcee 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -1608,13 +1608,7 @@ static inline int perf_is_paranoid(void)
+ 	return sysctl_perf_event_paranoid > -1;
+ }
+ 
+-static inline int perf_allow_kernel(struct perf_event_attr *attr)
+-{
+-	if (sysctl_perf_event_paranoid > 1 && !perfmon_capable())
+-		return -EACCES;
+-
+-	return security_perf_event_open(attr, PERF_SECURITY_KERNEL);
+-}
++int perf_allow_kernel(struct perf_event_attr *attr);
+ 
+ static inline int perf_allow_cpu(struct perf_event_attr *attr)
+ {
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 76214d7c819de6..afa7bd078f8ac8 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -637,6 +637,8 @@ struct sched_dl_entity {
+ 	 *
+ 	 * @dl_overrun tells if the task asked to be informed about runtime
+ 	 * overruns.
++	 *
++	 * @dl_server tells if this is a server entity.
+ 	 */
+ 	unsigned int			dl_throttled      : 1;
+ 	unsigned int			dl_yielded        : 1;
+diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
+index 23617da0e565e7..38a4fdf784e9a7 100644
+--- a/include/linux/sunrpc/svc.h
++++ b/include/linux/sunrpc/svc.h
+@@ -33,9 +33,9 @@
+  * node traffic on multi-node NUMA NFS servers.
+  */
+ struct svc_pool {
+-	unsigned int		sp_id;	    	/* pool id; also node id on NUMA */
++	unsigned int		sp_id;		/* pool id; also node id on NUMA */
+ 	struct lwq		sp_xprts;	/* pending transports */
+-	atomic_t		sp_nrthreads;	/* # of threads in pool */
++	unsigned int		sp_nrthreads;	/* # of threads in pool */
+ 	struct list_head	sp_all_threads;	/* all server threads */
+ 	struct llist_head	sp_idle_threads; /* idle server threads */
+ 
+diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
+index f46e0ca0169c72..d91e32aff5a134 100644
+--- a/include/linux/uprobes.h
++++ b/include/linux/uprobes.h
+@@ -76,6 +76,8 @@ struct uprobe_task {
+ 	struct uprobe			*active_uprobe;
+ 	unsigned long			xol_vaddr;
+ 
++	struct arch_uprobe              *auprobe;
++
+ 	struct return_instance		*return_instances;
+ 	unsigned int			depth;
+ };
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 276ca543ef44d8..02a9f4dc594d02 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -103,8 +103,10 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 
+ 		if (!skb_partial_csum_set(skb, start, off))
+ 			return -EINVAL;
++		if (skb_transport_offset(skb) < nh_min_len)
++			return -EINVAL;
+ 
+-		nh_min_len = max_t(u32, nh_min_len, skb_transport_offset(skb));
++		nh_min_len = skb_transport_offset(skb);
+ 		p_off = nh_min_len + thlen;
+ 		if (!pskb_may_pull(skb, p_off))
+ 			return -EINVAL;
+diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
+index 27684135bb4d1a..35507588a14d5c 100644
+--- a/include/net/mana/gdma.h
++++ b/include/net/mana/gdma.h
+@@ -224,7 +224,15 @@ struct gdma_dev {
+ 	struct auxiliary_device *adev;
+ };
+ 
+-#define MINIMUM_SUPPORTED_PAGE_SIZE PAGE_SIZE
++/* MANA_PAGE_SIZE is the DMA unit */
++#define MANA_PAGE_SHIFT 12
++#define MANA_PAGE_SIZE BIT(MANA_PAGE_SHIFT)
++#define MANA_PAGE_ALIGN(x) ALIGN((x), MANA_PAGE_SIZE)
++#define MANA_PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), MANA_PAGE_SIZE)
++#define MANA_PFN(a) ((a) >> MANA_PAGE_SHIFT)
++
++/* Required by HW */
++#define MANA_MIN_QSIZE MANA_PAGE_SIZE
+ 
+ #define GDMA_CQE_SIZE 64
+ #define GDMA_EQE_SIZE 16
+diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
+index 5927bd9d46bef9..f384d3aaac7418 100644
+--- a/include/net/mana/mana.h
++++ b/include/net/mana/mana.h
+@@ -42,7 +42,8 @@ enum TRI_STATE {
+ 
+ #define MAX_SEND_BUFFERS_PER_QUEUE 256
+ 
+-#define EQ_SIZE (8 * PAGE_SIZE)
++#define EQ_SIZE (8 * MANA_PAGE_SIZE)
++
+ #define LOG2_EQ_THROTTLE 3
+ 
+ #define MAX_PORTS_IN_MANA_DEV 256
+diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
+index 24ec3434d32ee9..102696abe8c9e9 100644
+--- a/include/trace/events/netfs.h
++++ b/include/trace/events/netfs.h
+@@ -140,6 +140,7 @@
+ 	EM(netfs_streaming_cont_filled_page,	"mod-streamw-f+") \
+ 	/* The rest are for writeback */			\
+ 	EM(netfs_folio_trace_cancel_copy,	"cancel-copy")	\
++	EM(netfs_folio_trace_cancel_store,	"cancel-store")	\
+ 	EM(netfs_folio_trace_clear,		"clear")	\
+ 	EM(netfs_folio_trace_clear_cc,		"clear-cc")	\
+ 	EM(netfs_folio_trace_clear_g,		"clear-g")	\
+diff --git a/include/uapi/linux/cec.h b/include/uapi/linux/cec.h
+index b8e071abaea5ac..3eba3934512e60 100644
+--- a/include/uapi/linux/cec.h
++++ b/include/uapi/linux/cec.h
+@@ -132,6 +132,8 @@ static inline void cec_msg_init(struct cec_msg *msg,
+  * Set the msg destination to the orig initiator and the msg initiator to the
+  * orig destination. Note that msg and orig may be the same pointer, in which
+  * case the change is done in place.
++ *
++ * It also zeroes the reply, timeout and flags fields.
+  */
+ static inline void cec_msg_set_reply_to(struct cec_msg *msg,
+ 					struct cec_msg *orig)
+@@ -139,7 +141,9 @@ static inline void cec_msg_set_reply_to(struct cec_msg *msg,
+ 	/* The destination becomes the initiator and vice versa */
+ 	msg->msg[0] = (cec_msg_destination(orig) << 4) |
+ 		      cec_msg_initiator(orig);
+-	msg->reply = msg->timeout = 0;
++	msg->reply = 0;
++	msg->timeout = 0;
++	msg->flags = 0;
+ }
+ 
+ /**
+diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h
+index 639894ed1b9732..2f71d91462331d 100644
+--- a/include/uapi/linux/netfilter/nf_tables.h
++++ b/include/uapi/linux/netfilter/nf_tables.h
+@@ -1694,7 +1694,7 @@ enum nft_flowtable_flags {
+  *
+  * @NFTA_FLOWTABLE_TABLE: name of the table containing the expression (NLA_STRING)
+  * @NFTA_FLOWTABLE_NAME: name of this flow table (NLA_STRING)
+- * @NFTA_FLOWTABLE_HOOK: netfilter hook configuration(NLA_U32)
++ * @NFTA_FLOWTABLE_HOOK: netfilter hook configuration (NLA_NESTED)
+  * @NFTA_FLOWTABLE_USE: number of references to this flow table (NLA_U32)
+  * @NFTA_FLOWTABLE_HANDLE: object handle (NLA_U64)
+  * @NFTA_FLOWTABLE_FLAGS: flags (NLA_U32)
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index c0d8ee0c9786df..ff243f6b51199a 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -316,7 +316,7 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+ 			    sizeof(struct uring_cache));
+ 	ret |= io_futex_cache_init(ctx);
+ 	if (ret)
+-		goto err;
++		goto free_ref;
+ 	init_completion(&ctx->ref_comp);
+ 	xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
+ 	mutex_init(&ctx->uring_lock);
+@@ -344,6 +344,9 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+ 	io_napi_init(ctx);
+ 
+ 	return ctx;
++
++free_ref:
++	percpu_ref_exit(&ctx->refs);
+ err:
+ 	io_alloc_cache_free(&ctx->rsrc_node_cache, kfree);
+ 	io_alloc_cache_free(&ctx->apoll_cache, kfree);
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 09bb82bc209a10..a70bbd4bd7cb42 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -1116,6 +1116,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
+ 	int ret, min_ret = 0;
+ 	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+ 	size_t len = sr->len;
++	bool mshot_finished;
+ 
+ 	if (!(req->flags & REQ_F_POLLED) &&
+ 	    (sr->flags & IORING_RECVSEND_POLL_FIRST))
+@@ -1170,6 +1171,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
+ 		req_set_fail(req);
+ 	}
+ 
++	mshot_finished = ret <= 0;
+ 	if (ret > 0)
+ 		ret += sr->done_io;
+ 	else if (sr->done_io)
+@@ -1177,7 +1179,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
+ 	else
+ 		io_kbuf_recycle(req, issue_flags);
+ 
+-	if (!io_recv_finish(req, &ret, kmsg, ret <= 0, issue_flags))
++	if (!io_recv_finish(req, &ret, kmsg, mshot_finished, issue_flags))
+ 		goto retry_multishot;
+ 
+ 	return ret;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index c821713249c81d..eb4b4f5b1284f6 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -8017,6 +8017,15 @@ static int widen_imprecise_scalars(struct bpf_verifier_env *env,
+ 	return 0;
+ }
+ 
++static struct bpf_reg_state *get_iter_from_state(struct bpf_verifier_state *cur_st,
++						 struct bpf_kfunc_call_arg_meta *meta)
++{
++	int iter_frameno = meta->iter.frameno;
++	int iter_spi = meta->iter.spi;
++
++	return &cur_st->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
++}
++
+ /* process_iter_next_call() is called when verifier gets to iterator's next
+  * "method" (e.g., bpf_iter_num_next() for numbers iterator) call. We'll refer
+  * to it as just "iter_next()" in comments below.
+@@ -8101,12 +8110,10 @@ static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
+ 	struct bpf_verifier_state *cur_st = env->cur_state, *queued_st, *prev_st;
+ 	struct bpf_func_state *cur_fr = cur_st->frame[cur_st->curframe], *queued_fr;
+ 	struct bpf_reg_state *cur_iter, *queued_iter;
+-	int iter_frameno = meta->iter.frameno;
+-	int iter_spi = meta->iter.spi;
+ 
+ 	BTF_TYPE_EMIT(struct bpf_iter);
+ 
+-	cur_iter = &env->cur_state->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
++	cur_iter = get_iter_from_state(cur_st, meta);
+ 
+ 	if (cur_iter->iter.state != BPF_ITER_STATE_ACTIVE &&
+ 	    cur_iter->iter.state != BPF_ITER_STATE_DRAINED) {
+@@ -8134,7 +8141,7 @@ static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
+ 		if (!queued_st)
+ 			return -ENOMEM;
+ 
+-		queued_iter = &queued_st->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
++		queued_iter = get_iter_from_state(queued_st, meta);
+ 		queued_iter->iter.state = BPF_ITER_STATE_ACTIVE;
+ 		queued_iter->iter.depth++;
+ 		if (prev_st)
+@@ -12675,6 +12682,17 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ 			regs[BPF_REG_0].btf = desc_btf;
+ 			regs[BPF_REG_0].type = PTR_TO_BTF_ID;
+ 			regs[BPF_REG_0].btf_id = ptr_type_id;
++
++			if (is_iter_next_kfunc(&meta)) {
++				struct bpf_reg_state *cur_iter;
++
++				cur_iter = get_iter_from_state(env->cur_state, &meta);
++
++				if (cur_iter->type & MEM_RCU) /* KF_RCU_PROTECTED */
++					regs[BPF_REG_0].type |= MEM_RCU;
++				else
++					regs[BPF_REG_0].type |= PTR_TRUSTED;
++			}
+ 		}
+ 
+ 		if (is_kfunc_ret_null(&meta)) {
+@@ -19933,13 +19951,46 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
+ 			/* Convert BPF_CLASS(insn->code) == BPF_ALU64 to 32-bit ALU */
+ 			insn->code = BPF_ALU | BPF_OP(insn->code) | BPF_SRC(insn->code);
+ 
+-		/* Make divide-by-zero exceptions impossible. */
++		/* Make sdiv/smod divide-by-minus-one exceptions impossible. */
++		if ((insn->code == (BPF_ALU64 | BPF_MOD | BPF_K) ||
++		     insn->code == (BPF_ALU64 | BPF_DIV | BPF_K) ||
++		     insn->code == (BPF_ALU | BPF_MOD | BPF_K) ||
++		     insn->code == (BPF_ALU | BPF_DIV | BPF_K)) &&
++		    insn->off == 1 && insn->imm == -1) {
++			bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
++			bool isdiv = BPF_OP(insn->code) == BPF_DIV;
++			struct bpf_insn *patchlet;
++			struct bpf_insn chk_and_sdiv[] = {
++				BPF_RAW_INSN((is64 ? BPF_ALU64 : BPF_ALU) |
++					     BPF_NEG | BPF_K, insn->dst_reg,
++					     0, 0, 0),
++			};
++			struct bpf_insn chk_and_smod[] = {
++				BPF_MOV32_IMM(insn->dst_reg, 0),
++			};
++
++			patchlet = isdiv ? chk_and_sdiv : chk_and_smod;
++			cnt = isdiv ? ARRAY_SIZE(chk_and_sdiv) : ARRAY_SIZE(chk_and_smod);
++
++			new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
++			if (!new_prog)
++				return -ENOMEM;
++
++			delta    += cnt - 1;
++			env->prog = prog = new_prog;
++			insn      = new_prog->insnsi + i + delta;
++			goto next_insn;
++		}
++
++		/* Make divide-by-zero and divide-by-minus-one exceptions impossible. */
+ 		if (insn->code == (BPF_ALU64 | BPF_MOD | BPF_X) ||
+ 		    insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
+ 		    insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
+ 		    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
+ 			bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
+ 			bool isdiv = BPF_OP(insn->code) == BPF_DIV;
++			bool is_sdiv = isdiv && insn->off == 1;
++			bool is_smod = !isdiv && insn->off == 1;
+ 			struct bpf_insn *patchlet;
+ 			struct bpf_insn chk_and_div[] = {
+ 				/* [R,W]x div 0 -> 0 */
+@@ -19959,10 +20010,62 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
+ 				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+ 				BPF_MOV32_REG(insn->dst_reg, insn->dst_reg),
+ 			};
++			struct bpf_insn chk_and_sdiv[] = {
++				/* [R,W]x sdiv 0 -> 0
++				 * LLONG_MIN sdiv -1 -> LLONG_MIN
++				 * INT_MIN sdiv -1 -> INT_MIN
++				 */
++				BPF_MOV64_REG(BPF_REG_AX, insn->src_reg),
++				BPF_RAW_INSN((is64 ? BPF_ALU64 : BPF_ALU) |
++					     BPF_ADD | BPF_K, BPF_REG_AX,
++					     0, 0, 1),
++				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
++					     BPF_JGT | BPF_K, BPF_REG_AX,
++					     0, 4, 1),
++				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
++					     BPF_JEQ | BPF_K, BPF_REG_AX,
++					     0, 1, 0),
++				BPF_RAW_INSN((is64 ? BPF_ALU64 : BPF_ALU) |
++					     BPF_MOV | BPF_K, insn->dst_reg,
++					     0, 0, 0),
++				/* BPF_NEG(LLONG_MIN) == -LLONG_MIN == LLONG_MIN */
++				BPF_RAW_INSN((is64 ? BPF_ALU64 : BPF_ALU) |
++					     BPF_NEG | BPF_K, insn->dst_reg,
++					     0, 0, 0),
++				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++				*insn,
++			};
++			struct bpf_insn chk_and_smod[] = {
++				/* [R,W]x mod 0 -> [R,W]x */
++				/* [R,W]x mod -1 -> 0 */
++				BPF_MOV64_REG(BPF_REG_AX, insn->src_reg),
++				BPF_RAW_INSN((is64 ? BPF_ALU64 : BPF_ALU) |
++					     BPF_ADD | BPF_K, BPF_REG_AX,
++					     0, 0, 1),
++				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
++					     BPF_JGT | BPF_K, BPF_REG_AX,
++					     0, 3, 1),
++				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
++					     BPF_JEQ | BPF_K, BPF_REG_AX,
++					     0, 3 + (is64 ? 0 : 1), 1),
++				BPF_MOV32_IMM(insn->dst_reg, 0),
++				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++				*insn,
++				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
++				BPF_MOV32_REG(insn->dst_reg, insn->dst_reg),
++			};
+ 
+-			patchlet = isdiv ? chk_and_div : chk_and_mod;
+-			cnt = isdiv ? ARRAY_SIZE(chk_and_div) :
+-				      ARRAY_SIZE(chk_and_mod) - (is64 ? 2 : 0);
++			if (is_sdiv) {
++				patchlet = chk_and_sdiv;
++				cnt = ARRAY_SIZE(chk_and_sdiv);
++			} else if (is_smod) {
++				patchlet = chk_and_smod;
++				cnt = ARRAY_SIZE(chk_and_smod) - (is64 ? 2 : 0);
++			} else {
++				patchlet = isdiv ? chk_and_div : chk_and_mod;
++				cnt = isdiv ? ARRAY_SIZE(chk_and_div) :
++					      ARRAY_SIZE(chk_and_mod) - (is64 ? 2 : 0);
++			}
+ 
+ 			new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
+ 			if (!new_prog)
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 36191add55c376..557f34dcb6d0cb 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -264,6 +264,7 @@ static void event_function_call(struct perf_event *event, event_f func, void *da
+ {
+ 	struct perf_event_context *ctx = event->ctx;
+ 	struct task_struct *task = READ_ONCE(ctx->task); /* verified in event_function */
++	struct perf_cpu_context *cpuctx;
+ 	struct event_function_struct efs = {
+ 		.event = event,
+ 		.func = func,
+@@ -291,22 +292,25 @@ static void event_function_call(struct perf_event *event, event_f func, void *da
+ 	if (!task_function_call(task, event_function, &efs))
+ 		return;
+ 
+-	raw_spin_lock_irq(&ctx->lock);
++	local_irq_disable();
++	cpuctx = this_cpu_ptr(&perf_cpu_context);
++	perf_ctx_lock(cpuctx, ctx);
+ 	/*
+ 	 * Reload the task pointer, it might have been changed by
+ 	 * a concurrent perf_event_context_sched_out().
+ 	 */
+ 	task = ctx->task;
+-	if (task == TASK_TOMBSTONE) {
+-		raw_spin_unlock_irq(&ctx->lock);
+-		return;
+-	}
++	if (task == TASK_TOMBSTONE)
++		goto unlock;
+ 	if (ctx->is_active) {
+-		raw_spin_unlock_irq(&ctx->lock);
++		perf_ctx_unlock(cpuctx, ctx);
++		local_irq_enable();
+ 		goto again;
+ 	}
+ 	func(event, NULL, ctx, data);
+-	raw_spin_unlock_irq(&ctx->lock);
++unlock:
++	perf_ctx_unlock(cpuctx, ctx);
++	local_irq_enable();
+ }
+ 
+ /*
+@@ -4103,7 +4107,11 @@ static void perf_adjust_period(struct perf_event *event, u64 nsec, u64 count, bo
+ 	period = perf_calculate_period(event, nsec, count);
+ 
+ 	delta = (s64)(period - hwc->sample_period);
+-	delta = (delta + 7) / 8; /* low pass filter */
++	if (delta >= 0)
++		delta += 7;
++	else
++		delta -= 7;
++	delta /= 8; /* low pass filter */
+ 
+ 	sample_period = hwc->sample_period + delta;
+ 
+@@ -13362,6 +13370,15 @@ const struct perf_event_attr *perf_event_attrs(struct perf_event *event)
+ 	return &event->attr;
+ }
+ 
++int perf_allow_kernel(struct perf_event_attr *attr)
++{
++	if (sysctl_perf_event_paranoid > 1 && !perfmon_capable())
++		return -EACCES;
++
++	return security_perf_event_open(attr, PERF_SECURITY_KERNEL);
++}
++EXPORT_SYMBOL_GPL(perf_allow_kernel);
++
+ /*
+  * Inherit an event from parent task to child task.
+  *
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index 47cdec3e1df117..3dd1f146436480 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -1491,7 +1491,7 @@ static struct xol_area *__create_xol_area(unsigned long vaddr)
+ 
+ 	area->xol_mapping.name = "[uprobes]";
+ 	area->xol_mapping.pages = area->pages;
+-	area->pages[0] = alloc_page(GFP_HIGHUSER);
++	area->pages[0] = alloc_page(GFP_HIGHUSER | __GFP_ZERO);
+ 	if (!area->pages[0])
+ 		goto free_bitmap;
+ 	area->pages[1] = NULL;
+@@ -2071,6 +2071,7 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
+ 	bool need_prep = false; /* prepare return uprobe, when needed */
+ 
+ 	down_read(&uprobe->register_rwsem);
++	current->utask->auprobe = &uprobe->arch;
+ 	for (uc = uprobe->consumers; uc; uc = uc->next) {
+ 		int rc = 0;
+ 
+@@ -2085,6 +2086,7 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
+ 
+ 		remove &= rc;
+ 	}
++	current->utask->auprobe = NULL;
+ 
+ 	if (need_prep && !remove)
+ 		prepare_uretprobe(uprobe, regs); /* put bp at return */
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 99076dbe27d83f..1116946b7fba30 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1770,33 +1770,30 @@ static int copy_files(unsigned long clone_flags, struct task_struct *tsk,
+ 		      int no_files)
+ {
+ 	struct files_struct *oldf, *newf;
+-	int error = 0;
+ 
+ 	/*
+ 	 * A background process may not have any files ...
+ 	 */
+ 	oldf = current->files;
+ 	if (!oldf)
+-		goto out;
++		return 0;
+ 
+ 	if (no_files) {
+ 		tsk->files = NULL;
+-		goto out;
++		return 0;
+ 	}
+ 
+ 	if (clone_flags & CLONE_FILES) {
+ 		atomic_inc(&oldf->count);
+-		goto out;
++		return 0;
+ 	}
+ 
+-	newf = dup_fd(oldf, NR_OPEN_MAX, &error);
+-	if (!newf)
+-		goto out;
++	newf = dup_fd(oldf, NULL);
++	if (IS_ERR(newf))
++		return PTR_ERR(newf);
+ 
+ 	tsk->files = newf;
+-	error = 0;
+-out:
+-	return error;
++	return 0;
+ }
+ 
+ static int copy_sighand(unsigned long clone_flags, struct task_struct *tsk)
+@@ -3246,17 +3243,16 @@ static int unshare_fs(unsigned long unshare_flags, struct fs_struct **new_fsp)
+ /*
+  * Unshare file descriptor table if it is being shared
+  */
+-int unshare_fd(unsigned long unshare_flags, unsigned int max_fds,
+-	       struct files_struct **new_fdp)
++static int unshare_fd(unsigned long unshare_flags, struct files_struct **new_fdp)
+ {
+ 	struct files_struct *fd = current->files;
+-	int error = 0;
+ 
+ 	if ((unshare_flags & CLONE_FILES) &&
+ 	    (fd && atomic_read(&fd->count) > 1)) {
+-		*new_fdp = dup_fd(fd, max_fds, &error);
+-		if (!*new_fdp)
+-			return error;
++		fd = dup_fd(fd, NULL);
++		if (IS_ERR(fd))
++			return PTR_ERR(fd);
++		*new_fdp = fd;
+ 	}
+ 
+ 	return 0;
+@@ -3314,7 +3310,7 @@ int ksys_unshare(unsigned long unshare_flags)
+ 	err = unshare_fs(unshare_flags, &new_fs);
+ 	if (err)
+ 		goto bad_unshare_out;
+-	err = unshare_fd(unshare_flags, NR_OPEN_MAX, &new_fd);
++	err = unshare_fd(unshare_flags, &new_fd);
+ 	if (err)
+ 		goto bad_unshare_cleanup_fs;
+ 	err = unshare_userns(unshare_flags, &new_cred);
+@@ -3406,7 +3402,7 @@ int unshare_files(void)
+ 	struct files_struct *old, *copy = NULL;
+ 	int error;
+ 
+-	error = unshare_fd(CLONE_FILES, NR_OPEN_MAX, &copy);
++	error = unshare_fd(CLONE_FILES, &copy);
+ 	if (error || !copy)
+ 		return error;
+ 
+diff --git a/kernel/jump_label.c b/kernel/jump_label.c
+index c6ac0d0377d726..101572d6a90836 100644
+--- a/kernel/jump_label.c
++++ b/kernel/jump_label.c
+@@ -159,22 +159,24 @@ bool static_key_slow_inc_cpuslocked(struct static_key *key)
+ 	if (static_key_fast_inc_not_disabled(key))
+ 		return true;
+ 
+-	jump_label_lock();
+-	if (atomic_read(&key->enabled) == 0) {
+-		atomic_set(&key->enabled, -1);
++	guard(mutex)(&jump_label_mutex);
++	/* Try to mark it as 'enabling in progress. */
++	if (!atomic_cmpxchg(&key->enabled, 0, -1)) {
+ 		jump_label_update(key);
+ 		/*
+-		 * Ensure that if the above cmpxchg loop observes our positive
+-		 * value, it must also observe all the text changes.
++		 * Ensure that when static_key_fast_inc_not_disabled() or
++		 * static_key_dec_not_one() observe the positive value,
++		 * they must also observe all the text changes.
+ 		 */
+ 		atomic_set_release(&key->enabled, 1);
+ 	} else {
+-		if (WARN_ON_ONCE(!static_key_fast_inc_not_disabled(key))) {
+-			jump_label_unlock();
++		/*
++		 * While holding the mutex this should never observe
++		 * anything else than a value >= 1 and succeed
++		 */
++		if (WARN_ON_ONCE(!static_key_fast_inc_not_disabled(key)))
+ 			return false;
+-		}
+ 	}
+-	jump_label_unlock();
+ 	return true;
+ }
+ 
+@@ -245,7 +247,7 @@ void static_key_disable(struct static_key *key)
+ }
+ EXPORT_SYMBOL_GPL(static_key_disable);
+ 
+-static bool static_key_slow_try_dec(struct static_key *key)
++static bool static_key_dec_not_one(struct static_key *key)
+ {
+ 	int v;
+ 
+@@ -269,6 +271,14 @@ static bool static_key_slow_try_dec(struct static_key *key)
+ 		 * enabled. This suggests an ordering problem on the user side.
+ 		 */
+ 		WARN_ON_ONCE(v < 0);
++
++		/*
++		 * Warn about underflow, and lie about success in an attempt to
++		 * not make things worse.
++		 */
++		if (WARN_ON_ONCE(v == 0))
++			return true;
++
+ 		if (v <= 1)
+ 			return false;
+ 	} while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v - 1)));
+@@ -279,15 +289,27 @@ static bool static_key_slow_try_dec(struct static_key *key)
+ static void __static_key_slow_dec_cpuslocked(struct static_key *key)
+ {
+ 	lockdep_assert_cpus_held();
++	int val;
+ 
+-	if (static_key_slow_try_dec(key))
++	if (static_key_dec_not_one(key))
+ 		return;
+ 
+ 	guard(mutex)(&jump_label_mutex);
+-	if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
++	val = atomic_read(&key->enabled);
++	/*
++	 * It should be impossible to observe -1 with jump_label_mutex held,
++	 * see static_key_slow_inc_cpuslocked().
++	 */
++	if (WARN_ON_ONCE(val == -1))
++		return;
++	/*
++	 * Cannot already be 0, something went sideways.
++	 */
++	if (WARN_ON_ONCE(val == 0))
++		return;
++
++	if (atomic_dec_and_test(&key->enabled))
+ 		jump_label_update(key);
+-	else
+-		WARN_ON_ONCE(!static_key_slow_try_dec(key));
+ }
+ 
+ static void __static_key_slow_dec(struct static_key *key)
+@@ -324,7 +346,7 @@ void __static_key_slow_dec_deferred(struct static_key *key,
+ {
+ 	STATIC_KEY_CHECK_USE(key);
+ 
+-	if (static_key_slow_try_dec(key))
++	if (static_key_dec_not_one(key))
+ 		return;
+ 
+ 	schedule_delayed_work(work, timeout);
+diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
+index 8db4fedaaa1eb7..a5806baa1a2a8a 100644
+--- a/kernel/rcu/rcuscale.c
++++ b/kernel/rcu/rcuscale.c
+@@ -498,7 +498,7 @@ rcu_scale_writer(void *arg)
+ 			schedule_timeout_idle(torture_random(&tr) % writer_holdoff_jiffies + 1);
+ 		wdp = &wdpp[i];
+ 		*wdp = ktime_get_mono_fast_ns();
+-		if (gp_async) {
++		if (gp_async && !WARN_ON_ONCE(!cur_ops->async)) {
+ retry:
+ 			if (!rhp)
+ 				rhp = kmalloc(sizeof(*rhp), GFP_KERNEL);
+@@ -554,7 +554,7 @@ rcu_scale_writer(void *arg)
+ 			i++;
+ 		rcu_scale_wait_shutdown();
+ 	} while (!torture_must_stop());
+-	if (gp_async) {
++	if (gp_async && cur_ops->async) {
+ 		cur_ops->gp_barrier();
+ 	}
+ 	writer_n_durations[me] = i_max + 1;
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index ba3440a45b6dd4..bc8429ada7a51d 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -34,6 +34,7 @@ typedef void (*postgp_func_t)(struct rcu_tasks *rtp);
+  * @rtp_blkd_tasks: List of tasks blocked as readers.
+  * @rtp_exit_list: List of tasks in the latter portion of do_exit().
+  * @cpu: CPU number corresponding to this entry.
++ * @index: Index of this CPU in rtpcp_array of the rcu_tasks structure.
+  * @rtpp: Pointer to the rcu_tasks structure.
+  */
+ struct rcu_tasks_percpu {
+@@ -49,6 +50,7 @@ struct rcu_tasks_percpu {
+ 	struct list_head rtp_blkd_tasks;
+ 	struct list_head rtp_exit_list;
+ 	int cpu;
++	int index;
+ 	struct rcu_tasks *rtpp;
+ };
+ 
+@@ -76,6 +78,7 @@ struct rcu_tasks_percpu {
+  * @call_func: This flavor's call_rcu()-equivalent function.
+  * @wait_state: Task state for synchronous grace-period waits (default TASK_UNINTERRUPTIBLE).
+  * @rtpcpu: This flavor's rcu_tasks_percpu structure.
++ * @rtpcp_array: Array of pointers to rcu_tasks_percpu structure of CPUs in cpu_possible_mask.
+  * @percpu_enqueue_shift: Shift down CPU ID this much when enqueuing callbacks.
+  * @percpu_enqueue_lim: Number of per-CPU callback queues in use for enqueuing.
+  * @percpu_dequeue_lim: Number of per-CPU callback queues in use for dequeuing.
+@@ -110,6 +113,7 @@ struct rcu_tasks {
+ 	call_rcu_func_t call_func;
+ 	unsigned int wait_state;
+ 	struct rcu_tasks_percpu __percpu *rtpcpu;
++	struct rcu_tasks_percpu **rtpcp_array;
+ 	int percpu_enqueue_shift;
+ 	int percpu_enqueue_lim;
+ 	int percpu_dequeue_lim;
+@@ -182,6 +186,8 @@ module_param(rcu_task_collapse_lim, int, 0444);
+ static int rcu_task_lazy_lim __read_mostly = 32;
+ module_param(rcu_task_lazy_lim, int, 0444);
+ 
++static int rcu_task_cpu_ids;
++
+ /* RCU tasks grace-period state for debugging. */
+ #define RTGS_INIT		 0
+ #define RTGS_WAIT_WAIT_CBS	 1
+@@ -245,6 +251,8 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
+ 	int cpu;
+ 	int lim;
+ 	int shift;
++	int maxcpu;
++	int index = 0;
+ 
+ 	if (rcu_task_enqueue_lim < 0) {
+ 		rcu_task_enqueue_lim = 1;
+@@ -254,14 +262,9 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
+ 	}
+ 	lim = rcu_task_enqueue_lim;
+ 
+-	if (lim > nr_cpu_ids)
+-		lim = nr_cpu_ids;
+-	shift = ilog2(nr_cpu_ids / lim);
+-	if (((nr_cpu_ids - 1) >> shift) >= lim)
+-		shift++;
+-	WRITE_ONCE(rtp->percpu_enqueue_shift, shift);
+-	WRITE_ONCE(rtp->percpu_dequeue_lim, lim);
+-	smp_store_release(&rtp->percpu_enqueue_lim, lim);
++	rtp->rtpcp_array = kcalloc(num_possible_cpus(), sizeof(struct rcu_tasks_percpu *), GFP_KERNEL);
++	BUG_ON(!rtp->rtpcp_array);
++
+ 	for_each_possible_cpu(cpu) {
+ 		struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
+ 
+@@ -273,14 +276,29 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
+ 		INIT_WORK(&rtpcp->rtp_work, rcu_tasks_invoke_cbs_wq);
+ 		rtpcp->cpu = cpu;
+ 		rtpcp->rtpp = rtp;
++		rtpcp->index = index;
++		rtp->rtpcp_array[index] = rtpcp;
++		index++;
+ 		if (!rtpcp->rtp_blkd_tasks.next)
+ 			INIT_LIST_HEAD(&rtpcp->rtp_blkd_tasks);
+ 		if (!rtpcp->rtp_exit_list.next)
+ 			INIT_LIST_HEAD(&rtpcp->rtp_exit_list);
++		maxcpu = cpu;
+ 	}
+ 
+-	pr_info("%s: Setting shift to %d and lim to %d rcu_task_cb_adjust=%d.\n", rtp->name,
+-			data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim), rcu_task_cb_adjust);
++	rcu_task_cpu_ids = maxcpu + 1;
++	if (lim > rcu_task_cpu_ids)
++		lim = rcu_task_cpu_ids;
++	shift = ilog2(rcu_task_cpu_ids / lim);
++	if (((rcu_task_cpu_ids - 1) >> shift) >= lim)
++		shift++;
++	WRITE_ONCE(rtp->percpu_enqueue_shift, shift);
++	WRITE_ONCE(rtp->percpu_dequeue_lim, lim);
++	smp_store_release(&rtp->percpu_enqueue_lim, lim);
++
++	pr_info("%s: Setting shift to %d and lim to %d rcu_task_cb_adjust=%d rcu_task_cpu_ids=%d.\n",
++			rtp->name, data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim),
++			rcu_task_cb_adjust, rcu_task_cpu_ids);
+ }
+ 
+ // Compute wakeup time for lazy callback timer.
+@@ -348,7 +366,7 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
+ 			rtpcp->rtp_n_lock_retries = 0;
+ 		}
+ 		if (rcu_task_cb_adjust && ++rtpcp->rtp_n_lock_retries > rcu_task_contend_lim &&
+-		    READ_ONCE(rtp->percpu_enqueue_lim) != nr_cpu_ids)
++		    READ_ONCE(rtp->percpu_enqueue_lim) != rcu_task_cpu_ids)
+ 			needadjust = true;  // Defer adjustment to avoid deadlock.
+ 	}
+ 	// Queuing callbacks before initialization not yet supported.
+@@ -368,10 +386,10 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
+ 	raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
+ 	if (unlikely(needadjust)) {
+ 		raw_spin_lock_irqsave(&rtp->cbs_gbl_lock, flags);
+-		if (rtp->percpu_enqueue_lim != nr_cpu_ids) {
++		if (rtp->percpu_enqueue_lim != rcu_task_cpu_ids) {
+ 			WRITE_ONCE(rtp->percpu_enqueue_shift, 0);
+-			WRITE_ONCE(rtp->percpu_dequeue_lim, nr_cpu_ids);
+-			smp_store_release(&rtp->percpu_enqueue_lim, nr_cpu_ids);
++			WRITE_ONCE(rtp->percpu_dequeue_lim, rcu_task_cpu_ids);
++			smp_store_release(&rtp->percpu_enqueue_lim, rcu_task_cpu_ids);
+ 			pr_info("Switching %s to per-CPU callback queuing.\n", rtp->name);
+ 		}
+ 		raw_spin_unlock_irqrestore(&rtp->cbs_gbl_lock, flags);
+@@ -444,6 +462,8 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
+ 
+ 	dequeue_limit = smp_load_acquire(&rtp->percpu_dequeue_lim);
+ 	for (cpu = 0; cpu < dequeue_limit; cpu++) {
++		if (!cpu_possible(cpu))
++			continue;
+ 		struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
+ 
+ 		/* Advance and accelerate any new callbacks. */
+@@ -481,7 +501,7 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
+ 	if (rcu_task_cb_adjust && ncbs <= rcu_task_collapse_lim) {
+ 		raw_spin_lock_irqsave(&rtp->cbs_gbl_lock, flags);
+ 		if (rtp->percpu_enqueue_lim > 1) {
+-			WRITE_ONCE(rtp->percpu_enqueue_shift, order_base_2(nr_cpu_ids));
++			WRITE_ONCE(rtp->percpu_enqueue_shift, order_base_2(rcu_task_cpu_ids));
+ 			smp_store_release(&rtp->percpu_enqueue_lim, 1);
+ 			rtp->percpu_dequeue_gpseq = get_state_synchronize_rcu();
+ 			gpdone = false;
+@@ -496,7 +516,9 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
+ 			pr_info("Completing switch %s to CPU-0 callback queuing.\n", rtp->name);
+ 		}
+ 		if (rtp->percpu_dequeue_lim == 1) {
+-			for (cpu = rtp->percpu_dequeue_lim; cpu < nr_cpu_ids; cpu++) {
++			for (cpu = rtp->percpu_dequeue_lim; cpu < rcu_task_cpu_ids; cpu++) {
++				if (!cpu_possible(cpu))
++					continue;
+ 				struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
+ 
+ 				WARN_ON_ONCE(rcu_segcblist_n_cbs(&rtpcp->cblist));
+@@ -511,30 +533,32 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
+ // Advance callbacks and invoke any that are ready.
+ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu *rtpcp)
+ {
+-	int cpu;
+-	int cpunext;
+ 	int cpuwq;
+ 	unsigned long flags;
+ 	int len;
++	int index;
+ 	struct rcu_head *rhp;
+ 	struct rcu_cblist rcl = RCU_CBLIST_INITIALIZER(rcl);
+ 	struct rcu_tasks_percpu *rtpcp_next;
+ 
+-	cpu = rtpcp->cpu;
+-	cpunext = cpu * 2 + 1;
+-	if (cpunext < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
+-		rtpcp_next = per_cpu_ptr(rtp->rtpcpu, cpunext);
+-		cpuwq = rcu_cpu_beenfullyonline(cpunext) ? cpunext : WORK_CPU_UNBOUND;
+-		queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work);
+-		cpunext++;
+-		if (cpunext < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
+-			rtpcp_next = per_cpu_ptr(rtp->rtpcpu, cpunext);
+-			cpuwq = rcu_cpu_beenfullyonline(cpunext) ? cpunext : WORK_CPU_UNBOUND;
++	index = rtpcp->index * 2 + 1;
++	if (index < num_possible_cpus()) {
++		rtpcp_next = rtp->rtpcp_array[index];
++		if (rtpcp_next->cpu < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
++			cpuwq = rcu_cpu_beenfullyonline(rtpcp_next->cpu) ? rtpcp_next->cpu : WORK_CPU_UNBOUND;
+ 			queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work);
++			index++;
++			if (index < num_possible_cpus()) {
++				rtpcp_next = rtp->rtpcp_array[index];
++				if (rtpcp_next->cpu < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
++					cpuwq = rcu_cpu_beenfullyonline(rtpcp_next->cpu) ? rtpcp_next->cpu : WORK_CPU_UNBOUND;
++					queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work);
++				}
++			}
+ 		}
+ 	}
+ 
+-	if (rcu_segcblist_empty(&rtpcp->cblist) || !cpu_possible(cpu))
++	if (rcu_segcblist_empty(&rtpcp->cblist))
+ 		return;
+ 	raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
+ 	rcu_segcblist_advance(&rtpcp->cblist, rcu_seq_current(&rtp->tasks_gp_seq));
+diff --git a/kernel/resource.c b/kernel/resource.c
+index b0e2b15ecb409a..c66147aa21761b 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -548,20 +548,62 @@ static int __region_intersects(struct resource *parent, resource_size_t start,
+ 			       size_t size, unsigned long flags,
+ 			       unsigned long desc)
+ {
+-	struct resource res;
++	resource_size_t ostart, oend;
+ 	int type = 0; int other = 0;
+-	struct resource *p;
++	struct resource *p, *dp;
++	bool is_type, covered;
++	struct resource res;
+ 
+ 	res.start = start;
+ 	res.end = start + size - 1;
+ 
+ 	for (p = parent->child; p ; p = p->sibling) {
+-		bool is_type = (((p->flags & flags) == flags) &&
+-				((desc == IORES_DESC_NONE) ||
+-				 (desc == p->desc)));
+-
+-		if (resource_overlaps(p, &res))
+-			is_type ? type++ : other++;
++		if (!resource_overlaps(p, &res))
++			continue;
++		is_type = (p->flags & flags) == flags &&
++			(desc == IORES_DESC_NONE || desc == p->desc);
++		if (is_type) {
++			type++;
++			continue;
++		}
++		/*
++		 * Continue to search in descendant resources as if the
++		 * matched descendant resources cover some ranges of 'p'.
++		 *
++		 * |------------- "CXL Window 0" ------------|
++		 * |-- "System RAM" --|
++		 *
++		 * will behave similarly to the following fake resource
++		 * tree when searching "System RAM".
++		 *
++		 * |-- "System RAM" --||-- "CXL Window 0a" --|
++		 */
++		covered = false;
++		ostart = max(res.start, p->start);
++		oend = min(res.end, p->end);
++		for_each_resource(p, dp, false) {
++			if (!resource_overlaps(dp, &res))
++				continue;
++			is_type = (dp->flags & flags) == flags &&
++				(desc == IORES_DESC_NONE || desc == dp->desc);
++			if (is_type) {
++				type++;
++				/*
++				 * Range from 'ostart' to 'dp->start'
++				 * isn't covered by matched resource.
++				 */
++				if (dp->start > ostart)
++					break;
++				if (dp->end >= oend) {
++					covered = true;
++					break;
++				}
++				/* Remove covered range */
++				ostart = max(ostart, dp->end + 1);
++			}
++		}
++		if (!covered)
++			other++;
+ 	}
+ 
+ 	if (type == 0)
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 3e84a3b7b7bb9b..0cfb5f5ee21339 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -6005,6 +6005,14 @@ static void put_prev_task_balance(struct rq *rq, struct task_struct *prev,
+ #endif
+ 
+ 	put_prev_task(rq, prev);
++
++	/*
++	 * We've updated @prev and no longer need the server link, clear it.
++	 * Must be done before ->pick_next_task() because that can (re)set
++	 * ->dl_server.
++	 */
++	if (prev->dl_server)
++		prev->dl_server = NULL;
+ }
+ 
+ /*
+@@ -6035,6 +6043,13 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+ 			p = pick_next_task_idle(rq);
+ 		}
+ 
++		/*
++		 * This is a normal CFS pick, but the previous could be a DL pick.
++		 * Clear it as previous is no longer picked.
++		 */
++		if (prev->dl_server)
++			prev->dl_server = NULL;
++
+ 		/*
+ 		 * This is the fast path; it cannot be a DL server pick;
+ 		 * therefore even if @p == @prev, ->dl_server must be NULL.
+@@ -6048,14 +6063,6 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+ restart:
+ 	put_prev_task_balance(rq, prev, rf);
+ 
+-	/*
+-	 * We've updated @prev and no longer need the server link, clear it.
+-	 * Must be done before ->pick_next_task() because that can (re)set
+-	 * ->dl_server.
+-	 */
+-	if (prev->dl_server)
+-		prev->dl_server = NULL;
+-
+ 	for_each_class(class) {
+ 		p = class->pick_next_task(rq);
+ 		if (p)
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index 507d7b8d79afa4..8d4a3d9de47972 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -765,13 +765,14 @@ static void record_times(struct psi_group_cpu *groupc, u64 now)
+ }
+ 
+ static void psi_group_change(struct psi_group *group, int cpu,
+-			     unsigned int clear, unsigned int set, u64 now,
++			     unsigned int clear, unsigned int set,
+ 			     bool wake_clock)
+ {
+ 	struct psi_group_cpu *groupc;
+ 	unsigned int t, m;
+ 	enum psi_states s;
+ 	u32 state_mask;
++	u64 now;
+ 
+ 	lockdep_assert_rq_held(cpu_rq(cpu));
+ 	groupc = per_cpu_ptr(group->pcpu, cpu);
+@@ -786,6 +787,7 @@ static void psi_group_change(struct psi_group *group, int cpu,
+ 	 * SOME and FULL time these may have resulted in.
+ 	 */
+ 	write_seqcount_begin(&groupc->seq);
++	now = cpu_clock(cpu);
+ 
+ 	/*
+ 	 * Start with TSK_ONCPU, which doesn't have a corresponding
+@@ -899,18 +901,15 @@ void psi_task_change(struct task_struct *task, int clear, int set)
+ {
+ 	int cpu = task_cpu(task);
+ 	struct psi_group *group;
+-	u64 now;
+ 
+ 	if (!task->pid)
+ 		return;
+ 
+ 	psi_flags_change(task, clear, set);
+ 
+-	now = cpu_clock(cpu);
+-
+ 	group = task_psi_group(task);
+ 	do {
+-		psi_group_change(group, cpu, clear, set, now, true);
++		psi_group_change(group, cpu, clear, set, true);
+ 	} while ((group = group->parent));
+ }
+ 
+@@ -919,7 +918,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ {
+ 	struct psi_group *group, *common = NULL;
+ 	int cpu = task_cpu(prev);
+-	u64 now = cpu_clock(cpu);
+ 
+ 	if (next->pid) {
+ 		psi_flags_change(next, 0, TSK_ONCPU);
+@@ -936,7 +934,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 				break;
+ 			}
+ 
+-			psi_group_change(group, cpu, 0, TSK_ONCPU, now, true);
++			psi_group_change(group, cpu, 0, TSK_ONCPU, true);
+ 		} while ((group = group->parent));
+ 	}
+ 
+@@ -974,7 +972,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 		do {
+ 			if (group == common)
+ 				break;
+-			psi_group_change(group, cpu, clear, set, now, wake_clock);
++			psi_group_change(group, cpu, clear, set, wake_clock);
+ 		} while ((group = group->parent));
+ 
+ 		/*
+@@ -986,7 +984,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 		if ((prev->psi_flags ^ next->psi_flags) & ~TSK_ONCPU) {
+ 			clear &= ~TSK_ONCPU;
+ 			for (; group; group = group->parent)
+-				psi_group_change(group, cpu, clear, set, now, wake_clock);
++				psi_group_change(group, cpu, clear, set, wake_clock);
+ 		}
+ 	}
+ }
+@@ -997,8 +995,8 @@ void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_st
+ 	int cpu = task_cpu(curr);
+ 	struct psi_group *group;
+ 	struct psi_group_cpu *groupc;
+-	u64 now, irq;
+ 	s64 delta;
++	u64 irq;
+ 
+ 	if (static_branch_likely(&psi_disabled))
+ 		return;
+@@ -1011,7 +1009,6 @@ void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_st
+ 	if (prev && task_psi_group(prev) == group)
+ 		return;
+ 
+-	now = cpu_clock(cpu);
+ 	irq = irq_time_read(cpu);
+ 	delta = (s64)(irq - rq->psi_irq_time);
+ 	if (delta < 0)
+@@ -1019,12 +1016,15 @@ void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_st
+ 	rq->psi_irq_time = irq;
+ 
+ 	do {
++		u64 now;
++
+ 		if (!group->enabled)
+ 			continue;
+ 
+ 		groupc = per_cpu_ptr(group->pcpu, cpu);
+ 
+ 		write_seqcount_begin(&groupc->seq);
++		now = cpu_clock(cpu);
+ 
+ 		record_times(groupc, now);
+ 		groupc->times[PSI_IRQ_FULL] += delta;
+@@ -1223,11 +1223,9 @@ void psi_cgroup_restart(struct psi_group *group)
+ 	for_each_possible_cpu(cpu) {
+ 		struct rq *rq = cpu_rq(cpu);
+ 		struct rq_flags rf;
+-		u64 now;
+ 
+ 		rq_lock_irq(rq, &rf);
+-		now = cpu_clock(cpu);
+-		psi_group_change(group, cpu, 0, 0, now, true);
++		psi_group_change(group, cpu, 0, 0, true);
+ 		rq_unlock_irq(rq, &rf);
+ 	}
+ }
+diff --git a/kernel/static_call_inline.c b/kernel/static_call_inline.c
+index 639397b5491ca0..5259cda486d058 100644
+--- a/kernel/static_call_inline.c
++++ b/kernel/static_call_inline.c
+@@ -411,6 +411,17 @@ static void static_call_del_module(struct module *mod)
+ 
+ 	for (site = start; site < stop; site++) {
+ 		key = static_call_key(site);
++
++		/*
++		 * If the key was not updated due to a memory allocation
++		 * failure in __static_call_init() then treating key::sites
++		 * as key::mods in the code below would cause random memory
++		 * access and #GP. In that case all subsequent sites have
++		 * not been touched either, so stop iterating.
++		 */
++		if (!static_call_key_has_mods(key))
++			break;
++
+ 		if (key == prev_key)
+ 			continue;
+ 
+@@ -442,7 +453,7 @@ static int static_call_module_notify(struct notifier_block *nb,
+ 	case MODULE_STATE_COMING:
+ 		ret = static_call_add_module(mod);
+ 		if (ret) {
+-			WARN(1, "Failed to allocate memory for static calls");
++			pr_warn("Failed to allocate memory for static calls\n");
+ 			static_call_del_module(mod);
+ 		}
+ 		break;
+diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
+index b791524a6536ac..3bd6071441ade9 100644
+--- a/kernel/trace/trace_hwlat.c
++++ b/kernel/trace/trace_hwlat.c
+@@ -520,6 +520,8 @@ static void hwlat_hotplug_workfn(struct work_struct *dummy)
+ 	if (!hwlat_busy || hwlat_data.thread_mode != MODE_PER_CPU)
+ 		goto out_unlock;
+ 
++	if (!cpu_online(cpu))
++		goto out_unlock;
+ 	if (!cpumask_test_cpu(cpu, tr->tracing_cpumask))
+ 		goto out_unlock;
+ 
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index 461b4ab60b501a..3e2bc029fa8c83 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1953,12 +1953,8 @@ static void stop_kthread(unsigned int cpu)
+ {
+ 	struct task_struct *kthread;
+ 
+-	mutex_lock(&interface_lock);
+-	kthread = per_cpu(per_cpu_osnoise_var, cpu).kthread;
++	kthread = xchg_relaxed(&(per_cpu(per_cpu_osnoise_var, cpu).kthread), NULL);
+ 	if (kthread) {
+-		per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL;
+-		mutex_unlock(&interface_lock);
+-
+ 		if (cpumask_test_and_clear_cpu(cpu, &kthread_cpumask) &&
+ 		    !WARN_ON(!test_bit(OSN_WORKLOAD, &osnoise_options))) {
+ 			kthread_stop(kthread);
+@@ -1972,7 +1968,6 @@ static void stop_kthread(unsigned int cpu)
+ 			put_task_struct(kthread);
+ 		}
+ 	} else {
+-		mutex_unlock(&interface_lock);
+ 		/* if no workload, just return */
+ 		if (!test_bit(OSN_WORKLOAD, &osnoise_options)) {
+ 			/*
+@@ -1994,8 +1989,12 @@ static void stop_per_cpu_kthreads(void)
+ {
+ 	int cpu;
+ 
+-	for_each_possible_cpu(cpu)
++	cpus_read_lock();
++
++	for_each_online_cpu(cpu)
+ 		stop_kthread(cpu);
++
++	cpus_read_unlock();
+ }
+ 
+ /*
+@@ -2007,6 +2006,10 @@ static int start_kthread(unsigned int cpu)
+ 	void *main = osnoise_main;
+ 	char comm[24];
+ 
++	/* Do not start a new thread if it is already running */
++	if (per_cpu(per_cpu_osnoise_var, cpu).kthread)
++		return 0;
++
+ 	if (timerlat_enabled()) {
+ 		snprintf(comm, 24, "timerlat/%d", cpu);
+ 		main = timerlat_main;
+@@ -2061,11 +2064,10 @@ static int start_per_cpu_kthreads(void)
+ 		if (cpumask_test_and_clear_cpu(cpu, &kthread_cpumask)) {
+ 			struct task_struct *kthread;
+ 
+-			kthread = per_cpu(per_cpu_osnoise_var, cpu).kthread;
++			kthread = xchg_relaxed(&(per_cpu(per_cpu_osnoise_var, cpu).kthread), NULL);
+ 			if (!WARN_ON(!kthread))
+ 				kthread_stop(kthread);
+ 		}
+-		per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL;
+ 	}
+ 
+ 	for_each_cpu(cpu, current_mask) {
+@@ -2095,6 +2097,8 @@ static void osnoise_hotplug_workfn(struct work_struct *dummy)
+ 	mutex_lock(&interface_lock);
+ 	cpus_read_lock();
+ 
++	if (!cpu_online(cpu))
++		goto out_unlock;
+ 	if (!cpumask_test_cpu(cpu, &osnoise_cpumask))
+ 		goto out_unlock;
+ 
+diff --git a/lib/buildid.c b/lib/buildid.c
+index 7954dd92e36c01..26007cc99a38f6 100644
+--- a/lib/buildid.c
++++ b/lib/buildid.c
+@@ -18,31 +18,37 @@ static int parse_build_id_buf(unsigned char *build_id,
+ 			      const void *note_start,
+ 			      Elf32_Word note_size)
+ {
+-	Elf32_Word note_offs = 0, new_offs;
+-
+-	while (note_offs + sizeof(Elf32_Nhdr) < note_size) {
+-		Elf32_Nhdr *nhdr = (Elf32_Nhdr *)(note_start + note_offs);
++	const char note_name[] = "GNU";
++	const size_t note_name_sz = sizeof(note_name);
++	u64 note_off = 0, new_off, name_sz, desc_sz;
++	const char *data;
++
++	while (note_off + sizeof(Elf32_Nhdr) < note_size &&
++	       note_off + sizeof(Elf32_Nhdr) > note_off /* overflow */) {
++		Elf32_Nhdr *nhdr = (Elf32_Nhdr *)(note_start + note_off);
++
++		name_sz = READ_ONCE(nhdr->n_namesz);
++		desc_sz = READ_ONCE(nhdr->n_descsz);
++
++		new_off = note_off + sizeof(Elf32_Nhdr);
++		if (check_add_overflow(new_off, ALIGN(name_sz, 4), &new_off) ||
++		    check_add_overflow(new_off, ALIGN(desc_sz, 4), &new_off) ||
++		    new_off > note_size)
++			break;
+ 
+ 		if (nhdr->n_type == BUILD_ID &&
+-		    nhdr->n_namesz == sizeof("GNU") &&
+-		    !strcmp((char *)(nhdr + 1), "GNU") &&
+-		    nhdr->n_descsz > 0 &&
+-		    nhdr->n_descsz <= BUILD_ID_SIZE_MAX) {
+-			memcpy(build_id,
+-			       note_start + note_offs +
+-			       ALIGN(sizeof("GNU"), 4) + sizeof(Elf32_Nhdr),
+-			       nhdr->n_descsz);
+-			memset(build_id + nhdr->n_descsz, 0,
+-			       BUILD_ID_SIZE_MAX - nhdr->n_descsz);
++		    name_sz == note_name_sz &&
++		    memcmp(nhdr + 1, note_name, note_name_sz) == 0 &&
++		    desc_sz > 0 && desc_sz <= BUILD_ID_SIZE_MAX) {
++			data = note_start + note_off + ALIGN(note_name_sz, 4);
++			memcpy(build_id, data, desc_sz);
++			memset(build_id + desc_sz, 0, BUILD_ID_SIZE_MAX - desc_sz);
+ 			if (size)
+-				*size = nhdr->n_descsz;
++				*size = desc_sz;
+ 			return 0;
+ 		}
+-		new_offs = note_offs + sizeof(Elf32_Nhdr) +
+-			ALIGN(nhdr->n_namesz, 4) + ALIGN(nhdr->n_descsz, 4);
+-		if (new_offs <= note_offs)  /* overflow */
+-			break;
+-		note_offs = new_offs;
++
++		note_off = new_off;
+ 	}
+ 
+ 	return -EINVAL;
+@@ -71,20 +77,28 @@ static int get_build_id_32(const void *page_addr, unsigned char *build_id,
+ {
+ 	Elf32_Ehdr *ehdr = (Elf32_Ehdr *)page_addr;
+ 	Elf32_Phdr *phdr;
+-	int i;
++	__u32 i, phnum;
++
++	/*
++	 * FIXME
++	 * Neither ELF spec nor ELF loader require that program headers
++	 * start immediately after ELF header.
++	 */
++	if (ehdr->e_phoff != sizeof(Elf32_Ehdr))
++		return -EINVAL;
+ 
++	phnum = READ_ONCE(ehdr->e_phnum);
+ 	/* only supports phdr that fits in one page */
+-	if (ehdr->e_phnum >
+-	    (PAGE_SIZE - sizeof(Elf32_Ehdr)) / sizeof(Elf32_Phdr))
++	if (phnum > (PAGE_SIZE - sizeof(Elf32_Ehdr)) / sizeof(Elf32_Phdr))
+ 		return -EINVAL;
+ 
+ 	phdr = (Elf32_Phdr *)(page_addr + sizeof(Elf32_Ehdr));
+ 
+-	for (i = 0; i < ehdr->e_phnum; ++i) {
++	for (i = 0; i < phnum; ++i) {
+ 		if (phdr[i].p_type == PT_NOTE &&
+ 		    !parse_build_id(page_addr, build_id, size,
+-				    page_addr + phdr[i].p_offset,
+-				    phdr[i].p_filesz))
++				    page_addr + READ_ONCE(phdr[i].p_offset),
++				    READ_ONCE(phdr[i].p_filesz)))
+ 			return 0;
+ 	}
+ 	return -EINVAL;
+@@ -96,20 +110,28 @@ static int get_build_id_64(const void *page_addr, unsigned char *build_id,
+ {
+ 	Elf64_Ehdr *ehdr = (Elf64_Ehdr *)page_addr;
+ 	Elf64_Phdr *phdr;
+-	int i;
++	__u32 i, phnum;
++
++	/*
++	 * FIXME
++	 * Neither ELF spec nor ELF loader require that program headers
++	 * start immediately after ELF header.
++	 */
++	if (ehdr->e_phoff != sizeof(Elf64_Ehdr))
++		return -EINVAL;
+ 
++	phnum = READ_ONCE(ehdr->e_phnum);
+ 	/* only supports phdr that fits in one page */
+-	if (ehdr->e_phnum >
+-	    (PAGE_SIZE - sizeof(Elf64_Ehdr)) / sizeof(Elf64_Phdr))
++	if (phnum > (PAGE_SIZE - sizeof(Elf64_Ehdr)) / sizeof(Elf64_Phdr))
+ 		return -EINVAL;
+ 
+ 	phdr = (Elf64_Phdr *)(page_addr + sizeof(Elf64_Ehdr));
+ 
+-	for (i = 0; i < ehdr->e_phnum; ++i) {
++	for (i = 0; i < phnum; ++i) {
+ 		if (phdr[i].p_type == PT_NOTE &&
+ 		    !parse_build_id(page_addr, build_id, size,
+-				    page_addr + phdr[i].p_offset,
+-				    phdr[i].p_filesz))
++				    page_addr + READ_ONCE(phdr[i].p_offset),
++				    READ_ONCE(phdr[i].p_filesz)))
+ 			return 0;
+ 	}
+ 	return -EINVAL;
+@@ -138,6 +160,10 @@ int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id,
+ 	page = find_get_page(vma->vm_file->f_mapping, 0);
+ 	if (!page)
+ 		return -EFAULT;	/* page not mapped */
++	if (!PageUptodate(page)) {
++		put_page(page);
++		return -EFAULT;
++	}
+ 
+ 	ret = -EINVAL;
+ 	page_addr = kmap_local_page(page);
+diff --git a/mm/Kconfig b/mm/Kconfig
+index b4cb45255a5414..baf7ce6a888c00 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -146,12 +146,15 @@ config ZSWAP_ZPOOL_DEFAULT_ZBUD
+ 	help
+ 	  Use the zbud allocator as the default allocator.
+ 
+-config ZSWAP_ZPOOL_DEFAULT_Z3FOLD
+-	bool "z3fold"
+-	select Z3FOLD
++config ZSWAP_ZPOOL_DEFAULT_Z3FOLD_DEPRECATED
++	bool "z3fold (DEPRECATED)"
++	select Z3FOLD_DEPRECATED
+ 	help
+ 	  Use the z3fold allocator as the default allocator.
+ 
++	  Deprecated and scheduled for removal in a few cycles,
++	  see CONFIG_Z3FOLD_DEPRECATED.
++
+ config ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
+ 	bool "zsmalloc"
+ 	select ZSMALLOC
+@@ -163,7 +166,7 @@ config ZSWAP_ZPOOL_DEFAULT
+        string
+        depends on ZSWAP
+        default "zbud" if ZSWAP_ZPOOL_DEFAULT_ZBUD
+-       default "z3fold" if ZSWAP_ZPOOL_DEFAULT_Z3FOLD
++       default "z3fold" if ZSWAP_ZPOOL_DEFAULT_Z3FOLD_DEPRECATED
+        default "zsmalloc" if ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
+        default ""
+ 
+@@ -177,15 +180,25 @@ config ZBUD
+ 	  deterministic reclaim properties that make it preferable to a higher
+ 	  density approach when reclaim will be used.
+ 
+-config Z3FOLD
+-	tristate "3:1 compression allocator (z3fold)"
++config Z3FOLD_DEPRECATED
++	tristate "3:1 compression allocator (z3fold) (DEPRECATED)"
+ 	depends on ZSWAP
+ 	help
++	  Deprecated and scheduled for removal in a few cycles. If you have
++	  a good reason for using Z3FOLD over ZSMALLOC, please contact
++	  linux-mm@kvack.org and the zswap maintainers.
++
+ 	  A special purpose allocator for storing compressed pages.
+ 	  It is designed to store up to three compressed pages per physical
+ 	  page. It is a ZBUD derivative so the simplicity and determinism are
+ 	  still there.
+ 
++config Z3FOLD
++	tristate
++	default y if Z3FOLD_DEPRECATED=y
++	default m if Z3FOLD_DEPRECATED=m
++	depends on Z3FOLD_DEPRECATED
++
+ config ZSMALLOC
+ 	tristate
+ 	prompt "N:1 compression allocator (zsmalloc)" if ZSWAP
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 1560a1546bb1c8..91284b7552e74a 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -1176,6 +1176,13 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
+ 
+ 	/* If the object still fits, repoison it precisely. */
+ 	if (ks >= new_size) {
++		/* Zero out spare memory. */
++		if (want_init_on_alloc(flags)) {
++			kasan_disable_current();
++			memset((void *)p + new_size, 0, ks - new_size);
++			kasan_enable_current();
++		}
++
+ 		p = kasan_krealloc((void *)p, new_size, flags);
+ 		return (void *)p;
+ 	}
+diff --git a/mm/slub.c b/mm/slub.c
+index be0ef60984ac45..ccd770cf8f7983 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -756,6 +756,50 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
+ 	return false;
+ }
+ 
++/*
++ * kmalloc caches has fixed sizes (mostly power of 2), and kmalloc() API
++ * family will round up the real request size to these fixed ones, so
++ * there could be an extra area than what is requested. Save the original
++ * request size in the meta data area, for better debug and sanity check.
++ */
++static inline void set_orig_size(struct kmem_cache *s,
++				void *object, unsigned int orig_size)
++{
++	void *p = kasan_reset_tag(object);
++	unsigned int kasan_meta_size;
++
++	if (!slub_debug_orig_size(s))
++		return;
++
++	/*
++	 * KASAN can save its free meta data inside of the object at offset 0.
++	 * If this meta data size is larger than 'orig_size', it will overlap
++	 * the data redzone in [orig_size+1, object_size]. Thus, we adjust
++	 * 'orig_size' to be as at least as big as KASAN's meta data.
++	 */
++	kasan_meta_size = kasan_metadata_size(s, true);
++	if (kasan_meta_size > orig_size)
++		orig_size = kasan_meta_size;
++
++	p += get_info_end(s);
++	p += sizeof(struct track) * 2;
++
++	*(unsigned int *)p = orig_size;
++}
++
++static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
++{
++	void *p = kasan_reset_tag(object);
++
++	if (!slub_debug_orig_size(s))
++		return s->object_size;
++
++	p += get_info_end(s);
++	p += sizeof(struct track) * 2;
++
++	return *(unsigned int *)p;
++}
++
+ #ifdef CONFIG_SLUB_DEBUG
+ static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
+ static DEFINE_SPINLOCK(object_map_lock);
+@@ -969,50 +1013,6 @@ static void print_slab_info(const struct slab *slab)
+ 	       folio_flags(folio, 0));
+ }
+ 
+-/*
+- * kmalloc caches has fixed sizes (mostly power of 2), and kmalloc() API
+- * family will round up the real request size to these fixed ones, so
+- * there could be an extra area than what is requested. Save the original
+- * request size in the meta data area, for better debug and sanity check.
+- */
+-static inline void set_orig_size(struct kmem_cache *s,
+-				void *object, unsigned int orig_size)
+-{
+-	void *p = kasan_reset_tag(object);
+-	unsigned int kasan_meta_size;
+-
+-	if (!slub_debug_orig_size(s))
+-		return;
+-
+-	/*
+-	 * KASAN can save its free meta data inside of the object at offset 0.
+-	 * If this meta data size is larger than 'orig_size', it will overlap
+-	 * the data redzone in [orig_size+1, object_size]. Thus, we adjust
+-	 * 'orig_size' to be as at least as big as KASAN's meta data.
+-	 */
+-	kasan_meta_size = kasan_metadata_size(s, true);
+-	if (kasan_meta_size > orig_size)
+-		orig_size = kasan_meta_size;
+-
+-	p += get_info_end(s);
+-	p += sizeof(struct track) * 2;
+-
+-	*(unsigned int *)p = orig_size;
+-}
+-
+-static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
+-{
+-	void *p = kasan_reset_tag(object);
+-
+-	if (!slub_debug_orig_size(s))
+-		return s->object_size;
+-
+-	p += get_info_end(s);
+-	p += sizeof(struct track) * 2;
+-
+-	return *(unsigned int *)p;
+-}
+-
+ void skip_orig_size_check(struct kmem_cache *s, const void *object)
+ {
+ 	set_orig_size(s, (void *)object, s->object_size);
+@@ -1859,7 +1859,6 @@ static inline void inc_slabs_node(struct kmem_cache *s, int node,
+ 							int objects) {}
+ static inline void dec_slabs_node(struct kmem_cache *s, int node,
+ 							int objects) {}
+-
+ #ifndef CONFIG_SLUB_TINY
+ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
+ 			       void **freelist, void *nextfree)
+@@ -2187,14 +2186,21 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
+ 	 */
+ 	if (unlikely(init)) {
+ 		int rsize;
+-		unsigned int inuse;
++		unsigned int inuse, orig_size;
+ 
+ 		inuse = get_info_end(s);
++		orig_size = get_orig_size(s, x);
+ 		if (!kasan_has_integrated_init())
+-			memset(kasan_reset_tag(x), 0, s->object_size);
++			memset(kasan_reset_tag(x), 0, orig_size);
+ 		rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
+ 		memset((char *)kasan_reset_tag(x) + inuse, 0,
+ 		       s->size - inuse - rsize);
++		/*
++		 * Restore orig_size, otherwise the kmalloc redzone would be
++		 * reported as overwritten.
++		 */
++		set_orig_size(s, x, orig_size);
++
+ 	}
+ 	/* KASAN might put x into memory quarantine, delaying its reuse. */
+ 	return !kasan_slab_free(s, x, init);
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 9493966cf389f5..a9feb323c7d290 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3792,6 +3792,8 @@ static void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb)
+ 
+ 	hci_dev_lock(hdev);
+ 	conn = hci_conn_hash_lookup_handle(hdev, handle);
++	if (conn && hci_dev_test_flag(hdev, HCI_MGMT))
++		mgmt_device_connected(hdev, conn, NULL, 0);
+ 	hci_dev_unlock(hdev);
+ 
+ 	if (conn) {
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 59d9086db75fee..f47da4aa0d708c 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3707,7 +3707,7 @@ static void hci_remote_features_evt(struct hci_dev *hdev, void *data,
+ 		goto unlock;
+ 	}
+ 
+-	if (!ev->status && !test_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags)) {
++	if (!ev->status) {
+ 		struct hci_cp_remote_name_req cp;
+ 		memset(&cp, 0, sizeof(cp));
+ 		bacpy(&cp.bdaddr, &conn->dst);
+@@ -5325,19 +5325,16 @@ static void hci_user_confirm_request_evt(struct hci_dev *hdev, void *data,
+ 		goto unlock;
+ 	}
+ 
+-	/* If no side requires MITM protection; auto-accept */
++	/* If no side requires MITM protection; use JUST_CFM method */
+ 	if ((!loc_mitm || conn->remote_cap == HCI_IO_NO_INPUT_OUTPUT) &&
+ 	    (!rem_mitm || conn->io_capability == HCI_IO_NO_INPUT_OUTPUT)) {
+ 
+-		/* If we're not the initiators request authorization to
+-		 * proceed from user space (mgmt_user_confirm with
+-		 * confirm_hint set to 1). The exception is if neither
+-		 * side had MITM or if the local IO capability is
+-		 * NoInputNoOutput, in which case we do auto-accept
++		/* If we're not the initiator of request authorization and the
++		 * local IO capability is not NoInputNoOutput, use JUST_WORKS
++		 * method (mgmt_user_confirm with confirm_hint set to 1).
+ 		 */
+ 		if (!test_bit(HCI_CONN_AUTH_PEND, &conn->flags) &&
+-		    conn->io_capability != HCI_IO_NO_INPUT_OUTPUT &&
+-		    (loc_mitm || rem_mitm)) {
++		    conn->io_capability != HCI_IO_NO_INPUT_OUTPUT) {
+ 			bt_dev_dbg(hdev, "Confirming auto-accept as acceptor");
+ 			confirm_hint = 1;
+ 			goto confirm;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 9988ba382b686a..6544c1ed714344 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -4066,17 +4066,9 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd,
+ static int l2cap_connect_req(struct l2cap_conn *conn,
+ 			     struct l2cap_cmd_hdr *cmd, u16 cmd_len, u8 *data)
+ {
+-	struct hci_dev *hdev = conn->hcon->hdev;
+-	struct hci_conn *hcon = conn->hcon;
+-
+ 	if (cmd_len < sizeof(struct l2cap_conn_req))
+ 		return -EPROTO;
+ 
+-	hci_dev_lock(hdev);
+-	if (hci_dev_test_flag(hdev, HCI_MGMT))
+-		mgmt_device_connected(hdev, hcon, NULL, 0);
+-	hci_dev_unlock(hdev);
+-
+ 	l2cap_connect(conn, cmd, data, L2CAP_CONN_RSP);
+ 	return 0;
+ }
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index c383eb44d516bc..31cabc3e98ce42 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1454,10 +1454,15 @@ static void cmd_status_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ 
+ static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ {
+-	if (cmd->cmd_complete) {
+-		u8 *status = data;
++	struct cmd_lookup *match = data;
++
++	/* dequeue cmd_sync entries using cmd as data as that is about to be
++	 * removed/freed.
++	 */
++	hci_cmd_sync_dequeue(match->hdev, NULL, cmd, NULL);
+ 
+-		cmd->cmd_complete(cmd, *status);
++	if (cmd->cmd_complete) {
++		cmd->cmd_complete(cmd, match->mgmt_status);
+ 		mgmt_pending_remove(cmd);
+ 
+ 		return;
+@@ -9349,12 +9354,12 @@ void mgmt_index_added(struct hci_dev *hdev)
+ void mgmt_index_removed(struct hci_dev *hdev)
+ {
+ 	struct mgmt_ev_ext_index ev;
+-	u8 status = MGMT_STATUS_INVALID_INDEX;
++	struct cmd_lookup match = { NULL, hdev, MGMT_STATUS_INVALID_INDEX };
+ 
+ 	if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
+ 		return;
+ 
+-	mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &status);
++	mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
+ 
+ 	if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
+ 		mgmt_index_event(MGMT_EV_UNCONF_INDEX_REMOVED, hdev, NULL, 0,
+@@ -9405,7 +9410,7 @@ void mgmt_power_on(struct hci_dev *hdev, int err)
+ void __mgmt_power_off(struct hci_dev *hdev)
+ {
+ 	struct cmd_lookup match = { NULL, hdev };
+-	u8 status, zero_cod[] = { 0, 0, 0 };
++	u8 zero_cod[] = { 0, 0, 0 };
+ 
+ 	mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match);
+ 
+@@ -9417,11 +9422,11 @@ void __mgmt_power_off(struct hci_dev *hdev)
+ 	 * status responses.
+ 	 */
+ 	if (hci_dev_test_flag(hdev, HCI_UNREGISTER))
+-		status = MGMT_STATUS_INVALID_INDEX;
++		match.mgmt_status = MGMT_STATUS_INVALID_INDEX;
+ 	else
+-		status = MGMT_STATUS_NOT_POWERED;
++		match.mgmt_status = MGMT_STATUS_NOT_POWERED;
+ 
+-	mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &status);
++	mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
+ 
+ 	if (memcmp(hdev->dev_class, zero_cod, sizeof(zero_cod)) != 0) {
+ 		mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev,
+diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
+index bc37e47ad8299e..1a52a0bca086d6 100644
+--- a/net/bridge/br_mdb.c
++++ b/net/bridge/br_mdb.c
+@@ -1674,7 +1674,7 @@ int br_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid, u32 seq,
+ 	spin_lock_bh(&br->multicast_lock);
+ 
+ 	mp = br_mdb_ip_get(br, &group);
+-	if (!mp) {
++	if (!mp || (!mp->ports && !mp->host_joined)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
+ 		err = -ENOENT;
+ 		goto unlock;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2b4819b610b8a1..d716a046eaf971 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3502,7 +3502,7 @@ static netdev_features_t gso_features_check(const struct sk_buff *skb,
+ 	if (gso_segs > READ_ONCE(dev->gso_max_segs))
+ 		return features & ~NETIF_F_GSO_MASK;
+ 
+-	if (unlikely(skb->len >= READ_ONCE(dev->gso_max_size)))
++	if (unlikely(skb->len >= netif_get_gso_max_size(dev, skb)))
+ 		return features & ~NETIF_F_GSO_MASK;
+ 
+ 	if (!skb_shinfo(skb)->gso_type) {
+@@ -3748,7 +3748,7 @@ static void qdisc_pkt_len_init(struct sk_buff *skb)
+ 						sizeof(_tcphdr), &_tcphdr);
+ 			if (likely(th))
+ 				hdr_len += __tcp_hdrlen(th);
+-		} else {
++		} else if (shinfo->gso_type & SKB_GSO_UDP_L4) {
+ 			struct udphdr _udphdr;
+ 
+ 			if (skb_header_pointer(skb, hdr_len,
+@@ -3756,10 +3756,14 @@ static void qdisc_pkt_len_init(struct sk_buff *skb)
+ 				hdr_len += sizeof(struct udphdr);
+ 		}
+ 
+-		if (shinfo->gso_type & SKB_GSO_DODGY)
+-			gso_segs = DIV_ROUND_UP(skb->len - hdr_len,
+-						shinfo->gso_size);
++		if (unlikely(shinfo->gso_type & SKB_GSO_DODGY)) {
++			int payload = skb->len - hdr_len;
+ 
++			/* Malicious packet. */
++			if (payload <= 0)
++				return;
++			gso_segs = DIV_ROUND_UP(payload, shinfo->gso_size);
++		}
+ 		qdisc_skb_cb(skb)->pkt_len += (gso_segs - 1) * hdr_len;
+ 	}
+ }
+diff --git a/net/core/gro.c b/net/core/gro.c
+index b3b43de1a65027..87708483a5f460 100644
+--- a/net/core/gro.c
++++ b/net/core/gro.c
+@@ -98,7 +98,6 @@ int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb)
+ 	unsigned int headlen = skb_headlen(skb);
+ 	unsigned int len = skb_gro_len(skb);
+ 	unsigned int delta_truesize;
+-	unsigned int gro_max_size;
+ 	unsigned int new_truesize;
+ 	struct sk_buff *lp;
+ 	int segs;
+@@ -112,12 +111,8 @@ int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb)
+ 	if (p->pp_recycle != skb->pp_recycle)
+ 		return -ETOOMANYREFS;
+ 
+-	/* pairs with WRITE_ONCE() in netif_set_gro(_ipv4)_max_size() */
+-	gro_max_size = p->protocol == htons(ETH_P_IPV6) ?
+-			READ_ONCE(p->dev->gro_max_size) :
+-			READ_ONCE(p->dev->gro_ipv4_max_size);
+-
+-	if (unlikely(p->len + len >= gro_max_size || NAPI_GRO_CB(skb)->flush))
++	if (unlikely(p->len + len >= netif_get_gro_max_size(p->dev, p) ||
++		     NAPI_GRO_CB(skb)->flush))
+ 		return -E2BIG;
+ 
+ 	if (unlikely(p->len + len >= GRO_LEGACY_MAX_SIZE)) {
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index 15ad775ddd3c10..dc0c622d453e11 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -32,6 +32,7 @@
+ #ifdef CONFIG_SYSFS
+ static const char fmt_hex[] = "%#x\n";
+ static const char fmt_dec[] = "%d\n";
++static const char fmt_uint[] = "%u\n";
+ static const char fmt_ulong[] = "%lu\n";
+ static const char fmt_u64[] = "%llu\n";
+ 
+@@ -425,6 +426,9 @@ NETDEVICE_SHOW_RW(gro_flush_timeout, fmt_ulong);
+ 
+ static int change_napi_defer_hard_irqs(struct net_device *dev, unsigned long val)
+ {
++	if (val > S32_MAX)
++		return -ERANGE;
++
+ 	WRITE_ONCE(dev->napi_defer_hard_irqs, val);
+ 	return 0;
+ }
+@@ -438,7 +442,7 @@ static ssize_t napi_defer_hard_irqs_store(struct device *dev,
+ 
+ 	return netdev_store(dev, attr, buf, len, change_napi_defer_hard_irqs);
+ }
+-NETDEVICE_SHOW_RW(napi_defer_hard_irqs, fmt_dec);
++NETDEVICE_SHOW_RW(napi_defer_hard_irqs, fmt_uint);
+ 
+ static ssize_t ifalias_store(struct device *dev, struct device_attribute *attr,
+ 			     const char *buf, size_t len)
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index 05f9515d2c05c1..a17d7eaeb00192 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -216,10 +216,12 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+ 	rtnl_lock();
+ 
+ 	napi = napi_by_id(napi_id);
+-	if (napi)
++	if (napi) {
+ 		err = netdev_nl_napi_fill_one(rsp, napi, info);
+-	else
+-		err = -EINVAL;
++	} else {
++		NL_SET_BAD_ATTR(info->extack, info->attrs[NETDEV_A_NAPI_ID]);
++		err = -ENOENT;
++	}
+ 
+ 	rtnl_unlock();
+ 
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index 55bcacf67df3b6..e0821390040937 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -626,12 +626,9 @@ int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
+ 	const struct net_device_ops *ops;
+ 	int err;
+ 
+-	np->dev = ndev;
+-	strscpy(np->dev_name, ndev->name, IFNAMSIZ);
+-
+ 	if (ndev->priv_flags & IFF_DISABLE_NETPOLL) {
+ 		np_err(np, "%s doesn't support polling, aborting\n",
+-		       np->dev_name);
++		       ndev->name);
+ 		err = -ENOTSUPP;
+ 		goto out;
+ 	}
+@@ -649,7 +646,7 @@ int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
+ 
+ 		refcount_set(&npinfo->refcnt, 1);
+ 
+-		ops = np->dev->netdev_ops;
++		ops = ndev->netdev_ops;
+ 		if (ops->ndo_netpoll_setup) {
+ 			err = ops->ndo_netpoll_setup(ndev, npinfo);
+ 			if (err)
+@@ -660,6 +657,8 @@ int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
+ 		refcount_inc(&npinfo->refcnt);
+ 	}
+ 
++	np->dev = ndev;
++	strscpy(np->dev_name, ndev->name, IFNAMSIZ);
+ 	npinfo->netpoll = np;
+ 
+ 	/* last thing to do is link it to the net device structure */
+@@ -677,6 +676,7 @@ EXPORT_SYMBOL_GPL(__netpoll_setup);
+ int netpoll_setup(struct netpoll *np)
+ {
+ 	struct net_device *ndev = NULL;
++	bool ip_overwritten = false;
+ 	struct in_device *in_dev;
+ 	int err;
+ 
+@@ -741,6 +741,7 @@ int netpoll_setup(struct netpoll *np)
+ 			}
+ 
+ 			np->local_ip.ip = ifa->ifa_local;
++			ip_overwritten = true;
+ 			np_info(np, "local IP %pI4\n", &np->local_ip.ip);
+ 		} else {
+ #if IS_ENABLED(CONFIG_IPV6)
+@@ -757,6 +758,7 @@ int netpoll_setup(struct netpoll *np)
+ 					    !!(ipv6_addr_type(&np->remote_ip.in6) & IPV6_ADDR_LINKLOCAL))
+ 						continue;
+ 					np->local_ip.in6 = ifp->addr;
++					ip_overwritten = true;
+ 					err = 0;
+ 					break;
+ 				}
+@@ -787,6 +789,9 @@ int netpoll_setup(struct netpoll *np)
+ 	return 0;
+ 
+ put:
++	DEBUG_NET_WARN_ON_ONCE(np->dev);
++	if (ip_overwritten)
++		memset(&np->local_ip, 0, sizeof(np->local_ip));
+ 	netdev_put(ndev, &np->dev_tracker);
+ unlock:
+ 	rtnl_unlock();
+diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
+index 12521a7d404810..03ef2a2af43090 100644
+--- a/net/dsa/dsa.c
++++ b/net/dsa/dsa.c
+@@ -1579,6 +1579,7 @@ EXPORT_SYMBOL_GPL(dsa_unregister_switch);
+ void dsa_switch_shutdown(struct dsa_switch *ds)
+ {
+ 	struct net_device *conduit, *user_dev;
++	LIST_HEAD(close_list);
+ 	struct dsa_port *dp;
+ 
+ 	mutex_lock(&dsa2_mutex);
+@@ -1588,10 +1589,16 @@ void dsa_switch_shutdown(struct dsa_switch *ds)
+ 
+ 	rtnl_lock();
+ 
++	dsa_switch_for_each_cpu_port(dp, ds)
++		list_add(&dp->conduit->close_list, &close_list);
++
++	dev_close_many(&close_list, true);
++
+ 	dsa_switch_for_each_user_port(dp, ds) {
+ 		conduit = dsa_port_to_conduit(dp);
+ 		user_dev = dp->user;
+ 
++		netif_device_detach(user_dev);
+ 		netdev_upper_dev_unlink(conduit, user_dev);
+ 	}
+ 
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index d09f557eaa7790..73effd2d2994ab 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -574,10 +574,6 @@ static int inet_set_ifa(struct net_device *dev, struct in_ifaddr *ifa)
+ 
+ 	ASSERT_RTNL();
+ 
+-	if (!in_dev) {
+-		inet_free_ifa(ifa);
+-		return -ENOBUFS;
+-	}
+ 	ipv4_devconf_setall(in_dev);
+ 	neigh_parms_data_state_setall(in_dev->arp_parms);
+ 	if (ifa->ifa_dev != in_dev) {
+@@ -1184,6 +1180,8 @@ int devinet_ioctl(struct net *net, unsigned int cmd, struct ifreq *ifr)
+ 
+ 		if (!ifa) {
+ 			ret = -ENOBUFS;
++			if (!in_dev)
++				break;
+ 			ifa = inet_alloc_ifa();
+ 			if (!ifa)
+ 				break;
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 7ad2cafb927634..da540ddb7af651 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1343,7 +1343,7 @@ static void nl_fib_lookup(struct net *net, struct fib_result_nl *frn)
+ 	struct flowi4           fl4 = {
+ 		.flowi4_mark = frn->fl_mark,
+ 		.daddr = frn->fl_addr,
+-		.flowi4_tos = frn->fl_tos,
++		.flowi4_tos = frn->fl_tos & IPTOS_RT_MASK,
+ 		.flowi4_scope = frn->fl_scope,
+ 	};
+ 	struct fib_table *tb;
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index ba205473522e4e..868ef18ad656c1 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -661,11 +661,11 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
+ 		if (skb_cow_head(skb, 0))
+ 			goto free_skb;
+ 
+-		tnl_params = (const struct iphdr *)skb->data;
+-
+-		if (!pskb_network_may_pull(skb, pull_len))
++		if (!pskb_may_pull(skb, pull_len))
+ 			goto free_skb;
+ 
++		tnl_params = (const struct iphdr *)skb->data;
++
+ 		/* ip_tunnel_xmit() needs skb->data pointing to gre header. */
+ 		skb_pull(skb, pull_len);
+ 		skb_reset_mac_header(skb);
+diff --git a/net/ipv4/netfilter/nf_dup_ipv4.c b/net/ipv4/netfilter/nf_dup_ipv4.c
+index 6cc5743c553a02..9a21175693db58 100644
+--- a/net/ipv4/netfilter/nf_dup_ipv4.c
++++ b/net/ipv4/netfilter/nf_dup_ipv4.c
+@@ -52,8 +52,9 @@ void nf_dup_ipv4(struct net *net, struct sk_buff *skb, unsigned int hooknum,
+ {
+ 	struct iphdr *iph;
+ 
++	local_bh_disable();
+ 	if (this_cpu_read(nf_skb_duplicated))
+-		return;
++		goto out;
+ 	/*
+ 	 * Copy the skb, and route the copy. Will later return %XT_CONTINUE for
+ 	 * the original skb, which should continue on its way as if nothing has
+@@ -61,7 +62,7 @@ void nf_dup_ipv4(struct net *net, struct sk_buff *skb, unsigned int hooknum,
+ 	 */
+ 	skb = pskb_copy(skb, GFP_ATOMIC);
+ 	if (skb == NULL)
+-		return;
++		goto out;
+ 
+ #if IS_ENABLED(CONFIG_NF_CONNTRACK)
+ 	/* Avoid counting cloned packets towards the original connection. */
+@@ -90,6 +91,8 @@ void nf_dup_ipv4(struct net *net, struct sk_buff *skb, unsigned int hooknum,
+ 	} else {
+ 		kfree_skb(skb);
+ 	}
++out:
++	local_bh_enable();
+ }
+ EXPORT_SYMBOL_GPL(nf_dup_ipv4);
+ 
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index da0f5025539917..7874b3718bc3cb 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -118,6 +118,9 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp)
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	int ts_recent_stamp;
+ 
++	if (tw->tw_substate == TCP_FIN_WAIT2)
++		reuse = 0;
++
+ 	if (reuse == 2) {
+ 		/* Still does not detect *everything* that goes through
+ 		 * lo, since we require a loopback src or dst address
+diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
+index e4ad3311e14895..2308665b51c538 100644
+--- a/net/ipv4/tcp_offload.c
++++ b/net/ipv4/tcp_offload.c
+@@ -101,8 +101,14 @@ static struct sk_buff *tcp4_gso_segment(struct sk_buff *skb,
+ 	if (!pskb_may_pull(skb, sizeof(struct tcphdr)))
+ 		return ERR_PTR(-EINVAL);
+ 
+-	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)
+-		return __tcp4_gso_segment_list(skb, features);
++	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
++		struct tcphdr *th = tcp_hdr(skb);
++
++		if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
++			return __tcp4_gso_segment_list(skb, features);
++
++		skb->ip_summed = CHECKSUM_NONE;
++	}
+ 
+ 	if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
+ 		const struct iphdr *iph = ip_hdr(skb);
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 5b54f4f32b1cd8..fdc032fda42c67 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -290,8 +290,26 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+ 		return NULL;
+ 	}
+ 
+-	if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST)
+-		return __udp_gso_segment_list(gso_skb, features, is_ipv6);
++	if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST) {
++		 /* Detect modified geometry and pass those to skb_segment. */
++		if (skb_pagelen(gso_skb) - sizeof(*uh) == skb_shinfo(gso_skb)->gso_size)
++			return __udp_gso_segment_list(gso_skb, features, is_ipv6);
++
++		 /* Setup csum, as fraglist skips this in udp4_gro_receive. */
++		gso_skb->csum_start = skb_transport_header(gso_skb) - gso_skb->head;
++		gso_skb->csum_offset = offsetof(struct udphdr, check);
++		gso_skb->ip_summed = CHECKSUM_PARTIAL;
++
++		uh = udp_hdr(gso_skb);
++		if (is_ipv6)
++			uh->check = ~udp_v6_check(gso_skb->len,
++						  &ipv6_hdr(gso_skb)->saddr,
++						  &ipv6_hdr(gso_skb)->daddr, 0);
++		else
++			uh->check = ~udp_v4_check(gso_skb->len,
++						  ip_hdr(gso_skb)->saddr,
++						  ip_hdr(gso_skb)->daddr, 0);
++	}
+ 
+ 	skb_pull(gso_skb, sizeof(*uh));
+ 
+diff --git a/net/ipv6/netfilter/nf_dup_ipv6.c b/net/ipv6/netfilter/nf_dup_ipv6.c
+index a0a2de30be3e7b..0c39c77fe8a8a4 100644
+--- a/net/ipv6/netfilter/nf_dup_ipv6.c
++++ b/net/ipv6/netfilter/nf_dup_ipv6.c
+@@ -47,11 +47,12 @@ static bool nf_dup_ipv6_route(struct net *net, struct sk_buff *skb,
+ void nf_dup_ipv6(struct net *net, struct sk_buff *skb, unsigned int hooknum,
+ 		 const struct in6_addr *gw, int oif)
+ {
++	local_bh_disable();
+ 	if (this_cpu_read(nf_skb_duplicated))
+-		return;
++		goto out;
+ 	skb = pskb_copy(skb, GFP_ATOMIC);
+ 	if (skb == NULL)
+-		return;
++		goto out;
+ 
+ #if IS_ENABLED(CONFIG_NF_CONNTRACK)
+ 	nf_reset_ct(skb);
+@@ -69,6 +70,8 @@ void nf_dup_ipv6(struct net *net, struct sk_buff *skb, unsigned int hooknum,
+ 	} else {
+ 		kfree_skb(skb);
+ 	}
++out:
++	local_bh_enable();
+ }
+ EXPORT_SYMBOL_GPL(nf_dup_ipv6);
+ 
+diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c
+index 23971903e66de8..a45bf17cb2a172 100644
+--- a/net/ipv6/tcpv6_offload.c
++++ b/net/ipv6/tcpv6_offload.c
+@@ -159,8 +159,14 @@ static struct sk_buff *tcp6_gso_segment(struct sk_buff *skb,
+ 	if (!pskb_may_pull(skb, sizeof(*th)))
+ 		return ERR_PTR(-EINVAL);
+ 
+-	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)
+-		return __tcp6_gso_segment_list(skb, features);
++	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
++		struct tcphdr *th = tcp_hdr(skb);
++
++		if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
++			return __tcp6_gso_segment_list(skb, features);
++
++		skb->ip_summed = CHECKSUM_NONE;
++	}
+ 
+ 	if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
+ 		const struct ipv6hdr *ipv6h = ipv6_hdr(skb);
+diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
+index e6a7ff6ca6797a..db5675d24e488b 100644
+--- a/net/mac80211/chan.c
++++ b/net/mac80211/chan.c
+@@ -281,7 +281,9 @@ ieee80211_get_max_required_bw(struct ieee80211_link_data *link)
+ 	enum nl80211_chan_width max_bw = NL80211_CHAN_WIDTH_20_NOHT;
+ 	struct sta_info *sta;
+ 
+-	list_for_each_entry_rcu(sta, &sdata->local->sta_list, list) {
++	lockdep_assert_wiphy(sdata->local->hw.wiphy);
++
++	list_for_each_entry(sta, &sdata->local->sta_list, list) {
+ 		if (sdata != sta->sdata &&
+ 		    !(sta->sdata->bss && sta->sdata->bss == sdata->bss))
+ 			continue;
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 1faf4d7c115f08..71cc5eb35bfcbd 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1020,7 +1020,7 @@ static bool ieee80211_add_vht_ie(struct ieee80211_sub_if_data *sdata,
+ 		bool disable_mu_mimo = false;
+ 		struct ieee80211_sub_if_data *other;
+ 
+-		list_for_each_entry_rcu(other, &local->interfaces, list) {
++		list_for_each_entry(other, &local->interfaces, list) {
+ 			if (other->vif.bss_conf.mu_mimo_owner) {
+ 				disable_mu_mimo = true;
+ 				break;
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index 1c5d99975ad04d..3b2bde6360bcb6 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -504,7 +504,7 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted)
+ 	 * the scan was in progress; if there was none this will
+ 	 * just be a no-op for the particular interface.
+ 	 */
+-	list_for_each_entry_rcu(sdata, &local->interfaces, list) {
++	list_for_each_entry(sdata, &local->interfaces, list) {
+ 		if (ieee80211_sdata_running(sdata))
+ 			wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
+ 	}
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index c11dbe82ae1b30..d10e0c528c1bf4 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -751,7 +751,9 @@ static void __iterate_interfaces(struct ieee80211_local *local,
+ 	struct ieee80211_sub_if_data *sdata;
+ 	bool active_only = iter_flags & IEEE80211_IFACE_ITER_ACTIVE;
+ 
+-	list_for_each_entry_rcu(sdata, &local->interfaces, list) {
++	list_for_each_entry_rcu(sdata, &local->interfaces, list,
++				lockdep_is_held(&local->iflist_mtx) ||
++				lockdep_is_held(&local->hw.wiphy->mtx)) {
+ 		switch (sdata->vif.type) {
+ 		case NL80211_IFTYPE_MONITOR:
+ 			if (!(sdata->u.mntr.flags & MONITOR_FLAG_ACTIVE))
+diff --git a/net/mac802154/scan.c b/net/mac802154/scan.c
+index 1c0eeaa76560cd..a6dab3cc3ad858 100644
+--- a/net/mac802154/scan.c
++++ b/net/mac802154/scan.c
+@@ -176,6 +176,7 @@ void mac802154_scan_worker(struct work_struct *work)
+ 	struct ieee802154_local *local =
+ 		container_of(work, struct ieee802154_local, scan_work.work);
+ 	struct cfg802154_scan_request *scan_req;
++	enum nl802154_scan_types scan_req_type;
+ 	struct ieee802154_sub_if_data *sdata;
+ 	unsigned int scan_duration = 0;
+ 	struct wpan_phy *wpan_phy;
+@@ -209,6 +210,7 @@ void mac802154_scan_worker(struct work_struct *work)
+ 	}
+ 
+ 	wpan_phy = scan_req->wpan_phy;
++	scan_req_type = scan_req->type;
+ 	scan_req_duration = scan_req->duration;
+ 
+ 	/* Look for the next valid chan */
+@@ -246,7 +248,7 @@ void mac802154_scan_worker(struct work_struct *work)
+ 		goto end_scan;
+ 	}
+ 
+-	if (scan_req->type == NL802154_SCAN_ACTIVE) {
++	if (scan_req_type == NL802154_SCAN_ACTIVE) {
+ 		ret = mac802154_transmit_beacon_req(local, sdata);
+ 		if (ret)
+ 			dev_err(&sdata->dev->dev,
+diff --git a/net/ncsi/ncsi-manage.c b/net/ncsi/ncsi-manage.c
+index 5ecf611c882009..5cf55bde366d18 100644
+--- a/net/ncsi/ncsi-manage.c
++++ b/net/ncsi/ncsi-manage.c
+@@ -1954,6 +1954,8 @@ void ncsi_unregister_dev(struct ncsi_dev *nd)
+ 	list_del_rcu(&ndp->node);
+ 	spin_unlock_irqrestore(&ncsi_dev_lock, flags);
+ 
++	disable_work_sync(&ndp->work);
++
+ 	kfree(ndp);
+ }
+ EXPORT_SYMBOL_GPL(ncsi_unregister_dev);
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 08de24658f4fa9..3f8197d5926ef5 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -1058,7 +1058,7 @@ bool rxrpc_direct_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
+ int rxrpc_io_thread(void *data);
+ static inline void rxrpc_wake_up_io_thread(struct rxrpc_local *local)
+ {
+-	wake_up_process(local->io_thread);
++	wake_up_process(READ_ONCE(local->io_thread));
+ }
+ 
+ static inline bool rxrpc_protocol_error(struct sk_buff *skb, enum rxrpc_abort_reason why)
+diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
+index 0300baa9afcd39..07c74c77d80214 100644
+--- a/net/rxrpc/io_thread.c
++++ b/net/rxrpc/io_thread.c
+@@ -27,11 +27,17 @@ int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
+ {
+ 	struct sk_buff_head *rx_queue;
+ 	struct rxrpc_local *local = rcu_dereference_sk_user_data(udp_sk);
++	struct task_struct *io_thread;
+ 
+ 	if (unlikely(!local)) {
+ 		kfree_skb(skb);
+ 		return 0;
+ 	}
++	io_thread = READ_ONCE(local->io_thread);
++	if (!io_thread) {
++		kfree_skb(skb);
++		return 0;
++	}
+ 	if (skb->tstamp == 0)
+ 		skb->tstamp = ktime_get_real();
+ 
+@@ -47,7 +53,7 @@ int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
+ #endif
+ 
+ 	skb_queue_tail(rx_queue, skb);
+-	rxrpc_wake_up_io_thread(local);
++	wake_up_process(io_thread);
+ 	return 0;
+ }
+ 
+@@ -565,7 +571,7 @@ int rxrpc_io_thread(void *data)
+ 	__set_current_state(TASK_RUNNING);
+ 	rxrpc_see_local(local, rxrpc_local_stop);
+ 	rxrpc_destroy_local(local);
+-	local->io_thread = NULL;
++	WRITE_ONCE(local->io_thread, NULL);
+ 	rxrpc_see_local(local, rxrpc_local_stopped);
+ 	return 0;
+ }
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index 504453c688d751..f9623ace22016f 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -232,7 +232,7 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 	}
+ 
+ 	wait_for_completion(&local->io_thread_ready);
+-	local->io_thread = io_thread;
++	WRITE_ONCE(local->io_thread, io_thread);
+ 	_leave(" = 0");
+ 	return 0;
+ 
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index b284a06b5a75fa..847e1cc6052ec2 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1952,7 +1952,9 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+ 			goto unlock;
+ 		}
+ 
+-		rcu_assign_pointer(q->admin_sched, new_admin);
++		/* Not going to race against advance_sched(), but still */
++		admin = rcu_replace_pointer(q->admin_sched, new_admin,
++					    lockdep_rtnl_is_held());
+ 		if (admin)
+ 			call_rcu(&admin->rcu, taprio_free_sched_cb);
+ 	} else {
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index c009383369b267..cefac9eaddc300 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -8552,8 +8552,10 @@ static int sctp_listen_start(struct sock *sk, int backlog)
+ 	 */
+ 	inet_sk_set_state(sk, SCTP_SS_LISTENING);
+ 	if (!ep->base.bind_addr.port) {
+-		if (sctp_autobind(sk))
++		if (sctp_autobind(sk)) {
++			inet_sk_set_state(sk, SCTP_SS_CLOSED);
+ 			return -EAGAIN;
++		}
+ 	} else {
+ 		if (sctp_get_port(sk, inet_sk(sk)->inet_num)) {
+ 			inet_sk_set_state(sk, SCTP_SS_CLOSED);
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index d9cda1e53a017a..6a15b831589c0f 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -682,7 +682,7 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
+ 	serv->sv_nrthreads += 1;
+ 	spin_unlock_bh(&serv->sv_lock);
+ 
+-	atomic_inc(&pool->sp_nrthreads);
++	pool->sp_nrthreads += 1;
+ 
+ 	/* Protected by whatever lock the service uses when calling
+ 	 * svc_set_num_threads()
+@@ -737,31 +737,22 @@ svc_pool_victim(struct svc_serv *serv, struct svc_pool *target_pool,
+ 	struct svc_pool *pool;
+ 	unsigned int i;
+ 
+-retry:
+ 	pool = target_pool;
+ 
+-	if (pool != NULL) {
+-		if (atomic_inc_not_zero(&pool->sp_nrthreads))
+-			goto found_pool;
+-		return NULL;
+-	} else {
++	if (!pool) {
+ 		for (i = 0; i < serv->sv_nrpools; i++) {
+ 			pool = &serv->sv_pools[--(*state) % serv->sv_nrpools];
+-			if (atomic_inc_not_zero(&pool->sp_nrthreads))
+-				goto found_pool;
++			if (pool->sp_nrthreads)
++				break;
+ 		}
+-		return NULL;
+ 	}
+ 
+-found_pool:
+-	set_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
+-	set_bit(SP_NEED_VICTIM, &pool->sp_flags);
+-	if (!atomic_dec_and_test(&pool->sp_nrthreads))
++	if (pool && pool->sp_nrthreads) {
++		set_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
++		set_bit(SP_NEED_VICTIM, &pool->sp_flags);
+ 		return pool;
+-	/* Nothing left in this pool any more */
+-	clear_bit(SP_NEED_VICTIM, &pool->sp_flags);
+-	clear_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
+-	goto retry;
++	}
++	return NULL;
+ }
+ 
+ static int
+@@ -840,7 +831,7 @@ svc_set_num_threads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
+ 	if (!pool)
+ 		nrservs -= serv->sv_nrthreads;
+ 	else
+-		nrservs -= atomic_read(&pool->sp_nrthreads);
++		nrservs -= pool->sp_nrthreads;
+ 
+ 	if (nrservs > 0)
+ 		return svc_start_kthreads(serv, pool, nrservs);
+@@ -928,7 +919,7 @@ svc_exit_thread(struct svc_rqst *rqstp)
+ 
+ 	list_del_rcu(&rqstp->rq_all);
+ 
+-	atomic_dec(&pool->sp_nrthreads);
++	pool->sp_nrthreads -= 1;
+ 
+ 	spin_lock_bh(&serv->sv_lock);
+ 	serv->sv_nrthreads -= 1;
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index 5a526ebafeb4b7..3c9e25f6a1d222 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -163,8 +163,12 @@ static int bearer_name_validate(const char *name,
+ 
+ 	/* return bearer name components, if necessary */
+ 	if (name_parts) {
+-		strcpy(name_parts->media_name, media_name);
+-		strcpy(name_parts->if_name, if_name);
++		if (strscpy(name_parts->media_name, media_name,
++			    TIPC_MAX_MEDIA_NAME) < 0)
++			return 0;
++		if (strscpy(name_parts->if_name, if_name,
++			    TIPC_MAX_IF_NAME) < 0)
++			return 0;
+ 	}
+ 	return 1;
+ }
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index e3bf14e489c5de..9675ceaa5bf608 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -10024,7 +10024,20 @@ static int nl80211_start_radar_detection(struct sk_buff *skb,
+ 
+ 	err = rdev_start_radar_detection(rdev, dev, &chandef, cac_time_ms);
+ 	if (!err) {
+-		wdev->links[0].ap.chandef = chandef;
++		switch (wdev->iftype) {
++		case NL80211_IFTYPE_AP:
++		case NL80211_IFTYPE_P2P_GO:
++			wdev->links[0].ap.chandef = chandef;
++			break;
++		case NL80211_IFTYPE_ADHOC:
++			wdev->u.ibss.chandef = chandef;
++			break;
++		case NL80211_IFTYPE_MESH_POINT:
++			wdev->u.mesh.chandef = chandef;
++			break;
++		default:
++			break;
++		}
+ 		wdev->cac_started = true;
+ 		wdev->cac_start_time = jiffies;
+ 		wdev->cac_time_ms = cac_time_ms;
+diff --git a/rust/kernel/sync/locked_by.rs b/rust/kernel/sync/locked_by.rs
+index babc731bd5f626..ce2ee8d8786587 100644
+--- a/rust/kernel/sync/locked_by.rs
++++ b/rust/kernel/sync/locked_by.rs
+@@ -83,8 +83,12 @@ pub struct LockedBy<T: ?Sized, U: ?Sized> {
+ // SAFETY: `LockedBy` can be transferred across thread boundaries iff the data it protects can.
+ unsafe impl<T: ?Sized + Send, U: ?Sized> Send for LockedBy<T, U> {}
+ 
+-// SAFETY: `LockedBy` serialises the interior mutability it provides, so it is `Sync` as long as the
+-// data it protects is `Send`.
++// SAFETY: If `T` is not `Sync`, then parallel shared access to this `LockedBy` allows you to use
++// `access_mut` to hand out `&mut T` on one thread at the time. The requirement that `T: Send` is
++// sufficient to allow that.
++//
++// If `T` is `Sync`, then the `access` method also becomes available, which allows you to obtain
++// several `&T` from several threads at once. However, this is okay as `T` is `Sync`.
+ unsafe impl<T: ?Sized + Send, U: ?Sized> Sync for LockedBy<T, U> {}
+ 
+ impl<T, U> LockedBy<T, U> {
+@@ -118,7 +122,10 @@ impl<T: ?Sized, U> LockedBy<T, U> {
+     ///
+     /// Panics if `owner` is different from the data protected by the lock used in
+     /// [`new`](LockedBy::new).
+-    pub fn access<'a>(&'a self, owner: &'a U) -> &'a T {
++    pub fn access<'a>(&'a self, owner: &'a U) -> &'a T
++    where
++        T: Sync,
++    {
+         build_assert!(
+             size_of::<U>() > 0,
+             "`U` cannot be a ZST because `owner` wouldn't be unique"
+@@ -127,7 +134,10 @@ pub fn access<'a>(&'a self, owner: &'a U) -> &'a T {
+             panic!("mismatched owners");
+         }
+ 
+-        // SAFETY: `owner` is evidence that the owner is locked.
++        // SAFETY: `owner` is evidence that there are only shared references to the owner for the
++        // duration of 'a, so it's not possible to use `Self::access_mut` to obtain a mutable
++        // reference to the inner value that aliases with this shared reference. The type is `Sync`
++        // so there are no other requirements.
+         unsafe { &*self.data.get() }
+     }
+ 
+diff --git a/scripts/gdb/linux/proc.py b/scripts/gdb/linux/proc.py
+index 43c687e7a69de6..65dd1bd129641d 100644
+--- a/scripts/gdb/linux/proc.py
++++ b/scripts/gdb/linux/proc.py
+@@ -18,6 +18,7 @@ from linux import utils
+ from linux import tasks
+ from linux import lists
+ from linux import vfs
++from linux import rbtree
+ from struct import *
+ 
+ 
+@@ -172,8 +173,7 @@ values of that process namespace"""
+         gdb.write("{:^18} {:^15} {:>9} {} {} options\n".format(
+                   "mount", "super_block", "devname", "pathname", "fstype"))
+ 
+-        for mnt in lists.list_for_each_entry(namespace['list'],
+-                                             mount_ptr_type, "mnt_list"):
++        for mnt in rbtree.rb_inorder_for_each_entry(namespace['mounts'], mount_ptr_type, "mnt_node"):
+             devname = mnt['mnt_devname'].string()
+             devname = devname if devname else "none"
+ 
+diff --git a/scripts/gdb/linux/rbtree.py b/scripts/gdb/linux/rbtree.py
+index fe462855eefda3..fcbcc5f4153cdf 100644
+--- a/scripts/gdb/linux/rbtree.py
++++ b/scripts/gdb/linux/rbtree.py
+@@ -9,6 +9,18 @@ from linux import utils
+ rb_root_type = utils.CachedType("struct rb_root")
+ rb_node_type = utils.CachedType("struct rb_node")
+ 
++def rb_inorder_for_each(root):
++    def inorder(node):
++        if node:
++            yield from inorder(node['rb_left'])
++            yield node
++            yield from inorder(node['rb_right'])
++
++    yield from inorder(root['rb_node'])
++
++def rb_inorder_for_each_entry(root, gdbtype, member):
++    for node in rb_inorder_for_each(root):
++        yield utils.container_of(node, gdbtype, member)
+ 
+ def rb_first(root):
+     if root.type == rb_root_type.get_type():
+diff --git a/scripts/gdb/linux/timerlist.py b/scripts/gdb/linux/timerlist.py
+index 64bc87191003de..98445671fe8389 100644
+--- a/scripts/gdb/linux/timerlist.py
++++ b/scripts/gdb/linux/timerlist.py
+@@ -87,21 +87,22 @@ def print_cpu(hrtimer_bases, cpu, max_clock_bases):
+             text += "\n"
+ 
+         if constants.LX_CONFIG_TICK_ONESHOT:
+-            fmts = [("  .{}      : {}", 'nohz_mode'),
+-                    ("  .{}      : {} nsecs", 'last_tick'),
+-                    ("  .{}   : {}", 'tick_stopped'),
+-                    ("  .{}   : {}", 'idle_jiffies'),
+-                    ("  .{}     : {}", 'idle_calls'),
+-                    ("  .{}    : {}", 'idle_sleeps'),
+-                    ("  .{} : {} nsecs", 'idle_entrytime'),
+-                    ("  .{}  : {} nsecs", 'idle_waketime'),
+-                    ("  .{}  : {} nsecs", 'idle_exittime'),
+-                    ("  .{} : {} nsecs", 'idle_sleeptime'),
+-                    ("  .{}: {} nsecs", 'iowait_sleeptime'),
+-                    ("  .{}   : {}", 'last_jiffies'),
+-                    ("  .{}     : {}", 'next_timer'),
+-                    ("  .{}   : {} nsecs", 'idle_expires')]
+-            text += "\n".join([s.format(f, ts[f]) for s, f in fmts])
++            TS_FLAG_STOPPED = 1 << 1
++            TS_FLAG_NOHZ = 1 << 4
++            text += f"  .{'nohz':15s}: {int(bool(ts['flags'] & TS_FLAG_NOHZ))}\n"
++            text += f"  .{'last_tick':15s}: {ts['last_tick']}\n"
++            text += f"  .{'tick_stopped':15s}: {int(bool(ts['flags'] & TS_FLAG_STOPPED))}\n"
++            text += f"  .{'idle_jiffies':15s}: {ts['idle_jiffies']}\n"
++            text += f"  .{'idle_calls':15s}: {ts['idle_calls']}\n"
++            text += f"  .{'idle_sleeps':15s}: {ts['idle_sleeps']}\n"
++            text += f"  .{'idle_entrytime':15s}: {ts['idle_entrytime']} nsecs\n"
++            text += f"  .{'idle_waketime':15s}: {ts['idle_waketime']} nsecs\n"
++            text += f"  .{'idle_exittime':15s}: {ts['idle_exittime']} nsecs\n"
++            text += f"  .{'idle_sleeptime':15s}: {ts['idle_sleeptime']} nsecs\n"
++            text += f"  .{'iowait_sleeptime':15s}: {ts['iowait_sleeptime']} nsecs\n"
++            text += f"  .{'last_jiffies':15s}: {ts['last_jiffies']}\n"
++            text += f"  .{'next_timer':15s}: {ts['next_timer']}\n"
++            text += f"  .{'idle_expires':15s}: {ts['idle_expires']} nsecs\n"
+             text += "\njiffies: {}\n".format(jiffies)
+ 
+         text += "\n"
+diff --git a/scripts/kconfig/qconf.cc b/scripts/kconfig/qconf.cc
+index c6c42c0f4e5d58..b7fc5aeb78cc06 100644
+--- a/scripts/kconfig/qconf.cc
++++ b/scripts/kconfig/qconf.cc
+@@ -1174,7 +1174,7 @@ void ConfigInfoView::clicked(const QUrl &url)
+ {
+ 	QByteArray str = url.toEncoded();
+ 	const std::size_t count = str.size();
+-	char *data = new char[count + 1];
++	char *data = new char[count + 2];  // '$' + '\0'
+ 	struct symbol **result;
+ 	struct menu *m = NULL;
+ 
+diff --git a/security/Kconfig b/security/Kconfig
+index 412e76f1575d0d..a93c1a9b7c283b 100644
+--- a/security/Kconfig
++++ b/security/Kconfig
+@@ -19,6 +19,38 @@ config SECURITY_DMESG_RESTRICT
+ 
+ 	  If you are unsure how to answer this question, answer N.
+ 
++choice
++	prompt "Allow /proc/pid/mem access override"
++	default PROC_MEM_ALWAYS_FORCE
++	help
++	  Traditionally /proc/pid/mem allows users to override memory
++	  permissions for users like ptrace, assuming they have ptrace
++	  capability.
++
++	  This allows people to limit that - either never override, or
++	  require actual active ptrace attachment.
++
++	  Defaults to the traditional behavior (for now)
++
++config PROC_MEM_ALWAYS_FORCE
++	bool "Traditional /proc/pid/mem behavior"
++	help
++	  This allows /proc/pid/mem accesses to override memory mapping
++	  permissions if you have ptrace access rights.
++
++config PROC_MEM_FORCE_PTRACE
++	bool "Require active ptrace() use for access override"
++	help
++	  This allows /proc/pid/mem accesses to override memory mapping
++	  permissions for active ptracers like gdb.
++
++config PROC_MEM_NO_FORCE
++	bool "Never"
++	help
++	  Never override memory mapping permissions
++
++endchoice
++
+ config SECURITY
+ 	bool "Enable different security models"
+ 	depends on SYSFS
+diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c
+index 90b53500a236bd..aed9e3ef2c9ecb 100644
+--- a/security/tomoyo/domain.c
++++ b/security/tomoyo/domain.c
+@@ -723,10 +723,13 @@ int tomoyo_find_next_domain(struct linux_binprm *bprm)
+ 	ee->r.obj = &ee->obj;
+ 	ee->obj.path1 = bprm->file->f_path;
+ 	/* Get symlink's pathname of program. */
+-	retval = -ENOENT;
+ 	exename.name = tomoyo_realpath_nofollow(original_name);
+-	if (!exename.name)
+-		goto out;
++	if (!exename.name) {
++		/* Fallback to realpath if symlink's pathname does not exist. */
++		exename.name = tomoyo_realpath_from_path(&bprm->file->f_path);
++		if (!exename.name)
++			goto out;
++	}
+ 	tomoyo_fill_path_info(&exename);
+ retry:
+ 	/* Check 'aggregator' directive. */
+diff --git a/sound/core/control.c b/sound/core/control.c
+index 1dd2337e293000..c18a9e6539b37a 100644
+--- a/sound/core/control.c
++++ b/sound/core/control.c
+@@ -1164,10 +1164,7 @@ static int __snd_ctl_elem_info(struct snd_card *card,
+ #ifdef CONFIG_SND_DEBUG
+ 	info->access = 0;
+ #endif
+-	result = snd_power_ref_and_wait(card);
+-	if (!result)
+-		result = kctl->info(kctl, info);
+-	snd_power_unref(card);
++	result = kctl->info(kctl, info);
+ 	if (result >= 0) {
+ 		snd_BUG_ON(info->access);
+ 		index_offset = snd_ctl_get_ioff(kctl, &info->id);
+@@ -1205,12 +1202,17 @@ static int snd_ctl_elem_info(struct snd_ctl_file *ctl,
+ static int snd_ctl_elem_info_user(struct snd_ctl_file *ctl,
+ 				  struct snd_ctl_elem_info __user *_info)
+ {
++	struct snd_card *card = ctl->card;
+ 	struct snd_ctl_elem_info info;
+ 	int result;
+ 
+ 	if (copy_from_user(&info, _info, sizeof(info)))
+ 		return -EFAULT;
++	result = snd_power_ref_and_wait(card);
++	if (result)
++		return result;
+ 	result = snd_ctl_elem_info(ctl, &info);
++	snd_power_unref(card);
+ 	if (result < 0)
+ 		return result;
+ 	/* drop internal access flags */
+@@ -1254,10 +1256,7 @@ static int snd_ctl_elem_read(struct snd_card *card,
+ 
+ 	if (!snd_ctl_skip_validation(&info))
+ 		fill_remaining_elem_value(control, &info, pattern);
+-	ret = snd_power_ref_and_wait(card);
+-	if (!ret)
+-		ret = kctl->get(kctl, control);
+-	snd_power_unref(card);
++	ret = kctl->get(kctl, control);
+ 	if (ret < 0)
+ 		return ret;
+ 	if (!snd_ctl_skip_validation(&info) &&
+@@ -1282,7 +1281,11 @@ static int snd_ctl_elem_read_user(struct snd_card *card,
+ 	if (IS_ERR(control))
+ 		return PTR_ERR(no_free_ptr(control));
+ 
++	result = snd_power_ref_and_wait(card);
++	if (result)
++		return result;
+ 	result = snd_ctl_elem_read(card, control);
++	snd_power_unref(card);
+ 	if (result < 0)
+ 		return result;
+ 
+@@ -1297,7 +1300,7 @@ static int snd_ctl_elem_write(struct snd_card *card, struct snd_ctl_file *file,
+ 	struct snd_kcontrol *kctl;
+ 	struct snd_kcontrol_volatile *vd;
+ 	unsigned int index_offset;
+-	int result;
++	int result = 0;
+ 
+ 	down_write(&card->controls_rwsem);
+ 	kctl = snd_ctl_find_id_locked(card, &control->id);
+@@ -1315,9 +1318,8 @@ static int snd_ctl_elem_write(struct snd_card *card, struct snd_ctl_file *file,
+ 	}
+ 
+ 	snd_ctl_build_ioff(&control->id, kctl, index_offset);
+-	result = snd_power_ref_and_wait(card);
+ 	/* validate input values */
+-	if (IS_ENABLED(CONFIG_SND_CTL_INPUT_VALIDATION) && !result) {
++	if (IS_ENABLED(CONFIG_SND_CTL_INPUT_VALIDATION)) {
+ 		struct snd_ctl_elem_info info;
+ 
+ 		memset(&info, 0, sizeof(info));
+@@ -1329,7 +1331,6 @@ static int snd_ctl_elem_write(struct snd_card *card, struct snd_ctl_file *file,
+ 	}
+ 	if (!result)
+ 		result = kctl->put(kctl, control);
+-	snd_power_unref(card);
+ 	if (result < 0) {
+ 		up_write(&card->controls_rwsem);
+ 		return result;
+@@ -1358,7 +1359,11 @@ static int snd_ctl_elem_write_user(struct snd_ctl_file *file,
+ 		return PTR_ERR(no_free_ptr(control));
+ 
+ 	card = file->card;
++	result = snd_power_ref_and_wait(card);
++	if (result < 0)
++		return result;
+ 	result = snd_ctl_elem_write(card, file, control);
++	snd_power_unref(card);
+ 	if (result < 0)
+ 		return result;
+ 
+@@ -1827,7 +1832,7 @@ static int call_tlv_handler(struct snd_ctl_file *file, int op_flag,
+ 		{SNDRV_CTL_TLV_OP_CMD,   SNDRV_CTL_ELEM_ACCESS_TLV_COMMAND},
+ 	};
+ 	struct snd_kcontrol_volatile *vd = &kctl->vd[snd_ctl_get_ioff(kctl, id)];
+-	int i, ret;
++	int i;
+ 
+ 	/* Check support of the request for this element. */
+ 	for (i = 0; i < ARRAY_SIZE(pairs); ++i) {
+@@ -1845,11 +1850,7 @@ static int call_tlv_handler(struct snd_ctl_file *file, int op_flag,
+ 	    vd->owner != NULL && vd->owner != file)
+ 		return -EPERM;
+ 
+-	ret = snd_power_ref_and_wait(file->card);
+-	if (!ret)
+-		ret = kctl->tlv.c(kctl, op_flag, size, buf);
+-	snd_power_unref(file->card);
+-	return ret;
++	return kctl->tlv.c(kctl, op_flag, size, buf);
+ }
+ 
+ static int read_tlv_buf(struct snd_kcontrol *kctl, struct snd_ctl_elem_id *id,
+@@ -1962,16 +1963,28 @@ static long snd_ctl_ioctl(struct file *file, unsigned int cmd, unsigned long arg
+ 	case SNDRV_CTL_IOCTL_SUBSCRIBE_EVENTS:
+ 		return snd_ctl_subscribe_events(ctl, ip);
+ 	case SNDRV_CTL_IOCTL_TLV_READ:
+-		scoped_guard(rwsem_read, &ctl->card->controls_rwsem)
++		err = snd_power_ref_and_wait(card);
++		if (err < 0)
++			return err;
++		scoped_guard(rwsem_read, &card->controls_rwsem)
+ 			err = snd_ctl_tlv_ioctl(ctl, argp, SNDRV_CTL_TLV_OP_READ);
++		snd_power_unref(card);
+ 		return err;
+ 	case SNDRV_CTL_IOCTL_TLV_WRITE:
+-		scoped_guard(rwsem_write, &ctl->card->controls_rwsem)
++		err = snd_power_ref_and_wait(card);
++		if (err < 0)
++			return err;
++		scoped_guard(rwsem_write, &card->controls_rwsem)
+ 			err = snd_ctl_tlv_ioctl(ctl, argp, SNDRV_CTL_TLV_OP_WRITE);
++		snd_power_unref(card);
+ 		return err;
+ 	case SNDRV_CTL_IOCTL_TLV_COMMAND:
+-		scoped_guard(rwsem_write, &ctl->card->controls_rwsem)
++		err = snd_power_ref_and_wait(card);
++		if (err < 0)
++			return err;
++		scoped_guard(rwsem_write, &card->controls_rwsem)
+ 			err = snd_ctl_tlv_ioctl(ctl, argp, SNDRV_CTL_TLV_OP_CMD);
++		snd_power_unref(card);
+ 		return err;
+ 	case SNDRV_CTL_IOCTL_POWER:
+ 		return -ENOPROTOOPT;
+diff --git a/sound/core/control_compat.c b/sound/core/control_compat.c
+index 934bb945e702a2..ff0031cc7dfb8c 100644
+--- a/sound/core/control_compat.c
++++ b/sound/core/control_compat.c
+@@ -79,6 +79,7 @@ struct snd_ctl_elem_info32 {
+ static int snd_ctl_elem_info_compat(struct snd_ctl_file *ctl,
+ 				    struct snd_ctl_elem_info32 __user *data32)
+ {
++	struct snd_card *card = ctl->card;
+ 	struct snd_ctl_elem_info *data __free(kfree) = NULL;
+ 	int err;
+ 
+@@ -95,7 +96,11 @@ static int snd_ctl_elem_info_compat(struct snd_ctl_file *ctl,
+ 	if (get_user(data->value.enumerated.item, &data32->value.enumerated.item))
+ 		return -EFAULT;
+ 
++	err = snd_power_ref_and_wait(card);
++	if (err < 0)
++		return err;
+ 	err = snd_ctl_elem_info(ctl, data);
++	snd_power_unref(card);
+ 	if (err < 0)
+ 		return err;
+ 	/* restore info to 32bit */
+@@ -175,10 +180,7 @@ static int get_ctl_type(struct snd_card *card, struct snd_ctl_elem_id *id,
+ 	if (info == NULL)
+ 		return -ENOMEM;
+ 	info->id = *id;
+-	err = snd_power_ref_and_wait(card);
+-	if (!err)
+-		err = kctl->info(kctl, info);
+-	snd_power_unref(card);
++	err = kctl->info(kctl, info);
+ 	if (err >= 0) {
+ 		err = info->type;
+ 		*countp = info->count;
+@@ -275,8 +277,8 @@ static int copy_ctl_value_to_user(void __user *userdata,
+ 	return 0;
+ }
+ 
+-static int ctl_elem_read_user(struct snd_card *card,
+-			      void __user *userdata, void __user *valuep)
++static int __ctl_elem_read_user(struct snd_card *card,
++				void __user *userdata, void __user *valuep)
+ {
+ 	struct snd_ctl_elem_value *data __free(kfree) = NULL;
+ 	int err, type, count;
+@@ -296,8 +298,21 @@ static int ctl_elem_read_user(struct snd_card *card,
+ 	return copy_ctl_value_to_user(userdata, valuep, data, type, count);
+ }
+ 
+-static int ctl_elem_write_user(struct snd_ctl_file *file,
+-			       void __user *userdata, void __user *valuep)
++static int ctl_elem_read_user(struct snd_card *card,
++			      void __user *userdata, void __user *valuep)
++{
++	int err;
++
++	err = snd_power_ref_and_wait(card);
++	if (err < 0)
++		return err;
++	err = __ctl_elem_read_user(card, userdata, valuep);
++	snd_power_unref(card);
++	return err;
++}
++
++static int __ctl_elem_write_user(struct snd_ctl_file *file,
++				 void __user *userdata, void __user *valuep)
+ {
+ 	struct snd_ctl_elem_value *data __free(kfree) = NULL;
+ 	struct snd_card *card = file->card;
+@@ -318,6 +333,20 @@ static int ctl_elem_write_user(struct snd_ctl_file *file,
+ 	return copy_ctl_value_to_user(userdata, valuep, data, type, count);
+ }
+ 
++static int ctl_elem_write_user(struct snd_ctl_file *file,
++			       void __user *userdata, void __user *valuep)
++{
++	struct snd_card *card = file->card;
++	int err;
++
++	err = snd_power_ref_and_wait(card);
++	if (err < 0)
++		return err;
++	err = __ctl_elem_write_user(file, userdata, valuep);
++	snd_power_unref(card);
++	return err;
++}
++
+ static int snd_ctl_elem_read_user_compat(struct snd_card *card,
+ 					 struct snd_ctl_elem_value32 __user *data32)
+ {
+diff --git a/sound/core/init.c b/sound/core/init.c
+index b9b708cf980d6d..27e7569ace99b4 100644
+--- a/sound/core/init.c
++++ b/sound/core/init.c
+@@ -654,13 +654,19 @@ void snd_card_free(struct snd_card *card)
+ }
+ EXPORT_SYMBOL(snd_card_free);
+ 
++/* check, if the character is in the valid ASCII range */
++static inline bool safe_ascii_char(char c)
++{
++	return isascii(c) && isalnum(c);
++}
++
+ /* retrieve the last word of shortname or longname */
+ static const char *retrieve_id_from_card_name(const char *name)
+ {
+ 	const char *spos = name;
+ 
+ 	while (*name) {
+-		if (isspace(*name) && isalnum(name[1]))
++		if (isspace(*name) && safe_ascii_char(name[1]))
+ 			spos = name + 1;
+ 		name++;
+ 	}
+@@ -687,12 +693,12 @@ static void copy_valid_id_string(struct snd_card *card, const char *src,
+ {
+ 	char *id = card->id;
+ 
+-	while (*nid && !isalnum(*nid))
++	while (*nid && !safe_ascii_char(*nid))
+ 		nid++;
+ 	if (isdigit(*nid))
+ 		*id++ = isalpha(*src) ? *src : 'D';
+ 	while (*nid && (size_t)(id - card->id) < sizeof(card->id) - 1) {
+-		if (isalnum(*nid))
++		if (safe_ascii_char(*nid))
+ 			*id++ = *nid;
+ 		nid++;
+ 	}
+@@ -787,7 +793,7 @@ static ssize_t id_store(struct device *dev, struct device_attribute *attr,
+ 
+ 	for (idx = 0; idx < copy; idx++) {
+ 		c = buf[idx];
+-		if (!isalnum(c) && c != '_' && c != '-')
++		if (!safe_ascii_char(c) && c != '_' && c != '-')
+ 			return -EINVAL;
+ 	}
+ 	memcpy(buf1, buf, copy);
+diff --git a/sound/core/oss/mixer_oss.c b/sound/core/oss/mixer_oss.c
+index 6a0508093ea688..81af725ea40e5d 100644
+--- a/sound/core/oss/mixer_oss.c
++++ b/sound/core/oss/mixer_oss.c
+@@ -901,8 +901,8 @@ static void snd_mixer_oss_slot_free(struct snd_mixer_oss_slot *chn)
+ 	struct slot *p = chn->private_data;
+ 	if (p) {
+ 		if (p->allocated && p->assigned) {
+-			kfree_const(p->assigned->name);
+-			kfree_const(p->assigned);
++			kfree(p->assigned->name);
++			kfree(p->assigned);
+ 		}
+ 		kfree(p);
+ 	}
+diff --git a/sound/isa/gus/gus_pcm.c b/sound/isa/gus/gus_pcm.c
+index 850544725da796..d55c3dc229c0e8 100644
+--- a/sound/isa/gus/gus_pcm.c
++++ b/sound/isa/gus/gus_pcm.c
+@@ -378,7 +378,7 @@ static int snd_gf1_pcm_playback_copy(struct snd_pcm_substream *substream,
+ 
+ 	bpos = get_bpos(pcmp, voice, pos, len);
+ 	if (bpos < 0)
+-		return pos;
++		return bpos;
+ 	if (copy_from_iter(runtime->dma_area + bpos, len, src) != len)
+ 		return -EFAULT;
+ 	return playback_copy_ack(substream, bpos, len);
+@@ -395,7 +395,7 @@ static int snd_gf1_pcm_playback_silence(struct snd_pcm_substream *substream,
+ 	
+ 	bpos = get_bpos(pcmp, voice, pos, len);
+ 	if (bpos < 0)
+-		return pos;
++		return bpos;
+ 	snd_pcm_format_set_silence(runtime->format, runtime->dma_area + bpos,
+ 				   bytes_to_samples(runtime, count));
+ 	return playback_copy_ack(substream, bpos, len);
+diff --git a/sound/pci/asihpi/hpimsgx.c b/sound/pci/asihpi/hpimsgx.c
+index d0caef2994818e..b68e6bfbbfbab5 100644
+--- a/sound/pci/asihpi/hpimsgx.c
++++ b/sound/pci/asihpi/hpimsgx.c
+@@ -708,7 +708,7 @@ static u16 HPIMSGX__init(struct hpi_message *phm,
+ 		phr->error = HPI_ERROR_PROCESSING_MESSAGE;
+ 		return phr->error;
+ 	}
+-	if (hr.error == 0) {
++	if (hr.error == 0 && hr.u.s.adapter_index < HPI_MAX_ADAPTERS) {
+ 		/* the adapter was created successfully
+ 		   save the mapping for future use */
+ 		hpi_entry_points[hr.u.s.adapter_index] = entry_point_func;
+diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
+index 68c883f202ca5b..c2d0109866e62e 100644
+--- a/sound/pci/hda/hda_controller.h
++++ b/sound/pci/hda/hda_controller.h
+@@ -28,7 +28,7 @@
+ #else
+ #define AZX_DCAPS_I915_COMPONENT 0		/* NOP */
+ #endif
+-#define AZX_DCAPS_AMD_ALLOC_FIX	(1 << 14)	/* AMD allocation workaround */
++/* 14 unused */
+ #define AZX_DCAPS_CTX_WORKAROUND (1 << 15)	/* X-Fi workaround */
+ #define AZX_DCAPS_POSFIX_LPIB	(1 << 16)	/* Use LPIB as default */
+ #define AZX_DCAPS_AMD_WORKAROUND (1 << 17)	/* AMD-specific workaround */
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index 9cff87dfbecbb1..b34d84fedcc8ab 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -1383,7 +1383,7 @@ static int try_assign_dacs(struct hda_codec *codec, int num_outs,
+ 		struct nid_path *path;
+ 		hda_nid_t pin = pins[i];
+ 
+-		if (!spec->obey_preferred_dacs) {
++		if (!spec->preferred_dacs) {
+ 			path = snd_hda_get_path_from_idx(codec, path_idx[i]);
+ 			if (path) {
+ 				badness += assign_out_path_ctls(codec, path);
+@@ -1395,7 +1395,7 @@ static int try_assign_dacs(struct hda_codec *codec, int num_outs,
+ 		if (dacs[i]) {
+ 			if (is_dac_already_used(codec, dacs[i]))
+ 				badness += bad->shared_primary;
+-		} else if (spec->obey_preferred_dacs) {
++		} else if (spec->preferred_dacs) {
+ 			badness += BAD_NO_PRIMARY_DAC;
+ 		}
+ 
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 87203b819dd471..3500108f6ba375 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -40,7 +40,6 @@
+ 
+ #ifdef CONFIG_X86
+ /* for snoop control */
+-#include <linux/dma-map-ops.h>
+ #include <asm/set_memory.h>
+ #include <asm/cpufeature.h>
+ #endif
+@@ -307,7 +306,7 @@ enum {
+ 
+ /* quirks for ATI HDMI with snoop off */
+ #define AZX_DCAPS_PRESET_ATI_HDMI_NS \
+-	(AZX_DCAPS_PRESET_ATI_HDMI | AZX_DCAPS_AMD_ALLOC_FIX)
++	(AZX_DCAPS_PRESET_ATI_HDMI | AZX_DCAPS_SNOOP_OFF)
+ 
+ /* quirks for AMD SB */
+ #define AZX_DCAPS_PRESET_AMD_SB \
+@@ -1703,13 +1702,6 @@ static void azx_check_snoop_available(struct azx *chip)
+ 	if (chip->driver_caps & AZX_DCAPS_SNOOP_OFF)
+ 		snoop = false;
+ 
+-#ifdef CONFIG_X86
+-	/* check the presence of DMA ops (i.e. IOMMU), disable snoop conditionally */
+-	if ((chip->driver_caps & AZX_DCAPS_AMD_ALLOC_FIX) &&
+-	    !get_dma_ops(chip->card->dev))
+-		snoop = false;
+-#endif
+-
+ 	chip->snoop = snoop;
+ 	if (!snoop) {
+ 		dev_info(chip->card->dev, "Force to non-snoop mode\n");
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index e851785ff05814..4a2c8274c3df7e 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -816,6 +816,23 @@ static const struct hda_pintbl cxt_pincfg_sws_js201d[] = {
+ 	{}
+ };
+ 
++/* pincfg quirk for Tuxedo Sirius;
++ * unfortunately the (PCI) SSID conflicts with System76 Pangolin pang14,
++ * which has incompatible pin setup, so we check the codec SSID (luckily
++ * different one!) and conditionally apply the quirk here
++ */
++static void cxt_fixup_sirius_top_speaker(struct hda_codec *codec,
++					 const struct hda_fixup *fix,
++					 int action)
++{
++	/* ignore for incorrectly picked-up pang14 */
++	if (codec->core.subsystem_id == 0x278212b3)
++		return;
++	/* set up the top speaker pin */
++	if (action == HDA_FIXUP_ACT_PRE_PROBE)
++		snd_hda_codec_set_pincfg(codec, 0x1d, 0x82170111);
++}
++
+ static const struct hda_fixup cxt_fixups[] = {
+ 	[CXT_PINCFG_LENOVO_X200] = {
+ 		.type = HDA_FIXUP_PINS,
+@@ -976,11 +993,8 @@ static const struct hda_fixup cxt_fixups[] = {
+ 		.v.pins = cxt_pincfg_sws_js201d,
+ 	},
+ 	[CXT_PINCFG_TOP_SPEAKER] = {
+-		.type = HDA_FIXUP_PINS,
+-		.v.pins = (const struct hda_pintbl[]) {
+-			{ 0x1d, 0x82170111 },
+-			{ }
+-		},
++		.type = HDA_FIXUP_FUNC,
++		.v.func = cxt_fixup_sirius_top_speaker,
+ 	},
+ };
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2b674691ce4b64..9268f43e7779a7 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -583,6 +583,7 @@ static void alc_shutup_pins(struct hda_codec *codec)
+ 	switch (codec->core.vendor_id) {
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
++	case 0x10ec0257:
+ 	case 0x19e58326:
+ 	case 0x10ec0283:
+ 	case 0x10ec0285:
+@@ -10161,6 +10162,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x88dd, "HP Pavilion 15z-ec200", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8902, "HP OMEN 16", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x890e, "HP 255 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x8919, "HP Pavilion Aero Laptop 13-be0xxx", ALC287_FIXUP_HP_GPIO_LED),
+@@ -10302,6 +10304,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8ca2, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca4, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca7, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8caf, "HP Elite mt645 G8 Mobile Thin Client", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8cbd, "HP Pavilion Aero Laptop 13-bg0xxx", ALC245_FIXUP_HP_X360_MUTE_LEDS),
+ 	SND_PCI_QUIRK(0x103c, 0x8cdd, "HP Spectre", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8cde, "HP Spectre", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10650,6 +10653,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x38cd, "Y790 VECO DUAL", ALC287_FIXUP_TAS2781_I2C),
+ 	SND_PCI_QUIRK(0x17aa, 0x38d2, "Lenovo Yoga 9 14IMH9", ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x38d7, "Lenovo Yoga 9 14IMH9", ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x38df, "Y990 YG DUAL", ALC287_FIXUP_TAS2781_I2C),
+ 	SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+@@ -10684,6 +10688,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1854, 0x0441, "LG CQ6 AIO", ALC256_FIXUP_HEADPHONE_AMP_VOL),
+ 	SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
+ 	SND_PCI_QUIRK(0x19e5, 0x320f, "Huawei WRT-WX9 ", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x19e5, 0x3212, "Huawei KLV-WX9 ", ALC256_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1b35, 0x1235, "CZC B20", ALC269_FIXUP_CZC_B20),
+ 	SND_PCI_QUIRK(0x1b35, 0x1236, "CZC TMI", ALC269_FIXUP_CZC_TMI),
+ 	SND_PCI_QUIRK(0x1b35, 0x1237, "CZC L101", ALC269_FIXUP_CZC_L101),
+diff --git a/sound/pci/rme9652/hdsp.c b/sound/pci/rme9652/hdsp.c
+index e7d1b43471a291..713ca262a0e979 100644
+--- a/sound/pci/rme9652/hdsp.c
++++ b/sound/pci/rme9652/hdsp.c
+@@ -1298,8 +1298,10 @@ static int snd_hdsp_midi_output_possible (struct hdsp *hdsp, int id)
+ 
+ static void snd_hdsp_flush_midi_input (struct hdsp *hdsp, int id)
+ {
+-	while (snd_hdsp_midi_input_available (hdsp, id))
+-		snd_hdsp_midi_read_byte (hdsp, id);
++	int count = 256;
++
++	while (snd_hdsp_midi_input_available(hdsp, id) && --count)
++		snd_hdsp_midi_read_byte(hdsp, id);
+ }
+ 
+ static int snd_hdsp_midi_output_write (struct hdsp_midi *hmidi)
+diff --git a/sound/pci/rme9652/hdspm.c b/sound/pci/rme9652/hdspm.c
+index 267c7848974aee..74215f57f4fc9d 100644
+--- a/sound/pci/rme9652/hdspm.c
++++ b/sound/pci/rme9652/hdspm.c
+@@ -1838,8 +1838,10 @@ static inline int snd_hdspm_midi_output_possible (struct hdspm *hdspm, int id)
+ 
+ static void snd_hdspm_flush_midi_input(struct hdspm *hdspm, int id)
+ {
+-	while (snd_hdspm_midi_input_available (hdspm, id))
+-		snd_hdspm_midi_read_byte (hdspm, id);
++	int count = 256;
++
++	while (snd_hdspm_midi_input_available(hdspm, id) && --count)
++		snd_hdspm_midi_read_byte(hdspm, id);
+ }
+ 
+ static int snd_hdspm_midi_output_write (struct hdspm_midi *hmidi)
+diff --git a/sound/soc/atmel/mchp-pdmc.c b/sound/soc/atmel/mchp-pdmc.c
+index dcc4e14b3dde27..206bbb5aaab5d9 100644
+--- a/sound/soc/atmel/mchp-pdmc.c
++++ b/sound/soc/atmel/mchp-pdmc.c
+@@ -285,6 +285,9 @@ static int mchp_pdmc_chmap_ctl_put(struct snd_kcontrol *kcontrol,
+ 	if (!substream)
+ 		return -ENODEV;
+ 
++	if (!substream->runtime)
++		return 0; /* just for avoiding error from alsactl restore */
++
+ 	map = mchp_pdmc_chmap_get(substream, info);
+ 	if (!map)
+ 		return -EINVAL;
+diff --git a/sound/soc/codecs/wsa883x.c b/sound/soc/codecs/wsa883x.c
+index 2169d939898419..1831d4487ba9d1 100644
+--- a/sound/soc/codecs/wsa883x.c
++++ b/sound/soc/codecs/wsa883x.c
+@@ -998,15 +998,19 @@ static const struct reg_sequence reg_init[] = {
+ 	{WSA883X_GMAMP_SUP1, 0xE2},
+ };
+ 
+-static void wsa883x_init(struct wsa883x_priv *wsa883x)
++static int wsa883x_init(struct wsa883x_priv *wsa883x)
+ {
+ 	struct regmap *regmap = wsa883x->regmap;
+-	int variant, version;
++	int variant, version, ret;
+ 
+-	regmap_read(regmap, WSA883X_OTP_REG_0, &variant);
++	ret = regmap_read(regmap, WSA883X_OTP_REG_0, &variant);
++	if (ret)
++		return ret;
+ 	wsa883x->variant = variant & WSA883X_ID_MASK;
+ 
+-	regmap_read(regmap, WSA883X_CHIP_ID0, &version);
++	ret = regmap_read(regmap, WSA883X_CHIP_ID0, &version);
++	if (ret)
++		return ret;
+ 	wsa883x->version = version;
+ 
+ 	switch (wsa883x->variant) {
+@@ -1041,6 +1045,8 @@ static void wsa883x_init(struct wsa883x_priv *wsa883x)
+ 				   WSA883X_DRE_OFFSET_MASK,
+ 				   wsa883x->comp_offset);
+ 	}
++
++	return 0;
+ }
+ 
+ static int wsa883x_update_status(struct sdw_slave *slave,
+@@ -1049,7 +1055,7 @@ static int wsa883x_update_status(struct sdw_slave *slave,
+ 	struct wsa883x_priv *wsa883x = dev_get_drvdata(&slave->dev);
+ 
+ 	if (status == SDW_SLAVE_ATTACHED && slave->dev_num > 0)
+-		wsa883x_init(wsa883x);
++		return wsa883x_init(wsa883x);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/fsl/imx-card.c b/sound/soc/fsl/imx-card.c
+index 0e18ccabe28c31..ce0d8cec375a85 100644
+--- a/sound/soc/fsl/imx-card.c
++++ b/sound/soc/fsl/imx-card.c
+@@ -713,6 +713,7 @@ static int imx_card_probe(struct platform_device *pdev)
+ 
+ 	data->plat_data = plat_data;
+ 	data->card.dev = &pdev->dev;
++	data->card.owner = THIS_MODULE;
+ 
+ 	dev_set_drvdata(&pdev->dev, &data->card);
+ 	snd_soc_card_set_drvdata(&data->card, data);
+diff --git a/sound/soc/intel/boards/bytcht_cx2072x.c b/sound/soc/intel/boards/bytcht_cx2072x.c
+index df3c2a7b64d23c..8c2b4ab764bbaf 100644
+--- a/sound/soc/intel/boards/bytcht_cx2072x.c
++++ b/sound/soc/intel/boards/bytcht_cx2072x.c
+@@ -255,7 +255,11 @@ static int snd_byt_cht_cx2072x_probe(struct platform_device *pdev)
+ 		snprintf(codec_name, sizeof(codec_name), "i2c-%s",
+ 			 acpi_dev_name(adev));
+ 		byt_cht_cx2072x_dais[dai_index].codecs->name = codec_name;
++	} else {
++		dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
++		return -ENOENT;
+ 	}
++
+ 	acpi_dev_put(adev);
+ 
+ 	/* override platform name, if required */
+diff --git a/sound/soc/intel/boards/bytcht_da7213.c b/sound/soc/intel/boards/bytcht_da7213.c
+index 08c598b7e1eeeb..9178bbe8d99506 100644
+--- a/sound/soc/intel/boards/bytcht_da7213.c
++++ b/sound/soc/intel/boards/bytcht_da7213.c
+@@ -258,7 +258,11 @@ static int bytcht_da7213_probe(struct platform_device *pdev)
+ 		snprintf(codec_name, sizeof(codec_name),
+ 			 "i2c-%s", acpi_dev_name(adev));
+ 		dailink[dai_index].codecs->name = codec_name;
++	} else {
++		dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
++		return -ENOENT;
+ 	}
++
+ 	acpi_dev_put(adev);
+ 
+ 	/* override platform name, if required */
+diff --git a/sound/soc/intel/boards/bytcht_es8316.c b/sound/soc/intel/boards/bytcht_es8316.c
+index 77b91ea4dc32ca..3539c9ff0fd2ca 100644
+--- a/sound/soc/intel/boards/bytcht_es8316.c
++++ b/sound/soc/intel/boards/bytcht_es8316.c
+@@ -562,7 +562,7 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
+ 		byt_cht_es8316_dais[dai_index].codecs->name = codec_name;
+ 	} else {
+ 		dev_err(dev, "Error cannot find '%s' dev\n", mach->id);
+-		return -ENXIO;
++		return -ENOENT;
+ 	}
+ 
+ 	codec_dev = acpi_get_first_physical_node(adev);
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index db4a33680d9488..4479825c08b5e3 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -1693,7 +1693,7 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
+ 		byt_rt5640_dais[dai_index].codecs->name = byt_rt5640_codec_name;
+ 	} else {
+ 		dev_err(dev, "Error cannot find '%s' dev\n", mach->id);
+-		return -ENXIO;
++		return -ENOENT;
+ 	}
+ 
+ 	codec_dev = acpi_get_first_physical_node(adev);
+diff --git a/sound/soc/intel/boards/bytcr_rt5651.c b/sound/soc/intel/boards/bytcr_rt5651.c
+index 8514b79f389bb5..1f54da98aacf47 100644
+--- a/sound/soc/intel/boards/bytcr_rt5651.c
++++ b/sound/soc/intel/boards/bytcr_rt5651.c
+@@ -926,7 +926,7 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
+ 		byt_rt5651_dais[dai_index].codecs->name = byt_rt5651_codec_name;
+ 	} else {
+ 		dev_err(dev, "Error cannot find '%s' dev\n", mach->id);
+-		return -ENXIO;
++		return -ENOENT;
+ 	}
+ 
+ 	codec_dev = acpi_get_first_physical_node(adev);
+diff --git a/sound/soc/intel/boards/cht_bsw_rt5645.c b/sound/soc/intel/boards/cht_bsw_rt5645.c
+index 1da9ceee4d593e..ac23a8b7cafca2 100644
+--- a/sound/soc/intel/boards/cht_bsw_rt5645.c
++++ b/sound/soc/intel/boards/cht_bsw_rt5645.c
+@@ -582,7 +582,11 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ 		snprintf(cht_rt5645_codec_name, sizeof(cht_rt5645_codec_name),
+ 			 "i2c-%s", acpi_dev_name(adev));
+ 		cht_dailink[dai_index].codecs->name = cht_rt5645_codec_name;
++	} else {
++		dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
++		return -ENOENT;
+ 	}
++
+ 	/* acpi_get_first_physical_node() returns a borrowed ref, no need to deref */
+ 	codec_dev = acpi_get_first_physical_node(adev);
+ 	acpi_dev_put(adev);
+diff --git a/sound/soc/intel/boards/cht_bsw_rt5672.c b/sound/soc/intel/boards/cht_bsw_rt5672.c
+index d68e5bc755dee5..c6c469d51243ef 100644
+--- a/sound/soc/intel/boards/cht_bsw_rt5672.c
++++ b/sound/soc/intel/boards/cht_bsw_rt5672.c
+@@ -479,7 +479,11 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ 		snprintf(drv->codec_name, sizeof(drv->codec_name),
+ 			 "i2c-%s", acpi_dev_name(adev));
+ 		cht_dailink[dai_index].codecs->name = drv->codec_name;
++	}  else {
++		dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
++		return -ENOENT;
+ 	}
++
+ 	acpi_dev_put(adev);
+ 
+ 	/* Use SSP0 on Bay Trail CR devices */
+diff --git a/sound/soc/intel/boards/sof_es8336.c b/sound/soc/intel/boards/sof_es8336.c
+index c1fcc156a5752c..809532238c44fd 100644
+--- a/sound/soc/intel/boards/sof_es8336.c
++++ b/sound/soc/intel/boards/sof_es8336.c
+@@ -681,7 +681,7 @@ static int sof_es8336_probe(struct platform_device *pdev)
+ 			dai_links[0].codecs->dai_name = "ES8326 HiFi";
+ 	} else {
+ 		dev_err(dev, "Error cannot find '%s' dev\n", mach->id);
+-		return -ENXIO;
++		return -ENOENT;
+ 	}
+ 
+ 	codec_dev = acpi_get_first_physical_node(adev);
+diff --git a/sound/soc/intel/boards/sof_wm8804.c b/sound/soc/intel/boards/sof_wm8804.c
+index 4cb0d463bf4049..9c5b3f8f09f364 100644
+--- a/sound/soc/intel/boards/sof_wm8804.c
++++ b/sound/soc/intel/boards/sof_wm8804.c
+@@ -270,7 +270,11 @@ static int sof_wm8804_probe(struct platform_device *pdev)
+ 		snprintf(codec_name, sizeof(codec_name),
+ 			 "%s%s", "i2c-", acpi_dev_name(adev));
+ 		dailink[dai_index].codecs->name = codec_name;
++	} else {
++		dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
++		return -ENOENT;
+ 	}
++
+ 	acpi_dev_put(adev);
+ 
+ 	snd_soc_card_set_drvdata(card, ctx);
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index bdb04fa37a71df..8f01a4b1fa0fa6 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -382,6 +382,12 @@ static const struct usb_audio_device_name usb_audio_names[] = {
+ 	/* Creative/Toshiba Multimedia Center SB-0500 */
+ 	DEVICE_NAME(0x041e, 0x3048, "Toshiba", "SB-0500"),
+ 
++	/* Logitech Audio Devices */
++	DEVICE_NAME(0x046d, 0x0867, "Logitech, Inc.", "Logi-MeetUp"),
++	DEVICE_NAME(0x046d, 0x0874, "Logitech, Inc.", "Logi-Tap-Audio"),
++	DEVICE_NAME(0x046d, 0x087c, "Logitech, Inc.", "Logi-Huddle"),
++	DEVICE_NAME(0x046d, 0x0898, "Logitech, Inc.", "Logi-RB-Audio"),
++	DEVICE_NAME(0x046d, 0x08d2, "Logitech, Inc.", "Logi-RBM-Audio"),
+ 	DEVICE_NAME(0x046d, 0x0990, "Logitech, Inc.", "QuickCam Pro 9000"),
+ 
+ 	DEVICE_NAME(0x05e1, 0x0408, "Syntek", "STK1160"),
+diff --git a/sound/usb/line6/podhd.c b/sound/usb/line6/podhd.c
+index ffd8c157a28139..70de08635f54cb 100644
+--- a/sound/usb/line6/podhd.c
++++ b/sound/usb/line6/podhd.c
+@@ -507,7 +507,7 @@ static const struct line6_properties podhd_properties_table[] = {
+ 	[LINE6_PODHD500X] = {
+ 		.id = "PODHD500X",
+ 		.name = "POD HD500X",
+-		.capabilities	= LINE6_CAP_CONTROL
++		.capabilities	= LINE6_CAP_CONTROL | LINE6_CAP_HWMON_CTL
+ 				| LINE6_CAP_PCM | LINE6_CAP_HWMON,
+ 		.altsetting = 1,
+ 		.ep_ctrl_r = 0x81,
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 8cc2d4937f3403..197fd07e69edd4 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1377,6 +1377,19 @@ static int get_min_max_with_quirks(struct usb_mixer_elem_info *cval,
+ 
+ #define get_min_max(cval, def)	get_min_max_with_quirks(cval, def, NULL)
+ 
++/* get the max value advertised via control API */
++static int get_max_exposed(struct usb_mixer_elem_info *cval)
++{
++	if (!cval->max_exposed) {
++		if (cval->res)
++			cval->max_exposed =
++				DIV_ROUND_UP(cval->max - cval->min, cval->res);
++		else
++			cval->max_exposed = cval->max - cval->min;
++	}
++	return cval->max_exposed;
++}
++
+ /* get a feature/mixer unit info */
+ static int mixer_ctl_feature_info(struct snd_kcontrol *kcontrol,
+ 				  struct snd_ctl_elem_info *uinfo)
+@@ -1389,11 +1402,8 @@ static int mixer_ctl_feature_info(struct snd_kcontrol *kcontrol,
+ 	else
+ 		uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
+ 	uinfo->count = cval->channels;
+-	if (cval->val_type == USB_MIXER_BOOLEAN ||
+-	    cval->val_type == USB_MIXER_INV_BOOLEAN) {
+-		uinfo->value.integer.min = 0;
+-		uinfo->value.integer.max = 1;
+-	} else {
++	if (cval->val_type != USB_MIXER_BOOLEAN &&
++	    cval->val_type != USB_MIXER_INV_BOOLEAN) {
+ 		if (!cval->initialized) {
+ 			get_min_max_with_quirks(cval, 0, kcontrol);
+ 			if (cval->initialized && cval->dBmin >= cval->dBmax) {
+@@ -1405,10 +1415,10 @@ static int mixer_ctl_feature_info(struct snd_kcontrol *kcontrol,
+ 					       &kcontrol->id);
+ 			}
+ 		}
+-		uinfo->value.integer.min = 0;
+-		uinfo->value.integer.max =
+-			DIV_ROUND_UP(cval->max - cval->min, cval->res);
+ 	}
++
++	uinfo->value.integer.min = 0;
++	uinfo->value.integer.max = get_max_exposed(cval);
+ 	return 0;
+ }
+ 
+@@ -1449,6 +1459,7 @@ static int mixer_ctl_feature_put(struct snd_kcontrol *kcontrol,
+ 				 struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_info *cval = kcontrol->private_data;
++	int max_val = get_max_exposed(cval);
+ 	int c, cnt, val, oval, err;
+ 	int changed = 0;
+ 
+@@ -1461,6 +1472,8 @@ static int mixer_ctl_feature_put(struct snd_kcontrol *kcontrol,
+ 			if (err < 0)
+ 				return filter_error(cval, err);
+ 			val = ucontrol->value.integer.value[cnt];
++			if (val < 0 || val > max_val)
++				return -EINVAL;
+ 			val = get_abs_value(cval, val);
+ 			if (oval != val) {
+ 				snd_usb_set_cur_mix_value(cval, c + 1, cnt, val);
+@@ -1474,6 +1487,8 @@ static int mixer_ctl_feature_put(struct snd_kcontrol *kcontrol,
+ 		if (err < 0)
+ 			return filter_error(cval, err);
+ 		val = ucontrol->value.integer.value[0];
++		if (val < 0 || val > max_val)
++			return -EINVAL;
+ 		val = get_abs_value(cval, val);
+ 		if (val != oval) {
+ 			snd_usb_set_cur_mix_value(cval, 0, 0, val);
+@@ -2337,6 +2352,8 @@ static int mixer_ctl_procunit_put(struct snd_kcontrol *kcontrol,
+ 	if (err < 0)
+ 		return filter_error(cval, err);
+ 	val = ucontrol->value.integer.value[0];
++	if (val < 0 || val > get_max_exposed(cval))
++		return -EINVAL;
+ 	val = get_abs_value(cval, val);
+ 	if (val != oval) {
+ 		set_cur_ctl_value(cval, cval->control << 8, val);
+@@ -2699,6 +2716,8 @@ static int mixer_ctl_selector_put(struct snd_kcontrol *kcontrol,
+ 	if (err < 0)
+ 		return filter_error(cval, err);
+ 	val = ucontrol->value.enumerated.item[0];
++	if (val < 0 || val >= cval->max) /* here cval->max = # elements */
++		return -EINVAL;
+ 	val = get_abs_value(cval, val);
+ 	if (val != oval) {
+ 		set_cur_ctl_value(cval, cval->control << 8, val);
+diff --git a/sound/usb/mixer.h b/sound/usb/mixer.h
+index d43895c1ae5c6c..167fbfcf01ace9 100644
+--- a/sound/usb/mixer.h
++++ b/sound/usb/mixer.h
+@@ -88,6 +88,7 @@ struct usb_mixer_elem_info {
+ 	int channels;
+ 	int val_type;
+ 	int min, max, res;
++	int max_exposed; /* control API exposes the value in 0..max_exposed */
+ 	int dBmin, dBmax;
+ 	int cached;
+ 	int cache_val[MAX_CHANNELS];
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 212b5e6443d88f..dd88da420c21da 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -14,6 +14,7 @@
+  *	    Przemek Rudy (prudy1@o2.pl)
+  */
+ 
++#include <linux/bitfield.h>
+ #include <linux/hid.h>
+ #include <linux/init.h>
+ #include <linux/math64.h>
+@@ -2925,6 +2926,415 @@ static int snd_bbfpro_controls_create(struct usb_mixer_interface *mixer)
+ 	return 0;
+ }
+ 
++/*
++ * RME Digiface USB
++ */
++
++#define RME_DIGIFACE_READ_STATUS 17
++#define RME_DIGIFACE_STATUS_REG0L 0
++#define RME_DIGIFACE_STATUS_REG0H 1
++#define RME_DIGIFACE_STATUS_REG1L 2
++#define RME_DIGIFACE_STATUS_REG1H 3
++#define RME_DIGIFACE_STATUS_REG2L 4
++#define RME_DIGIFACE_STATUS_REG2H 5
++#define RME_DIGIFACE_STATUS_REG3L 6
++#define RME_DIGIFACE_STATUS_REG3H 7
++
++#define RME_DIGIFACE_CTL_REG1 16
++#define RME_DIGIFACE_CTL_REG2 18
++
++/* Reg is overloaded, 0-7 for status halfwords or 16 or 18 for control registers */
++#define RME_DIGIFACE_REGISTER(reg, mask) (((reg) << 16) | (mask))
++#define RME_DIGIFACE_INVERT BIT(31)
++
++/* Nonconst helpers */
++#define field_get(_mask, _reg) (((_reg) & (_mask)) >> (ffs(_mask) - 1))
++#define field_prep(_mask, _val) (((_val) << (ffs(_mask) - 1)) & (_mask))
++
++static int snd_rme_digiface_write_reg(struct snd_kcontrol *kcontrol, int item, u16 mask, u16 val)
++{
++	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
++	struct snd_usb_audio *chip = list->mixer->chip;
++	struct usb_device *dev = chip->dev;
++	int err;
++
++	err = snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0),
++			      item,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
++			      val, mask, NULL, 0);
++	if (err < 0)
++		dev_err(&dev->dev,
++			"unable to issue control set request %d (ret = %d)",
++			item, err);
++	return err;
++}
++
++static int snd_rme_digiface_read_status(struct snd_kcontrol *kcontrol, u32 status[4])
++{
++	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
++	struct snd_usb_audio *chip = list->mixer->chip;
++	struct usb_device *dev = chip->dev;
++	__le32 buf[4];
++	int err;
++
++	err = snd_usb_ctl_msg(dev, usb_rcvctrlpipe(dev, 0),
++			      RME_DIGIFACE_READ_STATUS,
++			      USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
++			      0, 0,
++			      buf, sizeof(buf));
++	if (err < 0) {
++		dev_err(&dev->dev,
++			"unable to issue status read request (ret = %d)",
++			err);
++	} else {
++		for (int i = 0; i < ARRAY_SIZE(buf); i++)
++			status[i] = le32_to_cpu(buf[i]);
++	}
++	return err;
++}
++
++static int snd_rme_digiface_get_status_val(struct snd_kcontrol *kcontrol)
++{
++	int err;
++	u32 status[4];
++	bool invert = kcontrol->private_value & RME_DIGIFACE_INVERT;
++	u8 reg = (kcontrol->private_value >> 16) & 0xff;
++	u16 mask = kcontrol->private_value & 0xffff;
++	u16 val;
++
++	err = snd_rme_digiface_read_status(kcontrol, status);
++	if (err < 0)
++		return err;
++
++	switch (reg) {
++	/* Status register halfwords */
++	case RME_DIGIFACE_STATUS_REG0L ... RME_DIGIFACE_STATUS_REG3H:
++		break;
++	case RME_DIGIFACE_CTL_REG1: /* Control register 1, present in halfword 3L */
++		reg = RME_DIGIFACE_STATUS_REG3L;
++		break;
++	case RME_DIGIFACE_CTL_REG2: /* Control register 2, present in halfword 3H */
++		reg = RME_DIGIFACE_STATUS_REG3H;
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	if (reg & 1)
++		val = status[reg >> 1] >> 16;
++	else
++		val = status[reg >> 1] & 0xffff;
++
++	if (invert)
++		val ^= mask;
++
++	return field_get(mask, val);
++}
++
++static int snd_rme_digiface_rate_get(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	int freq = snd_rme_digiface_get_status_val(kcontrol);
++
++	if (freq < 0)
++		return freq;
++	if (freq >= ARRAY_SIZE(snd_rme_rate_table))
++		return -EIO;
++
++	ucontrol->value.integer.value[0] = snd_rme_rate_table[freq];
++	return 0;
++}
++
++static int snd_rme_digiface_enum_get(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	int val = snd_rme_digiface_get_status_val(kcontrol);
++
++	if (val < 0)
++		return val;
++
++	ucontrol->value.enumerated.item[0] = val;
++	return 0;
++}
++
++static int snd_rme_digiface_enum_put(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	bool invert = kcontrol->private_value & RME_DIGIFACE_INVERT;
++	u8 reg = (kcontrol->private_value >> 16) & 0xff;
++	u16 mask = kcontrol->private_value & 0xffff;
++	u16 val = field_prep(mask, ucontrol->value.enumerated.item[0]);
++
++	if (invert)
++		val ^= mask;
++
++	return snd_rme_digiface_write_reg(kcontrol, reg, mask, val);
++}
++
++static int snd_rme_digiface_current_sync_get(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_value *ucontrol)
++{
++	int ret = snd_rme_digiface_enum_get(kcontrol, ucontrol);
++
++	/* 7 means internal for current sync */
++	if (ucontrol->value.enumerated.item[0] == 7)
++		ucontrol->value.enumerated.item[0] = 0;
++
++	return ret;
++}
++
++static int snd_rme_digiface_sync_state_get(struct snd_kcontrol *kcontrol,
++					   struct snd_ctl_elem_value *ucontrol)
++{
++	u32 status[4];
++	int err;
++	bool valid, sync;
++
++	err = snd_rme_digiface_read_status(kcontrol, status);
++	if (err < 0)
++		return err;
++
++	valid = status[0] & BIT(kcontrol->private_value);
++	sync = status[0] & BIT(5 + kcontrol->private_value);
++
++	if (!valid)
++		ucontrol->value.enumerated.item[0] = SND_RME_CLOCK_NOLOCK;
++	else if (!sync)
++		ucontrol->value.enumerated.item[0] = SND_RME_CLOCK_LOCK;
++	else
++		ucontrol->value.enumerated.item[0] = SND_RME_CLOCK_SYNC;
++	return 0;
++}
++
++
++static int snd_rme_digiface_format_info(struct snd_kcontrol *kcontrol,
++					struct snd_ctl_elem_info *uinfo)
++{
++	static const char *const format[] = {
++		"ADAT", "S/PDIF"
++	};
++
++	return snd_ctl_enum_info(uinfo, 1,
++				 ARRAY_SIZE(format), format);
++}
++
++
++static int snd_rme_digiface_sync_source_info(struct snd_kcontrol *kcontrol,
++					     struct snd_ctl_elem_info *uinfo)
++{
++	static const char *const sync_sources[] = {
++		"Internal", "Input 1", "Input 2", "Input 3", "Input 4"
++	};
++
++	return snd_ctl_enum_info(uinfo, 1,
++				 ARRAY_SIZE(sync_sources), sync_sources);
++}
++
++static int snd_rme_digiface_rate_info(struct snd_kcontrol *kcontrol,
++				      struct snd_ctl_elem_info *uinfo)
++{
++	uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
++	uinfo->count = 1;
++	uinfo->value.integer.min = 0;
++	uinfo->value.integer.max = 200000;
++	uinfo->value.integer.step = 0;
++	return 0;
++}
++
++static const struct snd_kcontrol_new snd_rme_digiface_controls[] = {
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Input 1 Sync",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_sync_state_info,
++		.get = snd_rme_digiface_sync_state_get,
++		.private_value = 0,
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Input 1 Format",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_digiface_format_info,
++		.get = snd_rme_digiface_enum_get,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG0H, BIT(0)) |
++			RME_DIGIFACE_INVERT,
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Input 1 Rate",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_digiface_rate_info,
++		.get = snd_rme_digiface_rate_get,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG1L, GENMASK(3, 0)),
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Input 2 Sync",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_sync_state_info,
++		.get = snd_rme_digiface_sync_state_get,
++		.private_value = 1,
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Input 2 Format",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_digiface_format_info,
++		.get = snd_rme_digiface_enum_get,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG0L, BIT(13)) |
++			RME_DIGIFACE_INVERT,
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Input 2 Rate",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_digiface_rate_info,
++		.get = snd_rme_digiface_rate_get,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG1L, GENMASK(7, 4)),
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Input 3 Sync",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_sync_state_info,
++		.get = snd_rme_digiface_sync_state_get,
++		.private_value = 2,
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Input 3 Format",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_digiface_format_info,
++		.get = snd_rme_digiface_enum_get,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG0L, BIT(14)) |
++			RME_DIGIFACE_INVERT,
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Input 3 Rate",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_digiface_rate_info,
++		.get = snd_rme_digiface_rate_get,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG1L, GENMASK(11, 8)),
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Input 4 Sync",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_sync_state_info,
++		.get = snd_rme_digiface_sync_state_get,
++		.private_value = 3,
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Input 4 Format",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_digiface_format_info,
++		.get = snd_rme_digiface_enum_get,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG0L, GENMASK(15, 12)) |
++			RME_DIGIFACE_INVERT,
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Input 4 Rate",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_digiface_rate_info,
++		.get = snd_rme_digiface_rate_get,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG1L, GENMASK(3, 0)),
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Output 1 Format",
++		.access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++		.info = snd_rme_digiface_format_info,
++		.get = snd_rme_digiface_enum_get,
++		.put = snd_rme_digiface_enum_put,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_CTL_REG2, BIT(0)),
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Output 2 Format",
++		.access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++		.info = snd_rme_digiface_format_info,
++		.get = snd_rme_digiface_enum_get,
++		.put = snd_rme_digiface_enum_put,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_CTL_REG2, BIT(1)),
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Output 3 Format",
++		.access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++		.info = snd_rme_digiface_format_info,
++		.get = snd_rme_digiface_enum_get,
++		.put = snd_rme_digiface_enum_put,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_CTL_REG2, BIT(3)),
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Output 4 Format",
++		.access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++		.info = snd_rme_digiface_format_info,
++		.get = snd_rme_digiface_enum_get,
++		.put = snd_rme_digiface_enum_put,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_CTL_REG2, BIT(4)),
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Sync Source",
++		.access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++		.info = snd_rme_digiface_sync_source_info,
++		.get = snd_rme_digiface_enum_get,
++		.put = snd_rme_digiface_enum_put,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_CTL_REG1, GENMASK(2, 0)),
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Current Sync Source",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_digiface_sync_source_info,
++		.get = snd_rme_digiface_current_sync_get,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG0L, GENMASK(12, 10)),
++	},
++	{
++		/*
++		 * This is writeable, but it is only set by the PCM rate.
++		 * Mixer apps currently need to drive the mixer using raw USB requests,
++		 * so they can also change this that way to configure the rate for
++		 * stand-alone operation when the PCM is closed.
++		 */
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "System Rate",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_rate_info,
++		.get = snd_rme_digiface_rate_get,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_CTL_REG1, GENMASK(6, 3)),
++	},
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = "Current Rate",
++		.access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE,
++		.info = snd_rme_rate_info,
++		.get = snd_rme_digiface_rate_get,
++		.private_value = RME_DIGIFACE_REGISTER(RME_DIGIFACE_STATUS_REG1H, GENMASK(7, 4)),
++	}
++};
++
++static int snd_rme_digiface_controls_create(struct usb_mixer_interface *mixer)
++{
++	int err, i;
++
++	for (i = 0; i < ARRAY_SIZE(snd_rme_digiface_controls); ++i) {
++		err = add_single_ctl_with_resume(mixer, 0,
++						 NULL,
++						 &snd_rme_digiface_controls[i],
++						 NULL);
++		if (err < 0)
++			return err;
++	}
++
++	return 0;
++}
++
+ /*
+  * Pioneer DJ DJM Mixers
+  *
+@@ -3483,6 +3893,9 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ 	case USB_ID(0x2a39, 0x3fb0): /* RME Babyface Pro FS */
+ 		err = snd_bbfpro_controls_create(mixer);
+ 		break;
++	case USB_ID(0x2a39, 0x3f8c): /* RME Digiface USB */
++		err = snd_rme_digiface_controls_create(mixer);
++		break;
+ 	case USB_ID(0x2b73, 0x0017): /* Pioneer DJ DJM-250MK2 */
+ 		err = snd_djm_controls_create(mixer, SND_DJM_250MK2_IDX);
+ 		break;
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index aaa6a515d0f8a4..24c981c9b2405d 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -35,10 +35,87 @@
+ 	.bInterfaceClass = USB_CLASS_AUDIO, \
+ 	.bInterfaceSubClass = USB_SUBCLASS_AUDIOCONTROL
+ 
++/* Quirk .driver_info, followed by the definition of the quirk entry;
++ * put like QUIRK_DRIVER_INFO { ... } in each entry of the quirk table
++ */
++#define QUIRK_DRIVER_INFO \
++	.driver_info = (unsigned long)&(const struct snd_usb_audio_quirk)
++
++/*
++ * Macros for quirk data entries
++ */
++
++/* Quirk data entry for ignoring the interface */
++#define QUIRK_DATA_IGNORE(_ifno) \
++	.ifnum = (_ifno), .type = QUIRK_IGNORE_INTERFACE
++/* Quirk data entry for a standard audio interface */
++#define QUIRK_DATA_STANDARD_AUDIO(_ifno) \
++	.ifnum = (_ifno), .type = QUIRK_AUDIO_STANDARD_INTERFACE
++/* Quirk data entry for a standard MIDI interface */
++#define QUIRK_DATA_STANDARD_MIDI(_ifno) \
++	.ifnum = (_ifno), .type = QUIRK_MIDI_STANDARD_INTERFACE
++/* Quirk data entry for a standard mixer interface */
++#define QUIRK_DATA_STANDARD_MIXER(_ifno) \
++	.ifnum = (_ifno), .type = QUIRK_AUDIO_STANDARD_MIXER
++
++/* Quirk data entry for Yamaha MIDI */
++#define QUIRK_DATA_MIDI_YAMAHA(_ifno) \
++	.ifnum = (_ifno), .type = QUIRK_MIDI_YAMAHA
++/* Quirk data entry for Edirol UAxx */
++#define QUIRK_DATA_EDIROL_UAXX(_ifno) \
++	.ifnum = (_ifno), .type = QUIRK_AUDIO_EDIROL_UAXX
++/* Quirk data entry for raw bytes interface */
++#define QUIRK_DATA_RAW_BYTES(_ifno) \
++	.ifnum = (_ifno), .type = QUIRK_MIDI_RAW_BYTES
++
++/* Quirk composite array terminator */
++#define QUIRK_COMPOSITE_END	{ .ifnum = -1 }
++
++/* Quirk data entry for composite quirks;
++ * followed by the quirk array that is terminated with QUIRK_COMPOSITE_END
++ * e.g. QUIRK_DATA_COMPOSITE { { quirk1 }, { quirk2 },..., QUIRK_COMPOSITE_END }
++ */
++#define QUIRK_DATA_COMPOSITE \
++	.ifnum = QUIRK_ANY_INTERFACE, \
++	.type = QUIRK_COMPOSITE, \
++	.data = &(const struct snd_usb_audio_quirk[])
++
++/* Quirk data entry for a fixed audio endpoint;
++ * followed by audioformat definition
++ * e.g. QUIRK_DATA_AUDIOFORMAT(n) { .formats = xxx, ... }
++ */
++#define QUIRK_DATA_AUDIOFORMAT(_ifno)	    \
++	.ifnum = (_ifno),		    \
++	.type = QUIRK_AUDIO_FIXED_ENDPOINT, \
++	.data = &(const struct audioformat)
++
++/* Quirk data entry for a fixed MIDI endpoint;
++ * followed by snd_usb_midi_endpoint_info definition
++ * e.g. QUIRK_DATA_MIDI_FIXED_ENDPOINT(n) { .out_cables = x, .in_cables = y }
++ */
++#define QUIRK_DATA_MIDI_FIXED_ENDPOINT(_ifno) \
++	.ifnum = (_ifno),		      \
++	.type = QUIRK_MIDI_FIXED_ENDPOINT,    \
++	.data = &(const struct snd_usb_midi_endpoint_info)
++/* Quirk data entry for a MIDIMAN MIDI endpoint */
++#define QUIRK_DATA_MIDI_MIDIMAN(_ifno) \
++	.ifnum = (_ifno),	       \
++	.type = QUIRK_MIDI_MIDIMAN,    \
++	.data = &(const struct snd_usb_midi_endpoint_info)
++/* Quirk data entry for a EMAGIC MIDI endpoint */
++#define QUIRK_DATA_MIDI_EMAGIC(_ifno) \
++	.ifnum = (_ifno),	      \
++	.type = QUIRK_MIDI_EMAGIC,    \
++	.data = &(const struct snd_usb_midi_endpoint_info)
++
++/*
++ * Here we go... the quirk table definition begins:
++ */
++
+ /* FTDI devices */
+ {
+ 	USB_DEVICE(0x0403, 0xb8d8),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "STARR LABS", */
+ 		/* .product_name = "Starr Labs MIDI USB device", */
+ 		.ifnum = 0,
+@@ -49,10 +126,8 @@
+ {
+ 	/* Creative BT-D1 */
+ 	USB_DEVICE(0x041e, 0x0005),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = 1,
+-		.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-		.data = &(const struct audioformat) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_AUDIOFORMAT(1) {
+ 			.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 			.channels = 2,
+ 			.iface = 1,
+@@ -87,18 +162,11 @@
+  */
+ {
+ 	USB_AUDIO_DEVICE(0x041e, 0x4095),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = &(const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(2) },
+ 			{
+-				.ifnum = 3,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(3) {
+ 					.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 					.channels = 2,
+ 					.fmt_bits = 16,
+@@ -114,9 +182,7 @@
+ 					.rate_table = (unsigned int[]) { 48000 },
+ 				},
+ 			},
+-			{
+-				.ifnum = -1
+-			},
++			QUIRK_COMPOSITE_END
+ 		},
+ 	},
+ },
+@@ -128,31 +194,18 @@
+  */
+ {
+ 	USB_DEVICE(0x0424, 0xb832),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Standard Microsystems Corp.",
+ 		.product_name = "HP Wireless Audio",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
+ 			/* Mixer */
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE,
+-			},
++			{ QUIRK_DATA_IGNORE(0) },
+ 			/* Playback */
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE,
+-			},
++			{ QUIRK_DATA_IGNORE(1) },
+ 			/* Capture */
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_IGNORE_INTERFACE,
+-			},
++			{ QUIRK_DATA_IGNORE(2) },
+ 			/* HID Device, .ifnum = 3 */
+-			{
+-				.ifnum = -1,
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -175,20 +228,18 @@
+ 
+ #define YAMAHA_DEVICE(id, name) { \
+ 	USB_DEVICE(0x0499, id), \
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { \
++	QUIRK_DRIVER_INFO { \
+ 		.vendor_name = "Yamaha", \
+ 		.product_name = name, \
+-		.ifnum = QUIRK_ANY_INTERFACE, \
+-		.type = QUIRK_MIDI_YAMAHA \
++		QUIRK_DATA_MIDI_YAMAHA(QUIRK_ANY_INTERFACE) \
+ 	} \
+ }
+ #define YAMAHA_INTERFACE(id, intf, name) { \
+ 	USB_DEVICE_VENDOR_SPEC(0x0499, id), \
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { \
++	QUIRK_DRIVER_INFO { \
+ 		.vendor_name = "Yamaha", \
+ 		.product_name = name, \
+-		.ifnum = intf, \
+-		.type = QUIRK_MIDI_YAMAHA \
++		QUIRK_DATA_MIDI_YAMAHA(intf) \
+ 	} \
+ }
+ YAMAHA_DEVICE(0x1000, "UX256"),
+@@ -276,135 +327,67 @@ YAMAHA_DEVICE(0x105d, NULL),
+ YAMAHA_DEVICE(0x1718, "P-125"),
+ {
+ 	USB_DEVICE(0x0499, 0x1503),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "Yamaha", */
+ 		/* .product_name = "MOX6/MOX8", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_MIDI_YAMAHA
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
++			{ QUIRK_DATA_MIDI_YAMAHA(3) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0499, 0x1507),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "Yamaha", */
+ 		/* .product_name = "THR10", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_MIDI_YAMAHA
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
++			{ QUIRK_DATA_MIDI_YAMAHA(3) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0499, 0x1509),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "Yamaha", */
+ 		/* .product_name = "Steinberg UR22", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_MIDI_YAMAHA
+-			},
+-			{
+-				.ifnum = 4,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
++			{ QUIRK_DATA_MIDI_YAMAHA(3) },
++			{ QUIRK_DATA_IGNORE(4) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0499, 0x150a),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "Yamaha", */
+ 		/* .product_name = "THR5A", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_MIDI_YAMAHA
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
++			{ QUIRK_DATA_MIDI_YAMAHA(3) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0499, 0x150c),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "Yamaha", */
+ 		/* .product_name = "THR10C", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_MIDI_YAMAHA
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
++			{ QUIRK_DATA_MIDI_YAMAHA(3) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -438,7 +421,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	               USB_DEVICE_ID_MATCH_INT_CLASS,
+ 	.idVendor = 0x0499,
+ 	.bInterfaceClass = USB_CLASS_VENDOR_SPEC,
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.ifnum = QUIRK_ANY_INTERFACE,
+ 		.type = QUIRK_AUTODETECT
+ 	}
+@@ -449,16 +432,12 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+  */
+ {
+ 	USB_DEVICE(0x0582, 0x0000),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "UA-100",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 					.channels = 4,
+ 					.iface = 0,
+@@ -473,9 +452,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 					.channels = 2,
+ 					.iface = 1,
+@@ -490,106 +467,66 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0007,
+ 					.in_cables  = 0x0007
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0582, 0x0002),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UM-4",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x000f,
+ 					.in_cables  = 0x000f
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0582, 0x0003),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "SC-8850",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x003f,
+ 					.in_cables  = 0x003f
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0582, 0x0004),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "U-8",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0005,
+ 					.in_cables  = 0x0005
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -597,152 +534,92 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	/* Has ID 0x0099 when not in "Advanced Driver" mode.
+ 	 * The UM-2EX has only one input, but we cannot detect this. */
+ 	USB_DEVICE(0x0582, 0x0005),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UM-2",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0003,
+ 					.in_cables  = 0x0003
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0582, 0x0007),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "SC-8820",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0013,
+ 					.in_cables  = 0x0013
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0582, 0x0008),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "PC-300",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* has ID 0x009d when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0009),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UM-1",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0582, 0x000b),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "SK-500",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0013,
+ 					.in_cables  = 0x0013
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -750,31 +627,19 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	/* thanks to Emiliano Grilli <emillo@libero.it>
+ 	 * for helping researching this data */
+ 	USB_DEVICE(0x0582, 0x000c),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "SC-D70",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(0) },
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0007,
+ 					.in_cables  = 0x0007
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -788,35 +653,23 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * the 96kHz sample rate.
+ 	 */
+ 	USB_DEVICE(0x0582, 0x0010),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UA-5",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* has ID 0x0013 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0012),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "XV-5050",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -825,12 +678,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x0015 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0014),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UM-880",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x01ff,
+ 			.in_cables  = 0x01ff
+ 		}
+@@ -839,74 +690,48 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x0017 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0016),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "SD-90",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(0) },
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x000f,
+ 					.in_cables  = 0x000f
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* has ID 0x001c when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x001b),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "MMP-2",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* has ID 0x001e when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x001d),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "V-SYNTH",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -915,12 +740,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x0024 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0023),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UM-550",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x003f,
+ 			.in_cables  = 0x003f
+ 		}
+@@ -933,20 +756,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * and no MIDI.
+ 	 */
+ 	USB_DEVICE(0x0582, 0x0025),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UA-20",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 2,
+ 					.iface = 1,
+@@ -961,9 +777,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(2) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 2,
+ 					.iface = 2,
+@@ -978,28 +792,22 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 3,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(3) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* has ID 0x0028 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0027),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "SD-20",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0003,
+ 			.in_cables  = 0x0007
+ 		}
+@@ -1008,12 +816,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x002a when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0029),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "SD-80",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x000f,
+ 			.in_cables  = 0x000f
+ 		}
+@@ -1026,39 +832,24 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * but offers only 16-bit PCM and no MIDI.
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x0582, 0x002b),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UA-700",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_EDIROL_UAXX
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_EDIROL_UAXX
+-			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_AUDIO_EDIROL_UAXX
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_EDIROL_UAXX(1) },
++			{ QUIRK_DATA_EDIROL_UAXX(2) },
++			{ QUIRK_DATA_EDIROL_UAXX(3) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* has ID 0x002e when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x002d),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "XV-2020",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -1067,12 +858,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x0030 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x002f),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "VariOS",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0007,
+ 			.in_cables  = 0x0007
+ 		}
+@@ -1081,12 +870,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x0034 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0033),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "PCR",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0003,
+ 			.in_cables  = 0x0007
+ 		}
+@@ -1098,12 +885,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * later revisions use IDs 0x0054 and 0x00a2.
+ 	 */
+ 	USB_DEVICE(0x0582, 0x0037),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "Digital Piano",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -1116,39 +901,24 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * and no MIDI.
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x0582, 0x003b),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "BOSS",
+ 		.product_name = "GS-10",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = & (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_MIDI_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
++			{ QUIRK_DATA_STANDARD_MIDI(3) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* has ID 0x0041 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0040),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "GI-20",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -1157,12 +927,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x0043 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0042),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "RS-70",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -1171,36 +939,24 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x0049 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0047),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "EDIROL", */
+ 		/* .product_name = "UR-80", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
+ 			/* in the 96 kHz modes, only interface 1 is there */
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* has ID 0x004a when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0048),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "EDIROL", */
+ 		/* .product_name = "UR-80", */
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0003,
+ 			.in_cables  = 0x0007
+ 		}
+@@ -1209,35 +965,23 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x004e when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x004c),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "PCR-A",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* has ID 0x004f when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x004d),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "PCR-A",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0003,
+ 			.in_cables  = 0x0007
+ 		}
+@@ -1249,76 +993,52 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * is standard compliant, but has only 16-bit PCM.
+ 	 */
+ 	USB_DEVICE(0x0582, 0x0050),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UA-3FX",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0582, 0x0052),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UM-1SX",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_STANDARD_INTERFACE
++		QUIRK_DATA_STANDARD_MIDI(0)
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0582, 0x0060),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "EXR Series",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_STANDARD_INTERFACE
++		QUIRK_DATA_STANDARD_MIDI(0)
+ 	}
+ },
+ {
+ 	/* has ID 0x0066 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0064),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "EDIROL", */
+ 		/* .product_name = "PCR-1", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* has ID 0x0067 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0065),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "EDIROL", */
+ 		/* .product_name = "PCR-1", */
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0003
+ 		}
+@@ -1327,12 +1047,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x006e when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x006d),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "FANTOM-X",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -1345,39 +1063,24 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * offers only 16-bit PCM at 44.1 kHz and no MIDI.
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x0582, 0x0074),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UA-25",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_EDIROL_UAXX
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_EDIROL_UAXX
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_EDIROL_UAXX
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_EDIROL_UAXX(0) },
++			{ QUIRK_DATA_EDIROL_UAXX(1) },
++			{ QUIRK_DATA_EDIROL_UAXX(2) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* has ID 0x0076 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0075),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "BOSS",
+ 		.product_name = "DR-880",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -1386,12 +1089,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x007b when not in "Advanced Driver" mode */
+ 	USB_DEVICE_VENDOR_SPEC(0x0582, 0x007a),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		/* "RD" or "RD-700SX"? */
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0003,
+ 			.in_cables  = 0x0003
+ 		}
+@@ -1400,12 +1101,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x0081 when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x0080),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Roland",
+ 		.product_name = "G-70",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -1414,12 +1113,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* has ID 0x008c when not in "Advanced Driver" mode */
+ 	USB_DEVICE(0x0582, 0x008b),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "PC-50",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -1431,56 +1128,31 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * is standard compliant, but has only 16-bit PCM and no MIDI.
+ 	 */
+ 	USB_DEVICE(0x0582, 0x00a3),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UA-4FX",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_EDIROL_UAXX
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_EDIROL_UAXX
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_EDIROL_UAXX
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_EDIROL_UAXX(0) },
++			{ QUIRK_DATA_EDIROL_UAXX(1) },
++			{ QUIRK_DATA_EDIROL_UAXX(2) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* Edirol M-16DX */
+ 	USB_DEVICE(0x0582, 0x00c4),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(0) },
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -1490,37 +1162,22 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * offers only 16-bit PCM at 44.1 kHz and no MIDI.
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x0582, 0x00e6),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "EDIROL",
+ 		.product_name = "UA-25EX",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_EDIROL_UAXX
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_EDIROL_UAXX
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_EDIROL_UAXX
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_EDIROL_UAXX(0) },
++			{ QUIRK_DATA_EDIROL_UAXX(1) },
++			{ QUIRK_DATA_EDIROL_UAXX(2) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* Edirol UM-3G */
+ 	USB_DEVICE_VENDOR_SPEC(0x0582, 0x0108),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(0) {
+ 			.out_cables = 0x0007,
+ 			.in_cables  = 0x0007
+ 		}
+@@ -1529,45 +1186,29 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* BOSS ME-25 */
+ 	USB_DEVICE(0x0582, 0x0113),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(0) },
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* only 44.1 kHz works at the moment */
+ 	USB_DEVICE(0x0582, 0x0120),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "Roland", */
+ 		/* .product_name = "OCTO-CAPTURE", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S32_LE,
+ 					.channels = 10,
+ 					.iface = 0,
+@@ -1583,9 +1224,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S32_LE,
+ 					.channels = 12,
+ 					.iface = 1,
+@@ -1601,40 +1240,26 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 4,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++			{ QUIRK_DATA_IGNORE(3) },
++			{ QUIRK_DATA_IGNORE(4) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* only 44.1 kHz works at the moment */
+ 	USB_DEVICE(0x0582, 0x012f),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "Roland", */
+ 		/* .product_name = "QUAD-CAPTURE", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S32_LE,
+ 					.channels = 4,
+ 					.iface = 0,
+@@ -1650,9 +1275,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S32_LE,
+ 					.channels = 6,
+ 					.iface = 1,
+@@ -1668,54 +1291,32 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 4,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++			{ QUIRK_DATA_IGNORE(3) },
++			{ QUIRK_DATA_IGNORE(4) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0582, 0x0159),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "Roland", */
+ 		/* .product_name = "UA-22", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(0) },
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(2) {
+ 					.out_cables = 0x0001,
+ 					.in_cables = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -1723,19 +1324,19 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* UA101 and co are supported by another driver */
+ {
+ 	USB_DEVICE(0x0582, 0x0044), /* UA-1000 high speed */
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.ifnum = QUIRK_NODEV_INTERFACE
+ 	},
+ },
+ {
+ 	USB_DEVICE(0x0582, 0x007d), /* UA-101 high speed */
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.ifnum = QUIRK_NODEV_INTERFACE
+ 	},
+ },
+ {
+ 	USB_DEVICE(0x0582, 0x008d), /* UA-101 full speed */
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.ifnum = QUIRK_NODEV_INTERFACE
+ 	},
+ },
+@@ -1746,7 +1347,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	               USB_DEVICE_ID_MATCH_INT_CLASS,
+ 	.idVendor = 0x0582,
+ 	.bInterfaceClass = USB_CLASS_VENDOR_SPEC,
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.ifnum = QUIRK_ANY_INTERFACE,
+ 		.type = QUIRK_AUTODETECT
+ 	}
+@@ -1761,12 +1362,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * compliant USB MIDI ports for external MIDI and controls.
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x06f8, 0xb000),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Hercules",
+ 		.product_name = "DJ Console (WE)",
+-		.ifnum = 4,
+-		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_FIXED_ENDPOINT(4) {
+ 			.out_cables = 0x0001,
+ 			.in_cables = 0x0001
+ 		}
+@@ -1776,12 +1375,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Midiman/M-Audio devices */
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x1002),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "M-Audio",
+ 		.product_name = "MidiSport 2x2",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_MIDI_MIDIMAN,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ 			.out_cables = 0x0003,
+ 			.in_cables  = 0x0003
+ 		}
+@@ -1789,12 +1386,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x1011),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "M-Audio",
+ 		.product_name = "MidiSport 1x1",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_MIDI_MIDIMAN,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -1802,12 +1397,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x1015),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "M-Audio",
+ 		.product_name = "Keystation",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_MIDI_MIDIMAN,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -1815,12 +1408,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x1021),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "M-Audio",
+ 		.product_name = "MidiSport 4x4",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_MIDI_MIDIMAN,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ 			.out_cables = 0x000f,
+ 			.in_cables  = 0x000f
+ 		}
+@@ -1833,12 +1424,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * Thanks to Olaf Giesbrecht <Olaf_Giesbrecht@yahoo.de>
+ 	 */
+ 	USB_DEVICE_VER(0x0763, 0x1031, 0x0100, 0x0109),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "M-Audio",
+ 		.product_name = "MidiSport 8x8",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_MIDI_MIDIMAN,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ 			.out_cables = 0x01ff,
+ 			.in_cables  = 0x01ff
+ 		}
+@@ -1846,12 +1435,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x1033),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "M-Audio",
+ 		.product_name = "MidiSport 8x8",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_MIDI_MIDIMAN,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ 			.out_cables = 0x01ff,
+ 			.in_cables  = 0x01ff
+ 		}
+@@ -1859,12 +1446,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x1041),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "M-Audio",
+ 		.product_name = "MidiSport 2x4",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_MIDI_MIDIMAN,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_MIDIMAN(QUIRK_ANY_INTERFACE) {
+ 			.out_cables = 0x000f,
+ 			.in_cables  = 0x0003
+ 		}
+@@ -1872,76 +1457,41 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x2001),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "M-Audio",
+ 		.product_name = "Quattro",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = & (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
+ 			/*
+ 			 * Interfaces 0-2 are "Windows-compatible", 16-bit only,
+ 			 * and share endpoints with the other interfaces.
+ 			 * Ignore them.  The other interfaces can do 24 bits,
+ 			 * but captured samples are big-endian (see usbaudio.c).
+ 			 */
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 4,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 5,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 6,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 7,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 8,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 9,
+-				.type = QUIRK_MIDI_MIDIMAN,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
++			{ QUIRK_DATA_IGNORE(2) },
++			{ QUIRK_DATA_IGNORE(3) },
++			{ QUIRK_DATA_STANDARD_AUDIO(4) },
++			{ QUIRK_DATA_STANDARD_AUDIO(5) },
++			{ QUIRK_DATA_IGNORE(6) },
++			{ QUIRK_DATA_STANDARD_AUDIO(7) },
++			{ QUIRK_DATA_STANDARD_AUDIO(8) },
++			{
++				QUIRK_DATA_MIDI_MIDIMAN(9) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x2003),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "M-Audio",
+ 		.product_name = "AudioPhile",
+-		.ifnum = 6,
+-		.type = QUIRK_MIDI_MIDIMAN,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_MIDIMAN(6) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -1949,12 +1499,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x2008),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "M-Audio",
+ 		.product_name = "Ozone",
+-		.ifnum = 3,
+-		.type = QUIRK_MIDI_MIDIMAN,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_MIDIMAN(3) {
+ 			.out_cables = 0x0001,
+ 			.in_cables  = 0x0001
+ 		}
+@@ -1962,93 +1510,45 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x200d),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "M-Audio",
+ 		.product_name = "OmniStudio",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = & (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 4,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 5,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 6,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 7,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 8,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 9,
+-				.type = QUIRK_MIDI_MIDIMAN,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
++			{ QUIRK_DATA_IGNORE(2) },
++			{ QUIRK_DATA_IGNORE(3) },
++			{ QUIRK_DATA_STANDARD_AUDIO(4) },
++			{ QUIRK_DATA_STANDARD_AUDIO(5) },
++			{ QUIRK_DATA_IGNORE(6) },
++			{ QUIRK_DATA_STANDARD_AUDIO(7) },
++			{ QUIRK_DATA_STANDARD_AUDIO(8) },
++			{
++				QUIRK_DATA_MIDI_MIDIMAN(9) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0763, 0x2019),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "M-Audio", */
+ 		/* .product_name = "Ozone Academic", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = & (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(0) },
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_MIDI_MIDIMAN,
+-				.data = & (const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_MIDIMAN(3) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -2058,21 +1558,14 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x2030),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "M-Audio", */
+ 		/* .product_name = "Fast Track C400", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = &(const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(1) },
+ 			/* Playback */
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(2) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 6,
+ 					.iface = 2,
+@@ -2096,9 +1589,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 			},
+ 			/* Capture */
+ 			{
+-				.ifnum = 3,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(3) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 4,
+ 					.iface = 3,
+@@ -2120,30 +1611,21 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.clock = 0x80,
+ 				}
+ 			},
+-			/* MIDI */
+-			{
+-				.ifnum = -1 /* Interface = 4 */
+-			}
++			/* MIDI: Interface = 4*/
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x2031),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "M-Audio", */
+ 		/* .product_name = "Fast Track C600", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = &(const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(1) },
+ 			/* Playback */
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(2) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8,
+ 					.iface = 2,
+@@ -2167,9 +1649,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 			},
+ 			/* Capture */
+ 			{
+-				.ifnum = 3,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(3) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 6,
+ 					.iface = 3,
+@@ -2191,29 +1671,20 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.clock = 0x80,
+ 				}
+ 			},
+-			/* MIDI */
+-			{
+-				.ifnum = -1 /* Interface = 4 */
+-			}
++			/* MIDI: Interface = 4 */
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x2080),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "M-Audio", */
+ 		/* .product_name = "Fast Track Ultra", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = & (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(0) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8,
+ 					.iface = 1,
+@@ -2235,9 +1706,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(2) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8,
+ 					.iface = 2,
+@@ -2259,28 +1728,19 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			/* interface 3 (MIDI) is standard compliant */
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x2081),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "M-Audio", */
+ 		/* .product_name = "Fast Track Ultra 8R", */
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = & (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(0) },
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8,
+ 					.iface = 1,
+@@ -2302,9 +1762,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(2) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8,
+ 					.iface = 2,
+@@ -2326,9 +1784,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			/* interface 3 (MIDI) is standard compliant */
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -2336,21 +1792,19 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Casio devices */
+ {
+ 	USB_DEVICE(0x07cf, 0x6801),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Casio",
+ 		.product_name = "PL-40R",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_YAMAHA
++		QUIRK_DATA_MIDI_YAMAHA(0)
+ 	}
+ },
+ {
+ 	/* this ID is used by several devices without a product ID */
+ 	USB_DEVICE(0x07cf, 0x6802),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Casio",
+ 		.product_name = "Keyboard",
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_YAMAHA
++		QUIRK_DATA_MIDI_YAMAHA(0)
+ 	}
+ },
+ 
+@@ -2363,23 +1817,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	.idVendor = 0x07fd,
+ 	.idProduct = 0x0001,
+ 	.bDeviceSubClass = 2,
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "MOTU",
+ 		.product_name = "Fastlane",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = & (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_MIDI_RAW_BYTES
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_RAW_BYTES(0) },
++			{ QUIRK_DATA_IGNORE(1) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -2387,12 +1831,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Emagic devices */
+ {
+ 	USB_DEVICE(0x086a, 0x0001),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Emagic",
+ 		.product_name = "Unitor8",
+-		.ifnum = 2,
+-		.type = QUIRK_MIDI_EMAGIC,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_EMAGIC(2) {
+ 			.out_cables = 0x80ff,
+ 			.in_cables  = 0x80ff
+ 		}
+@@ -2400,12 +1842,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE(0x086a, 0x0002),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Emagic",
+ 		/* .product_name = "AMT8", */
+-		.ifnum = 2,
+-		.type = QUIRK_MIDI_EMAGIC,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_EMAGIC(2) {
+ 			.out_cables = 0x80ff,
+ 			.in_cables  = 0x80ff
+ 		}
+@@ -2413,12 +1853,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE(0x086a, 0x0003),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Emagic",
+ 		/* .product_name = "MT4", */
+-		.ifnum = 2,
+-		.type = QUIRK_MIDI_EMAGIC,
+-		.data = & (const struct snd_usb_midi_endpoint_info) {
++		QUIRK_DATA_MIDI_EMAGIC(2) {
+ 			.out_cables = 0x800f,
+ 			.in_cables  = 0x8003
+ 		}
+@@ -2428,38 +1866,35 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* KORG devices */
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0944, 0x0200),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "KORG, Inc.",
+ 		/* .product_name = "PANDORA PX5D", */
+-		.ifnum = 3,
+-		.type = QUIRK_MIDI_STANDARD_INTERFACE,
++		QUIRK_DATA_STANDARD_MIDI(3)
+ 	}
+ },
+ 
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0944, 0x0201),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "KORG, Inc.",
+ 		/* .product_name = "ToneLab ST", */
+-		.ifnum = 3,
+-		.type = QUIRK_MIDI_STANDARD_INTERFACE,
++		QUIRK_DATA_STANDARD_MIDI(3)
+ 	}
+ },
+ 
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0944, 0x0204),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "KORG, Inc.",
+ 		/* .product_name = "ToneLab EX", */
+-		.ifnum = 3,
+-		.type = QUIRK_MIDI_STANDARD_INTERFACE,
++		QUIRK_DATA_STANDARD_MIDI(3)
+ 	}
+ },
+ 
+ /* AKAI devices */
+ {
+ 	USB_DEVICE(0x09e8, 0x0062),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "AKAI",
+ 		.product_name = "MPD16",
+ 		.ifnum = 0,
+@@ -2470,89 +1905,49 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* Akai MPC Element */
+ 	USB_DEVICE(0x09e8, 0x0021),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = & (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_MIDI_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_STANDARD_MIDI(1) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+-},
+-
+-/* Steinberg devices */
+-{
+-	/* Steinberg MI2 */
+-	USB_DEVICE_VENDOR_SPEC(0x0a4e, 0x2040),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = & (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
++},
++
++/* Steinberg devices */
++{
++	/* Steinberg MI2 */
++	USB_DEVICE_VENDOR_SPEC(0x0a4e, 0x2040),
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(0) },
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
+ 			{
+-				.ifnum = 3,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = &(const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(3) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* Steinberg MI4 */
+ 	USB_DEVICE_VENDOR_SPEC(0x0a4e, 0x4040),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = & (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(0) },
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = &(const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(3) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -2560,34 +1955,31 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* TerraTec devices */
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0ccd, 0x0012),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "TerraTec",
+ 		.product_name = "PHASE 26",
+-		.ifnum = 3,
+-		.type = QUIRK_MIDI_STANDARD_INTERFACE
++		QUIRK_DATA_STANDARD_MIDI(3)
+ 	}
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0ccd, 0x0013),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "TerraTec",
+ 		.product_name = "PHASE 26",
+-		.ifnum = 3,
+-		.type = QUIRK_MIDI_STANDARD_INTERFACE
++		QUIRK_DATA_STANDARD_MIDI(3)
+ 	}
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0ccd, 0x0014),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "TerraTec",
+ 		.product_name = "PHASE 26",
+-		.ifnum = 3,
+-		.type = QUIRK_MIDI_STANDARD_INTERFACE
++		QUIRK_DATA_STANDARD_MIDI(3)
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x0ccd, 0x0035),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Miditech",
+ 		.product_name = "Play'n Roll",
+ 		.ifnum = 0,
+@@ -2602,7 +1994,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Novation EMS devices */
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x1235, 0x0001),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Novation",
+ 		.product_name = "ReMOTE Audio/XStation",
+ 		.ifnum = 4,
+@@ -2611,7 +2003,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x1235, 0x0002),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Novation",
+ 		.product_name = "Speedio",
+ 		.ifnum = 3,
+@@ -2620,38 +2012,29 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ },
+ {
+ 	USB_DEVICE(0x1235, 0x000a),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "Novation", */
+ 		/* .product_name = "Nocturn", */
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_RAW_BYTES
++		QUIRK_DATA_RAW_BYTES(0)
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x1235, 0x000e),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		/* .vendor_name = "Novation", */
+ 		/* .product_name = "Launchpad", */
+-		.ifnum = 0,
+-		.type = QUIRK_MIDI_RAW_BYTES
++		QUIRK_DATA_RAW_BYTES(0)
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x1235, 0x0010),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Focusrite",
+ 		.product_name = "Saffire 6 USB",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(0) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 4,
+ 					.iface = 0,
+@@ -2678,9 +2061,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 2,
+ 					.iface = 0,
+@@ -2702,28 +2083,19 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 				}
+ 			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_MIDI_RAW_BYTES
+-			},
+-			{
+-				.ifnum = -1
+-			}
++			{ QUIRK_DATA_RAW_BYTES(1) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE(0x1235, 0x0018),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Novation",
+ 		.product_name = "Twitch",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = & (const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 4,
+ 					.iface = 0,
+@@ -2742,19 +2114,14 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 				}
+ 			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_MIDI_RAW_BYTES
+-			},
+-			{
+-				.ifnum = -1
+-			}
++			{ QUIRK_DATA_RAW_BYTES(1) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x1235, 0x4661),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Novation",
+ 		.product_name = "ReMOTE25",
+ 		.ifnum = 0,
+@@ -2766,25 +2133,16 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* VirusTI Desktop */
+ 	USB_DEVICE_VENDOR_SPEC(0x133e, 0x0815),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = &(const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 3,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = &(const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(3) {
+ 					.out_cables = 0x0003,
+ 					.in_cables  = 0x0003
+ 				}
+ 			},
+-			{
+-				.ifnum = 4,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++			{ QUIRK_DATA_IGNORE(4) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -2812,7 +2170,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* QinHeng devices */
+ {
+ 	USB_DEVICE(0x1a86, 0x752d),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "QinHeng",
+ 		.product_name = "CH345",
+ 		.ifnum = 1,
+@@ -2826,7 +2184,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Miditech devices */
+ {
+ 	USB_DEVICE(0x4752, 0x0011),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Miditech",
+ 		.product_name = "Midistart-2",
+ 		.ifnum = 0,
+@@ -2838,7 +2196,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* this ID used by both Miditech MidiStudio-2 and CME UF-x */
+ 	USB_DEVICE(0x7104, 0x2202),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.ifnum = 0,
+ 		.type = QUIRK_MIDI_CME
+ 	}
+@@ -2848,20 +2206,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	/* Thanks to Clemens Ladisch <clemens@ladisch.de> */
+ 	USB_DEVICE(0x0dba, 0x1000),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Digidesign",
+ 		.product_name = "MBox",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]){
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(0) },
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3BE,
+ 					.channels = 2,
+ 					.iface = 1,
+@@ -2882,9 +2233,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3BE,
+ 					.channels = 2,
+ 					.iface = 1,
+@@ -2905,9 +2254,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -2915,24 +2262,14 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* DIGIDESIGN MBOX 2 */
+ {
+ 	USB_DEVICE(0x0dba, 0x3000),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Digidesign",
+ 		.product_name = "Mbox 2",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(2) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3BE,
+ 					.channels = 2,
+ 					.iface = 2,
+@@ -2950,15 +2287,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 				}
+ 			},
++			{ QUIRK_DATA_IGNORE(3) },
+ 			{
+-				.ifnum = 3,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 4,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
+-				.formats = SNDRV_PCM_FMTBIT_S24_3BE,
++				QUIRK_DATA_AUDIOFORMAT(4) {
++					.formats = SNDRV_PCM_FMTBIT_S24_3BE,
+ 					.channels = 2,
+ 					.iface = 4,
+ 					.altsetting = 2,
+@@ -2975,14 +2307,9 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 				}
+ 			},
++			{ QUIRK_DATA_IGNORE(5) },
+ 			{
+-				.ifnum = 5,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 6,
+-				.type = QUIRK_MIDI_MIDIMAN,
+-				.data = &(const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_MIDIMAN(6) {
+ 					.out_ep =  0x02,
+ 					.out_cables = 0x0001,
+ 					.in_ep = 0x81,
+@@ -2990,33 +2317,21 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.in_cables = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ /* DIGIDESIGN MBOX 3 */
+ {
+ 	USB_DEVICE(0x0dba, 0x5000),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Digidesign",
+ 		.product_name = "Mbox 3",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_IGNORE(1) },
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(2) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.fmt_bits = 24,
+ 					.channels = 4,
+@@ -3043,9 +2358,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 3,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(3) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.fmt_bits = 24,
+ 					.channels = 4,
+@@ -3069,36 +2382,25 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 4,
+-				.type = QUIRK_MIDI_FIXED_ENDPOINT,
+-				.data = &(const struct snd_usb_midi_endpoint_info) {
++				QUIRK_DATA_MIDI_FIXED_ENDPOINT(4) {
+ 					.out_cables = 0x0001,
+ 					.in_cables  = 0x0001
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ {
+ 	/* Tascam US122 MKII - playback-only support */
+ 	USB_DEVICE_VENDOR_SPEC(0x0644, 0x8021),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "TASCAM",
+ 		.product_name = "US122 MKII",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 2,
+ 					.iface = 1,
+@@ -3119,9 +2421,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3129,20 +2429,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ /* Denon DN-X1600 */
+ {
+ 	USB_AUDIO_DEVICE(0x154e, 0x500e),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Denon",
+ 		.product_name = "DN-X1600",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]){
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE,
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8,
+ 					.iface = 1,
+@@ -3163,9 +2456,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(2) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8,
+ 					.iface = 2,
+@@ -3185,13 +2476,8 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 				}
+ 			},
+-			{
+-				.ifnum = 4,
+-				.type = QUIRK_MIDI_STANDARD_INTERFACE,
+-			},
+-			{
+-				.ifnum = -1
+-			}
++			{ QUIRK_DATA_STANDARD_MIDI(4) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3200,17 +2486,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	USB_DEVICE(0x045e, 0x0283),
+ 	.bInterfaceClass = USB_CLASS_PER_INTERFACE,
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Microsoft",
+ 		.product_name = "XboxLive Headset/Xbox Communicator",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = &(const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+ 				/* playback */
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 					.channels = 1,
+ 					.iface = 0,
+@@ -3226,9 +2508,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 			},
+ 			{
+ 				/* capture */
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 					.channels = 1,
+ 					.iface = 1,
+@@ -3242,9 +2522,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_max = 16000
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3253,18 +2531,11 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ {
+ 	USB_DEVICE(0x200c, 0x100b),
+ 	.bInterfaceClass = USB_CLASS_PER_INTERFACE,
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = &(const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(0) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 4,
+ 					.iface = 1,
+@@ -3283,9 +2554,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3298,28 +2567,12 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * enabled in create_standard_audio_quirk().
+ 	 */
+ 	USB_DEVICE(0x1686, 0x00dd),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				/* Playback  */
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE,
+-			},
+-			{
+-				/* Capture */
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE,
+-			},
+-			{
+-				/* Midi */
+-				.ifnum = 3,
+-				.type = QUIRK_MIDI_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			},
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(1) }, /* Playback  */
++			{ QUIRK_DATA_STANDARD_AUDIO(2) }, /* Capture */
++			{ QUIRK_DATA_STANDARD_MIDI(3) }, /* Midi */
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3333,18 +2586,16 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 		       USB_DEVICE_ID_MATCH_INT_SUBCLASS,
+ 	.bInterfaceClass = USB_CLASS_AUDIO,
+ 	.bInterfaceSubClass = USB_SUBCLASS_MIDISTREAMING,
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_MIDI_STANDARD_INTERFACE
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_STANDARD_MIDI(QUIRK_ANY_INTERFACE)
+ 	}
+ },
+ 
+ /* Rane SL-1 */
+ {
+ 	USB_DEVICE(0x13e5, 0x0001),
+-	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_AUDIO_STANDARD_INTERFACE
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_STANDARD_AUDIO(QUIRK_ANY_INTERFACE)
+         }
+ },
+ 
+@@ -3360,24 +2611,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * and only the 48 kHz sample rate works for the playback interface.
+ 	 */
+ 	USB_DEVICE(0x0a12, 0x1243),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
+-			/* Capture */
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_IGNORE_INTERFACE,
+-			},
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(0) },
++			{ QUIRK_DATA_IGNORE(1) }, /* Capture */
+ 			/* Playback */
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(2) {
+ 					.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 					.channels = 2,
+ 					.iface = 2,
+@@ -3396,9 +2636,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			},
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3411,19 +2649,12 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * even on windows.
+ 	 */
+ 	USB_DEVICE(0x19b5, 0x0021),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(0) },
+ 			/* Playback */
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 					.channels = 2,
+ 					.iface = 1,
+@@ -3442,29 +2673,20 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			},
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+ /* MOTU Microbook II */
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x07fd, 0x0004),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "MOTU",
+ 		.product_name = "MicroBookII",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(0) },
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3BE,
+ 					.channels = 6,
+ 					.iface = 0,
+@@ -3485,9 +2707,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3BE,
+ 					.channels = 8,
+ 					.iface = 0,
+@@ -3508,9 +2728,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3522,14 +2740,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * The feedback for the output is the input.
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x2b73, 0x0023),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S32_LE,
+ 					.channels = 12,
+ 					.iface = 0,
+@@ -3546,9 +2760,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S32_LE,
+ 					.channels = 10,
+ 					.iface = 0,
+@@ -3566,9 +2778,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_table = (unsigned int[]) { 44100 }
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3611,14 +2821,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * but not for DVS (Digital Vinyl Systems) like in Mixxx.
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x2b73, 0x0017),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8, // outputs
+ 					.iface = 0,
+@@ -3635,9 +2841,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8, // inputs
+ 					.iface = 0,
+@@ -3655,9 +2859,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_table = (unsigned int[]) { 48000 }
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3668,14 +2870,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * The feedback for the output is the dummy input.
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x2b73, 0x000e),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 4,
+ 					.iface = 0,
+@@ -3692,9 +2890,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 2,
+ 					.iface = 0,
+@@ -3712,9 +2908,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_table = (unsigned int[]) { 44100 }
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3725,14 +2919,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * PCM is 6 channels out & 4 channels in @ 44.1 fixed
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x2b73, 0x000d),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 6, //Master, Headphones & Booth
+ 					.iface = 0,
+@@ -3749,9 +2939,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 4, //2x RCA inputs (CH1 & CH2)
+ 					.iface = 0,
+@@ -3769,9 +2957,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_table = (unsigned int[]) { 44100 }
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3783,14 +2969,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * The Feedback for the output is the input
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x2b73, 0x001e),
+-		.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 4,
+ 					.iface = 0,
+@@ -3807,9 +2989,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 6,
+ 					.iface = 0,
+@@ -3827,9 +3007,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_table = (unsigned int[]) { 44100 }
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3840,14 +3018,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * 10 channels playback & 12 channels capture @ 44.1/48/96kHz S24LE
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x2b73, 0x000a),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 10,
+ 					.iface = 0,
+@@ -3868,9 +3042,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 12,
+ 					.iface = 0,
+@@ -3892,9 +3064,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3906,14 +3076,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * The Feedback for the output is the input
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x2b73, 0x0029),
+-		.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 6,
+ 					.iface = 0,
+@@ -3930,9 +3096,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 6,
+ 					.iface = 0,
+@@ -3950,9 +3114,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_table = (unsigned int[]) { 44100 }
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -3970,20 +3132,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+  */
+ {
+ 	USB_AUDIO_DEVICE(0x534d, 0x0021),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "MacroSilicon",
+ 		.product_name = "MS210x",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = &(const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(2) },
+ 			{
+-				.ifnum = 3,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(3) {
+ 					.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 					.channels = 2,
+ 					.iface = 3,
+@@ -3998,9 +3153,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_max = 48000,
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -4018,20 +3171,13 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+  */
+ {
+ 	USB_AUDIO_DEVICE(0x534d, 0x2109),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "MacroSilicon",
+ 		.product_name = "MS2109",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = &(const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_MIXER,
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_MIXER(2) },
+ 			{
+-				.ifnum = 3,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(3) {
+ 					.formats = SNDRV_PCM_FMTBIT_S16_LE,
+ 					.channels = 2,
+ 					.iface = 3,
+@@ -4046,9 +3192,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_max = 48000,
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -4058,14 +3202,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * 8 channels playback & 8 channels capture @ 44.1/48/96kHz S24LE
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x08e4, 0x017f),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8,
+ 					.iface = 0,
+@@ -4084,9 +3224,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8,
+ 					.iface = 0,
+@@ -4106,9 +3244,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_table = (unsigned int[]) { 44100, 48000, 96000 }
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -4118,14 +3254,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * 10 channels playback & 12 channels capture @ 48kHz S24LE
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x2b73, 0x001b),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 10,
+ 					.iface = 0,
+@@ -4144,9 +3276,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 12,
+ 					.iface = 0,
+@@ -4164,9 +3294,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_table = (unsigned int[]) { 48000 }
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -4178,14 +3306,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * Capture on EP 0x86
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x08e4, 0x0163),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8,
+ 					.iface = 0,
+@@ -4205,9 +3329,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 				}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8,
+ 					.iface = 0,
+@@ -4227,9 +3349,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_table = (unsigned int[]) { 44100, 48000, 96000 }
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -4240,14 +3360,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * and 8 channels in @ 48 fixed (endpoint 0x82).
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0x2b73, 0x0013),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8, // outputs
+ 					.iface = 0,
+@@ -4264,9 +3380,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					}
+ 			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(0) {
+ 					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
+ 					.channels = 8, // inputs
+ 					.iface = 0,
+@@ -4284,9 +3398,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.rate_table = (unsigned int[]) { 48000 }
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -4297,28 +3409,15 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 */
+ 	USB_DEVICE(0x1395, 0x0300),
+ 	.bInterfaceClass = USB_CLASS_PER_INTERFACE,
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = &(const struct snd_usb_audio_quirk[]) {
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
+ 			// Communication
+-			{
+-				.ifnum = 3,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
++			{ QUIRK_DATA_STANDARD_AUDIO(3) },
+ 			// Recording
+-			{
+-				.ifnum = 4,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
++			{ QUIRK_DATA_STANDARD_AUDIO(4) },
+ 			// Main
+-			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
+-			{
+-				.ifnum = -1
+-			}
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -4327,21 +3426,14 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * Fiero SC-01 (firmware v1.0.0 @ 48 kHz)
+ 	 */
+ 	USB_DEVICE(0x2b53, 0x0023),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Fiero",
+ 		.product_name = "SC-01",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = &(const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(0) },
+ 			/* Playback */
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S32_LE,
+ 					.channels = 2,
+ 					.fmt_bits = 24,
+@@ -4361,9 +3453,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 			},
+ 			/* Capture */
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(2) {
+ 					.formats = SNDRV_PCM_FMTBIT_S32_LE,
+ 					.channels = 2,
+ 					.fmt_bits = 24,
+@@ -4382,9 +3472,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.clock = 0x29
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -4393,21 +3481,14 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * Fiero SC-01 (firmware v1.0.0 @ 96 kHz)
+ 	 */
+ 	USB_DEVICE(0x2b53, 0x0024),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Fiero",
+ 		.product_name = "SC-01",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = &(const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(0) },
+ 			/* Playback */
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S32_LE,
+ 					.channels = 2,
+ 					.fmt_bits = 24,
+@@ -4427,9 +3508,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 			},
+ 			/* Capture */
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(2) {
+ 					.formats = SNDRV_PCM_FMTBIT_S32_LE,
+ 					.channels = 2,
+ 					.fmt_bits = 24,
+@@ -4448,9 +3527,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.clock = 0x29
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -4459,21 +3536,14 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * Fiero SC-01 (firmware v1.1.0)
+ 	 */
+ 	USB_DEVICE(0x2b53, 0x0031),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Fiero",
+ 		.product_name = "SC-01",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = &(const struct snd_usb_audio_quirk[]) {
+-			{
+-				.ifnum = 0,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE
+-			},
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_STANDARD_AUDIO(0) },
+ 			/* Playback */
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(1) {
+ 					.formats = SNDRV_PCM_FMTBIT_S32_LE,
+ 					.channels = 2,
+ 					.fmt_bits = 24,
+@@ -4494,9 +3564,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 			},
+ 			/* Capture */
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+-				.data = &(const struct audioformat) {
++				QUIRK_DATA_AUDIOFORMAT(2) {
+ 					.formats = SNDRV_PCM_FMTBIT_S32_LE,
+ 					.channels = 2,
+ 					.fmt_bits = 24,
+@@ -4516,9 +3584,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 					.clock = 0x29
+ 				}
+ 			},
+-			{
+-				.ifnum = -1
+-			}
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
+@@ -4527,30 +3593,187 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	 * For the standard mode, Mythware XA001AU has ID ffad:a001
+ 	 */
+ 	USB_DEVICE_VENDOR_SPEC(0xffad, 0xa001),
+-	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++	QUIRK_DRIVER_INFO {
+ 		.vendor_name = "Mythware",
+ 		.product_name = "XA001AU",
+-		.ifnum = QUIRK_ANY_INTERFACE,
+-		.type = QUIRK_COMPOSITE,
+-		.data = (const struct snd_usb_audio_quirk[]) {
++		QUIRK_DATA_COMPOSITE {
++			{ QUIRK_DATA_IGNORE(0) },
++			{ QUIRK_DATA_STANDARD_AUDIO(1) },
++			{ QUIRK_DATA_STANDARD_AUDIO(2) },
++			QUIRK_COMPOSITE_END
++		}
++	}
++},
++{
++	/* Only claim interface 0 */
++	.match_flags = USB_DEVICE_ID_MATCH_VENDOR |
++		       USB_DEVICE_ID_MATCH_PRODUCT |
++		       USB_DEVICE_ID_MATCH_INT_CLASS |
++		       USB_DEVICE_ID_MATCH_INT_NUMBER,
++	.idVendor = 0x2a39,
++	.idProduct = 0x3f8c,
++	.bInterfaceClass = USB_CLASS_VENDOR_SPEC,
++	.bInterfaceNumber = 0,
++	QUIRK_DRIVER_INFO {
++		QUIRK_DATA_COMPOSITE {
++			/*
++			 * Three modes depending on sample rate band,
++			 * with different channel counts for in/out
++			 */
++			{ QUIRK_DATA_STANDARD_MIXER(0) },
++			{
++				QUIRK_DATA_AUDIOFORMAT(0) {
++					.formats = SNDRV_PCM_FMTBIT_S32_LE,
++					.channels = 34, // outputs
++					.fmt_bits = 24,
++					.iface = 0,
++					.altsetting = 1,
++					.altset_idx = 1,
++					.endpoint = 0x02,
++					.ep_idx = 1,
++					.ep_attr = USB_ENDPOINT_XFER_ISOC |
++						USB_ENDPOINT_SYNC_ASYNC,
++					.rates = SNDRV_PCM_RATE_32000 |
++						SNDRV_PCM_RATE_44100 |
++						SNDRV_PCM_RATE_48000,
++					.rate_min = 32000,
++					.rate_max = 48000,
++					.nr_rates = 3,
++					.rate_table = (unsigned int[]) {
++						32000, 44100, 48000,
++					},
++					.sync_ep = 0x81,
++					.sync_iface = 0,
++					.sync_altsetting = 1,
++					.sync_ep_idx = 0,
++					.implicit_fb = 1,
++				},
++			},
++			{
++				QUIRK_DATA_AUDIOFORMAT(0) {
++					.formats = SNDRV_PCM_FMTBIT_S32_LE,
++					.channels = 18, // outputs
++					.fmt_bits = 24,
++					.iface = 0,
++					.altsetting = 1,
++					.altset_idx = 1,
++					.endpoint = 0x02,
++					.ep_idx = 1,
++					.ep_attr = USB_ENDPOINT_XFER_ISOC |
++						USB_ENDPOINT_SYNC_ASYNC,
++					.rates = SNDRV_PCM_RATE_64000 |
++						SNDRV_PCM_RATE_88200 |
++						SNDRV_PCM_RATE_96000,
++					.rate_min = 64000,
++					.rate_max = 96000,
++					.nr_rates = 3,
++					.rate_table = (unsigned int[]) {
++						64000, 88200, 96000,
++					},
++					.sync_ep = 0x81,
++					.sync_iface = 0,
++					.sync_altsetting = 1,
++					.sync_ep_idx = 0,
++					.implicit_fb = 1,
++				},
++			},
+ 			{
+-				.ifnum = 0,
+-				.type = QUIRK_IGNORE_INTERFACE,
++				QUIRK_DATA_AUDIOFORMAT(0) {
++					.formats = SNDRV_PCM_FMTBIT_S32_LE,
++					.channels = 10, // outputs
++					.fmt_bits = 24,
++					.iface = 0,
++					.altsetting = 1,
++					.altset_idx = 1,
++					.endpoint = 0x02,
++					.ep_idx = 1,
++					.ep_attr = USB_ENDPOINT_XFER_ISOC |
++						USB_ENDPOINT_SYNC_ASYNC,
++					.rates = SNDRV_PCM_RATE_KNOT |
++						SNDRV_PCM_RATE_176400 |
++						SNDRV_PCM_RATE_192000,
++					.rate_min = 128000,
++					.rate_max = 192000,
++					.nr_rates = 3,
++					.rate_table = (unsigned int[]) {
++						128000, 176400, 192000,
++					},
++					.sync_ep = 0x81,
++					.sync_iface = 0,
++					.sync_altsetting = 1,
++					.sync_ep_idx = 0,
++					.implicit_fb = 1,
++				},
+ 			},
+ 			{
+-				.ifnum = 1,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE,
++				QUIRK_DATA_AUDIOFORMAT(0) {
++					.formats = SNDRV_PCM_FMTBIT_S32_LE,
++					.channels = 32, // inputs
++					.fmt_bits = 24,
++					.iface = 0,
++					.altsetting = 1,
++					.altset_idx = 1,
++					.endpoint = 0x81,
++					.ep_attr = USB_ENDPOINT_XFER_ISOC |
++						USB_ENDPOINT_SYNC_ASYNC,
++					.rates = SNDRV_PCM_RATE_32000 |
++						SNDRV_PCM_RATE_44100 |
++						SNDRV_PCM_RATE_48000,
++					.rate_min = 32000,
++					.rate_max = 48000,
++					.nr_rates = 3,
++					.rate_table = (unsigned int[]) {
++						32000, 44100, 48000,
++					}
++				}
+ 			},
+ 			{
+-				.ifnum = 2,
+-				.type = QUIRK_AUDIO_STANDARD_INTERFACE,
++				QUIRK_DATA_AUDIOFORMAT(0) {
++					.formats = SNDRV_PCM_FMTBIT_S32_LE,
++					.channels = 16, // inputs
++					.fmt_bits = 24,
++					.iface = 0,
++					.altsetting = 1,
++					.altset_idx = 1,
++					.endpoint = 0x81,
++					.ep_attr = USB_ENDPOINT_XFER_ISOC |
++						USB_ENDPOINT_SYNC_ASYNC,
++					.rates = SNDRV_PCM_RATE_64000 |
++						SNDRV_PCM_RATE_88200 |
++						SNDRV_PCM_RATE_96000,
++					.rate_min = 64000,
++					.rate_max = 96000,
++					.nr_rates = 3,
++					.rate_table = (unsigned int[]) {
++						64000, 88200, 96000,
++					}
++				}
+ 			},
+ 			{
+-				.ifnum = -1
+-			}
++				QUIRK_DATA_AUDIOFORMAT(0) {
++					.formats = SNDRV_PCM_FMTBIT_S32_LE,
++					.channels = 8, // inputs
++					.fmt_bits = 24,
++					.iface = 0,
++					.altsetting = 1,
++					.altset_idx = 1,
++					.endpoint = 0x81,
++					.ep_attr = USB_ENDPOINT_XFER_ISOC |
++						USB_ENDPOINT_SYNC_ASYNC,
++					.rates = SNDRV_PCM_RATE_KNOT |
++						SNDRV_PCM_RATE_176400 |
++						SNDRV_PCM_RATE_192000,
++					.rate_min = 128000,
++					.rate_max = 192000,
++					.nr_rates = 3,
++					.rate_table = (unsigned int[]) {
++						128000, 176400, 192000,
++					}
++				}
++			},
++			QUIRK_COMPOSITE_END
+ 		}
+ 	}
+ },
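
Note: the new RME Digiface USB entry above declares three playback and three capture fixed-endpoint formats on the same vendor-specific interface 0, one per sample-rate band. The declaration below only restates the channel counts and rate ranges already present in the entry; it is an illustrative summary, not code from the patch.

	/* Channel layout per rate band, as declared in the entry above. */
	static const struct {
		const char *band;
		unsigned int rate_min, rate_max;
		unsigned int playback_ch, capture_ch;
	} rme_digiface_bands[] = {
		{ "1x",  32000,  48000, 34, 32 },
		{ "2x",  64000,  96000, 18, 16 },
		{ "4x", 128000, 192000, 10,  8 },
	};
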
+-
+ #undef USB_DEVICE_VENDOR_SPEC
+ #undef USB_AUDIO_DEVICE
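
Note: most of the quirks-table.h hunk is a mechanical conversion in which each open-coded .ifnum/.type/.data triple is replaced by one of the QUIRK_DATA_* helpers, and the { .ifnum = -1 } array terminators become QUIRK_COMPOSITE_END. The macros themselves live in sound/usb/quirks.h, which this patch does not touch; the sketch below shows plausible expansions inferred only from the removed/added line pairs above, so the exact definitions should be treated as assumptions.

	/* Plausible expansions, inferred from the diff; the authoritative
	 * definitions are in sound/usb/quirks.h (not shown in this patch). */
	#define QUIRK_DRIVER_INFO \
		.driver_info = (unsigned long)&(const struct snd_usb_audio_quirk)

	#define QUIRK_DATA_COMPOSITE \
		.ifnum = QUIRK_ANY_INTERFACE, .type = QUIRK_COMPOSITE, \
		.data = &(const struct snd_usb_audio_quirk[])

	#define QUIRK_DATA_STANDARD_AUDIO(_ifno) \
		.ifnum = (_ifno), .type = QUIRK_AUDIO_STANDARD_INTERFACE
	#define QUIRK_DATA_STANDARD_MIDI(_ifno) \
		.ifnum = (_ifno), .type = QUIRK_MIDI_STANDARD_INTERFACE
	#define QUIRK_DATA_STANDARD_MIXER(_ifno) \
		.ifnum = (_ifno), .type = QUIRK_AUDIO_STANDARD_MIXER
	#define QUIRK_DATA_RAW_BYTES(_ifno) \
		.ifnum = (_ifno), .type = QUIRK_MIDI_RAW_BYTES
	#define QUIRK_DATA_IGNORE(_ifno) \
		.ifnum = (_ifno), .type = QUIRK_IGNORE_INTERFACE

	#define QUIRK_DATA_AUDIOFORMAT(_ifno) \
		.ifnum = (_ifno), .type = QUIRK_AUDIO_FIXED_ENDPOINT, \
		.data = &(const struct audioformat)
	#define QUIRK_DATA_MIDI_FIXED_ENDPOINT(_ifno) \
		.ifnum = (_ifno), .type = QUIRK_MIDI_FIXED_ENDPOINT, \
		.data = &(const struct snd_usb_midi_endpoint_info)

	#define QUIRK_COMPOSITE_END	{ .ifnum = -1 }

With these helpers a composite entry reads as a flat list of { QUIRK_DATA_xxx(ifnum) } items terminated by QUIRK_COMPOSITE_END, which is exactly the shape of the converted entries above.
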
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index e7b68c67852e92..f4c68eb7e07a12 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1389,6 +1389,27 @@ static int snd_usb_motu_m_series_boot_quirk(struct usb_device *dev)
+ 	return 0;
+ }
+ 
++static int snd_usb_rme_digiface_boot_quirk(struct usb_device *dev)
++{
++	/* Disable mixer, internal clock, all outputs ADAT, 48kHz, TMS off */
++	snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0),
++			16, 0x40, 0x2410, 0x7fff, NULL, 0);
++	snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0),
++			18, 0x40, 0x0104, 0xffff, NULL, 0);
++
++	/* Disable loopback for all inputs */
++	for (int ch = 0; ch < 32; ch++)
++		snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0),
++				22, 0x40, 0x400, ch, NULL, 0);
++
++	/* Unity gain for all outputs */
++	for (int ch = 0; ch < 34; ch++)
++		snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0),
++				21, 0x40, 0x9000, 0x100 + ch, NULL, 0);
++
++	return 0;
++}
++
+ /*
+  * Setup quirks
+  */
+@@ -1616,6 +1637,8 @@ int snd_usb_apply_boot_quirk(struct usb_device *dev,
+ 		    get_iface_desc(intf->altsetting)->bInterfaceNumber < 3)
+ 			return snd_usb_motu_microbookii_boot_quirk(dev);
+ 		break;
++	case USB_ID(0x2a39, 0x3f8c): /* RME Digiface USB */
++		return snd_usb_rme_digiface_boot_quirk(dev);
+ 	}
+ 
+ 	return 0;
+@@ -1771,6 +1794,38 @@ static void mbox3_set_format_quirk(struct snd_usb_substream *subs,
+ 		dev_warn(&subs->dev->dev, "MBOX3: Couldn't set the sample rate");
+ }
+ 
++static const int rme_digiface_rate_table[] = {
++	32000, 44100, 48000, 0,
++	64000, 88200, 96000, 0,
++	128000, 176400, 192000, 0,
++};
++
++static int rme_digiface_set_format_quirk(struct snd_usb_substream *subs)
++{
++	unsigned int cur_rate = subs->data_endpoint->cur_rate;
++	u16 val;
++	int speed_mode;
++	int id;
++
++	for (id = 0; id < ARRAY_SIZE(rme_digiface_rate_table); id++) {
++		if (rme_digiface_rate_table[id] == cur_rate)
++			break;
++	}
++
++	if (id >= ARRAY_SIZE(rme_digiface_rate_table))
++		return -EINVAL;
++
++	/* 2, 3, 4 for 1x, 2x, 4x */
++	speed_mode = (id >> 2) + 2;
++	val = (id << 3) | (speed_mode << 12);
++
++	/* Set the sample rate */
++	snd_usb_ctl_msg(subs->stream->chip->dev,
++		usb_sndctrlpipe(subs->stream->chip->dev, 0),
++		16, 0x40, val, 0x7078, NULL, 0);
++	return 0;
++}
++
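
Note: rme_digiface_set_format_quirk() above packs the selected sample rate into a vendor register value. The rate table is padded to four entries per band, so the table index encodes the rate while id >> 2 selects the speed band, which is mapped to speed modes 2, 3 and 4 for 1x, 2x and 4x. The standalone sketch below reproduces only that arithmetic; the function names and printf harness are illustrative, while the table, the bit packing and the target register 0x7078 come straight from the hunk.

	#include <stdio.h>

	static const int rme_rate_table[] = {
		32000, 44100, 48000, 0,
		64000, 88200, 96000, 0,
		128000, 176400, 192000, 0,
	};

	static int rme_rate_to_val(int rate, unsigned int *val)
	{
		unsigned int id;

		for (id = 0; id < sizeof(rme_rate_table) / sizeof(rme_rate_table[0]); id++)
			if (rme_rate_table[id] == rate)
				break;
		if (id >= sizeof(rme_rate_table) / sizeof(rme_rate_table[0]))
			return -1;

		/* rows of four: 1x, 2x, 4x bands map to speed modes 2, 3, 4 */
		*val = (id << 3) | (((id >> 2) + 2) << 12);
		return 0;
	}

	int main(void)
	{
		unsigned int val;

		if (!rme_rate_to_val(96000, &val))
			printf("96000 Hz -> 0x%04x\n", val);	/* prints 0x3030, written to register 0x7078 */
		return 0;
	}
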
+ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ 			      const struct audioformat *fmt)
+ {
+@@ -1795,6 +1850,9 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ 	case USB_ID(0x0dba, 0x5000):
+ 		mbox3_set_format_quirk(subs, fmt); /* Digidesign Mbox 3 */
+ 		break;
++	case USB_ID(0x2a39, 0x3f8c): /* RME Digiface USB */
++		rme_digiface_set_format_quirk(subs);
++		break;
+ 	}
+ }
+ 
+@@ -2163,6 +2221,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_DISABLE_AUTOSUSPEND),
+ 	DEVICE_FLG(0x17aa, 0x104d, /* Lenovo ThinkStation P620 Internal Speaker + Front Headset */
+ 		   QUIRK_FLAG_DISABLE_AUTOSUSPEND),
++	DEVICE_FLG(0x1852, 0x5062, /* Luxman D-08u */
++		   QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY),
+ 	DEVICE_FLG(0x1852, 0x5065, /* Luxman DA-06 */
+ 		   QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY),
+ 	DEVICE_FLG(0x1901, 0x0191, /* GE B850V3 CP2114 audio interface */
+@@ -2221,6 +2281,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ 	DEVICE_FLG(0x2b53, 0x0031, /* Fiero SC-01 (firmware v1.1.0) */
+ 		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
++	DEVICE_FLG(0x2d95, 0x8011, /* VIVO USB-C HEADSET */
++		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x2d95, 0x8021, /* VIVO USB-C-XE710 HEADSET */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+diff --git a/tools/arch/x86/kcpuid/kcpuid.c b/tools/arch/x86/kcpuid/kcpuid.c
+index 24b7d017ec2c11..b7965dfff33a9a 100644
+--- a/tools/arch/x86/kcpuid/kcpuid.c
++++ b/tools/arch/x86/kcpuid/kcpuid.c
+@@ -7,7 +7,8 @@
+ #include <string.h>
+ #include <getopt.h>
+ 
+-#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
++#define ARRAY_SIZE(x)	(sizeof(x) / sizeof((x)[0]))
++#define min(a, b)	(((a) < (b)) ? (a) : (b))
+ 
+ typedef unsigned int u32;
+ typedef unsigned long long u64;
+@@ -207,12 +208,9 @@ static void raw_dump_range(struct cpuid_range *range)
+ #define MAX_SUBLEAF_NUM		32
+ struct cpuid_range *setup_cpuid_range(u32 input_eax)
+ {
+-	u32 max_func, idx_func;
+-	int subleaf;
++	u32 max_func, idx_func, subleaf, max_subleaf;
++	u32 eax, ebx, ecx, edx, f = input_eax;
+ 	struct cpuid_range *range;
+-	u32 eax, ebx, ecx, edx;
+-	u32 f = input_eax;
+-	int max_subleaf;
+ 	bool allzero;
+ 
+ 	eax = input_eax;
+@@ -258,7 +256,7 @@ struct cpuid_range *setup_cpuid_range(u32 input_eax)
+ 		 * others have to be tried (0xf)
+ 		 */
+ 		if (f == 0x7 || f == 0x14 || f == 0x17 || f == 0x18)
+-			max_subleaf = (eax & 0xff) + 1;
++			max_subleaf = min((eax & 0xff) + 1, max_subleaf);
+ 
+ 		if (f == 0xb)
+ 			max_subleaf = 2;
+diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
+index 968714b4c3d45b..0f2106218e1f0d 100644
+--- a/tools/bpf/bpftool/net.c
++++ b/tools/bpf/bpftool/net.c
+@@ -482,9 +482,9 @@ static void __show_dev_tc_bpf(const struct ip_devname_ifindex *dev,
+ 		if (prog_flags[i] || json_output) {
+ 			NET_START_ARRAY("prog_flags", "%s ");
+ 			for (j = 0; prog_flags[i] && j < 32; j++) {
+-				if (!(prog_flags[i] & (1 << j)))
++				if (!(prog_flags[i] & (1U << j)))
+ 					continue;
+-				NET_DUMP_UINT_ONLY(1 << j);
++				NET_DUMP_UINT_ONLY(1U << j);
+ 			}
+ 			NET_END_ARRAY("");
+ 		}
+@@ -493,9 +493,9 @@ static void __show_dev_tc_bpf(const struct ip_devname_ifindex *dev,
+ 			if (link_flags[i] || json_output) {
+ 				NET_START_ARRAY("link_flags", "%s ");
+ 				for (j = 0; link_flags[i] && j < 32; j++) {
+-					if (!(link_flags[i] & (1 << j)))
++					if (!(link_flags[i] & (1U << j)))
+ 						continue;
+-					NET_DUMP_UINT_ONLY(1 << j);
++					NET_DUMP_UINT_ONLY(1U << j);
+ 				}
+ 				NET_END_ARRAY("");
+ 			}
+@@ -824,6 +824,9 @@ static void show_link_netfilter(void)
+ 		nf_link_count++;
+ 	}
+ 
++	if (!nf_link_info)
++		return;
++
+ 	qsort(nf_link_info, nf_link_count, sizeof(*nf_link_info), netfilter_link_compar);
+ 
+ 	for (id = 0; id < nf_link_count; id++) {
+diff --git a/tools/hv/hv_fcopy_uio_daemon.c b/tools/hv/hv_fcopy_uio_daemon.c
+index 3ce316cc9f970b..7a00f3066a9807 100644
+--- a/tools/hv/hv_fcopy_uio_daemon.c
++++ b/tools/hv/hv_fcopy_uio_daemon.c
+@@ -296,6 +296,13 @@ static int hv_fcopy_start(struct hv_start_fcopy *smsg_in)
+ 	file_name = (char *)malloc(file_size * sizeof(char));
+ 	path_name = (char *)malloc(path_size * sizeof(char));
+ 
++	if (!file_name || !path_name) {
++		free(file_name);
++		free(path_name);
++		syslog(LOG_ERR, "Can't allocate memory for file name and/or path name");
++		return HV_E_FAIL;
++	}
++
+ 	wcstoutf8(file_name, (__u16 *)in_file_name, file_size);
+ 	wcstoutf8(path_name, (__u16 *)in_path_name, path_size);
+ 
+diff --git a/tools/include/nolibc/arch-powerpc.h b/tools/include/nolibc/arch-powerpc.h
+index ac212e6185b26d..41ebd394b90c7a 100644
+--- a/tools/include/nolibc/arch-powerpc.h
++++ b/tools/include/nolibc/arch-powerpc.h
+@@ -172,7 +172,7 @@
+ 	_ret;                                                                \
+ })
+ 
+-#ifndef __powerpc64__
++#if !defined(__powerpc64__) && !defined(__clang__)
+ /* FIXME: For 32-bit PowerPC, with newer gcc compilers (e.g. gcc 13.1.0),
+  * "omit-frame-pointer" fails with __attribute__((no_stack_protector)) but
+  * works with __attribute__((__optimize__("-fno-stack-protector")))
+diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
+index 2e9e193179dd54..45aba6287b380c 100644
+--- a/tools/perf/util/hist.c
++++ b/tools/perf/util/hist.c
+@@ -636,7 +636,12 @@ static struct hist_entry *hists__findnew_entry(struct hists *hists,
+ 			 * mis-adjust symbol addresses when computing
+ 			 * the history counter to increment.
+ 			 */
+-			if (he->ms.map != entry->ms.map) {
++			if (hists__has(hists, sym) && he->ms.map != entry->ms.map) {
++				if (he->ms.sym) {
++					u64 addr = he->ms.sym->start;
++					he->ms.sym = map__find_symbol(entry->ms.map, addr);
++				}
++
+ 				map__put(he->ms.map);
+ 				he->ms.map = map__get(entry->ms.map);
+ 			}
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 8477edefc29978..706be5e4a07617 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2270,8 +2270,12 @@ static void save_lbr_cursor_node(struct thread *thread,
+ 		cursor->curr = cursor->first;
+ 	else
+ 		cursor->curr = cursor->curr->next;
++
++	map_symbol__exit(&lbr_stitch->prev_lbr_cursor[idx].ms);
+ 	memcpy(&lbr_stitch->prev_lbr_cursor[idx], cursor->curr,
+ 	       sizeof(struct callchain_cursor_node));
++	lbr_stitch->prev_lbr_cursor[idx].ms.maps = maps__get(cursor->curr->ms.maps);
++	lbr_stitch->prev_lbr_cursor[idx].ms.map = map__get(cursor->curr->ms.map);
+ 
+ 	lbr_stitch->prev_lbr_cursor[idx].valid = true;
+ 	cursor->pos++;
+@@ -2482,6 +2486,9 @@ static bool has_stitched_lbr(struct thread *thread,
+ 		memcpy(&stitch_node->cursor, &lbr_stitch->prev_lbr_cursor[i],
+ 		       sizeof(struct callchain_cursor_node));
+ 
++		stitch_node->cursor.ms.maps = maps__get(lbr_stitch->prev_lbr_cursor[i].ms.maps);
++		stitch_node->cursor.ms.map = map__get(lbr_stitch->prev_lbr_cursor[i].ms.map);
++
+ 		if (callee)
+ 			list_add(&stitch_node->node, &lbr_stitch->lists);
+ 		else
+@@ -2505,6 +2512,8 @@ static bool alloc_lbr_stitch(struct thread *thread, unsigned int max_lbr)
+ 	if (!thread__lbr_stitch(thread)->prev_lbr_cursor)
+ 		goto free_lbr_stitch;
+ 
++	thread__lbr_stitch(thread)->prev_lbr_cursor_size = max_lbr + 1;
++
+ 	INIT_LIST_HEAD(&thread__lbr_stitch(thread)->lists);
+ 	INIT_LIST_HEAD(&thread__lbr_stitch(thread)->free_lists);
+ 
+@@ -2560,8 +2569,12 @@ static int resolve_lbr_callchain_sample(struct thread *thread,
+ 						max_lbr, callee);
+ 
+ 		if (!stitched_lbr && !list_empty(&lbr_stitch->lists)) {
+-			list_replace_init(&lbr_stitch->lists,
+-					  &lbr_stitch->free_lists);
++			struct stitch_list *stitch_node;
++
++			list_for_each_entry(stitch_node, &lbr_stitch->lists, node)
++				map_symbol__exit(&stitch_node->cursor.ms);
++
++			list_splice_init(&lbr_stitch->lists, &lbr_stitch->free_lists);
+ 		}
+ 		memcpy(&lbr_stitch->prev_sample, sample, sizeof(*sample));
+ 	}
+diff --git a/tools/perf/util/setup.py b/tools/perf/util/setup.py
+index 3107f5aa8c9a0c..226c3167f75822 100644
+--- a/tools/perf/util/setup.py
++++ b/tools/perf/util/setup.py
+@@ -17,7 +17,7 @@ src_feature_tests  = getenv('srctree') + '/tools/build/feature'
+ 
+ def clang_has_option(option):
+     cc_output = Popen([cc, cc_options + option, path.join(src_feature_tests, "test-hello.c") ], stderr=PIPE).stderr.readlines()
+-    return [o for o in cc_output if ((b"unknown argument" in o) or (b"is not supported" in o))] == [ ]
++    return [o for o in cc_output if ((b"unknown argument" in o) or (b"is not supported" in o) or (b"unknown warning option" in o))] == [ ]
+ 
+ if cc_is_clang:
+     from sysconfig import get_config_vars
+@@ -63,6 +63,8 @@ cflags = getenv('CFLAGS', '').split()
+ cflags += ['-fno-strict-aliasing', '-Wno-write-strings', '-Wno-unused-parameter', '-Wno-redundant-decls', '-DPYTHON_PERF' ]
+ if cc_is_clang:
+     cflags += ["-Wno-unused-command-line-argument" ]
++    if clang_has_option("-Wno-cast-function-type-mismatch"):
++        cflags += ["-Wno-cast-function-type-mismatch" ]
+ else:
+     cflags += ['-Wno-cast-function-type' ]
+ 
+diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
+index 87c59aa9fe38bf..0ffdd52d86d707 100644
+--- a/tools/perf/util/thread.c
++++ b/tools/perf/util/thread.c
+@@ -476,6 +476,7 @@ void thread__free_stitch_list(struct thread *thread)
+ 		return;
+ 
+ 	list_for_each_entry_safe(pos, tmp, &lbr_stitch->lists, node) {
++		map_symbol__exit(&pos->cursor.ms);
+ 		list_del_init(&pos->node);
+ 		free(pos);
+ 	}
+@@ -485,6 +486,9 @@ void thread__free_stitch_list(struct thread *thread)
+ 		free(pos);
+ 	}
+ 
++	for (unsigned int i = 0 ; i < lbr_stitch->prev_lbr_cursor_size; i++)
++		map_symbol__exit(&lbr_stitch->prev_lbr_cursor[i].ms);
++
+ 	zfree(&lbr_stitch->prev_lbr_cursor);
+ 	free(thread__lbr_stitch(thread));
+ 	thread__set_lbr_stitch(thread, NULL);
+diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h
+index 8b4a3c69bad19c..6cbf6eb2812e05 100644
+--- a/tools/perf/util/thread.h
++++ b/tools/perf/util/thread.h
+@@ -26,6 +26,7 @@ struct lbr_stitch {
+ 	struct list_head		free_lists;
+ 	struct perf_sample		prev_sample;
+ 	struct callchain_cursor_node	*prev_lbr_cursor;
++	unsigned int prev_lbr_cursor_size;
+ };
+ 
+ DECLARE_RC_STRUCT(thread) {
+diff --git a/tools/testing/selftests/breakpoints/step_after_suspend_test.c b/tools/testing/selftests/breakpoints/step_after_suspend_test.c
+index b8703c499d28cd..0e6374aac96f54 100644
+--- a/tools/testing/selftests/breakpoints/step_after_suspend_test.c
++++ b/tools/testing/selftests/breakpoints/step_after_suspend_test.c
+@@ -153,7 +153,10 @@ void suspend(void)
+ 	if (err < 0)
+ 		ksft_exit_fail_msg("timerfd_settime() failed\n");
+ 
+-	if (write(power_state_fd, "mem", strlen("mem")) != strlen("mem"))
++	system("(echo mem > /sys/power/state) 2> /dev/null");
++
++	timerfd_gettime(timerfd, &spec);
++	if (spec.it_value.tv_sec != 0 || spec.it_value.tv_nsec != 0)
+ 		ksft_exit_fail_msg("Failed to enter Suspend state\n");
+ 
+ 	close(timerfd);
+diff --git a/tools/testing/selftests/devices/test_discoverable_devices.py b/tools/testing/selftests/devices/test_discoverable_devices.py
+index fbae8deb593d5f..37a58e94e7c3aa 100755
+--- a/tools/testing/selftests/devices/test_discoverable_devices.py
++++ b/tools/testing/selftests/devices/test_discoverable_devices.py
+@@ -39,7 +39,7 @@ def find_pci_controller_dirs():
+ 
+ 
+ def find_usb_controller_dirs():
+-    usb_controller_sysfs_dir = "usb[\d]+"
++    usb_controller_sysfs_dir = r"usb[\d]+"
+ 
+     dir_regex = re.compile(usb_controller_sysfs_dir)
+     for d in os.scandir(sysfs_usb_devices):
+@@ -69,7 +69,7 @@ def get_acpi_uid(sysfs_dev_dir):
+ 
+ 
+ def get_usb_version(sysfs_dev_dir):
+-    re_usb_version = re.compile("PRODUCT=.*/(\d)/.*")
++    re_usb_version = re.compile(r"PRODUCT=.*/(\d)/.*")
+     with open(os.path.join(sysfs_dev_dir, "uevent")) as f:
+         return int(re_usb_version.search(f.read()).group(1))
+ 
+diff --git a/tools/testing/selftests/hid/Makefile b/tools/testing/selftests/hid/Makefile
+index 2b5ea18bde38bd..346328e2295c30 100644
+--- a/tools/testing/selftests/hid/Makefile
++++ b/tools/testing/selftests/hid/Makefile
+@@ -17,6 +17,8 @@ TEST_PROGS += hid-tablet.sh
+ TEST_PROGS += hid-usb_crash.sh
+ TEST_PROGS += hid-wacom.sh
+ 
++TEST_FILES := run-hid-tools-tests.sh
++
+ CXX ?= $(CROSS_COMPILE)g++
+ 
+ HOSTPKG_CONFIG := pkg-config
+diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+index d680c00d2853ac..67df7b47087f03 100755
+--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
++++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+@@ -254,7 +254,7 @@ function cleanup_hugetlb_memory() {
+   local cgroup="$1"
+   if [[ "$(pgrep -f write_to_hugetlbfs)" != "" ]]; then
+     echo killing write_to_hugetlbfs
+-    killall -2 write_to_hugetlbfs
++    killall -2 --wait write_to_hugetlbfs
+     wait_for_hugetlb_memory_to_get_depleted $cgroup
+   fi
+   set -e
+diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
+index 09faffbc3d87c9..43e6d0c53fe4c5 100644
+--- a/tools/testing/selftests/mm/mseal_test.c
++++ b/tools/testing/selftests/mm/mseal_test.c
+@@ -146,6 +146,16 @@ static int sys_madvise(void *start, size_t len, int types)
+ 	return sret;
+ }
+ 
++static void *sys_mremap(void *addr, size_t old_len, size_t new_len,
++	unsigned long flags, void *new_addr)
++{
++	void *sret;
++
++	errno = 0;
++	sret = (void *) syscall(__NR_mremap, addr, old_len, new_len, flags, new_addr);
++	return sret;
++}
++
+ static int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
+ {
+ 	int ret = syscall(__NR_pkey_alloc, flags, init_val);
+@@ -1151,12 +1161,12 @@ static void test_seal_mremap_shrink(bool seal)
+ 	}
+ 
+ 	/* shrink from 4 pages to 2 pages. */
+-	ret2 = mremap(ptr, size, 2 * page_size, 0, 0);
++	ret2 = sys_mremap(ptr, size, 2 * page_size, 0, 0);
+ 	if (seal) {
+-		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
++		FAIL_TEST_IF_FALSE(ret2 == (void *) MAP_FAILED);
+ 		FAIL_TEST_IF_FALSE(errno == EPERM);
+ 	} else {
+-		FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
++		FAIL_TEST_IF_FALSE(ret2 != (void *) MAP_FAILED);
+ 
+ 	}
+ 
+@@ -1183,7 +1193,7 @@ static void test_seal_mremap_expand(bool seal)
+ 	}
+ 
+ 	/* expand from 2 page to 4 pages. */
+-	ret2 = mremap(ptr, 2 * page_size, 4 * page_size, 0, 0);
++	ret2 = sys_mremap(ptr, 2 * page_size, 4 * page_size, 0, 0);
+ 	if (seal) {
+ 		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ 		FAIL_TEST_IF_FALSE(errno == EPERM);
+@@ -1216,7 +1226,7 @@ static void test_seal_mremap_move(bool seal)
+ 	}
+ 
+ 	/* move from ptr to fixed address. */
+-	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newPtr);
++	ret2 = sys_mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newPtr);
+ 	if (seal) {
+ 		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ 		FAIL_TEST_IF_FALSE(errno == EPERM);
+@@ -1335,7 +1345,7 @@ static void test_seal_mremap_shrink_fixed(bool seal)
+ 	}
+ 
+ 	/* mremap to move and shrink to fixed address */
+-	ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
++	ret2 = sys_mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
+ 			newAddr);
+ 	if (seal) {
+ 		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+@@ -1366,7 +1376,7 @@ static void test_seal_mremap_expand_fixed(bool seal)
+ 	}
+ 
+ 	/* mremap to move and expand to fixed address */
+-	ret2 = mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED,
++	ret2 = sys_mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED,
+ 			newAddr);
+ 	if (seal) {
+ 		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+@@ -1397,7 +1407,7 @@ static void test_seal_mremap_move_fixed(bool seal)
+ 	}
+ 
+ 	/* mremap to move to fixed address */
+-	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr);
++	ret2 = sys_mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr);
+ 	if (seal) {
+ 		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ 		FAIL_TEST_IF_FALSE(errno == EPERM);
+@@ -1426,14 +1436,13 @@ static void test_seal_mremap_move_fixed_zero(bool seal)
+ 	/*
+ 	 * MREMAP_FIXED can move the mapping to zero address
+ 	 */
+-	ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
++	ret2 = sys_mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
+ 			0);
+ 	if (seal) {
+ 		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ 		FAIL_TEST_IF_FALSE(errno == EPERM);
+ 	} else {
+ 		FAIL_TEST_IF_FALSE(ret2 == 0);
+-
+ 	}
+ 
+ 	TEST_END_CHECK();
+@@ -1456,13 +1465,13 @@ static void test_seal_mremap_move_dontunmap(bool seal)
+ 	}
+ 
+ 	/* mremap to move, and don't unmap src addr. */
+-	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
++	ret2 = sys_mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
+ 	if (seal) {
+ 		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ 		FAIL_TEST_IF_FALSE(errno == EPERM);
+ 	} else {
++		/* kernel will allocate a new address */
+ 		FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
+-
+ 	}
+ 
+ 	TEST_END_CHECK();
+@@ -1470,7 +1479,7 @@ static void test_seal_mremap_move_dontunmap(bool seal)
+ 
+ static void test_seal_mremap_move_dontunmap_anyaddr(bool seal)
+ {
+-	void *ptr;
++	void *ptr, *ptr2;
+ 	unsigned long page_size = getpagesize();
+ 	unsigned long size = 4 * page_size;
+ 	int ret;
+@@ -1485,24 +1494,30 @@ static void test_seal_mremap_move_dontunmap_anyaddr(bool seal)
+ 	}
+ 
+ 	/*
+-	 * The 0xdeaddead should not have effect on dest addr
+-	 * when MREMAP_DONTUNMAP is set.
++	 * The new address is any address that not allocated.
++	 * use allocate/free to similate that.
+ 	 */
+-	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+-			0xdeaddead);
++	setup_single_address(size, &ptr2);
++	FAIL_TEST_IF_FALSE(ptr2 != (void *)-1);
++	ret = sys_munmap(ptr2, size);
++	FAIL_TEST_IF_FALSE(!ret);
++
++	/*
++	 * remap to any address.
++	 */
++	ret2 = sys_mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
++			(void *) ptr2);
+ 	if (seal) {
+ 		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+ 		FAIL_TEST_IF_FALSE(errno == EPERM);
+ 	} else {
+-		FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
+-		FAIL_TEST_IF_FALSE((long)ret2 != 0xdeaddead);
+-
++		/* remap success and return ptr2 */
++		FAIL_TEST_IF_FALSE(ret2 ==  ptr2);
+ 	}
+ 
+ 	TEST_END_CHECK();
+ }
+ 
+-
+ static void test_seal_merge_and_split(void)
+ {
+ 	void *ptr;
+diff --git a/tools/testing/selftests/mm/write_to_hugetlbfs.c b/tools/testing/selftests/mm/write_to_hugetlbfs.c
+index 6a2caba19ee1d9..1289d311efd705 100644
+--- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
++++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
+@@ -28,7 +28,7 @@ enum method {
+ 
+ /* Global variables. */
+ static const char *self;
+-static char *shmaddr;
++static int *shmaddr;
+ static int shmid;
+ 
+ /*
+@@ -47,15 +47,17 @@ void sig_handler(int signo)
+ {
+ 	printf("Received %d.\n", signo);
+ 	if (signo == SIGINT) {
+-		printf("Deleting the memory\n");
+-		if (shmdt((const void *)shmaddr) != 0) {
+-			perror("Detach failure");
++		if (shmaddr) {
++			printf("Deleting the memory\n");
++			if (shmdt((const void *)shmaddr) != 0) {
++				perror("Detach failure");
++				shmctl(shmid, IPC_RMID, NULL);
++				exit(4);
++			}
++
+ 			shmctl(shmid, IPC_RMID, NULL);
+-			exit(4);
++			printf("Done deleting the memory\n");
+ 		}
+-
+-		shmctl(shmid, IPC_RMID, NULL);
+-		printf("Done deleting the memory\n");
+ 	}
+ 	exit(2);
+ }
+@@ -211,7 +213,8 @@ int main(int argc, char **argv)
+ 			shmctl(shmid, IPC_RMID, NULL);
+ 			exit(2);
+ 		}
+-		printf("shmaddr: %p\n", ptr);
++		shmaddr = ptr;
++		printf("shmaddr: %p\n", shmaddr);
+ 
+ 		break;
+ 	default:
+diff --git a/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c b/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
+index bd9317bf5adafb..dc056fec993bd3 100644
+--- a/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
++++ b/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
+@@ -207,6 +207,7 @@ static int conntrack_data_generate_v6(struct mnl_socket *sock,
+ static int count_entries(const struct nlmsghdr *nlh, void *data)
+ {
+ 	reply_counter++;
++	return MNL_CB_OK;
+ }
+ 
+ static int conntracK_count_zone(struct mnl_socket *sock, uint16_t zone)
+diff --git a/tools/testing/selftests/net/netfilter/nft_audit.sh b/tools/testing/selftests/net/netfilter/nft_audit.sh
+index 902f8114bc80fc..87f2b4c725aa02 100755
+--- a/tools/testing/selftests/net/netfilter/nft_audit.sh
++++ b/tools/testing/selftests/net/netfilter/nft_audit.sh
+@@ -48,12 +48,31 @@ logread_pid=$!
+ trap 'kill $logread_pid; rm -f $logfile $rulefile' EXIT
+ exec 3<"$logfile"
+ 
++lsplit='s/^\(.*\) entries=\([^ ]*\) \(.*\)$/pfx="\1"\nval="\2"\nsfx="\3"/'
++summarize_logs() {
++	sum=0
++	while read line; do
++		eval $(sed "$lsplit" <<< "$line")
++		[[ $sum -gt 0 ]] && {
++			[[ "$pfx $sfx" == "$tpfx $tsfx" ]] && {
++				let "sum += val"
++				continue
++			}
++			echo "$tpfx entries=$sum $tsfx"
++		}
++		tpfx="$pfx"
++		tsfx="$sfx"
++		sum=$val
++	done
++	echo "$tpfx entries=$sum $tsfx"
++}
++
+ do_test() { # (cmd, log)
+ 	echo -n "testing for cmd: $1 ... "
+ 	cat <&3 >/dev/null
+ 	$1 >/dev/null || exit 1
+ 	sleep 0.1
+-	res=$(diff -a -u <(echo "$2") - <&3)
++	res=$(diff -a -u <(echo "$2") <(summarize_logs <&3))
+ 	[ $? -eq 0 ] && { echo "OK"; return; }
+ 	echo "FAIL"
+ 	grep -v '^\(---\|+++\|@@\)' <<< "$res"
+@@ -152,31 +171,17 @@ do_test 'nft reset rules t1 c2' \
+ 'table=t1 family=2 entries=3 op=nft_reset_rule'
+ 
+ do_test 'nft reset rules table t1' \
+-'table=t1 family=2 entries=3 op=nft_reset_rule
+-table=t1 family=2 entries=3 op=nft_reset_rule
+-table=t1 family=2 entries=3 op=nft_reset_rule'
++'table=t1 family=2 entries=9 op=nft_reset_rule'
+ 
+ do_test 'nft reset rules t2 c3' \
+-'table=t2 family=2 entries=189 op=nft_reset_rule
+-table=t2 family=2 entries=188 op=nft_reset_rule
+-table=t2 family=2 entries=126 op=nft_reset_rule'
++'table=t2 family=2 entries=503 op=nft_reset_rule'
+ 
+ do_test 'nft reset rules t2' \
+-'table=t2 family=2 entries=3 op=nft_reset_rule
+-table=t2 family=2 entries=3 op=nft_reset_rule
+-table=t2 family=2 entries=186 op=nft_reset_rule
+-table=t2 family=2 entries=188 op=nft_reset_rule
+-table=t2 family=2 entries=129 op=nft_reset_rule'
++'table=t2 family=2 entries=509 op=nft_reset_rule'
+ 
+ do_test 'nft reset rules' \
+-'table=t1 family=2 entries=3 op=nft_reset_rule
+-table=t1 family=2 entries=3 op=nft_reset_rule
+-table=t1 family=2 entries=3 op=nft_reset_rule
+-table=t2 family=2 entries=3 op=nft_reset_rule
+-table=t2 family=2 entries=3 op=nft_reset_rule
+-table=t2 family=2 entries=180 op=nft_reset_rule
+-table=t2 family=2 entries=188 op=nft_reset_rule
+-table=t2 family=2 entries=135 op=nft_reset_rule'
++'table=t1 family=2 entries=9 op=nft_reset_rule
++table=t2 family=2 entries=509 op=nft_reset_rule'
+ 
+ # resetting sets and elements
+ 
+@@ -200,13 +205,11 @@ do_test 'nft reset counters t1' \
+ 'table=t1 family=2 entries=1 op=nft_reset_obj'
+ 
+ do_test 'nft reset counters t2' \
+-'table=t2 family=2 entries=342 op=nft_reset_obj
+-table=t2 family=2 entries=158 op=nft_reset_obj'
++'table=t2 family=2 entries=500 op=nft_reset_obj'
+ 
+ do_test 'nft reset counters' \
+ 'table=t1 family=2 entries=1 op=nft_reset_obj
+-table=t2 family=2 entries=341 op=nft_reset_obj
+-table=t2 family=2 entries=159 op=nft_reset_obj'
++table=t2 family=2 entries=500 op=nft_reset_obj'
+ 
+ # resetting quotas
+ 
+@@ -217,13 +220,11 @@ do_test 'nft reset quotas t1' \
+ 'table=t1 family=2 entries=1 op=nft_reset_obj'
+ 
+ do_test 'nft reset quotas t2' \
+-'table=t2 family=2 entries=315 op=nft_reset_obj
+-table=t2 family=2 entries=185 op=nft_reset_obj'
++'table=t2 family=2 entries=500 op=nft_reset_obj'
+ 
+ do_test 'nft reset quotas' \
+ 'table=t1 family=2 entries=1 op=nft_reset_obj
+-table=t2 family=2 entries=314 op=nft_reset_obj
+-table=t2 family=2 entries=186 op=nft_reset_obj'
++table=t2 family=2 entries=500 op=nft_reset_obj'
+ 
+ # deleting rules
+ 
+diff --git a/tools/testing/selftests/nolibc/nolibc-test.c b/tools/testing/selftests/nolibc/nolibc-test.c
+index 994477ee87befc..4bd8360d54225f 100644
+--- a/tools/testing/selftests/nolibc/nolibc-test.c
++++ b/tools/testing/selftests/nolibc/nolibc-test.c
+@@ -534,7 +534,7 @@ int expect_strzr(const char *expr, int llen)
+ {
+ 	int ret = 0;
+ 
+-	llen += printf(" = <%s> ", expr);
++	llen += printf(" = <%s> ", expr ? expr : "(null)");
+ 	if (expr) {
+ 		ret = 1;
+ 		result(llen, FAIL);
+@@ -553,7 +553,7 @@ int expect_strnz(const char *expr, int llen)
+ {
+ 	int ret = 0;
+ 
+-	llen += printf(" = <%s> ", expr);
++	llen += printf(" = <%s> ", expr ? expr : "(null)");
+ 	if (!expr) {
+ 		ret = 1;
+ 		result(llen, FAIL);
+diff --git a/tools/testing/selftests/vDSO/parse_vdso.c b/tools/testing/selftests/vDSO/parse_vdso.c
+index 4ae417372e9ebc..7dd5668ea8a6e3 100644
+--- a/tools/testing/selftests/vDSO/parse_vdso.c
++++ b/tools/testing/selftests/vDSO/parse_vdso.c
+@@ -36,6 +36,12 @@
+ #define ELF_BITS_XFORM(bits, x) ELF_BITS_XFORM2(bits, x)
+ #define ELF(x) ELF_BITS_XFORM(ELF_BITS, x)
+ 
++#ifdef __s390x__
++#define ELF_HASH_ENTRY ELF(Xword)
++#else
++#define ELF_HASH_ENTRY ELF(Word)
++#endif
++
+ static struct vdso_info
+ {
+ 	bool valid;
+@@ -47,8 +53,8 @@ static struct vdso_info
+ 	/* Symbol table */
+ 	ELF(Sym) *symtab;
+ 	const char *symstrings;
+-	ELF(Word) *bucket, *chain;
+-	ELF(Word) nbucket, nchain;
++	ELF_HASH_ENTRY *bucket, *chain;
++	ELF_HASH_ENTRY nbucket, nchain;
+ 
+ 	/* Version table */
+ 	ELF(Versym) *versym;
+@@ -115,7 +121,7 @@ void vdso_init_from_sysinfo_ehdr(uintptr_t base)
+ 	/*
+ 	 * Fish out the useful bits of the dynamic table.
+ 	 */
+-	ELF(Word) *hash = 0;
++	ELF_HASH_ENTRY *hash = 0;
+ 	vdso_info.symstrings = 0;
+ 	vdso_info.symtab = 0;
+ 	vdso_info.versym = 0;
+@@ -133,7 +139,7 @@ void vdso_init_from_sysinfo_ehdr(uintptr_t base)
+ 				 + vdso_info.load_offset);
+ 			break;
+ 		case DT_HASH:
+-			hash = (ELF(Word) *)
++			hash = (ELF_HASH_ENTRY *)
+ 				((uintptr_t)dyn[i].d_un.d_ptr
+ 				 + vdso_info.load_offset);
+ 			break;
+@@ -216,7 +222,8 @@ void *vdso_sym(const char *version, const char *name)
+ 		ELF(Sym) *sym = &vdso_info.symtab[chain];
+ 
+ 		/* Check for a defined global or weak function w/ right name. */
+-		if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC)
++		if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC &&
++		    ELF64_ST_TYPE(sym->st_info) != STT_NOTYPE)
+ 			continue;
+ 		if (ELF64_ST_BIND(sym->st_info) != STB_GLOBAL &&
+ 		    ELF64_ST_BIND(sym->st_info) != STB_WEAK)
+diff --git a/tools/testing/selftests/vDSO/vdso_config.h b/tools/testing/selftests/vDSO/vdso_config.h
+index 7b543e7f04d7b0..fe0b3ec48c8d8d 100644
+--- a/tools/testing/selftests/vDSO/vdso_config.h
++++ b/tools/testing/selftests/vDSO/vdso_config.h
+@@ -18,18 +18,18 @@
+ #elif defined(__aarch64__)
+ #define VDSO_VERSION		3
+ #define VDSO_NAMES		0
+-#elif defined(__powerpc__)
++#elif defined(__powerpc64__)
+ #define VDSO_VERSION		1
+ #define VDSO_NAMES		0
+-#define VDSO_32BIT		1
+-#elif defined(__powerpc64__)
++#elif defined(__powerpc__)
+ #define VDSO_VERSION		1
+ #define VDSO_NAMES		0
+-#elif defined (__s390__)
++#define VDSO_32BIT		1
++#elif defined (__s390__) && !defined(__s390x__)
+ #define VDSO_VERSION		2
+ #define VDSO_NAMES		0
+ #define VDSO_32BIT		1
+-#elif defined (__s390X__)
++#elif defined (__s390x__)
+ #define VDSO_VERSION		2
+ #define VDSO_NAMES		0
+ #elif defined(__mips__)
+diff --git a/tools/testing/selftests/vDSO/vdso_test_correctness.c b/tools/testing/selftests/vDSO/vdso_test_correctness.c
+index e691a3cf149112..cdb697ae8343cc 100644
+--- a/tools/testing/selftests/vDSO/vdso_test_correctness.c
++++ b/tools/testing/selftests/vDSO/vdso_test_correctness.c
+@@ -114,6 +114,12 @@ static void fill_function_pointers()
+ 	if (!vdso)
+ 		vdso = dlopen("linux-gate.so.1",
+ 			      RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);
++	if (!vdso)
++		vdso = dlopen("linux-vdso32.so.1",
++			      RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);
++	if (!vdso)
++		vdso = dlopen("linux-vdso64.so.1",
++			      RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);
+ 	if (!vdso) {
+ 		printf("[WARN]\tfailed to find vDSO\n");
+ 		return;
+diff --git a/tools/tracing/rtla/Makefile.rtla b/tools/tracing/rtla/Makefile.rtla
+index 3ff0b8970896f7..cc1d6b615475f0 100644
+--- a/tools/tracing/rtla/Makefile.rtla
++++ b/tools/tracing/rtla/Makefile.rtla
+@@ -38,7 +38,7 @@ BINDIR		:= /usr/bin
+ .PHONY: install
+ install: doc_install
+ 	@$(MKDIR) -p $(DESTDIR)$(BINDIR)
+-	$(call QUIET_INSTALL,rtla)$(INSTALL) rtla -m 755 $(DESTDIR)$(BINDIR)
++	$(call QUIET_INSTALL,rtla)$(INSTALL) $(RTLA) -m 755 $(DESTDIR)$(BINDIR)
+ 	@$(STRIP) $(DESTDIR)$(BINDIR)/rtla
+ 	@test ! -f $(DESTDIR)$(BINDIR)/osnoise || $(RM) $(DESTDIR)$(BINDIR)/osnoise
+ 	@$(LN) rtla $(DESTDIR)$(BINDIR)/osnoise
+diff --git a/tools/tracing/rtla/src/osnoise_top.c b/tools/tracing/rtla/src/osnoise_top.c
+index de33ea5005b6bb..51844cb9950a9f 100644
+--- a/tools/tracing/rtla/src/osnoise_top.c
++++ b/tools/tracing/rtla/src/osnoise_top.c
+@@ -434,7 +434,7 @@ struct osnoise_top_params *osnoise_top_parse_args(int argc, char **argv)
+ 		case 'd':
+ 			params->duration = parse_seconds_duration(optarg);
+ 			if (!params->duration)
+-				osnoise_top_usage(params, "Invalid -D duration\n");
++				osnoise_top_usage(params, "Invalid -d duration\n");
+ 			break;
+ 		case 'e':
+ 			tevent = trace_event_alloc(optarg);
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index 8c16419fe22aa7..210b0f533534ab 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -459,7 +459,7 @@ static void timerlat_top_usage(char *usage)
+ 		"	  -c/--cpus cpus: run the tracer only on the given cpus",
+ 		"	  -H/--house-keeping cpus: run rtla control threads only on the given cpus",
+ 		"	  -C/--cgroup[=cgroup_name]: set cgroup, if no cgroup_name is passed, the rtla's cgroup will be inherited",
+-		"	  -d/--duration time[m|h|d]: duration of the session in seconds",
++		"	  -d/--duration time[s|m|h|d]: duration of the session",
+ 		"	  -D/--debug: print debug info",
+ 		"	     --dump-tasks: prints the task running on all CPUs if stop conditions are met (depends on !--no-aa)",
+ 		"	  -t/--trace[file]: save the stopped trace to [file|timerlat_trace.txt]",
+@@ -613,7 +613,7 @@ static struct timerlat_top_params
+ 		case 'd':
+ 			params->duration = parse_seconds_duration(optarg);
+ 			if (!params->duration)
+-				timerlat_top_usage("Invalid -D duration\n");
++				timerlat_top_usage("Invalid -d duration\n");
+ 			break;
+ 		case 'e':
+ 			tevent = trace_event_alloc(optarg);



